\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2023 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software.  Copies published by the Free Software Foundation raise
     funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library.  This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page

@node Top, Enabling OpenMP
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library.  This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library.  Support
for OpenACC and for offloading (both via OpenACC and via OpenMP's
@code{target} construct) was added later, and the library was renamed to
the GNU Offloading and Multi Processing Runtime Library.



@comment
@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.
@comment
@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* OpenMP Implementation Status::  List of implemented features by OpenMP version
* OpenMP Runtime Library Routines: Runtime Library Routines.
                               The OpenMP runtime application programming
                               interface.
* OpenMP Environment Variables: Environment Variables.
                               Influencing OpenMP runtime behavior with
                               environment variables.
* Enabling OpenACC::           How to enable OpenACC for your
                               applications.
* OpenACC Runtime Library Routines::  The OpenACC runtime application
                               programming interface.
* OpenACC Environment Variables::  Influencing OpenACC runtime behavior with
                               environment variables.
* CUDA Streams Usage::         Notes on the implementation of
                               asynchronous operations.
* OpenACC Library Interoperability::  OpenACC library interoperability with the
                               NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* OpenMP-Implementation Specifics::  Notes on specifics of this OpenMP
                               implementation
* Offload-Target Specifics::   Notes on offload-target specific internals
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in the GNU Offloading and
                               Multi Processing Runtime Library.
* Copying::                    The GNU General Public License says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified.  For C/C++, this enables the
@code{#pragma omp} directives; for Fortran, it enables the @code{!$omp}
directive in free source form, the @code{c$omp}, @code{*$omp} and
@code{!$omp} directives in fixed source form, the @code{!$} conditional
compilation sentinel in free source form, and the @code{c$}, @code{*$}
and @code{!$} sentinels in fixed source form.  The flag also arranges
for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).

A complete description of all OpenMP directives may be found in the
@uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
See also @ref{OpenMP Implementation Status}.


@c ---------------------------------------------------------------------
@c OpenMP Implementation Status
@c ---------------------------------------------------------------------

@node OpenMP Implementation Status
@chapter OpenMP Implementation Status

@menu
* OpenMP 4.5::             Feature completion status to 4.5 specification
* OpenMP 5.0::             Feature completion status to 5.0 specification
* OpenMP 5.1::             Feature completion status to 5.1 specification
* OpenMP 5.2::             Feature completion status to 5.2 specification
* OpenMP Technical Report 11::  Feature completion status to first 6.0 preview
@end menu

The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
the value @code{201511} (i.e. OpenMP 4.5).

@node OpenMP 4.5
@section OpenMP 4.5

The OpenMP 4.5 specification is fully supported.

@node OpenMP 5.0
@section OpenMP 5.0

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Array shaping @tab N @tab
@item Array sections with non-unit strides in C and C++ @tab N @tab
@item Iterators @tab Y @tab
@item @code{metadirective} directive @tab N @tab
@item @code{declare variant} directive
      @tab P @tab @emph{simd} traits not handled correctly
@item @var{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
      env variable @tab Y @tab
@item Nested-parallel changes to @var{max-active-levels-var} ICV @tab Y @tab
@item @code{requires} directive @tab P
      @tab complete but no non-host device provides @code{unified_shared_memory}
@item @code{teams} construct outside an enclosing target region @tab Y @tab
@item Non-rectangular loop nests @tab P @tab Full support for C/C++, partial for Fortran
@item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
@item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
      constructs @tab Y @tab
@item Collapse of associated loops that are imperfectly nested loops @tab N @tab
@item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
      @code{simd} construct @tab Y @tab
@item @code{atomic} constructs in @code{simd} @tab Y @tab
@item @code{loop} construct @tab Y @tab
@item @code{order(concurrent)} clause @tab Y @tab
@item @code{scan} directive and @code{in_scan} modifier for the
      @code{reduction} clause @tab Y @tab
@item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
@item @code{in_reduction} clause on @code{target} constructs @tab P
      @tab @code{nowait} only stub
@item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
@item @code{task} modifier to @code{reduction} clause @tab Y @tab
@item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
@item @code{detach} clause to @code{task} construct @tab Y @tab
@item @code{omp_fulfill_event} runtime routine @tab Y @tab
@item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
      and @code{taskloop simd} constructs @tab Y @tab
@item @code{taskloop} construct cancelable by @code{cancel} construct
      @tab Y @tab
@item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
      @tab Y @tab
@item Predefined memory spaces, memory allocators, allocator traits
      @tab Y @tab See also @ref{Memory allocation}
@item Memory management routines @tab Y @tab
@item @code{allocate} directive @tab N @tab
@item @code{allocate} clause @tab P @tab Initial support
@item @code{use_device_addr} clause on @code{target data} @tab Y @tab
@item @code{ancestor} modifier on @code{device} clause @tab Y @tab
@item Implicit declare target directive @tab Y @tab
@item Discontiguous array section with @code{target update} construct
      @tab N @tab
@item C/C++'s lvalue expressions in @code{to}, @code{from}
      and @code{map} clauses @tab N @tab
@item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
@item Nested @code{declare target} directive @tab Y @tab
@item Combined @code{master} constructs @tab Y @tab
@item @code{depend} clause on @code{taskwait} @tab Y @tab
@item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
      @tab Y @tab
@item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
@item @code{depobj} construct and depend objects @tab Y @tab
@item Lock hints were renamed to synchronization hints @tab Y @tab
@item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
@item Map-order clarifications @tab P @tab
@item @code{close} @emph{map-type-modifier} @tab Y @tab
@item Mapping C/C++ pointer variables and to assign the address of
      device memory mapped by an array section @tab P @tab
@item Mapping of Fortran pointer and allocatable variables, including pointer
      and allocatable components of variables
      @tab P @tab Mapping of vars with allocatable components unsupported
@item @code{defaultmap} extensions @tab Y @tab
@item @code{declare mapper} directive @tab N @tab
@item @code{omp_get_supported_active_levels} routine @tab Y @tab
@item Runtime routines and environment variables to display runtime thread
      affinity information @tab Y @tab
@item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
      routines @tab Y @tab
@item @code{omp_get_device_num} runtime routine @tab Y @tab
@item OMPT interface @tab N @tab
@item OMPD interface @tab N @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.0 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Supporting C++'s range-based for loop @tab Y @tab
@end multitable


@node OpenMP 5.1
@section OpenMP 5.1

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item OpenMP directives as C++ attribute specifiers @tab Y @tab
@item @code{omp_all_memory} reserved locator @tab Y @tab
@item @emph{target_device trait} in OpenMP Context @tab N @tab
@item @code{target_device} selector set in context selectors @tab N @tab
@item C/C++'s @code{declare variant} directive: elision support of
      preprocessed code @tab N @tab
@item @code{declare variant}: new clauses @code{adjust_args} and
      @code{append_args} @tab N @tab
@item @code{dispatch} construct @tab N @tab
@item device-specific ICV settings with environment variables @tab Y @tab
@item @code{assume} and @code{assumes} directives @tab Y @tab
@item @code{nothing} directive @tab Y @tab
@item @code{error} directive @tab Y @tab
@item @code{masked} construct @tab Y @tab
@item @code{scope} directive @tab Y @tab
@item Loop transformation constructs @tab N @tab
@item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
      clauses of the @code{taskloop} construct @tab Y @tab
@item @code{align} clause in @code{allocate} directive @tab N @tab
@item @code{align} modifier in @code{allocate} clause @tab Y @tab
@item @code{thread_limit} clause to @code{target} construct @tab Y @tab
@item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
@item Iterators in @code{target update} motion clauses and @code{map}
      clauses @tab N @tab
@item Indirect calls to the device version of a procedure or function in
      @code{target} regions @tab N @tab
@item @code{interop} directive @tab N @tab
@item @code{omp_interop_t} object support in runtime routines @tab N @tab
@item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
@item Extensions to the @code{atomic} directive @tab Y @tab
@item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
@item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
@item @code{private} and @code{firstprivate} argument to @code{default}
      clause in C and C++ @tab Y @tab
@item @code{present} argument to @code{defaultmap} clause @tab Y @tab
@item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
      @code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
      routines @tab Y @tab
@item @code{omp_target_is_accessible} runtime routine @tab Y @tab
@item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
      runtime routines @tab Y @tab
@item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
@item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
      @code{omp_aligned_calloc} runtime routines @tab Y @tab
@item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
      @code{omp_atv_default} changed @tab Y @tab
@item @code{omp_display_env} runtime routine @tab Y @tab
@item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
@item @code{ompt_sync_region_t} enum additions @tab N @tab
@item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
      and @code{ompt_state_wait_barrier_teams} @tab N @tab
@item @code{ompt_callback_target_data_op_emi_t},
      @code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
      and @code{ompt_callback_target_submit_emi_t} @tab N @tab
@item @code{ompt_callback_error_t} type @tab N @tab
@item @code{OMP_PLACES} syntax extensions @tab Y @tab
@item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
      variables @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.1 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Support of strictly structured blocks in Fortran @tab Y @tab
@item Support of structured block sequences in C/C++ @tab Y @tab
@item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
      clause @tab Y @tab
@item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
@item Pointer predetermined firstprivate getting initialized to the address
      of the matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
@item For Fortran, diagnose placing declarative before/between @code{USE},
      @code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
@item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
@item @code{indirect} clause in @code{declare target} @tab N @tab
@item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
@item @code{present} modifier to the @code{map}, @code{to} and @code{from}
      clauses @tab Y @tab
@end multitable


@node OpenMP 5.2
@section OpenMP 5.2

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item @code{omp_in_explicit_task} routine and @var{explicit-task-var} ICV
      @tab Y @tab
@item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
      namespaces @tab N/A
      @tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
      sentinel as C/C++ pragma and C++ attributes are warned for with
      @code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
      (enabled by default), respectively; for Fortran free-source code, there is
      a warning enabled by default and, for fixed-source code, the @code{omx}
      sentinel is warned for with @code{-Wsurprising} (enabled by
      @code{-Wall}).  Unknown clauses are always rejected with an error.}
@item Clauses on @code{end} directive can be on directive @tab Y @tab
@item Deprecation of no-argument @code{destroy} clause on @code{depobj}
      @tab N @tab
@item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
@item Deprecation of minus operator for reductions @tab N @tab
@item Deprecation of separating @code{map} modifiers without comma @tab N @tab
@item @code{declare mapper} with iterator and @code{present} modifiers
      @tab N @tab
@item If a matching mapped list item is not found in the data environment, the
      pointer retains its original value @tab Y @tab
@item New @code{enter} clause as alias for @code{to} on declare target directive
      @tab Y @tab
@item Deprecation of @code{to} clause on declare target directive @tab N @tab
@item Extended list of directives permitted in Fortran pure procedures
      @tab Y @tab
@item New @code{allocators} directive for Fortran @tab N @tab
@item Deprecation of @code{allocate} directive for Fortran
      allocatables/pointers @tab N @tab
@item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
@item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
      @tab N @tab
@item Deprecation of traits array following the allocator_handle expression in
      @code{uses_allocators} @tab N @tab
@item New @code{otherwise} clause as alias for @code{default} on metadirectives
      @tab N @tab
@item Deprecation of @code{default} clause on metadirectives @tab N @tab
@item Deprecation of delimited form of @code{declare target} @tab N @tab
@item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
@item @code{allocate} and @code{firstprivate} clauses on @code{scope}
      @tab Y @tab
@item @code{ompt_callback_work} @tab N @tab
@item Default map-type for the @code{map} clause in @code{target enter/exit data}
      @tab Y @tab
@item New @code{doacross} clause as alias for @code{depend} with
      @code{source}/@code{sink} modifier @tab Y @tab
@item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
      @tab N @tab
@item @code{omp_cur_iteration} keyword @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.2 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item For Fortran, optional comma between directive and clause @tab N @tab
@item Conforming device numbers and @code{omp_initial_device} and
      @code{omp_invalid_device} enum/PARAMETER @tab Y @tab
@item Initial value of @var{default-device-var} ICV with
      @code{OMP_TARGET_OFFLOAD=mandatory} @tab Y @tab
@item @emph{interop_types} in any position of the modifier list for the @code{init} clause
      of the @code{interop} construct @tab N @tab
@end multitable


@node OpenMP Technical Report 11
@section OpenMP Technical Report 11

Technical Report (TR) 11 is the first preview for OpenMP 6.0.

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
      @tab N/A @tab Backward compatibility
@item The @code{decl} attribute was added to the C++ attribute syntax
      @tab N @tab
@item @code{_ALL} suffix to the device-scope environment variables
      @tab P @tab Host device number wrongly accepted
@item For Fortran, @emph{locator list} can be also function reference with
      data pointer result @tab N @tab
@item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
      @tab N @tab
@item Implicit reduction identifiers of C++ classes
      @tab N @tab
@item Change of the @emph{map-type} property from @emph{ultimate} to
      @emph{default} @tab N @tab
@item Concept of @emph{assumed-size arrays} in C and C++
      @tab N @tab
@item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
      @tab N @tab
@item @code{groupprivate} directive @tab N @tab
@item @code{local} clause to declare target directive @tab N @tab
@item @code{part_size} allocator trait @tab N @tab
@item @code{pin_device}, @code{preferred_device} and @code{target_access}
      allocator traits
      @tab N @tab
@item @code{access} allocator trait changes @tab N @tab
@item Extension of @code{interop} operation of @code{append_args}, allowing all
      modifiers of the @code{init} clause
      @tab N @tab
@item @code{interop} clause to @code{dispatch} @tab N @tab
@item @code{apply} clause to loop-transforming constructs @tab N @tab
@item @code{omp_curr_progress_width} identifier @tab N @tab
@item @code{safesync} clause to the @code{parallel} construct @tab N @tab
@item @code{omp_get_max_progress_width} runtime routine @tab N @tab
@item @code{strict} modifier keyword to @code{num_threads} @tab N @tab
@item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
@item Routines for obtaining memory spaces/allocators for shared/device memory
      @tab N @tab
@item @code{omp_get_memspace_num_resources} routine @tab N @tab
@item @code{omp_get_submemspace} routine @tab N @tab
@item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
@item Extension of @code{OMP_DEFAULT_DEVICE} and new
      @code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
@item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
@end multitable

@unnumberedsubsec Other new TR 11 features
@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
@item Mapping lambda captures @tab N @tab
@item For Fortran, atomic compare with storing the comparison result
      @tab N @tab
@end multitable



@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines

The runtime routines described here are defined by Section 3 of the OpenMP
specification in version 4.5.  The routines are structured in the following
three parts:

@menu
Control threads, processors and the parallel environment.  They have C
linkage, and do not throw exceptions.

* omp_get_active_level::        Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_cancellation::        Whether cancellation support is enabled
* omp_get_default_device::      Get the default device for target regions
* omp_get_device_num::          Get device that current thread is running on
* omp_get_dynamic::             Dynamic teams setting
* omp_get_initial_device::      Device number of host device
* omp_get_level::               Number of parallel regions
* omp_get_max_active_levels::   Current maximum number of active regions
* omp_get_max_task_priority::   Maximum task priority value that can be set
* omp_get_max_teams::           Maximum number of teams for teams region
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_nested::              Nested parallel regions
* omp_get_num_devices::         Number of target devices
* omp_get_num_procs::           Number of processors online
* omp_get_num_teams::           Number of teams
* omp_get_num_threads::         Size of the active team
* omp_get_proc_bind::           Whether threads may be moved between CPUs
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_supported_active_levels::  Maximum number of active regions supported
* omp_get_team_num::            Get team number
* omp_get_team_size::           Number of threads in a team
* omp_get_teams_thread_limit::  Maximum number of threads imposed by teams
* omp_get_thread_limit::        Maximum number of threads
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_in_final::                Whether in final or included task region
* omp_is_initial_device::       Whether executing on the host device
* omp_set_default_device::      Set the default device for target regions
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_set_nested::              Enable/disable nested parallel regions
* omp_set_num_teams::           Set upper teams limit for teams region
* omp_set_num_threads::         Set upper team size limit
* omp_set_schedule::            Set the runtime scheduling method
* omp_set_teams_thread_limit::  Set upper thread limit for teams construct

Initialize, set, test, unset and destroy simple and nested locks.

* omp_init_lock::            Initialize simple lock
* omp_set_lock::             Wait for and set simple lock
* omp_test_lock::            Test and set simple lock if available
* omp_unset_lock::           Unset simple lock
* omp_destroy_lock::         Destroy simple lock
* omp_init_nest_lock::       Initialize nested lock
* omp_set_nest_lock::        Wait for and set nested lock
* omp_test_nest_lock::       Test and set nested lock if available
* omp_unset_nest_lock::      Unset nested lock
* omp_destroy_nest_lock::    Destroy nested lock

Portable, thread-based, wall clock timer.

* omp_get_wtick::            Get timer precision.
* omp_get_wtime::            Elapsed wall clock time.

Support for event objects.

* omp_fulfill_event::        Fulfill and destroy an OpenMP event.
@end menu



@node omp_get_active_level
@section @code{omp_get_active_level} -- Number of active parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel blocks
that enclose the point of the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
@end table



@node omp_get_ancestor_thread_num
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number of the ancestor
thread of the current thread at the given nesting level.  For values of
@var{level} outside the range 0 to @code{omp_get_level}, -1 is returned;
if @var{level} is @code{omp_get_level}, the result is identical to
@code{omp_get_thread_num}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
@end table



@node omp_get_cancellation
@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
@table @asis
@item @emph{Description}:
This function returns @code{true} if cancellation is activated, @code{false}
otherwise.  Here, @code{true} and @code{false} represent their language-specific
counterparts.  Unless @env{OMP_CANCELLATION} is set to @code{true},
cancellation is deactivated.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@end multitable

@item @emph{See also}:
@ref{OMP_CANCELLATION}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@end table



@node omp_get_default_device
@section @code{omp_get_default_device} -- Get the default device for target regions
@table @asis
@item @emph{Description}:
Get the default device for target regions without a device clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
@end table



@node omp_get_device_num
@section @code{omp_get_device_num} -- Return device number of current device
@table @asis
@item @emph{Description}:
This function returns a device number that represents the device that the
current thread is executing on.  For OpenMP 5.0, this must be equal to the
value returned by the @code{omp_get_initial_device} function when called
from the host.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_initial_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
@end table
704
705
706
707@node omp_get_dynamic
708@section @code{omp_get_dynamic} -- Dynamic teams setting
709@table @asis
710@item @emph{Description}:
711This function returns @code{true} if dynamic adjustment of the number of
712threads is enabled, @code{false} otherwise.  Here, @code{true} and
713@code{false} represent their language-specific counterparts.
714
715The dynamic team setting may be initialized at startup by the
716@env{OMP_DYNAMIC} environment variable or at runtime using
717@code{omp_set_dynamic}. If undefined, dynamic adjustment is
718disabled by default.
719
720@item @emph{C/C++}:
721@multitable @columnfractions .20 .80
722@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
723@end multitable
724
725@item @emph{Fortran}:
726@multitable @columnfractions .20 .80
727@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
728@end multitable
729
730@item @emph{See also}:
731@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
732
733@item @emph{Reference}:
734@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
735@end table
736
737
738
739@node omp_get_initial_device
740@section @code{omp_get_initial_device} -- Return device number of initial device
741@table @asis
742@item @emph{Description}:
743This function returns a device number that represents the host device.
744For OpenMP 5.1, this must be equal to the value returned by the
745@code{omp_get_num_devices} function.
746
747@item @emph{C/C++}
748@multitable @columnfractions .20 .80
749@item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
750@end multitable
751
752@item @emph{Fortran}:
753@multitable @columnfractions .20 .80
754@item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
755@end multitable
756
757@item @emph{See also}:
758@ref{omp_get_num_devices}
759
760@item @emph{Reference}:
761@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
762@end table
763
764
765
766@node omp_get_level
767@section @code{omp_get_level} -- Obtain the current nesting level
768@table @asis
769@item @emph{Description}:
770This function returns the nesting level of the parallel regions
771that enclose the call.
772
773@item @emph{C/C++}
774@multitable @columnfractions .20 .80
775@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
776@end multitable
777
778@item @emph{Fortran}:
779@multitable @columnfractions .20 .80
780@item @emph{Interface}: @tab @code{integer function omp_get_level()}
781@end multitable
782
783@item @emph{See also}:
784@ref{omp_get_active_level}
785
786@item @emph{Reference}:
787@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
788@end table
789
790
791
792@node omp_get_max_active_levels
793@section @code{omp_get_max_active_levels} -- Current maximum number of active regions
794@table @asis
795@item @emph{Description}:
796This function obtains the maximum allowed number of nested, active parallel regions.
797
798@item @emph{C/C++}
799@multitable @columnfractions .20 .80
800@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
801@end multitable
802
803@item @emph{Fortran}:
804@multitable @columnfractions .20 .80
805@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
806@end multitable
807
808@item @emph{See also}:
809@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
810
811@item @emph{Reference}:
812@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
813@end table
814
815
816@node omp_get_max_task_priority
817@section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
819@table @asis
820@item @emph{Description}:
821This function obtains the maximum allowed priority number for tasks.
822
823@item @emph{C/C++}
824@multitable @columnfractions .20 .80
825@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
826@end multitable
827
828@item @emph{Fortran}:
829@multitable @columnfractions .20 .80
830@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
831@end multitable
832
833@item @emph{Reference}:
834@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
835@end table
836
837
838@node omp_get_max_teams
839@section @code{omp_get_max_teams} -- Maximum number of teams of teams region
840@table @asis
841@item @emph{Description}:
842Return the maximum number of teams used for a teams region
843that does not use a @code{num_teams} clause.
844
845@item @emph{C/C++}:
846@multitable @columnfractions .20 .80
847@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
848@end multitable
849
850@item @emph{Fortran}:
851@multitable @columnfractions .20 .80
852@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
853@end multitable
854
855@item @emph{See also}:
856@ref{omp_set_num_teams}, @ref{omp_get_num_teams}
857
858@item @emph{Reference}:
859@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
860@end table
861
862
863
864@node omp_get_max_threads
865@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
866@table @asis
867@item @emph{Description}:
868Return the maximum number of threads used for the current parallel region
869that does not use a @code{num_threads} clause.
870
871@item @emph{C/C++}:
872@multitable @columnfractions .20 .80
873@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
874@end multitable
875
876@item @emph{Fortran}:
877@multitable @columnfractions .20 .80
878@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
879@end multitable
880
881@item @emph{See also}:
882@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
883
884@item @emph{Reference}:
885@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
886@end table
887
888
889
890@node omp_get_nested
891@section @code{omp_get_nested} -- Nested parallel regions
892@table @asis
893@item @emph{Description}:
894This function returns @code{true} if nested parallel regions are
895enabled, @code{false} otherwise. Here, @code{true} and @code{false}
896represent their language-specific counterparts.
897
898The state of nested parallel regions at startup depends on several
899environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
900and is set to greater than one, then nested parallel regions will be
901enabled. If not defined, then the value of the @env{OMP_NESTED}
902environment variable will be followed if defined. If neither are
903defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
904are defined with a list of more than one value, then nested parallel
905regions are enabled. If none of these are defined, then nested parallel
906regions are disabled by default.
907
908Nested parallel regions can be enabled or disabled at runtime using
909@code{omp_set_nested}, or by setting the maximum number of nested
910regions with @code{omp_set_max_active_levels} to one to disable, or
911above one to enable.
912
913Note that the @code{omp_get_nested} API routine was deprecated
914in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.
915
916@item @emph{C/C++}:
917@multitable @columnfractions .20 .80
918@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
919@end multitable
920
921@item @emph{Fortran}:
922@multitable @columnfractions .20 .80
923@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
924@end multitable
925
926@item @emph{See also}:
927@ref{omp_get_max_active_levels}, @ref{omp_set_nested},
928@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
929
930@item @emph{Reference}:
931@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
932@end table
933
934
935
936@node omp_get_num_devices
937@section @code{omp_get_num_devices} -- Number of target devices
938@table @asis
939@item @emph{Description}:
940Returns the number of target devices.
941
942@item @emph{C/C++}:
943@multitable @columnfractions .20 .80
944@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
945@end multitable
946
947@item @emph{Fortran}:
948@multitable @columnfractions .20 .80
949@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
950@end multitable
951
952@item @emph{Reference}:
953@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
954@end table
955
956
957
958@node omp_get_num_procs
959@section @code{omp_get_num_procs} -- Number of processors online
960@table @asis
961@item @emph{Description}:
962Returns the number of processors online on the device on which the
963calling thread is executing.
963
964@item @emph{C/C++}:
965@multitable @columnfractions .20 .80
966@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
967@end multitable
968
969@item @emph{Fortran}:
970@multitable @columnfractions .20 .80
971@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
972@end multitable
973
974@item @emph{Reference}:
975@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
976@end table
977
978
979
980@node omp_get_num_teams
981@section @code{omp_get_num_teams} -- Number of teams
982@table @asis
983@item @emph{Description}:
984Returns the number of teams in the current teams region.
985
986@item @emph{C/C++}:
987@multitable @columnfractions .20 .80
988@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
989@end multitable
990
991@item @emph{Fortran}:
992@multitable @columnfractions .20 .80
993@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
994@end multitable
995
996@item @emph{Reference}:
997@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
998@end table
999
1000
1001
1002@node omp_get_num_threads
1003@section @code{omp_get_num_threads} -- Size of the active team
1004@table @asis
1005@item @emph{Description}:
1006Returns the number of threads in the current team. In a sequential section of
1007the program @code{omp_get_num_threads} returns 1.
1008
1009The default team size may be initialized at startup by the
1010@env{OMP_NUM_THREADS} environment variable. At runtime, the size
1011of the current team may be set either by the @code{NUM_THREADS}
1012clause or by @code{omp_set_num_threads}. If none of the above were
1013used to define a specific value and @env{OMP_DYNAMIC} is disabled,
1014one thread per CPU online is used.
1015
1016@item @emph{C/C++}:
1017@multitable @columnfractions .20 .80
1018@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
1019@end multitable
1020
1021@item @emph{Fortran}:
1022@multitable @columnfractions .20 .80
1023@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
1024@end multitable
1025
1026@item @emph{See also}:
1027@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
1028
1029@item @emph{Reference}:
1030@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
1031@end table
1032
1033
1034
1035@node omp_get_proc_bind
1036@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
1037@table @asis
1038@item @emph{Description}:
1039This function returns the currently active thread affinity policy, which is
1040set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
1041@code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
1042@code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
1043where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.
1044
1045@item @emph{C/C++}:
1046@multitable @columnfractions .20 .80
1047@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
1048@end multitable
1049
1050@item @emph{Fortran}:
1051@multitable @columnfractions .20 .80
1052@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
1053@end multitable
1054
1055@item @emph{See also}:
1056@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}
1057
1058@item @emph{Reference}:
1059@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
1060@end table
1061
1062
1063
1064@node omp_get_schedule
1065@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
1066@table @asis
1067@item @emph{Description}:
1068Obtain the runtime scheduling method. The @var{kind} argument will be
1069set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
1070@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
1071@var{chunk_size}, is set to the chunk size.
1072
1073@item @emph{C/C++}
1074@multitable @columnfractions .20 .80
1075@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
1076@end multitable
1077
1078@item @emph{Fortran}:
1079@multitable @columnfractions .20 .80
1080@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
1081@item @tab @code{integer(kind=omp_sched_kind) kind}
1082@item @tab @code{integer chunk_size}
1083@end multitable
1084
1085@item @emph{See also}:
1086@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
1087
1088@item @emph{Reference}:
1089@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
1090@end table
1091
1092
1093@node omp_get_supported_active_levels
1094@section @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
1095@table @asis
1096@item @emph{Description}:
1097This function returns the maximum number of nested, active parallel regions
1098supported by this implementation.
1099
1100@item @emph{C/C++}
1101@multitable @columnfractions .20 .80
1102@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
1103@end multitable
1104
1105@item @emph{Fortran}:
1106@multitable @columnfractions .20 .80
1107@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
1108@end multitable
1109
1110@item @emph{See also}:
1111@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
1112
1113@item @emph{Reference}:
1114@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
1115@end table
1116
1117
1118
1119@node omp_get_team_num
1120@section @code{omp_get_team_num} -- Get team number
1121@table @asis
1122@item @emph{Description}:
1123Returns the team number of the calling thread.
1124
1125@item @emph{C/C++}:
1126@multitable @columnfractions .20 .80
1127@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
1128@end multitable
1129
1130@item @emph{Fortran}:
1131@multitable @columnfractions .20 .80
1132@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
1133@end multitable
1134
1135@item @emph{Reference}:
1136@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
1137@end table
1138
1139
1140
1141@node omp_get_team_size
1142@section @code{omp_get_team_size} -- Number of threads in a team
1143@table @asis
1144@item @emph{Description}:
1145This function returns the number of threads in a thread team to which
1146either the current thread or its ancestor belongs.  For values of @var{level}
1147outside the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is
1148zero, 1 is returned; and for @var{level} equal to @code{omp_get_level}, the
1149result is identical to @code{omp_get_num_threads}.
1150
1151@item @emph{C/C++}:
1152@multitable @columnfractions .20 .80
1153@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
1154@end multitable
1155
1156@item @emph{Fortran}:
1157@multitable @columnfractions .20 .80
1158@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
1159@item @tab @code{integer level}
1160@end multitable
1161
1162@item @emph{See also}:
1163@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
1164
1165@item @emph{Reference}:
1166@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
1167@end table
1168
1169
1170
1171@node omp_get_teams_thread_limit
1172@section @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
1173@table @asis
1174@item @emph{Description}:
1175Return the maximum number of threads that will be able to participate in
1176each team created by a teams construct.
1177
1178@item @emph{C/C++}:
1179@multitable @columnfractions .20 .80
1180@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
1181@end multitable
1182
1183@item @emph{Fortran}:
1184@multitable @columnfractions .20 .80
1185@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
1186@end multitable
1187
1188@item @emph{See also}:
1189@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}
1190
1191@item @emph{Reference}:
1192@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
1193@end table
1194
1195
1196
1197@node omp_get_thread_limit
1198@section @code{omp_get_thread_limit} -- Maximum number of threads
1199@table @asis
1200@item @emph{Description}:
1201Return the maximum number of threads of the program.
1202
1203@item @emph{C/C++}:
1204@multitable @columnfractions .20 .80
1205@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
1206@end multitable
1207
1208@item @emph{Fortran}:
1209@multitable @columnfractions .20 .80
1210@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
1211@end multitable
1212
1213@item @emph{See also}:
1214@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
1215
1216@item @emph{Reference}:
1217@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
1218@end table
1219
1220
1221
1222@node omp_get_thread_num
1223@section @code{omp_get_thread_num} -- Current thread ID
1224@table @asis
1225@item @emph{Description}:
1226Returns a unique thread identification number within the current team.
1227In sequential parts of the program, @code{omp_get_thread_num}
1228always returns 0. In parallel regions the return value varies
1229from 0 to @code{omp_get_num_threads}-1 inclusive. The return
1230value of the primary thread of a team is always 0.
1231
1232@item @emph{C/C++}:
1233@multitable @columnfractions .20 .80
1234@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
1235@end multitable
1236
1237@item @emph{Fortran}:
1238@multitable @columnfractions .20 .80
1239@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
1240@end multitable
1241
1242@item @emph{See also}:
1243@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
1244
1245@item @emph{Reference}:
1246@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
1247@end table
1248
1249
1250
1251@node omp_in_parallel
1252@section @code{omp_in_parallel} -- Whether a parallel region is active
1253@table @asis
1254@item @emph{Description}:
1255This function returns @code{true} if currently running in parallel,
1256@code{false} otherwise. Here, @code{true} and @code{false} represent
1257their language-specific counterparts.
1258
1259@item @emph{C/C++}:
1260@multitable @columnfractions .20 .80
1261@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
1262@end multitable
1263
1264@item @emph{Fortran}:
1265@multitable @columnfractions .20 .80
1266@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
1267@end multitable
1268
1269@item @emph{Reference}:
1270@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
1271@end table
1272
1273
1274@node omp_in_final
1275@section @code{omp_in_final} -- Whether in final or included task region
1276@table @asis
1277@item @emph{Description}:
1278This function returns @code{true} if currently running in a final
1279or included task region, @code{false} otherwise. Here, @code{true}
1280and @code{false} represent their language-specific counterparts.
1281
1282@item @emph{C/C++}:
1283@multitable @columnfractions .20 .80
1284@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
1285@end multitable
1286
1287@item @emph{Fortran}:
1288@multitable @columnfractions .20 .80
1289@item @emph{Interface}: @tab @code{logical function omp_in_final()}
1290@end multitable
1291
1292@item @emph{Reference}:
1293@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
1294@end table
1295
1296
1297
1298@node omp_is_initial_device
1299@section @code{omp_is_initial_device} -- Whether executing on the host device
1300@table @asis
1301@item @emph{Description}:
1302This function returns @code{true} if currently running on the host device,
1303@code{false} otherwise. Here, @code{true} and @code{false} represent
1304their language-specific counterparts.
1305
1306@item @emph{C/C++}:
1307@multitable @columnfractions .20 .80
1308@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
1309@end multitable
1310
1311@item @emph{Fortran}:
1312@multitable @columnfractions .20 .80
1313@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
1314@end multitable
1315
1316@item @emph{Reference}:
1317@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
1318@end table
1319
1320
1321
1322@node omp_set_default_device
1323@section @code{omp_set_default_device} -- Set the default device for target regions
1324@table @asis
1325@item @emph{Description}:
1326Set the default device for target regions without a device clause.  The argument
1327shall be a nonnegative device number.
1328
1329@item @emph{C/C++}:
1330@multitable @columnfractions .20 .80
1331@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
1332@end multitable
1333
1334@item @emph{Fortran}:
1335@multitable @columnfractions .20 .80
1336@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
1337@item @tab @code{integer device_num}
1338@end multitable
1339
1340@item @emph{See also}:
1341@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
1342
1343@item @emph{Reference}:
1344@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1345@end table
1346
1347
1348
1349@node omp_set_dynamic
1350@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
1351@table @asis
1352@item @emph{Description}:
1353Enable or disable the dynamic adjustment of the number of threads
1354within a team. The function takes the language-specific equivalent
1355of @code{true} and @code{false}, where @code{true} enables dynamic
1356adjustment of team sizes and @code{false} disables it.
1357
1358@item @emph{C/C++}:
1359@multitable @columnfractions .20 .80
1360@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
1361@end multitable
1362
1363@item @emph{Fortran}:
1364@multitable @columnfractions .20 .80
1365@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
1366@item @tab @code{logical, intent(in) :: dynamic_threads}
1367@end multitable
1368
1369@item @emph{See also}:
1370@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
1371
1372@item @emph{Reference}:
1373@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
1374@end table
1375
1376
1377
1378@node omp_set_max_active_levels
1379@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
1380@table @asis
1381@item @emph{Description}:
1382This function limits the maximum allowed number of nested, active
1383parallel regions. @var{max_levels} must be less or equal to
1384the value returned by @code{omp_get_supported_active_levels}.
1385
1386@item @emph{C/C++}
1387@multitable @columnfractions .20 .80
1388@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
1389@end multitable
1390
1391@item @emph{Fortran}:
1392@multitable @columnfractions .20 .80
1393@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
1394@item @tab @code{integer max_levels}
1395@end multitable
1396
1397@item @emph{See also}:
1398@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
1399@ref{omp_get_supported_active_levels}
1400
1401@item @emph{Reference}:
1402@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
1403@end table
1404
1405
1406
1407@node omp_set_nested
1408@section @code{omp_set_nested} -- Enable/disable nested parallel regions
1409@table @asis
1410@item @emph{Description}:
1411Enable or disable nested parallel regions, i.e., whether team members
1412are allowed to create new teams. The function takes the language-specific
1413equivalent of @code{true} and @code{false}, where @code{true} enables
1414nested parallel regions and @code{false} disables them.
1415
1416Enabling nested parallel regions will also set the maximum number of
1417active nested regions to the maximum supported. Disabling nested parallel
1418regions will set the maximum number of active nested regions to one.
1419
1420Note that the @code{omp_set_nested} API routine was deprecated
1421in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.
1422
1423@item @emph{C/C++}:
1424@multitable @columnfractions .20 .80
1425@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
1426@end multitable
1427
1428@item @emph{Fortran}:
1429@multitable @columnfractions .20 .80
1430@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
1431@item @tab @code{logical, intent(in) :: nested}
1432@end multitable
1433
1434@item @emph{See also}:
1435@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
1436@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
1437
1438@item @emph{Reference}:
1439@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
1440@end table
1441
1442
1443
1444@node omp_set_num_teams
1445@section @code{omp_set_num_teams} -- Set upper teams limit for teams construct
1446@table @asis
1447@item @emph{Description}:
1448Specifies the upper bound for the number of teams created by the teams construct
1449which does not specify a @code{num_teams} clause. The
1450argument of @code{omp_set_num_teams} shall be a positive integer.
1451
1452@item @emph{C/C++}:
1453@multitable @columnfractions .20 .80
1454@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
1455@end multitable
1456
1457@item @emph{Fortran}:
1458@multitable @columnfractions .20 .80
1459@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
1460@item @tab @code{integer, intent(in) :: num_teams}
1461@end multitable
1462
1463@item @emph{See also}:
1464@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}
1465
1466@item @emph{Reference}:
1467@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
1468@end table
1469
1470
1471
1472@node omp_set_num_threads
1473@section @code{omp_set_num_threads} -- Set upper team size limit
1474@table @asis
1475@item @emph{Description}:
1476Specifies the number of threads used by default in subsequent parallel
1477sections, if those do not specify a @code{num_threads} clause. The
1478argument of @code{omp_set_num_threads} shall be a positive integer.
1479
1480@item @emph{C/C++}:
1481@multitable @columnfractions .20 .80
1482@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
1483@end multitable
1484
1485@item @emph{Fortran}:
1486@multitable @columnfractions .20 .80
1487@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
1488@item @tab @code{integer, intent(in) :: num_threads}
1489@end multitable
1490
1491@item @emph{See also}:
1492@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
1493
1494@item @emph{Reference}:
1495@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
1496@end table
1497
1498
1499
1500@node omp_set_schedule
1501@section @code{omp_set_schedule} -- Set the runtime scheduling method
1502@table @asis
1503@item @emph{Description}:
1504Sets the runtime scheduling method. The @var{kind} argument can have the
1505value @code{omp_sched_static}, @code{omp_sched_dynamic},
1506@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
1507@code{omp_sched_auto}, the chunk size is set to the value of
1508@var{chunk_size} if positive, or to the default value if zero or negative.
1509For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
1510
1511@item @emph{C/C++}
1512@multitable @columnfractions .20 .80
1513@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
1514@end multitable
1515
1516@item @emph{Fortran}:
1517@multitable @columnfractions .20 .80
1518@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
1519@item @tab @code{integer(kind=omp_sched_kind) kind}
1520@item @tab @code{integer chunk_size}
1521@end multitable
1522
1523@item @emph{See also}:
1524@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}
1526
1527@item @emph{Reference}:
1528@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
1529@end table
1530
1531
1532
1533@node omp_set_teams_thread_limit
1534@section @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
1535@table @asis
1536@item @emph{Description}:
1537Specifies the upper bound for number of threads that will be available
1538for each team created by the teams construct which does not specify a
1539@code{thread_limit} clause. The argument of
1540@code{omp_set_teams_thread_limit} shall be a positive integer.
1541
1542@item @emph{C/C++}:
1543@multitable @columnfractions .20 .80
1544@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
1545@end multitable
1546
1547@item @emph{Fortran}:
1548@multitable @columnfractions .20 .80
1549@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
1550@item @tab @code{integer, intent(in) :: thread_limit}
1551@end multitable
1552
1553@item @emph{See also}:
1554@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}
1555
1556@item @emph{Reference}:
1557@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
1558@end table
1559
1560
1561
1562@node omp_init_lock
1563@section @code{omp_init_lock} -- Initialize simple lock
1564@table @asis
1565@item @emph{Description}:
1566Initialize a simple lock. After initialization, the lock is in
1567an unlocked state.
1568
1569@item @emph{C/C++}:
1570@multitable @columnfractions .20 .80
1571@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
1572@end multitable
1573
1574@item @emph{Fortran}:
1575@multitable @columnfractions .20 .80
1576@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
1577@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
1578@end multitable
1579
1580@item @emph{See also}:
1581@ref{omp_destroy_lock}
1582
1583@item @emph{Reference}:
1584@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
1585@end table
1586
1587
1588
1589@node omp_set_lock
1590@section @code{omp_set_lock} -- Wait for and set simple lock
1591@table @asis
1592@item @emph{Description}:
1593Before setting a simple lock, the lock variable must be initialized by
1594@code{omp_init_lock}. The calling thread is blocked until the lock
1595is available. If the lock is already held by the current thread,
1596a deadlock occurs.
1597
1598@item @emph{C/C++}:
1599@multitable @columnfractions .20 .80
1600@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
1601@end multitable
1602
1603@item @emph{Fortran}:
1604@multitable @columnfractions .20 .80
1605@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
1606@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1607@end multitable
1608
1609@item @emph{See also}:
1610@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
1611
1612@item @emph{Reference}:
1613@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
1614@end table
1615
1616
1617
@node omp_test_lock
@section @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available. This function returns
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node omp_unset_lock
@section @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
or more threads attempted to set the lock before, one of them is chosen to
acquire the lock.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table



@node omp_destroy_lock
@section @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock. In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table



@node omp_init_nest_lock
@section @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock. After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table


@node omp_set_nest_lock
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table



@node omp_test_nest_lock
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
Upon success, the new nesting count of the lock is returned; otherwise,
the return value is zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node omp_unset_nest_lock
@section @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
lock becomes unlocked. If one or more threads attempted to set the lock before,
one of them is chosen to acquire the lock.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table



@node omp_destroy_nest_lock
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock. In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table



@node omp_get_wtick
@section @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
@end table



@node omp_get_wtime
@section @code{omp_get_wtime} -- Elapsed wall clock time
@table @asis
@item @emph{Description}:
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some ``time in the past'', which is an arbitrary time
guaranteed not to change during the execution of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtick}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
@end table



@node omp_fulfill_event
@section @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
@table @asis
@item @emph{Description}:
Fulfill the event associated with the event handle argument. Currently, it
is only used to fulfill events generated by @code{detach} clauses on task
constructs; the effect of fulfilling the event is to allow the task to
complete.

The result of calling @code{omp_fulfill_event} with an event handle other
than that generated by a @code{detach} clause is undefined. Calling it with
an event handle that has already been fulfilled is also undefined.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
@item @tab @code{integer (kind=omp_event_handle_kind) :: event}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
@end table



@c ---------------------------------------------------------------------
@c OpenMP Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter OpenMP Environment Variables

The environment variables beginning with @env{OMP_} are defined by
section 4 of the OpenMP specification in version 4.5 or in a later version
of the specification, while those beginning with @env{GOMP_} are GNU
extensions. Most @env{OMP_} environment variables have an associated
internal control variable (ICV).

For any OpenMP environment variable that sets an ICV and is neither
@code{OMP_DEFAULT_DEVICE} nor has global ICV scope, associated
device-specific environment variables exist. For them, the environment
variable without a suffix affects the host. The suffix @code{_DEV_} followed
by a non-negative device number less than the number of available devices
sets the ICV for the corresponding device. The suffix @code{_DEV} sets the
ICV of all non-host devices for which a device-specific corresponding
environment variable has not been set, while the @code{_ALL} suffix sets the
ICV of all host and non-host devices for which a more specific corresponding
environment variable is not set.

@menu
* OMP_ALLOCATOR::           Set the default allocator
* OMP_AFFINITY_FORMAT::     Set the format string used for affinity display
* OMP_CANCELLATION::        Set whether cancellation is activated
* OMP_DISPLAY_AFFINITY::    Display thread affinity information
* OMP_DISPLAY_ENV::         Show OpenMP version and environment variables
* OMP_DEFAULT_DEVICE::      Set the device used in target regions
* OMP_DYNAMIC::             Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS::   Set the maximum number of nested parallel regions
* OMP_MAX_TASK_PRIORITY::   Set the maximum task priority value
* OMP_NESTED::              Nested parallel regions
* OMP_NUM_TEAMS::           Specifies the number of teams to use by teams region
* OMP_NUM_THREADS::         Specifies the number of threads to use
* OMP_PROC_BIND::           Whether threads may be moved between CPUs
* OMP_PLACES::              Specifies on which CPUs the threads should be placed
* OMP_STACKSIZE::           Set default thread stack size
* OMP_SCHEDULE::            How threads are scheduled
* OMP_TARGET_OFFLOAD::      Controls offloading behaviour
* OMP_TEAMS_THREAD_LIMIT::  Set the maximum number of threads imposed by teams
* OMP_THREAD_LIMIT::        Set the maximum number of threads
* OMP_WAIT_POLICY::         How waiting threads are handled
* GOMP_CPU_AFFINITY::       Bind threads to specific CPUs
* GOMP_DEBUG::              Enable debugging output
* GOMP_STACKSIZE::          Set default thread stack size
* GOMP_SPINCOUNT::          Set the busy-wait spin count
* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
@end menu

@node OMP_ALLOCATOR
@section @env{OMP_ALLOCATOR} -- Set the default allocator
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{def-allocator-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Sets the default allocator that is used when no allocator has been specified
in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
routine is invoked with the @code{omp_null_allocator} allocator.
If unset, @code{omp_default_mem_alloc} is used.

The value can either be a predefined allocator or a predefined memory space
or a predefined memory space followed by a colon and a comma-separated list
of memory trait and value pairs, separated by @code{=}.

Note: The corresponding device environment variables are currently not
supported. Therefore, the non-host @var{def-allocator-var} ICVs are always
initialized to @code{omp_default_mem_alloc}. However, on all devices,
the @code{omp_set_default_allocator} API routine can be used to change the
value.

@multitable @columnfractions .45 .45
@headitem Predefined allocators @tab Predefined memory spaces
@item omp_default_mem_alloc @tab omp_default_mem_space
@item omp_large_cap_mem_alloc @tab omp_large_cap_mem_space
@item omp_const_mem_alloc @tab omp_const_mem_space
@item omp_high_bw_mem_alloc @tab omp_high_bw_mem_space
@item omp_low_lat_mem_alloc @tab omp_low_lat_mem_space
@item omp_cgroup_mem_alloc @tab --
@item omp_pteam_mem_alloc @tab --
@item omp_thread_mem_alloc @tab --
@end multitable

@multitable @columnfractions .30 .60
@headitem Trait @tab Allowed values
@item @code{sync_hint} @tab @code{contended}, @code{uncontended},
      @code{serialized}, @code{private}
@item @code{alignment} @tab Positive integer being a power of two
@item @code{access} @tab @code{all}, @code{cgroup},
      @code{pteam}, @code{thread}
@item @code{pool_size} @tab Positive integer
@item @code{fallback} @tab @code{default_mem_fb}, @code{null_fb},
      @code{abort_fb}, @code{allocator_fb}
@item @code{fb_data} @tab @emph{unsupported as it needs an allocator handle}
@item @code{pinned} @tab @code{true}, @code{false}
@item @code{partition} @tab @code{environment}, @code{nearest},
      @code{blocked}, @code{interleaved}
@end multitable

Examples:
@smallexample
OMP_ALLOCATOR=omp_high_bw_mem_alloc
OMP_ALLOCATOR=omp_large_cap_mem_space
OMP_ALLOCATOR=omp_low_lat_mem_space:pinned=true,partition=nearest
@end smallexample

@c @item @emph{See also}:

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.21
@end table



@node OMP_AFFINITY_FORMAT
@section @env{OMP_AFFINITY_FORMAT} -- Set the format string used for affinity display
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{affinity-format-var}
@item @emph{Scope:} device
@item @emph{Description}:
Sets the format string used when displaying OpenMP thread affinity information.
Special values are output using @code{%} followed by an optional size
specification and then either the single-character field type or its long
name enclosed in curly braces; using @code{%%} will display a literal percent.
The size specification consists of an optional @code{0.} or @code{.} followed
by a positive integer, specifying the minimal width of the output. With
@code{0.} and numerical values, the output is padded with zeros on the left;
with @code{.}, the output is padded by spaces on the left; otherwise, the
output is padded by spaces on the right. If unset, the value is
``@code{level %L thread %i affinity %A}''.

Supported field types are:

@multitable @columnfractions .10 .25 .60
@item t @tab team_num @tab value returned by @code{omp_get_team_num}
@item T @tab num_teams @tab value returned by @code{omp_get_num_teams}
@item L @tab nesting_level @tab value returned by @code{omp_get_level}
@item n @tab thread_num @tab value returned by @code{omp_get_thread_num}
@item N @tab num_threads @tab value returned by @code{omp_get_num_threads}
@item a @tab ancestor_tnum
      @tab value returned by
           @code{omp_get_ancestor_thread_num(omp_get_level()-1)}
@item H @tab host @tab name of the host that executes the thread
@item P @tab process_id @tab process identifier
@item i @tab native_thread_id @tab native thread identifier
@item A @tab thread_affinity
      @tab comma separated list of integer values or ranges, representing the
           processors on which a process might execute, subject to affinity
           mechanisms
@end multitable

For instance, after setting

@smallexample
OMP_AFFINITY_FORMAT="%0.2a!%n!%.4L!%N;%.2t;%0.2T;%@{team_num@};%@{num_teams@};%A"
@end smallexample

with either @code{OMP_DISPLAY_AFFINITY} being set or when calling
@code{omp_display_affinity} with @code{NULL} or an empty string, the program
might display the following:

@smallexample
00!0! 1!4; 0;01;0;1;0-11
00!3! 1!4; 0;01;0;1;0-11
00!2! 1!4; 0;01;0;1;0-11
00!1! 1!4; 0;01;0;1;0-11
@end smallexample

@item @emph{See also}:
@ref{OMP_DISPLAY_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.14
@end table



@node OMP_CANCELLATION
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{cancel-var}
@item @emph{Scope:} global
@item @emph{Description}:
If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
if unset, cancellation is disabled and the @code{cancel} construct is ignored.

@item @emph{See also}:
@ref{omp_get_cancellation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
@end table



@node OMP_DISPLAY_AFFINITY
@section @env{OMP_DISPLAY_AFFINITY} -- Display thread affinity information
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{display-affinity-var}
@item @emph{Scope:} global
@item @emph{Description}:
If set to @code{FALSE} or if unset, affinity displaying is disabled.
If set to @code{TRUE}, the runtime will display affinity information about
OpenMP threads in a parallel region upon entering the region and every time
any change occurs.

@item @emph{See also}:
@ref{OMP_AFFINITY_FORMAT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.13
@end table



@node OMP_DISPLAY_ENV
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
@cindex Environment Variable
@table @asis
@item @emph{ICV:} none
@item @emph{Scope:} not applicable
@item @emph{Description}:
If set to @code{TRUE}, the OpenMP version number and the values
associated with the OpenMP environment variables are printed to @code{stderr}.
If set to @code{VERBOSE}, it additionally shows the value of the environment
variables which are GNU extensions. If undefined or set to @code{FALSE},
this information will not be shown.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
@end table



@node OMP_DEFAULT_DEVICE
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{default-device-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Set to choose the device which is used in a @code{target} region, unless the
value is overridden by @code{omp_set_default_device} or by a @code{device}
clause. The value shall be the nonnegative device number. If no device with
the given device number exists, the code is executed on the host. If unset
and @env{OMP_TARGET_OFFLOAD} is @code{mandatory} and no non-host devices are
available, it is set to @code{omp_invalid_device}. Otherwise, if unset,
device number 0 will be used.

@item @emph{See also}:
@ref{omp_get_default_device}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
@end table



@node OMP_DYNAMIC
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{dyn-var}
@item @emph{Scope:} global
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The value of this environment variable shall be
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{See also}:
@ref{omp_set_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
@end table



@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{max-active-levels-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions. The value of this variable shall be a positive integer.
If undefined, then if @env{OMP_NESTED} is defined and set to true, or
if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
a list with more than one item, the maximum number of nested parallel
regions will be initialized to the largest number supported, otherwise
it will be set to one.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{OMP_NESTED}, @ref{OMP_PROC_BIND},
@ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
@end table



@node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority
number that can be set for a task.
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{max-task-priority-var}
@item @emph{Scope:} global
@item @emph{Description}:
Specifies the initial value for the maximum priority value that can be
set for a task. The value of this variable shall be a non-negative
integer, and zero is allowed. If undefined, the default priority is
0.

@item @emph{See also}:
@ref{omp_get_max_task_priority}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
@end table



@node OMP_NESTED
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{max-active-levels-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the maximum
number of active nested regions will by default be set to the largest
number supported; otherwise, it will be set to one. If
@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting will override this
setting. If both are undefined, nested parallel regions are enabled if
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined to a list with
more than one item, otherwise they are disabled by default.

Note that the @code{OMP_NESTED} environment variable was deprecated in
the OpenMP specification 5.2 in favor of @code{OMP_MAX_ACTIVE_LEVELS}.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested},
@ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
@end table



@node OMP_NUM_TEAMS
@section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{nteams-var}
@item @emph{Scope:} device
@item @emph{Description}:
Specifies the upper bound for the number of teams to use in teams regions
without an explicit @code{num_teams} clause. The value of this variable
shall be a positive integer. If undefined, it defaults to 0, which means
an implementation-defined upper bound.

@item @emph{See also}:
@ref{omp_set_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
@end table



@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{nthreads-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive integers;
the value specifies the number of threads to use for the corresponding nested
level. Specifying more than one item in the list will automatically enable
nesting by default. If undefined, one thread per CPU is used.

When a list with more than one value is specified, it also affects the
@var{max-active-levels-var} ICV as described in @ref{OMP_MAX_ACTIVE_LEVELS}.

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
@end table



@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{bind-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies whether threads may be moved between processors. If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
they may be moved. Alternatively, a comma separated list with the
values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
be used to specify the thread affinity policy for the corresponding nesting
level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
same place partition as the primary thread. With @code{CLOSE} those are
kept close to the primary thread in contiguous place partitions. And
with @code{SPREAD} a sparse distribution across the place partitions is
used. Specifying more than one item in the list will automatically enable
nesting by default.

When a list is specified, it also affects the @var{max-active-levels-var} ICV
as described in @ref{OMP_MAX_ACTIVE_LEVELS}.

When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.

@item @emph{See also}:
@ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY}, @ref{OMP_PLACES},
@ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
@end table



2382@node OMP_PLACES
0b9bd33d 2383@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
d77de738
ML
2384@cindex Environment Variable
2385@table @asis
2cd0689a
TB
2386@item @emph{ICV:} @var{place-partition-var}
2387@item @emph{Scope:} implicit tasks
d77de738
ML
2388@item @emph{Description}:
2389The thread placement can be either specified using an abstract name or by an
2390explicit list of the places. The abstract names @code{threads}, @code{cores},
2391@code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
2392followed by a positive number in parentheses, which denotes the how many places
2393shall be created. With @code{threads} each place corresponds to a single
2394hardware thread; @code{cores} to a single core with the corresponding number of
2395hardware threads; with @code{sockets} the place corresponds to a single
2396socket; with @code{ll_caches} to a set of cores that shares the last level
2397cache on the device; and @code{numa_domains} to a set of cores for which their
2398closest memory on the device is the same memory and at a similar distance from
2399the cores. The resulting placement can be shown by setting the
2400@env{OMP_DISPLAY_ENV} environment variable.
2401
2402Alternatively, the placement can be specified explicitly as comma-separated
2403list of places. A place is specified by set of nonnegative numbers in curly
2404braces, denoting the hardware threads. The curly braces can be omitted
2405when only a single number has been specified. The hardware threads
2406belonging to a place can either be specified as comma-separated list of
2407nonnegative thread numbers or using an interval. Multiple places can also be
2408either specified by a comma-separated list of places or by an interval. To
2409specify an interval, a colon followed by the count is placed after
2410the hardware thread number or the place. Optionally, the length can be
2411followed by a colon and the stride number -- otherwise a unit stride is
2412assumed. Placing an exclamation mark (@code{!}) directly before a curly
2413brace or numbers inside the curly braces (excluding intervals) will
2414exclude those hardware threads.
2415
2416For instance, the following specifies the same places list:
2417@code{"@{0,1,2@}, @{3,4,6@}, @{7,8,9@}, @{10,11,12@}"};
2418@code{"@{0:3@}, @{3:3@}, @{7:3@}, @{10:3@}"}; and @code{"@{0:2@}:4:3"}.
2419
2420If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
2421@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
2422between CPUs following no placement policy.
2423
2424@item @emph{See also}:
2425@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
2426@ref{OMP_DISPLAY_ENV}
2427
2428@item @emph{Reference}:
2429@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
2430@end table
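As an illustration of the interval notation, the following Python sketch (a
hypothetical helper, not part of libgomp, which implements this parsing in C
inside the runtime) expands a single place with a place interval,
@code{@{start:len[:stride]@}[:count[:stride]]}, into explicit lists of
hardware threads:

```python
# Hypothetical helper: expands the subset of the OMP_PLACES syntax
# "{start:len}[:count[:stride]]" described above into explicit places.
def expand_places(spec):
    """Expand e.g. "{0:2}:4:3" into lists of hardware thread numbers."""
    place_part, _, interval = spec.partition("}")
    start, _, length = place_part.lstrip("{").partition(":")
    start, length = int(start), int(length or 1)
    # The interval after the place is ":count[:stride]"; unit stride by default.
    parts = [int(p) for p in interval.split(":") if p]
    count = parts[0] if parts else 1
    stride = parts[1] if len(parts) > 1 else 1
    return [[start + i * stride + j for j in range(length)]
            for i in range(count)]
```

For example, @code{@{0:2@}:4:3} expands to the places @{0,1@}, @{3,4@},
@{6,7@} and @{9,10@}.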



@node OMP_STACKSIZE
@section @env{OMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{stacksize-var}
@item @emph{Scope:} device
@item @emph{Description}:
Set the default thread stack size in kilobytes, unless the number
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes.  This is different from @code{pthread_attr_setstacksize},
which takes the number of bytes as an argument.  If the stack size cannot
be set due to system constraints, an error is reported and the initial
stack size is left unchanged.  If undefined, the stack size is system
dependent.

@item @emph{See also}:
@ref{GOMP_STACKSIZE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
@end table
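The unit suffixes can be illustrated with a small Python sketch (a
hypothetical helper, not part of libgomp) that converts an
@env{OMP_STACKSIZE} value to bytes following the rules above:

```python
# Hypothetical helper: converts an OMP_STACKSIZE value to bytes.
# A bare number is in kilobytes; B/K/M/G select bytes/kilo/mega/gigabytes.
def stacksize_to_bytes(value):
    value = value.strip()
    multipliers = {"B": 1, "K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1].upper()
    if suffix in multipliers:
        return int(value[:-1]) * multipliers[suffix]
    return int(value) * 1024  # no suffix: kilobytes
```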



@node OMP_SCHEDULE
@section @env{OMP_SCHEDULE} -- How threads are scheduled
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{run-sched-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or
@code{auto}.  The optional @code{chunk} size shall be a positive integer.
If undefined, dynamic scheduling and a chunk size of 1 are used.

@item @emph{See also}:
@ref{omp_set_schedule}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
@end table
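The @code{type[,chunk]} form can be sketched in Python (a hypothetical
helper, not part of libgomp; a missing chunk is left to the implementation
default, returned here as @code{None}):

```python
# Hypothetical helper: validates an OMP_SCHEDULE value of the form
# "type[,chunk]" and applies the documented default when unset.
def parse_omp_schedule(value=None):
    if not value:                      # undefined: dynamic with chunk size 1
        return ("dynamic", 1)
    sched_type, _, chunk = value.partition(",")
    sched_type = sched_type.strip().lower()
    if sched_type not in ("static", "dynamic", "guided", "auto"):
        raise ValueError("unknown schedule type: " + sched_type)
    # No explicit chunk: the implementation-defined default applies.
    return (sched_type, int(chunk) if chunk else None)
```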



@node OMP_TARGET_OFFLOAD
@section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{target-offload-var}
@item @emph{Scope:} global
@item @emph{Description}:
Specifies the behaviour with regard to offloading code to a device.  This
variable can be set to one of three values: @code{MANDATORY}, @code{DISABLED}
or @code{DEFAULT}.

If set to @code{MANDATORY}, the program terminates with an error if
the offload device is not present or is not supported.  If set to
@code{DISABLED}, then offloading is disabled and all code runs on the
host.  If set to @code{DEFAULT}, the program tries offloading to the
device first, then falls back to running code on the host if it cannot.

If undefined, the program behaves as if @code{DEFAULT} was set.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.17
@end table



@node OMP_TEAMS_THREAD_LIMIT
@section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{teams-thread-limit-var}
@item @emph{Scope:} device
@item @emph{Description}:
Specifies an upper bound on the number of threads used by each contention
group created by a teams construct without an explicit @code{thread_limit}
clause.  The value of this variable shall be a positive integer.  If
undefined, the value 0 is used, which stands for an implementation-defined
upper limit.

@item @emph{See also}:
@ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
@end table



@node OMP_THREAD_LIMIT
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{thread-limit-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the number of threads to use for the whole program.  The
value of this variable shall be a positive integer.  If undefined,
the number of threads is not limited.

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
@end table



@node OMP_WAIT_POLICY
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies whether waiting threads should be active or passive.  If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; if the value is @code{ACTIVE}, they may.  If
undefined, threads wait actively for a short time before waiting
passively.

@item @emph{See also}:
@ref{GOMP_SPINCOUNT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
@end table



@node GOMP_CPU_AFFINITY
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Binds threads to specific CPUs.  The variable should contain a space-separated
or comma-separated list of CPUs.  This list may contain different kinds of
entries: either single CPU numbers in any order, a range of CPUs (M-N)
or a range with some stride (M-N:S).  CPU numbers are zero-based.  For example,
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
and 14 respectively and then start assigning back from the beginning of
the list.  @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.

There is no libgomp library routine to determine whether a CPU affinity
specification is in effect.  As a workaround, language-specific library
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
environment variable.  A CPU affinity defined at startup cannot be changed
or disabled during the runtime of the application.

If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} has a higher precedence.  If @env{GOMP_CPU_AFFINITY}
is unset and @env{OMP_PROC_BIND} is either unset or set to @code{FALSE},
the host system will handle the assignment of threads to CPUs.

@item @emph{See also}:
@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
@end table
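The list syntax can be illustrated with a Python sketch (a hypothetical
helper, not part of libgomp) that expands an affinity list into the order in
which CPUs are assigned to threads:

```python
# Hypothetical helper: expands a GOMP_CPU_AFFINITY list such as
# "0 3 1-2 4-15:2" into the per-thread CPU assignment order.
def expand_affinity(spec):
    cpus = []
    for entry in spec.replace(",", " ").split():
        rng, _, stride = entry.partition(":")   # optional stride "M-N:S"
        start, _, end = rng.partition("-")      # single CPU or range "M-N"
        start = int(start)
        end = int(end) if end else start
        cpus.extend(range(start, end + 1, int(stride) if stride else 1))
    return cpus
```

Threads beyond the end of the resulting list wrap around to its beginning,
as described above.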



@node GOMP_DEBUG
@section @env{GOMP_DEBUG} -- Enable debugging output
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Enable debugging output.  The variable should be set to @code{0}
(disabled, also the default if not set), or @code{1} (enabled).

If enabled, some debugging output will be printed during execution.
This is currently not specified in more detail, and subject to change.
@end table



@node GOMP_STACKSIZE
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Set the default thread stack size in kilobytes.  This is different from
@code{pthread_attr_setstacksize}, which takes the number of bytes as an
argument.  If the stack size cannot be set due to system constraints, an
error is reported and the initial stack size is left unchanged.  If
undefined, the stack size is system dependent.

@item @emph{See also}:
@ref{OMP_STACKSIZE}

@item @emph{Reference}:
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
GCC Patches mailing list},
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
GCC Patches mailing list}
@end table



@node GOMP_SPINCOUNT
@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power.  The value may be
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop.  The
integer may optionally be followed by the following suffixes acting
as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
million), @code{G} (giga, billion), or @code{T} (tera, trillion).
If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
If there are more OpenMP threads than available CPUs, 1000 and 100
spins are used when @env{OMP_WAIT_POLICY} is @code{ACTIVE} or
undefined, respectively; unless the @env{GOMP_SPINCOUNT} is lower
or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.

@item @emph{See also}:
@ref{OMP_WAIT_POLICY}
@end table



@node GOMP_RTEMS_THREAD_POOLS
@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
This environment variable is only used on the RTEMS real-time operating system.
It determines the scheduler instance specific thread pools.  The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:
@itemize @bullet
@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}.  If a priority
value is omitted, a worker thread inherits the priority of the OpenMP
primary thread that created it.  The priority of a worker thread is not
changed after creation, even if a new OpenMP primary thread using the worker
has a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.
@end itemize
If no thread pool configuration is specified for a scheduler instance,
each OpenMP primary thread of this scheduler instance uses its own
dynamically allocated thread pool.  To limit the worker thread count of the
thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
@item @emph{Example}:
Let us suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}.  Then there are no thread pool restrictions for
scheduler instance @code{IO}.  In the scheduler instance @code{WRK0} there is
one thread pool available.  Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP primary thread
that created it.  In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.
@end table
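The configuration format can be sketched in Python (a hypothetical helper,
not part of libgomp; an omitted priority is returned as @code{None}, meaning
the worker threads inherit the primary thread's priority):

```python
# Hypothetical helper: parses a GOMP_RTEMS_THREAD_POOLS value of the form
# "<count>[$<priority>]@<scheduler>:..." into (count, priority, scheduler)
# tuples.
def parse_rtems_pools(spec):
    pools = []
    for cfg in spec.split(":"):
        count_prio, _, scheduler = cfg.partition("@")
        count, _, priority = count_prio.partition("$")
        # Omitted priority: workers inherit the primary thread's priority.
        pools.append((int(count), int(priority) if priority else None,
                      scheduler))
    return pools
```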



@c ---------------------------------------------------------------------
@c Enabling OpenACC
@c ---------------------------------------------------------------------

@node Enabling OpenACC
@chapter Enabling OpenACC

To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenacc} must be specified.  This enables the OpenACC directive
@code{#pragma acc} in C/C++ and, in Fortran, the @code{!$acc} directives in
free form, the @code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed
form, the @code{!$} conditional compilation sentinels in free form and the
@code{c$}, @code{*$} and @code{!$} sentinels in fixed form.  The flag also
arranges for automatic linking of the OpenACC runtime library
(@ref{OpenACC Runtime Library Routines}).

See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.

A complete description of all OpenACC directives accepted may be found in
the @uref{https://www.openacc.org, OpenACC} Application Programming
Interface manual, version 2.6.

2731
2732@c ---------------------------------------------------------------------
2733@c OpenACC Runtime Library Routines
2734@c ---------------------------------------------------------------------
2735
2736@node OpenACC Runtime Library Routines
2737@chapter OpenACC Runtime Library Routines
2738
2739The runtime routines described here are defined by section 3 of the OpenACC
2740specifications in version 2.6.
2741They have C linkage, and do not throw exceptions.
2742Generally, they are available only for the host, with the exception of
2743@code{acc_on_device}, which is available for both the host and the
2744acceleration device.
2745
2746@menu
2747* acc_get_num_devices:: Get number of devices for the given device
2748 type.
2749* acc_set_device_type:: Set type of device accelerator to use.
2750* acc_get_device_type:: Get type of device accelerator to be used.
2751* acc_set_device_num:: Set device number to use.
2752* acc_get_device_num:: Get device number to be used.
2753* acc_get_property:: Get device property.
2754* acc_async_test:: Tests for completion of a specific asynchronous
2755 operation.
2756* acc_async_test_all:: Tests for completion of all asynchronous
2757 operations.
2758* acc_wait:: Wait for completion of a specific asynchronous
2759 operation.
2760* acc_wait_all:: Waits for completion of all asynchronous
2761 operations.
2762* acc_wait_all_async:: Wait for completion of all asynchronous
2763 operations.
2764* acc_wait_async:: Wait for completion of asynchronous operations.
2765* acc_init:: Initialize runtime for a specific device type.
2766* acc_shutdown:: Shuts down the runtime for a specific device
2767 type.
2768* acc_on_device:: Whether executing on a particular device
2769* acc_malloc:: Allocate device memory.
2770* acc_free:: Free device memory.
2771* acc_copyin:: Allocate device memory and copy host memory to
2772 it.
2773* acc_present_or_copyin:: If the data is not present on the device,
2774 allocate device memory and copy from host
2775 memory.
2776* acc_create:: Allocate device memory and map it to host
2777 memory.
2778* acc_present_or_create:: If the data is not present on the device,
2779 allocate device memory and map it to host
2780 memory.
2781* acc_copyout:: Copy device memory to host memory.
2782* acc_delete:: Free device memory.
2783* acc_update_device:: Update device memory from mapped host memory.
2784* acc_update_self:: Update host memory from mapped device memory.
2785* acc_map_data:: Map previously allocated device memory to host
2786 memory.
2787* acc_unmap_data:: Unmap device memory from host memory.
2788* acc_deviceptr:: Get device pointer associated with specific
2789 host address.
2790* acc_hostptr:: Get host pointer associated with specific
2791 device address.
2792* acc_is_present:: Indicate whether host variable / array is
2793 present on device.
2794* acc_memcpy_to_device:: Copy host memory to device memory.
2795* acc_memcpy_from_device:: Copy device memory to host memory.
2796* acc_attach:: Let device pointer point to device-pointer target.
2797* acc_detach:: Let device pointer point to host-pointer target.
2798
2799API routines for target platforms.
2800
2801* acc_get_current_cuda_device:: Get CUDA device handle.
2802* acc_get_current_cuda_context::Get CUDA context handle.
2803* acc_get_cuda_stream:: Get CUDA stream handle.
2804* acc_set_cuda_stream:: Set CUDA stream handle.
2805
2806API routines for the OpenACC Profiling Interface.
2807
2808* acc_prof_register:: Register callbacks.
2809* acc_prof_unregister:: Unregister callbacks.
2810* acc_prof_lookup:: Obtain inquiry functions.
2811* acc_register_library:: Library registration.
2812@end menu



@node acc_get_num_devices
@section @code{acc_get_num_devices} -- Get number of devices for given device type
@table @asis
@item @emph{Description}
This function returns a value indicating the number of devices available
for the device type specified in @var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.1.
@end table



@node acc_set_device_type
@section @code{acc_set_device_type} -- Set type of device accelerator to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime library which device type, specified
in @var{devicetype}, to use when executing a parallel or kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_type(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.2.
@end table



@node acc_get_device_type
@section @code{acc_get_device_type} -- Get type of device accelerator to be used.
@table @asis
@item @emph{Description}
This function returns what device type will be used when executing a
parallel or kernels region.

This function returns @code{acc_device_none} if
@code{acc_get_device_type} is called from the
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
Interface}), that is, if the device is currently being initialized.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_type()}
@item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.3.
@end table



@node acc_set_device_num
@section @code{acc_set_device_num} -- Set device number to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime which device number, specified
by @var{devicenum} and associated with the specified device type
@var{devicetype}, is to be used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_num(int devicenum, acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.4.
@end table



@node acc_get_device_num
@section @code{acc_get_device_num} -- Get device number to be used.
@table @asis
@item @emph{Description}
This function returns which device number, associated with the specified
device type @var{devicetype}, will be used when executing a parallel or
kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer acc_get_device_num}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.5.
@end table



@node acc_get_property
@section @code{acc_get_property} -- Get device property.
@cindex acc_get_property
@cindex acc_get_property_string
@table @asis
@item @emph{Description}
These routines return the value of the specified @var{property} for the
device being queried according to @var{devicenum} and @var{devicetype}.
Integer-valued and string-valued properties are returned by
@code{acc_get_property} and @code{acc_get_property_string} respectively.
The Fortran @code{acc_get_property_string} subroutine returns the string
retrieved in its fourth argument, while the remaining entry points are
functions, which pass the return value as their result.

Note, for Fortran only: the OpenACC technical committee corrected and, hence,
modified the interface introduced in OpenACC 2.6.  The kind-value parameter
@code{acc_device_property} has been renamed to @code{acc_device_property_kind}
for consistency and the return type of the @code{acc_get_property} function is
now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
The parameter @code{acc_device_property} will continue to be provided,
but might be removed in a future version of GCC.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
@item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
@item @tab @code{use ISO_C_Binding, only: c_size_t}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer(kind=acc_device_property_kind) property}
@item @tab @code{integer(kind=c_size_t) acc_get_property}
@item @tab @code{character(*) string}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.6.
@end table



@node acc_async_test
@section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function tests for completion of the asynchronous operation specified
in @var{arg}.  In C/C++, a non-zero value is returned to indicate that the
specified asynchronous operation has completed, while Fortran returns
@code{.true.}.  If the asynchronous operation has not completed, C/C++
returns zero and Fortran returns @code{.false.}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test(arg)}
@item @tab @code{integer(kind=acc_handle_kind) arg}
@item @tab @code{logical acc_async_test}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.9.
@end table



@node acc_async_test_all
@section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function tests for completion of all asynchronous operations.
In C/C++, a non-zero value is returned to indicate that all asynchronous
operations have completed, while Fortran returns @code{.true.}.  If
any asynchronous operation has not completed, C/C++ returns zero and
Fortran returns @code{.false.}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.10.
@end table



@node acc_wait
@section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function waits for completion of the asynchronous operation
specified in @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait(int arg);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{void acc_async_wait(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.11.
@end table



@node acc_wait_all
@section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function waits for the completion of all asynchronous operations.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_all(void);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{void acc_async_wait_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.13.
@end table



@node acc_wait_all_async
@section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on the queue @var{async} for any
and all asynchronous operations that have been previously enqueued on
any queue.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_all_async(int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
@item @tab @code{integer(acc_handle_kind) async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.14.
@end table



@node acc_wait_async
@section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on queue @var{async} for any and all
asynchronous operations enqueued on queue @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_async(int arg, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
@item @tab @code{integer(acc_handle_kind) arg, async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.12.
@end table



@node acc_init
@section @code{acc_init} -- Initialize runtime for a specific device type.
@table @asis
@item @emph{Description}
This function initializes the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_init(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.7.
@end table



@node acc_shutdown
@section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
@table @asis
@item @emph{Description}
This function shuts down the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_shutdown(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.8.
@end table



@node acc_on_device
@section @code{acc_on_device} -- Whether executing on a particular device
@table @asis
@item @emph{Description}:
This function returns whether the program is executing on a particular
device specified in @var{devicetype}. In C/C++, a nonzero value is
returned if the program is executing on the specified device type and
zero otherwise. In Fortran, @code{.true.} is returned in the former case
and @code{.false.} in the latter.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_on_device(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@item @tab @code{logical acc_on_device}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.17.
@end table



@node acc_malloc
@section @code{acc_malloc} -- Allocate device memory.
@table @asis
@item @emph{Description}
This function allocates @var{len} bytes of device memory. It returns
the device address of the allocated memory.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.18.
@end table



@node acc_free
@section @code{acc_free} -- Free device memory.
@table @asis
@item @emph{Description}
Free previously allocated device memory at the device address @var{a}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.19.
@end table



@node acc_copyin
@section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
@table @asis
@item @emph{Description}
In C/C++, this function allocates @var{len} bytes of device memory, maps
it to the host address @var{a}, and copies the host data to the device.
The device address of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
@end table



@node acc_present_or_copyin
@section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
@table @asis
@item @emph{Description}
This function tests if the host data specified by @var{a} and of length
@var{len} is present on the device. If it is not present, device memory
is allocated and the host memory copied. The device address of the
newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
@end table



@node acc_create
@section @code{acc_create} -- Allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function allocates device memory and maps it to host memory specified
by the host address @var{a} with a length of @var{len} bytes. In C/C++,
the function returns the device address of the allocated device memory.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
@end table



@node acc_present_or_create
@section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function tests if the host data specified by @var{a} and of length
@var{len} is present on the device. If it is not present, device memory
is allocated and mapped to host memory. In C/C++, the device address
of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
@end table



@node acc_copyout
@section @code{acc_copyout} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
In C/C++, this function copies mapped device memory to the host memory
specified by the host address @var{a} for a length of @var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
@item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.22.
@end table



@node acc_delete
@section @code{acc_delete} -- Free device memory.
@table @asis
@item @emph{Description}
This function frees previously allocated device memory associated with
the host memory specified by the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
@item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.23.
@end table



@node acc_update_device
@section @code{acc_update_device} -- Update device memory from mapped host memory.
@table @asis
@item @emph{Description}
This function updates the device copy from the previously mapped host memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.24.
@end table



@node acc_update_self
@section @code{acc_update_self} -- Update host memory from mapped device memory.
@table @asis
@item @emph{Description}
This function updates the host copy from the previously mapped device memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.25.
@end table



@node acc_map_data
@section @code{acc_map_data} -- Map previously allocated device memory to host memory.
@table @asis
@item @emph{Description}
This function maps previously allocated device memory to host memory.
The device memory is specified with the device address @var{d}. The
host memory is specified with the host address @var{h} and a length of
@var{len} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.26.
@end table



@node acc_unmap_data
@section @code{acc_unmap_data} -- Unmap device memory from host memory.
@table @asis
@item @emph{Description}
This function unmaps previously mapped device and host memory. The
latter is specified by the host address @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.27.
@end table



@node acc_deviceptr
@section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
@table @asis
@item @emph{Description}
This function returns the device address that has been mapped to the
host address specified by @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.28.
@end table



@node acc_hostptr
@section @code{acc_hostptr} -- Get host pointer associated with specific device address.
@table @asis
@item @emph{Description}
This function returns the host address that has been mapped to the
device address specified by @var{d}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.29.
@end table



@node acc_is_present
@section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
@table @asis
@item @emph{Description}
This function indicates whether the host memory specified by the host
address @var{a} and a length of @var{len} bytes is present on the
device. In C/C++, a nonzero value is returned to indicate the presence
of the mapped memory on the device. A zero is returned to indicate the
memory is not mapped on the device.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
If the host memory is mapped to device memory, @code{.true.} is
returned. Otherwise, @code{.false.} is returned to indicate the mapped
memory is not present.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_is_present(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{logical acc_is_present}
@item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{logical acc_is_present}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.30.
@end table



@node acc_memcpy_to_device
@section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
@table @asis
@item @emph{Description}
This function copies host memory specified by the host address @var{src}
to device memory specified by the device address @var{dest} for a length
of @var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.31.
@end table



@node acc_memcpy_from_device
@section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
This function copies device memory specified by the device address
@var{src} to host memory specified by the host address @var{dest} for a
length of @var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.32.
@end table



@node acc_attach
@section @code{acc_attach} -- Let device pointer point to device-pointer target.
@table @asis
@item @emph{Description}
This function updates a pointer on the device from pointing to a host-pointer
address to pointing to the corresponding device data.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.34.
@end table



@node acc_detach
@section @code{acc_detach} -- Let device pointer point to host-pointer target.
@table @asis
@item @emph{Description}
This function updates a pointer on the device from pointing to a device-pointer
address to pointing to the corresponding host data.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
@item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.35.
@end table



@node acc_get_current_cuda_device
@section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
@table @asis
@item @emph{Description}
This function returns the CUDA device handle. This handle is the same
as is used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.1.
@end table



@node acc_get_current_cuda_context
@section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
@table @asis
@item @emph{Description}
This function returns the CUDA context handle. This handle is the same
as is used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.2.
@end table



@node acc_get_cuda_stream
@section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
@table @asis
@item @emph{Description}
This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as is used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.3.
@end table



@node acc_set_cuda_stream
@section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
@table @asis
@item @emph{Description}
This function associates the stream handle specified by @var{stream} with
the queue @var{async}.

This cannot be used to change the stream handle associated with
@code{acc_async_sync}.

The return value is not specified.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.4.
@end table



@node acc_prof_register
@section @code{acc_prof_register} -- Register callbacks.
@table @asis
@item @emph{Description}:
This function registers callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_unregister
@section @code{acc_prof_unregister} -- Unregister callbacks.
@table @asis
@item @emph{Description}:
This function unregisters callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_lookup
@section @code{acc_prof_lookup} -- Obtain inquiry functions.
@table @asis
@item @emph{Description}:
Function to obtain inquiry functions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_register_library
@section @code{acc_register_library} -- Library registration.
@table @asis
@item @emph{Description}:
Function for library registration.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@c ---------------------------------------------------------------------
@c OpenACC Environment Variables
@c ---------------------------------------------------------------------

@node OpenACC Environment Variables
@chapter OpenACC Environment Variables

The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
are defined by section 4 of the OpenACC specification in version 2.0.
The variable @env{ACC_PROFLIB}
is defined by section 4 of the OpenACC specification in version 2.6.
The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.

@menu
* ACC_DEVICE_TYPE::
* ACC_DEVICE_NUM::
* ACC_PROFLIB::
* GCC_ACC_NOTIFY::
@end menu



@node ACC_DEVICE_TYPE
@section @code{ACC_DEVICE_TYPE}
@table @asis
@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.1.
@end table



@node ACC_DEVICE_NUM
@section @code{ACC_DEVICE_NUM}
@table @asis
@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.2.
@end table
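The two variables are usually set in the shell before the program starts.
They can also be set from the program itself, as this sketch shows (the
function name and values are illustrative; the assignments must happen
before any OpenACC construct or API call, because the runtime reads the
environment at start-up):

```c
#include <stdlib.h>

/* Select the device type and the device instance the OpenACC runtime
   should use, equivalent to exporting the variables in the shell.  */
void select_host_device (void)
{
  setenv ("ACC_DEVICE_TYPE", "host", 1);
  setenv ("ACC_DEVICE_NUM", "0", 1);
}
```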



@node ACC_PROFLIB
@section @code{ACC_PROFLIB}
@table @asis
@item @emph{See also}:
@ref{acc_register_library}, @ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.3.
@end table



@node GCC_ACC_NOTIFY
@section @code{GCC_ACC_NOTIFY}
@table @asis
@item @emph{Description}:
Print debug information pertaining to the accelerator.
@end table
4059
4060
4061
4062@c ---------------------------------------------------------------------
4063@c CUDA Streams Usage
4064@c ---------------------------------------------------------------------
4065
4066@node CUDA Streams Usage
4067@chapter CUDA Streams Usage
4068
4069This applies to the @code{nvptx} plugin only.
4070
4071The library provides elements that perform asynchronous movement of
4072data and asynchronous operation of computing constructs. This
4073asynchronous functionality is implemented by making use of CUDA
4074streams@footnote{See "Stream Management" in "CUDA Driver API",
4075TRM-06703-001, Version 5.5, for additional information}.
4076
4077The asynchronous functionality is accessed primarily through those
4078OpenACC directives that accept the
4079@code{async} and @code{wait} clauses. When the @code{async} clause is
4080first used with a directive, it creates a CUDA stream. If an
4081@code{async-argument} is used with the @code{async} clause, then the
4082stream is associated with the specified @code{async-argument}.
4083
4084Following the creation of an association between a CUDA stream and the
4085@code{async-argument} of an @code{async} clause, both the @code{wait}
4086clause and the @code{wait} directive can be used. When either the
4087clause or directive is used after stream creation, it creates a
4088rendezvous point whereby execution waits until all operations
4089associated with the @code{async-argument}, that is, stream, have
4090completed.
4091
4092Normally, the streams created as a result of using the @code{async}
4093clause are managed without any intervention by the caller.
4094caller. This implies the association between the @code{async-argument}
4095and the CUDA stream will be maintained for the lifetime of the program.
4096However, this association can be changed through the use of the library
4097function @code{acc_set_cuda_stream}. When the function
4098@code{acc_set_cuda_stream} is called, the CUDA stream that was
4099originally associated with the @code{async} clause will be destroyed.
4100Caution should be taken when changing the association as subsequent
4101references to the @code{async-argument} refer to a different
4102CUDA stream.
4103
4104
4105
4106@c ---------------------------------------------------------------------
4107@c OpenACC Library Interoperability
4108@c ---------------------------------------------------------------------
4109
4110@node OpenACC Library Interoperability
4111@chapter OpenACC Library Interoperability
4112
4113@section Introduction
4114
4115The OpenACC library uses the CUDA Driver API, and may interact with
4116programs that use the Runtime library directly, or another library
4117based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
4118"Interactions with the CUDA Driver API" in
4119"CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
4120Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
4121for additional information on library interoperability.}.
4122This chapter describes the use cases and what changes are
4123required in order to use both the OpenACC library and the CUBLAS and Runtime
4124libraries within a program.
4125
4126@section First invocation: NVIDIA CUBLAS library API
4127
4128In this first use case (see below), a function in the CUBLAS library is called
4129prior to any of the functions in the OpenACC library. More specifically, the
4130function @code{cublasCreate()}.
4131
4132When invoked, the function initializes the library and allocates the
4133hardware resources on the host and the device on behalf of the caller. Once
4134the initialization and allocation has completed, a handle is returned to the
4135caller. The OpenACC library also requires initialization and allocation of
4136hardware resources. Since the CUBLAS library has already allocated the
4137hardware resources for the device, all that is left to do is to initialize
4138the OpenACC library and acquire the hardware resources on the host.
4139
4140Prior to calling the OpenACC function that initializes the library and
4141allocates the host hardware resources, you need to acquire the device number
4142that was allocated during the call to @code{cublasCreate()}. Invoking the
4143runtime library function @code{cudaGetDevice()} accomplishes this. Once
4144acquired, the device number is passed along with the device type as
4145parameters to the OpenACC library function @code{acc_set_device_num()}.
4146
4147Once the call to @code{acc_set_device_num()} has completed, the OpenACC
4148library uses the context that was created during the call to
4149@code{cublasCreate()}. In other words, both libraries will be sharing the
4150same context.
4151
4152@smallexample
4153 /* Create the handle */
4154 s = cublasCreate(&h);
4155 if (s != CUBLAS_STATUS_SUCCESS)
4156 @{
4157 fprintf(stderr, "cublasCreate failed %d\n", s);
4158 exit(EXIT_FAILURE);
4159 @}
4160
4161 /* Get the device number */
4162 e = cudaGetDevice(&dev);
4163 if (e != cudaSuccess)
4164 @{
4165 fprintf(stderr, "cudaGetDevice failed %d\n", e);
4166 exit(EXIT_FAILURE);
4167 @}
4168
4169 /* Initialize OpenACC library and use device 'dev' */
4170 acc_set_device_num(dev, acc_device_nvidia);
4171
4172@end smallexample
4173@center Use Case 1
4174
4175@section First invocation: OpenACC library API
4176
4177In this second use case (see below), a function in the OpenACC library is
4178called prior to any of the functions in the CUBLAS library. More specifically,
4179the function @code{acc_set_device_num()}.
4180
4181In the use case presented here, the function @code{acc_set_device_num()}
4182is used to both initialize the OpenACC library and allocate the hardware
4183resources on the host and the device. In the call to the function, the
4184call parameters specify which device to use and what device
4185type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
4186is but one method to initialize the OpenACC library and allocate the
4187appropriate hardware resources. Other methods are available through the
4188use of environment variables and these will be discussed in the next section.
4189
4190Once the call to @code{acc_set_device_num()} has completed, other OpenACC
4191functions can be called as seen with multiple calls being made to
4192@code{acc_copyin()}. In addition, calls can be made to functions in the
4193CUBLAS library. In the use case a call to @code{cublasCreate()} is made
4194subsequent to the calls to @code{acc_copyin()}.
4195As seen in the previous use case, a call to @code{cublasCreate()}
4196initializes the CUBLAS library and allocates the hardware resources on the
4197host and the device. However, since the device has already been allocated,
4198@code{cublasCreate()} will only initialize the CUBLAS library and allocate
4199the appropriate hardware resources on the host. The context that was created
4200as part of the OpenACC initialization is shared with the CUBLAS library,
4201similarly to the first use case.
4202
4203@smallexample
4204 dev = 0;
4205
4206 acc_set_device_num(dev, acc_device_nvidia);
4207
4208 /* Copy the first set to the device */
4209 d_X = acc_copyin(&h_X[0], N * sizeof (float));
4210 if (d_X == NULL)
4211 @{
4212 fprintf(stderr, "copyin error h_X\n");
4213 exit(EXIT_FAILURE);
4214 @}
4215
4216 /* Copy the second set to the device */
4217 d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
4218 if (d_Y == NULL)
4219 @{
4220 fprintf(stderr, "copyin error h_Y1\n");
4221 exit(EXIT_FAILURE);
4222 @}
4223
4224 /* Create the handle */
4225 s = cublasCreate(&h);
4226 if (s != CUBLAS_STATUS_SUCCESS)
4227 @{
4228 fprintf(stderr, "cublasCreate failed %d\n", s);
4229 exit(EXIT_FAILURE);
4230 @}
4231
4232 /* Perform saxpy using CUBLAS library function */
4233 s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
4234 if (s != CUBLAS_STATUS_SUCCESS)
4235 @{
4236 fprintf(stderr, "cublasSaxpy failed %d\n", s);
4237 exit(EXIT_FAILURE);
4238 @}
4239
4240 /* Copy the results from the device */
4241 acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
4242
4243@end smallexample
4244@center Use Case 2
4245
4246@section OpenACC library and environment variables
4247
4248There are two environment variables associated with the OpenACC library
4249that may be used to control the device type and device number:
4250@env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
4251environment variables can be used as an alternative to calling
4252@code{acc_set_device_num()}. As seen in the second use case, the device
4253type and device number were specified using @code{acc_set_device_num()}.
4254If, however, the aforementioned environment variables were set, then the
4255call to @code{acc_set_device_num()} would not be required.
4256
4257
4258The use of the environment variables is only relevant when an OpenACC function
4259is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
4260is called prior to a call to an OpenACC function, then you must call
4261@code{acc_set_device_num()}@footnote{More complete information
4262about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
4263sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC}
4264Application Programming Interface, Version 2.6.}
4265
4266
4267
4268@c ---------------------------------------------------------------------
4269@c OpenACC Profiling Interface
4270@c ---------------------------------------------------------------------
4271
4272@node OpenACC Profiling Interface
4273@chapter OpenACC Profiling Interface
4274
4275@section Implementation Status and Implementation-Defined Behavior
4276
4277We're implementing the OpenACC Profiling Interface as defined by the
4278OpenACC 2.6 specification. We're clarifying some aspects here as
4279@emph{implementation-defined behavior}, while they're still under
4280discussion within the OpenACC Technical Committee.
4281
4282This implementation is tuned to keep the performance impact as low as
4283possible for the (very common) case that the Profiling Interface is
4284not enabled. This is relevant, as the Profiling Interface affects all
4285the @emph{hot} code paths (in the target code, not in the offloaded
4286code). Users of the OpenACC Profiling Interface can be expected to
4287understand that performance will be impacted to some degree once the
4288Profiling Interface has gotten enabled: for example, because of the
4289@emph{runtime} (libgomp) calling into a third-party @emph{library} for
4290every event that has been registered.
4291
4292We're not yet accounting for the fact that @cite{OpenACC events may
4293occur during event processing}.
4294We just handle one case specially, as required by CUDA 9.0
4295@command{nvprof}, that @code{acc_get_device_type}
4296(@ref{acc_get_device_type}) may be called from
4297@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
4298callbacks.
4299
4300We're not yet implementing initialization via a
4301@code{acc_register_library} function that is either statically linked
4302in, or dynamically via @env{LD_PRELOAD}.
4303Initialization via @code{acc_register_library} functions dynamically
4304loaded via the @env{ACC_PROFLIB} environment variable does work, as
4305does directly calling @code{acc_prof_register},
4306@code{acc_prof_unregister}, @code{acc_prof_lookup}.
4307
4308As currently there are no inquiry functions defined, calls to
4309@code{acc_prof_lookup} will always return @code{NULL}.
4310
4311There aren't separate @emph{start}, @emph{stop} events defined for the
4312event types @code{acc_ev_create}, @code{acc_ev_delete},
4313@code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
4314should be triggered before or after the actual device-specific call is
4315made. We trigger them after.
4316
4317Remarks about data provided to callbacks:
4318
4319@table @asis
4320
4321@item @code{acc_prof_info.event_type}
4322It's not clear if for @emph{nested} event callbacks (for example,
4323@code{acc_ev_enqueue_launch_start} as part of a parent compute
4324construct), this should be set for the nested event
4325(@code{acc_ev_enqueue_launch_start}), or if the value of the parent
4326construct should remain (@code{acc_ev_compute_construct_start}). In
4327this implementation, the value will generally correspond to the
4328innermost nested event type.
4329
4330@item @code{acc_prof_info.device_type}
4331@itemize
4332
4333@item
4334For @code{acc_ev_compute_construct_start}, and in presence of an
4335@code{if} clause with @emph{false} argument, this will still refer to
4336the offloading device type.
4337It's not clear if that's the expected behavior.
4338
4339@item
4340Complementary to the item before, for
4341@code{acc_ev_compute_construct_end}, this is set to
4342@code{acc_device_host} in presence of an @code{if} clause with
4343@emph{false} argument.
4344It's not clear if that's the expected behavior.
4345
4346@end itemize
4347
4348@item @code{acc_prof_info.thread_id}
4349Always @code{-1}; not yet implemented.
4350
4351@item @code{acc_prof_info.async}
4352@itemize
4353
4354@item
4355Not yet implemented correctly for
4356@code{acc_ev_compute_construct_start}.
4357
4358@item
4359In a compute construct, for host-fallback
4360execution/@code{acc_device_host} it will always be
4361@code{acc_async_sync}.
4362It's not clear if that's the expected behavior.
4363
4364@item
4365For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
4366it will always be @code{acc_async_sync}.
4367It's not clear if that's the expected behavior.
4368
4369@end itemize
4370
4371@item @code{acc_prof_info.async_queue}
4372There is no @cite{limited number of asynchronous queues} in libgomp.
4373This will always have the same value as @code{acc_prof_info.async}.
4374
4375@item @code{acc_prof_info.src_file}
4376Always @code{NULL}; not yet implemented.
4377
4378@item @code{acc_prof_info.func_name}
4379Always @code{NULL}; not yet implemented.
4380
4381@item @code{acc_prof_info.line_no}
4382Always @code{-1}; not yet implemented.
4383
4384@item @code{acc_prof_info.end_line_no}
4385Always @code{-1}; not yet implemented.
4386
4387@item @code{acc_prof_info.func_line_no}
4388Always @code{-1}; not yet implemented.
4389
4390@item @code{acc_prof_info.func_end_line_no}
4391Always @code{-1}; not yet implemented.
4392
4393@item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
4394Relating to @code{acc_prof_info.event_type} discussed above, in this
4395implementation, this will always be the same value as
4396@code{acc_prof_info.event_type}.
4397
4398@item @code{acc_event_info.*.parent_construct}
4399@itemize
4400
4401@item
4402Will be @code{acc_construct_parallel} for all OpenACC compute
4403constructs as well as many OpenACC Runtime API calls; should be the
4404one matching the actual construct, or
4405@code{acc_construct_runtime_api}, respectively.
4406
4407@item
4408Will be @code{acc_construct_enter_data} or
4409@code{acc_construct_exit_data} when processing variable mappings
4410specified in OpenACC @emph{declare} directives; should be
4411@code{acc_construct_declare}.
4412
4413@item
4414For implicit @code{acc_ev_device_init_start},
4415@code{acc_ev_device_init_end}, and explicit as well as implicit
4416@code{acc_ev_alloc}, @code{acc_ev_free},
4417@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
4418@code{acc_ev_enqueue_download_start}, and
4419@code{acc_ev_enqueue_download_end}, will be
4420@code{acc_construct_parallel}; should reflect the real parent
4421construct.
4422
4423@end itemize
4424
4425@item @code{acc_event_info.*.implicit}
4426For @code{acc_ev_alloc}, @code{acc_ev_free},
4427@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
4428@code{acc_ev_enqueue_download_start}, and
4429@code{acc_ev_enqueue_download_end}, this currently will be @code{1}
4430also for explicit usage.
4431
4432@item @code{acc_event_info.data_event.var_name}
4433Always @code{NULL}; not yet implemented.
4434
4435@item @code{acc_event_info.data_event.host_ptr}
4436For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always
4437@code{NULL}.
4438
4439@item @code{typedef union acc_api_info}
4440@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
4441Information}. This should obviously be @code{typedef @emph{struct}
4442acc_api_info}.
4443
4444@item @code{acc_api_info.device_api}
4445Possibly not yet implemented correctly for
4446@code{acc_ev_compute_construct_start},
4447@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
4448will always be @code{acc_device_api_none} for these event types.
4449For @code{acc_ev_enter_data_start}, it will be
4450@code{acc_device_api_none} in some cases.
4451
4452@item @code{acc_api_info.device_type}
4453Always the same as @code{acc_prof_info.device_type}.
4454
4455@item @code{acc_api_info.vendor}
4456Always @code{-1}; not yet implemented.
4457
4458@item @code{acc_api_info.device_handle}
4459Always @code{NULL}; not yet implemented.
4460
4461@item @code{acc_api_info.context_handle}
4462Always @code{NULL}; not yet implemented.
4463
4464@item @code{acc_api_info.async_handle}
4465Always @code{NULL}; not yet implemented.
4466
4467@end table
4468
4469Remarks about certain event types:
4470
4471@table @asis
4472
4473@item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
4474@itemize
4475
4476@item
4477@c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
4478@c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
4479@c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
4480When a compute construct triggers implicit
4481@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
4482events, they currently aren't @emph{nested within} the corresponding
4483@code{acc_ev_compute_construct_start} and
4484@code{acc_ev_compute_construct_end}, but they're currently observed
4485@emph{before} @code{acc_ev_compute_construct_start}.
4486It's not clear what to do: the standard asks us to provide a lot of
4487details to the @code{acc_ev_compute_construct_start} callback, but how
4488can we do so without (implicitly) initializing a device first?
4489
4490@item
4491Callbacks for these event types will not be invoked for calls to the
4492@code{acc_set_device_type} and @code{acc_set_device_num} functions.
4493It's not clear if they should be.
4494
4495@end itemize
4496
4497@item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
4498@itemize
4499
4500@item
4501Callbacks for these event types will also be invoked for OpenACC
4502@emph{host_data} constructs.
4503It's not clear if they should be.
4504
4505@item
4506Callbacks for these event types will also be invoked when processing
4507variable mappings specified in OpenACC @emph{declare} directives.
4508It's not clear if they should be.
4509
4510@end itemize
4511
4512@end table
4513
4514Callbacks for the following event types will be invoked, but dispatch
4515and information provided therein has not yet been thoroughly reviewed:
4516
4517@itemize
4518@item @code{acc_ev_alloc}
4519@item @code{acc_ev_free}
4520@item @code{acc_ev_update_start}, @code{acc_ev_update_end}
4521@item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
4522@item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
4523@end itemize
4524
4525During device initialization, and finalization, respectively,
4526callbacks for the following event types will not yet be invoked:
4527
4528@itemize
4529@item @code{acc_ev_alloc}
4530@item @code{acc_ev_free}
4531@end itemize
4532
4533Callbacks for the following event types have not yet been implemented,
4534so currently won't be invoked:
4535
4536@itemize
4537@item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
4538@item @code{acc_ev_runtime_shutdown}
4539@item @code{acc_ev_create}, @code{acc_ev_delete}
4540@item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
4541@end itemize
4542
4543For the following runtime library functions, not all expected
4544callbacks will be invoked (mostly concerning implicit device
4545initialization):
4546
4547@itemize
4548@item @code{acc_get_num_devices}
4549@item @code{acc_set_device_type}
4550@item @code{acc_get_device_type}
4551@item @code{acc_set_device_num}
4552@item @code{acc_get_device_num}
4553@item @code{acc_init}
4554@item @code{acc_shutdown}
4555@end itemize
4556
4557Aside from implicit device initialization, for the following runtime
4558library functions, no callbacks will be invoked for shared-memory
4559offloading devices (it's not clear if they should be):
4560
4561@itemize
4562@item @code{acc_malloc}
4563@item @code{acc_free}
4564@item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
4565@item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
4566@item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
4567@item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
4568@item @code{acc_update_device}, @code{acc_update_device_async}
4569@item @code{acc_update_self}, @code{acc_update_self_async}
4570@item @code{acc_map_data}, @code{acc_unmap_data}
4571@item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
4572@item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
4573@end itemize
4574
4575@c ---------------------------------------------------------------------
4576@c OpenMP-Implementation Specifics
4577@c ---------------------------------------------------------------------
4578
4579@node OpenMP-Implementation Specifics
4580@chapter OpenMP-Implementation Specifics
4581
4582@menu
4583* Implementation-defined ICV Initialization::
4584* OpenMP Context Selectors::
4585* Memory allocation::
4586@end menu
4587
4588@node Implementation-defined ICV Initialization
4589@section Implementation-defined ICV Initialization
4590@cindex Implementation specific setting
4591
4592@multitable @columnfractions .30 .70
4593@item @var{affinity-format-var} @tab See @ref{OMP_AFFINITY_FORMAT}.
4594@item @var{def-allocator-var} @tab See @ref{OMP_ALLOCATOR}.
4595@item @var{max-active-levels-var} @tab See @ref{OMP_MAX_ACTIVE_LEVELS}.
4596@item @var{dyn-var} @tab See @ref{OMP_DYNAMIC}.
4597@item @var{nthreads-var} @tab See @code{OMP_NUM_THREADS}.
4598@item @var{num-devices-var} @tab Number of non-host devices found
4599by GCC's run-time library
4600@item @var{num-procs-var} @tab The number of CPU cores on the
4601initial device, except that affinity settings might lead to a
4602smaller number. On non-host devices, the value of the
4603@var{nthreads-var} ICV.
4604@item @var{place-partition-var} @tab See @ref{OMP_PLACES}.
4605@item @var{run-sched-var} @tab See @ref{OMP_SCHEDULE}.
4606@item @var{stacksize-var} @tab See @ref{OMP_STACKSIZE}.
4607@item @var{thread-limit-var} @tab See @ref{OMP_TEAMS_THREAD_LIMIT}
4608@item @var{wait-policy-var} @tab See @ref{OMP_WAIT_POLICY} and
4609@ref{GOMP_SPINCOUNT}
4610@end multitable
4611
4612@node OpenMP Context Selectors
4613@section OpenMP Context Selectors
4614
4615@code{vendor} is always @code{gnu}. References are to the GCC manual.
4616
4617@multitable @columnfractions .60 .10 .25
4618@headitem @code{arch} @tab @code{kind} @tab @code{isa}
4619@item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
4620 @code{i586}, @code{i686}, @code{ia32}
4621 @tab @code{host}
4622 @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
4623@item @code{amdgcn}, @code{gcn}
4624 @tab @code{gpu}
4625 @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
4626 @code{gfx803} is supported as an alias for @code{fiji}.}
4627@item @code{nvptx}
4628 @tab @code{gpu}
4629 @tab See @code{-march=} in ``Nvidia PTX Options''
4630@end multitable
4631
4632@node Memory allocation
4633@section Memory allocation
4634
4635For the memory spaces, the following applies:
4636@itemize
4637@item @code{omp_default_mem_space} is supported
4638@item @code{omp_const_mem_space} maps to @code{omp_default_mem_space}
4639@item @code{omp_low_lat_mem_space} maps to @code{omp_default_mem_space}
4640@item @code{omp_large_cap_mem_space} maps to @code{omp_default_mem_space},
4641 unless the memkind library is available
4642@item @code{omp_high_bw_mem_space} maps to @code{omp_default_mem_space},
4643 unless the memkind library is available
4644@end itemize
4645
4646On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
4647library} (@code{libmemkind.so.0}) is available at runtime, it is used when
4648creating memory allocators requesting
4649
4650@itemize
4651@item the memory space @code{omp_high_bw_mem_space}
4652@item the memory space @code{omp_large_cap_mem_space}
4653@item the @code{partition} trait @code{interleaved}; note that for
4654 @code{omp_large_cap_mem_space} the allocation will not be interleaved
4655@end itemize
4656
4657On Linux systems, where the @uref{https://github.com/numactl/numactl, numa
4658library} (@code{libnuma.so.1}) is available at runtime, it is used when creating
4659memory allocators requesting
4660
4661@itemize
4662@item the @code{partition} trait @code{nearest}, except when both the
4663libmemkind library is available and the memory space is either
4664@code{omp_large_cap_mem_space} or @code{omp_high_bw_mem_space}
4665@end itemize
4666
4667Note that the numa library will round up the allocation size to a multiple of
4668the system page size; therefore, consider using it only with large data or
4669by sharing allocations via the @code{pool_size} trait. Furthermore, the Linux
4670kernel does not guarantee that an allocation will always be on the nearest NUMA
4671node nor that after reallocation the same node will be used. Note additionally
4672that, on Linux, the default setting of the memory placement policy is to use the
4673current node; therefore, unless the memory placement policy has been overridden,
4674the @code{partition} trait @code{environment} (the default) will be effectively
4675a @code{nearest} allocation.
4676
4677Additional notes:
4678@itemize
4679@item The @code{pinned} trait is unsupported.
4680@item For the @code{partition} trait, the partition part size will be the same
4681 as the requested size (i.e. @code{interleaved} or @code{blocked} has no
4682 effect), except for @code{interleaved} when the memkind library is
4683 available. Furthermore, for @code{nearest}, unless the numa library
4684 is available, the memory might not be on the same NUMA node as the
4685 thread that allocated the memory; on Linux, this is in particular the
4686 case when the memory placement policy is set to @code{preferred}.
4687@item The @code{access} trait has no effect; memory is always
4688 accessible by all threads.
4689@item The @code{sync_hint} trait has no effect.
4690@end itemize
4691
4692@c ---------------------------------------------------------------------
4693@c Offload-Target Specifics
4694@c ---------------------------------------------------------------------
4695
4696@node Offload-Target Specifics
4697@chapter Offload-Target Specifics
4698
4699The following sections present notes on the offload-target specifics.
4700
4701@menu
4702* AMD Radeon::
4703* nvptx::
4704@end menu
4705
4706@node AMD Radeon
4707@section AMD Radeon (GCN)
4708
4709On the hardware side, there is the hierarchy (fine to coarse):
4710@itemize
4711@item work item (thread)
4712@item wavefront
4713@item work group
4714@item compute unit (CU)
4715@end itemize
4716
4717All OpenMP and OpenACC levels are used, i.e.
4718@itemize
4719@item OpenMP's simd and OpenACC's vector map to work items (threads)
4720@item OpenMP's threads (``parallel'') and OpenACC's workers map
4721 to wavefronts
4722@item OpenMP's teams and OpenACC's gang use a threadpool with the
4723 size of the number of teams or gangs, respectively.
4724@end itemize
4725
4726The used sizes are
4727@itemize
4728@item Number of teams is the specified @code{num_teams} (OpenMP) or
4729 @code{num_gangs} (OpenACC) or otherwise the number of CUs. It is
4730 limited to two times the number of CUs.
4731@item Number of wavefronts is 4 for gfx900 and 16 otherwise;
4732 @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
4733 overrides this if smaller.
4734@item The wavefront has 102 scalars and 64 vectors
4735@item Number of workitems is always 64
4736@item The hardware permits maximally 40 workgroups/CU and
4737 16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
4738@item 80 scalar registers and 24 vector registers in non-kernel functions
4739 (the chosen procedure-calling API).
4740@item For the kernel itself: as many as register pressure demands (number of
4741 teams and number of threads, scaled down if registers are exhausted)
4742@end itemize
4743
4744Implementation remarks:
4745@itemize
4746@item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
4747 using the C library @code{printf} functions and the Fortran
4748 @code{print}/@code{write} statements.
4749@item Reverse offload regions (i.e. @code{target} regions with
4750 @code{device(ancestor:1)}) are processed serially per @code{target} region
4751 such that the next reverse offload region is only executed after the previous
4752 one returned.
4753@item OpenMP code that has a @code{requires} directive with
4754 @code{unified_shared_memory} will remove any GCN device from the list of
4755 available devices (``host fallback'').
4756@item The available stack size can be changed using the @code{GCN_STACK_SIZE}
4757 environment variable; the default is 32 kiB per thread.
4758@end itemize
4759
4760
4761
4762@node nvptx
4763@section nvptx
4764
4765On the hardware side, there is the hierarchy (fine to coarse):
4766@itemize
4767@item thread
4768@item warp
4769@item thread block
4770@item streaming multiprocessor
4771@end itemize
4772
4773All OpenMP and OpenACC levels are used, i.e.
4774@itemize
4775@item OpenMP's simd and OpenACC's vector map to threads
4776@item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
4777@item OpenMP's teams and OpenACC's gang use a threadpool with the
4778 size of the number of teams or gangs, respectively.
4779@end itemize
4780
4781The used sizes are
4782@itemize
4783@item The @code{warp_size} is always 32
4784@item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
4785@item The number of teams is limited by the number of blocks the device can
4786 host simultaneously.
4787@end itemize
4788
4789Additional information can be obtained by setting the environment variable
4790@code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for launch
4791parameters).
4792
4793GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA,
4794which caches the JIT result in the user's directory (see CUDA documentation;
4795can be tuned by the environment variables @code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}).
4796
4797Note: While the PTX ISA is generic, the @code{-mptx=} and @code{-march=}
4798command-line options still affect the generated PTX ISA code and, thus,
4799the requirements on CUDA version and hardware.
4800
4801Implementation remarks:
4802@itemize
4803@item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
4804 using the C library @code{printf} functions. Note that the Fortran
4805 @code{print}/@code{write} statements are not supported, yet.
4806@item Compilation OpenMP code that contains @code{requires reverse_offload}
4807 requires at least @code{-march=sm_35}, compiling for @code{-march=sm_30}
4808 is not supported.
eda38850
TB
4809@item For code containing reverse offload (i.e. @code{target} regions with
4810 @code{device(ancestor:1)}), there is a slight performance penalty
4811 for @emph{all} target regions, consisting mostly of shutdown delay
4812 Per device, reverse offload regions are processed serially such that
4813 the next reverse offload region is only executed after the previous
4814 one returned.
f1af7d65
TB
4815@item OpenMP code that has a @code{requires} directive with
4816 @code{unified_shared_memory} will remove any nvptx device from the
eda38850 4817 list of available devices (``host fallback'').
2cd0689a
TB
4818@item The default per-warp stack size is 128 kiB; see also @code{-msoft-stack}
4819 in the GCC manual.
d77de738
ML
4820@end itemize


@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp.  Only maintainers should need them.

@menu
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
* Implementing OpenACC's PARALLEL construct::
@end menu


@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternately, we generate two copies of the parallel subfunction
and only include this in the version run by the primary thread.
Surely this is not worthwhile though...


@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name, use

@smallexample
  void GOMP_critical_start (void);
  void GOMP_critical_end (void);
@end smallexample

so that we don't get COPY relocations from libgomp to the main
application.

With a specified name, use @code{omp_set_lock} and @code{omp_unset_lock},
with the name being transformed into a variable declared like

@smallexample
  omp_lock_t gomp_critical_user_<name> __attribute__((common))
@end smallexample

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at
startup.



@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that, we could add

@smallexample
  void GOMP_atomic_enter (void)
  void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.


@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.



@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
  void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In @emph{most} cases we can map this directly to @code{__thread}.  Except
that OMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to @code{.ctors}.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.
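
The ``register a ctor to be called per thread'' idea can be sketched with
plain pthreads.  Everything below (the type, the lazy first-touch
construction, the function names) is purely illustrative and is not
libgomp's actual ABI:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical per-thread object that a THREADPRIVATE C++ variable
   with a constructor would require; names are illustrative only.  */
struct tp_state { int counter; };

static pthread_key_t tp_key;
static pthread_once_t tp_once = PTHREAD_ONCE_INIT;

static void tp_dtor (void *p) { free (p); }
static void tp_key_init (void) { pthread_key_create (&tp_key, tp_dtor); }

/* Run the "constructor" lazily on first access from each thread,
   instead of hooking thread creation in the pthreads library.  */
static struct tp_state *
tp_get (void)
{
  pthread_once (&tp_once, tp_key_init);
  struct tp_state *s = pthread_getspecific (tp_key);
  if (s == NULL)
    {
      s = malloc (sizeof *s);
      s->counter = 42;          /* the C++ ctor would run here */
      pthread_setspecific (tp_key, s);
    }
  return s;
}
```

The destructor registered with @code{pthread_key_create} plays the role of
the matching per-thread dtor.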



@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.



@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and "small" structs;
copy in addresses for other @code{TREE_ADDRESSABLE} types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
  body;
@end smallexample

which becomes

@smallexample
@{
  int x = x, y;

  // for stuff

  if (i == n)
    y = y;
@}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.



@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the primary thread iterates over the
array to collect the values.
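
This store-then-collect scheme can be modelled in plain C.  The sketch
below is illustrative only: the team's ``threads'' are simulated
sequentially, and the names and the static split of the iteration space
are assumptions, not libgomp's actual code:

```c
#include <stddef.h>

#define NTHREADS 4

/* The array the private struct would point to, indexed by team_id.  */
static long partial[NTHREADS];

static void
thread_body (int team_id, const long *v, size_t n)
{
  long sum = 0;
  /* Static split of the iteration space, as each thread would see it.  */
  size_t chunk = (n + NTHREADS - 1) / NTHREADS;
  size_t s = team_id * chunk;
  size_t e = s + chunk < n ? s + chunk : n;
  for (size_t i = s; i < e; i++)
    sum += v[i];
  partial[team_id] = sum;        /* store the final value at team_id */
}

static long
reduce_sum (const long *v, size_t n)
{
  for (int t = 0; t < NTHREADS; t++)
    thread_body (t, v, n);       /* ...the barrier would go here... */

  long total = 0;                /* primary thread collects the values */
  for (int t = 0; t < NTHREADS; t++)
    total += partial[t];
  return total;
}
```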


@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.
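
A much-simplified model of this calling sequence, with raw pthreads
standing in for libgomp's team machinery (all names are illustrative,
and the real library reuses docked threads rather than creating them
each time):

```c
#include <pthread.h>

#define NTHREADS 4

/* The "communication struct" discussed under FIRSTPRIVATE et al.  */
struct shared { long partial[NTHREADS]; };
struct arg { struct shared *s; int tid; };

static void *
subfunction (void *p)
{
  struct arg *a = p;
  a->s->partial[a->tid] = a->tid + 1;   /* "body" */
  return 0;
}

static long
run_parallel (void)
{
  struct shared s;
  struct arg args[NTHREADS];
  pthread_t tid[NTHREADS];

  /* GOMP_parallel_start: launch the non-primary team members...  */
  for (int i = 1; i < NTHREADS; i++)
    {
      args[i] = (struct arg) { &s, i };
      pthread_create (&tid[i], 0, subfunction, &args[i]);
    }
  /* ...the primary thread runs the subfunction itself...  */
  args[0] = (struct arg) { &s, 0 };
  subfunction (&args[0]);
  /* GOMP_parallel_end: wait for the team to finish.  */
  for (int i = 1; i < NTHREADS; i++)
    pthread_join (tid[i], 0);

  long total = 0;
  for (int i = 0; i < NTHREADS; i++)
    total += s.partial[i];
  return total;
}
```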



@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
    @{
      long _e1 = _e0, i;
      for (i = _s0; i < _e1; i++)
        body;
    @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        long _e1 = _e0;
        for (i = _s0; i < _e1; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we need not.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations, which would mean that we wouldn't need to call any
of these routines.
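
That per-thread arithmetic can be sketched as follows.  This is an
illustrative division of @var{n} iterations among @var{nthreads} for
SCHEDULE(STATIC) with no chunk size, not libgomp's actual code:

```c
/* Compute the half-open range [*s, *e) of iterations that thread
   "tid" of "nthreads" executes for a loop of "n" iterations: each
   thread gets n/nthreads iterations, and the first n%nthreads
   threads get one extra.  */
static void
static_range (long n, int nthreads, int tid, long *s, long *e)
{
  long q = n / nthreads;         /* base chunk size */
  long r = n % nthreads;         /* leftover iterations */
  *s = tid * q + (tid < r ? tid : r);
  *e = *s + q + (tid < r ? 1 : 0);
}
```

Since every thread can evaluate this locally from @code{n},
@code{nthreads} and its own @code{tid}, no shared work-sharing state is
needed.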

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...



@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample



@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
  GOMP_barrier ();
@end smallexample


@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample



@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

@smallexample
  void GOACC_parallel ()
@end smallexample



@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
``openacc'' or ``openmp'' or both to the keywords field in the bug
report, as appropriate.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye