\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2023 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software.  Copies published by the Free Software Foundation raise
     funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library.  This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page


@node Top, Enabling OpenMP
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library.  This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library.  Support
for OpenACC and offloading (both OpenACC and OpenMP 4's @code{target}
construct) was added later, and the library was renamed the GNU
Offloading and Multi Processing Runtime Library.


@comment
@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.
@comment
@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* OpenMP Implementation Status:: List of implemented features by OpenMP version
* OpenMP Runtime Library Routines: Runtime Library Routines.
                               The OpenMP runtime application programming
                               interface.
* OpenMP Environment Variables: Environment Variables.
                               Influencing OpenMP runtime behavior with
                               environment variables.
* Enabling OpenACC::           How to enable OpenACC for your
                               applications.
* OpenACC Runtime Library Routines:: The OpenACC runtime application
                               programming interface.
* OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
                               environment variables.
* CUDA Streams Usage::         Notes on the implementation of
                               asynchronous operations.
* OpenACC Library Interoperability:: OpenACC library interoperability with the
                               NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* OpenMP-Implementation Specifics:: Notes on specifics of this OpenMP
                               implementation
* Offload-Target Specifics::   Notes on offload-target specific internals
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in the GNU Offloading and
                               Multi Processing Runtime Library.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified.  For C/C++, this enables the
@code{#pragma omp} directive.  For Fortran, it enables the @code{!$omp}
directive in free form; the @code{c$omp}, @code{*$omp} and @code{!$omp}
directives in fixed form; the @code{!$} conditional compilation sentinel
in free form; and the @code{c$}, @code{*$} and @code{!$} sentinels in
fixed form.  The flag also arranges for automatic linking of the OpenMP
runtime library (@ref{Runtime Library Routines}).

A complete description of all OpenMP directives may be found in the
@uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
See also @ref{OpenMP Implementation Status}.


@c ---------------------------------------------------------------------
@c OpenMP Implementation Status
@c ---------------------------------------------------------------------

@node OpenMP Implementation Status
@chapter OpenMP Implementation Status

@menu
* OpenMP 4.5::                 Feature completion status to 4.5 specification
* OpenMP 5.0::                 Feature completion status to 5.0 specification
* OpenMP 5.1::                 Feature completion status to 5.1 specification
* OpenMP 5.2::                 Feature completion status to 5.2 specification
* OpenMP Technical Report 11:: Feature completion status to first 6.0 preview
@end menu

The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
the value @code{201511} (i.e.@: OpenMP 4.5).

@node OpenMP 4.5
@section OpenMP 4.5

The OpenMP 4.5 specification is fully supported.

@node OpenMP 5.0
@section OpenMP 5.0

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Array shaping @tab N @tab
@item Array sections with non-unit strides in C and C++ @tab N @tab
@item Iterators @tab Y @tab
@item @code{metadirective} directive @tab N @tab
@item @code{declare variant} directive
      @tab P @tab @emph{simd} traits not handled correctly
@item @var{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
      env variable @tab Y @tab
@item Nested-parallel changes to @var{max-active-levels-var} ICV @tab Y @tab
@item @code{requires} directive @tab P
      @tab complete but no non-host device provides @code{unified_shared_memory}
@item @code{teams} construct outside an enclosing target region @tab Y @tab
@item Non-rectangular loop nests @tab P
      @tab Full support for C/C++, partial for Fortran
      (@uref{https://gcc.gnu.org/PR110735,PR110735})
@item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
@item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
      constructs @tab Y @tab
@item Collapse of associated loops that are imperfectly nested loops @tab Y @tab
@item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
      @code{simd} construct @tab Y @tab
@item @code{atomic} constructs in @code{simd} @tab Y @tab
@item @code{loop} construct @tab Y @tab
@item @code{order(concurrent)} clause @tab Y @tab
@item @code{scan} directive and @code{in_scan} modifier for the
      @code{reduction} clause @tab Y @tab
@item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
@item @code{in_reduction} clause on @code{target} constructs @tab P
      @tab @code{nowait} only stub
@item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
@item @code{task} modifier to @code{reduction} clause @tab Y @tab
@item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
@item @code{detach} clause to @code{task} construct @tab Y @tab
@item @code{omp_fulfill_event} runtime routine @tab Y @tab
@item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
      and @code{taskloop simd} constructs @tab Y @tab
@item @code{taskloop} construct cancelable by @code{cancel} construct
      @tab Y @tab
@item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
      @tab Y @tab
@item Predefined memory spaces, memory allocators, allocator traits
      @tab Y @tab See also @ref{Memory allocation}
@item Memory management routines @tab Y @tab
@item @code{allocate} directive @tab P @tab Only C, only stack variables
@item @code{allocate} clause @tab P @tab Initial support
@item @code{use_device_addr} clause on @code{target data} @tab Y @tab
@item @code{ancestor} modifier on @code{device} clause @tab Y @tab
@item Implicit declare target directive @tab Y @tab
@item Discontiguous array section with @code{target update} construct
      @tab N @tab
@item C/C++'s lvalue expressions in @code{to}, @code{from}
      and @code{map} clauses @tab N @tab
@item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
@item Nested @code{declare target} directive @tab Y @tab
@item Combined @code{master} constructs @tab Y @tab
@item @code{depend} clause on @code{taskwait} @tab Y @tab
@item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
      @tab Y @tab
@item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
@item @code{depobj} construct and depend objects @tab Y @tab
@item Lock hints were renamed to synchronization hints @tab Y @tab
@item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
@item Map-order clarifications @tab P @tab
@item @code{close} @emph{map-type-modifier} @tab Y @tab
@item Mapping C/C++ pointer variables and assigning the address of
      device memory mapped by an array section @tab P @tab
@item Mapping of Fortran pointer and allocatable variables, including pointer
      and allocatable components of variables
      @tab P @tab Mapping of vars with allocatable components unsupported
@item @code{defaultmap} extensions @tab Y @tab
@item @code{declare mapper} directive @tab N @tab
@item @code{omp_get_supported_active_levels} routine @tab Y @tab
@item Runtime routines and environment variables to display runtime thread
      affinity information @tab Y @tab
@item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
      routines @tab Y @tab
@item @code{omp_get_device_num} runtime routine @tab Y @tab
@item OMPT interface @tab N @tab
@item OMPD interface @tab N @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.0 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Supporting C++'s range-based for loop @tab Y @tab
@end multitable


@node OpenMP 5.1
@section OpenMP 5.1

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item OpenMP directive as C++ attribute specifiers @tab Y @tab
@item @code{omp_all_memory} reserved locator @tab Y @tab
@item @emph{target_device trait} in OpenMP Context @tab N @tab
@item @code{target_device} selector set in context selectors @tab N @tab
@item C/C++'s @code{declare variant} directive: elision support of
      preprocessed code @tab N @tab
@item @code{declare variant}: new clauses @code{adjust_args} and
      @code{append_args} @tab N @tab
@item @code{dispatch} construct @tab N @tab
@item device-specific ICV settings with environment variables @tab Y @tab
@item @code{assume} and @code{assumes} directives @tab Y @tab
@item @code{nothing} directive @tab Y @tab
@item @code{error} directive @tab Y @tab
@item @code{masked} construct @tab Y @tab
@item @code{scope} directive @tab Y @tab
@item Loop transformation constructs @tab N @tab
@item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
      clauses of the @code{taskloop} construct @tab Y @tab
@item @code{align} clause in @code{allocate} directive @tab P
      @tab Only C (and only stack variables)
@item @code{align} modifier in @code{allocate} clause @tab Y @tab
@item @code{thread_limit} clause to @code{target} construct @tab Y @tab
@item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
@item Iterators in @code{target update} motion clauses and @code{map}
      clauses @tab N @tab
@item Indirect calls to the device version of a procedure or function in
      @code{target} regions @tab N @tab
@item @code{interop} directive @tab N @tab
@item @code{omp_interop_t} object support in runtime routines @tab N @tab
@item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
@item Extensions to the @code{atomic} directive @tab Y @tab
@item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
@item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
@item @code{private} and @code{firstprivate} argument to @code{default}
      clause in C and C++ @tab Y @tab
@item @code{present} argument to @code{defaultmap} clause @tab Y @tab
@item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
      @code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
      routines @tab Y @tab
@item @code{omp_target_is_accessible} runtime routine @tab Y @tab
@item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
      runtime routines @tab Y @tab
@item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
@item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
      @code{omp_aligned_calloc} runtime routines @tab Y @tab
@item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
      @code{omp_atv_default} changed @tab Y @tab
@item @code{omp_display_env} runtime routine @tab Y @tab
@item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
@item @code{ompt_sync_region_t} enum additions @tab N @tab
@item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
      and @code{ompt_state_wait_barrier_teams} @tab N @tab
@item @code{ompt_callback_target_data_op_emi_t},
      @code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
      and @code{ompt_callback_target_submit_emi_t} @tab N @tab
@item @code{ompt_callback_error_t} type @tab N @tab
@item @code{OMP_PLACES} syntax extensions @tab Y @tab
@item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
      variables @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.1 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Support of strictly structured blocks in Fortran @tab Y @tab
@item Support of structured block sequences in C/C++ @tab Y @tab
@item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
      clause @tab Y @tab
@item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
@item Pointer predetermined firstprivate getting initialized
      to address of matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
@item For Fortran, diagnose placing declarative before/between @code{USE},
      @code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
@item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
@item @code{indirect} clause in @code{declare target} @tab N @tab
@item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
@item @code{present} modifier to the @code{map}, @code{to} and @code{from}
      clauses @tab Y @tab
@end multitable


@node OpenMP 5.2
@section OpenMP 5.2

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item @code{omp_in_explicit_task} routine and @var{explicit-task-var} ICV
      @tab Y @tab
@item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
      namespaces @tab N/A
      @tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
      sentinel as C/C++ pragma and C++ attributes are warned for with
      @code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
      (enabled by default), respectively; for Fortran free-source code, there is
      a warning enabled by default and, for fixed-source code, the @code{omx}
      sentinel is warned for with @code{-Wsurprising} (enabled by
      @code{-Wall}).  Unknown clauses are always rejected with an error.}
@item Clauses on @code{end} directive can be on directive @tab Y @tab
@item @code{destroy} clause with destroy-var argument on @code{depobj}
      @tab N @tab
@item Deprecation of no-argument @code{destroy} clause on @code{depobj}
      @tab N @tab
@item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
@item Deprecation of minus operator for reductions @tab N @tab
@item Deprecation of separating @code{map} modifiers without comma @tab N @tab
@item @code{declare mapper} with iterator and @code{present} modifiers
      @tab N @tab
@item If a matching mapped list item is not found in the data environment, the
      pointer retains its original value @tab Y @tab
@item New @code{enter} clause as alias for @code{to} on declare target directive
      @tab Y @tab
@item Deprecation of @code{to} clause on declare target directive @tab N @tab
@item Extended list of directives permitted in Fortran pure procedures
      @tab Y @tab
@item New @code{allocators} directive for Fortran @tab N @tab
@item Deprecation of @code{allocate} directive for Fortran
      allocatables/pointers @tab N @tab
@item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
@item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
      @tab N @tab
@item Deprecation of traits array following the allocator_handle expression in
      @code{uses_allocators} @tab N @tab
@item New @code{otherwise} clause as alias for @code{default} on metadirectives
      @tab N @tab
@item Deprecation of @code{default} clause on metadirectives @tab N @tab
@item Deprecation of delimited form of @code{declare target} @tab N @tab
@item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
@item @code{allocate} and @code{firstprivate} clauses on @code{scope}
      @tab Y @tab
@item @code{ompt_callback_work} @tab N @tab
@item Default map-type for the @code{map} clause in @code{target enter/exit data}
      @tab Y @tab
@item New @code{doacross} clause as alias for @code{depend} with
      @code{source}/@code{sink} modifier @tab Y @tab
@item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
      @tab N @tab
@item @code{omp_cur_iteration} keyword @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.2 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item For Fortran, optional comma between directive and clause @tab N @tab
@item Conforming device numbers and @code{omp_initial_device} and
      @code{omp_invalid_device} enum/PARAMETER @tab Y @tab
@item Initial value of @var{default-device-var} ICV with
      @code{OMP_TARGET_OFFLOAD=mandatory} @tab Y @tab
@item @code{all} as @emph{implicit-behavior} for @code{defaultmap} @tab Y @tab
@item @emph{interop_types} in any position of the modifier list for the @code{init} clause
      of the @code{interop} construct @tab N @tab
@end multitable


@node OpenMP Technical Report 11
@section OpenMP Technical Report 11

Technical Report (TR) 11 is the first preview for OpenMP 6.0.

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
      @tab N/A @tab Backward compatibility
@item The @code{decl} attribute was added to the C++ attribute syntax
      @tab Y @tab
@item @code{_ALL} suffix to the device-scope environment variables
      @tab P @tab Host device number wrongly accepted
@item For Fortran, @emph{locator list} can be also function reference with
      data pointer result @tab N @tab
@item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
      @tab N @tab
@item Implicit reduction identifiers of C++ classes
      @tab N @tab
@item Change of the @emph{map-type} property from @emph{ultimate} to
      @emph{default} @tab N @tab
@item Concept of @emph{assumed-size arrays} in C and C++
      @tab N @tab
@item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
      @tab N @tab
@item @code{groupprivate} directive @tab N @tab
@item @code{local} clause to declare target directive @tab N @tab
@item @code{part_size} allocator trait @tab N @tab
@item @code{pin_device}, @code{preferred_device} and @code{target_access}
      allocator traits
      @tab N @tab
@item @code{access} allocator trait changes @tab N @tab
@item Extension of @code{interop} operation of @code{append_args}, allowing all
      modifiers of the @code{init} clause
      @tab N @tab
@item @code{interop} clause to @code{dispatch} @tab N @tab
@item @code{apply} clause to loop-transforming constructs @tab N @tab
@item @code{omp_curr_progress_width} identifier @tab N @tab
@item @code{safesync} clause to the @code{parallel} construct @tab N @tab
@item @code{omp_get_max_progress_width} runtime routine @tab N @tab
@item @code{strict} modifier keyword to @code{num_threads} @tab N @tab
@item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
@item Routines for obtaining memory spaces/allocators for shared/device memory
      @tab N @tab
@item @code{omp_get_memspace_num_resources} routine @tab N @tab
@item @code{omp_get_submemspace} routine @tab N @tab
@item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
@item Extension of @code{OMP_DEFAULT_DEVICE} and new
      @code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
@item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
@end multitable

@unnumberedsubsec Other new TR 11 features
@multitable @columnfractions .60 .10 .25
@item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
@item Mapping lambda captures @tab N @tab
@item For Fortran, atomic compare with storing the comparison result
      @tab N @tab
@end multitable



@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines

The runtime routines described here are defined by Section 18 of the OpenMP
specification in version 5.2.

@menu
* Thread Team Routines::
* Thread Affinity Routines::
* Teams Region Routines::
* Tasking Routines::
@c * Resource Relinquishing Routines::
* Device Information Routines::
* Device Memory Routines::
* Lock Routines::
* Timing Routines::
* Event Routine::
@c * Interoperability Routines::
* Memory Management Routines::
@c * Tool Control Routine::
@c * Environment Display Routine::
@end menu



@node Thread Team Routines
@section Thread Team Routines

Routines controlling threads in the current contention group.
They have C linkage and do not throw exceptions.

@menu
* omp_set_num_threads::         Set upper team size limit
* omp_get_num_threads::         Size of the active team
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_get_dynamic::             Dynamic teams setting
* omp_get_cancellation::        Whether cancellation support is enabled
* omp_set_nested::              Enable/disable nested parallel regions
* omp_get_nested::              Nested parallel regions
* omp_set_schedule::            Set the runtime scheduling method
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_teams_thread_limit::  Maximum number of threads imposed by teams
* omp_get_supported_active_levels:: Maximum number of active regions supported
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_get_max_active_levels::   Current maximum number of active regions
* omp_get_level::               Number of parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_team_size::           Number of threads in a team
* omp_get_active_level::        Number of active parallel regions
@end menu



@node omp_set_num_threads
@subsection @code{omp_set_num_threads} -- Set upper team size limit
@table @asis
@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
regions, if those do not specify a @code{num_threads} clause.  The
argument of @code{omp_set_num_threads} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
@item                   @tab @code{integer, intent(in) :: num_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
@end table


@node omp_get_num_threads
@subsection @code{omp_get_num_threads} -- Size of the active team
@table @asis
@item @emph{Description}:
Returns the number of threads in the current team.  In a sequential section of
the program @code{omp_get_num_threads} returns 1.

The default team size may be initialized at startup by the
@env{OMP_NUM_THREADS} environment variable.  At runtime, the size
of the current team may be set either by the @code{num_threads}
clause or by @code{omp_set_num_threads}.  If none of the above were
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
one thread per CPU online is used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
@end table


@node omp_get_max_threads
@subsection @code{omp_get_max_threads} -- Maximum number of threads of parallel region
@table @asis
@item @emph{Description}:
Returns the maximum number of threads that would be used to form a new
team if a parallel region without a @code{num_threads} clause were
encountered at this point of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
@end table


@node omp_get_thread_num
@subsection @code{omp_get_thread_num} -- Current thread ID
@table @asis
@item @emph{Description}:
Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
always returns 0.  In parallel regions the return value varies
from 0 to @code{omp_get_num_threads}-1 inclusive.  The return
value of the primary thread of a team is always 0.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
@end table


@node omp_in_parallel
@subsection @code{omp_in_parallel} -- Whether a parallel region is active
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in parallel,
@code{false} otherwise.  Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
@end table
697
698
@node omp_set_dynamic
@subsection @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
@item @tab @code{logical, intent(in) :: dynamic_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
@end table


@node omp_get_dynamic
@subsection @code{omp_get_dynamic} -- Dynamic teams setting
@table @asis
@item @emph{Description}:
This function returns @code{true} if enabled, @code{false} otherwise.
Here, @code{true} and @code{false} represent their language-specific
counterparts.

The dynamic team setting may be initialized at startup by the
@env{OMP_DYNAMIC} environment variable or at runtime using
@code{omp_set_dynamic}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
@end multitable

@item @emph{See also}:
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
@end table


@node omp_get_cancellation
@subsection @code{omp_get_cancellation} -- Whether cancellation support is enabled
@table @asis
@item @emph{Description}:
This function returns @code{true} if cancellation is activated, @code{false}
otherwise. Here, @code{true} and @code{false} represent their language-specific
counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
deactivated.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@end multitable

@item @emph{See also}:
@ref{OMP_CANCELLATION}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@end table


@node omp_set_nested
@subsection @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.

Enabling nested parallel regions will also set the maximum number of
active nested regions to the maximum supported. Disabling nested parallel
regions will set the maximum number of active nested regions to one.

Note that the @code{omp_set_nested} API routine was deprecated
in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
@item @tab @code{logical, intent(in) :: nested}
@end multitable

@item @emph{See also}:
@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
@end table


@node omp_get_nested
@subsection @code{omp_get_nested} -- Nested parallel regions
@table @asis
@item @emph{Description}:
This function returns @code{true} if nested parallel regions are
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
represent their language-specific counterparts.

The state of nested parallel regions at startup depends on several
environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
and is set to greater than one, then nested parallel regions will be
enabled. If not defined, then the value of the @env{OMP_NESTED}
environment variable will be followed if defined. If neither are
defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
are defined with a list of more than one value, then nested parallel
regions are enabled. If none of these are defined, then nested parallel
regions are disabled by default.

Nested parallel regions can be enabled or disabled at runtime using
@code{omp_set_nested}, or by setting the maximum number of nested
regions with @code{omp_set_max_active_levels} to one to disable, or
above one to enable.

Note that the @code{omp_get_nested} API routine was deprecated
in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_set_nested},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
@end table


@node omp_set_schedule
@subsection @code{omp_set_schedule} -- Set the runtime scheduling method
@table @asis
@item @emph{Description}:
Sets the runtime scheduling method. The @var{kind} argument can have the
value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
@code{omp_sched_auto}, the chunk size is set to the value of
@var{chunk_size} if positive, or to the default value if zero or negative.
For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_get_schedule}
@ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
@end table


@node omp_get_schedule
@subsection @code{omp_get_schedule} -- Obtain the runtime scheduling method
@table @asis
@item @emph{Description}:
Obtain the runtime scheduling method. The @var{kind} argument will be
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
@var{chunk_size}, is set to the chunk size.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
@end table


@node omp_get_teams_thread_limit
@subsection @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
@table @asis
@item @emph{Description}:
Return the maximum number of threads that will be able to participate in
each team created by a teams construct.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
@end table


@node omp_get_supported_active_levels
@subsection @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
@table @asis
@item @emph{Description}:
This function returns the maximum number of nested, active parallel regions
supported by this implementation.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
@end table


@node omp_set_max_active_levels
@subsection @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
This function limits the maximum allowed number of nested, active
parallel regions. @var{max_levels} must be less than or equal to
the value returned by @code{omp_get_supported_active_levels}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
@item @tab @code{integer max_levels}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
@ref{omp_get_supported_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
@end table


@node omp_get_max_active_levels
@subsection @code{omp_get_max_active_levels} -- Current maximum number of active regions
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed number of nested, active parallel regions.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
@end table


@node omp_get_level
@subsection @code{omp_get_level} -- Obtain the current nesting level
@table @asis
@item @emph{Description}:
This function returns the nesting level of the parallel blocks
enclosing the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
@end table


@node omp_get_ancestor_thread_num
@subsection @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number for the given
nesting level of the current thread. For values of @var{level} outside
the range from zero to @code{omp_get_level}, -1 is returned; if
@var{level} is @code{omp_get_level}, the result is identical
to @code{omp_get_thread_num}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
@end table


@node omp_get_team_size
@subsection @code{omp_get_team_size} -- Number of threads in a team
@table @asis
@item @emph{Description}:
This function returns the number of threads in a thread team to which
either the current thread or its ancestor belongs. For values of
@var{level} outside the range from zero to @code{omp_get_level}, -1 is
returned; if @var{level} is zero, 1 is returned, and if @var{level} is
@code{omp_get_level}, the result is identical
to @code{omp_get_num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
@end table


@node omp_get_active_level
@subsection @code{omp_get_active_level} -- Number of parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel blocks
enclosing the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
@end table


@node Thread Affinity Routines
@section Thread Affinity Routines

Routines controlling and accessing thread-affinity policies.
They have C linkage and do not throw exceptions.

@menu
* omp_get_proc_bind:: Whether threads may be moved between CPUs
@c * omp_get_num_places:: <fixme>
@c * omp_get_place_num_procs:: <fixme>
@c * omp_get_place_proc_ids:: <fixme>
@c * omp_get_place_num:: <fixme>
@c * omp_get_partition_num_places:: <fixme>
@c * omp_get_partition_place_nums:: <fixme>
@c * omp_set_affinity_format:: <fixme>
@c * omp_get_affinity_format:: <fixme>
@c * omp_display_affinity:: <fixme>
@c * omp_capture_affinity:: <fixme>
@end menu



@node omp_get_proc_bind
@subsection @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
@table @asis
@item @emph{Description}:
This function returns the currently active thread affinity policy, which is
set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
@code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
@code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
@end multitable

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
@end table


@node Teams Region Routines
@section Teams Region Routines

Routines controlling the league of teams that are executed in a @code{teams}
region. They have C linkage and do not throw exceptions.

@menu
* omp_get_num_teams:: Number of teams
* omp_get_team_num:: Get team number
* omp_set_num_teams:: Set upper teams limit for teams region
* omp_get_max_teams:: Maximum number of teams for teams region
* omp_set_teams_thread_limit:: Set upper thread limit for teams construct
* omp_get_thread_limit:: Maximum number of threads
@end menu


@node omp_get_num_teams
@subsection @code{omp_get_num_teams} -- Number of teams
@table @asis
@item @emph{Description}:
Returns the number of teams in the current teams region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
@end table


@node omp_get_team_num
@subsection @code{omp_get_team_num} -- Get team number
@table @asis
@item @emph{Description}:
Returns the team number of the calling thread.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
@end table


@node omp_set_num_teams
@subsection @code{omp_set_num_teams} -- Set upper teams limit for teams construct
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of teams created by the teams
construct which does not specify a @code{num_teams} clause. The
argument of @code{omp_set_num_teams} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
@item @tab @code{integer, intent(in) :: num_teams}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
@end table


@node omp_get_max_teams
@subsection @code{omp_get_max_teams} -- Maximum number of teams of teams region
@table @asis
@item @emph{Description}:
Return the maximum number of teams used for the teams region
that does not use the clause @code{num_teams}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_teams}, @ref{omp_get_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
@end table


@node omp_set_teams_thread_limit
@subsection @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of threads that will be available
for each team created by the teams construct which does not specify a
@code{thread_limit} clause. The argument of
@code{omp_set_teams_thread_limit} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
@item @tab @code{integer, intent(in) :: thread_limit}
@end multitable

@item @emph{See also}:
@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
@end table


@node omp_get_thread_limit
@subsection @code{omp_get_thread_limit} -- Maximum number of threads
@table @asis
@item @emph{Description}:
Return the maximum number of threads of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
@end table


@node Tasking Routines
@section Tasking Routines

Routines relating to explicit tasks.
They have C linkage and do not throw exceptions.

@menu
* omp_get_max_task_priority:: Maximum task priority value that can be set
* omp_in_explicit_task:: Whether a given task is an explicit task
* omp_in_final:: Whether in final or included task region
@end menu


@node omp_get_max_task_priority
@subsection @code{omp_get_max_task_priority} -- Maximum priority value
that can be set for tasks.
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed priority number for tasks.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table


@node omp_in_explicit_task
@subsection @code{omp_in_explicit_task} -- Whether a given task is an explicit task
@table @asis
@item @emph{Description}:
The function returns the @var{explicit-task-var} ICV; it returns true when the
encountering task was generated by a task-generating construct such as
@code{target}, @code{task} or @code{taskloop}. Otherwise, the encountering task
is in an implicit task region such as generated by the implicit or explicit
@code{parallel} region and @code{omp_in_explicit_task} returns false.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_explicit_task(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_explicit_task()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.2}, Section 18.5.2.
@end table


@node omp_in_final
@subsection @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
or included task region, @code{false} otherwise. Here, @code{true}
and @code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
@end table


@c @node Resource Relinquishing Routines
@c @section Resource Relinquishing Routines
@c
@c Routines releasing resources used by the OpenMP runtime.
@c They have C linkage and do not throw exceptions.
@c
@c @menu
@c * omp_pause_resource:: <fixme>
@c * omp_pause_resource_all:: <fixme>
@c @end menu

@node Device Information Routines
@section Device Information Routines

Routines related to devices available to an OpenMP program.
They have C linkage and do not throw exceptions.

@menu
* omp_get_num_procs:: Number of processors online
@c * omp_get_max_progress_width:: <fixme>/TR11
* omp_set_default_device:: Set the default device for target regions
* omp_get_default_device:: Get the default device for target regions
* omp_get_num_devices:: Number of target devices
* omp_get_device_num:: Get device that current thread is running on
* omp_is_initial_device:: Whether executing on the host device
* omp_get_initial_device:: Device number of host device
@end menu


@node omp_get_num_procs
@subsection @code{omp_get_num_procs} -- Number of processors online
@table @asis
@item @emph{Description}:
Returns the number of processors online on that device.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
@end table


@node omp_set_default_device
@subsection @code{omp_set_default_device} -- Set the default device for target regions
@table @asis
@item @emph{Description}:
Set the default device for target regions without device clause. The argument
shall be a nonnegative device number.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
@item @tab @code{integer device_num}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table


@node omp_get_default_device
@subsection @code{omp_get_default_device} -- Get the default device for target regions
@table @asis
@item @emph{Description}:
Get the default device for target regions without a device clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
@end table


@node omp_get_num_devices
@subsection @code{omp_get_num_devices} -- Number of target devices
@table @asis
@item @emph{Description}:
Returns the number of target devices.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
@end table


@node omp_get_device_num
@subsection @code{omp_get_device_num} -- Return device number of current device
@table @asis
@item @emph{Description}:
This function returns a device number that represents the device that the
current thread is executing on. For OpenMP 5.0, this must be equal to the
value returned by the @code{omp_get_initial_device} function when called
from the host.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_initial_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
@end table


@node omp_is_initial_device
@subsection @code{omp_is_initial_device} -- Whether executing on the host device
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running on the host device,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
@end table


@node omp_get_initial_device
@subsection @code{omp_get_initial_device} -- Return device number of initial device
@table @asis
@item @emph{Description}:
This function returns a device number that represents the host device.
For OpenMP 5.1, this must be equal to the value returned by the
@code{omp_get_num_devices} function.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_devices}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
@end table


@node Device Memory Routines
@section Device Memory Routines

Routines related to memory allocation and managing corresponding
pointers on devices. They have C linkage and do not throw exceptions.

@menu
* omp_target_alloc:: Allocate device memory
* omp_target_free:: Free device memory
* omp_target_is_present:: Check whether storage is mapped
@c * omp_target_is_accessible:: <fixme>
@c * omp_target_memcpy:: <fixme>
@c * omp_target_memcpy_rect:: <fixme>
@c * omp_target_memcpy_async:: <fixme>
@c * omp_target_memcpy_rect_async:: <fixme>
@c * omp_target_memset:: <fixme>/TR12
@c * omp_target_memset_async:: <fixme>/TR12
* omp_target_associate_ptr:: Associate a device pointer with a host pointer
* omp_target_disassociate_ptr:: Remove device--host pointer association
* omp_get_mapped_ptr:: Return device pointer to a host pointer
@end menu


@node omp_target_alloc
@subsection @code{omp_target_alloc} -- Allocate device memory
@table @asis
@item @emph{Description}:
This routine allocates @var{size} bytes of memory in the device environment
associated with the device number @var{device_num}. If successful, a device
pointer is returned, otherwise a null pointer.

In GCC, when the device is the host or the device shares memory with the host,
the memory is allocated on the host; in that case, when @var{size} is zero,
either a null pointer or a unique pointer value that can later be successfully
passed to @code{omp_target_free} is returned. When the allocation is not
performed on the host, a null pointer is returned when @var{size} is zero; in
that case, a diagnostic might additionally be printed to standard error
(@code{stderr}).

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *omp_target_alloc(size_t size, int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{type(c_ptr) function omp_target_alloc(size, device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
@item @tab @code{integer(c_size_t), value :: size}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_free}, @ref{omp_target_associate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.1
@end table


@node omp_target_free
@subsection @code{omp_target_free} -- Free device memory
@table @asis
@item @emph{Description}:
This routine frees memory allocated by the @code{omp_target_alloc} routine.
The @var{device_ptr} argument must be either a null pointer or a device pointer
returned by @code{omp_target_alloc} for the specified @var{device_num}. The
device number @var{device_num} must be a conforming device number.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_target_free(void *device_ptr, int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_target_free(device_ptr, device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
@item @tab @code{type(c_ptr), value :: device_ptr}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_alloc}, @ref{omp_target_disassociate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.2
@end table


@node omp_target_is_present
@subsection @code{omp_target_is_present} -- Check whether storage is mapped
@table @asis
@item @emph{Description}:
This routine tests whether storage, identified by the host pointer @var{ptr},
is mapped to the device specified by @var{device_num}. If so, it returns
@emph{true} and otherwise @emph{false}.

In GCC, this includes self mapping such that @code{omp_target_is_present}
returns @emph{true} when @var{device_num} specifies the host or when the host
and the device share memory. If @var{ptr} is a null pointer, @emph{true} is
returned and if @var{device_num} is an invalid device number, @emph{false} is
returned.

If those conditions do not apply, @emph{true} is returned if the association
has been established by an explicit or implicit @code{map} clause, the
@code{declare target} directive or a call to the @code{omp_target_associate_ptr}
routine.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_target_is_present(const void *ptr,}
@item @tab @code{                          int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(c_int) function omp_target_is_present(ptr, &}
@item @tab @code{    device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
@item @tab @code{type(c_ptr), value :: ptr}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_associate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.3
@end table


@node omp_target_associate_ptr
@subsection @code{omp_target_associate_ptr} -- Associate a device pointer with a host pointer
@table @asis
@item @emph{Description}:
This routine associates storage on the host with storage on a device identified
by @var{device_num}. The device pointer is usually obtained by calling
@code{omp_target_alloc} or by other means (but not by using the @code{map}
clauses or the @code{declare target} directive). The host pointer should point
to memory that has a storage size of at least @var{size}.

The @var{device_offset} parameter specifies the offset into @var{device_ptr}
that is used as the base address for the device side of the mapping; the
storage size should be at least @var{device_offset} plus @var{size}.

After the association, the host pointer can be used in a @code{map} clause and
in the @code{to} and @code{from} clauses of the @code{target update} directive
to transfer data between the associated pointers. The reference count of such
associated storage is infinite. The association can be removed by calling
@code{omp_target_disassociate_ptr}, which should be done before the lifetime
of either storage ends.

The routine returns nonzero (@code{EINVAL}) when @var{device_num} is invalid
or when it denotes the initial device or a device that shares memory with the
host. @code{omp_target_associate_ptr} returns zero if @var{host_ptr} points
into already associated storage that is fully inside of a previously associated
memory region. Otherwise, if the association was successful, zero is returned;
if none of the cases above apply, nonzero (@code{EINVAL}) is returned.

The @code{omp_target_is_present} routine can be used to test whether
associated storage for a device pointer exists.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_target_associate_ptr(const void *host_ptr,}
@item @tab @code{                             const void *device_ptr,}
@item @tab @code{                             size_t size,}
@item @tab @code{                             size_t device_offset,}
@item @tab @code{                             int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(c_int) function omp_target_associate_ptr(host_ptr, &}
@item @tab @code{    device_ptr, size, device_offset, device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
@item @tab @code{type(c_ptr), value :: host_ptr, device_ptr}
@item @tab @code{integer(c_size_t), value :: size, device_offset}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_disassociate_ptr}, @ref{omp_target_is_present},
@ref{omp_target_alloc}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.9
@end table


@node omp_target_disassociate_ptr
@subsection @code{omp_target_disassociate_ptr} -- Remove device--host pointer association
@table @asis
@item @emph{Description}:
This routine removes the storage association established by calling
@code{omp_target_associate_ptr} and sets the reference count to zero,
even if @code{omp_target_associate_ptr} was invoked multiple times for
the host pointer @var{ptr}. If applicable, the device memory needs
to be freed by the user.

If an associated device storage location for @var{device_num} was
found and its reference count is infinite, the association is removed and
zero is returned. In all other cases, nonzero (@code{EINVAL}) is returned
and no other action is taken.

Note that passing a host pointer where the association to the device pointer
was established with the @code{declare target} directive yields undefined
behavior.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_target_disassociate_ptr(const void *ptr,}
@item @tab @code{                                int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(c_int) function omp_target_disassociate_ptr(ptr, &}
@item @tab @code{    device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
@item @tab @code{type(c_ptr), value :: ptr}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_associate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.10
@end table


@node omp_get_mapped_ptr
@subsection @code{omp_get_mapped_ptr} -- Return device pointer to a host pointer
@table @asis
@item @emph{Description}:
If the device number refers to the initial device or to a device with
memory accessible from the host (shared memory), the @code{omp_get_mapped_ptr}
routine returns the value of the passed @var{ptr}. Otherwise, if associated
storage to the passed host pointer @var{ptr} exists on the device associated
with @var{device_num}, it returns that pointer. In all other cases and in
cases of an error, a null pointer is returned.

The association of a storage location is established either via an explicit
or implicit @code{map} clause, the @code{declare target} directive or the
@code{omp_target_associate_ptr} routine.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *omp_get_mapped_ptr(const void *ptr, int device_num);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{type(c_ptr) function omp_get_mapped_ptr(ptr, device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
@item @tab @code{type(c_ptr), value :: ptr}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_associate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.11
@end table


@node Lock Routines
@section Lock Routines

Initialize, set, test, unset and destroy simple and nested locks.
The routines have C linkage and do not throw exceptions.

@menu
* omp_init_lock:: Initialize simple lock
* omp_init_nest_lock:: Initialize nested lock
@c * omp_init_lock_with_hint:: <fixme>
@c * omp_init_nest_lock_with_hint:: <fixme>
* omp_destroy_lock:: Destroy simple lock
* omp_destroy_nest_lock:: Destroy nested lock
* omp_set_lock:: Wait for and set simple lock
* omp_set_nest_lock:: Wait for and set nested lock
* omp_unset_lock:: Unset simple lock
* omp_unset_nest_lock:: Unset nested lock
* omp_test_lock:: Test and set simple lock if available
* omp_test_nest_lock:: Test and set nested lock if available
@end menu


@node omp_init_lock
@subsection @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock. After initialization, the lock is in
an unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table


@node omp_init_nest_lock
@subsection @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock. After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table


@node omp_destroy_lock
@subsection @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock. In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table


@node omp_destroy_nest_lock
@subsection @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock. In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table


@node omp_set_lock
@subsection @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread,
a deadlock occurs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table


@node omp_set_nest_lock
@subsection @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table


@node omp_unset_lock
@subsection @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
or more threads are waiting to set the lock, one of them is chosen to
acquire it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table


@node omp_unset_nest_lock
@subsection @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero,
the lock becomes unlocked. If one or more threads are waiting to set the lock,
one of them is chosen to acquire it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table


@node omp_test_lock
@subsection @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available. This function returns
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table


@node omp_test_nest_lock
@subsection @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is already held by the current thread, the new nesting count
is returned. Otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table


@node Timing Routines
@section Timing Routines

Portable, thread-based, wall clock timer.
The routines have C linkage and do not throw exceptions.

@menu
* omp_get_wtick:: Get timer precision.
* omp_get_wtime:: Elapsed wall clock time.
@end menu


@node omp_get_wtick
@subsection @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
@end table


2303@node omp_get_wtime
506f068e 2304@subsection @code{omp_get_wtime} -- Elapsed wall clock time
d77de738
ML
2305@table @asis
2306@item @emph{Description}:
2307Elapsed wall clock time in seconds. The time is measured per thread, no
2308guarantee can be made that two distinct threads measure the same time.
2309Time is measured from some "time in the past", which is an arbitrary time
2310guaranteed not to change during the execution of the program.
2311
2312@item @emph{C/C++}:
2313@multitable @columnfractions .20 .80
2314@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
2315@end multitable
2316
2317@item @emph{Fortran}:
2318@multitable @columnfractions .20 .80
2319@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
2320@end multitable
2321
2322@item @emph{See also}:
2323@ref{omp_get_wtick}
2324
2325@item @emph{Reference}:
2326@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
2327@end table
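
As an illustration of how the two routines combine, here is a minimal C
sketch; it is not part of the manual, the helper name and the busy loop are
invented for illustration, and the @code{_OPENMP} guard is only there so the
sketch also builds without @option{-fopenmp}:

```c
#include <omp.h>
#include <stdio.h>

/* Minimal sketch (hypothetical helper, not part of libgomp): time a
   small computation with omp_get_wtime and print the timer resolution
   reported by omp_get_wtick.  */
double
time_busy_loop (void)
{
#ifdef _OPENMP
  double start = omp_get_wtime ();

  volatile double sum = 0.0;  /* volatile: keep the loop from being optimized away */
  for (int i = 0; i < 1000000; i++)
    sum += i * 0.5;

  double elapsed = omp_get_wtime () - start;
  printf ("elapsed: %g s (timer resolution: %g s)\n",
          elapsed, omp_get_wtick ());
  return elapsed;
#else
  return 0.0;  /* Nothing to measure without OpenMP support.  */
#endif
}
```

Because the timer is thread-based, the start and end measurements should be
taken on the same thread, as done here.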


@node Event Routine
@section Event Routine

Support for event objects.
The routine has C linkage and does not throw exceptions.

@menu
* omp_fulfill_event:: Fulfill and destroy an OpenMP event.
@end menu



@node omp_fulfill_event
@subsection @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
@table @asis
@item @emph{Description}:
Fulfill the event associated with the event handle argument. Currently,
it is only used to fulfill events generated by detach clauses on task
constructs; the effect of fulfilling the event is to allow the task to
complete.

The result of calling @code{omp_fulfill_event} with an event handle other
than that generated by a detach clause is undefined. Calling it with an
event handle that has already been fulfilled is also undefined.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
@item @tab @code{integer (kind=omp_event_handle_kind) :: event}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
@end table
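
The detach pattern can be sketched as follows. This is a hypothetical
example (the helper name is invented), not libgomp code; it requires a
compiler with OpenMP 5.0 @code{detach} support, and the @code{_OPENMP}
guard is only there so the sketch also builds without @option{-fopenmp}:

```c
#include <omp.h>

/* Minimal sketch (hypothetical helper, not part of libgomp): create a
   detachable task, then fulfill its event so the task can complete and
   the taskwait can return.  */
int
run_detached_task (void)
{
  int done = 0;
#ifdef _OPENMP
  omp_event_handle_t ev;

  #pragma omp parallel
  #pragma omp single
  {
    #pragma omp task detach (ev)
    done = 1;

    /* Without this call, the taskwait below would block forever,
       because the detached task only completes once its event has
       been fulfilled.  */
    omp_fulfill_event (ev);

    #pragma omp taskwait
  }
#else
  done = 1;  /* Serial fallback when OpenMP is disabled.  */
#endif
  return done;
}
```

In real code the fulfilling call typically happens on another thread, e.g.
from an asynchronous callback, once the work the task represents has
actually finished.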


@c @node Interoperability Routines
@c @section Interoperability Routines
@c
@c Routines to obtain properties from an @code{omp_interop_t} object.
@c They have C linkage and do not throw exceptions.
@c
@c @menu
@c * omp_get_num_interop_properties:: <fixme>
@c * omp_get_interop_int:: <fixme>
@c * omp_get_interop_ptr:: <fixme>
@c * omp_get_interop_str:: <fixme>
@c * omp_get_interop_name:: <fixme>
@c * omp_get_interop_type_desc:: <fixme>
@c * omp_get_interop_rc_desc:: <fixme>
@c @end menu

@node Memory Management Routines
@section Memory Management Routines

Routines to manage and allocate memory on the current device.
They have C linkage and do not throw exceptions.

@menu
* omp_init_allocator:: Create an allocator
* omp_destroy_allocator:: Destroy an allocator
* omp_set_default_allocator:: Set the default allocator
* omp_get_default_allocator:: Get the default allocator
@c * omp_alloc:: <fixme>
@c * omp_aligned_alloc:: <fixme>
@c * omp_free:: <fixme>
@c * omp_calloc:: <fixme>
@c * omp_aligned_calloc:: <fixme>
@c * omp_realloc:: <fixme>
@c * omp_get_memspace_num_resources:: <fixme>/TR11
@c * omp_get_submemspace:: <fixme>/TR11
@end menu



@node omp_init_allocator
@subsection @code{omp_init_allocator} -- Create an allocator
@table @asis
@item @emph{Description}:
Create an allocator that uses the specified memory space and has the
specified traits; if an allocator that fulfills the requirements cannot be
created, @code{omp_null_allocator} is returned.

The predefined memory spaces and available traits can be found at
@ref{OMP_ALLOCATOR}, where the trait names have to be prefixed by
@code{omp_atk_} (e.g. @code{omp_atk_pinned}) and the named trait values by
@code{omp_atv_} (e.g. @code{omp_atv_true}); additionally,
@code{omp_atv_default} may be used as trait value to specify that the
default value should be used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_init_allocator(}
@item @tab @code{ omp_memspace_handle_t memspace,}
@item @tab @code{ int ntraits,}
@item @tab @code{ const omp_alloctrait_t traits[]);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function omp_init_allocator(memspace, ntraits, traits)}
@item @tab @code{integer (kind=omp_allocator_handle_kind) :: omp_init_allocator}
@item @tab @code{integer (kind=omp_memspace_handle_kind), intent(in) :: memspace}
@item @tab @code{integer, intent(in) :: ntraits}
@item @tab @code{type (omp_alloctrait), intent(in) :: traits(*)}
@end multitable

@item @emph{See also}:
@ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_destroy_allocator}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.2
@end table
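
A minimal usage sketch, assuming OpenMP 5.0 allocator support; the helper
name and the trait choices are invented for illustration, and the
@code{_OPENMP} guard is only there so the sketch also builds without
@option{-fopenmp}:

```c
#include <omp.h>
#include <stdint.h>

/* Minimal sketch (hypothetical helper, not part of libgomp): create a
   64-byte-aligned allocator in the default memory space, allocate and
   free with it, then destroy it.  Returns nonzero on success.  */
int
use_custom_allocator (void)
{
#ifdef _OPENMP
  omp_alloctrait_t traits[] = {
    { omp_atk_alignment, 64 },
    { omp_atk_fallback, omp_atv_null_fb }  /* return NULL instead of aborting */
  };
  omp_allocator_handle_t a
    = omp_init_allocator (omp_default_mem_space, 2, traits);
  if (a == omp_null_allocator)
    return 0;

  double *p = (double *) omp_alloc (128 * sizeof (double), a);
  int ok = p != 0 && (uintptr_t) p % 64 == 0;  /* check the alignment trait */
  omp_free (p, a);
  omp_destroy_allocator (a);
  return ok;
#else
  return 1;  /* Nothing to check without OpenMP support.  */
#endif
}
```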


@node omp_destroy_allocator
@subsection @code{omp_destroy_allocator} -- Destroy an allocator
@table @asis
@item @emph{Description}:
Releases all resources used by a memory allocator, which must not represent
a predefined memory allocator. Accessing memory after its allocator has been
destroyed has unspecified behavior. Passing @code{omp_null_allocator} to the
routine is permitted but has no effect.


@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_allocator (omp_allocator_handle_t allocator);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_allocator(allocator)}
@item @tab @code{integer (kind=omp_allocator_handle_kind), intent(in) :: allocator}
@end multitable

@item @emph{See also}:
@ref{omp_init_allocator}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.3
@end table



@node omp_set_default_allocator
@subsection @code{omp_set_default_allocator} -- Set the default allocator
@table @asis
@item @emph{Description}:
Sets the default allocator that is used when no allocator has been specified
in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
routine is invoked with the @code{omp_null_allocator} allocator.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_default_allocator(omp_allocator_handle_t allocator);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_default_allocator(allocator)}
@item @tab @code{integer (kind=omp_allocator_handle_kind), intent(in) :: allocator}
@end multitable

@item @emph{See also}:
@ref{omp_get_default_allocator}, @ref{omp_init_allocator}, @ref{OMP_ALLOCATOR},
@ref{Memory allocation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.4
@end table



@node omp_get_default_allocator
@subsection @code{omp_get_default_allocator} -- Get the default allocator
@table @asis
@item @emph{Description}:
The routine returns the default allocator that is used when no allocator has
been specified in the @code{allocate} or @code{allocator} clause or if an
OpenMP memory routine is invoked with the @code{omp_null_allocator} allocator.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_get_default_allocator(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function omp_get_default_allocator()}
@item @tab @code{integer (kind=omp_allocator_handle_kind) :: omp_get_default_allocator}
@end multitable

@item @emph{See also}:
@ref{omp_set_default_allocator}, @ref{OMP_ALLOCATOR}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.5
@end table
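
The two routines are typically used together to switch the default
temporarily. A minimal sketch (the helper name is invented for
illustration; the @code{_OPENMP} guard is only there so the sketch also
builds without @option{-fopenmp}):

```c
#include <omp.h>

/* Minimal sketch (hypothetical helper, not part of libgomp):
   temporarily make omp_low_lat_mem_alloc the default allocator, so
   that allocations passing omp_null_allocator use it, then restore
   the previous default.  */
int
with_low_lat_default (void)
{
#ifdef _OPENMP
  omp_allocator_handle_t prev = omp_get_default_allocator ();
  omp_set_default_allocator (omp_low_lat_mem_alloc);

  /* omp_null_allocator now resolves to the new default.  */
  void *p = omp_alloc (64, omp_null_allocator);
  int ok = p != 0;
  omp_free (p, omp_null_allocator);

  omp_set_default_allocator (prev);
  return ok;
#else
  return 1;  /* Nothing to check without OpenMP support.  */
#endif
}
```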


@c @node Tool Control Routine
@c
@c FIXME

@c @node Environment Display Routine
@c @section Environment Display Routine
@c
@c Routine to display the OpenMP version number and the initial value of ICVs.
@c It has C linkage and does not throw exceptions.
@c
@c menu
@c * omp_display_env:: <fixme>
@c end menu


@c ---------------------------------------------------------------------
@c OpenMP Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter OpenMP Environment Variables

The environment variables that begin with @env{OMP_} are defined by
section 4 of the OpenMP specification in version 4.5 or in a later version
of the specification, while those beginning with @env{GOMP_} are GNU
extensions. Most @env{OMP_} environment variables have an associated
internal control variable (ICV).

For any OpenMP environment variable that sets an ICV and is neither
@code{OMP_DEFAULT_DEVICE} nor has global ICV scope, associated
device-specific environment variables exist. For them, the environment
variable without suffix affects the host. The suffix @code{_DEV_} followed
by a non-negative device number less than the number of available devices
sets the ICV for the corresponding device. The suffix @code{_DEV} sets the
ICV of all non-host devices for which a device-specific corresponding
environment variable has not been set, while the @code{_ALL} suffix sets the
ICV of all host and non-host devices for which a more specific corresponding
environment variable is not set.

@menu
* OMP_ALLOCATOR:: Set the default allocator
* OMP_AFFINITY_FORMAT:: Set the format string used for affinity display
* OMP_CANCELLATION:: Set whether cancellation is activated
* OMP_DISPLAY_AFFINITY:: Display thread affinity information
* OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
* OMP_DEFAULT_DEVICE:: Set the device used in target regions
* OMP_DYNAMIC:: Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
* OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
* OMP_NESTED:: Nested parallel regions
* OMP_NUM_TEAMS:: Specifies the number of teams to use by teams region
* OMP_NUM_THREADS:: Specifies the number of threads to use
* OMP_PROC_BIND:: Whether threads may be moved between CPUs
* OMP_PLACES:: Specifies on which CPUs the threads should be placed
* OMP_STACKSIZE:: Set default thread stack size
* OMP_SCHEDULE:: How threads are scheduled
* OMP_TARGET_OFFLOAD:: Controls offloading behaviour
* OMP_TEAMS_THREAD_LIMIT:: Set the maximum number of threads imposed by teams
* OMP_THREAD_LIMIT:: Set the maximum number of threads
* OMP_WAIT_POLICY:: How waiting threads are handled
* GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
* GOMP_DEBUG:: Enable debugging output
* GOMP_STACKSIZE:: Set default thread stack size
* GOMP_SPINCOUNT:: Set the busy-wait spin count
* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
@end menu


@node OMP_ALLOCATOR
@section @env{OMP_ALLOCATOR} -- Set the default allocator
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{def-allocator-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Sets the default allocator that is used when no allocator has been specified
in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
routine is invoked with the @code{omp_null_allocator} allocator.
If unset, @code{omp_default_mem_alloc} is used.

The value can either be a predefined allocator or a predefined memory space
or a predefined memory space followed by a colon and a comma-separated list
of memory trait and value pairs, separated by @code{=}.

Note: The corresponding device environment variables are currently not
supported. Therefore, the non-host @var{def-allocator-var} ICVs are always
initialized to @code{omp_default_mem_alloc}. However, on all devices,
the @code{omp_set_default_allocator} API routine can be used to change the
value.

@multitable @columnfractions .45 .45
@headitem Predefined allocators @tab Associated predefined memory spaces
@item omp_default_mem_alloc @tab omp_default_mem_space
@item omp_large_cap_mem_alloc @tab omp_large_cap_mem_space
@item omp_const_mem_alloc @tab omp_const_mem_space
@item omp_high_bw_mem_alloc @tab omp_high_bw_mem_space
@item omp_low_lat_mem_alloc @tab omp_low_lat_mem_space
@item omp_cgroup_mem_alloc @tab --
@item omp_pteam_mem_alloc @tab --
@item omp_thread_mem_alloc @tab --
@end multitable

The predefined allocators use the default values for the traits, as listed
below, except that the last three allocators have the @code{access} trait
set to @code{cgroup}, @code{pteam}, and @code{thread}, respectively.

@multitable @columnfractions .25 .40 .25
@headitem Trait @tab Allowed values @tab Default value
@item @code{sync_hint} @tab @code{contended}, @code{uncontended},
                            @code{serialized}, @code{private}
                       @tab @code{contended}
@item @code{alignment} @tab Positive integer being a power of two
                       @tab 1 byte
@item @code{access} @tab @code{all}, @code{cgroup},
                         @code{pteam}, @code{thread}
                    @tab @code{all}
@item @code{pool_size} @tab Positive integer
                       @tab See @ref{Memory allocation}
@item @code{fallback} @tab @code{default_mem_fb}, @code{null_fb},
                           @code{abort_fb}, @code{allocator_fb}
                      @tab See below
@item @code{fb_data} @tab @emph{unsupported as it needs an allocator handle}
                     @tab (none)
@item @code{pinned} @tab @code{true}, @code{false}
                    @tab @code{false}
@item @code{partition} @tab @code{environment}, @code{nearest},
                            @code{blocked}, @code{interleaved}
                       @tab @code{environment}
@end multitable

For the @code{fallback} trait, the default value is @code{null_fb} for the
@code{omp_default_mem_alloc} allocator and any allocator that is associated
with device memory; for all other allocators, it is @code{default_mem_fb}
by default.

Examples:
@smallexample
OMP_ALLOCATOR=omp_high_bw_mem_alloc
OMP_ALLOCATOR=omp_large_cap_mem_space
OMP_ALLOCATOR=omp_low_lat_mem_space:pinned=true,partition=nearest
@end smallexample

@item @emph{See also}:
@ref{Memory allocation}, @ref{omp_get_default_allocator},
@ref{omp_set_default_allocator}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.21
@end table


@node OMP_AFFINITY_FORMAT
@section @env{OMP_AFFINITY_FORMAT} -- Set the format string used for affinity display
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{affinity-format-var}
@item @emph{Scope:} device
@item @emph{Description}:
Sets the format string used when displaying OpenMP thread affinity
information. Special values are output using @code{%} followed by an
optional size specification and then either the single-character field type
or its long name enclosed in curly braces; using @code{%%} will display a
literal percent. The size specification consists of an optional @code{0.}
or @code{.} followed by a positive integer, specifying the minimal width of
the output. With @code{0.} and numerical values, the output is padded with
zeros on the left; with @code{.}, the output is padded by spaces on the
left; otherwise, the output is padded by spaces on the right. If unset,
the value is ``@code{level %L thread %i affinity %A}''.

Supported field types are:

@multitable @columnfractions .10 .25 .60
@item t @tab team_num @tab value returned by @code{omp_get_team_num}
@item T @tab num_teams @tab value returned by @code{omp_get_num_teams}
@item L @tab nesting_level @tab value returned by @code{omp_get_level}
@item n @tab thread_num @tab value returned by @code{omp_get_thread_num}
@item N @tab num_threads @tab value returned by @code{omp_get_num_threads}
@item a @tab ancestor_tnum
  @tab value returned by
       @code{omp_get_ancestor_thread_num(omp_get_level()-1)}
@item H @tab host @tab name of the host that executes the thread
@item P @tab process_id @tab process identifier
@item i @tab native_thread_id @tab native thread identifier
@item A @tab thread_affinity
  @tab comma-separated list of integer values or ranges, representing the
       processors on which a process might execute, subject to affinity
       mechanisms
@end multitable

For instance, after setting

@smallexample
OMP_AFFINITY_FORMAT="%0.2a!%n!%.4L!%N;%.2t;%0.2T;%@{team_num@};%@{num_teams@};%A"
@end smallexample

with either @code{OMP_DISPLAY_AFFINITY} being set or when calling
@code{omp_display_affinity} with @code{NULL} or an empty string, the program
might display the following:

@smallexample
00!0! 1!4; 0;01;0;1;0-11
00!3! 1!4; 0;01;0;1;0-11
00!2! 1!4; 0;01;0;1;0-11
00!1! 1!4; 0;01;0;1;0-11
@end smallexample

@item @emph{See also}:
@ref{OMP_DISPLAY_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.14
@end table


@node OMP_CANCELLATION
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{cancel-var}
@item @emph{Scope:} global
@item @emph{Description}:
If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
if unset, cancellation is disabled and the @code{cancel} construct is ignored.

@item @emph{See also}:
@ref{omp_get_cancellation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
@end table



@node OMP_DISPLAY_AFFINITY
@section @env{OMP_DISPLAY_AFFINITY} -- Display thread affinity information
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{display-affinity-var}
@item @emph{Scope:} global
@item @emph{Description}:
If set to @code{FALSE} or if unset, affinity displaying is disabled.
If set to @code{TRUE}, the runtime will display affinity information about
OpenMP threads in a parallel region upon entering the region and every time
any change occurs.

@item @emph{See also}:
@ref{OMP_AFFINITY_FORMAT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.13
@end table



@node OMP_DISPLAY_ENV
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
@cindex Environment Variable
@table @asis
@item @emph{ICV:} none
@item @emph{Scope:} not applicable
@item @emph{Description}:
If set to @code{TRUE}, the OpenMP version number and the values
associated with the OpenMP environment variables are printed to @code{stderr}.
If set to @code{VERBOSE}, it additionally shows the value of the environment
variables which are GNU extensions. If undefined or set to @code{FALSE},
this information will not be shown.


@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
@end table



@node OMP_DEFAULT_DEVICE
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{default-device-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Set to choose the device which is used in a @code{target} region, unless the
value is overridden by @code{omp_set_default_device} or by a @code{device}
clause. The value shall be the nonnegative device number. If no device with
the given device number exists, the code is executed on the host. If unset
and @env{OMP_TARGET_OFFLOAD} is @code{mandatory} and no non-host devices are
available, it is set to @code{omp_invalid_device}. Otherwise, if unset,
device number 0 will be used.


@item @emph{See also}:
@ref{omp_get_default_device}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
@end table


@node OMP_DYNAMIC
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{dyn-var}
@item @emph{Scope:} global
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The value of this environment variable shall be
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{See also}:
@ref{omp_set_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
@end table



@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{max-active-levels-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions. The value of this variable shall be a positive integer.
If undefined, then if @env{OMP_NESTED} is defined and set to true, or
if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
a list with more than one item, the maximum number of nested parallel
regions will be initialized to the largest number supported, otherwise
it will be set to one.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{OMP_NESTED}, @ref{OMP_PROC_BIND},
@ref{OMP_NUM_THREADS}


@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
@end table



@node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority
number that can be set for a task.
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{max-task-priority-var}
@item @emph{Scope:} global
@item @emph{Description}:
Specifies the initial value for the maximum priority value that can be
set for a task. The value of this variable shall be a non-negative
integer, and zero is allowed. If undefined, the default priority is
0.

@item @emph{See also}:
@ref{omp_get_max_task_priority}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
@end table



@node OMP_NESTED
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{max-active-levels-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the maximum
number of active nested regions will by default be set to the largest
number supported, otherwise it will be set to one. If
@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting will override this
setting. If both are undefined, nested parallel regions are enabled if
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined to a list with
more than one item, otherwise they are disabled by default.

Note that the @code{OMP_NESTED} environment variable was deprecated in
the OpenMP specification 5.2 in favor of @code{OMP_MAX_ACTIVE_LEVELS}.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested},
@ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
@end table



@node OMP_NUM_TEAMS
@section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{nteams-var}
@item @emph{Scope:} device
@item @emph{Description}:
Specifies the upper bound for the number of teams to use in teams regions
without an explicit @code{num_teams} clause. The value of this variable
shall be a positive integer. If undefined, it defaults to 0, which means
an implementation-defined upper bound.

@item @emph{See also}:
@ref{omp_set_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
@end table


@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{nthreads-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive integers;
the value specifies the number of threads to use for the corresponding nested
level. Specifying more than one item in the list will automatically enable
nesting by default. If undefined, one thread per CPU is used.

When a list with more than one value is specified, it also affects the
@var{max-active-levels-var} ICV as described in @ref{OMP_MAX_ACTIVE_LEVELS}.

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
@end table



@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{bind-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies whether threads may be moved between processors. If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE},
they may be moved. Alternatively, a comma-separated list with the
values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
be used to specify the thread affinity policy for the corresponding nesting
level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
same place partition as the primary thread. With @code{CLOSE} those are
kept close to the primary thread in contiguous place partitions. And
with @code{SPREAD} a sparse distribution across the place partitions is
used. Specifying more than one item in the list will automatically enable
nesting by default.

When a list is specified, it also affects the @var{max-active-levels-var} ICV
as described in @ref{OMP_MAX_ACTIVE_LEVELS}.

When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.

@item @emph{See also}:
@ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY}, @ref{OMP_PLACES},
@ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
@end table


@node OMP_PLACES
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{place-partition-var}
@item @emph{Scope:} implicit tasks
@item @emph{Description}:
The thread placement can be either specified using an abstract name or by an
explicit list of the places. The abstract names @code{threads}, @code{cores},
@code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
followed by a positive number in parentheses, which denotes how many places
shall be created. With @code{threads} each place corresponds to a single
hardware thread; @code{cores} to a single core with the corresponding number
of hardware threads; with @code{sockets} the place corresponds to a single
socket; with @code{ll_caches} to a set of cores that share the last level
cache on the device; and @code{numa_domains} to a set of cores for which
their closest memory on the device is the same memory and at a similar
distance from the cores. The resulting placement can be shown by setting
the @env{OMP_DISPLAY_ENV} environment variable.

Alternatively, the placement can be specified explicitly as a comma-separated
list of places. A place is specified by a set of nonnegative numbers in
curly braces, denoting the hardware threads. The curly braces can be omitted
when only a single number has been specified. The hardware threads
belonging to a place can either be specified as a comma-separated list of
nonnegative thread numbers or using an interval. Multiple places can also be
either specified by a comma-separated list of places or by an interval. To
specify an interval, a colon followed by the count is placed after
the hardware thread number or the place. Optionally, the length can be
followed by a colon and the stride number -- otherwise a unit stride is
assumed. Placing an exclamation mark (@code{!}) directly before a curly
brace or numbers inside the curly braces (excluding intervals) will
exclude those hardware threads.

For instance, the following all specify the same places list:
@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.

If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
between CPUs following no placement policy.

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
@ref{OMP_DISPLAY_ENV}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
@end table
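
Hypothetical shell usage (the concrete values are illustrative only, not
from the manual):

```shell
# Abstract name: create four places, one per core.
export OMP_PLACES="cores(4)"

# Explicit interval form: four places of two hardware threads each,
# with a stride of two, i.e. the same as "{0,1},{2,3},{4,5},{6,7}".
export OMP_PLACES="{0:2}:4:2"
export OMP_PROC_BIND=spread
```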
3072
3073
3074
3075@node OMP_STACKSIZE
3076@section @env{OMP_STACKSIZE} -- Set default thread stack size
3077@cindex Environment Variable
3078@table @asis
2cd0689a
TB
3079@item @emph{ICV:} @var{stacksize-var}
3080@item @emph{Scope:} device
3081@item @emph{Description}:
3082Set the default thread stack size in kilobytes, unless the number
3083is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
3084case the size is, respectively, in bytes, kilobytes, megabytes
3085or gigabytes. This is different from @code{pthread_attr_setstacksize}
3086which gets the number of bytes as an argument. If the stack size cannot
3087be set due to system constraints, an error is reported and the initial
3088stack size is left unchanged. If undefined, the stack size is system
3089dependent.
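
For example (@code{./a.out} standing in for any OpenMP program), the
following settings all request a 2-megabyte default thread stack:

```shell
OMP_STACKSIZE=2048     ./a.out   # interpreted as kilobytes
OMP_STACKSIZE=2048K    ./a.out
OMP_STACKSIZE=2M       ./a.out
OMP_STACKSIZE=2097152B ./a.out
```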
3090
3091@item @emph{See also}:
3092@ref{GOMP_STACKSIZE}
3093
3094@item @emph{Reference}:
3095@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
3096@end table
3097
3098
3099
3100@node OMP_SCHEDULE
3101@section @env{OMP_SCHEDULE} -- How threads are scheduled
3102@cindex Environment Variable
3103@cindex Implementation specific setting
3104@table @asis
3105@item @emph{ICV:} @var{run-sched-var}
3106@item @emph{Scope:} data environment
3107@item @emph{Description}:
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or
@code{auto}. The optional @code{chunk} size shall be a positive integer.
If undefined, dynamic scheduling and a chunk size of 1 are used.
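
For instance (@code{./a.out} standing in for a program whose loops use the
@code{runtime} schedule):

```shell
OMP_SCHEDULE="dynamic,4" ./a.out   # dynamic scheduling, chunk size 4
OMP_SCHEDULE="guided"    ./a.out   # guided scheduling, default chunk size
```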
3113
3114@item @emph{See also}:
3115@ref{omp_set_schedule}
3116
3117@item @emph{Reference}:
3118@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
3119@end table
3120
3121
3122
3123@node OMP_TARGET_OFFLOAD
3124@section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
3125@cindex Environment Variable
3126@cindex Implementation specific setting
3127@table @asis
3128@item @emph{ICV:} @var{target-offload-var}
3129@item @emph{Scope:} global
3130@item @emph{Description}:
Specifies the behaviour with regard to offloading code to a device. This
variable can be set to one of three values: @code{MANDATORY}, @code{DISABLED}
or @code{DEFAULT}.
3134
3135If set to @code{MANDATORY}, the program will terminate with an error if
3136the offload device is not present or is not supported. If set to
3137@code{DISABLED}, then offloading is disabled and all code will run on the
3138host. If set to @code{DEFAULT}, the program will try offloading to the
3139device first, then fall back to running code on the host if it cannot.
3140
3141If undefined, then the program will behave as if @code{DEFAULT} was set.
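
For instance (@code{./a.out} standing in for a program built with offloading
support):

```shell
OMP_TARGET_OFFLOAD=MANDATORY ./a.out   # fail if no usable device is found
OMP_TARGET_OFFLOAD=DISABLED  ./a.out   # run all target regions on the host
```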
3142
3143@item @emph{Reference}:
3144@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.17
3145@end table
3146
3147
3148
3149@node OMP_TEAMS_THREAD_LIMIT
3150@section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
3151@cindex Environment Variable
3152@table @asis
3153@item @emph{ICV:} @var{teams-thread-limit-var}
3154@item @emph{Scope:} device
3155@item @emph{Description}:
Specifies an upper bound for the number of threads to use by each contention
group created by a teams construct without an explicit @code{thread_limit}
clause. The value of this variable shall be a positive integer. If undefined,
the value 0 is used, which stands for an implementation-defined upper
limit.
3161
3162@item @emph{See also}:
3163@ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}
3164
3165@item @emph{Reference}:
3166@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
3167@end table
3168
3169
3170
3171@node OMP_THREAD_LIMIT
3172@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
3173@cindex Environment Variable
3174@table @asis
3175@item @emph{ICV:} @var{thread-limit-var}
3176@item @emph{Scope:} data environment
3177@item @emph{Description}:
3178Specifies the number of threads to use for the whole program. The
3179value of this variable shall be a positive integer. If undefined,
3180the number of threads is not limited.
3181
3182@item @emph{See also}:
3183@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
3184
3185@item @emph{Reference}:
3186@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
3187@end table
3188
3189
3190
3191@node OMP_WAIT_POLICY
3192@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
3193@cindex Environment Variable
3194@table @asis
3195@item @emph{Description}:
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting, while the value @code{ACTIVE} specifies that
they should. If undefined, threads wait actively for a short time
before waiting passively.
3201
3202@item @emph{See also}:
3203@ref{GOMP_SPINCOUNT}
3204
3205@item @emph{Reference}:
3206@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
3207@end table
3208
3209
3210
3211@node GOMP_CPU_AFFINITY
3212@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
3213@cindex Environment Variable
3214@table @asis
3215@item @emph{Description}:
3216Binds threads to specific CPUs. The variable should contain a space-separated
3217or comma-separated list of CPUs. This list may contain different kinds of
3218entries: either single CPU numbers in any order, a range of CPUs (M-N)
3219or a range with some stride (M-N:S). CPU numbers are zero based. For example,
3220@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
3221to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
3222CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
3223and 14 respectively and then start assigning back from the beginning of
3224the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
3225
3226There is no libgomp library routine to determine whether a CPU affinity
3227specification is in effect. As a workaround, language-specific library
3228functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
3229Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
3230environment variable. A defined CPU affinity on startup cannot be changed
3231or disabled during the runtime of the application.
3232
If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} has a higher precedence. If @env{GOMP_CPU_AFFINITY}
is unset and @env{OMP_PROC_BIND} is either unset or set to @code{FALSE},
the host system will handle the assignment of threads to CPUs.
3237
3238@item @emph{See also}:
3239@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
3240@end table
3241
3242
3243
3244@node GOMP_DEBUG
3245@section @env{GOMP_DEBUG} -- Enable debugging output
3246@cindex Environment Variable
3247@table @asis
3248@item @emph{Description}:
3249Enable debugging output. The variable should be set to @code{0}
3250(disabled, also the default if not set), or @code{1} (enabled).
3251
3252If enabled, some debugging output will be printed during execution.
3253This is currently not specified in more detail, and subject to change.
3254@end table
3255
3256
3257
3258@node GOMP_STACKSIZE
3259@section @env{GOMP_STACKSIZE} -- Set default thread stack size
3260@cindex Environment Variable
3261@cindex Implementation specific setting
3262@table @asis
3263@item @emph{Description}:
3264Set the default thread stack size in kilobytes. This is different from
3265@code{pthread_attr_setstacksize} which gets the number of bytes as an
3266argument. If the stack size cannot be set due to system constraints, an
3267error is reported and the initial stack size is left unchanged. If undefined,
3268the stack size is system dependent.
3269
3270@item @emph{See also}:
3271@ref{OMP_STACKSIZE}
3272
3273@item @emph{Reference}:
3274@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
3275GCC Patches Mailinglist},
3276@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
3277GCC Patches Mailinglist}
3278@end table
3279
3280
3281
3282@node GOMP_SPINCOUNT
3283@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
3284@cindex Environment Variable
3285@cindex Implementation specific setting
3286@table @asis
3287@item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power. The value may be
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop. The
integer may optionally be followed by the following suffixes acting
as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
million), @code{G} (giga, billion), or @code{T} (tera, trillion).
If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
If there are more OpenMP threads than available CPUs, 1000 and 100
spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
undefined, respectively; unless @env{GOMP_SPINCOUNT} is lower
or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
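
For example (@code{./a.out} standing in for any OpenMP program):

```shell
GOMP_SPINCOUNT=10000    ./a.out   # spin 10,000 times, then sleep
GOMP_SPINCOUNT=250k     ./a.out   # 250,000 spins
GOMP_SPINCOUNT=INFINITE ./a.out   # always busy-wait
```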
3302
3303@item @emph{See also}:
3304@ref{OMP_WAIT_POLICY}
3305@end table
3306
3307
3308
3309@node GOMP_RTEMS_THREAD_POOLS
3310@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
3311@cindex Environment Variable
3312@cindex Implementation specific setting
3313@table @asis
3314@item @emph{Description}:
3315This environment variable is only used on the RTEMS real-time operating system.
3316It determines the scheduler instance specific thread pools. The format for
3317@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
3318@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
3319separated by @code{:} where:
3320@itemize @bullet
3321@item @code{<thread-pool-count>} is the thread pool count for this scheduler
3322instance.
3323@item @code{$<priority>} is an optional priority for the worker threads of a
3324thread pool according to @code{pthread_setschedparam}. In case a priority
3325value is omitted, then a worker thread will inherit the priority of the OpenMP
3326primary thread that created it. The priority of the worker thread is not
3327changed after creation, even if a new OpenMP primary thread using the worker has
3328a different priority.
3329@item @code{@@<scheduler-name>} is the scheduler instance name according to the
3330RTEMS application configuration.
3331@end itemize
3332In case no thread pool configuration is specified for a scheduler instance,
3333then each OpenMP primary thread of this scheduler instance will use its own
3334dynamically allocated thread pool. To limit the worker thread count of the
3335thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
3336@item @emph{Example}:
Let's suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
3338@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
3339@code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
3340scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
3341one thread pool available. Since no priority is specified for this scheduler
3342instance, the worker thread inherits the priority of the OpenMP primary thread
3343that created it. In the scheduler instance @code{WRK1} there are three thread
3344pools available and their worker threads run at priority four.
3345@end table
3346
3347
3348
3349@c ---------------------------------------------------------------------
3350@c Enabling OpenACC
3351@c ---------------------------------------------------------------------
3352
3353@node Enabling OpenACC
3354@chapter Enabling OpenACC
3355
3356To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
3357flag @option{-fopenacc} must be specified. This enables the OpenACC directive
3358@code{#pragma acc} in C/C++ and @code{!$acc} directives in free form,
3359@code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
3360@code{!$} conditional compilation sentinels in free form and @code{c$},
3361@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
3362arranges for automatic linking of the OpenACC runtime library
3363(@ref{OpenACC Runtime Library Routines}).
3364
3365See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.
3366
3367A complete description of all OpenACC directives accepted may be found in
3368the @uref{https://www.openacc.org, OpenACC} Application Programming
3369Interface manual, version 2.6.
3370
3371
3372
3373@c ---------------------------------------------------------------------
3374@c OpenACC Runtime Library Routines
3375@c ---------------------------------------------------------------------
3376
3377@node OpenACC Runtime Library Routines
3378@chapter OpenACC Runtime Library Routines
3379
3380The runtime routines described here are defined by section 3 of the OpenACC
3381specifications in version 2.6.
3382They have C linkage, and do not throw exceptions.
Generally, they are available only for the host, with the exception of
@code{acc_on_device}, which is available for both the host and the
accelerator device.
3386
3387@menu
3388* acc_get_num_devices:: Get number of devices for the given device
3389 type.
3390* acc_set_device_type:: Set type of device accelerator to use.
3391* acc_get_device_type:: Get type of device accelerator to be used.
3392* acc_set_device_num:: Set device number to use.
3393* acc_get_device_num:: Get device number to be used.
3394* acc_get_property:: Get device property.
3395* acc_async_test:: Tests for completion of a specific asynchronous
3396 operation.
3397* acc_async_test_all:: Tests for completion of all asynchronous
3398 operations.
3399* acc_wait:: Wait for completion of a specific asynchronous
3400 operation.
3401* acc_wait_all:: Waits for completion of all asynchronous
3402 operations.
3403* acc_wait_all_async:: Wait for completion of all asynchronous
3404 operations.
3405* acc_wait_async:: Wait for completion of asynchronous operations.
3406* acc_init:: Initialize runtime for a specific device type.
3407* acc_shutdown:: Shuts down the runtime for a specific device
3408 type.
3409* acc_on_device:: Whether executing on a particular device
3410* acc_malloc:: Allocate device memory.
3411* acc_free:: Free device memory.
3412* acc_copyin:: Allocate device memory and copy host memory to
3413 it.
3414* acc_present_or_copyin:: If the data is not present on the device,
3415 allocate device memory and copy from host
3416 memory.
3417* acc_create:: Allocate device memory and map it to host
3418 memory.
3419* acc_present_or_create:: If the data is not present on the device,
3420 allocate device memory and map it to host
3421 memory.
3422* acc_copyout:: Copy device memory to host memory.
3423* acc_delete:: Free device memory.
3424* acc_update_device:: Update device memory from mapped host memory.
3425* acc_update_self:: Update host memory from mapped device memory.
3426* acc_map_data:: Map previously allocated device memory to host
3427 memory.
3428* acc_unmap_data:: Unmap device memory from host memory.
3429* acc_deviceptr:: Get device pointer associated with specific
3430 host address.
3431* acc_hostptr:: Get host pointer associated with specific
3432 device address.
3433* acc_is_present:: Indicate whether host variable / array is
3434 present on device.
3435* acc_memcpy_to_device:: Copy host memory to device memory.
3436* acc_memcpy_from_device:: Copy device memory to host memory.
3437* acc_attach:: Let device pointer point to device-pointer target.
3438* acc_detach:: Let device pointer point to host-pointer target.
3439
3440API routines for target platforms.
3441
3442* acc_get_current_cuda_device:: Get CUDA device handle.
3443* acc_get_current_cuda_context::Get CUDA context handle.
3444* acc_get_cuda_stream:: Get CUDA stream handle.
3445* acc_set_cuda_stream:: Set CUDA stream handle.
3446
3447API routines for the OpenACC Profiling Interface.
3448
3449* acc_prof_register:: Register callbacks.
3450* acc_prof_unregister:: Unregister callbacks.
3451* acc_prof_lookup:: Obtain inquiry functions.
3452* acc_register_library:: Library registration.
3453@end menu
3454
3455
3456
3457@node acc_get_num_devices
3458@section @code{acc_get_num_devices} -- Get number of devices for given device type
3459@table @asis
3460@item @emph{Description}
3461This function returns a value indicating the number of devices available
3462for the device type specified in @var{devicetype}.
3463
3464@item @emph{C/C++}:
3465@multitable @columnfractions .20 .80
3466@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
3467@end multitable
3468
3469@item @emph{Fortran}:
3470@multitable @columnfractions .20 .80
3471@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
3472@item @tab @code{integer(kind=acc_device_kind) devicetype}
3473@end multitable
3474
3475@item @emph{Reference}:
3476@uref{https://www.openacc.org, OpenACC specification v2.6}, section
34773.2.1.
3478@end table
3479
3480
3481
3482@node acc_set_device_type
3483@section @code{acc_set_device_type} -- Set type of device accelerator to use.
3484@table @asis
3485@item @emph{Description}
3486This function indicates to the runtime library which device type, specified
3487in @var{devicetype}, to use when executing a parallel or kernels region.
3488
3489@item @emph{C/C++}:
3490@multitable @columnfractions .20 .80
3491@item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
3492@end multitable
3493
3494@item @emph{Fortran}:
3495@multitable @columnfractions .20 .80
3496@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
3497@item @tab @code{integer(kind=acc_device_kind) devicetype}
3498@end multitable
3499
3500@item @emph{Reference}:
3501@uref{https://www.openacc.org, OpenACC specification v2.6}, section
35023.2.2.
3503@end table
3504
3505
3506
3507@node acc_get_device_type
3508@section @code{acc_get_device_type} -- Get type of device accelerator to be used.
3509@table @asis
3510@item @emph{Description}
3511This function returns what device type will be used when executing a
3512parallel or kernels region.
3513
This function returns @code{acc_device_none} if
@code{acc_get_device_type} is called from the
@code{acc_ev_device_init_start} or @code{acc_ev_device_init_end}
callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
Interface}), that is, if the device is currently being initialized.
3519
3520@item @emph{C/C++}:
3521@multitable @columnfractions .20 .80
3522@item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
3523@end multitable
3524
3525@item @emph{Fortran}:
3526@multitable @columnfractions .20 .80
3527@item @emph{Interface}: @tab @code{function acc_get_device_type(void)}
3528@item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
3529@end multitable
3530
3531@item @emph{Reference}:
3532@uref{https://www.openacc.org, OpenACC specification v2.6}, section
35333.2.3.
3534@end table
3535
3536
3537
3538@node acc_set_device_num
3539@section @code{acc_set_device_num} -- Set device number to use.
3540@table @asis
3541@item @emph{Description}
This function indicates to the runtime which device number, specified
by @var{devicenum}, associated with the specified device type
@var{devicetype}, is to be used.
3545
3546@item @emph{C/C++}:
3547@multitable @columnfractions .20 .80
3548@item @emph{Prototype}: @tab @code{acc_set_device_num(int devicenum, acc_device_t devicetype);}
3549@end multitable
3550
3551@item @emph{Fortran}:
3552@multitable @columnfractions .20 .80
3553@item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
3554@item @tab @code{integer devicenum}
3555@item @tab @code{integer(kind=acc_device_kind) devicetype}
3556@end multitable
3557
3558@item @emph{Reference}:
3559@uref{https://www.openacc.org, OpenACC specification v2.6}, section
35603.2.4.
3561@end table
3562
3563
3564
3565@node acc_get_device_num
3566@section @code{acc_get_device_num} -- Get device number to be used.
3567@table @asis
3568@item @emph{Description}
This function returns the device number, associated with the specified
device type @var{devicetype}, that will be used when executing a parallel
or kernels region.
3572
3573@item @emph{C/C++}:
3574@multitable @columnfractions .20 .80
3575@item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
3576@end multitable
3577
3578@item @emph{Fortran}:
3579@multitable @columnfractions .20 .80
3580@item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
3581@item @tab @code{integer(kind=acc_device_kind) devicetype}
3582@item @tab @code{integer acc_get_device_num}
3583@end multitable
3584
3585@item @emph{Reference}:
3586@uref{https://www.openacc.org, OpenACC specification v2.6}, section
35873.2.5.
3588@end table
3589
3590
3591
3592@node acc_get_property
3593@section @code{acc_get_property} -- Get device property.
3594@cindex acc_get_property
3595@cindex acc_get_property_string
3596@table @asis
3597@item @emph{Description}
3598These routines return the value of the specified @var{property} for the
3599device being queried according to @var{devicenum} and @var{devicetype}.
3600Integer-valued and string-valued properties are returned by
3601@code{acc_get_property} and @code{acc_get_property_string} respectively.
3602The Fortran @code{acc_get_property_string} subroutine returns the string
3603retrieved in its fourth argument while the remaining entry points are
3604functions, which pass the return value as their result.
3605
Note for Fortran only: the OpenACC technical committee corrected and, hence,
modified the interface introduced in OpenACC 2.6. The kind-value parameter
@code{acc_device_property} has been renamed to @code{acc_device_property_kind}
for consistency and the return type of the @code{acc_get_property} function is
now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
The parameter @code{acc_device_property} will continue to be provided,
but might be removed in a future version of GCC.
3613
3614@item @emph{C/C++}:
3615@multitable @columnfractions .20 .80
3616@item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
3617@item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
3618@end multitable
3619
3620@item @emph{Fortran}:
3621@multitable @columnfractions .20 .80
3622@item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
3623@item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
3624@item @tab @code{use ISO_C_Binding, only: c_size_t}
3625@item @tab @code{integer devicenum}
3626@item @tab @code{integer(kind=acc_device_kind) devicetype}
3627@item @tab @code{integer(kind=acc_device_property_kind) property}
3628@item @tab @code{integer(kind=c_size_t) acc_get_property}
3629@item @tab @code{character(*) string}
3630@end multitable
3631
3632@item @emph{Reference}:
3633@uref{https://www.openacc.org, OpenACC specification v2.6}, section
36343.2.6.
3635@end table
3636
3637
3638
3639@node acc_async_test
3640@section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
3641@table @asis
3642@item @emph{Description}
This function tests for completion of the asynchronous operation specified
in @var{arg}. In C/C++, a non-zero value is returned if the specified
asynchronous operation has completed, while Fortran returns @code{true}.
If the asynchronous operation has not completed, C/C++ returns zero and
Fortran returns @code{false}.
3648
3649@item @emph{C/C++}:
3650@multitable @columnfractions .20 .80
3651@item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
3652@end multitable
3653
3654@item @emph{Fortran}:
3655@multitable @columnfractions .20 .80
3656@item @emph{Interface}: @tab @code{function acc_async_test(arg)}
3657@item @tab @code{integer(kind=acc_handle_kind) arg}
3658@item @tab @code{logical acc_async_test}
3659@end multitable
3660
3661@item @emph{Reference}:
3662@uref{https://www.openacc.org, OpenACC specification v2.6}, section
36633.2.9.
3664@end table
3665
3666
3667
3668@node acc_async_test_all
3669@section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
3670@table @asis
3671@item @emph{Description}
This function tests for completion of all asynchronous operations.
In C/C++, a non-zero value is returned if all asynchronous operations
have completed, while Fortran returns @code{true}. If any asynchronous
operation has not completed, C/C++ returns zero and Fortran returns
@code{false}.
3677
3678@item @emph{C/C++}:
3679@multitable @columnfractions .20 .80
3680@item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
3681@end multitable
3682
3683@item @emph{Fortran}:
3684@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
3687@end multitable
3688
3689@item @emph{Reference}:
3690@uref{https://www.openacc.org, OpenACC specification v2.6}, section
36913.2.10.
3692@end table
3693
3694
3695
3696@node acc_wait
3697@section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
3698@table @asis
3699@item @emph{Description}
3700This function waits for completion of the asynchronous operation
3701specified in @var{arg}.
3702
3703@item @emph{C/C++}:
3704@multitable @columnfractions .20 .80
3705@item @emph{Prototype}: @tab @code{acc_wait(arg);}
3706@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
3707@end multitable
3708
3709@item @emph{Fortran}:
3710@multitable @columnfractions .20 .80
3711@item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
3712@item @tab @code{integer(acc_handle_kind) arg}
3713@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
3714@item @tab @code{integer(acc_handle_kind) arg}
3715@end multitable
3716
3717@item @emph{Reference}:
3718@uref{https://www.openacc.org, OpenACC specification v2.6}, section
37193.2.11.
3720@end table
3721
3722
3723
3724@node acc_wait_all
3725@section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
3726@table @asis
3727@item @emph{Description}
3728This function waits for the completion of all asynchronous operations.
3729
3730@item @emph{C/C++}:
3731@multitable @columnfractions .20 .80
3732@item @emph{Prototype}: @tab @code{acc_wait_all(void);}
3733@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
3734@end multitable
3735
3736@item @emph{Fortran}:
3737@multitable @columnfractions .20 .80
3738@item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
3739@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
3740@end multitable
3741
3742@item @emph{Reference}:
3743@uref{https://www.openacc.org, OpenACC specification v2.6}, section
37443.2.13.
3745@end table
3746
3747
3748
3749@node acc_wait_all_async
3750@section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
3751@table @asis
3752@item @emph{Description}
3753This function enqueues a wait operation on the queue @var{async} for any
3754and all asynchronous operations that have been previously enqueued on
3755any queue.
3756
3757@item @emph{C/C++}:
3758@multitable @columnfractions .20 .80
3759@item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
3760@end multitable
3761
3762@item @emph{Fortran}:
3763@multitable @columnfractions .20 .80
3764@item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
3765@item @tab @code{integer(acc_handle_kind) async}
3766@end multitable
3767
3768@item @emph{Reference}:
3769@uref{https://www.openacc.org, OpenACC specification v2.6}, section
37703.2.14.
3771@end table
3772
3773
3774
3775@node acc_wait_async
3776@section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
3777@table @asis
3778@item @emph{Description}
3779This function enqueues a wait operation on queue @var{async} for any and all
3780asynchronous operations enqueued on queue @var{arg}.
3781
3782@item @emph{C/C++}:
3783@multitable @columnfractions .20 .80
3784@item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
3785@end multitable
3786
3787@item @emph{Fortran}:
3788@multitable @columnfractions .20 .80
3789@item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
3790@item @tab @code{integer(acc_handle_kind) arg, async}
3791@end multitable
3792
3793@item @emph{Reference}:
3794@uref{https://www.openacc.org, OpenACC specification v2.6}, section
37953.2.12.
3796@end table
3797
3798
3799
3800@node acc_init
3801@section @code{acc_init} -- Initialize runtime for a specific device type.
3802@table @asis
3803@item @emph{Description}
3804This function initializes the runtime for the device type specified in
3805@var{devicetype}.
3806
3807@item @emph{C/C++}:
3808@multitable @columnfractions .20 .80
3809@item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
3810@end multitable
3811
3812@item @emph{Fortran}:
3813@multitable @columnfractions .20 .80
3814@item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
3815@item @tab @code{integer(acc_device_kind) devicetype}
3816@end multitable
3817
3818@item @emph{Reference}:
3819@uref{https://www.openacc.org, OpenACC specification v2.6}, section
38203.2.7.
3821@end table
3822
3823
3824
3825@node acc_shutdown
3826@section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
3827@table @asis
3828@item @emph{Description}
3829This function shuts down the runtime for the device type specified in
3830@var{devicetype}.
3831
3832@item @emph{C/C++}:
3833@multitable @columnfractions .20 .80
3834@item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
3835@end multitable
3836
3837@item @emph{Fortran}:
3838@multitable @columnfractions .20 .80
3839@item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
3840@item @tab @code{integer(acc_device_kind) devicetype}
3841@end multitable
3842
3843@item @emph{Reference}:
3844@uref{https://www.openacc.org, OpenACC specification v2.6}, section
38453.2.8.
3846@end table
3847
3848
3849
3850@node acc_on_device
3851@section @code{acc_on_device} -- Whether executing on a particular device
3852@table @asis
3853@item @emph{Description}:
This function returns whether the program is executing on a particular
device specified in @var{devicetype}. In C/C++, a non-zero value is
returned if the program is executing on the specified device type, while
Fortran returns @code{true}. If the program is not executing on the
specified device type, C/C++ returns zero and Fortran returns
@code{false}.
3860
3861@item @emph{C/C++}:
3862@multitable @columnfractions .20 .80
3863@item @emph{Prototype}: @tab @code{acc_on_device(acc_device_t devicetype);}
3864@end multitable
3865
3866@item @emph{Fortran}:
3867@multitable @columnfractions .20 .80
3868@item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
3869@item @tab @code{integer(acc_device_kind) devicetype}
3870@item @tab @code{logical acc_on_device}
3871@end multitable
3872
3873
3874@item @emph{Reference}:
3875@uref{https://www.openacc.org, OpenACC specification v2.6}, section
38763.2.17.
3877@end table
3878
3879
3880
3881@node acc_malloc
3882@section @code{acc_malloc} -- Allocate device memory.
3883@table @asis
3884@item @emph{Description}
3885This function allocates @var{len} bytes of device memory. It returns
3886the device address of the allocated memory.
3887
3888@item @emph{C/C++}:
3889@multitable @columnfractions .20 .80
3890@item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
3891@end multitable
3892
3893@item @emph{Reference}:
3894@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.18.
3896@end table
3897
3898
3899
3900@node acc_free
3901@section @code{acc_free} -- Free device memory.
3902@table @asis
3903@item @emph{Description}
This function frees previously allocated device memory at the device
address @var{a}.
3905
3906@item @emph{C/C++}:
3907@multitable @columnfractions .20 .80
3908@item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
3909@end multitable
3910
3911@item @emph{Reference}:
3912@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.19.
3914@end table
3915
3916
3917
3918@node acc_copyin
3919@section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
3920@table @asis
3921@item @emph{Description}
In C/C++, this function allocates @var{len} bytes of device memory
and maps it to the host address specified by @var{a}.  The device
address of the newly allocated device memory is returned.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
3929
3930@item @emph{C/C++}:
3931@multitable @columnfractions .20 .80
3932@item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
3933@item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
3934@end multitable
3935
3936@item @emph{Fortran}:
3937@multitable @columnfractions .20 .80
3938@item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
3939@item @tab @code{type, dimension(:[,:]...) :: a}
3940@item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
3941@item @tab @code{type, dimension(:[,:]...) :: a}
3942@item @tab @code{integer len}
3943@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
3944@item @tab @code{type, dimension(:[,:]...) :: a}
3945@item @tab @code{integer(acc_handle_kind) :: async}
3946@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
3947@item @tab @code{type, dimension(:[,:]...) :: a}
3948@item @tab @code{integer len}
3949@item @tab @code{integer(acc_handle_kind) :: async}
3950@end multitable
3951
3952@item @emph{Reference}:
3953@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
3955@end table
3956
3957
3958
3959@node acc_present_or_copyin
3960@section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
3961@table @asis
3962@item @emph{Description}
This function tests whether the host data specified by @var{a} with a
length of @var{len} bytes is present on the device.  If it is not
present, device memory is allocated and the host memory copied.  The
device address of the newly allocated device memory is returned.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
3971
3972Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
3973backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.
3974
3975@item @emph{C/C++}:
3976@multitable @columnfractions .20 .80
3977@item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
3978@item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
3979@end multitable
3980
3981@item @emph{Fortran}:
3982@multitable @columnfractions .20 .80
3983@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
3984@item @tab @code{type, dimension(:[,:]...) :: a}
3985@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
3986@item @tab @code{type, dimension(:[,:]...) :: a}
3987@item @tab @code{integer len}
3988@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
3989@item @tab @code{type, dimension(:[,:]...) :: a}
3990@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
3991@item @tab @code{type, dimension(:[,:]...) :: a}
3992@item @tab @code{integer len}
3993@end multitable
3994
3995@item @emph{Reference}:
3996@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
3998@end table
3999
4000
4001
4002@node acc_create
4003@section @code{acc_create} -- Allocate device memory and map it to host memory.
4004@table @asis
4005@item @emph{Description}
This function allocates device memory and maps it to the host memory
specified by the host address @var{a} with a length of @var{len} bytes.
In C/C++, the function returns the device address of the allocated
device memory.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
4013
4014@item @emph{C/C++}:
4015@multitable @columnfractions .20 .80
4016@item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
4017@item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
4018@end multitable
4019
4020@item @emph{Fortran}:
4021@multitable @columnfractions .20 .80
4022@item @emph{Interface}: @tab @code{subroutine acc_create(a)}
4023@item @tab @code{type, dimension(:[,:]...) :: a}
4024@item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
4025@item @tab @code{type, dimension(:[,:]...) :: a}
4026@item @tab @code{integer len}
4027@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
4028@item @tab @code{type, dimension(:[,:]...) :: a}
4029@item @tab @code{integer(acc_handle_kind) :: async}
4030@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
4031@item @tab @code{type, dimension(:[,:]...) :: a}
4032@item @tab @code{integer len}
4033@item @tab @code{integer(acc_handle_kind) :: async}
4034@end multitable
4035
4036@item @emph{Reference}:
4037@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
4039@end table
4040
4041
4042
4043@node acc_present_or_create
4044@section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
4045@table @asis
4046@item @emph{Description}
This function tests whether the host data specified by @var{a} with a
length of @var{len} bytes is present on the device.  If it is not
present, device memory is allocated and mapped to the host memory.  In
C/C++, the device address of the newly allocated device memory is
returned.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
4055
4056Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
4057backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.
4058
4059@item @emph{C/C++}:
4060@multitable @columnfractions .20 .80
4061@item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
4062@item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
4063@end multitable
4064
4065@item @emph{Fortran}:
4066@multitable @columnfractions .20 .80
4067@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
4068@item @tab @code{type, dimension(:[,:]...) :: a}
4069@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
4070@item @tab @code{type, dimension(:[,:]...) :: a}
4071@item @tab @code{integer len}
4072@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
4073@item @tab @code{type, dimension(:[,:]...) :: a}
4074@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
4075@item @tab @code{type, dimension(:[,:]...) :: a}
4076@item @tab @code{integer len}
4077@end multitable
4078
4079@item @emph{Reference}:
4080@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
4082@end table
4083
4084
4085
4086@node acc_copyout
4087@section @code{acc_copyout} -- Copy device memory to host memory.
4088@table @asis
4089@item @emph{Description}
In C/C++, this function copies mapped device memory to the host memory
specified by the host address @var{a} for a length of @var{len} bytes.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
4096
4097@item @emph{C/C++}:
4098@multitable @columnfractions .20 .80
4099@item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
4100@item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
4101@item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
4102@item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
4103@end multitable
4104
4105@item @emph{Fortran}:
4106@multitable @columnfractions .20 .80
4107@item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
4108@item @tab @code{type, dimension(:[,:]...) :: a}
4109@item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
4110@item @tab @code{type, dimension(:[,:]...) :: a}
4111@item @tab @code{integer len}
4112@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
4113@item @tab @code{type, dimension(:[,:]...) :: a}
4114@item @tab @code{integer(acc_handle_kind) :: async}
4115@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
4116@item @tab @code{type, dimension(:[,:]...) :: a}
4117@item @tab @code{integer len}
4118@item @tab @code{integer(acc_handle_kind) :: async}
4119@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
4120@item @tab @code{type, dimension(:[,:]...) :: a}
4121@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
4122@item @tab @code{type, dimension(:[,:]...) :: a}
4123@item @tab @code{integer len}
4124@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
4125@item @tab @code{type, dimension(:[,:]...) :: a}
4126@item @tab @code{integer(acc_handle_kind) :: async}
4127@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
4128@item @tab @code{type, dimension(:[,:]...) :: a}
4129@item @tab @code{integer len}
4130@item @tab @code{integer(acc_handle_kind) :: async}
4131@end multitable
4132
4133@item @emph{Reference}:
4134@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.22.
4136@end table
4137
4138
4139
4140@node acc_delete
4141@section @code{acc_delete} -- Free device memory.
4142@table @asis
4143@item @emph{Description}
This function frees the device memory that is mapped to the host
address @var{a} with a length of @var{len} bytes.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
4150
4151@item @emph{C/C++}:
4152@multitable @columnfractions .20 .80
4153@item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
4154@item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
4155@item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
4156@item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
4157@end multitable
4158
4159@item @emph{Fortran}:
4160@multitable @columnfractions .20 .80
4161@item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
4162@item @tab @code{type, dimension(:[,:]...) :: a}
4163@item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
4164@item @tab @code{type, dimension(:[,:]...) :: a}
4165@item @tab @code{integer len}
4166@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
4167@item @tab @code{type, dimension(:[,:]...) :: a}
4168@item @tab @code{integer(acc_handle_kind) :: async}
4169@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
4170@item @tab @code{type, dimension(:[,:]...) :: a}
4171@item @tab @code{integer len}
4172@item @tab @code{integer(acc_handle_kind) :: async}
4173@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
4174@item @tab @code{type, dimension(:[,:]...) :: a}
4175@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
4176@item @tab @code{type, dimension(:[,:]...) :: a}
4177@item @tab @code{integer len}
4178@item @emph{Interface}: @tab @code{subroutine acc_delete_async_finalize(a, async)}
4179@item @tab @code{type, dimension(:[,:]...) :: a}
4180@item @tab @code{integer(acc_handle_kind) :: async}
4181@item @emph{Interface}: @tab @code{subroutine acc_delete_async_finalize(a, len, async)}
4182@item @tab @code{type, dimension(:[,:]...) :: a}
4183@item @tab @code{integer len}
4184@item @tab @code{integer(acc_handle_kind) :: async}
4185@end multitable
4186
4187@item @emph{Reference}:
4188@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.23.
4190@end table
4191
4192
4193
4194@node acc_update_device
4195@section @code{acc_update_device} -- Update device memory from mapped host memory.
4196@table @asis
4197@item @emph{Description}
4198This function updates the device copy from the previously mapped host memory.
4199The host memory is specified with the host address @var{a} and a length of
4200@var{len} bytes.
4201
In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
4205
4206@item @emph{C/C++}:
4207@multitable @columnfractions .20 .80
4208@item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
4210@end multitable
4211
4212@item @emph{Fortran}:
4213@multitable @columnfractions .20 .80
4214@item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
4215@item @tab @code{type, dimension(:[,:]...) :: a}
4216@item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
4217@item @tab @code{type, dimension(:[,:]...) :: a}
4218@item @tab @code{integer len}
4219@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
4220@item @tab @code{type, dimension(:[,:]...) :: a}
4221@item @tab @code{integer(acc_handle_kind) :: async}
4222@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
4223@item @tab @code{type, dimension(:[,:]...) :: a}
4224@item @tab @code{integer len}
4225@item @tab @code{integer(acc_handle_kind) :: async}
4226@end multitable
4227
4228@item @emph{Reference}:
4229@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.24.
4231@end table
4232
4233
4234
4235@node acc_update_self
4236@section @code{acc_update_self} -- Update host memory from mapped device memory.
4237@table @asis
4238@item @emph{Description}
4239This function updates the host copy from the previously mapped device memory.
4240The host memory is specified with the host address @var{a} and a length of
4241@var{len} bytes.
4242
In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
4246
4247@item @emph{C/C++}:
4248@multitable @columnfractions .20 .80
4249@item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
4250@item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
4251@end multitable
4252
4253@item @emph{Fortran}:
4254@multitable @columnfractions .20 .80
4255@item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
4256@item @tab @code{type, dimension(:[,:]...) :: a}
4257@item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
4258@item @tab @code{type, dimension(:[,:]...) :: a}
4259@item @tab @code{integer len}
4260@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
4261@item @tab @code{type, dimension(:[,:]...) :: a}
4262@item @tab @code{integer(acc_handle_kind) :: async}
4263@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
4264@item @tab @code{type, dimension(:[,:]...) :: a}
4265@item @tab @code{integer len}
4266@item @tab @code{integer(acc_handle_kind) :: async}
4267@end multitable
4268
4269@item @emph{Reference}:
4270@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.25.
4272@end table
4273
4274
4275
4276@node acc_map_data
4277@section @code{acc_map_data} -- Map previously allocated device memory to host memory.
4278@table @asis
4279@item @emph{Description}
This function maps previously allocated device memory to host memory.
The device memory is specified with the device address @var{d}.  The
host memory is specified with the host address @var{h} and a length of
@var{len} bytes.
4283
4284@item @emph{C/C++}:
4285@multitable @columnfractions .20 .80
4286@item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
4287@end multitable
4288
4289@item @emph{Reference}:
4290@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.26.
4292@end table
4293
4294
4295
4296@node acc_unmap_data
4297@section @code{acc_unmap_data} -- Unmap device memory from host memory.
4298@table @asis
4299@item @emph{Description}
This function unmaps previously mapped device memory from the host
memory specified by @var{h}.
4302
4303@item @emph{C/C++}:
4304@multitable @columnfractions .20 .80
4305@item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
4306@end multitable
4307
4308@item @emph{Reference}:
4309@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.27.
4311@end table
4312
4313
4314
4315@node acc_deviceptr
4316@section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
4317@table @asis
4318@item @emph{Description}
4319This function returns the device address that has been mapped to the
4320host address specified by @var{h}.
4321
4322@item @emph{C/C++}:
4323@multitable @columnfractions .20 .80
4324@item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
4325@end multitable
4326
4327@item @emph{Reference}:
4328@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.28.
4330@end table
4331
4332
4333
4334@node acc_hostptr
4335@section @code{acc_hostptr} -- Get host pointer associated with specific device address.
4336@table @asis
4337@item @emph{Description}
4338This function returns the host address that has been mapped to the
4339device address specified by @var{d}.
4340
4341@item @emph{C/C++}:
4342@multitable @columnfractions .20 .80
4343@item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
4344@end multitable
4345
4346@item @emph{Reference}:
4347@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.29.
4349@end table
4350
4351
4352
4353@node acc_is_present
4354@section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
4355@table @asis
4356@item @emph{Description}
This function indicates whether the host memory specified by the host
address @var{a} with a length of @var{len} bytes is present on the
device.  In C/C++, a non-zero value is returned if the mapped memory is
present on the device, and zero otherwise.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
If the host memory is mapped to device memory, @code{true} is returned;
otherwise, @code{false} is returned.
4368
4369@item @emph{C/C++}:
4370@multitable @columnfractions .20 .80
4371@item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
4372@end multitable
4373
4374@item @emph{Fortran}:
4375@multitable @columnfractions .20 .80
4376@item @emph{Interface}: @tab @code{function acc_is_present(a)}
4377@item @tab @code{type, dimension(:[,:]...) :: a}
4378@item @tab @code{logical acc_is_present}
4379@item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
4380@item @tab @code{type, dimension(:[,:]...) :: a}
4381@item @tab @code{integer len}
4382@item @tab @code{logical acc_is_present}
4383@end multitable
4384
4385@item @emph{Reference}:
4386@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.30.
4388@end table
4389
4390
4391
4392@node acc_memcpy_to_device
4393@section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
4394@table @asis
4395@item @emph{Description}
This function copies the host memory specified by the host address
@var{src} to the device memory specified by the device address
@var{dest} for a length of @var{bytes} bytes.
4399
4400@item @emph{C/C++}:
4401@multitable @columnfractions .20 .80
4402@item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
4403@end multitable
4404
4405@item @emph{Reference}:
4406@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.31.
4408@end table
4409
4410
4411
4412@node acc_memcpy_from_device
4413@section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
4414@table @asis
4415@item @emph{Description}
This function copies the device memory specified by the device address
@var{src} to the host memory specified by the host address @var{dest}
for a length of @var{bytes} bytes.
4419
4420@item @emph{C/C++}:
4421@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
4423@end multitable
4424
4425@item @emph{Reference}:
4426@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.32.
4428@end table
4429
4430
4431
4432@node acc_attach
4433@section @code{acc_attach} -- Let device pointer point to device-pointer target.
4434@table @asis
4435@item @emph{Description}
4436This function updates a pointer on the device from pointing to a host-pointer
4437address to pointing to the corresponding device data.
4438
4439@item @emph{C/C++}:
4440@multitable @columnfractions .20 .80
4441@item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
4442@item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
4443@end multitable
4444
4445@item @emph{Reference}:
4446@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.34.
4448@end table
4449
4450
4451
4452@node acc_detach
4453@section @code{acc_detach} -- Let device pointer point to host-pointer target.
4454@table @asis
4455@item @emph{Description}
4456This function updates a pointer on the device from pointing to a device-pointer
4457address to pointing to the corresponding host data.
4458
4459@item @emph{C/C++}:
4460@multitable @columnfractions .20 .80
4461@item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
4462@item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
4463@item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
4464@item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
4465@end multitable
4466
4467@item @emph{Reference}:
4468@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.35.
4470@end table
4471
4472
4473
4474@node acc_get_current_cuda_device
4475@section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
4476@table @asis
4477@item @emph{Description}
4478This function returns the CUDA device handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.
4480
4481@item @emph{C/C++}:
4482@multitable @columnfractions .20 .80
4483@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
4484@end multitable
4485
4486@item @emph{Reference}:
4487@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.1.
4489@end table
4490
4491
4492
4493@node acc_get_current_cuda_context
4494@section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
4495@table @asis
4496@item @emph{Description}
4497This function returns the CUDA context handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.
4499
4500@item @emph{C/C++}:
4501@multitable @columnfractions .20 .80
4502@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
4503@end multitable
4504
4505@item @emph{Reference}:
4506@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.2.
4508@end table
4509
4510
4511
4512@node acc_get_cuda_stream
4513@section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
4514@table @asis
4515@item @emph{Description}
4516This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as used by the CUDA Runtime or Driver APIs.
4518
4519@item @emph{C/C++}:
4520@multitable @columnfractions .20 .80
4521@item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
4522@end multitable
4523
4524@item @emph{Reference}:
4525@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.3.
4527@end table
4528
4529
4530
4531@node acc_set_cuda_stream
4532@section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
4533@table @asis
4534@item @emph{Description}
4535This function associates the stream handle specified by @var{stream} with
4536the queue @var{async}.
4537
4538This cannot be used to change the stream handle associated with
4539@code{acc_async_sync}.
4540
4541The return value is not specified.
4542
4543@item @emph{C/C++}:
4544@multitable @columnfractions .20 .80
4545@item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
4546@end multitable
4547
4548@item @emph{Reference}:
4549@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.4.
4551@end table
4552
4553
4554
4555@node acc_prof_register
4556@section @code{acc_prof_register} -- Register callbacks.
4557@table @asis
4558@item @emph{Description}:
This function registers a callback function for the specified event type.
4560
4561@item @emph{C/C++}:
4562@multitable @columnfractions .20 .80
4563@item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
4564@end multitable
4565
4566@item @emph{See also}:
4567@ref{OpenACC Profiling Interface}
4568
4569@item @emph{Reference}:
4570@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
4572@end table
4573
4574
4575
4576@node acc_prof_unregister
4577@section @code{acc_prof_unregister} -- Unregister callbacks.
4578@table @asis
4579@item @emph{Description}:
This function unregisters a previously registered callback function.
4581
4582@item @emph{C/C++}:
4583@multitable @columnfractions .20 .80
4584@item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
4585@end multitable
4586
4587@item @emph{See also}:
4588@ref{OpenACC Profiling Interface}
4589
4590@item @emph{Reference}:
4591@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
4593@end table
4594
4595
4596
4597@node acc_prof_lookup
4598@section @code{acc_prof_lookup} -- Obtain inquiry functions.
4599@table @asis
4600@item @emph{Description}:
This function returns the address of the inquiry function with the
given name.
4602
4603@item @emph{C/C++}:
4604@multitable @columnfractions .20 .80
4605@item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
4606@end multitable
4607
4608@item @emph{See also}:
4609@ref{OpenACC Profiling Interface}
4610
4611@item @emph{Reference}:
4612@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
4614@end table
4615
4616
4617
4618@node acc_register_library
4619@section @code{acc_register_library} -- Library registration.
4620@table @asis
4621@item @emph{Description}:
This function is implemented by a profiling library; the OpenACC runtime
invokes it so that the library can register its callbacks.
4623
4624@item @emph{C/C++}:
4625@multitable @columnfractions .20 .80
4626@item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
4627@end multitable
4628
4629@item @emph{See also}:
4630@ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
4631
4632@item @emph{Reference}:
4633@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
4635@end table
4636
4637
4638
4639@c ---------------------------------------------------------------------
4640@c OpenACC Environment Variables
4641@c ---------------------------------------------------------------------
4642
4643@node OpenACC Environment Variables
4644@chapter OpenACC Environment Variables
4645
4646The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
4647are defined by section 4 of the OpenACC specification in version 2.0.
4648The variable @env{ACC_PROFLIB}
4649is defined by section 4 of the OpenACC specification in version 2.6.
4650The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.
4651
4652@menu
4653* ACC_DEVICE_TYPE::
4654* ACC_DEVICE_NUM::
4655* ACC_PROFLIB::
4656* GCC_ACC_NOTIFY::
4657@end menu
4658
4659
4660
4661@node ACC_DEVICE_TYPE
4662@section @code{ACC_DEVICE_TYPE}
4663@table @asis
4664@item @emph{Reference}:
4665@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.1.
4667@end table
4668
4669
4670
4671@node ACC_DEVICE_NUM
4672@section @code{ACC_DEVICE_NUM}
4673@table @asis
4674@item @emph{Reference}:
4675@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.2.
4677@end table
4678
4679
4680
4681@node ACC_PROFLIB
4682@section @code{ACC_PROFLIB}
4683@table @asis
4684@item @emph{See also}:
4685@ref{acc_register_library}, @ref{OpenACC Profiling Interface}
4686
4687@item @emph{Reference}:
4688@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.3.
4690@end table
4691
4692
4693
4694@node GCC_ACC_NOTIFY
4695@section @code{GCC_ACC_NOTIFY}
4696@table @asis
4697@item @emph{Description}:
4698Print debug information pertaining to the accelerator.
4699@end table
4700
4701
4702
4703@c ---------------------------------------------------------------------
4704@c CUDA Streams Usage
4705@c ---------------------------------------------------------------------
4706
4707@node CUDA Streams Usage
4708@chapter CUDA Streams Usage
4709
4710This applies to the @code{nvptx} plugin only.
4711
4712The library provides elements that perform asynchronous movement of
4713data and asynchronous operation of computing constructs. This
4714asynchronous functionality is implemented by making use of CUDA
4715streams@footnote{See "Stream Management" in "CUDA Driver API",
4716TRM-06703-001, Version 5.5, for additional information}.
4717
The primary means by which the asynchronous functionality is accessed
is through the OpenACC directives that accept the @code{async} and
@code{wait} clauses.  When the @code{async} clause is first used with a
directive, it creates a CUDA stream.  If an @code{async-argument} is
used with the @code{async} clause, then the stream is associated with
the specified @code{async-argument}.
4724
4725Following the creation of an association between a CUDA stream and the
4726@code{async-argument} of an @code{async} clause, both the @code{wait}
4727clause and the @code{wait} directive can be used. When either the
4728clause or directive is used after stream creation, it creates a
4729rendezvous point whereby execution waits until all operations
4730associated with the @code{async-argument}, that is, stream, have
4731completed.
4732
4733Normally, the management of the streams that are created as a result of
4734using the @code{async} clause, is done without any intervention by the
4735caller. This implies the association between the @code{async-argument}
4736and the CUDA stream will be maintained for the lifetime of the program.
4737However, this association can be changed through the use of the library
4738function @code{acc_set_cuda_stream}. When the function
4739@code{acc_set_cuda_stream} is called, the CUDA stream that was
4740originally associated with the @code{async} clause will be destroyed.
4741Caution should be taken when changing the association as subsequent
4742references to the @code{async-argument} refer to a different
4743CUDA stream.


@c ---------------------------------------------------------------------
@c OpenACC Library Interoperability
@c ---------------------------------------------------------------------

@node OpenACC Library Interoperability
@chapter OpenACC Library Interoperability

@section Introduction

The OpenACC library uses the CUDA Driver API, and may interact with
programs that use the Runtime library directly, or another library
based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
"Interactions with the CUDA Driver API" in
"CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
for additional information on library interoperability.}.
This chapter describes the use cases and what changes are
required in order to use both the OpenACC library and the CUBLAS and Runtime
libraries within a program.

@section First invocation: NVIDIA CUBLAS library API

In this first use case (see below), a function in the CUBLAS library is called
prior to any of the functions in the OpenACC library. More specifically, the
function @code{cublasCreate()}.

When invoked, the function initializes the library and allocates the
hardware resources on the host and the device on behalf of the caller. Once
the initialization and allocation have completed, a handle is returned to the
caller. The OpenACC library also requires initialization and allocation of
hardware resources. Since the CUBLAS library has already allocated the
hardware resources for the device, all that is left to do is to initialize
the OpenACC library and acquire the hardware resources on the host.

Prior to calling the OpenACC function that initializes the library and
allocates the host hardware resources, you need to acquire the device number
that was allocated during the call to @code{cublasCreate()}. Invoking the
runtime library function @code{cudaGetDevice()} accomplishes this. Once
acquired, the device number is passed along with the device type as
parameters to the OpenACC library function @code{acc_set_device_num()}.

Once the call to @code{acc_set_device_num()} has completed, the OpenACC
library uses the context that was created during the call to
@code{cublasCreate()}. In other words, both libraries share the
same context.

@smallexample
    /* Create the handle */
    s = cublasCreate(&h);
    if (s != CUBLAS_STATUS_SUCCESS)
      @{
        fprintf(stderr, "cublasCreate failed %d\n", s);
        exit(EXIT_FAILURE);
      @}

    /* Get the device number */
    e = cudaGetDevice(&dev);
    if (e != cudaSuccess)
      @{
        fprintf(stderr, "cudaGetDevice failed %d\n", e);
        exit(EXIT_FAILURE);
      @}

    /* Initialize OpenACC library and use device 'dev' */
    acc_set_device_num(dev, acc_device_nvidia);

@end smallexample
@center Use Case 1

@section First invocation: OpenACC library API

In this second use case (see below), a function in the OpenACC library is
called prior to any of the functions in the CUBLAS library. More specifically,
the function @code{acc_set_device_num()}.

In the use case presented here, the function @code{acc_set_device_num()}
is used to both initialize the OpenACC library and allocate the hardware
resources on the host and the device. In the call to the function, the
call parameters specify which device to use and what device
type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
is but one method to initialize the OpenACC library and allocate the
appropriate hardware resources. Other methods are available through the
use of environment variables; these are discussed in the next section.

Once the call to @code{acc_set_device_num()} has completed, other OpenACC
functions can be called, as seen with the multiple calls made to
@code{acc_copyin()}. In addition, calls can be made to functions in the
CUBLAS library. In the use case, a call to @code{cublasCreate()} is made
subsequent to the calls to @code{acc_copyin()}.
As seen in the previous use case, a call to @code{cublasCreate()}
initializes the CUBLAS library and allocates the hardware resources on the
host and the device. However, since the device has already been allocated,
@code{cublasCreate()} only initializes the CUBLAS library and allocates
the appropriate hardware resources on the host. The context that was created
as part of the OpenACC initialization is shared with the CUBLAS library,
similarly to the first use case.

@smallexample
    dev = 0;

    acc_set_device_num(dev, acc_device_nvidia);

    /* Copy the first set to the device */
    d_X = acc_copyin(&h_X[0], N * sizeof (float));
    if (d_X == NULL)
      @{
        fprintf(stderr, "copyin error h_X\n");
        exit(EXIT_FAILURE);
      @}

    /* Copy the second set to the device */
    d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
    if (d_Y == NULL)
      @{
        fprintf(stderr, "copyin error h_Y1\n");
        exit(EXIT_FAILURE);
      @}

    /* Create the handle */
    s = cublasCreate(&h);
    if (s != CUBLAS_STATUS_SUCCESS)
      @{
        fprintf(stderr, "cublasCreate failed %d\n", s);
        exit(EXIT_FAILURE);
      @}

    /* Perform saxpy using CUBLAS library function */
    s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
    if (s != CUBLAS_STATUS_SUCCESS)
      @{
        fprintf(stderr, "cublasSaxpy failed %d\n", s);
        exit(EXIT_FAILURE);
      @}

    /* Copy the results from the device */
    acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));

@end smallexample
@center Use Case 2

@section OpenACC library and environment variables

There are two environment variables associated with the OpenACC library
that may be used to control the device type and device number:
@env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
environment variables can be used as an alternative to calling
@code{acc_set_device_num()}. As seen in the second use case, the device
type and device number were specified using @code{acc_set_device_num()}.
If, however, the aforementioned environment variables are set, then the
call to @code{acc_set_device_num()} is not required.

The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
is called prior to a call to an OpenACC function, then you must call
@code{acc_set_device_num()}@footnote{More complete information
about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the @uref{https://www.openacc.org, ``OpenACC
Application Programming Interface''}, Version 2.6.}.


@c ---------------------------------------------------------------------
@c OpenACC Profiling Interface
@c ---------------------------------------------------------------------

@node OpenACC Profiling Interface
@chapter OpenACC Profiling Interface

@section Implementation Status and Implementation-Defined Behavior

We're implementing the OpenACC Profiling Interface as defined by the
OpenACC 2.6 specification. We're clarifying some aspects here as
@emph{implementation-defined behavior}, while they're still under
discussion within the OpenACC Technical Committee.

This implementation is tuned to keep the performance impact as low as
possible for the (very common) case that the Profiling Interface is
not enabled. This is relevant, as the Profiling Interface affects all
the @emph{hot} code paths (in the target code, not in the offloaded
code). Users of the OpenACC Profiling Interface can be expected to
understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because of the
@emph{runtime} (libgomp) calling into a third-party @emph{library} for
every event that has been registered.

We're not yet accounting for the fact that @cite{OpenACC events may
occur during event processing}.
We just handle one case specially, as required by CUDA 9.0
@command{nvprof}: @code{acc_get_device_type}
(@ref{acc_get_device_type}) may be called from
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
callbacks.

We're not yet implementing initialization via an
@code{acc_register_library} function that is either statically linked
in, or loaded dynamically via @env{LD_PRELOAD}.
Initialization via @code{acc_register_library} functions dynamically
loaded via the @env{ACC_PROFLIB} environment variable does work, as
does directly calling @code{acc_prof_register},
@code{acc_prof_unregister}, and @code{acc_prof_lookup}.

As currently there are no inquiry functions defined, calls to
@code{acc_prof_lookup} will always return @code{NULL}.

There aren't separate @emph{start} and @emph{stop} events defined for the
event types @code{acc_ev_create}, @code{acc_ev_delete},
@code{acc_ev_alloc}, and @code{acc_ev_free}. It's not clear if these
should be triggered before or after the actual device-specific call is
made. We trigger them after.

Remarks about data provided to callbacks:

@table @asis

@item @code{acc_prof_info.event_type}
It's not clear if for @emph{nested} event callbacks (for example,
@code{acc_ev_enqueue_launch_start} as part of a parent compute
construct), this should be set for the nested event
(@code{acc_ev_enqueue_launch_start}), or if the value of the parent
construct should remain (@code{acc_ev_compute_construct_start}). In
this implementation, the value will generally correspond to the
innermost nested event type.

@item @code{acc_prof_info.device_type}
@itemize

@item
For @code{acc_ev_compute_construct_start}, and in presence of an
@code{if} clause with @emph{false} argument, this will still refer to
the offloading device type.
It's not clear if that's the expected behavior.

@item
Complementary to the item before, for
@code{acc_ev_compute_construct_end}, this is set to
@code{acc_device_host} in presence of an @code{if} clause with
@emph{false} argument.
It's not clear if that's the expected behavior.

@end itemize

@item @code{acc_prof_info.thread_id}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.async}
@itemize

@item
Not yet implemented correctly for
@code{acc_ev_compute_construct_start}.

@item
In a compute construct, for host-fallback
execution/@code{acc_device_host} it will always be
@code{acc_async_sync}.
It's not clear if that's the expected behavior.

@item
For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
it will always be @code{acc_async_sync}.
It's not clear if that's the expected behavior.

@end itemize

@item @code{acc_prof_info.async_queue}
There is no @cite{limited number of asynchronous queues} in libgomp.
This will always have the same value as @code{acc_prof_info.async}.

@item @code{acc_prof_info.src_file}
Always @code{NULL}; not yet implemented.

@item @code{acc_prof_info.func_name}
Always @code{NULL}; not yet implemented.

@item @code{acc_prof_info.line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.end_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.func_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.func_end_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
Relating to @code{acc_prof_info.event_type} discussed above, in this
implementation, this will always be the same value as
@code{acc_prof_info.event_type}.

@item @code{acc_event_info.*.parent_construct}
@itemize

@item
Will be @code{acc_construct_parallel} for all OpenACC compute
constructs as well as many OpenACC Runtime API calls; should be the
one matching the actual construct, or
@code{acc_construct_runtime_api}, respectively.

@item
Will be @code{acc_construct_enter_data} or
@code{acc_construct_exit_data} when processing variable mappings
specified in OpenACC @emph{declare} directives; should be
@code{acc_construct_declare}.

@item
For implicit @code{acc_ev_device_init_start},
@code{acc_ev_device_init_end}, and explicit as well as implicit
@code{acc_ev_alloc}, @code{acc_ev_free},
@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, will be
@code{acc_construct_parallel}; should reflect the real parent
construct.

@end itemize

@item @code{acc_event_info.*.implicit}
For @code{acc_ev_alloc}, @code{acc_ev_free},
@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, this currently will be @code{1}
also for explicit usage.

@item @code{acc_event_info.data_event.var_name}
Always @code{NULL}; not yet implemented.

@item @code{acc_event_info.data_event.host_ptr}
For @code{acc_ev_alloc} and @code{acc_ev_free}, this is always
@code{NULL}.

@item @code{typedef union acc_api_info}
@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
Information}. This should obviously be @code{typedef @emph{struct}
acc_api_info}.

@item @code{acc_api_info.device_api}
Possibly not yet implemented correctly for
@code{acc_ev_compute_construct_start},
@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
will always be @code{acc_device_api_none} for these event types.
For @code{acc_ev_enter_data_start}, it will be
@code{acc_device_api_none} in some cases.

@item @code{acc_api_info.device_type}
Always the same as @code{acc_prof_info.device_type}.

@item @code{acc_api_info.vendor}
Always @code{-1}; not yet implemented.

@item @code{acc_api_info.device_handle}
Always @code{NULL}; not yet implemented.

@item @code{acc_api_info.context_handle}
Always @code{NULL}; not yet implemented.

@item @code{acc_api_info.async_handle}
Always @code{NULL}; not yet implemented.

@end table

Remarks about certain event types:

@table @asis

@item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
@itemize

@item
@c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
@c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
@c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
When a compute construct triggers implicit
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
events, they currently aren't @emph{nested within} the corresponding
@code{acc_ev_compute_construct_start} and
@code{acc_ev_compute_construct_end}, but they're currently observed
@emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do here: the standard asks us to provide a lot
of details to the @code{acc_ev_compute_construct_start} callback, but
how can we do that without (implicitly) initializing a device first?

@item
Callbacks for these event types will not be invoked for calls to the
@code{acc_set_device_type} and @code{acc_set_device_num} functions.
It's not clear if they should be.

@end itemize

@item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
@itemize

@item
Callbacks for these event types will also be invoked for OpenACC
@emph{host_data} constructs.
It's not clear if they should be.

@item
Callbacks for these event types will also be invoked when processing
variable mappings specified in OpenACC @emph{declare} directives.
It's not clear if they should be.

@end itemize

@end table

Callbacks for the following event types will be invoked, but dispatch
and information provided therein have not yet been thoroughly reviewed:

@itemize
@item @code{acc_ev_alloc}
@item @code{acc_ev_free}
@item @code{acc_ev_update_start}, @code{acc_ev_update_end}
@item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
@item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
@end itemize

During device initialization, and finalization, respectively,
callbacks for the following event types will not yet be invoked:

@itemize
@item @code{acc_ev_alloc}
@item @code{acc_ev_free}
@end itemize

Callbacks for the following event types have not yet been implemented,
so currently won't be invoked:

@itemize
@item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
@item @code{acc_ev_runtime_shutdown}
@item @code{acc_ev_create}, @code{acc_ev_delete}
@item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
@end itemize

For the following runtime library functions, not all expected
callbacks will be invoked (mostly concerning implicit device
initialization):

@itemize
@item @code{acc_get_num_devices}
@item @code{acc_set_device_type}
@item @code{acc_get_device_type}
@item @code{acc_set_device_num}
@item @code{acc_get_device_num}
@item @code{acc_init}
@item @code{acc_shutdown}
@end itemize

Aside from implicit device initialization, for the following runtime
library functions, no callbacks will be invoked for shared-memory
offloading devices (it's not clear if they should be):

@itemize
@item @code{acc_malloc}
@item @code{acc_free}
@item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
@item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
@item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
@item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
@item @code{acc_update_device}, @code{acc_update_device_async}
@item @code{acc_update_self}, @code{acc_update_self_async}
@item @code{acc_map_data}, @code{acc_unmap_data}
@item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
@item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
@end itemize

@c ---------------------------------------------------------------------
@c OpenMP-Implementation Specifics
@c ---------------------------------------------------------------------

@node OpenMP-Implementation Specifics
@chapter OpenMP-Implementation Specifics

@menu
* Implementation-defined ICV Initialization::
* OpenMP Context Selectors::
* Memory allocation::
@end menu

@node Implementation-defined ICV Initialization
@section Implementation-defined ICV Initialization
@cindex Implementation specific setting

@multitable @columnfractions .30 .70
@item @var{affinity-format-var} @tab See @ref{OMP_AFFINITY_FORMAT}.
@item @var{def-allocator-var} @tab See @ref{OMP_ALLOCATOR}.
@item @var{max-active-levels-var} @tab See @ref{OMP_MAX_ACTIVE_LEVELS}.
@item @var{dyn-var} @tab See @ref{OMP_DYNAMIC}.
@item @var{nthreads-var} @tab See @ref{OMP_NUM_THREADS}.
@item @var{num-devices-var} @tab Number of non-host devices found
by GCC's run-time library.
@item @var{num-procs-var} @tab The number of CPU cores on the
initial device, except that affinity settings might lead to a
smaller number. On non-host devices, the value of the
@var{nthreads-var} ICV.
@item @var{place-partition-var} @tab See @ref{OMP_PLACES}.
@item @var{run-sched-var} @tab See @ref{OMP_SCHEDULE}.
@item @var{stacksize-var} @tab See @ref{OMP_STACKSIZE}.
@item @var{thread-limit-var} @tab See @ref{OMP_TEAMS_THREAD_LIMIT}.
@item @var{wait-policy-var} @tab See @ref{OMP_WAIT_POLICY} and
@ref{GOMP_SPINCOUNT}.
@end multitable


@node OpenMP Context Selectors
@section OpenMP Context Selectors

@code{vendor} is always @code{gnu}. References are to the GCC manual.

@c NOTE: Only the following selectors have been implemented. To add
@c additional traits for target architecture, TARGET_OMP_DEVICE_KIND_ARCH_ISA
@c has to be implemented; cf. also PR target/105640.
@c For offload devices, add *additionally* gcc/config/*/t-omp-device.

For the host compiler, @code{kind} always matches @code{host}; for the
offloading architectures AMD GCN and Nvidia PTX, @code{kind} always matches
@code{gpu}. For the x86 family of computers, AMD GCN and Nvidia PTX,
the following traits are supported in addition; while OpenMP is supported
on more architectures, GCC currently does not match any @code{arch} or
@code{isa} traits for those.

@multitable @columnfractions .65 .30
@headitem @code{arch} @tab @code{isa}
@item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
      @code{i586}, @code{i686}, @code{ia32}
      @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
@item @code{amdgcn}, @code{gcn}
      @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
      @code{gfx803} is supported as an alias for @code{fiji}.}
@item @code{nvptx}
      @tab See @code{-march=} in ``Nvidia PTX Options''
@end multitable


@node Memory allocation
@section Memory allocation

For the available predefined allocators and, as applicable, their associated
predefined memory spaces and for the available traits and their default values,
see @ref{OMP_ALLOCATOR}. Predefined allocators without an associated memory
space use the @code{omp_default_mem_space} memory space.

For the memory spaces, the following applies:
@itemize
@item @code{omp_default_mem_space} is supported
@item @code{omp_const_mem_space} maps to @code{omp_default_mem_space}
@item @code{omp_low_lat_mem_space} maps to @code{omp_default_mem_space}
@item @code{omp_large_cap_mem_space} maps to @code{omp_default_mem_space},
      unless the memkind library is available
@item @code{omp_high_bw_mem_space} maps to @code{omp_default_mem_space},
      unless the memkind library is available
@end itemize

On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
library} (@code{libmemkind.so.0}) is available at runtime, it is used when
creating memory allocators requesting

@itemize
@item the memory space @code{omp_high_bw_mem_space}
@item the memory space @code{omp_large_cap_mem_space}
@item the @code{partition} trait @code{interleaved}; note that for
      @code{omp_large_cap_mem_space} the allocation will not be interleaved
@end itemize

On Linux systems, where the @uref{https://github.com/numactl/numactl, numa
library} (@code{libnuma.so.1}) is available at runtime, it is used when
creating memory allocators requesting

@itemize
@item the @code{partition} trait @code{nearest}, except when both the
      memkind library is available and the memory space is either
      @code{omp_large_cap_mem_space} or @code{omp_high_bw_mem_space}
@end itemize

Note that the numa library will round up the allocation size to a multiple of
the system page size; therefore, consider using it only with large data or
by sharing allocations via the @code{pool_size} trait. Furthermore, the Linux
kernel does not guarantee that an allocation will always be on the nearest NUMA
node nor that after reallocation the same node will be used. Note additionally
that, on Linux, the default setting of the memory placement policy is to use the
current node; therefore, unless the memory placement policy has been overridden,
the @code{partition} trait @code{environment} (the default) will be effectively
a @code{nearest} allocation.

Additional notes regarding the traits:
@itemize
@item The @code{pinned} trait is unsupported.
@item The default for the @code{pool_size} trait is no pool and for every
      (re)allocation the associated library routine is called, which might
      internally use a memory pool.
@item For the @code{partition} trait, the partition part size will be the same
      as the requested size (i.e. @code{interleaved} or @code{blocked} has no
      effect), except for @code{interleaved} when the memkind library is
      available. Furthermore, for @code{nearest}, and unless the numa library
      is available, the memory might not be on the same NUMA node as the
      thread that allocated the memory; on Linux, this is in particular the
      case when the memory placement policy is set to preferred.
@item The @code{access} trait has no effect; memory is always
      accessible by all threads.
@item The @code{sync_hint} trait has no effect.
@end itemize

@c ---------------------------------------------------------------------
@c Offload-Target Specifics
@c ---------------------------------------------------------------------

@node Offload-Target Specifics
@chapter Offload-Target Specifics

The following sections present notes on the offload-target specifics.

@menu
* AMD Radeon::
* nvptx::
@end menu

@node AMD Radeon
@section AMD Radeon (GCN)

On the hardware side, there is the following hierarchy (fine to coarse):
@itemize
@item work item (thread)
@item wavefront
@item work group
@item compute unit (CU)
@end itemize

All OpenMP and OpenACC levels are used, i.e.
@itemize
@item OpenMP's simd and OpenACC's vector map to work items (threads)
@item OpenMP's threads (``parallel'') and OpenACC's workers map
      to wavefronts
@item OpenMP's teams and OpenACC's gangs use a threadpool with the
      size of the number of teams or gangs, respectively.
@end itemize

The used sizes are:
@itemize
@item The number of teams is the specified @code{num_teams} (OpenMP) or
      @code{num_gangs} (OpenACC) or otherwise the number of CUs. It is
      limited by two times the number of CUs.
@item The number of wavefronts is 4 for gfx900 and 16 otherwise;
      @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
      override this if smaller.
@item The wavefront has 102 scalars and 64 vectors.
@item The number of work items is always 64.
@item The hardware permits maximally 40 workgroups/CU and
      16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
@item 80 scalar registers and 24 vector registers in non-kernel functions
      (the chosen procedure-calling API).
@item For the kernel itself: as many as register pressure demands (number of
      teams and number of threads, scaled down if registers are exhausted).
@end itemize

Implementation remarks:
@itemize
@item I/O within OpenMP target regions and OpenACC parallel/kernels is
      supported using the C library @code{printf} functions and the Fortran
      @code{print}/@code{write} statements.
@item Reverse offload regions (i.e. @code{target} regions with
      @code{device(ancestor:1)}) are processed serially per @code{target}
      region such that the next reverse offload region is only executed
      after the previous one returned.
@item OpenMP code that has a @code{requires} directive with
      @code{unified_shared_memory} will remove any GCN device from the list
      of available devices (``host fallback'').
@item The available stack size can be changed using the @code{GCN_STACK_SIZE}
      environment variable; the default is 32 kiB per thread.
@end itemize



@node nvptx
@section nvptx

On the hardware side, there is the following hierarchy (fine to coarse):
@itemize
@item thread
@item warp
@item thread block
@item streaming multiprocessor
@end itemize

All OpenMP and OpenACC levels are used, i.e.
@itemize
@item OpenMP's simd and OpenACC's vector map to threads
@item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
@item OpenMP's teams and OpenACC's gangs use a threadpool with the
      size of the number of teams or gangs, respectively.
@end itemize

The used sizes are:
@itemize
@item The @code{warp_size} is always 32.
@item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
@item The number of teams is limited by the number of blocks the device can
      host simultaneously.
@end itemize

Additional information can be obtained by setting the environment variable
@code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for launch
parameters).

GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA;
CUDA caches the JIT-compiled code in the user's directory (see the CUDA
documentation; this can be tuned by the environment variables
@code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}).

Note: While PTX ISA is generic, the @code{-mptx=} and @code{-march=}
command-line options still affect the used PTX ISA code and, thus, the
requirements on CUDA version and hardware.

5459The implementation remark:
5460@itemize
5461@item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
5462 using the C library @code{printf} functions. Note that the Fortran
5463 @code{print}/@code{write} statements are not supported, yet.
5464@item Compilation OpenMP code that contains @code{requires reverse_offload}
5465 requires at least @code{-march=sm_35}, compiling for @code{-march=sm_30}
5466 is not supported.
eda38850
TB
5467@item For code containing reverse offload (i.e. @code{target} regions with
5468 @code{device(ancestor:1)}), there is a slight performance penalty
5469 for @emph{all} target regions, consisting mostly of shutdown delay
5470 Per device, reverse offload regions are processed serially such that
5471 the next reverse offload region is only executed after the previous
5472 one returned.
f1af7d65
TB
5473@item OpenMP code that has a @code{requires} directive with
5474 @code{unified_shared_memory} will remove any nvptx device from the
eda38850 5475 list of available devices (``host fallback'').
2cd0689a
TB
5476@item The default per-warp stack size is 128 kiB; see also @code{-msoft-stack}
5477 in the GCC manual.
25072a47
TB
5478@item The OpenMP routines @code{omp_target_memcpy_rect} and
5479 @code{omp_target_memcpy_rect_async} and the @code{target update}
5480 directive for non-contiguous list items will use the 2D and 3D
5481 memory-copy functions of the CUDA library. Higher dimensions will
5482 call those functions in a loop and are therefore supported.
d77de738
ML
5483@end itemize


@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp.  Only maintainers should need them.

@menu
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
* Implementing OpenACC's PARALLEL construct::
@end menu


@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternatively, we could generate two copies of the parallel subfunction
and only include this block in the version run by the primary thread.
Surely this is not worthwhile though...


@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name, we use

@smallexample
  void GOMP_critical_start (void);
  void GOMP_critical_end (void);
@end smallexample

so that we don't get COPY relocations from libgomp to the main
application.

With a specified name, use omp_set_lock and omp_unset_lock with
the name being transformed into a variable declared like

@smallexample
  omp_lock_t gomp_critical_user_<name> __attribute__((common))
@end smallexample

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at
startup.



@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that we could add

@smallexample
  void GOMP_atomic_enter (void)
  void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.



@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.



@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
  void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In @emph{most} cases we can map this directly to @code{__thread}, except
that OpenMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to @code{.ctors}.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.



@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.


@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and ``small'' structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
  body;
@end smallexample

which becomes

@smallexample
@{
  int x = x, y;

  // for stuff

  if (i == n)
    y = y;
@}
@end smallexample

where the ``x=x'' and ``y=y'' assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the ``outer''
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.



@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the primary thread iterates over the
array to collect the values.


@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.


@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
      @{
        long _e1 = _e0, i;
        for (i = _s0; i < _e1; i++)
          body;
      @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        long _e1 = _e0;
        for (i = _s0; i < _e1; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we can also leave it
alone if we like.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations.  Which would mean that we wouldn't need to call any
of these routines.

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...



@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample



@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
  GOMP_barrier ();
@end smallexample


@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample



@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

@smallexample
  void GOACC_parallel ()
@end smallexample



@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
``openacc'' or ``openmp'' or both to the keywords field in the bug
report, as appropriate.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye