\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2022 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software.  Copies published by the Free Software Foundation raise
     funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library.  This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page


@node Top, Enabling OpenMP
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library.  This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library.  Support
for OpenACC and offloading (both OpenACC and OpenMP 4's target construct)
was added later, and the library was renamed the GNU Offloading and Multi
Processing Runtime Library.


@comment
@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.
@comment
@menu
* Enabling OpenMP::                How to enable OpenMP for your applications.
* OpenMP Implementation Status::   List of implemented features by OpenMP version
* OpenMP Runtime Library Routines: Runtime Library Routines.
                                   The OpenMP runtime application programming
                                   interface.
* OpenMP Environment Variables: Environment Variables.
                                   Influencing OpenMP runtime behavior with
                                   environment variables.
* Enabling OpenACC::               How to enable OpenACC for your
                                   applications.
* OpenACC Runtime Library Routines:: The OpenACC runtime application
                                   programming interface.
* OpenACC Environment Variables::  Influencing OpenACC runtime behavior with
                                   environment variables.
* CUDA Streams Usage::             Notes on the implementation of
                                   asynchronous operations.
* OpenACC Library Interoperability:: OpenACC library interoperability with the
                                   NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* OpenMP-Implementation Specifics:: Notes on specifics of this OpenMP
                                   implementation
* Offload-Target Specifics::       Notes on offload-target specific internals
* The libgomp ABI::                Notes on the external ABI presented by libgomp.
* Reporting Bugs::                 How to report bugs in the GNU Offloading and
                                   Multi Processing Runtime Library.
* Copying::                        GNU general public license says
                                   how you can copy and share libgomp.
* GNU Free Documentation License::
                                   How you can copy and share this manual.
* Funding::                        How to help assure continued work for free
                                   software.
* Library Index::                  Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified.  For C/C++, this enables the
@code{#pragma omp} directives.  For Fortran, it enables @code{!$omp}
directives in free form; @code{c$omp}, @code{*$omp} and @code{!$omp}
directives in fixed form; @code{!$} conditional compilation sentinels in
free form; and @code{c$}, @code{*$} and @code{!$} sentinels in fixed form.
The flag also arranges for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).

A complete description of all OpenMP directives may be found in the
@uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
See also @ref{OpenMP Implementation Status}.


@c ---------------------------------------------------------------------
@c OpenMP Implementation Status
@c ---------------------------------------------------------------------

@node OpenMP Implementation Status
@chapter OpenMP Implementation Status

@menu
* OpenMP 4.5::             Feature completion status to 4.5 specification
* OpenMP 5.0::             Feature completion status to 5.0 specification
* OpenMP 5.1::             Feature completion status to 5.1 specification
* OpenMP 5.2::             Feature completion status to 5.2 specification
* OpenMP Technical Report 11::  Feature completion status to first 6.0 preview
@end menu

The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
the value @code{201511} (i.e. OpenMP 4.5).

@node OpenMP 4.5
@section OpenMP 4.5

The OpenMP 4.5 specification is fully supported.

@node OpenMP 5.0
@section OpenMP 5.0

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Array shaping @tab N @tab
@item Array sections with non-unit strides in C and C++ @tab N @tab
@item Iterators @tab Y @tab
@item @code{metadirective} directive @tab N @tab
@item @code{declare variant} directive
      @tab P @tab @emph{simd} traits not handled correctly
@item @emph{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
      env variable @tab Y @tab
@item Nested-parallel changes to @emph{max-active-levels-var} ICV @tab Y @tab
@item @code{requires} directive @tab P
      @tab complete but no non-host device provides @code{unified_address},
      @code{unified_shared_memory} or @code{reverse_offload}
@item @code{teams} construct outside an enclosing target region @tab Y @tab
@item Non-rectangular loop nests @tab Y @tab
@item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
@item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
      constructs @tab Y @tab
@item Collapse of associated loops that are imperfectly nested loops @tab N @tab
@item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
      @code{simd} construct @tab Y @tab
@item @code{atomic} constructs in @code{simd} @tab Y @tab
@item @code{loop} construct @tab Y @tab
@item @code{order(concurrent)} clause @tab Y @tab
@item @code{scan} directive and @code{in_scan} modifier for the
      @code{reduction} clause @tab Y @tab
@item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
@item @code{in_reduction} clause on @code{target} constructs @tab P
      @tab @code{nowait} only stub
@item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
@item @code{task} modifier to @code{reduction} clause @tab Y @tab
@item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
@item @code{detach} clause to @code{task} construct @tab Y @tab
@item @code{omp_fulfill_event} runtime routine @tab Y @tab
@item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
      and @code{taskloop simd} constructs @tab Y @tab
@item @code{taskloop} construct cancelable by @code{cancel} construct
      @tab Y @tab
@item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
      @tab Y @tab
@item Predefined memory spaces, memory allocators, allocator traits
      @tab Y @tab Some are only stubs
@item Memory management routines @tab Y @tab
@item @code{allocate} directive @tab N @tab
@item @code{allocate} clause @tab P @tab Initial support
@item @code{use_device_addr} clause on @code{target data} @tab Y @tab
@item @code{ancestor} modifier on @code{device} clause
      @tab Y @tab See comment for @code{requires}
@item Implicit declare target directive @tab Y @tab
@item Discontiguous array section with @code{target update} construct
      @tab N @tab
@item C/C++'s lvalue expressions in @code{to}, @code{from}
      and @code{map} clauses @tab N @tab
@item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
@item Nested @code{declare target} directive @tab Y @tab
@item Combined @code{master} constructs @tab Y @tab
@item @code{depend} clause on @code{taskwait} @tab Y @tab
@item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
      @tab Y @tab
@item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
@item @code{depobj} construct and depend objects @tab Y @tab
@item Lock hints were renamed to synchronization hints @tab Y @tab
@item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
@item Map-order clarifications @tab P @tab
@item @code{close} @emph{map-type-modifier} @tab Y @tab
@item Mapping C/C++ pointer variables and to assign the address of
      device memory mapped by an array section @tab P @tab
@item Mapping of Fortran pointer and allocatable variables, including pointer
      and allocatable components of variables
      @tab P @tab Mapping of vars with allocatable components unsupported
@item @code{defaultmap} extensions @tab Y @tab
@item @code{declare mapper} directive @tab N @tab
@item @code{omp_get_supported_active_levels} routine @tab Y @tab
@item Runtime routines and environment variables to display runtime thread
      affinity information @tab Y @tab
@item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
      routines @tab Y @tab
@item @code{omp_get_device_num} runtime routine @tab Y @tab
@item OMPT interface @tab N @tab
@item OMPD interface @tab N @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.0 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Supporting C++'s range-based for loop @tab Y @tab
@end multitable


@node OpenMP 5.1
@section OpenMP 5.1

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item OpenMP directive as C++ attribute specifiers @tab Y @tab
@item @code{omp_all_memory} reserved locator @tab Y @tab
@item @emph{target_device trait} in OpenMP Context @tab N @tab
@item @code{target_device} selector set in context selectors @tab N @tab
@item C/C++'s @code{declare variant} directive: elision support of
      preprocessed code @tab N @tab
@item @code{declare variant}: new clauses @code{adjust_args} and
      @code{append_args} @tab N @tab
@item @code{dispatch} construct @tab N @tab
@item device-specific ICV settings with environment variables @tab Y @tab
@item @code{assume} directive @tab Y @tab
@item @code{nothing} directive @tab Y @tab
@item @code{error} directive @tab Y @tab
@item @code{masked} construct @tab Y @tab
@item @code{scope} directive @tab Y @tab
@item Loop transformation constructs @tab N @tab
@item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
      clauses of the @code{taskloop} construct @tab Y @tab
@item @code{align} clause/modifier in @code{allocate} directive/clause
      and @code{allocator} directive @tab P @tab C/C++ on clause only
@item @code{thread_limit} clause to @code{target} construct @tab Y @tab
@item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
@item Iterators in @code{target update} motion clauses and @code{map}
      clauses @tab N @tab
@item Indirect calls to the device version of a procedure or function in
      @code{target} regions @tab N @tab
@item @code{interop} directive @tab N @tab
@item @code{omp_interop_t} object support in runtime routines @tab N @tab
@item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
@item Extensions to the @code{atomic} directive @tab Y @tab
@item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
@item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
@item @code{private} and @code{firstprivate} argument to @code{default}
      clause in C and C++ @tab Y @tab
@item @code{present} argument to @code{defaultmap} clause @tab N @tab
@item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
      @code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
      routines @tab Y @tab
@item @code{omp_target_is_accessible} runtime routine @tab Y @tab
@item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
      runtime routines @tab Y @tab
@item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
@item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
      @code{omp_aligned_calloc} runtime routines @tab Y @tab
@item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
      @code{omp_atv_default} changed @tab Y @tab
@item @code{omp_display_env} runtime routine @tab Y @tab
@item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
@item @code{ompt_sync_region_t} enum additions @tab N @tab
@item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
      and @code{ompt_state_wait_barrier_teams} @tab N @tab
@item @code{ompt_callback_target_data_op_emi_t},
      @code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
      and @code{ompt_callback_target_submit_emi_t} @tab N @tab
@item @code{ompt_callback_error_t} type @tab N @tab
@item @code{OMP_PLACES} syntax extensions @tab Y @tab
@item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
      variables @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.1 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Support of strictly structured blocks in Fortran @tab Y @tab
@item Support of structured block sequences in C/C++ @tab Y @tab
@item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
      clause @tab Y @tab
@item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
@item Pointer predetermined firstprivate getting initialized
      to address of matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
@item For Fortran, diagnose placing declarative before/between @code{USE},
      @code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
@item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
@item @code{indirect} clause in @code{declare target} @tab N @tab
@item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
@end multitable


@node OpenMP 5.2
@section OpenMP 5.2

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item @code{omp_in_explicit_task} routine and @emph{explicit-task-var} ICV
      @tab Y @tab
@item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
      namespaces @tab N/A
      @tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
      sentinel as C/C++ pragma and C++ attributes are warned for with
      @code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
      (enabled by default), respectively; for Fortran free-source code, there is
      a warning enabled by default and, for fixed-source code, the @code{omx}
      sentinel is warned for with @code{-Wsurprising} (enabled by
      @code{-Wall}).  Unknown clauses are always rejected with an error.}
@item Clauses on @code{end} directive can be on directive @tab Y @tab
@item Deprecation of no-argument @code{destroy} clause on @code{depobj}
      @tab N @tab
@item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
@item Deprecation of minus operator for reductions @tab N @tab
@item Deprecation of separating @code{map} modifiers without comma @tab N @tab
@item @code{declare mapper} with iterator and @code{present} modifiers
      @tab N @tab
@item If a matching mapped list item is not found in the data environment, the
      pointer retains its original value @tab N @tab
@item New @code{enter} clause as alias for @code{to} on declare target directive
      @tab Y @tab
@item Deprecation of @code{to} clause on declare target directive @tab N @tab
@item Extended list of directives permitted in Fortran pure procedures
      @tab N @tab
@item New @code{allocators} directive for Fortran @tab N @tab
@item Deprecation of @code{allocate} directive for Fortran
      allocatables/pointers @tab N @tab
@item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
@item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
      @tab N @tab
@item Deprecation of traits array following the allocator_handle expression in
      @code{uses_allocators} @tab N @tab
@item New @code{otherwise} clause as alias for @code{default} on metadirectives
      @tab N @tab
@item Deprecation of @code{default} clause on metadirectives @tab N @tab
@item Deprecation of delimited form of @code{declare target} @tab N @tab
@item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
@item @code{allocate} and @code{firstprivate} clauses on @code{scope}
      @tab Y @tab
@item @code{ompt_callback_work} @tab N @tab
@item Default map-type for the @code{map} clause in @code{target enter/exit data}
      @tab Y @tab
@item New @code{doacross} clause as alias for @code{depend} with
      @code{source}/@code{sink} modifier @tab Y @tab
@item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
      @tab N @tab
@item @code{omp_cur_iteration} keyword @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.2 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item For Fortran, optional comma between directive and clause @tab N @tab
@item Conforming device numbers and @code{omp_initial_device} and
      @code{omp_invalid_device} enum/PARAMETER @tab Y @tab
@item Initial value of @emph{default-device-var} ICV with
      @code{OMP_TARGET_OFFLOAD=mandatory} @tab N @tab
@item @emph{interop_types} in any position of the modifier list for the @code{init} clause
      of the @code{interop} construct @tab N @tab
@end multitable


@node OpenMP Technical Report 11
@section OpenMP Technical Report 11

Technical Report (TR) 11 is the first preview for OpenMP 6.0.

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
      @tab N/A @tab Backward compatibility
@item The @code{decl} attribute was added to the C++ attribute syntax
      @tab N @tab
@item @code{_ALL} suffix to the device-scope environment variables
      @tab P @tab Host device number wrongly accepted
@item For Fortran, @emph{locator list} can be also function reference with
      data pointer result @tab N @tab
@item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
      @tab N @tab
@item Implicit reduction identifiers of C++ classes
      @tab N @tab
@item Change of the @emph{map-type} property from @emph{ultimate} to
      @emph{default} @tab N @tab
@item Concept of @emph{assumed-size arrays} in C and C++
      @tab N @tab
@item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
      @tab N @tab
@item @code{groupprivate} directive @tab N @tab
@item @code{local} clause to declare target directive @tab N @tab
@item @code{part_size} allocator trait @tab N @tab
@item @code{pin_device}, @code{preferred_device} and @code{target_access}
      allocator traits
      @tab N @tab
@item @code{access} allocator trait changes @tab N @tab
@item Extension of @code{interop} operation of @code{append_args}, allowing all
      modifiers of the @code{init} clause
      @tab N @tab
@item @code{interop} clause to @code{dispatch} @tab N @tab
@item @code{apply} code to loop-transforming constructs @tab N @tab
@item @code{omp_curr_progress_width} identifier @tab N @tab
@item @code{safesync} clause to the @code{parallel} construct @tab N @tab
@item @code{omp_get_max_progress_width} runtime routine @tab N @tab
@item @code{strict} modifier keyword to @code{num_threads}, @code{num_tasks}
      and @code{grainsize} @tab N @tab
@item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
@item Routines for obtaining memory spaces/allocators for shared/device memory
      @tab N @tab
@item @code{omp_get_memspace_num_resources} routine @tab N @tab
@item @code{omp_get_submemspace} routine @tab N @tab
@item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
@item Extension of @code{OMP_DEFAULT_DEVICE} and new
      @code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
@item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
@end multitable

@unnumberedsubsec Other new TR 11 features
@multitable @columnfractions .60 .10 .25
@item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
@item Mapping lambda captures @tab N @tab
@item For Fortran, atomic compare with storing the comparison result
      @tab N @tab
@item @code{aligned} clause changes for @code{simd} and @code{declare simd}
      @tab N @tab
@end multitable



@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines

The runtime routines described here are defined by Section 3 of the OpenMP
specification in version 4.5.  The routines are structured in the following
four parts:

@menu
Control threads, processors and the parallel environment.  They have C
linkage, and do not throw exceptions.

* omp_get_active_level:: Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_cancellation:: Whether cancellation support is enabled
* omp_get_default_device:: Get the default device for target regions
* omp_get_device_num:: Get device that current thread is running on
* omp_get_dynamic:: Dynamic teams setting
* omp_get_initial_device:: Device number of host device
* omp_get_level:: Number of parallel regions
* omp_get_max_active_levels:: Current maximum number of active regions
* omp_get_max_task_priority:: Maximum task priority value that can be set
* omp_get_max_teams:: Maximum number of teams for teams region
* omp_get_max_threads:: Maximum number of threads of parallel region
* omp_get_nested:: Nested parallel regions
* omp_get_num_devices:: Number of target devices
* omp_get_num_procs:: Number of processors online
* omp_get_num_teams:: Number of teams
* omp_get_num_threads:: Size of the active team
* omp_get_proc_bind:: Whether threads may be moved between CPUs
* omp_get_schedule:: Obtain the runtime scheduling method
* omp_get_supported_active_levels:: Maximum number of active regions supported
* omp_get_team_num:: Get team number
* omp_get_team_size:: Number of threads in a team
* omp_get_teams_thread_limit:: Maximum number of threads imposed by teams
* omp_get_thread_limit:: Maximum number of threads
* omp_get_thread_num:: Current thread ID
* omp_in_parallel:: Whether a parallel region is active
* omp_in_final:: Whether in final or included task region
* omp_is_initial_device:: Whether executing on the host device
* omp_set_default_device:: Set the default device for target regions
* omp_set_dynamic:: Enable/disable dynamic teams
* omp_set_max_active_levels:: Limits the number of active parallel regions
* omp_set_nested:: Enable/disable nested parallel regions
* omp_set_num_teams:: Set upper teams limit for teams region
* omp_set_num_threads:: Set upper team size limit
* omp_set_schedule:: Set the runtime scheduling method
* omp_set_teams_thread_limit:: Set upper thread limit for teams construct

Initialize, set, test, unset and destroy simple and nested locks.

* omp_init_lock:: Initialize simple lock
* omp_set_lock:: Wait for and set simple lock
* omp_test_lock:: Test and set simple lock if available
* omp_unset_lock:: Unset simple lock
* omp_destroy_lock:: Destroy simple lock
* omp_init_nest_lock:: Initialize nested lock
* omp_set_nest_lock:: Wait for and set nested lock
* omp_test_nest_lock:: Test and set nested lock if available
* omp_unset_nest_lock:: Unset nested lock
* omp_destroy_nest_lock:: Destroy nested lock

Portable, thread-based, wall clock timer.

* omp_get_wtick:: Get timer precision.
* omp_get_wtime:: Elapsed wall clock time.

Support for event objects.

* omp_fulfill_event:: Fulfill and destroy an OpenMP event.
@end menu



@node omp_get_active_level
@section @code{omp_get_active_level} -- Number of active parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel blocks
enclosing the point of the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
@end table



@node omp_get_ancestor_thread_num
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number for the given
nesting level of the current thread.  For values of @var{level} outside
the range zero to @code{omp_get_level}, -1 is returned; if @var{level} is
@code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
@end table



@node omp_get_cancellation
@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
@table @asis
@item @emph{Description}:
This function returns @code{true} if cancellation is activated, @code{false}
otherwise.  Here, @code{true} and @code{false} represent their language-specific
counterparts.  Unless @env{OMP_CANCELLATION} is set to true, cancellation is
deactivated.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@end multitable

@item @emph{See also}:
@ref{OMP_CANCELLATION}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@end table



@node omp_get_default_device
@section @code{omp_get_default_device} -- Get the default device for target regions
@table @asis
@item @emph{Description}:
Get the default device number for @code{target} regions that have no
@code{device} clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
@end table



@node omp_get_device_num
@section @code{omp_get_device_num} -- Return device number of current device
@table @asis
@item @emph{Description}:
This function returns a device number that represents the device that the
current thread is executing on.  For OpenMP 5.0, this must be equal to the
value returned by the @code{omp_get_initial_device} function when called
from the host.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_initial_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
@end table



@node omp_get_dynamic
@section @code{omp_get_dynamic} -- Dynamic teams setting
@table @asis
@item @emph{Description}:
This function returns @code{true} if enabled, @code{false} otherwise.
Here, @code{true} and @code{false} represent their language-specific
counterparts.

The dynamic team setting may be initialized at startup by the
@env{OMP_DYNAMIC} environment variable or at runtime using
@code{omp_set_dynamic}.  If undefined, dynamic adjustment is
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
@end multitable

@item @emph{See also}:
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
@end table
739
740
741
742@node omp_get_initial_device
743@section @code{omp_get_initial_device} -- Return device number of initial device
744@table @asis
745@item @emph{Description}:
746This function returns a device number that represents the host device.
747For OpenMP 5.1, this must be equal to the value returned by the
748@code{omp_get_num_devices} function.
749
750@item @emph{C/C++}
751@multitable @columnfractions .20 .80
752@item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
753@end multitable
754
755@item @emph{Fortran}:
756@multitable @columnfractions .20 .80
757@item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
758@end multitable
759
760@item @emph{See also}:
761@ref{omp_get_num_devices}
762
763@item @emph{Reference}:
764@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
765@end table
766
767
768
769@node omp_get_level
770@section @code{omp_get_level} -- Obtain the current nesting level
771@table @asis
772@item @emph{Description}:
This function returns the nesting level of the parallel regions
enclosing the calling function call.
775
776@item @emph{C/C++}
777@multitable @columnfractions .20 .80
778@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
779@end multitable
780
781@item @emph{Fortran}:
782@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
784@end multitable
785
786@item @emph{See also}:
787@ref{omp_get_active_level}
788
789@item @emph{Reference}:
790@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
791@end table
792
793
794
795@node omp_get_max_active_levels
796@section @code{omp_get_max_active_levels} -- Current maximum number of active regions
797@table @asis
798@item @emph{Description}:
799This function obtains the maximum allowed number of nested, active parallel regions.
800
801@item @emph{C/C++}
802@multitable @columnfractions .20 .80
803@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
804@end multitable
805
806@item @emph{Fortran}:
807@multitable @columnfractions .20 .80
808@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
809@end multitable
810
811@item @emph{See also}:
812@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
813
814@item @emph{Reference}:
815@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
816@end table
817
818
819@node omp_get_max_task_priority
@section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
822@table @asis
823@item @emph{Description}:
824This function obtains the maximum allowed priority number for tasks.
825
826@item @emph{C/C++}
827@multitable @columnfractions .20 .80
828@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
829@end multitable
830
831@item @emph{Fortran}:
832@multitable @columnfractions .20 .80
833@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
834@end multitable
835
836@item @emph{Reference}:
837@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
838@end table
839
840
841@node omp_get_max_teams
@section @code{omp_get_max_teams} -- Maximum number of teams for a teams region
843@table @asis
844@item @emph{Description}:
845Return the maximum number of teams used for the teams region
that does not use the @code{num_teams} clause.
847
848@item @emph{C/C++}:
849@multitable @columnfractions .20 .80
850@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
851@end multitable
852
853@item @emph{Fortran}:
854@multitable @columnfractions .20 .80
855@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
856@end multitable
857
858@item @emph{See also}:
859@ref{omp_set_num_teams}, @ref{omp_get_num_teams}
860
861@item @emph{Reference}:
862@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
863@end table
864
865
866
867@node omp_get_max_threads
868@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
869@table @asis
870@item @emph{Description}:
871Return the maximum number of threads used for the current parallel region
that does not use the @code{num_threads} clause.
873
874@item @emph{C/C++}:
875@multitable @columnfractions .20 .80
876@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
877@end multitable
878
879@item @emph{Fortran}:
880@multitable @columnfractions .20 .80
881@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
882@end multitable
883
884@item @emph{See also}:
885@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
886
887@item @emph{Reference}:
888@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
889@end table
890
891
892
893@node omp_get_nested
894@section @code{omp_get_nested} -- Nested parallel regions
895@table @asis
896@item @emph{Description}:
897This function returns @code{true} if nested parallel regions are
898enabled, @code{false} otherwise. Here, @code{true} and @code{false}
899represent their language-specific counterparts.
900
901The state of nested parallel regions at startup depends on several
environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
and set to a value greater than one, then nested parallel regions are
enabled. Otherwise, the value of the @env{OMP_NESTED} environment
variable, if defined, is followed. If neither is defined and either
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} is defined with a list
of more than one value, then nested parallel regions are enabled.
If none of these are defined, then nested parallel regions are
disabled by default.
910
911Nested parallel regions can be enabled or disabled at runtime using
912@code{omp_set_nested}, or by setting the maximum number of nested
913regions with @code{omp_set_max_active_levels} to one to disable, or
914above one to enable.
915
916@item @emph{C/C++}:
917@multitable @columnfractions .20 .80
918@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
919@end multitable
920
921@item @emph{Fortran}:
922@multitable @columnfractions .20 .80
923@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
924@end multitable
925
926@item @emph{See also}:
927@ref{omp_set_max_active_levels}, @ref{omp_set_nested},
928@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
929
930@item @emph{Reference}:
931@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
932@end table
933
934
935
936@node omp_get_num_devices
937@section @code{omp_get_num_devices} -- Number of target devices
938@table @asis
939@item @emph{Description}:
940Returns the number of target devices.
941
942@item @emph{C/C++}:
943@multitable @columnfractions .20 .80
944@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
945@end multitable
946
947@item @emph{Fortran}:
948@multitable @columnfractions .20 .80
949@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
950@end multitable
951
952@item @emph{Reference}:
953@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
954@end table
955
956
957
958@node omp_get_num_procs
959@section @code{omp_get_num_procs} -- Number of processors online
960@table @asis
961@item @emph{Description}:
Returns the number of processors online on the current device.
963
964@item @emph{C/C++}:
965@multitable @columnfractions .20 .80
966@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
967@end multitable
968
969@item @emph{Fortran}:
970@multitable @columnfractions .20 .80
971@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
972@end multitable
973
974@item @emph{Reference}:
975@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
976@end table
977
978
979
980@node omp_get_num_teams
981@section @code{omp_get_num_teams} -- Number of teams
982@table @asis
983@item @emph{Description}:
Returns the number of teams in the current teams region.
985
986@item @emph{C/C++}:
987@multitable @columnfractions .20 .80
988@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
989@end multitable
990
991@item @emph{Fortran}:
992@multitable @columnfractions .20 .80
993@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
994@end multitable
995
996@item @emph{Reference}:
997@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
998@end table
999
1000
1001
1002@node omp_get_num_threads
1003@section @code{omp_get_num_threads} -- Size of the active team
1004@table @asis
1005@item @emph{Description}:
1006Returns the number of threads in the current team. In a sequential section of
1007the program @code{omp_get_num_threads} returns 1.
1008
1009The default team size may be initialized at startup by the
1010@env{OMP_NUM_THREADS} environment variable. At runtime, the size
of the current team may be set either by the @code{num_threads}
1012clause or by @code{omp_set_num_threads}. If none of the above were
1013used to define a specific value and @env{OMP_DYNAMIC} is disabled,
1014one thread per CPU online is used.
1015
1016@item @emph{C/C++}:
1017@multitable @columnfractions .20 .80
1018@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
1019@end multitable
1020
1021@item @emph{Fortran}:
1022@multitable @columnfractions .20 .80
1023@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
1024@end multitable
1025
1026@item @emph{See also}:
1027@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
1028
1029@item @emph{Reference}:
1030@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
1031@end table
1032
1033
1034
1035@node omp_get_proc_bind
@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
1037@table @asis
1038@item @emph{Description}:
This function returns the currently active thread affinity policy, which is
1040set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
1041@code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
1042@code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
1043where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.
1044
1045@item @emph{C/C++}:
1046@multitable @columnfractions .20 .80
1047@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
1048@end multitable
1049
1050@item @emph{Fortran}:
1051@multitable @columnfractions .20 .80
1052@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
1053@end multitable
1054
1055@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}
1057
1058@item @emph{Reference}:
1059@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
1060@end table
1061
1062
1063
1064@node omp_get_schedule
1065@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
1066@table @asis
1067@item @emph{Description}:
1068Obtain the runtime scheduling method. The @var{kind} argument will be
1069set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
1070@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
1071@var{chunk_size}, is set to the chunk size.
1072
1073@item @emph{C/C++}
1074@multitable @columnfractions .20 .80
1075@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
1076@end multitable
1077
1078@item @emph{Fortran}:
1079@multitable @columnfractions .20 .80
1080@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
1081@item @tab @code{integer(kind=omp_sched_kind) kind}
1082@item @tab @code{integer chunk_size}
1083@end multitable
1084
1085@item @emph{See also}:
1086@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
1087
1088@item @emph{Reference}:
1089@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
1090@end table
1091
1092
1093@node omp_get_supported_active_levels
1094@section @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
1095@table @asis
1096@item @emph{Description}:
1097This function returns the maximum number of nested, active parallel regions
1098supported by this implementation.
1099
1100@item @emph{C/C++}
1101@multitable @columnfractions .20 .80
1102@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
1103@end multitable
1104
1105@item @emph{Fortran}:
1106@multitable @columnfractions .20 .80
1107@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
1108@end multitable
1109
1110@item @emph{See also}:
1111@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
1112
1113@item @emph{Reference}:
1114@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
1115@end table
1116
1117
1118
1119@node omp_get_team_num
1120@section @code{omp_get_team_num} -- Get team number
1121@table @asis
1122@item @emph{Description}:
1123Returns the team number of the calling thread.
1124
1125@item @emph{C/C++}:
1126@multitable @columnfractions .20 .80
1127@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
1128@end multitable
1129
1130@item @emph{Fortran}:
1131@multitable @columnfractions .20 .80
1132@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
1133@end multitable
1134
1135@item @emph{Reference}:
1136@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
1137@end table
1138
1139
1140
1141@node omp_get_team_size
1142@section @code{omp_get_team_size} -- Number of threads in a team
1143@table @asis
1144@item @emph{Description}:
1145This function returns the number of threads in a thread team to which
either the current thread or one of its ancestors belongs. For values of
@var{level} outside the range zero to @code{omp_get_level}, -1 is returned;
if @var{level} is zero, 1 is returned, and for @var{level} equal to
@code{omp_get_level}, the result is identical to @code{omp_get_num_threads}.
1150
1151@item @emph{C/C++}:
1152@multitable @columnfractions .20 .80
1153@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
1154@end multitable
1155
1156@item @emph{Fortran}:
1157@multitable @columnfractions .20 .80
1158@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
1159@item @tab @code{integer level}
1160@end multitable
1161
1162@item @emph{See also}:
1163@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
1164
1165@item @emph{Reference}:
1166@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
1167@end table
1168
1169
1170
1171@node omp_get_teams_thread_limit
1172@section @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
1173@table @asis
1174@item @emph{Description}:
1175Return the maximum number of threads that will be able to participate in
1176each team created by a teams construct.
1177
1178@item @emph{C/C++}:
1179@multitable @columnfractions .20 .80
1180@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
1181@end multitable
1182
1183@item @emph{Fortran}:
1184@multitable @columnfractions .20 .80
1185@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
1186@end multitable
1187
1188@item @emph{See also}:
1189@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}
1190
1191@item @emph{Reference}:
1192@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
1193@end table
1194
1195
1196
1197@node omp_get_thread_limit
1198@section @code{omp_get_thread_limit} -- Maximum number of threads
1199@table @asis
1200@item @emph{Description}:
Return the maximum number of threads available to the program.
1202
1203@item @emph{C/C++}:
1204@multitable @columnfractions .20 .80
1205@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
1206@end multitable
1207
1208@item @emph{Fortran}:
1209@multitable @columnfractions .20 .80
1210@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
1211@end multitable
1212
1213@item @emph{See also}:
1214@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
1215
1216@item @emph{Reference}:
1217@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
1218@end table
1219
1220
1221
1222@node omp_get_thread_num
1223@section @code{omp_get_thread_num} -- Current thread ID
1224@table @asis
1225@item @emph{Description}:
1226Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
1228always returns 0. In parallel regions the return value varies
1229from 0 to @code{omp_get_num_threads}-1 inclusive. The return
1230value of the primary thread of a team is always 0.
1231
1232@item @emph{C/C++}:
1233@multitable @columnfractions .20 .80
1234@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
1235@end multitable
1236
1237@item @emph{Fortran}:
1238@multitable @columnfractions .20 .80
1239@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
1240@end multitable
1241
1242@item @emph{See also}:
1243@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
1244
1245@item @emph{Reference}:
1246@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
1247@end table
1248
1249
1250
1251@node omp_in_parallel
1252@section @code{omp_in_parallel} -- Whether a parallel region is active
1253@table @asis
1254@item @emph{Description}:
1255This function returns @code{true} if currently running in parallel,
1256@code{false} otherwise. Here, @code{true} and @code{false} represent
1257their language-specific counterparts.
1258
1259@item @emph{C/C++}:
1260@multitable @columnfractions .20 .80
1261@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
1262@end multitable
1263
1264@item @emph{Fortran}:
1265@multitable @columnfractions .20 .80
1266@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
1267@end multitable
1268
1269@item @emph{Reference}:
1270@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
1271@end table
1272
1273
1274@node omp_in_final
1275@section @code{omp_in_final} -- Whether in final or included task region
1276@table @asis
1277@item @emph{Description}:
1278This function returns @code{true} if currently running in a final
1279or included task region, @code{false} otherwise. Here, @code{true}
1280and @code{false} represent their language-specific counterparts.
1281
1282@item @emph{C/C++}:
1283@multitable @columnfractions .20 .80
1284@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
1285@end multitable
1286
1287@item @emph{Fortran}:
1288@multitable @columnfractions .20 .80
1289@item @emph{Interface}: @tab @code{logical function omp_in_final()}
1290@end multitable
1291
1292@item @emph{Reference}:
1293@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
1294@end table
1295
1296
1297
1298@node omp_is_initial_device
1299@section @code{omp_is_initial_device} -- Whether executing on the host device
1300@table @asis
1301@item @emph{Description}:
1302This function returns @code{true} if currently running on the host device,
1303@code{false} otherwise. Here, @code{true} and @code{false} represent
1304their language-specific counterparts.
1305
1306@item @emph{C/C++}:
1307@multitable @columnfractions .20 .80
1308@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
1309@end multitable
1310
1311@item @emph{Fortran}:
1312@multitable @columnfractions .20 .80
1313@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
1314@end multitable
1315
1316@item @emph{Reference}:
1317@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
1318@end table
1319
1320
1321
1322@node omp_set_default_device
1323@section @code{omp_set_default_device} -- Set the default device for target regions
1324@table @asis
1325@item @emph{Description}:
Set the default device for target regions without a device clause. The argument
1327shall be a nonnegative device number.
1328
1329@item @emph{C/C++}:
1330@multitable @columnfractions .20 .80
1331@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
1332@end multitable
1333
1334@item @emph{Fortran}:
1335@multitable @columnfractions .20 .80
1336@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
1337@item @tab @code{integer device_num}
1338@end multitable
1339
1340@item @emph{See also}:
1341@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
1342
1343@item @emph{Reference}:
1344@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1345@end table
1346
1347
1348
1349@node omp_set_dynamic
1350@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
1351@table @asis
1352@item @emph{Description}:
1353Enable or disable the dynamic adjustment of the number of threads
1354within a team. The function takes the language-specific equivalent
1355of @code{true} and @code{false}, where @code{true} enables dynamic
1356adjustment of team sizes and @code{false} disables it.
1357
1358@item @emph{C/C++}:
1359@multitable @columnfractions .20 .80
1360@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
1361@end multitable
1362
1363@item @emph{Fortran}:
1364@multitable @columnfractions .20 .80
1365@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
1366@item @tab @code{logical, intent(in) :: dynamic_threads}
1367@end multitable
1368
1369@item @emph{See also}:
1370@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
1371
1372@item @emph{Reference}:
1373@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
1374@end table
1375
1376
1377
1378@node omp_set_max_active_levels
1379@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
1380@table @asis
1381@item @emph{Description}:
1382This function limits the maximum allowed number of nested, active
parallel regions. @var{max_levels} must be less than or equal to
1384the value returned by @code{omp_get_supported_active_levels}.
1385
1386@item @emph{C/C++}
1387@multitable @columnfractions .20 .80
1388@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
1389@end multitable
1390
1391@item @emph{Fortran}:
1392@multitable @columnfractions .20 .80
1393@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
1394@item @tab @code{integer max_levels}
1395@end multitable
1396
1397@item @emph{See also}:
1398@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
1399@ref{omp_get_supported_active_levels}
1400
1401@item @emph{Reference}:
1402@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
1403@end table
1404
1405
1406
1407@node omp_set_nested
1408@section @code{omp_set_nested} -- Enable/disable nested parallel regions
1409@table @asis
1410@item @emph{Description}:
1411Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.
1415
1416Enabling nested parallel regions will also set the maximum number of
1417active nested regions to the maximum supported. Disabling nested parallel
1418regions will set the maximum number of active nested regions to one.
1419
1420@item @emph{C/C++}:
1421@multitable @columnfractions .20 .80
1422@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
1423@end multitable
1424
1425@item @emph{Fortran}:
1426@multitable @columnfractions .20 .80
1427@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
1428@item @tab @code{logical, intent(in) :: nested}
1429@end multitable
1430
1431@item @emph{See also}:
1432@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
1433@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
1434
1435@item @emph{Reference}:
1436@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
1437@end table
1438
1439
1440
1441@node omp_set_num_teams
1442@section @code{omp_set_num_teams} -- Set upper teams limit for teams construct
1443@table @asis
1444@item @emph{Description}:
Specifies the upper bound for the number of teams created by the teams construct
1446which does not specify a @code{num_teams} clause. The
1447argument of @code{omp_set_num_teams} shall be a positive integer.
1448
1449@item @emph{C/C++}:
1450@multitable @columnfractions .20 .80
1451@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
1452@end multitable
1453
1454@item @emph{Fortran}:
1455@multitable @columnfractions .20 .80
1456@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
1457@item @tab @code{integer, intent(in) :: num_teams}
1458@end multitable
1459
1460@item @emph{See also}:
1461@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}
1462
1463@item @emph{Reference}:
1464@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
1465@end table
1466
1467
1468
1469@node omp_set_num_threads
1470@section @code{omp_set_num_threads} -- Set upper team size limit
1471@table @asis
1472@item @emph{Description}:
1473Specifies the number of threads used by default in subsequent parallel
1474sections, if those do not specify a @code{num_threads} clause. The
1475argument of @code{omp_set_num_threads} shall be a positive integer.
1476
1477@item @emph{C/C++}:
1478@multitable @columnfractions .20 .80
1479@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
1480@end multitable
1481
1482@item @emph{Fortran}:
1483@multitable @columnfractions .20 .80
1484@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
1485@item @tab @code{integer, intent(in) :: num_threads}
1486@end multitable
1487
1488@item @emph{See also}:
1489@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
1490
1491@item @emph{Reference}:
1492@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
1493@end table
1494
1495
1496
1497@node omp_set_schedule
1498@section @code{omp_set_schedule} -- Set the runtime scheduling method
1499@table @asis
1500@item @emph{Description}:
1501Sets the runtime scheduling method. The @var{kind} argument can have the
1502value @code{omp_sched_static}, @code{omp_sched_dynamic},
1503@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
1504@code{omp_sched_auto}, the chunk size is set to the value of
1505@var{chunk_size} if positive, or to the default value if zero or negative.
1506For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
1507
1508@item @emph{C/C++}
1509@multitable @columnfractions .20 .80
1510@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
1511@end multitable
1512
1513@item @emph{Fortran}:
1514@multitable @columnfractions .20 .80
1515@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
1516@item @tab @code{integer(kind=omp_sched_kind) kind}
1517@item @tab @code{integer chunk_size}
1518@end multitable
1519
1520@item @emph{See also}:
@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}
1523
1524@item @emph{Reference}:
1525@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
1526@end table
1527
1528
1529
1530@node omp_set_teams_thread_limit
1531@section @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
1532@table @asis
1533@item @emph{Description}:
Specifies the upper bound for the number of threads that will be available
1535for each team created by the teams construct which does not specify a
1536@code{thread_limit} clause. The argument of
1537@code{omp_set_teams_thread_limit} shall be a positive integer.
1538
1539@item @emph{C/C++}:
1540@multitable @columnfractions .20 .80
1541@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
1542@end multitable
1543
1544@item @emph{Fortran}:
1545@multitable @columnfractions .20 .80
1546@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
1547@item @tab @code{integer, intent(in) :: thread_limit}
1548@end multitable
1549
1550@item @emph{See also}:
1551@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}
1552
1553@item @emph{Reference}:
1554@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
1555@end table
1556
1557
1558
1559@node omp_init_lock
1560@section @code{omp_init_lock} -- Initialize simple lock
1561@table @asis
1562@item @emph{Description}:
1563Initialize a simple lock. After initialization, the lock is in
1564an unlocked state.
1565
1566@item @emph{C/C++}:
1567@multitable @columnfractions .20 .80
1568@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
1569@end multitable
1570
1571@item @emph{Fortran}:
1572@multitable @columnfractions .20 .80
1573@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
1574@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
1575@end multitable
1576
1577@item @emph{See also}:
1578@ref{omp_destroy_lock}
1579
1580@item @emph{Reference}:
1581@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
1582@end table
1583
1584
1585
1586@node omp_set_lock
1587@section @code{omp_set_lock} -- Wait for and set simple lock
1588@table @asis
1589@item @emph{Description}:
1590Before setting a simple lock, the lock variable must be initialized by
1591@code{omp_init_lock}. The calling thread is blocked until the lock
1592is available. If the lock is already held by the current thread,
1593a deadlock occurs.
1594
1595@item @emph{C/C++}:
1596@multitable @columnfractions .20 .80
1597@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
1598@end multitable
1599
1600@item @emph{Fortran}:
1601@multitable @columnfractions .20 .80
1602@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
1603@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1604@end multitable
1605
1606@item @emph{See also}:
1607@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
1608
1609@item @emph{Reference}:
1610@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
1611@end table



@node omp_test_lock
@section @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available. This function returns
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node omp_unset_lock
@section @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock that is about to be unset must have been locked by
@code{omp_set_lock} or @code{omp_test_lock} before, and it must be held by
the thread calling @code{omp_unset_lock}. The lock then becomes unlocked.
If one or more threads attempted to set the lock before, one of them is
chosen to acquire it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table



@node omp_destroy_lock
@section @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock. In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table



@node omp_init_nest_lock
@section @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock. After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table


@node omp_set_nest_lock
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table



@node omp_test_nest_lock
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is already held by the current thread, the new nesting count
is returned. Otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node omp_unset_nest_lock
@section @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock that is about to be unset must have been locked by
@code{omp_set_nest_lock} or @code{omp_test_nest_lock} before, and it must
be held by the thread calling @code{omp_unset_nest_lock}. If the nesting
count drops to zero, the lock becomes unlocked. If one or more threads
attempted to set the lock before, one of them is chosen to acquire it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table



@node omp_destroy_nest_lock
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock. In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table



@node omp_get_wtick
@section @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
@end table



@node omp_get_wtime
@section @code{omp_get_wtime} -- Elapsed wall clock time
@table @asis
@item @emph{Description}:
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some "time in the past", which is an arbitrary time
guaranteed not to change during the execution of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtick}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
@end table



@node omp_fulfill_event
@section @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
@table @asis
@item @emph{Description}:
Fulfill the event associated with the event handle argument. Currently, it
is only used to fulfill events generated by detach clauses on task
constructs; the effect of fulfilling the event is to allow the task to
complete.

The result of calling @code{omp_fulfill_event} with an event handle other
than that generated by a detach clause is undefined. Calling it with an
event handle that has already been fulfilled is also undefined.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
@item @tab @code{integer (kind=omp_event_handle_kind) :: event}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
@end table



@c ---------------------------------------------------------------------
@c OpenMP Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter OpenMP Environment Variables

The environment variables beginning with @env{OMP_} are defined by
section 4 of the OpenMP specification in version 4.5, while those
beginning with @env{GOMP_} are GNU extensions.

@menu
* OMP_CANCELLATION:: Set whether cancellation is activated
* OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
* OMP_DEFAULT_DEVICE:: Set the device used in target regions
* OMP_DYNAMIC:: Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
* OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
* OMP_NESTED:: Nested parallel regions
* OMP_NUM_TEAMS:: Specifies the number of teams to use by teams region
* OMP_NUM_THREADS:: Specifies the number of threads to use
* OMP_PROC_BIND:: Whether threads may be moved between CPUs
* OMP_PLACES:: Specifies on which CPUs the threads should be placed
* OMP_STACKSIZE:: Set default thread stack size
* OMP_SCHEDULE:: How threads are scheduled
* OMP_TARGET_OFFLOAD:: Controls offloading behaviour
* OMP_TEAMS_THREAD_LIMIT:: Set the maximum number of threads imposed by teams
* OMP_THREAD_LIMIT:: Set the maximum number of threads
* OMP_WAIT_POLICY:: How waiting threads are handled
* GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
* GOMP_DEBUG:: Enable debugging output
* GOMP_STACKSIZE:: Set default thread stack size
* GOMP_SPINCOUNT:: Set the busy-wait spin count
* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
@end menu


@node OMP_CANCELLATION
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
@cindex Environment Variable
@table @asis
@item @emph{Description}:
If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
if unset, cancellation is disabled and the @code{cancel} construct is ignored.

@item @emph{See also}:
@ref{omp_get_cancellation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
@end table



@node OMP_DISPLAY_ENV
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
@cindex Environment Variable
@table @asis
@item @emph{Description}:
If set to @code{TRUE}, the OpenMP version number and the values
associated with the OpenMP environment variables are printed to @code{stderr}.
If set to @code{VERBOSE}, it additionally shows the value of the environment
variables which are GNU extensions. If undefined or set to @code{FALSE},
this information will not be shown.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
@end table



@node OMP_DEFAULT_DEVICE
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Set to choose the device which is used in a @code{target} region, unless the
value is overridden by @code{omp_set_default_device} or by a @code{device}
clause. The value shall be the nonnegative device number. If no device with
the given device number exists, the code is executed on the host. If unset,
device number 0 will be used.

@item @emph{See also}:
@ref{omp_get_default_device}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
@end table



@node OMP_DYNAMIC
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The value of this environment variable shall be
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{See also}:
@ref{omp_set_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
@end table



@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions. The value of this variable shall be a positive integer.
If undefined, then if @env{OMP_NESTED} is defined and set to true, or
if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
a list with more than one item, the maximum number of nested parallel
regions will be initialized to the largest number supported, otherwise
it will be set to one.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
@end table



@node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum task priority value
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies the initial value for the maximum priority value that can be
set for a task. The value of this variable shall be a non-negative
integer. If undefined, the default priority is 0.

@item @emph{See also}:
@ref{omp_get_max_task_priority}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
@end table



@node OMP_NESTED
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the maximum
number of active nested regions will by default be set to the largest
number supported, otherwise it will be set to one. If
@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting overrides this
setting. If both are undefined, nested parallel regions are enabled if
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined to a list with
more than one item, otherwise they are disabled by default.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
@end table



@node OMP_NUM_TEAMS
@section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of teams to use in teams regions
without an explicit @code{num_teams} clause. The value of this variable
shall be a positive integer. If undefined, it defaults to 0, which means
an implementation-defined upper bound.

@item @emph{See also}:
@ref{omp_set_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
@end table



@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive integers;
the value specifies the number of threads to use for the corresponding nested
level. Specifying more than one item in the list will automatically enable
nesting by default. If undefined, one thread per CPU is used.

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
@end table



@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies whether threads may be moved between processors. If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
they may be moved. Alternatively, a comma-separated list with the
values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
be used to specify the thread affinity policy for the corresponding nesting
level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
same place partition as the primary thread. With @code{CLOSE} those are
kept close to the primary thread in contiguous place partitions. With
@code{SPREAD} a sparse distribution across the place partitions is used.
Specifying more than one item in the list will automatically enable
nesting by default.

When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.

@item @emph{See also}:
@ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY},
@ref{OMP_NESTED}, @ref{OMP_PLACES}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
@end table



@node OMP_PLACES
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
@cindex Environment Variable
@table @asis
@item @emph{Description}:
The thread placement can be either specified using an abstract name or by an
explicit list of the places. The abstract names @code{threads}, @code{cores},
@code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
followed by a positive number in parentheses, which denotes how many places
shall be created. With @code{threads} each place corresponds to a single
hardware thread; with @code{cores} to a single core with the corresponding
number of hardware threads; with @code{sockets} the place corresponds to a
single socket; with @code{ll_caches} to a set of cores that shares the last
level cache on the device; and @code{numa_domains} to a set of cores for which
their closest memory on the device is the same memory and at a similar
distance from the cores. The resulting placement can be shown by setting the
@env{OMP_DISPLAY_ENV} environment variable.

Alternatively, the placement can be specified explicitly as a comma-separated
list of places. A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads. The curly braces can be omitted
when only a single number has been specified. The hardware threads
belonging to a place can either be specified as a comma-separated list of
nonnegative thread numbers or using an interval. Multiple places can also be
either specified by a comma-separated list of places or by an interval. To
specify an interval, a colon followed by the count is placed after
the hardware thread number or the place. Optionally, the length can be
followed by a colon and the stride number -- otherwise a unit stride is
assumed. Placing an exclamation mark (@code{!}) directly before a curly
brace or numbers inside the curly braces (excluding intervals) will
exclude those hardware threads.

For instance, the following three variants specify the same places list:
@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.
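The interval notation can be sketched with a small, purely hypothetical expansion helper: "{0:3}:4:3" denotes four length-3 places whose first hardware threads are 0, 3, 6 and 9.

```c
/* Hypothetical helper that expands the place-interval notation
   "{start:len}:count:stride" into explicit hardware-thread lists,
   e.g. "{0:3}:4:3" gives the places {0,1,2}, {3,4,5}, {6,7,8},
   {9,10,11}.  Returns the total number of hardware threads covered.  */
static int
expand_places (int start, int len, int count, int stride, int out[][16])
{
  for (int p = 0; p < count; p++)
    for (int t = 0; t < len; t++)
      out[p][t] = start + p * stride + t;   /* unit stride inside a place */
  return count * len;
}
```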

If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
between CPUs following no placement policy.

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
@ref{OMP_DISPLAY_ENV}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
@end table



@node OMP_STACKSIZE
@section @env{OMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Set the default thread stack size in kilobytes, unless the number
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes. This differs from @code{pthread_attr_setstacksize},
which takes the size in bytes as its argument. If the stack size cannot
be set due to system constraints, an error is reported and the initial
stack size is left unchanged. If undefined, the stack size is system
dependent.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
@end table



@node OMP_SCHEDULE
@section @env{OMP_SCHEDULE} -- How threads are scheduled
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or
@code{auto}. The optional @code{chunk} size shall be a positive integer.
If undefined, dynamic scheduling and a chunk size of 1 are used.

@item @emph{See also}:
@ref{omp_set_schedule}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
@end table



@node OMP_TARGET_OFFLOAD
@section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Specifies the behaviour with regard to offloading code to a device. This
variable can be set to one of three values: @code{MANDATORY}, @code{DISABLED}
or @code{DEFAULT}.

If set to @code{MANDATORY}, the program will terminate with an error if
the offload device is not present or is not supported. If set to
@code{DISABLED}, then offloading is disabled and all code will run on the
host. If set to @code{DEFAULT}, the program will try offloading to the
device first, then fall back to running code on the host if it cannot.

If undefined, the program behaves as if @code{DEFAULT} was set.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.17
@end table



@node OMP_TEAMS_THREAD_LIMIT
@section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies an upper bound for the number of threads used by each contention
group created by a @code{teams} construct without an explicit
@code{thread_limit} clause. The value of this variable shall be a positive
integer. If undefined, the value 0 is used, which stands for an
implementation-defined upper limit.

@item @emph{See also}:
@ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
@end table



@node OMP_THREAD_LIMIT
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies the number of threads to use for the whole program. The
value of this variable shall be a positive integer. If undefined,
the number of threads is not limited.

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
@end table



@node OMP_WAIT_POLICY
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; @code{ACTIVE} specifies that they should.
If undefined, threads wait actively for a short time before waiting
passively.

@item @emph{See also}:
@ref{GOMP_SPINCOUNT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
@end table



@node GOMP_CPU_AFFINITY
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Binds threads to specific CPUs. The variable should contain a space-separated
or comma-separated list of CPUs. This list may contain different kinds of
entries: either single CPU numbers in any order, a range of CPUs (M-N)
or a range with some stride (M-N:S). CPU numbers are zero based. For example,
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
and 14 respectively and then start assigning back from the beginning of
the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.

There is no libgomp library routine to determine whether a CPU affinity
specification is in effect. As a workaround, language-specific library
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
environment variable. A defined CPU affinity on startup cannot be changed
or disabled during the runtime of the application.
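The workaround just described can be sketched in C; the "(unset)" fallback string and helper name are illustrative:

```c
#include <stdlib.h>

/* There is no libgomp routine to query the affinity list, but the
   raw GOMP_CPU_AFFINITY string can be read with getenv.  */
static const char *
affinity_setting (void)
{
  const char *s = getenv ("GOMP_CPU_AFFINITY");
  return s ? s : "(unset)";     /* illustrative fallback */
}
```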

If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} takes precedence. If neither is set, or when
@env{OMP_PROC_BIND} is set to @code{FALSE}, the host system will handle
the assignment of threads to CPUs.

@item @emph{See also}:
@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
@end table



@node GOMP_DEBUG
@section @env{GOMP_DEBUG} -- Enable debugging output
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Enable debugging output. The variable should be set to @code{0}
(disabled, also the default if not set), or @code{1} (enabled).

If enabled, some debugging output will be printed during execution.
This is currently not specified in more detail, and subject to change.
@end table



@node GOMP_STACKSIZE
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Set the default thread stack size in kilobytes. This differs from
@code{pthread_attr_setstacksize}, which takes the size in bytes as its
argument. If the stack size cannot be set due to system constraints, an
error is reported and the initial stack size is left unchanged. If undefined,
the stack size is system dependent.

@item @emph{See also}:
@ref{OMP_STACKSIZE}

@item @emph{Reference}:
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
GCC Patches Mailinglist},
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
GCC Patches Mailinglist}
@end table



@node GOMP_SPINCOUNT
@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power. The value may be
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop. The
integer may optionally be followed by one of the following suffixes acting
as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
million), @code{G} (giga, billion), or @code{T} (tera, trillion).
If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
If there are more OpenMP threads than available CPUs, 1000 and 100
spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
undefined, respectively, unless @env{GOMP_SPINCOUNT} is lower
or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.

@item @emph{See also}:
@ref{OMP_WAIT_POLICY}
@end table



@node GOMP_RTEMS_THREAD_POOLS
@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
This environment variable is only used on the RTEMS real-time operating system.
It determines the scheduler instance specific thread pools. The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:
@itemize @bullet
@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}. If a priority
value is omitted, a worker thread inherits the priority of the OpenMP
primary thread that created it. The priority of a worker thread is not
changed after creation, even if a new OpenMP primary thread using the worker
has a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.
@end itemize
In case no thread pool configuration is specified for a scheduler instance,
each OpenMP primary thread of this scheduler instance uses its own
dynamically allocated thread pool. To limit the worker thread count of the
thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
@item @emph{Example}:
Let us suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
one thread pool available. Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP primary thread
that created it. In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.
@end table



@c ---------------------------------------------------------------------
@c Enabling OpenACC
@c ---------------------------------------------------------------------

@node Enabling OpenACC
@chapter Enabling OpenACC

To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenacc} must be specified. This enables the OpenACC directive
@code{#pragma acc} in C/C++ and @code{!$acc} directives in free form,
@code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
arranges for automatic linking of the OpenACC runtime library
(@ref{OpenACC Runtime Library Routines}).

See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.

A complete description of all OpenACC directives accepted may be found in
the @uref{https://www.openacc.org, OpenACC} Application Programming
Interface manual, version 2.6.



@c ---------------------------------------------------------------------
@c OpenACC Runtime Library Routines
@c ---------------------------------------------------------------------

@node OpenACC Runtime Library Routines
@chapter OpenACC Runtime Library Routines

The runtime routines described here are defined by section 3 of the OpenACC
specification in version 2.6.
They have C linkage, and do not throw exceptions.
Generally, they are available only for the host, with the exception of
@code{acc_on_device}, which is available for both the host and the
accelerator device.

@menu
* acc_get_num_devices::         Get number of devices for the given device
                                type.
* acc_set_device_type::         Set type of device accelerator to use.
* acc_get_device_type::         Get type of device accelerator to be used.
* acc_set_device_num::          Set device number to use.
* acc_get_device_num::          Get device number to be used.
* acc_get_property::            Get device property.
* acc_async_test::              Tests for completion of a specific asynchronous
                                operation.
* acc_async_test_all::          Tests for completion of all asynchronous
                                operations.
* acc_wait::                    Wait for completion of a specific asynchronous
                                operation.
* acc_wait_all::                Waits for completion of all asynchronous
                                operations.
* acc_wait_all_async::          Wait for completion of all asynchronous
                                operations.
* acc_wait_async::              Wait for completion of asynchronous operations.
* acc_init::                    Initialize runtime for a specific device type.
* acc_shutdown::                Shuts down the runtime for a specific device
                                type.
* acc_on_device::               Whether executing on a particular device
* acc_malloc::                  Allocate device memory.
* acc_free::                    Free device memory.
* acc_copyin::                  Allocate device memory and copy host memory to
                                it.
* acc_present_or_copyin::       If the data is not present on the device,
                                allocate device memory and copy from host
                                memory.
* acc_create::                  Allocate device memory and map it to host
                                memory.
* acc_present_or_create::       If the data is not present on the device,
                                allocate device memory and map it to host
                                memory.
* acc_copyout::                 Copy device memory to host memory.
* acc_delete::                  Free device memory.
* acc_update_device::           Update device memory from mapped host memory.
* acc_update_self::             Update host memory from mapped device memory.
* acc_map_data::                Map previously allocated device memory to host
                                memory.
* acc_unmap_data::              Unmap device memory from host memory.
* acc_deviceptr::               Get device pointer associated with specific
                                host address.
* acc_hostptr::                 Get host pointer associated with specific
                                device address.
* acc_is_present::              Indicate whether host variable / array is
                                present on device.
* acc_memcpy_to_device::        Copy host memory to device memory.
* acc_memcpy_from_device::      Copy device memory to host memory.
* acc_attach::                  Let device pointer point to device-pointer target.
* acc_detach::                  Let device pointer point to host-pointer target.

API routines for target platforms.

* acc_get_current_cuda_device:: Get CUDA device handle.
* acc_get_current_cuda_context::Get CUDA context handle.
* acc_get_cuda_stream::         Get CUDA stream handle.
* acc_set_cuda_stream::         Set CUDA stream handle.

API routines for the OpenACC Profiling Interface.

* acc_prof_register::           Register callbacks.
* acc_prof_unregister::         Unregister callbacks.
* acc_prof_lookup::             Obtain inquiry functions.
* acc_register_library::        Library registration.
@end menu



@node acc_get_num_devices
@section @code{acc_get_num_devices} -- Get number of devices for given device type
@table @asis
@item @emph{Description}
This function returns a value indicating the number of devices available
for the device type specified in @var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.1.
@end table



@node acc_set_device_type
@section @code{acc_set_device_type} -- Set type of device accelerator to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime library which device type, specified
in @var{devicetype}, to use when executing a parallel or kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_type(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.2.
@end table



@node acc_get_device_type
@section @code{acc_get_device_type} -- Get type of device accelerator to be used.
@table @asis
@item @emph{Description}
This function returns what device type will be used when executing a
parallel or kernels region.

This function returns @code{acc_device_none} if
@code{acc_get_device_type} is called from
@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
Interface}), that is, if the device is currently being initialized.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_type()}
@item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.3.
@end table



@node acc_set_device_num
@section @code{acc_set_device_num} -- Set device number to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime which device number, specified
by @var{devicenum} and associated with the specified device type
@var{devicetype}, to use.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_num(int devicenum, acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.4.
@end table



@node acc_get_device_num
@section @code{acc_get_device_num} -- Get device number to be used.
@table @asis
@item @emph{Description}
This function returns the device number, associated with the specified
device type @var{devicetype}, that will be used when executing a parallel
or kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer acc_get_device_num}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.5.
@end table



@node acc_get_property
@section @code{acc_get_property} -- Get device property.
@cindex acc_get_property
@cindex acc_get_property_string
@table @asis
@item @emph{Description}
These routines return the value of the specified @var{property} for the
device being queried according to @var{devicenum} and @var{devicetype}.
Integer-valued and string-valued properties are returned by
@code{acc_get_property} and @code{acc_get_property_string} respectively.
The Fortran @code{acc_get_property_string} subroutine returns the string
retrieved in its fourth argument, while the remaining entry points are
functions, which pass the return value as their result.

Note, for Fortran only: the OpenACC technical committee corrected and, hence,
modified the interface introduced in OpenACC 2.6. The kind-value parameter
@code{acc_device_property} has been renamed to @code{acc_device_property_kind}
for consistency and the return type of the @code{acc_get_property} function is
now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
The parameter @code{acc_device_property} will continue to be provided,
but might be removed in a future version of GCC.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
@item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
@item @tab @code{use ISO_C_Binding, only: c_size_t}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer(kind=acc_device_property_kind) property}
@item @tab @code{integer(kind=c_size_t) acc_get_property}
@item @tab @code{character(*) string}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.6.
@end table



@node acc_async_test
@section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function tests for completion of the asynchronous operation specified
in @var{arg}. In C/C++, a non-zero value is returned to indicate that the
specified asynchronous operation has completed and zero if it has not.
In Fortran, @code{true} is returned if the operation has completed and
@code{false} otherwise.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test(arg)}
@item @tab @code{integer(kind=acc_handle_kind) arg}
@item @tab @code{logical acc_async_test}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.9.
@end table



@node acc_async_test_all
@section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function tests for completion of all asynchronous operations.
In C/C++, a non-zero value is returned to indicate that all asynchronous
operations have completed and zero if any has not. In Fortran, @code{true}
is returned if all operations have completed and @code{false} otherwise.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.10.
@end table



@node acc_wait
@section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function waits for completion of the asynchronous operation
specified in @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait(int arg);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{void acc_async_wait(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.11.
@end table



@node acc_wait_all
@section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function waits for the completion of all asynchronous operations.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_all(void);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{void acc_async_wait_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.13.
@end table



@node acc_wait_all_async
@section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on the queue @var{async} for any
and all asynchronous operations that have been previously enqueued on
any queue.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_all_async(int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
@item @tab @code{integer(acc_handle_kind) async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.14.
@end table



@node acc_wait_async
@section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on queue @var{async} for any and all
asynchronous operations enqueued on queue @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_async(int arg, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
@item @tab @code{integer(acc_handle_kind) arg, async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.12.
@end table



@node acc_init
@section @code{acc_init} -- Initialize runtime for a specific device type.
@table @asis
@item @emph{Description}
This function initializes the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_init(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.7.
@end table



@node acc_shutdown
@section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
@table @asis
@item @emph{Description}
This function shuts down the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_shutdown(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.8.
@end table



@node acc_on_device
@section @code{acc_on_device} -- Whether executing on a particular device
@table @asis
@item @emph{Description}
This function returns whether the program is executing on a particular
device specified in @var{devicetype}. In C/C++, a non-zero value is
returned to indicate that the program is executing on the specified device
type and zero if it is not. In Fortran, @code{true} or @code{false} is
returned, respectively.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_on_device(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@item @tab @code{logical acc_on_device}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.17.
@end table



@node acc_malloc
@section @code{acc_malloc} -- Allocate device memory.
@table @asis
@item @emph{Description}
This function allocates @var{len} bytes of device memory. It returns
the device address of the allocated memory.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.18.
@end table



@node acc_free
@section @code{acc_free} -- Free device memory.
@table @asis
@item @emph{Description}
Free previously allocated device memory at the device address @var{a}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_free(d_void *a);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.19.
@end table



@node acc_copyin
@section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
@table @asis
@item @emph{Description}
In C/C++, this function allocates @var{len} bytes of device memory
and maps it to the specified host address in @var{a}. The device
address of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
@end table



@node acc_present_or_copyin
@section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
@table @asis
@item @emph{Description}
This function tests if the host data specified by @var{a} and of length
@var{len} is present or not. If it is not present, then device memory
will be allocated and the host memory copied. The device address of
the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
@end table



@node acc_create
@section @code{acc_create} -- Allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function allocates device memory and maps it to host memory specified
by the host address @var{a} with a length of @var{len} bytes. In C/C++,
the function returns the device address of the allocated device memory.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
@end table



@node acc_present_or_create
@section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function tests if the host data specified by @var{a} and of length
@var{len} is present or not. If it is not present, then device memory
will be allocated and mapped to host memory. In C/C++, the device address
of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
@item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
@end table
3223
3224
3225
3226@node acc_copyout
3227@section @code{acc_copyout} -- Copy device memory to host memory.
3228@table @asis
3229@item @emph{Description}
3230This function copies mapped device memory to host memory which is specified
3231by host address @var{a} for a length @var{len} bytes in C/C++.
3232
3233In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
3234a contiguous array section. The second form @var{a} specifies a variable or
3235array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
@item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.22.
@end table
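
As a short sketch of the pairing of @code{acc_copyin} and @code{acc_copyout}
(the array name and size are illustrative):

@smallexample
  float a[256];

  /* Map 'a' and copy its contents to the device.  */
  acc_copyin (&a[0], sizeof (a));

  /* ... compute on the device copy of 'a' ... */

  /* Copy the device data back to 'a' and remove the mapping.  */
  acc_copyout (&a[0], sizeof (a));
@end smallexample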



@node acc_delete
@section @code{acc_delete} -- Free device memory.
@table @asis
@item @emph{Description}
This function frees device memory that is mapped to the host memory
specified by the host address @var{a} and a length of @var{len} bytes.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
@item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.23.
@end table



@node acc_update_device
@section @code{acc_update_device} -- Update device memory from mapped host memory.
@table @asis
@item @emph{Description}
This function updates the device copy from the previously mapped host memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.24.
@end table



@node acc_update_self
@section @code{acc_update_self} -- Update host memory from mapped device memory.
@table @asis
@item @emph{Description}
This function updates the host copy from the previously mapped device memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.25.
@end table
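
As a sketch, @code{acc_update_device} and @code{acc_update_self} can be used
to re-synchronize the two copies of previously mapped memory; the array and
sizes are illustrative:

@smallexample
  float a[256];

  acc_copyin (&a[0], sizeof (a));  /* Map 'a' and copy it to the device.  */

  a[0] = 42.0f;                    /* Modify the host copy...  */
  acc_update_device (&a[0], sizeof (a));  /* ...and push it to the device.  */

  /* ... device computation modifies the device copy ... */

  acc_update_self (&a[0], sizeof (a));    /* Pull the device copy back.  */
  acc_delete (&a[0], sizeof (a));
@end smallexample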



@node acc_map_data
@section @code{acc_map_data} -- Map previously allocated device memory to host memory.
@table @asis
@item @emph{Description}
This function maps previously allocated device and host memory.  The device
memory is specified with the device address @var{d}.  The host memory is
specified with the host address @var{h} and a length of @var{len} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.26.
@end table



@node acc_unmap_data
@section @code{acc_unmap_data} -- Unmap device memory from host memory.
@table @asis
@item @emph{Description}
This function unmaps previously mapped device and host memory.  The host
memory is specified by the host address @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.27.
@end table
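
The following sketch shows how @code{acc_map_data} and @code{acc_unmap_data}
can be combined with @code{acc_malloc} and @code{acc_free}; the names and
sizes are illustrative:

@smallexample
  float h[256];
  void *d;

  d = acc_malloc (sizeof (h));          /* Allocate device memory...  */
  acc_map_data (&h[0], d, sizeof (h));  /* ...and map 'h' to it.  */

  /* ... 'h' is now present on the device and may be used in
     data clauses and OpenACC API calls ... */

  acc_unmap_data (&h[0]);               /* Remove the mapping...  */
  acc_free (d);                         /* ...and free the device memory.  */
@end smallexample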



@node acc_deviceptr
@section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
@table @asis
@item @emph{Description}
This function returns the device address that has been mapped to the
host address specified by @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.28.
@end table



@node acc_hostptr
@section @code{acc_hostptr} -- Get host pointer associated with specific device address.
@table @asis
@item @emph{Description}
This function returns the host address that has been mapped to the
device address specified by @var{d}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.29.
@end table



@node acc_is_present
@section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
@table @asis
@item @emph{Description}
This function indicates whether the host memory specified by the host
address @var{a} and a length of @var{len} bytes is present on the device.
In C/C++, a non-zero value is returned to indicate that the mapped memory
is present on the device; zero is returned to indicate that it is not
mapped on the device.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.  If the host
memory is mapped to device memory, @code{true} is returned; otherwise,
@code{false} is returned.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_is_present(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{logical acc_is_present}
@item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{logical acc_is_present}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.30.
@end table
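
For example, @code{acc_is_present} can be combined with @code{acc_deviceptr}
(@ref{acc_deviceptr}) to query a mapping; the array is illustrative:

@smallexample
  float a[256];
  acc_copyin (&a[0], sizeof (a));

  if (acc_is_present (&a[0], sizeof (a)))
    @{
      /* Obtain the device address that 'a' is mapped to.  */
      void *d_a = acc_deviceptr (&a[0]);
      /* ... pass 'd_a' to a library expecting device memory ... */
    @}
@end smallexample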



@node acc_memcpy_to_device
@section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
@table @asis
@item @emph{Description}
This function copies host memory specified by the host address @var{src} to
device memory specified by the device address @var{dest} for a length of
@var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.31.
@end table



@node acc_memcpy_from_device
@section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
This function copies device memory specified by the device address @var{src}
to host memory specified by the host address @var{dest} for a length of
@var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.32.
@end table
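
The two memcpy routines can be paired with @code{acc_malloc} for a simple
round trip; the names and sizes are illustrative:

@smallexample
  float h[256];
  void *d = acc_malloc (sizeof (h));

  acc_memcpy_to_device (d, &h[0], sizeof (h));    /* Host to device.  */
  /* ... device computation on 'd' ... */
  acc_memcpy_from_device (&h[0], d, sizeof (h));  /* Device to host.  */

  acc_free (d);
@end smallexample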



@node acc_attach
@section @code{acc_attach} -- Let device pointer point to device-pointer target.
@table @asis
@item @emph{Description}
This function updates a pointer on the device from pointing to a host-pointer
address to pointing to the corresponding device data.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.34.
@end table



@node acc_detach
@section @code{acc_detach} -- Let device pointer point to host-pointer target.
@table @asis
@item @emph{Description}
This function updates a pointer on the device from pointing to a device-pointer
address to pointing to the corresponding host data.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
@item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.35.
@end table
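
As a sketch, attaching and detaching are useful for pointers embedded in
mapped structures; the structure is illustrative:

@smallexample
  struct vec @{ float *data; size_t n; @} v;
  /* ... allocate and initialize 'v.data' with 'v.n' elements ... */

  acc_copyin (&v, sizeof (v));
  acc_copyin (v.data, v.n * sizeof (float));

  /* Make the device copy of 'v.data' point to the device data.  */
  acc_attach ((void **) &v.data);

  /* ... device computation using the device copy of 'v' ... */

  /* Restore the device copy of 'v.data' to the host address.  */
  acc_detach ((void **) &v.data);
@end smallexample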



@node acc_get_current_cuda_device
@section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
@table @asis
@item @emph{Description}
This function returns the CUDA device handle.  This handle is the same
as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.1.
@end table



@node acc_get_current_cuda_context
@section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
@table @asis
@item @emph{Description}
This function returns the CUDA context handle.  This handle is the same
as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.2.
@end table



@node acc_get_cuda_stream
@section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
@table @asis
@item @emph{Description}
This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.3.
@end table



@node acc_set_cuda_stream
@section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
@table @asis
@item @emph{Description}
This function associates the stream handle specified by @var{stream} with
the queue @var{async}.

This cannot be used to change the stream handle associated with
@code{acc_async_sync}.

The return value is not specified.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.4.
@end table
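
As a sketch (for the @code{nvptx} plugin only, and assuming the CUDA Runtime
headers are available), a user-created stream can be associated with a queue;
the queue number is illustrative:

@smallexample
  cudaStream_t s;
  cudaStreamCreate (&s);

  /* Associate the stream with queue 5; any stream previously
     associated with that queue is destroyed.  */
  acc_set_cuda_stream (5, (void *) s);

  /* Later, the same handle can be retrieved.  */
  void *t = acc_get_cuda_stream (5);
@end smallexample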



@node acc_prof_register
@section @code{acc_prof_register} -- Register callbacks.
@table @asis
@item @emph{Description}:
This function registers callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_unregister
@section @code{acc_prof_unregister} -- Unregister callbacks.
@table @asis
@item @emph{Description}:
This function unregisters callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_lookup
@section @code{acc_prof_lookup} -- Obtain inquiry functions.
@table @asis
@item @emph{Description}:
Function to obtain inquiry functions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_register_library
@section @code{acc_register_library} -- Library registration.
@table @asis
@item @emph{Description}:
Function for library registration.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table
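
A minimal profiling library might look as follows; the callback body and the
chosen event are illustrative:

@smallexample
  #include <acc_prof.h>

  static void
  cb (acc_prof_info *pi, acc_event_info *ei, acc_api_info *ai)
  @{
    /* ... inspect the event data ... */
  @}

  void
  acc_register_library (acc_prof_reg reg, acc_prof_reg unreg,
                        acc_prof_lookup_func lookup)
  @{
    reg (acc_ev_enqueue_launch_start, cb, acc_reg);
  @}
@end smallexample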



@c ---------------------------------------------------------------------
@c OpenACC Environment Variables
@c ---------------------------------------------------------------------

@node OpenACC Environment Variables
@chapter OpenACC Environment Variables

The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
are defined by section 4 of the OpenACC specification in version 2.0.
The variable @env{ACC_PROFLIB}
is defined by section 4 of the OpenACC specification in version 2.6.
The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.

@menu
* ACC_DEVICE_TYPE::
* ACC_DEVICE_NUM::
* ACC_PROFLIB::
* GCC_ACC_NOTIFY::
@end menu



@node ACC_DEVICE_TYPE
@section @code{ACC_DEVICE_TYPE}
@table @asis
@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.1.
@end table



@node ACC_DEVICE_NUM
@section @code{ACC_DEVICE_NUM}
@table @asis
@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.2.
@end table
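
For example, to select the second NVIDIA device without modifying the
program (the program name is illustrative):

@smallexample
  ACC_DEVICE_TYPE=nvidia ACC_DEVICE_NUM=1 ./myprog
@end smallexample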



@node ACC_PROFLIB
@section @code{ACC_PROFLIB}
@table @asis
@item @emph{See also}:
@ref{acc_register_library}, @ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.3.
@end table



@node GCC_ACC_NOTIFY
@section @code{GCC_ACC_NOTIFY}
@table @asis
@item @emph{Description}:
Print debug information pertaining to the accelerator.
@end table



@c ---------------------------------------------------------------------
@c CUDA Streams Usage
@c ---------------------------------------------------------------------

@node CUDA Streams Usage
@chapter CUDA Streams Usage

This applies to the @code{nvptx} plugin only.

The library provides elements that perform asynchronous movement of
data and asynchronous operation of computing constructs.  This
asynchronous functionality is implemented by making use of CUDA
streams@footnote{See "Stream Management" in "CUDA Driver API",
TRM-06703-001, Version 5.5, for additional information}.

The primary means by which the asynchronous functionality is accessed
is through the use of those OpenACC directives which make use of the
@code{async} and @code{wait} clauses.  When the @code{async} clause is
first used with a directive, it creates a CUDA stream.  If an
@code{async-argument} is used with the @code{async} clause, then the
stream is associated with the specified @code{async-argument}.

Following the creation of an association between a CUDA stream and the
@code{async-argument} of an @code{async} clause, both the @code{wait}
clause and the @code{wait} directive can be used.  When either the
clause or directive is used after stream creation, it creates a
rendezvous point whereby execution waits until all operations
associated with the @code{async-argument}, that is, stream, have
completed.
Normally, the management of the streams created as a result of using
the @code{async} clause is done without any intervention by the
caller.  This implies that the association between the @code{async-argument}
and the CUDA stream is maintained for the lifetime of the program.
However, this association can be changed through the use of the library
function @code{acc_set_cuda_stream}.  When the function
@code{acc_set_cuda_stream} is called, the CUDA stream that was
originally associated with the @code{async} clause is destroyed.
Caution should be taken when changing the association, as subsequent
references to the @code{async-argument} then refer to a different
CUDA stream.
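
The directive-level view of this can be sketched as follows; the queue
number and loop body are illustrative:

@smallexample
  #pragma acc parallel loop async(1)
  for (i = 0; i < n; i++)
    x[i] = x[i] * a;

  /* ... unrelated host work may proceed here ... */

  #pragma acc wait(1)  /* Rendezvous with the stream behind queue 1.  */
@end smallexample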



@c ---------------------------------------------------------------------
@c OpenACC Library Interoperability
@c ---------------------------------------------------------------------

@node OpenACC Library Interoperability
@chapter OpenACC Library Interoperability

@section Introduction

The OpenACC library uses the CUDA Driver API, and may interact with
programs that use the Runtime library directly, or another library
based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
"Interactions with the CUDA Driver API" in
"CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
for additional information on library interoperability.}.
This chapter describes the use cases and what changes are
required in order to use both the OpenACC library and the CUBLAS and Runtime
libraries within a program.

@section First invocation: NVIDIA CUBLAS library API

In this first use case (see below), a function in the CUBLAS library is called
prior to any of the functions in the OpenACC library.  More specifically, the
function @code{cublasCreate()}.

When invoked, the function initializes the library and allocates the
hardware resources on the host and the device on behalf of the caller.  Once
the initialization and allocation has completed, a handle is returned to the
caller.  The OpenACC library also requires initialization and allocation of
hardware resources.  Since the CUBLAS library has already allocated the
hardware resources for the device, all that is left to do is to initialize
the OpenACC library and acquire the hardware resources on the host.
Prior to calling the OpenACC function that initializes the library and
allocates the host hardware resources, you need to acquire the device number
that was allocated during the call to @code{cublasCreate()}.  Invoking the
runtime library function @code{cudaGetDevice()} accomplishes this.  Once
acquired, the device number is passed along with the device type as
parameters to the OpenACC library function @code{acc_set_device_num()}.

Once the call to @code{acc_set_device_num()} has completed, the OpenACC
library uses the context that was created during the call to
@code{cublasCreate()}.  In other words, both libraries will be sharing the
same context.

@smallexample
  /* Create the handle */
  s = cublasCreate(&h);
  if (s != CUBLAS_STATUS_SUCCESS)
    @{
      fprintf(stderr, "cublasCreate failed %d\n", s);
      exit(EXIT_FAILURE);
    @}

  /* Get the device number */
  e = cudaGetDevice(&dev);
  if (e != cudaSuccess)
    @{
      fprintf(stderr, "cudaGetDevice failed %d\n", e);
      exit(EXIT_FAILURE);
    @}

  /* Initialize OpenACC library and use device 'dev' */
  acc_set_device_num(dev, acc_device_nvidia);

@end smallexample
@center Use Case 1

@section First invocation: OpenACC library API

In this second use case (see below), a function in the OpenACC library is
called prior to any of the functions in the CUBLAS library.  More specifically,
the function @code{acc_set_device_num()}.

In the use case presented here, the function @code{acc_set_device_num()}
is used to both initialize the OpenACC library and allocate the hardware
resources on the host and the device.  In the call to the function, the
call parameters specify which device to use and what device
type to use, i.e., @code{acc_device_nvidia}.  It should be noted that this
is but one method to initialize the OpenACC library and allocate the
appropriate hardware resources.  Other methods are available through the
use of environment variables and these will be discussed in the next section.

Once the call to @code{acc_set_device_num()} has completed, other OpenACC
functions can be called as seen with multiple calls being made to
@code{acc_copyin()}.  In addition, calls can be made to functions in the
CUBLAS library.  In the use case a call to @code{cublasCreate()} is made
subsequent to the calls to @code{acc_copyin()}.
As seen in the previous use case, a call to @code{cublasCreate()}
initializes the CUBLAS library and allocates the hardware resources on the
host and the device.  However, since the device has already been allocated,
@code{cublasCreate()} will only initialize the CUBLAS library and allocate
the appropriate hardware resources on the host.  The context that was created
as part of the OpenACC initialization is shared with the CUBLAS library,
similarly to the first use case.

@smallexample
  dev = 0;

  acc_set_device_num(dev, acc_device_nvidia);

  /* Copy the first set to the device */
  d_X = acc_copyin(&h_X[0], N * sizeof (float));
  if (d_X == NULL)
    @{
      fprintf(stderr, "copyin error h_X\n");
      exit(EXIT_FAILURE);
    @}

  /* Copy the second set to the device */
  d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
  if (d_Y == NULL)
    @{
      fprintf(stderr, "copyin error h_Y1\n");
      exit(EXIT_FAILURE);
    @}

  /* Create the handle */
  s = cublasCreate(&h);
  if (s != CUBLAS_STATUS_SUCCESS)
    @{
      fprintf(stderr, "cublasCreate failed %d\n", s);
      exit(EXIT_FAILURE);
    @}

  /* Perform saxpy using CUBLAS library function */
  s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
  if (s != CUBLAS_STATUS_SUCCESS)
    @{
      fprintf(stderr, "cublasSaxpy failed %d\n", s);
      exit(EXIT_FAILURE);
    @}

  /* Copy the results from the device */
  acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));

@end smallexample
@center Use Case 2

@section OpenACC library and environment variables

There are two environment variables associated with the OpenACC library
that may be used to control the device type and device number:
@env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively.  These two
environment variables can be used as an alternative to calling
@code{acc_set_device_num()}.  As seen in the second use case, the device
type and device number were specified using @code{acc_set_device_num()}.
If, however, the aforementioned environment variables were set, then the
call to @code{acc_set_device_num()} would not be required.


The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}.  If @code{cublasCreate()}
is called prior to a call to an OpenACC function, then you must call
@code{acc_set_device_num()}.@footnote{More complete information
about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC
Application Programming Interface}, Version 2.6.}



@c ---------------------------------------------------------------------
@c OpenACC Profiling Interface
@c ---------------------------------------------------------------------

@node OpenACC Profiling Interface
@chapter OpenACC Profiling Interface

@section Implementation Status and Implementation-Defined Behavior

We're implementing the OpenACC Profiling Interface as defined by the
OpenACC 2.6 specification.  We're clarifying some aspects here as
@emph{implementation-defined behavior}, while they're still under
discussion within the OpenACC Technical Committee.

This implementation is tuned to keep the performance impact as low as
possible for the (very common) case that the Profiling Interface is
not enabled.  This is relevant, as the Profiling Interface affects all
the @emph{hot} code paths (in the target code, not in the offloaded
code).  Users of the OpenACC Profiling Interface can be expected to
understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because of the
@emph{runtime} (libgomp) calling into a third-party @emph{library} for
every event that has been registered.

We're not yet accounting for the fact that @cite{OpenACC events may
occur during event processing}.
We just handle one case specially, as required by CUDA 9.0
@command{nvprof}: @code{acc_get_device_type}
(@ref{acc_get_device_type}) may be called from
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
callbacks.
4080
We're not yet implementing initialization via an
@code{acc_register_library} function that is either statically linked
in, or dynamically loaded via @env{LD_PRELOAD}.
4084Initialization via @code{acc_register_library} functions dynamically
4085loaded via the @env{ACC_PROFLIB} environment variable does work, as
4086does directly calling @code{acc_prof_register},
4087@code{acc_prof_unregister}, @code{acc_prof_lookup}.
4088
4089As currently there are no inquiry functions defined, calls to
4090@code{acc_prof_lookup} will always return @code{NULL}.
4091
4092There aren't separate @emph{start}, @emph{stop} events defined for the
4093event types @code{acc_ev_create}, @code{acc_ev_delete},
4094@code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
4095should be triggered before or after the actual device-specific call is
4096made. We trigger them after.
4097
4098Remarks about data provided to callbacks:
4099
4100@table @asis
4101
4102@item @code{acc_prof_info.event_type}
4103It's not clear if for @emph{nested} event callbacks (for example,
4104@code{acc_ev_enqueue_launch_start} as part of a parent compute
4105construct), this should be set for the nested event
4106(@code{acc_ev_enqueue_launch_start}), or if the value of the parent
4107construct should remain (@code{acc_ev_compute_construct_start}). In
4108this implementation, the value will generally correspond to the
4109innermost nested event type.
4110
4111@item @code{acc_prof_info.device_type}
4112@itemize
4113
4114@item
4115For @code{acc_ev_compute_construct_start}, and in presence of an
4116@code{if} clause with @emph{false} argument, this will still refer to
4117the offloading device type.
4118It's not clear if that's the expected behavior.
4119
4120@item
4121Complementary to the item before, for
4122@code{acc_ev_compute_construct_end}, this is set to
4123@code{acc_device_host} in presence of an @code{if} clause with
4124@emph{false} argument.
4125It's not clear if that's the expected behavior.
4126
4127@end itemize
4128
4129@item @code{acc_prof_info.thread_id}
4130Always @code{-1}; not yet implemented.
4131
4132@item @code{acc_prof_info.async}
4133@itemize
4134
4135@item
4136Not yet implemented correctly for
4137@code{acc_ev_compute_construct_start}.
4138
4139@item
4140In a compute construct, for host-fallback
4141execution/@code{acc_device_host} it will always be
4142@code{acc_async_sync}.
4143It's not clear if that's the expected behavior.
4144
4145@item
4146For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
4147it will always be @code{acc_async_sync}.
4148It's not clear if that's the expected behavior.
4149
4150@end itemize
4151
4152@item @code{acc_prof_info.async_queue}
4153There is no @cite{limited number of asynchronous queues} in libgomp.
4154This will always have the same value as @code{acc_prof_info.async}.
4155
4156@item @code{acc_prof_info.src_file}
4157Always @code{NULL}; not yet implemented.
4158
4159@item @code{acc_prof_info.func_name}
4160Always @code{NULL}; not yet implemented.
4161
4162@item @code{acc_prof_info.line_no}
4163Always @code{-1}; not yet implemented.
4164
4165@item @code{acc_prof_info.end_line_no}
4166Always @code{-1}; not yet implemented.
4167
4168@item @code{acc_prof_info.func_line_no}
4169Always @code{-1}; not yet implemented.
4170
4171@item @code{acc_prof_info.func_end_line_no}
4172Always @code{-1}; not yet implemented.
4173
4174@item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
4175Relating to @code{acc_prof_info.event_type} discussed above, in this
4176implementation, this will always be the same value as
4177@code{acc_prof_info.event_type}.
4178
4179@item @code{acc_event_info.*.parent_construct}
4180@itemize
4181
4182@item
4183Will be @code{acc_construct_parallel} for all OpenACC compute
4184constructs as well as many OpenACC Runtime API calls; should be the
4185one matching the actual construct, or
4186@code{acc_construct_runtime_api}, respectively.
4187
4188@item
4189Will be @code{acc_construct_enter_data} or
4190@code{acc_construct_exit_data} when processing variable mappings
4191specified in OpenACC @emph{declare} directives; should be
4192@code{acc_construct_declare}.
4193
4194@item
4195For implicit @code{acc_ev_device_init_start},
4196@code{acc_ev_device_init_end}, and explicit as well as implicit
4197@code{acc_ev_alloc}, @code{acc_ev_free},
4198@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
4199@code{acc_ev_enqueue_download_start}, and
4200@code{acc_ev_enqueue_download_end}, will be
4201@code{acc_construct_parallel}; should reflect the real parent
4202construct.
4203
4204@end itemize
4205
4206@item @code{acc_event_info.*.implicit}
4207For @code{acc_ev_alloc}, @code{acc_ev_free},
4208@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
4209@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, this will currently be @code{1}
even for explicit usage.
4212
4213@item @code{acc_event_info.data_event.var_name}
4214Always @code{NULL}; not yet implemented.
4215
4216@item @code{acc_event_info.data_event.host_ptr}
4217For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always
4218@code{NULL}.
4219
4220@item @code{typedef union acc_api_info}
4221@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
4222Information}. This should obviously be @code{typedef @emph{struct}
4223acc_api_info}.
4224
4225@item @code{acc_api_info.device_api}
4226Possibly not yet implemented correctly for
4227@code{acc_ev_compute_construct_start},
4228@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
4229will always be @code{acc_device_api_none} for these event types.
4230For @code{acc_ev_enter_data_start}, it will be
4231@code{acc_device_api_none} in some cases.
4232
4233@item @code{acc_api_info.device_type}
4234Always the same as @code{acc_prof_info.device_type}.
4235
4236@item @code{acc_api_info.vendor}
4237Always @code{-1}; not yet implemented.
4238
4239@item @code{acc_api_info.device_handle}
4240Always @code{NULL}; not yet implemented.
4241
4242@item @code{acc_api_info.context_handle}
4243Always @code{NULL}; not yet implemented.
4244
4245@item @code{acc_api_info.async_handle}
4246Always @code{NULL}; not yet implemented.
4247
4248@end table
4249
4250Remarks about certain event types:
4251
4252@table @asis
4253
4254@item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
4255@itemize
4256
4257@item
4258@c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
4259@c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
4260@c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
4261When a compute construct triggers implicit
4262@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
4263events, they currently aren't @emph{nested within} the corresponding
4264@code{acc_ev_compute_construct_start} and
4265@code{acc_ev_compute_construct_end}, but they're currently observed
4266@emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do: the standard asks us to provide a lot of
details to the @code{acc_ev_compute_construct_start} callback; how can
we do that without (implicitly) initializing a device first?
4270
4271@item
4272Callbacks for these event types will not be invoked for calls to the
4273@code{acc_set_device_type} and @code{acc_set_device_num} functions.
4274It's not clear if they should be.
4275
4276@end itemize
4277
4278@item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
4279@itemize
4280
4281@item
4282Callbacks for these event types will also be invoked for OpenACC
4283@emph{host_data} constructs.
4284It's not clear if they should be.
4285
4286@item
4287Callbacks for these event types will also be invoked when processing
4288variable mappings specified in OpenACC @emph{declare} directives.
4289It's not clear if they should be.
4290
4291@end itemize
4292
4293@end table
4294
4295Callbacks for the following event types will be invoked, but dispatch
and the information provided therein have not yet been thoroughly reviewed:
4297
4298@itemize
4299@item @code{acc_ev_alloc}
4300@item @code{acc_ev_free}
4301@item @code{acc_ev_update_start}, @code{acc_ev_update_end}
4302@item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
4303@item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
4304@end itemize
4305
4306During device initialization, and finalization, respectively,
4307callbacks for the following event types will not yet be invoked:
4308
4309@itemize
4310@item @code{acc_ev_alloc}
4311@item @code{acc_ev_free}
4312@end itemize
4313
4314Callbacks for the following event types have not yet been implemented,
4315so currently won't be invoked:
4316
4317@itemize
4318@item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
4319@item @code{acc_ev_runtime_shutdown}
4320@item @code{acc_ev_create}, @code{acc_ev_delete}
4321@item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
4322@end itemize
4323
4324For the following runtime library functions, not all expected
4325callbacks will be invoked (mostly concerning implicit device
4326initialization):
4327
4328@itemize
4329@item @code{acc_get_num_devices}
4330@item @code{acc_set_device_type}
4331@item @code{acc_get_device_type}
4332@item @code{acc_set_device_num}
4333@item @code{acc_get_device_num}
4334@item @code{acc_init}
4335@item @code{acc_shutdown}
4336@end itemize
4337
4338Aside from implicit device initialization, for the following runtime
4339library functions, no callbacks will be invoked for shared-memory
4340offloading devices (it's not clear if they should be):
4341
4342@itemize
4343@item @code{acc_malloc}
4344@item @code{acc_free}
4345@item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
4346@item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
4347@item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
4348@item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
4349@item @code{acc_update_device}, @code{acc_update_device_async}
4350@item @code{acc_update_self}, @code{acc_update_self_async}
4351@item @code{acc_map_data}, @code{acc_unmap_data}
4352@item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
4353@item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
4354@end itemize
4355
4356@c ---------------------------------------------------------------------
4357@c OpenMP-Implementation Specifics
4358@c ---------------------------------------------------------------------
4359
4360@node OpenMP-Implementation Specifics
4361@chapter OpenMP-Implementation Specifics
4362
4363@menu
4364* OpenMP Context Selectors::
4365* Memory allocation with libmemkind::
4366@end menu
4367
4368@node OpenMP Context Selectors
4369@section OpenMP Context Selectors
4370
4371@code{vendor} is always @code{gnu}. References are to the GCC manual.
4372
4373@multitable @columnfractions .60 .10 .25
4374@headitem @code{arch} @tab @code{kind} @tab @code{isa}
4375@item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
4376 @code{i586}, @code{i686}, @code{ia32}
4377 @tab @code{host}
4378 @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
4379@item @code{amdgcn}, @code{gcn}
4380 @tab @code{gpu}
4381 @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
4382 @code{gfx803} is supported as an alias for @code{fiji}.}
4383@item @code{nvptx}
4384 @tab @code{gpu}
4385 @tab See @code{-march=} in ``Nvidia PTX Options''
4386@end multitable
4387
4388@node Memory allocation with libmemkind
4389@section Memory allocation with libmemkind
4390
4391On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
4392library} (@code{libmemkind.so.0}) is available at runtime, it is used when
4393creating memory allocators requesting
4394
4395@itemize
4396@item the memory space @code{omp_high_bw_mem_space}
4397@item the memory space @code{omp_large_cap_mem_space}
4398@item the partition trait @code{omp_atv_interleaved}
4399@end itemize
4400
4401
4402@c ---------------------------------------------------------------------
4403@c Offload-Target Specifics
4404@c ---------------------------------------------------------------------
4405
4406@node Offload-Target Specifics
4407@chapter Offload-Target Specifics
4408
The following sections present notes on the offload-target specifics.
4410
4411@menu
4412* AMD Radeon::
4413* nvptx::
4414@end menu
4415
4416@node AMD Radeon
4417@section AMD Radeon (GCN)
4418
4419On the hardware side, there is the hierarchy (fine to coarse):
4420@itemize
4421@item work item (thread)
4422@item wavefront
4423@item work group
@item compute unit (CU)
4425@end itemize
4426
4427All OpenMP and OpenACC levels are used, i.e.
4428@itemize
@item OpenMP's simd and OpenACC's vector map to work items (threads)
4430@item OpenMP's threads (``parallel'') and OpenACC's workers map
4431 to wavefronts
4432@item OpenMP's teams and OpenACC's gang use a threadpool with the
4433 size of the number of teams or gangs, respectively.
4434@end itemize
4435
4436The used sizes are
4437@itemize
4438@item Number of teams is the specified @code{num_teams} (OpenMP) or
      @code{num_gangs} (OpenACC), or otherwise the number of CUs
4440@item Number of wavefronts is 4 for gfx900 and 16 otherwise;
4441 @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
      override this if smaller.
4443@item The wavefront has 102 scalars and 64 vectors
@item Number of work items is always 64
@item The hardware permits at most 40 workgroups/CU and
      16 wavefronts/workgroup, up to a limit of 40 wavefronts in total per CU.
@item 80 scalar registers and 24 vector registers in non-kernel functions
4448 (the chosen procedure-calling API).
4449@item For the kernel itself: as many as register pressure demands (number of
4450 teams and number of threads, scaled down if registers are exhausted)
4451@end itemize
4452
Implementation remarks:
4454@itemize
4455@item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
4456 using the C library @code{printf} functions and the Fortran
4457 @code{print}/@code{write} statements.
4458@end itemize
4459
4460
4461
4462@node nvptx
4463@section nvptx
4464
4465On the hardware side, there is the hierarchy (fine to coarse):
4466@itemize
4467@item thread
4468@item warp
4469@item thread block
4470@item streaming multiprocessor
4471@end itemize
4472
4473All OpenMP and OpenACC levels are used, i.e.
4474@itemize
4475@item OpenMP's simd and OpenACC's vector map to threads
4476@item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
4477@item OpenMP's teams and OpenACC's gang use a threadpool with the
4478 size of the number of teams or gangs, respectively.
4479@end itemize
4480
4481The used sizes are
4482@itemize
4483@item The @code{warp_size} is always 32
4484@item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
4485@end itemize
4486
Additional information can be obtained by setting the environment variable
4488@code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for launch
4489parameters).
4490
4491GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA,
which caches the JIT output in the user's directory (see the CUDA
documentation; this can be tuned by the environment variables
@code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}).
4494
Note: While PTX ISA is generic, the @code{-mptx=} and @code{-march=}
command-line options still affect the generated PTX ISA code and, thus,
the requirements on CUDA version and hardware.
4498
Implementation remarks:
4500@itemize
4501@item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
4502 using the C library @code{printf} functions. Note that the Fortran
      @code{print}/@code{write} statements are not yet supported.
@item Compiling OpenMP code that contains @code{requires reverse_offload}
      requires at least @code{-march=sm_35}; compiling for @code{-march=sm_30}
      is not supported.
4507@end itemize
4508
4509
4510@c ---------------------------------------------------------------------
4511@c The libgomp ABI
4512@c ---------------------------------------------------------------------
4513
4514@node The libgomp ABI
4515@chapter The libgomp ABI
4516
4517The following sections present notes on the external ABI as
4518presented by libgomp. Only maintainers should need them.
4519
4520@menu
4521* Implementing MASTER construct::
4522* Implementing CRITICAL construct::
4523* Implementing ATOMIC construct::
4524* Implementing FLUSH construct::
4525* Implementing BARRIER construct::
4526* Implementing THREADPRIVATE construct::
4527* Implementing PRIVATE clause::
4528* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
4529* Implementing REDUCTION clause::
4530* Implementing PARALLEL construct::
4531* Implementing FOR construct::
4532* Implementing ORDERED construct::
4533* Implementing SECTIONS construct::
4534* Implementing SINGLE construct::
4535* Implementing OpenACC's PARALLEL construct::
4536@end menu
4537
4538
4539@node Implementing MASTER construct
4540@section Implementing MASTER construct
4541
4542@smallexample
4543if (omp_get_thread_num () == 0)
4544 block
4545@end smallexample
4546
Alternatively, we could generate two copies of the parallel subfunction
4548and only include this in the version run by the primary thread.
4549Surely this is not worthwhile though...
4550
4551
4552
4553@node Implementing CRITICAL construct
4554@section Implementing CRITICAL construct
4555
4556Without a specified name,
4557
4558@smallexample
4559 void GOMP_critical_start (void);
4560 void GOMP_critical_end (void);
4561@end smallexample
4562
4563so that we don't get COPY relocations from libgomp to the main
4564application.
4565
4566With a specified name, use omp_set_lock and omp_unset_lock with
4567name being transformed into a variable declared like
4568
4569@smallexample
4570 omp_lock_t gomp_critical_user_<name> __attribute__((common))
4571@end smallexample
4572
4573Ideally the ABI would specify that all zero is a valid unlocked
4574state, and so we wouldn't need to initialize this at
4575startup.
4576
4577
4578
4579@node Implementing ATOMIC construct
4580@section Implementing ATOMIC construct
4581
4582The target should implement the @code{__sync} builtins.
4583
4584Failing that we could add
4585
4586@smallexample
4587 void GOMP_atomic_enter (void)
4588 void GOMP_atomic_exit (void)
4589@end smallexample
4590
4591which reuses the regular lock code, but with yet another lock
4592object private to the library.
4593
4594
4595
4596@node Implementing FLUSH construct
4597@section Implementing FLUSH construct
4598
4599Expands to the @code{__sync_synchronize} builtin.
4600
4601
4602
4603@node Implementing BARRIER construct
4604@section Implementing BARRIER construct
4605
4606@smallexample
4607 void GOMP_barrier (void)
4608@end smallexample
4609
4610
4611@node Implementing THREADPRIVATE construct
4612@section Implementing THREADPRIVATE construct
4613
In @emph{most} cases we can map this directly to @code{__thread}.  Except
4615that OMP allows constructors for C++ objects. We can either
4616refuse to support this (how often is it used?) or we can
4617implement something akin to .ctors.
4618
4619Even more ideally, this ctor feature is handled by extensions
4620to the main pthreads library. Failing that, we can have a set
4621of entry points to register ctor functions to be called.
4622
4623
4624
4625@node Implementing PRIVATE clause
4626@section Implementing PRIVATE clause
4627
4628In association with a PARALLEL, or within the lexical extent
4629of a PARALLEL block, the variable becomes a local variable in
4630the parallel subfunction.
4631
4632In association with FOR or SECTIONS blocks, create a new
4633automatic variable within the current function. This preserves
4634the semantic of new variable creation.
4635
4636
4637
4638@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
4639@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
4640
4641This seems simple enough for PARALLEL blocks. Create a private
4642struct for communicating between the parent and subfunction.
In the parent, copy in values for scalars and ``small'' structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
4645subfunction, copy the value into the local variable.
4646
4647It is not clear what to do with bare FOR or SECTION blocks.
4648The only thing I can figure is that we do something like:
4649
4650@smallexample
4651#pragma omp for firstprivate(x) lastprivate(y)
4652for (int i = 0; i < n; ++i)
4653 body;
4654@end smallexample
4655
4656which becomes
4657
4658@smallexample
4659@{
4660 int x = x, y;
4661
4662 // for stuff
4663
4664 if (i == n)
4665 y = y;
4666@}
4667@end smallexample
4668
4669where the "x=x" and "y=y" assignments actually have different
4670uids for the two variables, i.e. not something you could write
4671directly in C. Presumably this only makes sense if the "outer"
4672x and y are global variables.
4673
4674COPYPRIVATE would work the same way, except the structure
4675broadcast would have to happen via SINGLE machinery instead.
4676
4677
4678
4679@node Implementing REDUCTION clause
4680@section Implementing REDUCTION clause
4681
4682The private struct mentioned in the previous section should have
4683a pointer to an array of the type of the variable, indexed by the
4684thread's @var{team_id}. The thread stores its final value into the
4685array, and after the barrier, the primary thread iterates over the
4686array to collect the values.
4687
4688
4689@node Implementing PARALLEL construct
4690@section Implementing PARALLEL construct
4691
4692@smallexample
4693 #pragma omp parallel
4694 @{
4695 body;
4696 @}
4697@end smallexample
4698
4699becomes
4700
4701@smallexample
4702 void subfunction (void *data)
4703 @{
4704 use data;
4705 body;
4706 @}
4707
4708 setup data;
4709 GOMP_parallel_start (subfunction, &data, num_threads);
4710 subfunction (&data);
4711 GOMP_parallel_end ();
4712@end smallexample
4713
4714@smallexample
4715 void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
4716@end smallexample
4717
4718The @var{FN} argument is the subfunction to be run in parallel.
4719
4720The @var{DATA} argument is a pointer to a structure used to
4721communicate data in and out of the subfunction, as discussed
4722above with respect to FIRSTPRIVATE et al.
4723
4724The @var{NUM_THREADS} argument is 1 if an IF clause is present
4725and false, or the value of the NUM_THREADS clause, if
4726present, or 0.
4727
4728The function needs to create the appropriate number of
4729threads and/or launch them from the dock. It needs to
4730create the team structure and assign team ids.
4731
4732@smallexample
4733 void GOMP_parallel_end (void)
4734@end smallexample
4735
4736Tears down the team and returns us to the previous @code{omp_in_parallel()} state.
4737
4738
4739
4740@node Implementing FOR construct
4741@section Implementing FOR construct
4742
4743@smallexample
4744 #pragma omp parallel for
4745 for (i = lb; i <= ub; i++)
4746 body;
4747@end smallexample
4748
4749becomes
4750
4751@smallexample
4752 void subfunction (void *data)
4753 @{
4754 long _s0, _e0;
4755 while (GOMP_loop_static_next (&_s0, &_e0))
4756 @{
4757 long _e1 = _e0, i;
4758 for (i = _s0; i < _e1; i++)
4759 body;
4760 @}
4761 GOMP_loop_end_nowait ();
4762 @}
4763
4764 GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
4765 subfunction (NULL);
4766 GOMP_parallel_end ();
4767@end smallexample
4768
4769@smallexample
4770 #pragma omp for schedule(runtime)
4771 for (i = 0; i < n; i++)
4772 body;
4773@end smallexample
4774
4775becomes
4776
4777@smallexample
4778 @{
4779 long i, _s0, _e0;
4780 if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
4781 do @{
4782 long _e1 = _e0;
	for (i = _s0; i < _e1; i++)
	  body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
4786 GOMP_loop_end ();
4787 @}
4788@end smallexample
4789
4790Note that while it looks like there is trickiness to propagating
4791a non-constant STEP, there isn't really. We're explicitly allowed
4792to evaluate it as many times as we want, and any variables involved
4793should automatically be handled as PRIVATE or SHARED like any other
4794variables. So the expression should remain evaluable in the
4795subfunction. We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we don't have to.
4797
4798If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
4799able to get away with no work-sharing context at all, since we can
4800simply perform the arithmetic directly in each thread to divide up
4801the iterations. Which would mean that we wouldn't need to call any
4802of these routines.
4803
4804There are separate routines for handling loops with an ORDERED
4805clause. Bookkeeping for that is non-trivial...
4806
4807
4808
4809@node Implementing ORDERED construct
4810@section Implementing ORDERED construct
4811
4812@smallexample
4813 void GOMP_ordered_start (void)
4814 void GOMP_ordered_end (void)
4815@end smallexample
4816
4817
4818
4819@node Implementing SECTIONS construct
4820@section Implementing SECTIONS construct
4821
A block such as
4823
4824@smallexample
4825 #pragma omp sections
4826 @{
4827 #pragma omp section
4828 stmt1;
4829 #pragma omp section
4830 stmt2;
4831 #pragma omp section
4832 stmt3;
4833 @}
4834@end smallexample
4835
4836becomes
4837
4838@smallexample
4839 for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
4840 switch (i)
4841 @{
4842 case 1:
4843 stmt1;
4844 break;
4845 case 2:
4846 stmt2;
4847 break;
4848 case 3:
4849 stmt3;
4850 break;
4851 @}
4852 GOMP_barrier ();
4853@end smallexample
4854
4855
4856@node Implementing SINGLE construct
4857@section Implementing SINGLE construct
4858
4859A block like
4860
4861@smallexample
4862 #pragma omp single
4863 @{
4864 body;
4865 @}
4866@end smallexample
4867
4868becomes
4869
4870@smallexample
4871 if (GOMP_single_start ())
4872 body;
4873 GOMP_barrier ();
4874@end smallexample
4875
4876while
4877
4878@smallexample
4879 #pragma omp single copyprivate(x)
4880 body;
4881@end smallexample
4882
4883becomes
4884
4885@smallexample
4886 datap = GOMP_single_copy_start ();
4887 if (datap == NULL)
4888 @{
4889 body;
4890 data.x = x;
4891 GOMP_single_copy_end (&data);
4892 @}
4893 else
4894 x = datap->x;
4895 GOMP_barrier ();
4896@end smallexample
4897
4898
4899
4900@node Implementing OpenACC's PARALLEL construct
4901@section Implementing OpenACC's PARALLEL construct
4902
4903@smallexample
4904 void GOACC_parallel ()
4905@end smallexample
4906
4907
4908
4909@c ---------------------------------------------------------------------
4910@c Reporting Bugs
4911@c ---------------------------------------------------------------------
4912
4913@node Reporting Bugs
4914@chapter Reporting Bugs
4915
4916Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
@code{openacc} or @code{openmp}, or both, to the keywords field in the bug
report, as appropriate.
4920
4921
4922
4923@c ---------------------------------------------------------------------
4924@c GNU General Public License
4925@c ---------------------------------------------------------------------
4926
4927@include gpl_v3.texi
4928
4929
4930
4931@c ---------------------------------------------------------------------
4932@c GNU Free Documentation License
4933@c ---------------------------------------------------------------------
4934
4935@include fdl.texi
4936
4937
4938
4939@c ---------------------------------------------------------------------
4940@c Funding Free Software
4941@c ---------------------------------------------------------------------
4942
4943@include funding.texi
4944
4945@c ---------------------------------------------------------------------
4946@c Index
4947@c ---------------------------------------------------------------------
4948
4949@node Library Index
4950@unnumbered Library Index
4951
4952@printindex cp
4953
4954@bye