\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2023 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software.  Copies published by the Free Software Foundation raise
     funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library.  This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page


@node Top, Enabling OpenMP
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library.  This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library.  Based
on this, support for OpenACC and offloading (both OpenACC and OpenMP
4's target construct) was added later, and the library was renamed
to the GNU Offloading and Multi Processing Runtime Library.



@comment
@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.
@comment
@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* OpenMP Implementation Status:: List of implemented features by OpenMP version
* OpenMP Runtime Library Routines: Runtime Library Routines.
                               The OpenMP runtime application programming
                               interface.
* OpenMP Environment Variables: Environment Variables.
                               Influencing OpenMP runtime behavior with
                               environment variables.
* Enabling OpenACC::           How to enable OpenACC for your
                               applications.
* OpenACC Runtime Library Routines:: The OpenACC runtime application
                               programming interface.
* OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
                               environment variables.
* CUDA Streams Usage::         Notes on the implementation of
                               asynchronous operations.
* OpenACC Library Interoperability:: OpenACC library interoperability with the
                               NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* OpenMP-Implementation Specifics:: Notes on specifics of this OpenMP
                               implementation
* Offload-Target Specifics::   Notes on offload-target specific internals
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in the GNU Offloading and
                               Multi Processing Runtime Library.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified.  This enables the OpenMP directive
@code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
@code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran.  The flag also
arranges for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).

A complete description of all OpenMP directives may be found in the
@uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
See also @ref{OpenMP Implementation Status}.


@c ---------------------------------------------------------------------
@c OpenMP Implementation Status
@c ---------------------------------------------------------------------

@node OpenMP Implementation Status
@chapter OpenMP Implementation Status

@menu
* OpenMP 4.5::                 Feature completion status to 4.5 specification
* OpenMP 5.0::                 Feature completion status to 5.0 specification
* OpenMP 5.1::                 Feature completion status to 5.1 specification
* OpenMP 5.2::                 Feature completion status to 5.2 specification
* OpenMP Technical Report 11:: Feature completion status to first 6.0 preview
@end menu

The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
the value @code{201511} (i.e. OpenMP 4.5).

@node OpenMP 4.5
@section OpenMP 4.5

The OpenMP 4.5 specification is fully supported.

@node OpenMP 5.0
@section OpenMP 5.0

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Array shaping @tab N @tab
@item Array sections with non-unit strides in C and C++ @tab N @tab
@item Iterators @tab Y @tab
@item @code{metadirective} directive @tab N @tab
@item @code{declare variant} directive
      @tab P @tab @emph{simd} traits not handled correctly
@item @emph{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
      env variable @tab Y @tab
@item Nested-parallel changes to @emph{max-active-levels-var} ICV @tab Y @tab
@item @code{requires} directive @tab P
      @tab complete but no non-host device provides @code{unified_shared_memory}
@item @code{teams} construct outside an enclosing target region @tab Y @tab
@item Non-rectangular loop nests @tab P @tab Full support for C/C++, partial for Fortran
@item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
@item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
      constructs @tab Y @tab
@item Collapse of associated loops that are imperfectly nested loops @tab N @tab
@item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
      @code{simd} construct @tab Y @tab
@item @code{atomic} constructs in @code{simd} @tab Y @tab
@item @code{loop} construct @tab Y @tab
@item @code{order(concurrent)} clause @tab Y @tab
@item @code{scan} directive and @code{in_scan} modifier for the
      @code{reduction} clause @tab Y @tab
@item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
@item @code{in_reduction} clause on @code{target} constructs @tab P
      @tab @code{nowait} only stub
@item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
@item @code{task} modifier to @code{reduction} clause @tab Y @tab
@item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
@item @code{detach} clause to @code{task} construct @tab Y @tab
@item @code{omp_fulfill_event} runtime routine @tab Y @tab
@item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
      and @code{taskloop simd} constructs @tab Y @tab
@item @code{taskloop} construct cancelable by @code{cancel} construct
      @tab Y @tab
@item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
      @tab Y @tab
@item Predefined memory spaces, memory allocators, allocator traits
      @tab Y @tab Some are only stubs
@item Memory management routines @tab Y @tab
@item @code{allocate} directive @tab N @tab
@item @code{allocate} clause @tab P @tab Initial support
@item @code{use_device_addr} clause on @code{target data} @tab Y @tab
@item @code{ancestor} modifier on @code{device} clause @tab Y @tab
@item Implicit declare target directive @tab Y @tab
@item Discontiguous array section with @code{target update} construct
      @tab N @tab
@item C/C++'s lvalue expressions in @code{to}, @code{from}
      and @code{map} clauses @tab N @tab
@item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
@item Nested @code{declare target} directive @tab Y @tab
@item Combined @code{master} constructs @tab Y @tab
@item @code{depend} clause on @code{taskwait} @tab Y @tab
@item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
      @tab Y @tab
@item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
@item @code{depobj} construct and depend objects @tab Y @tab
@item Lock hints were renamed to synchronization hints @tab Y @tab
@item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
@item Map-order clarifications @tab P @tab
@item @code{close} @emph{map-type-modifier} @tab Y @tab
@item Mapping C/C++ pointer variables and assigning the address of
      device memory mapped by an array section @tab P @tab
@item Mapping of Fortran pointer and allocatable variables, including pointer
      and allocatable components of variables
      @tab P @tab Mapping of vars with allocatable components unsupported
@item @code{defaultmap} extensions @tab Y @tab
@item @code{declare mapper} directive @tab N @tab
@item @code{omp_get_supported_active_levels} routine @tab Y @tab
@item Runtime routines and environment variables to display runtime thread
      affinity information @tab Y @tab
@item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
      routines @tab Y @tab
@item @code{omp_get_device_num} runtime routine @tab Y @tab
@item OMPT interface @tab N @tab
@item OMPD interface @tab N @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.0 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Supporting C++'s range-based for loop @tab Y @tab
@end multitable


@node OpenMP 5.1
@section OpenMP 5.1

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item OpenMP directive as C++ attribute specifiers @tab Y @tab
@item @code{omp_all_memory} reserved locator @tab Y @tab
@item @emph{target_device trait} in OpenMP Context @tab N @tab
@item @code{target_device} selector set in context selectors @tab N @tab
@item C/C++'s @code{declare variant} directive: elision support of
      preprocessed code @tab N @tab
@item @code{declare variant}: new clauses @code{adjust_args} and
      @code{append_args} @tab N @tab
@item @code{dispatch} construct @tab N @tab
@item device-specific ICV settings with environment variables @tab Y @tab
@item @code{assume} and @code{assumes} directives @tab Y @tab
@item @code{nothing} directive @tab Y @tab
@item @code{error} directive @tab Y @tab
@item @code{masked} construct @tab Y @tab
@item @code{scope} directive @tab Y @tab
@item Loop transformation constructs @tab N @tab
@item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
      clauses of the @code{taskloop} construct @tab Y @tab
@item @code{align} clause in @code{allocate} directive @tab N @tab
@item @code{align} modifier in @code{allocate} clause @tab Y @tab
@item @code{thread_limit} clause to @code{target} construct @tab Y @tab
@item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
@item Iterators in @code{target update} motion clauses and @code{map}
      clauses @tab N @tab
@item Indirect calls to the device version of a procedure or function in
      @code{target} regions @tab N @tab
@item @code{interop} directive @tab N @tab
@item @code{omp_interop_t} object support in runtime routines @tab N @tab
@item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
@item Extensions to the @code{atomic} directive @tab Y @tab
@item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
@item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
@item @code{private} and @code{firstprivate} argument to @code{default}
      clause in C and C++ @tab Y @tab
@item @code{present} argument to @code{defaultmap} clause @tab Y @tab
@item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
      @code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
      routines @tab Y @tab
@item @code{omp_target_is_accessible} runtime routine @tab Y @tab
@item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
      runtime routines @tab Y @tab
@item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
@item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
      @code{omp_aligned_calloc} runtime routines @tab Y @tab
@item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
      @code{omp_atv_default} changed @tab Y @tab
@item @code{omp_display_env} runtime routine @tab Y @tab
@item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
@item @code{ompt_sync_region_t} enum additions @tab N @tab
@item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
      and @code{ompt_state_wait_barrier_teams} @tab N @tab
@item @code{ompt_callback_target_data_op_emi_t},
      @code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
      and @code{ompt_callback_target_submit_emi_t} @tab N @tab
@item @code{ompt_callback_error_t} type @tab N @tab
@item @code{OMP_PLACES} syntax extensions @tab Y @tab
@item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
      variables @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.1 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Support of strictly structured blocks in Fortran @tab Y @tab
@item Support of structured block sequences in C/C++ @tab Y @tab
@item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
      clause @tab Y @tab
@item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
@item Pointer predetermined firstprivate getting initialized to the address
      of the matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
@item For Fortran, diagnose placing declarative directives before/between
      @code{USE}, @code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
@item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
@item @code{indirect} clause in @code{declare target} @tab N @tab
@item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
@item @code{present} modifier to the @code{map}, @code{to} and @code{from}
      clauses @tab Y @tab
@end multitable


@node OpenMP 5.2
@section OpenMP 5.2

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item @code{omp_in_explicit_task} routine and @emph{explicit-task-var} ICV
      @tab Y @tab
@item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
      namespaces @tab N/A
      @tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
      sentinel as C/C++ pragma and C++ attributes are warned for with
      @code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
      (enabled by default), respectively; for Fortran free-source code, there is
      a warning enabled by default and, for fixed-source code, the @code{omx}
      sentinel is warned for with @code{-Wsurprising} (enabled by
      @code{-Wall}).  Unknown clauses are always rejected with an error.}
@item Clauses on @code{end} directive can be on directive @tab Y @tab
@item Deprecation of no-argument @code{destroy} clause on @code{depobj}
      @tab N @tab
@item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
@item Deprecation of minus operator for reductions @tab N @tab
@item Deprecation of separating @code{map} modifiers without comma @tab N @tab
@item @code{declare mapper} with iterator and @code{present} modifiers
      @tab N @tab
@item If a matching mapped list item is not found in the data environment, the
      pointer retains its original value @tab N @tab
@item New @code{enter} clause as alias for @code{to} on declare target directive
      @tab Y @tab
@item Deprecation of @code{to} clause on declare target directive @tab N @tab
@item Extended list of directives permitted in Fortran pure procedures
      @tab Y @tab
@item New @code{allocators} directive for Fortran @tab N @tab
@item Deprecation of @code{allocate} directive for Fortran
      allocatables/pointers @tab N @tab
@item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
@item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
      @tab N @tab
@item Deprecation of traits array following the allocator_handle expression in
      @code{uses_allocators} @tab N @tab
@item New @code{otherwise} clause as alias for @code{default} on metadirectives
      @tab N @tab
@item Deprecation of @code{default} clause on metadirectives @tab N @tab
@item Deprecation of delimited form of @code{declare target} @tab N @tab
@item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
@item @code{allocate} and @code{firstprivate} clauses on @code{scope}
      @tab Y @tab
@item @code{ompt_callback_work} @tab N @tab
@item Default map-type for the @code{map} clause in @code{target enter/exit data}
      @tab Y @tab
@item New @code{doacross} clause as alias for @code{depend} with
      @code{source}/@code{sink} modifier @tab Y @tab
@item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
      @tab N @tab
@item @code{omp_cur_iteration} keyword @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.2 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item For Fortran, optional comma between directive and clause @tab N @tab
@item Conforming device numbers and @code{omp_initial_device} and
      @code{omp_invalid_device} enum/PARAMETER @tab Y @tab
@item Initial value of @emph{default-device-var} ICV with
      @code{OMP_TARGET_OFFLOAD=mandatory} @tab N @tab
@item @emph{interop_types} in any position of the modifier list for the @code{init} clause
      of the @code{interop} construct @tab N @tab
@end multitable


@node OpenMP Technical Report 11
@section OpenMP Technical Report 11

Technical Report (TR) 11 is the first preview for OpenMP 6.0.

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
      @tab N/A @tab Backward compatibility
@item The @code{decl} attribute was added to the C++ attribute syntax
      @tab N @tab
@item @code{_ALL} suffix to the device-scope environment variables
      @tab P @tab Host device number wrongly accepted
@item For Fortran, @emph{locator list} can be also function reference with
      data pointer result @tab N @tab
@item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
      @tab N @tab
@item Implicit reduction identifiers of C++ classes
      @tab N @tab
@item Change of the @emph{map-type} property from @emph{ultimate} to
      @emph{default} @tab N @tab
@item Concept of @emph{assumed-size arrays} in C and C++
      @tab N @tab
@item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
      @tab N @tab
@item @code{groupprivate} directive @tab N @tab
@item @code{local} clause to declare target directive @tab N @tab
@item @code{part_size} allocator trait @tab N @tab
@item @code{pin_device}, @code{preferred_device} and @code{target_access}
      allocator traits @tab N @tab
@item @code{access} allocator trait changes @tab N @tab
@item Extension of @code{interop} operation of @code{append_args}, allowing all
      modifiers of the @code{init} clause @tab N @tab
@item @code{interop} clause to @code{dispatch} @tab N @tab
@item @code{apply} clause to loop-transforming constructs @tab N @tab
@item @code{omp_curr_progress_width} identifier @tab N @tab
@item @code{safesync} clause to the @code{parallel} construct @tab N @tab
@item @code{omp_get_max_progress_width} runtime routine @tab N @tab
@item @code{strict} modifier keyword to @code{num_threads} @tab N @tab
@item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
@item Routines for obtaining memory spaces/allocators for shared/device memory
      @tab N @tab
@item @code{omp_get_memspace_num_resources} routine @tab N @tab
@item @code{omp_get_submemspace} routine @tab N @tab
@item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
@item Extension of @code{OMP_DEFAULT_DEVICE} and new
      @code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
@item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
@end multitable

@unnumberedsubsec Other new TR 11 features
@multitable @columnfractions .60 .10 .25
@item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
@item Mapping lambda captures @tab N @tab
@item For Fortran, atomic compare with storing the comparison result
      @tab N @tab
@item @code{aligned} clause changes for @code{simd} and @code{declare simd}
      @tab N @tab
@end multitable



@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines

The runtime routines described here are defined by Section 3 of the OpenMP
specification in version 4.5.  The routines are structured in the following
three parts:

@menu
Control threads, processors and the parallel environment.  They have C
linkage, and do not throw exceptions.

* omp_get_active_level::        Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_cancellation::        Whether cancellation support is enabled
* omp_get_default_device::      Get the default device for target regions
* omp_get_device_num::          Get device that current thread is running on
* omp_get_dynamic::             Dynamic teams setting
* omp_get_initial_device::      Device number of host device
* omp_get_level::               Number of parallel regions
* omp_get_max_active_levels::   Current maximum number of active regions
* omp_get_max_task_priority::   Maximum task priority value that can be set
* omp_get_max_teams::           Maximum number of teams for teams region
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_nested::              Nested parallel regions
* omp_get_num_devices::         Number of target devices
* omp_get_num_procs::           Number of processors online
* omp_get_num_teams::           Number of teams
* omp_get_num_threads::         Size of the active team
* omp_get_proc_bind::           Whether threads may be moved between CPUs
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_supported_active_levels:: Maximum number of active regions supported
* omp_get_team_num::            Get team number
* omp_get_team_size::           Number of threads in a team
* omp_get_teams_thread_limit::  Maximum number of threads imposed by teams
* omp_get_thread_limit::        Maximum number of threads
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_in_final::                Whether in final or included task region
* omp_is_initial_device::       Whether executing on the host device
* omp_set_default_device::      Set the default device for target regions
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_set_nested::              Enable/disable nested parallel regions
* omp_set_num_teams::           Set upper teams limit for teams region
* omp_set_num_threads::         Set upper team size limit
* omp_set_schedule::            Set the runtime scheduling method
* omp_set_teams_thread_limit::  Set upper thread limit for teams construct

Initialize, set, test, unset and destroy simple and nested locks.

* omp_init_lock::            Initialize simple lock
* omp_set_lock::             Wait for and set simple lock
* omp_test_lock::            Test and set simple lock if available
* omp_unset_lock::           Unset simple lock
* omp_destroy_lock::         Destroy simple lock
* omp_init_nest_lock::       Initialize nested lock
* omp_set_nest_lock::        Wait for and set nested lock
* omp_test_nest_lock::       Test and set nested lock if available
* omp_unset_nest_lock::      Unset nested lock
* omp_destroy_nest_lock::    Destroy nested lock

Portable, thread-based, wall clock timer.

* omp_get_wtick::            Get timer precision.
* omp_get_wtime::            Elapsed wall clock time.

Support for event objects.

* omp_fulfill_event::        Fulfill and destroy an OpenMP event.
@end menu



@node omp_get_active_level
@section @code{omp_get_active_level} -- Number of active parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel blocks
that enclose the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
@end table



@node omp_get_ancestor_thread_num
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number for the given
nesting level of the current thread.  For values of @var{level} outside
the range from zero to @code{omp_get_level}, -1 is returned; if @var{level}
is @code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
@end table



@node omp_get_cancellation
@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
@table @asis
@item @emph{Description}:
This function returns @code{true} if cancellation is activated, @code{false}
otherwise.  Here, @code{true} and @code{false} represent their language-specific
counterparts.  Unless @env{OMP_CANCELLATION} is set true, cancellations are
deactivated.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@end multitable

@item @emph{See also}:
@ref{OMP_CANCELLATION}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@end table



@node omp_get_default_device
@section @code{omp_get_default_device} -- Get the default device for target regions
@table @asis
@item @emph{Description}:
Get the default device for target regions without a device clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
@end table



@node omp_get_device_num
@section @code{omp_get_device_num} -- Return device number of current device
@table @asis
@item @emph{Description}:
This function returns a device number that represents the device that the
current thread is executing on.  For OpenMP 5.0, this must be equal to the
value returned by the @code{omp_get_initial_device} function when called
from the host.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_initial_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
@end table



@node omp_get_dynamic
@section @code{omp_get_dynamic} -- Dynamic teams setting
@table @asis
@item @emph{Description}:
This function returns @code{true} if enabled, @code{false} otherwise.
Here, @code{true} and @code{false} represent their language-specific
counterparts.

The dynamic team setting may be initialized at startup by the
@env{OMP_DYNAMIC} environment variable or at runtime using
@code{omp_set_dynamic}.  If undefined, dynamic adjustment is
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
@end multitable

@item @emph{See also}:
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
@end table
738
739
740
741@node omp_get_initial_device
742@section @code{omp_get_initial_device} -- Return device number of initial device
743@table @asis
744@item @emph{Description}:
745This function returns a device number that represents the host device.
746For OpenMP 5.1, this must be equal to the value returned by the
747@code{omp_get_num_devices} function.
748
749@item @emph{C/C++}
750@multitable @columnfractions .20 .80
751@item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
752@end multitable
753
754@item @emph{Fortran}:
755@multitable @columnfractions .20 .80
756@item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
757@end multitable
758
759@item @emph{See also}:
760@ref{omp_get_num_devices}
761
762@item @emph{Reference}:
763@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
764@end table
765


@node omp_get_level
@section @code{omp_get_level} -- Obtain the current nesting level
@table @asis
@item @emph{Description}:
This function returns the nesting level of the parallel regions
enclosing the call.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
@end table



@node omp_get_max_active_levels
@section @code{omp_get_max_active_levels} -- Current maximum number of active regions
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed number of nested, active parallel regions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
@end table


@node omp_get_max_task_priority
@section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed priority number for tasks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table


@node omp_get_max_teams
@section @code{omp_get_max_teams} -- Maximum number of teams of teams region
@table @asis
@item @emph{Description}:
Return the maximum number of teams used for a teams region
that does not use the @code{num_teams} clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_teams}, @ref{omp_get_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
@end table



@node omp_get_max_threads
@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
@table @asis
@item @emph{Description}:
Return the maximum number of threads used for the current parallel region
that does not use the @code{num_threads} clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
@end table



@node omp_get_nested
@section @code{omp_get_nested} -- Nested parallel regions
@table @asis
@item @emph{Description}:
This function returns @code{true} if nested parallel regions are
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
represent their language-specific counterparts.

The state of nested parallel regions at startup depends on several
environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
and is set to a value greater than one, then nested parallel regions
are enabled. If it is not defined, then the value of the
@env{OMP_NESTED} environment variable is followed if defined. If
neither is defined, then nested parallel regions are enabled if
either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} is defined with
a list of more than one value. If none of these are defined, then
nested parallel regions are disabled by default.

Nested parallel regions can be enabled or disabled at runtime using
@code{omp_set_nested}, or by setting the maximum number of nested
regions with @code{omp_set_max_active_levels} to one to disable, or
above one to enable.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
@end multitable

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
@end table



@node omp_get_num_devices
@section @code{omp_get_num_devices} -- Number of target devices
@table @asis
@item @emph{Description}:
Returns the number of target devices.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
@end table



@node omp_get_num_procs
@section @code{omp_get_num_procs} -- Number of processors online
@table @asis
@item @emph{Description}:
Returns the number of processors online on the current device.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
@end table



@node omp_get_num_teams
@section @code{omp_get_num_teams} -- Number of teams
@table @asis
@item @emph{Description}:
Returns the number of teams in the current teams region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
@end table



@node omp_get_num_threads
@section @code{omp_get_num_threads} -- Size of the active team
@table @asis
@item @emph{Description}:
Returns the number of threads in the current team. In a sequential section of
the program @code{omp_get_num_threads} returns 1.

The default team size may be initialized at startup by the
@env{OMP_NUM_THREADS} environment variable. At runtime, the size
of the current team may be set either by the @code{num_threads}
clause or by @code{omp_set_num_threads}. If none of the above were
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
one thread per CPU online is used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
@end table



@node omp_get_proc_bind
@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
@table @asis
@item @emph{Description}:
This function returns the currently active thread affinity policy, which is
set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
@code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
@code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
@end multitable

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
@end table



@node omp_get_schedule
@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
@table @asis
@item @emph{Description}:
Obtain the runtime scheduling method. The @var{kind} argument will be
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
@var{chunk_size}, is set to the chunk size.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
@end table


@node omp_get_supported_active_levels
@section @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
@table @asis
@item @emph{Description}:
This function returns the maximum number of nested, active parallel regions
supported by this implementation.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
@end table



@node omp_get_team_num
@section @code{omp_get_team_num} -- Get team number
@table @asis
@item @emph{Description}:
Returns the team number of the calling thread.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
@end table



@node omp_get_team_size
@section @code{omp_get_team_size} -- Number of threads in a team
@table @asis
@item @emph{Description}:
This function returns the number of threads in a thread team to which
either the current thread or its ancestor belongs. For values of
@var{level} outside the range zero to @code{omp_get_level}, -1 is
returned; if @var{level} is zero, 1 is returned, and for
@code{omp_get_level} the result is identical to @code{omp_get_num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
@end table



@node omp_get_teams_thread_limit
@section @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
@table @asis
@item @emph{Description}:
Return the maximum number of threads that will be able to participate in
each team created by a teams construct.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
@end table



@node omp_get_thread_limit
@section @code{omp_get_thread_limit} -- Maximum number of threads
@table @asis
@item @emph{Description}:
Return the maximum number of threads of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
@end table



@node omp_get_thread_num
@section @code{omp_get_thread_num} -- Current thread ID
@table @asis
@item @emph{Description}:
Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
always returns 0. In parallel regions the return value varies
from 0 to @code{omp_get_num_threads}-1 inclusive. The return
value of the primary thread of a team is always 0.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
@end table



@node omp_in_parallel
@section @code{omp_in_parallel} -- Whether a parallel region is active
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in parallel,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
@end table


@node omp_in_final
@section @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
or included task region, @code{false} otherwise. Here, @code{true}
and @code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
@end table



@node omp_is_initial_device
@section @code{omp_is_initial_device} -- Whether executing on the host device
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running on the host device,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
@end table



@node omp_set_default_device
@section @code{omp_set_default_device} -- Set the default device for target regions
@table @asis
@item @emph{Description}:
Set the default device for target regions without a device clause. The
argument shall be a nonnegative device number.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
@item @tab @code{integer device_num}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table



@node omp_set_dynamic
@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
@item @tab @code{logical, intent(in) :: dynamic_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
@end table



@node omp_set_max_active_levels
@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
This function limits the maximum allowed number of nested, active
parallel regions. @var{max_levels} must be less than or equal to
the value returned by @code{omp_get_supported_active_levels}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
@item @tab @code{integer max_levels}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
@ref{omp_get_supported_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
@end table



@node omp_set_nested
@section @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.

Enabling nested parallel regions will also set the maximum number of
active nested regions to the maximum supported. Disabling nested parallel
regions will set the maximum number of active nested regions to one.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
@item @tab @code{logical, intent(in) :: nested}
@end multitable

@item @emph{See also}:
@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
@end table



@node omp_set_num_teams
@section @code{omp_set_num_teams} -- Set upper teams limit for teams construct
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of teams created by the teams
construct which does not specify a @code{num_teams} clause. The
argument of @code{omp_set_num_teams} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
@item @tab @code{integer, intent(in) :: num_teams}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
@end table



@node omp_set_num_threads
@section @code{omp_set_num_threads} -- Set upper team size limit
@table @asis
@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
sections, if those do not specify a @code{num_threads} clause. The
argument of @code{omp_set_num_threads} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
@item @tab @code{integer, intent(in) :: num_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
@end table



@node omp_set_schedule
@section @code{omp_set_schedule} -- Set the runtime scheduling method
@table @asis
@item @emph{Description}:
Sets the runtime scheduling method. The @var{kind} argument can have the
value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
@code{omp_sched_auto}, the chunk size is set to the value of
@var{chunk_size} if positive, or to the default value if zero or negative.
For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
@end table



@node omp_set_teams_thread_limit
@section @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of threads that will be
available for each team created by the teams construct which does not
specify a @code{thread_limit} clause. The argument of
@code{omp_set_teams_thread_limit} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
@item @tab @code{integer, intent(in) :: thread_limit}
@end multitable

@item @emph{See also}:
@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
@end table



@node omp_init_lock
@section @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock. After initialization, the lock is in
an unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table



@node omp_set_lock
@section @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread,
a deadlock occurs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table

1612
1613
1614@node omp_test_lock
1615@section @code{omp_test_lock} -- Test and set simple lock if available
1616@table @asis
1617@item @emph{Description}:
1618Before setting a simple lock, the lock variable must be initialized by
1619@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
1620does not block if the lock is not available. This function returns
1621@code{true} upon success, @code{false} otherwise. Here, @code{true} and
1622@code{false} represent their language-specific counterparts.
1623
1624@item @emph{C/C++}:
1625@multitable @columnfractions .20 .80
1626@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
1627@end multitable
1628
1629@item @emph{Fortran}:
1630@multitable @columnfractions .20 .80
1631@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
1632@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1633@end multitable
1634
1635@item @emph{See also}:
1636@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}
1637
1638@item @emph{Reference}:
1639@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
1640@end table
1641
1642
1643
1644@node omp_unset_lock
1645@section @code{omp_unset_lock} -- Unset simple lock
1646@table @asis
1647@item @emph{Description}:
1648A simple lock about to be unset must have been locked by @code{omp_set_lock}
1649or @code{omp_test_lock} before. In addition, the lock must be held by the
1650thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
1651or more threads attempted to set the lock before, one of them is chosen to
1652set the lock.
1653
1654@item @emph{C/C++}:
1655@multitable @columnfractions .20 .80
1656@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
1657@end multitable
1658
1659@item @emph{Fortran}:
1660@multitable @columnfractions .20 .80
1661@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
1662@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1663@end multitable
1664
1665@item @emph{See also}:
1666@ref{omp_set_lock}, @ref{omp_test_lock}
1667
1668@item @emph{Reference}:
1669@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
1670@end table
1671
1672
1673
1674@node omp_destroy_lock
1675@section @code{omp_destroy_lock} -- Destroy simple lock
1676@table @asis
1677@item @emph{Description}:
1678Destroy a simple lock. In order to be destroyed, a simple lock must be
1679in the unlocked state.
1680
1681@item @emph{C/C++}:
1682@multitable @columnfractions .20 .80
1683@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
1684@end multitable
1685
1686@item @emph{Fortran}:
1687@multitable @columnfractions .20 .80
1688@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
1689@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1690@end multitable
1691
1692@item @emph{See also}:
1693@ref{omp_init_lock}
1694
1695@item @emph{Reference}:
1696@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
1697@end table
1698
1699
1700
1701@node omp_init_nest_lock
1702@section @code{omp_init_nest_lock} -- Initialize nested lock
1703@table @asis
1704@item @emph{Description}:
1705Initialize a nested lock. After initialization, the lock is in
1706an unlocked state and the nesting count is set to zero.
1707
1708@item @emph{C/C++}:
1709@multitable @columnfractions .20 .80
1710@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
1711@end multitable
1712
1713@item @emph{Fortran}:
1714@multitable @columnfractions .20 .80
1715@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
1716@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
1717@end multitable
1718
1719@item @emph{See also}:
1720@ref{omp_destroy_nest_lock}
1721
1722@item @emph{Reference}:
1723@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
1724@end table
1725
1726
1727@node omp_set_nest_lock
1728@section @code{omp_set_nest_lock} -- Wait for and set nested lock
1729@table @asis
1730@item @emph{Description}:
1731Before setting a nested lock, the lock variable must be initialized by
1732@code{omp_init_nest_lock}. The calling thread is blocked until the lock
1733is available. If the lock is already held by the current thread, the
1734nesting count for the lock is incremented.
1735
1736@item @emph{C/C++}:
1737@multitable @columnfractions .20 .80
1738@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
1739@end multitable
1740
1741@item @emph{Fortran}:
1742@multitable @columnfractions .20 .80
1743@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
1744@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1745@end multitable
1746
1747@item @emph{See also}:
1748@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}
1749
1750@item @emph{Reference}:
1751@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
1752@end table
1753
1754
1755
1756@node omp_test_nest_lock
1757@section @code{omp_test_nest_lock} -- Test and set nested lock if available
1758@table @asis
1759@item @emph{Description}:
1760Before setting a nested lock, the lock variable must be initialized by
1761@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
1762@code{omp_test_nest_lock} does not block if the lock is not available.
1763If the lock is already held by the current thread, the new nesting count
1764is returned. Otherwise, the return value equals zero.
1765
1766@item @emph{C/C++}:
1767@multitable @columnfractions .20 .80
1768@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
1769@end multitable
1770
1771@item @emph{Fortran}:
1772@multitable @columnfractions .20 .80
1773@item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
1774@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1775@end multitable
1776
1777
1778@item @emph{See also}:
1779@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}
1780
1781@item @emph{Reference}:
1782@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
1783@end table
1784
1785
1786
1787@node omp_unset_nest_lock
1788@section @code{omp_unset_nest_lock} -- Unset nested lock
1789@table @asis
1790@item @emph{Description}:
1791A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
1792or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
1793thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
1794lock becomes unlocked. If one or more threads attempted to set the lock before,
1795one of them is chosen to set the lock.
1796
1797@item @emph{C/C++}:
1798@multitable @columnfractions .20 .80
1799@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
1800@end multitable
1801
1802@item @emph{Fortran}:
1803@multitable @columnfractions .20 .80
1804@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
1805@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1806@end multitable
1807
1808@item @emph{See also}:
1809@ref{omp_set_nest_lock}
1810
1811@item @emph{Reference}:
1812@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
1813@end table
1814
1815
1816
1817@node omp_destroy_nest_lock
1818@section @code{omp_destroy_nest_lock} -- Destroy nested lock
1819@table @asis
1820@item @emph{Description}:
1821Destroy a nested lock. In order to be destroyed, a nested lock must be
1822in the unlocked state and its nesting count must equal zero.
1823
1824@item @emph{C/C++}:
1825@multitable @columnfractions .20 .80
1826@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
1827@end multitable
1828
1829@item @emph{Fortran}:
1830@multitable @columnfractions .20 .80
1831@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
1832@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1833@end multitable
1834
1835@item @emph{See also}:
1836@ref{omp_init_nest_lock}
1837
1838@item @emph{Reference}:
1839@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
1840@end table
1841
1842
1843
1844@node omp_get_wtick
1845@section @code{omp_get_wtick} -- Get timer precision
1846@table @asis
1847@item @emph{Description}:
1848Gets the timer precision, i.e., the number of seconds between two
1849successive clock ticks.
1850
1851@item @emph{C/C++}:
1852@multitable @columnfractions .20 .80
1853@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
1854@end multitable
1855
1856@item @emph{Fortran}:
1857@multitable @columnfractions .20 .80
1858@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
1859@end multitable
1860
1861@item @emph{See also}:
1862@ref{omp_get_wtime}
1863
1864@item @emph{Reference}:
1865@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
1866@end table
1867
1868
1869
1870@node omp_get_wtime
1871@section @code{omp_get_wtime} -- Elapsed wall clock time
1872@table @asis
1873@item @emph{Description}:
1874Elapsed wall clock time in seconds. The time is measured per thread; no
1875guarantee can be made that two distinct threads measure the same time.
1876Time is measured from some "time in the past", which is an arbitrary time
1877guaranteed not to change during the execution of the program.
1878
1879@item @emph{C/C++}:
1880@multitable @columnfractions .20 .80
1881@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
1882@end multitable
1883
1884@item @emph{Fortran}:
1885@multitable @columnfractions .20 .80
1886@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
1887@end multitable
1888
1889@item @emph{See also}:
1890@ref{omp_get_wtick}
1891
1892@item @emph{Reference}:
1893@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
1894@end table
1895
1896
1897
1898@node omp_fulfill_event
1899@section @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
1900@table @asis
1901@item @emph{Description}:
1902Fulfill the event associated with the event handle argument. Currently, it
1903is only used to fulfill events generated by detach clauses on task
1904constructs - the effect of fulfilling the event is to allow the task to
1905complete.
1906
1907The result of calling @code{omp_fulfill_event} with an event handle other
1908than that generated by a detach clause is undefined. Calling it with an
1909event handle that has already been fulfilled is also undefined.
1910
1911@item @emph{C/C++}:
1912@multitable @columnfractions .20 .80
1913@item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
1914@end multitable
1915
1916@item @emph{Fortran}:
1917@multitable @columnfractions .20 .80
1918@item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
1919@item @tab @code{integer (kind=omp_event_handle_kind) :: event}
1920@end multitable
1921
1922@item @emph{Reference}:
1923@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
1924@end table
1925
1926
1927
1928@c ---------------------------------------------------------------------
1929@c OpenMP Environment Variables
1930@c ---------------------------------------------------------------------
1931
1932@node Environment Variables
1933@chapter OpenMP Environment Variables
1934
1935The environment variables beginning with @env{OMP_} are defined by
1936section 4 of the OpenMP specification in version 4.5, while those
1937beginning with @env{GOMP_} are GNU extensions.
1938
1939@menu
1940* OMP_CANCELLATION:: Set whether cancellation is activated
1941* OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
1942* OMP_DEFAULT_DEVICE:: Set the device used in target regions
1943* OMP_DYNAMIC:: Dynamic adjustment of threads
1944* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
1945* OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
1946* OMP_NESTED:: Nested parallel regions
1947* OMP_NUM_TEAMS:: Specifies the number of teams to use by teams region
1948* OMP_NUM_THREADS:: Specifies the number of threads to use
1949* OMP_PROC_BIND:: Whether threads may be moved between CPUs
1950* OMP_PLACES:: Specifies on which CPUs the threads should be placed
1951* OMP_STACKSIZE:: Set default thread stack size
1952* OMP_SCHEDULE:: How threads are scheduled
1953* OMP_TARGET_OFFLOAD:: Controls offloading behaviour
1954* OMP_TEAMS_THREAD_LIMIT:: Set the maximum number of threads imposed by teams
1955* OMP_THREAD_LIMIT:: Set the maximum number of threads
1956* OMP_WAIT_POLICY:: How waiting threads are handled
1957* GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
1958* GOMP_DEBUG:: Enable debugging output
1959* GOMP_STACKSIZE:: Set default thread stack size
1960* GOMP_SPINCOUNT:: Set the busy-wait spin count
1961* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
1962@end menu
1963
1964
1965@node OMP_CANCELLATION
1966@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
1967@cindex Environment Variable
1968@table @asis
1969@item @emph{Description}:
1970If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
1971if unset, cancellation is disabled and the @code{cancel} construct is ignored.
1972
1973@item @emph{See also}:
1974@ref{omp_get_cancellation}
1975
1976@item @emph{Reference}:
1977@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
1978@end table
1979
1980
1981
1982@node OMP_DISPLAY_ENV
1983@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
1984@cindex Environment Variable
1985@table @asis
1986@item @emph{Description}:
1987If set to @code{TRUE}, the OpenMP version number and the values
1988associated with the OpenMP environment variables are printed to @code{stderr}.
1989If set to @code{VERBOSE}, it additionally shows the value of the environment
1990variables which are GNU extensions. If undefined or set to @code{FALSE},
1991this information will not be shown.
1992
1993
1994@item @emph{Reference}:
1995@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
1996@end table
1997
1998
1999
2000@node OMP_DEFAULT_DEVICE
2001@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
2002@cindex Environment Variable
2003@table @asis
2004@item @emph{Description}:
2005Set to choose the device which is used in a @code{target} region, unless the
2006value is overridden by @code{omp_set_default_device} or by a @code{device}
2007clause. The value shall be the nonnegative device number. If no device with
2008the given device number exists, the code is executed on the host. If unset,
2009device number 0 will be used.
2010
2011
2012@item @emph{See also}:
2013@ref{omp_get_default_device}, @ref{omp_set_default_device}
2014
2015@item @emph{Reference}:
2016@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
2017@end table
2018
2019
2020
2021@node OMP_DYNAMIC
2022@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
2023@cindex Environment Variable
2024@table @asis
2025@item @emph{Description}:
2026Enable or disable the dynamic adjustment of the number of threads
2027within a team. The value of this environment variable shall be
2028@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
2029disabled by default.
2030
2031@item @emph{See also}:
2032@ref{omp_set_dynamic}
2033
2034@item @emph{Reference}:
2035@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
2036@end table
2037
2038
2039
2040@node OMP_MAX_ACTIVE_LEVELS
2041@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
2042@cindex Environment Variable
2043@table @asis
2044@item @emph{Description}:
2045Specifies the initial value for the maximum number of nested parallel
2046regions. The value of this variable shall be a positive integer.
2047If undefined, then if @env{OMP_NESTED} is defined and set to true, or
2048if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
2049a list with more than one item, the maximum number of nested parallel
2050regions will be initialized to the largest number supported, otherwise
2051it will be set to one.
2052
2053@item @emph{See also}:
2054@ref{omp_set_max_active_levels}, @ref{OMP_NESTED}
2055
2056@item @emph{Reference}:
2057@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
2058@end table
2059
2060
2061
2062@node OMP_MAX_TASK_PRIORITY
2063@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum task priority value
2065@cindex Environment Variable
2066@table @asis
2067@item @emph{Description}:
2068Specifies the initial value for the maximum priority value that can be
2069set for a task. The value of this variable shall be a non-negative
2070integer, and zero is allowed. If undefined, the default priority is
20710.
2072
2073@item @emph{See also}:
2074@ref{omp_get_max_task_priority}
2075
2076@item @emph{Reference}:
2077@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
2078@end table
2079
2080
2081
2082@node OMP_NESTED
2083@section @env{OMP_NESTED} -- Nested parallel regions
2084@cindex Environment Variable
2085@cindex Implementation specific setting
2086@table @asis
2087@item @emph{Description}:
2088Enable or disable nested parallel regions, i.e., whether team members
2089are allowed to create new teams. The value of this environment variable
2090shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the maximum
2091number of active nested regions will by default be set to the
2092maximum supported, otherwise it will be set to one. If
2093@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting will override this
2094setting. If both are undefined, nested parallel regions are enabled if
2095@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to a list with
2096more than one item, otherwise they are disabled by default.
2097
2098@item @emph{See also}:
2099@ref{omp_set_max_active_levels}, @ref{omp_set_nested}
2100
2101@item @emph{Reference}:
2102@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
2103@end table
2104
2105
2106
2107@node OMP_NUM_TEAMS
2108@section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
2109@cindex Environment Variable
2110@table @asis
2111@item @emph{Description}:
2112Specifies the upper bound for the number of teams to use in teams regions
2113without an explicit @code{num_teams} clause. The value of this variable shall
2114be a positive integer. If undefined, it defaults to 0, which means an
2115implementation-defined upper bound.
2116
2117@item @emph{See also}:
2118@ref{omp_set_num_teams}
2119
2120@item @emph{Reference}:
2121@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
2122@end table
2123
2124
2125
2126@node OMP_NUM_THREADS
2127@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
2128@cindex Environment Variable
2129@cindex Implementation specific setting
2130@table @asis
2131@item @emph{Description}:
2132Specifies the default number of threads to use in parallel regions. The
2133value of this variable shall be a comma-separated list of positive integers;
2134the value specifies the number of threads to use for the corresponding nested
2135level. Specifying more than one item in the list will automatically enable
2136nesting by default. If undefined, one thread per CPU is used.
2137
2138@item @emph{See also}:
2139@ref{omp_set_num_threads}, @ref{OMP_NESTED}
2140
2141@item @emph{Reference}:
2142@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
2143@end table
2144
2145
2146
2147@node OMP_PROC_BIND
2148@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
2149@cindex Environment Variable
2150@table @asis
2151@item @emph{Description}:
2152Specifies whether threads may be moved between processors. If set to
2153@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
2154they may be moved. Alternatively, a comma-separated list with the
2155values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
2156be used to specify the thread affinity policy for the corresponding nesting
2157level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
2158same place partition as the primary thread. With @code{CLOSE} those are
2159kept close to the primary thread in contiguous place partitions. And
2160with @code{SPREAD} a sparse distribution
2161across the place partitions is used. Specifying more than one item in the
2162list will automatically enable nesting by default.
2163
2164When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
2165@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
2166
2167@item @emph{See also}:
2168@ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY},
2169@ref{OMP_NESTED}, @ref{OMP_PLACES}
2170
2171@item @emph{Reference}:
2172@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
2173@end table
2174
2175
2176
2177@node OMP_PLACES
2178@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
2179@cindex Environment Variable
2180@table @asis
2181@item @emph{Description}:
2182The thread placement can be either specified using an abstract name or by an
2183explicit list of the places. The abstract names @code{threads}, @code{cores},
2184@code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
2185followed by a positive number in parentheses, which denotes how many places
2186shall be created. With @code{threads} each place corresponds to a single
2187hardware thread; @code{cores} to a single core with the corresponding number of
2188hardware threads; with @code{sockets} the place corresponds to a single
2189socket; with @code{ll_caches} to a set of cores that shares the last level
2190cache on the device; and @code{numa_domains} to a set of cores for which their
2191closest memory on the device is the same memory and at a similar distance from
2192the cores. The resulting placement can be shown by setting the
2193@env{OMP_DISPLAY_ENV} environment variable.
2194
2195Alternatively, the placement can be specified explicitly as comma-separated
2196list of places. A place is specified by a set of nonnegative numbers in curly
2197braces, denoting the hardware threads. The curly braces can be omitted
2198when only a single number has been specified. The hardware threads
2199belonging to a place can either be specified as comma-separated list of
2200nonnegative thread numbers or using an interval. Multiple places can also be
2201either specified by a comma-separated list of places or by an interval. To
2202specify an interval, a colon followed by the count is placed after
2203the hardware thread number or the place. Optionally, the length can be
2204followed by a colon and the stride number -- otherwise a unit stride is
2205assumed. Placing an exclamation mark (@code{!}) directly before a curly
2206brace or numbers inside the curly braces (excluding intervals) will
2207exclude those hardware threads.
2208
2209For instance, the following all specify the same places list:
2210@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
2211@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.
2212
2213If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
2214@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
2215between CPUs following no placement policy.
2216
2217@item @emph{See also}:
2218@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
2219@ref{OMP_DISPLAY_ENV}
2220
2221@item @emph{Reference}:
2222@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
2223@end table
2224
2225
2226
2227@node OMP_STACKSIZE
2228@section @env{OMP_STACKSIZE} -- Set default thread stack size
2229@cindex Environment Variable
2230@table @asis
2231@item @emph{Description}:
2232Set the default thread stack size in kilobytes, unless the number
2233is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
2234case the size is, respectively, in bytes, kilobytes, megabytes
2235or gigabytes. This is different from @code{pthread_attr_setstacksize}
2236which gets the number of bytes as an argument. If the stack size cannot
2237be set due to system constraints, an error is reported and the initial
2238stack size is left unchanged. If undefined, the stack size is system
2239dependent.
2240
2241@item @emph{Reference}:
2242@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
2243@end table
2244
2245
2246
2247@node OMP_SCHEDULE
2248@section @env{OMP_SCHEDULE} -- How threads are scheduled
2249@cindex Environment Variable
2250@cindex Implementation specific setting
2251@table @asis
2252@item @emph{Description}:
2253Allows specifying the @code{schedule type} and @code{chunk size}.
2254The value of the variable shall have the form @code{type[,chunk]}, where
2255@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}.
2256The optional @code{chunk} size shall be a positive integer. If undefined,
2257dynamic scheduling and a chunk size of 1 are used.
2258
2259@item @emph{See also}:
2260@ref{omp_set_schedule}
2261
2262@item @emph{Reference}:
2263@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
2264@end table
2265
2266
2267
2268@node OMP_TARGET_OFFLOAD
2269@section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
2270@cindex Environment Variable
2271@cindex Implementation specific setting
2272@table @asis
2273@item @emph{Description}:
2274Specifies the behaviour with regard to offloading code to a device. This
2275variable can be set to one of three values: @code{MANDATORY}, @code{DISABLED}
2276or @code{DEFAULT}.
2277
2278If set to @code{MANDATORY}, the program will terminate with an error if
2279the offload device is not present or is not supported. If set to
2280@code{DISABLED}, then offloading is disabled and all code will run on the
2281host. If set to @code{DEFAULT}, the program will try offloading to the
2282device first, then fall back to running code on the host if it cannot.
2283
2284If undefined, then the program will behave as if @code{DEFAULT} was set.
2285
2286@item @emph{Reference}:
2287@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.17
2288@end table
2289
2290
2291
2292@node OMP_TEAMS_THREAD_LIMIT
2293@section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
2294@cindex Environment Variable
2295@table @asis
2296@item @emph{Description}:
2297Specifies an upper bound for the number of threads to use by each contention
2298group created by a teams construct without an explicit @code{thread_limit}
2299clause. The value of this variable shall be a positive integer. If undefined,
2300a value of 0 is used, which stands for an implementation-defined upper
2301limit.
2302
2303@item @emph{See also}:
2304@ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}
2305
2306@item @emph{Reference}:
2307@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
2308@end table
2309
2310
2311
2312@node OMP_THREAD_LIMIT
2313@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
2314@cindex Environment Variable
2315@table @asis
2316@item @emph{Description}:
2317Specifies the number of threads to use for the whole program. The
2318value of this variable shall be a positive integer. If undefined,
2319the number of threads is not limited.
2320
2321@item @emph{See also}:
2322@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
2323
2324@item @emph{Reference}:
2325@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
2326@end table
2327
2328
2329
2330@node OMP_WAIT_POLICY
2331@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
2332@cindex Environment Variable
2333@table @asis
2334@item @emph{Description}:
2335Specifies whether waiting threads should be active or passive. If
2336the value is @code{PASSIVE}, waiting threads should not consume CPU
2337power while waiting; the value @code{ACTIVE} specifies that
2338they should. If undefined, threads wait actively for a short time
2339before waiting passively.
2340
2341@item @emph{See also}:
2342@ref{GOMP_SPINCOUNT}
2343
2344@item @emph{Reference}:
2345@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
2346@end table
2347
2348
2349
2350@node GOMP_CPU_AFFINITY
2351@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
2352@cindex Environment Variable
2353@table @asis
2354@item @emph{Description}:
2355Binds threads to specific CPUs. The variable should contain a space-separated
2356or comma-separated list of CPUs. This list may contain different kinds of
2357entries: either single CPU numbers in any order, a range of CPUs (M-N)
2358or a range with some stride (M-N:S). CPU numbers are zero based. For example,
2359@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
2360to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
2361CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
2362and 14 respectively and then start assigning back from the beginning of
2363the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
2364
2365There is no libgomp library routine to determine whether a CPU affinity
2366specification is in effect. As a workaround, language-specific library
2367functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
2368Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
2369environment variable. A defined CPU affinity on startup cannot be changed
2370or disabled during the runtime of the application.
2371
2372If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
2373@env{OMP_PROC_BIND} has a higher precedence. If neither has been set, or
2374when @env{OMP_PROC_BIND} is set to
2375@code{FALSE}, the host system will handle the assignment of threads to CPUs.
2376
2377@item @emph{See also}:
2378@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
2379@end table
2380
2381
2382
2383@node GOMP_DEBUG
2384@section @env{GOMP_DEBUG} -- Enable debugging output
2385@cindex Environment Variable
2386@table @asis
2387@item @emph{Description}:
2388Enable debugging output. The variable should be set to @code{0}
2389(disabled, also the default if not set), or @code{1} (enabled).
2390
2391If enabled, some debugging output will be printed during execution.
2392This is currently not specified in more detail, and subject to change.
2393@end table
2394
2395
2396
2397@node GOMP_STACKSIZE
2398@section @env{GOMP_STACKSIZE} -- Set default thread stack size
2399@cindex Environment Variable
2400@cindex Implementation specific setting
2401@table @asis
2402@item @emph{Description}:
2403Set the default thread stack size in kilobytes. This is different from
2404@code{pthread_attr_setstacksize} which gets the number of bytes as an
2405argument. If the stack size cannot be set due to system constraints, an
2406error is reported and the initial stack size is left unchanged. If undefined,
2407the stack size is system dependent.
2408
2409@item @emph{See also}:
2410@ref{OMP_STACKSIZE}
2411
2412@item @emph{Reference}:
2413@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
2414GCC Patches Mailinglist},
2415@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
2416GCC Patches Mailinglist}
2417@end table
2418
2419
2420
2421@node GOMP_SPINCOUNT
2422@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
2423@cindex Environment Variable
2424@cindex Implementation specific setting
2425@table @asis
2426@item @emph{Description}:
2427Determines how long a thread waits actively, consuming CPU power,
2428before waiting passively without consuming CPU power. The value may be
2429either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
2430integer which gives the number of spins of the busy-wait loop. The
2431integer may optionally be followed by the following suffixes acting
2432as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
2433million), @code{G} (giga, billion), or @code{T} (tera, trillion).
2434If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
2435300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
243630 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
2437If there are more OpenMP threads than available CPUs, 1000 and 100
2438spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
2439undefined, respectively; unless the @env{GOMP_SPINCOUNT} is lower
2440or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
2441
2442@item @emph{See also}:
2443@ref{OMP_WAIT_POLICY}
2444@end table
2445
2446
2447
@node GOMP_RTEMS_THREAD_POOLS
@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
This environment variable is only used on the RTEMS real-time operating system.
It determines the scheduler instance specific thread pools. The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:
@itemize @bullet
@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}. In case a priority
value is omitted, then a worker thread will inherit the priority of the OpenMP
primary thread that created it. The priority of the worker thread is not
changed after creation, even if a new OpenMP primary thread using the worker has
a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.
@end itemize
In case no thread pool configuration is specified for a scheduler instance,
then each OpenMP primary thread of this scheduler instance will use its own
dynamically allocated thread pool. To limit the worker thread count of the
thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
@item @emph{Example}:
Let's suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
one thread pool available. Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP primary thread
that created it. In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.
@end table


@c ---------------------------------------------------------------------
@c Enabling OpenACC
@c ---------------------------------------------------------------------

@node Enabling OpenACC
@chapter Enabling OpenACC

To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenacc} must be specified. This enables the OpenACC directive
@code{#pragma acc} in C/C++ and, in Fortran, the @code{!$acc} directives in
free source form, the @code{c$acc}, @code{*$acc} and @code{!$acc} directives
in fixed source form, the @code{!$} conditional compilation sentinels in free
source form, and the @code{c$}, @code{*$} and @code{!$} sentinels in fixed
source form. The flag also arranges for automatic linking of the OpenACC
runtime library (@ref{OpenACC Runtime Library Routines}).

See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.

A complete description of all accepted OpenACC directives may be found in
the @uref{https://www.openacc.org, OpenACC} Application Programming
Interface manual, version 2.6.
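As an illustration, consider the following hypothetical C example (not taken
from the specification). Compiled with @option{-fopenacc}, the marked loop
becomes an OpenACC compute region and may be offloaded; without the flag, the
directive is ignored and the loop runs as ordinary host code:

```c
/* Element-wise vector add; the directive is only honored when the
   file is compiled with -fopenacc.  */
void
vec_add (float *restrict y, const float *restrict x, int n)
{
#pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
  for (int i = 0; i < n; i++)
    y[i] += x[i];
}
```

With GCC, build as @code{gcc -fopenacc file.c}; when no offload device is
configured, the host fallback executes the loop sequentially.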



@c ---------------------------------------------------------------------
@c OpenACC Runtime Library Routines
@c ---------------------------------------------------------------------

@node OpenACC Runtime Library Routines
@chapter OpenACC Runtime Library Routines

The runtime routines described here are defined by section 3 of the OpenACC
specification, version 2.6.
They have C linkage and do not throw exceptions.
Generally, they are available only on the host, with the exception of
@code{acc_on_device}, which is available on both the host and the
accelerator device.
@menu
* acc_get_num_devices:: Get number of devices for the given device
                         type.
* acc_set_device_type:: Set type of device accelerator to use.
* acc_get_device_type:: Get type of device accelerator to be used.
* acc_set_device_num:: Set device number to use.
* acc_get_device_num:: Get device number to be used.
* acc_get_property:: Get device property.
* acc_async_test:: Test for completion of a specific asynchronous
                         operation.
* acc_async_test_all:: Test for completion of all asynchronous
                         operations.
* acc_wait:: Wait for completion of a specific asynchronous
                         operation.
* acc_wait_all:: Wait for completion of all asynchronous
                         operations.
* acc_wait_all_async:: Wait asynchronously for completion of all
                         asynchronous operations.
* acc_wait_async:: Wait asynchronously for completion of a specific
                         queue's asynchronous operations.
* acc_init:: Initialize runtime for a specific device type.
* acc_shutdown:: Shut down the runtime for a specific device
                         type.
* acc_on_device:: Whether executing on a particular device.
* acc_malloc:: Allocate device memory.
* acc_free:: Free device memory.
* acc_copyin:: Allocate device memory and copy host memory to
                         it.
* acc_present_or_copyin:: If the data is not present on the device,
                         allocate device memory and copy from host
                         memory.
* acc_create:: Allocate device memory and map it to host
                         memory.
* acc_present_or_create:: If the data is not present on the device,
                         allocate device memory and map it to host
                         memory.
* acc_copyout:: Copy device memory to host memory.
* acc_delete:: Free device memory.
* acc_update_device:: Update device memory from mapped host memory.
* acc_update_self:: Update host memory from mapped device memory.
* acc_map_data:: Map previously allocated device memory to host
                         memory.
* acc_unmap_data:: Unmap device memory from host memory.
* acc_deviceptr:: Get device pointer associated with specific
                         host address.
* acc_hostptr:: Get host pointer associated with specific
                         device address.
* acc_is_present:: Indicate whether host variable / array is
                         present on device.
* acc_memcpy_to_device:: Copy host memory to device memory.
* acc_memcpy_from_device:: Copy device memory to host memory.
* acc_attach:: Let device pointer point to device-pointer target.
* acc_detach:: Let device pointer point to host-pointer target.

API routines for target platforms.

* acc_get_current_cuda_device:: Get CUDA device handle.
* acc_get_current_cuda_context:: Get CUDA context handle.
* acc_get_cuda_stream:: Get CUDA stream handle.
* acc_set_cuda_stream:: Set CUDA stream handle.

API routines for the OpenACC Profiling Interface.

* acc_prof_register:: Register callbacks.
* acc_prof_unregister:: Unregister callbacks.
* acc_prof_lookup:: Obtain inquiry functions.
* acc_register_library:: Library registration.
@end menu



@node acc_get_num_devices
@section @code{acc_get_num_devices} -- Get number of devices for given device type
@table @asis
@item @emph{Description}
This function returns a value indicating the number of devices available
for the device type specified in @var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.1.
@end table



@node acc_set_device_type
@section @code{acc_set_device_type} -- Set type of device accelerator to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime library which device type, specified
in @var{devicetype}, to use when executing a parallel or kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_type(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.2.
@end table



@node acc_get_device_type
@section @code{acc_get_device_type} -- Get type of device accelerator to be used.
@table @asis
@item @emph{Description}
This function returns the device type that will be used when executing a
parallel or kernels region.

This function returns @code{acc_device_none} if
@code{acc_get_device_type} is called from the
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
Interface}), that is, if the device is currently being initialized.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_type()}
@item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.3.
@end table



@node acc_set_device_num
@section @code{acc_set_device_num} -- Set device number to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime which device number, specified by
@var{devicenum}, of the specified device type @var{devicetype} to use.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_num(int devicenum, acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.4.
@end table



@node acc_get_device_num
@section @code{acc_get_device_num} -- Get device number to be used.
@table @asis
@item @emph{Description}
This function returns the device number, associated with the specified device
type @var{devicetype}, that will be used when executing a parallel or kernels
region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer acc_get_device_num}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.5.
@end table



@node acc_get_property
@section @code{acc_get_property} -- Get device property.
@cindex acc_get_property
@cindex acc_get_property_string
@table @asis
@item @emph{Description}
These routines return the value of the specified @var{property} for the
device being queried according to @var{devicenum} and @var{devicetype}.
Integer-valued and string-valued properties are returned by
@code{acc_get_property} and @code{acc_get_property_string} respectively.
The Fortran @code{acc_get_property_string} subroutine returns the string
retrieved in its fourth argument, while the remaining entry points are
functions, which pass the return value as their result.

Note, for Fortran only: the OpenACC technical committee corrected and, hence,
modified the interface introduced in OpenACC 2.6. The kind-value parameter
@code{acc_device_property} has been renamed to @code{acc_device_property_kind}
for consistency, and the return type of the @code{acc_get_property} function is
now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
The parameter @code{acc_device_property} will continue to be provided,
but might be removed in a future version of GCC.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
@item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
@item @tab @code{use ISO_C_Binding, only: c_size_t}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer(kind=acc_device_property_kind) property}
@item @tab @code{integer(kind=c_size_t) acc_get_property}
@item @tab @code{character(*) string}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.6.
@end table



@node acc_async_test
@section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function tests for completion of the asynchronous operation specified
in @var{arg}. If the specified asynchronous operation has completed, C/C++
returns a non-zero value and Fortran returns @code{.true.}; otherwise,
C/C++ returns zero and Fortran returns @code{.false.}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test(arg)}
@item @tab @code{integer(kind=acc_handle_kind) arg}
@item @tab @code{logical acc_async_test}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.9.
@end table



@node acc_async_test_all
@section @code{acc_async_test_all} -- Test for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function tests for completion of all asynchronous operations.
If all asynchronous operations have completed, C/C++ returns a non-zero
value and Fortran returns @code{.true.}; otherwise, C/C++ returns zero
and Fortran returns @code{.false.}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.10.
@end table



@node acc_wait
@section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function waits for completion of the asynchronous operation
specified in @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait(int arg);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{void acc_async_wait(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.11.
@end table



@node acc_wait_all
@section @code{acc_wait_all} -- Wait for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function waits for the completion of all asynchronous operations.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_all(void);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{void acc_async_wait_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.13.
@end table



@node acc_wait_all_async
@section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on the queue @var{async} for any
and all asynchronous operations that have been previously enqueued on
any queue.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_all_async(int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
@item @tab @code{integer(acc_handle_kind) async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.14.
@end table



@node acc_wait_async
@section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on queue @var{async} for any and all
asynchronous operations enqueued on queue @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_async(int arg, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
@item @tab @code{integer(acc_handle_kind) arg, async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.12.
@end table



@node acc_init
@section @code{acc_init} -- Initialize runtime for a specific device type.
@table @asis
@item @emph{Description}
This function initializes the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_init(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.7.
@end table



@node acc_shutdown
@section @code{acc_shutdown} -- Shut down the runtime for a specific device type.
@table @asis
@item @emph{Description}
This function shuts down the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_shutdown(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.8.
@end table



@node acc_on_device
@section @code{acc_on_device} -- Whether executing on a particular device
@table @asis
@item @emph{Description}:
This function returns whether the program is executing on a particular
device specified in @var{devicetype}. In C/C++, a non-zero value is
returned if the program is executing on the specified device type, and
zero otherwise; in Fortran, @code{.true.} or @code{.false.} is returned,
respectively.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_on_device(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@item @tab @code{logical acc_on_device}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.17.
@end table



@node acc_malloc
@section @code{acc_malloc} -- Allocate device memory.
@table @asis
@item @emph{Description}
This function allocates @var{len} bytes of device memory. It returns
the device address of the allocated memory.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.18.
@end table



@node acc_free
@section @code{acc_free} -- Free device memory.
@table @asis
@item @emph{Description}
Free previously allocated device memory at the device address @var{a}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_free(d_void *a);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.19.
@end table



@node acc_copyin
@section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
@table @asis
@item @emph{Description}
In C/C++, this function allocates @var{len} bytes of device memory, maps
it to the specified host address @var{a}, and copies the host data to the
device. The device address of the newly allocated device memory is
returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
@end table



@node acc_present_or_copyin
@section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
@table @asis
@item @emph{Description}
This function tests if the host data specified by @var{a} and of length
@var{len} is present on the device. If it is not present, device memory
is allocated and the host memory copied. The device address of
the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
@end table



@node acc_create
@section @code{acc_create} -- Allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function allocates device memory and maps it to host memory specified
by the host address @var{a} with a length of @var{len} bytes. In C/C++,
the function returns the device address of the allocated device memory.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
@end table



@node acc_present_or_create
@section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function tests if the host data specified by @var{a} and of length
@var{len} is present on the device. If it is not present, device memory
is allocated and mapped to host memory. In C/C++, the device address
of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
@end table



@node acc_copyout
@section @code{acc_copyout} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
This function copies mapped device memory to host memory which is specified
by the host address @var{a} for a length of @var{len} bytes in C/C++.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
@item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.22.
@end table
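
As an illustration, @code{acc_copyout} is typically paired with
@code{acc_copyin} (@ref{acc_copyin}); the following is a minimal sketch,
assuming a hypothetical host array @code{h_buf} of @code{N} floats:

@smallexample
  /* Map h_buf and copy its contents to the device.  */
  acc_copyin (h_buf, N * sizeof (float));

  /* ... device computation updates the device copy ...  */

  /* Copy the device copy back to h_buf and remove the mapping.  */
  acc_copyout (h_buf, N * sizeof (float));
@end smallexample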



@node acc_delete
@section @code{acc_delete} -- Free device memory.
@table @asis
@item @emph{Description}
This function frees the device memory previously allocated for the host
address @var{a} and a length of @var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
@item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.23.
@end table



@node acc_update_device
@section @code{acc_update_device} -- Update device memory from mapped host memory.
@table @asis
@item @emph{Description}
This function updates the device copy from the previously mapped host memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.24.
@end table



@node acc_update_self
@section @code{acc_update_self} -- Update host memory from mapped device memory.
@table @asis
@item @emph{Description}
This function updates the host copy from the previously mapped device memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.25.
@end table
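
For example, after modifying the host copy of mapped data,
@code{acc_update_device} pushes the change to the device, and
@code{acc_update_self} later pulls device-side results back; a sketch,
assuming a hypothetical mapped array @code{h_buf} of @code{N} floats:

@smallexample
  acc_copyin (h_buf, N * sizeof (float));

  h_buf[0] = 42.0f;                              /* Modify the host copy.  */
  acc_update_device (h_buf, N * sizeof (float)); /* Push to the device.  */

  /* ... device computation updates the device copy ...  */

  acc_update_self (h_buf, N * sizeof (float));   /* Pull back to the host.  */
@end smallexample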



@node acc_map_data
@section @code{acc_map_data} -- Map previously allocated device memory to host memory.
@table @asis
@item @emph{Description}
This function maps previously allocated device and host memory. The device
memory is specified with the device address @var{d}. The host memory is
specified with the host address @var{h} and a length of @var{len} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.26.
@end table
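
One way to use @code{acc_map_data} is with device memory obtained from
@code{acc_malloc}; the sketch below maps such memory to a hypothetical host
array @code{h_buf} and undoes the mapping afterwards:

@smallexample
  void *d_buf = acc_malloc (N * sizeof (float));

  /* Associate the host array h_buf with the device memory d_buf.  */
  acc_map_data (h_buf, d_buf, N * sizeof (float));

  /* ... h_buf is now 'present'; data clauses reuse this mapping ...  */

  acc_unmap_data (h_buf);
  acc_free (d_buf);
@end smallexample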



@node acc_unmap_data
@section @code{acc_unmap_data} -- Unmap device memory from host memory.
@table @asis
@item @emph{Description}
This function unmaps previously mapped device and host memory. The latter
is specified by the host address @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.27.
@end table



@node acc_deviceptr
@section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
@table @asis
@item @emph{Description}
This function returns the device address that has been mapped to the
host address specified by @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.28.
@end table



@node acc_hostptr
@section @code{acc_hostptr} -- Get host pointer associated with specific device address.
@table @asis
@item @emph{Description}
This function returns the host address that has been mapped to the
device address specified by @var{d}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.29.
@end table
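
The two lookup functions are inverses of each other for mapped memory; a
minimal sketch, assuming a hypothetical array @code{h_buf} that has been
mapped with @code{acc_copyin}:

@smallexample
  acc_copyin (h_buf, N * sizeof (float));

  void *d_buf = acc_deviceptr (h_buf);  /* Device address of the mapping.  */
  void *h_same = acc_hostptr (d_buf);   /* Yields h_buf again.  */
@end smallexample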



@node acc_is_present
@section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
@table @asis
@item @emph{Description}
This function indicates whether the host memory specified by the host
address @var{a} and a length of @var{len} bytes is present on the device.
In C/C++, a non-zero value is returned to indicate the presence of the
mapped memory on the device. A zero is returned to indicate the memory is
not mapped on the device.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes. If
the host memory is mapped to device memory, then @code{true} is returned.
Otherwise, @code{false} is returned to indicate the mapped memory is not
present.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_is_present(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{logical acc_is_present}
@item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{logical acc_is_present}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.30.
@end table
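
A typical use is to guard an explicit mapping so that data is only copied
when it is not already present; a sketch with a hypothetical array
@code{h_buf}:

@smallexample
  if (!acc_is_present (h_buf, N * sizeof (float)))
    acc_copyin (h_buf, N * sizeof (float));
@end smallexample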



@node acc_memcpy_to_device
@section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
@table @asis
@item @emph{Description}
This function copies host memory specified by the host address @var{src} to
device memory specified by the device address @var{dest} for a length of
@var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.31.
@end table



@node acc_memcpy_from_device
@section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
This function copies device memory specified by the device address @var{src}
to host memory specified by the host address @var{dest} for a length of
@var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.32.
@end table
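
These two functions transfer data without creating a mapping; a minimal
sketch, assuming a hypothetical host array @code{h_buf} and device memory
obtained from @code{acc_malloc}:

@smallexample
  void *d_buf = acc_malloc (N * sizeof (float));

  /* Host to device ...  */
  acc_memcpy_to_device (d_buf, h_buf, N * sizeof (float));
  /* ... and device back to host.  */
  acc_memcpy_from_device (h_buf, d_buf, N * sizeof (float));

  acc_free (d_buf);
@end smallexample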



@node acc_attach
@section @code{acc_attach} -- Let device pointer point to device-pointer target.
@table @asis
@item @emph{Description}
This function updates a pointer on the device from pointing to a host-pointer
address to pointing to the corresponding device data.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.34.
@end table



@node acc_detach
@section @code{acc_detach} -- Let device pointer point to host-pointer target.
@table @asis
@item @emph{Description}
This function updates a pointer on the device from pointing to a device-pointer
address to pointing to the corresponding host data.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
@item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.35.
@end table
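
For illustration, consider a hypothetical structure @code{s} whose member
points to a separately mapped array @code{h_buf}; after both have been
copied in, @code{acc_attach} fixes up the device copy of the pointer and
@code{acc_detach} restores it:

@smallexample
  struct str @{ float *data; @};
  struct str s;
  s.data = h_buf;

  acc_copyin (&s, sizeof (s));
  acc_copyin (h_buf, N * sizeof (float));

  /* Let the device copy of s.data point at the device copy of h_buf.  */
  acc_attach ((void **) &s.data);

  /* ... device code may dereference s.data ...  */

  acc_detach ((void **) &s.data);  /* Restore the host pointer value.  */
@end smallexample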



@node acc_get_current_cuda_device
@section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
@table @asis
@item @emph{Description}
This function returns the CUDA device handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.1.
@end table



@node acc_get_current_cuda_context
@section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
@table @asis
@item @emph{Description}
This function returns the CUDA context handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.2.
@end table



@node acc_get_cuda_stream
@section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
@table @asis
@item @emph{Description}
This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.3.
@end table



@node acc_set_cuda_stream
@section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
@table @asis
@item @emph{Description}
This function associates the stream handle specified by @var{stream} with
the queue @var{async}.

This cannot be used to change the stream handle associated with
@code{acc_async_sync}.

The return value is not specified.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.4.
@end table



@node acc_prof_register
@section @code{acc_prof_register} -- Register callbacks.
@table @asis
@item @emph{Description}:
This function registers callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_unregister
@section @code{acc_prof_unregister} -- Unregister callbacks.
@table @asis
@item @emph{Description}:
This function unregisters callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_lookup
@section @code{acc_prof_lookup} -- Obtain inquiry functions.
@table @asis
@item @emph{Description}:
Function to obtain inquiry functions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_register_library
@section @code{acc_register_library} -- Library registration.
@table @asis
@item @emph{Description}:
Function for library registration.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table
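
A profiling library might provide @code{acc_register_library} along the
following lines; the callback body is a hypothetical sketch that registers
for a single event type:

@smallexample
#include <acc_prof.h>

static void
cb (acc_prof_info *pi, acc_event_info *ei, acc_api_info *ai)
@{
  /* Inspect pi->event_type etc.; keep this path cheap.  */
@}

void
acc_register_library (acc_prof_reg reg, acc_prof_reg unreg,
                      acc_prof_lookup_func lookup)
@{
  reg (acc_ev_enqueue_launch_start, cb, acc_reg);
@}
@end smallexample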



@c ---------------------------------------------------------------------
@c OpenACC Environment Variables
@c ---------------------------------------------------------------------

@node OpenACC Environment Variables
@chapter OpenACC Environment Variables

The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
are defined by section 4 of the OpenACC specification in version 2.0.
The variable @env{ACC_PROFLIB}
is defined by section 4 of the OpenACC specification in version 2.6.
The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.

@menu
* ACC_DEVICE_TYPE::
* ACC_DEVICE_NUM::
* ACC_PROFLIB::
* GCC_ACC_NOTIFY::
@end menu



@node ACC_DEVICE_TYPE
@section @code{ACC_DEVICE_TYPE}
@table @asis
@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.1.
@end table



@node ACC_DEVICE_NUM
@section @code{ACC_DEVICE_NUM}
@table @asis
@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.2.
@end table



@node ACC_PROFLIB
@section @code{ACC_PROFLIB}
@table @asis
@item @emph{See also}:
@ref{acc_register_library}, @ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.3.
@end table



@node GCC_ACC_NOTIFY
@section @code{GCC_ACC_NOTIFY}
@table @asis
@item @emph{Description}:
Print debug information pertaining to the accelerator.
@end table


@c ---------------------------------------------------------------------
@c CUDA Streams Usage
@c ---------------------------------------------------------------------

@node CUDA Streams Usage
@chapter CUDA Streams Usage

This applies to the @code{nvptx} plugin only.

The library provides elements that perform asynchronous movement of
data and asynchronous operation of computing constructs. This
asynchronous functionality is implemented by making use of CUDA
streams@footnote{See "Stream Management" in "CUDA Driver API",
TRM-06703-001, Version 5.5, for additional information}.

The primary means by which the asynchronous functionality is accessed
is through the use of those OpenACC directives that take the
@code{async} and @code{wait} clauses. When the @code{async} clause is
first used with a directive, it creates a CUDA stream. If an
@code{async-argument} is used with the @code{async} clause, then the
stream is associated with the specified @code{async-argument}.

Following the creation of an association between a CUDA stream and the
@code{async-argument} of an @code{async} clause, both the @code{wait}
clause and the @code{wait} directive can be used. When either the
clause or directive is used after stream creation, it creates a
rendezvous point whereby execution waits until all operations
associated with the @code{async-argument}, that is, stream, have
completed.

Normally, the management of the streams that are created as a result of
using the @code{async} clause is done without any intervention by the
caller. This implies that the association between the @code{async-argument}
and the CUDA stream will be maintained for the lifetime of the program.
However, this association can be changed through the use of the library
function @code{acc_set_cuda_stream}. When the function
@code{acc_set_cuda_stream} is called, the CUDA stream that was
originally associated with the @code{async} clause will be destroyed.
Caution should be taken when changing the association as subsequent
references to the @code{async-argument} refer to a different
CUDA stream.
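
As an illustration, each distinct @code{async-argument} below selects its
own CUDA stream, and the @code{wait} directive blocks until both streams
have completed; a minimal sketch with hypothetical arrays @code{a} and
@code{b} of @code{n} floats:

@smallexample
#pragma acc parallel loop async(1)
  for (i = 0; i < n; i++)
    a[i] = 2.0f * a[i];
#pragma acc parallel loop async(2)
  for (i = 0; i < n; i++)
    b[i] = b[i] + 1.0f;
#pragma acc wait(1, 2)
@end smallexample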



@c ---------------------------------------------------------------------
@c OpenACC Library Interoperability
@c ---------------------------------------------------------------------

@node OpenACC Library Interoperability
@chapter OpenACC Library Interoperability

@section Introduction

The OpenACC library uses the CUDA Driver API, and may interact with
programs that use the Runtime library directly, or another library
based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
"Interactions with the CUDA Driver API" in
"CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
for additional information on library interoperability.}.
This chapter describes the use cases and what changes are
required in order to use both the OpenACC library and the CUBLAS and Runtime
libraries within a program.

@section First invocation: NVIDIA CUBLAS library API

In this first use case (see below), a function in the CUBLAS library is called
prior to any of the functions in the OpenACC library. More specifically, the
function @code{cublasCreate()}.

When invoked, the function initializes the library and allocates the
hardware resources on the host and the device on behalf of the caller. Once
the initialization and allocation has completed, a handle is returned to the
caller. The OpenACC library also requires initialization and allocation of
hardware resources. Since the CUBLAS library has already allocated the
hardware resources for the device, all that is left to do is to initialize
the OpenACC library and acquire the hardware resources on the host.

Prior to calling the OpenACC function that initializes the library and
allocates the host hardware resources, you need to acquire the device number
that was allocated during the call to @code{cublasCreate()}. Invoking the
runtime library function @code{cudaGetDevice()} accomplishes this. Once
acquired, the device number is passed along with the device type as
parameters to the OpenACC library function @code{acc_set_device_num()}.

Once the call to @code{acc_set_device_num()} has completed, the OpenACC
library uses the context that was created during the call to
@code{cublasCreate()}. In other words, both libraries will be sharing the
same context.

@smallexample
  /* Create the handle */
  s = cublasCreate(&h);
  if (s != CUBLAS_STATUS_SUCCESS)
    @{
      fprintf(stderr, "cublasCreate failed %d\n", s);
      exit(EXIT_FAILURE);
    @}

  /* Get the device number */
  e = cudaGetDevice(&dev);
  if (e != cudaSuccess)
    @{
      fprintf(stderr, "cudaGetDevice failed %d\n", e);
      exit(EXIT_FAILURE);
    @}

  /* Initialize OpenACC library and use device 'dev' */
  acc_set_device_num(dev, acc_device_nvidia);

@end smallexample
@center Use Case 1

@section First invocation: OpenACC library API

In this second use case (see below), a function in the OpenACC library is
called prior to any of the functions in the CUBLAS library. More specifically,
the function @code{acc_set_device_num()}.

In the use case presented here, the function @code{acc_set_device_num()}
is used to both initialize the OpenACC library and allocate the hardware
resources on the host and the device. In the call to the function, the
call parameters specify which device to use and what device
type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
is but one method to initialize the OpenACC library and allocate the
appropriate hardware resources. Other methods are available through the
use of environment variables and these will be discussed in the next section.

Once the call to @code{acc_set_device_num()} has completed, other OpenACC
functions can be called as seen with multiple calls being made to
@code{acc_copyin()}. In addition, calls can be made to functions in the
CUBLAS library. In this use case, a call to @code{cublasCreate()} is made
subsequent to the calls to @code{acc_copyin()}.
As seen in the previous use case, a call to @code{cublasCreate()}
initializes the CUBLAS library and allocates the hardware resources on the
host and the device. However, since the device has already been allocated,
@code{cublasCreate()} will only initialize the CUBLAS library and allocate
the appropriate hardware resources on the host. The context that was created
as part of the OpenACC initialization is shared with the CUBLAS library,
similarly to the first use case.

@smallexample
  dev = 0;

  acc_set_device_num(dev, acc_device_nvidia);

  /* Copy the first set to the device */
  d_X = acc_copyin(&h_X[0], N * sizeof (float));
  if (d_X == NULL)
    @{
      fprintf(stderr, "copyin error h_X\n");
      exit(EXIT_FAILURE);
    @}

  /* Copy the second set to the device */
  d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
  if (d_Y == NULL)
    @{
      fprintf(stderr, "copyin error h_Y1\n");
      exit(EXIT_FAILURE);
    @}

  /* Create the handle */
  s = cublasCreate(&h);
  if (s != CUBLAS_STATUS_SUCCESS)
    @{
      fprintf(stderr, "cublasCreate failed %d\n", s);
      exit(EXIT_FAILURE);
    @}

  /* Perform saxpy using CUBLAS library function */
  s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
  if (s != CUBLAS_STATUS_SUCCESS)
    @{
      fprintf(stderr, "cublasSaxpy failed %d\n", s);
      exit(EXIT_FAILURE);
    @}

  /* Copy the results from the device */
  acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));

@end smallexample
@center Use Case 2

@section OpenACC library and environment variables

There are two environment variables associated with the OpenACC library
that may be used to control the device type and device number:
@env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
environment variables can be used as an alternative to calling
@code{acc_set_device_num()}. As seen in the second use case, the device
type and device number were specified using @code{acc_set_device_num()}.
If, however, the aforementioned environment variables were set, then the
call to @code{acc_set_device_num()} would not be required.


The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
is called prior to a call to an OpenACC function, then you must call
@code{acc_set_device_num()}@footnote{More complete information
about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC
Application Programming Interface}, Version 2.6.}.

4047
@c ---------------------------------------------------------------------
@c OpenACC Profiling Interface
@c ---------------------------------------------------------------------

@node OpenACC Profiling Interface
@chapter OpenACC Profiling Interface

@section Implementation Status and Implementation-Defined Behavior

We're implementing the OpenACC Profiling Interface as defined by the
OpenACC 2.6 specification.  We're clarifying some aspects here as
@emph{implementation-defined behavior}, while they're still under
discussion within the OpenACC Technical Committee.

This implementation is tuned to keep the performance impact as low as
possible for the (very common) case that the Profiling Interface is
not enabled.  This is relevant, as the Profiling Interface affects all
the @emph{hot} code paths (in the target code, not in the offloaded
code).  Users of the OpenACC Profiling Interface can be expected to
understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because of the
@emph{runtime} (libgomp) calling into a third-party @emph{library} for
every event that has been registered.

We're not yet accounting for the fact that @cite{OpenACC events may
occur during event processing}.
We just handle one case specially, as required by CUDA 9.0
@command{nvprof}: @code{acc_get_device_type}
(@ref{acc_get_device_type}) may be called from
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
callbacks.

We're not yet implementing initialization via an
@code{acc_register_library} function that is either statically linked
in or dynamically loaded via @env{LD_PRELOAD}.
Initialization via @code{acc_register_library} functions dynamically
loaded via the @env{ACC_PROFLIB} environment variable does work, as
does directly calling @code{acc_prof_register},
@code{acc_prof_unregister}, and @code{acc_prof_lookup}.

As currently there are no inquiry functions defined, calls to
@code{acc_prof_lookup} will always return @code{NULL}.

There aren't separate @emph{start} and @emph{stop} events defined for the
event types @code{acc_ev_create}, @code{acc_ev_delete},
@code{acc_ev_alloc}, and @code{acc_ev_free}.  It's not clear whether these
should be triggered before or after the actual device-specific call is
made.  We trigger them after.

Remarks about data provided to callbacks:

@table @asis

@item @code{acc_prof_info.event_type}
It's not clear whether for @emph{nested} event callbacks (for example,
@code{acc_ev_enqueue_launch_start} as part of a parent compute
construct) this should be set to the nested event
(@code{acc_ev_enqueue_launch_start}), or whether the value of the parent
construct should remain (@code{acc_ev_compute_construct_start}).  In
this implementation, the value will generally correspond to the
innermost nested event type.

@item @code{acc_prof_info.device_type}
@itemize

@item
For @code{acc_ev_compute_construct_start}, and in the presence of an
@code{if} clause with @emph{false} argument, this will still refer to
the offloading device type.
It's not clear if that's the expected behavior.

@item
Complementary to the item before, for
@code{acc_ev_compute_construct_end}, this is set to
@code{acc_device_host} in the presence of an @code{if} clause with
@emph{false} argument.
It's not clear if that's the expected behavior.

@end itemize

@item @code{acc_prof_info.thread_id}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.async}
@itemize

@item
Not yet implemented correctly for
@code{acc_ev_compute_construct_start}.

@item
In a compute construct, for host-fallback
execution/@code{acc_device_host} it will always be
@code{acc_async_sync}.
It's not clear if that's the expected behavior.

@item
For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
it will always be @code{acc_async_sync}.
It's not clear if that's the expected behavior.

@end itemize

@item @code{acc_prof_info.async_queue}
There is no @cite{limited number of asynchronous queues} in libgomp.
This will always have the same value as @code{acc_prof_info.async}.

@item @code{acc_prof_info.src_file}
Always @code{NULL}; not yet implemented.

@item @code{acc_prof_info.func_name}
Always @code{NULL}; not yet implemented.

@item @code{acc_prof_info.line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.end_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.func_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.func_end_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
Relating to @code{acc_prof_info.event_type} discussed above, in this
implementation, this will always be the same value as
@code{acc_prof_info.event_type}.

@item @code{acc_event_info.*.parent_construct}
@itemize

@item
Will be @code{acc_construct_parallel} for all OpenACC compute
constructs as well as many OpenACC Runtime API calls; should be the
one matching the actual construct, or
@code{acc_construct_runtime_api}, respectively.

@item
Will be @code{acc_construct_enter_data} or
@code{acc_construct_exit_data} when processing variable mappings
specified in OpenACC @emph{declare} directives; should be
@code{acc_construct_declare}.

@item
For implicit @code{acc_ev_device_init_start},
@code{acc_ev_device_init_end}, and explicit as well as implicit
@code{acc_ev_alloc}, @code{acc_ev_free},
@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, it will be
@code{acc_construct_parallel}; it should reflect the real parent
construct.

@end itemize

@item @code{acc_event_info.*.implicit}
For @code{acc_ev_alloc}, @code{acc_ev_free},
@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, this will currently be @code{1}
even for explicit usage.

@item @code{acc_event_info.data_event.var_name}
Always @code{NULL}; not yet implemented.

@item @code{acc_event_info.data_event.host_ptr}
For @code{acc_ev_alloc} and @code{acc_ev_free}, this is always
@code{NULL}.

@item @code{typedef union acc_api_info}
@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
Information}.  This should obviously be @code{typedef @emph{struct}
acc_api_info}.

@item @code{acc_api_info.device_api}
Possibly not yet implemented correctly for
@code{acc_ev_compute_construct_start},
@code{acc_ev_device_init_start}, and @code{acc_ev_device_init_end}:
will always be @code{acc_device_api_none} for these event types.
For @code{acc_ev_enter_data_start}, it will be
@code{acc_device_api_none} in some cases.

@item @code{acc_api_info.device_type}
Always the same as @code{acc_prof_info.device_type}.

@item @code{acc_api_info.vendor}
Always @code{-1}; not yet implemented.

@item @code{acc_api_info.device_handle}
Always @code{NULL}; not yet implemented.

@item @code{acc_api_info.context_handle}
Always @code{NULL}; not yet implemented.

@item @code{acc_api_info.async_handle}
Always @code{NULL}; not yet implemented.

@end table

Remarks about certain event types:

@table @asis

@item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
@itemize

@item
@c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
@c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
@c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
When a compute construct triggers implicit
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
events, they currently aren't @emph{nested within} the corresponding
@code{acc_ev_compute_construct_start} and
@code{acc_ev_compute_construct_end}, but are currently observed
@emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do: the standard asks us to provide a lot of
details to the @code{acc_ev_compute_construct_start} callback, but how
can we do that without (implicitly) initializing a device first?

@item
Callbacks for these event types will not be invoked for calls to the
@code{acc_set_device_type} and @code{acc_set_device_num} functions.
It's not clear whether they should be.

@end itemize

@item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
@itemize

@item
Callbacks for these event types will also be invoked for OpenACC
@emph{host_data} constructs.
It's not clear whether they should be.

@item
Callbacks for these event types will also be invoked when processing
variable mappings specified in OpenACC @emph{declare} directives.
It's not clear whether they should be.

@end itemize

@end table

Callbacks for the following event types will be invoked, but the
dispatch and the information provided therein have not yet been
thoroughly reviewed:

@itemize
@item @code{acc_ev_alloc}
@item @code{acc_ev_free}
@item @code{acc_ev_update_start}, @code{acc_ev_update_end}
@item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
@item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
@end itemize

During device initialization and finalization, respectively,
callbacks for the following event types will not yet be invoked:

@itemize
@item @code{acc_ev_alloc}
@item @code{acc_ev_free}
@end itemize

Callbacks for the following event types have not yet been implemented,
so currently won't be invoked:

@itemize
@item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
@item @code{acc_ev_runtime_shutdown}
@item @code{acc_ev_create}, @code{acc_ev_delete}
@item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
@end itemize

For the following runtime library functions, not all expected
callbacks will be invoked (mostly concerning implicit device
initialization):

@itemize
@item @code{acc_get_num_devices}
@item @code{acc_set_device_type}
@item @code{acc_get_device_type}
@item @code{acc_set_device_num}
@item @code{acc_get_device_num}
@item @code{acc_init}
@item @code{acc_shutdown}
@end itemize

Aside from implicit device initialization, for the following runtime
library functions, no callbacks will be invoked for shared-memory
offloading devices (it's not clear whether they should be):

@itemize
@item @code{acc_malloc}
@item @code{acc_free}
@item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
@item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
@item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
@item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
@item @code{acc_update_device}, @code{acc_update_device_async}
@item @code{acc_update_self}, @code{acc_update_self_async}
@item @code{acc_map_data}, @code{acc_unmap_data}
@item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
@item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
@end itemize

@c ---------------------------------------------------------------------
@c OpenMP-Implementation Specifics
@c ---------------------------------------------------------------------

@node OpenMP-Implementation Specifics
@chapter OpenMP-Implementation Specifics

@menu
* OpenMP Context Selectors::
* Memory allocation with libmemkind::
@end menu

@node OpenMP Context Selectors
@section OpenMP Context Selectors

@code{vendor} is always @code{gnu}.  References are to the GCC manual.

@multitable @columnfractions .60 .10 .25
@headitem @code{arch} @tab @code{kind} @tab @code{isa}
@item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
      @code{i586}, @code{i686}, @code{ia32}
      @tab @code{host}
      @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
@item @code{amdgcn}, @code{gcn}
      @tab @code{gpu}
      @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
      @code{gfx803} is supported as an alias for @code{fiji}.}
@item @code{nvptx}
      @tab @code{gpu}
      @tab See @code{-march=} in ``Nvidia PTX Options''
@end multitable

@node Memory allocation with libmemkind
@section Memory allocation with libmemkind

On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
library} (@code{libmemkind.so.0}) is available at runtime, it is used when
creating memory allocators requesting

@itemize
@item the memory space @code{omp_high_bw_mem_space}
@item the memory space @code{omp_large_cap_mem_space}
@item the partition trait @code{omp_atv_interleaved}
@end itemize


@c ---------------------------------------------------------------------
@c Offload-Target Specifics
@c ---------------------------------------------------------------------

@node Offload-Target Specifics
@chapter Offload-Target Specifics

The following sections present notes on the offload-target specifics.

@menu
* AMD Radeon::
* nvptx::
@end menu

@node AMD Radeon
@section AMD Radeon (GCN)

On the hardware side, there is the hierarchy (fine to coarse):
@itemize
@item work item (thread)
@item wavefront
@item work group
@item compute unit (CU)
@end itemize

All OpenMP and OpenACC levels are used, i.e.
@itemize
@item OpenMP's simd and OpenACC's vector map to work items (threads)
@item OpenMP's threads (``parallel'') and OpenACC's workers map
      to wavefronts
@item OpenMP's teams and OpenACC's gang use a threadpool with the
      size of the number of teams or gangs, respectively.
@end itemize

The used sizes are
@itemize
@item The number of teams is the specified @code{num_teams} (OpenMP) or
      @code{num_gangs} (OpenACC) or otherwise the number of CUs.  It is
      limited to two times the number of CUs.
@item The number of wavefronts is 4 for gfx900 and 16 otherwise;
      @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
      override this if smaller.
@item A wavefront has 102 scalar registers and 64 vector registers.
@item The number of work items is always 64.
@item The hardware permits maximally 40 workgroups/CU and
      16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
@item 80 scalar registers and 24 vector registers in non-kernel functions
      (the chosen procedure-calling API).
@item For the kernel itself: as many as register pressure demands (number of
      teams and number of threads, scaled down if registers are exhausted).
@end itemize

Implementation remarks:
@itemize
@item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
      using the C library @code{printf} functions and the Fortran
      @code{print}/@code{write} statements.
@item Reverse offload regions (i.e. @code{target} regions with
      @code{device(ancestor:1)}) are processed serially per @code{target}
      region such that the next reverse offload region is only executed after
      the previous one returned.
@item OpenMP code that has a @code{requires} directive with
      @code{unified_shared_memory} will remove any GCN device from the list of
      available devices (``host fallback'').
@item The available stack size can be changed using the @code{GCN_STACK_SIZE}
      environment variable; the default is 32 kiB per thread.
@end itemize



@node nvptx
@section nvptx

On the hardware side, there is the hierarchy (fine to coarse):
@itemize
@item thread
@item warp
@item thread block
@item streaming multiprocessor
@end itemize

All OpenMP and OpenACC levels are used, i.e.
@itemize
@item OpenMP's simd and OpenACC's vector map to threads
@item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
@item OpenMP's teams and OpenACC's gang use a threadpool with the
      size of the number of teams or gangs, respectively.
@end itemize

The used sizes are
@itemize
@item The @code{warp_size} is always 32.
@item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
@item The number of teams is limited by the number of blocks the device can
      host simultaneously.
@end itemize

Additional information can be obtained by setting the environment variable
@code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for launch
parameters).

GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA,
which caches the JIT-compiled code in the user's directory (see the CUDA
documentation; this can be tuned by the environment variables
@code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}).

Note: While PTX ISA is generic, the @code{-mptx=} and @code{-march=}
command-line options still affect the used PTX ISA code and, thus, the
requirements on CUDA version and hardware.

Implementation remarks:
@itemize
@item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
      using the C library @code{printf} functions.  Note that the Fortran
      @code{print}/@code{write} statements are not supported, yet.
@item Compiling OpenMP code that contains @code{requires reverse_offload}
      requires at least @code{-march=sm_35}; compiling for @code{-march=sm_30}
      is not supported.
@item For code containing reverse offload (i.e. @code{target} regions with
      @code{device(ancestor:1)}), there is a slight performance penalty
      for @emph{all} target regions, consisting mostly of shutdown delay.
      Per device, reverse offload regions are processed serially such that
      the next reverse offload region is only executed after the previous
      one returned.
@item OpenMP code that has a @code{requires} directive with
      @code{unified_shared_memory} will remove any nvptx device from the
      list of available devices (``host fallback'').
@end itemize


@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp.  Only maintainers should need them.

@menu
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
* Implementing OpenACC's PARALLEL construct::
@end menu


@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternately, we generate two copies of the parallel subfunction
and only include this in the version run by the primary thread.
Surely this is not worthwhile though...



@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name,

@smallexample
  void GOMP_critical_start (void);
  void GOMP_critical_end (void);
@end smallexample

so that we don't get COPY relocations from libgomp to the main
application.

With a specified name, use omp_set_lock and omp_unset_lock with
name being transformed into a variable declared like

@smallexample
  omp_lock_t gomp_critical_user_<name> __attribute__((common))
@end smallexample

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at
startup.



@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that we could add

@smallexample
  void GOMP_atomic_enter (void)
  void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.



@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.



@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
  void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In @emph{most} cases we can map this directly to @code{__thread}.  Except
that OMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to .ctors.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.



@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantic of new variable creation.



@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and "small" structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
  body;
@end smallexample

which becomes

@smallexample
@{
  int x = x, y;

  // for stuff

  if (i == n)
    y = y;
@}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.



@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the primary thread iterates over the
array to collect the values.


@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.



@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
      @{
        long _e1 = _e0, i;
        for (i = _s0; i < _e1; i++)
          body;
      @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        long _e1 = _e0;
        for (i = _s0; i < _e1; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we can also not if we like.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations.  Which would mean that we wouldn't need to call any
of these routines.

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...



@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample



@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
  GOMP_barrier ();
@end smallexample


@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample



@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

@smallexample
  void GOACC_parallel ()
@end smallexample


@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
``openacc'', or ``openmp'', or both to the keywords field in the bug
report, as appropriate.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye