\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2023 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software.  Copies published by the Free Software Foundation raise
     funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library.  This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page


@node Top, Enabling OpenMP
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library.  This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library.  Support
for OpenACC and for offloading (both via OpenACC and via OpenMP 4's
@code{target} construct) was added later on, and the library was renamed
to the GNU Offloading and Multi Processing Runtime Library.



@comment
@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.
@comment
@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* OpenMP Implementation Status:: List of implemented features by OpenMP version
* OpenMP Runtime Library Routines: Runtime Library Routines.
                               The OpenMP runtime application programming
                               interface.
* OpenMP Environment Variables: Environment Variables.
                               Influencing OpenMP runtime behavior with
                               environment variables.
* Enabling OpenACC::           How to enable OpenACC for your
                               applications.
* OpenACC Runtime Library Routines:: The OpenACC runtime application
                               programming interface.
* OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
                               environment variables.
* CUDA Streams Usage::         Notes on the implementation of
                               asynchronous operations.
* OpenACC Library Interoperability:: OpenACC library interoperability with the
                               NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* OpenMP-Implementation Specifics:: Notes on specifics of this OpenMP
                               implementation
* Offload-Target Specifics::   Notes on offload-target specific internals
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in the GNU Offloading and
                               Multi Processing Runtime Library.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified.  This enables the OpenMP directive
@code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
@code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran.  The flag also
arranges for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).

A complete description of all OpenMP directives may be found in the
@uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
See also @ref{OpenMP Implementation Status}.


@c ---------------------------------------------------------------------
@c OpenMP Implementation Status
@c ---------------------------------------------------------------------

@node OpenMP Implementation Status
@chapter OpenMP Implementation Status

@menu
* OpenMP 4.5::             Feature completion status to 4.5 specification
* OpenMP 5.0::             Feature completion status to 5.0 specification
* OpenMP 5.1::             Feature completion status to 5.1 specification
* OpenMP 5.2::             Feature completion status to 5.2 specification
* OpenMP Technical Report 11::  Feature completion status to first 6.0 preview
@end menu

The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
the value @code{201511} (i.e. OpenMP 4.5).

@node OpenMP 4.5
@section OpenMP 4.5

The OpenMP 4.5 specification is fully supported.

@node OpenMP 5.0
@section OpenMP 5.0

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Array shaping @tab N @tab
@item Array sections with non-unit strides in C and C++ @tab N @tab
@item Iterators @tab Y @tab
@item @code{metadirective} directive @tab N @tab
@item @code{declare variant} directive
      @tab P @tab @emph{simd} traits not handled correctly
@item @var{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
      env variable @tab Y @tab
@item Nested-parallel changes to @var{max-active-levels-var} ICV @tab Y @tab
@item @code{requires} directive @tab P
      @tab complete but no non-host device provides @code{unified_shared_memory}
@item @code{teams} construct outside an enclosing target region @tab Y @tab
@item Non-rectangular loop nests @tab P
      @tab Full support for C/C++, partial for Fortran
      (@uref{https://gcc.gnu.org/PR110735,PR110735})
@item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
@item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
      constructs @tab Y @tab
@item Collapse of associated loops that are imperfectly nested loops @tab Y @tab
@item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
      @code{simd} construct @tab Y @tab
@item @code{atomic} constructs in @code{simd} @tab Y @tab
@item @code{loop} construct @tab Y @tab
@item @code{order(concurrent)} clause @tab Y @tab
@item @code{scan} directive and @code{in_scan} modifier for the
      @code{reduction} clause @tab Y @tab
@item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
@item @code{in_reduction} clause on @code{target} constructs @tab P
      @tab @code{nowait} only stub
@item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
@item @code{task} modifier to @code{reduction} clause @tab Y @tab
@item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
@item @code{detach} clause to @code{task} construct @tab Y @tab
@item @code{omp_fulfill_event} runtime routine @tab Y @tab
@item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
      and @code{taskloop simd} constructs @tab Y @tab
@item @code{taskloop} construct cancelable by @code{cancel} construct
      @tab Y @tab
@item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
      @tab Y @tab
@item Predefined memory spaces, memory allocators, allocator traits
      @tab Y @tab See also @ref{Memory allocation}
@item Memory management routines @tab Y @tab
@item @code{allocate} directive @tab P @tab Only C, only stack variables
@item @code{allocate} clause @tab P @tab Initial support
@item @code{use_device_addr} clause on @code{target data} @tab Y @tab
@item @code{ancestor} modifier on @code{device} clause @tab Y @tab
@item Implicit declare target directive @tab Y @tab
@item Discontiguous array section with @code{target update} construct
      @tab N @tab
@item C/C++'s lvalue expressions in @code{to}, @code{from}
      and @code{map} clauses @tab N @tab
@item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
@item Nested @code{declare target} directive @tab Y @tab
@item Combined @code{master} constructs @tab Y @tab
@item @code{depend} clause on @code{taskwait} @tab Y @tab
@item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
      @tab Y @tab
@item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
@item @code{depobj} construct and depend objects @tab Y @tab
@item Lock hints were renamed to synchronization hints @tab Y @tab
@item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
@item Map-order clarifications @tab P @tab
@item @code{close} @emph{map-type-modifier} @tab Y @tab
@item Mapping C/C++ pointer variables and to assign the address of
      device memory mapped by an array section @tab P @tab
@item Mapping of Fortran pointer and allocatable variables, including pointer
      and allocatable components of variables
      @tab P @tab Mapping of vars with allocatable components unsupported
@item @code{defaultmap} extensions @tab Y @tab
@item @code{declare mapper} directive @tab N @tab
@item @code{omp_get_supported_active_levels} routine @tab Y @tab
@item Runtime routines and environment variables to display runtime thread
      affinity information @tab Y @tab
@item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
      routines @tab Y @tab
@item @code{omp_get_device_num} runtime routine @tab Y @tab
@item OMPT interface @tab N @tab
@item OMPD interface @tab N @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.0 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Supporting C++'s range-based for loop @tab Y @tab
@end multitable


@node OpenMP 5.1
@section OpenMP 5.1

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item OpenMP directive as C++ attribute specifiers @tab Y @tab
@item @code{omp_all_memory} reserved locator @tab Y @tab
@item @emph{target_device trait} in OpenMP Context @tab N @tab
@item @code{target_device} selector set in context selectors @tab N @tab
@item C/C++'s @code{declare variant} directive: elision support of
      preprocessed code @tab N @tab
@item @code{declare variant}: new clauses @code{adjust_args} and
      @code{append_args} @tab N @tab
@item @code{dispatch} construct @tab N @tab
@item device-specific ICV settings with environment variables @tab Y @tab
@item @code{assume} and @code{assumes} directives @tab Y @tab
@item @code{nothing} directive @tab Y @tab
@item @code{error} directive @tab Y @tab
@item @code{masked} construct @tab Y @tab
@item @code{scope} directive @tab Y @tab
@item Loop transformation constructs @tab N @tab
@item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
      clauses of the @code{taskloop} construct @tab Y @tab
@item @code{align} clause in @code{allocate} directive @tab P
      @tab Only C (and only stack variables)
@item @code{align} modifier in @code{allocate} clause @tab Y @tab
@item @code{thread_limit} clause to @code{target} construct @tab Y @tab
@item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
@item Iterators in @code{target update} motion clauses and @code{map}
      clauses @tab N @tab
@item Indirect calls to the device version of a procedure or function in
      @code{target} regions @tab N @tab
@item @code{interop} directive @tab N @tab
@item @code{omp_interop_t} object support in runtime routines @tab N @tab
@item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
@item Extensions to the @code{atomic} directive @tab Y @tab
@item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
@item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
@item @code{private} and @code{firstprivate} argument to @code{default}
      clause in C and C++ @tab Y @tab
@item @code{present} argument to @code{defaultmap} clause @tab Y @tab
@item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
      @code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
      routines @tab Y @tab
@item @code{omp_target_is_accessible} runtime routine @tab Y @tab
@item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
      runtime routines @tab Y @tab
@item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
@item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
      @code{omp_aligned_calloc} runtime routines @tab Y @tab
@item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
      @code{omp_atv_default} changed @tab Y @tab
@item @code{omp_display_env} runtime routine @tab Y @tab
@item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
@item @code{ompt_sync_region_t} enum additions @tab N @tab
@item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
      and @code{ompt_state_wait_barrier_teams} @tab N @tab
@item @code{ompt_callback_target_data_op_emi_t},
      @code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
      and @code{ompt_callback_target_submit_emi_t} @tab N @tab
@item @code{ompt_callback_error_t} type @tab N @tab
@item @code{OMP_PLACES} syntax extensions @tab Y @tab
@item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
      variables @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.1 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Support of strictly structured blocks in Fortran @tab Y @tab
@item Support of structured block sequences in C/C++ @tab Y @tab
@item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
      clause @tab Y @tab
@item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
@item Pointer predetermined firstprivate getting initialized
      to address of matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
@item For Fortran, diagnose placing declarative before/between @code{USE},
      @code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
@item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
@item @code{indirect} clause in @code{declare target} @tab N @tab
@item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
@item @code{present} modifier to the @code{map}, @code{to} and @code{from}
      clauses @tab Y @tab
@end multitable


@node OpenMP 5.2
@section OpenMP 5.2

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item @code{omp_in_explicit_task} routine and @var{explicit-task-var} ICV
      @tab Y @tab
@item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
      namespaces @tab N/A
      @tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
      sentinel as C/C++ pragma and C++ attributes are warned for with
      @code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
      (enabled by default), respectively; for Fortran free-source code, there is
      a warning enabled by default and, for fixed-source code, the @code{omx}
      sentinel is warned for with @code{-Wsurprising} (enabled by
      @code{-Wall}).  Unknown clauses are always rejected with an error.}
@item Clauses on @code{end} directive can be on directive @tab Y @tab
@item @code{destroy} clause with destroy-var argument on @code{depobj}
      @tab N @tab
@item Deprecation of no-argument @code{destroy} clause on @code{depobj}
      @tab N @tab
@item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
@item Deprecation of minus operator for reductions @tab N @tab
@item Deprecation of separating @code{map} modifiers without comma @tab N @tab
@item @code{declare mapper} with iterator and @code{present} modifiers
      @tab N @tab
@item If a matching mapped list item is not found in the data environment, the
      pointer retains its original value @tab Y @tab
@item New @code{enter} clause as alias for @code{to} on declare target directive
      @tab Y @tab
@item Deprecation of @code{to} clause on declare target directive @tab N @tab
@item Extended list of directives permitted in Fortran pure procedures
      @tab Y @tab
@item New @code{allocators} directive for Fortran @tab N @tab
@item Deprecation of @code{allocate} directive for Fortran
      allocatables/pointers @tab N @tab
@item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
@item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
      @tab N @tab
@item Deprecation of traits array following the allocator_handle expression in
      @code{uses_allocators} @tab N @tab
@item New @code{otherwise} clause as alias for @code{default} on metadirectives
      @tab N @tab
@item Deprecation of @code{default} clause on metadirectives @tab N @tab
@item Deprecation of delimited form of @code{declare target} @tab N @tab
@item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
@item @code{allocate} and @code{firstprivate} clauses on @code{scope}
      @tab Y @tab
@item @code{ompt_callback_work} @tab N @tab
@item Default map-type for the @code{map} clause in @code{target enter/exit data}
      @tab Y @tab
@item New @code{doacross} clause as alias for @code{depend} with
      @code{source}/@code{sink} modifier @tab Y @tab
@item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
      @tab N @tab
@item @code{omp_cur_iteration} keyword @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.2 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item For Fortran, optional comma between directive and clause @tab N @tab
@item Conforming device numbers and @code{omp_initial_device} and
      @code{omp_invalid_device} enum/PARAMETER @tab Y @tab
@item Initial value of @var{default-device-var} ICV with
      @code{OMP_TARGET_OFFLOAD=mandatory} @tab Y @tab
@item @code{all} as @emph{implicit-behavior} for @code{defaultmap} @tab Y @tab
@item @emph{interop_types} in any position of the modifier list for the @code{init} clause
      of the @code{interop} construct @tab N @tab
@end multitable


@node OpenMP Technical Report 11
@section OpenMP Technical Report 11

Technical Report (TR) 11 is the first preview for OpenMP 6.0.

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
      @tab N/A @tab Backward compatibility
@item The @code{decl} attribute was added to the C++ attribute syntax
      @tab Y @tab
@item @code{_ALL} suffix to the device-scope environment variables
      @tab P @tab Host device number wrongly accepted
@item For Fortran, @emph{locator list} can be also function reference with
      data pointer result @tab N @tab
@item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
      @tab N @tab
@item Implicit reduction identifiers of C++ classes
      @tab N @tab
@item Change of the @emph{map-type} property from @emph{ultimate} to
      @emph{default} @tab N @tab
@item Concept of @emph{assumed-size arrays} in C and C++
      @tab N @tab
@item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
      @tab N @tab
@item @code{groupprivate} directive @tab N @tab
@item @code{local} clause to declare target directive @tab N @tab
@item @code{part_size} allocator trait @tab N @tab
@item @code{pin_device}, @code{preferred_device} and @code{target_access}
      allocator traits
      @tab N @tab
@item @code{access} allocator trait changes @tab N @tab
@item Extension of @code{interop} operation of @code{append_args}, allowing all
      modifiers of the @code{init} clause
      @tab N @tab
@item @code{interop} clause to @code{dispatch} @tab N @tab
@item @code{apply} code to loop-transforming constructs @tab N @tab
@item @code{omp_curr_progress_width} identifier @tab N @tab
@item @code{safesync} clause to the @code{parallel} construct @tab N @tab
@item @code{omp_get_max_progress_width} runtime routine @tab N @tab
@item @code{strict} modifier keyword to @code{num_threads} @tab N @tab
@item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
@item Routines for obtaining memory spaces/allocators for shared/device memory
      @tab N @tab
@item @code{omp_get_memspace_num_resources} routine @tab N @tab
@item @code{omp_get_submemspace} routine @tab N @tab
@item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
@item Extension of @code{OMP_DEFAULT_DEVICE} and new
      @code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
@item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
@end multitable

@unnumberedsubsec Other new TR 11 features
@multitable @columnfractions .60 .10 .25
@item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
@item Mapping lambda captures @tab N @tab
@item For Fortran, atomic compare with storing the comparison result
      @tab N @tab
@end multitable



@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines

The runtime routines described here are defined by Section 18 of the OpenMP
specification in version 5.2.

@menu
* Thread Team Routines::
* Thread Affinity Routines::
* Teams Region Routines::
* Tasking Routines::
@c * Resource Relinquishing Routines::
* Device Information Routines::
* Device Memory Routines::
* Lock Routines::
* Timing Routines::
* Event Routine::
@c * Interoperability Routines::
* Memory Management Routines::
@c * Tool Control Routine::
@c * Environment Display Routine::
@end menu



@node Thread Team Routines
@section Thread Team Routines

Routines controlling threads in the current contention group.
They have C linkage and do not throw exceptions.

@menu
* omp_set_num_threads::         Set upper team size limit
* omp_get_num_threads::         Size of the active team
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_get_dynamic::             Dynamic teams setting
* omp_get_cancellation::        Whether cancellation support is enabled
* omp_set_nested::              Enable/disable nested parallel regions
* omp_get_nested::              Nested parallel regions
* omp_set_schedule::            Set the runtime scheduling method
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_teams_thread_limit::  Maximum number of threads imposed by teams
* omp_get_supported_active_levels:: Maximum number of active regions supported
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_get_max_active_levels::   Current maximum number of active regions
* omp_get_level::               Number of parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_team_size::           Number of threads in a team
* omp_get_active_level::        Number of active parallel regions
@end menu



@node omp_set_num_threads
@subsection @code{omp_set_num_threads} -- Set upper team size limit
@table @asis
@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
sections, if those do not specify a @code{num_threads} clause.  The
argument of @code{omp_set_num_threads} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
@item                   @tab @code{integer, intent(in) :: num_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
@end table



@node omp_get_num_threads
@subsection @code{omp_get_num_threads} -- Size of the active team
@table @asis
@item @emph{Description}:
Returns the number of threads in the current team.  In a sequential section of
the program, @code{omp_get_num_threads} returns 1.

The default team size may be initialized at startup by the
@env{OMP_NUM_THREADS} environment variable.  At runtime, the size
of the current team may be set either by the @code{num_threads}
clause or by @code{omp_set_num_threads}.  If none of the above were
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
one thread per online CPU is used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
@end table



@node omp_get_max_threads
@subsection @code{omp_get_max_threads} -- Maximum number of threads of parallel region
@table @asis
@item @emph{Description}:
Returns the maximum number of threads that can be used to form a new team
if a parallel region without a @code{num_threads} clause is encountered.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
@end table



@node omp_get_thread_num
@subsection @code{omp_get_thread_num} -- Current thread ID
@table @asis
@item @emph{Description}:
Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
653 always returns 0. In parallel regions the return value varies
654 from 0 to @code{omp_get_num_threads}-1 inclusive. The return
655 value of the primary thread of a team is always 0.
656
657 @item @emph{C/C++}:
658 @multitable @columnfractions .20 .80
659 @item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
660 @end multitable
661
662 @item @emph{Fortran}:
663 @multitable @columnfractions .20 .80
664 @item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
665 @end multitable
666
667 @item @emph{See also}:
668 @ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
669
670 @item @emph{Reference}:
671 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
672 @end table
673
674
675
676 @node omp_in_parallel
677 @subsection @code{omp_in_parallel} -- Whether a parallel region is active
678 @table @asis
679 @item @emph{Description}:
680 This function returns @code{true} if currently running in parallel,
681 @code{false} otherwise. Here, @code{true} and @code{false} represent
682 their language-specific counterparts.
683
684 @item @emph{C/C++}:
685 @multitable @columnfractions .20 .80
686 @item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
687 @end multitable
688
689 @item @emph{Fortran}:
690 @multitable @columnfractions .20 .80
691 @item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
692 @end multitable
693
694 @item @emph{Reference}:
695 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
696 @end table


@node omp_set_dynamic
@subsection @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
@item @tab @code{logical, intent(in) :: dynamic_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
@end table



@node omp_get_dynamic
@subsection @code{omp_get_dynamic} -- Dynamic teams setting
@table @asis
@item @emph{Description}:
This function returns @code{true} if dynamic adjustment of team sizes
is enabled, @code{false} otherwise. Here, @code{true} and @code{false}
represent their language-specific counterparts.

The dynamic team setting may be initialized at startup by the
@env{OMP_DYNAMIC} environment variable or at runtime using
@code{omp_set_dynamic}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
@end multitable

@item @emph{See also}:
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
@end table



@node omp_get_cancellation
@subsection @code{omp_get_cancellation} -- Whether cancellation support is enabled
@table @asis
@item @emph{Description}:
This function returns @code{true} if cancellation is activated, @code{false}
otherwise. Here, @code{true} and @code{false} represent their language-specific
counterparts. Unless @env{OMP_CANCELLATION} is set to true, cancellation is
deactivated.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@end multitable

@item @emph{See also}:
@ref{OMP_CANCELLATION}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@end table



@node omp_set_nested
@subsection @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.

Enabling nested parallel regions will also set the maximum number of
active nested regions to the maximum supported. Disabling nested parallel
regions will set the maximum number of active nested regions to one.

Note that the @code{omp_set_nested} API routine was deprecated
in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
@item @tab @code{logical, intent(in) :: nested}
@end multitable

@item @emph{See also}:
@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
@end table



@node omp_get_nested
@subsection @code{omp_get_nested} -- Nested parallel regions
@table @asis
@item @emph{Description}:
This function returns @code{true} if nested parallel regions are
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
represent their language-specific counterparts.

The state of nested parallel regions at startup depends on several
environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
and is set to greater than one, then nested parallel regions will be
enabled. If not defined, then the value of the @env{OMP_NESTED}
environment variable will be followed if defined. If neither is
defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
is defined with a list of more than one value, then nested parallel
regions are enabled. If none of these are defined, then nested parallel
regions are disabled by default.

Nested parallel regions can be enabled or disabled at runtime using
@code{omp_set_nested}, or by setting the maximum number of nested
regions with @code{omp_set_max_active_levels} to one to disable, or
above one to enable.

Note that the @code{omp_get_nested} API routine was deprecated
in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_set_nested},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
@end table



@node omp_set_schedule
@subsection @code{omp_set_schedule} -- Set the runtime scheduling method
@table @asis
@item @emph{Description}:
Sets the runtime scheduling method. The @var{kind} argument can have the
value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
@code{omp_sched_auto}, the chunk size is set to the value of
@var{chunk_size} if positive, or to the default value if zero or negative.
For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
@end table



@node omp_get_schedule
@subsection @code{omp_get_schedule} -- Obtain the runtime scheduling method
@table @asis
@item @emph{Description}:
Obtain the runtime scheduling method. The @var{kind} argument will be
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
@var{chunk_size}, is set to the chunk size.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
@end table


@node omp_get_teams_thread_limit
@subsection @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
@table @asis
@item @emph{Description}:
Return the maximum number of threads that will be able to participate in
each team created by a teams construct.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
@end table



@node omp_get_supported_active_levels
@subsection @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
@table @asis
@item @emph{Description}:
This function returns the maximum number of nested, active parallel regions
supported by this implementation.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
@end table



@node omp_set_max_active_levels
@subsection @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
This function limits the maximum allowed number of nested, active
parallel regions. @var{max_levels} must be less than or equal to
the value returned by @code{omp_get_supported_active_levels}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
@item @tab @code{integer max_levels}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
@ref{omp_get_supported_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
@end table



@node omp_get_max_active_levels
@subsection @code{omp_get_max_active_levels} -- Current maximum number of active regions
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed number of nested, active parallel regions.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
@end table


@node omp_get_level
@subsection @code{omp_get_level} -- Obtain the current nesting level
@table @asis
@item @emph{Description}:
This function returns the nesting level of the parallel regions
that enclose the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
@end table



@node omp_get_ancestor_thread_num
@subsection @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number of the current
thread's ancestor at the given nesting level. For values of @var{level}
outside the range zero to @code{omp_get_level}, -1 is returned; if
@var{level} equals @code{omp_get_level}, the result is identical to
@code{omp_get_thread_num}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
@end table



@node omp_get_team_size
@subsection @code{omp_get_team_size} -- Number of threads in a team
@table @asis
@item @emph{Description}:
This function returns the number of threads in the thread team to which
either the current thread or its ancestor belongs. For values of
@var{level} outside the range zero to @code{omp_get_level}, -1 is
returned; if @var{level} is zero, 1 is returned; and if @var{level}
equals @code{omp_get_level}, the result is identical to
@code{omp_get_num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
@end table



@node omp_get_active_level
@subsection @code{omp_get_active_level} -- Number of parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel regions
that enclose the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
@end table



@node Thread Affinity Routines
@section Thread Affinity Routines

Routines controlling and accessing thread-affinity policies.
They have C linkage and do not throw exceptions.

@menu
* omp_get_proc_bind:: Whether threads may be moved between CPUs
@c * omp_get_num_places:: <fixme>
@c * omp_get_place_num_procs:: <fixme>
@c * omp_get_place_proc_ids:: <fixme>
@c * omp_get_place_num:: <fixme>
@c * omp_get_partition_num_places:: <fixme>
@c * omp_get_partition_place_nums:: <fixme>
@c * omp_set_affinity_format:: <fixme>
@c * omp_get_affinity_format:: <fixme>
@c * omp_display_affinity:: <fixme>
@c * omp_capture_affinity:: <fixme>
@end menu



@node omp_get_proc_bind
@subsection @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
@table @asis
@item @emph{Description}:
This function returns the currently active thread affinity policy, which is
set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
@code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
@code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
@end multitable

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
@end table



@node Teams Region Routines
@section Teams Region Routines

Routines controlling the league of teams that are executed in a @code{teams}
region. They have C linkage and do not throw exceptions.

@menu
* omp_get_num_teams:: Number of teams
* omp_get_team_num:: Get team number
* omp_set_num_teams:: Set upper teams limit for teams region
* omp_get_max_teams:: Maximum number of teams for teams region
* omp_set_teams_thread_limit:: Set upper thread limit for teams construct
* omp_get_thread_limit:: Maximum number of threads
@end menu



@node omp_get_num_teams
@subsection @code{omp_get_num_teams} -- Number of teams
@table @asis
@item @emph{Description}:
Returns the number of teams in the current teams region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
@end table



@node omp_get_team_num
@subsection @code{omp_get_team_num} -- Get team number
@table @asis
@item @emph{Description}:
Returns the team number of the calling thread.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
@end table



@node omp_set_num_teams
@subsection @code{omp_set_num_teams} -- Set upper teams limit for teams construct
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of teams created by a teams
construct that does not specify a @code{num_teams} clause. The
argument of @code{omp_set_num_teams} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
@item @tab @code{integer, intent(in) :: num_teams}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
@end table



@node omp_get_max_teams
@subsection @code{omp_get_max_teams} -- Maximum number of teams of teams region
@table @asis
@item @emph{Description}:
Return the maximum number of teams used for a teams region
that does not use the @code{num_teams} clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_teams}, @ref{omp_get_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
@end table



@node omp_set_teams_thread_limit
@subsection @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of threads that will be available
for each team created by a teams construct that does not specify a
@code{thread_limit} clause. The argument of
@code{omp_set_teams_thread_limit} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
@item @tab @code{integer, intent(in) :: thread_limit}
@end multitable

@item @emph{See also}:
@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
@end table



@node omp_get_thread_limit
@subsection @code{omp_get_thread_limit} -- Maximum number of threads
@table @asis
@item @emph{Description}:
Return the maximum number of threads available to the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
@end table



@node Tasking Routines
@section Tasking Routines

Routines relating to explicit tasks.
They have C linkage and do not throw exceptions.

@menu
* omp_get_max_task_priority:: Maximum task priority value that can be set
* omp_in_explicit_task:: Whether a given task is an explicit task
* omp_in_final:: Whether in final or included task region
@end menu



@node omp_get_max_task_priority
@subsection @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed priority number for tasks.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table



@node omp_in_explicit_task
@subsection @code{omp_in_explicit_task} -- Whether a given task is an explicit task
@table @asis
@item @emph{Description}:
The function returns the @var{explicit-task-var} ICV; it returns true when the
encountering task was generated by a task-generating construct such as
@code{target}, @code{task} or @code{taskloop}. Otherwise, the encountering task
is in an implicit task region, such as one generated by an implicit or explicit
@code{parallel} region, and @code{omp_in_explicit_task} returns false.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_explicit_task(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_explicit_task()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.2}, Section 18.5.2.
@end table



@node omp_in_final
@subsection @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
or included task region, @code{false} otherwise. Here, @code{true}
and @code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
@end table



@c @node Resource Relinquishing Routines
@c @section Resource Relinquishing Routines
@c
@c Routines releasing resources used by the OpenMP runtime.
@c They have C linkage and do not throw exceptions.
@c
@c @menu
@c * omp_pause_resource:: <fixme>
@c * omp_pause_resource_all:: <fixme>
@c @end menu

@node Device Information Routines
@section Device Information Routines

Routines related to devices available to an OpenMP program.
They have C linkage and do not throw exceptions.

@menu
* omp_get_num_procs:: Number of processors online
@c * omp_get_max_progress_width:: <fixme>/TR11
* omp_set_default_device:: Set the default device for target regions
* omp_get_default_device:: Get the default device for target regions
* omp_get_num_devices:: Number of target devices
* omp_get_device_num:: Get device that current thread is running on
* omp_is_initial_device:: Whether executing on the host device
* omp_get_initial_device:: Device number of host device
@end menu



@node omp_get_num_procs
@subsection @code{omp_get_num_procs} -- Number of processors online
@table @asis
@item @emph{Description}:
Returns the number of processors online on the current device.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
@end table



@node omp_set_default_device
@subsection @code{omp_set_default_device} -- Set the default device for target regions
@table @asis
@item @emph{Description}:
Set the default device for target regions without a device clause. The
argument shall be a nonnegative device number.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
@item @tab @code{integer device_num}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table



@node omp_get_default_device
@subsection @code{omp_get_default_device} -- Get the default device for target regions
@table @asis
@item @emph{Description}:
Get the default device for target regions without a device clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
@end table



@node omp_get_num_devices
@subsection @code{omp_get_num_devices} -- Number of target devices
@table @asis
@item @emph{Description}:
Returns the number of target devices.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
@end table



@node omp_get_device_num
@subsection @code{omp_get_device_num} -- Return device number of current device
@table @asis
@item @emph{Description}:
This function returns a device number that represents the device that the
current thread is executing on. For OpenMP 5.0, this must be equal to the
value returned by the @code{omp_get_initial_device} function when called
from the host.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_initial_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
@end table



@node omp_is_initial_device
@subsection @code{omp_is_initial_device} -- Whether executing on the host device
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running on the host device,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
@end table
1631
1632
1633
1634 @node omp_get_initial_device
1635 @subsection @code{omp_get_initial_device} -- Return device number of initial device
1636 @table @asis
1637 @item @emph{Description}:
1638 This function returns a device number that represents the host device.
1639 For OpenMP 5.1, this must be equal to the value returned by the
1640 @code{omp_get_num_devices} function.
1641
1642 @item @emph{C/C++}
1643 @multitable @columnfractions .20 .80
1644 @item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
1645 @end multitable
1646
1647 @item @emph{Fortran}:
1648 @multitable @columnfractions .20 .80
1649 @item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
1650 @end multitable
1651
1652 @item @emph{See also}:
1653 @ref{omp_get_num_devices}
1654
1655 @item @emph{Reference}:
1656 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
1657 @end table
1658
1659
1660
1661 @node Device Memory Routines
1662 @section Device Memory Routines
1663
1664 Routines related to memory allocation and managing corresponding
1665 pointers on devices. They have C linkage and do not throw exceptions.
1666
1667 @menu
1668 * omp_target_alloc:: Allocate device memory
1669 * omp_target_free:: Free device memory
1670 * omp_target_is_present:: Check whether storage is mapped
1671 @c * omp_target_is_accessible:: <fixme>
1672 @c * omp_target_memcpy:: <fixme>
1673 @c * omp_target_memcpy_rect:: <fixme>
1674 @c * omp_target_memcpy_async:: <fixme>
1675 @c * omp_target_memcpy_rect_async:: <fixme>
1676 @c * omp_target_memset:: <fixme>/TR12
1677 @c * omp_target_memset_async:: <fixme>/TR12
1678 * omp_target_associate_ptr:: Associate a device pointer with a host pointer
1679 * omp_target_disassociate_ptr:: Remove device--host pointer association
1680 * omp_get_mapped_ptr:: Return device pointer to a host pointer
1681 @end menu
1682
1683
1684
1685 @node omp_target_alloc
1686 @subsection @code{omp_target_alloc} -- Allocate device memory
1687 @table @asis
1688 @item @emph{Description}:
1689 This routine allocates @var{size} bytes of memory in the device environment
1690 associated with the device number @var{device_num}. If successful, a device
1691 pointer is returned, otherwise a null pointer.
1692
1693 In GCC, when the device is the host or the device shares memory with the host,
1694 the memory is allocated on the host; in that case, when @var{size} is zero,
1695 either NULL or a unique pointer value that can later be successfully passed to
1696 @code{omp_target_free} is returned. When the allocation is not performed on
1697 the host, a null pointer is returned when @var{size} is zero; in that case,
1698 additionally a diagnostic might be printed to standard error (stderr).
1699
1700 Running this routine in a @code{target} region except on the initial device
1701 is not supported.
1702
1703 @item @emph{C/C++}
1704 @multitable @columnfractions .20 .80
1705 @item @emph{Prototype}: @tab @code{void *omp_target_alloc(size_t size, int device_num)}
1706 @end multitable
1707
1708 @item @emph{Fortran}:
1709 @multitable @columnfractions .20 .80
1710 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_target_alloc(size, device_num) bind(C)}
1711 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
1712 @item @tab @code{integer(c_size_t), value :: size}
1713 @item @tab @code{integer(c_int), value :: device_num}
1714 @end multitable
1715
1716 @item @emph{See also}:
1717 @ref{omp_target_free}, @ref{omp_target_associate_ptr}
1718
1719 @item @emph{Reference}:
1720 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.1
1721 @end table



@node omp_target_free
@subsection @code{omp_target_free} -- Free device memory
@table @asis
@item @emph{Description}:
This routine frees memory allocated by the @code{omp_target_alloc} routine.
The @var{device_ptr} argument must be either a null pointer or a device pointer
returned by @code{omp_target_alloc} for the specified @var{device_num}. The
device number @var{device_num} must be a conforming device number.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_target_free(void *device_ptr, int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_target_free(device_ptr, device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
@item @tab @code{type(c_ptr), value :: device_ptr}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_alloc}, @ref{omp_target_disassociate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.2
@end table




@node omp_target_is_present
@subsection @code{omp_target_is_present} -- Check whether storage is mapped
@table @asis
@item @emph{Description}:
This routine tests whether storage, identified by the host pointer @var{ptr},
is mapped to the device specified by @var{device_num}. If so, it returns
@emph{true}, otherwise @emph{false}.

In GCC, this includes self mapping, such that @code{omp_target_is_present}
returns @emph{true} when @var{device_num} specifies the host or when the host
and the device share memory. If @var{ptr} is a null pointer, @emph{true} is
returned; if @var{device_num} is an invalid device number, @emph{false} is
returned.

If those conditions do not apply, @emph{true} is returned if the association
has been established by an explicit or implicit @code{map} clause, the
@code{declare target} directive or a call to the @code{omp_target_associate_ptr}
routine.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_target_is_present(const void *ptr,}
@item @tab @code{ int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(c_int) function omp_target_is_present(ptr, &}
@item @tab @code{ device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
@item @tab @code{type(c_ptr), value :: ptr}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_associate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.3
@end table



@node omp_target_associate_ptr
@subsection @code{omp_target_associate_ptr} -- Associate a device pointer with a host pointer
@table @asis
@item @emph{Description}:
This routine associates storage on the host with storage on a device identified
by @var{device_num}. The device pointer is usually obtained by calling
@code{omp_target_alloc} or by other means (but not by using the @code{map}
clauses or the @code{declare target} directive). The host pointer should point
to memory that has a storage size of at least @var{size}.

The @var{device_offset} parameter specifies the offset into @var{device_ptr}
that is used as the base address for the device side of the mapping; the
storage size should be at least @var{device_offset} plus @var{size}.

After the association, the host pointer can be used in a @code{map} clause and
in the @code{to} and @code{from} clauses of the @code{target update} directive
to transfer data between the associated pointers. The reference count of such
associated storage is infinite. The association can be removed by calling
@code{omp_target_disassociate_ptr}, which should be done before the lifetime
of either storage ends.

The routine returns nonzero (@code{EINVAL}) when @var{device_num} is invalid
or when it denotes the initial device or a device that shares memory with the
host. @code{omp_target_associate_ptr} returns zero if @var{host_ptr} points
into storage that lies fully inside a previously associated memory region.
Otherwise, zero is returned if the association was successfully established;
if none of the cases above apply, nonzero (@code{EINVAL}) is returned.

The @code{omp_target_is_present} routine can be used to test whether
associated storage for a device pointer exists.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_target_associate_ptr(const void *host_ptr,}
@item @tab @code{ const void *device_ptr,}
@item @tab @code{ size_t size,}
@item @tab @code{ size_t device_offset,}
@item @tab @code{ int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(c_int) function omp_target_associate_ptr(host_ptr, &}
@item @tab @code{ device_ptr, size, device_offset, device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
@item @tab @code{type(c_ptr), value :: host_ptr, device_ptr}
@item @tab @code{integer(c_size_t), value :: size, device_offset}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_disassociate_ptr}, @ref{omp_target_is_present},
@ref{omp_target_alloc}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.9
@end table



@node omp_target_disassociate_ptr
@subsection @code{omp_target_disassociate_ptr} -- Remove device--host pointer association
@table @asis
@item @emph{Description}:
This routine removes the storage association established by calling
@code{omp_target_associate_ptr} and sets the reference count to zero,
even if @code{omp_target_associate_ptr} was invoked multiple times for
the host pointer @var{ptr}. If applicable, the device memory needs
to be freed by the user.

If an associated device storage location for the @var{device_num} was
found and has infinite reference count, the association is removed and
zero is returned. In all other cases, nonzero (@code{EINVAL}) is returned
and no other action is taken.

Note that passing a host pointer where the association to the device pointer
was established with the @code{declare target} directive yields undefined
behavior.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_target_disassociate_ptr(const void *ptr,}
@item @tab @code{ int device_num)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(c_int) function omp_target_disassociate_ptr(ptr, &}
@item @tab @code{ device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
@item @tab @code{type(c_ptr), value :: ptr}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_associate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.10
@end table




@node omp_get_mapped_ptr
@subsection @code{omp_get_mapped_ptr} -- Return device pointer to a host pointer
@table @asis
@item @emph{Description}:
If the device number refers to the initial device or to a device with
memory accessible from the host (shared memory), the @code{omp_get_mapped_ptr}
routine returns the value of the passed @var{ptr}. Otherwise, if associated
storage for the passed host pointer @var{ptr} exists on the device associated
with @var{device_num}, it returns that pointer. In all other cases and in case
of an error, a null pointer is returned.

The association of a storage location is established either via an explicit or
implicit @code{map} clause, the @code{declare target} directive or the
@code{omp_target_associate_ptr} routine.

Running this routine in a @code{target} region, except on the initial device,
is not supported.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *omp_get_mapped_ptr(const void *ptr, int device_num);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{type(c_ptr) function omp_get_mapped_ptr(ptr, device_num) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
@item @tab @code{type(c_ptr), value :: ptr}
@item @tab @code{integer(c_int), value :: device_num}
@end multitable

@item @emph{See also}:
@ref{omp_target_associate_ptr}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.11
@end table



@node Lock Routines
@section Lock Routines

Initialize, set, test, unset and destroy simple and nested locks.
The routines have C linkage and do not throw exceptions.

@menu
* omp_init_lock:: Initialize simple lock
* omp_init_nest_lock:: Initialize nested lock
@c * omp_init_lock_with_hint:: <fixme>
@c * omp_init_nest_lock_with_hint:: <fixme>
* omp_destroy_lock:: Destroy simple lock
* omp_destroy_nest_lock:: Destroy nested lock
* omp_set_lock:: Wait for and set simple lock
* omp_set_nest_lock:: Wait for and set nested lock
* omp_unset_lock:: Unset simple lock
* omp_unset_nest_lock:: Unset nested lock
* omp_test_lock:: Test and set simple lock if available
* omp_test_nest_lock:: Test and set nested lock if available
@end menu




@node omp_init_lock
@subsection @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock. After initialization, the lock is in
an unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table




@node omp_init_nest_lock
@subsection @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock. After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table




@node omp_destroy_lock
@subsection @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock. In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table




@node omp_destroy_nest_lock
@subsection @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock. In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table




@node omp_set_lock
@subsection @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread,
a deadlock occurs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table



@node omp_set_nest_lock
@subsection @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table




@node omp_unset_lock
@subsection @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
or more threads attempted to set the lock before, one of them is chosen to
acquire it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table




@node omp_unset_nest_lock
@subsection @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
lock becomes unlocked. If one or more threads attempted to set the lock before,
one of them is chosen to acquire it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table




@node omp_test_lock
@subsection @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available. This function returns
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table




@node omp_test_nest_lock
@subsection @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is successfully set, the new nesting count is returned;
otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node Timing Routines
@section Timing Routines

Portable, thread-based, wall clock timer.
The routines have C linkage and do not throw exceptions.

@menu
* omp_get_wtick:: Get timer precision.
* omp_get_wtime:: Elapsed wall clock time.
@end menu




@node omp_get_wtick
@subsection @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
@end table




@node omp_get_wtime
@subsection @code{omp_get_wtime} -- Elapsed wall clock time
@table @asis
@item @emph{Description}:
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some ``time in the past'', which is an arbitrary time
guaranteed not to change during the execution of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtick}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
@end table



@node Event Routine
@section Event Routine

Support for event objects.
The routine has C linkage and does not throw exceptions.

@menu
* omp_fulfill_event:: Fulfill and destroy an OpenMP event.
@end menu




@node omp_fulfill_event
@subsection @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
@table @asis
@item @emph{Description}:
Fulfill the event associated with the event handle argument. Currently, it
is only used to fulfill events generated by @code{detach} clauses on task
constructs; the effect of fulfilling the event is to allow the task to
complete.

The result of calling @code{omp_fulfill_event} with an event handle other
than that generated by a @code{detach} clause is undefined. Calling it with
an event handle that has already been fulfilled is also undefined.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
@item @tab @code{integer (kind=omp_event_handle_kind) :: event}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
@end table



@c @node Interoperability Routines
@c @section Interoperability Routines
@c
@c Routines to obtain properties from an @code{omp_interop_t} object.
@c They have C linkage and do not throw exceptions.
@c
@c @menu
@c * omp_get_num_interop_properties:: <fixme>
@c * omp_get_interop_int:: <fixme>
@c * omp_get_interop_ptr:: <fixme>
@c * omp_get_interop_str:: <fixme>
@c * omp_get_interop_name:: <fixme>
@c * omp_get_interop_type_desc:: <fixme>
@c * omp_get_interop_rc_desc:: <fixme>
@c @end menu

@node Memory Management Routines
@section Memory Management Routines

Routines to manage and allocate memory on the current device.
They have C linkage and do not throw exceptions.

@menu
* omp_init_allocator:: Create an allocator
* omp_destroy_allocator:: Destroy an allocator
* omp_set_default_allocator:: Set the default allocator
* omp_get_default_allocator:: Get the default allocator
@c * omp_alloc:: <fixme>
@c * omp_aligned_alloc:: <fixme>
@c * omp_free:: <fixme>
@c * omp_calloc:: <fixme>
@c * omp_aligned_calloc:: <fixme>
@c * omp_realloc:: <fixme>
@c * omp_get_memspace_num_resources:: <fixme>/TR11
@c * omp_get_submemspace:: <fixme>/TR11
@end menu




@node omp_init_allocator
@subsection @code{omp_init_allocator} -- Create an allocator
@table @asis
@item @emph{Description}:
Create an allocator that uses the specified memory space and has the specified
traits; if an allocator that fulfills the requirements cannot be created,
@code{omp_null_allocator} is returned.

The predefined memory spaces and available traits can be found at
@ref{OMP_ALLOCATOR}, where the trait names have to be prefixed by
@code{omp_atk_} (e.g. @code{omp_atk_pinned}) and the named trait values by
@code{omp_atv_} (e.g. @code{omp_atv_true}); additionally, @code{omp_atv_default}
may be used as trait value to specify that the default value should be used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_init_allocator(}
@item @tab @code{ omp_memspace_handle_t memspace,}
@item @tab @code{ int ntraits,}
@item @tab @code{ const omp_alloctrait_t traits[]);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function omp_init_allocator(memspace, ntraits, traits)}
@item @tab @code{integer (kind=omp_allocator_handle_kind) :: omp_init_allocator}
@item @tab @code{integer (kind=omp_memspace_handle_kind), intent(in) :: memspace}
@item @tab @code{integer, intent(in) :: ntraits}
@item @tab @code{type (omp_alloctrait), intent(in) :: traits(*)}
@end multitable

@item @emph{See also}:
@ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_destroy_allocator}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.2
@end table



@node omp_destroy_allocator
@subsection @code{omp_destroy_allocator} -- Destroy an allocator
@table @asis
@item @emph{Description}:
Releases all resources used by a memory allocator, which must not represent
a predefined memory allocator. Accessing memory after its allocator has been
destroyed has unspecified behavior. Passing @code{omp_null_allocator} to the
routine is permitted but will have no effect.


@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_allocator (omp_allocator_handle_t allocator);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_allocator(allocator)}
@item @tab @code{integer (kind=omp_allocator_handle_kind), intent(in) :: allocator}
@end multitable

@item @emph{See also}:
@ref{omp_init_allocator}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.3
@end table



@node omp_set_default_allocator
@subsection @code{omp_set_default_allocator} -- Set the default allocator
@table @asis
@item @emph{Description}:
Sets the default allocator that is used when no allocator has been specified
in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
routine is invoked with the @code{omp_null_allocator} allocator.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_default_allocator(omp_allocator_handle_t allocator);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_default_allocator(allocator)}
@item @tab @code{integer (kind=omp_allocator_handle_kind), intent(in) :: allocator}
@end multitable

@item @emph{See also}:
@ref{omp_get_default_allocator}, @ref{omp_init_allocator}, @ref{OMP_ALLOCATOR},
@ref{Memory allocation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.4
@end table



@node omp_get_default_allocator
@subsection @code{omp_get_default_allocator} -- Get the default allocator
@table @asis
@item @emph{Description}:
The routine returns the default allocator that is used when no allocator has
been specified in the @code{allocate} or @code{allocator} clause or if an
OpenMP memory routine is invoked with the @code{omp_null_allocator} allocator.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_get_default_allocator(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function omp_get_default_allocator()}
@item @tab @code{integer (kind=omp_allocator_handle_kind) :: omp_get_default_allocator}
@end multitable

@item @emph{See also}:
@ref{omp_set_default_allocator}, @ref{OMP_ALLOCATOR}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.5
@end table


@c @node Tool Control Routine
@c
@c FIXME

@c @node Environment Display Routine
@c @section Environment Display Routine
@c
@c Routine to display the OpenMP version number and the initial value of ICVs.
@c It has C linkage and does not throw exceptions.
@c
@c menu
@c * omp_display_env:: <fixme>
@c end menu
@c ---------------------------------------------------------------------
@c OpenMP Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter OpenMP Environment Variables

The environment variables beginning with @env{OMP_} are defined by
section 4 of the OpenMP specification in version 4.5 or in a later version
of the specification, while those beginning with @env{GOMP_} are GNU extensions.
Most @env{OMP_} environment variables have an associated internal control
variable (ICV).

For any OpenMP environment variable that sets an ICV and is neither
@code{OMP_DEFAULT_DEVICE} nor has global ICV scope, associated
device-specific environment variables exist. For them, the environment
variable without suffix affects the host. The suffix @code{_DEV_} followed
by a non-negative device number less than the number of available devices sets
the ICV for the corresponding device. The suffix @code{_DEV} sets the ICV
of all non-host devices for which a device-specific corresponding environment
variable has not been set, while the @code{_ALL} suffix sets the ICV of all
host and non-host devices for which a more specific corresponding environment
variable is not set.
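As a hypothetical illustration of these suffix rules, using the device-scoped @env{OMP_NUM_TEAMS} ICV (which devices honor which device-specific variables depends on the implementation):

```shell
OMP_NUM_TEAMS=4        # host only
OMP_NUM_TEAMS_DEV_0=16 # device 0 only
OMP_NUM_TEAMS_DEV=8    # all other non-host devices
```

With these settings, the host uses 4 teams, device 0 uses 16, and every remaining non-host device uses 8.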

@menu
* OMP_ALLOCATOR:: Set the default allocator
* OMP_AFFINITY_FORMAT:: Set the format string used for affinity display
* OMP_CANCELLATION:: Set whether cancellation is activated
* OMP_DISPLAY_AFFINITY:: Display thread affinity information
* OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
* OMP_DEFAULT_DEVICE:: Set the device used in target regions
* OMP_DYNAMIC:: Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
* OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
* OMP_NESTED:: Nested parallel regions
* OMP_NUM_TEAMS:: Specifies the number of teams to use by teams region
* OMP_NUM_THREADS:: Specifies the number of threads to use
* OMP_PROC_BIND:: Whether threads may be moved between CPUs
* OMP_PLACES:: Specifies on which CPUs the threads should be placed
* OMP_STACKSIZE:: Set default thread stack size
* OMP_SCHEDULE:: How threads are scheduled
* OMP_TARGET_OFFLOAD:: Controls offloading behaviour
* OMP_TEAMS_THREAD_LIMIT:: Set the maximum number of threads imposed by teams
* OMP_THREAD_LIMIT:: Set the maximum number of threads
* OMP_WAIT_POLICY:: How waiting threads are handled
* GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
* GOMP_DEBUG:: Enable debugging output
* GOMP_STACKSIZE:: Set default thread stack size
* GOMP_SPINCOUNT:: Set the busy-wait spin count
* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
@end menu


@node OMP_ALLOCATOR
@section @env{OMP_ALLOCATOR} -- Set the default allocator
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{def-allocator-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Sets the default allocator that is used when no allocator has been specified
in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
routine is invoked with the @code{omp_null_allocator} allocator.
If unset, @code{omp_default_mem_alloc} is used.

The value can either be a predefined allocator or a predefined memory space
or a predefined memory space followed by a colon and a comma-separated list
of memory trait and value pairs, separated by @code{=}.

Note: The corresponding device environment variables are currently not
supported. Therefore, the non-host @var{def-allocator-var} ICVs are always
initialized to @code{omp_default_mem_alloc}. However, on all devices,
the @code{omp_set_default_allocator} API routine can be used to change the
value.

@multitable @columnfractions .45 .45
@headitem Predefined allocators @tab Associated predefined memory spaces
@item omp_default_mem_alloc @tab omp_default_mem_space
@item omp_large_cap_mem_alloc @tab omp_large_cap_mem_space
@item omp_const_mem_alloc @tab omp_const_mem_space
@item omp_high_bw_mem_alloc @tab omp_high_bw_mem_space
@item omp_low_lat_mem_alloc @tab omp_low_lat_mem_space
@item omp_cgroup_mem_alloc @tab --
@item omp_pteam_mem_alloc @tab --
@item omp_thread_mem_alloc @tab --
@end multitable

The predefined allocators use the default values for the traits,
as listed below, except that the last three allocators have the
@code{access} trait set to @code{cgroup}, @code{pteam}, and
@code{thread}, respectively.

@multitable @columnfractions .25 .40 .25
@headitem Trait @tab Allowed values @tab Default value
@item @code{sync_hint} @tab @code{contended}, @code{uncontended},
@code{serialized}, @code{private}
@tab @code{contended}
@item @code{alignment} @tab Positive integer being a power of two
@tab 1 byte
@item @code{access} @tab @code{all}, @code{cgroup},
@code{pteam}, @code{thread}
@tab @code{all}
@item @code{pool_size} @tab Positive integer
@tab See @ref{Memory allocation}
@item @code{fallback} @tab @code{default_mem_fb}, @code{null_fb},
@code{abort_fb}, @code{allocator_fb}
@tab See below
@item @code{fb_data} @tab @emph{unsupported as it needs an allocator handle}
@tab (none)
@item @code{pinned} @tab @code{true}, @code{false}
@tab @code{false}
@item @code{partition} @tab @code{environment}, @code{nearest},
@code{blocked}, @code{interleaved}
@tab @code{environment}
@end multitable

For the @code{fallback} trait, the default value is @code{null_fb} for the
@code{omp_default_mem_alloc} allocator and any allocator that is associated
with device memory; for all other allocators, it is @code{default_mem_fb}
by default.

Examples:
@smallexample
OMP_ALLOCATOR=omp_high_bw_mem_alloc
OMP_ALLOCATOR=omp_large_cap_mem_space
OMP_ALLOCATOR=omp_low_lat_mem_space:pinned=true,partition=nearest
@end smallexample

@item @emph{See also}:
@ref{Memory allocation}, @ref{omp_get_default_allocator},
@ref{omp_set_default_allocator}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.21
@end table



@node OMP_AFFINITY_FORMAT
@section @env{OMP_AFFINITY_FORMAT} -- Set the format string used for affinity display
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{affinity-format-var}
@item @emph{Scope:} device
@item @emph{Description}:
Sets the format string used when displaying OpenMP thread affinity information.
Special values are output using @code{%} followed by an optional size
specification and then either the single-character field type or its long
name enclosed in curly braces; using @code{%%} will display a literal percent.
The size specification consists of an optional @code{0.} or @code{.} followed
by a positive integer, specifying the minimal width of the output. With
@code{0.} and numerical values, the output is padded with zeros on the left;
with @code{.}, the output is padded by spaces on the left; otherwise, the
output is padded by spaces on the right. If unset, the value is
``@code{level %L thread %i affinity %A}''.

Supported field types are:

@multitable @columnfractions .10 .25 .60
@item t @tab team_num @tab value returned by @code{omp_get_team_num}
@item T @tab num_teams @tab value returned by @code{omp_get_num_teams}
@item L @tab nesting_level @tab value returned by @code{omp_get_level}
@item n @tab thread_num @tab value returned by @code{omp_get_thread_num}
@item N @tab num_threads @tab value returned by @code{omp_get_num_threads}
@item a @tab ancestor_tnum
@tab value returned by
@code{omp_get_ancestor_thread_num(omp_get_level()-1)}
@item H @tab host @tab name of the host that executes the thread
@item P @tab process_id @tab process identifier
@item i @tab native_thread_id @tab native thread identifier
@item A @tab thread_affinity
@tab comma-separated list of integer values or ranges, representing the
processors on which a process might execute, subject to affinity
mechanisms
@end multitable

For instance, after setting

@smallexample
OMP_AFFINITY_FORMAT="%0.2a!%n!%.4L!%N;%.2t;%0.2T;%@{team_num@};%@{num_teams@};%A"
@end smallexample

with either @code{OMP_DISPLAY_AFFINITY} being set or when calling
@code{omp_display_affinity} with @code{NULL} or an empty string, the program
might display the following:

@smallexample
00!0!   1!4; 0;01;0;1;0-11
00!3!   1!4; 0;01;0;1;0-11
00!2!   1!4; 0;01;0;1;0-11
00!1!   1!4; 0;01;0;1;0-11
@end smallexample

@item @emph{See also}:
@ref{OMP_DISPLAY_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.14
@end table


@node OMP_CANCELLATION
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{cancel-var}
@item @emph{Scope:} global
@item @emph{Description}:
If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
if unset, cancellation is disabled and the @code{cancel} construct is ignored.

@item @emph{See also}:
@ref{omp_get_cancellation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
@end table



@node OMP_DISPLAY_AFFINITY
@section @env{OMP_DISPLAY_AFFINITY} -- Display thread affinity information
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{display-affinity-var}
@item @emph{Scope:} global
@item @emph{Description}:
If set to @code{FALSE} or if unset, affinity displaying is disabled.
If set to @code{TRUE}, the runtime will display affinity information about
OpenMP threads in a parallel region upon entering the region and every time
any change occurs.

@item @emph{See also}:
@ref{OMP_AFFINITY_FORMAT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.13
@end table



@node OMP_DISPLAY_ENV
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
@cindex Environment Variable
@table @asis
@item @emph{ICV:} none
@item @emph{Scope:} not applicable
@item @emph{Description}:
If set to @code{TRUE}, the OpenMP version number and the values
associated with the OpenMP environment variables are printed to @code{stderr}.
If set to @code{VERBOSE}, it additionally shows the value of the environment
variables which are GNU extensions. If undefined or set to @code{FALSE},
this information will not be shown.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
@end table



@node OMP_DEFAULT_DEVICE
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{default-device-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Set to choose the device which is used in a @code{target} region, unless the
value is overridden by @code{omp_set_default_device} or by a @code{device}
clause. The value shall be the nonnegative device number. If no device with
the given device number exists, the code is executed on the host. If unset
and @env{OMP_TARGET_OFFLOAD} is @code{mandatory} while no non-host devices
are available, it is set to @code{omp_invalid_device}; otherwise, if unset,
device number 0 will be used.

@item @emph{See also}:
@ref{omp_get_default_device}, @ref{omp_set_default_device},
@ref{OMP_TARGET_OFFLOAD}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.2}, Section 21.2.7
@end table


@node OMP_DYNAMIC
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{dyn-var}
@item @emph{Scope:} global
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The value of this environment variable shall be
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{See also}:
@ref{omp_set_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
@end table



@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{max-active-levels-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions. The value of this variable shall be a positive integer.
If undefined, then if @env{OMP_NESTED} is defined and set to true, or
if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
a list with more than one item, the maximum number of nested parallel
regions will be initialized to the largest number supported, otherwise
it will be set to one.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{OMP_NESTED}, @ref{OMP_PROC_BIND},
@ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
@end table



@node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum task priority value
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{max-task-priority-var}
@item @emph{Scope:} global
@item @emph{Description}:
Specifies the initial value for the maximum priority value that can be
set for a task. The value of this variable shall be a non-negative
integer, and zero is allowed. If undefined, the default priority is
0.

@item @emph{See also}:
@ref{omp_get_max_task_priority}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
@end table



@node OMP_NESTED
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{max-active-levels-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the maximum
number of nested active parallel regions will by default be set to the
maximum supported, otherwise it will be set to one. If
@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting will override this
setting. If both are undefined, nested parallel regions are enabled if
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined to a list with
more than one item, otherwise they are disabled by default.

Note that the @code{OMP_NESTED} environment variable was deprecated in
the OpenMP specification 5.2 in favor of @code{OMP_MAX_ACTIVE_LEVELS}.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested},
@ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
@end table



@node OMP_NUM_TEAMS
@section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{nteams-var}
@item @emph{Scope:} device
@item @emph{Description}:
Specifies the upper bound for the number of teams to use in teams regions
without an explicit @code{num_teams} clause. The value of this variable
shall be a positive integer. If undefined, it defaults to 0, which means
an implementation-defined upper bound.

@item @emph{See also}:
@ref{omp_set_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
@end table




@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{nthreads-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive integers;
the value specifies the number of threads to use for the corresponding nested
level. Specifying more than one item in the list will automatically enable
nesting by default. If undefined, one thread per CPU is used.

When a list with more than one value is specified, it also affects the
@var{max-active-levels-var} ICV as described in @ref{OMP_MAX_ACTIVE_LEVELS}.

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
@end table



@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{bind-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies whether threads may be moved between processors. If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
they may be moved. Alternatively, a comma-separated list with the
values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
be used to specify the thread affinity policy for the corresponding nesting
level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
same place partition as the primary thread. With @code{CLOSE} they are
kept close to the primary thread in contiguous place partitions. With
@code{SPREAD} a sparse distribution
across the place partitions is used. Specifying more than one item in the
list will automatically enable nesting by default.

When a list is specified, it also affects the @var{max-active-levels-var} ICV
as described in @ref{OMP_MAX_ACTIVE_LEVELS}.

When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.

@item @emph{See also}:
@ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY}, @ref{OMP_PLACES},
@ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
@end table


@node OMP_PLACES
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{place-partition-var}
@item @emph{Scope:} implicit tasks
@item @emph{Description}:
The thread placement can be either specified using an abstract name or by an
explicit list of the places. The abstract names @code{threads}, @code{cores},
@code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
followed by a positive number in parentheses, which denotes how many places
shall be created. With @code{threads} each place corresponds to a single
hardware thread; with @code{cores} to a single core with the corresponding
number of hardware threads; with @code{sockets} the place corresponds to a
single socket; with @code{ll_caches} to a set of cores that shares the last
level cache on the device; and with @code{numa_domains} to a set of cores for
which their closest memory on the device is the same memory and at a similar
distance from the cores. The resulting placement can be shown by setting the
@env{OMP_DISPLAY_ENV} environment variable.

Alternatively, the placement can be specified explicitly as a comma-separated
list of places. A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads. The curly braces can be omitted
when only a single number has been specified. The hardware threads
belonging to a place can either be specified as a comma-separated list of
nonnegative thread numbers or using an interval. Multiple places can also be
either specified by a comma-separated list of places or by an interval. To
specify an interval, a colon followed by the count is placed after
the hardware thread number or the place. Optionally, the length can be
followed by a colon and the stride number -- otherwise a unit stride is
assumed. Placing an exclamation mark (@code{!}) directly before a curly
brace or numbers inside the curly braces (excluding intervals) will
exclude those hardware threads.

For instance, the following specify the same places list:
@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.

If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
between CPUs following no placement policy.

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
@ref{OMP_DISPLAY_ENV}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
@end table



@node OMP_STACKSIZE
@section @env{OMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{stacksize-var}
@item @emph{Scope:} device
@item @emph{Description}:
Set the default thread stack size in kilobytes, unless the number
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes. This is different from @code{pthread_attr_setstacksize},
which takes the size in bytes as an argument. If the stack size cannot
be set due to system constraints, an error is reported and the initial
stack size is left unchanged. If undefined, the stack size is system
dependent.

@item @emph{See also}:
@ref{GOMP_STACKSIZE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
@end table



@node OMP_SCHEDULE
@section @env{OMP_SCHEDULE} -- How threads are scheduled
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{run-sched-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or
@code{auto}. The optional @code{chunk} size shall be a positive integer.
If undefined, dynamic scheduling and a chunk size of 1 are used.

@item @emph{See also}:
@ref{omp_set_schedule}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
@end table



@node OMP_TARGET_OFFLOAD
@section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{target-offload-var}
@item @emph{Scope:} global
@item @emph{Description}:
Specifies the behaviour with regard to offloading code to a device. This
variable can be set to one of three values: @code{MANDATORY}, @code{DISABLED}
or @code{DEFAULT}.

If set to @code{MANDATORY}, the program will terminate with an error if
any device construct or device memory routine uses a device that is unavailable
or not supported by the implementation, or uses a non-conforming device number.
If set to @code{DISABLED}, then offloading is disabled and all code will run on
the host. If set to @code{DEFAULT}, the program will try offloading to the
device first, then fall back to running code on the host if it cannot.

If undefined, then the program will behave as if @code{DEFAULT} was set.

Note: Even with @code{MANDATORY}, there will be no run-time termination when
the device number in a @code{device} clause or argument to a device memory
routine refers to the host, which includes using the device number in the
@var{default-device-var} ICV. However, the initial value of
the @var{default-device-var} ICV is affected by @code{MANDATORY}.

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.2}, Section 21.2.8
@end table
3158
3159
3160 @node OMP_TEAMS_THREAD_LIMIT
3161 @section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
3162 @cindex Environment Variable
3163 @table @asis
3164 @item @emph{ICV:} @var{teams-thread-limit-var}
3165 @item @emph{Scope:} device
3166 @item @emph{Description}:
3167 Specifies an upper bound for the number of threads to use by each contention
3168 group created by a teams construct without explicit @code{thread_limit}
3169 clause. The value of this variable shall be a positive integer. If undefined,
3170 the value of 0 is used which stands for an implementation defined upper
3171 limit.
3172
3173 @item @emph{See also}:
3174 @ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}
3175
3176 @item @emph{Reference}:
3177 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
3178 @end table
3179
3180
3181
3182 @node OMP_THREAD_LIMIT
3183 @section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
3184 @cindex Environment Variable
3185 @table @asis
3186 @item @emph{ICV:} @var{thread-limit-var}
3187 @item @emph{Scope:} data environment
3188 @item @emph{Description}:
3189 Specifies the number of threads to use for the whole program. The
3190 value of this variable shall be a positive integer. If undefined,
3191 the number of threads is not limited.
3192
3193 @item @emph{See also}:
3194 @ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
3195
3196 @item @emph{Reference}:
3197 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
3198 @end table
3199
3200
3201
3202 @node OMP_WAIT_POLICY
3203 @section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
3204 @cindex Environment Variable
3205 @table @asis
3206 @item @emph{Description}:
3207 Specifies whether waiting threads should be active or passive. If
3208 the value is @code{PASSIVE}, waiting threads should not consume CPU
3209 power while waiting; if the value is @code{ACTIVE}, they may. If
3210 undefined, threads wait actively for a short time
3211 before waiting passively.
3212
3213 @item @emph{See also}:
3214 @ref{GOMP_SPINCOUNT}
3215
3216 @item @emph{Reference}:
3217 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
3218 @end table
3219
3220
3221
3222 @node GOMP_CPU_AFFINITY
3223 @section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
3224 @cindex Environment Variable
3225 @table @asis
3226 @item @emph{Description}:
3227 Binds threads to specific CPUs. The variable should contain a space-separated
3228 or comma-separated list of CPUs. This list may contain different kinds of
3229 entries: single CPU numbers in any order, a range of CPUs (M-N),
3230 or a range with some stride (M-N:S). CPU numbers are zero-based. For example,
3231 @code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
3232 to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
3233 CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
3234 and 14 respectively and then start assigning back from the beginning of
3235 the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
3236
3237 There is no libgomp library routine to determine whether a CPU affinity
3238 specification is in effect. As a workaround, language-specific library
3239 functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
3240 Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
3241 environment variable. A defined CPU affinity on startup cannot be changed
3242 or disabled during the runtime of the application.
3243
3244 If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
3245 @env{OMP_PROC_BIND} takes precedence. If neither has been set, or
3246 when @env{OMP_PROC_BIND} is set to
3247 @code{FALSE}, the host system will handle the assignment of threads to CPUs.
3248
3249 @item @emph{See also}:
3250 @ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
3251 @end table
3252
3253
3254
3255 @node GOMP_DEBUG
3256 @section @env{GOMP_DEBUG} -- Enable debugging output
3257 @cindex Environment Variable
3258 @table @asis
3259 @item @emph{Description}:
3260 Enable debugging output. The variable should be set to @code{0}
3261 (disabled, also the default if not set), or @code{1} (enabled).
3262
3263 If enabled, some debugging output will be printed during execution.
3264 This is currently not specified in more detail, and subject to change.
3265 @end table
3266
3267
3268
3269 @node GOMP_STACKSIZE
3270 @section @env{GOMP_STACKSIZE} -- Set default thread stack size
3271 @cindex Environment Variable
3272 @cindex Implementation specific setting
3273 @table @asis
3274 @item @emph{Description}:
3275 Set the default thread stack size in kilobytes. This is different from
3276 @code{pthread_attr_setstacksize}, which takes the size in bytes as its
3277 argument. If the stack size cannot be set due to system constraints, an
3278 error is reported and the initial stack size is left unchanged. If undefined,
3279 the stack size is system dependent.
3280
3281 @item @emph{See also}:
3282 @ref{OMP_STACKSIZE}
3283
3284 @item @emph{Reference}:
3285 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
3286 GCC Patches Mailing List},
3287 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
3288 GCC Patches Mailing List}
3289 @end table
3290
3291
3292
3293 @node GOMP_SPINCOUNT
3294 @section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
3295 @cindex Environment Variable
3296 @cindex Implementation specific setting
3297 @table @asis
3298 @item @emph{Description}:
3299 Determines how long a thread waits actively while consuming CPU power
3300 before waiting passively without consuming CPU power. The value may be
3301 either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
3302 integer which gives the number of spins of the busy-wait loop. The
3303 integer may optionally be followed by one of the following suffixes acting
3304 as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
3305 million), @code{G} (giga, billion), or @code{T} (tera, trillion).
3306 If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
3307 300,000 is used when @env{OMP_WAIT_POLICY} is undefined, and
3308 30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
3309 If there are more OpenMP threads than available CPUs, 1000 and 100
3310 spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
3311 undefined, respectively; unless @env{GOMP_SPINCOUNT} is lower
3312 or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
3313
3314 @item @emph{See also}:
3315 @ref{OMP_WAIT_POLICY}
3316 @end table
3317
3318
3319
3320 @node GOMP_RTEMS_THREAD_POOLS
3321 @section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
3322 @cindex Environment Variable
3323 @cindex Implementation specific setting
3324 @table @asis
3325 @item @emph{Description}:
3326 This environment variable is only used on the RTEMS real-time operating system.
3327 It determines the scheduler instance specific thread pools. The format for
3328 @env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
3329 @code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
3330 separated by @code{:} where:
3331 @itemize @bullet
3332 @item @code{<thread-pool-count>} is the thread pool count for this scheduler
3333 instance.
3334 @item @code{$<priority>} is an optional priority for the worker threads of a
3335 thread pool according to @code{pthread_setschedparam}. If a priority
3336 value is omitted, a worker thread inherits the priority of the OpenMP
3337 primary thread that created it. The priority of the worker thread is not
3338 changed after creation, even if a new OpenMP primary thread using the worker has
3339 a different priority.
3340 @item @code{@@<scheduler-name>} is the scheduler instance name according to the
3341 RTEMS application configuration.
3342 @end itemize
3343 If no thread pool configuration is specified for a scheduler instance,
3344 each OpenMP primary thread of this scheduler instance will use its own
3345 dynamically allocated thread pool. To limit the worker thread count of the
3346 thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
3347 @item @emph{Example}:
3348 Let's suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
3349 @code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
3350 @code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
3351 scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
3352 one thread pool available. Since no priority is specified for this scheduler
3353 instance, the worker thread inherits the priority of the OpenMP primary thread
3354 that created it. In the scheduler instance @code{WRK1} there are three thread
3355 pools available and their worker threads run at priority four.
3356 @end table
3357
3358
3359
3360 @c ---------------------------------------------------------------------
3361 @c Enabling OpenACC
3362 @c ---------------------------------------------------------------------
3363
3364 @node Enabling OpenACC
3365 @chapter Enabling OpenACC
3366
3367 To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
3368 flag @option{-fopenacc} must be specified. This enables the OpenACC directive
3369 @code{#pragma acc} in C/C++ and, for Fortran, the @code{!$acc} directive in
3370 free form and the @code{c$acc}, @code{*$acc} and @code{!$acc} directives in
3371 fixed form, as well as the @code{!$} conditional compilation sentinel in
3372 free form and the @code{c$}, @code{*$} and @code{!$} sentinels in fixed
3373 form. The flag also arranges for automatic linking of the OpenACC runtime
3374 library (@ref{OpenACC Runtime Library Routines}).
3375
3376 See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.
3377
3378 A complete description of all OpenACC directives accepted may be found in
3379 the @uref{https://www.openacc.org, OpenACC} Application Programming
3380 Interface manual, version 2.6.
3381
3382
3383
3384 @c ---------------------------------------------------------------------
3385 @c OpenACC Runtime Library Routines
3386 @c ---------------------------------------------------------------------
3387
3388 @node OpenACC Runtime Library Routines
3389 @chapter OpenACC Runtime Library Routines
3390
3391 The runtime routines described here are defined by section 3 of the OpenACC
3392 specification, version 2.6.
3393 They have C linkage and do not throw exceptions.
3394 Generally, they are available only for the host, with the exception of
3395 @code{acc_on_device}, which is available for both the host and the
3396 accelerator device.
3397
3398 @menu
3399 * acc_get_num_devices:: Get number of devices for the given device
3400 type.
3401 * acc_set_device_type:: Set type of device accelerator to use.
3402 * acc_get_device_type:: Get type of device accelerator to be used.
3403 * acc_set_device_num:: Set device number to use.
3404 * acc_get_device_num:: Get device number to be used.
3405 * acc_get_property:: Get device property.
3406 * acc_async_test:: Tests for completion of a specific asynchronous
3407 operation.
3408 * acc_async_test_all:: Tests for completion of all asynchronous
3409 operations.
3410 * acc_wait:: Wait for completion of a specific asynchronous
3411 operation.
3412 * acc_wait_all:: Waits for completion of all asynchronous
3413 operations.
3414 * acc_wait_all_async:: Wait for completion of all asynchronous
3415 operations.
3416 * acc_wait_async:: Wait for completion of asynchronous operations.
3417 * acc_init:: Initialize runtime for a specific device type.
3418 * acc_shutdown:: Shuts down the runtime for a specific device
3419 type.
3420 * acc_on_device:: Whether executing on a particular device
3421 * acc_malloc:: Allocate device memory.
3422 * acc_free:: Free device memory.
3423 * acc_copyin:: Allocate device memory and copy host memory to
3424 it.
3425 * acc_present_or_copyin:: If the data is not present on the device,
3426 allocate device memory and copy from host
3427 memory.
3428 * acc_create:: Allocate device memory and map it to host
3429 memory.
3430 * acc_present_or_create:: If the data is not present on the device,
3431 allocate device memory and map it to host
3432 memory.
3433 * acc_copyout:: Copy device memory to host memory.
3434 * acc_delete:: Free device memory.
3435 * acc_update_device:: Update device memory from mapped host memory.
3436 * acc_update_self:: Update host memory from mapped device memory.
3437 * acc_map_data:: Map previously allocated device memory to host
3438 memory.
3439 * acc_unmap_data:: Unmap device memory from host memory.
3440 * acc_deviceptr:: Get device pointer associated with specific
3441 host address.
3442 * acc_hostptr:: Get host pointer associated with specific
3443 device address.
3444 * acc_is_present:: Indicate whether host variable / array is
3445 present on device.
3446 * acc_memcpy_to_device:: Copy host memory to device memory.
3447 * acc_memcpy_from_device:: Copy device memory to host memory.
3448 * acc_attach:: Let device pointer point to device-pointer target.
3449 * acc_detach:: Let device pointer point to host-pointer target.
3450
3451 API routines for target platforms.
3452
3453 * acc_get_current_cuda_device:: Get CUDA device handle.
3454 * acc_get_current_cuda_context::Get CUDA context handle.
3455 * acc_get_cuda_stream:: Get CUDA stream handle.
3456 * acc_set_cuda_stream:: Set CUDA stream handle.
3457
3458 API routines for the OpenACC Profiling Interface.
3459
3460 * acc_prof_register:: Register callbacks.
3461 * acc_prof_unregister:: Unregister callbacks.
3462 * acc_prof_lookup:: Obtain inquiry functions.
3463 * acc_register_library:: Library registration.
3464 @end menu
3465
3466
3467
3468 @node acc_get_num_devices
3469 @section @code{acc_get_num_devices} -- Get number of devices for given device type
3470 @table @asis
3471 @item @emph{Description}
3472 This function returns a value indicating the number of devices available
3473 for the device type specified in @var{devicetype}.
3474
3475 @item @emph{C/C++}:
3476 @multitable @columnfractions .20 .80
3477 @item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
3478 @end multitable
3479
3480 @item @emph{Fortran}:
3481 @multitable @columnfractions .20 .80
3482 @item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
3483 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3484 @end multitable
3485
3486 @item @emph{Reference}:
3487 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3488 3.2.1.
3489 @end table
3490
3491
3492
3493 @node acc_set_device_type
3494 @section @code{acc_set_device_type} -- Set type of device accelerator to use.
3495 @table @asis
3496 @item @emph{Description}
3497 This function indicates to the runtime library which device type, specified
3498 in @var{devicetype}, to use when executing a parallel or kernels region.
3499
3500 @item @emph{C/C++}:
3501 @multitable @columnfractions .20 .80
3502 @item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
3503 @end multitable
3504
3505 @item @emph{Fortran}:
3506 @multitable @columnfractions .20 .80
3507 @item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
3508 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3509 @end multitable
3510
3511 @item @emph{Reference}:
3512 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3513 3.2.2.
3514 @end table
3515
3516
3517
3518 @node acc_get_device_type
3519 @section @code{acc_get_device_type} -- Get type of device accelerator to be used.
3520 @table @asis
3521 @item @emph{Description}
3522 This function returns the device type that will be used when executing a
3523 parallel or kernels region.
3524
3525 This function returns @code{acc_device_none} if
3526 @code{acc_get_device_type} is called from
3527 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
3528 callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
3529 Interface}), that is, if the device is currently being initialized.
3530
3531 @item @emph{C/C++}:
3532 @multitable @columnfractions .20 .80
3533 @item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
3534 @end multitable
3535
3536 @item @emph{Fortran}:
3537 @multitable @columnfractions .20 .80
3538 @item @emph{Interface}: @tab @code{function acc_get_device_type()}
3539 @item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
3540 @end multitable
3541
3542 @item @emph{Reference}:
3543 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3544 3.2.3.
3545 @end table
3546
3547
3548
3549 @node acc_set_device_num
3550 @section @code{acc_set_device_num} -- Set device number to use.
3551 @table @asis
3552 @item @emph{Description}
3553 This function indicates to the runtime which device number, specified
3554 by @var{devicenum} and associated with the specified device
3555 type @var{devicetype}, to use.
3556
3557 @item @emph{C/C++}:
3558 @multitable @columnfractions .20 .80
3559 @item @emph{Prototype}: @tab @code{acc_set_device_num(int devicenum, acc_device_t devicetype);}
3560 @end multitable
3561
3562 @item @emph{Fortran}:
3563 @multitable @columnfractions .20 .80
3564 @item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
3565 @item @tab @code{integer devicenum}
3566 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3567 @end multitable
3568
3569 @item @emph{Reference}:
3570 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3571 3.2.4.
3572 @end table
3573
3574
3575
3576 @node acc_get_device_num
3577 @section @code{acc_get_device_num} -- Get device number to be used.
3578 @table @asis
3579 @item @emph{Description}
3580 This function returns the device number, associated with the specified device
3581 type @var{devicetype}, that will be used when executing a parallel or kernels
3582 region.
3583
3584 @item @emph{C/C++}:
3585 @multitable @columnfractions .20 .80
3586 @item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
3587 @end multitable
3588
3589 @item @emph{Fortran}:
3590 @multitable @columnfractions .20 .80
3591 @item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
3592 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3593 @item @tab @code{integer acc_get_device_num}
3594 @end multitable
3595
3596 @item @emph{Reference}:
3597 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3598 3.2.5.
3599 @end table
3600
3601
3602
3603 @node acc_get_property
3604 @section @code{acc_get_property} -- Get device property.
3605 @cindex acc_get_property
3606 @cindex acc_get_property_string
3607 @table @asis
3608 @item @emph{Description}
3609 These routines return the value of the specified @var{property} for the
3610 device being queried according to @var{devicenum} and @var{devicetype}.
3611 Integer-valued and string-valued properties are returned by
3612 @code{acc_get_property} and @code{acc_get_property_string} respectively.
3613 The Fortran @code{acc_get_property_string} subroutine returns the string
3614 retrieved in its fourth argument while the remaining entry points are
3615 functions, which pass the return value as their result.
3616
3617 Note, for Fortran only: the OpenACC technical committee corrected and, hence,
3618 modified the interface introduced in OpenACC 2.6. The kind-value parameter
3619 @code{acc_device_property} has been renamed to @code{acc_device_property_kind}
3620 for consistency, and the return type of the @code{acc_get_property} function is
3621 now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
3622 The parameter @code{acc_device_property} will continue to be provided,
3623 but might be removed in a future version of GCC.
3624
3625 @item @emph{C/C++}:
3626 @multitable @columnfractions .20 .80
3627 @item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
3628 @item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
3629 @end multitable
3630
3631 @item @emph{Fortran}:
3632 @multitable @columnfractions .20 .80
3633 @item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
3634 @item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
3635 @item @tab @code{use ISO_C_Binding, only: c_size_t}
3636 @item @tab @code{integer devicenum}
3637 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3638 @item @tab @code{integer(kind=acc_device_property_kind) property}
3639 @item @tab @code{integer(kind=c_size_t) acc_get_property}
3640 @item @tab @code{character(*) string}
3641 @end multitable
3642
3643 @item @emph{Reference}:
3644 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3645 3.2.6.
3646 @end table
3647
3648
3649
3650 @node acc_async_test
3651 @section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
3652 @table @asis
3653 @item @emph{Description}
3654 This function tests for completion of the asynchronous operation specified
3655 in @var{arg}. In C/C++, a non-zero value is returned to indicate that
3656 the specified asynchronous operation has completed, while Fortran returns
3657 @code{true}. If the asynchronous operation has not completed, C/C++ returns
3658 zero and Fortran returns @code{false}.
3659
3660 @item @emph{C/C++}:
3661 @multitable @columnfractions .20 .80
3662 @item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
3663 @end multitable
3664
3665 @item @emph{Fortran}:
3666 @multitable @columnfractions .20 .80
3667 @item @emph{Interface}: @tab @code{function acc_async_test(arg)}
3668 @item @tab @code{integer(kind=acc_handle_kind) arg}
3669 @item @tab @code{logical acc_async_test}
3670 @end multitable
3671
3672 @item @emph{Reference}:
3673 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3674 3.2.9.
3675 @end table
3676
3677
3678
3679 @node acc_async_test_all
3680 @section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
3681 @table @asis
3682 @item @emph{Description}
3683 This function tests for completion of all asynchronous operations.
3684 In C/C++, a non-zero value is returned to indicate that all asynchronous
3685 operations have completed, while Fortran returns @code{true}. If
3686 any asynchronous operation has not completed, C/C++ returns zero and
3687 Fortran returns @code{false}.
3688
3689 @item @emph{C/C++}:
3690 @multitable @columnfractions .20 .80
3691 @item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
3692 @end multitable
3693
3694 @item @emph{Fortran}:
3695 @multitable @columnfractions .20 .80
3696 @item @emph{Interface}: @tab @code{function acc_async_test_all()}
3697 @item @tab @code{logical acc_async_test_all}
3698 @end multitable
3699
3700 @item @emph{Reference}:
3701 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3702 3.2.10.
3703 @end table
3704
3705
3706
3707 @node acc_wait
3708 @section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
3709 @table @asis
3710 @item @emph{Description}
3711 This function waits for completion of the asynchronous operation
3712 specified in @var{arg}.
3713
3714 @item @emph{C/C++}:
3715 @multitable @columnfractions .20 .80
3716 @item @emph{Prototype}: @tab @code{acc_wait(arg);}
3717 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
3718 @end multitable
3719
3720 @item @emph{Fortran}:
3721 @multitable @columnfractions .20 .80
3722 @item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
3723 @item @tab @code{integer(acc_handle_kind) arg}
3724 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
3725 @item @tab @code{integer(acc_handle_kind) arg}
3726 @end multitable
3727
3728 @item @emph{Reference}:
3729 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3730 3.2.11.
3731 @end table
3732
3733
3734
3735 @node acc_wait_all
3736 @section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
3737 @table @asis
3738 @item @emph{Description}
3739 This function waits for the completion of all asynchronous operations.
3740
3741 @item @emph{C/C++}:
3742 @multitable @columnfractions .20 .80
3743 @item @emph{Prototype}: @tab @code{acc_wait_all(void);}
3744 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
3745 @end multitable
3746
3747 @item @emph{Fortran}:
3748 @multitable @columnfractions .20 .80
3749 @item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
3750 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
3751 @end multitable
3752
3753 @item @emph{Reference}:
3754 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3755 3.2.13.
3756 @end table
3757
3758
3759
3760 @node acc_wait_all_async
3761 @section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
3762 @table @asis
3763 @item @emph{Description}
3764 This function enqueues a wait operation on the queue @var{async} for any
3765 and all asynchronous operations that have been previously enqueued on
3766 any queue.
3767
3768 @item @emph{C/C++}:
3769 @multitable @columnfractions .20 .80
3770 @item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
3771 @end multitable
3772
3773 @item @emph{Fortran}:
3774 @multitable @columnfractions .20 .80
3775 @item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
3776 @item @tab @code{integer(acc_handle_kind) async}
3777 @end multitable
3778
3779 @item @emph{Reference}:
3780 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3781 3.2.14.
3782 @end table
3783
3784
3785
3786 @node acc_wait_async
3787 @section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
3788 @table @asis
3789 @item @emph{Description}
3790 This function enqueues a wait operation on queue @var{async} for any and all
3791 asynchronous operations enqueued on queue @var{arg}.
3792
3793 @item @emph{C/C++}:
3794 @multitable @columnfractions .20 .80
3795 @item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
3796 @end multitable
3797
3798 @item @emph{Fortran}:
3799 @multitable @columnfractions .20 .80
3800 @item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
3801 @item @tab @code{integer(acc_handle_kind) arg, async}
3802 @end multitable
3803
3804 @item @emph{Reference}:
3805 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3806 3.2.12.
3807 @end table
3808
3809
3810
3811 @node acc_init
3812 @section @code{acc_init} -- Initialize runtime for a specific device type.
3813 @table @asis
3814 @item @emph{Description}
3815 This function initializes the runtime for the device type specified in
3816 @var{devicetype}.
3817
3818 @item @emph{C/C++}:
3819 @multitable @columnfractions .20 .80
3820 @item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
3821 @end multitable
3822
3823 @item @emph{Fortran}:
3824 @multitable @columnfractions .20 .80
3825 @item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
3826 @item @tab @code{integer(acc_device_kind) devicetype}
3827 @end multitable
3828
3829 @item @emph{Reference}:
3830 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3831 3.2.7.
3832 @end table
3833
3834
3835
3836 @node acc_shutdown
3837 @section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
3838 @table @asis
3839 @item @emph{Description}
3840 This function shuts down the runtime for the device type specified in
3841 @var{devicetype}.
3842
3843 @item @emph{C/C++}:
3844 @multitable @columnfractions .20 .80
3845 @item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
3846 @end multitable
3847
3848 @item @emph{Fortran}:
3849 @multitable @columnfractions .20 .80
3850 @item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
3851 @item @tab @code{integer(acc_device_kind) devicetype}
3852 @end multitable
3853
3854 @item @emph{Reference}:
3855 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3856 3.2.8.
3857 @end table
3858
3859
3860
3861 @node acc_on_device
3862 @section @code{acc_on_device} -- Whether executing on a particular device
3863 @table @asis
3864 @item @emph{Description}:
3865 This function returns whether the program is executing on a particular
3866 device specified in @var{devicetype}. In C/C++, a non-zero value is
3867 returned to indicate the program is executing on the specified device type;
3868 in Fortran, @code{true} is returned. If the program is not executing
3869 on the specified device type, C/C++ returns zero, while Fortran
3870 returns @code{false}.
3871
3872 @item @emph{C/C++}:
3873 @multitable @columnfractions .20 .80
3874 @item @emph{Prototype}: @tab @code{acc_on_device(acc_device_t devicetype);}
3875 @end multitable
3876
3877 @item @emph{Fortran}:
3878 @multitable @columnfractions .20 .80
3879 @item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
3880 @item @tab @code{integer(acc_device_kind) devicetype}
3881 @item @tab @code{logical acc_on_device}
3882 @end multitable
3883
3884
3885 @item @emph{Reference}:
3886 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3887 3.2.17.
3888 @end table
3889
3890
3891
3892 @node acc_malloc
3893 @section @code{acc_malloc} -- Allocate device memory.
3894 @table @asis
3895 @item @emph{Description}
3896 This function allocates @var{len} bytes of device memory. It returns
3897 the device address of the allocated memory.
3898
3899 @item @emph{C/C++}:
3900 @multitable @columnfractions .20 .80
3901 @item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
3902 @end multitable
3903
3904 @item @emph{Reference}:
3905 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3906 3.2.18.
3907 @end table
3908
3909
3910
3911 @node acc_free
3912 @section @code{acc_free} -- Free device memory.
3913 @table @asis
3914 @item @emph{Description}
3915 Free previously allocated device memory at the device address @code{a}.
3916
3917 @item @emph{C/C++}:
3918 @multitable @columnfractions .20 .80
3919 @item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
3920 @end multitable
3921
3922 @item @emph{Reference}:
3923 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3924 3.2.19.
3925 @end table
3926
3927
3928
3929 @node acc_copyin
3930 @section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
3931 @table @asis
3932 @item @emph{Description}
3933 In C/C++, this function allocates @var{len} bytes of device memory
3934 and maps it to the specified host address in @var{a}. The device
3935 address of the newly allocated device memory is returned.
3936
3937 In Fortran, two forms are supported. In the first form, @var{a} specifies
3938 a contiguous array section. In the second form, @var{a} specifies a
3939 variable or array element and @var{len} specifies the length in bytes.
3940
3941 @item @emph{C/C++}:
3942 @multitable @columnfractions .20 .80
3943 @item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
3944 @item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
3945 @end multitable
3946
3947 @item @emph{Fortran}:
3948 @multitable @columnfractions .20 .80
3949 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
3950 @item @tab @code{type, dimension(:[,:]...) :: a}
3951 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
3952 @item @tab @code{type, dimension(:[,:]...) :: a}
3953 @item @tab @code{integer len}
3954 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
3955 @item @tab @code{type, dimension(:[,:]...) :: a}
3956 @item @tab @code{integer(acc_handle_kind) :: async}
3957 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
3958 @item @tab @code{type, dimension(:[,:]...) :: a}
3959 @item @tab @code{integer len}
3960 @item @tab @code{integer(acc_handle_kind) :: async}
3961 @end multitable
3962
3963 @item @emph{Reference}:
3964 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3965 3.2.20.
3966 @end table
3967
3968
3969
3970 @node acc_present_or_copyin
3971 @section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
3972 @table @asis
3973 @item @emph{Description}
3974 This function tests if the host data specified by @var{a} and of length
3975 @var{len} is present or not. If it is not present, then device memory
3976 will be allocated and the host memory copied. The device address of
3977 the newly allocated device memory is returned.
3978
3979 In Fortran, two forms are supported. In the first form, @var{a} specifies
3980 a contiguous array section. In the second form, @var{a} specifies a variable
3981 or array element and @var{len} specifies the length in bytes.
3982
3983 Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
3984 backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.
3985
3986 @item @emph{C/C++}:
3987 @multitable @columnfractions .20 .80
3988 @item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
3989 @item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
3990 @end multitable
3991
3992 @item @emph{Fortran}:
3993 @multitable @columnfractions .20 .80
3994 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
3995 @item @tab @code{type, dimension(:[,:]...) :: a}
3996 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
3997 @item @tab @code{type, dimension(:[,:]...) :: a}
3998 @item @tab @code{integer len}
3999 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
4000 @item @tab @code{type, dimension(:[,:]...) :: a}
4001 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
4002 @item @tab @code{type, dimension(:[,:]...) :: a}
4003 @item @tab @code{integer len}
4004 @end multitable
4005
4006 @item @emph{Reference}:
4007 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4008 3.2.20.
4009 @end table
4010
4011
4012
4013 @node acc_create
4014 @section @code{acc_create} -- Allocate device memory and map it to host memory.
4015 @table @asis
4016 @item @emph{Description}
4017 This function allocates device memory and maps it to host memory specified
4018 by the host address @var{a} with a length of @var{len} bytes. In C/C++,
4019 the function returns the device address of the allocated device memory.
4020
4021 In Fortran, two forms are supported. In the first form, @var{a} specifies
4022 a contiguous array section. In the second form, @var{a} specifies a variable or
4023 array element and @var{len} specifies the length in bytes.
4024
4025 @item @emph{C/C++}:
4026 @multitable @columnfractions .20 .80
4027 @item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
4028 @item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
4029 @end multitable
4030
4031 @item @emph{Fortran}:
4032 @multitable @columnfractions .20 .80
4033 @item @emph{Interface}: @tab @code{subroutine acc_create(a)}
4034 @item @tab @code{type, dimension(:[,:]...) :: a}
4035 @item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
4036 @item @tab @code{type, dimension(:[,:]...) :: a}
4037 @item @tab @code{integer len}
4038 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
4039 @item @tab @code{type, dimension(:[,:]...) :: a}
4040 @item @tab @code{integer(acc_handle_kind) :: async}
4041 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
4042 @item @tab @code{type, dimension(:[,:]...) :: a}
4043 @item @tab @code{integer len}
4044 @item @tab @code{integer(acc_handle_kind) :: async}
4045 @end multitable
4046
4047 @item @emph{Reference}:
4048 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4049 3.2.21.
4050 @end table
4051
4052
4053
4054 @node acc_present_or_create
4055 @section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
4056 @table @asis
4057 @item @emph{Description}
4058 This function tests if the host data specified by @var{a} and of length
4059 @var{len} is present or not. If it is not present, then device memory
4060 will be allocated and mapped to host memory. In C/C++, the device address
4061 of the newly allocated device memory is returned.
4062
4063 In Fortran, two forms are supported. In the first form, @var{a} specifies
4064 a contiguous array section. In the second form, @var{a} specifies a variable or
4065 array element and @var{len} specifies the length in bytes.
4066
4067 Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
4068 backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.
4069
4070 @item @emph{C/C++}:
4071 @multitable @columnfractions .20 .80
4072 @item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
4073 @item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
4074 @end multitable
4075
4076 @item @emph{Fortran}:
4077 @multitable @columnfractions .20 .80
4078 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
4079 @item @tab @code{type, dimension(:[,:]...) :: a}
4080 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
4081 @item @tab @code{type, dimension(:[,:]...) :: a}
4082 @item @tab @code{integer len}
4083 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
4084 @item @tab @code{type, dimension(:[,:]...) :: a}
4085 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
4086 @item @tab @code{type, dimension(:[,:]...) :: a}
4087 @item @tab @code{integer len}
4088 @end multitable
4089
4090 @item @emph{Reference}:
4091 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4092 3.2.21.
4093 @end table
4094
4095
4096
4097 @node acc_copyout
4098 @section @code{acc_copyout} -- Copy device memory to host memory.
4099 @table @asis
4100 @item @emph{Description}
4101 In C/C++, this function copies mapped device memory to the host memory
4102 specified by the host address @var{a} for a length of @var{len} bytes.
4103
4104 In Fortran, two forms are supported. In the first form, @var{a} specifies
4105 a contiguous array section. In the second form, @var{a} specifies a variable or
4106 array element and @var{len} specifies the length in bytes.
4107
4108 @item @emph{C/C++}:
4109 @multitable @columnfractions .20 .80
4110 @item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
4111 @item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
4112 @item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
4113 @item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
4114 @end multitable
4115
4116 @item @emph{Fortran}:
4117 @multitable @columnfractions .20 .80
4118 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
4119 @item @tab @code{type, dimension(:[,:]...) :: a}
4120 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
4121 @item @tab @code{type, dimension(:[,:]...) :: a}
4122 @item @tab @code{integer len}
4123 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
4124 @item @tab @code{type, dimension(:[,:]...) :: a}
4125 @item @tab @code{integer(acc_handle_kind) :: async}
4126 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
4127 @item @tab @code{type, dimension(:[,:]...) :: a}
4128 @item @tab @code{integer len}
4129 @item @tab @code{integer(acc_handle_kind) :: async}
4130 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
4131 @item @tab @code{type, dimension(:[,:]...) :: a}
4132 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
4133 @item @tab @code{type, dimension(:[,:]...) :: a}
4134 @item @tab @code{integer len}
4135 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
4136 @item @tab @code{type, dimension(:[,:]...) :: a}
4137 @item @tab @code{integer(acc_handle_kind) :: async}
4138 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
4139 @item @tab @code{type, dimension(:[,:]...) :: a}
4140 @item @tab @code{integer len}
4141 @item @tab @code{integer(acc_handle_kind) :: async}
4142 @end multitable
4143
4144 @item @emph{Reference}:
4145 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4146 3.2.22.
4147 @end table
4148
4149
4150
4151 @node acc_delete
4152 @section @code{acc_delete} -- Free device memory.
4153 @table @asis
4154 @item @emph{Description}
4155 This function frees previously allocated device memory, specified by
4156 the host address @var{a} and a length of @var{len} bytes.
4157
4158 In Fortran, two forms are supported. In the first form, @var{a} specifies
4159 a contiguous array section. In the second form, @var{a} specifies a variable or
4160 array element and @var{len} specifies the length in bytes.
4161
4162 @item @emph{C/C++}:
4163 @multitable @columnfractions .20 .80
4164 @item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
4165 @item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
4166 @item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
4167 @item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
4168 @end multitable
4169
4170 @item @emph{Fortran}:
4171 @multitable @columnfractions .20 .80
4172 @item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
4173 @item @tab @code{type, dimension(:[,:]...) :: a}
4174 @item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
4175 @item @tab @code{type, dimension(:[,:]...) :: a}
4176 @item @tab @code{integer len}
4177 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
4178 @item @tab @code{type, dimension(:[,:]...) :: a}
4179 @item @tab @code{integer(acc_handle_kind) :: async}
4180 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
4181 @item @tab @code{type, dimension(:[,:]...) :: a}
4182 @item @tab @code{integer len}
4183 @item @tab @code{integer(acc_handle_kind) :: async}
4184 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
4185 @item @tab @code{type, dimension(:[,:]...) :: a}
4186 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
4187 @item @tab @code{type, dimension(:[,:]...) :: a}
4188 @item @tab @code{integer len}
4189 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
4190 @item @tab @code{type, dimension(:[,:]...) :: a}
4191 @item @tab @code{integer(acc_handle_kind) :: async}
4192 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
4193 @item @tab @code{type, dimension(:[,:]...) :: a}
4194 @item @tab @code{integer len}
4195 @item @tab @code{integer(acc_handle_kind) :: async}
4196 @end multitable
4197
4198 @item @emph{Reference}:
4199 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4200 3.2.23.
4201 @end table
4202
4203
4204
4205 @node acc_update_device
4206 @section @code{acc_update_device} -- Update device memory from mapped host memory.
4207 @table @asis
4208 @item @emph{Description}
4209 This function updates the device copy from the previously mapped host memory.
4210 The host memory is specified with the host address @var{a} and a length of
4211 @var{len} bytes.
4212
4213 In Fortran, two forms are supported. In the first form, @var{a} specifies
4214 a contiguous array section. In the second form, @var{a} specifies a variable or
4215 array element and @var{len} specifies the length in bytes.
4216
4217 @item @emph{C/C++}:
4218 @multitable @columnfractions .20 .80
4219 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
4220 @item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
4221 @end multitable
4222
4223 @item @emph{Fortran}:
4224 @multitable @columnfractions .20 .80
4225 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
4226 @item @tab @code{type, dimension(:[,:]...) :: a}
4227 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
4228 @item @tab @code{type, dimension(:[,:]...) :: a}
4229 @item @tab @code{integer len}
4230 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
4231 @item @tab @code{type, dimension(:[,:]...) :: a}
4232 @item @tab @code{integer(acc_handle_kind) :: async}
4233 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
4234 @item @tab @code{type, dimension(:[,:]...) :: a}
4235 @item @tab @code{integer len}
4236 @item @tab @code{integer(acc_handle_kind) :: async}
4237 @end multitable
4238
4239 @item @emph{Reference}:
4240 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4241 3.2.24.
4242 @end table
4243
4244
4245
4246 @node acc_update_self
4247 @section @code{acc_update_self} -- Update host memory from mapped device memory.
4248 @table @asis
4249 @item @emph{Description}
4250 This function updates the host copy from the previously mapped device memory.
4251 The host memory is specified with the host address @var{a} and a length of
4252 @var{len} bytes.
4253
4254 In Fortran, two forms are supported. In the first form, @var{a} specifies
4255 a contiguous array section. In the second form, @var{a} specifies a variable or
4256 array element and @var{len} specifies the length in bytes.
4257
4258 @item @emph{C/C++}:
4259 @multitable @columnfractions .20 .80
4260 @item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
4261 @item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
4262 @end multitable
4263
4264 @item @emph{Fortran}:
4265 @multitable @columnfractions .20 .80
4266 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
4267 @item @tab @code{type, dimension(:[,:]...) :: a}
4268 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
4269 @item @tab @code{type, dimension(:[,:]...) :: a}
4270 @item @tab @code{integer len}
4271 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
4272 @item @tab @code{type, dimension(:[,:]...) :: a}
4273 @item @tab @code{integer(acc_handle_kind) :: async}
4274 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
4275 @item @tab @code{type, dimension(:[,:]...) :: a}
4276 @item @tab @code{integer len}
4277 @item @tab @code{integer(acc_handle_kind) :: async}
4278 @end multitable
4279
4280 @item @emph{Reference}:
4281 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4282 3.2.25.
4283 @end table
4284
4285
4286
4287 @node acc_map_data
4288 @section @code{acc_map_data} -- Map previously allocated device memory to host memory.
4289 @table @asis
4290 @item @emph{Description}
4291 This function maps previously allocated device and host memory. The device
4292 memory is specified with the device address @var{d}. The host memory is
4293 specified with the host address @var{h} and a length of @var{len} bytes.
4294
4295 @item @emph{C/C++}:
4296 @multitable @columnfractions .20 .80
4297 @item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
4298 @end multitable
4299
4300 @item @emph{Reference}:
4301 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4302 3.2.26.
4303 @end table
4304
4305
4306
4307 @node acc_unmap_data
4308 @section @code{acc_unmap_data} -- Unmap device memory from host memory.
4309 @table @asis
4310 @item @emph{Description}
4311 This function unmaps previously mapped device and host memory.  The
4312 host memory is specified by the host address @var{h}.
4313
4314 @item @emph{C/C++}:
4315 @multitable @columnfractions .20 .80
4316 @item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
4317 @end multitable
4318
4319 @item @emph{Reference}:
4320 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4321 3.2.27.
4322 @end table
4323
4324
4325
4326 @node acc_deviceptr
4327 @section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
4328 @table @asis
4329 @item @emph{Description}
4330 This function returns the device address that has been mapped to the
4331 host address specified by @var{h}.
4332
4333 @item @emph{C/C++}:
4334 @multitable @columnfractions .20 .80
4335 @item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
4336 @end multitable
4337
4338 @item @emph{Reference}:
4339 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4340 3.2.28.
4341 @end table
4342
4343
4344
4345 @node acc_hostptr
4346 @section @code{acc_hostptr} -- Get host pointer associated with specific device address.
4347 @table @asis
4348 @item @emph{Description}
4349 This function returns the host address that has been mapped to the
4350 device address specified by @var{d}.
4351
4352 @item @emph{C/C++}:
4353 @multitable @columnfractions .20 .80
4354 @item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
4355 @end multitable
4356
4357 @item @emph{Reference}:
4358 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4359 3.2.29.
4360 @end table
4361
4362
4363
4364 @node acc_is_present
4365 @section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
4366 @table @asis
4367 @item @emph{Description}
4368 This function indicates whether the host data specified by the address @var{a}
4369 and a length of @var{len} bytes is present on the device. In C/C++, a non-zero
4370 value is returned to indicate the presence of the mapped memory on the
4371 device. A zero is returned to indicate the memory is not mapped on the
4372 device.
4373
4374 In Fortran, two forms are supported. In the first form, @var{a} specifies
4375 a contiguous array section. In the second form, @var{a} specifies a variable or
4376 array element and @var{len} specifies the length in bytes. If the host
4377 memory is mapped to device memory, then a @code{true} is returned. Otherwise,
4378 a @code{false} is returned to indicate the mapped memory is not present.
4379
4380 @item @emph{C/C++}:
4381 @multitable @columnfractions .20 .80
4382 @item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
4383 @end multitable
4384
4385 @item @emph{Fortran}:
4386 @multitable @columnfractions .20 .80
4387 @item @emph{Interface}: @tab @code{function acc_is_present(a)}
4388 @item @tab @code{type, dimension(:[,:]...) :: a}
4389 @item @tab @code{logical acc_is_present}
4390 @item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
4391 @item @tab @code{type, dimension(:[,:]...) :: a}
4392 @item @tab @code{integer len}
4393 @item @tab @code{logical acc_is_present}
4394 @end multitable
4395
4396 @item @emph{Reference}:
4397 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4398 3.2.30.
4399 @end table
4400
4401
4402
4403 @node acc_memcpy_to_device
4404 @section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
4405 @table @asis
4406 @item @emph{Description}
4407 This function copies host memory specified by the host address @var{src} to
4408 device memory specified by the device address @var{dest} for a length of
4409 @var{bytes} bytes.
4410
4411 @item @emph{C/C++}:
4412 @multitable @columnfractions .20 .80
4413 @item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
4414 @end multitable
4415
4416 @item @emph{Reference}:
4417 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4418 3.2.31.
4419 @end table
4420
4421
4422
4423 @node acc_memcpy_from_device
4424 @section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
4425 @table @asis
4426 @item @emph{Description}
4427 This function copies device memory specified by the device address @var{src}
4428 to host memory specified by the host address @var{dest} for a length of
4429 @var{bytes} bytes.
4430
4431 @item @emph{C/C++}:
4432 @multitable @columnfractions .20 .80
4433 @item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
4434 @end multitable
4435
4436 @item @emph{Reference}:
4437 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4438 3.2.32.
4439 @end table
4440
4441
4442
4443 @node acc_attach
4444 @section @code{acc_attach} -- Let device pointer point to device-pointer target.
4445 @table @asis
4446 @item @emph{Description}
4447 This function updates a pointer on the device from pointing to a host-pointer
4448 address to pointing to the corresponding device data.
4449
4450 @item @emph{C/C++}:
4451 @multitable @columnfractions .20 .80
4452 @item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
4453 @item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
4454 @end multitable
4455
4456 @item @emph{Reference}:
4457 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4458 3.2.34.
4459 @end table
4460
4461
4462
4463 @node acc_detach
4464 @section @code{acc_detach} -- Let device pointer point to host-pointer target.
4465 @table @asis
4466 @item @emph{Description}
4467 This function updates a pointer on the device from pointing to a device-pointer
4468 address to pointing to the corresponding host data.
4469
4470 @item @emph{C/C++}:
4471 @multitable @columnfractions .20 .80
4472 @item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
4473 @item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
4474 @item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
4475 @item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
4476 @end multitable
4477
4478 @item @emph{Reference}:
4479 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4480 3.2.35.
4481 @end table
4482
4483
4484
4485 @node acc_get_current_cuda_device
4486 @section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
4487 @table @asis
4488 @item @emph{Description}
4489 This function returns the CUDA device handle. This handle is the same
4490 as used by the CUDA Runtime or Driver APIs.
4491
4492 @item @emph{C/C++}:
4493 @multitable @columnfractions .20 .80
4494 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
4495 @end multitable
4496
4497 @item @emph{Reference}:
4498 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4499 A.2.1.1.
4500 @end table
4501
4502
4503
4504 @node acc_get_current_cuda_context
4505 @section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
4506 @table @asis
4507 @item @emph{Description}
4508 This function returns the CUDA context handle. This handle is the same
4509 as used by the CUDA Runtime or Driver APIs.
4510
4511 @item @emph{C/C++}:
4512 @multitable @columnfractions .20 .80
4513 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
4514 @end multitable
4515
4516 @item @emph{Reference}:
4517 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4518 A.2.1.2.
4519 @end table
4520
4521
4522
4523 @node acc_get_cuda_stream
4524 @section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
4525 @table @asis
4526 @item @emph{Description}
4527 This function returns the CUDA stream handle for the queue @var{async}.
4528 This handle is the same as used by the CUDA Runtime or Driver APIs.
4529
4530 @item @emph{C/C++}:
4531 @multitable @columnfractions .20 .80
4532 @item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
4533 @end multitable
4534
4535 @item @emph{Reference}:
4536 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4537 A.2.1.3.
4538 @end table
4539
4540
4541
4542 @node acc_set_cuda_stream
4543 @section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
4544 @table @asis
4545 @item @emph{Description}
4546 This function associates the stream handle specified by @var{stream} with
4547 the queue @var{async}.
4548
4549 This cannot be used to change the stream handle associated with
4550 @code{acc_async_sync}.
4551
4552 The return value is not specified.
4553
4554 @item @emph{C/C++}:
4555 @multitable @columnfractions .20 .80
4556 @item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
4557 @end multitable
4558
4559 @item @emph{Reference}:
4560 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4561 A.2.1.4.
4562 @end table
4563
4564
4565
4566 @node acc_prof_register
4567 @section @code{acc_prof_register} -- Register callbacks.
4568 @table @asis
4569 @item @emph{Description}:
4570 This function registers callbacks.
4571
4572 @item @emph{C/C++}:
4573 @multitable @columnfractions .20 .80
4574 @item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
4575 @end multitable
4576
4577 @item @emph{See also}:
4578 @ref{OpenACC Profiling Interface}
4579
4580 @item @emph{Reference}:
4581 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4582 5.3.
4583 @end table
4584
4585
4586
4587 @node acc_prof_unregister
4588 @section @code{acc_prof_unregister} -- Unregister callbacks.
4589 @table @asis
4590 @item @emph{Description}:
4591 This function unregisters callbacks.
4592
4593 @item @emph{C/C++}:
4594 @multitable @columnfractions .20 .80
4595 @item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
4596 @end multitable
4597
4598 @item @emph{See also}:
4599 @ref{OpenACC Profiling Interface}
4600
4601 @item @emph{Reference}:
4602 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4603 5.3.
4604 @end table
4605
4606
4607
4608 @node acc_prof_lookup
4609 @section @code{acc_prof_lookup} -- Obtain inquiry functions.
4610 @table @asis
4611 @item @emph{Description}:
4612 Function to obtain inquiry functions.
4613
4614 @item @emph{C/C++}:
4615 @multitable @columnfractions .20 .80
4616 @item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
4617 @end multitable
4618
4619 @item @emph{See also}:
4620 @ref{OpenACC Profiling Interface}
4621
4622 @item @emph{Reference}:
4623 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4624 5.3.
4625 @end table
4626
4627
4628
4629 @node acc_register_library
4630 @section @code{acc_register_library} -- Library registration.
4631 @table @asis
4632 @item @emph{Description}:
4633 Function for library registration.
4634
4635 @item @emph{C/C++}:
4636 @multitable @columnfractions .20 .80
4637 @item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
4638 @end multitable
4639
4640 @item @emph{See also}:
4641 @ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
4642
4643 @item @emph{Reference}:
4644 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4645 5.3.
4646 @end table
4647
4648
4649
4650 @c ---------------------------------------------------------------------
4651 @c OpenACC Environment Variables
4652 @c ---------------------------------------------------------------------
4653
4654 @node OpenACC Environment Variables
4655 @chapter OpenACC Environment Variables
4656
4657 The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
4658 are defined by section 4 of the OpenACC specification in version 2.0.
4659 The variable @env{ACC_PROFLIB}
4660 is defined by section 4 of the OpenACC specification in version 2.6.
4661 The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.
4662
4663 @menu
4664 * ACC_DEVICE_TYPE::
4665 * ACC_DEVICE_NUM::
4666 * ACC_PROFLIB::
4667 * GCC_ACC_NOTIFY::
4668 @end menu
4669
4670
4671
4672 @node ACC_DEVICE_TYPE
4673 @section @code{ACC_DEVICE_TYPE}
4674 @table @asis
4675 @item @emph{Reference}:
4676 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4677 4.1.
4678 @end table
4679
4680
4681
4682 @node ACC_DEVICE_NUM
4683 @section @code{ACC_DEVICE_NUM}
4684 @table @asis
4685 @item @emph{Reference}:
4686 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4687 4.2.
4688 @end table
4689
4690
4691
4692 @node ACC_PROFLIB
4693 @section @code{ACC_PROFLIB}
4694 @table @asis
4695 @item @emph{See also}:
4696 @ref{acc_register_library}, @ref{OpenACC Profiling Interface}
4697
4698 @item @emph{Reference}:
4699 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4700 4.3.
4701 @end table
4702
4703
4704
4705 @node GCC_ACC_NOTIFY
4706 @section @code{GCC_ACC_NOTIFY}
4707 @table @asis
4708 @item @emph{Description}:
4709 Print debug information pertaining to the accelerator.
4710 @end table
4711
4712
4713
4714 @c ---------------------------------------------------------------------
4715 @c CUDA Streams Usage
4716 @c ---------------------------------------------------------------------
4717
4718 @node CUDA Streams Usage
4719 @chapter CUDA Streams Usage
4720
4721 This applies to the @code{nvptx} plugin only.
4722
4723 The library provides elements that perform asynchronous movement of
4724 data and asynchronous operation of computing constructs. This
4725 asynchronous functionality is implemented by making use of CUDA
4726 streams@footnote{See "Stream Management" in "CUDA Driver API",
4727 TRM-06703-001, Version 5.5, for additional information}.
4728
4729 The primary means by which the asynchronous functionality is accessed
4730 is through the use of those OpenACC directives that make use of the
4731 @code{async} and @code{wait} clauses. When the @code{async} clause is
4732 first used with a directive, it creates a CUDA stream. If an
4733 @code{async-argument} is used with the @code{async} clause, then the
4734 stream is associated with the specified @code{async-argument}.
4735
4736 Following the creation of an association between a CUDA stream and the
4737 @code{async-argument} of an @code{async} clause, both the @code{wait}
4738 clause and the @code{wait} directive can be used. When either the
4739 clause or directive is used after stream creation, it creates a
4740 rendezvous point whereby execution waits until all operations
4741 associated with the @code{async-argument}, that is, the stream, have
4742 completed.
4743
4744 Normally, the management of the streams that are created as a result of
4745 using the @code{async} clause is done without any intervention by the
4746 caller. This implies the association between the @code{async-argument}
4747 and the CUDA stream will be maintained for the lifetime of the program.
4748 However, this association can be changed through the use of the library
4749 function @code{acc_set_cuda_stream}. When the function
4750 @code{acc_set_cuda_stream} is called, the CUDA stream that was
4751 originally associated with the @code{async} clause will be destroyed.
4752 Caution should be taken when changing the association as subsequent
4753 references to the @code{async-argument} refer to a different
4754 CUDA stream.
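
The stream behaviour described above can be sketched with directives as
follows (illustrative only; on the nvptx plugin each @code{async} queue
maps to a CUDA stream, while other devices or the host fallback execute
the same code with whatever queueing the plugin provides):

```c
#define N 1000

float x[N], y[N];

/* Two independent queues are filled asynchronously; the bare 'wait'
   directive is the rendezvous point for all queues.  */
void
async_example (void)
{
#pragma acc data create (x, y)
  {
#pragma acc parallel loop async (1)
    for (int i = 0; i < N; i++)
      x[i] = (float) i;

#pragma acc parallel loop async (2)
    for (int i = 0; i < N; i++)
      y[i] = 2.0f * (float) i;

#pragma acc wait
#pragma acc update self (x, y)
  }
}
```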
4755
4756
4757
4758 @c ---------------------------------------------------------------------
4759 @c OpenACC Library Interoperability
4760 @c ---------------------------------------------------------------------
4761
4762 @node OpenACC Library Interoperability
4763 @chapter OpenACC Library Interoperability
4764
4765 @section Introduction
4766
4767 The OpenACC library uses the CUDA Driver API, and may interact with
4768 programs that use the Runtime library directly, or another library
4769 based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
4770 "Interactions with the CUDA Driver API" in
4771 "CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
4772 Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
4773 for additional information on library interoperability.}.
4774 This chapter describes the use cases and what changes are
4775 required in order to use both the OpenACC library and the CUBLAS and Runtime
4776 libraries within a program.
4777
4778 @section First invocation: NVIDIA CUBLAS library API
4779
4780 In this first use case (see below), a function in the CUBLAS library is called
4781 prior to any of the functions in the OpenACC library. More specifically, the
4782 function @code{cublasCreate()}.
4783
4784 When invoked, the function initializes the library and allocates the
4785 hardware resources on the host and the device on behalf of the caller. Once
4786 the initialization and allocation have completed, a handle is returned to the
4787 caller. The OpenACC library also requires initialization and allocation of
4788 hardware resources. Since the CUBLAS library has already allocated the
4789 hardware resources for the device, all that is left to do is to initialize
4790 the OpenACC library and acquire the hardware resources on the host.
4791
4792 Prior to calling the OpenACC function that initializes the library and
4793 allocates the host hardware resources, you need to acquire the device number
4794 that was allocated during the call to @code{cublasCreate()}.  Invoking the
4795 runtime library function @code{cudaGetDevice()} accomplishes this. Once
4796 acquired, the device number is passed along with the device type as
4797 parameters to the OpenACC library function @code{acc_set_device_num()}.
4798
4799 Once the call to @code{acc_set_device_num()} has completed, the OpenACC
4800 library uses the context that was created during the call to
4801 @code{cublasCreate()}. In other words, both libraries will be sharing the
4802 same context.
4803
4804 @smallexample
4805 /* Create the handle */
4806 s = cublasCreate(&h);
4807 if (s != CUBLAS_STATUS_SUCCESS)
4808 @{
4809 fprintf(stderr, "cublasCreate failed %d\n", s);
4810 exit(EXIT_FAILURE);
4811 @}
4812
4813 /* Get the device number */
4814 e = cudaGetDevice(&dev);
4815 if (e != cudaSuccess)
4816 @{
4817 fprintf(stderr, "cudaGetDevice failed %d\n", e);
4818 exit(EXIT_FAILURE);
4819 @}
4820
4821 /* Initialize OpenACC library and use device 'dev' */
4822 acc_set_device_num(dev, acc_device_nvidia);
4823
4824 @end smallexample
4825 @center Use Case 1
4826
4827 @section First invocation: OpenACC library API
4828
In this second use case (see below), a function in the OpenACC library is
called prior to any of the functions in the CUBLAS library; more
specifically, the function @code{acc_set_device_num()}.
4832
4833 In the use case presented here, the function @code{acc_set_device_num()}
4834 is used to both initialize the OpenACC library and allocate the hardware
4835 resources on the host and the device. In the call to the function, the
4836 call parameters specify which device to use and what device
4837 type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
4838 is but one method to initialize the OpenACC library and allocate the
4839 appropriate hardware resources. Other methods are available through the
4840 use of environment variables and these will be discussed in the next section.
4841
Once the call to @code{acc_set_device_num()} has completed, other OpenACC
functions can be called, as seen with the multiple calls made to
@code{acc_copyin()}. In addition, calls can be made to functions in the
CUBLAS library. In this use case, a call to @code{cublasCreate()} is made
subsequent to the calls to @code{acc_copyin()}.
4847 As seen in the previous use case, a call to @code{cublasCreate()}
4848 initializes the CUBLAS library and allocates the hardware resources on the
4849 host and the device. However, since the device has already been allocated,
4850 @code{cublasCreate()} will only initialize the CUBLAS library and allocate
4851 the appropriate hardware resources on the host. The context that was created
4852 as part of the OpenACC initialization is shared with the CUBLAS library,
4853 similarly to the first use case.
4854
4855 @smallexample
4856 dev = 0;
4857
4858 acc_set_device_num(dev, acc_device_nvidia);
4859
4860 /* Copy the first set to the device */
4861 d_X = acc_copyin(&h_X[0], N * sizeof (float));
4862 if (d_X == NULL)
4863 @{
4864 fprintf(stderr, "copyin error h_X\n");
4865 exit(EXIT_FAILURE);
4866 @}
4867
4868 /* Copy the second set to the device */
4869 d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
4870 if (d_Y == NULL)
4871 @{
4872 fprintf(stderr, "copyin error h_Y1\n");
4873 exit(EXIT_FAILURE);
4874 @}
4875
4876 /* Create the handle */
4877 s = cublasCreate(&h);
4878 if (s != CUBLAS_STATUS_SUCCESS)
4879 @{
4880 fprintf(stderr, "cublasCreate failed %d\n", s);
4881 exit(EXIT_FAILURE);
4882 @}
4883
4884 /* Perform saxpy using CUBLAS library function */
4885 s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
4886 if (s != CUBLAS_STATUS_SUCCESS)
4887 @{
4888 fprintf(stderr, "cublasSaxpy failed %d\n", s);
4889 exit(EXIT_FAILURE);
4890 @}
4891
4892 /* Copy the results from the device */
4893 acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
4894
4895 @end smallexample
4896 @center Use Case 2
4897
4898 @section OpenACC library and environment variables
4899
4900 There are two environment variables associated with the OpenACC library
4901 that may be used to control the device type and device number:
4902 @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
4903 environment variables can be used as an alternative to calling
4904 @code{acc_set_device_num()}. As seen in the second use case, the device
4905 type and device number were specified using @code{acc_set_device_num()}.
If, however, the aforementioned environment variables were set, then the
call to @code{acc_set_device_num()} would not be required.
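
For example, the second use case could rely on the environment instead of
calling @code{acc_set_device_num()}. The following shell commands are a
sketch; the program name is a placeholder, and the values shown assume an
NVIDIA device with number 0:

@smallexample
export ACC_DEVICE_TYPE=nvidia
export ACC_DEVICE_NUM=0
./myprogram
@end smallexample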
4908
4909
The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
is called prior to a call to an OpenACC function, then you must call
@code{acc_set_device_num()}.@footnote{More complete information
about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC}
``Application Programming Interface'', Version 2.6.}
4917
4918
4919
4920 @c ---------------------------------------------------------------------
4921 @c OpenACC Profiling Interface
4922 @c ---------------------------------------------------------------------
4923
4924 @node OpenACC Profiling Interface
4925 @chapter OpenACC Profiling Interface
4926
4927 @section Implementation Status and Implementation-Defined Behavior
4928
4929 We're implementing the OpenACC Profiling Interface as defined by the
4930 OpenACC 2.6 specification. We're clarifying some aspects here as
4931 @emph{implementation-defined behavior}, while they're still under
4932 discussion within the OpenACC Technical Committee.
4933
4934 This implementation is tuned to keep the performance impact as low as
4935 possible for the (very common) case that the Profiling Interface is
4936 not enabled. This is relevant, as the Profiling Interface affects all
4937 the @emph{hot} code paths (in the target code, not in the offloaded
4938 code). Users of the OpenACC Profiling Interface can be expected to
4939 understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because of the
4941 @emph{runtime} (libgomp) calling into a third-party @emph{library} for
4942 every event that has been registered.
4943
4944 We're not yet accounting for the fact that @cite{OpenACC events may
4945 occur during event processing}.
4946 We just handle one case specially, as required by CUDA 9.0
4947 @command{nvprof}, that @code{acc_get_device_type}
(@ref{acc_get_device_type}) may be called from
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
callbacks.
4951
We're not yet implementing initialization via an
@code{acc_register_library} function that is either statically linked
in, or dynamically loaded via @env{LD_PRELOAD}.
4955 Initialization via @code{acc_register_library} functions dynamically
4956 loaded via the @env{ACC_PROFLIB} environment variable does work, as
4957 does directly calling @code{acc_prof_register},
4958 @code{acc_prof_unregister}, @code{acc_prof_lookup}.
4959
4960 As currently there are no inquiry functions defined, calls to
4961 @code{acc_prof_lookup} will always return @code{NULL}.
4962
4963 There aren't separate @emph{start}, @emph{stop} events defined for the
4964 event types @code{acc_ev_create}, @code{acc_ev_delete},
4965 @code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
4966 should be triggered before or after the actual device-specific call is
4967 made. We trigger them after.
4968
4969 Remarks about data provided to callbacks:
4970
4971 @table @asis
4972
4973 @item @code{acc_prof_info.event_type}
4974 It's not clear if for @emph{nested} event callbacks (for example,
4975 @code{acc_ev_enqueue_launch_start} as part of a parent compute
4976 construct), this should be set for the nested event
4977 (@code{acc_ev_enqueue_launch_start}), or if the value of the parent
4978 construct should remain (@code{acc_ev_compute_construct_start}). In
4979 this implementation, the value will generally correspond to the
4980 innermost nested event type.
4981
4982 @item @code{acc_prof_info.device_type}
4983 @itemize
4984
4985 @item
4986 For @code{acc_ev_compute_construct_start}, and in presence of an
4987 @code{if} clause with @emph{false} argument, this will still refer to
4988 the offloading device type.
4989 It's not clear if that's the expected behavior.
4990
4991 @item
4992 Complementary to the item before, for
4993 @code{acc_ev_compute_construct_end}, this is set to
4994 @code{acc_device_host} in presence of an @code{if} clause with
4995 @emph{false} argument.
4996 It's not clear if that's the expected behavior.
4997
4998 @end itemize
4999
5000 @item @code{acc_prof_info.thread_id}
5001 Always @code{-1}; not yet implemented.
5002
5003 @item @code{acc_prof_info.async}
5004 @itemize
5005
5006 @item
5007 Not yet implemented correctly for
5008 @code{acc_ev_compute_construct_start}.
5009
5010 @item
5011 In a compute construct, for host-fallback
5012 execution/@code{acc_device_host} it will always be
5013 @code{acc_async_sync}.
5014 It's not clear if that's the expected behavior.
5015
5016 @item
5017 For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
5018 it will always be @code{acc_async_sync}.
5019 It's not clear if that's the expected behavior.
5020
5021 @end itemize
5022
5023 @item @code{acc_prof_info.async_queue}
5024 There is no @cite{limited number of asynchronous queues} in libgomp.
5025 This will always have the same value as @code{acc_prof_info.async}.
5026
5027 @item @code{acc_prof_info.src_file}
5028 Always @code{NULL}; not yet implemented.
5029
5030 @item @code{acc_prof_info.func_name}
5031 Always @code{NULL}; not yet implemented.
5032
5033 @item @code{acc_prof_info.line_no}
5034 Always @code{-1}; not yet implemented.
5035
5036 @item @code{acc_prof_info.end_line_no}
5037 Always @code{-1}; not yet implemented.
5038
5039 @item @code{acc_prof_info.func_line_no}
5040 Always @code{-1}; not yet implemented.
5041
5042 @item @code{acc_prof_info.func_end_line_no}
5043 Always @code{-1}; not yet implemented.
5044
5045 @item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
5046 Relating to @code{acc_prof_info.event_type} discussed above, in this
5047 implementation, this will always be the same value as
5048 @code{acc_prof_info.event_type}.
5049
5050 @item @code{acc_event_info.*.parent_construct}
5051 @itemize
5052
5053 @item
5054 Will be @code{acc_construct_parallel} for all OpenACC compute
5055 constructs as well as many OpenACC Runtime API calls; should be the
5056 one matching the actual construct, or
5057 @code{acc_construct_runtime_api}, respectively.
5058
5059 @item
5060 Will be @code{acc_construct_enter_data} or
5061 @code{acc_construct_exit_data} when processing variable mappings
5062 specified in OpenACC @emph{declare} directives; should be
5063 @code{acc_construct_declare}.
5064
5065 @item
5066 For implicit @code{acc_ev_device_init_start},
5067 @code{acc_ev_device_init_end}, and explicit as well as implicit
5068 @code{acc_ev_alloc}, @code{acc_ev_free},
5069 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
5070 @code{acc_ev_enqueue_download_start}, and
5071 @code{acc_ev_enqueue_download_end}, will be
5072 @code{acc_construct_parallel}; should reflect the real parent
5073 construct.
5074
5075 @end itemize
5076
5077 @item @code{acc_event_info.*.implicit}
5078 For @code{acc_ev_alloc}, @code{acc_ev_free},
5079 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
5080 @code{acc_ev_enqueue_download_start}, and
5081 @code{acc_ev_enqueue_download_end}, this currently will be @code{1}
5082 also for explicit usage.
5083
5084 @item @code{acc_event_info.data_event.var_name}
5085 Always @code{NULL}; not yet implemented.
5086
5087 @item @code{acc_event_info.data_event.host_ptr}
5088 For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always
5089 @code{NULL}.
5090
5091 @item @code{typedef union acc_api_info}
5092 @dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
5093 Information}. This should obviously be @code{typedef @emph{struct}
5094 acc_api_info}.
5095
5096 @item @code{acc_api_info.device_api}
5097 Possibly not yet implemented correctly for
5098 @code{acc_ev_compute_construct_start},
5099 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
5100 will always be @code{acc_device_api_none} for these event types.
5101 For @code{acc_ev_enter_data_start}, it will be
5102 @code{acc_device_api_none} in some cases.
5103
5104 @item @code{acc_api_info.device_type}
5105 Always the same as @code{acc_prof_info.device_type}.
5106
5107 @item @code{acc_api_info.vendor}
5108 Always @code{-1}; not yet implemented.
5109
5110 @item @code{acc_api_info.device_handle}
5111 Always @code{NULL}; not yet implemented.
5112
5113 @item @code{acc_api_info.context_handle}
5114 Always @code{NULL}; not yet implemented.
5115
5116 @item @code{acc_api_info.async_handle}
5117 Always @code{NULL}; not yet implemented.
5118
5119 @end table
5120
5121 Remarks about certain event types:
5122
5123 @table @asis
5124
5125 @item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
5126 @itemize
5127
5128 @item
5129 @c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
5130 @c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
5131 @c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
5132 When a compute construct triggers implicit
5133 @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
5134 events, they currently aren't @emph{nested within} the corresponding
5135 @code{acc_ev_compute_construct_start} and
5136 @code{acc_ev_compute_construct_end}, but they're currently observed
5137 @emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do here: the standard asks us to provide a lot of
details to the @code{acc_ev_compute_construct_start} callback, but how can
that be done without (implicitly) initializing a device first?
5141
5142 @item
5143 Callbacks for these event types will not be invoked for calls to the
5144 @code{acc_set_device_type} and @code{acc_set_device_num} functions.
5145 It's not clear if they should be.
5146
5147 @end itemize
5148
5149 @item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
5150 @itemize
5151
5152 @item
5153 Callbacks for these event types will also be invoked for OpenACC
5154 @emph{host_data} constructs.
5155 It's not clear if they should be.
5156
5157 @item
5158 Callbacks for these event types will also be invoked when processing
5159 variable mappings specified in OpenACC @emph{declare} directives.
5160 It's not clear if they should be.
5161
5162 @end itemize
5163
5164 @end table
5165
5166 Callbacks for the following event types will be invoked, but dispatch
5167 and information provided therein has not yet been thoroughly reviewed:
5168
5169 @itemize
5170 @item @code{acc_ev_alloc}
5171 @item @code{acc_ev_free}
5172 @item @code{acc_ev_update_start}, @code{acc_ev_update_end}
5173 @item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
5174 @item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
5175 @end itemize
5176
5177 During device initialization, and finalization, respectively,
5178 callbacks for the following event types will not yet be invoked:
5179
5180 @itemize
5181 @item @code{acc_ev_alloc}
5182 @item @code{acc_ev_free}
5183 @end itemize
5184
5185 Callbacks for the following event types have not yet been implemented,
5186 so currently won't be invoked:
5187
5188 @itemize
5189 @item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
5190 @item @code{acc_ev_runtime_shutdown}
5191 @item @code{acc_ev_create}, @code{acc_ev_delete}
5192 @item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
5193 @end itemize
5194
5195 For the following runtime library functions, not all expected
5196 callbacks will be invoked (mostly concerning implicit device
5197 initialization):
5198
5199 @itemize
5200 @item @code{acc_get_num_devices}
5201 @item @code{acc_set_device_type}
5202 @item @code{acc_get_device_type}
5203 @item @code{acc_set_device_num}
5204 @item @code{acc_get_device_num}
5205 @item @code{acc_init}
5206 @item @code{acc_shutdown}
5207 @end itemize
5208
5209 Aside from implicit device initialization, for the following runtime
5210 library functions, no callbacks will be invoked for shared-memory
5211 offloading devices (it's not clear if they should be):
5212
5213 @itemize
5214 @item @code{acc_malloc}
5215 @item @code{acc_free}
5216 @item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
5217 @item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
5218 @item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
5219 @item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
5220 @item @code{acc_update_device}, @code{acc_update_device_async}
5221 @item @code{acc_update_self}, @code{acc_update_self_async}
5222 @item @code{acc_map_data}, @code{acc_unmap_data}
5223 @item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
5224 @item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
5225 @end itemize
5226
5227 @c ---------------------------------------------------------------------
5228 @c OpenMP-Implementation Specifics
5229 @c ---------------------------------------------------------------------
5230
5231 @node OpenMP-Implementation Specifics
5232 @chapter OpenMP-Implementation Specifics
5233
5234 @menu
5235 * Implementation-defined ICV Initialization::
5236 * OpenMP Context Selectors::
5237 * Memory allocation::
5238 @end menu
5239
5240 @node Implementation-defined ICV Initialization
5241 @section Implementation-defined ICV Initialization
5242 @cindex Implementation specific setting
5243
5244 @multitable @columnfractions .30 .70
5245 @item @var{affinity-format-var} @tab See @ref{OMP_AFFINITY_FORMAT}.
5246 @item @var{def-allocator-var} @tab See @ref{OMP_ALLOCATOR}.
5247 @item @var{max-active-levels-var} @tab See @ref{OMP_MAX_ACTIVE_LEVELS}.
5248 @item @var{dyn-var} @tab See @ref{OMP_DYNAMIC}.
5249 @item @var{nthreads-var} @tab See @ref{OMP_NUM_THREADS}.
5250 @item @var{num-devices-var} @tab Number of non-host devices found
5251 by GCC's run-time library
5252 @item @var{num-procs-var} @tab The number of CPU cores on the
5253 initial device, except that affinity settings might lead to a
5254 smaller number. On non-host devices, the value of the
5255 @var{nthreads-var} ICV.
5256 @item @var{place-partition-var} @tab See @ref{OMP_PLACES}.
5257 @item @var{run-sched-var} @tab See @ref{OMP_SCHEDULE}.
5258 @item @var{stacksize-var} @tab See @ref{OMP_STACKSIZE}.
5259 @item @var{thread-limit-var} @tab See @ref{OMP_TEAMS_THREAD_LIMIT}
5260 @item @var{wait-policy-var} @tab See @ref{OMP_WAIT_POLICY} and
5261 @ref{GOMP_SPINCOUNT}
5262 @end multitable
5263
5264 @node OpenMP Context Selectors
5265 @section OpenMP Context Selectors
5266
5267 @code{vendor} is always @code{gnu}. References are to the GCC manual.
5268
5269 @c NOTE: Only the following selectors have been implemented. To add
5270 @c additional traits for target architecture, TARGET_OMP_DEVICE_KIND_ARCH_ISA
5271 @c has to be implemented; cf. also PR target/105640.
5272 @c For offload devices, add *additionally* gcc/config/*/t-omp-device.
5273
5274 For the host compiler, @code{kind} always matches @code{host}; for the
5275 offloading architectures AMD GCN and Nvidia PTX, @code{kind} always matches
@code{gpu}. For the x86 family of computers, AMD GCN, and Nvidia PTX,
the following traits are additionally supported; while OpenMP is supported
5278 on more architectures, GCC currently does not match any @code{arch} or
5279 @code{isa} traits for those.
5280
5281 @multitable @columnfractions .65 .30
5282 @headitem @code{arch} @tab @code{isa}
5283 @item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
5284 @code{i586}, @code{i686}, @code{ia32}
5285 @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
5286 @item @code{amdgcn}, @code{gcn}
5287 @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
5288 @code{gfx803} is supported as an alias for @code{fiji}.}
5289 @item @code{nvptx}
5290 @tab See @code{-march=} in ``Nvidia PTX Options''
5291 @end multitable
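
These selectors can be used, for example, with the OpenMP
@code{declare variant} directive. The following is only a sketch; the
@code{saxpy*} function names are hypothetical:

@smallexample
/* Prototypes of the (hypothetical) specialized variants.  */
void saxpy_gcn (int n, float a, const float *x, float *y);
void saxpy_nvptx (int n, float a, const float *x, float *y);

#pragma omp declare variant (saxpy_gcn) match (device=@{arch("amdgcn")@})
#pragma omp declare variant (saxpy_nvptx) match (device=@{arch("nvptx")@})
void saxpy (int n, float a, const float *x, float *y);
@end smallexample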
5292
5293 @node Memory allocation
5294 @section Memory allocation
5295
5296 For the available predefined allocators and, as applicable, their associated
5297 predefined memory spaces and for the available traits and their default values,
5298 see @ref{OMP_ALLOCATOR}. Predefined allocators without an associated memory
5299 space use the @code{omp_default_mem_space} memory space.
5300
5301 For the memory spaces, the following applies:
5302 @itemize
5303 @item @code{omp_default_mem_space} is supported
5304 @item @code{omp_const_mem_space} maps to @code{omp_default_mem_space}
5305 @item @code{omp_low_lat_mem_space} maps to @code{omp_default_mem_space}
5306 @item @code{omp_large_cap_mem_space} maps to @code{omp_default_mem_space},
5307 unless the memkind library is available
5308 @item @code{omp_high_bw_mem_space} maps to @code{omp_default_mem_space},
5309 unless the memkind library is available
5310 @end itemize
5311
5312 On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
5313 library} (@code{libmemkind.so.0}) is available at runtime, it is used when
5314 creating memory allocators requesting
5315
5316 @itemize
5317 @item the memory space @code{omp_high_bw_mem_space}
5318 @item the memory space @code{omp_large_cap_mem_space}
5319 @item the @code{partition} trait @code{interleaved}; note that for
5320 @code{omp_large_cap_mem_space} the allocation will not be interleaved
5321 @end itemize
5322
5323 On Linux systems, where the @uref{https://github.com/numactl/numactl, numa
library} (@code{libnuma.so.1}) is available at runtime, it is used when creating
5325 memory allocators requesting
5326
5327 @itemize
5328 @item the @code{partition} trait @code{nearest}, except when both the
5329 libmemkind library is available and the memory space is either
5330 @code{omp_large_cap_mem_space} or @code{omp_high_bw_mem_space}
5331 @end itemize
5332
5333 Note that the numa library will round up the allocation size to a multiple of
5334 the system page size; therefore, consider using it only with large data or
5335 by sharing allocations via the @code{pool_size} trait. Furthermore, the Linux
5336 kernel does not guarantee that an allocation will always be on the nearest NUMA
5337 node nor that after reallocation the same node will be used. Note additionally
5338 that, on Linux, the default setting of the memory placement policy is to use the
5339 current node; therefore, unless the memory placement policy has been overridden,
5340 the @code{partition} trait @code{environment} (the default) will be effectively
5341 a @code{nearest} allocation.
5342
5343 Additional notes regarding the traits:
5344 @itemize
5345 @item The @code{pinned} trait is unsupported.
5346 @item The default for the @code{pool_size} trait is no pool and for every
5347 (re)allocation the associated library routine is called, which might
5348 internally use a memory pool.
5349 @item For the @code{partition} trait, the partition part size will be the same
5350 as the requested size (i.e. @code{interleaved} or @code{blocked} has no
5351 effect), except for @code{interleaved} when the memkind library is
5352 available. Furthermore, for @code{nearest} and unless the numa library
is available, the memory might not be on the same NUMA node as the thread
5354 that allocated the memory; on Linux, this is in particular the case when
5355 the memory placement policy is set to preferred.
@item The @code{access} trait has no effect; memory is always
      accessible by all threads.
5358 @item The @code{sync_hint} trait has no effect.
5359 @end itemize
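
As a sketch of how the @code{partition} trait discussed above can be
requested from user code, using the OpenMP 5.x allocator routines
(@code{n} is a placeholder for the element count):

@smallexample
omp_alloctrait_t traits[] = @{ @{ omp_atk_partition, omp_atv_nearest @} @};
omp_allocator_handle_t al
  = omp_init_allocator (omp_default_mem_space, 1, traits);
double *p = (double *) omp_alloc (n * sizeof (double), al);
/* ... use p ... */
omp_free (p, al);
omp_destroy_allocator (al);
@end smallexample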
5360
5361 @c ---------------------------------------------------------------------
5362 @c Offload-Target Specifics
5363 @c ---------------------------------------------------------------------
5364
5365 @node Offload-Target Specifics
5366 @chapter Offload-Target Specifics
5367
The following sections present notes on the offload-target specifics.
5369
5370 @menu
5371 * AMD Radeon::
5372 * nvptx::
5373 @end menu
5374
5375 @node AMD Radeon
5376 @section AMD Radeon (GCN)
5377
5378 On the hardware side, there is the hierarchy (fine to coarse):
5379 @itemize
5380 @item work item (thread)
5381 @item wavefront
5382 @item work group
5383 @item compute unit (CU)
5384 @end itemize
5385
5386 All OpenMP and OpenACC levels are used, i.e.
5387 @itemize
5388 @item OpenMP's simd and OpenACC's vector map to work items (thread)
5389 @item OpenMP's threads (``parallel'') and OpenACC's workers map
5390 to wavefronts
5391 @item OpenMP's teams and OpenACC's gang use a threadpool with the
5392 size of the number of teams or gangs, respectively.
5393 @end itemize
5394
5395 The used sizes are
5396 @itemize
5397 @item Number of teams is the specified @code{num_teams} (OpenMP) or
5398 @code{num_gangs} (OpenACC) or otherwise the number of CU. It is limited
5399 by two times the number of CU.
@item Number of wavefronts is 4 for gfx900 and 16 otherwise;
      @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
      override this if smaller.
5403 @item The wavefront has 102 scalars and 64 vectors
5404 @item Number of workitems is always 64
5405 @item The hardware permits maximally 40 workgroups/CU and
5406 16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
@item 80 scalar registers and 24 vector registers in non-kernel functions
5408 (the chosen procedure-calling API).
5409 @item For the kernel itself: as many as register pressure demands (number of
5410 teams and number of threads, scaled down if registers are exhausted)
5411 @end itemize
5412
Implementation remarks:
5414 @itemize
5415 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
5416 using the C library @code{printf} functions and the Fortran
5417 @code{print}/@code{write} statements.
5418 @item Reverse offload regions (i.e. @code{target} regions with
5419 @code{device(ancestor:1)}) are processed serially per @code{target} region
5420 such that the next reverse offload region is only executed after the previous
5421 one returned.
5422 @item OpenMP code that has a @code{requires} directive with
5423 @code{unified_shared_memory} will remove any GCN device from the list of
5424 available devices (``host fallback'').
5425 @item The available stack size can be changed using the @code{GCN_STACK_SIZE}
5426 environment variable; the default is 32 kiB per thread.
5427 @end itemize
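
The device-side I/O mentioned above can be exercised with an ordinary
@code{printf} call; a minimal sketch:

@smallexample
#include <stdio.h>

int
main (void)
@{
  #pragma omp target
    printf ("Hello from the offload device\n");
  return 0;
@}
@end smallexample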
5428
5429
5430
5431 @node nvptx
5432 @section nvptx
5433
5434 On the hardware side, there is the hierarchy (fine to coarse):
5435 @itemize
5436 @item thread
5437 @item warp
5438 @item thread block
5439 @item streaming multiprocessor
5440 @end itemize
5441
5442 All OpenMP and OpenACC levels are used, i.e.
5443 @itemize
5444 @item OpenMP's simd and OpenACC's vector map to threads
5445 @item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
5446 @item OpenMP's teams and OpenACC's gang use a threadpool with the
5447 size of the number of teams or gangs, respectively.
5448 @end itemize
5449
5450 The used sizes are
5451 @itemize
5452 @item The @code{warp_size} is always 32
5453 @item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
5454 @item The number of teams is limited by the number of blocks the device can
5455 host simultaneously.
5456 @end itemize
5457
Additional information can be obtained by setting the environment variable
@code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for launch
5460 parameters).
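
For example, the launch parameters can be extracted as follows; the program
name is a placeholder:

@smallexample
GOMP_DEBUG=1 ./myprogram 2>&1 | grep -i 'kernel.*launch'
@end smallexample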
5461
GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA,
which caches the JIT-compiled code in the user's directory (see the CUDA
documentation; this can be tuned by the environment variables
@code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}).
5465
5466 Note: While PTX ISA is generic, the @code{-mptx=} and @code{-march=} commandline
5467 options still affect the used PTX ISA code and, thus, the requirements on
5468 CUDA version and hardware.
5469
Implementation remarks:
5471 @itemize
5472 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
5473 using the C library @code{printf} functions. Note that the Fortran
5474 @code{print}/@code{write} statements are not supported, yet.
@item Compiling OpenMP code that contains @code{requires reverse_offload}
requires at least @code{-march=sm_35}; compiling for @code{-march=sm_30}
is not supported.
@item For code containing reverse offload (i.e. @code{target} regions with
@code{device(ancestor:1)}), there is a slight performance penalty
for @emph{all} target regions, consisting mostly of shutdown delay.
Per device, reverse offload regions are processed serially such that
the next reverse offload region is only executed after the previous
one returned.
5484 @item OpenMP code that has a @code{requires} directive with
5485 @code{unified_shared_memory} will remove any nvptx device from the
5486 list of available devices (``host fallback'').
5487 @item The default per-warp stack size is 128 kiB; see also @code{-msoft-stack}
5488 in the GCC manual.
5489 @item The OpenMP routines @code{omp_target_memcpy_rect} and
5490 @code{omp_target_memcpy_rect_async} and the @code{target update}
5491 directive for non-contiguous list items will use the 2D and 3D
5492 memory-copy functions of the CUDA library. Higher dimensions will
5493 call those functions in a loop and are therefore supported.
5494 @end itemize
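
A sketch of a 2D copy using @code{omp_target_memcpy_rect}, copying a 2x3
subrectangle out of a 4x5 host matrix into a 2x3 device buffer; the variable
names and offsets are illustrative:

@smallexample
int dev = omp_get_default_device ();
float h_mat[4][5];                      /* host matrix (filled elsewhere) */
float *d_mat                            /* device buffer for 2x3 floats */
  = (float *) omp_target_alloc (2 * 3 * sizeof (float), dev);

size_t volume[2]   = @{ 2, 3 @};          /* rows, columns to copy */
size_t dst_off[2]  = @{ 0, 0 @};
size_t src_off[2]  = @{ 1, 1 @};          /* skip first row and column */
size_t dst_dims[2] = @{ 2, 3 @};
size_t src_dims[2] = @{ 4, 5 @};
omp_target_memcpy_rect (d_mat, h_mat, sizeof (float), 2, volume,
                        dst_off, src_off, dst_dims, src_dims,
                        dev, omp_get_initial_device ());
@end smallexample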
5495
5496
5497 @c ---------------------------------------------------------------------
5498 @c The libgomp ABI
5499 @c ---------------------------------------------------------------------
5500
5501 @node The libgomp ABI
5502 @chapter The libgomp ABI
5503
5504 The following sections present notes on the external ABI as
5505 presented by libgomp. Only maintainers should need them.
5506
5507 @menu
5508 * Implementing MASTER construct::
5509 * Implementing CRITICAL construct::
5510 * Implementing ATOMIC construct::
5511 * Implementing FLUSH construct::
5512 * Implementing BARRIER construct::
5513 * Implementing THREADPRIVATE construct::
5514 * Implementing PRIVATE clause::
5515 * Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
5516 * Implementing REDUCTION clause::
5517 * Implementing PARALLEL construct::
5518 * Implementing FOR construct::
5519 * Implementing ORDERED construct::
5520 * Implementing SECTIONS construct::
5521 * Implementing SINGLE construct::
5522 * Implementing OpenACC's PARALLEL construct::
5523 @end menu
5524
5525
5526 @node Implementing MASTER construct
5527 @section Implementing MASTER construct
5528
5529 @smallexample
5530 if (omp_get_thread_num () == 0)
5531 block
5532 @end smallexample
5533
Alternatively, we generate two copies of the parallel subfunction
5535 and only include this in the version run by the primary thread.
5536 Surely this is not worthwhile though...
5537
5538
5539
5540 @node Implementing CRITICAL construct
5541 @section Implementing CRITICAL construct
5542
5543 Without a specified name,
5544
5545 @smallexample
5546 void GOMP_critical_start (void);
5547 void GOMP_critical_end (void);
5548 @end smallexample
5549
5550 so that we don't get COPY relocations from libgomp to the main
5551 application.
5552
5553 With a specified name, use omp_set_lock and omp_unset_lock with
5554 name being transformed into a variable declared like
5555
5556 @smallexample
5557 omp_lock_t gomp_critical_user_<name> __attribute__((common))
5558 @end smallexample
5559
5560 Ideally the ABI would specify that all zero is a valid unlocked
5561 state, and so we wouldn't need to initialize this at
5562 startup.



@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that we could add

@smallexample
void GOMP_atomic_enter (void)
void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.
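
For the common case the lowering is just a single builtin call.  A
minimal sketch, assuming the target provides the @code{__sync}
builtins, for the statement @code{counter += v} under
@code{#pragma omp atomic} (function name illustrative):

```c
/* One atomic read-modify-write; no lock object is needed
   when the target supports the __sync builtins.  */
int counter;

void
atomic_add (int v)
{
  __sync_fetch_and_add (&counter, v);
}
```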



@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.



@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In _most_ cases we can map this directly to @code{__thread}.  Except
that OMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to .ctors.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.
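
The plain-data case is trivial; a sketch of mapping
@code{#pragma omp threadprivate(tp)} directly to @code{__thread}, as
described above (accessor names are illustrative; C++ objects with
constructors would need the .ctors-like machinery instead):

```c
/* Each thread sees its own copy of tp; no runtime support needed.  */
__thread int tp;

void
set_tp (int v)
{
  tp = v;
}

int
get_tp (void)
{
  return tp;
}
```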



@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.
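
A sketch of the PARALLEL case (all names illustrative): the generated
subfunction simply declares a fresh local @code{x}, shadowing the
enclosing variable, which is therefore never touched by the body.

```c
int x = 42;            /* the variable named in PRIVATE(x) */

void
subfunction (void *data)
{
  int x = 0;           /* PRIVATE(x): a new, unrelated local copy */
  x++;                 /* the body only ever sees this local */
  (void) data;
  (void) x;
}
```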



@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and "small" structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
  body;
@end smallexample

which becomes

@smallexample
@{
  int x = x, y;

  // for stuff

  if (i == n)
    y = y;
@}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.



@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the primary thread iterates over the
array to collect the values.
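
A plain-C stand-in (no real threads, and all names and the fixed team
size are illustrative) for the REDUCTION(+:sum) scheme just described:
each "thread" stores its partial result into an array slot indexed by
its team id, and the primary thread combines the slots after the
barrier.

```c
#define NTHREADS 4

struct reduction_data
{
  int partial[NTHREADS];       /* one slot per team member */
};

/* What each team member runs: accumulate privately, then store
   the final value into the shared array before the barrier.  */
void
thread_body (struct reduction_data *d, int team_id)
{
  int sum = 0;                 /* the private accumulator */
  for (int i = team_id; i < 100; i += NTHREADS)
    sum += i;
  d->partial[team_id] = sum;
}

/* What the primary thread runs after the barrier.  */
int
combine (struct reduction_data *d)
{
  int total = 0;
  for (int t = 0; t < NTHREADS; t++)
    total += d->partial[t];
  return total;
}
```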


@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
#pragma omp parallel
@{
  body;
@}
@end smallexample

becomes

@smallexample
void subfunction (void *data)
@{
  use data;
  body;
@}

setup data;
GOMP_parallel_start (subfunction, &data, num_threads);
subfunction (&data);
GOMP_parallel_end ();
@end smallexample

@smallexample
void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.



@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
#pragma omp parallel for
for (i = lb; i <= ub; i++)
  body;
@end smallexample

becomes

@smallexample
void subfunction (void *data)
@{
  long _s0, _e0;
  while (GOMP_loop_static_next (&_s0, &_e0))
    @{
      long _e1 = _e0, i;
      for (i = _s0; i < _e1; i++)
        body;
    @}
  GOMP_loop_end_nowait ();
@}

GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
subfunction (NULL);
GOMP_parallel_end ();
@end smallexample
@smallexample
#pragma omp for schedule(runtime)
for (i = 0; i < n; i++)
  body;
@end smallexample

becomes

@smallexample
@{
  long i, _s0, _e0;
  if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
    do @{
      long _e1 = _e0;
      for (i = _s0; i < _e1; i++)
        body;
    @} while (GOMP_loop_runtime_next (&_s0, &_e0));
  GOMP_loop_end ();
@}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we can also not if we like.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations.  Which would mean that we wouldn't need to call any
of these routines.
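
The per-thread arithmetic can be sketched as below (function name and
the particular split are illustrative; this variant hands the leftover
iterations to the lowest-numbered threads, which is one of several
valid choices): each thread computes its own half-open [start, end)
range from its team id alone, with no work-sharing context and no
library calls.

```c
void
static_bounds (long n, int team_id, int nthreads,
               long *start, long *end)
{
  long chunk = n / nthreads;
  long rest  = n % nthreads;   /* leftover iterations */
  /* The first 'rest' threads each take one extra iteration.  */
  *start = (long) team_id * chunk + (team_id < rest ? team_id : rest);
  *end   = *start + chunk + (team_id < rest ? 1 : 0);
}
```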

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...



@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
void GOMP_ordered_start (void)
void GOMP_ordered_end (void)
@end smallexample



@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
#pragma omp sections
@{
  #pragma omp section
  stmt1;
  #pragma omp section
  stmt2;
  #pragma omp section
  stmt3;
@}
@end smallexample

becomes

@smallexample
for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
  switch (i)
    @{
    case 1:
      stmt1;
      break;
    case 2:
      stmt2;
      break;
    case 3:
      stmt3;
      break;
    @}
GOMP_barrier ();
@end smallexample


@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
#pragma omp single
@{
  body;
@}
@end smallexample

becomes

@smallexample
if (GOMP_single_start ())
  body;
GOMP_barrier ();
@end smallexample

while

@smallexample
#pragma omp single copyprivate(x)
  body;
@end smallexample

becomes

@smallexample
datap = GOMP_single_copy_start ();
if (datap == NULL)
  @{
    body;
    data.x = x;
    GOMP_single_copy_end (&data);
  @}
else
  x = datap->x;
GOMP_barrier ();
@end smallexample



@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

@smallexample
void GOACC_parallel ()
@end smallexample



@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
"openacc", or "openmp", or both to the keywords field in the bug
report, as appropriate.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye