1 \input texinfo @c -*-texinfo-*-
2
3 @c %**start of header
4 @setfilename libgomp.info
5 @settitle GNU libgomp
6 @c %**end of header
7
8
9 @copying
10 Copyright @copyright{} 2006-2021 Free Software Foundation, Inc.
11
12 Permission is granted to copy, distribute and/or modify this document
13 under the terms of the GNU Free Documentation License, Version 1.3 or
14 any later version published by the Free Software Foundation; with the
15 Invariant Sections being ``Funding Free Software'', the Front-Cover
16 texts being (a) (see below), and with the Back-Cover Texts being (b)
17 (see below). A copy of the license is included in the section entitled
18 ``GNU Free Documentation License''.
19
20 (a) The FSF's Front-Cover Text is:
21
22 A GNU Manual
23
24 (b) The FSF's Back-Cover Text is:
25
26 You have freedom to copy and modify this GNU Manual, like GNU
27 software. Copies published by the Free Software Foundation raise
28 funds for GNU development.
29 @end copying
30
31 @ifinfo
32 @dircategory GNU Libraries
33 @direntry
34 * libgomp: (libgomp). GNU Offloading and Multi Processing Runtime Library.
35 @end direntry
36
37 This manual documents libgomp, the GNU Offloading and Multi Processing
38 Runtime library. This is the GNU implementation of the OpenMP and
39 OpenACC APIs for parallel and accelerator programming in C/C++ and
40 Fortran.
41
42 Published by the Free Software Foundation
43 51 Franklin Street, Fifth Floor
44 Boston, MA 02110-1301 USA
45
46 @insertcopying
47 @end ifinfo
48
49
50 @setchapternewpage odd
51
52 @titlepage
53 @title GNU Offloading and Multi Processing Runtime Library
54 @subtitle The GNU OpenMP and OpenACC Implementation
55 @page
56 @vskip 0pt plus 1filll
57 @comment For the @value{version-GCC} Version*
58 @sp 1
59 Published by the Free Software Foundation @*
60 51 Franklin Street, Fifth Floor@*
61 Boston, MA 02110-1301, USA@*
62 @sp 1
63 @insertcopying
64 @end titlepage
65
66 @summarycontents
67 @contents
68 @page
69
70
71 @node Top, Enabling OpenMP
72 @top Introduction
73 @cindex Introduction
74
75 This manual documents the usage of libgomp, the GNU Offloading and
76 Multi Processing Runtime Library. This includes the GNU
77 implementation of the @uref{https://www.openmp.org, OpenMP} Application
78 Programming Interface (API) for multi-platform shared-memory parallel
79 programming in C/C++ and Fortran, and the GNU implementation of the
80 @uref{https://www.openacc.org, OpenACC} Application Programming
81 Interface (API) for offloading of code to accelerator devices in C/C++
82 and Fortran.
83
84 Originally, libgomp implemented the GNU OpenMP Runtime Library.  Based
85 on this, support for OpenACC and offloading (both via OpenACC and OpenMP
86 4's @code{target} construct) was added later, and the library's name
87 changed to GNU Offloading and Multi Processing Runtime Library.
88
89
90
91 @comment
92 @comment When you add a new menu item, please keep the right hand
93 @comment aligned to the same column. Do not use tabs. This provides
94 @comment better formatting.
95 @comment
96 @menu
97 * Enabling OpenMP:: How to enable OpenMP for your applications.
98 * OpenMP Runtime Library Routines: Runtime Library Routines.
99 The OpenMP runtime application programming
100 interface.
101 * OpenMP Environment Variables: Environment Variables.
102 Influencing OpenMP runtime behavior with
103 environment variables.
104 * Enabling OpenACC:: How to enable OpenACC for your
105 applications.
106 * OpenACC Runtime Library Routines:: The OpenACC runtime application
107 programming interface.
108 * OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
109 environment variables.
110 * CUDA Streams Usage:: Notes on the implementation of
111 asynchronous operations.
112 * OpenACC Library Interoperability:: OpenACC library interoperability with the
113 NVIDIA CUBLAS library.
114 * OpenACC Profiling Interface::
115 * The libgomp ABI:: Notes on the external ABI presented by libgomp.
116 * Reporting Bugs:: How to report bugs in the GNU Offloading and
117 Multi Processing Runtime Library.
118 * Copying:: GNU general public license says
119 how you can copy and share libgomp.
120 * GNU Free Documentation License::
121 How you can copy and share this manual.
122 * Funding:: How to help assure continued work for free
123 software.
124 * Library Index:: Index of this documentation.
125 @end menu
126
127
128 @c ---------------------------------------------------------------------
129 @c Enabling OpenMP
130 @c ---------------------------------------------------------------------
131
132 @node Enabling OpenMP
133 @chapter Enabling OpenMP
134
135 To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
136 flag @command{-fopenmp} must be specified. This enables the OpenMP directive
137 @code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
138 @code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
139 @code{!$} conditional compilation sentinels in free form and @code{c$},
140 @code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
141 arranges for automatic linking of the OpenMP runtime library
142 (@ref{Runtime Library Routines}).
143
144 A complete description of all OpenMP directives accepted may be found in
145 the @uref{https://www.openmp.org, OpenMP Application Program Interface} manual,
146 version 4.5.
147
148
149 @c ---------------------------------------------------------------------
150 @c OpenMP Runtime Library Routines
151 @c ---------------------------------------------------------------------
152
153 @node Runtime Library Routines
154 @chapter OpenMP Runtime Library Routines
155
156 The runtime routines described here are defined by Section 3 of the OpenMP
157 specification in version 4.5.  The routines are structured in the following
158 four parts:
159
160 @menu
161 Control threads, processors and the parallel environment. They have C
162 linkage, and do not throw exceptions.
163
164 * omp_get_active_level:: Number of active parallel regions
165 * omp_get_ancestor_thread_num:: Ancestor thread ID
166 * omp_get_cancellation:: Whether cancellation support is enabled
167 * omp_get_default_device:: Get the default device for target regions
168 * omp_get_device_num:: Get device that current thread is running on
169 * omp_get_dynamic:: Dynamic teams setting
170 * omp_get_initial_device:: Device number of host device
171 * omp_get_level:: Number of parallel regions
172 * omp_get_max_active_levels:: Current maximum number of active regions
173 * omp_get_max_task_priority:: Maximum task priority value that can be set
174 * omp_get_max_threads:: Maximum number of threads of parallel region
175 * omp_get_nested:: Nested parallel regions
176 * omp_get_num_devices:: Number of target devices
177 * omp_get_num_procs:: Number of processors online
178 * omp_get_num_teams:: Number of teams
179 * omp_get_num_threads:: Size of the active team
180 * omp_get_proc_bind::            Whether threads may be moved between CPUs
181 * omp_get_schedule:: Obtain the runtime scheduling method
182 * omp_get_supported_active_levels:: Maximum number of active regions supported
183 * omp_get_team_num:: Get team number
184 * omp_get_team_size:: Number of threads in a team
185 * omp_get_thread_limit:: Maximum number of threads
186 * omp_get_thread_num:: Current thread ID
187 * omp_in_parallel:: Whether a parallel region is active
188 * omp_in_final:: Whether in final or included task region
189 * omp_is_initial_device:: Whether executing on the host device
190 * omp_set_default_device:: Set the default device for target regions
191 * omp_set_dynamic:: Enable/disable dynamic teams
192 * omp_set_max_active_levels:: Limits the number of active parallel regions
193 * omp_set_nested:: Enable/disable nested parallel regions
194 * omp_set_num_threads:: Set upper team size limit
195 * omp_set_schedule:: Set the runtime scheduling method
196
197 Initialize, set, test, unset and destroy simple and nested locks.
198
199 * omp_init_lock:: Initialize simple lock
200 * omp_set_lock:: Wait for and set simple lock
201 * omp_test_lock:: Test and set simple lock if available
202 * omp_unset_lock:: Unset simple lock
203 * omp_destroy_lock:: Destroy simple lock
204 * omp_init_nest_lock:: Initialize nested lock
205 * omp_set_nest_lock::           Wait for and set nested lock
206 * omp_test_nest_lock:: Test and set nested lock if available
207 * omp_unset_nest_lock:: Unset nested lock
208 * omp_destroy_nest_lock:: Destroy nested lock
209
210 Portable, thread-based, wall clock timer.
211
212 * omp_get_wtick:: Get timer precision.
213 * omp_get_wtime:: Elapsed wall clock time.
214
215 Support for event objects.
216
217 * omp_fulfill_event:: Fulfill and destroy an OpenMP event.
218 @end menu
219
220
221
222 @node omp_get_active_level
223 @section @code{omp_get_active_level} -- Number of active parallel regions
224 @table @asis
225 @item @emph{Description}:
226 This function returns the nesting level of the active parallel blocks
227 that enclose the call from which it is invoked.
228
229 @item @emph{C/C++}
230 @multitable @columnfractions .20 .80
231 @item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
232 @end multitable
233
234 @item @emph{Fortran}:
235 @multitable @columnfractions .20 .80
236 @item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
237 @end multitable
238
239 @item @emph{See also}:
240 @ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
241
242 @item @emph{Reference}:
243 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
244 @end table
245
246
247
248 @node omp_get_ancestor_thread_num
249 @section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
250 @table @asis
251 @item @emph{Description}:
252 This function returns the thread identification number for the given
253 nesting level of the current thread.  For values of @var{level} outside
254 the range zero to @code{omp_get_level}, -1 is returned; if @var{level} is
255 @code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.
256
257 @item @emph{C/C++}
258 @multitable @columnfractions .20 .80
259 @item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
260 @end multitable
261
262 @item @emph{Fortran}:
263 @multitable @columnfractions .20 .80
264 @item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
265 @item @tab @code{integer level}
266 @end multitable
267
268 @item @emph{See also}:
269 @ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
270
271 @item @emph{Reference}:
272 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
273 @end table
274
275
276
277 @node omp_get_cancellation
278 @section @code{omp_get_cancellation} -- Whether cancellation support is enabled
279 @table @asis
280 @item @emph{Description}:
281 This function returns @code{true} if cancellation is activated, @code{false}
282 otherwise. Here, @code{true} and @code{false} represent their language-specific
283 counterparts.  Unless @env{OMP_CANCELLATION} is set to @code{true},
284 cancellation is deactivated.
285
286 @item @emph{C/C++}:
287 @multitable @columnfractions .20 .80
288 @item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
289 @end multitable
290
291 @item @emph{Fortran}:
292 @multitable @columnfractions .20 .80
293 @item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
294 @end multitable
295
296 @item @emph{See also}:
297 @ref{OMP_CANCELLATION}
298
299 @item @emph{Reference}:
300 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
301 @end table
302
303
304
305 @node omp_get_default_device
306 @section @code{omp_get_default_device} -- Get the default device for target regions
307 @table @asis
308 @item @emph{Description}:
309 Get the default device for target regions without a device clause.
310
311 @item @emph{C/C++}:
312 @multitable @columnfractions .20 .80
313 @item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
314 @end multitable
315
316 @item @emph{Fortran}:
317 @multitable @columnfractions .20 .80
318 @item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
319 @end multitable
320
321 @item @emph{See also}:
322 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
323
324 @item @emph{Reference}:
325 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
326 @end table
327
328
329
330 @node omp_get_dynamic
331 @section @code{omp_get_dynamic} -- Dynamic teams setting
332 @table @asis
333 @item @emph{Description}:
334 This function returns @code{true} if dynamic adjustment of the number of threads is enabled, @code{false} otherwise.
335 Here, @code{true} and @code{false} represent their language-specific
336 counterparts.
337
338 The dynamic team setting may be initialized at startup by the
339 @env{OMP_DYNAMIC} environment variable or at runtime using
340 @code{omp_set_dynamic}. If undefined, dynamic adjustment is
341 disabled by default.
342
343 @item @emph{C/C++}:
344 @multitable @columnfractions .20 .80
345 @item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
346 @end multitable
347
348 @item @emph{Fortran}:
349 @multitable @columnfractions .20 .80
350 @item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
351 @end multitable
352
353 @item @emph{See also}:
354 @ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
355
356 @item @emph{Reference}:
357 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
358 @end table
359
360
361
362 @node omp_get_initial_device
363 @section @code{omp_get_initial_device} -- Return device number of initial device
364 @table @asis
365 @item @emph{Description}:
366 This function returns a device number that represents the host device.
367 For OpenMP 5.1, this must be equal to the value returned by the
368 @code{omp_get_num_devices} function.
369
370 @item @emph{C/C++}
371 @multitable @columnfractions .20 .80
372 @item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
373 @end multitable
374
375 @item @emph{Fortran}:
376 @multitable @columnfractions .20 .80
377 @item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
378 @end multitable
379
380 @item @emph{See also}:
381 @ref{omp_get_num_devices}
382
383 @item @emph{Reference}:
384 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
385 @end table
386
387
388
389 @node omp_get_device_num
390 @section @code{omp_get_device_num} -- Return device number of current device
391 @table @asis
392 @item @emph{Description}:
393 This function returns a device number that represents the device that the
394 current thread is executing on. For OpenMP 5.0, this must be equal to the
395 value returned by the @code{omp_get_initial_device} function when called
396 from the host.
397
398 @item @emph{C/C++}
399 @multitable @columnfractions .20 .80
400 @item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
401 @end multitable
402
403 @item @emph{Fortran}:
404 @multitable @columnfractions .20 .80
405 @item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
406 @end multitable
407
408 @item @emph{See also}:
409 @ref{omp_get_initial_device}
410
411 @item @emph{Reference}:
412 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
413 @end table
414
415
416
417 @node omp_get_level
418 @section @code{omp_get_level} -- Obtain the current nesting level
419 @table @asis
420 @item @emph{Description}:
421 This function returns the nesting level of the parallel blocks
422 that enclose the call from which it is invoked.
423
424 @item @emph{C/C++}
425 @multitable @columnfractions .20 .80
426 @item @emph{Prototype}: @tab @code{int omp_get_level(void);}
427 @end multitable
428
429 @item @emph{Fortran}:
430 @multitable @columnfractions .20 .80
431 @item @emph{Interface}: @tab @code{integer function omp_get_level()}
432 @end multitable
433
434 @item @emph{See also}:
435 @ref{omp_get_active_level}
436
437 @item @emph{Reference}:
438 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
439 @end table
440
441
442
443 @node omp_get_max_active_levels
444 @section @code{omp_get_max_active_levels} -- Current maximum number of active regions
445 @table @asis
446 @item @emph{Description}:
447 This function obtains the maximum allowed number of nested, active parallel regions.
448
449 @item @emph{C/C++}
450 @multitable @columnfractions .20 .80
451 @item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
452 @end multitable
453
454 @item @emph{Fortran}:
455 @multitable @columnfractions .20 .80
456 @item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
457 @end multitable
458
459 @item @emph{See also}:
460 @ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
461
462 @item @emph{Reference}:
463 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
464 @end table
465
466
467 @node omp_get_max_task_priority
468 @section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
470 @table @asis
471 @item @emph{Description}:
472 This function obtains the maximum allowed priority number for tasks.
473
474 @item @emph{C/C++}
475 @multitable @columnfractions .20 .80
476 @item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
477 @end multitable
478
479 @item @emph{Fortran}:
480 @multitable @columnfractions .20 .80
481 @item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
482 @end multitable
483
484 @item @emph{Reference}:
485 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
486 @end table
487
488
489 @node omp_get_max_threads
490 @section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
491 @table @asis
492 @item @emph{Description}:
493 Return the maximum number of threads that would be used for a parallel
494 region without a @code{num_threads} clause.
495
496 @item @emph{C/C++}:
497 @multitable @columnfractions .20 .80
498 @item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
499 @end multitable
500
501 @item @emph{Fortran}:
502 @multitable @columnfractions .20 .80
503 @item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
504 @end multitable
505
506 @item @emph{See also}:
507 @ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
508
509 @item @emph{Reference}:
510 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
511 @end table
512
513
514
515 @node omp_get_nested
516 @section @code{omp_get_nested} -- Nested parallel regions
517 @table @asis
518 @item @emph{Description}:
519 This function returns @code{true} if nested parallel regions are
520 enabled, @code{false} otherwise. Here, @code{true} and @code{false}
521 represent their language-specific counterparts.
522
523 The state of nested parallel regions at startup depends on several
524 environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
525 and is set to greater than one, then nested parallel regions will be
526 enabled. If not defined, then the value of the @env{OMP_NESTED}
527 environment variable will be followed if defined. If neither are
528 defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
529 are defined with a list of more than one value, then nested parallel
530 regions are enabled. If none of these are defined, then nested parallel
531 regions are disabled by default.
532
533 Nested parallel regions can be enabled or disabled at runtime using
534 @code{omp_set_nested}, or by setting the maximum number of nested
535 regions with @code{omp_set_max_active_levels} to one to disable, or
536 above one to enable.
537
538 @item @emph{C/C++}:
539 @multitable @columnfractions .20 .80
540 @item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
541 @end multitable
542
543 @item @emph{Fortran}:
544 @multitable @columnfractions .20 .80
545 @item @emph{Interface}: @tab @code{logical function omp_get_nested()}
546 @end multitable
547
548 @item @emph{See also}:
549 @ref{omp_set_max_active_levels}, @ref{omp_set_nested},
550 @ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
551
552 @item @emph{Reference}:
553 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
554 @end table
555
556
557
558 @node omp_get_num_devices
559 @section @code{omp_get_num_devices} -- Number of target devices
560 @table @asis
561 @item @emph{Description}:
562 Returns the number of target devices.
563
564 @item @emph{C/C++}:
565 @multitable @columnfractions .20 .80
566 @item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
567 @end multitable
568
569 @item @emph{Fortran}:
570 @multitable @columnfractions .20 .80
571 @item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
572 @end multitable
573
574 @item @emph{Reference}:
575 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
576 @end table
577
578
579
580 @node omp_get_num_procs
581 @section @code{omp_get_num_procs} -- Number of processors online
582 @table @asis
583 @item @emph{Description}:
584 Returns the number of processors that are online on the current device.
585
586 @item @emph{C/C++}:
587 @multitable @columnfractions .20 .80
588 @item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
589 @end multitable
590
591 @item @emph{Fortran}:
592 @multitable @columnfractions .20 .80
593 @item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
594 @end multitable
595
596 @item @emph{Reference}:
597 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
598 @end table
599
600
601
602 @node omp_get_num_teams
603 @section @code{omp_get_num_teams} -- Number of teams
604 @table @asis
605 @item @emph{Description}:
606 Returns the number of teams in the current teams region.
607
608 @item @emph{C/C++}:
609 @multitable @columnfractions .20 .80
610 @item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
611 @end multitable
612
613 @item @emph{Fortran}:
614 @multitable @columnfractions .20 .80
615 @item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
616 @end multitable
617
618 @item @emph{Reference}:
619 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
620 @end table
621
622
623
624 @node omp_get_num_threads
625 @section @code{omp_get_num_threads} -- Size of the active team
626 @table @asis
627 @item @emph{Description}:
628 Returns the number of threads in the current team. In a sequential section of
629 the program @code{omp_get_num_threads} returns 1.
630
631 The default team size may be initialized at startup by the
632 @env{OMP_NUM_THREADS} environment variable. At runtime, the size
633 of the current team may be set either by the @code{num_threads}
634 clause or by @code{omp_set_num_threads}.  If none of the above were
635 used to define a specific value and @env{OMP_DYNAMIC} is disabled,
636 one thread per CPU online is used.
637
638 @item @emph{C/C++}:
639 @multitable @columnfractions .20 .80
640 @item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
641 @end multitable
642
643 @item @emph{Fortran}:
644 @multitable @columnfractions .20 .80
645 @item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
646 @end multitable
647
648 @item @emph{See also}:
649 @ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
650
651 @item @emph{Reference}:
652 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
653 @end table
654
655
656
657 @node omp_get_proc_bind
658 @section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
659 @table @asis
660 @item @emph{Description}:
661 This function returns the currently active thread affinity policy, which is
662 set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
663 @code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
664 @code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
665 where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.
666
667 @item @emph{C/C++}:
668 @multitable @columnfractions .20 .80
669 @item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
670 @end multitable
671
672 @item @emph{Fortran}:
673 @multitable @columnfractions .20 .80
674 @item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
675 @end multitable
676
677 @item @emph{See also}:
678 @ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}
679
680 @item @emph{Reference}:
681 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
682 @end table
683
684
685
686 @node omp_get_schedule
687 @section @code{omp_get_schedule} -- Obtain the runtime scheduling method
688 @table @asis
689 @item @emph{Description}:
690 Obtain the runtime scheduling method. The @var{kind} argument will be
691 set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
692 @code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
693 @var{chunk_size}, is set to the chunk size.
694
695 @item @emph{C/C++}
696 @multitable @columnfractions .20 .80
697 @item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
698 @end multitable
699
700 @item @emph{Fortran}:
701 @multitable @columnfractions .20 .80
702 @item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
703 @item @tab @code{integer(kind=omp_sched_kind) kind}
704 @item @tab @code{integer chunk_size}
705 @end multitable
706
707 @item @emph{See also}:
708 @ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
709
710 @item @emph{Reference}:
711 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
712 @end table
713
714
715 @node omp_get_supported_active_levels
716 @section @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
717 @table @asis
718 @item @emph{Description}:
719 This function returns the maximum number of nested, active parallel regions
720 supported by this implementation.
721
722 @item @emph{C/C++}
723 @multitable @columnfractions .20 .80
724 @item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
725 @end multitable
726
727 @item @emph{Fortran}:
728 @multitable @columnfractions .20 .80
729 @item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
730 @end multitable
731
732 @item @emph{See also}:
733 @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
734
735 @item @emph{Reference}:
736 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
737 @end table
738
739
740
741 @node omp_get_team_num
742 @section @code{omp_get_team_num} -- Get team number
743 @table @asis
744 @item @emph{Description}:
745 Returns the team number of the calling thread.
746
747 @item @emph{C/C++}:
748 @multitable @columnfractions .20 .80
749 @item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
750 @end multitable
751
752 @item @emph{Fortran}:
753 @multitable @columnfractions .20 .80
754 @item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
755 @end multitable
756
757 @item @emph{Reference}:
758 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
759 @end table
760
761
762
763 @node omp_get_team_size
764 @section @code{omp_get_team_size} -- Number of threads in a team
765 @table @asis
766 @item @emph{Description}:
767 This function returns the number of threads in a thread team to which
768 either the current thread or its ancestor belongs.  For values of
769 @var{level} outside the range zero to @code{omp_get_level}, -1 is
770 returned; if @var{level} is zero, 1 is returned; and if @var{level} is
771 @code{omp_get_level}, the result is identical to @code{omp_get_num_threads}.
772
773 @item @emph{C/C++}:
774 @multitable @columnfractions .20 .80
775 @item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
776 @end multitable
777
778 @item @emph{Fortran}:
779 @multitable @columnfractions .20 .80
780 @item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
781 @item @tab @code{integer level}
782 @end multitable
783
784 @item @emph{See also}:
785 @ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
786
787 @item @emph{Reference}:
788 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
789 @end table
790
791
792
793 @node omp_get_thread_limit
794 @section @code{omp_get_thread_limit} -- Maximum number of threads
795 @table @asis
796 @item @emph{Description}:
797 Return the maximum number of threads available to the program.
798
799 @item @emph{C/C++}:
800 @multitable @columnfractions .20 .80
801 @item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
802 @end multitable
803
804 @item @emph{Fortran}:
805 @multitable @columnfractions .20 .80
806 @item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
807 @end multitable
808
809 @item @emph{See also}:
810 @ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
811
812 @item @emph{Reference}:
813 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
814 @end table
815
816
817
818 @node omp_get_thread_num
819 @section @code{omp_get_thread_num} -- Current thread ID
820 @table @asis
821 @item @emph{Description}:
822 Returns a unique thread identification number within the current team.
823 In sequential parts of the program, @code{omp_get_thread_num}
824 always returns 0.  In parallel regions the return value varies
825 from 0 to @code{omp_get_num_threads}-1 inclusive.  The return
826 value of the primary thread of a team is always 0.
827
828 @item @emph{C/C++}:
829 @multitable @columnfractions .20 .80
830 @item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
831 @end multitable
832
833 @item @emph{Fortran}:
834 @multitable @columnfractions .20 .80
835 @item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
836 @end multitable
837
838 @item @emph{See also}:
839 @ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
840
841 @item @emph{Reference}:
842 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
843 @end table
844
845
846
847 @node omp_in_parallel
848 @section @code{omp_in_parallel} -- Whether a parallel region is active
849 @table @asis
850 @item @emph{Description}:
851 This function returns @code{true} if currently running in parallel,
852 @code{false} otherwise. Here, @code{true} and @code{false} represent
853 their language-specific counterparts.
854
855 @item @emph{C/C++}:
856 @multitable @columnfractions .20 .80
857 @item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
858 @end multitable
859
860 @item @emph{Fortran}:
861 @multitable @columnfractions .20 .80
862 @item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
863 @end multitable
864
865 @item @emph{Reference}:
866 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
867 @end table


@node omp_in_final
@section @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
or included task region, @code{false} otherwise. Here, @code{true}
and @code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
@end table



@node omp_is_initial_device
@section @code{omp_is_initial_device} -- Whether executing on the host device
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running on the host device,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
@end table



@node omp_set_default_device
@section @code{omp_set_default_device} -- Set the default device for target regions
@table @asis
@item @emph{Description}:
Set the default device for target regions without a @code{device} clause.
The argument shall be a nonnegative device number.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
@item @tab @code{integer device_num}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table



@node omp_set_dynamic
@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
@item @tab @code{logical, intent(in) :: dynamic_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
@end table



@node omp_set_max_active_levels
@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
This function limits the maximum allowed number of nested, active
parallel regions. @var{max_levels} must be less than or equal to
the value returned by @code{omp_get_supported_active_levels}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
@item @tab @code{integer max_levels}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
@ref{omp_get_supported_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
@end table



@node omp_set_nested
@section @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.

Enabling nested parallel regions will also set the maximum number of
active nested regions to the maximum supported. Disabling nested parallel
regions will set the maximum number of active nested regions to one.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
@item @tab @code{logical, intent(in) :: nested}
@end multitable

@item @emph{See also}:
@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
@end table



@node omp_set_num_threads
@section @code{omp_set_num_threads} -- Set upper team size limit
@table @asis
@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
regions, if those do not specify a @code{num_threads} clause. The
argument of @code{omp_set_num_threads} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
@item @tab @code{integer, intent(in) :: num_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
@end table



@node omp_set_schedule
@section @code{omp_set_schedule} -- Set the runtime scheduling method
@table @asis
@item @emph{Description}:
Sets the runtime scheduling method. The @var{kind} argument can have the
value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
@code{omp_sched_auto}, the chunk size is set to the value of
@var{chunk_size} if positive, or to the default value if zero or negative.
For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
@end table



@node omp_init_lock
@section @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock. After initialization, the lock is in
an unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table



@node omp_set_lock
@section @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread,
a deadlock occurs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table



@node omp_test_lock
@section @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available. This function returns
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node omp_unset_lock
@section @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_lock}. The lock then becomes unlocked. If one
or more threads attempted to set the lock before, one of them is chosen to
acquire the lock.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table



@node omp_destroy_lock
@section @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock. In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table



@node omp_init_nest_lock
@section @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock. After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table


@node omp_set_nest_lock
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table



@node omp_test_nest_lock
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is already held by the current thread, the new nesting count
is returned. Otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node omp_unset_nest_lock
@section @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
lock becomes unlocked. If one or more threads attempted to set the lock before,
one of them is chosen to acquire the lock.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table



@node omp_destroy_nest_lock
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock. In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table



@node omp_get_wtick
@section @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
@end table



@node omp_get_wtime
@section @code{omp_get_wtime} -- Elapsed wall clock time
@table @asis
@item @emph{Description}:
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some ``time in the past'', which is an arbitrary time
guaranteed not to change during the execution of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtick}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
@end table



@node omp_fulfill_event
@section @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
@table @asis
@item @emph{Description}:
Fulfill the event associated with the event handle argument. Currently,
it is only used to fulfill events generated by @code{detach} clauses on
task constructs; the effect of fulfilling the event is to allow the task
to complete.

The result of calling @code{omp_fulfill_event} with an event handle other
than that generated by a @code{detach} clause is undefined. Calling it with
an event handle that has already been fulfilled is also undefined.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
@item @tab @code{integer (kind=omp_event_handle_kind) :: event}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
@end table



@c ---------------------------------------------------------------------
@c OpenMP Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter OpenMP Environment Variables

Environment variables beginning with @env{OMP_} are defined by
Section 4 of the OpenMP specification in version 4.5, while those
beginning with @env{GOMP_} are GNU extensions.

@menu
* OMP_CANCELLATION:: Set whether cancellation is activated
* OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
* OMP_DEFAULT_DEVICE:: Set the device used in target regions
* OMP_DYNAMIC:: Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
* OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
* OMP_NESTED:: Nested parallel regions
* OMP_NUM_THREADS:: Specifies the number of threads to use
* OMP_PROC_BIND:: Whether threads may be moved between CPUs
* OMP_PLACES:: Specifies on which CPUs the threads should be placed
* OMP_STACKSIZE:: Set default thread stack size
* OMP_SCHEDULE:: How threads are scheduled
* OMP_TARGET_OFFLOAD:: Controls offloading behaviour
* OMP_THREAD_LIMIT:: Set the maximum number of threads
* OMP_WAIT_POLICY:: How waiting threads are handled
* GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
* GOMP_DEBUG:: Enable debugging output
* GOMP_STACKSIZE:: Set default thread stack size
* GOMP_SPINCOUNT:: Set the busy-wait spin count
* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
@end menu


@node OMP_CANCELLATION
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
@cindex Environment Variable
@table @asis
@item @emph{Description}:
If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
if unset, cancellation is disabled and the @code{cancel} construct is ignored.

@item @emph{See also}:
@ref{omp_get_cancellation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
@end table



@node OMP_DISPLAY_ENV
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
@cindex Environment Variable
@table @asis
@item @emph{Description}:
If set to @code{TRUE}, the OpenMP version number and the values
associated with the OpenMP environment variables are printed to @code{stderr}.
If set to @code{VERBOSE}, it additionally shows the value of the environment
variables which are GNU extensions. If undefined or set to @code{FALSE},
this information will not be shown.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
@end table



@node OMP_DEFAULT_DEVICE
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Set to choose the device which is used in a @code{target} region, unless the
value is overridden by @code{omp_set_default_device} or by a @code{device}
clause. The value shall be a nonnegative device number. If no device with
the given device number exists, the code is executed on the host. If unset,
device number 0 will be used.

@item @emph{See also}:
@ref{omp_get_default_device}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
@end table



@node OMP_DYNAMIC
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The value of this environment variable shall be
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{See also}:
@ref{omp_set_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
@end table



@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions. The value of this variable shall be a positive integer.
If undefined, then if @env{OMP_NESTED} is defined and set to true, or
if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
a list with more than one item, the maximum number of nested parallel
regions will be initialized to the largest number supported, otherwise
it will be set to one.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
@end table



@node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum task priority value
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies the initial value for the maximum priority value that can be
set for a task. The value of this variable shall be a non-negative
integer. If undefined, the default priority is 0.

@item @emph{See also}:
@ref{omp_get_max_task_priority}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
@end table



@node OMP_NESTED
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the maximum
number of active nested regions will by default be set to the maximum
supported, otherwise it will be set to one. If
@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting overrides this
setting. If both are undefined, nested parallel regions are enabled if
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to a
list with more than one item, otherwise they are disabled by default.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
@end table



@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive integers;
the value specifies the number of threads to use for the corresponding nested
level. Specifying more than one item in the list will automatically enable
nesting by default. If undefined, one thread per CPU is used.

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
@end table
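Hypothetical usage (the binary name @command{./my_program} is a placeholder, not part of this manual):

```shell
# Four threads at the outermost parallel level, two at the first
# nested level; a list with more than one item also enables nesting.
OMP_NUM_THREADS=4,2 ./my_program

# A single value applies to the outermost level only.
OMP_NUM_THREADS=8 ./my_program
```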



@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies whether threads may be moved between processors. If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
they may be moved. Alternatively, a comma-separated list with the
values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
be used to specify the thread affinity policy for the corresponding nesting
level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
same place partition as the primary thread. With @code{CLOSE} they are
kept close to the primary thread in contiguous place partitions, and
with @code{SPREAD} a sparse distribution across the place partitions is
used. Specifying more than one item in the list will automatically
enable nesting by default.

When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.

@item @emph{See also}:
@ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY},
@ref{OMP_NESTED}, @ref{OMP_PLACES}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
@end table



@node OMP_PLACES
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
@cindex Environment Variable
@table @asis
@item @emph{Description}:
The thread placement can be either specified using an abstract name or by an
explicit list of the places. The abstract names @code{threads}, @code{cores}
and @code{sockets} can be optionally followed by a positive number in
parentheses, which denotes how many places shall be created. With
@code{threads} each place corresponds to a single hardware thread; @code{cores}
to a single core with the corresponding number of hardware threads; and with
@code{sockets} the place corresponds to a single socket. The resulting
placement can be shown by setting the @env{OMP_DISPLAY_ENV} environment
variable.

Alternatively, the placement can be specified explicitly as a comma-separated
list of places. A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads. The hardware threads belonging to a
place can either be specified as a comma-separated list of nonnegative thread
numbers or using an interval. Multiple places can also be either specified by
a comma-separated list of places or by an interval. To specify an interval,
a colon followed by the count is placed after the hardware thread number or
the place. Optionally, the count can be followed by a colon and the stride
number -- otherwise a unit stride is assumed. For instance, the following
all specify the same places list:
@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.

If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
between CPUs following no placement policy.

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
@ref{OMP_DISPLAY_ENV}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
@end table
1735
1736
1737
1738 @node OMP_STACKSIZE
1739 @section @env{OMP_STACKSIZE} -- Set default thread stack size
1740 @cindex Environment Variable
1741 @table @asis
1742 @item @emph{Description}:
1743 Set the default thread stack size in kilobytes, unless the number
1744 is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
1745 case the size is, respectively, in bytes, kilobytes, megabytes
1746 or gigabytes. This is different from @code{pthread_attr_setstacksize}
1747 which gets the number of bytes as an argument. If the stack size cannot
1748 be set due to system constraints, an error is reported and the initial
1749 stack size is left unchanged. If undefined, the stack size is system
1750 dependent.
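The size computation can be sketched as follows; this is an illustrative
helper, not libgomp's code:

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>

/* Turn an OMP_STACKSIZE value into bytes: a plain number means
   kilobytes; the suffixes B, K, M and G select bytes, kilobytes,
   megabytes and gigabytes.  Returns -1 for an unknown suffix.  */
static long long stacksize_in_bytes (const char *value)
{
  char *end;
  long long n = strtoll (value, &end, 10);
  switch (toupper ((unsigned char) *end))
    {
    case 'B': return n;
    case 'K': case '\0': return n * 1024LL;            /* default unit */
    case 'M': return n * 1024LL * 1024LL;
    case 'G': return n * 1024LL * 1024LL * 1024LL;
    default:  return -1;
    }
}
```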
1751
1752 @item @emph{Reference}:
1753 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
1754 @end table
1755
1756
1757
1758 @node OMP_SCHEDULE
1759 @section @env{OMP_SCHEDULE} -- How threads are scheduled
1760 @cindex Environment Variable
1761 @cindex Implementation specific setting
1762 @table @asis
1763 @item @emph{Description}:
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form: @code{type[,chunk]} where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}.
The optional @code{chunk} size shall be a positive integer. If undefined,
dynamic scheduling and a chunk size of 1 are used.
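Splitting a value of this form can be sketched as follows (an
illustrative helper, not libgomp's parser):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Split an OMP_SCHEDULE value "type[,chunk]": copy the type name
   into `type' and return the chunk size, defaulting to 1 when the
   optional chunk is omitted.  */
static int parse_schedule (const char *value, char *type, size_t typelen)
{
  const char *comma = strchr (value, ',');
  size_t n = comma ? (size_t) (comma - value) : strlen (value);
  if (n >= typelen)
    n = typelen - 1;
  memcpy (type, value, n);
  type[n] = '\0';
  return comma ? atoi (comma + 1) : 1;
}
```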
1769
1770 @item @emph{See also}:
1771 @ref{omp_set_schedule}
1772
1773 @item @emph{Reference}:
1774 @uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
1775 @end table
1776
1777
1778
1779 @node OMP_TARGET_OFFLOAD
1780 @section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
1781 @cindex Environment Variable
1782 @cindex Implementation specific setting
1783 @table @asis
1784 @item @emph{Description}:
1785 Specifies the behaviour with regard to offloading code to a device. This
1786 variable can be set to one of three values - @code{MANDATORY}, @code{DISABLED}
1787 or @code{DEFAULT}.
1788
1789 If set to @code{MANDATORY}, the program will terminate with an error if
1790 the offload device is not present or is not supported. If set to
1791 @code{DISABLED}, then offloading is disabled and all code will run on the
1792 host. If set to @code{DEFAULT}, the program will try offloading to the
1793 device first, then fall back to running code on the host if it cannot.
1794
1795 If undefined, then the program will behave as if @code{DEFAULT} was set.
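The three policies and the default behaviour can be summarized in this
sketch; the enum names are illustrative, not libgomp internals:

```c
#include <assert.h>
#include <string.h>

/* Map an OMP_TARGET_OFFLOAD value to an offloading policy; an unset
   variable (NULL) behaves like DEFAULT.  */
enum offload_policy { OFFLOAD_DEFAULT, OFFLOAD_MANDATORY, OFFLOAD_DISABLED };

static enum offload_policy offload_policy (const char *value)
{
  if (value == NULL || strcmp (value, "DEFAULT") == 0)
    return OFFLOAD_DEFAULT;       /* Try the device, fall back to host.  */
  if (strcmp (value, "MANDATORY") == 0)
    return OFFLOAD_MANDATORY;     /* Error out if the device is unusable.  */
  if (strcmp (value, "DISABLED") == 0)
    return OFFLOAD_DISABLED;      /* Run everything on the host.  */
  return OFFLOAD_DEFAULT;
}
```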
1796
1797 @item @emph{Reference}:
1798 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.17
1799 @end table
1800
1801
1802
1803 @node OMP_THREAD_LIMIT
1804 @section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
1805 @cindex Environment Variable
1806 @table @asis
1807 @item @emph{Description}:
Specifies the maximum number of threads to use for the whole program. The
1809 value of this variable shall be a positive integer. If undefined,
1810 the number of threads is not limited.
1811
1812 @item @emph{See also}:
1813 @ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
1814
1815 @item @emph{Reference}:
1816 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
1817 @end table
1818
1819
1820
1821 @node OMP_WAIT_POLICY
1822 @section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
1823 @cindex Environment Variable
1824 @table @asis
1825 @item @emph{Description}:
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; if the value is @code{ACTIVE}, they may do so. If
undefined, threads wait actively for a short time before waiting
passively.
1831
1832 @item @emph{See also}:
1833 @ref{GOMP_SPINCOUNT}
1834
1835 @item @emph{Reference}:
1836 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
1837 @end table
1838
1839
1840
1841 @node GOMP_CPU_AFFINITY
1842 @section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
1843 @cindex Environment Variable
1844 @table @asis
1845 @item @emph{Description}:
1846 Binds threads to specific CPUs. The variable should contain a space-separated
1847 or comma-separated list of CPUs. This list may contain different kinds of
1848 entries: either single CPU numbers in any order, a range of CPUs (M-N)
1849 or a range with some stride (M-N:S). CPU numbers are zero based. For example,
1850 @code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
1851 to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
1852 CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
1853 and 14 respectively and then start assigning back from the beginning of
1854 the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
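The list entries above can be expanded as in this illustrative sketch
(not libgomp's actual parser):

```c
#include <assert.h>
#include <stdio.h>

/* Expand one GOMP_CPU_AFFINITY entry of the form "M", "M-N" or
   "M-N:S" into CPU numbers; returns how many CPUs were written to
   `out'.  The stride S defaults to 1.  */
static int expand_entry (const char *entry, int *out)
{
  int m, n, s = 1, k = 0;
  if (sscanf (entry, "%d-%d:%d", &m, &n, &s) >= 2)
    for (int c = m; c <= n; c += s)
      out[k++] = c;
  else if (sscanf (entry, "%d", &m) == 1)
    out[k++] = m;
  return k;
}
```

For instance, the entry @code{4-15:2} expands to CPUs 4, 6, 8, 10, 12
and 14, matching the example above.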
1855
1856 There is no libgomp library routine to determine whether a CPU affinity
1857 specification is in effect. As a workaround, language-specific library
1858 functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
1859 Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
1860 environment variable. A defined CPU affinity on startup cannot be changed
1861 or disabled during the runtime of the application.
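In C, the workaround described above could look like this (illustrative
only):

```c
#include <stdlib.h>

/* Query the environment directly, since no libgomp routine reports
   whether a CPU affinity specification is in effect.  */
static int affinity_is_set (void)
{
  const char *aff = getenv ("GOMP_CPU_AFFINITY");
  return aff != NULL && aff[0] != '\0';
}
```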
1862
If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} has a higher precedence. If neither is set, or when
@env{OMP_PROC_BIND} is set to @code{FALSE}, the host system will handle
the assignment of threads to CPUs.
1867
1868 @item @emph{See also}:
1869 @ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
1870 @end table
1871
1872
1873
1874 @node GOMP_DEBUG
1875 @section @env{GOMP_DEBUG} -- Enable debugging output
1876 @cindex Environment Variable
1877 @table @asis
1878 @item @emph{Description}:
1879 Enable debugging output. The variable should be set to @code{0}
1880 (disabled, also the default if not set), or @code{1} (enabled).
1881
1882 If enabled, some debugging output will be printed during execution.
1883 This is currently not specified in more detail, and subject to change.
1884 @end table
1885
1886
1887
1888 @node GOMP_STACKSIZE
1889 @section @env{GOMP_STACKSIZE} -- Set default thread stack size
1890 @cindex Environment Variable
1891 @cindex Implementation specific setting
1892 @table @asis
1893 @item @emph{Description}:
1894 Set the default thread stack size in kilobytes. This is different from
1895 @code{pthread_attr_setstacksize} which gets the number of bytes as an
1896 argument. If the stack size cannot be set due to system constraints, an
1897 error is reported and the initial stack size is left unchanged. If undefined,
1898 the stack size is system dependent.
1899
1900 @item @emph{See also}:
1901 @ref{OMP_STACKSIZE}
1902
1903 @item @emph{Reference}:
1904 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
1905 GCC Patches Mailinglist},
1906 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
1907 GCC Patches Mailinglist}
1908 @end table
1909
1910
1911
1912 @node GOMP_SPINCOUNT
1913 @section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
1914 @cindex Environment Variable
1915 @cindex Implementation specific setting
1916 @table @asis
1917 @item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power. The value may be
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop. The
1922 integer may optionally be followed by the following suffixes acting
1923 as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
1924 million), @code{G} (giga, billion), or @code{T} (tera, trillion).
1925 If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
1926 300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
1927 30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
1928 If there are more OpenMP threads than available CPUs, 1000 and 100
1929 spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
undefined, respectively; unless @env{GOMP_SPINCOUNT} is lower
1931 or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
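Applying the suffixes can be sketched as follows (an illustrative
helper, not libgomp's parser):

```c
#include <assert.h>
#include <stdlib.h>

/* Apply the documented k/M/G/T multiplication suffixes to a
   GOMP_SPINCOUNT value; a bare number is returned unchanged.  */
static unsigned long long spincount (const char *value)
{
  char *end;
  unsigned long long n = strtoull (value, &end, 10);
  switch (*end)
    {
    case 'k': return n * 1000ULL;              /* kilo, thousand */
    case 'M': return n * 1000000ULL;           /* mega, million  */
    case 'G': return n * 1000000000ULL;        /* giga, billion  */
    case 'T': return n * 1000000000000ULL;     /* tera, trillion */
    default:  return n;
    }
}
```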
1932
1933 @item @emph{See also}:
1934 @ref{OMP_WAIT_POLICY}
1935 @end table
1936
1937
1938
1939 @node GOMP_RTEMS_THREAD_POOLS
1940 @section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
1941 @cindex Environment Variable
1942 @cindex Implementation specific setting
1943 @table @asis
1944 @item @emph{Description}:
1945 This environment variable is only used on the RTEMS real-time operating system.
1946 It determines the scheduler instance specific thread pools. The format for
1947 @env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
1948 @code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
1949 separated by @code{:} where:
1950 @itemize @bullet
1951 @item @code{<thread-pool-count>} is the thread pool count for this scheduler
1952 instance.
1953 @item @code{$<priority>} is an optional priority for the worker threads of a
1954 thread pool according to @code{pthread_setschedparam}. In case a priority
1955 value is omitted, then a worker thread will inherit the priority of the OpenMP
1956 primary thread that created it. The priority of the worker thread is not
1957 changed after creation, even if a new OpenMP primary thread using the worker has
1958 a different priority.
1959 @item @code{@@<scheduler-name>} is the scheduler instance name according to the
1960 RTEMS application configuration.
1961 @end itemize
1962 In case no thread pool configuration is specified for a scheduler instance,
1963 then each OpenMP primary thread of this scheduler instance will use its own
1964 dynamically allocated thread pool. To limit the worker thread count of the
1965 thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
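Parsing one configuration of this form can be sketched as follows; this
is an illustration, the RTEMS implementation may differ:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Parse one "<thread-pool-count>[$<priority>]@<scheduler-name>"
   configuration.  A priority of -1 means the worker threads inherit
   the priority of the OpenMP primary thread.  Returns 1 on success.  */
static int parse_pool (const char *cfg, int *count, int *prio, char *sched)
{
  *prio = -1;
  if (sscanf (cfg, "%d$%d@%15s", count, prio, sched) == 3)
    return 1;
  if (sscanf (cfg, "%d@%15s", count, sched) == 2)
    return 1;
  return 0;
}
```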
1966 @item @emph{Example}:
Let us suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
1968 @code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
1969 @code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
1970 scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
1971 one thread pool available. Since no priority is specified for this scheduler
1972 instance, the worker thread inherits the priority of the OpenMP primary thread
1973 that created it. In the scheduler instance @code{WRK1} there are three thread
1974 pools available and their worker threads run at priority four.
1975 @end table
1976
1977
1978
1979 @c ---------------------------------------------------------------------
1980 @c Enabling OpenACC
1981 @c ---------------------------------------------------------------------
1982
1983 @node Enabling OpenACC
1984 @chapter Enabling OpenACC
1985
1986 To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
1987 flag @option{-fopenacc} must be specified. This enables the OpenACC directive
1988 @code{#pragma acc} in C/C++ and @code{!$acc} directives in free form,
1989 @code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
1990 @code{!$} conditional compilation sentinels in free form and @code{c$},
1991 @code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
1992 arranges for automatic linking of the OpenACC runtime library
1993 (@ref{OpenACC Runtime Library Routines}).
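For example, a minimal C function using an OpenACC directive might look
as follows; it is compiled with @option{-fopenacc}, and without that
flag the pragma is simply ignored and the loop runs on the host:

```c
#include <assert.h>

/* A minimal OpenACC example: offload (or, without -fopenacc, run on
   the host) a SAXPY loop.  */
static void saxpy (int n, float a, const float *x, float *y)
{
#pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
  for (int i = 0; i < n; i++)
    y[i] = a * x[i] + y[i];
}
```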
1994
1995 See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.
1996
1997 A complete description of all OpenACC directives accepted may be found in
1998 the @uref{https://www.openacc.org, OpenACC} Application Programming
1999 Interface manual, version 2.6.
2000
2001
2002
2003 @c ---------------------------------------------------------------------
2004 @c OpenACC Runtime Library Routines
2005 @c ---------------------------------------------------------------------
2006
2007 @node OpenACC Runtime Library Routines
2008 @chapter OpenACC Runtime Library Routines
2009
2010 The runtime routines described here are defined by section 3 of the OpenACC
specification, version 2.6.
2012 They have C linkage, and do not throw exceptions.
2013 Generally, they are available only for the host, with the exception of
2014 @code{acc_on_device}, which is available for both the host and the
2015 acceleration device.
2016
2017 @menu
2018 * acc_get_num_devices:: Get number of devices for the given device
2019 type.
2020 * acc_set_device_type:: Set type of device accelerator to use.
2021 * acc_get_device_type:: Get type of device accelerator to be used.
2022 * acc_set_device_num:: Set device number to use.
2023 * acc_get_device_num:: Get device number to be used.
2024 * acc_get_property:: Get device property.
2025 * acc_async_test:: Tests for completion of a specific asynchronous
2026 operation.
2027 * acc_async_test_all:: Tests for completion of all asynchronous
2028 operations.
2029 * acc_wait:: Wait for completion of a specific asynchronous
2030 operation.
2031 * acc_wait_all:: Waits for completion of all asynchronous
2032 operations.
2033 * acc_wait_all_async:: Wait for completion of all asynchronous
2034 operations.
2035 * acc_wait_async:: Wait for completion of asynchronous operations.
2036 * acc_init:: Initialize runtime for a specific device type.
2037 * acc_shutdown:: Shuts down the runtime for a specific device
2038 type.
2039 * acc_on_device:: Whether executing on a particular device
2040 * acc_malloc:: Allocate device memory.
2041 * acc_free:: Free device memory.
2042 * acc_copyin:: Allocate device memory and copy host memory to
2043 it.
2044 * acc_present_or_copyin:: If the data is not present on the device,
2045 allocate device memory and copy from host
2046 memory.
2047 * acc_create:: Allocate device memory and map it to host
2048 memory.
2049 * acc_present_or_create:: If the data is not present on the device,
2050 allocate device memory and map it to host
2051 memory.
2052 * acc_copyout:: Copy device memory to host memory.
2053 * acc_delete:: Free device memory.
2054 * acc_update_device:: Update device memory from mapped host memory.
2055 * acc_update_self:: Update host memory from mapped device memory.
2056 * acc_map_data:: Map previously allocated device memory to host
2057 memory.
2058 * acc_unmap_data:: Unmap device memory from host memory.
2059 * acc_deviceptr:: Get device pointer associated with specific
2060 host address.
2061 * acc_hostptr:: Get host pointer associated with specific
2062 device address.
2063 * acc_is_present:: Indicate whether host variable / array is
2064 present on device.
2065 * acc_memcpy_to_device:: Copy host memory to device memory.
2066 * acc_memcpy_from_device:: Copy device memory to host memory.
2067 * acc_attach:: Let device pointer point to device-pointer target.
2068 * acc_detach:: Let device pointer point to host-pointer target.
2069
2070 API routines for target platforms.
2071
2072 * acc_get_current_cuda_device:: Get CUDA device handle.
2073 * acc_get_current_cuda_context::Get CUDA context handle.
2074 * acc_get_cuda_stream:: Get CUDA stream handle.
2075 * acc_set_cuda_stream:: Set CUDA stream handle.
2076
2077 API routines for the OpenACC Profiling Interface.
2078
2079 * acc_prof_register:: Register callbacks.
2080 * acc_prof_unregister:: Unregister callbacks.
2081 * acc_prof_lookup:: Obtain inquiry functions.
2082 * acc_register_library:: Library registration.
2083 @end menu
2084
2085
2086
2087 @node acc_get_num_devices
2088 @section @code{acc_get_num_devices} -- Get number of devices for given device type
2089 @table @asis
2090 @item @emph{Description}
2091 This function returns a value indicating the number of devices available
2092 for the device type specified in @var{devicetype}.
2093
2094 @item @emph{C/C++}:
2095 @multitable @columnfractions .20 .80
2096 @item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
2097 @end multitable
2098
2099 @item @emph{Fortran}:
2100 @multitable @columnfractions .20 .80
2101 @item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
2102 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2103 @end multitable
2104
2105 @item @emph{Reference}:
2106 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2107 3.2.1.
2108 @end table
2109
2110
2111
2112 @node acc_set_device_type
2113 @section @code{acc_set_device_type} -- Set type of device accelerator to use.
2114 @table @asis
2115 @item @emph{Description}
2116 This function indicates to the runtime library which device type, specified
2117 in @var{devicetype}, to use when executing a parallel or kernels region.
2118
2119 @item @emph{C/C++}:
2120 @multitable @columnfractions .20 .80
2121 @item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
2122 @end multitable
2123
2124 @item @emph{Fortran}:
2125 @multitable @columnfractions .20 .80
2126 @item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
2127 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2128 @end multitable
2129
2130 @item @emph{Reference}:
2131 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2132 3.2.2.
2133 @end table
2134
2135
2136
2137 @node acc_get_device_type
2138 @section @code{acc_get_device_type} -- Get type of device accelerator to be used.
2139 @table @asis
2140 @item @emph{Description}
This function returns the device type that will be used when executing a
parallel or kernels region.
2143
2144 This function returns @code{acc_device_none} if
2145 @code{acc_get_device_type} is called from
2146 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
2147 callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
2148 Interface}), that is, if the device is currently being initialized.
2149
2150 @item @emph{C/C++}:
2151 @multitable @columnfractions .20 .80
2152 @item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
2153 @end multitable
2154
2155 @item @emph{Fortran}:
2156 @multitable @columnfractions .20 .80
2157 @item @emph{Interface}: @tab @code{function acc_get_device_type(void)}
2158 @item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
2159 @end multitable
2160
2161 @item @emph{Reference}:
2162 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2163 3.2.3.
2164 @end table
2165
2166
2167
2168 @node acc_set_device_num
2169 @section @code{acc_set_device_num} -- Set device number to use.
2170 @table @asis
2171 @item @emph{Description}
This function indicates to the runtime which device number,
specified by @var{devicenum} and associated with the specified device
type @var{devicetype}, to use.
2175
2176 @item @emph{C/C++}:
2177 @multitable @columnfractions .20 .80
2178 @item @emph{Prototype}: @tab @code{acc_set_device_num(int devicenum, acc_device_t devicetype);}
2179 @end multitable
2180
2181 @item @emph{Fortran}:
2182 @multitable @columnfractions .20 .80
2183 @item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
2184 @item @tab @code{integer devicenum}
2185 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2186 @end multitable
2187
2188 @item @emph{Reference}:
2189 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2190 3.2.4.
2191 @end table
2192
2193
2194
2195 @node acc_get_device_num
2196 @section @code{acc_get_device_num} -- Get device number to be used.
2197 @table @asis
2198 @item @emph{Description}
This function returns the device number, associated with the specified
device type @var{devicetype}, that will be used when executing a
parallel or kernels region.
2202
2203 @item @emph{C/C++}:
2204 @multitable @columnfractions .20 .80
2205 @item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
2206 @end multitable
2207
2208 @item @emph{Fortran}:
2209 @multitable @columnfractions .20 .80
2210 @item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
2211 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2212 @item @tab @code{integer acc_get_device_num}
2213 @end multitable
2214
2215 @item @emph{Reference}:
2216 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2217 3.2.5.
2218 @end table
2219
2220
2221
2222 @node acc_get_property
2223 @section @code{acc_get_property} -- Get device property.
2224 @cindex acc_get_property
2225 @cindex acc_get_property_string
2226 @table @asis
2227 @item @emph{Description}
2228 These routines return the value of the specified @var{property} for the
2229 device being queried according to @var{devicenum} and @var{devicetype}.
2230 Integer-valued and string-valued properties are returned by
2231 @code{acc_get_property} and @code{acc_get_property_string} respectively.
2232 The Fortran @code{acc_get_property_string} subroutine returns the string
2233 retrieved in its fourth argument while the remaining entry points are
2234 functions, which pass the return value as their result.
2235
2236 Note for Fortran, only: the OpenACC technical committee corrected and, hence,
2237 modified the interface introduced in OpenACC 2.6. The kind-value parameter
2238 @code{acc_device_property} has been renamed to @code{acc_device_property_kind}
2239 for consistency and the return type of the @code{acc_get_property} function is
now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
2241 The parameter @code{acc_device_property} will continue to be provided,
2242 but might be removed in a future version of GCC.
2243
2244 @item @emph{C/C++}:
2245 @multitable @columnfractions .20 .80
2246 @item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
2247 @item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
2248 @end multitable
2249
2250 @item @emph{Fortran}:
2251 @multitable @columnfractions .20 .80
2252 @item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
2253 @item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
2254 @item @tab @code{use ISO_C_Binding, only: c_size_t}
2255 @item @tab @code{integer devicenum}
2256 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2257 @item @tab @code{integer(kind=acc_device_property_kind) property}
2258 @item @tab @code{integer(kind=c_size_t) acc_get_property}
2259 @item @tab @code{character(*) string}
2260 @end multitable
2261
2262 @item @emph{Reference}:
2263 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2264 3.2.6.
2265 @end table
2266
2267
2268
2269 @node acc_async_test
2270 @section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
2271 @table @asis
2272 @item @emph{Description}
2273 This function tests for completion of the asynchronous operation specified
in @var{arg}. In C/C++, a non-zero value is returned to indicate that
the specified asynchronous operation has completed, while Fortran
returns @code{true}. If the asynchronous operation has not completed,
C/C++ returns zero and Fortran returns @code{false}.
2278
2279 @item @emph{C/C++}:
2280 @multitable @columnfractions .20 .80
2281 @item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
2282 @end multitable
2283
2284 @item @emph{Fortran}:
2285 @multitable @columnfractions .20 .80
2286 @item @emph{Interface}: @tab @code{function acc_async_test(arg)}
2287 @item @tab @code{integer(kind=acc_handle_kind) arg}
2288 @item @tab @code{logical acc_async_test}
2289 @end multitable
2290
2291 @item @emph{Reference}:
2292 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2293 3.2.9.
2294 @end table
2295
2296
2297
2298 @node acc_async_test_all
2299 @section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
2300 @table @asis
2301 @item @emph{Description}
2302 This function tests for completion of all asynchronous operations.
In C/C++, a non-zero value is returned to indicate that all asynchronous
operations have completed, while Fortran returns @code{true}. If
any asynchronous operation has not completed, C/C++ returns zero and
Fortran returns @code{false}.
2307
2308 @item @emph{C/C++}:
2309 @multitable @columnfractions .20 .80
2310 @item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
2311 @end multitable
2312
2313 @item @emph{Fortran}:
2314 @multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
2317 @end multitable
2318
2319 @item @emph{Reference}:
2320 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2321 3.2.10.
2322 @end table
2323
2324
2325
2326 @node acc_wait
2327 @section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
2328 @table @asis
2329 @item @emph{Description}
2330 This function waits for completion of the asynchronous operation
2331 specified in @var{arg}.
2332
2333 @item @emph{C/C++}:
2334 @multitable @columnfractions .20 .80
2335 @item @emph{Prototype}: @tab @code{acc_wait(arg);}
2336 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
2337 @end multitable
2338
2339 @item @emph{Fortran}:
2340 @multitable @columnfractions .20 .80
2341 @item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
2342 @item @tab @code{integer(acc_handle_kind) arg}
2343 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
2344 @item @tab @code{integer(acc_handle_kind) arg}
2345 @end multitable
2346
2347 @item @emph{Reference}:
2348 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2349 3.2.11.
2350 @end table
2351
2352
2353
2354 @node acc_wait_all
2355 @section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
2356 @table @asis
2357 @item @emph{Description}
2358 This function waits for the completion of all asynchronous operations.
2359
2360 @item @emph{C/C++}:
2361 @multitable @columnfractions .20 .80
2362 @item @emph{Prototype}: @tab @code{acc_wait_all(void);}
2363 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
2364 @end multitable
2365
2366 @item @emph{Fortran}:
2367 @multitable @columnfractions .20 .80
2368 @item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
2369 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
2370 @end multitable
2371
2372 @item @emph{Reference}:
2373 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2374 3.2.13.
2375 @end table
2376
2377
2378
2379 @node acc_wait_all_async
2380 @section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
2381 @table @asis
2382 @item @emph{Description}
2383 This function enqueues a wait operation on the queue @var{async} for any
2384 and all asynchronous operations that have been previously enqueued on
2385 any queue.
2386
2387 @item @emph{C/C++}:
2388 @multitable @columnfractions .20 .80
2389 @item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
2390 @end multitable
2391
2392 @item @emph{Fortran}:
2393 @multitable @columnfractions .20 .80
2394 @item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
2395 @item @tab @code{integer(acc_handle_kind) async}
2396 @end multitable
2397
2398 @item @emph{Reference}:
2399 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2400 3.2.14.
2401 @end table
2402
2403
2404
2405 @node acc_wait_async
2406 @section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
2407 @table @asis
2408 @item @emph{Description}
2409 This function enqueues a wait operation on queue @var{async} for any and all
2410 asynchronous operations enqueued on queue @var{arg}.
2411
2412 @item @emph{C/C++}:
2413 @multitable @columnfractions .20 .80
2414 @item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
2415 @end multitable
2416
2417 @item @emph{Fortran}:
2418 @multitable @columnfractions .20 .80
2419 @item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
2420 @item @tab @code{integer(acc_handle_kind) arg, async}
2421 @end multitable
2422
2423 @item @emph{Reference}:
2424 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2425 3.2.12.
2426 @end table
2427
2428
2429
2430 @node acc_init
2431 @section @code{acc_init} -- Initialize runtime for a specific device type.
2432 @table @asis
2433 @item @emph{Description}
2434 This function initializes the runtime for the device type specified in
2435 @var{devicetype}.
2436
2437 @item @emph{C/C++}:
2438 @multitable @columnfractions .20 .80
2439 @item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
2440 @end multitable
2441
2442 @item @emph{Fortran}:
2443 @multitable @columnfractions .20 .80
2444 @item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
2445 @item @tab @code{integer(acc_device_kind) devicetype}
2446 @end multitable
2447
2448 @item @emph{Reference}:
2449 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2450 3.2.7.
2451 @end table
2452
2453
2454
2455 @node acc_shutdown
2456 @section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
2457 @table @asis
2458 @item @emph{Description}
2459 This function shuts down the runtime for the device type specified in
2460 @var{devicetype}.
2461
2462 @item @emph{C/C++}:
2463 @multitable @columnfractions .20 .80
2464 @item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
2465 @end multitable
2466
2467 @item @emph{Fortran}:
2468 @multitable @columnfractions .20 .80
2469 @item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
2470 @item @tab @code{integer(acc_device_kind) devicetype}
2471 @end multitable
2472
2473 @item @emph{Reference}:
2474 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2475 3.2.8.
2476 @end table
2477
2478
2479
2480 @node acc_on_device
2481 @section @code{acc_on_device} -- Whether executing on a particular device
2482 @table @asis
2483 @item @emph{Description}:
This function returns whether the program is executing on a particular
device specified in @var{devicetype}. In C/C++, a non-zero value is
returned to indicate that the program is executing on the specified
device type, while Fortran returns @code{true}. If the program is not
executing on the specified device type, C/C++ returns zero and Fortran
returns @code{false}.
2490
2491 @item @emph{C/C++}:
2492 @multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_on_device(acc_device_t devicetype);}
2494 @end multitable
2495
2496 @item @emph{Fortran}:
2497 @multitable @columnfractions .20 .80
2498 @item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
2499 @item @tab @code{integer(acc_device_kind) devicetype}
2500 @item @tab @code{logical acc_on_device}
2501 @end multitable
2502
2503
2504 @item @emph{Reference}:
2505 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2506 3.2.17.
2507 @end table
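As a sketch, @code{acc_on_device} is typically used to select code
paths; guarding with the @code{_OPENACC} feature macro keeps the example
valid even when OpenACC is not enabled:

```c
#include <assert.h>
#if defined(_OPENACC)
#include <openacc.h>
#endif

/* Returns non-zero when running on the host.  When compiled without
   -fopenacc there is no device at all, so the answer is trivially 1.  */
static int running_on_host (void)
{
#if defined(_OPENACC)
  return acc_on_device (acc_device_host);
#else
  return 1;
#endif
}
```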
2508
2509
2510
2511 @node acc_malloc
2512 @section @code{acc_malloc} -- Allocate device memory.
2513 @table @asis
2514 @item @emph{Description}
2515 This function allocates @var{len} bytes of device memory. It returns
2516 the device address of the allocated memory.
2517
2518 @item @emph{C/C++}:
2519 @multitable @columnfractions .20 .80
2520 @item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
2521 @end multitable
2522
2523 @item @emph{Reference}:
2524 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2525 3.2.18.
2526 @end table
2527
2528
2529
2530 @node acc_free
2531 @section @code{acc_free} -- Free device memory.
2532 @table @asis
2533 @item @emph{Description}
Free previously allocated device memory at the device address @var{a}.
2535
2536 @item @emph{C/C++}:
2537 @multitable @columnfractions .20 .80
2538 @item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
2539 @end multitable
2540
2541 @item @emph{Reference}:
2542 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2543 3.2.19.
2544 @end table
2545
2546
2547
2548 @node acc_copyin
2549 @section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
2550 @table @asis
2551 @item @emph{Description}
2552 In C/C++, this function allocates @var{len} bytes of device memory
2553 and maps it to the specified host address in @var{a}. The device
2554 address of the newly allocated device memory is returned.
2555
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2559
2560 @item @emph{C/C++}:
2561 @multitable @columnfractions .20 .80
2562 @item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
2563 @item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
2564 @end multitable
2565
2566 @item @emph{Fortran}:
2567 @multitable @columnfractions .20 .80
2568 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
2569 @item @tab @code{type, dimension(:[,:]...) :: a}
2570 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
2571 @item @tab @code{type, dimension(:[,:]...) :: a}
2572 @item @tab @code{integer len}
2573 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
2574 @item @tab @code{type, dimension(:[,:]...) :: a}
2575 @item @tab @code{integer(acc_handle_kind) :: async}
2576 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
2577 @item @tab @code{type, dimension(:[,:]...) :: a}
2578 @item @tab @code{integer len}
2579 @item @tab @code{integer(acc_handle_kind) :: async}
2580 @end multitable
2581
2582 @item @emph{Reference}:
2583 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2584 3.2.20.
2585 @end table
2586
2587
2588
2589 @node acc_present_or_copyin
2590 @section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
2591 @table @asis
2592 @item @emph{Description}
This function tests whether the host data specified by @var{a} and of
length @var{len} is present on the device. If it is not present, device
memory is allocated and the host memory copied to it. The device address
of the newly allocated device memory is returned.
2597
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2601
2602 Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
2603 backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.
2604
2605 @item @emph{C/C++}:
2606 @multitable @columnfractions .20 .80
2607 @item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
2608 @item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
2609 @end multitable
2610
2611 @item @emph{Fortran}:
2612 @multitable @columnfractions .20 .80
2613 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
2614 @item @tab @code{type, dimension(:[,:]...) :: a}
2615 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
2616 @item @tab @code{type, dimension(:[,:]...) :: a}
2617 @item @tab @code{integer len}
2618 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
2619 @item @tab @code{type, dimension(:[,:]...) :: a}
2620 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
2621 @item @tab @code{type, dimension(:[,:]...) :: a}
2622 @item @tab @code{integer len}
2623 @end multitable
2624
2625 @item @emph{Reference}:
2626 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2627 3.2.20.
2628 @end table
2629
2630
2631
2632 @node acc_create
2633 @section @code{acc_create} -- Allocate device memory and map it to host memory.
2634 @table @asis
2635 @item @emph{Description}
2636 This function allocates device memory and maps it to host memory specified
2637 by the host address @var{a} with a length of @var{len} bytes. In C/C++,
2638 the function returns the device address of the allocated device memory.
2639
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2643
2644 @item @emph{C/C++}:
2645 @multitable @columnfractions .20 .80
2646 @item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
2647 @item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
2648 @end multitable
2649
2650 @item @emph{Fortran}:
2651 @multitable @columnfractions .20 .80
2652 @item @emph{Interface}: @tab @code{subroutine acc_create(a)}
2653 @item @tab @code{type, dimension(:[,:]...) :: a}
2654 @item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
2655 @item @tab @code{type, dimension(:[,:]...) :: a}
2656 @item @tab @code{integer len}
2657 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
2658 @item @tab @code{type, dimension(:[,:]...) :: a}
2659 @item @tab @code{integer(acc_handle_kind) :: async}
2660 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
2661 @item @tab @code{type, dimension(:[,:]...) :: a}
2662 @item @tab @code{integer len}
2663 @item @tab @code{integer(acc_handle_kind) :: async}
2664 @end multitable
2665
2666 @item @emph{Reference}:
2667 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2668 3.2.21.
2669 @end table
2670
2671
2672
2673 @node acc_present_or_create
2674 @section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
2675 @table @asis
2676 @item @emph{Description}
This function tests whether the host data specified by @var{a} and of
length @var{len} is present on the device. If it is not present, device
memory is allocated and mapped to host memory. In C/C++, the device
address of the newly allocated device memory is returned.
2681
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2685
2686 Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
2687 backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.
2688
2689 @item @emph{C/C++}:
2690 @multitable @columnfractions .20 .80
2691 @item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
2692 @item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
2693 @end multitable
2694
2695 @item @emph{Fortran}:
2696 @multitable @columnfractions .20 .80
2697 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
2698 @item @tab @code{type, dimension(:[,:]...) :: a}
2699 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
2700 @item @tab @code{type, dimension(:[,:]...) :: a}
2701 @item @tab @code{integer len}
2702 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
2703 @item @tab @code{type, dimension(:[,:]...) :: a}
2704 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
2705 @item @tab @code{type, dimension(:[,:]...) :: a}
2706 @item @tab @code{integer len}
2707 @end multitable
2708
2709 @item @emph{Reference}:
2710 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2711 3.2.21.
2712 @end table
2713
2714
2715
2716 @node acc_copyout
2717 @section @code{acc_copyout} -- Copy device memory to host memory.
2718 @table @asis
2719 @item @emph{Description}
In C/C++, this function copies mapped device memory to the host memory
specified by the host address @var{a} for a length of @var{len} bytes.
2722
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2726
2727 @item @emph{C/C++}:
2728 @multitable @columnfractions .20 .80
2729 @item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
2730 @item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
2731 @item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
2732 @item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
2733 @end multitable
2734
2735 @item @emph{Fortran}:
2736 @multitable @columnfractions .20 .80
2737 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
2738 @item @tab @code{type, dimension(:[,:]...) :: a}
2739 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
2740 @item @tab @code{type, dimension(:[,:]...) :: a}
2741 @item @tab @code{integer len}
2742 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
2743 @item @tab @code{type, dimension(:[,:]...) :: a}
2744 @item @tab @code{integer(acc_handle_kind) :: async}
2745 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
2746 @item @tab @code{type, dimension(:[,:]...) :: a}
2747 @item @tab @code{integer len}
2748 @item @tab @code{integer(acc_handle_kind) :: async}
2749 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
2750 @item @tab @code{type, dimension(:[,:]...) :: a}
2751 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
2752 @item @tab @code{type, dimension(:[,:]...) :: a}
2753 @item @tab @code{integer len}
2754 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
2755 @item @tab @code{type, dimension(:[,:]...) :: a}
2756 @item @tab @code{integer(acc_handle_kind) :: async}
2757 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
2758 @item @tab @code{type, dimension(:[,:]...) :: a}
2759 @item @tab @code{integer len}
2760 @item @tab @code{integer(acc_handle_kind) :: async}
2761 @end multitable
2762
2763 @item @emph{Reference}:
2764 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2765 3.2.22.
2766 @end table
2767
2768
2769
2770 @node acc_delete
2771 @section @code{acc_delete} -- Free device memory.
2772 @table @asis
2773 @item @emph{Description}
2774 This function frees previously allocated device memory specified by
2775 the device address @var{a} and the length of @var{len} bytes.
2776
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2780
2781 @item @emph{C/C++}:
2782 @multitable @columnfractions .20 .80
2783 @item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
2784 @item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
2785 @item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
2786 @item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
2787 @end multitable
2788
2789 @item @emph{Fortran}:
2790 @multitable @columnfractions .20 .80
2791 @item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
2792 @item @tab @code{type, dimension(:[,:]...) :: a}
2793 @item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
2794 @item @tab @code{type, dimension(:[,:]...) :: a}
2795 @item @tab @code{integer len}
2796 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
2797 @item @tab @code{type, dimension(:[,:]...) :: a}
2798 @item @tab @code{integer(acc_handle_kind) :: async}
2799 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
2800 @item @tab @code{type, dimension(:[,:]...) :: a}
2801 @item @tab @code{integer len}
2802 @item @tab @code{integer(acc_handle_kind) :: async}
2803 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
2804 @item @tab @code{type, dimension(:[,:]...) :: a}
2805 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
2806 @item @tab @code{type, dimension(:[,:]...) :: a}
2807 @item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
2809 @item @tab @code{type, dimension(:[,:]...) :: a}
2810 @item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
2812 @item @tab @code{type, dimension(:[,:]...) :: a}
2813 @item @tab @code{integer len}
2814 @item @tab @code{integer(acc_handle_kind) :: async}
2815 @end multitable
2816
2817 @item @emph{Reference}:
2818 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2819 3.2.23.
2820 @end table
2821
2822
2823
2824 @node acc_update_device
2825 @section @code{acc_update_device} -- Update device memory from mapped host memory.
2826 @table @asis
2827 @item @emph{Description}
2828 This function updates the device copy from the previously mapped host memory.
2829 The host memory is specified with the host address @var{a} and a length of
2830 @var{len} bytes.
2831
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2835
2836 @item @emph{C/C++}:
2837 @multitable @columnfractions .20 .80
2838 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
2840 @end multitable
2841
2842 @item @emph{Fortran}:
2843 @multitable @columnfractions .20 .80
2844 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
2845 @item @tab @code{type, dimension(:[,:]...) :: a}
2846 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
2847 @item @tab @code{type, dimension(:[,:]...) :: a}
2848 @item @tab @code{integer len}
2849 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
2850 @item @tab @code{type, dimension(:[,:]...) :: a}
2851 @item @tab @code{integer(acc_handle_kind) :: async}
2852 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
2853 @item @tab @code{type, dimension(:[,:]...) :: a}
2854 @item @tab @code{integer len}
2855 @item @tab @code{integer(acc_handle_kind) :: async}
2856 @end multitable
2857
2858 @item @emph{Reference}:
2859 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2860 3.2.24.
2861 @end table
2862
2863
2864
2865 @node acc_update_self
2866 @section @code{acc_update_self} -- Update host memory from mapped device memory.
2867 @table @asis
2868 @item @emph{Description}
2869 This function updates the host copy from the previously mapped device memory.
2870 The host memory is specified with the host address @var{a} and a length of
2871 @var{len} bytes.
2872
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2876
2877 @item @emph{C/C++}:
2878 @multitable @columnfractions .20 .80
2879 @item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
2880 @item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
2881 @end multitable
2882
2883 @item @emph{Fortran}:
2884 @multitable @columnfractions .20 .80
2885 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
2886 @item @tab @code{type, dimension(:[,:]...) :: a}
2887 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
2888 @item @tab @code{type, dimension(:[,:]...) :: a}
2889 @item @tab @code{integer len}
2890 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
2891 @item @tab @code{type, dimension(:[,:]...) :: a}
2892 @item @tab @code{integer(acc_handle_kind) :: async}
2893 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
2894 @item @tab @code{type, dimension(:[,:]...) :: a}
2895 @item @tab @code{integer len}
2896 @item @tab @code{integer(acc_handle_kind) :: async}
2897 @end multitable
2898
2899 @item @emph{Reference}:
2900 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2901 3.2.25.
2902 @end table
2903
2904
2905
2906 @node acc_map_data
2907 @section @code{acc_map_data} -- Map previously allocated device memory to host memory.
2908 @table @asis
2909 @item @emph{Description}
This function maps previously allocated device memory to host memory. The
device memory is specified with the device address @var{d}. The host
memory is specified with the host address @var{h} and a length of
@var{len} bytes.
2913
2914 @item @emph{C/C++}:
2915 @multitable @columnfractions .20 .80
2916 @item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
2917 @end multitable
2918
2919 @item @emph{Reference}:
2920 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2921 3.2.26.
2922 @end table
2923
2924
2925
2926 @node acc_unmap_data
2927 @section @code{acc_unmap_data} -- Unmap device memory from host memory.
2928 @table @asis
2929 @item @emph{Description}
This function unmaps previously mapped device memory from the host
memory specified by @var{h}.
2932
2933 @item @emph{C/C++}:
2934 @multitable @columnfractions .20 .80
2935 @item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
2936 @end multitable
2937
2938 @item @emph{Reference}:
2939 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2940 3.2.27.
2941 @end table
2942
2943
2944
2945 @node acc_deviceptr
2946 @section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
2947 @table @asis
2948 @item @emph{Description}
2949 This function returns the device address that has been mapped to the
2950 host address specified by @var{h}.
2951
2952 @item @emph{C/C++}:
2953 @multitable @columnfractions .20 .80
2954 @item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
2955 @end multitable
2956
2957 @item @emph{Reference}:
2958 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2959 3.2.28.
2960 @end table
2961
2962
2963
2964 @node acc_hostptr
2965 @section @code{acc_hostptr} -- Get host pointer associated with specific device address.
2966 @table @asis
2967 @item @emph{Description}
2968 This function returns the host address that has been mapped to the
2969 device address specified by @var{d}.
2970
2971 @item @emph{C/C++}:
2972 @multitable @columnfractions .20 .80
2973 @item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
2974 @end multitable
2975
2976 @item @emph{Reference}:
2977 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2978 3.2.29.
2979 @end table
2980
2981
2982
2983 @node acc_is_present
2984 @section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
2985 @table @asis
2986 @item @emph{Description}
This function indicates whether the host memory specified by the address
@var{a} and a length of @var{len} bytes is present on the device. In
C/C++, a non-zero value is returned if the mapped memory is present on
the device and zero otherwise.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
If the host memory is mapped to device memory, @code{true} is returned;
otherwise, @code{false} is returned.
2998
2999 @item @emph{C/C++}:
3000 @multitable @columnfractions .20 .80
3001 @item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
3002 @end multitable
3003
3004 @item @emph{Fortran}:
3005 @multitable @columnfractions .20 .80
3006 @item @emph{Interface}: @tab @code{function acc_is_present(a)}
3007 @item @tab @code{type, dimension(:[,:]...) :: a}
3008 @item @tab @code{logical acc_is_present}
3009 @item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
3010 @item @tab @code{type, dimension(:[,:]...) :: a}
3011 @item @tab @code{integer len}
3012 @item @tab @code{logical acc_is_present}
3013 @end multitable
3014
3015 @item @emph{Reference}:
3016 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3017 3.2.30.
3018 @end table
3019
3020
3021
3022 @node acc_memcpy_to_device
3023 @section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
3024 @table @asis
3025 @item @emph{Description}
This function copies host memory specified by the host address @var{src}
to device memory specified by the device address @var{dest} for a length
of @var{bytes} bytes.
3029
3030 @item @emph{C/C++}:
3031 @multitable @columnfractions .20 .80
3032 @item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
3033 @end multitable
3034
3035 @item @emph{Reference}:
3036 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3037 3.2.31.
3038 @end table
3039
3040
3041
3042 @node acc_memcpy_from_device
3043 @section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
3044 @table @asis
3045 @item @emph{Description}
This function copies device memory specified by the device address
@var{src} to host memory specified by the host address @var{dest} for a
length of @var{bytes} bytes.
3049
3050 @item @emph{C/C++}:
3051 @multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
3053 @end multitable
3054
3055 @item @emph{Reference}:
3056 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3057 3.2.32.
3058 @end table
3059
3060
3061
3062 @node acc_attach
3063 @section @code{acc_attach} -- Let device pointer point to device-pointer target.
3064 @table @asis
3065 @item @emph{Description}
3066 This function updates a pointer on the device from pointing to a host-pointer
3067 address to pointing to the corresponding device data.
3068
3069 @item @emph{C/C++}:
3070 @multitable @columnfractions .20 .80
3071 @item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
3072 @item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
3073 @end multitable
3074
3075 @item @emph{Reference}:
3076 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3077 3.2.34.
3078 @end table
3079
3080
3081
3082 @node acc_detach
3083 @section @code{acc_detach} -- Let device pointer point to host-pointer target.
3084 @table @asis
3085 @item @emph{Description}
3086 This function updates a pointer on the device from pointing to a device-pointer
3087 address to pointing to the corresponding host data.
3088
3089 @item @emph{C/C++}:
3090 @multitable @columnfractions .20 .80
3091 @item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
3092 @item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
3093 @item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
3094 @item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
3095 @end multitable
3096
3097 @item @emph{Reference}:
3098 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3099 3.2.35.
3100 @end table
3101
3102
3103
3104 @node acc_get_current_cuda_device
3105 @section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
3106 @table @asis
3107 @item @emph{Description}
3108 This function returns the CUDA device handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.
3110
3111 @item @emph{C/C++}:
3112 @multitable @columnfractions .20 .80
3113 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
3114 @end multitable
3115
3116 @item @emph{Reference}:
3117 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3118 A.2.1.1.
3119 @end table
3120
3121
3122
3123 @node acc_get_current_cuda_context
3124 @section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
3125 @table @asis
3126 @item @emph{Description}
3127 This function returns the CUDA context handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.
3129
3130 @item @emph{C/C++}:
3131 @multitable @columnfractions .20 .80
3132 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
3133 @end multitable
3134
3135 @item @emph{Reference}:
3136 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3137 A.2.1.2.
3138 @end table
3139
3140
3141
3142 @node acc_get_cuda_stream
3143 @section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
3144 @table @asis
3145 @item @emph{Description}
3146 This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as used by the CUDA Runtime or Driver APIs.
3148
3149 @item @emph{C/C++}:
3150 @multitable @columnfractions .20 .80
3151 @item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
3152 @end multitable
3153
3154 @item @emph{Reference}:
3155 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3156 A.2.1.3.
3157 @end table
3158
3159
3160
3161 @node acc_set_cuda_stream
3162 @section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
3163 @table @asis
3164 @item @emph{Description}
3165 This function associates the stream handle specified by @var{stream} with
3166 the queue @var{async}.
3167
3168 This cannot be used to change the stream handle associated with
3169 @code{acc_async_sync}.
3170
3171 The return value is not specified.
3172
3173 @item @emph{C/C++}:
3174 @multitable @columnfractions .20 .80
3175 @item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
3176 @end multitable
3177
3178 @item @emph{Reference}:
3179 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3180 A.2.1.4.
3181 @end table
3182
3183
3184
3185 @node acc_prof_register
3186 @section @code{acc_prof_register} -- Register callbacks.
3187 @table @asis
3188 @item @emph{Description}:
3189 This function registers callbacks.
3190
3191 @item @emph{C/C++}:
3192 @multitable @columnfractions .20 .80
3193 @item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
3194 @end multitable
3195
3196 @item @emph{See also}:
3197 @ref{OpenACC Profiling Interface}
3198
3199 @item @emph{Reference}:
3200 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3201 5.3.
3202 @end table
3203
3204
3205
3206 @node acc_prof_unregister
3207 @section @code{acc_prof_unregister} -- Unregister callbacks.
3208 @table @asis
3209 @item @emph{Description}:
3210 This function unregisters callbacks.
3211
3212 @item @emph{C/C++}:
3213 @multitable @columnfractions .20 .80
3214 @item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
3215 @end multitable
3216
3217 @item @emph{See also}:
3218 @ref{OpenACC Profiling Interface}
3219
3220 @item @emph{Reference}:
3221 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3222 5.3.
3223 @end table
3224
3225
3226
3227 @node acc_prof_lookup
3228 @section @code{acc_prof_lookup} -- Obtain inquiry functions.
3229 @table @asis
3230 @item @emph{Description}:
3231 Function to obtain inquiry functions.
3232
3233 @item @emph{C/C++}:
3234 @multitable @columnfractions .20 .80
3235 @item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
3236 @end multitable
3237
3238 @item @emph{See also}:
3239 @ref{OpenACC Profiling Interface}
3240
3241 @item @emph{Reference}:
3242 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3243 5.3.
3244 @end table
3245
3246
3247
3248 @node acc_register_library
3249 @section @code{acc_register_library} -- Library registration.
3250 @table @asis
3251 @item @emph{Description}:
3252 Function for library registration.
3253
3254 @item @emph{C/C++}:
3255 @multitable @columnfractions .20 .80
3256 @item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
3257 @end multitable
3258
3259 @item @emph{See also}:
3260 @ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
3261
3262 @item @emph{Reference}:
3263 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3264 5.3.
3265 @end table
3266
3267
3268
3269 @c ---------------------------------------------------------------------
3270 @c OpenACC Environment Variables
3271 @c ---------------------------------------------------------------------
3272
3273 @node OpenACC Environment Variables
3274 @chapter OpenACC Environment Variables
3275
3276 The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
3277 are defined by section 4 of the OpenACC specification in version 2.0.
3278 The variable @env{ACC_PROFLIB}
3279 is defined by section 4 of the OpenACC specification in version 2.6.
3280 The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.
3281
3282 @menu
3283 * ACC_DEVICE_TYPE::
3284 * ACC_DEVICE_NUM::
3285 * ACC_PROFLIB::
3286 * GCC_ACC_NOTIFY::
3287 @end menu
3288
3289
3290
3291 @node ACC_DEVICE_TYPE
3292 @section @code{ACC_DEVICE_TYPE}
3293 @table @asis
3294 @item @emph{Reference}:
3295 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3296 4.1.
3297 @end table
3298
3299
3300
3301 @node ACC_DEVICE_NUM
3302 @section @code{ACC_DEVICE_NUM}
3303 @table @asis
3304 @item @emph{Reference}:
3305 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3306 4.2.
3307 @end table
3308
3309
3310
3311 @node ACC_PROFLIB
3312 @section @code{ACC_PROFLIB}
3313 @table @asis
3314 @item @emph{See also}:
3315 @ref{acc_register_library}, @ref{OpenACC Profiling Interface}
3316
3317 @item @emph{Reference}:
3318 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3319 4.3.
3320 @end table
3321
3322
3323
3324 @node GCC_ACC_NOTIFY
3325 @section @code{GCC_ACC_NOTIFY}
3326 @table @asis
3327 @item @emph{Description}:
3328 Print debug information pertaining to the accelerator.
3329 @end table
3330
3331
3332
3333 @c ---------------------------------------------------------------------
3334 @c CUDA Streams Usage
3335 @c ---------------------------------------------------------------------
3336
3337 @node CUDA Streams Usage
3338 @chapter CUDA Streams Usage
3339
3340 This applies to the @code{nvptx} plugin only.
3341
3342 The library provides elements that perform asynchronous movement of
3343 data and asynchronous operation of computing constructs. This
3344 asynchronous functionality is implemented by making use of CUDA
3345 streams@footnote{See "Stream Management" in "CUDA Driver API",
3346 TRM-06703-001, Version 5.5, for additional information}.
3347
The primary means by which the asynchronous functionality is accessed
is through those OpenACC directives that make use of the @code{async}
and @code{wait} clauses. When the @code{async} clause is first used
with a directive, it creates a CUDA stream. If an @code{async-argument}
is used with the @code{async} clause, then the stream is associated
with the specified @code{async-argument}.

Following the creation of an association between a CUDA stream and the
@code{async-argument} of an @code{async} clause, both the @code{wait}
clause and the @code{wait} directive can be used.  When either the
clause or directive is used after stream creation, it creates a
rendezvous point whereby execution waits until all operations
associated with the @code{async-argument}, that is, stream, have
completed.

Normally, the management of the streams that are created as a result of
using the @code{async} clause is done without any intervention by the
caller.  This implies that the association between the @code{async-argument}
and the CUDA stream will be maintained for the lifetime of the program.
However, this association can be changed through the use of the library
function @code{acc_set_cuda_stream}.  When the function
@code{acc_set_cuda_stream} is called, the CUDA stream that was
originally associated with the @code{async} clause will be destroyed.
Caution should be taken when changing the association, as subsequent
references to the @code{async-argument} will refer to a different
CUDA stream.



@c ---------------------------------------------------------------------
@c OpenACC Library Interoperability
@c ---------------------------------------------------------------------

@node OpenACC Library Interoperability
@chapter OpenACC Library Interoperability

@section Introduction

The OpenACC library uses the CUDA Driver API, and may interact with
programs that use the Runtime library directly, or another library
based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
"Interactions with the CUDA Driver API" in
"CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
for additional information on library interoperability.}.
This chapter describes the use cases and what changes are
required in order to use both the OpenACC library and the CUBLAS and Runtime
libraries within a program.

@section First invocation: NVIDIA CUBLAS library API

In this first use case (see below), a function in the CUBLAS library is called
prior to any of the functions in the OpenACC library.  More specifically, the
function @code{cublasCreate()}.

When invoked, the function initializes the library and allocates the
hardware resources on the host and the device on behalf of the caller.  Once
the initialization and allocation have completed, a handle is returned to the
caller.  The OpenACC library also requires initialization and allocation of
hardware resources.  Since the CUBLAS library has already allocated the
hardware resources for the device, all that is left to do is to initialize
the OpenACC library and acquire the hardware resources on the host.

Prior to calling the OpenACC function that initializes the library and
allocates the host hardware resources, you need to acquire the device
number that was allocated during the call to @code{cublasCreate()}.
Invoking the runtime library function @code{cudaGetDevice()}
accomplishes this.  Once acquired, the device number is passed along
with the device type as parameters to the OpenACC library function
@code{acc_set_device_num()}.

Once the call to @code{acc_set_device_num()} has completed, the OpenACC
library uses the context that was created during the call to
@code{cublasCreate()}.  In other words, both libraries will be sharing the
same context.

@smallexample
/* Create the handle */
s = cublasCreate(&h);
if (s != CUBLAS_STATUS_SUCCESS)
  @{
    fprintf(stderr, "cublasCreate failed %d\n", s);
    exit(EXIT_FAILURE);
  @}

/* Get the device number */
e = cudaGetDevice(&dev);
if (e != cudaSuccess)
  @{
    fprintf(stderr, "cudaGetDevice failed %d\n", e);
    exit(EXIT_FAILURE);
  @}

/* Initialize OpenACC library and use device 'dev' */
acc_set_device_num(dev, acc_device_nvidia);

@end smallexample
@center Use Case 1

@section First invocation: OpenACC library API

In this second use case (see below), a function in the OpenACC library is
called prior to any of the functions in the CUBLAS library.  More
specifically, the function @code{acc_set_device_num()}.

In the use case presented here, the function @code{acc_set_device_num()}
is used to both initialize the OpenACC library and allocate the hardware
resources on the host and the device.  In the call to the function, the
call parameters specify which device to use and what device
type to use, i.e., @code{acc_device_nvidia}.  It should be noted that this
is but one method to initialize the OpenACC library and allocate the
appropriate hardware resources.  Other methods are available through the
use of environment variables, and these will be discussed in the next
section.

Once the call to @code{acc_set_device_num()} has completed, other OpenACC
functions can be called as seen with multiple calls being made to
@code{acc_copyin()}.  In addition, calls can be made to functions in the
CUBLAS library.  In this use case, a call to @code{cublasCreate()} is made
subsequent to the calls to @code{acc_copyin()}.
As seen in the previous use case, a call to @code{cublasCreate()}
initializes the CUBLAS library and allocates the hardware resources on the
host and the device.  However, since the device has already been allocated,
@code{cublasCreate()} will only initialize the CUBLAS library and allocate
the appropriate hardware resources on the host.  The context that was created
as part of the OpenACC initialization is shared with the CUBLAS library,
similarly to the first use case.

@smallexample
dev = 0;

acc_set_device_num(dev, acc_device_nvidia);

/* Copy the first set to the device */
d_X = acc_copyin(&h_X[0], N * sizeof (float));
if (d_X == NULL)
  @{
    fprintf(stderr, "copyin error h_X\n");
    exit(EXIT_FAILURE);
  @}

/* Copy the second set to the device */
d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
if (d_Y == NULL)
  @{
    fprintf(stderr, "copyin error h_Y1\n");
    exit(EXIT_FAILURE);
  @}

/* Create the handle */
s = cublasCreate(&h);
if (s != CUBLAS_STATUS_SUCCESS)
  @{
    fprintf(stderr, "cublasCreate failed %d\n", s);
    exit(EXIT_FAILURE);
  @}

/* Perform saxpy using CUBLAS library function */
s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
if (s != CUBLAS_STATUS_SUCCESS)
  @{
    fprintf(stderr, "cublasSaxpy failed %d\n", s);
    exit(EXIT_FAILURE);
  @}

/* Copy the results from the device */
acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));

@end smallexample
@center Use Case 2

@section OpenACC library and environment variables

There are two environment variables associated with the OpenACC library
that may be used to control the device type and device number:
@env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively.  These two
environment variables can be used as an alternative to calling
@code{acc_set_device_num()}.  As seen in the second use case, the device
type and device number were specified using @code{acc_set_device_num()}.
If, however, the aforementioned environment variables were set, then the
call to @code{acc_set_device_num()} would not be required.


The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}.  If @code{cublasCreate()}
is called prior to a call to an OpenACC function, then you must call
@code{acc_set_device_num()}@footnote{More complete information
about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC}
Application Programming Interface, Version 2.6.}.



@c ---------------------------------------------------------------------
@c OpenACC Profiling Interface
@c ---------------------------------------------------------------------

@node OpenACC Profiling Interface
@chapter OpenACC Profiling Interface

@section Implementation Status and Implementation-Defined Behavior

We're implementing the OpenACC Profiling Interface as defined by the
OpenACC 2.6 specification.  We're clarifying some aspects here as
@emph{implementation-defined behavior}, while they're still under
discussion within the OpenACC Technical Committee.

This implementation is tuned to keep the performance impact as low as
possible for the (very common) case that the Profiling Interface is
not enabled.  This is relevant, as the Profiling Interface affects all
the @emph{hot} code paths (in the target code, not in the offloaded
code).  Users of the OpenACC Profiling Interface can be expected to
understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because of the
@emph{runtime} (libgomp) calling into a third-party @emph{library} for
every event that has been registered.

We're not yet accounting for the fact that @cite{OpenACC events may
occur during event processing}.
We just handle one case specially, as required by CUDA 9.0
@command{nvprof}: @code{acc_get_device_type}
(@ref{acc_get_device_type}) may be called from
@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
callbacks.

We're not yet implementing initialization via an
@code{acc_register_library} function that is either statically linked
in, or dynamically via @env{LD_PRELOAD}.
Initialization via @code{acc_register_library} functions dynamically
loaded via the @env{ACC_PROFLIB} environment variable does work, as
does directly calling @code{acc_prof_register},
@code{acc_prof_unregister}, @code{acc_prof_lookup}.

As currently there are no inquiry functions defined, calls to
@code{acc_prof_lookup} will always return @code{NULL}.

There aren't separate @emph{start}, @emph{stop} events defined for the
event types @code{acc_ev_create}, @code{acc_ev_delete},
@code{acc_ev_alloc}, @code{acc_ev_free}.  It's not clear if these
should be triggered before or after the actual device-specific call is
made.  We trigger them after.

Remarks about data provided to callbacks:

@table @asis

@item @code{acc_prof_info.event_type}
It's not clear if for @emph{nested} event callbacks (for example,
@code{acc_ev_enqueue_launch_start} as part of a parent compute
construct), this should be set for the nested event
(@code{acc_ev_enqueue_launch_start}), or if the value of the parent
construct should remain (@code{acc_ev_compute_construct_start}).  In
this implementation, the value will generally correspond to the
innermost nested event type.

@item @code{acc_prof_info.device_type}
@itemize

@item
For @code{acc_ev_compute_construct_start}, and in presence of an
@code{if} clause with @emph{false} argument, this will still refer to
the offloading device type.
It's not clear if that's the expected behavior.

@item
Complementary to the item before, for
@code{acc_ev_compute_construct_end}, this is set to
@code{acc_device_host} in presence of an @code{if} clause with
@emph{false} argument.
It's not clear if that's the expected behavior.

@end itemize

@item @code{acc_prof_info.thread_id}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.async}
@itemize

@item
Not yet implemented correctly for
@code{acc_ev_compute_construct_start}.

@item
In a compute construct, for host-fallback
execution/@code{acc_device_host} it will always be
@code{acc_async_sync}.
It's not clear if that's the expected behavior.

@item
For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
it will always be @code{acc_async_sync}.
It's not clear if that's the expected behavior.

@end itemize

@item @code{acc_prof_info.async_queue}
There is no @cite{limited number of asynchronous queues} in libgomp.
This will always have the same value as @code{acc_prof_info.async}.

@item @code{acc_prof_info.src_file}
Always @code{NULL}; not yet implemented.

@item @code{acc_prof_info.func_name}
Always @code{NULL}; not yet implemented.

@item @code{acc_prof_info.line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.end_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.func_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.func_end_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
Relating to @code{acc_prof_info.event_type} discussed above, in this
implementation, this will always be the same value as
@code{acc_prof_info.event_type}.

@item @code{acc_event_info.*.parent_construct}
@itemize

@item
Will be @code{acc_construct_parallel} for all OpenACC compute
constructs as well as many OpenACC Runtime API calls; should be the
one matching the actual construct, or
@code{acc_construct_runtime_api}, respectively.

@item
Will be @code{acc_construct_enter_data} or
@code{acc_construct_exit_data} when processing variable mappings
specified in OpenACC @emph{declare} directives; should be
@code{acc_construct_declare}.

@item
For implicit @code{acc_ev_device_init_start},
@code{acc_ev_device_init_end}, and explicit as well as implicit
@code{acc_ev_alloc}, @code{acc_ev_free},
@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, will be
@code{acc_construct_parallel}; should reflect the real parent
construct.

@end itemize

@item @code{acc_event_info.*.implicit}
For @code{acc_ev_alloc}, @code{acc_ev_free},
@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, this currently will be @code{1}
also for explicit usage.

@item @code{acc_event_info.data_event.var_name}
Always @code{NULL}; not yet implemented.

@item @code{acc_event_info.data_event.host_ptr}
For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always
@code{NULL}.

@item @code{typedef union acc_api_info}
@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
Information}.  This should obviously be @code{typedef @emph{struct}
acc_api_info}.

@item @code{acc_api_info.device_api}
Possibly not yet implemented correctly for
@code{acc_ev_compute_construct_start},
@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
will always be @code{acc_device_api_none} for these event types.
For @code{acc_ev_enter_data_start}, it will be
@code{acc_device_api_none} in some cases.

@item @code{acc_api_info.device_type}
Always the same as @code{acc_prof_info.device_type}.

@item @code{acc_api_info.vendor}
Always @code{-1}; not yet implemented.

@item @code{acc_api_info.device_handle}
Always @code{NULL}; not yet implemented.

@item @code{acc_api_info.context_handle}
Always @code{NULL}; not yet implemented.

@item @code{acc_api_info.async_handle}
Always @code{NULL}; not yet implemented.

@end table

Remarks about certain event types:

@table @asis

@item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
@itemize

@item
@c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
@c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
@c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
When a compute construct triggers implicit
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
events, they currently aren't @emph{nested within} the corresponding
@code{acc_ev_compute_construct_start} and
@code{acc_ev_compute_construct_end}, but are instead observed
@emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do here: the standard asks us to provide a lot
of details to the @code{acc_ev_compute_construct_start} callback, but
how can we do that without (implicitly) initializing a device first?

@item
Callbacks for these event types will not be invoked for calls to the
@code{acc_set_device_type} and @code{acc_set_device_num} functions.
It's not clear if they should be.

@end itemize

@item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
@itemize

@item
Callbacks for these event types will also be invoked for OpenACC
@emph{host_data} constructs.
It's not clear if they should be.

@item
Callbacks for these event types will also be invoked when processing
variable mappings specified in OpenACC @emph{declare} directives.
It's not clear if they should be.

@end itemize

@end table

Callbacks for the following event types will be invoked, but dispatch
and the information provided therein have not yet been thoroughly
reviewed:

@itemize
@item @code{acc_ev_alloc}
@item @code{acc_ev_free}
@item @code{acc_ev_update_start}, @code{acc_ev_update_end}
@item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
@item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
@end itemize

During device initialization, and finalization, respectively,
callbacks for the following event types will not yet be invoked:

@itemize
@item @code{acc_ev_alloc}
@item @code{acc_ev_free}
@end itemize

Callbacks for the following event types have not yet been implemented,
so currently won't be invoked:

@itemize
@item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
@item @code{acc_ev_runtime_shutdown}
@item @code{acc_ev_create}, @code{acc_ev_delete}
@item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
@end itemize

For the following runtime library functions, not all expected
callbacks will be invoked (mostly concerning implicit device
initialization):

@itemize
@item @code{acc_get_num_devices}
@item @code{acc_set_device_type}
@item @code{acc_get_device_type}
@item @code{acc_set_device_num}
@item @code{acc_get_device_num}
@item @code{acc_init}
@item @code{acc_shutdown}
@end itemize

Aside from implicit device initialization, for the following runtime
library functions, no callbacks will be invoked for shared-memory
offloading devices (it's not clear if they should be):

@itemize
@item @code{acc_malloc}
@item @code{acc_free}
@item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
@item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
@item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
@item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
@item @code{acc_update_device}, @code{acc_update_device_async}
@item @code{acc_update_self}, @code{acc_update_self_async}
@item @code{acc_map_data}, @code{acc_unmap_data}
@item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
@item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
@end itemize



@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp.  Only maintainers should need them.

@menu
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
* Implementing OpenACC's PARALLEL construct::
@end menu


@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternately, we generate two copies of the parallel subfunction
and only include this in the version run by the primary thread.
Surely this is not worthwhile though...



@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name,

@smallexample
void GOMP_critical_start (void);
void GOMP_critical_end (void);
@end smallexample

so that we don't get COPY relocations from libgomp to the main
application.

With a specified name, use @code{omp_set_lock} and @code{omp_unset_lock},
with the name being transformed into a variable declared like

@smallexample
omp_lock_t gomp_critical_user_<name> __attribute__((common))
@end smallexample

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at
startup.



@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that, we could add

@smallexample
void GOMP_atomic_enter (void)
void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.



@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.



@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In @emph{most} cases we can map this directly to @code{__thread}.  Except
that OMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to .ctors.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.



@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.



@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and "small" structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
  body;
@end smallexample

which becomes

@smallexample
@{
  int x = x, y;

  // for stuff

  if (i == n)
    y = y;
@}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.



@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the primary thread iterates over the
array to collect the values.


@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
#pragma omp parallel
@{
  body;
@}
@end smallexample

becomes

@smallexample
void subfunction (void *data)
@{
  use data;
  body;
@}

setup data;
GOMP_parallel_start (subfunction, &data, num_threads);
subfunction (&data);
GOMP_parallel_end ();
@end smallexample

@smallexample
void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.



@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
#pragma omp parallel for
for (i = lb; i <= ub; i++)
  body;
@end smallexample

becomes

@smallexample
void subfunction (void *data)
@{
  long _s0, _e0;
  while (GOMP_loop_static_next (&_s0, &_e0))
    @{
      long _e1 = _e0, i;
      for (i = _s0; i < _e1; i++)
        body;
    @}
  GOMP_loop_end_nowait ();
@}

GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
subfunction (NULL);
GOMP_parallel_end ();
@end smallexample

@smallexample
#pragma omp for schedule(runtime)
for (i = 0; i < n; i++)
  body;
@end smallexample

becomes

@smallexample
@{
  long i, _s0, _e0;
  if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
    do @{
      long _e1 = _e0;
      for (i = _s0; i < _e1; i++)
        body;
    @} while (GOMP_loop_runtime_next (&_s0, &_e0));
  GOMP_loop_end ();
@}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we can also choose not to.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations.  That would mean that we wouldn't need to call any
of these routines.

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...



@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
void GOMP_ordered_start (void)
void GOMP_ordered_end (void)
@end smallexample



@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
#pragma omp sections
@{
  #pragma omp section
  stmt1;
  #pragma omp section
  stmt2;
  #pragma omp section
  stmt3;
@}
@end smallexample

becomes

@smallexample
for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
  switch (i)
    @{
    case 1:
      stmt1;
      break;
    case 2:
      stmt2;
      break;
    case 3:
      stmt3;
      break;
    @}
GOMP_barrier ();
@end smallexample


@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
#pragma omp single
@{
  body;
@}
@end smallexample

becomes

@smallexample
if (GOMP_single_start ())
  body;
GOMP_barrier ();
@end smallexample

while

@smallexample
#pragma omp single copyprivate(x)
  body;
@end smallexample

becomes

@smallexample
datap = GOMP_single_copy_start ();
if (datap == NULL)
  @{
    body;
    data.x = x;
    GOMP_single_copy_end (&data);
  @}
else
  x = datap->x;
GOMP_barrier ();
@end smallexample



@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

@smallexample
void GOACC_parallel ()
@end smallexample



@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
"openacc", or "openmp", or both to the keywords field in the bug
report, as appropriate.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye