\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2020 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below). A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software. Copies published by the Free Software Foundation raise
     funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library. This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page


@node Top
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library. This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library. Based
on this, support for OpenACC and offloading (both OpenACC and OpenMP
4's target construct) has been added later on, and the library's name
changed to GNU Offloading and Multi Processing Runtime Library.



@comment
@comment When you add a new menu item, please keep the right hand
@comment aligned to the same column. Do not use tabs. This provides
@comment better formatting.
@comment
@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* OpenMP Runtime Library Routines: Runtime Library Routines.
                               The OpenMP runtime application programming
                               interface.
* OpenMP Environment Variables: Environment Variables.
                               Influencing OpenMP runtime behavior with
                               environment variables.
* Enabling OpenACC::           How to enable OpenACC for your
                               applications.
* OpenACC Runtime Library Routines:: The OpenACC runtime application
                               programming interface.
* OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
                               environment variables.
* CUDA Streams Usage::         Notes on the implementation of
                               asynchronous operations.
* OpenACC Library Interoperability:: OpenACC library interoperability with the
                               NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in the GNU Offloading and
                               Multi Processing Runtime Library.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified. This enables the OpenMP directive
@code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
@code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
arranges for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).

A complete description of all OpenMP directives accepted may be found in
the @uref{https://www.openmp.org, OpenMP Application Program Interface} manual,
version 4.5.

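
The following complete C program is an illustrative sketch (it is not part
of the OpenMP specification text); it assumes a GCC-style command line such
as @command{gcc -fopenmp hello.c}. Compiling with @command{-fopenmp} enables
the directive and links the OpenMP runtime library automatically, as
described above.

@smallexample
#include <stdio.h>
#include <omp.h>   /* OpenMP runtime library routines.  */

int
main (void)
@{
  /* Each thread of the team created here executes the block once.  */
  #pragma omp parallel
  printf ("Hello from thread %d of %d\n",
          omp_get_thread_num (), omp_get_num_threads ());
  return 0;
@}
@end smallexample
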
@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines

The runtime routines described here are defined by Section 3 of the OpenMP
specification in version 4.5. The routines are structured in the following
three parts:

@menu
Control threads, processors and the parallel environment. They have C
linkage, and do not throw exceptions.

* omp_get_active_level::        Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_cancellation::        Whether cancellation support is enabled
* omp_get_default_device::      Get the default device for target regions
* omp_get_dynamic::             Dynamic teams setting
* omp_get_level::               Number of parallel regions
* omp_get_max_active_levels::   Maximum number of active regions
* omp_get_max_task_priority::   Maximum task priority value that can be set
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_nested::              Nested parallel regions
* omp_get_num_devices::         Number of target devices
* omp_get_num_procs::           Number of processors online
* omp_get_num_teams::           Number of teams
* omp_get_num_threads::         Size of the active team
* omp_get_proc_bind::           Whether threads may be moved between CPUs
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_team_num::            Get team number
* omp_get_team_size::           Number of threads in a team
* omp_get_thread_limit::        Maximum number of threads
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_in_final::                Whether in final or included task region
* omp_is_initial_device::       Whether executing on the host device
* omp_set_default_device::      Set the default device for target regions
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_set_nested::              Enable/disable nested parallel regions
* omp_set_num_threads::         Set upper team size limit
* omp_set_schedule::            Set the runtime scheduling method

Initialize, set, test, unset and destroy simple and nested locks.

* omp_init_lock::               Initialize simple lock
* omp_set_lock::                Wait for and set simple lock
* omp_test_lock::               Test and set simple lock if available
* omp_unset_lock::              Unset simple lock
* omp_destroy_lock::            Destroy simple lock
* omp_init_nest_lock::          Initialize nested lock
* omp_set_nest_lock::           Wait for and set nested lock
* omp_test_nest_lock::          Test and set nested lock if available
* omp_unset_nest_lock::         Unset nested lock
* omp_destroy_nest_lock::       Destroy nested lock

Portable, thread-based, wall clock timer.

* omp_get_wtick::               Get timer precision.
* omp_get_wtime::               Elapsed wall clock time.
@end menu



@node omp_get_active_level
@section @code{omp_get_active_level} -- Number of active parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel blocks
that enclose the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
@end table



@node omp_get_ancestor_thread_num
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number for the given
nesting level of the current thread. For values of @var{level} outside
the range zero to @code{omp_get_level}, -1 is returned; if @var{level} is
@code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
@end table
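
As an illustrative sketch (not taken from the OpenMP specification), the
following C program combines the level query routines. It enables nested
parallelism and prints, from the innermost region, the values reported by
@code{omp_get_level}, @code{omp_get_active_level} and
@code{omp_get_ancestor_thread_num}; all routines used are documented in
this chapter.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_set_nested (1);        /* Allow nested parallel regions.  */
  omp_set_num_threads (2);

  #pragma omp parallel       /* Nesting level 1.  */
  #pragma omp parallel       /* Nesting level 2.  */
  #pragma omp single
  printf ("level %d, active level %d, ancestor thread at level 1: %d\n",
          omp_get_level (), omp_get_active_level (),
          omp_get_ancestor_thread_num (1));
  return 0;
@}
@end smallexample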



@node omp_get_cancellation
@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
@table @asis
@item @emph{Description}:
This function returns @code{true} if cancellation is activated, @code{false}
otherwise. Here, @code{true} and @code{false} represent their language-specific
counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
deactivated.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@end multitable

@item @emph{See also}:
@ref{OMP_CANCELLATION}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@end table



@node omp_get_default_device
@section @code{omp_get_default_device} -- Get the default device for target regions
@table @asis
@item @emph{Description}:
Get the default device for target regions without device clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
@end table



@node omp_get_dynamic
@section @code{omp_get_dynamic} -- Dynamic teams setting
@table @asis
@item @emph{Description}:
This function returns @code{true} if enabled, @code{false} otherwise.
Here, @code{true} and @code{false} represent their language-specific
counterparts.

The dynamic team setting may be initialized at startup by the
@env{OMP_DYNAMIC} environment variable or at runtime using
@code{omp_set_dynamic}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
@end multitable

@item @emph{See also}:
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
@end table



@node omp_get_level
@section @code{omp_get_level} -- Obtain the current nesting level
@table @asis
@item @emph{Description}:
This function returns the nesting level of the parallel blocks
that enclose the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
@end table



@node omp_get_max_active_levels
@section @code{omp_get_max_active_levels} -- Maximum number of active regions
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed number of nested, active parallel regions.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
@end table


@node omp_get_max_task_priority
@section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks.
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed priority number for tasks.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table


@node omp_get_max_threads
@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
@table @asis
@item @emph{Description}:
Return the maximum number of threads used for the current parallel region
that does not use the clause @code{num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
@end table



@node omp_get_nested
@section @code{omp_get_nested} -- Nested parallel regions
@table @asis
@item @emph{Description}:
This function returns @code{true} if nested parallel regions are
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
represent their language-specific counterparts.

Nested parallel regions may be initialized at startup by the
@env{OMP_NESTED} environment variable or at runtime using
@code{omp_set_nested}. If undefined, nested parallel regions are
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
@end multitable

@item @emph{See also}:
@ref{omp_set_nested}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
@end table



@node omp_get_num_devices
@section @code{omp_get_num_devices} -- Number of target devices
@table @asis
@item @emph{Description}:
Returns the number of target devices.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
@end table



@node omp_get_num_procs
@section @code{omp_get_num_procs} -- Number of processors online
@table @asis
@item @emph{Description}:
Returns the number of processors online on that device.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
@end table



@node omp_get_num_teams
@section @code{omp_get_num_teams} -- Number of teams
@table @asis
@item @emph{Description}:
Returns the number of teams in the current team region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
@end table



@node omp_get_num_threads
@section @code{omp_get_num_threads} -- Size of the active team
@table @asis
@item @emph{Description}:
Returns the number of threads in the current team. In a sequential section of
the program @code{omp_get_num_threads} returns 1.

The default team size may be initialized at startup by the
@env{OMP_NUM_THREADS} environment variable. At runtime, the size
of the current team may be set either by the @code{NUM_THREADS}
clause or by @code{omp_set_num_threads}. If none of the above were
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
one thread per CPU online is used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
@end table



@node omp_get_proc_bind
@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
@table @asis
@item @emph{Description}:
This function returns the currently active thread affinity policy, which is
set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
@code{omp_proc_bind_true}, @code{omp_proc_bind_master},
@code{omp_proc_bind_close} and @code{omp_proc_bind_spread}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
@end multitable

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
@end table



@node omp_get_schedule
@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
@table @asis
@item @emph{Description}:
Obtain the runtime scheduling method. The @var{kind} argument will be
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
@var{chunk_size}, is set to the chunk size.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
@end table
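
As an illustration (not part of the specification text), the following C
sketch reads back the scheduling method currently in effect, for example
the one selected via the @env{OMP_SCHEDULE} environment variable:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_sched_t kind;
  int chunk_size;

  /* Query the current runtime scheduling method and chunk size.  */
  omp_get_schedule (&kind, &chunk_size);
  printf ("schedule kind %d, chunk size %d\n", (int) kind, chunk_size);
  return 0;
@}
@end smallexample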



@node omp_get_team_num
@section @code{omp_get_team_num} -- Get team number
@table @asis
@item @emph{Description}:
Returns the team number of the calling thread.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
@end table



@node omp_get_team_size
@section @code{omp_get_team_size} -- Number of threads in a team
@table @asis
@item @emph{Description}:
This function returns the number of threads in a thread team to which
either the current thread or its ancestor belongs. For values of @var{level}
outside the range zero to @code{omp_get_level}, -1 is returned; if
@var{level} is zero, 1 is returned, and for @code{omp_get_level}, the result
is identical to @code{omp_get_num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
@end table



@node omp_get_thread_limit
@section @code{omp_get_thread_limit} -- Maximum number of threads
@table @asis
@item @emph{Description}:
Return the maximum number of threads of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
@end table



@node omp_get_thread_num
@section @code{omp_get_thread_num} -- Current thread ID
@table @asis
@item @emph{Description}:
Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
always returns 0. In parallel regions the return value varies
from 0 to @code{omp_get_num_threads}-1 inclusive. The return
value of the master thread of a team is always 0.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
@end table
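
The following C sketch (an illustration only) shows the behavior described
above: in the sequential part the returned thread number is always 0, while
inside a parallel region it ranges from 0 to @code{omp_get_num_threads}-1.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  /* Sequential part: always thread 0 and not inside a parallel region.  */
  printf ("sequential: thread %d, in parallel: %d\n",
          omp_get_thread_num (), omp_in_parallel ());

  #pragma omp parallel num_threads (4)
  if (omp_get_thread_num () == 0)     /* The master thread.  */
    printf ("parallel: team of %d threads\n", omp_get_num_threads ());
  return 0;
@}
@end smallexample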



@node omp_in_parallel
@section @code{omp_in_parallel} -- Whether a parallel region is active
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in parallel,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
@end table


@node omp_in_final
@section @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
or included task region, @code{false} otherwise. Here, @code{true}
and @code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
@end table



@node omp_is_initial_device
@section @code{omp_is_initial_device} -- Whether executing on the host device
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running on the host device,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
@end table
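
As a sketch (not taken from the specification), the following C program
checks from inside a @code{target} region whether the region was actually
offloaded; if no target device is available, the region falls back to the
host and @code{omp_is_initial_device} returns @code{true}.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  int on_host = 1;

  #pragma omp target map(from: on_host)
  on_host = omp_is_initial_device ();

  printf ("target region executed on the %s\n",
          on_host ? "host" : "target device");
  return 0;
@}
@end smallexample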



@node omp_set_default_device
@section @code{omp_set_default_device} -- Set the default device for target regions
@table @asis
@item @emph{Description}:
Set the default device for target regions without device clause. The argument
shall be a nonnegative device number.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
@item @tab @code{integer device_num}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table



@node omp_set_dynamic
@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
@item @tab @code{logical, intent(in) :: dynamic_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
@end table
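
For illustration (not part of the specification text), the following C
sketch disables dynamic adjustment so that a subsequently requested team
size is honored, subject to the overall thread limit:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_set_dynamic (0);       /* Disable dynamic teams.  */
  omp_set_num_threads (4);

  #pragma omp parallel
  #pragma omp single
  printf ("dynamic: %d, team size: %d\n",
          omp_get_dynamic (), omp_get_num_threads ());
  return 0;
@}
@end smallexample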



@node omp_set_max_active_levels
@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
This function limits the maximum allowed number of nested, active
parallel regions.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
@item @tab @code{integer max_levels}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
@end table



@node omp_set_nested
@section @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
@item @tab @code{logical, intent(in) :: nested}
@end multitable

@item @emph{See also}:
@ref{OMP_NESTED}, @ref{omp_get_nested}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
@end table



@node omp_set_num_threads
@section @code{omp_set_num_threads} -- Set upper team size limit
@table @asis
@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
sections, if those do not specify a @code{num_threads} clause. The
argument of @code{omp_set_num_threads} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
@item @tab @code{integer, intent(in) :: num_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
@end table



@node omp_set_schedule
@section @code{omp_set_schedule} -- Set the runtime scheduling method
@table @asis
@item @emph{Description}:
Sets the runtime scheduling method. The @var{kind} argument can have the
value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
@code{omp_sched_auto}, the chunk size is set to the value of
@var{chunk_size} if positive, or to the default value if zero or negative.
For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
@end table
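
The following C sketch (an illustration only) selects dynamic scheduling
with a chunk size of 8; the setting takes effect for loops that use the
@code{schedule(runtime)} clause:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  int i;
  long sum = 0;

  omp_set_schedule (omp_sched_dynamic, 8);

  #pragma omp parallel for schedule(runtime) reduction(+:sum)
  for (i = 0; i < 1000; i++)
    sum += i;

  printf ("sum = %ld\n", sum);
  return 0;
@}
@end smallexample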



@node omp_init_lock
@section @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock. After initialization, the lock is in
an unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table



@node omp_set_lock
@section @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread,
a deadlock occurs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table



@node omp_test_lock
@section @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available. This function returns
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node omp_unset_lock
@section @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
or more threads attempted to set the lock before, one of them is chosen to
set the lock for itself.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table



@node omp_destroy_lock
@section @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock. In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table
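
The simple lock routines are typically used together as in the following
C sketch (an illustration, not specification text), which serializes the
updates of a shared counter:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_lock_t lock;
  int counter = 0;

  omp_init_lock (&lock);

  #pragma omp parallel num_threads (8)
  @{
    omp_set_lock (&lock);    /* Wait until the lock is available.  */
    counter++;               /* Protected update.  */
    omp_unset_lock (&lock);
  @}

  omp_destroy_lock (&lock);
  printf ("counter = %d\n", counter);
  return 0;
@}
@end smallexample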



@node omp_init_nest_lock
@section @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock. After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table


@node omp_set_nest_lock
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table



@node omp_test_nest_lock
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is already held by the current thread, the new nesting count
is returned. Otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table



@node omp_unset_nest_lock
@section @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
lock becomes unlocked. If one or more threads attempted to set the lock before,
one of them is chosen to set the lock for itself.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table



@node omp_destroy_nest_lock
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock. In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table
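
As an illustrative sketch (not part of the specification), the following C
program shows why nested locks are useful: a routine that acquires the lock
may be called both with and without the lock already held by the calling
thread, and the nesting count keeps track of repeated acquisitions.

@smallexample
#include <stdio.h>
#include <omp.h>

omp_nest_lock_t lock;

static void
update (int *data)
@{
  /* May be called with the lock already held by this thread;
     in that case only the nesting count is incremented.  */
  omp_set_nest_lock (&lock);
  (*data)++;
  omp_unset_nest_lock (&lock);
@}

int
main (void)
@{
  int data = 0;

  omp_init_nest_lock (&lock);

  #pragma omp parallel num_threads (4)
  @{
    omp_set_nest_lock (&lock);
    update (&data);          /* Acquires the same lock again.  */
    omp_unset_nest_lock (&lock);
  @}

  omp_destroy_nest_lock (&lock);
  printf ("data = %d\n", data);
  return 0;
@}
@end smallexample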



@node omp_get_wtick
@section @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
@end table



@node omp_get_wtime
@section @code{omp_get_wtime} -- Elapsed wall clock time
@table @asis
@item @emph{Description}:
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some ``time in the past'', which is an arbitrary time
guaranteed not to change during the execution of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtick}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
@end table
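
For illustration only (not specification text), the following C sketch
measures the elapsed wall clock time of a parallel loop and also reports
the timer precision:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  double sum = 0.0, start, elapsed;
  long i;

  start = omp_get_wtime ();

  #pragma omp parallel for reduction(+:sum)
  for (i = 0; i < 100000000; i++)
    sum += 1.0 / (i + 1);

  elapsed = omp_get_wtime () - start;
  printf ("sum = %f, elapsed = %f s (tick = %g s)\n",
          sum, elapsed, omp_get_wtick ());
  return 0;
@}
@end smallexample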
1329
1330
1331
1332@c ---------------------------------------------------------------------
4102bda6 1333@c OpenMP Environment Variables
3721b9e1
DF
1334@c ---------------------------------------------------------------------
1335
1336@node Environment Variables
4102bda6 1337@chapter OpenMP Environment Variables
3721b9e1 1338
acf0174b 1339The environment variables which beginning with @env{OMP_} are defined by
00b9bd52 1340section 4 of the OpenMP specification in version 4.5, while those
acf0174b 1341beginning with @env{GOMP_} are GNU extensions.
3721b9e1
DF
1342
1343@menu
06441dd5
SH
1344* OMP_CANCELLATION:: Set whether cancellation is activated
1345* OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
1346* OMP_DEFAULT_DEVICE:: Set the device used in target regions
1347* OMP_DYNAMIC:: Dynamic adjustment of threads
1348* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
d9a6bd32 1349* OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
06441dd5
SH
1350* OMP_NESTED:: Nested parallel regions
1351* OMP_NUM_THREADS:: Specifies the number of threads to use
1352* OMP_PROC_BIND:: Whether theads may be moved between CPUs
1353* OMP_PLACES:: Specifies on which CPUs the theads should be placed
1354* OMP_STACKSIZE:: Set default thread stack size
1355* OMP_SCHEDULE:: How threads are scheduled
1356* OMP_THREAD_LIMIT:: Set the maximum number of threads
1357* OMP_WAIT_POLICY:: How waiting threads are handled
1358* GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
1359* GOMP_DEBUG:: Enable debugging output
1360* GOMP_STACKSIZE:: Set default thread stack size
1361* GOMP_SPINCOUNT:: Set the busy-wait spin count
1362* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
3721b9e1
DF
1363@end menu
1364
1365
83fd6c5b
TB
1366@node OMP_CANCELLATION
1367@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
1368@cindex Environment Variable
1369@table @asis
1370@item @emph{Description}:
1371If set to @code{TRUE}, the cancellation is activated. If set to @code{FALSE} or
1372if unset, cancellation is disabled and the @code{cancel} construct is ignored.
1373
1374@item @emph{See also}:
1375@ref{omp_get_cancellation}
1376
1377@item @emph{Reference}:
1a6d1d24 1378@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
83fd6c5b
TB
1379@end table
1380
1381
1382
1383@node OMP_DISPLAY_ENV
1384@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
1385@cindex Environment Variable
1386@table @asis
1387@item @emph{Description}:
1388If set to @code{TRUE}, the OpenMP version number and the values
1389associated with the OpenMP environment variables are printed to @code{stderr}.
1390If set to @code{VERBOSE}, it additionally shows the value of the environment
1391variables which are GNU extensions. If undefined or set to @code{FALSE},
1392this information will not be shown.
1393
1394
1395@item @emph{Reference}:
1a6d1d24 1396@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
83fd6c5b
TB
1397@end table
1398
1399
1400
1401@node OMP_DEFAULT_DEVICE
1402@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
1403@cindex Environment Variable
1404@table @asis
1405@item @emph{Description}:
1406Set to choose the device which is used in a @code{target} region, unless the
1407value is overridden by @code{omp_set_default_device} or by a @code{device}
1408clause. The value shall be the nonnegative device number. If no device with
1409the given device number exists, the code is executed on the host. If unset,
1410device number 0 will be used.
1411
1412
1413@item @emph{See also}:
1414@ref{omp_get_default_device}, @ref{omp_set_default_device},
1415
1416@item @emph{Reference}:
1a6d1d24 1417@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
83fd6c5b
TB
1418@end table
1419
1420
1421
3721b9e1
DF
1422@node OMP_DYNAMIC
1423@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
1424@cindex Environment Variable
1425@table @asis
1426@item @emph{Description}:
1427Enable or disable the dynamic adjustment of the number of threads
83fd6c5b
TB
1428within a team. The value of this environment variable shall be
1429@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
7c2b7f45 1430disabled by default.
3721b9e1
DF
1431
1432@item @emph{See also}:
1433@ref{omp_set_dynamic}
1434
1435@item @emph{Reference}:
1a6d1d24 1436@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
5c6ed53a
TB
1437@end table
1438
1439
1440
1441@node OMP_MAX_ACTIVE_LEVELS
6a2ba183 1442@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
5c6ed53a
TB
1443@cindex Environment Variable
1444@table @asis
1445@item @emph{Description}:
6a2ba183 1446Specifies the initial value for the maximum number of nested parallel
83fd6c5b 1447regions. The value of this variable shall be a positive integer.
5c6ed53a
TB
1448If undefined, the number of active levels is unlimited.
1449
1450@item @emph{See also}:
1451@ref{omp_set_max_active_levels}
1452
1453@item @emph{Reference}:
1a6d1d24 1454@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
3721b9e1
DF
1455@end table
1456
1457
1458
d9a6bd32
JJ
1459@node OMP_MAX_TASK_PRIORITY
1460@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority
1461number that can be set for a task.
1462@cindex Environment Variable
1463@table @asis
1464@item @emph{Description}:
1465Specifies the initial value for the maximum priority value that can be
1466set for a task. The value of this variable shall be a non-negative
1467integer, and zero is allowed. If undefined, the default priority is
14680.
1469
1470@item @emph{See also}:
1471@ref{omp_get_max_task_priority}
1472
1473@item @emph{Reference}:
1a6d1d24 1474@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
d9a6bd32
JJ
1475@end table
1476
1477
1478
3721b9e1
DF
1479@node OMP_NESTED
1480@section @env{OMP_NESTED} -- Nested parallel regions
1481@cindex Environment Variable
14734fc7 1482@cindex Implementation specific setting
3721b9e1
DF
1483@table @asis
1484@item @emph{Description}:
f1b0882e 1485Enable or disable nested parallel regions, i.e., whether team members
83fd6c5b
TB
1486are allowed to create new teams. The value of this environment variable
1487shall be @code{TRUE} or @code{FALSE}. If undefined, nested parallel
7c2b7f45 1488regions are disabled by default.
3721b9e1
DF
1489
1490@item @emph{See also}:
1491@ref{omp_set_nested}
1492
1493@item @emph{Reference}:
1a6d1d24 1494@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
3721b9e1
DF
1495@end table
1496
1497
1498
1499@node OMP_NUM_THREADS
1500@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
1501@cindex Environment Variable
14734fc7 1502@cindex Implementation specific setting
3721b9e1
DF
1503@table @asis
1504@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive integers;
each value specifies the number of threads to use for the corresponding
nesting level. If undefined, one thread per CPU is used.
3721b9e1
DF
1509
1510@item @emph{See also}:
1511@ref{omp_set_num_threads}
1512
1513@item @emph{Reference}:
1a6d1d24 1514@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
83fd6c5b
TB
1515@end table
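
The value set here can be checked from within the program; the following
minimal sketch assumes the program is started with @code{OMP_NUM_THREADS}
set to @code{4} in the environment and simply prints the resulting team
size of the outermost parallel region:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  #pragma omp parallel
  @{
    #pragma omp master
    printf ("outer team size: %d\n", omp_get_num_threads ());
  @}
  return 0;
@}
@end smallexample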
1516
1517
1518
72832460
UB
1519@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
1521@cindex Environment Variable
1522@table @asis
1523@item @emph{Description}:
1524Specifies whether threads may be moved between processors. If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
1526they may be moved. Alternatively, a comma separated list with the
1527values @code{MASTER}, @code{CLOSE} and @code{SPREAD} can be used to specify
1528the thread affinity policy for the corresponding nesting level. With
1529@code{MASTER} the worker threads are in the same place partition as the
1530master thread. With @code{CLOSE} those are kept close to the master thread
1531in contiguous place partitions. And with @code{SPREAD} a sparse distribution
1532across the place partitions is used.
1533
1534When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
1535@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
1536
1537@item @emph{See also}:
1538@ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind}
1539
1540@item @emph{Reference}:
1a6d1d24 1541@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
72832460
UB
1542@end table
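
For illustration only, the binding policy that is in effect for the current
task can be queried with @ref{omp_get_proc_bind}; a minimal sketch:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  /* 0 = false, 1 = true, 2 = master, 3 = close, 4 = spread.  */
  printf ("omp_get_proc_bind () = %d\n", (int) omp_get_proc_bind ());
  return 0;
@}
@end smallexample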
1543
1544
1545
83fd6c5b
TB
1546@node OMP_PLACES
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
1548@cindex Environment Variable
1549@table @asis
1550@item @emph{Description}:
1551The thread placement can be either specified using an abstract name or by an
1552explicit list of the places. The abstract names @code{threads}, @code{cores}
1553and @code{sockets} can be optionally followed by a positive number in
parentheses, which denotes how many places shall be created. With
1555@code{threads} each place corresponds to a single hardware thread; @code{cores}
1556to a single core with the corresponding number of hardware threads; and with
1557@code{sockets} the place corresponds to a single socket. The resulting
1558placement can be shown by setting the @env{OMP_DISPLAY_ENV} environment
1559variable.
1560
Alternatively, the placement can be specified explicitly as a comma-separated
list of places. A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads. The hardware threads belonging to a
place can either be specified as a comma-separated list of nonnegative thread
numbers or using an interval. Multiple places can likewise be specified either
as a comma-separated list of places or using an interval. To specify an
interval, a colon followed by the length is placed after the hardware thread
number or the place. Optionally, the length can be followed by a colon and
the stride -- otherwise a unit stride is assumed. For instance, the following
three settings specify the same places list:
@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.
1573
1574If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
1575@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
1576between CPUs following no placement policy.
1577
1578@item @emph{See also}:
1579@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
1580@ref{OMP_DISPLAY_ENV}
1581
1582@item @emph{Reference}:
1a6d1d24 1583@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
83fd6c5b
TB
1584@end table
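
The resulting placement can also be inspected programmatically with the
OpenMP 4.5 place query routines; the following sketch prints every place
together with the hardware threads it contains:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  int n = omp_get_num_places ();
  for (int p = 0; p < n; p++)
    @{
      int len = omp_get_place_num_procs (p);
      int ids[len > 0 ? len : 1];
      omp_get_place_proc_ids (p, ids);
      printf ("place %d:", p);
      for (int i = 0; i < len; i++)
        printf (" %d", ids[i]);
      printf ("\n");
    @}
  return 0;
@}
@end smallexample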
1585
1586
1587
72832460
UB
1588@node OMP_STACKSIZE
1589@section @env{OMP_STACKSIZE} -- Set default thread stack size
83fd6c5b
TB
1590@cindex Environment Variable
1591@table @asis
1592@item @emph{Description}:
72832460
UB
1593Set the default thread stack size in kilobytes, unless the number
1594is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
1595case the size is, respectively, in bytes, kilobytes, megabytes
1596or gigabytes. This is different from @code{pthread_attr_setstacksize}
1597which gets the number of bytes as an argument. If the stack size cannot
1598be set due to system constraints, an error is reported and the initial
1599stack size is left unchanged. If undefined, the stack size is system
1600dependent.
83fd6c5b 1601
72832460 1602@item @emph{Reference}:
1a6d1d24 1603@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
3721b9e1
DF
1604@end table
1605
1606
1607
1608@node OMP_SCHEDULE
1609@section @env{OMP_SCHEDULE} -- How threads are scheduled
1610@cindex Environment Variable
14734fc7 1611@cindex Implementation specific setting
3721b9e1
DF
1612@table @asis
1613@item @emph{Description}:
Allows one to specify @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form: @code{type[,chunk]} where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or
@code{auto}. The optional @code{chunk} size shall be a positive integer.
If undefined, dynamic scheduling and a chunk size of 1 are used.
3721b9e1 1619
5c6ed53a
TB
1620@item @emph{See also}:
1621@ref{omp_set_schedule}
1622
1623@item @emph{Reference}:
1a6d1d24 1624@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
5c6ed53a
TB
1625@end table
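
Note that @env{OMP_SCHEDULE} only affects loops whose @code{schedule}
clause is @code{runtime}. A small sketch, assuming the program is started
with the variable set to @code{"dynamic,4"}:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_sched_t kind;
  int chunk;
  omp_get_schedule (&kind, &chunk);  /* reports the runtime schedule */
  printf ("kind = %d, chunk = %d\n", (int) kind, chunk);

  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < 16; i++)
    printf ("iteration %2d on thread %d\n", i, omp_get_thread_num ());
  return 0;
@}
@end smallexample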
1626
1627
1628
5c6ed53a 1629@node OMP_THREAD_LIMIT
6a2ba183 1630@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
5c6ed53a
TB
1631@cindex Environment Variable
1632@table @asis
1633@item @emph{Description}:
83fd6c5b
TB
1634Specifies the number of threads to use for the whole program. The
1635value of this variable shall be a positive integer. If undefined,
1636the number of threads is not limited.
1637
1638@item @emph{See also}:
83fd6c5b 1639@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
5c6ed53a
TB
1640
1641@item @emph{Reference}:
1a6d1d24 1642@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
5c6ed53a
TB
1643@end table
1644
1645
1646
1647@node OMP_WAIT_POLICY
1648@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
1649@cindex Environment Variable
1650@table @asis
1651@item @emph{Description}:
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; the value @code{ACTIVE} specifies that they
should. If undefined, threads wait actively for a short time
before waiting passively.
1657
1658@item @emph{See also}:
1659@ref{GOMP_SPINCOUNT}
5c6ed53a
TB
1660
1661@item @emph{Reference}:
1a6d1d24 1662@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
3721b9e1
DF
1663@end table
1664
1665
1666
1667@node GOMP_CPU_AFFINITY
1668@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
1669@cindex Environment Variable
1670@table @asis
1671@item @emph{Description}:
83fd6c5b
TB
1672Binds threads to specific CPUs. The variable should contain a space-separated
1673or comma-separated list of CPUs. This list may contain different kinds of
06785a48 1674entries: either single CPU numbers in any order, a range of CPUs (M-N)
83fd6c5b 1675or a range with some stride (M-N:S). CPU numbers are zero based. For example,
1676@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
1677to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
1678CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
1679and 14 respectively and then start assigning back from the beginning of
6a2ba183 1680the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
06785a48 1681
f1f3453e 1682There is no libgomp library routine to determine whether a CPU affinity
83fd6c5b 1683specification is in effect. As a workaround, language-specific library
1684functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
1685Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
83fd6c5b 1686environment variable. A defined CPU affinity on startup cannot be changed
1687or disabled during the runtime of the application.
1688
83fd6c5b
TB
1689If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
1690@env{OMP_PROC_BIND} has a higher precedence. If neither has been set and
1691@env{OMP_PROC_BIND} is unset, or when @env{OMP_PROC_BIND} is set to
1692@code{FALSE}, the host system will handle the assignment of threads to CPUs.
20906c66
JJ
1693
1694@item @emph{See also}:
83fd6c5b 1695@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
3721b9e1
DF
1696@end table
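
A sketch of the workaround mentioned above, using @code{getenv} to inspect
the affinity specification from C:

@smallexample
#include <stdio.h>
#include <stdlib.h>

int
main (void)
@{
  const char *aff = getenv ("GOMP_CPU_AFFINITY");
  if (aff)
    printf ("GOMP_CPU_AFFINITY is set to \"%s\"\n", aff);
  else
    printf ("GOMP_CPU_AFFINITY is not set\n");
  return 0;
@}
@end smallexample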
1697
1698
1699
41dbbb37
TS
1700@node GOMP_DEBUG
1701@section @env{GOMP_DEBUG} -- Enable debugging output
1702@cindex Environment Variable
1703@table @asis
1704@item @emph{Description}:
1705Enable debugging output. The variable should be set to @code{0}
1706(disabled, also the default if not set), or @code{1} (enabled).
1707
1708If enabled, some debugging output will be printed during execution.
1709This is currently not specified in more detail, and subject to change.
1710@end table
1711
1712
1713
3721b9e1
DF
1714@node GOMP_STACKSIZE
1715@section @env{GOMP_STACKSIZE} -- Set default thread stack size
1716@cindex Environment Variable
14734fc7 1717@cindex Implementation specific setting
3721b9e1
DF
1718@table @asis
1719@item @emph{Description}:
83fd6c5b 1720Set the default thread stack size in kilobytes. This is different from
5c6ed53a 1721@code{pthread_attr_setstacksize} which gets the number of bytes as an
1722argument. If the stack size cannot be set due to system constraints, an
1723error is reported and the initial stack size is left unchanged. If undefined,
7c2b7f45 1724the stack size is system dependent.
3721b9e1 1725
5c6ed53a 1726@item @emph{See also}:
0024f1af 1727@ref{OMP_STACKSIZE}
5c6ed53a 1728
3721b9e1 1729@item @emph{Reference}:
c1030b5c 1730@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
3721b9e1 1731GCC Patches Mailinglist},
c1030b5c 1732@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
3721b9e1
DF
1733GCC Patches Mailinglist}
1734@end table
1735
1736
1737
acf0174b
JJ
1738@node GOMP_SPINCOUNT
1739@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
1740@cindex Environment Variable
1741@cindex Implementation specific setting
1742@table @asis
1743@item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power. The value may be
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop. The
integer may optionally be followed by the following suffixes acting
1749as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
1750million), @code{G} (giga, billion), or @code{T} (tera, trillion).
1751If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
1752300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
175330 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
1754If there are more OpenMP threads than available CPUs, 1000 and 100
1755spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
1756undefined, respectively; unless the @env{GOMP_SPINCOUNT} is lower
1757or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
1758
1759@item @emph{See also}:
1760@ref{OMP_WAIT_POLICY}
1761@end table
1762
1763
1764
06441dd5
SH
1765@node GOMP_RTEMS_THREAD_POOLS
1766@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
1767@cindex Environment Variable
1768@cindex Implementation specific setting
1769@table @asis
1770@item @emph{Description}:
1771This environment variable is only used on the RTEMS real-time operating system.
1772It determines the scheduler instance specific thread pools. The format for
1773@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
1774@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
1775separated by @code{:} where:
1776@itemize @bullet
1777@item @code{<thread-pool-count>} is the thread pool count for this scheduler
1778instance.
1779@item @code{$<priority>} is an optional priority for the worker threads of a
1780thread pool according to @code{pthread_setschedparam}. In case a priority
1781value is omitted, then a worker thread will inherit the priority of the OpenMP
1782master thread that created it. The priority of the worker thread is not
1783changed after creation, even if a new OpenMP master thread using the worker has
1784a different priority.
1785@item @code{@@<scheduler-name>} is the scheduler instance name according to the
1786RTEMS application configuration.
1787@end itemize
1788In case no thread pool configuration is specified for a scheduler instance,
1789then each OpenMP master thread of this scheduler instance will use its own
1790dynamically allocated thread pool. To limit the worker thread count of the
1791thread pools, each OpenMP master thread must call @code{omp_set_num_threads}.
1792@item @emph{Example}:
Let us suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
1794@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
1795@code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
1796scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
1797one thread pool available. Since no priority is specified for this scheduler
1798instance, the worker thread inherits the priority of the OpenMP master thread
1799that created it. In the scheduler instance @code{WRK1} there are three thread
1800pools available and their worker threads run at priority four.
1801@end table
1802
1803
1804
cdf6119d
JN
1805@c ---------------------------------------------------------------------
1806@c Enabling OpenACC
1807@c ---------------------------------------------------------------------
1808
1809@node Enabling OpenACC
1810@chapter Enabling OpenACC
1811
1812To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
1813flag @option{-fopenacc} must be specified. This enables the OpenACC directive
c1030b5c 1814@code{#pragma acc} in C/C++ and @code{!$acc} directives in free form,
cdf6119d
JN
1815@code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
1816@code{!$} conditional compilation sentinels in free form and @code{c$},
1817@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
1818arranges for automatic linking of the OpenACC runtime library
1819(@ref{OpenACC Runtime Library Routines}).
1820
1821A complete description of all OpenACC directives accepted may be found in
9651fbaf 1822the @uref{https://www.openacc.org, OpenACC} Application Programming
cdf6119d
JN
1823Interface manual, version 2.0.
1824
1825Note that this is an experimental feature and subject to
1826change in future versions of GCC. See
1827@uref{https://gcc.gnu.org/wiki/OpenACC} for more information.
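
As a minimal illustration, the following C program offloads a simple loop;
assuming it is saved as @file{vadd.c} (the file name is arbitrary), it can
be built with @code{gcc -fopenacc vadd.c}:

@smallexample
#include <stdio.h>

int
main (void)
@{
  float a[1024], b[1024];
  for (int i = 0; i < 1024; i++)
    @{
      a[i] = i;
      b[i] = 2.0f * i;
    @}

  #pragma acc parallel loop copy(a) copyin(b)
  for (int i = 0; i < 1024; i++)
    a[i] += b[i];

  printf ("a[42] = %f\n", a[42]);
  return 0;
@}
@end smallexample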
1828
1829
1830
1831@c ---------------------------------------------------------------------
1832@c OpenACC Runtime Library Routines
1833@c ---------------------------------------------------------------------
1834
1835@node OpenACC Runtime Library Routines
1836@chapter OpenACC Runtime Library Routines
1837
1838The runtime routines described here are defined by section 3 of the OpenACC
1839specifications in version 2.0.
1840They have C linkage, and do not throw exceptions.
1841Generally, they are available only for the host, with the exception of
1842@code{acc_on_device}, which is available for both the host and the
1843acceleration device.
1844
1845@menu
1846* acc_get_num_devices:: Get number of devices for the given device
1847 type.
1848* acc_set_device_type:: Set type of device accelerator to use.
1849* acc_get_device_type:: Get type of device accelerator to be used.
1850* acc_set_device_num:: Set device number to use.
1851* acc_get_device_num:: Get device number to be used.
6c84c8bf 1852* acc_get_property:: Get device property.
cdf6119d
JN
1853* acc_async_test:: Tests for completion of a specific asynchronous
1854 operation.
c1030b5c 1855* acc_async_test_all:: Tests for completion of all asynchronous
cdf6119d
JN
1856 operations.
1857* acc_wait:: Wait for completion of a specific asynchronous
1858 operation.
c1030b5c 1859* acc_wait_all:: Waits for completion of all asynchronous
cdf6119d
JN
1860 operations.
1861* acc_wait_all_async:: Wait for completion of all asynchronous
1862 operations.
1863* acc_wait_async:: Wait for completion of asynchronous operations.
1864* acc_init:: Initialize runtime for a specific device type.
1865* acc_shutdown:: Shuts down the runtime for a specific device
1866 type.
1867* acc_on_device:: Whether executing on a particular device
1868* acc_malloc:: Allocate device memory.
1869* acc_free:: Free device memory.
1870* acc_copyin:: Allocate device memory and copy host memory to
1871 it.
1872* acc_present_or_copyin:: If the data is not present on the device,
1873 allocate device memory and copy from host
1874 memory.
1875* acc_create:: Allocate device memory and map it to host
1876 memory.
1877* acc_present_or_create:: If the data is not present on the device,
1878 allocate device memory and map it to host
1879 memory.
1880* acc_copyout:: Copy device memory to host memory.
1881* acc_delete:: Free device memory.
1882* acc_update_device:: Update device memory from mapped host memory.
1883* acc_update_self:: Update host memory from mapped device memory.
1884* acc_map_data:: Map previously allocated device memory to host
1885 memory.
1886* acc_unmap_data:: Unmap device memory from host memory.
1887* acc_deviceptr:: Get device pointer associated with specific
1888 host address.
1889* acc_hostptr:: Get host pointer associated with specific
1890 device address.
93d90219 1891* acc_is_present:: Indicate whether host variable / array is
cdf6119d
JN
1892 present on device.
1893* acc_memcpy_to_device:: Copy host memory to device memory.
1894* acc_memcpy_from_device:: Copy device memory to host memory.
1895
1896API routines for target platforms.
1897
1898* acc_get_current_cuda_device:: Get CUDA device handle.
1899* acc_get_current_cuda_context::Get CUDA context handle.
1900* acc_get_cuda_stream:: Get CUDA stream handle.
1901* acc_set_cuda_stream:: Set CUDA stream handle.
5fae049d
TS
1902
1903API routines for the OpenACC Profiling Interface.
1904
1905* acc_prof_register:: Register callbacks.
1906* acc_prof_unregister:: Unregister callbacks.
1907* acc_prof_lookup:: Obtain inquiry functions.
1908* acc_register_library:: Library registration.
cdf6119d
JN
1909@end menu
1910
1911
1912
1913@node acc_get_num_devices
1914@section @code{acc_get_num_devices} -- Get number of devices for given device type
1915@table @asis
1916@item @emph{Description}
1917This function returns a value indicating the number of devices available
1918for the device type specified in @var{devicetype}.
1919
1920@item @emph{C/C++}:
1921@multitable @columnfractions .20 .80
1922@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
1923@end multitable
1924
1925@item @emph{Fortran}:
1926@multitable @columnfractions .20 .80
1927@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
1928@item @tab @code{integer(kind=acc_device_kind) devicetype}
1929@end multitable
1930
1931@item @emph{Reference}:
9651fbaf 1932@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
19333.2.1.
1934@end table
1935
1936
1937
1938@node acc_set_device_type
1939@section @code{acc_set_device_type} -- Set type of device accelerator to use.
1940@table @asis
1941@item @emph{Description}
c1030b5c 1942This function indicates to the runtime library which device type, specified
cdf6119d
JN
1943in @var{devicetype}, to use when executing a parallel or kernels region.
1944
1945@item @emph{C/C++}:
1946@multitable @columnfractions .20 .80
1947@item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
1948@end multitable
1949
1950@item @emph{Fortran}:
1951@multitable @columnfractions .20 .80
1952@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
1953@item @tab @code{integer(kind=acc_device_kind) devicetype}
1954@end multitable
1955
1956@item @emph{Reference}:
9651fbaf 1957@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
19583.2.2.
1959@end table
1960
1961
1962
1963@node acc_get_device_type
1964@section @code{acc_get_device_type} -- Get type of device accelerator to be used.
1965@table @asis
1966@item @emph{Description}
1967This function returns what device type will be used when executing a
1968parallel or kernels region.
1969
1970@item @emph{C/C++}:
1971@multitable @columnfractions .20 .80
1972@item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
1973@end multitable
1974
1975@item @emph{Fortran}:
1976@multitable @columnfractions .20 .80
1977@item @emph{Interface}: @tab @code{function acc_get_device_type(void)}
1978@item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
1979@end multitable
1980
1981@item @emph{Reference}:
9651fbaf 1982@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
19833.2.3.
1984@end table
1985
1986
1987
1988@node acc_set_device_num
1989@section @code{acc_set_device_num} -- Set device number to use.
1990@table @asis
1991@item @emph{Description}
This function will indicate to the runtime which device number,
specified by @var{num}, of the specified device type @var{devicetype}
is to be used.
1995
1996@item @emph{C/C++}:
1997@multitable @columnfractions .20 .80
1998@item @emph{Prototype}: @tab @code{acc_set_device_num(int num, acc_device_t devicetype);}
1999@end multitable
2000
2001@item @emph{Fortran}:
2002@multitable @columnfractions .20 .80
2003@item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
2004@item @tab @code{integer devicenum}
2005@item @tab @code{integer(kind=acc_device_kind) devicetype}
2006@end multitable
2007
2008@item @emph{Reference}:
9651fbaf 2009@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
20103.2.4.
2011@end table
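
A sketch that combines @code{acc_get_num_devices}, @code{acc_set_device_num}
and @code{acc_get_device_num} to select the last available NVIDIA device,
assuming that at least one such device is present:

@smallexample
#include <openacc.h>
#include <stdio.h>

int
main (void)
@{
  int n = acc_get_num_devices (acc_device_nvidia);
  if (n > 0)
    @{
      acc_set_device_num (n - 1, acc_device_nvidia);
      printf ("using device %d of %d\n",
              acc_get_device_num (acc_device_nvidia), n);
    @}
  return 0;
@}
@end smallexample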
2012
2013
2014
2015@node acc_get_device_num
2016@section @code{acc_get_device_num} -- Get device number to be used.
2017@table @asis
2018@item @emph{Description}
2019This function returns which device number associated with the specified device
2020type @var{devicetype}, will be used when executing a parallel or kernels
2021region.
2022
2023@item @emph{C/C++}:
2024@multitable @columnfractions .20 .80
2025@item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
2026@end multitable
2027
2028@item @emph{Fortran}:
2029@multitable @columnfractions .20 .80
2030@item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
2031@item @tab @code{integer(kind=acc_device_kind) devicetype}
2032@item @tab @code{integer acc_get_device_num}
2033@end multitable
2034
2035@item @emph{Reference}:
9651fbaf 2036@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
20373.2.5.
2038@end table
2039
2040
2041
6c84c8bf
MR
2042@node acc_get_property
2043@section @code{acc_get_property} -- Get device property.
2044@cindex acc_get_property
2045@cindex acc_get_property_string
2046@table @asis
2047@item @emph{Description}
2048These routines return the value of the specified @var{property} for the
2049device being queried according to @var{devicenum} and @var{devicetype}.
2050Integer-valued and string-valued properties are returned by
2051@code{acc_get_property} and @code{acc_get_property_string} respectively.
2052The Fortran @code{acc_get_property_string} subroutine returns the string
2053retrieved in its fourth argument while the remaining entry points are
2054functions, which pass the return value as their result.
2055
2056@item @emph{C/C++}:
2057@multitable @columnfractions .20 .80
2058@item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
2059@item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
2060@end multitable
2061
2062@item @emph{Fortran}:
2063@multitable @columnfractions .20 .80
2064@item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
2065@item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
2066@item @tab @code{integer devicenum}
2067@item @tab @code{integer(kind=acc_device_kind) devicetype}
2068@item @tab @code{integer(kind=acc_device_property) property}
2069@item @tab @code{integer(kind=acc_device_property) acc_get_property}
2070@item @tab @code{character(*) string}
2071@end multitable
2072
2073@item @emph{Reference}:
2074@uref{https://www.openacc.org, OpenACC specification v2.6}, section
20753.2.6.
2076@end table
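
A sketch that prints two properties of device 0 of the current device type;
the property names @code{acc_property_name} and @code{acc_property_memory}
are the ones provided by @file{openacc.h}, and a suitable device is assumed
to exist:

@smallexample
#include <openacc.h>
#include <stdio.h>

int
main (void)
@{
  acc_device_t type = acc_get_device_type ();
  const char *name
    = acc_get_property_string (0, type, acc_property_name);
  size_t mem = acc_get_property (0, type, acc_property_memory);
  printf ("device 0: %s, %zu bytes of memory\n",
          name ? name : "(unknown)", mem);
  return 0;
@}
@end smallexample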
2077
2078
2079
cdf6119d
JN
2080@node acc_async_test
2081@section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
2082@table @asis
2083@item @emph{Description}
This function tests for completion of the asynchronous operation specified
in @var{arg}. In C/C++, a non-zero value is returned to indicate that
the specified asynchronous operation has completed, while Fortran returns
@code{true}. If the asynchronous operation has not completed, C/C++ returns
zero and Fortran returns @code{false}.
2089
2090@item @emph{C/C++}:
2091@multitable @columnfractions .20 .80
2092@item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
2093@end multitable
2094
2095@item @emph{Fortran}:
2096@multitable @columnfractions .20 .80
2097@item @emph{Interface}: @tab @code{function acc_async_test(arg)}
2098@item @tab @code{integer(kind=acc_handle_kind) arg}
2099@item @tab @code{logical acc_async_test}
2100@end multitable
2101
2102@item @emph{Reference}:
9651fbaf 2103@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
21043.2.6.
2105@end table
2106
2107
2108
2109@node acc_async_test_all
2110@section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
2111@table @asis
2112@item @emph{Description}
This function tests for completion of all asynchronous operations.
In C/C++, a non-zero value is returned to indicate that all asynchronous
operations have completed, while Fortran returns @code{true}. If
any asynchronous operation has not completed, C/C++ returns zero and
Fortran returns @code{false}.
2118
2119@item @emph{C/C++}:
2120@multitable @columnfractions .20 .80
2121@item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
2122@end multitable
2123
2124@item @emph{Fortran}:
2125@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
2128@end multitable
2129
2130@item @emph{Reference}:
9651fbaf 2131@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
21323.2.7.
2133@end table
2134
2135
2136
2137@node acc_wait
2138@section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
2139@table @asis
2140@item @emph{Description}
2141This function waits for completion of the asynchronous operation
2142specified in @var{arg}.
2143
2144@item @emph{C/C++}:
2145@multitable @columnfractions .20 .80
2146@item @emph{Prototype}: @tab @code{acc_wait(arg);}
7ce64403 2147@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
cdf6119d
JN
2148@end multitable
2149
2150@item @emph{Fortran}:
2151@multitable @columnfractions .20 .80
2152@item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
2153@item @tab @code{integer(acc_handle_kind) arg}
7ce64403
TS
2154@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
2155@item @tab @code{integer(acc_handle_kind) arg}
cdf6119d
JN
2156@end multitable
2157
2158@item @emph{Reference}:
9651fbaf 2159@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
21603.2.8.
2161@end table
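
A sketch showing how a region started with the @code{async} clause can be
polled with @code{acc_async_test} and completed with @code{acc_wait}; the
queue number 1 is an arbitrary choice:

@smallexample
#include <openacc.h>
#include <stdio.h>

#define N 1024

int
main (void)
@{
  float a[N];
  for (int i = 0; i < N; i++)
    a[i] = i;

  #pragma acc parallel loop copy(a) async(1)
  for (int i = 0; i < N; i++)
    a[i] *= 2.0f;

  if (!acc_async_test (1))   /* not finished yet?  */
    acc_wait (1);            /* then block until queue 1 has drained */

  printf ("a[1] = %f\n", a[1]);
  return 0;
@}
@end smallexample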
2162
2163
2164
2165@node acc_wait_all
2166@section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
2167@table @asis
2168@item @emph{Description}
2169This function waits for the completion of all asynchronous operations.
2170
2171@item @emph{C/C++}:
2172@multitable @columnfractions .20 .80
2173@item @emph{Prototype}: @tab @code{acc_wait_all(void);}
7ce64403 2174@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
cdf6119d
JN
2175@end multitable
2176
2177@item @emph{Fortran}:
2178@multitable @columnfractions .20 .80
7ce64403
TS
2179@item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
2180@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
cdf6119d
JN
2181@end multitable
2182
2183@item @emph{Reference}:
9651fbaf 2184@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
21853.2.10.
2186@end table
2187
2188
2189
2190@node acc_wait_all_async
2191@section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
2192@table @asis
2193@item @emph{Description}
2194This function enqueues a wait operation on the queue @var{async} for any
2195and all asynchronous operations that have been previously enqueued on
2196any queue.
2197
2198@item @emph{C/C++}:
2199@multitable @columnfractions .20 .80
2200@item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
2201@end multitable
2202
2203@item @emph{Fortran}:
2204@multitable @columnfractions .20 .80
2205@item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
2206@item @tab @code{integer(acc_handle_kind) async}
2207@end multitable
2208
2209@item @emph{Reference}:
9651fbaf 2210@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
22113.2.11.
2212@end table
2213
2214
2215
2216@node acc_wait_async
2217@section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
2218@table @asis
2219@item @emph{Description}
2220This function enqueues a wait operation on queue @var{async} for any and all
2221asynchronous operations enqueued on queue @var{arg}.
2222
2223@item @emph{C/C++}:
2224@multitable @columnfractions .20 .80
2225@item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
2226@end multitable
2227
2228@item @emph{Fortran}:
2229@multitable @columnfractions .20 .80
2230@item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
2231@item @tab @code{integer(acc_handle_kind) arg, async}
2232@end multitable
2233
2234@item @emph{Reference}:
9651fbaf 2235@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
22363.2.9.
2237@end table
2238
2239
2240
2241@node acc_init
2242@section @code{acc_init} -- Initialize runtime for a specific device type.
2243@table @asis
2244@item @emph{Description}
2245This function initializes the runtime for the device type specified in
2246@var{devicetype}.
2247
2248@item @emph{C/C++}:
2249@multitable @columnfractions .20 .80
2250@item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
2251@end multitable
2252
2253@item @emph{Fortran}:
2254@multitable @columnfractions .20 .80
2255@item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
2256@item @tab @code{integer(acc_device_kind) devicetype}
2257@end multitable
2258
2259@item @emph{Reference}:
9651fbaf 2260@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
22613.2.12.
2262@end table
2263
2264
2265
2266@node acc_shutdown
2267@section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
2268@table @asis
2269@item @emph{Description}
2270This function shuts down the runtime for the device type specified in
2271@var{devicetype}.
2272
2273@item @emph{C/C++}:
2274@multitable @columnfractions .20 .80
2275@item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
2276@end multitable
2277
2278@item @emph{Fortran}:
2279@multitable @columnfractions .20 .80
2280@item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
2281@item @tab @code{integer(acc_device_kind) devicetype}
2282@end multitable
2283
2284@item @emph{Reference}:
9651fbaf 2285@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
22863.2.13.
2287@end table
2288
2289
2290
2291@node acc_on_device
2292@section @code{acc_on_device} -- Whether executing on a particular device
2293@table @asis
2294@item @emph{Description}:
This function returns whether the program is executing on a particular
device specified in @var{devicetype}. In C/C++ a non-zero value is
returned to indicate that the program is executing on the specified
device type; in Fortran, @code{true} is returned. If the program is not
executing on the specified device type, C/C++ returns zero, while
Fortran returns @code{false}.
2301
2302@item @emph{C/C++}:
2303@multitable @columnfractions .20 .80
2304@item @emph{Prototype}: @tab @code{acc_on_device(acc_device_t devicetype);}
2305@end multitable
2306
2307@item @emph{Fortran}:
2308@multitable @columnfractions .20 .80
2309@item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
2310@item @tab @code{integer(acc_device_kind) devicetype}
2311@item @tab @code{logical acc_on_device}
2312@end multitable
2313
2314
2315@item @emph{Reference}:
9651fbaf 2316@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
23173.2.14.
2318@end table
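
A sketch of a typical use inside a function marked with the @code{routine}
directive; @code{acc_device_not_host} is used so that any accelerator, not
just a particular vendor's, is matched:

@smallexample
#include <openacc.h>
#include <stdio.h>

#pragma acc routine seq
static int
on_accelerator (void)
@{
  /* Returns non-zero when executed on any accelerator, 0 on the host.  */
  return acc_on_device (acc_device_not_host);
@}

int
main (void)
@{
  int on_dev = 0;
  #pragma acc parallel num_gangs(1) copyout(on_dev)
  on_dev = on_accelerator ();
  printf ("ran on %s\n", on_dev ? "an accelerator" : "the host");
  return 0;
@}
@end smallexample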
2319
2320
2321
2322@node acc_malloc
2323@section @code{acc_malloc} -- Allocate device memory.
2324@table @asis
2325@item @emph{Description}
2326This function allocates @var{len} bytes of device memory. It returns
2327the device address of the allocated memory.
2328
2329@item @emph{C/C++}:
2330@multitable @columnfractions .20 .80
2331@item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
2332@end multitable
2333
2334@item @emph{Reference}:
9651fbaf 2335@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
23363.2.15.
2337@end table
2338
2339
2340
2341@node acc_free
2342@section @code{acc_free} -- Free device memory.
2343@table @asis
2344@item @emph{Description}
Free previously allocated device memory at the device address @var{a}.
2346
2347@item @emph{C/C++}:
2348@multitable @columnfractions .20 .80
2349@item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
2350@end multitable
2351
2352@item @emph{Reference}:
9651fbaf 2353@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
23543.2.16.
2355@end table
2356
2357
2358
2359@node acc_copyin
2360@section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
2361@table @asis
2362@item @emph{Description}
2363In C/C++, this function allocates @var{len} bytes of device memory
2364and maps it to the specified host address in @var{a}. The device
2365address of the newly allocated device memory is returned.
2366
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.
2370
2371@item @emph{C/C++}:
2372@multitable @columnfractions .20 .80
2373@item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
2374@end multitable
2375
2376@item @emph{Fortran}:
2377@multitable @columnfractions .20 .80
2378@item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
2379@item @tab @code{type, dimension(:[,:]...) :: a}
2380@item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
2381@item @tab @code{type, dimension(:[,:]...) :: a}
2382@item @tab @code{integer len}
2383@end multitable
2384
2385@item @emph{Reference}:
9651fbaf 2386@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
23873.2.17.
2388@end table
2389
2390
2391
2392@node acc_present_or_copyin
2393@section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
2394@table @asis
2395@item @emph{Description}
c1030b5c 2396This function tests if the host data specified by @var{a} and of length
cdf6119d
JN
2397@var{len} is present or not. If it is not present, then device memory
2398will be allocated and the host memory copied. The device address of
2399the newly allocated device memory is returned.
2400
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.
2404
2405@item @emph{C/C++}:
2406@multitable @columnfractions .20 .80
2407@item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
2408@item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
2409@end multitable
2410
2411@item @emph{Fortran}:
2412@multitable @columnfractions .20 .80
2413@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
2414@item @tab @code{type, dimension(:[,:]...) :: a}
2415@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
2416@item @tab @code{type, dimension(:[,:]...) :: a}
2417@item @tab @code{integer len}
2418@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
2419@item @tab @code{type, dimension(:[,:]...) :: a}
2420@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
2421@item @tab @code{type, dimension(:[,:]...) :: a}
2422@item @tab @code{integer len}
2423@end multitable
2424
2425@item @emph{Reference}:
9651fbaf 2426@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
24273.2.18.
2428@end table
2429
2430
2431
2432@node acc_create
2433@section @code{acc_create} -- Allocate device memory and map it to host memory.
2434@table @asis
2435@item @emph{Description}
2436This function allocates device memory and maps it to host memory specified
2437by the host address @var{a} with a length of @var{len} bytes. In C/C++,
2438the function returns the device address of the allocated device memory.
2439
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.
2443
2444@item @emph{C/C++}:
2445@multitable @columnfractions .20 .80
2446@item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
2447@end multitable
2448
2449@item @emph{Fortran}:
2450@multitable @columnfractions .20 .80
2451@item @emph{Interface}: @tab @code{subroutine acc_create(a)}
2452@item @tab @code{type, dimension(:[,:]...) :: a}
2453@item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
2454@item @tab @code{type, dimension(:[,:]...) :: a}
2455@item @tab @code{integer len}
2456@end multitable
2457
2458@item @emph{Reference}:
9651fbaf 2459@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
24603.2.19.
2461@end table
2462
2463
2464
2465@node acc_present_or_create
2466@section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
2467@table @asis
2468@item @emph{Description}
c1030b5c 2469This function tests if the host data specified by @var{a} and of length
cdf6119d
JN
2470@var{len} is present or not. If it is not present, then device memory
2471will be allocated and mapped to host memory. In C/C++, the device address
2472of the newly allocated device memory is returned.
2473
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.
2477
2478
2479@item @emph{C/C++}:
2480@multitable @columnfractions .20 .80
2481@item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
2482@item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
2483@end multitable
2484
2485@item @emph{Fortran}:
2486@multitable @columnfractions .20 .80
2487@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
2488@item @tab @code{type, dimension(:[,:]...) :: a}
2489@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
2490@item @tab @code{type, dimension(:[,:]...) :: a}
2491@item @tab @code{integer len}
2492@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
2493@item @tab @code{type, dimension(:[,:]...) :: a}
2494@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
2495@item @tab @code{type, dimension(:[,:]...) :: a}
2496@item @tab @code{integer len}
2497@end multitable
2498
2499@item @emph{Reference}:
9651fbaf 2500@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
25013.2.20.
2502@end table
2503
2504
2505
2506@node acc_copyout
2507@section @code{acc_copyout} -- Copy device memory to host memory.
2508@table @asis
2509@item @emph{Description}
2510This function copies mapped device memory to host memory which is specified
2511by host address @var{a} for a length @var{len} bytes in C/C++.
2512
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.
2516
2517@item @emph{C/C++}:
2518@multitable @columnfractions .20 .80
2519@item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
2520@end multitable
2521
2522@item @emph{Fortran}:
2523@multitable @columnfractions .20 .80
2524@item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
2525@item @tab @code{type, dimension(:[,:]...) :: a}
2526@item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
2527@item @tab @code{type, dimension(:[,:]...) :: a}
2528@item @tab @code{integer len}
2529@end multitable
2530
2531@item @emph{Reference}:
9651fbaf 2532@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
25333.2.21.
2534@end table
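
A sketch showing the typical pairing of @code{acc_copyin} and
@code{acc_copyout} around a compute region that uses the data via the
@code{present} clause:

@smallexample
#include <openacc.h>
#include <stdio.h>

#define N 1024

int
main (void)
@{
  static float a[N];
  for (int i = 0; i < N; i++)
    a[i] = i;

  acc_copyin (a, sizeof (a));   /* allocate on the device and copy in */

  #pragma acc parallel loop present(a)
  for (int i = 0; i < N; i++)
    a[i] += 1.0f;

  acc_copyout (a, sizeof (a));  /* copy back and deallocate */
  printf ("a[10] = %f\n", a[10]);
  return 0;
@}
@end smallexample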
2535
2536
2537
2538@node acc_delete
2539@section @code{acc_delete} -- Free device memory.
2540@table @asis
2541@item @emph{Description}
2542This function frees previously allocated device memory specified by
2543the device address @var{a} and the length of @var{len} bytes.
2544
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.
2548
2549@item @emph{C/C++}:
2550@multitable @columnfractions .20 .80
2551@item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
2552@end multitable
2553
2554@item @emph{Fortran}:
2555@multitable @columnfractions .20 .80
2556@item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
2557@item @tab @code{type, dimension(:[,:]...) :: a}
2558@item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
2559@item @tab @code{type, dimension(:[,:]...) :: a}
2560@item @tab @code{integer len}
2561@end multitable
2562
2563@item @emph{Reference}:
9651fbaf 2564@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
25653.2.22.
2566@end table
2567
2568
2569
2570@node acc_update_device
2571@section @code{acc_update_device} -- Update device memory from mapped host memory.
2572@table @asis
2573@item @emph{Description}
2574This function updates the device copy from the previously mapped host memory.
2575The host memory is specified with the host address @var{a} and a length of
2576@var{len} bytes.
2577
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.
2581
2582@item @emph{C/C++}:
2583@multitable @columnfractions .20 .80
2584@item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
2585@end multitable
2586
2587@item @emph{Fortran}:
2588@multitable @columnfractions .20 .80
2589@item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
2590@item @tab @code{type, dimension(:[,:]...) :: a}
2591@item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
2592@item @tab @code{type, dimension(:[,:]...) :: a}
2593@item @tab @code{integer len}
2594@end multitable
2595
2596@item @emph{Reference}:
9651fbaf 2597@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
25983.2.23.
2599@end table
2600
2601
2602
2603@node acc_update_self
2604@section @code{acc_update_self} -- Update host memory from mapped device memory.
2605@table @asis
2606@item @emph{Description}
2607This function updates the host copy from the previously mapped device memory.
2608The host memory is specified with the host address @var{a} and a length of
2609@var{len} bytes.
2610
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.
2614
2615@item @emph{C/C++}:
2616@multitable @columnfractions .20 .80
2617@item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
2618@end multitable
2619
2620@item @emph{Fortran}:
2621@multitable @columnfractions .20 .80
2622@item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
2623@item @tab @code{type, dimension(:[,:]...) :: a}
2624@item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
2625@item @tab @code{type, dimension(:[,:]...) :: a}
2626@item @tab @code{integer len}
2627@end multitable
2628
2629@item @emph{Reference}:
9651fbaf 2630@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
26313.2.24.
2632@end table
2633
2634
2635
2636@node acc_map_data
2637@section @code{acc_map_data} -- Map previously allocated device memory to host memory.
2638@table @asis
2639@item @emph{Description}
2640This function maps previously allocated device and host memory. The device
2641memory is specified with the device address @var{d}. The host memory is
2642specified with the host address @var{h} and a length of @var{len}.
2643
2644@item @emph{C/C++}:
2645@multitable @columnfractions .20 .80
2646@item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
2647@end multitable
2648
2649@item @emph{Reference}:
9651fbaf 2650@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
26513.2.25.
2652@end table
2653
2654
2655
2656@node acc_unmap_data
2657@section @code{acc_unmap_data} -- Unmap device memory from host memory.
2658@table @asis
2659@item @emph{Description}
This function unmaps previously mapped device and host memory. The latter
is specified by @var{h}.
2662
2663@item @emph{C/C++}:
2664@multitable @columnfractions .20 .80
2665@item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
2666@end multitable
2667
2668@item @emph{Reference}:
9651fbaf 2669@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
26703.2.26.
2671@end table
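
A sketch of how @code{acc_map_data} and @code{acc_unmap_data} are typically
paired with @code{acc_malloc} and @code{acc_free}: device memory is
allocated manually, mapped to a host array for the duration of a
computation, and unmapped and freed again afterwards:

@smallexample
#include <openacc.h>

#define N 1024

int
main (void)
@{
  static float a[N];
  void *d = acc_malloc (sizeof (a));

  acc_map_data (a, d, sizeof (a));   /* a is now present on the device */

  #pragma acc parallel loop present(a)
  for (int i = 0; i < N; i++)
    a[i] = i;

  acc_update_self (a, sizeof (a));   /* fetch the results to the host */
  acc_unmap_data (a);
  acc_free (d);
  return 0;
@}
@end smallexample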
2672
2673
2674
2675@node acc_deviceptr
2676@section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
2677@table @asis
2678@item @emph{Description}
2679This function returns the device address that has been mapped to the
2680host address specified by @var{h}.
2681
2682@item @emph{C/C++}:
2683@multitable @columnfractions .20 .80
2684@item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
2685@end multitable
2686
2687@item @emph{Reference}:
9651fbaf 2688@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
26893.2.27.
2690@end table
2691
2692
2693
2694@node acc_hostptr
2695@section @code{acc_hostptr} -- Get host pointer associated with specific device address.
2696@table @asis
2697@item @emph{Description}
2698This function returns the host address that has been mapped to the
2699device address specified by @var{d}.
2700
2701@item @emph{C/C++}:
2702@multitable @columnfractions .20 .80
2703@item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
2704@end multitable
2705
2706@item @emph{Reference}:
9651fbaf 2707@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
27083.2.28.
2709@end table
2710
2711
2712
2713@node acc_is_present
2714@section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
2715@table @asis
2716@item @emph{Description}
2717This function indicates whether the specified host address in @var{a} and a
2718length of @var{len} bytes is present on the device. In C/C++, a non-zero
2719value is returned to indicate the presence of the mapped memory on the
2720device. A zero is returned to indicate the memory is not mapped on the
2721device.
2722
In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes. If the host
memory is mapped to device memory, @code{true} is returned. Otherwise,
@code{false} is returned to indicate that the mapped memory is not present.
2728
2729@item @emph{C/C++}:
2730@multitable @columnfractions .20 .80
2731@item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
2732@end multitable
2733
2734@item @emph{Fortran}:
2735@multitable @columnfractions .20 .80
2736@item @emph{Interface}: @tab @code{function acc_is_present(a)}
2737@item @tab @code{type, dimension(:[,:]...) :: a}
2738@item @tab @code{logical acc_is_present}
2739@item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
2740@item @tab @code{type, dimension(:[,:]...) :: a}
2741@item @tab @code{integer len}
2742@item @tab @code{logical acc_is_present}
2743@end multitable
2744
2745@item @emph{Reference}:
9651fbaf 2746@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
27473.2.29.
2748@end table
2749
2750
2751
2752@node acc_memcpy_to_device
2753@section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
2754@table @asis
2755@item @emph{Description}
2756This function copies host memory specified by host address of @var{src} to
2757device memory specified by the device address @var{dest} for a length of
2758@var{bytes} bytes.
2759
2760@item @emph{C/C++}:
2761@multitable @columnfractions .20 .80
2762@item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
2763@end multitable
2764
2765@item @emph{Reference}:
9651fbaf 2766@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
27673.2.30.
2768@end table
2769
2770
2771
2772@node acc_memcpy_from_device
2773@section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
2774@table @asis
2775@item @emph{Description}
This function copies device memory specified by the device address @var{src}
to host memory specified by the host address @var{dest} for a length of
@var{bytes} bytes.
2779
2780@item @emph{C/C++}:
2781@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
2783@end multitable
2784
2785@item @emph{Reference}:
9651fbaf 2786@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
27873.2.31.
2788@end table
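
A sketch that moves a small buffer to freshly allocated device memory and
back again using these two routines:

@smallexample
#include <openacc.h>
#include <string.h>
#include <assert.h>

int
main (void)
@{
  char in[64] = "hello, device", out[64];
  void *d = acc_malloc (sizeof (in));

  acc_memcpy_to_device (d, in, sizeof (in));
  acc_memcpy_from_device (out, d, sizeof (out));
  acc_free (d);

  assert (strcmp (in, out) == 0);
  return 0;
@}
@end smallexample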
2789
2790
2791
2792@node acc_get_current_cuda_device
2793@section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
2794@table @asis
2795@item @emph{Description}
2796This function returns the CUDA device handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.
2798
2799@item @emph{C/C++}:
2800@multitable @columnfractions .20 .80
2801@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
2802@end multitable
2803
2804@item @emph{Reference}:
9651fbaf 2805@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
2806A.2.1.1.
2807@end table
2808
2809
2810
2811@node acc_get_current_cuda_context
2812@section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
2813@table @asis
2814@item @emph{Description}
2815This function returns the CUDA context handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.
2817
2818@item @emph{C/C++}:
2819@multitable @columnfractions .20 .80
18c247cc 2820@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
cdf6119d
JN
2821@end multitable
2822
2823@item @emph{Reference}:
9651fbaf 2824@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
2825A.2.1.2.
2826@end table
2827
2828
2829
2830@node acc_get_cuda_stream
2831@section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
2832@table @asis
2833@item @emph{Description}
18c247cc
TS
2834This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as used by the CUDA Runtime or Driver APIs.
cdf6119d
JN
2836
2837@item @emph{C/C++}:
2838@multitable @columnfractions .20 .80
18c247cc 2839@item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
cdf6119d
JN
2840@end multitable
2841
2842@item @emph{Reference}:
9651fbaf 2843@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
2844A.2.1.3.
2845@end table
2846
2847
2848
2849@node acc_set_cuda_stream
2850@section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
2851@table @asis
2852@item @emph{Description}
2853This function associates the stream handle specified by @var{stream} with
18c247cc
TS
2854the queue @var{async}.
2855
2856This cannot be used to change the stream handle associated with
2857@code{acc_async_sync}.
2858
2859The return value is not specified.
cdf6119d
JN
2860
2861@item @emph{C/C++}:
2862@multitable @columnfractions .20 .80
18c247cc 2863@item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
cdf6119d
JN
2864@end multitable
2865
2866@item @emph{Reference}:
9651fbaf 2867@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
2868A.2.1.4.
2869@end table
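
A sketch of how a CUDA stream created with the CUDA Runtime API might be
attached to an OpenACC queue. This only applies to the @code{nvptx}
offloading target; the queue number 5 is arbitrary, and the CUDA headers
and libraries are assumed to be available:

@smallexample
#include <openacc.h>
#include <cuda_runtime_api.h>

int
main (void)
@{
  cudaStream_t stream;

  acc_init (acc_device_nvidia);
  cudaStreamCreate (&stream);

  /* Let OpenACC queue 5 use the freshly created CUDA stream.  */
  acc_set_cuda_stream (5, stream);

  /* ... enqueue work with "async(5)" clauses here ... */

  acc_wait (5);
  cudaStreamDestroy (stream);
  return 0;
@}
@end smallexample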
2870
2871
2872
5fae049d
TS
2873@node acc_prof_register
2874@section @code{acc_prof_register} -- Register callbacks.
2875@table @asis
2876@item @emph{Description}:
2877This function registers callbacks.
2878
2879@item @emph{C/C++}:
2880@multitable @columnfractions .20 .80
2881@item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
2882@end multitable
2883
2884@item @emph{See also}:
2885@ref{OpenACC Profiling Interface}
2886
2887@item @emph{Reference}:
2888@uref{https://www.openacc.org, OpenACC specification v2.6}, section
28895.3.
2890@end table
2891
2892
2893
2894@node acc_prof_unregister
2895@section @code{acc_prof_unregister} -- Unregister callbacks.
2896@table @asis
2897@item @emph{Description}:
2898This function unregisters callbacks.
2899
2900@item @emph{C/C++}:
2901@multitable @columnfractions .20 .80
2902@item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
2903@end multitable
2904
2905@item @emph{See also}:
2906@ref{OpenACC Profiling Interface}
2907
2908@item @emph{Reference}:
2909@uref{https://www.openacc.org, OpenACC specification v2.6}, section
29105.3.
2911@end table
2912
2913
2914
2915@node acc_prof_lookup
2916@section @code{acc_prof_lookup} -- Obtain inquiry functions.
2917@table @asis
2918@item @emph{Description}:
2919Function to obtain inquiry functions.
2920
2921@item @emph{C/C++}:
2922@multitable @columnfractions .20 .80
2923@item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
2924@end multitable
2925
2926@item @emph{See also}:
2927@ref{OpenACC Profiling Interface}
2928
2929@item @emph{Reference}:
2930@uref{https://www.openacc.org, OpenACC specification v2.6}, section
29315.3.
2932@end table
2933
2934
2935
2936@node acc_register_library
2937@section @code{acc_register_library} -- Library registration.
2938@table @asis
2939@item @emph{Description}:
2940Function for library registration.
2941
2942@item @emph{C/C++}:
2943@multitable @columnfractions .20 .80
2944@item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
2945@end multitable
2946
2947@item @emph{See also}:
2948@ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
2949
2950@item @emph{Reference}:
2951@uref{https://www.openacc.org, OpenACC specification v2.6}, section
29525.3.
2953@end table
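
As a sketch, a profiling library loaded via @env{ACC_PROFLIB} would itself
provide this routine; the runtime calls it with its register, unregister, and
lookup functions.  The callback @code{launch_cb} is assumed to be defined as
in the @code{acc_prof_register} example above:

@smallexample
#include <acc_prof.h>

extern void launch_cb (acc_prof_info *, acc_event_info *, acc_api_info *);

void
acc_register_library (acc_prof_reg reg, acc_prof_reg unreg,
                      acc_prof_lookup_func lookup)
@{
  (void) unreg; (void) lookup;
  reg (acc_ev_enqueue_launch_start, launch_cb, acc_reg);
@}
@end smallexample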
2954
2955
2956
cdf6119d
JN
2957@c ---------------------------------------------------------------------
2958@c OpenACC Environment Variables
2959@c ---------------------------------------------------------------------
2960
2961@node OpenACC Environment Variables
2962@chapter OpenACC Environment Variables
2963
2964The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
2965are defined by section 4 of the OpenACC specification in version 2.0.
5fae049d
TS
2966The variable @env{ACC_PROFLIB}
2967is defined by section 4 of the OpenACC specification in version 2.6.
cdf6119d
JN
2968The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.
2969
2970@menu
2971* ACC_DEVICE_TYPE::
2972* ACC_DEVICE_NUM::
5fae049d 2973* ACC_PROFLIB::
cdf6119d
JN
2974* GCC_ACC_NOTIFY::
2975@end menu
2976
2977
2978
2979@node ACC_DEVICE_TYPE
2980@section @code{ACC_DEVICE_TYPE}
2981@table @asis
2982@item @emph{Reference}:
9651fbaf 2983@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
29844.1.
2985@end table
2986
2987
2988
2989@node ACC_DEVICE_NUM
2990@section @code{ACC_DEVICE_NUM}
2991@table @asis
2992@item @emph{Reference}:
9651fbaf 2993@uref{https://www.openacc.org, OpenACC specification v2.0}, section
cdf6119d
JN
29944.2.
2995@end table
2996
2997
2998
5fae049d
TS
2999@node ACC_PROFLIB
3000@section @code{ACC_PROFLIB}
3001@table @asis
3002@item @emph{See also}:
3003@ref{acc_register_library}, @ref{OpenACC Profiling Interface}
3004
3005@item @emph{Reference}:
3006@uref{https://www.openacc.org, OpenACC specification v2.6}, section
30074.3.
3008@end table
3009
3010
3011
cdf6119d
JN
3012@node GCC_ACC_NOTIFY
3013@section @code{GCC_ACC_NOTIFY}
3014@table @asis
3015@item @emph{Description}:
3016Print debug information pertaining to the accelerator.
3017@end table
3018
3019
3020
3021@c ---------------------------------------------------------------------
3022@c CUDA Streams Usage
3023@c ---------------------------------------------------------------------
3024
3025@node CUDA Streams Usage
3026@chapter CUDA Streams Usage
3027
3028This applies to the @code{nvptx} plugin only.
3029
3030The library provides elements that perform asynchronous movement of
3031data and asynchronous operation of computing constructs. This
3032asynchronous functionality is implemented by making use of CUDA
3033streams@footnote{See "Stream Management" in "CUDA Driver API",
3034TRM-06703-001, Version 5.5, for additional information}.
3035
c1030b5c 3036The primary means by which the asynchronous functionality is accessed
cdf6119d
JN
3037is through the use of those OpenACC directives which make use of the
3038@code{async} and @code{wait} clauses. When the @code{async} clause is
3039first used with a directive, it creates a CUDA stream. If an
3040@code{async-argument} is used with the @code{async} clause, then the
3041stream is associated with the specified @code{async-argument}.
3042
3043Following the creation of an association between a CUDA stream and the
3044@code{async-argument} of an @code{async} clause, both the @code{wait}
3045clause and the @code{wait} directive can be used. When either the
3046clause or directive is used after stream creation, it creates a
3047rendezvous point whereby execution waits until all operations
3048associated with the @code{async-argument} (that is, the stream) have
3049completed.
3050
3051Normally, the management of the streams that are created as a result of
3052using the @code{async} clause is done without any intervention by the
3053caller. This implies that the association between the @code{async-argument}
3054and the CUDA stream will be maintained for the lifetime of the program.
3055However, this association can be changed through the use of the library
3056function @code{acc_set_cuda_stream}. When the function
3057@code{acc_set_cuda_stream} is called, the CUDA stream that was
3058originally associated with the @code{async} clause will be destroyed.
3059Caution should be taken when changing the association as subsequent
3060references to the @code{async-argument} refer to a different
3061CUDA stream.
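
A short sketch of the directive-level usage described above (the array
@code{a} and its size @code{n} are placeholders): the first use of
@code{async(1)} creates a stream, and the @code{wait} directive synchronizes
with it:

@smallexample
void
scale (float *a, int n)
@{
  /* The first use of async(1) creates a CUDA stream for queue 1.  */
  #pragma acc parallel loop copy(a[0:n]) async(1)
  for (int i = 0; i < n; i++)
    a[i] = 2.0f * a[i];

  /* Rendezvous point: wait for everything queued on async(1).  */
  #pragma acc wait(1)
@}
@end smallexample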
3062
3063
3064
3065@c ---------------------------------------------------------------------
3066@c OpenACC Library Interoperability
3067@c ---------------------------------------------------------------------
3068
3069@node OpenACC Library Interoperability
3070@chapter OpenACC Library Interoperability
3071
3072@section Introduction
3073
3074The OpenACC library uses the CUDA Driver API, and may interact with
3075programs that use the Runtime library directly, or another library
3076based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
3077"Interactions with the CUDA Driver API" in
3078"CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
3079Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
3080for additional information on library interoperability.}.
3081This chapter describes the use cases and what changes are
3082required in order to use both the OpenACC library and the CUBLAS and Runtime
3083libraries within a program.
3084
3085@section First invocation: NVIDIA CUBLAS library API
3086
3087In this first use case (see below), a function in the CUBLAS library is called
3088prior to any of the functions in the OpenACC library. More specifically, the
3089function @code{cublasCreate()}.
3090
3091When invoked, the function initializes the library and allocates the
3092hardware resources on the host and the device on behalf of the caller. Once
3093the initialization and allocation have completed, a handle is returned to the
3094caller. The OpenACC library also requires initialization and allocation of
3095hardware resources. Since the CUBLAS library has already allocated the
3096hardware resources for the device, all that is left to do is to initialize
3097the OpenACC library and acquire the hardware resources on the host.
3098
3099Prior to calling the OpenACC function that initializes the library and
3100allocates the host hardware resources, you need to acquire the device number
3101that was allocated during the call to @code{cublasCreate()}. Invoking the
3102runtime library function @code{cudaGetDevice()} accomplishes this. Once
3103acquired, the device number is passed along with the device type as
3104parameters to the OpenACC library function @code{acc_set_device_num()}.
3105
3106Once the call to @code{acc_set_device_num()} has completed, the OpenACC
3107library uses the context that was created during the call to
3108@code{cublasCreate()}. In other words, both libraries will be sharing the
3109same context.
3110
3111@smallexample
3112 /* Create the handle */
3113 s = cublasCreate(&h);
3114 if (s != CUBLAS_STATUS_SUCCESS)
3115 @{
3116 fprintf(stderr, "cublasCreate failed %d\n", s);
3117 exit(EXIT_FAILURE);
3118 @}
3119
3120 /* Get the device number */
3121 e = cudaGetDevice(&dev);
3122 if (e != cudaSuccess)
3123 @{
3124 fprintf(stderr, "cudaGetDevice failed %d\n", e);
3125 exit(EXIT_FAILURE);
3126 @}
3127
3128 /* Initialize OpenACC library and use device 'dev' */
3129 acc_set_device_num(dev, acc_device_nvidia);
3130
3131@end smallexample
3132@center Use Case 1
3133
3134@section First invocation: OpenACC library API
3135
3136In this second use case (see below), a function in the OpenACC library is
3137called prior to any of the functions in the CUBLAS library. More specifically,
3138the function @code{acc_set_device_num()}.
3139
3140In the use case presented here, the function @code{acc_set_device_num()}
3141is used to both initialize the OpenACC library and allocate the hardware
3142resources on the host and the device. In the call to the function, the
3143call parameters specify which device to use and what device
3144type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
3145is but one method to initialize the OpenACC library and allocate the
3146appropriate hardware resources. Other methods are available through the
3147use of environment variables, and these will be discussed in the next section.
3148
3149Once the call to @code{acc_set_device_num()} has completed, other OpenACC
3150functions can be called as seen with multiple calls being made to
3151@code{acc_copyin()}. In addition, calls can be made to functions in the
3152CUBLAS library. In the use case a call to @code{cublasCreate()} is made
3153subsequent to the calls to @code{acc_copyin()}.
3154As seen in the previous use case, a call to @code{cublasCreate()}
3155initializes the CUBLAS library and allocates the hardware resources on the
3156host and the device. However, since the device has already been allocated,
3157@code{cublasCreate()} will only initialize the CUBLAS library and allocate
3158the appropriate hardware resources on the host. The context that was created
3159as part of the OpenACC initialization is shared with the CUBLAS library,
3160similarly to the first use case.
3161
3162@smallexample
3163 dev = 0;
3164
3165 acc_set_device_num(dev, acc_device_nvidia);
3166
3167 /* Copy the first set to the device */
3168 d_X = acc_copyin(&h_X[0], N * sizeof (float));
3169 if (d_X == NULL)
3170 @{
3171 fprintf(stderr, "copyin error h_X\n");
3172 exit(EXIT_FAILURE);
3173 @}
3174
3175 /* Copy the second set to the device */
3176 d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
3177 if (d_Y == NULL)
3178 @{
3179 fprintf(stderr, "copyin error h_Y1\n");
3180 exit(EXIT_FAILURE);
3181 @}
3182
3183 /* Create the handle */
3184 s = cublasCreate(&h);
3185 if (s != CUBLAS_STATUS_SUCCESS)
3186 @{
3187 fprintf(stderr, "cublasCreate failed %d\n", s);
3188 exit(EXIT_FAILURE);
3189 @}
3190
3191 /* Perform saxpy using CUBLAS library function */
3192 s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
3193 if (s != CUBLAS_STATUS_SUCCESS)
3194 @{
3195 fprintf(stderr, "cublasSaxpy failed %d\n", s);
3196 exit(EXIT_FAILURE);
3197 @}
3198
3199 /* Copy the results from the device */
3200 acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
3201
3202@end smallexample
3203@center Use Case 2
3204
3205@section OpenACC library and environment variables
3206
3207There are two environment variables associated with the OpenACC library
3208that may be used to control the device type and device number:
3209@env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
3210environment variables can be used as an alternative to calling
3211@code{acc_set_device_num()}. As seen in the second use case, the device
3212type and device number were specified using @code{acc_set_device_num()}.
3213If, however, the aforementioned environment variables were set, then the
3214call to @code{acc_set_device_num()} would not be required.
3215
3216
3217The use of the environment variables is only relevant when an OpenACC function
3218is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
3219is called prior to a call to an OpenACC function, then you must call
3220@code{acc_set_device_num()}@footnote{More complete information
3221about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
9651fbaf 3222sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC}
cdf6119d
JN
3223Application Programming Interface, Version 2.0.}.
3224
3225
3226
5fae049d
TS
3227@c ---------------------------------------------------------------------
3228@c OpenACC Profiling Interface
3229@c ---------------------------------------------------------------------
3230
3231@node OpenACC Profiling Interface
3232@chapter OpenACC Profiling Interface
3233
3234@section Implementation Status and Implementation-Defined Behavior
3235
3236We're implementing the OpenACC Profiling Interface as defined by the
3237OpenACC 2.6 specification. We're clarifying some aspects here as
3238@emph{implementation-defined behavior}, while they're still under
3239discussion within the OpenACC Technical Committee.
3240
3241This implementation is tuned to keep the performance impact as low as
3242possible for the (very common) case that the Profiling Interface is
3243not enabled. This is relevant, as the Profiling Interface affects all
3244the @emph{hot} code paths (in the target code, not in the offloaded
3245code). Users of the OpenACC Profiling Interface can be expected to
3246understand that performance will be impacted to some degree once the
3247Profiling Interface has been enabled: for example, because of the
3248@emph{runtime} (libgomp) calling into a third-party @emph{library} for
3249every event that has been registered.
3250
3251We're not yet accounting for the fact that @cite{OpenACC events may
3252occur during event processing}.
3253
3254We're not yet implementing initialization via an
3255@code{acc_register_library} function that is either statically linked
3256in, or dynamically loaded via @env{LD_PRELOAD}.
3257Initialization via @code{acc_register_library} functions dynamically
3258loaded via the @env{ACC_PROFLIB} environment variable does work, as
3259does directly calling @code{acc_prof_register},
3260@code{acc_prof_unregister}, and @code{acc_prof_lookup}.
3261
3262As currently there are no inquiry functions defined, calls to
3263@code{acc_prof_lookup} will always return @code{NULL}.
3264
3265There aren't separate @emph{start}, @emph{stop} events defined for the
3266event types @code{acc_ev_create}, @code{acc_ev_delete},
3267@code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
3268should be triggered before or after the actual device-specific call is
3269made. We trigger them after.
3270
3271Remarks about data provided to callbacks:
3272
3273@table @asis
3274
3275@item @code{acc_prof_info.event_type}
3276It's not clear if for @emph{nested} event callbacks (for example,
3277@code{acc_ev_enqueue_launch_start} as part of a parent compute
3278construct), this should be set for the nested event
3279(@code{acc_ev_enqueue_launch_start}), or if the value of the parent
3280construct should remain (@code{acc_ev_compute_construct_start}). In
3281this implementation, the value will generally correspond to the
3282innermost nested event type.
3283
3284@item @code{acc_prof_info.device_type}
3285@itemize
3286
3287@item
3288For @code{acc_ev_compute_construct_start}, and in presence of an
3289@code{if} clause with @emph{false} argument, this will still refer to
3290the offloading device type.
3291It's not clear if that's the expected behavior.
3292
3293@item
3294Complementary to the item before, for
3295@code{acc_ev_compute_construct_end}, this is set to
3296@code{acc_device_host} in presence of an @code{if} clause with
3297@emph{false} argument.
3298It's not clear if that's the expected behavior.
3299
3300@end itemize
3301
3302@item @code{acc_prof_info.thread_id}
3303Always @code{-1}; not yet implemented.
3304
3305@item @code{acc_prof_info.async}
3306@itemize
3307
3308@item
3309Not yet implemented correctly for
3310@code{acc_ev_compute_construct_start}.
3311
3312@item
3313In a compute construct, for host-fallback
3314execution/@code{acc_device_host} it will always be
3315@code{acc_async_sync}.
3316It's not clear if that's the expected behavior.
3317
3318@item
3319For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
3320it will always be @code{acc_async_sync}.
3321It's not clear if that's the expected behavior.
3322
3323@end itemize
3324
3325@item @code{acc_prof_info.async_queue}
3326There is no @cite{limited number of asynchronous queues} in libgomp.
3327This will always have the same value as @code{acc_prof_info.async}.
3328
3329@item @code{acc_prof_info.src_file}
3330Always @code{NULL}; not yet implemented.
3331
3332@item @code{acc_prof_info.func_name}
3333Always @code{NULL}; not yet implemented.
3334
3335@item @code{acc_prof_info.line_no}
3336Always @code{-1}; not yet implemented.
3337
3338@item @code{acc_prof_info.end_line_no}
3339Always @code{-1}; not yet implemented.
3340
3341@item @code{acc_prof_info.func_line_no}
3342Always @code{-1}; not yet implemented.
3343
3344@item @code{acc_prof_info.func_end_line_no}
3345Always @code{-1}; not yet implemented.
3346
3347@item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
3348Relating to @code{acc_prof_info.event_type} discussed above, in this
3349implementation, this will always be the same value as
3350@code{acc_prof_info.event_type}.
3351
3352@item @code{acc_event_info.*.parent_construct}
3353@itemize
3354
3355@item
3356Will be @code{acc_construct_parallel} for all OpenACC compute
3357constructs as well as many OpenACC Runtime API calls; should be the
3358one matching the actual construct, or
3359@code{acc_construct_runtime_api}, respectively.
3360
3361@item
3362Will be @code{acc_construct_enter_data} or
3363@code{acc_construct_exit_data} when processing variable mappings
3364specified in OpenACC @emph{declare} directives; should be
3365@code{acc_construct_declare}.
3366
3367@item
3368For implicit @code{acc_ev_device_init_start},
3369@code{acc_ev_device_init_end}, and explicit as well as implicit
3370@code{acc_ev_alloc}, @code{acc_ev_free},
3371@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
3372@code{acc_ev_enqueue_download_start}, and
3373@code{acc_ev_enqueue_download_end}, will be
3374@code{acc_construct_parallel}; should reflect the real parent
3375construct.
3376
3377@end itemize
3378
3379@item @code{acc_event_info.*.implicit}
3380For @code{acc_ev_alloc}, @code{acc_ev_free},
3381@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
3382@code{acc_ev_enqueue_download_start}, and
3383@code{acc_ev_enqueue_download_end}, this currently will be @code{1}
3384also for explicit usage.
3385
3386@item @code{acc_event_info.data_event.var_name}
3387Always @code{NULL}; not yet implemented.
3388
3389@item @code{acc_event_info.data_event.host_ptr}
3390For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always
3391@code{NULL}.
3392
3393@item @code{typedef union acc_api_info}
3394@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
3395Information}. This should obviously be @code{typedef @emph{struct}
3396acc_api_info}.
3397
3398@item @code{acc_api_info.device_api}
3399Possibly not yet implemented correctly for
3400@code{acc_ev_compute_construct_start},
3401@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
3402will always be @code{acc_device_api_none} for these event types.
3403For @code{acc_ev_enter_data_start}, it will be
3404@code{acc_device_api_none} in some cases.
3405
3406@item @code{acc_api_info.device_type}
3407Always the same as @code{acc_prof_info.device_type}.
3408
3409@item @code{acc_api_info.vendor}
3410Always @code{-1}; not yet implemented.
3411
3412@item @code{acc_api_info.device_handle}
3413Always @code{NULL}; not yet implemented.
3414
3415@item @code{acc_api_info.context_handle}
3416Always @code{NULL}; not yet implemented.
3417
3418@item @code{acc_api_info.async_handle}
3419Always @code{NULL}; not yet implemented.
3420
3421@end table
3422
3423Remarks about certain event types:
3424
3425@table @asis
3426
3427@item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
3428@itemize
3429
3430@item
3431@c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
3432@c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
3433@c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
3434When a compute construct triggers implicit
3435@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
3436events, they currently aren't @emph{nested within} the corresponding
3437@code{acc_ev_compute_construct_start} and
3438@code{acc_ev_compute_construct_end}, but they're currently observed
3439@emph{before} @code{acc_ev_compute_construct_start}.
3440It's not clear what to do: the standard asks us to provide a lot of
3441details to the @code{acc_ev_compute_construct_start} callback, which is
3442difficult without (implicitly) initializing a device first.
3443
3444@item
3445Callbacks for these event types will not be invoked for calls to the
3446@code{acc_set_device_type} and @code{acc_set_device_num} functions.
3447It's not clear if they should be.
3448
3449@end itemize
3450
3451@item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
3452@itemize
3453
3454@item
3455Callbacks for these event types will also be invoked for OpenACC
3456@emph{host_data} constructs.
3457It's not clear if they should be.
3458
3459@item
3460Callbacks for these event types will also be invoked when processing
3461variable mappings specified in OpenACC @emph{declare} directives.
3462It's not clear if they should be.
3463
3464@end itemize
3465
3466@end table
3467
3468Callbacks for the following event types will be invoked, but dispatch
3469and the information provided therein have not yet been thoroughly reviewed:
3470
3471@itemize
3472@item @code{acc_ev_alloc}
3473@item @code{acc_ev_free}
3474@item @code{acc_ev_update_start}, @code{acc_ev_update_end}
3475@item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
3476@item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
3477@end itemize
3478
3479During device initialization, and finalization, respectively,
3480callbacks for the following event types will not yet be invoked:
3481
3482@itemize
3483@item @code{acc_ev_alloc}
3484@item @code{acc_ev_free}
3485@end itemize
3486
3487Callbacks for the following event types have not yet been implemented,
3488so currently won't be invoked:
3489
3490@itemize
3491@item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
3492@item @code{acc_ev_runtime_shutdown}
3493@item @code{acc_ev_create}, @code{acc_ev_delete}
3494@item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
3495@end itemize
3496
3497For the following runtime library functions, not all expected
3498callbacks will be invoked (mostly concerning implicit device
3499initialization):
3500
3501@itemize
3502@item @code{acc_get_num_devices}
3503@item @code{acc_set_device_type}
3504@item @code{acc_get_device_type}
3505@item @code{acc_set_device_num}
3506@item @code{acc_get_device_num}
3507@item @code{acc_init}
3508@item @code{acc_shutdown}
3509@end itemize
3510
3511Aside from implicit device initialization, for the following runtime
3512library functions, no callbacks will be invoked for shared-memory
3513offloading devices (it's not clear if they should be):
3514
3515@itemize
3516@item @code{acc_malloc}
3517@item @code{acc_free}
3518@item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
3519@item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
3520@item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
3521@item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
3522@item @code{acc_update_device}, @code{acc_update_device_async}
3523@item @code{acc_update_self}, @code{acc_update_self_async}
3524@item @code{acc_map_data}, @code{acc_unmap_data}
3525@item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
3526@item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
3527@end itemize
3528
3529
3530
3721b9e1
DF
3531@c ---------------------------------------------------------------------
3532@c The libgomp ABI
3533@c ---------------------------------------------------------------------
3534
3535@node The libgomp ABI
3536@chapter The libgomp ABI
3537
3538The following sections present notes on the external ABI as
6a2ba183 3539presented by libgomp. Only maintainers should need them.
3721b9e1
DF
3540
3541@menu
3542* Implementing MASTER construct::
3543* Implementing CRITICAL construct::
3544* Implementing ATOMIC construct::
3545* Implementing FLUSH construct::
3546* Implementing BARRIER construct::
3547* Implementing THREADPRIVATE construct::
3548* Implementing PRIVATE clause::
3549* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
3550* Implementing REDUCTION clause::
3551* Implementing PARALLEL construct::
3552* Implementing FOR construct::
3553* Implementing ORDERED construct::
3554* Implementing SECTIONS construct::
3555* Implementing SINGLE construct::
cdf6119d 3556* Implementing OpenACC's PARALLEL construct::
3721b9e1
DF
3557@end menu
3558
3559
3560@node Implementing MASTER construct
3561@section Implementing MASTER construct
3562
3563@smallexample
3564if (omp_get_thread_num () == 0)
3565 block
3566@end smallexample
3567
3568Alternatively, we generate two copies of the parallel subfunction
3569and only include this in the version run by the master thread.
6a2ba183 3570Surely this is not worthwhile though...
3721b9e1
DF
3571
3572
3573
3574@node Implementing CRITICAL construct
3575@section Implementing CRITICAL construct
3576
3577Without a specified name,
3578
3579@smallexample
3580 void GOMP_critical_start (void);
3581 void GOMP_critical_end (void);
3582@end smallexample
3583
3584so that we don't get COPY relocations from libgomp to the main
3585application.
3586
3587With a specified name, use omp_set_lock and omp_unset_lock with
3588name being transformed into a variable declared like
3589
3590@smallexample
3591 omp_lock_t gomp_critical_user_<name> __attribute__((common))
3592@end smallexample
3593
3594Ideally the ABI would specify that all zero is a valid unlocked
6a2ba183 3595state, and so we wouldn't need to initialize this at
3721b9e1
DF
3596startup.
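
A hedged sketch of the expansions described above; the critical section name
@code{foo} and the counter update are illustrative, and the named case relies
on all-zero being a valid unlocked state, as discussed:

@smallexample
#include <omp.h>

extern void GOMP_critical_start (void);
extern void GOMP_critical_end (void);

/* Lock variable the compiler would emit for '#pragma omp critical (foo)'.  */
omp_lock_t gomp_critical_user_foo __attribute__((common));

void
critical_example (int *counter)
@{
  /* Unnamed: #pragma omp critical  */
  GOMP_critical_start ();
  (*counter)++;
  GOMP_critical_end ();

  /* Named: #pragma omp critical (foo)  */
  omp_set_lock (&gomp_critical_user_foo);
  (*counter)++;
  omp_unset_lock (&gomp_critical_user_foo);
@}
@end smallexample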
3597
3598
3599
3600@node Implementing ATOMIC construct
3601@section Implementing ATOMIC construct
3602
3603The target should implement the @code{__sync} builtins.
3604
3605Failing that we could add
3606
3607@smallexample
3608 void GOMP_atomic_enter (void)
3609 void GOMP_atomic_exit (void)
3610@end smallexample
3611
3612which reuses the regular lock code, but with yet another lock
3613object private to the library.
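
A hedged sketch: with target support, an update like @code{x += y} under
@code{#pragma omp atomic} maps onto a @code{__sync} builtin; otherwise the
library entry points named above bracket the update (the names here follow
the text, not necessarily the final ABI):

@smallexample
extern void GOMP_atomic_enter (void);
extern void GOMP_atomic_exit (void);

void
atomic_add (int *x, int y)
@{
  /* Preferred expansion, using a __sync builtin.  */
  __sync_fetch_and_add (x, y);
@}

void
atomic_add_fallback (double *x, double y)
@{
  /* Fallback expansion, using the library-private lock.  */
  GOMP_atomic_enter ();
  *x += y;
  GOMP_atomic_exit ();
@}
@end smallexample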
3614
3615
3616
3617@node Implementing FLUSH construct
3618@section Implementing FLUSH construct
3619
3620Expands to the @code{__sync_synchronize} builtin.
3621
3622
3623
3624@node Implementing BARRIER construct
3625@section Implementing BARRIER construct
3626
3627@smallexample
3628 void GOMP_barrier (void)
3629@end smallexample
3630
3631
3632@node Implementing THREADPRIVATE construct
3633@section Implementing THREADPRIVATE construct
3634
3635In @emph{most} cases we can map this directly to @code{__thread}, except
3636that OMP allows constructors for C++ objects. We can either
3637refuse to support this (how often is it used?) or we can
3638implement something akin to .ctors.
3639
3640Even more ideally, this ctor feature is handled by extensions
3641to the main pthreads library. Failing that, we can have a set
3642of entry points to register ctor functions to be called.
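
A hedged sketch of the common case (no C++ constructors involved):

@smallexample
/* Source form.  */
int counter;
#pragma omp threadprivate (counter)

void
bump (void)
@{
  counter++;   /* Each thread updates its own copy.  */
@}

/* Where TLS is available, 'counter' can be lowered to a plain

     __thread int counter;

   definition; C++ objects with constructors additionally need the
   .ctors-like registration machinery discussed above.  */
@end smallexample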
3643
3644
3645
3646@node Implementing PRIVATE clause
3647@section Implementing PRIVATE clause
3648
3649In association with a PARALLEL, or within the lexical extent
3650of a PARALLEL block, the variable becomes a local variable in
3651the parallel subfunction.
3652
3653In association with FOR or SECTIONS blocks, create a new
3654automatic variable within the current function. This preserves
3655the semantic of new variable creation.
3656
3657
3658
3659@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
3660@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
3661
6a2ba183
AH
3662This seems simple enough for PARALLEL blocks. Create a private
3663struct for communicating between the parent and subfunction.
3721b9e1
DF
3664In the parent, copy in values for scalar and "small" structs;
3665copy in addresses for others TREE_ADDRESSABLE types. In the
3666subfunction, copy the value into the local variable.
3667
6a2ba183
AH
3668It is not clear what to do with bare FOR or SECTION blocks.
3669The only thing I can figure is that we do something like:
3721b9e1
DF
3670
3671@smallexample
3672#pragma omp for firstprivate(x) lastprivate(y)
3673for (int i = 0; i < n; ++i)
3674 body;
3675@end smallexample
3676
3677which becomes
3678
3679@smallexample
3680@{
3681 int x = x, y;
3682
3683 // for stuff
3684
3685 if (i == n)
3686 y = y;
3687@}
3688@end smallexample
3689
3690where the "x=x" and "y=y" assignments actually have different
3691uids for the two variables, i.e. not something you could write
3692directly in C. Presumably this only makes sense if the "outer"
3693x and y are global variables.
3694
3695COPYPRIVATE would work the same way, except the structure
3696broadcast would have to happen via SINGLE machinery instead.
3697
3698
3699
3700@node Implementing REDUCTION clause
3701@section Implementing REDUCTION clause
3702
3703The private struct mentioned in the previous section should have
3704a pointer to an array of the type of the variable, indexed by the
3705thread's @var{team_id}. The thread stores its final value into the
6a2ba183 3706array, and after the barrier, the master thread iterates over the
3721b9e1
DF
3707array to collect the values.
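
A hedged sketch of that layout for @code{reduction(+:sum)}; the structure and
field names are illustrative:

@smallexample
#include <omp.h>

struct shared_data
@{
  long *sum_array;   /* One slot per thread, indexed by team id.  */
@};

static void
subfunction (struct shared_data *data)
@{
  long local_sum = 0;

  /* ... this thread's share of the loop accumulates into local_sum ...  */

  data->sum_array[omp_get_thread_num ()] = local_sum;

  /* After the barrier, the master thread sums up sum_array[].  */
@}
@end smallexample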
3708
3709
3710@node Implementing PARALLEL construct
3711@section Implementing PARALLEL construct
3712
3713@smallexample
3714 #pragma omp parallel
3715 @{
3716 body;
3717 @}
3718@end smallexample
3719
3720becomes
3721
3722@smallexample
3723 void subfunction (void *data)
3724 @{
3725 use data;
3726 body;
3727 @}
3728
3729 setup data;
3730 GOMP_parallel_start (subfunction, &data, num_threads);
3731 subfunction (&data);
3732 GOMP_parallel_end ();
3733@end smallexample
3734
3735@smallexample
3736 void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
3737@end smallexample
3738
3739The @var{FN} argument is the subfunction to be run in parallel.
3740
3741The @var{DATA} argument is a pointer to a structure used to
3742communicate data in and out of the subfunction, as discussed
f1b0882e 3743above with respect to FIRSTPRIVATE et al.
3721b9e1
DF
3744
3745The @var{NUM_THREADS} argument is 1 if an IF clause is present
3746and false, or the value of the NUM_THREADS clause, if
3747present, or 0.
3748
3749The function needs to create the appropriate number of
3750threads and/or launch them from the dock. It needs to
3751create the team structure and assign team ids.
3752
3753@smallexample
3754 void GOMP_parallel_end (void)
3755@end smallexample
3756
3757Tears down the team and returns us to the previous @code{omp_in_parallel()} state.
3758
3759
3760
3761@node Implementing FOR construct
3762@section Implementing FOR construct
3763
3764@smallexample
3765 #pragma omp parallel for
3766 for (i = lb; i <= ub; i++)
3767 body;
3768@end smallexample
3769
3770becomes
3771
3772@smallexample
3773 void subfunction (void *data)
3774 @{
3775 long _s0, _e0;
3776 while (GOMP_loop_static_next (&_s0, &_e0))
3777 @{
3778 long _e1 = _e0, i;
3779 for (i = _s0; i < _e1; i++)
3780 body;
3781 @}
3782 GOMP_loop_end_nowait ();
3783 @}
3784
3785 GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
3786 subfunction (NULL);
3787 GOMP_parallel_end ();
3788@end smallexample
3789
3790@smallexample
3791 #pragma omp for schedule(runtime)
3792 for (i = 0; i < n; i++)
3793 body;
3794@end smallexample
3795
3796becomes
3797
3798@smallexample
3799 @{
3800 long i, _s0, _e0;
3801 if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
3802 do @{
3803 long _e1 = _e0;
3804        for (i = _s0; i < _e1; i++)
3805          body;
3806      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
3807 GOMP_loop_end ();
3808 @}
3809@end smallexample
3810
6a2ba183 3811Note that while it looks like there is trickiness to propagating
3721b9e1
DF
3812a non-constant STEP, there isn't really. We're explicitly allowed
3813to evaluate it as many times as we want, and any variables involved
3814should automatically be handled as PRIVATE or SHARED like any other
3815variables. So the expression should remain evaluable in the
3816subfunction. We can also pull it into a local variable if we like,
3817but since it's supposed to remain unchanged, we don't have to.
3818
3819If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
3820able to get away with no work-sharing context at all, since we can
3821simply perform the arithmetic directly in each thread to divide up
3822the iterations. Which would mean that we wouldn't need to call any
3823of these routines.
3824
3825There are separate routines for handling loops with an ORDERED
3826clause. Bookkeeping for that is non-trivial...
3827
3828
3829
3830@node Implementing ORDERED construct
3831@section Implementing ORDERED construct
3832
3833@smallexample
3834 void GOMP_ordered_start (void)
3835 void GOMP_ordered_end (void)
3836@end smallexample
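
A hedged sketch of how the body of an ordered loop uses these entry points;
the surrounding work-sharing calls (the ordered variants of the loop-start
and loop-next routines) are omitted:

@smallexample
extern void GOMP_ordered_start (void);
extern void GOMP_ordered_end (void);

static void
loop_body (long i, long *out)
@{
  /* ... unordered part of the loop body ...  */

  GOMP_ordered_start ();
  out[i] = i;          /* Statements from the 'ordered' region.  */
  GOMP_ordered_end ();
@}
@end smallexample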
3837
3838
3839
3840@node Implementing SECTIONS construct
3841@section Implementing SECTIONS construct
3842
3843A block as
3844
3845@smallexample
3846 #pragma omp sections
3847 @{
3848 #pragma omp section
3849 stmt1;
3850 #pragma omp section
3851 stmt2;
3852 #pragma omp section
3853 stmt3;
3854 @}
3855@end smallexample
3856
3857becomes
3858
3859@smallexample
3860 for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
3861 switch (i)
3862 @{
3863 case 1:
3864 stmt1;
3865 break;
3866 case 2:
3867 stmt2;
3868 break;
3869 case 3:
3870 stmt3;
3871 break;
3872 @}
3873 GOMP_barrier ();
3874@end smallexample
3875
3876
3877@node Implementing SINGLE construct
3878@section Implementing SINGLE construct
3879
3880A block like
3881
3882@smallexample
3883 #pragma omp single
3884 @{
3885 body;
3886 @}
3887@end smallexample
3888
3889becomes
3890
3891@smallexample
3892 if (GOMP_single_start ())
3893 body;
3894 GOMP_barrier ();
3895@end smallexample
3896
3897while
3898
3899@smallexample
3900 #pragma omp single copyprivate(x)
3901 body;
3902@end smallexample
3903
3904becomes
3905
3906@smallexample
3907 datap = GOMP_single_copy_start ();
3908 if (datap == NULL)
3909 @{
3910 body;
3911 data.x = x;
3912 GOMP_single_copy_end (&data);
3913 @}
3914 else
3915 x = datap->x;
3916 GOMP_barrier ();
3917@end smallexample
3918
3919
3920
cdf6119d
JN
3921@node Implementing OpenACC's PARALLEL construct
3922@section Implementing OpenACC's PARALLEL construct
3923
3924@smallexample
3925 void GOACC_parallel ()
3926@end smallexample
3927
3928
3929
3721b9e1 3930@c ---------------------------------------------------------------------
f1f3453e 3931@c Reporting Bugs
3721b9e1
DF
3932@c ---------------------------------------------------------------------
3933
3934@node Reporting Bugs
3935@chapter Reporting Bugs
3936
f1f3453e 3937Bugs in the GNU Offloading and Multi Processing Runtime Library should
c1030b5c 3938be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}. Please add
41dbbb37
TS
3939"openacc", or "openmp", or both to the keywords field in the bug
3940report, as appropriate.
3721b9e1
DF
3941
3942
3943
3944@c ---------------------------------------------------------------------
3945@c GNU General Public License
3946@c ---------------------------------------------------------------------
3947
e6fdc918 3948@include gpl_v3.texi
3721b9e1
DF
3949
3950
3951
3952@c ---------------------------------------------------------------------
3953@c GNU Free Documentation License
3954@c ---------------------------------------------------------------------
3955
3956@include fdl.texi
3957
3958
3959
3960@c ---------------------------------------------------------------------
3961@c Funding Free Software
3962@c ---------------------------------------------------------------------
3963
3964@include funding.texi
3965
3966@c ---------------------------------------------------------------------
3967@c Index
3968@c ---------------------------------------------------------------------
3969
3d3949df
SL
3970@node Library Index
3971@unnumbered Library Index
3721b9e1
DF
3972
3973@printindex cp
3974
3975@bye