1 \input texinfo @c -*-texinfo-*-
2
3 @c %**start of header
4 @setfilename libgomp.info
5 @settitle GNU libgomp
6 @c %**end of header
7
8
9 @copying
10 Copyright @copyright{} 2006-2019 Free Software Foundation, Inc.
11
12 Permission is granted to copy, distribute and/or modify this document
13 under the terms of the GNU Free Documentation License, Version 1.3 or
14 any later version published by the Free Software Foundation; with the
15 Invariant Sections being ``Funding Free Software'', the Front-Cover
16 texts being (a) (see below), and with the Back-Cover Texts being (b)
17 (see below). A copy of the license is included in the section entitled
18 ``GNU Free Documentation License''.
19
20 (a) The FSF's Front-Cover Text is:
21
22 A GNU Manual
23
24 (b) The FSF's Back-Cover Text is:
25
26 You have freedom to copy and modify this GNU Manual, like GNU
27 software. Copies published by the Free Software Foundation raise
28 funds for GNU development.
29 @end copying
30
31 @ifinfo
32 @dircategory GNU Libraries
33 @direntry
34 * libgomp: (libgomp). GNU Offloading and Multi Processing Runtime Library.
35 @end direntry
36
37 This manual documents libgomp, the GNU Offloading and Multi Processing
38 Runtime library. This is the GNU implementation of the OpenMP and
39 OpenACC APIs for parallel and accelerator programming in C/C++ and
40 Fortran.
41
42 Published by the Free Software Foundation
43 51 Franklin Street, Fifth Floor
44 Boston, MA 02110-1301 USA
45
46 @insertcopying
47 @end ifinfo
48
49
50 @setchapternewpage odd
51
52 @titlepage
53 @title GNU Offloading and Multi Processing Runtime Library
54 @subtitle The GNU OpenMP and OpenACC Implementation
55 @page
56 @vskip 0pt plus 1filll
57 @comment For the @value{version-GCC} Version*
58 @sp 1
59 Published by the Free Software Foundation @*
60 51 Franklin Street, Fifth Floor@*
61 Boston, MA 02110-1301, USA@*
62 @sp 1
63 @insertcopying
64 @end titlepage
65
66 @summarycontents
67 @contents
68 @page
69
70
71 @node Top
72 @top Introduction
73 @cindex Introduction
74
75 This manual documents the usage of libgomp, the GNU Offloading and
76 Multi Processing Runtime Library. This includes the GNU
77 implementation of the @uref{https://www.openmp.org, OpenMP} Application
78 Programming Interface (API) for multi-platform shared-memory parallel
79 programming in C/C++ and Fortran, and the GNU implementation of the
80 @uref{https://www.openacc.org, OpenACC} Application Programming
81 Interface (API) for offloading of code to accelerator devices in C/C++
82 and Fortran.
83
84 Originally, libgomp implemented the GNU OpenMP Runtime Library.  Later,
85 support for OpenACC and offloading (both OpenACC and OpenMP 4's
86 @code{target} construct) was added, and the library was renamed to the
87 GNU Offloading and Multi Processing Runtime Library.
88
89
90
91 @comment
92 @comment When you add a new menu item, please keep the right hand
93 @comment aligned to the same column. Do not use tabs. This provides
94 @comment better formatting.
95 @comment
96 @menu
97 * Enabling OpenMP:: How to enable OpenMP for your applications.
98 * OpenMP Runtime Library Routines: Runtime Library Routines.
99 The OpenMP runtime application programming
100 interface.
101 * OpenMP Environment Variables: Environment Variables.
102 Influencing OpenMP runtime behavior with
103 environment variables.
104 * Enabling OpenACC:: How to enable OpenACC for your
105 applications.
106 * OpenACC Runtime Library Routines:: The OpenACC runtime application
107 programming interface.
108 * OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
109 environment variables.
110 * CUDA Streams Usage:: Notes on the implementation of
111 asynchronous operations.
112 * OpenACC Library Interoperability:: OpenACC library interoperability with the
113 NVIDIA CUBLAS library.
114 * OpenACC Profiling Interface::
115 * The libgomp ABI:: Notes on the external ABI presented by libgomp.
116 * Reporting Bugs:: How to report bugs in the GNU Offloading and
117 Multi Processing Runtime Library.
118 * Copying:: GNU general public license says
119 how you can copy and share libgomp.
120 * GNU Free Documentation License::
121 How you can copy and share this manual.
122 * Funding:: How to help assure continued work for free
123 software.
124 * Library Index:: Index of this documentation.
125 @end menu
126
127
128 @c ---------------------------------------------------------------------
129 @c Enabling OpenMP
130 @c ---------------------------------------------------------------------
131
132 @node Enabling OpenMP
133 @chapter Enabling OpenMP
134
135 To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
136 flag @command{-fopenmp} must be specified. This enables the OpenMP directive
137 @code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
138 @code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
139 @code{!$} conditional compilation sentinels in free form and @code{c$},
140 @code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
141 arranges for automatic linking of the OpenMP runtime library
142 (@ref{Runtime Library Routines}).
143
144 A complete description of all OpenMP directives accepted may be found in
145 the @uref{https://www.openmp.org, OpenMP Application Program Interface} manual,
146 version 4.5.
147
148
149 @c ---------------------------------------------------------------------
150 @c OpenMP Runtime Library Routines
151 @c ---------------------------------------------------------------------
152
153 @node Runtime Library Routines
154 @chapter OpenMP Runtime Library Routines
155
156 The runtime routines described here are defined by Section 3 of the OpenMP
157 specification in version 4.5.  The routines are structured in the
158 following three parts:
159
160 @menu
161 Control threads, processors and the parallel environment. They have C
162 linkage, and do not throw exceptions.
163
164 * omp_get_active_level:: Number of active parallel regions
165 * omp_get_ancestor_thread_num:: Ancestor thread ID
166 * omp_get_cancellation:: Whether cancellation support is enabled
167 * omp_get_default_device:: Get the default device for target regions
168 * omp_get_dynamic:: Dynamic teams setting
169 * omp_get_level:: Number of parallel regions
170 * omp_get_max_active_levels:: Maximum number of active regions
171 * omp_get_max_task_priority:: Maximum task priority value that can be set
172 * omp_get_max_threads:: Maximum number of threads of parallel region
173 * omp_get_nested:: Nested parallel regions
174 * omp_get_num_devices:: Number of target devices
175 * omp_get_num_procs:: Number of processors online
176 * omp_get_num_teams:: Number of teams
177 * omp_get_num_threads:: Size of the active team
178 * omp_get_proc_bind:: Whether threads may be moved between CPUs
179 * omp_get_schedule:: Obtain the runtime scheduling method
180 * omp_get_team_num:: Get team number
181 * omp_get_team_size:: Number of threads in a team
182 * omp_get_thread_limit:: Maximum number of threads
183 * omp_get_thread_num:: Current thread ID
184 * omp_in_parallel:: Whether a parallel region is active
185 * omp_in_final:: Whether in final or included task region
186 * omp_is_initial_device:: Whether executing on the host device
187 * omp_set_default_device:: Set the default device for target regions
188 * omp_set_dynamic:: Enable/disable dynamic teams
189 * omp_set_max_active_levels:: Limits the number of active parallel regions
190 * omp_set_nested:: Enable/disable nested parallel regions
191 * omp_set_num_threads:: Set upper team size limit
192 * omp_set_schedule:: Set the runtime scheduling method
193
194 Initialize, set, test, unset and destroy simple and nested locks.
195
196 * omp_init_lock:: Initialize simple lock
197 * omp_set_lock:: Wait for and set simple lock
198 * omp_test_lock:: Test and set simple lock if available
199 * omp_unset_lock:: Unset simple lock
200 * omp_destroy_lock:: Destroy simple lock
201 * omp_init_nest_lock:: Initialize nested lock
202 * omp_set_nest_lock:: Wait for and set nested lock
203 * omp_test_nest_lock:: Test and set nested lock if available
204 * omp_unset_nest_lock:: Unset nested lock
205 * omp_destroy_nest_lock:: Destroy nested lock
206
207 Portable, thread-based, wall clock timer.
208
209 * omp_get_wtick:: Get timer precision.
210 * omp_get_wtime:: Elapsed wall clock time.
211 @end menu
212
213
214
215 @node omp_get_active_level
216 @section @code{omp_get_active_level} -- Number of active parallel regions
217 @table @asis
218 @item @emph{Description}:
219 This function returns the nesting level of the active parallel regions
220 enclosing the call.
221
222 @item @emph{C/C++}
223 @multitable @columnfractions .20 .80
224 @item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
225 @end multitable
226
227 @item @emph{Fortran}:
228 @multitable @columnfractions .20 .80
229 @item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
230 @end multitable
231
232 @item @emph{See also}:
233 @ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
234
235 @item @emph{Reference}:
236 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
237 @end table
238
239
240
241 @node omp_get_ancestor_thread_num
242 @section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
243 @table @asis
244 @item @emph{Description}:
245 This function returns the thread identification number for the given
246 nesting level of the current thread.  For values of @var{level} outside
247 the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is
248 @code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.
249
250 @item @emph{C/C++}
251 @multitable @columnfractions .20 .80
252 @item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
253 @end multitable
254
255 @item @emph{Fortran}:
256 @multitable @columnfractions .20 .80
257 @item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
258 @item @tab @code{integer level}
259 @end multitable
260
261 @item @emph{See also}:
262 @ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
263
264 @item @emph{Reference}:
265 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
266 @end table
267
268
269
270 @node omp_get_cancellation
271 @section @code{omp_get_cancellation} -- Whether cancellation support is enabled
272 @table @asis
273 @item @emph{Description}:
274 This function returns @code{true} if cancellation is activated, @code{false}
275 otherwise. Here, @code{true} and @code{false} represent their language-specific
276 counterparts.  Unless @env{OMP_CANCELLATION} is set to true, cancellation
277 is deactivated.
278
279 @item @emph{C/C++}:
280 @multitable @columnfractions .20 .80
281 @item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
282 @end multitable
283
284 @item @emph{Fortran}:
285 @multitable @columnfractions .20 .80
286 @item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
287 @end multitable
288
289 @item @emph{See also}:
290 @ref{OMP_CANCELLATION}
291
292 @item @emph{Reference}:
293 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
294 @end table
295
296
297
298 @node omp_get_default_device
299 @section @code{omp_get_default_device} -- Get the default device for target regions
300 @table @asis
301 @item @emph{Description}:
302 Get the default device for target regions without a device clause.
303
304 @item @emph{C/C++}:
305 @multitable @columnfractions .20 .80
306 @item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
307 @end multitable
308
309 @item @emph{Fortran}:
310 @multitable @columnfractions .20 .80
311 @item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
312 @end multitable
313
314 @item @emph{See also}:
315 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
316
317 @item @emph{Reference}:
318 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
319 @end table
320
321
322
323 @node omp_get_dynamic
324 @section @code{omp_get_dynamic} -- Dynamic teams setting
325 @table @asis
326 @item @emph{Description}:
327 This function returns @code{true} if enabled, @code{false} otherwise.
328 Here, @code{true} and @code{false} represent their language-specific
329 counterparts.
330
331 The dynamic team setting may be initialized at startup by the
332 @env{OMP_DYNAMIC} environment variable or at runtime using
333 @code{omp_set_dynamic}. If undefined, dynamic adjustment is
334 disabled by default.
335
336 @item @emph{C/C++}:
337 @multitable @columnfractions .20 .80
338 @item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
339 @end multitable
340
341 @item @emph{Fortran}:
342 @multitable @columnfractions .20 .80
343 @item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
344 @end multitable
345
346 @item @emph{See also}:
347 @ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
348
349 @item @emph{Reference}:
350 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
351 @end table
352
353
354
355 @node omp_get_level
356 @section @code{omp_get_level} -- Obtain the current nesting level
357 @table @asis
358 @item @emph{Description}:
359 This function returns the nesting level of the parallel regions
360 enclosing the call.
361
362 @item @emph{C/C++}
363 @multitable @columnfractions .20 .80
364 @item @emph{Prototype}: @tab @code{int omp_get_level(void);}
365 @end multitable
366
367 @item @emph{Fortran}:
368 @multitable @columnfractions .20 .80
369 @item @emph{Interface}: @tab @code{integer function omp_get_level()}
370 @end multitable
371
372 @item @emph{See also}:
373 @ref{omp_get_active_level}
374
375 @item @emph{Reference}:
376 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
377 @end table
378
379
380
381 @node omp_get_max_active_levels
382 @section @code{omp_get_max_active_levels} -- Maximum number of active regions
383 @table @asis
384 @item @emph{Description}:
385 This function obtains the maximum allowed number of nested, active parallel regions.
386
387 @item @emph{C/C++}
388 @multitable @columnfractions .20 .80
389 @item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
390 @end multitable
391
392 @item @emph{Fortran}:
393 @multitable @columnfractions .20 .80
394 @item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
395 @end multitable
396
397 @item @emph{See also}:
398 @ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
399
400 @item @emph{Reference}:
401 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
402 @end table
403
404
405 @node omp_get_max_task_priority
406 @section @code{omp_get_max_task_priority} -- Maximum priority value
407 that can be set for tasks.
408 @table @asis
409 @item @emph{Description}:
410 This function obtains the maximum allowed priority number for tasks.
411
412 @item @emph{C/C++}
413 @multitable @columnfractions .20 .80
414 @item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
415 @end multitable
416
417 @item @emph{Fortran}:
418 @multitable @columnfractions .20 .80
419 @item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
420 @end multitable
421
422 @item @emph{Reference}:
423 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
424 @end table
425
426
427 @node omp_get_max_threads
428 @section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
429 @table @asis
430 @item @emph{Description}:
431 Return the maximum number of threads that would be used for a new
432 parallel region that does not specify a @code{num_threads} clause.
433
434 @item @emph{C/C++}:
435 @multitable @columnfractions .20 .80
436 @item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
437 @end multitable
438
439 @item @emph{Fortran}:
440 @multitable @columnfractions .20 .80
441 @item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
442 @end multitable
443
444 @item @emph{See also}:
445 @ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
446
447 @item @emph{Reference}:
448 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
449 @end table
450
451
452
453 @node omp_get_nested
454 @section @code{omp_get_nested} -- Nested parallel regions
455 @table @asis
456 @item @emph{Description}:
457 This function returns @code{true} if nested parallel regions are
458 enabled, @code{false} otherwise. Here, @code{true} and @code{false}
459 represent their language-specific counterparts.
460
461 Nested parallel regions may be initialized at startup by the
462 @env{OMP_NESTED} environment variable or at runtime using
463 @code{omp_set_nested}. If undefined, nested parallel regions are
464 disabled by default.
465
466 @item @emph{C/C++}:
467 @multitable @columnfractions .20 .80
468 @item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
469 @end multitable
470
471 @item @emph{Fortran}:
472 @multitable @columnfractions .20 .80
473 @item @emph{Interface}: @tab @code{logical function omp_get_nested()}
474 @end multitable
475
476 @item @emph{See also}:
477 @ref{omp_set_nested}, @ref{OMP_NESTED}
478
479 @item @emph{Reference}:
480 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
481 @end table
482
483
484
485 @node omp_get_num_devices
486 @section @code{omp_get_num_devices} -- Number of target devices
487 @table @asis
488 @item @emph{Description}:
489 Returns the number of target devices.
490
491 @item @emph{C/C++}:
492 @multitable @columnfractions .20 .80
493 @item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
494 @end multitable
495
496 @item @emph{Fortran}:
497 @multitable @columnfractions .20 .80
498 @item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
499 @end multitable
500
501 @item @emph{Reference}:
502 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
503 @end table
504
505
506
507 @node omp_get_num_procs
508 @section @code{omp_get_num_procs} -- Number of processors online
509 @table @asis
510 @item @emph{Description}:
511 Returns the number of processors online on the device on which it is called.
512
513 @item @emph{C/C++}:
514 @multitable @columnfractions .20 .80
515 @item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
516 @end multitable
517
518 @item @emph{Fortran}:
519 @multitable @columnfractions .20 .80
520 @item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
521 @end multitable
522
523 @item @emph{Reference}:
524 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
525 @end table
526
527
528
529 @node omp_get_num_teams
530 @section @code{omp_get_num_teams} -- Number of teams
531 @table @asis
532 @item @emph{Description}:
533 Returns the number of teams in the current @code{teams} region.
534
535 @item @emph{C/C++}:
536 @multitable @columnfractions .20 .80
537 @item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
538 @end multitable
539
540 @item @emph{Fortran}:
541 @multitable @columnfractions .20 .80
542 @item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
543 @end multitable
544
545 @item @emph{Reference}:
546 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
547 @end table
548
549
550
551 @node omp_get_num_threads
552 @section @code{omp_get_num_threads} -- Size of the active team
553 @table @asis
554 @item @emph{Description}:
555 Returns the number of threads in the current team. In a sequential section of
556 the program @code{omp_get_num_threads} returns 1.
557
558 The default team size may be initialized at startup by the
559 @env{OMP_NUM_THREADS} environment variable. At runtime, the size
560 of the current team may be set either by the @code{num_threads}
561 clause or by @code{omp_set_num_threads}. If none of the above were
562 used to define a specific value and @env{OMP_DYNAMIC} is disabled,
563 one thread per online CPU is used.
564
565 @item @emph{C/C++}:
566 @multitable @columnfractions .20 .80
567 @item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
568 @end multitable
569
570 @item @emph{Fortran}:
571 @multitable @columnfractions .20 .80
572 @item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
573 @end multitable
574
575 @item @emph{See also}:
576 @ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
577
578 @item @emph{Reference}:
579 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
580 @end table
581
582
583
584 @node omp_get_proc_bind
585 @section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
586 @table @asis
587 @item @emph{Description}:
588 This function returns the currently active thread affinity policy, which is
589 set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
590 @code{omp_proc_bind_true}, @code{omp_proc_bind_master},
591 @code{omp_proc_bind_close} and @code{omp_proc_bind_spread}.
592
593 @item @emph{C/C++}:
594 @multitable @columnfractions .20 .80
595 @item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
596 @end multitable
597
598 @item @emph{Fortran}:
599 @multitable @columnfractions .20 .80
600 @item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
601 @end multitable
602
603 @item @emph{See also}:
604 @ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}
605
606 @item @emph{Reference}:
607 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
608 @end table
609
610
611
612 @node omp_get_schedule
613 @section @code{omp_get_schedule} -- Obtain the runtime scheduling method
614 @table @asis
615 @item @emph{Description}:
616 Obtain the runtime scheduling method. The @var{kind} argument will be
617 set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
618 @code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
619 @var{chunk_size}, is set to the chunk size.
620
621 @item @emph{C/C++}
622 @multitable @columnfractions .20 .80
623 @item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
624 @end multitable
625
626 @item @emph{Fortran}:
627 @multitable @columnfractions .20 .80
628 @item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
629 @item @tab @code{integer(kind=omp_sched_kind) kind}
630 @item @tab @code{integer chunk_size}
631 @end multitable
632
633 @item @emph{See also}:
634 @ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
635
636 @item @emph{Reference}:
637 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
638 @end table
639
640
641
642 @node omp_get_team_num
643 @section @code{omp_get_team_num} -- Get team number
644 @table @asis
645 @item @emph{Description}:
646 Returns the team number of the calling thread.
647
648 @item @emph{C/C++}:
649 @multitable @columnfractions .20 .80
650 @item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
651 @end multitable
652
653 @item @emph{Fortran}:
654 @multitable @columnfractions .20 .80
655 @item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
656 @end multitable
657
658 @item @emph{Reference}:
659 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
660 @end table
661
662
663
664 @node omp_get_team_size
665 @section @code{omp_get_team_size} -- Number of threads in a team
666 @table @asis
667 @item @emph{Description}:
668 This function returns the number of threads in a thread team to which
669 either the current thread or its ancestor belongs. For values of @var{level}
670 outside the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
671 1 is returned, and for @code{omp_get_level}, the result is identical
672 to @code{omp_get_num_threads}.
673
674 @item @emph{C/C++}:
675 @multitable @columnfractions .20 .80
676 @item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
677 @end multitable
678
679 @item @emph{Fortran}:
680 @multitable @columnfractions .20 .80
681 @item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
682 @item @tab @code{integer level}
683 @end multitable
684
685 @item @emph{See also}:
686 @ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
687
688 @item @emph{Reference}:
689 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
690 @end table
691
692
693
694 @node omp_get_thread_limit
695 @section @code{omp_get_thread_limit} -- Maximum number of threads
696 @table @asis
697 @item @emph{Description}:
698 Return the maximum number of threads available to the program.
699
700 @item @emph{C/C++}:
701 @multitable @columnfractions .20 .80
702 @item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
703 @end multitable
704
705 @item @emph{Fortran}:
706 @multitable @columnfractions .20 .80
707 @item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
708 @end multitable
709
710 @item @emph{See also}:
711 @ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
712
713 @item @emph{Reference}:
714 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
715 @end table
716
717
718
719 @node omp_get_thread_num
720 @section @code{omp_get_thread_num} -- Current thread ID
721 @table @asis
722 @item @emph{Description}:
723 Returns a unique thread identification number within the current team.
724 In sequential parts of the program, @code{omp_get_thread_num}
725 always returns 0. In parallel regions the return value varies
726 from 0 to @code{omp_get_num_threads}-1 inclusive. The return
727 value of the master thread of a team is always 0.
728
729 @item @emph{C/C++}:
730 @multitable @columnfractions .20 .80
731 @item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
732 @end multitable
733
734 @item @emph{Fortran}:
735 @multitable @columnfractions .20 .80
736 @item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
737 @end multitable
738
739 @item @emph{See also}:
740 @ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
741
742 @item @emph{Reference}:
743 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
744 @end table
745
746
747
748 @node omp_in_parallel
749 @section @code{omp_in_parallel} -- Whether a parallel region is active
750 @table @asis
751 @item @emph{Description}:
752 This function returns @code{true} if currently running in parallel,
753 @code{false} otherwise. Here, @code{true} and @code{false} represent
754 their language-specific counterparts.
755
756 @item @emph{C/C++}:
757 @multitable @columnfractions .20 .80
758 @item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
759 @end multitable
760
761 @item @emph{Fortran}:
762 @multitable @columnfractions .20 .80
763 @item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
764 @end multitable
765
766 @item @emph{Reference}:
767 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
768 @end table
769
770
771 @node omp_in_final
772 @section @code{omp_in_final} -- Whether in final or included task region
773 @table @asis
774 @item @emph{Description}:
775 This function returns @code{true} if currently running in a final
776 or included task region, @code{false} otherwise. Here, @code{true}
777 and @code{false} represent their language-specific counterparts.
778
779 @item @emph{C/C++}:
780 @multitable @columnfractions .20 .80
781 @item @emph{Prototype}: @tab @code{int omp_in_final(void);}
782 @end multitable
783
784 @item @emph{Fortran}:
785 @multitable @columnfractions .20 .80
786 @item @emph{Interface}: @tab @code{logical function omp_in_final()}
787 @end multitable
788
789 @item @emph{Reference}:
790 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
791 @end table
792
793
794
795 @node omp_is_initial_device
796 @section @code{omp_is_initial_device} -- Whether executing on the host device
797 @table @asis
798 @item @emph{Description}:
799 This function returns @code{true} if currently running on the host device,
800 @code{false} otherwise. Here, @code{true} and @code{false} represent
801 their language-specific counterparts.
802
803 @item @emph{C/C++}:
804 @multitable @columnfractions .20 .80
805 @item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
806 @end multitable
807
808 @item @emph{Fortran}:
809 @multitable @columnfractions .20 .80
810 @item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
811 @end multitable
812
813 @item @emph{Reference}:
814 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
815 @end table
816
817
818
819 @node omp_set_default_device
820 @section @code{omp_set_default_device} -- Set the default device for target regions
821 @table @asis
822 @item @emph{Description}:
823 Set the default device for target regions without a device clause.  The
824 argument shall be a nonnegative device number.
825
826 @item @emph{C/C++}:
827 @multitable @columnfractions .20 .80
828 @item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
829 @end multitable
830
831 @item @emph{Fortran}:
832 @multitable @columnfractions .20 .80
833 @item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
834 @item @tab @code{integer device_num}
835 @end multitable
836
837 @item @emph{See also}:
838 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
839
840 @item @emph{Reference}:
841 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
842 @end table
843
844
845
846 @node omp_set_dynamic
847 @section @code{omp_set_dynamic} -- Enable/disable dynamic teams
848 @table @asis
849 @item @emph{Description}:
850 Enable or disable the dynamic adjustment of the number of threads
851 within a team. The function takes the language-specific equivalent
852 of @code{true} and @code{false}, where @code{true} enables dynamic
853 adjustment of team sizes and @code{false} disables it.
854
855 @item @emph{C/C++}:
856 @multitable @columnfractions .20 .80
857 @item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
858 @end multitable
859
860 @item @emph{Fortran}:
861 @multitable @columnfractions .20 .80
862 @item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
863 @item @tab @code{logical, intent(in) :: dynamic_threads}
864 @end multitable
865
866 @item @emph{See also}:
867 @ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
868
869 @item @emph{Reference}:
870 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
871 @end table
872
873
874
875 @node omp_set_max_active_levels
876 @section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
877 @table @asis
878 @item @emph{Description}:
879 This function limits the maximum allowed number of nested, active
880 parallel regions.
881
882 @item @emph{C/C++}
883 @multitable @columnfractions .20 .80
884 @item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
885 @end multitable
886
887 @item @emph{Fortran}:
888 @multitable @columnfractions .20 .80
889 @item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
890 @item @tab @code{integer max_levels}
891 @end multitable
892
893 @item @emph{See also}:
894 @ref{omp_get_max_active_levels}, @ref{omp_get_active_level}
895
896 @item @emph{Reference}:
897 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
898 @end table
899
900
901
902 @node omp_set_nested
903 @section @code{omp_set_nested} -- Enable/disable nested parallel regions
904 @table @asis
905 @item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams.  The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nesting and @code{false} disables it.
910
911 @item @emph{C/C++}:
912 @multitable @columnfractions .20 .80
913 @item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
914 @end multitable
915
916 @item @emph{Fortran}:
917 @multitable @columnfractions .20 .80
918 @item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
919 @item @tab @code{logical, intent(in) :: nested}
920 @end multitable
921
922 @item @emph{See also}:
923 @ref{OMP_NESTED}, @ref{omp_get_nested}
924
925 @item @emph{Reference}:
926 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
927 @end table
928
929
930
931 @node omp_set_num_threads
932 @section @code{omp_set_num_threads} -- Set upper team size limit
933 @table @asis
934 @item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
regions, if those do not specify a @code{num_threads} clause.  The
argument of @code{omp_set_num_threads} shall be a positive integer.
938
939 @item @emph{C/C++}:
940 @multitable @columnfractions .20 .80
941 @item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
942 @end multitable
943
944 @item @emph{Fortran}:
945 @multitable @columnfractions .20 .80
946 @item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
947 @item @tab @code{integer, intent(in) :: num_threads}
948 @end multitable
949
950 @item @emph{See also}:
951 @ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
952
953 @item @emph{Reference}:
954 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
955 @end table
956
957
958
959 @node omp_set_schedule
960 @section @code{omp_set_schedule} -- Set the runtime scheduling method
961 @table @asis
962 @item @emph{Description}:
963 Sets the runtime scheduling method. The @var{kind} argument can have the
964 value @code{omp_sched_static}, @code{omp_sched_dynamic},
965 @code{omp_sched_guided} or @code{omp_sched_auto}. Except for
966 @code{omp_sched_auto}, the chunk size is set to the value of
967 @var{chunk_size} if positive, or to the default value if zero or negative.
968 For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
969
@item @emph{C/C++}:
971 @multitable @columnfractions .20 .80
972 @item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
973 @end multitable
974
975 @item @emph{Fortran}:
976 @multitable @columnfractions .20 .80
977 @item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
978 @item @tab @code{integer(kind=omp_sched_kind) kind}
979 @item @tab @code{integer chunk_size}
980 @end multitable
981
982 @item @emph{See also}:
@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}
985
986 @item @emph{Reference}:
987 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
988 @end table
989
990
991
992 @node omp_init_lock
993 @section @code{omp_init_lock} -- Initialize simple lock
994 @table @asis
995 @item @emph{Description}:
996 Initialize a simple lock. After initialization, the lock is in
997 an unlocked state.
998
999 @item @emph{C/C++}:
1000 @multitable @columnfractions .20 .80
1001 @item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
1002 @end multitable
1003
1004 @item @emph{Fortran}:
1005 @multitable @columnfractions .20 .80
1006 @item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
1007 @item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
1008 @end multitable
1009
1010 @item @emph{See also}:
1011 @ref{omp_destroy_lock}
1012
1013 @item @emph{Reference}:
1014 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
1015 @end table
1016
1017
1018
1019 @node omp_set_lock
1020 @section @code{omp_set_lock} -- Wait for and set simple lock
1021 @table @asis
1022 @item @emph{Description}:
1023 Before setting a simple lock, the lock variable must be initialized by
1024 @code{omp_init_lock}. The calling thread is blocked until the lock
1025 is available. If the lock is already held by the current thread,
1026 a deadlock occurs.
1027
1028 @item @emph{C/C++}:
1029 @multitable @columnfractions .20 .80
1030 @item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
1031 @end multitable
1032
1033 @item @emph{Fortran}:
1034 @multitable @columnfractions .20 .80
1035 @item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
1036 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1037 @end multitable
1038
1039 @item @emph{See also}:
1040 @ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
1041
1042 @item @emph{Reference}:
1043 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
1044 @end table
1045
1046
1047
1048 @node omp_test_lock
1049 @section @code{omp_test_lock} -- Test and set simple lock if available
1050 @table @asis
1051 @item @emph{Description}:
1052 Before setting a simple lock, the lock variable must be initialized by
1053 @code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
1054 does not block if the lock is not available. This function returns
1055 @code{true} upon success, @code{false} otherwise. Here, @code{true} and
1056 @code{false} represent their language-specific counterparts.
1057
1058 @item @emph{C/C++}:
1059 @multitable @columnfractions .20 .80
1060 @item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
1061 @end multitable
1062
1063 @item @emph{Fortran}:
1064 @multitable @columnfractions .20 .80
1065 @item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
1066 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1067 @end multitable
1068
1069 @item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}
1071
1072 @item @emph{Reference}:
1073 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
1074 @end table
1075
1076
1077
1078 @node omp_unset_lock
1079 @section @code{omp_unset_lock} -- Unset simple lock
1080 @table @asis
1081 @item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before.  In addition, the lock must be held by the
thread calling @code{omp_unset_lock}.  Then, the lock becomes unlocked.  If one
or more threads attempted to set the lock before, one of them is chosen to
acquire the lock.
1087
1088 @item @emph{C/C++}:
1089 @multitable @columnfractions .20 .80
1090 @item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
1091 @end multitable
1092
1093 @item @emph{Fortran}:
1094 @multitable @columnfractions .20 .80
1095 @item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
1096 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1097 @end multitable
1098
1099 @item @emph{See also}:
1100 @ref{omp_set_lock}, @ref{omp_test_lock}
1101
1102 @item @emph{Reference}:
1103 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
1104 @end table
1105
1106
1107
1108 @node omp_destroy_lock
1109 @section @code{omp_destroy_lock} -- Destroy simple lock
1110 @table @asis
1111 @item @emph{Description}:
1112 Destroy a simple lock. In order to be destroyed, a simple lock must be
1113 in the unlocked state.
1114
1115 @item @emph{C/C++}:
1116 @multitable @columnfractions .20 .80
1117 @item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
1118 @end multitable
1119
1120 @item @emph{Fortran}:
1121 @multitable @columnfractions .20 .80
1122 @item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
1123 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1124 @end multitable
1125
1126 @item @emph{See also}:
1127 @ref{omp_init_lock}
1128
1129 @item @emph{Reference}:
1130 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
1131 @end table
1132
1133
1134
1135 @node omp_init_nest_lock
1136 @section @code{omp_init_nest_lock} -- Initialize nested lock
1137 @table @asis
1138 @item @emph{Description}:
1139 Initialize a nested lock. After initialization, the lock is in
1140 an unlocked state and the nesting count is set to zero.
1141
1142 @item @emph{C/C++}:
1143 @multitable @columnfractions .20 .80
1144 @item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
1145 @end multitable
1146
1147 @item @emph{Fortran}:
1148 @multitable @columnfractions .20 .80
1149 @item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
1150 @item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
1151 @end multitable
1152
1153 @item @emph{See also}:
1154 @ref{omp_destroy_nest_lock}
1155
1156 @item @emph{Reference}:
1157 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
1158 @end table
1159
1160
1161 @node omp_set_nest_lock
1162 @section @code{omp_set_nest_lock} -- Wait for and set nested lock
1163 @table @asis
1164 @item @emph{Description}:
1165 Before setting a nested lock, the lock variable must be initialized by
1166 @code{omp_init_nest_lock}. The calling thread is blocked until the lock
1167 is available. If the lock is already held by the current thread, the
1168 nesting count for the lock is incremented.
1169
1170 @item @emph{C/C++}:
1171 @multitable @columnfractions .20 .80
1172 @item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
1173 @end multitable
1174
1175 @item @emph{Fortran}:
1176 @multitable @columnfractions .20 .80
1177 @item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
1178 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1179 @end multitable
1180
1181 @item @emph{See also}:
1182 @ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}
1183
1184 @item @emph{Reference}:
1185 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
1186 @end table
1187
1188
1189
1190 @node omp_test_nest_lock
1191 @section @code{omp_test_nest_lock} -- Test and set nested lock if available
1192 @table @asis
1193 @item @emph{Description}:
1194 Before setting a nested lock, the lock variable must be initialized by
1195 @code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
1196 @code{omp_test_nest_lock} does not block if the lock is not available.
1197 If the lock is already held by the current thread, the new nesting count
1198 is returned. Otherwise, the return value equals zero.
1199
1200 @item @emph{C/C++}:
1201 @multitable @columnfractions .20 .80
1202 @item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
1203 @end multitable
1204
1205 @item @emph{Fortran}:
1206 @multitable @columnfractions .20 .80
1207 @item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
1208 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1209 @end multitable
1210
@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}
1214
1215 @item @emph{Reference}:
1216 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
1217 @end table
1218
1219
1220
1221 @node omp_unset_nest_lock
1222 @section @code{omp_unset_nest_lock} -- Unset nested lock
1223 @table @asis
1224 @item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before.  In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}.  If the nesting count drops to zero, the
lock becomes unlocked.  If one or more threads attempted to set the lock before,
one of them is chosen to acquire the lock.
1230
1231 @item @emph{C/C++}:
1232 @multitable @columnfractions .20 .80
1233 @item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
1234 @end multitable
1235
1236 @item @emph{Fortran}:
1237 @multitable @columnfractions .20 .80
1238 @item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
1239 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1240 @end multitable
1241
1242 @item @emph{See also}:
1243 @ref{omp_set_nest_lock}
1244
1245 @item @emph{Reference}:
1246 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
1247 @end table
1248
1249
1250
1251 @node omp_destroy_nest_lock
1252 @section @code{omp_destroy_nest_lock} -- Destroy nested lock
1253 @table @asis
1254 @item @emph{Description}:
1255 Destroy a nested lock. In order to be destroyed, a nested lock must be
1256 in the unlocked state and its nesting count must equal zero.
1257
1258 @item @emph{C/C++}:
1259 @multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
1261 @end multitable
1262
1263 @item @emph{Fortran}:
1264 @multitable @columnfractions .20 .80
1265 @item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
1266 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1267 @end multitable
1268
1269 @item @emph{See also}:
@ref{omp_init_nest_lock}
1271
1272 @item @emph{Reference}:
1273 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
1274 @end table
1275
1276
1277
1278 @node omp_get_wtick
1279 @section @code{omp_get_wtick} -- Get timer precision
1280 @table @asis
1281 @item @emph{Description}:
1282 Gets the timer precision, i.e., the number of seconds between two
1283 successive clock ticks.
1284
1285 @item @emph{C/C++}:
1286 @multitable @columnfractions .20 .80
1287 @item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
1288 @end multitable
1289
1290 @item @emph{Fortran}:
1291 @multitable @columnfractions .20 .80
1292 @item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
1293 @end multitable
1294
1295 @item @emph{See also}:
1296 @ref{omp_get_wtime}
1297
1298 @item @emph{Reference}:
1299 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
1300 @end table
1301
1302
1303
1304 @node omp_get_wtime
1305 @section @code{omp_get_wtime} -- Elapsed wall clock time
1306 @table @asis
1307 @item @emph{Description}:
Elapsed wall clock time in seconds.  The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some ``time in the past'', which is an arbitrary time
guaranteed not to change during the execution of the program.
1312
1313 @item @emph{C/C++}:
1314 @multitable @columnfractions .20 .80
1315 @item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
1316 @end multitable
1317
1318 @item @emph{Fortran}:
1319 @multitable @columnfractions .20 .80
1320 @item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
1321 @end multitable
1322
1323 @item @emph{See also}:
1324 @ref{omp_get_wtick}
1325
1326 @item @emph{Reference}:
1327 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
1328 @end table
1329
1330
1331
1332 @c ---------------------------------------------------------------------
1333 @c OpenMP Environment Variables
1334 @c ---------------------------------------------------------------------
1335
1336 @node Environment Variables
1337 @chapter OpenMP Environment Variables
1338
The environment variables which begin with @env{OMP_} are defined by
section 4 of the OpenMP specification in version 4.5, while those
beginning with @env{GOMP_} are GNU extensions.
1342
1343 @menu
1344 * OMP_CANCELLATION:: Set whether cancellation is activated
1345 * OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
1346 * OMP_DEFAULT_DEVICE:: Set the device used in target regions
1347 * OMP_DYNAMIC:: Dynamic adjustment of threads
1348 * OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
1349 * OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
1350 * OMP_NESTED:: Nested parallel regions
1351 * OMP_NUM_THREADS:: Specifies the number of threads to use
* OMP_PROC_BIND:: Whether threads may be moved between CPUs
* OMP_PLACES:: Specifies on which CPUs the threads should be placed
1354 * OMP_STACKSIZE:: Set default thread stack size
1355 * OMP_SCHEDULE:: How threads are scheduled
1356 * OMP_THREAD_LIMIT:: Set the maximum number of threads
1357 * OMP_WAIT_POLICY:: How waiting threads are handled
1358 * GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
1359 * GOMP_DEBUG:: Enable debugging output
1360 * GOMP_STACKSIZE:: Set default thread stack size
1361 * GOMP_SPINCOUNT:: Set the busy-wait spin count
1362 * GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
1363 @end menu
1364
1365
1366 @node OMP_CANCELLATION
1367 @section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
1368 @cindex Environment Variable
1369 @table @asis
1370 @item @emph{Description}:
If set to @code{TRUE}, cancellation is activated.  If set to @code{FALSE} or
if unset, cancellation is disabled and the @code{cancel} construct is ignored.
1373
1374 @item @emph{See also}:
1375 @ref{omp_get_cancellation}
1376
1377 @item @emph{Reference}:
1378 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
1379 @end table
1380
1381
1382
1383 @node OMP_DISPLAY_ENV
1384 @section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
1385 @cindex Environment Variable
1386 @table @asis
1387 @item @emph{Description}:
1388 If set to @code{TRUE}, the OpenMP version number and the values
1389 associated with the OpenMP environment variables are printed to @code{stderr}.
1390 If set to @code{VERBOSE}, it additionally shows the value of the environment
1391 variables which are GNU extensions. If undefined or set to @code{FALSE},
1392 this information will not be shown.
1393
1394
1395 @item @emph{Reference}:
1396 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
1397 @end table
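For instance (the program name below is hypothetical), the verbose form also
reports the @env{GOMP_} extensions:

```shell
# Print the OpenMP version banner and all OMP_* values, plus the
# GNU GOMP_* extensions, to stderr before the program runs.
OMP_DISPLAY_ENV=VERBOSE ./my_openmp_program
```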
1398
1399
1400
1401 @node OMP_DEFAULT_DEVICE
1402 @section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
1403 @cindex Environment Variable
1404 @table @asis
1405 @item @emph{Description}:
1406 Set to choose the device which is used in a @code{target} region, unless the
1407 value is overridden by @code{omp_set_default_device} or by a @code{device}
1408 clause. The value shall be the nonnegative device number. If no device with
1409 the given device number exists, the code is executed on the host. If unset,
1410 device number 0 will be used.
1411
1412
1413 @item @emph{See also}:
@ref{omp_get_default_device}, @ref{omp_set_default_device}
1415
1416 @item @emph{Reference}:
1417 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
1418 @end table
1419
1420
1421
1422 @node OMP_DYNAMIC
1423 @section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
1424 @cindex Environment Variable
1425 @table @asis
1426 @item @emph{Description}:
1427 Enable or disable the dynamic adjustment of the number of threads
1428 within a team. The value of this environment variable shall be
1429 @code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
1430 disabled by default.
1431
1432 @item @emph{See also}:
1433 @ref{omp_set_dynamic}
1434
1435 @item @emph{Reference}:
1436 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
1437 @end table
1438
1439
1440
1441 @node OMP_MAX_ACTIVE_LEVELS
1442 @section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
1443 @cindex Environment Variable
1444 @table @asis
1445 @item @emph{Description}:
1446 Specifies the initial value for the maximum number of nested parallel
1447 regions. The value of this variable shall be a positive integer.
1448 If undefined, the number of active levels is unlimited.
1449
1450 @item @emph{See also}:
1451 @ref{omp_set_max_active_levels}
1452
1453 @item @emph{Reference}:
1454 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
1455 @end table
1456
1457
1458
1459 @node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum task priority value
1462 @cindex Environment Variable
1463 @table @asis
1464 @item @emph{Description}:
Specifies the initial value for the maximum priority value that can be
set for a task.  The value of this variable shall be a non-negative
integer.  If undefined, the default priority is 0.
1469
1470 @item @emph{See also}:
1471 @ref{omp_get_max_task_priority}
1472
1473 @item @emph{Reference}:
1474 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
1475 @end table
1476
1477
1478
1479 @node OMP_NESTED
1480 @section @env{OMP_NESTED} -- Nested parallel regions
1481 @cindex Environment Variable
1482 @cindex Implementation specific setting
1483 @table @asis
1484 @item @emph{Description}:
1485 Enable or disable nested parallel regions, i.e., whether team members
1486 are allowed to create new teams. The value of this environment variable
1487 shall be @code{TRUE} or @code{FALSE}. If undefined, nested parallel
1488 regions are disabled by default.
1489
1490 @item @emph{See also}:
1491 @ref{omp_set_nested}
1492
1493 @item @emph{Reference}:
1494 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
1495 @end table
1496
1497
1498
1499 @node OMP_NUM_THREADS
1500 @section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
1501 @cindex Environment Variable
1502 @cindex Implementation specific setting
1503 @table @asis
1504 @item @emph{Description}:
Specifies the default number of threads to use in parallel regions.  The
value of this variable shall be a comma-separated list of positive integers;
each value specifies the number of threads to use for the corresponding
nesting level.  If undefined, one thread per CPU is used.
1509
1510 @item @emph{See also}:
1511 @ref{omp_set_num_threads}
1512
1513 @item @emph{Reference}:
1514 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
1515 @end table
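For example (hypothetical program name), a per-level list sets different team
sizes for nested regions:

```shell
# Four threads for the outermost parallel region, two threads for
# the first nested level (nesting must be enabled for the inner
# value to take effect).
OMP_NESTED=TRUE OMP_NUM_THREADS=4,2 ./my_openmp_program
```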
1516
1517
1518
1519 @node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
1521 @cindex Environment Variable
1522 @table @asis
1523 @item @emph{Description}:
Specifies whether threads may be moved between processors.  If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
they may be moved.  Alternatively, a comma-separated list with the
values @code{MASTER}, @code{CLOSE} and @code{SPREAD} can be used to specify
the thread affinity policy for the corresponding nesting level.  With
@code{MASTER} the worker threads are in the same place partition as the
master thread.  With @code{CLOSE} those are kept close to the master thread
in contiguous place partitions.  And with @code{SPREAD} a sparse distribution
across the place partitions is used.
1533
1534 When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
1535 @env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
1536
1537 @item @emph{See also}:
1538 @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind}
1539
1540 @item @emph{Reference}:
1541 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
1542 @end table
1543
1544
1545
1546 @node OMP_PLACES
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
1548 @cindex Environment Variable
1549 @table @asis
1550 @item @emph{Description}:
The thread placement can be either specified using an abstract name or by an
explicit list of places.  The abstract names @code{threads}, @code{cores}
and @code{sockets} can be optionally followed by a positive number in
parentheses, which denotes how many places shall be created.  With
@code{threads} each place corresponds to a single hardware thread; with
@code{cores} to a single core with the corresponding number of hardware
threads; and with @code{sockets} the place corresponds to a single socket.
The resulting placement can be shown by setting the @env{OMP_DISPLAY_ENV}
environment variable.

Alternatively, the placement can be specified explicitly as a comma-separated
list of places.  A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads.  The hardware threads belonging to a
place can either be specified as a comma-separated list of nonnegative thread
numbers or using an interval.  Multiple places can also be either specified by
a comma-separated list of places or by an interval.  To specify an interval,
a colon followed by the count is placed after the hardware thread number or
the place.  Optionally, the length can be followed by a colon and the stride
number -- otherwise a unit stride is assumed.  For instance, the following
specify the same places list:
@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.
1573
1574 If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
1575 @env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
1576 between CPUs following no placement policy.
1577
1578 @item @emph{See also}:
1579 @ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
1580 @ref{OMP_DISPLAY_ENV}
1581
1582 @item @emph{Reference}:
1583 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
1584 @end table
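As an illustration (hypothetical program name), all three of the following
settings describe the same four places of three hardware threads each:

```shell
# Explicit list, interval per place, and interval of places with a
# count of 4 and a stride of 3 -- all equivalent.
OMP_PLACES="{0,1,2},{3,4,5},{6,7,8},{9,10,11}" ./my_openmp_program
OMP_PLACES="{0:3},{3:3},{6:3},{9:3}"           ./my_openmp_program
OMP_PLACES="{0:3}:4:3"                         ./my_openmp_program
```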
1585
1586
1587
1588 @node OMP_STACKSIZE
1589 @section @env{OMP_STACKSIZE} -- Set default thread stack size
1590 @cindex Environment Variable
1591 @table @asis
1592 @item @emph{Description}:
1593 Set the default thread stack size in kilobytes, unless the number
1594 is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
1595 case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes.  This is different from @code{pthread_attr_setstacksize},
which takes the number of bytes as an argument.  If the stack size cannot
1598 be set due to system constraints, an error is reported and the initial
1599 stack size is left unchanged. If undefined, the stack size is system
1600 dependent.
1601
1602 @item @emph{Reference}:
1603 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
1604 @end table
1605
1606
1607
1608 @node OMP_SCHEDULE
1609 @section @env{OMP_SCHEDULE} -- How threads are scheduled
1610 @cindex Environment Variable
1611 @cindex Implementation specific setting
1612 @table @asis
1613 @item @emph{Description}:
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form: @code{type[,chunk]} where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}.
The optional @code{chunk} size shall be a positive integer.  If undefined,
dynamic scheduling and a chunk size of 1 is used.
1619
1620 @item @emph{See also}:
1621 @ref{omp_set_schedule}
1622
1623 @item @emph{Reference}:
1624 @uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
1625 @end table
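For example (hypothetical program name):

```shell
# Loops declared with schedule(runtime) then use guided scheduling
# with a minimum chunk of 16 iterations.
OMP_SCHEDULE="guided,16" ./my_openmp_program
```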
1626
1627
1628
1629 @node OMP_THREAD_LIMIT
1630 @section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
1631 @cindex Environment Variable
1632 @table @asis
1633 @item @emph{Description}:
1634 Specifies the number of threads to use for the whole program. The
1635 value of this variable shall be a positive integer. If undefined,
1636 the number of threads is not limited.
1637
1638 @item @emph{See also}:
1639 @ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
1640
1641 @item @emph{Reference}:
1642 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
1643 @end table
1644
1645
1646
1647 @node OMP_WAIT_POLICY
1648 @section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
1649 @cindex Environment Variable
1650 @table @asis
1651 @item @emph{Description}:
Specifies whether waiting threads should be active or passive.  If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; if the value is @code{ACTIVE}, they should.
If undefined, threads wait actively for a short time before waiting
passively.
1657
1658 @item @emph{See also}:
1659 @ref{GOMP_SPINCOUNT}
1660
1661 @item @emph{Reference}:
1662 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
1663 @end table
1664
1665
1666
1667 @node GOMP_CPU_AFFINITY
1668 @section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
1669 @cindex Environment Variable
1670 @table @asis
1671 @item @emph{Description}:
1672 Binds threads to specific CPUs. The variable should contain a space-separated
1673 or comma-separated list of CPUs. This list may contain different kinds of
1674 entries: either single CPU numbers in any order, a range of CPUs (M-N)
1675 or a range with some stride (M-N:S). CPU numbers are zero based. For example,
1676 @code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
1677 to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
1678 CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
1679 and 14 respectively and then start assigning back from the beginning of
1680 the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
1681
1682 There is no libgomp library routine to determine whether a CPU affinity
1683 specification is in effect. As a workaround, language-specific library
1684 functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
1685 Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
1686 environment variable. A defined CPU affinity on startup cannot be changed
1687 or disabled during the runtime of the application.
1688
If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} has a higher precedence.  If neither has been set, or
when @env{OMP_PROC_BIND} is set to @code{FALSE}, the host system will
handle the assignment of threads to CPUs.
1693
1694 @item @emph{See also}:
1695 @ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
1696 @end table
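The example from the text, spelled out as an invocation (hypothetical
program name):

```shell
# Threads are bound, in order, to CPU 0, CPU 3, CPUs 1-2, then the
# stride-2 range 4,6,8,10,12,14; further threads wrap around to the
# beginning of the list.
GOMP_CPU_AFFINITY="0 3 1-2 4-15:2" ./my_openmp_program
```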
1697
1698
1699
1700 @node GOMP_DEBUG
1701 @section @env{GOMP_DEBUG} -- Enable debugging output
1702 @cindex Environment Variable
1703 @table @asis
1704 @item @emph{Description}:
1705 Enable debugging output. The variable should be set to @code{0}
1706 (disabled, also the default if not set), or @code{1} (enabled).
1707
1708 If enabled, some debugging output will be printed during execution.
1709 This is currently not specified in more detail, and subject to change.
1710 @end table
1711
1712
1713
1714 @node GOMP_STACKSIZE
1715 @section @env{GOMP_STACKSIZE} -- Set default thread stack size
1716 @cindex Environment Variable
1717 @cindex Implementation specific setting
1718 @table @asis
1719 @item @emph{Description}:
Set the default thread stack size in kilobytes.  This is different from
@code{pthread_attr_setstacksize}, which takes the number of bytes as an
argument.  If the stack size cannot be set due to system constraints, an
1723 error is reported and the initial stack size is left unchanged. If undefined,
1724 the stack size is system dependent.
1725
1726 @item @emph{See also}:
1727 @ref{OMP_STACKSIZE}
1728
1729 @item @emph{Reference}:
1730 @uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
1731 GCC Patches Mailinglist},
1732 @uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
1733 GCC Patches Mailinglist}
1734 @end table
1735
1736
1737
1738 @node GOMP_SPINCOUNT
1739 @section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
1740 @cindex Environment Variable
1741 @cindex Implementation specific setting
1742 @table @asis
1743 @item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power.  The value may be
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop.  The
1748 integer may optionally be followed by the following suffixes acting
1749 as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
1750 million), @code{G} (giga, billion), or @code{T} (tera, trillion).
1751 If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
1752 300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
1753 30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
1754 If there are more OpenMP threads than available CPUs, 1000 and 100
1755 spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
1756 undefined, respectively; unless the @env{GOMP_SPINCOUNT} is lower
1757 or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
1758
1759 @item @emph{See also}:
1760 @ref{OMP_WAIT_POLICY}
1761 @end table
1762
1763
1764
@node GOMP_RTEMS_THREAD_POOLS
@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
This environment variable is only used on the RTEMS real-time operating system.
It determines the scheduler instance specific thread pools. The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:
@itemize @bullet
@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}. In case a priority
value is omitted, then a worker thread will inherit the priority of the OpenMP
master thread that created it. The priority of the worker thread is not
changed after creation, even if a new OpenMP master thread using the worker has
a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.
@end itemize
In case no thread pool configuration is specified for a scheduler instance,
then each OpenMP master thread of this scheduler instance will use its own
dynamically allocated thread pool. To limit the worker thread count of the
thread pools, each OpenMP master thread must call @code{omp_set_num_threads}.
@item @emph{Example}:
Let us suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
one thread pool available. Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP master thread
that created it. In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.
@end table


@c ---------------------------------------------------------------------
@c Enabling OpenACC
@c ---------------------------------------------------------------------

@node Enabling OpenACC
@chapter Enabling OpenACC

To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenacc} must be specified. This enables the OpenACC directive
@code{#pragma acc} in C/C++ and, for Fortran, @code{!$acc} directives in free
form, @code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form. The flag also
arranges for automatic linking of the OpenACC runtime library
(@ref{OpenACC Runtime Library Routines}).

A complete description of all OpenACC directives accepted may be found in
the @uref{https://www.openacc.org, OpenACC} Application Programming
Interface manual, version 2.0.

Note that this is an experimental feature and subject to
change in future versions of GCC. See
@uref{https://gcc.gnu.org/wiki/OpenACC} for more information.


@c ---------------------------------------------------------------------
@c OpenACC Runtime Library Routines
@c ---------------------------------------------------------------------

@node OpenACC Runtime Library Routines
@chapter OpenACC Runtime Library Routines

The runtime routines described here are defined by section 3 of the OpenACC
specification, version 2.0.
They have C linkage, and do not throw exceptions.
Generally, they are available only for the host, with the exception of
@code{acc_on_device}, which is available for both the host and the
acceleration device.

@menu
* acc_get_num_devices::          Get number of devices for the given device
                                 type.
* acc_set_device_type::          Set type of device accelerator to use.
* acc_get_device_type::          Get type of device accelerator to be used.
* acc_set_device_num::           Set device number to use.
* acc_get_device_num::           Get device number to be used.
* acc_async_test::               Tests for completion of a specific asynchronous
                                 operation.
* acc_async_test_all::           Tests for completion of all asynchronous
                                 operations.
* acc_wait::                     Wait for completion of a specific asynchronous
                                 operation.
* acc_wait_all::                 Waits for completion of all asynchronous
                                 operations.
* acc_wait_all_async::           Wait for completion of all asynchronous
                                 operations.
* acc_wait_async::               Wait for completion of asynchronous operations.
* acc_init::                     Initialize runtime for a specific device type.
* acc_shutdown::                 Shuts down the runtime for a specific device
                                 type.
* acc_on_device::                Whether executing on a particular device
* acc_malloc::                   Allocate device memory.
* acc_free::                     Free device memory.
* acc_copyin::                   Allocate device memory and copy host memory to
                                 it.
* acc_present_or_copyin::        If the data is not present on the device,
                                 allocate device memory and copy from host
                                 memory.
* acc_create::                   Allocate device memory and map it to host
                                 memory.
* acc_present_or_create::        If the data is not present on the device,
                                 allocate device memory and map it to host
                                 memory.
* acc_copyout::                  Copy device memory to host memory.
* acc_delete::                   Free device memory.
* acc_update_device::            Update device memory from mapped host memory.
* acc_update_self::              Update host memory from mapped device memory.
* acc_map_data::                 Map previously allocated device memory to host
                                 memory.
* acc_unmap_data::               Unmap device memory from host memory.
* acc_deviceptr::                Get device pointer associated with specific
                                 host address.
* acc_hostptr::                  Get host pointer associated with specific
                                 device address.
* acc_is_present::               Indicate whether host variable / array is
                                 present on device.
* acc_memcpy_to_device::         Copy host memory to device memory.
* acc_memcpy_from_device::       Copy device memory to host memory.

API routines for target platforms.

* acc_get_current_cuda_device::  Get CUDA device handle.
* acc_get_current_cuda_context:: Get CUDA context handle.
* acc_get_cuda_stream::          Get CUDA stream handle.
* acc_set_cuda_stream::          Set CUDA stream handle.

API routines for the OpenACC Profiling Interface.

* acc_prof_register::            Register callbacks.
* acc_prof_unregister::          Unregister callbacks.
* acc_prof_lookup::              Obtain inquiry functions.
* acc_register_library::         Library registration.
@end menu


@node acc_get_num_devices
@section @code{acc_get_num_devices} -- Get number of devices for given device type
@table @asis
@item @emph{Description}
This function returns a value indicating the number of devices available
for the device type specified in @var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.1.
@end table


@node acc_set_device_type
@section @code{acc_set_device_type} -- Set type of device accelerator to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime library which device type, specified
in @var{devicetype}, to use when executing a parallel or kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.2.
@end table


@node acc_get_device_type
@section @code{acc_get_device_type} -- Get type of device accelerator to be used.
@table @asis
@item @emph{Description}
This function returns what device type will be used when executing a
parallel or kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_type()}
@item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.3.
@end table


@node acc_set_device_num
@section @code{acc_set_device_num} -- Set device number to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime which device number, specified
by @var{num}, of the specified device type @var{devicetype} to use.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_set_device_num(int num, acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.4.
@end table


@node acc_get_device_num
@section @code{acc_get_device_num} -- Get device number to be used.
@table @asis
@item @emph{Description}
This function returns the device number of the specified device type
@var{devicetype} that will be used when executing a parallel or kernels
region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer acc_get_device_num}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.5.
@end table


@node acc_async_test
@section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function tests for completion of the asynchronous operation specified
in @var{arg}. In C/C++, a non-zero value is returned to indicate that the
specified asynchronous operation has completed, while Fortran returns
@code{true}. If the asynchronous operation has not completed, C/C++ returns
zero and Fortran returns @code{false}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test(arg)}
@item @tab @code{integer(kind=acc_handle_kind) arg}
@item @tab @code{logical acc_async_test}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.6.
@end table


@node acc_async_test_all
@section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function tests for completion of all asynchronous operations.
In C/C++, a non-zero value is returned to indicate that all asynchronous
operations have completed, while Fortran returns @code{true}. If
any asynchronous operation has not completed, C/C++ returns zero and
Fortran returns @code{false}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.7.
@end table


@node acc_wait
@section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function waits for completion of the asynchronous operation
specified in @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_wait(int arg);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.8.
@end table


@node acc_wait_all
@section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function waits for the completion of all asynchronous operations.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_wait_all(void);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.10.
@end table


@node acc_wait_all_async
@section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on the queue @var{async} for any
and all asynchronous operations that have been previously enqueued on
any queue.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
@item @tab @code{integer(acc_handle_kind) async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.11.
@end table


@node acc_wait_async
@section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on queue @var{async} for any and all
asynchronous operations enqueued on queue @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
@item @tab @code{integer(acc_handle_kind) arg, async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.9.
@end table


@node acc_init
@section @code{acc_init} -- Initialize runtime for a specific device type.
@table @asis
@item @emph{Description}
This function initializes the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.12.
@end table


@node acc_shutdown
@section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
@table @asis
@item @emph{Description}
This function shuts down the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.13.
@end table


@node acc_on_device
@section @code{acc_on_device} -- Whether executing on a particular device
@table @asis
@item @emph{Description}:
This function returns whether the program is executing on a particular
device specified in @var{devicetype}. In C/C++, a non-zero value is
returned to indicate that the program is executing on the specified device
type; in Fortran, @code{true} is returned. If the program is not executing
on the specified device type, C/C++ returns zero, while Fortran returns
@code{false}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_on_device(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@item @tab @code{logical acc_on_device}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.14.
@end table


@node acc_malloc
@section @code{acc_malloc} -- Allocate device memory.
@table @asis
@item @emph{Description}
This function allocates @var{len} bytes of device memory. It returns
the device address of the allocated memory.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.15.
@end table


@node acc_free
@section @code{acc_free} -- Free device memory.
@table @asis
@item @emph{Description}
Free previously allocated device memory at the device address @var{a}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.16.
@end table


@node acc_copyin
@section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
@table @asis
@item @emph{Description}
In C/C++, this function allocates @var{len} bytes of device memory
and maps it to the specified host address in @var{a}. The device
address of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.17.
@end table


@node acc_present_or_copyin
@section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
@table @asis
@item @emph{Description}
This function tests whether the host data specified by @var{a} and of length
@var{len} is present on the device. If it is not present, device memory
is allocated and the host memory copied. The device address of
the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.18.
@end table


@node acc_create
@section @code{acc_create} -- Allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function allocates device memory and maps it to host memory specified
by the host address @var{a} with a length of @var{len} bytes. In C/C++,
the function returns the device address of the allocated device memory.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.19.
@end table


@node acc_present_or_create
@section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function tests whether the host data specified by @var{a} and of length
@var{len} is present on the device. If it is not present, device memory
is allocated and mapped to host memory. In C/C++, the device address
of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
@item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.20.
@end table


@node acc_copyout
@section @code{acc_copyout} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
In C/C++, this function copies mapped device memory to the host memory
specified by the host address @var{a} for a length of @var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.21.
@end table


@node acc_delete
@section @code{acc_delete} -- Free device memory.
@table @asis
@item @emph{Description}
This function frees previously allocated device memory, specified by
the host address @var{a} and a length of @var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.22.
@end table


2531 @node acc_update_device
2532 @section @code{acc_update_device} -- Update device memory from mapped host memory.
2533 @table @asis
2534 @item @emph{Description}
2535 This function updates the device copy from the previously mapped host memory.
2536 The host memory is specified with the host address @var{a} and a length of
2537 @var{len} bytes.
2538
2539 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
2540 a contiguous array section. The second form @var{a} specifies a variable or
2541 array element and @var{len} specifies the length in bytes.
2542
2543 @item @emph{C/C++}:
2544 @multitable @columnfractions .20 .80
2545 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
2546 @end multitable
2547
2548 @item @emph{Fortran}:
2549 @multitable @columnfractions .20 .80
2550 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
2551 @item @tab @code{type, dimension(:[,:]...) :: a}
2552 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
2553 @item @tab @code{type, dimension(:[,:]...) :: a}
2554 @item @tab @code{integer len}
2555 @end multitable
2556
2557 @item @emph{Reference}:
2558 @uref{https://www.openacc.org, OpenACC specification v2.0}, section
2559 3.2.23.
2560 @end table



@node acc_update_self
@section @code{acc_update_self} -- Update host memory from mapped device memory.
@table @asis
@item @emph{Description}
This function updates the host copy from the previously mapped device memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.24.
@end table



@node acc_map_data
@section @code{acc_map_data} -- Map previously allocated device memory to host memory.
@table @asis
@item @emph{Description}
This function maps previously allocated device and host memory. The device
memory is specified with the device address @var{d}. The host memory is
specified with the host address @var{h} and a length of @var{len} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.25.
@end table



@node acc_unmap_data
@section @code{acc_unmap_data} -- Unmap device memory from host memory.
@table @asis
@item @emph{Description}
This function unmaps previously mapped device and host memory. The host
memory is specified by @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.26.
@end table



@node acc_deviceptr
@section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
@table @asis
@item @emph{Description}
This function returns the device address that has been mapped to the
host address specified by @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.27.
@end table



@node acc_hostptr
@section @code{acc_hostptr} -- Get host pointer associated with specific device address.
@table @asis
@item @emph{Description}
This function returns the host address that has been mapped to the
device address specified by @var{d}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.28.
@end table



@node acc_is_present
@section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
@table @asis
@item @emph{Description}
This function indicates whether the host memory specified by the address
@var{a} and a length of @var{len} bytes is present on the device. In C/C++,
a non-zero value is returned to indicate that the mapped memory is present
on the device; zero is returned to indicate that the memory is not mapped
on the device.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable
or array element and @var{len} specifies the length in bytes. If the host
memory is mapped to device memory, @code{true} is returned. Otherwise,
@code{false} is returned to indicate that the memory is not present.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_is_present(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{logical acc_is_present}
@item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{logical acc_is_present}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.29.
@end table



@node acc_memcpy_to_device
@section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
@table @asis
@item @emph{Description}
This function copies host memory specified by the host address @var{src} to
device memory specified by the device address @var{dest} for a length of
@var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.30.
@end table



@node acc_memcpy_from_device
@section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
This function copies device memory specified by the device address @var{src}
to host memory specified by the host address @var{dest} for a length of
@var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
3.2.31.
@end table



@node acc_get_current_cuda_device
@section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
@table @asis
@item @emph{Description}
This function returns the CUDA device handle. This handle is the same
as that used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
A.2.1.1.
@end table



@node acc_get_current_cuda_context
@section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
@table @asis
@item @emph{Description}
This function returns the CUDA context handle. This handle is the same
as that used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
A.2.1.2.
@end table



@node acc_get_cuda_stream
@section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
@table @asis
@item @emph{Description}
This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as that used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
A.2.1.3.
@end table



@node acc_set_cuda_stream
@section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
@table @asis
@item @emph{Description}
This function associates the stream handle specified by @var{stream} with
the queue @var{async}.

This cannot be used to change the stream handle associated with
@code{acc_async_sync}.

The return value is not specified.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
A.2.1.4.
@end table



@node acc_prof_register
@section @code{acc_prof_register} -- Register callbacks.
@table @asis
@item @emph{Description}:
This function registers callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_unregister
@section @code{acc_prof_unregister} -- Unregister callbacks.
@table @asis
@item @emph{Description}:
This function unregisters callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_lookup
@section @code{acc_prof_lookup} -- Obtain inquiry functions.
@table @asis
@item @emph{Description}:
Function to obtain inquiry functions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_register_library
@section @code{acc_register_library} -- Library registration.
@table @asis
@item @emph{Description}:
Function for library registration.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@c ---------------------------------------------------------------------
@c OpenACC Environment Variables
@c ---------------------------------------------------------------------

@node OpenACC Environment Variables
@chapter OpenACC Environment Variables

The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
are defined by section 4 of the OpenACC specification in version 2.0.
The variable @env{ACC_PROFLIB}
is defined by section 4 of the OpenACC specification in version 2.6.
The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.

@menu
* ACC_DEVICE_TYPE::
* ACC_DEVICE_NUM::
* ACC_PROFLIB::
* GCC_ACC_NOTIFY::
@end menu
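A hedged sketch of how these variables might be set from a shell before
starting an OpenACC program (the device type value and the profiling library
path are illustrative placeholders):

```shell
# Select the offload device type, and which device of that type to use.
export ACC_DEVICE_TYPE=nvidia
export ACC_DEVICE_NUM=0
# Optionally load a profiling library at startup (OpenACC 2.6).
export ACC_PROFLIB="$HOME/lib/libmyprof.so"   # hypothetical library
# Then run the OpenACC program, e.g.:
# ./app
```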



@node ACC_DEVICE_TYPE
@section @code{ACC_DEVICE_TYPE}
@table @asis
@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
4.1.
@end table



@node ACC_DEVICE_NUM
@section @code{ACC_DEVICE_NUM}
@table @asis
@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.0}, section
4.2.
@end table



@node ACC_PROFLIB
@section @code{ACC_PROFLIB}
@table @asis
@item @emph{See also}:
@ref{acc_register_library}, @ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.3.
@end table



@node GCC_ACC_NOTIFY
@section @code{GCC_ACC_NOTIFY}
@table @asis
@item @emph{Description}:
Print debug information pertaining to the accelerator.
@end table



@c ---------------------------------------------------------------------
@c CUDA Streams Usage
@c ---------------------------------------------------------------------

@node CUDA Streams Usage
@chapter CUDA Streams Usage

This applies to the @code{nvptx} plugin only.

The library provides elements that perform asynchronous movement of
data and asynchronous operation of computing constructs. This
asynchronous functionality is implemented by making use of CUDA
streams@footnote{See "Stream Management" in "CUDA Driver API",
TRM-06703-001, Version 5.5, for additional information}.

The primary means by which the asynchronous functionality is accessed
is through the use of those OpenACC directives which make use of the
@code{async} and @code{wait} clauses. When the @code{async} clause is
first used with a directive, it creates a CUDA stream. If an
@code{async-argument} is used with the @code{async} clause, then the
stream is associated with the specified @code{async-argument}.

Following the creation of an association between a CUDA stream and the
@code{async-argument} of an @code{async} clause, both the @code{wait}
clause and the @code{wait} directive can be used. When either the
clause or directive is used after stream creation, it creates a
rendezvous point whereby execution waits until all operations
associated with the @code{async-argument}, that is, stream, have
completed.

Normally, the management of the streams that are created as a result of
using the @code{async} clause is done without any intervention by the
caller. This implies that the association between the @code{async-argument}
and the CUDA stream will be maintained for the lifetime of the program.
However, this association can be changed through the use of the library
function @code{acc_set_cuda_stream}. When the function
@code{acc_set_cuda_stream} is called, the CUDA stream that was
originally associated with the @code{async} clause will be destroyed.
Caution should be taken when changing the association, as subsequent
references to the @code{async-argument} will refer to a different
CUDA stream.



@c ---------------------------------------------------------------------
@c OpenACC Library Interoperability
@c ---------------------------------------------------------------------

@node OpenACC Library Interoperability
@chapter OpenACC Library Interoperability

@section Introduction

The OpenACC library uses the CUDA Driver API, and may interact with
programs that use the Runtime library directly, or another library
based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
"Interactions with the CUDA Driver API" in
"CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
for additional information on library interoperability.}.
This chapter describes the use cases and what changes are
required in order to use both the OpenACC library and the CUBLAS and Runtime
libraries within a program.

@section First invocation: NVIDIA CUBLAS library API

In this first use case (see below), a function in the CUBLAS library is called
prior to any of the functions in the OpenACC library. More specifically, the
function @code{cublasCreate()}.

When invoked, the function initializes the library and allocates the
hardware resources on the host and the device on behalf of the caller. Once
the initialization and allocation have completed, a handle is returned to the
caller. The OpenACC library also requires initialization and allocation of
hardware resources. Since the CUBLAS library has already allocated the
hardware resources for the device, all that is left to do is to initialize
the OpenACC library and acquire the hardware resources on the host.

Prior to calling the OpenACC function that initializes the library and
allocates the host hardware resources, you need to acquire the device number
that was allocated during the call to @code{cublasCreate()}. Invoking the
runtime library function @code{cudaGetDevice()} accomplishes this. Once
acquired, the device number is passed along with the device type as
parameters to the OpenACC library function @code{acc_set_device_num()}.

Once the call to @code{acc_set_device_num()} has completed, the OpenACC
library uses the context that was created during the call to
@code{cublasCreate()}. In other words, both libraries will be sharing the
same context.

@smallexample
/* Create the handle */
s = cublasCreate(&h);
if (s != CUBLAS_STATUS_SUCCESS)
@{
    fprintf(stderr, "cublasCreate failed %d\n", s);
    exit(EXIT_FAILURE);
@}

/* Get the device number */
e = cudaGetDevice(&dev);
if (e != cudaSuccess)
@{
    fprintf(stderr, "cudaGetDevice failed %d\n", e);
    exit(EXIT_FAILURE);
@}

/* Initialize OpenACC library and use device 'dev' */
acc_set_device_num(dev, acc_device_nvidia);

@end smallexample
@center Use Case 1

@section First invocation: OpenACC library API

In this second use case (see below), a function in the OpenACC library is
called prior to any of the functions in the CUBLAS library. More specifically,
the function @code{acc_set_device_num()}.

In the use case presented here, the function @code{acc_set_device_num()}
is used to both initialize the OpenACC library and allocate the hardware
resources on the host and the device. In the call to the function, the
call parameters specify which device to use and what device
type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
is but one method to initialize the OpenACC library and allocate the
appropriate hardware resources. Other methods are available through the
use of environment variables, and these are discussed in the next section.

Once the call to @code{acc_set_device_num()} has completed, other OpenACC
functions can be called, as seen with the multiple calls being made to
@code{acc_copyin()}. In addition, calls can be made to functions in the
CUBLAS library. In this use case, a call to @code{cublasCreate()} is made
subsequent to the calls to @code{acc_copyin()}.
As seen in the previous use case, a call to @code{cublasCreate()}
initializes the CUBLAS library and allocates the hardware resources on the
host and the device. However, since the device has already been allocated,
@code{cublasCreate()} will only initialize the CUBLAS library and allocate
the appropriate hardware resources on the host. The context that was created
as part of the OpenACC initialization is shared with the CUBLAS library,
similarly to the first use case.

@smallexample
dev = 0;

acc_set_device_num(dev, acc_device_nvidia);

/* Copy the first set to the device */
d_X = acc_copyin(&h_X[0], N * sizeof (float));
if (d_X == NULL)
@{
    fprintf(stderr, "copyin error h_X\n");
    exit(EXIT_FAILURE);
@}

/* Copy the second set to the device */
d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
if (d_Y == NULL)
@{
    fprintf(stderr, "copyin error h_Y1\n");
    exit(EXIT_FAILURE);
@}

/* Create the handle */
s = cublasCreate(&h);
if (s != CUBLAS_STATUS_SUCCESS)
@{
    fprintf(stderr, "cublasCreate failed %d\n", s);
    exit(EXIT_FAILURE);
@}

/* Perform saxpy using CUBLAS library function */
s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
if (s != CUBLAS_STATUS_SUCCESS)
@{
    fprintf(stderr, "cublasSaxpy failed %d\n", s);
    exit(EXIT_FAILURE);
@}

/* Copy the results from the device */
acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));

@end smallexample
@center Use Case 2

@section OpenACC library and environment variables

There are two environment variables associated with the OpenACC library
that may be used to control the device type and device number:
@env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
environment variables can be used as an alternative to calling
@code{acc_set_device_num()}. As seen in the second use case, the device
type and device number were specified using @code{acc_set_device_num()}.
If, however, the aforementioned environment variables were set, then the
call to @code{acc_set_device_num()} would not be required.


The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
is called prior to a call to an OpenACC function, then you must call
@code{acc_set_device_num()}.@footnote{More complete information
about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the ``@uref{https://www.openacc.org, OpenACC}
Application Programming Interface'', Version 2.0.}



@c ---------------------------------------------------------------------
@c OpenACC Profiling Interface
@c ---------------------------------------------------------------------

@node OpenACC Profiling Interface
@chapter OpenACC Profiling Interface

@section Implementation Status and Implementation-Defined Behavior

We're implementing the OpenACC Profiling Interface as defined by the
OpenACC 2.6 specification. We're clarifying some aspects here as
@emph{implementation-defined behavior}, while they're still under
discussion within the OpenACC Technical Committee.

This implementation is tuned to keep the performance impact as low as
possible for the (very common) case that the Profiling Interface is
not enabled. This is relevant, as the Profiling Interface affects all
the @emph{hot} code paths (in the target code, not in the offloaded
code). Users of the OpenACC Profiling Interface can be expected to
understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because of the
@emph{runtime} (libgomp) calling into a third-party @emph{library} for
every event that has been registered.

We're not yet accounting for the fact that @cite{OpenACC events may
occur during event processing}.

We're not yet implementing initialization via an
@code{acc_register_library} function that is either statically linked
in or dynamically loaded via @env{LD_PRELOAD}.
Initialization via @code{acc_register_library} functions dynamically
loaded via the @env{ACC_PROFLIB} environment variable does work, as
does directly calling @code{acc_prof_register},
@code{acc_prof_unregister}, and @code{acc_prof_lookup}.

As there are currently no inquiry functions defined, calls to
@code{acc_prof_lookup} will always return @code{NULL}.

There aren't separate @emph{start} and @emph{stop} events defined for the
event types @code{acc_ev_create}, @code{acc_ev_delete},
@code{acc_ev_alloc}, and @code{acc_ev_free}. It's not clear if these
should be triggered before or after the actual device-specific call is
made. We trigger them after.

Remarks about data provided to callbacks:

@table @asis

@item @code{acc_prof_info.event_type}
It's not clear if for @emph{nested} event callbacks (for example,
@code{acc_ev_enqueue_launch_start} as part of a parent compute
construct), this should be set for the nested event
(@code{acc_ev_enqueue_launch_start}), or if the value of the parent
construct should remain (@code{acc_ev_compute_construct_start}). In
this implementation, the value will generally correspond to the
innermost nested event type.

@item @code{acc_prof_info.device_type}
@itemize

@item
For @code{acc_ev_compute_construct_start}, and in presence of an
@code{if} clause with @emph{false} argument, this will still refer to
the offloading device type.
It's not clear if that's the expected behavior.

@item
Complementary to the item before, for
@code{acc_ev_compute_construct_end}, this is set to
@code{acc_device_host} in presence of an @code{if} clause with
@emph{false} argument.
It's not clear if that's the expected behavior.

@end itemize

@item @code{acc_prof_info.thread_id}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.async}
@itemize

@item
Not yet implemented correctly for
@code{acc_ev_compute_construct_start}.

@item
In a compute construct, for host-fallback
execution/@code{acc_device_host} it will always be
@code{acc_async_sync}.
It's not clear if that's the expected behavior.

@item
For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
it will always be @code{acc_async_sync}.
It's not clear if that's the expected behavior.

@end itemize

@item @code{acc_prof_info.async_queue}
There is no @cite{limited number of asynchronous queues} in libgomp.
This will always have the same value as @code{acc_prof_info.async}.

@item @code{acc_prof_info.src_file}
Always @code{NULL}; not yet implemented.

@item @code{acc_prof_info.func_name}
Always @code{NULL}; not yet implemented.

@item @code{acc_prof_info.line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.end_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.func_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_prof_info.func_end_line_no}
Always @code{-1}; not yet implemented.

@item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
Relating to @code{acc_prof_info.event_type} discussed above, in this
implementation, this will always be the same value as
@code{acc_prof_info.event_type}.

@item @code{acc_event_info.*.parent_construct}
@itemize

@item
Will be @code{acc_construct_parallel} for all OpenACC compute
constructs as well as many OpenACC Runtime API calls; should be the
one matching the actual construct, or
@code{acc_construct_runtime_api}, respectively.

@item
Will be @code{acc_construct_enter_data} or
@code{acc_construct_exit_data} when processing variable mappings
specified in OpenACC @emph{declare} directives; should be
@code{acc_construct_declare}.

@item
For implicit @code{acc_ev_device_init_start},
@code{acc_ev_device_init_end}, and explicit as well as implicit
@code{acc_ev_alloc}, @code{acc_ev_free},
@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, will be
@code{acc_construct_parallel}; should reflect the real parent
construct.

@end itemize

@item @code{acc_event_info.*.implicit}
For @code{acc_ev_alloc}, @code{acc_ev_free},
@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
@code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, this will currently be @code{1}
even for explicit usage.

@item @code{acc_event_info.data_event.var_name}
Always @code{NULL}; not yet implemented.

@item @code{acc_event_info.data_event.host_ptr}
For @code{acc_ev_alloc} and @code{acc_ev_free}, this is always
@code{NULL}.

@item @code{typedef union acc_api_info}
@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
Information}. This should obviously be @code{typedef @emph{struct}
acc_api_info}.

@item @code{acc_api_info.device_api}
Possibly not yet implemented correctly for
@code{acc_ev_compute_construct_start},
@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
will always be @code{acc_device_api_none} for these event types.
For @code{acc_ev_enter_data_start}, it will be
@code{acc_device_api_none} in some cases.

@item @code{acc_api_info.device_type}
Always the same as @code{acc_prof_info.device_type}.

@item @code{acc_api_info.vendor}
Always @code{-1}; not yet implemented.

@item @code{acc_api_info.device_handle}
Always @code{NULL}; not yet implemented.

@item @code{acc_api_info.context_handle}
Always @code{NULL}; not yet implemented.

@item @code{acc_api_info.async_handle}
Always @code{NULL}; not yet implemented.

@end table

Remarks about certain event types:

@table @asis

@item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
@itemize

@item
@c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
@c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
@c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
When a compute construct triggers implicit
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
events, they currently aren't @emph{nested within} the corresponding
@code{acc_ev_compute_construct_start} and
@code{acc_ev_compute_construct_end}; instead, they're currently observed
@emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do here: the standard asks us to provide a lot
of details to the @code{acc_ev_compute_construct_start} callback, but
how can these be provided without (implicitly) initializing a device
first?

@item
Callbacks for these event types will not be invoked for calls to the
@code{acc_set_device_type} and @code{acc_set_device_num} functions.
It's not clear if they should be.

@end itemize

@item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
@itemize

@item
Callbacks for these event types will also be invoked for OpenACC
@emph{host_data} constructs.
It's not clear if they should be.

@item
Callbacks for these event types will also be invoked when processing
variable mappings specified in OpenACC @emph{declare} directives.
It's not clear if they should be.

@end itemize

@end table

Callbacks for the following event types will be invoked, but the
dispatch and the information provided therein have not yet been
thoroughly reviewed:

@itemize
@item @code{acc_ev_alloc}
@item @code{acc_ev_free}
@item @code{acc_ev_update_start}, @code{acc_ev_update_end}
@item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
@item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
@end itemize

During device initialization and finalization, respectively, callbacks
for the following event types will not yet be invoked:

@itemize
@item @code{acc_ev_alloc}
@item @code{acc_ev_free}
@end itemize

Callbacks for the following event types have not yet been implemented,
so currently won't be invoked:

@itemize
@item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
@item @code{acc_ev_runtime_shutdown}
@item @code{acc_ev_create}, @code{acc_ev_delete}
@item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
@end itemize

For the following runtime library functions, not all expected
callbacks will be invoked (mostly concerning implicit device
initialization):

@itemize
@item @code{acc_get_num_devices}
@item @code{acc_set_device_type}
@item @code{acc_get_device_type}
@item @code{acc_set_device_num}
@item @code{acc_get_device_num}
@item @code{acc_init}
@item @code{acc_shutdown}
@end itemize

Aside from implicit device initialization, for the following runtime
library functions, no callbacks will be invoked for shared-memory
offloading devices (it's not clear if they should be):

@itemize
@item @code{acc_malloc}
@item @code{acc_free}
@item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
@item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
@item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
@item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
@item @code{acc_update_device}, @code{acc_update_device_async}
@item @code{acc_update_self}, @code{acc_update_self_async}
@item @code{acc_map_data}, @code{acc_unmap_data}
@item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
@item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
@end itemize


@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp.  Only maintainers should need them.

@menu
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
* Implementing OpenACC's PARALLEL construct::
@end menu


@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternatively, we generate two copies of the parallel subfunction
and only include this in the version run by the master thread.
Surely this is not worthwhile though...


@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name, use

@smallexample
void GOMP_critical_start (void);
void GOMP_critical_end (void);
@end smallexample

so that we don't get COPY relocations from libgomp to the main
application.

With a specified name, use @code{omp_set_lock} and @code{omp_unset_lock}
with name being transformed into a variable declared like

@smallexample
omp_lock_t gomp_critical_user_<name> __attribute__((common))
@end smallexample

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at
startup.


@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that, we could add

@smallexample
void GOMP_atomic_enter (void)
void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.


@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.



@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In @emph{most} cases we can map this directly to @code{__thread}.
Except that OMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to @code{.ctors}.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.


@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.


@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and "small" structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
  body;
@end smallexample

which becomes

@smallexample
@{
  int x = x, y;

  // for stuff

  if (i == n)
    y = y;
@}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.


@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the master thread iterates over the
array to collect the values.

@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
#pragma omp parallel
@{
  body;
@}
@end smallexample

becomes

@smallexample
void subfunction (void *data)
@{
  use data;
  body;
@}

setup data;
GOMP_parallel_start (subfunction, &data, num_threads);
subfunction (&data);
GOMP_parallel_end ();
@end smallexample

@smallexample
void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.


@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
#pragma omp parallel for
for (i = lb; i <= ub; i++)
  body;
@end smallexample

becomes

@smallexample
void subfunction (void *data)
@{
  long _s0, _e0;
  while (GOMP_loop_static_next (&_s0, &_e0))
    @{
      long _e1 = _e0, i;
      for (i = _s0; i < _e1; i++)
        body;
    @}
  GOMP_loop_end_nowait ();
@}

GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
subfunction (NULL);
GOMP_parallel_end ();
@end smallexample

@smallexample
#pragma omp for schedule(runtime)
for (i = 0; i < n; i++)
  body;
@end smallexample

becomes

@smallexample
@{
  long i, _s0, _e0;
  if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
    do @{
      long _e1 = _e0;
      for (i = _s0; i < _e1; i++)
        body;
    @} while (GOMP_loop_runtime_next (&_s0, &_e0));
  GOMP_loop_end ();
@}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we don't have to.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations.  Which would mean that we wouldn't need to call any
of these routines.

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...


@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
void GOMP_ordered_start (void)
void GOMP_ordered_end (void)
@end smallexample



@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
#pragma omp sections
@{
  #pragma omp section
  stmt1;
  #pragma omp section
  stmt2;
  #pragma omp section
  stmt3;
@}
@end smallexample

becomes

@smallexample
for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
  switch (i)
    @{
    case 1:
      stmt1;
      break;
    case 2:
      stmt2;
      break;
    case 3:
      stmt3;
      break;
    @}
GOMP_barrier ();
@end smallexample


@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
#pragma omp single
@{
  body;
@}
@end smallexample

becomes

@smallexample
if (GOMP_single_start ())
  body;
GOMP_barrier ();
@end smallexample

while

@smallexample
#pragma omp single copyprivate(x)
  body;
@end smallexample

becomes

@smallexample
datap = GOMP_single_copy_start ();
if (datap == NULL)
  @{
    body;
    data.x = x;
    GOMP_single_copy_end (&data);
  @}
else
  x = datap->x;
GOMP_barrier ();
@end smallexample



@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

@smallexample
void GOACC_parallel ()
@end smallexample



@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{http://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
"openacc", or "openmp", or both to the keywords field in the bug
report, as appropriate.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye