@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines
-The runtime routines described here are defined by Section 3 of the OpenMP
-specification in version 4.5. The routines are structured in following
-three parts:
+The runtime routines described here are defined by Section 18 of the OpenMP
+specification in version 5.2.
@menu
-Control threads, processors and the parallel environment. They have C
-linkage, and do not throw exceptions.
+* Thread Team Routines::
+* Thread Affinity Routines::
+* Teams Region Routines::
+* Tasking Routines::
+@c * Resource Relinquishing Routines::
+* Device Information Routines::
+@c * Device Memory Routines::
+* Lock Routines::
+* Timing Routines::
+* Event Routine::
+@c * Interoperability Routines::
+@c * Memory Management Routines::
+@c * Tool Control Routine::
+@c * Environment Display Routine::
+@end menu
-* omp_get_active_level:: Number of active parallel regions
-* omp_get_ancestor_thread_num:: Ancestor thread ID
-* omp_get_cancellation:: Whether cancellation support is enabled
-* omp_get_default_device:: Get the default device for target regions
-* omp_get_device_num:: Get device that current thread is running on
-* omp_get_dynamic:: Dynamic teams setting
-* omp_get_initial_device:: Device number of host device
-* omp_get_level:: Number of parallel regions
-* omp_get_max_active_levels:: Current maximum number of active regions
-* omp_get_max_task_priority:: Maximum task priority value that can be set
-* omp_get_max_teams:: Maximum number of teams for teams region
-* omp_get_max_threads:: Maximum number of threads of parallel region
-* omp_get_nested:: Nested parallel regions
-* omp_get_num_devices:: Number of target devices
-* omp_get_num_procs:: Number of processors online
-* omp_get_num_teams:: Number of teams
+
+
+@node Thread Team Routines
+@section Thread Team Routines
+
+Routines controlling threads in the current contention group.
+They have C linkage and do not throw exceptions.
+
+@menu
+* omp_set_num_threads:: Set upper team size limit
* omp_get_num_threads:: Size of the active team
-* omp_get_proc_bind:: Whether threads may be moved between CPUs
-* omp_get_schedule:: Obtain the runtime scheduling method
-* omp_get_supported_active_levels:: Maximum number of active regions supported
-* omp_get_team_num:: Get team number
-* omp_get_team_size:: Number of threads in a team
-* omp_get_teams_thread_limit:: Maximum number of threads imposed by teams
-* omp_get_thread_limit:: Maximum number of threads
+* omp_get_max_threads:: Maximum number of threads of parallel region
* omp_get_thread_num:: Current thread ID
* omp_in_parallel:: Whether a parallel region is active
-* omp_in_final:: Whether in final or included task region
-* omp_is_initial_device:: Whether executing on the host device
-* omp_set_default_device:: Set the default device for target regions
* omp_set_dynamic:: Enable/disable dynamic teams
-* omp_set_max_active_levels:: Limits the number of active parallel regions
+* omp_get_dynamic:: Dynamic teams setting
+* omp_get_cancellation:: Whether cancellation support is enabled
* omp_set_nested:: Enable/disable nested parallel regions
-* omp_set_num_teams:: Set upper teams limit for teams region
-* omp_set_num_threads:: Set upper team size limit
+* omp_get_nested:: Nested parallel regions
* omp_set_schedule:: Set the runtime scheduling method
-* omp_set_teams_thread_limit:: Set upper thread limit for teams construct
+* omp_get_schedule:: Obtain the runtime scheduling method
+* omp_get_teams_thread_limit:: Maximum number of threads imposed by teams
+* omp_get_supported_active_levels:: Maximum number of active regions supported
+* omp_set_max_active_levels:: Limits the number of active parallel regions
+* omp_get_max_active_levels:: Current maximum number of active regions
+* omp_get_level:: Number of parallel regions
+* omp_get_ancestor_thread_num:: Ancestor thread ID
+* omp_get_team_size:: Number of threads in a team
+* omp_get_active_level:: Number of active parallel regions
+@end menu
-Initialize, set, test, unset and destroy simple and nested locks.
-* omp_init_lock:: Initialize simple lock
-* omp_set_lock:: Wait for and set simple lock
-* omp_test_lock:: Test and set simple lock if available
-* omp_unset_lock:: Unset simple lock
-* omp_destroy_lock:: Destroy simple lock
-* omp_init_nest_lock:: Initialize nested lock
-* omp_set_nest_lock:: Wait for and set simple lock
-* omp_test_nest_lock:: Test and set nested lock if available
-* omp_unset_nest_lock:: Unset nested lock
-* omp_destroy_nest_lock:: Destroy nested lock
-Portable, thread-based, wall clock timer.
+@node omp_set_num_threads
+@subsection @code{omp_set_num_threads} -- Set upper team size limit
+@table @asis
+@item @emph{Description}:
+Specifies the number of threads used by default in subsequent parallel
+regions, if those do not specify a @code{num_threads} clause. The
+argument of @code{omp_set_num_threads} shall be a positive integer.
-* omp_get_wtick:: Get timer precision.
-* omp_get_wtime:: Elapsed wall clock time.
+@item @emph{C/C++}:
+@multitable @columnfractions .20 .80
+@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
+@end multitable
-Support for event objects.
+@item @emph{Fortran}:
+@multitable @columnfractions .20 .80
+@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
+@item @tab @code{integer, intent(in) :: num_threads}
+@end multitable
-* omp_fulfill_event:: Fulfill and destroy an OpenMP event.
-@end menu
+@item @emph{See also}:
+@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
+@item @emph{Reference}:
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
+@end table
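+
+A minimal C sketch (not part of the specification text), assuming the
+program is compiled with @option{-fopenmp}:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  omp_set_num_threads (4);   /* Default team size for later parallel regions.  */
+  #pragma omp parallel       /* No num_threads clause: at most 4 threads.  */
+  @{
+    #pragma omp single
+    printf ("team size: %d\n", omp_get_num_threads ());
+  @}
+  return 0;
+@}
+@end smallexample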
-@node omp_get_active_level
-@section @code{omp_get_active_level} -- Number of parallel regions
+
+@node omp_get_num_threads
+@subsection @code{omp_get_num_threads} -- Size of the active team
@table @asis
@item @emph{Description}:
-This function returns the nesting level for the active parallel blocks,
-which enclose the calling call.
+Returns the number of threads in the current team. In a sequential section of
+the program @code{omp_get_num_threads} returns 1.
-@item @emph{C/C++}
+The default team size may be initialized at startup by the
+@env{OMP_NUM_THREADS} environment variable. At runtime, the size
+of the current team may be set either by the @code{num_threads}
+clause or by @code{omp_set_num_threads}. If none of the above were
+used to define a specific value and @env{OMP_DYNAMIC} is disabled,
+one thread per CPU online is used.
+
+@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
+@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
@end multitable
@item @emph{See also}:
-@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
+@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
@end table
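+
+For illustration, a short C sketch (assuming @option{-fopenmp}) contrasting
+the sequential and parallel values:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  printf ("outside parallel: %d\n", omp_get_num_threads ());  /* Always 1.  */
+  #pragma omp parallel num_threads (3)
+  #pragma omp single
+  printf ("inside parallel:  %d\n", omp_get_num_threads ());  /* 3, or fewer.  */
+  return 0;
+@}
+@end smallexample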
-@node omp_get_ancestor_thread_num
-@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
+@node omp_get_max_threads
+@subsection @code{omp_get_max_threads} -- Maximum number of threads of parallel region
@table @asis
@item @emph{Description}:
-This function returns the thread identification number for the given
-nesting level of the current thread. For values of @var{level} outside
-zero to @code{omp_get_level} -1 is returned; if @var{level} is
-@code{omp_get_level} the result is identical to @code{omp_get_thread_num}.
+Return the maximum number of threads used for the current parallel region
+that does not use the @code{num_threads} clause.
-@item @emph{C/C++}
+@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
+@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
-@item @tab @code{integer level}
+@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
@end multitable
@item @emph{See also}:
-@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
+@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
@end table
-@node omp_get_cancellation
-@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
+@node omp_get_thread_num
+@subsection @code{omp_get_thread_num} -- Current thread ID
@table @asis
@item @emph{Description}:
-This function returns @code{true} if cancellation is activated, @code{false}
-otherwise. Here, @code{true} and @code{false} represent their language-specific
-counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
-deactivated.
+Returns a unique thread identification number within the current team.
+In sequential parts of the program, @code{omp_get_thread_num}
+always returns 0. In parallel regions the return value varies
+from 0 to @code{omp_get_num_threads}-1 inclusive. The return
+value of the primary thread of a team is always 0.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
+@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
@end multitable
@item @emph{See also}:
-@ref{OMP_CANCELLATION}
+@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
@end table
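+
+A minimal usage sketch in C (assuming @option{-fopenmp}):
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  #pragma omp parallel
+  printf ("thread %d of %d\n",
+          omp_get_thread_num (),      /* 0 .. team size - 1.  */
+          omp_get_num_threads ());
+  return 0;
+@}
+@end smallexample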
-@node omp_get_default_device
-@section @code{omp_get_default_device} -- Get the default device for target regions
+@node omp_in_parallel
+@subsection @code{omp_in_parallel} -- Whether a parallel region is active
@table @asis
@item @emph{Description}:
-Get the default device for target regions without device clause.
+This function returns @code{true} if currently running in parallel,
+@code{false} otherwise. Here, @code{true} and @code{false} represent
+their language-specific counterparts.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
+@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
+@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
@end multitable
-@item @emph{See also}:
-@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
-
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
@end table
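+
+An illustrative C sketch (assuming @option{-fopenmp}); a routine can use this
+to adapt its behavior to the calling context:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+static void report (const char *where)
+@{
+  printf ("%s: omp_in_parallel = %d\n", where, omp_in_parallel ());
+@}
+
+int main (void)
+@{
+  report ("sequential part");   /* Prints 0.  */
+  #pragma omp parallel num_threads (2)
+  #pragma omp single
+  report ("parallel region");   /* Prints 1.  */
+  return 0;
+@}
+@end smallexample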
-
-@node omp_get_device_num
-@section @code{omp_get_device_num} -- Return device number of current device
+@node omp_set_dynamic
+@subsection @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
-This function returns a device number that represents the device that the
-current thread is executing on. For OpenMP 5.0, this must be equal to the
-value returned by the @code{omp_get_initial_device} function when called
-from the host.
+Enable or disable the dynamic adjustment of the number of threads
+within a team. The function takes the language-specific equivalent
+of @code{true} and @code{false}, where @code{true} enables dynamic
+adjustment of team sizes and @code{false} disables it.
-@item @emph{C/C++}
+@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
+@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
+@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
+@item @tab @code{logical, intent(in) :: dynamic_threads}
@end multitable
@item @emph{See also}:
-@ref{omp_get_initial_device}
+@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
@end table
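+
+A short C sketch (assuming @option{-fopenmp}); with dynamic adjustment
+disabled, the runtime does not reduce the requested team size on its own
+(other limits such as @code{thread_limit} still apply):
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  omp_set_dynamic (0);        /* Do not let the runtime shrink team sizes.  */
+  omp_set_num_threads (8);
+  #pragma omp parallel
+  #pragma omp single
+  printf ("threads: %d\n", omp_get_num_threads ());
+  return 0;
+@}
+@end smallexample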
@node omp_get_dynamic
-@section @code{omp_get_dynamic} -- Dynamic teams setting
+@subsection @code{omp_get_dynamic} -- Dynamic teams setting
@table @asis
@item @emph{Description}:
This function returns @code{true} if dynamic adjustment of the number of
threads is enabled, @code{false} otherwise.
-@node omp_get_initial_device
-@section @code{omp_get_initial_device} -- Return device number of initial device
+@node omp_get_cancellation
+@subsection @code{omp_get_cancellation} -- Whether cancellation support is enabled
@table @asis
@item @emph{Description}:
-This function returns a device number that represents the host device.
-For OpenMP 5.1, this must be equal to the value returned by the
-@code{omp_get_num_devices} function.
+This function returns @code{true} if cancellation is activated, @code{false}
+otherwise. Here, @code{true} and @code{false} represent their language-specific
+counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
+deactivated.
-@item @emph{C/C++}
+@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
+@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@end multitable
@item @emph{See also}:
-@ref{omp_get_num_devices}
+@ref{OMP_CANCELLATION}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@end table
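+
+A small C sketch (assuming @option{-fopenmp} and that @env{OMP_CANCELLATION}
+is set to @code{true}); when cancellation is not activated, @code{cancel}
+directives are ignored:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  if (!omp_get_cancellation ())
+    printf ("cancellation disabled; 'omp cancel' has no effect\n");
+
+  #pragma omp parallel
+  #pragma omp for
+  for (int i = 0; i < 1000; i++)
+    @{
+      if (i == 10)
+        @{
+          #pragma omp cancel for
+        @}
+      #pragma omp cancellation point for
+    @}
+  return 0;
+@}
+@end smallexample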
-@node omp_get_level
-@section @code{omp_get_level} -- Obtain the current nesting level
+@node omp_set_nested
+@subsection @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
-This function returns the nesting level for the parallel blocks,
-which enclose the calling call.
+Enable or disable nested parallel regions, i.e., whether team members
+are allowed to create new teams. The function takes the language-specific
+equivalent of @code{true} and @code{false}, where @code{true} enables
+nested parallel regions and @code{false} disables them.
-@item @emph{C/C++}
+Enabling nested parallel regions will also set the maximum number of
+active nested regions to the maximum supported. Disabling nested parallel
+regions will set the maximum number of active nested regions to one.
+
+Note that the @code{omp_set_nested} API routine was deprecated
+in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.
+
+@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
+@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_level()}
+@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
+@item @tab @code{logical, intent(in) :: nested}
@end multitable
@item @emph{See also}:
-@ref{omp_get_active_level}
+@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
+@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
@end table
-@node omp_get_max_active_levels
-@section @code{omp_get_max_active_levels} -- Current maximum number of active regions
+@node omp_get_nested
+@subsection @code{omp_get_nested} -- Nested parallel regions
@table @asis
@item @emph{Description}:
-This function obtains the maximum allowed number of nested, active parallel regions.
+This function returns @code{true} if nested parallel regions are
+enabled, @code{false} otherwise. Here, @code{true} and @code{false}
+represent their language-specific counterparts.
+
+The state of nested parallel regions at startup depends on several
+environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
+and is set to greater than one, then nested parallel regions will be
+enabled. If not defined, then the value of the @env{OMP_NESTED}
+environment variable will be followed if defined. If neither are
+defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
+are defined with a list of more than one value, then nested parallel
+regions are enabled. If none of these are defined, then nested parallel
+regions are disabled by default.
+
+Nested parallel regions can be enabled or disabled at runtime using
+@code{omp_set_nested}, or by setting the maximum number of nested
+regions with @code{omp_set_max_active_levels} to one to disable, or
+above one to enable.
+
+Note that the @code{omp_get_nested} API routine was deprecated
+in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.
+
+@item @emph{C/C++}:
+@multitable @columnfractions .20 .80
+@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
+@end multitable
+
+@item @emph{Fortran}:
+@multitable @columnfractions .20 .80
+@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
+@end multitable
+
+@item @emph{See also}:
+@ref{omp_get_max_active_levels}, @ref{omp_set_nested},
+@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
+
+@item @emph{Reference}:
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
+@end table
+
+
+
+@node omp_set_schedule
+@subsection @code{omp_set_schedule} -- Set the runtime scheduling method
+@table @asis
+@item @emph{Description}:
+Sets the runtime scheduling method. The @var{kind} argument can have the
+value @code{omp_sched_static}, @code{omp_sched_dynamic},
+@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
+@code{omp_sched_auto}, the chunk size is set to the value of
+@var{chunk_size} if positive, or to the default value if zero or negative.
+For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
+@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
+@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
+@item @tab @code{integer(kind=omp_sched_kind) kind}
+@item @tab @code{integer chunk_size}
@end multitable
@item @emph{See also}:
-@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
+@ref{omp_get_schedule}
+@ref{OMP_SCHEDULE}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
@end table
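+
+An illustrative C sketch (assuming @option{-fopenmp}); the chosen kind and
+chunk size apply to loops that use @code{schedule(runtime)}, and
+@code{omp_get_schedule} (below) reads the current setting back:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  omp_sched_t kind;
+  int chunk;
+
+  omp_set_schedule (omp_sched_dynamic, 4);
+  omp_get_schedule (&kind, &chunk);   /* kind == omp_sched_dynamic, chunk == 4.  */
+
+  #pragma omp parallel for schedule(runtime)
+  for (int i = 0; i < 100; i++)
+    printf ("%d handled by thread %d\n", i, omp_get_thread_num ());
+  return 0;
+@}
+@end smallexample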
-@node omp_get_max_task_priority
-@section @code{omp_get_max_task_priority} -- Maximum priority value
-that can be set for tasks.
+
+@node omp_get_schedule
+@subsection @code{omp_get_schedule} -- Obtain the runtime scheduling method
@table @asis
@item @emph{Description}:
-This function obtains the maximum allowed priority number for tasks.
+Obtain the runtime scheduling method. The @var{kind} argument will be
+set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
+@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
+@var{chunk_size}, is set to the chunk size.
@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
+@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
+@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
+@item @tab @code{integer(kind=omp_sched_kind) kind}
+@item @tab @code{integer chunk_size}
@end multitable
+@item @emph{See also}:
+@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
+
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
@end table
-@node omp_get_max_teams
-@section @code{omp_get_max_teams} -- Maximum number of teams of teams region
+@node omp_get_teams_thread_limit
+@subsection @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
@table @asis
@item @emph{Description}:
-Return the maximum number of teams used for the teams region
-that does not use the clause @code{num_teams}.
+Return the maximum number of threads that will be able to participate in
+each team created by a teams construct.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
+@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
@end multitable
@item @emph{See also}:
-@ref{omp_set_num_teams}, @ref{omp_get_num_teams}
+@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
+@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
@end table
-@node omp_get_max_threads
-@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
+@node omp_get_supported_active_levels
+@subsection @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
@table @asis
@item @emph{Description}:
-Return the maximum number of threads used for the current parallel region
-that does not use the clause @code{num_threads}.
+This function returns the maximum number of nested, active parallel regions
+supported by this implementation.
-@item @emph{C/C++}:
+@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
+@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
@end multitable
@item @emph{See also}:
-@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
+@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
+@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
@end table
-@node omp_get_nested
-@section @code{omp_get_nested} -- Nested parallel regions
+@node omp_set_max_active_levels
+@subsection @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
-This function returns @code{true} if nested parallel regions are
-enabled, @code{false} otherwise. Here, @code{true} and @code{false}
-represent their language-specific counterparts.
+This function limits the maximum allowed number of nested, active
+parallel regions. @var{max_levels} must be less than or equal to
+the value returned by @code{omp_get_supported_active_levels}.
-The state of nested parallel regions at startup depends on several
-environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
-and is set to greater than one, then nested parallel regions will be
-enabled. If not defined, then the value of the @env{OMP_NESTED}
-environment variable will be followed if defined. If neither are
-defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
-are defined with a list of more than one value, then nested parallel
-regions are enabled. If none of these are defined, then nested parallel
-regions are disabled by default.
+@item @emph{C/C++}
+@multitable @columnfractions .20 .80
+@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
+@end multitable
-Nested parallel regions can be enabled or disabled at runtime using
-@code{omp_set_nested}, or by setting the maximum number of nested
-regions with @code{omp_set_max_active_levels} to one to disable, or
-above one to enable.
+@item @emph{Fortran}:
+@multitable @columnfractions .20 .80
+@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
+@item @tab @code{integer max_levels}
+@end multitable
-Note that the @code{omp_get_nested} API routine was deprecated
-in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.
+@item @emph{See also}:
+@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
+@ref{omp_get_supported_active_levels}
-@item @emph{C/C++}:
+@item @emph{Reference}:
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
+@end table
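+
+A C sketch (assuming @option{-fopenmp}) enabling two levels of active
+parallelism; the values in the comments assume the requested team sizes are
+granted:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  omp_set_max_active_levels (2);          /* Allow one level of nesting.  */
+  #pragma omp parallel num_threads (2)
+  #pragma omp parallel num_threads (3)
+  #pragma omp single
+  printf ("level %d, active level %d, threads %d\n",
+          omp_get_level (),               /* 2 */
+          omp_get_active_level (),        /* 2 */
+          omp_get_num_threads ());        /* 3 */
+  return 0;
+@}
+@end smallexample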
+
+
+
+@node omp_get_max_active_levels
+@subsection @code{omp_get_max_active_levels} -- Current maximum number of active regions
+@table @asis
+@item @emph{Description}:
+This function obtains the maximum allowed number of nested, active parallel regions.
+
+@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
+@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
@end multitable
@item @emph{See also}:
-@ref{omp_get_max_active_levels}, @ref{omp_set_nested},
-@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
+@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
@end table
-
-@node omp_get_num_devices
-@section @code{omp_get_num_devices} -- Number of target devices
+@node omp_get_level
+@subsection @code{omp_get_level} -- Obtain the current nesting level
@table @asis
@item @emph{Description}:
-Returns the number of target devices.
+This function returns the nesting level of the parallel blocks
+that enclose the call.
-@item @emph{C/C++}:
+@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
+@item @emph{Interface}: @tab @code{integer function omp_get_level()}
@end multitable
+@item @emph{See also}:
+@ref{omp_get_active_level}
+
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
@end table
-@node omp_get_num_procs
-@section @code{omp_get_num_procs} -- Number of processors online
+@node omp_get_ancestor_thread_num
+@subsection @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
-Returns the number of processors online on that device.
+This function returns the thread identification number of the current
+thread's ancestor at the given nesting level. For values of @var{level}
+outside the range 0 to @code{omp_get_level}, -1 is returned; if
+@var{level} equals @code{omp_get_level}, the result is identical to
+@code{omp_get_thread_num}.
-@item @emph{C/C++}:
+@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
+@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
+@item @tab @code{integer level}
@end multitable
+@item @emph{See also}:
+@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
+
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
@end table
-@node omp_get_num_teams
-@section @code{omp_get_num_teams} -- Number of teams
+@node omp_get_team_size
+@subsection @code{omp_get_team_size} -- Number of threads in a team
@table @asis
@item @emph{Description}:
-Returns the number of teams in the current team region.
+This function returns the number of threads in the thread team to which
+either the current thread or one of its ancestors belongs. For values of
+@var{level} outside the range 0 to @code{omp_get_level}, -1 is returned;
+if @var{level} is zero, 1 is returned, and for @code{omp_get_level}, the
+result is identical to @code{omp_get_num_threads}.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
+@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
+@item @tab @code{integer level}
@end multitable
+@item @emph{See also}:
+@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
+
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
@end table
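+
+An illustrative C sketch (assuming @option{-fopenmp}) querying a thread's
+ancestry from inside a nested parallel region:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  omp_set_max_active_levels (2);
+  #pragma omp parallel num_threads (2)
+  #pragma omp parallel num_threads (2)
+  #pragma omp single
+  printf ("outer ancestor %d (team of %d), inner thread %d (team of %d)\n",
+          omp_get_ancestor_thread_num (1), omp_get_team_size (1),
+          omp_get_ancestor_thread_num (2), omp_get_team_size (2));
+  return 0;
+@}
+@end smallexample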
-@node omp_get_num_threads
-@section @code{omp_get_num_threads} -- Size of the active team
+@node omp_get_active_level
+@subsection @code{omp_get_active_level} -- Number of parallel regions
@table @asis
@item @emph{Description}:
-Returns the number of threads in the current team. In a sequential section of
-the program @code{omp_get_num_threads} returns 1.
-
-The default team size may be initialized at startup by the
-@env{OMP_NUM_THREADS} environment variable. At runtime, the size
-of the current team may be set either by the @code{NUM_THREADS}
-clause or by @code{omp_set_num_threads}. If none of the above were
-used to define a specific value and @env{OMP_DYNAMIC} is disabled,
-one thread per CPU online is used.
+This function returns the nesting level of the active parallel blocks
+that enclose the call.
-@item @emph{C/C++}:
+@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
+@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable
@item @emph{See also}:
-@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
+@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
@end table
+@node Thread Affinity Routines
+@section Thread Affinity Routines
+
+Routines controlling and accessing thread-affinity policies.
+They have C linkage and do not throw exceptions.
+
+@menu
+* omp_get_proc_bind:: Whether threads may be moved between CPUs
+@c * omp_get_num_places:: <fixme>
+@c * omp_get_place_num_procs:: <fixme>
+@c * omp_get_place_proc_ids:: <fixme>
+@c * omp_get_place_num:: <fixme>
+@c * omp_get_partition_num_places:: <fixme>
+@c * omp_get_partition_place_nums:: <fixme>
+@c * omp_set_affinity_format:: <fixme>
+@c * omp_get_affinity_format:: <fixme>
+@c * omp_display_affinity:: <fixme>
+@c * omp_capture_affinity:: <fixme>
+@end menu
+
+
+
@node omp_get_proc_bind
-@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
+@subsection @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
@table @asis
@item @emph{Description}:
This function returns the currently active thread affinity policy, which is
-@node omp_get_schedule
-@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
-@table @asis
-@item @emph{Description}:
-Obtain the runtime scheduling method. The @var{kind} argument will be
-set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
-@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
-@var{chunk_size}, is set to the chunk size.
+@node Teams Region Routines
+@section Teams Region Routines
-@item @emph{C/C++}
-@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
-@end multitable
+Routines controlling the league of teams that are executed in a @code{teams}
+region. They have C linkage and do not throw exceptions.
-@item @emph{Fortran}:
-@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
-@item @tab @code{integer(kind=omp_sched_kind) kind}
-@item @tab @code{integer chunk_size}
-@end multitable
-
-@item @emph{See also}:
-@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
+@menu
+* omp_get_num_teams:: Number of teams
+* omp_get_team_num:: Get team number
+* omp_set_num_teams:: Set upper teams limit for teams region
+* omp_get_max_teams:: Maximum number of teams for teams region
+* omp_set_teams_thread_limit:: Set upper thread limit for teams construct
+* omp_get_thread_limit:: Maximum number of threads
+@end menu
-@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
-@end table
-@node omp_get_supported_active_levels
-@section @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
+@node omp_get_num_teams
+@subsection @code{omp_get_num_teams} -- Number of teams
@table @asis
@item @emph{Description}:
-This function returns the maximum number of nested, active parallel regions
-supported by this implementation.
+Returns the number of teams in the current teams region.
-@item @emph{C/C++}
+@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
+@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
@end multitable
-@item @emph{See also}:
-@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
-
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
@end table
@node omp_get_team_num
-@section @code{omp_get_team_num} -- Get team number
+@subsection @code{omp_get_team_num} -- Get team number
@table @asis
@item @emph{Description}:
Returns the team number of the calling thread.
-@node omp_get_team_size
-@section @code{omp_get_team_size} -- Number of threads in a team
+@node omp_set_num_teams
+@subsection @code{omp_set_num_teams} -- Set upper teams limit for teams construct
@table @asis
@item @emph{Description}:
-This function returns the number of threads in a thread team to which
-either the current thread or its ancestor belongs. For values of @var{level}
-outside zero to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
-1 is returned, and for @code{omp_get_level}, the result is identical
-to @code{omp_get_num_threads}.
+Specifies the upper bound for the number of teams created by a teams
+construct that does not specify a @code{num_teams} clause. The
+argument of @code{omp_set_num_teams} shall be a positive integer.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
+@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
-@item @tab @code{integer level}
+@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
+@item @tab @code{integer, intent(in) :: num_teams}
@end multitable
@item @emph{See also}:
-@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
+@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
+@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
@end table
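+
+A C sketch (assuming @option{-fopenmp} and an OpenMP 5.x compiler that
+supports the host @code{teams} construct):
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  omp_set_num_teams (2);   /* Upper bound when no num_teams clause is given.  */
+  #pragma omp teams
+  if (omp_get_team_num () == 0)
+    printf ("league of %d teams (max %d)\n",
+            omp_get_num_teams (), omp_get_max_teams ());
+  return 0;
+@}
+@end smallexample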
-@node omp_get_teams_thread_limit
-@section @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
+@node omp_get_max_teams
+@subsection @code{omp_get_max_teams} -- Maximum number of teams of teams region
@table @asis
@item @emph{Description}:
-Return the maximum number of threads that will be able to participate in
-each team created by a teams construct.
+Return the maximum number of teams used for the teams region
+that does not use the @code{num_teams} clause.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
+@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
@end multitable
@item @emph{See also}:
-@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}
+@ref{omp_set_num_teams}, @ref{omp_get_num_teams}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
+@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
@end table
-@node omp_get_thread_limit
-@section @code{omp_get_thread_limit} -- Maximum number of threads
+@node omp_set_teams_thread_limit
+@subsection @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
@table @asis
@item @emph{Description}:
-Return the maximum number of threads of the program.
+Specifies the upper bound for the number of threads that will be available
+for each team created by a teams construct that does not specify a
+@code{thread_limit} clause. The argument of
+@code{omp_set_teams_thread_limit} shall be a positive integer.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
+@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
+@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
+@item @tab @code{integer, intent(in) :: thread_limit}
@end multitable
@item @emph{See also}:
-@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
+@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
+@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
@end table
-@node omp_get_thread_num
-@section @code{omp_get_thread_num} -- Current thread ID
+@node omp_get_thread_limit
+@subsection @code{omp_get_thread_limit} -- Maximum number of threads
@table @asis
@item @emph{Description}:
-Returns a unique thread identification number within the current team.
-In a sequential parts of the program, @code{omp_get_thread_num}
-always returns 0. In parallel regions the return value varies
-from 0 to @code{omp_get_num_threads}-1 inclusive. The return
-value of the primary thread of a team is always 0.
+Return the maximum number of threads of the program.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
+@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
@end multitable
@item @emph{See also}:
-@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
+@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
@end table
-@node omp_in_parallel
-@section @code{omp_in_parallel} -- Whether a parallel region is active
+@node Tasking Routines
+@section Tasking Routines
+
+Routines relating to explicit tasks.
+They have C linkage and do not throw exceptions.
+
+@menu
+* omp_get_max_task_priority:: Maximum task priority value that can be set
+@c * omp_in_explicit_task:: <fixme>
+* omp_in_final:: Whether in final or included task region
+@end menu
+
+
+
+@node omp_get_max_task_priority
+@subsection @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
@table @asis
@item @emph{Description}:
-This function returns @code{true} if currently running in parallel,
-@code{false} otherwise. Here, @code{true} and @code{false} represent
-their language-specific counterparts.
+This function obtains the maximum allowed priority number for tasks.
-@item @emph{C/C++}:
+@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
+@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
@end multitable
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table
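+
+A small C sketch (assuming @option{-fopenmp}); task priorities are only a
+scheduling hint, and the maximum is 0 unless @env{OMP_MAX_TASK_PRIORITY}
+is set:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  int max_prio = omp_get_max_task_priority ();
+  #pragma omp parallel
+  #pragma omp single
+  @{
+    #pragma omp task priority (max_prio)   /* Preferably scheduled first.  */
+    printf ("urgent task\n");
+    #pragma omp task priority (0)
+    printf ("background task\n");
+  @}
+  return 0;
+@}
+@end smallexample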
+
@node omp_in_final
-@section @code{omp_in_final} -- Whether in final or included task region
+@subsection @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
-@node omp_is_initial_device
-@section @code{omp_is_initial_device} -- Whether executing on the host device
+@c @node Resource Relinquishing Routines
+@c @section Resource Relinquishing Routines
+@c
+@c Routines releasing resources used by the OpenMP runtime.
+@c They have C linkage and do not throw exceptions.
+@c
+@c @menu
+@c * omp_pause_resource:: <fixme>
+@c * omp_pause_resource_all:: <fixme>
+@c @end menu
+
+@node Device Information Routines
+@section Device Information Routines
+
+Routines related to devices available to an OpenMP program.
+They have C linkage and do not throw exceptions.
+
+@menu
+* omp_get_num_procs:: Number of processors online
+@c * omp_get_max_progress_width:: <fixme>/TR11
+* omp_set_default_device:: Set the default device for target regions
+* omp_get_default_device:: Get the default device for target regions
+* omp_get_num_devices:: Number of target devices
+* omp_get_device_num:: Get device that current thread is running on
+* omp_is_initial_device:: Whether executing on the host device
+* omp_get_initial_device:: Device number of host device
+@end menu
+
+
+
+@node omp_get_num_procs
+@subsection @code{omp_get_num_procs} -- Number of processors online
@table @asis
@item @emph{Description}:
-This function returns @code{true} if currently running on the host device,
-@code{false} otherwise. Here, @code{true} and @code{false} represent
-their language-specific counterparts.
+Returns the number of processors online on that device.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
+@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
+@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
@end multitable
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
@end table
@node omp_set_default_device
-@section @code{omp_set_default_device} -- Set the default device for target regions
+@subsection @code{omp_set_default_device} -- Set the default device for target regions
@table @asis
@item @emph{Description}:
Set the default device for target regions without device clause. The argument
-@node omp_set_dynamic
-@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
-@table @asis
-@item @emph{Description}:
-Enable or disable the dynamic adjustment of the number of threads
-within a team. The function takes the language-specific equivalent
-of @code{true} and @code{false}, where @code{true} enables dynamic
-adjustment of team sizes and @code{false} disables it.
-
-@item @emph{C/C++}:
-@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
-@end multitable
-
-@item @emph{Fortran}:
-@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
-@item @tab @code{logical, intent(in) :: dynamic_threads}
-@end multitable
-
-@item @emph{See also}:
-@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
-
-@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
-@end table
-
-
-
-@node omp_set_max_active_levels
-@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
-@table @asis
-@item @emph{Description}:
-This function limits the maximum allowed number of nested, active
-parallel regions. @var{max_levels} must be less or equal to
-the value returned by @code{omp_get_supported_active_levels}.
-
-@item @emph{C/C++}
-@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
-@end multitable
-
-@item @emph{Fortran}:
-@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
-@item @tab @code{integer max_levels}
-@end multitable
-
-@item @emph{See also}:
-@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
-@ref{omp_get_supported_active_levels}
-
-@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
-@end table
-
-
-
-@node omp_set_nested
-@section @code{omp_set_nested} -- Enable/disable nested parallel regions
+@node omp_get_default_device
+@subsection @code{omp_get_default_device} -- Get the default device for target regions
@table @asis
@item @emph{Description}:
-Enable or disable nested parallel regions, i.e., whether team members
-are allowed to create new teams. The function takes the language-specific
-equivalent of @code{true} and @code{false}, where @code{true} enables
-dynamic adjustment of team sizes and @code{false} disables it.
-
-Enabling nested parallel regions will also set the maximum number of
-active nested regions to the maximum supported. Disabling nested parallel
-regions will set the maximum number of active nested regions to one.
-
-Note that the @code{omp_set_nested} API routine was deprecated
-in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.
+Get the default device for target regions without device clause.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
+@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
-@item @tab @code{logical, intent(in) :: nested}
+@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@end multitable
@item @emph{See also}:
-@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
-@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
+@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
@end table
-@node omp_set_num_teams
-@section @code{omp_set_num_teams} -- Set upper teams limit for teams construct
+@node omp_get_num_devices
+@subsection @code{omp_get_num_devices} -- Number of target devices
@table @asis
@item @emph{Description}:
-Specifies the upper bound for number of teams created by the teams construct
-which does not specify a @code{num_teams} clause. The
-argument of @code{omp_set_num_teams} shall be a positive integer.
+Returns the number of target devices.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
+@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
-@item @tab @code{integer, intent(in) :: num_teams}
+@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
@end multitable
-@item @emph{See also}:
-@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}
-
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
@end table
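+
+An illustrative C sketch (assuming @option{-fopenmp}); device numbers
+0 to @code{omp_get_num_devices()} - 1 identify the available target devices:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  int ndev = omp_get_num_devices ();
+  printf ("%d non-host device(s), default device %d, host device %d\n",
+          ndev, omp_get_default_device (), omp_get_initial_device ());
+  if (ndev > 0)
+    omp_set_default_device (0);  /* Direct later 'target' regions to device 0.  */
+  return 0;
+@}
+@end smallexample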
-@node omp_set_num_threads
-@section @code{omp_set_num_threads} -- Set upper team size limit
+@node omp_get_device_num
+@subsection @code{omp_get_device_num} -- Return device number of current device
@table @asis
@item @emph{Description}:
-Specifies the number of threads used by default in subsequent parallel
-sections, if those do not specify a @code{num_threads} clause. The
-argument of @code{omp_set_num_threads} shall be a positive integer.
+This function returns a device number that represents the device that the
+current thread is executing on. For OpenMP 5.0, this must be equal to the
+value returned by the @code{omp_get_initial_device} function when called
+from the host.
-@item @emph{C/C++}:
+@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
+@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
@end multitable
@item @emph{Fortran}:
-@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
-@item @tab @code{integer, intent(in) :: num_threads}
+@multitable @columnfractions .20 .80
+@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
@end multitable
@item @emph{See also}:
-@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
+@ref{omp_get_initial_device}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
+@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
@end table
-@node omp_set_schedule
-@section @code{omp_set_schedule} -- Set the runtime scheduling method
+@node omp_is_initial_device
+@subsection @code{omp_is_initial_device} -- Whether executing on the host device
@table @asis
@item @emph{Description}:
-Sets the runtime scheduling method. The @var{kind} argument can have the
-value @code{omp_sched_static}, @code{omp_sched_dynamic},
-@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
-@code{omp_sched_auto}, the chunk size is set to the value of
-@var{chunk_size} if positive, or to the default value if zero or negative.
-For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
+This function returns @code{true} if currently running on the host device,
+@code{false} otherwise. Here, @code{true} and @code{false} represent
+their language-specific counterparts.
-@item @emph{C/C++}
+@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
+@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
-@item @tab @code{integer(kind=omp_sched_kind) kind}
-@item @tab @code{integer chunk_size}
+@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
@end multitable
-@item @emph{See also}:
-@ref{omp_get_schedule}
-@ref{OMP_SCHEDULE}
-
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
@end table
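+
+A C sketch (assuming @option{-fopenmp}); a common use is detecting whether a
+@code{target} region fell back to host execution:
+
+@smallexample
+#include <omp.h>
+#include <stdio.h>
+
+int main (void)
+@{
+  int on_host;
+  #pragma omp target map(from: on_host)
+  on_host = omp_is_initial_device ();
+  printf ("target region ran on %s\n", on_host ? "the host" : "a device");
+  return 0;
+@}
+@end smallexample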
-@node omp_set_teams_thread_limit
-@section @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
+@node omp_get_initial_device
+@subsection @code{omp_get_initial_device} -- Return device number of initial device
@table @asis
@item @emph{Description}:
-Specifies the upper bound for number of threads that will be available
-for each team created by the teams construct which does not specify a
-@code{thread_limit} clause. The argument of
-@code{omp_set_teams_thread_limit} shall be a positive integer.
+This function returns a device number that represents the host device.
+For OpenMP 5.1, this must be equal to the value returned by the
+@code{omp_get_num_devices} function.
-@item @emph{C/C++}:
+@item @emph{C/C++}
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
+@item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
-@item @tab @code{integer, intent(in) :: thread_limit}
+@item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
@end multitable
@item @emph{See also}:
-@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}
+@ref{omp_get_num_devices}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
@end table
+@c @node Device Memory Routines
+@c @section Device Memory Routines
+@c
+@c Routines related to memory allocation and managing corresponding
+@c pointers on devices. They have C linkage and do not throw exceptions.
+@c
+@c @menu
+@c * omp_target_alloc:: <fixme>
+@c * omp_target_free:: <fixme>
+@c * omp_target_is_present:: <fixme>
+@c * omp_target_is_accessible:: <fixme>
+@c * omp_target_memcpy:: <fixme>
+@c * omp_target_memcpy_rect:: <fixme>
+@c * omp_target_memcpy_async:: <fixme>
+@c * omp_target_memcpy_rect_async:: <fixme>
+@c * omp_target_associate_ptr:: <fixme>
+@c * omp_target_disassociate_ptr:: <fixme>
+@c * omp_get_mapped_ptr:: <fixme>
+@c @end menu
+
+@node Lock Routines
+@section Lock Routines
+
+Initialize, set, test, unset and destroy simple and nested locks.
+The routines have C linkage and do not throw exceptions.
+
+@menu
+* omp_init_lock:: Initialize simple lock
+* omp_init_nest_lock:: Initialize nested lock
+@c * omp_init_lock_with_hint:: <fixme>
+@c * omp_init_nest_lock_with_hint:: <fixme>
+* omp_destroy_lock:: Destroy simple lock
+* omp_destroy_nest_lock:: Destroy nested lock
+* omp_set_lock:: Wait for and set simple lock
+* omp_set_nest_lock:: Wait for and set nested lock
+* omp_unset_lock:: Unset simple lock
+* omp_unset_nest_lock:: Unset nested lock
+* omp_test_lock:: Test and set simple lock if available
+* omp_test_nest_lock:: Test and set nested lock if available
+@end menu
+
+
+
@node omp_init_lock
-@section @code{omp_init_lock} -- Initialize simple lock
+@subsection @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock. After initialization, the lock is in
-@node omp_set_lock
-@section @code{omp_set_lock} -- Wait for and set simple lock
+@node omp_init_nest_lock
+@subsection @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
-Before setting a simple lock, the lock variable must be initialized by
-@code{omp_init_lock}. The calling thread is blocked until the lock
-is available. If the lock is already held by the current thread,
-a deadlock occurs.
+Initialize a nested lock. After initialization, the lock is in
+an unlocked state and the nesting count is set to zero.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
+@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
-@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
+@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
+@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable
@item @emph{See also}:
-@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
+@ref{omp_destroy_nest_lock}
-@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
+@item @emph{Reference}:
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table
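+
+An illustrative sketch of the intended use: a recursive routine re-acquires
+the nested lock it already holds, with the nesting count tracking the
+recursion depth (the routine and its argument are made up for this example):
+
+@smallexample
+#include <omp.h>
+
+omp_nest_lock_t lock;
+
+static void
+update (int depth)
+@{
+  omp_set_nest_lock (&lock);   /* May be re-acquired by the owning thread.  */
+  if (depth > 0)
+    update (depth - 1);
+  omp_unset_nest_lock (&lock);
+@}
+
+int main (void)
+@{
+  omp_init_nest_lock (&lock);
+  #pragma omp parallel
+    update (3);
+  omp_destroy_nest_lock (&lock);
+  return 0;
+@}
+@end smallexample
+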
-@node omp_test_lock
-@section @code{omp_test_lock} -- Test and set simple lock if available
+@node omp_destroy_lock
+@subsection @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
-Before setting a simple lock, the lock variable must be initialized by
-@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
-does not block if the lock is not available. This function returns
-@code{true} upon success, @code{false} otherwise. Here, @code{true} and
-@code{false} represent their language-specific counterparts.
+Destroy a simple lock. In order to be destroyed, a simple lock must be
+in the unlocked state.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
+@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
+@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable
@item @emph{See also}:
-@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_set_lock}
+@ref{omp_init_lock}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table
-@node omp_unset_lock
-@section @code{omp_unset_lock} -- Unset simple lock
+@node omp_destroy_nest_lock
+@subsection @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
-A simple lock about to be unset must have been locked by @code{omp_set_lock}
-or @code{omp_test_lock} before. In addition, the lock must be held by the
-thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
-or more threads attempted to set the lock before, one of them is chosen to,
-again, set the lock to itself.
+Destroy a nested lock. In order to be destroyed, a nested lock must be
+in the unlocked state and its nesting count must equal zero.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
+@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
-@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
+@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
+@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable
@item @emph{See also}:
-@ref{omp_set_lock}, @ref{omp_test_lock}
+@ref{omp_init_nest_lock}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table
-@node omp_destroy_lock
-@section @code{omp_destroy_lock} -- Destroy simple lock
+@node omp_set_lock
+@subsection @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
-Destroy a simple lock. In order to be destroyed, a simple lock must be
-in the unlocked state.
+Before setting a simple lock, the lock variable must be initialized by
+@code{omp_init_lock}. The calling thread is blocked until the lock
+is available. If the lock is already held by the current thread,
+a deadlock occurs.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
+@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
+@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable
@item @emph{See also}:
-@ref{omp_init_lock}
+@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table
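+
+A minimal sketch of the usual lifecycle of a simple lock; the counter and
+the parallel region are only there to give the lock something to protect:
+
+@smallexample
+#include <stdio.h>
+#include <omp.h>
+
+int main (void)
+@{
+  omp_lock_t lock;
+  int counter = 0;
+
+  omp_init_lock (&lock);
+  #pragma omp parallel
+  @{
+    omp_set_lock (&lock);    /* Blocks until the lock is available.  */
+    counter++;               /* Protected update, one thread at a time.  */
+    omp_unset_lock (&lock);
+  @}
+  omp_destroy_lock (&lock);
+
+  printf ("counter: %d\n", counter);
+  return 0;
+@}
+@end smallexample
+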
-@node omp_init_nest_lock
-@section @code{omp_init_nest_lock} -- Initialize nested lock
-@table @asis
-@item @emph{Description}:
-Initialize a nested lock. After initialization, the lock is in
-an unlocked state and the nesting count is set to zero.
-
-@item @emph{C/C++}:
-@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
-@end multitable
-
-@item @emph{Fortran}:
-@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
-@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
-@end multitable
-
-@item @emph{See also}:
-@ref{omp_destroy_nest_lock}
-
-@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
-@end table
-
-
@node omp_set_nest_lock
-@section @code{omp_set_nest_lock} -- Wait for and set nested lock
+@subsection @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
-@node omp_test_nest_lock
-@section @code{omp_test_nest_lock} -- Test and set nested lock if available
+@node omp_unset_lock
+@subsection @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
-Before setting a nested lock, the lock variable must be initialized by
-@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
-@code{omp_test_nest_lock} does not block if the lock is not available.
-If the lock is already held by the current thread, the new nesting count
-is returned. Otherwise, the return value equals zero.
+A simple lock about to be unset must have been locked by @code{omp_set_lock}
+or @code{omp_test_lock} before. In addition, the lock must be held by the
+thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
+or more threads attempted to set the lock before, one of them is chosen to,
+again, set the lock to itself.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
+@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
-@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
+@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
+@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable
-
@item @emph{See also}:
-@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_set_lock}
+@ref{omp_set_lock}, @ref{omp_test_lock}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table
@node omp_unset_nest_lock
-@section @code{omp_unset_nest_lock} -- Unset nested lock
+@subsection @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
-A nested lock about to be unset must have been locked by @code{omp_set_nested_lock}
+A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
-@node omp_destroy_nest_lock
-@section @code{omp_destroy_nest_lock} -- Destroy nested lock
+@node omp_test_lock
+@subsection @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
-Destroy a nested lock. In order to be destroyed, a nested lock must be
-in the unlocked state and its nesting count must equal zero.
+Before setting a simple lock, the lock variable must be initialized by
+@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
+does not block if the lock is not available. This function returns
+@code{true} upon success, @code{false} otherwise. Here, @code{true} and
+@code{false} represent their language-specific counterparts.
@item @emph{C/C++}:
@multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *);}
+@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
+@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
+@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
+@end multitable
+
+@item @emph{See also}:
+@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}
+
+@item @emph{Reference}:
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
+@end table
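+
+An illustrative sketch of non-blocking acquisition; each thread tries the
+lock once and records whether it succeeded (the counters are only for
+demonstration):
+
+@smallexample
+#include <stdio.h>
+#include <omp.h>
+
+int main (void)
+@{
+  omp_lock_t lock;
+  int acquired = 0, busy = 0;
+
+  omp_init_lock (&lock);
+  #pragma omp parallel
+  @{
+    if (omp_test_lock (&lock))      /* Returns immediately.  */
+      @{
+        #pragma omp atomic
+        acquired++;
+        omp_unset_lock (&lock);
+      @}
+    else
+      @{
+        #pragma omp atomic
+        busy++;
+      @}
+  @}
+  omp_destroy_lock (&lock);
+
+  printf ("acquired: %d, busy: %d\n", acquired, busy);
+  return 0;
+@}
+@end smallexample
+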
+
+
+
+@node omp_test_nest_lock
+@subsection @code{omp_test_nest_lock} -- Test and set nested lock if available
+@table @asis
+@item @emph{Description}:
+Before setting a nested lock, the lock variable must be initialized by
+@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
+@code{omp_test_nest_lock} does not block if the lock is not available.
+If the lock is already held by the current thread, the new nesting count
+is returned. Otherwise, the return value equals zero.
+
+@item @emph{C/C++}:
+@multitable @columnfractions .20 .80
+@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
+@end multitable
+
+@item @emph{Fortran}:
+@multitable @columnfractions .20 .80
+@item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable
+
@item @emph{See also}:
-@ref{omp_init_lock}
+@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}
@item @emph{Reference}:
-@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
+@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table
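+
+A minimal single-threaded sketch showing the returned nesting count:
+
+@smallexample
+#include <stdio.h>
+#include <omp.h>
+
+int main (void)
+@{
+  omp_nest_lock_t lock;
+  omp_init_nest_lock (&lock);
+
+  int c1 = omp_test_nest_lock (&lock);  /* 1: lock newly acquired.  */
+  int c2 = omp_test_nest_lock (&lock);  /* 2: nesting count increased.  */
+  printf ("%d %d\n", c1, c2);
+
+  omp_unset_nest_lock (&lock);
+  omp_unset_nest_lock (&lock);
+  omp_destroy_nest_lock (&lock);
+  return 0;
+@}
+@end smallexample
+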
+@node Timing Routines
+@section Timing Routines
+
+Portable, thread-based, wall clock timer.
+The routines have C linkage and do not throw exceptions.
+
+@menu
+* omp_get_wtick:: Get timer precision
+* omp_get_wtime:: Elapsed wall clock time
+@end menu
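+
+Taken together, the two routines allow timing a code section as in the
+following sketch; because the clock is per thread, the start and end times
+must be taken on the same thread:
+
+@smallexample
+#include <stdio.h>
+#include <omp.h>
+
+int main (void)
+@{
+  double start = omp_get_wtime ();
+  /* ... code to be timed ...  */
+  double end = omp_get_wtime ();
+
+  printf ("elapsed: %f s (timer resolution: %g s)\n",
+          end - start, omp_get_wtick ());
+  return 0;
+@}
+@end smallexample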
+
+
+
@node omp_get_wtick
-@section @code{omp_get_wtick} -- Get timer precision
+@subsection @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
@node omp_get_wtime
-@section @code{omp_get_wtime} -- Elapsed wall clock time
+@subsection @code{omp_get_wtime} -- Elapsed wall clock time
@table @asis
@item @emph{Description}:
Elapsed wall clock time in seconds. The time is measured per thread, no
+@node Event Routine
+@section Event Routine
+
+Support for event objects.
+The routine has C linkage and does not throw exceptions.
+
+@menu
+* omp_fulfill_event:: Fulfill and destroy an OpenMP event
+@end menu
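+
+As an illustration of the intended use (requires a compiler with
+@code{detach} clause support, OpenMP 5.0 or later), the sketch below
+creates a detached task and then fulfills its event, which allows the
+@code{taskwait} to return:
+
+@smallexample
+#include <stdio.h>
+#include <omp.h>
+
+int main (void)
+@{
+  omp_event_handle_t ev;
+
+  #pragma omp parallel
+  #pragma omp single
+  @{
+    #pragma omp task detach (ev)
+      printf ("detached task body has run\n");
+
+    /* The task above only completes once its event has been fulfilled,
+       even after its body has finished executing.  */
+    omp_fulfill_event (ev);
+
+    #pragma omp taskwait
+  @}
+  return 0;
+@}
+@end smallexample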
+
+
+
@node omp_fulfill_event
-@section @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
+@subsection @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
@table @asis
@item @emph{Description}:
Fulfill the event associated with the event handle argument. Currently, it
+@c @node Interoperability Routines
+@c @section Interoperability Routines
+@c
+@c Routines to obtain properties from an @code{omp_interop_t} object.
+@c They have C linkage and do not throw exceptions.
+@c
+@c @menu
+@c * omp_get_num_interop_properties:: <fixme>
+@c * omp_get_interop_int:: <fixme>
+@c * omp_get_interop_ptr:: <fixme>
+@c * omp_get_interop_str:: <fixme>
+@c * omp_get_interop_name:: <fixme>
+@c * omp_get_interop_type_desc:: <fixme>
+@c * omp_get_interop_rc_desc:: <fixme>
+@c @end menu
+
+@c @node Memory Management Routines
+@c @section Memory Management Routines
+@c
+@c Routines to manage and allocate memory on the current device.
+@c They have C linkage and do not throw exceptions.
+@c
+@c @menu
+@c * omp_init_allocator:: <fixme>
+@c * omp_destroy_allocator:: <fixme>
+@c * omp_set_default_allocator:: <fixme>
+@c * omp_get_default_allocator:: <fixme>
+@c * omp_alloc:: <fixme>
+@c * omp_aligned_alloc:: <fixme>
+@c * omp_free:: <fixme>
+@c * omp_calloc:: <fixme>
+@c * omp_aligned_calloc:: <fixme>
+@c * omp_realloc:: <fixme>
+@c * omp_get_memspace_num_resources:: <fixme>/TR11
+@c * omp_get_submemspace:: <fixme>/TR11
+@c @end menu
+
+@c @node Tool Control Routine
+@c @section Tool Control Routine
+@c
+@c FIXME
+
+@c @node Environment Display Routine
+@c @section Environment Display Routine
+@c
+@c Routine to display the OpenMP version number and the initial values of ICVs.
+@c It has C linkage and does not throw exceptions.
+@c
+@c @menu
+@c * omp_display_env:: <fixme>
+@c @end menu
+
@c ---------------------------------------------------------------------
@c OpenMP Environment Variables
@c ---------------------------------------------------------------------
@smallexample
OMP_ALLOCATOR=omp_high_bw_mem_alloc
OMP_ALLOCATOR=omp_large_cap_mem_space
-OMP_ALLOCATR=omp_low_lat_mem_space:pinned=true,partition=nearest
+OMP_ALLOCATOR=omp_low_lat_mem_space:pinned=true,partition=nearest
@end smallexample
@item @emph{See also}: