From 1ff4a22103767cd0133f0c1db6e85220f28ab3fa Mon Sep 17 00:00:00 2001
From: Tobias Burnus
Date: Tue, 15 Apr 2025 23:19:50 +0200
Subject: [PATCH] libgomp.texi (gcn, nvptx): Mention self_maps alongside USM

libgomp/ChangeLog:

	* libgomp.texi (gcn, nvptx): Mention self_maps clause besides
	unified_shared_memory in the requirements item.
---
 libgomp/libgomp.texi | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/libgomp/libgomp.texi b/libgomp/libgomp.texi
index 3d3a56cc29a..dfd189b646e 100644
--- a/libgomp/libgomp.texi
+++ b/libgomp/libgomp.texi
@@ -6888,7 +6888,7 @@ The implementation remark:
   @code{device(ancestor:1)}) are processed serially per @code{target} region
   such that the next reverse offload region is only executed after the previous
   one returned.
-@item OpenMP code that has a @code{requires} directive with
+@item OpenMP code that has a @code{requires} directive with @code{self_maps} or
   @code{unified_shared_memory} is only supported if all AMD GPUs have the
   @code{HSA_AMD_SYSTEM_INFO_SVM_ACCESSIBLE_BY_DEFAULT} property; for discrete
   GPUs, this may require setting the @code{HSA_XNACK} environment
@@ -7045,7 +7045,7 @@ The implementation remark:
   Per device, reverse offload regions are processed serially such that the
   next reverse offload region is only executed after the previous one returned.
-@item OpenMP code that has a @code{requires} directive with
+@item OpenMP code that has a @code{requires} directive with @code{self_maps} or
   @code{unified_shared_memory} runs on nvptx devices if and only if all of
   those support the @code{pageableMemoryAccess} property;@footnote{
 @uref{https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements}}
--
2.47.2
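
For reference, a minimal C sketch of the usage the documented requirements
items refer to (illustrative only, not part of the patch; the program,
variable names, and sizes are made up):

/* Illustrative only -- not part of the patch.  With a 'requires
   unified_shared_memory' (or, in newer OpenMP, 'self_maps') directive,
   host allocations can be dereferenced inside target regions without
   explicit map clauses, provided the devices have the properties the
   documentation describes (HSA_AMD_SYSTEM_INFO_SVM_ACCESSIBLE_BY_DEFAULT
   on AMD GPUs, pageableMemoryAccess on nvptx).  */
#include <stdio.h>
#include <stdlib.h>

#pragma omp requires unified_shared_memory

int
main (void)
{
  int n = 1024;
  double *a = malloc (n * sizeof (double));

  for (int i = 0; i < n; i++)
    a[i] = i;

  /* No map() clause needed: the host pointer is valid on the device.  */
#pragma omp target teams distribute parallel for
  for (int i = 0; i < n; i++)
    a[i] *= 2.0;

  printf ("a[10] = %g\n", a[10]);
  free (a);
  return 0;
}

As the documentation text notes, on discrete AMD GPUs such code may only
run with the HSA_XNACK environment variable set appropriately.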