author     Simon Marchi <simon.marchi@efficios.com>
           Wed, 16 Mar 2022 13:01:15 +0000 (09:01 -0400)
committer  Simon Marchi <simon.marchi@polymtl.ca>
           Wed, 16 Mar 2022 13:01:15 +0000 (09:01 -0400)
commit     c077c5802c396e4548516f15c8f03d7684b236ef
tree       74438621c9ae1313a00db836b9546019fc188943
parent     37870be8740a4f903a61d43e6c1adede415473a9

binutils/readelf: decode AMDGPU-specific e_flags

Decode and print the AMDGPU-specific fields of e_flags, as documented
here:

  https://llvm.org/docs/AMDGPUUsage.html#header

That is:

 - The specific GPU model
 - Whether the xnack and sramecc features are enabled

The result looks like:

-  Flags:                             0x52f
+  Flags:                             0x52f, gfx906, xnack any, sramecc any
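
To illustrate how that 0x52f value breaks down, here is a minimal
standalone sketch using the field layout from the AMDGPUUsage table
linked above (HSA v4 and later): the low byte selects the GPU model,
and two-bit fields carry the xnack and sramecc states.  The mask
values come from that table; the macro names mirror the ones added to
include/elf/amdgpu.h but the exact spellings there may differ.

  /* Sketch only: decode e_flags = 0x52f with the HSA v4 layout.
     Mask values follow https://llvm.org/docs/AMDGPUUsage.html#header;
     the macro names are illustrative.  */

  #include <stdio.h>

  #define EF_AMDGPU_MACH                    0x0ff
  #define EF_AMDGPU_MACH_AMDGCN_GFX906      0x02f

  #define EF_AMDGPU_FEATURE_XNACK_V4        0x300
  #define EF_AMDGPU_FEATURE_XNACK_ANY_V4    0x100

  #define EF_AMDGPU_FEATURE_SRAMECC_V4      0xc00
  #define EF_AMDGPU_FEATURE_SRAMECC_ANY_V4  0x400

  int
  main (void)
  {
    unsigned int e_flags = 0x52f;

    /* Low byte: GPU model.  0x2f is gfx906.  */
    if ((e_flags & EF_AMDGPU_MACH) == EF_AMDGPU_MACH_AMDGCN_GFX906)
      printf ("mach: gfx906\n");

    /* Two-bit xnack field; 0x100 means "any".  */
    if ((e_flags & EF_AMDGPU_FEATURE_XNACK_V4)
        == EF_AMDGPU_FEATURE_XNACK_ANY_V4)
      printf ("xnack: any\n");

    /* Two-bit sramecc field; 0x400 means "any".  */
    if ((e_flags & EF_AMDGPU_FEATURE_SRAMECC_V4)
        == EF_AMDGPU_FEATURE_SRAMECC_ANY_V4)
      printf ("sramecc: any\n");

    return 0;
  }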

The flags for the "HSA" OS ABI are properly versioned and documented on
that page.  But the NONE, PAL and MESA3D OS ABIs are neither well documented
nor versioned.  Taking a peek at the LLVM source code, we see that they
encode their flags the same way as HSA v3.  For example, for PAL:

  https://github.com/llvm/llvm-project/blob/c8b614cd74a92d85936aed5ac7c642af75ffdc29/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUTargetStreamer.cpp#L601

So for those other OS ABIs, we decode the flags the same way as for HSA v3.
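
A rough sketch of that dispatch, assuming the OS ABI and ABI version
are taken from the ELF identification bytes.  The numeric values come
from the AMDGPUUsage tables; the function names are made up for
illustration and are not the ones used in readelf.c.

  #include <stdio.h>

  #define ELFOSABI_AMDGPU_HSA          64
  #define ELFABIVERSION_AMDGPU_HSA_V4   2

  /* v3 layout: one enable bit per feature.  */
  static void
  decode_amdgpu_flags_v3 (unsigned int e_flags)
  {
    printf ("xnack %s, sramecc %s\n",
            (e_flags & 0x100) ? "on" : "off",
            (e_flags & 0x200) ? "on" : "off");
  }

  /* v4 layout: two-bit unsupported/any/off/on fields, as in the
     previous sketch.  */
  static void
  decode_amdgpu_flags_v4 (unsigned int e_flags)
  {
    static const char *const state[] = { "unsupported", "any", "off", "on" };

    printf ("xnack %s, sramecc %s\n",
            state[(e_flags & 0x300) >> 8],
            state[(e_flags & 0xc00) >> 10]);
  }

  static void
  decode_amdgpu_flags (unsigned char osabi, unsigned char abiversion,
                       unsigned int e_flags)
  {
    if (osabi == ELFOSABI_AMDGPU_HSA
        && abiversion >= ELFABIVERSION_AMDGPU_HSA_V4)
      decode_amdgpu_flags_v4 (e_flags);
    else
      /* NONE, PAL and MESA3D (and, in this sketch, older HSA
         objects) fall back to the v3 layout.  */
      decode_amdgpu_flags_v3 (e_flags);
  }

  int
  main (void)
  {
    decode_amdgpu_flags (ELFOSABI_AMDGPU_HSA, ELFABIVERSION_AMDGPU_HSA_V4,
                         0x52f);
    return 0;
  }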

binutils/ChangeLog:

* readelf.c: Include elf/amdgpu.h.
(decode_AMDGPU_machine_flags): New.
(get_machine_flags): Handle flags for EM_AMDGPU machine type.

include/ChangeLog:

* elf/amdgpu.h: Add EF_AMDGPU_MACH_AMDGCN_* and
EF_AMDGPU_FEATURE_* defines.

Change-Id: Ib5b94df7cae0719a22cf4e4fd0629330e9485c12
binutils/ChangeLog
binutils/readelf.c
include/ChangeLog
include/elf/amdgpu.h