binutils/readelf: decode AMDGPU-specific e_flags
author     Simon Marchi <simon.marchi@efficios.com>
           Thu, 3 Mar 2022 17:00:37 +0000 (12:00 -0500)
committer  Simon Marchi <simon.marchi@polymtl.ca>
           Tue, 15 Mar 2022 19:09:55 +0000 (15:09 -0400)
commit     9452483507d997548b6800e1a50fe863e8250b32
tree       dfca985ee70f34cd928482d2810b061b4dfdef61
parent     9f8890e0d14df4dffc44bc37d5d3cc857eaab394
binutils/readelf: decode AMDGPU-specific e_flags

Decode and print the AMDGPU-specific fields of e_flags, as documented
here:

  https://llvm.org/docs/AMDGPUUsage.html#header

That is:

 - The specific GPU model
 - Whether the xnack and sramecc features are enabled

The result looks like:

-  Flags:                             0x52f
+  Flags:                             0x52f, gfx906, xnack any, sramecc any
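
For reference, this is roughly how those three pieces fall out of the 0x52f
value above, assuming the field masks and values documented on that page (the
EF_AMDGPU_* constants and the small program below are an illustrative sketch
only, not the readelf implementation):

  #include <stdio.h>

  /* Masks/values per the AMDGPUUsage ELF header table (V4 layout assumed).  */
  #define EF_AMDGPU_MACH                    0x0ff
  #define EF_AMDGPU_MACH_AMDGCN_GFX906      0x02f
  #define EF_AMDGPU_FEATURE_XNACK_V4        0x300
  #define EF_AMDGPU_FEATURE_XNACK_ANY_V4    0x100
  #define EF_AMDGPU_FEATURE_SRAMECC_V4      0xc00
  #define EF_AMDGPU_FEATURE_SRAMECC_ANY_V4  0x400

  int
  main (void)
  {
    unsigned int e_flags = 0x52f;

    /* The low byte selects the GPU model.  */
    if ((e_flags & EF_AMDGPU_MACH) == EF_AMDGPU_MACH_AMDGCN_GFX906)
      printf ("gfx906\n");

    /* Two-bit fields carry the xnack and sramecc settings.  */
    if ((e_flags & EF_AMDGPU_FEATURE_XNACK_V4) == EF_AMDGPU_FEATURE_XNACK_ANY_V4)
      printf ("xnack any\n");
    if ((e_flags & EF_AMDGPU_FEATURE_SRAMECC_V4) == EF_AMDGPU_FEATURE_SRAMECC_ANY_V4)
      printf ("sramecc any\n");

    return 0;
  }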

The flags for the "HSA" OS ABI are properly versioned and documented on
that page.  But the NONE, PAL and MESA3D OS ABIs are neither well documented
nor versioned.  Taking a peek at the LLVM source code, we see that they
encode their flags the same way as HSA v3.  For example, for PAL:

  https://github.com/llvm/llvm-project/blob/c8b614cd74a92d85936aed5ac7c642af75ffdc29/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUTargetStreamer.cpp#L601

So for those other OS ABIs, we read them the same as HSA v3.
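
Sketched as code, that decision is essentially the following (the
ELFOSABI_AMDGPU_* values are taken from the LLVM documentation; the helper is
a hypothetical illustration, the real logic lives in readelf's
decode_AMDGPU_machine_flags):

  #include <stdio.h>

  /* OS ABI values per the AMDGPUUsage documentation (assumed here).  */
  #define ELFOSABI_NONE            0
  #define ELFOSABI_AMDGPU_HSA     64
  #define ELFOSABI_AMDGPU_PAL     65
  #define ELFOSABI_AMDGPU_MESA3D  66

  /* Return 1 if the flags should be read using the HSA v3 layout.
     HSA objects are decoded according to their recorded ABI version;
     NONE, PAL and MESA3D are treated like HSA v3, as explained above.  */
  static int
  use_hsa_v3_layout (unsigned char osabi)
  {
    switch (osabi)
      {
      case ELFOSABI_NONE:
      case ELFOSABI_AMDGPU_PAL:
      case ELFOSABI_AMDGPU_MESA3D:
        return 1;
      default:
        return 0;
      }
  }

  int
  main (void)
  {
    printf ("PAL uses HSA v3 layout: %d\n",
            use_hsa_v3_layout (ELFOSABI_AMDGPU_PAL));
    return 0;
  }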

binutils/ChangeLog:

	* readelf.c: Include elf/amdgpu.h.
	(decode_AMDGPU_machine_flags): New.
	(get_machine_flags): Handle flags for EM_AMDGPU machine type.

include/ChangeLog:

	* elf/amdgpu.h: Add EF_AMDGPU_MACH_AMDGCN_* and
	EF_AMDGPU_FEATURE_* defines.

Change-Id: Ib5b94df7cae0719a22cf4e4fd0629330e9485c12
binutils/readelf.c
include/elf/amdgpu.h