As with sizing, this needs to support type suppression and CTF_K_BIG
elision, and adapt to the DTD representation changes. Those changes bring
a general reduction in complexity: we no longer have to memcpy the vlen
into place separately for every type kind, but can do it all at once in
shared code above the per-kind switch statement. That statement's only job
now is generating refs out of type IDs and string offsets, and translating
the struct offset from gap- into non-gap representation for non-big structs.
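
As a minimal sketch of that shape (the names dtd_t, emit_one_type and the
field layout are invented for illustration, not the real libctf internals):
the vlen is copied once above the switch, and the switch only fixes up the
copy in place.

  #include <stddef.h>
  #include <string.h>

  typedef struct dtd
  {
    int kind;                 /* Stand-in for the CTF type kind.  */
    const void *vlen;         /* Variable-length data, DTD representation.  */
    size_t vlen_size;
  } dtd_t;

  /* Copy the vlen in one shot above the switch; the per-kind switch then
     only rewrites the copy it already has.  Returns the new write pointer,
     or NULL if the copy would not fit.  */
  static unsigned char *
  emit_one_type (const dtd_t *dtd, unsigned char *t, unsigned char *limit)
  {
    if ((size_t) (limit - t) < dtd->vlen_size)
      return NULL;                          /* Caller reports an error.  */

    memcpy (t, dtd->vlen, dtd->vlen_size);  /* Shared across all kinds.  */

    switch (dtd->kind)
      {
      /* Per-kind work is limited to turning type IDs and string offsets
         into refs, and translating struct offsets from gap- to non-gap
         representation for non-big structs, all in place in T.  */
      default:
        break;
      }

    return t + dtd->vlen_size;
  }
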
We do three distinct things:
- check whether all the types in a section are BTF-compatible, after
suppression of unwanted type kinds (including types with unwanted
prefixes), and elision of unneeded struct/union CTF_K_BIGs
- size the type section, taking suppression and CTF_K_BIG elision into
account
- actually emit it, again taking all the above into account
These all have to come to the same conclusions for every type: if the first
one gets things wrong we might try to emit something as BTF when we can't;
if the latter two are inconsistent, we might have a buffer overrun.
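
To see why they cannot drift apart, here is a rough sketch of the shared
shape, with invented names (type_info_t, type_is_emitted and so on) rather
than the real code: every pass consults the same per-type answer.

  #include <stdbool.h>
  #include <stddef.h>

  typedef struct type_info
  {
    bool suppressed;       /* Unwanted kind, or type with unwanted prefix.  */
    bool big_elidable;     /* Struct/union whose CTF_K_BIG can be dropped.  */
    bool btf_ok;           /* Representable in BTF at all.  */
    size_t emitted_size;   /* Size with suppression and elision applied.  */
  } type_info_t;

  static bool
  type_is_emitted (const type_info_t *t)
  {
    return !t->suppressed;
  }

  /* Pass 1: can the whole section be emitted as BTF?  */
  static bool
  section_is_btf (const type_info_t *types, size_t n)
  {
    for (size_t i = 0; i < n; i++)
      if (type_is_emitted (&types[i]) && !types[i].btf_ok)
        return false;
    return true;
  }

  /* Pass 2: how big will the type section be?  */
  static size_t
  section_size (const type_info_t *types, size_t n)
  {
    size_t sz = 0;
    for (size_t i = 0; i < n; i++)
      if (type_is_emitted (&types[i]))
        sz += types[i].emitted_size;
    return sz;
  }

  /* Pass 3 (emission) walks the same predicate and sizes again,
     re-verifying btf_ok and failing with ECTF_NOTBTF rather than emitting
     something unrepresentable.  */
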
So the type emission code double-checks BTF-compatibility and raises
ECTF_NOTBTF if necessary; we also aggressively check for potential overruns
before every memcpy() into the buffer and raise an ECTF_INTERNAL assertion
failure if need be. Thankfully there are a lot fewer memcpy()s than there
used to be: there are only four places we need to check, all close to each
other, which is pretty maintainable.
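
The check before each memcpy() amounts to something like this (CHECK_FITS
and EMIT_ERR_INTERNAL are invented stand-ins; the real code raises
ECTF_INTERNAL through the usual libctf error machinery):

  #include <stddef.h>
  #include <string.h>

  #define EMIT_ERR_INTERNAL 1    /* Stand-in for ECTF_INTERNAL.  */

  /* Fail loudly if a copy of LEN bytes at PTR would run past LIMIT.  */
  #define CHECK_FITS(ptr, len, limit)              \
    do                                             \
      {                                            \
        if ((size_t) ((limit) - (ptr)) < (len))    \
          return EMIT_ERR_INTERNAL;                \
      }                                            \
    while (0)

  static int
  emit_vlen (unsigned char *t, unsigned char *limit,
             const void *vlen, size_t vlen_size)
  {
    CHECK_FITS (t, vlen_size, limit);
    memcpy (t, vlen, vlen_size);   /* Safe: size checked just above.  */
    return 0;
  }
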
We add a bit of debugging when --enable-libctf-hash-debugging is on,
printing the translation from provisional to final type ID. The IDs the
deduplicator reports at its emission time are only provisional (the final
parent-relative IDs are not assigned until now), so when tracking down
deduplicator problems you can use this output to map a final ID back to
its provisional ID.
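
Conceptually the debugging output amounts to something like the following
(the function, variable names and message format here are invented; the
real code uses libctf's internal debug printing under that configure
option):

  #include <stdio.h>

  static void
  log_id_assignment (unsigned long provisional_id, unsigned long final_id)
  {
  #ifdef ENABLE_LIBCTF_HASH_DEBUGGING
    /* Log each final ID next to the provisional ID it replaced, so that
       deduplicator debug output (which only knows provisional IDs) can be
       correlated with the emitted dict.  */
    fprintf (stderr, "provisional type %lx -> final type %lx\n",
             provisional_id, final_id);
  #else
    (void) provisional_id;
    (void) final_id;
  #endif
  }
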