macro: handle DECIMAL_STR_MAX() special cases more accurately
So far DECIMAL_STR_MAX() overestimated the formatted size of the types
in two ways: it would add space for a "-" even for unsigned types.
And it would always return the same size for 64bit values regardless of
signedness, even though the longest decimal representations of the
signed and unsigned limits differ in length by one digit: 2^64-1
(i.e. UINT64_MAX) is one decimal digit longer than -2^63
(i.e. INT64_MIN). For the other integer widths the number of digits in
the "longest" decimal value is the same regardless of signedness. For
example, strlen("65535") == strlen("32768") holds (i.e. the relevant
16bit limits), and similarly for the 8bit and 32bit limits, but
strlen("18446744073709551615") > strlen("9223372036854775808")
(i.e. the relevant 64bit limits).
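
As a quick sanity check (not part of the patch), the per-width digit
counts can be verified with a throwaway program like this:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
            const unsigned widths[] = { 8, 16, 32, 64 };

            /* Compare the digit count of |INT_MIN| (== 2^(n-1)) with the
             * digit count of UINT_MAX (== 2^n - 1) for each width n. */
            for (size_t i = 0; i < sizeof(widths)/sizeof(widths[0]); i++) {
                    unsigned n = widths[i];
                    unsigned long long int_min_mag = 1ULL << (n - 1);
                    unsigned long long uint_max = n == 64 ? ~0ULL : (1ULL << n) - 1;
                    char min_buf[32], max_buf[32];

                    snprintf(min_buf, sizeof(min_buf), "%llu", int_min_mag);
                    snprintf(max_buf, sizeof(max_buf), "%llu", uint_max);

                    printf("%2u bit: |INT_MIN| has %zu digits, UINT_MAX has %zu digits\n",
                           n, strlen(min_buf), strlen(max_buf));
            }

            return 0;
    }

Only the 64bit row comes out different: 19 digits for the magnitude of
the signed minimum vs. 20 digits for the unsigned maximum.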
Let's fix both misestimations.
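
As a rough sketch (not the exact patch; IS_SIGNED_TYPE() here is just
an illustrative helper), the tightened macro could look something like
this:

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative helper: 1 for signed integer types, 0 for unsigned. */
    #define IS_SIGNED_TYPE(type) ((type) -1 < (type) 0)

    /* Space for the digits of the widest value of the type, a "-" only
     * if the type is signed, and the trailing NUL. 64bit types need 20
     * digits when unsigned, but only 19 when signed. */
    #define DECIMAL_STR_MAX(type)                                   \
            (1U + (unsigned) IS_SIGNED_TYPE(type) +                 \
             (sizeof(type) <= 1 ? 3U :                              \
              sizeof(type) <= 2 ? 5U :                              \
              sizeof(type) <= 4 ? 10U :                             \
              IS_SIGNED_TYPE(type) ? 19U : 20U))

    int main(void) {
            assert(DECIMAL_STR_MAX(uint64_t) == 21); /* "18446744073709551615" + NUL */
            assert(DECIMAL_STR_MAX(int64_t) == 21);  /* "-9223372036854775808" + NUL */
            assert(DECIMAL_STR_MAX(uint8_t) == 4);   /* "255" + NUL */
            assert(DECIMAL_STR_MAX(int8_t) == 5);    /* "-128" + NUL */
            return 0;
    }

With a shape like this the signed and unsigned 64bit cases happen to
end up the same size again (20 digits vs. 19 digits plus the sign),
while unsigned 8bit, 16bit and 32bit types lose the needless byte that
was reserved for a "-".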