Avoid requesting NEWSUB extended data through read-ahead while parsing the header. The full NEWSUB block size is still validated and consumed, but the extended data is not required to be present in memory during header parsing.
Add a test for a malformed NEWSUB header with a large packed size.
Allocate enough memory for the possible addition of 3 characters within
the range of 0-Z. Since UTF-16 is in use, allocate 6 bytes + 2 bytes for
the terminating NUL character.
Also keep in mind that "l" is already the size in bytes, which means that
a multiplication by 2 is not needed (this also prevents overflow issues
with longer filenames).
It is possible to trigger an out-of-bounds write with short filenames
that contain illegal ISO9660 characters. For these files, Joliet IDs
are generated. If multiple files lead to the same ID (which can happen
because illegal characters are replaced with an underscore), 3
characters/digits in the range of 0-Z are appended.
iso9660: Fix infinite loop in Joliet ID generation
Three characters/digits in base 36 give 46656 possible combinations.
If a directory with even more conflicting identifiers is encountered,
the code would enter an infinite loop.
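The termination condition can be sketched like this. The function and constant names are illustrative, not libarchive's: the point is that a base-36, 3-character suffix has exactly 36^3 = 46656 values, so a collision counter must fail once it reaches that limit instead of looping forever:

```c
/* Hypothetical sketch: encode a collision counter as three base-36
 * characters in the range 0-Z.  36^3 = 46656, so the caller must stop
 * with an error once the counter reaches that limit instead of
 * retrying forever. */
#define JOLIET_SUFFIX_COMBOS (36 * 36 * 36)  /* 46656 */

static int  /* 0 on success, -1 when all combinations are exhausted */
joliet_suffix(unsigned n, char out[4])
{
    static const char digits[] =
        "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    if (n >= JOLIET_SUFFIX_COMBOS)
        return -1;  /* exhausted: fail, don't loop */
    out[0] = digits[n / (36 * 36)];
    out[1] = digits[(n / 36) % 36];
    out[2] = digits[n % 36];
    out[3] = '\0';
    return 0;
}
```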
Strings pointed to by these variables are actually modified. They point
to modifiable data areas (the code's own stack arrays or argv
arguments), so the code does not erroneously modify them. Clarify that
they are modifiable by removing the const qualifier.
Rudi Heitbaum [Mon, 16 Feb 2026 10:20:04 +0000 (10:20 +0000)]
fix handling of missing const type qualifier
Since glibc-2.43:
For ISO C23, the functions bsearch, memchr, strchr, strpbrk, strrchr,
strstr, wcschr, wcspbrk, wcsrchr, wcsstr and wmemchr that return pointers
into their input arrays now have definitions as macros that return a
pointer to a const-qualified type when the input argument is a pointer
to a const-qualified type.
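A minimal illustration of why the const qualifier had to go: with the C23 const-preserving declarations, strchr() on a non-const string yields a plain char *, and writing through it is fine as long as the storage itself is writable (a stack array or an argv string, never a string literal). The helper name here is made up for the example:

```c
#include <string.h>

/* With the C23 const-preserving declarations, strchr() on a non-const
 * array still yields "char *", so modifying through the result is
 * valid -- provided the underlying storage is writable (a stack array
 * or an argv string, not a string literal). */
static void
replace_first(char *s, char find, char repl)
{
    char *p = strchr(s, find);  /* s is non-const, so p is char * */
    if (p != NULL)
        *p = repl;
}
```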
The supplied nanoseconds of the time keyword could be truncated by the
cast from int64_t to long (relevant on Windows and 32-bit x86),
resulting in an incorrect value.
Since the implementation already caps the value at specific limits for
bug compatibility, just use the correct data type for parsing so as
not to make things worse.
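The hazard can be demonstrated with int32_t standing in for a 32-bit long, so the effect is visible on any platform. Both function names and the clamp limit are illustrative, not the actual implementation:

```c
#include <stdint.h>

/* On Windows and 32-bit x86, "long" is 32 bits, so casting a parsed
 * int64_t nanosecond value to long silently drops the high bits.
 * int32_t stands in for the 32-bit long here.  (The out-of-range
 * conversion is implementation-defined; two's-complement wrap is
 * assumed.) */
static int32_t
truncate_to_long32(int64_t ns)
{
    return (int32_t)ns;  /* narrowing: high bits lost */
}

/* The fix in spirit: keep the value in int64_t and clamp it to the
 * documented limit instead of narrowing. */
static int64_t
clamp_ns(int64_t ns)
{
    if (ns < 0)
        return 0;
    return ns > 999999999 ? 999999999 : ns;
}
```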
If no flags are supplied, anchors are not supposed to be special.
This means that ^ at the beginning of a pattern should be treated as a
regular character.
This changes current behavior, but complies with the comments in the
code, i.e. archive_pathmatch.h lines 41/42.
Patterns with many asterisks may overflow the call stack, crashing
the application. Check the recursion depth and fail with an error if
it gets too deep.
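A depth-limited matcher can be sketched as follows. This is not libarchive's pathmatch code; the names, the limit, and the simplified glob dialect (only '*' and '?') are all illustrative. Each '*' triggers a recursive call, so a pattern of thousands of asterisks nests thousands of frames unless a depth cap converts the overflow into an error return:

```c
/* Sketch of a depth-limited recursive glob matcher.  Each '*' may
 * trigger a recursive call, so a long run of asterisks can nest
 * arbitrarily deep; bailing out past a fixed depth turns a stack
 * overflow into an ordinary error.  Limit chosen small for the demo. */
#define MATCH_MAX_DEPTH 16

static int  /* 1 = match, 0 = no match, -1 = recursion too deep */
match(const char *pat, const char *str, int depth)
{
    if (depth > MATCH_MAX_DEPTH)
        return -1;
    for (;;) {
        if (*pat == '\0')
            return *str == '\0';
        if (*pat == '*') {
            /* try every possible tail for the rest of the pattern */
            for (const char *s = str; ; s++) {
                int r = match(pat + 1, s, depth + 1);
                if (r != 0)
                    return r;   /* match found, or depth error */
                if (*s == '\0')
                    return 0;
            }
        }
        if (*str == '\0' || (*pat != '?' && *pat != *str))
            return 0;
        pat++; str++;
    }
}
```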
These functions can return negative values, in which case the operation
itself failed. While libarchive's internal libraries handle these cases,
the tools don't. Check for negative values there as well.
Tim Kientzle [Fri, 8 May 2026 04:50:50 +0000 (21:50 -0700)]
[test_utils] Fix a minor UB
(UBSan occasionally finds something interesting and
often reports wacky non-bugs like this one. "Fixing"
it will make the real UB bugs easier to identify, so...)
According to C's integer promotion rules, `unsigned short` gets
promoted to _signed_ `int`, and shifting into the sign bit of an `int`
is technically UB. An explicit cast to `unsigned` quiets UBSan.
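Concretely (function name is invented for the example): a 16-bit value with its top bit set, shifted left by 16, lands in int's sign bit unless the operand is cast to an unsigned type first:

```c
#include <stdint.h>

/* uint16_t (unsigned short) promotes to *signed* int, so
 * (uint16_t)0x8000 << 16 would shift into int's sign bit -- UB.
 * Casting to uint32_t first keeps the whole shift unsigned. */
static uint32_t
combine16(uint16_t hi, uint16_t lo)
{
    return ((uint32_t)hi << 16) | lo;  /* no promotion to signed int */
}
```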
Tim Kientzle [Fri, 8 May 2026 04:11:54 +0000 (21:11 -0700)]
[XAR] Fix two instances of UB
1. The XAR writer's path normalization code uses strcpy() to move
parts of a path string within the same buffer. The source and
destination ranges overlap, which is undefined behavior for strcpy().
2. Failure to check string length before accessing the last character
of a path component. For empty components (e.g., //), the length is 0,
and length-1 underflows to SIZE_MAX.
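Both fixes can be sketched in isolation (helper names are invented, not the XAR writer's): memmove() is defined for overlapping ranges where strcpy() is not, and a length check must precede any len - 1 access:

```c
#include <stddef.h>
#include <string.h>

/* 1. Shift a path tail left in place.  Source and destination
 *    overlap, so memmove() (defined for overlap) replaces strcpy()
 *    (undefined for overlap). */
static void
drop_path_prefix(char *path, size_t prefix_len)
{
    size_t rest = strlen(path + prefix_len) + 1;  /* include NUL */
    memmove(path, path + prefix_len, rest);       /* overlap is OK */
}

/* 2. Check the length before touching the last character, so an
 *    empty component (as in "//") can't make len - 1 wrap to
 *    SIZE_MAX. */
static int
component_ends_with(const char *comp, size_t len, char c)
{
    if (len == 0)
        return 0;          /* guard: len - 1 would underflow */
    return comp[len - 1] == c;
}
```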
Tim Kientzle [Fri, 8 May 2026 03:15:37 +0000 (20:15 -0700)]
[pathmatch] Heap buffer over-read
The bracket-expression matching ('[') in the pathmatch engine fails to
handle malformed patterns, specifically when the closing ']' is missing
or when high-byte characters are used. The scanner advances the pattern
pointer beyond the allocated buffer.
Tim Kientzle [Fri, 8 May 2026 02:41:04 +0000 (19:41 -0700)]
[ACL] Parser out-of-bounds read
The ACL parser fails to validate buffer length when processing PAX
attributes (SCHILY.acl.access/default). The next_field() function
attempts to read a separator character from a pointer even when the
remaining length is zero.
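The pattern of the fix looks like this (names are illustrative, not libarchive's next_field()): every dereference of the attribute buffer must be preceded by a remaining-length check:

```c
#include <stddef.h>

/* Sketch of the next_field() hazard: after scanning a field, the
 * parser read a separator at *p even when the remaining length was
 * already 0.  The fix: check the remaining length before every
 * dereference. */
static int  /* returns the separator, or -1 if input is exhausted */
next_separator(const char **p, size_t *remaining)
{
    if (*remaining == 0)
        return -1;         /* don't read past the attribute buffer */
    int sep = (unsigned char)**p;
    (*p)++;
    (*remaining)--;
    return sep;
}
```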
Jose Luis Duran [Wed, 15 Apr 2026 01:36:07 +0000 (01:36 +0000)]
mtree: Do not append '/' when basename is '.'
If the basename is '.', it means it is the root directory ('/'). Do not
append '/' to '.', as this will produce a path '/.', resulting in an
invalid mtree entry.
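The rule reduces to a single special case, sketched here with invented names and simplified buffer handling:

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the mtree path rule described above: append '/' after a
 * directory name unless the basename is ".", which already denotes
 * the root entry; appending would yield an invalid entry. */
static void
append_dir(char *buf, size_t bufsize, const char *basename)
{
    if (strcmp(basename, ".") == 0)
        snprintf(buf, bufsize, "%s", basename);   /* keep "." as-is */
    else
        snprintf(buf, bufsize, "%s/", basename);
}
```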
Tim Kientzle [Thu, 7 May 2026 21:35:02 +0000 (14:35 -0700)]
Date parsing: reject date components with numbers of more than 4 digits
Only the Unix epoch format `@<timestamp>` can have a number with
more than 4 digits. So let's break out the numeric parsing into
a standalone uint64 parser and use it separately to parse epoch
timestamps (which are only limited by the range of time_t) and
other date components.
This also avoids a time-consuming leap-year correction for
nonsensically large year values.
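The split can be sketched as follows. These are not the actual parser's names or limits beyond what the message states (epoch timestamps unbounded except by the integer type, other components capped at 4 digits); overflow handling is omitted for brevity:

```c
#include <ctype.h>
#include <stdint.h>

/* Standalone uint64 digit parser: advances *p past the digits and
 * returns how many were consumed.  (Overflow handling omitted.) */
static int
parse_u64(const char **p, uint64_t *out)
{
    uint64_t v = 0;
    int digits = 0;
    while (isdigit((unsigned char)**p)) {
        v = v * 10 + (uint64_t)(**p - '0');
        (*p)++;
        digits++;
    }
    *out = v;
    return digits;
}

/* Ordinary date components reuse the parser but reject anything
 * longer than 4 digits, sidestepping leap-year math on absurd years.
 * Only "@<epoch>" timestamps call parse_u64() directly. */
static int  /* 0 on success, -1 on empty or over-long component */
parse_date_component(const char **p, uint64_t *out)
{
    int n = parse_u64(p, out);
    return (n == 0 || n > 4) ? -1 : 0;
}
```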
Tim Kientzle [Wed, 6 May 2026 03:54:12 +0000 (20:54 -0700)]
[CMake] Automatically update `list.h`
`list.h` contains a list of all the tests and is generated
by grepping the test source files for `DEFINE_TEST`.
Previously, it was generated at configure time.
This meant that if you added a new test to an existing
source file, you had to manually reconfigure.
This adds the necessary dependencies so that `list.h`
is regenerated whenever any C test source changes.
This ensures that new tests are always discovered automatically.
Note: If someone wants to update the autoconf-based
build system to do this, please send a PR.
Tim Kientzle [Tue, 5 May 2026 17:00:01 +0000 (10:00 -0700)]
[RAR5] Correct handling of unknown filter types
The change to return FAILED for entry-specific issues uncovered
flaws in RAR5 handling of filter types:
* Supported filter types are verified in `parse_filter` and
  were also being checked in `run_filter` -- the duplication
  confused the error handling here.
* `do_uncompress_file` was only checking for `FATAL` from
  the upstream filter logic, so it failed to properly pass
  `FAILED` errors through.
Tim Kientzle [Mon, 4 May 2026 23:38:15 +0000 (16:38 -0700)]
[rar5] Fix infinite loop in header parsing
The change to return `FAILED` instead of `FATAL` for issues that
impact a single entry (but don't necessarily terminate the entire archive)
created a bug in header parsing since `FAILED` wasn't handled in a
header-check loop.
Tim Kientzle [Mon, 4 May 2026 03:42:22 +0000 (20:42 -0700)]
7zip: propagate skip_stream's actual error code in read_data_skip
archive_read_format_7zip_read_data_skip used to coerce any negative
skip_stream() return into ARCHIVE_FATAL. That is wrong in principle:
ARCHIVE_FAILED can legitimately propagate up from setup_decode_folder()
through read_stream() and skip_stream(), and the wrapper should not
upgrade it.
In the current encryption-partially test case this is empirically a
no-op because skip_stream() still returns ARCHIVE_FATAL via a second,
deeper code path through extract_pack_stream(). An inline TODO comment
flags that asymmetry for a follow-up audit.
Tim Kientzle [Mon, 4 May 2026 03:12:42 +0000 (20:12 -0700)]
rar5: convert remaining per-entry data errors to ARCHIVE_FAILED
Follow-up to 9fa772ab. An audit of the rar5 reader found many more
ARCHIVE_FATAL returns in data-decode paths that should be ARCHIVE_FAILED
so the caller can move on to the next entry after a corrupt one.
Programmer assertions, ENOMEM, true I/O errors, and propagation of
copy_string's window-buf-NULL FATAL return are intentionally kept as
ARCHIVE_FATAL because they are not recoverable per-entry conditions.
Tim Kientzle [Sun, 3 May 2026 23:25:55 +0000 (16:25 -0700)]
rar: convert remaining per-entry data errors to ARCHIVE_FAILED
Follow-up to 4f148608. A code review found additional ARCHIVE_FATAL
returns in RAR4 data-decode paths that should be ARCHIVE_FAILED so the
caller can move on to the next entry:
- read_data_block: Truncated RAR file data
- read_data_compressed: PPMd "Invalid symbol" (3 sites)
- parse_codes: Zero window size is invalid
- add_value: Prefix found (second site)
- make_table_recurse: Huffman tree was not created
- make_table_recurse: Invalid location to Huffman tree specified
These are all per-entry parse/decode failures. As with the earlier
batch, the rar4 input position is tracked by rar_br_fillup so
read_data_skip will correctly advance past the damaged entry, and
RAR4 solid mode is not supported, so subsequent entries are not at
risk from a half-consumed shared decoder state.
Tim Kientzle [Sun, 3 May 2026 22:23:48 +0000 (15:23 -0700)]
tests: update expected return codes from FATAL to FAILED
Per-entry data errors (encryption, invalid filters, bad bitstream) now
return ARCHIVE_FAILED instead of ARCHIVE_FATAL. Update tests that were
asserting the old incorrect ARCHIVE_FATAL return codes.
The one exception is test_read_format_7zip_encryption_partially line 71,
which asserts ARCHIVE_FATAL after archive_read_next_header on the entry
following an encrypted entry: 7zip cannot skip an encrypted entry (the
decode-folder setup fails), so the skip legitimately returns ARCHIVE_FATAL
and the archive is done.
Tim Kientzle [Sun, 3 May 2026 22:21:44 +0000 (15:21 -0700)]
7zip: return ARCHIVE_FAILED (not ARCHIVE_FATAL) for per-entry data errors
setup_decode_folder() returned ARCHIVE_FATAL for both header-level and
data-level encryption/filter errors. Header encryption is a true
archive-fatal condition; data encryption is per-entry.
Distinguish the two by returning ARCHIVE_FATAL when decoding archive
headers (header==1) and ARCHIVE_FAILED when decoding file content
(header==0). Fix the call site in read_stream() to propagate the
actual return value rather than mapping all errors to ARCHIVE_FATAL.
Tim Kientzle [Sun, 3 May 2026 22:21:09 +0000 (15:21 -0700)]
rar: return ARCHIVE_FAILED (not ARCHIVE_FATAL) for per-entry data errors
ARCHIVE_FATAL means the entire archive is unreadable and no further
operations are valid. ARCHIVE_FAILED means the current entry cannot
be processed but iteration over subsequent entries may still succeed.
The RAR4 decompressor was returning ARCHIVE_FATAL from a large number
of data-parsing failures (invalid Huffman prefix, invalid PPMd sequence,
bad CRC, invalid symbol, etc.) that are per-entry errors. Because each
entry's compressed data region can be skipped using the packed_size
recorded in its file header, a decompressor error does not prevent
reading the next entry's header.
Change all such per-entry errors in the data-reading path
(read_data_stored, read_data_compressed, parse_codes, create_code,
add_value, make_table_recurse, expand, copy_from_lzss_window,
copy_from_lzss_window_to_unp) to return ARCHIVE_FAILED. OOM errors
and true I/O failures (rar_br_preparation truncated-data) remain
ARCHIVE_FATAL.
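The contract can be modelled in a standalone toy (the ARCHIVE_* constants mirror libarchive's values, but the loop is a simulation, not the library API): ARCHIVE_FAILED means "skip this entry and keep iterating", ARCHIVE_FATAL means "stop, the archive is unusable".

```c
/* Toy model of the return-code contract: constants mirror
 * libarchive's, but this simulation is standalone. */
#define ARCHIVE_OK      0
#define ARCHIVE_FAILED  (-25)   /* current entry unusable, keep going */
#define ARCHIVE_FATAL   (-30)   /* whole archive unusable, stop */

/* Count how many entry headers a caller can still visit when entry
 * bad_entry reports bad_code, under each policy. */
static int
visit_entries(int nentries, int bad_entry, int bad_code)
{
    int visited = 0;
    for (int i = 0; i < nentries; i++) {
        int r = (i == bad_entry) ? bad_code : ARCHIVE_OK;
        if (r == ARCHIVE_FATAL)
            break;              /* archive unreadable: stop */
        visited++;              /* FAILED: skip this entry's data */
    }
    return visited;
}
```

Under ARCHIVE_FAILED the caller still reaches every subsequent header; under ARCHIVE_FATAL iteration ends at the damaged entry.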
Tim Kientzle [Sun, 3 May 2026 22:20:33 +0000 (15:20 -0700)]
archive_read: make ARCHIVE_FATAL sticky in data-reading entry points
Three entry points in archive_read.c could return ARCHIVE_FATAL from
the format layer without setting a->archive.state = ARCHIVE_STATE_FATAL,
so subsequent API calls would not see the archive as fatally broken:
- archive_read_data_skip() unconditionally reset state to HEADER even
when the format's skip returned ARCHIVE_FATAL.
- archive_seek_data() and _archive_read_data_block() forwarded FATAL
from the format layer without recording it in the archive state.
Fix all three so that ARCHIVE_FATAL causes the state to become
ARCHIVE_STATE_FATAL, consistent with the existing behavior of
archive_read_next_header().