Tim Kientzle [Sun, 3 May 2026 22:21:09 +0000 (15:21 -0700)]
rar: return ARCHIVE_FAILED (not ARCHIVE_FATAL) for per-entry data errors
ARCHIVE_FATAL means the entire archive is unreadable and no further
operations are valid. ARCHIVE_FAILED means the current entry cannot
be processed but iteration over subsequent entries may still succeed.
The RAR4 decompressor was returning ARCHIVE_FATAL from a large number
of data-parsing failures (invalid Huffman prefix, invalid PPMd sequence,
bad CRC, invalid symbol, etc.) that are per-entry errors. Because each
entry's compressed data region can be skipped using the packed_size
recorded in its file header, a decompressor error does not prevent
reading the next entry's header.
Change all such per-entry errors in the data-reading path
(read_data_stored, read_data_compressed, parse_codes, create_code,
add_value, make_table_recurse, expand, copy_from_lzss_window,
copy_from_lzss_window_to_unp) to return ARCHIVE_FAILED. OOM errors
and true I/O failures (rar_br_preparation truncated-data) remain
ARCHIVE_FATAL.
Tim Kientzle [Sun, 3 May 2026 22:20:33 +0000 (15:20 -0700)]
archive_read: make ARCHIVE_FATAL sticky in data-reading entry points
Three entry points in archive_read.c could return ARCHIVE_FATAL from
the format layer without setting a->archive.state = ARCHIVE_STATE_FATAL,
so subsequent API calls would not see the archive as fatally broken:
- archive_read_data_skip() unconditionally reset state to HEADER even
when the format's skip returned ARCHIVE_FATAL.
- archive_seek_data() and _archive_read_data_block() forwarded FATAL
from the format layer without recording it in the archive state.
Fix all three so that ARCHIVE_FATAL causes the state to become
ARCHIVE_STATE_FATAL, consistent with the existing behavior of
archive_read_next_header().
Tim Kientzle [Sat, 2 May 2026 16:46:06 +0000 (09:46 -0700)]
archive_acl: Fix buffer overrun and wrong output for NULL-name ACL entries
archive_acl_text_len() counted the trailing ":id" digits only when
ARCHIVE_ENTRY_ACL_STYLE_EXTRA_ID was set, but archive_acl_to_text_l()
always writes them for USER/GROUP entries whose name is NULL. With a
7-digit or larger id the allocated buffer was too short, causing
append_id() to write past its end.
Fix the estimator to also count the extra colon and digits when the
name is NULL, matching the serializer's logic.
The wide serializer (archive_acl_to_text_w) had the opposite problem:
it passed id=-1 to append_entry_w() for NULL-name entries regardless
of the id value, causing a garbage character to be written in the name
field and the numeric id to be omitted entirely. Fix it to mirror the
narrow serializer by setting id = ap->id when wname is NULL.
Darwin covers a wide range of platforms with similar but not identical
sets of libraries. MD5, SHA1 and SHA2 are available from libsystem on
all of these, but macOS also has them in libc and libmd. Restricting
our search to only libsystem means we can run configure on macOS and get
a config.h that also works for other Darwin platforms.
Return an int for error information and supply the offset through an
output argument. This matches other function declarations and makes it
much easier to differentiate between the status and the actual return
value.
While at it, merge the fallback mechanisms of both functions.
Its functionality is split off from archive_read_format_7zip_bid
and returns the offset to the actual data, i.e. it handles
self-extracting (SFX) files when offset 0 does not already contain
the 7zip magic.
Tim Kientzle [Sat, 25 Apr 2026 18:58:17 +0000 (14:58 -0400)]
[7zip] Sanity-check the number of files
We allocate space early on to support the advertised number of
files. A malicious archive can set a nonsensical value here to exhaust
memory. This adds a check comparing the number of files to the number
of streams and the size of the total header.
Note that the just-added test does not actually fail without this
change: the existing code recovers when the allocation fails, which it
typically will. The new check tightens the limit so that we reject
nonsensical file counts and avoid huge memory allocations in the first
place.
François Degros [Fri, 24 Apr 2026 07:34:10 +0000 (17:34 +1000)]
Add tests for appending various filters before archive open
Extend test coverage to ensure all supported filters can be appended
to an archive reader before it is opened, matching the behavior
required to fix #2514.
François Degros [Fri, 24 Apr 2026 07:09:13 +0000 (17:09 +1000)]
Fix SIGSEGV in compress filter when appended before open
Calling archive_read_append_filter(a, ARCHIVE_FILTER_COMPRESS) would
previously trigger a crash because compress_bidder_init() attempted to
read header bits from the upstream filter immediately. If the archive
was not yet opened (common when setting up filters), the upstream filter
state was not ready for reading.
This commit defers the header reading and decompressor initialization
until the first read operation (lazy initialization), consistent with
other filter implementations in libarchive.
cab reader: Fix use of uninitialized values from Huffman table
Initialize the Huffman table to invalid values, which doesn't otherwise
affect the computation but avoids use of uninitialized values upon
extraction of some archives (as reported by `valgrind`).
libarchive: fix Windows compilation with ENABLE_CNG=OFF
Currently, libarchive_{random,util}.c use a couple of bcrypt functions
regardless of whether HAVE_BCRYPT_H is defined, as there are no other
implementations for Windows, but the actual <bcrypt.h> header is
included only under this macro.
To be able to build libarchive with ENABLE_CNG=OFF (for example, to
prefer a different crypto/digest engine) on Windows, don't guard
the include in these two files. In that case, bcrypt will still be
used, but only as an RNG.
This won't break anything because, as mentioned above, bcrypt is
used unconditionally here and if it's not present in the system,
the library won't build either way, with or without the change.
At least until we implement an RNG for Windows based on something
else.
Signed-off-by: Alexander Lobakin <alobakin@mailbox.org>
The function isofile_gen_utility_names could resolve ".." directory
entries in a way that leaves dirname starting with "../". If this
happens, the while-loop is unable to detect it because it only advances
the cursor until it reaches the next slash.
Fix this by also taking a leading "../" into account. Such an entry
can occur if "../../" points above the top directory.
The isofile_gen_utility_names function normalizes directories,
including ".." directory entries. If such an entry contains multiple
slashes and leads to the top directory, the new path erroneously
becomes absolute.
Skip multiple slashes.
If rp is not NULL, then it already points to a slash. Take this into
account to unify the rp and dirname cases a bit more.
Resolving paths like "dir/../filename" to "filename" can lead to a
strcpy call with overlapping memory. Use memmove instead, as is
already done elsewhere in isofile_gen_utility_names.
Benjamin Gilbert [Sun, 19 Apr 2026 04:05:06 +0000 (23:05 -0500)]
Have `make distcheck` verify CMake build succeeds
There have been multiple instances of test cases being added to the CMake
build but not the Autotools one, thus omitting them from the released dist
tarball. Prevent this by testing the CMake build during `make distcheck`.
Remove an #include controlled by a preprocessor symbol that nothing
defines. I'm not sure if this has ever been needed, or what for, but
it serves no purpose today.
Tim Kientzle [Tue, 14 Apr 2026 02:38:07 +0000 (19:38 -0700)]
Fix a double-free in the link resolver
The link resolver is a helper utility that tracks linked
entries so they can be correctly restored. Clients add link information
to the link resolver and incrementally query it to correctly
link entries as they are restored to disk. The link resolver
incrementally releases entries as they are consumed in order
to minimize memory usage.
The `archive_entry_linkresolver_free()` method cleans up
by repeatedly querying the cache and freeing each entry.
But this conflicted with the incremental clean up,
leading to double-frees of leftover items.
The easy fix here is to have `archive_entry_linkresolver_free()`
just repeatedly query the list without trying to free, relying
on the incremental clean up mechanism.
Credit: tianshuo han reported the issue and suggested the fix.
elhananhaenel [Thu, 19 Mar 2026 14:43:29 +0000 (16:43 +0200)]
Add regression test for zisofs 32-bit heap overflow
A crafted ISO with pz_log2_bs=2 and pz_uncompressed_size=0xFFFFFFF9
causes an integer overflow in the block pointer allocation in
zisofs_read_data(). On 32-bit, (ceil+1)*4 wraps size_t to 0, malloc(0)
returns a tiny buffer, and the code writes ~4GB past it.
The pz_log2_bs validation fix prevents this. Add a regression test with
a crafted 48KB ISO that triggers the overflow on unfixed 32-bit builds.
libarchive/ppmd8: mark the remaining functions static
Those 9 are not used anywhere outside the file (the actual
functionality is exported as a callback structure).
Make them static for slightly better compiler optimization
opportunities and, more importantly, to avoid symbol conflicts when
statically linking libarchive together with any library that uses the
original Ppmd*.c from the LZMA SDK (like minizip-ng).
Also remove a couple of declarations and macros that are not used
anywhere at all while we're here.
Signed-off-by: Alexander Lobakin <alobakin@mailbox.org>
The anchor characters ^ and $ have special meaning only if they are
located at the beginning (^) or at the end ($) of the pattern, and even
then only if the corresponding flags are set.
If they appear within the pattern itself, they are regular characters
regardless of flags.
Removing periods from error messages only in Windows-specific code,
without adjusting the POSIX counterpart, makes the test fail on Windows
but not on POSIX systems.
Fix this by removing the period both in the test and in the POSIX error
messages.
Fixes: 3e0819b59e ("libarchive: Remove period from error messages")
Signed-off-by: Tobias Stoeckmann <tobias@stoeckmann.org>