Nicholas Vinson [Sun, 13 Apr 2025 11:33:43 +0000 (07:33 -0400)]
Copy ae digests to mtree_entry
Copy ae digests to mtree_entry. This simplifies porting non-archive
formats to archive formats while preserving supported message digests,
specifically in cases where recomputing digests is not viable.
Signed-off-by: Nicholas Vinson <nvinson234@gmail.com>
Zhaofeng Li [Thu, 15 May 2025 12:08:14 +0000 (06:08 -0600)]
bsdtar: Support `--mtime` and `--clamp-mtime` (#2601)
Hi,
This PR adds support for setting a forced mtime on all written files
(`--mtime` and `--clamp-mtime`) in bsdtar.
The end goal will be to support all functionalities in
<https://reproducible-builds.org/docs/archives/#full-example>, namely
`--sort` and disabling other attributes (atime, ctime, etc.).
Fixes #971.
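Conceptually, `--mtime` forces one timestamp on every entry, while
`--clamp-mtime` only lowers timestamps that are newer than that bound. A
minimal sketch of that logic against the public archive_entry API; the
helper name is made up and this is not the actual bsdtar implementation:
```
#include <archive_entry.h>
#include <time.h>

/* Hedged sketch: apply_mtime() is a made-up helper that only
 * illustrates the two behaviors. */
static void
apply_mtime(struct archive_entry *entry, time_t forced_mtime, int clamp)
{
	if (clamp) {
		/* --clamp-mtime: only lower timestamps newer than the bound. */
		if (archive_entry_mtime(entry) > forced_mtime)
			archive_entry_set_mtime(entry, forced_mtime, 0);
	} else {
		/* --mtime: force the timestamp on every entry. */
		archive_entry_set_mtime(entry, forced_mtime, 0);
	}
}
```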
## History
- [v1](https://github.com/zhaofengli/libarchive/tree/forced-mtime-v1):
Added `archive_read_disk_set_forced_mtime` in libarchive. As a result,
it was only applied when reading from the filesystem and not from other
archives.
- [v2](https://github.com/zhaofengli/libarchive/tree/forced-mtime-v2):
Refactored to apply the forced mtime in `archive_write`.
- v3 (current): Reduced libarchive change to exposing
`archive_parse_date`, moved clamping logic into bsdtar.
---------
Signed-off-by: Zhaofeng Li <hello@zhaofeng.li>
Co-authored-by: Dustin L. Howett <dustin@howett.net>
A filter block size must not be larger than the lzss window, which is
defined by the dictionary size, which in turn can be derived from the
unpacked file size.
While at it, improve error messages and fix the lzss window wrap-around
logic.
rar: Fix double free with over 4 billion nodes (#2598)
If a system is capable of handling 4 billion nodes in memory, a double
free could occur because of an unsigned integer overflow leading to a
realloc call with a size argument of 0. Eventually, the client will
release that memory again, triggering a double free.
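As an illustration of the bug class (not the actual RAR reader code): a
32-bit element count that wraps to zero turns a grow operation into
realloc(ptr, 0), which on common implementations releases the buffer,
so the caller's later cleanup frees it a second time:
```
#include <stdlib.h>
#include <stdint.h>

/* Illustrative only, not the actual RAR reader code. */
struct table {
	uint32_t count;   /* wraps to 0 after 4294967295 nodes */
	void *nodes;
};

static int
grow(struct table *t, size_t node_size)
{
	void *p;

	t->count++;                                  /* 0xFFFFFFFF + 1 == 0 */
	p = realloc(t->nodes, t->count * node_size); /* realloc(ptr, 0) */
	if (p == NULL)
		return -1; /* ...but many implementations already freed t->nodes */
	t->nodes = p;
	return 0;
}
/* A later free(t->nodes) during cleanup then frees the same block twice. */
```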
mehrabiworkmail [Fri, 9 May 2025 17:21:32 +0000 (10:21 -0700)]
7z SFX overlay detection (#2088)
To detect 7z SFX files, libarchive currently searches for the 7z header
in a hard-coded address range of the PE/ELF file
(specified via the macros SFX_MIN_ADDR and SFX_MAX_ADDR). This causes it
to miss SFX files that stray outside these values (libarchive fails to
extract 7z SFX ELF files created by recent versions of the 7z tool
because of this issue). This patch fixes the issue by finding a more
robust starting point for the 7z header search: the overlay in PE, or
the .data section in ELF. It also adds 3 new test cases for 7z SFX to
libarchive.
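For the PE case, the overlay begins where the last section's raw data
ends, so that offset is a natural place to start the 7z signature
search. A minimal sketch of that computation (illustrative only, not
libarchive's actual code, and omitting the extra overflow checks a real
parser needs):
```
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint32_t rd32(const unsigned char *p) {
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	    ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
static uint16_t rd16(const unsigned char *p) {
	return (uint16_t)(p[0] | (p[1] << 8));
}

/* Return the PE overlay offset (end of the last section's raw data),
 * or -1 if the buffer does not look like a PE image. */
static int64_t pe_overlay_offset(const unsigned char *buf, size_t len) {
	uint32_t pe_off, end = 0;
	uint16_t nsect, opt_size, i;
	size_t sect;

	if (len < 0x40 || buf[0] != 'M' || buf[1] != 'Z')
		return -1;
	pe_off = rd32(buf + 0x3c);              /* e_lfanew */
	if ((size_t)pe_off + 24 > len ||
	    memcmp(buf + pe_off, "PE\0\0", 4) != 0)
		return -1;
	nsect = rd16(buf + pe_off + 6);         /* NumberOfSections */
	opt_size = rd16(buf + pe_off + 20);     /* SizeOfOptionalHeader */
	sect = (size_t)pe_off + 24 + opt_size;  /* first section header */
	for (i = 0; i < nsect; i++, sect += 40) {
		if (sect + 40 > len)
			return -1;
		uint32_t raw_size = rd32(buf + sect + 16); /* SizeOfRawData */
		uint32_t raw_ptr = rd32(buf + sect + 20);  /* PointerToRawData */
		if (raw_ptr + raw_size > end)
			end = raw_ptr + raw_size;
	}
	return (int64_t)end;  /* begin the 7z signature search here */
}
```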
7zip reader: add test for POWERPC filter support for LZMA compressor (#2460)
This new test archive contains a C hello world executable built like so
on an Ubuntu 24.04 machine:
```
#include <stdio.h>

int main(int argc, char *argv[]) {
	printf("hello, world\n");
	return 0;
}
```
`powerpc-linux-gnu-gcc hw.c -o hw-powerpc -Wall`
The test archive that contains this executable was created like so,
using 7-Zip 24.08: `7zz a -t7z -m0=lzma2 -mf=ppc
libarchive/test/test_read_format_7zip_lzma2_powerpc.7z hw-powerpc`
The new test archive is required because the powerpc filter for lzma is
implemented in liblzma rather than in libarchive.
xar: add xmllite support to the XAR reader and writer (#2388)
This commit adds support for reading and writing XAR archives on Windows
using the built-in xmllite library. xmllite is present in all versions
of Windows starting with Windows XP.
With this change, no external XML library (libxml2, expat) is required
to read or produce XAR archives on Windows.
xmllite is a little awkward in that it's entirely a COM API, the likes
of which are cumbersome to use from C.
Tim Kientzle [Fri, 9 May 2025 11:36:05 +0000 (04:36 -0700)]
Polish for GNU tar format reading/writing (#2455)
A few small tweaks to improve reading/writing of the legacy GNU tar
format.
* Be more tolerant of redundant 'K' and 'L' headers
* Fill in missing error messages for redundant headers
* New test for reading archive with redundant 'L' headers
* Earlier identification of GNU tar format in some cases
These changes were inspired by Issue #2434. Although that was determined
to not technically be a bug in libarchive, it's relatively easy for
libarchive to tolerate duplicate 'K' and 'L' headers and we should be
issuing appropriate error messages in any case.
The refactoring of https://github.com/libarchive/libarchive/pull/2553
introduced three issues:
1. Introduction of a modifiable global static variable
This violates the goal of having no global variables as stated in [the
README.md](https://github.com/libarchive/libarchive/blob/b6f6557abb8235f604eced6facb42da8c7ab2a41/README.md?plain=1#L195)
and in turn leads to concurrency issues: without any form of mutex
protection, multiple threads are not guaranteed to see the correct
min/max values. Since these are only needed in edge cases rather than in
regular use, handle them in functions with local variables only.
The global variables are also locale-dependent, and the locale can
change during runtime; in that case, future calls lead to issues.
2. Broken 32 bit support
The writers for zip and other formats affected by the previously
mentioned PR, as well as the test suite on Debian 12 i686, are broken,
because the maximum MS-DOS time cannot be calculated with a 32 bit
time_t. Treat these cases properly.
3. Edge case protection
Huge or tiny int64_t values can easily lead to unsigned integer
overflows. While these do not affect the stability of libarchive, the
results are still wrong, i.e. they are not capped at min/max as
expected.
In total, the functions are much closer to their original versions again
(plus more range checks); a minimal sketch of the min/max handling is
shown below.
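Points 1 and 2 above amount to computing the MS-DOS timestamp bounds on
demand with local variables, and accepting that a 32 bit time_t cannot
represent the 2107 upper bound. A minimal sketch of that idea, not
libarchive's actual helpers:
```
#include <time.h>

/* Illustrative only. MS-DOS (FAT) timestamps cover 1980-01-01 00:00:00
 * through 2107-12-31 23:59:58 in local time; compute the bounds with
 * local variables so no mutable global or stale locale/timezone state
 * is involved. */
static time_t dos_min_time(void)
{
	struct tm tm = {0};

	tm.tm_year = 1980 - 1900;
	tm.tm_mday = 1;
	tm.tm_isdst = -1;
	return mktime(&tm);
}

static time_t dos_max_time(void)
{
	struct tm tm = {0};

	tm.tm_year = 2107 - 1900;
	tm.tm_mon = 11;
	tm.tm_mday = 31;
	tm.tm_hour = 23;
	tm.tm_min = 59;
	tm.tm_sec = 58;   /* DOS timestamps have 2-second resolution */
	tm.tm_isdst = -1;
	/* With a 32 bit time_t, 2107 is not representable and mktime()
	 * returns (time_t)-1; callers must then skip the upper clamp. */
	return mktime(&tm);
}
```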
Make sure that size_t casts do not truncate the value of packed_size on
32 bit systems, since it is 64 bit. Extensions to the RAR format allow
64 bit values to be specified in archives.
Also verify that 64 bit signed arithmetic does not overflow.
It is possible to trigger a call stack overflow by repeatedly entering
the rar_read_ahead function. In normal circumstances, this recursion is
optimized away by common compilers, but default settings with MSVC keep
the recursion in place. Explicitly turn the recursion into a goto-loop
to avoid the overflow even with no compiler optimizations.
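The transformation is the usual tail-recursion-to-loop rewrite. A
self-contained sketch with made-up names, not the actual rar_read_ahead
code:
```
/* Hypothetical stand-ins for illustration; these names do not exist
 * in libarchive. */
struct ctx { int retries; long avail; };

static int advance_to_next_block(struct ctx *c)
{
	/* Pretend a few blocks must be skipped before data is available. */
	return c->retries-- > 0;   /* nonzero = retry, 0 = data ready */
}

/* Tail-recursive form: correct, but each retry adds a stack frame
 * unless the compiler performs tail-call optimization (MSVC with
 * default settings does not). */
static long read_ahead_recursive(struct ctx *c)
{
	if (advance_to_next_block(c))
		return read_ahead_recursive(c);
	return c->avail;
}

/* Loop form: identical behavior with constant stack usage, the same
 * idea as the goto-loop described above. */
static long read_ahead_loop(struct ctx *c)
{
	for (;;) {
		if (advance_to_next_block(c))
			continue;
		return c->avail;
	}
}
```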
Reset avail_in and next_in when the next entry of a split archive is
parsed, so that the internal structure is always updated and the next
bytes can be accessed once the cache runs empty.
rar: Support large headers on 32 bit systems (#2596)
Support header sizes larger than 32 bit even on 32 bit systems, since
these normally have large file support. Otherwise an unsigned integer
overflow could occur, leading to erroneous parsing on these systems.
Unify spacing in archive_read_support_format_rar.c (#2590)
Most source files use tabs instead of spaces, but
archive_read_support_format_rar.c uses spaces most of the time. A few
lines contain a mixture of tabs and spaces, which leads to poorly
formatted output with many default settings.
Unify the style. No functional change; this is preparation for upcoming
changes, to avoid whitespace-only diffs.
Tim Kientzle [Sat, 26 Apr 2025 23:10:45 +0000 (16:10 -0700)]
Improve support for AFIO 64-bit inode values (#2589)
PR #2258 hardened AFIO parsing by converting all inode values >= 2^63 to
zero values. This turns out to be problematic for filesystems that use
very large inode values; it results in all such files being viewed as
hardlinks to each other.
PR #2587 partially addressed this by instead considering inode values >=
2^63 as invalid and just ignoring them. This prevented the accidental
hardlinking, but at the cost of losing all hardlinks that involved large
inode values.
This PR further improves things by stripping the high order bit from
64-bit inode values in the AFIO parser. This allows them to be mostly
preserved and should allow hardlinks to get properly processed in the
vast majority of cases. The only false hardlinks would be when there are
inode values that differ exactly in the high order bit, which should be
very rare.
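A minimal sketch of the masking described above; the helper name is made
up and this is not the exact AFIO parser code:
```
#include <stdint.h>

/* Keep the low 63 bits of the on-disk inode number so it fits the
 * signed 64-bit field used internally, instead of discarding large
 * values entirely. */
static int64_t fold_inode(uint64_t raw_ino)
{
	return (int64_t)(raw_ino & (uint64_t)INT64_MAX);
}
```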
A full solution will require expanding inode handling to use unsigned
64-bit values; we can't do that without a major version bump, but this
PR also sets the stage for migrating inode support in a future
libarchive 4.0.
Brian Campbell [Sat, 26 Apr 2025 04:11:19 +0000 (05:11 +0100)]
Fix overflow in build_ustar_entry (#2588)
The calculations for the suffix and prefix can increment the endpoint
for a trailing slash. Hence the limits used should be one lower than the
maximum number of bytes.
Without this patch, when this happens for both the prefix and the
suffix, we end up with 156 + 100 bytes, and the write of the null at the
end will overflow the 256 byte buffer. This can be reproduced by running
```
mkdir -p foo/bar
bsdtar cvf test.tar foo////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////bar
```
when bsdtar is compiled with Address Sanitiser, although I originally
noticed this by accident with a genuine filename on a CHERI capability
system, which faults immediately on the buffer overflow.
A signed type is used for inode values in libarchive, and there were
some fuzzing issues with it:
https://github.com/libarchive/libarchive/pull/2258 converts negative
`ino` values to `0`, which is actually a reserved inode value. More
importantly, it still set the `AE_SET_INO` flag, so later hardlink
detection treats all entries with `ino` `0` on the same `dev` as
hardlinks to each other if they have hardlinks.
This showed up in the BuildBarn FUSE filesystem
https://github.com/buildbarn/bb-remote-execution/issues/162 which both
- sets the number of links to a big value
- generates random inode values in the full uint64 range
which in turn triggers false-positive hardlink detection in `bsdtar`
with high probability.
Let's mitigate it (a minimal sketch follows below):
- don't set `AE_SET_INO` on negative values (assuming the rest of the
code isn't ready to correctly handle the full uint64 range)
- check that both `ino` and `dev` are set in the link resolver
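A hedged sketch of the second mitigation, assuming the existing
archive_entry_nlink()/archive_entry_ino_is_set()/archive_entry_dev_is_set()
accessors; the real change lives inside the link resolver rather than in
client code:
```
#include <archive_entry.h>

/* Only attempt hardlink matching when the entry actually has links
 * and both identifiers are known. */
static int
worth_hardlink_resolution(struct archive_entry *e)
{
	return archive_entry_nlink(e) > 1 &&
	    archive_entry_ino_is_set(e) &&
	    archive_entry_dev_is_set(e);
}
```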
Make sure not to skip past the end of file, for better error messages.
One such example is now visible in the rar test suite. You can already
see the difference with an actually not useless use of cat:
```
$ cat .../test_read_format_rar_ppmd_use_after_free.rar | bsdtar -t
bsdtar: Archive entry has empty or unreadable filename ... skipping.
bsdtar: Archive entry has empty or unreadable filename ... skipping.
bsdtar: Truncated input file (needed 119 bytes, only 0 available)
bsdtar: Error exit delayed from previous errors.
```
compared to
```
$ bsdtar -tf .../test_read_format_rar_ppmd_use_after_free.rar
bsdtar: Archive entry has empty or unreadable filename ... skipping.
bsdtar: Archive entry has empty or unreadable filename ... skipping.
bsdtar: Error exit delayed from previous errors.
```
Since the former cannot lseek, the error is a different one
(ARCHIVE_FATAL vs ARCHIVE_EOF). The piped version states explicitly that
truncation occurred, while the latter states EOF because the skip past
the end of file was successful.
tar: Improve LFS support on 32 bit systems (#2582)
The size_t data type is only 32 bit on 32 bit systems, while off_t is
generally 64 bit to support files larger than 2 GB.
If an entry is declared to be larger than 4 GB and the entry shall be
skipped, then 32 bit systems truncate the requested amount of bytes.
This leads to a different interpretation of data in tar files compared
to 64 bit systems.
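A small self-contained demonstration of the truncation (illustrative
only), showing how a 5 GB skip request cast to a 32-bit size_t silently
loses its high bits:
```
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	int64_t request = INT64_C(5) * 1024 * 1024 * 1024;  /* 5 GB entry */
	uint32_t truncated = (uint32_t)request;  /* what a 32-bit size_t keeps */

	printf("requested %" PRId64 " bytes, 32-bit cast keeps %" PRIu32 " bytes\n",
	    request, truncated);
	return 0;
}
```
This prints 1073741824 (1 GB) for the 5368709120 byte request, which is
exactly the kind of silent mismatch described above.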
Solves a Windows compile issue when OpenSSL/mbedTLS/Nettle is enabled
and found on the build system's paths, by giving the Windows API backend
higher priority on Windows (meaning that only RIPEMD160 will use
OpenSSL/mbedTLS/Nettle anymore).
If a warc archive claims to have more than INT64_MAX - 4 content bytes,
the inevitable failure to skip all these bytes could lead to parsing
data which should be ignored instead.
The test case contains a conversation entry with that many bytes and if
the entry is not properly skipped, the warc implementation would read
the conversation data as a new file entry.
The skip functions are limited to 1 GB for cases in which libarchive
runs on a system with an off_t or long of 32 bits. This has a negative
impact on 64 bit systems.
Instead, make sure that _all_ subsequent functions truncate properly.
Some of them already did and some had regressions for over 10 years.
Tests pass on Debian 12 i686 configured with --disable-largefile, i.e.
running with an off_t with 32 bits.
Casts added where needed to still pass MSVC builds.
Tim Kientzle [Sun, 30 Mar 2025 16:26:25 +0000 (09:26 -0700)]
Issue 2548: Reading GNU sparse entries (#2558)
My attempt to fix #2404 just made the confusion between the size of the
extracted file and the size of the contents in the tar archive worse
than it was before.
@ferivoz in #2557 showed that the confusion stemmed from a point where
we were setting the size in the entry (which is by definition the size
of the file on disk) when we read the `GNU.sparse.size` and
`GNU.sparse.realsize` attributes (which might represent the size on disk
or in the archive) and then using that to determine whether to read the
value in ustar header (which represents the size of the data in the
archive).
The confusion stems from three issues:
* The GNU.sparse.* fields mean different things depending on the version
of GNU tar used.
* The regular Pax `size` field overrides the value in the ustar header,
but the GNU sparse size fields don't always do so.
* The previous libarchive code tried to reconcile different size
information as we went along, which is problematic because the order in
which this information appears can vary.
This PR makes one big structural change: We now have separate storage
for every different size field we might encounter. We now just store
these values and record which one we saw. Then at the end, when we have
all the information available at once, we can use this data to determine
the size on disk and the size in the archive.
A few key facts about GNU sparse formats:
* GNU legacy sparse format: Stored all the relevant info in an extension
of the ustar header.
* GNU pax 0.0 format: Used `GNU.sparse.size` to store the size on disk
* GNU pax 0.1 format: Used `GNU.sparse.size` to store the size on disk
* GNU pax 1.0 format: Used `GNU.sparse.realsize` to store the size on
disk; repurposed `GNU.sparse.size` to store the size in the archive, but
omitted this in favor of the ustar size field when that could be used.
And of course, some key precedence information:
* Pax `size` field always overrides the ustar header size field.
* GNU sparse size fields override it ONLY when they represent the size
of the data in the archive.
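A minimal sketch of that structural change, with hypothetical names
(libarchive's actual fields differ): store every size we see plus a
flag, then resolve both sizes once all headers have been read:
```
#include <stdint.h>

struct tar_sizes {
	uint64_t ustar_size;       /* size field of the ustar header */
	uint64_t pax_size;         /* pax "size" keyword */
	uint64_t sparse_size;      /* GNU.sparse.size */
	uint64_t sparse_realsize;  /* GNU.sparse.realsize */
	int saw_pax_size, saw_sparse_size, saw_sparse_realsize;
	int sparse_1_0;            /* nonzero for the GNU pax 1.0 sparse format */
};

/* Size of the file once extracted to disk. */
static uint64_t size_on_disk(const struct tar_sizes *s)
{
	if (s->saw_sparse_realsize)
		return s->sparse_realsize;   /* pax 1.0 */
	if (s->saw_sparse_size && !s->sparse_1_0)
		return s->sparse_size;       /* pax 0.0 / 0.1 */
	if (s->saw_pax_size)
		return s->pax_size;
	return s->ustar_size;
}

/* Size of the (possibly sparse-encoded) data stored in the archive. */
static uint64_t size_in_archive(const struct tar_sizes *s)
{
	if (s->saw_sparse_size && s->sparse_1_0)
		return s->sparse_size;       /* repurposed in pax 1.0 */
	if (s->saw_pax_size)
		return s->pax_size;          /* pax always overrides ustar */
	return s->ustar_size;
}
```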
Peter Kokot [Sat, 22 Mar 2025 21:21:38 +0000 (22:21 +0100)]
CMake: Replace CMAKE_COMPILER_IS_GNUCC with CMAKE_C_COMPILER_ID (#2550)
Hello,
- The `CMAKE_COMPILER_IS_*` variables are deprecated and
`CMAKE_C_COMPILER_ID` can be used in this case instead.
- The legacy `endif()` command argument was also simplified to avoid
repeating the condition.
Nicholas Vinson [Sun, 16 Mar 2025 23:37:47 +0000 (19:37 -0400)]
Move archive_entry_set_digest() to public API (#2540)
Moving archive_entry_set_digest() to the public API simplifies porting
non-archive formats to archive formats while preserving supported
message digests, specifically in cases where recomputing digests is not
viable.
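A minimal usage sketch, assuming the setter keeps the signature and the
ARCHIVE_ENTRY_DIGEST_* constants introduced by this change; check
archive_entry.h for the authoritative declarations:
```
#include <archive_entry.h>

int main(void)
{
	unsigned char sha256[32] = {0};  /* digest computed elsewhere */
	struct archive_entry *entry = archive_entry_new();
	const unsigned char *stored;

	archive_entry_set_pathname(entry, "file.bin");
	archive_entry_set_digest(entry, ARCHIVE_ENTRY_DIGEST_SHA256, sha256);

	/* A writer (e.g. mtree) can later fetch the digest without
	 * recomputing it from the file contents. */
	stored = archive_entry_digest(entry, ARCHIVE_ENTRY_DIGEST_SHA256);
	(void)stored;
	archive_entry_free(entry);
	return 0;
}
```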
Peter Kästle [Mon, 10 Mar 2025 15:43:04 +0000 (16:43 +0100)]
fix CVE-2025-1632 and CVE-2025-25724 (#2532)
Hi,
please find my approach to fix the CVE-2025-1632 and CVE-2025-25724
vulnerabilities in this PR.
As both error cases triggered a NULL pointer dereference (and hopefully
a core dump everywhere), we can safely replace the actual information
with a predefined invalid string without breaking any functionality.
ljdarj [Sat, 8 Mar 2025 03:28:51 +0000 (04:28 +0100)]
archive_version_details' update (#2349)
Adding missing libraries to `archive_version_details()`'s output. I put
"system" if the library doesn't give a way to query its version, and
"bundled" if there's a choice between the system copy of a library and a
bundled one and we took the bundled copy (only one library in that case,
libb2; maybe also xxhash in the future?).
I would have a question for the Windows specialists though: is there a
way to query the interface version of a CNG cryptographic provider?
Because I know of a way for Crypto API providers but I haven't found any
for CNG ones, despite `<bcrypt.h>` having an interface version
structure.
Tim Kientzle [Sat, 1 Mar 2025 17:06:31 +0000 (09:06 -0800)]
Avoid unreachable code in this test (#2528)
As remarked in #2521, this test has unreachable code on Windows, which
triggers a build failure in development due to warnings-as-errors.
(Release versions should not have warnings-as-errors.)
Silent [Wed, 1 Jan 2025 16:31:35 +0000 (17:31 +0100)]
Fix a Y2038 bug by replacing `Int32x32To64` with regular multiplication (#2471)
The `Int32x32To64` macro internally truncates its arguments to int32,
while `time_t` is 64-bit on most/all modern platforms. Therefore, using
this macro creates a Year 2038 bug.
I detailed this issue a while ago in a writeup, and spotted the same
issue in this repository when updating the list of affected
repositories:
<https://cookieplmonster.github.io/2022/02/17/year-2038-problem/>
A few more notes:
1. I changed all uses of `Int32x32To64` en masse, even though at least
one of them was technically OK and used with int32 parameters only. IMO
better safe than sorry.
2. This is untested, but it's a small enough change that I hope the CI
success is a good enough indicator.
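For illustration (not the exact libarchive diff), the typical pattern is
converting a Unix time to a Windows FILETIME tick count;
Int32x32To64(t, 10000000) truncates t to 32 bits before multiplying,
while a plain 64-bit multiplication does not:
```
#include <stdint.h>

/* 100-nanosecond ticks between 1601-01-01 (FILETIME epoch) and
 * 1970-01-01 (Unix epoch). */
#define EPOCH_DIFF_100NS INT64_C(116444736000000000)

static int64_t
unix_time_to_filetime_ticks(int64_t t)
{
	return t * INT64_C(10000000) + EPOCH_DIFF_100NS;
}
```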
Endianness is easy to determine at runtime, but detecting it a single
time and then reusing the cached result might require API changes.
However, we can use compile-time detection via some known compiler
macros without API changes fairly easily. Let's start by enabling this
for Clang and GCC.
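A minimal sketch of such a compile-time check using the predefined
GCC/Clang macros; the IS_LITTLE_ENDIAN name is illustrative, not
libarchive's actual identifier:
```
/* Other compilers fall back to the runtime check. */
#if defined(__BYTE_ORDER__) && defined(__ORDER_LITTLE_ENDIAN__) && \
    __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#  define IS_LITTLE_ENDIAN 1
#elif defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__) && \
    __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#  define IS_LITTLE_ENDIAN 0
#endif
```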
The test archive that contains this executable was created like so,
using 7-Zip 24.08:
`7zz a -t7z -m0=deflate -mf=ppc
libarchive/test/test_read_format_7zip_deflate_powerpc.7z hw-powerpc`
This test fails in the first commit in this PR, and passes in the second
commit.
tar: fix bug when -s/a/b/ used more than once with b flag (#2435)
When the -s/regexp/replacement/ option was used with the b flag more
than once, the result of the previous substitution was appended to the
previous subject instead of replacing it. Fixed by making sure the
subject is reset to the empty string before the call to
realloc_strcat(). That in effect makes it more like a realloc_strcpy(),
but creating a new realloc_strcpy() function for that one usage doesn't
feel worth it.
ci: use at most as many make threads as there are cores on mac and linux github runners (#2437)
We previously told make to run as many threads as it likes on these CI
jobs, but that might sometimes hit resource limits like RAM or the
allowed number of open files.
These numbers were found experimentally by using `sysctl -n hw.ncpu` on
mac and `nproc` on linux.