Whenever we need to create a temporary file while writing to disk on a
POSIX system, try to create it in the same directory as the final file
instead of the current working directory. The target directory can
reasonably be expected to be writable (and if it isn't, creating the
file will fail anyway), but the current working directory may not be.
While here, consistently use __archive_mkstemp(), and increase the
template from six to eight random characters.
In archive_util.c, we have a private function named get_tempdir() which
is used by __archive_mktemp() to get the temporary directory if the
caller did not pass one.
In archive_read_disk_entry_from_file.c, we use the same logic with a
slight twist (don't trust the environment if setugid) to create a
temporary file for metadata.
Merge the two by renaming get_tempdir() to __archive_get_tempdir() and
unstaticizing it (with a prototype in archive_private.h).
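Below is a minimal standalone sketch of the idea, using plain mkstemp() rather
than libarchive's own __archive_mkstemp()/__archive_get_tempdir() helpers
(whose signatures are not shown here); the function name and buffer sizes are
illustrative only.

    #include <libgen.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Create a temporary file in the same directory as the final target,
     * not in the current working directory.  Returns a file descriptor
     * and writes the chosen path into 'path'. */
    static int
    tempfile_near(const char *target, char *path, size_t pathsize)
    {
        char dir[4096];

        /* dirname() may modify its argument, so work on a copy. */
        strncpy(dir, target, sizeof(dir) - 1);
        dir[sizeof(dir) - 1] = '\0';

        /* libarchive's own helper uses eight random characters here. */
        if (snprintf(path, pathsize, "%s/tmpXXXXXX", dirname(dir))
            >= (int)pathsize)
            return (-1);
        return (mkstemp(path));
    }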
dependabot[bot] [Mon, 6 Oct 2025 16:07:23 +0000 (16:07 +0000)]
CI: Bump the all-actions group across 1 directory with 3 updates
Bumps the all-actions group with 3 updates in the / directory: [actions/checkout](https://github.com/actions/checkout), [github/codeql-action](https://github.com/github/codeql-action) and [ossf/scorecard-action](https://github.com/ossf/scorecard-action).
Updates `actions/checkout` from 4.2.2 to 5.0.0
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/11bd71901bbe5b1630ceea73d27597364c9af683...08c6903cd8c0fde910a37f88322edcfb5dd907a8)
Updates `github/codeql-action` from 3.28.18 to 3.29.8
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/ff0a06e83cb2de871e5a09832bc6a81e7276941f...76621b61decf072c1cee8dd1ce2d2a82d33c17ed)
Updates `ossf/scorecard-action` from 2.4.1 to 2.4.2
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](https://github.com/ossf/scorecard-action/compare/f49aabe0b5af0936a0987cfb85d86b75731b0186...05b42c624433fc40578a4040d5cf5e36ddca8cde)
Tim Kientzle [Tue, 16 Sep 2025 15:25:57 +0000 (08:25 -0700)]
Fix an infinite loop when parsing `V` headers
Our tar header parsing tracks a count of bytes that need to be
consumed from the input. After each header, we skip and discard this
many bytes and reset the count to zero. The `V` header parsing
added the size of the `V` entry body to this count, but failed to
check whether that size was negative. A negative size (from
overflowing the 64-bit signed number parsing) would decrement this
count, potentially causing us to consume zero bytes and loop forever
parsing the same header.
There are two fixes here:
* Check for a negative size for the `V` body
* Check for errors when skipping the bytes that need to be consumed
Thanks to Zhang Tianyi from Wuhan University for finding
and reporting this issue.
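The following is an illustrative sketch of the first fix (the names and error
codes are placeholders, not the actual tar reader code): reject a negative
entry size before it can shrink the skip counter.

    #include <stdint.h>

    #define MY_OK     0
    #define MY_FATAL (-30)

    /* Add a parsed `V` entry body size to the running "bytes to skip"
     * counter, rejecting negative values produced by overflowing the
     * 64-bit signed number parsing. */
    static int
    add_to_skip(int64_t *to_skip, int64_t entry_size)
    {
        if (entry_size < 0)
            return (MY_FATAL);  /* corrupt or overflowed size field */
        *to_skip += entry_size;
        return (MY_OK);
    }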
Tim Kientzle [Sat, 13 Sep 2025 19:30:03 +0000 (12:30 -0700)]
Rename err.h to avoid conflict with system header
Depending on header search path ordering, we can easily
confuse libarchive_fe/err.h with the system header.
Rename ours to lafe_err.h to avoid the confusion.
Rename libarchive_fe/err.c to match.
Tim Kientzle [Fri, 12 Sep 2025 16:01:13 +0000 (09:01 -0700)]
Ignore overlong gzip original_filename
We reuse the compression buffer to format the gzip header,
but didn't check for an overlong gzip original_filename.
This adds that check. If the original_filename is
over 32k (or bigger than the buffer in case someone shrinks
the buffer someday), we WARN and ignore the filename.
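A rough sketch of the check, assuming a hypothetical helper that appends the
FNAME field into the header buffer (the names and warning text are
illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Returns the new number of bytes used in buf. */
    static size_t
    append_gzip_filename(unsigned char *buf, size_t bufsize, size_t used,
        const char *filename)
    {
        size_t len = strlen(filename) + 1;   /* FNAME is NUL-terminated */

        if (used >= bufsize || len > bufsize - used) {
            /* Too long to fit: warn and skip the FNAME field entirely. */
            fprintf(stderr, "warning: gzip original filename ignored\n");
            return (used);
        }
        memcpy(buf + used, filename, len);
        return (used + len);
    }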
archive_write: Set archive state to fatal if format or filters fail
In archive_write_header(), if the format method or a filter flush method
fails, we set the archive state to fatal, but we did not do this in
archive_write_data() or archive_write_finish_entry(). There is no good
reason for this discrepancy. Not setting the archive state to fatal
means a subsequent archive_write_free() will invoke archive_write_close(),
which may retry the operation and cause archive_write_free() to return
an unexpected ARCHIVE_FATAL.
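A minimal illustration of the intended state handling, using hypothetical
names rather than libarchive's internal structures: once a write step fails
fatally, mark the archive so a later close/free will not retry the failed
operation.

    #include <stddef.h>

    enum my_state { MY_STATE_DATA, MY_STATE_FATAL };

    struct my_archive { enum my_state state; };

    /* Stand-in for the format's write callback. */
    static int
    my_format_write(struct my_archive *a, const void *buf, size_t len)
    {
        (void)a; (void)buf; (void)len;
        return (-30);                 /* pretend the format failed fatally */
    }

    static int
    my_write_data(struct my_archive *a, const void *buf, size_t len)
    {
        int ret = my_format_write(a, buf, len);

        if (ret < 0)
            a->state = MY_STATE_FATAL; /* previously only the header path did this */
        return (ret);
    }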
write_add_filter_bzip2: End compression in the freer
If a fatal error occurs, the closer will not be called, so neither will
BZ2_bzCompressEnd(), and we will leak memory. Fix this by also calling
it from the freer. This is harmless in the non-error case, as
it will see that the compression state has already been cleared and
immediately return BZ_PARAM_ERROR, which we simply ignore.
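A sketch of the shape of the fix, assuming libbz2's bzlib.h is available; the
private struct and function names here are hypothetical, but the
BZ2_bzCompressEnd() behavior described above is real.

    #include <bzlib.h>
    #include <stdlib.h>

    struct bzip2_private {
        bz_stream stream;
        void *compressed;
    };

    static int
    bzip2_free(struct bzip2_private *data)
    {
        if (data == NULL)
            return (0);
        /* The closer may already have ended the stream; a second call
         * returns BZ_PARAM_ERROR, which we deliberately ignore. */
        (void)BZ2_bzCompressEnd(&data->stream);
        free(data->compressed);
        free(data);
        return (0);
    }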
archive_write_client: Free state in freer, not in closer
The closer will not be called if a fatal error occurs, so the current
arrangement results in a memory leak. The downside is that the freer
may be called even if we were not fully constructed, so it needs to
perform additional checks. On the other hand, knowing that the freer
always gets called and will free the client state simplifies error
handling in the opener.
François Degros [Wed, 20 Aug 2025 05:45:32 +0000 (15:45 +1000)]
Use sysconf(_SC_OPEN_MAX) on systems without close_range or closefrom
Close all the file descriptors in the range [3 ..
sysconf(_SC_OPEN_MAX)-1] before executing a filter program to avoid
leaking file descriptors into subprocesses.
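A sketch of that fallback path (the function name is illustrative), for builds
where neither close_range() nor closefrom() exists:

    #include <unistd.h>

    /* Close every inherited descriptor above stdin/stdout/stderr before
     * exec'ing the filter program. */
    static void
    close_inherited_fds(void)
    {
        long maxfd = sysconf(_SC_OPEN_MAX);
        long fd;

        if (maxfd < 0)
            maxfd = 1024;   /* conservative guess if the limit is unknown */
        for (fd = 3; fd < maxfd; fd++)
            (void)close((int)fd);
    }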
archive_read_disk_posix: Don't pass -1 to a function expecting errno
This fixes an unhelpful "Couldn't visit directory: Unknown error: -1" message.
Fixes: 3311bb52cbe4 ("Bring the code supporting directory traversals from bsdtar/tree.[ch] into archive_read_disk.c and modify it. Introduce new APIs archive_read_disk_open and archive_read_disk_descend.")
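A small illustration of the general rule (the names are hypothetical, not the
actual traversal code): when a helper returns -1 purely as a failure marker,
report a real errno value instead of feeding the marker to strerror()-style
formatting.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static void
    report_visit_failure(const char *path)
    {
        /* Use the saved errno, not a -1 status code, for the message. */
        int err = (errno != 0) ? errno : EINVAL;

        fprintf(stderr, "Couldn't visit directory %s: %s\n",
            path, strerror(err));
    }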
RAR5 reader: early fail when file declares data for a dir entry
The RAR5 reader had inconsistent sanity checks for directory entries that
declare data. On one hand, such a declaration was accepted during the
header parsing process, but at the same time it was disallowed during
the data reading process. The disallow logic returned the
ARCHIVE_FAILED error code, which allowed the client to retry, while in
reality the error was non-retryable.
This commit adds another sanity check during the header parsing logic
that disallows directory entries from declaring any data. This makes
clients fail early when such an entry is detected.
Also, the commit changes the ARCHIVE_FAILED error code to ARCHIVE_FATAL
when trying to read data for a directory entry that declares data.
This makes sure that tools like bsdtar won't attempt to retry unpacking
such data.
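A simplified sketch of the early check, with hypothetical field names rather
than the RAR5 reader's real structures:

    #include <stdint.h>

    #define MY_OK     0
    #define MY_FATAL (-30)

    struct my_file_header {
        int is_dir;             /* entry is a directory */
        uint64_t packed_size;   /* declared packed data size */
    };

    /* Directories must not carry data; fail early at header-parse time. */
    static int
    check_header(const struct my_file_header *h)
    {
        if (h->is_dir && h->packed_size > 0)
            return (MY_FATAL);
        return (MY_OK);
    }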
RAR5 reader: fix multiple issues in extra field parsing function
This commit fixes multiple issues found in the function that parses
extra fields found in the "file"/"service" blocks.
1. If the file declares just one extra field and that field is
unsupported, the function returns ARCHIVE_FATAL.
The commit fixes this so that this case is allowed and the unsupported
extra field is skipped. The commit also introduces a test for this
case.
2. The current extra field parsing method can report parsing errors if
the file is malformed. The problem is that the next iteration of
parsing, which is meant to process the next extra field (if any),
overwrites the result of the previous iteration, even if the previous
iteration reported a parsing error. A successful parse can be
returned in this case, leading to undefined behavior.
This commit changes the behavior so that the parsing function fails early.
A test file is also introduced for this case.
3. If the file declares only the EX_CRYPT extra field, the current
function simply returns ARCHIVE_FATAL, preventing the caller from
setting the proper error string. This results in libarchive returning
ARCHIVE_FATAL without any error message set. PR #2096 (commit
adee36b00) was specifically created to provide error strings when the
EX_CRYPT attribute is encountered, but the current behavior contradicts
that intent.
The commit changes the behavior so that the extra field parsing function
returns ARCHIVE_OK when only EX_CRYPT is encountered, so that the calling
header reading function can properly return ARCHIVE_FATAL to the caller
while setting a proper error string. A test file is also provided for
this case.
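The loop below is an illustrative sketch (hypothetical names, not the real
RAR5 parser) of the first two points: unsupported extra fields are skipped
rather than rejected, but a parsing error stops the loop immediately so a
later field cannot overwrite it with a success result.

    #include <stddef.h>

    #define MY_OK     0
    #define MY_FATAL (-30)

    struct my_field { int supported; int malformed; };

    static int
    parse_one_field(const struct my_field *f)
    {
        if (f->malformed)
            return (MY_FATAL);
        if (!f->supported)
            return (MY_OK);     /* unsupported: just skip it */
        /* ... handle supported field types here ... */
        return (MY_OK);
    }

    static int
    parse_extra_area(const struct my_field *fields, size_t n)
    {
        size_t i;
        int ret;

        for (i = 0; i < n; i++) {
            ret = parse_one_field(&fields[i]);
            if (ret != MY_OK)
                return (ret);   /* fail early; don't keep iterating */
        }
        return (MY_OK);
    }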
Marcin Mikula [Wed, 30 Jul 2025 08:29:12 +0000 (10:29 +0200)]
Fix CVE-2025-25724 by checking the result of strftime()
to avoid using the undefined contents of buf when a custom
locale makes the result string longer than the buffer.
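A minimal sketch of the check (the format string and fallback are
illustrative): strftime() returns 0 when the result, including the terminating
NUL, does not fit in the buffer, and the buffer contents are then unspecified
and must not be used.

    #include <stdio.h>
    #include <time.h>

    static void
    format_mtime(char *buf, size_t bufsize, time_t t)
    {
        struct tm tmbuf;

        if (localtime_r(&t, &tmbuf) == NULL ||
            strftime(buf, bufsize, "%b %e %Y", &tmbuf) == 0) {
            /* Formatting failed or overflowed: fall back to raw seconds. */
            snprintf(buf, bufsize, "%lld", (long long)t);
        }
    }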
Tim Kientzle [Sat, 26 Jul 2025 18:10:24 +0000 (11:10 -0700)]
Guard against invalid type arguments
Some experiments showed that strange things happen if you
provide an invalid type value when appending a new ACL entry.
Guard against that and, while we're here, be a little more
paranoid elsewhere against bad types in case there is another
way for them to get in.
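A small sketch of such a guard, using hypothetical constants rather than
libarchive's actual ACL type values: reject anything that is not one of the
known types before appending the entry.

    #define MY_ACL_TYPE_ACCESS  0x0100
    #define MY_ACL_TYPE_DEFAULT 0x0200
    #define MY_ACL_TYPE_ALLOW   0x0400
    #define MY_ACL_TYPE_DENY    0x0800
    #define MY_OK     0
    #define MY_FAILED (-25)

    static int
    acl_add_entry(int type)
    {
        switch (type) {
        case MY_ACL_TYPE_ACCESS:
        case MY_ACL_TYPE_DEFAULT:
        case MY_ACL_TYPE_ALLOW:
        case MY_ACL_TYPE_DENY:
            break;
        default:
            return (MY_FAILED);   /* unknown or combined type bits */
        }
        /* ... append the entry ... */
        return (MY_OK);
    }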
Benoit Pierre [Sat, 7 Jun 2025 22:30:14 +0000 (00:30 +0200)]
zip: fix writing with ZSTD compression
When testing the feature with `bsdtar -acf test.zip --options
zip:compression=zstd …` on a tree of ~100MB, the execution would
appear to "hang" while writing a multi-gigabytes ZIP file.
Alex James [Sat, 12 Jul 2025 20:44:55 +0000 (15:44 -0500)]
Fix mkstemp path in setup_mac_metadata
setup_mac_metadata currently concatenates the template after TMPDIR without
adding a path separator, which causes mkstemp to create a temporary file
next to TMPDIR instead of in TMPDIR. Add a path separator to the
template to ensure that the temporary file is created under TMPDIR.
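A sketch of the corrected template construction (the template name is
illustrative, not the one setup_mac_metadata actually uses):

    #include <stdio.h>
    #include <stdlib.h>

    /* Open a temporary file inside tmpdir, never next to it. */
    static int
    open_tempfile_in(const char *tmpdir)
    {
        char path[4096];

        /* Note the '/' between the directory and the mkstemp() template. */
        if (snprintf(path, sizeof(path), "%s/metadataXXXXXX", tmpdir)
            >= (int)sizeof(path))
            return (-1);
        return (mkstemp(path));
    }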
I hit this while rebuilding libarchive in nixpkgs. Lix recently started
using a dedicated build directory (under /nix/var/nix/builds) instead of
using a directory under /tmp [1]. nixpkgs & Lix support (optional)
sandboxing on macOS. The default sandbox profile allows builds to access
paths under the build directory and any path under /tmp. Because the
build directory is no longer under /tmp, some of libarchive's tests
started to fail as they accessed paths next to (but not under) the build
directory:
cpio/test/test_basic.c:65: Contents don't match
Description: Expected: 2 blocks
, options=
file="pack.err"
0000_62_73_64_63_70_69_6f_3a_20_43_6f_75_6c_64_20_6e_bsdcpio: Could n
0010_6f_74_20_6f_70_65_6e_20_65_78_74_65_6e_64_65_64_ot open extended
0020_20_61_74_74_72_69_62_75_74_65_20_66_69_6c_65_0a_ attribute file.
The off_t datatype on Windows is 32 bits wide, which leads to issues when
handling files larger than 2 GB.
Add a wrapper around fstat/stat calls that returns a struct with a
properly sized st_size field. On systems with an off_t representing
the actual system limits, use the native system calls.
This also fixes mtree's checkfs option with large files on Windows.
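A sketch of the wrapper idea (not libarchive's actual helper names): on
Windows, use the 64-bit stat variant so st_size is not truncated, and use the
plain system call elsewhere.

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* Return the file size as a 64-bit value regardless of off_t width. */
    static int
    stat_size64(const char *path, int64_t *size)
    {
    #if defined(_WIN32) && !defined(__CYGWIN__)
        struct _stati64 st;

        if (_stati64(path, &st) != 0)
            return (-1);
        *size = (int64_t)st.st_size;
    #else
        struct stat st;

        if (stat(path, &st) != 0)
            return (-1);
        *size = (int64_t)st.st_size;
    #endif
        return (0);
    }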