Julian Uy [Fri, 6 Dec 2024 15:57:27 +0000 (09:57 -0600)]
Add missing definition for getline polyfill (#2425)
The fallback for when `getline` is not implemented in libc was not
compiling because its definition was missing, so add the definition.
Alexander Ziaee [Fri, 6 Dec 2024 15:50:06 +0000 (10:50 -0500)]
bsdtar.1: Mention rar support + manual page polish (#2423)
I have been using this for years without realizing it decompresses rar.
+ add rar to supported decompression formats
+ use section references to link sections (this makes them clickable in
GUIs)
+ add paragraph breaks for consistent spacing
+ pdtar is not this program, so use Sy per mdoc style guide
+ do almost the same in reverse for bsdtar
+ remove parenthetical around a complete sentence
Test with XZ Utils 5.6.3 on Windows CI jobs (#2417)
This change fixes the autotools build to work with xz-utils 5.6.3, which
changed library names on Windows, and fixes a couple of tests that I
noticed had dependencies on liblzma.
ljdarj [Sun, 17 Nov 2024 01:42:27 +0000 (02:42 +0100)]
Moving the tests' integer reading functions to test_utils. (#2410)
Moving the tests' integer reading functions to test_utils so that they
all use the same implementation, and moving the few tests that use the
archive_endian functions over to the test_utils helpers.
Tim Kientzle [Wed, 6 Nov 2024 21:21:54 +0000 (13:21 -0800)]
Ignore ustar size when pax size is present (#2405)
When the pax `size` field is present, we should ignore the size value in
the ustar header. In particular, this fixes reading pax archives created
by GNU tar with entries larger than 8GB.
Note: This doesn't impact reading pax archives created by libarchive
because libarchive uses tar extensions to store an accurate large size
field in the ustar header. GNU tar instead strictly follows ustar in
this case, which prevents it from storing accurate sizes in the ustar
header.
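The precedence rule amounts to something like this sketch; the flag name is hypothetical, not the actual field in the tar reader:
```c
/* Apply the ustar size only when no pax `size` record was seen for
 * this entry; the pax value, if any, was stored first and must win. */
if (!tar->pax_size_seen)
    archive_entry_set_size(entry, ustar_size);
```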
The two test archives that contain this executable were created like so,
using the https://github.com/tehmul/p7zip-zstd fork of 7-Zip:
`7z a -t7z -m0=zstd -mf=SPARC libarchive/test/test_read_format_7zip_zstd_sparc.7z hw-sparc64`
`7z a -t7z -m0=lzma2 -mf=SPARC libarchive/test/test_read_format_7zip_lzma2_sparc.7z hw-sparc64`
Two test files are required, because the 7zip reader code has two
different paths, one for lzma and one for all other compressors.
The test_read_format_7zip_lzma2_sparc test is expected to pass, because
LZMA BCJ filters are implemented in liblzma.
The test_read_format_7zip_zstd_sparc test is expected to fail in the
first commit, because libarchive does not currently implement the SPARC
BCJ filter. The second commit will make test_read_format_7zip_zstd_sparc
pass.
Dustin L. Howett [Tue, 22 Oct 2024 09:10:50 +0000 (04:10 -0500)]
write_xar: move libxml2 behind an abstraction layer (#1849)
This commit prepares the XAR writer for another XML writing backend.
Almost everything in this changeset leaves the code identical to how
it started, except for a new layer of indirection between the xar writer
and the XML writer.
The things that are not one-to-one renames include:
- The removal of `UTF8Toisolat1` for the purposes of validating UTF-8
  - The writer code made a copy of every filename for the purposes of
    checking whether it was Latin-1 stored as UTF-8. In xar, non-Latin-1
    gets stored Base64-encoded.
  - I've replaced this use because (1) it was inefficient and (2)
    `UTF8Toisolat1` is a `libxml2` export.
  - The new function has slightly different results than the one it is
    replacing for invalid UTF-8. Namely, it treats illegal UTF-8 "overlong"
    encodings of Latin-1 codepoints as _invalid_. It operates on the principle
    that we can determine whether something is Latin-1 based entirely on how
    long the sequence is expected to be (see the sketch after this list).
- The move of `SetIndent` to before `StartDocument`, which the
abstraction layer immediately undoes. This is to accommodate XML writers
that require indent to be set _before_ the document starts.
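A sketch of that length-based check (an assumed shape, not the exact code in the PR): Latin-1 codepoints occupy either one UTF-8 byte (ASCII) or two bytes with lead byte 0xC2/0xC3, so anything else, including overlong encodings, is rejected.
```c
#include <stddef.h>

/* Returns 1 if the UTF-8 string contains only codepoints representable
 * in Latin-1 (U+0000..U+00FF), 0 otherwise. Overlong two-byte encodings
 * (lead bytes 0xC0/0xC1) are treated as invalid. */
static int
utf8_is_latin1(const char *s, size_t len)
{
    const unsigned char *p = (const unsigned char *)s;
    const unsigned char *end = p + len;

    while (p < end) {
        if (*p < 0x80) {                /* ASCII: one byte */
            p++;
        } else if ((*p == 0xC2 || *p == 0xC3) &&
            p + 1 < end && (p[1] & 0xC0) == 0x80) {
            p += 2;                     /* U+0080..U+00FF: two bytes */
        } else {
            return (0);                 /* >U+00FF, overlong, or malformed */
        }
    }
    return (1);
}
```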
ljdarj [Tue, 22 Oct 2024 08:58:22 +0000 (10:58 +0200)]
Adding XZ, LZMA, ZSTD and BZIP2 support to ZIP writer (#2284)
PPMD may come later, but I'd rather first iron out style issues with the
compressors that only need to wire up libraries already used in
libarchive before going at the ones that may also require implementing
algorithms.
dependabot[bot] [Sun, 13 Oct 2024 07:42:01 +0000 (09:42 +0200)]
CI: Bump the all-actions group across 1 directory with 4 updates (#2379)
Bumps the all-actions group with 4 updates:
- actions/checkout from 4.1.6 to 4.2.1
- actions/upload-artifact from 4.3.3 to 4.4.3
- github/codeql-action from 3.25.6 to 3.26.12
- ossf/scorecard-action from 2.3.3 to 2.4.0
Emil Velikov [Sun, 13 Oct 2024 03:54:16 +0000 (04:54 +0100)]
Convert the tools and respective tests to SPDX (#2317)
This is the first part of converting the project to use SPDX license
identifiers instead of using the verbose license text.
The patches are semi-automated and I've gone through them manually to ensure
no license changes were made. That said, I would welcome another pair of
eyes, since I am only human.
See https://github.com/libarchive/libarchive/issues/2298
---------
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Update archive_private to avoid template keyword (#2342)
People really should never, ever, ever use libarchive internal headers. And they definitely should not expect libarchive internal headers to work in a C++ compiler. (C++ and C are really just not that compatible.)
However, people do a lot of things they shouldn't: Avoid the reserved C++ keyword `template`
vcoxvco [Sun, 13 Oct 2024 00:44:32 +0000 (02:44 +0200)]
configure.ac,CMakeLists.txt: Add libbsd on Haiku for readpassphrase (#2352)
Followup from #2346
Add libbsd to make/cmake configuration for linking readpassphrase on
Haiku.
Maybe there is a better way to do this for CMake; I'm not that familiar
with it.
Duncan Horn [Fri, 11 Oct 2024 06:30:25 +0000 (23:30 -0700)]
[7zip] Read/write symlink paths as UTF-8 (#2252)
I previously tried to find documentation on how symlinks are expected to
be stored in 7zip files; however, the best reference I could find was
[here](https://py7zr.readthedocs.io/en/latest/archive_format.html). That
site suggests that symlink paths are stored as UTF-8 encoded strings:
Duncan Horn [Fri, 11 Oct 2024 06:25:47 +0000 (23:25 -0700)]
Update RAR5 code to report encryption (#2096)
Currently, the RAR5 code always reports
`ARCHIVE_READ_FORMAT_ENCRYPTION_UNSUPPORTED` for
`archive_read_has_encrypted_entries` and does not set any of the
entry-specific properties, even though it has enough information to
report this accurately. Accurate reporting of encryption is very useful
for applications, because an error message such as "the archive is
encrypted, but we don't currently support encryption" is a lot better
than a not-generally-useful `errno` value and a non-localizable error
string with a confusing and unpredictable message.
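From an application's point of view, the improved reporting can be consumed roughly like this sketch; the RAR5 support call and the encryption query functions are the public libarchive API, the path is a placeholder:
```c
#include <stdio.h>
#include <archive.h>
#include <archive_entry.h>

/* Print per-entry and archive-level encryption information. */
static void
report_encryption(const char *path)
{
    struct archive *a = archive_read_new();
    struct archive_entry *entry;

    archive_read_support_format_rar5(a);
    if (archive_read_open_filename(a, path, 10240) != ARCHIVE_OK) {
        fprintf(stderr, "%s\n", archive_error_string(a));
        archive_read_free(a);
        return;
    }
    while (archive_read_next_header(a, &entry) == ARCHIVE_OK) {
        printf("%s: data %sencrypted, metadata %sencrypted\n",
            archive_entry_pathname(entry),
            archive_entry_is_data_encrypted(entry) ? "" : "not ",
            archive_entry_is_metadata_encrypted(entry) ? "" : "not ");
        archive_read_data_skip(a);
    }
    printf("archive-level answer: %d\n",
        archive_read_has_encrypted_entries(a));
    archive_read_free(a);
}
```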
ljdarj [Fri, 11 Oct 2024 06:18:55 +0000 (08:18 +0200)]
Change to Windows absolute symlinks. (#2362)
Change to read absolute symlinks as verbatim paths instead of NT paths:
as far as I can see, libarchive can deal with verbatim paths but not
with NT ones.
Michał Górny [Fri, 11 Oct 2024 06:17:01 +0000 (08:17 +0200)]
configure.ac: remove incorrect 4th argument to `AC_CHECK_FUNCS` (#2334)
Remove the incorrect 4th argument from `AC_CHECK_FUNCS` calls. The macro
uses only three arguments, so the fourth was ignored anyway. Furthermore,
in at least one instance it was wrong -- due to a typo in the
`attr/xatr.h` header name.
Tim Kientzle [Fri, 11 Oct 2024 06:16:12 +0000 (23:16 -0700)]
Don't crash on truncated tar archives (#2364)
The tar header parsing overhaul in #2127 introduced a systematic
mishandling of truncated files: the code does not correctly check
whether a given read operation failed, and ends up dereferencing a NULL
pointer in that case. I've gone back and double-checked how
`__archive_read_ahead` actually works (it returns NULL precisely when it
was unable to satisfy the read request) and reworked the error handling
for each call to this function in archive_read_support_format_tar.c.
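The corrected pattern looks roughly like this sketch; it uses libarchive's internal read-ahead API, so it assumes the reader context from the internal headers:
```c
/* __archive_read_ahead() returns NULL precisely when it cannot supply
 * the requested bytes; treat that as truncation instead of using p. */
const char *p = __archive_read_ahead(a, 512, NULL);
if (p == NULL) {
    archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT,
        "Truncated tar archive");
    return (ARCHIVE_FATAL);
}
```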
Tim Kientzle [Fri, 11 Oct 2024 06:14:58 +0000 (23:14 -0700)]
Sanity-check gzip header field length (#2366)
OSS-Fuzz managed to construct a small gzip input that decompresses into
another gzip input with an extremely large filename field. This causes
libarchive to hang processing the inner gzip.
Address this by rejecting any gzip input where the filename or comment
fields exceed 1MiB.
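A generic sketch of that kind of limit (names and structure are illustrative, not the actual filter code): scan the NUL-terminated field, but give up once it exceeds 1 MiB.
```c
#include <stddef.h>
#include <sys/types.h>

#define GZIP_HEADER_FIELD_LIMIT (1024 * 1024)   /* 1 MiB */

/* Returns the field length including the NUL, 0 if more input is
 * needed, or -1 if the field exceeds the sanity limit. */
static ssize_t
scan_gzip_field(const unsigned char *p, size_t avail)
{
    size_t i;

    for (i = 0; i < avail; i++) {
        if (i >= GZIP_HEADER_FIELD_LIMIT)
            return (-1);                /* reject oversized field */
        if (p[i] == '\0')
            return ((ssize_t)(i + 1));
    }
    return (0);                         /* need more input */
}
```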
Tim Kientzle [Fri, 11 Oct 2024 06:13:00 +0000 (23:13 -0700)]
Clarify crc32 variable names (#2367)
No functional change, just a tiny style improvement.
Use `crc32_computed` to refer to the crc32 that the reader has computed
and `crc32_read` to refer to the value that we read from the archive.
That hopefully makes this code a tiny bit easier to follow. (It confused
me recently when I was double-checking something in this area, so I
thought an improvement here might help others.)
Tim Kientzle [Fri, 11 Oct 2024 06:11:43 +0000 (23:11 -0700)]
Fix error message printing (#2368)
We always print the error message with or without -v, but for some
reason, we were omitting the path being processed. Simplify so that we
always print the full error including context.
This fixes various code quality issues I encountered while chasing a
memory leak reported by test automation. I failed to reproduce the
memory leak, but I hope you find this useful nonetheless.
These were disabled when migrating from Cirrus CI. Let's enable them for
github workflows, disable any failing tests on this configuration and
leave TODO notes to fix them.
This was the only failure that I found:
```
684/764 Test #684: bsdtar_test_option_ignore_zeros_mode_c ...................................***Failed 0.10 sec
If tests fail or crash, details will be in:
C:\Users\RUNNER~1\AppData\Local\Temp/bsdtar_test.exe.2024-09-29T11.42.13-000
Reference files will be read from: D:/a/libarchive/libarchive/tar/test
Running tests on: "D:\a\libarchive\libarchive\build_ci\cmake\bin\Release\bsdtar.exe"
Exercising: bsdtar 3.8.0 - libarchive 3.8.0dev zlib/1.3 liblzma/5.4.4 bz2lib/1.1.0 libzstd/1.5.5
39: test_option_ignore_zeros_mode_c
D:\a\libarchive\libarchive\tar\test\test_option_ignore_zeros.c(99): File should be empty: test-c.err
File size: 112
Contents:
0000 62 73 64 74 61 72 2e 65 78 65 3a 20 61 3a 20 43 bsdtar.exe: a: C
0010 61 6e 27 74 20 74 72 61 6e 73 6c 61 74 65 20 75 an't translate u
0020 6e 61 6d 65 20 27 28 6e 75 6c 6c 29 27 20 74 6f name '(null)' to
0030 20 55 54 46 2d 38 0d 0a 62 73 64 74 61 72 2e 65 UTF-8..bsdtar.e
0040 78 65 3a 20 62 3a 20 43 61 6e 27 74 20 74 72 61 xe: b: Can't tra
0050 6e 73 6c 61 74 65 20 75 6e 61 6d 65 20 27 28 6e nslate uname '(n
0060 75 6c 6c 29 27 20 74 6f 20 55 54 46 2d 38 0d 0a ull)' to UTF-8..
```
Tim Kientzle [Sun, 22 Sep 2024 23:06:34 +0000 (16:06 -0700)]
Clean up linkpath between entries (#2343)
PR #2127 failed to clean up the linkpath storage between entries. As a
result, after the first hard/symlink entry in a pax format archive, all
subsequent entries would get the same link information.
I'm really unsure how this bug failed to trip CI. I'll do some digging
in the test suite before I merge this.
Resolves #2331, #2337
P.S. Thanks to Brad King for noting that the linkpath wasn't being
managed correctly, which was a big hint for me.
Michał Górny [Sat, 21 Sep 2024 02:44:06 +0000 (04:44 +0200)]
tar/write.h: Support `sys/xattr.h` (#2335)
Synchronize the last use of `attr/xattr.h` to support using
`sys/xattr.h` instead. The former header is deprecated on GNU/Linux, and
this replacement makes it possible to build libarchive without the
`attr` package.
Brad King [Fri, 20 Sep 2024 12:11:43 +0000 (08:11 -0400)]
tar: fix memory leaks when processing symlinks or parsing pax headers (#2338)
Fix memory leaks introduced by #2127:
* `struct tar` member `entry_linkpath` was moved at the same time as
other members were removed, but its cleanup was accidentally removed
with the others.
* `header_pax_extension` local variable `attr_name` was not cleaned up.
Tim Kientzle [Fri, 20 Sep 2024 05:20:02 +0000 (22:20 -0700)]
Be more cautious about parsing ISO-9660 timestamps (#2330)
Some ISO images don't have valid timestamps for the root directory
entry. Parsing such timestamps can generate nonsensical results, which
in one case showed up as an unexpected overflow on a 32-bit system.
Add some validation logic that can check whether a 7-byte or 17-byte
timestamp is reasonable-looking, and use this to ignore invalid
timestamps in various locations. This also requires us to be a little
more careful about tracking which timestamps are actually known.
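For the 7-byte form, the validation might look something like this sketch (field order per ECMA-119: years since 1900, month, day, hour, minute, second, timezone offset in 15-minute units; the exact bounds used by the actual patch may differ):
```c
/* Plausibility check for the 7-byte ISO9660 timestamp. */
static int
iso9660_7byte_time_is_plausible(const unsigned char *t)
{
    if (t[1] < 1 || t[1] > 12)                  /* month */
        return (0);
    if (t[2] < 1 || t[2] > 31)                  /* day */
        return (0);
    if (t[3] > 23 || t[4] > 59 || t[5] > 59)    /* hour:minute:second */
        return (0);
    if ((signed char)t[6] < -48 || (signed char)t[6] > 52)  /* tz offset */
        return (0);
    return (1);
}
```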
Followup to #2318, which accidentally made zlib required.
Tested locally by increasing the required version in CMakeLists.txt to
1.4.1 (which does not exist yet) and confirming that the build reports
that a suitable version of zlib was not found, while the build continued.
Emil Velikov [Sun, 1 Sep 2024 03:28:57 +0000 (04:28 +0100)]
tests: reduce zstd long option to 23 (#2305)
With 26 and 27, the sub-test is pushing 2G and 4G of memory,
respectively. There is no particular reason why we need to push for
higher limits here, so let's pick 23, which weighs in at around 0.25G.
The test suite overall is in the 0.25-0.5G range, and this fits
perfectly.
Closes: https://github.com/libarchive/libarchive/issues/2080 Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Fix `test_write_format_zip_stream` failure when `HAVE_ZLIB_H` is not
defined.
If `libz` is present, `zip` archives are compressed by default, which
requires `zip_version=20`. Otherwise, the archive is not compressed and
only requires `zip_version=10`. I'm building libarchive on a machine not
intended for development, so basically there are no optional
dependencies like `libz` available, which is probably why nobody else
has reported this issue.
Tim Kientzle [Tue, 9 Jul 2024 11:55:23 +0000 (04:55 -0700)]
Pax parsing should consistently use the FIRST pathname/linkname (#2264)
Pax introduced new headers that appear _before_ the legacy
headers. So pax archives require earlier properties to
override later ones.
Originally, libarchive handled this by storing the early
headers in memory so that it could do the actual parsing
from back to front. With this scheme, properties from
early headers were parsed last and simply overwrote
properties from later headers.
PR #2127 reduced memory usage by parsing headers in the
order they appear in the file, which requires later headers
to avoid overwriting already-set properties. Apparently, when I made
this change, I did not fully consider how charset translations are
handled on Windows, and so failed to consistently recognize when the
path or linkname properties were actually set. As a result, the legacy
path/link values (which have no charset information) overwrote the pax
path/link values (which are known to be UTF-8), leading to the behavior
observed in #2248. This PR corrects the bug by additionally checking
whether the wide-character path or linkname properties are set.
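The first-value-wins rule, including the Windows wide-character case this PR adds, boils down to something like this sketch (the `ustar_name` variable is hypothetical):
```c
/* Only take the legacy ustar pathname if neither the narrow nor the
 * wide pathname has already been set from a pax header. */
if (archive_entry_pathname(entry) == NULL &&
    archive_entry_pathname_w(entry) == NULL)
    archive_entry_set_pathname(entry, ustar_name);
```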
Related: This bug was exposed by a new test added in #2228
which does a write/read validation to ensure round-trip filename
handling. This was modified in #2248 to avoid tickling the bug above.
I've reverted the change from #2248 since it's no longer necessary.
I have also added some additional validation to this test to
help ensure that the intermediate archive actually is a pax
format that includes the expected path and linkname properties
in the expected places.
Fix 'test_pax_filename_encoding_UTF16_win' by explicitly setting hdrcharset (#2248)
It would seem as though #2127 conflicted with my change #2228.
I previously thought that the writer was recording in the archive that
strings were encoded in UTF-8, but I'm not so sure of that anymore... In
any case, explicitly setting `hdrcharset` on the reader as well is a
reasonable alternative and something we already do.
The RAR5 reader uses a small stack of cached pointers to submit the
rendered data to the caller. In malformed files, it's possible for this
pointer cache to become desynchronized from the memory buffer those
pointers point into, making libarchive crash on an invalid memory access.
In particular, this ensures that we cannot overflow rounding-up
calculations. Recent tar changes put in a lot of sanity limits on the
sizes of particular kinds of data, but the usual behavior in most cases
was to skip over-large values. The skipping behavior required
rounding-up and accumulating values that could potentially overflow
64-bit integers. This adds some coarser checks that fail more directly
when an entry claims to be more than 1 exbibyte (2^60 bytes), avoiding
any possibility of numeric overflow along these paths.
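The coarse check amounts to something like this sketch; the constant name is illustrative, and the error-reporting calls assume the internal reader context:
```c
/* Reject absurd sizes up front so later round-up/accumulate
 * arithmetic cannot overflow a 64-bit integer. */
#define TAR_MAX_ENTRY_SIZE ((int64_t)1 << 60)   /* 1 EiB */

if (entry_size < 0 || entry_size > TAR_MAX_ENTRY_SIZE) {
    archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT,
        "Entry size out of range");
    return (ARCHIVE_FATAL);
}
```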
Tim Kientzle [Sat, 6 Jul 2024 07:45:38 +0000 (00:45 -0700)]
Fix a minor date-parsing bug and fill in missing ISO9660 testing (#2260)
This is somewhat academic, since we don't actually expose any of the
ISO9660 header information that is stored in 17-byte date format, but
inspection revealed an off-by-one error in the parsing here.
This also proved a nice motivation to fill in some verification in our
most basic ISO9660 test case.
archive_entry_perms.3: clarify that you don't need to strdup() [gu]name (#2239)
Currently updating archivemount which does
```c
pwd = getpwuid(st.st_uid);
if(pwd)
archive_entry_set_uname(node->entry, strdup(pwd->pw_name));
grp = getgrgid(st.st_gid);
if(grp)
archive_entry_set_gname(node->entry, strdup(grp->gr_name));
```
and I'm assuming the strdups are actually leaks? The manual is silent on
this.
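The clarified manual text implies the snippet above can simply drop the strdup() calls, since the setters copy the string; a sketch of the same fragment without the leak:
```c
pwd = getpwuid(st.st_uid);
if (pwd)
    archive_entry_set_uname(node->entry, pwd->pw_name);  /* copied internally */
grp = getgrgid(st.st_gid);
if (grp)
    archive_entry_set_gname(node->entry, grp->gr_name);  /* copied internally */
```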
Sam Bingner [Fri, 5 Jul 2024 19:34:43 +0000 (09:34 -1000)]
Fix max path-length metadata writing (#2243)
Previous code added `.XXXXXX` to the end of the filename to write the
mac metadata. This is a problem if the filename is at or near the
filesystem max path length. This reuses the same code used by
create_tempdatafork to ensure that the filename is not too long.
Tim Kientzle [Fri, 5 Jul 2024 10:08:38 +0000 (03:08 -0700)]
Ignore out-of-range gid/uid/size/ino and harden AFIO parsing (#2258)
The fuzzer constructed an AFIO (CPIO variant) archive that had a
ridiculously large ino value, which caused an overflow of a signed
64-bit intermediate.
There are really three issues here:
* The CPIO parser was using a signed int64 as an intermediate type for
parsing numbers in all cases. I've addressed the overflow here by using
a uint64_t in the parser core, but left the resulting values as int64_t.
* The AFIO header parsing had no guards against ridiculously large
values; it now rejects an archive when the ino or size fields (which are
allowed to be up to 16 hex digits long) overflow int64_t to produce a
negative value.
* The archive_entry code would accept negative values for gid/uid/size/ino.
I've altered it so that these fields treat any negative value as zero.
There was one test that actually verified that we could read a field
with size = -1. I've updated that to verify that the resulting size is
zero instead.
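A sketch of the overflow-safe parsing idea (the helper name is hypothetical): accumulate into an unsigned 64-bit value, then reject anything that would not fit in a non-negative int64_t.
```c
#include <stddef.h>
#include <stdint.h>

/* Parse up to 16 hex digits into a non-negative int64_t. The
 * accumulator is unsigned so 16 digits can never overflow it; the
 * final range check rejects values that would be negative when stored
 * as int64_t. Returns 0 on success, -1 on bad input. */
static int
parse_hex16(const char *p, size_t len, int64_t *out)
{
    uint64_t v = 0;
    size_t i;

    if (len > 16)
        return (-1);
    for (i = 0; i < len; i++) {
        int d;
        if (p[i] >= '0' && p[i] <= '9')
            d = p[i] - '0';
        else if (p[i] >= 'a' && p[i] <= 'f')
            d = p[i] - 'a' + 10;
        else if (p[i] >= 'A' && p[i] <= 'F')
            d = p[i] - 'A' + 10;
        else
            return (-1);
        v = (v << 4) | (uint64_t)d;
    }
    if (v > (uint64_t)INT64_MAX)
        return (-1);
    *out = (int64_t)v;
    return (0);
}
```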
Tim Kientzle [Fri, 5 Jul 2024 10:05:41 +0000 (03:05 -0700)]
Don't try to read ridiculously long names (#2259)
The Rar5 reader would read the name size, then read the name, then check
whether the name size was beyond the maximum size allowed. This can
result in a very large memory allocation to read a name. Instead, check
the name size before trying to read the name in order to avoid excessive
allocation.
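The reordering described above is essentially this sketch; the limit constant is illustrative only, and the error-reporting calls assume the internal reader context:
```c
/* Validate the declared name length before reading the name bytes,
 * so a hostile header cannot force a huge allocation. */
#define MAX_NAME_SIZE (64 * 1024)

if (name_size > MAX_NAME_SIZE) {
    archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT,
        "Invalid filename size");
    return (ARCHIVE_FATAL);
}
if ((name = malloc(name_size + 1)) == NULL)
    return (ARCHIVE_FATAL);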
Fix multiple vulnerabilities identified by SAST (#2256)
I went through ~50 findings of SAST reports and identified a few of them
as true positives. I might still have missed some intended uses or some
magic in the code so please provide feedback if you think some of these
shouldn't be applied and why.
Fatima Qarni [Sat, 22 Jun 2024 22:49:53 +0000 (17:49 -0500)]
Checks for null references (#2251)
Microsoft's static analysis tool found some vulnerabilities from
unguarded null references, which I fixed in
[microsoft/cmake](https://github.com/microsoft/cmake). Pushing these
changes upstream so they can be added to
[kitware/cmake](https://github.com/Kitware/CMake).
Duncan Horn [Thu, 20 Jun 2024 21:03:54 +0000 (14:03 -0700)]
Fix gnutar creation with unicode hardlink names on Windows (#2227)
The code currently uses `archive_entry_hardlink` to determine if an
entry is a hardlink; however, on Windows, this call will fail if the
path cannot be represented in the current locale. This instead checks to see
if any entry in the `archive_mstring` is set.
Duncan Horn [Thu, 20 Jun 2024 21:01:47 +0000 (14:01 -0700)]
Fix & optimize string conversion functions for Windows (#2226)
All three parts of this change effectively stem from the same
assumption: most of the code in `archive_string.c` assumes that MBS <->
UTF-8 string conversion can be done directly and efficiently. This is
not quite true on Windows, where conversion looks more like MBS <-> WCS
<-> UTF-8. This results in a few inefficiencies currently present in the
code.
First, if the caller is asking for either the MBS or UTF-8 string, but
it's not currently set on the `archive_mstring`, then on Windows, it's
more efficient to first check if the WCS is set and do the conversion
with that. Otherwise, we'll end up doing a wasteful intermediate step of
converting either the MBS or UTF-8 string to WCS, which we already have.
Second, in the `archive_mstring_update_utf8` function, it's more
efficient on Windows to first convert to WCS and use that result to
convert to MBS, as opposed to the fallback I introduced in a previous
change, which converts UTF-8 to MBS first and disposes of the
intermediate WCS, only to re-calculate it.
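The preferred lookup order on Windows can be sketched like this (field and helper names here are hypothetical, not the actual `archive_mstring` internals):
```c
/* If the caller wants UTF-8 and only the wide string is cached,
 * convert WCS -> UTF-8 directly instead of synthesizing it from the
 * multibyte string, which would itself round-trip through WCS. */
if (!mstr->utf8_set) {
    if (mstr->wcs_set)
        convert_wcs_to_utf8(mstr);      /* WCS -> UTF-8, one step */
    else if (mstr->mbs_set)
        convert_mbs_to_utf8(mstr);      /* MBS -> WCS -> UTF-8 */
}
```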
Duncan Horn [Thu, 20 Jun 2024 03:15:13 +0000 (20:15 -0700)]
Fix issue when skipping first file in 7zip archive that is a multiple of 65536 bytes (#2245)
We noticed an issue where we had an archive that, if you skipped the
first entry and tried to extract the second, you'd get a failure saying
`Truncated 7-Zip file body`. Turns out that this is because the first
file in the archive is a multiple of 65,536 bytes (the size of the
uncompressed buffer) and therefore after `read_stream` skipped all of
the first file, `uncompressed_buffer_bytes_remaining` was set to zero
(because all data was consumed) and then it calls
`get_uncompressed_data` with `minimum` set to zero. This then saw that
`minimum > zip->uncompressed_buffer_bytes_remaining` evaluated to false,
causing us to read zero bytes, which got interpreted as a truncated
archive.
The fix here is simple: we now always call `extract_pack_stream` when
`uncompressed_buffer_bytes_remaining` is zero before exiting the
skipping loop.
Tim Kientzle [Mon, 17 Jun 2024 03:23:11 +0000 (20:23 -0700)]
[cpio test] Dates can be more than 12 bytes, depending on the locale (#2237)
In order to match cpio output, format the reference date with _at least_
12 bytes instead of _exactly_ 12 bytes. This should fix a gratuitous
test failure on certain systems that default to multi-byte locales.
Tim Kientzle [Mon, 17 Jun 2024 03:22:14 +0000 (20:22 -0700)]
Support ISOs with a non-standard PVD layout (#2238)
The CSRG ISOs have a non-standard PVD layout with a 68-byte root
directory record (rather than the 34-byte record required by
ECMA119/ISO9660). I built a test image with this change and modified the
ISO9660 reader to accept it.
While I was working on the bid logic to recognize PVDs, I added a number
of additional correctness checks that should make our bidding a bit more
accurate. In particular, this should more than compensate for the
weakened check of the root directory record size.
Tim Kientzle [Sun, 16 Jun 2024 05:22:12 +0000 (22:22 -0700)]
Parse tar headers incrementally (#2127)
This rebuilds the tar reader to parse all header data incrementally as
it appears in the stream.
This definitively fixes a longstanding issue with unsupported pax
attributes. Libarchive must limit the amount of data that it reads into
memory, and this has caused problems with large unknown attributes. By
scanning iteratively, we can instead identify an attribute by name and
then decide whether to read it into memory or whether to skip it without
reading.
This design also allows us to vary our sanity limits for different pax
attributes (e.g., an attribute that is a single number can be limited to
a few dozen bytes while an attribute holding an ACL is allowed to be a
few hundred kilobytes). This allows us to be a little more resistant to
malicious archives that might try to force allocation of very large
amounts of memory, though there is still work to be done here.
This includes a number of changes to archive_entry processing to allow
us to consistently keep the _first_ appearance of any given value
instead of the original architecture that recursively cached data in
memory in order to effectively process all the data from back-to-front.
Duncan Horn [Sun, 16 Jun 2024 05:20:00 +0000 (22:20 -0700)]
Fix a couple issues with creating PAX archives (#2228)
Note: this is a partial cherry-pick from
https://github.com/libarchive/libarchive/pull/2095, which I'm going to
go through and break into smaller pieces in hopes of getting some things
in while discussion of other things can continue.
There's basically two fixes here:
The first is to check for the presence of the WCS pathname on Windows
before failing since the conversion from WCS -> MBS might fail. Later
execution already handles such paths correctly.
The second is to set the converted link name on the target entry where
relevant. Note that there has been prior discussion on this here:
https://github.com/libarchive/libarchive/pull/2095/files#r1531599325
alice [Sat, 15 Jun 2024 00:26:14 +0000 (02:26 +0200)]
rar: fix UB negation overflow for INT32_MIN address (#2235)
certain rar files seem to have the lowest possible address here, so flip
the argument order to correctly evaluate this instead of invoking UB
(caught via sanitize=undefined)
---
the backtrace looks something like:
```
* frame #0: 0x00007a1e3898727b libarchive.so.13`execute_filter [inlined] execute_filter_e8(filter=<unavailable>, vm=<unavailable>, pos=<unavailable>, e9also=<unavailable>) at archive_read_support_format_rar.c:3640:47
frame #1: 0x00007a1e3898727b libarchive.so.13`execute_filter(a=<unavailable>, filter=0x00007a1e39e2f090, vm=0x00007a1e31b1efd0, pos=<unavailable>) at archive_read_support_format_rar.c:0
frame #2: 0x00007a1e38983ac3 libarchive.so.13`read_data_compressed [inlined] run_filters(a=0x00007a1e34209700) at archive_read_support_format_rar.c:3395:8
frame #3: 0x00007a1e38983a9e libarchive.so.13`read_data_compressed(a=0x00007a1e34209700, buff=0x00007a1e31a01fd8, size=0x00007a1e31a01fd0, offset=0x00007a1e31a01fc0, looper=1) at archive_read_support_format_rar.c:2083:12
frame #4: 0x00007a1e38981b10 libarchive.so.13`archive_read_format_rar_read_data(a=0x00007a1e34209700, buff=0x00007a1e31a01fd8, size=0x00007a1e31a01fd0, offset=0x00007a1e31a01fc0) at archive_read_support_format_rar.c:1130:11
frame #5: 0x00006158bc5d30d3 file-roller`extract_archive_thread(result=0x00007a1e3711e2b0, object=<unavailable>, cancellable=0x00007a1e3870bf20) at fr-archive-libarchive.c:999:17
frame #6: 0x00007a1e39928d6d libgio-2.0.so.0`run_in_thread(job=<unavailable>, c=<unavailable>, _data=0x00007a1e326e9740) at gsimpleasyncresult.c:899:5
frame #7: 0x00007a1e3990614e libgio-2.0.so.0`io_job_thread(task=<unavailable>, source_object=<unavailable>, task_data=0x00007a1e2307fc20, cancellable=<unavailable>) at gioscheduler.c:75:16
frame #8: 0x00007a1e399433bf libgio-2.0.so.0`g_task_thread_pool_thread(thread_data=0x00007a1e35c18ab0, pool_data=<unavailable>) at gtask.c:1583:3
frame #9: 0x00007a1e39db77e8 libglib-2.0.so.0`g_thread_pool_thread_proxy(data=<unavailable>) at gthreadpool.c:336:15
frame #10: 0x00007a1e39db5bfb libglib-2.0.so.0`g_thread_proxy(data=0x00007a1e378147d0) at gthread.c:835:20
frame #11: 0x00007a1e3a0b5c7b ld-musl-x86_64.so.1`start(p=0x00007a1e31a02170) at pthread_create.c:208:17
frame #12: 0x00007a1e3a0b8a8b ld-musl-x86_64.so.1`__clone + 47
```
note the 0xd (decimal 13), which is NegateOverflow in ubsan.
for reference, the totally legal rar file is
https://img.ayaya.dev/05WYGFOcRPN9, and this seems to only crash when
extracted via file-roller (or inside nautilus)
Duncan Horn [Wed, 12 Jun 2024 19:01:40 +0000 (12:01 -0700)]
Update ustar creation sanity check to use WCS path on Windows (#2230)
On Windows, the MBS pathname might be null if the string was set with a
WCS that can't be represented by the current locale. This is handled
properly by the rest of the code, but there's a sanity check that does
not make the proper distinction.
Note: this is a partial cherry-pick from
https://github.com/libarchive/libarchive/pull/2095, which I'm going to
go through and break into smaller pieces in hopes of getting some things
in while discussion of other things can continue.
Duncan Horn [Wed, 12 Jun 2024 19:00:24 +0000 (12:00 -0700)]
Add unicode test for creating zip files on Windows (#2231)
There's no bug fix here - this just adds a test to verify that zip
creation when using the _w functions works as expected on Windows.
Note: this is a partial cherry-pick from
https://github.com/libarchive/libarchive/pull/2095, which I'm going to
go through and break into smaller pieces in hopes of getting some things
in while discussion of other things can continue.
Mrmaxmeier [Wed, 12 Jun 2024 18:57:20 +0000 (20:57 +0200)]
Fuzzing: Expose `DONT_FAIL_ON_CRC_ERROR` as a CMake option and honor it in the rar5 decoder (#2229)
Hey,
the fuzzing infrastructure over at OSSFuzz builds libarchive with the
CMake option `-DDONT_FAIL_ON_CRC_ERROR=1`.
https://github.com/google/oss-fuzz/blob/e4643b64b3af4932bff23bb87afdfbac2a301969/projects/libarchive/build.sh#L35
This, unfortunately, does not do anything since it's never been defined
as an option.
Building the fuzzers with CRC checks disabled should improve fuzzing
efficacy a bunch.
Lukas Javorsky [Tue, 11 Jun 2024 04:41:25 +0000 (06:41 +0200)]
Use calloc instead of malloc to clear the memory from leftovers (#2207)
This ensures that the buffer is properly initialized and does not
contain any leftover data from previous operations. It is used later in
the `archive_entry_copy_hardlink_l` function call and could be
uninitialized.
Duncan Horn [Tue, 11 Jun 2024 04:23:13 +0000 (21:23 -0700)]
Update archive_entry_link_resolver to copy the "wide" pathname for hardlinks on Windows (#2225)
On Windows, if you are using `archive_entry_link_resolver` and give it
an entry that links to a past entry whose pathname was set using a "wide"
string that cannot be represented by the current locale (i.e. WCS -> MBS
conversion fails), this code will crash due to a null pointer read. This
updates to use the `_w` function instead on Windows.
Note: this is a partial cherry-pick from
https://github.com/libarchive/libarchive/pull/2095, which I'm going to
go through and break into smaller pieces in hopes of getting some things
in while discussion of other things can continue.
Sevan Janiyan [Tue, 11 Jun 2024 03:42:13 +0000 (04:42 +0100)]
Always use our supplied la_queue.h (#2222)
On legacy systems, the OS-supplied `sys/queue.h` may lack the required
macros, so to avoid having to verify whether that version of queue.h is
usable, opt to always use `la_queue.h`, which will match expectations.
Allows libarchive to build on legacy Darwin where `STAILQ_FOREACH` would
be missing from `sys/queue.h`.