Tim Kientzle [Mon, 1 Mar 2010 05:36:29 +0000 (00:36 -0500)]
Open a door to changing the current abort-on-state-failure behavior:
* Change __archive_check_magic to return a status code.
* Change callers to use the archive_check_magic() wrapper macro,
which calls __archive_check_magic and returns immediately
if there's an ARCHIVE_FATAL status.
* Update a bunch of API calls to actually do magic state checks.
I've also changed __archive_check_magic around a little bit:
* Magic number checks still call abort().
* State failures still call abort().
* Starting with libarchive 3.0, state failures will return ARCHIVE_FATAL.
Tim Kientzle [Sat, 27 Feb 2010 20:11:42 +0000 (15:11 -0500)]
Minor fixups for the raw handler: Use "raw" as the name
for consistency, set filetype/perm to something that
would allow this entry to be reasonably extracted.
Tim Kientzle [Thu, 25 Feb 2010 17:03:28 +0000 (12:03 -0500)]
archive_write_disk uses a series of chdir() operations to shorten path
arguments to PATH_MAX. If there is no PATH_MAX definition (as on
HURD), just skip this.
POSIX does provide a more complex way to deal with this concern using
fpathconf() to query the maximum relative pathname starting in any particular
directory. Someday, this code should probably be augmented to use
that mechanism.
Tim Kientzle [Thu, 25 Feb 2010 17:00:28 +0000 (12:00 -0500)]
Use st_size to size the buffer for reading a symbolic link value,
instead of using the arbitrary (and sometimes non-existent) PATH_MAX
variable. This is part of a fix for building on HURD.
Tim Kientzle [Thu, 25 Feb 2010 16:58:00 +0000 (11:58 -0500)]
Set archive_error_number to zero here. I'm a little uneasy about
this, as there are apparently libarchive users that abuse archive_errno()

and this change is likely to mask bugs in such software.
Tim Kientzle [Tue, 23 Feb 2010 16:18:00 +0000 (11:18 -0500)]
Oops. Forgot to initialize the is_disk_like variable. While I'm
here, make the block-size selection for disks aware of the user's
request. Users should be able to ask for larger block sizes.
Add description of archive_entry_perm/archive_entry_set_perm and
the interaction with archive_entry_set_mode. Move the description of
the latter to archive_entry_stat.3, where archive_entry_filetype and co
are.
Tim Kientzle [Mon, 22 Feb 2010 00:10:53 +0000 (19:10 -0500)]
Rework the file handling here to explicitly probe the type of input
we're using and use that to determine an explicit I/O strategy. This
was largely inspired by an email exchange with Duane Hesser, who
clarified some of the issues involved in doing high-quality tape
handling. I think the approach here will make it much easier to
provide optimized I/O strategies for tape and sockets.
Because of these changes, reading the directory of an ISO image stored
on a raw device (via "tar tvf /dev/cd0", for example) is about 100x
faster due to a combination of better detection of "disk-like" devices
and a more suitable strategy for handling forward skip requests.
Extracting tar archives stored on one disk drive onto a physically
separate drive should also be significantly faster because we
now do block-size cheating on disk-like devices.
Tim Kientzle [Sun, 21 Feb 2010 23:52:52 +0000 (18:52 -0500)]
Fill in archive_entry_perm() as the read counterpart to
archive_entry_set_perm(). I inadvertently used this
in the Mac copyfile() support without realizing that it
hadn't actually been implemented.
Tim Kientzle [Sun, 21 Feb 2010 20:33:51 +0000 (15:33 -0500)]
Prepare for the 3.0 ABI by switching a bunch of uses of off_t, dev_t,
ino_t, uid_t, and gid_t to use int64_t instead. These are all
conditional on ARCHIVE_VERSION_NUMBER >= 3000000 so we still have the
option of cutting a 2.9 release with the old ABI.
Tim Kientzle [Sun, 21 Feb 2010 08:25:42 +0000 (03:25 -0500)]
The only place blocking is really needed is just before calling
the client write functions. So I've moved all of the blocking
code (that used to be duplicated in every compression filter)
into archive_write.c in the code that wraps the client callbacks.
As a result, add_filter_none is a true no-op.
Simplify the line reader:
- Do not allocate a buffer in advance, it will be reallocated on the
first round anyway.
- Allocate one more byte to allow always terminating the buffer.
- Use strcspn to compute the end of line. This slightly changes the
behavior for NUL in text lines as they are no longer truncated.
Provide a sane default strategy for the various formats.
zip and ar don't do hardlinks, so work like old-cpio.
shar is like tar.
Default to old-cpio as fallback as it is the least problematic of the
three options.
Tim Kientzle [Sat, 20 Feb 2010 05:53:11 +0000 (00:53 -0500)]
Stackable write filter support. This ended up touching an awful lot
of files. But, the old API is supported almost entirely unchanged, which
I wasn't certain would be possible.
Big changes:
* You can add more than one write filter by using
archive_write_add_filter_{bzip2,compress,gzip,lzma,xz}.
This will be more interesting when we have uuencode, RPM, encryption.
* The old archive_write_set_compression_XXXX are shorthands for
"remove all the current filters and add this one." They're
deprecated and scheduled to be removed in libarchive 4.0.
* The internal API and life cycle for write filters has been
rationalized: create, set options, open, write, close, free.
* New utility functions provide information about each filter
when there's more than one: code, name, and number of bytes processed.
* Old archive_bytes_compressed(), etc, are implemented in terms of
the more generic new functions.
* The read side was generalized to also support the new utility
functions.
In particular, the write filters are much simpler since each
one doesn't have to deal with blocking. In this version, there's
still a "write_add_filter_none" that handles blocking, but I
think I'll soon fold that down into the client wrapper and
add_filter_none will become a no-op. I think this also gets
us a big step closer to multi-volume support on the write side.
Tim Kientzle [Sat, 20 Feb 2010 05:32:24 +0000 (00:32 -0500)]
Returning ARCHIVE_WARN when someone tries to write past the declared
file size seems entirely reasonable. I had thought about changing
this in 3.0 but have decided against it.
Tim Kientzle [Wed, 17 Feb 2010 05:54:50 +0000 (00:54 -0500)]
Modernize this test. Add additional assertions to verify that
archive_position_compressed()
== archive_position_uncompressed()
== number of bytes actually written
when we didn't overflow the buffer. These may not match when
the buffer does overflow because some writes down the pipeline
will fail.