Tim Kientzle [Sun, 2 Jan 2011 20:03:04 +0000 (15:03 -0500)]
Extend tar/test/test_option_r to verify the -r bug that Michihiro found.
This turns out to be a bug in the core archive_read I/O not correctly
tracking the current file position under some circumstances.
Extend libarchive/test/test_read_position to verify file position
tracking both with and without a registered skip function.
Fix the bug: when libarchive fell back to regular reads to satisfy an
internal skip request, it failed to count those bytes toward the
current file position.
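As an illustration only (not the actual patch), a minimal C sketch of a
read-and-discard skip fallback with the missing position accounting;
the reader structure and its field names here are assumptions:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical reader state; "position" stands in for the internal
     * file-position counter that the bug failed to update. */
    struct reader {
        FILE   *f;
        int64_t position;
    };

    /* Fall back to read-and-discard when no skip callback is available,
     * counting every byte actually read toward the current position. */
    static int64_t
    fallback_skip(struct reader *r, int64_t request)
    {
        char buf[4096];
        int64_t total = 0;

        while (total < request) {
            size_t want = sizeof(buf);
            if ((int64_t)want > request - total)
                want = (size_t)(request - total);
            size_t got = fread(buf, 1, want, r->f);
            if (got == 0)
                break;
            total += (int64_t)got;
            r->position += (int64_t)got; /* the accounting the fix adds */
        }
        return total;
    }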
Tim Kientzle [Sun, 2 Jan 2011 19:16:59 +0000 (14:16 -0500)]
Extend the test of bsdtar -r to exercise larger file contents.
This does not reproduce the problem that Michihiro recently found,
but it's an improvement in the test, so it's worth checking in.
Tim Kientzle [Fri, 31 Dec 2010 17:57:48 +0000 (12:57 -0500)]
Support --numeric-owner for tar extraction as well as archive creation.
This just involves reworking tar's extraction routines slightly to use
archive_read_extract2() with a custom-configured archive_write_disk
object, instead of the more convenient archive_read_extract()
interface that automatically builds a standard archive_write_disk
object.
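A minimal sketch of that pattern using the public libarchive calls
named above; whether --numeric-owner is implemented by skipping the
standard name lookup is an assumption for illustration, not a statement
about the actual change:

    #include <archive.h>
    #include <archive_entry.h>

    static int
    extract_entry(struct archive *a, struct archive_entry *entry,
        int flags, int numeric_owner)
    {
        struct archive *disk;
        int r;

        disk = archive_write_disk_new();
        if (disk == NULL)
            return (ARCHIVE_FATAL);
        archive_write_disk_set_options(disk, flags);
        if (!numeric_owner) {
            /* Translate uname/gname to local ids; omitting this leaves
             * the numeric ids stored in the archive in effect. */
            archive_write_disk_set_standard_lookup(disk);
        }
        r = archive_read_extract2(a, entry, disk);
        archive_write_free(disk);
        return (r);
    }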
Currently, this code supports STORE and MSZIP.
This reader always decompresses the recorded data even if
archive_read_data() is never called, because each block (the
uncompressed data is split into 32768-byte blocks) needs the
uncompressed data of the previous block in order to be decompressed,
and the format reader in libarchive has no way of knowing whether the
calling application (such as bsdtar) is merely listing filenames or
looking for one specific file to extract.
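A heavily simplified sketch of why the blocks chain together, assuming
a zlib-based MSZIP path: each block is its own raw deflate stream, but
its history window is the previous block's uncompressed output, so that
output has to be fed back in as the dictionary. Per-block framing (the
block signature, sizes, error handling) is omitted here:

    #include <zlib.h>
    #include <stdint.h>

    /* zs is assumed to have been set up for raw inflate
     * (inflateInit2 with negative windowBits). */
    static int
    mszip_inflate_block(z_stream *zs, unsigned char *in, size_t in_len,
        unsigned char *out, size_t out_len,
        const unsigned char *prev, size_t prev_len)
    {
        if (inflateReset(zs) != Z_OK)
            return (-1);
        /* Prime the window with the previous block's uncompressed
         * data; without it this block cannot be decoded. */
        if (prev_len > 0 &&
            inflateSetDictionary(zs, prev, (uInt)prev_len) != Z_OK)
            return (-1);
        zs->next_in = in;
        zs->avail_in = (uInt)in_len;
        zs->next_out = out;
        zs->avail_out = (uInt)out_len;
        return (inflate(zs, Z_FINISH) == Z_STREAM_END) ? 0 : -1;
    }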
Tim Kientzle [Sat, 11 Dec 2010 18:56:34 +0000 (13:56 -0500)]
Fix a test that breaks on systems where GID 17 is actually
used. Leave comments about some additional cases that would
be nice to test if someone can figure out a good way to do
so (a way that won't break on Windows, for example).
Fix issue 119.
Change the file-location check: instead of requiring that a file's
location not exceed the volume block count, require that a file's
content not exceed the volume block count (the end of the ISO image).
The new check is better than the previous one even in cases where the
issue did not occur.
When reading an ISO image generated by an older version of the mkisofs
utility, a file's location can point at the end of the ISO image if its
size is zero and it is the last file in the image, so the location
value may equal the total number of blocks in the image.
Tim Kientzle [Tue, 7 Dec 2010 05:02:31 +0000 (00:02 -0500)]
Don't try to copy entry data if the entry has zero size.
In particular, copying data for such entries caused "Cannot write to
empty file" errors when extracting GNU tar extended 'D' directory
entries.
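A sketch of the guard described above, using public libarchive calls;
the error handling is simplified and not taken from the actual change:

    #include <archive.h>
    #include <archive_entry.h>
    #include <stdint.h>

    static int
    copy_entry_data(struct archive *reader, struct archive *writer,
        struct archive_entry *entry)
    {
        const void *buff;
        size_t size;
        int64_t offset;
        int r;

        /* Zero-size entries (e.g. GNU tar 'D' directory entries) have
         * no data to copy; skip the loop entirely. */
        if (archive_entry_size(entry) <= 0)
            return (ARCHIVE_OK);
        for (;;) {
            r = archive_read_data_block(reader, &buff, &size, &offset);
            if (r == ARCHIVE_EOF)
                return (ARCHIVE_OK);
            if (r < ARCHIVE_OK)
                return (r);
            r = (int)archive_write_data_block(writer, buff, size, offset);
            if (r < ARCHIVE_OK)
                return (r);
        }
    }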
Tim Kientzle [Sun, 5 Dec 2010 20:50:03 +0000 (15:50 -0500)]
First part of the NFS4 ACL support.
This renames a few things to acknowledge that there really
is more than one kind of ACL in the world and extends
the basic ACL storage to support the NFS4/NTFS ACL
mode bits. The ACL storage has also gained some
error checks to ensure that a single ACL does not have
both NFS4/NTFS and POSIX.1e ACEs.
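A sketch of the kind of consistency check described above; the
"existing_types" bookkeeping is an assumption about the internal
storage, and only the ARCHIVE_ENTRY_ACL_TYPE_* masks come from
archive_entry.h:

    #include <archive_entry.h>

    /* Return nonzero if adding an entry of new_type would mix
     * POSIX.1e and NFS4/NTFS entries within a single ACL. */
    static int
    acl_type_conflicts(int existing_types, int new_type)
    {
        int have_nfs4    = existing_types & ARCHIVE_ENTRY_ACL_TYPE_NFS4;
        int have_posix1e = existing_types & ARCHIVE_ENTRY_ACL_TYPE_POSIX1E;
        int add_nfs4     = new_type & ARCHIVE_ENTRY_ACL_TYPE_NFS4;
        int add_posix1e  = new_type & ARCHIVE_ENTRY_ACL_TYPE_POSIX1E;

        return ((have_nfs4 && add_posix1e) || (have_posix1e && add_nfs4));
    }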
Tim Kientzle [Sun, 5 Dec 2010 20:30:23 +0000 (15:30 -0500)]
Merge r2811 from trunk: Don't try to verify that compression-level=0
produces larger results than the default compression, since this isn't
true for all versions of liblzma.
Tim Kientzle [Sun, 5 Dec 2010 20:28:34 +0000 (15:28 -0500)]
Don't assert that compression-level=0 produces larger file
than the default compression, since the actual result varies
depending on the version of liblzma.
Tim Kientzle [Mon, 29 Nov 2010 02:50:17 +0000 (21:50 -0500)]
Restore ACLs after calls to chmod().
In particular, current versions of ZFS on Solaris and
FreeBSD erase ACLs on each call to chmod, so ACL
restore has to follow chmod calls on those platforms.
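A sketch of that ordering constraint using the POSIX.1e draft ACL calls
available on FreeBSD; this illustrates the sequencing only, not the
libarchive code itself:

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/acl.h>

    static int
    restore_mode_and_acl(const char *path, mode_t mode, acl_t acl)
    {
        /* chmod() first: on ZFS it may erase any ACL already set. */
        if (chmod(path, mode) != 0)
            return (-1);
        /* Apply the ACL last so chmod() cannot clobber it. */
        if (acl != NULL && acl_set_file(path, ACL_TYPE_ACCESS, acl) != 0)
            return (-1);
        return (0);
    }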
Tim Kientzle [Mon, 29 Nov 2010 01:35:18 +0000 (20:35 -0500)]
Fix an annoying build problem where you could
configure with cmake, make some changes, re-configure,
and the build would consistently break with missing crypto
libraries.
The solution is to separate testing for available
implementations (the try-compiles happen only if
the decision hasn't been cached) from deciding which
additional libraries to add to the build (which happens
regardless).
Tim Kientzle [Mon, 29 Nov 2010 00:10:14 +0000 (19:10 -0500)]
Add explicit arguments to set_acls() for the file descriptor and entry.
This is a step towards refactoring directory ACLs to be written
during the fixup pass.
Tim Kientzle [Fri, 19 Nov 2010 06:49:07 +0000 (01:49 -0500)]
Big string overhaul:
* Remove __ from names (ISO C reserves names prefixed with __)
* Remove the gratuitous macro wrappers
* Remove a couple of unused functions
* Try to simplify some of the implementations a bit more.
* Move the "archive entry string" (aes) functions into archive_string
as "archive_multistring" so these can be used outside of archive_entry
Tim Kientzle [Fri, 12 Nov 2010 05:50:55 +0000 (00:50 -0500)]
Add %ls and %S to archive_string_sprintf() for formatting
wide-character strings, and use them to fix a build error on Windows
when putting wchar_t paths into error messages.
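A usage sketch only: archive_set_error() is the public entry point that
ends up in this formatter, and the surrounding context (the struct
archive pointer, the wide path) is assumed:

    #include <archive.h>
    #include <wchar.h>

    static void
    report_open_failure(struct archive *a, const wchar_t *wpath, int err)
    {
        /* %ls lets a wchar_t path appear directly in the message,
         * which is what the Windows build needed. */
        archive_set_error(a, err, "Can't open file %ls", wpath);
    }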
Tim Kientzle [Wed, 10 Nov 2010 05:59:05 +0000 (00:59 -0500)]
Issue 113: When writing headers, return ARCHIVE_FAILED
on various problems:
* Missing name
* Missing size (except for hardlinks)
* Missing filetype
* Size too large for format
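A sketch of the sort of validation described above; the exact messages,
the max_size parameter, and where this sits inside the format writers
are assumptions (the missing-size-except-for-hardlinks case is
omitted):

    #include <archive.h>
    #include <archive_entry.h>
    #include <stdint.h>

    static int
    validate_entry_for_write(struct archive *a, struct archive_entry *e,
        int64_t max_size)
    {
        if (archive_entry_pathname(e) == NULL) {
            archive_set_error(a, ARCHIVE_ERRNO_MISC,
                "Entry has no pathname");
            return (ARCHIVE_FAILED);
        }
        if (archive_entry_filetype(e) == 0) {
            archive_set_error(a, ARCHIVE_ERRNO_MISC,
                "Entry has no filetype");
            return (ARCHIVE_FAILED);
        }
        if (archive_entry_size(e) > max_size) {
            archive_set_error(a, ARCHIVE_ERRNO_MISC,
                "Entry size is too large for this format");
            return (ARCHIVE_FAILED);
        }
        return (ARCHIVE_OK);
    }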
Tim Kientzle [Sun, 7 Nov 2010 02:14:21 +0000 (22:14 -0400)]
If we see junk when we're expecting a 'PK' signature
block, scan forward to see if we can find a suitable
signature.
This is necessary to read archives that have been modified
by some Zip utilities that update entries in-place without
compacting the entire archive.
Of course, this would be very natural if libarchive
used the Central directory. But even when libarchive
does support the Central directory, this kind of logic
will still be useful for reading Zip archives in streaming
mode.
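A sketch of such a forward scan over bytes already in the read buffer;
the exact set of signatures the real reader accepts is an assumption:

    #include <stddef.h>
    #include <string.h>

    /* Find the next plausible "PK" signature (local-file header
     * "PK\3\4" or central-directory header "PK\1\2") in a window of
     * already-read bytes; returns NULL if none is present. */
    static const unsigned char *
    find_pk_signature(const unsigned char *p, size_t avail)
    {
        while (avail >= 4) {
            const unsigned char *q = memchr(p, 'P', avail);
            if (q == NULL)
                return (NULL);
            avail -= (size_t)(q - p);
            p = q;
            if (avail < 4)
                return (NULL);
            if (p[1] == 'K' &&
                ((p[2] == 3 && p[3] == 4) || (p[2] == 1 && p[3] == 2)))
                return (p);
            p++;
            avail--;
        }
        return (NULL);
    }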
Tim Kientzle [Sun, 31 Oct 2010 06:35:10 +0000 (02:35 -0400)]
Return NULL if there is no error message.
In particular, we should start filling in assertions in lots
of tests to verify that textual error messages are getting
generated on errors.
Tim Kientzle [Sun, 31 Oct 2010 04:44:03 +0000 (00:44 -0400)]
Reconcile the test harnesses across libarchive, tar, and cpio.
Remove almost all of the varargs capabilities from the various
assertion helpers, since they complicate the code and are
hardly ever used.
Tim Kientzle [Sat, 30 Oct 2010 07:38:53 +0000 (03:38 -0400)]
Handle umask a little more carefully: Instead of setting umask to
zero, then restoring a file, then restoring umask, just query
the umask and adjust the file restore operations to account for it.
Go ahead and query the umask at new() time as well.
This opens the possibility of removing the umask query from
write_disk_header if you want to make libarchive behave more
nicely with threads.
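A sketch of the query-once approach: there is no pure "read" interface
for umask, so the query sets and immediately restores it, and the
stored value is then masked out of the modes applied during restore.
How the mask is actually applied inside write_disk is an assumption
here:

    #include <sys/types.h>
    #include <sys/stat.h>

    /* Query the process umask once (e.g. at new() time) instead of
     * flipping it around every file restore. */
    static mode_t
    query_umask(void)
    {
        mode_t um = umask(0);
        umask(um);              /* restore immediately */
        return (um);
    }

    /* Later, when creating files, account for it explicitly, e.g.:
     *     final_mode = requested_mode & ~stored_umask;              */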
Brian Harring [Mon, 27 Sep 2010 15:42:07 +0000 (11:42 -0400)]
Fix a bug introduced in the last set of cleanups to this file: when
setting the unconsumed count, don't invoke consume ourselves (this
fixes the readahead/consume pairing in this case).
Brian Harring [Sat, 25 Sep 2010 05:43:07 +0000 (01:43 -0400)]
Finish the conversion over to tar_flush_unconsumed and convert the
ad-hoc readahead/consume pairing. I haven't been able to pinpoint why
test_read_large, test_read_truncated, and test_read_data_large fail
when poisoning is enabled: either the readahead/consume pairing is
still slightly off, or (what I suspect) there is a dangling pointer
that just happens to work currently. I will root this out under
libtransform, where I can more easily invalidate the underlying space
(by literally freeing it) and hopefully smoke it out via a segfault.
Brian Harring [Sat, 25 Sep 2010 04:36:27 +0000 (00:36 -0400)]
More work to pair tar's readahead/consume; this still isn't perfect
(the disabled poison code within tar_flush_unconsumed confirms it),
although it is pretty close.
Brian Harring [Thu, 23 Sep 2010 11:40:18 +0000 (07:40 -0400)]
Replace an ad-hoc consume/skip invocation with a proper skip/consume
invocation (primarily relevant because the underlying transforms/source
may be able to shift to an lseek; unlikely, but it simplifies the code
a bit). As for updating the padding when the seek returns less than was
requested: this is done purely to preserve the existing behaviour; the
ARCHIVE_FATAL return should make it a no-op, but it's safer this way.
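For reference, a sketch of the read-ahead/consume pairing these commits
are converging on, using libarchive's internal helpers
(__archive_read_ahead / __archive_read_consume) purely as illustration;
the surrounding tar-reader state is assumed:

    #include <sys/types.h>
    #include "archive_read_private.h"

    static int
    read_fixed_block(struct archive_read *a, const void **block,
        size_t want)
    {
        ssize_t avail;

        /* Peek at the bytes without consuming them ... */
        *block = __archive_read_ahead(a, want, &avail);
        if (*block == NULL)
            return (ARCHIVE_FATAL);
        /* ... and consume them only once they have been fully
         * processed, or record them as "unconsumed" for a later
         * flush. */
        __archive_read_consume(a, want);
        return (ARCHIVE_OK);
    }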