Dave Chinner [Thu, 1 May 2014 23:30:39 +0000 (09:30 +1000)]
db: don't claim unchecked CRCs are correct
Currently xfs_db will claim the CRC on a structure is correct if the
buffer is not marked with an error. However, buffers may have been
read without a verifier, and hence have not had their CRCs
validated. In this case, we should report "unchecked" rather than
"correct". For example:
xfs_db> fsb 0x6003f
xfs_db> type dir3
xfs_db> p
dhdr.hdr.magic = 0x58444433
dhdr.hdr.crc = 0x2d0f9c9d (unchecked)
....
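In code, the reported state can be derived from the buffer state
roughly like this (a sketch; the LIBXFS_B_UNCHECKED flag name is an
assumption for "read without a verifier"):

/* sketch: pick the CRC annotation for structured output */
static const char *
crc_status(struct xfs_buf *bp)
{
        if (bp->b_flags & LIBXFS_B_UNCHECKED)   /* assumed flag */
                return "unchecked";
        if (bp->b_error == EFSBADCRC)
                return "bad";
        return "correct";
}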
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Mark Tinguely [Mon, 14 Apr 2014 06:15:05 +0000 (16:15 +1000)]
logprint: handle split EFI entry
xfs_logprint does not correctly handle EFI entries that
are split across two log buffers. xfs_efi_copy_format()
falsely interprets the truncated size of the split entry
as being a corrupt entry.
If the first log entry has enough information, namely the
number of extents in the entry and the identifier, then
display this information and a warning that this entry is
truncated. Otherwise, if there is not enough information in
the first log buffer, then print a message that the EFI decode
was not possible. These messages are similar to split inode
entries.
Example of a continued entry:
Oper (336): tid: f214bdb len: 44 clientid: TRANS flags: CONTINUE
EFI: #regs: 1 num_extents: 2 id: 0xffff880804f63900
EFI free extent data skipped (CONTINUE set, no space)
Reported-by: Michael L. Semon <mlsemon35@gmail.com> Signed-off-by: Mark Tinguely <tinguely@sgi.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:13:58 +0000 (16:13 +1000)]
libxfs: remove never-read "offset" assignment in readbufr_map & writebufr
libxfs_readbufr_map() & libxfs_writebufr() iterate
over bp->b_map[] and do IO on each chunk. The loops start
out correctly, getting the offset from bm_bn and the
length from bm_len. After the IO it correctly
advances the target buffer pointer by len, but then
inexplicably advances "offset" by len as well. The
whole point of this exercise is to handle discontiguous
ranges - marching offset along by length of IO done
is incorrect.
Thankfully offset is immediately reset to the proper
value again at the top of the loop for the next range,
so this is harmless, other than being confusing.
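The loop shape being described is roughly this (field names from
libxfs; the commented-out line is the redundant assignment removed):

for (i = 0; i < bp->b_nmaps; i++) {
        off64_t offset = LIBXFS_BBTOOFF64(bp->b_map[i].bm_bn);
        int     len    = BBTOB(bp->b_map[i].bm_len);

        pread(fd, buf, len, offset);    /* or pwrite() in writebufr */
        buf += len;
        /* offset += len;  never read again - removed */
}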
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Mark Tinguely <tinguely@sgi.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:13:54 +0000 (16:13 +1000)]
xfs_io: fix random pread/pwrite to honor offset
xfs_io's pread & pwrite claim to support a random IO mode
where it will do random IOs between offset & offset+len.
However, offset was ignored, and we did the IOs between 0
and len instead.
Clang caught this by pointing out that the calculated/normalized
"offset" variable was never read.
(NB: If the range is larger than RAND_MAX, these functions don't
work, but that's always been true, so I'll leave it for another
day...)
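The corrected calculation is, in essence (variable names
illustrative):

/* random IO within [offset, offset + range), not [0, range) */
off64_t off = offset + (random() % range);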
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:13:50 +0000 (16:13 +1000)]
xfsprogs: remove write-only assignments
There are many instances where variable assignments are made,
but never read (or are re-assigned before they are read).
The Clang static analyzer finds these.
Here's a chunk of what I think are trivial removals of such
assignments; other detections point to more serious problems
(or are shared w/ kernel code so should probably be fixed there
first).
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:13:44 +0000 (16:13 +1000)]
mkfs: catch unknown format in protofile parsing
As the code stands today we can't get an unknown format in the
last case statement, but Coverity warns that if we ever do, we'll
use an uninitialized "ip" in the call to libxfs_trans_log_inode().
Adding a default: case to catch unknown formats is defensive and
makes the checker happy.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:13:42 +0000 (16:13 +1000)]
xfs_repair: address never-true tests in repair/bmap.c on 64 bit
The test "if (new_naexts > BLKMAP_NEXTS_MAX)" is never true
on a 64-bit platform; new_naexts is an int, and BLKMAP_NEXTS_MAX
is INT_MAX. So just wrap the whole thing in the #ifdef.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:13:39 +0000 (16:13 +1000)]
xfs_quota: remove impossible tests in printpath
printpath() had some cut & paste tests of "c" - but
nothing had set it yet other than c=0, so testing it
is pointless. Just remove tests for non-zero "c"
until we might have set it to something interesting.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:13:05 +0000 (16:13 +1000)]
libxfs: free resources in libxfs_alloc_file_space error paths
The bmap freelist & transaction pointer weren't
being freed in libxfs_alloc_file_space error paths;
more or less copy the error handling that exists
in kernelspace to resolve this.
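The resulting error paths follow the shape of the kernel's
xfs_alloc_file_space() unwind (a sketch, not the exact diff):

error0: /* unwind the bmap freelist first */
        xfs_bmap_cancel(&free_list);
error1: /* then cancel the dirty transaction */
        xfs_trans_cancel(tp, XFS_TRANS_RELEASE_LOG_RES | XFS_TRANS_ABORT);
        return error;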
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:12:43 +0000 (16:12 +1000)]
xfs_quota: fix memory leak in quota_group_type() error path
quota_group_type has some rather contorted logic that's
been around since 2005.
In the (!name) case, if any of the 3 calls setting up ngroups fails,
we fall back to using just one group.
However, if it's the getgroups() call that fails, we overwrite
the allocated gid ptr with &gid, thus leaking that allocated
memory. Worse, we set "dofree" to 1, and so will free the non-allocated
local var gid. And that last else case is redundant; if we get there,
gids is guaranteed to be non-null.
Refactor it a bit to be more clear (I hope) and correct.
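A sketch of the corrected (!name) fallback (names illustrative):

gids = malloc(ngroups * sizeof(gid_t));
if (!gids || getgroups(ngroups, gids) < 0) {
        free(gids);             /* free(NULL) is a safe no-op */
        gids = &gid;            /* fall back to one group... */
        ngroups = 1;
        dofree = 0;             /* ...which must not be freed */
} else {
        dofree = 1;
}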
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 14 Apr 2014 06:12:03 +0000 (16:12 +1000)]
xfs_fsr: refactor fsrall_cleanup
fsrall_cleanup leaked an fd in the non-timeout
case - but the logic was weird and tortured, refactor
it to make more sense and fix the fd leak as well.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Tue, 8 Apr 2014 08:56:59 +0000 (18:56 +1000)]
xfs_repair: fix prefetch queue waiting
97b1fcf xfs_repair: fix array overrun in do_inode_prefetch
The thread creation loop has 2 ways to exit; either via
the loop counter based on thread_count, or the break statement
if we've started enough workers to cover all AGs.
Whether or not the loop counter "i" reflects the number of
threads started depends on whether or not we exited via the
break.
The above commit prevented us from indexing off the end
of the queues[] array if we actually advanced "i" all the
way to thread_count, but in the case where we break, "i"
is one *less* than the nr of threads started, so we don't
wait for completion of all threads, and all hell breaks
loose in phase 5.
Just stop with the cleverness of re-using the loop counter -
instead, explicitly count threads that we start, and then use
that counter to wait for each worker to complete.
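In outline (helper names illustrative):

int started = 0;

for (i = 0; i < thread_count && agno < glob_agcount; i++) {
        queues[i] = start_inode_prefetch(agno); /* illustrative call */
        agno += stride;
        started++;
}
for (i = 0; i < started; i++)   /* not thread_count, not i - 1 */
        wait_for_inode_prefetch(queues[i]);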
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Mark Tinguely [Tue, 8 Apr 2014 08:56:56 +0000 (18:56 +1000)]
xfsprogs: fix directory hash ordering bug
Commit f5ea1100 ("xfs: add CRCs to dir2/da node blocks") introduced
in 3.10 incorrectly converted the btree hash index array pointer in
xfs_da3_fixhashpath(). It resulted in the current hash always
being compared against the first entry in the btree rather than the
current block index into the btree block's hash entry array. As a
result, it was comparing the wrong hashes, and so could misorder the
entries in the btree.
For most cases, this doesn't cause any problems as it requires hash
collisions to expose the ordering problem. However, when there are
hash collisions within a directory there is a very good probability
that the entries will be ordered incorrectly and that actually
matters when duplicate hashes are placed into or removed from the
btree block hash entry array.
This bug results in an on-disk directory corruption and that results
in directory verifier functions throwing corruption warnings into
the logs. While no data or directory entries are lost, access to
them may be compromised, and attempts to remove entries from a
directory that has suffered from this corruption may result in a
filesystem shutdown. xfs_repair will fix the directory hash
ordering without data loss occurring.
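The kernel fix in xfs_da3_fixhashpath() is one line, in essence
(hedged reconstruction):

btree = dp->d_ops->node_tree_p(node);
/* compare the hash at the current block index, not entry 0 */
if (be32_to_cpu(btree[blk->index].hashval) == lasthash)
        break;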
Eric Sandeen [Tue, 8 Apr 2014 08:48:22 +0000 (18:48 +1000)]
xfs_db: hide debug bbmap output
Building most of xfsprogs with DEBUG enables extra
checks, asserts, etc, but this bunch of printfs was
extra output that's not generally helpful for most
people's runtime experience - and it breaks xfs/290
with all the noise.
I assume it's for actual debugging use, and not
generally useful, so bury it a bit deeper under
its own #ifdef.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Tue, 8 Apr 2014 08:47:35 +0000 (18:47 +1000)]
repair: ensure that unused superblock fields are zeroed
When we grab a superblock off disk via get_sb(), we don't know what
the in-memory superblock we are filling out contained. We need to
ensure that the entire structure is returned in an initialised
state regardless of which fields libxfs_sb_from_disk() populates
from disk. In this case, it doesn't populate the sb_crc field,
and so uninitialised values can escape through to disk on v4
filesystems because of this. This causes xfs/031 to fail on v4
filesystems.
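The get_sb() fix is, in essence, one line:

/* fields libxfs_sb_from_disk() never sets (e.g. sb_crc) must not be junk */
memset(sbp, 0, sizeof(xfs_sb_t));
libxfs_sb_from_disk(sbp, XFS_BUF_TO_SBP(bp));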
Reported-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Fri, 7 Mar 2014 01:44:02 +0000 (12:44 +1100)]
xfsprogs: fix use after free in inode_item_done()
Commit "3a19fb7 libxfs: stop caching inode structures"
introduced a use after free.
libxfs_iput() already does the check for ip->i_itemp, and a
kmem_zone_free() if it's present, and then frees the ip pointer.
Re-checking ip->i_itemp after the libxfs_iput call will access
the freed ip pointer, as will setting ip->i_itemp to NULL.
Simply remove the offending code to fix this up.
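That is, in essence:

libxfs_iput(ip, 0);     /* ip (and ip->i_itemp) is gone after this */
/* removed: the lines below dereferenced the freed pointer
 *      if (ip->i_itemp)
 *              kmem_zone_free(xfs_ili_zone, ip->i_itemp);
 *      ip->i_itemp = NULL;
 */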
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Thu, 6 Mar 2014 23:27:31 +0000 (10:27 +1100)]
repair: phase 1 does not handle superblock CRCs
Phase 1 of xfs_repair verifies and corrects the primary
superblock of the filesystem. It does not verify that the CRC of the
superblock that is found is correct, nor does it recalculate the CRC
of the superblock it rewrites.
This happens because phase1 does not use the libxfs buffer cache -
it just uses pread/pwrite on a memory buffer, and works directly
from that buffer. Hence we need to add CRC verification to
verify_sb(), and CRC recalculation to write_primary_sb() so that it
works correctly.
This also enables us to use get_sb() as the method of fetching the
superblock from disk after phase 1 without needing to use the libxfs
buffer cache and guessing at the sector size. This prevents a
verifier error because it attempts to CRC a superblock buffer that
is much longer than the usual sector sizes.
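The verify_sb() side of this is, sketched (XR_BAD_CRC is a
hypothetical repair error code):

/* phase 1 reads the sb with pread(), so CRC check it by hand */
if (xfs_sb_version_hascrc(sb) &&
    !xfs_verify_cksum(sb_buf, sb->sb_sectsize, XFS_SB_CRC_OFF))
        return XR_BAD_CRC;      /* hypothetical error code */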
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Thu, 6 Mar 2014 23:27:30 +0000 (10:27 +1100)]
xfs_db: Use EFSBADCRC for CRC validity indication
xfs_db currently gives indication as to whether a buffer CRC is ok
or not. Currently it does this by checking for EFSCORRUPTED in the
b_error field of the buffer. Now that we have EFSBADCRC to indicate
a bad CRC independently of structure corruption, use that instead to
drive the CRC correct/incorrect indication in the structured output.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Thu, 6 Mar 2014 23:26:11 +0000 (10:26 +1100)]
libxfs: modify verifiers to differentiate CRC from other errors
[userspace port]
Modify all read & write verifiers to differentiate
between CRC errors and other inconsistencies.
This sets the appropriate error number on bp->b_error,
and then calls xfs_verifier_error() if something went
wrong. That function will issue the appropriate message
to the user.
Also, fix the silly bug in xfs_buf_ioerror() that this patch
exposes.
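The resulting verifier pattern looks like this, sketched for a
generic structure ("foo" is illustrative):

static void
xfs_foo_read_verify(struct xfs_buf *bp)
{
        if (!xfs_buf_verify_cksum(bp, XFS_FOO_CRC_OFF))
                xfs_buf_ioerror(bp, EFSBADCRC);         /* CRC mismatch */
        else if (!xfs_foo_verify(bp))
                xfs_buf_ioerror(bp, EFSCORRUPTED);      /* structure bad */

        if (bp->b_error)
                xfs_verifier_error(bp);                 /* tell the user */
}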
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Thu, 6 Mar 2014 23:25:55 +0000 (10:25 +1100)]
libxfs: add xfs_verifier_error()
[userspace port]
We want to distinguish between corruption, CRC errors,
etc. In addition, the full stack trace on verifier errors
seems less than helpful; it looks more like an oops than
corruption.
Create a new function to specifically alert the user to
verifier errors, which can differentiate between
EFSCORRUPTED and CRC mismatches. It doesn't dump stack
unless the xfs error level is turned up high.
Define a new error message (EFSBADCRC) to clearly identify
CRC errors. (Defined to EBADMSG, bad message)
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Thu, 6 Mar 2014 23:25:41 +0000 (10:25 +1100)]
libxfs: add helper for updating checksums on xfs_bufs
[userspace port]
Many/most callers of xfs_update_cksum() pass bp->b_addr and
BBTOB(bp->b_length) as the first 2 args. Add a helper
which can just accept the bp and the crc offset, and work
it out on its own, for brevity.
Dave Chinner [Thu, 6 Mar 2014 23:25:33 +0000 (10:25 +1100)]
libxfs: add helper for verifying checksums on xfs_bufs
[userspace port]
Many/most callers of xfs_verify_cksum() pass bp->b_addr and
BBTOB(bp->b_length) as the first 2 args. Add a helper
which can just accept the bp and the crc offset, and work
it out on its own, for brevity.
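The helper pair from this patch and the previous one is, in essence:

static inline int
xfs_buf_verify_cksum(struct xfs_buf *bp, unsigned long cksum_offset)
{
        return xfs_verify_cksum(bp->b_addr, BBTOB(bp->b_length),
                                cksum_offset);
}

static inline void
xfs_buf_update_cksum(struct xfs_buf *bp, unsigned long cksum_offset)
{
        xfs_update_cksum(bp->b_addr, BBTOB(bp->b_length), cksum_offset);
}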
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
detects superblock corruption, it'll be extremely noisy, dumping
2 stacks, 2 hexdumps, etc.
This is because we call XFS_CORRUPTION_ERROR in xfs_mount_validate_sb
as well as in xfs_sb_read_verify.
Also, *any* errors in xfs_mount_validate_sb which are not corruption
per se - things like too-big-blocksize, bad version, bad magic, v1 dirs,
rw-incompat, etc., which do not return EFSCORRUPTED - will
still do the whole XFS_CORRUPTION_ERROR spew when xfs_sb_read_verify
sees any error at all. And it suggests to the user that they
should run xfs_repair, even if the root cause of the mount failure
is a simple incompatibility.
I'll submit that the probably-not-corrupted errors don't warrant
this much noise, so this patch removes the warning for anything
other than EFSCORRUPTED returns, and replaces the lower-level
XFS_CORRUPTION_ERROR with an xfs_notice().
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Thu, 6 Mar 2014 23:25:04 +0000 (10:25 +1100)]
libxfs: skip verification on initial "guess" superblock read
[userspace port]
When xfs_readsb() does the very first read of the superblock,
it makes a guess at the length of the buffer, based on the
sector size of the underlying storage. This may or may
not match the filesystem sector size in sb_sectsize, so
we can't, for example, do a CRC check on it; it might be too short.
In fact, mounting a filesystem with sb_sectsize larger
than the device sector size will cause a mount failure
if CRCs are enabled, because we are checksumming a length
which exceeds the buffer passed to it.
So always read twice; the first time we read with NULL
buffer ops to skip verification; then set the proper
read length, hook up the proper verifier, and give it
another go.
Once we are sure that we've got the right buffer length,
we can also use bp->b_length in the xfs_sb_read_verify,
rather than the less-trusted on-disk sectorsize for
secondary superblocks. Before this we ran the risk of
passing junk to the crc32c routines, which didn't always
handle extreme values.
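The double read looks roughly like this in xfs_readsb():

/* first read: guessed length, NULL ops, so no verifier runs */
bp = xfs_buf_read_uncached(mp->m_ddev_targp, XFS_SB_DADDR,
                           BTOBB(sector_size), 0, NULL);
/* ... extract sb_sectsize, then re-read with the real verifier ... */
bp = xfs_buf_read_uncached(mp->m_ddev_targp, XFS_SB_DADDR,
                           BTOBB(sbp->sb_sectsize), 0, &xfs_sb_buf_ops);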
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Thu, 6 Mar 2014 23:24:56 +0000 (10:24 +1100)]
libxfs: xfs_sb_read_verify() doesn't flag bad crcs on primary sb
[userspace port]
My earlier commit 10e6e65 deserves a layer or two of brown paper
bags. The logic in that commit means that a CRC failure on the
primary superblock will *never* result in an error return.
Hopefully this fixes it, so that we always return the error
if it's a primary superblock, otherwise only if the filesystem
has CRCs enabled.
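The corrected check is, in essence:

if (!xfs_buf_verify_cksum(bp, XFS_SB_CRC_OFF)) {
        /* the primary sb always fails on a bad CRC; secondaries
         * only fail if the primary says this is a V5 filesystem */
        if (bp->b_bn == XFS_SB_DADDR ||
            xfs_sb_version_hascrc(&mp->m_sb)) {
                error = EFSBADCRC;
                goto out_error;
        }
}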
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Thu, 6 Mar 2014 23:24:44 +0000 (10:24 +1100)]
libxfs: sanitize sb_inopblock in xfs_mount_validate_sb
[userspace port]
xfs_mount_validate_sb doesn't check sb_inopblock for sanity
(as does its xfs_repair counterpart, FWIW).
If it's out of bounds, we can go off the rails in e.g.
xfs_inode_buf_verify(), which uses sb_inopblock as a loop
limit when stepping through a metadata buffer.
The problem can be demonstrated easily by corrupting
sb_inopblock with xfs_db and trying to mount the result:
Dave Chinner [Thu, 6 Mar 2014 23:24:33 +0000 (10:24 +1100)]
libxfs: be more forgiving of a v4 secondary sb w/ junk in v5 fields
[userspace port]
Today, if xfs_sb_read_verify encounters a v4 superblock
with junk past v4 fields which includes data in sb_crc,
it will be treated as a failing checksum and a significant
corruption.
There are known prior bugs which leave junk at the end
of the V4 superblock; we don't need to actually fail the
verification in this case if other checks pan out ok.
So if this is a secondary superblock, and the primary
superblock doesn't indicate that this is a V5 filesystem,
don't treat this as an actual checksum failure.
We should probably check the garbage condition as
we do in xfs_repair, and possibly warn about it
or self-heal, but that's a different scope of work.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Andrew Clayton [Mon, 3 Mar 2014 01:29:32 +0000 (12:29 +1100)]
xfs_repair.8: Fix a grammatical error
Fix an issue in the -o force_geometry suboption text.
Signed-off-by: Andrew Clayton <andrew@digital-domain.net> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
In testing the previous metadump changes, fsstress generated an
inline symlink of 336 bytes in length. This caused corruption of the
restored filesystem that wasn't present in the original filesystem -
it corrupted the magic number of the next inode in the chunk. The
reason being that the symlink data is not null terminated in the
inode literal area, and hence when the symlink consumes the entire
literal area like so:
the symlink data butts right up against the magic number of the next
inode in the chunk. And then, when obfuscation gets to the final
pathname component, it gets its length via:
strlen(), which looks for a null terminator and finds it several
bytes into the next inode. It then proceeds to obfuscate that
many bytes, including the inode magic number of the next inode....
Fix this by ensuring we can't overrun the symlink buffer length
by assuming that the symlink is not null terminated. Tested against
the filesystem image that triggered the problem in the first place.
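A sketch of the bounded length calculation (the exact helper used
may differ):

/* never trust a NUL terminator inside the inode literal area */
namelen = strnlen(symlink, XFS_DFORK_DSIZE(dip, mp));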
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Mar 2014 01:29:32 +0000 (12:29 +1100)]
metadump: Only verify obfuscated metadata being dumped
The discontiguous buffer support series added a verifier check on
the metadata buffers before they get written to the metadump image.
If this failed, it returned an error, and the result would be that
we stopped processing the metadata and exited, truncating the dump.
xfs_metadump is supposed to dump the metadata in the filesystem for
forensic analysis purposes, which means we actually want it to
retain any corruptions it finds in the filesystem. Hence running the
verifier - even to recalculate CRCs - when the metadata is
unmodified is the wrong thing to be doing. And stopping the dump
when we come across an error is even worse.
We still need to do CRC recalculation when obfuscating names and
attributes. Hence we need to make running the verifier conditional
on the buffer or inode:
a) being uncorrupted when read, and
b) modified by the obfuscation code.
If either of these conditions is not true, then we don't run the
verifier or recalculate the CRCs.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Mar 2014 01:29:32 +0000 (12:29 +1100)]
metadump: contiguous metadata object need to be split
On crc enabled filesystems with obfuscation enabled we need to be
able to recalculate the CRCs on individual buffers.
process_single_fsb_objects() reads a contiguous range of single
block objects as a single buffer, and hence we cannot correctly
recalculate the CRCs on them.
Split the loop up into individual buffer reads, processing and
writes rather than a single read, multiple block processing and a
single write.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 3 Mar 2014 01:29:32 +0000 (12:29 +1100)]
xfs_growfs: don't grow data if only -m is specified
While writing an xfstest to check imaxpct behavior, I realized
that xfs_growfs -m XXX /mnt/point will not only change
imaxpct, but also grow the filesystem if it's not currently
occupying the entire block device. This doesn't seem like
the expected behavior, so split it out such that if only
-m, and not -d, is specified, only the imaxpct will be
changed.
This is a change from previous behavior, but I think it
satisfies the principle of least surprise...
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 3 Mar 2014 01:29:32 +0000 (12:29 +1100)]
xfs_io: test for invalid -Tr flag combination before open
Coverity spotted this.
It complained that we didn't close the fd before returning in
the error case of incompatible options, but in reality, we wouldn't
have gotten that far because open(O_RDONLY|O_TMPFILE) would be
rejected with EINVAL.
So the error handling test would never actually be true.
Fix this by moving the error checking prior to the open so
the user gets a more useful error message than "Invalid Argument."
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 3 Mar 2014 01:22:02 +0000 (12:22 +1100)]
xfs_logprint: don't advance op counter in xlog_print_trans_icreate
xlog_print_trans_icreate is advancing the op counter
"(*i)++" incorrectly; it only contains one region, and
the loop which called it will properly advance the op
once we're done.
Found-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 3 Mar 2014 01:21:49 +0000 (12:21 +1100)]
xfs_logprint: Don't error out after split items lose context
xfs_logprint recognizes a "left over region from split log item"
but then expects the *next* op to be a valid start to a new
item. The problem is, we can split e.g. an xfs_inode_log_format
item, skip over it, and then land on the xfs_icdinode_t
data which follows it - this doesn't have a valid log item
magic (XFS_LI_*) and we error out. This results in something
like:
xfs_logprint: unknown log operation type (494e)
Fix this by recognizing that we've skipped over an item and
lost the context we're in, so just continue skipping over
op headers until we find the next valid start to a log item.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Mon, 3 Mar 2014 01:21:21 +0000 (12:21 +1100)]
xfs_copy: band-aids for CRC filesystems
xfs_copy needs a fair bit of work for CRCs because it rewrites
UUIDs by default, but this change will get it working properly
with the "-d" (duplicate) option which keeps the same UUID.
Accept the CRC magic as valid, change the ASSERT() to an error
message and exit more gracefully, and don't
even get started if we don't have the '-d' option.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Mar 2014 01:19:14 +0000 (12:19 +1100)]
repair: prefetch runs too far ahead
When trying to work out why a non-crc filesystem took 1m57 to repair
and the same CRC enabled filesystem took 11m35 to repair, I noticed
that there was far too much CRC checking going on. Prefetched buffers
should not be CRCed, yet shortly after starting this began
to happen. perf profiling also showed up an awful lot of time doing
buffer cache lookups, and the cache profile output indicated that
the hit rate was way below 3%. IOWs, the readahead was getting so
far ahead of the processing that it was thrashing the cache.
That there is a difference in processing rate between CRC and
non-CRC filesystems is not surprising. What is surprising is the
readahead behaviour - it basically just keeps reading ahead until it
has read everything on an AG, and then it goes on to the next AG,
and reads everything on it, and then goes on to the next AG,....
This goes on until it pushes all the buffers the processing threads
need out of the cache, and suddenly they start re-reading from disk
with the various CRC checking verifiers enabled, and we end up going
-really- slow. Yes, threading made up for it a bit, but it's just
wrong.
Basically, the code assumes that IO is going to be slower than
processing, so it doesn't throttle prefetch across AGs to slow
down prefetch to match the processing rate.
So, to fix this, don't let a prefetch thread get more than a single
AG ahead of its processing thread, just like occurs for single
threaded (i.e. -o ag_stride=-1) operation.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Mar 2014 01:18:01 +0000 (12:18 +1100)]
repair: BMBT prefetch needs to be CRC aware
I'd been trying to track down a behavioural difference between
non-crc and crc enabled filesystems that was resulting in non-crc
filesystems executing prefetch almost 3x faster than CRC filesystems.
After many ratholes, I finally stumbled on the fact that btree
format directories are not being prefetched due to a missing magic
number check, and it's rejecting all XFS_BMAP_CRC_MAGIC format BMBT
buffers. This makes prefetch on CRC enabled filesystems behave the
same as for non-CRC filesystems.
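The missing check amounted to accepting both magics (sketched; the
prefetch call shown is illustrative):

if (be32_to_cpu(bnode->bb_magic) == XFS_BMAP_MAGIC ||
    be32_to_cpu(bnode->bb_magic) == XFS_BMAP_CRC_MAGIC)
        pf_scan_lbtree(dbno, level, isadir, args, pf_scanfunc_bmap);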
The difference a single line of code can make on a 50 million inode
filesystem with a single threaded prefetch enabled run is pretty
amazing. It goes from 3,000 iops @ 50MB/s to 2,000 IOPS @ 800MB/s
and the cache hit rate goes from 3% to 49%. The runtime difference:
Dave Chinner [Mon, 3 Mar 2014 01:17:46 +0000 (12:17 +1100)]
repair: fix prefetch queue limiting
The length of the prefetch queue is limited by a semaphore. To avoid
an ABBA deadlock, we only trywait on the semaphore so if we fail to
get it we can kick the IO queues before sleeping. Unfortunately,
the "need to sleep" detection is just a little wrong - it needs to
look at errno, not err, for the EAGAIN value.
Hence this queue throttling has not been working for a long time.
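The fix, in essence (structure names from repair/prefetch.c,
approximate):

err = sem_trywait(&args->ra_count);
if (err < 0 && errno == EAGAIN) {       /* was: err == EAGAIN, never true */
        pf_start_io_workers(args);      /* kick the IO queues first */
        sem_wait(&args->ra_count);
}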
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
It's possible to have filesystems with hundreds of AGs on systems
with little concurrency and resources. In this case, we can easily
exhaust memory and fail to create threads and have all sorts of
interesting problems.
xfs/250 can cause this to occur, with failures like:
Dave Chinner [Mon, 3 Mar 2014 01:17:07 +0000 (12:17 +1100)]
libxfs: buffer cache hashing is suboptimal
The hashkey calculation is very simplistic, and throws away an amount
of entropy that should be folded into the hash. The result is
sub-optimal distribution across the hash tables. For example, with a
default 512 entry table, phase 2 results in this:
Max supported entries = 4096
Max utilized entries = 3970
Active entries = 3970
Hash table size = 512
Hits = 0
Misses = 3970
Hit ratio = 0.00
Hash buckets with 0 entries 12 ( 0%)
Hash buckets with 1 entries 3 ( 0%)
Hash buckets with 2 entries 10 ( 0%)
Hash buckets with 3 entries 2 ( 0%)
Hash buckets with 4 entries 129 ( 12%)
Hash buckets with 5 entries 20 ( 2%)
Hash buckets with 6 entries 54 ( 8%)
Hash buckets with 7 entries 22 ( 3%)
Hash buckets with 8 entries 150 ( 30%)
Hash buckets with 9 entries 14 ( 3%)
Hash buckets with 10 entries 16 ( 4%)
Hash buckets with 11 entries 7 ( 1%)
Hash buckets with 12 entries 38 ( 11%)
Hash buckets with 13 entries 5 ( 1%)
Hash buckets with 14 entries 4 ( 1%)
Hash buckets with 17 entries 1 ( 0%)
Hash buckets with 19 entries 1 ( 0%)
Hash buckets with 23 entries 1 ( 0%)
Hash buckets with >24 entries 23 ( 16%)
Now, given a perfect distribution, we should have 8 entries per
chain. What we end up with is nothing like that.
Unfortunately, for phase 3/4 and others, the number of cached
objects results in the cache being expanded to 256k entries, and
so the stats just give this:
Hits = 262276
Misses = 8130393
Hit ratio = 3.13
Hash buckets with >24 entries 512 (100%)
We can't evaluate the efficiency of the hashing algorithm here.
Let's increase the size of the hash table to 32768 entries and go
from there:
Phase 2:
Hash buckets with 0 entries 31884 ( 0%)
Hash buckets with 1 entries 35 ( 0%)
Hash buckets with 2 entries 78 ( 3%)
Hash buckets with 3 entries 30 ( 2%)
Hash buckets with 4 entries 649 ( 65%)
Hash buckets with 5 entries 12 ( 1%)
Hash buckets with 6 entries 13 ( 1%)
Hash buckets with 8 entries 40 ( 8%)
Hash buckets with 9 entries 1 ( 0%)
Hash buckets with 13 entries 1 ( 0%)
Hash buckets with 15 entries 1 ( 0%)
Hash buckets with 22 entries 1 ( 0%)
Hash buckets with 24 entries 17 ( 10%)
Hash buckets with >24 entries 6 ( 4%)
There's a significant number of collisions given the population is
only 15% of the size of the table itself....
Phase 3:
Max supported entries = 262144
Max utilized entries = 262144
Active entries = 262090
Hash table size = 32768
Hits = 530844
Misses = 7164575
Hit ratio = 6.90
Hash buckets with 0 entries 11898 ( 0%)
....
Hash buckets with 12 entries 5513 ( 25%)
Hash buckets with 13 entries 4188 ( 20%)
Hash buckets with 14 entries 2073 ( 11%)
Hash buckets with 15 entries 1811 ( 10%)
Hash buckets with 16 entries 1994 ( 12%)
....
Hash buckets with >24 entries 339 ( 4%)
So a third of the hash table buckets do not even have any entries in
them, despite more than 7.5 million entries having run through the
cache. Median chain lengths are 12-16 entries where the ideal is 8,
and there are lots of collisions on the longer-than-24-entry chains...
Phase 6:
Hash buckets with 0 entries 14573 ( 0%)
....
Hash buckets with >24 entries 2291 ( 36%)
Modify the hash to be something more workable - steal the linux
kernel inode hash calculation and try that:
phase 2:
Max supported entries = 262144
Max utilized entries = 3970
Active entries = 3970
Hash table size = 32768
Hits = 0
Misses = 3972
Hit ratio = 0.00
Hash buckets with 0 entries 29055 ( 0%)
Hash buckets with 1 entries 3464 ( 87%)
Hash buckets with 2 entries 241 ( 12%)
Hash buckets with 3 entries 8 ( 0%)
Close to perfect.
Phase 3:
Max supported entries = 262144
Max utilized entries = 262144
Active entries = 262118
Hash table size = 32768
Hits = 567900
Misses = 7118749
Hit ratio = 7.39
Hash buckets with 5 entries 1572 ( 2%)
Hash buckets with 6 entries 2186 ( 5%)
Hash buckets with 7 entries 9217 ( 24%)
Hash buckets with 8 entries 8757 ( 26%)
Hash buckets with 9 entries 6135 ( 21%)
Hash buckets with 10 entries 3166 ( 12%)
Hash buckets with 11 entries 1257 ( 5%)
Hash buckets with 12 entries 364 ( 1%)
Hash buckets with 13 entries 94 ( 0%)
Hash buckets with 14 entries 14 ( 0%)
Hash buckets with 15 entries 5 ( 0%)
A near-perfect bell curve centered on the optimal distribution
number of 8 entries per chain.
Phase 6:
Hash buckets with 0 entries 24 ( 0%)
Hash buckets with 1 entries 190 ( 0%)
Hash buckets with 2 entries 571 ( 0%)
Hash buckets with 3 entries 1263 ( 1%)
Hash buckets with 4 entries 2465 ( 3%)
Hash buckets with 5 entries 3399 ( 6%)
Hash buckets with 6 entries 4002 ( 9%)
Hash buckets with 7 entries 4186 ( 11%)
Hash buckets with 8 entries 3773 ( 11%)
Hash buckets with 9 entries 3240 ( 11%)
Hash buckets with 10 entries 2523 ( 9%)
Hash buckets with 11 entries 2074 ( 8%)
Hash buckets with 12 entries 1582 ( 7%)
Hash buckets with 13 entries 1206 ( 5%)
Hash buckets with 14 entries 863 ( 4%)
Hash buckets with 15 entries 601 ( 3%)
Hash buckets with 16 entries 386 ( 2%)
Hash buckets with 17 entries 205 ( 1%)
Hash buckets with 18 entries 122 ( 0%)
Hash buckets with 19 entries 48 ( 0%)
Hash buckets with 20 entries 24 ( 0%)
Hash buckets with 21 entries 13 ( 0%)
Hash buckets with 22 entries 8 ( 0%)
A much wider bell curve than phase 3, but still centered around the
optimal value and far, far better than the distribution of the
current hash calculation. Runtime:
Essentially unchanged - this is somewhat of a "swings and
roundabouts" test here because what it is testing is the cache-miss
overhead.
FWIW, the comparison here shows a pretty good case for the existing
hash calculation. On a less populated filesystem (5m inodes rather
than 50m inodes) the typical hash distribution was:
Max supported entries = 262144
Max utilized entries = 262144
Active entries = 262094
Hash table size = 32768
Hits = 626228
Misses = 800166
Hit ratio = 43.90
Hash buckets with 0 entries 29274 ( 0%)
Hash buckets with 3 entries 1 ( 0%)
Hash buckets with 4 entries 1 ( 0%)
Hash buckets with 7 entries 1 ( 0%)
Hash buckets with 8 entries 1 ( 0%)
Hash buckets with 9 entries 1 ( 0%)
Hash buckets with 12 entries 1 ( 0%)
Hash buckets with 13 entries 1 ( 0%)
Hash buckets with 16 entries 2 ( 0%)
Hash buckets with 18 entries 1 ( 0%)
Hash buckets with 22 entries 1 ( 0%)
Hash buckets with >24 entries 3483 ( 99%)
Total and utter crap. Same filesystem, new hash function:
Max supported entries = 262144
Max utilized entries = 262144
Active entries = 262103
Hash table size = 32768
Hits = 673208
Misses = 838265
Hit ratio = 44.54
Hash buckets with 3 entries 558 ( 0%)
Hash buckets with 4 entries 1126 ( 1%)
Hash buckets with 5 entries 2440 ( 4%)
Hash buckets with 6 entries 4249 ( 9%)
Hash buckets with 7 entries 5280 ( 14%)
Hash buckets with 8 entries 5598 ( 17%)
Hash buckets with 9 entries 5446 ( 18%)
Hash buckets with 10 entries 3879 ( 14%)
Hash buckets with 11 entries 2405 ( 10%)
Hash buckets with 12 entries 1187 ( 5%)
Hash buckets with 13 entries 447 ( 2%)
Hash buckets with 14 entries 125 ( 0%)
Hash buckets with 15 entries 25 ( 0%)
Hash buckets with 16 entries 3 ( 0%)
Kinda says it all, really...
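For reference, the borrowed hash is shaped like the kernel's inode
hash (a hedged reconstruction; the divisor and shift constants stand
in for the cacheline size and table size and are approximate):

#define GOLDEN_RATIO_PRIME      0x9e37fffffffc0001UL

static unsigned int
cache_hash(cache_key_t key, unsigned int hashsize)
{
        uint64_t        hashval = (uint64_t)(uintptr_t)key;
        uint64_t        tmp;

        tmp = hashval ^ (GOLDEN_RATIO_PRIME + hashval) / 64;    /* 64 ~ L1 cacheline */
        tmp = tmp ^ ((tmp ^ GOLDEN_RATIO_PRIME) >> 12);         /* 12 ~ log2(hashsize) */
        return tmp % hashsize;
}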
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Mar 2014 01:16:54 +0000 (12:16 +1100)]
repair: per AG locks contend for cachelines
The per-ag locks used to protect per-ag block lists are located in a tightly
packed array. That means that they share cachelines, so separate them out into
separate 64 byte regions in the array.
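In essence (a sketch; the exact attribute spelling may vary):

struct aglock {
        pthread_mutex_t lock;
} __attribute__((__aligned__(64)));     /* one lock per 64 byte cacheline */

struct aglock   *ag_locks;              /* one entry per AG */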
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Mar 2014 01:16:44 +0000 (12:16 +1100)]
repair: translation lookups limit scalability
A bit of perf magic showed that scalability was limited to 3-4
concurrent threads due to contention on a lock inside something
called __dcigettext(). That's in some library somewhere that repair is
linked against, and it turns out to be inside the translation
infrastructure to support the _() string mechanism:
Fix this by initialising global variables that hold the translated
strings at startup, hence avoiding the need to do repeated runtime
translation of the same strings.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Fri, 28 Feb 2014 01:04:26 +0000 (12:04 +1100)]
mkfs: proto file creation does not set ftype correctly
Hence running xfs_repair on an ftype-enabled filesystem that has
contents created by a proto file will throw warnings on mismatched
ftype entries and correct them. xfs/031 fails due to this problem.
Fix it by setting up the xname structure with the correct type
fields.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Fri, 28 Feb 2014 01:04:12 +0000 (12:04 +1100)]
mkfs: default log size for small filesystems too large
Recent changes to the log size scaling have resulted in using the
default size multiplier for the log size even on small filesystems.
Commit 88cd79b ("xfs: Add xfs_log_rlimit.c") changed the calculation
of the maximum transaction size that the kernel would issue, and
that significantly increased the minimum size of the default log.
As such the size of the log on small filesystems was typically
larger than the previous default, even though the previous default
was still larger than the minimum needed.
Rework the default log size calculation such that it will use the
original log size default if it is larger than the minimum log size
required, and only use a larger log if the configuration of the
filesystem requires it.
This is especially obvious in xfs/216, where the default log size is
10MB all the way up to 16GB filesystems. The current mkfs selects a
log size of 50MB for the same size filesystems and this is
unnecessarily large.
Return the scaling of the log size for small filesystems to
something similar to what xfs/216 expects.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Sun, 23 Feb 2014 21:13:28 +0000 (08:13 +1100)]
libxfs: clear stale buffer errors on write
If we've read a buffer and it's had an error (e.g a bad CRC) and the
caller corrects the problem with the buffer and writes it via
libxfs_writebuf() without clearing the error on the buffer,
subsequent reads of the buffer while it is still in cache can see
that error and fail inappropriately.
xfs/033 demonstrates this error, where phase 3 detects the corrupted
root inode and clears it, but doesn't clear the b_error field. Later in
phase 6, the code that rebuilds the root directory tries to read the
root inode and sees a buffer with an error on it, thereby triggering
a fatal repair failure:
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
xfs_inode_buf_verify: XFS_CORRUPTION_ERROR
bad magic number 0x0 on inode 64
....
cleared root inode 64
....
Phase 6 - check inode connectivity...
reinitializing root directory
xfs_imap_to_bp: xfs_trans_read_buf() returned error 117.
fatal error -- could not iget root inode -- error - 117
#
Fix this by assuming buffers that are written are clean and correct
and hence we can zero the b_error field before retiring the buffer
to the cache.
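The libxfs_writebuf() side of that is, sketched:

/* the caller asserts the buffer contents are now good */
bp->b_flags |= (LIBXFS_B_UPTODATE | LIBXFS_B_DIRTY);
bp->b_error = 0;        /* don't let a stale read error poison cache hits */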
Reported-by: Eric Sandeen <esandeen@redhat.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <esandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Sun, 23 Feb 2014 21:12:41 +0000 (08:12 +1100)]
libxfs: contiguous buffers are not discontiguous
When discontiguous directory buffer support was fixed in xfs_repair,
(dd9093d xfs_repair: fix discontiguous directory block support)
it changed to using libxfs_getbuf_map() to support mapping
discontiguous blocks, and the prefetch code special cased such
discontiguous buffers.
The issue is that libxfs_getbuf_map() marks all buffers - even
contiguous ones - as LIBXFS_B_DISCONTIG, and so the prefetch code
was treating every buffer as discontiguous. This causes the prefetch
code to completely bypass the large IO optimisations for dense areas
of metadata. Because there was no obvious change in performance or
IO patterns, this wasn't noticed during performance testing.
However, this change mysteriously fixed a regression in xfs/033 in
the v3.2.0-alpha release, and this change in behaviour was
discovered as part of triaging why it "fixed" the regression.
Anyway, restoring the large IO prefetch optimisation results in
a repair on a 10 million inode filesystem dropping from 197s to 173s,
and the peak IOPS rate in phase 3 dropping from 25,000 to roughly
2,000 by trading off a bandwidth increase of roughly 100% (i.e.
200MB/s to 400MB/s). Phase 4 saw similar changes in IO profile and
speed increases.
This, however, re-introduces the regression in xfs/033, which will
now be fixed in a separate patch.
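The distinction is, in essence (a sketch of libxfs_getbuf_map()):

/* a single-extent map is just a normal contiguous buffer */
if (nmaps == 1)
        return libxfs_getbuf(btp, map[0].bm_bn, map[0].bm_len);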
Reported-by: Eric Sandeen <esandeen@redhat.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <esandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Sun, 23 Feb 2014 21:11:29 +0000 (08:11 +1100)]
xfs_repair: fix sibling pointer tests in verify_dir2_path()
RH QE reported that if we create a 1G filesystem with default
options, mount it, and create inodes until full, then run
repair, repair reports corruption in verify_dir2_path() with:
> bad back pointer in block 8390324 for directory inode 131
Commit 88b32f0 ("xfs: add CRCs to dir2/da node blocks")
had a small error which regressed this; although we switch
to the "newnode," to check sibling pointers, we re-populate
the node hdr with the old "node" data. This causes the
backpointer test to be testing the wrong node's values.
Fixing this bug fixes the testcase.
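The fix, in essence:

node = newnode;
/* re-read the header from the sibling we just switched to */
xfs_da3_node_hdr_from_disk(&nodehdr, node);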
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Wed, 19 Feb 2014 00:52:25 +0000 (11:52 +1100)]
xfs_db: fix attribute leaf output for ATTR3 format
attr3_leaf_entries_count() checks against the wrong magic number,
and hence returns zero for an entry count when it should be returning a
value. Fixing this makes xfs/021 pass on CRC enabled filesystems.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Brian Foster [Wed, 19 Feb 2014 00:52:24 +0000 (11:52 +1100)]
xfs_io: set argmax to 1 for imap command
The imap command supports an optional argument to specify the
number of inode records to capture per ioctl(), but argmax is
currently set to 0. This leads to an error if an argument is
provided on the command line. Set argmax to 1 to support the
optional argument.
Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Jie Liu <jeff.liu@oracle.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Mark Tinguely [Wed, 19 Feb 2014 00:16:09 +0000 (11:16 +1100)]
xfs_db: fix the setting of unaligned directory fields
Setting the directory startoff, startblock, and blockcount
fields is difficult on both big and little endian machines.
The setting of extentflag was completely broken.
Since the output fields and the lengths are not aligned to a byte,
setbitval requires them to be entered in big endian and properly
byte/nibble shifted. The extentflag output was aligned to a byte
but was not shifted correctly.
Convert the input to big endian on little endian machines and
nibble/byte shift on all platforms so setbitval can set the bits
correctly.
Clean up some whitespace while in the setbitval() function.
[dchinner: cleaned up remaining bad whitespace and comments]
Signed-off-by: Mark Tinguely <tinguely@sgi.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Feb 2014 22:41:51 +0000 (09:41 +1100)]
metadump: fully support discontiguous directory blocks
Now that directory block obfuscation can handle single contiguous
directory blocks, we can make the multi-block code use discontiguous
buffers to read in an entire directory block at a time. This allows
us to pass a complete directory object to the processing function
and hence be able to process any sort of directory object regardless
of its underlying layout.
With this, we can remove the multi-block loop from the directory
processing code and get rid of all the structures used to hold
inter-call state. This greatly simplifies the code as well as
adding the additional functionality.
With this patch, a CRC enabled filesystem now passes xfs/291.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Feb 2014 22:41:41 +0000 (09:41 +1100)]
metadump: walk single fsb objects a block at a time
To be able to support arbitrary discontiguous extents in multi-block
objects, we need to be able to process a single object at a time
presented as a single flat buffer. Right now we pass an arbitrary
extent and have the individual object processing functions break it
up and keep track of inter-call state.
This greatly complicates the processing of directory objects, such
that certain formats are simply not handled at all. Instead, for
single block objects loop over the extent a block at a time, feeding
a whole object to the processing function and hence making the
extent walking generic instead of per object.
At this point multi-block directory objects still need to use the
existing code, so duplicate the old single block object code into it
so we can fix it up properly. This means directory block processing
can't be fully cleaned up yet.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Feb 2014 22:40:13 +0000 (09:40 +1100)]
metadump: separate single block objects from multiblock objects
When trying to dump objects, we have to treat multi-block objects
differently to single block objects. Separate out the code paths for
single block vs multi-block objects so we can add a separate path
for multi-block objects.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Feb 2014 22:35:36 +0000 (09:35 +1100)]
metadump: support writing discontiguous io cursors
To handle discontiguous buffers, metadump needs to be able to handle
io cursors that use discontiguous buffer mappings. Factor
write_buf() to extract the data copy routine and use that to
implement support for both flat and discontiguous buffer maps.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Feb 2014 22:35:26 +0000 (09:35 +1100)]
metadump: sanitise write_buf/index return values
Write_buf/write_index use confusing boolean values for return,
meaning that it's hard to tell what the correct error return is
supposed to be. Convert them to return zero on success or a
negative errno otherwise so that it's clear what the error case is.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Feb 2014 00:55:36 +0000 (11:55 +1100)]
xfs_repair: fix discontiguous directory block support
xfs/291 tests fragmented multi-block directories, and xfs_repair
throws lots of lookup badness errors in phase 3:
- agno = 1
7fa3d2758740: Badness in key lookup (length)
bp=(bno 0x1e040, len 4096 bytes) key=(bno 0x1e040, len 16384 bytes)
- agno = 2
7fa3d2758740: Badness in key lookup (length)
bp=(bno 0x2d0e8, len 4096 bytes) key=(bno 0x2d0e8, len 16384 bytes)
7fa3d2758740: Badness in key lookup (length)
bp=(bno 0x2d068, len 4096 bytes) key=(bno 0x2d068, len 16384 bytes)
- agno = 3
7fa3d2758740: Badness in key lookup (length)
bp=(bno 0x39130, len 4096 bytes) key=(bno 0x39130, len 16384 bytes)
7fa3d2758740: Badness in key lookup (length)
bp=(bno 0x391b0, len 4096 bytes) key=(bno 0x391b0, len 16384 bytes)
7fa3d2758740: Badness in key lookup (length)
This is because it is trying to read a directory buffer in full
(16k), but is finding a single 4k block in the cache instead.
The opposite is happening in phase 6 - phase 6 is trying to read 4k
buffers but is finding a 16k buffer there instead.
The problem is caused by the fact that directory buffers can be
represented as compound buffers or as individual buffers depending
on the code reading the directory blocks. The main problem is that
the IO prefetch code doesn't understand compound buffers, so teach
it about compound buffers to make the problem go away.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Feb 2014 00:11:11 +0000 (11:11 +1100)]
libxfs: remove map from libxfs_readbufr_map
The map passed in to libxfs_readbufr_map is used to check the buffer
matches the map. However, the repair readahead code has no map it
can use to validate the buffer it set up previously, so just get rid
of the map being passed in because it serves no useful purpose.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Dave Chinner [Mon, 3 Feb 2014 00:08:44 +0000 (11:08 +1100)]
xfs_repair: add support for validating dirent ftype field
Add code to track the filetype of an inode from phase 3 when all the
inodes are scanned through to phase 6 when the directory structure
is validated and corrected.
Add code to phase 6 shortform and long form directory entry
validation to read the ftype from the dirent, lookup the inode
record and check they are the same. If they aren't and we are in
no-modify mode, issue a warning such as:
Phase 6 - check inode connectivity...
- traversing filesystem ...
would fix ftype mismatch (5/1) in directory/child inode 64/68
- traversal finished ...
- moving disconnected inodes to lost+found ...
If we are fixing the problem:
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
fixing ftype mismatch (5/1) in directory/child inode 64/68
- traversal finished ...
- moving disconnected inodes to lost+found ...
Note that this is from a leaf form directory entry that was
intentionally corrupted with xfs_db like so:
Shortform directory entry repair was tested in a similar fashion.
Further, track the ftype in the directory hash table that is built,
so if the directory is rebuilt from scratch it has the necessary
ftype information to rebuild the directory correctly. Further, if we
detect a ftype mismatch, update the entry in the hash so that later
directory errors that lead to it being rebuilt use the corrected
ftype field, not the bad one.
Note that this code pulls in some kernel side code that is currently
in kernel private locations (xfs_mode_to_ftype table), so there'll
be some kernel/userspace sync work needed to bring these back into
line.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
Eric Sandeen [Thu, 12 Dec 2013 17:47:59 +0000 (17:47 +0000)]
xfs_metadump: Make -F (force) optional
If we do something crazy like:
# xfs_metadump /root/anaconda.cfg outfile
xfs_metadump will pass "-F" to xfs_db to carry on even in the face
of bad superblock magic. [1] Depending on what we gave as an input
file, we may very well fail quite badly:
> xfs_metadump: /root/anaconda.cfg is not a valid XFS filesystem (unexpected SB magic number 0x230a2320)
> Floating point exception
I don't think it's possible to harden every path through libxfs for
non-xfs filesystems as input. (Even if it's possible, I don't think it's
worth the effort).
So I propose making the "-F" optional; by default, xfs_metadump will
say no, this has bad magic, I'm stopping. If the admin really wants to
try to proceed, suggest that they can use "-F" and they can keep all the
broken pieces.
[1] behavior added in 7f98455 xfs_db: exit on invalid magic number
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Ben Myers [Tue, 10 Dec 2013 20:53:52 +0000 (20:53 +0000)]
xfs_repair: fix process_bmbt_reclist_int
There is a set of checks for corruption in block map btrees in
process_bmbt_reclist_int that we identify but currently do not fix. It
appears that the author's intent in this function was to set error = 1,
and then only clear it when all of the checks were completed
successfully. Unfortunately error can be cleared when it is used for
the return value of blkmap_set_ext. Some kinds of corruption are not
being fixed, including duplicate extents, claiming free blocks, claiming
metadata blocks, and multiply used blocks.
Fix this by using error2 for the return code from blkmap_set_ext.
Signed-off-by: Ben Myers <bpm@sgi.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Eric Sandeen [Fri, 18 Oct 2013 22:30:18 +0000 (22:30 +0000)]
xfs_fsr: fix SWAPEXT failures under selinux
If we run xfs_fsr on a system which creates selinux extended
attributes, the temp file created by xfs_fsr may have a
large-ish local extended attribute as soon as it is created.
If the target file has NON-local extended attributes, it may
have a fork offset larger than the temp file, because e.g.
FMT_EXTENTS attributes take up less space. We currently
have no mechanism to grow the temp file's fork offset.
So in this case, the SWAPEXT ioctl will fail.
(With systems using selinux and lots of xattrs, this becomes
fairly common in the real world.)
After testing the target file for a non-local extent, and
checking to see if the temp forkoff needs to be grown on the
first pass, we can add a large attr to knock all attributes on
the temp file out of local format, and grow the fork offset for
this particular case.
This passes xfstest 227, and also resolves issues seen on
a metadata image provided by Gabriel.
Reported-by: Gabriel VLASIU <gabriel@vlasiu.net> Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:41:00 +0000 (06:41 +0000)]
repair: fix leaf node directory data check
When walking the leaf node format blocks (LEAFN) in the hash index
of a large directory, we could trip over btree node blocks (DA_NODE)
in the address space if there are enough entries in the directory.
These cause a verifier failure, and hence the directory is
considered corrupt and is trashed and rebuilt unnecessarily. Fix this
by using the correct verifier that can handle both types of blocks
without triggering failures.
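Conceptually the fix means checking these blocks with a verifier
that accepts either magic number; a minimal sketch using the on-disk
magic values (simplified, with a hypothetical function name):

    #include <stdio.h>
    #include <stdint.h>

    #define XFS_DA_NODE_MAGIC     0xfebe  /* hash index interior node */
    #define XFS_DIR2_LEAFN_MAGIC  0xd2ff  /* hash index leaf block */

    /* Return 0 for any block type legal in a leafn directory walk. */
    static int verify_dir_leafn_or_node(uint16_t magic)
    {
        switch (magic) {
        case XFS_DA_NODE_MAGIC:
        case XFS_DIR2_LEAFN_MAGIC:
            return 0;
        default:
            return -1;    /* genuinely corrupt */
        }
    }

    int main(void)
    {
        printf("node:  %d\n", verify_dir_leafn_or_node(0xfebe));
        printf("leafn: %d\n", verify_dir_leafn_or_node(0xd2ff));
        printf("junk:  %d\n", verify_dir_leafn_or_node(0xbeef));
        return 0;
    }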
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:40:59 +0000 (06:40 +0000)]
repair: Increase default repair parallelism on large filesystems
Large filesystems or high AG count filesystems generally have more
inherent parallelism in the backing storage. We should make use of
this by default to speed up repair times. Make xfs_repair use an
"auto-stride" configuration on filesystems with enough AGs to be
considered "multidisk" configurations.
The difference in elapsed time to repair a 100TB filesystem
containing 50 million inodes, with all metadata on flash, is:
             Time    IOPS    BW      CPU     RAM
    vanilla: 2719s   2900    55MB/s  25%     0.95GB
    patched:  908s   varied  varied  varied  2.33GB
With the patched kernel, there were IO peaks of over 1.3GB/s during
AG scanning. Some phases now run at noticeably different speeds
- phase 3 ran at ~180% CPU, 18,000 IOPS and 130MB/s,
- phase 4 ran at ~280% CPU, 12,000 IOPS and 100MB/s
- the other phases were similar to the vanilla repair.
Memory usage is increased because of the increased buffer cache
size as a result of concurrent AG scanning using it.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:40:58 +0000 (06:40 +0000)]
repair: prefetching is turned off unnecessarily
When we have a large filesystem, prefetching is only enabled when
there is a significant amount of RAM available - roughly 16GB RAM
for every 100TB of disk space. For large filesystems, this memory
usage calculation is mostly derived from the memory needed to track
used space rather than inodes. That is, for a 100TB filesystem with
50 million inodes, only 50M * 4 bytes or 200MB of the required
16GB of RAM is used for tracking inodes. Hence with prefetching
turned off, such a filesystem only uses 230MB of memory to run
repair to completion.
With prefetching turned on, this increases to about 900MB of RAM,
but it is still far, far less than the predicted 16GB of RAM needed
to enable prefetching. Hence we are turning off prefetching when we
really don't need to, and large filesystems are being checked more
slowly than they could be.
This patch makes prefetching always enabled, but adds a warning for
the case where we might not have enough memory to complete
successfully, advising the user to rerun with prefetching disabled
if repair fails:
Memory available for repair (12031MB) may not be sufficient.
At least 13044MB is needed to repair this filesystem efficiently
If repair fails due to lack of memory, please
turn prefetching off (-P) to reduce the memory footprint.
A similar warning is also added for the case where prefetching is
disabled and xfs_repair exhausts memory: more RAM/swap should be
added to the system.
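A minimal sketch of the warning logic with hypothetical names (the
real memory estimate is derived from the filesystem geometry):

    #include <stdio.h>

    /* Warn, rather than silently disabling prefetch, when RAM looks
     * short for the estimated working set. */
    static void check_repair_memory(unsigned long avail_mb,
                                    unsigned long needed_mb,
                                    int prefetch_enabled)
    {
        if (avail_mb >= needed_mb)
            return;
        printf("Memory available for repair (%luMB) may not be sufficient.\n",
               avail_mb);
        printf("At least %luMB is needed to repair this filesystem efficiently\n",
               needed_mb);
        if (prefetch_enabled)
            printf("If repair fails due to lack of memory, please\n"
                   "turn prefetching off (-P) to reduce the memory footprint.\n");
        else
            printf("If repair fails due to lack of memory, please\n"
                   "increase RAM and/or swap space to at least %luMB.\n",
                   needed_mb);
    }

    int main(void)
    {
        check_repair_memory(12031, 13044, 1);  /* numbers from above */
        return 0;
    }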
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:40:57 +0000 (06:40 +0000)]
xfsprogs: kill experimental warnings for v5 filesystems
With xfsprogs now being close to feature complete on v5 filesystems,
remove the experimental warnings from the superblock verifier. This
means that we don't need to filter such warnings from the output in
xfstests and so we can see exactly what tests are failing due to
code deficiencies rather than from detecting warning noise.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:40:56 +0000 (06:40 +0000)]
xfs: support larger inode clusters on v5 filesystems
To allow the kernel to use larger inode clusters than the standard
8192 bytes, we need to set the inode alignment fields appropriately
so that the kernel is consistent in its inode-to-buffer mappings.
We set the alignment to allow a constant 32 inodes per cluster,
instead of a fixed 8k cluster size.
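The arithmetic behind that is straightforward; a minimal sketch with
a hypothetical helper, using the v5 defaults of 512 byte inodes and
4k blocks as the example:

    #include <stdio.h>

    /*
     * With a constant 32 inodes per cluster, the cluster size scales
     * with the inode size instead of being pinned at 8k, and the
     * alignment is that size expressed in filesystem blocks.
     */
    static unsigned int inode_cluster_align(unsigned int inodesize,
                                            unsigned int blocksize)
    {
        unsigned int cluster_bytes = 32 * inodesize;

        return cluster_bytes / blocksize;  /* alignment in fs blocks */
    }

    int main(void)
    {
        /* 512 byte v5 inodes, 4k blocks: 16k cluster = 4 blocks */
        printf("align = %u blocks\n", inode_cluster_align(512, 4096));
        return 0;
    }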
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:40:55 +0000 (06:40 +0000)]
db: enable metadump on CRC filesystems
Now that we can calculate CRCs through xfs_db, we can add support
for recalculating CRCs on obfuscated metadump images. This simply
requires us to call the write verifier manually before writing the
buffer to the metadump image.
We don't need to do anything special to mdrestore, as the metadata
blocks it reads from the image file will already have all the
correct CRCs in them. Hence it can be mostly oblivious to the fact
that the filesystem it is restoring contains CRCs.
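A minimal sketch of the write-path change, using a fake buffer type;
the key point is invoking the write verifier by hand, since
verifiers recalculate the CRC as a side effect:

    #include <stdio.h>

    struct fake_buf;
    struct fake_buf_ops {
        void (*verify_write)(struct fake_buf *bp);
    };
    struct fake_buf {
        const struct fake_buf_ops *b_ops;
        /* data, length, block number would live here */
    };

    static void fake_verify_write(struct fake_buf *bp)
    {
        (void)bp;
        /* a real verifier validates contents and recomputes the CRC */
    }

    /* Sketch: re-CRC the (possibly obfuscated) buffer, then write it
     * to the metadump image. */
    static int write_to_image(struct fake_buf *bp)
    {
        if (bp->b_ops)
            bp->b_ops->verify_write(bp);  /* stamps a fresh CRC */
        /* ...then append bp's data to the image file... */
        return 0;
    }

    int main(void)
    {
        const struct fake_buf_ops ops = { fake_verify_write };
        struct fake_buf bp = { &ops };
        return write_to_image(&bp);
    }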
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:40:54 +0000 (06:40 +0000)]
libxfs: work around do_div() not handling 32 bit numerators
The libxfs dquot buffer code uses do_div() with a 32 bit numerator.
This gives incorrect results because do_div() passes the numerator
by reference as a pointer to a 64 bit value. Hence the division is
done using 32 bits of garbage, which gives the wrong result.
As per Christoph's suggestion, we can kill the usage of do_div()
here completely and just do the division directly, both in userspace
and kernel space.
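The failure mode is easiest to see in miniature; a sketch of why a
32 bit numerator breaks a by-reference do_div()-style macro (a GCC
statement expression, simplified from the arch-specific originals):

    #include <stdio.h>
    #include <stdint.h>

    /* do_div()-style: divides in place, taking the numerator by
     * reference and assuming it is 64 bits wide. */
    #define DO_DIV64(n, base) ({                         \
        uint64_t *__p = (uint64_t *)&(n);                \
        uint32_t __rem = (uint32_t)(*__p % (base));      \
        *__p /= (base);                                  \
        __rem; })

    int main(void)
    {
        uint32_t small = 1000;  /* 32 bit numerator: would misbehave */
        uint64_t big   = 1000;  /* 64 bit numerator: works as intended */

        /*
         * DO_DIV64(small, 10) would read and write 8 bytes of a
         * 4 byte variable - the "32 bits of garbage" above. The fix
         * in libxfs is to drop the macro and divide directly:
         */
        big /= 10;
        printf("big = %llu, small untouched = %u\n",
               (unsigned long long)big, small);
        return 0;
    }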
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:40:53 +0000 (06:40 +0000)]
xfs_db: avoid libxfs buffer lookup warnings
xfs_db is unique in the way it can read the same blocks with
different lengths from disk, so we really need a way to avoid having
duplicate buffers in the cache. To handle this in a generic way,
introduce a "purge on compare failure" feature to libxfs.
What this feature does is instead of throwing a warning when a
buffer miscompare occurs (e.g. due to a length mismatch), it purges
the buffer that is in cache from the cache. We can do this safely in
the context of xfs_db because it always writes back changes made to
buffers before it releases the reference to the buffer. Hence we can
purge buffers directly from the lookup code without having to worry
about whether they are dirty or not.
Doing this purge-on-miscompare operation avoids the
problem that libxfs is currently warning about, and hence if the
feature flag is set then we don't need to warn about miscompares any
more. Hence the whole problem goes away entirely for xfs_db, without
affecting any of the other users of libxfs based IO.
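A minimal sketch of the lookup-side behaviour, with hypothetical
names for the flag and helpers (the real logic sits in libxfs's
buffer cache lookup):

    #include <stdio.h>
    #include <stdbool.h>

    #define CACHE_MISCOMPARE_PURGE 0x1   /* hypothetical feature flag */

    struct bkey { unsigned long bno; unsigned int len; };

    static bool keys_match(const struct bkey *a, const struct bkey *b)
    {
        return a->bno == b->bno && a->len == b->len;
    }

    /* On a miscompare, purge rather than warn when the flag is set,
     * so the caller re-reads with the geometry it asked for. */
    static bool cache_hit_usable(const struct bkey *cached,
                                 const struct bkey *wanted,
                                 unsigned int flags)
    {
        if (keys_match(cached, wanted))
            return true;
        if (flags & CACHE_MISCOMPARE_PURGE) {
            /* safe in xfs_db: dirty buffers were already written
             * back, so evicting here cannot lose changes */
            printf("purging bno %lu len %u\n", cached->bno, cached->len);
        } else {
            printf("Badness in key lookup (length)\n");
        }
        return false;    /* caller falls back to a fresh read */
    }

    int main(void)
    {
        struct bkey cached = { 0x20, 8192 }, wanted = { 0x20, 1024 };
        cache_hit_usable(&cached, &wanted, CACHE_MISCOMPARE_PURGE);
        return 0;
    }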
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Dave Chinner [Wed, 13 Nov 2013 06:40:52 +0000 (06:40 +0000)]
xfs_db: use inode cluster buffers for inode IO
When we mount the filesystem inside xfs_db, libxfs is tasked with
reading some information from disk, such as root inodes. Because
libxfs does this inode reading, it uses inode cluster buffers to
read the inodes. xfs_db, OTOH, just uses FSB-sized buffers to read
inodes, and hence xfs_db throws a warning when reading the root
inode block like so:
$ sudo xfs_db -c "sb 0" -c "p rootino" -c "inode 32" /dev/vda
Version 5 superblock detected. xfsprogs has EXPERIMENTAL support enabled!
Use of these features is at your own risk!
rootino = 32 7f59f20e6740: Badness in key lookup (length)
bp=(bno 0x20, len 8192 bytes) key=(bno 0x20, len 1024 bytes)
$
There is another way this can happen, and that is dumping raw data
from disk using either the "fsb NNN" or "daddr MMM" commands to dump
untyped information. This is always read in sector or filesystem
block units, and so will cause similar badness warnings.
To avoid this problem when reading inodes, teach xfs_db to read
inode clusters rather than individual filesystem blocks when asked
to read an inode.
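The inode-to-cluster mapping is simple arithmetic; a minimal sketch
with illustrative geometry helpers (not xfs_db's actual code):

    #include <stdio.h>

    /*
     * Round the inode's block down to a cluster boundary and read
     * the whole cluster, so xfs_db's buffer matches the one libxfs
     * already cached for the same inodes.
     */
    static void inode_cluster_extent(unsigned long ino_bno,
                                     unsigned int blks_per_cluster,
                                     unsigned long *clus_bno,
                                     unsigned int *clus_len)
    {
        *clus_bno = ino_bno - (ino_bno % blks_per_cluster);
        *clus_len = blks_per_cluster;
    }

    int main(void)
    {
        unsigned long bno;
        unsigned int len;

        /* inode in block 5 of a 2-block cluster: read blocks 4-5 */
        inode_cluster_extent(5, 2, &bno, &len);
        printf("read bno %lu, %u blocks\n", bno, len);
        return 0;
    }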
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rich Johnston <rjohnston@sgi.com>