Kent Overstreet [Fri, 16 Jun 2023 23:21:21 +0000 (19:21 -0400)]
six locks: Single instance of six_lock_vals
Since we're not generating different versions of the lock functions for
each lock type, the constant propagation we were trying to do before is
no longer useful - this is now a small code size decrease.
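For reference, the per-lock-type behaviour lives in a small lookup table along these lines - a sketch from memory, with illustrative field names and constants rather than the exact six.c definitions:

    static const struct six_lock_vals l[] = {
            [SIX_LOCK_read] = {
                    .lock_val       = 1U << SIX_STATE_READ_OFFSET, /* reader count += 1 */
                    .lock_fail      = SIX_LOCK_HELD_write,         /* can't take while write held */
                    .held_mask      = SIX_LOCK_HELD_read,
                    .unlock_wakeup  = SIX_LOCK_write,
            },
            /* [SIX_LOCK_intent], [SIX_LOCK_write] ... */
    };

With dispatch no longer specialized per lock type (see "Simplify dispatch" below), one shared copy of this table is all that's needed.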
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 16 Jun 2023 22:24:05 +0000 (18:24 -0400)]
six locks: lock->state.seq no longer used for write lock held
lock->state.seq is shortly being moved out of lock->state, to kill the
dependency on atomic64; in preparation for that, write-lock-held is now
indicated by a dedicated write-locked bit rather than by the sequence number.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 19:40:40 +0000 (15:40 -0400)]
six locks: Documentation, renaming
- Expanded and revamped overview documentation in six.h, giving an
overview of all features
- docbook-comments for all external interfaces
- Rename some functions for simplicity, e.g.
six_lock_ip_type() -> six_lock_ip()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 03:57:48 +0000 (23:57 -0400)]
six locks: Kill six_lock_state union
As suggested by Linus, this drops the six_lock_state union in favor of
raw bitmasks.
On the one hand, bitfields give more type-level structure to the code.
However, a significant amount of the code was working with
six_lock_state as a u64/atomic64_t, and the conversions from the
bitfields to the u64 were deemed a bit too out-there.
More significantly, because bitfield order is poorly defined (#ifdef
__LITTLE_ENDIAN_BITFIELD can be used, but is gross), incrementing the
sequence number would overflow into the rest of the bitfield if the
compiler didn't put the sequence number at the high end of the word.
The new code is a bit saner when we're on an architecture without real
atomic64_t support - all accesses to lock->state now go through
atomic64_*() operations.
On architectures with real atomic64_t support, we additionally use
atomic bit ops for setting/clearing individual bits.
Text size: 7467 bytes -> 4649 bytes - compilers still suck at
bitfields.
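Schematically, the state goes from a bitfield union to bare masks and shifts over an atomic64_t - the names below are illustrative, not the actual constants in six.c:

    #include <linux/atomic.h>

    /* illustrative layout only */
    #define SIX_STATE_HELD_read     (1ULL << 0)
    #define SIX_STATE_HELD_intent   (1ULL << 1)
    #define SIX_STATE_HELD_write    (1ULL << 2)
    #define SIX_STATE_SEQ_SHIFT     32

    static inline bool six_write_locked(atomic64_t *state)
    {
            /* all accesses go through atomic64_*(), so architectures without
             * native atomic64_t support use the generic fallback consistently */
            return atomic64_read(state) & SIX_STATE_HELD_write;
    }

    static inline void six_seq_increment(atomic64_t *state)
    {
            /* the sequence number is placed at the high end of the word by
             * construction, so incrementing it can't spill into the lock bits
             * regardless of compiler bitfield layout */
            atomic64_add(1ULL << SIX_STATE_SEQ_SHIFT, state);
    }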
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 01:44:30 +0000 (21:44 -0400)]
six locks: Simplify dispatch
Originally, we used inlining/flattening to cause the compiler to
generate different versions of lock/trylock/relock/unlock for each lock
type - read, intent, and write. This made the individual functions
smaller and let the compiler eliminate table lookups: however, as the
code has gotten more complicated these optimizations have gotten less
worthwhile, and all the tricky inlining and dispatching made the code
less readable.
Text size: 11015 bytes -> 7467 bytes, and benchmarks show no loss of
performance.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 00:37:53 +0000 (20:37 -0400)]
six locks: Centralize setting of waiting bit
Originally, the waiting bit was always set by trylock() on failure:
however, it's now set by __six_lock_type_slowpath(), with wait_lock held
- which is the more correct place to do it.
That made setting the waiting bit in trylock redundant, so this patch
deletes that.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 00:57:55 +0000 (20:57 -0400)]
six locks: Kill six_lock_pcpu_(alloc|free)
six_lock_pcpu_alloc() is an unsafe interface: it's not safe to allocate
or free the percpu reader count on an existing lock that's in use; the
only safe time to allocate percpu readers is when the lock is first
being initialized.
This patch adds a flags parameter to six_lock_init(), and instead of
six_lock_pcpu_free() we now expose six_lock_exit(), which does the same
thing but is less likely to be misused.
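A minimal usage sketch of the resulting interface - the SIX_LOCK_INIT_PCPU flag name is from six.h, the rest is approximate:

    #include "six.h"

    struct six_lock l;

    /* percpu reader counts can now only be requested at init time */
    six_lock_init(&l, SIX_LOCK_INIT_PCPU);

    /* ... normal read/intent/write locking ... */

    /* replaces six_lock_pcpu_free(): tears down the percpu counts, if any */
    six_lock_exit(&l);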
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 22 May 2023 04:49:06 +0000 (00:49 -0400)]
bcachefs: Clear btree_node_just_written() when node reused or evicted
This fixes the following bug:
Journal reclaim attempts to flush a node, but races with the node being
evicted from the btree node cache; when we lock the node, the data
buffers have already been freed.
We don't evict a node that's dirty, so calling btree_node_write() is
fine - it's a noop - except that the btree_node_just_written bit causes
bch2_btree_post_write_cleanup() to run (resorting the node), which then
causes a null ptr deref.
Kent Overstreet [Mon, 15 May 2023 03:01:14 +0000 (23:01 -0400)]
bcachefs: Delete an incorrect bch2_trans_unlock()
This deletes a bch2_trans_unlock() call from __bch2_move_data(). It was
redundant; bch2_move_extent() has the correct unlock call, and it was
buggy because when move_extent calls bch2_extent_drop_ptrs() we don't
want the transaction to be unlocked yet - this fixes a btree_iter.c
assertion.
Kent Overstreet [Sat, 13 May 2023 00:28:54 +0000 (20:28 -0400)]
bcachefs: Replace a BUG_ON() with fatal error
A user hit this BUG_ON() - it's unclear how it happened, so replace it
with a fatal error that will cause us to go read only, and print out
more information.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 8 May 2023 18:23:08 +0000 (14:23 -0400)]
bcachefs: Delete some dead code in bch2_replicas_gc_end()
bch2_replicas_gc_(start|end) is now only used for journal replicas
entries, which don't have bucket sector counts - so this code is
entirely dead and can be deleted.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Thu, 4 May 2023 16:44:15 +0000 (12:44 -0400)]
bcachefs: mark journal replicas before journal write submission
The journal write submission path marks the associated replica
entries for journal data in journal_write_done(), which is just
after journal write bio submission. This creates a small window
where journal entries might have been written out, but the
associated replica is not yet marked, so recovery would not know
that the associated device contains journal data.
Move the replica marking a bit earlier in the write path such that
recovery is guaranteed to recognize that the device contains journal
data in the event of a crash.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 29 Mar 2023 15:18:52 +0000 (11:18 -0400)]
bcachefs: BTREE_ID_snapshot_tree
This adds a new btree which gets us a persistent per-snapshot-tree
identifier.
- BTREE_ID_snapshot_trees
- KEY_TYPE_snapshot_tree
- bch_snapshot now has a field that points to a snapshot_tree
This is going to be used to designate one snapshot ID/subvolume out of a
given tree of snapshots as the "main" subvolume, so that we can do quota
accounting in that subvolume and not the rest.
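Roughly, the new on-disk pieces look like this - field names are from memory and should be treated as approximate, not as the exact bcachefs_format.h definitions:

    struct bch_snapshot_tree {
            struct bch_val          v;
            __le32                  master_subvol;  /* the "main" subvolume */
            __le32                  root_snapshot;
    };

    struct bch_snapshot {
            struct bch_val          v;
            __le32                  flags;
            __le32                  parent;
            __le32                  children[2];
            __le32                  subvol;
            __le32                  tree;           /* new: points at a snapshot_tree key */
    };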
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 30 Apr 2023 23:21:06 +0000 (19:21 -0400)]
bcachefs: bch2_bkey_make_mut() now calls bch2_trans_update()
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_make_mut(),
eliminating a bit of boilerplate.
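The boilerplate being folded away, roughly - a sketch of the call-site pattern, not the literal diff, and the argument lists are approximate:

    /* before: every caller paired the mutable copy with an explicit update */
    new = bch2_bkey_make_mut(trans, k);
    ret = PTR_ERR_OR_ZERO(new) ?:
            bch2_trans_update(trans, &iter, new, 0);

    /* after: bch2_bkey_make_mut() queues the update itself */
    new = bch2_bkey_make_mut(trans, &iter, k, 0);
    ret = PTR_ERR_OR_ZERO(new);

The next two entries apply the same fold to bch2_bkey_get_mut() and bch2_bkey_alloc().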
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 30 Apr 2023 22:46:24 +0000 (18:46 -0400)]
bcachefs: bch2_bkey_get_mut() now calls bch2_trans_update()
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_get_mut(),
eliminating a bit of boilerplate.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 30 Apr 2023 22:59:28 +0000 (18:59 -0400)]
bcachefs: bch2_bkey_alloc() now calls bch2_trans_update()
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_alloc(),
eliminating a bit of boilerplate.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 28 Apr 2023 03:48:33 +0000 (23:48 -0400)]
bcachefs: bch2_bkey_get_mut() improvements
- bch2_bkey_get_mut() now handles types increasing in size, allocating
a buffer for the type's current size when necessary
- bch2_bkey_make_mut_typed()
- bch2_bkey_get_mut() now initializes the iterator, like
bch2_bkey_get_iter()
Also, refactor so that most of the code is in functions - now macros are
only used for wrappers.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- bch2_bkey_get_iter_type() returns -ENOENT if it doesn't find a key of
the correct type
- bch2_bkey_get_val_typed() copies the val out of the btree to a
(typically stack allocated) variable; it handles the case where the
value in the btree is smaller than the current version of the type,
zeroing out the remainder.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 29 Apr 2023 17:24:18 +0000 (13:24 -0400)]
bcachefs: bkey_ops.min_val_size
This adds a new field to bkey_ops for the minimum size of the value;
this standardizes the minimum-size check and also enforces the new rule
(previously applied somewhat ad hoc) that value types may only be
extended by adding new fields on to the end.
To make that work, we do _not_ initialize min_val_size with sizeof;
instead, we initialize it to the size of the first version of those
values.
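Schematically - names and numbers here are illustrative, not copied from the tree:

    /* min_val_size is the size of the *first* version of the value, not
     * sizeof() the current struct, since values may only grow by appending */
    #define bch2_bkey_ops_example ((struct bkey_ops) {          \
            .key_invalid    = example_invalid,                  \
            .min_val_size   = 16,                               \
    })

    /* the check is then done once, generically, instead of per key type: */
    if (bkey_val_bytes(k.k) < bch2_bkey_ops[k.k->type].min_val_size)
            return -EINVAL; /* the real code reports via the invalid-bkey machinery */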
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Mon, 1 May 2023 11:09:33 +0000 (07:09 -0400)]
bcachefs: fix accounting corruption race between reclaim and dev add
When a device is removed from a bcachefs volume, the associated
content is removed from the various btrees. The alloc tree uses the
key cache, so when keys are removed the deletes exist in cache for a
period of time until reclaim comes along and flushes outstanding
updates.
When a device is re-added to the bcachefs volume, the add process
re-adds some of these previously deleted keys. When marking device
superblock locations on device add, the keys will likely refer to
some of the same alloc keys that were just removed. The memory
triggers for these key updates are responsible for further updates,
such as bch2_mark_alloc() calling into bch2_dev_usage_update() to
update per-device usage accounting.
When a new key is added to key cache, the trans update path also
flushes the key to the backing btree for coherency reasons for tree
walks.
With all of this context, if a device is removed and re-added
quickly enough such that some key deletes from the remove are still
pending a key cache flush, the trans update path can view this as
addition of a new key because the old key in the insert entry refers
to a deleted key. However, the deleted cached key was not filled in
because of the absence of a btree key; rather, it reflects an explicit
deletion of an existing key that occurred during device removal.
The trans update path adds a new update to flush the key and tags
the original (cached) update to skip running the memory triggers.
This results in running triggers on the non-cached update instead,
which in turn will perform accounting updates based on incoherent
values. For example, bch2_dev_usage_update() subtracts the old
alloc key dirty sector count in the non-cached btree key from the
newly initialized (i.e. zeroed) per device counters, leading to
underflow and accounting corruption.
There are at least a few ways to avoid this problem, the simplest of
which may be to run triggers against the cached update rather than
the non-cached update. If the key only needs to be flushed when the
key is not present in the tree, however, then this still performs an
unnecessary update. We could potentially use the cached key dirty
state to determine whether the delete is a dirty, cached update vs.
a clean cache fill, but this may require transmitting key cache
dirty state across layers, which adds complexity and seems to be of
limited value. Instead, update flush_new_cached_update() to handle
this by simply checking for the key in the btree and only performing
the flush when a backing key is not present.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 27 Apr 2023 18:02:31 +0000 (14:02 -0400)]
bcachefs: Delete obsolete btree ptr check
This patch deletes a .key_invalid check for btree pointers that only
applies to _very_ old on disk format versions, and potentially
complicates the upgrade process.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 21 Apr 2023 07:42:41 +0000 (03:42 -0400)]
bcachefs: Kill bch2_verify_bucket_evacuated()
With backpointers, it's now impossible for bch2_evacuate_bucket() to be
completely reliable: it can race with an extent being partially
overwritten or split, which needs a new write buffer flush for the
backpointer to be seen.
This shouldn't be a real issue in practice; the previous patch added a
new tracepoint so we'll be able to see more easily if it is.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Wed, 19 Apr 2023 15:47:03 +0000 (11:47 -0400)]
bcachefs: remove bucket_gens btree keys on device removal
If a device has keys in the bucket_gens btree associated with its
buckets and is removed from a bcachefs volume, fsck will complain
about the presence of keys associated with an invalid device index.
A repair removes the associated keys and restores correctness.
Update bch2_dev_remove_alloc() to remove device related keys at
device removal time to avoid the problem.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Tue, 18 Apr 2023 17:05:47 +0000 (13:05 -0400)]
bcachefs: fix NULL bch_dev deref when checking bucket_gens keys
fsck removes bucket_gens keys for devices that do not exist in the
volume (i.e., if the device was removed). In 'fsck -n' mode, the
associated fsck_err_on() wrapper returns false to skip the key
removal. This proceeds on to the rest of the function, which
eventually segfaults on a NULL bch_dev because the device does not
exist.
Update bch2_check_bucket_gens_key() to skip out of the rest of the
function when the associated device does not exist, regardless of
running fsck in check or repair mode.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Mon, 3 Apr 2023 12:17:26 +0000 (08:17 -0400)]
bcachefs: folio pos to bch_folio_sector index helper
Create a small helper to translate from file offset to the
associated bch_folio_sector index in the underlying bch_folio. The
helper assumes the file offset is covered by the passed folio.
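The helper is essentially just the offset within the folio shifted down to sectors; a sketch (the exact name and form in fs-io.c may differ slightly):

    /* index into the bch_folio_sector array for a file offset known to land
     * within this folio */
    static inline unsigned folio_pos_to_s(struct folio *folio, loff_t pos)
    {
            return (pos - folio_pos(folio)) >> SECTOR_SHIFT;
    }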
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 17 Apr 2023 01:49:12 +0000 (21:49 -0400)]
bcachefs: Fix a null ptr deref in fsck check_extents()
It turns out, in rare situations we need to be passing in a disk
reservation, which will be used internally by the transaction commit
path when needed. Pass one in...
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Wed, 29 Mar 2023 14:43:23 +0000 (10:43 -0400)]
bcachefs: use u64 for folio end pos to avoid overflows
Some of the folio_end_*() helpers are prone to overflow of signed
64-bit types because the mapping is only limited by the max value of
loff_t and the associated helpers return the start offset of the
next folio. Therefore, a folio_end_pos() of the max allowable folio in a
mapping returns a value that overflows loff_t.
This makes it hard to rely on such values when doing folio
processing across a range of a file, as bcachefs attempts to do with
the recent folio changes. For example, generic/564 causes problems
in the buffered write path when testing writes at max boundary
conditions.
The current understanding is that the pagecache historically limited
the mapping to one less page to avoid this problem and this was
dropped with some of the folio conversions, but may be reinstated to
properly address the problem. In the meantime, update the internal
folio_end_*() helpers in bcachefs to return a u64, and all of the
associated code to use or cast to u64 to avoid overflow problems.
This allows generic/564 to pass and can be reverted back to using
loff_t if at any point the pagecache subsystem can guarantee these
boundary conditions will not overflow.
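Concretely, the end helpers become something like this (sketch):

    /* u64 rather than loff_t: folio_pos() + folio_size() of the last legal
     * folio in a mapping overflows a signed 64-bit type */
    static inline u64 folio_end_pos(struct folio *folio)
    {
            return folio_pos(folio) + folio_size(folio);
    }

    static inline u64 folio_end_sector(struct folio *folio)
    {
            return folio_end_pos(folio) >> 9;
    }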
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Wed, 29 Mar 2023 15:23:15 +0000 (11:23 -0400)]
bcachefs: clean up post-eof folios on -ENOSPC
The buffered write path batches folio creations in the file mapping
based on the requested size of the write. Under low free space
conditions, it is possible to add a bunch of folios to the mapping
and then return a short write or -ENOSPC due to lack of space. If
this occurs on an extending write, the file size is updated based on
the amount of data successfully written to the file. If folios were
added beyond the final i_size, they may hang around until reclaimed,
truncated or encountered unexpectedly by another operation.
For example, generic/083 reproduces a sequence of events where a
short write leaves around one or more post-EOF folios on an inode, a
subsequent zero range request extends beyond i_size and overlaps
with an aforementioned folio, and __bch2_truncate_folio() happens
across it and complains.
Update __bch2_buffered_write() to keep track of the start offset of
the last folio added to the mapping for a prospective write. After
i_size is updated, check whether this offset starts beyond EOF. If
so, truncate pagecache beyond the latest EOF to clean up any folios
that don't reside at least partially within EOF upon completion of
the write.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Wed, 29 Mar 2023 13:49:04 +0000 (09:49 -0400)]
bcachefs: fix truncate overflow if folio is beyond EOF
generic/083 occasionally reproduces a panic caused by an overflow
when accessing the bch_folio_sector array of the folio being
processed by __bch2_truncate_folio(). The immediate cause of the
overflow is that the folio offset is beyond i_size, and therefore
the sector index calculation underflows on subtraction of the folio
offset.
One cause of this is mainly observed on nocow mounts. When nocow is
enabled, fallocate performs physical block allocation (as opposed to
block reservation in cow mode), which range_has_data() then
interprets as valid data that requires partial zeroing on truncate.
Therefore, if a post-eof zero range request lands across post-eof
preallocated blocks, __bch2_truncate_folio() may actually create a
post-eof folio in order to perform zeroing. To avoid this problem,
update range_has_data() to filter out unwritten blocks from folio
creation and partial zeroing.
Even though we should never create folios beyond EOF like this, the
mere existence of such folios is not necessarily a fatal error. Fix
up the truncate code to warn about this condition and not overflow
the sector array and possibly crash the system. The addition of this
warning without the corresponding unwritten extent fix has shown
that various other fstests are able to reproduce this problem fairly
frequently, but often in ways that don't necessarily result in a
kernel panic or a change in user observable behavior, and therefore
the problem goes undetected.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 27 Mar 2023 20:55:27 +0000 (16:55 -0400)]
bcachefs: Check for folios that don't have bch_folio attached
With large folios, it's now incidentally possible to end up with a
clean, uptodate folio in the page cache that doesn't have a bch_folio
attached, if a folio has to be split.
This patch fixes __bch2_truncate_folio() to check for this; other code
paths appear to handle it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 17 Mar 2023 18:55:53 +0000 (14:55 -0400)]
bcachefs: Initial folio conversion
This converts fs-io.c to pass folios, not pages. We're not handling
large folios yet, and there are no functional changes in this patch - just
a lot of churn doing the initial type conversions.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 10 Mar 2023 22:34:29 +0000 (17:34 -0500)]
bcachefs: Improve trace_move_extent_fail()
This greatly expands the move_extent_fail tracepoint - now it includes
all the information we have available, including exactly why the extent
wasn't updated.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Wed, 22 Mar 2023 12:27:58 +0000 (08:27 -0400)]
bcachefs: use reservation for log messages during recovery
If we block on journal reservation attempting to log journal
messages during recovery, particularly for the first message(s)
before we start doing actual work, chances are the filesystem ends
up deadlocked.
Allow logged messages to use reserved journal space to mitigate this
problem. In the worst case where no space is available whatsoever,
this at least allows the fs to recognize that the journal is stuck
and fail the mount gracefully.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 30 Mar 2023 02:47:30 +0000 (22:47 -0400)]
bcachefs: Data update path no longer leaves cached replicas
It turns out that it's currently impossible to invalidate buckets
containing only cached data if they're part of a stripe. The normal
bucket invalidate path can't do it because we have to be able to
increment the bucket's gen, which isn't correct because it's still a
member of the stripe - and the bucket invalidate path makes the bucket
available for reuse right away, which also isn't correct for buckets in
stripes.
What would work is invalidating cached data by following backpointers,
except that cached replicas don't currently get backpointers - because
they would be awkward for the existing bucket invalidate path to delete
and they haven't been needed elsewhere.
So for the time being, to prevent running out of space in stripes,
switch the data update path to not leave cached replicas; we may revisit
this in the future.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 11 Mar 2023 19:44:41 +0000 (14:44 -0500)]
bcachefs: Rhashtable based buckets_in_flight for copygc
Previously, copygc used a fifo for tracking buckets in flight - this had
the disadvantage of being fixed size, since we pass references to
elements into the move code.
This restructures it to be a hash table and linked list, since with
erasure coding we need to be able to pipeline across an arbitrary number
of buckets.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 29 Mar 2023 17:10:36 +0000 (13:10 -0400)]
bcachefs: Use BTREE_ITER_INTENT in ec_stripe_update_extent()
This adds a flags param to bch2_backpointer_get_key() so that we can
pass BTREE_ITER_INTENT, since ec_stripe_update_extent() is updating the
extent immediately.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 23 Mar 2023 01:22:51 +0000 (21:22 -0400)]
bcachefs: bch2_dev_freespace_init() Print out status every 10 seconds
It appears freespace init can still take a while, and we've had a report
or two of it getting stuck - let's have it print out where it's at every
10 seconds.
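The usual jiffies-based progress print, inside the per-bucket loop - a sketch; the exact message and variables in bch2_dev_freespace_init() may differ:

    unsigned long last_updated = jiffies;

    /* per-bucket loop body: */
    if (time_after(jiffies, last_updated + HZ * 10)) {
            bch_info(c, "%s: currently at %llu/%llu",
                     __func__, iter.pos.offset, ca->mi.nbuckets);
            last_updated = jiffies;
    }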
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 23 Mar 2023 00:48:37 +0000 (20:48 -0400)]
bcachefs: Run freespace init in device hot add path
As in recovery and device add, we have to check for devices whose
freespace btree hasn't been initialized yet - this was missed in the
device hot add path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 21 Mar 2023 16:18:10 +0000 (12:18 -0400)]
bcachefs: Call bch2_path_put_nokeep() before bch2_path_put()
bch2_path_put_nokeep() is sketchy, and we should consider removing it:
it unconditionally frees btree_paths once their ref hits 0.
The assumption is that we only use it for paths that have never been
visible outside the btree core btree code; i.e. higher level code will
never be making assumptions about locking based on these paths.
However, there's subtle brokenness with this approach:
- If we call bch2_path_put(), then bch2_path_put_nokeep(),
bch2_path_put() may free the first path on the assumption that we
have another path keeping a node locked - but then
bch2_path_put_nokeep() just unconditionally frees it.
The same bug may arise if we're calling bch2_path_put() and
bch2_path_put_nokeep() on the same (refcounted) path, or two adjacent
paths that point to the same btree node.
This patch hacks around one of these bugs by calling
bch2_path_put_nokeep() first in bch2_trans_iter_exit.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Tue, 21 Mar 2023 12:09:16 +0000 (08:09 -0400)]
bcachefs: drop unnecessary journal stuck check from space calculation
The journal stuck check in bch2_journal_space_available() is
particularly aggressive and can lead to premature shutdown in some
rare cases. This is difficult to reproduce, but since it comes along
with a fatal error it is worth being cautious about.
For example, we've seen instances where the journal is under heavy
reservation pressure, the journal allocation path transitions into
the final available journal bucket, the journal write path
immediately consumes that bucket and calls into
bch2_journal_space_available(), which then in turn flags the journal
as stuck because there is no available space and shuts down the
filesystem instead of submitting the journal write (that would have
otherwise succeeded).
To avoid this problem, simplify the journal stuck checking by just
relying on the higher level logic in the journal reservation path.
This produces more useful debug output and is a more reliable
indicator that things have bogged down.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Tue, 21 Mar 2023 12:03:18 +0000 (08:03 -0400)]
bcachefs: refactor journal stuck checking into standalone helper
bcachefs checks for journal stuck conditions both in the journal
space calculation code and the journal reservation slow path. The
logic in both places is rather tricky and can result in
non-deterministic failure characteristics and debug output.
In preparation to condense journal stuck handling to a single place,
refactor the __journal_res_get() logic into a standalone helper.
Since multiple callers into the reservation code can result in
duplicate reports, use the ->err_seq field as a serialization
mechanism for the debug dump. Finally, add some comments to help
explain the logic and hopefully facilitate further improvements in
the future.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Mon, 20 Mar 2023 17:21:19 +0000 (13:21 -0400)]
bcachefs: gracefully unwind journal res slowpath on shutdown
bcachefs detects journal stuck conditions in a couple different
places. If the logic in the journal reservation slow path happens to
detect the problem, I've seen instances where the filesystem remains
deadlocked even though it has been shut down. This is occasionally
reproduced by generic/333, and usually manifests as one or more
tasks stuck in the journal reservation slow path.
To help avoid this problem, repeat the journal error check in
__journal_res_get() once under spinlock to cover the case where the
previous lock holder might have triggered shutdown. This also helps
avoid spurious/duplicate stuck reports. Also, wake the journal from
the halt code to make sure blocked callers of the journal res
slowpath have a chance to wake up and observe the pending error.
This survives an overnight looping run of generic/333 without the
aforementioned lockups.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Fri, 17 Mar 2023 12:54:01 +0000 (08:54 -0400)]
bcachefs: more aggressive fast path write buffer key flushing
The btree write buffer flush code is prone to causing journal
deadlock due to inefficient use and release of reservation space.
Reservation is not pre-reserved for write buffered keys (as is done
for key cache keys, for example), because the write buffer flush
side uses a fast path that attempts insertion without need for any
reservation at all.
The write buffer flush attempts to deal with this by inserting keys
using the BTREE_INSERT_JOURNAL_RECLAIM flag to return an error on
journal reservations that require blocking. Upon first error, it
falls back to a slow path that inserts in journal order and supports
moving the associated journal pin forward.
The problem is that under pathological conditions (i.e. smaller log,
larger write buffer and journal reservation pressure), we've seen
instances where the fast path fails fairly quickly without having
completed many insertions, and then the slow path is unable to push
the journal pin forward enough to free up the space it needs to
completely flush the buffer. This problem is occasionally reproduced
by fstest generic/333.
To avoid this problem, update the fast path algorithm to skip key
inserts that fail due to inability to acquire needed journal
reservation without immediately breaking out of the loop. Instead,
insert as many keys as possible, zap the sequence numbers to mark
them as processed, and then fall back to the slow path to process
the remaining set in journal order. This reduces the amount of
journal reservation that might be required to flush the entire
buffer and increases the odds that the slow path is able to move the
journal pin forward and free up space as keys are processed.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Thu, 23 Mar 2023 18:09:05 +0000 (14:09 -0400)]
bcachefs: use dedicated workqueue for tasks holding write refs
A workqueue resource deadlock has been observed when running fsck
on a filesystem with a full/stuck journal. fsck is not currently
able to repair the fs due to fairly rapid emergency shutdown, but
rather than exit gracefully the fsck process hangs during the
shutdown sequence. Fortunately this is easily recoverable from
userspace, but the root cause involves code shared between the
kernel and userspace and so should be addressed.
The deadlock scenario involves the main task in the bch2_fs_stop()
-> bch2_fs_read_only() path waiting on write references to drain
with the fs state lock held. A bch2_read_only_work() workqueue task
is scheduled on the system_long_wq, blocked on the state lock.
Finally, various other write ref holding workqueue tasks are
scheduled to run on the same workqueue and must complete in order to
release references that the initial task is waiting on.
To avoid this problem, we can split the dependent workqueue tasks
across different workqueues. It's a bit of a waste to create a
dedicated wq for the read-only worker, but there are several tasks
throughout the fs that follow the pattern of acquiring a write
reference and then scheduling to the system wq. Use a local wq
for such tasks to break the subtle dependency between these and the
read-only worker.
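The shape of the fix, roughly - a sketch; the actual workqueue name and flags in the patch may differ:

    /* at fs init: a filesystem-local wq for work items that take a write ref */
    c->write_ref_wq = alloc_workqueue("bcachefs_write_ref", WQ_FREEZABLE, 0);

    /* such work items now queue here rather than on the system wq, so they
     * can't end up serialized behind the read-only worker */
    queue_work(c->write_ref_wq, &some_write_ref_holding_work);

where some_write_ref_holding_work stands in for any of the existing work items that follow the write-ref pattern.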
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 19 Mar 2023 18:29:51 +0000 (14:29 -0400)]
bcachefs: Make reconstruct_alloc quieter
We shouldn't be printing out fsck errors for expected errors - this
helps make test logs more readable, and makes it easier to see what the
actual failure was.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 19 Mar 2023 18:13:17 +0000 (14:13 -0400)]
bcachefs: Fix an unhandled transaction restart error
This is a bit awkward: we're passing around a btree_trans, but we're not
in a context where transaction restarts are handled - we should try to
come up with a better way to denote situations like this.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 19 Mar 2023 17:01:06 +0000 (13:01 -0400)]
bcachefs: Fix nocow write path closure bug
With regular waitlists, we need to ensure we always call finish_wait().
With closures, the equivalent is that we need to call closure_sync()
before returning with a stack-allocated closure.
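The invariant, in sketch form:

    #include <linux/closure.h>

    struct closure cl;

    closure_init_stack(&cl);

    /* ... issue work that may take references on &cl ... */

    /* analogous to finish_wait(): don't return while other code may still
     * hold a reference to a stack-allocated closure */
    closure_sync(&cl);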
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>