Kent Overstreet [Mon, 14 Nov 2022 03:43:37 +0000 (22:43 -0500)]
bcachefs: Minor dio write path improvements
This switches where we take quota reservations to be per bch_write_op
instead of per dio_write, so we can drop the quota reservation in the
same place as we call i_sectors_acct(), and only take/release
ei_quota_lock once.
In the future we'd like ei_quota_lock to not be a mutex, so that we can
avoid punting to process context before delivering write completions in
nocow mode.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 14 Nov 2022 03:35:55 +0000 (22:35 -0500)]
bcachefs: Quota: Don't allocate memory under lock
The genradix code can handle multiple threads trying to allocate at the
same time - we don't need the genradix_ptr_alloc() call to happen under
a lock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
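As a general illustration of why allocating under a lock is worth avoiding, here is a minimal userspace sketch of the usual alternative: allocate first, then take the lock only to publish the result. The names and data structure are invented for the example - the actual change simply moves the genradix_ptr_alloc() call outside the lock, since genradix copes with concurrent allocators on its own.

  #include <pthread.h>
  #include <stdlib.h>

  struct entry { long counters[64]; };

  static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
  static struct entry *table[1024];

  /* Allocate outside the lock; hold the lock only to install the pointer. */
  static struct entry *entry_get(unsigned idx)
  {
          struct entry *new = calloc(1, sizeof(*new));
          struct entry *e;

          if (!new)
                  return NULL;

          pthread_mutex_lock(&table_lock);
          e = table[idx];
          if (!e) {
                  table[idx] = new;
                  e = new;
                  new = NULL;
          }
          pthread_mutex_unlock(&table_lock);

          free(new);      /* NULL if ours was installed; the spare otherwise */
          return e;
  }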
Kent Overstreet [Mon, 14 Nov 2022 07:22:30 +0000 (02:22 -0500)]
bcachefs: Fix a use after free
This fixes a regression from percpu freedlists in the btree key cache
code: in a rare error path, we were immediately freeing a bkey_cached
that had been used before and should've waited for an SRCU barrier.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 4 Nov 2022 17:25:57 +0000 (13:25 -0400)]
bcachefs: Factor out two_state_shared_lock
We have a unique lock used for controlling additions to the pagecache: the
lock has two states, both of which are shared - the lock may be held
multiple times in either state - but never in both states at once.
This is exactly what we need for nocow mode locking, so this patch pulls
it out of fs.c into its own file.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
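A minimal userspace sketch of the two-state shared lock idea - a signed counter that is positive while held in one state and negative while held in the other - built on pthreads rather than the primitives the real two_state_shared_lock uses; all names here are illustrative:

  #include <pthread.h>

  struct two_state_lock {
          pthread_mutex_t lock;
          pthread_cond_t  wait;
          long            v;      /* > 0: state A holders, < 0: state B holders */
  };

  static void two_state_lock_init(struct two_state_lock *l)
  {
          pthread_mutex_init(&l->lock, NULL);
          pthread_cond_init(&l->wait, NULL);
          l->v = 0;
  }

  /* state: 0 to take the lock in state A, 1 for state B */
  static void two_state_lock_take(struct two_state_lock *l, int state)
  {
          pthread_mutex_lock(&l->lock);
          if (state)
                  while (l->v > 0)        /* wait for state A holders to drain */
                          pthread_cond_wait(&l->wait, &l->lock);
          else
                  while (l->v < 0)        /* wait for state B holders to drain */
                          pthread_cond_wait(&l->wait, &l->lock);
          l->v += state ? -1 : 1;
          pthread_mutex_unlock(&l->lock);
  }

  static void two_state_unlock(struct two_state_lock *l, int state)
  {
          pthread_mutex_lock(&l->lock);
          l->v -= state ? -1 : 1;
          if (!l->v)
                  pthread_cond_broadcast(&l->wait);
          pthread_mutex_unlock(&l->lock);
  }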
Kent Overstreet [Thu, 3 Nov 2022 04:29:43 +0000 (00:29 -0400)]
bcachefs: Kill BCH_WRITE_FLUSH
BCH_WRITE_FLUSH is a write flag that causes a journal flush. It's only
used in the direct IO path, and this will allow for some consolidation
with the regular fsync path, which will help with the upcoming nocow
mode.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 1 Nov 2022 08:23:24 +0000 (04:23 -0400)]
bcachefs: Improve __bch2_btree_path_make_mut()
btree_path_copy() doesn't need to call
bch2_btree_path_check_sort_fast() - the newly allocated path will always
be in the correct position, post copy; also delete some redundant
branches from __bch2_btree_path_make_mut().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 1 Nov 2022 07:37:53 +0000 (03:37 -0400)]
bcachefs: Inlining improvements
- Don't call into bch2_encrypt_bio() when we're not encrypting
- Pull slowpath out of trans_lock_write()
- Make sure bch2_trans_journal_res_get() gets inlined.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 1 Nov 2022 00:30:27 +0000 (20:30 -0400)]
bcachefs: DIO write path optimization
- With BCH_WRITE_SYNC, we no longer need the completion in struct
dio_write
- Pull out bch2_dio_write_copy_iov() into a separate non-inline
function, it's code that doesn't run in the common case
- Copy mapping and inode pointers into dio_write, avoiding pointer
chasing at the start of bch2_dio_write_loop()
- kthread_use_mm() is not needed in the common case; move it into
bch2_dio_write_loop_async()
- factor out various helpers from bch2_dio_write_loop() and rework
control flow for better icache utilization
Other small optimizations:
- bch2_keylist_free() is only used in one place, at the end of the
bch2_write() path - drop the reinit
- in bch2_disk_reservation_put(), check if res->sectors is nonzero
before touching c->online_reserved, since that will likely be a cache
miss
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
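The last tweak above is a simple guard: skip the shared counter - a likely cache miss - when the reservation holds nothing. A rough userspace sketch, with a plain atomic standing in for the kernel's percpu counter and invented type names:

  #include <stdatomic.h>

  struct disk_reservation {
          unsigned long sectors;
  };

  static _Atomic unsigned long online_reserved;

  static void disk_reservation_put(struct disk_reservation *res)
  {
          /* Only touch the shared counter if we actually hold sectors: */
          if (res->sectors) {
                  atomic_fetch_sub(&online_reserved, res->sectors);
                  res->sectors = 0;
          }
  }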
bcachefs: More DIO write path optimization
Better code prefetching (?)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 23 Oct 2022 21:37:23 +0000 (17:37 -0400)]
bcachefs: Journal keys overlay fixes
- In the btree iterator code that overlays keys from the journal, we
were incorrectly specifying level=0 instead of the btree_path's
current level in a few places
- When we didn't do journal replay, we shouldn't free the journal keys:
this fixes cmd_list and cmd_dump, which run in norecovery mode
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 21 Oct 2022 23:15:07 +0000 (19:15 -0400)]
bcachefs: Move bkey_unpack_key() to bkey.h
Long ago, bkey_unpack_key() was added to bset.h instead of bkey.h
because bkey.h didn't include btree_types.h, which it needs for the
compiled unpack function.
This patch finally moves it to the proper location.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 21 Oct 2022 21:26:49 +0000 (17:26 -0400)]
bcachefs: Don't touch c->flags in bch2_trans_iter_init()
This moves the JOURNAL_REPLAY_DONE flag check from
bch2_trans_iter_init() to bch2_trans_init(), where we stash a copy in
btree_trans - gaining us a small performance improvement.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 19 Oct 2022 22:31:33 +0000 (18:31 -0400)]
bcachefs: Assorted checkpatch fixes
checkpatch.pl gives lots of warnings that we don't want - suggested
ignore list:
ASSIGN_IN_IF
UNSPECIFIED_INT - bcachefs coding style prefers single token type names
NEW_TYPEDEFS - typedefs are occasionally good
FUNCTION_ARGUMENTS - we prefer to look at functions in .c files (hopefully
with docbook documentation), not .h file prototypes
MULTISTATEMENT_MACRO_USE_DO_WHILE - we have _many_ x-macros and other
macros where we can't do this
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
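For reference, checkpatch.pl reads extra command-line options from a .checkpatch.conf file, so a hypothetical version of the suggested ignore list could be kept there:

  --ignore ASSIGN_IN_IF
  --ignore UNSPECIFIED_INT
  --ignore NEW_TYPEDEFS
  --ignore FUNCTION_ARGUMENTS
  --ignore MULTISTATEMENT_MACRO_USE_DO_WHILE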
Kent Overstreet [Fri, 21 Oct 2022 18:01:19 +0000 (14:01 -0400)]
bcachefs: Optimize bch2_dev_usage_read()
- add bch2_dev_usage_read_fast(), which doesn't return by value -
bch_dev_usage is big enough that we don't want the silent memcpy
- tweak the allocation path to only call bch2_dev_usage_read() once per
bucket allocated
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
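The first item is the usual out-parameter transformation for large structs; a simplified sketch with invented types (not the actual bch_dev_usage layout or API):

  struct dev_usage {
          unsigned long buckets[8];
          unsigned long sectors[8];
          unsigned long fragmented[8];
  };

  /*
   * Fast variant: the caller supplies storage, so there is no hidden
   * struct-return copy at every call site.
   */
  static void dev_usage_read_fast(const struct dev_usage *src,
                                  struct dev_usage *ret)
  {
          *ret = *src;
  }

  /* By-value wrapper, kept for callers that don't care about the copy: */
  static struct dev_usage dev_usage_read(const struct dev_usage *src)
  {
          struct dev_usage ret;

          dev_usage_read_fast(src, &ret);
          return ret;     /* returned by value: another, silent copy */
  }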
Daniel Hill [Tue, 11 Oct 2022 08:33:56 +0000 (21:33 +1300)]
bcachefs: fix bch2_write_extent() crc corruption.
crc.compression_type & nonce get reset inside bch2_rechecksum_bio(); we
set them back to the previously calculated values. This fixes
incompressible extents being marked as uncompressed.
Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 17 Oct 2022 11:32:57 +0000 (07:32 -0400)]
bcachefs: Don't issue transaction restart on key cache realloc
This shouldn't be needed anymore, since we no longer rely on the pointer
validity that this was guarding against - we get a new good
reference and save it right after this function.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 17 Oct 2022 11:07:28 +0000 (07:07 -0400)]
bcachefs: bucket_alloc_fail tracepoint should only fire when we have to block
We don't want to fire the bucket_alloc_fail tracepoint on transaction
restart, when we can retry immediately - only when the allocation
actually has to block.
Also, switch from sched_clock() to local_clock(), as we've been doing
elsewhere.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 17 Oct 2022 06:08:07 +0000 (02:08 -0400)]
bcachefs: Btree key cache shrinker fix
The shrinker assumes freed key cache items are ordered by age, so that
it doesn't have to scan the full list to find items that are old enough
(according to the srcu code) to be freed.
But percpu freelists broke this ordering; this patch fixes this by
ensuring we insert items into the proper position.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
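Keeping the freelist age-ordered amounts to a sorted insert; a tiny userspace sketch with an invented freed_entry type (the real code orders bkey_cached items so the shrinker can stop at the first entry that is still too young):

  #include <time.h>

  struct freed_entry {
          struct freed_entry      *next;
          time_t                  freed_at;
  };

  /*
   * Insert so the list stays ordered oldest -> newest even when entries
   * arrive slightly out of order, as they can off percpu freelists.
   */
  static void freed_list_insert(struct freed_entry **head,
                                struct freed_entry *e)
  {
          struct freed_entry **p = head;

          while (*p && (*p)->freed_at <= e->freed_at)
                  p = &(*p)->next;

          e->next = *p;
          *p = e;
  }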
Daniel Hill [Wed, 25 May 2022 04:11:56 +0000 (16:11 +1200)]
bcachefs: make durability a read-write sysfs option
Sometimes the user may need to change durability after formatting to
match the current hardware setup; this option provides a quick and
flexible alternative to removing and then re-adding the device.
It is HIGHLY ADVISED TO RUN REREPLICATE after changing this value so the
system doesn't remain degraded.
Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 15 Oct 2022 07:52:28 +0000 (03:52 -0400)]
bcachefs: Quota fixes
- We now correctly allow soft limits to be exceeded, instead of always
returning -EDQUOT
- Disk quota grace times/warnings can now be set, not just the
systemwide defaults
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 15 Oct 2022 04:47:21 +0000 (00:47 -0400)]
bcachefs: Btree key cache improvements
- In userspace, we don't have real percpu variables; this patch
disables the percpu freelists in userspace
- add some error messages for the asserts in
bch2_fs_btree_key_cache_exit(); we've been hitting this (only in
userspace, oddly), perhaps this will help us track down the error.
- bkey_cached_reuse() should likely be taking the key cache lock; it's
a slowpath, so it doesn't hurt to do so.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 14 Oct 2022 02:52:40 +0000 (22:52 -0400)]
bcachefs: Defer full journal entry validation
On journal read, previously we would do full journal entry validation
immediately after reading a journal entry.
However, this would lead to errors for journal entries we weren't
actually going to use, either because they were too old or too new
(newer than the most recent flush).
We've observed write tearing on journal entries newer than the newest
flush - which makes sense: prior to a flush there are no guarantees about
write persistence.
This patch defers full journal entry validation until the end of the
journal read path, when we know which journal entries we'll want to use.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Daniel Hill [Sat, 6 Aug 2022 02:48:49 +0000 (14:48 +1200)]
bcachefs: Mean and variance
This module provides a fast 64bit implementation of basic statistics
functions, including mean, variance and standard deviation in both
weighted and unweighted variants; the unweighted variant has a 32-bit
limit per sample to prevent overflow when squaring.
Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
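An illustrative userspace sketch of the unweighted case - not the module's actual API or internal representation - accumulating the sum and sum of squares in integers; a 32-bit sample squared still fits comfortably in 64 bits:

  #include <stdint.h>
  #include <math.h>

  struct mean_and_variance {
          uint64_t        n;
          int64_t         sum;            /* sum of samples */
          uint64_t        sum_squares;    /* sum of squared samples */
  };

  static void mv_update(struct mean_and_variance *s, int32_t v)
  {
          s->n++;
          s->sum += v;
          s->sum_squares += (uint64_t)((int64_t)v * v);
  }

  static double mv_mean(const struct mean_and_variance *s)
  {
          return s->n ? (double)s->sum / s->n : 0;
  }

  /* Population variance: E[x^2] - (E[x])^2 */
  static double mv_variance(const struct mean_and_variance *s)
  {
          double m = mv_mean(s);

          return s->n ? (double)s->sum_squares / s->n - m * m : 0;
  }

  static double mv_stddev(const struct mean_and_variance *s)
  {
          return sqrt(mv_variance(s));
  }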
Kent Overstreet [Wed, 12 Oct 2022 20:21:08 +0000 (16:21 -0400)]
bcachefs: Support FS_XFLAG_PROJINHERIT
We already have support for the flag's semantics: inode options are
inherited by children if they were explicitly set on the parent. This
patch just maps the FS_XFLAG_PROJINHERIT flag to the "this option was
explicitly set" bit.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 12 Oct 2022 18:47:58 +0000 (14:47 -0400)]
bcachefs: Initialize sb_quota with default 1 week timer
For compliance with other quota implementations, we should be
initializing quota information with a default 1 week timelimit: this
fixes fstests generic/235.
Also, this adds to_text() functions for some quota structs - useful
debugging aids.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 12 Oct 2022 15:04:28 +0000 (11:04 -0400)]
bcachefs: Call bch2_btree_update_add_new_node() before dropping write lock
btree nodes can be written by other threads (shrinker, journal reclaim)
with only a read lock, but brand new nodes should only be written by the
thread doing the split/interior update. bch2_btree_update_add_new_node()
sets btree node flags to indicate that this is a new node and should not
be written out by other threads, thus we need to call it before dropping
our write lock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 11 Oct 2022 08:32:14 +0000 (04:32 -0400)]
bcachefs: Reflink now respects quotas
This adds a new helper, quota_reserve_range(), which takes a quota
reservation for unallocated blocks in a given file range, and uses it in
bch2_remap_file_range().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 12 Oct 2022 11:58:50 +0000 (07:58 -0400)]
bcachefs: Fix a rare path in bch2_btree_path_peek_slot()
In the drop_alloc tests, we may end up calling
bch2_btree_iter_peek_slot() on an interior level that doesn't exist.
Previously, this would hit the path->uptodate assertion in
bch2_btree_path_peek_slot(); this path now first checks for a NULL btree node,
which is how we know we're at the end of the btree.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 11 Oct 2022 10:37:56 +0000 (06:37 -0400)]
bcachefs: bch2_path_put_nokeep()
The btree iterator code may allocate extra btree paths, temporarily,
that do not refer to keys being returned: we don't need to wait until
transaction restart to drop these; when they're no longer referenced they
should be deleted right away.
This fixes a transaction path overflow bug.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 27 Sep 2022 22:57:34 +0000 (18:57 -0400)]
bcachefs: Btree splits now only take the locks they need
Previously, bch2_btree_update_start() would always take all intent
locks, all the way up to the root.
We've finally got data from users where this became a scalability issue
- so, this patch fixes bch2_btree_update_start() to only take the locks
we need.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 2 Oct 2022 02:15:30 +0000 (22:15 -0400)]
bcachefs: Add error path to btree_split()
The next patch in the series is (finally!) going to change btree splits
(and interior updates in general) to not take intent locks all the way
up to the root - instead only locking the nodes they'll need to modify.
However, this will introduce a race: if we're not holding a
write lock on a btree node it can be written out by another thread, and
then we might not have enough space for a new bset entry.
We can handle this by retrying - we just need to introduce a new error
path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 1 Oct 2022 04:34:02 +0000 (00:34 -0400)]
bcachefs: Write new btree nodes after parent update
In order to avoid locking all btree nodes up to the root for btree node
splits, we're going to have to introduce a new error path into
bch2_btree_insert_node(); this means we can't have done any writes or
modified global state before that point.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 9 Oct 2022 08:55:02 +0000 (04:55 -0400)]
bcachefs: Simplify break_cycle()
We'd like to prioritize aborting transactions that have done less work -
however, it appears breaking cycles by telling other threads to abort
may still be buggy, so disable that for now.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 9 Oct 2022 08:29:04 +0000 (04:29 -0400)]
bcachefs: Print cycle on unrecoverable deadlock
Some lock operations can't fail; a cycle of nofail locks is impossible
to recover from. So we want to get rid of these nofail locking
operations, but as this is tricky it'll be done incrementally.
If such a cycle happens, this patch prints out which codepaths are
involved so we know what to work on next.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 9 Oct 2022 07:32:17 +0000 (03:32 -0400)]
bcachefs: Handle dropping pointers in data_update path
Cached pointers are generally dropped, not moved: this led to an
assertion firing in the data update path when there were no new replicas
being written.
This patch adds a data_options field for pointers to be dropped, and
tweaks move_extent() to check if we're only dropping pointers, not
writing new ones, before kicking off a data update operation.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 3 Oct 2022 20:41:17 +0000 (16:41 -0400)]
bcachefs: Fix a deadlock in btree_update_nodes_written()
btree_node_lock_nopath() is something we'd like to get rid of, it's
always prone to deadlocks if we accidentally are holding other locks,
because it doesn't mark the lock it's taking in a path: we'll want to
get rid of it in the future, but for now this patch works around it by calling
bch2_trans_unlock().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 2 Oct 2022 05:41:08 +0000 (01:41 -0400)]
bcachefs: Improve btree_deadlock debugfs output
This changes bch2_check_for_deadlock() to print the longest chains it
finds - when we have a deadlock because the cycle detector isn't finding
something, this will let us see what it's missing.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 2 Oct 2022 03:54:46 +0000 (23:54 -0400)]
bcachefs: Don't quash error in bch2_bucket_alloc_set_trans()
We were incorrectly returning -BCH_ERR_insufficient_devices when we'd
received a different error from bch2_bucket_alloc_trans(), which
(erroneously) turns into -EROFS further up the call chain.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 28 Sep 2022 14:16:57 +0000 (10:16 -0400)]
bcachefs: Fix a trans path overflow in bch2_btree_delete_range_trans()
bch2_btree_delete_range_trans() was using btree_trans_too_many_iters()
to avoid path overflow, but this was buggy here (and also
btree_trans_too_many_iters() is suspect in general).
btree_trans_too_many_iters() only returns true when we're close to the
maximum number of paths - within 8 - but extent insert/delete assumes
that it can use more paths than that.
Instead, we need to call bch2_trans_begin() on every loop iteration.
Since we don't want to call bch2_trans_begin() (restarting the outer
transaction) if the call was a no-op - if we had no work to do - we have
to structure things a bit oddly.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
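The resulting control flow looks roughly like the sketch below; the names are stand-ins for the real btree transaction API, shown only to make the "restart only after real work" structure concrete:

  struct trans;

  int trans_begin(struct trans *trans);                 /* stand-in */
  int delete_some(struct trans *trans, unsigned long *start,
                  unsigned long end);                   /* stand-in */

  int delete_range(struct trans *trans, unsigned long start,
                   unsigned long end)
  {
          int did_work = 0, ret = 0;

          while (start < end) {
                  /*
                   * Restart only once the previous iteration did work, so
                   * a no-op call doesn't restart the outer transaction:
                   */
                  if (did_work) {
                          ret = trans_begin(trans);
                          if (ret)
                                  break;
                  }

                  ret = delete_some(trans, &start, end);
                  if (ret)
                          break;

                  did_work = 1;
          }

          return ret;
  }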
Kent Overstreet [Mon, 26 Sep 2022 22:13:29 +0000 (18:13 -0400)]
bcachefs: Inline fast path of check_pos_snapshot_overwritten()
This moves the slowpath of check_pos_snapshot_overwritten() to a
separate function, and inlines the fast path - helping performance on
btrees that don't use snapshots and for users that aren't using
snapshots.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
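The shape of the change is the standard fast-path/slow-path split; a hedged sketch with invented names (the real check keys off whether the btree uses snapshots):

  #include <stdbool.h>

  /* Rare, expensive case stays out of line: */
  bool pos_overwritten_slowpath(unsigned btree_id, unsigned long long pos);

  /* Cheap common-case check gets inlined into callers: */
  static inline bool pos_overwritten(bool btree_has_snapshots,
                                     unsigned btree_id,
                                     unsigned long long pos)
  {
          if (!btree_has_snapshots)
                  return false;

          return pos_overwritten_slowpath(btree_id, pos);
  }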
Kent Overstreet [Mon, 26 Sep 2022 20:19:56 +0000 (16:19 -0400)]
bcachefs: Optimize btree_path_alloc()
- move slowpath code to a separate function, btree_path_overflow()
- no need to use hweight64
- copy nr_max_paths from btree_transaction_stats to btree_trans,
avoiding a data dependency in the fast path
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 26 Sep 2022 02:26:48 +0000 (22:26 -0400)]
bcachefs: Run bch2_fs_counters_init() earlier
We need counters to be initialized before initializing shrinkers - the
shrinker callbacks will update those counters. This fixes a segfault in
userspace.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 25 Sep 2022 22:18:48 +0000 (18:18 -0400)]
bcachefs: Improve bch2_fsck_err()
- factor out fsck_err_get()
- if the "bcachefs (%s):" prefix has already been applied, don't
duplicate it
- convert to printbufs instead of static char arrays
- tidy up control flow a bit
- use bch2_print_string_as_lines(), to avoid messages getting truncated
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 25 Sep 2022 20:42:53 +0000 (16:42 -0400)]
bcachefs: bch2_btree_node_relock_notrace()
Most of the node_relock_fail trace events are generated from
bch2_btree_path_verify_level(), when debugcheck_iterators is enabled -
but we're not interested in these trace events - they don't indicate that
we're in a slowpath.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 25 Sep 2022 18:49:14 +0000 (14:49 -0400)]
bcachefs: bch2_btree_cache_scan() improvement
We're still seeing OOM issues caused by the btree node cache shrinker
not sufficiently freeing memory: thus, this patch changes the shrinker
to not exit if __GFP_FS was not supplied.
Instead, tweak btree node memory allocation so that we never invoke
memory reclaim while holding the btree node cache lock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>