Peter Zijlstra [Sat, 4 Apr 2026 10:22:44 +0000 (12:22 +0200)]
sched/deadline: Use revised wakeup rule for dl_server
John noted that commit 115135422562 ("sched/deadline: Fix 'stuck' dl_server")
unfixed the issue from commit a3a70caf7906 ("sched/deadline: Fix dl_server
behaviour").
The issue in commit 115135422562 was for wakeups of the server after the
deadline, in which case you *have* to start a new period. The case for
a3a70caf7906 is wakeups before the deadline.
Now, because the server is effectively running a least-laxity policy, it means
that any wakeup during the runnable phase means dl_entity_overflow() will be
true. This means we need to adjust the runtime to allow it to still run until
the existing deadline expires.
Use the revised wakeup rule for dl_defer entities.
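For reference, this is essentially the CBS "revised wakeup rule"; a minimal
sketch of its shape (mirroring update_dl_revised_wakeup(), where dl_density
is dl_runtime / dl_deadline in BW_SHIFT fixed point):

  /*
   * On a wakeup before the deadline, truncate the remaining runtime so
   * the entity can still run until the existing deadline without
   * exceeding its density.
   */
  u64 laxity = dl_se->deadline - rq_clock(rq);

  dl_se->runtime = (dl_se->dl_density * laxity) >> BW_SHIFT;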
Fixes: 115135422562 ("sched/deadline: Fix 'stuck' dl_server")
Reported-by: John Stultz <jstultz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Tested-by: John Stultz <jstultz@google.com>
Link: https://patch.msgid.link/20260404102244.GB22575@noisy.programming.kicks-ass.net
thermal: core: Suspend thermal zones later and resume them earlier
To avoid some undesirable interactions between thermal zone suspend and
resume and the user space that is running while those operations are
carried out, move them closer to the suspend and resume of devices,
respectively, by updating dpm_prepare() to carry out thermal zone
suspend and dpm_complete() to start thermal zone resume (that will
continue asynchronously).
This also makes the code easier to follow by removing one, arguably
redundant, level of indirection represented by the thermal PM notifier.
Define thermal_class as a static structure to simplify thermal_init()
and to simplify thermal class availability checks that will need to
be carried out during the suspend and resume of thermal zones after
subsequent changes.
thermal: core: Drop redundant check from thermal_zone_device_update()
Since __thermal_zone_device_update() checks if tz->state is
TZ_STATE_READY and bails out immediately otherwise, it is not
necessary to check the thermal_zone_is_present() return value in
thermal_zone_device_update(). Namely, tz->state is equal to
TZ_STATE_FLAG_INIT initially and that flag is only cleared in
thermal_zone_init_complete() after adding tz to the list of thermal
zones, and thermal_zone_exit() sets TZ_STATE_FLAG_EXIT in tz->state
while removing tz from that list. Thus tz->state is not TZ_STATE_READY
when tz is not in the list and the check mentioned above is redundant.
Accordingly, drop the redundant thermal_zone_is_present() check from
thermal_zone_device_update() and drop the former altogether because it
has no more users.
thermal: core: Free thermal zone ID later during removal
The thermal zone removal ordering is different from the thermal zone
registration rollback path ordering and the former is arguably
problematic because freeing a thermal zone ID prematurely may cause
it to be used during the registration of another thermal zone which
may fail as a result.
Prevent that from occurring by changing the thermal zone removal
ordering to reflect the thermal zone registration rollback path
ordering.
Also move the ida_destroy() call from thermal_zone_device_unregister()
to thermal_release() for consistency.
thermal: core: Fix thermal zone governor cleanup issues
If thermal_zone_device_register_with_trips() fails after adding
a thermal governor to the thermal zone being registered, the
governor is not removed from it as appropriate, which may lead to
a memory leak.
In turn, thermal_zone_device_unregister() calls thermal_set_governor()
without acquiring the thermal zone lock beforehand which may race with
a governor update via sysfs and may lead to a use-after-free in that
case.
Address these issues by adding two thermal_set_governor() calls, one to
thermal_release() to remove the governor from the given thermal zone,
and one to the thermal zone registration error path to cover failures
preceding the thermal zone device registration.
Fixes: e33df1d2f3a0 ("thermal: let governors have private data for each thermal zone")
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/5092923.31r3eYUQgx@rafael.j.wysocki
Miguel Ojeda [Wed, 8 Apr 2026 08:44:46 +0000 (10:44 +0200)]
Merge tag 'pin-init-v7.1' of https://github.com/Rust-for-Linux/linux into rust-next
Pull pin-init updates from Benno Lossin:
- Replace the 'Zeroable' impls for 'Option<NonZero*>' with impls of
'ZeroableOption' for 'NonZero*'.
- Improve feature gate handling for unstable features.
- Declutter the documentation of implementations of 'Zeroable' for
tuples.
- Replace uses of 'addr_of[_mut]!' with '&raw [mut]'.
* tag 'pin-init-v7.1' of https://github.com/Rust-for-Linux/linux:
rust: pin-init: replace `addr_of_mut!` with `&raw mut`
rust: pin-init: implement ZeroableOption for NonZero* integer types
rust: pin-init: doc: de-clutter documentation with fake-variadics
rust: pin-init: properly document let binding workaround
rust: pin-init: build: simplify use of nightly features
Miguel Ojeda [Wed, 8 Apr 2026 07:46:01 +0000 (09:46 +0200)]
Merge tag 'rust-timekeeping-for-v7.1' of https://github.com/Rust-for-Linux/linux into rust-next
Pull timekeeping updates from Andreas Hindborg:
- Expand the example section in the 'HrTimer' documentation.
- Mark the 'ClockSource' trait as unsafe to ensure valid values for
'ktime_get()'.
- Add 'Delta::from_nanos()'.
This is a back merge since the pull request has a newer base -- we will
avoid that in the future.
And, given it is a back merge, it happens to resolve the "subtle" conflict
around '--remap-path-{prefix,scope}' that I discussed in linux-next [1],
plus a few other common conflicts. The result matches what we did for
next-20260407.
The actual diffstat (i.e. using a temporary merge of upstream first) is:
Currently the probe does not check whether getting the regmap succeeded.
This can cause a crash when the regmap is used if it wasn't successfully
obtained. Failing to get the regmap is unlikely, especially since this
driver is expected to be kicked by the MFD driver only after registering
the regmap - but it is still better to handle this gracefully.
Input: uinput - fix circular locking dependency with ff-core
A lockdep circular locking dependency warning can be triggered
reproducibly when using a force-feedback gamepad with uinput (for
example, playing ELDEN RING under Wine with a Flydigi Vader 5
controller):
The cycle is caused by four lock acquisition paths:
1. ff upload: input_ff_upload() holds ff->mutex and calls
uinput_dev_upload_effect() -> uinput_request_submit() ->
uinput_request_send(), which acquires udev->mutex.
2. device create: uinput_ioctl_handler() holds udev->mutex and calls
uinput_create_device() -> input_register_device(), which acquires
input_mutex.
3. device register: input_register_device() holds input_mutex and
calls kbd_connect() -> input_register_handle(), which acquires
dev->mutex.
4. evdev release: evdev_release() calls input_flush_device() under
dev->mutex, which calls input_ff_flush() acquiring ff->mutex.
Fix this by introducing a new state_lock spinlock to protect
udev->state and udev->dev access in uinput_request_send() instead of
acquiring udev->mutex. The function only needs to atomically check
device state and queue an input event into the ring buffer via
uinput_dev_event() -- both operations are safe under a spinlock
(ktime_get_ts64() and wake_up_interruptible() do not sleep). This
breaks the ff->mutex -> udev->mutex link since a spinlock is a leaf in
the lock ordering and cannot form cycles with mutexes.
To keep state transitions visible to uinput_request_send(), protect
writes to udev->state in uinput_create_device() and
uinput_destroy_device() with the same state_lock spinlock.
Additionally, move init_completion(&request->done) from
uinput_request_send() to uinput_request_submit() before
uinput_request_reserve_slot(). Once the slot is allocated,
uinput_flush_requests() may call complete() on it at any time from
the destroy path, so the completion must be initialised before the
request becomes visible.
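A minimal sketch of the resulting send path, based on the description above
rather than the exact patch (field and helper names are taken from the text):

  static int uinput_request_send(struct uinput_device *udev,
                                 struct uinput_request *request)
  {
          unsigned long flags;
          int retval = 0;

          spin_lock_irqsave(&udev->state_lock, flags);
          if (udev->state != UIST_CREATED) {
                  retval = -ENODEV;
          } else {
                  /*
                   * Queuing the event only touches the ring buffer and
                   * calls wake_up_interruptible(); neither sleeps, so a
                   * spinlock is enough, and as a leaf lock it cannot
                   * re-create the mutex cycle above.
                   */
                  uinput_dev_event(udev->dev, EV_UINPUT,
                                   request->code, request->id);
          }
          spin_unlock_irqrestore(&udev->state_lock, flags);

          return retval;
  }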
====================
seg6: fix dst_cache sharing in seg6 lwtunnel
The seg6 lwtunnel encap uses a single per-route dst_cache shared
between seg6_input_core() and seg6_output_core(). These two paths
can perform the post-encap SID lookup in different routing contexts
(e.g., ip rules matching on the ingress interface, or VRF table
separation). Whichever path runs first populates the cache, and the
other reuses it blindly, bypassing its own lookup.
Patch 1 fixes this by splitting the cache into cache_input and
cache_output. Patch 2 adds a selftest that validates the isolation.
====================
Andrea Mayer [Sat, 4 Apr 2026 00:44:05 +0000 (02:44 +0200)]
selftests: seg6: add test for dst_cache isolation in seg6 lwtunnel
Add a selftest that verifies the dst_cache in seg6 lwtunnel is not
shared between the input (forwarding) and output (locally generated)
paths.
The test creates three namespaces (ns_src, ns_router, ns_dst)
connected in a line. An SRv6 encap route on ns_router encapsulates
traffic destined to cafe::1 with SID fc00::100. The SID is
reachable only for forwarded traffic (from ns_src) via an ip rule
matching the ingress interface (iif veth-r0 lookup 100), and
blackholed in the main table.
The test verifies that:
1. A packet generated locally on ns_router does not reach
ns_dst with an empty cache, since the SID is blackholed;
2. A forwarded packet from ns_src populates the input cache
from table 100 and reaches ns_dst;
3. A packet generated locally on ns_router still does not
reach ns_dst after the input cache is populated,
confirming the output path does not reuse the input
cache entry.
Both the forwarded and local packets are pinned to the same CPU
with taskset, since dst_cache is per-cpu.
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it>
Reviewed-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Reviewed-by: Justin Iurman <justin.iurman@gmail.com>
Link: https://patch.msgid.link/20260404004405.4057-3-andrea.mayer@uniroma2.it
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Andrea Mayer [Sat, 4 Apr 2026 00:44:04 +0000 (02:44 +0200)]
seg6: separate dst_cache for input and output paths in seg6 lwtunnel
The seg6 lwtunnel uses a single dst_cache per encap route, shared
between seg6_input_core() and seg6_output_core(). These two paths
can perform the post-encap SID lookup in different routing contexts
(e.g., ip rules matching on the ingress interface, or VRF table
separation). Whichever path runs first populates the cache, and the
other reuses it blindly, bypassing its own lookup.
Fix this by splitting the cache into cache_input and cache_output,
so each path maintains its own cached dst independently.
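A minimal sketch of the resulting layout, assuming the seg6_lwt private
data structure used by the seg6 lwtunnel code:

  struct seg6_lwt {
          struct dst_cache cache_input;   /* used by seg6_input_core()  */
          struct dst_cache cache_output;  /* used by seg6_output_core() */
          struct seg6_iptunnel_encap tuninfo[];
  };

Each path now queries and populates only its own cache, so a dst looked up
in the forwarding context can no longer be blindly reused for locally
generated traffic, and vice versa.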
Fixes: 6c8702c60b88 ("ipv6: sr: add support for SRH encapsulation and injection with lwtunnels")
Cc: stable@vger.kernel.org
Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it>
Reviewed-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Reviewed-by: Justin Iurman <justin.iurman@gmail.com>
Link: https://patch.msgid.link/20260404004405.4057-2-andrea.mayer@uniroma2.it
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Daniel Golle [Sun, 5 Apr 2026 21:29:19 +0000 (22:29 +0100)]
selftests: net: bridge_vlan_mcast: wait for h1 before querier check
The querier-interval test adds h1 (currently a slave of the VRF created
by simple_if_init) to a temporary bridge br1 acting as an outside IGMP
querier. The kernel VRF driver (drivers/net/vrf.c) calls cycle_netdev()
on every slave add and remove, toggling the interface admin-down then up.
Phylink takes the PHY down during the admin-down half of that cycle.
Since h1 and swp1 are cable-connected, swp1 also loses its link and may
need several seconds to re-negotiate.
Use setup_wait_dev $h1 0 which waits for h1 to return to UP state, so the
test can rely on the link being back up at this point.
Jakub Kicinski [Sat, 4 Apr 2026 00:19:38 +0000 (17:19 -0700)]
net: avoid nul-deref trying to bind mp to incapable device
Sashiko points out that we use qops in __net_mp_open_rxq()
but never validate that they are non-NULL. This was introduced when
the check was moved from netdev_rx_queue_restart().
Look at ops directly instead of the locking config.
qops imply netdev_need_ops_lock(). We used netdev_need_ops_lock()
initially to signify that the real_num_rx_queues check below
is safe without rtnl_lock, but I'm not sure if this is actually
clear to most people, anyway.
Fixes: da7772a2b4ad ("net: move mp->rx_page_size validation to __net_mp_open_rxq()")
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Link: https://patch.msgid.link/20260404001938.2425670-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Johan Alvarado [Mon, 6 Apr 2026 07:44:25 +0000 (07:44 +0000)]
net: stmmac: dwmac-motorcomm: fix eFUSE MAC address read failure
This patch fixes an issue where reading the MAC address from the eFUSE
fails due to a race condition.
The root cause was identified by comparing the driver's behavior with a
custom U-Boot port. In U-Boot, the MAC address was read successfully
every time because the driver was loaded later in the boot process, giving
the hardware ample time to initialize. In Linux, reading the eFUSE
immediately returns all zeros, resulting in a fallback to a random MAC address.
Hardware cold-boot testing revealed that the eFUSE controller requires a
short settling time to load its internal data. Adding a 2000-5000us
delay after the reset ensures the hardware is fully ready, allowing the
native MAC address to be read consistently.
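A minimal sketch of the fix; the reset handle and the eFUSE read helper are
assumptions for illustration, only usleep_range() and its bounds come from
the description:

  ret = reset_control_deassert(priv->rst);       /* hypothetical reset handle */
  if (ret)
          return ret;

  /* the eFUSE controller needs time to load its internal data */
  usleep_range(2000, 5000);

  ret = yt6801_read_mac_from_efuse(priv, addr);  /* hypothetical helper */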
Fixes: 02ff155ea281 ("net: stmmac: Add glue driver for Motorcomm YT6801 ethernet controller")
Reported-by: Georg Gottleuber <ggo@tuxedocomputers.com>
Closes: https://lore.kernel.org/24cfefff-1233-4745-8c47-812b502d5d19@tuxedocomputers.com
Signed-off-by: Johan Alvarado <contact@c127.dev>
Reviewed-by: Yao Zi <me@ziyao.cc>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/fc5992a4-9532-49c3-8ec1-c2f8c5b84ca1@smtp-relay.sendinblue.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
John Pavlick [Mon, 6 Apr 2026 13:23:33 +0000 (13:23 +0000)]
net: sfp: add quirks for Hisense and HSGQ GPON ONT SFP modules
Several GPON ONT SFP sticks based on Realtek RTL960x report
1000BASE-LX at 1300MBd in their EEPROM but can operate at 2500base-X.
On hosts capable of 2500base-X (e.g. Banana Pi R3 / MT7986), the
kernel negotiates only 1G because it trusts the incorrect EEPROM data.
Each quirk advertises 2500base-X and ignores TX_FAULT during the
module's ~40s Linux boot time.
Tested on Banana Pi R3 (MT7986) with OpenWrt 25.12.1, confirmed
2.5Gbps link and full throughput with flow offloading.
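For context, sfp.c quirks are table entries keyed on the EEPROM vendor and
part strings; a sketch of what one such entry looks like (the helpers exist
in drivers/net/phy/sfp.c, the vendor/part strings here are illustrative):

  /* advertise 2500base-X despite the 1000BASE-LX EEPROM and ignore
   * TX_FAULT while the module's embedded system boots */
  SFP_QUIRK("HISENSE", "LTE3415-SH", sfp_quirk_2500basex,
            sfp_fixup_ignore_tx_fault),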
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Suggested-by: Marcin Nita <marcin.nita@leolabs.pl>
Signed-off-by: John Pavlick <jspavlick@posteo.net>
Link: https://patch.msgid.link/20260406132321.72563-1-jspavlick@posteo.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
When a CREATE returns STATUS_STOPPED_ON_SYMLINK, smb2_check_message()
returns success without any length validation, leaving the symlink
parsers as the only defense against an untrusted server.
symlink_data() walks SMB 3.1.1 error contexts with the loop test "p <
end", but reads p->ErrorId at offset 4 and p->ErrorDataLength at offset
0. When the server-controlled ErrorDataLength advances p to within 1-7
bytes of end, the next iteration will read past it. When the matching
context is found, sym->SymLinkErrorTag is read at offset 4 from
p->ErrorContextData with no check that the symlink header itself fits.
smb2_parse_symlink_response() then bounds-checks the substitute name
using SMB2_SYMLINK_STRUCT_SIZE as the offset of PathBuffer from
iov_base. That value is computed as sizeof(smb2_err_rsp) +
sizeof(smb2_symlink_err_rsp), which is correct only when
ErrorContextCount == 0.
With at least one error context the symlink data sits 8 bytes deeper,
and each skipped non-matching context shifts it further by 8 +
ALIGN(ErrorDataLength, 8). The check is too short, allowing the
substitute name read to run past iov_len. The out-of-bound heap bytes
are UTF-16-decoded into the symlink target and returned to userspace via
readlink(2).
Fix this all up by making the loop's test require the full context header
to fit, rejecting sym if its header runs past end, and bounding the
substitute name against the actual position of sym->PathBuffer rather
than a fixed offset.
Because sub_offs and sub_len are 16 bits, the pointer math will not
overflow here with the new greater-than check.
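A sketch of the hardened walk under those rules (smb2_error_context_rsp has
ErrorDataLength at offset 0 and ErrorId at offset 4 as noted above;
is_symlink_context() stands in for the ErrorId match and is hypothetical):

  /* the full 8-byte context header must fit before dereferencing it */
  while ((u8 *)p + sizeof(*p) <= end) {
          if (is_symlink_context(p)) {
                  sym = (struct smb2_symlink_err_rsp *)p->ErrorContextData;
                  /* the symlink header itself must fit as well */
                  if ((u8 *)sym + sizeof(*sym) > end)
                          return -EINVAL;
                  break;
          }
          p = (void *)p + 8 + ALIGN(le32_to_cpu(p->ErrorDataLength), 8);
  }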
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Cc: Shyam Prasad N <sprasad@microsoft.com>
Cc: Tom Talpey <tom@talpey.com>
Cc: Bharath SM <bharathsm@microsoft.com>
Cc: linux-cifs@vger.kernel.org
Cc: samba-technical@lists.samba.org
Cc: stable <stable@kernel.org>
Reviewed-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
Assisted-by: gregkh_clanker_t1000
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
smb: client: fix off-by-8 bounds check in check_wsl_eas()
The bounds check uses (u8 *)ea + nlen + 1 + vlen as the end of the EA
name and value, but ea_data sits at offset sizeof(struct
smb2_file_full_ea_info) = 8 from ea, not at offset 0. The strncmp()
later reads ea->ea_data[0..nlen-1] and the value bytes follow at
ea_data[nlen+1..nlen+vlen], so the actual end is ea->ea_data + nlen + 1
+ vlen. Isn't pointer math fun?
The earlier check (u8 *)ea > end - sizeof(*ea) only guarantees the
8-byte header is in bounds, but since the last EA is placed within 8
bytes of the end of the response, the name and value bytes are read past
the end of iov.
Fix this mess all up by using ea->ea_data as the base for the bounds
check.
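A minimal sketch of the corrected check (names per the description above):

  /* the name, its NUL terminator, and the value all live past the
   * 8-byte header, so bound them against ea->ea_data, not ea */
  if ((u8 *)ea->ea_data + nlen + 1 + vlen > end)
          break;  /* truncated EA from the server: stop parsing */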
An "untrusted" server can use this to leak up to 8 bytes of kernel heap
into the EA name comparison and influence which WSL xattr the data is
interpreted as.
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Cc: Shyam Prasad N <sprasad@microsoft.com>
Cc: Tom Talpey <tom@talpey.com>
Cc: Bharath SM <bharathsm@microsoft.com>
Cc: linux-cifs@vger.kernel.org
Cc: samba-technical@lists.samba.org
Cc: stable <stable@kernel.org>
Assisted-by: gregkh_clanker_t1000
Reviewed-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
cpufreq/amd-pstate: Add POWER_SUPPLY select for dynamic EPP
The dynamic EPP feature uses power_supply_reg_notifier() and
power_supply_unreg_notifier() but doesn't declare a dependency on
POWER_SUPPLY, causing linker errors when POWER_SUPPLY is not enabled.
Add POWER_SUPPLY to the selects.
Suggested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Fixes: e30ca6dd5345 ("cpufreq/amd-pstate: Add dynamic energy performance preference")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202604040742.ySEdkuAa-lkp@intel.com/
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Link: https://patch.msgid.link/20260407194949.310114-1-mario.limonciello@amd.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
watchdog: ni903x_wdt: Convert to a platform driver
In all cases in which a struct acpi_driver is used for binding a driver
to an ACPI device object, a corresponding platform device is created by
the ACPI core and that device is regarded as a proper representation of
underlying hardware. Accordingly, a struct platform_driver should be
used by driver code to bind to that device. There are multiple reasons
why drivers should not bind directly to ACPI device objects [1].
In particular, registering a watchdog device under a struct acpi_device
is questionable because it causes the watchdog to be hidden in the ACPI
bus sysfs hierarchy and it goes against the general rule that a struct
acpi_device can only be a parent of another struct acpi_device.
Overall, it is better to bind drivers to platform devices than to their
ACPI companions, so convert the ni903x_wdt watchdog ACPI driver to a
platform one.
While this is not expected to alter functionality, it changes sysfs
layout and so it will be visible to user space.
Note that after this change it actually makes sense to look for
the "timeout-sec" property via device_property_read_u32() under the
device passed to watchdog_init_timeout() because it has an fwnode
handle (unlike a struct acpi_device which is an fwnode itself).
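A sketch of the converted binding; the ACPI ID and the callback names are
illustrative, not necessarily those of the actual driver:

  static const struct acpi_device_id ni903x_acpi_match[] = {
          { "NIC775C", 0 },               /* illustrative ACPI ID */
          { }
  };
  MODULE_DEVICE_TABLE(acpi, ni903x_acpi_match);

  static struct platform_driver ni903x_wdt_driver = {
          .driver = {
                  .name = "ni903x_wdt",
                  .acpi_match_table = ni903x_acpi_match,
          },
          .probe = ni903x_wdt_probe,      /* now takes a platform_device */
          .remove = ni903x_wdt_remove,
  };
  module_platform_driver(ni903x_wdt_driver);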
In all cases in which a struct acpi_driver is used for binding a driver
to an ACPI device object, a corresponding platform device is created by
the ACPI core and that device is regarded as a proper representation of
underlying hardware. Accordingly, a struct platform_driver should be
used by driver code to bind to that device. There are multiple reasons
why drivers should not bind directly to ACPI device objects [1].
Overall, it is better to bind drivers to platform devices than to their
ACPI companions, so convert the Xen ACPI processor aggregator device
(PAD) driver to a platform one.
While this is not expected to alter functionality, it changes sysfs
layout and so it will be visible to user space.
Boris Burkov [Mon, 6 Apr 2026 16:15:15 +0000 (09:15 -0700)]
btrfs: btrfs_log_dev_io_error() on all bio errors
As far as I can tell, we never intentionally constrained ourselves to
these status codes, and it is misleading and surprising to lack the
bdev error logging when we get a different error code from the block
layer. This can lead to jumping to a wrong conclusion like "this
system didn't see any bio failures but aborted with EIO".
For example on nvme devices, I observe many failures coming back as
BLK_STS_MEDIUM. It is apparent that the nvme driver returns a variety of
BLK_STS_* status values in nvme_error_status().
So handle the known expected errors and make some noise on the rest
which we expect won't really happen.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anand Jain <asj@kernel.org>
Signed-off-by: Boris Burkov <boris@bur.io>
Signed-off-by: David Sterba <dsterba@suse.com>
btrfs: fix silent IO error loss in encoded writes and zoned split
can_finish_ordered_extent() and btrfs_finish_ordered_zoned() set
BTRFS_ORDERED_IOERR via bare set_bit(). Later,
btrfs_mark_ordered_extent_error() in btrfs_finish_one_ordered() uses
test_and_set_bit(), finds it already set, and skips
mapping_set_error(). The error is never recorded on the inode's
address_space, making it invisible to fsync. For encoded writes this
causes btrfs receive to silently produce files with zero-filled holes.
Fix: replace bare set_bit(BTRFS_ORDERED_IOERR) with
btrfs_mark_ordered_extent_error() which pairs test_and_set_bit() with
mapping_set_error(), guaranteeing the error is recorded exactly once.
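A sketch of the pairing that btrfs_mark_ordered_extent_error() provides,
per the description above (treat the field layout as illustrative):

  void btrfs_mark_ordered_extent_error(struct btrfs_ordered_extent *ordered)
  {
          /* test_and_set_bit() and mapping_set_error() stay paired, so
           * the error is recorded on the address_space exactly once and
           * becomes visible to fsync */
          if (!test_and_set_bit(BTRFS_ORDERED_IOERR, &ordered->flags))
                  mapping_set_error(ordered->inode->vfs_inode.i_mapping, -EIO);
  }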
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Mark Harmstone <mark@harmstone.com>
Signed-off-by: Michal Grzedzicki <mge@meta.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Dave Chen [Mon, 30 Mar 2026 03:31:48 +0000 (11:31 +0800)]
btrfs: skip clearing EXTENT_DEFRAG for NOCOW ordered extents
In btrfs_finish_one_ordered(), clear_bits is unconditionally initialized
with EXTENT_DEFRAG. For NOCOW ordered extents this is always a no-op
because should_nocow() already forces the COW path when EXTENT_DEFRAG is
set, so a NOCOW ordered extent can never have EXTENT_DEFRAG on its range.
Although harmless, the unconditional btrfs_clear_extent_bit() call still
performs a cold rbtree lookup under the io tree spinlock on every NOCOW
write completion. Avoid this by only adding EXTENT_DEFRAG to clear_bits
for non-NOCOW ordered extents, and skip the call entirely when there are
no bits to clear.
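A minimal sketch of the new logic (bit and helper names per the
description; any other bits that belong in clear_bits are elided):

  u32 clear_bits = 0;

  /* a NOCOW ordered extent can never have EXTENT_DEFRAG set */
  if (!test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags))
          clear_bits |= EXTENT_DEFRAG;

  /* skip the rbtree lookup under the io tree spinlock entirely when
   * there is nothing to clear */
  if (clear_bits)
          btrfs_clear_extent_bit(&inode->io_tree, start, end, clear_bits,
                                 &cached_state);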
Signed-off-by: Dave Chen <davechen@synology.com>
Signed-off-by: Robbie Ko <robbieko@synology.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Dave Chen [Tue, 7 Apr 2026 03:36:24 +0000 (11:36 +0800)]
btrfs: use BTRFS_FS_UPDATE_UUID_TREE_GEN flag for UUID tree rescan check
The UUID tree rescan check in open_ctree() compares
fs_info->generation with the superblock's uuid_tree_generation.
This comparison is not reliable because fs_info->generation is
bumped at transaction start time in join_transaction(), while
uuid_tree_generation is only updated at commit time via
update_super_roots().
Between the early BTRFS_FS_UPDATE_UUID_TREE_GEN flag check and the
late rescan decision, mount operations such as file orphan cleanup
from an unclean shutdown start transactions without committing
them. This advances fs_info->generation past uuid_tree_generation
and produces a false-positive mismatch.
Use the BTRFS_FS_UPDATE_UUID_TREE_GEN flag directly instead. The
flag was already set earlier in open_ctree() when the generations
were known to match, and accurately represents "UUID tree is up to
date" without being affected by subsequent transaction starts.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Dave Chen <davechen@synology.com>
Signed-off-by: Robbie Ko <robbieko@synology.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 1 Apr 2026 18:46:59 +0000 (19:46 +0100)]
btrfs: remove duplicate journal_info reset on failure to commit transaction
If we get an error during the transaction commit path, we are resetting
current->journal_info to NULL twice - once in btrfs_commit_transaction()
right before calling cleanup_transaction() and then once again inside
cleanup_transaction(). Remove the instance in btrfs_commit_transaction().
Reviewed-by: Anand Jain <asj@kernel.org>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 1 Apr 2026 17:51:35 +0000 (18:51 +0100)]
btrfs: tag as unlikely if statements that check for fs in error state
Having the filesystem in an error state, meaning we had a transaction
abort, is unexpected. Mark every check for the error state with the
unlikely annotation to convey that and to allow the compiler to generate
better code.
On x86_64, using gcc 14.2.0-19 from Debian, resulted in a slightly
reduced object size and better code.
Before:
$ size fs/btrfs/btrfs.ko
   text    data     bss     dec     hex filename
2008598  175912   15592 2200102  219226 fs/btrfs/btrfs.ko
After:
$ size fs/btrfs/btrfs.ko
   text    data     bss     dec     hex filename
2008450  175912   15592 2199954  219192 fs/btrfs/btrfs.ko
Reviewed-by: Anand Jain <asj@kernel.org>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Merge tag 'ata-7.0-final' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux
Pull ata fix from Niklas Cassel:
- Add a quirk for JMicron JMB582/JMB585 AHCI controllers such that
they only use 32-bit DMA addresses.
While these controllers do report that they support 64-bit DMA
addresses, a user reports that using 64-bit DMA addresses causes
silent corruption even on modern x86 systems (Arthur)
* tag 'ata-7.0-final' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux:
ata: ahci: force 32-bit DMA for JMicron JMB582/JMB585
Merge tag 'hyperv-fixes-signed-20260406' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux
Pull Hyper-V fixes from Wei Liu:
- Two fixes for Hyper-V PCI driver (Long Li, Sahil Chandna)
- Fix an infinite loop issue in MSHV driver (Stanislav Kinsburskii)
* tag 'hyperv-fixes-signed-20260406' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
mshv: Fix infinite fault loop on permission-denied GPA intercepts
PCI: hv: Fix double ida_free in hv_pci_probe error path
PCI: hv: Set default NUMA node to 0 for devices without affinity info
Keep the direct kfree(space_info) for the earlier failure path, but
after btrfs_sysfs_add_space_info_type() has called kobject_put(), let
the kobject release callback handle the cleanup.
Fixes: a11224a016d6d ("btrfs: fix memory leaks in create_space_info() error paths")
CC: stable@vger.kernel.org # 6.19+
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Guangshuo Li <lgs201920130244@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Keep parent->sub_group[index] = NULL for the failure path, but after
btrfs_sysfs_add_space_info_type() has called kobject_put(), let the
kobject release callback handle the cleanup.
Qu Wenruo [Tue, 31 Mar 2026 23:02:57 +0000 (09:32 +1030)]
btrfs: do not reject a valid running dev-replace
[BUG]
There is a bug report that a btrfs with running dev-replace got rejected
with the following messages:
BTRFS error (device sdk1): devid 0 path /dev/sdk1 is registered but not found in chunk tree
BTRFS error (device sdk1): remove the above devices or use 'btrfs device scan --forget <dev>' to unregister them before mount
BTRFS error (device sdk1): open_ctree failed: -117
[CAUSE]
The tree and super block dumps show the fs is completely sane, except
one thing: there is no dev item for devid 0 in the chunk tree.
However this is not a bug, as we do not insert dev item for devid 0 in
the first place.
Since the devid 0 is only there temporarily we do not really need to
insert a dev item for it and then later remove it again.
Commit 34308187395f ("btrfs: add extra device item checks at mount")
added an overly strict check that triggers a false alert and rejects a
valid filesystem.
[FIX]
Add special handling for devid 0 so that it is not required to have a
device item in the chunk tree.
Qu Wenruo [Mon, 30 Mar 2026 22:21:56 +0000 (08:51 +1030)]
btrfs: only invalidate btree inode pages after all ebs are released
In close_ctree(), we call invalidate_inode_pages2() to invalidate all
pages from btree inode.
But it never returns 0, always -EBUSY.
The reason is that we are still holding all the essential tree root
nodes, so the pages holding those tree blocks cannot be invalidated and
invalidate_inode_pages2() always returns -EBUSY.
This is also against the error cleanup path of open_ctree(), which
properly frees all root pointers before calling invalidate_inode_pages().
So fix the order by delaying invalidate_inode_pages2() until we have
freed all root pointers.
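A sketch of the corrected ordering in close_ctree(), assuming the same
helper the open_ctree() error path uses:

  /* drop the root node extent buffers first ... */
  free_root_pointers(fs_info, true);

  /* ... so no extent buffer pins the btree inode's pages anymore and
   * the invalidation can actually return 0 */
  invalidate_inode_pages2(fs_info->btree_inode->i_mapping);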
Reviewed-by: Anand Jain <asj@kernel.org>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
btrfs: prevent direct reclaim during compressed readahead
Under memory pressure, direct reclaim can kick in during compressed
readahead. This puts the associated task into D-state. Then shrink_lruvec()
disables interrupts when acquiring the LRU lock. Under heavy pressure,
we've observed reclaim can run long enough that the CPU becomes prone to
CSD lock stalls since it cannot service incoming IPIs. Although the CSD
lock stalls are the worst case scenario, we have found many more subtle
occurrences of this latency on the order of seconds, over a minute in some
cases.
Prevent direct reclaim during compressed readahead. This is achieved by
using different GFP flags at key points when the bio is marked for
readahead.
There are two functions that allocate during compressed readahead:
btrfs_alloc_compr_folio() and add_ra_bio_pages(). Both currently use
GFP_NOFS which includes __GFP_DIRECT_RECLAIM.
For the internal API call btrfs_alloc_compr_folio(), the signature changes
to accept an additional gfp_t parameter. At the readahead call site, it
gets flags similar to GFP_NOFS but stripped of __GFP_DIRECT_RECLAIM.
__GFP_NOWARN is added since these allocations are allowed to fail. Demand
reads still use full GFP_NOFS and will enter reclaim if needed. All other
existing call sites of btrfs_alloc_compr_folio() now explicitly pass
GFP_NOFS to retain their current behavior.
add_ra_bio_pages() gains a bool parameter which allows callers to specify
if they want to allow direct reclaim or not. In either case, the
__GFP_NOWARN flag was added unconditionally since the allocations are
speculative.
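A minimal sketch of the GFP selection at the readahead call site
('for_readahead' is a stand-in name for however the caller knows it is on
the readahead path):

  gfp_t gfp = GFP_NOFS;

  if (for_readahead) {
          /* speculative allocation: never enter direct reclaim, and
           * since failing is fine, do not warn either */
          gfp = (GFP_NOFS & ~__GFP_DIRECT_RECLAIM) | __GFP_NOWARN;
  }

  folio = btrfs_alloc_compr_folio(gfp);
  if (!folio)
          return;         /* readahead simply gives up under pressure */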
There has been some previous work done on calling add_ra_bio_pages() [0].
This patch is complementary: where that patch reduces call frequency, this
patch reduces the latency associated with those calls.
Teng Liu [Sat, 28 Mar 2026 06:40:59 +0000 (07:40 +0100)]
btrfs: replace BUG_ON() with error return in cache_save_setup()
In cache_save_setup(), if create_free_space_inode() succeeds but the
subsequent lookup_free_space_inode() still fails on retry, the
BUG_ON(retries) will crash the kernel. This can happen due to I/O
errors or transient failures, not just programming bugs.
Replace the BUG_ON with proper error handling that returns the original
error code through the existing cleanup path. The callers already handle
this gracefully: disk_cache_state defaults to BTRFS_DC_ERROR, so the
space cache simply won't be written for that block group.
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Teng Liu <27rabbitlt@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
David Sterba [Tue, 6 Jan 2026 16:20:34 +0000 (17:20 +0100)]
btrfs: zstd: don't cache sectorsize in a local variable
The sectorsize is used once or at most twice in the callbacks, no need
to cache it on stack. Minor effect on zstd_compress_folios() where it
saves 8 bytes of stack.
David Sterba [Tue, 6 Jan 2026 16:20:30 +0000 (17:20 +0100)]
btrfs: zlib: drop redundant folio address variable
We're caching the current output folio address but it's not really
necessary as we store it in the variable and then pass it to the stream
context. We can read the folio address directly.
David Sterba [Tue, 6 Jan 2026 16:20:28 +0000 (17:20 +0100)]
btrfs: use common eb range validation in read_extent_buffer_to_user_nofault()
The extent buffer access is checked in other helpers by
check_eb_range(), which validates the requested start, length against
the extent buffer. While this almost never fails we should still handle
it as an error and not just warn.
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: David Sterba <dsterba@suse.com>
David Sterba [Tue, 6 Jan 2026 16:20:27 +0000 (17:20 +0100)]
btrfs: read eb folio index right before loops
There are generic helpers to access extent buffer folio data of any
length, potentially iterating over a few of them. This is a slow path:
either we use the type based accessors or the eb folio allocation is
contiguous and we can use the memcpy/memcmp helpers.
The initialization of 'i' is done at the beginning though it may not be
needed. Move it right before the folio loop; this has a minor effect on
generated code in __write_extent_buffer().
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: David Sterba <dsterba@suse.com>
David Sterba [Tue, 6 Jan 2026 16:20:25 +0000 (17:20 +0100)]
btrfs: unify types for binary search variables
The variables calculating where to jump next use mixed integer types,
which requires some conversions at the instruction level. Using 'u32'
removes one call to 'movslq', making the main loop shorter.
This complements the type conversion done in a724f313f84beb ("btrfs: do
unsigned integer division in the extent buffer binary search loop").
David Sterba [Tue, 6 Jan 2026 16:20:24 +0000 (17:20 +0100)]
btrfs: remove duplicate calculation of eb offset in btrfs_bin_search()
In the main search loop the variable 'oil' (offset in folio) is set
twice, with the second assignment being redundant when the key fits
completely within the contiguous range. We can remove it, and while
it's just a simple calculation, the
binary search loop is executed many times so micro optimizations add up.
The code size is reduced by 64 bytes on release config, the loop is
reorganized a bit and a few instructions shorter.
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: David Sterba <dsterba@suse.com>
Mark Harmstone [Wed, 25 Mar 2026 12:53:43 +0000 (12:53 +0000)]
btrfs: tree-checker: add remap-tree checks to check_block_group_item()
Add some write-time checks for block group items relating to the remap
tree.
Here we're checking:
* That the REMAPPED or METADATA_REMAP flags aren't set unless the
REMAP_TREE incompat flag is also set
* That `remap_bytes` isn't more than the size of the block group
* That `identity_remap_count` isn't more than the number of sectors in
the block group
Signed-off-by: Mark Harmstone <mark@harmstone.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Tue, 24 Mar 2026 15:01:34 +0000 (15:01 +0000)]
btrfs: make btrfs_free_log() and btrfs_free_log_root_tree() return void
These functions never fail, always return success (0) and none of the
callers care about their return values. Change their return type from int
to void.
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Mon, 23 Mar 2026 15:50:13 +0000 (15:50 +0000)]
btrfs: fix deadlock between reflink and transaction commit when using flushoncommit
When using the flushoncommit mount option, we can have a deadlock between
a transaction commit and a reflink operation that copied an inline extent
to an offset beyond the current i_size of the destination node.
The deadlock happens like this:
1) Task A clones an inline extent from inode X to an offset of inode Y
that is beyond Y's current i_size. This means we copied the inline
extent's data to a folio of inode Y that is beyond its EOF, using a
call to copy_inline_to_page();
2) Task B starts a transaction commit and calls
btrfs_start_delalloc_flush() to flush delalloc;
3) The delalloc flushing sees the new dirty folio of inode Y and when it
attempts to flush it, it ends up at extent_writepage() and sees that
the offset of the folio is beyond the i_size of inode Y, so it attempts
to invalidate the folio by calling folio_invalidate(), which ends up at
btrfs' folio invalidate callback - btrfs_invalidate_folio(). There it
tries to lock the folio's range in inode Y's extent io tree, but it
blocks since it's currently locked by task A - during a reflink we lock
the inodes and the source and destination ranges after flushing all
delalloc and waiting for ordered extent completion - after that we
don't expect to have dirty folios in the ranges, the exception is if
we have to copy an inline extent's data (because the destination offset
is not zero);
4) Task A then attempts to start a transaction to update the inode item,
and then it's blocked since the current transaction is in the
TRANS_STATE_COMMIT_START state. Therefore task A has to wait for the
current transaction to become unblocked (its state >=
TRANS_STATE_UNBLOCKED).
So task A is waiting for the transaction commit done by task B, and
the latter is waiting on the extent lock of inode Y that is currently
held by task A.
Syzbot recently reported this with the following stack traces:
Fix this by updating the i_size of the destination inode of a reflink
operation after we copy an inline extent's data to an offset beyond the
i_size and before attempting to start a transaction to update the inode's
item.
Reported-by: syzbot+63056bf627663701bbbf@syzkaller.appspotmail.com
Link: https://lore.kernel.org/linux-btrfs/69bba3fe.050a0220.227207.002f.GAE@google.com/
Fixes: 05a5a7621ce6 ("Btrfs: implement full reflink support for inline extents")
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Mark Harmstone [Tue, 3 Mar 2026 15:59:12 +0000 (15:59 +0000)]
btrfs: tree-checker: add checker for items in remap tree
Add write-time checking of items in the remap tree, to catch errors
before they are written to disk.
We're checking:
* That remap items, remap backrefs, and identity remaps aren't written
unless the REMAP_TREE incompat flag is set
* That identity remaps have a size of 0
* That remap items and remap backrefs have a size of sizeof(struct
btrfs_remap_item)
* That the objectid for these items is aligned to the sector size
* That the offset for these items (i.e. the size of the remapping) isn't
0 and is aligned to the sector size
* That objectid + offset doesn't overflow
Signed-off-by: Mark Harmstone <mark@harmstone.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Dave Chen [Mon, 23 Mar 2026 03:43:22 +0000 (11:43 +0800)]
btrfs: fix unnecessary flush on close when truncating zero-sized files
In btrfs_setsize(), when a file is truncated to size 0, the
BTRFS_INODE_FLUSH_ON_CLOSE flag is unconditionally set to ensure
pending writes get flushed on close. This flag was designed to protect
the "truncate-then-rewrite" pattern, where an application truncates a
file with existing data down to zero and writes new content, ensuring
the new data reach disk on close.
However, when a file already has a size of 0 (e.g. a newly created
file opened with O_CREAT | O_TRUNC), oldsize and newsize are both 0.
In this case, setting BTRFS_INODE_FLUSH_ON_CLOSE is unnecessary because
no "good data" was truncated away. The subsequent filemap_flush() in
btrfs_release_file() then triggers avoidable writeback that disrupts
the normal delayed writeback batching, adding I/O overhead.
This comes from a real workload. A backup service creates temporary
files via mkstemp(), closes them, and later reopens them with O_TRUNC
for writing. The O_TRUNC is defensive. The file creation and usage is
done by a different component, so removing the unneeded truncation is
not straightforward. This pattern repeats for a large number of files;
each close() triggers an unnecessary filemap_flush().
Signed-off-by: Dave Chen <davechen@synology.com>
Signed-off-by: Robbie Ko <robbieko@synology.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Fri, 27 Feb 2026 03:33:44 +0000 (14:03 +1030)]
btrfs: move shutdown and remove_bdev callbacks out of experimental features
These two new callbacks were introduced in v6.19, so by v7.1 they have
been in for two releases.
During that time no bugs related to these two features have surfaced,
thus it's time to expose them to end users.
It's especially important to expose the remove_bdev callback to end
users. That new callback makes btrfs automatically shut down or go
degraded when a device is missing (depending on whether the fs can
maintain RW), which directly affects end users.
We want some feedback from early adopters.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Yochai Eisenrich [Sun, 22 Mar 2026 06:39:35 +0000 (08:39 +0200)]
btrfs: fix btrfs_ioctl_space_info() slot_count TOCTOU which can lead to info-leak
btrfs_ioctl_space_info() has a TOCTOU race between two passes over the
block group RAID type lists. The first pass counts entries to determine
the allocation size, then the second pass fills the buffer. The
groups_sem rwlock is released between passes, allowing concurrent block
group removal to reduce the entry count.
When the second pass fills fewer entries than the first pass counted,
copy_to_user() copies the full alloc_size bytes including trailing
uninitialized kmalloc bytes to userspace.
Fix by copying only total_spaces entries (the actually-filled count from
the second pass) instead of alloc_size bytes, and switch to kzalloc so
any future copy size mismatch cannot leak heap data.
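A sketch of the shape of the fix (variable names per the description;
'user_dest' is a stand-in for the user pointer):

  /* kzalloc: entries the second pass does not fill stay zeroed */
  dest = kzalloc(alloc_size, GFP_KERNEL);
  if (!dest)
          return -ENOMEM;

  /* second pass fills 'total_spaces' entries (not shown) */

  /* copy only what was actually filled, never the full alloc_size */
  if (copy_to_user(user_dest, dest,
                   total_spaces * sizeof(struct btrfs_ioctl_space_info)))
          ret = -EFAULT;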
Fixes: 7fde62bffb57 ("Btrfs: buffer results in the space_info ioctl")
CC: stable@vger.kernel.org # 3.0
Signed-off-by: Yochai Eisenrich <echelonh@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 18 Mar 2026 13:39:51 +0000 (13:39 +0000)]
btrfs: avoid taking the device_list_mutex in btrfs_run_dev_stats()
btrfs_run_dev_stats() is called during the critical section of a
transaction commit and it takes the device_list_mutex, which is also
acquired by fitrim, which does discard operations while holding that
mutex. Most of the time, if we are on a healthy filesystem, we don't have
new stat updates to persist in the device tree, so blocking on the
device_list_mutex is just wasting time and making any tasks that need to
start a new transaction wait longer than necessary.
Since the device list is RCU safe/protected, make btrfs_run_dev_stats()
do an initial check for device stat updates using RCU and quit without
taking the device_list_mutex in case there are no new device stats that
need to be persisted in the device tree.
Also note that adding/removing devices also requires starting a
transaction, and since btrfs_run_dev_stats() is called from the critical
section of a transaction commit, no one can be concurrently adding or
removing a device while btrfs_run_dev_stats() is called.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Leo Martins [Thu, 19 Mar 2026 23:49:08 +0000 (16:49 -0700)]
btrfs: avoid GFP_ATOMIC allocations in qgroup free paths
When qgroups are enabled, __btrfs_qgroup_release_data() and
qgroup_free_reserved_data() pass an extent_changeset to
btrfs_clear_record_extent_bits() to track how many bytes had their
EXTENT_QGROUP_RESERVED bits cleared. Inside the extent IO tree spinlock,
add_extent_changeset() calls ulist_add() with GFP_ATOMIC to record each
changed range. If this allocation fails, it hits a BUG_ON and panics the
kernel.
However, both of these callers only read changeset.bytes_changed
afterwards — the range_changed ulist is populated and immediately freed
without ever being iterated. The GFP_ATOMIC allocation is entirely
unnecessary for these paths.
Introduce extent_changeset_init_bytes_only() which uses a sentinel value
(EXTENT_CHANGESET_BYTES_ONLY) on the ulist's prealloc field to signal
that only bytes_changed should be tracked. add_extent_changeset() checks
for this sentinel and returns early after updating bytes_changed,
skipping the ulist_add() call entirely. This eliminates the GFP_ATOMIC
allocation and makes the BUG_ON unreachable for these paths.
Callers that need range tracking (qgroup_reserve_data,
qgroup_unreserve_range, btrfs_qgroup_check_reserved_leak) continue to
use extent_changeset_init() and are unaffected.
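A sketch of the early return in add_extent_changeset() (sentinel and field
names per the description):

  static int add_extent_changeset(struct extent_state *state,
                                  struct extent_changeset *changeset)
  {
          if (!changeset)
                  return 0;

          changeset->bytes_changed += state->end - state->start + 1;

          /* bytes-only mode: skip the GFP_ATOMIC ulist_add() that could
           * hit the BUG_ON() on allocation failure */
          if (changeset->range_changed.prealloc == EXTENT_CHANGESET_BYTES_ONLY)
                  return 0;

          return ulist_add(&changeset->range_changed, state->start,
                           state->end, GFP_ATOMIC);
  }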
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Leo Martins <loemra.dev@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
btrfs: decrease indentation of find_free_extent_update_loop
Decrease the indentation of find_free_extent_update_loop(), by inverting
the check if the loop state is smaller than LOOP_NO_EMPTY_SIZE.
This also allows for an early return from find_free_extent_update_loop(),
in case LOOP_NO_EMPTY_SIZE is already set at this point.
While at it change a

  if () {
  }
  else if
  else

pattern to one using curly braces on all branches, to be consistent with
the rest of the btrfs code.
Also change 'int exists' to 'bool have_trans' giving it a more meaningful
name and type.
No functional changes intended.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Tue, 17 Mar 2026 19:35:58 +0000 (19:35 +0000)]
btrfs: unexport btrfs_qgroup_reserve_meta()
There's only one caller outside qgroup.c of btrfs_qgroup_reserve_meta(),
and btrfs_qgroup_reserve_meta_prealloc() is a wrapper around that
function. Make that caller use btrfs_qgroup_reserve_meta_prealloc()
and unexport btrfs_qgroup_reserve_meta(), simplifying the external API.
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Tue, 17 Mar 2026 19:19:43 +0000 (19:19 +0000)]
btrfs: collapse __btrfs_qgroup_reserve_meta() into btrfs_qgroup_reserve_meta_prealloc()
Since __btrfs_qgroup_reserve_meta() is only called by
btrfs_qgroup_reserve_meta_prealloc(), which is a simple inline wrapper,
get rid of the latter and rename __btrfs_qgroup_reserve_meta() to
btrfs_qgroup_reserve_meta_prealloc().
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Tue, 17 Mar 2026 19:09:15 +0000 (19:09 +0000)]
btrfs: collapse __btrfs_qgroup_free_meta() into btrfs_qgroup_free_meta_prealloc()
Since __btrfs_qgroup_free_meta() is only called by
btrfs_qgroup_free_meta_prealloc(), which is a simple inline wrapper, get
rid of the latter and rename __btrfs_qgroup_free_meta() to
btrfs_qgroup_free_meta_prealloc().
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 11 Mar 2026 18:36:36 +0000 (18:36 +0000)]
btrfs: optimize clearing all bits from first extent record in an io tree
When we are clearing all the bits from the first record that contains the
target range and that record ends at or before our target range but starts
before our target range, we are doing a lot of unnecessary work:
1) Allocating a prealloc state if we don't have one already;
2) Adjust that record's start offset to the start of our range and
make the prealloc state have a range going from the original start
offset of that first record to the start offset of our target range,
and with the same bits as that first record. Then we insert the
prealloc extent in the rbtree - this is done in split_state();
3) Remove our adjusted first state from the rbtree since all the bits
were cleared - this is done in clear_state_bit().
This is only wasting time when we can simply trim that first record, so
that it represents the range from its start offset to the start offset of
our target range. So optimize for that case and avoid the prealloc state
allocation, insertion and deletion from the rbtree.
This patch is the last patch of a patchset comprised of the following
patches (in descending order):
btrfs: optimize clearing all bits from first extent record in an io tree
btrfs: panic instead of warn when splitting extent state not in the tree
btrfs: free cached state outside critical section in wait_extent_bit()
btrfs: avoid unnecessary wake ups on io trees when there are no waiters
btrfs: remove wake parameter from clear_state_bit()
btrfs: change last argument of add_extent_changeset() to boolean
btrfs: use extent_io_tree_panic() instead of BUG_ON()
btrfs: make add_extent_changeset() only return errors or success
btrfs: tag as unlikely branches that call extent_io_tree_panic()
btrfs: turn extent_io_tree_panic() into a macro for better error reporting
btrfs: optimize clearing all bits from the last extent record in an io tree
The following fio script was used to measure performance before and after
applying all the patches:
Also, using bpftrace to collect the duration (in nanoseconds) of all the
btrfs_clear_extent_bit_changeset() calls done during that fio test and
then making a histogram from that data yielded the following results:
Filipe Manana [Wed, 11 Mar 2026 16:15:59 +0000 (16:15 +0000)]
btrfs: panic instead of warn when splitting extent state not in the tree
We are not expected ever to split an extent state record that is not in
the rbtree, as every record we pass to split_state() was found by
iterating the rbtree, so if that ever happens it means we are not holding
the extent io tree's spinlock or we have some memory corruption.
Instead of simply warning in case the extent state record passed to
split_state() is not in the rbtree, panic as this is a serious problem.
Also tag as unlikely the case where the record is not in the rbtree.
This also makes a tiny reduction in the btrfs module's text size.
Before:
$ size fs/btrfs/btrfs.ko
   text    data     bss     dec     hex filename
2000080  174328   15592 2190000  216ab0 fs/btrfs/btrfs.ko
After:
$ size fs/btrfs/btrfs.ko
   text    data     bss     dec     hex filename
2000064  174328   15592 2189984  216aa0 fs/btrfs/btrfs.ko
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Mon, 16 Mar 2026 11:38:36 +0000 (11:38 +0000)]
btrfs: free cached state outside critical section in wait_extent_bit()
There's no need to free the cached extent state record while holding the
io tree's spinlock, it's just making the critical section longer than it
needs to be. So just do it after unlocking the io tree.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 11 Mar 2026 15:07:11 +0000 (15:07 +0000)]
btrfs: avoid unnecessary wake ups on io trees when there are no waiters
Whenever clearing the extent lock bits of an extent state record, we
unconditionally call wake_up() on the state's waitqueue. Most of the
time there are no waiters on the queue so we are just wasting time calling
wake_up(), since that requires locking and unlocking the queue's spinlock,
disable and re-enable interrupts, function calls, and other minor overhead
while we are holding a critical section delimited by the extent io tree's
spinlock.
So call wake_up() only if there are waiters on an extent state's wait
queue.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 11 Mar 2026 12:50:03 +0000 (12:50 +0000)]
btrfs: remove wake parameter from clear_state_bit()
There's no need to pass the 'wake' parameter, we can determine if we have
to wake up waiters by checking if EXTENT_LOCK_BITS is set in the bits to
clear. So simplify things and remove the parameter.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 11 Mar 2026 12:35:33 +0000 (12:35 +0000)]
btrfs: use extent_io_tree_panic() instead of BUG_ON()
There's no need to call BUG_ON(), instead call extent_io_tree_panic(),
which also calls BUG(), but it prints an additional error message with
some useful information before hitting BUG().
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 11 Mar 2026 12:17:03 +0000 (12:17 +0000)]
btrfs: make add_extent_changeset() only return errors or success
Currently add_extent_changeset() always returns the return value from its
call to ulist_add(), which can return an error, 0 or 1. There are no
callers that care about the difference between 0 and 1 and all except one
of them, check for negative values and ignore other values, but there is
another caller (btrfs_clear_extent_bit_changeset()) that must set its
'ret' variable to 0 after calling add_extent_changeset(), so that it
does not return an unexpected value of 1 to its caller.
So change add_extent_changeset() to only return errors or 0, avoiding
that caller (and any future callers) from having to deal with a return
value of 1.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 11 Mar 2026 12:07:03 +0000 (12:07 +0000)]
btrfs: tag as unlikely branches that call extent_io_tree_panic()
It's unexpected to ever call extent_io_tree_panic() so surround with
'unlikely' every if statement condition that leads to it, making it
explicit to a reader and to hint the compiler to potentially generate
better code.
On x86_64, using gcc 14.2.0-19 from Debian, this resulted in a slight
decrease of the btrfs module's text size.
Before:
$ size fs/btrfs/btrfs.ko
   text    data     bss     dec     hex filename
1999832  174320   15592 2189744  2169b0 fs/btrfs/btrfs.ko
After:
$ size fs/btrfs/btrfs.ko
   text    data     bss     dec     hex filename
1999768  174320   15592 2189680  216970 fs/btrfs/btrfs.ko
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 11 Mar 2026 12:00:31 +0000 (12:00 +0000)]
btrfs: turn extent_io_tree_panic() into a macro for better error reporting
When extent_io_tree_panic() is called we get a stack trace that is not
very useful since the error message reports the location inside the
extent_io_tree_panic() function and not in the caller of the function.
Filipe Manana [Tue, 10 Mar 2026 15:29:33 +0000 (15:29 +0000)]
btrfs: optimize clearing all bits from the last extent record in an io tree
When we are clearing all the bits from the last record that contains the
target range (i.e. the record starts before our target range and ends
beyond it), we are doing a lot of unnecessary work:
1) Allocating a prealloc state if we don't have one already;
2) Adjusting that last record's start offset to the end of our range and
making the prealloc state have a range going from the original start
offset of that last record to the end offset of our target range and
the same bits as the last record. Then we insert the prealloc extent
in the rbtree - this is done in split_state();
3) Removing our prealloc state from the rbtree since all the bits were
cleared - this is done in clear_state_bit().
This is only wasting time, since we can simply trim the last record so
that its start offset is adjusted to the end of the target range. So
optimize for that case and avoid the prealloc state allocation, insertion
and deletion from the rbtree.
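The fast path boils down to moving the record's start past the cleared
range. A standalone toy of the idea (inclusive ranges, illustrative types;
not the kernel code):

  #include <assert.h>
  #include <stdint.h>

  struct state { uint64_t start, end; };   /* [start, end], inclusive */

  /* Clearing all bits up to clear_end while keeping (clear_end, end]:
   * just trim the front instead of splitting the record and then
   * removing the split-off piece. */
  static void trim_front(struct state *rec, uint64_t clear_end)
  {
          assert(clear_end < rec->end);
          rec->start = clear_end + 1;
  }

  int main(void)
  {
          struct state rec = { .start = 0, .end = 8191 };

          trim_front(&rec, 4095);   /* clear the first 4K */
          assert(rec.start == 4096 && rec.end == 8191);
          return 0;
  }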
Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Sun, 15 Mar 2026 21:38:24 +0000 (08:08 +1030)]
btrfs: remove atomic parameter from btrfs_buffer_uptodate()
That parameter was introduced by commit b9fab919b748 ("Btrfs: avoid
sleeping in verify_parent_transid while atomic").
At that time we needed to lock the extent buffer range inside the io tree
to avoid content changes, thus it could sleep.
But that behavior is no longer there, as later commit 9e2aff90fc2a
("btrfs: stop using lock_extent in btrfs_buffer_uptodate") dropped the
io tree lock.
We can remove the @atomic parameter safely now.
Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Sat, 14 Mar 2026 00:00:39 +0000 (10:30 +1030)]
btrfs: output more info when duplicated ordered extent is found
During development of a new feature, I triggered that btrfs_panic()
inside insert_ordered_extent() and spent quite some unnecessary time before
noticing I was passing incorrect flags when creating a new ordered extent.
Unfortunately the existing error message does not provide much help.
Enhance the output to provide file offset, num bytes and flags of both
existing and new ordered extents.
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Sat, 14 Mar 2026 00:00:38 +0000 (10:30 +1030)]
btrfs: check type flags in alloc_ordered_extent()
Unlike other flags used in btrfs, the BTRFS_ORDERED_* macros cannot be
used directly as flags.
They are defined as bit numbers, thus they should be used with
bit operations, not directly with logical operations.
Unfortunately I sometimes forgot this, passed incorrect flags
to alloc_ordered_extent() and hit weird bugs.
Enhance the type checks in alloc_ordered_extent():
- Make sure there is one and only one bit set for exclusive type flags
There are four exclusive type flags, REGULAR, NOCOW, PREALLOC and
COMPRESSED.
So introduce a new macro, BTRFS_ORDERED_EXCLUSIVE_FLAGS, to cover
the above flags (see the sketch after this list).
Add an ASSERT() to check one and only one of those exclusive flags can
be set for alloc_ordered_extent().
- Re-order the type bit numbers to the end of the enum
This makes it much harder to get a false negative from the check.
E.g., with the old code BTRFS_ORDERED_REGULAR starts at zero, we can
have the following flags passing the bit uniqueness check:
* BTRFS_ORDERED_NOCOW
Be treated as BTRFS_ORDERED_REGULAR (1 == 1UL << 0).
* BTRFS_ORDERED_PREALLOC
Be treated as BTRFS_ORDERED_NOCOW (2 == 1UL << 1).
* BTRFS_ORDERED_DIRECT
Be treated as BTRFS_ORDERED_PREALLOC (4 == 1UL << 2).
Now all those types start at 8, so passing any of those bit numbers as
flags directly will not pass the ASSERT().
- Add a static assert to avoid overflow
To make sure all BTRFS_ORDERED_* flags can fit into an unsigned long.
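A standalone sketch of the exclusive-type check (bit numbers and the
exactly-one-bit test are illustrative, not the kernel's actual values):

  #include <assert.h>

  /* Illustrative bit numbers, re-ordered away from 0 as described. */
  enum {
          BTRFS_ORDERED_REGULAR = 8,
          BTRFS_ORDERED_NOCOW,
          BTRFS_ORDERED_PREALLOC,
          BTRFS_ORDERED_COMPRESSED,
  };

  #define BTRFS_ORDERED_EXCLUSIVE_FLAGS                  \
          ((1UL << BTRFS_ORDERED_REGULAR) |              \
           (1UL << BTRFS_ORDERED_NOCOW) |                \
           (1UL << BTRFS_ORDERED_PREALLOC) |             \
           (1UL << BTRFS_ORDERED_COMPRESSED))

  static void check_flags(unsigned long flags)
  {
          /* One and only one exclusive type bit may be set. */
          unsigned long type = flags & BTRFS_ORDERED_EXCLUSIVE_FLAGS;

          assert(type && (type & (type - 1)) == 0);
  }

  int main(void)
  {
          check_flags(1UL << BTRFS_ORDERED_NOCOW);   /* passes */
          /* check_flags(BTRFS_ORDERED_NOCOW); a raw bit number would
           * now trip the assert, since bits 8..11 are all clear. */
          return 0;
  }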
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
ZhengYuan Huang [Fri, 13 Mar 2026 09:19:23 +0000 (17:19 +0800)]
btrfs: revalidate cached tree blocks on the uptodate path
read_extent_buffer_pages_nowait() returns immediately when an extent
buffer is already marked uptodate. On that cache-hit path, the
caller-supplied btrfs_tree_parent_check is not re-run.
This can let read_tree_root_path() accept a cached tree block whose
actual header level/owner does not match the expected value derived from
the parent.
E.g. a corrupted root item may point to a tree block which doesn't even
belong to that root and has a mismatching level/owner.
If that tree block was already read and cached, the corrupted tree root
read from disk later hits the cached tree block.
Fix this by re-validating cached extent buffers against the supplied
btrfs_tree_parent_check on the uptodate path, and make
read_tree_root_path() pass its check to btrfs_buffer_uptodate().
This makes cache hits and fresh reads follow the same tree-parent
verification rules, and turns the corruption into a read failure instead
of constructing an inconsistent root object.
Signed-off-by: ZhengYuan Huang <gality369@gmail.com> Reviewed-by: Qu Wenruo <wqu@suse.com>
[ Resolve the conflict with extent_buffer_uptodate() helper, handle
transid mismatch case ] Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Philipp Hahn [Tue, 10 Mar 2026 11:48:28 +0000 (12:48 +0100)]
btrfs: prefer IS_ERR_OR_NULL() over manual NULL check
Prefer using IS_ERR_OR_NULL() over using IS_ERR() and a manual NULL
check.
IS_ERR_OR_NULL() already uses likely(!ptr) internally. checkpatch does
not like nesting it:
> WARNING: nested (un)?likely() calls, IS_ERR_OR_NULL already uses
> unlikely() internally
Remove the explicit use of likely().
Change generated with coccinelle.
Signed-off-by: Philipp Hahn <phahn-oss@avm.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
ZhengYuan Huang [Tue, 10 Mar 2026 21:56:10 +0000 (08:26 +1030)]
btrfs: tree-checker: introduce checks for FREE_SPACE_BITMAP
Introduce checks for FREE_SPACE_BITMAP item, which include:
- Key alignment check
Same as FREE_SPACE_EXTENT, the objectid is the logical bytenr of the
free space, and offset is the length of the free space, so both
should be aligned to the fs block size.
- Non-zero range check
A zero key->offset would describe an empty bitmap, which is invalid.
- Item size check
The item must hold exactly DIV_ROUND_UP(key->offset >> sectorsize_bits,
BITS_PER_BYTE) bytes. A mismatch indicates a truncated or otherwise
corrupt bitmap item; without this check, the bitmap loading path would
walk past the end of the leaf and trigger a NULL dereference in
assert_eb_folio_uptodate().
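The expected size computation, as a standalone toy (macro definitions mirror
the kernel's; the numbers are examples):

  #include <assert.h>
  #include <stdint.h>

  #define BITS_PER_BYTE 8
  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  /* Expected bitmap item size: one bit per fs block in the range. */
  static uint32_t expected_bitmap_size(uint64_t range_len,
                                       uint32_t sectorsize_bits)
  {
          return DIV_ROUND_UP(range_len >> sectorsize_bits, BITS_PER_BYTE);
  }

  int main(void)
  {
          /* 1 MiB of free space on a 4K (2^12) block size fs:
           * 256 blocks -> 32 bitmap bytes. */
          assert(expected_bitmap_size(1 << 20, 12) == 32);
          return 0;
  }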
Signed-off-by: ZhengYuan Huang <gality369@gmail.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Mon, 9 Mar 2026 22:19:26 +0000 (08:49 +1030)]
btrfs: tree-checker: introduce checks for FREE_SPACE_EXTENT
Introduce FREE_SPACE_EXTENT checks, which include:
- The key alignment check
The objectid is the logical bytenr of the free space, and offset is the
length of the free space, thus they should all be aligned to the fs
block size.
- The item size check
The FREE_SPACE_EXTENT item should have a size of zero.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Mon, 9 Mar 2026 22:19:25 +0000 (08:49 +1030)]
btrfs: tree-checker: introduce checks for FREE_SPACE_INFO
Introduce checks for FREE_SPACE_INFO item, which include:
- Key alignment check
The objectid is the logical bytenr of the chunk/bg, and offset is the
length of the chunk/bg, thus they should all be aligned to the fs
block size.
- Item size check
The FREE_SPACE_INFO item should have a fixed size.
- Flags check
The flags member should have no other flags than
BTRFS_FREE_SPACE_USING_BITMAPS.
For future expansion, introduce a new macro
BTRFS_FREE_SPACE_FLAGS_MASK for such checks.
And since we're here, BTRFS_FREE_SPACE_USING_BITMAPS should not
use unsigned long long, as the flags member is only 32 bits wide.
So fix that to use unsigned long.
- Extent count check
That member shows how many free space bitmap/extent items there are
inside the chunk/bg.
We know the chunk size (from key->offset), thus there should be at
most (key->offset >> sectorsize_bits) blocks inside the chunk.
Use that value as the upper limit: if the counter is larger than
that, there is a high chance it's a bitflip in the high bits.
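The bound, as a standalone predicate (illustrative, not the kernel code):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Upper bound on the extent_count member: the chunk cannot contain
   * more free-space entries than it has fs blocks. */
  static bool extent_count_valid(uint64_t chunk_len,
                                 uint32_t sectorsize_bits,
                                 uint64_t extent_count)
  {
          return extent_count <= (chunk_len >> sectorsize_bits);
  }

  int main(void)
  {
          /* A 256 MiB chunk of 4K blocks holds at most 65536 entries,
           * so a high-bit flip like 0x100000001 is rejected. */
          printf("%d\n",
                 extent_count_valid(256ULL << 20, 12, 0x100000001ULL));
          return 0;
  }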
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
btrfs: zoned: limit number of zones reclaimed in flush_space()
Limit the number of zones reclaimed in flush_space()'s RECLAIM_ZONES
state.
This prevents possibly long-running reclaim sweeps from blocking other
tasks in the system while it is already under pressure, causing those
tasks to hang.
An example of this can be seen here, triggered by fstests generic/551:
generic/551 [ 27.042349] run fstests generic/551 at 2026-02-27 11:05:30
BTRFS: device fsid 78c16e29-20d9-4c8e-bc04-7ba431be38ff devid 1 transid 8 /dev/vdb (254:16) scanned by mount (806)
BTRFS info (device vdb): first mount of filesystem 78c16e29-20d9-4c8e-bc04-7ba431be38ff
BTRFS info (device vdb): using crc32c checksum algorithm
BTRFS info (device vdb): host-managed zoned block device /dev/vdb, 64 zones of 268435456 bytes
BTRFS info (device vdb): zoned mode enabled with zone size 268435456
BTRFS info (device vdb): checking UUID tree
BTRFS info (device vdb): enabling free space tree
INFO: task kworker/u38:1:90 blocked for more than 120 seconds.
Not tainted 7.0.0-rc1+ #345
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u38:1 state:D stack:0 pid:90 tgid:90 ppid:2 task_flags:0x4208060 flags:0x00080000
Workqueue: events_unbound btrfs_async_reclaim_data_space
Call Trace:
<TASK>
__schedule+0x34f/0xe70
schedule+0x41/0x140
schedule_timeout+0xa3/0x110
? mark_held_locks+0x40/0x70
? lockdep_hardirqs_on_prepare+0xd8/0x1c0
? trace_hardirqs_on+0x18/0x100
? lockdep_hardirqs_on+0x84/0x130
? _raw_spin_unlock_irq+0x33/0x50
wait_for_completion+0xa4/0x150
? __flush_work+0x24c/0x550
__flush_work+0x339/0x550
? __pfx_wq_barrier_func+0x10/0x10
? wait_for_completion+0x39/0x150
flush_space+0x243/0x660
? find_held_lock+0x2b/0x80
? kvm_sched_clock_read+0x11/0x20
? local_clock_noinstr+0x17/0x110
? local_clock+0x15/0x30
? lock_release+0x1b7/0x4b0
do_async_reclaim_data_space+0xe8/0x160
btrfs_async_reclaim_data_space+0x19/0x30
process_one_work+0x20a/0x5f0
? lock_is_held_type+0xcd/0x130
worker_thread+0x1e2/0x3c0
? __pfx_worker_thread+0x10/0x10
kthread+0x103/0x150
? __pfx_kthread+0x10/0x10
ret_from_fork+0x20d/0x320
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
Showing all locks held in the system:
1 lock held by khungtaskd/67:
#0: ffffffff824d58e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x194
2 locks held by kworker/u38:1/90:
#0: ffff8881000aa158 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x3c4/0x5f0
#1: ffffc90000c17e58 ((work_completion)(&fs_info->async_data_reclaim_work)){+.+.}-{0:0}, at: process_one_work+0x1c0/0x5f0
5 locks held by kworker/u39:1/191:
#0: ffff8881000aa158 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x3c4/0x5f0
#1: ffffc90000dfbe58 ((work_completion)(&fs_info->reclaim_bgs_work)){+.+.}-{0:0}, at: process_one_work+0x1c0/0x5f0
#2: ffff888101da0420 (sb_writers#9){.+.+}-{0:0}, at: process_one_work+0x20a/0x5f0
#3: ffff88811040a648 (&fs_info->reclaim_bgs_lock){+.+.}-{4:4}, at: btrfs_reclaim_bgs_work+0x1de/0x770
#4: ffff888110408a18 (&fs_info->cleaner_mutex){+.+.}-{4:4}, at: btrfs_relocate_block_group+0x95a/0x20f0
1 lock held by aio-dio-write-v/980:
#0: ffff888110093008 (&sb->s_type->i_mutex_key#15){++++}-{4:4}, at: btrfs_inode_lock+0x51/0xb0
=============================================
To prevent these long running reclaims from blocking the system, only
reclaim 5 block groups in the RECLAIM_ZONES state of flush_space(). Also,
as these reclaims are now constrained, this opens up the use of a
synchronous call to btrfs_reclaim_block_groups(), eliminating the need
to place the reclaim task on a workqueue and then flush the workqueue
again.
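The shape of the fix, as a standalone toy: cap the work done per sweep
instead of draining the whole list (the cap of 5 comes from the commit,
everything else is illustrative):

  #include <stdio.h>

  #define MAX_RECLAIM_BGS 5   /* per-sweep cap from the commit */

  int main(void)
  {
          int queued = 12, reclaimed = 0;

          /* Reclaim at most MAX_RECLAIM_BGS block groups per flush;
           * the remainder stays queued for a later sweep instead of
           * blocking everything behind this one. */
          while (queued > 0 && reclaimed < MAX_RECLAIM_BGS) {
                  queued--;
                  reclaimed++;
          }
          printf("reclaimed %d, %d still queued\n", reclaimed, queued);
          return 0;
  }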
Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
Create a function btrfs_reclaim_block_groups() that gets called from the
block-group reclaim worker.
This allows creating synchronous block_group reclaim later on.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
btrfs: move reclaiming of a single block group into its own function
The main work of reclaiming a single block-group in
btrfs_reclaim_bgs_work() is done inside the loop iterating over all the
block_groups in the fs_info->reclaim_bgs list.
Factor out reclaim of a single block group from the loop to improve
readability.
No functional change intended.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Wed, 4 Mar 2026 05:18:44 +0000 (15:48 +1030)]
btrfs: extract inlined creation into a dedicated delalloc helper
Currently we call cow_file_range_inline() in different situations, from
regular cow_file_range() to compress_file_range().
This is because inline extent creation has different conditions based on
whether it's a compressed one or not.
But on the other hand, inline extent creation shouldn't be so
distributed; we can just have a dedicated branch in
btrfs_run_delalloc_range().
This is most obvious for compressed inline cases: it makes no
sense to go through the whole complex async extent mechanism just to
inline a single block.
So here we introduce a dedicated run_delalloc_inline() helper, and
remove all inline related handling from cow_file_range() and
compress_file_range().
There is a special update to inode_need_compress(): a new
@check_inline parameter is introduced.
This allows inline specific checks to be done inside
run_delalloc_inline(), which permits single block compression, while
all other call sites keep rejecting single block compression.
Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Tue, 3 Mar 2026 08:15:10 +0000 (18:45 +1030)]
btrfs: move the mapping_set_error() out of the loop in end_bbio_data_write()
Previously we had to call mapping_set_error() inside the
for_each_folio_all() loop, because we did not have a better way to grab
the inode other than through folio->mapping.
But nowadays every btrfs_bio has its inode member populated, thus we can
grab the inode and its i_mapping easily, without help from a
folio.
Now we can move that mapping_set_error() out of the loop, and use
bbio->inode to grab the i_mapping.
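A standalone sketch of the resulting shape (struct layout is illustrative,
not the kernel's):

  struct address_space { int error; };
  struct inode { struct address_space *i_mapping; };
  struct btrfs_bio { struct inode *inode; int status; };

  /* After the change: record the error once via bbio->inode instead of
   * once per folio inside the iteration loop. */
  static void end_write(struct btrfs_bio *bbio)
  {
          if (bbio->status)
                  bbio->inode->i_mapping->error = bbio->status;
          /* the per-folio loop now only handles per-folio state */
  }

  int main(void)
  {
          struct address_space m = { 0 };
          struct inode ino = { .i_mapping = &m };
          struct btrfs_bio bbio = { .inode = &ino, .status = -5 };

          end_write(&bbio);
          return m.error == -5 ? 0 : 1;
  }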
Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Tue, 3 Mar 2026 08:15:09 +0000 (18:45 +1030)]
btrfs: remove the alignment check in end_bbio_data_write()
The check is not necessary because:
- There is already assert_bbio_alignment() at btrfs_submit_bbio()
- There is also btrfs_subpage_assert() for all btrfs_folio_*() helpers
- The original commit mentions the check may go away in the future
Commit 17a5adccf3fd01 ("btrfs: do away with non-whole_page extent
I/O") introduced the check first, and in the commit message:
I've replaced the whole_page computations with warnings, just to be
sure that we're not issuing partial page reads or writes. The
warnings should probably just go away some time.
- There is no similar check in any other endio function
No matter if it's data read, compressed read or write.
- There has been no such report for a very long time
I do not even remember if there was ever such a report.
Thus the need for such a check in end_bbio_data_write() is very weak,
and we can just get rid of it.
Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Leo Martins [Thu, 26 Feb 2026 09:51:08 +0000 (01:51 -0800)]
btrfs: add tracepoint for search slot restart tracking
Add a btrfs_search_slot_restart tracepoint that fires at each restart
site in btrfs_search_slot(), recording the root, tree level, and
reason for the restart. This enables tracking search slot restarts
which contribute to COW amplification under memory pressure.
The four restart reasons are:
- write_lock: insufficient write lock level, need to restart with
higher lock
- setup_nodes: node setup returned -EAGAIN
- slot_zero: insertion at slot 0 requires higher write lock level
- read_block: read_block_for_search returned -EAGAIN (block not
cached or lock contention)
COW counts are already tracked by the existing trace_btrfs_cow_block()
tracepoint. The per-restart-site tracepoint avoids counter overhead
in the critical path when tracepoints are disabled, and provides
richer per-event information that bpftrace scripts can aggregate into
counts, histograms, and per-root breakdowns.
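A standalone stand-in for the event, with the four reasons as an enum (the
field names and output format are illustrative; the real tracepoint's fields
are described above):

  #include <stdio.h>

  enum restart_reason { WRITE_LOCK, SETUP_NODES, SLOT_ZERO, READ_BLOCK };

  static const char * const reason_str[] = {
          "write_lock", "setup_nodes", "slot_zero", "read_block",
  };

  /* Emits one event per restart; a tracer aggregates these into
   * per-reason counts or per-root histograms after the fact. */
  static void trace_restart(unsigned long long root, int level,
                            enum restart_reason reason)
  {
          printf("restart: root=%llu level=%d reason=%s\n",
                 root, level, reason_str[reason]);
  }

  int main(void)
  {
          trace_restart(5, 1, READ_BLOCK);
          return 0;
  }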
Reviewed-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Leo Martins <loemra.dev@gmail.com> Signed-off-by: David Sterba <dsterba@suse.com>
Leo Martins [Thu, 26 Feb 2026 09:51:07 +0000 (01:51 -0800)]
btrfs: inhibit extent buffer writeback to prevent COW amplification
Inhibit writeback on COW'd extent buffers for the lifetime of the
transaction handle, preventing background writeback from setting
BTRFS_HEADER_FLAG_WRITTEN and causing unnecessary re-COW.
COW amplification occurs when background writeback flushes an extent
buffer that a transaction handle is still actively modifying. When
lock_extent_buffer_for_io() transitions a buffer from dirty to
writeback, it sets BTRFS_HEADER_FLAG_WRITTEN, marking the block as
having been persisted to disk at its current bytenr. Once WRITTEN is
set, should_cow_block() must either COW the block again or overwrite
it in place, both of which are unnecessary overhead when the buffer
is still being modified by the same handle that allocated it. By
inhibiting background writeback on actively-used buffers, WRITTEN is
never set while a transaction handle holds a reference to the buffer,
avoiding this overhead entirely.
Add an atomic_t writeback_inhibitors counter to struct extent_buffer,
which fits in an existing 6-byte hole without increasing struct size.
When a buffer is COW'd in btrfs_force_cow_block(), call
btrfs_inhibit_eb_writeback() to store the eb in the transaction
handle's writeback_inhibited_ebs xarray (keyed by eb->start), take a
reference, and increment writeback_inhibitors. The function handles
dedup (same eb inhibited twice by the same handle) and replacement
(different eb at the same logical address). Allocation failure is
graceful: the buffer simply falls back to the pre-existing behavior
where it may be written back and re-COW'd.
Also inhibit writeback in should_cow_block() when COW is skipped,
so that every transaction handle that reuses an already-COW'd buffer
also inhibits its writeback. Without this, if handle A COWs a block
and inhibits it, and handle B later reuses the same block without
inhibiting, handle A's uninhibit on end_transaction leaves the buffer
unprotected while handle B is still using it. This ensures all handles
that access a COW'd buffer contribute to the inhibitor count, and the
buffer remains protected until the last handle releases it.
In lock_extent_buffer_for_io(), when writeback_inhibitors is non-zero
and the writeback mode is WB_SYNC_NONE, skip the buffer. WB_SYNC_NONE
is used by the VM flusher threads for background and periodic
writeback, which are the only paths that cause COW amplification by
opportunistically writing out dirty extent buffers mid-transaction.
Skipping these is safe because the buffers remain dirty in the page
cache and will be written out at transaction commit time.
WB_SYNC_ALL must always proceed regardless of writeback_inhibitors.
This is required for correctness in the fsync path: btrfs_sync_log()
writes log tree blocks via filemap_fdatawrite_range() (WB_SYNC_ALL)
while the transaction handle that inhibited those same blocks is still
active. Without the WB_SYNC_ALL bypass, those inhibited log tree
blocks would be silently skipped, resulting in an incomplete log on
disk and corruption on replay. btrfs_write_and_wait_transaction()
also uses WB_SYNC_ALL via filemap_fdatawrite_range(); for that path,
inhibitors are already cleared beforehand, but the bypass ensures
correctness regardless.
Uninhibit in __btrfs_end_transaction() before atomic_dec(num_writers)
to prevent a race where the committer proceeds while buffers are still
inhibited. Also uninhibit in btrfs_commit_transaction() before writing
and in cleanup_transaction() for the error path.
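The skip decision, reduced to a standalone predicate (names mirror the
commit; this is a sketch of the rule, not the kernel code):

  #include <stdbool.h>

  enum sync_mode { WB_SYNC_NONE, WB_SYNC_ALL };

  /* Background (WB_SYNC_NONE) writeback skips inhibited buffers;
   * WB_SYNC_ALL always proceeds, which is what keeps the fsync/log
   * path correct. */
  static bool skip_eb_writeback(int writeback_inhibitors,
                                enum sync_mode mode)
  {
          return writeback_inhibitors > 0 && mode == WB_SYNC_NONE;
  }

  int main(void)
  {
          return (skip_eb_writeback(1, WB_SYNC_NONE) &&
                  !skip_eb_writeback(1, WB_SYNC_ALL) &&
                  !skip_eb_writeback(0, WB_SYNC_NONE)) ? 0 : 1;
  }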
Reviewed-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Sun YangKai <sunk67188@gmail.com> Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Leo Martins <loemra.dev@gmail.com> Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Wed, 25 Feb 2026 19:22:41 +0000 (19:22 +0000)]
btrfs: remove pointless error check in btrfs_check_dir_item_collision()
We're under the IS_ERR() branch, so we know that 'ret', which got assigned
the value of PTR_ERR(di), is always negative, so there's no point in
checking if it's negative.
Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Thu, 19 Feb 2026 16:05:39 +0000 (16:05 +0000)]
btrfs: remove duplicated uuid tree existence check in btrfs_uuid_tree_add()
There's no point in checking if the uuid root exists in
btrfs_uuid_tree_add(), since we already do it in btrfs_uuid_tree_lookup().
We can just remove the check from btrfs_uuid_tree_add() and make
btrfs_uuid_tree_lookup() return -EINVAL instead of -ENOENT in case the
uuid tree does not exist.
Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana [Tue, 24 Feb 2026 15:13:32 +0000 (15:13 +0000)]
btrfs: stop checking for -EEXIST return value from btrfs_uuid_tree_add()
We never return -EEXIST from btrfs_uuid_tree_add(); if the item already
exists we extend it, so it's pointless to check for such a return value.
Furthermore, in create_pending_snapshot(), the logic is completely broken.
The goal was to not error out and abort the transaction in case of -EEXIST,
but we left 'ret' with the -EEXIST value, so we end up setting
pending->error to -EEXIST and returning that error up the call chain to
btrfs_commit_transaction(), which will abort the transaction.
Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Commit 347b7042fb26 ("Merge patch series "fs: generic file IO error
reporting"") has introduced a common framework for reporting errors to
fsnotify in a standard way.
One of the functions introduced is fserror_report_shutdown(); combined
with the experimental support for shutdown in btrfs, it means that
user space can easily detect whenever a btrfs filesystem has been
marked as shut down.
Signed-off-by: Miquel Sabaté Solà <mssola@mssola.com> Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Commit 2932ba8d9c99 ("slab: Introduce kmalloc_obj() and family")
introduced, among many others, the kzalloc_objs() helper, which has some
benefits over kcalloc(). Namely, internal introspection of the allocated
type now becomes possible, allowing for future alignment-aware choices
to be made by the allocator and future hardening work that can be type
sensitive. Dropping the explicit 'sizeof' also comes as a nice side effect.
Moreover, this also allows us to be in line with the recent tree-wide
migration to the kmalloc_obj() and family of helpers. See
commit 69050f8d6d07 ("treewide: Replace kmalloc with kmalloc_obj for
non-scalar types").
Reviewed-by: Kees Cook <kees@kernel.org> Signed-off-by: Miquel Sabaté Solà <mssola@mssola.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Thu, 19 Feb 2026 23:43:38 +0000 (10:13 +1030)]
btrfs: do compressed bio size roundup and zeroing in one go
Currently we zero out all the remaining bytes of the last folio of
the compressed bio, then round the bio size to fs block boundary.
But that is done in two different functions, zero_last_folio() to zero
the remaining bytes of the last folio, and round_up_last_block() to
round up the bio to fs block boundary.
There are some minor problems:
- zero_last_folio() is zeroing ranges we won't submit
This is mostly affecting block size < page size cases, where we can
have a large folio (e.g. 64K), but the fs block size is only 4K.
In that case, we may only want to submit the first 4K of the folio,
the remaining range won't matter, but we still zero them all.
This causes unnecessary CPU usage just to zero out some bytes we won't
utilize.
- compressed_bio_last_folio() is called twice in two different functions
While in theory we only need to call it once.
Enhance the situation by:
- Only zero out bytes up to the fs block boundary
Thus this will reduce some overhead for bs < ps cases.
- Move the folio_zero_range() call into round_up_last_block()
So that we can reuse the same folio returned by
compressed_bio_last_folio().
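The zeroing arithmetic, as a standalone toy (sizes are examples; the
round_up macro mirrors the kernel's semantics):

  #include <assert.h>
  #include <stdint.h>

  #define round_up(x, a) ((((x) + (a) - 1) / (a)) * (a))

  int main(void)
  {
          uint32_t blocksize = 4096;
          uint32_t compressed_len = 10000;   /* example payload size */

          /* Zero only the pad up to the next fs block boundary instead
           * of the whole tail of the last (possibly large) folio. */
          uint32_t padded = round_up(compressed_len, blocksize);
          uint32_t zero_len = padded - compressed_len;

          assert(padded == 12288 && zero_len == 2288);
          return 0;
  }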
Reviewed-by: Anand Jain <asj@kernel.org> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
Qu Wenruo [Fri, 20 Feb 2026 03:41:51 +0000 (14:11 +1030)]
btrfs: reduce the size of compressed_bio
The member compressed_bio::compressed_len can be replaced by the bio
size, as we always submit the full compressed data without any partial
read/write.
Furthermore we already have enough ASSERT()s making sure the bio size
matches the ordered extent or the extent map.