btrfs: zoned: fix zone unusable accounting for freed reserved extent
author     Naohiro Aota <naohiro.aota@wdc.com>
           Tue, 1 Oct 2024 08:03:32 +0000 (17:03 +0900)
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
           Fri, 1 Nov 2024 01:02:40 +0000 (02:02 +0100)
commit bf9821ba4792a0d9a2e72803ae7b4341faf3d532 upstream.

When btrfs reserves an extent and does not use it (e.g., due to an error), it
calls btrfs_free_reserved_extent() to free the reserved extent. In the
process, it calls btrfs_add_free_space() and then accounts the region's
bytes as block_group->zone_unusable.

However, it leaves space_info->bytes_zone_unusable not updated. As a result,
ENOSPC can happen even though a space_info reservation succeeded. The
reservation succeeds because the freed region is not added to
space_info->bytes_zone_unusable, leaving that space counted as "free". On the
other hand, the corresponding block group counts it as zone_unusable and its
allocation pointer is not rewound, so we cannot allocate an extent from that
block group. That also renders space_info's async/sync reclaim process
ineffective and causes an ENOSPC error from the extent allocation process.

Fix that by returning the space to space_info->bytes_zone_unusable.
Ideally, since no bio is submitted for this reserved region, we should
return the space to the free space and rewind the allocation pointer. But
that needs a rework of the extent allocation handling, so let it work this
way for now.

Fixes: 169e0da91a21 ("btrfs: zoned: track unusable bytes for zones")
CC: stable@vger.kernel.org # 5.15+
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
fs/btrfs/block-group.c

index 2e49d978f504edd1d1d8e54b2c995d885dac23ef..4ca22d4655aea5863e2d9a4d1c69e8cbc7f5191a 100644 (file)
@@ -3819,6 +3819,8 @@ void btrfs_free_reserved_bytes(struct btrfs_block_group *cache,
        spin_lock(&cache->lock);
        if (cache->ro)
                space_info->bytes_readonly += num_bytes;
+       else if (btrfs_is_zoned(cache->fs_info))
+               space_info->bytes_zone_unusable += num_bytes;
        cache->reserved -= num_bytes;
        space_info->bytes_reserved -= num_bytes;
        space_info->max_extent_size = 0;
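
For illustration, here is a minimal userspace sketch of the accounting
imbalance the hunk above closes. This is not the btrfs code: the struct
layouts, field names, and the availability formula are simplified
assumptions. Before the fix, freeing a reserved-but-unused extent on a zoned
filesystem dropped the bytes from the reserved counters without adding them
to space_info->bytes_zone_unusable, so the space_info level still counted
them as free even though the block group could no longer hand them out.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified models of the per-filesystem and per-block-group counters. */
struct space_info {
	uint64_t total;
	uint64_t bytes_used;
	uint64_t bytes_reserved;
	uint64_t bytes_readonly;
	uint64_t bytes_zone_unusable;
};

struct block_group {
	struct space_info *si;
	uint64_t reserved;
	uint64_t zone_unusable;
	bool ro;
	bool zoned;
};

/* What space_info believes is still allocatable (simplified formula). */
static uint64_t si_available(const struct space_info *si)
{
	return si->total - si->bytes_used - si->bytes_reserved -
	       si->bytes_readonly - si->bytes_zone_unusable;
}

/* Freeing a reserved-but-unused extent, modelled before and after the fix. */
static void free_reserved(struct block_group *bg, uint64_t num_bytes, bool fixed)
{
	if (bg->ro)
		bg->si->bytes_readonly += num_bytes;
	else if (fixed && bg->zoned)
		bg->si->bytes_zone_unusable += num_bytes;  /* the one-line fix */
	bg->reserved -= num_bytes;
	bg->si->bytes_reserved -= num_bytes;
	/* On zoned mode the block group itself still records the region as
	 * zone_unusable (its allocation pointer is not rewound), so without
	 * the fix the two views of the same bytes disagree. */
	if (bg->zoned)
		bg->zone_unusable += num_bytes;
}

int main(void)
{
	struct space_info si = { .total = 1024, .bytes_reserved = 128 };
	struct block_group bg = { .si = &si, .reserved = 128, .zoned = true };

	free_reserved(&bg, 128, false);
	printf("pre-fix:  space_info sees %llu bytes free, but the block group can serve none of them\n",
	       (unsigned long long)si_available(&si));

	si = (struct space_info){ .total = 1024, .bytes_reserved = 128 };
	bg = (struct block_group){ .si = &si, .reserved = 128, .zoned = true };
	free_reserved(&bg, 128, true);
	printf("post-fix: space_info sees %llu bytes free\n",
	       (unsigned long long)si_available(&si));
	return 0;
}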