git.ipfire.org Git - thirdparty/linux.git/log
6 days agomd/raid5: fix IO hang when array is broken with IO inflight
Yu Kuai [Mon, 17 Nov 2025 08:55:57 +0000 (16:55 +0800)] 
md/raid5: fix IO hang when array is broken with IO inflight

Following test can cause IO hang:

mdadm -CvR /dev/md0 -l10 -n4 /dev/sd[abcd] --assume-clean --chunk=64K --bitmap=none
sleep 5
echo 1 > /sys/block/sda/device/delete
echo 1 > /sys/block/sdb/device/delete
echo 1 > /sys/block/sdc/device/delete
echo 1 > /sys/block/sdd/device/delete

dd if=/dev/md0 of=/dev/null bs=8k count=1 iflag=direct

Root cause:

1) all disks are removed, however all rdevs in the array are still in
sync, IO will be issued normally.

2) IO failure from sda, and setting badblocks fails, hence sda will be
faulty and MD_SB_CHANGE_PENDING will be set.

3) error recovery tries to recover this IO from other disks, IO will be
issued to sdb, sdc, and sdd.

4) IO failure from sdb, and setting badblocks fails again, now the array
is broken and will become read-only.

5) IO failure from sdc and sdd, however, the stripe can't be handled
anymore because MD_SB_CHANGE_PENDING is set:

handle_stripe
 if (test_bit MD_SB_CHANGE_PENDING)
  set_bit STRIPE_HANDLE
  goto finish
  // skip handling failed stripe

release_stripe
 if (test_bit STRIPE_HANDLE)
  list_add_tail conf->handle_list

6) later, raid5d can't handle the failed stripe either:

raid5d
 md_check_recovery
  md_update_sb
   if (!md_is_rdwr())
    // can't clear pending bit
    return
 if (test_bit MD_SB_CHANGE_PENDING)
  break;
  // can't handle failed stripe

Since MD_SB_CHANGE_PENDING can never be cleared for a read-only array,
fix this problem by skipping this check for read-only arrays.
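
A minimal sketch of that skip, assuming the check sits in handle_stripe()
and that md_is_rdwr() is usable there (both assumptions based on the
traces above):

	/* only defer the stripe for a pending metadata update when the
	 * array is read-write; a read-only array can never clear the bit */
	if (md_is_rdwr(conf->mddev) &&
	    test_bit(MD_SB_CHANGE_PENDING, &conf->mddev->sb_flags)) {
		set_bit(STRIPE_HANDLE, &sh->state);
		goto finish;
	}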

Link: https://lore.kernel.org/linux-raid/20251117085557.770572-3-yukuai@fnnas.com
Fixes: d87f064f5874 ("md: never update metadata when array is read-only.")
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
Reviewed-by: Li Nan <linan122@huawei.com>
6 days agomd: warn about updating super block failure
Yu Kuai [Mon, 17 Nov 2025 08:55:56 +0000 (16:55 +0800)] 
md: warn about updating super block failure

Many personalities handle IO errors from their daemon thread (like raid1d,
raid10d, raid5d), and the sb is required to be clean before handling these
failed IOs. However, updating the sb can fail, for example when the array
is broken by IO failure, or when the user configures the array through the
sysfs api array_state.

This patch adds a warning when updating the sb fails, since such a failure
may be related to an IO hang.
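
A minimal sketch of such a warning, assuming a failed update leaves
MD_SB_CHANGE_PENDING set; the placement and message text are assumptions:

	md_update_sb(mddev, 0);
	if (test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags))
		pr_warn("md: %s: failed to update superblock\n",
			mdname(mddev));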

Link: https://lore.kernel.org/linux-raid/20251117085557.770572-2-yukuai@fnnas.com
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
Reviewed-by: Li Nan <linan122@huawei.com>
6 days agomd/raid0: fix NULL pointer dereference in create_strip_zones() for dm-raid
Yu Kuai [Sun, 16 Nov 2025 02:18:16 +0000 (10:18 +0800)] 
md/raid0: fix NULL pointer dereference in create_strip_zones() for dm-raid

Commit 2107457e31fa ("md/raid0: Move queue limit setup before r0conf
initialization") dereferences mddev->gendisk unconditionally, but it is
NULL for dm-raid.

Fix this problem by reverting to the old code path for dm-raid.

Link: https://lore.kernel.org/linux-raid/20251116021816.107648-1-yukuai@fnnas.com
Fixes: 2107457e31fa ("md/raid0: Move queue limit setup before r0conf initialization")
Reported-and-tested-by: Changhui Zhong <czhong@redhat.com>
Closes: https://lore.kernel.org/all/CAGVVp+VqVnvGeneUoTbYvBv2cw6GwQRrR3B-iQ-_9rVfyumoKA@mail.gmail.com/
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Reviewed-by: Li Nan <linan122@huawei.com>
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
8 days agosbitmap: fix all kernel-doc warnings
Randy Dunlap [Fri, 28 Nov 2025 06:57:54 +0000 (22:57 -0800)] 
sbitmap: fix all kernel-doc warnings

Modify kernel-doc comments in sbitmap.h to prevent warnings:

Warning: include/linux/sbitmap.h:84 struct member 'alloc_hint' not
 described in 'sbitmap'
Warning: include/linux/sbitmap.h:151 struct member 'ws_active' not
 described in 'sbitmap_queue'
Warning: include/linux/sbitmap.h:552 No description found for
 return value of 'sbq_wait_ptr'
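
For illustration, a kernel-doc stanza that would not trigger such warnings
describes every struct member; the struct below is made up, not from
sbitmap.h:

/**
 * struct example_map - illustrative structure with all members described
 * @depth: number of bits in the map
 * @alloc_hint: per-CPU hint of where to start searching for a free bit
 */
struct example_map {
	unsigned int depth;
	unsigned int __percpu *alloc_hint;
};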

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
8 days agoublk: add helper of __ublk_fetch()
Ming Lei [Fri, 21 Nov 2025 01:58:28 +0000 (09:58 +0800)] 
ublk: add helper of __ublk_fetch()

Add helper __ublk_fetch() for refactoring ublk_fetch().

Meantime move ublk_config_io_buf() out of __ublk_fetch() to make
the code structure cleaner.

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
8 days agoublk: pass const pointer to ublk_queue_is_zoned()
Ming Lei [Fri, 21 Nov 2025 01:58:27 +0000 (09:58 +0800)] 
ublk: pass const pointer to ublk_queue_is_zoned()

Pass const pointer to ublk_queue_is_zoned() because it is readonly.

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
8 days agoublk: refactor auto buffer register in ublk_dispatch_req()
Ming Lei [Fri, 21 Nov 2025 01:58:26 +0000 (09:58 +0800)] 
ublk: refactor auto buffer register in ublk_dispatch_req()

Refactor the auto buffer register code to prepare for supporting the batch
IO feature. The main motivation is to put 'ublk_io' operation code
together, so that a per-io lock can be applied to the code block.

The key changes are:
- Rename ublk_auto_buf_reg() as ublk_do_auto_buf_reg()
- Introduce an enum `auto_buf_reg_res` to represent the result of
  the buffer registration attempt (FAIL, FALLBACK, OK).
- Split the existing `ublk_do_auto_buf_reg` function into two:
  - `__ublk_do_auto_buf_reg`: Performs the actual buffer registration
    and returns the `auto_buf_reg_res` status.
  - `ublk_do_auto_buf_reg`: A wrapper that calls the internal function
    and handles the I/O preparation based on the result.
- Introduce `ublk_prep_auto_buf_reg_io` to encapsulate the logic for
  preparing the I/O for completion after buffer registration.
- Pass the `tag` directly to `ublk_auto_buf_reg_fallback` to avoid
  recalculating it.

This refactoring makes the control flow clearer and isolates the different
stages of the auto buffer registration process.
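
A rough sketch of the resulting control flow; the enum constant names and
function signatures below are assumptions, only the split itself follows
the description:

enum auto_buf_reg_res {
	AUTO_BUF_REG_FAIL,
	AUTO_BUF_REG_FALLBACK,
	AUTO_BUF_REG_OK,
};

static bool ublk_do_auto_buf_reg(struct ublk_queue *ubq, struct request *req,
				 struct ublk_io *io, unsigned int issue_flags)
{
	enum auto_buf_reg_res res;

	/* perform the actual buffer registration attempt */
	res = __ublk_do_auto_buf_reg(ubq, req, io, issue_flags);
	if (res == AUTO_BUF_REG_FAIL)
		return false;

	/* FALLBACK and OK both prepare the io for completion */
	ublk_prep_auto_buf_reg_io(ubq, req, io, res);
	return true;
}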

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
8 days agoublk: add `union ublk_io_buf` with improved naming
Ming Lei [Fri, 21 Nov 2025 01:58:25 +0000 (09:58 +0800)] 
ublk: add `union ublk_io_buf` with improved naming

Add `union ublk_io_buf` to name the anonymous union of struct ublk_io's
addr and buf fields, and apply it to `struct ublk_io` for storing either
ublk auto buffer register data or the ublk server io buffer address.

The union uses clear field names:
- `addr`: for regular ublk server io buffer addresses
- `auto_reg`: for ublk auto buffer registration data

This eliminates confusing access patterns and improves code readability.
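
A sketch of the named union; the exact type of the auto_reg member is an
assumption based on the existing auto buffer register uapi:

union ublk_io_buf {
	/* regular ublk server io buffer address */
	__u64				addr;
	/* ublk auto buffer registration data */
	struct ublk_auto_buf_reg	auto_reg;
};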

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
8 days agoublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg()
Ming Lei [Fri, 21 Nov 2025 01:58:24 +0000 (09:58 +0800)] 
ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg()

Add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg() and
prepare for reusing this helper for the coming UBLK_BATCH_IO feature,
which can fetch & commit one batch of io commands via single uring_cmd.

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
8 days agokfifo: add kfifo_alloc_node() helper for NUMA awareness
Ming Lei [Fri, 21 Nov 2025 01:58:23 +0000 (09:58 +0800)] 
kfifo: add kfifo_alloc_node() helper for NUMA awareness

Add __kfifo_alloc_node() by refactoring and reusing __kfifo_alloc(),
and define kfifo_alloc_node() macro to support NUMA-aware memory
allocation.

The new __kfifo_alloc_node() function accepts a NUMA node parameter
and uses kmalloc_array_node() instead of kmalloc_array() for
node-specific allocation. The existing __kfifo_alloc() now calls
__kfifo_alloc_node() with NUMA_NO_NODE to maintain backward
compatibility.

This enables users to allocate kfifo buffers on specific NUMA nodes,
which is important for performance in NUMA systems where the kfifo
will be primarily accessed by threads running on specific nodes.
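
A hypothetical usage sketch; the argument order of the new macro is
assumed to mirror kfifo_alloc() with a trailing node parameter:

#include <linux/kfifo.h>

struct my_ctx {
	DECLARE_KFIFO_PTR(events, unsigned int);
};

static int my_ctx_init(struct my_ctx *ctx, unsigned int size, int node)
{
	/* same semantics as kfifo_alloc(), but backed by node-local memory */
	return kfifo_alloc_node(&ctx->events, size, GFP_KERNEL, node);
}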

Cc: Stefani Seibold <stefani@seibold.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
8 days agoblk-mq: fix potential uaf for 'queue_hw_ctx'
Fengnan Chang [Fri, 28 Nov 2025 08:53:14 +0000 (16:53 +0800)] 
blk-mq: fix potential uaf for 'queue_hw_ctx'

This just applies Kuai's patch in [1] with minor changes.

blk_mq_realloc_hw_ctxs() will free 'queue_hw_ctx' (e.g. when updating
submit_queues through configfs for null_blk), while it might still be
used from another context (e.g. switching the elevator to none):

t1 t2
elevator_switch
 blk_mq_unquiesce_queue
  blk_mq_run_hw_queues
   queue_for_each_hw_ctx
    // assembly code for hctx = (q)->queue_hw_ctx[i]
    mov    0x48(%rbp),%rdx -> read old queue_hw_ctx

__blk_mq_update_nr_hw_queues
 blk_mq_realloc_hw_ctxs
  hctxs = q->queue_hw_ctx
  q->queue_hw_ctx = new_hctxs
  kfree(hctxs)
    movslq %ebx,%rax
    mov    (%rdx,%rax,8),%rdi ->uaf

This problem was found by code review, and I confirmed that the concurrent
scenario does exist (specifically, 'q->queue_hw_ctx' can be changed during
blk_mq_run_hw_queues()); however, the uaf problem hasn't been reproduced
yet without hacking the kernel.

Since the queue is frozen in __blk_mq_update_nr_hw_queues(), fix the
problem by protecting 'queue_hw_ctx' through RCU where it can be accessed
without grabbing 'q_usage_counter'.
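
A sketch of the RCU protection described above; only queue_hw_ctx is a
real field name here, the surrounding code is an assumption:

	/* reader, e.g. blk_mq_run_hw_queues() without q_usage_counter */
	rcu_read_lock();
	hctxs = rcu_dereference(q->queue_hw_ctx);
	hctx = hctxs[i];
	rcu_read_unlock();

	/* writer, blk_mq_realloc_hw_ctxs() with the queue already frozen */
	rcu_assign_pointer(q->queue_hw_ctx, new_hctxs);
	synchronize_rcu();
	kfree(hctxs);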

[1] https://lore.kernel.org/all/20220225072053.2472431-1-yukuai3@huawei.com/

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Fengnan Chang <changfengnan@bytedance.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
8 days agoblk-mq: use array manage hctx map instead of xarray
Fengnan Chang [Fri, 28 Nov 2025 08:53:13 +0000 (16:53 +0800)] 
blk-mq: use array manage hctx map instead of xarray

After commit 4e5cc99e1e48 ("blk-mq: manage hctx map via xarray"), we use
an xarray instead of an array to store hctxs, but in poll mode, each call
to blk_mq_poll() needs xa_load() to find the corresponding hctx, which
introduces some cost. In my test, xa_load() may cost 3.8% CPU.

This patch reverts the previous change, eliminating the overhead of
xa_load() and resulting in a 3% performance improvement.
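
The per-poll lookup difference, shown schematically:

	hctx = xa_load(&q->hctx_table, i);	/* before: xarray walk per blk_mq_poll() */
	hctx = q->queue_hw_ctx[i];		/* after: plain array indexing */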

Signed-off-by: Fengnan Chang <changfengnan@bytedance.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 days agoublk: prevent invalid access with DEBUG
Kevin Brodsky [Wed, 26 Nov 2025 12:48:35 +0000 (12:48 +0000)] 
ublk: prevent invalid access with DEBUG

ublk_ch_uring_cmd_local() may jump to the out label before
initialising the io pointer. This will cause trouble if DEBUG is
defined, because the pr_devel() call dereferences io. Clang reports:

drivers/block/ublk_drv.c:2403:6: error: variable 'io' is used uninitialized whenever 'if' condition is true [-Werror,-Wsometimes-uninitialized]
 2403 |         if (tag >= ub->dev_info.queue_depth)
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/block/ublk_drv.c:2492:32: note: uninitialized use occurs here
 2492 |                         __func__, cmd_op, tag, ret, io->flags);
      |

Fix this by initialising io to NULL and checking it before
dereferencing it.
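
A minimal sketch of the fix; the surrounding code is simplified and the
names beyond 'io' follow the Clang report above:

	struct ublk_io *io = NULL;

	if (tag >= ub->dev_info.queue_depth)
		goto out;
	io = &ubq->ios[tag];
	/* ... */
out:
	pr_devel("%s: cmd_op %x tag %d ret %d io_flags %x\n",
		 __func__, cmd_op, tag, ret, io ? io->flags : 0);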

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Fixes: 71f28f3136af ("ublk_drv: add io_uring based userspace block driver")
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 days agos390/dasd: Use scnprintf() instead of sprintf()
Jan Höppner [Wed, 26 Nov 2025 16:06:34 +0000 (17:06 +0100)] 
s390/dasd: Use scnprintf() instead of sprintf()

Use scnprintf() instead of sprintf() for those cases where the
destination is an array and the size of the array is known at compile
time.

This prevents theoretical buffer overflows, but also avoids that people
again and again spend time to figure out if the code is actually safe.
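
An illustrative instance of the pattern (variable names are placeholders):

	char name[32];

	/* scnprintf() never writes past sizeof(name) and returns the
	 * number of characters actually written */
	len = scnprintf(name, sizeof(name), "dasd%s", suffix);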

Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com>
Reviewed-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 days agos390/dasd: Move device name formatting into separate function
Jan Höppner [Wed, 26 Nov 2025 16:06:33 +0000 (17:06 +0100)] 
s390/dasd: Move device name formatting into separate function

The device name formatting can be generalized and made more readable
compared to the current state. SCSI already provides a generalized way
to format many devices in the same naming scheme as DASD does, which was
introduced with commit 3e1a7ff8a0a7 ("block: allow disk to have extended
device number").

Use this much cleaner code from drivers/scsi/sd.c to handle the legacy
naming scheme in DASD as a replacement for the current implementation.

For easier error handling for the new function, move the gendisk free
portion of dasd_gendisk_free() out into a new function dasd_gd_free().

Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com>
Reviewed-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 days agos390/dasd: Remove unnecessary debugfs_create() return checks
Stefan Haberland [Wed, 26 Nov 2025 16:06:32 +0000 (17:06 +0100)] 
s390/dasd: Remove unnecessary debugfs_create() return checks

The DASD driver only uses the dentry pointers when removing debugfs
entries, and debugfs_remove() can safely handle both NULL and ERR_PTR.
There is therefore no need to check debugfs_create() return values.

This simplifies the debugfs setup code without changing functionality.

Suggested-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 days agos390/dasd: Fix gendisk parent after copy pair swap
Stefan Haberland [Wed, 26 Nov 2025 16:06:31 +0000 (17:06 +0100)] 
s390/dasd: Fix gendisk parent after copy pair swap

After a copy pair swap the block device's "device" symlink points to
the secondary CCW device, but the gendisk's parent remained the
primary, leaving /sys/block/<dasdx> under the wrong parent.

Move the gendisk to the secondary's device with device_move(), keeping
the sysfs topology consistent after the swap.

Fixes: 413862caad6f ("s390/dasd: add copy pair swap capability")
Cc: stable@vger.kernel.org #6.1
Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 days agoblock: ignore __blkdev_issue_discard() return value
Chaitanya Kulkarni [Mon, 24 Nov 2025 23:48:01 +0000 (15:48 -0800)] 
block: ignore __blkdev_issue_discard() return value

__blkdev_issue_discard() always returns 0, making the error check
in blkdev_issue_discard() dead code.

In blkdev_issue_discard(), initialize ret = 0, remove the ret assignment
from __blkdev_issue_discard(), rely on the bio != NULL check before
calling submit_bio_wait(), preserve submit_bio_wait() error handling, and
preserve the -EOPNOTSUPP to 0 mapping.
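
The resulting flow in blkdev_issue_discard(), sketched from the
description above:

	struct bio *bio = NULL;
	struct blk_plug plug;
	int ret = 0;

	blk_start_plug(&plug);
	__blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask, &bio);
	if (bio) {
		ret = submit_bio_wait(bio);
		if (ret == -EOPNOTSUPP)
			ret = 0;
		bio_put(bio);
	}
	blk_finish_plug(&plug);
	return ret;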

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 days agoblock: fix typos in comments and strings in blk-core
shechenglong [Mon, 24 Nov 2025 02:02:58 +0000 (10:02 +0800)] 
block: fix typos in comments and strings in blk-core

This patch fixes multiple spelling mistakes in comments and documentation
in the file block/blk-core.c.

No functional changes intended.

Suggested-by: Christoph Hellwig <hch@infradead.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: shechenglong <shechenglong@xfusion.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 days agoblock: Remove references to __device_add_disk()
John Garry [Tue, 25 Nov 2025 16:11:13 +0000 (16:11 +0000)] 
block: Remove references to __device_add_disk()

Since commit d1254a874971 ("block: remove support for delayed queue
registrations"), function __device_add_disk() has been replaced with
device_add_disk(), so fix up comments.

Signed-off-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 days agoRevert "Merge branch 'loop-aio-nowait' into for-6.19/block"
Jens Axboe [Tue, 25 Nov 2025 03:53:19 +0000 (20:53 -0700)] 
Revert "Merge branch 'loop-aio-nowait' into for-6.19/block"

This reverts commit f43fdeb9a368a5ff56b088b46edc245bd4b52cde, reversing
changes made to 2c6d792d4b7676e2b340df05425330452fee1f40.

There are concerns that doing inline submits can cause excessive
stack usage, particularly when going back into the filesystem. Revert
the loop dio nowait change for now.

Link: https://lore.kernel.org/linux-block/aSP3SG_KaROJTBHx@infradead.org/
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock: use min() instead of min_t()
David Laight [Wed, 19 Nov 2025 22:41:08 +0000 (22:41 +0000)] 
block: use min() instead of min_t()

min_t(unsigned int, a, b) casts an 'unsigned long' to 'unsigned int'.
Use min(a, b) instead as it promotes any 'unsigned int' to 'unsigned long'
and so cannot discard significant bits.

In this case the 'unsigned long' value is small enough that the result
is ok.

(Similarly for max_t() and clamp_t().)

Detected by an extra check added to min_t().
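
An illustrative (not from the patch) example of the difference:

	unsigned long nbytes = 5UL << 32;	/* larger than UINT_MAX (64-bit example) */
	unsigned int limit = 4096;
	unsigned long n;

	n = min_t(unsigned int, nbytes, limit);	/* nbytes truncated to 0 first: wrong */
	n = min(nbytes, limit);			/* limit promoted instead: 4096 */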

Signed-off-by: David Laight <david.laight.linux@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agozloop: clear nowait flag in workqueue context
Chaitanya Kulkarni [Wed, 19 Nov 2025 23:22:34 +0000 (15:22 -0800)] 
zloop: clear nowait flag in workqueue context

The zloop driver advertises REQ_NOWAIT support through BLK_FEAT_NOWAIT
(enabled by default for all blk-mq devices), and honors the nowait
behavior throughout zloop_queue_rq().

However, actual I/O to the backing file is performed in a workqueue,
where blocking is allowed.

To avoid imposing unnecessary non-blocking constraints in this blocking
context, clear the REQ_NOWAIT flag before processing the request in the
workqueue context.
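
The change boils down to something like the following in the work handler
(sketch):

	/* backing-file I/O below runs in process context and may block */
	rq->cmd_flags &= ~REQ_NOWAIT;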

Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoloop: clear nowait flag in workqueue context
Chaitanya Kulkarni [Wed, 19 Nov 2025 23:22:33 +0000 (15:22 -0800)] 
loop: clear nowait flag in workqueue context

The loop driver advertises REQ_NOWAIT support through BLK_FEAT_NOWAIT
(enabled by default for all blk-mq devices), and honors the nowait
behavior throughout loop_queue_rq().

However, actual I/O to the backing file is performed in a workqueue,
where blocking is allowed.

To avoid imposing unnecessary non-blocking constraints in this blocking
context, clear the REQ_NOWAIT flag before processing the request in the
workqueue context.

Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agozloop: fix zone append check in zloop_rw()
Damien Le Moal [Wed, 19 Nov 2025 04:34:23 +0000 (13:34 +0900)] 
zloop: fix zone append check in zloop_rw()

While commit cf28f6f923cb ("zloop: fail zone append operations that are
targeting full zones") added a check in zloop_rw() that a zone append is
not issued to a full zone, commit e3a96ca90462 ("zloop: simplify checks
for writes to sequential zones") inadvertently removed the check that
verifies there is enough unwritten space in a zone for an incoming
zone append operation.

Re-add this check in zloop_rw() to make sure we do not write beyond the
end of a zone. Of note is that this same check is already present in the
function zloop_set_zone_append_sector() when ordered zone append is in
use.
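
A sketch of the re-added check; the zone field names and error handling
below are assumptions:

	/* a zone append must fit into the remaining unwritten zone space */
	if (zone->wp + blk_rq_sectors(rq) > zone->start + zone->capacity) {
		ret = -EIO;
		goto unlock;
	}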

Reported-by: Hans Holmberg <Hans.Holmberg@wdc.com>
Fixes: e3a96ca90462 ("zloop: simplify checks for writes to sequential zones")
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hans Holmberg <hans.holmberg@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoMAINTAINERS: add a maintainer for zoned block device support
Damien Le Moal [Wed, 19 Nov 2025 03:02:20 +0000 (12:02 +0900)] 
MAINTAINERS: add a maintainer for zoned block device support

Add myself as the maintainer of the block layer support for the zoned
block device code and user API.

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoMAINTAINERS: add missing block layer user API header files
Damien Le Moal [Wed, 19 Nov 2025 03:02:19 +0000 (12:02 +0900)] 
MAINTAINERS: add missing block layer user API header files

Add the missing user API header files related to the block layer to the
list of matching file patterns for Jens's block layer entry.

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock: remove the declaration of elevator_init_mq function
Chengkaitao [Tue, 18 Nov 2025 01:26:44 +0000 (09:26 +0800)] 
block: remove the declaration of elevator_init_mq function

In commit 1e44bedbc921 ("block: unifying elevator change"), the
elevator_init_mq function was deleted, but its declaration in elevator.h
was overlooked. This patch fixes it.

Signed-off-by: Chengkaitao <chengkaitao@kylinos.cn>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoRevert "block: consider discard merge last"
Jens Axboe [Tue, 18 Nov 2025 15:37:26 +0000 (08:37 -0700)] 
Revert "block: consider discard merge last"

This reverts commit 2516c246d01c23a5f5310e9ac78d9f8aad9b1d0e.

Suspected issues with discard merging post this patch, hence revert
it for now.

Link: https://lore.kernel.org/linux-block/26acdfdf-de13-430b-8c73-f890c7689a84@kernel.dk/
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock: Remove queue freezing from several sysfs store callbacks
Bart Van Assche [Fri, 14 Nov 2025 21:04:07 +0000 (13:04 -0800)] 
block: Remove queue freezing from several sysfs store callbacks

Freezing the request queue from inside sysfs store callbacks may cause a
deadlock in combination with the dm-multipath driver and the
queue_if_no_path option. Additionally, freezing the request queue slows
down system boot on systems where sysfs attributes are set synchronously.

Fix this by removing the blk_mq_freeze_queue() / blk_mq_unfreeze_queue()
calls from the store callbacks that do not strictly need these callbacks.
Add the __data_racy annotation to request_queue.rq_timeout to suppress
KCSAN data race reports about the rq_timeout reads.

This patch may cause a small delay in applying the new settings.

For all the attributes affected by this patch, I/O will complete
correctly whether the old or the new value of the attribute is used.

This patch affects the following sysfs attributes:
* io_poll_delay
* io_timeout
* nomerges
* read_ahead_kb
* rq_affinity

Here is an example of a deadlock triggered by running test srp/002
if this patch is not applied:

task:multipathd
Call Trace:
 <TASK>
 __schedule+0x8c1/0x1bf0
 schedule+0xdd/0x270
 schedule_preempt_disabled+0x1c/0x30
 __mutex_lock+0xb89/0x1650
 mutex_lock_nested+0x1f/0x30
 dm_table_set_restrictions+0x823/0xdf0
 __bind+0x166/0x590
 dm_swap_table+0x2a7/0x490
 do_resume+0x1b1/0x610
 dev_suspend+0x55/0x1a0
 ctl_ioctl+0x3a5/0x7e0
 dm_ctl_ioctl+0x12/0x20
 __x64_sys_ioctl+0x127/0x1a0
 x64_sys_call+0xe2b/0x17d0
 do_syscall_64+0x96/0x3a0
 entry_SYSCALL_64_after_hwframe+0x4b/0x53
 </TASK>
task:(udev-worker)
Call Trace:
 <TASK>
 __schedule+0x8c1/0x1bf0
 schedule+0xdd/0x270
 blk_mq_freeze_queue_wait+0xf2/0x140
 blk_mq_freeze_queue_nomemsave+0x23/0x30
 queue_ra_store+0x14e/0x290
 queue_attr_store+0x23e/0x2c0
 sysfs_kf_write+0xde/0x140
 kernfs_fop_write_iter+0x3b2/0x630
 vfs_write+0x4fd/0x1390
 ksys_write+0xfd/0x230
 __x64_sys_write+0x76/0xc0
 x64_sys_call+0x276/0x17d0
 do_syscall_64+0x96/0x3a0
 entry_SYSCALL_64_after_hwframe+0x4b/0x53
 </TASK>

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Nilay Shroff <nilay@linux.ibm.com>
Cc: Martin Wilck <mwilck@suse.com>
Cc: Benjamin Marzinski <bmarzins@redhat.com>
Cc: stable@vger.kernel.org
Fixes: af2814149883 ("block: freeze the queue in queue_attr_store")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agofs: Add the __data_racy annotation to backing_dev_info.ra_pages
Bart Van Assche [Fri, 14 Nov 2025 21:04:06 +0000 (13:04 -0800)] 
fs: Add the __data_racy annotation to backing_dev_info.ra_pages

Some but not all .ra_pages changes happen while block layer I/O is paused
with blk_mq_freeze_queue(). Filesystems may read .ra_pages even while
block layer I/O is paused, e.g. from inside their .fadvise callback.
Annotating all .ra_pages reads with READ_ONCE() would be cumbersome.
Hence, add the __data_racy annotation to the .ra_pages member
variable.
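
The annotated member then looks like this (sketch of the change in
struct backing_dev_info):

	unsigned long __data_racy ra_pages;	/* max readahead in PAGE_SIZE units */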

Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock: plug attempts to batch allocate tags multiple times
Xue He [Tue, 18 Nov 2025 07:32:30 +0000 (07:32 +0000)] 
block: plug attempts to batch allocate tags multiple times

This patch aims to enable batch allocation of sufficient tags during batch
IO submission with the plug mechanism, thereby avoiding frequent
individual tag allocations when the initial allocation is insufficient.
-----------------------------------------------------------
HW:
16 CPUs/16 poll queues
Disk: Samsung PM9A3 Gen4 3.84T

CMD:
[global]
ioengine=io_uring
group_reporting=1
time_based=1
runtime=1m
refill_buffers=1
norandommap=1
randrepeat=0
fixedbufs=1
registerfiles=1
rw=randread
iodepth=128
iodepth_batch_submit=32
iodepth_batch_complete_min=32
iodepth_batch_complete_max=128
iodepth_low=32
bs=4k
numjobs=1
direct=1
hipri=1

[job1]
filename=/dev/nvme0n1
name=batch_test
------------------------------------------------------------
Perf:
base code: __blk_mq_alloc_requests() 1.47%
patch: __blk_mq_alloc_requests() 0.75%
------------------------------------------------------------

Signed-off-by: hexue <xue01.he@samsung.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoMerge branch 'loop-aio-nowait' into for-6.19/block
Jens Axboe [Tue, 18 Nov 2025 13:49:59 +0000 (06:49 -0700)] 
Merge branch 'loop-aio-nowait' into for-6.19/block

Merge async IO IOCB_NOWAIT support from Ming:

"This patchset improves loop aio perf by using IOCB_NOWAIT for avoiding
 to queue aio command to workqueue context, meantime refactor
 lo_rw_aio() a bit.

 In my test VM, loop disk perf becomes very close to perf of the backing
 block device(nvme/mq virtio-scsi).

 And Mikulas verified that this way can improve 12jobs sequential
 readwrite io by ~5X, and basically solve the reported problem together
 with loop MQ change.

 https://lore.kernel.org/linux-block/a8e5c76a-231f-07d1-a394-847de930f638@redhat.com/

 Zhaoyang Huang also mentioned it may fix their performance issue on
 Android use case.

 The loop MQ change will be posted as standalone patch, because it needs
 UAPI change."

Link: https://lore.kernel.org/linux-block/20251015110735.1361261-1-ming.lei@redhat.com/
Signed-off-by: Jens Axboe <axboe@kernel.dk>
* loop-aio-nowait:
  loop: add hint for handling aio via IOCB_NOWAIT
  loop: try to handle loop aio command via NOWAIT IO first
  loop: move command blkcg/memcg initialization into loop_queue_work
  loop: add lo_submit_rw_aio()
  loop: add helper lo_rw_aio_prep()
  loop: add helper lo_cmd_nr_bvec()

2 weeks agoloop: add hint for handling aio via IOCB_NOWAIT
Ming Lei [Wed, 15 Oct 2025 11:07:31 +0000 (19:07 +0800)] 
loop: add hint for handling aio via IOCB_NOWAIT

Add a hint for using IOCB_NOWAIT to handle loop aio commands, to avoid
causing write (especially randwrite) perf regressions on sparse backing
files.

Try IOCB_NOWAIT in the following situations:

- backing file is block device

OR

- READ aio command

OR

- there aren't any queued blocking async WRITEs, because NOWAIT won't cause
contention with a blocking WRITE, which often implies an exclusive lock

With this simple policy, the perf regression of randwrite/write on a
sparse backing file is fixed.
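
A sketch of that policy as a predicate; the pending-blocking-writes
counter below is an assumed name, not necessarily the driver's actual
field:

static bool lo_aio_try_nowait(struct loop_device *lo, struct loop_cmd *cmd)
{
	struct request *rq = blk_mq_rq_from_pdu(cmd);

	/* backing file is a block device */
	if (S_ISBLK(file_inode(lo->lo_backing_file)->i_mode))
		return true;

	/* READ aio command */
	if (req_op(rq) == REQ_OP_READ)
		return true;

	/* no queued blocking async WRITEs to contend with */
	return !atomic_read(&lo->lo_nr_blocking_writes);
}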

Link: https://lore.kernel.org/dm-devel/7d6ae2c9-df8e-50d0-7ad6-b787cb3cfab4@redhat.com/
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoloop: try to handle loop aio command via NOWAIT IO first
Ming Lei [Wed, 15 Oct 2025 11:07:30 +0000 (19:07 +0800)] 
loop: try to handle loop aio command via NOWAIT IO first

Try to handle the loop aio command via NOWAIT IO first, so that we can
avoid queueing the aio command into the workqueue. This is usually a big
win when the FS block mapping is stable; Mikulas verified [1] that this
way improves IO perf by close to 5X in a 12-job sequential read/write
test, in which the FS block mapping is stable.

Fall back to the workqueue in case of -EAGAIN. This may bring a little
cost from the first retry, but when running the following write test over
a loop device backed by a sparse file, the actual effect on randwrite is
obvious:

```
truncate -s 4G 1.img    #1.img is created on XFS/virtio-scsi
losetup -f 1.img --direct-io=on
fio --direct=1 --bs=4k --runtime=40 --time_based --numjobs=1 --ioengine=libaio \
--iodepth=16 --group_reporting=1 --filename=/dev/loop0 -name=job --rw=$RW
```

- RW=randwrite: obvious IOPS drop observed
- RW=write: a little drop (5% - 10%)

This perf drop on randwrite over sparse file will be addressed in the
following patch.

BLK_MQ_F_BLOCKING has to be set for calling into .read_iter() or
.write_iter(), which might sleep even though it is NOWAIT, and the only
effect is that the RCU read lock is replaced with an SRCU read lock.
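
A sketch of the NOWAIT-first submission with workqueue fallback; the
helper names follow this series but the exact call sites and signatures
are assumptions:

	cmd->iocb.ki_flags |= IOCB_NOWAIT;
	ret = lo_submit_rw_aio(lo, cmd, nr_bvec);
	if (ret == -EAGAIN) {
		/* the FS could not proceed without blocking: retry from
		 * the workqueue without NOWAIT */
		cmd->iocb.ki_flags &= ~IOCB_NOWAIT;
		loop_queue_work(lo, cmd);
	}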

Link: https://lore.kernel.org/linux-block/a8e5c76a-231f-07d1-a394-847de930f638@redhat.com/
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoloop: move command blkcg/memcg initialization into loop_queue_work
Ming Lei [Wed, 15 Oct 2025 11:07:29 +0000 (19:07 +0800)] 
loop: move command blkcg/memcg initialization into loop_queue_work

Move loop command blkcg/memcg initialization into loop_queue_work,
and prepare for supporting to handle loop io command by IOCB_NOWAIT.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoloop: add lo_submit_rw_aio()
Ming Lei [Wed, 15 Oct 2025 11:07:28 +0000 (19:07 +0800)] 
loop: add lo_submit_rw_aio()

Refactor lo_rw_aio() by extracting the I/O submission logic into a new
helper function lo_submit_rw_aio(). This further improves code organization
by separating the I/O preparation, submission, and completion handling into
distinct phases.

Prepare for using NOWAIT to improve loop performance.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoloop: add helper lo_rw_aio_prep()
Ming Lei [Wed, 15 Oct 2025 11:07:27 +0000 (19:07 +0800)] 
loop: add helper lo_rw_aio_prep()

Add helper lo_rw_aio_prep() to separate the preparation phase (setting up
bio vectors and initializing the iocb structure) from the actual I/O
execution in the loop block driver.

Prepare for using NOWAIT to improve loop performance.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoloop: add helper lo_cmd_nr_bvec()
Ming Lei [Wed, 15 Oct 2025 11:07:26 +0000 (19:07 +0800)] 
loop: add helper lo_cmd_nr_bvec()

Add lo_cmd_nr_bvec() and prepare for refactoring lo_rw_aio().

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agodrbd: turn bitmap I/O comments into regular block comments
Sukrut Heroorkar [Tue, 18 Nov 2025 09:07:53 +0000 (14:37 +0530)] 
drbd: turn bitmap I/O comments into regular block comments

W=1 build warns because the bitmap I/O comments use '/**', which
marks them as kernel-doc comments even though these functions do not
document an external API.

Convert these comments to regular block comments so kernel-doc no
longer parses them.

Signed-off-by: Sukrut Heroorkar <hsukrut3@gmail.com>
Acked-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock: rate-limit capacity change info log
Li Chen [Mon, 17 Nov 2025 05:34:07 +0000 (13:34 +0800)] 
block: rate-limit capacity change info log

Loop devices under a heavy stress-ng loop stressor can trigger many
capacity change events in a short time. Each event prints an info
message from set_capacity_and_notify(), flooding the console and
contributing to soft lockups on slow consoles.

Switch the printk in set_capacity_and_notify() to
pr_info_ratelimited() so frequent capacity changes do not spam
the log while still reporting occasional changes.
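
The change amounts to switching the existing message to the ratelimited
variant, roughly:

	pr_info_ratelimited("%s: detected capacity change from %lld to %lld\n",
			    disk->disk_name, capacity, size);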

Cc: stable@vger.kernel.org
Signed-off-by: Li Chen <chenl311@chinatelecom.cn>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoDocumentation: admin-guide: blockdev: update zloop parameters
Damien Le Moal [Sat, 15 Nov 2025 12:15:56 +0000 (21:15 +0900)] 
Documentation: admin-guide: blockdev: update zloop parameters

In Documentation/admin-guide/blockdev/zoned_loop.rst, add the
description of the zone_append and ordered_zone_append configuration
arguments of zloop "add" command (device creation).

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agozloop: introduce the ordered_zone_append configuration parameter
Damien Le Moal [Sat, 15 Nov 2025 12:15:55 +0000 (21:15 +0900)] 
zloop: introduce the ordered_zone_append configuration parameter

The zone append operation processing for zloop devices is similar to any
other command, that is, the operation is processed as a command work
item, without any special serialization between the work items (besides
the zone mutex for mutually exclusive code sections).

This processing is fine and gives excellent performance. However, it has
a side effect: zone append operations are very often reordered and
processed in a sequence that is very different from their issuing order
by the user. This effect is very visible using an XFS file system on top
of a zloop device. A simple file write leads to many file extents as the
data writes using zone append are reordered and so result in the
physical order being different from the file logical order.
E.g. executing:

$ dd if=/dev/zero of=/mnt/test bs=1M count=10 && sync
$ xfs_bmap /mnt/test
/mnt/test:
0: [0..4095]: 2162688..2166783
1: [4096..6143]: 2168832..2170879
2: [6144..8191]: 2166784..2168831
3: [8192..10239]: 2170880..2172927
4: [10240..12287]: 2174976..2177023
5: [12288..14335]: 2172928..2174975
6: [14336..20479]: 2177024..2183167

For 10 IOs, 6 extents are created.

This is fine and actually allows exercising XFS zone garbage collection
very well. However, this also makes debugging/working on XFS data
placement harder as the underlying device will most of the time reorder
IOs, resulting in many file extents.

Allow a user to mitigate this with the new ordered_zone_append
configuration parameter. For a zloop device created with this parameter
specified, the sector of a zone append command is set early, when the
command is submitted by the block layer with the zloop_queue_rq()
function, instead of in the zloop_rw() function which is executed later
in the command work item context. This change ensures that, more often
than not, zone append data ends up being written in the same order as
the commands were submitted by the user.

In the case of XFS, this leads to far less file data extents. E.g., for
the previous example, we get a single file data extent for the written
file.

$ dd if=/dev/zero of=/mnt/test bs=1M count=10 && sync
$ xfs_bmap /mnt/test
/mnt/test:
0: [0..20479]: 2162688..2183167

Since we cannot use a mutex in the context of the zloop_queue_rq()
function to atomically set a zone append operation sector to the target
zone write pointer location and advance that write pointer, a new
per-zone spinlock is introduced to protect zone write pointer accesses
and modifications. To check a zone write pointer location and set a zone
append operation target sector to that value, the function
zloop_set_zone_append_sector() is introduced and called from
zloop_queue_rq().
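
A sketch of that helper; the zloop_zone field names and the way the
target sector is recorded here are assumptions:

static bool zloop_set_zone_append_sector(struct zloop_zone *zone,
					 struct request *rq)
{
	bool ret = true;

	spin_lock(&zone->wp_lock);
	if (zone->cond == BLK_ZONE_COND_FULL) {
		ret = false;
	} else {
		/* direct the append at the write pointer and advance it */
		rq->__sector = zone->wp;
		zone->wp += blk_rq_sectors(rq);
	}
	spin_unlock(&zone->wp_lock);

	return ret;
}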

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agozloop: introduce the zone_append configuration parameter
Damien Le Moal [Sat, 15 Nov 2025 12:15:54 +0000 (21:15 +0900)] 
zloop: introduce the zone_append configuration parameter

A zloop zoned block device declares to the block layer that it supports
zone append operations. That is, a zloop device resembles an NVMe ZNS
device supporting zone append.

This native support is fine but it does not allow exercising the block
layer zone write plugging emulation of zone append, as is done with SCSI
or ATA SMR HDDs.

Introduce the zone_append configuration parameter to allow creating a
zloop device without native support for zone append, thus relying on the
block layer zone append emulation. If not specified, zone append support
is enabled by default. Otherwise, a value of 0 disables native zone
append and a value of 1 enables it.

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agozloop: simplify checks for writes to sequential zones
Damien Le Moal [Sat, 15 Nov 2025 12:15:53 +0000 (21:15 +0900)] 
zloop: simplify checks for writes to sequential zones

The function zloop_rw() already checks early that a request is fully
contained within the target zone. So this check does not need to be done
again for regular writes to sequential zones. Furthermore, since zone
append operations are always directed to the zone write pointer
location, we do not need to check for their alignment to that value
after setting it. So turn the "if" checking the write pointer alignment
into an "else if".

While at it, improve the comment describing the write pointer
modification and how this value is corrected in case of error.

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agozloop: fail zone append operations that are targeting full zones
Damien Le Moal [Sat, 15 Nov 2025 12:15:52 +0000 (21:15 +0900)] 
zloop: fail zone append operations that are targeting full zones

zloop_rw() will fail any regular write operation that targets a full
sequential zone. The check for this is indirect and achieved by checking
the write pointer alignment of the write operation. But this check is
ineffective for zone append operations since these are always
automatically directed at the zone write pointer.

Prevent zone append operations from being executed in a full zone with
an explicit check of the zone condition.

Fixes: eb0570c7df23 ("block: new zoned loop block device driver")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agozloop: make the write pointer of full zones invalid
Damien Le Moal [Sat, 15 Nov 2025 12:15:51 +0000 (21:15 +0900)] 
zloop: make the write pointer of full zones invalid

The write pointer of zones that are in the full condition is always
invalid. Reflect that fact by setting the write pointer of full zones
to ULLONG_MAX.

Fixes: eb0570c7df23 ("block: new zoned loop block device driver")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock/blk-throttle: Remove throtl_slice from struct throtl_data
Guenter Roeck [Fri, 14 Nov 2025 23:54:34 +0000 (15:54 -0800)] 
block/blk-throttle: Remove throtl_slice from struct throtl_data

throtl_slice is now a constant. Remove the variable and use the constant
directly where needed.

Cc: Yu Kuai <yukuai@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock/blk-throttle: drop unneeded blk_stat_enable_accounting
Guenter Roeck [Fri, 14 Nov 2025 23:54:33 +0000 (15:54 -0800)] 
block/blk-throttle: drop unneeded blk_stat_enable_accounting

After the removal of CONFIG_BLK_DEV_THROTTLING_LOW, it is no longer
necessary to enable block accounting, so remove the call to
blk_stat_enable_accounting(). With that, the track_bio_latency variable
is no longer used and can be deleted from struct throtl_data. Also,
including blk-stat.h is no longer necessary.

Fixes: bf20ab538c81 ("blk-throttle: remove CONFIG_BLK_DEV_THROTTLING_LOW")
Cc: Yu Kuai <yukuai@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock/blk-throttle: Fix throttle slice time for SSDs
Guenter Roeck [Fri, 14 Nov 2025 23:54:32 +0000 (15:54 -0800)] 
block/blk-throttle: Fix throttle slice time for SSDs

Commit d61fcfa4bb18 ("blk-throttle: choose a small throtl_slice for SSD")
introduced device type specific throttle slices if BLK_DEV_THROTTLING_LOW
was enabled. Commit bf20ab538c81 ("blk-throttle: remove
CONFIG_BLK_DEV_THROTTLING_LOW") removed support for BLK_DEV_THROTTLING_LOW,
but left the device type specific throttle slices in place. This
effectively changed throttling behavior on systems with SSD which now use
a different and non-configurable slice time compared to non-SSD devices.
Practical impact is that throughput tests with low configured throttle
values (65536 bps) experience less than expected throughput on SSDs,
presumably due to rounding errors associated with the small throttle slice
time used for those devices. The same tests pass when setting the throttle
values to 65536 * 4 = 262144 bps.

The original code sets the throttle slice time to DFL_THROTL_SLICE_HD if
CONFIG_BLK_DEV_THROTTLING_LOW is disabled. Restore that code to fix the
problem. With that, DFL_THROTL_SLICE_SSD is no longer necessary. Revert to
the original code and re-introduce DFL_THROTL_SLICE to replace both
DFL_THROTL_SLICE_HD and DFL_THROTL_SLICE_SSD. This effectively reverts
commit d61fcfa4bb18 ("blk-throttle: choose a small throtl_slice for SSD").

While at it, also remove MAX_THROTL_SLICE since it is not used anymore.

Fixes: bf20ab538c81 ("blk-throttle: remove CONFIG_BLK_DEV_THROTTLING_LOW")
Cc: Yu Kuai <yukuai@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agoblock: consider discard merge last
Keith Busch [Fri, 14 Nov 2025 18:31:45 +0000 (10:31 -0800)] 
block: consider discard merge last

If the next discard range is contiguous with the current range being
considered, it's cheaper to expand the current range than to append an
additional bio.

Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 weeks agofloppy: fix for PAGE_SIZE != 4KB
Rene Rebe [Fri, 14 Nov 2025 13:41:27 +0000 (14:41 +0100)] 
floppy: fix for PAGE_SIZE != 4KB

For years I wondered why the floppy driver does not just work on
sparc64, e.g:

root@SUNW_375_0066:# disktype /dev/fd0
disktype: Can't open /dev/fd0: No such device or address

[  525.341906] disktype: attempt to access beyond end of device
fd0: rw=0, sector=0, nr_sectors = 16 limit=8
[  525.341991] floppy: error 10 while reading block 0

Turns out floppy.c __floppy_read_block_0 tries to read one page for
the first test read to determine the disk size and thus fails if that
is greater than 4k. Adjust minimum MAX_DISK_SIZE to PAGE_SIZE to fix
floppy on sparc64 and likely all other PAGE_SIZE != 4KB configs.

Cc: stable@vger.kernel.org
Signed-off-by: René Rebe <rene@exactco.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agops3disk: use memcpy_{from,to}_bvec index
Rene Rebe [Fri, 14 Nov 2025 14:30:33 +0000 (15:30 +0100)] 
ps3disk: use memcpy_{from,to}_bvec index

With 6e0a48552b8c (ps3disk: use memcpy_{from,to}_bvec) converting
ps3disk to the new bvec helpers, incrementing the offset was accidentally
lost, corrupting consecutive buffers. Restore the offset increment so
data transfers are no longer corrupted.
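
The restored per-segment pattern looks roughly like this (the buffer name
is a placeholder):

	rq_for_each_segment(bvec, req, iter) {
		memcpy_from_bvec(dev->bounce_buf + offset, &bvec);
		offset += bvec.bv_len;	/* the increment that had been lost */
	}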

Fixes: 6e0a48552b8c (ps3disk: use memcpy_{from,to}_bvec)
Signed-off-by: René Rebe <rene@exactco.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoMerge branch 'p2pdma-mmio-6.19.v5' into for-6.19/block
Jens Axboe [Fri, 14 Nov 2025 12:10:09 +0000 (05:10 -0700)] 
Merge branch 'p2pdma-mmio-6.19.v5' into for-6.19/block

Merge MMIO P2P DMA series from Leon:

"This patch series improves block layer and NVMe driver support for MMIO
 memory regions, particularly for peer-to-peer (P2P) DMA transfers that
 go through the host bridge.

 The series addresses a critical gap where P2P transfers through the
 host bridge (PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) were not properly marked
 as MMIO memory, leading to potential issues with:

 - Inappropriate CPU cache synchronization operations on MMIO regions
 - Incorrect DMA mapping/unmapping that doesn't respect MMIO semantics
 - Missing IOMMU configuration for MMIO memory handling

 This work is extracted from the larger DMA physical API improvement
 series [1] and focuses specifically on block layer and NVMe
 requirements for MMIO memory support.

 [1] https://lore.kernel.org/all/cover.1757423202.git.leonro@nvidia.com/"

Link: https://lore.kernel.org/linux-block/20251114-block-with-mmio-v5-0-69d00f73d766@nvidia.com/
Signed-off-by: Jens Axboe <axboe@kernel.dk>
* p2pdma-mmio-6.19.v5:
  block-dma: properly take MMIO path
  nvme-pci: migrate to dma_map_phys instead of map_page

3 weeks agoblock-dma: properly take MMIO path
Leon Romanovsky [Fri, 14 Nov 2025 09:07:04 +0000 (11:07 +0200)] 
block-dma: properly take MMIO path

In commit eadaa8b255f3 ("dma-mapping: introduce new DMA attribute to
indicate MMIO memory"), DMA_ATTR_MMIO attribute was added to describe
MMIO addresses, which require to avoid any memory cache flushing, as
an outcome of the discussion pointed in Link tag below.

In case of a PCI_P2PDMA_MAP_THRU_HOST_BRIDGE transfer, the blk-mq-dma
logic treated this as a regular page and relied on the "struct page" DMA
flow. That flow performs CPU cache flushing, which shouldn't be done
here, and doesn't set the IOMMU_MMIO flag in the DMA-IOMMU case.

As a solution, let's encode the peer-to-peer transaction type in the NVMe
IOD flags variable and provide it to the blk-mq-dma API.

Link: https://lore.kernel.org/all/f912c446-1ae9-4390-9c11-00dce7bf0fd3@arm.com/
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agonvme-pci: migrate to dma_map_phys instead of map_page
Leon Romanovsky [Fri, 14 Nov 2025 09:07:03 +0000 (11:07 +0200)] 
nvme-pci: migrate to dma_map_phys instead of map_page

After the introduction of dma_map_phys(), there is no need to convert
from a physical address to a struct page in order to map a page. So let's
use it directly.
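
Illustrative before/after of the mapping call with placeholder variables;
the dma_map_phys() argument list here is an assumption based on this
series:

	/* before: convert the physical address back to a page to map it */
	dma_addr = dma_map_page(dev, phys_to_page(phys),
				offset_in_page(phys), len, dir);

	/* after: map the physical address directly */
	dma_addr = dma_map_phys(dev, phys, len, dir, 0);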

Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoMerge branch 'elevator-switch-6.19' into for-6.19/block
Jens Axboe [Thu, 13 Nov 2025 16:28:00 +0000 (09:28 -0700)] 
Merge branch 'elevator-switch-6.19' into for-6.19/block

Merge elevator switching improvements from Nilay:

"This patchset reorganizes the elevator switch path used during both
 nr_hw_queues update and elv_iosched_store() operations to address a
 recently reported lockdep splat [1].

 The warning highlights a locking dependency between ->freeze_lock and
 ->elevator_lock on pcpu_alloc_mutex, triggered when the Kyber scheduler
 dynamically allocates its private scheduling data. The fix is to ensure
 that such allocations occur outside the locked sections, thus
 eliminating the dependency chain.

 While working on this, it also became evident that the nr_hw_queue
 update code maintains two disjoint xarrays—one for elevator tags and
 another for elevator type—both serving the same purpose. Unifying these
 into a single elv_change_ctx structure improves clarity and
 maintainability.

 This series therefore implements five patches:
 The first preparatory patch unifies the elevator tags and type xarrays. It
 combines both xarrays into a single struct elv_change_ctx, simplifying
 per-queue elevator state management.

 The second patch is aimed to group together all elevator-related
 resources that share the same lifetime and as a first step we move the
 elevator tags pointer from struct elv_change_ctx into the newly
 introduced struct elevator_resources. The subsequent patch extends the
 struct elevator_resources to include other elevator-related data.

 The third patch introduce ->alloc_sched_data and ->free_sched_data
 elevator ops which could be then used to safely allocate and free
 scheduler data.

 The fourth patch now builds upon the previous patch and starts using
 the newly introduced alloc/free sched data methods in the earlier patch
 during elevator switch and nr_hw_queue update. And while doing so, it's
 ensured that sched data allocation and free happens before we acquire
 ->freeze_lock and ->elevator_lock thus preventing its dependency on
 pcpu_alloc_mutex.

 The last patch of this series converts the Kyber scheduler to use the new
 methods introduced in the previous patch. It hooks Kyber’s scheduler
 data allocation and teardown logic from ->init_sched and ->exit_sched
 into the new methods, ensuring memory operations are performed outside
 locked sections.

 Together, these changes simplify the elevator switch logic and prevent
 the reported lockdep splat."

Link: https://lore.kernel.org/linux-block/20251113090619.2030737-1-nilay@linux.ibm.com/
Signed-off-by: Jens Axboe <axboe@kernel.dk>
* elevator-switch-6.19:
  block: define alloc_sched_data and free_sched_data methods for kyber
  block: use {alloc|free}_sched data methods
  block: introduce alloc_sched_data and free_sched_data elevator methods
  block: move elevator tags into struct elevator_resources
  block: unify elevator tags and type xarrays into struct elv_change_ctx

3 weeks agoblock: define alloc_sched_data and free_sched_data methods for kyber
Nilay Shroff [Thu, 13 Nov 2025 08:58:22 +0000 (14:28 +0530)] 
block: define alloc_sched_data and free_sched_data methods for kyber

Currently, the Kyber elevator allocates its private data dynamically in
->init_sched and frees it in ->exit_sched. However, since ->init_sched
is invoked during elevator switch after acquiring both ->freeze_lock and
->elevator_lock, it may trigger the lockdep splat [1] due to dependency
on pcpu_alloc_mutex.

To resolve this, move the elevator data allocation and deallocation
logic from ->init_sched and ->exit_sched into the newly introduced
->alloc_sched_data and ->free_sched_data methods. These callbacks are
invoked before acquiring ->freeze_lock and ->elevator_lock, ensuring
that memory allocation happens safely without introducing additional
locking dependencies.

This change breaks the dependency chain involving pcpu_alloc_mutex and
prevents the reported lockdep warning.

[1] https://lore.kernel.org/all/CAGVVp+VNW4M-5DZMNoADp6o2VKFhi7KxWpTDkcnVyjO0=-D5+A@mail.gmail.com/

Reported-by: Changhui Zhong <czhong@redhat.com>
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Closes: https://lore.kernel.org/all/CAGVVp+VNW4M-5DZMNoADp6o2VKFhi7KxWpTDkcnVyjO0=-D5+A@mail.gmail.com/
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblock: use {alloc|free}_sched data methods
Nilay Shroff [Thu, 13 Nov 2025 08:58:21 +0000 (14:28 +0530)] 
block: use {alloc|free}_sched data methods

The previous patch introduced ->alloc_sched_data and
->free_sched_data methods. This patch builds upon that
by now using these methods during elevator switch and
nr_hw_queue update.

It's also ensured that scheduler-specific data is
allocated and freed through the new callbacks outside
of the ->freeze_lock and ->elevator_lock locking contexts,
thereby preventing any dependency on pcpu_alloc_mutex.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblock: introduce alloc_sched_data and free_sched_data elevator methods
Nilay Shroff [Thu, 13 Nov 2025 08:58:20 +0000 (14:28 +0530)] 
block: introduce alloc_sched_data and free_sched_data elevator methods

The recent lockdep splat [1] highlights a potential deadlock risk
involving ->elevator_lock and ->freeze_lock dependencies on
pcpu_alloc_mutex. The trace shows that the issue occurs when the Kyber
scheduler
allocates dynamic memory for its elevator data during initialization.

To address this, introduce two new elevator operation callbacks:
->alloc_sched_data and ->free_sched_data. The subsequent patch would
build upon these newly introduced methods to suppress lockdep splat[1].
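
Sketched shape of the new callbacks; the exact signatures are
assumptions:

struct elevator_mq_ops {
	/* ... existing callbacks ... */
	void *(*alloc_sched_data)(struct request_queue *q, gfp_t gfp);
	void (*free_sched_data)(void *sched_data);
};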

[1] https://lore.kernel.org/all/CAGVVp+VNW4M-5DZMNoADp6o2VKFhi7KxWpTDkcnVyjO0=-D5+A@mail.gmail.com/

Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblock: move elevator tags into struct elevator_resources
Nilay Shroff [Thu, 13 Nov 2025 08:58:19 +0000 (14:28 +0530)] 
block: move elevator tags into struct elevator_resources

This patch introduces a new structure, struct elevator_resources, to
group together all elevator-related resources that share the same
lifetime. As a first step, this change moves the elevator tag pointer
from struct elv_change_ctx into the new struct elevator_resources.

Additionally, rename blk_mq_alloc_sched_tags_batch() and
blk_mq_free_sched_tags_batch() to blk_mq_alloc_sched_res_batch() and
blk_mq_free_sched_res_batch(), respectively. Introduce two new wrapper
helpers, blk_mq_alloc_sched_res() and blk_mq_free_sched_res(), around
blk_mq_alloc_sched_tags() and blk_mq_free_sched_tags().

These changes pave the way for consolidating the allocation and freeing
of elevator-specific resources into common helper functions. This
refactoring improves encapsulation and prepares the code for future
extensions, allowing additional elevator-specific data to be added to
struct elevator_resources without cluttering struct elv_change_ctx.

Subsequent patches will extend struct elevator_resources to include
other elevator-related data.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblock: unify elevator tags and type xarrays into struct elv_change_ctx
Nilay Shroff [Thu, 13 Nov 2025 08:58:18 +0000 (14:28 +0530)] 
block: unify elevator tags and type xarrays into struct elv_change_ctx

Currently, the nr_hw_queues update path manages two disjoint xarrays —
one for elevator tags and another for elevator type — both used during
elevator switching. Maintaining these two parallel structures for the
same purpose adds unnecessary complexity and potential for mismatched
state.

This patch unifies both xarrays into a single structure, struct
elv_change_ctx, which holds all per-queue elevator change context. A
single xarray, named elv_tbl, now maps each queue (q->id) in a tagset
to its corresponding elv_change_ctx entry, encapsulating the elevator
tags, type and name references.

This unification simplifies the code, improves maintainability, and
clarifies ownership of per-queue elevator state.
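
A hedged sketch of the lookup this implies; names other than elv_tbl
and q->id are assumptions:

  /* one entry per queue in the tagset, keyed by queue id */
  struct elv_change_ctx *ctx = xa_load(&elv_tbl, q->id);
  if (ctx) {
          /* ctx carries the elevator tags, type and name references */
  }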

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoMerge branch 'bcache-updates-6.19' into for-6.19/block
Jens Axboe [Thu, 13 Nov 2025 16:18:19 +0000 (09:18 -0700)] 
Merge branch 'bcache-updates-6.19' into for-6.19/block

Merge bcache updates from Coly for 6.19:

"The major change is from me, which is to remove useless discard
 interface and code for cache device (not the backing device). And the
 last patch about gc latency is a cooperative result from Robert Pang
 (Google), Mingzhe Zou (Easystack) and me, by inspired from their
 previous works, I compose the final version and Robert prvides positive
 benchmark result.

 Marco contributes 2 patches to improve the usage of  per-cpu system
 work queue. Gustavo contributes a patch to fix the not-at-end
 flexible-array member warning by gcc14. And Qianfeng contributes a code
 cleanup patch to remove redundant __GFP_NOWARN."

Link: https://lore.kernel.org/linux-block/20251113053630.54218-1-colyli@fnnas.com/
Signed-off-by: Jens Axboe <axboe@kernel.dk>
* bcache-updates-6.19:
  bcache: Avoid -Wflex-array-member-not-at-end warning
  bcache: WQ_PERCPU added to alloc_workqueue users
  bcache: replace use of system_wq with system_percpu_wq
  bcache: remove redundant __GFP_NOWARN
  bcache: reduce gc latency by processing less nodes and sleep less time
  bcache: remove discard sysfs interface document
  bcache: drop discard sysfs interface
  bcache: remove discard code from alloc.c
  bcache: get rid of discard code from journal

3 weeks agobcache: Avoid -Wflex-array-member-not-at-end warning
Gustavo A. R. Silva [Thu, 13 Nov 2025 05:36:30 +0000 (13:36 +0800)] 
bcache: Avoid -Wflex-array-member-not-at-end warning

-Wflex-array-member-not-at-end was introduced in GCC-14, and we are
getting ready to enable it, globally.

Use the new TRAILING_OVERLAP() helper to fix the following warning:

drivers/md/bcache/bset.h:330:27: warning: structure containing a flexible array member is not at the end of another structure [-Wflex-array-member-not-at-end]

This helper creates a union between a flexible-array member (FAM) and a
set of MEMBERS that would otherwise follow it.

This overlays the trailing MEMBER struct btree_iter_set stack_data[MAX_BSETS];
onto the FAM struct btree_iter::data[], while keeping the FAM and the start
of MEMBER aligned.

The static_assert() ensures this alignment remains, and it's
intentionally placed immediately after the corresponding structures --no
blank line in between.

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agobcache: WQ_PERCPU added to alloc_workqueue users
Marco Crivellari [Thu, 13 Nov 2025 05:36:29 +0000 (13:36 +0800)] 
bcache: WQ_PERCPU added to alloc_workqueue users

Currently, if a user enqueues a work item using schedule_delayed_work(),
the workqueue used is "system_wq" (a per-cpu wq), while
queue_delayed_work() uses WORK_CPU_UNBOUND (used when a cpu is not
specified). The same applies to schedule_work(), which uses system_wq,
and queue_work(), which again makes use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This patch continues the effort to refactor workqueue APIs, which began
with the change introducing new workqueues and a new alloc_workqueue flag:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

This change adds a new WQ_PERCPU flag to explicitly request
alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
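
For a caller that wants to keep today's per-CPU behaviour, the change
is mechanical (the workqueue name below is illustrative):

  /* before: per-cpu only by default */
  wq = alloc_workqueue("bcache_gc", WQ_MEM_RECLAIM, 0);

  /* after: per-cpu behaviour requested explicitly */
  wq = alloc_workqueue("bcache_gc", WQ_MEM_RECLAIM | WQ_PERCPU, 0);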

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agobcache: replace use of system_wq with system_percpu_wq
Marco Crivellari [Thu, 13 Nov 2025 05:36:28 +0000 (13:36 +0800)] 
bcache: replace use of system_wq with system_percpu_wq

Currently, if a user enqueues a work item using schedule_delayed_work(),
the workqueue used is "system_wq" (a per-cpu wq), while
queue_delayed_work() uses WORK_CPU_UNBOUND (used when a cpu is not
specified). The same applies to schedule_work(), which uses system_wq,
and queue_work(), which again makes use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

This patch continues the effort to refactor workqueue APIs, which began
with the change introducing new workqueues and a new alloc_workqueue flag:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

system_wq is the per-cpu workqueue, yet nothing in its name makes that
clear, so replace system_wq with system_percpu_wq.

The old wq (system_wq) will be kept for a few release cycles.
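
The conversion itself is a one-line rename per call site, for example
(the delayed_work shown is illustrative):

  /* before */
  queue_delayed_work(system_wq, &dc->writeback_rate_update, delay);

  /* after: same per-cpu semantics, clearer name */
  queue_delayed_work(system_percpu_wq, &dc->writeback_rate_update, delay);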

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agobcache: remove redundant __GFP_NOWARN
Qianfeng Rong [Thu, 13 Nov 2025 05:36:27 +0000 (13:36 +0800)] 
bcache: remove redundant __GFP_NOWARN

GFP_NOWAIT already includes __GFP_NOWARN, so let's remove the redundant
__GFP_NOWARN.
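
The cleanup is purely mechanical, e.g.:

  /* before */
  p = kmalloc(size, GFP_NOWAIT | __GFP_NOWARN);

  /* after: GFP_NOWAIT already implies __GFP_NOWARN */
  p = kmalloc(size, GFP_NOWAIT);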

Signed-off-by: Qianfeng Rong <rongqianfeng@vivo.com>
Acked-by: Coly Li <colyli@fnnas.com>
Acked-by: Coly Li <colyli@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agobcache: reduce gc latency by processing less nodes and sleep less time
Coly Li [Thu, 13 Nov 2025 05:36:26 +0000 (13:36 +0800)] 
bcache: reduce gc latency by processing less nodes and sleep less time

When the bcache device is busy under high I/O load, there are two ways to
reduce garbage collection latency,
- Process fewer nodes in each loop of incremental garbage collection in
  btree_gc_recurse().
- Sleep less time between two full garbage collections in
  bch_btree_gc().

This patch introduces two helper routines to provide different garbage
collection node counts and sleep interval times.
- btree_gc_min_nodes()
  If there is no front-end I/O, return 128 nodes to process in each
  incremental loop, otherwise only 10 nodes are returned. Then front-end
  I/O is able to access the btree earlier.
- btree_gc_sleep_ms()
  If there is no synchronized wait for bucket allocation, sleep 100 ms
  between two incremental GC loops. Otherwise only sleep 10 ms before the
  next incremental GC loop. Then a faster GC may provide available buckets
  earlier, preventing most bcache working threads from being starved by
  bucket allocation.

The idea is inspired by work from Mingzhe Zou and Robert Pang, but it is
much simpler and its expected behavior is more predictable.
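
A hedged sketch of the two helpers' shape; the 128/10 node counts and
100/10 ms sleeps follow the description above, while the predicates and
field accesses are placeholders rather than real struct cache_set
members:

  static unsigned int btree_gc_min_nodes(struct cache_set *c)
  {
          /* fewer nodes per incremental loop while front-end I/O waits */
          return front_end_io_pending(c) ? 10 : 128;
  }

  static unsigned int btree_gc_sleep_ms(struct cache_set *c)
  {
          /* shorter sleep while allocation is waiting on free buckets */
          return bucket_alloc_waiting(c) ? 10 : 100;
  }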

Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Robert Pang <robertpang@google.com>
Signed-off-by: Mingzhe Zou <mingzhe.zou@easystack.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agobcache: remove discard sysfs interface document
Coly Li [Thu, 13 Nov 2025 05:36:25 +0000 (13:36 +0800)] 
bcache: remove discard sysfs interface document

This patch removes the documentation of the bcache discard sysfs
interface; it drops the discard-related sections from,
- Documentation/ABI/testing/sysfs-block-bcache
- Documentation/admin-guide/bcache.rst

Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agobcache: drop discard sysfs interface
Coly Li [Thu, 13 Nov 2025 05:36:24 +0000 (13:36 +0800)] 
bcache: drop discard sysfs interface

Since the discard code is removed, the sysfs interface to enable discard
is now useless. This patch removes the corresponding sysfs entry, and
removes the bool variable 'discard' from struct cache as well.

Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agobcache: remove discard code from alloc.c
Coly Li [Thu, 13 Nov 2025 05:36:23 +0000 (13:36 +0800)] 
bcache: remove discard code from alloc.c

The bcache allocator initially has no free space to allocate. First it
does a garbage collection, which is triggered by a cache device write,
and fills free space into the ca->free[] lists. The discard happens after
a free bucket handled by garbage collection is added into one of the
ca->free[] lists. But normally this bucket will be allocated out very
soon to a requester and filled with data. The discard hint on this
bucket's LBA range doesn't help the SSD controller improve internal
erase performance, and wastes extra CPU cycles issuing discard bios.

This patch removes the almost-useless discard code from alloc.c.

Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agobcache: get rid of discard code from journal
Coly Li [Thu, 13 Nov 2025 05:36:22 +0000 (13:36 +0800)] 
bcache: get rid of discard code from journal

The bcache journal has discard functionality, but it is almost useless in
reality, because discard happens after a journal bucket is reclaimed
and the reclaimed bucket is allocated for new journaling immediately.
There is no time for the underlying SSD to use the discard hint for
internal data management.

The discard code in the bcache journal doesn't bring any performance
optimization and wastes CPU cycles issuing discard bios. Therefore
this patch gets rid of it from journal.c and journal.h.

Signed-off-by: Coly Li <colyli@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agodm: fix zone reset all operation processing
Damien Le Moal [Thu, 13 Nov 2025 13:40:28 +0000 (22:40 +0900)] 
dm: fix zone reset all operation processing

dm_zone_get_reset_bitmap() is used to generate a bitmap of the zones of
a zoned device target when a REQ_OP_ZONE_RESET_ALL request is being
processed. This bitmap is built by executing a zone report with a report
callback set to the function dm_zone_need_reset_cb() in struct
dm_report_zones_args. However, the cb callback pointer is no longer
the same as the callback specified by callers of the
blkdev_report_zones() function. Rather, this is a DM internal callback
and report zones callback functions from blkdev_report_zones() are
passed using struct blk_report_zones_args, introduced with commit
db9aed869f34 ("block: introduce disk_report_zone()").

This commit changed the main DM report zones callback handler function,
dm_report_zones_cb(), to call the new disk_report_zone() so that callback
functions from blkdev_report_zones() are executed. As a result, the DM
internal dm_zone_need_reset_cb() callback function is no longer
executed, turning any REQ_OP_ZONE_RESET_ALL request into a no-op.

Fix this by having dm_report_zones_cb() call the DM internal cb function
specified in struct dm_report_zones_args.

Fixes: db9aed869f34 ("block: introduce disk_report_zone()").
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblock: fix NULL pointer dereference in disk_report_zones()
Damien Le Moal [Thu, 13 Nov 2025 13:40:27 +0000 (22:40 +0900)] 
block: fix NULL pointer dereference in disk_report_zones()

Commit 2284eec5053d ("block: introduce blkdev_get_zone_info()")
introduced the report_active field in struct blk_report_zones_args so
that open and closed zones can be reported with the condition
BLK_ZONE_COND_ACTIVE in the case of a cached report zone.
However, the args pointer to a struct blk_report_zones_args that is
passed to disk_report_zones() can be NULL, e.g. in the case of internal
report zones operations for device mapper zoned targets.

Fix disk_report_zones() to check that args is not NULL before updating
a zone condition for cached zone reports.
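
The shape of the fix, roughly (field and constant names as used above;
the surrounding code is elided and the exact hunk may differ):

  if (args && args->report_active &&
      (zone->cond == BLK_ZONE_COND_IMP_OPEN ||
       zone->cond == BLK_ZONE_COND_EXP_OPEN ||
       zone->cond == BLK_ZONE_COND_CLOSED))
          zone->cond = BLK_ZONE_COND_ACTIVE;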

Fixes: 2284eec5053d ("block: introduce blkdev_get_zone_info()")
Reported-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblock: fix NULL pointer dereference in blk_zone_reset_all_bio_endio()
Damien Le Moal [Thu, 13 Nov 2025 13:40:26 +0000 (22:40 +0900)] 
block: fix NULL pointer dereference in blk_zone_reset_all_bio_endio()

For zoned block devices that do not need zone write plugs (e.g. most
device mapper devices that support zones), the disk hash table of zone
write plugs is NULL. For such devices, blk_zone_reset_all_bio_endio()
should not attempt to scan this hash table as that causes a NULL pointer
dereference.

Fix this by checking that the disk does have zone write plugs using the
atomic counter. This is equivalent to checking for a non-NULL hash table
but has the advantage of also speeding up the execution of
blk_zone_reset_all_bio_endio() for devices that do use zone write plugs
but do not have any plug in the hash table (e.g. a disk with only full
zones).
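
Roughly, the check at the top of blk_zone_reset_all_bio_endio() becomes
something like this (the counter's field name is an assumption):

  /* no zone write plugs allocated at all: nothing to scan */
  if (!atomic_read(&disk->nr_zone_wplugs))
          return;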

Fixes: efae226c2ef1 ("block: handle zone management operations completions")
Reported-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblk-zoned: Move code from disk_zone_wplug_add_bio() into its caller
Bart Van Assche [Tue, 11 Nov 2025 23:29:02 +0000 (15:29 -0800)] 
blk-zoned: Move code from disk_zone_wplug_add_bio() into its caller

Move the following code into the only caller of disk_zone_wplug_add_bio():
 - The code for clearing the REQ_NOWAIT flag.
 - The code that sets the BLK_ZONE_WPLUG_PLUGGED flag.
 - The disk_zone_wplug_schedule_bio_work() call.

This patch moves all code that is related to REQ_NOWAIT or to bio
scheduling into a single function. Additionally, the 'schedule_bio_work'
variable is removed. No functionality has been changed.

Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblk-zoned: Document disk_zone_wplug_schedule_bio_work() locking
Bart Van Assche [Tue, 11 Nov 2025 23:29:01 +0000 (15:29 -0800)] 
blk-zoned: Document disk_zone_wplug_schedule_bio_work() locking

Document that all callers hold this lock because the code in
disk_zone_wplug_schedule_bio_work() depends on this.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblk-zoned: Fix a typo in a source code comment
Bart Van Assche [Tue, 11 Nov 2025 23:29:00 +0000 (15:29 -0800)] 
blk-zoned: Fix a typo in a source code comment

Remove a superfluous parenthesis that was introduced by commit fa8555630b32
("blk-zoned: Improve the queue reference count strategy documentation").

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agonull_blk: fix zone read length beyond write pointer
Keith Busch [Wed, 12 Nov 2025 16:42:18 +0000 (08:42 -0800)] 
null_blk: fix zone read length beyond write pointer

Fix up the divisor calculating the number of zone sectors being read and
handle a read that straddles the zone write pointer. The length is
rounded up to a sector boundary, so be sure to truncate any excess bytes
off to avoid copying past the data segment.

Fixes: 3451cf34f51bb70 ("null_blk: allow byte aligned memory offsets")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Tested-by: Bart van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblk-mq-dma: fix kernel-doc function name for integrity DMA iterator
Kriish Sharma [Tue, 11 Nov 2025 11:58:10 +0000 (11:58 +0000)] 
blk-mq-dma: fix kernel-doc function name for integrity DMA iterator

Documentation build reported:

  Warning: block/blk-mq-dma.c:373 expecting prototype for blk_rq_integrity_dma_map_iter_start(). Prototype was for blk_rq_integrity_dma_map_iter_next() instead

The kernel-doc comment above `blk_rq_integrity_dma_map_iter_next()` used
the wrong function name (`blk_rq_integrity_dma_map_iter_start`) in its
header. This patch corrects the function name in the kernel-doc block to
match the actual implementation, ensuring clean documentation builds.

Fixes: fec9b16dc555 ("blk-mq-dma: add scatter-less integrity data DMA mapping")
Signed-off-by: Kriish Sharma <kriish.sharma2006@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblock: fix merging data-less bios
Keith Busch [Tue, 11 Nov 2025 14:06:20 +0000 (06:06 -0800)] 
block: fix merging data-less bios

The data segment gaps the block layer tracks don't apply to bios that
don't have data. Skip calculating them to fix a NULL pointer access.
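
In other words, the gap accounting is now guarded along the lines of:

  /* flushes, discards etc. carry no data segments to inspect */
  if (!bio_has_data(bio))
          return;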

Fixes: 2f6b2565d43cdb5 ("block: accumulate memory segment gaps per bio")
Reported-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoublk: return unsigned from ublk_{,un}map_io()
Caleb Sander Mateos [Sat, 8 Nov 2025 04:23:16 +0000 (21:23 -0700)] 
ublk: return unsigned from ublk_{,un}map_io()

ublk_map_io() and ublk_unmap_io() never return negative values, and
their return values are stored in variables of type unsigned. Clarify
that they can't fail by making their return types unsigned.

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoublk: remove unnecessary checks in ublk_check_and_get_req()
Caleb Sander Mateos [Sat, 8 Nov 2025 22:17:45 +0000 (15:17 -0700)] 
ublk: remove unnecessary checks in ublk_check_and_get_req()

ub = iocb->ki_filp->private_data cannot be NULL, as it's set in
ublk_ch_open() before it returns successfully. req->mq_hctx cannot be
NULL as any inflight ublk request must belong to some queue. And
req->mq_hctx->driver_data cannot be NULL as it's set to the ublk_queue
pointer in ublk_init_hctx(). So drop the unnecessary checks.

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoblock: add lockdep to queue_limits_commit_update()
Chaitanya Kulkarni [Sun, 9 Nov 2025 07:44:26 +0000 (23:44 -0800)] 
block: add lockdep to queue_limits_commit_update()

queue_limits_commit_update() expects q->limits_lock to be held by
the caller (via queue_limits_start_update()).

The API pattern is:

  lim = queue_limits_start_update(q);  /* acquires lock */
              /* modify lim */
  queue_limits_commit_update(q, &lim); /* releases lock */

  OR

  lim = queue_limits_start_update(q);  /* acquires lock */
              /* modify lim */
  queue_limits_commit_update_frozen(q, &lim); /* releases lock */

Add lockdep_assert_held() to report incorrect API usage.
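
The assertion amounts to a single line at the top of the function:

  /* in queue_limits_commit_update(), before applying the limits */
  lockdep_assert_held(&q->limits_lock);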

Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agonbd: defer config unlock in nbd_genl_connect
Zheng Qixing [Mon, 10 Nov 2025 12:49:20 +0000 (20:49 +0800)] 
nbd: defer config unlock in nbd_genl_connect

There is one use-after-free warning when running NBD_CMD_CONNECT and
NBD_CLEAR_SOCK:

nbd_genl_connect
  nbd_alloc_and_init_config // config_refs=1
  nbd_start_device // config_refs=2
  set NBD_RT_HAS_CONFIG_REF
open nbd // config_refs=3
  recv_work done // config_refs=2
NBD_CLEAR_SOCK // config_refs=1
close nbd // config_refs=0
  refcount_inc -> uaf

------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 24 PID: 1014 at lib/refcount.c:25 refcount_warn_saturate+0x12e/0x290
 nbd_genl_connect+0x16d0/0x1ab0
 genl_family_rcv_msg_doit+0x1f3/0x310
 genl_rcv_msg+0x44a/0x790

The issue can be easily reproduced by adding a small delay before
refcount_inc(&nbd->config_refs) in nbd_genl_connect():

        mutex_unlock(&nbd->config_lock);
        if (!ret) {
                set_bit(NBD_RT_HAS_CONFIG_REF, &config->runtime_flags);
+               printk("before sleep\n");
+               mdelay(5 * 1000);
+               printk("after sleep\n");
                refcount_inc(&nbd->config_refs);
                nbd_connect_reply(info, nbd->index);
        }

Fixes: e46c7287b1c2 ("nbd: add a basic netlink interface")
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks agoMerge tag 'md-6.19-20251111' of gitolite.kernel.org:pub/scm/linux/kernel/git/mdraid...
Jens Axboe [Tue, 11 Nov 2025 13:58:11 +0000 (06:58 -0700)] 
Merge tag 'md-6.19-20251111' of gitolite.kernel.org:pub/scm/linux/kernel/git/mdraid/linux into for-6.19/block

Pull MD changes from Yu:

"- Change maintainer's email address (Yu Kuai)
 - Data can be lost if array is created with different lbs devices, fix
   this problem and record lbs of the array in metadata (Li Nan)
 - Fix rcu protection for md_thread (Yun Zhou)
 - Fix mddev kobject lifetime regression (Xiao Ni)
 - Enable atomic writes for md-linear (John Garry)
 - Some cleanups (Chen Ni, Huiwen He, Wu Guanghao)"

* tag 'md-6.19-20251111' of gitolite.kernel.org:pub/scm/linux/kernel/git/mdraid/linux:
  md: allow configuring logical block size
  md: add check_new_feature module parameter
  md/raid0: Move queue limit setup before r0conf initialization
  md: init bioset in mddev_init
  md: delete md_redundancy_group when array is becoming inactive
  md: prevent adding disks with larger logical_block_size to active arrays
  md/raid5: remove redundant __GFP_NOWARN
  md: avoid repeated calls to del_gendisk
  md/md-llbitmap: Remove unneeded semicolon
  md/md-linear: Enable atomic writes
  Factor out code into md_should_do_recovery()
  md: fix rcu protection in md_wakeup_thread
  md: delete mddev kobj before deleting gendisk kobj
  MAINTAINERS: Update Yu Kuai's E-mail address

3 weeks agomd: allow configuring logical block size
Li Nan [Mon, 3 Nov 2025 12:57:57 +0000 (20:57 +0800)] 
md: allow configuring logical block size

Previously, a raid array used the maximum logical block size (LBS)
of all member disks. Adding a disk with a larger LBS at runtime could
unexpectedly increase the RAID's LBS, risking corruption of existing
partitions. This can be reproduced by:

```
  # LBS of sd[de] is 512 bytes, sdf is 4096 bytes.
  mdadm -CRq /dev/md0 -l1 -n3 /dev/sd[de] missing --assume-clean

  # LBS is 512
  cat /sys/block/md0/queue/logical_block_size

  # create partition md0p1
  parted -s /dev/md0 mklabel gpt mkpart primary 1MiB 100%
  lsblk | grep md0p1

  # LBS becomes 4096 after adding sdf
  mdadm --add -q /dev/md0 /dev/sdf
  cat /sys/block/md0/queue/logical_block_size

  # partition lost
  partprobe /dev/md0
  lsblk | grep md0p1
```

Simply restricting larger-LBS disks is inflexible. In some scenarios,
only disks with a 512-byte LBS are currently available, but later, disks
with a 4KB LBS may be added to the array.

Making LBS configurable is the best way to solve this scenario.
After this patch, the raid will:
  - store LBS in disk metadata
  - add a read-write sysfs 'mdX/logical_block_size'

Future mdadm should support setting LBS via metadata field during RAID
creation and the new sysfs. Though the kernel allows runtime LBS changes,
users should avoid modifying it after creating partitions or filesystems
to prevent compatibility issues.

Only 1.x metadata supports configurable LBS. 0.90 metadata inits all
fields to default values at auto-detect. Supporting 0.90 would require
more extensive changes and no such use case has been observed.

Note that many RAID paths rely on PAGE_SIZE alignment, including for
metadata I/O. An LBS larger than PAGE_SIZE will result in metadata
read/write failures, so such a configuration must be prevented.

Link: https://lore.kernel.org/linux-raid/20251103125757.1405796-6-linan666@huaweicloud.com
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
3 weeks agomd: add check_new_feature module parameter
Li Nan [Mon, 3 Nov 2025 12:57:56 +0000 (20:57 +0800)] 
md: add check_new_feature module parameter

Raid checks whether pad3 is zero when loading the superblock from disk.
Arrays created with new features may fail to assemble on old kernels
because pad3 is now used.

Add module parameter check_new_feature to bypass this check.

Link: https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
3 weeks agomd/raid0: Move queue limit setup before r0conf initialization
Li Nan [Mon, 3 Nov 2025 12:57:55 +0000 (20:57 +0800)] 
md/raid0: Move queue limit setup before r0conf initialization

Prepare for making the logical block size configurable. This change has no
impact until logical block size becomes configurable.

Move raid0_set_limits() before create_strip_zones(). It is safe as fields
modified in create_strip_zones() do not involve mddev configuration, and
rdev modifications there are not used in raid0_set_limits().

'blksize' in create_strip_zones() fetches mddev's logical block size,
which is already the maximum across all rdevs, so the later max() can be
removed.

Link: https://lore.kernel.org/linux-raid/20251103125757.1405796-4-linan666@huaweicloud.com
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
3 weeks agomd: init bioset in mddev_init
Li Nan [Mon, 3 Nov 2025 12:57:54 +0000 (20:57 +0800)] 
md: init bioset in mddev_init

IO operations may be needed before md_run(), such as updating metadata
after a sysfs write. Without the bioset, this triggers a NULL pointer
dereference as below:

 BUG: kernel NULL pointer dereference, address: 0000000000000020
 Call Trace:
  md_update_sb+0x658/0xe00
  new_level_store+0xc5/0x120
  md_attr_store+0xc9/0x1e0
  sysfs_kf_write+0x6f/0xa0
  kernfs_fop_write_iter+0x141/0x2a0
  vfs_write+0x1fc/0x5a0
  ksys_write+0x79/0x180
  __x64_sys_write+0x1d/0x30
  x64_sys_call+0x2818/0x2880
  do_syscall_64+0xa9/0x580
  entry_SYSCALL_64_after_hwframe+0x4b/0x53

Reproducer
```
  mdadm -CR /dev/md0 -l1 -n2 /dev/sd[cd]
  echo inactive > /sys/block/md0/md/array_state
  echo 10 > /sys/block/md0/md/new_level
```

mddev_init() can only be called once per mddev, so there is no longer any
need to test whether the bioset has been initialized.

Link: https://lore.kernel.org/linux-raid/20251103125757.1405796-3-linan666@huaweicloud.com
Fixes: d981ed841930 ("md: Add new_level sysfs interface")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
3 weeks agomd: delete md_redundancy_group when array is becoming inactive
Li Nan [Mon, 3 Nov 2025 12:57:53 +0000 (20:57 +0800)] 
md: delete md_redundancy_group when array is becoming inactive

'md_redundancy_group' is created in md_run() and deleted in del_gendisk(),
but these calls are not paired. Writing inactive/active to the sysfs
array_state can trigger md_run() multiple times without del_gendisk(),
leading to duplicate creation as below:

 sysfs: cannot create duplicate filename '/devices/virtual/block/md0/md/sync_action'
 Call Trace:
  dump_stack_lvl+0x9f/0x120
  dump_stack+0x14/0x20
  sysfs_warn_dup+0x96/0xc0
  sysfs_add_file_mode_ns+0x19c/0x1b0
  internal_create_group+0x213/0x830
  sysfs_create_group+0x17/0x20
  md_run+0x856/0xe60
  ? __x64_sys_openat+0x23/0x30
  do_md_run+0x26/0x1d0
  array_state_store+0x559/0x760
  md_attr_store+0xc9/0x1e0
  sysfs_kf_write+0x6f/0xa0
  kernfs_fop_write_iter+0x141/0x2a0
  vfs_write+0x1fc/0x5a0
  ksys_write+0x79/0x180
  __x64_sys_write+0x1d/0x30
  x64_sys_call+0x2818/0x2880
  do_syscall_64+0xa9/0x580
  entry_SYSCALL_64_after_hwframe+0x4b/0x53
 md: cannot register extra attributes for md0

Its creation depends on 'pers', so its lifecycle cannot be aligned with
the gendisk. Fix this issue by triggering 'md_redundancy_group' deletion
when the array is becoming inactive.

Link: https://lore.kernel.org/linux-raid/20251103125757.1405796-2-linan666@huaweicloud.com
Fixes: 790abe4d77af ("md: remove/add redundancy group only in level change")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
3 weeks agomd: prevent adding disks with larger logical_block_size to active arrays
Li Nan [Thu, 18 Sep 2025 11:57:58 +0000 (19:57 +0800)] 
md: prevent adding disks with larger logical_block_size to active arrays

When adding a disk to an md array, avoid updating the array's
logical_block_size to match the new disk. This prevents accidental
partition table loss that renders the array unusable.

A later patch will introduce a way to configure the array's
logical_block_size.

The issue was introduced before Linux 2.6.12-rc2.

Link: https://lore.kernel.org/linux-raid/20250918115759.334067-2-linan666@huaweicloud.com/
Fixes: d2e45eace8 ("[PATCH] Fix raid "bio too big" failures")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
4 weeks agoblock: clean up indentation in blk_rq_map_iter_init()
Caleb Sander Mateos [Sat, 8 Nov 2025 04:06:13 +0000 (21:06 -0700)] 
block: clean up indentation in blk_rq_map_iter_init()

blk_rq_map_iter_init() has one line with 7 spaces of indentation and
another that mixes 1 tab and 8 spaces. Convert both to tabs.

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks agonbd: defer config put in recv_work
Zheng Qixing [Sat, 8 Nov 2025 07:02:02 +0000 (15:02 +0800)] 
nbd: defer config put in recv_work

There is one use-after-free issue in recv_work when running NBD_CLEAR_SOCK and
NBD_CMD_RECONFIGURE:
  nbd_genl_connect     // conf_ref=2 (connect and recv_work A)
  nbd_open        // conf_ref=3
  recv_work A done     // conf_ref=2
  NBD_CLEAR_SOCK       // conf_ref=1
  nbd_genl_reconfigure // conf_ref=2 (trigger recv_work B)
  close nbd        // conf_ref=1
  recv_work B
    config_put         // conf_ref=0
    atomic_dec(&config->recv_threads); -> UAF

Or only running NBD_CLEAR_SOCK:
  nbd_genl_connect   // conf_ref=2
  nbd_open       // conf_ref=3
  NBD_CLEAR_SOCK     // conf_ref=2
  close nbd
    nbd_release
      config_put     // conf_ref=1
  recv_work
    config_put       // conf_ref=0
    atomic_dec(&config->recv_threads); -> UAF

Commit 87aac3a80af5 ("nbd: call nbd_config_put() before notifying the
waiter") moved nbd_config_put() to run before waking up the waiter in
recv_work, in order to ensure that nbd_start_device_ioctl() would not
be woken up while nbd->task_recv was still uncleared.

However, in nbd_start_device_ioctl(), after being woken up it explicitly
calls flush_workqueue() to make sure all current works are finished.
Therefore, there is no need to move the config put ahead of the wakeup.

Move nbd_config_put() to the end of recv_work, so that the reference is
held for the whole lifetime of the worker thread. This makes sure the
config cannot be freed while recv_work is still running, even if clear
and reconfigure operations interleave.

In addition, we don't need to worry about recv_work dropping the last
nbd_put (which causes deadlock):

path A (netlink with NBD_CFLAG_DESTROY_ON_DISCONNECT):
  connect  // nbd_refs=1 (trigger recv_work)
  open nbd // nbd_refs=2
  NBD_CLEAR_SOCK
  close nbd
    nbd_release
      nbd_disconnect_and_put
        flush_workqueue // recv_work done
      nbd_config_put
        nbd_put // nbd_refs=1
      nbd_put // nbd_refs=0
        queue_work

path B (netlink without NBD_CFLAG_DESTROY_ON_DISCONNECT):
  connect  // nbd_refs=2 (trigger recv_work)
  open nbd // nbd_refs=3
  NBD_CLEAR_SOCK // conf_refs=2
  close nbd
    nbd_release
      nbd_config_put // conf_refs=1
      nbd_put // nbd_refs=2
  recv_work done // conf_refs=0, nbd_refs=1
  rmmod // nbd_refs=0

Reported-by: syzbot+56fbf4c7ddf65e95c7cc@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/6907edce.a70a0220.37351b.0014.GAE@google.com/T/
Fixes: 87aac3a80af5 ("nbd: make the config put is called before the notifying the waiter")
Depends-on: e2daec488c57 ("nbd: Fix hungtask when nbd_config_put")
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks agomd/raid5: remove redundant __GFP_NOWARN
Huiwen He [Sun, 2 Nov 2025 15:25:40 +0000 (23:25 +0800)] 
md/raid5: remove redundant __GFP_NOWARN

The __GFP_NOWARN flag was included in GFP_NOWAIT since commit
16f5dfbc851b ("gfp: include __GFP_NOWARN in GFP_NOWAIT"). So
remove the redundant __GFP_NOWARN flag.

Link: https://lore.kernel.org/linux-raid/20251102152540.871568-1-hehuiwen@kylinos.cn
Signed-off-by: Huiwen He <hehuiwen@kylinos.cn>
Reviewed-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
4 weeks agomd: avoid repeated calls to del_gendisk
Xiao Ni [Wed, 29 Oct 2025 06:34:19 +0000 (14:34 +0800)] 
md: avoid repeated calls to del_gendisk

There is a use-after-free problem found by test case 23rdev-lifetime:

Oops: general protection fault, probably for non-canonical address 0xdead000000000122
RIP: 0010:bdi_unregister+0x4b/0x170
Call Trace:
 <TASK>
 __del_gendisk+0x356/0x3e0
 mddev_unlock+0x351/0x360
 rdev_attr_store+0x217/0x280
 kernfs_fop_write_iter+0x14a/0x210
 vfs_write+0x29e/0x550
 ksys_write+0x74/0xf0
 do_syscall_64+0xbb/0x380
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff5250a177e

The sequence is:
1. rdev remove path gets reconfig_mutex
2. rdev remove path releases reconfig_mutex in mddev_unlock
3. md stop calls do_md_stop and sets MD_DELETED
4. rdev remove path calls del_gendisk because MD_DELETED is set
5. md stop path releases reconfig_mutex and calls del_gendisk again

So there is a race condition that we should resolve. This patch adds a
flag, MD_DO_DELETE, to avoid the race.

Link: https://lore.kernel.org/linux-raid/20251029063419.21700-1-xni@redhat.com
Fixes: 9e59d609763f ("md: call del_gendisk in control path")
Signed-off-by: Xiao Ni <xni@redhat.com>
Suggested-by: Yu Kuai <yukuai@fnnas.com>
Reviewed-by: Li Nan <linan122@huawei.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
4 weeks agomd/md-llbitmap: Remove unneeded semicolon
Chen Ni [Wed, 10 Sep 2025 09:19:12 +0000 (17:19 +0800)] 
md/md-llbitmap: Remove unneeded semicolon

Remove unnecessary semicolons reported by Coccinelle/coccicheck and the
semantic patch at scripts/coccinelle/misc/semicolon.cocci.

Link: https://lore.kernel.org/linux-raid/20250910091912.25624-1-nichen@iscas.ac.cn
Signed-off-by: Chen Ni <nichen@iscas.ac.cn>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
4 weeks agomd/md-linear: Enable atomic writes
John Garry [Wed, 3 Sep 2025 16:10:52 +0000 (16:10 +0000)] 
md/md-linear: Enable atomic writes

All the infrastructure has already been plumbed to support this for
stacked devices, so just enable the request_queue limits features flag.

A note about chunk sectors for linear arrays:
While it is possible to set a chunk sectors param for building a linear
array, this is for specifying the granularity at which data sectors from
the device are used. It is not the same as a stripe size, like for RAID0.

As such, it is not appropriate to set the chunk_sectors request queue
limit to the same value, as the chunk_sectors request limit is a boundary
that requests cannot straddle.

However, the request_queue limit max_hw_sectors is set to the chunk
sectors, which has almost the same effect as setting the chunk_sectors
limit.
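
The enablement itself is a one-liner in the limits setup (the feature
flag name follows current block layer conventions; treat it as
illustrative):

  lim.features |= BLK_FEAT_ATOMIC_WRITES;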

Link: https://lore.kernel.org/linux-raid/20250903161052.3326176-1-john.g.garry@oracle.com
Signed-off-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Yu Kuai <yukuai3@fnnas.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
4 weeks agoFactor out code into md_should_do_recovery()
Wu Guanghao [Mon, 8 Sep 2025 08:20:37 +0000 (16:20 +0800)] 
Factor out code into md_should_do_recovery()

In md_check_recovery(), use the new helper to make the code cleaner.

Link: https://lore.kernel.org/linux-raid/e62894c8-d916-94bc-ef48-3c60e6e1fc5d@huawei.com
Signed-off-by: Wu Guanghao <wuguanghao3@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@fnnas.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
4 weeks agomd: fix rcu protection in md_wakeup_thread
Yun Zhou [Wed, 15 Oct 2025 08:32:27 +0000 (16:32 +0800)] 
md: fix rcu protection in md_wakeup_thread

We attempted to use RCU to protect the pointer 'thread', but directly
passed the value when calling md_wakeup_thread(). This means that the
RCU pointer has been acquired before rcu_read_lock(), which renders
rcu_read_lock() ineffective and could lead to a use-after-free.
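
The general pattern being restored (simplified, not the literal md
diff):

  struct md_thread *t;

  /* broken: the pointer value is loaded at the call site, before
   * md_wakeup_thread() enters its RCU read-side section */
  md_wakeup_thread(mddev->thread);

  /* fixed shape: resolve the RCU-protected pointer inside the
   * read-side critical section before dereferencing it */
  rcu_read_lock();
  t = rcu_dereference(mddev->thread);
  if (t)
          wake_up_process(t->tsk);
  rcu_read_unlock();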

Link: https://lore.kernel.org/linux-raid/20251015083227.1079009-1-yun.zhou@windriver.com
Fixes: 446931543982 ("md: protect md_thread with rcu")
Signed-off-by: Yun Zhou <yun.zhou@windriver.com>
Reviewed-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Yu Kuai <yukuai@fnnas.com>