__getblk_slow() is only entered once the fastpath lookup has
already failed, so try to create the buffers immediately and
avoid the redundant lookup. This saves 5-10% of the total
cost/latency of the slowpath.
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Link: https://lore.kernel.org/20250515173925.147823-3-dave@stgolabs.net
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Christian Brauner <brauner@kernel.org>
 	for (;;) {
 		struct buffer_head *bh;
 
+		if (!grow_buffers(bdev, block, size, gfp))
+			return NULL;
+
 		if (blocking)
 			bh = __find_get_block_nonatomic(bdev, block, size);
 		else
 			bh = __find_get_block(bdev, block, size);
 		if (bh)
 			return bh;
-
-		if (!grow_buffers(bdev, block, size, gfp))
-			return NULL;
 	}
 }
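
For readers outside fs/buffer.c, below is a minimal userspace sketch of
the reordering. The names cache[], lookup() and create() are
hypothetical stand-ins for the buffer cache, __find_get_block() and
grow_buffers(); locking, refcounting and gfp flags are deliberately
omitted. The point is purely the control flow: since the slowpath is
only reached after a lookup has already missed, creating first turns
"miss, create, find" into "create, find" per call.

#include <stdio.h>
#include <stdlib.h>

struct buf {
	int block;
};

#define NR_SLOTS 16
static struct buf *cache[NR_SLOTS];

/* Models __find_get_block(): return the cached buffer or NULL. */
static struct buf *lookup(int block)
{
	return cache[block % NR_SLOTS];
}

/* Models grow_buffers(): instantiate the buffer; 0 means no memory. */
static int create(int block)
{
	struct buf *b = malloc(sizeof(*b));

	if (!b)
		return 0;
	b->block = block;
	cache[block % NR_SLOTS] = b;
	return 1;
}

/*
 * Patched ordering: the caller only gets here after a failed fastpath
 * lookup, so creating first removes one guaranteed-to-miss lookup per
 * call.
 */
static struct buf *getblk_slow(int block)
{
	for (;;) {
		struct buf *b;

		if (!create(block))
			return NULL;

		b = lookup(block);
		if (b)
			return b;
	}
}

int main(void)
{
	struct buf *b = getblk_slow(7);

	printf("block %d instantiated\n", b ? b->block : -1);
	free(b);
	return 0;
}

In this toy model the loop body runs once, but in the kernel the
retry loop still matters: presumably grow_buffers() can succeed and
the fresh buffers be reclaimed or raced away before the re-lookup, in
which case the function simply tries again. The saved work is exactly
the initial lookup that, on entry to the slowpath, is known to miss;
per the commit message that is worth 5-10% of the slowpath cost.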