block: add scatterlist-less DMA mapping helpers
author     Christoph Hellwig <hch@lst.de>
           Wed, 25 Jun 2025 11:34:59 +0000 (13:34 +0200)
committer  Jens Axboe <axboe@kernel.dk>
           Mon, 30 Jun 2025 21:50:32 +0000 (15:50 -0600)
commit     858299dc61603670823f8c1d62bf3fc7af44b18b
tree       61b6cacd2bcc24eb35a6c949a277791b74b58758
parent     38446014648c9f7b2843f87517c8f2b73906bb40

Add a new blk_rq_dma_map / blk_rq_dma_unmap pair that does away with
the wasteful scatterlist structure.  Instead, it uses the mapping
iterator to either add segments to the IOVA for IOMMU operations, or
to map them one by one for the direct mapping.  For the IOMMU case,
instead of a scatterlist with an entry for each segment, only a single
[dma_addr, len] pair needs to be stored to process a request, and for
the direct mapping the per-segment allocation shrinks from
[page, offset, len, dma_addr, dma_len] to just [dma_addr, len].

One big difference from the scatterlist API, which could be considered
a downside, is that the IOVA collapsing only works when the driver sets
a virt_boundary that matches the IOMMU granule.  NVMe already does
this, so it works perfectly there.
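A hedged sketch of how a driver loop might consume the new helpers.  The
iterator names and field layout here are assumptions for illustration
(the authoritative definitions are in the new include/linux/blk-mq-dma.h),
and nvme_program_segment() is a hypothetical stand-in for the driver's
own descriptor setup:

```c
/* Sketch only, not compilable outside the kernel tree. */
struct dma_iova_state state;
struct blk_dma_iter iter;
size_t total_len = blk_rq_payload_bytes(req);

if (!blk_rq_dma_map_iter_start(req, dma_dev, &state, &iter))
	return BLK_STS_RESOURCE;
do {
	/*
	 * Program one [dma_addr, len] pair into the device's data
	 * structure.  With an IOMMU and a virt_boundary matching the
	 * IOMMU granule, the segments collapse into a single IOVA
	 * range and this loop runs only once per request.
	 */
	nvme_program_segment(cmd, iter.addr, iter.len); /* hypothetical */
} while (blk_rq_dma_map_iter_next(req, dma_dev, &state, &iter));

/* ... on completion, tear the mapping down again: */
blk_rq_dma_unmap(req, dma_dev, &state, total_len);
```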

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20250625113531.522027-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block/blk-mq-dma.c
include/linux/blk-mq-dma.h [new file with mode: 0644]