git.ipfire.org Git - thirdparty/kernel/stable.git/commit
swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()
author	Will Deacon <will@kernel.org>
	Fri, 8 Mar 2024 15:28:26 +0000 (15:28 +0000)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Wed, 3 Apr 2024 13:28:50 +0000 (15:28 +0200)
commit	ae2f8dbe921ec300b8fbfa3a7af3440d02d1d035
tree	0ba9b3ad33cfb6b573e149aa746500fa15a066fc
parent	3e7acd6e25ba77dde48c3b721c54c89cd6a10534
swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()

[ Upstream commit cbf53074a528191df82b4dba1e3d21191102255e ]

core-api/dma-api-howto.rst states the following properties of
dma_alloc_coherent():

  | The CPU virtual address and the DMA address are both guaranteed to
  | be aligned to the smallest PAGE_SIZE order which is greater than or
  | equal to the requested size.

However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
parameter of swiotlb_find_slots() and so this property is not upheld.
Instead, allocations larger than a page are aligned only to PAGE_SIZE.

Calculate the mask corresponding to the page order suitable for holding
the allocation and pass that to swiotlb_find_slots().

Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
kernel/dma/swiotlb.c