dma-direct: avoid redundant memory sync for swiotlb

author    Chao Gao <chao.gao@intel.com>
          Wed, 13 Apr 2022 06:32:22 +0000 (08:32 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Wed, 20 Apr 2022 07:23:30 +0000 (09:23 +0200)
commit    26f827e095aba4141ffd684276ef54025de27574
tree      9491ca71a13e553472dc37d9c72cfe31c3c89702
parent    9a5a4d23e24d39784b643e5db2e4c3a97da4a1ac
dma-direct: avoid redundant memory sync for swiotlb

commit 9e02977bfad006af328add9434c8bffa40e053bb upstream.

While investigating FIO performance with swiotlb enabled in a VM, we found
that swiotlb_bounce() is called one more time than expected for each DMA
read request.

It turns out that the bounce buffer is copied back to the original DMA
buffer twice after the completion of a DMA request: once in
dma_direct_sync_single_for_cpu(), and again in swiotlb_tbl_unmap_single().
Since the contents of the bounce buffer do not change between the two
copies, the second round of copying is redundant.
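
For context, a simplified sketch of the pre-fix unmap path, abridged from
kernel/dma/direct.h (the exact upstream code may differ slightly):

    static inline void dma_direct_unmap_page(struct device *dev,
    		dma_addr_t addr, size_t size, enum dma_data_direction dir,
    		unsigned long attrs)
    {
    	phys_addr_t phys = dma_to_phys(dev, addr);

    	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
    		/* first copy: for a DMA read (DMA_FROM_DEVICE) this
    		 * bounces the swiotlb buffer back to the original
    		 * DMA buffer */
    		dma_direct_sync_single_for_cpu(dev, addr, size, dir);

    	if (unlikely(is_swiotlb_buffer(dev, phys)))
    		/* second copy: swiotlb_tbl_unmap_single() bounces the
    		 * same, unchanged data again because attrs does not
    		 * carry DMA_ATTR_SKIP_CPU_SYNC */
    		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
    }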

Pass the DMA_ATTR_SKIP_CPU_SYNC flag to swiotlb_tbl_unmap_single() to
skip the memory copy in it (a sketch of the change follows the file list
below). This is safe because dma_direct_sync_single_for_cpu() has already
brought the original buffer up to date.

This fix increases FIO 64KB sequential read throughput in a guest with
swiotlb=force by 5.6%.

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
Reported-by: Wang Zhaoyang1 <zhaoyang1.wang@intel.com>
Reported-by: Gao Liang <liang.gao@intel.com>
Signed-off-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
kernel/dma/direct.h
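
A sketch of the resulting one-line change, reconstructed from the
description above (context lines approximate):

    --- a/kernel/dma/direct.h
    +++ b/kernel/dma/direct.h
    @@ static inline void dma_direct_unmap_page(...)
     	if (unlikely(is_swiotlb_buffer(dev, phys)))
    -		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
    +		swiotlb_tbl_unmap_single(dev, phys, size, dir,
    +					 attrs | DMA_ATTR_SKIP_CPU_SYNC);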