author     Joao Martins <joao.m.martins@oracle.com>
           Fri, 2 Feb 2024 13:34:10 +0000 (13:34 +0000)
committer  Jason Gunthorpe <jgg@nvidia.com>
           Tue, 6 Feb 2024 15:31:45 +0000 (11:31 -0400)
commit     2780025e01e2e1c92f83ee7da91d9727c2e58a3e
tree       f87ad7f220288d6bb2d862eed77c2b61cbe7e088
parent     42af95114535dd94c39714b97ad720602d406b9a
iommufd/iova_bitmap: Handle recording beyond the mapped pages

IOVA bitmap is a zero-copy scheme for recording dirty bits that iterates
over the bitmap user pages in chunks of at most
PAGE_SIZE/sizeof(struct page *) pages.
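
To see where the 64G per-iteration figure below comes from, here is a
rough sketch of the arithmetic, assuming a 4K PAGE_SIZE, 8-byte struct
page pointers and a bitmap tracked at base page granularity (the names
are illustrative, not the actual iova_bitmap internals):

  /* Illustrative arithmetic only; not the real iova_bitmap code. */
  #include <stdio.h>

  int main(void)
  {
          unsigned long long page_size = 4096;    /* assumed PAGE_SIZE */
          unsigned long long ptr_size  = 8;       /* assumed sizeof(struct page *) */

          /* Pages pinned per iteration: PAGE_SIZE / sizeof(struct page *) */
          unsigned long long max_pages = page_size / ptr_size;     /* 512 */

          /* Dirty bits held by those pages, one bit per base page of IOVA */
          unsigned long long bits = max_pages * page_size * 8;     /* 16M bits */

          /* IOVA range covered by one iteration */
          unsigned long long iova = bits * page_size;

          printf("%llu GiB of IOVA per iteration\n", iova >> 30);  /* 64 */
          return 0;
  }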

When the iterations are split up into 64G chunks, the end of the mapped
range may not line up with a non-base-page PTE size, i.e. the boundary
can fall in the middle of an IOMMU huge page. This leads to only part of
the huge page being recorded in the bitmap. Note that in practice this is
only a problem for IOMMU dirty tracking, i.e. when the backing PTEs are
IOMMU huge pages and the bitmap is at base page granularity. So far this
is not something that affects VF dirty trackers (which report and record
at the same granularity).
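
For example (illustrative numbers only; suppose the currently pinned
bitmap pages happen to cover dirty bits up to an IOVA that is base-page
aligned but not huge-page aligned, and a dirty 2M IOMMU huge page
straddles that point):

  /* Illustrative only: a dirty 2M huge PTE straddling the mapped end. */
  #include <stdio.h>

  int main(void)
  {
          unsigned long long sz_1m      = 1ULL << 20;
          unsigned long long mapped_end = (64ULL << 30) + sz_1m; /* end of mapped chunk */
          unsigned long long huge_start = 64ULL << 30;           /* dirty 2M huge page */
          unsigned long long huge_len   = 2 * sz_1m;

          /* Only the part below mapped_end lands in the pinned bitmap pages. */
          unsigned long long covered = mapped_end - huge_start;

          printf("recorded %lluK of a %lluK dirty huge page\n",
                 covered >> 10, huge_len >> 10);                  /* 1024K of 2048K */
          return 0;
  }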

To fix that, if there is a remainder of bits left to set that the
currently mapped bitmap pages don't cover, make a copy of the bitmap
structure and iterate-and-set the remaining bits. Finally, when advancing
the iterator, skip over all the bits that were set ahead.
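
A rough sketch of that idea, using a simplified in-memory model rather
than the real iova_bitmap code (all names below are hypothetical; the
actual patch works on a copy of struct iova_bitmap and the user pages it
pins):

  /*
   * Toy model: a flat array stands in for the user bitmap and "window"
   * is the range covered by the currently pinned pages.  In the real
   * code the bits past the window live in user pages that are not
   * pinned yet, which is why the patch iterates a copy of the bitmap
   * structure to reach them.
   */
  #include <stdio.h>

  #define TOTAL_BITS   256
  #define WINDOW_BITS  64   /* bits covered by one mapped chunk */

  struct toy_bitmap {
          unsigned char bits[TOTAL_BITS]; /* one byte per bit, for simplicity */
          unsigned long window_start;     /* first bit of the mapped window */
          unsigned long set_ahead;        /* bits set beyond the window end */
  };

  /* Set @nbits starting at @start; anything past the window is "set ahead". */
  static void toy_set(struct toy_bitmap *b, unsigned long start,
                      unsigned long nbits)
  {
          unsigned long end = start + nbits;
          unsigned long win_end = b->window_start + WINDOW_BITS;
          unsigned long i;

          for (i = start; i < end && i < TOTAL_BITS; i++)
                  b->bits[i] = 1;

          if (end > win_end)
                  b->set_ahead = end - win_end;
  }

  /* Advance the window, skipping the bits that were already set ahead. */
  static void toy_advance(struct toy_bitmap *b)
  {
          b->window_start += WINDOW_BITS + b->set_ahead;
          b->set_ahead = 0;
  }

  int main(void)
  {
          struct toy_bitmap b = { .window_start = 0 };

          /* A huge page worth of dirty bits straddles the window end (bit 64). */
          toy_set(&b, 60, 16);            /* sets bits 60..75 */
          toy_advance(&b);

          printf("next window starts at bit %lu\n", b.window_start); /* 76 */
          return 0;
  }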

Link: https://lore.kernel.org/r/20240202133415.23819-5-joao.m.martins@oracle.com
Reported-by: Avihai Horon <avihaih@nvidia.com>
Fixes: f35f22cc760e ("iommu/vt-d: Access/Dirty bit support for SS domains")
Fixes: 421a511a293f ("iommu/amd: Access/Dirty bit support in IOPTEs")
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Tested-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
drivers/iommu/iommufd/iova_bitmap.c