iommu/io-pgtable-arm: Fix iova_to_phys for block entries
author Will Deacon <will.deacon@arm.com>
Thu, 16 Jun 2016 17:21:19 +0000 (18:21 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sat, 20 Aug 2016 16:10:57 +0000 (18:10 +0200)
commit 7c6d90e2bb1a98b86d73b9e8ab4d97ed5507e37c upstream.

The implementation of iova_to_phys for the long-descriptor ARM
io-pgtable code always masks with the granule size when inserting the
low virtual address bits into the physical address determined from the
page tables. In cases where the leaf entry is found before the final
level of table (i.e. due to a block mapping), this results in rounding
down to the bottom page of the block mapping. Consequently, the physical
address range batching in vfio_unmap_unpin() is defeated and we end
up taking the long way home.

This patch fixes the problem by masking the virtual address with the
appropriate mask for the level at which the leaf descriptor is located.
The short-descriptor code already gets this right, so no change is
needed there.
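
As an illustration (not part of the original commit), here is a minimal
user-space C sketch of the effect, assuming a 4 KiB granule and a
hypothetical 2 MiB level-2 block mapping; all constants and names below
are illustrative only:

/*
 * Standalone sketch (not kernel code) of the rounding error for a
 * hypothetical 2 MiB block mapping with a 4 KiB granule.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t granule    = 4096;              /* page (granule) size        */
	uint64_t block_size = 2 * 1024 * 1024;   /* size mapped by the block   */
	uint64_t block_phys = 0x80000000ULL;     /* PA in the block descriptor */
	uint64_t iova       = 0x100234;          /* 1 MiB + 0x234 into block   */

	/* Old behaviour: masking with the granule keeps only the low page
	 * offset, so every iova in the block collapses onto its bottom page. */
	uint64_t old_pa = block_phys | (iova & (granule - 1));

	/* Fixed behaviour: masking with the block size for the level of the
	 * leaf descriptor preserves the full offset within the block. */
	uint64_t new_pa = block_phys | (iova & (block_size - 1));

	printf("granule mask: %#llx\n", (unsigned long long)old_pa);
	printf("block mask:   %#llx\n", (unsigned long long)new_pa);
	return 0;
}

With the granule mask the sketch yields 0x80000234 no matter how far into
the block the iova points, whereas the block-size mask yields 0x80100234.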

Reported-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/iommu/io-pgtable-arm.c

index a1ed1b73fed49f11259a635a786859b6bb44c523..f5c90e1366ce030249d1ae6d392762c6ea1834ef 100644
@@ -576,7 +576,7 @@ static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
        return 0;
 
 found_translation:
-       iova &= (ARM_LPAE_GRANULE(data) - 1);
+       iova &= (ARM_LPAE_BLOCK_SIZE(lvl, data) - 1);
        return ((phys_addr_t)iopte_to_pfn(pte,data) << data->pg_shift) | iova;
 }