From ba2374fd2bf379f933773811fdb06cb6a5445f41 Mon Sep 17 00:00:00 2001
From: Christian Zander <christian@nervanasys.com>
Date: Wed, 10 Jun 2015 09:41:45 -0700
Subject: iommu/vt-d: fix range computation when making room for large pages

From: Christian Zander <christian@nervanasys.com>

commit ba2374fd2bf379f933773811fdb06cb6a5445f41 upstream.

In preparation for the installation of a large page, any small page
tables that may still exist in the target IOV address range are
removed. However, if a scatter/gather list entry is large enough to
fit more than one large page, the address space for any subsequent
large pages is not cleared of conflicting small page tables.

This can cause legitimate mapping requests to fail with errors of the
form below, potentially followed by a series of IOMMU faults:

ERROR: DMA PTE for vPFN 0xfde00 already set (to 7f83a4003 not 7e9e00083)

In this example, a 4MiB scatter/gather list entry resulted in the
successful installation of a large page @ vPFN 0xfdc00, followed by
a failed attempt to install another large page @ vPFN 0xfde00, due to
the presence of a pointer to a small page table @ 0x7f83a4000.

To address this problem, compute the number of large pages that fit
into a given scatter/gather list entry, and use it to derive the
last vPFN covered by the large page(s).

Signed-off-by: Christian Zander <christian@nervanasys.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/iommu/intel-iommu.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2033,15 +2033,19 @@ static int __domain_mapping(struct dmar_
 			return -ENOMEM;
 		/* It is large page*/
 		if (largepage_lvl > 1) {
+			unsigned long nr_superpages, end_pfn;
+
 			pteval |= DMA_PTE_LARGE_PAGE;
 			lvl_pages = lvl_to_nr_pages(largepage_lvl);
+
+			nr_superpages = sg_res / lvl_pages;
+			end_pfn = iov_pfn + nr_superpages * lvl_pages - 1;
+
 			/*
 			 * Ensure that old small page tables are
-			 * removed to make room for superpage,
-			 * if they exist.
+			 * removed to make room for superpage(s).
 			 */
-			dma_pte_free_pagetable(domain, iov_pfn,
-					       iov_pfn + lvl_pages - 1);
+			dma_pte_free_pagetable(domain, iov_pfn, end_pfn);
 		} else {
 			pteval &= ~(uint64_t)DMA_PTE_LARGE_PAGE;
 		}