From 7170130e4c72ce0caa0cb42a1627c635cc262821 Mon Sep 17 00:00:00 2001
From: Balbir Singh <balbirs@nvidia.com>
Date: Tue, 1 Apr 2025 11:07:52 +1100
Subject: x86/mm/init: Handle the special case of device private pages in add_pages(), to not increase max_pfn and trigger dma_addressing_limited() bounce buffers
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Balbir Singh <balbirs@nvidia.com>

commit 7170130e4c72ce0caa0cb42a1627c635cc262821 upstream.

As Bert Karwatzki reported, the following recent commit causes a
performance regression on AMD iGPU and dGPU systems:

  7ffb791423c7 ("x86/kaslr: Reduce KASLR entropy on most x86 systems")

It exposed a bug in the interaction between nokaslr and zone device
memory.

The root cause of the bug is that the GPU driver registers a zone
device private memory region. When KASLR is disabled or the above
commit is applied, direct_map_physmem_end is set much higher than
10 TiB, typically to the 64 TiB address. When zone device private
memory is added to the system via add_pages(), max_pfn is bumped up
to the same value. This causes dma_addressing_limited() to return
true, because the device cannot address memory all the way up to
max_pfn.
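
To make this concrete, here is a minimal userspace sketch of the
relationship (simplified from the dma_addressing_limited() and
dma_direct_get_required_mask() logic in kernel/dma/; it is not the
verbatim kernel code): the DMA mask a device must cover is derived
from the highest populated PFN, so bumping max_pfn into a 64 TiB
device private range makes an otherwise sufficient device mask look
limited:

  #include <stdbool.h>
  #include <stdint.h>

  #define PAGE_SHIFT 12

  /* Smallest mask covering the last byte below max_pfn, in the
   * spirit of the kernel's DMA_BIT_MASK(fls64(...)). */
  static uint64_t required_dma_mask(uint64_t max_pfn)
  {
          uint64_t last_addr = (max_pfn << PAGE_SHIFT) - 1;
          int bits = 64 - __builtin_clzll(last_addr);

          return bits >= 64 ? ~0ULL : (1ULL << bits) - 1;
  }

  /* True when the device mask cannot reach max_pfn, at which point
   * the DMA layer starts considering bounce buffers. */
  static bool addressing_limited(uint64_t dev_dma_mask, uint64_t max_pfn)
  {
          return dev_dma_mask < required_dma_mask(max_pfn);
  }

With max_pfn at the 64 TiB mark the required mask is 46 bits wide, so
any narrower device mask (the exact widths here are illustrative)
reports as limited even though the device private range is never a
DMA target.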

This caused a regression for games played on the iGPU, as it resulted
in the DMA32 zone being used for GPU allocations.

Fix this by not bumping up max_pfn on x86 systems when pgmap is passed
into add_pages(). The presence of pgmap is used to determine whether
device private memory is being added via add_pages().
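
For context, the device private path reaches add_pages() roughly as
follows (a trimmed paraphrase of the pagemap_range() logic in
mm/memremap.c, not a verbatim excerpt): memremap_pages() hands the
pgmap to the hotplug code through struct mhp_params, which is exactly
what the new !params->pgmap check keys on:

  struct mhp_params params = {
          .pgmap  = pgmap,        /* non-NULL for device memory */
          .pgprot = PAGE_KERNEL,
  };

  if (pgmap->type == MEMORY_DEVICE_PRIVATE)
          /* Device private pages never become regular RAM; with this
           * fix, x86 add_pages() leaves max_pfn alone on this path. */
          error = add_pages(nid, PHYS_PFN(range->start),
                            PHYS_PFN(range_len(range)), &params);
  else
          error = arch_add_memory(nid, range->start,
                                  range_len(range), &params);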

devm_request_mem_region() and request_free_mem_region() are used to
request device private memory. iomem_resource is passed as the base
resource, with start and end parameters. iomem_resource's end depends
on several factors, including the platform and virtualization. On
bare metal x86, for example, this value is set from
boot_cpu_data.x86_phys_bits, and boot_cpu_data.x86_phys_bits can
change depending on support for MKTME. By default it is set to the
same as log2(direct_map_physmem_end), which is 46 to 52 bits depending
on the number of page table levels. The allocation routines use
iomem_resource's end and direct_map_physmem_end to figure out where to
allocate the region.
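
As a concrete example of that flow, a driver typically carves its
region out of iomem_resource and registers it along these lines (a
condensed sketch modeled on lib/test_hmm.c; error unwinding is
omitted, and DEVMEM_CHUNK_SIZE and my_pagemap_ops stand in for the
driver's real chunk size and struct dev_pagemap_ops):

  struct dev_pagemap *pgmap = &mydev->pagemap;  /* hypothetical driver state */
  struct resource *res;
  void *addr;

  /* Find a free range below iomem_resource's end, i.e. below the
   * platform physical address limit discussed above. */
  res = request_free_mem_region(&iomem_resource, DEVMEM_CHUNK_SIZE,
                                "device-private");
  if (IS_ERR(res))
          return PTR_ERR(res);

  pgmap->type        = MEMORY_DEVICE_PRIVATE;
  pgmap->range.start = res->start;
  pgmap->range.end   = res->end;
  pgmap->nr_range    = 1;
  pgmap->ops         = &my_pagemap_ops;

  /* Ends up in add_pages() with params->pgmap set, so with this fix
   * max_pfn no longer grows to cover the device private range. */
  addr = memremap_pages(pgmap, numa_node_id());
  if (IS_ERR(addr))
          return PTR_ERR(addr);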

[ arch/powerpc is also impacted by this problem, but this patch does
  not fix the issue for PowerPC. ]

1. Tested on a virtual machine with test_hmm for zone device insertion

2. A previous version of this patch was tested by Bert, please see:
   https://lore.kernel.org/lkml/d87680bab997fdc9fb4e638983132af235d9a03a.camel@web.de/

[ mingo: Clarified the comments and the changelog. ]

Reported-by: Bert Karwatzki <spasswolf@web.de>
Tested-by: Bert Karwatzki <spasswolf@web.de>
Fixes: 7ffb791423c7 ("x86/kaslr: Reduce KASLR entropy on most x86 systems")
Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Link: https://lore.kernel.org/r/20250401000752.249348-1-balbirs@nvidia.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/mm/init_64.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -959,9 +959,18 @@ int add_pages(int nid, unsigned long sta
 	ret = __add_pages(nid, start_pfn, nr_pages, params);
 	WARN_ON_ONCE(ret);
 
-	/* update max_pfn, max_low_pfn and high_memory */
-	update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
-				  nr_pages << PAGE_SHIFT);
+	/*
+	 * Special case: add_pages() is called by memremap_pages() for adding device
+	 * private pages. Do not bump up max_pfn in the device private path,
+	 * because max_pfn changes affect dma_addressing_limited().
+	 *
+	 * dma_addressing_limited() returning true when max_pfn is the device's
+	 * addressable memory can force device drivers to use bounce buffers
+	 * and impact their performance negatively:
+	 */
+	if (!params->pgmap)
+		/* update max_pfn, max_low_pfn and high_memory */
+		update_end_of_memory_vars(start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
 
 	return ret;
 }