From: Greg Kroah-Hartman
Date: Tue, 27 May 2025 16:56:03 +0000 (+0200)
Subject: drop 7170130e4c72ce0caa0cb42a1627c635cc262821 from 5.15
X-Git-Tag: v6.12.31~3
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=81dc9d12b1c2aaa20df148d914fbba83692122eb;p=thirdparty%2Fkernel%2Fstable-queue.git

drop 7170130e4c72ce0caa0cb42a1627c635cc262821 from 5.15
---

diff --git a/queue-5.15/series b/queue-5.15/series
index 6da07d97de..f529727516 100644
--- a/queue-5.15/series
+++ b/queue-5.15/series
@@ -1,4 +1,3 @@
-x86-mm-init-handle-the-special-case-of-device-private-pages-in-add_pages-to-not-increase-max_pfn-and-trigger-dma_addressing_limited-bounce-buffers.patch
 scsi-target-iscsi-fix-timeout-on-deleted-connection.patch
 virtio_ring-fix-data-race-by-tagging-event_triggered.patch
 dma-mapping-avoid-potential-unused-data-compilation-.patch
diff --git a/queue-5.15/x86-mm-init-handle-the-special-case-of-device-private-pages-in-add_pages-to-not-increase-max_pfn-and-trigger-dma_addressing_limited-bounce-buffers.patch b/queue-5.15/x86-mm-init-handle-the-special-case-of-device-private-pages-in-add_pages-to-not-increase-max_pfn-and-trigger-dma_addressing_limited-bounce-buffers.patch
deleted file mode 100644
index 937e7ebe32..0000000000
--- a/queue-5.15/x86-mm-init-handle-the-special-case-of-device-private-pages-in-add_pages-to-not-increase-max_pfn-and-trigger-dma_addressing_limited-bounce-buffers.patch
+++ /dev/null
@@ -1,106 +0,0 @@
-From 7170130e4c72ce0caa0cb42a1627c635cc262821 Mon Sep 17 00:00:00 2001
-From: Balbir Singh
-Date: Tue, 1 Apr 2025 11:07:52 +1100
-Subject: x86/mm/init: Handle the special case of device private pages in add_pages(), to not increase max_pfn and trigger dma_addressing_limited() bounce buffers
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-From: Balbir Singh
-
-commit 7170130e4c72ce0caa0cb42a1627c635cc262821 upstream.
-
-As Bert Karwatzki reported, the following recent commit causes a
-performance regression on AMD iGPU and dGPU systems:
-
-  7ffb791423c7 ("x86/kaslr: Reduce KASLR entropy on most x86 systems")
-
-It exposed a bug with nokaslr and zone device interaction.
-
-The root cause of the bug is that the GPU driver registers a zone
-device private memory region. When KASLR is disabled or the above
-commit is applied, direct_map_physmem_end is set to much higher than
-10 TiB, typically to the 64 TiB address. When zone device private
-memory is added to the system via add_pages(), it bumps up max_pfn to
-the same value. This causes dma_addressing_limited() to return true,
-since the device cannot address memory all the way up to max_pfn.
-
-This caused a regression for games played on the iGPU, as it resulted
-in the DMA32 zone being used for GPU allocations.
-
-Fix this by not bumping up max_pfn on x86 systems when pgmap is passed
-into add_pages(). The presence of pgmap is used to determine whether
-device private memory is being added via add_pages().
-
-More details:
-
-devm_request_mem_region() and request_free_mem_region() request
-device private memory. iomem_resource is passed as the base resource
-with start and end parameters. iomem_resource's end depends on several
-factors, including the platform and virtualization. On x86, for
-example on bare metal, this value is set to boot_cpu_data.x86_phys_bits.
-boot_cpu_data.x86_phys_bits can change depending on support for MKTME.
-By default it is set to the same as log2(direct_map_physmem_end),
-which is 46 to 52 bits depending on the number of levels in the page
-table. The allocation routines use iomem_resource's end and
-direct_map_physmem_end to figure out where to allocate the region.
-
-[ arch/powerpc is also impacted by this problem, but this patch does not fix
-  the issue for PowerPC. ]
-
-Testing:
-
- 1. Tested on a virtual machine with test_hmm for zone device insertion
-
- 2. A previous version of this patch was tested by Bert, please see:
-    https://lore.kernel.org/lkml/d87680bab997fdc9fb4e638983132af235d9a03a.camel@web.de/
-
-[ mingo: Clarified the comments and the changelog. ]
-
-Reported-by: Bert Karwatzki
-Tested-by: Bert Karwatzki
-Fixes: 7ffb791423c7 ("x86/kaslr: Reduce KASLR entropy on most x86 systems")
-Signed-off-by: Balbir Singh
-Signed-off-by: Ingo Molnar
-Cc: Brian Gerst
-Cc: Juergen Gross
-Cc: H. Peter Anvin
-Cc: Linus Torvalds
-Cc: Andrew Morton
-Cc: Christoph Hellwig
-Cc: Pierre-Eric Pelloux-Prayer
-Cc: Alex Deucher
-Cc: Christian König
-Cc: David Airlie
-Cc: Simona Vetter
-Link: https://lore.kernel.org/r/20250401000752.249348-1-balbirs@nvidia.com
-Signed-off-by: Greg Kroah-Hartman
----
- arch/x86/mm/init_64.c |   15 ++++++++++++---
- 1 file changed, 12 insertions(+), 3 deletions(-)
-
---- a/arch/x86/mm/init_64.c
--+++ b/arch/x86/mm/init_64.c
--@@ -955,9 +955,18 @@ int add_pages(int nid, unsigned long sta
- 	ret = __add_pages(nid, start_pfn, nr_pages, params);
- 	WARN_ON_ONCE(ret);
- 
--	/* update max_pfn, max_low_pfn and high_memory */
--	update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
--				  nr_pages << PAGE_SHIFT);
-+	/*
-+	 * Special case: add_pages() is called by memremap_pages() for adding device
-+	 * private pages. Do not bump up max_pfn in the device private path,
-+	 * because max_pfn changes affect dma_addressing_limited().
-+	 *
-+	 * dma_addressing_limited() returning true when max_pfn is the device's
-+	 * addressable memory can force device drivers to use bounce buffers
-+	 * and impact their performance negatively:
-+	 */
-+	if (!params->pgmap)
-+		/* update max_pfn, max_low_pfn and high_memory */
-+		update_end_of_memory_vars(start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
- 
- 	return ret;
- }
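
For context on why the dropped patch guards the max_pfn update: dma_addressing_limited() in effect asks whether system memory reaches addresses the device's DMA mask cannot cover, and on x86 the upper bound it sees is derived from max_pfn. The stand-alone C sketch below only models that relationship; it is not kernel code, and the struct, the helper name and the 40-bit mask are invented for the example.

/*
 * Simplified illustration (not kernel code): raising max_pfn can make a
 * device look "addressing limited" even though its real memory never grew.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* Highest page frame number the DMA layer believes exists. */
static uint64_t max_pfn;

/* Hypothetical device: dma_mask is the highest address it can drive. */
struct device {
	uint64_t dma_mask;
};

/* Model of the check: true if memory reaches above the device's mask. */
static bool addressing_limited(const struct device *dev)
{
	uint64_t highest_addr = (max_pfn << PAGE_SHIFT) - 1;

	return highest_addr > dev->dma_mask;
}

int main(void)
{
	struct device gpu = { .dma_mask = (1ULL << 40) - 1 };	/* 40-bit reach */

	/* Real RAM tops out at 1 TiB: the device can address all of it. */
	max_pfn = (1ULL << 40) >> PAGE_SHIFT;
	printf("before add_pages(): limited = %d\n", addressing_limited(&gpu));

	/* A device-private region registered near 64 TiB bumps max_pfn ... */
	max_pfn = (64ULL << 40) >> PAGE_SHIFT;
	printf("after bumping max_pfn: limited = %d\n", addressing_limited(&gpu));

	return 0;
}

With the pgmap check from the patch above, a device-private insertion no longer raises max_pfn, so a device whose mask already covers all real RAM keeps passing the check and is not pushed toward bounce buffers or DMA32 allocations.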