From: Justin Green
Date: Wed, 28 Jan 2026 22:56:47 +0000 (-0500)
Subject: mm: refactor vm_map_pages to use vm_insert_pages
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=cc5cbf37ceac49d446aa9f1e888d35c3a3353616;p=thirdparty%2Fkernel%2Fstable.git

mm: refactor vm_map_pages to use vm_insert_pages

vm_map_pages() currently calls vm_insert_page() on each individual page in
the mapping, which creates significant overhead because the page table
spinlock is taken and released once per page.  Instead, batch the
insertions with vm_insert_pages(), which amortizes the cost of the
spinlock across the whole mapping.

Tested by watching hardware-accelerated video on an MTK ChromeOS device.
This particular path maps both a V4L2 buffer and a GEM-allocated buffer
into userspace and converts the contents from one pixel format to another.
Both vb2_mmap() and mtk_gem_object_mmap() exercise this pathway.

Link: https://lkml.kernel.org/r/20260128225648.2938636-1-greenjustin@chromium.org
Signed-off-by: Justin Green
Acked-by: Brian Geffon
Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: Arjun Roy
Cc: David Hildenbrand
Cc: David Rientjes
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---

diff --git a/mm/memory.c b/mm/memory.c
index 187f16b7e996..2a347e31a077 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2499,7 +2499,6 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
 {
 	unsigned long count = vma_pages(vma);
 	unsigned long uaddr = vma->vm_start;
-	int ret, i;
 
 	/* Fail if the user requested offset is beyond the end of the object */
 	if (offset >= num)
@@ -2509,14 +2508,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
 	if (count > num - offset)
 		return -ENXIO;
 
-	for (i = 0; i < count; i++) {
-		ret = vm_insert_page(vma, uaddr, pages[offset + i]);
-		if (ret < 0)
-			return ret;
-		uaddr += PAGE_SIZE;
-	}
-
-	return 0;
+	return vm_insert_pages(vma, uaddr, pages + offset, &count);
 }
 
 /**
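
For context, and not part of the patch itself, below is a minimal sketch of
the caller side this change affects.  struct my_buf and my_drv_mmap() are
hypothetical driver names invented for illustration; vm_map_pages(),
vma_pages() and vma->vm_pgoff are the real kernel interfaces involved.  A
driver's .mmap handler hands its page array to vm_map_pages(), which lands
in __vm_map_pages() and, with this change, in vm_insert_pages():

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical per-buffer bookkeeping; not a kernel structure. */
struct my_buf {
	struct page **pages;		/* pages backing the buffer */
	unsigned long num_pages;	/* length of the pages[] array */
};

/* Hypothetical .mmap handler in a driver's file_operations. */
static int my_drv_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_buf *buf = file->private_data;

	/*
	 * vm_map_pages() treats vma->vm_pgoff as an offset into the page
	 * array and maps vma_pages(vma) pages starting there.  With this
	 * patch the PTEs are filled in batches via vm_insert_pages()
	 * rather than one vm_insert_page() call, and one page table lock
	 * round trip, per page.
	 */
	return vm_map_pages(vma, buf->pages, buf->num_pages);
}

The caller-visible behaviour is unchanged; the benefit simply grows with the
size of the mapping, since the lock cost is amortized over each batch of
pages instead of being paid per page.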