drm/shmem-helper: Map huge pages in fault handler
Attempt a PMD-sized PFN insertion into the VMA when the address handled
by the fault handler falls within a huge page.
On builds with CONFIG_TRANSPARENT_HUGEPAGE enabled, if the mmap() user
address is PMD-aligned, the GEM object is backed by shmem buffers on a
mountpoint using the 'huge=' option, and the shmem backing store
manages to allocate a huge folio, CPU mappings then benefit from
significantly increased memcpy() performance. When these conditions are
met on a system with 2 MiB huge pages, an aligned 2 MiB copy raises a
single page fault instead of 512.
v4:
- implement map_pages instead of huge_fault
v6:
- get rid of map_pages handler for now (keep it for another series
along with arm64 contpte support)
v11:
- remove page fault validity check helper
- rename drm_gem_shmem_map_pmd() to drm_gem_shmem_try_map_pmd()
- add Boris R-b
v12:
- move up ret var decl in fault handler to minimize diff
Signed-off-by: Loïc Molinari <loic.molinari@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Link: https://patch.msgid.link/20251205182231.194072-3-loic.molinari@collabora.com
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>