From: Kefeng Wang
Date: Mon, 12 Jan 2026 15:09:51 +0000 (+0800)
Subject: mm: page_alloc: optimize pfn_range_valid_contig()
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9a8e0c31b3121df8ed193437b59969adabc7e721;p=thirdparty%2Fkernel%2Flinux.git

mm: page_alloc: optimize pfn_range_valid_contig()

alloc_contig_pages() spends a significant amount of time in
pfn_range_valid_contig():

  -   set_max_huge_pages
     - 99.98% alloc_pool_huge_folio
          only_alloc_fresh_hugetlb_folio.isra.0
        - alloc_contig_frozen_pages_noprof
           - 87.00% pfn_range_valid_contig
                pfn_to_online_page
           - 12.91% alloc_contig_frozen_range_noprof
                4.51% replace_free_hugepage_folios
              - 4.02% prep_new_page
                   prep_compound_page
              - 2.98% undo_isolate_page_range
                 - 2.79% unset_migratetype_isolate
                    - 2.75% __move_freepages_block_isolate
                         2.71% __move_freepages_block
              - 0.98% start_isolate_page_range
                   0.66% set_migratetype_isolate

To optimize this, use the new helper page_is_unmovable() to avoid
unnecessary iterations over compound pages (such as THPs not on the LRU)
and high-order buddy pages, which significantly improves the efficiency
of contiguous memory allocation.

A simple test on a machine with 114G of free memory, allocating
120 * 1G HugeTLB folios (104 successfully returned):

  time echo 120 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

  Before: 0m3.605s
  After:  0m0.602s

Link: https://lkml.kernel.org/r/20260112150954.1802953-3-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang
Reviewed-by: Oscar Salvador
Reviewed-by: Zi Yan
Cc: Brendan Jackman
Cc: David Hildenbrand
Cc: Jane Chu
Cc: Johannes Weiner
Cc: Matthew Wilcox (Oracle)
Cc: Muchun Song
Cc: Sidhartha Kumar
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2c70ba9d5cc65..e4104973e22fd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7153,18 +7153,20 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 			    unsigned long nr_pages, bool skip_hugetlb,
 			    bool *skipped_hugetlb)
 {
-	unsigned long i, end_pfn = start_pfn + nr_pages;
+	unsigned long end_pfn = start_pfn + nr_pages;
 	struct page *page;
 
-	for (i = start_pfn; i < end_pfn; i++) {
-		page = pfn_to_online_page(i);
+	while (start_pfn < end_pfn) {
+		unsigned long step = 1;
+
+		page = pfn_to_online_page(start_pfn);
 		if (!page)
 			return false;
 
 		if (page_zone(page) != z)
 			return false;
 
-		if (PageReserved(page))
+		if (page_is_unmovable(z, page, PB_ISOLATE_MODE_OTHER, &step))
 			return false;
 
 		/*
@@ -7179,9 +7181,6 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 		if (PageHuge(page)) {
 			unsigned int order;
 
-			if (!IS_ENABLED(CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION))
-				return false;
-
 			if (skip_hugetlb) {
 				*skipped_hugetlb = true;
 				return false;
@@ -7192,17 +7191,9 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 			if ((order >= MAX_FOLIO_ORDER) ||
 			    (nr_pages <= (1 << order)))
 				return false;
-
-			/*
-			 * Reaching this point means we've encounted a huge page
-			 * smaller than nr_pages, skip all pfn's for that page.
-			 *
-			 * We can't get here from a tail-PageHuge, as it implies
-			 * we started a scan in the middle of a hugepage larger
-			 * than nr_pages - which the prior check filters for.
-			 */
-			i += (1 << order) - 1;
 		}
+
+		start_pfn += step;
 	}
 	return true;
 }
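
For readers who want to see the effect of the new step-based walk in
isolation, here is a minimal, self-contained userspace sketch (not kernel
code: NR_PFNS, block_order() and range_block_unmovable() are simulated
stand-ins, and range_valid() only mirrors the shape of the new while-loop
in pfn_range_valid_contig()). It shows how letting the unmovability check
report a skip distance turns a per-pfn scan into a per-block scan:

  #include <stdbool.h>
  #include <stdio.h>

  #define NR_PFNS	(1UL << 18)	/* simulated 1 GiB range of 4 KiB pages */

  /* Simulated per-pfn block order: every pfn belongs to an order-9 block. */
  static unsigned int block_order(unsigned long pfn)
  {
  	return 9;
  }

  /*
   * Stand-in for a check such as page_is_unmovable(): it reports whether
   * the block is unmovable and tells the caller how many pfns it may
   * skip via *step.
   */
  static bool range_block_unmovable(unsigned long pfn, unsigned long *step)
  {
  	*step = 1UL << block_order(pfn);
  	return false;		/* pretend everything is movable */
  }

  /* Mirrors the shape of the new while-loop in pfn_range_valid_contig(). */
  static bool range_valid(unsigned long start_pfn, unsigned long nr_pages,
  			unsigned long *iterations)
  {
  	unsigned long end_pfn = start_pfn + nr_pages;

  	while (start_pfn < end_pfn) {
  		unsigned long step = 1;

  		(*iterations)++;
  		if (range_block_unmovable(start_pfn, &step))
  			return false;

  		start_pfn += step;	/* skip the whole block at once */
  	}
  	return true;
  }

  int main(void)
  {
  	unsigned long iters = 0;

  	range_valid(0, NR_PFNS, &iters);
  	/* 512 iterations instead of 262144 when blocks are order-9 */
  	printf("scanned %lu pfns in %lu iterations\n", NR_PFNS, iters);
  	return 0;
  }

With order-9 blocks the simulated range is covered in 512 iterations
instead of 262144, which is the same effect the patch relies on when it
advances start_pfn by step.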