git.ipfire.org Git - thirdparty/kernel/linux.git/commitdiff
hugetlb: increase hugepage reservations when using node-specific "hugepages=" cmdline
author	Li Zhe <lizhe.67@bytedance.com>
	Thu, 22 Jan 2026 03:50:02 +0000 (11:50 +0800)
committer	Andrew Morton <akpm@linux-foundation.org>
	Sat, 31 Jan 2026 22:22:52 +0000 (14:22 -0800)
Commit 3dfd02c90037 ("hugetlb: increase number of reserving hugepages via
cmdline") raised the number of hugepages that can be reserved through the
boot-time "hugepages=" parameter for the non-node-specific case, but left
the node-specific form of the same parameter unchanged.
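For reference, the node-specific form assigns a hugepage count to each NUMA node directly on the kernel command line (syntax as documented in Documentation/admin-guide/kernel-parameters.txt); the node counts below are illustrative:

```shell
# Node-specific: reserve 512 2 MiB hugepages on node 0 and 512 on node 1.
hugepagesz=2M hugepages=0:512,1:512

# Non-node-specific equivalent: one global count, distributed across nodes.
hugepagesz=2M hugepages=1024
```

It is the first, per-node form whose reservation path this patch improves.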

This patch extends the same optimization to node-specific reservations.
When HugeTLB vmemmap optimization (HVO) is enabled and a node cannot
satisfy the requested number of hugepages, the code first optimizes the
hugepages already obtained from the buddy allocator on that node,
releasing most of their struct-page (vmemmap) memory so it can be
reclaimed and reused for additional hugepage reservations on that node.

This is particularly beneficial for configurations that require identical,
large per-node hugepage reservations.  On a four-node, 384 GB x86 VM, the
patch raises the attainable 2 MiB hugepage reservation from under 374 GB
to more than 379 GB.

Link: https://lkml.kernel.org/r/20260122035002.79958-1-lizhe.67@bytedance.com
Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Acked-by: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/hugetlb.c

index 120ebd448b4231c490512e18afb0993ceffb9b02..0b005e944ee39486191f701a1db5815e51a1f0fd 100644
@@ -3435,6 +3435,13 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 
                        folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, nid,
                                        &node_states[N_MEMORY], NULL);
+                       if (!folio && !list_empty(&folio_list) &&
+                           hugetlb_vmemmap_optimizable_size(h)) {
+                               prep_and_add_allocated_folios(h, &folio_list);
+                               INIT_LIST_HEAD(&folio_list);
+                               folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, nid,
+                                               &node_states[N_MEMORY], NULL);
+                       }
                        if (!folio)
                                break;
                        list_add(&folio->lru, &folio_list);