From: Li Xinhai
Date: Fri, 4 Sep 2020 23:36:10 +0000 (-0700)
Subject: mm/hugetlb: try preferred node first when alloc gigantic page from cma
X-Git-Tag: v5.8.8~5
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=e2d4e423205ed6d80794be050c3f0cad1386a932;p=thirdparty%2Fkernel%2Fstable.git

mm/hugetlb: try preferred node first when alloc gigantic page from cma

commit 953f064aa6b29debcc211869b60bd59f26d19c34 upstream.

Since commit cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic
hugepages using cma"), a gigantic page may be allocated from a node that
is not the preferred node, even though pages are available on the
preferred node.  The reason is that the nid parameter is ignored in
alloc_gigantic_page().

In addition, __GFP_THISNODE also needs to be checked when the caller
requires allocation only from the preferred node.

After this patch, the preferred node is tried before the other allowed
nodes, and no other node is tried when __GFP_THISNODE is specified.  If
the caller does not specify a preferred node, the current node is used,
which keeps the behavior of gigantic and non-gigantic hugetlb page
allocation consistent.

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Li Xinhai
Signed-off-by: Andrew Morton
Reviewed-by: Mike Kravetz
Acked-by: Michal Hocko
Cc: Roman Gushchin
Link: https://lkml.kernel.org/r/20200902025016.697260-1-lixinhai.lxh@gmail.com
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7952c6cb6f08c..8695dce858d8e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1251,21 +1251,32 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 		int nid, nodemask_t *nodemask)
 {
 	unsigned long nr_pages = 1UL << huge_page_order(h);
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
 
 #ifdef CONFIG_CMA
 	{
 		struct page *page;
 		int node;
 
-		for_each_node_mask(node, *nodemask) {
-			if (!hugetlb_cma[node])
-				continue;
-
-			page = cma_alloc(hugetlb_cma[node], nr_pages,
-					 huge_page_order(h), true);
+		if (hugetlb_cma[nid]) {
+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
+					huge_page_order(h), true);
 			if (page)
 				return page;
 		}
+
+		if (!(gfp_mask & __GFP_THISNODE)) {
+			for_each_node_mask(node, *nodemask) {
+				if (node == nid || !hugetlb_cma[node])
+					continue;
+
+				page = cma_alloc(hugetlb_cma[node], nr_pages,
+						huge_page_order(h), true);
+				if (page)
+					return page;
+			}
+		}
 	}
 #endif
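
A minimal userspace sketch of the allocation order introduced by this
patch is shown below.  It is an illustration only, not kernel code: the
names cma_area, try_cma_alloc, current_node, MAX_NODES, NO_NODE and the
boolean thisnode are hypothetical stand-ins for hugetlb_cma[],
cma_alloc(), numa_mem_id(), NUMA_NO_NODE, the node mask and the
__GFP_THISNODE check.

#include <stdbool.h>
#include <stddef.h>

#define MAX_NODES	8
#define NO_NODE		(-1)

/* Stand-in for a per-node CMA region; NULL means the node has no CMA area. */
static void *cma_area[MAX_NODES];

/* Stand-in for cma_alloc(): returns a page pointer or NULL on failure. */
static void *try_cma_alloc(int node)
{
	return cma_area[node];	/* placeholder: "succeeds" if an area exists */
}

/* Stand-in for numa_mem_id(): the node the caller is currently running on. */
static int current_node(void)
{
	return 0;
}

/*
 * Try the preferred node first; fall back to the remaining nodes only
 * when the caller did not insist on a single node (thisnode == false).
 */
static void *alloc_gigantic_sketch(int nid, bool thisnode)
{
	void *page;
	int node;

	if (nid == NO_NODE)
		nid = current_node();

	/* 1. Preferred node first. */
	if (cma_area[nid]) {
		page = try_cma_alloc(nid);
		if (page)
			return page;
	}

	/* 2. Other nodes, unless the caller demanded exactly one node. */
	if (!thisnode) {
		for (node = 0; node < MAX_NODES; node++) {
			if (node == nid || !cma_area[node])
				continue;

			page = try_cma_alloc(node);
			if (page)
				return page;
		}
	}

	return NULL;
}

int main(void)
{
	static char fake_region;	/* pretend only node 2 has a CMA area */

	cma_area[2] = &fake_region;

	/* Preferred node 0 has no CMA area, so the fallback finds node 2. */
	return alloc_gigantic_sketch(0, false) ? 0 : 1;
}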