From: Greg Kroah-Hartman
Date: Sat, 29 Oct 2016 12:51:31 +0000 (-0400)
Subject: 4.8-stable patches
X-Git-Tag: v4.4.29~3
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=79c7e040fde70c00b91c11821e581a6610c572d0;p=thirdparty%2Fkernel%2Fstable-queue.git

4.8-stable patches

added patches:
	mm-hugetlb-improve-locking-in-dissolve_free_huge_pages.patch
---

diff --git a/queue-4.8/mm-hugetlb-improve-locking-in-dissolve_free_huge_pages.patch b/queue-4.8/mm-hugetlb-improve-locking-in-dissolve_free_huge_pages.patch
new file mode 100644
index 00000000000..751a02d60ac
--- /dev/null
+++ b/queue-4.8/mm-hugetlb-improve-locking-in-dissolve_free_huge_pages.patch
@@ -0,0 +1,64 @@
+From eb03aa008561004257900983193d024e57abdd96 Mon Sep 17 00:00:00 2001
+From: Gerald Schaefer
+Date: Fri, 7 Oct 2016 17:01:13 -0700
+Subject: mm/hugetlb: improve locking in dissolve_free_huge_pages()
+
+From: Gerald Schaefer
+
+commit eb03aa008561004257900983193d024e57abdd96 upstream.
+
+For every pfn aligned to minimum_order, dissolve_free_huge_pages() will
+call dissolve_free_huge_page() which takes the hugetlb spinlock, even if
+the page is not huge at all or a hugepage that is in-use.
+
+Improve this by doing the PageHuge() and page_count() checks already in
+dissolve_free_huge_pages() before calling dissolve_free_huge_page().  In
+dissolve_free_huge_page(), when holding the spinlock, those checks need
+to be revalidated.
+
+Link: http://lkml.kernel.org/r/20160926172811.94033-4-gerald.schaefer@de.ibm.com
+Signed-off-by: Gerald Schaefer
+Acked-by: Michal Hocko
+Acked-by: Naoya Horiguchi
+Cc: "Kirill A . Shutemov"
+Cc: Vlastimil Babka
+Cc: Mike Kravetz
+Cc: "Aneesh Kumar K . V"
+Cc: Martin Schwidefsky
+Cc: Heiko Carstens
+Cc: Rui Teng
+Cc: Dave Hansen
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/hugetlb.c |   12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1476,14 +1476,20 @@ out:
+ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
+ {
+ 	unsigned long pfn;
++	struct page *page;
+ 	int rc = 0;
+ 
+ 	if (!hugepages_supported())
+ 		return rc;
+ 
+-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
+-		if (rc = dissolve_free_huge_page(pfn_to_page(pfn)))
+-			break;
++	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
++		page = pfn_to_page(pfn);
++		if (PageHuge(page) && !page_count(page)) {
++			rc = dissolve_free_huge_page(page);
++			if (rc)
++				break;
++		}
++	}
+ 
+ 	return rc;
+ }
diff --git a/queue-4.8/series b/queue-4.8/series
index c5f1783abfe..37b66c595e7 100644
--- a/queue-4.8/series
+++ b/queue-4.8/series
@@ -78,3 +78,4 @@ ib-mlx5-fix-steering-resource-leak.patch
 power-bq24257-fix-use-of-uninitialized-pointer-bq-charger.patch
 dmaengine-ipu-remove-bogus-no_irq-reference.patch
 mm-hugetlb-check-for-reserved-hugepages-during-memory-offline.patch
+mm-hugetlb-improve-locking-in-dissolve_free_huge_pages.patch