From: Hugh Dickins
Date: Mon, 8 Sep 2025 22:16:53 +0000 (-0700)
Subject: mm/gup: local lru_add_drain() to avoid lru_add_drain_all()
X-Git-Tag: v6.1.155~54
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9557ba5fd368b98329cf1882d538a670abaaacec;p=thirdparty%2Fkernel%2Fstable.git

mm/gup: local lru_add_drain() to avoid lru_add_drain_all()

[ Upstream commit a09a8a1fbb374e0053b97306da9dbc05bd384685 ]

In many cases, if collect_longterm_unpinnable_folios() does need to drain
the LRU cache to release a reference, the cache in question is on this
same CPU, and is much more efficiently drained by a preliminary local
lru_add_drain() than by the later cross-CPU lru_add_drain_all().

Marked for stable, to counter the increase in lru_add_drain_all()s from
"mm/gup: check ref_count instead of lru before migration".  Note for
clean backports: can take 6.16 commit a03db236aebf ("gup: optimize
longterm pin_user_pages() for large folio") first.

Link: https://lkml.kernel.org/r/66f2751f-283e-816d-9530-765db7edc465@google.com
Signed-off-by: Hugh Dickins
Acked-by: David Hildenbrand
Cc: "Aneesh Kumar K.V"
Cc: Axel Rasmussen
Cc: Chris Li
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Johannes Weiner
Cc: John Hubbard
Cc: Keir Fraser
Cc: Konstantin Khlebnikov
Cc: Li Zhe
Cc: Matthew Wilcox (Oracle)
Cc: Peter Xu
Cc: Rik van Riel
Cc: Shivank Garg
Cc: Vlastimil Babka
Cc: Wei Xu
Cc: Will Deacon
Cc: yangge
Cc: Yuanchu Xie
Cc: Yu Zhao
Cc:
Signed-off-by: Andrew Morton
[ Resolved minor conflicts ]
Signed-off-by: Hugh Dickins
Signed-off-by: Sasha Levin
---

diff --git a/mm/gup.c b/mm/gup.c
index 44e5fe2535d0e..e1f125af9c844 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1968,7 +1968,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 {
 	unsigned long i, collected = 0;
 	struct folio *prev_folio = NULL;
-	bool drain_allow = true;
+	int drained = 0;
 
 	for (i = 0; i < nr_pages; i++) {
 		struct folio *folio = page_folio(pages[i]);
@@ -1990,10 +1990,17 @@ static unsigned long collect_longterm_unpinnable_pages(
 			continue;
 		}
 
-		if (drain_allow && folio_ref_count(folio) !=
-		    folio_expected_ref_count(folio) + 1) {
+		if (drained == 0 &&
+		    folio_ref_count(folio) !=
+		    folio_expected_ref_count(folio) + 1) {
+			lru_add_drain();
+			drained = 1;
+		}
+		if (drained == 1 &&
+		    folio_ref_count(folio) !=
+		    folio_expected_ref_count(folio) + 1) {
			lru_add_drain_all();
-			drain_allow = false;
+			drained = 2;
 		}
 
 		if (folio_isolate_lru(folio))
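
To make the two-stage escalation easier to follow outside of diff context,
below is a minimal standalone C model of the pattern the patch introduces:
on the first elevated reference count, try the cheap same-CPU drain; if a
count is still elevated after that, fall back once to the expensive
cross-CPU drain; never drain again after both have been tried. The helpers
local_drain(), global_drain() and ref_is_elevated() are hypothetical
stand-ins for lru_add_drain(), lru_add_drain_all() and the
folio_ref_count() != folio_expected_ref_count() + 1 test; this is a sketch
of the control flow only, not kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for lru_add_drain(), lru_add_drain_all(),
 * and the elevated-reference-count test in the patch. */
static void local_drain(void)  { puts("drain this CPU's LRU cache"); }
static void global_drain(void) { puts("drain every CPU's LRU cache"); }
static bool ref_is_elevated(int item) { return item % 2 == 0; /* fake */ }

int main(void)
{
	/* 0 = nothing drained, 1 = local drain done, 2 = global drain done */
	int drained = 0;

	for (int i = 0; i < 4; i++) {
		/* First elevated count: try the cheap local drain, on the
		 * theory that the pinning cache is on this same CPU. */
		if (drained == 0 && ref_is_elevated(i)) {
			local_drain();
			drained = 1;
		}
		/* Count still elevated after the local drain (possibly in
		 * this same iteration): cross-CPU drain, at most once. */
		if (drained == 1 && ref_is_elevated(i)) {
			global_drain();
			drained = 2;
		}
		/* drained == 2: no further draining during this call. */
	}
	return 0;
}

Note that both tests run in the same loop iteration, so a reference that
survives the local drain escalates to the global drain immediately, exactly
as in the patched collect_longterm_unpinnable_pages().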