From: Hugh Dickins
Date: Mon, 8 Sep 2025 22:23:15 +0000 (-0700)
Subject: mm: folio_may_be_lru_cached() unless folio_test_large()
X-Git-Tag: v6.1.155~53
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=85fc33541cc56d364ec0c6b9cf9088674d9eb4c5;p=thirdparty%2Fkernel%2Fstable.git

mm: folio_may_be_lru_cached() unless folio_test_large()

[ Upstream commit 2da6de30e60dd9bb14600eff1cc99df2fa2ddae3 ]

mm/swap.c and mm/mlock.c agree to drain any per-CPU batch as soon as a
large folio is added: so collect_longterm_unpinnable_folios() just
wastes effort when calling lru_add_drain[_all]() on a large folio.

But although there is good reason not to batch up PMD-sized folios, we
might well benefit from batching a small number of low-order mTHPs
(though unclear how that "small number" limitation will be implemented).

So ask if folio_may_be_lru_cached() rather than !folio_test_large(), to
insulate those particular checks from future change.  Name preferred to
"folio_is_batchable" because large folios can well be put on a batch:
it's just the per-CPU LRU caches, drained much later, which need care.

Marked for stable, to counter the increase in lru_add_drain_all()s from
"mm/gup: check ref_count instead of lru before migration".

Link: https://lkml.kernel.org/r/57d2eaf8-3607-f318-e0c5-be02dce61ad0@google.com
Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
Signed-off-by: Hugh Dickins
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Cc: "Aneesh Kumar K.V"
Cc: Axel Rasmussen
Cc: Chris Li
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Johannes Weiner
Cc: John Hubbard
Cc: Keir Fraser
Cc: Konstantin Khlebnikov
Cc: Li Zhe
Cc: Matthew Wilcox (Oracle)
Cc: Peter Xu
Cc: Rik van Riel
Cc: Shivank Garg
Cc: Vlastimil Babka
Cc: Wei Xu
Cc: Will Deacon
Cc: yangge
Cc: Yuanchu Xie
Cc: Yu Zhao
Cc:
Signed-off-by: Andrew Morton
[ Resolved conflicts in mm/swap.c; left "page" parts of mm/mlock.c as is ]
Signed-off-by: Hugh Dickins
Signed-off-by: Sasha Levin
---

diff --git a/include/linux/swap.h b/include/linux/swap.h
index add47f43e568e..3eecf97dfbb8d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -392,6 +392,16 @@ void lru_cache_add(struct page *);
 void mark_page_accessed(struct page *);
 void folio_mark_accessed(struct folio *);
 
+static inline bool folio_may_be_lru_cached(struct folio *folio)
+{
+	/*
+	 * Holding PMD-sized folios in per-CPU LRU cache unbalances accounting.
+	 * Holding small numbers of low-order mTHP folios in per-CPU LRU cache
+	 * will be sensible, but nobody has implemented and tested that yet.
+	 */
+	return !folio_test_large(folio);
+}
+
 extern atomic_t lru_disable_count;
 
 static inline bool lru_cache_disabled(void)
diff --git a/mm/gup.c b/mm/gup.c
index e1f125af9c844..b02993c9a8cdf 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1990,13 +1990,13 @@ static unsigned long collect_longterm_unpinnable_pages(
 			continue;
 		}
 
-		if (drained == 0 &&
+		if (drained == 0 && folio_may_be_lru_cached(folio) &&
 		    folio_ref_count(folio) != folio_expected_ref_count(folio) + 1) {
 			lru_add_drain();
 			drained = 1;
 		}
-		if (drained == 1 &&
+		if (drained == 1 && folio_may_be_lru_cached(folio) &&
 		    folio_ref_count(folio) != folio_expected_ref_count(folio) + 1) {
 			lru_add_drain_all();
 			drained = 2;
 		}
diff --git a/mm/mlock.c b/mm/mlock.c
index 7032f6dd0ce19..3bf9e1d263da4 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -256,7 +256,7 @@ void mlock_folio(struct folio *folio)
 
 	folio_get(folio);
 	if (!pagevec_add(pvec, mlock_lru(&folio->page)) ||
-	    folio_test_large(folio) || lru_cache_disabled())
+	    !folio_may_be_lru_cached(folio) || lru_cache_disabled())
 		mlock_pagevec(pvec);
 	local_unlock(&mlock_pvec.lock);
 }
diff --git a/mm/swap.c b/mm/swap.c
index 85aa04fc48a67..e0fdf25350002 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -249,8 +249,8 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 static void folio_batch_add_and_move(struct folio_batch *fbatch,
 		struct folio *folio, move_fn_t move_fn)
 {
-	if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) &&
-	    !lru_cache_disabled())
+	if (folio_batch_add(fbatch, folio) &&
+	    folio_may_be_lru_cached(folio) && !lru_cache_disabled())
 		return;
 	folio_batch_move_lru(fbatch, move_fn);
 }
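For readers following the mm/gup.c hunk above: both drain checks now share one
condition, sketched below as a standalone helper.  This is an illustrative
sketch only, not part of the patch; the helper name need_lru_drain() is
hypothetical, while folio_may_be_lru_cached(), folio_ref_count() and
folio_expected_ref_count() are the kernel functions used in the hunk itself
(the latter coming from the companion "mm/gup: check ref_count instead of lru
before migration" backport).

#include <linux/mm.h>	/* folio_ref_count(), folio_expected_ref_count() */
#include <linux/swap.h>	/* folio_may_be_lru_cached() */

/* Hypothetical helper, for illustration only. */
static bool need_lru_drain(struct folio *folio)
{
	/*
	 * A large folio is flushed out of the per-CPU LRU caches as soon as
	 * it is batched, so draining cannot release a reference to it: only
	 * a folio that may still sit in a cache is worth a drain, and only
	 * if its refcount shows an unexplained extra reference.
	 */
	return folio_may_be_lru_cached(folio) &&
	       folio_ref_count(folio) != folio_expected_ref_count(folio) + 1;
}

Centralizing the policy in folio_may_be_lru_cached() means that if low-order
mTHPs are ever allowed to linger in the per-CPU LRU caches, only that helper
needs to change; the gup, mlock and swap call sites above can stay as they are.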