From: Zi Yan <ziy@nvidia.com>
Date: Fri, 18 Jul 2025 18:37:20 +0000 (-0400)
Subject: mm/huge_memory: refactor after-split (page) cache code
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=fde47708f9bc7f7babe4f48284f19d92faa06891;p=thirdparty%2Fkernel%2Flinux.git

mm/huge_memory: refactor after-split (page) cache code

Smatch/coverity checkers report NULL mapping referencing issues[1][2][3]
every time the code is modified, because they do not understand that
mapping cannot be NULL when a folio is in the page cache in this code.
Refactor the code to make it explicit.

Remove "end = -1" for anonymous folios, since after the refactoring end is
no longer used by the anonymous folio handling code.

No functional change is intended.

Link: https://lkml.kernel.org/r/20250718023000.4044406-7-ziy@nvidia.com
Link: https://lore.kernel.org/linux-mm/2afe3d59-aca5-40f7-82a3-a6d976fb0f4f@stanley.mountain/ [1]
Link: https://lore.kernel.org/oe-kbuild/64b54034-f311-4e7d-b935-c16775dbb642@suswa.mountain/ [2]
Link: https://lore.kernel.org/linux-mm/20250716145804.4836-1-antonio@mandelbit.com/ [3]
Link: https://lkml.kernel.org/r/20250718183720.4054515-7-ziy@nvidia.com
Signed-off-by: Zi Yan <ziy@nvidia.com>
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Reviewed-by: Lorenzo Stoakes
Cc: Balbir Singh
Cc: Baolin Wang
Cc: Barry Song
Cc: Dan Carpenter
Cc: Dev Jain
Cc: Hugh Dickins
Cc: Kirill A. Shutemov
Cc: Liam Howlett
Cc: Mariano Pache
Cc: Mathew Brost
Cc: Ryan Roberts
Signed-off-by: Andrew Morton
---

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 19e69704fcff2..9c38a95e9f091 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3640,7 +3640,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 			ret = -EBUSY;
 			goto out;
 		}
-		end = -1;
 		mapping = NULL;
 		anon_vma_lock_write(anon_vma);
 	} else {
@@ -3793,6 +3792,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	 */
 	for (new_folio = folio_next(folio); new_folio != end_folio;
 	     new_folio = next) {
+		unsigned long nr_pages = folio_nr_pages(new_folio);
+
 		next = folio_next(new_folio);
 
 		expected_refs = folio_expected_ref_count(new_folio) + 1;
@@ -3800,25 +3801,36 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 		lru_add_split_folio(folio, new_folio, lruvec, list);
 
-		/* Some pages can be beyond EOF: drop them from cache */
-		if (new_folio->index >= end) {
-			if (shmem_mapping(mapping))
-				nr_shmem_dropped += folio_nr_pages(new_folio);
-			else if (folio_test_clear_dirty(new_folio))
-				folio_account_cleaned(
-					new_folio,
-					inode_to_wb(mapping->host));
-			__filemap_remove_folio(new_folio, NULL);
-			folio_put_refs(new_folio,
-					folio_nr_pages(new_folio));
-		} else if (mapping) {
-			__xa_store(&mapping->i_pages, new_folio->index,
-					new_folio, 0);
-		} else if (swap_cache) {
+		/*
+		 * Anonymous folio with swap cache.
+		 * NOTE: shmem in swap cache is not supported yet.
+		 */
+		if (swap_cache) {
 			__xa_store(&swap_cache->i_pages,
 					swap_cache_index(new_folio->swap),
 					new_folio, 0);
+			continue;
+		}
+
+		/* Anonymous folio without swap cache */
+		if (!mapping)
+			continue;
+
+		/* Add the new folio to the page cache. */
+		if (new_folio->index < end) {
+			__xa_store(&mapping->i_pages, new_folio->index,
+					new_folio, 0);
+			continue;
 		}
+
+		/* Drop folio beyond EOF: ->index >= end */
+		if (shmem_mapping(mapping))
+			nr_shmem_dropped += nr_pages;
+		else if (folio_test_clear_dirty(new_folio))
+			folio_account_cleaned(
+				new_folio, inode_to_wb(mapping->host));
+		__filemap_remove_folio(new_folio, NULL);
+		folio_put_refs(new_folio, nr_pages);
 	}
 	/*
 	 * Unfreeze @folio only after all page cache entries, which
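
For illustration only: the last hunk turns a nested if/else-if chain into a
sequence of early-exit checks, so the page-cache branches are only reachable
with a non-NULL mapping. The stand-alone sketch below mirrors just that
dispatch order; struct folio, struct address_space, handle_after_split() and
the printf bodies are hypothetical stand-ins, not kernel code.

#include <stdio.h>

/*
 * Hypothetical stand-ins: only the control flow in handle_after_split()
 * mirrors the refactored loop body in __folio_split(); everything else
 * here is mocked so the example builds and runs on its own.
 */
struct folio { unsigned long index; };
struct address_space { const char *name; };

static void handle_after_split(struct folio *new_folio,
			       struct address_space *mapping,
			       struct address_space *swap_cache,
			       unsigned long end)
{
	/* Anonymous folio with swap cache: update the swap cache slot. */
	if (swap_cache) {
		printf("store index %lu in swap cache\n", new_folio->index);
		return;
	}

	/* Anonymous folio without swap cache: nothing left to update. */
	if (!mapping)
		return;

	/* In range: store into the page cache (mapping is non-NULL here). */
	if (new_folio->index < end) {
		printf("store index %lu in page cache %s\n",
		       new_folio->index, mapping->name);
		return;
	}

	/* Beyond EOF (index >= end): drop it from the page cache. */
	printf("drop index %lu (>= end %lu) from %s\n",
	       new_folio->index, end, mapping->name);
}

int main(void)
{
	struct address_space file_mapping = { "file" };
	struct address_space swap = { "swap" };
	struct folio in_range = { 2 };
	struct folio beyond_eof = { 9 };
	struct folio anon = { 0 };

	handle_after_split(&in_range, &file_mapping, NULL, 4);
	handle_after_split(&beyond_eof, &file_mapping, NULL, 4);
	handle_after_split(&anon, NULL, NULL, 0);
	handle_after_split(&anon, NULL, &swap, 0);
	return 0;
}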