From: David Hildenbrand (Arm)
Date: Fri, 20 Mar 2026 22:13:34 +0000 (+0100)
Subject: mm/memory_hotplug: remove for_each_valid_pfn() usage
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9d80de66a04606eef625cb9141b6d1d8c970dbcb;p=thirdparty%2Fkernel%2Flinux.git

mm/memory_hotplug: remove for_each_valid_pfn() usage

When offlining memory, we know that the memory range has no holes.
Checking for valid pfns is not required.

Link: https://lkml.kernel.org/r/20260320-sparsemem_cleanups-v2-2-096addc8800d@kernel.org
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
Cc: Axel Rasmussen
Cc: Liam Howlett
Cc: Michal Hocko
Cc: Oscar Salvador
Cc: Sidhartha Kumar
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc: Wei Xu
Cc: Yuanchu Xie
Signed-off-by: Andrew Morton
---

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c427967c78bbc..504aa50e3c330 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1745,7 +1745,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 {
 	unsigned long pfn;
 
-	for_each_valid_pfn(pfn, start, end) {
+	for (pfn = start; pfn < end; pfn++) {
 		unsigned long nr_pages;
 		struct page *page;
 		struct folio *folio;
@@ -1795,7 +1795,7 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
 
-	for_each_valid_pfn(pfn, start_pfn, end_pfn) {
+	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 		struct page *page;
 
 		page = pfn_to_page(pfn);