khugepaged: remove redundant index check for pmd-folios
author    Dev Jain <dev.jain@arm.com>
          Fri, 27 Feb 2026 14:35:01 +0000 (20:05 +0530)
committer Andrew Morton <akpm@linux-foundation.org>
          Sun, 5 Apr 2026 20:53:10 +0000 (13:53 -0700)
Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.

Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
iterate on the xarray, have the invariant that start <= folio->index <
start + HPAGE_PMD_NR ...  (i)

A folio is always naturally aligned in the pagecache, therefore
folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)

thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
offsets in the VMA are aligned to the order,
=> IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)

Combining (i), (ii) and (iii), the claim is proven.

Therefore, remove this check.
While at it, simplify the comments.
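
The proof above is simple enough to check exhaustively. The following standalone Python sketch (illustration only, not kernel code) encodes invariants (i)-(iii) and confirms that the only index in [start, start + HPAGE_PMD_NR) aligned to HPAGE_PMD_NR is start itself; HPAGE_PMD_NR = 512 is the common x86-64 value (2 MiB PMD / 4 KiB pages), but the argument holds for any power of two:

```python
# Illustrative check of the commit-message claim, not kernel code.
# HPAGE_PMD_NR = 512 is assumed (typical x86-64); any power of two works.
HPAGE_PMD_NR = 512

def is_aligned(x, n):
    # Mirrors the kernel's IS_ALIGNED() for power-of-two n.
    return x % n == 0

def check(start):
    # (iii): start is aligned to HPAGE_PMD_NR.
    assert is_aligned(start, HPAGE_PMD_NR)
    # (i): folio->index ranges over [start, start + HPAGE_PMD_NR).
    for index in range(start, start + HPAGE_PMD_NR):
        # (ii): a PMD-order folio is naturally aligned in the pagecache.
        if is_aligned(index, HPAGE_PMD_NR):
            # The only aligned index in the window is start itself,
            # so the dropped `folio->index == start` check was redundant.
            assert index == start

for s in range(0, 8 * HPAGE_PMD_NR, HPAGE_PMD_NR):
    check(s)
print("claim holds")
```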

Link: https://lkml.kernel.org/r/20260227143501.1488110-1-dev.jain@arm.com
Signed-off-by: Dev Jain <dev.jain@arm.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 13b0fe50dfc5652ee322bf45cd82b3792d9da037..ab97423fe837ee518929323c61a619469f4b1b73 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2023,9 +2023,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
                 * we locked the first folio, then a THP might be there already.
                 * This will be discovered on the first iteration.
                 */
-               if (folio_order(folio) == HPAGE_PMD_ORDER &&
-                   folio->index == start) {
-                       /* Maybe PMD-mapped */
+               if (folio_order(folio) == HPAGE_PMD_ORDER) {
                        result = SCAN_PTE_MAPPED_HUGEPAGE;
                        goto out_unlock;
                }
@@ -2353,15 +2351,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
                        continue;
                }
 
-               if (folio_order(folio) == HPAGE_PMD_ORDER &&
-                   folio->index == start) {
-                       /* Maybe PMD-mapped */
+               if (folio_order(folio) == HPAGE_PMD_ORDER) {
                        result = SCAN_PTE_MAPPED_HUGEPAGE;
                        /*
-                        * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
-                        * by the caller won't touch the page cache, and so
-                        * it's safe to skip LRU and refcount checks before
-                        * returning.
+                        * PMD-sized THP implies that we can only try
+                        * retracting the PTE table.
                         */
                        folio_put(folio);
                        break;