mm/hugetlb: fix two comments related to huge_pmd_unshare()
author	David Hildenbrand (Red Hat) <david@kernel.org>
	Tue, 23 Dec 2025 21:40:35 +0000 (22:40 +0100)
committer	Andrew Morton <akpm@linux-foundation.org>
	Tue, 20 Jan 2026 17:34:26 +0000 (09:34 -0800)
Ever since we stopped using the page count to detect shared PMD page
tables, these comments have been outdated.

The only reason we have to flush the TLB early is that once we drop the
i_mmap_rwsem, the previously shared page table could get freed (and then
reallocated and used for another purpose).  So we really have to flush the
TLB before that can happen.
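
Concretely, the required ordering at each call site looks roughly like this
(a simplified sketch, not the exact kernel code; "unshared", "start" and
"end" stand in for the state the callers actually track):

	if (unshared)	/* huge_pmd_unshare() dropped a shared PMD page table */
		flush_hugetlb_tlb_range(vma, start, end);
	/* only after the flush may the page table page get freed */
	i_mmap_unlock_write(vma->vm_file->f_mapping);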

So let's simplify the comments a bit.

The "If we unshared PMDs, the TLB flush was not recorded in mmu_gather."
part introduced as in commit a4a118f2eead ("hugetlbfs: flush TLBs
correctly after huge_pmd_unshare") was confusing: sure it is recorded in
the mmu_gather, otherwise tlb_flush_mmu_tlbonly() wouldn't do anything.
So let's drop that comment while at it as well.

We'll centralize these comments in a single helper as we rework the code
next.
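
Purely as an illustration of that direction (the helper name and signature
below are made up here, not taken from the follow-up patch), such a helper
could carry the comment in a single place:

	/*
	 * A page table unshared via huge_pmd_unshare() is only kept alive by
	 * i_mmap_rwsem, so flush the TLB before that lock can be released.
	 */
	static void hugetlb_flush_unshared_range(struct vm_area_struct *vma,
						 unsigned long start,
						 unsigned long end)
	{
		flush_hugetlb_tlb_range(vma, start, end);
	}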

Link: https://lkml.kernel.org/r/20251223214037.580860-3-david@kernel.org
Fixes: 59d9094df3d7 ("mm: hugetlb: independent PMD page table shared count")
Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Tested-by: Laurence Oberman <loberman@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Cc: Liu Shixin <liushixin2@huawei.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: "Uschakow, Stanislav" <suschako@amazon.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/hugetlb.c

index e0ab140205134489c448631109951698f974e781..67131aa24d7740db27cd5ec37533c7ab27372cf2 100644 (file)
@@ -5320,17 +5320,10 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
        tlb_end_vma(tlb, vma);
 
        /*
-        * If we unshared PMDs, the TLB flush was not recorded in mmu_gather. We
-        * could defer the flush until now, since by holding i_mmap_rwsem we
-        * guaranteed that the last reference would not be dropped. But we must
-        * do the flushing before we return, as otherwise i_mmap_rwsem will be
-        * dropped and the last reference to the shared PMDs page might be
-        * dropped as well.
-        *
-        * In theory we could defer the freeing of the PMD pages as well, but
-        * huge_pmd_unshare() relies on the exact page_count for the PMD page to
-        * detect sharing, so we cannot defer the release of the page either.
-        * Instead, do flush now.
+        * There is nothing protecting a previously-shared page table that we
+        * unshared through huge_pmd_unshare() from getting freed after we
+        * release i_mmap_rwsem, so flush the TLB now. If huge_pmd_unshare()
+        * succeeded, flush the range corresponding to the pud.
         */
        if (force_flush)
                tlb_flush_mmu_tlbonly(tlb);
@@ -6552,11 +6545,10 @@ next:
                cond_resched();
        }
        /*
-        * Must flush TLB before releasing i_mmap_rwsem: x86's huge_pmd_unshare
-        * may have cleared our pud entry and done put_page on the page table:
-        * once we release i_mmap_rwsem, another task can do the final put_page
-        * and that page table be reused and filled with junk.  If we actually
-        * did unshare a page of pmds, flush the range corresponding to the pud.
+        * There is nothing protecting a previously-shared page table that we
+        * unshared through huge_pmd_unshare() from getting freed after we
+        * release i_mmap_rwsem, so flush the TLB now. If huge_pmd_unshare()
+        * succeeded, flush the range corresponding to the pud.
         */
        if (shared_pmd)
                flush_hugetlb_tlb_range(vma, range.start, range.end);