x86/mm: Fix flush_tlb_range() when used for zapping normal PMDs
Author:     Jann Horn <jannh@google.com>
AuthorDate: Fri, 3 Jan 2025 18:39:38 +0000 (19:39 +0100)
Commit:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CommitDate: Thu, 10 Apr 2025 12:37:42 +0000 (14:37 +0200)
commit 3ef938c3503563bfc2ac15083557f880d29c2e64 upstream.

On the following path, flush_tlb_range() can be used for zapping normal
PMD entries (PMD entries that point to page tables) together with the PTE
entries in the pointed-to page table:

    collapse_pte_mapped_thp
      pmdp_collapse_flush
        flush_tlb_range
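
For context, the generic pmdp_collapse_flush() looks roughly like this
(paraphrased and simplified from mm/pgtable-generic.c; details vary by
kernel version), which shows why this path both clears a non-leaf PMD
and relies on flush_tlb_range():

    pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
                              unsigned long address, pmd_t *pmdp)
    {
            /* Clear the PMD entry that points to the PTE page table. */
            pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);

            /*
             * This flush must invalidate both the non-leaf PMD entry and
             * the PTEs it pointed to, so a last-level-only invalidation
             * would be insufficient here.
             */
            flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
            return pmd;
    }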

The arm64 version of flush_tlb_range() has a comment noting that it can be
used for page table removal, and accordingly it does not use any last-level
invalidation optimizations. Fix the X86 version by making it behave the
same way.
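
For comparison, the arm64 version referenced above looks roughly like this
(paraphrased from arch/arm64/include/asm/tlbflush.h; the exact
__flush_tlb_range() argument list varies across kernel versions):

    static inline void flush_tlb_range(struct vm_area_struct *vma,
                                       unsigned long start, unsigned long end)
    {
            /*
             * We cannot use leaf-only invalidation here, since we may be
             * invalidating table entries as part of collapsing hugepages
             * or moving page tables.
             */
            __flush_tlb_range(vma, start, end, PAGE_SIZE, false /* last_level */);
    }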

Currently, X86 only uses this information for the following two purposes,
which I think means the issue doesn't have much impact:

 - In native_flush_tlb_multi() for checking whether lazy TLB CPUs need to
   be IPI'd to avoid issues with speculative page table walks (see the
   sketch after this list).
 - In Hyper-V TLB paravirtualization, again for lazy TLB stuff.
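
To illustrate the first point, the relevant logic in native_flush_tlb_multi()
is roughly the following (simplified from arch/x86/mm/tlb.c; the name of the
lazy-TLB predicate varies across kernel versions):

    if (info->freed_tables) {
            /*
             * Page tables were freed: every CPU in the mask must be
             * IPI'd, including lazy-TLB CPUs, which could otherwise
             * speculatively walk the freed page tables.
             */
            on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
    } else {
            /*
             * Only leaf entries changed: lazy-TLB CPUs may skip the
             * IPI and flush when they switch back to the mm.
             */
            on_each_cpu_cond_mask(should_flush_tlb, flush_tlb_func,
                                  (void *)info, 1, cpumask);
    }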

The patch "x86/mm: only invalidate final translations with INVLPGB", which
is currently under review (see
<https://lore.kernel.org/all/20241230175550.4046587-13-riel@surriel.com/>),
would likely make the impact of this issue significantly worse.

Fixes: 016c4d92cd16 ("x86/mm/tlb: Add freed_tables argument to flush_tlb_mm_range")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20250103-x86-collapse-flush-fix-v1-1-3c521856cfa6@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/x86/include/asm/tlbflush.h

index 5d61adc6e892eaf718b6dab03dfbbae355c3ad08..a496d9dc75d94bd61b222b7c2c92c5e87d7b161f 100644
@@ -242,7 +242,7 @@ void flush_tlb_multi(const struct cpumask *cpumask,
        flush_tlb_mm_range((vma)->vm_mm, start, end,                    \
                           ((vma)->vm_flags & VM_HUGETLB)               \
                                ? huge_page_shift(hstate_vma(vma))      \
-                               : PAGE_SHIFT, false)
+                               : PAGE_SHIFT, true)
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
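
For clarity, the literal flipped from false to true above is the final
freed_tables argument of flush_tlb_mm_range(), whose declaration is cut
off by the hunk boundary. As introduced by the commit named in the Fixes
tag, it reads:

    extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
                                   unsigned long end, unsigned int stride_shift,
                                   bool freed_tables);

Passing true tells the flush machinery that page tables may have been freed,
so leaf-only shortcuts must be avoided and lazy-TLB CPUs must be included.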