From: Greg Kroah-Hartman
Date: Thu, 15 Jan 2026 11:46:13 +0000 (+0100)
Subject: drop mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch
X-Git-Tag: v6.6.121~22
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=cf441ae09057fab2ab7e49c8c2c65ac101c0d5b4;p=thirdparty%2Fkernel%2Fstable-queue.git

drop mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch
---

diff --git a/queue-5.10/mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch b/queue-5.10/mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch
deleted file mode 100644
index 1ef8212529..0000000000
--- a/queue-5.10/mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch
+++ /dev/null
@@ -1,245 +0,0 @@
-From stable+bounces-206085-greg=kroah.com@vger.kernel.org Wed Jan 7 04:23:10 2026
-From: Harry Yoo
-Date: Wed, 7 Jan 2026 12:21:21 +0900
-Subject: mm/mprotect: delete pmd_none_or_clear_bad_unless_trans_huge()
-To: stable@vger.kernel.org
-Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, baohua@kernel.org, baolin.wang@linux.alibaba.com, david@kernel.org, dev.jain@arm.com, hughd@google.com, jane.chu@oracle.com, jannh@google.com, kas@kernel.org, lance.yang@linux.dev, linux-mm@kvack.org, lorenzo.stoakes@oracle.com, npache@redhat.com, pfalcato@suse.de, ryan.roberts@arm.com, vbabka@suse.cz, ziy@nvidia.com, "Alistair Popple" , "Anshuman Khandual" , "Axel Rasmussen" , "Christophe Leroy" , "Christoph Hellwig" , "David Hildenbrand" , "Huang, Ying" , "Ira Weiny" , "Jason Gunthorpe" , "Kirill A . Shutemov" , "Lorenzo Stoakes" , "Matthew Wilcox" , "Mel Gorman" , "Miaohe Lin" , "Mike Kravetz" , "Mike Rapoport" , "Minchan Kim" , "Naoya Horiguchi" , "Pavel Tatashin" , "Peter Xu" , "Peter Zijlstra" , "Qi Zheng" , "Ralph Campbell" , "SeongJae Park" , "Song Liu" , "Steven Price" , "Suren Baghdasaryan" , "Thomas Hellström" , "Will Deacon" , "Yang Shi" , "Yu Zhao" , "Zack Rusin" , "Harry Yoo"
-Message-ID: <20260107032121.587629-3-harry.yoo@oracle.com>
-
-From: Hugh Dickins
-
-commit 670ddd8cdcbd1d07a4571266ae3517f821728c3a upstream.
-
-change_pmd_range() had special pmd_none_or_clear_bad_unless_trans_huge(),
-required to avoid "bad" choices when setting automatic NUMA hinting under
-mmap_read_lock(); but most of that is already covered in pte_offset_map()
-now. change_pmd_range() just wants a pmd_none() check before wasting time
-on MMU notifiers, then checks on the read-once _pmd value to work out
-what's needed for huge cases. If change_pte_range() returns -EAGAIN to
-retry if pte_offset_map_lock() fails, nothing more special is needed.
-
-Link: https://lkml.kernel.org/r/725a42a9-91e9-c868-925-e3a5fd40bb4f@google.com
-Signed-off-by: Hugh Dickins
-Cc: Alistair Popple
-Cc: Anshuman Khandual
-Cc: Axel Rasmussen
-Cc: Christophe Leroy
-Cc: Christoph Hellwig
-Cc: David Hildenbrand
-Cc: "Huang, Ying"
-Cc: Ira Weiny
-Cc: Jason Gunthorpe
-Cc: Kirill A. Shutemov
-Cc: Lorenzo Stoakes
-Cc: Matthew Wilcox
-Cc: Mel Gorman
-Cc: Miaohe Lin
-Cc: Mike Kravetz
-Cc: Mike Rapoport (IBM)
-Cc: Minchan Kim
-Cc: Naoya Horiguchi
-Cc: Pavel Tatashin
-Cc: Peter Xu
-Cc: Peter Zijlstra
-Cc: Qi Zheng
-Cc: Ralph Campbell
-Cc: Ryan Roberts
-Cc: SeongJae Park
-Cc: Song Liu
-Cc: Steven Price
-Cc: Suren Baghdasaryan
-Cc: Thomas Hellström
-Cc: Will Deacon
-Cc: Yang Shi
-Cc: Yu Zhao
-Cc: Zack Rusin
-Signed-off-by: Andrew Morton
-[ Background: It was reported that a bad pmd is seen when automatic NUMA
-  balancing is marking page table entries as prot_numa:
-
-  [2437548.196018] mm/pgtable-generic.c:50: bad pmd 00000000af22fc02(dffffffe71fbfe02)
-  [2437548.235022] Call Trace:
-  [2437548.238234]  <TASK>
-  [2437548.241060]  dump_stack_lvl+0x46/0x61
-  [2437548.245689]  panic+0x106/0x2e5
-  [2437548.249497]  pmd_clear_bad+0x3c/0x3c
-  [2437548.253967]  change_pmd_range.isra.0+0x34d/0x3a7
-  [2437548.259537]  change_p4d_range+0x156/0x20e
-  [2437548.264392]  change_protection_range+0x116/0x1a9
-  [2437548.269976]  change_prot_numa+0x15/0x37
-  [2437548.274774]  task_numa_work+0x1b8/0x302
-  [2437548.279512]  task_work_run+0x62/0x95
-  [2437548.283882]  exit_to_user_mode_loop+0x1a4/0x1a9
-  [2437548.289277]  exit_to_user_mode_prepare+0xf4/0xfc
-  [2437548.294751]  ? sysvec_apic_timer_interrupt+0x34/0x81
-  [2437548.300677]  irqentry_exit_to_user_mode+0x5/0x25
-  [2437548.306153]  asm_sysvec_apic_timer_interrupt+0x16/0x1b
-
-  This is due to a race condition between change_prot_numa() and
-  THP migration because the kernel doesn't check is_swap_pmd() and
-  pmd_trans_huge() atomically:
-
-  change_prot_numa()                     THP migration
-  ======================================================================
-
-  change_pmd_range()
-  -> is_swap_pmd() returns false,
-     meaning it's not a PMD migration
-     entry.
-
-                                         do_huge_pmd_numa_page()
-                                         -> migrate_misplaced_page() sets
-                                            migration entries for the THP.
-
-  change_pmd_range()
-  -> pmd_none_or_clear_bad_unless_trans_huge()
-  -> pmd_none() and pmd_trans_huge() return false
-
-  pmd_none_or_clear_bad_unless_trans_huge()
-  -> pmd_bad() returns true for the migration entry!
-
-  The upstream commit 670ddd8cdcbd ("mm/mprotect: delete
-  pmd_none_or_clear_bad_unless_trans_huge()") closes this race condition
-  by checking is_swap_pmd() and pmd_trans_huge() atomically.
-
-  Backporting note:
-  Unlike in mainline, pte_offset_map_lock() does not check whether the
-  pmd entry is a migration entry or a hugepage; it acquires the PTL
-  unconditionally instead of returning failure. Therefore, it is
-  necessary to keep the !is_swap_pmd() && !pmd_trans_huge() &&
-  !pmd_devmap() check before acquiring the PTL.
-
-  After acquiring the lock, open-code the semantics of
-  pte_offset_map_lock() in the mainline kernel; change_pte_range()
-  fails if the pmd value has changed. This requires adding one more
-  parameter (to pass the pmd value that is read before calling the
-  function) to change_pte_range(). ]
-
-Signed-off-by: Harry Yoo
-Acked-by: David Hildenbrand (Red Hat)
-Signed-off-by: Greg Kroah-Hartman
----
- mm/mprotect.c |   75 ++++++++++++++++++++++++++++++++--------------------------
- 1 file changed, 42 insertions(+), 33 deletions(-)
-
---- a/mm/mprotect.c
-+++ b/mm/mprotect.c
-@@ -36,10 +36,11 @@
- #include "internal.h"
- 
- static long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
--		unsigned long addr, unsigned long end, pgprot_t newprot,
--		unsigned long cp_flags)
-+		pmd_t pmd_old, unsigned long addr, unsigned long end,
-+		pgprot_t newprot, unsigned long cp_flags)
- {
- 	pte_t *pte, oldpte;
-+	pmd_t _pmd;
- 	spinlock_t *ptl;
- 	long pages = 0;
- 	int target_node = NUMA_NO_NODE;
-@@ -48,21 +49,15 @@ static long change_pte_range(struct vm_a
- 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
- 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
- 
--	/*
--	 * Can be called with only the mmap_lock for reading by
--	 * prot_numa so we must check the pmd isn't constantly
--	 * changing from under us from pmd_none to pmd_trans_huge
--	 * and/or the other way around.
--	 */
--	if (pmd_trans_unstable(pmd))
--		return 0;
--
--	/*
--	 * The pmd points to a regular pte so the pmd can't change
--	 * from under us even if the mmap_lock is only hold for
--	 * reading.
--	 */
- 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
-+	/* Make sure pmd didn't change after acquiring ptl */
-+	_pmd = pmd_read_atomic(pmd);
-+	/* See pmd_none_or_trans_huge_or_clear_bad for info on barrier */
-+	barrier();
-+	if (!pmd_same(pmd_old, _pmd)) {
-+		pte_unmap_unlock(pte, ptl);
-+		return -EAGAIN;
-+	}
- 
- 	/* Get target node for single threaded private VMAs */
- 	if (prot_numa && !(vma->vm_flags & VM_SHARED) &&
-@@ -223,21 +218,33 @@ static inline long change_pmd_range(stru
- 
- 	pmd = pmd_offset(pud, addr);
- 	do {
--		long this_pages;
--
-+		long ret;
-+		pmd_t _pmd;
-+again:
- 		next = pmd_addr_end(addr, end);
-+		_pmd = pmd_read_atomic(pmd);
-+		/* See pmd_none_or_trans_huge_or_clear_bad for info on barrier */
-+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-+		barrier();
-+#endif
- 
- 		/*
- 		 * Automatic NUMA balancing walks the tables with mmap_lock
- 		 * held for read. It's possible a parallel update to occur
--		 * between pmd_trans_huge() and a pmd_none_or_clear_bad()
--		 * check leading to a false positive and clearing.
--		 * Hence, it's necessary to atomically read the PMD value
--		 * for all the checks.
-+		 * between pmd_trans_huge(), is_swap_pmd(), and
-+		 * a pmd_none_or_clear_bad() check leading to a false positive
-+		 * and clearing. Hence, it's necessary to atomically read
-+		 * the PMD value for all the checks.
- 		 */
--		if (!is_swap_pmd(*pmd) && !pmd_devmap(*pmd) &&
--		     pmd_none_or_clear_bad_unless_trans_huge(pmd))
--			goto next;
-+		if (!is_swap_pmd(_pmd) && !pmd_devmap(_pmd) && !pmd_trans_huge(_pmd)) {
-+			if (pmd_none(_pmd))
-+				goto next;
-+
-+			if (pmd_bad(_pmd)) {
-+				pmd_clear_bad(pmd);
-+				goto next;
-+			}
-+		}
- 
- 		/* invoke the mmu notifier if the pmd is populated */
- 		if (!range.start) {
-@@ -247,15 +254,15 @@ static inline long change_pmd_range(stru
- 			mmu_notifier_invalidate_range_start(&range);
- 		}
- 
--		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
-+		if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd) || pmd_devmap(_pmd)) {
- 			if (next - addr != HPAGE_PMD_SIZE) {
- 				__split_huge_pmd(vma, pmd, addr, false, NULL);
- 			} else {
--				int nr_ptes = change_huge_pmd(vma, pmd, addr,
--					      newprot, cp_flags);
-+				ret = change_huge_pmd(vma, pmd, addr, newprot,
-+				      cp_flags);
- 
--				if (nr_ptes) {
--					if (nr_ptes == HPAGE_PMD_NR) {
-+				if (ret) {
-+					if (ret == HPAGE_PMD_NR) {
- 						pages += HPAGE_PMD_NR;
- 						nr_huge_updates++;
- 					}
-@@ -266,9 +273,11 @@ static inline long change_pmd_range(stru
- 				}
- 				/* fall through, the trans huge pmd just split */
- 			}
--		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
--					      cp_flags);
--		pages += this_pages;
-+		ret = change_pte_range(vma, pmd, _pmd, addr, next, newprot,
-+				       cp_flags);
-+		if (ret < 0)
-+			goto again;
-+		pages += ret;
- next:
- 		cond_resched();
- 	} while (pmd++, addr = next, addr != end);
diff --git a/queue-5.10/series b/queue-5.10/series
index 7f77bb5df4..b28a2a56aa 100644
--- a/queue-5.10/series
+++ b/queue-5.10/series
@@ -394,7 +394,6 @@ drm-gma500-remove-unused-helper-psb_fbdev_fb_setcolreg.patch
 wifi-mac80211-discard-beacon-frames-to-non-broadcast-address.patch
 nfsd-nfsv4-file-creation-neglects-setting-acl.patch
 mm-mprotect-use-long-for-page-accountings-and-retval.patch
-mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch
 scsi-iscsi-move-pool-freeing.patch
 scsi-iscsi_tcp-fix-uaf-during-logout-when-accessing-the-shost-ipaddress.patch
 cpufreq-scmi-fix-null-ptr-deref-in-scmi_cpufreq_get_rate.patch
diff --git a/queue-5.15/mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch b/queue-5.15/mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch
deleted file mode 100644
index 7db44a8fcd..0000000000
--- a/queue-5.15/mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch
+++ /dev/null
@@ -1,279 +0,0 @@
-From stable+bounces-205078-greg=kroah.com@vger.kernel.org Tue Jan 6 12:59:03 2026
-From: Harry Yoo
-Date: Tue, 6 Jan 2026 20:50:36 +0900
-Subject: mm/mprotect: delete pmd_none_or_clear_bad_unless_trans_huge()
-To: stable@vger.kernel.org
-Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, baohua@kernel.org, baolin.wang@linux.alibaba.com, david@kernel.org, dev.jain@arm.com, hughd@google.com, jane.chu@oracle.com, jannh@google.com, kas@kernel.org, lance.yang@linux.dev, linux-mm@kvack.org, lorenzo.stoakes@oracle.com, npache@redhat.com, pfalcato@suse.de, ryan.roberts@arm.com, vbabka@suse.cz, ziy@nvidia.com, "Alistair Popple" , "Anshuman Khandual" , "Axel Rasmussen" , "Christophe Leroy" , "Christoph Hellwig" , "David Hildenbrand" , "Huang, Ying" , "Ira Weiny" , "Jason Gunthorpe" , "Kirill A . Shutemov" , "Lorenzo Stoakes" , "Matthew Wilcox" , "Mel Gorman" , "Miaohe Lin" , "Mike Kravetz" , "Mike Rapoport" , "Minchan Kim" , "Naoya Horiguchi" , "Pavel Tatashin" , "Peter Xu" , "Peter Zijlstra" , "Qi Zheng" , "Ralph Campbell" , "SeongJae Park" , "Song Liu" , "Steven Price" , "Suren Baghdasaryan" , "Thomas Hellström" , "Will Deacon" , "Yang Shi" , "Yu Zhao" , "Zack Rusin" , "Harry Yoo"
-Message-ID: <20260106115036.86042-3-harry.yoo@oracle.com>
-
-From: Hugh Dickins
-
-commit 670ddd8cdcbd1d07a4571266ae3517f821728c3a upstream.
-
-change_pmd_range() had special pmd_none_or_clear_bad_unless_trans_huge(),
-required to avoid "bad" choices when setting automatic NUMA hinting under
-mmap_read_lock(); but most of that is already covered in pte_offset_map()
-now. change_pmd_range() just wants a pmd_none() check before wasting time
-on MMU notifiers, then checks on the read-once _pmd value to work out
-what's needed for huge cases. If change_pte_range() returns -EAGAIN to
-retry if pte_offset_map_lock() fails, nothing more special is needed.
-
-Link: https://lkml.kernel.org/r/725a42a9-91e9-c868-925-e3a5fd40bb4f@google.com
-Signed-off-by: Hugh Dickins
-Cc: Alistair Popple
-Cc: Anshuman Khandual
-Cc: Axel Rasmussen
-Cc: Christophe Leroy
-Cc: Christoph Hellwig
-Cc: David Hildenbrand
-Cc: "Huang, Ying"
-Cc: Ira Weiny
-Cc: Jason Gunthorpe
-Cc: Kirill A. Shutemov
-Cc: Lorenzo Stoakes
-Cc: Matthew Wilcox
-Cc: Mel Gorman
-Cc: Miaohe Lin
-Cc: Mike Kravetz
-Cc: Mike Rapoport (IBM)
-Cc: Minchan Kim
-Cc: Naoya Horiguchi
-Cc: Pavel Tatashin
-Cc: Peter Xu
-Cc: Peter Zijlstra
-Cc: Qi Zheng
-Cc: Ralph Campbell
-Cc: Ryan Roberts
-Cc: SeongJae Park
-Cc: Song Liu
-Cc: Steven Price
-Cc: Suren Baghdasaryan
-Cc: Thomas Hellström
-Cc: Will Deacon
-Cc: Yang Shi
-Cc: Yu Zhao
-Cc: Zack Rusin
-Signed-off-by: Andrew Morton
-[ Background:
-
-  It was reported that a bad pmd is seen when automatic NUMA balancing
-  is marking page table entries as prot_numa:
-
-  [2437548.196018] mm/pgtable-generic.c:50: bad pmd 00000000af22fc02(dffffffe71fbfe02)
-  [2437548.235022] Call Trace:
-  [2437548.238234]  <TASK>
-  [2437548.241060]  dump_stack_lvl+0x46/0x61
-  [2437548.245689]  panic+0x106/0x2e5
-  [2437548.249497]  pmd_clear_bad+0x3c/0x3c
-  [2437548.253967]  change_pmd_range.isra.0+0x34d/0x3a7
-  [2437548.259537]  change_p4d_range+0x156/0x20e
-  [2437548.264392]  change_protection_range+0x116/0x1a9
-  [2437548.269976]  change_prot_numa+0x15/0x37
-  [2437548.274774]  task_numa_work+0x1b8/0x302
-  [2437548.279512]  task_work_run+0x62/0x95
-  [2437548.283882]  exit_to_user_mode_loop+0x1a4/0x1a9
-  [2437548.289277]  exit_to_user_mode_prepare+0xf4/0xfc
-  [2437548.294751]  ? sysvec_apic_timer_interrupt+0x34/0x81
-  [2437548.300677]  irqentry_exit_to_user_mode+0x5/0x25
-  [2437548.306153]  asm_sysvec_apic_timer_interrupt+0x16/0x1b
-
-  This is due to a race condition between change_prot_numa() and
-  THP migration because the kernel doesn't check is_swap_pmd() and
-  pmd_trans_huge() atomically:
-
-  change_prot_numa()                     THP migration
-  ======================================================================
-
-  change_pmd_range()
-  -> is_swap_pmd() returns false,
-     meaning it's not a PMD migration
-     entry.
-
-                                         do_huge_pmd_numa_page()
-                                         -> migrate_misplaced_page() sets
-                                            migration entries for the THP.
-
-  change_pmd_range()
-  -> pmd_none_or_clear_bad_unless_trans_huge()
-  -> pmd_none() and pmd_trans_huge() return false
-
-  pmd_none_or_clear_bad_unless_trans_huge()
-  -> pmd_bad() returns true for the migration entry!
-
-  The upstream commit 670ddd8cdcbd ("mm/mprotect: delete
-  pmd_none_or_clear_bad_unless_trans_huge()") closes this race condition
-  by checking is_swap_pmd() and pmd_trans_huge() atomically.
-
-  Backporting note:
-  Unlike in mainline, pte_offset_map_lock() does not check whether the
-  pmd entry is a migration entry or a hugepage; it acquires the PTL
-  unconditionally instead of returning failure. Therefore, it is
-  necessary to keep the !is_swap_pmd() && !pmd_trans_huge() &&
-  !pmd_devmap() check before acquiring the PTL.
-
-  After acquiring it, open-code the mainline semantics of
-  pte_offset_map_lock() so that change_pte_range() fails if the pmd
-  value has changed (under the PTL). This requires adding one more
-  parameter (for passing the pmd value that is read before calling the
-  function) to change_pte_range(). ]
-
-Signed-off-by: Harry Yoo
-Acked-by: David Hildenbrand (Red Hat)
-Signed-off-by: Greg Kroah-Hartman
----
- mm/mprotect.c |   99 ++++++++++++++++++++++++----------------------------------
- 1 file changed, 42 insertions(+), 57 deletions(-)
-
---- a/mm/mprotect.c
-+++ b/mm/mprotect.c
-@@ -36,10 +36,11 @@
- #include "internal.h"
- 
- static long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
--		unsigned long addr, unsigned long end, pgprot_t newprot,
--		unsigned long cp_flags)
-+		pmd_t pmd_old, unsigned long addr, unsigned long end,
-+		pgprot_t newprot, unsigned long cp_flags)
- {
- 	pte_t *pte, oldpte;
-+	pmd_t _pmd;
- 	spinlock_t *ptl;
- 	long pages = 0;
- 	int target_node = NUMA_NO_NODE;
-@@ -48,21 +49,16 @@ static long change_pte_range(struct vm_a
- 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
- 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
- 
--	/*
--	 * Can be called with only the mmap_lock for reading by
--	 * prot_numa so we must check the pmd isn't constantly
--	 * changing from under us from pmd_none to pmd_trans_huge
--	 * and/or the other way around.
--	 */
--	if (pmd_trans_unstable(pmd))
--		return 0;
--
--	/*
--	 * The pmd points to a regular pte so the pmd can't change
--	 * from under us even if the mmap_lock is only hold for
--	 * reading.
--	 */
-+
- 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
-+	/* Make sure pmd didn't change after acquiring ptl */
-+	_pmd = pmd_read_atomic(pmd);
-+	/* See pmd_none_or_trans_huge_or_clear_bad for info on barrier */
-+	barrier();
-+	if (!pmd_same(pmd_old, _pmd)) {
-+		pte_unmap_unlock(pte, ptl);
-+		return -EAGAIN;
-+	}
- 
- 	/* Get target node for single threaded private VMAs */
- 	if (prot_numa && !(vma->vm_flags & VM_SHARED) &&
-@@ -194,31 +190,6 @@ static long change_pte_range(struct vm_a
- 	return pages;
- }
- 
--/*
-- * Used when setting automatic NUMA hinting protection where it is
-- * critical that a numa hinting PMD is not confused with a bad PMD.
-- */
--static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
--{
--	pmd_t pmdval = pmd_read_atomic(pmd);
--
--	/* See pmd_none_or_trans_huge_or_clear_bad for info on barrier */
--#ifdef CONFIG_TRANSPARENT_HUGEPAGE
--	barrier();
--#endif
--
--	if (pmd_none(pmdval))
--		return 1;
--	if (pmd_trans_huge(pmdval))
--		return 0;
--	if (unlikely(pmd_bad(pmdval))) {
--		pmd_clear_bad(pmd);
--		return 1;
--	}
--
--	return 0;
--}
--
- static inline long change_pmd_range(struct vm_area_struct *vma,
- 		pud_t *pud, unsigned long addr, unsigned long end,
- 		pgprot_t newprot, unsigned long cp_flags)
-@@ -233,21 +204,33 @@ static inline long change_pmd_range(stru
- 
- 	pmd = pmd_offset(pud, addr);
- 	do {
--		long this_pages;
--
-+		long ret;
-+		pmd_t _pmd;
-+again:
- 		next = pmd_addr_end(addr, end);
-+		_pmd = pmd_read_atomic(pmd);
-+		/* See pmd_none_or_trans_huge_or_clear_bad for info on barrier */
-+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-+		barrier();
-+#endif
- 
- 		/*
- 		 * Automatic NUMA balancing walks the tables with mmap_lock
- 		 * held for read. It's possible a parallel update to occur
--		 * between pmd_trans_huge() and a pmd_none_or_clear_bad()
--		 * check leading to a false positive and clearing.
--		 * Hence, it's necessary to atomically read the PMD value
--		 * for all the checks.
-+		 * between pmd_trans_huge(), is_swap_pmd(), and
-+		 * a pmd_none_or_clear_bad() check leading to a false positive
-+		 * and clearing. Hence, it's necessary to atomically read
-+		 * the PMD value for all the checks.
- 		 */
--		if (!is_swap_pmd(*pmd) && !pmd_devmap(*pmd) &&
--		     pmd_none_or_clear_bad_unless_trans_huge(pmd))
--			goto next;
-+		if (!is_swap_pmd(_pmd) && !pmd_devmap(_pmd) && !pmd_trans_huge(_pmd)) {
-+			if (pmd_none(_pmd))
-+				goto next;
-+
-+			if (pmd_bad(_pmd)) {
-+				pmd_clear_bad(pmd);
-+				goto next;
-+			}
-+		}
- 
- 		/* invoke the mmu notifier if the pmd is populated */
- 		if (!range.start) {
-@@ -257,15 +240,15 @@ static inline long change_pmd_range(stru
- 			mmu_notifier_invalidate_range_start(&range);
- 		}
- 
--		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
-+		if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd) || pmd_devmap(_pmd)) {
- 			if (next - addr != HPAGE_PMD_SIZE) {
- 				__split_huge_pmd(vma, pmd, addr, false, NULL);
- 			} else {
--				int nr_ptes = change_huge_pmd(vma, pmd, addr,
-+				ret = change_huge_pmd(vma, pmd, addr,
- 						      newprot, cp_flags);
- 
--				if (nr_ptes) {
--					if (nr_ptes == HPAGE_PMD_NR) {
-+				if (ret) {
-+					if (ret == HPAGE_PMD_NR) {
- 						pages += HPAGE_PMD_NR;
- 						nr_huge_updates++;
- 					}
-@@ -276,9 +259,11 @@ static inline long change_pmd_range(stru
- 				}
- 				/* fall through, the trans huge pmd just split */
- 			}
--		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
--					      cp_flags);
--		pages += this_pages;
-+		ret = change_pte_range(vma, pmd, _pmd, addr, next,
-+				       newprot, cp_flags);
-+		if (ret < 0)
-+			goto again;
-+		pages += ret;
- next:
- 		cond_resched();
- 	} while (pmd++, addr = next, addr != end);
diff --git a/queue-5.15/series b/queue-5.15/series
index bb966a6406..46c5202652 100644
--- a/queue-5.15/series
+++ b/queue-5.15/series
@@ -486,7 +486,6 @@ page_pool-fix-use-after-free-in-page_pool_recycle_in_ring.patch
 kvm-x86-acquire-kvm-srcu-when-handling-kvm_set_vcpu_events.patch
 hid-core-harden-s32ton-against-conversion-to-0-bits.patch
 mm-mprotect-use-long-for-page-accountings-and-retval.patch
-mm-mprotect-delete-pmd_none_or_clear_bad_unless_trans_huge.patch
 kvm-arm64-sys_regs-disable-wuninitialized-const-pointer-warning.patch
 ipv6-fix-potential-uninit-value-access-in-__ip6_make_skb.patch
 ipv4-fix-uninit-value-access-in-__ip_make_skb.patch
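
Both dropped backports implement the same idiom: take an optimistic
snapshot of the pmd with pmd_read_atomic(), make the pmd_none() /
pmd_bad() / huge-pmd decisions on that snapshot, and only trust it
after re-confirming it with pmd_same() once the page-table lock (PTL)
is held, returning -EAGAIN so the caller rereads and retries. The
sketch below mirrors just that control flow as a self-contained
user-space C program: the pmd is a plain atomic long, a pthread mutex
stands in for the PTL, and every name is an illustrative analogue, not
kernel API.

/*
 * Minimal user-space sketch of the recheck-under-lock pattern used by
 * the backported change_pte_range()/change_pmd_range() above: read a
 * value optimistically, take the lock, and fail with -EAGAIN if the
 * value changed in between, so the caller rereads and retries.
 * "pmd" is a plain atomic long and "ptl" a pthread mutex here; these
 * are stand-ins, not the kernel's types or API.
 */
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic long pmd = 42;	/* stand-in for the pmd entry */
static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of change_pte_range(): -EAGAIN if pmd changed since pmd_old. */
static long change_pte_range(long pmd_old)
{
	long _pmd;

	pthread_mutex_lock(&ptl);
	_pmd = atomic_load(&pmd);	/* like pmd_read_atomic() */
	if (_pmd != pmd_old) {		/* like !pmd_same(pmd_old, _pmd) */
		pthread_mutex_unlock(&ptl);
		return -EAGAIN;		/* lost a race: caller must retry */
	}
	/* ... safe to work on the ptes here: pmd is stable under the lock ... */
	pthread_mutex_unlock(&ptl);
	return 1;			/* number of "pages" updated */
}

/* Analogue of the change_pmd_range() loop body. */
static long change_pmd_range(void)
{
	long pages = 0;
	long _pmd, ret;
again:
	_pmd = atomic_load(&pmd);	/* read once; decide on this snapshot */
	/* ... pmd_none()/pmd_bad()/huge-pmd checks would test _pmd here ... */
	ret = change_pte_range(_pmd);
	if (ret < 0)
		goto again;		/* pmd changed under us: reread, retry */
	pages += ret;
	return pages;
}

int main(void)
{
	printf("pages updated: %ld\n", change_pmd_range());
	return 0;
}

The property the backport notes rely on is that a decision made on the
snapshot is never acted upon until it has been re-verified under the
lock; a THP migration that slips in between the snapshot and the lock
simply forces a retry instead of tripping pmd_bad() on a migration
entry.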