From: Greg Kroah-Hartman
Date: Fri, 15 Nov 2024 09:21:04 +0000 (+0100)
Subject: 5.4-stable patches
X-Git-Tag: v4.19.324~3
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=b5a59e2bbc31e01c7e6d83a7093e14b6048b3713;p=thirdparty%2Fkernel%2Fstable-queue.git

5.4-stable patches

added patches:
	mm-avoid-leaving-partial-pfn-mappings-around-in-error-case.patch
---

diff --git a/queue-5.4/mm-avoid-leaving-partial-pfn-mappings-around-in-error-case.patch b/queue-5.4/mm-avoid-leaving-partial-pfn-mappings-around-in-error-case.patch
new file mode 100644
index 00000000000..c914c345eb1
--- /dev/null
+++ b/queue-5.4/mm-avoid-leaving-partial-pfn-mappings-around-in-error-case.patch
@@ -0,0 +1,81 @@
+From 35770ca6180caa24a2b258c99a87bd437a1ee10f Mon Sep 17 00:00:00 2001
+From: Linus Torvalds
+Date: Wed, 11 Sep 2024 17:11:23 -0700
+Subject: mm: avoid leaving partial pfn mappings around in error case
+
+From: Linus Torvalds
+
+commit 79a61cc3fc0466ad2b7b89618a6157785f0293b3 upstream.
+
+As Jann points out, PFN mappings are special, because unlike normal
+memory mappings, there is no lifetime information associated with the
+mapping - it is just a raw mapping of PFNs with no reference counting of
+a 'struct page'.
+
+That's all very much intentional, but it does mean that it's easy to
+mess up the cleanup in case of errors.  Yes, a failed mmap() will always
+eventually clean up any partial mappings, but without any explicit
+lifetime in the page table mapping itself, it's very easy to do the
+error handling in the wrong order.
+
+In particular, it's easy to mistakenly free the physical backing store
+before the page tables are actually cleaned up and (temporarily) have
+stale dangling PTE entries.
+
+To make this situation less error-prone, just make sure that any partial
+pfn mapping is torn down early, before any other error handling.
+
+Reported-and-tested-by: Jann Horn
+Cc: Andrew Morton
+Cc: Jason Gunthorpe
+Cc: Simona Vetter
+Signed-off-by: Linus Torvalds
+Signed-off-by: Harshvardhan Jha
+Signed-off-by: Greg Kroah-Hartman
+---
+ mm/memory.c |   27 ++++++++++++++++++++++-----
+ 1 file changed, 22 insertions(+), 5 deletions(-)
+
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1917,11 +1917,7 @@ static inline int remap_p4d_range(struct
+ 	return 0;
+ }
+ 
+-/*
+- * Variant of remap_pfn_range that does not call track_pfn_remap.  The caller
+- * must have pre-validated the caching bits of the pgprot_t.
+- */
+-int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
++static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long addr,
+ 		unsigned long pfn, unsigned long size, pgprot_t prot)
+ {
+ 	pgd_t *pgd;
+@@ -1974,6 +1970,27 @@ int remap_pfn_range_notrack(struct vm_ar
+ 	return 0;
+ }
+ 
++/*
++ * Variant of remap_pfn_range that does not call track_pfn_remap.  The caller
++ * must have pre-validated the caching bits of the pgprot_t.
++ */
++int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
++		unsigned long pfn, unsigned long size, pgprot_t prot)
++{
++	int error = remap_pfn_range_internal(vma, addr, pfn, size, prot);
++
++	if (!error)
++		return 0;
++
++	/*
++	 * A partial pfn range mapping is dangerous: it does not
++	 * maintain page reference counts, and callers may free
++	 * pages due to the error. So zap it early.
++	 */
++	zap_page_range_single(vma, addr, size, NULL);
++	return error;
++}
++
+ /**
+  * remap_pfn_range - remap kernel memory to userspace
+  * @vma: user vma to map to
diff --git a/queue-5.4/series b/queue-5.4/series
index 6b60065b3a7..3cd3954716c 100644
--- a/queue-5.4/series
+++ b/queue-5.4/series
@@ -64,3 +64,4 @@ mm-fix-ambiguous-comments-for-better-code-readability.patch
 mm-memory.c-make-remap_pfn_range-reject-unaligned-addr.patch
 mm-add-remap_pfn_range_notrack.patch
 9p-fix-slab-cache-name-creation-for-real.patch
+mm-avoid-leaving-partial-pfn-mappings-around-in-error-case.patch
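
For context, a minimal sketch of the kind of driver error path this change hardens.
The sketch assumes a hypothetical character device: the exampledev_* names and the
free-backing helper are invented for illustration, while remap_pfn_range(), PHYS_PFN()
and the surrounding types are the real kernel interfaces the patch touches.  Before the
change, a partial failure inside remap_pfn_range() left the already-installed PTEs in
place until the failed mmap() was torn down, so freeing the backing store in the error
branch briefly produced stale, dangling PTEs; with the change, the partial mapping is
zapped before the error is returned.

	#include <linux/errno.h>
	#include <linux/fs.h>
	#include <linux/mm.h>
	#include <linux/pfn.h>
	#include <linux/types.h>

	/* Hypothetical per-device state: a physically contiguous region the driver owns. */
	struct exampledev {
		phys_addr_t phys_base;	/* start of the backing region */
		size_t size;		/* length of the region in bytes */
	};

	/* Hypothetical helper that returns the region to whatever allocator owns it. */
	static void exampledev_free_backing(struct exampledev *ed)
	{
	}

	static int exampledev_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct exampledev *ed = file->private_data;
		unsigned long len = vma->vm_end - vma->vm_start;
		int ret;

		if (len > ed->size)
			return -EINVAL;

		/* Raw PFN mapping: no struct page reference counting is involved. */
		ret = remap_pfn_range(vma, vma->vm_start, PHYS_PFN(ed->phys_base),
				      len, vma->vm_page_prot);
		if (ret) {
			/*
			 * Error path: the driver frees its backing store and bails out.
			 * Before the patch above, remap_pfn_range() could fail part-way
			 * through and leave the PTEs it had already installed in place
			 * until the failed mmap() was torn down, so this free briefly
			 * left stale, dangling PTEs.  With the patch, any partial
			 * mapping has already been zapped before the error is returned.
			 */
			exampledev_free_backing(ed);
			return ret;
		}

		return 0;
	}

The same ordering concern applies to any caller of remap_pfn_range() or
remap_pfn_range_notrack() that releases its backing memory when the mapping fails.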