mm/memfd: fix information leak in hugetlb folios
author     Deepanshu Kartikey <kartikey406@gmail.com>
           Wed, 12 Nov 2025 14:50:34 +0000 (20:20 +0530)
committer  Andrew Morton <akpm@linux-foundation.org>
           Mon, 24 Nov 2025 22:25:17 +0000 (14:25 -0800)

When allocating hugetlb folios for memfd, three initialization steps are
missing:

1. Folios are not zeroed, leading to kernel memory disclosure to userspace
2. Folios are not marked uptodate before adding to page cache
3. hugetlb_fault_mutex is not taken before hugetlb_add_to_page_cache()

The memfd allocation path bypasses the normal page fault handler
(hugetlb_no_page) which would handle all of these initialization steps.
This is especially problematic for udmabuf use cases, where folios are
pinned and accessed directly by userspace via DMA.
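
For illustration only (not part of this patch), a rough userspace
sketch of the affected path: creating a udmabuf from a hugetlb-backed
memfd reaches memfd_pin_folios(), so whatever the pinned folio contains
becomes visible once the memfd is mapped. The sketch assumes a
configured 2 MiB hugetlb pool and an available /dev/udmabuf device,
omits error handling, and the names used are hypothetical:

/* Hypothetical repro sketch; error handling omitted. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

int main(void)
{
        size_t size = 2UL << 20;        /* one 2 MiB hugetlb folio */
        int memfd = memfd_create("repro", MFD_HUGETLB | MFD_ALLOW_SEALING);
        int dev = open("/dev/udmabuf", O_RDWR);
        struct udmabuf_create create = {
                .memfd  = memfd,
                .offset = 0,
                .size   = size,
        };
        unsigned char *p;

        ftruncate(memfd, size);
        /* udmabuf requires the memfd to be sealed against shrinking */
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);
        /* pins folios via memfd_pin_folios() -> memfd_alloc_folio() */
        ioctl(dev, UDMABUF_CREATE, &create);

        /* before this fix, the mapping could show stale kernel data */
        p = mmap(NULL, size, PROT_READ, MAP_SHARED, memfd, 0);
        printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
        return 0;
}

With this fix applied, the mapped area reads back as zeroes because the
folio is cleared with folio_zero_user() before it is added to the page
cache.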

Fix by matching the initialization pattern used in hugetlb_no_page():
- Zero the folio using folio_zero_user() which is optimized for huge pages
- Mark it uptodate with folio_mark_uptodate()
- Take hugetlb_fault_mutex before adding to page cache to prevent races

The folio_zero_user() change also fixes a potential security issue where
uninitialized kernel memory could be disclosed to userspace through read()
or mmap() operations on the memfd.

Link: https://lkml.kernel.org/r/20251112145034.2320452-1-kartikey406@gmail.com
Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios")
Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>
Reported-by: syzbot+f64019ba229e3a5c411b@syzkaller.appspotmail.com
Link: https://lore.kernel.org/all/20251112031631.2315651-1-kartikey406@gmail.com/
Closes: https://syzkaller.appspot.com/bug?extid=f64019ba229e3a5c411b
Suggested-by: Oscar Salvador <osalvador@suse.de>
Suggested-by: David Hildenbrand <david@redhat.com>
Tested-by: syzbot+f64019ba229e3a5c411b@syzkaller.appspotmail.com
Acked-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Jason Gunthorpe <jgg@nvidia.com> (v2)
Cc: Christoph Hellwig <hch@lst.de> (v6)
Cc: Dave Airlie <airlied@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

diff --git a/mm/memfd.c b/mm/memfd.c
index 1d109c1acf211b3c3c777ffda539a8089a0b7536..a405eaa451ee8b309e6f3d5c85296ef5f8be3b00 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -96,9 +96,36 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
                                                    NULL,
                                                    gfp_mask);
                if (folio) {
+                       u32 hash;
+
+                       /*
+                        * Zero the folio to prevent information leaks to userspace.
+                        * Use folio_zero_user() which is optimized for huge/gigantic
+                        * pages. Pass 0 as addr_hint since this is not a faulting path
+                        * and we don't have a user virtual address yet.
+                        */
+                       folio_zero_user(folio, 0);
+
+                       /*
+                        * Mark the folio uptodate before adding to page cache,
+                        * as required by filemap.c and other hugetlb paths.
+                        */
+                       __folio_mark_uptodate(folio);
+
+                       /*
+                        * Serialize hugepage allocation and instantiation to prevent
+                        * races with concurrent allocations, as required by all other
+                        * callers of hugetlb_add_to_page_cache().
+                        */
+                       hash = hugetlb_fault_mutex_hash(memfd->f_mapping, idx);
+                       mutex_lock(&hugetlb_fault_mutex_table[hash]);
+
                        err = hugetlb_add_to_page_cache(folio,
                                                        memfd->f_mapping,
                                                        idx);
+
+                       mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+
                        if (err) {
                                folio_put(folio);
                                goto err_unresv;