From: Pasha Tatashin
Date: Wed, 25 Feb 2026 22:38:56 +0000 (-0500)
Subject: mm/vmalloc: export clear_vm_uninitialized_flag()
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=ec106365394dc6c4e9ecf00842186d367dcc955a;p=thirdparty%2Fkernel%2Flinux.git

mm/vmalloc: export clear_vm_uninitialized_flag()

Patch series "Fix KASAN support for KHO restored vmalloc regions".

When KHO restores a vmalloc area, it maps existing physical pages into a
newly allocated virtual memory area.  However, because these areas were
not properly unpoisoned, KASAN would treat any access to the restored
region as out-of-bounds, as seen in the following trace:

 BUG: KASAN: vmalloc-out-of-bounds in kho_test_restore_data.isra.0+0x17b/0x2cd
 Read of size 8 at addr ffffc90000025000 by task swapper/0/1
 [...]
 Call Trace:
 [...]
  kasan_report+0xe8/0x120
  kho_test_restore_data.isra.0+0x17b/0x2cd
  kho_test_init+0x15a/0x1f0
  do_one_initcall+0xd5/0x4b0

The fix involves deferring KASAN's default poisoning by using the
VM_UNINITIALIZED flag during allocation, manually unpoisoning the memory
once it is correctly mapped, and then clearing the uninitialized flag
using a newly exported helper.

This patch (of 2):

Make clear_vm_uninitialized_flag() available to other parts of the kernel
that need to manage vmalloc areas manually, such as KHO for restoring
vmallocs.
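[Editor's note: a minimal sketch of the call sequence the series describes. This is not code from the patch; apart from clear_vm_uninitialized_flag(), the helper names (__get_vm_area_caller(), vmap_pages_range_noflush(), kasan_unpoison_vmalloc()) are assumptions drawn from existing mm/vmalloc.c and KASAN interfaces, and the surrounding variables are hypothetical.]

```c
	/*
	 * Hypothetical restore path: map preserved physical pages into a
	 * fresh vmalloc area without tripping KASAN.  Names other than
	 * clear_vm_uninitialized_flag() are assumptions, not this patch.
	 */
	struct vm_struct *area;
	int err;

	/* Step 1: allocate the VA range with VM_UNINITIALIZED so that
	 * KASAN defers its default poisoning of the region. */
	area = __get_vm_area_caller(size, VM_MAP | VM_UNINITIALIZED,
				    VMALLOC_START, VMALLOC_END,
				    __builtin_return_address(0));
	if (!area)
		return -ENOMEM;

	/* Step 2: map the preserved pages into the new area. */
	err = vmap_pages_range_noflush((unsigned long)area->addr,
				       (unsigned long)area->addr + size,
				       PAGE_KERNEL, pages, PAGE_SHIFT);
	if (err)
		goto out_free;

	/* Step 3: the mapping is now valid, so manually unpoison it ... */
	kasan_unpoison_vmalloc(area->addr, size, KASAN_VMALLOC_PROT_NORMAL);

	/* ... and drop VM_UNINITIALIZED via the newly exported helper. */
	clear_vm_uninitialized_flag(area);
```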
Link: https://lkml.kernel.org/r/20260225220223.1695350-1-pasha.tatashin@soleen.com
Link: https://lkml.kernel.org/r/20260225223857.1714801-2-pasha.tatashin@soleen.com
Signed-off-by: Pasha Tatashin
Acked-by: Pratyush Yadav (Google)
Cc: Alexander Graf
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: "Uladzislau Rezki (Sony)"
Signed-off-by: Andrew Morton
---

diff --git a/mm/internal.h b/mm/internal.h
index 39ab37bb0e1dd..2daa6a744172e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1469,6 +1469,8 @@ int __must_check vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 }
 #endif
 
+void clear_vm_uninitialized_flag(struct vm_struct *vm);
+
 int __must_check __vmap_pages_range_noflush(unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages,
 		unsigned int page_shift);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dda97c3799e8..b2c2ed650840c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3183,7 +3183,7 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
 	kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
 }
 
-static void clear_vm_uninitialized_flag(struct vm_struct *vm)
+void clear_vm_uninitialized_flag(struct vm_struct *vm)
 {
 	/*
 	 * Before removing VM_UNINITIALIZED,