From: Yang Shi
Date: Wed, 19 Nov 2025 04:19:45 +0000 (-0800)
Subject: arm64: mm: use untagged address to calculate page index
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=a06494adb7efba2dda3866ac2e354aeacb3992f1;p=thirdparty%2Fkernel%2Flinux.git

arm64: mm: use untagged address to calculate page index

Nathan Chancellor reported the below bug:

[    0.149929] BUG: KASAN: invalid-access in change_memory_common+0x258/0x2d0
[    0.151006] Read of size 8 at addr f96680000268a000 by task swapper/0/1
[    0.152031]
[    0.152274] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.18.0-rc1-00012-g37cb0aab9068 #1 PREEMPT
[    0.152288] Hardware name: linux,dummy-virt (DT)
[    0.152292] Call trace:
[    0.152295]  show_stack+0x18/0x30 (C)
[    0.152309]  dump_stack_lvl+0x60/0x80
[    0.152320]  print_report+0x480/0x498
[    0.152331]  kasan_report+0xac/0xf0
[    0.152343]  kasan_check_range+0x90/0xb0
[    0.152353]  __hwasan_load8_noabort+0x20/0x34
[    0.152364]  change_memory_common+0x258/0x2d0
[    0.152375]  set_memory_ro+0x18/0x24
[    0.152386]  bpf_prog_pack_alloc+0x200/0x2e8
[    0.152397]  bpf_jit_binary_pack_alloc+0x78/0x188
[    0.152409]  bpf_int_jit_compile+0xa4c/0xc74
[    0.152420]  bpf_prog_select_runtime+0x1c0/0x2bc
[    0.152430]  bpf_prepare_filter+0x5a4/0x7c0
[    0.152443]  bpf_prog_create+0xa4/0x100
[    0.152454]  ptp_classifier_init+0x80/0xd0
[    0.152465]  sock_init+0x12c/0x178
[    0.152474]  do_one_initcall+0xa0/0x260
[    0.152484]  kernel_init_freeable+0x2d8/0x358
[    0.152495]  kernel_init+0x20/0x140
[    0.152510]  ret_from_fork+0x10/0x20

This happens because the KASAN-tagged address was used when calculating
the page index; the untagged address should be used instead (a minimal
illustration of the miscalculation follows the diff below).

Fixes: 37cb0aab9068 ("arm64: mm: make linear mapping permission update more robust for patial range")
Reported-by: Nathan Chancellor
Tested-by: Nathan Chancellor
Signed-off-by: Yang Shi
Signed-off-by: Catalin Marinas
---

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 08ac96b9f846f..fe6fdc6249e32 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -183,7 +183,8 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 */
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
-		unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
+		unsigned long idx = (start - (unsigned long)kasan_reset_tag(area->addr))
+					>> PAGE_SHIFT;
 		for (; numpages; idx++, numpages--) {
 			__change_memory_common((u64)page_address(area->pages[idx]),
 					       PAGE_SIZE, set_mask, clear_mask);
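
A minimal, self-contained userspace sketch of the arithmetic the patch fixes
is below. It is illustrative only: the addresses are made up (the 0xf9 tag
merely resembles the one seen in the report), and reset_tag() is a
hypothetical stand-in that mimics what the kernel's kasan_reset_tag() does
for kernel pointers (restore the canonical 0xff top byte); it is not the
kernel implementation.

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define PAGE_SHIFT 12

/* Hypothetical helper: restore the canonical 0xff tag in bits 63:56. */
static uint64_t reset_tag(uint64_t addr)
{
	return addr | (0xffULL << 56);
}

int main(void)
{
	/* Illustrative addresses: a tagged vmalloc area base and an
	 * untagged address one page into the same area. */
	uint64_t area_addr = 0xf96680000268a000ULL;	/* tag 0xf9 */
	uint64_t start     = 0xff6680000268b000ULL;	/* untagged  */

	/* Buggy: mixing a tagged and an untagged address yields a huge
	 * bogus index into area->pages[]. */
	uint64_t bad_idx  = (start - area_addr) >> PAGE_SHIFT;

	/* Fixed: strip the tag first, as the patch does with
	 * kasan_reset_tag(area->addr). */
	uint64_t good_idx = (start - reset_tag(area_addr)) >> PAGE_SHIFT;

	printf("bad idx:  0x%" PRIx64 "\n", bad_idx);	/* 0x600000000001 */
	printf("good idx: 0x%" PRIx64 "\n", good_idx);	/* 0x1 */
	return 0;
}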