From: Yang Shi
Date: Thu, 23 Oct 2025 20:44:28 +0000 (-0700)
Subject: arm64: mm: make linear mapping permission update more robust for partial range
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=37cb0aab9068e8d7907822405fe5545a2cd7af0b;p=thirdparty%2Fkernel%2Flinux.git

arm64: mm: make linear mapping permission update more robust for partial range

Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
permissions for vmalloc_huge mappings") made the permission update for a
partial range more robust.  But the linear mapping permission update still
assumes the whole range is being updated, iterating from the first page of
the area all the way to the last page.

Make it more robust by starting the linear mapping permission update at
the page mapped by the start address and walking exactly numpages pages.

Reviewed-by: Ryan Roberts
Reviewed-by: Dev Jain
Signed-off-by: Yang Shi
Signed-off-by: Catalin Marinas
---

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958d..08ac96b9f846f 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
-	int i;
 
 	if (!PAGE_ALIGNED(addr)) {
 		start &= PAGE_MASK;
@@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 */
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
-		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+		unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
+		for (; numpages; idx++, numpages--) {
+			__change_memory_common((u64)page_address(area->pages[idx]),
 					       PAGE_SIZE, set_mask, clear_mask);
 		}
 	}
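
As a quick illustration of the index math the new loop relies on, here is
a minimal user-space sketch (not kernel code; the base address, start
address and page count below are hypothetical):

	#include <stdio.h>

	#define PAGE_SHIFT 12

	int main(void)
	{
		/* Hypothetical vmalloc area base and a start inside it. */
		unsigned long area_addr = 0xffff800010000000UL;
		unsigned long start     = 0xffff800010003000UL;
		int numpages = 2;

		/* Index of the first area->pages[] entry covered by start. */
		unsigned long idx = (start - area_addr) >> PAGE_SHIFT;

		/* Walk exactly numpages entries, mirroring the new loop. */
		for (; numpages; idx++, numpages--)
			printf("would update pages[%lu]\n", idx);

		return 0;
	}

With these values idx starts at 3, so only pages[3] and pages[4] are
touched, rather than every entry in area->pages[] as before.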