From: Will Deacon
Date: Mon, 2 Mar 2026 13:55:54 +0000 (+0000)
Subject: arm64: mm: Simplify __TLBI_RANGE_NUM() macro
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=057bbd8e06102100023b980dfdd26f8a595785cd;p=thirdparty%2Fkernel%2Fstable.git

arm64: mm: Simplify __TLBI_RANGE_NUM() macro

Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
decrement scale"), we don't need to clamp the 'pages' argument to fit
the range for the specified 'scale' as we know that the upper bits will
have been processed in a prior iteration.

Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.

Signed-off-by: Will Deacon
Reviewed-by: Ryan Roberts
Reviewed-by: Dev Jain
Reviewed-by: Jonathan Cameron
Signed-off-by: Ryan Roberts
Signed-off-by: Catalin Marinas
---

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 3c05afdbe3a6..fb7e541cfdfd 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -199,11 +199,7 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
  * range.
  */
 #define __TLBI_RANGE_NUM(pages, scale)					\
-	({								\
-		int __pages = min((pages),				\
-				  __TLBI_RANGE_PAGES(31, (scale)));	\
-		(__pages >> (5 * (scale) + 1)) - 1;			\
-	})
+	(((pages) >> (5 * (scale) + 1)) - 1)
 
 #define __repeat_tlbi_sync(op, arg...)					\
 	do {								\