From: Greg Kroah-Hartman
Date: Tue, 23 Jul 2024 12:15:27 +0000 (+0200)
Subject: 5.15-stable patches
X-Git-Tag: v6.10.1~25
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=91b571aa6925a783f4137996020dd71be4cddb91;p=thirdparty%2Fkernel%2Fstable-queue.git

5.15-stable patches

added patches:
	minmax-allow-comparisons-of-int-against-unsigned-char-short.patch
	minmax-allow-min-max-clamp-if-the-arguments-have-the-same-signedness.patch
	minmax-clamp-more-efficiently-by-avoiding-extra-comparison.patch
	minmax-fix-header-inclusions.patch
	minmax-relax-check-to-allow-comparison-between-unsigned-arguments-and-signed-constants.patch
	minmax-sanity-check-constant-bounds-when-clamping.patch
	mm-damon-core-merge-regions-aggressively-when-max_nr_regions-is-unmet.patch
	tracing-define-the-is_signed_type-macro-once.patch
---
diff --git a/queue-5.15/minmax-allow-comparisons-of-int-against-unsigned-char-short.patch b/queue-5.15/minmax-allow-comparisons-of-int-against-unsigned-char-short.patch
new file mode 100644
index 00000000000..ad80497da3c
--- /dev/null
+++ b/queue-5.15/minmax-allow-comparisons-of-int-against-unsigned-char-short.patch
@@ -0,0 +1,44 @@
+From sj@kernel.org Tue Jul 16 20:33:45 2024
+From: SeongJae Park
+Date: Tue, 16 Jul 2024 11:33:31 -0700
+Subject: minmax: allow comparisons of 'int' against 'unsigned char/short'
+To: stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: David Laight, linux-kernel@vger.kernel.org, David Laight, Andy Shevchenko, Christoph Hellwig, "Jason A. Donenfeld", Linus Torvalds, Matthew Wilcox, Andrew Morton, SeongJae Park
+Message-ID: <20240716183333.138498-7-sj@kernel.org>
+
+From: David Laight
+
+commit 4ead534fba42fc4fd41163297528d2aa731cd121 upstream.
+
+Since 'unsigned char/short' get promoted to 'signed int' it is safe to
+compare them against an 'int' value.
+
+Link: https://lkml.kernel.org/r/8732ef5f809c47c28a7be47c938b28d4@AcuMS.aculab.com
+Signed-off-by: David Laight
+Cc: Andy Shevchenko
+Cc: Christoph Hellwig
+Cc: Jason A. Donenfeld
+Cc: Linus Torvalds
+Cc: Matthew Wilcox (Oracle)
+Signed-off-by: Andrew Morton
+(cherry picked from commit 4ead534fba42fc4fd41163297528d2aa731cd121)
+Signed-off-by: SeongJae Park
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/linux/minmax.h | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -25,8 +25,9 @@
+ 	__builtin_choose_expr(__is_constexpr(is_signed_type(typeof(x))),	\
+ 		is_signed_type(typeof(x)), 0)
+ 
+-#define __types_ok(x, y) \
+-	(__is_signed(x) == __is_signed(y))
++#define __types_ok(x, y)				\
++	(__is_signed(x) == __is_signed(y) ||		\
++	 __is_signed((x) + 0) == __is_signed((y) + 0))
+ 
+ #define __cmp_op_min <
+ #define __cmp_op_max >
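As an illustration of the promotion rule used by the patch above, here is a
user-space sketch (not part of the queued patch; plain C, buildable with gcc):

	#include <assert.h>

	int main(void)
	{
		unsigned short us = 10;
		unsigned int ui = 10;
		int i = -1;

		/* 'us + 0' is promoted to (signed) int, so -1 stays negative: */
		assert(i < us + 0);

		/* no promotion for 'unsigned int': -1 converts to UINT_MAX,
		 * which is why min()/max() still reject that combination: */
		assert(!(i < ui + 0));

		return 0;
	}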
Donenfeld" , Linus Torvalds , Matthew Wilcox , Andrew Morton , SeongJae Park +Message-ID: <20240716183333.138498-6-sj@kernel.org> + +From: David Laight + +commit d03eba99f5bf7cbc6e2fdde3b6fa36954ad58e09 upstream. + +The type-check in min()/max() is there to stop unexpected results if a +negative value gets converted to a large unsigned value. However it also +rejects 'unsigned int' v 'unsigned long' compares which are common and +never problematc. + +Replace the 'same type' check with a 'same signedness' check. + +The new test isn't itself a compile time error, so use static_assert() to +report the error and give a meaningful error message. + +Due to the way builtin_choose_expr() works detecting the error in the +'non-constant' side (where static_assert() can be used) also detects +errors when the arguments are constant. + +Link: https://lkml.kernel.org/r/fe7e6c542e094bfca655abcd323c1c98@AcuMS.aculab.com +Signed-off-by: David Laight +Cc: Andy Shevchenko +Cc: Christoph Hellwig +Cc: Jason A. Donenfeld +Cc: Linus Torvalds +Cc: Matthew Wilcox (Oracle) +Signed-off-by: Andrew Morton +(cherry picked from commit d03eba99f5bf7cbc6e2fdde3b6fa36954ad58e09) +Signed-off-by: SeongJae Park +Signed-off-by: Greg Kroah-Hartman +--- + include/linux/minmax.h | 62 ++++++++++++++++++++++++++----------------------- + 1 file changed, 33 insertions(+), 29 deletions(-) + +--- a/include/linux/minmax.h ++++ b/include/linux/minmax.h +@@ -11,9 +11,8 @@ + * + * - avoid multiple evaluations of the arguments (so side-effects like + * "x++" happen only once) when non-constant. +- * - perform strict type-checking (to generate warnings instead of +- * nasty runtime surprises). See the "unnecessary" pointer comparison +- * in __typecheck(). ++ * - perform signed v unsigned type-checking (to generate compile ++ * errors instead of nasty runtime surprises). + * - retain result as a constant expressions when called with only + * constant expressions (to avoid tripping VLA warnings in stack + * allocation usage). +@@ -21,23 +20,30 @@ + #define __typecheck(x, y) \ + (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1))) + +-#define __no_side_effects(x, y) \ +- (__is_constexpr(x) && __is_constexpr(y)) ++/* is_signed_type() isn't a constexpr for pointer types */ ++#define __is_signed(x) \ ++ __builtin_choose_expr(__is_constexpr(is_signed_type(typeof(x))), \ ++ is_signed_type(typeof(x)), 0) + +-#define __safe_cmp(x, y) \ +- (__typecheck(x, y) && __no_side_effects(x, y)) ++#define __types_ok(x, y) \ ++ (__is_signed(x) == __is_signed(y)) + +-#define __cmp(x, y, op) ((x) op (y) ? (x) : (y)) ++#define __cmp_op_min < ++#define __cmp_op_max > + +-#define __cmp_once(x, y, unique_x, unique_y, op) ({ \ ++#define __cmp(op, x, y) ((x) __cmp_op_##op (y) ? (x) : (y)) ++ ++#define __cmp_once(op, x, y, unique_x, unique_y) ({ \ + typeof(x) unique_x = (x); \ + typeof(y) unique_y = (y); \ +- __cmp(unique_x, unique_y, op); }) +- +-#define __careful_cmp(x, y, op) \ +- __builtin_choose_expr(__safe_cmp(x, y), \ +- __cmp(x, y, op), \ +- __cmp_once(x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y), op)) ++ static_assert(__types_ok(x, y), \ ++ #op "(" #x ", " #y ") signedness error, fix types or consider u" #op "() before " #op "_t()"); \ ++ __cmp(op, unique_x, unique_y); }) ++ ++#define __careful_cmp(op, x, y) \ ++ __builtin_choose_expr(__is_constexpr((x) - (y)), \ ++ __cmp(op, x, y), \ ++ __cmp_once(op, x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y))) + + #define __clamp(val, lo, hi) \ + ((val) >= (hi) ? (hi) : ((val) <= (lo) ? 
+ 	((val) >= (hi) ? (hi) : ((val) <= (lo) ? (lo) : (val)))
+@@ -46,17 +52,15 @@
+ 	typeof(val) unique_val = (val);					\
+ 	typeof(lo) unique_lo = (lo);					\
+ 	typeof(hi) unique_hi = (hi);					\
++	static_assert(__builtin_choose_expr(__is_constexpr((lo) > (hi)),	\
++			(lo) <= (hi), true),					\
++		"clamp() low limit " #lo " greater than high limit " #hi);	\
++	static_assert(__types_ok(val, lo), "clamp() 'lo' signedness error");	\
++	static_assert(__types_ok(val, hi), "clamp() 'hi' signedness error");	\
+ 	__clamp(unique_val, unique_lo, unique_hi); })
+ 
+-#define __clamp_input_check(lo, hi) \
+-	(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
+-		__is_constexpr((lo) > (hi)), (lo) > (hi), false)))
+-
+ #define __careful_clamp(val, lo, hi) ({ \
+-	__clamp_input_check(lo, hi) + \
+-	__builtin_choose_expr(__typecheck(val, lo) && __typecheck(val, hi) && \
+-			      __typecheck(hi, lo) && __is_constexpr(val) && \
+-			      __is_constexpr(lo) && __is_constexpr(hi), \
++	__builtin_choose_expr(__is_constexpr((val) - (lo) + (hi)), \
+ 		__clamp(val, lo, hi), \
+ 		__clamp_once(val, lo, hi, __UNIQUE_ID(__val), \
+ 			     __UNIQUE_ID(__lo), __UNIQUE_ID(__hi))); })
+@@ -66,14 +70,14 @@
+  * @x: first value
+  * @y: second value
+  */
+-#define min(x, y)	__careful_cmp(x, y, <)
++#define min(x, y)	__careful_cmp(min, x, y)
+ 
+ /**
+  * max - return maximum of two values of the same or compatible types
+  * @x: first value
+  * @y: second value
+  */
+-#define max(x, y)	__careful_cmp(x, y, >)
++#define max(x, y)	__careful_cmp(max, x, y)
+ 
+ /**
+  * umin - return minimum of two non-negative values
+@@ -82,7 +86,7 @@
+  * @y: second value
+  */
+ #define umin(x, y)	\
+-	__careful_cmp((x) + 0u + 0ul + 0ull, (y) + 0u + 0ul + 0ull, <)
++	__careful_cmp(min, (x) + 0u + 0ul + 0ull, (y) + 0u + 0ul + 0ull)
+ 
+ /**
+  * umax - return maximum of two non-negative values
+@@ -90,7 +94,7 @@
+  * @y: second value
+  */
+ #define umax(x, y)	\
+-	__careful_cmp((x) + 0u + 0ul + 0ull, (y) + 0u + 0ul + 0ull, >)
++	__careful_cmp(max, (x) + 0u + 0ul + 0ull, (y) + 0u + 0ul + 0ull)
+ 
+ /**
+  * min3 - return minimum of three values
+@@ -142,7 +146,7 @@
+  * @x: first value
+  * @y: second value
+  */
+-#define min_t(type, x, y)	__careful_cmp((type)(x), (type)(y), <)
++#define min_t(type, x, y)	__careful_cmp(min, (type)(x), (type)(y))
+ 
+ /**
+  * max_t - return maximum of two values, using the specified type
+  * @x: first value
+  * @y: second value
+  */
+-#define max_t(type, x, y)	__careful_cmp((type)(x), (type)(y), >)
++#define max_t(type, x, y)	__careful_cmp(max, (type)(x), (type)(y))
+ 
+ /**
+  * clamp_t - return a value clamped to a given range using a given type
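To see what the new rule accepts and rejects, here is a user-space sketch of
the check (illustrative only, simplified to arithmetic types; the kernel's
__is_signed() additionally has to guard against pointer types, as the hunk
above shows):

	#include <assert.h>

	#define __is_signed(x)   ((typeof(x))-1 < (typeof(x))1)
	#define __types_ok(x, y) (__is_signed(x) == __is_signed(y))

	int main(void)
	{
		unsigned int a = 6;
		unsigned long b = 4;
		int c = -1;

		assert(__types_ok(a, b));  /* differing types, same signedness: ok   */
		assert(!__types_ok(c, a)); /* signed v unsigned: static_assert fires */
		return 0;
	}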
diff --git a/queue-5.15/minmax-clamp-more-efficiently-by-avoiding-extra-comparison.patch b/queue-5.15/minmax-clamp-more-efficiently-by-avoiding-extra-comparison.patch
new file mode 100644
index 00000000000..296bb1df1d8
--- /dev/null
+++ b/queue-5.15/minmax-clamp-more-efficiently-by-avoiding-extra-comparison.patch
@@ -0,0 +1,186 @@
+From sj@kernel.org Tue Jul 16 20:33:43 2024
+From: SeongJae Park
+Date: Tue, 16 Jul 2024 11:33:28 -0700
+Subject: minmax: clamp more efficiently by avoiding extra comparison
+To: stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: "Jason A. Donenfeld", linux-kernel@vger.kernel.org, Andy Shevchenko, Kees Cook, Andrew Morton, SeongJae Park
+Message-ID: <20240716183333.138498-4-sj@kernel.org>
+
+From: "Jason A. Donenfeld"
+
+commit 2122e2a4efc2cd139474079e11939b6e07adfacd upstream.
+
+Currently the clamp algorithm does:
+
+	if (val > hi)
+		val = hi;
+	if (val < lo)
+		val = lo;
+
+But since hi > lo by definition, this can be made more efficient with:
+
+	if (val > hi)
+		val = hi;
+	else if (val < lo)
+		val = lo;
+
+So fix up the clamp and clamp_t functions to do this, adding the same
+argument checking as for min and min_t.
+
+For simple cases, code generation on x86_64 and aarch64 stay about the
+same:
+
+  before:
+	cmp	edi, edx
+	mov	eax, esi
+	cmova	edi, edx
+	cmp	edi, esi
+	cmovnb	eax, edi
+	ret
+
+  after:
+	cmp	edi, esi
+	mov	eax, edx
+	cmovnb	esi, edi
+	cmp	edi, edx
+	cmovb	eax, esi
+	ret
+
+  before:
+	cmp	w0, w2
+	csel	w8, w0, w2, lo
+	cmp	w8, w1
+	csel	w0, w8, w1, hi
+	ret
+
+  after:
+	cmp	w0, w1
+	csel	w8, w0, w1, hi
+	cmp	w0, w2
+	csel	w0, w8, w2, lo
+	ret
+
+On MIPS64, however, code generation improves, by removing arithmetic in
+the second branch:
+
+  before:
+	sltu	$3,$6,$4
+	bne	$3,$0,.L2
+	move	$2,$6
+
+	move	$2,$4
+  .L2:
+	sltu	$3,$2,$5
+	bnel	$3,$0,.L7
+	move	$2,$5
+
+  .L7:
+	jr	$31
+	nop
+
+  after:
+	sltu	$3,$4,$6
+	beq	$3,$0,.L13
+	move	$2,$6
+
+	sltu	$3,$4,$5
+	bne	$3,$0,.L12
+	move	$2,$4
+
+  .L13:
+	jr	$31
+	nop
+
+  .L12:
+	jr	$31
+	move	$2,$5
+
+For more complex cases with surrounding code, the effects are a bit
+more complicated.  For example, consider this simplified version of
+timestamp_truncate() from fs/inode.c on x86_64:
+
+  struct timespec64 timestamp_truncate(struct timespec64 t, struct inode *inode)
+  {
+	struct super_block *sb = inode->i_sb;
+	unsigned int gran = sb->s_time_gran;
+
+	t.tv_sec = clamp(t.tv_sec, sb->s_time_min, sb->s_time_max);
+	if (t.tv_sec == sb->s_time_max || t.tv_sec == sb->s_time_min)
+		t.tv_nsec = 0;
+	return t;
+  }
+
+  before:
+	mov	r8, rdx
+	mov	rdx, rsi
+	mov	rcx, QWORD PTR [r8]
+	mov	rax, QWORD PTR [rcx+8]
+	mov	rcx, QWORD PTR [rcx+16]
+	cmp	rax, rdi
+	mov	r8, rcx
+	cmovge	rdi, rax
+	cmp	rdi, rcx
+	cmovle	r8, rdi
+	cmp	rax, r8
+	je	.L4
+	cmp	rdi, rcx
+	jge	.L4
+	mov	rax, r8
+	ret
+  .L4:
+	xor	edx, edx
+	mov	rax, r8
+	ret
+
+  after:
+	mov	rax, QWORD PTR [rdx]
+	mov	rdx, QWORD PTR [rax+8]
+	mov	rax, QWORD PTR [rax+16]
+	cmp	rax, rdi
+	jg	.L6
+	mov	r8, rax
+	xor	edx, edx
+  .L2:
+	mov	rax, r8
+	ret
+  .L6:
+	cmp	rdx, rdi
+	mov	r8, rdi
+	cmovge	r8, rdx
+	cmp	rax, r8
+	je	.L4
+	xor	eax, eax
+	cmp	rdx, rdi
+	cmovl	rax, rsi
+	mov	rdx, rax
+	mov	rax, r8
+	ret
+  .L4:
+	xor	edx, edx
+	jmp	.L2
+
+In this case, we actually gain a branch, unfortunately, because the
+compiler's replacement axioms no longer as cleanly apply.
+
+So all in all, this change is a bit of a mixed bag.
+
+Link: https://lkml.kernel.org/r/20220926133435.1333846-2-Jason@zx2c4.com
+Signed-off-by: Jason A. Donenfeld
+Cc: Andy Shevchenko
+Cc: Kees Cook
+Signed-off-by: Andrew Morton
+(cherry picked from commit 2122e2a4efc2cd139474079e11939b6e07adfacd)
+Signed-off-by: SeongJae Park
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/linux/minmax.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -38,7 +38,7 @@
+ 		__cmp_once(x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y), op))
+ 
+ #define __clamp(val, lo, hi) \
+-	__cmp(__cmp(val, lo, >), hi, <)
++	((val) >= (hi) ? (hi) : ((val) <= (lo) ? (lo) : (val)))
+ 
+ #define __clamp_once(val, lo, hi, unique_val, unique_lo, unique_hi) ({ \
+ 	typeof(val) unique_val = (val); \
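The two clamp shapes from the message above, as a compilable user-space
sketch (illustrative only; the kernel macro itself is the ternary expression
in the hunk above):

	#include <stdio.h>

	static long clamp_two_cmps(long val, long lo, long hi)
	{
		if (val > hi)
			val = hi;
		if (val < lo)		/* always evaluated */
			val = lo;
		return val;
	}

	static long clamp_one_path(long val, long lo, long hi)
	{
		if (val > hi)
			val = hi;
		else if (val < lo)	/* skipped once val was capped to hi */
			val = lo;
		return val;
	}

	int main(void)
	{
		/* prints "10 0" */
		printf("%ld %ld\n", clamp_two_cmps(99, 0, 10), clamp_one_path(-5, 0, 10));
		return 0;
	}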
diff --git a/queue-5.15/minmax-fix-header-inclusions.patch b/queue-5.15/minmax-fix-header-inclusions.patch
new file mode 100644
index 00000000000..ed5f9383f6c
--- /dev/null
+++ b/queue-5.15/minmax-fix-header-inclusions.patch
@@ -0,0 +1,41 @@
+From sj@kernel.org Tue Jul 16 20:33:44 2024
+From: SeongJae Park
+Date: Tue, 16 Jul 2024 11:33:29 -0700
+Subject: minmax: fix header inclusions
+To: stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: Andy Shevchenko, linux-kernel@vger.kernel.org, Herve Codina, Rasmus Villemoes, Andrew Morton, SeongJae Park
+Message-ID: <20240716183333.138498-5-sj@kernel.org>
+
+From: Andy Shevchenko
+
+commit f6e9d38f8eb00ac8b52e6d15f6aa9bcecacb081b upstream.
+
+BUILD_BUG_ON*() macros are defined in build_bug.h.  Include it.  Replace
+compiler_types.h by compiler.h, which provides the former, to have a
+definition of the __UNIQUE_ID().
+
+Link: https://lkml.kernel.org/r/20230912092355.79280-1-andriy.shevchenko@linux.intel.com
+Signed-off-by: Andy Shevchenko
+Reviewed-by: Herve Codina
+Cc: Rasmus Villemoes
+
+Signed-off-by: Andrew Morton
+(cherry picked from commit f6e9d38f8eb00ac8b52e6d15f6aa9bcecacb081b)
+Signed-off-by: SeongJae Park
+[Fix a conflict due to absence of compiler_types.h include]
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/linux/minmax.h | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -2,6 +2,8 @@
+ #ifndef _LINUX_MINMAX_H
+ #define _LINUX_MINMAX_H
+ 
++#include <linux/build_bug.h>
++#include <linux/compiler.h>
+ #include <linux/const.h>
+ 
+ /*
diff --git a/queue-5.15/minmax-relax-check-to-allow-comparison-between-unsigned-arguments-and-signed-constants.patch b/queue-5.15/minmax-relax-check-to-allow-comparison-between-unsigned-arguments-and-signed-constants.patch
new file mode 100644
index 00000000000..7dc64d3f566
--- /dev/null
+++ b/queue-5.15/minmax-relax-check-to-allow-comparison-between-unsigned-arguments-and-signed-constants.patch
@@ -0,0 +1,77 @@
+From sj@kernel.org Tue Jul 16 20:33:46 2024
+From: SeongJae Park
+Date: Tue, 16 Jul 2024 11:33:32 -0700
+Subject: minmax: relax check to allow comparison between unsigned arguments and signed constants
+To: stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: David Laight, linux-kernel@vger.kernel.org, David Laight, Andy Shevchenko, Christoph Hellwig, "Jason A. Donenfeld", Linus Torvalds, Matthew Wilcox, Andrew Morton, SeongJae Park
+Message-ID: <20240716183333.138498-8-sj@kernel.org>
+
+From: David Laight
+
+commit 867046cc7027703f60a46339ffde91a1970f2901 upstream.
+
+Allow (for example) min(unsigned_var, 20).
+
+The opposite min(signed_var, 20u) is still errored.
+
+Since a comparison between signed and unsigned never makes the unsigned
+value negative it is only necessary to adjust the __types_ok() test.
+
+Link: https://lkml.kernel.org/r/633b64e2f39e46bb8234809c5595b8c7@AcuMS.aculab.com
+Signed-off-by: David Laight
+Cc: Andy Shevchenko
+Cc: Christoph Hellwig
+Cc: Jason A. Donenfeld
+Cc: Linus Torvalds
+Cc: Matthew Wilcox (Oracle)
+Signed-off-by: Andrew Morton
+(cherry picked from commit 867046cc7027703f60a46339ffde91a1970f2901)
+Signed-off-by: SeongJae Park
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/linux/minmax.h | 24 +++++++++++++++++-------
+ 1 file changed, 17 insertions(+), 7 deletions(-)
+
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -9,13 +9,18 @@
+ /*
+  * min()/max()/clamp() macros must accomplish three things:
+  *
+- * - avoid multiple evaluations of the arguments (so side-effects like
++ * - Avoid multiple evaluations of the arguments (so side-effects like
+  *   "x++" happen only once) when non-constant.
+- * - perform signed v unsigned type-checking (to generate compile
+- *   errors instead of nasty runtime surprises).
+- * - retain result as a constant expressions when called with only
++ * - Retain result as a constant expressions when called with only
+  *   constant expressions (to avoid tripping VLA warnings in stack
+  *   allocation usage).
++ * - Perform signed v unsigned type-checking (to generate compile
++ *   errors instead of nasty runtime surprises).
++ * - Unsigned char/short are always promoted to signed int and can be
++ *   compared against signed or unsigned arguments.
++ * - Unsigned arguments can be compared against non-negative signed constants.
++ * - Comparison of a signed argument against an unsigned constant fails
++ *   even if the constant is below __INT_MAX__ and could be cast to int.
+  */
+ #define __typecheck(x, y) \
+ 	(!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
+@@ -25,9 +30,14 @@
+ 	__builtin_choose_expr(__is_constexpr(is_signed_type(typeof(x))),	\
+ 		is_signed_type(typeof(x)), 0)
+ 
+-#define __types_ok(x, y)				\
+-	(__is_signed(x) == __is_signed(y) ||		\
+-	 __is_signed((x) + 0) == __is_signed((y) + 0))
++/* True for a non-negative signed int constant */
++#define __is_noneg_int(x)	\
++	(__builtin_choose_expr(__is_constexpr(x) && __is_signed(x), x, -1) >= 0)
++
++#define __types_ok(x, y)					\
++	(__is_signed(x) == __is_signed(y) ||			\
++	 __is_signed((x) + 0) == __is_signed((y) + 0) ||	\
++	 __is_noneg_int(x) || __is_noneg_int(y))
+ 
+ #define __cmp_op_min <
+ #define __cmp_op_max >
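A user-space sketch of the new constant test (assuming GCC extensions:
typeof, __builtin_choose_expr, and the sizeof-based __is_constexpr() trick
copied from the kernel; __is_signed() is simplified to arithmetic types):

	#include <assert.h>

	#define __is_constexpr(x) \
		(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
	#define __is_signed(x) ((typeof(x))-1 < (typeof(x))1)
	#define __is_noneg_int(x) \
		(__builtin_choose_expr(__is_constexpr(x) && __is_signed(x), x, -1) >= 0)

	int main(void)
	{
		unsigned int u = 5;

		assert(__is_noneg_int(20));  /* non-negative constant: allowed   */
		assert(!__is_noneg_int(-3)); /* negative constant: still errored */
		assert(!__is_noneg_int(u));  /* not a constant: no exemption     */
		return 0;
	}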
diff --git a/queue-5.15/minmax-sanity-check-constant-bounds-when-clamping.patch b/queue-5.15/minmax-sanity-check-constant-bounds-when-clamping.patch
new file mode 100644
index 00000000000..9d15c5550ea
--- /dev/null
+++ b/queue-5.15/minmax-sanity-check-constant-bounds-when-clamping.patch
@@ -0,0 +1,80 @@
+From sj@kernel.org Tue Jul 16 20:33:43 2024
+From: SeongJae Park
+Date: Tue, 16 Jul 2024 11:33:27 -0700
+Subject: minmax: sanity check constant bounds when clamping
+To: stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: "Jason A. Donenfeld", linux-kernel@vger.kernel.org, Andy Shevchenko, Kees Cook, Andrew Morton, SeongJae Park
+Message-ID: <20240716183333.138498-3-sj@kernel.org>
+
+From: "Jason A. Donenfeld"
+
+commit 5efcecd9a3b18078d3398b359a84c83f549e22cf upstream.
+
+The clamp family of functions only makes sense if hi>=lo.  If hi and lo
+are compile-time constants, then raise a build error.  Doing so has
+already caught buggy code.  This also introduces the infrastructure to
+improve the clamping function in subsequent commits.
+
+[akpm@linux-foundation.org: coding-style cleanups]
+[akpm@linux-foundation.org: s@&&\@&& \@]
+Link: https://lkml.kernel.org/r/20220926133435.1333846-1-Jason@zx2c4.com
+Signed-off-by: Jason A. Donenfeld
+Reviewed-by: Andy Shevchenko
+Cc: Kees Cook
+Signed-off-by: Andrew Morton
+(cherry picked from commit 5efcecd9a3b18078d3398b359a84c83f549e22cf)
+Signed-off-by: SeongJae Park
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/linux/minmax.h | 26 ++++++++++++++++++++++++--
+ 1 file changed, 24 insertions(+), 2 deletions(-)
+
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -37,6 +37,28 @@
+ 		__cmp(x, y, op), \
+ 		__cmp_once(x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y), op))
+ 
++#define __clamp(val, lo, hi) \
++	__cmp(__cmp(val, lo, >), hi, <)
++
++#define __clamp_once(val, lo, hi, unique_val, unique_lo, unique_hi) ({ \
++	typeof(val) unique_val = (val); \
++	typeof(lo) unique_lo = (lo); \
++	typeof(hi) unique_hi = (hi); \
++	__clamp(unique_val, unique_lo, unique_hi); })
++
++#define __clamp_input_check(lo, hi) \
++	(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
++		__is_constexpr((lo) > (hi)), (lo) > (hi), false)))
++
++#define __careful_clamp(val, lo, hi) ({ \
++	__clamp_input_check(lo, hi) + \
++	__builtin_choose_expr(__typecheck(val, lo) && __typecheck(val, hi) && \
++			      __typecheck(hi, lo) && __is_constexpr(val) && \
++			      __is_constexpr(lo) && __is_constexpr(hi), \
++		__clamp(val, lo, hi), \
++		__clamp_once(val, lo, hi, __UNIQUE_ID(__val), \
++			     __UNIQUE_ID(__lo), __UNIQUE_ID(__hi))); })
++
+ /**
+  * min - return minimum of two values of the same or compatible types
+  * @x: first value
+@@ -103,7 +125,7 @@
+  * This macro does strict typechecking of @lo/@hi to make sure they are of the
+  * same type as @val.  See the unnecessary pointer comparisons.
+  */
+-#define clamp(val, lo, hi) min((typeof(val))max(val, lo), hi)
++#define clamp(val, lo, hi) __careful_clamp(val, lo, hi)
+ 
+ /*
+  * ..and if you can't take the strict
+@@ -138,7 +160,7 @@
+  * This macro does no typechecking and uses temporary variables of type
+  * @type to make all the comparisons.
+  */
+-#define clamp_t(type, val, lo, hi) min_t(type, max_t(type, val, lo), hi)
++#define clamp_t(type, val, lo, hi) __careful_clamp((type)(val), (type)(lo), (type)(hi))
+ 
+ /**
+  * clamp_val - return a value clamped to a given range using val's type
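A user-space mimic of the new build-time check (illustrative only, assuming
GCC extensions; BUILD_BUG_ON_ZERO() is re-created with the usual
negative-width-bitfield trick, __is_constexpr() is copied from the kernel,
and 0 stands in for the kernel's false):

	#define __is_constexpr(x) \
		(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
	#define BUILD_BUG_ON_ZERO(e) ((int)(sizeof(struct { int:(-!!(e)); })))
	#define __clamp_input_check(lo, hi) \
		(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
			__is_constexpr((lo) > (hi)), (lo) > (hi), 0)))

	int main(void)
	{
		int ok = __clamp_input_check(16, 8192);	/* compiles: 16 <= 8192 */
		/* int bad = __clamp_input_check(8192, 16); -- would fail to build */
		return ok;
	}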
diff --git a/queue-5.15/mm-damon-core-merge-regions-aggressively-when-max_nr_regions-is-unmet.patch b/queue-5.15/mm-damon-core-merge-regions-aggressively-when-max_nr_regions-is-unmet.patch
new file mode 100644
index 00000000000..4c72a2304cd
--- /dev/null
+++ b/queue-5.15/mm-damon-core-merge-regions-aggressively-when-max_nr_regions-is-unmet.patch
@@ -0,0 +1,80 @@
+From sj@kernel.org Tue Jul 16 20:33:48 2024
+From: SeongJae Park
+Date: Tue, 16 Jul 2024 11:33:33 -0700
+Subject: mm/damon/core: merge regions aggressively when max_nr_regions is unmet
+To: stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: SeongJae Park, Andrew Morton, damon@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
+Message-ID: <20240716183333.138498-9-sj@kernel.org>
+
+From: SeongJae Park
+
+commit 310d6c15e9104c99d5d9d0ff8e5383a79da7d5e6 upstream.
+
+DAMON keeps the number of regions under max_nr_regions by skipping regions
+split operations when doing so can make the number higher than the limit.
+It works well for preventing violation of the limit.  But, if somehow the
+violation happens, it cannot recover well depending on the situation.  In
+detail, if the real number of regions having different access pattern is
+higher than the limit, the mechanism cannot reduce the number below the
+limit.  In such a case, the system could suffer from high monitoring
+overhead of DAMON.
+
+The violation can actually happen.  For example, the user could reduce
+max_nr_regions while DAMON is running, to be lower than the current number
+of regions.  Fix the problem by repeating the merge operations with
+increasing aggressiveness in kdamond_merge_regions() for the case, until
+the limit is met.
+
+[sj@kernel.org: increase regions merge aggressiveness while respecting min_nr_regions]
+  Link: https://lkml.kernel.org/r/20240626164753.46270-1-sj@kernel.org
+[sj@kernel.org: ensure max threshold attempt for max_nr_regions violation]
+  Link: https://lkml.kernel.org/r/20240627163153.75969-1-sj@kernel.org
+Link: https://lkml.kernel.org/r/20240624175814.89611-1-sj@kernel.org
+Fixes: b9a6ac4e4ede ("mm/damon: adaptively adjust regions")
+Signed-off-by: SeongJae Park
+Cc: <stable@vger.kernel.org>	[5.15+]
+Signed-off-by: Andrew Morton
+(cherry picked from commit 310d6c15e9104c99d5d9d0ff8e5383a79da7d5e6)
+Signed-off-by: SeongJae Park
+[Remove use of unexisting damon_ctx->attrs field]
+Signed-off-by: Greg Kroah-Hartman
+---
+ mm/damon/core.c | 21 +++++++++++++++++++--
+ 1 file changed, 19 insertions(+), 2 deletions(-)
+
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -507,14 +507,31 @@ static void damon_merge_regions_of(struc
+  * access frequencies are similar.  This is for minimizing the monitoring
+  * overhead under the dynamically changeable access pattern.  If a merge was
+  * unnecessarily made, later 'kdamond_split_regions()' will revert it.
++ *
++ * The total number of regions could be higher than the user-defined limit,
++ * max_nr_regions for some cases.  For example, the user can update
++ * max_nr_regions to a number that lower than the current number of regions
++ * while DAMON is running.  For such a case, repeat merging until the limit is
++ * met while increasing @threshold up to possible maximum level.
+  */
+ static void kdamond_merge_regions(struct damon_ctx *c, unsigned int threshold,
+ 				  unsigned long sz_limit)
+ {
+ 	struct damon_target *t;
++	unsigned int nr_regions;
++	unsigned int max_thres;
+ 
+-	damon_for_each_target(t, c)
+-		damon_merge_regions_of(t, threshold, sz_limit);
++	max_thres = c->aggr_interval /
++		(c->sample_interval ? c->sample_interval : 1);
++	do {
++		nr_regions = 0;
++		damon_for_each_target(t, c) {
++			damon_merge_regions_of(t, threshold, sz_limit);
++			nr_regions += damon_nr_regions(t);
++		}
++		threshold = max(1, threshold * 2);
++	} while (nr_regions > c->max_nr_regions &&
++			threshold / 2 < max_thres);
+ }
+ 
+ /*
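The shape of the resulting merge loop, as a stand-alone sketch (illustrative
only; merge_once() is a made-up stand-in for damon_merge_regions_of() plus
region counting):

	#include <stdio.h>

	static unsigned int merge_once(unsigned int nr_regions, unsigned int threshold)
	{
		/* pretend a larger access-diff threshold merges away more regions */
		return nr_regions > threshold ? nr_regions / 2 : nr_regions;
	}

	int main(void)
	{
		unsigned int nr_regions = 1000, max_nr_regions = 10;
		unsigned int threshold = 0, max_thres = 100;

		do {
			nr_regions = merge_once(nr_regions, threshold);
			threshold = threshold ? threshold * 2 : 1; /* max(1, threshold * 2) */
		} while (nr_regions > max_nr_regions && threshold / 2 < max_thres);

		printf("%u regions, threshold grew to %u\n", nr_regions, threshold);
		return 0;
	}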
diff --git a/queue-5.15/series b/queue-5.15/series
index b2aef28c25b..fd3eaccfbae 100644
--- a/queue-5.15/series
+++ b/queue-5.15/series
@@ -57,3 +57,11 @@ spi-imx-don-t-expect-dma-for-i.mx-25-35-50-51-53-csp.patch
 selftests-vdso-fix-clang-build-errors-and-warnings.patch
 hfsplus-fix-uninit-value-in-copy_name.patch
 spi-mux-set-ctlr-bits_per_word_mask.patch
+tracing-define-the-is_signed_type-macro-once.patch
+minmax-sanity-check-constant-bounds-when-clamping.patch
+minmax-clamp-more-efficiently-by-avoiding-extra-comparison.patch
+minmax-fix-header-inclusions.patch
+minmax-allow-min-max-clamp-if-the-arguments-have-the-same-signedness.patch
+minmax-allow-comparisons-of-int-against-unsigned-char-short.patch
+minmax-relax-check-to-allow-comparison-between-unsigned-arguments-and-signed-constants.patch
+mm-damon-core-merge-regions-aggressively-when-max_nr_regions-is-unmet.patch
diff --git a/queue-5.15/tracing-define-the-is_signed_type-macro-once.patch b/queue-5.15/tracing-define-the-is_signed_type-macro-once.patch
new file mode 100644
index 00000000000..095ffd0ef1a
--- /dev/null
+++ b/queue-5.15/tracing-define-the-is_signed_type-macro-once.patch
@@ -0,0 +1,72 @@
+From sj@kernel.org Tue Jul 16 20:33:42 2024
+From: SeongJae Park
+Date: Tue, 16 Jul 2024 11:33:26 -0700
+Subject: tracing: Define the is_signed_type() macro once
+To: stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: Bart Van Assche, Luc Van Oostenryck, Steven Rostedt, Ingo Molnar, linux-sparse@vger.kernel.org, linux-kernel@vger.kernel.org, Rasmus Villemoes, Kees Cook, Linus Torvalds, SeongJae Park
+Message-ID: <20240716183333.138498-2-sj@kernel.org>
+
+From: Bart Van Assche
+
+commit a49a64b5bf195381c09202c524f0f84b5f3e816f upstream.
+
+There are two definitions of the is_signed_type() macro: one in
+<linux/overflow.h> and a second definition in <linux/trace_events.h>.
+
+As suggested by Linus, move the definition of the is_signed_type() macro
+into the <linux/compiler.h> header file.  Change the definition of the
+is_signed_type() macro to make sure that it does not trigger any sparse
+warnings with future versions of sparse for bitwise types.
+
+Link: https://lore.kernel.org/all/CAHk-=whjH6p+qzwUdx5SOVVHjS3WvzJQr6mDUwhEyTf6pJWzaQ@mail.gmail.com/
+Link: https://lore.kernel.org/all/CAHk-=wjQGnVfb4jehFR0XyZikdQvCZouE96xR_nnf5kqaM5qqQ@mail.gmail.com/
+Cc: Rasmus Villemoes
+Cc: Steven Rostedt
+Acked-by: Kees Cook
+Signed-off-by: Bart Van Assche
+Signed-off-by: Linus Torvalds
+(cherry picked from commit a49a64b5bf195381c09202c524f0f84b5f3e816f)
+Signed-off-by: SeongJae Park
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/linux/compiler.h     | 6 ++++++
+ include/linux/overflow.h     | 1 -
+ include/linux/trace_events.h | 2 --
+ 3 files changed, 6 insertions(+), 3 deletions(-)
+
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -247,6 +247,12 @@ static inline void *offset_to_ptr(const
+ #define __must_be_array(a)	BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
+ 
+ /*
++ * Whether 'type' is a signed type or an unsigned type.  Supports scalar types,
++ * bool and also pointer types.
++ */
++#define is_signed_type(type) (((type)(-1)) < (__force type)1)
++
++/*
+  * This is needed in functions which generate the stack canary, see
+  * arch/x86/kernel/smpboot.c::start_secondary() for an example.
+  */
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -29,7 +29,6 @@
+  * https://mail-index.netbsd.org/tech-misc/2007/02/05/0000.html -
+  * credit to Christian Biere.
+  */
+-#define is_signed_type(type)       (((type)(-1)) < (type)1)
+ #define __type_half_max(type) ((type)1 << (8*sizeof(type) - 1 - is_signed_type(type)))
+ #define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
+ #define type_min(T) ((T)((T)-type_max(T)-(T)1))
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -801,8 +801,6 @@ extern int trace_add_event_call(struct t
+ extern int trace_remove_event_call(struct trace_event_call *call);
+ extern int trace_event_get_offsets(struct trace_event_call *call);
+ 
+-#define is_signed_type(type)	(((type)(-1)) < (type)1)
+-
+ int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set);
+ int trace_set_clr_event(const char *system, const char *event, int set);
+ int trace_array_set_clr_event(struct trace_array *tr, const char *system,
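Finally, a user-space sketch of the consolidated macro's behavior (the
__force annotation is sparse-only and dropped here; pointer behavior is
noted rather than asserted, since comparing casted pointer constants is
implementation-defined):

	#include <assert.h>

	#define is_signed_type(type) (((type)(-1)) < (type)1)

	int main(void)
	{
		assert(is_signed_type(int));
		assert(is_signed_type(long long));
		assert(!is_signed_type(unsigned char));
		assert(!is_signed_type(_Bool));	/* (_Bool)-1 is 1, not less than 1 */
		/* pointer types also work, comparing as unsigned on common targets */
		return 0;
	}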