From: Yonghong Song
Date: Tue, 23 Jul 2024 16:29:40 +0000 (-0700)
Subject: selftests/bpf: Add reg_bounds tests for ldsx and subreg compare
X-Git-Tag: v6.12-rc1~60^2~33
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=26672b5caf3cbeeffe8651035dddff95f610e7ec;p=thirdparty%2Fkernel%2Flinux.git

selftests/bpf: Add reg_bounds tests for ldsx and subreg compare

Add a few reg_bounds selftests to test 32/16/8-bit ldsx and subreg
comparison. Without the previous patch, all added tests will fail.

Acked-by: Eduard Zingerman
Signed-off-by: Yonghong Song
Link: https://lore.kernel.org/r/20240723162940.2732171-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov
Signed-off-by: Andrii Nakryiko
---

diff --git a/tools/testing/selftests/bpf/prog_tests/reg_bounds.c b/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
index eb74363f9f701..0da4225749bd6 100644
--- a/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
+++ b/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
@@ -433,6 +433,19 @@ static struct range range_refine(enum num_t x_t, struct range x, enum num_t y_t,
 
 	y_cast = range_cast(y_t, x_t, y);
 
+	/* If we know that
+	 *   - *x* is in the range of signed 32bit value, and
+	 *   - *y_cast* range is 32-bit signed non-negative
+	 * then *x* range can be improved with *y_cast* such that *x* range
+	 * is 32-bit signed non-negative. Otherwise, if the new range for *x*
+	 * allows upper 32-bit * 0xffffffff then the eventual new range for
+	 * *x* will be out of signed 32-bit range which violates the origin
+	 * *x* range.
+	 */
+	if (x_t == S64 && y_t == S32 && y_cast.a <= S32_MAX && y_cast.b <= S32_MAX &&
+	    (s64)x.a >= S32_MIN && (s64)x.b <= S32_MAX)
+		return range_improve(x_t, x, y_cast);
+
 	/* the case when new range knowledge, *y*, is a 32-bit subregister
 	 * range, while previous range knowledge, *x*, is a full register
 	 * 64-bit range, needs special treatment to take into account upper 32
@@ -2108,6 +2121,9 @@ static struct subtest_case crafted_cases[] = {
 	{S32, U32, {(u32)S32_MIN, 0}, {0, 0}},
 	{S32, U32, {(u32)S32_MIN, 0}, {(u32)S32_MIN, (u32)S32_MIN}},
 	{S32, U32, {(u32)S32_MIN, S32_MAX}, {S32_MAX, S32_MAX}},
+	{S64, U32, {0x0, 0x1f}, {0xffffffff80000000ULL, 0x000000007fffffffULL}},
+	{S64, U32, {0x0, 0x1f}, {0xffffffffffff8000ULL, 0x0000000000007fffULL}},
+	{S64, U32, {0x0, 0x1f}, {0xffffffffffffff80ULL, 0x000000000000007fULL}},
 };
 
 /* Go over crafted hard-coded cases. This is fast, so we do it as part of
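
To make the reasoning behind the new range_refine() branch and the crafted cases
concrete, here is a minimal standalone sketch. It is not code from the patch or
from the reg_bounds harness: struct range, the S32_* constants, the ldsx[] table
and new_branch_applies() below are local stand-ins chosen for illustration. The
three added ranges are the 64-bit sign-extended value ranges produced by 32-,
16- and 8-bit ldsx loads, and once a subreg compare has proven the loaded value
non-negative, the condition added above is expected to hold for all of them.

/*
 * Standalone sketch, not part of the patch: approximates the condition
 * added to range_refine() and checks it against the three crafted cases.
 * All types, constants and helpers here are local stand-ins for the
 * selftest harness's own definitions.
 */
#include <stdint.h>
#include <stdio.h>

#define MY_S32_MIN  INT32_MIN
#define MY_S32_MAX  INT32_MAX

struct range { uint64_t a, b; };	/* [a, b]; interpretation depends on num_t */

/* Approximation of the new branch (the real code also checks
 * x_t == S64 && y_t == S32): x already fits in signed 32 bits and y_cast
 * is 32-bit signed non-negative, so improving x with y_cast cannot push
 * x out of the signed 32-bit range via the upper 32 bits. */
static int new_branch_applies(struct range x, struct range y_cast)
{
	return y_cast.a <= (uint64_t)MY_S32_MAX && y_cast.b <= (uint64_t)MY_S32_MAX &&
	       (int64_t)x.a >= MY_S32_MIN && (int64_t)x.b <= MY_S32_MAX;
}

int main(void)
{
	/* y ranges from the crafted cases: results of 32/16/8-bit
	 * sign-extending loads, i.e. [S32_MIN, S32_MAX], [S16_MIN, S16_MAX]
	 * and [S8_MIN, S8_MAX] written as 64-bit two's complement. */
	struct range ldsx[] = {
		{0xffffffff80000000ULL, 0x000000007fffffffULL},	/* 32-bit ldsx */
		{0xffffffffffff8000ULL, 0x0000000000007fffULL},	/* 16-bit ldsx */
		{0xffffffffffffff80ULL, 0x000000000000007fULL},	/*  8-bit ldsx */
	};
	struct range x = {0x0, 0x1f};	/* x range from the crafted cases */

	for (int i = 0; i < 3; i++) {
		/* After a subreg compare proves the 32-bit value non-negative,
		 * the harness casts that knowledge up to 64 bits; here we
		 * simply take the non-negative half of the ldsx range. */
		struct range y_cast = {0, ldsx[i].b};

		printf("case %d: new branch %s\n", i,
		       new_branch_applies(x, y_cast) ? "applies" : "does not apply");
	}
	return 0;
}

Built with any C99 compiler, this should report that the branch applies in all
three cases, which is the situation the crafted cases exercise: with the
previous patch, the refined *x* range stays within the signed 32-bit range
rather than being widened into the upper 32 bits.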