From: Tianci Cao
Date: Wed, 4 Feb 2026 11:15:03 +0000 (+0800)
Subject: selftests/bpf: Add tests for BPF_END bitwise tracking
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=56415363e02f0f561ecc5bda6a4318438f888b43;p=thirdparty%2Flinux.git

selftests/bpf: Add tests for BPF_END bitwise tracking

Now that BPF_END has bitwise tracking support, this patch adds selftests
to cover the various BPF_END cases (`bswap(16|32|64)`, `be(16|32|64)`,
`le(16|32|64)`) with bitwise propagation.

The patch is based on the existing `verifier_bswap.c` and adds several
types of new tests:

1. Unconditional byte swap operations:
   - bswap16/bswap32/bswap64 with unknown bytes

2. Endian conversion operations (architecture-aware):
   - be16/be32/be64: convert to big-endian
     * on little-endian: do swap
     * on big-endian: truncation (16/32-bit) or no-op (64-bit)
   - le16/le32/le64: convert to little-endian
     * on big-endian: do swap
     * on little-endian: truncation (16/32-bit) or no-op (64-bit)

Each test simulates a realistic networking scenario where a value is
masked so that only some bits are unknown (e.g., var_off=(0x0; 0x3f00),
range=[0, 0x3f00]) and then byte-swapped; the verifier must prove that
the result stays within the expected bounds.

Specifically, these selftests rely on dead code elimination: if the BPF
verifier can precisely track bitwise information through byte swap
operations, it can prune the trap path (an invalid memory access) that
should be unreachable, and the program passes verification. If the
bitwise tracking is incorrect, the verifier cannot prove the trap is
unreachable and verification fails.

The tests use preprocessor conditionals on __BYTE_ORDER__ to verify
correct behavior on both little-endian and big-endian architectures, and
require Clang 18+ for bswap instruction support.
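For illustration only (this sketch is not part of the patch): a minimal
userspace C check of the bswap16 case, assuming GCC/Clang's
__builtin_bswap16, which shows why the trap branch is unreachable when
the unknown bits are confined to the mask 0x3f00:

	/* Illustrative sketch: exhaustively verify that every 16-bit value
	 * whose set bits fit the mask 0x3f00 byte-swaps to a value no larger
	 * than 0x3f, i.e. the "r0 > 0x3f" trap path in bswap16_range is dead.
	 */
	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		for (uint32_t v = 0; v <= 0xffffu; v++) {
			if (v & ~0x3f00u)
				continue;	/* outside var_off=(0x0; 0x3f00) */
			assert(__builtin_bswap16((uint16_t)v) <= 0x3f);
		}
		return 0;
	}

The same exhaustive argument carries over to the 32- and 64-bit variants,
where the single unknown byte moves to bit positions 16..23 and 48..55
respectively.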
Co-developed-by: Shenghao Yuan
Signed-off-by: Shenghao Yuan
Co-developed-by: Yazhou Tang
Signed-off-by: Yazhou Tang
Signed-off-by: Tianci Cao
Acked-by: Eduard Zingerman
Link: https://lore.kernel.org/r/20260204111503.77871-3-ziye@zju.edu.cn
Signed-off-by: Alexei Starovoitov
---

diff --git a/tools/testing/selftests/bpf/progs/verifier_bswap.c b/tools/testing/selftests/bpf/progs/verifier_bswap.c
index e61755656e8d7..4b779deee7672 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bswap.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bswap.c
@@ -48,6 +48,49 @@ __naked void bswap_64(void)
 	: __clobber_all);
 }
 
+#define BSWAP_RANGE_TEST(name, op, in_value, out_value)			\
+	SEC("socket")								\
+	__success __log_level(2)						\
+	__msg("r0 &= {{.*}}; R0=scalar({{.*}},var_off=(0x0; " #in_value "))")	\
+	__msg("r0 = " op " r0 {{.*}}; R0=scalar({{.*}},var_off=(0x0; " #out_value "))") \
+	__naked void name(void)							\
+	{									\
+		asm volatile (							\
+		"call %[bpf_get_prandom_u32];"					\
+		"r0 &= " #in_value ";"						\
+		"r0 = " op " r0;"						\
+		"r2 = " #out_value " ll;"					\
+		"if r0 > r2 goto trap_%=;"					\
+		"r0 = 0;"							\
+		"exit;"								\
+		"trap_%=:"							\
+		"r1 = 42;"							\
+		"r0 = *(u64 *)(r1 + 0);"					\
+		"exit;"								\
+		:								\
+		: __imm(bpf_get_prandom_u32)					\
+		: __clobber_all);						\
+	}
+
+BSWAP_RANGE_TEST(bswap16_range, "bswap16", 0x3f00, 0x3f)
+BSWAP_RANGE_TEST(bswap32_range, "bswap32", 0x3f00, 0x3f0000)
+BSWAP_RANGE_TEST(bswap64_range, "bswap64", 0x3f00, 0x3f000000000000)
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+BSWAP_RANGE_TEST(be16_range, "be16", 0x3f00, 0x3f)
+BSWAP_RANGE_TEST(be32_range, "be32", 0x3f00, 0x3f0000)
+BSWAP_RANGE_TEST(be64_range, "be64", 0x3f00, 0x3f000000000000)
+BSWAP_RANGE_TEST(le16_range, "le16", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(le32_range, "le32", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(le64_range, "le64", 0x3f00, 0x3f00)
+#else
+BSWAP_RANGE_TEST(be16_range, "be16", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(be32_range, "be32", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(be64_range, "be64", 0x3f00, 0x3f00)
+BSWAP_RANGE_TEST(le16_range, "le16", 0x3f00, 0x3f)
+BSWAP_RANGE_TEST(le32_range, "le32", 0x3f00, 0x3f0000)
+BSWAP_RANGE_TEST(le64_range, "le64", 0x3f00, 0x3f000000000000)
+#endif
+
 #else
 
 SEC("socket")
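
For reference, the first instantiation above,
BSWAP_RANGE_TEST(bswap16_range, "bswap16", 0x3f00, 0x3f), expands roughly
as follows (illustrative expansion; the comments are added here and are
not part of the patch):

	SEC("socket")
	__success __log_level(2)
	__msg("r0 &= {{.*}}; R0=scalar({{.*}},var_off=(0x0; 0x3f00))")
	__msg("r0 = bswap16 r0 {{.*}}; R0=scalar({{.*}},var_off=(0x0; 0x3f))")
	__naked void bswap16_range(void)
	{
		asm volatile (
		"call %[bpf_get_prandom_u32];"	/* r0 = unknown scalar */
		"r0 &= 0x3f00;"			/* var_off becomes (0x0; 0x3f00) */
		"r0 = bswap16 r0;"		/* verifier must derive var_off=(0x0; 0x3f) */
		"r2 = 0x3f ll;"
		"if r0 > r2 goto trap_%=;"	/* dead branch if tracking is precise */
		"r0 = 0;"
		"exit;"
		"trap_%=:"
		"r1 = 42;"
		"r0 = *(u64 *)(r1 + 0);"	/* invalid access; must be pruned */
		"exit;"
		:
		: __imm(bpf_get_prandom_u32)
		: __clobber_all);
	}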