Mykyta Yatsenko [Fri, 30 Jan 2026 20:42:10 +0000 (20:42 +0000)]
bpf: Introduce struct bpf_map_desc in verifier
Introduce struct bpf_map_desc to hold bpf_map pointer and map uid. Use
this struct in both bpf_call_arg_meta and bpf_kfunc_call_arg_meta
instead of having different representations:
- bpf_call_arg_meta had separate map_ptr and map_uid fields
- bpf_kfunc_call_arg_meta had an anonymous inline struct
This unifies the map field layout across both metadata structures,
making the code more consistent and preparing for further refactoring of
map field pointer validation.
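For illustration, a minimal sketch of the shape this describes; the field names here are assumptions, not necessarily the ones used in the patch:
```c
/* Sketch only: one descriptor grouping what bpf_call_arg_meta previously
 * carried as separate map_ptr and map_uid fields. */
struct bpf_map_desc {
	struct bpf_map *ptr;	/* the map referenced by the argument */
	u32 uid;		/* distinguishes inner map instances of a map-in-map */
};
```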
====================
x86/fgraph,bpf: Fix ORC stack unwind from kprobe_multi
hi,
Mahe reported a missing function from the stack trace on top of a kprobe multi
program. It turned out the latest fix [1] needs some more fixing.
v2 changes:
- keep the unwind the same as for kprobes: the attached function
is part of the entry probe stacktrace, not the kretprobe [Steven]
- several changes in trigger bench [Andrii]
- added selftests for standard kprobes and fentry/fexit probes [Andrii]
Note I'll try to add a similar stacktrace adjustment for fentry/fexit
in a separate patchset to not complicate this change.
Jiri Olsa [Mon, 26 Jan 2026 21:18:33 +0000 (22:18 +0100)]
x86/fgraph,bpf: Switch kprobe_multi program stack unwind to hw_regs path
Mahe reported a missing function from the stack trace taken on top of a
kprobe multi program. The missing function is the very first one in the
stacktrace, the one that the bpf program is attached to.
The reason is that the previous change (the Fixes commit) fixed
stack unwind for tracepoints, but removed the attached function address
from the stack trace on top of kprobe multi programs, which I also
overlooked in the related test (see the following patch).
The tracepoint and kprobe_multi have different stack setups, but use the
same unwind path. I think it's better to keep the previous change,
which fixed the tracepoint unwind, and instead change the kprobe multi
unwind as explained below.
The bpf program stack unwind calls perf_callchain_kernel for the kernel
portion, and it follows one of two unwind paths based on the X86_EFLAGS_FIXED
bit in pt_regs.flags.
When the bit is set, we unwind from the stack represented by the pt_regs
argument; otherwise we unwind the currently executed stack up to the
'first_frame' boundary.
The 'first_frame' value is taken from the regs.rsp value, but the ftrace_caller
and ftrace_regs_caller (ftrace trampoline) functions set regs.rsp to the
previous stack frame, so we skip the attached function entry.
If we switch kprobe_multi unwind to use the X86_EFLAGS_FIXED bit,
we set the start of the unwind to the attached function address.
As another benefit we also cut extra unwind cycles needed to reach
the 'first_frame' boundary.
The speedup can be measured with the trigger bench for the kprobe_multi
program with stacktrace support.
The return probe skips the attached function, because it's no longer
on the stack at the point of the unwind; this matches how a standard
kretprobe works.
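A minimal illustration of the described switch, assuming the flag is set on the pt_regs that kprobe_multi hands to the unwinder (where exactly this happens follows the patch, not this sketch):
```c
/* With X86_EFLAGS_FIXED set, perf_callchain_kernel() takes the
 * "unwind from pt_regs" branch (see the snippet quoted in the next
 * patch), so the unwind starts at the attached function address. */
regs->flags |= X86_EFLAGS_FIXED;
```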
Jiri Olsa [Mon, 26 Jan 2026 21:18:32 +0000 (22:18 +0100)]
x86/fgraph: Fix return_to_handler regs.rsp value
The previous change (the Fixes commit) messed up the rsp register value:
it is already adjusted by FRAME_SIZE, but we need the original rsp value.
This change does not affect the current fprobe kernel unwind, i.e. the
!perf_hw_regs path in perf_callchain_kernel:
	if (perf_hw_regs(regs)) {
		if (perf_callchain_store(entry, regs->ip))
			return;
		unwind_start(&state, current, regs, NULL);
	} else {
		unwind_start(&state, current, NULL, (void *)regs->sp);
	}
which uses pt_regs.sp as the first_frame boundary (the FRAME_SIZE shift makes
no difference, the unwind still stops at the right frame).
This change fixes the other path, where we want to unwind directly from the
pt_regs sp/fp/ip state, which is coming in the following change.
Fixes: 20a0bc10272f ("x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe")
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/bpf/20260126211837.472802-2-jolsa@kernel.org
Changwoo Min [Fri, 30 Jan 2026 02:18:43 +0000 (11:18 +0900)]
selftests/bpf: Make bpf get_preempt_count() work for v6.14+ kernels
Recent x86 kernels export __preempt_count as a ksym, while some old kernels
between v6.1 and v6.14 expose the preemption counter via
pcpu_hot.preempt_count. The existing selftest helper unconditionally
dereferenced __preempt_count, which breaks BPF program loading on such old
kernels.
Make the x86 preemption count lookup version-agnostic by:
- Marking __preempt_count and pcpu_hot as weak ksyms.
- Introducing a BTF-described pcpu_hot___local layout with
preserve_access_index.
- Selecting the appropriate access path at runtime, based on ksym
availability, using bpf_ksym_exists() and bpf_core_field_exists().
This allows a single BPF binary to run correctly across kernel versions
(e.g., v6.18 vs. v6.13) without relying on compile-time version checks.
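A minimal sketch of the resulting helper shape, assuming the declarations below; the actual selftest code may differ in names and in how the existence checks are written:
```c
extern int __preempt_count __ksym __weak;

/* Local BTF description of the old per-cpu layout, relocated via CO-RE. */
struct pcpu_hot___local {
	int preempt_count;
} __attribute__((preserve_access_index));

extern struct pcpu_hot___local pcpu_hot __ksym __weak;

static __always_inline int get_preempt_count(void)
{
	if (bpf_ksym_exists(&__preempt_count))
		/* recent kernels: __preempt_count exported directly */
		return *(int *)bpf_this_cpu_ptr(&__preempt_count);
	if (bpf_core_field_exists(struct pcpu_hot___local, preempt_count))
		/* v6.1..v6.14 kernels: the counter lives in pcpu_hot */
		return ((struct pcpu_hot___local *)bpf_this_cpu_ptr(&pcpu_hot))->preempt_count;
	return 0;
}
```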
====================
This patchset allows sleepable programs to use tail calls.
At the moment we need to have a separate sleepable uprobe program
to retrieve user space data and pass it to a complex program with
tail calls. It'd be great if the program with tail calls could
be sleepable and do the data retrieval directly.
Jiri Olsa [Fri, 30 Jan 2026 08:12:08 +0000 (09:12 +0100)]
selftests/bpf: Add test for sleepable program tailcalls
Adding a test that makes sure we can't mix sleepable and non-sleepable
bpf programs in a BPF_MAP_TYPE_PROG_ARRAY map and that we can do a
tail call in a sleepable program.
Jiri Olsa [Fri, 30 Jan 2026 08:12:07 +0000 (09:12 +0100)]
bpf: Allow sleepable programs to use tail calls
Allowing sleepable programs to use tail calls.
Making sure we can't mix sleepable and non-sleepable bpf programs
in the tail call map (BPF_MAP_TYPE_PROG_ARRAY) and allowing the map to
be used in sleepable programs.
Sleepable programs can be preempted and sleep, which might bring a
new source of race conditions, but neither direct nor indirect tail
calls should be affected.
Direct tail calls work by patching a direct jump to the callee into the bpf
caller program, so no problem there: we atomically switch from a nop
to a jump instruction.
An indirect tail call reads the callee from the map and then jumps to
it. The callee bpf program can't disappear (be released) under the
caller, because it is executed under an rcu lock (rcu_read_lock_trace).
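A minimal sketch of what this enables, assuming a sleepable uprobe caller and a hypothetical prog array named jmp_table whose slots are filled from user space:
```c
struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

SEC("uprobe.s")	/* sleepable uprobe, attached from user space */
int entry(struct pt_regs *ctx)
{
	/* With this change the tail call is allowed here, and the map may
	 * only hold other sleepable programs (no mixing). */
	bpf_tail_call(ctx, &jmp_table, 0);
	return 0;
}
```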
====================
bpf: Fix verifier_bug_if to account for BPF_CALL
This fixes the verifier_bug_if() that runs on nospec_result to not trigger
for BPF_CALL (bug reported by Hu, Mei, and Mu). See patch 1 for a full
description and patch 2 for a test (based on the PoC from the report).
While working on this I noticed two other problems:
- nospec_result is currently ignored for BPF_CALL during patching, but it
may be required if we assume the CPU may speculate into/out of functions.
- Both the instruction patching for nospec and nospec_result erase the
instruction aux information even though it might be better to keep it.
For nospec_result this may be fine, as it is currently only applied to store
instructions (unless we decide to change the point above), but nospec may
be set for arbitrary instructions, and if these require rewrites they break.
I assume these issues are better fixed separately, thus I decided to
exclude them from this series.
====================
Luis Gerhorst [Tue, 27 Jan 2026 11:59:11 +0000 (12:59 +0100)]
bpf: Fix verifier_bug_if to account for BPF_CALL
The BPF verifier assumes `insn_aux->nospec_result` is only set for
direct memory writes (e.g., `*(u32*)(r1+off) = r2`). However, the
assertion fails to account for helper calls (e.g.,
`bpf_skb_load_bytes_relative`) that perform writes to stack memory. Make
the check more precise to resolve this.
The problem is that `BPF_CALL` instructions have `BPF_CLASS(insn->code)
== BPF_JMP`, which triggers the warning check:
- Helpers like `bpf_skb_load_bytes_relative` write to stack memory
- `check_helper_call()` loops through `meta.access_size`, calling
`check_mem_access(..., BPF_WRITE)`
- `check_stack_write()` sets `insn_aux->nospec_result = 1`
- Since `BPF_CALL` is encoded as `BPF_JMP | BPF_CALL`, the warning fires
Execution flow:
```
1. Drop capabilities → Enable Spectre mitigation
2. Load BPF program
└─> do_check()
├─> check_cond_jmp_op() → Marks dead branch as speculative
│ └─> push_stack(..., speculative=true)
├─> pop_stack() → state->speculative = 1
├─> check_helper_call() → Processes helper in dead branch
│ └─> check_mem_access(..., BPF_WRITE)
│ └─> insn_aux->nospec_result = 1
└─> Checks: state->speculative && insn_aux->nospec_result
└─> BPF_CLASS(insn->code) == BPF_JMP → WARNING
```
To fix the assert, it would be nice to be able to reuse
bpf_insn_successors() here, but bpf_insn_successors()->cnt is not
exactly what we want as it may also be 1 for BPF_JA. Instead, we could
check opcode_info.can_jump, but then we would have to share the table
between the functions. This would mean moving the table out of the
function and adding bpf_opcode_info(). As the verifier_bug_if() only
runs for insns with nospec_result set, the impact on verification time
would likely still be negligible. However, I assume sharing
bpf_opcode_info() between liveness.c and verifier.c will not be worth
it. It seems as if only adjust_jmp_off() could also be simplified using it,
and there imm/off is touched. Thus it is perhaps better to rely on exact
opcode/class matching there.
Therefore, to avoid this sharing only for a verifier_bug_if(), just
check the opcode. This should now cover all opcodes for which can_jump
in bpf_insn_successors() is true.
Parts of the description and example are taken from the bug report.
Fixes: dadb59104c64 ("bpf: Fix aux usage after do_check_insn()")
Signed-off-by: Luis Gerhorst <luis.gerhorst@fau.de>
Reported-by: Yinhao Hu <dddddd@hust.edu.cn>
Reported-by: Kaiyan Mei <M202472210@hust.edu.cn>
Reported-by: Dongliang Mu <dzm91@hust.edu.cn>
Closes: https://lore.kernel.org/bpf/7678017d-b760-4053-a2d8-a6879b0dbeeb@hust.edu.cn/
Link: https://lore.kernel.org/r/20260127115912.3026761-2-luis.gerhorst@fau.de
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Ihor Solodrai [Wed, 28 Jan 2026 21:12:55 +0000 (13:12 -0800)]
bpftool: Fix dependencies for static build
When building selftests/bpf with EXTRA_LDFLAGS=-static the following
error happens:
LINK /ws/linux/tools/testing/selftests/bpf/tools/build/bpftool/bootstrap/bpftool
/usr/bin/x86_64-linux-gnu-ld.bfd: /usr/lib/gcc/x86_64-linux-gnu/15/../../../x86_64-linux-gnu/libcrypto.a(libcrypto-lib-dso_dlfcn.o): in function `dlfcn_globallookup':
[...]
/usr/bin/x86_64-linux-gnu-ld.bfd: /usr/lib/gcc/x86_64-linux-gnu/15/../../../x86_64-linux-gnu/libcrypto.a(libcrypto-lib-c_zlib.o): in function `zlib_oneshot_expand_block':
(.text+0xc64): undefined reference to `uncompress'
/usr/bin/x86_64-linux-gnu-ld.bfd: /usr/lib/gcc/x86_64-linux-gnu/15/../../../x86_64-linux-gnu/libcrypto.a(libcrypto-lib-c_zlib.o): in function `zlib_oneshot_compress_block':
(.text+0xce4): undefined reference to `compress'
collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:252: /ws/linux/tools/testing/selftests/bpf/tools/build/bpftool/bootstrap/bpftool] Error 1
make: *** [Makefile:327: /ws/linux/tools/testing/selftests/bpf/tools/sbin/bpftool] Error 2
make: *** Waiting for unfinished jobs....
This is caused by wrong order of dependencies in the Makefile. Fix it.
Mykyta Yatsenko [Wed, 28 Jan 2026 19:05:51 +0000 (19:05 +0000)]
selftests/bpf: Remove xxd util dependency
The verification signature header generation requires converting a
binary certificate to a C array. Previously this only worked with
xxd (part of the vim-common package).
As xxd may not be available on some systems building selftests, it makes
sense to substitute it with more common utilities (hexdump, wc, sed) to
generate equivalent C array output.
Tested by generating header with both xxd and hexdump and comparing
them.
The speedup seems to be related to the fact that with a single ftrace_ops object
we don't call ftrace_shutdown anymore (we use ftrace_update_ops instead) and
we skip the synchronize_rcu calls (each ~100ms) at the end of that function.
v6 changes:
- rename add_hash_entry_direct to add_ftrace_hash_entry_direct [Steven]
- factor hash_add/hash_sub [Steven]
- add kerneldoc header for update_ftrace_direct_* functions [Steven]
- few assorted smaller fixes [Steven]
- added missing direct_ops wrappers for !CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
case [Steven]
v5 changes:
- do not export ftrace_hash object [Steven]
- fix update_ftrace_direct_add new_filter_hash leak [ci]
v4 changes:
- rebased on top of bpf-next/master (with jmp attach changes)
added patch 1 to deal with that
- added extra checks for update_ftrace_direct_del/mod to address
the ci bot review
v3 changes:
- rebased on top of bpf-next/master
- fixed update_ftrace_direct_del cleanup path
- added missing inline to update_ftrace_direct_* stubs
v2 changes:
- rebased on top of bpf-next/master plus Song's livepatch fixes [1]
- renamed the API functions [2] [Steven]
- do not export the new api [Steven]
- kept the original direct interface:
I'm not sure if we want to melt both *_ftrace_direct and the new interface
into a single one. It's a bit different in semantics (hence the name change, as
Steven suggested [2]) and the changes are not that big, so we can easily
keep both APIs.
v1 changes:
- make the change x86 specific, after discussing with Mark options for
arm64 [Mark]
Jiri Olsa [Tue, 30 Dec 2025 14:50:10 +0000 (15:50 +0100)]
bpf,x86: Use single ftrace_ops for direct calls
Using a single ftrace_ops object for direct call updates instead of allocating
a ftrace_ops object for each trampoline.
With a single ftrace_ops object we can use the update_ftrace_direct_* API
that allows multiple ip site updates on a single ftrace_ops object.
Adding the HAVE_SINGLE_FTRACE_DIRECT_OPS config option to be enabled on
each arch that supports this.
At the moment we can enable this only on the x86 arch, because arm relies
on the ftrace_ops object representing just a single trampoline image (stored
in ftrace_ops::direct_call). Archs that do not support this will continue
to use the *_ftrace_direct API.
Jiri Olsa [Tue, 30 Dec 2025 14:50:07 +0000 (15:50 +0100)]
ftrace: Add update_ftrace_direct_mod function
Adding the update_ftrace_direct_mod function that modifies all entries
(ip -> direct) provided in the hash argument on the direct ftrace ops and
updates its attachments.
The difference from the current modify_ftrace_direct is:
- the hash argument allows modifying multiple ip -> direct
entries at once
This change will allow us to have a simple ftrace_ops for all bpf
direct interface users in the following changes.
Jiri Olsa [Tue, 30 Dec 2025 14:50:06 +0000 (15:50 +0100)]
ftrace: Add update_ftrace_direct_del function
Adding the update_ftrace_direct_del function that removes all entries
(ip -> addr) provided in the hash argument from the direct ftrace ops and
updates its attachments.
The difference from the current unregister_ftrace_direct is:
- the hash argument allows unregistering multiple ip -> direct
entries at once
- we can call update_ftrace_direct_del multiple times on the
same ftrace_ops object, because we do not need to unregister
all entries at once; we can do it gradually with the help of
the ftrace_update_ops function
This change will allow us to have a simple ftrace_ops for all bpf
direct interface users in the following changes.
Jiri Olsa [Tue, 30 Dec 2025 14:50:05 +0000 (15:50 +0100)]
ftrace: Add update_ftrace_direct_add function
Adding the update_ftrace_direct_add function that adds all entries
(ip -> addr) provided in the hash argument to the direct ftrace ops
and updates its attachments.
The difference from the current register_ftrace_direct is:
- the hash argument allows registering multiple ip -> direct
entries at once
- we can call update_ftrace_direct_add multiple times on the
same ftrace_ops object, because after the first registration with
register_ftrace_function_nolock, it uses ftrace_update_ops to
update the ftrace_ops object
This change will allow us to have a simple ftrace_ops for all bpf
direct interface users in the following changes.
Jiri Olsa [Tue, 30 Dec 2025 14:50:02 +0000 (15:50 +0100)]
ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
At the moment we allow the jmp attach only for ftrace_ops that
have FTRACE_OPS_FL_JMP set. This conflicts with the following changes,
where we use a single ftrace_ops object for all direct call sites,
so each could be attached via either call or jmp.
We already limit the jmp attach support with a config option and a bit
(LSB) set on the trampoline address. It turns out that's actually
enough to limit the jmp attach per architecture and only for chosen
addresses (with the LSB set).
Each user of register_ftrace_direct or modify_ftrace_direct can set
the trampoline bit (LSB) to indicate it has to be attached by jmp.
The bpf trampoline generation code uses trampoline flags to generate
jmp-attach specific code and the ftrace inner code uses the trampoline
bit (LSB) to handle the return from a jmp attachment, so there's no harm
in removing the FTRACE_OPS_FL_JMP bit.
The fexit/fmodret performance stays the same (did not drop),
current code:
Guillaume Gonnet [Tue, 27 Jan 2026 16:02:00 +0000 (17:02 +0100)]
bpf: Fix tcx/netkit detach permissions when prog fd isn't given
This commit fixes a security issue where BPF_PROG_DETACH on tcx or
netkit devices could be executed by any user when no program fd was
provided, bypassing permission checks. The fix adds a capability
check for CAP_NET_ADMIN or CAP_SYS_ADMIN in this case.
====================
bpf: Fix FIONREAD and copied_seq issues
syzkaller reported a bug [1] where a socket using sockmap, after being
unloaded, exposed incorrect copied_seq calculation. The selftest I
provided can be used to reproduce the issue reported by syzkaller.
A sockmap socket maintains its own receive queue (ingress_msg) which may
contain data from either its own protocol stack or forwarded from other
sockets.
FD1:read()
-- FD1->copied_seq++
| [read data]
|
[enqueue data] v
[sockmap] -> ingress to self -> ingress_msg queue
FD1 native stack ------> ^
-- FD1->rcv_nxt++ -> redirect to other | [enqueue data]
| |
| ingress to FD1
v ^
... | [sockmap]
FD2 native stack
The issue occurs when reading from ingress_msg: we update tp->copied_seq
by default, but if the data comes from other sockets (not the socket's
own protocol stack), tcp->rcv_nxt remains unchanged. Later, when
converting back to a native socket, reads may fail as copied_seq could
be significantly larger than rcv_nxt.
Additionally, FIONREAD calculation based on copied_seq and rcv_nxt is
insufficient for sockmap sockets, requiring separate field tracking.
[1] https://syzkaller.appspot.com/bug?extid=06dbd397158ec0ea4983
---
v7 -> v9: Address Jakub Sitnicki's feedback:
- Remove sk_receive_queue check in tcp_bpf_ioctl, only report
ingress_msg data length for FIONREAD
- Minor nits fixes
- Add Reviewed-by tag from John Fastabend
- Fix ci error
https://lore.kernel.org/bpf/20260113025121.197535-1-jiayuan.chen@linux.dev/
v5 -> v7: Some modifications suggested by Jakub Sitnicki, and added Reviewed-by tag.
https://lore.kernel.org/bpf/20260106051458.279151-1-jiayuan.chen@linux.dev/
v1 -> v5: Use skmsg.sk instead of extending BPF_F_XXX macro and fix CI
failure reported by CI
v1: https://lore.kernel.org/bpf/20251117110736.293040-1-jiayuan.chen@linux.dev/
====================
Jiayuan Chen [Sat, 24 Jan 2026 11:32:44 +0000 (19:32 +0800)]
bpf, sockmap: Fix FIONREAD for sockmap
A socket using sockmap has its own independent receive queue: ingress_msg.
This queue may contain data from its own protocol stack or from other
sockets.
Therefore, for sockmap, relying solely on copied_seq and rcv_nxt to
calculate FIONREAD is not enough.
This patch adds a new msg_tot_len field in the psock structure to record
the data length in ingress_msg. Additionally, we implement new ioctl
interfaces for TCP and UDP to intercept FIONREAD operations.
Note that we intentionally do not include sk_receive_queue data in the
FIONREAD result. Data in sk_receive_queue has not yet been processed by
the BPF verdict program, and may be redirected to other sockets or
dropped. Including it would create semantic ambiguity since this data
may never be readable by the user.
Unix and VSOCK sockets have similar issues, but fixing them is outside
the scope of this patch as it would require more intrusive changes.
Previous work by John Fastabend made some efforts towards FIONREAD support:
commit e5c6de5fa025 ("bpf, sockmap: Incorrectly handling copied_seq")
Although the current patch builds on that previous work, it is acceptable
for our Fixes tag to point to the same commit.
FD1:read()
-- FD1->copied_seq++
| [read data]
|
[enqueue data] v
[sockmap] -> ingress to self -> ingress_msg queue
FD1 native stack ------> ^
-- FD1->rcv_nxt++ -> redirect to other | [enqueue data]
| |
| ingress to FD1
v ^
... | [sockmap]
FD2 native stack
A socket using sockmap has its own independent receive queue: ingress_msg.
This queue may contain data from its own protocol stack or from other
sockets.
The issue is that when reading from ingress_msg, we update tp->copied_seq
by default. However, if the data is not from its own protocol stack,
tcp->rcv_nxt is not increased. Later, if we convert this socket to a
native socket, reading from this socket may fail because copied_seq might
be significantly larger than rcv_nxt.
This fix also addresses the syzkaller-reported bug referenced in the
Closes tag.
This patch marks the skmsg objects in ingress_msg. When reading, we update
copied_seq only if the data is from its own protocol stack.
FD1:read()
-- FD1->copied_seq++
| [read data]
|
[enqueue data] v
[sockmap] -> ingress to self -> ingress_msg queue
FD1 native stack ------> ^
-- FD1->rcv_nxt++ -> redirect to other | [enqueue data]
| |
| ingress to FD1
v ^
... | [sockmap]
FD2 native stack
Matt Bobrowski [Tue, 27 Jan 2026 08:51:10 +0000 (08:51 +0000)]
bpf: add new BPF_CGROUP_ITER_CHILDREN control option
Currently, the BPF cgroup iterator supports walking descendants in
either pre-order (BPF_CGROUP_ITER_DESCENDANTS_PRE) or post-order
(BPF_CGROUP_ITER_DESCENDANTS_POST). These modes perform an exhaustive
depth-first search (DFS) of the hierarchy. In scenarios where a BPF
program may need to inspect only the direct children of a given parent
cgroup, a full DFS is unnecessarily expensive.
This patch introduces a new BPF cgroup iterator control option,
BPF_CGROUP_ITER_CHILDREN. This control option restricts the traversal
to the immediate children of a specified parent cgroup, allowing for
more targeted and efficient iteration, particularly when exhaustive
depth-first search (DFS) traversal is not required.
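A hedged user-space sketch of how a loader might request the new order, assuming the value lands in enum bpf_cgroup_iter_order and that parent_cgroup_fd and skel->progs.dump_children (an iter/cgroup program) are placeholders:
```c
union bpf_iter_link_info linfo = {};
DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
struct bpf_link *link;

linfo.cgroup.cgroup_fd = parent_cgroup_fd;	/* walk this cgroup's children */
linfo.cgroup.order = BPF_CGROUP_ITER_CHILDREN;	/* new control option */
opts.link_info = &linfo;
opts.link_info_len = sizeof(linfo);

link = bpf_program__attach_iter(skel->progs.dump_children, &opts);
```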
====================
selftests/bpf: migrate a few bpftool testing scripts
this is the v4 for some bpftool tests conversion. The new tests are
being integrated in test_progs so that they can be executed on each CI
run.
- First commit introduces a few dedicated helpers to execute bpftool
commands, with or without retrieving the generated stdout output
- Second commit integrates test_bpftool_metadata.sh into test_progs
- Third commit integrates test_bpftool_map.sh into test_progs
Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
Changes in v4:
- Port missing map access test in bpftool_metadata
- Link to v3: https://lore.kernel.org/r/20260121-bpftool-tests-v3-0-368632f377e5@bootlin.com
Changes in v3:
- Drop commit reordering objects in Makefile
- Rebased series on ci/bpf-next_base to fix conflict
- Link to v2: https://lore.kernel.org/r/20260121-bpftool-tests-v2-0-64edb47e91ae@bootlin.com
Changes in v2:
- drop standalone runner in favor of test_progs
- Link to v1: https://lore.kernel.org/r/20260114-bpftool-tests-v1-0-cfab1cc9beaf@bootlin.com
selftests/bpf: convert test_bpftool_map_access.sh into test_progs framework
The test_bpftool_map.sh script tests that map read/write accesses
are properly allowed/refused by the kernel depending on a specific
fmod_ret program being attached to the security_bpf_map function.
Rewrite this test to integrate it into test_progs. The
new test spawns a few subtests:
selftests/bpf: convert test_bpftool_metadata.sh into test_progs framework
The test_bpftool_metadata.sh script validates that bpftool properly
returns in its output any metadata generated by bpf programs through
some .rodata sections.
Port this test to the test_progs framework so that it can be executed
automatically in CI. The new test, similarly to the former script,
checks that valid data appears in both textual and json output,
for both unused and used data. For the json
check part, the expected json string is hardcoded to avoid bringing a
new external dependency (e.g. a json deserializer) into test_progs.
As the test is now converted into test_progs, remove the former script.
selftests/bpf: Add a few helpers for bpftool testing
In order to integrate some bpftool tests into test_progs, define a few
dedicated helpers that allow executing bpftool commands and optionally
retrieving the command output. Those helpers most notably set the
path to the bpftool binary under test. This version checks different
possible paths relative to the directories where the different
test_progs runners are executed, as we want to make sure not to
accidentally use a bootstrap version of the binary.
Leon Hwang [Mon, 19 Jan 2026 13:34:17 +0000 (21:34 +0800)]
selftests/bpf: Harden cpu flags test for lru_percpu_hash map
CI occasionally reports failures in the
percpu_alloc/cpu_flag_lru_percpu_hash selftest, for example:
First test_progs failure (test_progs_no_alu32-x86_64-llvm-21):
#264/15 percpu_alloc/cpu_flag_lru_percpu_hash
...
test_percpu_map_op_cpu_flag:FAIL:bpf_map_lookup_batch value on specified cpu unexpected bpf_map_lookup_batch value on specified cpu: actual 0 != expected 3735929054
The unexpected value indicates that an element was removed from the map.
However, the test never calls delete_elem(), so the only possible cause
is LRU eviction.
This can happen when the current task migrates to another CPU: an
update_elem() triggers eviction because there is no available LRU node
on the local or global freelists.
Harden the test against this behavior by provisioning sufficient spare
elements. Set max_entries to 'nr_cpus * 2' and restrict the test to using
the first nr_cpus entries, ensuring that updates do not spuriously trigger
LRU eviction.
This series introduces four new BPF-native inline helpers -- bpf_in_nmi(),
bpf_in_hardirq(), bpf_in_serving_softirq(), and bpf_in_task() -- to allow
BPF programs to query the current execution context.
Following the feedback on v1, these are implemented in bpf_experimental.h
as inline helpers wrapping get_preempt_count(). This approach allows the
logic to be JIT-inlined for better performance compared to a kfunc call,
while providing the granular context detection (e.g., hardirq vs. softirq)
required by subsystems like sched_ext.
The series includes a new selftest suite, exe_ctx, which uses bpf_testmod
to verify context detection across Task, HardIRQ, and SoftIRQ boundaries
via irq_work and tasklets. NMI context testing is omitted as NMIs cannot
be triggered deterministically within software-only BPF CI environments.
ChangeLog v2 -> v3:
- Added exe_ctx to DENYLIST.s390x since new helpers are supported only
on x86 and arm64 (patch 2).
- Added comments to helpers describing supported architectures (patch 1).
ChangeLog v1 -> v2:
- Dropped the core kernel kfunc implementations, and implemented context
detection as inline BPF helpers in bpf_experimental.h.
- Renamed the selftest suite from ctx_kfunc to exe_ctx to reflect the
change from kfuncs to helpers.
- Updated BPF programs to use the new inline helpers.
- Swapped clean-up order between tasklet and irqwork in bpf_testmod to
avoid re-scheduling the already-killed tasklet (reported by bot+bpf-ci).
====================
Changwoo Min [Sun, 25 Jan 2026 11:54:13 +0000 (20:54 +0900)]
selftests/bpf: Add tests for execution context helpers
Add a new selftest suite `exe_ctx` to verify the accuracy of the
bpf_in_task(), bpf_in_hardirq(), and bpf_in_serving_softirq() helpers
introduced in bpf_experimental.h.
Testing these execution contexts deterministically requires crossing
context boundaries within a single CPU. To achieve this, the test
implements a "Trigger-Observer" pattern using bpf_testmod:
1. Trigger: A BPF syscall program calls a new bpf_testmod kfunc
bpf_kfunc_trigger_ctx_check().
2. Task to HardIRQ: The kfunc uses irq_work_queue() to trigger a
self-IPI on the local CPU.
3. HardIRQ to SoftIRQ: The irq_work handler calls a dummy function
(observed by BPF fentry) and then schedules a tasklet to
transition into SoftIRQ context.
The user-space runner ensures determinism by pinning itself to CPU 0
before execution, forcing the entire interrupt chain to remain on a
single core. Dummy noinline functions with compiler barriers are
added to bpf_testmod.c to serve as stable attachment points for
fentry programs. A retry loop is used in user-space to wait for the
asynchronous SoftIRQ to complete.
Note that testing on s390x is avoided because supporting those helpers
purely in BPF on s390x is not possible at this point.
Introduce bpf_in_nmi(), bpf_in_hardirq(), bpf_in_serving_softirq(), and
bpf_in_task() inline helpers in bpf_experimental.h. These allow BPF
programs to query the current execution context with higher granularity
than the existing bpf_in_interrupt() helper.
While BPF programs can often infer their context from attachment points,
subsystems like sched_ext may call the same BPF logic from multiple
contexts (e.g., task-to-task wake-ups vs. interrupt-to-task wake-ups).
These helpers provide a reliable way for logic to branch based on
the current CPU execution state.
Implementing these as BPF-native inline helpers wrapping
get_preempt_count() allows the compiler and JIT to inline the logic. The
implementation accounts for differences in preempt_count layout between
standard and PREEMPT_RT kernels.
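A hedged sketch of the non-PREEMPT_RT logic, using the preempt_count mask names from include/linux/preempt.h (a BPF-side implementation would need these constants available; the actual helpers in bpf_experimental.h also handle the PREEMPT_RT layout):
```c
static __always_inline bool bpf_in_nmi(void)
{
	return get_preempt_count() & NMI_MASK;
}

static __always_inline bool bpf_in_hardirq(void)
{
	return get_preempt_count() & HARDIRQ_MASK;
}

static __always_inline bool bpf_in_serving_softirq(void)
{
	return get_preempt_count() & SOFTIRQ_OFFSET;
}

static __always_inline bool bpf_in_task(void)
{
	/* not in NMI, hardirq or softirq-serving context */
	return !(get_preempt_count() & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET));
}
```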
overall
-------
Sometimes, we need to hook both the entry and exit of a function with
TRACING. Therefore, we need to define a FENTRY and a FEXIT for the target
function, which is not convenient.
Therefore, we add tracing session support for TRACING. Generally
speaking, it's similar to the kprobe session, which can hook both the entry
and exit of a function with a single BPF program.
We allow the usage of bpf_get_func_ret() to get the return value in the
fentry of the tracing session, as it will always return 0 there, which is
safe enough.
Session cookies are also supported with the kfunc bpf_session_cookie().
In order to limit the stack usage, we limit the maximum number of cookies
to 4.
kfunc design
------------
In order to keep consistency with the existing kfuncs, we don't introduce
new kfuncs for fsession. Instead, we reuse the existing kfuncs
bpf_session_cookie() and bpf_session_is_return().
The prototypes of bpf_session_cookie() and bpf_session_is_return() don't
satisfy our needs, so we change them by adding the argument
"void *ctx".
We inline bpf_session_cookie() and bpf_session_is_return() for fsession
directly in the verifier. Therefore, we don't need to introduce new
functions for them.
architecture
------------
The fsession stuff is arch related, so -EOPNOTSUPP will be returned if
it is not yet supported by the arch. In this series, we only support
x86_64; other archs will be implemented later.
Changes v12 -> v13:
* fix the selftests fail on !x86_64 in the 11th patch
* v12: https://lore.kernel.org/bpf/20260124033119.28682-1-dongml2@chinatelecom.cn/
Changes v11 -> v12:
* update the variable "delta" in the 2nd patch
* improve the fsession testcase by adding the 11th patch, which will test
bpf_get_func_* for fsession
* v11: https://lore.kernel.org/bpf/20260123073532.238985-1-dongml2@chinatelecom.cn/
Changes v10 -> v11:
* rebase and fix the conflicts in the 2nd patch
* use "volatile" in the 11th patch
* rename BPF_TRAMP_SHIFT_* to BPF_TRAMP_*_SHIFT
* v10: https://lore.kernel.org/bpf/20260115112246.221082-1-dongml2@chinatelecom.cn/
Changes v9 -> v10:
* 1st patch: some small adjustment, such as use switch in
bpf_prog_has_trampoline()
* 2nd patch: some adjustment to the commit log and comment
* 3rd patch:
- drop the declaration of bpf_session_is_return() and
bpf_session_cookie()
- use vmlinux.h instead of bpf_kfuncs.h in uprobe_multi_session.c,
kprobe_multi_session_cookie.c and uprobe_multi_session_cookie.c
* 4th patch:
- some adjustment to the comment and commit log
- rename the prefix from BPF_TRAMP_M_ to BPF_TRAMP_SHIFT_
- remove the definition of BPF_TRAMP_M_NR_ARGS
- check the program type in bpf_session_filter()
* 5th patch: some adjustment to the commit log
* 6th patch:
- add the "reg" to the function arguments of emit_store_stack_imm64()
- use the positive offset in emit_store_stack_imm64()
* 7th patch:
- use "|" for func_meta instead of "+"
- pass the "func_meta_off" to invoke_bpf() explicitly, instead of
computing it with "stack_size + 8"
- pass the "cookie_off" to invoke_bpf() instead of computing the current
cookie index with "func_meta"
* 8th patch:
- split the modification to bpftool to a separate patch
* v9: https://lore.kernel.org/bpf/20260110141115.537055-1-dongml2@chinatelecom.cn/
Changes v8 -> v9:
* remove the definition of bpf_fsession_cookie and bpf_fsession_is_return
in the 4th and 5th patch
* rename emit_st_r0_imm64() to emit_store_stack_imm64() in the 6th patch
* v8: https://lore.kernel.org/bpf/20260108022450.88086-1-dongml2@chinatelecom.cn/
Changes v7 -> v8:
* use the last byte of nr_args for bpf_get_func_arg_cnt() in the 2nd patch
* v7: https://lore.kernel.org/bpf/20260107064352.291069-1-dongml2@chinatelecom.cn/
Changes v6 -> v7:
* change the prototype of bpf_session_cookie() and bpf_session_is_return(),
and reuse them instead of introducing new kfuncs for fsession.
* v6: https://lore.kernel.org/bpf/20260104122814.183732-1-dongml2@chinatelecom.cn/
Changes v5 -> v6:
* No changes in this version, just a rebase to deal with conflicts.
* v5: https://lore.kernel.org/bpf/20251224130735.201422-1-dongml2@chinatelecom.cn/
Changes v4 -> v5:
* use fsession terminology consistently in all patches
* 1st patch:
- use more explicit way in __bpf_trampoline_link_prog()
* 4th patch:
- remove "cookie_cnt" in struct bpf_trampoline
* 6th patch:
- rename nr_regs to func_md
- define cookie_off in a new line
* 7th patch:
- remove the handling of BPF_TRACE_SESSION in legacy fallback path for
BPF_RAW_TRACEPOINT_OPEN
* v4: https://lore.kernel.org/bpf/20251217095445.218428-1-dongml2@chinatelecom.cn/
Changes v3 -> v4:
* instead of adding a new hlist to progs_hlist in trampoline, add the bpf
program to both the fentry hlist and the fexit hlist.
* introduce the 2nd patch to reuse the nr_args field in the stack to
store all the information we need (except the session cookies).
* limit the maximum number of cookies to 4.
* remove the logic to skip fexit if the fentry return non-zero.
* v3: https://lore.kernel.org/bpf/20251026030143.23807-1-dongml2@chinatelecom.cn/
Changes v2 -> v3:
* squeeze some patches:
- the 2 patches for the kfunc bpf_tracing_is_exit() and
bpf_fsession_cookie() are merged into the second patch.
- the testcases for fsession are also squeezed.
* fix the CI error by moving the testcase for bpf_get_func_ip to
fsession_test.c
* v2: https://lore.kernel.org/bpf/20251022080159.553805-1-dongml2@chinatelecom.cn/
Changes v1 -> v2:
* session cookie support.
In this version, session cookie is implemented, and the kfunc
bpf_fsession_cookie() is added.
* restructure the layout of the stack.
In this version, the session stuff that stored in the stack is changed,
and we locate them after the return value to not break
bpf_get_func_ip().
* testcase enhancement.
Some nits in the testcases suggested by Jiri are fixed. Meanwhile,
the testcases for get_func_ip and session cookie are added too.
* v1: https://lore.kernel.org/bpf/20251018142124.783206-1-dongml2@chinatelecom.cn/
====================
Menglong Dong [Sat, 24 Jan 2026 06:20:05 +0000 (14:20 +0800)]
selftests/bpf: add testcases for fsession
Add testcases for BPF_TRACE_FSESSION. The function arguments and return
value are tested both at entry and at exit, and the kfunc
bpf_session_is_return() is also tested.
Menglong Dong [Sat, 24 Jan 2026 06:20:00 +0000 (14:20 +0800)]
bpf: support fsession for bpf_session_cookie
Implement session cookie for fsession. The session cookies will be stored
in the stack, and the layout of the stack will look like this:
return value -> 8 bytes
argN -> 8 bytes
...
arg1 -> 8 bytes
nr_args -> 8 bytes
ip (optional) -> 8 bytes
cookie2 -> 8 bytes
cookie1 -> 8 bytes
The offset of the cookie for the current bpf program, in 8-byte units, is
stored in (((u64 *)ctx)[-1] >> BPF_TRAMP_COOKIE_INDEX_SHIFT) & 0xFF.
Therefore, we can get the session cookie with ((u64 *)ctx)[-offset].
Implement and inline the bpf_session_cookie() for the fsession in the
verifier.
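Put as code, the lookup described above (sketch only; macro and variable names as quoted in the text):
```c
u64 func_meta = ((u64 *)ctx)[-1];
/* offset of this program's cookie, in 8-byte units */
u64 offset = (func_meta >> BPF_TRAMP_COOKIE_INDEX_SHIFT) & 0xFF;
u64 cookie = ((u64 *)ctx)[-offset];
```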
Menglong Dong [Sat, 24 Jan 2026 06:19:57 +0000 (14:19 +0800)]
bpf: use the least significant byte for the nr_args in trampoline
For now, ((u64 *)ctx)[-1] is used to store the nr_args in the trampoline.
However, 1 byte is enough to store such information. Therefore, we use
only the least significant byte of ((u64 *)ctx)[-1] to store the nr_args,
and reserve the rest for other usages.
Menglong Dong [Sat, 24 Jan 2026 06:19:56 +0000 (14:19 +0800)]
bpf: add fsession support
The fsession is something similar to the kprobe session. It allows attaching
a single BPF program to both the entry and the exit of the target
functions.
Introduce the struct bpf_fsession_link, which allows adding the link to
both the fentry and fexit progs_hlist of the trampoline.
Yonghong Song [Fri, 23 Jan 2026 05:51:28 +0000 (21:51 -0800)]
selftests/bpf: Fix xdp_pull_data failure with 64K page
If the argument 'pull_len' of run_test() is 'PULL_MAX' or
'PULL_MAX | PULL_PLUS_ONE', the eventual pull_len size
will be close to the page size. On arm64 systems with 64K pages,
the pull_len size will be close to 64K. But the existing buffer
is close to 9000 bytes, which is not enough to pull from.
For those failing run_test() calls, make the buffer size
pg_sz + (pg_sz / 2)
This way, there will be enough buffer space to pull
regardless of the page size.
Yonghong Song [Fri, 23 Jan 2026 05:51:22 +0000 (21:51 -0800)]
selftests/bpf: Fix task_local_data failure with 64K page
On arm64 systems with 64K pages, the selftest task_local_data has the following
failures:
...
test_task_local_data_basic:PASS:tld_create_key 0 nsec
test_task_local_data_basic:FAIL:tld_create_key unexpected tld_create_key: actual 0 != expected -28
...
test_task_local_data_basic_thread:PASS:run task_main 0 nsec
test_task_local_data_basic_thread:FAIL:task_main retval unexpected error: 2 (errno 0)
test_task_local_data_basic_thread:FAIL:tld_get_data value0 unexpected tld_get_data value0: actual 0 != expected 6268
...
#447/1 task_local_data/task_local_data_basic:FAIL
...
#447/2 task_local_data/task_local_data_race:FAIL
#447 task_local_data:FAIL
When TLD_DYN_DATA_SIZE is the 64K page size, for
struct tld_meta_u {
	_Atomic __u8 cnt;
	__u16 size;
	struct tld_metadata metadata[];
};
the field 'cnt' would overflow. For example, for a 4K page, 'cnt' will
be 4096/64 = 64. But for a 64K page, 'cnt' would be 65536/64 = 1024,
and a __u8 'cnt' cannot hold 1024. To accommodate 64K pages,
'_Atomic __u8 cnt' becomes '_Atomic __u16 cnt'. A few other places
are adjusted accordingly.
In test_task_local_data.c, the value for TLD_DYN_DATA_SIZE is changed
from 4096 to (getpagesize() - 8) since the maximum buffer size for
TLD_DYN_DATA_SIZE is (getpagesize() - 8).
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Tested-by: Alan Maguire <alan.maguire@oracle.com>
Cc: Amery Hung <ameryhung@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Amery Hung <ameryhung@gmail.com>
Link: https://lore.kernel.org/r/20260123055122.494352-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
The TAS fallback can be invoked directly when queued spin locks are
disabled, and through the slow path when paravirt is enabled for queued
spin locks. In the latter case, the res_spin_lock macro will attempt the
fast path and already hold the entry when entering the slow path. This
will lead to creation of extraneous entries that are not released, which
may cause false positives for deadlock detection.
Fix this by always preceding the invocation of the TAS fallback, in every
case, with grabbing the held lock entry, and add a comment to make
note of this.
Kery Qi [Wed, 21 Jan 2026 09:41:16 +0000 (17:41 +0800)]
selftests/bpf: Fix resource leak in serial_test_wq on attach failure
When wq__attach() fails, serial_test_wq() returns early without calling
wq__destroy(), leaking the skeleton resources allocated by
wq__open_and_load(). This causes ASAN leak reports in selftest runs.
Fix this by jumping to a common clean_up label that calls wq__destroy()
on all exit paths after successful open_and_load.
Note that the early return after wq__open_and_load() failure is correct
and doesn't need fixing, since that function returns NULL on failure
(after internally cleaning up any partial allocations).
This patchset introduces bpf_strncasecmp to allow case-insensitive and
limited-length string comparison. This is useful for parsing protocol
headers like HTTP.
---
Changes in v5:
- Fixed the test function numbering
Changes in v4:
- Updated the loop variable to maintain style consistency
Changes in v3:
- Use ternary operator to maintain style consistency
- Reverted unnecessary doc comment about XATTR_SIZE_MAX
Changes in v2:
- Compute max_sz upfront and remove len check from the loop body
- Document that @len is limited by XATTR_SIZE_MAX
====================
Matt Bobrowski [Wed, 21 Jan 2026 09:00:01 +0000 (09:00 +0000)]
bpf: Revert "bpf: drop KF_ACQUIRE flag on BPF kfunc bpf_get_root_mem_cgroup()"
This reverts commit e463b6de9da1 ("bpf: drop KF_ACQUIRE flag on BPF
kfunc bpf_get_root_mem_cgroup()").
The original commit removed the KF_ACQUIRE flag from
bpf_get_root_mem_cgroup() under the assumption that it resulted in
simplified usage. This stemmed from the fact that
bpf_get_root_mem_cgroup() inherently returns a reference to an object
which technically isn't reference counted, therefore there is no
strong requirement to call a matching bpf_put_mem_cgroup() on the
returned reference.
Although technically correct, as per the arguments in the thread [0],
dropping the KF_ACQUIRE flag and losing reference tracking semantics
negatively impacted the usability of bpf_get_root_mem_cgroup() in
practice.
====================
bpf: support bpf_get_func_arg() for BPF_TRACE_RAW_TP
Support bpf_get_func_arg() for BPF_TRACE_RAW_TP by getting the function
argument count from "prog->aux->attach_func_proto" during verifier inline.
Changes v4 -> v5:
* some format adjustment in the 1st patch
* v4: https://lore.kernel.org/bpf/20260120073046.324342-1-dongml2@chinatelecom.cn/
Changes v3 -> v4:
* fix the error of using bpf_get_func_arg() for BPF_TRACE_ITER
* v3: https://lore.kernel.org/bpf/20260119023732.130642-1-dongml2@chinatelecom.cn/
Changes v1 -> v2:
* for nr_args, skip first 'void *__data' argument in btf_trace_##name
typedef
* check the result4 and result5 in the selftests
* v1: https://lore.kernel.org/bpf/20260116035024.98214-1-dongml2@chinatelecom.cn/
====================
Menglong Dong [Wed, 21 Jan 2026 04:43:47 +0000 (12:43 +0800)]
bpf: support bpf_get_func_arg() for BPF_TRACE_RAW_TP
For now, bpf_get_func_arg() and bpf_get_func_arg_cnt() are not supported by
BPF_TRACE_RAW_TP, which makes it inconvenient to get the arguments of a
tracepoint, especially in the case where the position of the arguments in
a tracepoint can change.
The target tracepoint BTF type id is specified at load time,
therefore we can get the function argument count from the function
prototype instead of the stack.
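A minimal sketch of the usage this enables; the tracepoint and the argument index are placeholders:
```c
SEC("tp_btf/sched_process_exec")
int BPF_PROG(on_exec)
{
	__u64 nr_args = bpf_get_func_arg_cnt(ctx);
	__u64 arg0;

	/* With this series both helpers also work for BPF_TRACE_RAW_TP. */
	if (!bpf_get_func_arg(ctx, 0, &arg0))
		bpf_printk("nr_args=%llu arg0=%llx", nr_args, arg0);
	return 0;
}
```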
====================
bpf, x86: inline bpf_get_current_task() for x86_64
Inline bpf_get_current_task() and bpf_get_current_task_btf() for x86_64
to obtain better performance, and add the testcase for it.
Changes since v5:
* remove unnecessary 'ifdef' and __description in the selftests
* v5: https://lore.kernel.org/bpf/20260119070246.249499-1-dongml2@chinatelecom.cn/
Changes since v4:
* don't support the !CONFIG_SMP case
* v4: https://lore.kernel.org/bpf/20260112104529.224645-1-dongml2@chinatelecom.cn/
Changes since v3:
* handle the !CONFIG_SMP case
* ignore the !CONFIG_SMP case in the testcase, as we enable CONFIG_SMP
for x86_64 in the selftests
Changes since v2:
* implement it in the verifier with BPF_MOV64_PERCPU_REG() instead of in
x86_64 JIT (Alexei).
Changes since v1:
* add the testcase
* remove the usage of const_current_task
====================
Mykyta Yatsenko [Tue, 20 Jan 2026 15:59:13 +0000 (15:59 +0000)]
bpf: Simplify bpf_timer_cancel()
Remove the lock from the bpf_timer_cancel() helper. The lock does not
protect from concurrent modification of the bpf_async_cb data fields, as
those are modified in the callback without locking.
Use guard(rcu)() instead of a pair of explicit lock()/unlock() calls.
Introduce bpf_async_update_prog_callback(): a lock-free update of cb->prog
and cb->callback_fn. This function allows updating the prog and callback_fn
fields of struct bpf_async_cb without holding the lock.
For now it is used under the lock from __bpf_async_set_callback(); that
lock will be removed in the next patches.
Lock-free algorithm:
* Acquire a guard reference on prog to prevent it from being freed
during the retry loop.
* Retry loop:
1. Each iteration acquires a new prog reference and stores it
in cb->prog via xchg. The previous prog is released.
2. The loop condition checks if both cb->prog and cb->callback_fn
match what we just wrote. If either differs, a concurrent writer
overwrote our value, and we must retry.
3. When we retry, our previously-stored prog was already released by
the concurrent writer or will be released by us after
overwriting.
* Release guard reference.
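A hedged sketch of that retry loop, not the actual kernel code; the function and field names come from the description above, the rest is an assumption:
```c
static void bpf_async_update_prog_callback(struct bpf_async_cb *cb,
					   struct bpf_prog *prog,
					   void *callback_fn)
{
	struct bpf_prog *old;

	bpf_prog_inc(prog);			/* guard ref for the loop */
	do {
		bpf_prog_inc(prog);		/* ref that lives in cb->prog */
		old = xchg(&cb->prog, prog);
		if (old)
			bpf_prog_put(old);	/* release the previous prog */
		WRITE_ONCE(cb->callback_fn, callback_fn);
		/* retry if a concurrent writer overwrote either field */
	} while (READ_ONCE(cb->prog) != prog ||
		 READ_ONCE(cb->callback_fn) != callback_fn);
	bpf_prog_put(prog);			/* drop guard ref */
}
```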
Matt Bobrowski [Tue, 20 Jan 2026 09:16:30 +0000 (09:16 +0000)]
selftests/bpf: update verifier test for default trusted pointer semantics
Replace the verifier test for default trusted pointer semantics, which
previously relied on BPF kfunc bpf_get_root_mem_cgroup(), with a new
test utilizing dedicated BPF kfuncs defined within the bpf_testmod.
bpf_get_root_mem_cgroup() was modified such that it again relies on
KF_ACQUIRE semantics, therefore no longer making it a suitable
candidate to test BPF verifier default trusted pointer semantics
against.
====================
bpf: Fix memory access flags in helper prototypes
This series adds missing memory access flags (MEM_RDONLY or MEM_WRITE) to
several bpf helper function prototypes that use ARG_PTR_TO_MEM but lack the
correct flag. It also adds a new check in verifier to ensure the flag is
specified.
Missing memory access flags in helper prototypes can lead to critical
correctness issues when the verifier tries to perform code optimization.
After commit 37cce22dbd51 ("bpf: verifier: Refactor helper access type
tracking"), the verifier relies on the memory access flags, rather than
treating all arguments in helper functions as potentially modifying the
pointed-to memory.
Using ARG_PTR_TO_MEM alone without flags does not make sense because:
- If the helper does not change the argument, missing MEM_RDONLY causes the
verifier to incorrectly reject a read-only buffer.
- If the helper does change the argument, missing MEM_WRITE causes the
verifier to incorrectly assume the memory is unchanged, leading to
errors in code optimization.
We have already seen several reports regarding this:
- commit ac44dcc788b9 ("bpf: Fix verifier assumptions of bpf_d_path's
output buffer") adds MEM_WRITE to bpf_d_path;
- commit 2eb7648558a7 ("bpf: Specify access type of bpf_sysctl_get_name
args") adds MEM_WRITE to bpf_sysctl_get_name.
This series looks through all prototypes in the kernel and completes the
flags. It also adds check_mem_arg_rw_flag_ok() and wires it into
check_func_proto() to statically restrict ARG_PTR_TO_MEM from appearing
without memory access flags.
Changelog
=========
v3:
- Rebased to bpf-next to address check_func_proto() signature changes, as
suggested by Eduard Zingerman.
v2:
- Add missing MEM_RDONLY flags to protos with ARG_PTR_TO_FIXED_SIZE_MEM.
Zesen Liu [Tue, 20 Jan 2026 08:28:47 +0000 (16:28 +0800)]
bpf: Require ARG_PTR_TO_MEM with memory flag
Add check to ensure that ARG_PTR_TO_MEM is used with either MEM_WRITE or
MEM_RDONLY.
Using ARG_PTR_TO_MEM alone without flags does not make sense because:
- If the helper does not change the argument, missing MEM_RDONLY causes the
verifier to incorrectly reject a read-only buffer.
- If the helper does change the argument, missing MEM_WRITE causes the
verifier to incorrectly assume the memory is unchanged, leading to errors
in code optimization.
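For illustration, a hypothetical prototype that satisfies the new check (the helper name is made up; the flag names are the real ones):
```c
/* A helper that only reads its buffer argument must say so explicitly. */
static const struct bpf_func_proto bpf_example_proto = {
	.func		= bpf_example,
	.gpl_only	= false,
	.ret_type	= RET_INTEGER,
	.arg1_type	= ARG_PTR_TO_MEM | MEM_RDONLY,	/* read-only buffer */
	.arg2_type	= ARG_CONST_SIZE,
	/* ARG_PTR_TO_MEM without MEM_RDONLY or MEM_WRITE is now rejected */
};
```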
Co-developed-by: Shuran Liu <electronlsr@gmail.com>
Signed-off-by: Shuran Liu <electronlsr@gmail.com>
Co-developed-by: Peili Gao <gplhust955@gmail.com>
Signed-off-by: Peili Gao <gplhust955@gmail.com>
Co-developed-by: Haoran Ni <haoran.ni.cs@gmail.com>
Signed-off-by: Haoran Ni <haoran.ni.cs@gmail.com>
Signed-off-by: Zesen Liu <ftyghome@gmail.com>
Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260120-helper_proto-v3-2-27b0180b4e77@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Zesen Liu [Tue, 20 Jan 2026 08:28:46 +0000 (16:28 +0800)]
bpf: Fix memory access flags in helper prototypes
After commit 37cce22dbd51 ("bpf: verifier: Refactor helper access type tracking"),
the verifier started relying on the access type flags in helper
function prototypes to perform memory access optimizations.
Currently, several helper functions utilizing ARG_PTR_TO_MEM lack the
corresponding MEM_RDONLY or MEM_WRITE flags. This omission causes the
verifier to incorrectly assume that the buffer contents are unchanged
across the helper call. Consequently, the verifier may optimize away
subsequent reads based on this wrong assumption, leading to correctness
issues.
For bpf_get_stack_proto_raw_tp, the original MEM_RDONLY was incorrect
since the helper writes to the buffer. Change it to ARG_PTR_TO_UNINIT_MEM
which correctly indicates write access to potentially uninitialized memory.
Similar issues were recently addressed for specific helpers in commit ac44dcc788b9 ("bpf: Fix verifier assumptions of bpf_d_path's output buffer")
and commit 2eb7648558a7 ("bpf: Specify access type of bpf_sysctl_get_name args").
Fix these prototypes by adding the correct memory access flags.
Fixes: 37cce22dbd51 ("bpf: verifier: Refactor helper access type tracking")
Co-developed-by: Shuran Liu <electronlsr@gmail.com>
Signed-off-by: Shuran Liu <electronlsr@gmail.com>
Co-developed-by: Peili Gao <gplhust955@gmail.com>
Signed-off-by: Peili Gao <gplhust955@gmail.com>
Co-developed-by: Haoran Ni <haoran.ni.cs@gmail.com>
Signed-off-by: Haoran Ni <haoran.ni.cs@gmail.com>
Signed-off-by: Zesen Liu <ftyghome@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260120-helper_proto-v3-1-27b0180b4e77@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
====================
bpf: Add range tracking for BPF_DIV and BPF_MOD
From: Yazhou Tang <tangyazhou518@outlook.com>
Add range tracking (interval analysis) for BPF_DIV and BPF_MOD when
divisor is constant. Please see commit log of 1/2 for more details.
Changes v4 => v5:
1. Rename helper functions `__reset_reg(32|64)_and_tnum` to
`reset_reg(32|64)_and_tnum`. (Alexei)
2. Replace plain C division with `div64_u64` and `div64_s64` for 64-bit
operations, ensuring compatibility with 32-bit architectures. (Alexei
& kernel test robot)
3. Fixup an indent typo in selftest file `verifier_div_mod_bounds.c`.
Changes v3 => v4:
1. Remove verbose helper functions for "division by zero" handling. (Alexei)
2. Put all "reset" logic in one place for clarity, and add 2 helper
function `__reset_reg64_and_tnum` and `__reset_reg32_and_tnum` to
reduce code duplication. (Alexei)
3. Update all multi-line comments to follow the standard kernel style. (Alexei)
4. Add new test cases to cover strictly positive and strictly negative
divisor scenarios in SDIV and SMOD analysis. (Alexei)
5. Fixup a typo in SDIV analysis functions.
Changes v1 => v2:
1. Fixed 2 bugs in sdiv32 analysis logic and corrected the associated
selftest cases. (AI reviewer)
2. Renamed `tnum_bottom` to `tnum_empty` for better clarity, and updated
commit message to explain its role in signed BPF_DIV analysis.
Yazhou Tang [Mon, 19 Jan 2026 08:54:58 +0000 (16:54 +0800)]
selftests/bpf: Add tests for BPF_DIV and BPF_MOD range tracking
Now BPF_DIV has range tracking support via interval analysis. This patch
adds selftests to cover various cases of BPF_DIV and BPF_MOD operations
when the divisor is a constant, also covering both signed and unsigned variants.
This patch includes several types of tests in 32-bit and 64-bit variants.
These selftests are based on dead code elimination:
if the BPF verifier can precisely analyze the result of a BPF_DIV/BPF_MOD
instruction, it can prune the path that leads to an error (here we use
an invalid memory access as the error case), allowing the program to pass
verification.
Yazhou Tang [Mon, 19 Jan 2026 08:54:57 +0000 (16:54 +0800)]
bpf: Add range tracking for BPF_DIV and BPF_MOD
This patch implements range tracking (interval analysis) for BPF_DIV and
BPF_MOD operations when the divisor is a constant, covering both signed
and unsigned variants.
While LLVM typically optimizes integer division and modulo by constants
into multiplication and shift sequences, this optimization is less
effective for the BPF target when dealing with 64-bit arithmetic.
Currently, the verifier does not track bounds for scalar division or
modulo, treating the result as "unbounded". This leads to false positive
rejections for safe code patterns.
For example, the following code (compiled with -O2):
```c
int test(struct pt_regs *ctx) {
	char buffer[6] = {1};
	__u64 x = bpf_ktime_get_ns();
	__u64 res = x % sizeof(buffer);
	char value = buffer[res];
	bpf_printk("res = %llu, val = %d", res, value);
	return 0;
}
```
Without this patch, the verifier fails with "math between map_value
pointer and register with unbounded min value is not allowed" because
it cannot deduce that `r0` is within [0, 5].
According to the BPF instruction set[1], the instruction's offset field
(`insn->off`) is used to distinguish between signed (`off == 1`) and
unsigned division (`off == 0`). Moreover, we also follow the BPF division
and modulo runtime behavior (semantics) to handle special cases, such as
division by zero and signed division overflow.
Here is the overview of the changes made in this patch (See the code comments
for more details and examples):
1. For BPF_DIV: Firstly check whether the divisor is zero. If so, set the
destination register to zero (matching runtime behavior).
For non-zero constant divisors: goto `scalar(32)?_min_max_(u|s)div` functions.
- General cases: compute the new range by dividing max_dividend and
min_dividend by the constant divisor.
- Overflow case (SIGNED_MIN / -1) in signed division: mark the result
as unbounded if the dividend is not a single number.
2. For BPF_MOD: Firstly check whether the divisor is zero. If so, leave the
destination register unchanged (matching runtime behavior).
For non-zero constant divisors: goto `scalar(32)?_min_max_(u|s)mod` functions.
- General case: For signed modulo, the result's sign matches the
dividend's sign. And the result's absolute value is strictly bounded
by `min(abs(dividend), abs(divisor) - 1)`.
- Special care is taken when the divisor is SIGNED_MIN. By casting
to unsigned before negation and subtracting 1, we avoid signed
overflow and correctly calculate the maximum possible magnitude
(`res_max_abs` in the code).
- "Small dividend" case: If the dividend is already within the possible
result range (e.g., [-2, 5] % 10), the operation is an identity
function, and the destination register remains unchanged.
3. In `scalar(32)?_min_max_(u|s)(div|mod)` functions: After updating current
range, reset other ranges and tnum to unbounded/unknown.
e.g., in `scalar_min_max_sdiv`, signed 64-bit range is updated. Then reset
unsigned 64-bit range and 32-bit range to unbounded, and tnum to unknown.
Exception: in BPF_MOD's "small dividend" case, since the result remains
unchanged, we do not reset other ranges/tnum.
4. Also updated existing selftests based on the expected BPF_DIV and
BPF_MOD behavior.
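A hedged sketch of the simplest of these cases (64-bit unsigned BPF_DIV by a non-zero constant divisor), using the bpf_reg_state bounds fields; the real functions in the patch cover more cases and perform the resets described in item 3:
```c
static void scalar_min_max_udiv_sketch(struct bpf_reg_state *dst_reg, u64 d)
{
	/* Unsigned division by a positive constant is monotonic, so the
	 * new bounds are just the old bounds divided by d. */
	dst_reg->umin_value = div64_u64(dst_reg->umin_value, d);
	dst_reg->umax_value = div64_u64(dst_reg->umax_value, d);
	/* signed/32-bit bounds and var_off (tnum) are then reset to
	 * unbounded/unknown, as described above */
}
```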
====================
bpf: Kernel functions with KF_IMPLICIT_ARGS
This series implements a generic "implicit arguments" feature for BPF
kernel functions. For context see prior work [1][2].
A mechanism is created for kfuncs to have arguments that are not
visible to the BPF programs, and are provided to the kernel function
implementation by the verifier.
This mechanism is then used in the kfuncs that have a parameter with
__prog annotation [3], which is the current way of passing struct
bpf_prog_aux pointer to kfuncs.
A function with implicit arguments is marked with the KF_IMPLICIT_ARGS
flag in its BTF_ID_FLAGS entry (illustrated below). In this series, only a
pointer to struct bpf_prog_aux can be implicit, although it is simple to
extend this to more types.
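For illustration, registering a hypothetical kfunc bpf_foo() with the new flag
could follow the existing kfunc id set conventions; this snippet is an
assumption for illustration, not code from the series:
```c
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/module.h>

/* bpf_foo is a hypothetical kfunc used only for illustration. */
BTF_KFUNCS_START(example_kfunc_ids)
BTF_ID_FLAGS(func, bpf_foo, KF_IMPLICIT_ARGS)
BTF_KFUNCS_END(example_kfunc_ids)

static const struct btf_kfunc_id_set example_kfunc_set = {
	.owner = THIS_MODULE,
	.set   = &example_kfunc_ids,
};
```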
The verifier handles a kfunc with KF_IMPLICIT_ARGS by resolving it to
a different (actual) BTF prototype early in verification (patch #3).
A <kfunc>_impl function generated in BTF for a kfunc with implicit
args has neither a "bpf_kfunc" decl tag nor a kernel address. The
verifier will reject a program trying to call such an _impl kfunc.
Calling <kfunc>_impl functions from BPF is only allowed for kfuncs
with an explicit kernel (or kernel module) declaration, that is, in "legacy"
cases. As of this series, there are no legacy kernel functions, as all
__prog users are migrated to KF_IMPLICIT_ARGS. However, the
implementation supports legacy cases in principle.
The series removes the following BPF kernel functions:
- bpf_stream_vprintk_impl
- bpf_task_work_schedule_resume_impl
- bpf_task_work_schedule_signal_impl
- bpf_wq_set_callback_impl
This will break existing BPF programs calling these functions (the
verifier will not load them) on new kernels.
To mitigate, BPF users are advised to use the following pattern [4]:
	if (xxx_impl)
		xxx_impl(..., NULL);
	else
		xxx(...);
This pattern can be wrapped in a macro; a sketch follows.
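A minimal sketch of such a macro, assuming a hypothetical kfunc pair
bpf_foo()/bpf_foo_impl() declared as weak ksyms; the real kfunc names and
argument lists depend on the function being wrapped:
```c
#include <bpf/bpf_helpers.h>

/* Hypothetical kfunc pair, for illustration only. */
extern int bpf_foo(int arg) __weak __ksym;
extern int bpf_foo_impl(int arg, void *aux) __weak __ksym;

/* Call the legacy _impl kfunc when the kernel still provides it, and the
 * KF_IMPLICIT_ARGS variant otherwise.
 */
#define bpf_foo_compat(arg)				\
({							\
	int ___ret;					\
	if (bpf_foo_impl)				\
		___ret = bpf_foo_impl(arg, NULL);	\
	else						\
		___ret = bpf_foo(arg);			\
	___ret;						\
})
```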
The series consists of the following patches:
- patches #1 and #2 are non-functional refactoring in kernel/bpf
- patch #3 defines KF_IMPLICIT_ARGS flag and teaches the verifier
about it
- patches #4-#5 implement btf2btf transformation in resolve_btfids
- patch #6 adds selftests specific to KF_IMPLICIT_ARGS feature
- patches #7-#11 migrate the current users of __prog argument to
KF_IMPLICIT_ARGS
- patch #12 removes __prog arg suffix support from the kernel
- patch #13 updates the docs
v1->v2:
- Replace the following kernel functions with KF_IMPLICIT_ARGS version:
- bpf_stream_vprintk_impl -> bpf_stream_vprintk
- bpf_task_work_schedule_resume_impl -> bpf_task_work_schedule_resume
- bpf_task_work_schedule_signal_impl -> bpf_task_work_schedule_signal
- bpf_wq_set_callback_impl -> bpf_wq_set_callback
- Remove __prog arg suffix support from the verifier
- Rework btf2btf implementation in resolve_btfids
- Distill base and sort BTF before .BTF_ids patching
- Collect kfuncs based on BTF decl tags, before .BTF_ids are patched
- resolve_btfids: use dynamic memory for intermediate data (Andrii)
- verifier: reset .subreg_def for caller saved registers on kfunc
call (Eduard)
- selftests/hid: remove Makefile changes (Benjamin)
- selftests/bpf: Add a patch (#11) migrating struct_ops_assoc test
to KF_IMPLICIT_ARGS
- Various nits across the series (Alexei, Andrii, Eduard)
Ihor Solodrai [Tue, 20 Jan 2026 22:26:36 +0000 (14:26 -0800)]
selftests/bpf: Migrate struct_ops_assoc test to KF_IMPLICIT_ARGS
A test kfunc named bpf_kfunc_multi_st_ops_test_1_impl() is a user of
the __prog suffix. A subsequent patch removes __prog support in favor of
KF_IMPLICIT_ARGS, so migrate this kfunc to use an implicit argument.
Ihor Solodrai [Tue, 20 Jan 2026 22:26:35 +0000 (14:26 -0800)]
bpf: Migrate bpf_stream_vprintk() to KF_IMPLICIT_ARGS
Implement bpf_stream_vprintk with an implicit bpf_prog_aux argument,
and remove bpf_stream_vprintk_impl from the kernel.
Update the selftests to use the new API with the implicit argument.
The bpf_stream_vprintk macro is changed to use the new bpf_stream_vprintk
kfunc, and the extern definition of bpf_stream_vprintk_impl is
replaced accordingly.
Ihor Solodrai [Tue, 20 Jan 2026 22:26:33 +0000 (14:26 -0800)]
HID: Use bpf_wq_set_callback kernel function
Remove extern declaration of bpf_wq_set_callback_impl() from
hid_bpf_helpers.h and replace bpf_wq_set_callback macro with a
corresponding new declaration.
Tested with:
# append tools/testing/selftests/hid/config and build the kernel
$ make -C tools/testing/selftests/hid
# in built kernel
$ ./tools/testing/selftests/hid/hid_bpf -t test_multiply_events_wq
TAP version 13
1..1
# Starting 1 tests from 1 test cases.
# RUN hid_bpf.test_multiply_events_wq ...
[ 2.575520] hid-generic 0003:0001:0A36.0001: hidraw0: USB HID v0.00 Device [test-uhid-device-138] on 138
# OK hid_bpf.test_multiply_events_wq
ok 1 hid_bpf.test_multiply_events_wq
# PASSED: 1 / 1 tests passed.
# Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
PASS
Ihor Solodrai [Tue, 20 Jan 2026 22:26:31 +0000 (14:26 -0800)]
selftests/bpf: Add tests for KF_IMPLICIT_ARGS
Add trivial end-to-end tests to validate that KF_IMPLICIT_ARGS flag is
properly handled by both resolve_btfids and the verifier.
Declare kfuncs in bpf_testmod. Check that bpf_prog_aux pointer is set
in the kfunc implementation. Verify that calls with implicit args and
a legacy case all work.
Ihor Solodrai [Tue, 20 Jan 2026 22:26:30 +0000 (14:26 -0800)]
resolve_btfids: Support for KF_IMPLICIT_ARGS
Implement BTF modifications in resolve_btfids to support BPF kernel
functions with implicit arguments.
For a kfunc marked with KF_IMPLICIT_ARGS flag, a new function
prototype is added to BTF that does not have implicit arguments. The
kfunc's prototype is then updated to a new one in BTF. This prototype
is the intended interface for the BPF programs.
A <func_name>_impl function is added to BTF to make the original kfunc
prototype searchable for the BPF verifier. If a <func_name>_impl
function already exists in BTF, it is interpreted as a legacy case, and
this step is skipped.
Whether an argument is implicit is determined by its type:
currently only `struct bpf_prog_aux *` is supported.
As a result, the BTF associated with the kfunc is changed from the original
kernel prototype (which includes the implicit arguments) to the program-facing
prototype (which omits them), with the original preserved under the
<func_name>_impl name.
Ihor Solodrai [Tue, 20 Jan 2026 22:26:29 +0000 (14:26 -0800)]
resolve_btfids: Introduce finalize_btf() step
As of recent changes [1][2], resolve_btfids executes final adjustments to the
kernel/module BTF before it is embedded into the target binary.
To keep the implementation simple, a clear and stable "pipeline" of
how BTF data flows through resolve_btfids would be helpful. Some BTF
modifications may change the ids of the types, so it is important to
maintain correct order of operations with respect to .BTF_ids
resolution too.
This patch refactors the BTF handling to establish the following
sequence:
- load target ELF sections
- load .BTF_ids symbols
- this will be a dependency of btf2btf transformations in
subsequent patches
- load BTF and its base as is
- (*) btf2btf transformations will happen here
- finalize_btf(), introduced in this patch
- distill base and sort BTF
- resolve and patch .BTF_ids
This approach avoids fixups in the .BTF_ids data when ids change at any
point of BTF processing, because symbol resolution happens on the
finalized, ready-to-dump BTF data.
It also gives flexibility to the BTF transformations, because they happen
on BTF that is not yet distilled or sorted, so types can be freely added,
removed and modified.
Ihor Solodrai [Tue, 20 Jan 2026 22:26:28 +0000 (14:26 -0800)]
bpf: Verifier support for KF_IMPLICIT_ARGS
A kernel function bpf_foo marked with KF_IMPLICIT_ARGS flag is
expected to have two associated types in BTF:
* `bpf_foo` with a function prototype that omits implicit arguments
* `bpf_foo_impl` with a function prototype that matches the kernel
declaration of `bpf_foo`, but doesn't have a ksym associated with
its name
In order to support kfuncs with implicit arguments, the verifier has
to know how to resolve a call of `bpf_foo` to the correct BTF function
prototype and address.
To implement this, in add_kfunc_call() kfunc flags are checked for
KF_IMPLICIT_ARGS. For such kfuncs a BTF func prototype is adjusted to
the one found for `bpf_foo_impl` (func_name + "_impl" suffix, by
convention) function in BTF.
This effectively changes the signature of the `bpf_foo` kfunc in the
context of verification: from the one without implicit args to the one
with the full argument list, as illustrated below.
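For a hypothetical kfunc, the two prototypes involved in this resolution would
look roughly as follows (illustrative declarations only, not actual kernel
output):
```c
/* Program-facing prototype present in BTF under the kfunc's name;
 * this is what BPF programs declare and call.
 */
int bpf_foo(int arg);

/* Full prototype present in BTF under the _impl name, matching the kernel
 * declaration; the verifier resolves calls to bpf_foo() to this one and
 * supplies the struct bpf_prog_aux pointer itself.
 */
int bpf_foo_impl(int arg, struct bpf_prog_aux *aux);
```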
The values of implicit arguments by design are provided by the
verifier, and so they can only be of particular types. In this patch
the only allowed implicit arg type is a pointer to struct
bpf_prog_aux.
In order for the verifier to correctly set an implicit bpf_prog_aux
arg value at runtime, is_kfunc_arg_prog() is extended to check for the
arg type. At the point where the prog arg is determined in check_kfunc_args(),
the kfunc with implicit args already has a prototype with the full
argument list, so the existing value-patching mechanism just works.
If a new kfunc with KF_IMPLICIT_ARGS is declared for an existing kfunc
that uses a __prog argument (a legacy case), the prototype
substitution works in exactly the same way, assuming the kfunc follows
the _impl naming convention. The difference is only in how _impl
prototype is added to the BTF, which is not the verifier's
concern. See a subsequent resolve_btfids patch for details.
__prog suffix is still supported at this point, but will be removed in
a subsequent patch, after current users are moved to KF_IMPLICIT_ARGS.
Introduction of KF_IMPLICIT_ARGS revealed an issue with zero-extension
tracking, because an explicit rX = 0 in place of the verifier-supplied
argument is now absent if the arg is implicit (the BPF prog doesn't
pass a dummy NULL anymore). To mitigate this, reset the subreg_def of
all caller saved registers in check_kfunc_call() [1].
Ihor Solodrai [Tue, 20 Jan 2026 22:26:27 +0000 (14:26 -0800)]
bpf: Introduce struct bpf_kfunc_meta
There is code duplication between add_kfunc_call() and
fetch_kfunc_meta() collecting information about a kfunc from BTF.
Introduce struct bpf_kfunc_meta to hold common kfunc BTF data and
implement fetch_kfunc_meta() to fill it in, instead of struct
bpf_kfunc_call_arg_meta directly.
Then use these in the add_kfunc_call() and new fetch_kfunc_arg_meta()
functions, and convert previous callers of fetch_kfunc_meta() to
fetch_kfunc_arg_meta().
Besides the code dedup, this change enables add_kfunc_call() to access
kfunc->flags.
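A rough sketch of what such a struct might hold, based only on this
description; the field names are assumptions, not the actual layout in
kernel/bpf/verifier.c:
```c
/* Common kfunc BTF data shared by add_kfunc_call() and the argument checks. */
struct bpf_kfunc_meta {
	const struct btf *btf;			/* BTF object the kfunc lives in */
	const struct btf_type *func_proto;	/* kfunc's function prototype */
	const char *func_name;
	u32 func_id;				/* BTF id of the kfunc */
	u32 flags;				/* kfunc flags from .BTF_ids (KF_*) */
};
```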
Ihor Solodrai [Tue, 20 Jan 2026 22:26:26 +0000 (14:26 -0800)]
bpf: Refactor btf_kfunc_id_set_contains
btf_kfunc_id_set_contains() is called by fetch_kfunc_meta() in the BPF
verifier to get the kfunc flags stored in the .BTF_ids ELF section.
If it returns NULL instead of a valid pointer, it's interpreted as an
illegal kfunc usage failing the verification.
There are two potential reasons for btf_kfunc_id_set_contains() to
return NULL:
1. Provided kfunc BTF id is not present in relevant kfunc id sets.
2. The kfunc is not allowed, as determined by the program type
specific filter [1].
The filter functions accept a pointer to `struct bpf_prog`, so they
might implicitly depend on earlier stages of verification, when
bpf_prog members are set.
For example, bpf_qdisc_kfunc_filter() in linux/net/sched/bpf_qdisc.c
inspects prog->aux->st_ops [2], which is itself set during an earlier
verification stage.
So far this hasn't been an issue, because fetch_kfunc_meta() is the
only caller of btf_kfunc_id_set_contains().
However, in subsequent patches of this series it is necessary to
inspect kfunc flags earlier in the BPF verifier, in add_kfunc_call().
To resolve this, refactor btf_kfunc_id_set_contains() into two
interface functions (see the sketch after this list):
* btf_kfunc_flags(), which simply returns a pointer to the kfunc flags
without applying the filters
* btf_kfunc_is_allowed(), which both checks for kfunc flags existence
(which is a requirement for a kfunc to be allowed) and applies the
prog filters
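A conceptual sketch of the resulting split; the signatures mirror the
description above, and the helpers marked as hypothetical are placeholders,
not functions from kernel/bpf/btf.c:
```c
/* Return a pointer to the kfunc flags recorded in the relevant kfunc id
 * set, or NULL if the BTF id is not registered. No program-type-specific
 * filter is applied here, so this is safe to call early, e.g. from
 * add_kfunc_call().
 */
u32 *btf_kfunc_flags(const struct btf *btf, u32 kfunc_btf_id,
		     const struct bpf_prog *prog)
{
	return lookup_kfunc_id_sets(btf, kfunc_btf_id, prog); /* hypothetical helper */
}

/* A kfunc is allowed only if it is registered at all and the
 * program-type-specific filter (if any) accepts it.
 */
bool btf_kfunc_is_allowed(const struct btf *btf, u32 kfunc_btf_id,
			  const struct bpf_prog *prog)
{
	if (!btf_kfunc_flags(btf, kfunc_btf_id, prog))
		return false;
	return run_kfunc_filters(btf, kfunc_btf_id, prog); /* hypothetical helper */
}
```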
Add a multi-producer benchmark for perfbuf to complement the existing
ringbuf multi-producer test. Unlike ringbuf which uses a shared buffer
and experiences contention, perfbuf uses per-CPU buffers so the test
measures scaling behavior rather than contention.
This allows developers to compare perfbuf vs ringbuf performance under
multi-producer workloads when choosing between the two for their systems.
Qiliang Yuan [Tue, 20 Jan 2026 02:32:34 +0000 (10:32 +0800)]
bpf/verifier: Optimize ID mapping reset in states_equal
Currently, reset_idmap_scratch() performs a 4.7KB memset() in every
states_equal() call. Optimize this by using a counter to track used
ID mappings, replacing the O(N) memset() with an O(1) reset and
bounding the search loop in check_ids().
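A conceptual sketch of the counter-based reset; sizes, names and the
check_ids() details are simplified here, not the verifier's exact code:
```c
#include <linux/types.h>

#define ID_MAP_SIZE 64			/* illustrative capacity */

struct sketch_id_pair {
	u32 old;
	u32 cur;
};

struct sketch_idmap {
	struct sketch_id_pair map[ID_MAP_SIZE];
	u32 cnt;			/* entries used since the last reset */
};

/* O(1): instead of memset()ing the whole array on every states_equal()
 * call, just forget how many entries are in use.
 */
static void reset_idmap_scratch(struct sketch_idmap *idmap)
{
	idmap->cnt = 0;
}

/* Check that old_id and cur_id are consistently mapped to each other,
 * searching only the entries recorded since the last reset.
 */
static bool check_ids(u32 old_id, u32 cur_id, struct sketch_idmap *idmap)
{
	u32 i;

	if (!old_id || !cur_id)
		return old_id == cur_id;

	for (i = 0; i < idmap->cnt; i++) {
		if (idmap->map[i].old == old_id)
			return idmap->map[i].cur == cur_id;
		if (idmap->map[i].cur == cur_id)
			return false;	/* cur_id already paired with another id */
	}
	if (idmap->cnt == ID_MAP_SIZE)
		return false;		/* out of slots, be conservative */
	idmap->map[idmap->cnt].old = old_id;
	idmap->map[idmap->cnt].cur = cur_id;
	idmap->cnt++;
	return true;
}
```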
Daniel Borkmann [Tue, 20 Jan 2026 12:55:01 +0000 (13:55 +0100)]
bpf: Remove leftover accounting in htab_map_mem_usage after rqspinlock
After commit 4fa8d68aa53e ("bpf: Convert hashtab.c to rqspinlock")
we no longer use HASHTAB_MAP_LOCK_{COUNT,MASK} as the per-CPU
map_locked[HASHTAB_MAP_LOCK_COUNT] array got removed from struct
bpf_htab. Right now it is still accounted for in htab_map_mem_usage.
Puranjay Mohan [Fri, 16 Jan 2026 14:14:35 +0000 (06:14 -0800)]
bpf: verifier: Make sync_linked_regs() scratch registers
sync_linked_regs() is called after a conditional jump to propagate the new
bounds of a register to all its linked registers. But the verifier log
only prints the state of the register that is part of the conditional
jump.
Make sync_linked_regs() scratch the registers whose bounds have been
updated by propagation from a known register.
For example, when the conditional jump at instruction 4 updates the bounds of
R1, the new bounds are propagated to R0 as it is linked via the same id.
Before this change the verifier only printed the state for R1; after it, the
state of both R0 and R1 is printed.