From: Jason Xing
Date: Thu, 20 Feb 2025 07:29:30 +0000 (+0800)
Subject: bpf: Prepare the sock_ops ctx and call bpf prog for TX timestamping
X-Git-Tag: v6.15-rc1~160^2~249^2~1^2~10
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=df600f3b1d7963e2203ebf0987f564946a2647f1;p=thirdparty%2Fkernel%2Flinux.git

bpf: Prepare the sock_ops ctx and call bpf prog for TX timestamping

This patch introduces a new bpf_skops_tx_timestamping() function that
prepares the "struct bpf_sock_ops" ctx and then runs the sockops BPF
program. The subsequent patch will call bpf_skops_tx_timestamping()
from the existing TX timestamping kernel callback (__sk_tstamp_tx
specifically) to invoke the sockops BPF program. Building on this
patch, four callback points that report information to user space will
be introduced later.

Signed-off-by: Jason Xing
Signed-off-by: Martin KaFai Lau
Link: https://patch.msgid.link/20250220072940.99994-3-kerneljasonxing@gmail.com
---

diff --git a/include/net/sock.h b/include/net/sock.h
index a95eedacae76c..2f6b55c59c165 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2924,6 +2924,13 @@ int sock_set_timestamping(struct sock *sk, int optname,
 			  struct so_timestamping timestamping);
 
 void sock_enable_timestamps(struct sock *sk);
+#if defined(CONFIG_CGROUP_BPF)
+void bpf_skops_tx_timestamping(struct sock *sk, struct sk_buff *skb, int op);
+#else
+static inline void bpf_skops_tx_timestamping(struct sock *sk, struct sk_buff *skb, int op)
+{
+}
+#endif
 void sock_no_linger(struct sock *sk);
 void sock_set_keepalive(struct sock *sk);
 void sock_set_priority(struct sock *sk, u32 priority);
diff --git a/net/core/sock.c b/net/core/sock.c
index a197f0a0b8788..ba653c6a12292 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -949,6 +949,20 @@ int sock_set_timestamping(struct sock *sk, int optname,
 	return 0;
 }
 
+#if defined(CONFIG_CGROUP_BPF)
+void bpf_skops_tx_timestamping(struct sock *sk, struct sk_buff *skb, int op)
+{
+	struct bpf_sock_ops_kern sock_ops;
+
+	memset(&sock_ops, 0, offsetof(struct bpf_sock_ops_kern, temp));
+	sock_ops.op = op;
+	sock_ops.is_fullsock = 1;
+	sock_ops.sk = sk;
+	bpf_skops_init_skb(&sock_ops, skb, 0);
+	__cgroup_bpf_run_filter_sock_ops(sk, &sock_ops, CGROUP_SOCK_OPS);
+}
+#endif
+
 void sock_set_keepalive(struct sock *sk)
 {
 	lock_sock(sk);