From: Eric Dumazet
Date: Mon, 23 Feb 2026 11:35:01 +0000 (+0000)
Subject: tcp: reduce calls to tcp_schedule_loss_probe()
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=fca59a2dd0b8ce55ad090707ec425d6d37b8b932;p=thirdparty%2Flinux.git

tcp: reduce calls to tcp_schedule_loss_probe()

For RPC workloads, we alternate tcp_schedule_loss_probe() calls from
the output path and the input path, with tp->packets_out oscillating
between non-zero and zero, leading to poor branch prediction.

Move the tp->packets_out check from tcp_schedule_loss_probe() to
tcp_set_xmit_timer().

This avoids one call to tcp_schedule_loss_probe() from the tcp_ack()
path for typical RPC workloads, while improving branch prediction.

Signed-off-by: Eric Dumazet
Reviewed-by: Neal Cardwell
Reviewed-by: Kuniyuki Iwashima
Link: https://patch.msgid.link/20260223113501.4070245-1-edumazet@google.com
Signed-off-by: Jakub Kicinski
---

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index e7b41abb82aad..6c3f1d0314446 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3552,7 +3552,7 @@ void tcp_rearm_rto(struct sock *sk)
 /* Try to schedule a loss probe; if that doesn't work, then schedule an RTO. */
 static void tcp_set_xmit_timer(struct sock *sk)
 {
-	if (!tcp_schedule_loss_probe(sk, true))
+	if (!tcp_sk(sk)->packets_out || !tcp_schedule_loss_probe(sk, true))
 		tcp_rearm_rto(sk);
 }
 
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 1ef419c66a0eb..46bd48cf776a6 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3135,7 +3135,7 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
 	 * not in loss recovery, that are either limited by cwnd or application.
 	 */
 	if ((early_retrans != 3 && early_retrans != 4) ||
-	    !tp->packets_out || !tcp_is_sack(tp) ||
+	    !tcp_is_sack(tp) ||
 	    (icsk->icsk_ca_state != TCP_CA_Open &&
 	     icsk->icsk_ca_state != TCP_CA_CWR))
 		return false;