From 90c1d870edc3fde05fdeb7b856b1dab2de86009c Mon Sep 17 00:00:00 2001
From: Greg Kroah-Hartman
Date: Wed, 17 Aug 2016 10:00:14 +0200
Subject: [PATCH] tcp: make challenge acks faster

When backporting upstream commit 75ff39ccc1bd ("tcp: make challenge acks
less predictable") I neglected to use the correct ACCESS* type macros.
This fixes that up to hopefully speed things up a bit more.

Thanks to Chas Williams for the 3.10 backport which reminded me of this.

Cc: Yue Cao
Cc: Eric Dumazet
Cc: Linus Torvalds
Cc: Yuchung Cheng
Cc: Neal Cardwell
Cc: Neal Cardwell
Cc: Yuchung Cheng
Cc: David S. Miller
Cc: Chas Williams
Cc: Willy Tarreau
Signed-off-by: Greg Kroah-Hartman
---
 net/ipv4/tcp_input.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 90f9d00a3fbc1..963b7f7467775 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3299,12 +3299,12 @@ static void tcp_send_challenge_ack(struct sock *sk)
 		u32 half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;
 
 		challenge_timestamp = now;
-		challenge_count = half +
+		ACCESS_ONCE(challenge_count) = half +
 				  prandom_u32_max(sysctl_tcp_challenge_ack_limit);
 	}
-	count = challenge_count;
+	count = ACCESS_ONCE(challenge_count);
 	if (count > 0) {
-		challenge_count = count - 1;
+		ACCESS_ONCE(challenge_count) = count - 1;
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
 		tcp_send_ack(sk);
 	}
-- 
2.47.2
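
For readers less familiar with the ACCESS* macros, here is a minimal,
self-contained user-space sketch of the pattern the hunk above adopts.
It is not kernel code: the ACCESS_ONCE() definition mirrors the classic
volatile-cast version from include/linux/compiler.h, and
consume_challenge_slot() is an invented stand-in for the budget check in
tcp_send_challenge_ack().

	/*
	 * Standalone sketch, not part of the patch: illustrates how
	 * ACCESS_ONCE() forces exactly one load or store of a shared
	 * variable, so the compiler cannot re-read or re-write it
	 * behind our back.
	 */
	#include <stdio.h>

	#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

	/* Global budget, playing the role of challenge_count in tcp_input.c. */
	static unsigned int challenge_count;

	static int consume_challenge_slot(void)
	{
		/* Read the shared counter exactly once ... */
		unsigned int count = ACCESS_ONCE(challenge_count);

		if (count > 0) {
			/* ... and write the decremented value back exactly
			 * once, rather than letting the compiler touch the
			 * global again in between. */
			ACCESS_ONCE(challenge_count) = count - 1;
			return 1;	/* the kernel would send the challenge ACK here */
		}
		return 0;		/* budget exhausted: rate limited */
	}

	int main(void)
	{
		challenge_count = 2;	/* pretend the per-second budget was just refilled */

		for (int i = 0; i < 3; i++)
			printf("attempt %d: %s\n", i + 1,
			       consume_challenge_slot() ? "ack sent" : "rate limited");
		return 0;
	}

Built with gcc and run, this prints two "ack sent" lines followed by one
"rate limited" line. Note that ACCESS_ONCE() is not a lock or an atomic
read-modify-write; it only pins each access down to a single load or
store, which is all the racy rate-limit path above relies on.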