When backporting upstream commit
75ff39ccc1bd ("tcp: make challenge acks
less predictable") I negelected to use the correct ACCESS* type macros.
This fixes that up to hopefully speed things up a bit more.
Thanks to Chas Williams for the 3.10 backport which reminded me of this.
Cc: Yue Cao <ycao009@ucr.edu>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Chas Williams <ciwillia@brocade.com>
Cc: Willy Tarreau <w@1wt.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 		u32 half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;
 		challenge_timestamp = now;
-		challenge_count = half +
+		ACCESS_ONCE(challenge_count) = half +
 				  prandom_u32_max(sysctl_tcp_challenge_ack_limit);
 	}
-	count = challenge_count;
+	count = ACCESS_ONCE(challenge_count);
 	if (count > 0) {
-		challenge_count = count - 1;
+		ACCESS_ONCE(challenge_count) = count - 1;
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
 		tcp_send_ack(sk);
 	}
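
For reference, ACCESS_ONCE() is conventionally defined in the kernel as a
volatile cast, which forces the compiler to emit exactly one load or one
store for the marked access rather than tearing, refetching, or caching it;
challenge_count is read and written without a lock, so each access needs to
be a single machine operation. Below is a minimal userspace sketch of the
same pattern, not the kernel code itself: the ACCESS_ONCE definition mirrors
the kernel's, while the limit value and the rand()-based stand-in for
prandom_u32_max() are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Kernel-style ACCESS_ONCE: a volatile cast so the compiler emits exactly
 * one load or store for the access and cannot tear or refetch it. */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

/* Illustrative stand-ins for the sysctl and the shared, lockless counter. */
static unsigned int sysctl_tcp_challenge_ack_limit = 100;
static unsigned int challenge_count;
static unsigned int challenge_timestamp;

/* Simplified model of the challenge-ack rate limiting in the patch above. */
static int challenge_ack_allowed(unsigned int now)
{
	unsigned int count;

	if (now != challenge_timestamp) {
		unsigned int half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;

		challenge_timestamp = now;
		/* Single store of the refreshed, randomized budget. */
		ACCESS_ONCE(challenge_count) = half +
			(unsigned int)rand() % sysctl_tcp_challenge_ack_limit;
	}

	/* Single load of the shared counter... */
	count = ACCESS_ONCE(challenge_count);
	if (count > 0) {
		/* ...and a single store of the decremented value. */
		ACCESS_ONCE(challenge_count) = count - 1;
		return 1;
	}
	return 0;
}

int main(void)
{
	srand((unsigned int)time(NULL));
	printf("challenge ack %s\n",
	       challenge_ack_allowed((unsigned int)time(NULL)) ?
	       "sent" : "rate limited");
	return 0;
}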