tcp: fix a signed-integer-overflow bug in tcp_add_backlog()
author Lu Wei <luwei32@huawei.com>
Fri, 21 Oct 2022 04:06:22 +0000 (12:06 +0800)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 3 Nov 2022 15:00:31 +0000 (00:00 +0900)
[ Upstream commit ec791d8149ff60c40ad2074af3b92a39c916a03f ]

The type of sk_rcvbuf and sk_sndbuf in struct sock is int. In
tcp_add_backlog(), the variable limit is calculated by adding
sk_rcvbuf, sk_sndbuf and 64 * 1024, which may exceed the maximum
value of int and overflow. Fix this by computing the limit in u32
and halving the sndbuf contribution, since ACK packets are much
smaller than the payload.
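
A minimal userspace sketch of the arithmetic, using hypothetical
buffer values chosen only for illustration (not the kernel code
itself), shows how the old formula exceeds INT_MAX while the new
u32 computation stays in range:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical values: both buffers tuned close to INT_MAX. */
	int sk_rcvbuf = 0x7f000000;
	int sk_sndbuf = 0x7f000000;

	/* Old formula: the sum is computed in int, so it can exceed
	 * INT_MAX (signed overflow). Widened here only to print the
	 * mathematical value.
	 */
	long long old_limit = (long long)sk_rcvbuf + sk_sndbuf + 64 * 1024;
	printf("old formula: %lld (exceeds INT_MAX %d)\n",
	       old_limit, 2147483647);

	/* New formula: compute in u32 and halve the sndbuf budget. */
	uint32_t limit = (uint32_t)sk_rcvbuf + (uint32_t)(sk_sndbuf >> 1);
	limit += 64 * 1024;
	printf("new formula: %u (fits in u32)\n", limit);

	return 0;
}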

Fixes: c9c3321257e1 ("tcp: add tcp_add_backlog()")
Signed-off-by: Lu Wei <luwei32@huawei.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Acked-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 5b019ba2b9d2155b6d612727d32c89433d19be03..fe9a6022db66dc566feb7ce88f01d5e8aaef981b 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1853,11 +1853,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
        __skb_push(skb, hdrlen);
 
 no_coalesce:
+       limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
+
        /* Only socket owner can try to collapse/prune rx queues
         * to reduce memory overhead, so add a little headroom here.
         * Few sockets backlog are possibly concurrently non empty.
         */
-       limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
+       limit += 64 * 1024;
 
        if (unlikely(sk_add_backlog(sk, skb, limit))) {
                bh_unlock_sock(sk);