inet: Avoid ehash lookup race in inet_ehash_insert()
author     Xuanqiang Luo <luoxuanqiang@kylinos.cn>
           Wed, 15 Oct 2025 02:02:35 +0000 (10:02 +0800)
committer  Jakub Kicinski <kuba@kernel.org>
           Fri, 17 Oct 2025 23:08:43 +0000 (16:08 -0700)
commit     1532ed0d0753c83e72595f785f82b48c28bbe5dc
tree       0cf7c33aa38f5e07b617ec2dbf45ab0b9ff38452
parent     9c4609225ec1cb551006d6a03c7c4ad8cb5584c0
inet: Avoid ehash lookup race in inet_ehash_insert()

Since ehash lookups are lockless, if one CPU performs a lookup while
another concurrently deletes and inserts (removing the reqsk and
inserting the sk), the lookup may fail to find either socket, and an
RST may be sent.

The race is illustrated below:
   CPU 0                           CPU 1
   -----                           -----
                                inet_ehash_insert()
                                spin_lock()
                                sk_nulls_del_node_init_rcu(osk)
__inet_lookup_established()
(lookup failed)
                                __sk_nulls_add_node_rcu(sk, list)
                                spin_unlock()
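
For reference, the pre-patch hashdance in inet_ehash_insert() looks
roughly like the simplified sketch below (locals, WARN_ON_ONCE() checks
and the found_dup_sk handling are elided); the gap between the delete
and the add is exactly where the lockless lookup can miss both sockets:

	spin_lock(lock);
	if (osk)
		/* The reqsk leaves the ehash chain here ... */
		ret = sk_nulls_del_node_init_rcu(osk);
	/*
	 * ... and until the add below runs, a concurrent
	 * __inet_lookup_established() walking the same chain finds
	 * neither osk nor sk, so the peer may be answered with an RST.
	 */
	if (ret)
		__sk_nulls_add_node_rcu(sk, list);
	spin_unlock(lock);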

As both the deletion and the insertion operate on the same ehash chain,
this patch introduces a new sk_nulls_replace_node_init_rcu() helper
function to perform the replacement atomically.
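
Conceptually, the replacement primitive behaves like the sketch below.
This is an illustrative reconstruction, not necessarily the exact code
added to include/net/sock.h; the hlist_nulls_replace_init_rcu() name
and the omitted sparse __rcu annotations are assumptions. The key
property is that the predecessor's pointer is switched from the old
node to the fully initialised new node in a single rcu_assign_pointer(),
so a lockless reader walking the chain always sees either the reqsk or
the sk:

	/*
	 * Sketch only: replace @old with @new in a nulls hlist so that
	 * concurrent lockless readers observe either @old or @new,
	 * never a chain containing neither.  Builds on the existing
	 * hlist_nulls machinery from <linux/rculist_nulls.h>.
	 */
	static inline void
	hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
				     struct hlist_nulls_node *new)
	{
		struct hlist_nulls_node *next = old->next;
		struct hlist_nulls_node **pprev = old->pprev;

		/* Fully initialise @new before it becomes reachable. */
		WRITE_ONCE(new->next, next);
		WRITE_ONCE(new->pprev, pprev);

		/*
		 * Single publication point: whatever pointed at @old
		 * (the bucket head or the previous node) now points at
		 * @new.
		 */
		rcu_assign_pointer(*pprev, new);
		if (!is_a_nulls(next))
			WRITE_ONCE(next->pprev, &new->next);

		/* Mark @old unhashed, as hlist_nulls_del_init_rcu() does. */
		WRITE_ONCE(old->pprev, NULL);
	}

The sock-level sk_nulls_replace_node_init_rcu() named above would wrap
such a primitive plus the refcount handling that sk_nulls_del_node_init_rcu()
performs today, letting inet_ehash_insert() collapse the delete and the
add into one step under the bucket lock, along the lines of (again a
sketch, not the literal diff):

	if (osk)
		sk_nulls_replace_node_init_rcu(osk, sk);
	else
		__sk_nulls_add_node_rcu(sk, list);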

Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Reviewed-by: Jiayuan Chen <jiayuan.chen@linux.dev>
Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20251015020236.431822-3-xuanqiang.luo@linux.dev
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
include/net/sock.h
net/ipv4/inet_hashtables.c