From: Mykyta Yatsenko
Date: Wed, 1 Apr 2026 13:50:36 +0000 (-0700)
Subject: bpf: Use copy_map_value_locked() in alloc_htab_elem() for BPF_F_LOCK
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=07738bc566c38e0a8c82084e962890d1d59715c8;p=thirdparty%2Fkernel%2Flinux.git

bpf: Use copy_map_value_locked() in alloc_htab_elem() for BPF_F_LOCK

When a BPF_F_LOCK update races with a concurrent delete, the freed
element can be immediately recycled by alloc_htab_elem(). The fast path
in htab_map_update_elem() performs a lockless lookup and then calls
copy_map_value_locked() under the element's spin_lock. If
alloc_htab_elem() recycles the same memory, it overwrites the value with
plain copy_map_value(), without taking the spin_lock, causing torn
writes.

Use copy_map_value_locked() when BPF_F_LOCK is set so the new element's
value is written under the embedded spin_lock, serializing against any
stale lock holders.

Fixes: 96049f3afd50 ("bpf: introduce BPF_F_LOCK flag")
Reported-by: Aaron Esau
Closes: https://lore.kernel.org/all/CADucPGRvSRpkneb94dPP08YkOHgNgBnskTK6myUag_Mkjimihg@mail.gmail.com/
Signed-off-by: Mykyta Yatsenko
Link: https://lore.kernel.org/r/20260401-bpf_map_torn_writes-v1-1-782d071c55e7@meta.com
Signed-off-by: Alexei Starovoitov
---

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index bc6bc8bb871d4..f7ac1ec7be8bf 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1138,6 +1138,10 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 	} else if (fd_htab_map_needs_adjust(htab)) {
 		size = round_up(size, 8);
 		memcpy(htab_elem_value(l_new, key_size), value, size);
+	} else if (map_flags & BPF_F_LOCK) {
+		copy_map_value_locked(&htab->map,
+				      htab_elem_value(l_new, key_size),
+				      value, false);
 	} else {
 		copy_map_value(&htab->map, htab_elem_value(l_new, key_size), value);
 	}