bpf: Only fail the busy counter check in bpf_cgrp_storage_get if it creates storage
author:    Martin KaFai Lau <martin.lau@kernel.org>
           Tue, 18 Mar 2025 18:27:59 +0000 (11:27 -0700)
committer: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
           Fri, 2 May 2025 05:50:53 +0000 (07:50 +0200)
[ Upstream commit f4edc66e48a694b3e6d164cc71f059de542dfaec ]

The current cgrp storage has a percpu counter, bpf_cgrp_storage_busy,
to detect a potential deadlock on the spin_lock that the local storage
acquires during new storage creation.
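
For reference, a paraphrased sketch of that percpu counter check,
condensed from the trylock/unlock pair in kernel/bpf/bpf_cgrp_storage.c
(shown for context, not part of this patch):

  /* A non-zero counter means this CPU is already inside a cgrp
   * storage operation, so taking the spin_lock again could deadlock.
   */
  static DEFINE_PER_CPU(int, bpf_cgrp_storage_busy);

  static bool bpf_cgrp_storage_trylock(void)
  {
          migrate_disable();
          if (unlikely(this_cpu_inc_return(bpf_cgrp_storage_busy) != 1)) {
                  this_cpu_dec(bpf_cgrp_storage_busy);
                  migrate_enable();
                  return false;
          }
          return true;
  }

  static void bpf_cgrp_storage_unlock(void)
  {
          this_cpu_dec(bpf_cgrp_storage_busy);
          migrate_enable();
  }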

This check produces false positives and has turned out to be too noisy
in production. For example, a bpf prog may be doing a
bpf_cgrp_storage_get on map_a. An IRQ comes in and triggers
another bpf_cgrp_storage_get on a different map_b. The nested call
then trips the deadlock check on the percpu counter, a false positive.
On top of that, both calls are lookups only; neither needs to create
new storage, so in practice neither needs to acquire the spin_lock.

bpf_task_storage_get already has a strategy to minimize these false
positives: it fails only when it needs to create a new storage and
the percpu counter is busy. Creating a new storage is the only time
it must acquire the spin_lock.
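
Condensed, that strategy in kernel/bpf/bpf_task_storage.c looks
roughly like this (paraphrased; error handling trimmed):

  bool nobusy = bpf_task_storage_trylock();

  /* A lookup never takes the spin_lock, so it may proceed even when
   * the counter is busy (it only skips the cache update then).
   */
  sdata = task_storage_lookup(task, map, nobusy /* cacheit_lockit */);
  if (sdata)
          goto unlock;

  /* Creating new storage is the only path that takes the spin_lock,
   * so only it is gated on the busy counter.
   */
  if ((flags & BPF_LOCAL_STORAGE_GET_F_CREATE) && nobusy)
          sdata = bpf_local_storage_update(task,
                          (struct bpf_local_storage_map *)map,
                          value, BPF_NOEXIST, gfp_flags);

  unlock:
  if (nobusy)
          bpf_task_storage_unlock();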

This patch borrows the same idea. Unlike task storage, which has
separate tracing (_recur) and non-tracing variants, this patch stays
with a single bpf_cgrp_storage_get helper to keep it simple for now,
in light of the upcoming res_spin_lock.

The variable could arguably use a better name, noTbusy, instead of
nobusy. This patch follows the same naming as bpf_task_storage_get
for now.

I have tested it by temporarily adding noinline to
cgroup_storage_lookup() and tracing it with fentry; the fentry
program succeeded in calling bpf_cgrp_storage_get().
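
A minimal sketch of such a test program, modeled on the
cgrp_local_storage selftests (the map layout and program name here
are illustrative assumptions, not from the patch):

  // SPDX-License-Identifier: GPL-2.0
  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  struct {
          __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, long);
  } map_a SEC(".maps");

  /* Runs inside the outer bpf_cgrp_storage_get(), while the busy
   * counter is held. With this patch, a lookup-only get succeeds.
   */
  SEC("fentry/cgroup_storage_lookup")
  int BPF_PROG(on_lookup, struct cgroup *cgrp)
  {
          long *ptr;

          /* value == NULL, flags == 0: lookup only, no create */
          ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0, 0);
          if (ptr)
                  __sync_fetch_and_add(ptr, 1);
          return 0;
  }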

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20250318182759.3676094-1-martin.lau@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
kernel/bpf/bpf_cgrp_storage.c

index ee1c7b77096e7b55ddc94edbf0ef4663e372e5b2..fbbf3b6b9f8353a446e0b705ee604e01fcc7f012 100644
@@ -162,6 +162,7 @@ BPF_CALL_5(bpf_cgrp_storage_get, struct bpf_map *, map, struct cgroup *, cgroup,
           void *, value, u64, flags, gfp_t, gfp_flags)
 {
        struct bpf_local_storage_data *sdata;
+       bool nobusy;
 
        WARN_ON_ONCE(!bpf_rcu_lock_held());
        if (flags & ~(BPF_LOCAL_STORAGE_GET_F_CREATE))
@@ -170,21 +171,21 @@ BPF_CALL_5(bpf_cgrp_storage_get, struct bpf_map *, map, struct cgroup *, cgroup,
        if (!cgroup)
                return (unsigned long)NULL;
 
-       if (!bpf_cgrp_storage_trylock())
-               return (unsigned long)NULL;
+       nobusy = bpf_cgrp_storage_trylock();
 
-       sdata = cgroup_storage_lookup(cgroup, map, true);
+       sdata = cgroup_storage_lookup(cgroup, map, nobusy);
        if (sdata)
                goto unlock;
 
        /* only allocate new storage, when the cgroup is refcounted */
        if (!percpu_ref_is_dying(&cgroup->self.refcnt) &&
-           (flags & BPF_LOCAL_STORAGE_GET_F_CREATE))
+           (flags & BPF_LOCAL_STORAGE_GET_F_CREATE) && nobusy)
                sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
                                                 value, BPF_NOEXIST, gfp_flags);
 
 unlock:
-       bpf_cgrp_storage_unlock();
+       if (nobusy)
+               bpf_cgrp_storage_unlock();
        return IS_ERR_OR_NULL(sdata) ? (unsigned long)NULL : (unsigned long)sdata->data;
 }