From: Manfred Spraul
Date: Fri, 16 Dec 2022 15:04:40 +0000 (+0100)
Subject: include/linux/percpu_counter.h: race in uniprocessor percpu_counter_add()
X-Git-Tag: v6.3-rc1~112^2~47
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=88ad32a799ddc92eafd2ae204cb43f04ac20a05c;p=thirdparty%2Flinux.git

include/linux/percpu_counter.h: race in uniprocessor percpu_counter_add()

The percpu interface is supposed to be preempt and irq safe.

But: The uniprocessor implementation of percpu_counter_add() is not irq
safe: if an interrupt happens during the +=, then the result is
undefined.

Therefore: switch from preempt_disable() to local_irq_save().  This
prevents interrupts from interrupting the +=, and as a side effect
prevents preemption.

Link: https://lkml.kernel.org/r/20221216150441.200533-2-manfred@colorfullife.com
Signed-off-by: Manfred Spraul
Cc: "Sun, Jiebin"
Cc: <1vier1@web.de>
Cc: Alexander Sverdlin
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index a3aae8d57a421..521a733e21a92 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -152,9 +152,11 @@ __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
 
 static inline void percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 {
-	preempt_disable();
+	unsigned long flags;
+
+	local_irq_save(flags);
 	fbc->count += amount;
-	preempt_enable();
+	local_irq_restore(flags);
 }
 
 /* non-SMP percpu_counter_add_local is the same with percpu_counter_add */