From 3bb83c910971c47989aa439849265600fa67b42a Mon Sep 17 00:00:00 2001
From: Finn Thain
Date: Tue, 13 Jan 2026 16:22:28 +1100
Subject: [PATCH] bpf: explicitly align bpf_res_spin_lock

Patch series "Align atomic storage", v7.

This series adds the __aligned attribute to the atomic_t and atomic64_t
definitions in include/linux and include/asm-generic (respectively) to get
natural alignment of both types on csky, m68k, microblaze, nios2, openrisc
and sh.

This series also adds Kconfig options to enable a new run-time warning to
help reveal misaligned atomic accesses on platforms which don't trap that.

The performance impact is expected to vary across platforms and workloads.
The measurements I made on m68k show that some workloads run faster and
others slower.

This patch (of 4):

Align bpf_res_spin_lock to avoid a BUILD_BUG_ON() when the alignment
changes, as it will do on m68k when, in a subsequent patch, the minimum
alignment of the atomic_t member of struct rqspinlock gets increased from
2 to 4. Drop the BUILD_BUG_ON() as it becomes redundant.

Link: https://lkml.kernel.org/r/cover.1768281748.git.fthain@linux-m68k.org
Link: https://lkml.kernel.org/r/8a83876b07d1feacc024521e44059ae89abbb1ea.1768281748.git.fthain@linux-m68k.org
Signed-off-by: Finn Thain
Acked-by: Alexei Starovoitov
Reviewed-by: Arnd Bergmann
Cc: Geert Uytterhoeven
Cc: Andrii Nakryiko
Cc: Ard Biesheuvel
Cc: Boqun Feng
Cc: "Borislav Petkov (AMD)"
Cc: Daniel Borkmann
Cc: Dinh Nguyen
Cc: Eduard Zingerman
Cc: Gary Guo
Cc: Guo Ren
Cc: Hao Luo
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jiri Olsa
Cc: John Fastabend
Cc: John Paul Adrian Glaubitz
Cc: Jonas Bonn
Cc: KP Singh
Cc: Mark Rutland
Cc: Martin KaFai Lau
Cc: Peter Zijlstra
Cc: Rich Felker
Cc: Sasha Levin (Microsoft)
Cc: Song Liu
Cc: Stafford Horne
Cc: Stanislav Fomichev
Cc: Stefan Kristiansson
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yonghong Song
Cc: Yoshinori Sato
Cc: Dave Hansen
Signed-off-by: Andrew Morton
---
 include/asm-generic/rqspinlock.h | 2 +-
 kernel/bpf/rqspinlock.c          | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
index 0f2dcbbfee2f0..dd36ac96bf66e 100644
--- a/include/asm-generic/rqspinlock.h
+++ b/include/asm-generic/rqspinlock.h
@@ -28,7 +28,7 @@ struct rqspinlock {
  */
 struct bpf_res_spin_lock {
 	u32 val;
-};
+} __aligned(__alignof__(struct rqspinlock));
 
 struct qspinlock;
 #ifdef CONFIG_QUEUED_SPINLOCKS
diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
index f7d0c8d4644ed..8d892fb099ac6 100644
--- a/kernel/bpf/rqspinlock.c
+++ b/kernel/bpf/rqspinlock.c
@@ -694,7 +694,6 @@ __bpf_kfunc int bpf_res_spin_lock(struct bpf_res_spin_lock *lock)
 	int ret;
 
 	BUILD_BUG_ON(sizeof(rqspinlock_t) != sizeof(struct bpf_res_spin_lock));
-	BUILD_BUG_ON(__alignof__(rqspinlock_t) != __alignof__(struct bpf_res_spin_lock));
 
 	preempt_disable();
 	ret = res_spin_lock((rqspinlock_t *)lock);
--
2.47.3
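
For reference, a minimal userspace sketch of the alignment relationship the
patch sets up. The atomic_t stand-in and its 4-byte alignment below are
assumptions for illustration only (they stand in for the kernel definitions
this series changes on m68k); the point is that __aligned(__alignof__(...))
keeps the two structs' alignments in lockstep, which is why the dropped
BUILD_BUG_ON() becomes redundant.

#include <stdint.h>

/* Stand-in for the kernel's atomic_t, assumed here to carry 4-byte
 * alignment as the series arranges on m68k. */
typedef struct { int counter; } __attribute__((__aligned__(4))) atomic_t;

struct rqspinlock {
	atomic_t val;
};

/* Mirrors the patched header: the BPF-visible lock inherits whatever
 * alignment struct rqspinlock has, so the two cannot drift apart. */
struct bpf_res_spin_lock {
	uint32_t val;
} __attribute__((__aligned__(__alignof__(struct rqspinlock))));

/* Only the size check remains meaningful; the alignment check dropped by
 * this patch now holds by construction. */
_Static_assert(sizeof(struct rqspinlock) == sizeof(struct bpf_res_spin_lock),
	       "sizes must match for the (rqspinlock_t *) cast");
_Static_assert(__alignof__(struct rqspinlock) == __alignof__(struct bpf_res_spin_lock),
	       "alignment matches by construction");

int main(void) { return 0; }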