Otherwise the lock is susceptible to ever-changing false-sharing due to
unrelated changes. This in particular popped up here where an unrelated
change improved performance:
https://lore.kernel.org/oe-lkp/202511281306.51105b46-lkp@intel.com/
Stabilize it with an explicit annotation, which also has the side effect
of further improving scalability:
> in our original report, 284922f4c5 has a 6.1% performance improvement
> comparing to parent 17d85f33a8.
> we applied your patch directly upon 284922f4c5. as below, now by
> "284922f4c5 + your patch"
> we observe a 12.8% performance improvement (still comparing to 17d85f33a8).
Note nothing was done for the other fields, so some fluctuation is still
possible.
Tested-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20251203100122.291550-1-mjguzik@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
}
}
-static DEFINE_SPINLOCK(unix_gc_lock);
+static __cacheline_aligned_in_smp DEFINE_SPINLOCK(unix_gc_lock);
void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver)
{
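For illustration only, a minimal sketch (hypothetical names, not part of this
patch) of what the annotation changes: without it, the lock's cache-line
neighbours are decided by whatever the linker happens to place next to it, so
unrelated changes can move hot data into the same line; with
__cacheline_aligned_in_smp the lock starts on its own SMP_CACHE_BYTES boundary
on SMP builds.

    #include <linux/cache.h>
    #include <linux/spinlock.h>

    /* May share a cache line with whatever the linker places after it. */
    static DEFINE_SPINLOCK(example_lock);
    static unsigned long example_hot_counter;	/* can land in the same line */

    /*
     * Aligned to a cache-line boundary on SMP builds, so unrelated small
     * data no longer ends up in the lock's line by accident.
     */
    static __cacheline_aligned_in_smp DEFINE_SPINLOCK(example_aligned_lock);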