author    Boqun Feng <boqun.feng@gmail.com>
          Wed, 26 Mar 2025 18:08:30 +0000 (11:08 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Fri, 25 Apr 2025 08:45:29 +0000 (10:45 +0200)
commit    8385532d4dd41befef2a235254bbb0e41407fe7c
tree      b0f7d9a15cddf0a89cb3b4f978f00910d42a5ae4
parent    388ba87816163f1a8138dab74bff70f902912400
locking/lockdep: Decrease nr_unused_locks if lock unused in zap_class()

commit 495f53d5cca0f939eaed9dca90b67e7e6fb0e30c upstream.

Currently, when a lock class is allocated, nr_unused_locks is increased
by 1 and stays that way until the class is used for the first time, at
which point mark_lock() decreases it by 1. However, one scenario is
missed: a lock class may be zapped without ever having been used. This
can result in a situation where nr_unused_locks != 0 even though no
unused lock class is left in the system, and reading
`/proc/lockdep_stats` then triggers a WARN_ON() in a
CONFIG_DEBUG_LOCKDEP=y kernel:

  [...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
  [...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0

And as a result, lockdep will be disabled after this.
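
To make the failure mode concrete, here is a small stand-alone
user-space model of the accounting described above (hypothetical names
such as register_class()/class_alive are illustration only, not actual
lockdep code); it shows the cached counter going stale once a
never-used class is zapped:

  /* toy model of the nr_unused_locks accounting (not kernel code) */
  #include <stdio.h>
  #include <stdbool.h>

  #define NR_CLASSES 4

  static int nr_unused_locks;            /* the cached statistic */
  static bool class_alive[NR_CLASSES];
  static bool class_used[NR_CLASSES];    /* models usage_mask != 0 */

  static void register_class(int c)      /* lock class allocation */
  {
          class_alive[c] = true;
          class_used[c] = false;
          nr_unused_locks++;             /* new class counts as unused */
  }

  static void mark_lock(int c)           /* first use of the class */
  {
          if (class_alive[c] && !class_used[c]) {
                  class_used[c] = true;
                  nr_unused_locks--;     /* no longer unused */
          }
  }

  static void zap_class(int c)           /* class torn down */
  {
          class_alive[c] = false;
          /* the missed case: a never-used class keeps its +1 above */
  }

  static void lockdep_stats_show(void)   /* models the /proc recount */
  {
          int c, nr_unused = 0;

          for (c = 0; c < NR_CLASSES; c++)
                  if (class_alive[c] && !class_used[c])
                          nr_unused++;

          if (nr_unused_locks != nr_unused)
                  printf("WARN: cached %d != recounted %d\n",
                         nr_unused_locks, nr_unused);
  }

  int main(void)
  {
          register_class(0);
          mark_lock(0);            /* used once: counter back to 0 */
          register_class(1);
          zap_class(1);            /* zapped without ever being used */
          lockdep_stats_show();    /* prints "WARN: cached 1 != recounted 0" */
          return 0;
  }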

Therefore, nr_unused_locks needs to be accounted correctly at
zap_class() time.
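
A minimal sketch of the shape of such a fix (simplified, not the exact
upstream hunk), assuming zap_class() can use class->usage_mask to tell
that mark_lock() never saw this class:

  static void zap_class(struct pending_free *pf, struct lock_class *class)
  {
          /* ... existing unlinking of chains, hash entry, name/key ... */

          /*
           * usage_mask still being zero means mark_lock() never ran on
           * this class, so it is still counted in nr_unused_locks;
           * revert that before the class slot is recycled.
           */
          if (class->usage_mask == 0)
                  debug_atomic_dec(nr_unused_locks);

          /* ... existing freeing of the class ... */
  }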

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Waiman Long <longman@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20250326180831.510348-1-boqun.feng@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
kernel/locking/lockdep.c