getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()
author		Oleg Nesterov <oleg@redhat.com>
		Mon, 22 Jan 2024 15:50:50 +0000 (16:50 +0100)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
		Fri, 23 Feb 2024 08:51:45 +0000 (09:51 +0100)
commit		03b309a64d2588621417eefafcfb1fc135cc707a
tree		0e7df404e7be251fdfe43d85f2843a587c9b9742
parent		80a27b47f177b3781ffcd11966f514bffcb57f64
getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()

commit daa694e4137571b4ebec330f9a9b4d54aa8b8089 upstream.

Patch series "getrusage: use sig->stats_lock", v2.

This patch (of 2):

thread_group_cputime() does its own locking, so we can safely move
thread_group_cputime_adjusted(), which does another for_each_thread loop,
outside of the ->siglock-protected section.
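
As a rough sketch (not the literal diff; most rusage fields, the
RUSAGE_THREAD path, and the per-thread accumulation loop are omitted),
getrusage() in kernel/sys.c ends up shaped like this:

	void getrusage(struct task_struct *p, int who, struct rusage *r)
	{
		u64 tgutime, tgstime, utime = 0, stime = 0;
		unsigned long flags;

		if (!lock_task_sighand(p, &flags))
			return;

		/* read only the ->siglock-protected fields of p->signal */

		unlock_task_sighand(p, &flags);

		if (who != RUSAGE_CHILDREN) {
			/* walks every thread again, but with its own locking */
			thread_group_cputime_adjusted(p, &tgutime, &tgstime);
			utime += tgutime;
			stime += tgstime;
		}

		r->ru_utime = ns_to_kernel_old_timeval(utime);
		r->ru_stime = ns_to_kernel_old_timeval(stime);
	}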

This is also preparation for the next patch, which changes getrusage() to
use stats_lock instead of siglock; thread_group_cputime() takes the same
lock.  With the current implementation a recursive read_seqbegin_or_lock()
is fine: thread_group_cputime() can't enter the slow mode if the caller
holds stats_lock, because no writer can advance the sequence count while
the lock is held, so the nested lockless pass never needs to retry.  Still,
this change looks safer and is better performance-wise.
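
For reference, the reader side in thread_group_cputime() follows the usual
read_seqbegin_or_lock() pattern (condensed from kernel/sched/cputime.c):
the first pass is lockless, and only a racing writer forces a retry that
takes stats_lock for real:

	rcu_read_lock();
	/* attempt a lockless read on the first round */
	nextseq = 0;
	do {
		seq = nextseq;
		flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
		times->utime = sig->utime;
		times->stime = sig->stime;
		times->sum_exec_runtime = sig->sum_sched_runtime;

		for_each_thread(tsk, t) {
			task_cputime(t, &utime, &stime);
			times->utime += utime;
			times->stime += stime;
			times->sum_exec_runtime += read_sum_exec_runtime(t);
		}
		/* if a writer raced with us, retry with stats_lock held */
		nextseq = 1;
	} while (need_seqretry(&sig->stats_lock, seq));
	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
	rcu_read_unlock();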

Link: https://lkml.kernel.org/r/20240122155023.GA26169@redhat.com
Link: https://lkml.kernel.org/r/20240122155050.GA26205@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Dylan Hatch <dylanbhatch@google.com>
Tested-by: Dylan Hatch <dylanbhatch@google.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
kernel/sys.c