git.ipfire.org Git - thirdparty/kernel/stable.git/commit
getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()
author    Oleg Nesterov <oleg@redhat.com>
          Mon, 22 Jan 2024 15:50:50 +0000 (16:50 +0100)
committer Sasha Levin <sashal@kernel.org>
          Fri, 15 Mar 2024 14:48:22 +0000 (10:48 -0400)
commit 18c7394e46d8dadfaa7f67a278f68a22d565384c
tree   be37c6f98da23c2c74550d12b64b10040ab24b46
parent c5579e7280e632db7a4caa79beba4d0db668e1c8
getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()

[ Upstream commit daa694e4137571b4ebec330f9a9b4d54aa8b8089 ]

Patch series "getrusage: use sig->stats_lock", v2.

This patch (of 2):

thread_group_cputime() does its own locking, so we can safely move
thread_group_cputime_adjusted(), which does another for_each_thread loop,
outside of the ->siglock-protected section.

This is also preparation for the next patch, which changes getrusage() to
use stats_lock instead of siglock; thread_group_cputime() takes the same
lock.  With the current implementation recursive read_seqbegin_or_lock()
is fine, since thread_group_cputime() can't enter the slow mode if the
caller holds stats_lock, but this looks safer and performs better.

Link: https://lkml.kernel.org/r/20240122155023.GA26169@redhat.com
Link: https://lkml.kernel.org/r/20240122155050.GA26205@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Dylan Hatch <dylanbhatch@google.com>
Tested-by: Dylan Hatch <dylanbhatch@google.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
kernel/sys.c