From 1b3818962a987bdc835b4603c3028f209b754aa5 Mon Sep 17 00:00:00 2001
From: Sasha Levin <sashal@kernel.org>
Date: Tue, 5 Mar 2024 15:23:06 +0800
Subject: dm-raid: fix lockdep warning in "pers->hot_add_disk"

From: Yu Kuai <yukuai3@huawei.com>

[ Upstream commit 95009ae904b1e9dca8db6f649f2d7c18a6e42c75 ]

The lockdep assert was added by commit a448af25becf ("md/raid10: remove
rcu protection to access rdev from conf") in print_conf(), and I did not
notice that dm-raid calls "pers->hot_add_disk" without holding
'reconfig_mutex'.

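For context, the check that now fires is of this form (a sketch, not a
quote of the full function; lockdep_assert_held() is the annotation that
commit a448af25becf added to raid10's print_conf()):

	/* rdev pointers are plain pointers now, not RCU-protected, so
	 * readers must hold the mutex that replaced the RCU read lock.
	 */
	lockdep_assert_held(&conf->mddev->reconfig_mutex);
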
15 "pers->hot_add_disk" read and write many fields that is protected by
16 'reconfig_mutex', and raid_resume() already grab the lock in other
17 contex. Hence fix this problem by protecting "pers->host_add_disk"
18 with the lock.
19
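In sketch form, the locked path this patch creates looks as follows
(mddev_lock_nointr(), mddev_unlock() and attempt_restore_of_faulty_devices()
are the real symbols from drivers/md/md.h and drivers/md/dm-raid.c; the
surrounding raid_resume() code is elided):

	struct mddev *mddev = &rs->md;

	/* hold reconfig_mutex across the path that reaches pers->hot_add_disk */
	mddev_lock_nointr(mddev);	/* nointr: resume is not allowed to fail */
	attempt_restore_of_faulty_devices(rs);
	mddev_unlock(mddev);
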
Fixes: 9092c02d9435 ("DM RAID: Add ability to restore transiently failed devices on resume")
Fixes: a448af25becf ("md/raid10: remove rcu protection to access rdev from conf")
Cc: stable@vger.kernel.org # v6.7+
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Xiao Ni <xni@redhat.com>
Acked-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20240305072306.2562024-10-yukuai1@huaweicloud.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/md/dm-raid.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 1759134fce824..2a8746f9c6d87 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -4023,7 +4023,9 @@ static void raid_resume(struct dm_target *ti)
 		 * Take this opportunity to check whether any failed
 		 * devices are reachable again.
 		 */
+		mddev_lock_nointr(mddev);
 		attempt_restore_of_faulty_devices(rs);
+		mddev_unlock(mddev);
 	}
 
 	if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) {
--
2.43.0
