From 4f4fd7c5798bbdd5a03a60f6269cf1177fbd11ef Mon Sep 17 00:00:00 2001
From: Nigel Croxon <ncroxon@redhat.com>
Date: Fri, 29 Mar 2019 10:46:15 -0700
Subject: Don't jump to compute_result state from check_result state

From: Nigel Croxon <ncroxon@redhat.com>

commit 4f4fd7c5798bbdd5a03a60f6269cf1177fbd11ef upstream.

Changing state from check_state_check_result to
check_state_compute_result is not only unsafe but also doesn't
appear to serve a valid purpose. A raid6 check should only push
out extra writes when doing repair and a mismatch occurs. The
stripe dev management already tries to do repair writes for
failing sectors.

This patch makes the raid6 check_state_check_result handling
work more like raid5's. If there are somehow too many failures
for a check, just quit the check operation for the stripe. When
the checks pass, don't try to use check_state_compute_result for
a purpose it isn't needed for and is unsafe for. Just mark the
stripe as in sync for passing its parity checks, and let the
stripe dev read/write code and the bad-blocks list do their job
handling I/O errors.

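Condensed, the check_state_check_result handling after this change
reads as follows (a sketch reconstructed from the diff at the end of
this patch; the mismatch-repair branch is elided):

	case check_state_check_result:
		sh->check_state = check_state_idle;

		/* too many failed devices: give up on this stripe's check */
		if (s->failed > 1)
			break;
		if (sh->ops.zero_sum_result == 0) {
			/* Any parity checked was correct */
			set_bit(STRIPE_INSYNC, &sh->state);
		} else {
			/* mismatch: bump resync_mismatches and, unless this
			 * is a plain check (MD_RECOVERY_CHECK), do a repair
			 */
			...
		}
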
Repro steps from Xiao:

These are the steps to reproduce this problem:
1. redefine OPT_MEDIUM_ERR_ADDR to 12000 in scsi_debug.c
2. insmod scsi_debug.ko dev_size_mb=11000 max_luns=1 num_tgts=1
3. mdadm --create /dev/md127 --level=6 --raid-devices=5 /dev/sde1 /dev/sde2 /dev/sde3 /dev/sde5 /dev/sde6
   (sde is the disk created by scsi_debug)
4. echo "2" >/sys/module/scsi_debug/parameters/opts
5. raid-check

It panics:
[ 4854.730899] md: data-check of RAID array md127
[ 4854.857455] sd 5:0:0:0: [sdr] tag#80 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 4854.859246] sd 5:0:0:0: [sdr] tag#80 Sense Key : Medium Error [current]
[ 4854.860694] sd 5:0:0:0: [sdr] tag#80 Add. Sense: Unrecovered read error
[ 4854.862207] sd 5:0:0:0: [sdr] tag#80 CDB: Read(10) 28 00 00 00 2d 88 00 04 00 00
[ 4854.864196] print_req_error: critical medium error, dev sdr, sector 11656 flags 0
[ 4854.867409] sd 5:0:0:0: [sdr] tag#100 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 4854.869469] sd 5:0:0:0: [sdr] tag#100 Sense Key : Medium Error [current]
[ 4854.871206] sd 5:0:0:0: [sdr] tag#100 Add. Sense: Unrecovered read error
[ 4854.872858] sd 5:0:0:0: [sdr] tag#100 CDB: Read(10) 28 00 00 00 2e e0 00 00 08 00
[ 4854.874587] print_req_error: critical medium error, dev sdr, sector 12000 flags 4000
[ 4854.876456] sd 5:0:0:0: [sdr] tag#101 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 4854.878552] sd 5:0:0:0: [sdr] tag#101 Sense Key : Medium Error [current]
[ 4854.880278] sd 5:0:0:0: [sdr] tag#101 Add. Sense: Unrecovered read error
[ 4854.881846] sd 5:0:0:0: [sdr] tag#101 CDB: Read(10) 28 00 00 00 2e e8 00 00 08 00
[ 4854.883691] print_req_error: critical medium error, dev sdr, sector 12008 flags 4000
[ 4854.893927] sd 5:0:0:0: [sdr] tag#166 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 4854.896002] sd 5:0:0:0: [sdr] tag#166 Sense Key : Medium Error [current]
[ 4854.897561] sd 5:0:0:0: [sdr] tag#166 Add. Sense: Unrecovered read error
[ 4854.899110] sd 5:0:0:0: [sdr] tag#166 CDB: Read(10) 28 00 00 00 2e e0 00 00 10 00
[ 4854.900989] print_req_error: critical medium error, dev sdr, sector 12000 flags 0
[ 4854.902757] md/raid:md127: read error NOT corrected!! (sector 9952 on sdr1).
[ 4854.904375] md/raid:md127: read error NOT corrected!! (sector 9960 on sdr1).
[ 4854.906201] ------------[ cut here ]------------
[ 4854.907341] kernel BUG at drivers/md/raid5.c:4190!

raid5.c:4190 above is this BUG_ON:

handle_parity_checks6()
...
BUG_ON(s->uptodate < disks - 1); /* We don't need Q to recover */

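For context, here is how the pre-patch code reaches that BUG_ON (a
sketch reconstructed from the log above and the lines removed below,
not part of the patch itself): the two uncorrected read errors on
sdr1 leave the stripe with s->failed == 2, so s->uptodate is
disks - 2. The old check_result handling still jumped to
check_state_compute_result when parity verified but s->failed was
non-zero, and the compute_result arm assumes at most one block is
missing:

	case check_state_check_result:
		...
		if (sh->ops.zero_sum_result == 0) {
			if (!s->failed)
				set_bit(STRIPE_INSYNC, &sh->state);
			else
				/* with s->failed == 2 this jump is fatal */
				sh->check_state = check_state_compute_result;
		}
		...
	case check_state_compute_result:
		...
		BUG_ON(s->uptodate < disks - 1); /* fires: uptodate == disks - 2 */

With the patch applied, the s->failed > 1 case bails out before
either branch is taken.
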
Cc: <stable@vger.kernel.org> # v3.16+
OriginalAuthor: David Jeffery <djeffery@redhat.com>
Cc: Xiao Ni <xni@redhat.com>
Tested-by: David Jeffery <djeffery@redhat.com>
Signed-off-by: David Jeffery <djeffery@redhat.com>
Signed-off-by: Nigel Croxon <ncroxon@redhat.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/md/raid5.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4223,26 +4223,15 @@ static void handle_parity_checks6(struct
 	case check_state_check_result:
 		sh->check_state = check_state_idle;
 
+		if (s->failed > 1)
+			break;
 		/* handle a successful check operation, if parity is correct
 		 * we are done. Otherwise update the mismatch count and repair
 		 * parity if !MD_RECOVERY_CHECK
 		 */
 		if (sh->ops.zero_sum_result == 0) {
-			/* both parities are correct */
-			if (!s->failed)
-				set_bit(STRIPE_INSYNC, &sh->state);
-			else {
-				/* in contrast to the raid5 case we can validate
-				 * parity, but still have a failure to write
-				 * back
-				 */
-				sh->check_state = check_state_compute_result;
-				/* Returning at this point means that we may go
-				 * off and bring p and/or q uptodate again so
-				 * we make sure to check zero_sum_result again
-				 * to verify if p or q need writeback
-				 */
-			}
+			/* Any parity checked was correct */
+			set_bit(STRIPE_INSYNC, &sh->state);
 		} else {
 			atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches);
 			if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) {
 				/* don't try to repair!! */