From: Greg Kroah-Hartman
Date: Tue, 14 May 2019 09:30:02 +0000 (+0200)
Subject: 3.18-stable patches
X-Git-Tag: v5.1.2~15
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=3d577bdd930704b51b444dfd80e5c326a8c22ab9;p=thirdparty%2Fkernel%2Fstable-queue.git

3.18-stable patches

added patches:
	don-t-jump-to-compute_result-state-from-check_result-state.patch
---

diff --git a/queue-3.18/don-t-jump-to-compute_result-state-from-check_result-state.patch b/queue-3.18/don-t-jump-to-compute_result-state-from-check_result-state.patch
new file mode 100644
index 00000000000..42d4470d360
--- /dev/null
+++ b/queue-3.18/don-t-jump-to-compute_result-state-from-check_result-state.patch
@@ -0,0 +1,115 @@
+From 4f4fd7c5798bbdd5a03a60f6269cf1177fbd11ef Mon Sep 17 00:00:00 2001
+From: Nigel Croxon
+Date: Fri, 29 Mar 2019 10:46:15 -0700
+Subject: Don't jump to compute_result state from check_result state
+
+From: Nigel Croxon
+
+commit 4f4fd7c5798bbdd5a03a60f6269cf1177fbd11ef upstream.
+
+Changing state from check_state_check_result to
+check_state_compute_result not only is unsafe but also doesn't
+appear to serve a valid purpose.  A raid6 check should only be
+pushing out extra writes if doing repair and a mismatch occurs.
+The stripe dev management will already try to do repair writes
+for failing sectors.
+
+This patch makes the raid6 check_state_check_result handling
+work more like raid5's.  If there are somehow too many failures
+for a check, just quit the check operation for the stripe.  When
+any checks pass, don't try to use check_state_compute_result for
+a purpose it isn't needed for and is unsafe for.  Just mark the
+stripe as in sync for passing its parity checks and let the
+stripe dev read/write code and the bad blocks list do their
+job handling I/O errors.
+
+Repro steps from Xiao:
+
+These are the steps to reproduce this problem:
+1. redefine OPT_MEDIUM_ERR_ADDR to 12000 in scsi_debug.c
+2. insmod scsi_debug.ko dev_size_mb=11000 max_luns=1 num_tgts=1
+3. mdadm --create /dev/md127 --level=6 --raid-devices=5 /dev/sde1 /dev/sde2 /dev/sde3 /dev/sde5 /dev/sde6
+sde is the disk created by scsi_debug
+4. echo "2" >/sys/module/scsi_debug/parameters/opts
+5. raid-check
+
+It panics:
+[ 4854.730899] md: data-check of RAID array md127
+[ 4854.857455] sd 5:0:0:0: [sdr] tag#80 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
+[ 4854.859246] sd 5:0:0:0: [sdr] tag#80 Sense Key : Medium Error [current]
+[ 4854.860694] sd 5:0:0:0: [sdr] tag#80 Add. Sense: Unrecovered read error
+[ 4854.862207] sd 5:0:0:0: [sdr] tag#80 CDB: Read(10) 28 00 00 00 2d 88 00 04 00 00
+[ 4854.864196] print_req_error: critical medium error, dev sdr, sector 11656 flags 0
+[ 4854.867409] sd 5:0:0:0: [sdr] tag#100 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
+[ 4854.869469] sd 5:0:0:0: [sdr] tag#100 Sense Key : Medium Error [current]
+[ 4854.871206] sd 5:0:0:0: [sdr] tag#100 Add. Sense: Unrecovered read error
+[ 4854.872858] sd 5:0:0:0: [sdr] tag#100 CDB: Read(10) 28 00 00 00 2e e0 00 00 08 00
+[ 4854.874587] print_req_error: critical medium error, dev sdr, sector 12000 flags 4000
+[ 4854.876456] sd 5:0:0:0: [sdr] tag#101 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
+[ 4854.878552] sd 5:0:0:0: [sdr] tag#101 Sense Key : Medium Error [current]
+[ 4854.880278] sd 5:0:0:0: [sdr] tag#101 Add. Sense: Unrecovered read error
+[ 4854.881846] sd 5:0:0:0: [sdr] tag#101 CDB: Read(10) 28 00 00 00 2e e8 00 00 08 00
+[ 4854.883691] print_req_error: critical medium error, dev sdr, sector 12008 flags 4000
+[ 4854.893927] sd 5:0:0:0: [sdr] tag#166 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
+[ 4854.896002] sd 5:0:0:0: [sdr] tag#166 Sense Key : Medium Error [current]
+[ 4854.897561] sd 5:0:0:0: [sdr] tag#166 Add. Sense: Unrecovered read error
+[ 4854.899110] sd 5:0:0:0: [sdr] tag#166 CDB: Read(10) 28 00 00 00 2e e0 00 00 10 00
+[ 4854.900989] print_req_error: critical medium error, dev sdr, sector 12000 flags 0
+[ 4854.902757] md/raid:md127: read error NOT corrected!! (sector 9952 on sdr1).
+[ 4854.904375] md/raid:md127: read error NOT corrected!! (sector 9960 on sdr1).
+[ 4854.906201] ------------[ cut here ]------------
+[ 4854.907341] kernel BUG at drivers/md/raid5.c:4190!
+
+raid5.c:4190 above is this BUG_ON:
+
+ handle_parity_checks6()
+ ...
+ BUG_ON(s->uptodate < disks - 1); /* We don't need Q to recover */
+
+Cc:  # v3.16+
+OriginalAuthor: David Jeffery
+Cc: Xiao Ni
+Tested-by: David Jeffery
+Signed-off-by: David Jeffery
+Signed-off-by: Nigel Croxon
+Signed-off-by: Song Liu
+Signed-off-by: Jens Axboe
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/md/raid5.c |   19 ++++---------------
+ 1 file changed, 4 insertions(+), 15 deletions(-)
+
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -3414,26 +3414,15 @@ static void handle_parity_checks6(struct
+ 	case check_state_check_result:
+ 		sh->check_state = check_state_idle;
+ 
++		if (s->failed > 1)
++			break;
+ 		/* handle a successful check operation, if parity is correct
+ 		 * we are done.  Otherwise update the mismatch count and repair
+ 		 * parity if !MD_RECOVERY_CHECK
+ 		 */
+ 		if (sh->ops.zero_sum_result == 0) {
+-			/* both parities are correct */
+-			if (!s->failed)
+-				set_bit(STRIPE_INSYNC, &sh->state);
+-			else {
+-				/* in contrast to the raid5 case we can validate
+-				 * parity, but still have a failure to write
+-				 * back
+-				 */
+-				sh->check_state = check_state_compute_result;
+-				/* Returning at this point means that we may go
+-				 * off and bring p and/or q uptodate again so
+-				 * we make sure to check zero_sum_result again
+-				 * to verify if p or q need writeback
+-				 */
+-			}
++			/* Any parity checked was correct */
++			set_bit(STRIPE_INSYNC, &sh->state);
+ 		} else {
+ 			atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches);
+ 			if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery))
diff --git a/queue-3.18/series b/queue-3.18/series
index df435fbd003..49976181a4a 100644
--- a/queue-3.18/series
+++ b/queue-3.18/series
@@ -72,3 +72,4 @@ init-initialize-jump-labels-before-command-line-opti.patch
 s390-ctcm-fix-ctcm_new_device-error-return-code.patch
 selftests-net-correct-the-return-value-for-run_netso.patch
 gpu-ipu-v3-dp-fix-csc-handling.patch
+don-t-jump-to-compute_result-state-from-check_result-state.patch