FIX: imsm: Rebuild does not start on second failed disk
author Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Wed, 23 Mar 2011 15:04:20 +0000 (16:04 +0100)
committer NeilBrown <neilb@suse.de>
Wed, 23 Mar 2011 23:10:56 +0000 (10:10 +1100)
Problem:
If an array has two failed disks and is in a degraded state (currently
possible only for raid10 with two degraded mirrors), and two spare
devices exist in the container, recovery should be triggered for both
failed disks. It is not: recovery is triggered only for the first
failed disk. The second failed disk remains unchanged even though a
spare drive exists in the container and is ready for recovery.

Root cause:
mdmon does not re-check whether the array is still degraded after
recovery of the first drive completes.

Resolution:
Check whether the current number of disks in the array equals the
target number of disks. If not, trigger the degraded-array check and
then the recovery process.
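The logic of the fix can be sketched in isolation. This is a minimal
stand-alone model, not mdmon's actual code: the struct names, fields,
and the helper needs_degraded_check() are hypothetical simplifications
of monitor.c's read_and_act() loop.

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for mdmon's structures. */
struct member { int in_sync; };
struct array_info {
	int raid_disks;             /* target number of member disks */
	int nr_members;             /* member entries actually present */
	struct member members[8];
};

/* Returns 1 if the degraded-array check should be triggered. */
static int needs_degraded_check(const struct array_info *a)
{
	int count = 0;
	int i;

	for (i = 0; i < a->nr_members; i++) {
		if (!a->members[i].in_sync)
			return 1;   /* a member is out of sync */
		count++;
	}
	/* The fix in this commit: fewer members than raid_disks
	 * (e.g. a slot still empty after the first rebuild finished)
	 * also means the array remains degraded. */
	return count != a->raid_disks;
}
```

Before the patch, only the in-sync test above ran, so an array with a
missing member but all remaining members in sync was never re-checked
and the second spare was never pulled in.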

Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
monitor.c

index 4a34bc1d8fdae282067b247eb1497195b8013c13..7ac59072144ca5c309625e52a74bfb433a408667 100644
--- a/monitor.c
+++ b/monitor.c
@@ -219,6 +219,7 @@ static int read_and_act(struct active_array *a)
        int deactivate = 0;
        struct mdinfo *mdi;
        int dirty = 0;
+       int count = 0;
 
        a->next_state = bad_word;
        a->next_action = bad_action;
@@ -311,7 +312,10 @@ static int read_and_act(struct active_array *a)
                                                   mdi->curr_state);
                        if (! (mdi->curr_state & DS_INSYNC))
                                check_degraded = 1;
+                       count++;
                }
+               if (count != a->info.array.raid_disks)
+                       check_degraded = 1;
        }
 
        if (!deactivate &&