mdadm \- manage MD devices
.BI mdadm " [mode] <raiddevice> [options] <component-devices>"
RAID devices are virtual devices created from two or more
real block devices. This allows multiple devices (typically disk
drives or partitions thereof) to be combined into a single device to
hold (for example) a single filesystem.
Some RAID levels include redundancy and so can survive some degree of
device failure.
.PP
Linux Software RAID devices are implemented through the md (Multiple
Devices) device driver.
.PP
Currently, Linux supports
.B LINEAR
md devices,
.B RAID0
(striping),
.B RAID1
(mirroring),
.BR RAID4 ,
.BR RAID5 ,
.BR RAID6 ,
and
.BR MULTIPATH .
.PP
.B MULTIPATH
is not a Software RAID mechanism, but does involve multiple devices:
each device is a path to one common physical storage device.
.B mdadm
is a program that can be used to create, manage, and monitor
MD devices. As
such it provides a similar set of functionality to the
.B raidtools
package.
.PP
The key differences between
.B mdadm
and
.B raidtools
are:
.IP \(bu 4
.B mdadm
is a single program and not a collection of programs.
.IP \(bu 4
.B mdadm
can perform (almost) all of its functions without having a
configuration file and does not use one by default. Also
.B mdadm
helps with management of the configuration
file.
.IP \(bu 4
.B mdadm
can provide information about your arrays (through Query, Detail, and Examine)
without a configuration file.
.IP \(bu 4
.B mdadm
does not use the
.B raidtools
configuration file at all. It has a different configuration file
with a different format and a different purpose.
mdadm has 7 major modes of operation:
.TP
.B Assemble
Assemble the parts of a previously created
array into an active array. Components can be explicitly given
or can be searched for.
.B mdadm
checks that the components
do form a bona fide array, and can, on request, fiddle superblock
information so as to assemble a faulty array.
.TP
.B Build
Build a legacy array without per-device superblocks.
.TP
.B Create
Create a new array with per-device superblocks.
'''in several steps create-add-add-run or it can all happen with one command.
.TP
.B Manage
This is for doing things to specific components of an array such as
adding new spares and removing faulty devices.
.TP
.B Misc
This mode allows operations on independent devices such as examining MD
superblocks, erasing old superblocks, and stopping active arrays.
.TP
.B "Follow or Monitor"
Monitor one or more md devices and act on any state changes. This is
only meaningful for raid1, 4, 5, 6 or multipath arrays, as
only these have interesting state. raid0 or linear never have
missing, spare, or failed drives, so there is nothing to monitor.
.TP
.B Grow
Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
of component devices in RAID levels 1/4/5/6 and changing the number of
active devices in RAID1.
.SH OPTIONS
Available options are:
.TP
.BR -A ", " --assemble
Assemble a pre-existing array.
.TP
.BR -B ", " --build
Build a legacy array without superblocks.
.TP
.BR -C ", " --create
Create a new array.
.TP
.BR -Q ", " --query
Examine a device to see
(1) if it is an md device and (2) if it is a component of an md
array.
Information about what is discovered is presented.
.TP
.BR -D ", " --detail
Print details of one or more md devices.
.TP
.BR -E ", " --examine
Print contents of the md superblock on the device(s).
.TP
.BR -F ", " --follow ", " --monitor
Select Monitor mode.
.TP
.BR -G ", " --grow
Change the size or shape of an active array.
.TP
.BR -h ", " --help
Display a help message or, after one of the above options, a mode-specific help
message.
.TP
.B --help-options
Display more detailed help about command line parsing and some commonly
used options.
.TP
.BR -V ", " --version
Print version information for mdadm.
.TP
.BR -v ", " --verbose
Be more verbose about what is happening.
.TP
.BR -b ", " --brief
Be less verbose. This is used with
.B --detail
and
.BR --examine .
.TP
.BR -f ", " --force
Be more forceful about certain operations. See the various modes for
the exact meaning of this option in different contexts.
.TP
.BR -c ", " --config=
Specify the config file. Default is
.BR /etc/mdadm.conf .
If the config file given is
.B partitions
then nothing will be read, but
.B mdadm
will act as though the config file contained exactly
.B "DEVICE partitions"
and will read
.B /proc/partitions
to find a list of devices to scan.
If the word
.B none
is given for the config file, then
.B mdadm
will act as though the config file were empty.
.TP
.BR -s ", " --scan
Scan the config file or
.B /proc/mdstat
for missing information.
In general, this option gives
.B mdadm
permission to get any missing information, like component devices,
array devices, array identities, and alert destination, from the
configuration file:
.BR /etc/mdadm.conf .
One exception is MISC mode when using
.B --detail
or
.BR --stop ,
in which case
.B --scan
says to get a list of array devices from
.BR /proc/mdstat .
.SH For create or build:
.TP
.BR -c ", " --chunk=
Specify chunk size in kibibytes. The default is 64.
.TP
.B --rounding=
Specify rounding factor for a linear array (== chunk size).
.TP
.BR -l ", " --level=
Set raid level. When used with
.BR --create ,
options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
raid5, 5, raid6, 6, multipath, mp. Obviously some of these are synonymous.
When used with
.BR --build ,
only linear, raid0, 0, stripe are valid.
.TP
.BR -p ", " --parity=
Set raid5 parity algorithm. Options are:
left-asymmetric,
left-symmetric,
right-asymmetric,
right-symmetric,
la, ra, ls, rs. The default is left-symmetric.
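.PP
For example, a plausible invocation (a sketch only; the device names
and values here are placeholders, not recommendations) that sets the
chunk size, level and parity algorithm explicitly would be:
'''Illustrative example; /dev/sd?1 are placeholder component devices.
.PP
.B " mdadm --create /dev/md0 --chunk=32 --level=raid5 --parity=ls --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1"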
.TP
.BR -n ", " --raid-devices=
Specify the number of active devices in the array. This, plus the
number of spare devices (see below), must equal the number of
component devices
(including "\fBmissing\fP" devices)
that are listed on the command line for
.BR --create .
Setting a value of 1 is probably
a mistake and so requires that
.B --force
be specified first. A value of 1 will then be allowed for linear,
multipath, raid0 and raid1. It is never allowed for raid4 or raid5.
.br
This number can only be changed using
.B --grow
for RAID1 arrays, and only on kernels which provide the necessary support.
.TP
.BR -x ", " --spare-devices=
Specify the number of spare (eXtra) devices in the initial array.
Spares can also be added
and removed later. The number of component devices listed
on the command line must equal the number of raid devices plus the
number of spare devices.
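.PP
As a sketch (the device names are placeholders), a two-drive RAID1
array with one hot spare, matching the three listed components, could
be created with:
'''Illustrative only.
.PP
.B " mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1"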
.TP
.BR -z ", " --size=
Amount (in Kibibytes) of space to use from each drive in RAID1/4/5/6.
This must be a multiple of the chunk size, and must leave about 128Kb
of space at the end of the drive for the RAID superblock.
If this is not specified
(as it normally is not) the smallest drive (or partition) sets the
size, though if there is a variance among the drives of greater than 1%, a warning is
issued.
.br
This value can be set with
.B --grow
for RAID level 1/4/5/6. If the array was created with a size smaller
than the currently active drives, the extra space can be accessed
using this option.
.TP
.B --assume-clean
Tell
.B mdadm
that the array pre-existed and is known to be clean. This is only
really useful for building a RAID1 array. Only use this if you really
know what you are doing. This is currently only supported for --build.
.TP
.BR -R ", " --run
Insist that
.B mdadm
run the array, even if some of the components
appear to be active in another array or filesystem. Normally
.B mdadm
will ask for confirmation before including such components in an
array. This option causes that question to be suppressed.
.TP
.BR -f ", " --force
Insist that
.B mdadm
accept the geometry and layout specified without question. Normally
.B mdadm
will not allow creation of an array with only one device, and will try
to create a raid5 array with one missing drive (as this makes the
initial resync work faster). With
.BR --force ,
.B mdadm
will not try to be so clever.
.TP
.BR -a ", " "--auto{=no,yes,md,mdp,part,p}{NN}"
Instruct mdadm to create the device file if needed, and to allocate
an unused minor number. "yes" or "md" causes a non-partitionable array
to be used. "mdp", "part" or "p" causes a partitionable array (2.6 and
later) to be used. The argument can also come immediately after
"-a", e.g. "-ap".
.br
For partitionable arrays,
.B mdadm
will create the device file for the whole array and for the first 4
partitions. A different number of partitions can be specified at the
end of this option (e.g.
.BR --auto=p7 ).
If the device name ends with a digit, the partition names add an
underscore, a 'p', and a number, e.g. "/dev/home1_p3". If there is no
trailing digit, then the partition names just have a number added,
e.g. "/dev/scratch3".
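.PP
For instance (an illustrative sketch; the name "/dev/md/home" and the
component devices are placeholders), a partitionable array with device
files for eight partitions could be requested with:
'''Illustrative only; adjust names and counts to taste.
.PP
.B " mdadm --create /dev/md/home --auto=part8 --level=raid5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1"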
.SH For assemble:
.TP
.BR -u ", " --uuid=
uuid of array to assemble. Devices which don't have this uuid are
excluded.
.TP
.BR -m ", " --super-minor=
Minor number of the device that the array was created for. Devices which
don't have this minor number are excluded. If you create an array as
/dev/md1, then all superblocks will contain the minor number 1, even if
the array is later assembled as /dev/md2.
.br
Giving the literal word "dev" for
.B --super-minor
will cause
.B mdadm
to use the minor number of the md device that is being assembled.
e.g. when assembling
.BR /dev/md0 ,
.B mdadm
will look for super blocks with a minor number of 0.
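.PP
As a sketch (the devices listed are placeholders), an array whose
superblocks record minor number 0 could be assembled with:
'''Illustrative only.
.PP
.B " mdadm --assemble /dev/md0 --super-minor=0 /dev/sda1 /dev/sdb1"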
.TP
.BR -f ", " --force
Assemble the array even if some superblocks appear out-of-date.
.TP
.BR -R ", " --run
Attempt to start the array even if fewer drives were given than are
needed for a full array. Normally if not all drives are found and
.B --scan
is not used, then the array will be assembled but not started.
With
.B --run
an attempt will be made to start it anyway.
.TP
.BR -a ", " "--auto{=no,yes,md,mdp,part}"
See this option under Create and Build options.
.TP
.BR -U ", " --update=
Update the superblock on each device while assembling the array. The
argument given to this flag can be one of
.BR sparc2.2 ,
.BR super-minor ,
.B resync
or
.BR summaries .
.IP
The
.B sparc2.2
option will adjust the superblock of an array that was created on a Sparc
machine running a patched 2.2 Linux kernel. This kernel got the
alignment of part of the superblock wrong. You can use the
.B "--examine --sparc2.2"
option to
.B mdadm
to see what effect this would have.
.IP
The
.B super-minor
option will update the
.B "preferred minor"
field on each superblock to match the minor number of the array being
assembled. This is not needed on 2.6 and later kernels as they make
this adjustment automatically.
.IP
The
.B resync
option will cause the array to be marked
.IR dirty ,
meaning that any redundancy in the array (e.g. parity for raid5,
copies for raid1) may be incorrect. This will cause the raid system
to perform a "resync" pass to make sure that all redundant information
is correct.
.IP
The
.B summaries
option will correct the summaries in the superblock. That is the
counts of total, working, active, failed, and spare devices.
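.PP
For example (a sketch; the device names are placeholders), incorrect
device counts could be corrected while assembling with:
'''Illustrative only.
.PP
.B " mdadm --assemble /dev/md0 --update=summaries /dev/sda1 /dev/sdb1"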
.SH For Manage mode:
.TP
.BR -a ", " --add
hotadd listed devices.
.TP
.BR -r ", " --remove
remove listed devices. They must not be active, i.e. they should
be failed or spare devices.
.TP
.BR -f ", " --fail
mark listed devices as faulty.
.SH For Examine mode:
.TP
.B --sparc2.2
If an array was created on a 2.2 Linux kernel patched with RAID
support, the superblock will have been created incorrectly, or at
least incompatibly with 2.4 and later kernels. Using the
.B --sparc2.2
flag with
.B --examine
will fix the superblock before displaying it. If this appears to do
the right thing, then the array can be successfully assembled using
.BR "--assemble --update=sparc2.2" .
.SH For Misc mode:
.TP
.BR -R ", " --run
start a partially built array.
.TP
.BR -S ", " --stop
deactivate array, releasing all resources.
.TP
.BR -o ", " --readonly
mark array as readonly.
.TP
.BR -w ", " --readwrite
mark array as readwrite.
.TP
.B --zero-superblock
If the device contains a valid md superblock, the block is
over-written with zeros. With
.B --force
the block where the superblock would be is over-written even if it
doesn't appear to be valid.
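.PP
For example (the device name is a placeholder), a stale superblock on
a partition that is about to be reused could be erased with:
'''Illustrative only.
.PP
.B " mdadm --zero-superblock /dev/sda1"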
.TP
.BR -t ", " --test
When used with
.BR --detail ,
the exit status of
.B mdadm
is set to reflect the status of the device.
.SH For Monitor mode:
.TP
.BR -m ", " --mail
Give a mail address to send alerts to.
.TP
.BR -p ", " --program ", " --alert
Give a program to be run whenever an event is detected.
.TP
.BR -d ", " --delay
Give a delay in seconds.
.B mdadm
polls the md arrays and then waits this many seconds before polling
again. The default is 60 seconds.
.TP
.BR -f ", " --daemonise
Tell
.B mdadm
to run as a background daemon if it decides to monitor anything. This
causes it to fork and run in the child, and to disconnect from the
terminal. The process id of the child is written to stdout.
This is useful with
.B --scan
which will only continue monitoring if a mail address or alert program
is found in the config file.
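.PP
A typical invocation (illustrative; it assumes that a mail address or
alert program is given in the config file) would be:
'''Illustrative only.
.PP
.B " mdadm --monitor --scan --daemonise --delay=300"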
.TP
.BR -1 ", " --oneshot
Check arrays only once. This will generate
.B NewArray
events and, more significantly,
.B DegradedArray
events. Running
.B " mdadm --monitor --scan -1"
from a cron script will ensure regular notification of any degraded arrays.
.TP
.BR -t ", " --test
Generate a
.B TestMessage
alert for every array found at startup. This alert gets mailed and
passed to the alert program. This can be used for testing that alert
messages get through successfully.
.SH ASSEMBLE MODE
.HP 12
Usage:
.B mdadm --assemble
.I md-device options-and-component-devices...
.HP 12
Usage:
.B mdadm --assemble --scan
.I md-devices-and-options...
.HP 12
Usage:
.B mdadm --assemble --scan
.I options...
.PP
This usage assembles one or more raid arrays from pre-existing components.
For each array, mdadm needs to know the md device, the identity of the
array, and a number of component-devices. These can be found in a number of ways.
.PP
In the first usage example (without the
.BR --scan )
the first device given is the md device.
In the second usage example, all devices listed are treated as md
devices and assembly is attempted.
In the third (where no devices are listed) all md devices that are
listed in the configuration file are assembled.
.PP
If precisely one device is listed, but
.B --scan
is not given, then
.B mdadm
acts as though
.B --scan
was given and identity information is extracted from the configuration file.
.PP
The identity can be given with the
.B --uuid
option, with the
.B --super-minor
option, can be found in the config file, or will be taken from the
super block on the first component-device listed on the command line.
.PP
Devices can be given on the
.B --assemble
command line or in the config file. Only devices which have an md
superblock which contains the right identity will be considered for
any array.
.PP
The config file is only used if explicitly named with
.B --config
or requested with (a possibly implicit)
.BR --scan .
In the latter case,
.B /etc/mdadm.conf
is used.
If
.B --scan
is not given, then the config file will only be used to find the
identity of md arrays.
.PP
Normally the array will be started after it is assembled. However if
.B --scan
is not given and insufficient drives were listed to start a complete
(non-degraded) array, then the array is not started (to guard against
usage errors). To insist that the array be started in this case (as
may work for RAID1, 4, 5 or 6), give the
.B --run
flag.
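.PP
For instance (a sketch; the device names are placeholders), a mirror
with one of its two drives missing could be forced to start with:
'''Illustrative only.
.PP
.B " mdadm --assemble --run /dev/md0 /dev/sda1"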
.PP
If the
.B --auto
option is given, either on the command line (--auto) or in the
configuration file (e.g. auto=part), then
.B mdadm
will create the md device if necessary or will re-create it if it
doesn't look usable as it is.
.PP
This can be useful for handling partitioned devices (which don't have
a stable device number - it can change after a reboot) and when using
"udev" to manage your
.B /dev
tree (udev cannot handle md devices because of the unusual device
initialisation conventions).
.PP
If the option to "auto" is "mdp" or "part" or (on the command line
only) "p", then mdadm will create a partitionable array, using the
first free one that is not in use and does not already have an entry
in /dev (apart from numeric /dev/md* entries).
.PP
If the option to "auto" is "yes" or "md" or (on the command line)
nothing, then mdadm will create a traditional, non-partitionable md
array.
.PP
It is expected that the "auto" functionality will be used to create
device entries with meaningful names such as "/dev/md/home" or
"/dev/md/root", rather than names based on the numerical array number.
.PP
When using this option to create a partitionable array, the device
files for the first 4 partitions are also created. If a different
number is required it can simply be appended to the auto option,
e.g. "auto=part8". Partition names are created by appending a digit
string to the device name, with an intervening "_p" if the device name
ends with a digit.
.PP
The
.B --auto
option is also available in Build and Create modes. As those modes do
not use a config file, the "auto=" config option does not apply to
these modes.
.SH BUILD MODE
.HP 12
Usage:
.B mdadm --build
.I device
.BI --chunk= X
.BI --level= Y
.BI --raid-devices= Z
.I devices
.PP
This usage is similar to
.BR --create .
The difference is that it creates a legacy array without a superblock. With
these arrays there is no difference between initially creating the array and
subsequently assembling the array, except that hopefully there is useful
data there in the second case.
.PP
The level may only be 0, raid0, or linear. All devices must be listed
and the array will be started once complete.
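.PP
For example (a sketch; the device names are placeholders), a two-drive
striped legacy array could be built with:
'''Illustrative only.
.PP
.B " mdadm --build /dev/md0 --chunk=64 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1"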
.SH CREATE MODE
.HP 12
Usage:
.B mdadm --create
.I device
.BI --chunk= X
.BI --level= Y
.br
.BI --raid-devices= Z
.I devices
.PP
This usage will initialise a new md array, associate some devices with
it, and activate the array.
.PP
If the
.B --auto
option is given (as described in more detail in the section on
Assemble mode), then the md device will be created with a suitable
device number if necessary.
.PP
As devices are added, they are checked to see if they contain raid
superblocks or filesystems. They are also checked to see if the variance in
device size exceeds 1%.
.PP
If any discrepancy is found, the array will not automatically be run, though
the presence of a
.B --run
can override this caution.
.PP
To create a "degraded" array in which some devices are missing, simply
give the word "\fBmissing\fP"
in place of a device name. This will cause
.B mdadm
to leave the corresponding slot in the array empty.
For a RAID4 or RAID5 array at most one slot can be
"\fBmissing\fP"; for a RAID6 array at most two slots.
For a RAID1 array, only one real device needs to be given. All of the
others can be "\fBmissing\fP".
.PP
When creating a RAID5 array,
.B mdadm
will automatically create a degraded array with an extra spare drive.
This is because building the spare into a degraded array is in general faster than resyncing
the parity on a non-degraded, but not clean, array. This feature can
be overridden with the
.B --force
option.
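.PP
For instance (a sketch; the device names are placeholders), a RAID5
array with one slot deliberately left empty could be created with:
'''Illustrative only.
.PP
.B " mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 missing"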
'''option is given, it is not necessary to list any component-devices in this command.
'''They can be added later, before a
'''is given, the apparent size of the smallest drive given is used.
.PP
The General Management options that are valid with --create are:
.TP
.B --run
insist on running the array even if some devices look like they might
be in use.
.TP
.B --readonly
start the array readonly - not supported yet.
.SH MANAGE MODE
.HP 12
Usage:
.B mdadm
.I device options... devices...
.PP
This usage will allow individual devices in an array to be failed,
removed or added. It is possible to perform multiple operations with
one command. For example:
.br
.B " mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1"
.br
will firstly mark
.B /dev/hda1
as faulty in
.B /dev/md0
and will then remove it from the array and finally add it back
in as a spare. However only one md array can be affected by a single
command.
.SH MISC MODE
.HP 12
Usage:
.B mdadm
.I options... devices...
.PP
MISC mode includes a number of distinct operations that
operate on distinct devices. The operations are:
.TP
.B --query
The device is examined to see if it is
(1) an active md array, or
(2) a component of an md array.
The information discovered is reported.
.TP
.B --detail
The device should be an active md device.
.B mdadm
will display a detailed description of the array.
.B --brief
will cause the output to be less detailed and the format to be
suitable for inclusion in
.BR /etc/mdadm.conf .
The exit status of
.B mdadm
will normally be 0 unless
.B mdadm
failed to get useful information about the device(s). However if the
.B --test
option is given, then the exit status will be:
.RS
.TP
0
The array is functioning normally.
.TP
1
The array has at least one failed device.
.TP
2
The array has multiple failed devices and hence is unusable (raid4 or
raid5).
.TP
4
There was an error while trying to get information about the device.
.RE
.TP
.B --examine
The device should be a component of an md array.
.B mdadm
will read the md superblock of the device and display the contents.
If
.B --brief
is given, or
.BR --scan ,
then multiple devices that are components of the one array
are grouped together and reported in a single entry suitable
for inclusion in
.BR /etc/mdadm.conf .
Having
.B --scan
without listing any devices will cause all devices listed in the
config file to be examined.
.TP
.B --stop
The devices should be active md arrays which will be deactivated, as
long as they are not currently in use.
.TP
.B --run
This will fully activate a partially assembled md array.
.TP
.B --readonly
This will mark an active array as read-only, providing that it is
not currently being used.
.TP
.B --readwrite
This will change a
.B readonly
array back to being read/write.
.PP
For all operations except
.BR --examine ,
.B --scan
will cause the operation to be applied to all arrays listed in
.BR /proc/mdstat .
For
.BR --examine ,
.B --scan
causes all devices listed in the config file to be examined.
.SH MONITOR MODE
.HP 12
Usage:
.B mdadm --monitor
.I options... devices...
.PP
This usage causes
.B mdadm
to periodically poll a number of md arrays and to report on any events
noticed.
.B mdadm
will never exit once it decides that there are arrays to be checked,
so it should normally be run in the background.
.PP
As well as reporting events,
.B mdadm
may move a spare drive from one array to another if they are in the
same
.B spare-group
and if the destination array has a failed drive but no spares.
.PP
If any devices are listed on the command line,
.B mdadm
will only monitor those devices. Otherwise all arrays listed in the
configuration file will be monitored. Further, if
.B --scan
is given, then any other md devices that appear in
.B /proc/mdstat
will also be monitored.
.PP
The result of monitoring the arrays is the generation of events.
These events are passed to a separate program (if specified) and may
be mailed to a given E-mail address.
.PP
When passing events to a program, the program is run once for each event,
and is given 2 or 3 command-line arguments. The first is the
name of the event (see below). The second is the name of the
md device which is affected, and the third is the name of a related
device if relevant, such as a component device that has failed.
.PP
If
.B --scan
is given, then a program or an E-mail address must be specified on the
command line or in the config file. If neither are available, then
.B mdadm
will not monitor anything.
Without
.BR --scan ,
.B mdadm
will continue monitoring as long as something was found to monitor. If
no program or email is given, then each event is reported to
.BR stdout .
.PP
The different events are:
.RS 4
.TP
.B DeviceDisappeared
An md array which previously was configured appears to no longer be
configured.
.TP
.B RebuildStarted
An md array started reconstruction.
.TP
.BI Rebuild NN
Where
.I NN
is 20, 40, 60, or 80, this indicates that the rebuild has passed that
percentage of the total.
.TP
.B RebuildFinished
An md array that was rebuilding isn't any more, either because it
finished normally or was aborted.
.TP
.B Fail
An active component device of an array has been marked as faulty.
.TP
.B FailSpare
A spare component device which was being rebuilt to replace a faulty
device has failed.
.TP
.B SpareActive
A spare component device which was being rebuilt to replace a faulty
device has been successfully rebuilt and has been made active.
.TP
.B NewArray
A new md array has been detected in the
.B /proc/mdstat
file.
.TP
.B DegradedArray
A newly noticed array appears to be degraded. This message is not
generated when
.B mdadm
notices a drive failure which causes degradation, but only when
.B mdadm
notices that an array is degraded when it first sees the array.
.TP
.B MoveSpare
A spare drive has been moved from one array in a
.B spare-group
to another to allow a failed drive to be replaced.
.TP
.B TestMessage
An array was found at startup, and the
.B --test
flag was given.
.RE
.PP
Only
.BR Fail ,
.BR FailSpare ,
.B DegradedArray
and
.B TestMessage
cause Email to be sent. All events cause the program to be run.
The program is run with two or three arguments, they being the event
name, the array device and possibly a second device.
.PP
Each event has an associated array device (e.g.
.BR /dev/md1 )
and possibly a second device. For
.BR Fail ,
.B FailSpare
and
.BR SpareActive ,
the second device is the relevant component device.
For
.B MoveSpare
the second device is the array that the spare was moved from.
.PP
For
.B mdadm
to move spares from one array to another, the different arrays need to
be labelled with the same
.B spare-group
in the configuration file. The
.B spare-group
name can be any string. It is only necessary that different spare
groups use different names.
.PP
When
.B mdadm
detects that an array which is in a spare group has fewer active
devices than necessary for the complete array, and has no spare
devices, it will look for another array in the same spare group that
has a full complement of working drives and a spare. It will then
attempt to remove the spare from the second array and add it to the
first.
If the removal succeeds but the adding fails, then it is added back to
the original array.
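.PP
As an illustration (a sketch; the device names and the group name are
placeholders), two arrays that should share their spares could be
tagged in the config file like this:
'''Illustrative config fragment only.
.PP
.B " ARRAY /dev/md0 super-minor=0 spare-group=backup"
.br
.B " ARRAY /dev/md1 super-minor=1 spare-group=backup"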
.SH GROW MODE
The GROW mode is used for changing the size or shape of an active
array.
For this to work, the kernel must support the necessary change.
Various types of growth may be added during 2.6 development, possibly
including restructuring a raid5 array to have more active devices.
.PP
Currently the only support available is to change the "size" attribute
for arrays with redundancy, and the raid-disks attribute of RAID1
arrays.
.PP
Normally when an array is built the "size" is taken from the smallest
of the drives. If all the small drives in an array are, one at a
time, removed and replaced with larger drives, then you could have an
array of large drives with only a small amount used. In this
situation, changing the "size" with "GROW" mode will allow the extra
space to start being used. If the size is increased in this way, a
"resync" process will start to make sure the new parts of the array
are synchronised.
.PP
Note that when an array changes size, any filesystem that may be
stored in the array will not automatically grow to use the space. The
filesystem will need to be explicitly told to use the extra space.
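.PP
For example (a sketch; the size value, in kibibytes, is a placeholder
that must suit the actual drives), after replacing all drives with
larger ones the array could be told to use more of each device with:
'''Illustrative only; choose a size appropriate to your drives.
.PP
.B " mdadm --grow /dev/md0 --size=4194304"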
.PP
A RAID1 array can work with any number of devices from 1 upwards
(though 1 is not very useful). There may be times when you want to
increase or decrease the number of active devices. Note that this is
different to hot-add or hot-remove which changes the number of
inactive devices.
.PP
When reducing the number of devices in a RAID1 array, the slots which
are to be removed from the array must already be vacant. That is, the
devices which were in those slots must be failed and removed.
.PP
When the number of devices is increased, any hot spares that are
present may be activated immediately.
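.PP
For instance (a sketch), a two-way RAID1 array with a hot spare
already attached could be converted to a three-way mirror with:
'''Illustrative only.
.PP
.B " mdadm --grow /dev/md0 --raid-devices=3"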
.SH EXAMPLES
.B " mdadm --query /dev/name-of-device"
.br
This will find out if a given device is a raid array, or is part of
one, and will provide brief information about the device.
.PP
.B " mdadm --assemble --scan"
.br
This will assemble and start all arrays listed in the standard config
file. This command will typically go in a system startup file.
.PP
.B " mdadm --stop --scan"
.br
This will shut down all arrays that can be shut down (i.e. are not
currently in use). This will typically go in a system shutdown script.
.PP
.B " mdadm --follow --scan --delay=120"
.br
If (and only if) there is an Email address or program given in the
standard config file, then
monitor the status of all arrays listed in that file by
polling them every 2 minutes.
.PP
.B " mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1"
.br
Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
.PP
.B " echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf"
.br
.B " mdadm --detail --scan >> mdadm.conf"
.br
This will create a prototype config file that describes currently
active arrays that are known to be made from partitions of IDE or SCSI drives.
This file should be reviewed before being used as it may
contain unwanted detail.
.PP
.B " echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf"
.br
.B " mdadm --examine --scan --config=mdadm.conf >> mdadm.conf"
.br
This will find what arrays could be assembled from existing IDE and
SCSI whole drives (not partitions) and store the information in the
format of a config file.
This file is very likely to contain unwanted detail, particularly
the
.B devices=
entries. It should be reviewed and edited before being used as an
actual config file.
.PP
.B " mdadm --examine --brief --scan --config=partitions"
.br
.B " mdadm -Ebsc partitions"
.br
Create a list of devices by reading
.BR /proc/partitions ,
scan these for RAID superblocks, and print out a brief listing of all
that were found.
.PP
.B " mdadm -Ac partitions -m 0 /dev/md0"
.br
Scan all partitions and devices listed in
.BR /proc/partitions
and assemble
.B /dev/md0
out of all such devices with a RAID superblock with a minor number of 0.
.PP
.B " mdadm --monitor --scan --daemonise > /var/run/mdadm"
.br
If the config file contains a mail address or alert program, run mdadm in
the background in monitor mode monitoring all md devices. Also write
the pid of the mdadm daemon to
.BR /var/run/mdadm .
.PP
.B " mdadm --create --help"
.br
Provide help about the Create mode.
.PP
.B " mdadm --config --help"
.br
Provide help about the format of the config file.
.PP
.B " mdadm --help"
.br
Provide general help.
.SH FILES
.SS /proc/mdstat
If you're using the
.B /proc
filesystem,
.B /proc/mdstat
lists all active md devices with information about them.
.B mdadm
uses this to find arrays when
.B --scan
is given in Misc mode, and to monitor array reconstruction
on Monitor mode.
.SS /etc/mdadm.conf
The config file lists which devices may be scanned to see if
they contain MD superblocks, and gives identifying information
(e.g. UUID) about known MD arrays. See
.BR mdadm.conf (5)
for more details.
.SH NOTE
.B mdadm
was previously known as
.BR mdctl .
.SH SEE ALSO
For information on the various levels of
RAID, check out:
.PP
.UR http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
.UE
.PP
for new releases of the RAID driver check out:
.PP
.UR ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
.UE
.PP
or
.PP
.UR http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
.UE