2 ''' Copyright Neil Brown and others.
3 ''' This program is free software; you can redistribute it and/or modify
4 ''' it under the terms of the GNU General Public License as published by
5 ''' the Free Software Foundation; either version 2 of the License, or
6 ''' (at your option) any later version.
7 ''' See file COPYING in distribution for details.
10 mdadm \- manage MD devices
16 .BI mdadm " [mode] <raiddevice> [options] <component-devices>"
19 RAID devices are virtual devices created from two or more
20 real block devices. This allows multiple devices (typically disk
21 drives or partitions thereof) to be combined into a single device to
22 hold (for example) a single filesystem.
23 Some RAID levels include redundancy and so can survive some degree of
26 Linux Software RAID devices are implemented through the md (Multiple
27 Devices) device driver.
29 Currently, Linux supports
45 is not a Software RAID mechanism, but does involve
48 each device is a path to one common physical storage device.
51 is also not true RAID, and it only involves one device. It
52 provides a layer over a true device that can be used to inject faults.
55 '''is a program that can be used to create, manage, and monitor
57 '''such it provides a similar set of functionality to the
60 '''The key differences between
67 '''is a single program and not a collection of programs.
70 '''can perform (almost) all of its functions without having a
71 '''configuration file and does not use one by default. Also
73 '''helps with management of the configuration
77 '''can provide information about your arrays (through Query, Detail, and Examine)
87 '''configuration file, at all. It has a different configuration file
88 '''with a different format and a different purpose.
91 mdadm has several major modes of operation:
94 Assemble the parts of a previously created
95 array into an active array. Components can be explicitly given
96 or can be searched for.
98 checks that the components
99 do form a bona fide array, and can, on request, fiddle superblock
100 information so as to assemble a faulty array.
104 Build an array that doesn't have per-device superblocks. For these
107 cannot differentiate between initial creation and subsequent assembly
108 of an array. It also cannot perform any checks that appropriate
109 devices have been requested. Because of this, the
111 mode should only be used together with a complete understanding of
116 Create a new array with per-device superblocks.
'''in several steps (create-add-add-run) or it can all happen with one command.
121 .B "Follow or Monitor"
122 Monitor one or more md devices and act on any state changes. This is
123 only meaningful for raid1, 4, 5, 6, 10 or multipath arrays as
124 only these have interesting state. raid0 or linear never have
125 missing, spare, or failed drives, so there is nothing to monitor.
129 Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
131 of component devices in RAID level 1/4/5/6 and changing the number of
132 active devices in RAID1/5/6.
135 .B "Incremental Assembly"
136 Add a single device to an appropriate array. If the addition of the
137 device makes the array runnable, the array will be started.
138 This provides a convenient interface to a
140 system. As each device is detected,
142 has a chance to include it in some array as appropriate.
146 This is for doing things to specific components of an array such as
147 adding new spares and removing faulty devices.
151 This is an 'everything else' mode that supports operations on active
152 arrays, operations on component devices such as erasing old superblocks, and
153 information gathering operations.
154 '''This mode allows operations on independent devices such as examine MD
155 '''superblocks, erasing old superblocks and stopping active arrays.
159 .SH Options for selecting a mode are:
162 .BR -A ", " --assemble
163 Assemble a pre-existing array.
167 Build a legacy array without superblocks.
174 .BR -F ", " --follow ", " --monitor
181 Change the size or shape of an active array.
.BR -I ", " --incremental
185 Add a single device into an appropriate array, and possibly start the array.
188 If a device is given before any options, or if the first option is
then the MANAGE mode is assumed.
194 Anything other than these will cause the
198 .SH Options that are not mode-specific are:
202 Display general help message or, after one of the above options, a
203 mode specific help message.
207 Display more detailed help about command line parsing and some commonly
211 .BR -V ", " --version
212 Print version information for mdadm.
215 .BR -v ", " --verbose
216 Be more verbose about what is happening. This can be used twice to be
218 The extra verbosity currently only affects
221 .BR "--examine --scan" .
225 Avoid printing purely informative messages. With this,
227 will be silent unless there is something really important to report.
231 Be less verbose. This is used with
239 gives an intermediate level of verbosity.
243 Be more forceful about certain operations. See the various modes of
244 the exact meaning of this option in different contexts.
247 .BR -c ", " --config=
248 Specify the config file. Default is to use
249 .BR /etc/mdadm.conf ,
250 or if that is missing, then
251 .BR /etc/mdadm/mdadm.conf .
252 If the config file given is
254 then nothing will be read, but
256 will act as though the config file contained exactly
257 .B "DEVICE partitions"
260 to find a list of devices to scan.
263 is given for the config file, then
265 will act as though the config file were empty.
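For instance, the config file lookup can be bypassed entirely (device names
here are illustrative):

```shell
# Assemble scanning all partitions listed in /proc/partitions,
# ignoring any config file ("partitions" is the literal keyword):
mdadm --assemble --scan --config=partitions

# Act as though the config file were empty:
mdadm --assemble /dev/md0 --config=none /dev/sda1 /dev/sdb1
```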
271 for missing information.
272 In general, this option gives
274 permission to get any missing information, like component devices,
275 array devices, array identities, and alert destination from the
277 .BR /etc/mdadm.conf .
278 One exception is MISC mode when using
284 says to get a list of array devices from
.BR -e ", " --metadata=
289 Declare the style of superblock (raid metadata) to be used. The
292 and to guess for other operations.
293 The default can be overridden by setting the
302 .IP "0, 0.90, default"
303 Use the original 0.90 format superblock. This format limits arrays to
28 component devices and limits component devices of levels 1 and
305 greater to 2 terabytes.
306 .IP "1, 1.0, 1.1, 1.2"
307 Use the new version-1 format superblock. This has few restrictions.
The different sub-versions store the superblock at different locations
309 on the device, either at the end (for 1.0), at the start (for 1.1) or
310 4K from the start (for 1.2).
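A sketch of selecting a superblock format at creation time (device names are
illustrative):

```shell
# Create a RAID1 array using the version-1.2 superblock
# (stored 4K from the start of each component device):
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 \
      /dev/sda1 /dev/sdb1
```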
315 This will override any
setting in the config file and provides the identity of the host which
318 should be considered the home for any arrays.
320 When creating an array, the
322 will be recorded in the superblock. For version-1 superblocks, it will
323 be prefixed to the array name. For version-0.90 superblocks part of
the SHA1 hash of the hostname will be stored in the latter half of the
327 When reporting information about an array, any array which is tagged
328 for the given homehost will be reported as such.
330 When using Auto-Assemble, only arrays tagged for the given homehost
333 .SH For create, build, or grow:
336 .BR -n ", " --raid-devices=
337 Specify the number of active devices in the array. This, plus the
338 number of spare devices (see below) must equal the number of
340 (including "\fBmissing\fP" devices)
341 that are listed on the command line for
343 Setting a value of 1 is probably
344 a mistake and so requires that
346 be specified first. A value of 1 will then be allowed for linear,
347 multipath, raid0 and raid1. It is never allowed for raid4 or raid5.
349 This number can only be changed using
351 for RAID1, RAID5 and RAID6 arrays, and only on kernels which provide
355 .BR -x ", " --spare-devices=
356 Specify the number of spare (eXtra) devices in the initial array.
357 Spares can also be added
358 and removed later. The number of component devices listed
359 on the command line must equal the number of raid devices plus the
360 number of spare devices.
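For example (device names are illustrative), three active devices plus one
spare means four devices must be listed:

```shell
# RAID5 with three active devices and one spare:
mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```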
365 Amount (in Kibibytes) of space to use from each drive in RAID1/4/5/6.
366 This must be a multiple of the chunk size, and must leave about 128Kb
367 of space at the end of the drive for the RAID superblock.
368 If this is not specified
369 (as it normally is not) the smallest drive (or partition) sets the
370 size, though if there is a variance among the drives of greater than 1%, a warning is
373 This value can be set with
375 for RAID level 1/4/5/6. If the array was created with a size smaller
376 than the currently active drives, the extra space can be accessed
379 The size can be given as
381 which means to choose the largest size that fits on all current drives.
Specify chunk size in kibibytes. The default is 64.
389 Specify rounding factor for linear array (==chunk size)
393 Set raid level. When used with
395 options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
396 raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty. Obviously some of these are synonymous.
400 only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.
402 Not yet supported with
406 .BR -p ", " --layout=
407 This option configures the fine details of data layout for raid5,
408 and raid10 arrays, and controls the failure modes for
411 The layout of the raid5 parity block can be one of
412 .BR left-asymmetric ,
414 .BR right-asymmetric ,
415 .BR right-symmetric ,
416 .BR la ", " ra ", " ls ", " rs .
420 When setting the failure mode for
423 .BR write-transient ", " wt ,
424 .BR read-transient ", " rt ,
425 .BR write-persistent ", " wp ,
426 .BR read-persistent ", " rp ,
428 .BR read-fixable ", " rf ,
429 .BR clear ", " flush ", " none .
431 Each mode can be followed by a number which is used as a period
432 between fault generation. Without a number, the fault is generated
433 once on the first relevant request. With a number, the fault will be
generated after that many requests, and will continue to be generated
435 every time the period elapses.
Multiple failure modes can be active simultaneously by using the
439 option to set subsequent failure modes.
441 "clear" or "none" will remove any pending or periodic failure modes,
442 and "flush" will clear any persistent faults.
444 To set the parity with
446 the level of the array ("faulty")
447 must be specified before the fault mode is specified.
449 Finally, the layout options for RAID10 are one of 'n', 'o' or 'p' followed
450 by a small number. The default is 'n2'.
453 signals 'near' copies. Multiple copies of one data block are at
454 similar offsets in different devices.
457 signals 'offset' copies. Rather than the chunks being duplicated
458 within a stripe, whole stripes are duplicated but are rotated by one
459 device so duplicate blocks are on different devices. Thus subsequent
460 copies of a block are in the next drive, and are one chunk further
465 (multiple copies have very different offsets). See md(4) for more
466 detail about 'near' and 'far'.
The number is the number of copies of each data block. 2 is normal, 3
469 can be useful. This number can be at most equal to the number of
470 devices in the array. It does not need to divide evenly into that
471 number (e.g. it is perfectly legal to have an 'n2' layout for an array
472 with an odd number of devices).
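A hedged example of creating a RAID10 array with an explicit layout (device
names are illustrative):

```shell
# RAID10 with two 'near' copies (the default layout, n2):
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```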
478 (thus explaining the p of
482 .BR -b ", " --bitmap=
483 Specify a file to store a write-intent bitmap in. The file should not
486 is also given. The same file should be provided
487 when assembling the array. If the word
489 is given, then the bitmap is stored with the metadata on the array,
490 and so is replicated on all devices. If the word
494 mode, then any bitmap that is present is removed.
496 To help catch typing errors, the filename must contain at least one
497 slash ('/') if it is a real file (not 'internal' or 'none').
499 Note: external bitmaps are only known to work on ext2 and ext3.
500 Storing bitmap files on other filesystems may result in serious problems.
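For example (device names and the bitmap path are illustrative):

```shell
# Internal bitmap, stored with the metadata on every device:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
      /dev/sda1 /dev/sdb1

# External bitmap file (note the mandatory slash in the path):
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=/var/lib/md0-bitmap /dev/sda1 /dev/sdb1
```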
504 Set the chunksize of the bitmap. Each bit corresponds to that many
505 Kilobytes of storage.
When using a file-based bitmap, the default is to use the smallest
size that is at least 4 and requires no more than 2^21 chunks.
510 bitmap, the chunksize is automatically determined to make best use of
515 .BR -W ", " --write-mostly
subsequent devices listed in a
521 command will be flagged as 'write-mostly'. This is valid for RAID1
522 only and means that the 'md' driver will avoid reading from these
523 devices if at all possible. This can be useful if mirroring over a
528 Specify that write-behind mode should be enabled (valid for RAID1
529 only). If an argument is specified, it will set the maximum number
530 of outstanding writes allowed. The default value is 256.
531 A write-intent bitmap is required in order to use write-behind
532 mode, and write-behind is only attempted on drives marked as
539 that the array pre-existed and is known to be clean. It can be useful
540 when trying to recover from a major failure as you can be sure that no
541 data will be affected unless you actually write to the array. It can
542 also be used when creating a RAID1 or RAID10 if you want to avoid the
543 initial resync, however this practice \(em while normally safe \(em is not
recommended. Use this only if you really know what you are doing.
550 is used to increase the number of
551 raid-devices in a RAID5 if there are no spare devices available.
552 See the section below on RAID_DEVICE CHANGES. The file should be
553 stored on a separate device, not on the raid array being reshaped.
559 for the array. This is currently only effective when creating an
560 array with a version-1 superblock. The name is a simple textual
561 string that can be used to identify array components when assembling.
567 run the array, even if some of the components
568 appear to be active in another array or filesystem. Normally
570 will ask for confirmation before including such components in an
571 array. This option causes that question to be suppressed.
577 accept the geometry and layout specified without question. Normally
579 will not allow creation of an array with only one device, and will try
580 to create a raid5 array with one missing drive (as this makes the
581 initial resync work faster). With
584 will not try to be so clever.
587 .BR -a ", " "--auto{=no,yes,md,mdp,part,p}{NN}"
588 Instruct mdadm to create the device file if needed, possibly allocating
589 an unused minor number. "md" causes a non-partitionable array
590 to be used. "mdp", "part" or "p" causes a partitionable array (2.6 and
591 later) to be used. "yes" requires the named md device to have
592 a 'standard' format, and the type and minor number will be determined
593 from this. See DEVICE NAMES below.
595 The argument can also come immediately after
600 is not given on the command line or in the config file, then
606 is also given, then any
608 entries in the config file will override the
610 instruction given on the command line.
612 For partitionable arrays,
614 will create the device file for the whole array and for the first 4
615 partitions. A different number of partitions can be specified at the
616 end of this option (e.g.
618 If the device name ends with a digit, the partition names add a 'p',
619 and a number, e.g. "/dev/home1p3". If there is no
620 trailing digit, then the partition names just have a number added,
621 e.g. "/dev/scratch3".
623 If the md device name is in a 'standard' format as described in DEVICE
624 NAMES, then it will be created, if necessary, with the appropriate
625 number based on that name. If the device name is not in one of these
formats, then an unused minor number will be allocated. The minor
627 number will be considered unused if there is no active array for that
628 number, and there is no entry in /dev for that number and with a
639 it will also create symlinks from
641 with names starting with
649 to enforce this even if it is suppressing
657 uuid of array to assemble. Devices which don't have this uuid are
661 .BR -m ", " --super-minor=
662 Minor number of device that array was created for. Devices which
663 don't have this minor number are excluded. If you create an array as
664 /dev/md1, then all superblocks will contain the minor number 1, even if
665 the array is later assembled as /dev/md2.
667 Giving the literal word "dev" for
671 to use the minor number of the md device that is being assembled.
675 will look for super blocks with a minor number of 0.
679 Specify the name of the array to assemble. This must be the name
680 that was specified when creating the array. It must either match
the name stored in the superblock exactly, or it must match
684 is added to the start of the given name.
688 Assemble the array even if some superblocks appear out-of-date
692 Attempt to start the array even if fewer drives were given than were
693 present last time the array was active. Normally if not all the
694 expected drives are found and
696 is not used, then the array will be assembled but not started.
699 an attempt will be made to start it anyway.
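For instance (device names are illustrative), to start an array even though
only two of three expected drives were found:

```shell
# Force a degraded start with the drives that are available:
mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdb1
```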
703 This is the reverse of
in that it inhibits the starting of the array unless all expected drives
are present. This is only needed with
and can be used if your physical connections to devices are
not as reliable as you would like.
712 .BR -a ", " "--auto{=no,yes,md,mdp,part}"
713 See this option under Create and Build options.
716 .BR -b ", " --bitmap=
717 Specify the bitmap file that was given when the array was created. If
720 bitmap, there is no need to specify this when assembling the array.
726 was used to grow the number of raid-devices in a RAID5, and the system
727 crashed during the critical section, then the same
731 to allow possibly corrupted data to be restored.
734 .BR -U ", " --update=
735 Update the superblock on each device while assembling the array. The
736 argument given to this flag can be one of
option will adjust the superblock of an array that was created on a Sparc
751 machine running a patched 2.2 Linux kernel. This kernel got the
752 alignment of part of the superblock wrong. You can use the
753 .B "--examine --sparc2.2"
756 to see what effect this would have.
760 option will update the
762 field on each superblock to match the minor number of the array being
764 This can be useful if
766 reports a different "Preferred Minor" to
768 In some cases this update will be performed automatically
769 by the kernel driver. In particular the update happens automatically
770 at the first write to an array with redundancy (RAID level 1 or
771 greater) on a 2.6 (or later) kernel.
775 option will change the uuid of the array. If a UUID is given with the
777 option that UUID will be used as a new UUID and will
779 be used to help identify the devices in the array.
782 is given, a random UUID is chosen.
786 option will change the
788 of the array as stored in the superblock. This is only supported for
789 version-1 superblocks.
793 option will change the
795 as recorded in the superblock. For version-0 superblocks, this is the
796 same as updating the UUID.
797 For version-1 superblocks, this involves updating the name.
801 option will cause the array to be marked
803 meaning that any redundancy in the array (e.g. parity for raid5,
804 copies for raid1) may be incorrect. This will cause the raid system
805 to perform a "resync" pass to make sure that all redundant information
810 option allows arrays to be moved between machines with different
812 When assembling such an array for the first time after a move, giving
813 .B "--update=byteorder"
816 to expect superblocks to have their byteorder reversed, and will
817 correct that order before assembling the array. This is only valid
818 with original (Version 0.90) superblocks.
822 option will correct the summaries in the superblock. That is the
823 counts of total, working, active, failed, and spare devices.
827 will rarely be of use. It applies to version 1.1 and 1.2 metadata
828 only (where the metadata is at the start of the device) and is only
829 useful when the component device has changed size (typically become
830 larger). The version 1 metadata records the amount of the device that
831 can be used to store data, so if a device in a version 1.1 or 1.2
832 array becomes larger, the metadata will still be visible, but the
833 extra space will not. In this case it might be useful to assemble the
835 .BR --update=devicesize .
838 to determine the maximum usable amount of space on each device and
839 update the relevant field in the metadata.
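A sketch of this use, assuming the underlying partitions have already been
enlarged (device names are illustrative):

```shell
# Make version-1.1/1.2 metadata take note of the extra space:
mdadm --assemble /dev/md0 --update=devicesize /dev/sda1 /dev/sdb1
```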
842 .B --auto-update-homehost
This flag is only meaningful with auto-assembly (see discussion below).
844 In that situation, if no suitable arrays are found for this homehost,
will rescan for any arrays at all and will assemble them and update the
847 homehost to match the current host.
853 hot-add listed devices.
857 re-add a device that was recently removed from an array.
remove listed devices. They must not be active, i.e. they should
862 be failed or spare devices.
866 mark listed devices as faulty.
Each of these options requires that the first device listed is the array
to be acted upon and the remainder are component devices to be added,
removed, or marked as faulty. Several different operations can be
877 specified for different devices, e.g.
879 mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1
881 Each operation applies to all devices listed until the next
884 If an array is using a write-intent bitmap, then devices which have
885 been removed can be re-added in a way that avoids a full
reconstruction but instead just updates the blocks that have changed
887 since the device was removed. For arrays with persistent metadata
888 (superblocks) this is done automatically. For arrays created with
mdadm needs to be told that this device was removed recently, with
893 Devices can only be removed from an array if they are not in active
use, i.e. they must be spares or failed devices. To remove an active
895 device, it must be marked as
903 Examine a device to see
904 (1) if it is an md device and (2) if it is a component of an md
906 Information about what is discovered is presented.
910 Print detail of one or more md devices.
913 .BR -E ", " --examine
914 Print content of md superblock on device(s).
917 If an array was created on a 2.2 Linux kernel patched with RAID
918 support, the superblock will have been created incorrectly, or at
919 least incompatibly with 2.4 and later kernels. Using the
923 will fix the superblock before displaying it. If this appears to do
924 the right thing, then the array can be successfully assembled using
925 .BR "--assemble --update=sparc2.2" .
928 .BR -X ", " --examine-bitmap
929 Report information about a bitmap file.
933 start a partially built array.
937 deactivate array, releasing all resources.
940 .BR -o ", " --readonly
941 mark array as readonly.
944 .BR -w ", " --readwrite
945 mark array as readwrite.
949 If the device contains a valid md superblock, the block is
950 overwritten with zeros. With
952 the block where the superblock would be is overwritten even if it
953 doesn't appear to be valid.
961 is set to reflect the status of the device.
965 For each md device given, wait for any resync, recovery, or reshape
966 activity to finish before returning.
968 will return with success if it actually waited for every device
969 listed, otherwise it will return failure.
971 .SH For Incremental Assembly mode:
973 .BR --rebuild-map ", " -r
975 .RB ( /var/run/mdadm/map )
978 uses to help track which arrays are currently being assembled.
982 Run any array assembled as soon as a minimal number of devices are
983 available, rather than waiting until all expected devices are present.
991 file for arrays that are being incrementally assembled and will try to
992 start any that are not already started. If any such array is listed
995 as requiring an external bitmap, that bitmap will be attached first.
997 .SH For Monitor mode:
1000 Give a mail address to send alerts to.
1003 .BR -p ", " --program ", " --alert
1004 Give a program to be run whenever an event is detected.
1007 .BR -y ", " --syslog
1008 Cause all events to be reported through 'syslog'. The messages have
1009 facility of 'daemon' and varying priorities.
1013 Give a delay in seconds.
1015 polls the md arrays and then waits this many seconds before polling
1016 again. The default is 60 seconds.
1019 .BR -f ", " --daemonise
1022 to run as a background daemon if it decides to monitor anything. This
causes it to fork and run in the child, and to disconnect from the
terminal. The process id of the child is written to stdout.
1027 which will only continue monitoring if a mail address or alert program
1028 is found in the config file.
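A sketch of a typical monitoring invocation (the mail address is
illustrative):

```shell
# Monitor all arrays from the config file, polling every 300 seconds,
# mailing alerts and running as a background daemon:
mdadm --monitor --scan --delay=300 --mail=root@localhost --daemonise
```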
1031 .BR -i ", " --pid-file
1034 is running in daemon mode, write the pid of the daemon process to
1035 the specified file, instead of printing it on standard output.
1038 .BR -1 ", " --oneshot
1039 Check arrays only once. This will generate
1041 events and more significantly
1047 .B " mdadm --monitor --scan -1"
1049 from a cron script will ensure regular notification of any degraded arrays.
1055 alert for every array found at startup. This alert gets mailed and
1056 passed to the alert program. This can be used for testing that alert
messages do get through successfully.
1064 .I md-device options-and-component-devices...
1067 .B mdadm --assemble --scan
1068 .I md-devices-and-options...
1071 .B mdadm --assemble --scan
1075 This usage assembles one or more raid arrays from pre-existing components.
1076 For each array, mdadm needs to know the md device, the identity of the
1077 array, and a number of component-devices. These can be found in a number of ways.
1079 In the first usage example (without the
1081 the first device given is the md device.
1082 In the second usage example, all devices listed are treated as md
1083 devices and assembly is attempted.
1084 In the third (where no devices are listed) all md devices that are
1085 listed in the configuration file are assembled.
1087 If precisely one device is listed, but
was given and identity information is extracted from the configuration file.
1095 The identity can be given with the
1099 option, can be found in the config file, or will be taken from the
1100 super block on the first component-device listed on the command line.
1102 Devices can be given on the
1104 command line or in the config file. Only devices which have an md
1105 superblock which contains the right identity will be considered for
1108 The config file is only used if explicitly named with
1110 or requested with (a possibly implicit)
1118 is not given, then the config file will only be used to find the
1119 identity of md arrays.
1121 Normally the array will be started after it is assembled. However if
1123 is not given and insufficient drives were listed to start a complete
1124 (non-degraded) array, then the array is not started (to guard against
1125 usage errors). To insist that the array be started in this case (as
1126 may work for RAID1, 4, 5, 6, or 10), give the
1130 If the md device does not exist, then it will be created providing the
1131 intent is clear. i.e. the name must be in a standard form, or the
1133 option must be given to clarify how and whether the device should be
1136 This can be useful for handling partitioned devices (which don't have
1137 a stable device number \(em it can change after a reboot) and when using
1138 "udev" to manage your
1140 tree (udev cannot handle md devices because of the unusual device
1141 initialisation conventions).
1143 If the option to "auto" is "mdp" or "part" or (on the command line
1144 only) "p", then mdadm will create a partitionable array, using the
1145 first free one that is not in use, and does not already have an entry
1146 in /dev (apart from numeric /dev/md* entries).
1148 If the option to "auto" is "yes" or "md" or (on the command line)
1149 nothing, then mdadm will create a traditional, non-partitionable md
1152 It is expected that the "auto" functionality will be used to create
1153 device entries with meaningful names such as "/dev/md/home" or
1154 "/dev/md/root", rather than names based on the numerical array number.
1156 When using this option to create a partitionable array, the device
1157 files for the first 4 partitions are also created. If a different
1158 number is required it can be simply appended to the auto option.
1159 e.g. "auto=part8". Partition names are created by appending a digit
1160 string to the device name, with an intervening "p" if the device name
1165 option is also available in Build and Create modes. As those modes do
1166 not use a config file, the "auto=" config option does not apply to
1174 and no devices are listed,
1176 will first attempt to assemble all the arrays listed in the config
1181 has been specified (either in the config file or on the command line),
1183 will look further for possible arrays and will try to assemble
1184 anything that it finds which is tagged as belonging to the given
1185 homehost. This is the only situation where
1187 will assemble arrays without being given specific device name or
identity information for the array.
1192 finds a consistent set of devices that look like they should comprise
1193 an array, and if the superblock is tagged as belonging to the given
1194 home host, it will automatically choose a device name and try to
1195 assemble the array. If the array uses version-0.90 metadata, then the
1197 number as recorded in the superblock is used to create a name in
1201 If the array uses version-1 metadata, then the
1203 from the superblock is used to similarly create a name in
1205 The name will have any 'host' prefix stripped first.
1209 cannot find any array for the given host at all, and if
1210 .B --auto-update-homehost
1213 will search again for any array (not just an array created for this
1214 host) and will assemble each assuming
1215 .BR --update=homehost .
1216 This will change the host tag in the superblock so that on the next run,
1217 these arrays will be found without the second pass. The intention of
1218 this feature is to support transitioning a set of md arrays to using
1221 The reason for requiring arrays to be tagged with the homehost for
1222 auto assembly is to guard against problems that can arise when moving
1223 devices from one host to another.
1233 .BI --raid-devices= Z
1237 This usage is similar to
1239 The difference is that it creates an array without a superblock. With
1240 these arrays there is no difference between initially creating the array and
1241 subsequently assembling the array, except that hopefully there is useful
1242 data there in the second case.
The level may be raid0, linear, multipath, or faulty, or one of their
1245 synonyms. All devices must be listed and the array will be started
1257 .BI --raid-devices= Z
1261 This usage will initialise a new md array, associate some devices with
1262 it, and activate the array.
1266 option is given (as described in more detail in the section on
1267 Assemble mode), then the md device will be created with a suitable
1268 device number if necessary.
1270 As devices are added, they are checked to see if they contain raid
1271 superblocks or filesystems. They are also checked to see if the variance in
1272 device size exceeds 1%.
1274 If any discrepancy is found, the array will not automatically be run, though
1277 can override this caution.
1279 To create a "degraded" array in which some devices are missing, simply
1280 give the word "\fBmissing\fP"
1281 in place of a device name. This will cause
1283 to leave the corresponding slot in the array empty.
1284 For a RAID4 or RAID5 array at most one slot can be
1285 "\fBmissing\fP"; for a RAID6 array at most two slots.
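The "\fBmissing\fP" idiom above can be sketched as command lines. The device names are hypothetical, and the commands are printed for review rather than executed, since creating arrays requires root and real disks:

```shell
# A degraded RAID5 with one of three slots left empty, and a RAID1
# with only one real device: the word "missing" holds the vacant slot.
# Hypothetical devices; commands are printed, not run, in this sketch.
raid5_cmd="mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing"
raid1_cmd="mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 missing"
echo "$raid5_cmd"
echo "$raid1_cmd"
```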
1286 For a RAID1 array, only one real device needs to be given. All of the
1290 When creating a RAID5 array,
1292 will automatically create a degraded array with an extra spare drive.
1293 This is because building the spare into a degraded array is in general faster than resyncing
1294 the parity on a non-degraded, but not clean, array. This feature can
1295 be overridden with the
1299 When creating an array with version-1 metadata a name for the host is
1301 If this is not given with the
1305 will choose a name based on the last component of the name of the
1306 device being created. So if
1308 is being created, then the name
1313 is being created, then the name
1317 A new array will normally get a randomly assigned 128bit UUID which is
1318 very likely to be unique. If you have a specific need, you can choose
1319 a UUID for the array by giving the
1321 option. Be warned that creating two arrays with the same UUID is a
1322 recipe for disaster. Also, using
1324 when creating a v0.90 array will silently override any
1329 '''option is given, it is not necessary to list any component-devices in this command.
1330 '''They can be added later, before a
1334 '''is given, the apparent size of the smallest drive given is used.
1336 The General Management options that are valid with
1341 insist on running the array even if some devices look like they might
1346 start the array readonly \(em not supported yet.
1354 .I options... devices...
1357 This usage will allow individual devices in an array to be failed,
1358 removed or added. It is possible to perform multiple operations with
1359 one command. For example:
1361 .B " mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1"
1367 and will then remove it from the array and finally add it back
1368 in as a spare. However only one md array can be affected by a single
1379 MISC mode includes a number of distinct operations that
1380 operate on distinct devices. The operations are:
1383 The device is examined to see if it is
1384 (1) an active md array, or
1385 (2) a component of an md array.
1386 The information discovered is reported.
1390 The device should be an active md device.
1392 will display a detailed description of the array.
1396 will cause the output to be less detailed and the format to be
1397 suitable for inclusion in
1398 .BR /etc/mdadm.conf .
1401 will normally be 0 unless
1403 failed to get useful information about the device(s). However if the
1405 option is given, then the exit status will be:
1409 The array is functioning normally.
1412 The array has at least one failed device.
1415 The array has multiple failed devices and hence is unusable (raid4 or
1419 There was an error while trying to get information about the device.
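In a script, the exit codes listed above might be handled along these lines. This is a sketch: real use would run `mdadm --detail --test` on an array and inspect `$?`; here `check_status` only maps a code to a message:

```shell
# Map the documented "--detail --test" exit codes to messages:
# 0 = functioning normally, 1 = at least one failed device,
# 2 = multiple failures, unusable, anything else = query error.
check_status() {
    case "$1" in
        0) echo "array OK" ;;
        1) echo "array degraded" ;;
        2) echo "array unusable" ;;
        *) echo "error querying array" ;;
    esac
}

check_status 0
check_status 1
```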
1424 The device should be a component of an md array.
1426 will read the md superblock of the device and display the contents.
1431 then multiple devices that are components of the one array
1432 are grouped together and reported in a single entry suitable
1434 .BR /etc/mdadm.conf .
1438 without listing any devices will cause all devices listed in the
1439 config file to be examined.
1443 The devices should be active md arrays which will be deactivated, as
1444 long as they are not currently in use.
1448 This will fully activate a partially assembled md array.
1452 This will mark an active array as read-only, providing that it is
1453 not currently being used.
1459 array back to being read/write.
1463 For all operations except
1466 will cause the operation to be applied to all arrays listed in
1471 causes all devices listed in the config file to be examined.
1479 .I options... devices...
1484 to periodically poll a number of md arrays and to report on any events
1487 will never exit once it decides that there are arrays to be checked,
1488 so it should normally be run in the background.
1490 As well as reporting events,
1492 may move a spare drive from one array to another if they are in the
1495 and if the destination array has a failed drive but no spares.
1497 If any devices are listed on the command line,
1499 will only monitor those devices. Otherwise all arrays listed in the
1500 configuration file will be monitored. Further, if
1502 is given, then any other md devices that appear in
1504 will also be monitored.
1506 The result of monitoring the arrays is the generation of events.
1507 These events are passed to a separate program (if specified) and may
1508 be mailed to a given E-mail address.
1510 When passing events to a program, the program is run once for each event
1511 and is given 2 or 3 command-line arguments. The first is the
1512 name of the event (see below). The second is the name of the
1513 md device which is affected, and the third is the name of a related
1514 device if relevant, such as a component device that has failed.
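A minimal sketch of such an alert program, assuming the two- or three-argument convention described above (the message format is this sketch's own choice, not something mdadm prescribes):

```shell
# Minimal alert handler for "mdadm --monitor --program=...".
# mdadm passes 2 or 3 arguments: event name, affected md device,
# and (when relevant) a related component device.
handle_event() {
    event="$1"; device="$2"; component="${3:-}"
    if [ -n "$component" ]; then
        echo "md event: $event on $device (component $component)"
    else
        echo "md event: $event on $device"
    fi
}

# Example invocations, mirroring what mdadm would pass:
handle_event Fail /dev/md0 /dev/sdb1
handle_event DeviceDisappeared /dev/md0
```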
1518 is given, then a program or an E-mail address must be specified on the
1519 command line or in the config file. If neither are available, then
1521 will not monitor anything.
1525 will continue monitoring as long as something was found to monitor. If
1526 no program or email is given, then each event is reported to
1529 The different events are:
1533 .B DeviceDisappeared
1534 An md array which previously was configured appears to no longer be
1535 configured. (syslog priority: Critical)
1539 was told to monitor an array which is RAID0 or Linear, then it will
1541 .B DeviceDisappeared
1542 with the extra information
1544 This is because RAID0 and Linear do not support the device-failed,
1545 hot-spare and resync operations which are monitored.
1549 An md array started reconstruction. (syslog priority: Warning)
1555 is 20, 40, 60, or 80, this indicates that the rebuild has passed that
1556 percentage of the total. (syslog priority: Warning)
1560 An md array that was rebuilding isn't any more, either because it
1561 finished normally or was aborted. (syslog priority: Warning)
1565 An active component device of an array has been marked as
1566 faulty. (syslog priority: Critical)
1570 A spare component device which was being rebuilt to replace a faulty
1571 device has failed. (syslog priority: Critical)
1575 A spare component device which was being rebuilt to replace a faulty
1576 device has been successfully rebuilt and has been made active.
1577 (syslog priority: Info)
1581 A new md array has been detected in the
1583 file. (syslog priority: Info)
1587 A newly noticed array appears to be degraded. This message is not
1590 notices a drive failure which causes degradation, but only when
1592 notices that an array is degraded when it first sees the array.
1593 (syslog priority: Critical)
1597 A spare drive has been moved from one array in a
1599 to another to allow a failed drive to be replaced.
1600 (syslog priority: Info)
1606 has been told, via the config file, that an array should have a certain
1607 number of spare devices, and
1609 detects that it has fewer than this number when it first sees the
1610 array, it will report a
1613 (syslog priority: Warning)
1617 An array was found at startup, and the
1620 (syslog priority: Info)
1630 cause Email to be sent. All events cause the program to be run.
1631 The program is run with two or three arguments: the event
1632 name, the array device and possibly a second device.
1634 Each event has an associated array device (e.g.
1636 and possibly a second device. For
1641 the second device is the relevant component device.
1644 the second device is the array that the spare was moved from.
1648 to move spares from one array to another, the different arrays need to
1649 be labelled with the same
1651 in the configuration file. The
1653 name can be any string. It is only necessary that different spare
1654 groups use different names.
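Spare-group labelling lives in the configuration file. A minimal hypothetical fragment (the device patterns and UUIDs are placeholders, not real values) might look like:

```
# Two arrays sharing a spare group: "mdadm --monitor" may move a
# spare from one to the other when a drive fails.
DEVICE /dev/sd*[0-9]
ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd spare-group=database
ARRAY /dev/md1 UUID=11111111:22222222:33333333:44444444 spare-group=database
```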
1658 detects that an array which is in a spare group has fewer active
1659 devices than necessary for the complete array, and has no spare
1660 devices, it will look for another array in the same spare group that
1661 has a full complement of working drives and a spare. It will then
1662 attempt to remove the spare from the second array and add it to the
1664 If the removal succeeds but the adding fails, then it is added back to
1668 The GROW mode is used for changing the size or shape of an active
1670 For this to work, the kernel must support the necessary change.
1671 Various types of growth are being added during 2.6 development,
1672 including restructuring a raid5 array to have more active devices.
1674 Currently the only support available is to
1676 change the "size" attribute
1677 for RAID1, RAID5 and RAID6.
1679 increase the "raid-disks" attribute of RAID1, RAID5, and RAID6.
1681 add a write-intent bitmap to any array which supports these bitmaps, or
1682 remove a write-intent bitmap from such an array.
1686 Normally when an array is built the "size" is taken from the smallest
1687 of the drives. If all the small drives in an array are, one at a
1688 time, removed and replaced with larger drives, then you could have an
1689 array of large drives with only a small amount used. In this
1690 situation, changing the "size" with "GROW" mode will allow the extra
1691 space to start being used. If the size is increased in this way, a
1692 "resync" process will start to make sure the new parts of the array
1695 Note that when an array changes size, any filesystem that may be
1696 stored in the array will not automatically grow to use the space. The
1697 filesystem will need to be explicitly told to use the extra space.
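The two steps above can be sketched as command lines. The device name is hypothetical and resize2fs is assumed only as an example (for ext2/ext3 filesystems); the commands are printed for review, not executed:

```shell
# Grow the array's per-device "size" to the maximum available, then
# tell the filesystem to claim the new space (resize2fs assumed here).
# /dev/md0 is hypothetical; run the printed commands as root.
size_cmd="mdadm --grow /dev/md0 --size=max"
fs_cmd="resize2fs /dev/md0"
echo "$size_cmd"
echo "$fs_cmd"
```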
1699 .SS RAID-DEVICES CHANGES
1701 A RAID1 array can work with any number of devices from 1 upwards
1702 (though 1 is not very useful). There may be times when you want to
1703 increase or decrease the number of active devices. Note that this is
1704 different to hot-add or hot-remove which changes the number of
1707 When reducing the number of devices in a RAID1 array, the slots which
1708 are to be removed from the array must already be vacant. That is, the
1709 devices which were in those slots must be failed and removed.
1711 When the number of devices is increased, any hot spares that are
1712 present will be activated immediately.
1714 Increasing the number of active devices in a RAID5 requires much more
1715 effort. Every block in the array will need to be read and written
1716 back to a new location. From 2.6.17, the Linux Kernel is able to do
1717 this safely, including restarting an interrupted "reshape".
1719 When relocating the first few stripes on a raid5, it is not possible
1720 to keep the data on disk completely consistent and crash-proof. To
1721 provide the required safety, mdadm disables writes to the array while
1722 this "critical section" is reshaped, and takes a backup of the data
1723 that is in that section. This backup is normally stored in any spare
1724 devices that the array has, but it can also be stored in a
1725 separate file specified with the
1727 option. If this option is used, and the system does crash during the
1728 critical period, the same file must be passed to
1730 to restore the backup and reassemble the array.
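The reshape-and-recover sequence above can be sketched as command lines. Device names and the backup-file path are hypothetical, and the commands are printed for review rather than executed:

```shell
# Reshape a RAID5 from 3 to 4 devices, keeping a backup of the
# critical section in a file. After a crash during the critical
# period, the same file must be given back at assembly time.
# Hypothetical names; run the printed commands as root.
grow_cmd="mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.backup"
assemble_cmd="mdadm --assemble /dev/md0 --backup-file=/root/md0-grow.backup /dev/sd[bcde]1"
echo "$grow_cmd"
echo "$assemble_cmd"
```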
1734 A write-intent bitmap can be added to, or removed from, an active
1735 array. Either internal bitmaps, or bitmaps stored in a separate file
1736 can be added. Note that if you add a bitmap stored in a file which is
1737 in a filesystem that is on the raid array being affected, the system
1738 will deadlock. The bitmap must be on a separate filesystem.
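Adding and removing bitmaps can be sketched as command lines. The device name is hypothetical, and the commands are printed for review rather than executed:

```shell
# Add an internal write-intent bitmap to an active array, or remove
# whichever bitmap is present. A file-based bitmap must live on a
# filesystem that is NOT on this array (see the deadlock note above).
# /dev/md0 is hypothetical; run the printed commands as root.
add_cmd="mdadm --grow /dev/md0 --bitmap=internal"
remove_cmd="mdadm --grow /dev/md0 --bitmap=none"
echo "$add_cmd"
echo "$remove_cmd"
```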
1740 .SH INCREMENTAL MODE
1744 .B mdadm --incremental
1750 .B mdadm --incremental --rebuild
1753 .B mdadm --incremental --run --scan
1757 This mode is designed to be used in conjunction with a device
1758 discovery system. As devices are found in a system, they can be
1760 .B "mdadm --incremental"
1761 to be conditionally added to an appropriate array.
1764 performs a number of tests to determine if the device is part of an
1765 array, and which array it should be part of. If an appropriate array
1766 is found, or can be created,
1768 adds the device to the array and conditionally starts the array.
1772 will only add devices to an array which were previously working
1773 (active or spare) parts of that array. It does not currently support
1774 automatic inclusion of a new drive as a spare in some array.
1776 .B "mdadm --incremental"
1777 requires that a bug present in all kernels through 2.6.19 be fixed.
1778 Hopefully the fix will be included in 2.6.20. Alternately, apply the patch
1779 which is included with the mdadm source distribution. If
1781 detects that this bug is present, it will abort any attempt to use
1786 makes are as follows:
1788 Is the device permitted by
1790 That is, is it listed in a
1792 line in that file? If
1794 is absent then the default is to allow any device. Similarly, if
1796 contains the special word
1798 then any device is allowed. Otherwise the device name given to
1800 must match one of the names or patterns in a
1805 Does the device have a valid md superblock? If a specific metadata
1806 version is requested with
1810 then only that style of metadata is accepted, otherwise
1812 finds any known version of metadata. If no
1814 metadata is found, the device is rejected.
1817 Does the metadata match an expected array?
1818 The metadata can match in two ways. Either there is an array listed
1821 which identifies the array (either by UUID, by name, by device list,
1822 or by minor-number), the array was created with a
1826 matches that which is given in
1828 or on the command line.
1831 is not able to positively identify the array as belonging to the
1832 current host, the device will be rejected.
1836 keeps a list of arrays that it has partly assembled in
1837 .B /var/run/mdadm/map
1839 .B /var/run/mdadm.map
1840 if the directory doesn't exist). If no array exists which matches
1841 the metadata on the new device,
1843 must choose a device name and unit number. It does this based on any
1846 or any name information stored in the metadata. If this name
1847 suggests a unit number, that number will be used, otherwise a free
1848 unit number will be chosen. Normally
1850 will prefer to create a partitionable array, however if the
1854 suggests that a non-partitionable array is preferred, that will be
1858 Once an appropriate array is found or created and the device is added,
1860 must decide if the array is ready to be started. It will
1861 normally compare the number of available (non-spare) devices to the
1862 number of devices that the metadata suggests need to be active. If
1863 there are at least that many, the array will be started. This means
1864 that if any devices are missing the array will not be restarted.
1870 in which case the array will be run as soon as there are enough
1871 devices present for the data to be accessible. For a raid1, that
1872 means one device will start the array. For a clean raid5, the array
1873 will be started as soon as all but one drive is present.
1875 Note that neither of these approaches is really ideal. If it can
1876 be known that all device discovery has completed, then
1880 can be run which will try to start all arrays that are being
1881 incrementally assembled. They are started in "read-auto" mode in
1882 which they are read-only until the first write request. This means
1883 that no metadata updates are made and no attempt at resync or recovery
1884 happens. Further devices that are found before the first write can
1885 still be added safely.
1889 .B " mdadm --query /dev/name-of-device"
1891 This will find out if a given device is a raid array, or is part of
1892 one, and will provide brief information about the device.
1894 .B " mdadm --assemble --scan"
1896 This will assemble and start all arrays listed in the standard config
1897 file. This command will typically go in a system startup file.
1899 .B " mdadm --stop --scan"
1901 This will shut down all arrays that can be shut down (i.e. are not
1902 currently in use). This will typically go in a system shutdown script.
1904 .B " mdadm --follow --scan --delay=120"
1906 If (and only if) there is an Email address or program given in the
1907 standard config file, then
1908 monitor the status of all arrays listed in that file by
1909 polling them every 2 minutes.
1911 .B " mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1"
1913 Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
1916 .B " echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf"
1918 .B " mdadm --detail --scan >> mdadm.conf"
1920 This will create a prototype config file that describes currently
1921 active arrays that are known to be made from partitions of IDE or SCSI drives.
1922 This file should be reviewed before being used as it may
1923 contain unwanted detail.
1925 .B " echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf"
1927 .B " mdadm --examine --scan --config=mdadm.conf >> mdadm.conf"
1929 This will find what arrays could be assembled from existing IDE and
1930 SCSI whole drives (not partitions) and store the information in the
1931 format of a config file.
1932 This file is very likely to contain unwanted detail, particularly
1935 entries. It should be reviewed and edited before being used as an
1938 .B " mdadm --examine --brief --scan --config=partitions"
1940 .B " mdadm -Ebsc partitions"
1942 Create a list of devices by reading
1943 .BR /proc/partitions ,
1944 scan these for RAID superblocks, and print out a brief listing of all
1947 .B " mdadm -Ac partitions -m 0 /dev/md0"
1949 Scan all partitions and devices listed in
1950 .BR /proc/partitions
1953 out of all such devices with a RAID superblock with a minor number of 0.
1955 .B " mdadm --monitor --scan --daemonise > /var/run/mdadm"
1957 If the config file contains a mail address or alert program, run mdadm in
1958 the background in monitor mode monitoring all md devices. Also write
1959 the pid of the mdadm daemon to
1960 .BR /var/run/mdadm .
1962 .B " mdadm -Iq /dev/somedevice"
1964 Try to incorporate a newly discovered device into some array as
1967 .B " mdadm --incremental --rebuild --run --scan"
1969 Rebuild the array map from any current arrays, and then start any that
1972 .B " mdadm --create --help"
1974 Provide help about the Create mode.
1976 .B " mdadm --config --help"
1978 Provide help about the format of the config file.
1982 Provide general help.
1993 lists all active md devices with information about them.
1995 uses this to find arrays when
1997 is given in Misc mode, and to monitor array reconstruction
2003 The config file lists which devices may be scanned to see if
2004 they contain an MD superblock, and gives identifying information
2005 (e.g. UUID) about known MD arrays. See
2009 .SS /var/run/mdadm/map
2012 mode is used, this file gets a list of arrays currently being created.
2015 does not exist as a directory, then
2016 .B /var/run/mdadm.map
2021 While entries in the /dev directory can have any format you like,
2023 has an understanding of 'standard' formats which it uses to guide its
2024 behaviour when creating device files via the
2028 The standard names for non-partitioned arrays (the only sort of md
2029 array available in 2.4 and earlier) are either of
2035 where NN is a number.
2036 The standard names for partitionable arrays (as available from 2.6
2043 Partition numbers should be indicated by adding "pMM" to these, thus "/dev/md/d1p2".
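The naming rules above can be sketched as a small classifier. This is an illustration only, assuming the forms described in this section (/dev/mdNN and /dev/md/NN for non-partitionable arrays, /dev/md/dNN for partitionable ones, with "pMM" appended for partitions):

```shell
# Classify an md device name against the "standard" formats:
#   /dev/mdNN, /dev/md/NN    -> non-partitionable array
#   /dev/md/dNN              -> partitionable array
#   /dev/md/dNNpMM           -> partition of a partitionable array
classify_md_name() {
    case "$1" in
        /dev/md/d[0-9]*p[0-9]*)       echo "partition of partitionable array" ;;
        /dev/md/d[0-9]*)              echo "partitionable array" ;;
        /dev/md[0-9]*|/dev/md/[0-9]*) echo "non-partitionable array" ;;
        *)                            echo "non-standard name" ;;
    esac
}

classify_md_name /dev/md0
classify_md_name /dev/md/d1p2
```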
2047 was previously known as
2051 is completely separate from the
2053 package, and does not use the
2055 configuration file at all.
2058 For information on the various levels of
2062 .UR http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
2063 http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
2066 '''for new releases of the RAID driver check out:
2069 '''.UR ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
2070 '''ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
2075 '''.UR http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
2076 '''http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
2079 The latest version of
2081 should always be available from
2083 .UR http://www.kernel.org/pub/linux/utils/raid/mdadm/
2084 http://www.kernel.org/pub/linux/utils/raid/mdadm/