2 .\" Copyright Neil Brown and others.
3 .\" This program is free software; you can redistribute it and/or modify
4 .\" it under the terms of the GNU General Public License as published by
5 .\" the Free Software Foundation; either version 2 of the License, or
6 .\" (at your option) any later version.
7 .\" See file COPYING in distribution for details.
8 .TH MDADM 8 "" v3.0-rc1
10 mdadm \- manage MD devices
16 .BI mdadm " [mode] <raiddevice> [options] <component-devices>"
19 RAID devices are virtual devices created from two or more
20 real block devices. This allows multiple devices (typically disk
21 drives or partitions thereof) to be combined into a single device to
22 hold (for example) a single filesystem.
Some RAID levels include redundancy and so can survive some degree of
device failure.
26 Linux Software RAID devices are implemented through the md (Multiple
27 Devices) device driver.
29 Currently, Linux supports
46 is not a Software RAID mechanism, but does involve
48 each device is a path to one common physical storage device.
49 New installations should not use md/multipath as it is not well
50 supported and has no ongoing development. Use the Device Mapper based
51 multipath-tools instead.
54 is also not true RAID, and it only involves one device. It
55 provides a layer over a true device that can be used to inject faults.
60 is a collection of devices that are
61 managed as a set. This is similar to the set of devices connected to
62 a hardware RAID controller. The set of devices may contain a number
63 of different RAID arrays each utilising some (or all) of the blocks from a
64 number of the devices in the set. For example, two devices in a 5-device set
65 might form a RAID1 using the whole devices. The remaining three might
66 have a RAID5 over the first half of each device, and a RAID0 over the
71 there is one set of metadata that describes all of
72 the arrays in the container. So when
76 device, the device just represents the metadata. Other normal arrays (RAID1
77 etc) can be created inside the container.
80 mdadm has several major modes of operation:
83 Assemble the components of a previously created
84 array into an active array. Components can be explicitly given
85 or can be searched for.
87 checks that the components
88 do form a bona fide array, and can, on request, fiddle superblock
89 information so as to assemble a faulty array.
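A typical assembly might look like this (device names are illustrative):

```shell
# Assemble a previously created array from explicitly listed components
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Or scan the config file and listed devices for all known arrays
mdadm --assemble --scan
```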
93 Build an array that doesn't have per-device superblocks. For these
96 cannot differentiate between initial creation and subsequent assembly
97 of an array. It also cannot perform any checks that appropriate
98 components have been requested. Because of this, the
100 mode should only be used together with a complete understanding of
105 Create a new array with per-device superblocks.
107 .\"in several step create-add-add-run or it can all happen with one command.
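For example, a sketch of creating a new array (all device names are illustrative):

```shell
# Create a 3-device RAID5 array with per-device superblocks
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
```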
110 .B "Follow or Monitor"
111 Monitor one or more md devices and act on any state changes. This is
112 only meaningful for raid1, 4, 5, 6, 10 or multipath arrays, as
113 only these have interesting state. raid0 or linear never have
114 missing, spare, or failed drives, so there is nothing to monitor.
118 Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
of component devices and changing the number of active devices in RAID
levels 1/4/5/6, changing the RAID level between 1, 5, and 6, changing
the chunk size and layout for RAID5 and RAID6, as well as adding or
removing a write-intent bitmap.
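As an illustration, growing a RAID5 array onto an additional device is typically done in two steps (device names are hypothetical):

```shell
# First add the new device as a spare, then reshape onto it
mdadm /dev/md0 --add /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4
```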
126 .B "Incremental Assembly"
127 Add a single device to an appropriate array. If the addition of the
128 device makes the array runnable, the array will be started.
129 This provides a convenient interface to a
131 system. As each device is detected,
133 has a chance to include it in some array as appropriate.
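For example, a hotplug script might pass each newly detected device to mdadm (the device name is illustrative):

```shell
# Include the new device in whatever array it belongs to;
# the array is started once enough members are present
mdadm --incremental /dev/sdb1
```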
139 in this mode, then any arrays within that container will be assembled
144 This is for doing things to specific components of an array such as
145 adding new spares and removing faulty devices.
149 This is an 'everything else' mode that supports operations on active
150 arrays, operations on component devices such as erasing old superblocks, and
151 information gathering operations.
152 .\"This mode allows operations on independent devices such as examine MD
153 .\"superblocks, erasing old superblocks and stopping active arrays.
157 This mode does not act on a specific device or array, but rather it
158 requests the Linux Kernel to activate any auto-detected arrays.
161 .SH Options for selecting a mode are:
164 .BR \-A ", " \-\-assemble
165 Assemble a pre-existing array.
168 .BR \-B ", " \-\-build
169 Build a legacy array without superblocks.
172 .BR \-C ", " \-\-create
176 .BR \-F ", " \-\-follow ", " \-\-monitor
182 .BR \-G ", " \-\-grow
183 Change the size or shape of an active array.
186 .BR \-I ", " \-\-incremental
187 Add a single device into an appropriate array, and possibly start the array.
191 Request that the kernel starts any auto-detected arrays. This can only
194 is compiled into the kernel \(em not if it is a module.
195 Arrays can be auto-detected by the kernel if all the components are in
primary MS-DOS partitions with partition type FD.
198 In-kernel autodetect is not recommended for new installations. Using
200 to detect and assemble arrays \(em possibly in an
202 \(em is substantially more flexible and should be preferred.
205 If a device is given before any options, or if the first option is
then the MANAGE mode is assumed.
211 Anything other than these will cause the
215 .SH Options that are not mode-specific are:
218 .BR \-h ", " \-\-help
219 Display general help message or, after one of the above options, a
220 mode-specific help message.
224 Display more detailed help about command line parsing and some commonly
228 .BR \-V ", " \-\-version
229 Print version information for mdadm.
232 .BR \-v ", " \-\-verbose
Be more verbose about what is happening. This can be used twice to be
extra-verbose.
235 The extra verbosity currently only affects
236 .B \-\-detail \-\-scan
238 .BR "\-\-examine \-\-scan" .
241 .BR \-q ", " \-\-quiet
242 Avoid printing purely informative messages. With this,
244 will be silent unless there is something really important to report.
247 .BR \-b ", " \-\-brief
248 Be less verbose. This is used with
256 gives an intermediate level of verbosity.
259 .BR \-f ", " \-\-force
260 Be more forceful about certain operations. See the various modes for
261 the exact meaning of this option in different contexts.
264 .BR \-c ", " \-\-config=
265 Specify the config file. Default is to use
266 .BR /etc/mdadm.conf ,
267 or if that is missing then
268 .BR /etc/mdadm/mdadm.conf .
269 If the config file given is
271 then nothing will be read, but
273 will act as though the config file contained exactly
274 .B "DEVICE partitions containers"
277 to find a list of devices to scan, and
279 to find a list of containers to examine.
282 is given for the config file, then
284 will act as though the config file were empty.
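For example (the file path is illustrative):

```shell
# Assemble using an alternate config file
mdadm --assemble --scan --config=/root/test-mdadm.conf

# Behave as though the config file contained only "DEVICE partitions containers"
mdadm --assemble --scan --config=partitions
```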
287 .BR \-s ", " \-\-scan
290 for missing information.
291 In general, this option gives
293 permission to get any missing information (like component devices,
294 array devices, array identities, and alert destination) from the
295 configuration file (see previous option);
296 one exception is MISC mode when using
302 says to get a list of array devices from
.BR \-e ", " \-\-metadata=
307 Declare the style of superblock (raid metadata) to be used. The
310 and to guess for other operations.
311 The default can be overridden by setting the
320 .IP "0, 0.90, default"
321 Use the original 0.90 format superblock. This format limits arrays to
322 28 component devices and limits component devices of levels 1 and
323 greater to 2 terabytes.
324 .IP "1, 1.0, 1.1, 1.2"
325 Use the new version-1 format superblock. This has few restrictions.
326 The different sub-versions store the superblock at different locations
327 on the device, either at the end (for 1.0), at the start (for 1.1) or
328 4K from the start (for 1.2).
330 Use the "Industry Standard" DDF (Disk Data Format) format. When
331 creating a DDF array a
333 will be created, and normal arrays can be created in that container.
335 Use the Intel(R) Matrix Storage Manager metadata format. This creates a
337 which is managed in a similar manner to DDF, and is supported by an
338 option-rom on some platforms:
340 .B http://www.intel.com/design/chipsets/matrixstorage_sb.htm
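For example, selecting a specific superblock format at creation time (device names are illustrative):

```shell
# Version-1.2 superblock, stored 4K from the start of each device
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 \
      /dev/sda1 /dev/sdb1
```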
346 This will override any
348 setting in the config file and provides the identity of the host which
349 should be considered the home for any arrays.
351 When creating an array, the
353 will be recorded in the superblock. For version-1 superblocks, it will
354 be prefixed to the array name. For version-0.90 superblocks, part of
the SHA1 hash of the hostname will be stored in the latter half of the
UUID.
358 When reporting information about an array, any array which is tagged
359 for the given homehost will be reported as such.
361 When using Auto-Assemble, only arrays tagged for the given homehost
362 will be allowed to use 'local' names (i.e. not ending in '_' followed
365 .SH For create, build, or grow:
368 .BR \-n ", " \-\-raid\-devices=
369 Specify the number of active devices in the array. This, plus the
370 number of spare devices (see below) must equal the number of
372 (including "\fBmissing\fP" devices)
373 that are listed on the command line for
375 Setting a value of 1 is probably
376 a mistake and so requires that
378 be specified first. A value of 1 will then be allowed for linear,
379 multipath, raid0 and raid1. It is never allowed for raid4 or raid5.
381 This number can only be changed using
383 for RAID1, RAID5 and RAID6 arrays, and only on kernels which provide
387 .BR \-x ", " \-\-spare\-devices=
388 Specify the number of spare (eXtra) devices in the initial array.
389 Spares can also be added
390 and removed later. The number of component devices listed
391 on the command line must equal the number of raid devices plus the
392 number of spare devices.
396 .BR \-z ", " \-\-size=
397 Amount (in Kibibytes) of space to use from each drive in RAID level 1/4/5/6.
This must be a multiple of the chunk size, and must leave about 128KiB
399 of space at the end of the drive for the RAID superblock.
400 If this is not specified
401 (as it normally is not) the smallest drive (or partition) sets the
size, though if there is a variance among the drives of greater than
1%, a warning is issued.
405 This value can be set with
407 for RAID level 1/4/5/6. If the array was created with a size smaller
408 than the currently active drives, the extra space can be accessed
411 The size can be given as
413 which means to choose the largest size that fits on all current drives.
This value cannot be used with
417 metadata such as DDF and IMSM.
420 .BR \-Z ", " \-\-array-size=
421 This is only meaningful with
and its effect is not persistent: when the array is stopped and
restarted the default array size will be restored.
426 Setting the array-size causes the array to appear smaller to programs
427 that access the data. This is particularly needed before reshaping an
428 array so that it will be smaller. As the reshape is not reversible,
429 but setting the size with
431 is, it is required that the array size is reduced as appropriate
432 before the number of devices in the array is reduced.
435 .BR \-c ", " \-\-chunk=
Specify chunk size in kibibytes. The default is 64.
Specify the rounding factor for a linear array (this is equivalent to
the chunk size).
443 .BR \-l ", " \-\-level=
444 Set raid level. When used with
446 options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
447 raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container.
448 Obviously some of these are synonymous.
452 metadata type is requested, only the
454 level is permitted, and it does not need to be explicitly given.
458 only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.
460 Not yet supported with
464 .BR \-p ", " \-\-layout=
465 This option configures the fine details of data layout for RAID5, RAID6,
466 and RAID10 arrays, and controls the failure modes for
469 The layout of the raid5 parity block can be one of
470 .BR left\-asymmetric ,
471 .BR left\-symmetric ,
472 .BR right\-asymmetric ,
473 .BR right\-symmetric ,
474 .BR la ", " ra ", " ls ", " rs .
476 .BR left\-symmetric .
478 When setting the failure mode for level
481 .BR write\-transient ", " wt ,
482 .BR read\-transient ", " rt ,
483 .BR write\-persistent ", " wp ,
484 .BR read\-persistent ", " rp ,
486 .BR read\-fixable ", " rf ,
487 .BR clear ", " flush ", " none .
489 Each failure mode can be followed by a number, which is used as a period
490 between fault generation. Without a number, the fault is generated
491 once on the first relevant request. With a number, the fault will be
492 generated after that many requests, and will continue to be generated
493 every time the period elapses.
Multiple failure modes can be active simultaneously by using the
497 option to set subsequent failure modes.
499 "clear" or "none" will remove any pending or periodic failure modes,
500 and "flush" will clear any persistent faults.
502 Finally, the layout options for RAID10 are one of 'n', 'o' or 'f' followed
503 by a small number. The default is 'n2'. The supported options are:
506 signals 'near' copies. Multiple copies of one data block are at
507 similar offsets in different devices.
510 signals 'offset' copies. Rather than the chunks being duplicated
511 within a stripe, whole stripes are duplicated but are rotated by one
512 device so duplicate blocks are on different devices. Thus subsequent
513 copies of a block are in the next drive, and are one chunk further
518 (multiple copies have very different offsets).
519 See md(4) for more detail about 'near' and 'far'.
521 The number is the number of copies of each datablock. 2 is normal, 3
522 can be useful. This number can be at most equal to the number of
523 devices in the array. It does not need to divide evenly into that
524 number (e.g. it is perfectly legal to have an 'n2' layout for an array
525 with an odd number of devices).
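For instance, a four-device RAID10 with two 'far' copies could be created as follows (device names are illustrative):

```shell
# 'f2' = two far copies; 'n2' (the default) and 'o2' follow the same pattern
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```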
527 When an array is converted between RAID5 and RAID6 an intermediate
528 RAID6 layout is used in which the second parity block (Q) is always on
529 the last device. To convert a RAID5 to RAID6 and leave it in this new
530 layout (which does not require re-striping) use
531 .BR \-\-layout=preserve .
532 This will try to avoid any restriping.
534 The converse of this is
535 .B \-\-layout=normalise
536 which will change a non-standard RAID6 layout into a more standard
543 (thus explaining the p of
547 .BR \-b ", " \-\-bitmap=
548 Specify a file to store a write-intent bitmap in. The file should not
551 is also given. The same file should be provided
552 when assembling the array. If the word
554 is given, then the bitmap is stored with the metadata on the array,
555 and so is replicated on all devices. If the word
559 mode, then any bitmap that is present is removed.
561 To help catch typing errors, the filename must contain at least one
562 slash ('/') if it is a real file (not 'internal' or 'none').
564 Note: external bitmaps are only known to work on ext2 and ext3.
565 Storing bitmap files on other filesystems may result in serious problems.
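For example (device names are illustrative):

```shell
# Create a RAID1 array with an internal write-intent bitmap
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
      /dev/sda1 /dev/sdb1

# Later, remove the bitmap again
mdadm --grow /dev/md0 --bitmap=none
```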
568 .BR \-\-bitmap\-chunk=
Set the chunk size of the bitmap. Each bit corresponds to that many
kibibytes of storage.
571 When using a file based bitmap, the default is to use the smallest
size that is at least 4 and requires no more than 2^21 chunks.
575 bitmap, the chunksize is automatically determined to make best use of
580 .BR \-W ", " \-\-write\-mostly
subsequent devices listed in a
586 command will be flagged as 'write-mostly'. This is valid for RAID1
587 only and means that the 'md' driver will avoid reading from these
devices if at all possible. This can be useful if mirroring over a
slow link.
592 .BR \-\-write\-behind=
593 Specify that write-behind mode should be enabled (valid for RAID1
594 only). If an argument is specified, it will set the maximum number
595 of outstanding writes allowed. The default value is 256.
596 A write-intent bitmap is required in order to use write-behind
mode, and write-behind is only attempted on drives marked as
write-mostly.
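Combining these flags, a RAID1 mirror over a slow device might be set up as follows (device names are illustrative; a write-intent bitmap is required for write-behind):

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
      --write-behind=256 /dev/sda1 --write-mostly /dev/sdb1
```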
601 .BR \-\-assume\-clean
604 that the array pre-existed and is known to be clean. It can be useful
605 when trying to recover from a major failure as you can be sure that no
606 data will be affected unless you actually write to the array. It can
607 also be used when creating a RAID1 or RAID10 if you want to avoid the
608 initial resync, however this practice \(em while normally safe \(em is not
609 recommended. Use this only if you really know what you are doing.
612 .BR \-\-backup\-file=
615 is used to increase the number of
616 raid-devices in a RAID5 if there are no spare devices available.
617 See the section below on RAID_DEVICE CHANGES. The file should be
618 stored on a separate device, not on the raid array being reshaped.
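For example (paths and names are illustrative):

```shell
# Reshape with no spare available: keep the critical-section backup
# on a device that is not part of the array being reshaped
mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0.backup
```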
621 .BR \-\-array-size= ", " \-Z
622 Set the size of the array which is seen by users of the device such as
filesystems. This can be less than the real size, but never greater.
624 The size set this way does not persist across restarts of the array.
626 This is most useful when reducing the number of devices in a RAID5 or
627 RAID6. Such arrays require the array-size to be reduced before a
628 reshape can be performed that reduces the real size.
632 restores the apparent size of the array to be whatever the real
633 amount of available space is.
636 .BR \-N ", " \-\-name=
639 for the array. This is currently only effective when creating an
640 array with a version-1 superblock. The name is a simple textual
641 string that can be used to identify array components when assembling.
647 run the array, even if some of the components
648 appear to be active in another array or filesystem. Normally
650 will ask for confirmation before including such components in an
651 array. This option causes that question to be suppressed.
654 .BR \-f ", " \-\-force
657 accept the geometry and layout specified without question. Normally
659 will not allow creation of an array with only one device, and will try
660 to create a raid5 array with one missing drive (as this makes the
661 initial resync work faster). With
664 will not try to be so clever.
667 .BR \-a ", " "\-\-auto{=yes,md,mdp,part,p}{NN}"
668 Instruct mdadm how to create the device file if needed, possibly allocating
669 an unused minor number. "md" causes a non-partitionable array
670 to be used (though since Linux 2.6.28, these array devices are in fact
671 partitionable). "mdp", "part" or "p" causes a partitionable array (2.6 and
672 later) to be used. "yes" requires the named md device to have
673 a 'standard' format, and the type and minor number will be determined
674 from this. With mdadm 3.0, device creation is normally left up to
676 so this option is unlikely to be needed.
677 See DEVICE NAMES below.
679 The argument can also come immediately after
684 is not given on the command line or in the config file, then
690 is also given, then any
692 entries in the config file will override the
694 instruction given on the command line.
696 For partitionable arrays,
698 will create the device file for the whole array and for the first 4
699 partitions. A different number of partitions can be specified at the
700 end of this option (e.g.
702 If the device name ends with a digit, the partition names add a 'p',
703 and a number, e.g. "/dev/md/home1p3". If there is no
704 trailing digit, then the partition names just have a number added,
705 e.g. "/dev/md/scratch3".
707 If the md device name is in a 'standard' format as described in DEVICE
708 NAMES, then it will be created, if necessary, with the appropriate
709 number based on that name. If the device name is not in one of these
formats, then an unused minor number will be allocated. The minor
711 number will be considered unused if there is no active array for that
712 number, and there is no entry in /dev for that number and with a
non-standard name. Names that are not in 'standard' format are only
714 allowed in "/dev/md/".
718 \".BR \-\-symlink = no
723 \"to create devices in
725 \"it will also create symlinks from
727 \"with names starting with
733 \"to suppress this, or
735 \"to enforce this even if it is suppressing
743 .BR \-u ", " \-\-uuid=
744 uuid of array to assemble. Devices which don't have this uuid are
748 .BR \-m ", " \-\-super\-minor=
749 Minor number of device that array was created for. Devices which
750 don't have this minor number are excluded. If you create an array as
751 /dev/md1, then all superblocks will contain the minor number 1, even if
752 the array is later assembled as /dev/md2.
754 Giving the literal word "dev" for
758 to use the minor number of the md device that is being assembled.
761 .B \-\-super\-minor=dev
will look for superblocks with a minor number of 0.
765 .BR \-N ", " \-\-name=
766 Specify the name of the array to assemble. This must be the name
767 that was specified when creating the array. It must either match
768 the name stored in the superblock exactly, or it must match
771 prefixed to the start of the given name.
774 .BR \-f ", " \-\-force
775 Assemble the array even if some superblocks appear out-of-date
779 Attempt to start the array even if fewer drives were given than were
780 present last time the array was active. Normally if not all the
781 expected drives are found and
783 is not used, then the array will be assembled but not started.
786 an attempt will be made to start it anyway.
790 This is the reverse of
in that it inhibits the startup of an array unless all expected drives
793 are present. This is only needed with
795 and can be used if the physical connections to devices are
796 not as reliable as you would like.
799 .BR \-a ", " "\-\-auto{=no,yes,md,mdp,part}"
800 See this option under Create and Build options.
803 .BR \-b ", " \-\-bitmap=
804 Specify the bitmap file that was given when the array was created. If
807 bitmap, there is no need to specify this when assembling the array.
810 .BR \-\-backup\-file=
813 was used to grow the number of raid-devices in a RAID5, and the system
814 crashed during the critical section, then the same
818 to allow possibly corrupted data to be restored.
821 .BR \-U ", " \-\-update=
822 Update the superblock on each device while assembling the array. The
823 argument given to this flag can be one of
option will adjust the superblock of an array that was created on a Sparc
838 machine running a patched 2.2 Linux kernel. This kernel got the
839 alignment of part of the superblock wrong. You can use the
840 .B "\-\-examine \-\-sparc2.2"
843 to see what effect this would have.
847 option will update the
849 field on each superblock to match the minor number of the array being
851 This can be useful if
853 reports a different "Preferred Minor" to
855 In some cases this update will be performed automatically
856 by the kernel driver. In particular the update happens automatically
857 at the first write to an array with redundancy (RAID level 1 or
858 greater) on a 2.6 (or later) kernel.
862 option will change the uuid of the array. If a UUID is given with the
864 option that UUID will be used as a new UUID and will
866 be used to help identify the devices in the array.
869 is given, a random UUID is chosen.
873 option will change the
875 of the array as stored in the superblock. This is only supported for
876 version-1 superblocks.
880 option will change the
882 as recorded in the superblock. For version-0 superblocks, this is the
883 same as updating the UUID.
884 For version-1 superblocks, this involves updating the name.
888 option will cause the array to be marked
890 meaning that any redundancy in the array (e.g. parity for raid5,
891 copies for raid1) may be incorrect. This will cause the raid system
to perform a "resync" pass to make sure that all redundant information
is correct.
897 option allows arrays to be moved between machines with different
899 When assembling such an array for the first time after a move, giving
900 .B "\-\-update=byteorder"
903 to expect superblocks to have their byteorder reversed, and will
904 correct that order before assembling the array. This is only valid
905 with original (Version 0.90) superblocks.
option will correct the summaries in the superblock. That is, the
910 counts of total, working, active, failed, and spare devices.
914 will rarely be of use. It applies to version 1.1 and 1.2 metadata
915 only (where the metadata is at the start of the device) and is only
916 useful when the component device has changed size (typically become
917 larger). The version 1 metadata records the amount of the device that
918 can be used to store data, so if a device in a version 1.1 or 1.2
919 array becomes larger, the metadata will still be visible, but the
920 extra space will not. In this case it might be useful to assemble the
922 .BR \-\-update=devicesize .
925 to determine the maximum usable amount of space on each device and
926 update the relevant field in the metadata.
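For example, applying one of these updates during assembly (device names are illustrative):

```shell
# Give the array a fresh random UUID while assembling it
mdadm --assemble /dev/md0 --update=uuid /dev/sda1 /dev/sdb1

# Record the maximum usable space after the underlying devices grew
mdadm --assemble /dev/md0 --update=devicesize /dev/sda1 /dev/sdb1
```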
929 .B \-\-auto\-update\-homehost
930 This flag is only meaningful with auto-assembly (see discussion below).
931 In that situation, if no suitable arrays are found for this homehost,
933 will rescan for any arrays at all and will assemble them and update the
934 homehost to match the current host.
940 hot-add listed devices.
944 re-add a device that was recently removed from an array.
947 .BR \-r ", " \-\-remove
remove listed devices. They must not be active; i.e. they should
be failed or spare devices. As well as the name of a device file
The first causes all failed devices to be removed. The second causes
any device which is no longer connected to the system (i.e. an 'open'
962 to be removed. This will only succeed for devices that are spares or
963 have already been marked as failed.
966 .BR \-f ", " \-\-fail
967 mark listed devices as faulty.
968 As well as the name of a device file, the word
970 can be given. This will cause any device that has been detached from
971 the system to be marked as failed. It can then be removed.
979 .BR \-\-write\-mostly
980 Subsequent devices that are added or re-added will have the 'write-mostly'
flag set. This is only valid for RAID1 and means that the 'md' driver
982 will avoid reading from these devices if possible.
Subsequent devices that are added or re-added will have the 'write-mostly'
flag cleared.
990 Each of these options require that the first device listed is the array
991 to be acted upon, and the remainder are component devices to be added,
992 removed, or marked as faulty. Several different operations can be
993 specified for different devices, e.g.
995 mdadm /dev/md0 \-\-add /dev/sda1 \-\-fail /dev/sdb1 \-\-remove /dev/sdb1
997 Each operation applies to all devices listed until the next
1000 If an array is using a write-intent bitmap, then devices which have
1001 been removed can be re-added in a way that avoids a full
1002 reconstruction but instead just updates the blocks that have changed
1003 since the device was removed. For arrays with persistent metadata
1004 (superblocks) this is done automatically. For arrays created with
mdadm needs to be told that this device was removed recently with
1009 Devices can only be removed from an array if they are not in active
use, i.e. they must be spares or failed devices. To remove an active
device, it must first be marked as faulty.
1017 .BR \-Q ", " \-\-query
1018 Examine a device to see
1019 (1) if it is an md device and (2) if it is a component of an md
1021 Information about what is discovered is presented.
1024 .BR \-D ", " \-\-detail
1025 Print detail of one or more md devices.
1028 .BR \-\-detail\-platform
1029 Print detail of the platform's raid capabilities (firmware / hardware
1030 topology) for a given metadata format.
1033 .BR \-Y ", " \-\-export
1038 output will be formatted as
1040 pairs for easy import into the environment.
1043 .BR \-E ", " \-\-examine
1044 Print content of md superblock on device(s).
1047 If an array was created on a 2.2 Linux kernel patched with RAID
1048 support, the superblock will have been created incorrectly, or at
1049 least incompatibly with 2.4 and later kernels. Using the
1053 will fix the superblock before displaying it. If this appears to do
1054 the right thing, then the array can be successfully assembled using
1055 .BR "\-\-assemble \-\-update=sparc2.2" .
1058 .BR \-X ", " \-\-examine\-bitmap
1059 Report information about a bitmap file.
1060 The argument is either an external bitmap file or an array component
1061 in case of an internal bitmap.
1064 .BR \-R ", " \-\-run
1065 start a partially built array.
1068 .BR \-S ", " \-\-stop
1069 deactivate array, releasing all resources.
1072 .BR \-o ", " \-\-readonly
1073 mark array as readonly.
1076 .BR \-w ", " \-\-readwrite
1077 mark array as readwrite.
1080 .B \-\-zero\-superblock
1081 If the device contains a valid md superblock, the block is
1082 overwritten with zeros. With
1084 the block where the superblock would be is overwritten even if it
1085 doesn't appear to be valid.
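For example (the device name is illustrative; this operation is destructive):

```shell
# Erase stale RAID metadata before reusing a partition
mdadm --zero-superblock /dev/sda1
```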
1088 .BR \-t ", " \-\-test
1093 is set to reflect the status of the device.
1096 .BR \-W ", " \-\-wait
1097 For each md device given, wait for any resync, recovery, or reshape
1098 activity to finish before returning.
1100 will return with success if it actually waited for every device
1101 listed, otherwise it will return failure.
1105 For each md device given, or each device in /proc/mdstat if
1107 is given, arrange for the array to be marked clean as soon as possible.
1108 Also, quiesce resync so that the monitor for external metadata arrays
1109 (mdmon) has an opportunity to checkpoint the resync position.
1111 will return with success if the array uses external metadata and we
successfully waited. For native arrays this returns immediately as the
kernel handles both dirty-clean transitions and resync checkpointing
itself at shutdown. No action is taken if safe-mode handling is
disabled.
1117 .SH For Incremental Assembly mode:
1119 .BR \-\-rebuild\-map ", " \-r
1120 Rebuild the map file
1121 .RB ( /var/run/mdadm/map )
1124 uses to help track which arrays are currently being assembled.
1127 .BR \-\-run ", " \-R
1128 Run any array assembled as soon as a minimal number of devices are
1129 available, rather than waiting until all expected devices are present.
1133 This allows the hot-plug system to prevent arrays from running when it knows
1134 that more disks may arrive later in the discovery process.
1137 .BR \-\-scan ", " \-s
1138 Only meaningful with
1142 file for arrays that are being incrementally assembled and will try to
1143 start any that are not already started. If any such array is listed
1146 as requiring an external bitmap, that bitmap will be attached first.
1148 .SH For Monitor mode:
1150 .BR \-m ", " \-\-mail
1151 Give a mail address to send alerts to.
1154 .BR \-p ", " \-\-program ", " \-\-alert
1155 Give a program to be run whenever an event is detected.
1158 .BR \-y ", " \-\-syslog
1159 Cause all events to be reported through 'syslog'. The messages have
1160 facility of 'daemon' and varying priorities.
1163 .BR \-d ", " \-\-delay
1164 Give a delay in seconds.
1166 polls the md arrays and then waits this many seconds before polling
1167 again. The default is 60 seconds.
1170 .BR \-f ", " \-\-daemonise
1173 to run as a background daemon if it decides to monitor anything. This
causes it to fork and run in the child, and to disconnect from the
1175 terminal. The process id of the child is written to stdout.
1178 which will only continue monitoring if a mail address or alert program
1179 is found in the config file.
1182 .BR \-i ", " \-\-pid\-file
1185 is running in daemon mode, write the pid of the daemon process to
1186 the specified file, instead of printing it on standard output.
1189 .BR \-1 ", " \-\-oneshot
1190 Check arrays only once. This will generate
1192 events and more significantly
1198 .B " mdadm \-\-monitor \-\-scan \-1"
1200 from a cron script will ensure regular notification of any degraded arrays.
1203 .BR \-t ", " \-\-test
1206 alert for every array found at startup. This alert gets mailed and
passed to the alert program. This can be used for testing that alert
messages do get through successfully.
1214 .B mdadm \-\-assemble
1215 .I md-device options-and-component-devices...
1218 .B mdadm \-\-assemble \-\-scan
1219 .I md-devices-and-options...
1222 .B mdadm \-\-assemble \-\-scan
1226 This usage assembles one or more raid arrays from pre-existing components.
1227 For each array, mdadm needs to know the md device, the identity of the
1228 array, and a number of component-devices. These can be found in a number of ways.
1230 In the first usage example (without the
1232 the first device given is the md device.
1233 In the second usage example, all devices listed are treated as md
1234 devices and assembly is attempted.
1235 In the third (where no devices are listed) all md devices that are
1236 listed in the configuration file are assembled. Then any arrays that
1237 can be found on unused devices will also be assembled.
1239 If precisely one device is listed, but
1245 was given and identity information is extracted from the configuration file.
1247 The identity can be given with the
1251 option, will be taken from the md-device record in the config file, or
1252 will be taken from the super block of the first component-device
1253 listed on the command line.
1255 Devices can be given on the
1257 command line or in the config file. Only devices which have an md
1258 superblock which contains the right identity will be considered for
1261 The config file is only used if explicitly named with
1263 or requested with (a possibly implicit)
1268 .B /etc/mdadm/mdadm.conf
1273 is not given, then the config file will only be used to find the
1274 identity of md arrays.
1276 Normally the array will be started after it is assembled. However if
1278 is not given and insufficient drives were listed to start a complete
1279 (non-degraded) array, then the array is not started (to guard against
1280 usage errors). To insist that the array be started in this case (as
1281 may work for RAID1, 4, 5, 6, or 10), give the
If the md device does not exist, then it will be created provided the
intent is clear, i.e. the name must be in a standard form, or the
1288 option must be given to clarify how and whether the device should be
1290 This can be useful for handling partitioned devices (which don't have
1291 a stable device number \(em it can change after a reboot) and when using
1292 "udev" to manage your
1294 tree (udev cannot handle md devices because of the unusual device
1295 initialisation conventions).
1297 If the option to "auto" is "mdp" or "part" or (on the command line
1298 only) "p", then mdadm will create a partitionable array, using the
1299 first free one that is not in use and does not already have an entry
1300 in /dev (apart from numeric /dev/md* entries).
1302 If the option to "auto" is "yes" or "md" or (on the command line)
1303 nothing, then mdadm will create a traditional, non-partitionable md
1306 It is expected that the "auto" functionality will be used to create
1307 device entries with meaningful names such as "/dev/md/home" or
1308 "/dev/md/root", rather than names based on the numerical array number.
1310 When using option "auto" to create a partitionable array, the device
1311 files for the first 4 partitions are also created. If a different
1312 number is required it can be simply appended to the auto option.
1313 e.g. "auto=part8". Partition names are created by appending a digit
1314 string to the device name, with an intervening "p" if the device name
1319 option is also available in Build and Create modes. As those modes do
1320 not use a config file, the "auto=" config option does not apply to
1328 and no devices are listed,
1330 will first attempt to assemble all the arrays listed in the config
1333 It will then look further for possible arrays and will try to assemble
1334 anything that it finds. Arrays which are tagged as belonging to the given
1335 homehost will be assembled and started normally. Arrays which do not
1336 obviously belong to this host are given names that are expected not to
1337 conflict with anything local, and are started "read-auto" so that
1338 nothing is written to any device until the array is written to. i.e.
1339 automatic resync etc is delayed.
1343 finds a consistent set of devices that look like they should comprise
1344 an array, and if the superblock is tagged as belonging to the given
1345 home host, it will automatically choose a device name and try to
1346 assemble the array. If the array uses version-0.90 metadata, then the
1348 number as recorded in the superblock is used to create a name in
1352 If the array uses version-1 metadata, then the
1354 from the superblock is used to similarly create a name in
1356 (the name will have any 'host' prefix stripped first).
1360 cannot find any array for the given host at all, and if
1361 .B \-\-auto\-update\-homehost
1364 will search again for any array (not just an array created for this
1365 host) and will assemble each assuming
1366 .BR \-\-update=homehost .
1367 This will change the host tag in the superblock so that on the next run,
1368 these arrays will be found without the second pass. The intention of
1369 this feature is to support transitioning a set of md arrays to using
1372 The reason for requiring arrays to be tagged with the homehost for
1373 auto assembly is to guard against problems that can arise when moving
1374 devices from one host to another.
1384 .BI \-\-raid\-devices= Z
1388 This usage is similar to
1390 The difference is that it creates an array without a superblock. With
1391 these arrays there is no difference between initially creating the array and
1392 subsequently assembling the array, except that hopefully there is useful
1393 data there in the second case.
The level may be raid0, linear, multipath, or faulty, or one of their
1396 synonyms. All devices must be listed and the array will be started
1408 .BI \-\-raid\-devices= Z
1412 This usage will initialise a new md array, associate some devices with
1413 it, and activate the array.
1417 option is given (as described in more detail in the section on
1418 Assemble mode), then the md device will be created with a suitable
1419 device number if necessary.
1421 As devices are added, they are checked to see if they contain raid
1422 superblocks or filesystems. They are also checked to see if the variance in
1423 device size exceeds 1%.
1425 If any discrepancy is found, the array will not automatically be run, though
1428 can override this caution.
1430 To create a "degraded" array in which some devices are missing, simply
1431 give the word "\fBmissing\fP"
1432 in place of a device name. This will cause
1434 to leave the corresponding slot in the array empty.
1435 For a RAID4 or RAID5 array at most one slot can be
1436 "\fBmissing\fP"; for a RAID6 array at most two slots.
1437 For a RAID1 array, only one real device needs to be given. All of the
1441 When creating a RAID5 array,
1443 will automatically create a degraded array with an extra spare drive.
1444 This is because building the spare into a degraded array is in general faster than resyncing
1445 the parity on a non-degraded, but not clean, array. This feature can
1446 be overridden with the
1450 When creating an array with version-1 metadata a name for the array is
1452 If this is not given with the
1456 will choose a name based on the last component of the name of the
1457 device being created. So if
1459 is being created, then the name
1464 is being created, then the name
1468 When creating a partition based array, using
1470 with version-1.x metadata, the partition type should be set to
(non fs-data). This type selection allows for greater precision, since
using any other type [such as RAID auto-detect (0xFD) or a GNU/Linux
partition (0x83)] might create problems in the event of array recovery
through a live cdrom.
A new array will normally get a randomly assigned 128-bit UUID which is
1477 very likely to be unique. If you have a specific need, you can choose
1478 a UUID for the array by giving the
1480 option. Be warned that creating two arrays with the same UUID is a
1481 recipe for disaster. Also, using
1483 when creating a v0.90 array will silently override any
1488 .\"option is given, it is not necessary to list any component-devices in this command.
1489 .\"They can be added later, before a
1493 .\"is given, the apparent size of the smallest drive given is used.
1495 When creating an array within a
1498 can be given either the list of devices to use, or simply the name of
1499 the container. The former case gives control over which devices in
1500 the container will be used for the array. The latter case allows
1502 to automatically choose which devices to use based on how much spare
1505 The General Management options that are valid with
1510 insist on running the array even if some devices look like they might
1515 start the array readonly \(em not supported yet.
1523 .I options... devices...
1526 This usage will allow individual devices in an array to be failed,
removed or added. It is possible to perform multiple operations in
one command. For example:
1530 .B " mdadm /dev/md0 \-f /dev/hda1 \-r /dev/hda1 \-a /dev/hda1"
1536 and will then remove it from the array and finally add it back
1537 in as a spare. However only one md array can be affected by a single
1548 MISC mode includes a number of distinct operations that
1549 operate on distinct devices. The operations are:
1552 The device is examined to see if it is
1553 (1) an active md array, or
1554 (2) a component of an md array.
1555 The information discovered is reported.
1559 The device should be an active md device.
1561 will display a detailed description of the array.
1565 will cause the output to be less detailed and the format to be
1566 suitable for inclusion in
1567 .BR /etc/mdadm.conf .
1570 will normally be 0 unless
1572 failed to get useful information about the device(s); however, if the
1574 option is given, then the exit status will be:
1578 The array is functioning normally.
1581 The array has at least one failed device.
1584 The array has multiple failed devices such that it is unusable.
1587 There was an error while trying to get information about the device.
1591 .B \-\-detail\-platform
1592 Print detail of the platform's raid capabilities (firmware / hardware
1593 topology). If the metadata is specified with
1597 then the return status will be:
1601 metadata successfully enumerated its platform components on this system
1604 metadata is platform independent
1607 metadata failed to find its platform components on this system
1612 The device should be a component of an md array.
1614 will read the md superblock of the device and display the contents.
1619 is given, then multiple devices that are components of the one array
1620 are grouped together and reported in a single entry suitable
1622 .BR /etc/mdadm.conf .
1626 without listing any devices will cause all devices listed in the
1627 config file to be examined.
1631 The devices should be active md arrays which will be deactivated, as
1632 long as they are not currently in use.
1636 This will fully activate a partially assembled md array.
This will mark an active array as read-only, provided that it is
1641 not currently being used.
1647 array back to being read/write.
1651 For all operations except
1654 will cause the operation to be applied to all arrays listed in
1659 causes all devices listed in the config file to be examined.
1666 .B mdadm \-\-monitor
1667 .I options... devices...
1672 to periodically poll a number of md arrays and to report on any events
1675 will never exit once it decides that there are arrays to be checked,
1676 so it should normally be run in the background.
1678 As well as reporting events,
1680 may move a spare drive from one array to another if they are in the
1683 and if the destination array has a failed drive but no spares.
1685 If any devices are listed on the command line,
1687 will only monitor those devices. Otherwise all arrays listed in the
1688 configuration file will be monitored. Further, if
1690 is given, then any other md devices that appear in
1692 will also be monitored.
1694 The result of monitoring the arrays is the generation of events.
1695 These events are passed to a separate program (if specified) and may
1696 be mailed to a given E-mail address.
1698 When passing events to a program, the program is run once for each event,
1699 and is given 2 or 3 command-line arguments: the first is the
1700 name of the event (see below), the second is the name of the
1701 md device which is affected, and the third is the name of a related
1702 device if relevant (such as a component device that has failed).
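As a concrete illustration, here is a minimal sketch of a handler that
could be passed to the alert-program option; the function name and the
message format are hypothetical, not part of mdadm itself:

```shell
# Minimal alert-handler sketch (hypothetical names and message format).
# mdadm invokes the program with: event-name md-device [related-device]
alert() {
    event="$1"      # event name, e.g. Fail or RebuildFinished
    array="$2"      # affected md device, e.g. /dev/md0
    component="$3"  # related component device, may be absent
    if [ -n "$component" ]; then
        echo "mdadm event $event on $array ($component)"
    else
        echo "mdadm event $event on $array"
    fi
}
alert Fail /dev/md0 /dev/sda1
```

A real handler would typically log the event or page an administrator
rather than just echo it.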
1706 is given, then a program or an E-mail address must be specified on the
1707 command line or in the config file. If neither are available, then
1709 will not monitor anything.
1713 will continue monitoring as long as something was found to monitor. If
1714 no program or email is given, then each event is reported to
1717 The different events are:
1721 .B DeviceDisappeared
1722 An md array which previously was configured appears to no longer be
1723 configured. (syslog priority: Critical)
1727 was told to monitor an array which is RAID0 or Linear, then it will
1729 .B DeviceDisappeared
1730 with the extra information
1732 This is because RAID0 and Linear do not support the device-failed,
1733 hot-spare and resync operations which are monitored.
1737 An md array started reconstruction. (syslog priority: Warning)
is 20, 40, 60, or 80, this indicates that the rebuild has passed that
percentage of the total. (syslog priority: Warning)
An md array that was rebuilding isn't any more, either because it
finished normally or was aborted. (syslog priority: Warning)
1753 An active component device of an array has been marked as
1754 faulty. (syslog priority: Critical)
1758 A spare component device which was being rebuilt to replace a faulty
1759 device has failed. (syslog priority: Critical)
1763 A spare component device which was being rebuilt to replace a faulty
1764 device has been successfully rebuilt and has been made active.
1765 (syslog priority: Info)
1769 A new md array has been detected in the
1771 file. (syslog priority: Info)
1775 A newly noticed array appears to be degraded. This message is not
1778 notices a drive failure which causes degradation, but only when
1780 notices that an array is degraded when it first sees the array.
1781 (syslog priority: Critical)
1785 A spare drive has been moved from one array in a
1787 to another to allow a failed drive to be replaced.
1788 (syslog priority: Info)
1794 has been told, via the config file, that an array should have a certain
1795 number of spare devices, and
1797 detects that it has fewer than this number when it first sees the
1798 array, it will report a
1801 (syslog priority: Warning)
1805 An array was found at startup, and the
1808 (syslog priority: Info)
1818 cause Email to be sent. All events cause the program to be run.
1819 The program is run with two or three arguments: the event
1820 name, the array device and possibly a second device.
1822 Each event has an associated array device (e.g.
1824 and possibly a second device. For
1829 the second device is the relevant component device.
1832 the second device is the array that the spare was moved from.
1836 to move spares from one array to another, the different arrays need to
1837 be labeled with the same
1839 in the configuration file. The
1841 name can be any string; it is only necessary that different spare
1842 groups use different names.
1846 detects that an array in a spare group has fewer active
1847 devices than necessary for the complete array, and has no spare
1848 devices, it will look for another array in the same spare group that
has a full complement of working drives and a spare. It will then
attempt to remove the spare from the second array and add it to the
1852 If the removal succeeds but the adding fails, then it is added back to
1856 The GROW mode is used for changing the size or shape of an active
1858 For this to work, the kernel must support the necessary change.
1859 Various types of growth are being added during 2.6 development,
1860 including restructuring a raid5 array to have more active devices.
1862 Currently the only support available is to
1864 change the "size" attribute
1865 for RAID1, RAID5 and RAID6.
1867 increase or decrease the "raid\-devices" attribute of RAID1, RAID5,
1870 change the chunk-size and layout of RAID5 and RAID6.
1872 convert between RAID1 and RAID5, and between RAID5 and RAID6.
1874 add a write-intent bitmap to any array which supports these bitmaps, or
1875 remove a write-intent bitmap from such an array.
1878 GROW mode is not currently supported for
1880 or arrays inside containers.
Normally when an array is built the "size" is taken from the smallest
of the drives. If all the small drives in an array are, one at a
1885 time, removed and replaced with larger drives, then you could have an
1886 array of large drives with only a small amount used. In this
1887 situation, changing the "size" with "GROW" mode will allow the extra
1888 space to start being used. If the size is increased in this way, a
1889 "resync" process will start to make sure the new parts of the array
1892 Note that when an array changes size, any filesystem that may be
1893 stored in the array will not automatically grow to use the space. The
1894 filesystem will need to be explicitly told to use the extra space.
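As a hedged sketch of the procedure just described (the device name and
the use of resize2fs are examples only; this needs root and a real
array):

```shell
# Grow /dev/md0 (an example name) to use all available component space,
# then tell the filesystem to claim the extra room.
mdadm --grow /dev/md0 --size=max
mdadm --wait /dev/md0        # wait for the resync of the new space
resize2fs /dev/md0           # for ext2/3/4; other filesystems differ
```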
1896 .SS RAID-DEVICES CHANGES
1898 A RAID1 array can work with any number of devices from 1 upwards
(though 1 is not very useful). There may be times when you want to
1900 increase or decrease the number of active devices. Note that this is
1901 different to hot-add or hot-remove which changes the number of
1904 When reducing the number of devices in a RAID1 array, the slots which
1905 are to be removed from the array must already be vacant. That is, the
1906 devices which were in those slots must be failed and removed.
1908 When the number of devices is increased, any hot spares that are
1909 present will be activated immediately.
1911 Changing the number of active devices in a RAID5 or RAID6 is much more
1912 effort. Every block in the array will need to be read and written
1913 back to a new location. From 2.6.17, the Linux Kernel is able to
1914 increase the number of devices in a RAID5 safely, including restart
1915 and interrupted "reshape". From 2.6.31, the Linux Kernel is able to
1916 increase or decrease the number of devices in a RAID5 or RAID6.
1918 When decreasing the number of devices, the size of the array will also
1919 decrease. If there was data in the array, it could get destroyed and
1920 this is not reversible. To help prevent accidents,
1922 requires that the size of the array be decreased first with
1923 .BR "mdadm --grow --array-size" .
1924 This is a reversible change which simply makes the end of the array
1925 inaccessible. The integrity of any data can then be checked before
the non-reversible reduction in the number of devices is requested.
1928 When relocating the first few stripes on a raid5, it is not possible
1929 to keep the data on disk completely consistent and crash-proof. To
1930 provide the required safety, mdadm disables writes to the array while
1931 this "critical section" is reshaped, and takes a backup of the data
1932 that is in that section. This backup is normally stored in any spare
1933 devices that the array has, however it can also be stored in a
1934 separate file specified with the
1936 option. If this option is used, and the system does crash during the
1937 critical period, the same file must be passed to
1939 to restore the backup and reassemble the array.
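For example, a reshape that keeps the critical-section backup in a
file might look like this (device names and the backup path are
illustrative; the backup file must not be on the array being reshaped):

```shell
# Add a device and grow a RAID5 from 3 to 4 members, with the
# critical-section backup kept in a separate file.
mdadm /dev/md0 --add /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-backup
# After a crash during the critical section, pass the same file back:
#   mdadm --assemble /dev/md0 --backup-file=/root/md0-backup /dev/sd[abcd]1
```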
1943 Changing the RAID level of any array happens instantaneously. However
in the RAID5 to RAID6 case this requires a non-standard layout of the
RAID6 data, and in the RAID6 to RAID5 case that non-standard layout is
required before the change can be accomplished. So while the level
1947 change is instant, the accompanying layout change can take quite a
1950 .SS CHUNK-SIZE AND LAYOUT CHANGES
Changing the chunk-size or layout without also changing the number of
devices at the same time will involve re-writing all blocks in-place.
1954 To ensure against data loss in the case of a crash, a
1956 must be provided for these changes. Small sections of the array will
1957 be copied to the backup file while they are being rearranged.
1959 If the reshape is interrupted for any reason, this backup file must be
1961 .B "mdadm --assemble"
1962 so the array can be reassembled. Consequently the file cannot be
1963 stored on the device being reshaped.
1968 A write-intent bitmap can be added to, or removed from, an active
1969 array. Either internal bitmaps, or bitmaps stored in a separate file,
1970 can be added. Note that if you add a bitmap stored in a file which is
1971 in a filesystem that is on the raid array being affected, the system
1972 will deadlock. The bitmap must be on a separate filesystem.
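The bitmap operations described above can be sketched as follows
(device names and the bitmap file path are examples only):

```shell
# Add an internal write-intent bitmap to an active array, and remove it.
mdadm --grow /dev/md0 --bitmap=internal
mdadm --grow /dev/md0 --bitmap=none
# A file-based bitmap must live on a filesystem that is NOT on /dev/md0:
mdadm --grow /dev/md0 --bitmap=/var/md0-bitmap
```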
1974 .SH INCREMENTAL MODE
1978 .B mdadm \-\-incremental
1984 .B mdadm \-\-incremental \-\-rebuild
1987 .B mdadm \-\-incremental \-\-run \-\-scan
1991 This mode is designed to be used in conjunction with a device
1992 discovery system. As devices are found in a system, they can be
1994 .B "mdadm \-\-incremental"
1995 to be conditionally added to an appropriate array.
1997 If the device passed is a
1999 device created by a previous call to
2001 then rather than trying to add that device to an array, all the arrays
2002 described by the metadata of the container will be started.
2005 performs a number of tests to determine if the device is part of an
2006 array, and which array it should be part of. If an appropriate array
2007 is found, or can be created,
2009 adds the device to the array and conditionally starts the array.
2013 will only add devices to an array which were previously working
2014 (active or spare) parts of that array. It does not currently support
2015 automatic inclusion of a new drive as a spare in some array.
makes are as follows:
2021 Is the device permitted by
2023 That is, is it listed in a
line in that file? If
is absent then the default is to allow any device. Similarly, if
2029 contains the special word
2031 then any device is allowed. Otherwise the device name given to
2033 must match one of the names or patterns in a
Does the device have a valid md superblock? If a specific metadata
version is requested with
2043 then only that style of metadata is accepted, otherwise
2045 finds any known version of metadata. If no
2047 metadata is found, the device is rejected.
2050 Does the metadata match an expected array?
2051 The metadata can match in two ways. Either there is an array listed
2054 which identifies the array (either by UUID, by name, by device list,
2055 or by minor-number), or the array was created with a
2061 or on the command line.
2064 is not able to positively identify the array as belonging to the
2065 current host, the device will be rejected.
2069 keeps a list of arrays that it has partially assembled in
2070 .B /var/run/mdadm/map
2072 .B /var/run/mdadm.map
2073 if the directory doesn't exist). If no array exists which matches
2074 the metadata on the new device,
2076 must choose a device name and unit number. It does this based on any
2079 or any name information stored in the metadata. If this name
2080 suggests a unit number, that number will be used, otherwise a free
2081 unit number will be chosen. Normally
2083 will prefer to create a partitionable array, however if the
2087 suggests that a non-partitionable array is preferred, that will be
2091 Once an appropriate array is found or created and the device is added,
2093 must decide if the array is ready to be started. It will
2094 normally compare the number of available (non-spare) devices to the
2095 number of devices that the metadata suggests need to be active. If
2096 there are at least that many, the array will be started. This means
2097 that if any devices are missing the array will not be restarted.
2103 in which case the array will be run as soon as there are enough
2104 devices present for the data to be accessible. For a raid1, that
2105 means one device will start the array. For a clean raid5, the array
2106 will be started as soon as all but one drive is present.
2108 Note that neither of these approaches is really ideal. If it can
2109 be known that all device discovery has completed, then
2113 can be run which will try to start all arrays that are being
2114 incrementally assembled. They are started in "read-auto" mode in
2115 which they are read-only until the first write request. This means
2116 that no metadata updates are made and no attempt at resync or recovery
2117 happens. Further devices that are found before the first write can
2118 still be added safely.
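The hot-plug flow described in this section can be sketched as follows
(the device names are hypothetical, and a real system would normally
issue the incremental calls from udev rules rather than by hand):

```shell
# Feed each newly discovered device to mdadm as it appears; arrays are
# started automatically once enough members are present.
mdadm --incremental /dev/sdb1
mdadm --incremental /dev/sdc1
# Once device discovery is known to be complete, start partial arrays
# too, in read-auto mode:
mdadm --incremental --run --scan
```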
2122 This section describes environment variables that affect how mdadm
2127 Setting this value to 1 will prevent mdadm from automatically launching
2128 mdmon. This variable is intended primarily for debugging mdadm/mdmon.
2134 does not create any device nodes in /dev, but leaves that task to
2138 appears not to be configured, or if this environment variable is set
will create any devices that are needed.
2145 .B " mdadm \-\-query /dev/name-of-device"
2147 This will find out if a given device is a raid array, or is part of
2148 one, and will provide brief information about the device.
2150 .B " mdadm \-\-assemble \-\-scan"
2152 This will assemble and start all arrays listed in the standard config
2153 file. This command will typically go in a system startup file.
2155 .B " mdadm \-\-stop \-\-scan"
2157 This will shut down all arrays that can be shut down (i.e. are not
2158 currently in use). This will typically go in a system shutdown script.
2160 .B " mdadm \-\-follow \-\-scan \-\-delay=120"
2162 If (and only if) there is an Email address or program given in the
standard config file, then mdadm will monitor the status of all
arrays listed in that file by polling them every 2 minutes.
2167 .B " mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/hd[ac]1"
2169 Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
2172 .B " echo 'DEVICE /dev/hd*[0\-9] /dev/sd*[0\-9]' > mdadm.conf"
2174 .B " mdadm \-\-detail \-\-scan >> mdadm.conf"
2176 This will create a prototype config file that describes currently
2177 active arrays that are known to be made from partitions of IDE or SCSI drives.
2178 This file should be reviewed before being used as it may
2179 contain unwanted detail.
2181 .B " echo 'DEVICE /dev/hd[a\-z] /dev/sd*[a\-z]' > mdadm.conf"
2183 .B " mdadm \-\-examine \-\-scan \-\-config=mdadm.conf >> mdadm.conf"
2185 This will find arrays which could be assembled from existing IDE and
2186 SCSI whole drives (not partitions), and store the information in the
2187 format of a config file.
2188 This file is very likely to contain unwanted detail, particularly
2191 entries. It should be reviewed and edited before being used as an
2194 .B " mdadm \-\-examine \-\-brief \-\-scan \-\-config=partitions"
2196 .B " mdadm \-Ebsc partitions"
2198 Create a list of devices by reading
2199 .BR /proc/partitions ,
scan these for RAID superblocks, and print out a brief listing of all
2203 .B " mdadm \-Ac partitions \-m 0 /dev/md0"
2205 Scan all partitions and devices listed in
2206 .BR /proc/partitions
2209 out of all such devices with a RAID superblock with a minor number of 0.
2211 .B " mdadm \-\-monitor \-\-scan \-\-daemonise > /var/run/mdadm"
If the config file contains a mail address or alert program, run mdadm in
the background in monitor mode monitoring all md devices. Also write
the pid of the mdadm daemon to
2216 .BR /var/run/mdadm .
2218 .B " mdadm \-Iq /dev/somedevice"
Try to incorporate a newly discovered device into some array as
2223 .B " mdadm \-\-incremental \-\-rebuild \-\-run \-\-scan"
2225 Rebuild the array map from any current arrays, and then start any that
2228 .B " mdadm /dev/md4 --fail detached --remove detached"
2230 Any devices which are components of /dev/md4 will be marked as faulty
and then removed from the array.
.B " mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4"
2237 which is currently a RAID5 array will be converted to RAID6. There
2238 should normally already be a spare drive attached to the array as a
2239 RAID6 needs one more drive than a matching RAID5.
2241 .B " mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]"
2243 Create a DDF array over 6 devices.
2245 .B " mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf"
2247 Create a raid5 array over any 3 devices in the given DDF set. Use
2248 only 30 gigabytes of each device.
2250 .B " mdadm -A /dev/md/ddf1 /dev/sd[a-f]"
Assemble a pre-existing ddf array.
2254 .B " mdadm -I /dev/md/ddf1"
2256 Assemble all arrays contained in the ddf array, assigning names as
2259 .B " mdadm \-\-create \-\-help"
2261 Provide help about the Create mode.
2263 .B " mdadm \-\-config \-\-help"
2265 Provide help about the format of the config file.
2267 .B " mdadm \-\-help"
2269 Provide general help.
2280 lists all active md devices with information about them.
2282 uses this to find arrays when
2284 is given in Misc mode, and to monitor array reconstruction
2290 The config file lists which devices may be scanned to see if
they contain an MD superblock, and gives identifying information
2292 (e.g. UUID) about known MD arrays. See
2296 .SS /var/run/mdadm/map
2299 mode is used, this file gets a list of arrays currently being created.
2302 does not exist as a directory, then
2303 .B /var/run/mdadm.map
understands two sorts of names for array devices.
2311 The first is the so-called 'standard' format name, which matches the
2312 names used by the kernel and which appear in
2315 The second sort can be freely chosen, but must reside in
2317 When giving a device name to
to create or assemble an array, either a full path name such as
2323 can be given, or just the suffix of the second sort of name, such as
2329 chooses device names during auto-assembly, it will normally add a
small sequence number to the end of the name to avoid conflicts
between multiple arrays that have the same name. If
2333 can reasonably determine that the array really is meant for this host,
2334 either by a hostname in the metadata, or by the presence of the array
in /etc/mdadm.conf, then it will leave off the suffix if possible.
2337 The standard names for non-partitioned arrays (the only sort of md
2338 array available in 2.4 and earlier) are of the form
2342 where NN is a number.
2343 The standard names for partitionable arrays (as available from 2.6
2344 onwards) are of the form
Partition numbers should be indicated by adding "pMM" to these, thus "/dev/md/d1p2".
From kernel version 2.6.28, the "non-partitioned array" can actually
2351 be partitioned. So the "md_dNN" names are no longer needed, and
2352 partitions such as "/dev/mdNNpXX" are possible.
2356 was previously known as
2360 is completely separate from the
2362 package, and does not use the
2364 configuration file at all.
2367 For further information on mdadm usage, MD and the various levels of
2370 .B http://linux\-raid.osdl.org/
2372 (based upon Jakob \(/Ostergaard's Software\-RAID.HOWTO)
2374 .\"for new releases of the RAID driver check out:
2377 .\".UR ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
2378 .\"ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
2383 .\".UR http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
2384 .\"http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
2387 The latest version of
2389 should always be available from
2391 .B http://www.kernel.org/pub/linux/utils/raid/mdadm/