2 .\" Copyright Neil Brown and others.
3 .\" This program is free software; you can redistribute it and/or modify
4 .\" it under the terms of the GNU General Public License as published by
5 .\" the Free Software Foundation; either version 2 of the License, or
6 .\" (at your option) any later version.
7 .\" See file COPYING in distribution for details.
10 mdadm \- manage MD devices
16 .BI mdadm " [mode] <raiddevice> [options] <component-devices>"
19 RAID devices are virtual devices created from two or more
20 real block devices. This allows multiple devices (typically disk
21 drives or partitions thereof) to be combined into a single device to
22 hold (for example) a single filesystem.
23 Some RAID levels include redundancy and so can survive some degree of
26 Linux Software RAID devices are implemented through the md (Multiple
27 Devices) device driver.
29 Currently, Linux supports
46 is not a Software RAID mechanism, but does involve
48 each device is a path to one common physical storage device.
49 New installations should not use md/multipath as it is not well
50 supported and has no ongoing development. Use the Device Mapper based
51 multipath-tools instead.
54 is also not true RAID, and it only involves one device. It
55 provides a layer over a true device that can be used to inject faults.
60 is a collection of devices that are
61 managed as a set. This is similar to the set of devices connected to
62 a hardware RAID controller. The set of devices may contain a number
63 of different RAID arrays each utilising some (or all) of the blocks from a
64 number of the devices in the set. For example, two devices in a 5-device set
65 might form a RAID1 using the whole devices. The remaining three might
66 have a RAID5 over the first half of each device, and a RAID0 over the
71 there is one set of metadata that describes all of
72 the arrays in the container. So when
76 device, the device just represents the metadata. Other normal arrays (RAID1
77 etc) can be created inside the container.
80 mdadm has several major modes of operation:
83 Assemble the components of a previously created
84 array into an active array. Components can be explicitly given
85 or can be searched for.
87 checks that the components
88 do form a bona fide array, and can, on request, fiddle superblock
89 information so as to assemble a faulty array.
93 Build an array that doesn't have per-device metadata (superblocks). For these
96 cannot differentiate between initial creation and subsequent assembly
97 of an array. It also cannot perform any checks that appropriate
98 components have been requested. Because of this, the
100 mode should only be used together with a complete understanding of
105 Create a new array with per-device metadata (superblocks).
106 Appropriate metadata is written to each device, and then the array
107 comprising those devices is activated. A 'resync' process is started
108 to make sure that the array is consistent (e.g. both sides of a mirror
109 contain the same data) but the content of the device is left otherwise
111 The array can be used as soon as it has been created. There is no
112 need to wait for the initial resync to finish.
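For example (device names here are purely illustrative), a three\-device
RAID5 array might be created with a command like
.B " mdadm \-\-create /dev/md0 \-\-level=5 \-\-raid\-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1"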
115 .B "Follow or Monitor"
116 Monitor one or more md devices and act on any state changes. This is
117 only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as
118 only these have interesting state. RAID0 or Linear never have
119 missing, spare, or failed drives, so there is nothing to monitor.
123 Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
125 of component devices and changing the number of active devices in
126 Linear and RAID levels 0/1/4/5/6,
127 changing the RAID level between 0, 1, 5, and 6, and between 0 and 10,
128 changing the chunk size and layout for RAID 0,4,5,6,10 as well as adding or
129 removing a write-intent bitmap and changing the array's consistency policy.
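For example, after a new device has been added as a spare, a 3\-device
RAID5 array might be grown to use it with a command like (names illustrative)
.B " mdadm \-\-grow /dev/md0 \-\-raid\-devices=4"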
132 .B "Incremental Assembly"
133 Add a single device to an appropriate array. If the addition of the
134 device makes the array runnable, the array will be started.
135 This provides a convenient interface to a
137 system. As each device is detected,
139 has a chance to include it in some array as appropriate.
142 flag is passed in we will remove the device from any active array
143 instead of adding it.
If a CONTAINER is passed to mdadm in this mode, then any arrays within
that container will be assembled and started.
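For example, a udev rule might invoke a command like the following as
each device appears (device name illustrative):
.B " mdadm \-\-incremental /dev/sdc1"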
154 This is for doing things to specific components of an array such as
155 adding new spares and removing faulty devices.
159 This is an 'everything else' mode that supports operations on active
160 arrays, operations on component devices such as erasing old superblocks, and
161 information gathering operations.
162 .\"This mode allows operations on independent devices such as examine MD
163 .\"superblocks, erasing old superblocks and stopping active arrays.
167 This mode does not act on a specific device or array, but rather it
168 requests the Linux Kernel to activate any auto-detected arrays.
171 .SH Options for selecting a mode are:
174 .BR \-A ", " \-\-assemble
175 Assemble a pre-existing array.
178 .BR \-B ", " \-\-build
179 Build a legacy array without superblocks.
182 .BR \-C ", " \-\-create
186 .BR \-F ", " \-\-follow ", " \-\-monitor
192 .BR \-G ", " \-\-grow
193 Change the size or shape of an active array.
196 .BR \-I ", " \-\-incremental
197 Add/remove a single device to/from an appropriate array, and possibly start the array.
201 Request that the kernel starts any auto-detected arrays. This can only
204 is compiled into the kernel \(em not if it is a module.
205 Arrays can be auto-detected by the kernel if all the components are in
206 primary MS-DOS partitions with partition type
208 and all use v0.90 metadata.
209 In-kernel autodetect is not recommended for new installations. Using
211 to detect and assemble arrays \(em possibly in an
213 \(em is substantially more flexible and should be preferred.
216 If a device is given before any options, or if the first option is
225 then the MANAGE mode is assumed.
226 Anything other than these will cause the
230 .SH Options that are not mode-specific are:
233 .BR \-h ", " \-\-help
234 Display general help message or, after one of the above options, a
235 mode-specific help message.
Display more detailed help about command line parsing and some commonly
used options.
243 .BR \-V ", " \-\-version
244 Print version information for mdadm.
247 .BR \-v ", " \-\-verbose
248 Be more verbose about what is happening. This can be used twice to be
250 The extra verbosity currently only affects
251 .B \-\-detail \-\-scan
253 .BR "\-\-examine \-\-scan" .
256 .BR \-q ", " \-\-quiet
257 Avoid printing purely informative messages. With this,
259 will be silent unless there is something really important to report.
263 .BR \-f ", " \-\-force
264 Be more forceful about certain operations. See the various modes for
265 the exact meaning of this option in different contexts.
268 .BR \-c ", " \-\-config=
269 Specify the config file or directory. Default is to use
272 .BR /etc/mdadm.conf.d ,
273 or if those are missing then
274 .B /etc/mdadm/mdadm.conf
276 .BR /etc/mdadm/mdadm.conf.d .
277 If the config file given is
279 then nothing will be read, but
281 will act as though the config file contained exactly
283 .B " DEVICE partitions containers"
287 to find a list of devices to scan, and
289 to find a list of containers to examine.
292 is given for the config file, then
294 will act as though the config file were empty.
296 If the name given is of a directory, then
will collect all the files contained in the directory with a name ending in '.conf',
301 sort them lexically, and process all of those files as config files.
304 .BR \-s ", " \-\-scan
307 for missing information.
308 In general, this option gives
310 permission to get any missing information (like component devices,
311 array devices, array identities, and alert destination) from the
312 configuration file (see previous option);
313 one exception is MISC mode when using
319 says to get a list of array devices from
323 .BR \-e ", " \-\-metadata=
324 Declare the style of RAID metadata (superblock) to be used. The
325 default is {DEFAULT_METADATA} for
327 and to guess for other operations.
328 The default can be overridden by setting the
337 .ie '{DEFAULT_METADATA}'0.90'
338 .IP "0, 0.90, default"
341 Use the original 0.90 format superblock. This format limits arrays to
342 28 component devices and limits component devices of levels 1 and
343 greater to 2 terabytes. It is also possible for there to be confusion
344 about whether the superblock applies to a whole device or just the
345 last partition, if that partition starts on a 64K boundary.
346 .ie '{DEFAULT_METADATA}'0.90'
347 .IP "1, 1.0, 1.1, 1.2"
349 .IP "1, 1.0, 1.1, 1.2 default"
350 Use the new version-1 format superblock. This has fewer restrictions.
351 It can easily be moved between hosts with different endian-ness, and a
352 recovery operation can be checkpointed and restarted. The different
353 sub-versions store the superblock at different locations on the
354 device, either at the end (for 1.0), at the start (for 1.1) or 4K from
355 the start (for 1.2). "1" is equivalent to "1.2" (the commonly
356 preferred 1.x format).
357 'if '{DEFAULT_METADATA}'1.2' "default" is equivalent to "1.2".
359 Use the "Industry Standard" DDF (Disk Data Format) format defined by
361 When creating a DDF array a
363 will be created, and normal arrays can be created in that container.
365 Use the Intel(R) Matrix Storage Manager metadata format. This creates a
367 which is managed in a similar manner to DDF, and is supported by an
368 option-rom on some platforms:
370 .B http://www.intel.com/design/chipsets/matrixstorage_sb.htm
376 This will override any
378 setting in the config file and provides the identity of the host which
379 should be considered the home for any arrays.
381 When creating an array, the
383 will be recorded in the metadata. For version-1 superblocks, it will
384 be prefixed to the array name. For version-0.90 superblocks, part of
the SHA1 hash of the hostname will be stored in the latter half of the
388 When reporting information about an array, any array which is tagged
389 for the given homehost will be reported as such.
391 When using Auto-Assemble, only arrays tagged for the given homehost
392 will be allowed to use 'local' names (i.e. not ending in '_' followed
393 by a digit string). See below under
394 .BR "Auto Assembly" .
396 The special name "\fBany\fP" can be used as a wild card. If an array
399 then the name "\fBany\fP" will be stored in the array and it can be
400 assembled in the same way on any host. If an array is assembled with
401 this option, then the homehost recorded on the array will be ignored.
407 needs to print the name for a device it normally finds the name in
409 which refers to the device and is shortest. When a path component is
413 will prefer a longer name if it contains that component. For example
414 .B \-\-prefer=by-uuid
415 will prefer a name in a subdirectory of
420 This functionality is currently only provided by
426 .B \-\-home\-cluster=
427 specifies the cluster name for the md device. The md device can be assembled
428 only on the cluster which matches the name specified. If this option is not
429 provided, mdadm tries to detect the cluster name automatically.
431 .SH For create, build, or grow:
434 .BR \-n ", " \-\-raid\-devices=
435 Specify the number of active devices in the array. This, plus the
436 number of spare devices (see below) must equal the number of
438 (including "\fBmissing\fP" devices)
439 that are listed on the command line for
441 Setting a value of 1 is probably
442 a mistake and so requires that
444 be specified first. A value of 1 will then be allowed for linear,
445 multipath, RAID0 and RAID1. It is never allowed for RAID4, RAID5 or RAID6.
447 This number can only be changed using
449 for RAID1, RAID4, RAID5 and RAID6 arrays, and only on kernels which provide
450 the necessary support.
453 .BR \-x ", " \-\-spare\-devices=
454 Specify the number of spare (eXtra) devices in the initial array.
455 Spares can also be added
456 and removed later. The number of component devices listed
457 on the command line must equal the number of RAID devices plus the
458 number of spare devices.
461 .BR \-z ", " \-\-size=
462 Amount (in Kilobytes) of space to use from each drive in RAID levels 1/4/5/6.
This must be a multiple of the chunk size, and must leave about 128KB
464 of space at the end of the drive for the RAID superblock.
465 If this is not specified
466 (as it normally is not) the smallest drive (or partition) sets the
size, though if there is a variance among the drives of greater than 1%, a warning is issued.
470 A suffix of 'K', 'M' or 'G' can be given to indicate Kilobytes, Megabytes or
471 Gigabytes respectively.
473 Sometimes a replacement drive can be a little smaller than the
474 original drives though this should be minimised by IDEMA standards.
475 Such a replacement drive will be rejected by
477 To guard against this it can be useful to set the initial size
slightly smaller than the smallest device with the aim that it will
479 still be larger than any replacement.
481 This value can be set with
483 for RAID level 1/4/5/6 though
485 based arrays such as those with IMSM metadata may not be able to
487 If the array was created with a size smaller than the currently
488 active drives, the extra space can be accessed using
490 The size can be given as
492 which means to choose the largest size that fits on all current drives.
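For example, after every member has been replaced with a larger device,
the array might be grown to the new maximum with a command like
(array name illustrative)
.B " mdadm \-\-grow /dev/md0 \-\-size=max"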
494 Before reducing the size of the array (with
495 .BR "\-\-grow \-\-size=" )
496 you should make sure that space isn't needed. If the device holds a
497 filesystem, you would need to resize the filesystem to use less space.
499 After reducing the array size you should check that the data stored in
500 the device is still available. If the device holds a filesystem, then
501 an 'fsck' of the filesystem is a minimum requirement. If there are
502 problems the array can be made bigger again with no loss with another
503 .B "\-\-grow \-\-size="
506 This value cannot be used when creating a
such as with DDF and IMSM metadata, though it is perfectly valid when
509 creating an array inside a container.
512 .BR \-Z ", " \-\-array\-size=
513 This is only meaningful with
515 and its effect is not persistent: when the array is stopped and
516 restarted the default array size will be restored.
518 Setting the array-size causes the array to appear smaller to programs
519 that access the data. This is particularly needed before reshaping an
520 array so that it will be smaller. As the reshape is not reversible,
521 but setting the size with
523 is, it is required that the array size is reduced as appropriate
524 before the number of devices in the array is reduced.
526 Before reducing the size of the array you should make sure that space
527 isn't needed. If the device holds a filesystem, you would need to
528 resize the filesystem to use less space.
530 After reducing the array size you should check that the data stored in
531 the device is still available. If the device holds a filesystem, then
532 an 'fsck' of the filesystem is a minimum requirement. If there are
533 problems the array can be made bigger again with no loss with another
534 .B "\-\-grow \-\-array\-size="
537 A suffix of 'K', 'M' or 'G' can be given to indicate Kilobytes, Megabytes or
538 Gigabytes respectively.
541 restores the apparent size of the array to be whatever the real
542 amount of available space is.
544 Clustered arrays do not support this parameter yet.
547 .BR \-c ", " \-\-chunk=
Specify chunk size in kilobytes. The default when creating an
549 array is 512KB. To ensure compatibility with earlier versions, the
550 default when building an array with no persistent metadata is 64KB.
551 This is only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10.
553 RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power
554 of 2. In any case it must be a multiple of 4KB.
556 A suffix of 'K', 'M' or 'G' can be given to indicate Kilobytes, Megabytes or
557 Gigabytes respectively.
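For example, a RAID0 array with a 128KB chunk size might be created
with a command like (device names illustrative)
.B " mdadm \-\-create /dev/md0 \-\-level=0 \-\-chunk=128 \-\-raid\-devices=2 /dev/sda1 /dev/sdb1"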
561 Specify rounding factor for a Linear array. The size of each
562 component will be rounded down to a multiple of this size.
563 This is a synonym for
565 but highlights the different meaning for Linear as compared to other
566 RAID levels. The default is 64K if a kernel earlier than 2.6.16 is in
567 use, and is 0K (i.e. no rounding) in later kernels.
570 .BR \-l ", " \-\-level=
571 Set RAID level. When used with
573 options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
574 raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container.
575 Obviously some of these are synonymous.
579 metadata type is requested, only the
581 level is permitted, and it does not need to be explicitly given.
585 only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.
589 to change the RAID level in some cases. See LEVEL CHANGES below.
592 .BR \-p ", " \-\-layout=
593 This option configures the fine details of data layout for RAID5, RAID6,
594 and RAID10 arrays, and controls the failure modes for
597 The layout of the RAID5 parity block can be one of
598 .BR left\-asymmetric ,
599 .BR left\-symmetric ,
600 .BR right\-asymmetric ,
601 .BR right\-symmetric ,
602 .BR la ", " ra ", " ls ", " rs .
604 .BR left\-symmetric .
606 It is also possible to cause RAID5 to use a RAID4-like layout by
612 Finally for RAID5 there are DDF\-compatible layouts,
613 .BR ddf\-zero\-restart ,
614 .BR ddf\-N\-restart ,
616 .BR ddf\-N\-continue .
618 These same layouts are available for RAID6. There are also 4 layouts
619 that will provide an intermediate stage for converting between RAID5
620 and RAID6. These provide a layout which is identical to the
621 corresponding RAID5 layout on the first N\-1 devices, and has the 'Q'
622 syndrome (the second 'parity' block used by RAID6) on the last device.
624 .BR left\-symmetric\-6 ,
625 .BR right\-symmetric\-6 ,
626 .BR left\-asymmetric\-6 ,
627 .BR right\-asymmetric\-6 ,
629 .BR parity\-first\-6 .
631 When setting the failure mode for level
634 .BR write\-transient ", " wt ,
635 .BR read\-transient ", " rt ,
636 .BR write\-persistent ", " wp ,
637 .BR read\-persistent ", " rp ,
639 .BR read\-fixable ", " rf ,
640 .BR clear ", " flush ", " none .
642 Each failure mode can be followed by a number, which is used as a period
643 between fault generation. Without a number, the fault is generated
644 once on the first relevant request. With a number, the fault will be
645 generated after that many requests, and will continue to be generated
646 every time the period elapses.
Multiple failure modes can be active simultaneously by using the
650 option to set subsequent failure modes.
652 "clear" or "none" will remove any pending or periodic failure modes,
653 and "flush" will clear any persistent faults.
655 Finally, the layout options for RAID10 are one of 'n', 'o' or 'f' followed
656 by a small number. The default is 'n2'. The supported options are:
659 signals 'near' copies. Multiple copies of one data block are at
660 similar offsets in different devices.
663 signals 'offset' copies. Rather than the chunks being duplicated
664 within a stripe, whole stripes are duplicated but are rotated by one
665 device so duplicate blocks are on different devices. Thus subsequent
666 copies of a block are in the next drive, and are one chunk further
671 (multiple copies have very different offsets).
672 See md(4) for more detail about 'near', 'offset', and 'far'.
The number is the number of copies of each data block. 2 is normal, 3
675 can be useful. This number can be at most equal to the number of
676 devices in the array. It does not need to divide evenly into that
677 number (e.g. it is perfectly legal to have an 'n2' layout for an array
678 with an odd number of devices).
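For example, a four\-device RAID10 array keeping two 'far' copies of
each block might be created with a command like (names illustrative)
.B " mdadm \-\-create /dev/md0 \-\-level=10 \-\-layout=f2 \-\-raid\-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1"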
680 When an array is converted between RAID5 and RAID6 an intermediate
681 RAID6 layout is used in which the second parity block (Q) is always on
682 the last device. To convert a RAID5 to RAID6 and leave it in this new
683 layout (which does not require re-striping) use
684 .BR \-\-layout=preserve .
685 This will try to avoid any restriping.
687 The converse of this is
688 .B \-\-layout=normalise
689 which will change a non-standard RAID6 layout into a more standard
696 (thus explaining the p of
700 .BR \-b ", " \-\-bitmap=
701 Specify a file to store a write-intent bitmap in. The file should not
704 is also given. The same file should be provided
705 when assembling the array. If the word
707 is given, then the bitmap is stored with the metadata on the array,
708 and so is replicated on all devices. If the word
712 mode, then any bitmap that is present is removed. If the word
714 is given, the array is created for a clustered environment. One bitmap
715 is created for each node as defined by the
parameter, and all are stored internally.
719 To help catch typing errors, the filename must contain at least one
720 slash ('/') if it is a real file (not 'internal' or 'none').
722 Note: external bitmaps are only known to work on ext2 and ext3.
723 Storing bitmap files on other filesystems may result in serious problems.
725 When creating an array on devices which are 100G or larger,
727 automatically adds an internal bitmap as it will usually be
728 beneficial. This can be suppressed with
730 or by selecting a different consistency policy with
731 .BR \-\-consistency\-policy .
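For example, an internal bitmap might be added to an existing array
with a command like (array name illustrative)
.B " mdadm \-\-grow /dev/md0 \-\-bitmap=internal"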
734 .BR \-\-bitmap\-chunk=
735 Set the chunksize of the bitmap. Each bit corresponds to that many
736 Kilobytes of storage.
737 When using a file based bitmap, the default is to use the smallest
size that is at least 4 and requires no more than 2^21 chunks.
741 bitmap, the chunksize defaults to 64Meg, or larger if necessary to
742 fit the bitmap into the available space.
744 A suffix of 'K', 'M' or 'G' can be given to indicate Kilobytes, Megabytes or
745 Gigabytes respectively.
748 .BR \-W ", " \-\-write\-mostly
749 subsequent devices listed in a
754 command will be flagged as 'write\-mostly'. This is valid for RAID1
755 only and means that the 'md' driver will avoid reading from these
devices if at all possible. This can be useful if mirroring over a slow link.
760 .BR \-\-write\-behind=
761 Specify that write-behind mode should be enabled (valid for RAID1
762 only). If an argument is specified, it will set the maximum number
763 of outstanding writes allowed. The default value is 256.
764 A write-intent bitmap is required in order to use write-behind
mode, and write-behind is only attempted on drives marked as write\-mostly.
770 subsequent devices listed in a
774 command will be flagged as 'failfast'. This is valid for RAID1 and
775 RAID10 only. IO requests to these devices will be encouraged to fail
776 quickly rather than cause long delays due to error handling. Also no
777 attempt is made to repair a read error on these devices.
779 If an array becomes degraded so that the 'failfast' device is the only
780 usable device, the 'failfast' flag will then be ignored and extended
781 delays will be preferred to complete failure.
783 The 'failfast' flag is appropriate for storage arrays which have a
784 low probability of true failure, but which may sometimes
785 cause unacceptable delays due to internal maintenance functions.
788 .BR \-\-assume\-clean
791 that the array pre-existed and is known to be clean. It can be useful
792 when trying to recover from a major failure as you can be sure that no
793 data will be affected unless you actually write to the array. It can
794 also be used when creating a RAID1 or RAID10 if you want to avoid the
795 initial resync, however this practice \(em while normally safe \(em is not
796 recommended. Use this only if you really know what you are doing.
If the devices that will be part of a new array were filled
with zeros before creation, the operator knows the array is
800 actually clean. If that is the case, such as after running
801 badblocks, this argument can be used to tell mdadm the
802 facts the operator knows.
804 When an array is resized to a larger size with
805 .B "\-\-grow \-\-size="
the new space is normally resynced in the same way that the whole
807 array is resynced at creation. From Linux version 3.0,
809 can be used with that command to avoid the automatic resync.
812 .BR \-\-backup\-file=
815 is used to increase the number of raid-devices in a RAID5 or RAID6 if
816 there are no spare devices available, or to shrink, change RAID level
817 or layout. See the GROW MODE section below on RAID\-DEVICES CHANGES.
818 The file must be stored on a separate device, not on the RAID array
823 Arrays with 1.x metadata can leave a gap between the start of the
824 device and the start of array data. This gap can be used for various
825 metadata. The start of data is known as the
827 Normally an appropriate data offset is computed automatically.
828 However it can be useful to set it explicitly such as when re-creating
829 an array which was originally created using a different version of
831 which computed a different offset.
Setting the offset explicitly overrides the default. The value given
834 is in Kilobytes unless a suffix of 'K', 'M' or 'G' is used to explicitly
835 indicate Kilobytes, Megabytes or Gigabytes respectively.
839 can also be used with
841 for some RAID levels (initially on RAID10). This allows the
842 data\-offset to be changed as part of the reshape process. When the
843 data offset is changed, no backup file is required as the difference
844 in offsets is used to provide the same functionality.
846 When the new offset is earlier than the old offset, the number of
847 devices in the array cannot shrink. When it is after the old offset,
848 the number of devices in the array cannot increase.
850 When creating an array,
In this case each member device is expected to have an offset appended
855 to the name, separated by a colon. This makes it possible to recreate
856 exactly an array which has varying data offsets (as can happen when
857 different versions of
859 are used to add different devices).
863 This option is complementary to the
864 .B \-\-freeze-reshape
865 option for assembly. It is needed when
867 operation is interrupted and it is not restarted automatically due to
868 .B \-\-freeze-reshape
869 usage during array assembly. This option is used together with
873 ) command and device for a pending reshape to be continued.
874 All parameters required for reshape continuation will be read from array metadata.
878 .BR \-\-backup\-file=
option to be set, the continuation option will require exactly the same
backup file to be given as well.
882 Any other parameter passed together with
884 option will be ignored.
887 .BR \-N ", " \-\-name=
890 for the array. This is currently only effective when creating an
891 array with a version-1 superblock, or an array in a DDF container.
892 The name is a simple textual string that can be used to identify array
components when assembling. If a name is needed but not specified, it
894 is taken from the basename of the device that is being created.
906 run the array, even if some of the components
907 appear to be active in another array or filesystem. Normally
909 will ask for confirmation before including such components in an
910 array. This option causes that question to be suppressed.
913 .BR \-f ", " \-\-force
916 accept the geometry and layout specified without question. Normally
918 will not allow creation of an array with only one device, and will try
919 to create a RAID5 array with one missing drive (as this makes the
920 initial resync work faster). With
923 will not try to be so clever.
926 .BR \-o ", " \-\-readonly
929 rather than read-write as normal. No writes will be allowed to the
930 array, and no resync, recovery, or reshape will be started. It works with
931 Create, Assemble, Manage and Misc mode.
934 .BR \-a ", " "\-\-auto{=yes,md,mdp,part,p}{NN}"
935 Instruct mdadm how to create the device file if needed, possibly allocating
936 an unused minor number. "md" causes a non-partitionable array
937 to be used (though since Linux 2.6.28, these array devices are in fact
938 partitionable). "mdp", "part" or "p" causes a partitionable array (2.6 and
939 later) to be used. "yes" requires the named md device to have
940 a 'standard' format, and the type and minor number will be determined
941 from this. With mdadm 3.0, device creation is normally left up to
943 so this option is unlikely to be needed.
944 See DEVICE NAMES below.
946 The argument can also come immediately after
951 is not given on the command line or in the config file, then
957 is also given, then any
959 entries in the config file will override the
961 instruction given on the command line.
963 For partitionable arrays,
965 will create the device file for the whole array and for the first 4
966 partitions. A different number of partitions can be specified at the
967 end of this option (e.g.
969 If the device name ends with a digit, the partition names add a 'p',
971 .IR /dev/md/home1p3 .
972 If there is no trailing digit, then the partition names just have a
974 .IR /dev/md/scratch3 .
976 If the md device name is in a 'standard' format as described in DEVICE
977 NAMES, then it will be created, if necessary, with the appropriate
978 device number based on that name. If the device name is not in one of these
formats, then an unused device number will be allocated. The device
980 number will be considered unused if there is no active array for that
981 number, and there is no entry in /dev for that number and with a
982 non-standard name. Names that are not in 'standard' format are only
983 allowed in "/dev/md/".
985 This is meaningful with
991 .BR \-a ", " "\-\-add"
992 This option can be used in Grow mode in two cases.
994 If the target array is a Linear array, then
996 can be used to add one or more devices to the array. They
are simply concatenated onto the end of the array. Once added, the
998 devices cannot be removed.
1002 option is being used to increase the number of devices in an array,
1005 can be used to add some extra devices to be included in the array.
1006 In most cases this is not needed as the extra devices can be added as
1007 spares first, and then the number of raid-disks can be changed.
1008 However for RAID0, it is not possible to add spares. So to increase
1009 the number of devices in a RAID0, it is necessary to set the new
1010 number of devices, and to add the new devices, in the same command.
1014 Only works when the array is for clustered environment. It specifies
1015 the maximum number of nodes in the cluster that will use this device
1016 simultaneously. If not specified, this defaults to 4.
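For example, a clustered RAID1 array for two nodes might be created
with a command like (device names illustrative)
.B " mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 \-\-bitmap=clustered \-\-nodes=2 /dev/sda1 /dev/sdb1"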
1019 .BR \-\-write-journal
1020 Specify journal device for the RAID-4/5/6 array. The journal device
should be an SSD with a reasonable lifetime.
Control the automatic creation of symlinks in /dev pointing to /dev/md
devices. The value must be 'no' or 'yes', and the option works with
\-\-create and \-\-build.
1029 .BR \-k ", " \-\-consistency\-policy=
1030 Specify how the array maintains consistency in case of unexpected shutdown.
1031 Only relevant for RAID levels with redundancy.
1032 Currently supported options are:
1037 Full resync is performed and all redundancy is regenerated when the array is
1038 started after unclean shutdown.
1042 Resync assisted by a write-intent bitmap. Implicitly selected when using
For RAID levels 4/5/6, a journal device is used to log transactions and replay
1048 after unclean shutdown. Implicitly selected when using
1049 .BR \-\-write\-journal .
1053 For RAID5 only, Partial Parity Log is used to close the write hole and
1054 eliminate resync. PPL is stored in the metadata region of RAID member drives,
1055 no additional journal drive is needed.
1058 Can be used with \-\-grow to change the consistency policy of an active array
1059 in some cases. See CONSISTENCY POLICY CHANGES below.
1066 .BR \-u ", " \-\-uuid=
UUID of the array to assemble. Devices which don't have this UUID are
1071 .BR \-m ", " \-\-super\-minor=
1072 Minor number of device that array was created for. Devices which
1073 don't have this minor number are excluded. If you create an array as
1074 /dev/md1, then all superblocks will contain the minor number 1, even if
1075 the array is later assembled as /dev/md2.
1077 Giving the literal word "dev" for
1081 to use the minor number of the md device that is being assembled.
1082 e.g. when assembling
1084 .B \-\-super\-minor=dev
will look for superblocks with a minor number of 0.
1088 is only relevant for v0.90 metadata, and should not normally be used.
1094 .BR \-N ", " \-\-name=
1095 Specify the name of the array to assemble. This must be the name
1096 that was specified when creating the array. It must either match
1097 the name stored in the superblock exactly, or it must match
1100 prefixed to the start of the given name.
1103 .BR \-f ", " \-\-force
1104 Assemble the array even if the metadata on some devices appears to be
1107 cannot find enough working devices to start the array, but can find
1108 some devices that are recorded as having failed, then it will mark
1109 those devices as working so that the array can be started.
1110 An array which requires
1112 to be started may contain data corruption. Use it carefully.
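For example, an array with stale metadata on one member might be
forcibly assembled with a command like (names illustrative)
.B " mdadm \-\-assemble \-\-force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1"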
1115 .BR \-R ", " \-\-run
1116 Attempt to start the array even if fewer drives were given than were
1117 present last time the array was active. Normally if not all the
1118 expected drives are found and
1120 is not used, then the array will be assembled but not started.
1123 an attempt will be made to start it anyway.
1127 This is the reverse of
in that it inhibits the startup of the array unless all expected drives
1130 are present. This is only needed with
1132 and can be used if the physical connections to devices are
1133 not as reliable as you would like.
1136 .BR \-a ", " "\-\-auto{=no,yes,md,mdp,part}"
1137 See this option under Create and Build options.
1140 .BR \-b ", " \-\-bitmap=
1141 Specify the bitmap file that was given when the array was created. If
1144 bitmap, there is no need to specify this when assembling the array.
1147 .BR \-\-backup\-file=
1150 was used while reshaping an array (e.g. changing number of devices or
1151 chunk size) and the system crashed during the critical section, then the same
1153 must be presented to
1155 to allow possibly corrupted data to be restored, and the reshape
1159 .BR \-\-invalid\-backup
1160 If the file needed for the above option is not available for any
1161 reason an empty file can be given together with this option to
1162 indicate that the backup file is invalid. In this case the data that
1163 was being rearranged at the time of the crash could be irrecoverably
1164 lost, but the rest of the array may still be recoverable. This option
1165 should only be used as a last resort if there is no way to recover the
1170 .BR \-U ", " \-\-update=
1171 Update the superblock on each device while assembling the array. The
1172 argument given to this flag can be one of
option will adjust the superblock of an array that was created on a Sparc
1195 machine running a patched 2.2 Linux kernel. This kernel got the
1196 alignment of part of the superblock wrong. You can use the
1197 .B "\-\-examine \-\-sparc2.2"
1200 to see what effect this would have.
1204 option will update the
1205 .B "preferred minor"
1206 field on each superblock to match the minor number of the array being
1208 This can be useful if
1210 reports a different "Preferred Minor" to
1212 In some cases this update will be performed automatically
1213 by the kernel driver. In particular the update happens automatically
1214 at the first write to an array with redundancy (RAID level 1 or
1215 greater) on a 2.6 (or later) kernel.
1219 option will change the uuid of the array. If a UUID is given with the
1221 option that UUID will be used as a new UUID and will
1223 be used to help identify the devices in the array.
1226 is given, a random UUID is chosen.
1230 option will change the
1232 of the array as stored in the superblock. This is only supported for
1233 version-1 superblocks.
1237 option will change the
1239 of the array as stored in the bitmap superblock. This option only
1240 works for a clustered environment.
1244 option will change the
1246 as recorded in the superblock. For version-0 superblocks, this is the
1247 same as updating the UUID.
1248 For version-1 superblocks, this involves updating the name.
1252 option will change the cluster name as recorded in the superblock and
bitmap. This option only works for a clustered environment.
1257 option will cause the array to be marked
1259 meaning that any redundancy in the array (e.g. parity for RAID5,
1260 copies for RAID1) may be incorrect. This will cause the RAID system
1261 to perform a "resync" pass to make sure that all redundant information
1266 option allows arrays to be moved between machines with different
1267 byte-order, such as from a big-endian machine like a Sparc or some
1268 MIPS machines, to a little-endian x86_64 machine.
1269 When assembling such an array for the first time after a move, giving
1270 .B "\-\-update=byteorder"
1273 to expect superblocks to have their byteorder reversed, and will
1274 correct that order before assembling the array. This is only valid
1275 with original (Version 0.90) superblocks.
1279 option will correct the summaries in the superblock. That is the
1280 counts of total, working, active, failed, and spare devices.
1284 option will rarely be of use. It applies to version 1.1 and 1.2 metadata
1285 only (where the metadata is at the start of the device) and is only
1286 useful when the component device has changed size (typically become
1287 larger). The version 1 metadata records the amount of the device that
1288 can be used to store data, so if a device in a version 1.1 or 1.2
1289 array becomes larger, the metadata will still be visible, but the
1290 extra space will not. In this case it might be useful to assemble the
1292 .BR \-\-update=devicesize .
1295 to determine the maximum usable amount of space on each device and
1296 update the relevant field in the metadata.
1300 option only works on v0.90 metadata arrays and will convert them to
1301 v1.0 metadata. The array must not be dirty (i.e. it must not need a
1302 sync) and it must not have a write-intent bitmap.
1304 The old metadata will remain on the devices, but will appear older
1305 than the new metadata and so will usually be ignored. The old metadata
1306 (or indeed the new metadata) can be removed by giving the appropriate
1309 .BR \-\-zero\-superblock .
1313 option can be used when an array has an internal bitmap which is
1314 corrupt in some way so that assembling the array normally fails. It
1315 will cause any internal bitmap to be ignored.
1319 option will reserve space in each device for a bad block list. This
1320 will be 4K in size and positioned near the end of any free space
1321 between the superblock and the data.
1325 option will cause any reservation of space for a bad block list to be
1326 removed. If the bad block list contains entries, this will fail, as
1327 removing the list could cause data corruption.
1331 option will enable PPL for a RAID5 array and reserve space for PPL on each
1332 device. There must be enough free space between the data and superblock and a
1333 write-intent bitmap or journal must not be used.
1337 option will disable PPL in the superblock.
1340 .BR \-\-freeze\-reshape
This option is intended to be used in start-up scripts during the initrd boot phase.
When an array under reshape is assembled during the initrd phase, this option
stops the reshape after the reshape critical section has been restored. This happens
before the file system pivot operation and avoids loss of file system context.
Losing the file system context would cause the reshape to be broken.
1347 Reshape can be continued later using the
1349 option for the grow command.
1353 See this option under Create and Build options.
1355 .SH For Manage mode:
1358 .BR \-t ", " \-\-test
1359 Unless a more serious error occurred,
1361 will exit with a status of 2 if no changes were made to the array and
1362 0 if at least one change was made.
1363 This can be useful when an indirect specifier such as
1368 is used in requesting an operation on the array.
1370 will report failure if these specifiers didn't find any match.
1373 .BR \-a ", " \-\-add
1374 hot-add listed devices.
1375 If a device appears to have recently been part of the array
1376 (possibly it failed or was removed) the device is re\-added as described
1378 If that fails or the device was never part of the array, the device is
1379 added as a hot-spare.
1380 If the array is degraded, it will immediately start to rebuild data
Note that this and the following options are only meaningful on arrays
with redundancy. They don't apply to RAID0 or Linear.
1388 re\-add a device that was previously removed from an array.
1389 If the metadata on the device reports that it is a member of the
1390 array, and the slot that it used is still vacant, then the device will
1391 be added back to the array in the same position. This will normally
1392 cause the data for that device to be recovered. However based on the
1393 event count on the device, the recovery may only require sections that
are flagged in a write-intent bitmap to be recovered, or may not require
1395 any recovery at all.
1397 When used on an array that has no metadata (i.e. it was built with
1399 it will be assumed that bitmap-based recovery is enough to make the
1400 device fully consistent with the array.
1402 When used with v1.x metadata,
1404 can be accompanied by
1405 .BR \-\-update=devicesize ,
1406 .BR \-\-update=bbl ", or"
1407 .BR \-\-update=no\-bbl .
See the description of these options when used in Assemble mode for an
1409 explanation of their use.
1411 If the device name given is
1415 will try to find any device that looks like it should be
1416 part of the array but isn't and will try to re\-add all such devices.
1418 If the device name given is
1422 will find all devices in the array that are marked
1424 remove them and attempt to immediately re\-add them. This can be
1425 useful if you are certain that the reason for failure has been
1430 Add a device as a spare. This is similar to
1432 except that it does not attempt
1434 first. The device will be added as a spare even if it looks like it
could be a recent member of the array.
1438 .BR \-r ", " \-\-remove
remove listed devices. They must not be active, i.e. they should
1440 be failed or spare devices.
1442 As well as the name of a device file
The first causes all failed devices to be removed. The second causes
any device which is no longer connected to the system (i.e. an 'open'
The third will remove a set as described below under
1461 .BR \-f ", " \-\-fail
1462 Mark listed devices as faulty.
1463 As well as the name of a device file, the word
1467 can be given. The former will cause any device that has been detached from
1468 the system to be marked as failed. It can then be removed.
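For example, all detached devices might be failed and then removed in
one command (array name illustrative):
.B " mdadm /dev/md0 \-\-fail detached \-\-remove detached"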
1470 For RAID10 arrays where the number of copies evenly divides the number
1471 of devices, the devices can be conceptually divided into sets where
1472 each set contains a single complete copy of the data on the array.
1473 Sometimes a RAID10 array will be configured so that these sets are on
1474 separate controllers. In this case all the devices in one set can be
1475 failed by giving a name like
1481 The appropriate set names are reported by
1491 Mark listed devices as requiring replacement. As soon as a spare is
1492 available, it will be rebuilt and will replace the marked device.
1493 This is similar to marking a device as faulty, but the device remains
1494 in service during the recovery process to increase resilience against
1495 multiple failures. When the replacement process finishes, the
1496 replaced device will be marked as faulty.
1500 This can follow a list of
1502 devices. The devices listed after
1504 will be preferentially used to replace the devices listed after
These devices must already be spare devices in the array.
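For example, a suspect device might be replaced by a specific spare
with a command like (device names illustrative)
.B " mdadm /dev/md0 \-\-replace /dev/sdb1 \-\-with /dev/sdc1"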
1509 .BR \-\-write\-mostly
1510 Subsequent devices that are added or re\-added will have the 'write-mostly'
1511 flag set. This is only valid for RAID1 and means that the 'md' driver
1512 will avoid reading from these devices if possible.
Subsequent devices that are added or re\-added will have the 'write-mostly'
flag cleared.
1518 .BR \-\-cluster\-confirm
1519 Confirm the existence of the device. This is issued in response to an \-\-add
1520 request by a node in a cluster. When a node adds a device it sends a message
1521 to all nodes in the cluster to look for a device with a UUID. This translates
1522 to a udev notification with the UUID of the device to be added and the slot
1523 number. The receiving node must acknowledge this message
1524 with \-\-cluster\-confirm. Valid arguments are <slot>:<devicename> in case
1525 the device is found or <slot>:missing in case the device is not found.
1529 Recreate journal for RAID-4/5/6 array that lost a journal device. In the
1530 current implementation, this command cannot add a journal to an array
that had a failed journal. To avoid interrupting ongoing write operations,
only works for arrays in Read-Only state.
1537 Subsequent devices that are added or re\-added will have
1538 the 'failfast' flag set. This is only valid for RAID1 and RAID10 and
1539 means that the 'md' driver will avoid long timeouts on error handling
1543 Subsequent devices that are re\-added will be re\-added without
1544 the 'failfast' flag set.
1547 Each of these options requires that the first device listed is the array
1548 to be acted upon, and the remainder are component devices to be added,
1549 removed, marked as faulty, etc. Several different operations can be
1550 specified for different devices, e.g.
1552 mdadm /dev/md0 \-\-add /dev/sda1 \-\-fail /dev/sdb1 \-\-remove /dev/sdb1
1554 Each operation applies to all devices listed until the next
1557 If an array is using a write-intent bitmap, then devices which have
1558 been removed can be re\-added in a way that avoids a full
1559 reconstruction but instead just updates the blocks that have changed
1560 since the device was removed. For arrays with persistent metadata
1561 (superblocks) this is done automatically. For arrays created with
mdadm needs to be told that this device was removed recently with
1566 Devices can only be removed from an array if they are not in active
use, i.e. they must be spare or failed devices. To remove an active
device, it must first be marked as failed.
1574 .BR \-Q ", " \-\-query
1575 Examine a device to see
1576 (1) if it is an md device and (2) if it is a component of an md
1578 Information about what is discovered is presented.
1581 .BR \-D ", " \-\-detail
1582 Print details of one or more md devices.
1585 .BR \-\-detail\-platform
1586 Print details of the platform's RAID capabilities (firmware / hardware
1587 topology) for a given metadata format. If used without argument, mdadm
1588 will scan all controllers looking for their capabilities. Otherwise, mdadm
1589 will only look at the controller specified by the argument in form of an
1590 absolute filepath or a link, e.g.
1591 .IR /sys/devices/pci0000:00/0000:00:1f.2 .
1594 .BR \-Y ", " \-\-export
1597 .BR \-\-detail-platform ,
1601 output will be formatted as
1603 pairs for easy import into the environment.
1609 indicates whether an array was started
1611 or not, which may include a reason
1612 .RB ( unsafe ", " nothing ", " no ).
1615 indicates if the array is expected on this host
1617 or seems to be from elsewhere
1621 .BR \-E ", " \-\-examine
1622 Print contents of the metadata stored on the named device(s).
1623 Note the contrast between
1628 applies to devices which are components of an array, while
1630 applies to a whole array which is currently active.
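For example (names illustrative), metadata on a component device is
inspected with
.B " mdadm \-\-examine /dev/sda1"
while a running array is inspected with
.B " mdadm \-\-detail /dev/md0"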
1633 If an array was created on a SPARC machine with a 2.2 Linux kernel
1634 patched with RAID support, the superblock will have been created
1635 incorrectly, or at least incompatibly with 2.4 and later kernels.
1640 will fix the superblock before displaying it. If this appears to do
1641 the right thing, then the array can be successfully assembled using
1642 .BR "\-\-assemble \-\-update=sparc2.2" .
1645 .BR \-X ", " \-\-examine\-bitmap
1646 Report information about a bitmap file.
1647 The argument is either an external bitmap file or an array component
1648 in case of an internal bitmap. Note that running this on an array
1651 does not report the bitmap for that array.
1654 .B \-\-examine\-badblocks
1655 List the bad-blocks recorded for the device, if a bad-blocks list has
1656 been configured. Currently only
1658 metadata supports bad-blocks lists.
1661 .BI \-\-dump= directory
1663 .BI \-\-restore= directory
Save metadata from listed devices, or restore metadata to listed devices.
1667 .BR \-R ", " \-\-run
1668 start a partially assembled array. If
did not find enough devices to fully start the array, it might leave
1671 it partially assembled. If you wish, you can then use
1673 to start the array in degraded mode.
1676 .BR \-S ", " \-\-stop
1677 deactivate array, releasing all resources.
1680 .BR \-o ", " \-\-readonly
1681 mark array as readonly.
1684 .BR \-w ", " \-\-readwrite
1685 mark array as readwrite.
1688 .B \-\-zero\-superblock
1689 If the device contains a valid md superblock, the block is
1690 overwritten with zeros. With
1692 the block where the superblock would be is overwritten even if it
1693 doesn't appear to be valid.
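For example, before reusing a former array member, its superblock
might be erased with a command like (device name illustrative)
.B " mdadm \-\-zero\-superblock /dev/sda1"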
1696 .B \-\-kill\-subarray=
1697 If the device is a container and the argument to \-\-kill\-subarray
1698 specifies an inactive subarray in the container, then the subarray is
1699 deleted. Deleting all subarrays will leave an 'empty-container' or
1700 spare superblock on the drives. See
1701 .B \-\-zero\-superblock
1703 removing a superblock. Note that some formats depend on the subarray
index for generating a UUID; this command will fail if it would change
1705 the UUID of an active subarray.
1708 .B \-\-update\-subarray=
1709 If the device is a container and the argument to \-\-update\-subarray
1710 specifies a subarray in the container, then attempt to update the given
1711 superblock field in the subarray. See below in
1716 .BR \-t ", " \-\-test
1721 is set to reflect the status of the device. See below in
1726 .BR \-W ", " \-\-wait
1727 For each md device given, wait for any resync, recovery, or reshape
1728 activity to finish before returning.
1730 will return with success if it actually waited for every device
1731 listed, otherwise it will return failure.
1735 For each md device given, or each device in /proc/mdstat if
1737 is given, arrange for the array to be marked clean as soon as possible.
1739 will return with success if the array uses external metadata and we
1740 successfully waited. For native arrays this returns immediately as the
1741 kernel handles dirty-clean transitions at shutdown. No action is taken
1742 if safe-mode handling is disabled.
1746 Set the "sync_action" for all md devices given to one of
1753 will abort any currently running action though some actions will
1754 automatically restart.
1757 will abort any current action and ensure no other action starts
1767 .BR "SCRUBBING AND MISMATCHES" .
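For example, assuming the
.B \-\-action
option described above, a 'check' scrub might be requested with a
command like (array name illustrative)
.B " mdadm \-\-action=check /dev/md0"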
1769 .SH For Incremental Assembly mode:
1771 .BR \-\-rebuild\-map ", " \-r
1772 Rebuild the map file
1776 uses to help track which arrays are currently being assembled.
1779 .BR \-\-run ", " \-R
1780 Run any array assembled as soon as a minimal number of devices are
1781 available, rather than waiting until all expected devices are present.
1784 .BR \-\-scan ", " \-s
1785 Only meaningful with
1789 file for arrays that are being incrementally assembled and will try to
1790 start any that are not already started. If any such array is listed
1793 as requiring an external bitmap, that bitmap will be attached first.
1796 .BR \-\-fail ", " \-f
1797 This allows the hot-plug system to remove devices that have fully disappeared
1798 from the kernel. It will first fail and then remove the device from any
1799 array it belongs to.
The device name given should be a kernel device name such as "sda",
not "/dev/sda".
1806 Only used with \-\-fail. The 'path' given will be recorded so that if
1807 a new device appears at the same location it can be automatically
1808 added to the same array. This allows the failed device to be
1809 automatically replaced by a new device without metadata if it appears
at the specified path. This option is normally only set by a
1814 .SH For Monitor mode:
1816 .BR \-m ", " \-\-mail
1817 Give a mail address to send alerts to.
1820 .BR \-p ", " \-\-program ", " \-\-alert
1821 Give a program to be run whenever an event is detected.
1824 .BR \-y ", " \-\-syslog
1825 Cause all events to be reported through 'syslog'. The messages have
1826 facility of 'daemon' and varying priorities.
1829 .BR \-d ", " \-\-delay
1830 Give a delay in seconds.
1832 polls the md arrays and then waits this many seconds before polling
1833 again. The default is 60 seconds. Since 2.6.16, there is no need to
1834 reduce this as the kernel alerts
1836 immediately when there is any change.
1839 .BR \-r ", " \-\-increment
1840 Give a percentage increment.
1842 will generate RebuildNN events with the given percentage increment.
1845 .BR \-f ", " \-\-daemonise
1848 to run as a background daemon if it decides to monitor anything. This
1849 causes it to fork and run in the child, and to disconnect from the
1850 terminal. The process id of the child is written to stdout.
1853 which will only continue monitoring if a mail address or alert program
1854 is found in the config file.
1857 .BR \-i ", " \-\-pid\-file
1860 is running in daemon mode, write the pid of the daemon process to
1861 the specified file, instead of printing it on standard output.
1864 .BR \-1 ", " \-\-oneshot
1865 Check arrays only once. This will generate
1867 events and more significantly
1873 .B " mdadm \-\-monitor \-\-scan \-1"
1875 from a cron script will ensure regular notification of any degraded arrays.
1878 .BR \-t ", " \-\-test
1881 alert for every array found at startup. This alert gets mailed and
1882 passed to the alert program. This can be used for testing that alert
messages do get through successfully.
1887 This inhibits the functionality for moving spares between arrays.
1888 Only one monitoring process started with
but without this flag is allowed, otherwise the two could interfere
with each other.
1897 .B mdadm \-\-assemble
1898 .I md-device options-and-component-devices...
1901 .B mdadm \-\-assemble \-\-scan
1902 .I md-devices-and-options...
1905 .B mdadm \-\-assemble \-\-scan
1909 This usage assembles one or more RAID arrays from pre-existing components.
1910 For each array, mdadm needs to know the md device, the identity of the
1911 array, and a number of component-devices. These can be found in a number of ways.
1913 In the first usage example (without the
1915 the first device given is the md device.
1916 In the second usage example, all devices listed are treated as md
1917 devices and assembly is attempted.
1918 In the third (where no devices are listed) all md devices that are
1919 listed in the configuration file are assembled. If no arrays are
1920 described by the configuration file, then any arrays that
1921 can be found on unused devices will be assembled.
1923 If precisely one device is listed, but
1929 was given and identity information is extracted from the configuration file.
1931 The identity can be given with the
1937 option, will be taken from the md-device record in the config file, or
will be taken from the superblock of the first component-device
1939 listed on the command line.
1941 Devices can be given on the
1943 command line or in the config file. Only devices which have an md
1944 superblock which contains the right identity will be considered for
1947 The config file is only used if explicitly named with
1949 or requested with (a possibly implicit)
1954 .B /etc/mdadm/mdadm.conf
1959 is not given, then the config file will only be used to find the
1960 identity of md arrays.
1962 Normally the array will be started after it is assembled. However if
1964 is not given and not all expected drives were listed, then the array
1965 is not started (to guard against usage errors). To insist that the
1966 array be started in this case (as may work for RAID1, 4, 5, 6, or 10),
.I mdadm
does not create any entries in
.B /dev
by default. It does record information in
.B /run/mdadm/map
which allows
.I udev
to choose the correct name. If
.I mdadm
detects that udev is not configured, it will create the devices in
.B /dev
itself.
1991 In Linux kernels prior to version 2.6.28 there were two distinctly
1992 different types of md devices that could be created: one that could be
1993 partitioned using standard partitioning tools and one that could not.
Since 2.6.28 that distinction is no longer relevant as both types of
devices can be partitioned.
.I mdadm
will normally create the type that originally could not be partitioned
as it has a well-defined major number (9).
Prior to 2.6.28, it was important that mdadm chose the correct type
of array device to use. This can be controlled with the
.B \-\-auto
option. In particular, a value of "mdp" or "part" or "p" tells mdadm
to use a partitionable device rather than the default.
In the no-udev case, the value given to
.B \-\-auto
can be suffixed by a number. This tells
.I mdadm
to create that number of partition devices rather than the default of 4.
The value given to
.B \-\-auto
can also be given in the configuration file as a word starting
.B auto=
on the ARRAY line for the relevant array.
.SS Auto-Assembly
When
.B \-\-assemble
is used with
.B \-\-scan
and no devices are listed,
.I mdadm
will first attempt to assemble all the arrays listed in the config
file.
.P
If no arrays are listed in the config (other than those marked
.BR <ignore> )
it will look through the available devices for possible arrays and
2031 will try to assemble anything that it finds. Arrays which are tagged
2032 as belonging to the given homehost will be assembled and started
2033 normally. Arrays which do not obviously belong to this host are given
2034 names that are expected not to conflict with anything local, and are
2035 started "read-auto" so that nothing is written to any device until the
array is written to. That is, automatic resync etc. is delayed.
If
.I mdadm
finds a consistent set of devices that look like they should comprise
an array, and if the superblock is tagged as belonging to the given
home host, it will automatically choose a device name and try to
assemble the array. If the array uses version-0.90 metadata, then the
.B minor
number as recorded in the superblock is used to create a name in
.BR /dev/md/ .
If the array uses version-1 metadata, then the
.B name
from the superblock is used to similarly create a name in
.B /dev/md/
(the name will have any 'host' prefix stripped first).
This behaviour can be modified by the
.I AUTO
line in the
.B mdadm.conf
configuration file. This line can indicate that a specific metadata
type should, or should not, be automatically assembled. If an array
is found which is not listed in
.B mdadm.conf
and has a metadata format that is denied by the
.I AUTO
line, then it will not be assembled.
The
.I AUTO
line can also request that all arrays identified as being for this
homehost should be assembled regardless of their metadata type.
See
.BR mdadm.conf (5)
for further details.
Note: Auto-assembly cannot be used for assembling and activating some
arrays which are undergoing reshape. In particular as the
.B \-\-backup\-file
cannot be given, any reshape which requires a backup file to continue
cannot be started by auto-assembly. An array which is growing to more
devices and has passed the critical section can be assembled using
auto-assembly.
.SH BUILD MODE
.HP 12
Usage:
.B mdadm \-\-build
.I md-device
.BI \-\-chunk= X
.BI \-\-level= Y
.br
.BI \-\-raid\-devices= Z
.I devices
.PP
This usage is similar to
.BR \-\-create .
The difference is that it creates an array without a superblock. With
these arrays there is no difference between initially creating the array and
subsequently assembling the array, except that hopefully there is useful
data there in the second case.
The level may be raid0, linear, raid1, raid10, multipath, or faulty, or
one of their synonyms. All devices must be listed and the array will
be started once complete. It will often be appropriate to use
.B \-\-assume\-clean
with levels raid1 or raid10.
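.P
As an illustration (device names hypothetical), the following builds a
two-device, superblock-less linear array:
.P
.nf
mdadm \-\-build /dev/md0 \-\-level=linear \-\-raid\-devices=2 /dev/sda1 /dev/sdb1
.fi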
.SH CREATE MODE
.HP 12
Usage:
.B mdadm \-\-create
.I md-device
.BI \-\-chunk= X
.BI \-\-level= Y
.br
.BI \-\-raid\-devices= Z
.I devices
.PP
This usage will initialise a new md array, associate some devices with
it, and activate the array.
The named device will normally not exist when
.I "mdadm \-\-create"
is run, but will be created by
.I udev
once the array becomes active.
2129 As devices are added, they are checked to see if they contain RAID
2130 superblocks or filesystems. They are also checked to see if the variance in
2131 device size exceeds 1%.
If any discrepancy is found, the array will not automatically be run, though
the presence of a
.B \-\-run
can override this caution.
2138 To create a "degraded" array in which some devices are missing, simply
2139 give the word "\fBmissing\fP"
in place of a device name. This will cause
.I mdadm
to leave the corresponding slot in the array empty.
For a RAID4 or RAID5 array at most one slot can be
"\fBmissing\fP"; for a RAID6 array at most two slots.
For a RAID1 array, only one real device needs to be given. All of the
others can be "\fBmissing\fP".
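.P
For example (hypothetical device names), a mirror can be created with
only one real member and the second added later:
.P
.nf
mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/sda1 missing
mdadm /dev/md0 \-\-add /dev/sdb1
.fi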
When creating a RAID5 array,
.I mdadm
will automatically create a degraded array with an extra spare drive.
This is because building the spare into a degraded array is in general
faster than resyncing the parity on a non-degraded, but not clean,
array. This feature can be overridden with the
.B \-\-force
option.
When creating an array with version-1 metadata a name for the array is
required.
If this is not given with the
.B \-\-name
option,
.I mdadm
will choose a name based on the last component of the name of the
device being created. So if
.B /dev/md3
is being created, then the name
.B 3
will be chosen. If
.B /dev/md/home
is being created, then the name
.B home
will be used.
When creating a partition-based array, using
.I mdadm
with version-1.x metadata, the partition type should be set to
.B 0xDA
(non fs-data). This type selection allows for greater precision, since
using any other type [RAID auto-detect (0xFD) or a GNU/Linux partition (0x83)]
might create problems in the event of array recovery through a live cdrom.
2184 A new array will normally get a randomly assigned 128bit UUID which is
2185 very likely to be unique. If you have a specific need, you can choose
a UUID for the array by giving the
.B \-\-uuid=
option. Be warned that creating two arrays with the same UUID is a
recipe for disaster. Also, using
.B \-\-uuid=
when creating a v0.90 array will silently override any
.B \-\-homehost=
setting.
2196 .\"option is given, it is not necessary to list any component-devices in this command.
2197 .\"They can be added later, before a
2201 .\"is given, the apparent size of the smallest drive given is used.
If the array type supports a write-intent bitmap, and if the devices
in the array exceed 100G in size, an internal write-intent bitmap
will automatically be added unless some other option is explicitly
requested with the
.B \-\-bitmap
option or a different consistency policy is selected with the
2209 .B \-\-consistency\-policy
2210 option. In any case space for a bitmap will be reserved so that one
2211 can be added later with
2212 .BR "\-\-grow \-\-bitmap=internal" .
2214 If the metadata type supports it (currently only 1.x metadata), space
2215 will be allocated to store a bad block list. This allows a modest
2216 number of bad blocks to be recorded, allowing the drive to remain in
2217 service while only partially functional.
When creating an array within a
.BR CONTAINER ,
.I mdadm
can be given either the list of devices to use, or simply the name of
the container. The former case gives control over which devices in
the container will be used for the array. The latter case allows
.I mdadm
to automatically choose which devices to use based on how much spare
space is available.
The General Management options that are valid with
.B \-\-create
are:
.TP
.B \-\-run
Insist on running the array even if some devices look like they might
be in use.
.TP
.B \-\-readonly
Start the array in readonly mode.
.SH MANAGE MODE
.HP 12
Usage:
.B mdadm
.I device
.I options... devices...
.PP
This usage will allow individual devices in an array to be failed,
removed or added. It is possible to perform multiple operations with
one command. For example:
.br
.B " mdadm /dev/md0 \-f /dev/hda1 \-r /dev/hda1 \-a /dev/hda1"
.br
will firstly mark
.B /dev/hda1
as faulty in
.B /dev/md0
and will then remove it from the array and finally add it back
in as a spare. However only one md array can be affected by a single
command.
2263 When a device is added to an active array, mdadm checks to see if it
2264 has metadata on it which suggests that it was recently a member of the
2265 array. If it does, it tries to "re\-add" the device. If there have
2266 been no changes since the device was removed, or if the array has a
2267 write-intent bitmap which has recorded whatever changes there were,
2268 then the device will immediately become a full member of the array and
2269 those differences recorded in the bitmap will be resolved.
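.P
For example (hypothetical names), a member that was temporarily
disconnected can often be returned without a full rebuild when a
write-intent bitmap is present:
.P
.nf
mdadm /dev/md0 \-\-re\-add /dev/sdb1
.fi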
.SH MISC MODE
.HP 12
Usage:
.B mdadm
.I options ... devices ...
.PP
MISC mode includes a number of distinct operations that
operate on distinct devices. The operations are:
.TP
.B \-\-query
The device is examined to see if it is
(1) an active md array, or
(2) a component of an md array.
The information discovered is reported.
.TP
.B \-\-detail
The device should be an active md device.
.I mdadm
will display a detailed description of the array.
.B \-\-brief
or
.B \-\-scan
will cause the output to be less detailed and the format to be
suitable for inclusion in
.BR mdadm.conf .
The exit status of
.I mdadm
will normally be 0 unless
.I mdadm
failed to get useful information about the device(s); however, if the
.B \-\-test
option is given, then the exit status will be:
.RS
.TP
0
The array is functioning normally.
.TP
1
The array has at least one failed device.
.TP
2
The array has multiple failed devices such that it is unusable.
.TP
4
There was an error while trying to get information about the device.
.RE
.TP
.B \-\-detail\-platform
Print detail of the platform's RAID capabilities (firmware / hardware
topology). If the metadata is specified with
.B \-e
or
.B \-\-metadata=
then the return status will be:
.RS
.TP
0
metadata successfully enumerated its platform components on this system
.TP
1
metadata is platform independent
.TP
2
metadata failed to find its platform components on this system
.RE
.TP
.B \-\-update\-subarray=
If the device is a container and the argument to \-\-update\-subarray
specifies a subarray in the container, then attempt to update the given
superblock field in the subarray. Similar to updating an array in
"assemble" mode, the field to update is selected by the
.B \-U
or
.B \-\-update=
option. The supported options are
.BR name ,
.B ppl
and
.BR no\-ppl .
The
.B name
option updates the subarray name in the metadata; it may not affect the
device node name or the device node symlink until the subarray is
re\-assembled. If updating
.B name
would change the UUID of an active subarray this operation is blocked,
and the command will end in an error.
The
.B ppl
and
.B no\-ppl
options enable and disable PPL in the metadata. Currently supported only for
IMSM subarrays.
.TP
.B \-\-examine
The device should be a component of an md array.
.I mdadm
will read the md superblock of the device and display the contents.
If
.B \-\-brief
or
.B \-\-scan
is given, then multiple devices that are components of the one array
are grouped together and reported in a single entry suitable
for inclusion in
.BR mdadm.conf .
Having
.B \-\-scan
without listing any devices will cause all devices listed in the
config file to be examined.
.TP
.BI \-\-dump= directory
If the device contains RAID metadata, a file will be created in the
.I directory
and the metadata will be written to it. The file will be the same
size as the device and have the metadata written in the file at the
same location that it exists in the device. However the file will be "sparse" so
that only those blocks containing metadata will be allocated. The
total space used will be small.
The file name used in the
.I directory
will be the base name of the device. Further, if any links appear in
.I /dev
which point to the device, then hard links to the file will be created
in the
.I directory
based on these
.I /dev
names.
Multiple devices can be listed and their metadata will all be stored
in the one directory.
.TP
.BI \-\-restore= directory
This is the reverse of
.BR \-\-dump .
.I mdadm
will locate a file in the directory that has a name appropriate for
the given device and will restore metadata from it. Names that match
.I /dev
names are preferred, however if two of those refer to different files,
.I mdadm
will not choose between them but will abort the operation.
If a file name is given instead of a
.I directory
then
.I mdadm
will restore from that file to a single device, always provided the
size of the file matches that of the device, and the file contains
valid metadata.
.TP
.B \-\-stop
The devices should be active md arrays which will be deactivated, as
long as they are not currently in use.
.TP
.B \-\-run
This will fully activate a partially assembled md array.
.TP
.B \-\-readonly
This will mark an active array as read-only, providing that it is
not currently being used.
.TP
.B \-\-readwrite
This will change a
.B readonly
array back to being read/write.
.TP
.B \-\-scan
For all operations except
.BR \-\-examine ,
.B \-\-scan
will cause the operation to be applied to all arrays listed in
.BR /proc/mdstat .
For
.BR \-\-examine ,
.B \-\-scan
causes all devices listed in the config file to be examined.
.TP
.BR \-b ", " \-\-brief
Be less verbose. This is used with
.B \-\-detail
and
.BR \-\-examine .
Using
.B \-\-brief
with
.B \-\-verbose
gives an intermediate level of verbosity.
.SH MONITOR MODE
.HP 12
Usage:
.B mdadm \-\-monitor
.I options... devices...
.PP
This usage causes
.I mdadm
to periodically poll a number of md arrays and to report on any events
noticed.
.I mdadm
will never exit once it decides that there are arrays to be checked,
so it should normally be run in the background.
.P
As well as reporting events,
.I mdadm
may move a spare drive from one array to another if they are in the
same
.B spare\-group
or
.B domain
and if the destination array has a failed drive but no spares.
If any devices are listed on the command line,
.I mdadm
will only monitor those devices. Otherwise all arrays listed in the
configuration file will be monitored. Further, if
.B \-\-scan
is given, then any other md devices that appear in
.B /proc/mdstat
will also be monitored.
2512 The result of monitoring the arrays is the generation of events.
2513 These events are passed to a separate program (if specified) and may
2514 be mailed to a given E-mail address.
2516 When passing events to a program, the program is run once for each event,
2517 and is given 2 or 3 command-line arguments: the first is the
2518 name of the event (see below), the second is the name of the
2519 md device which is affected, and the third is the name of a related
2520 device if relevant (such as a component device that has failed).
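.P
As an illustration, the alert program might be a small shell script
like the following sketch (the log file path is an arbitrary choice,
and the script would be named on a
.B PROGRAM
line in the config file or with the
.B \-\-alert
option):
.P
.nf
#!/bin/sh
# mdadm supplies: event name, md device, and (sometimes) a component device
EVENT="$1"
DEVICE="$2"
COMPONENT="$3"
echo "$(date): $EVENT on $DEVICE $COMPONENT" >> /var/log/mdadm-events.log
.fi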
If
.B \-\-scan
is given, then a program or an E-mail address must be specified on the
command line or in the config file. If neither is available, then
.I mdadm
will not monitor anything.
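.P
For example, the following invocation (the mail address is
illustrative) monitors every array named in the config file and mails
root about significant events:
.P
.nf
mdadm \-\-monitor \-\-scan \-\-mail=root \-\-delay=1800 \-\-daemonise
.fi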
Without
.BR \-\-scan ,
.I mdadm
will continue monitoring as long as something was found to monitor. If
no program or email is given, then each event is reported to
.BR stdout .
.P
The different events are:
.TP
.B DeviceDisappeared
An md array which previously was configured appears to no longer be
configured. (syslog priority: Critical)
If
.I mdadm
was told to monitor an array which is RAID0 or Linear, then it will
report
.B DeviceDisappeared
with the extra information
.BR Wrong\-Level .
This is because RAID0 and Linear do not support the device-failed,
hot-spare and resync operations which are monitored.
.TP
.B RebuildStarted
An md array started reconstruction (e.g. recovery, resync, reshape,
check, repair). (syslog priority: Warning)
.TP
.BI Rebuild NN
Where
.I NN
is a two-digit number (e.g. 05, 48). This indicates that the rebuild
has passed that many percent of the total. The events are generated
at fixed increments from 0. The increment size may be specified with
a command-line option (the default is 20). (syslog priority: Warning)
.TP
.B RebuildFinished
An md array that was rebuilding isn't any more, either because it
finished normally or was aborted. (syslog priority: Warning)
.TP
.B Fail
An active component device of an array has been marked as
faulty. (syslog priority: Critical)
.TP
.B FailSpare
A spare component device which was being rebuilt to replace a faulty
device has failed. (syslog priority: Critical)
.TP
.B SpareActive
A spare component device which was being rebuilt to replace a faulty
device has been successfully rebuilt and has been made active.
(syslog priority: Info)
.TP
.B NewArray
A new md array has been detected in the
.B /proc/mdstat
file. (syslog priority: Info)
.TP
.B DegradedArray
A newly noticed array appears to be degraded. This message is not
generated when
.I mdadm
notices a drive failure which causes degradation, but only when
.I mdadm
notices that an array is degraded when it first sees the array.
(syslog priority: Critical)
.TP
.B MoveSpare
A spare drive has been moved from one array in a
.B spare\-group
or
.B domain
to another to allow a failed drive to be replaced.
(syslog priority: Info)
.TP
.B SparesMissing
If
.I mdadm
has been told, via the config file, that an array should have a certain
number of spare devices, and
.I mdadm
detects that it has fewer than this number when it first sees the
array, it will report a
.B SparesMissing
message.
(syslog priority: Warning)
.TP
.B TestMessage
An array was found at startup, and the
.B \-\-test
flag was given.
(syslog priority: Info)
.P
Only
.BR Fail ,
.BR FailSpare ,
.BR DegradedArray ,
.B SparesMissing
and
.B TestMessage
cause Email to be sent. All events cause the program to be run.
The program is run with two or three arguments: the event
name, the array device and possibly a second device.
Each event has an associated array device (e.g.
.BR /dev/md1 )
and possibly a second device. For
.BR Fail ,
.BR FailSpare ,
and
.B SpareActive
the second device is the relevant component device.
For
.B MoveSpare
the second device is the array that the spare was moved from.
For
.I mdadm
to move spares from one array to another, the different arrays need to
be labeled with the same
.B spare\-group
or the spares must be allowed to migrate through matching POLICY domains
in the configuration file. The
.B spare\-group
name can be any string; it is only necessary that different spare
groups use different names.
When
.I mdadm
detects that an array in a spare group has fewer active
devices than necessary for the complete array, and has no spare
devices, it will look for another array in the same spare group that
has a full complement of working drives and a spare. It will then
attempt to remove the spare from the second array and add it to the
first.
If the removal succeeds but the adding fails, then it is added back to
the original array.
If the spare group for a degraded array is not defined,
.I mdadm
will look at the rules of spare migration specified by POLICY lines in
.B mdadm.conf
and then follow similar steps as above if a matching spare is found.
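.P
For example, two ARRAY lines in the config file might be placed in the
same spare group like this (the UUIDs are placeholders):
.P
.nf
ARRAY /dev/md0 UUID=<uuid-of-md0> spare-group=database
ARRAY /dev/md1 UUID=<uuid-of-md1> spare-group=database
.fi
.P
A spare attached to either array can then be moved to the other when needed.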
.SH GROW MODE
The GROW mode is used for changing the size or shape of an active
array.
For this to work, the kernel must support the necessary change.
Various types of growth are being added during 2.6 development.
.P
Currently the supported changes include
.IP \(bu 4
change the "size" attribute for RAID1, RAID4, RAID5 and RAID6.
.IP \(bu 4
increase or decrease the "raid\-devices" attribute of RAID0, RAID1, RAID4,
RAID5, and RAID6.
.IP \(bu 4
change the chunk-size and layout of RAID0, RAID4, RAID5, RAID6 and RAID10.
.IP \(bu 4
convert between RAID1 and RAID5, between RAID5 and RAID6, between
RAID0, RAID4, and RAID5, and between RAID0 and RAID10 (in the near-2 mode).
.IP \(bu 4
add a write-intent bitmap to any array which supports these bitmaps, or
remove a write-intent bitmap from such an array.
.IP \(bu 4
change the array's consistency policy.
.PP
Using GROW on containers is currently supported only for Intel's IMSM
container format. The number of devices in a container can be
increased - which affects all arrays in the container - or an array
in a container can be converted between levels where those levels are
supported by the container, and the conversion is one of those listed
above. Resizing arrays in an IMSM container with
.B \-\-grow \-\-size
is not yet supported.
Grow functionality (e.g. expanding the number of raid devices) for Intel's
IMSM container format has an experimental status. It is guarded by the
.B MDADM_EXPERIMENTAL
environment variable which must be set to '1' for a GROW command to
succeed.
This is for the following reasons:
.IP 1.
Intel's native IMSM check-pointing is not fully tested yet.
This can cause IMSM incompatibility during the grow process: an array
which is growing cannot roam between Microsoft Windows(R) and Linux
systems.
.IP 2.
Interrupting a grow operation is not recommended, because it
has not been fully tested for Intel's IMSM container format yet.
Note: Intel's native checkpointing doesn't use the
.B \-\-backup\-file
option and is transparent to the assembly feature.
.SS SIZE CHANGES
Normally when an array is built the "size" is taken from the smallest
of the drives. If all the small drives in an array are, one at a
time, removed and replaced with larger drives, then you could have an
array of large drives with only a small amount used. In this
situation, changing the "size" with "GROW" mode will allow the extra
space to start being used. If the size is increased in this way, a
"resync" process will start to make sure the new parts of the array
are synchronised.
2750 Note that when an array changes size, any filesystem that may be
2751 stored in the array will not automatically grow or shrink to use or
2752 vacate the space. The
2753 filesystem will need to be explicitly told to use the extra space
after growing, or to reduce its size prior to shrinking the array.
2758 Also the size of an array cannot be changed while it has an active
2759 bitmap. If an array has a bitmap, it must be removed before the size
2760 can be changed. Once the change is complete a new bitmap can be created.
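.P
A typical sequence (device name hypothetical) therefore removes the
bitmap, grows the size, and adds the bitmap back:
.P
.nf
mdadm \-\-grow /dev/md0 \-\-bitmap=none
mdadm \-\-grow /dev/md0 \-\-size=max
mdadm \-\-grow /dev/md0 \-\-bitmap=internal
.fi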
2762 .SS RAID\-DEVICES CHANGES
A RAID1 array can work with any number of devices from 1 upwards
(though 1 is not very useful). There may be times when you want to
increase or decrease the number of active devices. Note that this is
different to hot-add or hot-remove which changes the number of
inactive devices.
2770 When reducing the number of devices in a RAID1 array, the slots which
2771 are to be removed from the array must already be vacant. That is, the
2772 devices which were in those slots must be failed and removed.
2774 When the number of devices is increased, any hot spares that are
2775 present will be activated immediately.
2777 Changing the number of active devices in a RAID5 or RAID6 is much more
2778 effort. Every block in the array will need to be read and written
2779 back to a new location. From 2.6.17, the Linux Kernel is able to
2780 increase the number of devices in a RAID5 safely, including restarting
2781 an interrupted "reshape". From 2.6.31, the Linux Kernel is able to
2782 increase or decrease the number of devices in a RAID5 or RAID6.
From 2.6.35, the Linux Kernel is able to convert a RAID0 into a RAID4
or RAID5.
.I mdadm
uses this functionality and the ability to add
devices to a RAID4 to allow devices to be added to a RAID0. When
requested to do this,
.I mdadm
will convert the RAID0 to a RAID4, add the necessary disks and make
the reshape happen, and then convert the RAID4 back to RAID0.
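.P
For example (hypothetical names), a new device is usually added as a
spare first and the array is then reshaped onto it:
.P
.nf
mdadm /dev/md0 \-\-add /dev/sde1
mdadm \-\-grow /dev/md0 \-\-raid\-devices=5
.fi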
When decreasing the number of devices, the size of the array will also
decrease. If there was data in the array, it could get destroyed and
this is not reversible, so you should first shrink the filesystem on
the array to fit within the new size. To help prevent accidents,
.I mdadm
requires that the size of the array be decreased first with
.BR "mdadm --grow --array-size" .
This is a reversible change which simply makes the end of the array
inaccessible. The integrity of any data can then be checked before
the non-reversible reduction in the number of devices is requested.
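.P
A shrink from four devices to three might therefore look like this
sketch (the size and names are illustrative, and the filesystem must
already fit within the reduced size):
.P
.nf
mdadm \-\-grow /dev/md0 \-\-array\-size=104857600
mdadm \-\-grow /dev/md0 \-\-raid\-devices=3 \-\-backup\-file=/root/md0.backup
.fi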
2805 When relocating the first few stripes on a RAID5 or RAID6, it is not
2806 possible to keep the data on disk completely consistent and
2807 crash-proof. To provide the required safety, mdadm disables writes to
2808 the array while this "critical section" is reshaped, and takes a
2809 backup of the data that is in that section. For grows, this backup may be
2810 stored in any spare devices that the array has, however it can also be
stored in a separate file specified with the
.B \-\-backup\-file
option, and is required to be specified for shrinks, RAID level
changes and layout changes. If this option is used, and the system
does crash during the critical period, the same file must be passed to
.B \-\-assemble
to restore the backup and reassemble the array. When shrinking rather
2818 than growing the array, the reshape is done from the end towards the
2819 beginning, so the "critical section" is at the end of the reshape.
.SS LEVEL CHANGES
Changing the RAID level of any array happens instantaneously. However
in the RAID5 to RAID6 case this requires a non-standard layout of the
RAID6 data, and in the RAID6 to RAID5 case that non-standard layout is
required before the change can be accomplished. So while the level
change is instant, the accompanying layout change can take quite a
long time. A
.B \-\-backup\-file
is required. If the array is not simultaneously being grown or
shrunk, so that the array size will remain the same - for example,
reshaping a 3-drive RAID5 into a 4-drive RAID6 - the backup file will
be used not just for a "critical section" but throughout the reshape
operation, as described below under LAYOUT CHANGES.
.SS CHUNK-SIZE AND LAYOUT CHANGES
Changing the chunk-size or layout without also changing the number of
devices at the same time will involve re-writing all blocks in-place.
To ensure against data loss in the case of a crash, a
.B \-\-backup\-file
must be provided for these changes. Small sections of the array will
be copied to the backup file while they are being rearranged. This
means that all the data is copied twice, once to the backup and once
to the new layout on the array, so this type of reshape will go very
slowly.
.P
If the reshape is interrupted for any reason, this backup file must be
made available to
.B "mdadm --assemble"
so the array can be reassembled. Consequently the file cannot be
stored on the device being reshaped.
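.P
For instance (names illustrative), a change of chunk size to 256K
could be requested as:
.P
.nf
mdadm \-\-grow /dev/md0 \-\-chunk=256 \-\-backup\-file=/root/md0-chunk.backup
.fi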
.SS BITMAP CHANGES
A write-intent bitmap can be added to, or removed from, an active
array. Either internal bitmaps, or bitmaps stored in a separate file,
can be added. Note that if you add a bitmap stored in a file which is
in a filesystem that is on the RAID array being affected, the system
will deadlock. The bitmap must be on a separate filesystem.
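.P
For example (paths hypothetical), an external bitmap stored on a
separate filesystem can be added with:
.P
.nf
mdadm \-\-grow /dev/md0 \-\-bitmap=/var/bitmaps/md0.bitmap
.fi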
2863 .SS CONSISTENCY POLICY CHANGES
2865 The consistency policy of an active array can be changed by using the
2866 .B \-\-consistency\-policy
option in Grow mode. Currently this works only for the
.B ppl
and
.B resync
policies and allows the RAID5 Partial Parity Log (PPL) to be enabled
or disabled.
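.P
For example (device name hypothetical), PPL can be enabled on a
running RAID5 with:
.P
.nf
mdadm \-\-grow /dev/md0 \-\-consistency\-policy=ppl
.fi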
2873 .SH INCREMENTAL MODE
.HP 12
Usage:
.B mdadm \-\-incremental
.RB [ \-\-run ]
.RB [ \-\-quiet ]
.I component-device
.RI [ optional-aliases-for-device ]
.HP 12
Usage:
.B mdadm \-\-incremental \-\-fail
.I component-device
.HP 12
Usage:
.B mdadm \-\-incremental \-\-rebuild\-map
.HP 12
Usage:
.B mdadm \-\-incremental \-\-run \-\-scan
.PP
This mode is designed to be used in conjunction with a device
discovery system. As devices are found in a system, they can be
passed to
.B "mdadm \-\-incremental"
to be conditionally added to an appropriate array.
.P
Conversely, it can also be used with the
.B \-\-fail
flag to do just the opposite and find whatever array a particular device
is part of and remove the device from that array.
If the device passed is a
.B CONTAINER
device created by a previous call to
.IR mdadm ,
then rather than trying to add that device to an array, all the arrays
described by the metadata of the container will be started.
.I mdadm
performs a number of tests to determine if the device is part of an
array, and which array it should be part of. If an appropriate array
is found, or can be created,
.I mdadm
adds the device to the array and conditionally starts the array.
Note that
.I mdadm
will normally only add devices to an array which were previously working
(active or spare) parts of that array. The support for automatic
inclusion of a new drive as a spare in some array requires
a configuration through POLICY in the config file.
.P
The tests that
.I mdadm
makes are as follows:
.IP +
Is the device permitted by
.BR mdadm.conf ?
That is, is it listed in a
.B DEVICES
line in that file. If
.B DEVICES
is absent then the default is to allow any device. Similarly if
.B DEVICES
contains the special word
.B partitions
then any device is allowed. Otherwise the device name given to
.IR mdadm ,
or one of the aliases given, or an alias found in the filesystem,
must match one of the names or patterns in a
.B DEVICES
line.
This is the only context where the aliases are used. They are
usually provided by a
.I udev
rules mechanism.
.IP +
Does the device have a valid md superblock? If a specific metadata
version is requested with
.B \-\-metadata
or
.B \-e
then only that style of metadata is accepted, otherwise
.I mdadm
finds any known version of metadata. If no
.I md
metadata is found, the device may still be added to an array
as a spare if POLICY allows.
.IP +
Does the metadata match an expected array?
The metadata can match in two ways. Either there is an array listed
in
.B mdadm.conf
which identifies the array (either by UUID, by name, by device list,
or by minor-number), or the array was created with a
.B homehost
specified and that
.B homehost
matches the one in
.B mdadm.conf
or on the command line.
If
.I mdadm
is not able to positively identify the array as belonging to the
current host, the device will be rejected.
.I mdadm
keeps a list of arrays that it has partially assembled in
.BR /run/mdadm/map .
If no array exists which matches
the metadata on the new device,
.I mdadm
must choose a device name and unit number. It does this based on any
name given in
.B mdadm.conf
or any name information stored in the metadata. If this name
suggests a unit number, that number will be used, otherwise a free
unit number will be chosen. Normally
.I mdadm
will prefer to create a partitionable array, however if the
.B CREATE
line in
.B mdadm.conf
suggests that a non-partitionable array is preferred, that will be
honoured.
If the array is not found in the config file and its metadata does not
identify it as belonging to the "homehost", then
.I mdadm
will choose a name for the array which is certain not to conflict with
any array which does belong to this host. It does this by adding an
underscore and a small number to the name preferred by the metadata.
Once an appropriate array is found or created and the device is added,
.I mdadm
must decide if the array is ready to be started. It will
normally compare the number of available (non-spare) devices to the
number of devices that the metadata suggests need to be active. If
there are at least that many, the array will be started. This means
that if any devices are missing the array will not be restarted.
.P
As an alternative,
.B \-\-run
may be passed to
.I mdadm
in which case the array will be run as soon as there are enough
devices present for the data to be accessible. For a RAID1, that
means one device will start the array. For a clean RAID5, the array
will be started as soon as all but one drive is present.
Note that neither of these approaches is really ideal. If it can
be known that all device discovery has completed, then
.B " mdadm \-IRs"
can be run which will try to start all arrays that are being
incrementally assembled. They are started in "read-auto" mode in
which they are read-only until the first write request. This means
that no metadata updates are made and no attempt at resync or recovery
happens. Further devices that are found before the first write can
still be added safely.
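.P
To illustrate the intended use, a udev rule along the following lines
(simplified; the rule file name is arbitrary and real distribution
rules add more filtering) could pass each newly appearing block device
to
.IR mdadm :
.P
.nf
# e.g. /etc/udev/rules.d/64-md-incremental.rules
ACTION=="add", SUBSYSTEM=="block", RUN+="/sbin/mdadm \-\-incremental $env{DEVNAME}"
.fi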
.SH ENVIRONMENT
This section describes environment variables that affect how mdadm
operates.
.TP
.B MDADM_NO_MDMON
Setting this value to 1 will prevent mdadm from automatically launching
mdmon. This variable is intended primarily for debugging mdadm/mdmon.
.TP
.B MDADM_NO_UDEV
Normally,
.I mdadm
does not create any device nodes in /dev, but leaves that task to
.IR udev .
If
.I udev
appears not to be configured, or if this environment variable is set
to '1', then
.I mdadm
will create any devices that are needed.
.TP
.B MDADM_NO_SYSTEMCTL
If
.I mdadm
detects that
.I systemd
is in use it will normally request
.I systemctl
to start various background tasks (particularly
.IR mdmon )
rather than forking and running them in the background. This can be
suppressed by setting
.BR MDADM_NO_SYSTEMCTL=1 .
.TP
.B IMSM_NO_PLATFORM
A key value of IMSM metadata is that it allows interoperability with
boot ROMs on Intel platforms, and with other major operating systems.
Consequently,
.I mdadm
will only allow an IMSM array to be created or modified if it detects
that it is running on an Intel platform which supports IMSM, and
supports the particular configuration of IMSM that is being requested
(some functionality requires newer OROM support).
3090 These checks can be suppressed by setting IMSM_NO_PLATFORM=1 in the
3091 environment. This can be useful for testing or for disaster
3092 recovery. You should be aware that interoperability may be
3093 compromised by setting this value.
.TP
.B MDADM_GROW_ALLOW_OLD
If an array is stopped while it is performing a reshape and that
reshape was making use of a backup file, then when the array is
re\-assembled,
.I mdadm
will sometimes complain that the backup file is too old. If this
happens and you are certain it is the right backup file, you can
over-ride this check by setting
.B MDADM_GROW_ALLOW_OLD=1
in the environment.
.TP
.B MDADM_CONF_AUTO
Any string given in this variable is added to the start of the
.B AUTO
line in the config file, or treated as the whole
.B AUTO
line if none is given. It can be used to disable certain metadata
types when
.I mdadm
is called from a boot script. For example
.br
.B " export MDADM_CONF_AUTO='\-ddf \-imsm'"
.br
will make sure that
.I mdadm
does not automatically assemble any DDF or
IMSM arrays that are found. This can be useful on systems configured
to manage such arrays with
.BR dmraid .
.SH EXAMPLES
.B " mdadm \-\-query /dev/name-of-device"
.br
This will find out if a given device is a RAID array, or is part of
one, and will provide brief information about the device.
3135 .B " mdadm \-\-assemble \-\-scan"
3137 This will assemble and start all arrays listed in the standard config
3138 file. This command will typically go in a system startup file.
3140 .B " mdadm \-\-stop \-\-scan"
3142 This will shut down all arrays that can be shut down (i.e. are not
3143 currently in use). This will typically go in a system shutdown script.
3145 .B " mdadm \-\-follow \-\-scan \-\-delay=120"
3147 If (and only if) there is an Email address or program given in the
3148 standard config file, then
3149 monitor the status of all arrays listed in that file by
polling them every 2 minutes.
3152 .B " mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/hd[ac]1"
3154 Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
3157 .B " echo 'DEVICE /dev/hd*[0\-9] /dev/sd*[0\-9]' > mdadm.conf"
3159 .B " mdadm \-\-detail \-\-scan >> mdadm.conf"
3161 This will create a prototype config file that describes currently
3162 active arrays that are known to be made from partitions of IDE or SCSI drives.
3163 This file should be reviewed before being used as it may
3164 contain unwanted detail.
3166 .B " echo 'DEVICE /dev/hd[a\-z] /dev/sd*[a\-z]' > mdadm.conf"
3168 .B " mdadm \-\-examine \-\-scan \-\-config=mdadm.conf >> mdadm.conf"
This will find arrays which could be assembled from existing IDE and
SCSI whole drives (not partitions), and store the information in the
format of a config file.
This file is very likely to contain unwanted detail, particularly
the
.B devices=
entries. It should be reviewed and edited before being used as an
actual config file.
3179 .B " mdadm \-\-examine \-\-brief \-\-scan \-\-config=partitions"
3181 .B " mdadm \-Ebsc partitions"
3183 Create a list of devices by reading
3184 .BR /proc/partitions ,
scan these for RAID superblocks, and print out a brief listing of all
that were found.
3188 .B " mdadm \-Ac partitions \-m 0 /dev/md0"
Scan all partitions and devices listed in
.BR /proc/partitions
and assemble
.B /dev/md0
out of all such devices with a RAID superblock with a minor number of 0.
3196 .B " mdadm \-\-monitor \-\-scan \-\-daemonise > /run/mdadm/mon.pid"
If the config file contains a mail address or alert program, run mdadm in
the background in monitor mode monitoring all md devices. Also write
the pid of the mdadm daemon to
3201 .BR /run/mdadm/mon.pid .
3203 .B " mdadm \-Iq /dev/somedevice"
Try to incorporate a newly discovered device into some array as
appropriate.
3208 .B " mdadm \-\-incremental \-\-rebuild\-map \-\-run \-\-scan"
Rebuild the array map from any current arrays, and then start any that
can be started.
3213 .B " mdadm /dev/md4 --fail detached --remove detached"
Any devices which are components of /dev/md4 will be marked as faulty
and then removed from the array.
3218 .B " mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4"
The array
.B /dev/md4
which is currently a RAID5 array will be converted to RAID6. There
should normally already be a spare drive attached to the array as a
RAID6 needs one more drive than a matching RAID5.
3226 .B " mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]"
3228 Create a DDF array over 6 devices.
3230 .B " mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf"
3232 Create a RAID5 array over any 3 devices in the given DDF set. Use
3233 only 30 gigabytes of each device.
3235 .B " mdadm -A /dev/md/ddf1 /dev/sd[a-f]"
Assemble a pre-existing ddf array.
3239 .B " mdadm -I /dev/md/ddf1"
Assemble all arrays contained in the ddf array, assigning names as
appropriate.
3244 .B " mdadm \-\-create \-\-help"
3246 Provide help about the Create mode.
3248 .B " mdadm \-\-config \-\-help"
3250 Provide help about the format of the config file.
3252 .B " mdadm \-\-help"
3254 Provide general help.
.SH FILES
.SS /proc/mdstat
If you're using the
.B /proc
filesystem,
.B /proc/mdstat
lists all active md devices with information about them.
.I mdadm
uses this to find arrays when
.B \-\-scan
is given in Misc mode, and to monitor array reconstruction
on Monitor mode.
.SS /etc/mdadm.conf
The config file lists which devices may be scanned to see if
they contain an MD superblock, and gives identifying information
(e.g. UUID) about known MD arrays. See
.BR mdadm.conf (5)
for more details.
3279 .SS /etc/mdadm.conf.d
A directory containing configuration files which are read in lexical
order.
.SS /run/mdadm/map
When
.B \-\-incremental
mode is used, this file gets a list of arrays currently being created.
.SH DEVICE NAMES
.I mdadm
understands two sorts of names for array devices.
.P
The first is the so-called 'standard' format name, which matches the
names used by the kernel and which appear in
.IR /proc/mdstat .
.P
The second sort can be freely chosen, but must reside in
.IR /dev/md/ .
When giving a device name to
.I mdadm
to create or assemble an array, either full path name such as
.I /dev/md0
or
.I /dev/md/home
can be given, or just the suffix of the second sort of name, such as
.I home
was given.
.P
When
.I mdadm
3312 chooses device names during auto-assembly or incremental assembly, it
3313 will sometimes add a small sequence number to the end of the name to
avoid conflict between multiple arrays that have the same name. If
.I mdadm
can reasonably determine that the array really is meant for this host,
either by a hostname in the metadata, or by the presence of the array
in
.B mdadm.conf
or
.BR /run/mdadm/map ,
then it will leave off the suffix if possible.
Also if the homehost is specified as
.BR <ignore> ,
.I mdadm
will only use a suffix if a different array of the same name already
exists or is listed in the config file.
The standard names for non-partitioned arrays (the only sort of md
array available in 2.4 and earlier) are of the form
.IP
.BI /dev/md NN
.PP
where NN is a number.
The standard names for partitionable arrays (as available from 2.6
onwards) are of the form:
.IP
.BI /dev/md_d NN
.PP
Partition numbers should be indicated by adding "pMM" to these, thus "/dev/md/d1p2".
From kernel version 2.6.28 the "non-partitioned array" can actually
be partitioned. So the "md_d\fBNN\fP"
names are no longer needed, and
partitions such as "/dev/md\fBNN\fPp\fBXX\fP"
are possible.
From kernel version 2.6.29 standard names can be non-numeric following
the form
.BI /dev/md_ XXX
where
.I XXX
is any string. These names are supported by
.I mdadm
since version 3.3 provided they are enabled in
.IR mdadm.conf .
.SH NOTE
.I mdadm
was previously known as
.IR mdctl .
.SH SEE ALSO
For further information on mdadm usage, MD and the various levels of
RAID, see:
.IP
.B http://raid.wiki.kernel.org/
.PP
(based upon Jakob \(/Ostergaard's Software\-RAID.HOWTO)
The latest version of
.I mdadm
should always be available from
3375 .B http://www.kernel.org/pub/linux/utils/raid/mdadm/