2 .TH MDADM 8 "" v2.0-devel-3
4 mdadm \- manage MD devices
10 .BI mdadm " [mode] <raiddevice> [options] <component-devices>"
13 RAID devices are virtual devices created from two or more
14 real block devices. This allows multiple devices (typically disk
drives or partitions thereof) to be combined into a single device to
16 hold (for example) a single filesystem.
17 Some RAID levels include redundancy and so can survive some degree of
20 Linux Software RAID devices are implemented through the md (Multiple
21 Devices) device driver.
23 Currently, Linux supports
37 .B MULTIPATH is not a Software RAID mechanism, but does involve
40 each device is a path to one common physical storage device.
.B FAULTY is also not true RAID, and it only involves one device. It
43 provides a layer over a true device that can be used to inject faults.
46 is a program that can be used to create, manage, and monitor
48 such it provides a similar set of functionality to the
51 The key differences between
58 is a single program and not a collection of programs.
61 can perform (almost) all of its functions without having a
62 configuration file and does not use one by default. Also
64 helps with management of the configuration
68 can provide information about your arrays (through Query, Detail, and Examine)
78 configuration file, at all. It has a different configuration file
with a different format and a different purpose.
82 mdadm has 7 major modes of operation:
85 Assemble the parts of a previously created
86 array into an active array. Components can be explicitly given
87 or can be searched for.
89 checks that the components
90 do form a bona fide array, and can, on request, fiddle superblock
91 information so as to assemble a faulty array.
95 Build an array without per-device superblocks.
99 Create a new array with per-device superblocks.
101 '''in several step create-add-add-run or it can all happen with one command.
105 This is for doing things to specific components of an array such as
106 adding new spares and removing faulty devices.
This mode allows operations on independent devices, such as examining MD
superblocks, erasing old superblocks, and stopping active arrays.
114 .B "Follow or Monitor"
115 Monitor one or more md devices and act on any state changes. This is
116 only meaningful for raid1, 4, 5, 6 or multipath arrays as
117 only these have interesting state. raid0 or linear never have
118 missing, spare, or failed drives, so there is nothing to monitor.
122 Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
of component devices in RAID levels 1/4/5/6 and changing the number of
125 active devices in RAID1.
129 Available options are:
132 .BR -A ", " --assemble
133 Assemble a pre-existing array.
137 Build a legacy array without superblocks.
145 Examine a device to see
146 (1) if it is an md device and (2) if it is a component of an md
148 Information about what is discovered is presented.
152 Print detail of one or more md devices.
155 .BR -E ", " --examine
156 Print content of md superblock on device(s).
159 .BR -F ", " --follow ", " --monitor
166 Change the size or shape of an active array.
169 .BR -X ", " --examine-bitmap
170 Report information about a bitmap file.
Display a help message or, after one of the above options, mode-specific help
179 Display more detailed help about command line parsing and some commonly
183 .BR -V ", " --version
184 Print version information for mdadm.
187 .BR -v ", " --verbose
188 Be more verbose about what is happening. This can be used twice to be
190 This currently only affects
193 .BR "--examine --scan" .
197 Be less verbose. This is used with
205 gives an intermediate level of verbosity.
208 .BR -W ", " --write-mostly
subsequent devices listed in a
214 command will be flagged as 'write-mostly'. This is valid for RAID1
215 only and means that the 'md' driver will avoid reading from these
216 devices if at all possible. This can be useful if mirroring over a
220 .BR -b ", " --bitmap=
221 Give the name of a bitmap file to use with this array. Can be used
222 with --create (file should not exist) or --assemble (file should
Set the chunk size of the bitmap. Each bit corresponds to that many
228 Kilobytes of storage. Default is 4.
232 Specify that write-behind mode should be enabled (valid for RAID1
233 only). If an argument is specified, it will set the maximum number
234 of outstanding writes allowed. The default value is 256.
235 A write-intent bitmap is required in order to use write-behind
236 mode, and write-behind is only attempted on drives marked as
Be more forceful about certain operations. See the various modes for
243 the exact meaning of this option in different contexts.
246 .BR -c ", " --config=
247 Specify the config file. Default is
248 .BR /etc/mdadm.conf .
249 If the config file given is
251 then nothing will be read, but
253 will act as though the config file contained exactly
254 .B "DEVICE partitions"
257 to find a list of devices to scan.
260 is given for the config file, then
262 will act as though the config file were empty.
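As a sketch of the behaviour described above, a minimal configuration file equivalent to the built-in default might look like the following (the UUID value is an example only, not taken from this manual):

```
# Hypothetical /etc/mdadm.conf fragment.
# "DEVICE partitions" tells mdadm to scan every partition listed
# in /proc/partitions when looking for md superblocks.
DEVICE partitions
# An ARRAY line identifying one array by its UUID (example value).
ARRAY /dev/md0 uuid=3aaa0122:29827cfa:5331ad66:ca767371
```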
268 for missing information.
269 In general, this option gives
271 permission to get any missing information, like component devices,
272 array devices, array identities, and alert destination from the
274 .BR /etc/mdadm.conf .
275 One exception is MISC mode when using
281 says to get a list of array devices from
285 .B -e ", " --metadata=
286 Declare the style of superblock (raid metadata) to be used. The
287 default is 0.90 for --create, and to guess for other operations.
291 .IP "0, 0.90, default"
292 Use the original 0.90 format superblock. This format limits arrays to
28 component devices and limits component devices of levels 1 and
294 greater to 2 terabytes.
295 .IP "1, 1.0, 1.1, 1.2"
296 Use the new version-1 format superblock. This has few restrictions.
The different sub-versions store the superblock at different locations
298 on the device, either at the end (for 1.0), at the start (for 1.1) or
299 4K from the start (for 1.2).
302 .SH For create or build:
Specify chunk size in kibibytes. The default is 64.
Specify the rounding factor for a linear array (==chunk size).
314 Set raid level. When used with
316 options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
raid5, 5, raid6, 6, multipath, mp, faulty. Obviously some of these are synonymous.
321 only linear, raid0, 0, stripe are valid.
324 .BR -p ", " --parity=
325 Set raid5 parity algorithm. Options are:
330 la, ra, ls, rs. The default is left-symmetric.
332 This option is also used to set the failure mode for
350 Each mode can be followed by a number which is used as a period
351 between fault generation. Without a number, the fault is generated
352 once on the first relevant request. With a number, the fault will be
generated after that many requests, and will continue to be generated
354 every time the period elapses.
Multiple failure modes can be active simultaneously by using the
357 "--grow" option to set subsequent failure modes.
359 "clear" or "none" will remove any pending or periodic failure modes,
and "flush" will clear any persistent faults.
362 To set the parity with "--grow", the level of the array ("faulty")
363 must be specified before the fault mode is specified.
370 .BR -b ", " --bitmap=
371 Specify a file to store a write-intent bitmap in. The file should not
372 exist unless --force is also given. The same file should be provided
373 when assembling the array.
Specify the chunk size for the bitmap.
380 .BR -n ", " --raid-devices=
381 Specify the number of active devices in the array. This, plus the
382 number of spare devices (see below) must equal the number of
384 (including "\fBmissing\fP" devices)
385 that are listed on the command line for
387 Setting a value of 1 is probably
388 a mistake and so requires that
390 be specified first. A value of 1 will then be allowed for linear,
391 multipath, raid0 and raid1. It is never allowed for raid4 or raid5.
393 This number can only be changed using
395 for RAID1 arrays, and only on kernels which provide necessary support.
398 .BR -x ", " --spare-devices=
399 Specify the number of spare (eXtra) devices in the initial array.
400 Spares can also be added
401 and removed later. The number of component devices listed
402 on the command line must equal the number of raid devices plus the
403 number of spare devices.
408 Amount (in Kibibytes) of space to use from each drive in RAID1/4/5/6.
409 This must be a multiple of the chunk size, and must leave about 128Kb
410 of space at the end of the drive for the RAID superblock.
411 If this is not specified
412 (as it normally is not) the smallest drive (or partition) sets the
size, though if there is a variance among the drives of greater than 1%, a warning is issued.
416 This value can be set with
418 for RAID level 1/4/5/6. If the array was created with a size smaller
419 than the currently active drives, the extra space can be accessed
422 The size can be given as
424 which means to choose the largest size that fits all on all current drives.
430 that the array pre-existed and is known to be clean. This is only
really useful when building a RAID1 array. Only use this if you really
432 know what you are doing. This is currently only supported for --build.
438 for the array. This is currently only effective when creating an
439 array with a version-1 superblock. The name is a simple textual
440 string that can be used to identify array components when assembling.
446 run the array, even if some of the components
447 appear to be active in another array or filesystem. Normally
449 will ask for confirmation before including such components in an
450 array. This option causes that question to be suppressed.
456 accept the geometry and layout specified without question. Normally
458 will not allow creation of an array with only one device, and will try
459 to create a raid5 array with one missing drive (as this makes the
460 initial resync work faster). With
463 will not try to be so clever.
466 .BR -a ", " "--auto{=no,yes,md,mdp,part,p}{NN}"
467 Instruct mdadm to create the device file if needed, possibly allocating
468 an unused minor number. "md" causes a non-partitionable array
469 to be used. "mdp", "part" or "p" causes a partitionable array (2.6 and
470 later) to be used. "yes" requires the named md device to have a
471 'standard' format, and the type and minor number will be determined
472 from this. See DEVICE NAMES below.
The argument can also come immediately after
479 is also given, then any
481 entries in the config file will over-ride the
483 instruction given on the command line.
485 For partitionable arrays,
487 will create the device file for the whole array and for the first 4
488 partitions. A different number of partitions can be specified at the
489 end of this option (e.g.
If the device name ends with a digit, the partition names add a 'p',
492 and a number, e.g. "/dev/home1p3". If there is no
493 trailing digit, then the partition names just have a number added,
494 e.g. "/dev/scratch3".
496 If the md device name is in a 'standard' format as described in DEVICE
497 NAMES, then it will be created, if necessary, with the appropriate
498 number based on that name. If the device name is not in one of these
formats, then an unused minor number will be allocated. The minor
500 number will be considered unused if there is no active array for that
501 number, and there is no entry in /dev for that number and with a
UUID of the array to assemble. Devices which don't have this UUID are
512 .BR -m ", " --super-minor=
513 Minor number of device that array was created for. Devices which
514 don't have this minor number are excluded. If you create an array as
515 /dev/md1, then all superblocks will contain the minor number 1, even if
516 the array is later assembled as /dev/md2.
518 Giving the literal word "dev" for
522 to use the minor number of the md device that is being assembled.
will look for superblocks with a minor number of 0.
530 Specify the name of the array to assemble. This must be the name
531 that was specified when creating the array.
535 Assemble the array even if some superblocks appear out-of-date
539 Attempt to start the array even if fewer drives were given than are
540 needed for a full array. Normally if not all drives are found and
542 is not used, then the array will be assembled but not started.
545 an attempt will be made to start it anyway.
548 .BR -a ", " "--auto{=no,yes,md,mdp,part}"
549 See this option under Create and Build options.
552 .BR -b ", " --bitmap=
553 Specify the bitmap file that was given when the array was created.
556 .BR -U ", " --update=
557 Update the superblock on each device while assembling the array. The
558 argument given to this flag can be one of
option will adjust the superblock of an array that was created on a Sparc
569 machine running a patched 2.2 Linux kernel. This kernel got the
570 alignment of part of the superblock wrong. You can use the
571 .B "--examine --sparc2.2"
574 to see what effect this would have.
578 option will update the
580 field on each superblock to match the minor number of the array being
581 assembled. This is not needed on 2.6 and later kernels as they make
582 this adjustment automatically.
586 option will cause the array to be marked
588 meaning that any redundancy in the array (e.g. parity for raid5,
589 copies for raid1) may be incorrect. This will cause the raid system
590 to perform a "resync" pass to make sure that all redundant information
595 option allows arrays to be moved between machines with different
597 When assembling such an array for the first time after a move, giving
598 .B "--update=byteorder"
601 to expect superblocks to have their byteorder reversed, and will
602 correct that order before assembling the array. This is only valid
with original (Version 0.90) superblocks.
607 option will correct the summaries in the superblock. That is the
608 counts of total, working, active, failed, and spare devices.
hot-add listed devices.
619 remove listed devices. They must not be active. i.e. they should
620 be failed or spare devices.
624 mark listed devices as faulty.
630 .SH For Examine mode:
If an array was created on a 2.2 Linux kernel patched with RAID
635 support, the superblock will have been created incorrectly, or at
636 least incompatibly with 2.4 and later kernels. Using the
640 will fix the superblock before displaying it. If this appears to do
641 the right thing, then the array can be successfully assembled using
642 .BR "--assemble --update=sparc2.2" .
648 start a partially built array.
652 deactivate array, releasing all resources.
655 .BR -o ", " --readonly
656 mark array as readonly.
659 .BR -w ", " --readwrite
660 mark array as readwrite.
664 If the device contains a valid md superblock, the block is
665 over-written with zeros. With
667 the block where the superblock would be is over-written even if it
668 doesn't appear to be valid.
676 is set to reflect the status of the device.
678 .SH For Monitor mode:
681 Give a mail address to send alerts to.
684 .BR -p ", " --program ", " --alert
685 Give a program to be run whenever an event is detected.
689 Give a delay in seconds.
691 polls the md arrays and then waits this many seconds before polling
692 again. The default is 60 seconds.
695 .BR -f ", " --daemonise
698 to run as a background daemon if it decides to monitor anything. This
causes it to fork and run in the child, and to disconnect from the
700 terminal. The process id of the child is written to stdout.
703 which will only continue monitoring if a mail address or alert program
704 is found in the config file.
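For reference, a mail address or alert program of the kind mentioned here is given in the configuration file with lines such as the following (the address and program path are illustrative only):

```
# Hypothetical /etc/mdadm.conf fragment for Monitor mode.
# Alerts are mailed here...
MAILADDR root@localhost
# ...and also passed to this program, one invocation per event.
PROGRAM /root/bin/handle-mdadm-events
```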
707 .BR -i ", " --pid-file
710 is running in daemon mode, write the pid of the daemon process to
711 the specified file, instead of printing it on standard output.
714 .BR -1 ", " --oneshot
715 Check arrays only once. This will generate
717 events and more significantly
721 .B " mdadm --monitor --scan -1"
723 from a cron script will ensure regular notification of any degraded arrays.
729 alert for every array found at startup. This alert gets mailed and
passed to the alert program. This can be used to test that alert
messages get through successfully.
738 .I md-device options-and-component-devices...
741 .B mdadm --assemble --scan
742 .I md-devices-and-options...
745 .B mdadm --assemble --scan
749 This usage assembles one or more raid arrays from pre-existing components.
750 For each array, mdadm needs to know the md device, the identity of the
751 array, and a number of component-devices. These can be found in a number of ways.
753 In the first usage example (without the
755 the first device given is the md device.
756 In the second usage example, all devices listed are treated as md
757 devices and assembly is attempted.
758 In the third (where no devices are listed) all md devices that are
759 listed in the configuration file are assembled.
761 If precisely one device is listed, but
was given and identity information is extracted from the configuration file.
769 The identity can be given with the
773 option, can be found in the config file, or will be taken from the
774 super block on the first component-device listed on the command line.
776 Devices can be given on the
778 command line or in the config file. Only devices which have an md
779 superblock which contains the right identity will be considered for
782 The config file is only used if explicitly named with
784 or requested with (a possibly implicit)
792 is not given, then the config file will only be used to find the
793 identity of md arrays.
795 Normally the array will be started after it is assembled. However if
797 is not given and insufficient drives were listed to start a complete
798 (non-degraded) array, then the array is not started (to guard against
799 usage errors). To insist that the array be started in this case (as
800 may work for RAID1, 4, 5 or 6), give the
806 option is given, either on the command line (--auto) or in the
807 configuration file (e.g. auto=part), then
809 will create the md device if necessary or will re-create it if it
810 doesn't look usable as it is.
812 This can be useful for handling partitioned devices (which don't have
813 a stable device number - it can change after a reboot) and when using
814 "udev" to manage your
816 tree (udev cannot handle md devices because of the unusual device
817 initialisation conventions).
819 If the option to "auto" is "mdp" or "part" or (on the command line
820 only) "p", then mdadm will create a partitionable array, using the
first free one that is not in use, and does not already have an entry
822 in /dev (apart from numeric /dev/md* entries).
824 If the option to "auto" is "yes" or "md" or (on the command line)
825 nothing, then mdadm will create a traditional, non-partitionable md
828 It is expected that the "auto" functionality will be used to create
829 device entries with meaningful names such as "/dev/md/home" or
830 "/dev/md/root", rather than names based on the numerical array number.
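As an illustration of the "auto=" usage described above, an ARRAY line in the configuration file might look like this (the device name and UUID are examples only):

```
# Hypothetical mdadm.conf ARRAY line: request a partitionable
# array with device files for 8 partitions created on assembly.
ARRAY /dev/md/home auto=part8 uuid=3aaa0122:29827cfa:5331ad66:ca767371
```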
832 When using this option to create a partitionable array, the device
833 files for the first 4 partitions are also created. If a different
834 number is required it can be simply appended to the auto option.
835 e.g. "auto=part8". Partition names are created by appending a digit
836 string to the device name, with an intervening "_p" if the device name
841 option is also available in Build and Create modes. As those modes do
842 not use a config file, the "auto=" config option does not apply to
853 .BI --raid-devices= Z
857 This usage is similar to
859 The difference is that it creates a legacy array without a superblock. With
860 these arrays there is no difference between initially creating the array and
861 subsequently assembling the array, except that hopefully there is useful
862 data there in the second case.
864 The level may only be 0, raid0, or linear. All devices must be listed
865 and the array will be started once complete.
876 .BI --raid-devices= Z
880 This usage will initialise a new md array, associate some devices with
881 it, and activate the array.
885 option is given (as described in more detail in the section on
886 Assemble mode), then the md device will be created with a suitable
887 device number if necessary.
889 As devices are added, they are checked to see if they contain raid
890 superblocks or filesystems. They are also checked to see if the variance in
891 device size exceeds 1%.
893 If any discrepancy is found, the array will not automatically be run, though
896 can override this caution.
898 To create a "degraded" array in which some devices are missing, simply
899 give the word "\fBmissing\fP"
900 in place of a device name. This will cause
902 to leave the corresponding slot in the array empty.
903 For a RAID4 or RAID5 array at most one slot can be
904 "\fBmissing\fP"; for a RAID6 array at most two slots.
905 For a RAID1 array, only one real device needs to be given. All of the
909 When creating a RAID5 array,
911 will automatically create a degraded array with an extra spare drive.
912 This is because building the spare into a degraded array is in general faster than resyncing
913 the parity on a non-degraded, but not clean, array. This feature can
914 be over-ridden with the
920 '''option is given, it is not necessary to list any component-devices in this command.
921 '''They can be added later, before a
925 '''is given, the apparent size of the smallest drive given is used.
927 The General Management options that are valid with --create are:
930 insist on running the array even if some devices look like they might
935 start the array readonly - not supported yet.
942 .I options... devices...
945 This usage will allow individual devices in an array to be failed,
946 removed or added. It is possible to perform multiple operations with
one command. For example:
949 .B " mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1"
955 and will then remove it from the array and finally add it back
956 in as a spare. However only one md array can be affected by a single
967 MISC mode includes a number of distinct operations that
968 operate on distinct devices. The operations are:
971 The device is examined to see if it is
972 (1) an active md array, or
973 (2) a component of an md array.
974 The information discovered is reported.
978 The device should be an active md device.
980 will display a detailed description of the array.
984 will cause the output to be less detailed and the format to be
985 suitable for inclusion in
986 .BR /etc/mdadm.conf .
989 will normally be 0 unless
991 failed to get useful information about the device(s). However if the
993 option is given, then the exit status will be:
997 The array is functioning normally.
1000 The array has at least one failed device.
1003 The array has multiple failed devices and hence is unusable (raid4 or
1007 There was an error while trying to get information about the device.
1012 The device should be a component of an md array.
1014 will read the md superblock of the device and display the contents.
1019 then multiple devices that are components of the one array
1020 are grouped together and reported in a single entry suitable
1022 .BR /etc/mdadm.conf .
1026 without listing any devices will cause all devices listed in the
1027 config file to be examined.
1031 The devices should be active md arrays which will be deactivated, as
1032 long as they are not currently in use.
1036 This will fully activate a partially assembled md array.
1040 This will mark an active array as read-only, providing that it is
1041 not currently being used.
1047 array back to being read/write.
1051 For all operations except
1054 will cause the operation to be applied to all arrays listed in
1059 causes all devices listed in the config file to be examined.
1067 .I options... devices...
1072 to periodically poll a number of md arrays and to report on any events
1075 will never exit once it decides that there are arrays to be checked,
1076 so it should normally be run in the background.
1078 As well as reporting events,
1080 may move a spare drive from one array to another if they are in the
and if the destination array has a failed drive but no spares.
1085 If any devices are listed on the command line,
1087 will only monitor those devices. Otherwise all arrays listed in the
1088 configuration file will be monitored. Further, if
1090 is given, then any other md devices that appear in
1092 will also be monitored.
1094 The result of monitoring the arrays is the generation of events.
1095 These events are passed to a separate program (if specified) and may
1096 be mailed to a given E-mail address.
When passing events to the program, the program is run once for each event
and is given 2 or 3 command-line arguments. The first is the
1100 name of the event (see below). The second is the name of the
1101 md device which is affected, and the third is the name of a related
1102 device if relevant, such as a component device that has failed.
1106 is given, then a program or an E-mail address must be specified on the
1107 command line or in the config file. If neither are available, then
1109 will not monitor anything.
1113 will continue monitoring as long as something was found to monitor. If
1114 no program or email is given, then each event is reported to
1117 The different events are:
1121 .B DeviceDisappeared
1122 An md array which previously was configured appears to no longer be
1127 was told to monitor an array which is RAID0 or Linear, then it will
1129 .B DeviceDisappeared
1130 with the extra information
1132 This is because RAID0 and Linear do not support the device-failed,
1133 hot-spare and resync operations which are monitored.
1137 An md array started reconstruction.
is 20, 40, 60, or 80, this indicates that the rebuild has passed that
percentage of the total.
1148 An md array that was rebuilding, isn't any more, either because it
1149 finished normally or was aborted.
1153 An active component device of an array has been marked as faulty.
1157 A spare component device which was being rebuilt to replace a faulty
1162 A spare component device which was being rebuilt to replace a faulty
device has been successfully rebuilt and has been made active.
1167 A new md array has been detected in the
1173 A newly noticed array appears to be degraded. This message is not
1176 notices a drive failure which causes degradation, but only when
1178 notices that an array is degraded when it first sees the array.
1182 A spare drive has been moved from one array in a
1184 to another to allow a failed drive to be replaced.
1190 has been told, via the config file, that an array should have a certain
1191 number of spare devices, and
detects that it has fewer than this number when it first sees the
1194 array, it will report a
1200 An array was found at startup, and the
1211 cause Email to be sent. All events cause the program to be run.
1212 The program is run with two or three arguments, they being the event
1213 name, the array device and possibly a second device.
1215 Each event has an associated array device (e.g.
1217 and possibly a second device. For
1222 the second device is the relevant component device.
1225 the second device is the array that the spare was moved from.
1229 to move spares from one array to another, the different arrays need to
1230 be labelled with the same
1232 in the configuration file. The
1234 name can be any string. It is only necessary that different spare
1235 groups use different names.
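A sketch of two ARRAY lines sharing a spare group (device names and UUIDs are examples only):

```
# Hypothetical mdadm.conf fragment: both arrays belong to spare
# group "group1", so mdadm --monitor may move a spare from one
# array to the other when a drive fails.
ARRAY /dev/md0 uuid=3aaa0122:29827cfa:5331ad66:ca767371 spare-group=group1
ARRAY /dev/md1 uuid=f0582f12:7ae51de2:8f54e672:22e3dd1a spare-group=group1
```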
1239 detects that an array which is in a spare group has fewer active
1240 devices than necessary for the complete array, and has no spare
1241 devices, it will look for another array in the same spare group that
has a full complement of working drives and a spare. It will then
attempt to remove the spare from the second array and add it to the
1245 If the removal succeeds but the adding fails, then it is added back to
1249 The GROW mode is used for changing the size or shape of an active
1251 For this to work, the kernel must support the necessary change.
1252 Various types of growth may be added during 2.6 development, possibly
1253 including restructuring a raid5 array to have more active devices.
1255 Currently the only support available is to
1257 change the "size" attribute
1258 for RAID1, RAID5 and RAID6.
1260 change the "raid-disks" attribute of RAID1.
1262 add a write-intent bitmap to a RAID1 array.
Normally when an array is built, the "size" is taken from the smallest
of the drives. If all the small drives in an array are, one at a
1267 time, removed and replaced with larger drives, then you could have an
1268 array of large drives with only a small amount used. In this
1269 situation, changing the "size" with "GROW" mode will allow the extra
1270 space to start being used. If the size is increased in this way, a
1271 "resync" process will start to make sure the new parts of the array
1274 Note that when an array changes size, any filesystem that may be
1275 stored in the array will not automatically grow to use the space. The
1276 filesystem will need to be explicitly told to use the extra space.
1278 A RAID1 array can work with any number of devices from 1 upwards
(though 1 is not very useful). There may be times when you want to
1280 increase or decrease the number of active devices. Note that this is
1281 different to hot-add or hot-remove which changes the number of
1284 When reducing the number of devices in a RAID1 array, the slots which
1285 are to be removed from the array must already be vacant. That is, the
devices which were in those slots must be failed and removed.
1288 When the number of devices is increased, any hot spares that are
1289 present may be activated immediately.
1293 .B " mdadm --query /dev/name-of-device"
1295 This will find out if a given device is a raid array, or is part of
1296 one, and will provide brief information about the device.
1298 .B " mdadm --assemble --scan"
This will assemble and start all arrays listed in the standard config
1301 file. This command will typically go in a system startup file.
1303 .B " mdadm --stop --scan"
This will shut down all arrays that can be shut down (i.e. are not
1306 currently in use). This will typically go in a system shutdown script.
1308 .B " mdadm --follow --scan --delay=120"
1310 If (and only if) there is an Email address or program given in the
1311 standard config file, then
1312 monitor the status of all arrays listed in that file by
polling them every 2 minutes.
1315 .B " mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1"
1317 Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
1320 .B " echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf"
1322 .B " mdadm --detail --scan >> mdadm.conf"
1324 This will create a prototype config file that describes currently
1325 active arrays that are known to be made from partitions of IDE or SCSI drives.
1326 This file should be reviewed before being used as it may
1327 contain unwanted detail.
1329 .B " echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf"
1331 .B " mdadm --examine --scan --config=mdadm.conf >> mdadm.conf"
This will find what arrays could be assembled from existing IDE and
SCSI whole drives (not partitions) and store the information in the
1335 format of a config file.
1336 This file is very likely to contain unwanted detail, particularly
1339 entries. It should be reviewed and edited before being used as an
1342 .B " mdadm --examine --brief --scan --config=partitions"
1344 .B " mdadm -Ebsc partitions"
1346 Create a list of devices by reading
1347 .BR /proc/partitions ,
scan these for RAID superblocks, and print out a brief listing of all
1351 .B " mdadm -Ac partitions -m 0 /dev/md0"
1353 Scan all partitions and devices listed in
1354 .BR /proc/partitions
1357 out of all such devices with a RAID superblock with a minor number of 0.
1359 .B " mdadm --monitor --scan --daemonise > /var/run/mdadm"
If the config file contains a mail address or alert program, run mdadm in
the background in monitor mode, monitoring all md devices. Also write
the pid of the mdadm daemon to
1364 .BR /var/run/mdadm .
1366 .B " mdadm --create --help"
Provide help about the Create mode.
1370 .B " mdadm --config --help"
1372 Provide help about the format of the config file.
1376 Provide general help.
1387 lists all active md devices with information about them.
1389 uses this to find arrays when
1391 is given in Misc mode, and to monitor array reconstruction
1397 The config file lists which devices may be scanned to see if
they contain an MD superblock, and gives identifying information
1399 (e.g. UUID) about known MD arrays. See
1405 While entries in the /dev directory can have any format you like,
1407 has an understanding of 'standard' formats which it uses to guide its
1408 behaviour when creating device files via the
1412 The standard names for non-partitioned arrays (the only sort of md
array available in 2.4 and earlier) are either of
1419 where NN is a number.
1420 The standard names for partitionable arrays (as available from 2.6
Partition numbers should be indicated by adding "pMM" to these, thus "/dev/md/d1p2".
1431 was previously known as
1435 For information on the various levels of
1439 .UR http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
1440 http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
1443 for new releases of the RAID driver check out:
1446 .UR ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
1447 ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
1452 .UR http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
1453 http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/