2 .TH MDADM 8 "" v2.0-devel-3
4 mdadm \- manage MD devices
10 .BI mdadm " [mode] <raiddevice> [options] <component-devices>"
13 RAID devices are virtual devices created from two or more
14 real block devices. This allows multiple devices (typically disk
15 drives or partitions thereof) to be combined into a single device to
16 hold (for example) a single filesystem.
17 Some RAID levels include redundancy and so can survive some degree of
20 Linux Software RAID devices are implemented through the md (Multiple
21 Devices) device driver.
23 Currently, Linux supports
39 is not a Software RAID mechanism, but does involve
42 each device is a path to one common physical storage device.
45 is also not true RAID, and it only involves one device. It
46 provides a layer over a true device that can be used to inject faults.
49 '''is a program that can be used to create, manage, and monitor
51 '''such it provides a similar set of functionality to the
54 '''The key differences between
61 '''is a single program and not a collection of programs.
64 '''can perform (almost) all of its functions without having a
65 '''configuration file and does not use one by default. Also
67 '''helps with management of the configuration
71 '''can provide information about your arrays (through Query, Detail, and Examine)
81 '''configuration file, at all. It has a different configuration file
82 '''with a different format and a different purpose.
85 mdadm has 7 major modes of operation:
88 Assemble the parts of a previously created
89 array into an active array. Components can be explicitly given
90 or can be searched for.
92 checks that the components
93 do form a bona fide array, and can, on request, fiddle superblock
94 information so as to assemble a faulty array.
98 Build an array that doesn't have per-device superblocks. For these
101 cannot differentiate between initial creation and subsequent assembly
102 of an array. It also cannot perform any checks that appropriate
103 devices have been requested. Because of this, the
105 mode should only be used together with a complete understanding of
110 Create a new array with per-device superblocks.
112 '''in several steps (create, add, add, run) or it can all happen with one command.
116 This is for doing things to specific components of an array such as
117 adding new spares and removing faulty devices.
121 This mode allows operations on independent devices such as examining MD
122 superblocks, erasing old superblocks, and stopping active arrays.
125 .B "Follow or Monitor"
126 Monitor one or more md devices and act on any state changes. This is
127 only meaningful for raid1, 4, 5, 6, 10 or multipath arrays as
128 only these have interesting state. raid0 or linear never have
129 missing, spare, or failed drives, so there is nothing to monitor.
133 Grow (or shrink) an array, or otherwise reshape it in some way.
134 Currently supported growth options include changing the active size
135 of component devices in RAID level 1/4/5/6 and changing the number of
136 active devices in RAID1.
140 Available options are:
143 .BR -A ", " --assemble
144 Assemble a pre-existing array.
148 Build a legacy array without superblocks.
156 Examine a device to see
157 (1) if it is an md device and (2) if it is a component of an md array.
159 Information about what is discovered is presented.
163 Print details of one or more md devices.
166 .BR -E ", " --examine
167 Print content of md superblock on device(s).
170 .BR -F ", " --follow ", " --monitor
177 Change the size or shape of an active array.
180 .BR -X ", " --examine-bitmap
181 Report information about a bitmap file.
185 Display general help message or, after one of the above options, a
186 mode specific help message.
190 Display more detailed help about command line parsing and some commonly
194 .BR -V ", " --version
195 Print version information for mdadm.
198 .BR -v ", " --verbose
199 Be more verbose about what is happening. This can be used twice to be extra verbose.
201 The extra verbosity currently only affects
204 .BR "--examine --scan" .
208 Avoid printing purely informative messages. With this,
210 will be silent unless there is something really important to report.
214 Be less verbose. This is used with
222 gives an intermediate level of verbosity.
225 .BR -W ", " --write-mostly
226 subsequent devices listed in a
231 command will be flagged as 'write-mostly'. This is valid for RAID1
232 only and means that the 'md' driver will avoid reading from these
233 devices if at all possible. This can be useful if mirroring over a
237 .BR -b ", " --bitmap=
238 Give the name of a bitmap file to use with this array. Can be used
239 with --create (file should not exist), --assemble (file should
240 exist), or --grow (file should not exist).
244 can be used to indicate that the bitmap should be stored in the array,
245 near the superblock. There is a limited amount of space for such
246 bitmaps, but it is often sufficient.
250 can be given when used with --grow to remove a bitmap.
254 Set the Chunksize of the bitmap. Each bit corresponds to that many
255 Kilobytes of storage. Default is 4.
259 Specify that write-behind mode should be enabled (valid for RAID1
260 only). If an argument is specified, it will set the maximum number
261 of outstanding writes allowed. The default value is 256.
262 A write-intent bitmap is required in order to use write-behind
263 mode, and write-behind is only attempted on drives marked as
269 Be more forceful about certain operations. See the various modes for
270 the exact meaning of this option in different contexts.
273 .BR -c ", " --config=
274 Specify the config file. Default is
275 .BR /etc/mdadm.conf .
276 If the config file given is
278 then nothing will be read, but
280 will act as though the config file contained exactly
281 .B "DEVICE partitions"
284 to find a list of devices to scan.
287 is given for the config file, then
289 will act as though the config file were empty.
295 for missing information.
296 In general, this option gives
298 permission to get any missing information, like component devices,
299 array devices, array identities, and alert destination from the
301 .BR /etc/mdadm.conf .
302 One exception is MISC mode when using
308 says to get a list of array devices from
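As an illustration, the sort of information these options read from the configuration file can be sketched as a minimal /etc/mdadm.conf. The device patterns, UUID, and mail address below are hypothetical examples, not defaults:

```
# Hypothetical /etc/mdadm.conf sketch -- values are examples only.
# Which devices to scan for md superblocks:
DEVICE /dev/hd*[0-9] /dev/sd*[0-9]
# Identity of an array, so --assemble --scan can find its components:
ARRAY /dev/md0 uuid=3aaa0122:29827cfa:5331ad66:ca767371
# Where --monitor should send alerts:
MAILADDR root@localhost
```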
312 .BR -e ", " --metadata=
313 Declare the style of superblock (raid metadata) to be used. The
314 default is 0.90 for --create, and to guess for other operations.
318 .IP "0, 0.90, default"
319 Use the original 0.90 format superblock. This format limits arrays to
320 28 component devices and limits component devices of levels 1 and
321 greater to 2 terabytes.
322 .IP "1, 1.0, 1.1, 1.2"
323 Use the new version-1 format superblock. This has few restrictions.
324 The different sub-versions store the superblock at different locations
325 on the device, either at the end (for 1.0), at the start (for 1.1) or
326 4K from the start (for 1.2).
329 .SH For create or build:
333 Specify chunk size in kibibytes. The default is 64.
337 Specify rounding factor for linear array (==chunk size)
341 Set raid level. When used with
343 options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
344 raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty. Obviously some of these are synonymous.
348 only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.
351 .BR -p ", " --layout=
352 This option configures the fine details of data layout for raid5,
353 and raid10 arrays, and controls the failure modes for
356 The layout of the raid5 parity block can be one of
361 la, ra, ls, rs. The default is left-symmetric.
363 When setting the failure mode for
381 Each mode can be followed by a number which is used as a period
382 between fault generation. Without a number, the fault is generated
383 once on the first relevant request. With a number, the fault will be
384 generated after that many requests, and will continue to be generated
385 every time the period elapses.
387 Multiple failure modes can be in effect simultaneously by using the
388 "--grow" option to set subsequent failure modes.
390 "clear" or "none" will remove any pending or periodic failure modes,
391 and "flush" will clear any persistent faults.
393 To set the parity with "--grow", the level of the array ("faulty")
394 must be specified before the fault mode is specified.
396 Finally, the layout options for RAID10 are either 'n' or 'p' followed
397 by a small number. The default is 'n2'.
400 signals 'near' copies (multiple copies of one data block are at
401 similar offsets in different devices) while
404 (multiple copies have very different offsets). See md(4) for more
405 detail about 'near' and 'far'.
407 The number is the number of copies of each data block. 2 is normal, 3
408 can be useful. This number can be at most equal to the number of
409 devices in the array. It does not need to divide evenly into that
410 number (e.g. it is perfectly legal to have an 'n2' layout for an array
411 with an odd number of devices).
415 same as --layout (thus explaining the p of
419 .BR -b ", " --bitmap=
420 Specify a file to store a write-intent bitmap in. The file should not
421 exist unless --force is also given. The same file should be provided
422 when assembling the array.
426 Specify the chunksize for the bitmap.
429 .BR -n ", " --raid-devices=
430 Specify the number of active devices in the array. This, plus the
431 number of spare devices (see below) must equal the number of
433 (including "\fBmissing\fP" devices)
434 that are listed on the command line for
436 Setting a value of 1 is probably
437 a mistake and so requires that
439 be specified first. A value of 1 will then be allowed for linear,
440 multipath, raid0 and raid1. It is never allowed for raid4 or raid5.
442 This number can only be changed using
444 for RAID1 arrays, and only on kernels which provide necessary support.
447 .BR -x ", " --spare-devices=
448 Specify the number of spare (eXtra) devices in the initial array.
449 Spares can also be added
450 and removed later. The number of component devices listed
451 on the command line must equal the number of raid devices plus the
452 number of spare devices.
457 Amount (in Kibibytes) of space to use from each drive in RAID1/4/5/6.
458 This must be a multiple of the chunk size, and must leave about 128K
459 of space at the end of the drive for the RAID superblock.
460 If this is not specified
461 (as it normally is not) the smallest drive (or partition) sets the
462 size, though if there is a variance among the drives of greater than 1%, a warning is issued.
465 This value can be set with
467 for RAID level 1/4/5/6. If the array was created with a size smaller
468 than the currently active drives, the extra space can be accessed
471 The size can be given as \fBmax\fP,
473 which means to choose the largest size that fits on all current drives.
479 that the array pre-existed and is known to be clean. This is only
480 really useful for building a RAID1 array. Only use this if you really
481 know what you are doing. This is currently only supported for --build.
487 for the array. This is currently only effective when creating an
488 array with a version-1 superblock. The name is a simple textual
489 string that can be used to identify array components when assembling.
495 run the array, even if some of the components
496 appear to be active in another array or filesystem. Normally
498 will ask for confirmation before including such components in an
499 array. This option causes that question to be suppressed.
505 accept the geometry and layout specified without question. Normally
507 will not allow creation of an array with only one device, and will try
508 to create a raid5 array with one missing drive (as this makes the
509 initial resync work faster). With
512 will not try to be so clever.
515 .BR -a ", " "--auto{=no,yes,md,mdp,part,p}{NN}"
516 Instruct mdadm to create the device file if needed, possibly allocating
517 an unused minor number. "md" causes a non-partitionable array
518 to be used. "mdp", "part" or "p" causes a partitionable array (2.6 and
519 later) to be used. "yes" requires the named md device to have a
520 'standard' format, and the type and minor number will be determined
521 from this. See DEVICE NAMES below.
523 The argument can also come immediately after
528 is also given, then any
530 entries in the config file will over-ride the
532 instruction given on the command line.
534 For partitionable arrays,
536 will create the device file for the whole array and for the first 4
537 partitions. A different number of partitions can be specified at the
538 end of this option (e.g.
540 If the device name ends with a digit, the partition names add a 'p'
541 and a number, e.g. "/dev/home1p3". If there is no
542 trailing digit, then the partition names just have a number added,
543 e.g. "/dev/scratch3".
545 If the md device name is in a 'standard' format as described in DEVICE
546 NAMES, then it will be created, if necessary, with the appropriate
547 number based on that name. If the device name is not in one of these
548 formats, then an unused minor number will be allocated. The minor
549 number will be considered unused if there is no active array for that
550 number, and there is no entry in /dev for that number and with a
557 uuid of the array to assemble. Devices which don't have this uuid are excluded.
561 .BR -m ", " --super-minor=
562 Minor number of device that array was created for. Devices which
563 don't have this minor number are excluded. If you create an array as
564 /dev/md1, then all superblocks will contain the minor number 1, even if
565 the array is later assembled as /dev/md2.
567 Giving the literal word "dev" for
571 to use the minor number of the md device that is being assembled.
575 will look for super blocks with a minor number of 0.
579 Specify the name of the array to assemble. This must be the name
580 that was specified when creating the array.
584 Assemble the array even if some superblocks appear out-of-date
588 Attempt to start the array even if fewer drives were given than are
589 needed for a full array. Normally if not all drives are found and
591 is not used, then the array will be assembled but not started.
594 an attempt will be made to start it anyway.
597 .BR -a ", " "--auto{=no,yes,md,mdp,part}"
598 See this option under Create and Build options.
601 .BR -b ", " --bitmap=
602 Specify the bitmap file that was given when the array was created.
605 .BR -U ", " --update=
606 Update the superblock on each device while assembling the array. The
607 argument given to this flag can be one of
617 option will adjust the superblock of an array that was created on a Sparc
618 machine running a patched 2.2 Linux kernel. This kernel got the
619 alignment of part of the superblock wrong. You can use the
620 .B "--examine --sparc2.2"
623 to see what effect this would have.
627 option will update the
629 field on each superblock to match the minor number of the array being
630 assembled. This is not needed on 2.6 and later kernels as they make
631 this adjustment automatically.
635 option will cause the array to be marked
637 meaning that any redundancy in the array (e.g. parity for raid5,
638 copies for raid1) may be incorrect. This will cause the raid system
639 to perform a "resync" pass to make sure that all redundant information is correct.
644 option allows arrays to be moved between machines with different byte-order.
646 When assembling such an array for the first time after a move, giving
647 .B "--update=byteorder"
650 to expect superblocks to have their byteorder reversed, and will
651 correct that order before assembling the array. This is only valid
652 with original (Version 0.90) superblocks.
656 option will correct the summaries in the superblock. That is the
657 counts of total, working, active, failed, and spare devices.
664 hot-add listed devices.
668 Listed devices are assumed to have recently been part of the array,
669 and they are re-added. This is only different from --add when a
670 write-intent bitmap is present. It causes only those parts of the
671 device that have changed since the device was removed from the array
674 This flag is only needed with arrays that are built without a
675 superblock (i.e. --build, not --create). For arrays with a superblock,
677 checks if a superblock is present and automatically determines if a
678 re-add is appropriate.
682 remove listed devices. They must not be active; i.e. they should
683 be failed or spare devices.
687 mark listed devices as faulty.
693 .SH For Examine mode:
697 If an array was created on a 2.2 Linux kernel patched with RAID
698 support, the superblock will have been created incorrectly, or at
699 least incompatibly with 2.4 and later kernels. Using the
703 will fix the superblock before displaying it. If this appears to do
704 the right thing, then the array can be successfully assembled using
705 .BR "--assemble --update=sparc2.2" .
711 start a partially built array.
715 deactivate array, releasing all resources.
718 .BR -o ", " --readonly
719 mark array as readonly.
722 .BR -w ", " --readwrite
723 mark array as readwrite.
727 If the device contains a valid md superblock, the block is
728 over-written with zeros. With
730 the block where the superblock would be is over-written even if it
731 doesn't appear to be valid.
739 is set to reflect the status of the device.
741 .SH For Monitor mode:
744 Give a mail address to send alerts to.
747 .BR -p ", " --program ", " --alert
748 Give a program to be run whenever an event is detected.
752 Give a delay in seconds.
754 polls the md arrays and then waits this many seconds before polling
755 again. The default is 60 seconds.
758 .BR -f ", " --daemonise
761 to run as a background daemon if it decides to monitor anything. This
762 causes it to fork and run in the child, and to disconnect from the
763 terminal. The process id of the child is written to stdout.
766 which will only continue monitoring if a mail address or alert program
767 is found in the config file.
770 .BR -i ", " --pid-file
773 is running in daemon mode, write the pid of the daemon process to
774 the specified file, instead of printing it on standard output.
777 .BR -1 ", " --oneshot
778 Check arrays only once. This will generate
780 events and more significantly
786 .B " mdadm --monitor --scan -1"
788 from a cron script will ensure regular notification of any degraded arrays.
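For instance, a crontab entry along the following lines would run such a one-shot scan hourly. The schedule is an arbitrary example, not a recommendation from this manual:

```
# Hypothetical crontab entry: run a one-shot monitor scan at the top of each hour.
0 * * * *  mdadm --monitor --scan --oneshot
```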
794 alert for every array found at startup. This alert gets mailed and
795 passed to the alert program. This can be used for testing that alert
796 messages do get through successfully.
803 .I md-device options-and-component-devices...
806 .B mdadm --assemble --scan
807 .I md-devices-and-options...
810 .B mdadm --assemble --scan
814 This usage assembles one or more raid arrays from pre-existing components.
815 For each array, mdadm needs to know the md device, the identity of the
816 array, and a number of component-devices. These can be found in a number of ways.
818 In the first usage example (without the
820 the first device given is the md device.
821 In the second usage example, all devices listed are treated as md
822 devices and assembly is attempted.
823 In the third (where no devices are listed) all md devices that are
824 listed in the configuration file are assembled.
826 If precisely one device is listed, but
832 was given and identity information is extracted from the configuration file.
834 The identity can be given with the
838 option, can be found in the config file, or will be taken from the
839 super block on the first component-device listed on the command line.
841 Devices can be given on the
843 command line or in the config file. Only devices which have an md
844 superblock which contains the right identity will be considered for
847 The config file is only used if explicitly named with
849 or requested with (a possibly implicit)
857 is not given, then the config file will only be used to find the
858 identity of md arrays.
860 Normally the array will be started after it is assembled. However if
862 is not given and insufficient drives were listed to start a complete
863 (non-degraded) array, then the array is not started (to guard against
864 usage errors). To insist that the array be started in this case (as
865 may work for RAID1, 4, 5, 6, or 10), give the
871 option is given, either on the command line (--auto) or in the
872 configuration file (e.g. auto=part), then
874 will create the md device if necessary or will re-create it if it
875 doesn't look usable as it is.
877 This can be useful for handling partitioned devices (which don't have
878 a stable device number - it can change after a reboot) and when using
879 "udev" to manage your
881 tree (udev cannot handle md devices because of the unusual device
882 initialisation conventions).
884 If the option to "auto" is "mdp" or "part" or (on the command line
885 only) "p", then mdadm will create a partitionable array, using the
886 first free one that is not in use, and does not already have an entry
887 in /dev (apart from numeric /dev/md* entries).
889 If the option to "auto" is "yes" or "md" or (on the command line)
890 nothing, then mdadm will create a traditional, non-partitionable md
893 It is expected that the "auto" functionality will be used to create
894 device entries with meaningful names such as "/dev/md/home" or
895 "/dev/md/root", rather than names based on the numerical array number.
897 When using this option to create a partitionable array, the device
898 files for the first 4 partitions are also created. If a different
899 number is required it can be simply appended to the auto option.
900 e.g. "auto=part8". Partition names are created by appending a digit
901 string to the device name, with an intervening "p" if the device name ends with a digit.
906 option is also available in Build and Create modes. As those modes do
907 not use a config file, the "auto=" config option does not apply to these modes.
918 .BI --raid-devices= Z
922 This usage is similar to
924 The difference is that it creates an array without a superblock. With
925 these arrays there is no difference between initially creating the array and
926 subsequently assembling the array, except that hopefully there is useful
927 data there in the second case.
929 The level may be raid0, linear, multipath, or faulty, or one of their
930 synonyms. All devices must be listed and the array will be started
942 .BI --raid-devices= Z
946 This usage will initialise a new md array, associate some devices with
947 it, and activate the array.
951 option is given (as described in more detail in the section on
952 Assemble mode), then the md device will be created with a suitable
953 device number if necessary.
955 As devices are added, they are checked to see if they contain raid
956 superblocks or filesystems. They are also checked to see if the variance in
957 device size exceeds 1%.
959 If any discrepancy is found, the array will not automatically be run, though
962 can override this caution.
964 To create a "degraded" array in which some devices are missing, simply
965 give the word "\fBmissing\fP"
966 in place of a device name. This will cause
968 to leave the corresponding slot in the array empty.
969 For a RAID4 or RAID5 array at most one slot can be
970 "\fBmissing\fP"; for a RAID6 array at most two slots.
971 For a RAID1 array, only one real device needs to be given. All of the
975 When creating a RAID5 array,
977 will automatically create a degraded array with an extra spare drive.
978 This is because building the spare into a degraded array is in general faster than resyncing
979 the parity on a non-degraded, but not clean, array. This feature can
980 be over-ridden with the
986 '''option is given, it is not necessary to list any component-devices in this command.
987 '''They can be added later, before a
991 '''is given, the apparent size of the smallest drive given is used.
993 The General Management options that are valid with --create are:
996 insist on running the array even if some devices look like they might be in use.
1001 start the array readonly - not supported yet.
1008 .I options... devices...
1011 This usage will allow individual devices in an array to be failed,
1012 removed or added. It is possible to perform multiple operations in
1013 one command. For example:
1015 .B " mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1"
1021 and will then remove it from the array and finally add it back
1022 in as a spare. However only one md array can be affected by a single command.
1033 MISC mode includes a number of distinct operations that
1034 operate on distinct devices. The operations are:
1037 The device is examined to see if it is
1038 (1) an active md array, or
1039 (2) a component of an md array.
1040 The information discovered is reported.
1044 The device should be an active md device.
1046 will display a detailed description of the array.
1050 will cause the output to be less detailed and the format to be
1051 suitable for inclusion in
1052 .BR /etc/mdadm.conf .
1055 will normally be 0 unless
1057 failed to get useful information about the device(s). However if the
1059 option is given, then the exit status will be:
1063 The array is functioning normally.
1066 The array has at least one failed device.
1069 The array has multiple failed devices and hence is unusable (raid4 or raid5).
1073 There was an error while trying to get information about the device.
1078 The device should be a component of an md array.
1080 will read the md superblock of the device and display the contents.
1085 then multiple devices that are components of the one array
1086 are grouped together and reported in a single entry suitable
1088 .BR /etc/mdadm.conf .
1092 without listing any devices will cause all devices listed in the
1093 config file to be examined.
1097 The devices should be active md arrays which will be deactivated, as
1098 long as they are not currently in use.
1102 This will fully activate a partially assembled md array.
1106 This will mark an active array as read-only, providing that it is
1107 not currently being used.
1113 array back to being read/write.
1117 For all operations except
1120 will cause the operation to be applied to all arrays listed in
1125 causes all devices listed in the config file to be examined.
1133 .I options... devices...
1138 to periodically poll a number of md arrays and to report on any events
1141 will never exit once it decides that there are arrays to be checked,
1142 so it should normally be run in the background.
1144 As well as reporting events,
1146 may move a spare drive from one array to another if they are in the
1149 and if the destination array has a failed drive but no spares.
1151 If any devices are listed on the command line,
1153 will only monitor those devices. Otherwise all arrays listed in the
1154 configuration file will be monitored. Further, if
1156 is given, then any other md devices that appear in
1158 will also be monitored.
1160 The result of monitoring the arrays is the generation of events.
1161 These events are passed to a separate program (if specified) and may
1162 be mailed to a given E-mail address.
1164 When passing events to the program, the program is run once for each event
1165 and is given 2 or 3 command-line arguments. The first is the
1166 name of the event (see below). The second is the name of the
1167 md device which is affected, and the third is the name of a related
1168 device if relevant, such as a component device that has failed.
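A minimal alert handler, sketched here as a shell function for illustration, shows how those 2 or 3 arguments might be consumed. The function name and message format are assumptions of this sketch, not anything mdadm defines:

```shell
# Hypothetical alert handler for use with --program / --alert.
# mdadm passes: $1 = event name, $2 = md device, $3 = related device (optional).
handle_mdadm_event() {
    EVENT="$1"
    ARRAY="$2"
    COMPONENT="${3:-none}"   # third argument is only present for some events
    echo "mdadm event: $EVENT on $ARRAY (component: $COMPONENT)"
}
```

Saved as an executable script that invokes the function with "$@", such a handler could be named with --program so that mdadm runs it once per event.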
1172 is given, then a program or an E-mail address must be specified on the
1173 command line or in the config file. If neither are available, then
1175 will not monitor anything.
1179 will continue monitoring as long as something was found to monitor. If
1180 no program or email is given, then each event is reported to
1183 The different events are:
1187 .B DeviceDisappeared
1188 An md array which previously was configured appears to no longer be
1193 was told to monitor an array which is RAID0 or Linear, then it will
1195 .B DeviceDisappeared
1196 with the extra information
1198 This is because RAID0 and Linear do not support the device-failed,
1199 hot-spare and resync operations which are monitored.
1203 An md array started reconstruction.
1209 is 20, 40, 60, or 80, this indicates that the rebuild has passed that
1210 percentage of the total.
1214 An md array that was rebuilding is no longer rebuilding, either because it
1215 finished normally or was aborted.
1219 An active component device of an array has been marked as faulty.
1223 A spare component device which was being rebuilt to replace a faulty
1228 A spare component device which was being rebuilt to replace a faulty
1229 device has been successfully rebuilt and has been made active.
1233 A new md array has been detected in the
1239 A newly noticed array appears to be degraded. This message is not
1240 generated when
1241 .I mdadm
1242 notices a drive failure which causes degradation, but only when
1243 .I mdadm
1244 notices that an array is degraded when it first sees the array.
1248 A spare drive has been moved from one array in a
1250 to another to allow a failed drive to be replaced.
1256 has been told, via the config file, that an array should have a certain
1257 number of spare devices, and
1259 detects that it has fewer than this number when it first sees the
1260 array, it will report a
1266 An array was found at startup, and the
1277 cause Email to be sent. All events cause the program to be run.
1278 The program is run with two or three arguments, they being the event
1279 name, the array device and possibly a second device.
1281 Each event has an associated array device (e.g.
1283 and possibly a second device. For
1288 the second device is the relevant component device.
1291 the second device is the array that the spare was moved from.
1295 to move spares from one array to another, the different arrays need to
1296 be labelled with the same
1298 in the configuration file. The
1300 name can be any string. It is only necessary that different spare
1301 groups use different names.
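As a sketch, two arrays sharing spares might be declared like this in the configuration file. The device names and UUIDs below are hypothetical; the only requirement is that the spare-group names match:

```
# Hypothetical config fragment: both arrays belong to spare-group "database",
# so --monitor may move a spare from one to the other on failure.
ARRAY /dev/md0 uuid=71052874:49312ad2:8f21d2ed:f2ab3a7c spare-group=database
ARRAY /dev/md1 uuid=1c2d0a5b:32b5e743:9a3fb666:c1d0e8a1 spare-group=database
```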
1305 detects that an array which is in a spare group has fewer active
1306 devices than necessary for the complete array, and has no spare
1307 devices, it will look for another array in the same spare group that
1308 has a full complement of working drives and a spare. It will then
1309 attempt to remove the spare from the second array and add it to the
1311 If the removal succeeds but the adding fails, then it is added back to
1315 The GROW mode is used for changing the size or shape of an active
1317 For this to work, the kernel must support the necessary change.
1318 Various types of growth may be added during 2.6 development, possibly
1319 including restructuring a raid5 array to have more active devices.
1321 Currently the only support available is to
1323 change the "size" attribute
1324 for RAID1, RAID5 and RAID6.
1326 change the "raid-disks" attribute of RAID1.
1328 add a write-intent bitmap to a RAID1 array.
1331 Normally when an array is built the "size" is taken from the smallest
1332 of the drives. If all the small drives in an array are, one at a
1333 time, removed and replaced with larger drives, then you could have an
1334 array of large drives with only a small amount used. In this
1335 situation, changing the "size" with "GROW" mode will allow the extra
1336 space to start being used. If the size is increased in this way, a
1337 "resync" process will start to make sure the new parts of the array
1340 Note that when an array changes size, any filesystem that may be
1341 stored in the array will not automatically grow to use the space. The
1342 filesystem will need to be explicitly told to use the extra space.
1344 A RAID1 array can work with any number of devices from 1 upwards
1345 (though 1 is not very useful). There may be times when you want to
1346 increase or decrease the number of active devices. Note that this is
1347 different from hot-add or hot-remove which changes the number of
1350 When reducing the number of devices in a RAID1 array, the slots which
are to be removed from the array must already be vacant. That is, the
devices which were in those slots must be failed and removed.
1354 When the number of devices is increased, any hot spares that are
1355 present will be activated immediately.
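The device-count changes described above might look like this (the
device names are illustrative):

```shell
# Grow a 2-device RAID1 to 3 active devices; any hot spare
# that is present is activated immediately.
mdadm --grow /dev/md0 --raid-devices=3

# To shrink back to 2, the slot being removed must be vacant:
# fail and remove the device occupying it first.
mdadm /dev/md0 --fail /dev/hdc1
mdadm /dev/md0 --remove /dev/hdc1
mdadm --grow /dev/md0 --raid-devices=2
```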
A write-intent bitmap can be added to, or removed from, an active RAID1
array. Either an internal bitmap, or a bitmap stored in a separate file,
can be added. Note that if you add a bitmap stored in a file which is
1360 in a filesystem that is on the raid array being affected, the system
1361 will deadlock. The bitmap must be on a separate filesystem.
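A sketch of adding and removing write-intent bitmaps (the file path is
an example only, and the bitmap file must live on a filesystem that is
not on the array itself):

```shell
# Add an internal bitmap, stored alongside the array's superblocks.
mdadm --grow /dev/md0 --bitmap=internal

# Or store the bitmap in a file on some *other* filesystem.
mdadm --grow /dev/md0 --bitmap=/var/md0-bitmap

# Remove the bitmap again.
mdadm --grow /dev/md0 --bitmap=none
```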
1365 .B " mdadm --query /dev/name-of-device"
This will find out if a given device is a RAID array, or is part of
one, and will provide brief information about the device.
1370 .B " mdadm --assemble --scan"
This will assemble and start all arrays listed in the standard config
file. This command will typically go in a system startup file.
1375 .B " mdadm --stop --scan"
This will shut down all arrays that can be shut down (i.e. are not
currently in use). This will typically go in a system shutdown script.
1380 .B " mdadm --follow --scan --delay=120"
1382 If (and only if) there is an Email address or program given in the
1383 standard config file, then
this command will monitor the status of all arrays listed in that
file, polling them every 2 minutes.
1387 .B " mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1"
1389 Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
1392 .B " echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf"
1394 .B " mdadm --detail --scan >> mdadm.conf"
1396 This will create a prototype config file that describes currently
1397 active arrays that are known to be made from partitions of IDE or SCSI drives.
1398 This file should be reviewed before being used as it may
1399 contain unwanted detail.
1401 .B " echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf"
1403 .B " mdadm --examine --scan --config=mdadm.conf >> mdadm.conf"
This will find what arrays could be assembled from existing IDE and
SCSI whole drives (not partitions) and store the information in the
format of a config file.
This file is very likely to contain unwanted detail, particularly the
.B devices=
entries. It should be reviewed and edited before being used as an
actual config file.
1414 .B " mdadm --examine --brief --scan --config=partitions"
1416 .B " mdadm -Ebsc partitions"
1418 Create a list of devices by reading
1419 .BR /proc/partitions ,
scan these for RAID superblocks, and print out a brief listing of all
md arrays found.
1423 .B " mdadm -Ac partitions -m 0 /dev/md0"
1425 Scan all partitions and devices listed in
.BR /proc/partitions
and assemble
.B /dev/md0
out of all such devices with a RAID superblock with a minor number of 0.
1431 .B " mdadm --monitor --scan --daemonise > /var/run/mdadm"
If the config file contains a mail address or alert program, run mdadm
in the background in monitor mode, monitoring all md devices. Also
write the pid of the mdadm daemon to
1436 .BR /var/run/mdadm .
1438 .B " mdadm --create --help"
Provide help about the Create mode.
1442 .B " mdadm --config --help"
1444 Provide help about the format of the config file.
1448 Provide general help.
1459 lists all active md devices with information about them.
1461 uses this to find arrays when
1463 is given in Misc mode, and to monitor array reconstruction
The config file lists which devices may be scanned to see if
they contain an MD superblock, and gives identifying information
1471 (e.g. UUID) about known MD arrays. See
1477 While entries in the /dev directory can have any format you like,
1479 has an understanding of 'standard' formats which it uses to guide its
1480 behaviour when creating device files via the
The standard names for non-partitioned arrays (the only sort of md
array available in 2.4 and earlier) are either of
1491 where NN is a number.
1492 The standard names for partitionable arrays (as available from 2.6
Partition numbers should be indicated by adding "pMM" to these, thus "/dev/md/d1p2".
1503 was previously known as
1507 is completely separate from the
1509 package, and does not use the
1511 configuration file at all.
1514 For information on the various levels of
1518 .UR http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
1519 http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
1522 '''for new releases of the RAID driver check out:
1525 '''.UR ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
1526 '''ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
1531 '''.UR http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
1532 '''http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/
The latest version of
1537 should always be available from
1539 .UR http://www.kernel.org/pub/linux/utils/raid/mdadm/
1540 http://www.kernel.org/pub/linux/utils/raid/mdadm/