1.\" -*- nroff -*-
2.\" Copyright Neil Brown and others.
3.\" This program is free software; you can redistribute it and/or modify
4.\" it under the terms of the GNU General Public License as published by
5.\" the Free Software Foundation; either version 2 of the License, or
6.\" (at your option) any later version.
7.\" See file COPYING in distribution for details.
8.TH MDADM 8 "" v4.4
9.SH NAME
10mdadm \- manage MD devices
11.I aka
12Linux Software RAID
13
14.SH SYNOPSIS
15
16.BI mdadm " [mode] <raiddevice> [options] <component-devices>"
17
18.SH DESCRIPTION
19RAID devices are virtual devices created from two or more
20real block devices. This allows multiple devices (typically disk
21drives or partitions thereof) to be combined into a single device to
22hold (for example) a single filesystem.
23Some RAID levels include redundancy and so can survive some degree of
24device failure.
25
26Linux Software RAID devices are implemented through the md (Multiple
27Devices) device driver.
28
29Currently, Linux supports
30.B LINEAR
31md devices,
32.B RAID0
33(striping),
34.B RAID1
35(mirroring),
36.BR RAID4 ,
37.BR RAID5 ,
38.BR RAID6 ,
39.BR RAID10 ,
40.BR MULTIPATH ,
41.BR FAULTY ,
42and
43.BR CONTAINER .
44
45.B MULTIPATH
46is not a Software RAID mechanism, but does involve
47multiple devices:
48each device is a path to one common physical storage device.
New installations should not use md/multipath as it is not well
supported and has no ongoing development; it is deprecated and support
will be removed in the future. Use the Device Mapper based
multipath-tools instead.
52
53.B FAULTY
54is also not true RAID, and it only involves one device. It
55provides a layer over a true device that can be used to inject faults. It is deprecated
56and support will be removed in the future.
57
58.B CONTAINER
59is different again. A
60.B CONTAINER
61is a collection of devices that are
62managed as a set. This is similar to the set of devices connected to
63a hardware RAID controller. The set of devices may contain a number
64of different RAID arrays each utilising some (or all) of the blocks from a
65number of the devices in the set. For example, two devices in a 5-device set
66might form a RAID1 using the whole devices. The remaining three might
67have a RAID5 over the first half of each device, and a RAID0 over the
68second half.
69
70With a
71.BR CONTAINER ,
72there is one set of metadata that describes all of
73the arrays in the container. So when
74.I mdadm
75creates a
76.B CONTAINER
77device, the device just represents the metadata. Other normal arrays (RAID1
78etc) can be created inside the container.
79
80.SH MODES
81mdadm has several major modes of operation:
82.TP
83.B Assemble
84Assemble the components of a previously created
85array into an active array. Components can be explicitly given
86or can be searched for.
87.I mdadm
88checks that the components
89do form a bona fide array, and can, on request, fiddle superblock
90information so as to assemble a faulty array.
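For example, a previously created array might be assembled from
explicitly named components (device names are illustrative):
.in +5
mdadm \-\-assemble /dev/md0 /dev/sda1 /dev/sdb1
.in -5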
91
92.TP
93.B Build
94Build an array that doesn't have per-device metadata (superblocks). For these
95sorts of arrays,
96.I mdadm
97cannot differentiate between initial creation and subsequent assembly
98of an array. It also cannot perform any checks that appropriate
99components have been requested. Because of this, the
100.B Build
101mode should only be used together with a complete understanding of
102what you are doing.
103
104.TP
105.B Create
106Create a new array with per-device metadata (superblocks).
107Appropriate metadata is written to each device, and then the array
108comprising those devices is activated. A 'resync' process is started
109to make sure that the array is consistent (e.g. both sides of a mirror
110contain the same data) but the content of the device is left otherwise
111untouched.
112The array can be used as soon as it has been created. There is no
113need to wait for the initial resync to finish.
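For example, a two-device mirror might be created with a command like
the following (device names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/sda1 /dev/sdb1
.in -5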
114
115.TP
116.B "Follow or Monitor"
117Monitor one or more md devices and act on any state changes. This is
118only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as
119only these have interesting state. RAID0 or Linear never have
120missing, spare, or failed drives, so there is nothing to monitor.
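For example, all arrays listed in the config file or
.B /proc/mdstat
might be monitored with (the mail address is illustrative):
.in +5
mdadm \-\-monitor \-\-scan \-\-mail=root \-\-delay=1800
.in -5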
121
122.TP
123.B "Grow"
124Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
126of component devices and changing the number of active devices in
127Linear and RAID levels 0/1/4/5/6,
128changing the RAID level between 0, 1, 5, and 6, and between 0 and 10,
129changing the chunk size and layout for RAID 0,4,5,6,10 as well as adding or
130removing a write-intent bitmap and changing the array's consistency policy.
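For example, an array might be reshaped to use four active devices
(assuming suitable spare devices have already been added; the device
name is illustrative, and some reshapes also need \-\-backup\-file):
.in +5
mdadm \-\-grow /dev/md0 \-\-raid\-devices=4
.in -5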
131
132.TP
133.B "Incremental Assembly"
134Add a single device to an appropriate array. If the addition of the
135device makes the array runnable, the array will be started.
136This provides a convenient interface to a
137.I hot-plug
138system. As each device is detected,
139.I mdadm
140has a chance to include it in some array as appropriate.
141Optionally, when the
142.I \-\-fail
flag is passed in, mdadm will remove the device from any active array
144instead of adding it.
145
146If a
147.B CONTAINER
148is passed to
149.I mdadm
150in this mode, then any arrays within that container will be assembled
151and started.
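For example, a hot-plug script might pass a newly detected device
(name illustrative) with:
.in +5
mdadm \-\-incremental /dev/sdc1
.in -5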
152
153.TP
154.B Manage
155This is for doing things to specific components of an array such as
156adding new spares and removing faulty devices.
157
158.TP
159.B Misc
160This is an 'everything else' mode that supports operations on active
161arrays, operations on component devices such as erasing old superblocks, and
162information-gathering operations.
163.\"This mode allows operations on independent devices such as examine MD
164.\"superblocks, erasing old superblocks and stopping active arrays.
165
166.TP
167.B Auto-detect
168This mode does not act on a specific device or array, but rather it
169requests the Linux Kernel to activate any auto-detected arrays.
170.SH OPTIONS
171
172.SH Options for selecting a mode are:
173
174.TP
175.BR \-A ", " \-\-assemble
176Assemble a pre-existing array.
177
178.TP
179.BR \-B ", " \-\-build
180Build a legacy array without superblocks.
181
182.TP
183.BR \-C ", " \-\-create
184Create a new array.
185
186.TP
187.BR \-F ", " \-\-follow ", " \-\-monitor
188Select
189.B Monitor
190mode.
191
192.TP
193.BR \-G ", " \-\-grow
194Change the size or shape of an active array.
195
196.TP
197.BR \-I ", " \-\-incremental
198Add/remove a single device to/from an appropriate array, and possibly start the array.
199
200.TP
201.B \-\-auto-detect
202Request that the kernel starts any auto-detected arrays. This can only
203work if
204.I md
205is compiled into the kernel \(em not if it is a module.
206Arrays can be auto-detected by the kernel if all the components are in
207primary MS-DOS partitions with partition type
208.BR FD ,
209and all use v0.90 metadata.
210In-kernel autodetect is not recommended for new installations. Using
211.I mdadm
212to detect and assemble arrays \(em possibly in an
213.I initrd
214\(em is substantially more flexible and should be preferred.
215
216.P
217If a device is given before any options, or if the first option is
218one of
219.BR \-\-add ,
220.BR \-\-re\-add ,
221.BR \-\-add\-spare ,
222.BR \-\-fail ,
223.BR \-\-remove ,
224or
225.BR \-\-replace ,
226then the MANAGE mode is assumed.
227Anything other than these will cause the
228.B Misc
229mode to be assumed.
230
231.SH Options that are not mode-specific are:
232
233.TP
234.BR \-h ", " \-\-help
235Display a general help message or, after one of the above options, a
236mode-specific help message.
237
238.TP
239.B \-\-help\-options
240Display more detailed help about command-line parsing and some commonly
241used options.
242
243.TP
244.BR \-V ", " \-\-version
245Print version information for mdadm.
246
247.TP
248.BR \-v ", " \-\-verbose
249Be more verbose about what is happening. This can be used twice to be
250extra-verbose.
251The extra verbosity currently only affects
252.B \-\-detail \-\-scan
253and
254.BR "\-\-examine \-\-scan" .
255
256.TP
257.BR \-q ", " \-\-quiet
258Avoid printing purely informative messages. With this,
259.I mdadm
260will be silent unless there is something really important to report.
261
262
263.TP
264.BR \-f ", " \-\-force
265Be more forceful about certain operations. See the various modes for
266the exact meaning of this option in different contexts.
267
268.TP
269.BR \-c ", " \-\-config=
270Specify the config file or directory. If not specified, the default config file
271and default conf.d directory will be used. See
272.BR mdadm.conf (5)
273for more details.
274
275If the config file given is
276.B "partitions"
277then nothing will be read, but
278.I mdadm
279will act as though the config file contained exactly
280.br
281.B " DEVICE partitions containers"
282.br
283and will read
284.B /proc/partitions
285to find a list of devices to scan, and
286.B /proc/mdstat
287to find a list of containers to examine.
288If the word
289.B "none"
290is given for the config file, then
291.I mdadm
292will act as though the config file were empty.
293
294If the name given is of a directory, then
295.I mdadm
296will collect all the files contained in the directory with a name ending
297in
298.BR .conf ,
299sort them lexically, and process all of those files as config files.
300
301.TP
302.BR \-s ", " \-\-scan
303Scan config file or
304.B /proc/mdstat
305for missing information.
306In general, this option gives
307.I mdadm
308permission to get any missing information (like component devices,
309array devices, array identities, and alert destination) from the
310configuration file (see previous option);
311one exception is MISC mode when using
312.B \-\-detail
313or
314.B \-\-stop,
315in which case
316.B \-\-scan
317says to get a list of array devices from
318.BR /proc/mdstat .
319
320.TP
321.BR \-e ", " \-\-metadata=
322Declare the style of RAID metadata (superblock) to be used. The
323default is {DEFAULT_METADATA} for
324.BR \-\-create ,
325and to guess for other operations.
326The default can be overridden by setting the
327.B metadata
328value for the
329.B CREATE
330keyword in
331.BR mdadm.conf .
332
333Options are:
334.RS
335.ie '{DEFAULT_METADATA}'0.90'
336.IP "0, 0.90, default"
337.el
338.IP "0, 0.90"
339Use the original 0.90 format superblock. This format limits arrays to
34028 component devices and limits component devices of levels 1 and
341greater to 2 terabytes. It is also possible for there to be confusion
342about whether the superblock applies to a whole device or just the
343last partition, if that partition starts on a 64K boundary.
344.ie '{DEFAULT_METADATA}'0.90'
345.IP "1, 1.0, 1.1, 1.2"
346.el
347.IP "1, 1.0, 1.1, 1.2 default"
348Use the new version-1 format superblock. This has fewer restrictions.
349It can easily be moved between hosts with different endian-ness, and a
350recovery operation can be checkpointed and restarted. The different
351sub-versions store the superblock at different locations on the
352device, either at the end (for 1.0), at the start (for 1.1) or 4K from
353the start (for 1.2). "1" is equivalent to "1.2" (the commonly
354preferred 1.x format).
355'if '{DEFAULT_METADATA}'1.2' "default" is equivalent to "1.2".
356.IP ddf
357Use the "Industry Standard" DDF (Disk Data Format) format defined by
358SNIA. DDF is deprecated and there is no active development around it.
359When creating a DDF array a
360.B CONTAINER
361will be created, and normal arrays can be created in that container.
362.IP imsm
363Use the Intel(R) Matrix Storage Manager metadata format. This creates a
364.B CONTAINER
365which is managed in a similar manner to DDF, and is supported by an
366option-rom on some platforms:
367.IP
368.B https://www.intel.com/content/www/us/en/support/products/122484
369.PP
370.RE
371
372.TP
373.B \-\-homehost=
374This will override any
375.B HOMEHOST
376setting in the config file and provides the identity of the host which
377should be considered the home for any arrays.
378
379When creating an array, the
380.B homehost
381will be recorded in the metadata. For version-1 superblocks, it will
382be prefixed to the array name. For version-0.90 superblocks, part of
383the SHA1 hash of the hostname will be stored in the latter half of the
384UUID.
385
386When reporting information about an array, any array which is tagged
387for the given homehost will be reported as such.
388
389When using Auto-Assemble, only arrays tagged for the given homehost
390will be allowed to use 'local' names (i.e. not ending in '_' followed
391by a digit string). See below under
392.BR "Auto-Assembly" .
393
394The special name "\fBany\fP" can be used as a wild card. If an array
395is created with
396.B --homehost=any
397then the name "\fBany\fP" will be stored in the array and it can be
398assembled in the same way on any host. If an array is assembled with
399this option, then the homehost recorded on the array will be ignored.
400
401.TP
402.B \-\-prefer=
403When
404.I mdadm
needs to print the name of a device, it normally uses the shortest name in
.B /dev
which refers to that device. When a path component is
408given with
409.B \-\-prefer
410.I mdadm
411will prefer a longer name if it contains that component. For example
412.B \-\-prefer=by-uuid
413will prefer a name in a subdirectory of
414.B /dev
415called
416.BR by-uuid .
417
418This functionality is currently only provided by
419.B \-\-detail
420and
421.BR \-\-monitor .
422
423.TP
424.B \-\-home\-cluster=
425specifies the cluster name for the md device. The md device can be assembled
426only on the cluster which matches the name specified. If this option is not
427provided, mdadm tries to detect the cluster name automatically.
428
429.SH For create, build, or grow:
430
431.TP
432.BR \-n ", " \-\-raid\-devices=
433Specify the number of active devices in the array. This, plus the
434number of spare devices (see below) must equal the number of
435.I component-devices
436(including "\fBmissing\fP" devices)
437that are listed on the command line for
438.BR \-\-create .
439Setting a value of 1 is probably
440a mistake and so requires that
441.B \-\-force
442be specified first. A value of 1 will then be allowed for linear,
443multipath, RAID0 and RAID1. It is never allowed for RAID4, RAID5 or RAID6.
444.br
445This number can only be changed using
446.B \-\-grow
447for RAID1, RAID4, RAID5 and RAID6 arrays.
448
449.TP
450.BR \-x ", " \-\-spare\-devices=
451Specify the number of spare (eXtra) devices in the initial array.
452Spares can also be added
453and removed later. The number of component devices listed
454on the command line must equal the number of RAID devices plus the
455number of spare devices.
456
457.TP
458.BR \-z ", " \-\-size=
459Amount (in Kilobytes) of space to use from each drive in RAID levels 1/4/5/6/10
460and for RAID 0 on external metadata.
This must be a multiple of the chunk size, and must leave about 128KB
462of space at the end of the drive for the RAID superblock.
463If this is not specified
464(as it normally is not) the smallest drive (or partition) sets the
465size, though if there is a variance among the drives of greater than 1%, a warning is
466issued.
467
468A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
469Megabytes, Gigabytes or Terabytes respectively.
470
471Sometimes a replacement drive can be a little smaller than the
472original drives though this should be minimised by IDEMA standards.
473Such a replacement drive will be rejected by
474.IR md .
475To guard against this it can be useful to set the initial size
476slightly smaller than the smaller device with the aim that it will
477still be larger than any replacement.
478
479This option can be used with
480.B \-\-create
481for determining the initial size of an array. For external metadata,
482it can be used on a volume, but not on a container itself.
Setting the initial size of a
484.B RAID 0
485array is only valid for external metadata.
486
487This value can be set with
488.B \-\-grow
489for RAID level 1/4/5/6/10 though
490DDF arrays may not be able to support this.
491RAID 0 array size cannot be changed.
492If the array was created with a size smaller than the currently
493active drives, the extra space can be accessed using
494.BR \-\-grow .
495The size can be given as
496.B max
497which means to choose the largest size that fits on all current drives.
498
499Before reducing the size of the array (with
500.BR "\-\-grow \-\-size=" )
501you should make sure that space isn't needed. If the device holds a
502filesystem, you would need to resize the filesystem to use less space.
503
504After reducing the array size you should check that the data stored in
505the device is still available. If the device holds a filesystem, then
506an 'fsck' of the filesystem is a minimum requirement. If there are
507problems the array can be made bigger again with no loss with another
508.B "\-\-grow \-\-size="
509command.
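For example, after all members have been replaced with larger devices,
the array might be expanded to the new maximum with (device name
illustrative):
.in +5
mdadm \-\-grow /dev/md0 \-\-size=max
.in -5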
510
511.TP
512.BR \-Z ", " \-\-array\-size=
513This is only meaningful with
514.B \-\-grow
515and its effect is not persistent: when the array is stopped and
516restarted the default array size will be restored.
517
518Setting the array-size causes the array to appear smaller to programs
519that access the data. This is particularly needed before reshaping an
520array so that it will be smaller. As the reshape is not reversible,
521but setting the size with
522.B \-\-array-size
523is, it is required that the array size is reduced as appropriate
524before the number of devices in the array is reduced.
525
526Before reducing the size of the array you should make sure that space
527isn't needed. If the device holds a filesystem, you would need to
528resize the filesystem to use less space.
529
530After reducing the array size you should check that the data stored in
531the device is still available. If the device holds a filesystem, then
532an 'fsck' of the filesystem is a minimum requirement. If there are
533problems the array can be made bigger again with no loss with another
534.B "\-\-grow \-\-array\-size="
535command.
536
537A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
538Megabytes, Gigabytes or Terabytes respectively.
539A value of
540.B max
541restores the apparent size of the array to be whatever the real
542amount of available space is.
543
544Clustered arrays do not support this parameter yet.
545
546.TP
547.BR \-c ", " \-\-chunk=
548Specify chunk size in kilobytes. The default when creating an
549array is 512KB. To ensure compatibility with earlier versions, the
550default when building an array with no persistent metadata is 64KB.
551This is only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10.
552
553RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power
554of 2, with minimal chunk size being 4KB.
555
556A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
557Megabytes, Gigabytes or Terabytes respectively.
558
559.TP
560.BR \-\-rounding=
561Specify the rounding factor for a Linear array. The size of each
562component will be rounded down to a multiple of this size.
563This is a synonym for
564.B \-\-chunk
565but highlights the different meaning for Linear as compared to other
566RAID levels. The default is 0K (i.e. no rounding).
567
568.TP
569.BR \-l ", " \-\-level=
570Set RAID level. When used with
571.BR \-\-create ,
572options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
573raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container.
574Obviously some of these are synonymous.
575
576When a
577.B CONTAINER
578metadata type is requested, only the
579.B container
580level is permitted, and it does not need to be explicitly given.
581
582When used with
583.BR \-\-build ,
584only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.
585
586Can be used with
587.B \-\-grow
588to change the RAID level in some cases. See LEVEL CHANGES below.
589
590.TP
591.BR \-p ", " \-\-layout=
592This option configures the fine details of data layout for RAID5, RAID6,
593and RAID10 arrays, and controls the failure modes for
594.IR faulty .
595It can also be used for working around a kernel bug with RAID0, but generally
596doesn't need to be used explicitly.
597
598The layout of the RAID5 parity block can be one of
599.BR left\-asymmetric ,
600.BR left\-symmetric ,
601.BR right\-asymmetric ,
602.BR right\-symmetric ,
603.BR la ", " ra ", " ls ", " rs .
604The default is
605.BR left\-symmetric .
606
607It is also possible to cause RAID5 to use a RAID4-like layout by
608choosing
609.BR parity\-first ,
610or
611.BR parity\-last .
612
613Finally for RAID5 there are DDF\-compatible layouts,
614.BR ddf\-zero\-restart ,
615.BR ddf\-N\-restart ,
616and
617.BR ddf\-N\-continue .
618
619These same layouts are available for RAID6. There are also 4 layouts
620that will provide an intermediate stage for converting between RAID5
621and RAID6. These provide a layout which is identical to the
622corresponding RAID5 layout on the first N\-1 devices, and has the 'Q'
623syndrome (the second 'parity' block used by RAID6) on the last device.
624These layouts are:
625.BR left\-symmetric\-6 ,
626.BR right\-symmetric\-6 ,
627.BR left\-asymmetric\-6 ,
628.BR right\-asymmetric\-6 ,
629and
630.BR parity\-first\-6 .
631
632When setting the failure mode for level
633.I faulty,
634the options are:
635.BR write\-transient ", " wt ,
636.BR read\-transient ", " rt ,
637.BR write\-persistent ", " wp ,
638.BR read\-persistent ", " rp ,
639.BR write\-all ,
640.BR read\-fixable ", " rf ,
641.BR clear ", " flush ", " none .
642
643Each failure mode can be followed by a number, which is used as a period
644between fault generation. Without a number, the fault is generated
645once on the first relevant request. With a number, the fault will be
646generated after that many requests, and will continue to be generated
647every time the period elapses.
648
649Multiple failure modes can be current simultaneously by using the
650.B \-\-grow
651option to set subsequent failure modes.
652
653"clear" or "none" will remove any pending or periodic failure modes,
654and "flush" will clear any persistent faults.
655
656The layout options for RAID10 are one of 'n', 'o' or 'f' followed
657by a small number signifying the number of copies of each datablock.
658The default is 'n2'. The supported options are:
659
660.I 'n'
661signals 'near' copies. Multiple copies of one data block are at
662similar offsets in different devices.
663
664.I 'o'
665signals 'offset' copies. Rather than the chunks being duplicated
666within a stripe, whole stripes are duplicated but are rotated by one
667device so duplicate blocks are on different devices. Thus subsequent
668copies of a block are in the next drive, and are one chunk further
669down.
670
671.I 'f'
672signals 'far' copies
673(multiple copies have very different offsets).
674See md(4) for more detail about 'near', 'offset', and 'far'.
675
676As for the number of copies of each data block, 2 is normal, 3
677can be useful. This number can be at most equal to the number of
678devices in the array. It does not need to divide evenly into that
679number (e.g. it is perfectly legal to have an 'n2' layout for an array
680with an odd number of devices).
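For example, a four-device array with two 'far' copies might be created
with (device names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=10 \-\-layout=f2 \-\-raid\-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
.in -5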
681
682A bug introduced in Linux 3.14 means that RAID0 arrays
683.B "with devices of differing sizes"
684started using a different layout. This could lead to
685data corruption. Since Linux 5.4 (and various stable releases that received
686backports), the kernel will not accept such an array unless
687a layout is explicitly set. It can be set to
688.RB ' original '
689or
690.RB ' alternate '.
691When creating a new array,
692.I mdadm
693will select
694.RB ' original '
695by default, so the layout does not normally need to be set.
696An array created for either
697.RB ' original '
698or
699.RB ' alternate '
700will not be recognized by an (unpatched) kernel prior to 5.4. To create
701a RAID0 array with devices of differing sizes that can be used on an
702older kernel, you can set the layout to
703.RB ' dangerous '.
704This will use whichever layout the running kernel supports, so the data
705on the array may become corrupt when changing kernel from pre-3.14 to a
706later kernel.
707
708When an array is converted between RAID5 and RAID6 an intermediate
709RAID6 layout is used in which the second parity block (Q) is always on
710the last device. To convert a RAID5 to RAID6 and leave it in this new
711layout (which does not require re-striping) use
712.BR \-\-layout=preserve .
713This will try to avoid any restriping.
714
715The converse of this is
716.B \-\-layout=normalise
717which will change a non-standard RAID6 layout into a more standard
718arrangement.
719
720.TP
721.BR \-\-parity=
722same as
723.B \-\-layout
724(thus explaining the p of
725.BR \-p ).
726
727.TP
728.BR \-b ", " \-\-bitmap=
729Specify how to store a write-intent bitmap. Following values are supported:
730
731.B internal
732- the bitmap is stored with the metadata on the array and so is replicated on all devices.
733
734.B clustered
735- the array is created for a clustered environment. One bitmap is created for each node as defined
736by the
737.B \-\-nodes
738parameter and are stored internally.
739
740.B none
741- create array with no bitmap or remove any present bitmap (grow mode).
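For example, an internal write-intent bitmap might be added to an
existing array with (device name illustrative):
.in +5
mdadm \-\-grow /dev/md0 \-\-bitmap=internal
.in -5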
742
743.TP
744.BR \-\-bitmap\-chunk=
745Set the chunk size of the bitmap. Each bit corresponds to that many
746Kilobytes of storage.
747
When using an
.B internal
749bitmap, the chunk size defaults to 64Meg, or larger if necessary to
750fit the bitmap into the available space.
751
752A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
753Megabytes, Gigabytes or Terabytes respectively.
754
755.TP
756.BR \-W ", " \-\-write\-mostly
757subsequent devices listed in a
758.BR \-\-build ,
759.BR \-\-create ,
760or
761.B \-\-add
762command will be flagged as 'write\-mostly'. This is valid for RAID1
763only and means that the 'md' driver will avoid reading from these
764devices if at all possible. This can be useful if mirroring over a
765slow link.
766
767.TP
768.BR \-\-write\-behind=
769Specify that write-behind mode should be enabled (valid for RAID1
770only). If an argument is specified, it will set the maximum number
771of outstanding writes allowed. The default value is 256.
772A write-intent bitmap is required in order to use write-behind
773mode, and write-behind is only attempted on drives marked as
774.IR write-mostly .
775
776.TP
777.BR \-\-failfast
778subsequent devices listed in a
779.B \-\-create
780or
781.B \-\-add
782command will be flagged as 'failfast'. This is valid for RAID1 and
783RAID10 only. IO requests to these devices will be encouraged to fail
784quickly rather than cause long delays due to error handling. Also no
785attempt is made to repair a read error on these devices.
786
787If an array becomes degraded so that the 'failfast' device is the only
788usable device, the 'failfast' flag will then be ignored and extended
789delays will be preferred to complete failure.
790
791The 'failfast' flag is appropriate for storage arrays which have a
792low probability of true failure, but which may sometimes
793cause unacceptable delays due to internal maintenance functions.
794
795.TP
796.BR \-\-assume\-clean
797Tell
798.I mdadm
799that the array pre-existed and is known to be clean. It can be useful
800when trying to recover from a major failure as you can be sure that no
801data will be affected unless you actually write to the array. It can
802also be used when creating a RAID1 or RAID10 if you want to avoid the
initial resync; however, this practice \(em while normally safe \(em is not
804recommended. Use this only if you really know what you are doing.
805.IP
806When the devices that will be part of a new array were filled
807with zeros before creation the operator knows the array is
808actually clean. If that is the case, such as after running
809badblocks, this argument can be used to tell mdadm the
810facts the operator knows.
811.IP
812When an array is resized to a larger size with
813.B "\-\-grow \-\-size="
the new space is normally resynced in the same way that the whole
815array is resynced at creation.
816.B \-\-assume\-clean
817can be used with that command to avoid the automatic resync.
818
819.TP
820.BR \-\-write-zeroes
821When creating an array, send write zeroes requests to all the block
822devices. This should zero the data area on all disks such that the
823initial sync is not necessary and, if successful, will behave as if
824.B \-\-assume\-clean
825was specified.
826.IP
827This is intended for use with devices that have hardware offload for
828zeroing, but despite this zeroing can still take several minutes for
829large disks. Thus a message is printed before and after zeroing and
830each disk is zeroed in parallel with the others.
831.IP
832This is only meaningful with --create.
833
834.TP
835.BR \-\-backup\-file=
836This is needed when
837.B \-\-grow
838is used to increase the number of raid devices in a RAID5 or RAID6 if
839there are no spare devices available, or to shrink, change RAID level
840or layout. See the GROW MODE section below on RAID\-DEVICES CHANGES.
841The file must be stored on a separate device, not on the RAID array
842being reshaped.
843
844.TP
845.B \-\-data\-offset=
846Arrays with 1.x metadata can leave a gap between the start of the
847device and the start of array data. This gap can be used for various
848metadata. The start of data is known as the
849.IR data\-offset .
850Normally an appropriate data offset is computed automatically.
851However it can be useful to set it explicitly such as when re-creating
852an array which was originally created using a different version of
853.I mdadm
854which computed a different offset.
855
856Setting the offset explicitly over-rides the default. The value given
857is in Kilobytes unless a suffix of 'K', 'M', 'G' or 'T' is used to explicitly
858indicate Kilobytes, Megabytes, Gigabytes or Terabytes respectively.
859
860.B \-\-data\-offset
861can also be used with
862.B --grow
863for some RAID levels (initially on RAID10). This allows the
864data\-offset to be changed as part of the reshape process. When the
865data offset is changed, no backup file is required as the difference
866in offsets is used to provide the same functionality.
867
868When the new offset is earlier than the old offset, the number of
869devices in the array cannot shrink. When it is after the old offset,
870the number of devices in the array cannot increase.
871
872When creating an array,
873.B \-\-data\-offset
874can be specified as
875.BR variable .
876In the case each member device is expected to have an offset appended
877to the name, separated by a colon. This makes it possible to recreate
878exactly an array which has varying data offsets (as can happen when
879different versions of
880.I mdadm
881are used to add different devices).
882
883.TP
884.BR \-N ", " \-\-name=
885Set a
886.B name
887for the array. It cannot be longer than 32 chars. This is effective when
creating an array with v1 metadata, or an external array.
889
890If name is needed but not specified, it is taken from the basename of the device
891that is being created. See
892.BR "DEVICE NAMES"
893
894.TP
895.BR \-R ", " \-\-run
896Insist that
897.I mdadm
898run the array, even if some of the components
899appear to be active in another array or filesystem. Normally
900.I mdadm
901will ask for confirmation before including such components in an
902array. This option causes that question to be suppressed.
903
904.TP
905.BR \-f ", " \-\-force
906Insist that
907.I mdadm
908accept the geometry and layout specified without question. Normally
909.I mdadm
910will not allow the creation of an array with only one device, and will try
911to create a RAID5 array with one missing drive (as this makes the
912initial resync work faster). With
913.BR \-\-force ,
914.I mdadm
915will not try to be so clever.
916
917.TP
918.BR \-o ", " \-\-readonly
919Start the array
920.B read only
921rather than read-write as normal. No writes will be allowed to the
922array, and no resync, recovery, or reshape will be started. It works with
923Create, Assemble, Manage and Misc mode.
924
925.TP
926.BR \-a ", " "\-\-add"
927This option can be used in Grow mode in two cases.
928
929If the target array is a Linear array, then
930.B \-\-add
931can be used to add one or more devices to the array. They
932are simply catenated on to the end of the array. Once added, the
933devices cannot be removed.
934
935If the
936.B \-\-raid\-disks
937option is being used to increase the number of devices in an array,
938then
939.B \-\-add
940can be used to add some extra devices to be included in the array.
941In most cases this is not needed as the extra devices can be added as
942spares first, and then the number of raid disks can be changed.
943However, for RAID0 it is not possible to add spares. So to increase
944the number of devices in a RAID0, it is necessary to set the new
945number of devices, and to add the new devices, in the same command.
946
947.TP
948.BR \-\-nodes
949Only works when the array is created for a clustered environment. It specifies
950the maximum number of nodes in the cluster that will use this device
951simultaneously. If not specified, this defaults to 4.
952
953.TP
954.BR \-\-write-journal
955Specify journal device for the RAID-4/5/6 array. The journal device
956should be an SSD with a reasonable lifetime.
957
958.TP
959.BR \-k ", " \-\-consistency\-policy=
960Specify how the array maintains consistency in the case of an unexpected shutdown.
961Only relevant for RAID levels with redundancy.
962Currently supported options are:
963.RS
964
965.TP
966.B resync
967Full resync is performed and all redundancy is regenerated when the array is
968started after an unclean shutdown.
969
970.TP
971.B bitmap
972Resync assisted by a write-intent bitmap. Implicitly selected when using
973.BR \-\-bitmap .
974
975.TP
976.B journal
977For RAID levels 4/5/6, the journal device is used to log transactions and replay
978after an unclean shutdown. Implicitly selected when using
979.BR \-\-write\-journal .
980
981.TP
982.B ppl
983For RAID5 only, Partial Parity Log is used to close the write hole and
984eliminate resync. PPL is stored in the metadata region of RAID member drives,
985no additional journal drive is needed.
986
987.PP
988Can be used with \-\-grow to change the consistency policy of an active array
989in some cases. See CONSISTENCY POLICY CHANGES below.
990.RE
991
992
993.SH For assemble:
994
995.TP
996.BR \-u ", " \-\-uuid=
997uuid of array to assemble. Devices which don't have this uuid are
excluded.
999
1000.TP
1001.BR \-m ", " \-\-super\-minor=
1002Minor number of device that array was created for. Devices which
1003don't have this minor number are excluded. If you create an array as
1004/dev/md1, then all superblocks will contain the minor number 1, even if
1005the array is later assembled as /dev/md2.
1006
1007Giving the literal word "dev" for
1008.B \-\-super\-minor
1009will cause
1010.I mdadm
1011to use the minor number of the md device that is being assembled.
1012e.g. when assembling
1013.BR /dev/md0 ,
1014.B \-\-super\-minor=dev
1015will look for super blocks with a minor number of 0.
1016
1017.B \-\-super\-minor
1018is only relevant for v0.90 metadata, and should not normally be used.
1019Using
1020.B \-\-uuid
1021is much safer.
1022
1023.TP
1024.BR \-N ", " \-\-name=
1025Specify the name of the array to assemble. It cannot be longer than 32 chars.
1026This must be the name that was specified when creating the array. It must
1027either match the name stored in the superblock exactly, or it must match
1028with the current
1029.I homehost
1030prefixed to the start of the given name.
1031
1032.TP
1033.BR \-f ", " \-\-force
1034Assemble the array even if the metadata on some devices appears to be
1035out-of-date. If
1036.I mdadm
1037cannot find enough working devices to start the array, but can find
1038some devices that are recorded as having failed, then it will mark
those devices as working so that the array can be started. This works only for
native metadata. For external metadata it allows starting a dirty, degraded RAID 4, 5, or 6.
1041An array which requires
1042.B \-\-force
1043to be started may contain data corruption. Use it carefully.
1044
1045.TP
1046.BR \-R ", " \-\-run
1047Attempt to start the array even if fewer drives were given than were
1048present last time the array was active. Normally if not all the
1049expected drives are found and
1050.B \-\-scan
1051is not used, then the array will be assembled but not started.
1052With
1053.B \-\-run
1054an attempt will be made to start it anyway.
1055
1056.TP
1057.B \-\-no\-degraded
1058This is the reverse of
1059.B \-\-run
1060in that it inhibits the startup of array unless all expected drives
1061are present. This is only needed with
1062.B \-\-scan,
1063and can be used if the physical connections to devices are
1064not as reliable as you would like.
1065
1066.TP
1067.BR \-\-backup\-file=
1068If
1069.B \-\-backup\-file
1070was used while reshaping an array (e.g. changing number of devices or
1071chunk size) and the system crashed during the critical section, then the same
1072.B \-\-backup\-file
1073must be presented to
1074.B \-\-assemble
1075to allow possibly corrupted data to be restored, and the reshape
1076to be completed.
1077
1078.TP
1079.BR \-\-invalid\-backup
1080If the file needed for the above option is not available for any
1081reason an empty file can be given together with this option to
1082indicate that the backup file is invalid. In this case the data that
1083was being rearranged at the time of the crash could be irrecoverably
1084lost, but the rest of the array may still be recoverable. This option
1085should only be used as a last resort if there is no way to recover the
1086backup file.
1087
1088
1089.TP
1090.BR \-U ", " \-\-update=
1091Update the superblock on each device while assembling the array. The
1092argument given to this flag can be one of
1093.BR summaries ,
1094.BR uuid ,
1095.BR name ,
1096.BR nodes ,
1097.BR homehost ,
1098.BR home-cluster ,
1099.BR resync ,
1100.BR byteorder ,
1101.BR devicesize ,
1102.BR no\-bitmap ,
1103.BR bbl ,
1104.BR no\-bbl ,
1105.BR ppl ,
1106.BR no\-ppl ,
1107.BR layout\-original ,
1108.BR layout\-alternate ,
1109.BR layout\-unspecified ,
1110.BR metadata ,
1111or
1112.BR super\-minor .
1113
1114The
1115.B super\-minor
1116option will update the
1117.B "preferred minor"
1118field on each superblock to match the minor number of the array being
1119assembled.
1120This can be useful if
1121.B \-\-examine
1122reports a different "Preferred Minor" to
1123.BR \-\-detail .
1124In some cases this update will be performed automatically
1125by the kernel driver. In particular, the update happens automatically
1126at the first write to an array with redundancy (RAID level 1 or
1127greater).
1128
1129The
1130.B uuid
1131option will change the uuid of the array. If a UUID is given with the
1132.B \-\-uuid
1133option that UUID will be used as a new UUID and will
1134.B NOT
1135be used to help identify the devices in the array.
1136If no
1137.B \-\-uuid
1138is given, a random UUID is chosen.
1139
1140The
1141.B name
1142option will change the
1143.I name
1144of the array as stored in the superblock. This is only supported for
1145version-1 superblocks.
1146
1147The
1148.B nodes
1149option will change the
1150.I nodes
1151of the array as stored in the bitmap superblock. This option only
1152works for a clustered environment.
1153
1154The
1155.B homehost
1156option will change the
1157.I homehost
1158as recorded in the superblock. For version-0 superblocks, this is the
1159same as updating the UUID.
1160For version-1 superblocks, this involves updating the name.
1161
1162The
1163.B home\-cluster
1164option will change the cluster name as recorded in the superblock and
1165bitmap. This option only works for a clustered environment.
1166
1167The
1168.B resync
1169option will cause the array to be marked
1170.I dirty
1171meaning that any redundancy in the array (e.g. parity for RAID5,
1172copies for RAID1) may be incorrect. This will cause the RAID system
1173to perform a "resync" pass to make sure that all redundant information
1174is correct.
1175
1176The
1177.B byteorder
1178option allows arrays to be moved between machines with different
1179byte-order, such as from a big-endian machine like a Sparc or some
1180MIPS machines, to a little-endian x86_64 machine.
1181When assembling such an array for the first time after a move, giving
1182.B "\-\-update=byteorder"
1183will cause
1184.I mdadm
1185to expect superblocks to have their byteorder reversed, and will
1186correct that order before assembling the array. This is only valid
1187with original (Version 0.90) superblocks.
1188
1189The
1190.B summaries
1191option will correct the summaries in the superblock. That is the
1192counts of total, working, active, failed, and spare devices.
1193
1194The
1195.B devicesize
1196option will rarely be of use. It applies to version 1.1 and 1.2 metadata
1197only (where the metadata is at the start of the device) and is only
1198useful when the component device has changed size (typically become
1199larger). The version 1 metadata records the amount of the device that
1200can be used to store data, so if a device in a version 1.1 or 1.2
1201array becomes larger, the metadata will still be visible, but the
1202extra space will not. In this case it might be useful to assemble the
1203array with
1204.BR \-\-update=devicesize .
1205This will cause
1206.I mdadm
1207to determine the maximum usable amount of space on each device and
1208update the relevant field in the metadata.
1209
1210The
1211.B metadata
1212option only works on v0.90 metadata arrays and will convert them to
1213v1.0 metadata. The array must not be dirty (i.e. it must not need a
1214sync) and it must not have a write-intent bitmap.
1215
1216The old metadata will remain on the devices, but will appear older
1217than the new metadata and so will usually be ignored. The old metadata
1218(or indeed the new metadata) can be removed by giving the appropriate
1219.B \-\-metadata=
1220option to
1221.BR \-\-zero\-superblock .
1222
1223The
1224.B no\-bitmap
1225option can be used when an array has an internal bitmap which is
1226corrupt in some way so that assembling the array normally fails. It
1227will cause any internal bitmap to be ignored.
1228
1229The
1230.B bbl
1231option will reserve space in each device for a bad block list. This
1232will be 4K in size and positioned near the end of any free space
1233between the superblock and the data.
1234
1235The
1236.B no\-bbl
1237option will cause any reservation of space for a bad block list to be
1238removed. If the bad block list contains entries, this will fail, as
1239removing the list could cause data corruption.
1240
1241The
1242.B ppl
1243option will enable PPL for a RAID5 array and reserve space for PPL on each
1244device. There must be enough free space between the data and superblock and a
1245write-intent bitmap or journal must not be used.
1246
1247The
1248.B no\-ppl
1249option will disable PPL in the superblock.
1250
1251The
1252.B layout\-original
1253and
1254.B layout\-alternate
options are for RAID0 arrays with non-uniform device sizes that were in
1256use before Linux 5.4. If the array was being used with Linux 3.13 or
1257earlier, then to assemble the array on a new kernel,
1258.B \-\-update=layout\-original
1259must be given. If the array was created and used with a kernel from Linux 3.14 to
1260Linux 5.3, then
1261.B \-\-update=layout\-alternate
1262must be given. This only needs to be given once. Subsequent assembly of the array
1263will happen normally.
1264For more information, see
1265.IR md (4).
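For example, such an array might be assembled once on a newer kernel
with (device names are illustrative):
.in +5
mdadm \-\-assemble /dev/md0 \-\-update=layout\-original /dev/sda1 /dev/sdb1
.in -5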
1266
1267The
1268.B layout\-unspecified
1269option reverts the effect of
.B layout\-original
1271or
1272.B layout\-alternate
and allows the array to be used again on kernels prior to Linux 5.4.
1274This option should be used with great caution.
1275
1276.SH For Manage mode:
1277
1278.TP
1279.BR \-t ", " \-\-test
1280Unless a more serious error occurred,
1281.I mdadm
1282will exit with a status of 2 if no changes were made to the array and
12830 if at least one change was made.
1284This can be useful when an indirect specifier such as
1285.BR missing ,
1286.B detached
1287or
1288.B faulty
1289is used in requesting an operation on the array.
1290.B \-\-test
1291will report failure if these specifiers didn't find any match.
1292
1293.TP
1294.BR \-a ", " \-\-add
1295hot-add listed devices.
1296If a device appears to have recently been part of the array
1297(possibly it failed or was removed) the device is re\-added as described
1298in the next point.
1299If that fails or the device was never part of the array, the device is
1300added as a hot-spare.
1301If the array is degraded, it will immediately start to rebuild data
1302onto that spare.
1303
Note that this and the following options are only meaningful on arrays
1305with redundancy. They don't apply to RAID0 or Linear.
1306
1307.TP
1308.BR \-\-re\-add
1309re\-add a device that was previously removed from an array.
1310If the metadata on the device reports that it is a member of the
1311array, and the slot that it used is still vacant, then the device will
1312be added back to the array in the same position. This will normally
1313cause the data for that device to be recovered. However, based on the
1314event count on the device, the recovery may only require sections that
1315are flagged by a write-intent bitmap to be recovered or may not require
1316any recovery at all.
1317
1318When used on an array that has no metadata (i.e. it was built with
1319.BR \-\-build)
1320it will be assumed that bitmap-based recovery is enough to make the
1321device fully consistent with the array.
1322
1323.B \-\-re\-add
1324can also be accompanied by
1325.BR \-\-update=devicesize ,
1326.BR \-\-update=bbl ", or"
1327.BR \-\-update=no\-bbl .
1328See descriptions of these options when used in Assemble mode for an
1329explanation of their use.
1330
1331If the device name given is
1332.B missing
1333then
1334.I mdadm
1335will try to find any device that looks like it should be
1336part of the array but isn't and will try to re\-add all such devices.
1337
1338If the device name given is
1339.B faulty
1340then
1341.I mdadm
1342will find all devices in the array that are marked
1343.BR faulty ,
1344remove them and attempt to immediately re\-add them. This can be
1345useful if you are certain that the reason for failure has been
1346resolved.
1347
1348.TP
1349.B \-\-add\-spare
1350Add a device as a spare. This is similar to
1351.B \-\-add
1352except that it does not attempt
1353.B \-\-re\-add
1354first. The device will be added as a spare even if it looks like it
1355could be a recent member of the array.
1356
1357.TP
1358.BR \-r ", " \-\-remove
remove listed devices. They must not be active, i.e. they should
1360be failed or spare devices.
1361
1362As well as the name of a device file
1363(e.g.
1364.BR /dev/sda1 )
1365the words
1366.BR failed ,
1367.B detached
1368and names like
1369.B set-A
1370can be given to
1371.BR \-\-remove .
1372The first causes all failed devices to be removed. The second causes
any device which is no longer connected to the system (i.e. an 'open'
1374returns
1375.BR ENXIO )
1376to be removed.
1377The third will remove a set as described below under
1378.BR \-\-fail .
1379
1380.TP
1381.BR \-f ", " \-\-fail
1382Mark listed devices as faulty.
1383As well as the name of a device file, the word
1384.B detached
1385or a set name like
1386.B set\-A
1387can be given. The former will cause any device that has been detached from
1388the system to be marked as failed. It can then be removed.
1389
1390For RAID10 arrays where the number of copies evenly divides the number
1391of devices, the devices can be conceptually divided into sets where
1392each set contains a single complete copy of the data on the array.
1393Sometimes a RAID10 array will be configured so that these sets are on
1394separate controllers. In this case, all the devices in one set can be
1395failed by giving a name like
1396.B set\-A
1397or
1398.B set\-B
1399to
1400.BR \-\-fail .
1401The appropriate set names are reported by
1402.BR \-\-detail .
1403
1404.TP
1405.BR \-\-set\-faulty
1406same as
1407.BR \-\-fail .
1408
1409.TP
1410.B \-\-replace
1411Mark listed devices as requiring replacement. As soon as a spare is
1412available, it will be rebuilt and will replace the marked device.
1413This is similar to marking a device as faulty, but the device remains
1414in service during the recovery process to increase resilience against
1415multiple failures. When the replacement process finishes, the
1416replaced device will be marked as faulty.
1417
1418.TP
1419.B \-\-with
1420This can follow a list of
1421.B \-\-replace
1422devices. The devices listed after
1423.B \-\-with
1424will preferentially be used to replace the devices listed after
1425.BR \-\-replace .
1426These devices must already be spare devices in the array.
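For example, assuming /dev/sdc1 is already a spare in the array
(device names are illustrative), a failing member might be replaced
with:
.in +5
mdadm /dev/md0 \-\-replace /dev/sdb1 \-\-with /dev/sdc1
.in -5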
1427
1428.TP
1429.BR \-\-write\-mostly
1430Subsequent devices that are added or re\-added will have the 'write-mostly'
1431flag set. This is only valid for RAID1 and means that the 'md' driver
1432will avoid reading from these devices if possible.
1433.TP
1434.BR \-\-readwrite
1435Subsequent devices that are added or re\-added will have the 'write-mostly'
1436flag cleared.
1437.TP
1438.BR \-\-cluster\-confirm
1439Confirm the existence of the device. This is issued in response to an \-\-add
1440request by a node in a cluster. When a node adds a device it sends a message
1441to all nodes in the cluster to look for a device with a UUID. This translates
1442to a udev notification with the UUID of the device to be added and the slot
1443number. The receiving node must acknowledge this message
1444with \-\-cluster\-confirm. Valid arguments are <slot>:<devicename> in case
1445the device is found or <slot>:missing in case the device is not found.
1446
1447.TP
1448.BR \-\-add-journal
1449Add a journal to an existing array, or recreate journal for a RAID-4/5/6 array
1450that lost a journal device. To avoid interrupting ongoing write operations,
1451.B \-\-add-journal
only works for an array in the Read-Only state.
1453
1454.TP
1455.BR \-\-failfast
1456Subsequent devices that are added or re\-added will have
1457the 'failfast' flag set. This is only valid for RAID1 and RAID10 and
1458means that the 'md' driver will avoid long timeouts on error handling
1459where possible.
1460.TP
1461.BR \-\-nofailfast
1462Subsequent devices that are re\-added will be re\-added without
1463the 'failfast' flag set.
1464
1465.P
1466Each of these options requires that the first device listed is the array
1467to be acted upon, and the remainder are component devices to be added,
1468removed, marked as faulty, etc. Several different operations can be
1469specified for different devices, e.g.
1470.in +5
1471mdadm /dev/md0 \-\-add /dev/sda1 \-\-fail /dev/sdb1 \-\-remove /dev/sdb1
1472.in -5
1473Each operation applies to all devices listed until the next
1474operation.
1475
1476If an array is using a write-intent bitmap, then devices which have
1477been removed can be re\-added in a way that avoids a full
1478reconstruction but instead just updates the blocks that have changed
1479since the device was removed. For arrays with persistent metadata
1480(superblocks) this is done automatically. For arrays created with
1481.B \-\-build
mdadm needs to be told that the device was removed recently by using
1483.BR \-\-re\-add .
1484
1485Devices can only be removed from an array if they are not in active
use, i.e. they must be spares or failed devices. To remove an active
1487device, it must first be marked as
1488.B faulty.
1489
1490.SH For Misc mode:
1491
1492.TP
1493.BR \-Q ", " \-\-query
1494Examine a device to see
1495(1) if it is an md device and (2) if it is a component of an md
1496array.
1497Information about what is discovered is presented.
1498
1499.TP
1500.BR \-D ", " \-\-detail
1501Print details of one or more md devices.
1502
1503.TP
1504.BR \-\-detail\-platform
1505Print details of the platform's RAID capabilities (firmware / hardware
1506topology) for a given metadata format. If used without an argument, mdadm
1507will scan all controllers looking for their capabilities. Otherwise, mdadm
1508will only look at the controller specified by the argument in the form of an
1509absolute filepath or a link, e.g.
1510.IR /sys/devices/pci0000:00/0000:00:1f.2 .
1511
1512.TP
1513.BR \-Y ", " \-\-export
1514When used with
1515.BR \-\-detail ,
1516.BR \-\-detail-platform ,
1517.BR \-\-examine ,
1518or
1519.B \-\-incremental
1520output will be formatted as
1521.B key=value
1522pairs for easy import into the environment.
1523
1524With
1525.B \-\-incremental
the value
1527.B MD_STARTED
1528indicates whether an array was started
1529.RB ( yes )
1530or not, which may include a reason
1531.RB ( unsafe ", " nothing ", " no ).
1532Also the value
1533.B MD_FOREIGN
1534indicates if the array is expected on this host
1535.RB ( no ),
1536or seems to be from elsewhere
1537.RB ( yes ).
1538
1539.TP
1540.BR \-E ", " \-\-examine
1541Print contents of the metadata stored on the named device(s).
1542Note the contrast between
1543.B \-\-examine
1544and
1545.BR \-\-detail .
1546.B \-\-examine
1547applies to devices which are components of an array, while
1548.B \-\-detail
1549applies to a whole array which is currently active.
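For example (device names are illustrative):
.in +5
mdadm \-\-detail /dev/md0
.br
mdadm \-\-examine /dev/sda1
.in -5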
1550
1551.TP
1552.BR \-X ", " \-\-examine\-bitmap
1553Report information about a bitmap.
1554The argument is an array component. Note that running this on an array
1555device (e.g.
1556.BR /dev/md0 )
1557does not report the bitmap for that array.
1558
1559.TP
1560.B \-\-examine\-badblocks
1561List the bad-blocks recorded for the device, if a bad-blocks list has
1562been configured. Currently only
1563.B 1.x
1564and
1565.B IMSM
1566metadata support bad-blocks lists.
1567
1568.TP
1569.BI \-\-dump= directory
1570.TP
1571.BI \-\-restore= directory
Save metadata from listed devices, or restore metadata to listed devices.
1573
1574.TP
1575.BR \-R ", " \-\-run
1576start a partially assembled array. If
1577.B \-\-assemble
did not find enough devices to fully start the array, it might leave
1579it partially assembled. If you wish, you can then use
1580.B \-\-run
1581to start the array in degraded mode.
1582
1583.TP
1584.BR \-S ", " \-\-stop
1585deactivate array, releasing all resources.
1586
1587.TP
1588.BR \-o ", " \-\-readonly
1589mark array as readonly.
1590
1591.TP
1592.BR \-w ", " \-\-readwrite
1593mark array as readwrite.
1594
1595.TP
1596.B \-\-zero\-superblock
1597If the device contains a valid md superblock, the block is
1598overwritten with zeros. With
1599.B \-\-force
1600the block where the superblock would be is overwritten even if it
1601doesn't appear to be valid.
1602
1603.B Note:
1604Be careful when calling \-\-zero\-superblock with clustered raid. Make sure
1605the array isn't used or assembled in another cluster node before executing it.
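For example, the superblock on a former member device (name
illustrative; be certain the device no longer holds wanted data) might
be cleared with:
.in +5
mdadm \-\-zero\-superblock /dev/sdb1
.in -5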
1606
1607.TP
1608.B \-\-kill\-subarray=
1609If the device is a container and the argument to \-\-kill\-subarray
1610specifies an inactive subarray in the container, then the subarray is
1611deleted. Deleting all subarrays will leave an 'empty-container' or
1612spare superblock on the drives. See
1613.B \-\-zero\-superblock
1614for completely
1615removing a superblock. Note that some formats depend on the subarray
index for generating a UUID; this command will fail if it would change
1617the UUID of an active subarray.
1618
1619.TP
1620.B \-\-update\-subarray=
1621If the device is a container and the argument to \-\-update\-subarray
1622specifies a subarray in the container, then attempt to update the given
1623superblock field in the subarray. See below in
1624.B MISC MODE
1625for details.
1626
1627.TP
1628.BR \-t ", " \-\-test
1629When used with
1630.BR \-\-detail ,
1631the exit status of
1632.I mdadm
1633is set to reflect the status of the device. See below in
1634.B MISC MODE
1635for details.
1636
1637.TP
1638.BR \-W ", " \-\-wait
1639For each md device given, wait for any resync, recovery, or reshape
1640activity to finish before returning.
1641.I mdadm
1642will return with success if it actually waited for every device
1643listed, otherwise it will return failure.
1644
1645.TP
1646.BR \-\-wait\-clean
1647For each md device given, or each device in /proc/mdstat if
1648.B \-\-scan
1649is given, arrange for the array to be marked clean as soon as possible.
1650.I mdadm
1651will return with success if the array uses external metadata and we
1652successfully waited. For native arrays, this returns immediately as the
1653kernel handles dirty-clean transitions at shutdown. No action is taken
1654if safe-mode handling is disabled.
1655
1656.TP
1657.B \-\-action=
1658Set the "sync_action" for all md devices given to one of
1659.BR idle ,
1660.BR frozen ,
1661.BR check ,
1662.BR repair .
1663Setting to
1664.B idle
1665will abort any currently running action though some actions will
1666automatically restart.
1667Setting to
1668.B frozen
1669will abort any current action and ensure no other action starts
1670automatically.
1671
1672Details of
1673.B check
1674and
1675.B repair
can be found in
1677.IR md (4)
1678under
1679.BR "SCRUBBING AND MISMATCHES" .
1680
1681.TP
1682.B \-\-udev\-rules=
Generate udev rules for handling hot-plugged bare devices and write them to the
given file, according to the POLICY lines defined in
.I {CONFFILE}
(or
.IR {CONFFILE2} ).
1686
1687See
1688.BR mdadm.conf (5)
1689for more details and usage examples about POLICY.
1690
1691.SH For Incremental Assembly mode:
1692.TP
1693.BR \-\-rebuild\-map ", " \-r
1694Rebuild the map file
1695.RB ( {MAP_PATH} )
1696that
1697.I mdadm
1698uses to help track which arrays are currently being assembled.
1699
1700.TP
1701.BR \-\-run ", " \-R
1702Run any array assembled as soon as a minimal number of devices is
1703available, rather than waiting until all expected devices are present.
1704
1705.TP
1706.BR \-\-scan ", " \-s
1707Only meaningful with
.BR \-R ,
this will scan the
1710.B map
1711file for arrays that are being incrementally assembled and will try to
1712start any that are not already started.
1713
1714.TP
1715.BR \-\-fail ", " \-f
1716This allows the hot-plug system to remove devices that have fully disappeared
1717from the kernel. It will first fail and then remove the device from any
1718array it belongs to.
1719The device name given should be a kernel device name such as "sda",
1720not a name in
1721.IR /dev .
1722
1723.TP
1724.BR \-\-path=
1725Only used with \-\-fail. The 'path' given will be recorded so that if
1726a new device appears at the same location it can be automatically
1727added to the same array. This allows the failed device to be
1728automatically replaced by a new device without metadata if it appears
1729at specified path. This option is normally only set by an
1730.I udev
1731script.
1732
1733.SH For Monitor mode:
1734.TP
1735.BR \-m ", " \-\-mail
Give an e-mail address to send alerts to. Can be configured in
1737.B mdadm.conf
1738as MAILADDR.
1739
1740.TP
1741.BR \-p ", " \-\-program ", " \-\-alert
1742Give a program to be run whenever an event is detected. Can be configured in
1743.B mdadm.conf
1744as PROGRAM.
1745
1746.TP
1747.BR \-y ", " \-\-syslog
1748Cause all events to be reported through 'syslog'. The messages have
1749facility of 'daemon' and varying priorities.
1750
1751.TP
1752.BR \-d ", " \-\-delay
1753Give a delay in seconds. The default is 60 seconds.
1754.I mdadm
1755polls the md arrays and then waits this many seconds before polling again if no event happened.
1756Can be configured in
1757.B mdadm.conf
1758as MONITORDELAY.
1759
1760.TP
1761.BR \-r ", " \-\-increment
1762Give a percentage increment.
1763.I mdadm
1764will generate RebuildNN events with the given percentage increment.
1765
1766.TP
1767.BR \-f ", " \-\-daemonise
1768Tell
1769.I mdadm
1770to run as a background daemon if it decides to monitor anything. This
1771causes it to fork and run in the child, and to disconnect from the
1772terminal. The process id of the child is written to stdout.
1773This is useful with
1774.B \-\-scan
1775which will only continue monitoring if a mail address or alert program
1776is found in the config file.
1777
1778.TP
1779.BR \-i ", " \-\-pid\-file
1780When
1781.I mdadm
1782is running in daemon mode, write the pid of the daemon process to
1783the specified file, instead of printing it on standard output.
1784
1785.TP
1786.BR \-1 ", " \-\-oneshot
1787Check arrays only once. This will generate
1788.B NewArray
1789events and more significantly
1790.B DegradedArray
1791and
1792.B SparesMissing
1793events. Running
1794.in +5
1795.B " mdadm \-\-monitor \-\-scan \-1"
1796.in -5
1797from a cron script will ensure regular notification of any degraded arrays.
1798
1799.TP
1800.BR \-t ", " \-\-test
1801Generate a
1802.B TestMessage
1803alert for every array found at startup. This alert gets mailed and
1804passed to the alert program. This can be used for testing that alert
messages do get through successfully.
1806
1807.TP
1808.BR \-\-no\-sharing
1809This inhibits the functionality for moving spares between arrays.
1810Only one monitoring process started with
1811.B \-\-scan
1812but without this flag is allowed, otherwise the two could interfere
1813with each other.
1814
1815.SH ASSEMBLE MODE
1816
1817.HP 12
1818Usage:
1819.B mdadm \-\-assemble
1820.I md-device options-and-component-devices...
1821.HP 12
1822Usage:
1823.B mdadm \-\-assemble \-\-scan
1824.I md-devices-and-options...
1825.HP 12
1826Usage:
1827.B mdadm \-\-assemble \-\-scan
1828.I options...
1829
1830.PP
1831This usage assembles one or more RAID arrays from pre-existing components.
1832For each array, mdadm needs to know the md device, the identity of the
1833array, and the number of component devices. These can be found in a number of ways.
1834
1835In the first usage example (without the
1836.BR \-\-scan )
1837the first device given is the md device.
1838In the second usage example, all devices listed are treated as md
1839devices and assembly is attempted.
1840In the third (where no devices are listed) all md devices that are
1841listed in the configuration file are assembled. If no arrays are
1842described by the configuration file, then any arrays that
1843can be found on unused devices will be assembled.
1844
1845If precisely one device is listed, but
1846.B \-\-scan
1847is not given, then
1848.I mdadm
1849acts as though
1850.B \-\-scan
1851was given and identity information is extracted from the configuration file.
1852
1853The identity can be given with the
1854.B \-\-uuid
1855option, the
1856.B \-\-name
1857option, or the
1858.B \-\-super\-minor
option; otherwise it will be taken from the md-device record in the config file, or
1860will be taken from the super block of the first component-device
1861listed on the command line.
1862
1863Devices can be given on the
1864.B \-\-assemble
1865command line or in the config file. Only devices which have an md
1866superblock which contains the right identity will be considered for
1867any array.
1868
1869The config file is only used if explicitly named with
1870.B \-\-config
1871or requested with (a possibly implicit)
1872.BR \-\-scan .
1873In the latter case, the default config file is used. See
1874.BR mdadm.conf (5)
1875for more details.
1876
1877If
1878.B \-\-scan
1879is not given, then the config file will only be used to find the
1880identity of md arrays.
1881
1882Normally the array will be started after it is assembled. However if
1883.B \-\-scan
1884is not given and not all expected drives were listed, then the array
1885is not started (to guard against usage errors). To insist that the
1886array be started in this case (as may work for RAID1, 4, 5, 6, or 10),
1887give the
1888.B \-\-run
1889flag.
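
For example (device names are illustrative), to start an array even though one
member is missing:
.in +5
.B " mdadm \-\-assemble \-\-run /dev/md0 /dev/sda1 /dev/sdb1"
.in -5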
1890
1891If
1892.I udev
1893is active,
1894.I mdadm
1895does not create any entries in
1896.B /dev
1897but leaves that to
1898.IR udev .
1899It does record information in
1900.B {MAP_PATH}
1901which will allow
1902.I udev
1903to choose the correct name.
1904
1905If
1906.I mdadm
1907detects that udev is not configured, it will create the devices in
1908.B /dev
1909itself.
1910
1911.SS Auto-Assembly
1912When
1913.B \-\-assemble
1914is used with
1915.B \-\-scan
1916and no devices are listed,
1917.I mdadm
1918will first attempt to assemble all the arrays listed in the config
1919file.
1920
1921If no arrays are listed in the config (other than those marked
1922.BR <ignore> )
1923it will look through the available devices for possible arrays and
1924will try to assemble anything that it finds. Arrays which are tagged
1925as belonging to the given homehost will be assembled and started
1926normally. Arrays which do not obviously belong to this host are given
1927names that are expected not to conflict with anything local, and are
1928started "read-auto" so that nothing is written to any device until the
array is written to; i.e. automatic resync etc. is delayed.
1930
1931If
1932.I mdadm
1933finds a consistent set of devices that look like they should comprise
1934an array, and if the superblock is tagged as belonging to the given
1935home host, it will automatically choose a device name and try to
1936assemble the array. If the array uses version-0.90 metadata, then the
1937.B minor
1938number as recorded in the superblock is used to create a name in
1939.B /dev/md/
1940so for example
1941.BR /dev/md/3 .
1942If the array uses version-1 metadata, then the
1943.B name
1944from the superblock is used to similarly create a name in
1945.B /dev/md/
1946(the name will have any 'host' prefix stripped first).
1947
1948This behaviour can be modified by the
1949.I AUTO
1950line in the
1951.I mdadm.conf
1952configuration file. This line can indicate that specific metadata
1953type should, or should not, be automatically assembled. If an array
1954is found which is not listed in
1955.I mdadm.conf
1956and has a metadata format that is denied by the
1957.I AUTO
1958line, then it will not be assembled.
1959The
1960.I AUTO
1961line can also request that all arrays identified as being for this
1962homehost should be assembled regardless of their metadata type.
1963See
1964.IR mdadm.conf (5)
1965for further details.
1966
1967Note: Auto-assembly cannot be used for assembling and activating some
1968arrays which are undergoing reshape. In particular as the
1969.B backup\-file
1970cannot be given, any reshape which requires a backup file to continue
1971cannot be started by auto-assembly. An array which is growing to more
1972devices and has passed the critical section can be assembled using
1973auto-assembly.
1974
1975.SH BUILD MODE
1976
1977.HP 12
1978Usage:
1979.B mdadm \-\-build
1980.I md-device
1981.BI \-\-chunk= X
1982.BI \-\-level= Y
1983.BI \-\-raid\-devices= Z
1984.I devices
1985
1986.PP
1987This usage is similar to
1988.BR \-\-create .
1989The difference is that it creates an array without a superblock. With
1990these arrays there is no difference between initially creating the array and
1991subsequently assembling the array, except that hopefully there is useful
1992data there in the second case.
1993
The level may be raid0, linear, raid1, raid10, multipath, or faulty, or
1995one of their synonyms. All devices must be listed and the array will
1996be started once complete. It will often be appropriate to use
1997.B \-\-assume\-clean
1998with levels raid1 or raid10.
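
For example (device names are illustrative), a two-device RAID1 without
superblocks could be built with:
.in +5
.B " mdadm \-\-build /dev/md0 \-\-level=raid1 \-\-raid\-devices=2 \-\-assume\-clean /dev/sda1 /dev/sdb1"
.in -5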
1999
2000.SH CREATE MODE
2001
2002.HP 12
2003Usage:
2004.B mdadm \-\-create
2005.I md-device
2006.BI \-\-chunk= X
2007.BI \-\-level= Y
2008.BI \-\-raid\-devices= Z
2009.I devices
2010
2011.PP
2012This usage will initialize a new md array, associate some devices with
2013it, and activate the array.
2014
2015.I md-device
is a new device. This could be a standard name or a chosen name. For details see:
2017.BR "DEVICE NAMES"
2018
2019The named device will normally not exist when
2020.I "mdadm \-\-create"
2021is run, but will be created by
2022.I udev
2023once the array becomes active.
2024
The md-device name is limited to 32 characters.
Some metadata formats impose stricter limits
(for example IMSM, where only 16 characters are allowed).
For that reason, a long name may be truncated or rejected, depending on the metadata format.
2029
2030As devices are added, they are checked to see if they contain RAID
2031superblocks or filesystems. They are also checked to see if the variance in
2032device size exceeds 1%.
2033
2034If any discrepancy is found, the array will not automatically be run, though
2035the presence of a
2036.B \-\-run
2037can override this caution.
2038
2039To create a "degraded" array in which some devices are missing, simply
2040give the word "\fBmissing\fP"
2041in place of a device name. This will cause
2042.I mdadm
2043to leave the corresponding slot in the array empty.
2044For a RAID4 or RAID5 array at most one slot can be
2045"\fBmissing\fP"; for a RAID6 array at most two slots.
2046For a RAID1 array, only one real device needs to be given. All of the
2047others can be
2048"\fBmissing\fP".
2049
2050When creating a RAID5 array,
2051.I mdadm
2052will automatically create a degraded array with an extra spare drive.
2053This is because building the spare into a degraded array is in general
2054faster than resyncing the parity on a non-degraded, but not clean,
2055array. This feature can be overridden with the
2056.B \-\-force
2057option.
2058
2059When creating a partition based array, using
2060.I mdadm
2061with version-1.x metadata, the partition type should be set to
2062.B 0xDA
(non fs-data). This allows for greater precision, since using any other type,
such as RAID auto-detect (0xFD) or a GNU/Linux partition (0x83),
might create problems in the event of array recovery through a live CD-ROM.
2066
2067A new array will normally get a randomly assigned 128bit UUID which is
2068very likely to be unique. If you have a specific need, you can choose
2069a UUID for the array by giving the
2070.B \-\-uuid=
2071option. Be warned that creating two arrays with the same UUID is a
2072recipe for disaster. Also, using
2073.B \-\-uuid=
2074when creating a v0.90 array will silently override any
2075.B \-\-homehost=
2076setting.
2077.\"If the
2078.\".B \-\-size
2079.\"option is given, it is not necessary to list any component devices in this command.
2080.\"They can be added later, before a
2081.\".B \-\-run.
2082.\"If no
2083.\".B \-\-size
2084.\"is given, the apparent size of the smallest drive given is used.
2085
2086Space for a bitmap will be reserved so that one can be added later with
2087.BR "\-\-grow \-\-bitmap=internal" .
2088
2089If the metadata type supports it (currently only 1.x and IMSM metadata),
2090space will be allocated to store a bad block list. This allows a modest
2091number of bad blocks to be recorded, allowing the drive to remain in
2092service while only partially functional.
2093
2094When creating an array within a
2095.B CONTAINER
2096.I mdadm
2097can be given either the list of devices to use, or simply the name of
2098the container. The former case gives control over which devices in
2099the container will be used for the array. The latter case allows
2100.I mdadm
2101to automatically choose which devices to use based on how much spare
2102space is available.
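
For example (container and volume names are illustrative), giving only the
container lets
.I mdadm
pick the devices itself:
.in +5
.B " mdadm \-\-create /dev/md/vol0 \-\-level=1 \-\-raid\-devices=2 /dev/md/imsm0"
.in -5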
2103
2104The General Management options that are valid with
2105.B \-\-create
2106are:
2107.TP
2108.B \-\-run
2109insist on running the array even if some devices look like they might
2110be in use.
2111
2112.TP
2113.B \-\-readonly
2114start the array in readonly mode.
2115
2116.SH MANAGE MODE
2117.HP 12
2118Usage:
2119.B mdadm
2120.I device
2121.I options... devices...
2122.PP
2123
2124This usage will allow individual devices in an array to be failed,
2125removed or added. It is possible to perform multiple operations with
one command. For example:
2127.br
2128.B " mdadm /dev/md0 \-f /dev/hda1 \-r /dev/hda1 \-a /dev/hda1"
2129.br
2130will firstly mark
2131.B /dev/hda1
2132as faulty in
2133.B /dev/md0
2134and will then remove it from the array and finally add it back
2135in as a spare. However, only one md array can be affected by a single
2136command.
2137
2138When a device is added to an active array, mdadm checks to see if it
2139has metadata on it which suggests that it was recently a member of the
2140array. If it does, it tries to "re\-add" the device. If there have
2141been no changes since the device was removed, or if the array has a
2142write-intent bitmap which has recorded whatever changes there were,
2143then the device will immediately become a full member of the array and
2144those differences recorded in the bitmap will be resolved.
2145
2146.SH MISC MODE
2147.HP 12
2148Usage:
2149.B mdadm
2150.I options ...
2151.I devices ...
2152.PP
2153
2154MISC mode includes a number of distinct operations that
2155operate on distinct devices. The operations are:
2156.TP
2157.B \-\-query
2158The device is examined to see if it is
2159(1) an active md array, or
2160(2) a component of an md array.
2161The information discovered is reported.
2162
2163.TP
2164.B \-\-detail
2165The device should be an active md device.
2166.B mdadm
2167will display a detailed description of the array.
2168.B \-\-brief
2169or
2170.B \-\-scan
2171will cause the output to be less detailed and the format to be
2172suitable for inclusion in
2173.BR mdadm.conf .
2174The exit status of
2175.I mdadm
2176will normally be 0 unless
2177.I mdadm
2178failed to get useful information about the device(s); however, if the
2179.B \-\-test
2180option is given, then the exit status will be:
2181.RS
2182.TP
21830
2184The array is functioning normally.
2185.TP
21861
2187The array has at least one failed device.
2188.TP
21892
2190The array has multiple failed devices such that it is unusable.
2191.TP
21924
2193There was an error while trying to get information about the device.
2194.RE
2195
2196.TP
2197.B \-\-detail\-platform
2198Print detail of the platform's RAID capabilities (firmware / hardware
2199topology). If the metadata is specified with
2200.B \-e
2201or
2202.B \-\-metadata=
2203then the return status will be:
2204.RS
2205.TP
22060
2207metadata successfully enumerated its platform components on this system
2208.TP
22091
2210metadata is platform independent
2211.TP
22122
2213metadata failed to find its platform components on this system
2214.RE
2215
2216.TP
2217.B \-\-update\-subarray=
2218If the device is a container and the argument to \-\-update\-subarray
2219specifies a subarray in the container, then attempt to update the given
2220superblock field in the subarray. Similar to updating an array in
2221"assemble" mode, the field to update is selected by
2222.B \-U
2223or
2224.B \-\-update=
2225option. The supported options are
2226.BR name ,
2227.BR ppl ,
2228.BR no\-ppl ,
2229.BR bitmap
2230and
2231.BR no\-bitmap .
2232
2233The
2234.B name
2235option updates the subarray name in the metadata. It cannot be longer than
32 characters. If successful, the new value will be respected after the next assembly.
2237
2238The
2239.B ppl
2240and
2241.B no\-ppl
2242options enable and disable PPL in the metadata. Currently supported only for
2243IMSM subarrays.
2244
2245The
2246.B bitmap
2247and
2248.B no\-bitmap
2249options enable and disable write-intent bitmap in the metadata. Currently supported only for
2250IMSM subarrays.
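
As a sketch (the container name and subarray index are illustrative), PPL could
be enabled for one subarray of an IMSM container with:
.in +5
.B " mdadm \-\-update\-subarray=0 \-\-update=ppl /dev/md/imsm0"
.in -5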
2251
2252.TP
2253.B \-\-examine
2254The device should be a component of an md array.
2255.I mdadm
2256will read the md superblock of the device and display the contents.
2257If
2258.B \-\-brief
2259or
2260.B \-\-scan
2261is given, then multiple devices that are components of the one array
2262are grouped together and reported in a single entry suitable
2263for inclusion in
2264.BR mdadm.conf .
2265
2266Having
2267.B \-\-scan
2268without listing any devices will cause all devices listed in the
2269config file to be examined.
2270
2271.TP
2272.BI \-\-dump= directory
2273If the device contains RAID metadata, a file will be created in the
2274.I directory
2275and the metadata will be written to it. The file will be the same
2276size as the device and will have the metadata written at the
2277same location as it exists in the device. However, the file will be "sparse" so
2278that only those blocks containing metadata will be allocated. The
2279total space used will be small.
2280
2281The filename used in the
2282.I directory
2283will be the base name of the device. Further, if any links appear in
2284.I /dev/disk/by-id
2285which point to the device, then hard links to the file will be created
2286in
2287.I directory
2288based on these
2289.I by-id
2290names.
2291
2292Multiple devices can be listed and their metadata will all be stored
2293in the one directory.
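
For example (paths are illustrative), metadata from two components could be
saved with:
.in +5
.B " mdadm \-\-dump=/var/tmp/md\-meta /dev/sda1 /dev/sdb1"
.in -5
The same directory can later be given to
.B \-\-restore
to write the metadata back.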
2294
2295.TP
2296.BI \-\-restore= directory
2297This is the reverse of
2298.BR \-\-dump .
2299.I mdadm
2300will locate a file in the directory that has a name appropriate for
2301the given device and will restore metadata from it. Names that match
2302.I /dev/disk/by-id
2303names are preferred, however if two of those refer to different files,
2304.I mdadm
2305will not choose between them but will abort the operation.
2306
2307If a file name is given instead of a
2308.I directory
2309then
2310.I mdadm
2311will restore from that file to a single device, always provided the
2312size of the file matches that of the device, and the file contains
2313valid metadata.
2314.TP
2315.B \-\-stop
2316The devices should be active md arrays which will be deactivated, as
2317long as they are not currently in use.
2318
2319.TP
2320.B \-\-run
2321This will fully activate a partially assembled md array.
2322
2323.TP
2324.B \-\-readonly
2325This will mark an active array as read-only, providing that it is
2326not currently being used.
2327
2328.TP
2329.B \-\-readwrite
2330This will change a
2331.B readonly
2332array back to being read/write.
2333
2334.TP
2335.B \-\-scan
2336For all operations except
2337.BR \-\-examine ,
2338.B \-\-scan
2339will cause the operation to be applied to all arrays listed in
2340.BR /proc/mdstat .
2341For
.BR \-\-examine ,
2343.B \-\-scan
2344causes all devices listed in the config file to be examined.
2345
2346.TP
2347.BR \-b ", " \-\-brief
2348Be less verbose. This is used with
2349.B \-\-detail
2350and
2351.BR \-\-examine .
2352Using
2353.B \-\-brief
2354with
2355.B \-\-verbose
2356gives an intermediate level of verbosity.
2357
2358.SH MONITOR MODE
2359
2360.HP 12
2361Usage:
2362.B mdadm \-\-monitor
2363.I options... devices...
2364
2365.PP
Monitor mode can work in two ways:
.IP \(bu 4
system-wide mode, following all md devices listed in
.BR /proc/mdstat ;
.IP \(bu 4
following only the MD devices specified on the command line.
2372.PP
2373
.B \-\-scan
selects system-wide mode. This option causes the
.I monitor
to track all md devices that appear in
.BR /proc/mdstat .
2379If it is not set, then at least one
2380.B device
2381must be specified.
2382
2383Monitor usage causes
2384.I mdadm
2385to periodically poll a number of md arrays and to report on any events
2386noticed.
2387
2388In both modes,
2389.I monitor
will keep working as long as there is an active array with redundancy that it has been told to follow (with
2391.B \-\-scan
2392every array is followed).
2393
2394As well as reporting events,
2395.I mdadm
2396may move a spare drive from one array to another if they are in the
2397same
2398.B spare-group
2399or
2400.B domain
2401and if the destination array has a failed drive but no spares.
2402
2403The result of monitoring the arrays is the generation of events.
2404These events are passed to a separate program (if specified) and may
2405be mailed to a given E-mail address.
2406
2407When passing events to a program, the program is run once for each event,
2408and is given 2 or 3 command-line arguments: the first is the
2409name of the event (see below), the second is the name of the
2410md device which is affected, and the third is the name of a related
2411device if relevant (such as a component device that has failed).
2412
2413If
2414.B \-\-scan
2415is given, then a
2416.B program
2417or an
2418.B e-mail
address must be specified on the command line or in the config file. If neither is available, then
2420.I mdadm
2421will not monitor anything. For devices given directly in command line, without
2422.B program
2423or
2424.B email
2425specified, each event is reported to
2426.BR stdout.
2427
2428Note: On systems where mdadm monitoring is managed through systemd, the mdmonitor.service
2429should be present. This service is designed to be the primary solution for array monitoring.
2430It is configured to operate in system-wide mode. It is initiated by udev when start criteria are
2431met, e.g.
2432.B mdadm.conf
2433exists and necessary configuration parameters are set.
It is kept alive as long as a redundant RAID array is active; it stops otherwise. Users should
2435customize MAILADDR in
2436.B mdadm.conf
2437to receive mail notifications. MONITORDELAY, MAILFROM and PROGRAM are optional. See
2438.BR mdadm.conf (5)
2439for detailed description of these options.
2440Use systemctl status mdmonitor.service to verify status or determine if additional configuration
2441is needed.
2442
2443The different events are:
2444
2445.RS 4
2446.TP
2447.B DeviceDisappeared
2448An md array which previously was configured appears to no longer be
2449configured. (syslog priority: Critical)
2450
2451If
2452.I mdadm
2453was told to monitor an array which is RAID0 or Linear, then it will
2454report
2455.B DeviceDisappeared
2456with the extra information
2457.BR Wrong-Level .
2458This is because RAID0 and Linear do not support the device-failed,
2459hot-spare and resync operations which are monitored.
2460
2461.TP
2462.B RebuildStarted
2463An md array started reconstruction (e.g. recovery, resync, reshape,
2464check, repair). (syslog priority: Warning)
2465
2466.TP
2467.BI Rebuild NN
2468Where
2469.I NN
is a two-digit number (e.g. 05, 48). This indicates that the rebuild
2471has reached that percentage of the total. The events are generated
2472at a fixed increment from 0. The increment size may be specified with
2473a command-line option (the default is 20). (syslog priority: Warning)
2474
2475.TP
2476.B RebuildFinished
2477An md array that was rebuilding, isn't any more, either because it
2478finished normally or was aborted. (syslog priority: Warning)
2479
2480.TP
2481.B Fail
2482An active component device of an array has been marked as
2483faulty. (syslog priority: Critical)
2484
2485.TP
2486.B FailSpare
2487A spare component device which was being rebuilt to replace a faulty
2488device has failed. (syslog priority: Critical)
2489
2490.TP
2491.B SpareActive
2492A spare component device which was being rebuilt to replace a faulty
2493device has been successfully rebuilt and has been made active.
2494(syslog priority: Info)
2495
2496.TP
2497.B NewArray
2498A new md array has been detected in the
2499.B /proc/mdstat
2500file. (syslog priority: Info)
2501
2502.TP
2503.B DegradedArray
2504A newly noticed array appears to be degraded. This message is not
2505generated when
2506.I mdadm
2507notices a drive failure which causes degradation, but only when
2508.I mdadm
2509notices that an array is degraded when it first sees the array.
2510(syslog priority: Critical)
2511
2512.TP
2513.B MoveSpare
2514A spare drive has been moved from one array in a
2515.B spare-group
2516or
2517.B domain
2518to another to allow a failed drive to be replaced.
2519(syslog priority: Info)
2520
2521.TP
2522.B SparesMissing
2523If
2524.I mdadm
2525has been told, via the config file, that an array should have a certain
2526number of spare devices, and
2527.I mdadm
2528detects that it has fewer than this number when it first sees the
2529array, it will report a
2530.B SparesMissing
2531message.
2532(syslog priority: Warning)
2533
2534.TP
2535.B TestMessage
2536An array was found at startup, and the
2537.B \-\-test
2538flag was given.
2539(syslog priority: Info)
2540.RE
2541
2542Only
.BR Fail ,
.BR FailSpare ,
.BR DegradedArray ,
2546.B SparesMissing
2547and
2548.B TestMessage
2549cause Email to be sent. All events cause the program to be run.
2550The program is run with two or three arguments: the event
2551name, the array device and possibly a second device.
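
As an illustration (the script and log tag are hypothetical), a minimal alert
program could simply log its arguments:
.in +5
.B "#!/bin/sh"
.br
.B "# $1 = event name, $2 = md device, $3 = related device (may be absent)"
.br
.B "logger \-t md\-alert $1 $2 $3"
.in -5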
2552
2553Each event has an associated array device (e.g.
2554.BR /dev/md1 )
2555and possibly a second device. For
2556.BR Fail ,
2557.BR FailSpare ,
2558and
2559.B SpareActive
2560the second device is the relevant component device.
2561For
2562.B MoveSpare
2563the second device is the array that the spare was moved from.
2564
2565For
2566.I mdadm
2567to move spares from one array to another, the different arrays need to
2568be labeled with the same
2569.B spare-group
2570or the spares must be allowed to migrate through matching POLICY domains
2571in the configuration file. The
2572.B spare-group
2573name can be any string; it is only necessary that different spare
2574groups use different names.
2575
2576When
2577.I mdadm
2578detects that an array in a spare group has fewer active
2579devices than necessary for the complete array, and has no spare
2580devices, it will look for another array in the same spare group that
2581has a full complement of working drives and a spare. It will then
2582attempt to remove the spare from the second array and add it to the
2583first.
2584If the removal succeeds but the adding fails, then it is added back to
2585the original array.
2586
2587If the spare group for a degraded array is not defined,
2588.I mdadm
2589will look at the rules of spare migration specified by POLICY lines in
2590.B mdadm.conf
2591and then follow similar steps as above if a matching spare is found.
2592
2593.SH GROW MODE
2594The GROW mode is used for changing the size or shape of an active
2595array.
2596
2597The following changes are supported:
2598.IP \(bu 4
2599change the "size" attribute for RAID1, RAID4, RAID5 and RAID6.
2600.IP \(bu 4
2601increase or decrease the "raid\-devices" attribute of RAID0, RAID1, RAID4,
2602RAID5, and RAID6.
2603.IP \(bu 4
2604change the chunk-size and layout of RAID0, RAID4, RAID5, RAID6 and RAID10.
2605.IP \(bu 4
2606convert between RAID1 and RAID5, between RAID5 and RAID6, between
2607RAID0, RAID4, and RAID5, and between RAID0 and RAID10 (in the near-2 mode).
2608.IP \(bu 4
2609add a write-intent bitmap to any array which supports these bitmaps, or
2610remove a write-intent bitmap from such an array.
2611.IP \(bu 4
2612change the array's consistency policy.
2613.PP
2614
2615Using GROW on containers is currently supported only for Intel's IMSM
2616container format. The number of devices in a container can be
2617increased - which affects all arrays in the container - or an array
2618in a container can be converted between levels where those levels are
supported by the container, and the conversion is one of those listed
2620above.
2621
2622.PP
2623Notes:
2624.IP \(bu 4
Intel's native checkpointing doesn't use the
.B \-\-backup\-file
option and is transparent to the assembly feature.
2628.IP \(bu 4
2629Roaming between Windows(R) and Linux systems for IMSM metadata is not
2630supported during grow process.
2631.IP \(bu 4
2632When growing a raid0 device, the new component disk size (or external
2633backup size) should be larger than LCM(old, new) * chunk-size * 2,
2634where LCM() is the least common multiple of the old and new count of
2635component disks, and "* 2" comes from the fact that mdadm refuses to
2636use more than half of a spare device for backup space.
2637
2638.SS SIZE CHANGES
2639Normally when an array is built the "size" is taken from the smallest
of the drives. If all the small drives in an array are, over time,
2641removed and replaced with larger drives, then you could have an
2642array of large drives with only a small amount used. In this
2643situation, changing the "size" with "GROW" mode will allow the extra
2644space to start being used. If the size is increased in this way, a
2645"resync" process will start to make sure the new parts of the array
2646are synchronised.
2647
2648Note that when an array changes size, any filesystem that may be
2649stored in the array will not automatically grow or shrink to use or
2650vacate the space. The
2651filesystem will need to be explicitly told to use the extra space
2652after growing, or to reduce its size
2653.B prior
2654to shrinking the array.
2655
2656Also, the size of an array cannot be changed while it has an active
2657bitmap. If an array has a bitmap, it must be removed before the size
2658can be changed. Once the change is complete a new bitmap can be created.
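
For example (the array name is illustrative, and the
.B max
and
.B none
keywords are assumed to be accepted by the respective options), the sequence
might be:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-bitmap=none"
.br
.B " mdadm \-\-grow /dev/md0 \-\-size=max"
.br
.B " mdadm \-\-grow /dev/md0 \-\-bitmap=internal"
.in -5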
2659
2660.SS RAID\-DEVICES CHANGES
2661
2662A RAID1 array can work with any number of devices from 1 upwards
(though 1 is not very useful). There may be times when you want to
2664increase or decrease the number of active devices. Note that this is
different from hot-add or hot-remove, which change the number of
2666inactive devices.
2667
2668When reducing the number of devices in a RAID1 array, the slots which
2669are to be removed from the array must already be vacant. That is, the
2670devices which were in those slots must be failed and removed.
2671
2672When the number of devices is increased, any hot spares that are
2673present will be activated immediately.
2674
2675Changing the number of active devices in a RAID5 or RAID6 is much more
2676effort. Every block in the array will need to be read and written
back to a new location. The Linux kernel is able to increase or decrease
2678the number of devices in a RAID5 and RAID6 safely, including restarting
2679an interrupted "reshape".
2680
The Linux kernel is able to convert a RAID0 into a RAID4 or RAID5.
2682.I mdadm
2683uses this functionality and the ability to add
2684devices to a RAID4 to allow devices to be added to a RAID0. When
2685requested to do this,
2686.I mdadm
2687will convert the RAID0 to a RAID4, add the necessary disks and make
2688the reshape happen, and then convert the RAID4 back to RAID0.
2689
2690When decreasing the number of devices, the size of the array will also
2691decrease. If there was data in the array, it could get destroyed and
2692this is not reversible, so you should firstly shrink the filesystem on
2693the array to fit within the new size. To help prevent accidents,
2694.I mdadm
2695requires that the size of the array be decreased first with
2696.BR "mdadm --grow --array-size" .
2697This is a reversible change which simply makes the end of the array
2698inaccessible. The integrity of any data can then be checked before
the non-reversible reduction in the number of devices is requested.
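
A sketch of such a shrink (names and sizes are illustrative) might be:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-array\-size=104857600"
.br
.B " mdadm \-\-grow /dev/md0 \-\-raid\-devices=3 \-\-backup\-file=/root/md0\-backup"
.in -5
with the filesystem shrunk to fit before the first step, and the result verified
before the second.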
2700
2701When relocating the first few stripes on a RAID5 or RAID6, it is not
2702possible to keep the data on disk completely consistent and
2703crash-proof. To provide the required safety, mdadm disables writes to
2704the array while this "critical section" is reshaped, and takes a
2705backup of the data that is in that section. For grows, this backup may be
2706stored in any spare devices that the array has, however it can also be
2707stored in a separate file specified with the
2708.B \-\-backup\-file
2709option, and is required to be specified for shrinks, RAID level
2710changes and layout changes. If this option is used, and the system
2711does crash during the critical period, the same file must be passed to
2712.B \-\-assemble
2713to restore the backup and reassemble the array. When shrinking rather
2714than growing the array, the reshape is done from the end towards the
2715beginning, so the "critical section" is at the end of the reshape.
2716
2717.SS LEVEL CHANGES
2718
2719Changing the RAID level of any array happens instantaneously. However
2720in the RAID5 to RAID6 case this requires a non-standard layout of the
2721RAID6 data, and in the RAID6 to RAID5 case that non-standard layout is
2722required before the change can be accomplished. So while the level
2723change is instant, the accompanying layout change can take quite a
2724long time. A
2725.B \-\-backup\-file
2726is required. If the array is not simultaneously being grown or
2727shrunk, so that the array size will remain the same - for example,
2728reshaping a 3-drive RAID5 into a 4-drive RAID6 - the backup file will
2729be used not just for a "critical section" but throughout the reshape
2730operation, as described below under LAYOUT CHANGES.
2731
2732.SS CHUNK-SIZE AND LAYOUT CHANGES
2733
2734Changing the chunk-size or layout without also changing the number of
devices at the same time will involve re-writing all blocks in-place.
2736To ensure against data loss in the case of a crash, a
.B \-\-backup\-file
2738must be provided for these changes. Small sections of the array will
2739be copied to the backup file while they are being rearranged. This
2740means that all the data is copied twice, once to the backup and once
2741to the new layout on the array, so this type of reshape will go very
2742slowly.
2743
2744If the reshape is interrupted for any reason, this backup file must be
2745made available to
2746.B "mdadm --assemble"
2747so the array can be reassembled. Consequently, the file cannot be
2748stored on the device being reshaped.
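
For example (names are illustrative), the chunk size of an array could be
changed with:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-chunk=128 \-\-backup\-file=/root/md0\-backup"
.in -5
where the backup file resides on a device that is not part of the array being
reshaped.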
2749
2750
2751.SS BITMAP CHANGES
2752
2753A write-intent bitmap can be added to, or removed from, an active
2754array.
2755
2756.SS CONSISTENCY POLICY CHANGES
2757
2758The consistency policy of an active array can be changed by using the
2759.B \-\-consistency\-policy
2760option in Grow mode. Currently this works only for the
2761.B ppl
2762and
2763.B resync
policies and allows the RAID5 Partial Parity Log (PPL) to be enabled or disabled.
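
For example (the array name is illustrative), PPL could be enabled on an
existing RAID5 array with:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-consistency\-policy=ppl"
.in -5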
2765
2766.SH INCREMENTAL MODE
2767
2768.HP 12
2769Usage:
2770.B mdadm \-\-incremental
2771.RB [ \-\-run ]
2772.RB [ \-\-quiet ]
2773.I component-device
2774.RI [ optional-aliases-for-device ]
2775.HP 12
2776Usage:
2777.B mdadm \-\-incremental \-\-fail
2778.I component-device
2779.HP 12
2780Usage:
2781.B mdadm \-\-incremental \-\-rebuild\-map
2782.HP 12
2783Usage:
2784.B mdadm \-\-incremental \-\-run \-\-scan
2785
2786.PP
2787This mode is designed to be used in conjunction with a device
2788discovery system. As devices are found in a system, they can be
2789passed to
2790.B "mdadm \-\-incremental"
2791to be conditionally added to an appropriate array.
2792
2793Conversely, it can also be used with the
2794.B \-\-fail
2795flag to do just the opposite and find whatever array a particular device
2796is part of and remove the device from that array.
2797
2798If the device passed is a
2799.B CONTAINER
2800device created by a previous call to
2801.IR mdadm ,
2802then rather than trying to add that device to an array, all the arrays
2803described by the metadata of the container will be started.
2804
2805.I mdadm
2806performs a number of tests to determine if the device is part of an
2807array, and which array it should be part of. If an appropriate array
2808is found, or can be created,
2809.I mdadm
2810adds the device to the array and conditionally starts the array.
2811
2812Note that
2813.I mdadm
2814will normally only add devices to an array which were previously working
2815(active or spare) parts of that array. The support for automatic
2816inclusion of a new drive as a spare in some array requires
a configuration through POLICY in the config file.
2818
2819The tests that
2820.I mdadm
makes are as follows:
2822.IP +
2823Is the device permitted by
2824.BR mdadm.conf ?
2825That is, is it listed in a
2826.B DEVICES
2827line in that file. If
2828.B DEVICES
is absent then the default is to allow any device. Similarly if
2830.B DEVICES
2831contains the special word
2832.B partitions
2833then any device is allowed. Otherwise the device name given to
2834.IR mdadm ,
2835or one of the aliases given, or an alias found in the filesystem,
2836must match one of the names or patterns in a
2837.B DEVICES
2838line.
2839
2840This is the only context where the aliases are used. They are
usually provided by
2842.I udev
2843rules mentioning
2844.BR $env{DEVLINKS} .
2845
2846.IP +
2847Does the device have a valid md superblock? If a specific metadata
2848version is requested with
2849.B \-\-metadata
2850or
2851.B \-e
2852then only that style of metadata is accepted, otherwise
2853.I mdadm
2854finds any known version of metadata. If no
2855.I md
metadata is found, the device may still be added to an array
2857as a spare if POLICY allows.
2858
2859.ig
2860.IP +
2861Does the metadata match an expected array?
2862The metadata can match in two ways. Either there is an array listed
2863in
2864.B mdadm.conf
2865which identifies the array (either by UUID, by name, by device list,
2866or by minor-number), or the array was created with a
2867.B homehost
2868specified and that
2869.B homehost
2870matches the one in
2871.B mdadm.conf
2872or on the command line.
2873If
2874.I mdadm
2875is not able to positively identify the array as belonging to the
2876current host, the device will be rejected.
2877..
2878
2879.PP
2880.I mdadm
2881keeps a list of arrays that it has partially assembled in
2882.BR {MAP_PATH} .
2883If no array exists which matches
2884the metadata on the new device,
2885.I mdadm
2886must choose a device name and unit number. It does this based on any
2887name given in
2888.B mdadm.conf
2889or any name information stored in the metadata. If this name
2890suggests a unit number, that number will be used, otherwise a free
2891unit number will be chosen. Normally
2892.I mdadm
2893will prefer to create a partitionable array, however if the
2894.B CREATE
2895line in
2896.B mdadm.conf
2897suggests that a non-partitionable array is preferred, that will be
2898honoured.
2899
2900If the array is not found in the config file and its metadata does not
2901identify it as belonging to the "homehost", then
2902.I mdadm
2903will choose a name for the array which is certain not to conflict with
any array which does belong to this host. It does this by adding an
2905underscore and a small number to the name preferred by the metadata.
2906
2907Once an appropriate array is found or created and the device is added,
2908.I mdadm
2909must decide if the array is ready to be started. It will
2910normally compare the number of available (non-spare) devices to the
2911number of devices that the metadata suggests need to be active. If
2912there are at least that many, the array will be started. This means
2913that if any devices are missing the array will not be restarted.
2914
2915As an alternative,
2916.B \-\-run
2917may be passed to
2918.I mdadm
2919in which case the array will be run as soon as there are enough
2920devices present for the data to be accessible. For a RAID1, that
2921means one device will start the array. For a clean RAID5, the array
2922will be started as soon as all but one drive is present.
2923
2924Note that neither of these approaches is really ideal. If it can
2925be known that all device discovery has completed, then
2926.br
2927.B " mdadm \-IRs"
2928.br
2929can be run which will try to start all arrays that are being
2930incrementally assembled. They are started in "read-auto" mode in
2931which they are read-only until the first write request. This means
2932that no metadata updates are made and no attempt at resync or recovery
2933happens. Further devices that are found before the first write can
2934still be added safely.
2935
2936.SH ENVIRONMENT
2937This section describes environment variables that affect how mdadm
2938operates.
2939
2940.TP
2941.B MDADM_NO_MDMON
2942Setting this value to 1 will prevent mdadm from automatically launching
2943mdmon. This variable is intended primarily for debugging mdadm/mdmon.
2944
2945.TP
2946.B MDADM_NO_UDEV
2947Normally,
2948.I mdadm
2949does not create any device nodes in /dev, but leaves that task to
2950.IR udev .
2951If
2952.I udev
2953appears not to be configured, or if this environment variable is set
to '1', then
.I mdadm
will create any devices that are needed.
2957
2958.TP
2959.B MDADM_NO_SYSTEMCTL
2960If
2961.I mdadm
2962detects that
2963.I systemd
2964is in use it will normally request
2965.I systemd
2966to start various background tasks (particularly
2967.IR mdmon )
2968rather than forking and running them in the background. This can be
2969suppressed by setting
2970.BR MDADM_NO_SYSTEMCTL=1 .
2971
2972.TP
2973.B IMSM_NO_PLATFORM
2974A key value of IMSM metadata is that it allows interoperability with
2975boot ROMs on Intel platforms, and with other major operating systems.
2976Consequently,
2977.I mdadm
will only allow an IMSM array to be created or modified if it detects
2979that it is running on an Intel platform which supports IMSM, and
2980supports the particular configuration of IMSM that is being requested
2981(some functionality requires newer OROM support).
2982
2983These checks can be suppressed by setting IMSM_NO_PLATFORM=1 in the
2984environment. This can be useful for testing or for disaster
2985recovery. You should be aware that interoperability may be
2986compromised by setting this value.
2987
These checks can also be suppressed by adding
2989.B mdadm.imsm.test=1
2990to the kernel command line. This makes it easy to test IMSM
2991code in a virtual machine that doesn't have IMSM virtual hardware.
2992
2993.TP
2994.B MDADM_GROW_ALLOW_OLD
2995If an array is stopped while it is performing a reshape and that
2996reshape was making use of a backup file, then when the array is
2997re-assembled
2998.I mdadm
2999will sometimes complain that the backup file is too old. If this
3000happens and you are certain it is the right backup file, you can
3001over-ride this check by setting
3002.B MDADM_GROW_ALLOW_OLD=1
3003in the environment.
3004
3005.TP
3006.B MDADM_CONF_AUTO
3007Any string given in this variable is added to the start of the
3008.B AUTO
3009line in the config file, or treated as the whole
3010.B AUTO
3011line if none is given. It can be used to disable certain metadata
3012types when
3013.I mdadm
3014is called from a boot script. For example
3015.br
3016.B " export MDADM_CONF_AUTO='-ddf -imsm'
3017.br
3018will make sure that
3019.I mdadm
3020does not automatically assemble any DDF or
3021IMSM arrays that are found. This can be useful on systems configured
3022to manage such arrays with
3023.BR dmraid .
3024
3025
3026.SH EXAMPLES
3027
3028.B " mdadm \-\-query /dev/name-of-device"
3029.br
3030This will find out if a given device is a RAID array, or is part of
3031one, and will provide brief information about the device.
3032
3033.B " mdadm \-\-assemble \-\-scan"
3034.br
3035This will assemble and start all arrays listed in the standard config
3036file. This command will typically go in a system startup file.
3037
3038.B " mdadm \-\-stop \-\-scan"
3039.br
3040This will shut down all arrays that can be shut down (i.e. are not
3041currently in use). This will typically go in a system shutdown script.
3042
3043.B " mdadm \-\-follow \-\-scan \-\-delay=120"
3044.br
3045If (and only if) there is an Email address or program given in the
3046standard config file, then
3047monitor the status of all arrays listed in that file by
polling them every 2 minutes.
3049
3050.B " mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/hd[ac]1"
3051.br
3052Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
3053
3054.br
3055.B " echo 'DEVICE /dev/hd*[0\-9] /dev/sd*[0\-9]' > mdadm.conf"
3056.br
3057.B " mdadm \-\-detail \-\-scan >> mdadm.conf"
3058.br
3059This will create a prototype config file that describes currently
3060active arrays that are known to be made from partitions of IDE or SCSI drives.
3061This file should be reviewed before being used as it may
3062contain unwanted detail.
3063
3064.B " echo 'DEVICE /dev/hd[a\-z] /dev/sd*[a\-z]' > mdadm.conf"
3065.br
3066.B " mdadm \-\-examine \-\-scan \-\-config=mdadm.conf >> mdadm.conf"
3067.br
3068This will find arrays which could be assembled from existing IDE and
3069SCSI whole drives (not partitions), and store the information in the
3070format of a config file.
3071This file is very likely to contain unwanted detail, particularly
3072the
3073.B devices=
3074entries. It should be reviewed and edited before being used as an
3075actual config file.
3076
3077.B " mdadm \-\-examine \-\-brief \-\-scan \-\-config=partitions"
3078.br
3079.B " mdadm \-Ebsc partitions"
3080.br
3081Create a list of devices by reading
3082.BR /proc/partitions ,
scan these for RAID superblocks, and print out a brief listing of all
3084that were found.
3085
3086.B " mdadm \-Ac partitions \-m 0 /dev/md0"
3087.br
3088Scan all partitions and devices listed in
3089.BR /proc/partitions
3090and assemble
3091.B /dev/md0
3092out of all such devices with a RAID superblock with a minor number of 0.
3093
3094.B " mdadm \-\-monitor \-\-scan \-\-daemonise > /run/mdadm/mon.pid"
3095.br
If the config file contains a mail address or alert program, run mdadm in
3097the background in monitor mode monitoring all md devices. Also write
3098pid of mdadm daemon to
3099.BR /run/mdadm/mon.pid .
3100
3101.B " mdadm \-Iq /dev/somedevice"
3102.br
3103Try to incorporate newly discovered device into some array as
3104appropriate.
3105
3106.B " mdadm \-\-incremental \-\-rebuild\-map \-\-run \-\-scan"
3107.br
3108Rebuild the array map from any current arrays, and then start any that
3109can be started.
3110
3111.B " mdadm /dev/md4 --fail detached --remove detached"
3112.br
3113Any devices which are components of /dev/md4 will be marked as faulty
and then removed from the array.
3115
3116.B " mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4"
3117.br
3118The array
3119.B /dev/md4
3120which is currently a RAID5 array will be converted to RAID6. There
3121should normally already be a spare drive attached to the array as a
3122RAID6 needs one more drive than a matching RAID5.
3123
3124.B " mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]"
3125.br
3126Create a DDF array over 6 devices.
3127
3128.B " mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf"
3129.br
3130Create a RAID5 array over any 3 devices in the given DDF set. Use
3131only 30 gigabytes of each device.
3132
3133.B " mdadm -A /dev/md/ddf1 /dev/sd[a-f]"
3134.br
Assemble a pre-existing DDF array.
3136
3137.B " mdadm -I /dev/md/ddf1"
3138.br
3139Assemble all arrays contained in the ddf array, assigning names as
3140appropriate.
3141
3142.B " mdadm \-\-create \-\-help"
3143.br
3144Provide help about the Create mode.
3145
3146.B " mdadm \-\-config \-\-help"
3147.br
3148Provide help about the format of the config file.
3149
3150.B " mdadm \-\-help"
3151.br
3152Provide general help.
3153
3154.SH FILES
3155
3156.SS /proc/mdstat
3157
3158If you're using the
3159.B /proc
3160filesystem,
3161.B /proc/mdstat
3162lists all active md devices with information about them.
3163.I mdadm
3164uses this to find arrays when
3165.B \-\-scan
3166is given in Misc mode, and to monitor array reconstruction
in Monitor mode.
3168
3169.SS {CONFFILE} (or {CONFFILE2})
3170
3171Default config file. See
3172.BR mdadm.conf (5)
3173for more details.
3174
3175.SS {CONFFILE}.d (or {CONFFILE2}.d)
3176
3177Default directory containing configuration files. See
3178.BR mdadm.conf (5)
3179for more details.
3180
3181.SS {MAP_PATH}
3182When
3183.B \-\-incremental
3184mode is used, this file gets a list of arrays currently being created.
3185
3186.SH POSIX PORTABLE NAME
3187A valid name can only consist of characters "A-Za-z0-9.-_".
The name cannot start with a leading "-" and cannot exceed 255 characters.
3189
3190.SH DEVICE NAMES
3191
3192.I mdadm
understands two sorts of names for array devices.
3194
3195The first is the so-called 'standard' format name, which matches the
3196names used by the kernel and which appear in
3197.IR /proc/mdstat .
3198
3199The second sort can be freely chosen, but must reside in
3200.IR /dev/md/ .
3201When giving a device name to
3202.I mdadm
3203to create or assemble an array, either full path name such as
3204.I /dev/md0
3205or
3206.I /dev/md/home
can be given, or just the suffix of the second sort of name, such as
.IR home .
3210
In every style, the raw name must be no longer than 32 characters.
3212
3213When
3214.I mdadm
3215chooses device names during auto-assembly or incremental assembly, it
3216will sometimes add a small sequence number to the end of the name to
avoid conflicts between multiple arrays that have the same name. If
3218.I mdadm
3219can reasonably determine that the array really is meant for this host,
3220either by a hostname in the metadata, or by the presence of the array
3221in
3222.BR mdadm.conf ,
3223then it will leave off the suffix if possible.
3224Also if the homehost is specified as
3225.B <ignore>
3226.I mdadm
3227will only use a suffix if a different array of the same name already
3228exists or is listed in the config file.
3229
3230The names for arrays are of the form:
3231.IP
3232.RB /dev/md NN
3233.PP
3234where NN is a number.
3235
3236.PP
3237Names can be non-numeric following
3238the form:
3239.IP
3240.RB /dev/md_ XXX
3241.PP
3242where
3243.B XXX
3244is any string. These names are supported by
3245.I mdadm
3246since version 3.3 provided they are enabled in
3247.IR mdadm.conf .
3248
3249.SH UNDERSTANDING OUTPUT
3250
3251.TP
3252EXAMINE
3253
3254.TP
3255.B checkpoint
A checkpoint value is reported while the array is performing an action such as
resync, recovery or reshape. Checkpoints allow the action to be resumed from a
certain point if it was interrupted.

The checkpoint is reported as a combination of two values: the current migration
unit and the number of blocks per unit. Multiplying those values and dividing by
the array size gives the checkpoint progress as a percentage, comparable to the
current progress reported in /proc/mdstat. The checkpoint is also related to (and
sometimes based on) the sysfs entry sync_completed, but the units may differ
depending on the action. Even if the units are the same, the checkpoint and
sync_completed should not be expected to match exactly nor to be updated
simultaneously.
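
For illustration (the numbers are invented): a current migration unit of 100
with 2048 blocks per unit corresponds to 204800 blocks completed, which on an
array of 1000000 blocks is roughly 20% progress.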
3267
3268.SH NOTE
3269.I mdadm
3270was previously known as
3271.IR mdctl .
3272
3273.SH SEE ALSO
3274For further information on mdadm usage, MD and the various levels of
3275RAID, see:
3276.IP
3277.B https://raid.wiki.kernel.org/
3278.PP
3279(based upon Jakob \(/Ostergaard's Software\-RAID.HOWTO)
3280.PP
3281The latest version of
3282.I mdadm
3283should always be available from
3284.IP
3285.B https://www.kernel.org/pub/linux/utils/raid/mdadm/
3286.PP
3287Related man pages:
3288.PP
3289.IR mdmon (8),
3290.IR mdadm.conf (5),
3291.IR md (4).