1 .\" -*- nroff -*-
2 .\" Copyright Neil Brown and others.
3 .\" This program is free software; you can redistribute it and/or modify
4 .\" it under the terms of the GNU General Public License as published by
5 .\" the Free Software Foundation; either version 2 of the License, or
6 .\" (at your option) any later version.
7 .\" See file COPYING in distribution for details.
8 .TH MDADM 8 "" v4.2-rc1
9 .SH NAME
10 mdadm \- manage MD devices
11 .I aka
12 Linux Software RAID
13
14 .SH SYNOPSIS
15
16 .BI mdadm " [mode] <raiddevice> [options] <component-devices>"
17
18 .SH DESCRIPTION
19 RAID devices are virtual devices created from two or more
20 real block devices. This allows multiple devices (typically disk
21 drives or partitions thereof) to be combined into a single device to
22 hold (for example) a single filesystem.
23 Some RAID levels include redundancy and so can survive some degree of
24 device failure.
25
26 Linux Software RAID devices are implemented through the md (Multiple
27 Devices) device driver.
28
29 Currently, Linux supports
30 .B LINEAR
31 md devices,
32 .B RAID0
33 (striping),
34 .B RAID1
35 (mirroring),
36 .BR RAID4 ,
37 .BR RAID5 ,
38 .BR RAID6 ,
39 .BR RAID10 ,
40 .BR MULTIPATH ,
41 .BR FAULTY ,
42 and
43 .BR CONTAINER .
44
45 .B MULTIPATH
46 is not a Software RAID mechanism, but does involve
47 multiple devices:
48 each device is a path to one common physical storage device.
49 New installations should not use md/multipath as it is not well
50 supported and has no ongoing development. Use the Device Mapper based
51 multipath-tools instead.
52
53 .B FAULTY
54 is also not true RAID, and it only involves one device. It
55 provides a layer over a true device that can be used to inject faults.
56
57 .B CONTAINER
58 is different again. A
59 .B CONTAINER
60 is a collection of devices that are
61 managed as a set. This is similar to the set of devices connected to
62 a hardware RAID controller. The set of devices may contain a number
63 of different RAID arrays each utilising some (or all) of the blocks from a
64 number of the devices in the set. For example, two devices in a 5-device set
65 might form a RAID1 using the whole devices. The remaining three might
66 have a RAID5 over the first half of each device, and a RAID0 over the
67 second half.
68
69 With a
70 .BR CONTAINER ,
71 there is one set of metadata that describes all of
72 the arrays in the container. So when
73 .I mdadm
74 creates a
75 .B CONTAINER
76 device, the device just represents the metadata. Other normal arrays (RAID1
77 etc) can be created inside the container.
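For example, an IMSM
.B CONTAINER
and a RAID5 volume inside it might be created with commands like the
following (device names are illustrative):
.in +5
mdadm \-\-create /dev/md/imsm0 \-\-metadata=imsm \-\-raid\-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
.br
mdadm \-\-create /dev/md/vol0 \-\-level=5 \-\-raid\-devices=4 /dev/md/imsm0
.in -5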
78
79 .SH MODES
80 mdadm has several major modes of operation:
81 .TP
82 .B Assemble
83 Assemble the components of a previously created
84 array into an active array. Components can be explicitly given
85 or can be searched for.
86 .I mdadm
87 checks that the components
88 do form a bona fide array, and can, on request, fiddle superblock
89 information so as to assemble a faulty array.
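.IP
For example, an array might be assembled from explicitly listed components,
or located automatically using the config file (device names are illustrative):
.in +5
mdadm \-\-assemble /dev/md0 /dev/sda1 /dev/sdb1
.br
mdadm \-\-assemble \-\-scan
.in -5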
90
91 .TP
92 .B Build
93 Build an array that doesn't have per-device metadata (superblocks). For these
94 sorts of arrays,
95 .I mdadm
96 cannot differentiate between initial creation and subsequent assembly
97 of an array. It also cannot perform any checks that appropriate
98 components have been requested. Because of this, the
99 .B Build
100 mode should only be used together with a complete understanding of
101 what you are doing.
102
103 .TP
104 .B Create
105 Create a new array with per-device metadata (superblocks).
106 Appropriate metadata is written to each device, and then the array
107 comprising those devices is activated. A 'resync' process is started
108 to make sure that the array is consistent (e.g. both sides of a mirror
109 contain the same data) but the content of the device is left otherwise
110 untouched.
111 The array can be used as soon as it has been created. There is no
112 need to wait for the initial resync to finish.
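.IP
For example, a two\-device RAID1 might be created with a command such as
(device names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/sda1 /dev/sdb1
.in -5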
113
114 .TP
115 .B "Follow or Monitor"
116 Monitor one or more md devices and act on any state changes. This is
117 only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as
118 only these have interesting state. RAID0 or Linear never have
119 missing, spare, or failed drives, so there is nothing to monitor.
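.IP
For example, all arrays listed in the config file might be monitored with:
.in +5
mdadm \-\-monitor \-\-scan
.in -5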
120
121 .TP
122 .B "Grow"
123 Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
125 of component devices and changing the number of active devices in
126 Linear and RAID levels 0/1/4/5/6,
127 changing the RAID level between 0, 1, 5, and 6, and between 0 and 10,
128 changing the chunk size and layout for RAID 0,4,5,6,10 as well as adding or
129 removing a write-intent bitmap and changing the array's consistency policy.
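.IP
For example, an array might be grown to use all available component space,
or reshaped to use more devices (values are illustrative):
.in +5
mdadm \-\-grow /dev/md0 \-\-size=max
.br
mdadm \-\-grow /dev/md0 \-\-raid\-devices=4
.in -5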
130
131 .TP
132 .B "Incremental Assembly"
133 Add a single device to an appropriate array. If the addition of the
134 device makes the array runnable, the array will be started.
135 This provides a convenient interface to a
136 .I hot-plug
137 system. As each device is detected,
138 .I mdadm
139 has a chance to include it in some array as appropriate.
140 Optionally, when the
141 .I \-\-fail
flag is passed in, the device will be removed from any active array
instead of being added.
144
145 If a
146 .B CONTAINER
147 is passed to
148 .I mdadm
149 in this mode, then any arrays within that container will be assembled
150 and started.
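.IP
For example, a hot\-plug script might pass each newly detected device to
.I mdadm
with a command such as (the device name is illustrative):
.in +5
mdadm \-\-incremental /dev/sda1
.in -5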
151
152 .TP
153 .B Manage
154 This is for doing things to specific components of an array such as
155 adding new spares and removing faulty devices.
156
157 .TP
158 .B Misc
159 This is an 'everything else' mode that supports operations on active
160 arrays, operations on component devices such as erasing old superblocks, and
161 information gathering operations.
162 .\"This mode allows operations on independent devices such as examine MD
163 .\"superblocks, erasing old superblocks and stopping active arrays.
164
165 .TP
166 .B Auto-detect
167 This mode does not act on a specific device or array, but rather it
168 requests the Linux Kernel to activate any auto-detected arrays.
169 .SH OPTIONS
170
171 .SH Options for selecting a mode are:
172
173 .TP
174 .BR \-A ", " \-\-assemble
175 Assemble a pre-existing array.
176
177 .TP
178 .BR \-B ", " \-\-build
179 Build a legacy array without superblocks.
180
181 .TP
182 .BR \-C ", " \-\-create
183 Create a new array.
184
185 .TP
186 .BR \-F ", " \-\-follow ", " \-\-monitor
187 Select
188 .B Monitor
189 mode.
190
191 .TP
192 .BR \-G ", " \-\-grow
193 Change the size or shape of an active array.
194
195 .TP
196 .BR \-I ", " \-\-incremental
197 Add/remove a single device to/from an appropriate array, and possibly start the array.
198
199 .TP
200 .B \-\-auto-detect
201 Request that the kernel starts any auto-detected arrays. This can only
202 work if
203 .I md
204 is compiled into the kernel \(em not if it is a module.
205 Arrays can be auto-detected by the kernel if all the components are in
206 primary MS-DOS partitions with partition type
207 .BR FD ,
208 and all use v0.90 metadata.
209 In-kernel autodetect is not recommended for new installations. Using
210 .I mdadm
211 to detect and assemble arrays \(em possibly in an
212 .I initrd
213 \(em is substantially more flexible and should be preferred.
214
215 .P
216 If a device is given before any options, or if the first option is
217 one of
218 .BR \-\-add ,
219 .BR \-\-re\-add ,
220 .BR \-\-add\-spare ,
221 .BR \-\-fail ,
222 .BR \-\-remove ,
223 or
224 .BR \-\-replace ,
225 then the MANAGE mode is assumed.
226 Anything other than these will cause the
227 .B Misc
228 mode to be assumed.
229
230 .SH Options that are not mode-specific are:
231
232 .TP
233 .BR \-h ", " \-\-help
234 Display general help message or, after one of the above options, a
235 mode-specific help message.
236
237 .TP
238 .B \-\-help\-options
239 Display more detailed help about command line parsing and some commonly
240 used options.
241
242 .TP
243 .BR \-V ", " \-\-version
244 Print version information for mdadm.
245
246 .TP
247 .BR \-v ", " \-\-verbose
248 Be more verbose about what is happening. This can be used twice to be
249 extra-verbose.
250 The extra verbosity currently only affects
251 .B \-\-detail \-\-scan
252 and
253 .BR "\-\-examine \-\-scan" .
254
255 .TP
256 .BR \-q ", " \-\-quiet
257 Avoid printing purely informative messages. With this,
258 .I mdadm
259 will be silent unless there is something really important to report.
260
261
262 .TP
263 .BR \-f ", " \-\-force
264 Be more forceful about certain operations. See the various modes for
265 the exact meaning of this option in different contexts.
266
267 .TP
268 .BR \-c ", " \-\-config=
269 Specify the config file or directory. Default is to use
270 .B /etc/mdadm.conf
271 and
272 .BR /etc/mdadm.conf.d ,
273 or if those are missing then
274 .B /etc/mdadm/mdadm.conf
275 and
276 .BR /etc/mdadm/mdadm.conf.d .
277 If the config file given is
278 .B "partitions"
279 then nothing will be read, but
280 .I mdadm
281 will act as though the config file contained exactly
282 .br
283 .B " DEVICE partitions containers"
284 .br
285 and will read
286 .B /proc/partitions
287 to find a list of devices to scan, and
288 .B /proc/mdstat
289 to find a list of containers to examine.
290 If the word
291 .B "none"
292 is given for the config file, then
293 .I mdadm
294 will act as though the config file were empty.
295
296 If the name given is of a directory, then
297 .I mdadm
298 will collect all the files contained in the directory with a name ending
299 in
300 .BR .conf ,
301 sort them lexically, and process all of those files as config files.
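.IP
For example, to assemble arrays using only the kernel's partition list and
ignoring any config file, a command such as the following might be used:
.in +5
mdadm \-\-assemble \-\-scan \-\-config=partitions
.in -5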
302
303 .TP
304 .BR \-s ", " \-\-scan
305 Scan config file or
306 .B /proc/mdstat
307 for missing information.
308 In general, this option gives
309 .I mdadm
310 permission to get any missing information (like component devices,
311 array devices, array identities, and alert destination) from the
312 configuration file (see previous option);
313 one exception is MISC mode when using
314 .B \-\-detail
315 or
316 .B \-\-stop,
317 in which case
318 .B \-\-scan
319 says to get a list of array devices from
320 .BR /proc/mdstat .
321
322 .TP
323 .BR \-e ", " \-\-metadata=
324 Declare the style of RAID metadata (superblock) to be used. The
325 default is {DEFAULT_METADATA} for
326 .BR \-\-create ,
327 and to guess for other operations.
328 The default can be overridden by setting the
329 .B metadata
330 value for the
331 .B CREATE
332 keyword in
333 .BR mdadm.conf .
334
335 Options are:
336 .RS
337 .ie '{DEFAULT_METADATA}'0.90'
338 .IP "0, 0.90, default"
339 .el
340 .IP "0, 0.90"
341 Use the original 0.90 format superblock. This format limits arrays to
342 28 component devices and limits component devices of levels 1 and
343 greater to 2 terabytes. It is also possible for there to be confusion
344 about whether the superblock applies to a whole device or just the
345 last partition, if that partition starts on a 64K boundary.
346 .ie '{DEFAULT_METADATA}'0.90'
347 .IP "1, 1.0, 1.1, 1.2"
348 .el
349 .IP "1, 1.0, 1.1, 1.2 default"
350 Use the new version-1 format superblock. This has fewer restrictions.
351 It can easily be moved between hosts with different endian-ness, and a
352 recovery operation can be checkpointed and restarted. The different
353 sub-versions store the superblock at different locations on the
354 device, either at the end (for 1.0), at the start (for 1.1) or 4K from
355 the start (for 1.2). "1" is equivalent to "1.2" (the commonly
356 preferred 1.x format).
357 'if '{DEFAULT_METADATA}'1.2' "default" is equivalent to "1.2".
358 .IP ddf
359 Use the "Industry Standard" DDF (Disk Data Format) format defined by
360 SNIA.
361 When creating a DDF array a
362 .B CONTAINER
363 will be created, and normal arrays can be created in that container.
364 .IP imsm
365 Use the Intel(R) Matrix Storage Manager metadata format. This creates a
366 .B CONTAINER
367 which is managed in a similar manner to DDF, and is supported by an
368 option-rom on some platforms:
369 .IP
370 .B https://www.intel.com/content/www/us/en/support/products/122484/memory-and-storage/ssd-software/intel-virtual-raid-on-cpu-intel-vroc.html
371 .PP
372 .RE
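.IP
For example, a version\-1.0 superblock might be requested at creation time
with (device names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-metadata=1.0 \-\-level=1 \-\-raid\-devices=2 /dev/sda1 /dev/sdb1
.in -5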
373
374 .TP
375 .B \-\-homehost=
376 This will override any
377 .B HOMEHOST
378 setting in the config file and provides the identity of the host which
379 should be considered the home for any arrays.
380
381 When creating an array, the
382 .B homehost
383 will be recorded in the metadata. For version-1 superblocks, it will
384 be prefixed to the array name. For version-0.90 superblocks, part of
the SHA1 hash of the hostname will be stored in the latter half of the
386 UUID.
387
388 When reporting information about an array, any array which is tagged
389 for the given homehost will be reported as such.
390
391 When using Auto-Assemble, only arrays tagged for the given homehost
392 will be allowed to use 'local' names (i.e. not ending in '_' followed
393 by a digit string). See below under
394 .BR "Auto Assembly" .
395
396 The special name "\fBany\fP" can be used as a wild card. If an array
397 is created with
398 .B --homehost=any
399 then the name "\fBany\fP" will be stored in the array and it can be
400 assembled in the same way on any host. If an array is assembled with
401 this option, then the homehost recorded on the array will be ignored.
402
403 .TP
404 .B \-\-prefer=
405 When
406 .I mdadm
needs to print the name for a device, it normally chooses the shortest name in
.B /dev
which refers to the device. When a path component is
410 given with
411 .B \-\-prefer
412 .I mdadm
413 will prefer a longer name if it contains that component. For example
414 .B \-\-prefer=by-uuid
415 will prefer a name in a subdirectory of
416 .B /dev
417 called
418 .BR by-uuid .
419
420 This functionality is currently only provided by
421 .B \-\-detail
422 and
423 .BR \-\-monitor .
424
425 .TP
426 .B \-\-home\-cluster=
427 specifies the cluster name for the md device. The md device can be assembled
428 only on the cluster which matches the name specified. If this option is not
429 provided, mdadm tries to detect the cluster name automatically.
430
431 .SH For create, build, or grow:
432
433 .TP
434 .BR \-n ", " \-\-raid\-devices=
435 Specify the number of active devices in the array. This, plus the
436 number of spare devices (see below) must equal the number of
437 .I component-devices
438 (including "\fBmissing\fP" devices)
439 that are listed on the command line for
440 .BR \-\-create .
441 Setting a value of 1 is probably
442 a mistake and so requires that
443 .B \-\-force
444 be specified first. A value of 1 will then be allowed for linear,
445 multipath, RAID0 and RAID1. It is never allowed for RAID4, RAID5 or RAID6.
446 .br
447 This number can only be changed using
448 .B \-\-grow
449 for RAID1, RAID4, RAID5 and RAID6 arrays, and only on kernels which provide
450 the necessary support.
451
452 .TP
453 .BR \-x ", " \-\-spare\-devices=
454 Specify the number of spare (eXtra) devices in the initial array.
455 Spares can also be added
456 and removed later. The number of component devices listed
457 on the command line must equal the number of RAID devices plus the
458 number of spare devices.
459
460 .TP
461 .BR \-z ", " \-\-size=
462 Amount (in Kilobytes) of space to use from each drive in RAID levels 1/4/5/6.
463 This must be a multiple of the chunk size, and must leave about 128Kb
464 of space at the end of the drive for the RAID superblock.
465 If this is not specified
466 (as it normally is not) the smallest drive (or partition) sets the
467 size, though if there is a variance among the drives of greater than 1%, a warning is
468 issued.
469
470 A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
471 Megabytes, Gigabytes or Terabytes respectively.
472
473 Sometimes a replacement drive can be a little smaller than the
original drives, though this should be minimised by IDEMA standards.
475 Such a replacement drive will be rejected by
476 .IR md .
477 To guard against this it can be useful to set the initial size
slightly smaller than the smallest device with the aim that it will
479 still be larger than any replacement.
480
481 This value can be set with
482 .B \-\-grow
483 for RAID level 1/4/5/6 though
484 DDF arrays may not be able to support this.
485 If the array was created with a size smaller than the currently
486 active drives, the extra space can be accessed using
487 .BR \-\-grow .
488 The size can be given as
489 .B max
490 which means to choose the largest size that fits on all current drives.
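.IP
For example, the component size might be limited at creation time and later
grown to the maximum that fits on all current drives (values and device
names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=5 \-\-raid\-devices=3 \-\-size=100G /dev/sda1 /dev/sdb1 /dev/sdc1
.br
mdadm \-\-grow /dev/md0 \-\-size=max
.in -5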
491
492 Before reducing the size of the array (with
493 .BR "\-\-grow \-\-size=" )
494 you should make sure that space isn't needed. If the device holds a
495 filesystem, you would need to resize the filesystem to use less space.
496
497 After reducing the array size you should check that the data stored in
498 the device is still available. If the device holds a filesystem, then
499 an 'fsck' of the filesystem is a minimum requirement. If there are
500 problems the array can be made bigger again with no loss with another
501 .B "\-\-grow \-\-size="
502 command.
503
504 This value cannot be used when creating a
505 .B CONTAINER
such as with DDF and IMSM metadata, though it is perfectly valid when
507 creating an array inside a container.
508
509 .TP
510 .BR \-Z ", " \-\-array\-size=
511 This is only meaningful with
512 .B \-\-grow
513 and its effect is not persistent: when the array is stopped and
514 restarted the default array size will be restored.
515
516 Setting the array-size causes the array to appear smaller to programs
517 that access the data. This is particularly needed before reshaping an
518 array so that it will be smaller. As the reshape is not reversible,
519 but setting the size with
520 .B \-\-array-size
521 is, it is required that the array size is reduced as appropriate
522 before the number of devices in the array is reduced.
523
524 Before reducing the size of the array you should make sure that space
525 isn't needed. If the device holds a filesystem, you would need to
526 resize the filesystem to use less space.
527
528 After reducing the array size you should check that the data stored in
529 the device is still available. If the device holds a filesystem, then
530 an 'fsck' of the filesystem is a minimum requirement. If there are
531 problems the array can be made bigger again with no loss with another
532 .B "\-\-grow \-\-array\-size="
533 command.
534
535 A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
536 Megabytes, Gigabytes or Terabytes respectively.
537 A value of
538 .B max
539 restores the apparent size of the array to be whatever the real
540 amount of available space is.
541
542 Clustered arrays do not support this parameter yet.
543
544 .TP
545 .BR \-c ", " \-\-chunk=
Specify the chunk size in kilobytes. The default when creating an
547 array is 512KB. To ensure compatibility with earlier versions, the
548 default when building an array with no persistent metadata is 64KB.
549 This is only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10.
550
551 RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power
552 of 2. In any case it must be a multiple of 4KB.
553
554 A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
555 Megabytes, Gigabytes or Terabytes respectively.
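.IP
For example, a RAID0 with a 128 kilobyte chunk size might be created with
(device names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=0 \-\-raid\-devices=2 \-\-chunk=128 /dev/sda1 /dev/sdb1
.in -5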
556
557 .TP
558 .BR \-\-rounding=
559 Specify rounding factor for a Linear array. The size of each
560 component will be rounded down to a multiple of this size.
561 This is a synonym for
562 .B \-\-chunk
563 but highlights the different meaning for Linear as compared to other
564 RAID levels. The default is 64K if a kernel earlier than 2.6.16 is in
565 use, and is 0K (i.e. no rounding) in later kernels.
566
567 .TP
568 .BR \-l ", " \-\-level=
569 Set RAID level. When used with
570 .BR \-\-create ,
571 options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
572 raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container.
573 Obviously some of these are synonymous.
574
575 When a
576 .B CONTAINER
577 metadata type is requested, only the
578 .B container
579 level is permitted, and it does not need to be explicitly given.
580
581 When used with
582 .BR \-\-build ,
583 only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.
584
585 Can be used with
586 .B \-\-grow
587 to change the RAID level in some cases. See LEVEL CHANGES below.
588
589 .TP
590 .BR \-p ", " \-\-layout=
591 This option configures the fine details of data layout for RAID5, RAID6,
592 and RAID10 arrays, and controls the failure modes for
593 .IR faulty .
594 It can also be used for working around a kernel bug with RAID0, but generally
595 doesn't need to be used explicitly.
596
597 The layout of the RAID5 parity block can be one of
598 .BR left\-asymmetric ,
599 .BR left\-symmetric ,
600 .BR right\-asymmetric ,
601 .BR right\-symmetric ,
602 .BR la ", " ra ", " ls ", " rs .
603 The default is
604 .BR left\-symmetric .
605
606 It is also possible to cause RAID5 to use a RAID4-like layout by
607 choosing
608 .BR parity\-first ,
609 or
610 .BR parity\-last .
611
612 Finally for RAID5 there are DDF\-compatible layouts,
613 .BR ddf\-zero\-restart ,
614 .BR ddf\-N\-restart ,
615 and
616 .BR ddf\-N\-continue .
617
618 These same layouts are available for RAID6. There are also 4 layouts
619 that will provide an intermediate stage for converting between RAID5
620 and RAID6. These provide a layout which is identical to the
621 corresponding RAID5 layout on the first N\-1 devices, and has the 'Q'
622 syndrome (the second 'parity' block used by RAID6) on the last device.
623 These layouts are:
624 .BR left\-symmetric\-6 ,
625 .BR right\-symmetric\-6 ,
626 .BR left\-asymmetric\-6 ,
627 .BR right\-asymmetric\-6 ,
628 and
629 .BR parity\-first\-6 .
630
631 When setting the failure mode for level
632 .I faulty,
633 the options are:
634 .BR write\-transient ", " wt ,
635 .BR read\-transient ", " rt ,
636 .BR write\-persistent ", " wp ,
637 .BR read\-persistent ", " rp ,
638 .BR write\-all ,
639 .BR read\-fixable ", " rf ,
640 .BR clear ", " flush ", " none .
641
Each failure mode can be followed by a number, which is used as the period
between fault generations. Without a number, the fault is generated
644 once on the first relevant request. With a number, the fault will be
645 generated after that many requests, and will continue to be generated
646 every time the period elapses.
647
Multiple failure modes can be in effect simultaneously by using the
649 .B \-\-grow
650 option to set subsequent failure modes.
651
652 "clear" or "none" will remove any pending or periodic failure modes,
653 and "flush" will clear any persistent faults.
654
655 The layout options for RAID10 are one of 'n', 'o' or 'f' followed
656 by a small number. The default is 'n2'. The supported options are:
657
658 .I 'n'
659 signals 'near' copies. Multiple copies of one data block are at
660 similar offsets in different devices.
661
662 .I 'o'
663 signals 'offset' copies. Rather than the chunks being duplicated
664 within a stripe, whole stripes are duplicated but are rotated by one
665 device so duplicate blocks are on different devices. Thus subsequent
666 copies of a block are in the next drive, and are one chunk further
667 down.
668
669 .I 'f'
670 signals 'far' copies
671 (multiple copies have very different offsets).
672 See md(4) for more detail about 'near', 'offset', and 'far'.
673
674 The number is the number of copies of each datablock. 2 is normal, 3
675 can be useful. This number can be at most equal to the number of
676 devices in the array. It does not need to divide evenly into that
677 number (e.g. it is perfectly legal to have an 'n2' layout for an array
678 with an odd number of devices).
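.IP
For example, a 4\-device RAID10 using the 'far' layout with 2 copies might
be created with (device names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=10 \-\-layout=f2 \-\-raid\-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
.in -5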
679
680 A bug introduced in Linux 3.14 means that RAID0 arrays
681 .B "with devices of differing sizes"
682 started using a different layout. This could lead to
683 data corruption. Since Linux 5.4 (and various stable releases that received
684 backports), the kernel will not accept such an array unless
a layout is explicitly set. It can be set to
686 .RB ' original '
687 or
688 .RB ' alternate '.
689 When creating a new array,
690 .I mdadm
691 will select
692 .RB ' original '
693 by default, so the layout does not normally need to be set.
694 An array created for either
695 .RB ' original '
696 or
697 .RB ' alternate '
698 will not be recognized by an (unpatched) kernel prior to 5.4. To create
699 a RAID0 array with devices of differing sizes that can be used on an
700 older kernel, you can set the layout to
701 .RB ' dangerous '.
702 This will use whichever layout the running kernel supports, so the data
703 on the array may become corrupt when changing kernel from pre-3.14 to a
704 later kernel.
705
706 When an array is converted between RAID5 and RAID6 an intermediate
707 RAID6 layout is used in which the second parity block (Q) is always on
708 the last device. To convert a RAID5 to RAID6 and leave it in this new
709 layout (which does not require re-striping) use
710 .BR \-\-layout=preserve .
711 This will try to avoid any restriping.
712
713 The converse of this is
714 .B \-\-layout=normalise
715 which will change a non-standard RAID6 layout into a more standard
716 arrangement.
717
718 .TP
719 .BR \-\-parity=
720 same as
721 .B \-\-layout
722 (thus explaining the p of
723 .BR \-p ).
724
725 .TP
726 .BR \-b ", " \-\-bitmap=
727 Specify a file to store a write-intent bitmap in. The file should not
728 exist unless
729 .B \-\-force
730 is also given. The same file should be provided
731 when assembling the array. If the word
732 .B "internal"
733 is given, then the bitmap is stored with the metadata on the array,
734 and so is replicated on all devices. If the word
735 .B "none"
736 is given with
737 .B \-\-grow
738 mode, then any bitmap that is present is removed. If the word
739 .B "clustered"
740 is given, the array is created for a clustered environment. One bitmap
741 is created for each node as defined by the
742 .B \-\-nodes
743 parameter and are stored internally.
744
745 To help catch typing errors, the filename must contain at least one
746 slash ('/') if it is a real file (not 'internal' or 'none').
747
748 Note: external bitmaps are only known to work on ext2 and ext3.
749 Storing bitmap files on other filesystems may result in serious problems.
750
751 When creating an array on devices which are 100G or larger,
752 .I mdadm
753 automatically adds an internal bitmap as it will usually be
754 beneficial. This can be suppressed with
755 .B "\-\-bitmap=none"
756 or by selecting a different consistency policy with
757 .BR \-\-consistency\-policy .
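.IP
For example, an internal bitmap might be added to, or removed from, an
existing array with:
.in +5
mdadm \-\-grow /dev/md0 \-\-bitmap=internal
.br
mdadm \-\-grow /dev/md0 \-\-bitmap=none
.in -5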
758
759 .TP
760 .BR \-\-bitmap\-chunk=
761 Set the chunksize of the bitmap. Each bit corresponds to that many
762 Kilobytes of storage.
763 When using a file based bitmap, the default is to use the smallest
size that is at least 4 and requires no more than 2^21 chunks.
765 When using an
766 .B internal
767 bitmap, the chunksize defaults to 64Meg, or larger if necessary to
768 fit the bitmap into the available space.
769
770 A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
771 Megabytes, Gigabytes or Terabytes respectively.
772
773 .TP
774 .BR \-W ", " \-\-write\-mostly
775 subsequent devices listed in a
776 .BR \-\-build ,
777 .BR \-\-create ,
778 or
779 .B \-\-add
780 command will be flagged as 'write\-mostly'. This is valid for RAID1
781 only and means that the 'md' driver will avoid reading from these
782 devices if at all possible. This can be useful if mirroring over a
783 slow link.
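.IP
For example, the second device of a mirror might be flagged as
write\-mostly at creation time (device names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/sda1 \-\-write\-mostly /dev/sdb1
.in -5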
784
785 .TP
786 .BR \-\-write\-behind=
787 Specify that write-behind mode should be enabled (valid for RAID1
788 only). If an argument is specified, it will set the maximum number
789 of outstanding writes allowed. The default value is 256.
790 A write-intent bitmap is required in order to use write-behind
791 mode, and write-behind is only attempted on drives marked as
792 .IR write-mostly .
793
794 .TP
795 .BR \-\-failfast
796 subsequent devices listed in a
797 .B \-\-create
798 or
799 .B \-\-add
800 command will be flagged as 'failfast'. This is valid for RAID1 and
801 RAID10 only. IO requests to these devices will be encouraged to fail
802 quickly rather than cause long delays due to error handling. Also no
803 attempt is made to repair a read error on these devices.
804
805 If an array becomes degraded so that the 'failfast' device is the only
806 usable device, the 'failfast' flag will then be ignored and extended
807 delays will be preferred to complete failure.
808
809 The 'failfast' flag is appropriate for storage arrays which have a
810 low probability of true failure, but which may sometimes
811 cause unacceptable delays due to internal maintenance functions.
812
813 .TP
814 .BR \-\-assume\-clean
815 Tell
816 .I mdadm
817 that the array pre-existed and is known to be clean. It can be useful
818 when trying to recover from a major failure as you can be sure that no
819 data will be affected unless you actually write to the array. It can
820 also be used when creating a RAID1 or RAID10 if you want to avoid the
821 initial resync, however this practice \(em while normally safe \(em is not
822 recommended. Use this only if you really know what you are doing.
823 .IP
824 When the devices that will be part of a new array were filled
825 with zeros before creation the operator knows the array is
826 actually clean. If that is the case, such as after running
827 badblocks, this argument can be used to tell mdadm the
828 facts the operator knows.
829 .IP
830 When an array is resized to a larger size with
831 .B "\-\-grow \-\-size="
the new space is normally resynced in the same way that the whole
833 array is resynced at creation. From Linux version 3.0,
834 .B \-\-assume\-clean
835 can be used with that command to avoid the automatic resync.
836
837 .TP
838 .BR \-\-backup\-file=
839 This is needed when
840 .B \-\-grow
841 is used to increase the number of raid-devices in a RAID5 or RAID6 if
842 there are no spare devices available, or to shrink, change RAID level
843 or layout. See the GROW MODE section below on RAID\-DEVICES CHANGES.
844 The file must be stored on a separate device, not on the RAID array
845 being reshaped.
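.IP
For example, a RAID5 might be reshaped to use an additional device with the
backup stored on a separate filesystem (the path is illustrative):
.in +5
mdadm \-\-grow /dev/md0 \-\-raid\-devices=4 \-\-backup\-file=/root/md0.backup
.in -5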
846
847 .TP
848 .B \-\-data\-offset=
849 Arrays with 1.x metadata can leave a gap between the start of the
850 device and the start of array data. This gap can be used for various
851 metadata. The start of data is known as the
852 .IR data\-offset .
853 Normally an appropriate data offset is computed automatically.
854 However it can be useful to set it explicitly such as when re-creating
855 an array which was originally created using a different version of
856 .I mdadm
857 which computed a different offset.
858
859 Setting the offset explicitly over-rides the default. The value given
860 is in Kilobytes unless a suffix of 'K', 'M', 'G' or 'T' is used to explicitly
861 indicate Kilobytes, Megabytes, Gigabytes or Terabytes respectively.
862
863 Since Linux 3.4,
864 .B \-\-data\-offset
865 can also be used with
866 .B --grow
867 for some RAID levels (initially on RAID10). This allows the
868 data\-offset to be changed as part of the reshape process. When the
869 data offset is changed, no backup file is required as the difference
870 in offsets is used to provide the same functionality.
871
872 When the new offset is earlier than the old offset, the number of
873 devices in the array cannot shrink. When it is after the old offset,
874 the number of devices in the array cannot increase.
875
876 When creating an array,
877 .B \-\-data\-offset
878 can be specified as
879 .BR variable .
In that case each member device is expected to have an offset appended
881 to the name, separated by a colon. This makes it possible to recreate
882 exactly an array which has varying data offsets (as can happen when
883 different versions of
884 .I mdadm
885 are used to add different devices).
886
887 .TP
888 .BR \-\-continue
889 This option is complementary to the
890 .B \-\-freeze-reshape
option for assembly. It is needed when a
.B \-\-grow
operation is interrupted and is not restarted automatically due to
.B \-\-freeze-reshape
usage during array assembly. This option is used together with the
.BR \-G " (" \-\-grow )
command and the device with a pending reshape to be continued.
All parameters required for reshape continuation will be read from the array metadata.
If the initial
.B \-\-grow
command required the
.B \-\-backup\-file=
option to be set, the continuation will require exactly the same
backup file to be given as well.
907 .IP
908 Any other parameter passed together with
909 .BR \-\-continue
910 option will be ignored.
911
912 .TP
913 .BR \-N ", " \-\-name=
914 Set a
915 .B name
916 for the array. This is currently only effective when creating an
917 array with a version-1 superblock, or an array in a DDF container.
918 The name is a simple textual string that can be used to identify array
919 components when assembling. If name is needed but not specified, it
920 is taken from the basename of the device that is being created.
921 e.g. when creating
922 .I /dev/md/home
923 the
924 .B name
925 will default to
926 .IR home .
927
928 .TP
929 .BR \-R ", " \-\-run
930 Insist that
931 .I mdadm
932 run the array, even if some of the components
933 appear to be active in another array or filesystem. Normally
934 .I mdadm
935 will ask for confirmation before including such components in an
936 array. This option causes that question to be suppressed.
937
938 .TP
939 .BR \-f ", " \-\-force
940 Insist that
941 .I mdadm
942 accept the geometry and layout specified without question. Normally
943 .I mdadm
944 will not allow creation of an array with only one device, and will try
945 to create a RAID5 array with one missing drive (as this makes the
946 initial resync work faster). With
947 .BR \-\-force ,
948 .I mdadm
949 will not try to be so clever.
950
951 .TP
952 .BR \-o ", " \-\-readonly
953 Start the array
954 .B read only
955 rather than read-write as normal. No writes will be allowed to the
956 array, and no resync, recovery, or reshape will be started. It works with
957 Create, Assemble, Manage and Misc mode.
958
959 .TP
960 .BR \-a ", " "\-\-auto{=yes,md,mdp,part,p}{NN}"
961 Instruct mdadm how to create the device file if needed, possibly allocating
962 an unused minor number. "md" causes a non-partitionable array
963 to be used (though since Linux 2.6.28, these array devices are in fact
964 partitionable). "mdp", "part" or "p" causes a partitionable array (2.6 and
965 later) to be used. "yes" requires the named md device to have
966 a 'standard' format, and the type and minor number will be determined
967 from this. With mdadm 3.0, device creation is normally left up to
968 .I udev
969 so this option is unlikely to be needed.
970 See DEVICE NAMES below.
971
972 The argument can also come immediately after
973 "\-a". e.g. "\-ap".
974
975 If
976 .B \-\-auto
977 is not given on the command line or in the config file, then
978 the default will be
979 .BR \-\-auto=yes .
980
981 If
982 .B \-\-scan
983 is also given, then any
984 .I auto=
985 entries in the config file will override the
986 .B \-\-auto
987 instruction given on the command line.
988
989 For partitionable arrays,
990 .I mdadm
991 will create the device file for the whole array and for the first 4
992 partitions. A different number of partitions can be specified at the
993 end of this option (e.g.
994 .BR \-\-auto=p7 ).
995 If the device name ends with a digit, the partition names add a 'p',
996 and a number, e.g.
997 .IR /dev/md/home1p3 .
998 If there is no trailing digit, then the partition names just have a
999 number added, e.g.
1000 .IR /dev/md/scratch3 .
1001
1002 If the md device name is in a 'standard' format as described in DEVICE
1003 NAMES, then it will be created, if necessary, with the appropriate
1004 device number based on that name. If the device name is not in one of these
formats, then an unused device number will be allocated. The device
1006 number will be considered unused if there is no active array for that
1007 number, and there is no entry in /dev for that number and with a
1008 non-standard name. Names that are not in 'standard' format are only
1009 allowed in "/dev/md/".
1010
1011 This is meaningful with
1012 .B \-\-create
1013 or
1014 .BR \-\-build .
1015
1016 .TP
1017 .BR \-a ", " "\-\-add"
1018 This option can be used in Grow mode in two cases.
1019
1020 If the target array is a Linear array, then
1021 .B \-\-add
1022 can be used to add one or more devices to the array. They
are simply concatenated onto the end of the array. Once added, the
1024 devices cannot be removed.
1025
1026 If the
1027 .B \-\-raid\-disks
1028 option is being used to increase the number of devices in an array,
1029 then
1030 .B \-\-add
1031 can be used to add some extra devices to be included in the array.
1032 In most cases this is not needed as the extra devices can be added as
1033 spares first, and then the number of raid-disks can be changed.
1034 However for RAID0, it is not possible to add spares. So to increase
1035 the number of devices in a RAID0, it is necessary to set the new
1036 number of devices, and to add the new devices, in the same command.
1037
1038 .TP
1039 .BR \-\-nodes
Only works when the array is for a clustered environment. It specifies
1041 the maximum number of nodes in the cluster that will use this device
1042 simultaneously. If not specified, this defaults to 4.
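.IP
For example, a clustered RAID1 for two nodes might be created with (device
names are illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 \-\-bitmap=clustered \-\-nodes=2 /dev/sda1 /dev/sdb1
.in -5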
1043
1044 .TP
1045 .BR \-\-write-journal
Specify a journal device for the RAID-4/5/6 array. The journal device
should be an SSD with a reasonable lifetime.
1048
1049 .TP
1050 .BR \-\-symlinks
Automatically create symlinks in /dev to /dev/md. The option must be given
the value 'no' or 'yes' and only works with --create and --build.
1053
1054 .TP
1055 .BR \-k ", " \-\-consistency\-policy=
1056 Specify how the array maintains consistency in case of unexpected shutdown.
1057 Only relevant for RAID levels with redundancy.
1058 Currently supported options are:
1059 .RS
1060
1061 .TP
1062 .B resync
1063 Full resync is performed and all redundancy is regenerated when the array is
1064 started after unclean shutdown.
1065
1066 .TP
1067 .B bitmap
1068 Resync assisted by a write-intent bitmap. Implicitly selected when using
1069 .BR \-\-bitmap .
1070
1071 .TP
1072 .B journal
1073 For RAID levels 4/5/6, journal device is used to log transactions and replay
1074 after unclean shutdown. Implicitly selected when using
1075 .BR \-\-write\-journal .
1076
1077 .TP
1078 .B ppl
1079 For RAID5 only, Partial Parity Log is used to close the write hole and
1080 eliminate resync. PPL is stored in the metadata region of RAID member drives,
1081 no additional journal drive is needed.
1082
1083 .PP
1084 Can be used with \-\-grow to change the consistency policy of an active array
1085 in some cases. See CONSISTENCY POLICY CHANGES below.
1086 .RE
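.IP
For example, a RAID5 using PPL might be created with (device names are
illustrative):
.in +5
mdadm \-\-create /dev/md0 \-\-level=5 \-\-raid\-devices=3 \-\-consistency\-policy=ppl /dev/sda1 /dev/sdb1 /dev/sdc1
.in -5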
1087
1088
1089 .SH For assemble:
1090
1091 .TP
1092 .BR \-u ", " \-\-uuid=
uuid of the array to assemble. Devices which don't have this uuid are
excluded.
1095
1096 .TP
1097 .BR \-m ", " \-\-super\-minor=
1098 Minor number of device that array was created for. Devices which
1099 don't have this minor number are excluded. If you create an array as
1100 /dev/md1, then all superblocks will contain the minor number 1, even if
1101 the array is later assembled as /dev/md2.
1102
1103 Giving the literal word "dev" for
1104 .B \-\-super\-minor
1105 will cause
1106 .I mdadm
1107 to use the minor number of the md device that is being assembled.
1108 e.g. when assembling
1109 .BR /dev/md0 ,
1110 .B \-\-super\-minor=dev
1111 will look for super blocks with a minor number of 0.
1112
1113 .B \-\-super\-minor
1114 is only relevant for v0.90 metadata, and should not normally be used.
1115 Using
1116 .B \-\-uuid
1117 is much safer.
1118
1119 .TP
1120 .BR \-N ", " \-\-name=
1121 Specify the name of the array to assemble. This must be the name
1122 that was specified when creating the array. It must either match
1123 the name stored in the superblock exactly, or it must match
1124 with the current
1125 .I homehost
1126 prefixed to the start of the given name.
1127
1128 .TP
1129 .BR \-f ", " \-\-force
1130 Assemble the array even if the metadata on some devices appears to be
1131 out-of-date. If
1132 .I mdadm
1133 cannot find enough working devices to start the array, but can find
1134 some devices that are recorded as having failed, then it will mark
those devices as working so that the array can be started. This works only for
native metadata. For external metadata it allows starting a dirty, degraded RAID 4, 5, or 6.
1137 An array which requires
1138 .B \-\-force
1139 to be started may contain data corruption. Use it carefully.
1140
1141 .TP
1142 .BR \-R ", " \-\-run
1143 Attempt to start the array even if fewer drives were given than were
1144 present last time the array was active. Normally if not all the
1145 expected drives are found and
1146 .B \-\-scan
1147 is not used, then the array will be assembled but not started.
1148 With
1149 .B \-\-run
1150 an attempt will be made to start it anyway.
1151
1152 .TP
1153 .B \-\-no\-degraded
1154 This is the reverse of
1155 .B \-\-run
in that it inhibits the startup of the array unless all expected drives
are present. This is only needed with
.BR \-\-scan ,
1159 and can be used if the physical connections to devices are
1160 not as reliable as you would like.
1161
1162 .TP
1163 .BR \-a ", " "\-\-auto{=no,yes,md,mdp,part}"
1164 See this option under Create and Build options.
1165
1166 .TP
1167 .BR \-b ", " \-\-bitmap=
1168 Specify the bitmap file that was given when the array was created. If
1169 an array has an
1170 .B internal
1171 bitmap, there is no need to specify this when assembling the array.
1172
1173 .TP
1174 .BR \-\-backup\-file=
1175 If
1176 .B \-\-backup\-file
1177 was used while reshaping an array (e.g. changing number of devices or
1178 chunk size) and the system crashed during the critical section, then the same
1179 .B \-\-backup\-file
1180 must be presented to
1181 .B \-\-assemble
1182 to allow possibly corrupted data to be restored, and the reshape
1183 to be completed.
1184
1185 .TP
1186 .BR \-\-invalid\-backup
1187 If the file needed for the above option is not available for any
1188 reason an empty file can be given together with this option to
1189 indicate that the backup file is invalid. In this case the data that
1190 was being rearranged at the time of the crash could be irrecoverably
1191 lost, but the rest of the array may still be recoverable. This option
1192 should only be used as a last resort if there is no way to recover the
1193 backup file.
1194
1195
1196 .TP
1197 .BR \-U ", " \-\-update=
1198 Update the superblock on each device while assembling the array. The
1199 argument given to this flag can be one of
1200 .BR sparc2.2 ,
1201 .BR summaries ,
1202 .BR uuid ,
1203 .BR name ,
1204 .BR nodes ,
1205 .BR homehost ,
1206 .BR home-cluster ,
1207 .BR resync ,
1208 .BR byteorder ,
1209 .BR devicesize ,
1210 .BR no\-bitmap ,
1211 .BR bbl ,
1212 .BR no\-bbl ,
1213 .BR ppl ,
1214 .BR no\-ppl ,
1215 .BR layout\-original ,
1216 .BR layout\-alternate ,
1217 .BR layout\-unspecified ,
1218 .BR metadata ,
1219 or
1220 .BR super\-minor .
1221
1222 The
1223 .B sparc2.2
option will adjust the superblock of an array that was created on a Sparc
1225 machine running a patched 2.2 Linux kernel. This kernel got the
1226 alignment of part of the superblock wrong. You can use the
1227 .B "\-\-examine \-\-sparc2.2"
1228 option to
1229 .I mdadm
1230 to see what effect this would have.
1231
1232 The
1233 .B super\-minor
1234 option will update the
1235 .B "preferred minor"
1236 field on each superblock to match the minor number of the array being
1237 assembled.
1238 This can be useful if
1239 .B \-\-examine
1240 reports a different "Preferred Minor" to
1241 .BR \-\-detail .
1242 In some cases this update will be performed automatically
1243 by the kernel driver. In particular the update happens automatically
1244 at the first write to an array with redundancy (RAID level 1 or
1245 greater) on a 2.6 (or later) kernel.
1246
1247 The
1248 .B uuid
1249 option will change the uuid of the array. If a UUID is given with the
1250 .B \-\-uuid
1251 option that UUID will be used as a new UUID and will
1252 .B NOT
1253 be used to help identify the devices in the array.
1254 If no
1255 .B \-\-uuid
1256 is given, a random UUID is chosen.
1257
1258 The
1259 .B name
1260 option will change the
1261 .I name
1262 of the array as stored in the superblock. This is only supported for
1263 version-1 superblocks.
1264
1265 The
1266 .B nodes
1267 option will change the
1268 .I nodes
1269 of the array as stored in the bitmap superblock. This option only
1270 works for a clustered environment.
1271
1272 The
1273 .B homehost
1274 option will change the
1275 .I homehost
1276 as recorded in the superblock. For version-0 superblocks, this is the
1277 same as updating the UUID.
1278 For version-1 superblocks, this involves updating the name.
1279
1280 The
1281 .B home\-cluster
1282 option will change the cluster name as recorded in the superblock and
bitmap. This option only works for a clustered environment.
1284
1285 The
1286 .B resync
1287 option will cause the array to be marked
1288 .I dirty
1289 meaning that any redundancy in the array (e.g. parity for RAID5,
1290 copies for RAID1) may be incorrect. This will cause the RAID system
1291 to perform a "resync" pass to make sure that all redundant information
1292 is correct.
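.IP
For example, a resync might be forced while assembling an array with
(device names are illustrative):
.in +5
mdadm \-\-assemble /dev/md0 \-\-update=resync /dev/sda1 /dev/sdb1
.in -5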
1293
1294 The
1295 .B byteorder
1296 option allows arrays to be moved between machines with different
1297 byte-order, such as from a big-endian machine like a Sparc or some
1298 MIPS machines, to a little-endian x86_64 machine.
1299 When assembling such an array for the first time after a move, giving
1300 .B "\-\-update=byteorder"
1301 will cause
1302 .I mdadm
1303 to expect superblocks to have their byteorder reversed, and will
1304 correct that order before assembling the array. This is only valid
1305 with original (Version 0.90) superblocks.
1306
1307 The
1308 .B summaries
1309 option will correct the summaries in the superblock. That is the
1310 counts of total, working, active, failed, and spare devices.
1311
1312 The
1313 .B devicesize
1314 option will rarely be of use. It applies to version 1.1 and 1.2 metadata
1315 only (where the metadata is at the start of the device) and is only
1316 useful when the component device has changed size (typically become
1317 larger). The version 1 metadata records the amount of the device that
1318 can be used to store data, so if a device in a version 1.1 or 1.2
1319 array becomes larger, the metadata will still be visible, but the
1320 extra space will not. In this case it might be useful to assemble the
1321 array with
1322 .BR \-\-update=devicesize .
1323 This will cause
1324 .I mdadm
1325 to determine the maximum usable amount of space on each device and
1326 update the relevant field in the metadata.
1327
1328 The
1329 .B metadata
1330 option only works on v0.90 metadata arrays and will convert them to
1331 v1.0 metadata. The array must not be dirty (i.e. it must not need a
1332 sync) and it must not have a write-intent bitmap.
1333
1334 The old metadata will remain on the devices, but will appear older
1335 than the new metadata and so will usually be ignored. The old metadata
1336 (or indeed the new metadata) can be removed by giving the appropriate
1337 .B \-\-metadata=
1338 option to
1339 .BR \-\-zero\-superblock .
1340
1341 The
1342 .B no\-bitmap
1343 option can be used when an array has an internal bitmap which is
1344 corrupt in some way so that assembling the array normally fails. It
1345 will cause any internal bitmap to be ignored.
1346
1347 The
1348 .B bbl
1349 option will reserve space in each device for a bad block list. This
1350 will be 4K in size and positioned near the end of any free space
1351 between the superblock and the data.
1352
1353 The
1354 .B no\-bbl
1355 option will cause any reservation of space for a bad block list to be
1356 removed. If the bad block list contains entries, this will fail, as
1357 removing the list could cause data corruption.
1358
1359 The
1360 .B ppl
1361 option will enable PPL for a RAID5 array and reserve space for PPL on each
1362 device. There must be enough free space between the data and superblock and a
1363 write-intent bitmap or journal must not be used.
1364
1365 The
1366 .B no\-ppl
1367 option will disable PPL in the superblock.
1368
1369 The
1370 .B layout\-original
1371 and
1372 .B layout\-alternate
options are for RAID0 arrays with non-uniform device sizes that were in
1374 use before Linux 5.4. If the array was being used with Linux 3.13 or
1375 earlier, then to assemble the array on a new kernel,
1376 .B \-\-update=layout\-original
1377 must be given. If the array was created and used with a kernel from Linux 3.14 to
1378 Linux 5.3, then
1379 .B \-\-update=layout\-alternate
1380 must be given. This only needs to be given once. Subsequent assembly of the array
1381 will happen normally.
1382 For more information, see
1383 .IR md (4).
1384
1385 The
1386 .B layout\-unspecified
1387 option reverts the effect of
.B layout\-original
1389 or
1390 .B layout\-alternate
and allows the array to be used again on a kernel prior to Linux 5.3.
1392 This option should be used with great caution.
1393
1394 .TP
1395 .BR \-\-freeze\-reshape
This option is intended to be used in start-up scripts during the initrd boot phase.
When an array under reshape is assembled during the initrd phase, this option
stops the reshape after the reshape-critical section has been restored. This happens
before the file system pivot operation and avoids losing the file system context.
Losing the file system context would cause the reshape to be broken.
1401
1402 Reshape can be continued later using the
1403 .B \-\-continue
1404 option for the grow command.
1405
1406 .TP
1407 .BR \-\-symlinks
1408 See this option under Create and Build options.
1409
1410 .SH For Manage mode:
1411
1412 .TP
1413 .BR \-t ", " \-\-test
1414 Unless a more serious error occurred,
1415 .I mdadm
1416 will exit with a status of 2 if no changes were made to the array and
1417 0 if at least one change was made.
1418 This can be useful when an indirect specifier such as
1419 .BR missing ,
1420 .B detached
1421 or
1422 .B faulty
1423 is used in requesting an operation on the array.
1424 .B \-\-test
1425 will report failure if these specifiers didn't find any match.
1426
1427 .TP
1428 .BR \-a ", " \-\-add
1429 hot-add listed devices.
1430 If a device appears to have recently been part of the array
1431 (possibly it failed or was removed) the device is re\-added as described
1432 in the next point.
1433 If that fails or the device was never part of the array, the device is
1434 added as a hot-spare.
1435 If the array is degraded, it will immediately start to rebuild data
1436 onto that spare.
1437
Note that this and the following options are only meaningful on arrays
1439 with redundancy. They don't apply to RAID0 or Linear.
1440
1441 .TP
1442 .BR \-\-re\-add
1443 re\-add a device that was previously removed from an array.
1444 If the metadata on the device reports that it is a member of the
1445 array, and the slot that it used is still vacant, then the device will
1446 be added back to the array in the same position. This will normally
1447 cause the data for that device to be recovered. However based on the
1448 event count on the device, the recovery may only require sections that
are flagged in the write-intent bitmap to be recovered, or may not require
1450 any recovery at all.
1451
1452 When used on an array that has no metadata (i.e. it was built with
1453 .BR \-\-build)
1454 it will be assumed that bitmap-based recovery is enough to make the
1455 device fully consistent with the array.
1456
1457 When used with v1.x metadata,
1458 .B \-\-re\-add
1459 can be accompanied by
1460 .BR \-\-update=devicesize ,
1461 .BR \-\-update=bbl ", or"
1462 .BR \-\-update=no\-bbl .
1463 See the description of these option when used in Assemble mode for an
1464 explanation of their use.
1465
1466 If the device name given is
1467 .B missing
1468 then
1469 .I mdadm
1470 will try to find any device that looks like it should be
1471 part of the array but isn't and will try to re\-add all such devices.
1472
1473 If the device name given is
1474 .B faulty
1475 then
1476 .I mdadm
1477 will find all devices in the array that are marked
1478 .BR faulty ,
1479 remove them and attempt to immediately re\-add them. This can be
1480 useful if you are certain that the reason for failure has been
1481 resolved.
1482
1483 .TP
1484 .B \-\-add\-spare
1485 Add a device as a spare. This is similar to
1486 .B \-\-add
1487 except that it does not attempt
1488 .B \-\-re\-add
1489 first. The device will be added as a spare even if it looks like it
could be a recent member of the array.
1491
1492 .TP
1493 .BR \-r ", " \-\-remove
remove listed devices. They must not be active, i.e. they should
1495 be failed or spare devices.
1496
1497 As well as the name of a device file
1498 (e.g.
1499 .BR /dev/sda1 )
1500 the words
1501 .BR failed ,
1502 .B detached
1503 and names like
1504 .B set-A
1505 can be given to
1506 .BR \-\-remove .
The first causes all failed devices to be removed. The second causes
any device which is no longer connected to the system (i.e. an 'open'
1509 returns
1510 .BR ENXIO )
1511 to be removed.
The third will remove a set as described below under
1513 .BR \-\-fail .
1514
1515 .TP
1516 .BR \-f ", " \-\-fail
1517 Mark listed devices as faulty.
1518 As well as the name of a device file, the word
1519 .B detached
1520 or a set name like
1521 .B set\-A
1522 can be given. The former will cause any device that has been detached from
1523 the system to be marked as failed. It can then be removed.
1524
1525 For RAID10 arrays where the number of copies evenly divides the number
1526 of devices, the devices can be conceptually divided into sets where
1527 each set contains a single complete copy of the data on the array.
1528 Sometimes a RAID10 array will be configured so that these sets are on
1529 separate controllers. In this case all the devices in one set can be
1530 failed by giving a name like
1531 .B set\-A
1532 or
1533 .B set\-B
1534 to
1535 .BR \-\-fail .
1536 The appropriate set names are reported by
1537 .BR \-\-detail .
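.IP
For example, all devices in one such set might be failed together with:
.in +5
mdadm /dev/md0 \-\-fail set\-A
.in -5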
1538
1539 .TP
1540 .BR \-\-set\-faulty
1541 same as
1542 .BR \-\-fail .
1543
1544 .TP
1545 .B \-\-replace
1546 Mark listed devices as requiring replacement. As soon as a spare is
1547 available, it will be rebuilt and will replace the marked device.
1548 This is similar to marking a device as faulty, but the device remains
1549 in service during the recovery process to increase resilience against
1550 multiple failures. When the replacement process finishes, the
1551 replaced device will be marked as faulty.
1552
1553 .TP
1554 .B \-\-with
1555 This can follow a list of
1556 .B \-\-replace
1557 devices. The devices listed after
1558 .B \-\-with
1559 will be preferentially used to replace the devices listed after
1560 .BR \-\-replace .
These devices must already be spare devices in the array.
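.IP
For example, a marked device might be replaced by a specific spare with
(device names are illustrative):
.in +5
mdadm /dev/md0 \-\-replace /dev/sdb1 \-\-with /dev/sdc1
.in -5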
1562
1563 .TP
1564 .BR \-\-write\-mostly
1565 Subsequent devices that are added or re\-added will have the 'write-mostly'
1566 flag set. This is only valid for RAID1 and means that the 'md' driver
1567 will avoid reading from these devices if possible.
1568 .TP
1569 .BR \-\-readwrite
1570 Subsequent devices that are added or re\-added will have the 'write-mostly'
1571 flag cleared.
1572 .TP
1573 .BR \-\-cluster\-confirm
1574 Confirm the existence of the device. This is issued in response to an \-\-add
1575 request by a node in a cluster. When a node adds a device it sends a message
1576 to all nodes in the cluster to look for a device with a UUID. This translates
1577 to a udev notification with the UUID of the device to be added and the slot
1578 number. The receiving node must acknowledge this message
1579 with \-\-cluster\-confirm. Valid arguments are <slot>:<devicename> in case
1580 the device is found or <slot>:missing in case the device is not found.
1581
1582 .TP
1583 .BR \-\-add-journal
1584 Add journal to an existing array, or recreate journal for RAID-4/5/6 array
1585 that lost a journal device. To avoid interrupting on-going write opertions,
1586 .B \-\-add-journal
1587 only works for arrays in the Read-Only state.
1588
1589 .TP
1590 .BR \-\-failfast
1591 Subsequent devices that are added or re\-added will have
1592 the 'failfast' flag set. This is only valid for RAID1 and RAID10 and
1593 means that the 'md' driver will avoid long timeouts on error handling
1594 where possible.
1595 .TP
1596 .BR \-\-nofailfast
1597 Subsequent devices that are re\-added will be re\-added without
1598 the 'failfast' flag set.
1599
1600 .P
1601 Each of these options requires that the first device listed is the array
1602 to be acted upon, and the remainder are component devices to be added,
1603 removed, marked as faulty, etc. Several different operations can be
1604 specified for different devices, e.g.
1605 .in +5
1606 mdadm /dev/md0 \-\-add /dev/sda1 \-\-fail /dev/sdb1 \-\-remove /dev/sdb1
1607 .in -5
1608 Each operation applies to all devices listed until the next
1609 operation.
1610
1611 If an array is using a write-intent bitmap, then devices which have
1612 been removed can be re\-added in a way that avoids a full
1613 reconstruction but instead just updates the blocks that have changed
1614 since the device was removed. For arrays with persistent metadata
1615 (superblocks) this is done automatically. For arrays created with
1616 .B \-\-build
1617 mdadm needs to be told that this device was removed recently with
1618 .BR \-\-re\-add .
1619
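For example (device names are illustrative), a recently removed member
can be returned to the array with:
.in +5
.B " mdadm /dev/md0 \-\-re\-add /dev/sda1"
.in -5
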
1620 Devices can only be removed from an array if they are not in active
1621 use, i.e. they must be spares or failed devices. To remove an active
1622 device, it must first be marked as
1623 .B faulty.
1624
1625 .SH For Misc mode:
1626
1627 .TP
1628 .BR \-Q ", " \-\-query
1629 Examine a device to see
1630 (1) if it is an md device and (2) if it is a component of an md
1631 array.
1632 Information about what is discovered is presented.
1633
1634 .TP
1635 .BR \-D ", " \-\-detail
1636 Print details of one or more md devices.
1637
1638 .TP
1639 .BR \-\-detail\-platform
1640 Print details of the platform's RAID capabilities (firmware / hardware
1641 topology) for a given metadata format. If used without argument, mdadm
1642 will scan all controllers looking for their capabilities. Otherwise, mdadm
1643 will only look at the controller specified by the argument in form of an
1644 absolute filepath or a link, e.g.
1645 .IR /sys/devices/pci0000:00/0000:00:1f.2 .
1646
1647 .TP
1648 .BR \-Y ", " \-\-export
1649 When used with
1650 .BR \-\-detail ,
1651 .BR \-\-detail-platform ,
1652 .BR \-\-examine ,
1653 or
1654 .B \-\-incremental
1655 output will be formatted as
1656 .B key=value
1657 pairs for easy import into the environment.
1658
1659 With
1660 .B \-\-incremental
1661 the value
1662 .B MD_STARTED
1663 indicates whether an array was started
1664 .RB ( yes )
1665 or not, which may include a reason
1666 .RB ( unsafe ", " nothing ", " no ).
1667 Also the value
1668 .B MD_FOREIGN
1669 indicates if the array is expected on this host
1670 .RB ( no ),
1671 or seems to be from elsewhere
1672 .RB ( yes ).
1673
1674 .TP
1675 .BR \-E ", " \-\-examine
1676 Print contents of the metadata stored on the named device(s).
1677 Note the contrast between
1678 .B \-\-examine
1679 and
1680 .BR \-\-detail .
1681 .B \-\-examine
1682 applies to devices which are components of an array, while
1683 .B \-\-detail
1684 applies to a whole array which is currently active.
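For example (device names are illustrative):
.in +5
.B " mdadm \-\-detail /dev/md0"
.br
.B " mdadm \-\-examine /dev/sda1"
.in -5
The first reports on the active array, the second on the metadata stored
on one of its component devices.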
1685 .TP
1686 .B \-\-sparc2.2
1687 If an array was created on a SPARC machine with a 2.2 Linux kernel
1688 patched with RAID support, the superblock will have been created
1689 incorrectly, or at least incompatibly with 2.4 and later kernels.
1690 Using the
1691 .B \-\-sparc2.2
1692 flag with
1693 .B \-\-examine
1694 will fix the superblock before displaying it. If this appears to do
1695 the right thing, then the array can be successfully assembled using
1696 .BR "\-\-assemble \-\-update=sparc2.2" .
1697
1698 .TP
1699 .BR \-X ", " \-\-examine\-bitmap
1700 Report information about a bitmap file.
1701 The argument is either an external bitmap file or an array component
1702 in case of an internal bitmap. Note that running this on an array
1703 device (e.g.
1704 .BR /dev/md0 )
1705 does not report the bitmap for that array.
1706
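For example (device name illustrative), the internal bitmap of an array
can be inspected via one of its component devices with:
.in +5
.B " mdadm \-X /dev/sda1"
.in -5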
1707 .TP
1708 .B \-\-examine\-badblocks
1709 List the bad-blocks recorded for the device, if a bad-blocks list has
1710 been configured. Currently only
1711 .B 1.x
1712 and
1713 .B IMSM
1714 metadata support bad-blocks lists.
1715
1716 .TP
1717 .BI \-\-dump= directory
1718 .TP
1719 .BI \-\-restore= directory
1720 Save metadata from listed devices, or restore metadata to listed devices.
1721
1722 .TP
1723 .BR \-R ", " \-\-run
1724 start a partially assembled array. If
1725 .B \-\-assemble
1726 did not find enough devices to fully start the array, it might leave
1727 it partially assembled. If you wish, you can then use
1728 .B \-\-run
1729 to start the array in degraded mode.
1730
1731 .TP
1732 .BR \-S ", " \-\-stop
1733 deactivate array, releasing all resources.
1734
1735 .TP
1736 .BR \-o ", " \-\-readonly
1737 mark array as readonly.
1738
1739 .TP
1740 .BR \-w ", " \-\-readwrite
1741 mark array as readwrite.
1742
1743 .TP
1744 .B \-\-zero\-superblock
1745 If the device contains a valid md superblock, the block is
1746 overwritten with zeros. With
1747 .B \-\-force
1748 the block where the superblock would be is overwritten even if it
1749 doesn't appear to be valid.
1750
1751 .B Note:
1752 Be careful when calling \-\-zero\-superblock with clustered RAID: make sure the
1753 array is not used or assembled on another cluster node before executing it.
1754
1755 .TP
1756 .B \-\-kill\-subarray=
1757 If the device is a container and the argument to \-\-kill\-subarray
1758 specifies an inactive subarray in the container, then the subarray is
1759 deleted. Deleting all subarrays will leave an 'empty-container' or
1760 spare superblock on the drives. See
1761 .B \-\-zero\-superblock
1762 for completely
1763 removing a superblock. Note that some formats depend on the subarray
1764 index for generating a UUID; this command will fail if it would change
1765 the UUID of an active subarray.
1766
1767 .TP
1768 .B \-\-update\-subarray=
1769 If the device is a container and the argument to \-\-update\-subarray
1770 specifies a subarray in the container, then attempt to update the given
1771 superblock field in the subarray. See below in
1772 .B MISC MODE
1773 for details.
1774
1775 .TP
1776 .BR \-t ", " \-\-test
1777 When used with
1778 .BR \-\-detail ,
1779 the exit status of
1780 .I mdadm
1781 is set to reflect the status of the device. See below in
1782 .B MISC MODE
1783 for details.
1784
1785 .TP
1786 .BR \-W ", " \-\-wait
1787 For each md device given, wait for any resync, recovery, or reshape
1788 activity to finish before returning.
1789 .I mdadm
1790 will return with success if it actually waited for every device
1791 listed, otherwise it will return failure.
1792
1793 .TP
1794 .BR \-\-wait\-clean
1795 For each md device given, or each device in /proc/mdstat if
1796 .B \-\-scan
1797 is given, arrange for the array to be marked clean as soon as possible.
1798 .I mdadm
1799 will return with success if the array uses external metadata and we
1800 successfully waited. For native arrays this returns immediately as the
1801 kernel handles dirty-clean transitions at shutdown. No action is taken
1802 if safe-mode handling is disabled.
1803
1804 .TP
1805 .B \-\-action=
1806 Set the "sync_action" for all md devices given to one of
1807 .BR idle ,
1808 .BR frozen ,
1809 .BR check ,
1810 .BR repair .
1811 Setting to
1812 .B idle
1813 will abort any currently running action though some actions will
1814 automatically restart.
1815 Setting to
1816 .B frozen
1817 will abort any current action and ensure no other action starts
1818 automatically.
1819
1820 Details of
1821 .B check
1822 and
1823 .B repair
1824 can be found in
1825 .IR md (4)
1826 under
1827 .BR "SCRUBBING AND MISMATCHES" .
1828
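For example (device name illustrative), a scrub could be requested with:
.in +5
.B " mdadm \-\-action=check /dev/md0"
.in -5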
1829 .SH For Incremental Assembly mode:
1830 .TP
1831 .BR \-\-rebuild\-map ", " \-r
1832 Rebuild the map file
1833 .RB ( {MAP_PATH} )
1834 that
1835 .I mdadm
1836 uses to help track which arrays are currently being assembled.
1837
1838 .TP
1839 .BR \-\-run ", " \-R
1840 Run any array assembled as soon as a minimal number of devices are
1841 available, rather than waiting until all expected devices are present.
1842
1843 .TP
1844 .BR \-\-scan ", " \-s
1845 Only meaningful with
1846 .BR \-R ,
1847 this will scan the
1848 .B map
1849 file for arrays that are being incrementally assembled and will try to
1850 start any that are not already started. If any such array is listed
1851 in
1852 .B mdadm.conf
1853 as requiring an external bitmap, that bitmap will be attached first.
1854
1855 .TP
1856 .BR \-\-fail ", " \-f
1857 This allows the hot-plug system to remove devices that have fully disappeared
1858 from the kernel. It will first fail and then remove the device from any
1859 array it belongs to.
1860 The device name given should be a kernel device name such as "sda",
1861 not a name in
1862 .IR /dev .
1863
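For example, a hot-unplug helper might run (kernel device name
illustrative):
.in +5
.B " mdadm \-\-incremental \-\-fail sda"
.in -5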
1864 .TP
1865 .BR \-\-path=
1866 Only used with \-\-fail. The 'path' given will be recorded so that if
1867 a new device appears at the same location it can be automatically
1868 added to the same array. This allows the failed device to be
1869 automatically replaced by a new device without metadata if it appears
1870 at specified path. This option is normally only set by a
1871 .I udev
1872 script.
1873
1874 .SH For Monitor mode:
1875 .TP
1876 .BR \-m ", " \-\-mail
1877 Give a mail address to send alerts to.
1878
1879 .TP
1880 .BR \-p ", " \-\-program ", " \-\-alert
1881 Give a program to be run whenever an event is detected.
1882
1883 .TP
1884 .BR \-y ", " \-\-syslog
1885 Cause all events to be reported through 'syslog'. The messages have
1886 facility of 'daemon' and varying priorities.
1887
1888 .TP
1889 .BR \-d ", " \-\-delay
1890 Give a delay in seconds.
1891 .I mdadm
1892 polls the md arrays and then waits this many seconds before polling
1893 again. The default is 60 seconds. Since 2.6.16, there is no need to
1894 reduce this as the kernel alerts
1895 .I mdadm
1896 immediately when there is any change.
1897
1898 .TP
1899 .BR \-r ", " \-\-increment
1900 Give a percentage increment.
1901 .I mdadm
1902 will generate RebuildNN events with the given percentage increment.
1903
1904 .TP
1905 .BR \-f ", " \-\-daemonise
1906 Tell
1907 .I mdadm
1908 to run as a background daemon if it decides to monitor anything. This
1909 causes it to fork and run in the child, and to disconnect from the
1910 terminal. The process id of the child is written to stdout.
1911 This is useful with
1912 .B \-\-scan
1913 which will only continue monitoring if a mail address or alert program
1914 is found in the config file.
1915
1916 .TP
1917 .BR \-i ", " \-\-pid\-file
1918 When
1919 .I mdadm
1920 is running in daemon mode, write the pid of the daemon process to
1921 the specified file, instead of printing it on standard output.
1922
1923 .TP
1924 .BR \-1 ", " \-\-oneshot
1925 Check arrays only once. This will generate
1926 .B NewArray
1927 events and more significantly
1928 .B DegradedArray
1929 and
1930 .B SparesMissing
1931 events. Running
1932 .in +5
1933 .B " mdadm \-\-monitor \-\-scan \-1"
1934 .in -5
1935 from a cron script will ensure regular notification of any degraded arrays.
1936
1937 .TP
1938 .BR \-t ", " \-\-test
1939 Generate a
1940 .B TestMessage
1941 alert for every array found at startup. This alert gets mailed and
1942 passed to the alert program. This can be used for testing that alert
1943 messages do get through successfully.
1944
1945 .TP
1946 .BR \-\-no\-sharing
1947 This inhibits the functionality for moving spares between arrays.
1948 Only one monitoring process started with
1949 .B \-\-scan
1950 but without this flag is allowed, otherwise the two could interfere
1951 with each other.
1952
1953 .SH ASSEMBLE MODE
1954
1955 .HP 12
1956 Usage:
1957 .B mdadm \-\-assemble
1958 .I md-device options-and-component-devices...
1959 .HP 12
1960 Usage:
1961 .B mdadm \-\-assemble \-\-scan
1962 .I md-devices-and-options...
1963 .HP 12
1964 Usage:
1965 .B mdadm \-\-assemble \-\-scan
1966 .I options...
1967
1968 .PP
1969 This usage assembles one or more RAID arrays from pre-existing components.
1970 For each array, mdadm needs to know the md device, the identity of the
1971 array, and a number of component-devices. These can be found in a number of ways.
1972
1973 In the first usage example (without the
1974 .BR \-\-scan )
1975 the first device given is the md device.
1976 In the second usage example, all devices listed are treated as md
1977 devices and assembly is attempted.
1978 In the third (where no devices are listed) all md devices that are
1979 listed in the configuration file are assembled. If no arrays are
1980 described by the configuration file, then any arrays that
1981 can be found on unused devices will be assembled.
1982
1983 If precisely one device is listed, but
1984 .B \-\-scan
1985 is not given, then
1986 .I mdadm
1987 acts as though
1988 .B \-\-scan
1989 was given and identity information is extracted from the configuration file.
1990
1991 The identity can be given with the
1992 .B \-\-uuid
1993 option, the
1994 .B \-\-name
1995 option, or the
1996 .B \-\-super\-minor
1997 option; otherwise it will be taken from the md-device record in the config
1998 file, or from the superblock of the first component-device
1999 listed on the command line.
2000
2001 Devices can be given on the
2002 .B \-\-assemble
2003 command line or in the config file. Only devices which have an md
2004 superblock which contains the right identity will be considered for
2005 any array.
2006
2007 The config file is only used if explicitly named with
2008 .B \-\-config
2009 or requested with (a possibly implicit)
2010 .BR \-\-scan .
2011 In the latter case,
2012 .B /etc/mdadm.conf
2013 or
2014 .B /etc/mdadm/mdadm.conf
2015 is used.
2016
2017 If
2018 .B \-\-scan
2019 is not given, then the config file will only be used to find the
2020 identity of md arrays.
2021
2022 Normally the array will be started after it is assembled. However if
2023 .B \-\-scan
2024 is not given and not all expected drives were listed, then the array
2025 is not started (to guard against usage errors). To insist that the
2026 array be started in this case (as may work for RAID1, 4, 5, 6, or 10),
2027 give the
2028 .B \-\-run
2029 flag.
2030
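For example (device names are illustrative), a degraded RAID1 could be
forcibly started from a single listed member with:
.in +5
.B " mdadm \-\-assemble \-\-run /dev/md0 /dev/sda1"
.in -5
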
2031 If
2032 .I udev
2033 is active,
2034 .I mdadm
2035 does not create any entries in
2036 .B /dev
2037 but leaves that to
2038 .IR udev .
2039 It does record information in
2040 .B {MAP_PATH}
2041 which will allow
2042 .I udev
2043 to choose the correct name.
2044
2045 If
2046 .I mdadm
2047 detects that udev is not configured, it will create the devices in
2048 .B /dev
2049 itself.
2050
2051 In Linux kernels prior to version 2.6.28 there were two distinctly
2052 different types of md devices that could be created: one that could be
2053 partitioned using standard partitioning tools and one that could not.
2054 Since 2.6.28 that distinction is no longer relevant as both types of
2055 devices can be partitioned.
2056 .I mdadm
2057 will normally create the type that originally could not be partitioned
2058 as it has a well defined major number (9).
2059
2060 Prior to 2.6.28, it is important that mdadm chooses the correct type
2061 of array device to use. This can be controlled with the
2062 .B \-\-auto
2063 option. In particular, a value of "mdp" or "part" or "p" tells mdadm
2064 to use a partitionable device rather than the default.
2065
2066 In the no-udev case, the value given to
2067 .B \-\-auto
2068 can be suffixed by a number. This tells
2069 .I mdadm
2070 to create that number of partition devices rather than the default of 4.
2071
2072 The value given to
2073 .B \-\-auto
2074 can also be given in the configuration file as a word starting
2075 .B auto=
2076 on the ARRAY line for the relevant array.
2077
2078 .SS Auto Assembly
2079 When
2080 .B \-\-assemble
2081 is used with
2082 .B \-\-scan
2083 and no devices are listed,
2084 .I mdadm
2085 will first attempt to assemble all the arrays listed in the config
2086 file.
2087
2088 If no arrays are listed in the config (other than those marked
2089 .BR <ignore> )
2090 it will look through the available devices for possible arrays and
2091 will try to assemble anything that it finds. Arrays which are tagged
2092 as belonging to the given homehost will be assembled and started
2093 normally. Arrays which do not obviously belong to this host are given
2094 names that are expected not to conflict with anything local, and are
2095 started "read-auto" so that nothing is written to any device until the
2096 array is written to. i.e. automatic resync etc is delayed.
2097
2098 If
2099 .I mdadm
2100 finds a consistent set of devices that look like they should comprise
2101 an array, and if the superblock is tagged as belonging to the given
2102 home host, it will automatically choose a device name and try to
2103 assemble the array. If the array uses version-0.90 metadata, then the
2104 .B minor
2105 number as recorded in the superblock is used to create a name in
2106 .B /dev/md/
2107 so for example
2108 .BR /dev/md/3 .
2109 If the array uses version-1 metadata, then the
2110 .B name
2111 from the superblock is used to similarly create a name in
2112 .B /dev/md/
2113 (the name will have any 'host' prefix stripped first).
2114
2115 This behaviour can be modified by the
2116 .I AUTO
2117 line in the
2118 .I mdadm.conf
2119 configuration file. This line can indicate that specific metadata
2120 type should, or should not, be automatically assembled. If an array
2121 is found which is not listed in
2122 .I mdadm.conf
2123 and has a metadata format that is denied by the
2124 .I AUTO
2125 line, then it will not be assembled.
2126 The
2127 .I AUTO
2128 line can also request that all arrays identified as being for this
2129 homehost should be assembled regardless of their metadata type.
2130 See
2131 .IR mdadm.conf (5)
2132 for further details.
2133
2134 Note: Auto assembly cannot be used for assembling and activating some
2135 arrays which are undergoing reshape. In particular as the
2136 .B backup\-file
2137 cannot be given, any reshape which requires a backup-file to continue
2138 cannot be started by auto assembly. An array which is growing to more
2139 devices and has passed the critical section can be assembled using
2140 auto-assembly.
2141
2142 .SH BUILD MODE
2143
2144 .HP 12
2145 Usage:
2146 .B mdadm \-\-build
2147 .I md-device
2148 .BI \-\-chunk= X
2149 .BI \-\-level= Y
2150 .BI \-\-raid\-devices= Z
2151 .I devices
2152
2153 .PP
2154 This usage is similar to
2155 .BR \-\-create .
2156 The difference is that it creates an array without a superblock. With
2157 these arrays there is no difference between initially creating the array and
2158 subsequently assembling the array, except that hopefully there is useful
2159 data there in the second case.
2160
2161 The level may be raid0, linear, raid1, raid10, multipath, or faulty, or
2162 one of their synonyms. All devices must be listed and the array will
2163 be started once complete. It will often be appropriate to use
2164 .B \-\-assume\-clean
2165 with levels raid1 or raid10.
2166
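For example (device names are illustrative), a superblock-less RAID0
could be built with:
.in +5
.B " mdadm \-\-build /dev/md0 \-\-chunk=64 \-\-level=raid0 \-\-raid\-devices=2 /dev/sda1 /dev/sdb1"
.in -5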
2167 .SH CREATE MODE
2168
2169 .HP 12
2170 Usage:
2171 .B mdadm \-\-create
2172 .I md-device
2173 .BI \-\-chunk= X
2174 .BI \-\-level= Y
2175 .br
2176 .BI \-\-raid\-devices= Z
2177 .I devices
2178
2179 .PP
2180 This usage will initialise a new md array, associate some devices with
2181 it, and activate the array.
2182
2183 The named device will normally not exist when
2184 .I "mdadm \-\-create"
2185 is run, but will be created by
2186 .I udev
2187 once the array becomes active.
2188
2189 As devices are added, they are checked to see if they contain RAID
2190 superblocks or filesystems. They are also checked to see if the variance in
2191 device size exceeds 1%.
2192
2193 If any discrepancy is found, the array will not automatically be run, though
2194 the presence of a
2195 .B \-\-run
2196 can override this caution.
2197
2198 To create a "degraded" array in which some devices are missing, simply
2199 give the word "\fBmissing\fP"
2200 in place of a device name. This will cause
2201 .I mdadm
2202 to leave the corresponding slot in the array empty.
2203 For a RAID4 or RAID5 array at most one slot can be
2204 "\fBmissing\fP"; for a RAID6 array at most two slots.
2205 For a RAID1 array, only one real device needs to be given. All of the
2206 others can be
2207 "\fBmissing\fP".
2208
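For example (device name illustrative), a two-device RAID1 with one slot
left empty could be created with:
.in +5
.B " mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/sda1 missing"
.in -5
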
2209 When creating a RAID5 array,
2210 .I mdadm
2211 will automatically create a degraded array with an extra spare drive.
2212 This is because building the spare into a degraded array is in general
2213 faster than resyncing the parity on a non-degraded, but not clean,
2214 array. This feature can be overridden with the
2215 .B \-\-force
2216 option.
2217
2218 When creating an array with version-1 metadata a name for the array is
2219 required.
2220 If this is not given with the
2221 .B \-\-name
2222 option,
2223 .I mdadm
2224 will choose a name based on the last component of the name of the
2225 device being created. So if
2226 .B /dev/md3
2227 is being created, then the name
2228 .B 3
2229 will be chosen.
2230 If
2231 .B /dev/md/home
2232 is being created, then the name
2233 .B home
2234 will be used.
2235
2236 When creating a partition based array, using
2237 .I mdadm
2238 with version-1.x metadata, the partition type should be set to
2239 .B 0xDA
2240 (non fs-data). This type selection allows for greater precision, since
2241 using any other type [RAID auto-detect (0xFD) or a GNU/Linux partition (0x83)]
2242 might create problems in the event of array recovery through a live CD-ROM.
2243
2244 A new array will normally get a randomly assigned 128-bit UUID which is
2245 very likely to be unique. If you have a specific need, you can choose
2246 a UUID for the array by giving the
2247 .B \-\-uuid=
2248 option. Be warned that creating two arrays with the same UUID is a
2249 recipe for disaster. Also, using
2250 .B \-\-uuid=
2251 when creating a v0.90 array will silently override any
2252 .B \-\-homehost=
2253 setting.
2254 .\"If the
2255 .\".B \-\-size
2256 .\"option is given, it is not necessary to list any component-devices in this command.
2257 .\"They can be added later, before a
2258 .\".B \-\-run.
2259 .\"If no
2260 .\".B \-\-size
2261 .\"is given, the apparent size of the smallest drive given is used.
2262
2263 If the array type supports a write-intent bitmap, and if the devices
2264 in the array exceed 100G in size, an internal write-intent bitmap
2265 will automatically be added unless some other option is explicitly
2266 requested with the
2267 .B \-\-bitmap
2268 option or a different consistency policy is selected with the
2269 .B \-\-consistency\-policy
2270 option. In any case space for a bitmap will be reserved so that one
2271 can be added later with
2272 .BR "\-\-grow \-\-bitmap=internal" .
2273
2274 If the metadata type supports it (currently only 1.x and IMSM metadata),
2275 space will be allocated to store a bad block list. This allows a modest
2276 number of bad blocks to be recorded, allowing the drive to remain in
2277 service while only partially functional.
2278
2279 When creating an array within a
2280 .B CONTAINER
2281 .I mdadm
2282 can be given either the list of devices to use, or simply the name of
2283 the container. The former case gives control over which devices in
2284 the container will be used for the array. The latter case allows
2285 .I mdadm
2286 to automatically choose which devices to use based on how much spare
2287 space is available.
2288
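For example (names are illustrative), a RAID5 could be created inside an
existing IMSM container by naming just the container and letting
.I mdadm
choose the member devices:
.in +5
.B " mdadm \-\-create /dev/md/vol0 \-\-level=5 \-\-raid\-devices=3 /dev/md/imsm0"
.in -5
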
2289 The General Management options that are valid with
2290 .B \-\-create
2291 are:
2292 .TP
2293 .B \-\-run
2294 insist on running the array even if some devices look like they might
2295 be in use.
2296
2297 .TP
2298 .B \-\-readonly
2299 start the array in readonly mode.
2300
2301 .SH MANAGE MODE
2302 .HP 12
2303 Usage:
2304 .B mdadm
2305 .I device
2306 .I options... devices...
2307 .PP
2308
2309 This usage will allow individual devices in an array to be failed,
2310 removed or added. It is possible to perform multiple operations with
2311 one command. For example:
2312 .br
2313 .B " mdadm /dev/md0 \-f /dev/hda1 \-r /dev/hda1 \-a /dev/hda1"
2314 .br
2315 will firstly mark
2316 .B /dev/hda1
2317 as faulty in
2318 .B /dev/md0
2319 and will then remove it from the array and finally add it back
2320 in as a spare. However only one md array can be affected by a single
2321 command.
2322
2323 When a device is added to an active array, mdadm checks to see if it
2324 has metadata on it which suggests that it was recently a member of the
2325 array. If it does, it tries to "re\-add" the device. If there have
2326 been no changes since the device was removed, or if the array has a
2327 write-intent bitmap which has recorded whatever changes there were,
2328 then the device will immediately become a full member of the array and
2329 those differences recorded in the bitmap will be resolved.
2330
2331 .SH MISC MODE
2332 .HP 12
2333 Usage:
2334 .B mdadm
2335 .I options ...
2336 .I devices ...
2337 .PP
2338
2339 MISC mode includes a number of distinct operations that
2340 operate on distinct devices. The operations are:
2341 .TP
2342 .B \-\-query
2343 The device is examined to see if it is
2344 (1) an active md array, or
2345 (2) a component of an md array.
2346 The information discovered is reported.
2347
2348 .TP
2349 .B \-\-detail
2350 The device should be an active md device.
2351 .B mdadm
2352 will display a detailed description of the array.
2353 .B \-\-brief
2354 or
2355 .B \-\-scan
2356 will cause the output to be less detailed and the format to be
2357 suitable for inclusion in
2358 .BR mdadm.conf .
2359 The exit status of
2360 .I mdadm
2361 will normally be 0 unless
2362 .I mdadm
2363 failed to get useful information about the device(s); however, if the
2364 .B \-\-test
2365 option is given, then the exit status will be:
2366 .RS
2367 .TP
2368 0
2369 The array is functioning normally.
2370 .TP
2371 1
2372 The array has at least one failed device.
2373 .TP
2374 2
2375 The array has multiple failed devices such that it is unusable.
2376 .TP
2377 4
2378 There was an error while trying to get information about the device.
2379 .RE
2380
2381 .TP
2382 .B \-\-detail\-platform
2383 Print detail of the platform's RAID capabilities (firmware / hardware
2384 topology). If the metadata is specified with
2385 .B \-e
2386 or
2387 .B \-\-metadata=
2388 then the return status will be:
2389 .RS
2390 .TP
2391 0
2392 metadata successfully enumerated its platform components on this system
2393 .TP
2394 1
2395 metadata is platform independent
2396 .TP
2397 2
2398 metadata failed to find its platform components on this system
2399 .RE
2400
2401 .TP
2402 .B \-\-update\-subarray=
2403 If the device is a container and the argument to \-\-update\-subarray
2404 specifies a subarray in the container, then attempt to update the given
2405 superblock field in the subarray. Similar to updating an array in
2406 "assemble" mode, the field to update is selected by
2407 .B \-U
2408 or
2409 .B \-\-update=
2410 option. The supported options are
2411 .BR name ,
2412 .BR ppl ,
2413 .BR no\-ppl ,
2414 .BR bitmap
2415 and
2416 .BR no\-bitmap .
2417
2418 The
2419 .B name
2420 option updates the subarray name in the metadata; it may not affect the
2421 device node name or the device node symlink until the subarray is
2422 re\-assembled. If updating
2423 .B name
2424 would change the UUID of an active subarray this operation is blocked,
2425 and the command will end in an error.
2426
2427 The
2428 .B ppl
2429 and
2430 .B no\-ppl
2431 options enable and disable PPL in the metadata. Currently supported only for
2432 IMSM subarrays.
2433
2434 The
2435 .B bitmap
2436 and
2437 .B no\-bitmap
2438 options enable and disable write-intent bitmap in the metadata. Currently supported only for
2439 IMSM subarrays.
2440
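As an illustrative sketch (container path and subarray index are
placeholders), PPL could be enabled for one subarray of an IMSM container
with:
.in +5
.B " mdadm /dev/md/imsm0 \-\-update\-subarray=0 \-\-update=ppl"
.in -5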
2441 .TP
2442 .B \-\-examine
2443 The device should be a component of an md array.
2444 .I mdadm
2445 will read the md superblock of the device and display the contents.
2446 If
2447 .B \-\-brief
2448 or
2449 .B \-\-scan
2450 is given, then multiple devices that are components of the one array
2451 are grouped together and reported in a single entry suitable
2452 for inclusion in
2453 .BR mdadm.conf .
2454
2455 Having
2456 .B \-\-scan
2457 without listing any devices will cause all devices listed in the
2458 config file to be examined.
2459
2460 .TP
2461 .BI \-\-dump= directory
2462 If the device contains RAID metadata, a file will be created in the
2463 .I directory
2464 and the metadata will be written to it. The file will be the same
2465 size as the device and have the metadata written in the file at the
2466 same location that it occupies on the device. However, the file will be "sparse" so
2467 that only those blocks containing metadata will be allocated. The
2468 total space used will be small.
2469
2470 The file name used in the
2471 .I directory
2472 will be the base name of the device. Further if any links appear in
2473 .I /dev/disk/by-id
2474 which point to the device, then hard links to the file will be created
2475 in
2476 .I directory
2477 based on these
2478 .I by-id
2479 names.
2480
2481 Multiple devices can be listed and their metadata will all be stored
2482 in the one directory.
2483
2484 .TP
2485 .BI \-\-restore= directory
2486 This is the reverse of
2487 .BR \-\-dump .
2488 .I mdadm
2489 will locate a file in the directory that has a name appropriate for
2490 the given device and will restore metadata from it. Names that match
2491 .I /dev/disk/by-id
2492 names are preferred, however if two of those refer to different files,
2493 .I mdadm
2494 will not choose between them but will abort the operation.
2495
2496 If a file name is given instead of a
2497 .I directory
2498 then
2499 .I mdadm
2500 will restore from that file to a single device, always provided the
2501 size of the file matches that of the device, and the file contains
2502 valid metadata.
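For example (paths are illustrative), metadata could be saved and later
restored with:
.in +5
.B " mdadm \-\-dump=/var/tmp/md\-meta /dev/sda1"
.br
.B " mdadm \-\-restore=/var/tmp/md\-meta /dev/sda1"
.in -5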
2503 .TP
2504 .B \-\-stop
2505 The devices should be active md arrays which will be deactivated, as
2506 long as they are not currently in use.
2507
2508 .TP
2509 .B \-\-run
2510 This will fully activate a partially assembled md array.
2511
2512 .TP
2513 .B \-\-readonly
2514 This will mark an active array as read-only, providing that it is
2515 not currently being used.
2516
2517 .TP
2518 .B \-\-readwrite
2519 This will change a
2520 .B readonly
2521 array back to being read/write.
2522
2523 .TP
2524 .B \-\-scan
2525 For all operations except
2526 .BR \-\-examine ,
2527 .B \-\-scan
2528 will cause the operation to be applied to all arrays listed in
2529 .BR /proc/mdstat .
2530 For
2531 .BR \-\-examine ,
2532 .B \-\-scan
2533 causes all devices listed in the config file to be examined.
2534
2535 .TP
2536 .BR \-b ", " \-\-brief
2537 Be less verbose. This is used with
2538 .B \-\-detail
2539 and
2540 .BR \-\-examine .
2541 Using
2542 .B \-\-brief
2543 with
2544 .B \-\-verbose
2545 gives an intermediate level of verbosity.
2546
2547 .SH MONITOR MODE
2548
2549 .HP 12
2550 Usage:
2551 .B mdadm \-\-monitor
2552 .I options... devices...
2553
2554 .PP
2555 This usage causes
2556 .I mdadm
2557 to periodically poll a number of md arrays and to report on any events
2558 noticed.
2559 .I mdadm
2560 will never exit once it decides that there are arrays to be checked,
2561 so it should normally be run in the background.
2562
2563 As well as reporting events,
2564 .I mdadm
2565 may move a spare drive from one array to another if they are in the
2566 same
2567 .B spare-group
2568 or
2569 .B domain
2570 and if the destination array has a failed drive but no spares.
2571
2572 If any devices are listed on the command line,
2573 .I mdadm
2574 will only monitor those devices. Otherwise all arrays listed in the
2575 configuration file will be monitored. Further, if
2576 .B \-\-scan
2577 is given, then any other md devices that appear in
2578 .B /proc/mdstat
2579 will also be monitored.
2580
2581 The result of monitoring the arrays is the generation of events.
2582 These events are passed to a separate program (if specified) and may
2583 be mailed to a given E-mail address.
2584
2585 When passing events to a program, the program is run once for each event,
2586 and is given 2 or 3 command-line arguments: the first is the
2587 name of the event (see below), the second is the name of the
2588 md device which is affected, and the third is the name of a related
2589 device if relevant (such as a component device that has failed).
2590
2591 If
2592 .B \-\-scan
2593 is given, then a program or an E-mail address must be specified on the
2594 command line or in the config file. If neither are available, then
2595 .I mdadm
2596 will not monitor anything.
2597 Without
2598 .B \-\-scan,
2599 .I mdadm
2600 will continue monitoring as long as something was found to monitor. If
2601 no program or email is given, then each event is reported to
2602 .BR stdout .
2603
2604 The different events are:
2605
2606 .RS 4
2607 .TP
2608 .B DeviceDisappeared
2609 An md array which previously was configured appears to no longer be
2610 configured. (syslog priority: Critical)
2611
2612 If
2613 .I mdadm
2614 was told to monitor an array which is RAID0 or Linear, then it will
2615 report
2616 .B DeviceDisappeared
2617 with the extra information
2618 .BR Wrong-Level .
2619 This is because RAID0 and Linear do not support the device-failed,
2620 hot-spare and resync operations which are monitored.
2621
2622 .TP
2623 .B RebuildStarted
2624 An md array started reconstruction (e.g. recovery, resync, reshape,
2625 check, repair). (syslog priority: Warning)
2626
2627 .TP
2628 .BI Rebuild NN
2629 Where
2630 .I NN
2631 is a two-digit number (e.g. 05, 48). This indicates that the rebuild
2632 has passed that many percent of the total. The events are generated
2633 at a fixed increment starting from 0. The increment size may be specified
2634 with a command-line option (the default is 20). (syslog priority: Warning)
2635
2636 .TP
2637 .B RebuildFinished
2638 An md array that was rebuilding isn't any more, either because it
2639 finished normally or was aborted. (syslog priority: Warning)
2640
2641 .TP
2642 .B Fail
2643 An active component device of an array has been marked as
2644 faulty. (syslog priority: Critical)
2645
2646 .TP
2647 .B FailSpare
2648 A spare component device which was being rebuilt to replace a faulty
2649 device has failed. (syslog priority: Critical)
2650
2651 .TP
2652 .B SpareActive
2653 A spare component device which was being rebuilt to replace a faulty
2654 device has been successfully rebuilt and has been made active.
2655 (syslog priority: Info)
2656
2657 .TP
2658 .B NewArray
2659 A new md array has been detected in the
2660 .B /proc/mdstat
2661 file. (syslog priority: Info)
2662
2663 .TP
2664 .B DegradedArray
2665 A newly noticed array appears to be degraded. This message is not
2666 generated when
2667 .I mdadm
2668 notices a drive failure which causes degradation, but only when
2669 .I mdadm
2670 notices that an array is degraded when it first sees the array.
2671 (syslog priority: Critical)
2672
2673 .TP
2674 .B MoveSpare
2675 A spare drive has been moved from one array in a
2676 .B spare-group
2677 or
2678 .B domain
2679 to another to allow a failed drive to be replaced.
2680 (syslog priority: Info)
2681
2682 .TP
2683 .B SparesMissing
2684 If
2685 .I mdadm
2686 has been told, via the config file, that an array should have a certain
2687 number of spare devices, and
2688 .I mdadm
2689 detects that it has fewer than this number when it first sees the
2690 array, it will report a
2691 .B SparesMissing
2692 message.
2693 (syslog priority: Warning)
2694
2695 .TP
2696 .B TestMessage
2697 An array was found at startup, and the
2698 .B \-\-test
2699 flag was given.
2700 (syslog priority: Info)
2701 .RE
2702
2703 Only
2704 .B Fail,
2705 .B FailSpare,
2706 .B DegradedArray,
2707 .B SparesMissing
2708 and
2709 .B TestMessage
2710 cause Email to be sent. All events cause the program to be run.
2711 The program is run with two or three arguments: the event
2712 name, the array device and possibly a second device.
2713
2714 Each event has an associated array device (e.g.
2715 .BR /dev/md1 )
2716 and possibly a second device. For
2717 .BR Fail ,
2718 .BR FailSpare ,
2719 and
2720 .B SpareActive
2721 the second device is the relevant component device.
2722 For
2723 .B MoveSpare
2724 the second device is the array that the spare was moved from.
2725
2726 For
2727 .I mdadm
2728 to move spares from one array to another, the different arrays need to
2729 be labeled with the same
2730 .B spare-group
2731 or the spares must be allowed to migrate through matching POLICY domains
2732 in the configuration file. The
2733 .B spare-group
2734 name can be any string; it is only necessary that different spare
2735 groups use different names.
2736
2737 When
2738 .I mdadm
2739 detects that an array in a spare group has fewer active
2740 devices than necessary for the complete array, and has no spare
2741 devices, it will look for another array in the same spare group that
2742 has a full complement of working drives and a spare. It will then
2743 attempt to remove the spare from the second array and add it to the
2744 first.
2745 If the removal succeeds but the adding fails, then it is added back to
2746 the original array.
2747
2748 If the spare group for a degraded array is not defined,
2749 .I mdadm
2750 will look at the rules of spare migration specified by POLICY lines in
2751 .B mdadm.conf
2752 and then follow similar steps as above if a matching spare is found.
2753
2754 .SH GROW MODE
2755 The GROW mode is used for changing the size or shape of an active
2756 array.
2757 For this to work, the kernel must support the necessary change.
2758 Various types of growth are being added during 2.6 development.
2759
2760 Currently the supported changes include
2761 .IP \(bu 4
2762 change the "size" attribute for RAID1, RAID4, RAID5 and RAID6.
2763 .IP \(bu 4
2764 increase or decrease the "raid\-devices" attribute of RAID0, RAID1, RAID4,
2765 RAID5, and RAID6.
2766 .IP \(bu 4
2767 change the chunk-size and layout of RAID0, RAID4, RAID5, RAID6 and RAID10.
2768 .IP \(bu 4
2769 convert between RAID1 and RAID5, between RAID5 and RAID6, between
2770 RAID0, RAID4, and RAID5, and between RAID0 and RAID10 (in the near-2 mode).
2771 .IP \(bu 4
2772 add a write-intent bitmap to any array which supports these bitmaps, or
2773 remove a write-intent bitmap from such an array.
2774 .IP \(bu 4
2775 change the array's consistency policy.
2776 .PP
2777
2778 Using GROW on containers is currently supported only for Intel's IMSM
2779 container format. The number of devices in a container can be
2780 increased - which affects all arrays in the container - or an array
2781 in a container can be converted between levels where those levels are
2782 supported by the container, and the conversion is one of those listed
2783 above.
2784
2785 .PP
2786 Notes:
2787 .IP \(bu 4
2788 Intel's native checkpointing doesn't use the
2789 .B --backup-file
2790 option and is transparent to the assembly feature.
2791 .IP \(bu 4
2792 Roaming between Windows(R) and Linux systems for IMSM metadata is not
2793 supported during the grow process.
2794 .IP \(bu 4
2795 When growing a raid0 device, the new component disk size (or external
2796 backup size) should be larger than LCM(old, new) * chunk-size * 2,
2797 where LCM() is the least common multiple of the old and new count of
2798 component disks, and "* 2" comes from the fact that mdadm refuses to
2799 use more than half of a spare device for backup space.
2800
2801 .SS SIZE CHANGES
2802 Normally when an array is built the "size" is taken from the smallest
2803 of the drives. If all the small drives in an array are, one at a
2804 time, removed and replaced with larger drives, then you could have an
2805 array of large drives with only a small amount used. In this
2806 situation, changing the "size" with "GROW" mode will allow the extra
2807 space to start being used. If the size is increased in this way, a
2808 "resync" process will start to make sure the new parts of the array
2809 are synchronised.
2810
2811 Note that when an array changes size, any filesystem that may be
2812 stored in the array will not automatically grow or shrink to use or
2813 vacate the space. The
2814 filesystem will need to be explicitly told to use the extra space
2815 after growing, or to reduce its size
2816 .B prior
2817 to shrinking the array.
2818
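For example (device name illustrative), after all members have been
replaced with larger drives the array could be told to use all available
space with:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-size=max"
.in -5
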
2819 Also the size of an array cannot be changed while it has an active
2820 bitmap. If an array has a bitmap, it must be removed before the size
2821 can be changed. Once the change is complete a new bitmap can be created.
2822
2823 .PP
2824 Note:
2825 .B "--grow --size"
2826 is not yet supported for external file bitmap.
2827
2828 .SS RAID\-DEVICES CHANGES
2829
2830 A RAID1 array can work with any number of devices from 1 upwards
2831 (though 1 is not very useful). There may be times when you want to
2832 increase or decrease the number of active devices. Note that this is
2833 different from hot-add or hot-remove, which changes the number of
2834 inactive devices.
2835
2836 When reducing the number of devices in a RAID1 array, the slots which
2837 are to be removed from the array must already be vacant. That is, the
2838 devices which were in those slots must be failed and removed.
2839
2840 When the number of devices is increased, any hot spares that are
2841 present will be activated immediately.
2842
2843 Changing the number of active devices in a RAID5 or RAID6 is much more
2844 effort. Every block in the array will need to be read and written
2845 back to a new location. From 2.6.17, the Linux Kernel is able to
2846 increase the number of devices in a RAID5 safely, including restarting
2847 an interrupted "reshape". From 2.6.31, the Linux Kernel is able to
2848 increase or decrease the number of devices in a RAID5 or RAID6.
2849
2850 From 2.6.35, the Linux Kernel is able to convert a RAID0 into a RAID4
2851 or RAID5.
2852 .I mdadm
2853 uses this functionality and the ability to add
2854 devices to a RAID4 to allow devices to be added to a RAID0. When
2855 requested to do this,
2856 .I mdadm
2857 will convert the RAID0 to a RAID4, add the necessary disks and make
2858 the reshape happen, and then convert the RAID4 back to RAID0.
2859
2860 When decreasing the number of devices, the size of the array will also
2861 decrease. If there was data in the array, it could get destroyed and
2862 this is not reversible, so you should first shrink the filesystem on
2863 the array to fit within the new size. To help prevent accidents,
2864 .I mdadm
2865 requires that the size of the array be decreased first with
2866 .BR "mdadm --grow --array-size" .
2867 This is a reversible change which simply makes the end of the array
2868 inaccessible. The integrity of any data can then be checked before
2869 the non-reversible reduction in the number of devices is requested.
2870
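As an illustrative sequence (device name and size are placeholders), a
reduction in the number of devices might first limit the array size and
only then request the reshape:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-array\-size=100G"
.br
.B " mdadm \-\-grow /dev/md0 \-\-raid\-devices=3 \-\-backup\-file=/root/md0.backup"
.in -5
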
2871 When relocating the first few stripes on a RAID5 or RAID6, it is not
2872 possible to keep the data on disk completely consistent and
2873 crash-proof. To provide the required safety, mdadm disables writes to
2874 the array while this "critical section" is reshaped, and takes a
2875 backup of the data that is in that section. For grows, this backup may be
2876 stored in any spare devices that the array has, however it can also be
2877 stored in a separate file specified with the
2878 .B \-\-backup\-file
2879 option, and is required to be specified for shrinks, RAID level
2880 changes and layout changes. If this option is used, and the system
2881 does crash during the critical period, the same file must be passed to
2882 .B \-\-assemble
2883 to restore the backup and reassemble the array. When shrinking rather
2884 than growing the array, the reshape is done from the end towards the
2885 beginning, so the "critical section" is at the end of the reshape.
2886
2887 .SS LEVEL CHANGES
2888
2889 Changing the RAID level of any array happens instantaneously. However
2890 in the RAID5 to RAID6 case this requires a non-standard layout of the
2891 RAID6 data, and in the RAID6 to RAID5 case that non-standard layout is
2892 required before the change can be accomplished. So while the level
2893 change is instant, the accompanying layout change can take quite a
2894 long time. A
2895 .B \-\-backup\-file
2896 is required. If the array is not simultaneously being grown or
2897 shrunk, so that the array size will remain the same - for example,
2898 reshaping a 3-drive RAID5 into a 4-drive RAID6 - the backup file will
2899 be used not just for a "critical section" but throughout the reshape
2900 operation, as described below under LAYOUT CHANGES.
2901
2902 .SS CHUNK-SIZE AND LAYOUT CHANGES
2903
2904 Changing the chunk-size or layout without also changing the number of
2905 devices at the same time will involve re-writing all blocks in-place.
2906 To ensure against data loss in the case of a crash, a
2907 .B --backup-file
2908 must be provided for these changes. Small sections of the array will
2909 be copied to the backup file while they are being rearranged. This
2910 means that all the data is copied twice, once to the backup and once
2911 to the new layout on the array, so this type of reshape will go very
2912 slowly.
2913
2914 If the reshape is interrupted for any reason, this backup file must be
2915 made available to
2916 .B "mdadm --assemble"
2917 so the array can be reassembled. Consequently the file cannot be
2918 stored on the device being reshaped.
2919
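For example (device name and values illustrative), the chunk size of an
array could be changed with:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-chunk=128 \-\-backup\-file=/root/md0.backup"
.in -5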
2920
2921 .SS BITMAP CHANGES
2922
2923 A write-intent bitmap can be added to, or removed from, an active
2924 array. Either internal bitmaps, or bitmaps stored in a separate file,
2925 can be added. Note that if you add a bitmap stored in a file which is
2926 in a filesystem that is on the RAID array being affected, the system
2927 will deadlock. The bitmap must be on a separate filesystem.
2928
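For example (device name illustrative), an internal bitmap can be added
or removed with:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-bitmap=internal"
.br
.B " mdadm \-\-grow /dev/md0 \-\-bitmap=none"
.in -5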
2929 .SS CONSISTENCY POLICY CHANGES
2930
2931 The consistency policy of an active array can be changed by using the
2932 .B \-\-consistency\-policy
2933 option in Grow mode. Currently this works only for the
2934 .B ppl
2935 and
2936 .B resync
2937 policies and allows enabling or disabling the RAID5 Partial Parity Log (PPL).
2938
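For example (device name illustrative), PPL could be enabled on a RAID5
array with:
.in +5
.B " mdadm \-\-grow /dev/md0 \-\-consistency\-policy=ppl"
.in -5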
2939 .SH INCREMENTAL MODE
2940
2941 .HP 12
2942 Usage:
2943 .B mdadm \-\-incremental
2944 .RB [ \-\-run ]
2945 .RB [ \-\-quiet ]
2946 .I component-device
2947 .RI [ optional-aliases-for-device ]
2948 .HP 12
2949 Usage:
2950 .B mdadm \-\-incremental \-\-fail
2951 .I component-device
2952 .HP 12
2953 Usage:
2954 .B mdadm \-\-incremental \-\-rebuild\-map
2955 .HP 12
2956 Usage:
2957 .B mdadm \-\-incremental \-\-run \-\-scan
2958
2959 .PP
2960 This mode is designed to be used in conjunction with a device
2961 discovery system. As devices are found in a system, they can be
2962 passed to
2963 .B "mdadm \-\-incremental"
2964 to be conditionally added to an appropriate array.
2965
2966 Conversely, it can also be used with the
2967 .B \-\-fail
2968 flag to do just the opposite and find whatever array a particular device
2969 is part of and remove the device from that array.
2970
2971 If the device passed is a
2972 .B CONTAINER
2973 device created by a previous call to
2974 .IR mdadm ,
2975 then rather than trying to add that device to an array, all the arrays
2976 described by the metadata of the container will be started.
2977
2978 .I mdadm
2979 performs a number of tests to determine if the device is part of an
2980 array, and which array it should be part of. If an appropriate array
2981 is found, or can be created,
2982 .I mdadm
2983 adds the device to the array and conditionally starts the array.
2984
2985 Note that
2986 .I mdadm
2987 will normally only add devices to an array which were previously working
2988 (active or spare) parts of that array. The support for automatic
2989 inclusion of a new drive as a spare in some array requires
2990 configuration through a POLICY line in the config file.
2991
2992 The tests that
2993 .I mdadm
2994 makes are as follows:
2995 .IP +
2996 Is the device permitted by
2997 .BR mdadm.conf ?
2998 That is, is it listed in a
2999 .B DEVICES
3000 line in that file. If
3001 .B DEVICES
3002 is absent then the default is to allow any device. Similarly if
3003 .B DEVICES
3004 contains the special word
3005 .B partitions
3006 then any device is allowed. Otherwise the device name given to
3007 .IR mdadm ,
3008 or one of the aliases given, or an alias found in the filesystem,
3009 must match one of the names or patterns in a
3010 .B DEVICES
3011 line.
3012
3013 This is the only context where the aliases are used. They are
3014 usually provided by
3015 .I udev
3016 rules mentioning
3017 .BR $env{DEVLINKS} .
3018
3019 .IP +
3020 Does the device have a valid md superblock? If a specific metadata
3021 version is requested with
3022 .B \-\-metadata
3023 or
3024 .B \-e
3025 then only that style of metadata is accepted, otherwise
3026 .I mdadm
3027 finds any known version of metadata. If no
3028 .I md
3029 metadata is found, the device may still be added to an array
3030 as a spare if POLICY allows.
3031
3032 .ig
3033 .IP +
3034 Does the metadata match an expected array?
3035 The metadata can match in two ways. Either there is an array listed
3036 in
3037 .B mdadm.conf
3038 which identifies the array (either by UUID, by name, by device list,
3039 or by minor-number), or the array was created with a
3040 .B homehost
3041 specified and that
3042 .B homehost
3043 matches the one in
3044 .B mdadm.conf
3045 or on the command line.
3046 If
3047 .I mdadm
3048 is not able to positively identify the array as belonging to the
3049 current host, the device will be rejected.
3050 ..
3051
3052 .PP
3053 .I mdadm
3054 keeps a list of arrays that it has partially assembled in
3055 .BR {MAP_PATH} .
3056 If no array exists which matches
3057 the metadata on the new device,
3058 .I mdadm
3059 must choose a device name and unit number. It does this based on any
3060 name given in
3061 .B mdadm.conf
3062 or any name information stored in the metadata. If this name
3063 suggests a unit number, that number will be used, otherwise a free
3064 unit number will be chosen. Normally
3065 .I mdadm
3066 will prefer to create a partitionable array, however if the
3067 .B CREATE
3068 line in
3069 .B mdadm.conf
3070 suggests that a non-partitionable array is preferred, that will be
3071 honoured.
3072
3073 If the array is not found in the config file and its metadata does not
3074 identify it as belonging to the "homehost", then
3075 .I mdadm
3076 will choose a name for the array which is certain not to conflict with
3077 any array which does belong to this host. It does this by adding an
3078 underscore and a small number to the name preferred by the metadata.
3079
3080 Once an appropriate array is found or created and the device is added,
3081 .I mdadm
3082 must decide if the array is ready to be started. It will
3083 normally compare the number of available (non-spare) devices to the
3084 number of devices that the metadata suggests need to be active. If
3085 there are at least that many, the array will be started. This means
3086 that if any devices are missing the array will not be restarted.
3087
3088 As an alternative,
3089 .B \-\-run
3090 may be passed to
3091 .I mdadm
3092 in which case the array will be run as soon as there are enough
3093 devices present for the data to be accessible. For a RAID1, that
3094 means one device will start the array. For a clean RAID5, the array
3095 will be started as soon as all but one drive is present.
3096
3097 Note that neither of these approaches is really ideal. If it can
3098 be known that all device discovery has completed, then
3099 .br
3100 .B " mdadm \-IRs"
3101 .br
3102 can be run which will try to start all arrays that are being
3103 incrementally assembled. They are started in "read-auto" mode in
3104 which they are read-only until the first write request. This means
3105 that no metadata updates are made and no attempt at resync or recovery
3106 happens. Further devices that are found before the first write can
3107 still be added safely.
3108
3109 .SH ENVIRONMENT
3110 This section describes environment variables that affect how mdadm
3111 operates.
3112
3113 .TP
3114 .B MDADM_NO_MDMON
3115 Setting this value to 1 will prevent mdadm from automatically launching
3116 mdmon. This variable is intended primarily for debugging mdadm/mdmon.
3117
3118 .TP
3119 .B MDADM_NO_UDEV
3120 Normally,
3121 .I mdadm
3122 does not create any device nodes in /dev, but leaves that task to
3123 .IR udev .
3124 If
3125 .I udev
3126 appears not to be configured, or if this environment variable is set
3127 to '1', then
3128 .I mdadm
3129 will create any devices that are needed.
3130
3131 .TP
3132 .B MDADM_NO_SYSTEMCTL
3133 If
3134 .I mdadm
3135 detects that
3136 .I systemd
3137 is in use it will normally request
3138 .I systemd
3139 to start various background tasks (particularly
3140 .IR mdmon )
3141 rather than forking and running them in the background. This can be
3142 suppressed by setting
3143 .BR MDADM_NO_SYSTEMCTL=1 .
3144
3145 .TP
3146 .B IMSM_NO_PLATFORM
3147 A key value of IMSM metadata is that it allows interoperability with
3148 boot ROMs on Intel platforms, and with other major operating systems.
3149 Consequently,
3150 .I mdadm
3151 will only allow an IMSM array to be created or modified if it detects
3152 that it is running on an Intel platform which supports IMSM, and
3153 supports the particular configuration of IMSM that is being requested
3154 (some functionality requires newer OROM support).
3155
3156 These checks can be suppressed by setting IMSM_NO_PLATFORM=1 in the
3157 environment. This can be useful for testing or for disaster
3158 recovery. You should be aware that interoperability may be
3159 compromised by setting this value.
3160
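For example (device names illustrative), an IMSM container could be
created on a non-Intel test machine with:
.in +5
.B " IMSM_NO_PLATFORM=1 mdadm \-\-create /dev/md/imsm0 \-e imsm \-n 2 /dev/sda /dev/sdb"
.in -5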
3161 .TP
3162 .B MDADM_GROW_ALLOW_OLD
3163 If an array is stopped while it is performing a reshape and that
3164 reshape was making use of a backup file, then when the array is
3165 re-assembled
3166 .I mdadm
3167 will sometimes complain that the backup file is too old. If this
3168 happens and you are certain it is the right backup file, you can
3169 over-ride this check by setting
3170 .B MDADM_GROW_ALLOW_OLD=1
3171 in the environment.
3172
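For example (paths are illustrative), an interrupted reshape could be
resumed despite the age check with:
.in +5
.B " MDADM_GROW_ALLOW_OLD=1 mdadm \-\-assemble /dev/md0 \-\-backup\-file=/root/md0.backup /dev/sd[ab]1"
.in -5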
3173 .TP
3174 .B MDADM_CONF_AUTO
3175 Any string given in this variable is added to the start of the
3176 .B AUTO
3177 line in the config file, or treated as the whole
3178 .B AUTO
3179 line if none is given. It can be used to disable certain metadata
3180 types when
3181 .I mdadm
3182 is called from a boot script. For example
3183 .br
3184 .B " export MDADM_CONF_AUTO='-ddf -imsm'
3185 .br
3186 will make sure that
3187 .I mdadm
3188 does not automatically assemble any DDF or
3189 IMSM arrays that are found. This can be useful on systems configured
3190 to manage such arrays with
3191 .BR dmraid .
3192
3193
3194 .SH EXAMPLES
3195
3196 .B " mdadm \-\-query /dev/name-of-device"
3197 .br
3198 This will find out if a given device is a RAID array, or is part of
3199 one, and will provide brief information about the device.
3200
3201 .B " mdadm \-\-assemble \-\-scan"
3202 .br
3203 This will assemble and start all arrays listed in the standard config
3204 file. This command will typically go in a system startup file.
3205
3206 .B " mdadm \-\-stop \-\-scan"
3207 .br
3208 This will shut down all arrays that can be shut down (i.e. are not
3209 currently in use). This will typically go in a system shutdown script.
3210
3211 .B " mdadm \-\-follow \-\-scan \-\-delay=120"
3212 .br
3213 If (and only if) there is an Email address or program given in the
3214 standard config file, then
3215 monitor the status of all arrays listed in that file by
3216 polling them every 2 minutes.
3217
3218 .B " mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/hd[ac]1"
3219 .br
3220 Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
3221
3222 .br
3223 .B " echo 'DEVICE /dev/hd*[0\-9] /dev/sd*[0\-9]' > mdadm.conf"
3224 .br
3225 .B " mdadm \-\-detail \-\-scan >> mdadm.conf"
3226 .br
3227 This will create a prototype config file that describes currently
3228 active arrays that are known to be made from partitions of IDE or SCSI drives.
3229 This file should be reviewed before being used as it may
3230 contain unwanted detail.
3231
3232 .B " echo 'DEVICE /dev/hd[a\-z] /dev/sd*[a\-z]' > mdadm.conf"
3233 .br
3234 .B " mdadm \-\-examine \-\-scan \-\-config=mdadm.conf >> mdadm.conf"
3235 .br
3236 This will find arrays which could be assembled from existing IDE and
3237 SCSI whole drives (not partitions), and store the information in the
3238 format of a config file.
3239 This file is very likely to contain unwanted detail, particularly
3240 the
3241 .B devices=
3242 entries. It should be reviewed and edited before being used as an
3243 actual config file.
3244
3245 .B " mdadm \-\-examine \-\-brief \-\-scan \-\-config=partitions"
3246 .br
3247 .B " mdadm \-Ebsc partitions"
3248 .br
3249 Create a list of devices by reading
3250 .BR /proc/partitions ,
3251 scan these for RAID superblocks, and print out a brief listing of all
3252 that were found.
3253
3254 .B " mdadm \-Ac partitions \-m 0 /dev/md0"
3255 .br
3256 Scan all partitions and devices listed in
3257 .BR /proc/partitions
3258 and assemble
3259 .B /dev/md0
3260 out of all such devices with a RAID superblock with a minor number of 0.
3261
3262 .B " mdadm \-\-monitor \-\-scan \-\-daemonise > /run/mdadm/mon.pid"
3263 .br
3264 If the config file contains a mail address or alert program, run mdadm in
3265 the background in monitor mode, monitoring all md devices. Also write
3266 the PID of the mdadm daemon to
3267 .BR /run/mdadm/mon.pid .
3268
3269 .B " mdadm \-Iq /dev/somedevice"
3270 .br
3271 Try to incorporate a newly discovered device into some array as
3272 appropriate.
3273
3274 .B " mdadm \-\-incremental \-\-rebuild\-map \-\-run \-\-scan"
3275 .br
3276 Rebuild the array map from any current arrays, and then start any that
3277 can be started.
3278
3279 .B " mdadm /dev/md4 --fail detached --remove detached"
3280 .br
3281 Any devices which are components of /dev/md4 will be marked as faulty
3282 and then remove from the array.
3283
3284 .B " mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4"
3285 .br
3286 The array
3287 .B /dev/md4
3288 which is currently a RAID5 array will be converted to RAID6. There
3289 should normally already be a spare drive attached to the array as a
3290 RAID6 needs one more drive than a matching RAID5.
3291
3292 .B " mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]"
3293 .br
3294 Create a DDF array over 6 devices.
3295
3296 .B " mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf"
3297 .br
3298 Create a RAID5 array over any 3 devices in the given DDF set. Use
3299 only 30 gigabytes of each device.
3300
3301 .B " mdadm -A /dev/md/ddf1 /dev/sd[a-f]"
3302 .br
3303 Assemble a pre-exist ddf array.
3304
3305 .B " mdadm -I /dev/md/ddf1"
3306 .br
3307 Assemble all arrays contained in the ddf array, assigning names as
3308 appropriate.
3309
3310 .B " mdadm \-\-create \-\-help"
3311 .br
3312 Provide help about the Create mode.
3313
3314 .B " mdadm \-\-config \-\-help"
3315 .br
3316 Provide help about the format of the config file.
3317
3318 .B " mdadm \-\-help"
3319 .br
3320 Provide general help.
3321
3322 .SH FILES
3323
3324 .SS /proc/mdstat
3325
3326 If you're using the
3327 .B /proc
3328 filesystem,
3329 .B /proc/mdstat
3330 lists all active md devices with information about them.
3331 .I mdadm
3332 uses this to find arrays when
3333 .B \-\-scan
3334 is given in Misc mode, and to monitor array reconstruction
3335 in Monitor mode.
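The file can also be inspected directly, for example with
.br
.B " cat /proc/mdstat"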
3336
3337 .SS /etc/mdadm.conf
3338
3339 The config file lists which devices may be scanned to see if
3340 they contain an MD superblock, and gives identifying information
3341 (e.g. UUID) about known MD arrays. See
3342 .BR mdadm.conf (5)
3343 for more details.
3344
3345 .SS /etc/mdadm.conf.d
3346
3347 A directory containing configuration files which are read in lexical
3348 order.
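For example, site\-specific ARRAY lines could be kept in a drop\-in file
rather than in the main config file (an illustrative sketch; the file
name and array identifiers are placeholders):
.br
.B " echo 'ARRAY /dev/md/home metadata=1.2 name=myhost:home' > /etc/mdadm.conf.d/10\-home.conf"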
3349
3350 .SS {MAP_PATH}
3351 When
3352 .B \-\-incremental
3353 mode is used, this file gets a list of arrays currently being assembled.
3354
3355 .SH DEVICE NAMES
3356
3357 .I mdadm
3358 understands two sorts of names for array devices.
3359
3360 The first is the so-called 'standard' format name, which matches the
3361 names used by the kernel and which appear in
3362 .IR /proc/mdstat .
3363
3364 The second sort can be freely chosen, but must reside in
3365 .IR /dev/md/ .
3366 When giving a device name to
3367 .I mdadm
3368 to create or assemble an array, either a full path name such as
3369 .I /dev/md0
3370 or
3371 .I /dev/md/home
3372 can be given, or just the suffix of the second sort of name, such as
3373 .IR home .
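For example, the following two commands refer to the same array (the
component device names are placeholders):
.br
.B " mdadm \-\-assemble /dev/md/home /dev/sda1 /dev/sdb1"
.br
.B " mdadm \-\-assemble home /dev/sda1 /dev/sdb1"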
3375
3376 When
3377 .I mdadm
3378 chooses device names during auto-assembly or incremental assembly, it
3379 will sometimes add a small sequence number to the end of the name to
3380 avoid conflicts between multiple arrays that have the same name. If
3381 .I mdadm
3382 can reasonably determine that the array really is meant for this host,
3383 either by a hostname in the metadata, or by the presence of the array
3384 in
3385 .BR mdadm.conf ,
3386 then it will leave off the suffix if possible.
3387 Also, if the homehost is specified as
3388 .B <ignore>
3389 .I mdadm
3390 will only use a suffix if a different array of the same name already
3391 exists or is listed in the config file.
3392
3393 The standard names for non-partitioned arrays (the only sort of md
3394 array available in 2.4 and earlier) are of the form
3395 .IP
3396 .RB /dev/md NN
3397 .PP
3398 where NN is a number.
3399 The standard names for partitionable arrays (as available from 2.6
3400 onwards) are of the form:
3401 .IP
3402 .RB /dev/md_d NN
3403 .PP
3404 Partition numbers should be indicated by adding "pMM" to these, thus "/dev/md_d1p2".
3405 .PP
3406 From kernel version 2.6.28 the "non-partitioned array" can actually
3407 be partitioned. So the "md_d\fBNN\fP"
3408 names are no longer needed, and
3409 partitions such as "/dev/md\fBNN\fPp\fBXX\fP"
3410 are possible.
3411 .PP
3412 From kernel version 2.6.29 standard names can be non-numeric following
3413 the form:
3414 .IP
3415 .RB /dev/md_ XXX
3416 .PP
3417 where
3418 .B XXX
3419 is any string. These names have been supported by
3420 .I mdadm
3421 since version 3.3, provided they are enabled in
3422 .IR mdadm.conf .
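This is typically enabled with the
.B names=yes
option on the
.B CREATE
line of the config file, for example:
.br
.B " CREATE names=yes"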
3423
3424 .SH NOTE
3425 .I mdadm
3426 was previously known as
3427 .IR mdctl .
3428
3429 .SH SEE ALSO
3430 For further information on mdadm usage, MD and the various levels of
3431 RAID, see:
3432 .IP
3433 .B https://raid.wiki.kernel.org/
3434 .PP
3435 (based upon Jakob \(/Ostergaard's Software\-RAID.HOWTO)
3436 .PP
3437 The latest version of
3438 .I mdadm
3439 should always be available from
3440 .IP
3441 .B https://www.kernel.org/pub/linux/utils/raid/mdadm/
3442 .PP
3443 Related man pages:
3444 .PP
3445 .IR mdmon (8),
3446 .IR mdadm.conf (5),
3447 .IR md (4).