.\" -*- nroff -*-
.\" Copyright Neil Brown and others.
.\" This program is free software; you can redistribute it and/or modify
.\" it under the terms of the GNU General Public License as published by
.\" the Free Software Foundation; either version 2 of the License, or
.\" (at your option) any later version.
.\" See file COPYING in distribution for details.
.TH MDADM 8 "" v4.1-rc2
.SH NAME
mdadm \- manage MD devices
.I aka
Linux Software RAID

.SH SYNOPSIS

.BI mdadm " [mode] <raiddevice> [options] <component-devices>"

.SH DESCRIPTION
RAID devices are virtual devices created from two or more
real block devices. This allows multiple devices (typically disk
drives or partitions thereof) to be combined into a single device to
hold (for example) a single filesystem.
Some RAID levels include redundancy and so can survive some degree of
device failure.

Linux Software RAID devices are implemented through the md (Multiple
Devices) device driver.

Currently, Linux supports
.B LINEAR
md devices,
.B RAID0
(striping),
.B RAID1
(mirroring),
.BR RAID4 ,
.BR RAID5 ,
.BR RAID6 ,
.BR RAID10 ,
.BR MULTIPATH ,
.BR FAULTY ,
and
.BR CONTAINER .

.B MULTIPATH
is not a Software RAID mechanism, but does involve
multiple devices:
each device is a path to one common physical storage device.
New installations should not use md/multipath as it is not well
supported and has no ongoing development. Use the Device Mapper based
multipath-tools instead.

.B FAULTY
is also not true RAID, and it only involves one device. It
provides a layer over a true device that can be used to inject faults.

.B CONTAINER
is different again. A
.B CONTAINER
is a collection of devices that are
managed as a set. This is similar to the set of devices connected to
a hardware RAID controller. The set of devices may contain a number
of different RAID arrays each utilising some (or all) of the blocks from a
number of the devices in the set. For example, two devices in a 5-device set
might form a RAID1 using the whole devices. The remaining three might
have a RAID5 over the first half of each device, and a RAID0 over the
second half.

With a
.BR CONTAINER ,
there is one set of metadata that describes all of
the arrays in the container. So when
.I mdadm
creates a
.B CONTAINER
device, the device just represents the metadata. Other normal arrays (RAID1
etc) can be created inside the container.

.SH MODES
mdadm has several major modes of operation:
.TP
.B Assemble
Assemble the components of a previously created
array into an active array. Components can be explicitly given
or can be searched for.
.I mdadm
checks that the components
do form a bona fide array, and can, on request, fiddle superblock
information so as to assemble a faulty array.
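For example, a previously created array can be assembled either from
explicitly listed components or by scanning (device names here are
illustrative only):

```shell
# Assemble a known array from explicitly named components.
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Scan the config file and available devices, and assemble
# all arrays that belong to this host.
mdadm --assemble --scan
```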

.TP
.B Build
Build an array that doesn't have per-device metadata (superblocks). For these
sorts of arrays,
.I mdadm
cannot differentiate between initial creation and subsequent assembly
of an array. It also cannot perform any checks that appropriate
components have been requested. Because of this, the
.B Build
mode should only be used together with a complete understanding of
what you are doing.

.TP
.B Create
Create a new array with per-device metadata (superblocks).
Appropriate metadata is written to each device, and then the array
comprising those devices is activated. A 'resync' process is started
to make sure that the array is consistent (e.g. both sides of a mirror
contain the same data) but the content of the device is left otherwise
untouched.
The array can be used as soon as it has been created. There is no
need to wait for the initial resync to finish.
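A typical creation looks like the following (device names are examples
only):

```shell
# Create a 2-device RAID1 mirror; the initial resync runs in the
# background and the array is usable immediately.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```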

.TP
.B "Follow or Monitor"
Monitor one or more md devices and act on any state changes. This is
only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as
only these have interesting state. RAID0 or Linear never have
missing, spare, or failed drives, so there is nothing to monitor.

.TP
.B "Grow"
Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
of component devices and changing the number of active devices in
Linear and RAID levels 0/1/4/5/6,
changing the RAID level between 0, 1, 5, and 6, and between 0 and 10,
changing the chunk size and layout for RAID 0/4/5/6/10, as well as adding or
removing a write-intent bitmap and changing the array's consistency policy.

.TP
.B "Incremental Assembly"
Add a single device to an appropriate array. If the addition of the
device makes the array runnable, the array will be started.
This provides a convenient interface to a
.I hot-plug
system. As each device is detected,
.I mdadm
has a chance to include it in some array as appropriate.
Optionally, when the
.I \-\-fail
flag is passed in, the device will be removed from any active array
instead of being added.

If a
.B CONTAINER
is passed to
.I mdadm
in this mode, then any arrays within that container will be assembled
and started.
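This mode is typically invoked from a udev rule as devices appear and
disappear; for example (device name illustrative):

```shell
# Offer a newly detected device to whichever array it belongs to,
# starting the array if it becomes runnable.
mdadm --incremental /dev/sdc1

# On device removal, fail and remove it from any active array.
mdadm --incremental --fail /dev/sdc1
```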

.TP
.B Manage
This is for doing things to specific components of an array such as
adding new spares and removing faulty devices.

.TP
.B Misc
This is an 'everything else' mode that supports operations on active
arrays, operations on component devices such as erasing old superblocks, and
information gathering operations.
.\"This mode allows operations on independent devices such as examine MD
.\"superblocks, erasing old superblocks and stopping active arrays.

.TP
.B Auto-detect
This mode does not act on a specific device or array, but rather it
requests the Linux Kernel to activate any auto-detected arrays.
.SH OPTIONS

.SH Options for selecting a mode are:

.TP
.BR \-A ", " \-\-assemble
Assemble a pre-existing array.

.TP
.BR \-B ", " \-\-build
Build a legacy array without superblocks.

.TP
.BR \-C ", " \-\-create
Create a new array.

.TP
.BR \-F ", " \-\-follow ", " \-\-monitor
Select
.B Monitor
mode.

.TP
.BR \-G ", " \-\-grow
Change the size or shape of an active array.

.TP
.BR \-I ", " \-\-incremental
Add/remove a single device to/from an appropriate array, and possibly start the array.

.TP
.B \-\-auto-detect
Request that the kernel starts any auto-detected arrays. This can only
work if
.I md
is compiled into the kernel \(em not if it is a module.
Arrays can be auto-detected by the kernel if all the components are in
primary MS-DOS partitions with partition type
.BR FD ,
and all use v0.90 metadata.
In-kernel autodetect is not recommended for new installations. Using
.I mdadm
to detect and assemble arrays \(em possibly in an
.I initrd
\(em is substantially more flexible and should be preferred.

.P
If a device is given before any options, or if the first option is
one of
.BR \-\-add ,
.BR \-\-re\-add ,
.BR \-\-add\-spare ,
.BR \-\-fail ,
.BR \-\-remove ,
or
.BR \-\-replace ,
then the MANAGE mode is assumed.
Anything other than these will cause the
.B Misc
mode to be assumed.

.SH Options that are not mode-specific are:

.TP
.BR \-h ", " \-\-help
Display general help message or, after one of the above options, a
mode-specific help message.

.TP
.B \-\-help\-options
Display more detailed help about command line parsing and some commonly
used options.

.TP
.BR \-V ", " \-\-version
Print version information for mdadm.

.TP
.BR \-v ", " \-\-verbose
Be more verbose about what is happening. This can be used twice to be
extra-verbose.
The extra verbosity currently only affects
.B \-\-detail \-\-scan
and
.BR "\-\-examine \-\-scan" .

.TP
.BR \-q ", " \-\-quiet
Avoid printing purely informative messages. With this,
.I mdadm
will be silent unless there is something really important to report.

.TP
.BR \-f ", " \-\-force
Be more forceful about certain operations. See the various modes for
the exact meaning of this option in different contexts.

.TP
.BR \-c ", " \-\-config=
Specify the config file or directory. Default is to use
.B /etc/mdadm.conf
and
.BR /etc/mdadm.conf.d ,
or if those are missing then
.B /etc/mdadm/mdadm.conf
and
.BR /etc/mdadm/mdadm.conf.d .
If the config file given is
.B "partitions"
then nothing will be read, but
.I mdadm
will act as though the config file contained exactly
.br
.B "    DEVICE partitions containers"
.br
and will read
.B /proc/partitions
to find a list of devices to scan, and
.B /proc/mdstat
to find a list of containers to examine.
If the word
.B "none"
is given for the config file, then
.I mdadm
will act as though the config file were empty.

If the name given is of a directory, then
.I mdadm
will collect all the files contained in the directory with a name ending
in
.BR .conf ,
sort them lexically, and process all of those files as config files.
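For example, the special
.B partitions
config name can be used to survey all block devices regardless of any
installed configuration:

```shell
# Ignore any config file and scan every device listed in
# /proc/partitions for RAID superblocks.
mdadm --examine --scan --config=partitions
```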

.TP
.BR \-s ", " \-\-scan
Scan config file or
.B /proc/mdstat
for missing information.
In general, this option gives
.I mdadm
permission to get any missing information (like component devices,
array devices, array identities, and alert destination) from the
configuration file (see previous option);
one exception is MISC mode when using
.B \-\-detail
or
.BR \-\-stop ,
in which case
.B \-\-scan
says to get a list of array devices from
.BR /proc/mdstat .

.TP
.BR \-e ", " \-\-metadata=
Declare the style of RAID metadata (superblock) to be used. The
default is {DEFAULT_METADATA} for
.BR \-\-create ,
and to guess for other operations.
The default can be overridden by setting the
.B metadata
value for the
.B CREATE
keyword in
.BR mdadm.conf .

Options are:
.RS
.ie '{DEFAULT_METADATA}'0.90'
.IP "0, 0.90, default"
.el
.IP "0, 0.90"
Use the original 0.90 format superblock. This format limits arrays to
28 component devices and limits component devices of levels 1 and
greater to 2 terabytes. It is also possible for there to be confusion
about whether the superblock applies to a whole device or just the
last partition, if that partition starts on a 64K boundary.
.ie '{DEFAULT_METADATA}'0.90'
.IP "1, 1.0, 1.1, 1.2"
.el
.IP "1, 1.0, 1.1, 1.2 default"
Use the new version-1 format superblock. This has fewer restrictions.
It can easily be moved between hosts with different endian-ness, and a
recovery operation can be checkpointed and restarted. The different
sub-versions store the superblock at different locations on the
device, either at the end (for 1.0), at the start (for 1.1) or 4K from
the start (for 1.2). "1" is equivalent to "1.2" (the commonly
preferred 1.x format).
'if '{DEFAULT_METADATA}'1.2' "default" is equivalent to "1.2".
.IP ddf
Use the "Industry Standard" DDF (Disk Data Format) format defined by
SNIA.
When creating a DDF array a
.B CONTAINER
will be created, and normal arrays can be created in that container.
.IP imsm
Use the Intel(R) Matrix Storage Manager metadata format. This creates a
.B CONTAINER
which is managed in a similar manner to DDF, and is supported by an
option-rom on some platforms:
.IP
.B https://www.intel.com/content/www/us/en/support/products/122484/memory-and-storage/ssd-software/intel-virtual-raid-on-cpu-intel-vroc.html
.PP
.RE
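For example (device names are illustrative):

```shell
# Explicitly request version-1.2 metadata at creation time.
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 \
    /dev/sda1 /dev/sdb1

# Create an IMSM container; member arrays are then created inside it.
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 \
    /dev/sda /dev/sdb
```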

.TP
.B \-\-homehost=
This will override any
.B HOMEHOST
setting in the config file and provides the identity of the host which
should be considered the home for any arrays.

When creating an array, the
.B homehost
will be recorded in the metadata. For version-1 superblocks, it will
be prefixed to the array name. For version-0.90 superblocks, part of
the SHA1 hash of the hostname will be stored in the later half of the
UUID.

When reporting information about an array, any array which is tagged
for the given homehost will be reported as such.

When using Auto-Assemble, only arrays tagged for the given homehost
will be allowed to use 'local' names (i.e. not ending in '_' followed
by a digit string). See below under
.BR "Auto Assembly" .

The special name "\fBany\fP" can be used as a wild card. If an array
is created with
.B --homehost=any
then the name "\fBany\fP" will be stored in the array and it can be
assembled in the same way on any host. If an array is assembled with
this option, then the homehost recorded on the array will be ignored.

.TP
.B \-\-prefer=
When
.I mdadm
needs to print the name for a device it normally finds the name in
.B /dev
which refers to the device and is shortest. When a path component is
given with
.B \-\-prefer
.I mdadm
will prefer a longer name if it contains that component. For example
.B \-\-prefer=by-uuid
will prefer a name in a subdirectory of
.B /dev
called
.BR by-uuid .

This functionality is currently only provided by
.B \-\-detail
and
.BR \-\-monitor .

.TP
.B \-\-home\-cluster=
specifies the cluster name for the md device. The md device can be assembled
only on the cluster which matches the name specified. If this option is not
provided, mdadm tries to detect the cluster name automatically.

.SH For create, build, or grow:

.TP
.BR \-n ", " \-\-raid\-devices=
Specify the number of active devices in the array. This, plus the
number of spare devices (see below) must equal the number of
.I component-devices
(including "\fBmissing\fP" devices)
that are listed on the command line for
.BR \-\-create .
Setting a value of 1 is probably
a mistake and so requires that
.B \-\-force
be specified first. A value of 1 will then be allowed for linear,
multipath, RAID0 and RAID1. It is never allowed for RAID4, RAID5 or RAID6.
.br
This number can only be changed using
.B \-\-grow
for RAID1, RAID4, RAID5 and RAID6 arrays, and only on kernels which provide
the necessary support.

.TP
.BR \-x ", " \-\-spare\-devices=
Specify the number of spare (eXtra) devices in the initial array.
Spares can also be added
and removed later. The number of component devices listed
on the command line must equal the number of RAID devices plus the
number of spare devices.

.TP
.BR \-z ", " \-\-size=
Amount (in Kilobytes) of space to use from each drive in RAID levels 1/4/5/6.
This must be a multiple of the chunk size, and must leave about 128Kb
of space at the end of the drive for the RAID superblock.
If this is not specified
(as it normally is not) the smallest drive (or partition) sets the
size, though if there is a variance among the drives of greater than 1%, a warning is
issued.

A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
Megabytes, Gigabytes or Terabytes respectively.

Sometimes a replacement drive can be a little smaller than the
original drives though this should be minimised by IDEMA standards.
Such a replacement drive will be rejected by
.IR md .
To guard against this it can be useful to set the initial size
slightly smaller than the smaller device with the aim that it will
still be larger than any replacement.

This value can be set with
.B \-\-grow
for RAID level 1/4/5/6 though
DDF arrays may not be able to support this.
If the array was created with a size smaller than the currently
active drives, the extra space can be accessed using
.BR \-\-grow .
The size can be given as
.B max
which means to choose the largest size that fits on all current drives.

Before reducing the size of the array (with
.BR "\-\-grow \-\-size=" )
you should make sure that space isn't needed. If the device holds a
filesystem, you would need to resize the filesystem to use less space.

After reducing the array size you should check that the data stored in
the device is still available. If the device holds a filesystem, then
an 'fsck' of the filesystem is a minimum requirement. If there are
problems the array can be made bigger again with no loss with another
.B "\-\-grow \-\-size="
command.
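For example (device names are illustrative):

```shell
# Use only 10 GiB from each member device at creation time, leaving
# headroom for slightly smaller replacement drives.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --size=10G \
    /dev/sda1 /dev/sdb1

# Later, grow the array to use all available space on the members.
mdadm --grow /dev/md0 --size=max
```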

This value cannot be used when creating a
.B CONTAINER
such as with DDF and IMSM metadata, though it is perfectly valid when
creating an array inside a container.

.TP
.BR \-Z ", " \-\-array\-size=
This is only meaningful with
.B \-\-grow
and its effect is not persistent: when the array is stopped and
restarted the default array size will be restored.

Setting the array-size causes the array to appear smaller to programs
that access the data. This is particularly needed before reshaping an
array so that it will be smaller. As the reshape is not reversible,
but setting the size with
.B \-\-array-size
is, it is required that the array size is reduced as appropriate
before the number of devices in the array is reduced.

Before reducing the size of the array you should make sure that space
isn't needed. If the device holds a filesystem, you would need to
resize the filesystem to use less space.

After reducing the array size you should check that the data stored in
the device is still available. If the device holds a filesystem, then
an 'fsck' of the filesystem is a minimum requirement. If there are
problems the array can be made bigger again with no loss with another
.B "\-\-grow \-\-array\-size="
command.

A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
Megabytes, Gigabytes or Terabytes respectively.
A value of
.B max
restores the apparent size of the array to be whatever the real
amount of available space is.

Clustered arrays do not support this parameter yet.

.TP
.BR \-c ", " \-\-chunk=
Specify chunk size in kilobytes. The default when creating an
array is 512KB. To ensure compatibility with earlier versions, the
default when building an array with no persistent metadata is 64KB.
This is only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10.

RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power
of 2. In any case it must be a multiple of 4KB.

A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
Megabytes, Gigabytes or Terabytes respectively.
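For example (device names are illustrative):

```shell
# Create a RAID0 array with a 256KB chunk size; without a suffix
# the value would be taken as Kilobytes anyway.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256K \
    /dev/sda1 /dev/sdb1
```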

.TP
.BR \-\-rounding=
Specify rounding factor for a Linear array. The size of each
component will be rounded down to a multiple of this size.
This is a synonym for
.B \-\-chunk
but highlights the different meaning for Linear as compared to other
RAID levels. The default is 64K if a kernel earlier than 2.6.16 is in
use, and is 0K (i.e. no rounding) in later kernels.

.TP
.BR \-l ", " \-\-level=
Set RAID level. When used with
.BR \-\-create ,
options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4,
raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container.
Obviously some of these are synonymous.

When a
.B CONTAINER
metadata type is requested, only the
.B container
level is permitted, and it does not need to be explicitly given.

When used with
.BR \-\-build ,
only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.

Can be used with
.B \-\-grow
to change the RAID level in some cases. See LEVEL CHANGES below.

.TP
.BR \-p ", " \-\-layout=
This option configures the fine details of data layout for RAID5, RAID6,
and RAID10 arrays, and controls the failure modes for
.IR faulty .
It can also be used for working around a kernel bug with RAID0, but generally
doesn't need to be used explicitly.

The layout of the RAID5 parity block can be one of
.BR left\-asymmetric ,
.BR left\-symmetric ,
.BR right\-asymmetric ,
.BR right\-symmetric ,
.BR la ", " ra ", " ls ", " rs .
The default is
.BR left\-symmetric .

It is also possible to cause RAID5 to use a RAID4-like layout by
choosing
.BR parity\-first ,
or
.BR parity\-last .

Finally for RAID5 there are DDF\-compatible layouts,
.BR ddf\-zero\-restart ,
.BR ddf\-N\-restart ,
and
.BR ddf\-N\-continue .

These same layouts are available for RAID6. There are also 4 layouts
that will provide an intermediate stage for converting between RAID5
and RAID6. These provide a layout which is identical to the
corresponding RAID5 layout on the first N\-1 devices, and has the 'Q'
syndrome (the second 'parity' block used by RAID6) on the last device.
These layouts are:
.BR left\-symmetric\-6 ,
.BR right\-symmetric\-6 ,
.BR left\-asymmetric\-6 ,
.BR right\-asymmetric\-6 ,
and
.BR parity\-first\-6 .

When setting the failure mode for level
.IR faulty ,
the options are:
.BR write\-transient ", " wt ,
.BR read\-transient ", " rt ,
.BR write\-persistent ", " wp ,
.BR read\-persistent ", " rp ,
.BR write\-all ,
.BR read\-fixable ", " rf ,
.BR clear ", " flush ", " none .

Each failure mode can be followed by a number, which is used as a period
between fault generation. Without a number, the fault is generated
once on the first relevant request. With a number, the fault will be
generated after that many requests, and will continue to be generated
every time the period elapses.

Multiple failure modes can be current simultaneously by using the
.B \-\-grow
option to set subsequent failure modes.

"clear" or "none" will remove any pending or periodic failure modes,
and "flush" will clear any persistent faults.

The layout options for RAID10 are one of 'n', 'o' or 'f' followed
by a small number. The default is 'n2'. The supported options are:

.I 'n'
signals 'near' copies. Multiple copies of one data block are at
similar offsets in different devices.

.I 'o'
signals 'offset' copies. Rather than the chunks being duplicated
within a stripe, whole stripes are duplicated but are rotated by one
device so duplicate blocks are on different devices. Thus subsequent
copies of a block are in the next drive, and are one chunk further
down.

.I 'f'
signals 'far' copies
(multiple copies have very different offsets).
See md(4) for more detail about 'near', 'offset', and 'far'.

The number is the number of copies of each datablock. 2 is normal, 3
can be useful. This number can be at most equal to the number of
devices in the array. It does not need to divide evenly into that
number (e.g. it is perfectly legal to have an 'n2' layout for an array
with an odd number of devices).
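For example (device names are illustrative):

```shell
# 4-device RAID10 keeping two 'far' copies of each block (layout f2).
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```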

A bug introduced in Linux 3.14 means that RAID0 arrays
.B "with devices of differing sizes"
started using a different layout. This could lead to
data corruption. Since Linux 5.4 (and various stable releases that received
backports), the kernel will not accept such an array unless
a layout is explicitly set. It can be set to
.RB ' original '
or
.RB ' alternate '.
When creating a new array,
.I mdadm
will select
.RB ' original '
by default, so the layout does not normally need to be set.
An array created as either
.RB ' original '
or
.RB ' alternate '
will not be recognized by an (unpatched) kernel prior to 5.4. To create
a RAID0 array with devices of differing sizes that can be used on an
older kernel, you can set the layout to
.RB ' dangerous '.
This will use whichever layout the running kernel supports, so the data
on the array may become corrupt when moving from a pre-3.14 kernel to a
later one.
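For example (device names are illustrative):

```shell
# Create a RAID0 array from devices of differing sizes with an
# explicit layout, so kernels 5.4 and later will accept it.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --layout=original \
    /dev/sda1 /dev/sdb1
```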

When an array is converted between RAID5 and RAID6 an intermediate
RAID6 layout is used in which the second parity block (Q) is always on
the last device. To convert a RAID5 to RAID6 and leave it in this new
layout (which does not require re-striping) use
.BR \-\-layout=preserve .
This will try to avoid any restriping.

The converse of this is
.B \-\-layout=normalise
which will change a non-standard RAID6 layout into a more standard
arrangement.

.TP
.BR \-\-parity=
same as
.B \-\-layout
(thus explaining the p of
.BR \-p ).

.TP
.BR \-b ", " \-\-bitmap=
Specify a file to store a write-intent bitmap in. The file should not
exist unless
.B \-\-force
is also given. The same file should be provided
when assembling the array. If the word
.B "internal"
is given, then the bitmap is stored with the metadata on the array,
and so is replicated on all devices. If the word
.B "none"
is given with
.B \-\-grow
mode, then any bitmap that is present is removed. If the word
.B "clustered"
is given, the array is created for a clustered environment. One bitmap
is created for each node as defined by the
.B \-\-nodes
parameter, and the bitmaps are stored internally.

To help catch typing errors, the filename must contain at least one
slash ('/') if it is a real file (not 'internal' or 'none').

Note: external bitmaps are only known to work on ext2 and ext3.
Storing bitmap files on other filesystems may result in serious problems.

When creating an array on devices which are 100G or larger,
.I mdadm
automatically adds an internal bitmap as it will usually be
beneficial. This can be suppressed with
.B "\-\-bitmap=none"
or by selecting a different consistency policy with
.BR \-\-consistency\-policy .
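For example:

```shell
# Add an internal write-intent bitmap to an existing array.
mdadm --grow /dev/md0 --bitmap=internal

# Remove it again.
mdadm --grow /dev/md0 --bitmap=none
```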

.TP
.BR \-\-bitmap\-chunk=
Set the chunksize of the bitmap. Each bit corresponds to that many
Kilobytes of storage.
When using a file based bitmap, the default is to use the smallest
size that is at least 4 and requires no more than 2^21 chunks.
When using an
.B internal
bitmap, the chunksize defaults to 64Meg, or larger if necessary to
fit the bitmap into the available space.

A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
Megabytes, Gigabytes or Terabytes respectively.

.TP
.BR \-W ", " \-\-write\-mostly
subsequent devices listed in a
.BR \-\-build ,
.BR \-\-create ,
or
.B \-\-add
command will be flagged as 'write\-mostly'. This is valid for RAID1
only and means that the 'md' driver will avoid reading from these
devices if at all possible. This can be useful if mirroring over a
slow link.

.TP
.BR \-\-write\-behind=
Specify that write-behind mode should be enabled (valid for RAID1
only). If an argument is specified, it will set the maximum number
of outstanding writes allowed. The default value is 256.
A write-intent bitmap is required in order to use write-behind
mode, and write-behind is only attempted on drives marked as
.IR write-mostly .
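For example (device names are illustrative):

```shell
# Mirror a fast local device to a slower one: only /dev/sdb1,
# listed after --write-mostly, gets the write-mostly flag, and up
# to 256 writes to it may be outstanding via write-behind.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --bitmap=internal --write-behind=256 \
    /dev/sda1 --write-mostly /dev/sdb1
```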

.TP
.BR \-\-failfast
subsequent devices listed in a
.B \-\-create
or
.B \-\-add
command will be flagged as 'failfast'. This is valid for RAID1 and
RAID10 only. IO requests to these devices will be encouraged to fail
quickly rather than cause long delays due to error handling. Also no
attempt is made to repair a read error on these devices.

If an array becomes degraded so that the 'failfast' device is the only
usable device, the 'failfast' flag will then be ignored and extended
delays will be preferred to complete failure.

The 'failfast' flag is appropriate for storage arrays which have a
low probability of true failure, but which may sometimes
cause unacceptable delays due to internal maintenance functions.

.TP
.BR \-\-assume\-clean
Tell
.I mdadm
that the array pre-existed and is known to be clean. It can be useful
when trying to recover from a major failure as you can be sure that no
data will be affected unless you actually write to the array. It can
also be used when creating a RAID1 or RAID10 if you want to avoid the
initial resync, however this practice \(em while normally safe \(em is not
recommended. Use this only if you really know what you are doing.
.IP
When the devices that will be part of a new array were filled
with zeros before creation the operator knows the array is
actually clean. If that is the case, such as after running
badblocks, this argument can be used to tell mdadm the
facts the operator knows.
.IP
When an array is resized to a larger size with
.B "\-\-grow \-\-size="
the new space is normally resynced in the same way that the whole
array is resynced at creation. From Linux version 3.0,
.B \-\-assume\-clean
can be used with that command to avoid the automatic resync.

.TP
.BR \-\-backup\-file=
This is needed when
.B \-\-grow
is used to increase the number of raid-devices in a RAID5 or RAID6 if
there are no spare devices available, or to shrink, change RAID level
or layout. See the GROW MODE section below on RAID\-DEVICES CHANGES.
The file must be stored on a separate device, not on the RAID array
being reshaped.

.TP
.B \-\-data\-offset=
Arrays with 1.x metadata can leave a gap between the start of the
device and the start of array data. This gap can be used for various
metadata. The start of data is known as the
.IR data\-offset .
Normally an appropriate data offset is computed automatically.
However it can be useful to set it explicitly such as when re-creating
an array which was originally created using a different version of
.I mdadm
which computed a different offset.

Setting the offset explicitly over-rides the default. The value given
is in Kilobytes unless a suffix of 'K', 'M', 'G' or 'T' is used to explicitly
indicate Kilobytes, Megabytes, Gigabytes or Terabytes respectively.

Since Linux 3.4,
.B \-\-data\-offset
can also be used with
.B --grow
for some RAID levels (initially on RAID10). This allows the
data\-offset to be changed as part of the reshape process. When the
data offset is changed, no backup file is required as the difference
in offsets is used to provide the same functionality.

When the new offset is earlier than the old offset, the number of
devices in the array cannot shrink. When it is after the old offset,
the number of devices in the array cannot increase.
876 When creating an array,
877 .B \-\-data\-offset
878 can be specified as
879 .BR variable .
880 In the case each member device is expected to have a offset appended
881 to the name, separated by a colon. This makes it possible to recreate
882 exactly an array which has varying data offsets (as can happen when
883 different versions of
884 .I mdadm
885 are used to add different devices).
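When
.B variable
is used, each device:offset pair might look like this sketch (all
names and offset values are hypothetical):

```shell
# Re-create a two-member RAID1 whose devices originally had different
# data offsets; each offset (in KiB) follows its device name after a
# colon. All names and values are illustrative.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --data-offset=variable /dev/sda1:2048 /dev/sdb1:4096
```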

.TP
.BR \-\-continue
This option is complementary to the
.B \-\-freeze\-reshape
option for assembly. It is needed when a
.B \-\-grow
operation is interrupted and is not restarted automatically because
.B \-\-freeze\-reshape
was used during array assembly. This option is used together with the
.BR \-G " (" \-\-grow )
command and the device of a pending reshape to be continued.
All parameters required for reshape continuation will be read from the
array metadata.
If the initial
.B \-\-grow
command required the
.B \-\-backup\-file=
option, the continuation will also require exactly the same backup
file.
.IP
Any other parameters passed together with the
.B \-\-continue
option will be ignored.
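A sketch of continuing a frozen reshape (device and backup-file names
are hypothetical):

```shell
# Resume a reshape that was left frozen by --freeze-reshape during
# initrd assembly. The same backup file given to the initial --grow
# must be supplied again. Names are illustrative.
mdadm --grow --continue /dev/md0 --backup-file=/root/md0-grow.backup
```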

.TP
.BR \-N ", " \-\-name=
Set a
.B name
for the array. This is currently only effective when creating an
array with a version-1 superblock, or an array in a DDF container.
The name is a simple textual string that can be used to identify array
components when assembling. If a name is needed but not specified, it
is taken from the basename of the device that is being created.
e.g. when creating
.I /dev/md/home
the
.B name
will default to
.IR home .

.TP
.BR \-R ", " \-\-run
Insist that
.I mdadm
run the array, even if some of the components
appear to be active in another array or filesystem. Normally
.I mdadm
will ask for confirmation before including such components in an
array. This option causes that question to be suppressed.

.TP
.BR \-f ", " \-\-force
Insist that
.I mdadm
accept the geometry and layout specified without question. Normally
.I mdadm
will not allow creation of an array with only one device, and will try
to create a RAID5 array with one missing drive (as this makes the
initial resync work faster). With
.BR \-\-force ,
.I mdadm
will not try to be so clever.

.TP
.BR \-o ", " \-\-readonly
Start the array
.B read only
rather than read-write as normal. No writes will be allowed to the
array, and no resync, recovery, or reshape will be started. It works with
Create, Assemble, Manage and Misc modes.

.TP
.BR \-a ", " "\-\-auto{=yes,md,mdp,part,p}{NN}"
Instruct mdadm how to create the device file if needed, possibly allocating
an unused minor number. "md" causes a non-partitionable array
to be used (though since Linux 2.6.28, these array devices are in fact
partitionable). "mdp", "part" or "p" causes a partitionable array (2.6 and
later) to be used. "yes" requires the named md device to have
a 'standard' format, and the type and minor number will be determined
from this. Since mdadm 3.0, device creation is normally left up to
.I udev
so this option is unlikely to be needed.
See DEVICE NAMES below.

The argument can also come immediately after
"\-a". e.g. "\-ap".

If
.B \-\-auto
is not given on the command line or in the config file, then
the default will be
.BR \-\-auto=yes .

If
.B \-\-scan
is also given, then any
.I auto=
entries in the config file will override the
.B \-\-auto
instruction given on the command line.

For partitionable arrays,
.I mdadm
will create the device file for the whole array and for the first 4
partitions. A different number of partitions can be specified at the
end of this option (e.g.
.BR \-\-auto=p7 ).
If the device name ends with a digit, the partition names add a 'p',
and a number, e.g.
.IR /dev/md/home1p3 .
If there is no trailing digit, then the partition names just have a
number added, e.g.
.IR /dev/md/scratch3 .

If the md device name is in a 'standard' format as described in DEVICE
NAMES, then it will be created, if necessary, with the appropriate
device number based on that name. If the device name is not in one of these
formats, then an unused device number will be allocated. The device
number will be considered unused if there is no active array for that
number, and there is no entry in /dev for that number and with a
non-standard name. Names that are not in 'standard' format are only
allowed in "/dev/md/".

This is meaningful with
.B \-\-create
or
.BR \-\-build .
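For example, a partitionable array with more than the default 4
partition device files might be created as follows (device names are
hypothetical):

```shell
# Create a partitionable RAID1 with device files for 7 partitions.
# Since the name has no trailing digit, partitions appear as
# /dev/md/scratch1 .. /dev/md/scratch7. Names are illustrative.
mdadm --create /dev/md/scratch --auto=p7 --level=1 --raid-devices=2 \
      /dev/sda1 /dev/sdb1
```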

.TP
.BR \-a ", " "\-\-add"
This option can be used in Grow mode in two cases.

If the target array is a Linear array, then
.B \-\-add
can be used to add one or more devices to the array. They
are simply concatenated onto the end of the array. Once added, the
devices cannot be removed.

If the
.B \-\-raid\-disks
option is being used to increase the number of devices in an array,
then
.B \-\-add
can be used to add some extra devices to be included in the array.
In most cases this is not needed as the extra devices can be added as
spares first, and then the number of raid-disks can be changed.
However for RAID0, it is not possible to add spares. So to increase
the number of devices in a RAID0, it is necessary to set the new
number of devices, and to add the new devices, in the same command.
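A sketch of such a combined command (device names are hypothetical):

```shell
# RAID0 cannot hold spares, so the new device count and the new device
# are given in a single command. Names are illustrative.
mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdc1
```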

.TP
.BR \-\-nodes
This only works when the array is for a clustered environment. It specifies
the maximum number of nodes in the cluster that will use this device
simultaneously. If not specified, this defaults to 4.

.TP
.BR \-\-write\-journal
Specify a journal device for a RAID4/5/6 array. The journal device
should be an SSD with a reasonable lifetime.

.TP
.BR \-\-symlinks
Automatically create symlinks in /dev to /dev/md. This option must be
given the value 'yes' or 'no', and only works with \-\-create and \-\-build.

.TP
.BR \-k ", " \-\-consistency\-policy=
Specify how the array maintains consistency in case of unexpected shutdown.
Only relevant for RAID levels with redundancy.
Currently supported options are:
.RS

.TP
.B resync
Full resync is performed and all redundancy is regenerated when the array is
started after unclean shutdown.

.TP
.B bitmap
Resync assisted by a write-intent bitmap. Implicitly selected when using
.BR \-\-bitmap .

.TP
.B journal
For RAID levels 4/5/6, a journal device is used to log transactions and replay
after unclean shutdown. Implicitly selected when using
.BR \-\-write\-journal .

.TP
.B ppl
For RAID5 only, a Partial Parity Log is used to close the write hole and
eliminate resync. PPL is stored in the metadata region of the RAID member
drives, so no additional journal drive is needed.

.PP
Can be used with \-\-grow to change the consistency policy of an active array
in some cases. See CONSISTENCY POLICY CHANGES below.
.RE


.SH For assemble:

.TP
.BR \-u ", " \-\-uuid=
UUID of the array to assemble. Devices which don't have this UUID are
excluded.

.TP
.BR \-m ", " \-\-super\-minor=
Minor number of the device that the array was created for. Devices which
don't have this minor number are excluded. If you create an array as
/dev/md1, then all superblocks will contain the minor number 1, even if
the array is later assembled as /dev/md2.

Giving the literal word "dev" for
.B \-\-super\-minor
will cause
.I mdadm
to use the minor number of the md device that is being assembled.
e.g. when assembling
.BR /dev/md0 ,
.B \-\-super\-minor=dev
will look for super blocks with a minor number of 0.

.B \-\-super\-minor
is only relevant for v0.90 metadata, and should not normally be used.
Using
.B \-\-uuid
is much safer.

.TP
.BR \-N ", " \-\-name=
Specify the name of the array to assemble. This must be the name
that was specified when creating the array. It must either match
the name stored in the superblock exactly, or it must match
with the current
.I homehost
prefixed to the start of the given name.

.TP
.BR \-f ", " \-\-force
Assemble the array even if the metadata on some devices appears to be
out-of-date. If
.I mdadm
cannot find enough working devices to start the array, but can find
some devices that are recorded as having failed, then it will mark
those devices as working so that the array can be started.
An array which requires
.B \-\-force
to be started may contain data corruption. Use it carefully.

.TP
.BR \-R ", " \-\-run
Attempt to start the array even if fewer drives were given than were
present last time the array was active. Normally if not all the
expected drives are found and
.B \-\-scan
is not used, then the array will be assembled but not started.
With
.B \-\-run
an attempt will be made to start it anyway.

.TP
.B \-\-no\-degraded
This is the reverse of
.B \-\-run
in that it inhibits the startup of the array unless all expected drives
are present. This is only needed with
.BR \-\-scan ,
and can be used if the physical connections to devices are
not as reliable as you would like.

.TP
.BR \-a ", " "\-\-auto{=no,yes,md,mdp,part}"
See this option under Create and Build options.

.TP
.BR \-b ", " \-\-bitmap=
Specify the bitmap file that was given when the array was created. If
an array has an
.B internal
bitmap, there is no need to specify this when assembling the array.

.TP
.BR \-\-backup\-file=
If
.B \-\-backup\-file
was used while reshaping an array (e.g. changing number of devices or
chunk size) and the system crashed during the critical section, then the same
.B \-\-backup\-file
must be presented to
.B \-\-assemble
to allow possibly corrupted data to be restored, and the reshape
to be completed.

.TP
.BR \-\-invalid\-backup
If the file needed for the above option is not available for any
reason, an empty file can be given together with this option to
indicate that the backup file is invalid. In this case the data that
was being rearranged at the time of the crash could be irrecoverably
lost, but the rest of the array may still be recoverable. This option
should only be used as a last resort if there is no way to recover the
backup file.


.TP
.BR \-U ", " \-\-update=
Update the superblock on each device while assembling the array. The
argument given to this flag can be one of
.BR sparc2.2 ,
.BR summaries ,
.BR uuid ,
.BR name ,
.BR nodes ,
.BR homehost ,
.BR home-cluster ,
.BR resync ,
.BR byteorder ,
.BR devicesize ,
.BR no\-bitmap ,
.BR bbl ,
.BR no\-bbl ,
.BR ppl ,
.BR no\-ppl ,
.BR layout\-original ,
.BR layout\-alternate ,
.BR layout\-unspecified ,
.BR metadata ,
or
.BR super\-minor .

The
.B sparc2.2
option will adjust the superblock of an array that was created on a Sparc
machine running a patched 2.2 Linux kernel. This kernel got the
alignment of part of the superblock wrong. You can use the
.B "\-\-examine \-\-sparc2.2"
option to
.I mdadm
to see what effect this would have.

The
.B super\-minor
option will update the
.B "preferred minor"
field on each superblock to match the minor number of the array being
assembled.
This can be useful if
.B \-\-examine
reports a different "Preferred Minor" to
.BR \-\-detail .
In some cases this update will be performed automatically
by the kernel driver. In particular, the update happens automatically
at the first write to an array with redundancy (RAID level 1 or
greater) on a 2.6 (or later) kernel.

The
.B uuid
option will change the uuid of the array. If a UUID is given with the
.B \-\-uuid
option, that UUID will be used as the new UUID and will
.B NOT
be used to help identify the devices in the array.
If no
.B \-\-uuid
is given, a random UUID is chosen.
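For instance, assembling with an explicitly chosen new UUID might look
like this (the UUID and device names are hypothetical):

```shell
# Assign a specific new UUID while assembling; without --uuid a random
# one would be chosen. All values are illustrative.
mdadm --assemble /dev/md0 --update=uuid \
      --uuid=c3f2a1b0:9d8e7f6a:5b4c3d2e:1f0a9b8c /dev/sda1 /dev/sdb1
```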

The
.B name
option will change the
.I name
of the array as stored in the superblock. This is only supported for
version-1 superblocks.

The
.B nodes
option will change the
.I nodes
of the array as stored in the bitmap superblock. This option only
works for a clustered environment.

The
.B homehost
option will change the
.I homehost
as recorded in the superblock. For version-0 superblocks, this is the
same as updating the UUID.
For version-1 superblocks, this involves updating the name.

The
.B home\-cluster
option will change the cluster name as recorded in the superblock and
bitmap. This option only works for a clustered environment.

The
.B resync
option will cause the array to be marked
.I dirty
meaning that any redundancy in the array (e.g. parity for RAID5,
copies for RAID1) may be incorrect. This will cause the RAID system
to perform a "resync" pass to make sure that all redundant information
is correct.

The
.B byteorder
option allows arrays to be moved between machines with different
byte-order, such as from a big-endian machine like a Sparc or some
MIPS machines, to a little-endian x86_64 machine.
When assembling such an array for the first time after a move, giving
.B "\-\-update=byteorder"
will cause
.I mdadm
to expect superblocks to have their byteorder reversed, and will
correct that order before assembling the array. This is only valid
with original (Version 0.90) superblocks.

The
.B summaries
option will correct the summaries in the superblock: that is, the
counts of total, working, active, failed, and spare devices.

The
.B devicesize
option will rarely be of use. It applies to version 1.1 and 1.2 metadata
only (where the metadata is at the start of the device) and is only
useful when the component device has changed size (typically become
larger). The version 1 metadata records the amount of the device that
can be used to store data, so if a device in a version 1.1 or 1.2
array becomes larger, the metadata will still be visible, but the
extra space will not. In this case it might be useful to assemble the
array with
.BR \-\-update=devicesize .
This will cause
.I mdadm
to determine the maximum usable amount of space on each device and
update the relevant field in the metadata.

The
.B metadata
option only works on v0.90 metadata arrays and will convert them to
v1.0 metadata. The array must not be dirty (i.e. it must not need a
sync) and it must not have a write-intent bitmap.

The old metadata will remain on the devices, but will appear older
than the new metadata and so will usually be ignored. The old metadata
(or indeed the new metadata) can be removed by giving the appropriate
.B \-\-metadata=
option to
.BR \-\-zero\-superblock .

The
.B no\-bitmap
option can be used when an array has an internal bitmap which is
corrupt in some way so that assembling the array normally fails. It
will cause any internal bitmap to be ignored.

The
.B bbl
option will reserve space in each device for a bad block list. This
will be 4K in size and positioned near the end of any free space
between the superblock and the data.

The
.B no\-bbl
option will cause any reservation of space for a bad block list to be
removed. If the bad block list contains entries, this will fail, as
removing the list could cause data corruption.

The
.B ppl
option will enable PPL for a RAID5 array and reserve space for PPL on each
device. There must be enough free space between the data and superblock, and a
write-intent bitmap or journal must not be used.

The
.B no\-ppl
option will disable PPL in the superblock.

The
.B layout\-original
and
.B layout\-alternate
options are for RAID0 arrays with non-uniform device sizes that were in
use before Linux 5.4. If the array was being used with Linux 3.13 or
earlier, then to assemble the array on a new kernel,
.B \-\-update=layout\-original
must be given. If the array was created and used with a kernel from Linux 3.14 to
Linux 5.3, then
.B \-\-update=layout\-alternate
must be given. This only needs to be given once. Subsequent assembly of the array
will happen normally.
For more information, see
.IR md (4).
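A sketch of the one-time assembly (device names are hypothetical):

```shell
# An array last used on Linux 3.13 or earlier, assembled once on a
# Linux 5.4+ kernel; later assemblies need no --update. Names are
# illustrative.
mdadm --assemble /dev/md0 --update=layout-original /dev/sda1 /dev/sdb1
```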

The
.B layout\-unspecified
option reverts the effect of
.B layout\-original
or
.B layout\-alternate
and allows the array to be again used on a kernel prior to Linux 5.3.
This option should be used with great caution.

.TP
.BR \-\-freeze\-reshape
This option is intended to be used in start-up scripts during the initrd
boot phase. When an array under reshape is assembled during the initrd
phase, this option stops the reshape after the reshape-critical section
has been restored. This happens before the file system pivot operation
and avoids losing the file system context, which would cause the reshape
to break.

Reshape can be continued later using the
.B \-\-continue
option for the grow command.

.TP
.BR \-\-symlinks
See this option under Create and Build options.

.SH For Manage mode:

.TP
.BR \-t ", " \-\-test
Unless a more serious error occurred,
.I mdadm
will exit with a status of 2 if no changes were made to the array and
0 if at least one change was made.
This can be useful when an indirect specifier such as
.BR missing ,
.B detached
or
.B faulty
is used in requesting an operation on the array.
.B \-\-test
will report failure if these specifiers didn't find any match.

.TP
.BR \-a ", " \-\-add
hot-add listed devices.
If a device appears to have recently been part of the array
(possibly it failed or was removed) the device is re\-added as described
in the next point.
If that fails or the device was never part of the array, the device is
added as a hot-spare.
If the array is degraded, it will immediately start to rebuild data
onto that spare.

Note that this and the following options are only meaningful on arrays
with redundancy. They don't apply to RAID0 or Linear.

.TP
.BR \-\-re\-add
re\-add a device that was previously removed from an array.
If the metadata on the device reports that it is a member of the
array, and the slot that it used is still vacant, then the device will
be added back to the array in the same position. This will normally
cause the data for that device to be recovered. However, based on the
event count on the device, the recovery may only require sections that
are flagged in a write-intent bitmap to be recovered, or may not require
any recovery at all.

When used on an array that has no metadata (i.e. it was built with
.BR \-\-build ),
it will be assumed that bitmap-based recovery is enough to make the
device fully consistent with the array.

When used with v1.x metadata,
.B \-\-re\-add
can be accompanied by
.BR \-\-update=devicesize ,
.BR \-\-update=bbl ", or"
.BR \-\-update=no\-bbl .
See the description of these options when used in Assemble mode for an
explanation of their use.

If the device name given is
.B missing
then
.I mdadm
will try to find any device that looks like it should be
part of the array but isn't and will try to re\-add all such devices.

If the device name given is
.B faulty
then
.I mdadm
will find all devices in the array that are marked
.BR faulty ,
remove them and attempt to immediately re\-add them. This can be
useful if you are certain that the reason for failure has been
resolved.

.TP
.B \-\-add\-spare
Add a device as a spare. This is similar to
.B \-\-add
except that it does not attempt
.B \-\-re\-add
first. The device will be added as a spare even if it looks like it
could be a recent member of the array.

.TP
.BR \-r ", " \-\-remove
remove listed devices. They must not be active, i.e. they should
be failed or spare devices.

As well as the name of a device file
(e.g.
.BR /dev/sda1 )
the words
.BR failed ,
.B detached
and names like
.B set-A
can be given to
.BR \-\-remove .
The first causes all failed devices to be removed. The second causes
any device which is no longer connected to the system (i.e. an 'open'
returns
.BR ENXIO )
to be removed.
The third will remove a set as described below under
.BR \-\-fail .

.TP
.BR \-f ", " \-\-fail
Mark listed devices as faulty.
As well as the name of a device file, the word
.B detached
or a set name like
.B set\-A
can be given. The former will cause any device that has been detached from
the system to be marked as failed. It can then be removed.

For RAID10 arrays where the number of copies evenly divides the number
of devices, the devices can be conceptually divided into sets where
each set contains a single complete copy of the data on the array.
Sometimes a RAID10 array will be configured so that these sets are on
separate controllers. In this case all the devices in one set can be
failed by giving a name like
.B set\-A
or
.B set\-B
to
.BR \-\-fail .
The appropriate set names are reported by
.BR \-\-detail .

.TP
.BR \-\-set\-faulty
same as
.BR \-\-fail .

.TP
.B \-\-replace
Mark listed devices as requiring replacement. As soon as a spare is
available, it will be rebuilt and will replace the marked device.
This is similar to marking a device as faulty, but the device remains
in service during the recovery process to increase resilience against
multiple failures. When the replacement process finishes, the
replaced device will be marked as faulty.

.TP
.B \-\-with
This can follow a list of
.B \-\-replace
devices. The devices listed after
.B \-\-with
will be preferentially used to replace the devices listed after
.BR \-\-replace .
These devices must already be spare devices in the array.
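A sketch of pairing the two options (device names are hypothetical):

```shell
# Rebuild onto a chosen spare while the aging member stays in service;
# /dev/sdc1 must already be a spare in the array. Names are illustrative.
mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdc1
```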

.TP
.BR \-\-write\-mostly
Subsequent devices that are added or re\-added will have the 'write-mostly'
flag set. This is only valid for RAID1 and means that the 'md' driver
will avoid reading from these devices if possible.
.TP
.BR \-\-readwrite
Subsequent devices that are added or re\-added will have the 'write-mostly'
flag cleared.
.TP
.BR \-\-cluster\-confirm
Confirm the existence of the device. This is issued in response to an \-\-add
request by a node in a cluster. When a node adds a device it sends a message
to all nodes in the cluster to look for a device with a UUID. This translates
to a udev notification with the UUID of the device to be added and the slot
number. The receiving node must acknowledge this message
with \-\-cluster\-confirm. Valid arguments are <slot>:<devicename> in case
the device is found, or <slot>:missing in case the device is not found.

.TP
.BR \-\-add\-journal
Add a journal to an existing array, or recreate the journal for a
RAID4/5/6 array that lost its journal device. To avoid interrupting
ongoing write operations,
.B \-\-add\-journal
only works for an array in the Read-Only state.

.TP
.BR \-\-failfast
Subsequent devices that are added or re\-added will have
the 'failfast' flag set. This is only valid for RAID1 and RAID10 and
means that the 'md' driver will avoid long timeouts on error handling
where possible.
.TP
.BR \-\-nofailfast
Subsequent devices that are re\-added will be re\-added without
the 'failfast' flag set.

.P
Each of these options requires that the first device listed is the array
to be acted upon, and the remainder are component devices to be added,
removed, marked as faulty, etc. Several different operations can be
specified for different devices, e.g.
.in +5
mdadm /dev/md0 \-\-add /dev/sda1 \-\-fail /dev/sdb1 \-\-remove /dev/sdb1
.in -5
Each operation applies to all devices listed until the next
operation.

If an array is using a write-intent bitmap, then devices which have
been removed can be re\-added in a way that avoids a full
reconstruction but instead just updates the blocks that have changed
since the device was removed. For arrays with persistent metadata
(superblocks) this is done automatically. For arrays created with
.B \-\-build
mdadm needs to be told that the device was removed recently, using
.BR \-\-re\-add .

Devices can only be removed from an array if they are not in active
use, i.e. they must be spares or failed devices. To remove an active
device, it must first be marked as
.BR faulty .

.SH For Misc mode:

.TP
.BR \-Q ", " \-\-query
Examine a device to see
(1) if it is an md device and (2) if it is a component of an md
array.
Information about what is discovered is presented.

.TP
.BR \-D ", " \-\-detail
Print details of one or more md devices.

.TP
.BR \-\-detail\-platform
Print details of the platform's RAID capabilities (firmware / hardware
topology) for a given metadata format. If used without an argument, mdadm
will scan all controllers looking for their capabilities. Otherwise, mdadm
will only look at the controller specified by the argument in the form of an
absolute filepath or a link, e.g.
.IR /sys/devices/pci0000:00/0000:00:1f.2 .

.TP
.BR \-Y ", " \-\-export
When used with
.BR \-\-detail ,
.BR \-\-detail\-platform ,
.BR \-\-examine ,
or
.B \-\-incremental
the output will be formatted as
.B key=value
pairs for easy import into the environment.

With
.B \-\-incremental
the value
.B MD_STARTED
indicates whether an array was started
.RB ( yes )
or not, which may include a reason
.RB ( unsafe ", " nothing ", " no ).
Also the value
.B MD_FOREIGN
indicates if the array is expected on this host
.RB ( no ),
or seems to be from elsewhere
.RB ( yes ).

.TP
.BR \-E ", " \-\-examine
Print the contents of the metadata stored on the named device(s).
Note the contrast between
.B \-\-examine
and
.BR \-\-detail .
.B \-\-examine
applies to devices which are components of an array, while
.B \-\-detail
applies to a whole array which is currently active.
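The contrast can be seen by pointing each option at the appropriate
device (names are hypothetical):

```shell
# Metadata stored on a component device:
mdadm --examine /dev/sda1
# Status of the running array built from such components:
mdadm --detail /dev/md0
```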
.TP
.B \-\-sparc2.2
If an array was created on a SPARC machine with a 2.2 Linux kernel
patched with RAID support, the superblock will have been created
incorrectly, or at least incompatibly with 2.4 and later kernels.
Using the
.B \-\-sparc2.2
flag with
.B \-\-examine
will fix the superblock before displaying it. If this appears to do
the right thing, then the array can be successfully assembled using
.BR "\-\-assemble \-\-update=sparc2.2" .

.TP
.BR \-X ", " \-\-examine\-bitmap
Report information about a bitmap file.
The argument is either an external bitmap file or an array component
in case of an internal bitmap. Note that running this on an array
device (e.g.
.BR /dev/md0 )
does not report the bitmap for that array.

.TP
.B \-\-examine\-badblocks
List the bad blocks recorded for the device, if a bad-blocks list has
been configured. Currently only
.B 1.x
and
.B IMSM
metadata support bad-blocks lists.

.TP
.BI \-\-dump= directory
.TP
.BI \-\-restore= directory
Save metadata from listed devices, or restore metadata to listed devices.

.TP
.BR \-R ", " \-\-run
start a partially assembled array. If
.B \-\-assemble
did not find enough devices to fully start the array, it might leave
it partially assembled. If you wish, you can then use
.B \-\-run
to start the array in degraded mode.

.TP
.BR \-S ", " \-\-stop
deactivate array, releasing all resources.

.TP
.BR \-o ", " \-\-readonly
mark array as readonly.

.TP
.BR \-w ", " \-\-readwrite
mark array as readwrite.

.TP
.B \-\-zero\-superblock
If the device contains a valid md superblock, the block is
overwritten with zeros. With
.B \-\-force
the block where the superblock would be is overwritten even if it
doesn't appear to be valid.

.B Note:
Be careful when calling \-\-zero\-superblock with clustered RAID. Make sure
the array is not used or assembled on another cluster node before executing it.

.TP
.B \-\-kill\-subarray=
If the device is a container and the argument to \-\-kill\-subarray
specifies an inactive subarray in the container, then the subarray is
deleted. Deleting all subarrays will leave an 'empty-container' or
spare superblock on the drives. See
.B \-\-zero\-superblock
for completely
removing a superblock. Note that some formats depend on the subarray
index for generating a UUID, so this command will fail if it would change
the UUID of an active subarray.

.TP
.B \-\-update\-subarray=
If the device is a container and the argument to \-\-update\-subarray
specifies a subarray in the container, then attempt to update the given
superblock field in the subarray. See below in
.B MISC MODE
for details.

.TP
.BR \-t ", " \-\-test
When used with
.BR \-\-detail ,
the exit status of
.I mdadm
is set to reflect the status of the device. See below in
.B MISC MODE
for details.

1784 .TP
1785 .BR \-W ", " \-\-wait
1786 For each md device given, wait for any resync, recovery, or reshape
1787 activity to finish before returning.
1788 .I mdadm
1789 will return with success if it actually waited for every device
1790 listed, otherwise it will return failure.
1791
1792 .TP
1793 .BR \-\-wait\-clean
1794 For each md device given, or each device in /proc/mdstat if
1795 .B \-\-scan
1796 is given, arrange for the array to be marked clean as soon as possible.
1797 .I mdadm
1798 will return with success if the array uses external metadata and we
1799 successfully waited. For native arrays this returns immediately as the
1800 kernel handles dirty-clean transitions at shutdown. No action is taken
1801 if safe-mode handling is disabled.
1802
1803 .TP
1804 .B \-\-action=
1805 Set the "sync_action" for all md devices given to one of
1806 .BR idle ,
1807 .BR frozen ,
1808 .BR check ,
1809 .BR repair .
1810 Setting to
1811 .B idle
1812 will abort any currently running action though some actions will
1813 automatically restart.
1814 Setting to
1815 .B frozen
1816 will abort any current action and ensure no other action starts
1817 automatically.
1818
1819 Details of
1820 .B check
1821 and
1822 .B repair
can be found in
1824 .IR md (4)
1825 under
1826 .BR "SCRUBBING AND MISMATCHES" .
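
For example, to start a background consistency check on an array (the
device name is illustrative):
.in +5
.B " mdadm \-\-action=check /dev/md0"
.in -5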
1827
1828 .SH For Incremental Assembly mode:
1829 .TP
1830 .BR \-\-rebuild\-map ", " \-r
1831 Rebuild the map file
1832 .RB ( {MAP_PATH} )
1833 that
1834 .I mdadm
1835 uses to help track which arrays are currently being assembled.
1836
1837 .TP
1838 .BR \-\-run ", " \-R
1839 Run any array assembled as soon as a minimal number of devices are
1840 available, rather than waiting until all expected devices are present.
1841
1842 .TP
1843 .BR \-\-scan ", " \-s
1844 Only meaningful with
.BR \-R ,
this will scan the
1847 .B map
1848 file for arrays that are being incrementally assembled and will try to
1849 start any that are not already started. If any such array is listed
1850 in
1851 .B mdadm.conf
1852 as requiring an external bitmap, that bitmap will be attached first.
1853
1854 .TP
1855 .BR \-\-fail ", " \-f
1856 This allows the hot-plug system to remove devices that have fully disappeared
1857 from the kernel. It will first fail and then remove the device from any
1858 array it belongs to.
1859 The device name given should be a kernel device name such as "sda",
1860 not a name in
1861 .IR /dev .
1862
1863 .TP
1864 .BR \-\-path=
1865 Only used with \-\-fail. The 'path' given will be recorded so that if
1866 a new device appears at the same location it can be automatically
1867 added to the same array. This allows the failed device to be
1868 automatically replaced by a new device without metadata if it appears
at the specified path. This option is normally only set by a
1870 .I udev
1871 script.
1872
1873 .SH For Monitor mode:
1874 .TP
1875 .BR \-m ", " \-\-mail
1876 Give a mail address to send alerts to.
1877
1878 .TP
1879 .BR \-p ", " \-\-program ", " \-\-alert
1880 Give a program to be run whenever an event is detected.
1881
1882 .TP
1883 .BR \-y ", " \-\-syslog
1884 Cause all events to be reported through 'syslog'. The messages have
1885 facility of 'daemon' and varying priorities.
1886
1887 .TP
1888 .BR \-d ", " \-\-delay
1889 Give a delay in seconds.
1890 .I mdadm
1891 polls the md arrays and then waits this many seconds before polling
1892 again. The default is 60 seconds. Since 2.6.16, there is no need to
1893 reduce this as the kernel alerts
1894 .I mdadm
1895 immediately when there is any change.
1896
1897 .TP
1898 .BR \-r ", " \-\-increment
1899 Give a percentage increment.
1900 .I mdadm
1901 will generate RebuildNN events with the given percentage increment.
1902
1903 .TP
1904 .BR \-f ", " \-\-daemonise
1905 Tell
1906 .I mdadm
1907 to run as a background daemon if it decides to monitor anything. This
1908 causes it to fork and run in the child, and to disconnect from the
1909 terminal. The process id of the child is written to stdout.
1910 This is useful with
1911 .B \-\-scan
1912 which will only continue monitoring if a mail address or alert program
1913 is found in the config file.
1914
1915 .TP
1916 .BR \-i ", " \-\-pid\-file
1917 When
1918 .I mdadm
1919 is running in daemon mode, write the pid of the daemon process to
1920 the specified file, instead of printing it on standard output.
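
For example, to run a monitoring daemon in the background and record
its pid (the path is illustrative):
.in +5
.B " mdadm \-\-monitor \-\-scan \-\-daemonise \-\-pid\-file=/run/mdadm.pid"
.in -5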
1921
1922 .TP
1923 .BR \-1 ", " \-\-oneshot
1924 Check arrays only once. This will generate
1925 .B NewArray
1926 events and more significantly
1927 .B DegradedArray
1928 and
1929 .B SparesMissing
1930 events. Running
1931 .in +5
1932 .B " mdadm \-\-monitor \-\-scan \-1"
1933 .in -5
1934 from a cron script will ensure regular notification of any degraded arrays.
1935
1936 .TP
1937 .BR \-t ", " \-\-test
1938 Generate a
1939 .B TestMessage
1940 alert for every array found at startup. This alert gets mailed and
1941 passed to the alert program. This can be used for testing that alert
messages do get through successfully.
1943
1944 .TP
1945 .BR \-\-no\-sharing
1946 This inhibits the functionality for moving spares between arrays.
1947 Only one monitoring process started with
1948 .B \-\-scan
1949 but without this flag is allowed, otherwise the two could interfere
1950 with each other.
1951
1952 .SH ASSEMBLE MODE
1953
1954 .HP 12
1955 Usage:
1956 .B mdadm \-\-assemble
1957 .I md-device options-and-component-devices...
1958 .HP 12
1959 Usage:
1960 .B mdadm \-\-assemble \-\-scan
1961 .I md-devices-and-options...
1962 .HP 12
1963 Usage:
1964 .B mdadm \-\-assemble \-\-scan
1965 .I options...
1966
1967 .PP
1968 This usage assembles one or more RAID arrays from pre-existing components.
1969 For each array, mdadm needs to know the md device, the identity of the
1970 array, and a number of component-devices. These can be found in a number of ways.
1971
1972 In the first usage example (without the
1973 .BR \-\-scan )
1974 the first device given is the md device.
1975 In the second usage example, all devices listed are treated as md
1976 devices and assembly is attempted.
1977 In the third (where no devices are listed) all md devices that are
1978 listed in the configuration file are assembled. If no arrays are
1979 described by the configuration file, then any arrays that
1980 can be found on unused devices will be assembled.
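
For example, the first and third usages might look like this (device
names are illustrative):
.in +5
.B " mdadm \-\-assemble /dev/md0 /dev/sda1 /dev/sdb1"
.br
.B " mdadm \-\-assemble \-\-scan"
.in -5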
1981
1982 If precisely one device is listed, but
1983 .B \-\-scan
1984 is not given, then
1985 .I mdadm
1986 acts as though
1987 .B \-\-scan
1988 was given and identity information is extracted from the configuration file.
1989
1990 The identity can be given with the
1991 .B \-\-uuid
1992 option, the
1993 .B \-\-name
1994 option, or the
1995 .B \-\-super\-minor
option; otherwise it will be taken from the md-device record in the
config file, or from the superblock of the first component-device
listed on the command line.
1999
2000 Devices can be given on the
2001 .B \-\-assemble
2002 command line or in the config file. Only devices which have an md
2003 superblock which contains the right identity will be considered for
2004 any array.
2005
2006 The config file is only used if explicitly named with
2007 .B \-\-config
2008 or requested with (a possibly implicit)
2009 .BR \-\-scan .
In the latter case,
2011 .B /etc/mdadm.conf
2012 or
2013 .B /etc/mdadm/mdadm.conf
2014 is used.
2015
2016 If
2017 .B \-\-scan
2018 is not given, then the config file will only be used to find the
2019 identity of md arrays.
2020
2021 Normally the array will be started after it is assembled. However if
2022 .B \-\-scan
2023 is not given and not all expected drives were listed, then the array
2024 is not started (to guard against usage errors). To insist that the
2025 array be started in this case (as may work for RAID1, 4, 5, 6, or 10),
2026 give the
2027 .B \-\-run
2028 flag.
2029
2030 If
2031 .I udev
2032 is active,
2033 .I mdadm
2034 does not create any entries in
2035 .B /dev
2036 but leaves that to
2037 .IR udev .
2038 It does record information in
2039 .B {MAP_PATH}
2040 which will allow
2041 .I udev
2042 to choose the correct name.
2043
2044 If
2045 .I mdadm
2046 detects that udev is not configured, it will create the devices in
2047 .B /dev
2048 itself.
2049
2050 In Linux kernels prior to version 2.6.28 there were two distinctly
2051 different types of md devices that could be created: one that could be
2052 partitioned using standard partitioning tools and one that could not.
Since 2.6.28 that distinction is no longer relevant as both types of
device can be partitioned.
2055 .I mdadm
2056 will normally create the type that originally could not be partitioned
2057 as it has a well defined major number (9).
2058
Prior to 2.6.28, it was important that mdadm chose the correct type
2060 of array device to use. This can be controlled with the
2061 .B \-\-auto
2062 option. In particular, a value of "mdp" or "part" or "p" tells mdadm
2063 to use a partitionable device rather than the default.
2064
2065 In the no-udev case, the value given to
2066 .B \-\-auto
2067 can be suffixed by a number. This tells
2068 .I mdadm
2069 to create that number of partition devices rather than the default of 4.
2070
2071 The value given to
2072 .B \-\-auto
2073 can also be given in the configuration file as a word starting
2074 .B auto=
2075 on the ARRAY line for the relevant array.
2076
2077 .SS Auto Assembly
2078 When
2079 .B \-\-assemble
2080 is used with
2081 .B \-\-scan
2082 and no devices are listed,
2083 .I mdadm
2084 will first attempt to assemble all the arrays listed in the config
2085 file.
2086
2087 If no arrays are listed in the config (other than those marked
2088 .BR <ignore> )
2089 it will look through the available devices for possible arrays and
2090 will try to assemble anything that it finds. Arrays which are tagged
2091 as belonging to the given homehost will be assembled and started
2092 normally. Arrays which do not obviously belong to this host are given
2093 names that are expected not to conflict with anything local, and are
2094 started "read-auto" so that nothing is written to any device until the
array is written to; i.e. automatic resync etc. is delayed.
2096
2097 If
2098 .I mdadm
2099 finds a consistent set of devices that look like they should comprise
2100 an array, and if the superblock is tagged as belonging to the given
2101 home host, it will automatically choose a device name and try to
2102 assemble the array. If the array uses version-0.90 metadata, then the
2103 .B minor
2104 number as recorded in the superblock is used to create a name in
2105 .B /dev/md/
2106 so for example
2107 .BR /dev/md/3 .
2108 If the array uses version-1 metadata, then the
2109 .B name
2110 from the superblock is used to similarly create a name in
2111 .B /dev/md/
2112 (the name will have any 'host' prefix stripped first).
2113
2114 This behaviour can be modified by the
2115 .I AUTO
2116 line in the
2117 .I mdadm.conf
2118 configuration file. This line can indicate that specific metadata
2119 type should, or should not, be automatically assembled. If an array
2120 is found which is not listed in
2121 .I mdadm.conf
2122 and has a metadata format that is denied by the
2123 .I AUTO
2124 line, then it will not be assembled.
2125 The
2126 .I AUTO
2127 line can also request that all arrays identified as being for this
2128 homehost should be assembled regardless of their metadata type.
2129 See
2130 .IR mdadm.conf (5)
2131 for further details.
2132
2133 Note: Auto assembly cannot be used for assembling and activating some
2134 arrays which are undergoing reshape. In particular as the
2135 .B backup\-file
2136 cannot be given, any reshape which requires a backup-file to continue
2137 cannot be started by auto assembly. An array which is growing to more
2138 devices and has passed the critical section can be assembled using
2139 auto-assembly.
2140
2141 .SH BUILD MODE
2142
2143 .HP 12
2144 Usage:
2145 .B mdadm \-\-build
2146 .I md-device
2147 .BI \-\-chunk= X
2148 .BI \-\-level= Y
2149 .BI \-\-raid\-devices= Z
2150 .I devices
2151
2152 .PP
2153 This usage is similar to
2154 .BR \-\-create .
2155 The difference is that it creates an array without a superblock. With
2156 these arrays there is no difference between initially creating the array and
2157 subsequently assembling the array, except that hopefully there is useful
2158 data there in the second case.
2159
The level may be raid0, linear, raid1, raid10, multipath, or faulty, or
2161 one of their synonyms. All devices must be listed and the array will
2162 be started once complete. It will often be appropriate to use
2163 .B \-\-assume\-clean
2164 with levels raid1 or raid10.
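
For example, to build a two-device striped array without superblocks
(device names are illustrative):
.in +5
.B " mdadm \-\-build /dev/md0 \-\-chunk=64 \-\-level=raid0 \-\-raid\-devices=2 /dev/sda1 /dev/sdb1"
.in -5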
2165
2166 .SH CREATE MODE
2167
2168 .HP 12
2169 Usage:
2170 .B mdadm \-\-create
2171 .I md-device
2172 .BI \-\-chunk= X
2173 .BI \-\-level= Y
2174 .br
2175 .BI \-\-raid\-devices= Z
2176 .I devices
2177
2178 .PP
2179 This usage will initialise a new md array, associate some devices with
2180 it, and activate the array.
2181
2182 The named device will normally not exist when
2183 .I "mdadm \-\-create"
2184 is run, but will be created by
2185 .I udev
2186 once the array becomes active.
2187
2188 As devices are added, they are checked to see if they contain RAID
2189 superblocks or filesystems. They are also checked to see if the variance in
2190 device size exceeds 1%.
2191
2192 If any discrepancy is found, the array will not automatically be run, though
2193 the presence of a
2194 .B \-\-run
2195 can override this caution.
2196
2197 To create a "degraded" array in which some devices are missing, simply
2198 give the word "\fBmissing\fP"
2199 in place of a device name. This will cause
2200 .I mdadm
2201 to leave the corresponding slot in the array empty.
2202 For a RAID4 or RAID5 array at most one slot can be
2203 "\fBmissing\fP"; for a RAID6 array at most two slots.
2204 For a RAID1 array, only one real device needs to be given. All of the
2205 others can be
2206 "\fBmissing\fP".
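
For example, to create a degraded RAID5 array with one slot left empty
(device names are illustrative):
.in +5
.B " mdadm \-\-create /dev/md0 \-\-level=5 \-\-raid\-devices=3 /dev/sda1 /dev/sdb1 missing"
.in -5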
2207
2208 When creating a RAID5 array,
2209 .I mdadm
2210 will automatically create a degraded array with an extra spare drive.
2211 This is because building the spare into a degraded array is in general
2212 faster than resyncing the parity on a non-degraded, but not clean,
2213 array. This feature can be overridden with the
2214 .B \-\-force
2215 option.
2216
2217 When creating an array with version-1 metadata a name for the array is
2218 required.
2219 If this is not given with the
2220 .B \-\-name
2221 option,
2222 .I mdadm
2223 will choose a name based on the last component of the name of the
2224 device being created. So if
2225 .B /dev/md3
2226 is being created, then the name
2227 .B 3
2228 will be chosen.
2229 If
2230 .B /dev/md/home
2231 is being created, then the name
2232 .B home
2233 will be used.
2234
2235 When creating a partition based array, using
2236 .I mdadm
2237 with version-1.x metadata, the partition type should be set to
2238 .B 0xDA
(non fs-data). This type selection allows for greater precision, since
using any other type [such as RAID auto-detect (0xFD) or GNU/Linux
partition (0x83)] might create problems in the event of array recovery
through a live CD-ROM.
2242
A new array will normally get a randomly assigned 128-bit UUID which is
2244 very likely to be unique. If you have a specific need, you can choose
2245 a UUID for the array by giving the
2246 .B \-\-uuid=
2247 option. Be warned that creating two arrays with the same UUID is a
2248 recipe for disaster. Also, using
2249 .B \-\-uuid=
2250 when creating a v0.90 array will silently override any
2251 .B \-\-homehost=
2252 setting.
2253 .\"If the
2254 .\".B \-\-size
2255 .\"option is given, it is not necessary to list any component-devices in this command.
2256 .\"They can be added later, before a
2257 .\".B \-\-run.
2258 .\"If no
2259 .\".B \-\-size
2260 .\"is given, the apparent size of the smallest drive given is used.
2261
2262 If the array type supports a write-intent bitmap, and if the devices
in the array exceed 100G in size, an internal write-intent bitmap
2264 will automatically be added unless some other option is explicitly
2265 requested with the
2266 .B \-\-bitmap
2267 option or a different consistency policy is selected with the
2268 .B \-\-consistency\-policy
2269 option. In any case space for a bitmap will be reserved so that one
2270 can be added later with
2271 .BR "\-\-grow \-\-bitmap=internal" .
2272
2273 If the metadata type supports it (currently only 1.x and IMSM metadata),
2274 space will be allocated to store a bad block list. This allows a modest
2275 number of bad blocks to be recorded, allowing the drive to remain in
2276 service while only partially functional.
2277
2278 When creating an array within a
2279 .B CONTAINER
2280 .I mdadm
2281 can be given either the list of devices to use, or simply the name of
2282 the container. The former case gives control over which devices in
2283 the container will be used for the array. The latter case allows
2284 .I mdadm
2285 to automatically choose which devices to use based on how much spare
2286 space is available.
2287
2288 The General Management options that are valid with
2289 .B \-\-create
2290 are:
2291 .TP
2292 .B \-\-run
2293 insist on running the array even if some devices look like they might
2294 be in use.
2295
2296 .TP
2297 .B \-\-readonly
2298 start the array in readonly mode.
2299
2300 .SH MANAGE MODE
2301 .HP 12
2302 Usage:
2303 .B mdadm
2304 .I device
2305 .I options... devices...
2306 .PP
2307
2308 This usage will allow individual devices in an array to be failed,
2309 removed or added. It is possible to perform multiple operations with
one command. For example:
2311 .br
2312 .B " mdadm /dev/md0 \-f /dev/hda1 \-r /dev/hda1 \-a /dev/hda1"
2313 .br
2314 will firstly mark
2315 .B /dev/hda1
2316 as faulty in
2317 .B /dev/md0
2318 and will then remove it from the array and finally add it back
2319 in as a spare. However only one md array can be affected by a single
2320 command.
2321
2322 When a device is added to an active array, mdadm checks to see if it
2323 has metadata on it which suggests that it was recently a member of the
2324 array. If it does, it tries to "re\-add" the device. If there have
2325 been no changes since the device was removed, or if the array has a
2326 write-intent bitmap which has recorded whatever changes there were,
2327 then the device will immediately become a full member of the array and
2328 those differences recorded in the bitmap will be resolved.
2329
2330 .SH MISC MODE
2331 .HP 12
2332 Usage:
2333 .B mdadm
2334 .I options ...
2335 .I devices ...
2336 .PP
2337
2338 MISC mode includes a number of distinct operations that
2339 operate on distinct devices. The operations are:
2340 .TP
2341 .B \-\-query
2342 The device is examined to see if it is
2343 (1) an active md array, or
2344 (2) a component of an md array.
2345 The information discovered is reported.
2346
2347 .TP
2348 .B \-\-detail
2349 The device should be an active md device.
2350 .B mdadm
2351 will display a detailed description of the array.
2352 .B \-\-brief
2353 or
2354 .B \-\-scan
2355 will cause the output to be less detailed and the format to be
2356 suitable for inclusion in
2357 .BR mdadm.conf .
2358 The exit status of
2359 .I mdadm
2360 will normally be 0 unless
2361 .I mdadm
2362 failed to get useful information about the device(s); however, if the
2363 .B \-\-test
2364 option is given, then the exit status will be:
2365 .RS
2366 .TP
2367 0
2368 The array is functioning normally.
2369 .TP
2370 1
2371 The array has at least one failed device.
2372 .TP
2373 2
2374 The array has multiple failed devices such that it is unusable.
2375 .TP
2376 4
2377 There was an error while trying to get information about the device.
2378 .RE
2379
2380 .TP
2381 .B \-\-detail\-platform
2382 Print detail of the platform's RAID capabilities (firmware / hardware
2383 topology). If the metadata is specified with
2384 .B \-e
2385 or
2386 .B \-\-metadata=
2387 then the return status will be:
2388 .RS
2389 .TP
2390 0
2391 metadata successfully enumerated its platform components on this system
2392 .TP
2393 1
2394 metadata is platform independent
2395 .TP
2396 2
2397 metadata failed to find its platform components on this system
2398 .RE
2399
2400 .TP
2401 .B \-\-update\-subarray=
2402 If the device is a container and the argument to \-\-update\-subarray
2403 specifies a subarray in the container, then attempt to update the given
2404 superblock field in the subarray. Similar to updating an array in
2405 "assemble" mode, the field to update is selected by
2406 .B \-U
2407 or
2408 .B \-\-update=
2409 option. The supported options are
2410 .BR name ,
2411 .B ppl
2412 and
2413 .BR no\-ppl .
2414
2415 The
2416 .B name
option updates the subarray name in the metadata; it may not affect the
2418 device node name or the device node symlink until the subarray is
2419 re\-assembled. If updating
2420 .B name
2421 would change the UUID of an active subarray this operation is blocked,
2422 and the command will end in an error.
2423
2424 The
2425 .B ppl
2426 and
2427 .B no\-ppl
2428 options enable and disable PPL in the metadata. Currently supported only for
2429 IMSM subarrays.
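
For example, a rename might look like this (the container path,
subarray index and new name are illustrative):
.in +5
.B " mdadm /dev/md/imsm0 \-\-update\-subarray=0 \-\-update=name \-N newname"
.in -5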
2430
2431 .TP
2432 .B \-\-examine
2433 The device should be a component of an md array.
2434 .I mdadm
2435 will read the md superblock of the device and display the contents.
2436 If
2437 .B \-\-brief
2438 or
2439 .B \-\-scan
2440 is given, then multiple devices that are components of the one array
2441 are grouped together and reported in a single entry suitable
2442 for inclusion in
2443 .BR mdadm.conf .
2444
2445 Having
2446 .B \-\-scan
2447 without listing any devices will cause all devices listed in the
2448 config file to be examined.
2449
2450 .TP
2451 .BI \-\-dump= directory
2452 If the device contains RAID metadata, a file will be created in the
2453 .I directory
2454 and the metadata will be written to it. The file will be the same
2455 size as the device and have the metadata written in the file at the
same location that it occupies on the device. However the file will be "sparse" so
2457 that only those blocks containing metadata will be allocated. The
2458 total space used will be small.
2459
2460 The file name used in the
2461 .I directory
will be the base name of the device. Further, if any links appear in
2463 .I /dev/disk/by-id
2464 which point to the device, then hard links to the file will be created
2465 in
2466 .I directory
2467 based on these
2468 .I by-id
2469 names.
2470
2471 Multiple devices can be listed and their metadata will all be stored
2472 in the one directory.
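
For example, to save the metadata of a component device into a backup
directory (paths and device name are illustrative):
.in +5
.B " mdadm \-\-dump=/var/backup/md /dev/sda1"
.in -5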
2473
2474 .TP
2475 .BI \-\-restore= directory
2476 This is the reverse of
2477 .BR \-\-dump .
2478 .I mdadm
2479 will locate a file in the directory that has a name appropriate for
2480 the given device and will restore metadata from it. Names that match
2481 .I /dev/disk/by-id
2482 names are preferred, however if two of those refer to different files,
2483 .I mdadm
2484 will not choose between them but will abort the operation.
2485
2486 If a file name is given instead of a
2487 .I directory
2488 then
2489 .I mdadm
2490 will restore from that file to a single device, always provided the
2491 size of the file matches that of the device, and the file contains
2492 valid metadata.
2493 .TP
2494 .B \-\-stop
2495 The devices should be active md arrays which will be deactivated, as
2496 long as they are not currently in use.
2497
2498 .TP
2499 .B \-\-run
2500 This will fully activate a partially assembled md array.
2501
2502 .TP
2503 .B \-\-readonly
2504 This will mark an active array as read-only, providing that it is
2505 not currently being used.
2506
2507 .TP
2508 .B \-\-readwrite
2509 This will change a
2510 .B readonly
2511 array back to being read/write.
2512
2513 .TP
2514 .B \-\-scan
2515 For all operations except
2516 .BR \-\-examine ,
2517 .B \-\-scan
2518 will cause the operation to be applied to all arrays listed in
2519 .BR /proc/mdstat .
2520 For
.BR \-\-examine ,
2522 .B \-\-scan
2523 causes all devices listed in the config file to be examined.
2524
2525 .TP
2526 .BR \-b ", " \-\-brief
2527 Be less verbose. This is used with
2528 .B \-\-detail
2529 and
2530 .BR \-\-examine .
2531 Using
2532 .B \-\-brief
2533 with
2534 .B \-\-verbose
2535 gives an intermediate level of verbosity.
2536
2537 .SH MONITOR MODE
2538
2539 .HP 12
2540 Usage:
2541 .B mdadm \-\-monitor
2542 .I options... devices...
2543
2544 .PP
2545 This usage causes
2546 .I mdadm
2547 to periodically poll a number of md arrays and to report on any events
2548 noticed.
2549 .I mdadm
2550 will never exit once it decides that there are arrays to be checked,
2551 so it should normally be run in the background.
2552
2553 As well as reporting events,
2554 .I mdadm
2555 may move a spare drive from one array to another if they are in the
2556 same
2557 .B spare-group
2558 or
2559 .B domain
2560 and if the destination array has a failed drive but no spares.
2561
2562 If any devices are listed on the command line,
2563 .I mdadm
2564 will only monitor those devices. Otherwise all arrays listed in the
2565 configuration file will be monitored. Further, if
2566 .B \-\-scan
2567 is given, then any other md devices that appear in
2568 .B /proc/mdstat
2569 will also be monitored.
2570
2571 The result of monitoring the arrays is the generation of events.
2572 These events are passed to a separate program (if specified) and may
2573 be mailed to a given E-mail address.
2574
2575 When passing events to a program, the program is run once for each event,
2576 and is given 2 or 3 command-line arguments: the first is the
2577 name of the event (see below), the second is the name of the
2578 md device which is affected, and the third is the name of a related
2579 device if relevant (such as a component device that has failed).
2580
2581 If
2582 .B \-\-scan
2583 is given, then a program or an E-mail address must be specified on the
2584 command line or in the config file. If neither are available, then
2585 .I mdadm
2586 will not monitor anything.
2587 Without
.BR \-\-scan ,
2589 .I mdadm
2590 will continue monitoring as long as something was found to monitor. If
2591 no program or email is given, then each event is reported to
2592 .BR stdout .
2593
2594 The different events are:
2595
2596 .RS 4
2597 .TP
2598 .B DeviceDisappeared
2599 An md array which previously was configured appears to no longer be
2600 configured. (syslog priority: Critical)
2601
2602 If
2603 .I mdadm
2604 was told to monitor an array which is RAID0 or Linear, then it will
2605 report
2606 .B DeviceDisappeared
2607 with the extra information
2608 .BR Wrong-Level .
2609 This is because RAID0 and Linear do not support the device-failed,
2610 hot-spare and resync operations which are monitored.
2611
2612 .TP
2613 .B RebuildStarted
2614 An md array started reconstruction (e.g. recovery, resync, reshape,
2615 check, repair). (syslog priority: Warning)
2616
2617 .TP
2618 .BI Rebuild NN
2619 Where
2620 .I NN
is a two-digit number (e.g. 05, 48). This indicates that the rebuild
has passed that many percent of the total. The events are generated
at a fixed increment, starting from 0. The increment size may be
specified with a command-line option (the default is 20).
(syslog priority: Warning)
2625
2626 .TP
2627 .B RebuildFinished
An md array that was rebuilding is no longer doing so, either because it
2629 finished normally or was aborted. (syslog priority: Warning)
2630
2631 .TP
2632 .B Fail
2633 An active component device of an array has been marked as
2634 faulty. (syslog priority: Critical)
2635
2636 .TP
2637 .B FailSpare
2638 A spare component device which was being rebuilt to replace a faulty
2639 device has failed. (syslog priority: Critical)
2640
2641 .TP
2642 .B SpareActive
2643 A spare component device which was being rebuilt to replace a faulty
2644 device has been successfully rebuilt and has been made active.
2645 (syslog priority: Info)
2646
2647 .TP
2648 .B NewArray
2649 A new md array has been detected in the
2650 .B /proc/mdstat
2651 file. (syslog priority: Info)
2652
2653 .TP
2654 .B DegradedArray
2655 A newly noticed array appears to be degraded. This message is not
2656 generated when
2657 .I mdadm
2658 notices a drive failure which causes degradation, but only when
2659 .I mdadm
2660 notices that an array is degraded when it first sees the array.
2661 (syslog priority: Critical)
2662
2663 .TP
2664 .B MoveSpare
2665 A spare drive has been moved from one array in a
2666 .B spare-group
2667 or
2668 .B domain
2669 to another to allow a failed drive to be replaced.
2670 (syslog priority: Info)
2671
2672 .TP
2673 .B SparesMissing
2674 If
2675 .I mdadm
2676 has been told, via the config file, that an array should have a certain
2677 number of spare devices, and
2678 .I mdadm
2679 detects that it has fewer than this number when it first sees the
2680 array, it will report a
2681 .B SparesMissing
2682 message.
2683 (syslog priority: Warning)
2684
2685 .TP
2686 .B TestMessage
2687 An array was found at startup, and the
2688 .B \-\-test
2689 flag was given.
2690 (syslog priority: Info)
2691 .RE
2692
2693 Only
2694 .B Fail,
2695 .B FailSpare,
2696 .B DegradedArray,
2697 .B SparesMissing
2698 and
2699 .B TestMessage
2700 cause Email to be sent. All events cause the program to be run.
2701 The program is run with two or three arguments: the event
2702 name, the array device and possibly a second device.
2703
2704 Each event has an associated array device (e.g.
2705 .BR /dev/md1 )
2706 and possibly a second device. For
2707 .BR Fail ,
2708 .BR FailSpare ,
2709 and
2710 .B SpareActive
2711 the second device is the relevant component device.
2712 For
2713 .B MoveSpare
2714 the second device is the array that the spare was moved from.
2715
2716 For
2717 .I mdadm
2718 to move spares from one array to another, the different arrays need to
2719 be labeled with the same
2720 .B spare-group
2721 or the spares must be allowed to migrate through matching POLICY domains
2722 in the configuration file. The
2723 .B spare-group
2724 name can be any string; it is only necessary that different spare
2725 groups use different names.
2726
2727 When
2728 .I mdadm
2729 detects that an array in a spare group has fewer active
2730 devices than necessary for the complete array, and has no spare
2731 devices, it will look for another array in the same spare group that
has a full complement of working drives and a spare. It will then
attempt to remove the spare from the second array and add it to the
2734 first.
2735 If the removal succeeds but the adding fails, then it is added back to
2736 the original array.
2737
2738 If the spare group for a degraded array is not defined,
2739 .I mdadm
2740 will look at the rules of spare migration specified by POLICY lines in
2741 .B mdadm.conf
2742 and then follow similar steps as above if a matching spare is found.
2743
2744 .SH GROW MODE
2745 The GROW mode is used for changing the size or shape of an active
2746 array.
2747 For this to work, the kernel must support the necessary change.
2748 Various types of growth are being added during 2.6 development.
2749
2750 Currently the supported changes include
2751 .IP \(bu 4
2752 change the "size" attribute for RAID1, RAID4, RAID5 and RAID6.
2753 .IP \(bu 4
2754 increase or decrease the "raid\-devices" attribute of RAID0, RAID1, RAID4,
2755 RAID5, and RAID6.
2756 .IP \(bu 4
2757 change the chunk-size and layout of RAID0, RAID4, RAID5, RAID6 and RAID10.
2758 .IP \(bu 4
2759 convert between RAID1 and RAID5, between RAID5 and RAID6, between
2760 RAID0, RAID4, and RAID5, and between RAID0 and RAID10 (in the near-2 mode).
2761 .IP \(bu 4
2762 add a write-intent bitmap to any array which supports these bitmaps, or
2763 remove a write-intent bitmap from such an array.
2764 .IP \(bu 4
2765 change the array's consistency policy.
2766 .PP
2767
2768 Using GROW on containers is currently supported only for Intel's IMSM
2769 container format. The number of devices in a container can be
2770 increased - which affects all arrays in the container - or an array
2771 in a container can be converted between levels where those levels are
supported by the container, and the conversion is one of those listed
2773 above.
2774
2775 .PP
2776 Notes:
2777 .IP \(bu 4
Intel's native checkpointing doesn't use the
.B \-\-backup\-file
option and is transparent to the assembly feature.
2781 .IP \(bu 4
2782 Roaming between Windows(R) and Linux systems for IMSM metadata is not
2783 supported during grow process.
2784 .IP \(bu 4
2785 When growing a raid0 device, the new component disk size (or external
2786 backup size) should be larger than LCM(old, new) * chunk-size * 2,
2787 where LCM() is the least common multiple of the old and new count of
2788 component disks, and "* 2" comes from the fact that mdadm refuses to
2789 use more than half of a spare device for backup space.

.SS SIZE CHANGES
Normally when an array is built the "size" is taken from the smallest
of the drives. If all the small drives in an array are, one at a
time, removed and replaced with larger drives, then you could have an
array of large drives with only a small amount used. In this
situation, changing the "size" with "GROW" mode will allow the extra
space to start being used. If the size is increased in this way, a
"resync" process will start to make sure the new parts of the array
are synchronised.

Note that when an array changes size, any filesystem that may be
stored in the array will not automatically grow or shrink to use or
vacate the space. The
filesystem will need to be explicitly told to use the extra space
after growing, or to reduce its size
.B prior
to shrinking the array.

Also the size of an array cannot be changed while it has an active
bitmap. If an array has a bitmap, it must be removed before the size
can be changed. Once the change is complete a new bitmap can be created.

.PP
Note:
.B "--grow --size"
is not yet supported for external file bitmaps.
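Putting the above together, a typical size grow after replacing all
drives with larger ones might look like this sketch (assuming an ext4
filesystem on the hypothetical /dev/md0; root required):

```shell
# A bitmap blocks size changes, so drop it first
mdadm --grow /dev/md0 --bitmap=none
# Grow the array to use all of the (now larger) component devices
mdadm --grow /dev/md0 --size=max
# Re-create the bitmap once the change is complete
mdadm --grow /dev/md0 --bitmap=internal
# The filesystem must be told explicitly to use the new space
resize2fs /dev/md0
```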

.SS RAID\-DEVICES CHANGES

A RAID1 array can work with any number of devices from 1 upwards
(though 1 is not very useful). There may be times when you want to
increase or decrease the number of active devices. Note that this is
different from hot-add or hot-remove, which change the number of
inactive devices.

When reducing the number of devices in a RAID1 array, the slots which
are to be removed from the array must already be vacant. That is, the
devices which were in those slots must be failed and removed.

When the number of devices is increased, any hot spares that are
present will be activated immediately.

Changing the number of active devices in a RAID5 or RAID6 is much more
effort. Every block in the array will need to be read and written
back to a new location. From 2.6.17, the Linux Kernel is able to
increase the number of devices in a RAID5 safely, including restarting
an interrupted "reshape". From 2.6.31, the Linux Kernel is able to
increase or decrease the number of devices in a RAID5 or RAID6.

From 2.6.35, the Linux Kernel is able to convert a RAID0 into a RAID4
or RAID5.
.I mdadm
uses this functionality and the ability to add
devices to a RAID4 to allow devices to be added to a RAID0. When
requested to do this,
.I mdadm
will convert the RAID0 to a RAID4, add the necessary disks and make
the reshape happen, and then convert the RAID4 back to RAID0.

When decreasing the number of devices, the size of the array will also
decrease. If there was data in the array, it could get destroyed and
this is not reversible, so you should first shrink the filesystem on
the array to fit within the new size. To help prevent accidents,
.I mdadm
requires that the size of the array be decreased first with
.BR "mdadm --grow --array-size" .
This is a reversible change which simply makes the end of the array
inaccessible. The integrity of any data can then be checked before
the non-reversible reduction in the number of devices is requested.

When relocating the first few stripes on a RAID5 or RAID6, it is not
possible to keep the data on disk completely consistent and
crash-proof. To provide the required safety, mdadm disables writes to
the array while this "critical section" is reshaped, and takes a
backup of the data that is in that section. For grows, this backup may be
stored in any spare devices that the array has, however it can also be
stored in a separate file specified with the
.B \-\-backup\-file
option, and it is required to be specified for shrinks, RAID level
changes and layout changes. If this option is used, and the system
does crash during the critical period, the same file must be passed to
.B \-\-assemble
to restore the backup and reassemble the array. When shrinking rather
than growing the array, the reshape is done from the end towards the
beginning, so the "critical section" is at the end of the reshape.
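The safe shrinking sequence described above can be sketched as follows
(the sizes, the ext4 filesystem and the device names are hypothetical):

```shell
# 1. Shrink the filesystem below the target array size
resize2fs /dev/md0 400G
# 2. Reversibly truncate the array; data beyond this point becomes
#    inaccessible but is not destroyed
mdadm --grow /dev/md0 --array-size=450G
# 3. Check integrity while the change is still reversible
fsck.ext4 -n /dev/md0
# 4. Irreversibly reduce the device count; a backup file is
#    required for shrinks
mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0.bak
```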

.SS LEVEL CHANGES

Changing the RAID level of any array happens instantaneously. However
in the RAID5 to RAID6 case this requires a non-standard layout of the
RAID6 data, and in the RAID6 to RAID5 case that non-standard layout is
required before the change can be accomplished. So while the level
change is instant, the accompanying layout change can take quite a
long time. A
.B \-\-backup\-file
is required. If the array is not simultaneously being grown or
shrunk, so that the array size will remain the same - for example,
reshaping a 3-drive RAID5 into a 4-drive RAID6 - the backup file will
be used not just for a "critical section" but throughout the reshape
operation, as described below under LAYOUT CHANGES.
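For example, a level change with its accompanying layout change might
be requested as (device and file names hypothetical):

```shell
# RAID5 -> RAID6; the backup file must not live on the array itself,
# and a spare drive should already be attached
mdadm --grow /dev/md0 --level=6 --backup-file=/root/md0-grow.bak
```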

.SS CHUNK-SIZE AND LAYOUT CHANGES

Changing the chunk-size or layout without also changing the number of
devices at the same time will involve re-writing all blocks in-place.
To ensure against data loss in the case of a crash, a
.B --backup-file
must be provided for these changes. Small sections of the array will
be copied to the backup file while they are being rearranged. This
means that all the data is copied twice, once to the backup and once
to the new layout on the array, so this type of reshape will go very
slowly.

If the reshape is interrupted for any reason, this backup file must be
made available to
.B "mdadm --assemble"
so the array can be reassembled. Consequently the file cannot be
stored on the device being reshaped.
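A chunk-size change, and recovery after a crash mid-reshape, might be
sketched as (device names and the backup path are hypothetical; the
backup file lives on a separate filesystem):

```shell
# Re-write every block with a 256K chunk size, staging data through
# the backup file
mdadm --grow /dev/md0 --chunk=256 --backup-file=/mnt/other/md0.bak

# If the system crashed during the reshape, the same file must be
# given when reassembling
mdadm --assemble /dev/md0 --backup-file=/mnt/other/md0.bak /dev/sd[bcd]1
```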


.SS BITMAP CHANGES

A write-intent bitmap can be added to, or removed from, an active
array. Either internal bitmaps, or bitmaps stored in a separate file,
can be added. Note that if you add a bitmap stored in a file which is
in a filesystem that is on the RAID array being affected, the system
will deadlock. The bitmap must be on a separate filesystem.
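For instance (assuming the hypothetical /dev/md0, and a bitmap file on
a filesystem that is not on the array):

```shell
# Internal bitmap: stored in the array's own metadata area
mdadm --grow /dev/md0 --bitmap=internal
mdadm --grow /dev/md0 --bitmap=none

# External bitmap: the file must live on a separate filesystem
mdadm --grow /dev/md0 --bitmap=/var/lib/mdadm/md0-bitmap
```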

.SS CONSISTENCY POLICY CHANGES

The consistency policy of an active array can be changed by using the
.B \-\-consistency\-policy
option in Grow mode. Currently this works only for the
.B ppl
and
.B resync
policies and allows enabling or disabling the RAID5 Partial Parity Log (PPL).
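As a sketch (the device name is hypothetical; the array must be a
RAID5 for PPL to apply):

```shell
# Enable the Partial Parity Log
mdadm --grow /dev/md0 --consistency-policy=ppl
# Revert to the default resync policy
mdadm --grow /dev/md0 --consistency-policy=resync
```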

.SH INCREMENTAL MODE

.HP 12
Usage:
.B mdadm \-\-incremental
.RB [ \-\-run ]
.RB [ \-\-quiet ]
.I component-device
.RI [ optional-aliases-for-device ]
.HP 12
Usage:
.B mdadm \-\-incremental \-\-fail
.I component-device
.HP 12
Usage:
.B mdadm \-\-incremental \-\-rebuild\-map
.HP 12
Usage:
.B mdadm \-\-incremental \-\-run \-\-scan

.PP
This mode is designed to be used in conjunction with a device
discovery system. As devices are found in a system, they can be
passed to
.B "mdadm \-\-incremental"
to be conditionally added to an appropriate array.

Conversely, it can also be used with the
.B \-\-fail
flag to do just the opposite and find whatever array a particular device
is part of and remove the device from that array.
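A device-discovery hook (for example a program run from a udev rule)
might therefore invoke, per device (device name hypothetical):

```shell
# Conditionally add a newly appeared device to whatever array it
# belongs to
mdadm --incremental --quiet /dev/sdc1

# On removal, fail the device out of its array
mdadm --incremental --fail /dev/sdc1

# Once discovery is complete, start all partially assembled arrays
mdadm --incremental --run --scan
```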

If the device passed is a
.B CONTAINER
device created by a previous call to
.IR mdadm ,
then rather than trying to add that device to an array, all the arrays
described by the metadata of the container will be started.

.I mdadm
performs a number of tests to determine if the device is part of an
array, and which array it should be part of. If an appropriate array
is found, or can be created,
.I mdadm
adds the device to the array and conditionally starts the array.

Note that
.I mdadm
will normally only add devices to an array which were previously working
(active or spare) parts of that array. Support for automatic
inclusion of a new drive as a spare in some array requires
configuration through POLICY lines in the config file.

The tests that
.I mdadm
makes are as follows:
.IP +
Is the device permitted by
.BR mdadm.conf ?
That is, is it listed in a
.B DEVICES
line in that file? If
.B DEVICES
is absent then the default is to allow any device. Similarly if
.B DEVICES
contains the special word
.B partitions
then any device is allowed. Otherwise the device name given to
.IR mdadm ,
or one of the aliases given, or an alias found in the filesystem,
must match one of the names or patterns in a
.B DEVICES
line.

This is the only context where the aliases are used. They are
usually provided by
.I udev
rules mentioning
.BR $env{DEVLINKS} .

.IP +
Does the device have a valid md superblock? If a specific metadata
version is requested with
.B \-\-metadata
or
.B \-e
then only that style of metadata is accepted, otherwise
.I mdadm
finds any known version of metadata. If no
.I md
metadata is found, the device may still be added to an array
as a spare if POLICY allows.

.ig
.IP +
Does the metadata match an expected array?
The metadata can match in two ways. Either there is an array listed
in
.B mdadm.conf
which identifies the array (either by UUID, by name, by device list,
or by minor-number), or the array was created with a
.B homehost
specified and that
.B homehost
matches the one in
.B mdadm.conf
or on the command line.
If
.I mdadm
is not able to positively identify the array as belonging to the
current host, the device will be rejected.
..

.PP
.I mdadm
keeps a list of arrays that it has partially assembled in
.BR {MAP_PATH} .
If no array exists which matches
the metadata on the new device,
.I mdadm
must choose a device name and unit number. It does this based on any
name given in
.B mdadm.conf
or any name information stored in the metadata. If this name
suggests a unit number, that number will be used, otherwise a free
unit number will be chosen. Normally
.I mdadm
will prefer to create a partitionable array, however if the
.B CREATE
line in
.B mdadm.conf
suggests that a non-partitionable array is preferred, that will be
honoured.

If the array is not found in the config file and its metadata does not
identify it as belonging to the "homehost", then
.I mdadm
will choose a name for the array which is certain not to conflict with
any array which does belong to this host. It does this by adding an
underscore and a small number to the name preferred by the metadata.

Once an appropriate array is found or created and the device is added,
.I mdadm
must decide if the array is ready to be started. It will
normally compare the number of available (non-spare) devices to the
number of devices that the metadata suggests need to be active. If
there are at least that many, the array will be started. This means
that if any devices are missing the array will not be restarted.

As an alternative,
.B \-\-run
may be passed to
.I mdadm
in which case the array will be run as soon as there are enough
devices present for the data to be accessible. For a RAID1, that
means one device will start the array. For a clean RAID5, the array
will be started as soon as all but one drive is present.

Note that neither of these approaches is really ideal. If it can
be known that all device discovery has completed, then
.br
.B " mdadm \-IRs"
.br
can be run which will try to start all arrays that are being
incrementally assembled. They are started in "read-auto" mode in
which they are read-only until the first write request. This means
that no metadata updates are made and no attempt at resync or recovery
happens. Further devices that are found before the first write can
still be added safely.

.SH ENVIRONMENT
This section describes environment variables that affect how mdadm
operates.

.TP
.B MDADM_NO_MDMON
Setting this value to 1 will prevent mdadm from automatically launching
mdmon. This variable is intended primarily for debugging mdadm/mdmon.

.TP
.B MDADM_NO_UDEV
Normally,
.I mdadm
does not create any device nodes in /dev, but leaves that task to
.IR udev .
If
.I udev
appears not to be configured, or if this environment variable is set
to '1', then
.I mdadm
will create any devices that are needed.

.TP
.B MDADM_NO_SYSTEMCTL
If
.I mdadm
detects that
.I systemd
is in use it will normally request
.I systemd
to start various background tasks (particularly
.IR mdmon )
rather than forking and running them in the background. This can be
suppressed by setting
.BR MDADM_NO_SYSTEMCTL=1 .

.TP
.B IMSM_NO_PLATFORM
A key value of IMSM metadata is that it allows interoperability with
boot ROMs on Intel platforms, and with other major operating systems.
Consequently,
.I mdadm
will only allow an IMSM array to be created or modified if it detects
that it is running on an Intel platform which supports IMSM, and
supports the particular configuration of IMSM that is being requested
(some functionality requires newer OROM support).

These checks can be suppressed by setting IMSM_NO_PLATFORM=1 in the
environment. This can be useful for testing or for disaster
recovery. You should be aware that interoperability may be
compromised by setting this value.

.TP
.B MDADM_GROW_ALLOW_OLD
If an array is stopped while it is performing a reshape and that
reshape was making use of a backup file, then when the array is
re-assembled
.I mdadm
will sometimes complain that the backup file is too old. If this
happens and you are certain it is the right backup file, you can
over-ride this check by setting
.B MDADM_GROW_ALLOW_OLD=1
in the environment.
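For example, to resume an interrupted reshape when this complaint is
known to be spurious (device and file names hypothetical):

```shell
MDADM_GROW_ALLOW_OLD=1 mdadm --assemble /dev/md0 \
    --backup-file=/root/md0.bak /dev/sd[bcd]1
```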

.TP
.B MDADM_CONF_AUTO
Any string given in this variable is added to the start of the
.B AUTO
line in the config file, or treated as the whole
.B AUTO
line if none is given. It can be used to disable certain metadata
types when
.I mdadm
is called from a boot script. For example
.br
.B " export MDADM_CONF_AUTO='-ddf -imsm'
.br
will make sure that
.I mdadm
does not automatically assemble any DDF or
IMSM arrays that are found. This can be useful on systems configured
to manage such arrays with
.BR dmraid .


.SH EXAMPLES

.B " mdadm \-\-query /dev/name-of-device"
.br
This will find out if a given device is a RAID array, or is part of
one, and will provide brief information about the device.

.B " mdadm \-\-assemble \-\-scan"
.br
This will assemble and start all arrays listed in the standard config
file. This command will typically go in a system startup file.

.B " mdadm \-\-stop \-\-scan"
.br
This will shut down all arrays that can be shut down (i.e. are not
currently in use). This will typically go in a system shutdown script.

.B " mdadm \-\-follow \-\-scan \-\-delay=120"
.br
If (and only if) there is an Email address or program given in the
standard config file, then
monitor the status of all arrays listed in that file by
polling them every 2 minutes.

.B " mdadm \-\-create /dev/md0 \-\-level=1 \-\-raid\-devices=2 /dev/hd[ac]1"
.br
Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.

.br
.B " echo 'DEVICE /dev/hd*[0\-9] /dev/sd*[0\-9]' > mdadm.conf"
.br
.B " mdadm \-\-detail \-\-scan >> mdadm.conf"
.br
This will create a prototype config file that describes currently
active arrays that are known to be made from partitions of IDE or SCSI drives.
This file should be reviewed before being used as it may
contain unwanted detail.

.B " echo 'DEVICE /dev/hd[a\-z] /dev/sd*[a\-z]' > mdadm.conf"
.br
.B " mdadm \-\-examine \-\-scan \-\-config=mdadm.conf >> mdadm.conf"
.br
This will find arrays which could be assembled from existing IDE and
SCSI whole drives (not partitions), and store the information in the
format of a config file.
This file is very likely to contain unwanted detail, particularly
the
.B devices=
entries. It should be reviewed and edited before being used as an
actual config file.

.B " mdadm \-\-examine \-\-brief \-\-scan \-\-config=partitions"
.br
.B " mdadm \-Ebsc partitions"
.br
Create a list of devices by reading
.BR /proc/partitions ,
scan these for RAID superblocks, and print out a brief listing of all
that were found.

.B " mdadm \-Ac partitions \-m 0 /dev/md0"
.br
Scan all partitions and devices listed in
.BR /proc/partitions
and assemble
.B /dev/md0
out of all such devices with a RAID superblock with a minor number of 0.

.B " mdadm \-\-monitor \-\-scan \-\-daemonise > /run/mdadm/mon.pid"
.br
If the config file contains a mail address or alert program, run mdadm in
the background in monitor mode monitoring all md devices. Also write
the pid of the mdadm daemon to
.BR /run/mdadm/mon.pid .

.B " mdadm \-Iq /dev/somedevice"
.br
Try to incorporate a newly discovered device into some array as
appropriate.

.B " mdadm \-\-incremental \-\-rebuild\-map \-\-run \-\-scan"
.br
Rebuild the array map from any current arrays, and then start any that
can be started.

.B " mdadm /dev/md4 --fail detached --remove detached"
.br
Any devices which are components of /dev/md4 will be marked as faulty
and then removed from the array.

.B " mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4"
.br
The array
.B /dev/md4
which is currently a RAID5 array will be converted to RAID6. There
should normally already be a spare drive attached to the array as a
RAID6 needs one more drive than a matching RAID5.

.B " mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]"
.br
Create a DDF array over 6 devices.

.B " mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf"
.br
Create a RAID5 array over any 3 devices in the given DDF set. Use
only 30 gigabytes of each device.

.B " mdadm -A /dev/md/ddf1 /dev/sd[a-f]"
.br
Assemble a pre-existing DDF array.

.B " mdadm -I /dev/md/ddf1"
.br
Assemble all arrays contained in the DDF array, assigning names as
appropriate.

.B " mdadm \-\-create \-\-help"
.br
Provide help about the Create mode.

.B " mdadm \-\-config \-\-help"
.br
Provide help about the format of the config file.

.B " mdadm \-\-help"
.br
Provide general help.

.SH FILES

.SS /proc/mdstat

If you're using the
.B /proc
filesystem,
.B /proc/mdstat
lists all active md devices with information about them.
.I mdadm
uses this to find arrays when
.B \-\-scan
is given in Misc mode, and to monitor array reconstruction
in Monitor mode.

.SS /etc/mdadm.conf

The config file lists which devices may be scanned to see if
they contain an MD superblock, and gives identifying information
(e.g. UUID) about known MD arrays. See
.BR mdadm.conf (5)
for more details.

.SS /etc/mdadm.conf.d

A directory containing configuration files which are read in lexical
order.

.SS {MAP_PATH}
When
.B \-\-incremental
mode is used, this file gets a list of arrays currently being created.

.SH DEVICE NAMES

.I mdadm
understands two sorts of names for array devices.

The first is the so-called 'standard' format name, which matches the
names used by the kernel and which appear in
.IR /proc/mdstat .

The second sort can be freely chosen, but must reside in
.IR /dev/md/ .
When giving a device name to
.I mdadm
to create or assemble an array, either a full path name such as
.I /dev/md0
or
.I /dev/md/home
can be given, or just the suffix of the second sort of name, such as
.IR home .

When
.I mdadm
chooses device names during auto-assembly or incremental assembly, it
will sometimes add a small sequence number to the end of the name to
avoid conflicts between multiple arrays that have the same name. If
.I mdadm
can reasonably determine that the array really is meant for this host,
either by a hostname in the metadata, or by the presence of the array
in
.BR mdadm.conf ,
then it will leave off the suffix if possible.
Also if the homehost is specified as
.B <ignore>
.I mdadm
will only use a suffix if a different array of the same name already
exists or is listed in the config file.

The standard names for non-partitioned arrays (the only sort of md
array available in 2.4 and earlier) are of the form
.IP
.RB /dev/md NN
.PP
where NN is a number.
The standard names for partitionable arrays (as available from 2.6
onwards) are of the form:
.IP
.RB /dev/md_d NN
.PP
Partition numbers should be indicated by adding "pMM" to these, thus "/dev/md/d1p2".
.PP
From kernel version 2.6.28 the "non-partitioned array" can actually
be partitioned. So the "md_d\fBNN\fP"
names are no longer needed, and
partitions such as "/dev/md\fBNN\fPp\fBXX\fP"
are possible.
.PP
From kernel version 2.6.29 standard names can be non-numeric, following
the form:
.IP
.RB /dev/md_ XXX
.PP
where
.B XXX
is any string. These names are supported by
.I mdadm
since version 3.3 provided they are enabled in
.IR mdadm.conf .

.SH NOTE
.I mdadm
was previously known as
.IR mdctl .

.SH SEE ALSO
For further information on mdadm usage, MD and the various levels of
RAID, see:
.IP
.B https://raid.wiki.kernel.org/
.PP
(based upon Jakob \(/Ostergaard's Software\-RAID.HOWTO)
.PP
The latest version of
.I mdadm
should always be available from
.IP
.B https://www.kernel.org/pub/linux/utils/raid/mdadm/
.PP
Related man pages:
.PP
.IR mdmon (8),
.IR mdadm.conf (5),
.IR md (4).