1 .TH MD 4
2 .SH NAME
3 md \- Multiple Device driver aka Linux Software RAID
4 .SH SYNOPSIS
5 .BI /dev/md n
6 .br
7 .BI /dev/md/ n
8 .SH DESCRIPTION
9 The
10 .B md
11 driver provides virtual devices that are created from one or more
12 independent underlying devices. This array of devices often contains
13 redundancy, and hence the acronym RAID which stands for a Redundant
14 Array of Independent Devices.
15 .PP
16 .B md
17 supports RAID levels
18 1 (mirroring),
19 4 (striped array with parity device),
20 5 (striped array with distributed parity information),
21 6 (striped array with distributed dual redundancy information), and
22 10 (striped and mirrored).
23 If some number of underlying devices fails while using one of these
24 levels, the array will continue to function; this number is one for
25 RAID levels 4 and 5, two for RAID level 6, and all but one (N-1) for
26 RAID level 1, and dependent on configuration for level 10.
27 .PP
28 .B md
29 also supports a number of pseudo RAID (non-redundant) configurations
30 including RAID0 (striped array), LINEAR (catenated array),
31 MULTIPATH (a set of different interfaces to the same device),
32 and FAULTY (a layer over a single device into which errors can be injected).
33
34 .SS MD SUPER BLOCK
35 Each device in an array may have a
36 .I superblock
37 which records information about the structure and state of the array.
38 This allows the array to be reliably re-assembled after a shutdown.
39
40 From Linux kernel version 2.6.10,
41 .B md
42 provides support for two different formats of this superblock, and
43 other formats can be added. Prior to this release, only one format was
44 supported.
45
46 The common format - known as version 0.90 - has
47 a superblock that is 4K long and is written into a 64K aligned block that
48 starts at least 64K and less than 128K from the end of the device
49 (i.e. to get the address of the superblock round the size of the
50 device down to a multiple of 64K and then subtract 64K).
51 The available size of each device is the amount of space before the
52 super block, so between 64K and 128K is lost when a device is
53 incorporated into an MD array.
54 This superblock stores multi-byte fields in a processor-dependent
55 manner, so arrays cannot easily be moved between computers with
56 different processors.
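.PP
The offset rule above can be illustrated with a short Python sketch
(illustrative only; the helper name is not part of
.BR md ):
.PP
.nf
# Sketch: byte offset of a version 0.90 superblock, following the rule above.
def sb_0_90_offset(device_size_bytes):
    """Round the device size down to a multiple of 64K, then subtract 64K."""
    block = 64 * 1024
    return (device_size_bytes // block) * block - block

# A 1000000-byte device keeps its superblock at offset 917504 (896K), which is
# at least 64K and less than 128K from the end of the device.
print(sb_0_90_offset(1000000))
.fi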
57
58 The new format - known as version 1 - has a superblock that is
59 normally 1K long, but can be longer. It is normally stored between 8K
60 and 12K from the end of the device, on a 4K boundary, though
61 variations can be stored at the start of the device (version 1.1) or 4K from
62 the start of the device (version 1.2).
63 This superblock format stores multibyte data in a
64 processor-independent format and supports up to hundreds of
65 component devices (version 0.90 only supports 28).
66
67 The superblock contains, among other things:
68 .TP
69 LEVEL
70 The manner in which the devices are arranged into the array
71 (linear, raid0, raid1, raid4, raid5, raid10, multipath).
72 .TP
73 UUID
74 a 128 bit Universally Unique Identifier that identifies the array that
75 this device is part of.
76
77 .SS ARRAYS WITHOUT SUPERBLOCKS
78 While it is usually best to create arrays with superblocks so that
79 they can be assembled reliably, there are some circumstances where an
80 array without superblocks is preferred. These include:
81 .TP
82 LEGACY ARRAYS
83 Early versions of the
84 .B md
85 driver only supported Linear and Raid0 configurations and did not use
86 a superblock (which is less critical with these configurations).
87 While such arrays should be rebuilt with superblocks if possible,
88 .B md
89 continues to support them.
90 .TP
91 FAULTY
92 Being a largely transparent layer over a different device, the FAULTY
93 personality doesn't gain anything from having a superblock.
94 .TP
95 MULTIPATH
96 It is often possible to detect devices which are different paths to
97 the same storage directly rather than having a distinctive superblock
98 written to the device and searched for on all paths. In this case,
99 a MULTIPATH array with no superblock makes sense.
100 .TP
101 RAID1
102 In some configurations it might be desired to create a raid1
103 configuration that does not use a superblock, and to maintain the state of
104 the array elsewhere. While not encouraged for general use, it does
105 have special-purpose uses and is supported.
106
107 .SS LINEAR
108
109 A linear array simply catenates the available space on each
110 drive together to form one large virtual drive.
111
112 One advantage of this arrangement over the more common RAID0
113 arrangement is that the array may be reconfigured at a later time with
114 an extra drive and so the array is made bigger without disturbing the
115 data that is on the array. However this cannot be done on a live
116 array.
117
118 If a chunksize is given with a LINEAR array, the usable space on each
119 device is rounded down to a multiple of this chunksize.
120
121 .SS RAID0
122
123 A RAID0 array (which has zero redundancy) is also known as a
124 striped array.
125 A RAID0 array is configured at creation with a
126 .B "Chunk Size"
127 which must be a power of two, and at least 4 kibibytes.
128
129 The RAID0 driver assigns the first chunk of the array to the first
130 device, the second chunk to the second device, and so on until all
131 drives have been assigned one chunk. This collection of chunks forms
132 a
133 .BR stripe .
134 Further chunks are gathered into stripes in the same way, and these
135 stripes are assigned to the remaining space in the drives.
136
137 If devices in the array are not all the same size, then once the
138 smallest device has been exhausted, the RAID0 driver starts
139 collecting chunks into smaller stripes that only span the drives which
140 still have remaining space.
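.PP
The placement rule for equally sized devices can be sketched in a few
lines of Python (illustrative only; the function name is hypothetical):
.PP
.nf
# Sketch: RAID0 chunk placement when all member devices are the same size,
# so that every stripe spans every device.
def raid0_place(logical_chunk, num_devices):
    """Return (device index, chunk offset on that device) for a logical chunk."""
    device = logical_chunk % num_devices   # chunks go round-robin across devices
    stripe = logical_chunk // num_devices  # each full round of chunks is one stripe
    return device, stripe

# With 3 devices, chunks 0,1,2 form stripe 0 on devices 0,1,2; chunk 7 is the
# second chunk of stripe 2 and lands on device 1.
print(raid0_place(7, 3))   # -> (1, 2)
.fi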
141
142
143 .SS RAID1
144
145 A RAID1 array is also known as a mirrored set (though mirrors tend to
146 provide reflected images, which RAID1 does not) or a plex.
147
148 Once initialised, each device in a RAID1 array contains exactly the
149 same data. Changes are written to all devices in parallel. Data is
150 read from any one device. The driver attempts to distribute read
151 requests across all devices to maximise performance.
152
153 All devices in a RAID1 array should be the same size. If they are
154 not, then only the amount of space available on the smallest device is
155 used. Any extra space on other devices is wasted.
156
157 .SS RAID4
158
159 A RAID4 array is like a RAID0 array with an extra device for storing
160 parity. This device is the last of the active devices in the
161 array. Unlike RAID0, RAID4 also requires that all stripes span all
162 drives, so extra space on devices that are larger than the smallest is
163 wasted.
164
165 When any block in a RAID4 array is modified the parity block for that
166 stripe (i.e. the block in the parity device at the same device offset
167 as the stripe) is also modified so that the parity block always
168 contains the "parity" for the whole stripe. i.e. its contents are
169 equivalent to the result of performing an exclusive-or operation
170 between all the data blocks in the stripe.
171
172 This allows the array to continue to function if one device fails.
173 The data that was on that device can be calculated as needed from the
174 parity block and the other data blocks.
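.PP
The parity relationship can be written out directly: the parity block is
the byte-wise exclusive-or of the data blocks, and any single missing
block can be rebuilt by combining the remaining blocks the same way.
This Python sketch is illustrative only:
.PP
.nf
# Sketch: RAID4/RAID5 style parity as a byte-wise XOR over the blocks of a stripe.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]        # the data blocks of one stripe
parity = xor_blocks(data)                  # the parity block for that stripe

# If the second data block is lost, XOR-ing the parity block with the
# surviving data blocks reconstructs it.
assert xor_blocks([parity, data[0], data[2]]) == data[1]
.fi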
175
176 .SS RAID5
177
178 RAID5 is very similar to RAID4. The difference is that the parity
179 blocks for each stripe, instead of being on a single device, are
180 distributed across all devices. This allows more parallelism when
181 writing as two different block updates will quite possibly affect
182 parity blocks on different devices so there is less contention.
183
184 This also allows more parallelism when reading as read requests are
185 distributed over all the devices in the array instead of all but one.
186
187 .SS RAID6
188
189 RAID6 is similar to RAID5, but can handle the loss of any \fItwo\fP
190 devices without data loss. Accordingly, it requires N+2 drives to
191 store N drives worth of data.
192
193 The performance for RAID6 is slightly lower but comparable to RAID5 in
194 normal mode and single disk failure mode. It is very slow in dual
195 disk failure mode, however.
196
197 .SS RAID10
198
199 RAID10 provides a combination of RAID1 and RAID0, and is sometimes known
200 as RAID1+0. Every data block is duplicated some number of times, and
201 the resulting collection of data blocks is distributed over multiple
202 drives.
203
204 When configuring a RAID10 array it is necessary to specify the number
205 of replicas of each data block that are required (this will normally
206 be 2) and whether the replicas should be 'near' or 'far'.
207
208 When 'near' replicas are chosen, the multiple copies of a given chunk
209 are laid out consecutively across the stripes of the array, so the two
210 copies of a datablock will likely be at the same offset on two
211 adjacent devices.
212
213 When 'far' replicas are chosen, the multiple copies of a given chunk
214 are laid out quite distant from each other. The first copy of all
215 data blocks will be striped across the early part of all drives in
216 RAID0 fashion, and then the next copy of all blocks will be striped
217 across a later section of all drives, always ensuring that all copies
218 of any given block are on different drives.
219
220 The 'far' arrangement can give sequential read performance equal to
221 that of a RAID0 array, but at the cost of degraded write performance.
222
223 It should be noted that the number of devices in a RAID10 array need
224 not be a multiple of the number of replicas of each data block, though
225 there must be at least as many devices as replicas.
226
227 If, for example, an array is created with 5 devices and 2 replicas,
228 then space equivalent to 2.5 of the devices will be available, and
229 every block will be stored on two different devices.
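.PP
The example above can be made concrete with a small sketch (a simplified
model of the 'near' layout; the real driver also handles 'far' and other
variations):
.PP
.nf
# Sketch: usable capacity and simplified 'near' placement for RAID10.
def raid10_capacity(num_devices, device_size, replicas):
    return num_devices * device_size // replicas

def raid10_near_place(logical_chunk, num_devices, replicas):
    """Return the (device, chunk offset) pairs holding the copies of a chunk."""
    first = logical_chunk * replicas
    return [((first + r) % num_devices, (first + r) // num_devices)
            for r in range(replicas)]

# 5 devices of 100 units with 2 replicas: space equivalent to 2.5 devices.
print(raid10_capacity(5, 100, 2))     # -> 250
# Each chunk is stored on two different devices.
print(raid10_near_place(0, 5, 2))     # -> [(0, 0), (1, 0)]
print(raid10_near_place(2, 5, 2))     # -> [(4, 0), (0, 1)]
.fi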
230
231 Finally, it is possible to have an array with both 'near' and 'far'
232 copies. If an array is configured with 2 near copies and 2 far
233 copies, then there will be a total of 4 copies of each block, each on
234 a different drive. This is an artifact of the implementation and is
235 unlikely to be of real value.
236
237 .SS MULTIPATH
238
239 MULTIPATH is not really a RAID at all as there is only one real device
240 in a MULTIPATH md array. However there are multiple access points
241 (paths) to this device, and one of these paths might fail, so there
242 are some similarities.
243
244 A MULTIPATH array is composed of a number of logically different
245 devices, often fibre channel interfaces, that all refer to the same
246 real device. If one of these interfaces fails (e.g. due to cable
247 problems), the multipath driver will attempt to redirect requests to
248 another interface.
249
250 .SS FAULTY
251 The FAULTY md module is provided for testing purposes. A faulty array
252 has exactly one component device and is normally assembled without a
253 superblock, so the md array created provides direct access to all of
254 the data in the component device.
255
256 The FAULTY module may be requested to simulate faults to allow testing
257 of other md levels or of filesystems. Faults can be chosen to trigger
258 on read requests or write requests, and can be transient (a subsequent
259 read/write at the address will probably succeed) or persistent
260 (subsequent read/write of the same address will fail). Further, read
261 faults can be "fixable" meaning that they persist until a write
262 request at the same address.
263
264 Fault types can be requested with a period. In this case the fault
265 will recur repeatedly after the given number of requests of the
266 relevant type. For example if persistent read faults have a period of
267 100, then every 100th read request would generate a fault, and the
268 faulty sector would be recorded so that subsequent reads on that
269 sector would also fail.
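.PP
The fault schedule described above can be modelled in a few lines of
Python; this is only an illustration of the behaviour, not the module's
implementation:
.PP
.nf
# Sketch: periodic persistent read faults with a remembered-sector limit.
class PeriodicReadFaults:
    def __init__(self, period, max_remembered=1024):
        self.period = period                # every Nth read triggers a fault
        self.reads = 0
        self.bad_sectors = set()            # sectors that keep failing
        self.max_remembered = max_remembered

    def read(self, sector):
        self.reads += 1
        if sector in self.bad_sectors:
            return "fault"                  # persistent fault on a remembered sector
        if self.reads % self.period == 0:
            if len(self.bad_sectors) < self.max_remembered:
                self.bad_sectors.add(sector)
            return "fault"                  # past the limit the fault is only transient
        return "ok"

faults = PeriodicReadFaults(period=100)
results = [faults.read(sector) for sector in range(250)]
print(results.count("fault"))               # -> 2 (reads number 100 and 200)
.fi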
270
271 There is a limit to the number of faulty sectors that are remembered.
272 Faults generated after this limit is exhausted are treated as
273 transient.
274
275 The list of faulty sectors can be flushed, and the active list of
276 failure modes can be cleared.
277
278 .SS UNCLEAN SHUTDOWN
279
280 When changes are made to a RAID1, RAID4, RAID5, RAID6, or RAID10 array
281 there is a possibility of inconsistency for short periods of time as
282 each update requires at least two blocks to be written to different
283 devices, and these writes probably won't happen at exactly the same
284 time. Thus if a system with one of these arrays is shut down in the
285 middle of a write operation (e.g. due to power failure), the array may
286 not be consistent.
287
288 To handle this situation, the md driver marks an array as "dirty"
289 before writing any data to it, and marks it as "clean" when the array
290 is being disabled, e.g. at shutdown. If the md driver finds an array
291 to be dirty at startup, it proceeds to correct any possibly
292 inconsistency. For RAID1, this involves copying the contents of the
293 first drive onto all other drives. For RAID4, RAID5 and RAID6 this
294 involves recalculating the parity for each stripe and making sure that
295 the parity block has the correct data. For RAID10 it involves copying
296 one of the replicas of each block onto all the others. This process,
297 known as "resynchronising" or "resync", is performed in the background.
298 The array can still be used, though possibly with reduced performance.
299
300 If a RAID4, RAID5 or RAID6 array is degraded (missing at least one
301 drive) when it is restarted after an unclean shutdown, it cannot
302 recalculate parity, and so it is possible that data might be
303 undetectably corrupted. The 2.4 md driver
304 .B does not
305 alert the operator to this condition. The 2.6 md driver will fail to
306 start an array in this condition without manual intervention, though
307 this behaviour can be overridden by a kernel parameter.
308
309 .SS RECOVERY
310
311 If the md driver detects a write error on a device in a RAID1, RAID4,
312 RAID5, RAID6, or RAID10 array, it immediately disables that device
313 (marking it as faulty) and continues operation on the remaining
314 devices. If there is a spare drive, the driver will start recreating
315 on one of the spare drives the data that was on that failed drive,
316 either by copying a working drive in a RAID1 configuration, or by
317 doing calculations with the parity block on RAID4, RAID5 or RAID6, or
318 by finding and copying originals for RAID10.
319
320 In kernels prior to about 2.6.15, a read error would cause the same
321 effect as a write error. In later kernels, a read-error will instead
322 cause md to attempt a recovery by overwriting the bad block. i.e. it
323 will find the correct data from elsewhere, write it over the block
324 that failed, and then try to read it back again. If either the write
325 or the re-read fail, md will treat the error the same way that a write
326 error is treated and will fail the whole device.
327
328 While this recovery process is happening, the md driver will monitor
329 accesses to the array and will slow down the rate of recovery if other
330 activity is happening, so that normal access to the array will not be
331 unduly affected. When no other activity is happening, the recovery
332 process proceeds at full speed. The actual speed targets for the two
333 different situations can be controlled by the
334 .B speed_limit_min
335 and
336 .B speed_limit_max
337 control files mentioned below.
338
339 .SS BITMAP WRITE-INTENT LOGGING
340
341 From Linux 2.6.13,
342 .I md
343 supports a bitmap based write-intent log. If configured, the bitmap
344 is used to record which blocks of the array may be out of sync.
345 Before any write request is honoured, md will make sure that the
346 corresponding bit in the log is set. After a period of time with no
347 writes to an area of the array, the corresponding bit will be cleared.
348
349 This bitmap is used for two optimisations.
350
351 Firstly, after an unclean shutdown, the resync process will consult
352 the bitmap and only resync those blocks that correspond to bits in the
353 bitmap that are set. This can dramatically reduce resync time.
354
355 Secondly, when a drive fails and is removed from the array, md stops
356 clearing bits in the intent log. If that same drive is re-added to
357 the array, md will notice and will only recover the sections of the
358 drive that are covered by bits in the intent log that are set. This
359 can allow a device to be temporarily removed and reinserted without
360 causing an enormous recovery cost.
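.PP
A simplified model of this optimisation (illustrative only): when a
device is re-added, only regions whose write-intent bits are still set
need to be copied to it.
.PP
.nf
# Sketch: resync only the regions still marked dirty in a write-intent bitmap.
def regions_to_recover(bitmap):
    return [region for region, dirty in enumerate(bitmap) if dirty]

# While the device was absent, writes touched regions 3 and 7, so only those
# bits were left set; a re-added device needs just those regions copied.
bitmap = [False] * 10
bitmap[3] = bitmap[7] = True
print(regions_to_recover(bitmap))   # -> [3, 7]
.fi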
361
362 The intent log can be stored in a file on a separate device, or it can
363 be stored near the superblocks of an array which has superblocks.
364
365 It is possible to add an intent log to an active array, or remove an
366 intent log if one is present.
367
368 In 2.6.13, intent bitmaps are only supported with RAID1. Other levels
369 with redundancy are supported from 2.6.15.
370
371 .SS WRITE-BEHIND
372
373 From Linux 2.6.14,
374 .I md
375 supports WRITE-BEHIND on RAID1 arrays.
376
377 This allows certain devices in the array to be flagged as
378 .IR write-mostly .
379 MD will only read from such devices if there is no
380 other option.
381
382 If a write-intent bitmap is also provided, write requests to
383 write-mostly devices will be treated as write-behind requests and md
384 will not wait for writes to those devices to complete before
385 reporting the write as complete to the filesystem.
386
387 This allows for a RAID1 with WRITE-BEHIND to be used to mirror data
388 over a slow link to a remote computer (providing the link isn't too
389 slow). The extra latency of the remote link will not slow down normal
390 operations, but the remote system will still have a reasonably
391 up-to-date copy of all data.
392
393 .SS RESTRIPING
394
395 .IR Restriping ,
396 also known as
397 .IR Reshaping ,
398 is the process of re-arranging the data stored in each stripe into a
399 new layout. This might involve changing the number of devices in the
400 array (so the stripes are wider), changing the chunk size (so stripes
401 are deeper or shallower), or changing the arrangement of data and
402 parity, possibly changing the raid level (e.g. 1 to 5 or 5 to 6).
403
404 As of Linux 2.6.17, md can reshape a raid5 array to have more
405 devices. Other possibilities may follow in future kernels.
406
407 During any restripe process there is a 'critical section' during which
408 live data is being over-written on disk. For the operation of
409 increasing the number of drives in a raid5, this critical section
410 covers the first few stripes (the number being the product of the old
411 and new number of devices). After this critical section is passed,
412 data is only written to areas of the array which no longer hold live
413 data - the live data has already been relocated away.
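.PP
The size of the critical section follows directly from the rule above; a
tiny illustration:
.PP
.nf
# Sketch: stripes covered by the critical section when adding devices to a raid5.
def critical_section_stripes(old_devices, new_devices):
    return old_devices * new_devices

# Growing a raid5 from 4 to 5 devices: the first 20 stripes hold live data
# that will be over-written, so they must be backed up first.
print(critical_section_stripes(4, 5))   # -> 20
.fi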
414
415 md is not able to ensure data preservation if there is a crash
416 (e.g. power failure) during the critical section. If md is asked to
417 start an array which failed during a critical section of restriping,
418 it will fail to start the array.
419
420 To deal with this possibility, a user-space program must
421 .IP \(bu 4
422 Disable writes to that section of the array (using the
423 .B sysfs
424 interface),
425 .IP \(bu 4
426 Take a copy of the data somewhere (i.e. make a backup)
427 .IP \(bu 4
428 Allow the process to continue and invalidate the backup and restore
429 write access once the critical section is passed, and
430 .IP \(bu 4
431 Provide for restoring the critical data before restarting the array
432 after a system crash.
433 .PP
434
435 .B mdadm
436 version 2.4 and later will do this for growing a RAID5 array.
437
438 For operations that do not change the size of the array, like simply
439 increasing chunk size, or converting RAID5 to RAID6 with one extra
440 device, the entire process is the critical section. In this case the
441 restripe will need to progress in stages as a section is suspended,
442 backed up,
443 restriped, and released. This is not yet implemented.
444
445 .SS SYSFS INTERFACE
446 All block devices appear as a directory in
447 .I sysfs
448 (usually mounted at
449 .BR /sys ).
450 For MD devices, this directory will contain a subdirectory called
451 .B md
452 which contains various files for providing access to information about
453 the array.
454
455 This interface is documented more fully in the file
456 .B Documentation/md.txt
457 which is distributed with the kernel sources. That file should be
458 consulted for full documentation. The following are just a selection
459 of attribute files that are available.
460
461 .TP
462 .B md/sync_speed_min
463 This value, if set, overrides the system-wide setting in
464 .B /proc/sys/dev/raid/speed_limit_min
465 for this array only.
466 Writing the value
467 .B system
468 to this file causes the system-wide setting to take effect.
469
470 .TP
471 .B md/sync_speed_max
472 This is the partner of
473 .B md/sync_speed_min
474 and overrides
475 .B /proc/sys/dev/raid/speed_limit_max
476 described below.
477
478 .TP
479 .B md/sync_action
480 This can be used to monitor and control the resync/recovery process of
481 MD.
482 In particular, writing "check" here will cause the array to read all
483 data blocks and check that they are consistent (e.g. parity is correct,
484 or all mirror replicas are the same). Any discrepancies found are
485 .B NOT
486 corrected.
487
488 A count of problems found will be stored in
489 .BR md/mismatch_cnt .
490
491 Alternately, "repair" can be written which will cause the same check
492 to be performed, but any errors will be corrected.
493
494 Finally, "idle" can be written to stop the check/repair process.
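.IP
For example, a check could be started and monitored from user space like
this (the sketch assumes the array is md0, sysfs is mounted at /sys, and
the commands are run with sufficient privileges):
.IP
.nf
# Sketch: start a consistency check on md0 and report the mismatch count.
md = "/sys/block/md0/md"

with open(md + "/sync_action", "w") as f:
    f.write("check")             # writing "repair" would correct discrepancies

with open(md + "/sync_action") as f:
    print("current action:", f.read().strip())

with open(md + "/mismatch_cnt") as f:
    print("mismatches found so far:", f.read().strip())
.fi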
495
496 .TP
497 .B md/stripe_cache_size
498 This is only available on RAID5 and RAID6. It records the size (in
499 pages per device) of the stripe cache which is used for synchronising
500 all read and write operations to the array. The default is 128.
501 Increasing this number can increase performance in some situations, at
502 some cost in system memory.
503
504
505 .SS KERNEL PARAMETERS
506
507 The md driver recognises several different kernel parameters.
508 .TP
509 .B raid=noautodetect
510 This will disable the normal detection of md arrays that happens at
511 boot time. If a drive is partitioned with MS-DOS style partitions
512 and any of the 4 main partitions has a partition type of 0xFD,
513 then that partition will normally be inspected to see if it is part of
514 an MD array, and if any full arrays are found, they are started. This
515 kernel parameter disables this behaviour.
516
517 .TP
518 .B raid=partitionable
519 .TP
520 .B raid=part
521 These are available in 2.6 and later kernels only. They indicate that
522 autodetected MD arrays should be created as partitionable arrays, with
523 a different major device number to the original non-partitionable md
524 arrays. This major device number is listed as
525 .I mdp
526 in
527 .IR /proc/devices .
528
529 .TP
530 .B md_mod.start_ro=1
531 This tells md to start all arrays in read-only mode. This is a soft
532 read-only that will automatically switch to read-write on the first
533 write request. However until that write request, nothing is written
534 to any device by md, and in particular, no resync or recovery
535 operation is started.
536
537 .TP
538 .B md_mod.start_dirty_degraded=1
539 As mentioned above, md will not normally start a RAID4, RAID5, or
540 RAID6 that is both dirty and degraded as this situation can imply
541 hidden data loss. This can be awkward if the root filesystem is
542 affected. Using the module parameter allows such arrays to be started
543 at boot time. It should be understood that there is a real (though
544 small) risk of data corruption in this situation.
545
546 .TP
547 .BI md= n , dev , dev ,...
548 .TP
549 .BI md=d n , dev , dev ,...
550 This tells the md driver to assemble
551 .B /dev/md n
552 from the listed devices. It is only necessary to start the device
553 holding the root filesystem this way. Other arrays are best started
554 once the system is booted.
555
556 In 2.6 kernels, the
557 .B d
558 immediately after the
559 .B =
560 indicates that a partitionable device (e.g.
561 .BR /dev/md/d0 )
562 should be created rather than the original non-partitionable device.
563
564 .TP
565 .BI md= n , l , c , i , dev...
566 This tells the md driver to assemble a legacy RAID0 or LINEAR array
567 without a superblock.
568 .I n
569 gives the md device number,
570 .I l
571 gives the level, 0 for RAID0 or -1 for LINEAR,
572 .I c
573 gives the chunk size as a base-2 logarithm offset by twelve, so 0
574 means 4K, 1 means 8K.
575 .I i
576 is ignored (legacy support).
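.IP
The chunk-size encoding can be illustrated with one line of arithmetic:
.IP
.nf
# Sketch: the c field of md=n,l,c,i,dev... encodes the chunk size as 4K << c.
def chunk_size_bytes(c):
    return 4096 << c

for c in range(4):
    print(c, chunk_size_bytes(c))   # 0 -> 4096, 1 -> 8192, 2 -> 16384, 3 -> 32768
.fi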
577
578 .SH FILES
579 .TP
580 .B /proc/mdstat
581 Contains information about the status of currently running arrays.
582 .TP
583 .B /proc/sys/dev/raid/speed_limit_min
584 A readable and writable file that reflects the current goal rebuild
585 speed for times when non-rebuild activity is current on an array.
586 The speed is in Kibibytes per second, and is a per-device rate, not a
587 per-array rate (which means that an array with more discs will shuffle
588 more data for a given speed). The default is 100.
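.IP
Because the limit is a per-device rate, the aggregate rebuild traffic
grows with the number of members; a rough illustration (the numbers are
made up):
.IP
.nf
# Sketch: per-device rebuild rate versus total rebuild traffic.
def total_rebuild_rate_kib(per_device_kib, num_devices):
    return per_device_kib * num_devices

# At the default minimum of 100 KiB/s per device, a 6-device array can still
# generate roughly 600 KiB/s of rebuild I/O while other activity is present.
print(total_rebuild_rate_kib(100, 6))   # -> 600
.fi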
589
590 .TP
591 .B /proc/sys/dev/raid/speed_limit_max
592 A readable and writable file that reflects the current goal rebuild
593 speed for times when no non-rebuild activity is current on an array.
594 The default is 100,000.
595
596 .SH SEE ALSO
597 .BR mdadm (8),
598 .BR mkraid (8).