+.\" Copyright Neil Brown and others.
+.\" This program is free software; you can redistribute it and/or modify
+.\" it under the terms of the GNU General Public License as published by
+.\" the Free Software Foundation; either version 2 of the License, or
+.\" (at your option) any later version.
+.\" See file COPYING in distribution for details.
.TH MD 4
.SH NAME
-md \- Multiple Device driver aka Linux Software Raid
+md \- Multiple Device driver aka Linux Software RAID
.SH SYNOPSIS
.BI /dev/md n
.br
.BI /dev/md/ n
+.br
+.BR /dev/md/ name
.SH DESCRIPTION
The
.B md
driver provides virtual devices that are created from one or more
independent underlying devices. This array of devices often contains
-redundancy, and hence the acronym RAID which stands for a Redundant
-Array of Independent Devices.
+redundancy and the devices are often disk drives, hence the acronym RAID
+which stands for a Redundant Array of Independent Disks.
.PP
.B md
supports RAID levels
If some number of underlying devices fails while using one of these
levels, the array will continue to function; this number is one for
RAID levels 4 and 5, two for RAID level 6, and all but one (N-1) for
-RAID level 1, and dependant on configuration for level 10.
+RAID level 1, and dependent on configuration for level 10.
.PP
.B md
also supports a number of pseudo RAID (non-redundant) configurations
MULTIPATH (a set of different interfaces to the same device),
and FAULTY (a layer over a single device into which errors can be injected).
-.SS MD SUPER BLOCK
-Each device in an array may have a
-.I superblock
-which records information about the structure and state of the array.
+.SS MD METADATA
+Each device in an array may have some
+.I metadata
+stored in the device. This metadata is sometimes called a
+.BR superblock .
+The metadata records information about the structure and state of the array.
This allows the array to be reliably re-assembled after a shutdown.
From Linux kernel version 2.6.10,
.B md
-provides support for two different formats of this superblock, and
+provides support for two different formats of metadata, and
other formats can be added. Prior to this release, only one format was
supported.
-The common format - known as version 0.90 - has
+The common format \(em known as version 0.90 \(em has
a superblock that is 4K long and is written into a 64K aligned block that
starts at least 64K and less than 128K from the end of the device
(i.e. to get the address of the superblock round the size of the
The available size of each device is the amount of space before the
super block, so between 64K and 128K is lost when a device is
incorporated into an MD array.
-This superblock stores multi-byte fields in a processor-dependant
+This superblock stores multi-byte fields in a processor-dependent
manner, so arrays cannot easily be moved between computers with
different processors.
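+.PP
+As an illustration only (a C sketch with hypothetical names, not the
+kernel's exact code), the byte offset of a version 0.90 superblock can
+be derived from the device size like this:
+.PP
+.nf
+/* Round the device size down to a 64K multiple, then back off 64K. */
+unsigned long long sb_offset_0_90(unsigned long long device_size)
+{
+	return (device_size / 65536) * 65536 - 65536;
+}
+.fi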
-The new format - known as version 1 - has a superblock that is
+The new format \(em known as version 1 \(em has a superblock that is
normally 1K long, but can be longer. It is normally stored between 8K
and 12K from the end of the device, on a 4K boundary, though
variations can be stored at the start of the device (version 1.1) or 4K from
the start of the device (version 1.2).
-This superblock format stores multibyte data in a
-processor-independent format and has supports up to hundreds of
+This metadata format stores multibyte data in a
+processor-independent format and supports up to hundreds of
component devices (version 0.90 only supports 28).
-The superblock contains, among other things:
+The metadata contains, among other things:
.TP
LEVEL
The manner in which the devices are arranged into the array
.TP
UUID
a 128 bit Universally Unique Identifier that identifies the array that
-this device is part of.
+contains this device.
-.SS ARRAYS WITHOUT SUPERBLOCKS
+.PP
+When a version 0.90 array is being reshaped (e.g. adding extra devices
+to a RAID5), the version number is temporarily set to 0.91. This
+ensures that if the reshape process is stopped in the middle (e.g. by
+a system crash) and the machine boots into an older kernel that does
+not support reshaping, then the array will not be assembled (as that
+would cause data corruption) but will be left untouched until a kernel
+that can complete the reshape process is used.
+
+.SS ARRAYS WITHOUT METADATA
While it is usually best to create arrays with superblocks so that
-they can be assembled reliably, there are some circumstances where an
-array without superblocks in preferred. This include:
+they can be assembled reliably, there are some circumstances when an
+array without superblocks is preferred. These include:
.TP
LEGACY ARRAYS
Early versions of the
.TP
RAID1
In some configurations it might be desired to create a raid1
-configuration that does use a superblock, and to maintain the state of
+configuration that does not use a superblock, and to maintain the state of
the array elsewhere. While not encouraged for general use, it does
have special-purpose uses and is supported.
+.SS ARRAYS WITH EXTERNAL METADATA
+
+From release 2.6.28, the
+.I md
+driver supports arrays with externally managed metadata. That is,
+the metadata is not managed by the kernel but rather by a user-space
+program which is external to the kernel. This allows support for a
+variety of metadata formats without cluttering the kernel with lots of
+details.
+.PP
+.I md
+is able to communicate with the user-space program through various
+sysfs attributes so that it can make appropriate changes to the
+metadata \(em for example, to mark a device as faulty. When necessary,
+.I md
+will wait for the program to acknowledge the event by writing to a
+sysfs attribute.
+The manual page for
+.IR mdmon (8)
+contains more detail about this interaction.
+
+.SS CONTAINERS
+Many metadata formats use a single block of metadata to describe a
+number of different arrays which all use the same set of devices.
+In this case it is helpful for the kernel to know about the full set
+of devices as a whole. This set is known to md as a
+.IR container .
+A container is an
+.I md
+array with externally managed metadata and with device offset and size
+so that it just covers the metadata part of the devices. The
+remainder of each device is available to be incorporated into various
+arrays.
+
.SS LINEAR
A linear array simply catenates the available space on each
-drive together to form one large virtual drive.
+drive to form one large virtual drive.
One advantage of this arrangement over the more common RAID0
arrangement is that the array may be reconfigured at a later time with
-an extra drive and so the array is made bigger without disturbing the
-data that is on the array. However this cannot be done on a live
+an extra drive, so the array is made bigger without disturbing the
+data that is on the array. This can even be done on a live
array.
If a chunksize is given with a LINEAR array, the usable space on each
striped array.
A RAID0 array is configured at creation with a
.B "Chunk Size"
-which must be a power of two, and at least 4 kibibytes.
+which must be a power of two (prior to Linux 2.6.31), and at least 4
+kibibytes.
The RAID0 driver assigns the first chunk of the array to the first
device, the second chunk to the second device, and so on until all
-drives have been assigned one chunk. This collection of chunks forms
-a
+drives have been assigned one chunk. This collection of chunks forms a
.BR stripe .
-Further chunks are gathered into stripes in the same way which are
+Further chunks are gathered into stripes in the same way, and are
assigned to the remaining space in the drives.
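+.PP
+As a sketch (C-style pseudo-code with hypothetical names, assuming all
+devices are the same size so that every stripe spans every device),
+the location of a given chunk can be derived like this:
+.PP
+.nf
+/* Chunks are dealt out to the devices in a round-robin fashion. */
+device_number    = chunk_number % nr_devices;
+offset_on_device = (chunk_number / nr_devices) * chunk_size;
+.fi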
If devices in the array are not all the same size, then once the
All devices in a RAID1 array should be the same size. If they are
not, then only the amount of space available on the smallest device is
-used. Any extra space on other devices is wasted.
+used (any extra space on other devices is wasted).
+
+Note that the read balancing done by the driver does not make the RAID1
+performance profile the same as for RAID0; a single stream of
+sequential input will not be accelerated (e.g. a single dd), but
+multiple sequential streams or a random workload will use more than one
+spindle. In theory, having an N-disk RAID1 will allow N sequential
+threads to read from all disks.
+
+Individual devices in a RAID1 can be marked as "write-mostly".
+These drives are excluded from the normal read balancing and will only
+be read from when there is no other option. This can be useful for
+devices connected over a slow link.
.SS RAID4
drives, so extra space on devices that are larger than the smallest is
wasted.
-When any block in a RAID4 array is modified the parity block for that
+When any block in a RAID4 array is modified, the parity block for that
stripe (i.e. the block in the parity device at the same device offset
as the stripe) is also modified so that the parity block always
-contains the "parity" for the whole stripe. i.e. its contents is
+contains the "parity" for the whole stripe. I.e. its content is
equivalent to the result of performing an exclusive-or operation
between all the data blocks in the stripe.
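+.PP
+A minimal sketch of this calculation (C-style pseudo-code with
+hypothetical names, where
+.I data
+holds the data blocks of one stripe):
+.PP
+.nf
+/* The parity block is the byte-wise XOR of all the data blocks. */
+for (i = 0; i < block_size; i++) {
+	parity[i] = 0;
+	for (d = 0; d < nr_data_devices; d++)
+		parity[i] ^= data[d][i];
+}
+.fi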
RAID5 is very similar to RAID4. The difference is that the parity
blocks for each stripe, instead of being on a single device, are
distributed across all devices. This allows more parallelism when
-writing as two different block updates will quite possibly affect
+writing, as two different block updates will quite possibly affect
parity blocks on different devices so there is less contention.
-This also allows more parallelism when reading as read requests are
+This also allows more parallelism when reading, as read requests are
distributed over all the devices in the array instead of all but one.
.SS RAID6
.SS RAID10
-RAID10 provides a combination of RAID1 and RAID0, and sometimes known
+RAID10 provides a combination of RAID1 and RAID0, and is sometimes known
as RAID1+0. Every data block is duplicated some number of times, and
the resulting collection of data blocks is distributed over multiple
drives.
-When configuring a RAID10 array it is necessary to specify the number
+When configuring a RAID10 array, it is necessary to specify the number
of replicas of each data block that are required (this will normally
be 2) and whether the replicas should be 'near', 'offset' or 'far'.
(Note that the 'offset' layout is only available from 2.6.18).
of any given block are on different drives.
The 'far' arrangement can give sequential read performance equal to
-that of a RAID0 array, but at the cost of degraded write performance.
+that of a RAID0 array, but at the cost of reduced write performance.
When 'offset' replicas are chosen, the multiple copies of a given
chunk are laid out on consecutive drives and at consecutive offsets.
writes.
It should be noted that the number of devices in a RAID10 array need
-not be a multiple of the number of replica of each data block, those
+not be a multiple of the number of replicas of each data block; however,
there must be at least as many devices as replicas.
If, for example, an array is created with 5 devices and 2 replicas,
every block will be stored on two different devices.
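+Assuming equally sized devices, such an array provides the usable
+capacity of 5/2 = 2.5 devices.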
Finally, it is possible to have an array with both 'near' and 'far'
-copies. If and array is configured with 2 near copies and 2 far
+copies. If an array is configured with 2 near copies and 2 far
copies, then there will be a total of 4 copies of each block, each on
a different drive. This is an artifact of the implementation and is
unlikely to be of real value.
-.SS MUTIPATH
+.SS MULTIPATH
MULTIPATH is not really a RAID at all as there is only one real device
in a MULTIPATH md array. However there are multiple access points
devices, often fibre channel interfaces, that all refer to the same
real device. If one of these interfaces fails (e.g. due to cable
problems), the multipath driver will attempt to redirect requests to
-another interface.
+another interface.
+
+The MULTIPATH driver is not receiving any ongoing development and
+should be considered a legacy driver. The device-mapper based
+multipath drivers should be preferred for new installations.
.SS FAULTY
The FAULTY md module is provided for testing purposes. A faulty array
faults can be "fixable" meaning that they persist until a write
request at the same address.
-Fault types can be requested with a period. In this case the fault
+Fault types can be requested with a period. In this case, the fault
will recur repeatedly after the given number of requests of the
relevant type. For example if persistent read faults have a period of
100, then every 100th read request would generate a fault, and the
When changes are made to a RAID1, RAID4, RAID5, RAID6, or RAID10 array
there is a possibility of inconsistency for short periods of time as
-each update requires are least two block to be written to different
-devices, and these writes probably wont happen at exactly the same
+each update requires at least two blocks to be written to different
+devices, and these writes probably won't happen at exactly the same
time. Thus if a system with one of these arrays is shutdown in the
middle of a write operation (e.g. due to power failure), the array may
not be consistent.
The array can still be used, though possibly with reduced performance.
If a RAID4, RAID5 or RAID6 array is degraded (missing at least one
-drive) when it is restarted after an unclean shutdown, it cannot
+drive, two for RAID6) when it is restarted after an unclean shutdown, it cannot
recalculate parity, and so it is possible that data might be
undetectably corrupted. The 2.4 md driver
.B does not
alert the operator to this condition. The 2.6 md driver will fail to
start an array in this condition without manual intervention, though
-this behaviour can be over-ridden by a kernel parameter.
+this behaviour can be overridden by a kernel parameter.
.SS RECOVERY
If the md driver detects a write error on a device in a RAID1, RAID4,
RAID5, RAID6, or RAID10 array, it immediately disables that device
(marking it as faulty) and continues operation on the remaining
-devices. If there is a spare drive, the driver will start recreating
-on one of the spare drives the data what was on that failed drive,
+devices. If there are spare drives, the driver will start recreating
+on one of the spare drives the data that was on the failed drive,
either by copying a working drive in a RAID1 configuration, or by
doing calculations with the parity block on RAID4, RAID5 or RAID6, or
-by finding a copying originals for RAID10.
+by finding and copying originals for RAID10.
In kernels prior to about 2.6.15, a read error would cause the same
effect as a write error. In later kernels, a read-error will instead
will find the correct data from elsewhere, write it over the block
that failed, and then try to read it back again. If either the write
or the re-read fail, md will treat the error the same way that a write
-error is treated and will fail the whole device.
+error is treated, and will fail the whole device.
While this recovery process is happening, the md driver will monitor
accesses to the array and will slow down the rate of recovery if other
This bitmap is used for two optimisations.
-Firstly, after an unclear shutdown, the resync process will consult
+Firstly, after an unclean shutdown, the resync process will consult
the bitmap and only resync those blocks that correspond to bits in the
-bitmap that are set. This can dramatically increase resync time.
+bitmap that are set. This can dramatically reduce resync time.
Secondly, when a drive fails and is removed from the array, md stops
clearing bits in the intent log. If that same drive is re-added to
The intent log can be stored in a file on a separate device, or it can
be stored near the superblocks of an array which has superblocks.
-It is possible to add an intent log or an active array, or remove an
+It is possible to add an intent log to an active array, or remove an
intent log if one is present.
In 2.6.13, intent bitmaps are only supported with RAID1. Other levels
reporting the write as complete to the filesystem.
This allows for a RAID1 with WRITE-BEHIND to be used to mirror data
-over a slow link to a remove computer (providing the link isn't too
+over a slow link to a remote computer (providing the link isn't too
slow). The extra latency of the remote link will not slow down normal
operations, but the remote system will still have a reasonably
up-to-date copy of all data.
.IR Reshaping ,
is the process of re-arranging the data stored in each stripe into a
new layout. This might involve changing the number of devices in the
-array (so the stripes are wider) changing the chunk size (so stripes
+array (so the stripes are wider), changing the chunk size (so stripes
are deeper or shallower), or changing the arrangement of data and
-parity, possibly changing the raid level (e.g. 1 to 5 or 5 to 6).
+parity (possibly changing the raid level, e.g. 1 to 5 or 5 to 6).
As of Linux 2.6.17, md can reshape a raid5 array to have more
devices. Other possibilities may follow in future kernels.
During any stripe process there is a 'critical section' during which
-live data is being over-written on disk. For the operation of
+live data is being overwritten on disk. For the operation of
increasing the number of drives in a raid5, this critical section
covers the first few stripes (the number being the product of the old
and new number of devices). After this critical section is passed,
data is only written to areas of the array which no longer hold live
-data - the live data has already been located away.
+data \(em the live data has already been relocated.
md is not able to ensure data preservation if there is a crash
(e.g. power failure) during the critical section. If md is asked to
.B sysfs
interface),
.IP \(bu 4
-Take a copy of the data somewhere (i.e. make a backup)
+take a copy of the data somewhere (i.e. make a backup),
.IP \(bu 4
-Allow the process to continue and invalidate the backup and restore
+allow the process to continue and invalidate the backup and restore
write access once the critical section is passed, and
.IP \(bu 4
-Provide for restoring the critical data before restarting the array
+provide for restoring the critical data before restarting the array
after a system crash.
.PP
.B mdadm
-version 2.4 and later will do this for growing a RAID5 array.
+versions from 2.4 do this for growing a RAID5 array.
For operations that do not change the size of the array, like simply
increasing chunk size, or converting RAID5 to RAID6 with one extra
-device, the entire process is the critical section. In this case the
-restripe will need to progress in stages as a section is suspended,
+device, the entire process is the critical section. In this case, the
+restripe will need to progress in stages, as a section is suspended,
backed up,
-restriped, and released. This is not yet implemented.
+restriped, and released; this is not yet implemented.
.SS SYSFS INTERFACE
-All block devices appear as a directory in
+Each block device appears as a directory in
.I sysfs
-(usually mounted at
+(which is usually mounted at
.BR /sys ).
For MD devices, this directory will contain a subdirectory called
.B md
.B /proc/sys/dev/raid/speed_limit_min
for this array only.
Writing the value
-.B system
-to this file cause the system-wide setting to have effect.
+.B "system"
+to this file will cause the system-wide setting to have effect.
.TP
.B md/sync_speed_max
.B md/stripe_cache_size
This is only available on RAID5 and RAID6. It records the size (in
pages per device) of the stripe cache which is used for synchronising
-all read and write operations to the array. The default is 128.
+all write operations to the array and all read operations if the array
+is degraded. The default is 256. Valid values are 17 to 32768.
Increasing this number can increase performance in some situations, at
-some cost in system memory.
+some cost in system memory. Note, setting this value too high can
+result in an "out of memory" condition for the system.
+memory_consumed = system_page_size * nr_disks * stripe_cache_size
+
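+For example, with a 4 KiB page size, 4 devices, and the default
+stripe_cache_size of 256, this comes to about 4 MiB.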
+.TP
+.B md/preread_bypass_threshold
+This is only available on RAID5 and RAID6. This variable sets the
+number of times MD will service a full-stripe-write before servicing a
+stripe that requires some "prereading". For fairness this defaults to
+1. Valid values are 0 to stripe_cache_size. Setting this to 0
+maximizes sequential-write throughput at the cost of fairness to threads
+doing small or random writes.
.SS KERNEL PARAMETERS
.TP
.B md_mod.start_ro=1
+.TP
+.B /sys/module/md_mod/parameters/start_ro
This tells md to start all arrays in read-only mode. This is a soft
read-only that will automatically switch to read-write on the first
write request. However until that write request, nothing is written
.TP
.B md_mod.start_dirty_degraded=1
+.TP
+.B /sys/module/md_mod/parameters/start_dirty_degraded
As mentioned above, md will not normally start a RAID4, RAID5, or
RAID6 that is both dirty and degraded as this situation can imply
hidden data loss. This can be awkward if the root filesystem is
-affected. Using the module parameter allows such arrays to be started
+affected. Using this module parameter allows such arrays to be started
at boot time. It should be understood that there is a real (though
small) risk of data corruption in this situation.
Contains information about the status of currently running arrays.
.TP
.B /proc/sys/dev/raid/speed_limit_min
-A readable and writable file that reflects the current goal rebuild
+A readable and writable file that reflects the current "goal" rebuild
speed for times when non-rebuild activity is current on an array.
The speed is in Kibibytes per second, and is a per-device rate, not a
-per-array rate (which means that an array with more disc will shuffle
-more data for a given speed). The default is 100.
+per-array rate (which means that an array with more disks will shuffle
+more data for a given speed). The default is 1000.
.TP
.B /proc/sys/dev/raid/speed_limit_max
-A readable and writable file that reflects the current goal rebuild
+A readable and writable file that reflects the current "goal" rebuild
speed for times when no non-rebuild activity is current on an array.
-The default is 100,000.
+The default is 200,000.
.SH SEE ALSO
.BR mdadm (8),