+.B TestMessage
+An array was found at startup, and the
+.B --test
+flag was given.
+.RE
+
+Only
+.BR Fail ,
+.BR FailSpare ,
+.BR DegradedArray ,
+and
+.B TestMessage
+cause email to be sent. All events cause the program to be run.
+The program is run with two or three arguments: the event
+name, the array device, and possibly a second device.
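+
+For example, a minimal handler script (the use of
+.B logger
+and the argument handling shown here are illustrative only, not a
+required interface) might look like:
+.nf
+  #!/bin/sh
+  # $1 = event name, $2 = array device, $3 = second device (optional)
+  logger -t mdadm-event "$1 on $2 ${3:+(device $3)}"
+.fi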
+
+Each event has an associated array device (e.g.
+.BR /dev/md1 )
+and possibly a second device. For
+.BR Fail ,
+.BR FailSpare ,
+and
+.B SpareActive
+the second device is the relevant component device.
+For
+.B MoveSpare
+the second device is the array that the spare was moved from.
+
+For
+.B mdadm
+to move spares from one array to another, the different arrays need to
+be labelled with the same
+.B spare-group
+in the configuration file. The
+.B spare-group
+name can be any string; it is only necessary that different spare
+groups use different names.
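+
+For example, a config file fragment (the array and device names here
+are illustrative) placing two arrays in the same spare group might be:
+.nf
+  ARRAY /dev/md0 devices=/dev/hda1,/dev/hdc1 spare-group=group1
+  ARRAY /dev/md1 devices=/dev/hdb1,/dev/hdd1 spare-group=group1
+.fi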
+
+When
+.B mdadm
+detects that an array which is in a spare group has fewer active
+devices than necessary for the complete array, and has no spare
+devices, it will look for another array in the same spare group that
+has a full complement of working drives and a spare. It will then
+attempt to remove the spare from the second array and add it to the
+first.
+If the removal succeeds but the adding fails, then the spare is added
+back to the original array.
+
+.SH EXAMPLES
+
+.B " mdadm --query /dev/name-of-device"
+.br
+This will find out whether a given device is a RAID array, or is part of
+one, and will provide brief information about the device.
+
+.B " mdadm --assemble --scan"
+.br
+This will assemble and start all arrays listed in the standard config
+file. This command will typically go in a system startup file.
+
+.B " mdadm --stop --scan"
+.br
+This will shut down all arrays that can be shut down (i.e. are not
+currently in use). This command will typically go in a system shutdown script.
+
+.B " mdadm --follow --scan --delay=120"
+.br
+If (and only if) there is an Email address or program given in the
+standard config file, then
+monitor the status of all arrays listed in that file by
+polling them every 2 minutes.
+
+.B " mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1"
+.br
+Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
+
+.br
+.B " echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf"
+.br
+.B " mdadm --detail --scan >> mdadm.conf"
+.br
+This will create a prototype config file that describes currently
+active arrays that are known to be made from partitions of IDE or SCSI drives.
+This file should be reviewed before being used as it may
+contain unwanted detail.
+
+.B " echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf"
+.br
+.B " mdadm --examine --scan --config=mdadm.conf >> mdadm.conf"
+.br
+This will find which arrays could be assembled from existing IDE and
+SCSI whole drives (not partitions) and store the information in the
+format of a config file.
+This file is very likely to contain unwanted detail, particularly
+the
+.B devices=
+entries. It should be reviewed and edited before being used as an
+actual config file.
+
+.B " mdadm --examine --brief --scan --config=partitions"
+.br
+.B " mdadm -Ebsc partitions"
+.br
+Create a list of devices by reading
+.BR /proc/partitions ,
+scan these for RAID superblocks, and print out a brief listing of all
+that was found.
+
+.B " mdadm -Ac partitions -m 0 /dev/md0"
+.br
+Scan all partitions and devices listed in
+.BR /proc/partitions
+and assemble
+.B /dev/md0
+out of all such devices with a RAID superblock with a minor number of 0.