When assembling an array undergoing rebuild, the kernel will switch to
resync if there are no PPL entries to recover. Prevent that by adding an
empty entry when validating the PPL header.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:11:03 +0000 (14:11 +0800)]
clustermd_tests: add test case to test switch-recovery against cluster-raid10
03r10_switch-recovery:
Create a new array with 2 active disks and 1 spare, then set 1 active disk
as 'fail'. This triggers recovery, and the spare disk replaces the failed
disk. Stop the array on the node that is doing the recovery; the other node
takes it over and completes the recovery.
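A minimal sketch of the commands such a test might run (device names and the /dev/md0 name are illustrative, not taken from the test script):
# node1
mdadm -C /dev/md0 --bitmap=clustered --level=10 --raid-devices=2 \
      --spare-devices=1 /dev/sda /dev/sdb /dev/sdc
mdadm /dev/md0 --fail /dev/sda    # recovery to the spare starts
mdadm -S /dev/md0                 # stop while the recovery is still running
# node2 takes the array over and finishes the recovery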
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:11:02 +0000 (14:11 +0800)]
clustermd_tests: add test case to test switch-recovery against cluster-raid1
03r1_switch-recovery:
Create a new array with 2 active disks and 1 spare, then set 1 active disk
as 'fail'. This triggers recovery, and the spare disk replaces the failed
disk. Stop the array on the node that is doing the recovery; the other node
takes it over and completes the recovery.
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:11:01 +0000 (14:11 +0800)]
clustermd_tests: add test case to test switch-resync against cluster-raid10
03r10_switch-resync:
Create a new array; 1 node performs the resync while the other node stays
PENDING. Stop the array on the resyncing node; the other node takes it over
and completes the resync.
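A minimal sketch of the flow (device names illustrative):
# node1: creation starts the resync here; node2 shows the array as PENDING
mdadm -C /dev/md0 --bitmap=clustered --level=10 --raid-devices=2 /dev/sda /dev/sdb
mdadm -S /dev/md0                 # stop on node1 while the resync is running
# node2 takes the array over and completes the resync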
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:11:00 +0000 (14:11 +0800)]
clustermd_tests: add test case to test switch-resync against cluster-raid1
03r1_switch-resync:
Create a new array; 1 node performs the resync while the other node stays
PENDING. Stop the array on the resyncing node; the other node takes it over
and completes the resync.
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:57 +0000 (14:10 +0800)]
clustermd_tests: add test case to test manage_add-spare against cluster-raid10
02r10_Manage_add-spare: covers 2 scenarios for manage_add-spare.
1. With 2 active disks in the md array, use add-spare to add a spare disk.
2. With 2 active disks and 1 spare in the array, add-spare 1 new disk into
the array, then check the spares.
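A hedged sketch of the two scenarios (device names illustrative):
mdadm /dev/md0 --add-spare /dev/sdc   # scenario 1: add a spare to the 2-disk array
mdadm /dev/md0 --add-spare /dev/sdd   # scenario 2: add a new disk next to the existing spare
mdadm -D /dev/md0                     # verify the spare count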
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:56 +0000 (14:10 +0800)]
clustermd_tests: add test case to test manage_add-spare against cluster-raid1
02r1_Manage_add-spare: covers 2 scenarios for manage_add-spare.
1. With 2 active disks in the md array, use add-spare to add a spare disk.
2. With 2 active disks and 1 spare in the array, add-spare 1 new disk into
the array, then check the spares.
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:55 +0000 (14:10 +0800)]
clustermd_tests: add test case to test manage_add against cluster-raid10
02r10_Manage_add: covers 2 scenarios for manage_add.
1. With 2 active disks in the md array, set 1 disk to 'fail' and 'remove' it
from the array, then add 1 clean disk into the array.
2. With 2 active disks in the array, add 1 new disk into the array directly;
here 'add' is equivalent to 'add-spare'.
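A hedged sketch of the two scenarios (device names illustrative):
# scenario 1: fail and remove a member, then add a clean disk
mdadm /dev/md0 --fail /dev/sda
mdadm /dev/md0 --remove /dev/sda
mdadm /dev/md0 --add /dev/sdc
# scenario 2: add a disk to a healthy array; here 'add' behaves like 'add-spare'
mdadm /dev/md0 --add /dev/sdd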
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:54 +0000 (14:10 +0800)]
clustermd_tests: add test case to test manage_add against cluster-raid1
02r1_Manage_add: covers 2 scenarios for manage_add.
1. With 2 active disks in the md array, set 1 disk to 'fail' and 'remove' it
from the array, then add 1 clean disk into the array.
2. With 2 active disks in the array, add 1 new disk into the array directly;
here 'add' is equivalent to 'add-spare'.
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:53 +0000 (14:10 +0800)]
clustermd_tests: add test case to test grow_add against cluster-raid1
01r1_Grow_add: covers 3 ways of growing the array.
1. With 2 active disks in the md array, grow and add a new disk into the
array.
2. With 2 active disks and 1 spare disk in the md array, grow and add a new
disk into the array.
3. With 2 active disks and 1 spare disk in the md array, grow the device
number so that the spare disk becomes an active disk in the array.
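A hedged sketch of the grow steps for these scenarios (device names illustrative):
mdadm /dev/md0 --add /dev/sdc              # scenarios 1 and 2: add the new disk first
mdadm --grow /dev/md0 --raid-devices=3     # scenario 3: raise the device number so the spare becomes active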
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:52 +0000 (14:10 +0800)]
clustermd_tests: add test case to test switching bitmap against cluster-raid10
01r10_Grow_bitmap-switch:
It tests switching the bitmap between three modes: clustered, none and
internal. This case tests clustered raid10.
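One possible switching sequence, as a hedged sketch (array name illustrative; the bitmap is dropped to 'none' between modes):
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=internal
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=clustered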
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:51 +0000 (14:10 +0800)]
clustermd_tests: add test case to test switching bitmap against cluster-raid1
01r1_Grow_bitmap-switch:
It tests switching the bitmap between three modes: clustered, none and
internal. This case tests clustered raid1.
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:50 +0000 (14:10 +0800)]
manpage: add prompt in --zero-superblock against clustered raid
A clustered raid would be damaged if --zero-superblock is called
incorrectly, so add a warning to the --zero-superblock section of the
manpage. For example: cluster node1 has assembled the cluster-md array,
but --zero-superblock is called on another cluster node.
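A hedged illustration of the dangerous sequence the warning is about (device names illustrative):
# node1: the clustered array is assembled and in use
mdadm -A /dev/md0 /dev/sda /dev/sdb
# node2: this destroys the superblock the running array still depends on
mdadm --zero-superblock /dev/sda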
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Guoqing Jiang [Mon, 22 Jan 2018 09:12:10 +0000 (17:12 +0800)]
Assemble: cleanup the failure path
Several failure paths share common code before returning, so simplify
them by moving the common code to the end of the function and jumping
to it with goto when a failure happens.
Guoqing Jiang [Mon, 22 Jan 2018 09:12:09 +0000 (17:12 +0800)]
Assemble: provide protection when clustered raid do assemble
The previous patch provides protection for other modes such as CREATE,
MANAGE, GROW and INCREMENTAL. For ASSEMBLE mode, we also need protection
while a clustered raid is being assembled.
However, we can only tell whether the array is clustered once the metadata
is available, so lock_cluster is called after select_devices(). And since
the metadata may be re-read during auto-assembly, the locking is refreshed
there as well.
Guoqing Jiang [Mon, 22 Jan 2018 09:12:08 +0000 (17:12 +0800)]
mdadm: improve the dlm locking mechanism for clustered raid
Previously, the dlm locking only protected a few functions that write to
the superblock (update_super, add_to_super and store_super), and other
functions such as add_internal_bitmap were missed. Functions that read
the superblock also need to run under the locking protection to avoid
consistency issues.
So remove the dlm code from super1.c and provide the locking mechanism in
main() for every mode except assemble, which will be handled in the next
commit. Since we can identify whether a raid is clustered by checking the
conditions of each mode, the change should have no effect on native
arrays.
The existing locking code is also improved as follows:
1. Replace ls_unlock with ls_unlock_wait, since we should only return
once the unlock operation is complete.
2. Inspired by lvm, try to reuse an existing lockspace first, instead of
blindly creating one, in case the lockspace was not released for some
reason.
3. Retry more times before giving up when locking returns EAGAIN.
Note: for MANAGE mode we do not need to take the lock if a node only
wants to confirm a device change; otherwise we could not add a disk to
the cluster, since all nodes would compete for the lock.
BingJing Chang [Thu, 22 Feb 2018 07:00:28 +0000 (15:00 +0800)]
mdadm: prevent out-of-date reshaping devices from force assemble
With "--force", we can assemble the array even if some superblocks
appear out-of-date. But their data layout is regarded to make sense.
In reshape cases, if two devices claims different reshape progresses,
we cannot forcely assemble them back to array. Kernel will treat only
one of them as reshape progress. However, their data is still laid on
different layouts. It may lead to disaster if reshape goes on.
Reproducible Steps:
mdadm -C /dev/md0 --assume-clean -l5 -n3 /dev/loop[012]
mdadm -a /dev/md0 /dev/loop3
mdadm -G /dev/md0 -n4
mdadm -f /dev/md0 /dev/loop0 # after a period
mdadm -S /dev/md0 # after another period
mdadm -E /dev/loop[01] # make sure that they claim different positions
mdadm -Af -R /dev/md0 /dev/loop[023] # give too few devices, forcing
force_array() to pick non-fresh devices
cat /sys/block/md0/md/reshape_position # the kernel may resume the reshape
from either of the recorded positions
Note: the unit of mdadm -E is KB, but reshape_position's is sectors.
To prevent disaster, we add logic to prevent devices with different
reshape progress from being added into the array.
Reported-by: Allen Peng <allenpeng@synology.com>
Reviewed-by: Alex Wu <alexwu@synology.com>
Signed-off-by: BingJing Chang <bingjingc@synology.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
These udev rules attempt to set a safe kernel controller timeout for
disks containing RAID level 1 or higher partitions, targeting commodity
disks which either lack SCTERC capability or have it disabled.
No attempt is made to change the SCTERC settings on devices
which support it.
This attempts to mitigate the problem described here,
where the kernel controller may time out on a read from a
disk after the default timeout of 30 seconds and consequently
cause mdraid to regard the disk as dead and eject it from the
RAID array.
The mitigation is to set the timeout to 180 seconds for disks
which contain a RAID level 1 or higher partition.
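A hedged illustration of the effect of such a rule on a disk that holds a raid member (not the actual rule text; device name illustrative):
cat /sys/block/sda/device/timeout        # typically 30 by default
echo 180 > /sys/block/sda/device/timeout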
Signed-off-by: Jonathan G. Underwood <jonathan.underwood@gmail.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mariusz Tkaczyk [Thu, 25 Jan 2018 14:12:50 +0000 (15:12 +0100)]
Grow.c: Block any level migration with chunk size change
Mixing level and chunk changes in one grow operation is not supported.
Mdadm performs the level migration correctly and ignores the new chunk
size, but after the migration it tries to write this chunk size to the
sysfs properties. This is dangerous and can cause unexpected behaviour.
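A hedged example of the kind of combined request that is now rejected (device name illustrative):
mdadm --grow /dev/md0 --level=5 --chunk=128   # level migration and chunk change in one operation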
Assemble: prevent segfault with faulty "best" devices
In Assemble(), after context reload, best[i] can be -1 in some cases,
and before checking if this value is negative we use it to access
devices[j].i.disk.raid_disk, potentially causing a segfault.
Check if best[i] is negative before using it to prevent this potential
segfault.
Signed-off-by: Andrea Righi <andrea@betterlinux.com>
Fixes: 69a481166be6 ("Assemble array with write journal")
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Robert LeBlanc <robert@leblancnet.us>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:09 +0000 (17:45 +0800)]
mdadm/clustermd_tests: add test case to test grow_resize cluster-raid10
01r10_Grow_resize:
1. Create a clustered raid10 with a smaller size, then resize the
mddev to the max size, and finally change back to the smaller size.
2. Create a clustered raid10 with a smaller chunk-size, then resize
it to a larger one and trigger a reshape.
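A hedged sketch of the resize and reshape steps (device names and sizes illustrative):
mdadm --grow /dev/md0 --size=max     # scenario 1: grow the component size to the maximum
mdadm --grow /dev/md0 --size=20M     # then change back to a smaller size
mdadm --grow /dev/md0 --chunk=128    # scenario 2: enlarge the chunk size, triggering a reshape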
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:08 +0000 (17:45 +0800)]
mdadm/clustermd_tests: add test case to test creating cluster-raid10
00r10_Create: It contains 4 scenarios of creating clustered raid10.
1. General creation: the master node does the resync and the slave node
stays Pending.
2. Creating clustered raid10 with --assume-clean.
3. Creating clustered raid10 with spare disk.
4. Creating clustered raid10 with --name.
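A hedged sketch of the four creation variants (device names illustrative):
mdadm -C /dev/md0 --bitmap=clustered --level=10 --raid-devices=2 /dev/sda /dev/sdb
mdadm -C /dev/md0 --bitmap=clustered --level=10 --raid-devices=2 --assume-clean /dev/sda /dev/sdb
mdadm -C /dev/md0 --bitmap=clustered --level=10 --raid-devices=2 --spare-devices=1 /dev/sda /dev/sdb /dev/sdc
mdadm -C /dev/md0 --bitmap=clustered --level=10 --raid-devices=2 --name=cl10 /dev/sda /dev/sdb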
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:06 +0000 (17:45 +0800)]
mdadm/clustermd_tests: add test case to test creating cluster-raid1
00r1_Create: It contains 4 scenarios of creating clustered raid1.
1. General creation: the master node does the resync and the slave node
stays Pending.
2. Creating clustered raid1 with --assume-clean parameter.
3. Creating clustered raid1 with spare disk.
4. Creating clustered raid1 with --name.
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:05 +0000 (17:45 +0800)]
mdadm/test: add '--testdir=' to switch choosing test suite
mdadm now has two test suites, covering traditional soft-raid testing
and clustermd testing; the '--testdir=' option selects which suite to
run, tests/ or clustermd_tests/.
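For example (assuming the test driver script in the mdadm source tree):
./test --testdir=clustermd_tests
./test --testdir=tests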
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:04 +0000 (17:45 +0800)]
mdadm/test: enable clustermd testing under clustermd_tests/
Clustermd testing requires the user to deploy the basic cluster
manually; the test scripts don't cover auto-deploying a cluster because
Linux distributions differ too much in this area.
Then complete the configuration in cluster_conf; please refer to the
detailed comments in 'cluster_conf'.
1. 'func.sh': a sourced file that implements the helper functions for
clustermd testing.
2. 'cluster_conf': a configuration file that contains the two parts
used as input for the testing.
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:02 +0000 (17:45 +0800)]
mdadm/test: correct the logic operation in save_log
1. Delete the 'mdadm -As' call to keep the original testing scene intact.
2. Move some actions inside the 'array' test; 'mdadm -D $array' would
complain with errors if $array is empty.
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mariusz Tkaczyk [Thu, 11 Jan 2018 11:39:49 +0000 (12:39 +0100)]
policy.c: Avoid to take spare without defined domain by imsm
Only the imsm get_disk_controller_domain returns a disk controller domain
for each disk. As a result, mdadm automatically creates a disk controller
domain policy for imsm metadata, and imsm containers in the same disk
controller domain can take a spare for recovery.
managemon: Don't add disk to the array after it has started
If a disk has disappeared from the system and appears again, it is added to
the corresponding container as long as the metadata matches and the disk
number is set. This code had no effect on imsm until commit 20dc76d15b40
("imsm: Set disk slot number"). Now the disk is added to the container but
not to the array - which is correct, as the disk is out-of-sync. A rebuild
should start for the disk but it doesn't. The behaviour is the same for
both imsm and ddf metadata.
There is no point in handling an out-of-sync disk as a "good member of the
array", so remove that part of the code. There are no scenarios in which
the monitor is already running and the disk can be safely added to the
array. Just write the initial metadata to the disk so it is taken for
rebuild.
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Thu, 23 Nov 2017 03:10:44 +0000 (11:10 +0800)]
mdadm/grow: correct the s->size > 1 to make 'max' work
s->size > 1 : s->size is '1' when '--grow --size max'
parameter is specified, so correct this test here.
Fixes: 1b21c449e6f2 ("mdadm/grow: adding a test to ensure resize was required") Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Maksymilian Kunt [Mon, 13 Nov 2017 11:30:49 +0000 (12:30 +0100)]
imsm: continue resync on 3-disk RAID10
If a RAID10 gets degraded during resync and is stopped, it doesn't continue
the resync after automatic assembly and is reported to be in sync. The
resync is blocked because a disk is missing. This should not happen for
RAID10, as it can still continue with 3 disks.
Count the missing disks. Block the resync only if the number of missing
disks exceeds the limit for the given RAID level (only different for
RAID10). Check whether the disk under recovery is present; if not, the
resync should be allowed to run.
Mariusz Tkaczyk [Tue, 21 Nov 2017 10:30:20 +0000 (11:30 +0100)]
Monitor/msg: Don't print error message if mdmon doesn't run
Commit 4515fb28a53a ("Add detail information when can not connect
monitor") was added to warn about a failed connection to the monitor in
the WaitClean function (see link below).
Mdmon runs for IMSM containers when they contain an array with redundancy,
so if mdmon isn't running, mdadm prints this error. This is misleading and
unnecessary. Only print it in the WaitClean function.
The sock in WaitClean is deprecated, so it is removed.
Mariusz Tkaczyk [Tue, 7 Nov 2017 15:49:56 +0000 (16:49 +0100)]
sysfs: include faulty drive in disk count
When a disk fails, it goes into the faulty state first and is removed from
the array a while later. This gives the mdadm monitor a chance to see that
the disk has failed and to notify with an event (e.g. FailSpare). This
doesn't work when sysfs is used to get the number of disks in the array,
as it skips faulty disks. The ioctl implementation doesn't differentiate
between active and faulty disks, so do the same for sysfs. It should not
matter that the number of disks reported is greater than the list of disk
structures returned by the call, because the same approach is already
taken for offline disks.
Michal Zylowski [Wed, 8 Nov 2017 14:43:41 +0000 (15:43 +0100)]
imsm: More precise message when spanned raid is created
When a RAID is created between VMD and SATA disks, the printed message is
"Mixing devices attached to different VMD domains is not allowed". This
message is unclear and misleading, because creating spanned containers
between different VMD domains is allowed. Set the error message to more
precise text.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Guoqing Jiang [Mon, 30 Oct 2017 09:09:51 +0000 (17:09 +0800)]
To support clustered raid10
We are now extending clustered raid to support raid10. Only the near
layout is supported, so check for it when creating the array or when
switching the bitmap from internal to clustered.
Improve error detection after SG_IO ioctl. Checking only the return
value and response length is insufficient and leads to anomalies if a
drive does not have a serial number.
NeilBrown [Mon, 30 Oct 2017 04:43:41 +0000 (15:43 +1100)]
Incremental: Use ->validate_geometry instead of ->avail_size
Since mdadm 3.3 it has not been correct to call ->avail_size if
metadata hasn't been read from the device. ->validate_geometry
should be used instead.
Unfortunately array_try_spare() didn't get the memo, and it can crash
when adding a spare with no metadata.
So change it to use ->validate_geometry().
Only one place remains that uses ->avail_size(), and that is safe.
Also fix a comment with a typo.
Reported-and-tested-by: Bjørnar Ness <bjornar.ness@gmail.com>
Fixes: 641da7459192 ("super1: separate to version of _avail_space1().")
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Mon, 16 Oct 2017 07:54:18 +0000 (15:54 +0800)]
mdadm/mdopen: create new function create_named_array for writing to new_array
Split 'write to new_array' out into a function named create_named_array.
Also fix a trivial 'warn_unused_result' compile warning introduced by
commit fdbf7aaa1956 ("mdopen: call "modprobe md_mod" if it might be needed.")
Zhilong Liu [Wed, 11 Oct 2017 08:53:12 +0000 (16:53 +0800)]
mdadm/grow: adding a test to ensure resize was required
To fix commit 4b74a905a67e
("mdadm/grow: Component size must be larger than chunk size"):
array.level > 1 : only applies to raid levels for which chunk_size is
meaningful.
s->size > 0 : ensure that a component size change was actually requested.
array.chunk_size / 1024 > s->size : ensure the component size is always
>= the current chunk_size when a resize is requested; otherwise
mddev->pers->resize would set mddev->dev_sectors to '0'.
Reported-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Suggested-by: NeilBrown <neilb@suse.com>
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
NeilBrown [Thu, 5 Oct 2017 06:13:17 +0000 (17:13 +1100)]
Move mdadm_env.sh out of /usr/lib/systemd
The systemd developers like to keep control of the
lib/systemd namespace, and haven't approved of the use
of lib/systemd/scripts. So we should stop using it.
Move the mdadm_env.sh script, optionally sourced by
mdmonitor.service, to a new directory /usr/lib/mdadm.
After switch-root a new mdmon is started. It sends the initrd mdmon a
signal to terminate. The initrd mdmon receives it and switches the safe
mode delay to 1 ms in order to get the array to a clean state and flush
the last version of the metadata. The problem is that the sysfs filesystem
is not available to the initrd mdmon after switch-root, so the original
safe mode delay is unchanged. That delay is a few seconds, so if there is
a lot of traffic on the filesystem the initrd mdmon doesn't terminate for
a long time (no clean state). There are then 2 instances of mdmon. The
initrd mdmon flushes metadata when the array goes to a clean state, but
this metadata might already be outdated.
Use the file descriptor obtained at mdmon start to change the safe mode
delay.
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mariusz Tkaczyk [Tue, 3 Oct 2017 12:49:49 +0000 (14:49 +0200)]
imsm: Set disk slot number
If the first disk of an IMSM RAID1 has failed but is still present in the
system, the array is not auto-assembled. Auto-assembly uses the raid disk
slot from the metadata to index disks. As it is not set, the valid disk is
seen as a replacement disk and its metadata is ignored. The problem is not
observed for other RAID levels as they have more than 2 disks -
replacement disks are only stored under odd indexes, so the third disk's
metadata is used in such a scenario.
imsm: write initial ppl on a disk added for rebuild
When rebuild is initiated by the UEFI driver it is possible that the new
disk will not contain a valid ppl header. Just write the initial ppl
and don't abort assembly.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
imsm: Write empty PPL header if assembling regular clean array.
If the array was initially assembled with a kernel without PPL support,
the initial header was never written to the drive.
If the initial resync completed and the system is rebooted into a kernel
with PPL support, mdadm refuses to assemble the normal clean array due to
the lack of a valid PPL.
Write an empty header when assembling a normal clean array, so its
assembly is no longer blocked.
imsm: don't skip resync when an invalid ppl header is found
If validate_ppl_imsm() detects an invalid ppl header it will be
overwritten with a valid, empty ppl header. But if we are assembling an
array after unclean shutdown this will cause the kernel to skip resync
after ppl recovery. We don't want that because if there was an invalid
ppl it's best to assume that the ppl recovery is not enough to make the
array consistent and a full resync should be performed. So when
overwriting the invalid ppl add one ppl_header_entry with a wrong
checksum. This will prevent the kernel from skipping resync after ppl
recovery.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If a raid member is not in sync, it is skipped during enablement of PPL.
This is not correct, since the drive that we are currently recovering to
then does not have ppl_size and ppl_sector properly set in sysfs.
Remove this skipping, so all drives are updated when turning on the PPL.
Zeroout whole ppl space during creation/force assemble
The PPL area should be cleared before creation/force assemble.
If the drive was used in another RAID array, it might contain a PPL from
it. There is a risk that mdadm recognizes those PPLs and refuses to
assemble the RAID due to a PPL conflict with the created array.
Change the validation algorithm to check the validity of the multiple PPLs
stored in the PPL area.
If a read error occurs, treat all the PPLs as invalid - there is no
guarantee that the unreadable one was not the latest. If the header CRC is
incorrect, assume that there are no further PPLs in the PPL area.
If the whole PPL area was written at least once, an old PPL (with a lower
generation number) may follow the most recent one (with the highest
generation number). Compare the generation numbers to check which PPL is
the latest.
Add support for super1 with multiple PPLs. Extend the PPL area size to 1MB
and use 1MB as the default during creation. Always start the array with a
single PPL; if the kernel is capable of multiple PPLs and there is enough
space reserved, it will switch the policy during the first metadata update.
Don't abort starting the array if kernel does not support ppl
Change the behavior of assemble and create for consistency-policy=ppl
for external metadata arrays. If the kernel does not support ppl, don't
abort but print a warning and start the array without ppl
(consistency-policy=resync). No change for native md arrays because the
kernel will not allow starting the array if it finds an unsupported
feature bit in the superblock.
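A hedged example of creating an IMSM volume with PPL; on a kernel without PPL support this now starts with a warning and consistency-policy=resync instead of aborting (device and array names illustrative):
mdadm -C /dev/md/imsm0 -e imsm -n3 /dev/sda /dev/sdb /dev/sdc
mdadm -C /dev/md/vol0 -l5 -n3 --consistency-policy=ppl /dev/md/imsm0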
In sysfs_add_disk(), check consistency_policy in the mdinfo structure
that represents the array, not the disk, and read the current consistency
policy from sysfs in mdmon's manage_member(). This is necessary to make
sysfs_add_disk() honor the actual consistency policy and not what is in
the metadata. Also remove all the places where consistency_policy is set
for a disk's mdinfo - it is a property of the array, not the disk.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 5 Sep 2017 09:41:37 +0000 (17:41 +0800)]
mdadm/manpage: disable bitmap_resize for external file bitmap
Update the "SIZE CHANGES" section of the manpage to match md commit
e8a27f836f165c26f867ece7f31eb5c811692319
(md/bitmap: disable bitmap_resize for file-backed bitmaps.)
Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
GET_MISMATCH option doesn't exist for RAID arrays without redundancy
so sysfs_read fails if this information is requested. Set options
according to the device using information from /proc/mdstat.
If the array is stopped during a reshape and assembled again straight
away, the reshape process might still be running in the background.
systemd doesn't start a new service if one already exists. If there is a
race, the previous process might terminate while the new one is not
created, and the reshape doesn't continue after assembly.
Tell systemd to restart the service rather than just start it. This
ensures the previous service is stopped first. If it is not running,
stopping has no effect and only the new process is started.
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
mdopen: call "modprobe md_mod" if it might be needed.
Creating an array by opening a block-device with major number of 9
will transparently load the md module if needed.
Creating an array by opening
/sys/module/md_mod/parameters/new_array
and writing to it won't; it will just fail if md_mod isn't loaded.
So when opening that file fails with ENOENT, run "modprobe md_mod" and
try again.
This fixes a bug whereby if you have "CREATE names=yes" in mdadm.conf,
and the md module isn't loaded, then creating or assembling an
array will not honor the "names=yes" configuration.
Song Liu [Tue, 29 Aug 2017 16:53:02 +0000 (09:53 -0700)]
mdadm: set journal_clean after scanning all disks
Summary:
In Incremental.c:count_active(), max_events is tracked to show
which devices are up to date. If a device has events==max_events+1,
getinfo_super() is called to reload the superblock from this
device. getinfo_super1() blindly sets journal_clean to 0, which is
wrong.
This patch fixes this by tracking max_journal_events for all the
disks. After scanning all disks, journal_clean is set if
max_journal_events >= max_events-1.
Signed-off-by: Song Liu <songliubraving@fb.com> Reviewed-by: NeilBrown <neilb@suse.com> Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mariusz Tkaczyk [Fri, 18 Aug 2017 10:00:23 +0000 (12:00 +0200)]
Detail: differentiate between container and inactive arrays
Containers used to be handled as active arrays because the GET_ARRAY_INFO
ioctl returns a valid structure for them. As containers appear as inactive
in sysfs, the output of the detail command has changed.
Stop relying on the inactive state for containers. Make the output look
the same as in mdadm 4.0.
Mariusz Tkaczyk [Wed, 16 Aug 2017 12:59:46 +0000 (14:59 +0200)]
Monitor: Include containers in spare migration
Spare migration doesn't work for external metadata. mdadm skips
a container with a spare device because it is inactive. It used to work
because the GET_ARRAY_INFO ioctl returned a valid structure for a
container and mdadm treated such a response as an active container. The
current implementation checks it in sysfs, where a container is shown as
inactive.
Adapt the sysfs implementation to work the same way as the ioctl.
Mariusz Tkaczyk [Wed, 16 Aug 2017 12:22:32 +0000 (14:22 +0200)]
Monitor: containers don't have the same sysfs properties as arrays
GET_MISMATCH option doesn't exist for containers so sysfs_read fails if
this information is requested. Set options according to the device using
information from /proc/mdstat.