Gioh Kim [Tue, 6 Nov 2018 15:20:17 +0000 (16:20 +0100)]
Assemble: mask FAILFAST and WRITEMOSTLY flags when finding the most recent device
If devices[].i.disk.state has the MD_DISK_FAILFAST or MD_DISK_WRITEMOSTLY
flag set, the device can never be identified as the most recent one. Both
flags should be masked out before checking the state.
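A minimal sketch of the masking, assuming the MD_DISK_* bit numbers from the kernel's md_p.h (illustrative helper, not the exact Assemble() code):

    #include <linux/raid/md_p.h>

    /* FAILFAST and WRITEMOSTLY are per-device policy bits; they say
     * nothing about how fresh a device is, so drop them before the
     * "most recent" comparison. */
    static inline int freshness_state(int state)
    {
            return state & ~((1 << MD_DISK_FAILFAST) |
                             (1 << MD_DISK_WRITEMOSTLY));
    }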
Mariusz Tkaczyk [Wed, 17 Oct 2018 10:11:41 +0000 (12:11 +0200)]
imsm: update metadata correctly while raid10 double degradation
Mdmon calls end_migration() when the map state changes from normal to
degraded. This is not valid: in the raid10 double-degradation case mdmon
breaks checkpointing even though the array is still rebuilding.
In this case mdmon has to mark the map as degraded and continue recording
the recovery checkpoint in the metadata. The migration can be finished
only if the newly failed device is a rebuilding device.
Add handling for the double-degraded to degraded transition. The
migration is finished, but the map state doesn't change; the array is
still degraded.
Update failed_disk_num correctly. If double degradation happens, the
rebuild will start on the lowest slot, but this variable points to the
first failed slot. If a second failure happens during rebuild, this
variable shouldn't be updated until the rebuild has finished.
NeilBrown [Wed, 5 Dec 2018 05:35:00 +0000 (16:35 +1100)]
Monitor: add system timer to run --oneshot periodically
"mdadm --monitor --oneshot" can be used to get a warning
if there are any degraded arrays. It can be helpful to get
this warning periodically while the condition persists.
This patch adds a systemd service and timer which can
be enabled with
systemctl enable mdmonitor-oneshot.service
NeilBrown [Wed, 5 Dec 2018 05:35:00 +0000 (16:35 +1100)]
mdcheck: add systemd unit files to run mdcheck.
Having the mdcheck script is of no use if it is never run.
This patch adds systemd unit files so that it can easily
be run on the first Sunday of each month for 6 hours,
then on every subsequent morning until the check is
finished.
The units still need to be enabled with
systemctl enable mdcheck_start.timer
The timer will only actually be started when an array
which might need it becomes active.
NeilBrown [Fri, 9 Nov 2018 06:12:33 +0000 (17:12 +1100)]
policy: support devices with multiple paths.
As new releases of Linux sometimes change the name of
a path, some distros keep "legacy" names as well. This
is useful, but confuses mdadm, which assumes each device has
precisely one path.
So change this assumption: allow a disk to have several
paths, and allow any to match when looking for a policy
which matches a disk.
NeilBrown [Fri, 9 Nov 2018 06:12:33 +0000 (17:12 +1100)]
Document PART-POLICY lines
PART-POLICY has been accepted in mdadm.conf since the same
time that POLICY was accepted, but it was never documented.
So add the missing documentation.
Also fix a bug which would have stopped it from working if
anyone had ever tried to use it.
Gioh Kim [Tue, 6 Nov 2018 14:27:42 +0000 (15:27 +0100)]
Assemble: keep MD_DISK_FAILFAST and MD_DISK_WRITEMOSTLY flag
Before updating the superblock of the slave disks, desired_state
is set to the target state of the slave disks, but it forgets
to check the MD_DISK_FAILFAST and MD_DISK_WRITEMOSTLY flags. Then
start_arrays() calls the ADD_NEW_DISK ioctl and passes the state
without MD_DISK_FAILFAST and MD_DISK_WRITEMOSTLY.
Currently this does not cause any problem, because the kernel does not
care about the MD_DISK_FAILFAST or MD_DISK_WRITEMOSTLY flags here.
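A rough sketch of the intended flag handling (the surrounding code is hypothetical; only the flag and field names come from the commit text):

    /* Preserve the per-device policy bits when building the target
     * state, so the ADD_NEW_DISK ioctl sees them too. */
    int desired_state = (1 << MD_DISK_ACTIVE) | (1 << MD_DISK_SYNC);
    if (devices[j].i.disk.state & (1 << MD_DISK_FAILFAST))
            desired_state |= (1 << MD_DISK_FAILFAST);
    if (devices[j].i.disk.state & (1 << MD_DISK_WRITEMOSTLY))
            desired_state |= (1 << MD_DISK_WRITEMOSTLY);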
When IMSM_NO_PLATFORM is exported, mdadm allows creating an array with
partitions, or adding a partition to an existing array, but there is no
possibility to assemble it after stopping; see commit 691c6ee1b6bb
("IMSM/DDF: don't recognised these metadata on partitions.").
When searching for HBA capabilities, test the device first and print a
corresponding error if it is a partition.
Guoqing Jiang [Mon, 27 Aug 2018 03:10:52 +0000 (11:10 +0800)]
Assemble: set devices to NULL when load_devices can't load device
Since load_devices frees "devices" when it can't find any
device, we should set it to NULL to avoid double free issue
which can be reproduced by below steps:
When Kill() cannot open the device or cannot find a superblock, it
returns the same error, and mdadm ignores it.
Change the error handling in the Kill() function: return an error if the
device is busy, and ignore the failure only when the superblock doesn't
exist - assume that the metadata is zeroed.
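A minimal sketch of the distinction being drawn, with illustrative return codes and a placeholder superblock check (not the exact Kill() implementation):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int kill_metadata(const char *dev, int (*has_super)(int fd))
    {
            int fd = open(dev, O_RDWR | O_EXCL);
            if (fd < 0)
                    return errno == EBUSY ? 2 : 0; /* busy is a real error */
            if (!has_super(fd)) {
                    close(fd);
                    return 0; /* no superblock: metadata assumed zeroed */
            }
            /* ... erase the superblock here ... */
            close(fd);
            return 0;
    }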
Mariusz Tkaczyk [Fri, 3 Aug 2018 07:41:50 +0000 (09:41 +0200)]
Incremental: remove external arrays and devices correctly
The kernel returns EBUSY when a device failure causes the array to fail.
With external metadata, if the kernel returns it, mdadm doesn't stop the
member arrays but tries to stop the container directly. That fails
because the container still has working arrays, so a udev remove is
triggered.
Try to set the faulty state on the device in the member arrays first. If
the kernel returns EBUSY, stop that array. After that, remove the device
from the container.
With external metadata, mdmon has to remove faulty devices from degraded
arrays, so just remove the device from the container.
A raid5 array doesn't return EBUSY; it allows every device to be removed,
and mdadm shouldn't block that.
udev.rules: make safe timeouts compatible with split-usr systems.
Instead of /usr/bin/sh and /usr/bin/echo, use /bin/sh and the shell
built-in echo, respectively. This makes
udev-md-raid-safe-timeouts.rules compatible with usr-merged
and split-usr systems alike.
Signed-off-by: Dimitri John Ledkov <xnox@ubuntu.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Sometimes a node can't assemble the array because all the nodes
need to contend for the dlm lock, which causes node fencing in automated
testing.
In fact, we don't need the protection, since the assemble
command called by the RA doesn't change the superblock, so revert
commit 76781701a487090172d32befae07671a10ea88ad ("Assemble:
provide protection when clustered raid do assemble") to remove the
unnecessary protection.
It is caused by Manage_stop() calling map_remove() and map_unlock():
*mapp is not set to NULL after map_remove() -> map_free(),
so map_unlock() calls map_free() again.
Michal Zylowski [Fri, 22 Jun 2018 14:34:12 +0000 (16:34 +0200)]
tests, imsm: Calculate expected array_size in the proper way
Tests should calculate the expected array_size according to the RAID
level. They should also account for the rounding to the nearest MB
introduced by b53bfba6 ("imsm: use rounded size for metadata
initialization").
Expect the proper size in the tests. Simplify the 09imsm-overlap test by
creating arrays with a size that needs no rounding; the main purpose of
this test is to check something else.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Michal Zylowski [Fri, 22 Jun 2018 14:34:10 +0000 (16:34 +0200)]
tests, imsm: Test shouldn't call grow with chunk and level in one command
Since a3b831c9 ("Grow.c: Block any level migration with chunk size
change") it is no longer possible to change the level and the chunk size
in one operation. When a test tries to do this, an error message is
printed and the test finishes with a failure.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Michal Zylowski [Fri, 22 Jun 2018 14:34:09 +0000 (16:34 +0200)]
tests, imsm: Set new_num_disks value correctly to perform expected size calculations
In some migration tests, the variable new_num_disks should be set to the
expected number of disks after the migration. This is required for proper
expected-size calculation.
Pass new_num_disks variable during test execution for:
- 16imsm-r0_3d-migrate-r5_4d
- 18imsm-r1_2d-takeover-r0_1d
- 16imsm-r0_5d-migrate-r5_6d
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Guoqing Jiang [Mon, 11 Jun 2018 09:03:44 +0000 (17:03 +0800)]
Free map to avoid resource leak issues
1. There are some places which didn't free the map, as
discovered by Coverity:
CID 289661 (#1 of 1): Resource leak (RESOURCE_LEAK)
12. leaked_storage: Variable mapl going out of scope leaks the storage it points to.
CID 289619 (#3 of 3): Resource leak (RESOURCE_LEAK)
63. leaked_storage: Variable map going out of scope leaks the storage it points to.
CID 289618 (#1 of 1): Resource leak (RESOURCE_LEAK)
26. leaked_storage: Variable map going out of scope leaks the storage it points to.
CID 289607 (#1 of 1): Resource leak (RESOURCE_LEAK)
41. leaked_storage: Variable map going out of scope leaks the storage it points to.
2. If we call map_by_* inside a loop, then map_free
should be called in the same loop, and it is better
to set map to NULL after freeing it.
3. map_unlock is always paired with map_lock; if we
don't call map_remove before map_unlock,
then the memory (allocated by map_lock -> map_read
-> map_add -> xmalloc) could be leaked, so we
need to free it in map_unlock as well.
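The leak-safe pattern from point 2, as a generic illustration (plain malloc/free stand in for map_by_*() and map_free(); this is not mdadm's actual API):

    #include <stdlib.h>

    struct ent { int id; };

    static void process_all(int n)
    {
            struct ent *map = NULL;
            for (int i = 0; i < n; i++) {
                    map = malloc(sizeof(*map)); /* stands in for map_by_*() */
                    if (!map)
                            break;
                    map->id = i;
                    /* ... use map ... */
                    free(map);                  /* free in the same loop */
                    map = NULL;                 /* avoid dangling reuse */
            }
            free(map); /* safe even on early exit: free(NULL) is a no-op */
    }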
Roman Sobanski [Fri, 8 Jun 2018 10:34:18 +0000 (12:34 +0200)]
imsm: correct num_data_stripes in metadata map for migration
When migrating an array from R0 to R10, num_data_stripes in the metadata
map is not updated. Update it so the migration process runs correctly.
The R10 to R0 migration is also reworked for clarity of the code.
Signed-off-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mariusz Tkaczyk [Thu, 7 Jun 2018 12:47:47 +0000 (14:47 +0200)]
Assemble.c: Don't ignore faulty disk when array is auto-assembled.
Since commit 20dc76d15b40 ("imsm: Set disk slot number") mdadm
sets the slot number for each disk in an imsm array. Auto-assembly now
determines devices using the slot number and ignores devices on the same
slot that have an older generation number.
This causes an infinite loop if the failed device is still visible in the
system (it has metadata, but it is not merged with the existing array).
To avoid it, the out-of-sync device should be added to best[]. Later,
mdadm adds it as a spare to the container.
Imsm doesn't support the disk replacement feature, so it can use the
room for replacements.
Zhilong Liu [Wed, 30 May 2018 07:04:41 +0000 (15:04 +0800)]
mdadm/test: correct tests/testdev as testdev in 02r5grow
Fixes: a6994ccc230b ("mdadm/test: get rid of the tests/testdev")
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Wed, 30 May 2018 07:04:05 +0000 (15:04 +0800)]
mdadm/test: mdadm needn't make install on the system
Fixes: beb71de04d31 ("mdadm/test: enable clustermd testing under clustermd_tests/")
clustermd_tests/func.sh:
remove the unnecessary 'make install'; just ensure 'make everything' has
been done. The original idea was to make the /sbin/mdadm version the same
as ./mdadm; this breakage was pointed out by commit 59416da78fc6
("tests/func.sh: Fix some total breakage in the test scripts").
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Xiao Ni [Wed, 30 May 2018 05:49:41 +0000 (13:49 +0800)]
Check major number of block device when querying md device
It gives an error message when querying a non-md device:
mdadm /dev/null
/dev/null: is an md device, but gives "Inappropriate ioctl for device" when queried
This was introduced by commits 5cb8599 and 8d0cd09.
Previously, md_get_version() checked whether a block device is an md
device. That function did two main jobs:
1. send a request by ioctl (this can now be replaced by the ioctlerr
argument);
2. check the block device's major number, which we no longer did.
Add the second check back in this patch.
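A sketch of the major-number check (MD_MAJOR is 9 for md devices; the helper name is illustrative, and the real code has more cases to handle):

    #include <sys/stat.h>
    #include <sys/sysmacros.h>

    #define MD_MAJOR 9

    static int is_md_device(const char *path)
    {
            struct stat st;
            if (stat(path, &st) != 0 || !S_ISBLK(st.st_mode))
                    return 0;
            return major(st.st_rdev) == MD_MAJOR;
    }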
Fixes: 5cb8599 and 8d0cd09
Reported-by: Karsten Weiss <karsten.weiss@atos.net>
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Jes Sorensen [Thu, 31 May 2018 15:45:21 +0000 (11:45 -0400)]
Monitor: Increase size of percentalert to avoid gcc warning
gcc-8.1 complains about truncated string operations. While we know
percent will never grow larger than 100, it doesn't cost us anything
to increase the size of 'percentalert' on the stack like this.
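Illustrative shape of the change (the format string and buffer size are assumptions; only the buffer name comes from the commit):

    #include <stdio.h>

    static void alert_rebuild_progress(int percent)
    {
            /* "Rebuild100" needs 11 bytes; 15 gives gcc enough slack to
             * prove that snprintf() cannot truncate. */
            char percentalert[15];
            snprintf(percentalert, sizeof(percentalert),
                     "Rebuild%02d", percent);
            /* ... hand percentalert to the alert mechanism ... */
    }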
Michal Zylowski [Tue, 29 May 2018 13:47:25 +0000 (15:47 +0200)]
imsm: Do not require MDADM_EXPERIMENTAL flag anymore
The grow feature for IMSM metadata is now fully supported and tested.
The reshape operation is no longer experimental, so requiring this
flag is unnecessary.
Do not require the MDADM_EXPERIMENTAL flag, and remove the obsolete
information from the manual.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Acked-by: Mariusz Dabrowski <mariusz.dabrowski@intel.com>
Acked-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Michal Zylowski [Tue, 29 May 2018 13:47:09 +0000 (15:47 +0200)]
imsm: Do not block volume creation when container has disks with mixed sector size
Currently, when a container holds disks with mixed sector sizes (a few
4K disks and some 512-byte disks), there is no way to create a volume
from the disks with one sector size.
Allow volume creation when the given disks belong to such a mixed
container.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Jes Sorensen [Tue, 29 May 2018 20:09:47 +0000 (16:09 -0400)]
super-intel: Use memcpy() to avoid confusing gcc
When adding :0 to the serial number and copying it back, use memcpy()
instead of strncpy(), as we know the actual length. This stops gcc
from complaining with -Werror=stringop-truncation enabled.
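A generic illustration of the idea (names are hypothetical; the real code operates on the imsm serial fields):

    #include <string.h>

    /* The serial plus ":0" has a known length, so copy exactly that many
     * bytes; strncpy() here is what trips -Werror=stringop-truncation. */
    static void copy_serial(char *dst, const char *src, size_t len)
    {
            memcpy(dst, src, len);
            dst[len] = '\0'; /* dst must have room for len + 1 bytes */
    }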
Michal Zylowski [Tue, 29 May 2018 13:46:40 +0000 (15:46 +0200)]
Fix misspelling of 'alignment' and 'geometry'
Change 'gemetry' to 'geometry' in the error message about failed
geometry validation. Fix the misspelled word 'alignment' in the
imsm_component_size_alignment_check function.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Roman Sobanski [Fri, 27 Apr 2018 10:12:21 +0000 (12:12 +0200)]
mdadm/grow: correct size and chunk_size casting
With commit 4b74a905a67e
("mdadm/grow: Component size must be larger than chunk size") mdadm
returns an incorrect message if the size given to grow is greater than
2147483647 K.
Cast chunk_size to "unsigned long long" instead of casting size to "int".
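A self-contained illustration of the truncation and the fix (the values are examples):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long size = 3000000000ULL; /* KiB, > INT_MAX */
            int chunk_size = 512;                    /* KiB */
            /* Before: "(int)size < chunk_size" truncated large sizes and
             * could misfire.  Widen the chunk for the comparison instead: */
            if (size < (unsigned long long)chunk_size)
                    printf("component size must be larger than chunk size\n");
            else
                    printf("size ok\n");
            return 0;
    }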
Signed-off-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Roman Sobanski [Wed, 25 Apr 2018 09:25:06 +0000 (11:25 +0200)]
Prevent creating IMSM volume with size smaller than 1M or chunk size
Block creation of an imsm volume when the given size is smaller than 1M,
and print an appropriate message.
Commit b53bfba6119d3f6f56eb9e10e5a59da6901af159
("imsm: use rounded size for metadata initialization") introduced an
issue with rounding volume sizes smaller than 1M down to 0. When a size
smaller than 1M is given, the result is inconsistent, depending on what
is given as the target device:
1) When block devices are given, the created volume has the maximum
available size.
2) When a container symlink is given, the created volume has size 0.
Additionally it causes the call trace below:
imsm: do not use blocks_per_member in array size calculations
mdadm assumes that the blocks_per_member value is equal to
num_data_stripes * blocks_per_stripe, but this is not true: for IMSM
arrays created in the OROM, NUM_BLOCKS_DIRTY_STRIPE_REGION sectors are
added to this value. Because of this, mdadm shows an invalid size for
arrays created in the OROM. To fix it, base the array size calculation
on the number of data stripes and the blocks per stripe.
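A sketch of the calculation the commit describes (helper name and types are illustrative):

    /* Derive the usable size from the stripe geometry rather than from
     * blocks_per_member, which may include the extra
     * NUM_BLOCKS_DIRTY_STRIPE_REGION sectors. */
    static unsigned long long array_size_from_geometry(
            unsigned long long num_data_stripes,
            unsigned long long blocks_per_stripe)
    {
            return num_data_stripes * blocks_per_stripe;
    }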
imsm: pass already existing map to imsm_num_data_members
In almost every place where imsm_num_data_members() is called, a map
already exists, so pass it in to avoid mistakes when specifying the
map for imsm_num_data_members().
tests/func.sh: Fix some total breakage in the test scripts
We will never mandate an obsolete file system such as ext[2-4] for
running the test suite, nor should the test version of mdadm be
installed on the system for the tests to be run.
Michal Zylowski [Wed, 4 Apr 2018 12:20:17 +0000 (14:20 +0200)]
imsm: Allow creating RAID volume with link to container
After 1db03765 ("Subdevs can't be all missing when create raid device")
a raid volume can't be created with a link to the container. This feature
should not be blocked in the Create function. The IMSM code forbids
creating a container with a missing disk, so the case of all devices
missing is already handled.
Permit IMSM volume creation when the devices are given as a link to the
container.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When assembling an array undergoing rebuild, the kernel will switch to
resync if there are no ppl entries to recover. Prevent that by adding an
empty entry when validating the ppl header.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:11:03 +0000 (14:11 +0800)]
clustermd_tests: add test case to test switch-recovery against cluster-raid10
03r10_switch-recovery:
Create a new array with 2 active disks and 1 spare, and set 1 active
disk to 'fail'; this triggers recovery, and the spare disk replaces the
failed disk. Then stop the array on the node doing the recovery; the
other node takes it over and completes the recovery.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:11:02 +0000 (14:11 +0800)]
clustermd_tests: add test case to test switch-recovery against cluster-raid1
03r1_switch-recovery:
Create a new array with 2 active disks and 1 spare, and set 1 active
disk to 'fail'; this triggers recovery, and the spare disk replaces the
failed disk. Then stop the array on the node doing the recovery; the
other node takes it over and completes the recovery.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:11:01 +0000 (14:11 +0800)]
clustermd_tests: add test case to test switch-resync against cluster-raid10
03r10_switch-resync:
Create a new array; 1 node does the resync while the other node stays
PENDING. Stop the array on the resyncing node; the other node takes it
over and completes the resync.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:11:00 +0000 (14:11 +0800)]
clustermd_tests: add test case to test switch-resync against cluster-raid1
03r1_switch-resync:
Create a new array; 1 node does the resync while the other node stays
PENDING. Stop the array on the resyncing node; the other node takes it
over and completes the resync.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:57 +0000 (14:10 +0800)]
clustermd_tests: add test case to test manage_add-spare against cluster-raid10
02r10_Manage_add-spare: it covers 2 scenarios of manage_add-spare.
1. With 2 active disks in the md array, use add-spare to add a spare disk.
2. With 2 active disks and 1 spare in the array, add-spare 1 new disk
into the array, then check the spares.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:56 +0000 (14:10 +0800)]
clustermd_tests: add test case to test manage_add-spare against cluster-raid1
02r1_Manage_add-spare: it covers 2 scenarios of manage_add-spare.
1. With 2 active disks in the md array, use add-spare to add a spare disk.
2. With 2 active disks and 1 spare in the array, add-spare 1 new disk
into the array, then check the spares.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:55 +0000 (14:10 +0800)]
clustermd_tests: add test case to test manage_add against cluster-raid10
02r10_Manage_add: it covers 2 scenarios of manage_add.
1. With 2 active disks in the md array, set 1 disk to 'fail' and
'remove' it from the array, then add 1 clean disk into the array.
2. With 2 active disks in the array, add 1 new disk into the array
directly; here 'add' is equal to 'add-spare'.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:54 +0000 (14:10 +0800)]
clustermd_tests: add test case to test manage_add against cluster-raid1
02r1_Manage_add: it covers 2 scenarios of manage_add.
1. With 2 active disks in the md array, set 1 disk to 'fail' and
'remove' it from the array, then add 1 clean disk into the array.
2. With 2 active disks in the array, add 1 new disk into the array
directly; here 'add' is equal to 'add-spare'.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:53 +0000 (14:10 +0800)]
clustermd_tests: add test case to test grow_add against cluster-raid1
01r1_Grow_add: it covers 3 ways of growing the array.
1. With 2 active disks in the md array, grow and add a new disk into
the array.
2. With 2 active disks and 1 spare disk in the md array, grow and add a
new disk into the array.
3. With 2 active disks and 1 spare disk in the md array, grow the device
number and make the spare disk an active disk in the array.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:52 +0000 (14:10 +0800)]
clustermd_tests: add test case to test switching bitmap against cluster-raid10
01r10_Grow_bitmap-switch:
It tests switching the bitmap across three modes - clustered, none
and internal; this case tests the clustered raid10.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:51 +0000 (14:10 +0800)]
clustermd_tests: add test case to test switching bitmap against cluster-raid1
01r1_Grow_bitmap-switch:
It tests switching the bitmap across three modes - clustered, none
and internal; this case tests the clustered raid1.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Fri, 2 Feb 2018 06:10:50 +0000 (14:10 +0800)]
manpage: add prompt in --zero-superblock against clustered raid
A clustered raid would be damaged if --zero-superblock is called
incorrectly, so add a warning to the --zero-superblock section of the
manpage. For example: cluster node1 has assembled the cluster-md, but
--zero-superblock is called on another cluster node.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Guoqing Jiang [Mon, 22 Jan 2018 09:12:10 +0000 (17:12 +0800)]
Assemble: cleanup the failure path
There are some failure paths which share common code
before returning, so simplify them by moving the common code
to the end of the function and just using goto out when a
failure happens.
Guoqing Jiang [Mon, 22 Jan 2018 09:12:09 +0000 (17:12 +0800)]
Assemble: provide protection when clustered raid do assemble
The previous patch provides protection for the other modes
such as CREATE, MANAGE, GROW and INCREMENTAL. For
ASSEMBLE mode, we also need protection during the process
of assembling a clustered raid.
However, we can only know whether the array is clustered
once the metadata is ready, so lock_cluster() is called
after select_devices(). And since we may re-read the metadata
when doing auto-assembly, refresh the locking there.
Guoqing Jiang [Mon, 22 Jan 2018 09:12:08 +0000 (17:12 +0800)]
mdadm: improve the dlm locking mechanism for clustered raid
Previously, the dlm locking only protected a few
functions which write to the superblock (update_super,
add_to_super and store_super), and we missed other
functions such as add_internal_bitmap. We also need to
call the functions which read the superblock under the
locking protection, to avoid consistency issues.
So let's remove the dlm code from super1.c, and
provide the locking mechanism in main(), except for
assemble mode, which will be handled in the next commit.
And since we can identify whether it is a clustered raid
by checking the different conditions of each
mode, the change should not affect native
arrays.
We also improve the existing locking as follows:
1. Replace ls_unlock with ls_unlock_wait, since we
should only return when the unlock operation is complete.
2. Inspired by lvm, try to reuse an existing
lockspace first, before blindly creating one, in case
the lockspace was not released for some reason.
3. Retry several times before giving up if locking
returns EAGAIN.
Note: for MANAGE mode, we do not need to take the lock if
the node just wants to confirm a device change; otherwise we
couldn't add a disk to the cluster, since all nodes would compete
for the lock.
BingJing Chang [Thu, 22 Feb 2018 07:00:28 +0000 (15:00 +0800)]
mdadm: prevent out-of-date reshaping devices from force assemble
With "--force", we can assemble the array even if some superblocks
appear out-of-date. But their data layout is regarded to make sense.
In reshape cases, if two devices claims different reshape progresses,
we cannot forcely assemble them back to array. Kernel will treat only
one of them as reshape progress. However, their data is still laid on
different layouts. It may lead to disaster if reshape goes on.
Reproducible Steps:
mdadm -C /dev/md0 --assume-clean -l5 -n3 /dev/loop[012]
mdadm -a /dev/md0 /dev/loop3
mdadm -G /dev/md0 -n4
mdadm -f /dev/md0 /dev/loop0 # after a period
mdadm -S /dev/md0 # after another period
mdadm -E /dev/loop[01] # make sure that they claim different progress
mdadm -Af -R /dev/md0 /dev/loop[023] # give too few devices, so
force_array() has to pick non-fresh devices
cat /sys/block/md0/md/reshape_position # the kernel may resume the
reshape from either of the claimed progress values
Note: the unit of mdadm -E is KB, but reshape_position's is sectors.
To prevent disaster, add logic that keeps devices with different
reshape progress from being added into the array.
Reported-by: Allen Peng <allenpeng@synology.com>
Reviewed-by: Alex Wu <alexwu@synology.com>
Signed-off-by: BingJing Chang <bingjingc@synology.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
These udev rules attempt to set a safe kernel controller
timeout for commodity disks containing RAID level 1 or higher
partitions which do not have the SCTERC capability, or have
it but disabled.
No attempt is made to change the SCTERC settings on devices
which support it.
This attempts to mitigate the problem described here:
where the kernel controller may time out on a read from a
disk after the default timeout of 30 seconds and consequently
cause mdraid to regard the disk as dead and eject it from the
RAID array.
The mitigation is to set the timeout to 180 seconds for disks
which contain a RAID level 1 or higher partition.
Signed-off-by: Jonathan G. Underwood <jonathan.underwood@gmail.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mariusz Tkaczyk [Thu, 25 Jan 2018 14:12:50 +0000 (15:12 +0100)]
Grow.c: Block any level migration with chunk size change
Mixing level and chunk-size changes in one grow operation is not
supported. Mdadm performs the level migration correctly and ignores the
new chunk size, but after the migration it tries to write this chunk
size to the sysfs properties. This is dangerous and can cause unexpected
behaviour.
Assemble: prevent segfault with faulty "best" devices
In Assemble(), after context reload, best[i] can be -1 in some cases,
and before checking if this value is negative we use it to access
devices[j].i.disk.raid_disk, potentially causing a segfault.
Check if best[i] is negative before using it to prevent this potential
segfault.
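A simplified sketch of the guard (the types and loop are illustrative; the field names follow the commit text):

    struct dev_info { struct { struct { int raid_disk; } disk; } i; };

    static void check_slots(int *best, struct dev_info *devices, int n)
    {
            for (int i = 0; i < n; i++) {
                    if (best[i] < 0)
                            continue; /* no device chosen for this slot */
                    int j = best[i];
                    if (devices[j].i.disk.raid_disk == i) {
                            /* ... only now is devices[j] safe to use ... */
                    }
            }
    }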
Signed-off-by: Andrea Righi <andrea@betterlinux.com>
Fixes: 69a481166be6 ("Assemble array with write journal")
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Robert LeBlanc <robert@leblancnet.us>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:09 +0000 (17:45 +0800)]
mdadm/clustermd_tests: add test case to test grow_resize cluster-raid10
01r10_Grow_resize:
1. Create a clustered raid10 with a smaller size, then resize the
mddev to the maximum size, and finally change it back to the smaller size.
2. Create a clustered raid10 with a smaller chunk size, then resize
it to a larger one, triggering a reshape.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:08 +0000 (17:45 +0800)]
mdadm/clustermd_tests: add test case to test creating cluster-raid10
00r10_Create: it contains 4 scenarios of creating a clustered raid10.
1. General creation: the master node does the resync and the slave node
stays Pending.
2. Creating a clustered raid10 with --assume-clean.
3. Creating a clustered raid10 with a spare disk.
4. Creating a clustered raid10 with --name.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:06 +0000 (17:45 +0800)]
mdadm/clustermd_tests: add test case to test creating cluster-raid1
00r1_Create: it contains 4 scenarios of creating a clustered raid1.
1. General creation: the master node does the resync and the slave node
stays Pending.
2. Creating a clustered raid1 with the --assume-clean parameter.
3. Creating a clustered raid1 with a spare disk.
4. Creating a clustered raid1 with --name.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:05 +0000 (17:45 +0800)]
mdadm/test: add '--testdir=' to switch choosing test suite
By now, mdadm has two test suites, covering traditional soft-raid
testing and clustermd testing; the '--testdir=' option selects
which suite to test, tests/ or clustermd_tests/.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:04 +0000 (17:45 +0800)]
mdadm/test: enable clustermd testing under clustermd_tests/
For clustermd testing, the user needs to deploy the basic cluster
manually; the test scripts don't cover auto-deploying a cluster, because
Linux distributions differ too much.
Then complete the configuration in cluster_conf; please refer to
the detailed comments in 'cluster_conf'.
1. 'func.sh': source file which implements the feature functions for
clustermd testing.
2. 'cluster_conf': configuration file which contains the two parts used
as the input of testing.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Zhilong Liu [Tue, 16 Jan 2018 09:45:02 +0000 (17:45 +0800)]
mdadm/test: correct the logic operation in save_log
1. Delete the mdadm -As, keeping the original testing scene intact.
2. Move some actions inside the 'array' test; 'mdadm -D $array' would
report errors if $array is null.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mariusz Tkaczyk [Thu, 11 Jan 2018 11:39:49 +0000 (12:39 +0100)]
policy.c: Avoid taking spare without defined domain by imsm
Only imsm's get_disk_controller_domain returns a disk controller domain
for each disk. Because of this, mdadm automatically creates a disk
controller domain policy for imsm metadata, and imsm containers in the
same disk controller domain can take a spare for recovery.