Support for RAID 10 makes it necessary to rewrite the algorithm
for deriving DDF layout from MD layout. The functions level_to_prl
and layout_to_rlq are combined in a single function that takes
md layout parameters and converts them to DDF.
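A minimal sketch of what such a combined routine can look like (the
constant values follow the SNIA DDF spec but should be treated as
illustrative, and the RLQ tables are elided; this is not the literal
super-ddf.c code):

    /* DDF primary RAID levels; values as in the SNIA DDF spec. */
    enum { DDF_RAID0 = 0x00, DDF_RAID1 = 0x01, DDF_RAID5 = 0x05,
           DDF_RAID6 = 0x06, DDF_RAID1E = 0x11 };

    struct ddf_layout { int prl; int rlq; };

    /* One routine instead of level_to_prl + layout_to_rlq: map
     * md (level, layout) to a DDF (PRL, RLQ) pair, or fail. */
    static int md_to_ddf_layout(int level, int layout, struct ddf_layout *out)
    {
        switch (level) {
        case 0:
            out->prl = DDF_RAID0;
            out->rlq = 0;          /* RLQ choice elided in this sketch */
            return 0;
        case 1:
            out->prl = DDF_RAID1;
            out->rlq = 0;
            return 0;
        case 10:
            /* md encodes raid10 copies in 'layout'; DDF's RAID-1E can
             * only express the near=2, far=1 case (layout 0x102). */
            if (layout != 0x102)
                return -1;
            out->prl = DDF_RAID1E;
            out->rlq = 0;
            return 0;
        case 5:
            out->prl = DDF_RAID5;
            out->rlq = layout;     /* md algorithm -> RLQ table elided */
            return 0;
        case 6:
            out->prl = DDF_RAID6;
            out->rlq = layout;
            return 0;
        default:
            return -1;             /* not expressible in DDF */
        }
    }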
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
DDF: layout_ddf2md: new DDF->md RAID layout conversion
layout_ddf2md() is a new RAID layout conversion routine.
It obsoletes the previous separate routines for obtaining
md level and layout (map_num1, rlq_to_layout).
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
The DDF code was assuming that the VD slots 0..populated_vdes
were used and the rest were unused. Remove this assumption and
deal with empty slots instead.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
If the secondary RAID level is taken into account, translation between
the md RAID member index (raid_disk) and the index of a physical disk
in a BVD becomes more complex.
Also, take into account that the member list can have unused entries
(this is independent of the secondary RAID level).
Adapt usage of find_vdcr() accordingly.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
This patch implements the previously unsupported case where
store_super_ddf is called with a non-empty superblock.
For DDF, writing metadata to just one disk makes no sense;
we would run the risk of writing inconsistent metadata
to the devices. So just call __write_init_super_ddf and
write to all devices, including the one passed by the caller.
This patch assumes that the device to store the superblock on
has already been added to the DDF structure. Otherwise, an
error message will be emitted.
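In outline, the new path can look like this (a sketch rather than the
literal super-ddf.c code; the ddf_super/dl field names are assumptions
based on the description above):

    #include <sys/stat.h>    /* fstat(); major()/minor() may also need
                              * <sys/sysmacros.h> on glibc */

    static int store_super_ddf(struct supertype *st, int fd)
    {
        struct ddf_super *ddf = st->sb;
        struct stat stb;
        struct dl *dl;

        if (!ddf)
            return 1;

        /* Refuse to write one disk's metadata in isolation: find the
         * caller's device in our list, then write to all members. */
        if (fstat(fd, &stb) != 0)
            return 1;
        for (dl = ddf->dlist; dl; dl = dl->next)
            if (dl->major == (int)major(stb.st_rdev) &&
                dl->minor == (int)minor(stb.st_rdev))
                break;
        if (!dl) {
            pr_err("device not found in DDF structure\n");
            return 1;
        }
        return __write_init_super_ddf(st);
    }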
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
Assemble: avoid a consistency check when --force is given.
mdadm will normally not include a device in an array if that device
reports that the "best" device has failed, as this normally implies
some sort of inconsistency.
However, when --force is given it means that the given drives really
should be assembled if at all possible, so in that case the test
should be avoided.
The particular case where this was a problem was a RAID5 where all
devices had the same event count but three of them reported that the
first two had failed.
As they all had the same event count, the first was taken as the 'best'
and that caused the later ones to be excluded. Listing one of the
later ones first allowed the array to be assembled. So in this case
the test clearly just got in the way and did nothing useful.
Grow: notice when --stop is synchronising a reshape and don't mess it up.
--stop now tries to wait for a reshape to be at just the right spot.
However, for a reducing reshape, mdadm will be running in the
background watching, and might adjust sync_max and mess things up.
So teach "progress_reshape" to notice when "sync_max" has been
modified by someone else, and to leave it alone in that case.
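The check itself can be small (a sketch using mdadm's sysfs_get_ll()
helper; 'last_max' stands for the value progress_reshape last wrote):

    /* Return non-zero if sync_max no longer holds what we wrote,
     * i.e. someone else (e.g. --stop) is steering the reshape. */
    static int sync_max_was_hijacked(struct mdinfo *sra,
                                     unsigned long long last_max)
    {
        unsigned long long cur;

        /* Note: sync_max may hold "max", in which case the read fails
         * to parse; treat that as modified too. */
        if (sysfs_get_ll(sra, NULL, "sync_max", &cur) != 0)
            return 1;
        return cur != last_max;
    }

When this returns true, progress_reshape stops writing sync_max.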
progress_reshape() may not set reshape_completed if the reshape is
interrupted, so we need to initialize it to the current value
beforehand, so that the value used afterwards is credible.
Stop: improve synchronising of reshape with whole stripes.
It is possible for 'sync_completed' to be further ahead than
we deduced from 'reshape_position'. However we cannot read it while
the array is frozen, so it is hard to know.
Once the array is unfrozen, check: if 'sync_completed' is ahead of
'sync_max', push 'sync_max' well ahead of 'sync_completed' so it
will all synchronise up properly.
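A sketch of the idea, using mdadm's sysfs helpers ('slack', i.e. how
far "well ahead" is, is an illustrative choice):

    /* After unfreezing: if the kernel's progress overtook our target,
     * move the target well past it so everything settles cleanly. */
    static void nudge_sync_max(struct mdinfo *sra, unsigned long long slack)
    {
        unsigned long long completed, max;

        if (sysfs_get_ll(sra, NULL, "sync_completed", &completed) == 0 &&
            sysfs_get_ll(sra, NULL, "sync_max", &max) == 0 &&
            completed > max)
            sysfs_set_num(sra, NULL, "sync_max", completed + slack);
    }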
Assemble: improve messages when restarting a reshape.
If the restarted reshape needs a backup file and we don't have one,
that should be reported before we try to start the array.
Also we shouldn't say "Cannot grow" but "Cannot complete".
Assemble: ignore devices= if container= is present.
If "container=" is present, then we are going to assemble from the
given container where that container is made of those devices or not.
So in this case the "devices=" is purely documentation and is best
ignored.
As part of this, move the test on the "container=" value when that
start with "/" up before the device is opened. There sooner we test
things, the better.
Reported-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
Config: use better device names for "DEVICES container"
When "containers" appears on the "DEVICES" line (which is does by
default), use names from the mdadm map file instead of kernel names,
when possible.
This mean that the name will be more likely to appear in mdadm.conf
and so more likely to match "container=" tags.
If the container metadata doesn't know how many devices to expect (as
is the case with IMSM), don't fail an --assemble which over-specifies
the number of devices.
Manage: check alignment when stopping an array undergoing reshape.
To be able to revert a reshape of raid4/5/6 which is changing
the number of devices, the reshape must have been stopped on a
multiple of both the old and new stripe sizes.
The kernel only enforces the new stripe size multiple.
So we enforce the old-stripe-size multiple by careful use of
"sync_max" and monitoring "reshape_position".
NeilBrown [Thu, 27 Jun 2013 00:12:31 +0000 (10:12 +1000)]
Grow: report better message when --grow --chunk cannot work.
When changing the chunksize of an array, the new chunksize must
divide the device size.
If it doesn't we report a very brief message.
Make this message a bit longer and suggest a possible way forward:
reducing the size of the array.
Reported-by: Mark Knecht <markknecht@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Tue, 25 Jun 2013 05:56:22 +0000 (15:56 +1000)]
Make wait_for and open_dev_excl faster
When we create or assemble an array, we wait for udev to create the
device file in /dev so that as soon as mdadm completes, the device can
be used.
This waiting is performed in multiples of 200ms, which can sometimes
be too long to wait.
So change to an exponential backoff: wait 1, then 2, then 4 msec, etc.
Once we get to 256msec, stop backing off and continue waiting 256ms at
a time until we reach the limit, which is now 4.608sec rather than the
previous 5sec.
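The loop reduces to something like this (a sketch; try_open() stands
in for the "has udev created the device yet?" check):

    #include <unistd.h>

    /* Exponential backoff: 1, 2, 4, ... 256ms, then hold at 256ms. */
    static int wait_with_backoff(int (*try_open)(const char *),
                                 const char *name)
    {
        int delay = 1, waited = 0;      /* milliseconds */

        while (waited < 4608) {         /* the ~4.6s limit above */
            if (try_open(name))
                return 0;
            usleep(delay * 1000);
            waited += delay;
            if (delay < 256)
                delay *= 2;
        }
        return -1;                      /* device never appeared */
    }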
NeilBrown [Tue, 25 Jun 2013 05:52:58 +0000 (15:52 +1000)]
Grow: fix bug in raid0 -> raid5 conversion.
The moment we change a RAID0 to a RAID5 it will try to recover. This
will abort quite quickly as there are no spare devices, but it could
confuse the attempt to freeze the array.
So allow 'freeze' to work even on a recovering array.
NeilBrown [Mon, 24 Jun 2013 06:59:37 +0000 (16:59 +1000)]
Make: CXFLAGS should be conditionally assigned.
As the Makefile encourages users to set CXFLAGS for extra flags,
we should only conditionally set it (i.e. with "?=").
That way it can be overridden in the environment as well as on
the command line.
mwilck@arcor.de [Thu, 20 Jun 2013 20:21:05 +0000 (22:21 +0200)]
Detail: deterministic ordering in --brief --verbose
Have "mdadm --detail --brief --verbose" print the list of devices in
alphabetical order.
This is useful for debugging purposes. E.g. the test script
10ddf-create compares the output of two mdadm -Dbv calls which
may be different if the order is not deterministic.
(I confess: I use a modified "test" script that always runs
"mdadm --verbose" rather than "mdadm --quiet", otherwise this
wouldn't happen in 10ddf-create).
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Mon, 24 Jun 2013 04:08:41 +0000 (14:08 +1000)]
Grow: remove excess drives when converting to RAID0.
When converting to RAID0, all spares and non-data drives
need to be removed first.
It is possible that the first HOT_REMOVE_DISK will fail because the
personality hasn't let go of it yet, so retry a few times.
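The retry can be a short loop around the ioctl (a sketch; the retry
count and delay are illustrative choices):

    #include <errno.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/raid/md_u.h>   /* HOT_REMOVE_DISK */

    /* HOT_REMOVE_DISK fails with EBUSY while the personality still
     * holds the device; retry briefly before giving up. */
    static int remove_disk_retry(int fd, unsigned long dev)
    {
        int tries;

        for (tries = 5; tries > 0; tries--) {
            if (ioctl(fd, HOT_REMOVE_DISK, dev) == 0)
                return 0;
            if (errno != EBUSY)
                break;
            usleep(50000);         /* 50ms between attempts */
        }
        return -1;
    }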
NeilBrown [Mon, 24 Jun 2013 03:04:38 +0000 (13:04 +1000)]
Grow: fix two problems with new_data_offset
1/ ignore failed devices - obviously
2/ We need to tell the kernel which direction the reshape should
progress even if we didn't choose the particular data_offset
to use.
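For point 2, the knob is the array's "reshape_direction" sysfs
attribute (a sketch using mdadm's sysfs_set_str(); 'offset_grows' is
an illustrative flag, and the attribute name is per the md sysfs
documentation):

    /* Tell the kernel which way the reshape walks the devices, even
     * when we did not pick the data_offset ourselves. */
    static int set_reshape_direction(struct mdinfo *sra, int offset_grows)
    {
        return sysfs_set_str(sra, NULL, "reshape_direction",
                             offset_grows ? "forwards" : "backwards");
    }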
NeilBrown [Mon, 24 Jun 2013 03:02:35 +0000 (13:02 +1000)]
Grow: Try hard to set new_offset.
Setting new_offset can fail if the v1.x "data_size" is too small.
So if that happens, try increasing it first by writing "0".
That can fail on spare devices due to a kernel bug, so if it doesn't
work, try writing the correct number of sectors.
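The fallback chain looks roughly like this (a sketch; sysfs_set_num is
mdadm's helper, the per-device "size"/"new_offset" attribute names and
the 'explicit_size' value, in md's sysfs units, are assumptions):

    static int try_set_new_offset(struct mdinfo *sra, struct mdinfo *sd,
                                  unsigned long long new_offset,
                                  unsigned long long explicit_size)
    {
        if (sysfs_set_num(sra, sd, "new_offset", new_offset) == 0)
            return 0;
        /* data_size may be too small: "0" asks for the maximum... */
        if (sysfs_set_num(sra, sd, "size", 0) < 0)
            /* ...but a kernel bug rejects that on spares, so be
             * explicit about the size instead. */
            sysfs_set_num(sra, sd, "size", explicit_size);
        return sysfs_set_num(sra, sd, "new_offset", new_offset);
    }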
NeilBrown [Wed, 19 Jun 2013 01:09:33 +0000 (11:09 +1000)]
Assemble: when forcing a single-degraded RAID6 array, trigger a 'repair'.
When an active/degraded RAID6 array is force-started we clear the
'active' flag, but it is still possible that some parity is
not in sync. This is because there are two parity blocks.
It would be nice to be able to tell the kernel "P is OK, Q maybe not".
But that is not possible.
So when we force-assemble such an array, trigger a 'repair' to fix up
any errant Q blocks.
This is not ideal, as a repair interrupted by a restart will not be
resumed after the restart, but it is the best we can do without
kernel help.
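The trigger itself is just a write of "repair" to the array's
sync_action attribute, e.g. (standalone sketch; the md device path is
illustrative):

    #include <fcntl.h>
    #include <unistd.h>

    /* Ask md for a full repair pass so stale Q blocks are rewritten. */
    static void trigger_repair(void)
    {
        int fd = open("/sys/block/md0/md/sync_action", O_WRONLY);

        if (fd >= 0) {
            write(fd, "repair", 6);
            close(fd);
        }
    }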
NeilBrown [Wed, 19 Jun 2013 00:33:47 +0000 (10:33 +1000)]
sysfs_read: return devices in same order as in filesystem.
When we read devices from sysfs (../md/dev-*), store them in the same
order that they appear. That makes more sense when the list is
exposed to a human (as the next patch will do).
Bernd Schubert [Tue, 18 Jun 2013 09:09:26 +0000 (11:09 +0200)]
raid6check: Fix memory leaks detected by valgrind
==2389947== 24 bytes in 1 blocks are definitely lost in loss record 1 of 10
==2389947== at 0x4C2B3F8: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2389947== by 0x408067: xmalloc (xmalloc.c:36)
==2389947== by 0x401B19: check_stripes (raid6check.c:151)
==2389947== by 0x4030C6: main (raid6check.c:521)
==2389947==
==2389947== 24 bytes in 1 blocks are definitely lost in loss record 2 of 10
==2389947== at 0x4C2B3F8: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2389947== by 0x408067: xmalloc (xmalloc.c:36)
==2389947== by 0x401B67: check_stripes (raid6check.c:155)
==2389947== by 0x4030C6: main (raid6check.c:521)
==2389947==
Bernd Schubert [Tue, 18 Jun 2013 09:09:16 +0000 (11:09 +0200)]
raid6check: Fix build of raid6check
After recent git pull 'make raid6check' did not work anymore, as
sysfs_read() was called with a wrong argument and as check_env()
was used by use_udev(), but not defined.
Replace sysfs_read(..., -1, ...) by sysfs_read(..., NULL, ...)
NeilBrown [Mon, 17 Jun 2013 06:55:31 +0000 (16:55 +1000)]
Assemble/Incr: Don't include spares with too-high event count.
Some failure scenarios can leave a spare with a higher event count
than an in-sync device. Assembling an array like this will confuse
the kernel.
So detect spares with event counts higher than the best non-spare
event count and exclude them from the array.
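In outline (a sketch; the dev_info layout and field names are
illustrative, not the literal Assemble.c code):

    struct dev_info { unsigned long long events; int spare; int use; };

    static void drop_suspect_spares(struct dev_info *devices, int devcnt)
    {
        unsigned long long best_events = 0;
        int j;

        /* Highest event count among the non-spare (in-sync) members. */
        for (j = 0; j < devcnt; j++)
            if (!devices[j].spare && devices[j].events > best_events)
                best_events = devices[j].events;

        /* A spare claiming to be newer than that would confuse the
         * kernel, so exclude it from the assembly. */
        for (j = 0; j < devcnt; j++)
            if (devices[j].spare && devices[j].events > best_events)
                devices[j].use = 0;
    }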
Reported-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Mon, 27 May 2013 05:09:38 +0000 (15:09 +1000)]
Grow: allow for different sized devices when updating data_offset.
It is possible that the devices in an array have different sizes, and
different data_offsets. So the 'before_space' and 'after_space' may
be different from drive to drive.
Any decisions about how much to change the data_offset must work on
all devices, so must be based on the minimum available space on
any device.
So find this minimum first, then do the calculation.
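A sketch of the first pass (before_space()/after_space() are
illustrative accessors for the per-device free space, not mdadm API):

    /* The data_offset change must fit the tightest device, so take
     * the minimum of both spaces across all members first. */
    static void min_spaces(struct mdinfo *sra,
                           unsigned long long *min_before,
                           unsigned long long *min_after)
    {
        struct mdinfo *sd;

        *min_before = ~0ULL;
        *min_after = ~0ULL;
        for (sd = sra->devs; sd; sd = sd->next) {
            if (before_space(sd) < *min_before)
                *min_before = before_space(sd);
            if (after_space(sd) < *min_after)
                *min_after = after_space(sd);
        }
        /* ...then compute the offset change from these minima. */
    }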
NeilBrown [Thu, 23 May 2013 04:41:29 +0000 (14:41 +1000)]
Assemble: --update=metadata converts v0.90 to v1.0
This allows the smooth conversion of legacy 0.90 arrays
to 1.0 metadata.
Old metadata is likely to remain but will be ignored.
It can be removed with
mdadm --zero-superblock --metadata=0.90 /dev/whatever
NeilBrown [Tue, 21 May 2013 06:28:23 +0000 (16:28 +1000)]
Grow: use new_data_offset instead of backups for raid4/5/6 reshape.
If we can modify the data_offset, we can avoid doing any backups at all.
If we can't, fall back on the old approach - but not if --data-offset
was requested.
NeilBrown [Wed, 22 May 2013 02:17:32 +0000 (12:17 +1000)]
Grow: introduce min_offset_change to struct reshape.
raid10 currently uses the 'backup_blocks' field to store something
else: a minimum offset change.
This is bad practice, and we will shortly need both values for
RAID5/6, so make a separate field.
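After this change the reshape descriptor carries both quantities side
by side, roughly (a sketch of the relevant fields only; see the struct
reshape definition in the tree for the full shape):

    struct reshape {
        /* ... */
        unsigned long long backup_blocks;      /* blocks backed up per step */
        unsigned long long min_offset_change;  /* raid10: least data_offset
                                                * shift that is acceptable */
        /* ... */
    };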