Krzysztof Wojcik [Tue, 25 Jan 2011 06:44:10 +0000 (17:44 +1100)]
raid0->raid10 takeover- process metadata update
Implementation of raid0->raid10 takeover metadata update
at process_update level.
- Use the memory previously allocated in prepare_update to create two
dummy disks that will be inserted into the metadata, and a new
imsm_dev structure with an expanded disk order table.
- Update indexes in the disk list
- Update the metadata map
- Update the disk order table
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Krzysztof Wojcik [Tue, 25 Jan 2011 06:44:10 +0000 (17:44 +1100)]
raid0->raid10 takeover- allocate memory for added disks
Allocate memory that will be used in process_update.
For a raid0->raid10 takeover the number of disks doubles,
so we must allocate memory for the additional disks
and for one imsm_dev structure with an extended order table.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Krzysztof Wojcik [Tue, 25 Jan 2011 06:44:10 +0000 (17:44 +1100)]
raid0->raid10 takeover- create metadata update
Create metadata update for raid0 -> raid10 takeover.
Because no mdmon is running for raid0 we have to
update the metadata using the local update mechanism.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Krzysztof Wojcik [Tue, 25 Jan 2011 06:49:03 +0000 (17:49 +1100)]
Add raid10 -> raid0 takeover support
The patch introduces takeover from level 10 to level 0 for imsm
metadata. This patch contains procedures connected with preparing
and applying metadata update during 10 -> 0 takeover.
When performing takeover 10->0 mdmon should update the external
metadata (due to disk slot and level changes).
To achieve that, mdadm calls reshape_super() and prepares
the "update_takeover" metadata update type.
The prepared update is then processed by mdmon in process_update().
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Tue, 25 Jan 2011 22:47:06 +0000 (08:47 +1000)]
Fix some issues with setting 'new' state of a reshape
- when reshaping a container, ->reshape_active is already set
even though it isn't really active yet, so we need to set
the new geometry even when reshape_active is set. This is safe.
- When restarting a reshape, make sure the reshape_position is set
appropriately when external metadata is used.
NeilBrown [Mon, 24 Jan 2011 20:56:53 +0000 (07:56 +1100)]
Don't close fds in write_init_super
We previously closed all 'fds' associated with an array in
write_init_super ... sometimes, and sometimes at bad times.
This isn't neat, and free_super is a better place to close them.
So make sure free_super always closes the fds that the metadata
manager kept hold of, and stop closing them in write_init_super.
Also add a few more calls to free_super to make sure they really do
get closed.
Reported-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Thu, 20 Jan 2011 20:59:53 +0000 (07:59 +1100)]
Fix management of backed-up region for hi-to-low reshapes.
When reshaping from the end of the array to the start, for times
when the number of data devices is decreasing, the handling of the
backup area isn't a simple mirror of the handling on low-to-hi
reshapes, as the backup area is always low in the array.
Krzysztof Wojcik [Mon, 17 Jan 2011 01:56:43 +0000 (12:56 +1100)]
Unfreeze for non re-striping transitions
For non-re-striping transitions the array must be unfrozen
before the end of processing.
For re-striping transitions we normally let the child
unfreeze the array, but in this case there is no child.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Krzysztof Wojcik [Mon, 17 Jan 2011 01:53:31 +0000 (12:53 +1100)]
Set reshape.after.data_disks for raid0<->raid10 takeover
The reshape.after.data_disks field must be initialized
for a raid0<->raid10 transition.
Otherwise the spares_needed variable calculated in the
reshape_array function has a random value.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Mon, 17 Jan 2011 01:44:52 +0000 (12:44 +1100)]
FIX: sync_completed == 0 causes reshape cancellation in metadata
md signals reshape completion (of the whole area or of parts) by setting
sync_completed to 0. This causes set_array_state() to roll back
metadata changes (super-intel.c:4977). To avoid this, do not allow
last_checkpoint to be set to 0 if the reshape is finished.
This was also the root cause of my previous fix for reshape
finalization, which I agreed earlier is not necessary.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Mon, 17 Jan 2011 01:38:13 +0000 (12:38 +1100)]
FIX: mdadm throws coredump on exit in error case
When mdadm hits a "reduce size" error on a taken-over array, it jumps
to the release path and tries to execute the backward takeover. At this
point the sra pointer is not initialized and a core dump is generated.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Czarnowska, Anna [Thu, 13 Jan 2011 13:22:16 +0000 (13:22 +0000)]
fix: segfault if subarray is monitored but container is not
In this situation to->parent is null, so "to" doesn't change to the
parent container and to->metadata is still null.
This results in a segmentation fault when checking
to->metadata->ss->external.
We should just skip this array, as a container is needed to move spares to.
Signed-off-by: Anna Czarnowska <anna.czarnowska@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Sun, 16 Jan 2011 22:53:56 +0000 (09:53 +1100)]
Add 'restart' arg to various functions used for reshaping.
When we restart an array in the middle of a reshape, we reuse a lot of
the code for starting the reshape, but it needs to know that
circumstances are slightly different.
So add a 'restart' arg which is used:
- skip checking and adding spares
- activate the array (rather than start reshape)
- allow the backup file to already exist
NeilBrown [Wed, 12 Jan 2011 23:44:52 +0000 (10:44 +1100)]
reshape_array: move lots of code out of a 'switch'.
Everything other than the 'child' part of the 'switch(fork)' returns
quickly, so leave them inside the switch but move the other large bit
out so as to make the flow of code more natural.
NeilBrown [Wed, 12 Jan 2011 23:39:34 +0000 (10:39 +1100)]
Fix up calls to unfrozen at end of reshape.
1/ don't pass 'frozen' as an arg to unfreeze - just use it
to conditionally call 'unfreeze'.
2/ Always unfreeze at end of reshape_container
3/ Only unfreeze at end of reshape_array if not 'forked'. So when
reshape_array is called from reshape_container it doesn't unfreeze,
but it does when called directly.
Adam Kwolek [Wed, 12 Jan 2011 23:16:07 +0000 (10:16 +1100)]
imsm: FIX: spares are not counted
The info->array.spare_disks field is used at the beginning of
reshape_array() to check whether there are enough spares to process
the reshape. During the container_content_imsm() call spare disks are
not counted, which causes reshape_array() to report that there are not
enough spares to execute the reshape.
This patch adds spare counting for the reshape process.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Wed, 12 Jan 2011 23:15:54 +0000 (10:15 +1100)]
FIX: Cannot load container information
When a container is passed to grow_reshape(), the load_container()
function has to be used to get all required information from the metadata.
So load_super is never correct here - in particular, cfd is a
'container fd', so we must call 'load_container' on it.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Wed, 12 Jan 2011 23:06:29 +0000 (10:06 +1100)]
imsm: FIX: old devices memory has to be released
When process_update() replaces memory with bigger devices, the old
memory areas are collected in a list that has to be assigned to a
pointer in the update for later release.
The list created from the old devices is attached to space_list
so it can be released later.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Wed, 12 Jan 2011 23:05:31 +0000 (10:05 +1100)]
imsm: FIX: local mdadm update shouldn't be done in update creation function.
The local update is performed based on the created update, so this code
can break the local update; it is also unnecessary, as the prepare- and
process-update functions are used.
The code is removed.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Wed, 12 Jan 2011 23:04:33 +0000 (10:04 +1100)]
imsm: FIX: mdadm should process local data
When an update is created by mdadm, the local information should be
updated as well. Previously this meant preparing one update for mdmon
and a second "update" to maintain the local changes. Instead, we can
use the prepared update for the "local/mdadm" metadata update as well.
We have 2 cases:
1. when metadata is updated by mdmon, we avoid metadata reloading in
mdadm.
We process the same update 2 times:
- once in mdadm for the "local update"
- a second time in mdmon for the real metadata update
2. when metadata is updated by mdadm (no mdmon running), updates are
processed in the same way:
- once in mdadm for the "local update"
- there is no "second time" update; mdadm just flushes the
metadata to the array
This lets us avoid code duplication by using the prepare- and
process-update functions just as for an update via mdmon. It makes
update preparation independent of mdmon, and there is no need to
maintain the same thing in 2 places in the code.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Kwolek, Adam [Wed, 12 Jan 2011 10:42:44 +0000 (21:42 +1100)]
FIX: size is passed incorrectly
reshape_super() called from reshape_container() with size set to
info->component_size gives size == -2 in reshape_super due to an
unsigned/signed conversion (info->component_size is not initialized).
As the size is not changed during a container reshape, the value '-1'
is passed to indicate this.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Anna Czarnowska [Tue, 11 Jan 2011 10:36:37 +0000 (11:36 +0100)]
Monitor: skip array if error getting size
load_super tries to load the container first anyway, but if that fails,
e.g. after physically removing a disk,
it then tries to read metadata from the container device.
This will always fail and print confusing errors.
So use load_container instead of load_super on a container.
On failure to read metadata we should skip this array.
It will be dealt with the next time round.
Signed-off-by: Anna Czarnowska <anna.czarnowska@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Wed, 12 Jan 2011 04:59:24 +0000 (15:59 +1100)]
Don't complain about missing spares when reshaping a raid0.
To reshape a RAID0 we convert to RAID4 first. This makes it look
like it could be degraded and so we are tempted to ensure there are
enough spares. However this is not appropriate for RAID0, so
explicitly exclude new_level == RAID0 in this check.
Reported-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Wed, 12 Jan 2011 04:12:44 +0000 (15:12 +1100)]
imsm: FIX: update disks status in container_contents()
Disks are added to the array during grow (in reshape_array()) based on
their status information. This information is currently not present,
so all disks (old and new) were added to md.
To avoid adding already-present disks, disk.state has to be set.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Wed, 12 Jan 2011 03:46:17 +0000 (14:46 +1100)]
Make child_monitor a candidate for ->manage_reshape
child_monitor was designed to perform 'manage_reshape' for native
arrays. So change the signature of ->manage_reshape to match
child_monitor and move the call to the same place that child_monitor
is called from.
Also give super-intel a manage_reshape handler which simply calls
child_monitor.
NeilBrown [Wed, 12 Jan 2011 03:04:09 +0000 (14:04 +1100)]
Add comment about future enhancement
We currently suspend rather large sections of
the array which can take a while to process.
Possibly smaller sections are better. Possibly it should be
adjusted on a timeout basis.
NeilBrown [Tue, 11 Jan 2011 23:44:58 +0000 (10:44 +1100)]
Be more enthusiastic/flexible in backing up data.
At some points we need to perform 2 backups at once so as to
start the 'double buffering' approach. So rather than assuming
what the next backup should be, examine suspend_point and back up
as much as possible up to there.
NeilBrown [Tue, 11 Jan 2011 23:40:56 +0000 (10:40 +1100)]
Fix issues with the finish of monitoring a reshape.
1/ We need to clean up the backup file after the reshape finishes.
2/ We need to remove the suspended region and clear the resync
controls after the resync finishes.
NeilBrown [Tue, 11 Jan 2011 23:37:26 +0000 (10:37 +1100)]
fix array_size in child_monitor
The array_size we need to consider is the largest possible size of the
array, which is a different calculation depending on whether the array
is growing or shrinking.
NeilBrown [Tue, 11 Jan 2011 23:34:44 +0000 (10:34 +1100)]
Fix calculations for max_progress and completed.
'sync_completed' can sometimes have a value which is slightly high.
So round the relevant values down to a multiple of the new chunk size;
that gives what we want.
Subtract from component_size after scaling down rather than before as
that is easier.
Make sure max_progress never goes negative when reshaping backwards.
NeilBrown [Tue, 11 Jan 2011 04:20:40 +0000 (15:20 +1100)]
Improve determination of when a backup is needed.
The current code is not right. Instead, compute where we might
eventually need to back up to, and
then compare that to how far we have progressed.
Also move suspend_point up towards where we might need to back up to,
rather than just as far as max_progress, as max_progress can never
exceed where we are currently suspended to.
NeilBrown [Tue, 11 Jan 2011 04:07:57 +0000 (15:07 +1100)]
Remove write_range from calculation of max_progress
It isn't needed as we always work in multiples of full
destination stripes.
Also multiply by 'after' disks, not before.
We can progress until the point where we would write "then" lines up
with where we would read now.
We read now from:
  array-address:  reshape_progress
  device-address: read_offset
So we write then to:
  device-address: read_offset
  array-address:  read_offset * after.disks
NeilBrown [Tue, 11 Jan 2011 03:51:09 +0000 (14:51 +1100)]
Avoid confusion with the 'blocks' number.
The 'blocks' number computed by analyse_change is the number of
blocks that it makes sense to back-up at a time.
It is the smallest number of blocks that is a whole number of
stripes in both the old and the new layout.
However we are also using it as the smallest amount of progress
that can be made at a time, which is wrong as it is always valid
to progress a single stripe in the new layout.
So rename 'blocks' to 'backup_blocks' to make this clearer.
And pass new_chunk size down so it can be used for 'minimum forward
progress' calculations.
Also set 'stripes' (the amount actually backed up) from the
possibly-scaled 'blocks' number rather than ignoring it and using
backup_blocks.
Finally, get rid of 'read_range' as it isn't used (or needed).
NeilBrown [Tue, 11 Jan 2011 03:41:47 +0000 (14:41 +1100)]
Avoid double-unfreeze of arrays during grow.
Once we have called reshape_container or reshape_super we have handed
on the responsibility for unfreezing the array, so Grow_reshape
shouldn't call unfreeze.
NeilBrown [Tue, 11 Jan 2011 02:23:16 +0000 (13:23 +1100)]
Ensure start_reshape copes with unexpected state
We want start_reshape to work no matter what the current values
of suspend_lo/suspend_hi are. So initialise suspend_lo very high
as this allows suspend_hi to be set to anything.
Adam Kwolek [Thu, 6 Jan 2011 08:17:29 +0000 (19:17 +1100)]
Detect level change
For level migration support it is necessary to allow mdmon to react to
level changes.
It has to have the ability to change the configuration of an active
array, and, for an array level change to raid0, to finish array
monitoring.
Signed-off-by: Maciej Trela <maciej.trela@intel.com>
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Thu, 6 Jan 2011 07:42:53 +0000 (18:42 +1100)]
Add spares to raid0 in mdadm
When the user wants to add spares to a container with raid0 arrays only,
it is not possible to update the metadata due to the lack of a running
mdmon. To allow this, a direct metadata update by mdadm is used in
that case.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Thu, 6 Jan 2011 07:29:11 +0000 (18:29 +1100)]
imsm: FIX: Division by 0
For general migration the function blocks_per_migr_unit() has to return
a valid value. Otherwise 0 is returned instead, causing a
division-by-0 error.
Additionally, a guard was added to the function for this case.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Thu, 6 Jan 2011 07:29:07 +0000 (18:29 +1100)]
imsm: Finalize reshape in metadata
When the reshape is finished the monitor calls set_array_state() and
finishes the migration in the metadata.
This change allows the metadata migration to be finished at reshape end.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Tue, 4 Jan 2011 14:36:57 +0000 (15:36 +0100)]
imsm: FIX: update first array in container only
During the first metadata update imsm should, for compatibility
reasons, update only one array.
Buffers in prepare_update() are prepared for the second update as well.
Adam Kwolek [Thu, 6 Jan 2011 05:56:05 +0000 (16:56 +1100)]
FIX: get updated information from metadata
Metadata is not modified by the metadata preparation handler.
It has to be read again from the array.
Two reads are required:
1. before entering the 'for' loop, to get updated information after the
reshape_super() call
2. inside the 'for' loop, to get updated information for every processed
array (it can happen, e.g. in the imsm case, that a container operation
is a set of array operations and the information in the metadata changes
after every iteration)
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Thu, 6 Jan 2011 05:24:07 +0000 (16:24 +1100)]
imsm: FIX: Perform first metadata update for container operation
Metadata was not updated due to the following problems:
1. a disk index < 0 was treated as invalid, but this is a spare device
2. a disk index greater than the number of currently used disks is also
correct
3. the newmap pointer has to be refreshed for the second map copy
operation
4. the size calculation has to be guarded for a shrinking operation
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Thu, 6 Jan 2011 05:22:01 +0000 (16:22 +1100)]
imsm: FIX: display error message
When a container operation is not allowed, the user should get proper
information about it on the console.
Previously this information was displayed as debug info only.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adam Kwolek [Thu, 6 Jan 2011 05:11:35 +0000 (16:11 +1100)]
imsm: FIX: display correct information for '-E' option
Correct information displayed by '-E' option.
1. FIX: slot information during raid0 migration was displayed incorrectly
(the missing-disk position was taken from the wrong map)
2. Improvement: information about the migration (level, members, chunk
size) is displayed.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Thu, 6 Jan 2011 04:58:32 +0000 (15:58 +1100)]
Refactor reshape monitoring.
Combine all the non-backing-up code into a single function:
progress_reshape. It is called repeatedly to monitor a
reshape and allow it to happen safely.
Have a single separate function 'child_monitor' which
performs backups of data and calls progress_reshape to
wait for the next backup to be needed.
Anna Czarnowska [Wed, 5 Jan 2011 03:42:27 +0000 (14:42 +1100)]
Incremental: move suitable spares to container when subarrays started.
By default Incremental places all imsm spares in a separate container
with uuid=0:0:0:0 (the patch giving spares uuid_zero is needed).
When we find enough members to start an array
we are able to determine the domain, so we search the spare container
for suitable spares and move them to the container that
is currently being assembled.
Signed-off-by: Anna Czarnowska <anna.czarnowska@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Anna Czarnowska [Wed, 5 Jan 2011 02:54:18 +0000 (13:54 +1100)]
Assemble: allow spares to be assembled on their own
If we find spares but no members of a given array
we create a container with just the spares.
This allows auto-assembly to pick up all loose imsm spares when there
is no config file.
When there is a valid config file and any array is assembled from it
we don't try auto-assembly, so we will not assemble spares that don't
match any array.
To remedy this we must add
ARRAY metadata=imsm UUID=00000000:00000000:00000000:00000000
to the config file.
This container will include all remaining spares.
Signed-off-by: Anna Czarnowska <anna.czarnowska@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Anna Czarnowska [Wed, 5 Jan 2011 02:42:59 +0000 (13:42 +1100)]
Assemble: we need to read policy to know array domains
Policy must be read for all disks identified as array members
to get the array's domain list.
Currently it is only read for the first array member in auto-assembly
mode.
Signed-off-by: Anna Czarnowska <anna.czarnowska@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Validate size of potential spare disk for external metadata (with containers)
mdinfo read with sysfs_read does not contain information about the space
needed to store the data of all volumes created in that container, so
that the spare can be used as a replacement for existing subarrays in
the future.
If the lost disk was the only one that belonged to a particular domain,
the array won't match that domain any longer. We can achieve this by
moving the domain check below the 'target' test.
Added test for array degradation for spare-same-slot
spare-same-slot allows re-adding a missing array member when a disk is
re-inserted into the same slot where the previous member was plugged in.
If in the meantime another spare has been used for recovery, the
same-slot cookie should be ignored.
external: get number of failed disks for container
Container degradation here is defined as the number of failed disks in
the most degraded sub-array. This number is used as the value for
array.failed_disks and in the comparison to find the best match.
Anna Czarnowska [Sun, 26 Dec 2010 11:08:51 +0000 (22:08 +1100)]
Assemble imsm spares in matching domain only
An imsm spare will only be taken if it matches the domain of the
identified members of the currently assembled array.
This implies that:
- a spare with a null domain will match the first array assembled
- if an array has a null domain then no spare will match it
If we allow spares to set st they may block assembly of subarrays.
This is because in auto-assembly tmpdev->used == 0 for a spare not
matching any array. If we find such a spare before the container and
set st, the content will not get assembled.
We allow uuid_zero to match any uuid in assembly, as unsuitable spares
will be rejected by the domain check.
Signed-off-by: Anna Czarnowska <anna.czarnowska@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>