External Reshape

1 Problem statement

External (third-party metadata) reshape differs from native-metadata
reshape in three key ways:

1.1 Format specific constraints

In the native case reshape is limited by what is implemented in the
generic reshape routine (Grow_reshape()) and what is supported by the
kernel. There are exceptional cases where Grow_reshape() may block
operations when it knows that the kernel implementation is broken, but
otherwise the kernel is relied upon to be the final arbiter of what
reshape operations are supported.

In the external case the kernel, and the generic checks in
Grow_reshape(), become the super-set of what reshapes are possible. The
metadata format may not support, or may not yet implement, a given
reshape type. The implication for Grow_reshape() is that it must query
the metadata handler and effect changes in the metadata before the new
geometry is posted to the kernel. The ->reshape_super method allows
Grow_reshape() to validate the requested operation and post the metadata
update.

1.2 Scope of reshape

Native metadata reshape is always performed at the array scope (no
metadata relationship with sibling arrays on the same disks). External
reshape, depending on the format, may not allow the number of member
disks to be changed in a subarray unless the change is simultaneously
applied to all subarrays in the container. For example the imsm format
requires all member disks to be members of all subarrays, so a 4-disk
raid5 in a container that also houses a 4-disk raid10 array could not be
reshaped to 5 disks, as the imsm format does not support a 5-disk raid10
representation. This requires the ->reshape_super method to check the
contents of the container and either ask the user to run the reshape at
container scope (if all subarrays can accommodate the change), or report
an error in the case where one subarray cannot support the change.

1.3 Monitoring / checkpointing

Reshape, unlike rebuild/resync, requires strict checkpointing to survive
interrupted reshape operations. For example, when expanding a raid5
array the first few stripes of the array will be overwritten in a
destructive manner. When restarting the reshape process we need to know
the exact location of the last successfully written stripe, and we need
to restore the data in any partially overwritten stripe. Native
metadata stores this backup data in the unused portion of spares that
are being promoted to array members, or in an external backup file
(located on a non-involved block device).

The kernel is in charge of recording checkpoints of reshape progress,
but mdadm is delegated the task of managing the backup space, which
involves:
1/ Identifying what data will be overwritten in the next unit of reshape
   operation.
2/ Suspending access to that region so that a snapshot of the data can
   be transferred to the backup space.
3/ Allowing the kernel to reshape the saved region and setting the
   boundary for the next backup.

In the external reshape case we want to preserve this mdadm
'reshape-manager' arrangement, but have a third actor, mdmon, to
consider. It is tempting to give the role of managing reshape to mdmon,
but that is counter to its role as a monitor, and conflicts with the
existing capabilities and role of mdadm to manage the progress of
reshape. For clarity the external reshape implementation maintains the
role of mdmon as a (mostly) passive recorder of raid events, and mdadm
treats it as it would the kernel in the native reshape case (modulo
needing to send explicit metadata update messages and checking that
mdmon took the expected action).

External reshape can use the generic md backup file as a fallback, but in the
optimal/firmware-compatible case the reshape-manager will use the metadata
specific areas for managing reshape. The implementation also needs to spawn a
reshape-manager per subarray when the reshape is being carried out at the
container level. For these two reasons the ->manage_reshape() method is
introduced. In addition to the base tasks mentioned above, this method:
1/ Spawns a manager per subarray, when necessary.
2/ Uses either generic routines in Grow.c for md-style backup file
   support, or uses the metadata-format specific location for storing
   recovery data.
This aims to avoid a "midlayer mistake"[1] and lets the metadata handler
optionally take advantage of generic infrastructure in Grow.c.

2 Details for specific reshape requests

There are quite a few moving pieces spread out across md, mdadm, and mdmon for
the support of external reshape, and there are several different types of
reshape that need to be comprehended by the implementation. A rundown of
these details follows.

2.0 General provisions:

Obtain an exclusive open on the container to make sure we are not
running concurrently with a Create() event.

2.1 Freezing sync_action

2.2 Reshape size

 1/ mdadm::Grow_reshape(): checks if mdmon is running and optionally
    initializes st->update_tail
 2/ mdadm::Grow_reshape(): calls ->reshape_super() to check that the size
    change is allowed (being performed at subarray scope / enough room) and
    prepares a metadata update
 3/ mdadm::Grow_reshape(): flushes the metadata update (via
    flush_metadata_update(), or ->sync_metadata())
 4/ mdadm::Grow_reshape(): posts the new size to the kernel

2.3 Reshape level (simple-takeover)

"simple-takeover" implies the level change can be satisfied without touching
sync_action.

 1/ mdadm::Grow_reshape(): checks if mdmon is running and optionally
    initializes st->update_tail
 2/ mdadm::Grow_reshape(): calls ->reshape_super() to check that the level
    change is allowed (being performed at subarray scope) and prepares a
    metadata update
 2a/ raid10 --> raid0: degrade all mirror legs prior to calling
     ->reshape_super()
 3/ mdadm::Grow_reshape(): flushes the metadata update (via
    flush_metadata_update(), or ->sync_metadata())
 4/ mdadm::Grow_reshape(): posts the new level to the kernel

2.4 Reshape chunk, layout

2.5 Reshape raid disks (grow)

 1/ mdadm::Grow_reshape(): unconditionally initializes st->update_tail
    because only redundant raid levels can modify the number of raid disks
 2/ mdadm::Grow_reshape(): calls ->reshape_super() to check that the level
    change is allowed (being performed at proper scope / permissible
    geometry / proper spares available in the container) and prepares a
    metadata update.
 3/ mdadm::Grow_reshape(): converts each subarray in the container to the
    raid level that can perform the reshape and starts mdmon.
 4/ mdadm::Grow_reshape(): pushes the update to mdmon...
 4a/ mdmon::process_update(): marks the array as reshaping
 4b/ mdmon::manage_member(): adds the spares (without assigning a slot)
 5/ mdadm::Grow_reshape(): notes that mdmon has assigned spares and invokes
    ->manage_reshape()
 6/ mdadm::<format>->manage_reshape(): (for each subarray) sets sync_max to
    zero, starts the reshape, and pings mdmon
 6a/ mdmon::read_and_act(): notices that reshape has started and notifies
     the metadata handler to record the slots chosen by the kernel
 7/ mdadm::<format>->manage_reshape(): saves data that will be overwritten by
    the kernel to either the backup file or the metadata specific location,
    advances sync_max, waits for reshape, pings mdmon, repeats.
 7a/ mdmon::read_and_act(): records checkpoints
 8/ mdadm::<format>->manage_reshape(): once reshape completes, changes the
    raid level back to the nominal raid level (if necessary)

 FIXME: native metadata does not have the capability to record the original
 raid level in the reshape-restart case because the kernel always records the
 current raid level to the metadata, whereas external metadata can masquerade
 as an alternate level based on the reshape state.

2.6 Reshape raid disks (shrink)

3 TODO

...

[1]: Neil Brown, "Linux kernel design patterns - part 3",
     http://lwn.net/Articles/336262/