From: Robert Love <robert.w.love@intel.com>
Subject: [FcOE] Improve fc_lport.c locking comment block
References: bnc #459142

Signed-off-by: Robert Love <robert.w.love@intel.com>
Acked-by: Bernhard Walle <bwalle@suse.de>

 drivers/scsi/libfc/fc_lport.c | 76 ++++++++++++++++++++++++------------------
 1 file changed, 45 insertions(+), 31 deletions(-)

--- a/drivers/scsi/libfc/fc_lport.c
+++ b/drivers/scsi/libfc/fc_lport.c
- * General locking notes:
+ * PORT LOCKING NOTES
- * The lport and rport blocks both have mutexes that are used to protect
- * the port objects states. The main motivation for this protection is that
- * we don't want to be preparing a request/response in one context while
- * another thread "resets" the port in question. For example, if the lport
- * block is sending a SCR request to the directory server we don't want
- * the lport to be reset before we fill out the frame header's port_id. The
- * problem is that a reset would cause the lport's port_id to reset to 0.
- * If we don't protect the lport we'd spew incorrect frames.
- * At the time of this writing there are two primary mutexes, one for the
- * lport and one for the rport. Since the lport uses the rport and makes
- * calls into that block the rport should never make calls that would cause
- * the lport's mutex to be locked. In other words, the lport's mutex is
- * considered the outer lock and the rport's lock is considered the inner
- * lock. The bottom line is that you can hold a lport's mutex and then
- * hold the rport's mutex, but not the other way around.
- * The only complication to this rule is the callbacks from the rport to
- * the lport's rport_callback function. When rports become READY they make
- * a callback to the lport so that it can track them. In the case of the
- * directory server that callback might cause the lport to change its
- * state, implying that the lport mutex would need to be held. This problem
- * was solved by serializing the rport notifications to the lport and the
- * callback is made without holding the rport's lock.
+ * These comments only apply to the 'port code' which consists of the lport,
+ * disc and rport blocks.
- * lport locking notes:
+ * The lport, disc and rport blocks all have mutexes that are used to protect
+ * those objects. The main motivation for these locks is to prevent having
+ * an lport reset just before we send a frame. In that scenario the lport's
+ * FID would get set to zero and then we'd send a frame with an invalid SID.
+ * We also need to ensure that states don't change unexpectedly while
+ * processing another state.
+ * The following hierarchy defines the locking rules. A greater lock
+ * may be held before acquiring a lesser lock, but a lesser lock should never
+ * be held while attempting to acquire a greater lock. Here is the hierarchy:
+ *
+ * lport > disc, lport > rport, disc > rport
+ * The callbacks cause complications with this scheme. There is a callback
+ * from the rport (to either lport or disc) and a callback from disc
+ * (to the lport).
+ * As rports exit the rport state machine a callback is made to the owner of
+ * the rport to notify success or failure. Since the callback is likely to
+ * cause the lport or disc to grab its lock we cannot hold the rport lock
+ * while making the callback. To ensure that the rport is not freed while
+ * processing the callback the rport callbacks are serialized through a
+ * single-threaded workqueue. An rport would never be freed while in a
+ * callback handler because no other rport work in this queue can be executed
+ * while this callback is running.
+ * When discovery succeeds or fails a callback is made to the lport as
+ * notification. Currently, successful discovery causes the lport to take no
+ * action. A failure will cause the lport to reset. There is likely a circular
+ * locking problem with this implementation.
 * The critical sections protected by the lport's mutex are quite broad and
 * may be improved upon in the future. The lport code and its locking doesn't
 * The strategy is to lock whenever processing a request or response. Note
 * that every _enter_* function corresponds to a state change. They generally
- * change the lports state and then sends a request out on the wire. We lock
+ * change the lport's state and then send a request out on the wire. We lock
 * before calling any of these functions to protect that state change. This
- * means that the entry points into the lport block to manage the locks while
+ * means that the entry points into the lport block manage the locks while
 * the state machine can transition between states (i.e. _enter_* functions)
 * while always staying protected.
 * Retries also have to consider the locking. The retries occur from a work
 * context and the work function will lock the lport and then retry the state
 * (i.e. _enter_* function).

- * The implication to all of this is that each lport can only process one

 #include <linux/timer.h>