Andrew Tridgell [Wed, 12 Sep 2007 03:23:36 +0000 (13:23 +1000)]
- set arp_ignore to prevent replying to arp requests for addresses on loopback
- put removed IPs on loopback with scope host (see the sketch below)
- check for nul strings in ethtool call
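A rough sketch of the first two points (the address below is purely illustrative; the real work is done in the event scripts, this just shows the equivalent operations from C, run as root):

    /* park a released public address on the loopback interface with host
     * scope, and tell the kernel not to answer ARP for addresses that are
     * only held on loopback */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/net/ipv4/conf/all/arp_ignore", "w");
        if (f != NULL) {
            fprintf(f, "1\n");   /* only reply on the interface owning the address */
            fclose(f);
        }
        /* 10.1.1.1 is a placeholder address */
        return system("ip addr add 10.1.1.1/32 dev lo scope host");
    }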
Andrew Tridgell [Wed, 12 Sep 2007 03:22:31 +0000 (13:22 +1000)]
- don't allow the registration of clients with IPs we don't hold
- change some debug levels to make tracking of IP release problems easier
(This used to be ctdb commit 5f9aed62adaf87750f953412c55b29c58e4bb6c0)
Ronnie Sahlberg [Sun, 9 Sep 2007 21:20:44 +0000 (07:20 +1000)]
change the signature of ctdb_sys_have_ip() to also return:
- a bool that specifies whether the ip was held by a loopback adapter
- the name of the interface where the ip was held
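A sketch of what the extended interface might look like (parameter names and types are illustrative, not the exact ctdb prototype):

    #include <stdbool.h>
    #include <net/if.h>          /* IFNAMSIZ */
    #include <netinet/in.h>

    /* returns true if this host currently holds the address; in addition
     * reports whether it sits on a loopback interface and which interface
     * holds it, so the release path knows whether the address still has
     * to be moved over to lo */
    bool ctdb_sys_have_ip(struct sockaddr_in addr,
                          bool *is_loopback,
                          char iface[IFNAMSIZ]);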
when we release an ip address from an interface, move the ip address
over to the loopback interface
when we release an ip address after we have moved it onto loopback,
use 60.nfs to kill off the server side (the local part) of the tcp
connections so that they don't survive a failover/failback
61.nfstickle: since we kill the tcp connections when we release an ip
address, we no longer need to restart the nfs service in 61.nfstickle
update ctdb_takeover to use the new signature for ctdb_sys_have_ip
when we add a tcp connection to kill in ctdb_killtcp_add_connection(),
check if either the source or destination address matches a known
public address
Ronnie Sahlberg [Fri, 7 Sep 2007 22:09:02 +0000 (08:09 +1000)]
set /proc/sys/net/ipv4/conf/all/arp_filter to 1 by default when
10.interfaces starts up
this setting makes the system respond to ARP requests only on the NIC
to which the ip address is bound and adds to the
"principle of least surprise" when using multihomed servers
Ronnie Sahlberg [Fri, 7 Sep 2007 06:45:19 +0000 (16:45 +1000)]
ctdb ip must loop over all connected nodes to pull the public ip list
and merge it into one big list, since with the disassociation between a
node and a public ip address the /etc/ctdb/public_addresses files can
differ between nodes and no single node knows about all public addresses
that the cluster can use
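The merge in the ctdb tool roughly follows this shape (a simplified sketch; get_public_ips_from_node() and merge_into() are hypothetical stand-ins for the real client controls and list handling):

    /* pull the public address list from every connected node and fold any
     * addresses we have not seen yet into one combined list */
    for (i = 0; i < nodemap->num; i++) {
        if (nodemap->nodes[i].flags & NODE_FLAGS_DISCONNECTED) {
            continue;   /* skip nodes we cannot reach */
        }
        ips = get_public_ips_from_node(nodemap->nodes[i].vnn);
        merge_into(all_ips, ips);   /* duplicates are dropped */
    }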
Ronnie Sahlberg [Thu, 6 Sep 2007 22:52:56 +0000 (08:52 +1000)]
60.nfs:
we must always restart the lockmanager when the cluster has been
reconfigured and ip addresses have changed. This is to make sure we get
a clusterwide grace period for nfs locking.
if we don't do this and only restart locking on the nodes that were
directly affected, a different client can take out a conflicting lock
from a different node before the affected clients have had a chance to
reclaim all the locks lost during the reconfiguration.
the grace period on the rhel5 kernel has been increased to 90 seconds!
statd-callout:
we must restart lockmanager to ensure a clusterwide grace period for
nfs. this makes locking "more correct" for nfs clients and prevents
other clients/nodes from taking out a conflicting lock while a different
client/node tries to reclaim lost locks.
This makes it "almost consistent" for NFS clients but there is still
the possibility that a cifs client can take out a conflicting lock
before an nfs client has had a chance to reclaim an existing lock.
This cannot be solved with anything less than making the kernel nfs
lock manager "samba aware" and making samba aware of the internal state
of the kernel lock manager so that they can cooperate.
we cannot just stop/start the lockmanager back to back in rhel5 since
if they are stopped/started too close to each other, then when the new
lockmanager sends out statd notifications upon starting up, two things
can happen:
1, the new lockmanager sends out the notification BEFORE it has
registered with the portmapper, leading to:
lockmanager starts
lockmanager sends notification to the client
client tries to recover the lock and tries to portmap the lockmanager
port on the server.
server is not (yet) registered with the portmapper and responds
"no such program" to the client's request to discover where the
lockmanager is.
client then just completely gives up reclaiming the lock and doesn't
even reattempt the portmapper call after some timeout.
==> lock reclaim failed.
2, if they are started back to back, and a client tries to reclaim a
lock, the lockmanager sometimes sends two responses back to back
to the client: one with status NLM_GRANTED (== you got the lock
reclaimed) and one with status NLM_DENIED (== you could not get the
lock reclaimed).
This confuses the client and leads to the server thinking that the
client does have the lock and the client thinking it has not got the
lock, and orphaned locks result.
We also send out additional notification messages of different formats
to allow more legacy clients to interoperate with locking.
Ronnie Sahlberg [Tue, 4 Sep 2007 13:15:23 +0000 (23:15 +1000)]
allow different nodes in the cluster to use different public_addresses
files
so that we can partition the cluster into different subsets of nodes
which each serve a different subset of the public addresses
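For example (addresses purely illustrative, and the exact per-line format depends on the installed version), a four node cluster could be split like this:

    on nodes 0 and 1, /etc/ctdb/public_addresses contains:
        10.1.1.1/24
        10.1.1.2/24
    on nodes 2 and 3, /etc/ctdb/public_addresses contains:
        10.1.2.1/24
        10.1.2.2/24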
Ronnie Sahlberg [Tue, 4 Sep 2007 04:36:52 +0000 (14:36 +1000)]
we can't have takeover_ctx hanging off ctdb since it is freed/recreated
every time we release an ip.
this context is used to hold all resources needed when sending out
gratuitous arps and tcp tickles during ip takeover.
we hang it off the vnn structure that manages that particular ip address
instead, so that we can have multiple ones going in parallel
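Schematically (field names approximate), the per-address context ends up hanging off the structure that represents one public address:

    /* one of these per public ip address; the takeover context holds the
     * timers and sockets used for gratuitous arps and tcp tickles for
     * this address only, so several takeovers can run in parallel */
    struct ctdb_vnn {
        /* ... address, interface, ... */
        TALLOC_CTX *takeover_ctx;
    };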
this bug (or the same bug in a different shape) has probably been in
ctdb for a very long time but is likely to be hard to trigger
Ronnie Sahlberg [Thu, 30 Aug 2007 05:27:45 +0000 (15:27 +1000)]
when we start 60.nfs we must make sure that the shared storage
nfs-state directory actually exists (by creating it)
or else the lock manager will not start
Ronnie Sahlberg [Mon, 27 Aug 2007 05:03:52 +0000 (15:03 +1000)]
make the ctdb shutdown command use the async _send() function to send
the shutdown command
and return success to the caller if the _send() was successful
Ronnie Sahlberg [Fri, 24 Aug 2007 05:53:41 +0000 (15:53 +1000)]
add an initial implementation of a server_id structure and three
controls to register/unregister/check a server id.
a server id consists of TYPE:VNN:ID where TYPE is specific to the
application, VNN is the node where the server id was registered and ID
might be a node-unique identifier such as a pid or similar.
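The identifier itself is small; roughly (exact field names and widths may differ from the real structure):

    #include <stdint.h>

    /* TYPE:VNN:ID - must be globally unique across the cluster */
    struct ctdb_server_id {
        uint64_t type;        /* application specific */
        uint32_t vnn;         /* node where the id was registered */
        uint32_t server_id;   /* node-unique id, e.g. a pid */
    };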
Clients can register a server id for themselves at the local ctdb daemon.
When a client disappears, or when the domain socket connection for the
client drops, any and all server ids registered across that domain
socket will automatically be removed from the store.
clients can register as many server_ids as they want at the same time
but each TYPE:VNN:ID must be globally unique.
Clients have the option of explicitly unregistering a server id by using
the UNREGISTER control.
Registration and unregistration can only be done by clients to the local
daemon. clients cannot register their server id with a remote node.
clients can check whether a server id exists on any ctdb node in the
network by using the check control
Ronnie Sahlberg [Fri, 24 Aug 2007 00:42:06 +0000 (10:42 +1000)]
change the api for managing callbacks to controls so that instead of
passing the callback as a parameter, we set the callback function
explicitly from the caller if the ..._send() function returned a valid
state pointer.
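The resulting calling pattern looks roughly like this (function and field names are illustrative):

    state = ctdb_some_control_send(ctdb, ...);
    if (state == NULL) {
        return -1;   /* the send itself failed */
    }
    /* set the callback explicitly instead of passing it as a parameter */
    state->async.fn = my_callback;
    state->async.private_data = my_private_data;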
Ronnie Sahlberg [Thu, 23 Aug 2007 03:00:10 +0000 (13:00 +1000)]
hang the ctdb_req_control structure off the ctdb_client_control_state
struct so that if a control times out we can print debug info such as
which opcode failed and to which node it was sent
we don't need the *status parameter to ctdb_client_control_state
Ronnie Sahlberg [Thu, 23 Aug 2007 01:58:09 +0000 (11:58 +1000)]
in ctdb_call_recv() we must check that state is non-NULL since
ctdb_call() may pass a null pointer to _recv() and this would cause a
segfault.
fortunately there appear to be no critical users of this codepath
right now, so the risk was mostly theoretical; IF clients start using
this call it could segfault.
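A sketch of the guard (just the added check, at the top of the existing function):

    /* at the top of ctdb_call_recv() */
    if (state == NULL) {
        return -1;   /* ctdb_call() may hand us a NULL state */
    }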
change ctdb_control() to become fully async so that we can later make
the recovery daemon issue the expensive controls to the nodes in
parallel instead of in sequence
Ronnie Sahlberg [Wed, 22 Aug 2007 02:53:24 +0000 (12:53 +1000)]
when we receive a packet from the network, check explicitly that the
node is not banned if the call is for a database record, i.e. a
REQ/REPLY CALL/DMASTER
if we get such a call while banned, ignore the packet and write an entry
in the logfile
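In the packet dispatch this amounts to roughly the following (flag and field names approximate, not the exact code):

    switch (hdr->operation) {
    case CTDB_REQ_CALL:
    case CTDB_REPLY_CALL:
    case CTDB_REQ_DMASTER:
    case CTDB_REPLY_DMASTER:
        /* only database record traffic is refused while we are banned */
        if (ctdb->nodes[ctdb->vnn]->flags & NODE_FLAGS_BANNED) {
            DEBUG(0, ("dropping opcode %u from node %u while banned\n",
                      hdr->operation, hdr->srcnode));
            return;
        }
        break;
    }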
Ronnie Sahlberg [Wed, 22 Aug 2007 00:38:35 +0000 (10:38 +1000)]
when a node becomes banned its databases are no longer part of ctdb
and it should thus no longer serve any database access calls until it
has been reintroduced into the cluster.
when becoming banned, reset the local generation id to 1 to prevent
any further database access calls from other nodes from being processed.
Ronnie Sahlberg [Tue, 21 Aug 2007 07:25:15 +0000 (17:25 +1000)]
change the structure used for node flag change messages so that we can
see both the old flags as well as the new flags (so we can tell which
flags changed)
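The message payload then carries both views of the flags, roughly (field names approximate):

    #include <stdint.h>

    struct ctdb_node_flag_change {
        uint32_t vnn;         /* node whose flags changed */
        uint32_t new_flags;
        uint32_t old_flags;   /* lets the handler tell which bits changed */
    };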
send the CTDB_SRVID_RECONFIGURE messages to connected nodes only, not to
every node, connected or not, in the cluster.
in the handler inside the recovery daemon which is invoked for node flag
change messages, only do a takeover_run() and redistribute the ip
addresses IF it was the disabled or the unhealthy flags that changed.
Also send out the cluster reconfigured message in this case.
If any of the other flags changed we don't need to do the takeover_run()
here since that will be done during recovery.
Ronnie Sahlberg [Mon, 20 Aug 2007 23:46:27 +0000 (09:46 +1000)]
when we shut down the service due to receiving a 'ctdb shutdown' command
from the administrator, log this as 'Received SHUTDOWN command. Stopping
CTDB daemon.' so that the administrator will know when looking at the
log 'why' the ctdb service was terminated.
Previously the only thing logged was 'shutting down' which is not
detailed enough.
Ronnie Sahlberg [Mon, 20 Aug 2007 04:16:58 +0000 (14:16 +1000)]
if a public address has already been taken over by a node, then let that
public address remain at that node until either the node becomes
unhealthy or the original/primary node for that address becomes healthy
again.
Otherwise what will happen is:
1, if we ban a node, the banning code immediately does a
takeover_run() and reassigns the public address to a different node in
the cluster.
2, a few seconds later (at most) the recovery daemon will detect that
the number of nodes has shrunk and will initiate a recovery.
During the recovery the public address would again be assigned to a
node, this time a different node.