From: Suzanne Goldlust Date: Wed, 2 Jan 2019 22:49:56 +0000 (-0500) Subject: Update congestion-handling.xml X-Git-Tag: 481-remote-subnet4-set-inconsistent-work-when-id-subnet-is-duplicated_base~45 X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=1ee64bf2083022778d7c81218e755c12c0290c64;p=thirdparty%2Fkea.git Update congestion-handling.xml --- diff --git a/doc/guide/congestion-handling.xml b/doc/guide/congestion-handling.xml index 832fc5d51f..19388e5df0 100644 --- a/doc/guide/congestion-handling.xml +++ b/doc/guide/congestion-handling.xml @@ -3,7 +3,7 @@ - - This Source Code Form is subject to the terms of the Mozilla Public - License, v. 2.0. If a copy of the MPL was not distributed with this - - file, You can obtain one at http://mozilla.org/MPL/2.0/. + - file, you can obtain one at http://mozilla.org/MPL/2.0/. --> @@ -12,75 +12,74 @@
What is Congestion? Congestion occurs when servers are subjected to client queries - faster than they can be fulfilled. Subsequently, the servers begin - accumulating a backlog of pending queries. The longer the high rate of - traffic continues the farther behind the servers fall. Depending on the + faster than they can be fulfilled. As a result, the servers begin + accumulating a backlog of pending queries. The longer the high rate of + traffic continues, the farther behind the servers fall. Depending on the client implementations, those that fail to get leases either give up or simply - continue to retry forever. In the former case, the server may eventually - recover. The latter case is vicious cycle from which the server is unable - escape. + continue to retry forever. In the former case, the server may eventually + recover, but the latter case is a vicious cycle from which the server is unable + to escape. In a well-planned deployment, the number and capacity of servers is matched - to the maximum client loads expected. As long as capacity is matched to + to the maximum client loads expected. As long as capacity is matched to load, congestion does not occur. If the load is routinely too heavy, then - the deployment needs to be re-evaluated. Congestion typically occurs when + the deployment needs to be re-evaluated. Congestion typically occurs when there is a network event that causes overly large numbers of clients to - simultaneously need leases such as recovery after a network outage. + simultaneously need leases, such as recovery after a network outage. - The goal of Congestion handling is to help the servers mitigate the peak + The goal of congestion handling is to help servers mitigate the peak in traffic by fulfilling as many of the most relevant requests as possible - until it subsides. + until the congestion subsides. Prior to Kea 1.5, kea-dhcp4 and kea-dhcp6 read inbound packets directly - from the interface sockets in the main application thread. This meant that + from the interface sockets in the main application thread, which meant that packets waiting to be processed were held in socket buffers themselves. Once - these buffers fill any new packets are discarded. Under swamped conditions - the servers can end up processing client packets that may no longer be - relevant, or worse are redundant. In other words, the packets waiting in - the FIFO socket buffers become increasingly stale. + these buffers filled, any new packets were discarded. Under swamped conditions, + the servers ended up processing client packets that were no longer + relevant, or worse were redundant. In other words, the packets waiting in + the FIFO socket buffers became increasingly stale.
Configuring Congestion Handling - Kea 1.5 introduces a new feature referred to as Congestion Handling. + Kea 1.5 introduced the Congestion Handling feature. Congestion handling offers the ability to configure the server to use a separate thread to read packets from the interface socket buffers. As the - thread reads packets from the buffers they are added to an internal packet - queue. The server's main application thread process packets from this queue - rather than the socket buffers. By structuring it this way, we've introduced + thread reads packets from the buffers, they are added to an internal packet + queue, and the server's main application thread processes packets from this queue + rather than from the socket buffers. By structuring it this way, we've introduced a configurable layer which can make decisions on which packets to process, how to store them, and the order in which they are processed by the server. The default packet queue implementation for both kea-dhcp4 and kea-dhcp6 - is a simple ring buffer. Once it reaches capacity, new packets get added to - the back of queue by discarding packets from the front of queue. Rather than - always discarding the newest packets, we now always discard the oldest - packets. The capacity of the buffer, (i.e the maximum number of packets the - buffer can contain) is configurable. A reasonable starting point would be to + is a simple ring buffer. Once it reaches capacity, new packets get added to + the back of the queue by discarding packets from the front of the queue. Rather than + always discarding the newest packets, Kea now always discards the oldest + packets. The capacity of the buffer, i.e., the maximum number of packets the + buffer can contain, is configurable. A reasonable starting point would be to match the capacity to the number of leases per second your installation of - Kea can handle. Please note this figure varies widely depending on the - specifics of your deployment. We anticipate adding more knobs as we learn - from experience, testing, and user feedback. + Kea can handle. Please note that this figure varies widely depending on the + specifics of your deployment. As there is no one algorithm that will best handle the dynamics of all sites, and because over time new approaches will evolve, the packet - queue is implemented as plug-in, which can replaced by a custom queue - implementation via hook library. This should make it straight forward + queue is implemented as a plug-in, which can be replaced by a custom queue + implementation via a hook library. This should make it straightforward for interested parties to experiment with their own solutions. (Developers - may refer to isc::dhcp::PacketQueue and isc::dhcp::PacketQueueMgr, in our - Developer's guide). + may refer to isc::dhcp::PacketQueue and isc::dhcp::PacketQueueMgr in the Kea + Developer's Guide). Packet queue behavior is configured in both kea-dhcp4 and kea-dhcp6 - servers through an optional, top level configuration element, - 'dhcp-queue-control' (Omitting this element disables packet queueing): + servers through an optional, top-level configuration element, + 'dhcp-queue-control'. Omitting this element disables packet queueing: "dhcp-queue-control": { "enable-queue": true|false, @@ -92,26 +91,26 @@ enable-queue true|false. Enables or - disables packet queueing. When true, the server will process packets + disables packet queueing. When true, the server processes packets from the packet queue, which is filled by a separate thread. 
When - false, the server will process packets directly from the socket buffers - in the main thread (as done in all releases prior Kea 1.5). It is + false, the server processes packets directly from the socket buffers + in the main thread. It is disabled by default. queue-type name of the queue implementation - to use. This value exists such that custom implementations can be - registered (via hook lib) and then selected. There is a default + to use. This value exists so that custom implementations can be + registered (via hook library) and then selected. There is a default packet queue implementation that is pre-registered during server startup: "kea-ring4" for kea-dhcp4 and "kea-ring6" for kea-dhcp6. capacity = n [packets]. This is the - maximum number of packets the packet queue can hold before packets - are discarded. The optimal value for this is extremely site dependent. - The default value for is 500 for both kea-ring4 and kea-ring6. + maximum number of packets the queue can hold before packets - are discarded. The optimal value for this is extremely site-dependent. + The default value is 500 for both kea-ring4 and kea-ring6. @@ -146,8 +145,5 @@ } - - The number of parameters and plug-ins is expected to grow over time. - -
+
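For illustration, the following sketch shows how the parameters described above might be combined in a kea-dhcp4 configuration. The surrounding Dhcp4 entries are abbreviated with "...", and the capacity value of 250 is only an example, not a recommendation; as noted above, a reasonable starting point is the number of leases per second your deployment can handle.

    "Dhcp4": {
        ...
        "dhcp-queue-control": {
            "enable-queue": true,
            "queue-type": "kea-ring4",
            "capacity": 250
        },
        ...
    }

A kea-dhcp6 server would be configured the same way, with "kea-ring6" as the queue-type.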
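As a conceptual illustration of the discard-oldest behavior described above, the short C++ sketch below implements a bounded FIFO that, once full, drops the oldest entry to make room for the newest. The BoundedFifo class and the use of std::string as a stand-in for a packet are purely illustrative; this is not the actual isc::dhcp::PacketQueue implementation.

    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <string>

    // Illustrative only: a bounded FIFO that, once full, discards the
    // oldest (stalest) entry so the newest one can be stored. Kea's real
    // queue holds packet objects; std::string stands in for a packet here.
    class BoundedFifo {
    public:
        explicit BoundedFifo(std::size_t capacity) : capacity_(capacity) {}

        void push(const std::string& packet) {
            if (queue_.size() >= capacity_) {
                queue_.pop_front();   // drop the oldest packet, not the newest
            }
            queue_.push_back(packet);
        }

        bool pop(std::string& packet) {
            if (queue_.empty()) {
                return false;
            }
            packet = queue_.front();
            queue_.pop_front();
            return true;
        }

    private:
        std::size_t capacity_;
        std::deque<std::string> queue_;
    };

    int main() {
        BoundedFifo fifo(2);          // tiny capacity to make the discard visible
        fifo.push("DISCOVER-1");
        fifo.push("DISCOVER-2");
        fifo.push("DISCOVER-3");      // queue is full: DISCOVER-1 is discarded

        std::string packet;
        while (fifo.pop(packet)) {
            std::cout << packet << std::endl;   // prints DISCOVER-2, then DISCOVER-3
        }
        return 0;
    }

The key point is the pop_front() call in push(): when a burst fills the queue, the packets that have been waiting longest, and are therefore most likely to be stale, are the ones sacrificed.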