From: Takashi Sato
Date: Sat, 6 Dec 2008 16:42:42 +0000 (+0000)
Subject: Sync with the codes about the independence of load balancing scheduler algorithms...
X-Git-Tag: 2.3.0~11
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=02fe31e6c5c5c68998cbaac91cde0cecad6c0a17;p=thirdparty%2Fapache%2Fhttpd.git

Sync with the codes about the independence of load balancing scheduler algorithms. (r722948 - r722952)

git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@724006 13f79535-47bb-0310-9956-ffa450edef68
---

diff --git a/docs/manual/mod/mod_lbmethod_bybusyness.xml b/docs/manual/mod/mod_lbmethod_bybusyness.xml
new file mode 100644
index 00000000000..07fe1f14da5
--- /dev/null
+++ b/docs/manual/mod/mod_lbmethod_bybusyness.xml
@@ -0,0 +1,57 @@

mod_lbmethod_bybusyness
Pending Request Counting load balancer scheduler algorithm for mod_proxy_balancer
Extension
mod_lbmethod_bybusyness.c
lbmethod_bybusyness_module
Split off from mod_proxy_balancer in 2.3

mod_proxy
mod_proxy_balancer
+ + Pending Request Counting Algorithm + +

Enabled via lbmethod=bybusyness, this scheduler keeps + track of how many requests each worker is assigned at present. A new + request is automatically assigned to the worker with the lowest + number of active requests. This is useful in the case of workers + that queue incoming requests independently of Apache, to ensure that + queue length stays even and a request is always given to the worker + most likely to service it fastest.

+ +

In the case of multiple least-busy workers, the statistics (and + weightings) used by the Request Counting method are used to break the + tie. Over time, the distribution of work will come to resemble that + characteristic of byrequests.
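In C, the selection can be sketched roughly as below. This is a minimal illustration of the technique described above, not the actual mod_lbmethod_bybusyness source; the struct worker fields and the function name are assumptions made for this sketch.

#include <stddef.h>

/* Illustrative sketch only -- field names are assumed, not httpd's. */
struct worker {
    int busy;       /* requests currently assigned to this worker  */
    int lbstatus;   /* Request Counting state, used to break ties  */
    int lbfactor;   /* configured share of the work                */
    int enabled;    /* 0 if the worker is disabled                 */
};

static struct worker *find_best_bybusyness(struct worker *w, size_t n)
{
    struct worker *best = NULL;
    int total_factor = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        if (!w[i].enabled)
            continue;
        /* Keep the Request Counting statistics up to date so that ties
         * between equally busy workers are broken as byrequests would. */
        w[i].lbstatus += w[i].lbfactor;
        total_factor  += w[i].lbfactor;
        if (best == NULL
            || w[i].busy < best->busy
            || (w[i].busy == best->busy && w[i].lbstatus > best->lbstatus))
            best = &w[i];
    }
    if (best != NULL)
        best->lbstatus -= total_factor;  /* charge the chosen worker */
    return best;
}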

+ +
+ +
diff --git a/docs/manual/mod/mod_lbmethod_byrequests.xml b/docs/manual/mod/mod_lbmethod_byrequests.xml
new file mode 100644
index 00000000000..3a2c2575947
--- /dev/null
+++ b/docs/manual/mod/mod_lbmethod_byrequests.xml
@@ -0,0 +1,217 @@

mod_lbmethod_byrequests
Request Counting load balancer scheduler algorithm for mod_proxy_balancer
Extension
mod_lbmethod_byrequests.c
lbmethod_byrequests_module
Split off from mod_proxy_balancer in 2.3

mod_proxy
mod_proxy_balancer
+ Request Counting Algorithm +

Enabled via lbmethod=byrequests, this scheduler distributes requests among the various workers so that each receives its configured share of the total number of requests. It works as follows:

+ +

lbfactor is how much we expect this worker to work, i.e. the worker's work quota. It is a normalized value representing the worker's "share" of the amount of work to be done.

+ +

lbstatus is how urgently this worker has to work to fulfill its quota.

+ +

The worker is a member of the load balancer, + usually a remote host serving one of the supported protocols.

+ +

We distribute each worker's work quota to the worker, and then look at which of them needs to work most urgently (biggest lbstatus). This worker is then selected for work, and its lbstatus is reduced by the total work quota we distributed to all workers. Thus the sum of all lbstatus values does not change(*), and we distribute the requests as desired.

+ +

If some workers are disabled, the others will + still be scheduled correctly.

+ +
for each worker in workers
+    worker lbstatus += worker lbfactor
+    total factor    += worker lbfactor
+    if worker lbstatus > candidate lbstatus
+        candidate = worker
+
+candidate lbstatus -= total factor
+
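For readers who want something compilable, the pseudocode above translates into C roughly as follows. This is a minimal sketch, not the actual mod_lbmethod_byrequests source; the struct worker layout and the function name are assumptions made for illustration.

#include <stddef.h>

/* Illustrative sketch only -- field names are assumed, not httpd's. */
struct worker {
    int lbfactor;   /* configured work quota ("share" of the work) */
    int lbstatus;   /* how urgently this worker needs work         */
    int enabled;    /* 0 if the worker is disabled                 */
};

static struct worker *find_best_byrequests(struct worker *w, size_t n)
{
    struct worker *candidate = NULL;
    int total_factor = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        if (!w[i].enabled)
            continue;                    /* disabled workers are skipped */
        w[i].lbstatus += w[i].lbfactor;  /* hand out this worker's quota */
        total_factor  += w[i].lbfactor;
        if (candidate == NULL || w[i].lbstatus > candidate->lbstatus)
            candidate = &w[i];           /* most urgent worker so far    */
    }
    if (candidate != NULL)
        candidate->lbstatus -= total_factor; /* keeps the sum of all
                                                lbstatus values constant */
    return candidate;
}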
+ +

If a balancer is configured as follows:

+ + + + + + + + + + + + + + + + + +
worker      a    b    c    d
lbfactor   25   25   25   25
lbstatus    0    0    0    0
+ +

And b gets disabled, the following schedule is produced:

+ + + + + + + + + + + + + + + + + + + + + + + +
worker      a    b    c    d
lbstatus  -50    0   25   25
lbstatus  -25    0  -25   50
lbstatus    0    0    0    0
(repeat)
+ +

That is, it schedules: a c d a c d a c d ... Please note that:
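As a sanity check, the following hypothetical driver (to be compiled together with the find_best_byrequests sketch above) configures four workers with lbfactor 25 each, disables b, and prints exactly this schedule:

#include <stdio.h>

int main(void)
{
    struct worker w[4] = {
        { 25, 0, 1 },   /* a */
        { 25, 0, 0 },   /* b: disabled */
        { 25, 0, 1 },   /* c */
        { 25, 0, 1 },   /* d */
    };
    const char *names = "abcd";
    int i;

    for (i = 0; i < 9; i++)
        printf("%c ", names[find_best_byrequests(w, 4) - w]);
    printf("\n");   /* prints: a c d a c d a c d */
    return 0;
}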

+ + + + + + + + + + + + +
worker      a    b    c    d
lbfactor   25   25   25   25
+ +

Has the exact same behavior as:

+ + + + + + + + + + + + +
worker      a    b    c    d
lbfactor    1    1    1    1
+ +

This is because all values of lbfactor are normalized + with respect to the others. For:

+ + + + + + + + + + +
worker      a    b    c
lbfactor    1    4    1
+ +

worker b will, on average, get 4 times the requests + that a and c will.

+ +

The following asymmetric configuration works as one would expect:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
worker       a     b
lbfactor    70    30

lbstatus   -30    30
lbstatus    40   -40
lbstatus    10   -10
lbstatus   -20    20
lbstatus   -50    50
lbstatus    20   -20
lbstatus   -10    10
lbstatus   -40    40
lbstatus    30   -30
lbstatus     0     0
(repeat)
+ +

That is, after 10 schedules the pattern repeats: 7 a are selected, with 3 b interspersed.

+
+ +
diff --git a/docs/manual/mod/mod_lbmethod_bytraffic.xml b/docs/manual/mod/mod_lbmethod_bytraffic.xml
new file mode 100644
index 00000000000..5cb01c3585a
--- /dev/null
+++ b/docs/manual/mod/mod_lbmethod_bytraffic.xml
@@ -0,0 +1,72 @@

mod_lbmethod_bytraffic
Weighted Traffic Counting load balancer scheduler algorithm for mod_proxy_balancer
Extension
mod_lbmethod_bytraffic.c
lbmethod_bytraffic_module
Split off from mod_proxy_balancer in 2.3

mod_proxy
mod_proxy_balancer
+ Weighted Traffic Counting Algorithm +

Enabled via lbmethod=bytraffic, this scheduler works much like the Request Counting method, with the following changes:

+ +

lbfactor is how much traffic, in bytes, we want this worker to handle. It is likewise a normalized value representing the worker's "share" of the work to be done, but instead of simply counting the number of requests, we take into account the amount of traffic the worker has seen.

+ +

If a balancer is configured as follows:

+ + + + + + + + + + +
worker      a    b    c
lbfactor    1    2    1
+ +

Then we mean that we want b to process twice as many bytes as a or c. It does not necessarily follow that b will handle twice as many requests, but it will process twice the I/O. Thus, the sizes of the request and the response are applied to the weighting and selection algorithm.
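A rough C sketch of such a traffic-weighted choice is shown below, assuming the balancer keeps per-worker byte counters; the field and function names are illustrative, not the actual mod_lbmethod_bytraffic source.

#include <stddef.h>

/* Illustrative sketch only -- field names are assumed, not httpd's. */
struct worker {
    long bytes_in;   /* request bytes sent to this worker so far    */
    long bytes_out;  /* response bytes received from it so far      */
    int  lbfactor;   /* desired share of the traffic (must be >= 1) */
    int  enabled;    /* 0 if the worker is disabled                 */
};

static struct worker *find_best_bytraffic(struct worker *w, size_t n)
{
    struct worker *best = NULL;
    long best_traffic = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        long traffic;
        if (!w[i].enabled)
            continue;
        /* Normalize the I/O already done by the configured share: a
         * worker with lbfactor 2 may carry twice the bytes before it
         * looks as "loaded" as a worker with lbfactor 1. */
        traffic = (w[i].bytes_in + w[i].bytes_out) / w[i].lbfactor;
        if (best == NULL || traffic < best_traffic) {
            best = &w[i];
            best_traffic = traffic;
        }
    }
    return best; /* caller accounts the request/response bytes afterwards */
}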

+ +
+ +
diff --git a/docs/manual/mod/mod_proxy_balancer.xml b/docs/manual/mod/mod_proxy_balancer.xml
index 1097f71d819..98efcfd0ff6 100644
--- a/docs/manual/mod/mod_proxy_balancer.xml
+++ b/docs/manual/mod/mod_proxy_balancer.xml
@@ -35,9 +35,17 @@ HTTP, FTP and AJP13 protocols

+

+ The load balancing scheduler algorithm is provided not by this
+ module itself, but by other modules such as
+ mod_lbmethod_byrequests,
+ mod_lbmethod_bytraffic and
+ mod_lbmethod_bybusyness.

+

Thus, in order to use load balancing,
- mod_proxy and mod_proxy_balancer
- have to be present in the server.
+ mod_proxy, mod_proxy_balancer,
+ and at least one load balancing scheduler module
+ have to be present in the server.
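For illustration, a hypothetical minimal configuration might look like the following; the balancer name and backend hosts are placeholders, and mod_proxy_http is loaded here because the members speak http://.

# Hypothetical example: balancer plus one scheduler module must be loaded
# before lbmethod=byrequests can be used.
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

<Proxy balancer://mycluster>
    BalancerMember http://backend1.example.com
    BalancerMember http://backend2.example.com
    ProxySet lbmethod=byrequests
</Proxy>
ProxyPass /app balancer://mycluster/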

Warning

Do not enable proxying until you have -

- Request Counting Algorithm -

Enabled via lbmethod=byrequests, this scheduler distributes requests among the various workers so that each receives its configured share of the total number of requests. It works as follows:

- -

lbfactor is how much we expect this worker to work, i.e. the worker's work quota. It is a normalized value representing the worker's "share" of the amount of work to be done.

- -

lbstatus is how urgently this worker has to work to fulfill its quota.

- -

The worker is a member of the load balancer, - usually a remote host serving one of the supported protocols.

- -

We distribute each worker's work quota to the worker, and then look at which of them needs to work most urgently (biggest lbstatus). This worker is then selected for work, and its lbstatus is reduced by the total work quota we distributed to all workers. Thus the sum of all lbstatus values does not change(*), and we distribute the requests as desired.

- -

If some workers are disabled, the others will - still be scheduled correctly.

- -
for each worker in workers
-    worker lbstatus += worker lbfactor
-    total factor    += worker lbfactor
-    if worker lbstatus > candidate lbstatus
-        candidate = worker
-
-candidate lbstatus -= total factor
-
- -

If a balancer is configured as follows:

- - - - - - - - - - - - - - - - - -
worker      a    b    c    d
lbfactor   25   25   25   25
lbstatus    0    0    0    0
- -

And b gets disabled, the following schedule is produced:

- - - - - - - - - - - - - - - - - - - - - - - -
worker      a    b    c    d
lbstatus  -50    0   25   25
lbstatus  -25    0  -25   50
lbstatus    0    0    0    0
(repeat)
- -

That is, it schedules: a c d a c d a c d ... Please note that:

- - - - - - - - - - - - -
worker      a    b    c    d
lbfactor   25   25   25   25
- -

Has the exact same behavior as:

- - - - - - - - - - - - -
worker      a    b    c    d
lbfactor    1    1    1    1
- -

This is because all values of lbfactor are normalized - with respect to the others. For:

- - - - - - - - - - -
worker      a    b    c
lbfactor    1    4    1
- -

worker b will, on average, get 4 times the requests - that a and c will.

- -

The following asymmetric configuration works as one would expect:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
worker       a     b
lbfactor    70    30

lbstatus   -30    30
lbstatus    40   -40
lbstatus    10   -10
lbstatus   -20    20
lbstatus   -50    50
lbstatus    20   -20
lbstatus   -10    10
lbstatus   -40    40
lbstatus    30   -30
lbstatus     0     0
(repeat)
- -

That is, after 10 schedules the pattern repeats: 7 a are selected, with 3 b interspersed.

-
- -
- Weighted Traffic Counting Algorithm -

Enabled via lbmethod=bytraffic, this scheduler works much like the Request Counting method, with the following changes:

- -

lbfactor is how much traffic, in bytes, we want this worker to handle. It is likewise a normalized value representing the worker's "share" of the work to be done, but instead of simply counting the number of requests, we take into account the amount of traffic the worker has seen.

- -

If a balancer is configured as follows:

- - - - - - - - - - -
worker      a    b    c
lbfactor    1    2    1
- -

Then we mean that we want b to process twice as many bytes as a or c. It does not necessarily follow that b will handle twice as many requests, but it will process twice the I/O. Thus, the sizes of the request and the response are applied to the weighting and selection algorithm.

- -
- -
- - Pending Request Counting Algorithm - -

Enabled via lbmethod=bybusyness, this scheduler keeps - track of how many requests each worker is assigned at present. A new - request is automatically assigned to the worker with the lowest - number of active requests. This is useful in the case of workers - that queue incoming requests independently of Apache, to ensure that - queue length stays even and a request is always given to the worker - most likely to service it fastest.

- -

In the case of multiple least-busy workers, the statistics (and - weightings) used by the Request Counting method are used to break the - tie. Over time, the distribution of work will come to resemble that - characteristic of byrequests.

- -
-
Exported Environment Variables

At present there are 6 environment variables exported: