From: Takashi Sato

mod_proxy_balancer provides load balancing for the HTTP, FTP and AJP13 protocols.

The load balancing scheduler algorithm is provided not by this module but by other modules, such as:

    mod_lbmethod_byrequests
    mod_lbmethod_bytraffic
    mod_lbmethod_bybusyness

Thus, in order to get the ability of load balancing, mod_proxy, mod_proxy_balancer and at least one of the load balancing scheduler modules have to be present in the server.

Request Counting Algorithm

Enabled via lbmethod=byrequests, the idea behind this scheduler is that we distribute the requests among the various workers to ensure that each gets its configured share of the number of requests. It works as follows:

lbfactor is how much we expect this worker to work, or the worker's work quota. This is a normalized value representing its "share" of the amount of work to be done.

lbstatus is how urgent this worker has to work to fulfill its quota of work.

The worker is a member of the load balancer, usually a remote host serving one of the supported protocols.

We distribute each worker's work quota to the worker, and then look which of them needs to work most urgently (biggest lbstatus). This worker is then selected for work, and its lbstatus is reduced by the total work quota we distributed to all workers. Thus the sum of all lbstatus values does not change(*) and we distribute the requests as desired.

If some workers are disabled, the others will still be scheduled correctly.

    for each worker in workers
        worker lbstatus += worker lbfactor
        total factor    += worker lbfactor
        if worker lbstatus > candidate lbstatus
            candidate = worker

    candidate lbstatus -= total factor

If a balancer is configured as follows:

    worker      a    b    c    d
    lbfactor   25   25   25   25
    lbstatus    0    0    0    0

and b gets disabled, the following schedule is produced:

    worker      a    b    c    d
    lbstatus  -50    0   25   25
    lbstatus  -25    0  -25   50
    lbstatus    0    0    0    0
    (repeat)

That is, it schedules: a c d a c d a c d ...

Please note that:

    worker      a    b    c    d
    lbfactor   25   25   25   25

has exactly the same behavior as:

    worker      a    b    c    d
    lbfactor    1    1    1    1

This is because all values of lbfactor are normalized with respect to the others. For:

    worker      a    b    c
    lbfactor    1    4    1

worker b will, on average, get 4 times the requests that a and c will.

The following asymmetric configuration works as one would expect:

    worker      a    b
    lbfactor   70   30

    lbstatus  -30   30
    lbstatus   40  -40
    lbstatus   10  -10
    lbstatus  -20   20
    lbstatus  -50   50
    lbstatus   20  -20
    lbstatus  -10   10
    lbstatus  -40   40
    lbstatus   30  -30
    lbstatus    0    0
    (repeat)

That is, after 10 schedules the pattern repeats, with 7 a selected and 3 b interspersed.

Weighted Traffic Counting Algorithm

Enabled via lbmethod=bytraffic, the idea behind this scheduler is very similar to the Request Counting method, with the following changes:

lbfactor is how much traffic, in bytes, we want this worker to handle. This is also a normalized value representing the worker's "share" of the amount of work to be done, but instead of simply counting the number of requests, we take into account the amount of traffic this worker has seen.

If a balancer is configured as follows:

    worker      a    b    c
    lbfactor    1    2    1

then we mean that we want b to process twice the amount of bytes that a or c should. It does not necessarily mean that b will handle twice as many requests, but it will process twice the I/O. Thus, the sizes of the request and the response are applied to the weighting and selection algorithm.

Pending Request Counting Algorithm

Enabled via lbmethod=bybusyness, this scheduler keeps track of how many requests each worker is assigned at present. A new request is automatically assigned to the worker with the lowest number of active requests. This is useful in the case of workers that queue incoming requests independently of Apache, to ensure that queue length stays even and a request is always given to the worker most likely to service it fastest.

In the case of multiple least-busy workers, the statistics (and weightings) used by the Request Counting method are used to break the tie. Over time, the distribution of work will come to resemble that characteristic of byrequests.
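The Request Counting pseudocode can be sketched as a small Python simulation. This is a hypothetical helper for illustration, not Apache's actual C implementation; ties on lbstatus go to the earliest-listed worker, which matches the schedules in the tables:

```python
def byrequests_schedule(workers, steps):
    """Simulate the Request Counting scheduler.

    `workers` maps worker name -> lbfactor; a disabled worker is simply
    left out of the dict. Returns the worker selected for each request.
    """
    lbstatus = {name: 0 for name in workers}
    schedule = []
    for _ in range(steps):
        candidate = None
        total_factor = 0
        for name, lbfactor in workers.items():
            lbstatus[name] += lbfactor          # hand out each worker's quota
            total_factor += lbfactor
            if candidate is None or lbstatus[name] > lbstatus[candidate]:
                candidate = name                # most urgent worker so far
        lbstatus[candidate] -= total_factor     # sum of all lbstatus is unchanged
        schedule.append(candidate)
    return schedule

# b disabled: the remaining workers rotate a c d a c d ...
print(byrequests_schedule({"a": 25, "c": 25, "d": 25}, 6))
# → ['a', 'c', 'd', 'a', 'c', 'd']

# lbfactor is normalized, so 25/25/25/25 behaves exactly like 1/1/1/1
assert (byrequests_schedule({"a": 25, "b": 25, "c": 25, "d": 25}, 12)
        == byrequests_schedule({"a": 1, "b": 1, "c": 1, "d": 1}, 12))

# asymmetric 70/30: over 10 schedules, 7 go to a and 3 to b
s = byrequests_schedule({"a": 70, "b": 30}, 10)
print(s.count("a"), s.count("b"))
# → 7 3
```

Running the simulation reproduces the lbstatus tables above, including the a c d rotation when b is disabled and the 7:3 split of the 70/30 configuration.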
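The Weighted Traffic Counting idea can be illustrated with a toy selection rule (an assumption for illustration, not mod_lbmethod_bytraffic's exact code): pick the worker whose accumulated bytes, normalized by its lbfactor, is smallest. With lbfactor 1/2/1 and equal-sized requests, b then serves twice as many requests; with unequal sizes it serves roughly twice the bytes instead:

```python
def bytraffic_pick(lbfactor, bytes_seen):
    """Pick the worker with the least traffic relative to its quota.

    `lbfactor` maps worker -> share of bytes; `bytes_seen` maps worker ->
    bytes already handled. A simplified model of the bytraffic idea.
    """
    return min(lbfactor, key=lambda w: bytes_seen[w] / lbfactor[w])

lbfactor = {"a": 1, "b": 2, "c": 1}
bytes_seen = {"a": 0, "b": 0, "c": 0}
counts = {"a": 0, "b": 0, "c": 0}

for _ in range(8):                   # eight requests of 100 bytes each
    w = bytraffic_pick(lbfactor, bytes_seen)
    bytes_seen[w] += 100             # request + response size is charged
    counts[w] += 1

print(counts)
# → {'a': 2, 'b': 4, 'c': 2}
```

Note that b ends up with twice the byte count of a and c, which is the invariant the method actually targets; the 2x request count here falls out only because every request is the same size.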
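The Pending Request Counting behavior, including its tie-break, can be modelled with a hypothetical class (a sketch of the described behavior, not mod_lbmethod_bybusyness itself): each worker's count of in-flight requests is tracked, the least-busy worker wins, and ties fall back to the Request Counting bookkeeping:

```python
class BybusynessBalancer:
    """Toy model: least-busy worker wins; ties use Request Counting."""

    def __init__(self, lbfactor):
        self.lbfactor = lbfactor
        self.busy = {w: 0 for w in lbfactor}      # in-flight requests
        self.lbstatus = {w: 0 for w in lbfactor}  # tie-break statistics

    def assign(self):
        least = min(self.busy.values())
        candidate = None
        total_factor = 0
        for w, factor in self.lbfactor.items():
            self.lbstatus[w] += factor            # Request Counting bookkeeping
            total_factor += factor
            if self.busy[w] == least and (
                candidate is None or self.lbstatus[w] > self.lbstatus[candidate]
            ):
                candidate = w                     # least busy, most urgent
        self.lbstatus[candidate] -= total_factor
        self.busy[candidate] += 1                 # one more active request
        return candidate

    def complete(self, w):
        self.busy[w] -= 1                         # request finished

balancer = BybusynessBalancer({"a": 1, "b": 1, "c": 1})
# If no request ever completes, queue lengths stay even: a b c a b c ...
print([balancer.assign() for _ in range(6)])
# → ['a', 'b', 'c', 'a', 'b', 'c']
```

Once workers start completing requests at different speeds, the faster (less busy) worker is assigned new requests first, which is exactly why this method suits workers that queue independently of Apache; over a long run the tie-breaks make the distribution resemble byrequests.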