From: Mike Rumph
The original goal of this MPM was to fix the 'keep alive problem' in HTTP. After a client
completes the first request, it can keep the connection
open, sending further requests using the same socket and saving
significant overhead in creating TCP connections. However, Apache HTTP Server traditionally keeps an entire child process/thread waiting for data from the client, which brings its own disadvantages.
mod_status shows new columns under the Async connections section:
When a write() to the socket returns EWOULDBLOCK or EAGAIN, the socket may only become writable again after an idle time. The worker holding the socket might be able to offload the waiting task to the listener thread, which in turn will re-assign it to the first idle worker thread available once an event is raised for the socket (for example, "the socket is now writable"). Please check the Limitations section for more information.
If the KeepAliveTimeout occurs, then the socket will be closed by the listener. In this way, the worker threads are not responsible for idle sockets, and they can be re-used to serve other requests.
The above connection states are managed by the listener thread via dedicated queues, which up to 2.4.27 were checked every 100ms to find which connections had hit timeout settings like Timeout and KeepAliveTimeout. This was a simple and efficient solution, but it presented a downside: the pollset forced a wake-up of the listener thread even when there was no need (for example, because it was completely idle), wasting resources. From 2.4.28, these queues are completely managed via event-based logic, no longer relying on active polling. Resource-constrained environments, like embedded servers, may benefit from this improvement.
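The timeouts those queues track are the ones configured with the usual directives; as a minimal, hedged sketch (the values below are illustrative, not recommendations):

    Timeout 60            # idle connections in the middle of a request are closed after 60 seconds
    KeepAliveTimeout 5    # idle keep-alive connections parked on the listener are closed after 5 seconds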
This mpm showed some scalability bottlenecks in the past, leading to the following
error: "scoreboard is full, not at MaxRequestWorkers".
MaxRequestWorkers
limits the number of simultaneous requests that will be served at any given time
and also the number of allowed processes
(MaxRequestWorkers
/ ThreadsPerChild); meanwhile,
the Scoreboard is a representation of all the running processes and
the status of their worker threads. If the scoreboard is full (so all the
threads have a state that is not idle) but the number of active requests
served is not MaxRequestWorkers,
it means that some of them are blocking new requests that could be served
but that are queued instead (up to the limit imposed by
ListenBacklog). Most of the time,
the threads are stuck in the Graceful state, namely they are waiting to
finish their work with a TCP connection to safely terminate and free up a
scoreboard slot (for example, handling long-running requests, slow clients
or connections with keep-alive enabled). Two scenarios are very common:
MaxSpareThreads).
This is particularly problematic because when the load increases again,
httpd will try to start new processes.
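As a worked illustration of the arithmetic above (the values are assumptions, not recommendations), the following configuration allows 400 / 25 = 16 simultaneously active processes:

    ThreadsPerChild   25
    MaxRequestWorkers 400   # 400 / 25 = 16 active processes

If several of those 16 processes are stuck in the Graceful state, the remaining active ones may not be able to reach MaxRequestWorkers, and new connections pile up in the listen queue instead.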
ServerLimit.
MaxRequestWorkers and
ThreadsPerChild are used
to limit the amount of active processes; meanwhile,
ServerLimit
also takes into account the ones doing a graceful close, to allow extra slots when needed. The idea is to use ServerLimit to instruct httpd about how many overall processes are tolerated before starting to impact system resources.
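A minimal sketch of that idea, with illustrative values only: ServerLimit is kept above MaxRequestWorkers / ThreadsPerChild so that processes still finishing a graceful close have scoreboard slots of their own:

    ThreadsPerChild   25
    MaxRequestWorkers 250   # 250 / 25 = 10 active processes
    ServerLimit       16    # up to 6 extra slots for gracefully closing processes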
If the connection to the client blocks while the filter is processing the data, and the amount of data produced by the filter is too big to be buffered in memory, the thread used for the request is not freed while httpd waits until the pending data is sent to the client. For predictable content (for example, a static file), the worker thread serving the response can flush the first bytes until EWOULDBLOCK or EAGAIN is returned, delegating the rest to the listener. This one in turn waits for an event on the socket and delegates the work to flush the rest of the content to the first idle worker thread. Meanwhile, in the latter example (FCGI/CGI/proxied content), the MPM can't predict the end of the response, and a worker thread has to finish its work before returning control to the listener. The only alternative is to buffer the response in memory, but that wouldn't be the safest option for the sake of the server's stability and memory footprint.
Before these new APIs were made available, the traditional select and poll APIs had to be used. Those APIs become slow when used to handle many connections, or when the set of connections changes at a high rate. The new APIs allow many more connections to be monitored, and they perform far better when the set of connections to monitor changes frequently. So these APIs made it possible to write the event MPM, which scales much better with the typical HTTP pattern of many idle connections.
The MPM assumes that the underlying apr_pollset
implementation is reasonably threadsafe. This enables the MPM to avoid excessive high-level locking, or having to wake up the listener thread in order to send it a keep-alive socket.
libkse (see man libmap.conf).
To mitigate this problem, the event MPM does two things:
ListenBacklog (modules: event, worker, prefork, mpm_winnt, mpm_netware, mpmt_os2): The maximum length of the queue of pending connections.
Generally no tuning is needed or desired; however, on some
systems, it is desirable to increase this when under a TCP SYN
flood attack. See the backlog parameter to the
listen(2) system call.
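For example, a hedged sketch for a system under SYN-flood pressure (the value is illustrative, and the kernel may clamp it to its own limit, e.g. net.core.somaxconn on Linux):

    ListenBacklog 1024   # the compiled-in default is 511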
If MaxConnectionsPerChild is 0, then the process will never expire.
Setting MaxConnectionsPerChild to a
non-zero value limits the amount of memory that a process can consume
by (accidental) memory leakage.
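A hedged example (the value is arbitrary), recycling each child after a bounded number of connections so that a slow leak cannot grow indefinitely:

    MaxConnectionsPerChild 10000   # 0, the default, means children never expire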
ServerLimit.
For threaded and hybrid servers (e.g. event
or worker), MaxRequestWorkers restricts
the total number of threads that will be available to serve clients.
For hybrid MPMs, the default value is 16 (ServerLimit) multiplied by the value of
25 (ThreadsPerChild). Therefore, to increase MaxRequestWorkers to a value that requires more than 16 processes,
you must also raise ServerLimit.
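For instance, under assumed values: to allow 1000 simultaneous request workers with the default ThreadsPerChild of 25, 1000 / 25 = 40 processes are needed, so ServerLimit has to be raised together with MaxRequestWorkers:

    ServerLimit       40
    ThreadsPerChild   25
    MaxRequestWorkers 1000   # 40 * 25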
For worker and event, the default is
MaxSpareThreads 250. These MPMs deal with idle threads
on a server-wide basis. If there are too many idle threads in the
server, then child processes are killed until the number of idle
threads is less than this number. Additional processes/threads
might be created if ListenCoresBucketsRatio
is enabled.
worker and event use a default of
MinSpareThreads 75 and deal with idle threads on a server-wide
basis. If there aren't enough idle threads in the server, then child
processes are created until the number of idle threads is greater
than number. Additional processes/threads
might be created if ListenCoresBucketsRatio
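A hedged tuning sketch for worker or event, using values equal to the stated defaults: keep enough idle threads to absorb a spike, without letting idle capacity grow unbounded:

    MinSpareThreads  75    # children are created while idle threads are below this
    MaxSpareThreads  250   # children are killed while idle threads are above this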
mpm_netware uses a default of
MinSpareThreads 10 and, since it is a single-process
MPM, tracks this on a server-wide basis.
mpmt_os2 works
similarly to mpm_netware.
The PidFile directive sets the file to
which the server records the process id of the daemon. If the
filename is not absolute, then it is assumed to be relative to the
DefaultRuntimeDir.
PidFile /var/run/apache.pid
File-based shared memory is useful for third-party applications that require direct access to the scoreboard.
If you use a ScoreBoardFile, then
you may see improved speed by placing it on a RAM disk. But be
careful that you heed the same warnings about log file placement
and security.
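If file-based shared memory is needed, a minimal sketch (the path is an assumption; a RAM-backed filesystem follows the speed advice above):

    ScoreBoardFile /run/apache2/scoreboard   # hypothetical path on a tmpfs/RAM disk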
ThreadStackSize is
set to a value lower than the operating system default. This type
of adjustment should only be made in a test environment which allows
the full set of web server processing to be exercised, as there
may be infrequent requests which require more stack to process.
The minimum required stack size strongly depends on the modules
used, but any change in the web server configuration can invalidate the current ThreadStackSize setting.
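An illustrative sketch only (the value is an assumption and must be validated by exercising the full range of requests in a test environment):

    ThreadStackSize 262144   # 256 KB per worker thread, lower than most OS defaults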