Run-time configuration directives are identical to those provided by worker, with the only addition of the AsyncRequestWorkerFactor directive.
The original goal of this MPM was to fix the 'keep alive problem' in HTTP. After a client
completes the first request, it can keep the connection open, sending further requests
using the same socket and saving significant overhead in creating TCP connections. However,
Apache HTTP Server traditionally keeps an entire child process/thread waiting for data from
the client, which brings its own disadvantages. To solve this problem, this MPM uses a
dedicated listener thread for each process along with a pool of worker threads, sharing
queues specific for those requests in keep-alive mode (or, more simply, "readable"), those
in write completion mode, and those in the process of shutting down ("closing").
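As an illustration of this split, here is a highly simplified sketch, not the actual mpm_event code: it uses Linux epoll and pthreads directly rather than APR, and all names (ready_fds, queue_push, worker, and so on) are invented for the example. One listener thread blocks in the kernel's readiness API and feeds ready sockets to a pool of workers through a queue, instead of parking one thread per connection:

    /* Simplified listener/worker sketch (illustrative only). */
    #include <pthread.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    #define QUEUE_SIZE  256
    #define NUM_WORKERS 4

    static int ready_fds[QUEUE_SIZE];
    static int q_head, q_tail;
    static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t q_nonempty = PTHREAD_COND_INITIALIZER;

    static void queue_push(int fd)
    {
        pthread_mutex_lock(&q_lock);
        ready_fds[q_tail] = fd;
        q_tail = (q_tail + 1) % QUEUE_SIZE;
        pthread_cond_signal(&q_nonempty);
        pthread_mutex_unlock(&q_lock);
    }

    static int queue_pop(void)
    {
        pthread_mutex_lock(&q_lock);
        while (q_head == q_tail)
            pthread_cond_wait(&q_nonempty, &q_lock);
        int fd = ready_fds[q_head];
        q_head = (q_head + 1) % QUEUE_SIZE;
        pthread_mutex_unlock(&q_lock);
        return fd;
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            int fd = queue_pop();
            char buf[4096];
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n <= 0)
                close(fd);   /* client gone: the "closing" case */
            /* else: parse and answer the request, then hand the idle
             * keep-alive socket back to the listener's poll set so
             * this thread is free for other connections. */
        }
        return NULL;
    }

    int main(void)
    {
        int epfd = epoll_create1(0);
        pthread_t tid;
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&tid, NULL, worker, NULL);

        /* Listener: accepted sockets would be added to epfd with
         * epoll_ctl() (omitted here); we only wait and delegate. */
        for (;;) {
            struct epoll_event evs[64];
            int n = epoll_wait(epfd, evs, 64, -1);
            for (int i = 0; i < n; i++)
                queue_push(evs[i].data.fd);
        }
    }

The key property is that an idle keep-alive connection costs only an entry in the poll set, not a blocked thread.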
This new architecture, leveraging non-blocking sockets and modern kernel features exposed by APR (like Linux's epoll), no longer requires the mpm-accept Mutex configured to avoid the thundering herd problem.

The total amount of connections that a single process/threads block can handle is regulated by the AsyncRequestWorkerFactor directive.
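As a rough sketch of the admission rule this directive implies (the function and variable names below are hypothetical, not the real mpm_event symbols), a process only accepts a new connection while it still has headroom beyond its busy threads:

    /* Illustrative per-process admission check. */
    static int should_accept(int current_connections, int threads_per_child,
                             double async_factor, int idle_workers)
    {
        /* Accept only while: connections < ThreadsPerChild +
         * (AsyncRequestWorkerFactor * number of idle workers). */
        return current_connections <
               threads_per_child + (int)(async_factor * idle_workers);
    }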
These improvements are valid for both HTTP and HTTPS connections.
A similar restriction is currently present for requests involving an
output filter that needs to read and/or modify the whole response body.
If the connection to the client blocks while the filter is processing the
data, and the amount of data produced by the filter is too big to be
buffered in memory, the thread used for the request is not freed while
httpd waits until the pending data is sent to the client.
To illustrate this point, consider the following two situations:
serving a static asset (like a CSS file) versus serving content retrieved from
FCGI/CGI or a proxied server. The former is predictable, in that the event MPM
has full visibility on the end of the content and it can use events: the worker
thread serving the response content can flush the first bytes until EWOULDBLOCK
or EAGAIN is returned, delegating the rest to the listener. The listener in turn
waits for an event on the socket, and delegates the work to flush the rest of the content
to the first idle worker thread. Meanwhile, in the latter example (FCGI/CGI/proxied content)
the MPM can't predict the end of the response, so a worker thread has to finish its work
before returning control to the listener. The only alternative would be to buffer the
response in memory, but that would not be the safest option for the sake of the
server's stability and memory footprint.
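The write-completion path for the predictable case can be sketched as follows. This is a minimal illustration assuming a Linux epoll event loop and a socket already registered with the poller; flush_or_delegate is a hypothetical helper, not an httpd function:

    #include <errno.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    /* Returns 1 when the whole buffer was flushed, 0 when the rest was
     * delegated to the event loop, -1 on a real error. */
    static int flush_or_delegate(int epfd, int fd, const char *buf,
                                 size_t len, size_t *off)
    {
        while (*off < len) {
            ssize_t n = write(fd, buf + *off, len - *off);
            if (n > 0) {
                *off += (size_t)n;
                continue;
            }
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                /* Socket not writable now: ask the listener to wake us
                 * when it is, and return this worker to the pool
                 * instead of blocking in write(). */
                struct epoll_event ev = { .events = EPOLLOUT | EPOLLONESHOT,
                                          .data.fd = fd };
                epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
                return 0;
            }
            return -1;          /* client gone, or another hard error */
        }
        return 1;               /* fully flushed, no event needed */
    }

When the listener later reports the socket writable, any idle worker can resume from the saved offset, which is what makes the static-asset case cheap.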
Before these new APIs were made available, the traditional select and poll APIs had to be used.
Those APIs become slow when used to handle many connections, or when the set of connections changes at a high rate.
The new APIs can monitor many more connections, and they perform far better when the set of connections to monitor changes frequently. These properties made it possible to write the event MPM, which scales much better with the typical HTTP pattern of many idle connections.
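The difference shows in how the two API families are driven. The following contrast is an illustrative sketch (the epoll half is Linux-specific):

    #include <sys/epoll.h>
    #include <sys/select.h>

    /* select(): the interest set is rebuilt in user space and scanned
     * by the kernel on every call, so each call costs O(watched fds). */
    void wait_select(const int *fds, int n)
    {
        fd_set rset;
        FD_ZERO(&rset);
        int maxfd = 0;
        for (int i = 0; i < n; i++) {
            FD_SET(fds[i], &rset);
            if (fds[i] > maxfd) maxfd = fds[i];
        }
        select(maxfd + 1, &rset, NULL, NULL, NULL);
        /* ...followed by another O(n) FD_ISSET() scan to find the
         * ready descriptors. */
    }

    /* epoll: the kernel keeps the interest set, so each socket is
     * registered once... */
    void register_once(int epfd, const int *fds, int n)
    {
        for (int i = 0; i < n; i++) {
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[i] };
            epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev);
        }
    }

    /* ...and each wait returns only the ready descriptors,
     * independently of how many connections are being watched. */
    void wait_epoll(int epfd)
    {
        struct epoll_event ready[64];
        epoll_wait(epfd, ready, 64, -1);
    }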
The MPM assumes that the underlying apr_pollset implementation is reasonably threadsafe. This enables the MPM to avoid excessive high-level locking, or having to wake up the listener thread in order to send it a keep-alive socket.
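For instance, a threadsafe pollset can be requested from APR like this (a minimal sketch with error handling trimmed; the size of 1024 is arbitrary):

    #include <apr_poll.h>

    static apr_pollset_t *make_pollset(apr_pool_t *pool)
    {
        apr_pollset_t *ps = NULL;
        /* On backends that cannot honor the flag, apr_pollset_create()
         * fails rather than silently degrading. */
        apr_status_t rv = apr_pollset_create(&ps, 1024, pool,
                                             APR_POLLSET_THREADSAFE);
        return (rv == APR_SUCCESS) ? ps : NULL;
    }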
When all the worker threads are idle, the absolute maximum number of concurrent connections can be calculated in a simpler way:

    (AsyncRequestWorkerFactor + 1) * MaxRequestWorkers

If all the processes have all threads idle, the absolute maximum number of concurrent connections can be calculated in two ways:

    max_connections = (ThreadsPerChild + (AsyncRequestWorkerFactor * idle_workers)) * ServerLimit

    max_connections = (AsyncRequestWorkerFactor + 1) * MaxRequestWorkers
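For example, taking illustrative values of ThreadsPerChild = 10, ServerLimit = 4, AsyncRequestWorkerFactor = 2 (hence MaxRequestWorkers = 40) with all workers idle (idle_workers = 10), the two formulas agree:

    max_connections = (ThreadsPerChild + (AsyncRequestWorkerFactor * idle_workers)) * ServerLimit
                    = (10 + (2 * 10)) * 4 = 120

    max_connections = (AsyncRequestWorkerFactor + 1) * MaxRequestWorkers
                    = (2 + 1) * 40 = 120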