From: Mike Rumph
Date: Thu, 30 Jan 2020 18:44:30 +0000 (+0000)
Subject: Fix some grammar errors in the docs
X-Git-Tag: 2.5.0-alpha2-ci-test-only~1689
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=03501ae1352f229161a68d5a61c297396bbd7826;p=thirdparty%2Fapache%2Fhttpd.git

Fix some grammar errors in the docs

git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1873372 13f79535-47bb-0310-9956-ffa450edef68
---

diff --git a/docs/manual/mod/event.html.en b/docs/manual/mod/event.html.en
index 8cca7ed524a..6fe40e91e63 100644
--- a/docs/manual/mod/event.html.en
+++ b/docs/manual/mod/event.html.en
@@ -95,7 +95,7 @@ of the AsyncRequestWorkerFactor.
How it Works

- This original goal of this MPM was to fix the 'keep alive problem' in HTTP. After a client
+ The original goal of this MPM was to fix the 'keep alive problem' in HTTP. After a client
completes the first request, it can keep the connection open, sending further requests using the same socket and saving significant overhead in creating TCP connections. However,

@@ -122,24 +122,29 @@ of the AsyncRequestWorkerFactor.
The status page of mod_status shows new columns under the Async connections section:

Writing
- While sending the response to the client, it might happen that the TCP write buffer fills up because the connection is too slow. Usually in this case a write() to the socket returns EWOULDBLOCK or EAGAIN, to become writable again after an idle time. The worker holding the socket might be able to offload the waiting task to the listener thread, that in turn will re-assign it to the first idle worker thread available once an event will be raised for the socket (for example, "the socket is now writable"). Please check the Limitations section for more information.
+ While sending the response to the client, it might happen that the TCP write buffer fills up because the connection is too slow.
+ Usually in this case, a write() to the socket returns EWOULDBLOCK or EAGAIN, and the socket becomes writable again only after an idle time.
+ The worker holding the socket might be able to offload the waiting task to the listener thread, which in turn will re-assign it to the first idle worker thread available once an event is raised for the socket (for example, "the socket is now writable").
+ Please check the Limitations section for more information.
Keep-alive
Keep Alive handling is the most basic improvement from the worker MPM. Once a worker thread finishes to flush the response to the client, it can offload the
- socket handling to the listener thread, that in turns will wait for any event from the
+ socket handling to the listener thread, which in turn will wait for any event from the
OS, like "the socket is readable". If any new request comes from the client, then the listener will forward it to the first worker thread available. Conversely, if the KeepAliveTimeout occurs then the socket will be
- closed by the listener. In this way the worker threads are not responsible for idle
- sockets and they can be re-used to serve other requests.
+ closed by the listener. In this way, the worker threads are not responsible for idle
+ sockets, and they can be re-used to serve other requests.
Closing
Sometimes the MPM needs to perform a lingering close, namely sending back an early error to the client while it is still transmitting data to httpd. Sending the response and then closing the connection immediately is not the correct thing to do since the client (still trying to send the rest of the
- request) would get a connection reset and could not read the httpd's response. The lingering close is time bounded but it can take relatively long
- time, so it's offloaded to a worker thread (including the shutdown hooks and real socket close). From 2.4.28 onward this is also the
+ request) would get a connection reset and could not read the httpd's response.
+ The lingering close is time-bounded, but it can take a relatively long
+ time, so it's offloaded to a worker thread (including the shutdown hooks and real socket close).
+ From 2.4.28 onward, this is also the
case when connections finally timeout (the listener thread never handles connections besides waiting for and dispatching their events).
@@ -149,40 +154,40 @@ of the AsyncRequestWorkerFactor.

The above connection states are managed by the listener thread via dedicated queues, that up to 2.4.27 were checked every 100ms to find which connections hit timeout settings like Timeout and KeepAliveTimeout. This was a simple and efficient solution, but it presented a downside: the pollset was
- forcing a wake-up of the listener thread even if there was no need (for example because completely idle), wasting resources. From 2.4.28
- these queues are completely managed via an event based logic, not relying anymore on active polling.
+ forcing a wake-up of the listener thread even if there was no need (for example because completely idle), wasting resources.
+ From 2.4.28, these queues are completely managed via event-based logic, not relying anymore on active polling.
Resource constrained environments, like embedded servers, may benefit from this improvement.
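As a rough illustration of the timeout settings discussed above, a minimal event-MPM sketch in httpd.conf syntax; the values are illustrative defaults, not recommendations taken from this patch:

    <IfModule mpm_event_module>
        Timeout                  60
        KeepAliveTimeout         5
        AsyncRequestWorkerFactor 2
    </IfModule>

With settings like these, a connection sitting idle in keep-alive is closed by the listener thread once KeepAliveTimeout expires, without tying up a worker thread.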

Graceful process termination and Scoreboard usage

- This mpm showed some scalability bottlenecks in the past leading to the following
+ This mpm showed some scalability bottlenecks in the past, leading to the following
error: "scoreboard is full, not at MaxRequestWorkers". MaxRequestWorkers limits the number of simultaneous requests that will be served at any given time and also the number of allowed processes (MaxRequestWorkers
- / ThreadsPerChild), meanwhile
+ / ThreadsPerChild); meanwhile,
the Scoreboard is a representation of all the running processes and the status of their worker threads. If the scoreboard is full (so all the threads have a state that is not idle) but the number of active requests served is not MaxRequestWorkers, it means that some of them are blocking new requests that could be served but that are queued instead (up to the limit imposed by
- ListenBacklog). Most of the times
+ ListenBacklog). Most of the time,
the threads are stuck in the Graceful state, namely they are waiting to finish their work with a TCP connection to safely terminate and free up a
- scoreboard slot (for example handling long running requests, slow clients
+ scoreboard slot (for example, handling long-running requests, slow clients
or connections with keep-alive enabled). Two scenarios are very common:

@@ -235,14 +240,14 @@ of the AsyncRequestWorkerFactor.

data, and the amount of data produced by the filter is too big to be buffered in memory, the thread used for the request is not freed while httpd waits until the pending data is sent to the client.
- To illustrate this point we can think about the following two situations:
+ To illustrate this point, we can think about the following two situations:
serving a static asset (like a CSS file) versus serving content retrieved from FCGI/CGI or a proxied server. The former is predictable, namely the event MPM has full visibility on the end of the content and it can use events: the worker thread serving the response content can flush the first bytes until EWOULDBLOCK or EAGAIN is returned, delegating the rest to the listener. This one in turn
- waits for an event on the socket, and delegates the work to flush the rest of the content
- to the first idle worker thread. Meanwhile in the latter example (FCGI/CGI/proxied content)
+ waits for an event on the socket and delegates the work to flush the rest of the content
+ to the first idle worker thread. Meanwhile, in the latter example (FCGI/CGI/proxied content),
the MPM can't predict the end of the response and a worker thread has to finish its work before returning the control to the listener. The only alternative is to buffer the response in memory, but it wouldn't be the safest option for the sake of the

@@ -260,7 +265,7 @@ of the AsyncRequestWorkerFactor.

Before these new APIs where made available, the traditional select and poll APIs had to be used. Those APIs get slow if used to handle many connections or if the set of connections rate of change is high.
- The new APIs allow to monitor much more connections and they perform way better when the set of connections to monitor changes frequently. So these APIs made it possible to write the event MPM, that scales much better with the typical HTTP pattern of many idle connections.
+ The new APIs allow many more connections to be monitored, and they perform much better when the set of connections to monitor changes frequently. So these APIs made it possible to write the event MPM, which scales much better with the typical HTTP pattern of many idle connections.

The MPM assumes that the underlying apr_pollset implementation is reasonably threadsafe. This enables the MPM to

@@ -290,7 +295,7 @@ of the AsyncRequestWorkerFactor.

@@ -283,8 +283,8 @@ including other causes. Module:event, worker, prefork, mpm_winnt, mpm_netware, mpmt_os2

The maximum length of the queue of pending connections.
- Generally no tuning is needed or desired, however on some
- systems it is desirable to increase this when under a TCP SYN
+ Generally no tuning is needed or desired; however, on some
+ systems, it is desirable to increase this when under a TCP SYN
flood attack. See the backlog parameter to the listen(2) system call.
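For illustration, a hedged httpd.conf sketch of the tuning just described; the value is an assumption for a server under SYN-flood pressure, and the operating system may silently cap it to its own limit:

    ListenBacklog 1023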

@@ -388,7 +388,7 @@ will handle during its life
0, then the process will never expire.

Setting MaxConnectionsPerChild to a
- non-zero value limits the amount of memory that process can consume
+ non-zero value limits the amount of memory that a process can consume
by (accidental) memory leakage.
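A hypothetical example of that safeguard (the figure is illustrative, not a recommended setting): recycling each child after a fixed number of connections bounds how much memory a slow leak can accumulate.

    MaxConnectionsPerChild 10000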

@@ -436,9 +436,9 @@ simultaneously
ServerLimit.

For threaded and hybrid servers (e.g. event
- or worker) MaxRequestWorkers restricts
+ or worker), MaxRequestWorkers restricts
the total number of threads that will be available to serve clients.
- For hybrid MPMs the default value is 16 (ServerLimit) multiplied by the value of
+ For hybrid MPMs, the default value is 16 (ServerLimit) multiplied by the value of
25 (ThreadsPerChild). Therefore, to increase MaxRequestWorkers to a value that requires more than 16 processes, you must also raise ServerLimit.
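To make the arithmetic concrete, a hedged httpd.conf sketch (illustrative values, not from this patch): with the default ThreadsPerChild of 25, the default ceiling is 16 x 25 = 400 workers, so pushing MaxRequestWorkers to 600 needs 600 / 25 = 24 processes and therefore a higher ServerLimit.

    ServerLimit       24
    ThreadsPerChild   25
    MaxRequestWorkers 600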

@@ -463,7 +463,7 @@ simultaneously

For worker and event, the default is MaxSpareThreads 250. These MPMs deal with idle threads on a server-wide basis. If there are too many idle threads in the
- server then child processes are killed until the number of idle
+ server, then child processes are killed until the number of idle
threads is less than this number. Additional processes/threads might be created if ListenCoresBucketsRatio is enabled.
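For illustration, a server-wide idle-thread window for event or worker, combining this directive with MinSpareThreads (described further below); the numbers are simply the defaults quoted in this section, not tuning advice:

    MinSpareThreads  75
    MaxSpareThreads 250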

@@ -514,7 +514,7 @@ spikes

worker and event use a default of MinSpareThreads 75 and deal with idle threads on a server-wide
- basis. If there aren't enough idle threads in the server then child
+ basis. If there aren't enough idle threads in the server, then child
processes are created until the number of idle threads is greater than number. Additional processes/threads might be created if ListenCoresBucketsRatio

@@ -522,7 +522,7 @@ spikes

mpm_netware uses a default of MinSpareThreads 10 and, since it is a single-process
- MPM, tracks this on a server-wide bases.
+ MPM, tracks this on a server-wide basis.

mpmt_os2 works similar to mpm_netware. For

@@ -548,7 +548,7 @@ of the daemon

The PidFile directive sets the file to which the server records the process id of the daemon. If the
- filename is not absolute then it is assumed to be relative to the
+ filename is not absolute, then it is assumed to be relative to the
DefaultRuntimeDir.

Example

PidFile /var/run/apache.pid
@@ -615,7 +615,7 @@ the child processes

File-based shared memory is useful for third-party applications that require direct access to the scoreboard.

- If you use a ScoreBoardFile then
+ If you use a ScoreBoardFile, then
you may see improved speed by placing it on a RAM disk. But be careful that you heed the same warnings about log file placement and security.
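A hedged sketch of the RAM-disk placement mentioned above; the path is an assumption (any memory-backed filesystem such as tmpfs would do) and is not taken from this patch:

    ScoreBoardFile /run/httpd/scoreboard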

@@ -869,7 +869,7 @@ client connections will be achievable if
ThreadStackSize is set to a value lower than the operating system default. This type of adjustment should only be made in a test environment which allows
- the full set of web server processing can be exercised, as there
+ the full set of web server processing to be exercised, as there
may be infrequent requests which require more stack to process. The minimum required stack size strongly depends on the modules used, but any change in the web server configuration can invalidate

diff --git a/docs/manual/mod/mpm_common.xml b/docs/manual/mod/mpm_common.xml
index 6f755233f69..c845d670720 100644
--- a/docs/manual/mod/mpm_common.xml
+++ b/docs/manual/mod/mpm_common.xml
@@ -149,7 +149,7 @@ of the daemon

The PidFile directive sets the file to which the server records the process id of the daemon. If the
- filename is not absolute then it is assumed to be relative to the
+ filename is not absolute, then it is assumed to be relative to the
DefaultRuntimeDir.

Example

@@ -348,8 +348,8 @@ in *BSDs.

The maximum length of the queue of pending connections.
- Generally no tuning is needed or desired, however on some
- systems it is desirable to increase this when under a TCP SYN
+ Generally no tuning is needed or desired; however, on some
+ systems, it is desirable to increase this when under a TCP SYN
flood attack. See the backlog parameter to the listen(2) system call.

@@ -388,9 +388,9 @@ simultaneously
ServerLimit.

For threaded and hybrid servers (e.g. event
- or worker) MaxRequestWorkers restricts
+ or worker), MaxRequestWorkers restricts
the total number of threads that will be available to serve clients.
- For hybrid MPMs the default value is 16 (ServerLimit) multiplied by the value of
+ For hybrid MPMs, the default value is 16 (ServerLimit) multiplied by the value of
25 (ThreadsPerChild). Therefore, to increase MaxRequestWorkers to a value that requires more than 16 processes, you must also raise ServerLimit.

0, then the process will never expire.

Setting MaxConnectionsPerChild to a
- non-zero value limits the amount of memory that process can consume
+ non-zero value limits the amount of memory that a process can consume
by (accidental) memory leakage.

@@ -470,7 +470,7 @@ will handle during its life

For worker and event, the default is MaxSpareThreads 250. These MPMs deal with idle threads on a server-wide basis. If there are too many idle threads in the
- server then child processes are killed until the number of idle
+ server, then child processes are killed until the number of idle
threads is less than this number. Additional processes/threads might be created if ListenCoresBucketsRatio is enabled.

@@ -520,7 +520,7 @@ spikes

worker and event use a default of MinSpareThreads 75 and deal with idle threads on a server-wide
- basis. If there aren't enough idle threads in the server then child
+ basis. If there aren't enough idle threads in the server, then child
processes are created until the number of idle threads is greater than number. Additional processes/threads might be created if ListenCoresBucketsRatio

@@ -528,7 +528,7 @@ spikes

mpm_netware uses a default of MinSpareThreads 10 and, since it is a single-process
- MPM, tracks this on a server-wide bases.
+ MPM, tracks this on a server-wide basis.

mpmt_os2 works similar to mpm_netware. For

@@ -572,7 +572,7 @@ the child processes

File-based shared memory is useful for third-party applications that require direct access to the scoreboard.

- If you use a ScoreBoardFile then
+ If you use a ScoreBoardFile, then
you may see improved speed by placing it on a RAM disk. But be careful that you heed the same warnings about log file placement and security.

@@ -870,7 +870,7 @@ client connections will be achievable if
ThreadStackSize is set to a value lower than the operating system default. This type of adjustment should only be made in a test environment which allows
- the full set of web server processing can be exercised, as there
+ the full set of web server processing to be exercised, as there
may be infrequent requests which require more stack to process. The minimum required stack size strongly depends on the modules used, but any change in the web server configuration can invalidate

@@ -917,7 +917,7 @@ to the httpd process.
Some third-party firwewall software components may inject errors into accept() processing, using return codes not specified by the
- operating system
+ operating system.
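Tying back to the ThreadStackSize guidance above, a hypothetical test-environment override; the byte value is purely illustrative and would have to be validated against the modules actually loaded:

    ThreadStackSize 262144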