From: (no author) <(no author)@unknown> Date: Mon, 9 Dec 1996 06:06:41 +0000 (+0000) Subject: This commit was manufactured by cvs2svn to create tag 'APACHE_1_2b2'. X-Git-Tag: APACHE_1_2b2^0 X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=54bfee287e17a47cd3f31db8c7a4eb46d078ae2e;p=thirdparty%2Fapache%2Fhttpd.git This commit was manufactured by cvs2svn to create tag 'APACHE_1_2b2'. git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/tags/APACHE_1_2b2@77239 13f79535-47bb-0310-9956-ffa450edef68 --- diff --git a/docs/docroot/apache_pb.gif b/docs/docroot/apache_pb.gif deleted file mode 100644 index 3a1c139fc42..00000000000 Binary files a/docs/docroot/apache_pb.gif and /dev/null differ diff --git a/docs/manual/bind.html.en b/docs/manual/bind.html.en deleted file mode 100644 index cb1fa0daacf..00000000000 --- a/docs/manual/bind.html.en +++ /dev/null @@ -1,100 +0,0 @@ -
-- -There are two directives used to restrict or specify which addresses -and ports Apache listens to. - -
BindAddress *- -Makes the server listen to just the specified address. If the argument -is *, the server listens to all addresses. The port listened to -is set with the Port directive. Only one BindAddress -should be used. - -
none- -Listen can be used instead of BindAddress and -Port. It tells the server to accept incoming requests on the -specified port or address-and-port combination. If the first format is -used, with a port number only, the server listens to the given port on -all interfaces, instead of the port given by the Port -directive. If an IP address is given as well as a port, the server -will listen on the given port and interface.
Multiple Listen -directives may be used to specify a number of addresses and ports to -listen to. The server will respond to requests from any of the listed -addresses and ports.
- -For example, to make the server accept connections on both port -80 and port 8000, use: -
- Listen 80 - Listen 8000 -- -To make the server accept connections on two specified -interfaces and port numbers, use -
- Listen 192.170.2.1:80 - Listen 192.170.2.5:8000 -- -
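The single-interface case can also be written with BindAddress and Port; a minimal sketch, with an illustrative address (equivalent to "Listen 192.170.2.1:80" above):

    BindAddress 192.170.2.1
    Port 80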
-
-The Apache module mod_negotiation handles
-content negotiation in two different ways; special treatment for the
-pseudo-mime-type application/x-type-map, and the
-MultiViews per-directory Option (which can be set in srm.conf, or in
-.htaccess files, as usual). These features are alternate user
-interfaces to what amounts to the same piece of code (in the new file
-http_mime_db.c) which implements the content negotiation
-portion of the HTTP protocol.
- -Each of these features allows one of several files to satisfy a -request, based on what the client says it's willing to accept; the -differences are in the way the files are identified: - -
*.var file) names the files
- containing the variants explicitly
- application/x-type-map. Note that to use this feature,
-you've got to have an AddType some place which defines a
-file suffix as application/x-type-map; the easiest thing
-may be to stick a
-- - AddType application/x-type-map var - --in
srm.conf. See comments in the sample config files for
-details. - -Type map files have an entry for each available variant; these entries -consist of contiguous RFC822-format header lines. Entries for -different variants are separated by blank lines. Blank lines are -illegal within an entry. It is conventional to begin a map file with -an entry for the combined entity as a whole, e.g., -
- - URI: foo; vary="type,language" - - URI: foo.en.html - Content-type: text/html; level=2 - Content-language: en - - URI: foo.fr.html - Content-type: text/html; level=2 - Content-language: fr - --If the variants have different qualities, that may be indicated by the -"qs" parameter, as in this picture (available as jpeg, gif, or ASCII-art): -
- - URI: foo; vary="type,language" - - URI: foo.jpeg - Content-type: image/jpeg; qs=0.8 - - URI: foo.gif - Content-type: image/gif; qs=0.5 - - URI: foo.txt - Content-type: text/plain; qs=0.01 - -
- -The full list of headers recognized is: - -
URI:
- Content-type:
- image/gif, text/plain, or
- text/html; level=3.
- Content-language:
- en for English,
- kr for Korean, etc.).
- Content-encoding:
- x-compress, or gzip, as appropriate.
- Content-length:
- Options directive within a <Directory>
-section in access.conf, or (if AllowOverride
-is properly set) in .htaccess files. Note that
-Options All does not set MultiViews; you
-have to ask for it by name. (Fixing this is a one-line change to
-httpd.h).
-
-
-
-The effect of MultiViews is as follows: if the server
-receives a request for /some/dir/foo, where
-/some/dir has MultiViews enabled, and
-/some/dir/foo does *not* exist, then the server reads the
-directory looking for files named foo.*, and effectively fakes up a
-type map which names all those files, assigning them the same media
-types and content-encodings it would have if the client had asked for
-one of them by name. It then chooses the best match to the client's
-requirements, and forwards it along.
-
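Enabling this for a directory is just a matter of naming MultiViews in an Options directive; a minimal sketch with an illustrative path (remember that Options All does not turn it on):

    <Directory /usr/local/etc/httpd/htdocs/some/dir>
    Options Indexes MultiViews
    </Directory>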
-
-
-This applies to searches for the file named by the
-DirectoryIndex directive, if the server is trying to
-index a directory; if the configuration files specify
-
- - DirectoryIndex index - -then the server will arbitrate between
index.html
-and index.html3 if both are present. If neither is
-present, and index.cgi is there, the server will run it.
-
-
-
-If one of the files found by the globbing is a CGI script, it's not
-obvious what should happen. My code gives that case special
-treatment --- if the request was a POST, or a GET with QUERY_ARGS or
-PATH_INFO, the script is given an extremely high quality rating, and
-generally invoked; otherwise it is given an extremely low quality
-rating, which generally causes one of the other views (if any) to be
-retrieved. This is the only jiggering of quality ratings done by the
-MultiViews code; aside from that, all qualities in the synthesized
-type maps are 1.0.
-
-
-New as of 0.8: Documents in multiple languages can also be resolved through the use
-of the AddLanguage and LanguagePriority
-directives:
-
-
-
-AddLanguage en .en
-AddLanguage fr .fr
-AddLanguage de .de
-AddLanguage da .da
-AddLanguage el .el
-AddLanguage it .it
-
-# LanguagePriority allows you to give precedence to some languages
-# in case of a tie during content negotiation.
-# Just list the languages in decreasing order of preference.
-
-LanguagePriority en fr de
-
-Here, a request for "foo.html" matched against "foo.html.en" and
-"foo.html.fr" would return a French document to a browser that
-indicated a preference for French, or an English document otherwise.
-In fact, a request for "foo" matched against "foo.html.en",
-"foo.html.fr", "foo.ps.en", "foo.pdf.de", and "foo.txt.it" would do
-just what you expect - treat those suffixes as a database and compare
-the request to it, returning the best match. The languages and data
-types share the same suffix name space.
-
-
-Note that this machinery only comes into play if the file which the
-user attempted to retrieve does not exist by that name; if it
-does, it is simply retrieved as usual. (So, someone who actually asks
-for foo.jpeg, as opposed to foo, never gets
-foo.gif).
-
-
-
diff --git a/docs/manual/custom-error.html.en b/docs/manual/custom-error.html.en
deleted file mode 100644
index efc3f041e2f..00000000000
--- a/docs/manual/custom-error.html.en
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
Customizable responses can be defined to be activated in the event of a
-server-detected error or problem.
-e.g. if a script crashes and produces a "500 Server Error" response, then
-this response can be replaced either with some friendlier text or with a
-redirection to another URL (local or external).
-
Redirecting to another URL can be useful, but only if some information
-can be passed which can then be used to explain and/or log the error/problem
-more clearly.
To achieve this, Apache will define new CGI-like environment
-variables, e.g.
-
-REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
-REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
-REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
-REDIRECT_QUERY_STRING=
-REDIRECT_REMOTE_ADDR=121.345.78.123
-REDIRECT_REMOTE_HOST=ooh.ahhh.com
-REDIRECT_SERVER_NAME=crash.bang.edu
-REDIRECT_SERVER_PORT=80
-REDIRECT_SERVER_SOFTWARE=Apache/0.8.15
-REDIRECT_URL=/cgi-bin/buggy.pl
-
-
-note the REDIRECT_ prefix.
-
-At least REDIRECT_URL and REDIRECT_QUERY_STRING will
-be passed to the new URL (assuming it's a cgi-script or a cgi-include). The
-other variables will exist only if they existed prior to the error/problem.
- -
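As a rough illustration, a script installed as an ErrorDocument target (such as the /cgi-bin/crash-recover example below) could report these variables back; this is a hypothetical sketch, not part of the distribution:

    /* crash-recover.c --- hypothetical CGI program used as an
     * ErrorDocument target.  It prints the REDIRECT_ variables named
     * in this document; everything else is illustrative.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *url    = getenv("REDIRECT_URL");
        const char *status = getenv("REDIRECT_STATUS");

        printf("Content-type: text/html\n\n");
        printf("<H1>Sorry, something went wrong</H1>\n");
        if (url)
            printf("<P>While serving: %s\n", url);
        if (status)
            printf("<P>Original status: %s\n", status);
        return 0;
    }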
Here are some examples... -
-ErrorDocument 500 /cgi-bin/crash-recover
-ErrorDocument 500 "Sorry, our script crashed because %s. Oh dear
-ErrorDocument 500 http://xxx/
-ErrorDocument 404 /Lame_excuses/not_found.html
-ErrorDocument 401 /Subscription/how_to_subscribe.html
-
-The syntax is,
-ErrorDocument
-<3-digit-code> action
- -where the action can be, -
%s.
-Note: the (") prefix isn't displayed.
-ErrorDocument definitions are sensitive to a
-SIGHUP, so you can change any of the definitions or add new ones
-prior to sending a SIGHUP (kill -1) signal.
-
- -
- -
-
renamed with a REDIRECT_ prefix. In addition, Apache will define REDIRECT_URL and REDIRECT_STATUS to help the script
-trace its origin.
- -A few notes on general pedagogical style here. In the interest of -conciseness, all structure declarations here are incomplete --- the -real ones have more slots that I'm not telling you about. For the -most part, these are reserved to one component of the server core or -another, and should be altered by modules with caution. However, in -some cases, they really are things I just haven't gotten around to -yet. Welcome to the bleeding edge.
- -Finally, here's an outline, to give you some bare idea of what's -coming up, and in what order: - -
SetEnv, which don't really fit well elsewhere.
- OK.
- DECLINED. In this case, the
- server behaves in all respects as if the handler simply hadn't
- been there.
- */* (i.e., a
-wildcard MIME type specification). However, wildcard handlers are
-only invoked if the server has already tried and failed to find a more
-specific response handler for the MIME type of the requested object
-(either none existed, or they all declined).
-
-The handlers themselves are functions of one argument (a
-request_rec structure, vide infra), which return an
-integer, as above.
- -
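To make the shape concrete, here is a minimal illustrative sketch (not from the distribution; it assumes the usual declarations from httpd.h); the real mod_cgi examples follow below:

    /* Sketch of the shape of a handler: one request_rec * in, an
     * integer status code back.  Illustrative only.
     */
    int my_phase_handler (request_rec *r)
    {
        if (r->method_number != M_GET)
            return DECLINED;        /* behave as if we had not been here */

        /* ... do whatever work this phase calls for ... */

        return OK;                  /* we handled it */
    }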
ScriptAlias config file
-command. It's actually a great deal more complicated than most
-modules, but if we're going to have only one example, it might as well
-be the one with its fingers in every place.
-
-Let's begin with handlers. In order to handle the CGI scripts, the
-module declares a response handler for them. Because of
-ScriptAlias, it also has handlers for the name
-translation phase (to recognise ScriptAliased URIs), the
-type-checking phase (any ScriptAliased request is typed
-as a CGI script).
-
-The module needs to maintain some per (virtual)
-server information, namely, the ScriptAliases in effect;
-the module structure therefore contains pointers to a function which
-builds these structures, and to another which combines two of them (in
-case the main server and a virtual server both have
-ScriptAliases declared).
-
-Finally, this module contains code to handle the
-ScriptAlias command itself. This particular module only
-declares one command, but there could be more, so modules have
-command tables which declare their commands, and describe
-where they are permitted, and how they are to be invoked.
-
-A final note on the declared types of the arguments of some of these
-commands: a pool is a pointer to a resource pool
-structure; these are used by the server to keep track of the memory
-which has been allocated, files opened, etc., either to service a
-particular request, or to handle the process of configuring itself.
-That way, when the request is over (or, for the configuration pool,
-when the server is restarting), the memory can be freed, and the files
-closed, en masse, without anyone having to write explicit code to
-track them all down and dispose of them. Also, a
-cmd_parms structure contains various information about
-the config file being read, and other status information, which is
-sometimes of use to the function which processes a config-file command
-(such as ScriptAlias).
-
-With no further ado, the module itself:
-
-
-/* Declarations of handlers. */
-
-int translate_scriptalias (request_rec *);
-int type_scriptalias (request_rec *);
-int cgi_handler (request_rec *);
-
-/* Subsidiary dispatch table for response-phase handlers, by MIME type */
-
-handler_rec cgi_handlers[] = {
-{ "application/x-httpd-cgi", cgi_handler },
-{ NULL }
-};
-
-/* Declarations of routines to manipulate the module's configuration
- * info. Note that these are returned, and passed in, as void *'s;
- * the server core keeps track of them, but it doesn't, and can't,
- * know their internal structure.
- */
-
-void *make_cgi_server_config (pool *);
-void *merge_cgi_server_config (pool *, void *, void *);
-
-/* Declarations of routines to handle config-file commands */
-
-extern char *script_alias(cmd_parms *, void *per_dir_config, char *fake,
- char *real);
-
-command_rec cgi_cmds[] = {
-{ "ScriptAlias", script_alias, NULL, RSRC_CONF, TAKE2,
- "a fakename and a realname"},
-{ NULL }
-};
-
-module cgi_module = {
- STANDARD_MODULE_STUFF,
- NULL, /* initializer */
- NULL, /* dir config creator */
- NULL, /* dir merger --- default is to override */
- make_cgi_server_config, /* server config */
- merge_cgi_server_config, /* merge server config */
- cgi_cmds, /* command table */
- cgi_handlers, /* handlers */
- translate_scriptalias, /* filename translation */
- NULL, /* check_user_id */
- NULL, /* check auth */
- NULL, /* check access */
- type_scriptalias, /* type_checker */
- NULL, /* fixups */
- NULL /* logger */
-};
-
-
-request_rec structure.
-This structure describes a particular request which has been made to
-the server, on behalf of a client. In most cases, each connection to
-the client generates only one request_rec structure.- -
request_recrequest_rec contains pointers to a resource pool
-which will be cleared when the server is finished handling the
-request; to structures containing per-server and per-connection
-information, and most importantly, information on the request itself.- -The most important such information is a small set of character -strings describing attributes of the object being requested, including -its URI, filename, content-type and content-encoding (these being filled -in by the translation and type-check handlers which handle the -request, respectively).
-
-Other commonly used data items are tables giving the MIME headers on
-the client's original request, MIME headers to be sent back with the
-response (which modules can add to at will), and environment variables
-for any subprocesses which are spawned off in the course of servicing
-the request. These tables are manipulated using the
-table_get and table_set routines.
-
-Finally, there are pointers to two data structures which, in turn,
-point to per-module configuration structures. Specifically, these
-hold pointers to the data structures which the module has built to
-describe the way it has been configured to operate in a given
-directory (via .htaccess files or
-<Directory> sections), for private data it has
-built in the course of servicing the request (so modules' handlers for
-one phase can pass `notes' to their handlers for other phases). There
-is another such configuration vector in the server_rec
-data structure pointed to by the request_rec, which
-contains per (virtual) server configuration data.
- -Here is an abridged declaration, giving the fields most commonly used:
- -
-struct request_rec {
-
- pool *pool;
- conn_rec *connection;
- server_rec *server;
-
- /* What object is being requested */
-
- char *uri;
- char *filename;
- char *path_info;
- char *args; /* QUERY_ARGS, if any */
- struct stat finfo; /* Set by server core;
- * st_mode set to zero if no such file */
-
- char *content_type;
- char *content_encoding;
-
- /* MIME header environments, in and out. Also, an array containing
- * environment variables to be passed to subprocesses, so people can
- * write modules to add to that environment.
- *
- * The difference between headers_out and err_headers_out is that
- * the latter are printed even on error, and persist across internal
- * redirects (so the headers printed for ErrorDocument handlers will
- * have them).
- */
-
- table *headers_in;
- table *headers_out;
- table *err_headers_out;
- table *subprocess_env;
-
- /* Info about the request itself... */
-
- int header_only; /* HEAD request, as opposed to GET */
- char *protocol; /* Protocol, as given to us, or HTTP/0.9 */
- char *method; /* GET, HEAD, POST, etc. */
- int method_number; /* M_GET, M_POST, etc. */
-
- /* Info for logging */
-
- char *the_request;
- int bytes_sent;
-
- /* A flag which modules can set, to indicate that the data being
- * returned is volatile, and clients should be told not to cache it.
- */
-
- int no_cache;
-
- /* Various other config info which may change with .htaccess files
- * These are config vectors, with one void* pointer for each module
- * (the thing pointed to being the module's business).
- */
-
- void *per_dir_config; /* Options set in config files, etc. */
- void *request_config; /* Notes on *this* request */
-
-};
-
-
-
-request_rec structures are built by reading an HTTP
-request from a client, and filling in the fields. However, there are
-a few exceptions:
-
-*.var file), or a CGI script which returned a
- local `Location:', then the resource which the user requested
- is going to be ultimately located by some URI other than what
- the client originally supplied. In this case, the server does
- an internal redirect, constructing a new
- request_rec for the new URI, and processing it
- almost exactly as if the client had requested the new URI
- directly. - -
ErrorDocument is in scope, the same internal
- redirect machinery comes into play.- -
-
- Such handlers can construct a sub-request, using the
- functions sub_req_lookup_file and
- sub_req_lookup_uri; this constructs a new
- request_rec structure and processes it as you
- would expect, up to but not including the point of actually
- sending a response. (These functions skip over the access
- checks if the sub-request is for a file in the same directory
- as the original request).
-
- (Server-side includes work by building sub-requests and then
- actually invoking the response handler for them, via the
- function run_sub_request).
-
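Here is a hedged sketch of a handler using this machinery, written with the function names as this text gives them; the URI and the FOOTER_TYPE variable are purely illustrative:

    /* Sketch: a fixup-phase handler peeks at another URI via a
     * sub-request, copies out what it needs, then releases the
     * sub-request's resources (see the resource pool notes below).
     */
    int note_footer_type (request_rec *r)
    {
        request_rec *sub = sub_req_lookup_uri ("/footer.html", r);

        if (sub->content_type)
            table_set (r->subprocess_env, "FOOTER_TYPE",
                       pstrdup (r->pool, sub->content_type));

        destroy_sub_request (sub);  /* reclaim the sub-request's pool */
        return DECLINED;            /* let the usual processing continue */
    }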
request_rec, has to return an int to
-indicate what happened. That can either be
-
-REDIRECT, then
-the module should put a Location in the request's
-headers_out, to indicate where the client should be
-redirected to. - -
request_rec structure (or, in the case of access
-checkers, simply by returning the correct error code). However,
-response handlers have to actually send a request back to the client.
-
-They should begin by sending an HTTP response header, using the
-function send_http_header. (You don't have to do
-anything special to skip sending the header for HTTP/0.9 requests; the
-function figures out on its own that it shouldn't do anything). If
-the request is marked header_only, that's all they should
-do; they should return after that, without attempting any further
-output.
-
-Otherwise, they should produce a request body which responds to the
-client as appropriate. The primitives for this are rputc
-and rprintf, for internally generated output, and
-send_fd, to copy the contents of some FILE *
-straight to the client.
-
-At this point, you should more or less understand the following piece
-of code, which is the handler which handles GET requests
-which have no more specific handler; it also shows how conditional
-GETs can be handled, if it's desirable to do so in a
-particular response handler --- set_last_modified checks
-against the If-modified-since value supplied by the
-client, if any, and returns an appropriate code (which will, if
-nonzero, be USE_LOCAL_COPY). No similar considerations apply for
-set_content_length, but it returns an error code for
-symmetry.
- -
-int default_handler (request_rec *r)
-{
- int errstatus;
- FILE *f;
-
- if (r->method_number != M_GET) return DECLINED;
- if (r->finfo.st_mode == 0) return NOT_FOUND;
-
- if ((errstatus = set_content_length (r, r->finfo.st_size))
- || (errstatus = set_last_modified (r, r->finfo.st_mtime)))
- return errstatus;
-
- f = fopen (r->filename, "r");
-
- if (f == NULL) {
- log_reason("file permissions deny server access",
- r->filename, r);
- return FORBIDDEN;
- }
-
- register_timeout ("send", r);
- send_http_header (r);
-
- if (!r->header_only) send_fd (f, r);
- pfclose (r->pool, f);
- return OK;
-}
-
-
-Finally, if all of this is too much of a challenge, there are a few
-ways out of it. First off, as shown above, a response handler which
-has not yet produced any output can simply return an error code, in
-which case the server will automatically produce an error response.
-Secondly, it can punt to some other handler by invoking
-internal_redirect, which is how the internal redirection
-machinery discussed above is invoked. A response handler which has
-internally redirected should always return OK.
-
-(Invoking internal_redirect from handlers which are
-not response handlers will lead to serious confusion).
-
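A bare-bones sketch of such a punt; the target URI is illustrative:

    /* Sketch of a response handler punting to another URI via the
     * internal redirect machinery described above.
     */
    int punt_handler (request_rec *r)
    {
        internal_redirect ("/busy.html", r);
        return OK;    /* a handler which has internally redirected returns OK */
    }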
-
auth_type,
- auth_name, and requires.
- get_basic_auth_pw,
- which sets the connection->user structure field
- automatically, and note_basic_auth_failure, which
- arranges for the proper WWW-Authenticate: header
- to be sent back).
-request_rec structures which are
-threaded through the r->prev and r->next
-pointers. The request_rec which is passed to the logging
-handlers in such cases is the one which was originally built for the
-initial request from the client; note that the bytes_sent field will
-only be correct in the last request in the chain (the one for which a
-response was actually sent).
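A rough sketch of a logging handler using this, as described above (what gets logged is illustrative):

    /* Sketch of a logging handler following the internal-redirect
     * chain to find the request whose bytes_sent is meaningful.
     */
    int my_logger (request_rec *orig)
    {
        request_rec *last = orig;

        while (last->next != NULL)
            last = last->next;          /* the response actually sent */

        /* ... log orig->uri together with last->bytes_sent ... */

        return OK;
    }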
-
-
-
-The way this works is as follows: the memory which is allocated, files
-opened, etc., to deal with a particular request are tied to a
-resource pool which is allocated for the request. The pool
-is a data structure which itself tracks the resources in question.
-
-When the request has been processed, the pool is cleared. At
-that point, all the memory associated with it is released for reuse,
-all files associated with it are closed, and any other clean-up
-functions which are associated with the pool are run. When this is
-over, we can be confident that all the resources tied to the pool have
-been released, and that none of them have leaked.
- -Server restarts, and allocation of memory and resources for per-server -configuration, are handled in a similar way. There is a -configuration pool, which keeps track of resources which were -allocated while reading the server configuration files, and handling -the commands therein (for instance, the memory that was allocated for -per-server module configuration, log files and other files that were -opened, and so forth). When the server restarts, and has to reread -the configuration files, the configuration pool is cleared, and so the -memory and file descriptors which were taken up by reading them the -last time are made available for reuse.
-
-It should be noted that use of the pool machinery isn't generally
-obligatory, except for situations like logging handlers, where you
-really need to register cleanups to make sure that the log file gets
-closed when the server restarts (this is most easily done by using the
-function pfopen, which also
-arranges for the underlying file descriptor to be closed before any
-child processes, such as for CGI scripts, are execed), or
-in case you are using the timeout machinery (which isn't yet even
-documented here). However, there are two benefits to using it:
-resources allocated to a pool never leak (even if you allocate a
-scratch string, and just forget about it); also, for memory
-allocation, palloc is generally faster than
-malloc.
- -We begin here by describing how memory is allocated to pools, and then -discuss how other resources are tracked by the resource pool -machinery. - -
palloc, which takes two arguments, one being a pointer to
-a resource pool structure, and the other being the amount of memory to
-allocate (in chars). Within handlers for handling
-requests, the most common way of getting a resource pool structure is
-by looking at the pool slot of the relevant
-request_rec; hence the repeated appearance of the
-following idiom in module code:
-
-
-int my_handler(request_rec *r)
-{
- struct my_structure *foo;
- ...
-
-    foo = (struct my_structure *) palloc (r->pool, sizeof (struct my_structure));
-}
-
-
-Note that there is no pfree ---
-palloced memory is freed only when the associated
-resource pool is cleared. This means that palloc does not
-have to do as much accounting as malloc(); all it does in
-the typical case is to round up the size, bump a pointer, and do a
-range check.
-
-(It also raises the possibility that heavy use of palloc
-could cause a server process to grow excessively large. There are
-two ways to deal with this, which are dealt with below; briefly, you
-can use malloc, and try to be sure that all of the memory
-gets explicitly freed, or you can allocate a sub-pool of
-the main pool, allocate your memory in the sub-pool, and clear it out
-periodically. The latter technique is discussed in the section on
-sub-pools below, and is used in the directory-indexing code, in order
-to avoid excessive storage allocation when listing directories with
-thousands of files).
-
-
pcalloc has the same
-interface as palloc, but clears out the memory it
-allocates before it returns it. The function pstrdup
-takes a resource pool and a char * as arguments, and
-allocates memory for a copy of the string the pointer points to,
-returning a pointer to the copy. Finally pstrcat is a
-varargs-style function, which takes a pointer to a resource pool, and
-at least two char * arguments, the last of which must be
-NULL. It allocates enough memory to fit copies of each
-of the strings, as a unit; for instance:
-
-- pstrcat (r->pool, "foo", "/", "bar", NULL); -- -returns a pointer to 8 bytes worth of memory, initialized to -
"foo/bar".
-
-pfopen, which
-takes a resource pool and two strings as arguments; the strings are
-the same as the typical arguments to fopen, e.g.,
-
-
- ...
- FILE *f = pfopen (r->pool, r->filename, "r");
-
- if (f == NULL) { ... } else { ... }
-
-
-There is also a popenf routine, which parallels the
-lower-level open system call. Both of these routines
-arrange for the file to be closed when the resource pool in question
-is cleared.
-
-Unlike the case for memory, there are functions to close
-files allocated with pfopen, and popenf,
-namely pfclose and pclosef. (This is
-because, on many systems, the number of files which a single process
-can have open is quite limited). It is important to use these
-functions to close files allocated with pfopen and
-popenf, since to do otherwise could cause fatal errors on
-systems such as Linux, which react badly if the same
-FILE* is closed more than once.
-
-(Using the close functions is not mandatory, since the
-file will eventually be closed regardless, but you should consider it
-in cases where your module is opening, or could open, a lot of files).
-
-
spawn_process.
-
-palloc() and the
-associated primitives may result in undesirably profligate resource
-allocation. You can deal with such a case by creating a
-sub-pool, allocating within the sub-pool rather than the main
-pool, and clearing or destroying the sub-pool, which releases the
-resources which were associated with it. (This really is a
-rare situation; the only case in which it comes up in the standard
-module set is in case of listing directories, and then only with
-very large directories. Unnecessary use of the primitives
-discussed here can hair up your code quite a bit, with very little
-gain).
-
-The primitive for creating a sub-pool is make_sub_pool,
-which takes another pool (the parent pool) as an argument. When the
-main pool is cleared, the sub-pool will be destroyed. The sub-pool
-may also be cleared or destroyed at any time, by calling the functions
-clear_pool and destroy_pool, respectively.
-(The difference is that clear_pool frees resources
-associated with the pool, while destroy_pool also
-deallocates the pool itself. In the former case, you can allocate new
-resources within the pool, and clear it again, and so forth; in the
-latter case, it is simply gone).
-
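A hedged sketch of that pattern, with an entirely illustrative loop body:

    /* Sketch: per-item scratch allocation in a sub-pool, cleared
     * periodically so a long-running loop doesn't bloat the process.
     */
    void process_entries (request_rec *r, char **names, int n)
    {
        pool *scratch = make_sub_pool (r->pool);
        int i;

        for (i = 0; i < n; ++i) {
            char *path = pstrcat (scratch, r->filename, "/", names[i], NULL);

            /* ... examine path ... */

            if (i % 100 == 99)
                clear_pool (scratch);   /* drop the scratch strings so far */
        }

        destroy_pool (scratch);         /* done with the sub-pool entirely */
    }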
-One final note --- sub-requests have their own resource pools, which
-are sub-pools of the resource pool for the main request. The polite
-way to reclaim the resources associated with a sub request which you
-have allocated (using the sub_req_lookup_... functions)
-is destroy_sub_request, which frees the resource pool.
-Before calling this function, be sure to copy anything that you care
-about which might be allocated in the sub-request's resource pool into
-someplace a little less volatile (for instance, the filename in its
-request_rec structure).
-
-(Again, under most circumstances, you shouldn't feel obliged to call
-this function; only 2K of memory or so are allocated for a typical sub
-request, and it will be freed anyway when the main request pool is
-cleared. It is only when you are allocating many, many sub-requests
-for a single main request that you should seriously consider the
-destroy... functions).
-
-
-
-However, just giving the modules command tables is not enough to
-divorce them completely from the server core. The server has to
-remember the commands in order to act on them later. That involves
-maintaining data which is private to the modules, and which can be
-either per-server, or per-directory. Most things are per-directory,
-including in particular access control and authorization information,
-but also information on how to determine file types from suffixes,
-which can be modified by AddType and
-DefaultType directives, and so forth. In general, the
-governing philosophy is that anything which can be made
-configurable by directory should be; per-server information is
-generally used in the standard set of modules for information like
-Aliases and Redirects which come into play
-before the request is tied to a particular place in the underlying
-file system.
-
-Another requirement for emulating the NCSA server is being able to
-handle the per-directory configuration files, generally called
-.htaccess files, though even in the NCSA server they can
-contain directives which have nothing at all to do with access
-control. Accordingly, after URI -> filename translation, but before
-performing any other phase, the server walks down the directory
-hierarchy of the underlying filesystem, following the translated
-pathname, to read any .htaccess files which might be
-present. The information which is read in then has to be
-merged with the applicable information from the server's own
-config files (either from the <Directory> sections
-in access.conf, or from defaults in
-srm.conf, which actually behaves for most purposes almost
-exactly like <Directory />).
-
-Finally, after having served a request which involved reading
-.htaccess files, we need to discard the storage allocated
-for handling them. That is solved the same way it is solved wherever
-else similar problems come up, by tying those structures to the
-per-transaction resource pool.
- -
mod_mime.c,
-which defines the file typing handler which emulates the NCSA server's
-behavior of determining file types from suffixes. What we'll be
-looking at, here, is the code which implements the
-AddType and AddEncoding commands. These
-commands can appear in .htaccess files, so they must be
-handled in the module's private per-directory data, which in fact,
-consists of two separate tables for MIME types and
-encoding information, and is declared as follows:
-
-
-typedef struct {
- table *forced_types; /* Additional AddTyped stuff */
- table *encoding_types; /* Added with AddEncoding... */
-} mime_dir_config;
-
-
-When the server is reading a configuration file, or
-<Directory> section, which includes one of the MIME
-module's commands, it needs to create a mime_dir_config
-structure, so those commands have something to act on. It does this
-by invoking the function it finds in the module's `create per-dir
-config slot', with two arguments: the name of the directory to which
-this configuration information applies (or NULL for
-srm.conf), and a pointer to a resource pool in which the
-allocation should happen.
-
-(If we are reading a .htaccess file, that resource pool
-is the per-request resource pool for the request; otherwise it is a
-resource pool which is used for configuration data, and cleared on
-restarts. Either way, it is important for the structure being created
-to vanish when the pool is cleared, by registering a cleanup on the
-pool if necessary).
-
-For the MIME module, the per-dir config creation function just
-pallocs the structure above, and creates a couple of
-tables to fill it. That looks like this:
-
-
-void *create_mime_dir_config (pool *p, char *dummy)
-{
- mime_dir_config *new =
- (mime_dir_config *) palloc (p, sizeof(mime_dir_config));
-
- new->forced_types = make_table (p, 4);
- new->encoding_types = make_table (p, 4);
-
- return new;
-}
-
-
-Now, suppose we've just read in a .htaccess file. We
-already have the per-directory configuration structure for the next
-directory up in the hierarchy. If the .htaccess file we
-just read in didn't have any AddType or
-AddEncoding commands, its per-directory config structure
-for the MIME module is still valid, and we can just use it.
-Otherwise, we need to merge the two structures somehow. - -To do that, the server invokes the module's per-directory config merge -function, if one is present. That function takes three arguments: -the two structures being merged, and a resource pool in which to -allocate the result. For the MIME module, all that needs to be done -is overlay the tables from the new per-directory config structure with -those from the parent: - -
-void *merge_mime_dir_configs (pool *p, void *parent_dirv, void *subdirv)
-{
- mime_dir_config *parent_dir = (mime_dir_config *)parent_dirv;
- mime_dir_config *subdir = (mime_dir_config *)subdirv;
- mime_dir_config *new =
- (mime_dir_config *)palloc (p, sizeof(mime_dir_config));
-
- new->forced_types = overlay_tables (p, subdir->forced_types,
- parent_dir->forced_types);
- new->encoding_types = overlay_tables (p, subdir->encoding_types,
- parent_dir->encoding_types);
-
- return new;
-}
-
-
-As a note --- if there is no per-directory merge function present, the
-server will just use the subdirectory's configuration info, and ignore
-the parent's. For some modules, that works just fine (e.g., for the
-includes module, whose per-directory configuration information
-consists solely of the state of the XBITHACK), and for
-those modules, you can just not declare one, and leave the
-corresponding structure slot in the module itself NULL.- -
AddType and AddEncoding commands. To find
-commands, the server looks in the module's command table.
-That table contains information on how many arguments the commands
-take, and in what formats, where they are permitted, and so forth. That
-information is sufficient to allow the server to invoke most
-command-handling functions with pre-parsed arguments. Without further
-ado, let's look at the AddType command handler, which
-looks like this (the AddEncoding command looks basically
-the same, and won't be shown here):
-
-
-char *add_type(cmd_parms *cmd, mime_dir_config *m, char *ct, char *ext)
-{
- if (*ext == '.') ++ext;
- table_set (m->forced_types, ext, ct);
- return NULL;
-}
-
-
-This command handler is unusually simple. As you can see, it takes
-four arguments: the two pre-parsed command arguments, the
-per-directory configuration structure for the module in question,
-and a pointer to a cmd_parms structure.
-That structure contains a bunch of arguments which are frequently of
-use to some, but not all, commands, including a resource pool (from
-which memory can be allocated, and to which cleanups should be tied),
-and the (virtual) server being configured, from which the module's
-per-server configuration data can be obtained if required.
-
-Another way in which this particular command handler is unusually
-simple is that there are no error conditions which it can encounter.
-If there were, it could return an error message instead of
-NULL; this causes an error to be printed out on the
-server's stderr, followed by a quick exit, if it is in
-the main config files; for a .htaccess file, the syntax
-error is logged in the server error log (along with an indication of
-where it came from), and the request is bounced with a server error
-response (HTTP error status, code 500).
- -The MIME module's command table has entries for these commands, which -look like this: - -
-command_rec mime_cmds[] = {
-{ "AddType", add_type, NULL, OR_FILEINFO, TAKE2,
- "a mime type followed by a file extension" },
-{ "AddEncoding", add_encoding, NULL, OR_FILEINFO, TAKE2,
- "an encoding (e.g., gzip), followed by a file extension" },
-{ NULL }
-};
-
-
-The entries in these tables are:
-
-(void *) pointer, which is passed in the
- cmd_parms structure to the command handler ---
- this is useful in case many similar commands are handled by the
- same function.
- AllowOverride
- option, and an additional mask bit, RSRC_CONF,
- indicating that the command may appear in the server's own
- config files, but not in any .htaccess
- file.
- TAKE2 indicates two pre-parsed arguments. Other
- options are TAKE1, which indicates one pre-parsed
- argument, FLAG, which indicates that the argument
- should be On or Off, and is passed in
- as a boolean flag, RAW_ARGS, which causes the
- server to give the command the raw, unparsed arguments
- (everything but the command name itself). There is also
- ITERATE, which means that the handler looks the
- same as TAKE1, but that if multiple arguments are
- present, it should be called multiple times, and finally
- ITERATE2, which indicates that the command handler
- looks like a TAKE2, but if more arguments are
- present, then it should be called multiple times, holding the
- first argument constant.
- NULL).
-request_rec's per-directory configuration vector by using
-the get_module_config function.
-
-
-int find_ct(request_rec *r)
-{
- int i;
- char *fn = pstrdup (r->pool, r->filename);
- mime_dir_config *conf = (mime_dir_config *)
- get_module_config(r->per_dir_config, &mime_module);
- char *type;
-
- if (S_ISDIR(r->finfo.st_mode)) {
- r->content_type = DIR_MAGIC_TYPE;
- return OK;
- }
-
- if((i=rind(fn,'.')) < 0) return DECLINED;
- ++i;
-
- if ((type = table_get (conf->encoding_types, &fn[i])))
- {
- r->content_encoding = type;
-
- /* go back to previous extension to try to use it as a type */
-
- fn[i-1] = '\0';
- if((i=rind(fn,'.')) < 0) return OK;
- ++i;
- }
-
- if ((type = table_get (conf->forced_types, &fn[i])))
- {
- r->content_type = type;
- }
-
- return OK;
-}
-
-
-
-
-
-The only substantial difference is that when a command needs to
-configure the per-server private module data, it needs to go to the
-cmd_parms data to get at it. Here's an example, from the
-alias module, which also indicates how a syntax error can be returned
-(note that the per-directory configuration argument to the command
-handler is declared as a dummy, since the module doesn't actually have
-per-directory config data):
-
-
-char *add_redirect(cmd_parms *cmd, void *dummy, char *f, char *url)
-{
- server_rec *s = cmd->server;
- alias_server_conf *conf = (alias_server_conf *)
- get_module_config(s->module_config,&alias_module);
- alias_entry *new = push_array (conf->redirects);
-
- if (!is_url (url)) return "Redirect to non-URL";
-
- new->fake = f; new->real = url;
- return NULL;
-}
-
-
-
-
diff --git a/docs/manual/handler.html.en b/docs/manual/handler.html.en
deleted file mode 100644
index ea0f61c7e90..00000000000
--- a/docs/manual/handler.html.en
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
-
-
-A "handler" is an internal Apache representation of the action to be
-performed when a file is called. Generally, files have implicit
-handlers, based on the file type. Normally, all files are simply
-served by the server, but certain file types are "handled"
-separately. For example, you may use a type of
-"application/x-httpd-cgi" to invoke CGI scripts.
-
-Apache 1.1 adds the ability to use handlers
-explicitly. Based either on filename extensions or on location, these
-handlers are unrelated to file type. This is advantageous both because
-it is a more elegant solution and because it allows for both a type
-and a handler to be associated with a file.
-
-Handlers can either be built into the server or into a module, or
-they can be added with the Action directive. The built-in
-handlers in the standard distribution are as follows:
- -- -
AddHandler maps the filename extension extension to the
-handler handler-name. For example, to activate CGI scripts
-with the file extension ".cgi", you might use:
-
- AddHandler cgi-script cgi -- -
Once that has been put into your srm.conf or httpd.conf file, any
-file ending with ".cgi" will be treated as a CGI
-program.
When placed into an .htaccess file or a
-<Directory> or <Location> section,
-this directive forces all matching files to be parsed through the
-handler given by handler-name. For example, if you had a
-directory you wanted to be parsed entirely as imagemap rule files,
-regardless of extension, you might put the following into an
-.htaccess file in that directory:
-
- SetHandler imap-file --
Another example: if you wanted to have the server display a status
-report whenever a URL of http://servername/status was
-called, you might put the following into access.conf:
-
- <Location /status> - SetHandler server-status - </Location> -- -
In order to implement the handler features, an addition has been
-made to the Apache API that you may wish to
-make use of. Specifically, a new field has been added to the
-request_rec structure:
- char *handler --
If you wish to have your module engage a handler, you need only to
-set r->handler to the name of the handler at any time
-prior to the invoke_handler stage of the
-request. Handlers are implemented as they were before, albeit using
-the handler name instead of a content type. While it is not
-necessary, the naming convention for handlers is to use a
-dash-separated word, with no slashes, so as to not invade the media
-type name-space.
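As a hedged sketch (assuming the usual module API headers; the URI and handler name simply mirror the server-status example above), a module running in a phase before the response handler might do:

    /* Sketch of routing one URI to a named handler by setting
     * r->handler, as described above.  Illustrative only.
     */
    int route_status (request_rec *r)
    {
        if (strcmp (r->uri, "/status") == 0) {
            r->handler = "server-status";   /* picked up at invoke_handler time */
            return OK;
        }
        return DECLINED;
    }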
modules.c') which simply has a list of them.
--It is also necessary to choose the correct options for your platform. - -To do this: -
Configuration.tmpl" to
-"Configuration" and then edit
-"Configuration". This contains the list and settings of various
-"Rules" and an additional section at the bottom which
-lists the modules which have been compiled in, and also names the
-files containing them. You will need to:
-EXTRA_CFLAGS|LIBS|LFLAGS|INCLUDES if
- you feel so inclined.
-
-- Note that DBM auth has to be explicitly configured in, if you want - it --- just uncomment the corresponding line. - - -
Configure" script:
-
-This generates new versions of the Makefile and of modules.c. (If
-you want to maintain multiple configurations, you can say, e.g.,
-
-  % Configure
-  Using 'Configuration' as config file
-  + configured for platform
-  + setting C compiler to *
-  + setting C compiler optimization-level to *
-  %
-
-  % Configure -file Configuration.ai
-  Using alternate config file Configuration.ai
-  + configured for platform
-  + setting C compiler to *
-  + setting C compiler optimization-level to *
-  %
-
-*: Depending on Configuration and your system, Configure
-may not print these lines. That's OK.
-
make".
-
-The modules we place in the Apache distribution are the ones we have
-tested and are used regularly by various members of the Apache
-development group. Additional modules contributed by members or third
-parties with specific needs or functions are available at
-
src/ directory. A binary distribution of Apache will supply this
-file.
-
-The next step is to edit the configuration files for the server. In
-the subdirectory called `conf' you should find distribution versions
-of the three configuration files: srm.conf-dist,
-access.conf-dist and httpd.conf-dist. Copy them to
-srm.conf, access.conf and httpd.conf
-respectively.
-
-First edit httpd.conf. This sets up general attributes about the
-server; the port number, the user it runs as, etc. Next edit the
-srm.conf file; this sets up the root of the document tree,
-special functions like server-parsed HTML or internal imagemap parsing, etc.
-Finally, edit the access.conf file to at least set the base cases
-of access.
-
-Finally, make a call to httpd, with a -f to the full path to the -httpd.conf file. I.e., the common case: -
- /usr/local/etc/apache/src/httpd -f /usr/local/etc/apache/conf/httpd.conf
-
-The server should now be running.
-
-By default the srm.conf and access.conf files are
-located by name; to specifically call them by other names, use the
-AccessConfig and
-ResourceConfig directives in
-httpd.conf.
-
-
-
-
diff --git a/docs/manual/invoking.html.en b/docs/manual/invoking.html.en
deleted file mode 100644
index 73c570016aa..00000000000
--- a/docs/manual/invoking.html.en
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
-
httpd program is usually run as a daemon which executes
-continuously, handling requests. It is possible to invoke Apache by
-the Internet daemon inetd each time a connection to the HTTP
-service is made (use the
-ServerType directive)
-but this is not recommended.
-
--d serverroot
-/usr/local/etc/httpd.
-
--f config
-/, then it is taken to be a
-path relative to the ServerRoot. The
-default is conf/httpd.conf.
-
--X
--v
--h
--l
--?
--d command line flag.
-
-Conventionally, the files are:
-conf/httpd.conf
--f command line flag.
-
-conf/srm.conf
-conf/access.conf
-
-The server also reads a file containing MIME document types; the filename
-is set by the TypesConfig directive,
-and is conf/mime.types by default.
-
-
logs/httpd.pid. This filename can be changed with the
-PidFile directive. The process-id is for
-use by the administrator in restarting and terminating the daemon;
-a HUP signal causes the daemon to re-read its configuration files and
-a TERM signal causes it to die gracefully.
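For example, assuming the default PidFile location relative to the ServerRoot, a restart might be done with:

    kill -HUP `cat logs/httpd.pid`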
-
-If the process dies (or is killed) abnormally, then it will be necessary to
-kill the child httpd processes.
-
logs/error_log
-by default. The filename can be set using the
-ErrorLog directive; different error logs can
-be set for different virtual hosts.
-
-logs/access_log by default. The filename can be set using a
-TransferLog directive; different
-transfer logs can be set for different virtual
-hosts.
-
-
-
-
diff --git a/docs/manual/platform/perf-bsd44.html b/docs/manual/platform/perf-bsd44.html
deleted file mode 100644
index cc8eca733c9..00000000000
--- a/docs/manual/platform/perf-bsd44.html
+++ /dev/null
@@ -1,213 +0,0 @@
-
-
-- -Edit the following two files: -
/usr/include/sys/socket.h
- /usr/src/sys/sys/socket.h
-In each file, look for the following:
-- /* - * Maximum queue length specifiable by listen. - */ - #define SOMAXCONN 5 -- -Just change the "5" to whatever appears to work. I bumped the two -machines I was having problems with up to 32 and haven't noticed the -problem since. - -
- -After the edit, recompile the kernel and recompile the Apache server -then reboot. - -
- -FreeBSD 2.1 seems to be perfectly happy, with SOMAXCONN -set to 32 already. - -
-
-
-Addendum for very heavily loaded BSD servers
-
-from Chuck Murcko <chuck@telebase.com>
-
-
- -If you're running a really busy BSD Apache server, the following are useful -things to do if the system is acting sluggish:
- -
- -
-maxusers 256 -- -Maxusers drives a lot of other kernel parameters: - -
-# Network options. NMBCLUSTERS defines the number of mbuf clusters and -# defaults to 256. This machine is a server that handles lots of traffic, -# so we crank that value. -options SOMAXCONN=256 # max pending connects -options NMBCLUSTERS=4096 # mbuf clusters at 4096 - -# -# Misc. options -# -options CHILD_MAX=512 # maximum number of child processes -options OPEN_MAX=512 # maximum fds (breaks RPC svcs) -- -SOMAXCONN is not derived from maxusers, so you'll always need to increase -that yourself. We used a value guaranteed to be larger than Apache's -default for the listen() of 128, currently. - -
- -In many cases, NMBCLUSTERS must be set much larger than would appear -necessary at first glance. The reason for this is that if the browser -disconnects in mid-transfer, the socket fd associated with that particular -connection ends up in the TIME_WAIT state for several minutes, during -which time its mbufs are not yet freed. - -
- -Some more info on mbuf clusters (from sys/mbuf.h): -
-/* - * Mbufs are of a single size, MSIZE (machine/machparam.h), which - * includes overhead. An mbuf may add a single "mbuf cluster" of size - * MCLBYTES (also in machine/machparam.h), which has no additional overhead - * and is used instead of the internal data area; this is done when - * at least MINCLSIZE of data must be stored. - */ -- -
- -CHILD_MAX and OPEN_MAX are set to allow up to 512 child processes (different -than the maximum value for processes per user ID) and file descriptors. -These values may change for your particular configuration (a higher OPEN_MAX -value if you've got modules or CGI scripts opening lots of connections or -files). If you've got a lot of other activity besides httpd on the same -machine, you'll have to set NPROC higher still. In this example, the NPROC -value derived from maxusers proved sufficient for our load. - -
- -Caveats - -
- -Be aware that your system may not boot with a kernel that is configured -to use more resources than you have available system RAM. ALWAYS -have a known bootable kernel available when tuning your system this way, -and use the system tools beforehand to learn if you need to buy more -memory before tuning. - -
- -RPC services will fail when the value of OPEN_MAX is larger than 256. -This is a function of the original implementations of the RPC library, -which used a byte value for holding file descriptors. BSDI has partially -addressed this limit in its 2.1 release, but a real fix may well await -the redesign of RPC itself. - -
- -Finally, there's the hard limit of child processes configured in Apache. - -
- -For versions of Apache later than 1.0.5 you'll need to change the -definition for HARD_SERVER_LIMIT in httpd.h and recompile -if you need to run more than the default 150 instances of httpd. - -
- -From conf/httpd.conf-dist: - -
-# Limit on total number of servers running, i.e., limit on the number -# of clients who can simultaneously connect --- if this limit is ever -# reached, clients will be LOCKED OUT, so it should NOT BE SET TOO LOW. -# It is intended mainly as a brake to keep a runaway server from taking -# Unix with it as it spirals down... - -MaxClients 150 -- -Know what you're doing if you bump this value up, and make sure you've -done your system monitoring, RAM expansion, and kernel tuning beforehand. -Then you're ready to service some serious hits! - -
- -Thanks to Tony Sanders and Chris Torek at BSDI for their -helpful suggestions and information. - -
- Patch ID OSF350-195 for V3.2C- Patch IDs for V3.2E and V3.2F should be available soon. - There is no known reason why the Patch ID OSF360-350195 - won't work on these releases, but such use is not officially - supported by Digital. This patch kit will not be needed for - V3.2G when it is released. - - -
- Patch ID OSF360-350195 for V3.2D -
-From mogul@pa.dec.com (Jeffrey Mogul) -Organization DEC Western Research -Date 30 May 1996 00:50:25 GMT -Newsgroups comp.unix.osf.osf1 -Message-ID <4oirch$bc8@usenet.pa.dec.com> -Subject Re: Web Site Performance -References 1 - - - -In article <skoogDs54BH.9pF@netcom.com> skoog@netcom.com (Jim Skoog) writes: ->Where are the performance bottlenecks for Alpha AXP running the ->Netscape Commerce Server 1.12 with high volume internet traffic? ->We are evaluating network performance for a variety of Alpha AXP ->runing DEC UNIX 3.2C, which run DEC's seal firewall and behind ->that Alpha 1000 and 2100 webservers. - -Our experience (running such Web servers as altavista.digital.com -and www.digital.com) is that there is one important kernel tuning -knob to adjust in order to get good performance on V3.2C. You -need to patch the kernel global variable "somaxconn" (use dbx -k -to do this) from its default value of 8 to something much larger. - -How much larger? Well, no larger than 32767 (decimal). And -probably no less than about 2048, if you have a really high volume -(millions of hits per day), like AltaVista does. - -This change allows the system to maintain more than 8 TCP -connections in the SYN_RCVD state for the HTTP server. (You -can use "netstat -An |grep SYN_RCVD" to see how many such -connections exist at any given instant). - -If you don't make this change, you might find that as the load gets -high, some connection attempts take a very long time. And if a lot -of your clients disconnect from the Internet during the process of -TCP connection establishment (this happens a lot with dialup -users), these "embryonic" connections might tie up your somaxconn -quota of SYN_RCVD-state connections. Until the kernel times out -these embryonic connections, no other connections will be accepted, -and it will appear as if the server has died. - -The default value for somaxconn in Digital UNIX V4.0 will be quite -a bit larger than it has been in previous versions (we inherited -this default from 4.3BSD). - -Digital UNIX V4.0 includes some other performance-related changes -that significantly improve its maximum HTTP connection rate. However, -we've been using V3.2C systems to front-end for altavista.digital.com -with no obvious performance bottlenecks at the millions-of-hits-per-day -level. - -We have some Webstone performance results available at - http://www.digital.com/info/alphaserver/news/webff.html -I'm not sure if these were done using V4.0 or an earlier version -of Digital UNIX, although I suspect they were done using a test -version of V4.0. - --Jeff - -- - -
- ----------------------------------------------------------------------------- - -From mogul@pa.dec.com (Jeffrey Mogul) -Organization DEC Western Research -Date 31 May 1996 21:01:01 GMT -Newsgroups comp.unix.osf.osf1 -Message-ID <4onmmd$mmd@usenet.pa.dec.com> -Subject Digital UNIX V3.2C Internet tuning patch info - ----------------------------------------------------------------------------- - -Something that probably few people are aware of is that Digital -has a patch kit available for Digital UNIX V3.2C that may improve -Internet performance, especially for busy web servers. - -This patch kit is one way to increase the value of somaxconn, -which I discussed in a message here a day or two ago. - -I've included in this message the revised README file for this -patch kit below. Note that the original README file in the patch -kit itself may be an earlier version; I'm told that the version -below is the right one. - -Sorry, this patch kit is NOT available for other versions of Digital -UNIX. Most (but not quite all) of these changes also made it into V4.0, -so the description of the various tuning parameters in this README -file might be useful to people running V4.0 systems. - -This patch kit does not appear to be available (yet?) from - http://www.service.digital.com/html/patch_service.html -so I guess you'll have to call Digital's Customer Support to get it. - --Jeff - -DESCRIPTION: Digital UNIX Network tuning patch - - Patch ID: OSF350-146 - - SUPERSEDED PATCHES: OSF350-151, OSF350-158 - - This set of files improves the performance of the network - subsystem on a system being used as a web server. There are - additional tunable parameters included here, to be used - cautiously by an informed system administrator. - -TUNING - - To tune the web server, the number of simultaneous socket - connection requests are limited by: - - somaxconn Sets the maximum number of pending requests - allowed to wait on a listening socket. The - default value in Digital UNIX V3.2 is 8. - This patch kit increases the default to 1024, - which matches the value in Digital UNIX V4.0. - - sominconn Sets the minimum number of pending connections - allowed on a listening socket. When a user - process calls listen with a backlog less - than sominconn, the backlog will be set to - sominconn. sominconn overrides somaxconn. - The default value is 1. - - The effectiveness of tuning these parameters can be monitored by - the sobacklog variables available in the kernel: - - sobacklog_hiwat Tracks the maximum pending requests to any - socket. The initial value is 0. - - sobacklog_drops Tracks the number of drops exceeding the - socket set backlog limit. The initial - value is 0. - - somaxconn_drops Tracks the number of drops exceeding the - somaxconn limit. When sominconn is larger - than somaxconn, tracks the number of drops - exceeding sominconn. The initial value is 0. - - TCP timer parameters also affect performance. Tuning the following - require some knowledge of the characteristics of the network. - - tcp_msl Sets the tcp maximum segment lifetime. - This is the maximum lifetime in half - seconds that a packet can be in transit - on the network. This value, when doubled, - is the length of time a connection remains - in the TIME_WAIT state after a incoming - close request is processed. The unit is - specified in 1/2 seconds, the initial - value is 60. - - tcp_rexmit_interval_min - Sets the minimum TCP retransmit interval. 
- For some WAN networks the default value may - be too short, causing unnecessary duplicate - packets to be sent. The unit is specified - in 1/2 seconds, the initial value is 1. - - tcp_keepinit This is the amount of time a partially - established connection will sit on the listen - queue before timing out (e.g. if a client - sends a SYN but never answers our SYN/ACK). - Partially established connections tie up slots - on the listen queue. If the queue starts to - fill with connections in SYN_RCVD state, - tcp_keepinit can be decreased to make those - partial connects time out sooner. This should - be used with caution, since there might be - legitimate clients that are taking a while - to respond to SYN/ACK. The unit is specified - in 1/2 seconds, the default value is 150 - (ie. 75 seconds). - - The hashlist size for the TCP inpcb lookup table is regulated by: - - tcbhashsize The number of hash buckets used for the - TCP connection table used in the kernel. - The initial value is 32. For best results, - should be specified as a power of 2. For - busy Web servers, set this to 2048 or more. - - The hashlist size for the interface alias table is regulated by: - - inifaddr_hsize The number of hash buckets used for the - interface alias table used in the kernel. - The initial value is 32. For best results, - should be specified as a power of 2. - - ipport_userreserved The maximum number of concurrent non-reserved, - dynamically allocated ports. Default range - is 1025-5000. The maximum value is 65535. - This limits the numer of times you can - simultaneously telnet or ftp out to connect - to other systems. - - tcpnodelack Don't delay acknowledging TCP data; this - can sometimes improve performance of locally - run CAD packages. Default is value is 0, - the enabled value is 1. - - Digital UNIX version: - - V3.2C -Feature V3.2C patch V4.0 - ======= ===== ===== ==== -somaxconn X X X -sominconn - X X -sobacklog_hiwat - X - -sobacklog_drops - X - -somaxconn_drops - X - -tcpnodelack X X X -tcp_keepidle X X X -tcp_keepintvl X X X -tcp_keepcnt - X X -tcp_keepinit - X X -TCP keepalive per-socket - - X -tcp_msl - X - -tcp_rexmit_interval_min - X - -TCP inpcb hashing - X X -tcbhashsize - X X -interface alias hashing - X X -inifaddr_hsize - X X -ipport_userreserved - X - -sysconfig -q inet - - X -sysconfig -q socket - - X - -