From: (no author) <(no author)@unknown>
Date: Sat, 25 Apr 1998 18:48:43 +0000 (+0000)
Subject: This commit was manufactured by cvs2svn to create tag
X-Git-Tag: djg_nspr_split^0
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=1abd0dbc00d66c87659345d387084e6304c415f7;p=thirdparty%2Fapache%2Fhttpd.git

This commit was manufactured by cvs2svn to create tag 'djg_nspr_split'.

git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/tags/djg_nspr_split@81040 13f79535-47bb-0310-9956-ffa450edef68
---

diff --git a/docs/docroot/apache_pb.gif b/docs/docroot/apache_pb.gif
deleted file mode 100644
index 3a1c139fc42..00000000000
Binary files a/docs/docroot/apache_pb.gif and /dev/null differ
diff --git a/docs/manual/bind.html.en b/docs/manual/bind.html.en
deleted file mode 100644
index e3023bfe57c..00000000000
--- a/docs/manual/bind.html.en
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
-- -There are two directives used to restrict or specify which addresses -and ports Apache listens to. - -
BindAddress *- -Makes the server listen to just the specified address. If the argument -is *, the server listens to all addresses. The port listened to -is set with the Port directive. Only one BindAddress -should be used. - -
none- -Listen can be used instead of BindAddress and -Port. It tells the server to accept incoming requests on the -specified port or address-and-port combination. If the first format is -used, with a port number only, the server listens to the given port on -all interfaces, instead of the port given by the Port -directive. If an IP address is given as well as a port, the server -will listen on the given port and interface.
Multiple Listen -directives may be used to specify a number of addresses and ports to -listen to. The server will respond to requests from any of the listed -addresses and ports.
- -For example, to make the server accept connections on both port -80 and port 8000, use: -
- Listen 80 - Listen 8000 -- -To make the server accept connections on two specified -interfaces and port numbers, use -
- Listen 192.170.2.1:80 - Listen 192.170.2.5:8000 -- -
As implemented in Apache 1.1.1 and earlier versions, the method -Apache used to create PATH_INFO in the CGI environment was -counterintuitive, and could result in crashes in certain cases. In -Apache 1.2 and beyond, this behavior has changed. Although this -results in some compatibility problems with certain legacy CGI -applications, the Apache 1.2 behavior is still compatible with the -CGI/1.1 specification, and CGI scripts can be easily modified (see below). - -
Apache 1.1.1 and earlier implemented the PATH_INFO and SCRIPT_NAME -environment variables by looking at the filename, not the URL. While -this resulted in the correct values in many cases, when the filesystem -path was overloaded to contain path information, it could result in -errant behavior. For example, if the following appeared in a config -file: -
- Alias /cgi-ralph /usr/local/httpd/cgi-bin/user.cgi/ralph --
In this case, user.cgi is the CGI script, and "/ralph"
-is information to be passed on to the CGI. If this configuration was in
-place, and a request came for "/cgi-ralph/script/", the
-code would set PATH_INFO to "/ralph/script", and
-SCRIPT_NAME to "/cgi-". Obviously, the latter is
-incorrect. In certain cases, this could even cause the server to
-crash.
Apache 1.2 and later now determine SCRIPT_NAME and PATH_INFO by
-looking directly at the URL, and determining how much of the URL is
-client-modifiable, and setting PATH_INFO to it. To use the above
-example, PATH_INFO would be set to "/script", and
-SCRIPT_NAME to "/cgi-ralph". This makes sense and results
-in no server behavior problems. It also permits the script to be
-guaranteed that
-"http://$SERVER_NAME:$SERVER_PORT$SCRIPT_NAME$PATH_INFO"
-will always be an accessible URL that points to the current script,
-something which was not necessarily true with previous versions of
-Apache.
-
-
However, the "/ralph"
-information from the Alias directive is lost. This is
-unfortunate, but we feel that using the filesystem to pass along this
-sort of information is not a recommended method, and a script making
-use of it "deserves" not to work. Apache 1.2b3 and later, however, do
-provide a workaround.
-
-
It may be necessary for a script that was designed for earlier -versions of Apache or other servers to need the information that the -old PATH_INFO variable provided. For this purpose, Apache 1.2 (1.2b3 -and later) sets an additional variable, FILEPATH_INFO. This -environment variable contains the value that PATH_INFO would have had -with Apache 1.1.1.
- -A script that wishes to work with both Apache 1.2 and earlier -versions can simply test for the existence of FILEPATH_INFO, and use -it if available. Otherwise, it can use PATH_INFO. For example, in -Perl, one might use: -
- $path_info = $ENV{'FILEPATH_INFO'} || $ENV{'PATH_INFO'};
-
-
-By doing this, a script can work with all servers supporting the -CGI/1.1 specification, including all versions of Apache.
-
-
diff --git a/docs/manual/content-negotiation.html.en b/docs/manual/content-negotiation.html.en
deleted file mode 100644
index 98e01906772..00000000000
--- a/docs/manual/content-negotiation.html.en
+++ /dev/null
@@ -1,525 +0,0 @@
-
-Apache's support for content negotiation has been updated to meet the
-HTTP/1.1 specification. It can choose the best representation of a
-resource based on the browser-supplied preferences for media type,
-languages, character set and encoding. It also implements a
-couple of features to give more intelligent handling of requests from
-browsers which send incomplete negotiation information.
- -Content negotiation is provided by the -mod_negotiation module, -which is compiled in by default. - -
-A resource may be available in several different representations. For -example, it might be available in different languages or different -media types, or a combination. One way of selecting the most -appropriate choice is to give the user an index page, and let them -select. However it is often possible for the server to choose -automatically. This works because browsers can send as part of each -request information about what representations they prefer. For -example, a browser could indicate that it would like to see -information in French, if possible, else English will do. Browsers -indicate their preferences by headers in the request. To request only -French representations, the browser would send - -
- Accept-Language: fr -- -
-Note that this preference will only be applied when there is a choice -of representations and they vary by language. -
- -As an example of a more complex request, this browser has been -configured to accept French and English, but prefer French, and to -accept various media types, preferring HTML over plain text or other -text types, and preferring GIF or JPEG over other media types, but also -allowing any other media type as a last resort: - -
- Accept-Language: fr; q=1.0, en; q=0.5 - Accept: text/html; q=1.0, text/*; q=0.8, image/gif; q=0.6, - image/jpeg; q=0.6, image/*; q=0.5, */*; q=0.1 -- -Apache 1.2 supports 'server driven' content negotiation, as defined in -the HTTP/1.1 specification. It fully supports the Accept, -Accept-Language, Accept-Charset and Accept-Encoding request headers. -
- -The terms used in content negotiation are: a resource is an -item which can be requested of a server, which might be selected as -the result of a content negotiation algorithm. If a resource is -available in several formats, these are called representations -or variants. The ways in which the variants for a particular -resource vary are called the dimensions of negotiation. - -
-In order to negotiate a resource, the server needs to be given
-information about each of the variants. This is done in one of two
-ways:
-
-    using a type map (i.e., a *.var file) which
-    names the files containing the variants explicitly, or
-    using a MultiViews search (enabled by the MultiViews
-    Option), where the server does an implicit filename pattern
-    match and chooses from among the results.
-
-A type map is a document which is associated with the handler
-named type-map (or, for backwards-compatibility with
-older Apache configurations, the mime type
-application/x-type-map). Note that to use this feature,
-you've got to have a SetHandler some place which defines a
-file suffix as type-map; this is best done with a
-
- - AddHandler type-map var - --in
srm.conf. See comments in the sample config files for
-details. - -Type map files have an entry for each available variant; these entries -consist of contiguous RFC822-format header lines. Entries for -different variants are separated by blank lines. Blank lines are -illegal within an entry. It is conventional to begin a map file with -an entry for the combined entity as a whole (although this -is not required, and if present will be ignored). An example -map file is: -
- - URI: foo - - URI: foo.en.html - Content-type: text/html - Content-language: en - - URI: foo.fr.de.html - Content-type: text/html; charset=iso-8859-2 - Content-language: fr, de -- -If the variants have different source qualities, that may be indicated -by the "qs" parameter to the media type, as in this picture (available -as jpeg, gif, or ASCII-art): -
- URI: foo - - URI: foo.jpeg - Content-type: image/jpeg; qs=0.8 - - URI: foo.gif - Content-type: image/gif; qs=0.5 - - URI: foo.txt - Content-type: text/plain; qs=0.01 - --
- -qs values can vary between 0.000 and 1.000. Note that any variant with -a qs value of 0.000 will never be chosen. Variants with no 'qs' -parameter value are given a qs factor of 1.0.
- -The full list of headers recognized is: - -
URI:
- Content-type:
- image/gif, text/plain, or
- text/html; level=3.
- Content-language:
- en for English,
- ko for Korean, etc.).
- Content-encoding:
- x-compress, or x-gzip, as appropriate.
- Content-length:
-
-This is a per-directory option, meaning it can be set with an
-Options directive within a <Directory>,
-<Location> or <Files>
-section in access.conf, or (if AllowOverride
-is properly set) in .htaccess files. Note that
-Options All does not set MultiViews; you
-have to ask for it by name. (Fixing this is a one-line change to
-http_core.h).
-
-
-
-The effect of MultiViews is as follows: if the server
-receives a request for /some/dir/foo, if
-/some/dir has MultiViews enabled, and
-/some/dir/foo does not exist, then the server reads the
-directory looking for files named foo.*, and effectively fakes up a
-type map which names all those files, assigning them the same media
-types and content-encodings it would have if the client had asked for
-one of them by name. It then chooses the best match to the client's
-requirements, and forwards them along.
-
-
-
-This applies to searches for the file named by the
-DirectoryIndex directive, if the server is trying to
-index a directory; if the configuration files specify
-
- - DirectoryIndex index - -then the server will arbitrate between
index.html
-and index.html3 if both are present. If neither is
-present, and index.cgi is there, the server will run it.
-
-- -If one of the files found when reading the directive is a CGI script, -it's not obvious what should happen. The code gives that case -special treatment --- if the request was a POST, or a GET with -QUERY_ARGS or PATH_INFO, the script is given an extremely high quality -rating, and generally invoked; otherwise it is given an extremely low -quality rating, which generally causes one of the other views (if any) -to be retrieved. - -
- -In some circumstances, Apache can 'fiddle' the quality factor of a -particular dimension to achieve a better result. The ways Apache can -fiddle quality factors is explained in more detail below. - -
| Dimension - | Notes - |
|---|---|
| Media Type - | Browser indicates preferences on Accept: header. Each item -can have an associated quality factor. Variant description can also -have a quality factor. - |
| Language - | Browser indicates preferences on Accept-Language: header. Each -item -can have a quality factor. Variants can be associated with none, one -or more languages. - |
| Encoding - | Browser indicates preference with Accept-Encoding: header. - |
| Charset - | Browser indicates preference with Accept-Charset: header. Variants -can indicate a charset as a parameter of the media type. - |
-Apache uses an algorithm to select the 'best' variant (if any) to -return to the browser. This algorithm is not configurable. It operates -like this: - -
LanguagePriority directive (if present),
- else the order of languages on the Accept-Language header.
-
--Apache sometimes changes the quality values from what would be -expected by a strict interpretation of the algorithm above. This is to -get a better result from the algorithm for browsers which do not send -full or accurate information. Some of the most popular browsers send -Accept header information which would otherwise result in the -selection of the wrong variant in many cases. If a browser -sends full and correct information these fiddles will not -be applied. -
- -
-The Accept: request header indicates preferences for media types. It -can also include 'wildcard' media types, such as "image/*" or "*/*" -where the * matches any string. So a request including: -
- Accept: image/*, */* -- -would indicate that any type starting "image/" is acceptable, -as is any other type (so the first "image/*" is redundant). Some -browsers routinely send wildcards in addition to explicit types they -can handle. For example: -
- Accept: text/html, text/plain, image/gif, image/jpeg, */*
-
-The intention of this is to indicate that the explicitly
-listed types are preferred, but if a different representation is
-available, that is ok too. However, under the basic algorithm, as given
-above, the */* wildcard has exactly equal preference to all the other
-types, so they are not being preferred. The browser should really have
-sent a request with a lower quality (preference) value for */*, such
-as:
-
- Accept: text/html, text/plain, image/gif, image/jpeg, */*; q=0.01 -- -The explicit types have no quality factor, so they default to a -preference of 1.0 (the highest). The wildcard */* is given -a low preference of 0.01, so other types will only be returned if -no variant matches an explicitly listed type. -
-
-If the Accept: header contains no q factors at all, Apache sets
-the q value of "*/*", if present, to 0.01 to emulate the desired
-behavior. It also sets the q value of wildcards of the format
-"type/*" to 0.02 (so these are preferred over matches against
-"*/*"). If any media type on the Accept: header contains a q factor,
-these special values are not applied, so requests from browsers
-which send the correct information to start with work as expected.
-
-If some of the variants for a particular resource have a language -attribute, and some do not, those variants with no language -are given a very low language quality factor of 0.001.
- -The reason for setting this language quality factor for -variant with no language to a very low value is to allow -for a default variant which can be supplied if none of the -other variants match the browser's language preferences. - -For example, consider the situation with three variants: - -
-The meaning of a variant with no language is that it is -always acceptable to the browser. If the request Accept-Language -header includes either en or fr (or both) one of foo.en.html -or foo.fr.html will be returned. If the browser does not list -either en or fr as acceptable, foo.html will be returned instead. - -
-If you are using language negotiation you can choose between -different naming conventions, because files can have more than one -extension, and the order of the extensions is normally irrelevant -(see mod_mime documentation for details). -
-A typical file has a mime-type extension (e.g. html),
-maybe an encoding extension (e.g. gz) and of course a
-language extension (e.g. en) when we have different
-language variants of this file.
-
-Examples: -
-Here are some more examples of filenames together with valid and invalid hyperlinks:
-
- -| Filename | -Valid hyperlink | -Invalid hyperlink | -
|---|---|---|
| foo.html.en | -foo - foo.html |
- - | -
| foo.en.html | -foo | -foo.html | -
| foo.html.en.gz | -foo - foo.html |
- foo.gz - foo.html.gz |
-
| foo.en.html.gz | -foo | -foo.html - foo.html.gz - foo.gz |
-
| foo.gz.html.en | -foo - foo.gz - foo.gz.html |
- foo.html | -
| foo.html.gz.en | -foo - foo.html - foo.html.gz |
- foo.gz | -
-Looking at the table above you will notice that it is always possible to
-use the name without any extensions in a hyperlink (e.g. foo).
-The advantage is that you can hide the actual type of a
-document or file and can change it later, e.g. from html
-to shtml or cgi without changing any
-hyperlink references.
-
-If you want to continue to use a mime-type in your hyperlinks (e.g. -foo.html) the language extension (including an encoding extension -if there is one) must be on the right hand side of the mime-type extension -(e.g. foo.html.en). - - -
-When a cache stores a document, it associates it with the request URL. -The next time that URL is requested, the cache can use the stored -document, provided it is still within date. But if the resource is -subject to content negotiation at the server, this would result in -only the first requested variant being cached, and subsequent cache -hits could return the wrong response. To prevent this, -Apache normally marks all responses that are returned after content negotiation -as non-cacheable by HTTP/1.0 clients. Apache also supports the HTTP/1.1 -protocol features to allow caching of negotiated responses.
- -For requests which come from a HTTP/1.0 compliant client (either a -browser or a cache), the directive CacheNegotiatedDocs can be -used to allow caching of responses which were subject to negotiation. -This directive can be given in the server config or virtual host, and -takes no arguments. It has no effect on requests from HTTP/1.1 -clients. - - - - diff --git a/docs/manual/custom-error.html.en b/docs/manual/custom-error.html.en deleted file mode 100644 index 5e2a3a9475a..00000000000 --- a/docs/manual/custom-error.html.en +++ /dev/null @@ -1,152 +0,0 @@ - - -
-Customizable responses can be defined to be activated in the - event of a server detected error or problem. - -
e.g. if a script crashes and produces a "500 Server Error" - response, then this response can be replaced with either some - friendlier text or by a redirection to another URL (local or - external). -
- -
- -
Redirecting to another URL can be useful, but only if some information - can be passed which can then be used to explain and/or log the error/problem - more clearly. - -
To achieve this, Apache will define new CGI-like environment - variables, e.g. - -
-REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
-REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
-REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
-REDIRECT_QUERY_STRING=
-REDIRECT_REMOTE_ADDR=121.345.78.123
-REDIRECT_REMOTE_HOST=ooh.ahhh.com
-REDIRECT_SERVER_NAME=crash.bang.edu
-REDIRECT_SERVER_PORT=80
-REDIRECT_SERVER_SOFTWARE=Apache/0.8.15
-REDIRECT_URL=/cgi-bin/buggy.pl
-
-
- note the REDIRECT_ prefix.
-
-
At least REDIRECT_URL and REDIRECT_QUERY_STRING will
- be passed to the new URL (assuming it's a cgi-script or a cgi-include). The
- other variables will exist only if they existed prior to the error/problem.
- None of these will be set if your ErrorDocument is an
- external redirect (i.e. anything starting with a protocol name
- like http:, even if it refers to the same host as the
- server).
- -
Here are some examples... - -
-ErrorDocument 500 /cgi-bin/crash-recover
-ErrorDocument 500 "Sorry, our script crashed. Oh dear
-ErrorDocument 500 http://xxx/
-ErrorDocument 404 /Lame_excuses/not_found.html
-ErrorDocument 401 /Subscription/how_to_subscribe.html
-
-
- The syntax is, - -
ErrorDocument
-<3-digit-code> action
-
-
where the action can be, - -
- -
- -
- -
REDIRECT_. REDIRECT_ environment
-variables are created from the CGI environment variables which existed
-prior to the redirect; they are renamed with a REDIRECT_
-prefix, i.e. HTTP_USER_AGENT becomes
-REDIRECT_HTTP_USER_AGENT. In addition to these new
-variables, Apache will define REDIRECT_URL and
-REDIRECT_STATUS to help the script trace its origin.
-Both the original URL and the URL being redirected to can be logged in
-the access log.
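-
-As an illustration only, a minimal ErrorDocument CGI program written in C
-might read these variables with getenv(); the program below is a sketch, and
-its names and messages are invented rather than part of Apache:
-
-#include <stdio.h>
-#include <stdlib.h>
-
-/* Sketch of an ErrorDocument handler: report the failed URL and status
- * using the REDIRECT_ variables described above, when they are set. */
-int main(void)
-{
-    const char *url    = getenv("REDIRECT_URL");
-    const char *status = getenv("REDIRECT_STATUS");
-
-    printf("Content-type: text/plain\r\n\r\n");
-    printf("Sorry, the request for %s failed (status %s).\n",
-           url    ? url    : "(unknown URL)",
-           status ? status : "(unknown)");
-    return 0;
-}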
-
-- -A few notes on general pedagogical style here. In the interest of -conciseness, all structure declarations here are incomplete --- the -real ones have more slots that I'm not telling you about. For the -most part, these are reserved to one component of the server core or -another, and should be altered by modules with caution. However, in -some cases, they really are things I just haven't gotten around to -yet. Welcome to the bleeding edge.
- -Finally, here's an outline, to give you some bare idea of what's -coming up, and in what order: - -
SetEnv, which don't really fit well elsewhere.
- OK.
- DECLINED. In this case, the
- server behaves in all respects as if the handler simply hadn't
- been there.
- */* (i.e., a
-wildcard MIME type specification). However, wildcard handlers are
-only invoked if the server has already tried and failed to find a more
-specific response handler for the MIME type of the requested object
-(either none existed, or they all declined).
-
-The handlers themselves are functions of one argument (a
-request_rec structure; vide infra), which return an
-integer, as above.
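-
-As a sketch (the function name and the test it applies are invented here,
-not part of the API), such a handler has roughly this shape:
-
-int my_phase_handler (request_rec *r)
-{
-    /* if this request is none of our business ... */
-    if (r->content_type == NULL)        /* (an arbitrary example test) */
-        return DECLINED;                /* behave as if we weren't here */
-
-    /* ... otherwise do whatever this phase requires ... */
-
-    return OK;                          /* or an HTTP error status on failure */
-}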
- -
ScriptAlias config file
-command. It's actually a great deal more complicated than most
-modules, but if we're going to have only one example, it might as well
-be the one with its fingers in every place.
-
-Let's begin with handlers. In order to handle the CGI scripts, the
-module declares a response handler for them. Because of
-ScriptAlias, it also has handlers for the name
-translation phase (to recognize ScriptAliased URIs), and the
-type-checking phase (any ScriptAliased request is typed
-as a CGI script).
-
-The module needs to maintain some per (virtual)
-server information, namely, the ScriptAliases in effect;
-the module structure therefore contains pointers to a function which
-builds these structures, and to another which combines two of them (in
-case the main server and a virtual server both have
-ScriptAliases declared).
-
-Finally, this module contains code to handle the
-ScriptAlias command itself. This particular module only
-declares one command, but there could be more, so modules have
-command tables which declare their commands, and describe
-where they are permitted, and how they are to be invoked.
-
-A final note on the declared types of the arguments of some of these
-commands: a pool is a pointer to a resource pool
-structure; these are used by the server to keep track of the memory
-which has been allocated, files opened, etc., either to service a
-particular request, or to handle the process of configuring itself.
-That way, when the request is over (or, for the configuration pool,
-when the server is restarting), the memory can be freed, and the files
-closed, en masse, without anyone having to write explicit code to
-track them all down and dispose of them. Also, a
-cmd_parms structure contains various information about
-the config file being read, and other status information, which is
-sometimes of use to the function which processes a config-file command
-(such as ScriptAlias).
-
-With no further ado, the module itself:
-
-
-/* Declarations of handlers. */
-
-int translate_scriptalias (request_rec *);
-int type_scriptalias (request_rec *);
-int cgi_handler (request_rec *);
-
-/* Subsidiary dispatch table for response-phase handlers, by MIME type */
-
-handler_rec cgi_handlers[] = {
-{ "application/x-httpd-cgi", cgi_handler },
-{ NULL }
-};
-
-/* Declarations of routines to manipulate the module's configuration
- * info. Note that these are returned, and passed in, as void *'s;
- * the server core keeps track of them, but it doesn't, and can't,
- * know their internal structure.
- */
-
-void *make_cgi_server_config (pool *);
-void *merge_cgi_server_config (pool *, void *, void *);
-
-/* Declarations of routines to handle config-file commands */
-
-extern char *script_alias(cmd_parms *, void *per_dir_config, char *fake,
- char *real);
-
-command_rec cgi_cmds[] = {
-{ "ScriptAlias", script_alias, NULL, RSRC_CONF, TAKE2,
- "a fakename and a realname"},
-{ NULL }
-};
-
-module cgi_module = {
- STANDARD_MODULE_STUFF,
- NULL, /* initializer */
- NULL, /* dir config creator */
- NULL, /* dir merger --- default is to override */
- make_cgi_server_config, /* server config */
- merge_cgi_server_config, /* merge server config */
- cgi_cmds, /* command table */
- cgi_handlers, /* handlers */
- translate_scriptalias, /* filename translation */
- NULL, /* check_user_id */
- NULL, /* check auth */
- NULL, /* check access */
- type_scriptalias, /* type_checker */
- NULL, /* fixups */
- NULL, /* logger */
- NULL /* header parser */
-};
-
-
-request_rec structure.
-This structure describes a particular request which has been made to
-the server, on behalf of a client. In most cases, each connection to
-the client generates only one request_rec structure.- -
The request_rec contains pointers to a resource pool
-which will be cleared when the server is finished handling the
-request; to structures containing per-server and per-connection
-information, and most importantly, information on the request itself.- -The most important such information is a small set of character -strings describing attributes of the object being requested, including -its URI, filename, content-type and content-encoding (these being filled -in by the translation and type-check handlers which handle the -request, respectively).
-
-Other commonly used data items are tables giving the MIME headers on
-the client's original request, MIME headers to be sent back with the
-response (which modules can add to at will), and environment variables
-for any subprocesses which are spawned off in the course of servicing
-the request. These tables are manipulated using the
-ap_table_get and ap_table_set routines.
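-
-For instance, a handler might consult one table and add to the others along
-these lines (a sketch; the header and variable names added here are invented):
-
-int my_fixup (request_rec *r)
-{
-    /* read a MIME header sent by the client */
-    const char *agent = ap_table_get (r->headers_in, "User-Agent");
-
-    /* add a header to the eventual response */
-    ap_table_set (r->headers_out, "X-Seen-Agent",
-                  agent ? agent : "unknown");
-
-    /* and pass something to any subprocess (e.g. a CGI script) */
-    ap_table_set (r->subprocess_env, "SEEN_AGENT",
-                  agent ? agent : "unknown");
-
-    return OK;
-}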
-
- Note that the Content-type header value cannot be - set by module content-handlers using the ap_table_*() - routines. Rather, it is set by pointing the content_type - field in the request_rec structure to an appropriate - string. E.g., --Finally, there are pointers to two data structures which, in turn, -point to per-module configuration structures. Specifically, these -hold pointers to the data structures which the module has built to -describe the way it has been configured to operate in a given -directory (via- r->content_type = "text/html"; --
.htaccess files or
-<Directory> sections), for private data it has
-built in the course of servicing the request (so modules' handlers for
-one phase can pass `notes' to their handlers for other phases). There
-is another such configuration vector in the server_rec
-data structure pointed to by the request_rec, which
-contains per (virtual) server configuration data.- -Here is an abridged declaration, giving the fields most commonly used:
- -
-struct request_rec {
-
- pool *pool;
- conn_rec *connection;
- server_rec *server;
-
- /* What object is being requested */
-
- char *uri;
- char *filename;
- char *path_info;
- char *args; /* QUERY_ARGS, if any */
- struct stat finfo; /* Set by server core;
- * st_mode set to zero if no such file */
-
- char *content_type;
- char *content_encoding;
-
- /* MIME header environments, in and out. Also, an array containing
- * environment variables to be passed to subprocesses, so people can
- * write modules to add to that environment.
- *
- * The difference between headers_out and err_headers_out is that
- * the latter are printed even on error, and persist across internal
- * redirects (so the headers printed for ErrorDocument handlers will
- * have them).
- */
-
- table *headers_in;
- table *headers_out;
- table *err_headers_out;
- table *subprocess_env;
-
- /* Info about the request itself... */
-
- int header_only; /* HEAD request, as opposed to GET */
- char *protocol; /* Protocol, as given to us, or HTTP/0.9 */
- char *method; /* GET, HEAD, POST, etc. */
- int method_number; /* M_GET, M_POST, etc. */
-
- /* Info for logging */
-
- char *the_request;
- int bytes_sent;
-
- /* A flag which modules can set, to indicate that the data being
- * returned is volatile, and clients should be told not to cache it.
- */
-
- int no_cache;
-
- /* Various other config info which may change with .htaccess files
- * These are config vectors, with one void* pointer for each module
- * (the thing pointed to being the module's business).
- */
-
- void *per_dir_config; /* Options set in config files, etc. */
- void *request_config; /* Notes on *this* request */
-
-};
-
-
-
-request_rec structures are built by reading an HTTP
-request from a client, and filling in the fields. However, there are
-a few exceptions:
-
-*.var file), or a CGI script which returned a
- local `Location:', then the resource which the user requested
- is going to be ultimately located by some URI other than what
- the client originally supplied. In this case, the server does
- an internal redirect, constructing a new
- request_rec for the new URI, and processing it
- almost exactly as if the client had requested the new URI
- directly. - -
ErrorDocument is in scope, the same internal
- redirect machinery comes into play.- -
-
- Such handlers can construct a sub-request, using the
- functions ap_sub_req_lookup_file and
- ap_sub_req_lookup_uri; this constructs a new
- request_rec structure and processes it as you
- would expect, up to but not including the point of actually
- sending a response. (These functions skip over the access
- checks if the sub-request is for a file in the same directory
- as the original request).
-
- (Server-side includes work by building sub-requests and then
- actually invoking the response handler for them, via the
- function run_sub_request).
-
request_rec, has to return an int to
-indicate what happened. That can either be
-
-REDIRECT, then
-the module should put a Location in the request's
-headers_out, to indicate where the client should be
-redirected to. - -
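-
-In other words, a handler which decides to redirect might end, for example,
-like this (the target URL is just a placeholder):
-
-ap_table_set (r->headers_out, "Location", "http://xxx/some/other/place");
-return REDIRECT;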
request_rec structure (or, in the case of access
-checkers, simply by returning the correct error code). However,
-response handlers have to actually send a request back to the client.
-
-They should begin by sending an HTTP response header, using the
-function ap_send_http_header. (You don't have to do
-anything special to skip sending the header for HTTP/0.9 requests; the
-function figures out on its own that it shouldn't do anything). If
-the request is marked header_only, that's all they should
-do; they should return after that, without attempting any further
-output.
-
-Otherwise, they should produce a request body which responds to the
-client as appropriate. The primitives for this are ap_rputc
-and ap_rprintf, for internally generated output, and
-ap_send_fd, to copy the contents of some FILE *
-straight to the client.
-
-At this point, you should more or less understand the following piece
-of code, which is the handler which handles GET requests
-which have no more specific handler; it also shows how conditional
-GETs can be handled, if it's desirable to do so in a
-particular response handler --- ap_set_last_modified checks
-against the If-modified-since value supplied by the
-client, if any, and returns an appropriate code (which will, if
-nonzero, be USE_LOCAL_COPY). No similar considerations apply for
-ap_set_content_length, but it returns an error code for
-symmetry.
- -
-int default_handler (request_rec *r)
-{
- int errstatus;
- FILE *f;
-
- if (r->method_number != M_GET) return DECLINED;
- if (r->finfo.st_mode == 0) return NOT_FOUND;
-
- if ((errstatus = ap_set_content_length (r, r->finfo.st_size))
- || (errstatus = ap_set_last_modified (r, r->finfo.st_mtime)))
- return errstatus;
-
- f = fopen (r->filename, "r");
-
- if (f == NULL) {
- log_reason("file permissions deny server access",
- r->filename, r);
- return FORBIDDEN;
- }
-
- register_timeout ("send", r);
- ap_send_http_header (r);
-
- if (!r->header_only) send_fd (f, r);
- ap_pfclose (r->pool, f);
- return OK;
-}
-
-
-Finally, if all of this is too much of a challenge, there are a few
-ways out of it. First off, as shown above, a response handler which
-has not yet produced any output can simply return an error code, in
-which case the server will automatically produce an error response.
-Secondly, it can punt to some other handler by invoking
-ap_internal_redirect, which is how the internal redirection
-machinery discussed above is invoked. A response handler which has
-internally redirected should always return OK.
-
-(Invoking ap_internal_redirect from handlers which are
-not response handlers will lead to serious confusion).
-
-
ap_auth_type,
- ap_auth_name, and ap_requires.
- ap_get_basic_auth_pw,
- which sets the connection->user structure field
- automatically, and ap_note_basic_auth_failure, which
- arranges for the proper WWW-Authenticate: header
- to be sent back).
-request_rec structures which are
-threaded through the r->prev and r->next
-pointers. The request_rec which is passed to the logging
-handlers in such cases is the one which was originally built for the
-initial request from the client; note that the bytes_sent field will
-only be correct in the last request in the chain (the one for which a
-response was actually sent).
-
-
-One of the problems of writing and designing a server-pool server is
-that of preventing leakage, that is, allocating resources (memory,
-open files, etc.), without subsequently releasing them. The resource
-pool machinery is designed to make it easy to prevent this from
-happening, by allowing resources to be allocated in such a way that
-they are automatically released when the server is done with
-them.
--The way this works is as follows: the memory which is allocated, file -opened, etc., to deal with a particular request are tied to a -resource pool which is allocated for the request. The pool -is a data structure which itself tracks the resources in question. -
-
-When the request has been processed, the pool is cleared. At
-that point, all the memory associated with it is released for reuse,
-all files associated with it are closed, and any other clean-up
-functions which are associated with the pool are run. When this is
-over, we can be confident that all the resources tied to the pool have
-been released, and that none of them have leaked.
--Server restarts, and allocation of memory and resources for per-server -configuration, are handled in a similar way. There is a -configuration pool, which keeps track of resources which were -allocated while reading the server configuration files, and handling -the commands therein (for instance, the memory that was allocated for -per-server module configuration, log files and other files that were -opened, and so forth). When the server restarts, and has to reread -the configuration files, the configuration pool is cleared, and so the -memory and file descriptors which were taken up by reading them the -last time are made available for reuse. -
-
-It should be noted that use of the pool machinery isn't generally
-obligatory, except for situations like logging handlers, where you
-really need to register cleanups to make sure that the log file gets
-closed when the server restarts (this is most easily done by using the
-function ap_pfopen, which also
-arranges for the underlying file descriptor to be closed before any
-child processes, such as for CGI scripts, are execed), or
-in case you are using the timeout machinery (which isn't yet even
-documented here). However, there are two benefits to using it:
-resources allocated to a pool never leak (even if you allocate a
-scratch string, and just forget about it); also, for memory
-allocation, ap_palloc is generally faster than
-malloc.
-
-We begin here by describing how memory is allocated to pools, and then -discuss how other resources are tracked by the resource pool -machinery. -
-
-Memory is allocated to pools by calling the function
-ap_palloc, which takes two arguments, one being a pointer to
-a resource pool structure, and the other being the amount of memory to
-allocate (in chars). Within handlers for handling
-requests, the most common way of getting a resource pool structure is
-by looking at the pool slot of the relevant
-request_rec; hence the repeated appearance of the
-following idiom in module code:
-
-int my_handler(request_rec *r)
-{
- struct my_structure *foo;
- ...
-
- foo = (struct my_structure *)ap_palloc (r->pool, sizeof(struct my_structure));
-}
-
-
-Note that there is no ap_pfree ---
-ap_palloced memory is freed only when the associated
-resource pool is cleared. This means that ap_palloc does not
-have to do as much accounting as malloc(); all it does in
-the typical case is to round up the size, bump a pointer, and do a
-range check.
-
-(It also raises the possibility that heavy use of ap_palloc
-could cause a server process to grow excessively large. There are
-two ways to deal with this, which are dealt with below; briefly, you
-can use malloc, and try to be sure that all of the memory
-gets explicitly freed, or you can allocate a sub-pool of
-the main pool, allocate your memory in the sub-pool, and clear it out
-periodically. The latter technique is discussed in the section on
-sub-pools below, and is used in the directory-indexing code, in order
-to avoid excessive storage allocation when listing directories with
-thousands of files).
-
-There are functions which allocate initialized memory, and are
-frequently useful. The function ap_pcalloc has the same
-interface as ap_palloc, but clears out the memory it
-allocates before it returns it. The function ap_pstrdup
-takes a resource pool and a char * as arguments, and
-allocates memory for a copy of the string the pointer points to,
-returning a pointer to the copy. Finally ap_pstrcat is a
-varargs-style function, which takes a pointer to a resource pool, and
-at least two char * arguments, the last of which must be
-NULL. It allocates enough memory to fit copies of each
-of the strings, as a unit; for instance:
-
- ap_pstrcat (r->pool, "foo", "/", "bar", NULL); --
-returns a pointer to 8 bytes worth of memory, initialized to
-"foo/bar".
-
-A pool is really defined by its lifetime more than anything else. There -are some static pools in http_main which are passed to various -non-http_main functions as arguments at opportune times. Here they are: -
--For almost everything folks do, r->pool is the pool to use. But you -can see how other lifetimes, such as pchild, are useful to some -modules... such as modules that need to open a database connection once -per child, and wish to clean it up when the child dies. -
-
-You can also see how some bugs have manifested themselves, such as setting
-connection->user to a value from r->pool -- in this case connection exists
-for the lifetime of ptrans, which is longer than r->pool (especially if
-r->pool is a subrequest!). So the correct thing to do is to allocate
-from connection->pool.
--And there was another interesting bug in mod_include/mod_cgi. You'll see -in those that they do this test to decide if they should use r->pool -or r->main->pool. In this case the resource that they are registering -for cleanup is a child process. If it were registered in r->pool, -then the code would wait() for the child when the subrequest finishes. -With mod_include this could be any old #include, and the delay can be up -to 3 seconds... and happened quite frequently. Instead the subprocess -is registered in r->main->pool which causes it to be cleaned up when -the entire request is done -- i.e., after the output has been sent to -the client and logging has happened. -
-
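-
-Put another way, data has to be copied into the pool whose lifetime matches
-its use; for the connection->user case above, a sketch (where name is some
-string the module has computed):
-
-/* wrong: the copy disappears when r->pool is cleared, but the
- * connection outlives it */
-/* r->connection->user = ap_pstrdup (r->pool, name); */
-
-/* right: tie the copy to the connection's own pool */
-r->connection->user = ap_pstrdup (r->connection->pool, name);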
-As indicated above, resource pools are also used to track other sorts
-of resources besides memory. The most common are open files. The
-routine which is typically used for this is ap_pfopen, which
-takes a resource pool and two strings as arguments; the strings are
-the same as the typical arguments to fopen, e.g.,
-
- ...
- FILE *f = ap_pfopen (r->pool, r->filename, "r");
-
- if (f == NULL) { ... } else { ... }
-
-
-There is also a ap_popenf routine, which parallels the
-lower-level open system call. Both of these routines
-arrange for the file to be closed when the resource pool in question
-is cleared.
-
-Unlike the case for memory, there are functions to close
-files allocated with ap_pfopen, and ap_popenf,
-namely ap_pfclose and ap_pclosef. (This is
-because, on many systems, the number of files which a single process
-can have open is quite limited). It is important to use these
-functions to close files allocated with ap_pfopen and
-ap_popenf, since to do otherwise could cause fatal errors on
-systems such as Linux, which react badly if the same
-FILE* is closed more than once.
-
-(Using the close functions is not mandatory, since the
-file will eventually be closed regardless, but you should consider it
-in cases where your module is opening, or could open, a lot of files).
-
-More text goes here. Describe the cleanup primitives in terms of
-which the file stuff is implemented; also, spawn_process.
-
--Pool cleanups live until clear_pool() is called: clear_pool(a) recursively -calls destroy_pool() on all subpools of a; then calls all the cleanups for a; -then releases all the memory for a. destroy_pool(a) calls clear_pool(a) -and then releases the pool structure itself. i.e. clear_pool(a) doesn't -delete a, it just frees up all the resources and you can start using it -again immediately. -
-ap_palloc() and the
-associated primitives may result in undesirably profligate resource
-allocation. You can deal with such a case by creating a
-sub-pool, allocating within the sub-pool rather than the main
-pool, and clearing or destroying the sub-pool, which releases the
-resources which were associated with it. (This really is a
-rare situation; the only case in which it comes up in the standard
-module set is in case of listing directories, and then only with
-very large directories. Unnecessary use of the primitives
-discussed here can hair up your code quite a bit, with very little
-gain).
-
-The primitive for creating a sub-pool is ap_make_sub_pool,
-which takes another pool (the parent pool) as an argument. When the
-main pool is cleared, the sub-pool will be destroyed. The sub-pool
-may also be cleared or destroyed at any time, by calling the functions
-ap_clear_pool and ap_destroy_pool, respectively.
-(The difference is that ap_clear_pool frees resources
-associated with the pool, while ap_destroy_pool also
-deallocates the pool itself. In the former case, you can allocate new
-resources within the pool, and clear it again, and so forth; in the
-latter case, it is simply gone).
-
-One final note --- sub-requests have their own resource pools, which
-are sub-pools of the resource pool for the main request. The polite
-way to reclaim the resources associated with a sub request which you
-have allocated (using the ap_sub_req_lookup_... functions)
-is ap_destroy_sub_req, which frees the resource pool.
-Before calling this function, be sure to copy anything that you care
-about which might be allocated in the sub-request's resource pool into
-someplace a little less volatile (for instance, the filename in its
-request_rec structure).
-
-(Again, under most circumstances, you shouldn't feel obliged to call
-this function; only 2K of memory or so are allocated for a typical sub
-request, and it will be freed anyway when the main request pool is
-cleared. It is only when you are allocating many, many sub-requests
-for a single main request that you should seriously consider the
-ap_destroy... functions).
-
-
-
-However, just giving the modules command tables is not enough to
-divorce them completely from the server core. The server has to
-remember the commands in order to act on them later. That involves
-maintaining data which is private to the modules, and which can be
-either per-server, or per-directory. Most things are per-directory,
-including in particular access control and authorization information,
-but also information on how to determine file types from suffixes,
-which can be modified by AddType and
-DefaultType directives, and so forth. In general, the
-governing philosophy is that anything which can be made
-configurable by directory should be; per-server information is
-generally used in the standard set of modules for information like
-Aliases and Redirects which come into play
-before the request is tied to a particular place in the underlying
-file system.
-
-Another requirement for emulating the NCSA server is being able to
-handle the per-directory configuration files, generally called
-.htaccess files, though even in the NCSA server they can
-contain directives which have nothing at all to do with access
-control. Accordingly, after URI -> filename translation, but before
-performing any other phase, the server walks down the directory
-hierarchy of the underlying filesystem, following the translated
-pathname, to read any .htaccess files which might be
-present. The information which is read in then has to be
-merged with the applicable information from the server's own
-config files (either from the <Directory> sections
-in access.conf, or from defaults in
-srm.conf, which actually behaves for most purposes almost
-exactly like <Directory />).
-
-Finally, after having served a request which involved reading
-.htaccess files, we need to discard the storage allocated
-for handling them. That is solved the same way it is solved wherever
-else similar problems come up, by tying those structures to the
-per-transaction resource pool.
- -
mod_mime.c,
-which defines the file typing handler which emulates the NCSA server's
-behavior of determining file types from suffixes. What we'll be
-looking at, here, is the code which implements the
-AddType and AddEncoding commands. These
-commands can appear in .htaccess files, so they must be
-handled in the module's private per-directory data, which in fact,
-consists of two separate tables for MIME types and
-encoding information, and is declared as follows:
-
-
-typedef struct {
- table *forced_types; /* Additional AddTyped stuff */
- table *encoding_types; /* Added with AddEncoding... */
-} mime_dir_config;
-
-
-When the server is reading a configuration file, or
-<Directory> section, which includes one of the MIME
-module's commands, it needs to create a mime_dir_config
-structure, so those commands have something to act on. It does this
-by invoking the function it finds in the module's `create per-dir
-config slot', with two arguments: the name of the directory to which
-this configuration information applies (or NULL for
-srm.conf), and a pointer to a resource pool in which the
-allocation should happen.
-
-(If we are reading a .htaccess file, that resource pool
-is the per-request resource pool for the request; otherwise it is a
-resource pool which is used for configuration data, and cleared on
-restarts. Either way, it is important for the structure being created
-to vanish when the pool is cleared, by registering a cleanup on the
-pool if necessary).
-
-For the MIME module, the per-dir config creation function just
-ap_pallocs the structure above, and creates a couple of
-tables to fill it. That looks like this:
-
-
-void *create_mime_dir_config (pool *p, char *dummy)
-{
- mime_dir_config *new =
- (mime_dir_config *) ap_palloc (p, sizeof(mime_dir_config));
-
- new->forced_types = ap_make_table (p, 4);
- new->encoding_types = ap_make_table (p, 4);
-
- return new;
-}
-
-
-Now, suppose we've just read in a .htaccess file. We
-already have the per-directory configuration structure for the next
-directory up in the hierarchy. If the .htaccess file we
-just read in didn't have any AddType or
-AddEncoding commands, its per-directory config structure
-for the MIME module is still valid, and we can just use it.
-Otherwise, we need to merge the two structures somehow. - -To do that, the server invokes the module's per-directory config merge -function, if one is present. That function takes three arguments: -the two structures being merged, and a resource pool in which to -allocate the result. For the MIME module, all that needs to be done -is overlay the tables from the new per-directory config structure with -those from the parent: - -
-void *merge_mime_dir_configs (pool *p, void *parent_dirv, void *subdirv)
-{
- mime_dir_config *parent_dir = (mime_dir_config *)parent_dirv;
- mime_dir_config *subdir = (mime_dir_config *)subdirv;
- mime_dir_config *new =
- (mime_dir_config *)ap_palloc (p, sizeof(mime_dir_config));
-
- new->forced_types = ap_overlay_tables (p, subdir->forced_types,
- parent_dir->forced_types);
- new->encoding_types = ap_overlay_tables (p, subdir->encoding_types,
- parent_dir->encoding_types);
-
- return new;
-}
-
-
-As a note --- if there is no per-directory merge function present, the
-server will just use the subdirectory's configuration info, and ignore
-the parent's. For some modules, that works just fine (e.g., for the
-includes module, whose per-directory configuration information
-consists solely of the state of the XBITHACK), and for
-those modules, you can just not declare one, and leave the
-corresponding structure slot in the module itself NULL.- -
AddType and AddEncoding commands. To find
-commands, the server looks in the module's command table.
-That table contains information on how many arguments the commands
-take, and in what formats, where it is permitted, and so forth. That
-information is sufficient to allow the server to invoke most
-command-handling functions with pre-parsed arguments. Without further
-ado, let's look at the AddType command handler, which
-looks like this (the AddEncoding command looks basically
-the same, and won't be shown here):
-
-
-char *add_type(cmd_parms *cmd, mime_dir_config *m, char *ct, char *ext)
-{
- if (*ext == '.') ++ext;
- ap_table_set (m->forced_types, ext, ct);
- return NULL;
-}
-
-
-This command handler is unusually simple. As you can see, it takes
-four arguments: two pre-parsed arguments, the per-directory
-configuration structure for the module in question, and a pointer
-to a cmd_parms structure.
-That structure contains a bunch of arguments which are frequently of
-use to some, but not all, commands, including a resource pool (from
-which memory can be allocated, and to which cleanups should be tied),
-and the (virtual) server being configured, from which the module's
-per-server configuration data can be obtained if required.
-
-Another way in which this particular command handler is unusually
-simple is that there are no error conditions which it can encounter.
-If there were, it could return an error message instead of
-NULL; this causes an error to be printed out on the
-server's stderr, followed by a quick exit, if it is in
-the main config files; for a .htaccess file, the syntax
-error is logged in the server error log (along with an indication of
-where it came from), and the request is bounced with a server error
-response (HTTP error status, code 500).
- -The MIME module's command table has entries for these commands, which -look like this: - -
-command_rec mime_cmds[] = {
-{ "AddType", add_type, NULL, OR_FILEINFO, TAKE2,
- "a mime type followed by a file extension" },
-{ "AddEncoding", add_encoding, NULL, OR_FILEINFO, TAKE2,
- "an encoding (e.g., gzip), followed by a file extension" },
-{ NULL }
-};
-
-
-The entries in these tables are:
-
-(void *) pointer, which is passed in the
- cmd_parms structure to the command handler ---
- this is useful in case many similar commands are handled by the
- same function.
- AllowOverride
- option, and an additional mask bit, RSRC_CONF,
- indicating that the command may appear in the server's own
- config files, but not in any .htaccess
- file.
- TAKE2 indicates two pre-parsed arguments. Other
- options are TAKE1, which indicates one pre-parsed
- argument, FLAG, which indicates that the argument
- should be On or Off, and is passed in
- as a boolean flag, RAW_ARGS, which causes the
- server to give the command the raw, unparsed arguments
- (everything but the command name itself). There is also
- ITERATE, which means that the handler looks the
- same as TAKE1, but that if multiple arguments are
- present, it should be called multiple times, and finally
- ITERATE2, which indicates that the command handler
- looks like a TAKE2, but if more arguments are
- present, then it should be called multiple times, holding the
- first argument constant.
- NULL).
-request_rec's per-directory configuration vector by using
-the ap_get_module_config function.
-
-
-int find_ct(request_rec *r)
-{
- int i;
- char *fn = ap_pstrdup (r->pool, r->filename);
- mime_dir_config *conf = (mime_dir_config *)
- ap_get_module_config(r->per_dir_config, &mime_module);
- char *type;
-
- if (S_ISDIR(r->finfo.st_mode)) {
- r->content_type = DIR_MAGIC_TYPE;
- return OK;
- }
-
- if((i=ap_rind(fn,'.')) < 0) return DECLINED;
- ++i;
-
- if ((type = ap_table_get (conf->encoding_types, &fn[i])))
- {
- r->content_encoding = type;
-
- /* go back to previous extension to try to use it as a type */
-
- fn[i-1] = '\0';
- if((i=ap_rind(fn,'.')) < 0) return OK;
- ++i;
- }
-
- if ((type = ap_table_get (conf->forced_types, &fn[i])))
- {
- r->content_type = type;
- }
-
- return OK;
-}
-
-
-
-
-
-The only substantial difference is that when a command needs to
-configure the per-server private module data, it needs to go to the
-cmd_parms data to get at it. Here's an example, from the
-alias module, which also indicates how a syntax error can be returned
-(note that the per-directory configuration argument to the command
-handler is declared as a dummy, since the module doesn't actually have
-per-directory config data):
-
-
-char *add_redirect(cmd_parms *cmd, void *dummy, char *f, char *url)
-{
- server_rec *s = cmd->server;
- alias_server_conf *conf = (alias_server_conf *)
- ap_get_module_config(s->module_config,&alias_module);
- alias_entry *new = ap_push_array (conf->redirects);
-
- if (!ap_is_url (url)) return "Redirect to non-URL";
-
- new->fake = f; new->real = url;
- return NULL;
-}
-
-
-
diff --git a/docs/manual/handler.html.en b/docs/manual/handler.html.en
deleted file mode 100644
index 638d740f8e1..00000000000
--- a/docs/manual/handler.html.en
+++ /dev/null
@@ -1,165 +0,0 @@
-
-
-
-A "handler" is an internal Apache representation of the action to be -performed when a file is called. Generally, files have implicit -handlers, based on the file type. Normally, all files are simply -served by the server, but certain file typed are "handled" -separately. For example, you may use a type of -"application/x-httpd-cgi" to invoke CGI scripts.
-
-Apache 1.1 adds the additional ability to use handlers
-explicitly. Either based on filename extensions or on location, these
-handlers are unrelated to file type. This is advantageous both because
-it is a more elegant solution, and because it allows for both a type
-and a handler to be associated with a file.
-
-Handlers can either be built into the server or into a module, or
-they can be added with the Action directive. The built-in
-handlers in the standard distribution are as follows:
- -- -
AddHandler maps the filename extension extension to the
-handler handler-name. For example, to activate CGI scripts
-with the file extension ".cgi", you might use:
-
- AddHandler cgi-script cgi -- -
Once that has been put into your srm.conf or httpd.conf file, any
-file ending with ".cgi" will be treated as a CGI
-program.
When placed into an .htaccess file or a
-<Directory> or <Location> section,
-this directive forces all matching files to be parsed through the
-handler given by handler-name. For example, if you had a
-directory you wanted to be parsed entirely as imagemap rule files,
-regardless of extension, you might put the following into an
-.htaccess file in that directory:
-
- SetHandler imap-file --
Another example: if you wanted to have the server display a status
-report whenever a URL of http://servername/status was
-called, you might put the following into access.conf:
-
- <Location /status> - SetHandler server-status - </Location> -- -
In order to implement the handler features, an addition has been
-made to the Apache API that you may wish to
-make use of. Specifically, a new record has been added to the
-request_rec structure:
- char *handler --
If you wish to have your module engage a handler, you need only to
-set r->handler to the name of the handler at any time
-prior to the invoke_handler stage of the
-request. Handlers are implemented as they were before, albeit using
-the handler name instead of a content type. While it is not
-necessary, the naming convention for handlers is to use a
-dash-separated word, with no slashes, so as to not invade the media
-type name-space.
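-
-As a minimal sketch of this (the handler name "my-handler", the ".foo"
-extension test, and the choice of the fixup hook are illustrative
-assumptions, not part of the API description above; the usual module
-includes such as httpd.h, http_config.h and string.h are assumed):
-
-static int my_fixup (request_rec *r)
-{
-    /* Engage our handler for files ending in ".foo".  The fixup hook
-     * runs before the invoke_handler stage, so setting r->handler
-     * here is early enough.
-     */
-    if (r->filename && strstr (r->filename, ".foo"))
-        r->handler = "my-handler";
-
-    return DECLINED;    /* let other modules run their fixups as well */
-}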
- -If you downloaded a binary distribution, skip to Installing Apache. Otherwise read the next section -for how to compile the server. - -
-
-All configuration of Apache is performed in the src
-directory of the Apache distribution. Change into this directory.
-
-
Configuration file. Uncomment lines corresponding to
- those optional modules you wish to include (among the AddModule lines
- at the bottom of the file), or add new lines corresponding to
- additional modules you have downloaded or written. (See API.html for preliminary docs on how to
- write Apache modules). Advanced users can comment out some of the
- default modules if they are sure they will not need them (be careful
- though, since many of the default modules are vital for the correct
- operation and security of the server).
-
-
- You should also read the instructions in the Configuration
- file to see if you need to set any of the Rule lines.
-
-
-
Configure script as given below. However
- if this fails or you have any special requirements (e.g. to include
- an additional library required by an optional module) you might need
- to edit one or more of the following options in the
- Configuration file:
- EXTRA_CFLAGS, LIBS, LDFLAGS, INCLUDES.
-
-
- Run the Configure script:
-
-- - (*: Depending on Configuration and your system, Configure - make not print these lines. That's OK).- % Configure - Using 'Configuration' as config file - + configured for <whatever> platform - + setting C compiler to <whatever> * - + setting C compiler optimization-level to <whatever> * - + Adding selected modules - + doing sanity check on compiler and options - Creating Makefile in support - Creating Makefile in main - Creating Makefile in os/unix - Creating Makefile in modules/standard --
- - This generates a Makefile for use in stage 3. It also creates a - Makefile in the support directory, for compilation of the optional - support programs. -
-
- (If you want to maintain multiple configurations, you can give an
- option to Configure to tell it to read an alternative
- Configuration file, such as Configure -file
- Configuration.ai).
-
- -
Type make. This will create an executable called
-httpd in the
-src directory. A binary distribution of Apache will
-supply this file.
-
-The next step is to install the program and configure it. Apache is
-designed to be configured and run from the same set of directories
-where it is compiled. If you want to run it from somewhere else, make
-a directory and copy the conf, logs and
-icons directories into it. In either case you should
-read the security tips
-describing how to set the permissions on the server root directory.
-
-The next step is to edit the configuration files for the server. This
-consists of setting up various directives in up to three
-central configuration files. By default, these files are located in
-the conf directory and are called srm.conf,
-access.conf and httpd.conf. To help you get
-started there are sample files in the conf directory of the
-distribution, called srm.conf-dist,
-access.conf-dist and httpd.conf-dist. Copy
-or rename these files to the names without the -dist.
-Then edit each of the files. Read the comments in each file carefully.
-Failure to set up these files correctly could lead to your server not
-working or being insecure. You should also have an additional file in
-the conf directory called mime.types. This
-file usually does not need editing.
-
-
-
-First edit httpd.conf. This sets up general attributes
-about the server: the port number, the user it runs as, etc. Next
-edit the srm.conf file; this sets up the root of the
-document tree, special functions like server-parsed HTML or internal
-imagemap parsing, etc. Finally, edit the access.conf
-file to at least set the base cases of access.
-
-
-
-In addition to these three files, the server behavior can be configured
-on a directory-by-directory basis by using .htaccess
-files in directories accessed by the server.
-
-
httpd. This will look for
-httpd.conf in the location compiled into the code (by
-default /usr/local/apache/conf/httpd.conf). If
-this file is somewhere else, you can give the real
-location with the -f argument. For example:
-
-- /usr/local/apache/httpd -f /usr/local/apache/conf/httpd.conf -- -If all goes well this will return to the command prompt almost -immediately. This indicates that the server is now up and running. If -anything goes wrong during the initialization of the server you will -see an error message on the screen. - -If the server started ok, you can now use your browser to -connect to the server and read the documentation. If you are running -the browser on the same machine as the server and using the default -port of 80, a suitable URL to enter into your browser is - -
- http://localhost/ -- -
- -Note that when the server starts it will create a number of -child processes to handle the requests. If you started Apache -as the root user, the parent process will continue to run as root -while the children will change to the user as given in the httpd.conf -file. - -
-
-If, when you run httpd, it complains about being unable to
-"bind" to an address, then either some other process is already using
-the port you have configured Apache to use, or you are running httpd
-as a normal user but trying to use a port below 1024 (such as the
-default port 80).
-
-
-
-If the server is not running, read the error message displayed
-when you run httpd. You should also check the server
-error_log for additional information (with the default configuration,
-this will be located in the file error_log in the
-logs directory).
-
-
-
-If you want your server to continue running after a system reboot, you
-should add a call to httpd to your system startup files
-(typically rc.local or a file in an
-rc.N directory). This will start Apache as root.
-Before doing this ensure that your server is properly configured
-for security and access restrictions.
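-
-A hedged example of such a startup entry (the paths assume the default
-ServerRoot; adjust them to your installation):
-
-    # start the Apache httpd daemon at boot time, if the binary is present
-    if [ -x /usr/local/apache/httpd ]; then
-        /usr/local/apache/httpd -f /usr/local/apache/conf/httpd.conf
-    fi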
-
-
-
-To stop Apache send the parent process a TERM signal. The PID of this
-process is written to the file httpd.pid in the
-logs directory (unless configured otherwise). Do not
-attempt to kill the child processes because they will be renewed by
-the parent. A typical command to stop the server is:
-
-
- kill -TERM `cat /usr/local/apache/logs/httpd.pid` -- -
- -For more information about Apache command line options, configuration -and log files, see Starting Apache. For a -reference guide to all Apache directives supported by the distributed -modules, see the Apache directives. - -
httpd server which is compiled
-and configured as above, Apache includes a number of support programs.
-These are not compiled by default. The support programs are in the
-support directory of the distribution. To compile
-the support programs, change into this directory and type
-- make -- - - - diff --git a/docs/manual/invoking.html.en b/docs/manual/invoking.html.en deleted file mode 100644 index e7767cbbdfa..00000000000 --- a/docs/manual/invoking.html.en +++ /dev/null @@ -1,134 +0,0 @@ - - - -
httpd program is usually run as a daemon which executes
-continuously, handling requests. It is possible to invoke Apache by
-the Internet daemon inetd each time a connection to the HTTP
-service is made (use the
-ServerType directive)
-but this is not recommended.
-
--d serverroot
-/usr/local/apache.
-
--f config
-/, then it is taken to be a
-path relative to the ServerRoot. The
-default is conf/httpd.conf.
-
--X
--v
--V
--h
--l
--S
--?
--d command line flag.
-
-Conventionally, the files are:
-conf/httpd.conf
--f command line flag.
-
-conf/srm.conf
-conf/access.conf
-
-The server also reads a file containing mime document types; the filename
-is set by the TypesConfig directive,
-and is conf/mime.types by default.
-
-
logs/httpd.pid. This filename can be changed with the
-PidFile directive. The process-id is for
-use by the administrator in restarting and terminating the daemon;
-A HUP or USR1 signal causes the daemon to re-read its configuration files and
-a TERM signal causes it to die gracefully. For more information
-see the Stopping and Restarting page.
-
-If the process dies (or is killed) abnormally, then it will be necessary to
-kill the child httpd processes.
-
logs/error_log
-by default. The filename can be set using the
-ErrorLog directive; different error logs can
-be set for different virtual hosts.
-
-logs/access_log by default. The filename can be set using a
-TransferLog directive; different
-transfer logs can be set for different virtual
-hosts.
-
-
-
-
diff --git a/docs/manual/mod/directive-dict.html.en b/docs/manual/mod/directive-dict.html.en
deleted file mode 100644
index 780ac3a6270..00000000000
--- a/docs/manual/mod/directive-dict.html.en
+++ /dev/null
@@ -1,262 +0,0 @@
-
-
-
- - Each Apache configuration directive is described using a common format - that looks like this: -
-
- Each of the directive's attributes, complete with possible values
- where applicable, is described in this document.
-
- -- This indicates the format of the directive as it would appear in a - configuration file. This syntax is extremely directive-specific, so - refer to the text of the directive's description for details. -
- -- If the directive has a default value (i.e., if you omit it - from your configuration entirely, the Apache Web server will behave as - though you set it to a particular value), it is described here. If - there is no default value, this section should say - "None". -
- -- This indicates where in the server's configuration files the directive - is legal. It's a comma-separated list of one or more of the following - values: -
--
--
--
--
-- The directive is only allowed within the designated context; - if you try to use it elsewhere, you'll get a configuration error that - will either prevent the server from handling requests in that context - correctly, or will keep the server from operating at all -- - i.e., the server won't even start. -
-- The valid locations for the directive are actually the result of a - Boolean OR of all of the listed contexts. In other words, a directive - that is marked as being valid in "server config, - .htaccess" can be used in the httpd.conf file - and in .htaccess files, but not within any - <Directory> or <VirtualHost> containers. -
- -- This directive attribute indicates which configuration override must - be active in order for the directive to be processed when it appears - in a .htaccess file. If the directive's - context - doesn't permit it to appear in .htaccess files, this - attribute should say "Not applicable". -
-
- Overrides are activated by the
- AllowOverride
- directive, and apply to a particular scope (such as a directory) and
- all descendants, unless further modified by other
- AllowOverride directives at lower levels. The
- documentation for that directive also lists the possible override
- names available.
-
- -- This indicates how tightly bound into the Apache Web server the - directive is; in other words, you may need to recompile the server - with an enhanced set of modules in order to gain access to the - directive and its functionality. Possible values for this attribute - are: -
--
--
--
--
-- This quite simply lists the name of the source module which defines - the directive. -
- -- If the directive wasn't part of the original Apache version 1 - distribution, the version in which it was introduced should be listed - here. If the directive has the same name as one from the NCSA HTTPd - server, any inconsistencies in behaviour between the two should also - be mentioned. Otherwise, this attribute should say "No - compatibility issues." -
- - - diff --git a/docs/manual/platform/perf-bsd44.html b/docs/manual/platform/perf-bsd44.html deleted file mode 100644 index 96536c266e4..00000000000 --- a/docs/manual/platform/perf-bsd44.html +++ /dev/null @@ -1,236 +0,0 @@ - - - -- -Edit the following two files: -
/usr/include/sys/socket.h
- /usr/src/sys/sys/socket.h
-In each file, look for the following:
-- /* - * Maximum queue length specifiable by listen. - */ - #define SOMAXCONN 5 -- -Just change the "5" to whatever appears to work. I bumped the two -machines I was having problems with up to 32 and haven't noticed the -problem since. - -
- -After the edit, recompile the kernel and recompile the Apache server -then reboot. - -
- -FreeBSD 2.1 seems to be perfectly happy, with SOMAXCONN -set to 32 already. - -
-
-
-Addendum for very heavily loaded BSD servers
-
-from Chuck Murcko <chuck@telebase.com>
-
-
- -If you're running a really busy BSD Apache server, the following are useful -things to do if the system is acting sluggish:
- -
- -
-maxusers 256 -- -Maxusers drives a lot of other kernel parameters: - -
-# Network options. NMBCLUSTERS defines the number of mbuf clusters and -# defaults to 256. This machine is a server that handles lots of traffic, -# so we crank that value. -options SOMAXCONN=256 # max pending connects -options NMBCLUSTERS=4096 # mbuf clusters at 4096 - -# -# Misc. options -# -options CHILD_MAX=512 # maximum number of child processes -options OPEN_MAX=512 # maximum fds (breaks RPC svcs) -- -SOMAXCONN is not derived from maxusers, so you'll always need to increase -that yourself. We used a value guaranteed to be larger than Apache's -default for the listen() of 128, currently. - -
- -In many cases, NMBCLUSTERS must be set much larger than would appear -necessary at first glance. The reason for this is that if the browser -disconnects in mid-transfer, the socket fd associated with that particular -connection ends up in the TIME_WAIT state for several minutes, during -which time its mbufs are not yet freed. Another reason is that, on server -timeouts, some connections end up in FIN_WAIT_2 state forever, because -this state doesn't time out on the server, and the browser never sent -a final FIN. For more details see the -FIN_WAIT_2 page. - -
- -Some more info on mbuf clusters (from sys/mbuf.h): -
-/* - * Mbufs are of a single size, MSIZE (machine/machparam.h), which - * includes overhead. An mbuf may add a single "mbuf cluster" of size - * MCLBYTES (also in machine/machparam.h), which has no additional overhead - * and is used instead of the internal data area; this is done when - * at least MINCLSIZE of data must be stored. - */ -- -
- -CHILD_MAX and OPEN_MAX are set to allow up to 512 child processes (different -than the maximum value for processes per user ID) and file descriptors. -These values may change for your particular configuration (a higher OPEN_MAX -value if you've got modules or CGI scripts opening lots of connections or -files). If you've got a lot of other activity besides httpd on the same -machine, you'll have to set NPROC higher still. In this example, the NPROC -value derived from maxusers proved sufficient for our load. - -
- -Caveats - -
- -Be aware that your system may not boot with a kernel that is configured -to use more resources than you have available system RAM. ALWAYS -have a known bootable kernel available when tuning your system this way, -and use the system tools beforehand to learn if you need to buy more -memory before tuning. - -
- -RPC services will fail when the value of OPEN_MAX is larger than 256. -This is a function of the original implementations of the RPC library, -which used a byte value for holding file descriptors. BSDI has partially -addressed this limit in its 2.1 release, but a real fix may well await -the redesign of RPC itself. - -
- -Finally, there's the hard limit of child processes configured in Apache. - -
- -For versions of Apache later than 1.0.5 you'll need to change the -definition for HARD_SERVER_LIMIT in httpd.h and recompile -if you need to run more than the default 150 instances of httpd. - -
- -From conf/httpd.conf-dist: - -
-# Limit on total number of servers running, i.e., limit on the number -# of clients who can simultaneously connect --- if this limit is ever -# reached, clients will be LOCKED OUT, so it should NOT BE SET TOO LOW. -# It is intended mainly as a brake to keep a runaway server from taking -# Unix with it as it spirals down... - -MaxClients 150 -- -Know what you're doing if you bump this value up, and make sure you've -done your system monitoring, RAM expansion, and kernel tuning beforehand. -Then you're ready to service some serious hits! - -
- -Thanks to Tony Sanders and Chris Torek at BSDI for their -helpful suggestions and information. - -
- -"M. Teterin" <mi@ALDAN.ziplink.net> writes:
-
It really does help if your kernel and frequently used utilities -are fully optimized. Rebuilding the FreeBSD kernel on an AMD-133 -(486-class CPU) web-server with-
--m486 -fexpensive-optimizations -fomit-frame-pointer -O2
-helped reduce the number of "unable" errors, because the CPU was -often maxed out.
- -
- Patch ID OSF350-195 for V3.2C- Patch IDs for V3.2E and V3.2F should be available soon. - There is no known reason why the Patch ID OSF360-350195 - won't work on these releases, but such use is not officially - supported by Digital. This patch kit will not be needed for - V3.2G when it is released. -
- Patch ID OSF360-350195 for V3.2D -
-From mogul@pa.dec.com (Jeffrey Mogul) -Organization DEC Western Research -Date 30 May 1996 00:50:25 GMT -Newsgroups comp.unix.osf.osf1 -Message-ID <4oirch$bc8@usenet.pa.dec.com> -Subject Re: Web Site Performance -References 1 - - - -In article <skoogDs54BH.9pF@netcom.com> skoog@netcom.com (Jim Skoog) writes: ->Where are the performance bottlenecks for Alpha AXP running the ->Netscape Commerce Server 1.12 with high volume internet traffic? ->We are evaluating network performance for a variety of Alpha AXP ->runing DEC UNIX 3.2C, which run DEC's seal firewall and behind ->that Alpha 1000 and 2100 webservers. - -Our experience (running such Web servers as altavista.digital.com -and www.digital.com) is that there is one important kernel tuning -knob to adjust in order to get good performance on V3.2C. You -need to patch the kernel global variable "somaxconn" (use dbx -k -to do this) from its default value of 8 to something much larger. - -How much larger? Well, no larger than 32767 (decimal). And -probably no less than about 2048, if you have a really high volume -(millions of hits per day), like AltaVista does. - -This change allows the system to maintain more than 8 TCP -connections in the SYN_RCVD state for the HTTP server. (You -can use "netstat -An |grep SYN_RCVD" to see how many such -connections exist at any given instant). - -If you don't make this change, you might find that as the load gets -high, some connection attempts take a very long time. And if a lot -of your clients disconnect from the Internet during the process of -TCP connection establishment (this happens a lot with dialup -users), these "embryonic" connections might tie up your somaxconn -quota of SYN_RCVD-state connections. Until the kernel times out -these embryonic connections, no other connections will be accepted, -and it will appear as if the server has died. - -The default value for somaxconn in Digital UNIX V4.0 will be quite -a bit larger than it has been in previous versions (we inherited -this default from 4.3BSD). - -Digital UNIX V4.0 includes some other performance-related changes -that significantly improve its maximum HTTP connection rate. However, -we've been using V3.2C systems to front-end for altavista.digital.com -with no obvious performance bottlenecks at the millions-of-hits-per-day -level. - -We have some Webstone performance results available at - http://www.digital.com/info/alphaserver/news/webff.html -I'm not sure if these were done using V4.0 or an earlier version -of Digital UNIX, although I suspect they were done using a test -version of V4.0. - --Jeff - -- - - diff --git a/docs/manual/platform/perf-hp.html b/docs/manual/platform/perf-hp.html deleted file mode 100644 index 13ed152e6a2..00000000000 --- a/docs/manual/platform/perf-hp.html +++ /dev/null @@ -1,118 +0,0 @@ - - - -
- ----------------------------------------------------------------------------- - -From mogul@pa.dec.com (Jeffrey Mogul) -Organization DEC Western Research -Date 31 May 1996 21:01:01 GMT -Newsgroups comp.unix.osf.osf1 -Message-ID <4onmmd$mmd@usenet.pa.dec.com> -Subject Digital UNIX V3.2C Internet tuning patch info - ----------------------------------------------------------------------------- - -Something that probably few people are aware of is that Digital -has a patch kit available for Digital UNIX V3.2C that may improve -Internet performance, especially for busy web servers. - -This patch kit is one way to increase the value of somaxconn, -which I discussed in a message here a day or two ago. - -I've included in this message the revised README file for this -patch kit below. Note that the original README file in the patch -kit itself may be an earlier version; I'm told that the version -below is the right one. - -Sorry, this patch kit is NOT available for other versions of Digital -UNIX. Most (but not quite all) of these changes also made it into V4.0, -so the description of the various tuning parameters in this README -file might be useful to people running V4.0 systems. - -This patch kit does not appear to be available (yet?) from - http://www.service.digital.com/html/patch_service.html -so I guess you'll have to call Digital's Customer Support to get it. - --Jeff - -DESCRIPTION: Digital UNIX Network tuning patch - - Patch ID: OSF350-146 - - SUPERSEDED PATCHES: OSF350-151, OSF350-158 - - This set of files improves the performance of the network - subsystem on a system being used as a web server. There are - additional tunable parameters included here, to be used - cautiously by an informed system administrator. - -TUNING - - To tune the web server, the number of simultaneous socket - connection requests are limited by: - - somaxconn Sets the maximum number of pending requests - allowed to wait on a listening socket. The - default value in Digital UNIX V3.2 is 8. - This patch kit increases the default to 1024, - which matches the value in Digital UNIX V4.0. - - sominconn Sets the minimum number of pending connections - allowed on a listening socket. When a user - process calls listen with a backlog less - than sominconn, the backlog will be set to - sominconn. sominconn overrides somaxconn. - The default value is 1. - - The effectiveness of tuning these parameters can be monitored by - the sobacklog variables available in the kernel: - - sobacklog_hiwat Tracks the maximum pending requests to any - socket. The initial value is 0. - - sobacklog_drops Tracks the number of drops exceeding the - socket set backlog limit. The initial - value is 0. - - somaxconn_drops Tracks the number of drops exceeding the - somaxconn limit. When sominconn is larger - than somaxconn, tracks the number of drops - exceeding sominconn. The initial value is 0. - - TCP timer parameters also affect performance. Tuning the following - require some knowledge of the characteristics of the network. - - tcp_msl Sets the tcp maximum segment lifetime. - This is the maximum lifetime in half - seconds that a packet can be in transit - on the network. This value, when doubled, - is the length of time a connection remains - in the TIME_WAIT state after a incoming - close request is processed. The unit is - specified in 1/2 seconds, the initial - value is 60. - - tcp_rexmit_interval_min - Sets the minimum TCP retransmit interval. 
- For some WAN networks the default value may - be too short, causing unnecessary duplicate - packets to be sent. The unit is specified - in 1/2 seconds, the initial value is 1. - - tcp_keepinit This is the amount of time a partially - established connection will sit on the listen - queue before timing out (e.g. if a client - sends a SYN but never answers our SYN/ACK). - Partially established connections tie up slots - on the listen queue. If the queue starts to - fill with connections in SYN_RCVD state, - tcp_keepinit can be decreased to make those - partial connects time out sooner. This should - be used with caution, since there might be - legitimate clients that are taking a while - to respond to SYN/ACK. The unit is specified - in 1/2 seconds, the default value is 150 - (ie. 75 seconds). - - The hashlist size for the TCP inpcb lookup table is regulated by: - - tcbhashsize The number of hash buckets used for the - TCP connection table used in the kernel. - The initial value is 32. For best results, - should be specified as a power of 2. For - busy Web servers, set this to 2048 or more. - - The hashlist size for the interface alias table is regulated by: - - inifaddr_hsize The number of hash buckets used for the - interface alias table used in the kernel. - The initial value is 32. For best results, - should be specified as a power of 2. - - ipport_userreserved The maximum number of concurrent non-reserved, - dynamically allocated ports. Default range - is 1025-5000. The maximum value is 65535. - This limits the numer of times you can - simultaneously telnet or ftp out to connect - to other systems. - - tcpnodelack Don't delay acknowledging TCP data; this - can sometimes improve performance of locally - run CAD packages. Default is value is 0, - the enabled value is 1. - - Digital UNIX version: - - V3.2C -Feature V3.2C patch V4.0 - ======= ===== ===== ==== -somaxconn X X X -sominconn - X X -sobacklog_hiwat - X - -sobacklog_drops - X - -somaxconn_drops - X - -tcpnodelack X X X -tcp_keepidle X X X -tcp_keepintvl X X X -tcp_keepcnt - X X -tcp_keepinit - X X -TCP keepalive per-socket - - X -tcp_msl - X - -tcp_rexmit_interval_min - X - -TCP inpcb hashing - X X -tcbhashsize - X X -interface alias hashing - X X -inifaddr_hsize - X X -ipport_userreserved - X - -sysconfig -q inet - - X -sysconfig -q socket - - X - -
-Date: Wed, 05 Nov 1997 16:59:34 -0800 -From: Rick Jones <raj@cup.hp.com> -Reply-To: raj@cup.hp.com -Organization: Network Performance -Subject: HP-UX tuning tips -- -Here are some tuning tips for HP-UX to add to the tuning page. - -
-
-For HP-UX 9.X: Upgrade to 10.20
-For HP-UX 10.[00|01|10]: Upgrade to 10.20
-
-
- -For HP-UX 10.20: - -
- -Install the latest cumulative ARPA Transport Patch. This will allow you -to configure the size of the TCP connection lookup hash table. The -default is 256 buckets and must be set to a power of two. This is -accomplished with adb against the *disc* image of the kernel. The -variable name is tcp_hash_size. - -
- -How to pick the value? Examine the output of - -ftp://ftp.cup.hp.com/dist/networking/tools/connhist and see how many -total TCP connections exist on the system. You probably want that number -divided by the hash table size to be reasonably small, say less than 10. -Folks can look at HP's SPECweb96 disclosures for some common settings. -These can be found at -http://www.specbench.org/. If an HP-UX system was -performing at 1000 SPECweb96 connections per second, the TIME_WAIT time -of 60 seconds would mean 60,000 TCP "connections" being tracked. - -
- -Folks can check their listen queue depths with - -ftp://ftp.cup.hp.com/dist/networking/misc/listenq. - -
- -If folks are running Apache on a PA-8000 based system, they should -consider "chatr'ing" the Apache executable to have a large page size. -This would be "chatr +pi L <BINARY>." The GID of the running executable -must have MLOCK privileges. Setprivgrp(1m) should be consulted for -assigning MLOCK. The change can be validated by running Glance and -examining the memory regions of the server(s) to make sure that they -show a non-trivial fraction of the text segment being locked. - -
- -If folks are running Apache on MP systems, they might consider writing a -small program that uses mpctl() to bind processes to processors. A -simple pid % numcpu algorithm is probably sufficient. This might even go -into the source code. - -
- -If folks are concerned about the number of FIN_WAIT_2 connections, they -can use nettune to shrink the value of tcp_keepstart. However, they -should be careful there - certainly do not make it less than oh two to -four minutes. If tcp_hash_size has been set well, it is probably OK to -let the FIN_WAIT_2's take longer to timeout (perhaps even the default -two hours) - they will not on average have a big impact on performance. - -
- -There are other things that could go into the code base, but that might -be left for another email. Feel free to drop me a message if you or -others are interested. - -
- -sincerely, - -
-
-rick jones
-
-http://www.cup.hp.com/netperf/NetperfPage.html
-
-
-
-
-
-
diff --git a/docs/manual/platform/perf.html b/docs/manual/platform/perf.html
deleted file mode 100644
index 6bd50c37651..00000000000
--- a/docs/manual/platform/perf.html
+++ /dev/null
@@ -1,179 +0,0 @@
-
-
-
-- -Other links: - -
- open("/usr/lib/locale/TZ/MET", O_RDONLY) = 3
- read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 7944) = 778
- close(3) = 0
-
- - -In addition, make sure that USE_FCNTL_SERIALIZE_ACCEPT is defined (if not -defined by Apache autoconfiguration). To reduce instances of connections -in FIN_WAIT_2 state, you may also want to define NO_LINGCLOSE (Apache 1.2 -only). - -
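-
-One way to make sure of this (assuming the symbols are not already supplied
-by the autoconfiguration) is to add the defines to EXTRA_CFLAGS in the
-Configuration file before running Configure, for example:
-
-    EXTRA_CFLAGS= -DUSE_FCNTL_SERIALIZE_ACCEPT -DNO_LINGCLOSE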
- -NOTE: Unixware 2.1.2 and later already have patch ptf3123 included
- -In addition, make sure that USE_FCNTL_SERIALIZE_ACCEPT is defined (if not -defined by Apache autoconfiguration). To reduce instances of connections -in FIN_WAIT_2 state, you may also want to define NO_LINGCLOSE (Apache 1.2 -only).
- -Thanks to Joe Doupnik <JRD@cc.usu.edu> and Rich Vaughn -<rvaughn@aad.com> for additional info for UnixWare builds.
- - - - diff --git a/docs/manual/platform/windows.html b/docs/manual/platform/windows.html deleted file mode 100644 index fe41c402461..00000000000 --- a/docs/manual/platform/windows.html +++ /dev/null @@ -1,233 +0,0 @@ - - -
-This document explains how to compile, install, configure and run - Apache 1.3b3 (or later) under Microsoft Windows. Please note that at - this time, Windows support is entirely experimental, and is - recommended only for experienced users. The Apache Group does not - guarantee that this software will work as documented, or even at - all. If you find any bugs, or wish to contribute in other ways, please - use our bug reporting - page.
- -Warning: Apache on NT has not yet been optimized for performance. -Apache still performs best, and is most reliable on Unix platforms. Over -time we will improve NT performance. Folks doing comparative reviews -of webserver performance are asked to compare against Apache -on a Unix platform such as Solaris, FreeBSD, or Linux.
- -Apache 1.3b3 requires the following:
- -* Apache may run with Windows NT 3.5.1, but - has not been tested.
- -This documentation assumes good working knowledge of Microsoft - Windows, Microsoft Visual C++, and the Apache web server (for - Unix).
- -If running on Windows 95, using the "Winsock2" upgrade is recommended - but may not be necessary. If running on NT 4.0, installing Service Pack 2 - is recommended.
- -Information on the latest version of Apache can be found on the Apache -web server at http://www.apache.org/. This will -list the current release, any more recent alpha or beta-test release, -together with details of mirror web and anonymous ftp sites.
- -You can download Apache 1.3b3 in two different forms: an InstallShield-based
- .exe file which contains the precompiled binary, and a
- .tar.gz which contains the source code (and is also the
- regular Unix distribution).
-
-
Compiling Apache requires Microsoft Visual C++ 5.0 to be properly - installed. It is easiest to compile with the command-line tools - (nmake, etc...). Consult the VC++ manual to determine how to install - them.
- -First, unpack the Apache distribution into an appropriate
- directory. Open a command-line prompt, and change to the
- src subdirectory of the Apache distribution.
The master Apache makefile instructions are contained in the
- Makefile.nt file. To compile Apache, simply use one of
- the following commands:
-
nmake /f Makefile.nt _apacher (release build)
-nmake /f Makefile.nt _apached (debug build)
-These will both compile Apache. The latter will include debugging - information in the resulting files, making it easier to find bugs and - track down problems.
- -Apache can also be compiled using VC++'s Visual Studio development - environment. Although compiling Apache in this manner is not as simple, - it makes it possible to easily modify the Apache source, or to compile - Apache if the command-line tools are not installed.
- -Project files (.DSP) are included for each of the
- portions of Apache. The three projects that are necessary for
- Apache to run are Apache.dsp, ap/ap.dsp,
- regex/regex.dsp, ApacheCore.dsp and
- os/win32/ApacheOS.dsp. The src/win32
- subdirectory contains project files for the optional modules (see
- below).
Once Apache has been compiled, it needs to be installed in its server
- root directory. The hard-coded default is the \Apache
- directory, on the current hard drive. Another directory may be used,
- but the files will need to be installed manually.
To install the files into the \Apache directory
- automatically, use one the following nmake commands (see above):
nmake /f Makefile.nt installr (for release build)
-nmake /f Makefile.nt installd (for debug build)
-This will install the following:
- -\Apache\Apache.exe - Apache executable
- \Apache\ApacheCore.dll - Main Apache shared library
- \Apache\modules\ApacheModule*.dll - Optional Apache
- modules (7 files)
- \Apache\conf - Empty configuration directory
- \Apache\logs - Empty logging directory
-If you do not have nmake, or wish to install in a different directory, - be sure to use a similar naming scheme.
- -The first step is to set up Apache's configuration files. Default
- configuration files for Windows are located in the conf
- subdirectory in the Apache distribution, and are named
- httpd.conf-dist-win, access.conf-dist-win
- and srm.conf-dist-win. Move these into
- \Apache\conf, and rename them httpd.conf,
- access.conf and srm.conf, respectively.
Configuring Apache is nearly identical to the Unix version of Apache, - so most of the standard Apache documentation is - applicable. A few things are, however, different, or new:
- -Because Apache for Windows is multithreaded, it does not use a - separate process for each request, as Apache does with - Unix. Instead there are usually only two Apache processes running: - a parent process, and a child which handles the requests. Within - the child each request is handled by a separate thread. -
- - So the "process"-management directives are different: -
MaxRequestsPerChild
- - Like the Unix directive, this controls how many requests a
- process will serve before exiting. However, unlike Unix, a
- process serves all the requests at once, not just one, so if
- this is set, it is recommended that a very high number is
- used. The recommended default, MaxRequestsPerChild
- 0, does not cause the process to ever exit.
-
ThreadsPerChild -
- This directive is new, and tells the server how many threads it
- should use. This is the maximum number of connections the server
- can handle at once; be sure to set this number high enough for
- your site if you get a lot of hits. The recommended default is
- ThreadsPerChild 50.
The directives that accept filenames as arguments now must use - Windows filenames instead of Unix ones. However, because Apache - uses Unix-style names internally, you must use forward slashes, not - backslashes. Drive letters can be used; if omitted, the drive with - the Apache executable will be assumed.
-Apache for Windows contains the ability to load modules at runtime,
- without recompiling the server. If Apache is compiled normally, it
- will install a number of optional modules in the
- \Apache\modules directory. To activate these, or other
- modules, the new LoadModule
- directive must be used. For example, to activate the status module,
- use the following (in addition to the status-activating directives
- in access.conf):
- LoadModule status_module modules/ApacheModuleStatus.dll --
Information on creating loadable - modules is also available.
-Apache can also load ISAPI Extensions (i.e., Internet Server - Applications), such as those used by Microsoft's IIS, and other - Windows servers. More information - is available. -
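-
-Putting the Windows-specific directives described above together, a minimal
-httpd.conf fragment might look like the following (the values are the
-recommended defaults quoted above; the path and module name are only
-examples):
-
-    MaxRequestsPerChild 0
-    ThreadsPerChild 50
-    DocumentRoot C:/Apache/htdocs
-    LoadModule status_module modules/ApacheModuleStatus.dll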
Once Apache is configured correctly, it is nearly ready to be
-run. However, we recommend you copy the icons and
-htdocs subdirectories from the Apache distribution to
-\Apache. The latter is especially important, as it contains
-the document root (what the server actually serves).
-
-
Apache can be executed in one of two ways, directly from the command - line, or as a Windows NT service. To run it from the command line, use - the following command: -
- C:\Apache> apache -s -- -
Apache will then execute, and will remain running until it is - exited. To use Apache as a Windows NT service, use the following:
-- C:\Apache> apache -i --
Then open the Services control panel, and start the Apache service.
- -If you installed Apache in a server root other than
- \Apache, you must use the -f command-line
- option to specify the httpd.conf file, or the -d option
- to specify the server root.
- - This script performs a very simple search across the Apache - documentation for any single case-insensitive word. No combinations, - wildcards, regular expressions, word-stubbing, or other fancy options - are supported; this is just to help you find topics quickly. Only - those pages which include the exact word you type will be - listed. -
-- Documents containing the search word are not listed in any - sort of priority order. -
-\n Sorry, no matches found.\n
\n"); - last QUERY; - } - # - # Found an entry, so turn the hash value (a comma-separated list - # of relative file names) into an array for display. - # Incidentally, tell the user how many there are. - # - @files = split (/,/, $Index{$word}); - printf ("Total of %d match", scalar (@files)); - # - # Be smart about plurals. - # - if (scalar (@files) != 1) { - printf ("es") ; - } - printf (" found.\n
\n"); - # - # Right. Now display the files as they're listed. - # - printf ("
- <Directory>, <Location> and <Files> can contain
-directives which only apply to specified directories, URLs or files
-respectively. Also .htaccess files can be used inside a directory to
-apply directives to that directory. This document explains how these
-different sections differ and how they relate to each other when
-Apache decides which directives apply for a particular directory or
-request URL.
-
-<Directory> is also allowed in
-<Location> (except a sub-<Files>
-section, but the code doesn't test for that, Lars has an open bug
-report on that). Semantically however some things, and the most
-notable is AllowOverrides, make no sense in
-<Location>. The same for
-<Files> -- syntactically everything is fine, but
-semantically some things are different.
-
-<Directory> (except regular expressions) and
- .htaccess done simultaneously (with .htaccess overriding
- <Directory>)
-
-<DirectoryMatch>, and
- <Directory> with regular expressions
-
-<Files> and <FilesMatch> done simultaneously
- <Location> and <LocationMatch> done simultaneously
- <Directory>, each group is processed in
-the order that they appear in the configuration
-files. <Directory> (group 1 above) is processed in
-the order shortest directory component to longest. If multiple
-<Directory> sections apply to the same directory
-they they are processed in the configuration file order. The
-configuration files are read in the order httpd.conf, srm.conf and
-access.conf. Configurations included via the Include
-directive will be treated as if they were inside the including file
-at the location of the Include directive.
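-
-As an illustration of the ordering rules above (the paths and directives
-are hypothetical), consider:
-
-    <Directory /home/web>
-    Options None
-    </Directory>
-
-    <Directory /home/web/private>
-    Options Indexes
-    </Directory>
-
-For a request mapped to /home/web/private, the section for /home/web is
-merged first and the section for /home/web/private second, so the more
-specific section (and any .htaccess file permitted there) has the last word.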
-
-
-
-Sections inside <VirtualHost> sections are applied
-after the corresponding sections outside the virtual host
-definition. This allows virtual hosts to override the main server
-configuration. (Note: this only works correctly from 1.2.2 and 1.3a2
-onwards. Before those releases sections inside virtual hosts were
-applied before the main server).
-
-
- -
<Directory> and/or
- <Files>.
-<Location>
-<Directory>. This is
- a legacy mistake because the proxy existed prior to
- <Location>. A future version of the config
- language should probably switch this to
- <Location>.
-- -Another note: -
- -
<Location>/<LocationMatch>
- sequence performed just before the name translation phase (where
- Aliases and DocumentRoots are used to
- map URLs to filenames). The results of this sequence are
- completely thrown away after the translation has completed.
-You will notice many httpd executables running on your system,
-but you should not send signals to any of them except the parent, whose
-pid is in the PidFile. That is to
-say you shouldn't ever need to send signals to any process except the
-parent. There are three signals that you can send the parent:
-TERM, HUP, and USR1, which will
-be described in a moment.
-
-
To send a signal to the parent you should issue a command such as: -
- -You can read about its progress by issuing: - -- kill -TERM `cat /usr/local/apache/logs/httpd.pid` -
- -Modify those examples to match your -ServerRoot and -PidFile settings. - -- tail -f /usr/local/apache/logs/error_log -
As of Apache 1.3 we provide a script src/support/apachectl
-which can be used to start, stop, and restart Apache. It may need a
-little customization for your system, see the comments at the top of
-the script.
-
-
Sending the TERM signal to the parent causes it to
-immediately attempt to kill off all of its children. It may take it
-several seconds to complete killing off its children. Then the
-parent itself exits. Any requests in progress are terminated, and no
-further requests are served.
-
-
Sending the HUP signal to the parent causes it to kill off
-its children like in TERM but the parent doesn't exit. It
-re-reads its configuration files, and re-opens any log files.
-Then it spawns a new set of children and continues
-serving hits.
-
-
Users of the
-status module
-will notice that the server statistics are
-set to zero when a HUP is sent.
-
-
Note: If your configuration file has errors in it when you issue a -restart then your parent will not restart, it will exit with an error. -See below for a method of avoiding this. - -
Note: prior to release 1.2b9 this code is quite unstable and -shouldn't be used at all. - -
The USR1 signal causes the parent process to advise
-the children to exit after their current request (or to exit immediately
-if they're not serving anything). The parent re-reads its configuration
-files and re-opens its log files. As each child dies off the parent
-replaces it with a child from the new generation of the
-configuration, which begins serving new requests immediately.
-
-
This code is designed to always respect the -MaxClients, -MinSpareServers, -and MaxSpareServers settings. -Furthermore, it respects StartServers -in the following manner: if after one second at least StartServers new -children have not been created, then create enough to pick up the slack. -This is to say that the code tries to maintain both the number of children -appropriate for the current load on the server, and respect your wishes -with the StartServers parameter. - -
Users of the
-status module
-will notice that the server statistics
-are not set to zero when a USR1 is sent. The code
-was written to both minimize the time in which the server is unable to serve
-new requests (they will be queued up by the operating system, so they're
-not lost in any event) and to respect your tuning parameters. In order
-to do this it has to keep the scoreboard used to keep track
-of all children across generations.
-
-
The status module will also use a G to indicate those
-children which are still serving requests started before the graceful
-restart was given.
-
-
At present there is no way for a log rotation script using
-USR1 to know for certain that all children writing the
-pre-restart log have finished. We suggest that you use a suitable delay
-after sending the USR1 signal before you do anything with the
-old log. For example if most of your hits take less than 10 minutes to
-complete for users on low bandwidth links then you could wait 15 minutes
-before doing anything with the old log.
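-
-A hedged sketch of such a rotation (the paths, the rotated-log name and the
-15 minute delay are illustrative, not prescriptive):
-
-    mv /usr/local/apache/logs/access_log /usr/local/apache/logs/access_log.old
-    kill -USR1 `cat /usr/local/apache/logs/httpd.pid`
-    sleep 900     # children serving pre-restart requests finish writing
-    # now compress or otherwise process access_log.old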
-
-
Note: If your configuration file has errors in it when you issue a -restart then your parent will not restart, it will exit with an error. -In the case of graceful -restarts it will also leave children running when it exits. (These are -the children which are "gracefully exiting" by handling their last request.) -This will cause problems if you attempt to restart the server -- it will -not be able to bind to its listening ports. At present the only work -around is to check the syntax of your files before doing a restart. The -easiest way is to just run httpd as a non-root user. If there are no -errors it will attempt to open its sockets and logs and fail because it's -not root (or because the currently running httpd already has those ports -bound). If it fails for any other reason then it's probably a config file -error and the error should be fixed before issuing the graceful restart. - -
Prior to Apache 1.2b9 there were several race conditions -involving the restart and die signals (a simple description of race -condition is: a time-sensitive problem, as in if something happens at just -the wrong time it won't behave as expected). For those architectures that -have the "right" feature set we have eliminated as many as we can. -But it should be noted that there still do exist race conditions on -certain architectures. - -
Architectures that use an on disk
-ScoreBoardFile
-have the potential to corrupt their scoreboards. This can result in
-the "bind: Address already in use" (after HUP) or
-"long lost child came home!" (after USR1). The former is
-a fatal error, while the latter just causes the server to lose a scoreboard
-slot. So it might be advisable to use graceful restarts, with
-an occasional hard restart. These problems are very difficult to work
-around, but fortunately most architectures do not require a scoreboard file.
-See the ScoreBoardFile documentation for a method to determine if your
-architecture uses it.
-
-
NEXT and MACHTEN (68k only) have small race
-conditions
-which can cause a restart/die signal to be lost, but should not cause the
-server to do anything otherwise problematic.
-
-
-
All architectures have a small race condition in each child involving -the second and subsequent requests on a persistent HTTP connection -(KeepAlive). It may exit after reading the request line but before -reading any of the request headers. There is a fix that was discovered -too late to make 1.2. In theory this isn't an issue because the KeepAlive -client has to expect these events because of network latencies and -server timeouts. In practice it doesn't seem to affect anything either --- in a test case the server was restarted twenty times per second and -clients successfully browsed the site without getting broken images or -empty documents. - - - - diff --git a/docs/manual/suexec.html.en b/docs/manual/suexec.html.en deleted file mode 100644 index 8b3a1caad8b..00000000000 --- a/docs/manual/suexec.html.en +++ /dev/null @@ -1,506 +0,0 @@ - - -
--
-The suEXEC feature -- introduced in Apache 1.2 -- provides -Apache users the ability to run CGI and SSI -programs under user IDs different from the user ID of the calling web-server. -Normally, when a CGI or SSI program executes, it runs as the same user who is -running the web server. -
- --Used properly, this feature can reduce considerably the security risks involved -with allowing users to develop and run private CGI or SSI programs. However, -if suEXEC is improperly configured, it can cause any number of problems and -possibly create new holes in your computer's security. If you aren't familiar -with managing setuid root programs and the security issues they present, we -highly recommend that you not consider using suEXEC. -
- - - --Before jumping head-first into this document, you should be aware of the -assumptions made on the part of the Apache Group and this document. -
-
-First, it is assumed that you are using a UNIX-derived operating system that
-is capable of setuid and setgid operations.
-All command examples are given in this regard. Other platforms, if they are
-capable of supporting suEXEC, may differ in their configuration.
-
- --Second, it is assumed you are familiar with some basic concepts of your -computer's security and its administration. This involves an understanding -of setuid/setgid operations and the various effects they -may have on your system and its level of security. -
- --Third, it is assumed that you are using an unmodified -version of suEXEC code. All code for suEXEC has been carefully scrutinized and -tested by the developers as well as numerous beta testers. Every precaution has -been taken to ensure a simple yet solidly safe base of code. Altering this -code can cause unexpected problems and new security risks. It is -highly recommended you not alter the suEXEC code unless you -are well versed in the particulars of security programming and are willing to -share your work with the Apache Group for consideration. -
- --Fourth, and last, it has been the decision of the Apache Group to -NOT make suEXEC part of the default installation of Apache. -To this end, suEXEC configuration is a manual process requiring of the -administrator careful attention to details. It is through this process -that the Apache Group hopes to limit suEXEC installation only to those -who are determined to use it. -
- --Still with us? Yes? Good. Let's move on! -
- - - --Before we begin configuring and installing suEXEC, we will first discuss -the security model you are about to implement. By doing so, you may -better understand what exactly is going on inside suEXEC and what precautions -are taken to ensure your system's security. -
- --suEXEC is based on a setuid "wrapper" program that is -called by the main Apache web server. This wrapper is called when an HTTP -request is made for a CGI or SSI program that the administrator has designated -to run as a userid other than that of the main server. When such a request -is made, Apache provides the suEXEC wrapper with the program's name and the -user and group IDs under which the program is to execute. -
- --The wrapper then employs the following process to determine success or -failure -- if any one of these conditions fail, the program logs the failure -and exits with an error, otherwise it will continue: -
- The wrapper will only execute if it is given the proper number of arguments. - The proper argument format is known to the Apache web server. If the wrapper - is not receiving the proper number of arguments, it is either being hacked, or - there is something wrong with the suEXEC portion of your Apache binary. --
- This is to ensure that the user executing the wrapper is truly a user of the system. --
- Is this user the user allowed to run this wrapper? Only one user (the Apache - user) is allowed to execute this program. --
- Does the target program contain a leading '/' or have a '..' backreference? These - are not allowed; the target program must reside within the Apache webspace. --
- Does the target user exist? --
- Does the target group exist? --
- Presently, suEXEC does not allow 'root' to execute CGI/SSI programs. --
- The minimum user ID number is specified during configuration. This allows you - to set the lowest possible userid that will be allowed to execute CGI/SSI programs. - This is useful to block out "system" accounts. --
- Presently, suEXEC does not allow the 'root' group to execute CGI/SSI programs. --
- The minimum group ID number is specified during configuration. This allows you - to set the lowest possible groupid that will be allowed to execute CGI/SSI programs. - This is useful to block out "system" groups. --
- Here is where the program becomes the target user and group via setuid and setgid - calls. The group access list is also initialized with all of the groups of which - the user is a member. --
- If it doesn't exist, it can't very well contain files. --
- If the request is for a regular portion of the server, is the requested directory - within the server's document root? If the request is for a UserDir, is the requested - directory within the user's document root? --
- We don't want to open up the directory to others; only the owner user may be able
- to alter this directory's contents.
-
- If it doesn't exist, it can't very well be executed.
-
- We don't want to give anyone other than the owner the ability to change the program. --
- We do not want to execute programs that will then change our UID/GID again. --
- Is the user the owner of the file? --
- suEXEC cleans the process' environment by establishing a safe execution PATH (defined - during configuration), as well as only passing through those variables whose names - are listed in the safe environment list (also created during configuration). --
- Here is where suEXEC ends and the target program begins. --
-
-This is the standard operation of the suEXEC wrapper's security model.
-It is somewhat stringent and can impose new limitations and guidelines for
-CGI/SSI design, but it was developed carefully step-by-step with security
-in mind.
-
- --For more information as to how this security model can limit your possibilities -in regards to server configuration, as well as what security risks can be avoided -with a proper suEXEC setup, see the "Beware the Jabberwock" -section of this document. -
- - - --Here's where we begin the fun. The configuration and installation of suEXEC is -a four step process: edit the suEXEC header file, compile suEXEC, place the -suEXEC binary in its proper location, and configure Apache for use with suEXEC. -
- -
-EDITING THE SUEXEC HEADER FILE
-- From the top-level of the Apache source tree, type:
-cd support [ENTER]
-
-Edit the suexec.h file and change the following macros to
-match your local Apache installation.
-
-From support/suexec.h -
- /* - * HTTPD_USER -- Define as the username under which Apache normally - * runs. This is the only user allowed to execute - * this program. - */ - #define HTTPD_USER "www" - - /* - * UID_MIN -- Define this as the lowest UID allowed to be a target user - * for suEXEC. For most systems, 500 or 100 is common. - */ - #define UID_MIN 100 - - /* - * GID_MIN -- Define this as the lowest GID allowed to be a target group - * for suEXEC. For most systems, 100 is common. - */ - #define GID_MIN 100 - - /* - * USERDIR_SUFFIX -- Define to be the subdirectory under users' - * home directories where suEXEC access should - * be allowed. All executables under this directory - * will be executable by suEXEC as the user so - * they should be "safe" programs. If you are - * using a "simple" UserDir directive (ie. one - * without a "*" in it) this should be set to - * the same value. suEXEC will not work properly - * in cases where the UserDir directive points to - * a location that is not the same as the user's - * home directory as referenced in the passwd file. - * - * If you have VirtualHosts with a different - * UserDir for each, you will need to define them to - * all reside in one parent directory; then name that - * parent directory here. IF THIS IS NOT DEFINED - * PROPERLY, ~USERDIR CGI REQUESTS WILL NOT WORK! - * See the suEXEC documentation for more detailed - * information. - */ - #define USERDIR_SUFFIX "public_html" - - /* - * LOG_EXEC -- Define this as a filename if you want all suEXEC - * transactions and errors logged for auditing and - * debugging purposes. - */ - #define LOG_EXEC "/usr/local/apache/logs/cgi.log" /* Need me? */ - - /* - * DOC_ROOT -- Define as the DocumentRoot set for Apache. This - * will be the only hierarchy (aside from UserDirs) - * that can be used for suEXEC behavior. - */ - #define DOC_ROOT "/usr/local/apache/htdocs" - - /* - * SAFE_PATH -- Define a safe PATH environment to pass to CGI executables. - * - */ - #define SAFE_PATH "/usr/local/bin:/usr/bin:/bin" -- - -
-COMPILING THE SUEXEC WRAPPER
-You now need to compile the suEXEC wrapper. At the shell command prompt,
-type: cc suexec.c -o suexec [ENTER].
-This should create the suexec wrapper executable.
-
-COMPILING APACHE FOR USE WITH SUEXEC
-By default, Apache is compiled to look for the suEXEC wrapper in the following
-location.
-
-From src/httpd.h -
- /* The path to the suEXEC wrapper */ - #define SUEXEC_BIN "/usr/local/apache/sbin/suexec" -- - -
-If your installation requires location of the wrapper program in a different -directory, edit src/httpd.h and recompile your Apache server. -See Compiling and Installing Apache for more -info on this process. -
- -
-COPYING THE SUEXEC BINARY TO ITS PROPER LOCATION
-Copy the suexec executable created in the
-exercise above to the defined location for SUEXEC_BIN.
-
-cp suexec /usr/local/apache/sbin/suexec [ENTER]
-
-In order for the wrapper to set the user ID, it must be installed as owner
-root and must have the setuserid execution bit
-set in its file mode. If you are not running a root
-user shell, switch to one now and execute the following commands.
-
- -
-chown root /usr/local/apache/sbin/suexec [ENTER]
-chmod 4711 /usr/local/apache/sbin/suexec [ENTER]
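-
-As a quick sanity check (the owner and mode are the point here; the size
-and date will differ on your system), a listing of the installed wrapper
-should now show it setuid root:
-
-ls -l /usr/local/apache/sbin/suexec [ENTER]
--rws--x--x  1 root  ...  suexec
-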
-
-After properly installing the suexec wrapper
-executable, you must kill and restart the Apache server. A simple
-kill -1 `cat httpd.pid` will not be enough.
-Upon startup of the web-server, if Apache finds a properly configured
-suexec wrapper, it will print the following message to
-the console:
-
-Configuring Apache for use with suexec wrapper.
-
-If you don't see this message at server startup, the server is most -likely not finding the wrapper program where it expects it, or the -executable is not installed setuid root. Check -your installation and try again. -
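-
-A full stop and start might look like the following (the paths assume
-the default server layout used elsewhere in this document; adjust them
-for your installation):
-
-kill `cat /usr/local/apache/logs/httpd.pid` [ENTER]
-/usr/local/apache/sbin/httpd [ENTER]
-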
-
-
-One way to use suEXEC is through the
-User and
-Group directives in
-VirtualHost
-definitions. By setting these directives to values different from the
-main server user ID, all requests for CGI resources will be executed as
-the User and Group defined for that
-<VirtualHost>. If only one or
-neither of these directives is specified for a
-<VirtualHost>, the main
-server user ID is assumed.
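-
-A minimal sketch of such a VirtualHost (the address, hostname, user and
-group are hypothetical) might look like:
-
-     <VirtualHost 111.22.33.44>
-     ServerName www.company1.com
-     DocumentRoot /usr/local/apache/htdocs/company1
-     User company1
-     Group company1grp
-     </VirtualHost>
-
-CGI requests served by this virtual host are then handed to the suEXEC
-wrapper and executed as company1/company1grp rather than as the main
-server user.
-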
-
-suEXEC can also be used to execute CGI programs as
-the user to which the request is being directed. This is accomplished by
-using the ~ character prefixing the user ID for whom
-execution is desired.
-The only requirements for this feature to work are that CGI
-execution is enabled for the user and that the script passes the
-security checks described above.
-
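-
-For example (hostname, user and script name are hypothetical), a request for
-
-     http://www.company1.com/~ralph/cgi-bin/hello.cgi
-
-would run hello.cgi as the user ralph, provided CGI execution is enabled
-under ralph's public_html directory (the USERDIR_SUFFIX above) and the
-script passes the wrapper's security checks.
-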
- -
-The suEXEC wrapper will write log information to the location defined in
-suexec.h, as indicated above. If you feel you have
-configured and installed the wrapper properly, have a look at this log
-and the error_log for the server to see where you may have gone astray.
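-
-For instance, the last few entries of both logs can be inspected with
-commands such as (the cgi.log path matches the LOG_EXEC setting shown
-earlier; the error_log path depends on your ErrorLog directive):
-
-tail /usr/local/apache/logs/cgi.log [ENTER]
-tail /usr/local/apache/logs/error_log [ENTER]
-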
-
-NOTE! This section may not be complete. For the latest -revision of this section of the documentation, see the Apache Group's -Online Documentation -version. -
- --There are a few points of interest regarding the wrapper that can cause -limitations on server setup. Please review these before submitting any -"bugs" regarding suEXEC. -
- For security and efficiency reasons, all suexec requests must
- remain within either a top-level document root for virtual
- host requests, or one top-level personal document root for
- userdir requests. For example, if you have four VirtualHosts
- configured, you would need to structure all of your VHosts'
- document roots off of one main Apache document hierarchy to
- take advantage of suEXEC for VirtualHosts. (A rough sketch
- follows after these points.)
-
- The SAFE_PATH define can be a dangerous thing to change. Make certain every
- path you include in this define is a trusted
- directory. You don't want to open people up to having someone
- from across the world running a trojan horse on them.
-
- Altering the suEXEC code itself can, again, cause Big Trouble if you try
- this without knowing what you are doing. Stay away from it
- if at all possible.
-
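-
-As a rough sketch of the hierarchy point above (hostnames and paths are
-hypothetical), with the default DOC_ROOT of /usr/local/apache/htdocs
-every VirtualHost's DocumentRoot would need to sit underneath it:
-
-     <VirtualHost 111.22.33.44>
-     ServerName www.company1.com
-     DocumentRoot /usr/local/apache/htdocs/company1
-     </VirtualHost>
-
-     <VirtualHost 111.22.33.45>
-     ServerName www.company2.com
-     DocumentRoot /usr/local/apache/htdocs/company2
-     </VirtualHost>
-
-A DocumentRoot outside that hierarchy, such as /export/web/company3,
-would fail the wrapper's document root check.
-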
-When using a large number of Virtual Hosts, Apache may run out of available
-file descriptors (sometimes called file handles) if each Virtual
-Host specifies different log files.
-The total number of file descriptors used by Apache is one for each distinct
-error log file, one for every other log file directive, plus 10-20 for
-internal use. Unix operating systems limit the number of file descriptors that
-may be used by a process; the limit is typically 64, and may usually be
-increased up to a large hard-limit.
-
-Although Apache attempts to increase the limit as required, this
-may not work on every system (for example, if the setrlimit() call is
-unavailable or does not behave as expected, or if the number of
-descriptors needed exceeds the hard limit). In that case you can raise
-the soft limit yourself before starting Apache, using a small wrapper
-script such as:
-
-#!/bin/sh
-ulimit -S -n 100
-exec httpd
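-
-If you are unsure of your current limits, most sh-compatible shells
-report the soft and hard descriptor limits with (csh users have the
-limit builtin instead):
-
-ulimit -S -n [ENTER]
-ulimit -H -n [ENTER]
-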
-
-Please see the
-Descriptors and Apache
-document, which contains further details about file descriptor problems and how
-they can be solved on your operating system.
-
- - - - diff --git a/docs/manual/vhosts/index.html.en b/docs/manual/vhosts/index.html.en deleted file mode 100644 index be02eaa4d71..00000000000 --- a/docs/manual/vhosts/index.html.en +++ /dev/null @@ -1,64 +0,0 @@ - - - -The term Virtual Host refers to the practice of maintaining -more than one server on one machine, as differentiated by their apparent -hostname. For example, it is often desirable for companies sharing a -web server to have their own domains, with web servers accessible as -www.company1.com and www.company2.com, -without requiring the user to know any extra path information.
-
-Apache was one of the first servers to support IP-based
-virtual hosts right out of the box. Versions 1.1 and later of
-Apache support both IP-based and name-based virtual hosts (vhosts).
-The latter variant of virtual hosts is sometimes also called host-based or
-non-IP virtual hosts.
- -Below is a list of documentation pages which explain all details -of virtual host support in Apache version 1.3 and later.
- -Folks trying to debug their virtual host configuration may find the
-Apache -S command line switch useful. It will dump out a
-description of how Apache parsed the configuration file. Careful
-examination of the IP addresses and server names may help uncover
-configuration mistakes.
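-
-For example (the path is illustrative; invoke the httpd binary from your
-own installation, adding -f if your configuration file is in a
-non-default location):
-
-/usr/local/apache/sbin/httpd -S
-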
-
-
-
-
diff --git a/docs/manual/vhosts/name-based.html.en b/docs/manual/vhosts/name-based.html.en
deleted file mode 100644
index f02e4d85acf..00000000000
--- a/docs/manual/vhosts/name-based.html.en
+++ /dev/null
@@ -1,146 +0,0 @@
-
-
While the approach with IP-based virtual hosts works very well,
-it is not the most elegant solution, because a dedicated IP address
-is needed for every virtual host and it is hard to implement on some
-machines. The HTTP/1.1 protocol contains a method for the
-server to identify what name it is being addressed as. Apache 1.1 and
-later support this approach as well as the traditional
-IP-address-per-hostname method.
The benefits of using the new name-based virtual host support are a
-practically unlimited number of servers, ease of configuration and use, and
-the fact that it requires no additional hardware or software.
-The main disadvantage is that the client must support this part of the
-protocol. The latest versions of most browsers do, but there are still
-old browsers in use that do not. This can cause problems, although a possible
-solution is addressed below.
- -Using the new virtual hosts is quite easy, and superficially looks
-like the old method. You simply add to one of the Apache configuration
-files (most likely httpd.conf or srm.conf)
-code similar to the following:
- NameVirtualHost 111.22.33.44 - - <VirtualHost 111.22.33.44> - ServerName www.domain.tld - DocumentRoot /web/domain - </VirtualHost> -- -
The notable difference between IP-based and name-based virtual host
-configuration is the
-NameVirtualHost
-directive, which specifies an IP address that should be used as a target for
-name-based virtual hosts.
-
-
Of course, any additional directives can (and should) be placed
-into the <VirtualHost> section. To make this work,
-all that is needed is to make sure that the name
-www.domain.tld is an alias (CNAME) pointing to the IP address
-111.22.33.44.
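-
-A sketch with a second name-based host on the same address (the extra
-hostname and path are hypothetical) simply adds another
-<VirtualHost> block for the same IP:
-
-     NameVirtualHost 111.22.33.44
-
-     <VirtualHost 111.22.33.44>
-     ServerName www.domain.tld
-     DocumentRoot /web/domain
-     </VirtualHost>
-
-     <VirtualHost 111.22.33.44>
-     ServerName www.otherdomain.tld
-     DocumentRoot /web/otherdomain
-     </VirtualHost>
-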
Note: When you specify an IP address in a NameVirtualHost
-directive, requests to that IP address will only ever be served
-by matching <VirtualHost>s. The "main server" will never
-be served from the specified IP address.
-
-
Additionally, many servers may wish to be accessible by more than
-one name. For example, the example server might want to be accessible
-as domain.tld, or www2.domain.tld, assuming
-the IP addresses pointed to the same server. In fact, one might want it
-so that all addresses at domain.tld were picked up by the
-server. This is possible with the
-ServerAlias
-directive, placed inside the <VirtualHost> section. For
-example:
- ServerAlias domain.tld *.domain.tld -- -
Note that you can use * and ? as wild-card
-characters.
You also might need ServerAlias if you are
-serving local users who do not always include the domain name.
-For example, if local users are
-familiar with typing "www" or "www.foobar" then you will need to add
-ServerAlias www www.foobar. It isn't possible for the
-server to know what domain the client uses for their name resolution
-because the client doesn't provide that information in the request.
As mentioned earlier, there are still some clients in use that
-do not send the required data for the name-based virtual hosts to work
-properly. These clients will always be sent the pages from the
-primary name-based virtual host (the first virtual host
-appearing in the configuration file for a specific IP address).
- -There is a possible workaround with the
-ServerPath
-directive, albeit a slightly cumbersome one:
Example configuration: - -
- NameVirtualHost 111.22.33.44 - - <VirtualHost 111.22.33.44> - ServerName www.domain.tld - ServerPath /domain - DocumentRoot /web/domain - </VirtualHost> -- -
What does this mean? It means that a request for any URI beginning
-with "/domain" will be served from the virtual host
-www.domain.tld. This means that the pages can be accessed as
-http://www.domain.tld/domain/ for all clients, although
-clients sending a Host: header can also access it as
-http://www.domain.tld/.
In order to make this work, put a link on your primary virtual host's page
-to http://www.domain.tld/domain/.
-Then, in the virtual host's pages, be sure to use either purely
-relative links (e.g. "file.html" or
-"../icons/image.gif") or links containing the prefacing
-/domain/
-(e.g. "http://www.domain.tld/domain/misc/file.html" or
-"/domain/misc/file.html").
- -This requires a bit of -discipline, but adherence to these guidelines will, for the most part, -ensure that your pages will work with all browsers, new and old.
- -See also: ServerPath configuration -example
- - - -