============================================================================
-The current project management committe of the Apache HTTP Server
+The current project management committee of the Apache HTTP Server
project (as of March, 2011) is:
Aaron Bannert André Malo Astrid Stolper
*) mod_http2: Fixed interaction with mod_reqtimeout. A loaded mod_http2 was disabling the
ssl handshake timeouts. Also, fixed a mistake of the last version that made `H2Direct`
- always `on`, irregardless of configuration. Found and reported by
+ always `on`, regardless of configuration. Found and reported by
<Armin.Abfalterer@united-security-providers.ch> and
<Marcial.Rion@united-security-providers.ch>. [Stefan Eissing]
* The translate_name hook goes away
Wrowe altogether disagrees. translate_name today even operates
- on URIs ... this mechansim needs to be preserved.
+ on URIs ... this mechanism needs to be preserved.
* The doc for map_to_storage is totally opaque to me. It has
something to do with filesystems, but it also talks about
and in-your-face.) DocumentRoot unset would be accepted [and would
not permit content to be served, only virtual resources such as
server-info or server-status.
- This proposed change would _not_ depricate Alias.
+ This proposed change would _not_ deprecate Alias.
striker: See the thread starting with Message-ID:
JLEGKKNELMHCJPNMOKHOGEEJFBAA.striker@apache.org.
HTTP or SNMP?
jerenkrantz says: Yawn. Who cares.
- * Regex containers don't work in an intutive way
+ * Regex containers don't work in an intuitive way
Status: No one has come up with an efficient way to fix this
behavior. Dean has suggested getting rid of regex containers
completely.
All even numbered releases will be considered stable revisions.
-Stable revisions will retain forward compatiblity to the maximum
+Stable revisions will retain forward compatibility to the maximum
possible extent. Features may be added during minor revisions, and
features may be deprecated by making appropriate notations in the
documentation, but no features may be removed.
the modification of the MMN at any time in order to correct deficiencies
or shortcomings in the API. This means that modules from one development
release to another may not be binary compatible, or may not successfully
-compile without modification to accomodate the API changes.
+compile without modification to accommodate the API changes.
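For illustration, a third-party module that must build against more than one development drop can guard its use of newer interfaces on the module magic number; a minimal sketch (the minor number shown is hypothetical):

    #include "ap_mmn.h"

    #if AP_MODULE_MAGIC_AT_LEAST(20120211, 68)   /* hypothetical minor bump */
        /* use the newer API introduced by that bump */
    #else
        /* fall back to the older interface */
    #endif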
The only 'supported' development release at any time will be the most
recently released version. Developers will not be answering bug reports
of older development releases once a new release is available. It becomes
-the resposibility of the reporter to use the latest development version
+the responsibility of the reporter to use the latest development version
to confirm that any issue still exists.
Any new code, new API features or new ('experimental') modules may be
* operators)
*/
#define AP_EXPR_FLAG_SSL_EXPR_COMPAT 1
-/** Don't add siginificant request headers to the Vary response header */
+/** Don't add significant request headers to the Vary response header */
#define AP_EXPR_FLAG_DONT_VARY 2
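/* For illustration only (not part of this change): the usual pattern for
 * these flags in a directive handler, assuming the standard
 * ap_expr_parse_cmd()/ap_expr_exec() entry points; example_dir_conf and
 * the directive itself are hypothetical.
 */
typedef struct {
    ap_expr_info_t *cond;                /* hypothetical per-dir condition */
} example_dir_conf;

static const char *example_set_condition(cmd_parms *cmd, void *dconf,
                                         const char *arg)
{
    example_dir_conf *conf = dconf;
    const char *err = NULL;
    conf->cond = ap_expr_parse_cmd(cmd, arg, AP_EXPR_FLAG_DONT_VARY,
                                   &err, NULL);
    if (err) {
        return apr_psprintf(cmd->pool, "Cannot parse expression '%s': %s",
                            arg, err);
    }
    return NULL;
}
/* ...and per request, ap_expr_exec(r, conf->cond, &err) > 0 means the
 * expression evaluated to true. */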
/** Don't allow functions/vars that bypass the current request's access
* restrictions or would otherwise leak confidential information.
/** Function for looking up the provider function for a variable, operator
* or function in an expression.
- * @param parms The parameter struct, also determins where the result is
+ * @param parms The parameter struct, also determines where the result is
* stored.
* @return OK on success,
* !OK on failure,
* @param header the header value
*
* @return APR_SUCCESS.
- * @return ::APREQ_ERROR_BADSEQ if an unparseable character sequence appears.
+ * @return ::APREQ_ERROR_BADSEQ if an unparsable character sequence appears.
* @return ::APREQ_ERROR_MISMATCH if an rfc-cookie attribute appears in a
* netscape cookie header.
* @return ::APR_ENOTIMPL if an unrecognized rfc-cookie attribute appears.
AP_DECLARE(const char *) ap_add_loaded_module(module *mod, apr_pool_t *p,
const char *s);
/**
- * Remove a module fromthe chained modules list and the list of loaded modules
+ * Remove a module from the chained modules list and the list of loaded modules
* @param mod the module structure of the module to remove
*/
AP_DECLARE(void) ap_remove_loaded_module(module *mod);
* @param conf_pool The pconf pool
* @param temp_pool The temporary pool
* @param conftree Place to store the root node of the config tree
- * @return Error string on erro, NULL otherwise
+ * @return Error string on error, NULL otherwise
* @note If conf_pool == temp_pool, ap_build_config() will assume .htaccess
* context and use a lower maximum line length.
*/
/**
* Run the register hooks function for a specified module
- * @param m The module to run the register hooks function fo
+ * @param m The module to run the register hooks function for

* @param p The pool valid for the lifetime of the module
*/
AP_DECLARE(void) ap_register_hooks(module *m, apr_pool_t *p);
* @param c The current connection
* @param r The current request or NULL
* @param s The server/virtual host selected
- * @param protocol The protocol identifier we try to swicth to
+ * @param protocol The protocol identifier we try to switch to
* @return OK or DECLINED
*/
AP_DECLARE_HOOK(int,protocol_switch,(conn_rec *c, request_rec *r,
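/* For illustration only (not part of this change): a module offering a
 * hypothetical "example/1" protocol could implement this hook roughly as
 * follows and register it with ap_hook_protocol_switch().
 */
static int example_protocol_switch(conn_rec *c, request_rec *r,
                                   server_rec *s, const char *protocol)
{
    if (strcmp(protocol, "example/1") != 0) {
        return DECLINED;        /* not ours, let other modules decide */
    }
    /* ... install the filters/handlers that speak the new protocol ... */
    return OK;
}

static void example_register_hooks(apr_pool_t *p)
{
    ap_hook_protocol_switch(example_protocol_switch, NULL, NULL,
                            APR_HOOK_MIDDLE);
}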
/**
* Kill the current request
- * @param type Why the request is dieing
+ * @param type Why the request is dying
* @param r The current request
*/
AP_DECLARE(void) ap_die(int type, request_rec *r);
/**
* Check whether a connection is still established and has data available,
- * optionnaly consuming blank lines ([CR]LF).
+ * optionally consuming blank lines ([CR]LF).
* @param c The current connection
* @param bb The brigade to filter
* @param max_blank_lines Max number of blank lines to consume, or zero
* If "Satisfy any" is in effect, this hook may be skipped.
*
* @param r the current request
- * @return OK (allow acces), DECLINED (let later modules decide),
+ * @return OK (allow access), DECLINED (let later modules decide),
* or HTTP_... (deny access)
* @ingroup hooks
* @see ap_hook_check_access_ex
* This hook allows modules to affect the request immediately after the
* per-directory configuration for the request has been generated.
* @param r The current request
- * @return OK (allow acces), DECLINED (let later modules decide),
+ * @return OK (allow access), DECLINED (let later modules decide),
* or HTTP_... (deny access)
* @ingroup hooks
*/
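/* For illustration only (not part of this change): a hypothetical module
 * denying access based on an environment variable. The registration shown
 * assumes the auth-hook variant of ap_hook_check_access_ex() that takes an
 * extra AP_AUTH_INTERNAL_PER_CONF argument.
 */
static int example_check_access_ex(request_rec *r)
{
    /* hypothetical policy: deny whenever the "example-deny" env var is set */
    if (apr_table_get(r->subprocess_env, "example-deny") != NULL) {
        return HTTP_FORBIDDEN;      /* deny access */
    }
    return DECLINED;                /* let later modules decide */
}

static void example_register_hooks(apr_pool_t *p)
{
    ap_hook_check_access_ex(example_check_access_ex, NULL, NULL,
                            APR_HOOK_MIDDLE, AP_AUTH_INTERNAL_PER_CONF);
}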
#define AP_MAX_REG_MATCH 10
/**
- * APR_HAS_LARGE_FILES introduces the problem of spliting sendfile into
+ * APR_HAS_LARGE_FILES introduces the problem of splitting sendfile into
* multiple buckets, no greater than MAX(apr_size_t), and more granular
* than that in case the brigade code/filters attempt to read it directly.
* ### 16mb is an invention, no idea if it is reasonable.
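/* Illustrative sketch (not part of this change), assuming the AP_MAX_SENDFILE
 * cap that the comment above refers to: splitting a large file length into
 * chunks no bigger than that cap before creating buckets.
 */
static void example_split_length(apr_off_t flen)
{
    apr_off_t offset = 0;
    while (flen > 0) {
        apr_size_t chunk = (flen > AP_MAX_SENDFILE)
                           ? AP_MAX_SENDFILE : (apr_size_t)flen;
        /* ... create one file bucket covering [offset, offset + chunk) ... */
        offset += chunk;
        flen   -= chunk;
    }
}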
#if !APR_CHARSET_EBCDIC
/** linefeed */
#define LF 10
-/** carrige return */
+/** carriage return */
#define CR 13
-/** carrige return /Line Feed Combo */
+/** carriage return /Line Feed Combo */
#define CRLF "\015\012"
#else /* APR_CHARSET_EBCDIC */
/* For platforms using the EBCDIC charset, the transition ASCII->EBCDIC is done
/*
* Things which may vary per file-lookup WITHIN a request ---
* e.g., state of MIME config. Basically, the name of an object, info
- * about the object, and any other info we may ahve which may need to
+ * about the object, and any other info we may have which may need to
* change as we go poking around looking for it (e.g., overridden by
* .htaccess files).
*
* Unlike ap_pbase64decode(), this function allows encoded NULLs in the input to
* be retained by the caller, by inspecting the len argument after the call
* instead of using strlen(). A NULL terminator is still appended to the buffer
- * to faciliate string use (it is not included in len).
+ * to facilitate string use (it is not included in len).
*
* @param p The pool to allocate from
* @param encoded The encoded string
} authz_provider;
/* ap_authn_cache_store: Optional function for authn providers
- * to enable cacheing their lookups with mod_authn_cache
+ * to enable caching their lookups with mod_authn_cache
* @param r The request rec
* @param module Module identifier
* @param user User name to authenticate
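/* Illustrative sketch (not part of this change): how an authn provider
 * might use the optional function; the (r, module, user, realm, data)
 * argument order here is an assumption based on the parameters above.
 */
static APR_OPTIONAL_FN_TYPE(ap_authn_cache_store) *authn_cache_store = NULL;

static void example_retrieve_optional_fns(void)
{
    authn_cache_store = APR_RETRIEVE_OPTIONAL_FN(ap_authn_cache_store);
}

/* after a successful password lookup in the provider: */
static void example_cache_result(request_rec *r, const char *user,
                                 const char *hash)
{
    if (authn_cache_store != NULL) {
        authn_cache_store(r, "example", user, NULL, hash);
    }
}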
#define SERVER_IDLE_KILL 10 /* Server is cleaning up idle children. */
#define SERVER_NUM_STATUS 11 /* number of status settings */
-/* Type used for generation indicies. Startup and every restart cause a
+/* Type used for generation indices. Startup and every restart cause a
* new generation of children to be spawned. Children within the same
* generation share the same configuration information -- pointers to stuff
* created at config time in the parent are valid across children. However,
/* Scoreboard is now in 'local' memory, since it isn't updated once created,
* even in forked architectures. Child created-processes (non-fork) will
- * set up these indicies into the (possibly relocated) shmem records.
+ * set up these indices into the (possibly relocated) shmem records.
*/
typedef struct {
global_score *global;
/**
* @file util_fcgi.h
- * @brief FastCGI protocol defitions and support routines
+ * @brief FastCGI protocol definitions and support routines
*
* @defgroup APACHE_CORE_FASTCGI FastCGI Tools
* @ingroup APACHE_CORE
apr_pool_t *rebind_pool; /* frequently cleared pool for rebind data */
int must_rebind; /* The connection was last bound with other than binddn/bindpw */
request_rec *r; /* request_rec used to find this util_ldap_connection_t */
- apr_time_t last_backend_conn; /* the approximate time of the last backend LDAP requst */
+ apr_time_t last_backend_conn; /* the approximate time of the last backend LDAP request */
} util_ldap_connection_t;
typedef struct util_ldap_config_t {
const char *key;
apr_time_t expiry;
- /* first check whether we're cacheing for this module */
+ /* first check whether we're caching for this module */
dcfg = ap_get_module_config(r->per_dir_config, &authn_socache_module);
if (!configured || !dcfg->providers) {
return;
" on or off (default: off)"),
AP_INIT_FLAG("ISAPIAppendLogToQuery", ap_set_flag_slot,
(void *)APR_OFFSETOF(isapi_dir_conf, log_to_query),
- OR_FILEINFO, "Append Log requests are concatinated to the query args"
+ OR_FILEINFO, "Append Log requests are concatenated to the query args"
" on or off (default: on)"),
AP_INIT_FLAG("ISAPIFakeAsync", ap_set_flag_slot,
(void *)APR_OFFSETOF(isapi_dir_conf, fake_async),
isa->isapi_version = apr_pcalloc(p, sizeof(HSE_VERSION_INFO));
- /* TODO: These aught to become overrideable, so that we
+ /* TODO: These ought to become overridable, so that we
* assure a given isapi can be fooled into behaving well.
*
* The tricky bit, they aren't really a per-dir sort of
#define HSE_REQ_SEND_RESPONSE_HEADER 3
#define HSE_REQ_DONE_WITH_SESSION 4
-/* MS Extented methods to ISAPI ServerSupportFunction() HSE_code */
+/* MS Extended methods to ISAPI ServerSupportFunction() HSE_code */
#define HSE_REQ_MAP_URL_TO_PATH 1001 /* Emulated */
#define HSE_REQ_GET_SSPI_INFO 1002 /* Not Supported */
#define HSE_APPEND_LOG_PARAMETER 1003 /* Supported */
/* 304 does not contain Content-Type and mod_mime regenerates the
* Content-Type based on the r->filename. This would lead to original
- * Content-Type to be lost (overwriten by whatever mod_mime generates).
+ * Content-Type to be lost (overwritten by whatever mod_mime generates).
* We preserve the original Content-Type here. */
ap_set_content_type(r, apr_table_get(
cache->stale_handle->resp_hdrs, "Content-Type"));
{
/* XXX
* Consider a new config directive that enables loading specific cache
- * implememtations (like mod_cache_mem, mod_cache_file, etc.).
+ * implementations (like mod_cache_mem, mod_cache_file, etc.).
* Rather than using a LoadModule directive, admin would use something
* like CacheModule mem_cache_module | file_cache_module, etc,
* which would cause the appropriate cache module to be loaded.
* HTTP URI's (3.2.3) [host and scheme are insensitive]
* HTTP method (5.1.1)
* HTTP-date values (3.3.1)
- * 3.7 Media Types [exerpt]
+ * 3.7 Media Types [excerpt]
* The type, subtype, and parameter attribute names are case-
* insensitive. Parameter values might or might not be case-sensitive,
* depending on the semantics of the parameter name.
- * 4.20 Except [exerpt]
+ * 14.20 Expect [excerpt]
* Comparison of expectation values is case-insensitive for unquoted
* tokens (including the 100-continue token), and is case-sensitive for
* quoted-string expectation-extensions.
* HTTP URI's (3.2.3) [host and scheme are insensitive]
* HTTP method (5.1.1)
* HTTP-date values (3.3.1)
- * 3.7 Media Types [exerpt]
+ * 3.7 Media Types [excerpt]
* The type, subtype, and parameter attribute names are case-
* insensitive. Parameter values might or might not be case-sensitive,
* depending on the semantics of the parameter name.
- * 4.20 Except [exerpt]
+ * 14.20 Expect [excerpt]
* Comparison of expectation values is case-insensitive for unquoted
* tokens (including the 100-continue token), and is case-sensitive for
* quoted-string expectation-extensions.
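/* Illustrative sketch of the rule above (not part of this change): the
 * media type compares case-insensitively, while a parameter value such as
 * a boundary may be case-sensitive; the helper is hypothetical.
 */
static int example_is_expected_multipart(const char *type,
                                         const char *boundary,
                                         const char *expected_boundary)
{
    return ap_cstr_casecmp(type, "multipart/form-data") == 0 /* insensitive */
           && strcmp(boundary, expected_boundary) == 0;      /* sensitive */
}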
/* This mode of operation will open a temporary connection to the 'target'
* for each cache operation - this makes it safe against fork()
* automatically. This mode is preferred when running a local proxy (over
- * unix domain sockets) because overhead is negligable and it reduces the
+ * unix domain sockets) because overhead is negligible and it reduces the
* performance/stability danger of file-descriptor bloatage. */
#define SESSION_CTX_FLAGS 0
#endif
int status = ap_run_watchdog_need(s, w->name, 1,
w->singleton);
if (status == OK) {
- /* One of the modules returned OK to this watchog.
+ /* One of the modules returned OK to this watchdog.
* Mark it as active
*/
w->active = 1;
int status = ap_run_watchdog_need(s, w->name, 0,
w->singleton);
if (status == OK) {
- /* One of the modules returned OK to this watchog.
+ /* One of the modules returned OK to this watchdog.
* Mark it as active
*/
w->active = 1;
/*
** dav_fs_remove_locknull_state: Given a request, check to see if r->filename
-** is/was a lock-null resource. If so, return it to an existant state, i.e.
+** is/was a lock-null resource. If so, return it to an existent state, i.e.
** remove it from the list in the appropriate .DAV/locknull file.
*/
static dav_error * dav_fs_remove_locknull_state(
/* put a slash back on the end of the directory */
fsctx->path1.buf[fsctx->path1.cur_len - 1] = '/';
- /* these are all non-existant (files) */
+ /* these are all non-existent (files) */
fsctx->res1.exists = 0;
fsctx->res1.collection = 0;
memset(&fsctx->info1.finfo, 0, sizeof(fsctx->info1.finfo));
}
-/* Write a complete RESPONSE object out as a <DAV:repsonse> xml
+/* Write a complete RESPONSE object out as a <DAV:response> xml
element. Data is sent into brigade BB, which is auto-flushed into
the output filter stack for request R. Use POOL for any temporary
allocations.
/* ### RFC 2518 s. 8.11: If this resource is locked by locktoken,
* _all_ resources locked by locktoken are released. It does not say
- * resource has to be the root of an infinte lock. Thus, an UNLOCK
- * on any part of an infinte lock will remove the lock on all resources.
+ * resource has to be the root of an infinite lock. Thus, an UNLOCK
+ * on any part of an infinite lock will remove the lock on all resources.
*
* For us, if r->filename represents an indirect lock (part of an infinity lock),
* we must actually perform an UNLOCK on the direct lock for this resource.
if ((*p == '\'') || (*p == '"')) {
delim = *p++;
for (q = p; *q && *q != delim; ++q);
- /* No terminating delimiter found? Skip the boggus directive */
+ /* No terminating delimiter found? Skip the bogus directive */
if (*q != delim)
break;
} else {
rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
/* Don't extend the timeout in speculative mode, wait for
* the real (relevant) bytes to be asked later, within the
- * currently alloted time.
+ * currently allotted time.
*/
if (ccfg->cur_stage.rate_factor && rv == APR_SUCCESS
&& mode != AP_MODE_SPECULATIVE) {
* This ensures that it only influences normal http connections and not
* e.g. mod_ftp. We still process it first though, for the handshake stage
* to work with/before mod_ssl, but since it's disabled by default it won't
- * influence non-HTTP modules unless configured explicitely. Also, if
+ * influence non-HTTP modules unless configured explicitly. Also, if
* mod_reqtimeout used the pre_connection hook, it would be inserted on
* mod_proxy's backend connections, and we don't want this.
*/
}
else {
apr_size_t repl_len;
- /* acount for string before the match */
+ /* account for string before the match */
if (space_left <= regm[0].rm_so)
return APR_ENOMEM;
space_left -= regm[0].rm_so;
(((enc)!=XML_CHAR_ENCODING_NONE)&&((enc)!=XML_CHAR_ENCODING_ERROR))
/*
- * XXX: Check all those ap_assert()s ans replace those that should not happen
+ * XXX: Check all those ap_assert()s and replace those that should not happen
* XXX: with AP_DEBUG_ASSERT and those that may happen with proper error
* XXX: handling.
*/
APLOG_USE_MODULE(http);
-/* New Apache routine to map status codes into array indicies
+/* New Apache routine to map status codes into array indices
* e.g. 100 -> 0, 101 -> 1, 200 -> 2 ...
* The number of status lines must equal the value of
* RESPONSE_CODES (httpd.h) and must be listed in order.
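/* For illustration (not part of this change): with this mapping in place,
 * a status code resolves to its canonical status line. */
static const char *example_not_found_line(void)
{
    int idx = ap_index_of_response(HTTP_NOT_FOUND);  /* index into the table */
    (void)idx;
    return ap_get_status_line(HTTP_NOT_FOUND);       /* "404 Not Found" */
}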
apr_table_t *headers;
apr_time_t request_time;
- unsigned int chunked : 1; /* iff requst body needs to be forwarded as chunked */
+ unsigned int chunked : 1; /* iff request body needs to be forwarded as chunked */
unsigned int serialize : 1; /* iff this request is written in HTTP/1.1 serialization */
apr_off_t raw_bytes; /* RAW network bytes that generated this request - if known. */
};
mpm_type = H2_MPM_PREFORK;
mpm_module = m;
/* While http2 can work really well on prefork, it collides
- * today's use case for prefork: runnning single-thread app engines
+ * with today's use case for prefork: running single-thread app engines
* like php. If we restrict h2_workers to 1 per process, php will
* work fine, but browser will be limited to 1 active request at a
* time. */
/* We create a pool with its own allocator to be used for
* processing a request. This is the only way to have the processing
- * independant of its parent pool in the sense that it can work in
+ * independent of its parent pool in the sense that it can work in
* another thread. Also, the new allocator needs its own mutex to
* synchronize sub-pools.
*/
* reuse internal structures like memory pools.
* The wanted effect of this is that httpd does not try to clean up
* any dangling data on this connection when a request is done. Which
- * is unneccessary on a h2 stream.
+ * is unnecessary on a h2 stream.
*/
slave->keepalive = AP_CONN_CLOSE;
return ap_run_pre_connection(slave, csd);
/* We create a pool with its own allocator to be used for
* processing slave connections. This is the only way to have the
- * processing independant of its parent pool in the sense that it
+ * processing independent of its parent pool in the sense that it
* can work in another thread. Also, the new allocator needs its own
* mutex to synchronize sub-pools.
*/
apr_time_t request_time;
- unsigned int chunked : 1; /* iff requst body needs to be forwarded as chunked */
+ unsigned int chunked : 1; /* iff request body needs to be forwarded as chunked */
unsigned int serialize : 1; /* iff this request is written in HTTP/1.1 serialization */
};
log2n = h2_log2(N);
/* Now log2p is the max number of relevant bits, so that
- * log2p + log2n == mask_bits. We can uise a lower log2p
+ * log2p + log2n == mask_bits. We can use a lower log2p
* and have a shorter set encoding...
*/
log2pmax = h2_log2(ceil_power_of_2(maxP));
/* rfc7540, ch. 8.1.2.3:
* - if we have :authority, it overrides any Host header
- * - :authority MUST be ommited when converting h1->h2, so we
+ * - :authority MUST be omitted when converting h1->h2, so we
* might get a stream without, but then Host needs to be there */
if (!req->authority) {
const char *host = apr_table_get(req->headers, "Host");
break;
case NGHTTP2_RST_STREAM:
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, session->c, APLOGNO(03067)
- "h2_stream(%ld-%d): RST_STREAM by client, errror=%d",
+ "h2_stream(%ld-%d): RST_STREAM by client, error=%d",
session->id, (int)frame->hd.stream_id,
(int)frame->rst_stream.error_code);
stream = get_stream(session, frame->hd.stream_id);
&& (headers->status < 400)
&& (headers->status != 304)
&& h2_session_push_enabled(session)) {
- /* PUSH is possibe and enabled on server, unless the request
+ /* PUSH is possible and enabled on server, unless the request
* denies it, submit resources to push */
s = apr_table_get(headers->notes, H2_PUSH_MODE_NOTE);
if (!s || strcmp(s, "0")) {
ap_assert(stream->request == NULL);
if (stream->rtmp == NULL) {
/* This can only happen, if the stream has received no header
- * name/value pairs at all. The lastest nghttp2 version have become
+ * name/value pairs at all. The latest nghttp2 versions have become
* pretty good at detecting this early. In any case, we have
* to abort the connection here, since this is clearly a protocol error */
return APR_EINVAL;
const char *name, size_t nlen,
const char *value, size_t vlen);
-/* End the contruction of request headers */
+/* End the construction of request headers */
apr_status_t h2_stream_end_headers(h2_stream *stream, int eos, size_t raw_bytes);
*
* Finally, to keep certain connection level filters, such as ourselves and
* especially mod_ssl ones, from messing with our data, we need a filter
- * of our own to disble those.
+ * of our own to disable those.
*/
struct h2_bucket_beam;
return status;
}
else if (blen == 0) {
- /* brigade without data, does it have an EOS bucket somwhere? */
+ /* brigade without data, does it have an EOS bucket somewhere? */
*plen = 0;
*peos = h2_util_has_eos(bb, -1);
}
typedef enum {
H2_FIFO_OP_PULL, /* pull the element from the queue, ie discard it */
- H2_FIFO_OP_REPUSH, /* pull and immediatley re-push it */
+ H2_FIFO_OP_REPUSH, /* pull and immediately re-push it */
} h2_fifo_op_t;
typedef h2_fifo_op_t h2_fifo_peek_fn(void *head, void *ctx);
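/* Illustrative sketch (not part of this change): a peek callback that keeps
 * every element queued by re-pushing it, using the types defined above. */
static h2_fifo_op_t example_keep_all(void *head, void *ctx)
{
    (void)head;
    (void)ctx;
    return H2_FIFO_OP_REPUSH;   /* pull and immediately re-push: keep it queued */
}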
/* The following functions were introduced for the experimental mod_proxy_http2
* support, but have been abandoned since.
- * They are still declared here for backward compatibiliy, in case someone
+ * They are still declared here for backward compatibility, in case someone
* tries to build an old mod_proxy_http2 against it, but will disappear
* completely sometime in the future.
*/
apr_thread_mutex_unlock(st->mutex);
#endif
- /* Destory the pool associated with this connection */
+ /* Destroy the pool associated with this connection */
apr_pool_destroy(ldc->pool);
able to handle the connection timeout per-connection
but the Novell SDK cannot. Allowing the timeout to
be set by each vhost is of little value so rather than
- trying to make special expections for one LDAP SDK, GLOBAL_ONLY
+ trying to make special exceptions for one LDAP SDK, GLOBAL_ONLY
is being enforced on this setting as well. */
st->connectionTimeout = base->connectionTimeout;
st->opTimeout = base->opTimeout;
}
/* Additionally we strip the physical path from the url to match
- * it independent from the underlaying filesystem.
+ * it independently of the underlying filesystem.
*/
if (!is_proxyreq && strlen(ctx->uri) >= dirlen &&
!strncmp(ctx->uri, ctx->perdir, dirlen)) {
* - if setup failed, continue to look for another supported challenge type
* - if there is no overlap in types, tell the user that she has to configure
* either more types (dns, tls-alpn-01), make ports available or refrain
- * from useing wildcard domains when dns is not available. etc.
- * - if there was an overlap, but no setup was successfull, report that. We
+ * from using wildcard domains when dns is not available. etc.
+ * - if there was an overlap, but no setup was successful, report that. We
* will retry this, maybe the failure is temporary (e.g. command to setup DNS
*/
rv = APR_ENOTIMPL;
rv = APR_EINVAL;
if (!authz->error_type) {
md_result_printf(ctx->result, rv,
- "domain authorization for %s failed, CA consideres "
+ "domain authorization for %s failed, CA considers "
"answer to challenge invalid, no error given",
authz->domain);
}
* Pre-Req: we have an account for the ACME server that has accepted the current license agreement
* For each domain in MD:
* - check if there already is a valid AUTHZ resource
- * - if ot, create an AUTHZ resource with challenge data
+ * - if not, create an AUTHZ resource with challenge data
*/
static apr_status_t ad_setup_order(md_proto_driver_t *d, md_result_t *result)
{
} md_cert_state_t;
/**
- * Create a holder of the certificate that will free its memmory when the
+ * Create a holder of the certificate that will free its memory when the
* pool is destroyed.
*/
md_cert_t *md_cert_make(apr_pool_t *p, void *x509);
void md_http_set_response_limit(md_http_t *http, apr_off_t resp_limit);
/**
- * Set the timeout for the complete reqest. This needs to take everything from
+ * Set the timeout for the complete request. This needs to take everything from
* DNS lookups, to connects, to transfer of all data into account and should
* be sufficiently large.
* Set to 0 to have no timeout for this.
void md_http_set_on_response_cb(md_http_request_t *req, md_http_response_cb *cb, void *baton);
/**
- * Create a GET reqest.
+ * Create a GET request.
* @param preq the created request after success
* @param http the md_http instance
* @param url the url to GET
struct apr_table_t *headers);
/**
- * Create a HEAD reqest.
+ * Create a HEAD request.
* @param preq the created request after success
* @param http the md_http instance
* @param url the url to GET
struct apr_table_t *headers);
/**
- * Create a POST reqest with a bucket brigade as request body.
+ * Create a POST request with a bucket brigade as request body.
* @param preq the created request after success
* @param http the md_http instance
* @param url the url to GET
struct apr_bucket_brigade *body, int detect_len);
/**
- * Create a POST reqest with known request body data.
+ * Create a POST request with known request body data.
* @param preq the created request after success
* @param http the md_http instance
* @param url the url to GET
* To limit the number of parallel requests, nextreq should return APR_ENOENT when the limit
* is reached. It will be called again when the number of in_flight requests changes.
*
- * When all reqests are done, nextreq will be called one more time. Should it not
+ * When all requests are done, nextreq will be called one more time. Should it not
* return anything, this function returns.
*/
apr_status_t md_http_multi_perform(md_http_t *http, md_http_next_req *nextreq, void *baton);
}
/**************************************************************************************************/
-/* synching */
+/* syncing */
apr_status_t md_reg_set_props(md_reg_t *reg, apr_pool_t *p, int can_http, int can_https)
{
}
/**
- * Finish synching an MD with the store.
+ * Finish syncing an MD with the store.
* 1. if there are changed properties (or if the MD is new), save it.
* 2. read any existing certificate and init the state of the memory MD
*/
* Cleanup any challenges that are no longer in use.
*
* @param reg the registry
- * @param p pool for permament storage
+ * @param p pool for permanent storage
* @param ptemp pool for temporary storage
* @param mds the list of configured MDs
*/
apr_status_t md_reg_freeze_domains(md_reg_t *reg, apr_array_header_t *mds);
/**
- * Return if the certificate of the MD shoud be renewed. This includes reaching
+ * Return if the certificate of the MD should be renewed. This includes reaching
* the renewal window of an otherwise valid certificate. It also returns !0 iff
* no certificate has been obtained yet.
*/
};
/**
- * Run a test intialization of the renew protocol for the given MD. This verifies
+ * Run a test initialization of the renew protocol for the given MD. This verifies
* basic parameter settings and is expected to return a description of encountered
* problems in <pmessage> when != APR_SUCCESS.
* A message return is allocated from the given pool.
/**
* Take stock of all MDs given for a short overview. The JSON returned
- * will carry intergers for MD_KEY_COMPLETE, MD_KEY_RENEWING,
+ * will carry integers for MD_KEY_COMPLETE, MD_KEY_RENEWING,
* MD_KEY_ERRORED, MD_KEY_READY and MD_KEY_TOTAL.
*/
void md_status_take_stock(struct md_json_t **pjson, apr_array_header_t *mds,
const char *status, const char *detail);
/**
- * Retrieve the lastest log entry of a certain type.
+ * Retrieve the latest log entry of a certain type.
*/
md_json_t *md_job_log_get_latest(md_job_t *job, const char *type);
int case_sensitive);
/**
- * Create a new array with all occurances of <exclude> removed.
+ * Create a new array with all occurrences of <exclude> removed.
*/
struct apr_array_header_t *md_array_str_remove(apr_pool_t *p, struct apr_array_header_t *src,
const char *exclude, int case_sensitive);
/* How to bootstrap this module:
* 1. find out if we know if http: and/or https: requests will arrive
- * 2. apply the now complete configuration setttings to the MDs
+ * 2. apply the now complete configuration settings to the MDs
* 3. Link MDs to the server_recs they are used in. Detect unused MDs.
* 4. Update the store with the MDs. Change domain names, create new MDs, etc.
* Basically all MD properties that are configured directly.
* store will find the old settings and "recover" the previous name.
* 5. Load any staged data from previous driving.
* 6. on a dry run, this is all we do
- * 7. Read back the MD properties that reflect the existance and aspect of
+ * 7. Read back the MD properties that reflect the existence and aspect of
* credentials that are in the store (or missing there).
* Expiry times, MD state, etc.
* 8. Determine the list of MDs that need driving/supervision.
/*4*/
if (APR_SUCCESS != (rv = md_reg_sync_start(mc->reg, mc->mds, ptemp))) {
ap_log_error(APLOG_MARK, APLOG_ERR, rv, s, APLOGNO(10073)
- "synching %d mds to registry", mc->mds->nelts);
+ "syncing %d mds to registry", mc->mds->nelts);
goto leave;
}
/*5*/
}
if (APR_SUCCESS != (rv = md_reg_sync_finish(mc->reg, md, p, ptemp))) {
ap_log_error( APLOG_MARK, APLOG_ERR, rv, s, APLOGNO(10172)
- "md[%s]: error synching to store", md->name);
+ "md[%s]: error syncing to store", md->name);
goto leave;
}
}
parm.method = method;
/* We make a sub_pool so that we can collect our child early, otherwise
- * there are cases (i.e. generating directory indicies with mod_autoindex)
+ * there are cases (i.e. generating directory indices with mod_autoindex)
* where we would end up with LOTS of zombies.
*/
if (apr_pool_create(&sub_context, r->pool) != APR_SUCCESS)
/*
* Sun Jun 7 05:43:49 CEST 1998 -- Alvaro
* More comments:
- * 1) The UUencoding prodecure is now done in a general way, avoiding the problems
+ * 1) The UUencoding procedure is now done in a general way, avoiding the problems
* with sizes and paddings that can arise depending on the architecture. Now the
* offsets and sizes of the elements of the unique_id_rec structure are calculated
* in unique_id_global_init; and then used to duplicate the structure without the
return "Max must be a positive number";
worker->s->hmax = ival;
}
- /* XXX: More inteligent naming needed */
+ /* XXX: More intelligent naming needed */
else if (!strcasecmp(key, "smax")) {
/* Maximum number of connections to remote that
* will not be destroyed
APR_OPTIONAL_HOOK(ap, status_hook, proxy_status_hook, NULL, NULL,
APR_HOOK_MIDDLE);
- /* Reset workers count on gracefull restart */
+ /* Reset workers count on graceful restart */
proxy_lb_workers = 0;
set_worker_hc_param_f = APR_RETRIEVE_OPTIONAL_FN(set_worker_hc_param);
return OK;
* @param conf server configuration
* @param hostname hostname from request URI
* @param addr resolved address of hostname, or NULL if not known
- * @return OK on success, or else an errro
+ * @return OK on success, or else an error
*/
PROXY_DECLARE(int) ap_proxy_checkproxyblock(request_rec *r, proxy_server_conf *conf,
const char *hostname, apr_sockaddr_t *addr);
return HTTP_INTERNAL_SERVER_ERROR;
}
- /* read the first bloc of data */
+ /* read the first block of data */
input_brigade = apr_brigade_create(p, r->connection->bucket_alloc);
tenc = apr_table_get(r->headers_in, "Transfer-Encoding");
if (tenc && (ap_cstr_casecmp(tenc, "chunked") == 0)) {
/* XXX: This can perhaps be built using some
* smarter mechanism, like thread_cond.
* But since the statuses can come from
- * different childs, use the provided algo.
+ * different children, use the provided algo.
*/
apr_interval_time_t timeout = balancer->s->timeout;
apr_interval_time_t step, tval = 0;
/*
* First try to compute a unique ID for each vhost with minimal criteria,
* that is the first Host/IP:port and ServerName. For most cases this should
- * be enough and avoids changing the ID unnecessarily accross restart (or
+ * be enough and avoids changing the ID unnecessarily across restart (or
* stop/start w.r.t. persisted files) for things that this module does not
* care about.
*
}
/*
- * Process the paramters and add or update the worker of the balancer
+ * Process the parameters and add or update the worker of the balancer
*/
static int balancer_process_balancer_worker(request_rec *r, proxy_server_conf *conf,
proxy_balancer *bsel,
* that may be okay, since the data is supposed to
* be transparent. In fact, this doesn't log at all
* yet. 8^)
- * FIXME: doesn't check any headers initally sent from the
+ * FIXME: doesn't check any headers initially sent from the
* client.
* FIXME: should allow authentication, but hopefully the
* generic proxy authentication is good enough.
* with username and password (which was presumably queried from the user)
* supplied in the Authorization: header.
* Note that we "invent" a realm name which consists of the
- * ftp://user@host part of the reqest (sans password -if supplied but invalid-)
+ * ftp://user@host part of the request (sans password -if supplied but invalid-)
*/
static int ftp_unauthorized(request_rec *r, int log_it)
{
* Because the new logic looks at input_brigade, we will self-terminate
* input_brigade and jump past all of the request body logic...
* Reading anything with ap_get_brigade is likely to consume the
- * main request's body or read beyond EOS - which would be unplesant.
+ * main request's body or read beyond EOS - which would be unpleasant.
*
* An exception: when a kept_body is present, then the subrequest CAN
* pass request bodies, and we DONT skip the body.
* To reduce server resource use, setenv proxy-sendchunked
*
* Then address specific servers with conditional setenv
- * options to restore the default behavior where desireable.
+ * options to restore the default behavior where desirable.
*
* We have to compute content length by reading the entire request
* body; if request body is not small, we'll spool the remaining
/* read the headers. */
/* N.B. for HTTP/1.0 clients, we have to fold line-wrapped headers*/
- /* Also, take care with headers with multiple occurences. */
+ /* Also, take care with headers with multiple occurrences. */
/* First, tuck away all already existing cookies */
save_table = apr_table_make(r->pool, 2);
return APR_SUCCESS;
}
-/* TOOD: rewrite drive_serf to make it async */
+/* TODO: rewrite drive_serf to make it async */
static int drive_serf(request_rec *r, serf_config_t *conf)
{
apr_status_t rv = 0;
return HTTP_INTERNAL_SERVER_ERROR;
}
- /* TOOD: restructure try all servers in the array !! */
+ /* TODO: restructure try all servers in the array !! */
pick = ap_random_pick(0, servers->nelts-1);
choice = APR_ARRAY_IDX(servers, pick, ap_serf_server_t *);
#define AP_SERF_CLUSTER_PROVIDER "serf_cluster"
typedef struct ap_serf_server_t ap_serf_server_t;
struct ap_serf_server_t {
- /* TOOD: consider using apr_sockaddr_t, except they suck. */
+ /* TODO: consider using apr_sockaddr_t, except they suck. */
const char *ip;
apr_port_t port;
};
/* Filter chain is OK and empty, yet we can't determine from
* ap_check_pipeline (actually ap_core_input_filter) whether
* an empty non-blocking read is EAGAIN or EOF on the socket
- * side (it's always SUCCESS), so check it explicitely here.
+ * side (it's always SUCCESS), so check it explicitly here.
*/
if (ap_proxy_is_socket_connected(conn->sock)) {
rv = APR_SUCCESS;
now = apr_time_now();
if (zz) {
- /* load the session attibutes */
+ /* load the session attributes */
rv = ap_run_session_decode(r, zz);
/* having a session we cannot decode is just as good as having
};
/*
- * Layout for SHM and persited file :
+ * Layout for SHM and persisted file :
*
* +-------------------------------------------------------------+~>
* | desc | num_free | base (slots) | inuse (array) | md5 | desc | compat..
rv = apr_uri_parse(p, elts[i], &uri);
if (rv != APR_SUCCESS) {
ap_log_error(APLOG_MARK, APLOG_CRIT, rv, s,
- APLOGNO(02697) "unparseable log URL %s in file "
+ APLOGNO(02697) "unparsable log URL %s in file "
"%s - ignoring",
elts[i], listfile);
/* some garbage in the file? can't map to an auto-maintained SCT,
alloc)
/* Custom apr_status_t error code, used when a plain HTTP request is
- * recevied on an SSL port. */
+ * received on an SSL port. */
#define MODSSL_ERROR_HTTP_ON_HTTPS (APR_OS_START_USERERR + 0)
/* Custom apr_status_t error code, used when the proxy cannot
}
if (APR_BRIGADE_EMPTY(ctx->bb)) {
- /* Suprisingly (and perhaps, wrongly), the request body can be
+ /* Surprisingly (and perhaps, wrongly), the request body can be
* pulled from the input filter stack more than once; a
* handler may read it, and ap_discard_request_body() will
* attempt to do so again after *every* request. So input
/*
* Repeat the calls, because SSL_shutdown internally dispatches through a
- * little state machine. Usually only one or two interation should be
+ * little state machine. Usually only one or two iterations should be
* needed, so we restrict the total number of iterations in order to
* avoid process hangs in case the client played bad with the socket
* connection and OpenSSL cannot recognize it.
for (i = 0; i < 4 /* max 2x pending + 2x data = 4 */; i++) {
rc = SSL_shutdown(ssl);
if (rc >= 0 && flush && (SSL_get_shutdown(ssl) & SSL_SENT_SHUTDOWN)) {
- /* Once the close notity is sent through the output filters,
+ /* Once the close notify is sent through the output filters,
* ensure it is flushed through the socket.
*/
if (BIO_flush(SSL_get_wbio(ssl)) <= 0) {
static int stapling_cb(SSL *ssl, void *arg);
/**
- * Maxiumum OCSP stapling response size. This should be the response for a
+ * Maximum OCSP stapling response size. This should be the response for a
* single certificate and will typically include the responder certificate chain
* so 10K should be more than enough.
*
}
if (ssl_run_get_stapling_status(&rspder, &rspderlen, conn, s, x) == APR_SUCCESS) {
- /* a hook handles stapling for this certicate and determines the response */
+ /* a hook handles stapling for this certificate and determines the response */
if (rspder == NULL || rspderlen <= 0) {
return SSL_TLSEXT_ERR_NOACK;
}
}
if (dcfg->bytes_per_second == 0) {
- return "mod_diaulup: Unkonwn Modem Standard specified.";
+ return "mod_diaulup: Unknown Modem Standard specified.";
}
return NULL;
/*
* Optional function coming from mod_authn_core, used for
- * retrieving the type of autorization
+ * retrieving the type of authorization
*/
static APR_OPTIONAL_FN_TYPE(authn_ap_auth_type) *authn_ap_auth_type;
/* Stop for any non-'token' character, including ctrls, obs-text,
* and "tspecials" (RFC2068) a.k.a. "separators" (RFC2616), which
- * is easer to express as characters remaining in the ASCII token set
+ * is easier to express as characters remaining in the ASCII token set
*/
if (!c || !(apr_isalnum(c) || strchr("!#$%&'*+-.^_`|~", c))) {
flags |= T_HTTP_TOKEN_STOP;
if (sdc < 0) {
ap_log_perror(APLOG_MARK, APLOG_CRIT, sdc, process->pool, APLOGNO(02486)
- "find_systemd_socket: Error parsing enviroment, sd_listen_fds returned %d",
+ "find_systemd_socket: Error parsing environment, sd_listen_fds returned %d",
sdc);
return -1;
}
if (use_systemd) {
const char *userdata_key = "ap_open_systemd_listeners";
void *data;
- /* clear the enviroment on our second run
+ /* clear the environment on our second run
* so that none of our future children get confused.
*/
apr_pool_userdata_get(&data, userdata_key, s->process->pool);
*
* *BSDs have SO_REUSEPORT too but with a different semantic: the first
* wildcard address bound socket or the last non-wildcard address bound
- * socket will receive connections (no evenness garantee); the rest of
+ * socket will receive connections (no evenness guarantee); the rest of
* the sockets bound to the same port will not.
* This can't (always) work for httpd.
*
/*
* You might ponder why stderr_pool should survive?
* The trouble is, stderr_pool may have s_main->error_log,
- * so we aren't in a position to destory stderr_pool until
+ * so we aren't in a position to destroy stderr_pool until
* the next recycle. There's also an apparent bug which
* is not; if some folk decided to call this function before
* the core open error logs hook, this pool won't survive.
}
/* Create a child process running PROGNAME with a pipe connected to
- * the childs stdin. The write-end of the pipe will be placed in
+ * the child's stdin. The write-end of the pipe will be placed in
* *FPIN on successful return. If dummy_stderr is non-zero, the
* stderr for the child will be the same as the stdout of the parent.
* Otherwise the child will inherit the stderr from the parent. */
if (!logf && !(errorlog_provider && errorlog_provider_handle)) {
/* There is no file to send the log message to (or it is
- * redirected to /dev/null and therefore any formating done below
+ * redirected to /dev/null and therefore any formatting done below
* would be lost anyway) and there is no initialized log provider
* available, so we just return here.
*/
* Sleep for TASK_SWITCH_SLEEP micro seconds to cause a task switch on
* OS layer and thus give possibly started piped loggers a chance to
* process their input. Otherwise it is possible that they get killed
- * by us before they can do so. In this case maybe valueable log messages
+ * by us before they can do so. In this case maybe valuable log messages
* might get lost.
*/
* completion at some point may require reads (e.g. SSL_ERROR_WANT_READ),
* an output filter can also set the sense to CONN_SENSE_WANT_READ at any
* time for event MPM to do the right thing,
- * - suspend the connection (SUSPENDED) such that it now interracts with
+ * - suspend the connection (SUSPENDED) such that it now interacts with
* the MPM through suspend/resume_connection() hooks, and/or registered
* poll callbacks (PT_USER), and/or registered timed callbacks triggered
* by timer events.
#if HAVE_SERF
rc = serf_context_prerun(g_serf);
if (rc != APR_SUCCESS) {
- /* TOOD: what should do here? ugh. */
+ /* TODO: what should we do here? ugh. */
}
#endif
* the connections they handle (i.e. ptrans). We can't use this thread's
* self pool because all these objects survive it, nor use pchild or pconf
* directly because this starter thread races with other modules' runtime,
- * nor finally pchild (or subpool thereof) because it is killed explicitely
+ * nor finally pchild (or subpool thereof) because it is killed explicitly
* before pconf (thus connections/ptrans can live longer, which matters in
* ONE_PROCESS mode). So this leaves us with a subpool of pconf, created
* before any ptrans hence destroyed after.
* from being received. The child processes no longer use signals for
* any communication with the parent process. Let's also do this before
* child_init() hooks are called and possibly create threads that
- * otherwise could "steal" (implicitely) MPM's signals.
+ * otherwise could "steal" (implicitly) MPM's signals.
*/
rv = apr_setup_signal_thread();
if (rv != APR_SUCCESS) {
apr_signal(SIGHUP, just_die);
apr_signal(SIGTERM, just_die);
/* Ignore SIGINT in child. This fixes a race condition in signal
- * handling when httpd is runnning on foreground and user hits ctrl+c.
+ * handling when httpd is running in the foreground and user hits ctrl+c.
* In this case, SIGINT is sent to all children followed by SIGTERM
* from the main process, which interrupts the SIGINT handler and
* leads to inconsistency.
*
* Each child process consists of a pool of worker threads and a
* main thread that accepts connections & passes them to the workers via
- * a work queue. The worker thread pool is dynamic, managed by a maintanence
+ * a work queue. The worker thread pool is dynamic, managed by a maintenance
* thread so that the number of idle threads is kept between
* min_spare_threads & max_spare_threads.
*
*/
apr_signal(SIGHUP, just_die);
apr_signal(SIGTERM, just_die);
- /* Ignore SIGINT in child. This fixes race-condition in signals
- * handling when httpd is runnning on foreground and user hits ctrl+c.
+ /* Ignore SIGINT in child. This fixes a race condition in signal
+ * handling when httpd is running in the foreground and user hits ctrl+c.
* In this case, SIGINT is sent to all children followed by SIGTERM
* from the main process, which interrupts the SIGINT handler and
* leads to inconsistency.
apr_status_t simple_io_event_process(simple_core_t * sc, simple_sb_t * sb)
{
/* pqXXXXX: In theory, if we have non-blocking operations on the connection
- * we can do them here, before pushing to another thread, thats just
+ * we can do them here, before pushing to another thread, that's just
* not implemented right now.
*/
return apr_thread_pool_push(sc->workers,
return rv;
}
- /* XXXXX: Hack. Reseting parts of the simple core needs to be more
+ /* XXXXX: Hack. Resetting parts of the simple core needs to be more
* thought out than this.
*/
APR_RING_INIT(&sc->timer_ring, simple_timer_t, link);
* get_listeners_from_parent()
* The listen sockets are opened in the parent. This function, which runs
* exclusively in the child process, receives them from the parent and
- * makes them availeble in the child.
+ * makes them available in the child.
*/
static void get_listeners_from_parent(server_rec *s)
{
* of this event means that the child process has exited prematurely
* due to a seg fault or other irrecoverable error. For server
* robustness, master_main will restart the child process under this
- * condtion.
+ * condition.
*
* master_main uses the child_exit_event to signal the child process
* to exit.
"Failed to get the full path of %s", process->argv[0]);
exit(APEXIT_INIT);
}
- /* WARNING: There is an implict assumption here that the
+ /* WARNING: There is an implicit assumption here that the
* executable resides in ServerRoot or ServerRoot\bin
*/
def_server_root = (char *) apr_filepath_name_get(binpath);
ap_exists_config_define("DEBUG"))
one_process = -1;
- /* XXX: presume proper privilages; one nice thing would be
+ /* XXX: presume proper privileges; one nice thing would be
* a loud emit if running as "LocalSystem"/"SYSTEM" to indicate
* they should change to a user with write access to logs/ alone.
*/
CleanNullACL((void *)sa);
/* Create the start mutex, as an unnamed object for security.
- * Ths start mutex is used during a restart to prevent more than
+ * The start mutex is used during a restart to prevent more than
* one child process from entering the accept loop at once.
*/
rv = apr_proc_mutex_create(&start_mutex, NULL,
rv = apr_get_os_error();
ap_log_error(APLOG_MARK, APLOG_ERR | APLOG_STARTUP, rv, NULL,
APLOGNO(00369) "Failed to open the Windows service "
- "manager, perhaps you forgot to log in as Adminstrator?");
+ "manager, perhaps you forgot to log in as Administrator?");
return (rv);
}
rv = apr_get_os_error();
ap_log_error(APLOG_MARK, APLOG_ERR | APLOG_STARTUP, rv, NULL,
APLOGNO(10009) "Failed to open the Windows service "
- "manager, perhaps you forgot to log in as Adminstrator?");
+ "manager, perhaps you forgot to log in as Administrator?");
return (rv);
}
rv = apr_get_os_error();
ap_log_error(APLOG_MARK, APLOG_ERR | APLOG_STARTUP, rv, NULL,
APLOGNO(10011) "Failed to open the Windows service "
- "manager, perhaps you forgot to log in as Adminstrator?");
+ "manager, perhaps you forgot to log in as Administrator?");
return (rv);
}
ap_log_error(APLOG_MARK, APLOG_ERR | APLOG_STARTUP,
apr_get_os_error(), NULL,
APLOGNO(10013) "Failed to open the Windows service "
- "manager, perhaps you forgot to log in as Adminstrator?");
+ "manager, perhaps you forgot to log in as Administrator?");
return;
}
* the connections they handle (i.e. ptrans). We can't use this thread's
* self pool because all these objects survive it, nor use pchild or pconf
* directly because this starter thread races with other modules' runtime,
- * nor finally pchild (or subpool thereof) because it is killed explicitely
+ * nor finally pchild (or subpool thereof) because it is killed explicitly
* before pconf (thus connections/ptrans can live longer, which matters in
* ONE_PROCESS mode). So this leaves us with a subpool of pconf, created
* before any ptrans hence destroyed after.
* from being received. The child processes no longer use signals for
* any communication with the parent process. Let's also do this before
* child_init() hooks are called and possibly create threads that
- * otherwise could "steal" (implicitely) MPM's signals.
+ * otherwise could "steal" (implicitly) MPM's signals.
*/
rv = apr_setup_signal_thread();
if (rv != APR_SUCCESS) {
}
else /* ... XXX other request types here? */ {
/* Create an HTTP request string. We include a User-Agent so
- * that adminstrators can track down the cause of the
+ * that administrators can track down the cause of the
* odd-looking requests in their logs. A complete request is
* used since kernel-level filtering may require that much
* data before returning from accept(). */
return (index2 >= 0) ? -1 : 1;
}
}
- /* both have the same index (mabye -1 or no pref configured) and we compare
+ /* both have the same index (maybe -1 or no pref configured) and we compare
* the names so that spdy3 gets precedence over spdy2. That makes
* the outcome at least deterministic. */
return strcmp(proto1, proto2);
}
do {
/* If previous match was empty, we can't issue the exact same one or
- * we'd loop indefinitively. So let's instead ask for an anchored and
+ * we'd loop indefinitely. So let's instead ask for an anchored and
* non-empty match (i.e. something not empty at the start of the value)
* and if nothing is found advance by one character below.
*/
return T_OP_UNARY;
}
- /* Apply subtitution to a string */
+ /* Apply substitution to a string */
<expr>"sub" {
return T_OP_SUB;
}
#ifdef HAVE_PCRE2
/* TODO: create a generic TLS matchdata buffer of some nmatch limit,
- * e.g. 10 matches, to avoid a malloc-per-call. If it must be alloced,
+ * e.g. 10 matches, to avoid a malloc-per-call. If it must be allocated,
* implement a general context using palloc and no free implementation.
*/
nlim = ((apr_size_t)preg->re_nsub + 1) > nmatch
/*
* Various utility functions which are common to a whole lot of
* script-type extensions mechanisms, and might as well be gathered
- * in one place (if only to avoid creating inter-module dependancies
+ * in one place (if only to avoid creating inter-module dependencies
* where there don't have to be).
*/
# on the command line and generates a username
# sha1-encrypted password on stdout.
#
-# Typical useage:
+# Typical usage:
# ./htpasswd-sha1.pl dirkx MySecret >> sha1-passwd
#
# This is public domain code. Do whatever you want with it.
** trapping of connection errors which influenced measurements.
** Contributed by Sander Temme, Early 2001
** Version 1.3e
- ** - Changed timeout behavour during write to work whilst the sockets
+ ** - Changed timeout behavior during write to work whilst the sockets
** are filling up and apr_write() writes a few - but not all.
** This will potentially change results. <dirkx@webweaving.org>, April 2001
** Version 2.0.36-dev
total = ap_round_ms(total);
if (done > 0) { /* avoid division by zero (if 0 done) */
- printf("<tr %s><th %s colspan=4>Connnection Times (ms)</th></tr>\n",
+ printf("<tr %s><th %s colspan=4>Connection Times (ms)</th></tr>\n",
trstring, tdstring);
printf("<tr %s><th %s> </th> <th %s>min</th> <th %s>avg</th> <th %s>max</th></tr>\n",
trstring, tdstring, tdstring, tdstring, tdstring);
/*
* Get a size or time param from a string.
* Parameter 'last' indicates whether the
- * argument is the last commadnline argument.
+ * argument is the last commandline argument.
* UTC offset is only allowed as a last argument
* in order to make it distinguishable from the
* rotation interval time.
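/* For illustration (typical rotatelogs invocations; the UTC offset is given
 * in minutes and is only accepted as the last argument):
 *
 *     rotatelogs /var/log/access_log 86400        (time param: rotate daily)
 *     rotatelogs /var/log/access_log 5M           (size param: 5 megabytes)
 *     rotatelogs /var/log/access_log 86400 -300   (daily, UTC offset -300)
 */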
/*
* Get the current working directory, as well as the proper
- * document root (dependant upon whether or not it is a
+ * document root (dependent upon whether or not it is a
* ~userdir request). Error out if we cannot get either one,
* or if the current working directory is not in the docroot.
* Use chdir()s and getcwd()s to avoid problems with symlinked
-keyout ${CDIR}/xs-root-2.key -out ${CDIR}/xs-root-2.pem \
|| exit 2
-# Create a chain of just the two access authorites:
+# Create a chain of just the two access authorities:
cat ${CDIR}/xs-root-2.pem ${CDIR}/xs-root-1.pem > ${CDIR}/xs-root-chain.pem
# And likewise a directory with the same information (using the
SSLSessionCache none
# Note that this SSL configuration is far
-# from complete - you propably will want
+# from complete - you probably will want
# to configure SSLSession Caches at the
# very least.
# Uncomment the following lines if you
# want to only allow access to clients with
# a certificate issued/signed by some
- # selection of the issuing authorites
+ # selection of the issuing authorities
#
# SSLCACertificate ${CDIR}/xs-root-1.pem # just root 1
# SSLCACertificate ${CDIR}/xs-root-2.pem # just root 2
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote products
- * derived from this software withough specific prior written permission
+ * derived from this software without specific prior written permission
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES