From: Daniel Corbett
Date: Sat, 8 May 2021 14:50:37 +0000 (-0400)
Subject: DOC: Fix a few grammar/spelling issues and casing of HAProxy
X-Git-Tag: v2.4-dev19~21
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9f0843f4e21af6872871b649932d805c473daaf6;p=thirdparty%2Fhaproxy.git

DOC: Fix a few grammar/spelling issues and casing of HAProxy

This patch fixes a few grammar and spelling issues in configuration.txt. It
was also noted that there was a wide range of case usage (e.g. haproxy,
HAproxy, HAProxy, etc.). This patch updates them all to be consistently
"HAProxy" except where a binary is mentioned.
---

diff --git a/doc/configuration.txt b/doc/configuration.txt
index e01637010b..dd92bd5815 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -337,7 +337,7 @@ sent to a single request, and that this only works when keep-alive is enabled correctly forward and skip them, and only process the next non-100 response. As such, these messages are neither logged nor transformed, unless explicitly state otherwise. Status 101 messages indicate that the protocol is changing -over the same connection and that haproxy must switch to tunnel mode, just as +over the same connection and that HAProxy must switch to tunnel mode, just as if a CONNECT had occurred. Then the Upgrade header would contain additional information about the type of protocol the connection is switching to. @@ -381,9 +381,9 @@ HAProxy may emit the following status codes by itself : 408 when the request timeout strikes before the request is complete 410 when the requested resource is no longer available and will not be available again - 500 when haproxy encounters an unrecoverable internal error, such as a + 500 when HAProxy encounters an unrecoverable internal error, such as a memory allocation failure, which should never happen - 501 when haproxy is unable to satisfy a client request because of an + 501 when HAProxy is unable to satisfy a client request because of an unsupported feature 502 when the server returns an empty, invalid or incomplete response, or when an "http-response deny" rule blocks the response. @@ -690,7 +690,7 @@ thus single quotes are preferred (or double escaping). Example: arg2=my/\1 ________________/ / arg3 ______________________/ -Remember that backslahes are not escape characters within single quotes and +Remember that backslashes are not escape characters within single quotes and that the whole word3 above is already protected against them using the single quotes. Conversely, if double quotes had been used around the whole expression, single the dollar character and the backslashes would have been resolved at top @@ -1207,7 +1207,7 @@ daemon systemd mode. default-path { current | config | parent | origin } - By default haproxy loads all files designated by a relative path from the + By default HAProxy loads all files designated by a relative path from the location the process is started in. In some circumstances it might be desirable to force all relative paths to start from a different location just as if the process was started from such locations. This is what this @@ -1239,7 +1239,7 @@ default-path { current | config | parent | origin } - "origin" indicates that all relative files should be loaded from the designated (mandatory) path.
This may be used to ease management of - different haproxy instances running in parallel on a system, where each + different HAProxy instances running in parallel on a system, where each instance uses a different prefix but where the rest of the sections are made easily relocatable. @@ -1292,7 +1292,7 @@ gid Changes the process's group ID to . It is recommended that the group ID is dedicated to HAProxy or to a small set of similar daemons. HAProxy must be started with a user belonging to this group, or with superuser privileges. - Note that if haproxy is started from a user having supplementary groups, it + Note that if HAProxy is started from a user having supplementary groups, it will only be able to drop these groups if started with superuser privileges. See also "group" and "uid". @@ -1366,7 +1366,7 @@ h1-case-adjust-file "option h1-case-adjust-bogus-server". insecure-fork-wanted - By default haproxy tries hard to prevent any thread and process creation + By default HAProxy tries hard to prevent any thread and process creation after it starts. Doing so is particularly important when using Lua files of uncertain origin, and when experimenting with development versions which may still contain bugs whose exploitability is uncertain. And generally speaking @@ -1374,7 +1374,7 @@ insecure-fork-wanted triggered by traffic. But this prevents external checks from working, and may break some very specific Lua scripts which actively rely on the ability to fork. This option is there to disable this protection. Note that it is a bad - idea to disable it, as a vulnerability in a library or within haproxy itself + idea to disable it, as a vulnerability in a library or within HAProxy itself will be easier to exploit once disabled. In addition, forking from Lua or anywhere else is not reliable as the forked process may randomly embed a lock set by another thread and never manage to finish an operation. As such it is @@ -1388,12 +1388,12 @@ insecure-setuid-wanted external checks which are strongly recommended against), and is even expected to isolate itself into an empty chroot. As such, there basically is no valid reason to allow a setuid executable to be called without the user being fully - aware of the risks. In a situation where haproxy would need to call external + aware of the risks. In a situation where HAProxy would need to call external checks and/or disable chroot, exploiting a vulnerability in a library or in - haproxy itself could lead to the execution of an external program. On Linux + HAProxy itself could lead to the execution of an external program. On Linux it is possible to lock the process so that any setuid bit present on such an executable is ignored. This significantly reduces the risk of privilege - escalation in such a situation. This is what haproxy does by default. In case + escalation in such a situation. This is what HAProxy does by default. In case this causes a problem to an external check (for example one which would need the "ping" command), then it is possible to disable this protection by explicitly adding this directive in the global section. If enabled, it is @@ -1403,7 +1403,7 @@ issuers-chain-path Assigns a directory to load certificate chain for issuer completion. All files must be in PEM format. 
For certificates loaded with "crt" or "crt-list", if certificate chain is not included in PEM (also commonly known as - intermediate certificate), haproxy will complete chain if the issuer of the + intermediate certificate), HAProxy will complete chain if the issuer of the certificate corresponds to the first certificate of the chain loaded with "issuers-chain-path". A "crt" file with PrivateKey+Certificate+IntermediateCA2+IntermediateCA1 @@ -1452,7 +1452,7 @@ log
[len ] [format ] [sample :] 512 and which is 4096 bytes on most modern operating systems. Any larger message may be interleaved with messages from other processes. Exceptionally for debugging purposes the file descriptor may also be - directed to a file, but doing so will significantly slow haproxy down + directed to a file, but doing so will significantly slow HAProxy down as non-blocking calls will be ignored. Also there will be no way to purge nor rotate this file without restarting the process. Note that the configured syslog format is preserved, so the output is suitable @@ -1655,7 +1655,7 @@ nbproc (deprecated) nbthread This setting is only available when support for threads was built in. It - makes haproxy run on threads. This is exclusive with "nbproc". While + makes HAProxy run on threads. This is exclusive with "nbproc". While "nbproc" historically used to be the only way to use multiple processors, it also involved a number of shortcomings related to the lack of synchronization between processes (health-checks, peers, stick-tables, stats, ...) which do @@ -1669,7 +1669,7 @@ nbthread value is reported in the output of "haproxy -vv". See also "nbproc". numa-cpu-mapping - By default, if running on Linux, haproxy inspects on startup the CPU topology + By default, if running on Linux, HAProxy inspects on startup the CPU topology of the machine. If a multi-socket machine is detected, the affinity is automatically calculated to run on the CPUs of a single node. This is done in order to not suffer from the performance penalties caused by the inter-socket @@ -1784,7 +1784,7 @@ set-dumpable simply writing "core", "core.%p" or "/var/log/core/core.%p" addresses the issue. When trying to enable this option waiting for a rare issue to re-appear, it's often a good idea to first try to obtain such a dump by - issuing, for example, "kill -11" to the haproxy process and verify that it + issuing, for example, "kill -11" to the "haproxy" process and verify that it leaves a core where expected when dying. ssl-default-bind-ciphers @@ -1912,17 +1912,17 @@ ssl-load-extra-files * bind configuration. To associate these PEM files into a "cert bundle" that is recognized by - haproxy, they must be named in the following way: All PEM files that are to + HAProxy, they must be named in the following way: All PEM files that are to be bundled must have the same base name, with a suffix indicating the key type. Currently, three suffixes are supported: rsa, dsa and ecdsa. For example, if www.example.com has two PEM files, an RSA file and an ECDSA file, they must be named: "example.pem.rsa" and "example.pem.ecdsa". The first part of the filename is arbitrary; only the suffix matters. To load - this bundle into haproxy, specify the base name only: + this bundle into HAProxy, specify the base name only: Example : bind :8443 ssl crt example.pem - Note that the suffix is not given to haproxy; this tells haproxy to look for + Note that the suffix is not given to HAProxy; this tells HAProxy to look for a cert bundle. HAProxy will load all PEM files in the bundle as if they were configured @@ -2016,7 +2016,7 @@ unix-bind [ prefix ] [ mode ] [ user ] [ uid ] the risk of errors, since those settings are most commonly required but are also process-specific. The setting can be used to force all socket path to be relative to that directory. This might be needed to access another - component's chroot. Note that those paths are resolved before haproxy chroots + component's chroot. 
Note that those paths are resolved before HAProxy chroots itself, so they are absolute. The , , , and all have the same meaning as their homonyms used by the "bind" statement. If both are specified, the "bind" statement has priority, meaning that the @@ -2054,7 +2054,7 @@ description The path of the 51Degrees data file to provide device detection services. The file should be unzipped and accessible by HAProxy with relevant permissions. - Please note that this option is only available when haproxy has been + Please note that this option is only available when HAProxy has been compiled with USE_51DEGREES. 51degrees-property-name-list [ ...] @@ -2062,14 +2062,14 @@ description of names is available on the 51Degrees website: https://51degrees.com/resources/property-dictionary - Please note that this option is only available when haproxy has been + Please note that this option is only available when HAProxy has been compiled with USE_51DEGREES. 51degrees-property-separator A char that will be appended to every property value in a response header containing 51Degrees results. If not set that will be set as ','. - Please note that this option is only available when haproxy has been + Please note that this option is only available when HAProxy has been compiled with USE_51DEGREES. 51degrees-cache-size @@ -2077,14 +2077,14 @@ description is an LRU cache which reminds previous device detections and their results. By default, this cache is disabled. - Please note that this option is only available when haproxy has been + Please note that this option is only available when HAProxy has been compiled with USE_51DEGREES. wurfl-data-file The path of the WURFL data file to provide device detection services. The file should be accessible by HAProxy with relevant permissions. - Please note that this option is only available when haproxy has been compiled + Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. wurfl-information-list []* @@ -2117,21 +2117,21 @@ wurfl-information-list []* - wurfl_normalized_useragent The normalized useragent. - Please note that this option is only available when haproxy has been compiled + Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. wurfl-information-list-separator A char that will be used to separate values in a response header containing WURFL results. If not set that a comma (',') will be used by default. - Please note that this option is only available when haproxy has been compiled + Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. wurfl-patch-file [] A list of WURFL patch file paths. Note that patches are loaded during startup thus before the chroot. - Please note that this option is only available when haproxy has been compiled + Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. wurfl-cache-size @@ -2140,14 +2140,14 @@ wurfl-cache-size - "0" : no cache is used. - : size of lru cache in elements. - Please note that this option is only available when haproxy has been compiled + Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. strict-limits - Makes process fail at startup when a setrlimit fails. Haproxy tries to set the + Makes process fail at startup when a setrlimit fails. HAProxy tries to set the best setrlimit according to what has been calculated. If it fails, it will emit a warning. 
This option is here to guarantee an explicit failure of - haproxy when those limits fail. It is enabled by default. It may still be + HAProxy when those limits fail. It is enabled by default. It may still be forcibly disabled by prefixing it with the "no" keyword. 3.2. Performance tuning @@ -2177,7 +2177,7 @@ busy-polling stay around for some time waiting for the end of their current connections. max-spread-checks - By default, haproxy tries to spread the start of health checks across the + By default, HAProxy tries to spread the start of health checks across the smallest health check interval of all the servers in a farm. The principle is to avoid hammering services running on the same server. But when using large check intervals (10 seconds or more), the last servers in the farm take some @@ -2224,7 +2224,7 @@ maxcompcpuusage Sets the maximum CPU usage HAProxy can reach before stopping the compression for new requests or decreasing the compression level of current requests. It works like 'maxcomprate' but measures CPU usage instead of incoming data - bandwidth. The value is expressed in percent of the CPU used by haproxy. In + bandwidth. The value is expressed in percent of the CPU used by HAProxy. In case of multiple processes (nbproc > 1), each process manages its individual usage. A value of 100 disable the limit. The default value is 100. Setting a lower value will prevent the compression work from slowing the whole @@ -2260,7 +2260,7 @@ maxsslconn automatically computed based on the memory limit, maxconn, the buffer size, memory allocated to compression, SSL cache size, and use of SSL in either frontends, backends or both. If neither maxconn nor maxsslconn are specified - when there is a memory limit, haproxy will automatically adjust these values + when there is a memory limit, HAProxy will automatically adjust these values so that 100% of the connections can be made over SSL with no risk, and will consider the sides where it is enabled (frontend, backend, both). @@ -2369,7 +2369,7 @@ ssl-engine [algo ] Sets the OpenSSL engine to . List of valid values for may be obtained using the command "openssl engine". This statement may be used multiple times, it will simply enable multiple crypto engines. Referencing an - unsupported engine will prevent haproxy from starting. Note that many engines + unsupported engine will prevent HAProxy from starting. Note that many engines will lead to lower HTTPS performance than pure software with recent processors. The optional command "algo" sets the default algorithms an ENGINE will supply using the OPENSSL function ENGINE_set_default_string(). A value @@ -2385,7 +2385,7 @@ ssl-mode-async I/O operations if asynchronous capable SSL engines are used. The current implementation supports a maximum of 32 engines. The Openssl ASYNC API doesn't support moving read/write buffers and is not compliant with - haproxy's buffer management. So the asynchronous mode is disabled on + HAProxy's buffer management. So the asynchronous mode is disabled on read/write operations (it is only enabled during initial and renegotiation handshakes). @@ -2404,13 +2404,13 @@ tune.buffers.limit expected global maxconn setting, which also significantly reduces memory usage. The memory savings come from the fact that a number of connections will not allocate 2*tune.bufsize. It is best not to touch this value unless - advised to do so by an haproxy core developer. + advised to do so by an HAProxy core developer. 
tune.buffers.reserve Sets the number of buffers which are pre-allocated and reserved for use only during memory shortage conditions resulting in failed memory allocations. The minimum value is 2 and is also the default. There is no reason a user would - want to change this value, it's mostly aimed at haproxy core developers. + want to change this value, it's mostly aimed at HAProxy core developers. tune.bufsize Sets the buffer size to this size (in bytes). Lower values allow more @@ -2422,9 +2422,9 @@ tune.bufsize possibly causing the system to run out of memory. At least the global maxconn parameter should be decreased by the same factor as this one is increased. In addition, use of HTTP/2 mandates that this value must be 16384 or more. If an - HTTP request is larger than (tune.bufsize - tune.maxrewrite), haproxy will + HTTP request is larger than (tune.bufsize - tune.maxrewrite), HAProxy will return HTTP 400 (Bad Request) error. Similarly if an HTTP response is larger - than this size, haproxy will return HTTP 502 (Bad Gateway). Note that the + than this size, HAProxy will return HTTP 502 (Bad Gateway). Note that the value set using this parameter will automatically be rounded up to the next multiple of 8 on 32-bit machines and 16 on 64-bit machines. @@ -2459,7 +2459,7 @@ tune.h2.header-table-size tune.h2.initial-window-size Sets the HTTP/2 initial window size, which is the number of bytes the client - can upload before waiting for an acknowledgment from haproxy. This setting + can upload before waiting for an acknowledgment from HAProxy. This setting only affects payload contents (i.e. the body of POST requests), not headers. The default value is 65535, which roughly allows up to 5 Mbps of upload bandwidth per client over a network showing a 100 ms ping time, or 500 Mbps @@ -2473,13 +2473,13 @@ tune.h2.max-concurrent-streams 100. A larger one may slightly improve page load time for complex sites when visited over high latency networks, but increases the amount of resources a single client may allocate. A value of zero disables the limit so a single - client may create as many streams as allocatable by haproxy. It is highly + client may create as many streams as allocatable by HAProxy. It is highly recommended not to change this value. tune.h2.max-frame-size - Sets the HTTP/2 maximum frame size that haproxy announces it is willing to + Sets the HTTP/2 maximum frame size that HAProxy announces it is willing to receive to its peers. The default value is the largest between 16384 and the - buffer size (tune.bufsize). In any case, haproxy will not announce support + buffer size (tune.bufsize). In any case, HAProxy will not announce support for frame sizes larger than buffers. The main purpose of this setting is to allow to limit the maximum frame size setting when using large buffers. Too large frame sizes might have performance impact or cause some peers to @@ -2528,12 +2528,12 @@ tune.idle-pool.shared { on | off } increases. tune.idletimer - Sets the duration after which haproxy will consider that an empty buffer is + Sets the duration after which HAProxy will consider that an empty buffer is probably associated with an idle stream. This is used to optimally adjust some packet sizes while forwarding large and small data alternatively. The decision to use splice() or to send large buffers in SSL is modulated by this parameter. The value is in milliseconds between 0 and 65535. A value of zero - means that haproxy will not try to detect idle streams. 
The default is 1000, + means that HAProxy will not try to detect idle streams. The default is 1000, which seems to correctly detect end user pauses (e.g. read a page before clicking). There should be no reason for changing this value. Please check tune.ssl.maxrecord below. @@ -2644,7 +2644,7 @@ tune.pipesize tune.pool-high-fd-ratio This setting sets the max number of file descriptors (in percentage) used by - haproxy globally against the maximum number of file descriptors haproxy can + HAProxy globally against the maximum number of file descriptors HAProxy can use before we start killing idle connections when we can't reuse a connection and we have to create a new one. The default is 25 (one quarter of the file descriptor will mean that roughly half of the maximum front connections can @@ -2653,7 +2653,7 @@ tune.pool-high-fd-ratio tune.pool-low-fd-ratio This setting sets the max number of file descriptors (in percentage) used by - haproxy globally against the maximum number of file descriptors haproxy can + HAProxy globally against the maximum number of file descriptors HAProxy can use before we stop putting connection into the idle pool for reuse. The default is 20. @@ -2686,7 +2686,7 @@ tune.runqueue-depth tune.sched.low-latency { on | off } Enables ('on') or disables ('off') the low-latency task scheduler. By default - haproxy processes tasks from several classes one class at a time as this is + HAProxy processes tasks from several classes one class at a time as this is the most efficient. But when running with large values of tune.runqueue-depth this can have a measurable effect on request or connection latency. When this low-latency setting is enabled, tasks of lower priority classes will always @@ -2706,7 +2706,7 @@ tune.sndbuf.server of received data. Lower values will significantly increase CPU usage though. Another use case is to prevent write timeouts with extremely slow clients due to the kernel waiting for a large part of the buffer to be read before - notifying haproxy again. + notifying HAProxy again. tune.ssl.cachesize Sets the size of the global SSL session cache, in a number of blocks. A block @@ -2855,7 +2855,7 @@ quiet line argument "-q". zero-warning - When this option is set, haproxy will refuse to start if any warning was + When this option is set, HAProxy will refuse to start if any warning was emitted while processing the configuration. It is highly recommended to set this option on configurations that are not changed often, as it helps detect subtle mistakes and keep the configuration clean and forward-compatible. Note @@ -2894,7 +2894,7 @@ user [password|insecure-password ] value specified in the config file. Most current algorithms are deliberately designed to be expensive to compute to achieve resistance against brute force attacks. They do not simply salt/hash the clear text password once, - but thousands of times. This can quickly become a major factor in haproxy's + but thousands of times. This can quickly become a major factor in HAProxy's overall CPU consumption! Example: @@ -2920,7 +2920,7 @@ user [password|insecure-password ] 3.5. Peers ---------- It is possible to propagate entries of any data-types in stick-tables between -several haproxy instances over TCP connections in a multi-master fashion. Each +several HAProxy instances over TCP connections in a multi-master fashion. Each instance pushes its local updates and insertions to remote peers. The pushed values overwrite remote ones without aggregation. 
Interrupted exchanges are automatically detected and recovered from the last known point. @@ -2973,7 +2973,7 @@ peer : [param*] Defines a peer inside a peers section. If is set to the local peer name (by default hostname, or forced using "-L" command line option or "localpeer" global configuration setting), - haproxy will listen for incoming remote peer connection on :. + HAProxy will listen for incoming remote peer connection on :. Otherwise, : defines where to connect to in order to join the remote peer, and is used at the protocol level to identify and validate the remote peer on the server side. @@ -3306,7 +3306,7 @@ timeout server ------------------- It is possible to declare one or multiple log forwarding section, -haproxy will forward all received log messages to a log servers list. +HAProxy will forward all received log messages to a log servers list. log-forward Creates a new log forwarder proxy identified as . @@ -3335,11 +3335,11 @@ log
[len ] [format ] [sample :] [ []] Used to configure target log servers. See more details on proxies documentation. - If no format specified, haproxy tries to keep the incoming log format. + If no format specified, HAProxy tries to keep the incoming log format. Configured facility is ignored, except if incoming message does not present a facility but one is mandatory on the outgoing format. If there is no timestamp available in the input format, but the field - exists in output format, haproxy will use the local date. + exists in output format, HAProxy will use the local date. Example: global @@ -3464,21 +3464,21 @@ weakest option and close is the strongest. CLO | CLO | CLO | CLO It is possible to chain a TCP frontend to an HTTP backend. It is pointless if -only HTTP traffic is handled. But It may be used to handle several protocols -into the same frontend. It this case, the client's connection is first handled +only HTTP traffic is handled. But it may be used to handle several protocols +within the same frontend. In this case, the client's connection is first handled as a raw tcp connection before being upgraded to HTTP. Before the upgrade, the -content processings are performend on raw data. Once upgraded, data are parsed +content processings are performend on raw data. Once upgraded, data is parsed and stored using an internal representation called HTX and it is no longer possible to rely on raw representation. There is no way to go back. There are two kind of upgrades, in-place upgrades and destructive upgrades. The -first ones concern the TCP to HTTP/1 upgrades. In HTTP/1, the request +first ones involves a TCP to HTTP/1 upgrade. In HTTP/1, the request processings are serialized, thus the applicative stream can be preserved. The -second ones concern the TCP to HTTP/2 upgrades. Because it is a multiplexed +second one involves a TCP to HTTP/2 upgrade. Because it is a multiplexed protocol, the applicative stream cannot be associated to any HTTP/2 stream and is destroyed. New applicative streams are then created when HAProxy receives new HTTP/2 streams at the lower level, in the H2 multiplexer. It is important -to understand this difference because that drastically change the way to +to understand this difference because that drastically changes the way to process data. When an HTTP/1 upgrade is performed, the content processings already performed on raw data are neither lost nor reexecuted while for an HTTP/2 upgrade, applicative streams are distinct and all frontend rules are @@ -3494,7 +3494,7 @@ HTTP. For instance, it is not possible to choose a backend based on the Host header value while it is trivial in HTTP/1. Hopefully, there is a solution to mitigate this drawback. -It exists two way to perform HTTP upgrades. The first one, the historical +There are two ways to perform an HTTP upgrade. The first one, the historical method, is to select an HTTP backend. The upgrade happens when the backend is set. Thus, for in-place upgrades, only the backend configuration is considered in the HTTP data processing. For destructive upgrades, the applicative stream @@ -4377,7 +4377,7 @@ compression offload Arguments : algo is followed by the list of supported compression algorithms. type is followed by the list of MIME types that will be compressed. - offload makes haproxy work as a compression offloader only (see notes). + offload makes HAProxy work as a compression offloader only (see notes). 
The currently supported algorithms are : identity this is mostly for debugging, and it was useful for developing @@ -4406,21 +4406,21 @@ compression offload Compression will be activated depending on the Accept-Encoding request header. With identity, it does not take care of that header. If backend servers support HTTP compression, these directives - will be no-op: haproxy will see the compressed response and will not + will be no-op: HAProxy will see the compressed response and will not compress again. If backend servers do not support HTTP compression and - there is Accept-Encoding header in request, haproxy will compress the + there is Accept-Encoding header in request, HAProxy will compress the matching response. - The "offload" setting makes haproxy remove the Accept-Encoding header to + The "offload" setting makes HAProxy remove the Accept-Encoding header to prevent backend servers from compressing responses. It is strongly recommended not to do this because this means that all the compression work - will be done on the single point where haproxy is located. However in some - deployment scenarios, haproxy may be installed in front of a buggy gateway + will be done on the single point where HAProxy is located. However in some + deployment scenarios, HAProxy may be installed in front of a buggy gateway with broken HTTP compression implementation which can't be turned off. - In that case haproxy can be used to prevent that gateway from emitting + In that case HAProxy can be used to prevent that gateway from emitting invalid payloads. In this case, simply removing the header in the configuration does not work because it applies before the header is parsed, - so that prevents haproxy from compressing. The "offload" setting should + so that prevents HAProxy from compressing. The "offload" setting should then be used for such scenarios. Note: for now, the "offload" setting is ignored when set in a defaults section. @@ -4467,7 +4467,7 @@ cookie [ rewrite | insert | prefix ] [ indirect ] [ nocache ] between all backends if persistence between them is not desired. rewrite This keyword indicates that the cookie will be provided by the - server and that haproxy will have to modify its value to set the + server and that HAProxy will have to modify its value to set the server's identifier in it. This mode is handy when the management of complex combinations of "Set-cookie" and "Cache-control" headers is left to the application. The application can then @@ -4479,7 +4479,7 @@ cookie [ rewrite | insert | prefix ] [ indirect ] [ nocache ] incompatible with "insert" and "prefix". insert This keyword indicates that the persistence cookie will have to - be inserted by haproxy in server responses if the client did not + be inserted by HAProxy in server responses if the client did not already have a cookie that would have permitted it to access this server. When used without the "preserve" option, if the server @@ -4540,7 +4540,7 @@ cookie [ rewrite | insert | prefix ] [ indirect ] [ nocache ] preserve This option may only be used with "insert" and/or "indirect". It allows the server to emit the persistence cookie itself. In this - case, if a cookie is found in the response, haproxy will leave it + case, if a cookie is found in the response, HAProxy will leave it untouched. This is useful in order to end persistence after a logout request for instance. For this, the server just has to emit a cookie with an invalid value (e.g. 
empty) or with a date in @@ -4549,12 +4549,12 @@ cookie [ rewrite | insert | prefix ] [ indirect ] [ nocache ] shutdown because users will definitely leave the server after they logout. - httponly This option tells haproxy to add an "HttpOnly" cookie attribute + httponly This option tells HAProxy to add an "HttpOnly" cookie attribute when a cookie is inserted. This attribute is used so that a user agent doesn't share the cookie with non-HTTP components. Please check RFC6265 for more information on this attribute. - secure This option tells haproxy to add a "Secure" cookie attribute when + secure This option tells HAProxy to add a "Secure" cookie attribute when a cookie is inserted. This attribute is used so that a user agent never emits this cookie over non-secure channels, which means that a cookie learned with this flag will be presented only over @@ -4610,7 +4610,7 @@ cookie [ rewrite | insert | prefix ] [ indirect ] [ nocache ] The cookie will be regenerated each time the IP address change, and is only generated for IPv4/IPv6. - attr This option tells haproxy to add an extra attribute when a + attr This option tells HAProxy to add an extra attribute when a cookie is inserted. The attribute value can contain any characters except control ones or ";". This option may be repeated. @@ -5080,7 +5080,7 @@ fullconn push it further for important loads without overloading the servers during exceptional loads. - Since it's hard to get this value right, haproxy automatically sets it to + Since it's hard to get this value right, HAProxy automatically sets it to 10% of the sum of the maxconns of all frontends that may branch to this backend (based on "use_backend" and "default_backend" rules). That way it's safe to leave it unset. However, "use_backend" involving dynamic names are @@ -5198,7 +5198,7 @@ hash-type poorly with numeric-only input or when the total server weight is a multiple of 33, unless the avalanche modifier is also used. - wt6 this function was designed for haproxy while testing other + wt6 this function was designed for HAProxy while testing other functions in the past. It is not as smooth as the other ones, but is much less sensible to the input data set or to the number of servers. It can make sense as an alternative to sdbm+avalanche or @@ -5764,11 +5764,11 @@ http-check send-state yes | no | yes | yes Arguments : none - When this option is set, haproxy will systematically send a special header + When this option is set, HAProxy will systematically send a special header "X-Haproxy-Server-State" with a list of parameters indicating to each server - how they are seen by haproxy. This can be used for instance when a server is - manipulated without access to haproxy and the operator needs to know whether - haproxy still sees it up or not, or if the server is the last one in a farm. + how they are seen by HAProxy. This can be used for instance when a server is + manipulated without access to HAProxy and the operator needs to know whether + HAProxy still sees it up or not, or if the server is the last one in a farm. The header is composed of fields delimited by semi-colons, the first of which is a word ("UP", "DOWN", "NOLB"), possibly followed by a number of valid @@ -5787,7 +5787,7 @@ http-check send-state ("/") then the name of the server. This can be used when a server is checked in multiple backends. 
- - a variable "node" containing the name of the haproxy node, as set in the + - a variable "node" containing the name of the HAProxy node, as set in the global "node" variable, otherwise the system's hostname if unspecified. - a variable "weight" indicating the weight of the server, a slash ("/") @@ -6449,20 +6449,20 @@ http-request return [status ] [content-type ] * If "default-errorfiles" argument is set, the proxy's errorfiles are considered. If the "status" argument is defined, it must be one of the - status code handled by haproxy (200, 400, 403, 404, 405, 408, 410, 413, + status code handled by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. * If a specific errorfile is defined, with an "errorfile" argument, the corresponding file, containing a full HTTP response, is returned. Only the "status" argument is considered. It must be one of the status code handled - by haproxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, + by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. * If an http-errors section is defined, with an "errorfiles" argument, the corresponding file in the specified http-errors section, containing a full HTTP response, is returned. Only the "status" argument is considered. It - must be one of the status code handled by haproxy (200, 400, 403, 404, 405, + must be one of the status code handled by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. @@ -6848,7 +6848,7 @@ http-request tarpit [ { status | deny_status } ] [content-type ] on the number of concurrent requests. It can be very efficient against very dumb robots, and will significantly reduce the load on firewalls compared to a "deny" rule. But when facing "correctly" developed robots, it can make - things worse by forcing haproxy and the front firewall to support insane + things worse by forcing HAProxy and the front firewall to support insane number of concurrent connections. By default an HTTP error 500 is returned. But the response may be customized using same syntax than "http-request return" rules. Thus, see "http-request return" for details. @@ -7149,20 +7149,20 @@ http-response return [status ] [content-type ] * If "default-errorfiles" argument is set, the proxy's errorfiles are considered. If the "status" argument is defined, it must be one of the - status code handled by haproxy (200, 400, 403, 404, 405, 408, 410, 413, + status code handled by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. * If a specific errorfile is defined, with an "errorfile" argument, the corresponding file, containing a full HTTP response, is returned. Only the "status" argument is considered. It must be one of the status code handled - by haproxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, + by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. * If an http-errors section is defined, with an "errorfiles" argument, the corresponding file in the specified http-errors section, containing a full HTTP response, is returned. Only the "status" argument is considered. 
It - must be one of the status code handled by haproxy (200, 400, 403, 404, 405, + must be one of the status code handled by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. @@ -7369,7 +7369,7 @@ http-response track-sc2 [table ] [ { if | unless } ] from "http-request track-sc" is the sample expression can only make use of samples in response (e.g. res.*, status etc.) and samples below Layer 6 (e.g. SSL-related samples, see section 7.3.4). If the sample is not - supported, haproxy will fail and warn while parsing the config. + supported, HAProxy will fail and warn while parsing the config. http-response unset-var() [ { if | unless } ] @@ -7407,7 +7407,7 @@ http-reuse { never | safe | aggressive | always } May be used in sections: defaults | frontend | listen | backend yes | no | yes | yes - By default, a connection established between haproxy and the backend server + By default, a connection established between HAProxy and the backend server which is considered safe for reuse is moved back to the server's idle connections pool so that any other request can make use of it. This is the "safe" strategy below. @@ -7421,7 +7421,7 @@ http-reuse { never | safe | aggressive | always } the same connection come from the same client and it is not possible to fix the application, it may be desirable to disable connection sharing in a single backend. An example of - such an application could be an old haproxy using cookie + such an application could be an old HAProxy using cookie insertion in tunnel mode and not checking any request past the first one. @@ -7552,7 +7552,7 @@ load-server-state-from-file { global | local | none } running process has been saved. That way, when starting up, before handling traffic, the new process can apply old states to servers exactly has if no reload occurred. The purpose of the "load-server-state-from-file" directive is - to tell haproxy which file to use. For now, only 2 arguments to either prevent + to tell HAProxy which file to use. For now, only 2 arguments to either prevent loading state or load states from a file containing all backends and servers. The state file can be generated by running the command "show servers state" over the stats socket and redirect output. @@ -7684,7 +7684,7 @@ no log larger message may be interleaved with messages from other processes. Exceptionally for debugging purposes the file descriptor may also be directed to a file, but doing so will - significantly slow haproxy down as non-blocking calls will be + significantly slow HAProxy down as non-blocking calls will be ignored. Also there will be no way to purge nor rotate this file without restarting the process. Note that the configured syslog format is preserved, so the output is suitable for use @@ -7859,7 +7859,7 @@ log-tag Sets the tag field in the syslog header to this string. It defaults to the log-tag set in the global section, otherwise the program name as launched - from the command line, which usually is "haproxy". Sometimes it can be useful + from the command line, which usually is "HAProxy". Sometimes it can be useful to differentiate between multiple processes running on the same host, or to differentiate customer instances running in the same process. In the backend, logs about servers up/down will use this tag. As a hint, it can be convenient @@ -7878,7 +7878,7 @@ max-keep-alive-queue servers. 
The purpose of this setting is to set a threshold on the number of queued - connections at which haproxy stops trying to reuse the same server and prefers + connections at which HAProxy stops trying to reuse the same server and prefers to find another one. The default value, -1, means there is no limit. A value of zero means that keep-alive requests will never be queued. For very close servers which can be reached with a low latency and which are not sensible to @@ -7910,7 +7910,7 @@ maxconn closes. If the system supports it, it can be useful on big sites to raise this limit - very high so that haproxy manages connection queues, instead of leaving the + very high so that HAProxy manages connection queues, instead of leaving the clients with unanswered connection attempts. This value should not exceed the global maxconn. Also, keep in mind that a connection contains two buffers of tune.bufsize (16kB by default) each, as well as some other data resulting @@ -7972,10 +7972,10 @@ monitor fail { if | unless } This statement adds a condition which can force the response to a monitor request to report a failure. By default, when an external component queries the URI dedicated to monitoring, a 200 response is returned. When one of the - conditions above is met, haproxy will return 503 instead of 200. This is + conditions above is met, HAProxy will return 503 instead of 200. This is very useful to report a site failure to an external component which may base routing advertisements between multiple sites on the availability reported by - haproxy. In this case, one would rely on an ACL involving the "nbsrv" + HAProxy. In this case, one would rely on an ACL involving the "nbsrv" criterion. Note that "monitor fail" only works in HTTP mode. Both status messages may be tweaked using "errorfile" or "errorloc" if needed. @@ -8021,7 +8021,7 @@ monitor-uri are encouraged to send absolute URIs only. Example : - # Use /haproxy_test to report haproxy's status + # Use /haproxy_test to report HAProxy's status frontend www mode http monitor-uri /haproxy_test @@ -8269,7 +8269,7 @@ option contstats By default, counters used for statistics calculation are incremented only when a session finishes. It works quite well when serving small objects, but with big ones (for example large images or archives) or - with A/V streaming, a graph generated from haproxy counters looks like + with A/V streaming, a graph generated from HAProxy counters looks like a hedgehog. With this option enabled counters get incremented frequently along the session, typically every 5 seconds, which is often enough to produce clean graphs. Recounting touches a hotpath directly so it is not @@ -8387,7 +8387,7 @@ option forwardfor [ except ] [ header ] [ if-none ] Alternatively, the keyword "if-none" states that the header will only be added if it is not present. This should only be used in perfectly trusted - environment, as this might cause a security issue if headers reaching haproxy + environment, as this might cause a security issue if headers reaching HAProxy are under the control of the end-user. This option may be specified either in the frontend or in the backend. If at @@ -8608,7 +8608,7 @@ no option http-no-delay interleaved data chunks in both directions within a single request. This is absolutely not supported by the HTTP specification and will not work across most proxies or servers. 
When such applications attempt to do this through - haproxy, it works but they will experience high delays due to the network + HAProxy, it works but they will experience high delays due to the network optimizations which favor performance by instructing the system to wait for enough data to be available in order to only send full packets. Typical delays are around 200 ms per round trip. Note that this only happens with @@ -8630,23 +8630,23 @@ no option http-no-delay option http-pretend-keepalive no option http-pretend-keepalive - Define whether haproxy will announce keepalive to the server or not + Define whether HAProxy will announce keepalive to the server or not May be used in sections : defaults | frontend | listen | backend yes | no | yes | yes Arguments : none - When running with "option http-server-close" or "option httpclose", haproxy + When running with "option http-server-close" or "option httpclose", HAProxy adds a "Connection: close" header to the request forwarded to the server. Unfortunately, when some servers see this header, they automatically refrain from using the chunked encoding for responses of unknown length, while this - is totally unrelated. The immediate effect is that this prevents haproxy from + is totally unrelated. The immediate effect is that this prevents HAProxy from maintaining the client connection alive. A second effect is that a client or a cache could receive an incomplete response without being aware of it, and consider the response complete. - By setting "option http-pretend-keepalive", haproxy will make the server + By setting "option http-pretend-keepalive", HAProxy will make the server believe it will keep the connection alive. The server will then not fall back - to the abnormal undesired above. When haproxy gets the whole response, it + to the abnormal undesired above. When HAProxy gets the whole response, it will close the connection with the server just as it would do with the "option httpclose". That way the client gets a normal response and the connection is correctly closed on the server side. @@ -8655,8 +8655,8 @@ no option http-pretend-keepalive will more efficiently close the connection themselves after the last packet, and release its buffers slightly earlier. Also, the added packet on the network could slightly reduce the overall peak performance. However it is - worth noting that when this option is enabled, haproxy will have slightly - less work to do. So if haproxy is the bottleneck on the whole architecture, + worth noting that when this option is enabled, HAProxy will have slightly + less work to do. So if HAProxy is the bottleneck on the whole architecture, enabling this option might save a few CPU cycles. This option may be set in backend and listen sections. Using it in a frontend @@ -8728,9 +8728,9 @@ no option http-use-proxy-header connections and make use of the undocumented, non-standard Proxy-Connection header instead. The issue begins when trying to put a load balancer between browsers and such proxies, because there will be a difference between what - haproxy understands and what the client and the proxy agree on. + HAProxy understands and what the client and the proxy agree on. - By setting this option in a frontend, haproxy can automatically switch to use + By setting this option in a frontend, HAProxy can automatically switch to use that non-standard header if it sees proxied requests. A proxied request is defined here as one where the URI begins with neither a '/' nor a '*'. 
This is incompatible with the HTTP tunnel mode. Note that this option can only be @@ -9002,7 +9002,7 @@ no option log-separate-errors yes | yes | yes | no Arguments : none - Sometimes looking for errors in logs is not easy. This option makes haproxy + Sometimes looking for errors in logs is not easy. This option makes HAProxy raise the level of logs containing potentially interesting information such as errors, timeouts, retries, redispatches, or HTTP status codes 5xx. The level changes from "info" to "err". This makes it possible to log them @@ -9099,7 +9099,7 @@ option mysql-check [ user [ { post-41 | pre-41 } ] ] When possible, it is often wise to masquerade the client's IP address when connecting to the server using the "usesrc" argument of the "source" keyword, which requires the transparent proxy feature to be compiled in, and the MySQL - server to route the client via the machine hosting haproxy. + server to route the client via the machine hosting HAProxy. See also: "option httpchk" @@ -9254,16 +9254,16 @@ no option prefer-last-server Arguments : none When the load balancing algorithm in use is not deterministic, and a previous - request was sent to a server to which haproxy still holds a connection, it is + request was sent to a server to which HAProxy still holds a connection, it is sometimes desirable that subsequent requests on a same session go to the same server as much as possible. Note that this is different from persistence, as - we only indicate a preference which haproxy tries to apply without any form + we only indicate a preference which HAProxy tries to apply without any form of warranty. The real use is for keep-alive connections sent to servers. When - this option is used, haproxy will try to reuse the same connection that is + this option is used, HAProxy will try to reuse the same connection that is attached to the server instead of rebalancing to another server, causing a close of the connection. This can make sense for static file servers. It does not make much sense to use this in combination with hashing algorithms. Note, - haproxy already automatically tries to stick to a server which sends a 401 or + HAProxy already automatically tries to stick to a server which sends a 401 or to a proxy which sends a 407 (authentication required), when the load balancing algorithm is not deterministic. This is mandatory for use with the broken NTLM authentication challenge, and significantly helps in @@ -9404,7 +9404,7 @@ no option splice-auto yes | yes | yes | yes Arguments : none - When this option is enabled either on a frontend or on a backend, haproxy + When this option is enabled either on a frontend or on a backend, HAProxy will automatically evaluate the opportunity to use kernel tcp splicing to forward data between the client and the server, in either direction. HAProxy uses heuristics to estimate if kernel splicing might improve performance or @@ -9442,7 +9442,7 @@ no option splice-request yes | yes | yes | yes Arguments : none - When this option is enabled either on a frontend or on a backend, haproxy + When this option is enabled either on a frontend or on a backend, HAProxy will use kernel tcp splicing whenever possible to forward data going from the client to the server. It might still use the recv/send scheme if there are no spare pipes left. 
This option requires splicing to be enabled at @@ -9468,7 +9468,7 @@ no option splice-response yes | yes | yes | yes Arguments : none - When this option is enabled either on a frontend or on a backend, haproxy + When this option is enabled either on a frontend or on a backend, HAProxy will use kernel tcp splicing whenever possible to forward data going from the server to the client. It might still use the recv/send scheme if there are no spare pipes left. This option requires splicing to be enabled at @@ -9557,7 +9557,7 @@ option ssl-hello-chk and most servers tested do not even log the requests containing only hello messages, which is appreciable. - Note that this check works even when SSL support was not built into haproxy + Note that this check works even when SSL support was not built into HAProxy because it forges the SSL message. When SSL support is available, it is best to use native SSL health checks instead of this one. @@ -9584,7 +9584,7 @@ option tcp-check only. - "tcp-check expect" only is mentioned : this is used to test a banner. - The connection is opened and haproxy waits for the server to present some + The connection is opened and HAProxy waits for the server to present some contents which must validate some rules. The check result will be based on the matching between the contents and the rules. This is suited for POP, IMAP, SMTP, FTP, SSH, TELNET. @@ -9924,7 +9924,7 @@ rate-limit sessions When the frontend reaches the specified number of new sessions per second, it stops accepting new connections until the rate drops below the limit again. During this time, the pending sessions will be kept in the socket's backlog - (in system buffers) and haproxy will not even be aware that sessions are + (in system buffers) and HAProxy will not even be aware that sessions are pending. When applying very low limit on a highly loaded service, it may make sense to increase the socket's backlog using the "backlog" keyword. @@ -10044,7 +10044,7 @@ redirect scheme [code ]