2.9.3. Protocol prefixes
2.10. Examples
-3. Global parameters
+3. Global section
3.1. Process management and security
3.2. Performance tuning
3.3. Debugging
-3.3.1. Traces
-3.4. Userlists
-3.5. Mailers
-3.6. Programs (deprecated)
-3.7. HTTP-errors
-3.8. Rings
-3.9. Log forwarding
-3.10. HTTPClient tuning
-3.11. Certificate Storage
-3.11.1. Load options
-3.12. ACME
+3.4. HTTPClient tuning
4. Proxies
4.1. Proxy keywords matrix
11.1. Stick-tables declaration
11.2. Peers declaration
+12. Other sections
+12.1. Traces
+12.2. Userlists
+12.3. Mailers
+12.4. HTTP-errors
+12.5. Rings
+12.6. Log forwarding
+12.7. Certificate Storage
+12.7.1. Load options
+12.8. ACME
+12.9. Programs (deprecated)
+
1. Quick reminder about HTTP
----------------------------
user "$HAPROXY_USER"
Some variables are defined by HAProxy; they can be used in the configuration
-file, or could be inherited by a program (See 3.6. Programs). These variables
+file, or could be inherited by a program (See 12.9. Programs). These variables
are listed in the matrix below, and they are classified among four categories:
* usable: the variable is accessible from the configuration, either to be
described in section 9.3 "Unix Sockets commands" of the management guide.
* exported: variable is exported to launch programs in a modified environment
- (See section 3.6 "Programs"). Note that this does not apply to external
+ (See section 12.9 "Programs"). Note that this does not apply to external
checks which have their own rules regarding exported variables.
There are also two subcategories "master" and "worker", respectively marked 'M' and
report errors in such a case. This option is equivalent to command line
argument "-dW".
-3.3.1. Traces
--------------
+3.4. HTTPClient tuning
+----------------------
-For debugging purpose, it is possible to activate traces on an HAProxy's
-subsystem. This will dump debug messages about a specific subsystem. It is a
-very powerful tool to diagnose issues. Traces can be dynamically configured via
-the CLI. It is also possible to predefined some settings in the configuration
-file, in dedicated "traces" sections. More details about traces can be found in
-the management guide. It remains a developer tools used during complex
-debugging sessions. It is pretty verbose and have a cost, so use it with
-caution. And because it is a developer tool, there is no warranty about the
-backward compatibility of this section.
+HTTPClient is an internal HTTP library which can be used by various subsystems,
+for example in Lua scripts. HTTPClient is not used in the data path; in other
+words, it has nothing to do with HTTP traffic passing through HAProxy.
-traces
- Starts a new traces section. One or multiple "traces" section may be
- used. All direcitives are evaluated in the declararion order, the last ones
- overriding previous ones.
+httpclient.resolvers.disabled <on|off>
+ Disables DNS resolution for the httpclient and prevents the creation of the
+ "default" resolvers section.
-trace <source> <args...>
- Configures on "trace" subsystem. Each of them can be found in the management
- manual, and follow the exact same syntax. Any output that the "trace"
- command would produce will be emitted during the parsing step of the
- section. Most of the time these will be errors and warnings, but certain
- incomplete commands might list permissible choices. This command is not meant
- for regular use, it will generally only be suggested by developers along
- complex debugging sessions. It is important to keep in mind that depending on
- the trace level and details, enabling traces can severely degrade the global
- performance. Please refer to the management manual for the statements syntax.
+ The default value is "off".
- Example:
- ring buf1
- size 10485760 # 10MB
- format timed
- backing-file /tmp/h1.traces
+httpclient.resolvers.id <resolvers id>
+ This option defines the resolvers section the httpclient will use to resolve
+ server names.
- ring buf2
- size 10485760 # 10MB
- format timed
- backing-file /tmp/h2.traces
+ The default value is the "default" resolvers ID. When this option is not set
+ explicitly, resolving is simply disabled if that section is not found.
- traces
- trace h1 sink buf1 level developer verbosity complete start now
- trace h2 sink buf1 level developer verbosity complete start now
+ However, when this option is explicitly set, a configuration error is
+ triggered if the section fails to load.
-3.4. Userlists
---------------
-It is possible to control access to frontend/backend/listen sections or to
-http stats by allowing only authenticated and authorized users. To do this,
-it is required to create at least one userlist and to define users.
+httpclient.resolvers.prefer <ipv4|ipv6>
+ This option allows choosing which IP address family is preferred when
+ resolving, which is convenient when IPv6 is not available on your network.
+ The default value is "ipv6".
-userlist <listname>
- Creates new userlist with name <listname>. Many independent userlists can be
- used to store authentication & authorization data for independent customers.
+httpclient.retries <number>
+ This option configures the number of retry attempts the httpclient performs
+ when a request fails. It works the same way as the "retries" keyword in a
+ backend.
-group <groupname> [users <user>,<user>,(...)]
- Adds group <groupname> to the current userlist. It is also possible to
- attach users to this group by using a comma separated list of names
- proceeded by "users" keyword.
+ Default value is 3.
-user <username> [password|insecure-password <password>]
- [groups <group>,<group>,(...)]
- Adds user <username> to the current userlist. Both secure (encrypted) and
- insecure (unencrypted) passwords can be used. Encrypted passwords are
- evaluated using the crypt(3) function, so depending on the system's
- capabilities, different algorithms are supported. For example, modern Glibc
- based Linux systems support MD5, SHA-256, SHA-512, and, of course, the
- classic DES-based method of encrypting passwords.
+httpclient.ssl.ca-file <cafile>
+ This option defines the ca-file which should be used to verify the server
+ certificate. It takes the same parameters as the "ca-file" option on the
+ server line.
- Attention: Be aware that using encrypted passwords might cause significantly
- increased CPU usage, depending on the number of requests, and the algorithm
- used. For any of the hashed variants, the password for each request must
- be processed through the chosen algorithm, before it can be compared to the
- value specified in the config file. Most current algorithms are deliberately
- designed to be expensive to compute to achieve resistance against brute
- force attacks. They do not simply salt/hash the clear text password once,
- but thousands of times. This can quickly become a major factor in HAProxy's
- overall CPU consumption, and can even lead to application crashes!
+ When this option is not used, the default value is "@system-ca", which tries
+ to load the system's CA certificates. If this fails, SSL is disabled for the
+ httpclient.
- To address the high CPU usage of hash functions, one approach is to reduce
- the number of rounds of the hash function (SHA family algorithms) or decrease
- the "cost" of the function, if the algorithm supports it.
+ However, when this option is explicitly set, a configuration error is
+ triggered if loading fails.
- As a side note, musl (e.g. Alpine Linux) implementations are known to be
- slower than their glibc counterparts when calculating hashes, so you might
- want to consider this aspect too.
+httpclient.ssl.verify [none|required]
+ Works the same way as the "verify" option on server lines. If set to "none",
+ server certificates are not verified. The default value is "required".
- Example:
- userlist L1
- group G1 users tiger,scott
- group G2 users xdb,scott
+ When this option is not used, the default value is "required". If
+ verification cannot be set up, SSL is disabled for the httpclient.
- user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91
- user scott insecure-password elgato
- user xdb insecure-password hello
+ However, when this option is explicitly set, a configuration error is
+ triggered if verification cannot be set up.
- userlist L2
- group G1
- group G2
+httpclient.timeout.connect <timeout>
+ Set the maximum time to wait for a connection attempt by default for the
+ httpclient.
- user tiger password $6$k6y3o.eP$JlKBx(...)xHSwRv6J.C0/D7cV91 groups G1
- user scott insecure-password elgato groups G1,G2
- user xdb insecure-password hello groups G2
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
- Please note that both lists are functionally identical.
+ The default value is 5000ms.
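+
+ As an illustration, the httpclient tuning directives described above may be
+ combined in the global section as follows; the values are purely
+ illustrative:
+
+ Example:
+     global
+         httpclient.resolvers.prefer ipv4
+         httpclient.retries 2
+         httpclient.ssl.verify none
+         httpclient.timeout.connect 3s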
-3.5. Mailers
-------------
-It is possible to send email alerts when the state of servers changes.
-If configured email alerts are sent to each mailer that is configured
-in a mailers section. Email is sent to mailers through Lua (see
-examples/lua/mailers.lua).
+4. Proxies
+----------
-mailers <mailersect>
- Creates a new mailer list with the name <mailersect>. It is an
- independent section which is referenced by one or more proxies.
+Proxy configuration can be located in a set of sections :
+ - defaults [<name>] [ from <defaults_name> ]
+ - frontend <name> [ from <defaults_name> ]
+ - backend <name> [ from <defaults_name> ]
+ - listen <name> [ from <defaults_name> ]
-mailer <mailername> <ip>:<port>
- Defines a mailer inside a mailers section.
+A "frontend" section describes a set of listening sockets accepting client
+connections.
- Example:
- global
- # mailers.lua file as provided in the git repository
- # adjust path as needed
- lua-load examples/lua/mailers.lua
+A "backend" section describes a set of servers to which the proxy will connect
+to forward incoming connections.
- mailers mymailers
- mailer smtp1 192.168.0.1:587
- mailer smtp2 192.168.0.2:587
+A "listen" section defines a complete proxy with its frontend and backend
+parts combined in one section. It is generally useful for TCP-only traffic.
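+
+As an illustration, a minimal HTTP proxy split into a frontend and a backend,
+together with an equivalent TCP-only "listen" proxy, could look like this
+(names and addresses are purely illustrative):
+
+ Example:
+     frontend www
+         mode http
+         bind :80
+         default_backend app
+
+     backend app
+         mode http
+         server app1 192.0.2.10:8080
+
+     listen ssh-gw
+         mode tcp
+         bind :2222
+         server ssh1 192.0.2.20:22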
- backend mybackend
- mode tcp
- balance roundrobin
+A "defaults" section resets all settings to the documented ones and presets new
+ones for use by subsequent sections. All of "frontend", "backend" and "listen"
+sections always take their initial settings from a defaults section, by default
+the latest one that appears before the newly created section. It is possible to
+explicitly designate a specific "defaults" section to load the initial settings
+from by indicating its name on the section line after the optional keyword
+"from". While "defaults" sections do not require a name, naming them is
+encouraged for better readability. It is also the only way to designate a
+specific section to use instead of the default previous one. Since "defaults"
+section names are optional, by default a very permissive check is applied on
+their names, and these
+are even permitted to overlap. However if a "defaults" section is referenced by
+any other section, its name must comply with the syntax imposed on all proxy
+names, and this name must be unique among the defaults sections. Please note
+that regardless of what is currently permitted, it is recommended to avoid
+duplicate section names in general and to respect the same syntax as for proxy
+names. This rule might be enforced in a future version. In addition, a warning
+is emitted if a defaults section is explicitly used by a proxy while it is also
+implicitly used by another one because it is the last one defined. It is highly
+encouraged to not mix both usages by always using explicit references or by
+adding a last common defaults section reserved for all implicit uses.
- email-alert mailers mymailers
- email-alert from test1@horms.org
- email-alert to test2@horms.org
+Note that it is even possible for a defaults section to take its initial
+settings from another one, and as such, inherit settings across multiple levels
+of defaults sections. This can be convenient to establish certain configuration
+profiles to carry groups of default settings (e.g. TCP vs HTTP or short vs long
+timeouts) but can quickly become confusing to follow.
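+
+For example, the following sketch (section and server names are illustrative)
+chains two levels of defaults sections to build a reusable HTTP profile:
+
+ Example:
+     defaults base
+         timeout connect 5s
+
+     defaults http-profile from base
+         mode http
+         timeout client 30s
+         timeout server 30s
+
+     backend app from http-profile
+         server app1 192.0.2.10:8080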
- server srv1 192.168.0.30:80
- server srv2 192.168.0.31:80
+All proxy names must be formed from upper and lower case letters, digits,
+'-' (dash), '_' (underscore), '.' (dot) and ':' (colon). Proxy names are
+case-sensitive, which means that "www" and "WWW" are two different proxies.
-timeout mail <time>
- Defines the time available for a mail/connection to be made and send to
- the mail-server. If not defined the default value is 10 seconds. To allow
- for at least two SYN-ACK packets to be send during initial TCP handshake it
- is advised to keep this value above 4 seconds.
+Historically, all proxy names could overlap, it just caused troubles in the
+logs. Since the introduction of content switching, it is mandatory that two
+proxies with overlapping capabilities (frontend/backend) have different names.
+However, it is still permitted that a frontend and a backend share the same
+name, as this configuration seems to be commonly encountered.
- Example:
- mailers mymailers
- timeout mail 20s
- mailer smtp1 192.168.0.1:587
+Right now, two major proxy modes are supported : "tcp", also known as layer 4,
+and "http", also known as layer 7. In layer 4 mode, HAProxy simply forwards
+bidirectional traffic between two sides. In layer 7 mode, HAProxy analyzes the
+protocol, and can interact with it by allowing, blocking, switching, adding,
+modifying, or removing arbitrary contents in requests or responses, based on
+arbitrary criteria.
-3.6. Programs (deprecated)
---------------------------
+In HTTP mode, the processing applied to requests and responses flowing over
+a connection depends on the combination of the frontend's HTTP options and
+the backend's. HAProxy supports 3 connection modes :
-This section is deprecated and should disappear with HAProxy 3.3. The section
-could be replaced easily by separated process managers. Systemd unit files or
-sysvinit scripts could replace this section as they are more reliable. In docker
-environments, some alternatives can also be found such as s6 or supervisord.
+ - KAL : keep alive ("option http-keep-alive") which is the default mode : all
+ requests and responses are processed, and connections remain open but idle
+ between responses and new requests.
-In master-worker mode, it is possible to launch external binaries with the
-master, these processes are called programs. These programs are launched and
-managed the same way as the workers.
+ - SCL: server close ("option http-server-close") : the server-facing
+ connection is closed after the end of the response is received, but the
+ client-facing connection remains open.
-Since version 3.1, the program section has a slightly different behavior, the
-section is parsed and the program is started from the master, but the rest of
-the configuration is loaded in the worker. This mean the program configuration
-is completely separated from the worker configuration, and a program could be
-reexecuted even if the worker configuration is wrong upon a reload.
+ - CLO: close ("option httpclose"): the connection is closed after the end of
+ the response and "Connection: close" appended in both directions.
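+
+For example, combining a keep-alive frontend with a server-close backend
+results in the server-close mode being applied to the connection; names and
+addresses are purely illustrative:
+
+ Example:
+     frontend fe
+         mode http
+         bind :80
+         option http-keep-alive
+         default_backend be
+
+     backend be
+         mode http
+         option http-server-close
+         server s1 192.0.2.10:80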
-During a reload of HAProxy, those processes are dealing with the same
-sequence as a worker:
+The effective mode that will be applied to a connection passing through a
+frontend and a backend can be determined by both proxy modes according to the
+following matrix, but in short, the modes are symmetric, keep-alive is the
+weakest option and close is the strongest.
- - the master is re-executed
- - the master sends a SIGUSR1 signal to the program
- - if "option start-on-reload" is not disabled, the master launches a new
- instance of the program
+ Backend mode
-During a stop, or restart, a SIGTERM is sent to the programs.
-
-program <name>
- This is a new program section, this section will create an instance <name>
- which is visible in "show proc" on the master CLI. (See "9.4. Master CLI" in
- the management guide).
-
-command <command> [arguments*]
- Define the command to start with optional arguments. The command is looked
- up in the current PATH if it does not include an absolute path. This is a
- mandatory option of the program section. Arguments containing spaces must
- be enclosed in quotes or double quotes or be prefixed by a backslash.
-
-user <user name>
- Changes the executed command user ID to the <user name> from /etc/passwd.
- See also "group".
-
-group <group name>
- Changes the executed command group ID to the <group name> from /etc/group.
- See also "user".
-
-option start-on-reload
-no option start-on-reload
- Start (or not) a new instance of the program upon a reload of the master.
- The default is to start a new instance. This option may only be used in a
- program section.
-
-
-3.7. HTTP-errors
-----------------
-
-It is possible to globally declare several groups of HTTP errors, to be
-imported afterwards in any proxy section. Same group may be referenced at
-several places and can be fully or partially imported.
-
-http-errors <name>
- Create a new http-errors group with the name <name>. It is an independent
- section that may be referenced by one or more proxies using its name.
-
-errorfile <code> <file>
- Associate a file contents to an HTTP error code
-
- Arguments :
- <code> is the HTTP status code. Currently, HAProxy is capable of
- generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410,
- 425, 429, 500, 501, 502, 503, and 504.
-
- <file> designates a file containing the full HTTP response. It is
- recommended to follow the common practice of appending ".http" to
- the filename so that people do not confuse the response with HTML
- error pages, and to use absolute paths, since files are read
- before any chroot is performed.
-
- Please referrers to "errorfile" keyword in section 4 for details.
-
- Example:
- http-errors website-1
- errorfile 400 /etc/haproxy/errorfiles/site1/400.http
- errorfile 404 /etc/haproxy/errorfiles/site1/404.http
- errorfile 408 /dev/null # work around Chrome pre-connect bug
-
- http-errors website-2
- errorfile 400 /etc/haproxy/errorfiles/site2/400.http
- errorfile 404 /etc/haproxy/errorfiles/site2/404.http
- errorfile 408 /dev/null # work around Chrome pre-connect bug
-
-3.8. Rings
-----------
-
-It is possible to globally declare ring-buffers, to be used as target for log
-servers or traces.
-
-ring <ringname>
- Creates a new ring-buffer with name <ringname>.
-
-backing-file <path>
- This replaces the regular memory allocation by a RAM-mapped file to store the
- ring. This can be useful for collecting traces or logs for post-mortem
- analysis, without having to attach a slow client to the CLI. Newer contents
- will automatically replace older ones so that the latest contents are always
- available. The contents written to the ring will be visible in that file once
- the process stops (most often they will even be seen very soon after but
- there is no such guarantee since writes are not synchronous).
-
- When this option is used, the total storage area is reduced by the size of
- the "struct ring" that starts at the beginning of the area, and that is
- required to recover the area's contents. The file will be created with the
- starting user's ownership, with mode 0600 and will be of the size configured
- by the "size" directive. When the directive is parsed (thus even during
- config checks), any existing non-empty file will first be renamed with the
- extra suffix ".bak", and any previously existing file with suffix ".bak" will
- be removed. This ensures that instant reload or restart of the process will
- not wipe precious debugging information, and will leave time for an admin to
- spot this new ".bak" file and to archive it if needed. As such, after a crash
- the file designated by <path> will contain the freshest information, and if
- the service is restarted, the "<path>.bak" file will have it instead. This
- means that the total storage capacity required will be double of the ring
- size. Failures to rotate the file are silently ignored, so placing the file
- into a directory without write permissions will be sufficient to avoid the
- backup file if not desired.
-
- WARNING: there are stability and security implications in using this feature.
- First, backing the ring to a slow device (e.g. physical hard drive) may cause
- perceptible slowdowns during accesses, and possibly even panics if too many
- threads compete for accesses. Second, an external process modifying the area
- could cause the haproxy process to crash or to overwrite some of its own
- memory with traces. Third, if the file system fills up before the ring,
- writes to the ring may cause the process to crash.
-
- The information present in this ring are structured and are NOT directly
- readable using a text editor (even though most of it looks barely readable).
- The output of this file is only intended for developers.
-
-description <text>
- The description is an optional description string of the ring. It will
- appear on CLI. By default, <name> is reused to fill this field.
-
-format <format>
- Format used to store events into the ring buffer.
-
- Arguments:
- <format> is the log format used when generating syslog messages. It may be
- one of the following :
-
- iso A message containing only the ISO date, followed by the text.
- The PID, process name and system name are omitted. This is
- designed to be used with a local log server.
-
- local Analog to rfc3164 syslog message format except that hostname
- field is stripped. This is the default.
- Note: option "log-send-hostname" switches the default to
- rfc3164.
-
- raw A message containing only the text. The level, PID, date, time,
- process name and system name are omitted. This is designed to be
- used in containers or during development, where the severity
- only depends on the file descriptor used (stdout/stderr). This
- is the default.
-
- rfc3164 The RFC3164 syslog message format.
- (https://tools.ietf.org/html/rfc3164)
-
- rfc5424 The RFC5424 syslog message format.
- (https://tools.ietf.org/html/rfc5424)
-
- short A message containing only a level between angle brackets such as
- '<3>', followed by the text. The PID, date, time, process name
- and system name are omitted. This is designed to be used with a
- local log server. This format is compatible with what the systemd
- logger consumes.
+                    | KAL | SCL | CLO
+                ----+-----+-----+----
+                KAL | KAL | SCL | CLO
+                ----+-----+-----+----
+  Frontend mode SCL | SCL | SCL | CLO
+                ----+-----+-----+----
+                CLO | CLO | CLO | CLO
- priority A message containing only a level plus syslog facility between angle
- brackets such as '<63>', followed by the text. The PID, date, time,
- process name and system name are omitted. This is designed to be used
- with a local log server.
+It is possible to chain a TCP frontend to an HTTP backend. This is pointless
+if only HTTP traffic is handled, but it may be used to handle several
+protocols within the same frontend. In this case, the client's connection is
+first handled as a raw TCP connection before being upgraded to HTTP. Before
+the upgrade, content processing is performed on raw data. Once upgraded, data
+is parsed and stored using an internal representation called HTX and it is no
+longer possible to rely on the raw representation. There is no way to go back.
- timed A message containing only a level between angle brackets such as
- '<3>', followed by ISO date and by the text. The PID, process
- name and system name are omitted. This is designed to be
- used with a local log server.
+There are two kinds of upgrades, in-place upgrades and destructive upgrades.
+The first one involves a TCP to HTTP/1 upgrade. In HTTP/1, request
+processing is serialized, thus the applicative stream can be preserved. The
+second one involves a TCP to HTTP/2 upgrade. Because it is a multiplexed
+protocol, the applicative stream cannot be associated to any HTTP/2 stream and
+is destroyed. New applicative streams are then created when HAProxy receives
+new HTTP/2 streams at the lower level, in the H2 multiplexer. It is important
+to understand this difference because it drastically changes the way data is
+processed. When an HTTP/1 upgrade is performed, the content processing
+already performed on raw data is neither lost nor re-executed, while for an
+HTTP/2 upgrade, applicative streams are distinct and all frontend rules are
+evaluated systematically on each one. And as said, the first stream, the TCP
+one, is destroyed, but only after the frontend rules have been evaluated.
-maxlen <length>
- The maximum length of an event message stored into the ring,
- including formatted header. If an event message is longer than
- <length>, it will be truncated to this length.
+There is another important point to understand when HTTP processing is
+performed from a TCP proxy. While HAProxy is able to parse HTTP/1 on the fly
+from tcp-request content rules, it is not possible for HTTP/2. Only the HTTP/2
+preface can be parsed. This is a huge limitation regarding HTTP content
+analysis in TCP. Concretely, it is only possible to know whether received data
+are HTTP. For instance, it is not possible to choose a backend based on the
+Host header value, while it is trivial in HTTP/1. Fortunately, there is a
+solution to mitigate this drawback.
-server <name> <address> [param*]
- Used to configure a syslog tcp server to forward messages from ring buffer.
- This supports for all "server" parameters found in 5.2 paragraph. Some of
- these parameters are irrelevant for "ring" sections. Important point: there
- is little reason to add more than one server to a ring, because all servers
- will receive the exact same copy of the ring contents, and as such the ring
- will progress at the speed of the slowest server. If one server does not
- respond, it will prevent old messages from being purged and may block new
- messages from being inserted into the ring. The proper way to send messages
- to multiple servers is to use one distinct ring per log server, not to
- attach multiple servers to the same ring. Note that specific server directive
- "log-proto" is used to set the protocol used to send messages.
+There are two ways to perform an HTTP upgrade. The first one, the historical
+method, is to select an HTTP backend. The upgrade happens when the backend is
+set. Thus, for in-place upgrades, only the backend configuration is considered
+in the HTTP data processing. For destructive upgrades, the applicative stream
+is destroyed, thus its processing is stopped. With this method, the
+possibilities of choosing a backend on an HTTP/2 connection are really limited,
+as mentioned above, and somewhat pointless because the stream is destroyed. The
+second method is
+to upgrade during the tcp-request content rules evaluation, thanks to the
+"switch-mode http" action. In this case, the upgrade is performed in the
+frontend context and it is possible to define HTTP directives in this
+frontend. For in-place upgrades, it offers all the power of the HTTP analysis
+as soon as possible. It is not that far from an HTTP frontend. For destructive
+upgrades, it does not change anything except it is useless to choose a backend
+on limited information. It is of course the recommended method. Thus, testing
+the request protocol from the tcp-request content rules to perform an HTTP
+upgrade is enough. All the remaining HTTP manipulation may be moved to the
+frontend http-request ruleset. But keep in mind that tcp-request content rules
+remain evaluated on each stream; that cannot be changed.
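+
+A minimal sketch of this recommended method, using the predefined "HTTP" ACL
+to detect an HTTP/1 request before upgrading (names and addresses are purely
+illustrative):
+
+ Example:
+     frontend mixed
+         mode tcp
+         bind :8000
+         tcp-request inspect-delay 5s
+         tcp-request content switch-mode http if HTTP
+         # the rules below only apply once the connection is upgraded
+         http-request set-header X-Upgraded true
+         default_backend app
+
+     backend app
+         mode http
+         server app1 192.0.2.10:8080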
-size <size>
- This is the optional size in bytes for the ring-buffer. Default value is
- set to BUFSIZE.
+4.1. Proxy keywords matrix
+--------------------------
-timeout connect <timeout>
- Set the maximum time to wait for a connection attempt to a server to succeed.
+The following list of keywords is supported. Most of them may only be used in a
+limited set of section types. Some of them are marked as "deprecated" because
+they are inherited from an old syntax which may be confusing or functionally
+limited, and there are new recommended keywords to replace them. Keywords
+marked with "(*)" can be optionally inverted using the "no" prefix, e.g. "no
+option contstats". This makes sense when the option has been enabled by default
+and must be disabled for a specific instance. Such options may also be prefixed
+with "default" in order to restore default settings regardless of what has been
+specified in a previous "defaults" section. Keywords supported in defaults
+sections marked with "(!)" are only supported in named defaults sections, not
+anonymous ones.
- Arguments :
- <timeout> is the timeout value specified in milliseconds by default, but
- can be in any other unit if the number is suffixed by the unit,
- as explained at the top of this document.
+Note: Some dangerous and not recommended directives are intentionally not
+ listed in the following matrix. This is on purpose: these directives are
+ documented, but omitting them here is one more way to discourage anyone
+ from using them.
-timeout server <timeout>
- Set the maximum time for pending data staying into output buffer.
- Arguments :
- <timeout> is the timeout value specified in milliseconds by default, but
- can be in any other unit if the number is suffixed by the unit,
- as explained at the top of this document.
-
- Example:
- global
- log ring@myring local7
+ keyword defaults frontend listen backend
+------------------------------------+----------+----------+---------+---------
+acl X (!) X X X
+backlog X X X -
+balance X - X X
+bind - X X -
+capture cookie - X X -
+capture request header - X X -
+capture response header - X X -
+clitcpka-cnt X X X -
+clitcpka-idle X X X -
+clitcpka-intvl X X X -
+compression X X X X
+cookie X - X X
+crt - X X -
+declare capture - X X -
+default-server X - X X
+default_backend X X X -
+description - X X X
+disabled X X X X
+dispatch - - X X
+email-alert from X X X X
+email-alert level X X X X
+email-alert mailers X X X X
+email-alert myhostname X X X X
+email-alert to X X X X
+enabled X X X X
+errorfile X X X X
+errorfiles X X X X
+errorloc X X X X
+errorloc302 X X X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+errorloc303 X X X X
+error-log-format X X X -
+force-persist - - X X
+filter - X X X
+fullconn X - X X
+guid - X X X
+hash-balance-factor X - X X
+hash-preserve-affinity X - X X
+hash-type X - X X
+http-after-response X (!) X X X
+http-check comment X - X X
+http-check connect X - X X
+http-check disable-on-404 X - X X
+http-check expect X - X X
+http-check send X - X X
+http-check send-state X - X X
+http-check set-var X - X X
+http-check unset-var X - X X
+http-error X X X X
+http-request X (!) X X X
+http-response X (!) X X X
+http-reuse X - X X
+http-send-name-header X - X X
+id - X X X
+ignore-persist - - X X
+load-server-state-from-file X - X X
+log (*) X X X X
+log-format X X X -
+log-format-sd X X X -
+log-tag X X X X
+log-steps X X X -
+max-keep-alive-queue X - X X
+max-session-srv-conns X X X -
+maxconn X X X -
+mode X X X X
+monitor fail - X X -
+monitor-uri X X X -
+option abortonclose (*) X - X X
+option allbackups (*) X - X X
+option checkcache (*) X - X X
+option clitcpka (*) X X X -
+option contstats (*) X X X -
+option disable-h2-upgrade (*) X X X -
+option dontlog-normal (*) X X X -
+option dontlognull (*) X X X -
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+option forwardfor X X X X
+option forwarded (*) X - X X
+option h1-case-adjust-bogus-client (*) X X X -
+option h1-case-adjust-bogus-server (*) X - X X
+option http-buffer-request (*) X X X X
+option http-drop-request-trailers (*) X - - X
+option http-drop-response-trailers (*) X - X -
+option http-ignore-probes (*) X X X -
+option http-keep-alive (*) X X X X
+option http-no-delay (*) X X X X
+option http-pretend-keepalive (*) X - X X
+option http-restrict-req-hdr-names X X X X
+option http-server-close (*) X X X X
+option http-use-proxy-header (*) X X X -
+option httpchk X - X X
+option httpclose (*) X X X X
+option httplog X X X -
+option httpslog X X X -
+option independent-streams (*) X X X X
+option ldap-check X - X X
+option external-check X - X X
+option log-health-checks (*) X - X X
+option log-separate-errors (*) X X X -
+option logasap (*) X X X -
+option mysql-check X - X X
+option nolinger (*) X X X X
+option originalto X X X X
+option persist (*) X - X X
+option pgsql-check X - X X
+option prefer-last-server (*) X - X X
+option redispatch (*) X - X X
+option redis-check X - X X
+option smtpchk X - X X
+option socket-stats (*) X X X -
+option splice-auto (*) X X X X
+option splice-request (*) X X X X
+option splice-response (*) X X X X
+option spop-check X - X X
+option srvtcpka (*) X - X X
+option ssl-hello-chk X - X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+option tcp-check X - X X
+option tcp-smart-accept (*) X X X -
+option tcp-smart-connect (*) X - X X
+option tcpka X X X X
+option tcplog X X X -
+option transparent (*) X - X X
+option idle-close-on-response (*) X X X -
+external-check command X - X X
+external-check path X - X X
+persist rdp-cookie X - X X
+quic-initial X (!) X X -
+rate-limit sessions X X X -
+redirect - X X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+retries X - X X
+retry-on X - X X
+server - - X X
+server-state-file-name X - X X
+server-template - - X X
+source X - X X
+srvtcpka-cnt X - X X
+srvtcpka-idle X - X X
+srvtcpka-intvl X - X X
+stats admin - X X X
+stats auth X X X X
+stats enable X X X X
+stats hide-version X X X X
+stats http-request - X X X
+stats realm X X X X
+stats refresh X X X X
+stats scope X X X X
+stats show-desc X X X X
+stats show-legends X X X X
+stats show-node X X X X
+stats uri X X X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+stick match - - X X
+stick on - - X X
+stick store-request - - X X
+stick store-response - - X X
+stick-table - X X X
+tcp-check comment X - X X
+tcp-check connect X - X X
+tcp-check expect X - X X
+tcp-check send X - X X
+tcp-check send-lf X - X X
+tcp-check send-binary X - X X
+tcp-check send-binary-lf X - X X
+tcp-check set-var X - X X
+tcp-check unset-var X - X X
+tcp-request connection X (!) X X -
+tcp-request content X (!) X X X
+tcp-request inspect-delay X (!) X X X
+tcp-request session X (!) X X -
+tcp-response content X (!) - X X
+tcp-response inspect-delay X (!) - X X
+timeout check X - X X
+timeout client X X X -
+timeout client-fin X X X -
+timeout client-hs X X X -
+timeout connect X - X X
+timeout http-keep-alive X X X X
+timeout http-request X X X X
+timeout queue X - X X
+timeout server X - X X
+timeout server-fin X - X X
+timeout tarpit X X X X
+timeout tunnel X - X X
+transparent (deprecated) X - X X
+unique-id-format X X X -
+unique-id-header X X X -
+use_backend - X X -
+use-fcgi-app - - X X
+use-server - - X X
+------------------------------------+----------+----------+---------+---------
+ keyword defaults frontend listen backend
- ring myring
- description "My local buffer"
- format rfc5424
- maxlen 1200
- size 32764
- timeout connect 5s
- timeout server 10s
- server mysyslogsrv 127.0.0.1:6514 log-proto octet-count
-3.9. Log forwarding
--------------------
+4.2. Alphabetically sorted keywords reference
+---------------------------------------------
-It is possible to declare one or multiple log forwarding section,
-HAProxy will forward all received log messages to a log servers list.
+This section provides a description of each keyword and its usage.
-log-forward <name>
- Creates a new log forwarder proxy identified as <name>.
-backlog <conns>
- Give hints to the system about the approximate listen backlog desired size
- on connections accept.
+acl <aclname> <criterion> [flags] [operator] <value> ...
+ Declare or complete an access list.
-bind <addr> [param*]
- Used to configure a stream log listener to receive messages to forward.
- This supports the "bind" parameters found in 5.1 paragraph including
- those about ssl but some statements such as "alpn" may be irrelevant for
- syslog protocol over TCP.
- Those listeners support both "Octet Counting" and "Non-Transparent-Framing"
- modes as defined in rfc-6587.
+ May be used in the following contexts: tcp, http
-dgram-bind <addr> [param*]
- Used to configure a datagram log listener to receive messages to forward.
- Addresses must be in IPv4 or IPv6 form,followed by a port. This supports
- for some of the "bind" parameters found in 5.1 paragraph among which
- "interface", "namespace" or "transparent", the other ones being
- silently ignored as irrelevant for UDP/syslog case.
+ May be used in sections : defaults | frontend | listen | backend
+ yes(!) | yes | yes | yes
-log global
-log <target> [len <length>] [format <format>] [sample <ranges>:<sample_size>]
- <facility> [<level> [<minlevel>]]
- Used to configure target log servers. See more details on proxies
- documentation.
- If no format specified, HAProxy tries to keep the incoming log format.
- Configured facility is ignored, except if incoming message does not
- present a facility but one is mandatory on the outgoing format.
- If there is no timestamp available in the input format, but the field
- exists in output format, HAProxy will use the local date.
+ This directive is only available from named defaults sections, not anonymous
+ ones. ACLs defined in a defaults section are not visible from other sections
+ using it.
Example:
- global
- log stderr format iso local7
+ acl invalid_src src 0.0.0.0/7 224.0.0.0/3
+ acl invalid_src src_port 0:1023
+ acl local_dst hdr(host) -i localhost
- ring myring
- description "My local buffer"
- format rfc5424
- maxlen 1200
- size 32764
- timeout connect 5s
- timeout server 10s
- # syslog tcp server
- server mysyslogsrv 127.0.0.1:514 log-proto octet-count
-
- log-forward sylog-loadb
- dgram-bind 127.0.0.1:1514
- bind 127.0.0.1:1514
- # all messages on stderr
- log global
- # all messages on local tcp syslog server
- log ring@myring local0
- # load balance messages on 4 udp syslog servers
- log 127.0.0.1:10001 sample 1:4 local0
- log 127.0.0.1:10002 sample 2:4 local0
- log 127.0.0.1:10003 sample 3:4 local0
- log 127.0.0.1:10004 sample 4:4 local0
-
-maxconn <conns>
- Fix the maximum number of concurrent connections on a log forwarder.
- 10 is the default.
+ See section 7 about ACL usage.
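+  For instance, because such ACLs are private to the named defaults section
+  declaring them, a rule in that same section may rely on them. The section
+  and ACL names below are only illustrative:
+
+  Example:
+        defaults tcp-defaults
+            mode tcp
+            timeout client 30s
+            timeout server 30s
+            # visible only inside this named defaults section
+            acl invalid_src src 0.0.0.0/7 224.0.0.0/3
+            tcp-request connection reject if invalid_src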
-timeout client <timeout>
- Set the maximum inactivity time on the client side.
-option assume-rfc6587-ntf
- Directs HAProxy to treat incoming TCP log streams always as using
- non-transparent framing. This option simplifies the framing logic and ensures
- consistent handling of messages, particularly useful when dealing with
- improperly formed starting characters.
+backlog <conns>
+ Give hints to the system about the approximate listen backlog desired size
-option dont-parse-log
- Enables HAProxy to relay syslog messages without attempting to parse and
- restructure them, useful for forwarding messages that may not conform to
- traditional formats. This option should be used with the format raw setting on
- destination log targets to ensure the original message content is preserved.
+ May be used in the following contexts: tcp, http
-option host { replace | fill | keep | append }
- Set the host strategy that should be used on the log-forward section
- regarding syslog hostname field for outbound rfc3164 or rfc5424 messages.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- replace If input message already contains a value for the hostname field,
- we replace it by the source IP address from the sender.
- If input message doesn't contain a value for the hostname field
- (ie: '-' as input rfc5424 message or non compliant rfc3164 or
- rfc5424 message), we use the source IP address from the sender as
- hostname field.
+ Arguments :
+ <conns> is the number of pending connections. Depending on the operating
+ system, it may represent the number of already acknowledged
+ connections, of non-acknowledged ones, or both.
- fill If input message already contains a value for the hostname field,
- we keep it.
- If input message doesn't contain a value for the hostname field
- (ie: '-' as input rfc5424 message or non compliant rfc3164 or
- rfc5424 message), we use the source IP address from the sender as
- hostname field.
- (This is the default)
+  This option is only meaningful for stream listeners, including QUIC ones.
+  However, its behavior is not identical for QUIC instances.
- keep If input message already contains a value for the hostname field,
- we keep it.
- If input message doesn't contain a value for the hostname field,
- we set it to 'localhost' (rfc3164) or '-' (rfc5424).
+ For all listeners but QUIC, in order to protect against SYN flood attacks,
+ one solution is to increase the system's SYN backlog size. Depending on the
+ system, sometimes it is just tunable via a system parameter, sometimes it is
+ not adjustable at all, and sometimes the system relies on hints given by the
+ application at the time of the listen() syscall. By default, HAProxy passes
+ the frontend's maxconn value to the listen() syscall. On systems which can
+ make use of this value, it can sometimes be useful to be able to specify a
+ different value, hence this backlog parameter.
- append If input message already contains a value for the hostname field,
- we append a comma followed by the IP address from the sender.
- If input message doesn't contain a value for the hostname field,
- we use the source IP address from the sender.
+ On Linux 2.4, the parameter is ignored by the system. On Linux 2.6, it is
+ used as a hint and the system accepts up to the smallest greater power of
+ two, and never more than some limits (usually 32768).
- For all options above, if the source IP address from the sender is not
- available (ie: UNIX/ABNS socket), then the resulting strategy is "keep".
+  For QUIC listeners, backlog sets a shared limit for both the maximum count
+  of active handshakes and of connections waiting to be accepted. The
+  handshake phase relies primarily on the network latency with the remote
+  peer, whereas the second phase depends solely on haproxy load. When either
+  of these limits is reached, haproxy starts to drop reception of INITIAL
+  packets, preventing any new connection allocation, until the connection
+  excess starts to decrease. This situation may cause browsers to silently
+  downgrade the HTTP version and switch to TCP.
- Note that this option is only relevant for rfc3164 or rfc5424 destination
- log format. Else setting the option will have no visible effect.
+ See also : "maxconn" and the target operating system's tuning guide.
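+  For instance, to hint a larger SYN backlog to the system than the
+  frontend's "maxconn" (names and values below are only illustrative):
+
+  Example:
+        frontend www
+            bind :80
+            maxconn 10000
+            # pass a larger hint than maxconn to the listen() syscall
+            backlog 30000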
-3.10. HTTPClient tuning
------------------------
-HTTPClient is an internal HTTP library, it can be used by various subsystems,
-for example in LUA scripts. HTTPClient is not used in the data path, in other
-words it has nothing with HTTP traffic passing through HAProxy.
+balance <algorithm> [ <arguments> ]
+balance url_param <param> [check_post]
+ Define the load balancing algorithm to be used in a backend.
-httpclient.resolvers.disabled <on|off>
- Disable the DNS resolution of the httpclient. Prevent the creation of the
- "default" resolvers section.
+ May be used in the following contexts: tcp, http, log
- Default value is off.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
-httpclient.resolvers.id <resolvers id>
- This option defines the resolvers section with which the httpclient will try
- to resolve.
+ Arguments :
+ <algorithm> is the algorithm used to select a server when doing load
+ balancing. This only applies when no persistence information
+ is available, or when a connection is redispatched to another
+ server. <algorithm> may be one of the following :
- Default option is the "default" resolvers ID. By default, if this option is
- not used, it will simply disable the resolving if the section is not found.
+ roundrobin Each server is used in turns, according to their weights.
+ This is the smoothest and fairest algorithm when the server's
+ processing time remains equally distributed. This algorithm
+ is dynamic, which means that server weights may be adjusted
+ on the fly for slow starts for instance. It is limited by
+ design to 4095 active servers per backend. Note that in some
+ large farms, when a server becomes up after having been down
+                  for a very short time, it may sometimes take a few hundred
+ requests for it to be re-integrated into the farm and start
+ receiving traffic. This is normal, though very rare. It is
+ indicated here in case you would have the chance to observe
+ it, so that you don't worry. Note: weights are ignored for
+ backends in LOG mode.
- However, when this option is explicitly enabled it will trigger a
- configuration error if it fails to load.
+ static-rr Each server is used in turns, according to their weights.
+                  This algorithm is similar to roundrobin except that it is
+ static, which means that changing a server's weight on the
+ fly will have no effect. On the other hand, it has no design
+ limitation on the number of servers, and when a server goes
+ up, it is always immediately reintroduced into the farm, once
+ the full map is recomputed. It also uses slightly less CPU to
+ run (around -1%). This algorithm is not usable in LOG mode.
-httpclient.resolvers.prefer <ipv4|ipv6>
- This option allows to chose which family of IP you want when resolving,
- which is convenient when IPv6 is not available on your network. Default
- option is "ipv6".
+ leastconn The server with the lowest number of connections receives the
+ connection. Round-robin is performed within groups of servers
+ of the same load to ensure that all servers will be used. Use
+ of this algorithm is recommended where very long sessions are
+ expected, such as LDAP, SQL, TSE, etc... but is not very well
+ suited for protocols using short sessions such as HTTP. This
+ algorithm is dynamic, which means that server weights may be
+ adjusted on the fly for slow starts for instance. It will
+ also consider the number of queued connections in addition to
+ the established ones in order to minimize queuing. This
+ algorithm is not usable in LOG mode.
-httpclient.retries <number>
- This option allows to configure the number of retries attempt of the
- httpclient when a request failed. This does the same as the "retries" keyword
- in a backend.
+ first The first server with available connection slots receives the
+ connection. The servers are chosen from the lowest numeric
+ identifier to the highest (see server parameter "id"), which
+ defaults to the server's position in the farm. Once a server
+ reaches its maxconn value, the next server is used. It does
+ not make sense to use this algorithm without setting maxconn.
+ The purpose of this algorithm is to always use the smallest
+ number of servers so that extra servers can be powered off
+ during non-intensive hours. This algorithm ignores the server
+ weight, and brings more benefit to long session such as RDP
+ or IMAP than HTTP, though it can be useful there too. In
+ order to use this algorithm efficiently, it is recommended
+ that a cloud controller regularly checks server usage to turn
+ them off when unused, and regularly checks backend queue to
+ turn new servers on when the queue inflates. Alternatively,
+ using "http-check send-state" may inform servers on the load.
+ This algorithm is not usable in LOG mode.
- Default value is 3.
+ hash Takes a regular sample expression in argument. The expression
+ is evaluated for each request and hashed according to the
+ configured hash-type. The result of the hash is divided by
+ the total weight of the running servers to designate which
+ server will receive the request. This can be used in place of
+ "source", "uri", "hdr()", "url_param()", "rdp-cookie" to make
+ use of a converter, refine the evaluation, or be used to
+ extract data from local variables for example. When the data
+ is not available, round robin will apply. This algorithm is
+ static by default, which means that changing a server's
+ weight on the fly will have no effect, but this can be
+ changed using "hash-type". This algorithm is not usable for
+ backends in LOG mode, please use "log-hash" instead.
-httpclient.ssl.ca-file <cafile>
- This option defines the ca-file which should be used to verify the server
- certificate. It takes the same parameters as the "ca-file" option on the
- server line.
+ source The source IP address is hashed and divided by the total
+ weight of the running servers to designate which server will
+ receive the request. This ensures that the same client IP
+ address will always reach the same server as long as no
+ server goes down or up. If the hash result changes due to the
+ number of running servers changing, many clients will be
+ directed to a different server. This algorithm is generally
+ used in TCP mode where no cookie may be inserted. It may also
+ be used on the Internet to provide a best-effort stickiness
+ to clients which refuse session cookies. This algorithm is
+ static by default, which means that changing a server's
+ weight on the fly will have no effect, but this can be
+ changed using "hash-type". See also the "hash" option above.
+ This algorithm is not usable for backends in LOG mode.
- By default and when this option is not used, the value is
- "@system-ca" which tries to load the CA of the system. If it fails the SSL
- will be disabled for the httpclient.
+ uri This algorithm hashes either the left part of the URI (before
+ the question mark) or the whole URI (if the "whole" parameter
+ is present) and divides the hash value by the total weight of
+ the running servers. The result designates which server will
+ receive the request. This ensures that the same URI will
+ always be directed to the same server as long as no server
+ goes up or down. This is used with proxy caches and
+ anti-virus proxies in order to maximize the cache hit rate.
+ Note that this algorithm may only be used in an HTTP backend.
+ This algorithm is static by default, which means that
+ changing a server's weight on the fly will have no effect,
+ but this can be changed using "hash-type".
- However, when this option is explicitly enabled it will trigger a
- configuration error if it fails.
+ This algorithm supports two optional parameters "len" and
+ "depth", both followed by a positive integer number. These
+ options may be helpful when it is needed to balance servers
+ based on the beginning of the URI only. The "len" parameter
+ indicates that the algorithm should only consider that many
+ characters at the beginning of the URI to compute the hash.
+ Note that having "len" set to 1 rarely makes sense since most
+ URIs start with a leading "/".
-httpclient.ssl.verify [none|required]
- Works the same way as the verify option on server lines. If specified to 'none',
- servers certificates are not verified. Default option is "required".
+ The "depth" parameter indicates the maximum directory depth
+ to be used to compute the hash. One level is counted for each
+ slash in the request. If both parameters are specified, the
+ evaluation stops when either is reached.
- By default and when this option is not used, the value is
- "required". If it fails the SSL will be disabled for the httpclient.
+ A "path-only" parameter indicates that the hashing key starts
+ at the first '/' of the path. This can be used to ignore the
+ authority part of absolute URIs, and to make sure that HTTP/1
+ and HTTP/2 URIs will provide the same hash. See also the
+ "hash" option above.
- However, when this option is explicitly enabled it will trigger a
- configuration error if it fails.
+ url_param The URL parameter specified in argument will be looked up in
+ the query string of each HTTP GET request.
-httpclient.timeout.connect <timeout>
- Set the maximum time to wait for a connection attempt by default for the
- httpclient.
+ If the modifier "check_post" is used, then an HTTP POST
+ request entity will be searched for the parameter argument,
+ when it is not found in a query string after a question mark
+ ('?') in the URL. The message body will only start to be
+ analyzed once either the advertised amount of data has been
+ received or the request buffer is full. In the unlikely event
+ that chunked encoding is used, only the first chunk is
+                  scanned. Parameter values separated by a chunk boundary may
+                  be randomly balanced, if at all. This keyword used to support
+ an optional <max_wait> parameter which is now ignored.
- Arguments :
- <timeout> is the timeout value specified in milliseconds by default, but
- can be in any other unit if the number is suffixed by the unit,
- as explained at the top of this document.
+ If the parameter is found followed by an equal sign ('=') and
+ a value, then the value is hashed and divided by the total
+ weight of the running servers. The result designates which
+ server will receive the request.
- The default value is 5000ms.
+ This is used to track user identifiers in requests and ensure
+ that a same user ID will always be sent to the same server as
+ long as no server goes up or down. If no value is found or if
+ the parameter is not found, then a round robin algorithm is
+ applied. Note that this algorithm may only be used in an HTTP
+ backend. This algorithm is static by default, which means
+ that changing a server's weight on the fly will have no
+ effect, but this can be changed using "hash-type". See also
+ the "hash" option above.
+ hdr(<name>) The HTTP header <name> will be looked up in each HTTP
+ request. Just as with the equivalent ACL 'hdr()' function,
+ the header name in parenthesis is not case sensitive. If the
+ header is absent or if it does not contain any value, the
+ roundrobin algorithm is applied instead.
-3.11. Certificate Storage
--------------------------
+                  An optional 'use_domain_only' parameter is available for
+ reducing the hash algorithm to the main domain part with some
+ specific headers such as 'Host'. For instance, in the Host
+ value "haproxy.1wt.eu", only "1wt" will be considered.
-HAProxy uses an internal storage mechanism to load and store certificates used
-in the configuration. This storage can be configured by using a "crt-store"
-section. It allows to configure certificate definitions and which files should
-be loaded in it. A certificate definition must be written before it is used
-elsewhere in the configuration.
+ This algorithm is static by default, which means that
+ changing a server's weight on the fly will have no effect,
+ but this can be changed using "hash-type". See also the
+ "hash" option above.
-crt-store [<name>]
+ random
+ random(<draws>)
+ A random number will be used as the key for the consistent
+ hashing function. This means that the servers' weights are
+ respected, dynamic weight changes immediately take effect, as
+ well as new server additions. Random load balancing can be
+ useful with large farms or when servers are frequently added
+ or removed as it may avoid the hammering effect that could
+ result from roundrobin or leastconn in this situation. The
+ hash-balance-factor directive can be used to further improve
+ fairness of the load balancing, especially in situations
+ where servers show highly variable response times. When an
+ argument <draws> is present, it must be an integer value one
+ or greater, indicating the number of draws before selecting
+ the least loaded of these servers. It was indeed demonstrated
+ that picking the least loaded of two servers is enough to
+ significantly improve the fairness of the algorithm, by
+ always avoiding to pick the most loaded server within a farm
+ and getting rid of any bias that could be induced by the
+ unfair distribution of the consistent list. Higher values N
+ will take away N-1 of the highest loaded servers at the
+ expense of performance. With very high values, the algorithm
+ will converge towards the leastconn's result but much slower.
+ The default value is 2, which generally shows very good
+ distribution and performance. This algorithm is also known as
+ the Power of Two Random Choices and is described here :
+ http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf
-The "crt-store" takes an optional name in argument. If a name is specified,
-every certificate of this store must be referenced using "@<name>/<crt>" or
-"@<name>/<alias>".
+ For backends in LOG mode, the number of draws is ignored and
+ a single random is picked since there is no notion of server
+ load. Random log balancing can be useful with large farms or
+ when servers are frequently added or removed from the pool of
+ available servers as it may avoid the hammering effect that
+ could result from roundrobin in this situation.
-Files in the certificate storage can also be updated dynamically with the CLI.
-See "set ssl cert" in the section 9.3 of the management guide.
+ rdp-cookie
+ rdp-cookie(<name>)
+ The RDP cookie <name> (or "mstshash" if omitted) will be
+ looked up and hashed for each incoming TCP request. Just as
+ with the equivalent ACL 'req.rdp_cookie()' function, the name
+ is not case-sensitive. This mechanism is useful as a degraded
+ persistence mode, as it makes it possible to always send the
+ same user (or the same session ID) to the same server. If the
+ cookie is not found, the normal roundrobin algorithm is
+ used instead.
+ Note that for this to work, the frontend must ensure that an
+ RDP cookie is already present in the request buffer. For this
+ you must use 'tcp-request content accept' rule combined with
+ a 'req.rdp_cookie_cnt' ACL.
-The following keywords are supported in the "crt-store" section :
- - crt-base
- - key-base
- - load
+ This algorithm is static by default, which means that
+ changing a server's weight on the fly will have no effect,
+ but this can be changed using "hash-type". See also the
+ "hash" option above.
-crt-base <dir>
- Assigns a default directory to fetch SSL certificates from when a relative
- path is used with "crt" directives. Absolute locations specified prevail and
- ignore "crt-base". When used in a crt-store, the crt-base of the global
- section is ignored.
+ log-hash Takes a comma-delimited list of converters in argument. These
+ converters are applied in sequence to the input log message,
+ and the result will be cast as a string then hashed according
+ to the configured hash-type. The resulting hash will be used
+ to select the destination server among the ones declared in
+ the log backend. The goal of this algorithm is to be able to
+ extract a key within the final log message using string
+ converters and then be able to stick to the same server thanks
+ to the hash. Only "map-based" hashes are supported for now.
+ This algorithm is only usable for backends in LOG mode, for
+ others, please use "hash" instead.
-key-base <dir>
- Assigns a default directory to fetch SSL private keys from when a relative
- path is used with "key" directives. Absolute locations specified prevail and
- ignore "key-base". When used in a crt-store, the key-base of the global
- section is ignored.
+ sticky Tries to stick to the same server as much as possible. The
+ first server in the list of available servers receives all
+ the log messages. When the server goes DOWN, the next server
+ in the list takes its place. When a previously DOWN server
+ goes back UP it is added at the end of the list so that the
+ sticky server doesn't change until it becomes DOWN.
-load [crt <filename>] [param*]
- Load SSL files in the certificate storage. For the parameter list, see section
- "3.11.1. Load options"
+ <arguments> is an optional list of arguments which may be needed by some
+ algorithms. Right now, only "url_param", "uri" and "log-hash"
+ support an optional argument.
-Example:
+ The load balancing algorithm of a backend is set to roundrobin when no other
+ algorithm, mode nor option have been set. The algorithm may only be set once
+ for each backend.
- crt-store
- load crt "site1.crt" key "site1.key" ocsp "site1.ocsp" alias "site1"
- load crt "site2.crt" key "site2.key"
+ With authentication schemes that require the same connection like NTLM, URI
+ based algorithms must not be used, as they would cause subsequent requests
+ to be routed to different backend servers, breaking the invalid assumptions
+ NTLM relies on.
- frontend in2
- bind *:443 ssl crt "@/site1" crt "site2.crt"
+ TCP/HTTP Examples :
+ balance roundrobin
+ balance url_param userid
+ balance url_param session_id check_post 64
+ balance hdr(User-Agent)
+ balance hdr(host)
+ balance hdr(Host) use_domain_only
+ balance hash req.cookie(clientid)
+ balance hash var(req.client_id)
+ balance hash req.hdr_ip(x-forwarded-for,-1),ipmask(24)
- crt-store web
- crt-base /etc/ssl/certs/
- key-base /etc/ssl/private/
- load crt "site3.crt" alias "site3"
- load crt "site4.crt" key "site4.key"
+ LOG backend examples:
+ global
+ log backend@mylog-rrb local0 # send all logs to mylog-rrb backend
+ log backend@mylog-hash local0 # send all logs to mylog-hash backend
- frontend in2
- bind *:443 ssl crt "@web/site1" crt "site2.crt" crt "@web/site3" crt "@web/site4.crt"
+ backend mylog-rrb
+ mode log
+ balance roundrobin
-3.11.1. Load options
---------------------
+ server s1 udp@127.0.0.1:514 # will receive 50% of log messages
+ server s2 udp@127.0.0.1:514
-Load SSL files in the certificate storage. The load keyword can take multiple
-parameters which are listed below. These keywords are also usable in a
-crt-list.
+ backend mylog-hash
+ mode log
-crt <filename>
- This argument is mandatory, it loads a PEM which must contain the public
- certificate but could also contain the intermediate certificates and the
- private key. If no private key is provided in this file, a key can be provided
- with the "key" keyword.
+ # extract "METHOD URL PROTO" at the end of the log message,
+ # and let haproxy hash it so that log messages generated from
+ # similar requests get sent to the same syslog server:
+ balance log-hash 'field(-2,\")'
-acme <string>
- This option allows to configure the ACME protocol for a given certificate.
- This is an experimental feature which needs the
- "expose-experimental-directives" keyword in the global section.
+ # server list here
+ server s1 127.0.0.1:514
+ #...
- See also Section 3.12 ("ACME") and "domains" in this section.
+ Note: the following caveats and limitations on using the "check_post"
+ extension with "url_param" must be considered :
-alias <string>
- Optional argument. Allow to name the certificate with an alias, so it can be
- referenced with it in the configuration. An alias must be prefixed with '@/'
- when called elsewhere in the configuration.
+ - all POST requests are eligible for consideration, because there is no way
+ to determine if the parameters will be found in the body or entity which
+ may contain binary data. Therefore another method may be required to
+ restrict consideration of POST requests that have no URL parameters in
+ the body. (see acl http_end)
-domains <string>
- Configure the list of domains that will be used for ACME certificates. The
- first domain of the list is used as the CN. Domains are separated by commas in the list.
+ - using a <max_wait> value larger than the request buffer size does not
+ make sense and is useless. The buffer size is set at build time, and
+ defaults to 16 kB.
- See also Section 3.12 ("ACME") and "acme" in this section.
+    - Content-Encoding is not supported; the parameter search will probably
+      fail, and load balancing will fall back to Round Robin.
- Example:
+ - Expect: 100-continue is not supported, load balancing will fall back to
+ Round Robin.
- load crt "example.com.pem" acme LE domains "bar.example.com,foo.example.com"
+ - Transfer-Encoding (RFC7230 3.3.1) is only supported in the first chunk.
+ If the entire parameter value is not present in the first chunk, the
+ selection of server is undefined (actually, defined by how little
+ actually appeared in the first chunk).
-key <filename>
- This argument is optional. Load a private key in PEM format. If a private key
- was already defined in "crt", it will overwrite it.
+ - This feature does not support generation of a 100, 411 or 501 response.
-ocsp <filename>
- This argument is optional, it loads an OCSP response in DER format. It can
- be updated with the CLI.
+ - In some cases, requesting "check_post" MAY attempt to scan the entire
+ contents of a message body. Scanning normally terminates when linear
+ white space or control characters are found, indicating the end of what
+ might be a URL parameter list. This is probably not a concern with SGML
+ type message bodies.
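+
+  Example (a sketch only; the parameter name, wait length and server
+  addresses below are illustrative) :
+
+    backend app_post
+       mode http
+       balance url_param session_id check_post 4096
+       server app1 192.0.2.11:80 check
+       server app2 192.0.2.12:80 check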
-issuer <filename>
- This argument is optional. Load the OCSP issuer in PEM format. In order to
- identify which certificate an OCSP Response applies to, the issuer's
- certificate is necessary. If the issuer's certificate is not found in the
- "crt" file, it could be loaded from a file with this argument.
+ See also : "dispatch", "cookie", "transparent", "hash-type".
-sctl <filename>
- This argument is optional. Support for Certificate Transparency (RFC6962) TLS
- extension is enabled. The file must contain a valid Signed Certificate
- Timestamp List, as described in RFC. File is parsed to check basic syntax,
- but no signatures are verified.
-ocsp-update [ off | on ]
- Enable automatic OCSP response update when set to 'on', disable it otherwise.
- Its value defaults to 'off'.
- To enable the OCSP auto update on a bind line, you can use this option in a
- crt-store or you can use the global option "tune.ocsp-update.mode".
- If a given certificate is used in multiple crt-lists with different values of
- the 'ocsp-update' set, an error will be raised. Likewise, if a certificate
- inherits from the global option on a bind line and has an incompatible
- explicit 'ocsp-update' option set in a crt-list, the same error will be
- raised.
-
- Examples:
+bind [<address>]:<port_range> [, ...] [param*]
+bind /<path> [, ...] [param*]
+ Define one or several listening addresses and/or ports in a frontend.
- Here is an example configuration enabling it with a crt-list:
+ May be used in the following contexts: tcp, http
- haproxy.cfg:
- frontend fe
- bind :443 ssl crt-list haproxy.list
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
- haproxy.list:
- server_cert.pem [ocsp-update on] foo.bar
+ Arguments :
+ <address> is optional and can be a host name, an IPv4 address, an IPv6
+ address, or '*'. It designates the address the frontend will
+ listen on. If unset, all IPv4 addresses of the system will be
+ listened on. The same will apply for '*' or the system's
+ special address "0.0.0.0". The IPv6 equivalent is '::'. Note
+ that for UDP, specific OS features are required when binding
+ on multiple addresses to ensure the correct network interface
+                  and source address will be used on response. In particular,
+                  for QUIC listeners, only bind on multiple addresses when
+                  running on a modern enough system.
- Here is an example configuration enabling it with a crt-store:
+ Optionally, an address family prefix may be used before the
+ address to force the family regardless of the address format,
+ which can be useful to specify a path to a unix socket with
+ no slash ('/'). Currently supported prefixes are :
+ - 'ipv4@' -> address is always IPv4
+ - 'ipv6@' -> address is always IPv6
+ - 'udp@' -> address is resolved as IPv4 or IPv6 and
+ protocol UDP is used. Currently those listeners are
+ supported only in log-forward sections.
+ - 'udp4@' -> address is always IPv4 and protocol UDP
+ is used. Currently those listeners are supported
+ only in log-forward sections.
+ - 'udp6@' -> address is always IPv6 and protocol UDP
+ is used. Currently those listeners are supported
+ only in log-forward sections.
+ - 'unix@' -> address is a path to a local unix socket
+ - 'abns@' -> address is in abstract namespace (Linux only).
+ - 'abnsz@' -> address is in abstract namespace (Linux only)
+ but it is explicitly zero-terminated. This means no \0
+ padding is used to complete sun_path. It is useful to
+ interconnect with programs that don't implement the
+ default abns naming logic that haproxy uses.
+ - 'fd@<n>' -> use file descriptor <n> inherited from the
+ parent. The fd must be bound and may or may not already
+ be listening.
+ - 'sockpair@<n>'-> like fd@ but you must use the fd of a
+ connected unix socket or of a socketpair. The bind waits
+ to receive a FD over the unix socket and uses it as if it
+ was the FD of an accept(). Should be used carefully.
+                  - 'quic4@' -> address is resolved as IPv4 and protocol UDP
+                    is used. Note that to achieve the best performance with
+                    heavy traffic you should keep "tune.quic.socket-owner"
+                    set to "connection", otherwise QUIC connections will be
+                    multiplexed over the listener socket. Another alternative
+                    would be to duplicate QUIC listener instances over
+                    several threads, for example using the "shards" keyword,
+                    to at least reduce thread contention.
+ - 'quic6@' -> address is resolved as IPv6 and protocol UDP
+ is used. The performance note for QUIC over IPv4 applies
+ as well.
+ - 'rhttp@' [ EXPERIMENTAL ] -> used for reverse HTTP.
+ Address must be a server with the format
+ '<backend>/<server>'. The server will be used to
+ instantiate connections to a remote address. The listener
+                    will try to maintain "nbconn" connections. This is an
+                    experimental feature which requires
+                    "expose-experimental-directives" on a line before this
+                    bind.
- haproxy.cfg:
+ You may want to reference some environment variables in the
+ address parameter, see section 2.3 about environment
+ variables.
- crt-store
- load crt foobar.pem ocsp-update on
+ <port_range> is either a unique TCP port, or a port range for which the
+ proxy will accept connections for the IP address specified
+ above. The port is mandatory for TCP listeners. Note that in
+ the case of an IPv6 address, the port is always the number
+ after the last colon (':'). A range can either be :
+ - a numerical port (ex: '80')
+ - a dash-delimited ports range explicitly stating the lower
+ and upper bounds (ex: '2000-2100') which are included in
+ the range.
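+
+                  As an illustration (the address and ports below are
+                  examples only), a frontend may combine a fixed port with a
+                  range :
+
+                    bind 192.0.2.1:21
+                    bind 192.0.2.1:2000-2100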
- frontend fe
- bind :443 ssl crt foobar.pem
+                  Particular care must be taken with port ranges, because
+                  every <address:port> couple consumes one socket (= a file
+                  descriptor), so it's easy to consume lots of descriptors
+                  with a simple range, and to run out of sockets. Also, each
+                  <address:port> couple must be used only once among all
+                  instances running on a same system. Please note that
+                  binding to ports lower than 1024 generally requires
+                  particular privileges to start the program, which are
+                  independent of the 'uid' parameter.
- When the option is set to 'on', we will try to get an ocsp response whenever
- an ocsp uri is found in the frontend's certificate. The only limitation of
- this mode is that the certificate's issuer will have to be known in order for
- the OCSP certid to be built.
- Each OCSP response will be updated at least once an hour, and even more
- frequently if a given OCSP response has an expire date earlier than this one
- hour limit. A minimum update interval of 5 minutes will still exist in order
- to avoid updating too often responses that have a really short expire time or
- even no 'Next Update' at all. Because of this hard limit, please note that
- when auto update is set to 'on', any OCSP response loaded during init will
- not be updated until at least 5 minutes, even if its expire time ends before
- now+5m. This should not be too much of a hassle since an OCSP response must
- be valid when it gets loaded during init (its expire time must be in the
- future) so it is unlikely that this response expires in such a short time
- after init.
- On the other hand, if a certificate has an OCSP uri specified and no OCSP
- response, setting this option to 'on' for the given certificate will ensure
- that the OCSP response gets fetched automatically right after init.
- The default minimum and maximum delays (5 minutes and 1 hour respectively)
- can be configured by the "ocsp-update.maxdelay" and "ocsp-update.mindelay"
- global options.
+    <path>        is a UNIX socket path beginning with a slash ('/'). It is
+                  an alternative to the TCP listening port. HAProxy will then
+ receive UNIX connections on the socket located at this place.
+ The path must begin with a slash and by default is absolute.
+ It can be relative to the prefix defined by "unix-bind" in
+ the global section. Note that the total length of the prefix
+ followed by the socket path cannot exceed some system limits
+ for UNIX sockets, which commonly are set to 107 characters.
- Whenever an OCSP response is updated by the auto update task or following a
- call to the "update ssl ocsp-response" CLI command, a dedicated log line is
- emitted. It follows a dedicated format that contains the following header
- "<OCSP-UPDATE>" and is followed by specific OCSP-related information:
- - the path of the corresponding frontend certificate
- - a numerical update status
- - a textual update status
- - the number of update failures for the given response
- - the number of update successes for the givan response
- See "show ssl ocsp-updates" CLI command for a full list of error codes and
- error messages. This line is emitted regardless of the success or failure of
- the concerned OCSP response update.
- The OCSP request/response is sent and received through an http_client
- instance that has the dontlog-normal option set and that uses the regular
- HTTP log format in case of error (unreachable OCSP responder for instance).
- If such an error occurs, another log line that contains HTTP-related
- information will then be emitted alongside the "regular" OCSP one (which will
- likely have "HTTP error" as text status). But if a purely HTTP error happens
- (unreachable OCSP responder for instance), an extra log line that follows the
- regular HTTP log-format will be emitted.
- Here are two examples of such log lines, with a successful OCSP update log
- line first and then an example of an HTTP error with the two different lines
- (lines were spit and the URL was shortened for readability):
- <133>Mar 6 11:16:53 haproxy[14872]: <OCSP-UPDATE> /path_to_cert/foo.pem 1 \
- "Update successful" 0 1
+ <param*> is a list of parameters common to all sockets declared on the
+ same line. These numerous parameters depend on OS and build
+                  options and have a complete section dedicated to them.
+                  Please refer to section 5 for more details.
- <133>Mar 6 11:18:55 haproxy[14872]: <OCSP-UPDATE> /path_to_cert/bar.pem 2 \
- "HTTP error" 1 0
- <133>Mar 6 11:18:55 haproxy[14872]: -:- [06/Mar/2023:11:18:52.200] \
- <OCSP-UPDATE> -/- 2/0/-1/-1/3009 503 217 - - SC-- 0/0/0/0/3 0/0 {} \
- "GET http://127.0.0.1:12345/MEMwQT HTTP/1.1"
+  It is possible to specify a list of address:port combinations delimited by
+  commas. The frontend will then listen on all of these addresses. There is
+  no fixed limit to the number of addresses and ports which can be listened
+  on in a frontend, nor to the number of "bind" statements in a frontend.
- Troubleshooting:
- A common error that can happen with let's encrypt certificates is if the DNS
- resolution provides an IPv6 address and your system does not have a valid
- outgoing IPv6 route. In such a case, you can either create the appropriate
- route or set the "httpclient.resolvers.prefer ipv4" option in the global
- section.
- In case of "OCSP response check failure" error, you might want to check that
- the issuer certificate that you provided is valid.
- A more precise error message might also be displayed between parenthesis
- after the "generic" error message. It can happen for "OCSP response check
- failure" or "Error during insertion" errors.
+ Example :
+ listen http_proxy
+ bind :80,:443
+ bind 10.0.0.1:10080,10.0.0.1:10443
+ bind /var/run/ssl-frontend.sock user root mode 600 accept-proxy
-3.12. ACME
-----------
+ listen http_https_proxy
+ bind :80
+ bind :443 ssl crt /etc/haproxy/site.pem
-acme <name>
+ listen http_https_proxy_explicit
+ bind ipv6@:80
+ bind ipv4@public_ssl:443 ssl crt /etc/haproxy/site.pem
+ bind unix@ssl-frontend.sock user root mode 600 accept-proxy
-The ACME protocol can be configured using the "acme" section. The section takes
-a "<name>" argument, which is used to link a certificate to the section.
+ listen external_bind_app1
+ bind "fd@${FD_APP1}"
-The ACME section allows to configure HAProxy as an ACMEv2 client. This feature
-is experimental meaning that "expose-experimental-directives" must be in the
-global section so this can be used.
+ listen h3_quic_proxy
+ bind quic4@10.0.0.1:8888 ssl crt /etc/mycrt
-Current limitations as of 3.2: The feature is limited to the HTTP-01 challenge
-for now. The current HAProxy architecture is a non-blocking model, access to
-the disk is not supposed to be done after the configuration is loaded, because
-it could block the event loop, blocking the traffic on the same thread. Meaning
-that the certificates and keys generated from HAProxy will need to be dumped
-from outside HAProxy using "dump ssl cert" on the stats socket.
-External Account Binding (EAB) is not supported.
+  Note: regarding Linux's abstract namespace sockets, HAProxy's "abns"
+        sockets use the whole sun_path length as the address length, while
+        some other programs such as socat use the string length only by
+        default. Pass the option ",unix-tightsocklen=0" to any abstract
+        socket definition in socat to make it compatible with HAProxy's, or
+        use the "abnsz" HAProxy socket family instead.
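+
+  Example: interconnecting with socat over an "abns" socket (the socket
+  name below is illustrative only) :
+
+    frontend fe_abns
+       bind abns@mysock
+
+    # from a shell, with the address padded the same way HAProxy does it:
+    $ socat - ABSTRACT-CONNECT:mysock,unix-tightsocklen=0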
-The ACME scheduler starts at HAProxy startup, it will loop over the
-certificates and start an ACME renewal task when the notAfter task is past
-curtime + (notAfter - notBefore) / 12, or 7 days if notBefore is not defined.
-The scheduler will then sleep and wakeup after 12 hours.
-It is possible to start manually a renewal task with "acme renew'.
-See also "acme status" in the management guide.
+ See also : "source", "option forwardfor", "unix-bind" and the PROXY protocol
+ documentation, and section 5 about bind options.
-The following keywords are usable in the ACME section:
-account-key <filename>
- Configure the path to the account key. The key need to be generated before
- launching HAProxy. If no account keyword is used, the acme section will try
- to load a filename using the section name "<name>.account.key". If the file
- doesn't exist, HAProxy will generate one, using the parameters from the acme
- section.
+capture cookie <name> len <length>
+ Capture and log a cookie in the request and in the response.
- You can also generate manually an RSA private key with openssl:
+ May be used in the following contexts: http
- openssl genrsa -out account.key 2048
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
- Or an ecdsa one:
+ Arguments :
+ <name> is the beginning of the name of the cookie to capture. In order
+ to match the exact name, simply suffix the name with an equal
+ sign ('='). The full name will appear in the logs, which is
+ useful with application servers which adjust both the cookie name
+ and value (e.g. ASPSESSIONXXX).
- openssl ecparam -name secp384r1 -genkey -noout -out account.key
+ <length> is the maximum number of characters to report in the logs, which
+ include the cookie name, the equal sign and the value, all in the
+ standard "name=value" form. The string will be truncated on the
+ right if it exceeds <length>.
-bits <number>
- Configure the number of bits to generate an RSA certificate. Default to 2048.
- Setting a too high value can trigger a warning if your machine is not
- powerful enough. (This can be configured with "warn-blocked-traffic-after"
- but blocking the traffic too long could trigger the watchdog.)
+ Only the first cookie is captured. Both the "cookie" request headers and the
+ "set-cookie" response headers are monitored. This is particularly useful to
+ check for application bugs causing session crossing or stealing between
+ users, because generally the user's cookies can only change on a login page.
-challenge <string>
- Takes a challenge type as parameter, this must be HTTP-01 or DNS-01. When not
- used the default is HTTP-01.
+ When the cookie was not presented by the client, the associated log column
+ will report "-". When a request does not cause a cookie to be assigned by the
+ server, a "-" is reported in the response column.
-contact <string>
- The contact email that will be associated to the account key in the CA.
+ The capture is performed in the frontend only because it is necessary that
+ the log format does not change for a given frontend depending on the
+ backends. This may change in the future. Note that there can be only one
+ "capture cookie" statement in a frontend. The maximum capture length is set
+ by the global "tune.http.cookielen" setting and defaults to 63 characters. It
+ is not possible to specify a capture in a "defaults" section.
-curves <string>
- When using the ECDSA keytype, configure the curves. The default is P-384.
+ Example:
+ capture cookie ASPSESSION len 32
-directory <string>
- This keyword configures the directory URL for the CA used by this acme
- section. This keyword is mandatory as there is no default URL.
+ See also : "capture request header", "capture response header" as well as
+ section 8 about logging.
- Example:
- directory https://acme-staging-v02.api.letsencrypt.org/directory
-keytype <string>
- Configure the type of key that will be generated. Value can be either "RSA"
- or "ECDSA". You can also configure the "curves" for ECDSA and the number of
- "bits" for RSA. By default EC384 keys are generated.
+capture request header <name> len <length>
+ Capture and log the last occurrence of the specified request header.
-map <map>
- Configure the map which will be used to store token (key) and thumbprint
- (value), which is useful to reply to a challenge when there are multiple
- account used. The acme task will add entries before validating the challenge
- and will remove the entries at the end of the task.
+ May be used in the following contexts: http
-Example:
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
- global
- expose-experimental-directives
- httpclient.resolvers.prefer ipv4
+ Arguments :
+ <name> is the name of the header to capture. The header names are not
+ case-sensitive, but it is a common practice to write them as they
+ appear in the requests, with the first letter of each word in
+ upper case. The header name will not appear in the logs, only the
+ value is reported, but the position in the logs is respected.
- frontend in
- bind *:80
- bind *:443 ssl
- http-request return status 200 content-type text/plain lf-string "%[path,field(-1,/)].%[path,field(-1,/),map(virt@acme)]\n" if { path_beg '/.well-known/acme-challenge/' }
- ssl-f-use crt "foo.example.com.pem.rsa" acme LE1 domains "foo.example.com.pem,bar.example.com"
- ssl-f-use crt "foo.example.com.pem.ecdsa" acme LE2 domains "foo.example.com.pem,bar.example.com"
+ <length> is the maximum number of characters to extract from the value and
+ report in the logs. The string will be truncated on the right if
+ it exceeds <length>.
- acme LE1
- directory https://acme-staging-v02.api.letsencrypt.org/directory
- account-key /etc/haproxy/letsencrypt.account.key
- contact john.doe@example.com
- challenge HTTP-01
- keytype RSA
- bits 2048
- map virt@acme
+ The complete value of the last occurrence of the header is captured. The
+ value will be added to the logs between braces ('{}'). If multiple headers
+ are captured, they will be delimited by a vertical bar ('|') and will appear
+ in the same order they were declared in the configuration. Non-existent
+ headers will be logged just as an empty string. Common uses for request
+ header captures include the "Host" field in virtual hosting environments, the
+ "Content-length" when uploads are supported, "User-agent" to quickly
+ differentiate between real users and robots, and "X-Forwarded-For" in proxied
+ environments to find where the request came from.
- acme LE2
- directory https://acme-staging-v02.api.letsencrypt.org/directory
- account-key /etc/haproxy/letsencrypt.account.key
- contact john.doe@example.com
- challenge HTTP-01
- keytype ECDSA
- curves P-384
- map virt@acme
+ Note that when capturing headers such as "User-agent", some spaces may be
+ logged, making the log analysis more difficult. Thus be careful about what
+ you log if you know your log parser is not smart enough to rely on the
+ braces.
+ There is no limit to the number of captured request headers nor to their
+ length, though it is wise to keep them low to limit memory usage per stream.
+ In order to keep log format consistent for a same frontend, header captures
+ can only be declared in a frontend. It is not possible to specify a capture
+ in a "defaults" section.
-4. Proxies
-----------
+ Example:
+ capture request header Host len 15
+ capture request header X-Forwarded-For len 15
+ capture request header Referer len 15
-Proxy configuration can be located in a set of sections :
- - defaults [<name>] [ from <defaults_name> ]
- - frontend <name> [ from <defaults_name> ]
- - backend <name> [ from <defaults_name> ]
- - listen <name> [ from <defaults_name> ]
+ See also : "capture cookie", "capture response header" as well as section 8
+ about logging.
-A "frontend" section describes a set of listening sockets accepting client
-connections.
-A "backend" section describes a set of servers to which the proxy will connect
-to forward incoming connections.
+capture response header <name> len <length>
+ Capture and log the last occurrence of the specified response header.
-A "listen" section defines a complete proxy with its frontend and backend
-parts combined in one section. It is generally useful for TCP-only traffic.
+ May be used in the following contexts: http
-A "defaults" section resets all settings to the documented ones and presets new
-ones for use by subsequent sections. All of "frontend", "backend" and "listen"
-sections always take their initial settings from a defaults section, by default
-the latest one that appears before the newly created section. It is possible to
-explicitly designate a specific "defaults" section to load the initial settings
-from by indicating its name on the section line after the optional keyword
-"from". While "defaults" section do not impose a name, this use is encouraged
-for better readability. It is also the only way to designate a specific section
-to use instead of the default previous one. Since "defaults" section names are
-optional, by default a very permissive check is applied on their name and these
-are even permitted to overlap. However if a "defaults" section is referenced by
-any other section, its name must comply with the syntax imposed on all proxy
-names, and this name must be unique among the defaults sections. Please note
-that regardless of what is currently permitted, it is recommended to avoid
-duplicate section names in general and to respect the same syntax as for proxy
-names. This rule might be enforced in a future version. In addition, a warning
-is emitted if a defaults section is explicitly used by a proxy while it is also
-implicitly used by another one because it is the last one defined. It is highly
-encouraged to not mix both usages by always using explicit references or by
-adding a last common defaults section reserved for all implicit uses.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
-Note that it is even possible for a defaults section to take its initial
-settings from another one, and as such, inherit settings across multiple levels
-of defaults sections. This can be convenient to establish certain configuration
-profiles to carry groups of default settings (e.g. TCP vs HTTP or short vs long
-timeouts) but can quickly become confusing to follow.
+ Arguments :
+ <name> is the name of the header to capture. The header names are not
+ case-sensitive, but it is a common practice to write them as they
+ appear in the response, with the first letter of each word in
+ upper case. The header name will not appear in the logs, only the
+ value is reported, but the position in the logs is respected.
-All proxy names must be formed from upper and lower case letters, digits,
-'-' (dash), '_' (underscore) , '.' (dot) and ':' (colon). ACL names are
-case-sensitive, which means that "www" and "WWW" are two different proxies.
+ <length> is the maximum number of characters to extract from the value and
+ report in the logs. The string will be truncated on the right if
+ it exceeds <length>.
-Historically, all proxy names could overlap, it just caused troubles in the
-logs. Since the introduction of content switching, it is mandatory that two
-proxies with overlapping capabilities (frontend/backend) have different names.
-However, it is still permitted that a frontend and a backend share the same
-name, as this configuration seems to be commonly encountered.
+ The complete value of the last occurrence of the header is captured. The
+ result will be added to the logs between braces ('{}') after the captured
+ request headers. If multiple headers are captured, they will be delimited by
+ a vertical bar ('|') and will appear in the same order they were declared in
+ the configuration. Non-existent headers will be logged just as an empty
+  string. Common uses for response header captures include the
+  "Content-length" header which indicates how many bytes are expected to be
+  returned, and the "Location" header to track redirections.
-Right now, two major proxy modes are supported : "tcp", also known as layer 4,
-and "http", also known as layer 7. In layer 4 mode, HAProxy simply forwards
-bidirectional traffic between two sides. In layer 7 mode, HAProxy analyzes the
-protocol, and can interact with it by allowing, blocking, switching, adding,
-modifying, or removing arbitrary contents in requests or responses, based on
-arbitrary criteria.
+ There is no limit to the number of captured response headers nor to their
+ length, though it is wise to keep them low to limit memory usage per stream.
+ In order to keep log format consistent for a same frontend, header captures
+ can only be declared in a frontend. It is not possible to specify a capture
+ in a "defaults" section.
-In HTTP mode, the processing applied to requests and responses flowing over
-a connection depends in the combination of the frontend's HTTP options and
-the backend's. HAProxy supports 3 connection modes :
+ Example:
+ capture response header Content-length len 9
+ capture response header Location len 15
- - KAL : keep alive ("option http-keep-alive") which is the default mode : all
- requests and responses are processed, and connections remain open but idle
- between responses and new requests.
+ See also : "capture cookie", "capture request header" as well as section 8
+ about logging.
- - SCL: server close ("option http-server-close") : the server-facing
- connection is closed after the end of the response is received, but the
- client-facing connection remains open.
- - CLO: close ("option httpclose"): the connection is closed after the end of
- the response and "Connection: close" appended in both directions.
+clitcpka-cnt <count>
+ Sets the maximum number of keepalive probes TCP should send before dropping
+ the connection on the client side.
-The effective mode that will be applied to a connection passing through a
-frontend and a backend can be determined by both proxy modes according to the
-following matrix, but in short, the modes are symmetric, keep-alive is the
-weakest option and close is the strongest.
+ May be used in the following contexts: tcp, http, log
- Backend mode
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- | KAL | SCL | CLO
- ----+-----+-----+----
- KAL | KAL | SCL | CLO
- ----+-----+-----+----
- mode SCL | SCL | SCL | CLO
- ----+-----+-----+----
- CLO | CLO | CLO | CLO
+ Arguments :
+ <count> is the maximum number of keepalive probes.
-It is possible to chain a TCP frontend to an HTTP backend. It is pointless if
-only HTTP traffic is handled. But it may be used to handle several protocols
-within the same frontend. In this case, the client's connection is first handled
-as a raw tcp connection before being upgraded to HTTP. Before the upgrade, the
-content processings are performend on raw data. Once upgraded, data is parsed
-and stored using an internal representation called HTX and it is no longer
-possible to rely on raw representation. There is no way to go back.
+  This keyword corresponds to the socket option TCP_KEEPCNT. If this keyword
+  is not specified, the system-wide TCP parameter (tcp_keepalive_probes) is
+  used instead. The availability of this setting depends on the operating
+  system. It is known to work on Linux.
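+
+  Example (a sketch only; values and addresses are illustrative) :
+
+    listen app
+       bind :8080
+       option clitcpka
+       clitcpka-cnt 3
+       clitcpka-idle 60s
+       clitcpka-intvl 10s
+       server s1 192.0.2.10:8080
+
+  With such values, an idle client connection is first probed after 60
+  seconds, then every 10 seconds, and is dropped after 3 unanswered probes.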
-There are two kind of upgrades, in-place upgrades and destructive upgrades. The
-first ones involves a TCP to HTTP/1 upgrade. In HTTP/1, the request
-processings are serialized, thus the applicative stream can be preserved. The
-second one involves a TCP to HTTP/2 upgrade. Because it is a multiplexed
-protocol, the applicative stream cannot be associated to any HTTP/2 stream and
-is destroyed. New applicative streams are then created when HAProxy receives
-new HTTP/2 streams at the lower level, in the H2 multiplexer. It is important
-to understand this difference because that drastically changes the way to
-process data. When an HTTP/1 upgrade is performed, the content processings
-already performed on raw data are neither lost nor reexecuted while for an
-HTTP/2 upgrade, applicative streams are distinct and all frontend rules are
-evaluated systematically on each one. And as said, the first stream, the TCP
-one, is destroyed, but only after the frontend rules were evaluated.
+ See also : "option clitcpka", "clitcpka-idle", "clitcpka-intvl".
-There is another importnat point to understand when HTTP processings are
-performed from a TCP proxy. While HAProxy is able to parse HTTP/1 in-fly from
-tcp-request content rules, it is not possible for HTTP/2. Only the HTTP/2
-preface can be parsed. This is a huge limitation regarding the HTTP content
-analysis in TCP. Concretely it is only possible to know if received data are
-HTTP. For instance, it is not possible to choose a backend based on the Host
-header value while it is trivial in HTTP/1. Hopefully, there is a solution to
-mitigate this drawback.
-There are two ways to perform an HTTP upgrade. The first one, the historical
-method, is to select an HTTP backend. The upgrade happens when the backend is
-set. Thus, for in-place upgrades, only the backend configuration is considered
-in the HTTP data processing. For destructive upgrades, the applicative stream
-is destroyed, thus its processing is stopped. With this method, possibilities
-to choose a backend with an HTTP/2 connection are really limited, as mentioned
-above, and a bit useless because the stream is destroyed. The second method is
-to upgrade during the tcp-request content rules evaluation, thanks to the
-"switch-mode http" action. In this case, the upgrade is performed in the
-frontend context and it is possible to define HTTP directives in this
-frontend. For in-place upgrades, it offers all the power of the HTTP analysis
-as soon as possible. It is not that far from an HTTP frontend. For destructive
-upgrades, it does not change anything except it is useless to choose a backend
-on limited information. It is of course the recommended method. Thus, testing
-the request protocol from the tcp-request content rules to perform an HTTP
-upgrade is enough. All the remaining HTTP manipulation may be moved to the
-frontend http-request ruleset. But keep in mind that tcp-request content rules
-remains evaluated on each streams, that can't be changed.
-
-4.1. Proxy keywords matrix
---------------------------
-
-The following list of keywords is supported. Most of them may only be used in a
-limited set of section types. Some of them are marked as "deprecated" because
-they are inherited from an old syntax which may be confusing or functionally
-limited, and there are new recommended keywords to replace them. Keywords
-marked with "(*)" can be optionally inverted using the "no" prefix, e.g. "no
-option contstats". This makes sense when the option has been enabled by default
-and must be disabled for a specific instance. Such options may also be prefixed
-with "default" in order to restore default settings regardless of what has been
-specified in a previous "defaults" section. Keywords supported in defaults
-sections marked with "(!)" are only supported in named defaults sections, not
-anonymous ones.
-
-Note: Some dangerous and not recommended directives are intentionnaly not
- listed in the following matrix. It is on purpose. These directives are
- documentated. But by not listing them below is one more way to discourage
- anyone to use it.
+clitcpka-idle <timeout>
+  Sets the time the connection needs to remain idle before TCP starts sending
+  keepalive probes, when the sending of TCP keepalive packets is enabled on
+  the client side (see "option clitcpka").
+ May be used in the following contexts: tcp, http
- keyword defaults frontend listen backend
-------------------------------------+----------+----------+---------+---------
-acl X (!) X X X
-backlog X X X -
-balance X - X X
-bind - X X -
-capture cookie - X X -
-capture request header - X X -
-capture response header - X X -
-clitcpka-cnt X X X -
-clitcpka-idle X X X -
-clitcpka-intvl X X X -
-compression X X X X
-cookie X - X X
-crt - X X -
-declare capture - X X -
-default-server X - X X
-default_backend X X X -
-description - X X X
-disabled X X X X
-dispatch - - X X
-email-alert from X X X X
-email-alert level X X X X
-email-alert mailers X X X X
-email-alert myhostname X X X X
-email-alert to X X X X
-enabled X X X X
-errorfile X X X X
-errorfiles X X X X
-errorloc X X X X
-errorloc302 X X X X
--- keyword -------------------------- defaults - frontend - listen -- backend -
-errorloc303 X X X X
-error-log-format X X X -
-force-persist - - X X
-filter - X X X
-fullconn X - X X
-guid - X X X
-hash-balance-factor X - X X
-hash-preserve-affinity X - X X
-hash-type X - X X
-http-after-response X (!) X X X
-http-check comment X - X X
-http-check connect X - X X
-http-check disable-on-404 X - X X
-http-check expect X - X X
-http-check send X - X X
-http-check send-state X - X X
-http-check set-var X - X X
-http-check unset-var X - X X
-http-error X X X X
-http-request X (!) X X X
-http-response X (!) X X X
-http-reuse X - X X
-http-send-name-header X - X X
-id - X X X
-ignore-persist - - X X
-load-server-state-from-file X - X X
-log (*) X X X X
-log-format X X X -
-log-format-sd X X X -
-log-tag X X X X
-log-steps X X X -
-max-keep-alive-queue X - X X
-max-session-srv-conns X X X -
-maxconn X X X -
-mode X X X X
-monitor fail - X X -
-monitor-uri X X X -
-option abortonclose (*) X - X X
-option allbackups (*) X - X X
-option checkcache (*) X - X X
-option clitcpka (*) X X X -
-option contstats (*) X X X -
-option disable-h2-upgrade (*) X X X -
-option dontlog-normal (*) X X X -
-option dontlognull (*) X X X -
--- keyword -------------------------- defaults - frontend - listen -- backend -
-option forwardfor X X X X
-option forwarded (*) X - X X
-option h1-case-adjust-bogus-client (*) X X X -
-option h1-case-adjust-bogus-server (*) X - X X
-option http-buffer-request (*) X X X X
-option http-drop-request-trailers (*) X - - X
-option http-drop-response-trailers (*) X - X -
-option http-ignore-probes (*) X X X -
-option http-keep-alive (*) X X X X
-option http-no-delay (*) X X X X
-option http-pretend-keepalive (*) X - X X
-option http-restrict-req-hdr-names X X X X
-option http-server-close (*) X X X X
-option http-use-proxy-header (*) X X X -
-option httpchk X - X X
-option httpclose (*) X X X X
-option httplog X X X -
-option httpslog X X X -
-option independent-streams (*) X X X X
-option ldap-check X - X X
-option external-check X - X X
-option log-health-checks (*) X - X X
-option log-separate-errors (*) X X X -
-option logasap (*) X X X -
-option mysql-check X - X X
-option nolinger (*) X X X X
-option originalto X X X X
-option persist (*) X - X X
-option pgsql-check X - X X
-option prefer-last-server (*) X - X X
-option redispatch (*) X - X X
-option redis-check X - X X
-option smtpchk X - X X
-option socket-stats (*) X X X -
-option splice-auto (*) X X X X
-option splice-request (*) X X X X
-option splice-response (*) X X X X
-option spop-check X - X X
-option srvtcpka (*) X - X X
-option ssl-hello-chk X - X X
--- keyword -------------------------- defaults - frontend - listen -- backend -
-option tcp-check X - X X
-option tcp-smart-accept (*) X X X -
-option tcp-smart-connect (*) X - X X
-option tcpka X X X X
-option tcplog X X X -
-option transparent (*) X - X X
-option idle-close-on-response (*) X X X -
-external-check command X - X X
-external-check path X - X X
-persist rdp-cookie X - X X
-quic-initial X (!) X X -
-rate-limit sessions X X X -
-redirect - X X X
--- keyword -------------------------- defaults - frontend - listen -- backend -
-retries X - X X
-retry-on X - X X
-server - - X X
-server-state-file-name X - X X
-server-template - - X X
-source X - X X
-srvtcpka-cnt X - X X
-srvtcpka-idle X - X X
-srvtcpka-intvl X - X X
-stats admin - X X X
-stats auth X X X X
-stats enable X X X X
-stats hide-version X X X X
-stats http-request - X X X
-stats realm X X X X
-stats refresh X X X X
-stats scope X X X X
-stats show-desc X X X X
-stats show-legends X X X X
-stats show-node X X X X
-stats uri X X X X
--- keyword -------------------------- defaults - frontend - listen -- backend -
-stick match - - X X
-stick on - - X X
-stick store-request - - X X
-stick store-response - - X X
-stick-table - X X X
-tcp-check comment X - X X
-tcp-check connect X - X X
-tcp-check expect X - X X
-tcp-check send X - X X
-tcp-check send-lf X - X X
-tcp-check send-binary X - X X
-tcp-check send-binary-lf X - X X
-tcp-check set-var X - X X
-tcp-check unset-var X - X X
-tcp-request connection X (!) X X -
-tcp-request content X (!) X X X
-tcp-request inspect-delay X (!) X X X
-tcp-request session X (!) X X -
-tcp-response content X (!) - X X
-tcp-response inspect-delay X (!) - X X
-timeout check X - X X
-timeout client X X X -
-timeout client-fin X X X -
-timeout client-hs X X X -
-timeout connect X - X X
-timeout http-keep-alive X X X X
-timeout http-request X X X X
-timeout queue X - X X
-timeout server X - X X
-timeout server-fin X - X X
-timeout tarpit X X X X
-timeout tunnel X - X X
-transparent (deprecated) X - X X
-unique-id-format X X X -
-unique-id-header X X X -
-use_backend - X X -
-use-fcgi-app - - X X
-use-server - - X X
-------------------------------------+----------+----------+---------+---------
- keyword defaults frontend listen backend
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <timeout> is the time the connection needs to remain idle before TCP starts
+ sending keepalive probes. It is specified in seconds by default,
+ but can be in any other unit if the number is suffixed by the
+ unit, as explained at the top of this document.
-4.2. Alphabetically sorted keywords reference
----------------------------------------------
+  This keyword corresponds to the socket option TCP_KEEPIDLE. If this keyword
+  is not specified, the system-wide TCP parameter (tcp_keepalive_time) is
+  used.
+ The availability of this setting depends on the operating system. It is
+ known to work on Linux.
-This section provides a description of each keyword and its usage.
+ See also : "option clitcpka", "clitcpka-cnt", "clitcpka-intvl".
-acl <aclname> <criterion> [flags] [operator] <value> ...
- Declare or complete an access list.
+clitcpka-intvl <timeout>
+ Sets the time between individual keepalive probes on the client side.
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes(!) | yes | yes | yes
+ yes | yes | yes | no
- This directive is only available from named defaults sections, not anonymous
- ones. ACLs defined in a defaults section are not visible from other sections
- using it.
+ Arguments :
+ <timeout> is the time between individual keepalive probes. It is specified
+ in seconds by default, but can be in any other unit if the number
+ is suffixed by the unit, as explained at the top of this
+ document.
- Example:
- acl invalid_src src 0.0.0.0/7 224.0.0.0/3
- acl invalid_src src_port 0:1023
- acl local_dst hdr(host) -i localhost
+  This keyword corresponds to the socket option TCP_KEEPINTVL. If this
+  keyword is not specified, the system-wide TCP parameter
+  (tcp_keepalive_intvl) is used.
+ The availability of this setting depends on the operating system. It is
+ known to work on Linux.
- See section 7 about ACL usage.
+ See also : "option clitcpka", "clitcpka-cnt", "clitcpka-idle".
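Taken together, the clitcpka-* settings map directly onto the TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT socket options. A minimal sketch (frontend and backend names are illustrative) that enables client-side keepalive probes after two minutes of idle, repeated every 30 seconds:

```haproxy
frontend www
    mode http
    bind :80
    option clitcpka      # enable TCP keepalive packets on the client side
    clitcpka-idle 2m     # first probe after 2 minutes idle (TCP_KEEPIDLE)
    clitcpka-intvl 30s   # then one probe every 30 seconds (TCP_KEEPINTVL)
    clitcpka-cnt 4       # give up after 4 unanswered probes (TCP_KEEPCNT)
    default_backend app
```

Note that the idle/intvl/cnt keywords only tune probes; keepalive itself still has to be enabled, for example with "option clitcpka".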
-backlog <conns>
- Give hints to the system about the approximate listen backlog desired size
+compression algo <algorithm> ...
+compression algo-req <algorithm>
+compression algo-res <algorithm>
+compression type <mime type> ...
+ Enable HTTP compression.
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | yes | yes | yes
Arguments :
- <conns> is the number of pending connections. Depending on the operating
- system, it may represent the number of already acknowledged
- connections, of non-acknowledged ones, or both.
+    algo     is followed by the list of supported compression algorithms for
+             responses (legacy keyword).
+    algo-req is followed by the compression algorithm to use for requests
+             (only one may be provided).
+ algo-res is followed by the list of supported compression algorithms for
+ responses.
+ type is followed by the list of MIME types that will be compressed for
+ responses (legacy keyword).
+ type-req is followed by the list of MIME types that will be compressed for
+ requests.
+ type-res is followed by the list of MIME types that will be compressed for
+ responses.
- This option is only meaningful for stream listeners, including QUIC ones. Its
- behavior however is not identical with QUIC instances.
+ The currently supported algorithms are :
+ identity this is mostly for debugging, and it was useful for developing
+ the compression feature. Identity does not apply any change on
+ data.
- For all listeners but QUIC, in order to protect against SYN flood attacks,
- one solution is to increase the system's SYN backlog size. Depending on the
- system, sometimes it is just tunable via a system parameter, sometimes it is
- not adjustable at all, and sometimes the system relies on hints given by the
- application at the time of the listen() syscall. By default, HAProxy passes
- the frontend's maxconn value to the listen() syscall. On systems which can
- make use of this value, it can sometimes be useful to be able to specify a
- different value, hence this backlog parameter.
+ gzip applies gzip compression. This setting is only available when
+ support for zlib or libslz was built in.
- On Linux 2.4, the parameter is ignored by the system. On Linux 2.6, it is
- used as a hint and the system accepts up to the smallest greater power of
- two, and never more than some limits (usually 32768).
+ deflate same as "gzip", but with deflate algorithm and zlib format.
+ Note that this algorithm has ambiguous support on many
+ browsers and no support at all from recent ones. It is
+ strongly recommended not to use it for anything else than
+ experimentation. This setting is only available when support
+ for zlib or libslz was built in.
- For QUIC listeners, backlog sets a shared limits for both the maximum count
- of active handshakes and connections waiting to be accepted. The handshake
- phase relies primarily of the network latency with the remote peer, whereas
- the second phase depends solely on haproxy load. When either one of this
- limit is reached, haproxy starts to drop reception of INITIAL packets,
- preventing any new connection allocation, until the connection excess starts
- to decrease. This situation may cause browsers to silently downgrade the HTTP
- versions and switching to TCP.
+ raw-deflate same as "deflate" without the zlib wrapper, and used as an
+ alternative when the browser wants "deflate". All major
+ browsers understand it and despite violating the standards,
+ it is known to work better than "deflate", at least on MSIE
+ and some versions of Safari. Do not use it in conjunction
+ with "deflate", use either one or the other since both react
+ to the same Accept-Encoding token. This setting is only
+ available when support for zlib or libslz was built in.
- See also : "maxconn" and the target operating system's tuning guide.
+  Compression is activated depending on the Accept-Encoding request header;
+  with "identity", that header is not considered. If a backend server
+  already supports HTTP compression, these directives are a no-op: HAProxy
+  will see the compressed response and will not compress it again. If the
+  backend server does not support HTTP compression and the request carries
+  an Accept-Encoding header, HAProxy will compress the matching response.
+ Compression is disabled when:
+ * the request does not advertise a supported compression algorithm in the
+ "Accept-Encoding" header
+ * the response message is not HTTP/1.1 or above
+  * the HTTP status code is not one of 200, 201, 202, or 203
+  * the response contains neither a "Content-Length" header nor a
+    "Transfer-Encoding" header whose last value is "chunked"
+  * the response contains a "Content-Type" header whose first value starts
+    with "multipart"
+ * the response contains the "no-transform" value in the "Cache-control"
+ header
+ * User-Agent matches "Mozilla/4" unless it is MSIE 6 with XP SP2, or MSIE 7
+ and later
+ * The response contains a "Content-Encoding" header, indicating that the
+ response is already compressed (see compression offload)
+ * The response contains an invalid "ETag" header or multiple ETag headers
+ * The payload size is smaller than the minimum size
+ (see compression minsize-res)
-balance <algorithm> [ <arguments> ]
-balance url_param <param> [check_post]
- Define the load balancing algorithm to be used in a backend.
+ Note: The compression does not emit the Warning header.
- May be used in the following contexts: tcp, http, log
+ Examples :
+ compression algo gzip
+ compression type text/html text/plain
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ See also : "compression offload", "compression direction",
+ "compression minsize-req" and "compression minsize-res"
- Arguments :
- <algorithm> is the algorithm used to select a server when doing load
- balancing. This only applies when no persistence information
- is available, or when a connection is redispatched to another
- server. <algorithm> may be one of the following :
+compression minsize-req <size>
+compression minsize-res <size>
+ Sets the minimum payload size in bytes for compression to be applied.
- roundrobin Each server is used in turns, according to their weights.
- This is the smoothest and fairest algorithm when the server's
- processing time remains equally distributed. This algorithm
- is dynamic, which means that server weights may be adjusted
- on the fly for slow starts for instance. It is limited by
- design to 4095 active servers per backend. Note that in some
- large farms, when a server becomes up after having been down
- for a very short time, it may sometimes take a few hundreds
- requests for it to be re-integrated into the farm and start
- receiving traffic. This is normal, though very rare. It is
- indicated here in case you would have the chance to observe
- it, so that you don't worry. Note: weights are ignored for
- backends in LOG mode.
+ May be used in the following contexts: http
- static-rr Each server is used in turns, according to their weights.
- This algorithm is as similar to roundrobin except that it is
- static, which means that changing a server's weight on the
- fly will have no effect. On the other hand, it has no design
- limitation on the number of servers, and when a server goes
- up, it is always immediately reintroduced into the farm, once
- the full map is recomputed. It also uses slightly less CPU to
- run (around -1%). This algorithm is not usable in LOG mode.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- leastconn The server with the lowest number of connections receives the
- connection. Round-robin is performed within groups of servers
- of the same load to ensure that all servers will be used. Use
- of this algorithm is recommended where very long sessions are
- expected, such as LDAP, SQL, TSE, etc... but is not very well
- suited for protocols using short sessions such as HTTP. This
- algorithm is dynamic, which means that server weights may be
- adjusted on the fly for slow starts for instance. It will
- also consider the number of queued connections in addition to
- the established ones in order to minimize queuing. This
- algorithm is not usable in LOG mode.
+ Payloads smaller than this size will not be compressed, avoiding unnecessary
+ CPU overhead for data that would not significantly benefit from compression.
+ "minsize-req" applies on requests and "minsize-res" on responses.
+ The default value is 0.
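For instance, a sketch (section and backend names are illustrative) that spares the CPU from compressing tiny responses, which gain little or may even grow once compression framing is added:

```haproxy
defaults
    mode http

backend static
    compression algo gzip
    compression type text/html text/css text/plain
    compression minsize-res 1024   # responses under 1024 bytes pass through uncompressed
```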
- first The first server with available connection slots receives the
- connection. The servers are chosen from the lowest numeric
- identifier to the highest (see server parameter "id"), which
- defaults to the server's position in the farm. Once a server
- reaches its maxconn value, the next server is used. It does
- not make sense to use this algorithm without setting maxconn.
- The purpose of this algorithm is to always use the smallest
- number of servers so that extra servers can be powered off
- during non-intensive hours. This algorithm ignores the server
- weight, and brings more benefit to long session such as RDP
- or IMAP than HTTP, though it can be useful there too. In
- order to use this algorithm efficiently, it is recommended
- that a cloud controller regularly checks server usage to turn
- them off when unused, and regularly checks backend queue to
- turn new servers on when the queue inflates. Alternatively,
- using "http-check send-state" may inform servers on the load.
- This algorithm is not usable in LOG mode.
+compression offload
+ Makes HAProxy work as a compression offloader only.
- hash Takes a regular sample expression in argument. The expression
- is evaluated for each request and hashed according to the
- configured hash-type. The result of the hash is divided by
- the total weight of the running servers to designate which
- server will receive the request. This can be used in place of
- "source", "uri", "hdr()", "url_param()", "rdp-cookie" to make
- use of a converter, refine the evaluation, or be used to
- extract data from local variables for example. When the data
- is not available, round robin will apply. This algorithm is
- static by default, which means that changing a server's
- weight on the fly will have no effect, but this can be
- changed using "hash-type". This algorithm is not usable for
- backends in LOG mode, please use "log-hash" instead.
+ May be used in the following contexts: http
- source The source IP address is hashed and divided by the total
- weight of the running servers to designate which server will
- receive the request. This ensures that the same client IP
- address will always reach the same server as long as no
- server goes down or up. If the hash result changes due to the
- number of running servers changing, many clients will be
- directed to a different server. This algorithm is generally
- used in TCP mode where no cookie may be inserted. It may also
- be used on the Internet to provide a best-effort stickiness
- to clients which refuse session cookies. This algorithm is
- static by default, which means that changing a server's
- weight on the fly will have no effect, but this can be
- changed using "hash-type". See also the "hash" option above.
- This algorithm is not usable for backends in LOG mode.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
- uri This algorithm hashes either the left part of the URI (before
- the question mark) or the whole URI (if the "whole" parameter
- is present) and divides the hash value by the total weight of
- the running servers. The result designates which server will
- receive the request. This ensures that the same URI will
- always be directed to the same server as long as no server
- goes up or down. This is used with proxy caches and
- anti-virus proxies in order to maximize the cache hit rate.
- Note that this algorithm may only be used in an HTTP backend.
- This algorithm is static by default, which means that
- changing a server's weight on the fly will have no effect,
- but this can be changed using "hash-type".
+ The "offload" setting makes HAProxy remove the Accept-Encoding header to
+ prevent backend servers from compressing responses. It is strongly
+ recommended not to do this because this means that all the compression work
+ will be done on the single point where HAProxy is located. However in some
+ deployment scenarios, HAProxy may be installed in front of a buggy gateway
+  with a broken HTTP compression implementation which cannot be turned off.
+  In that case HAProxy can be used to prevent that gateway from emitting
+  invalid payloads. Note that simply removing the Accept-Encoding header in
+  the configuration does not work, because the removal happens before the
+  header is seen by the compression engine, so it would also prevent HAProxy
+  itself from compressing. The "offload" setting should be used instead in
+  such scenarios.
- This algorithm supports two optional parameters "len" and
- "depth", both followed by a positive integer number. These
- options may be helpful when it is needed to balance servers
- based on the beginning of the URI only. The "len" parameter
- indicates that the algorithm should only consider that many
- characters at the beginning of the URI to compute the hash.
- Note that having "len" set to 1 rarely makes sense since most
- URIs start with a leading "/".
+ If this setting is used in a defaults section, a warning is emitted and the
+ option is ignored.
- The "depth" parameter indicates the maximum directory depth
- to be used to compute the hash. One level is counted for each
- slash in the request. If both parameters are specified, the
- evaluation stops when either is reached.
-
- A "path-only" parameter indicates that the hashing key starts
- at the first '/' of the path. This can be used to ignore the
- authority part of absolute URIs, and to make sure that HTTP/1
- and HTTP/2 URIs will provide the same hash. See also the
- "hash" option above.
+ See also : "compression type", "compression algo", "compression direction"
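As a sketch of the buggy-gateway scenario described above (listener, backend and address are illustrative), HAProxy compresses the responses itself and, thanks to "offload", strips the Accept-Encoding header so the gateway behind it never attempts to compress:

```haproxy
frontend edge
    mode http
    bind :8080
    compression algo gzip
    compression type text/html text/plain
    compression offload     # remove Accept-Encoding so the next hop won't compress
    default_backend buggy_gateway

backend buggy_gateway
    server gw 192.0.2.10:80
```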
- url_param The URL parameter specified in argument will be looked up in
- the query string of each HTTP GET request.
+compression direction <direction>
+  Makes HAProxy able to compress requests as well as responses.
+  Valid values are "request", to compress only requests, "response", to
+  compress only responses, or "both", to compress in both directions.
+  The default value is "response".
- If the modifier "check_post" is used, then an HTTP POST
- request entity will be searched for the parameter argument,
- when it is not found in a query string after a question mark
- ('?') in the URL. The message body will only start to be
- analyzed once either the advertised amount of data has been
- received or the request buffer is full. In the unlikely event
- that chunked encoding is used, only the first chunk is
- scanned. Parameter values separated by a chunk boundary, may
- be randomly balanced if at all. This keyword used to support
- an optional <max_wait> parameter which is now ignored.
+ May be used in the following contexts: http
- If the parameter is found followed by an equal sign ('=') and
- a value, then the value is hashed and divided by the total
- weight of the running servers. The result designates which
- server will receive the request.
+ See also : "compression type", "compression algo", "compression offload"
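Request compression combines "direction" with the algo-req/type-req variants described earlier. A sketch (backend name and MIME types are illustrative) for a backend receiving large JSON payloads in both directions:

```haproxy
backend api
    mode http
    compression direction both             # compress requests and responses
    compression algo-req gzip              # a single algorithm for requests
    compression algo-res gzip              # candidate algorithm(s) for responses
    compression type-req application/json
    compression type-res application/json text/plain
```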
- This is used to track user identifiers in requests and ensure
- that a same user ID will always be sent to the same server as
- long as no server goes up or down. If no value is found or if
- the parameter is not found, then a round robin algorithm is
- applied. Note that this algorithm may only be used in an HTTP
- backend. This algorithm is static by default, which means
- that changing a server's weight on the fly will have no
- effect, but this can be changed using "hash-type". See also
- the "hash" option above.
+cookie <name> [ rewrite | insert | prefix ] [ indirect ] [ nocache ]
+ [ postonly ] [ preserve ] [ httponly ] [ secure ]
+ [ domain <domain> ]* [ maxidle <idle> ] [ maxlife <life> ]
+ [ dynamic ] [ attr <value> ]*
+ Enable cookie-based persistence in a backend.
- hdr(<name>) The HTTP header <name> will be looked up in each HTTP
- request. Just as with the equivalent ACL 'hdr()' function,
- the header name in parenthesis is not case sensitive. If the
- header is absent or if it does not contain any value, the
- roundrobin algorithm is applied instead.
+ May be used in the following contexts: http
- An optional 'use_domain_only' parameter is available, for
- reducing the hash algorithm to the main domain part with some
- specific headers such as 'Host'. For instance, in the Host
- value "haproxy.1wt.eu", only "1wt" will be considered.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- This algorithm is static by default, which means that
- changing a server's weight on the fly will have no effect,
- but this can be changed using "hash-type". See also the
- "hash" option above.
+ Arguments :
+ <name> is the name of the cookie which will be monitored, modified or
+ inserted in order to bring persistence. This cookie is sent to
+ the client via a "Set-Cookie" header in the response, and is
+ brought back by the client in a "Cookie" header in all requests.
+ Special care should be taken to choose a name which does not
+ conflict with any likely application cookie. Also, if the same
+ backends are subject to be used by the same clients (e.g.
+ HTTP/HTTPS), care should be taken to use different cookie names
+ between all backends if persistence between them is not desired.
- random
- random(<draws>)
- A random number will be used as the key for the consistent
- hashing function. This means that the servers' weights are
- respected, dynamic weight changes immediately take effect, as
- well as new server additions. Random load balancing can be
- useful with large farms or when servers are frequently added
- or removed as it may avoid the hammering effect that could
- result from roundrobin or leastconn in this situation. The
- hash-balance-factor directive can be used to further improve
- fairness of the load balancing, especially in situations
- where servers show highly variable response times. When an
- argument <draws> is present, it must be an integer value one
- or greater, indicating the number of draws before selecting
- the least loaded of these servers. It was indeed demonstrated
- that picking the least loaded of two servers is enough to
- significantly improve the fairness of the algorithm, by
- always avoiding to pick the most loaded server within a farm
- and getting rid of any bias that could be induced by the
- unfair distribution of the consistent list. Higher values N
- will take away N-1 of the highest loaded servers at the
- expense of performance. With very high values, the algorithm
- will converge towards the leastconn's result but much slower.
- The default value is 2, which generally shows very good
- distribution and performance. This algorithm is also known as
- the Power of Two Random Choices and is described here :
- http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf
+ rewrite This keyword indicates that the cookie will be provided by the
+ server and that HAProxy will have to modify its value to set the
+ server's identifier in it. This mode is handy when the management
+ of complex combinations of "Set-cookie" and "Cache-control"
+ headers is left to the application. The application can then
+ decide whether or not it is appropriate to emit a persistence
+ cookie. Since all responses should be monitored, this mode
+ doesn't work in HTTP tunnel mode. Unless the application
+ behavior is very complex and/or broken, it is advised not to
+ start with this mode for new deployments. This keyword is
+ incompatible with "insert" and "prefix".
- For backends in LOG mode, the number of draws is ignored and
- a single random is picked since there is no notion of server
- load. Random log balancing can be useful with large farms or
- when servers are frequently added or removed from the pool of
- available servers as it may avoid the hammering effect that
- could result from roundrobin in this situation.
+ insert This keyword indicates that the persistence cookie will have to
+ be inserted by HAProxy in server responses if the client did not
- rdp-cookie
- rdp-cookie(<name>)
- The RDP cookie <name> (or "mstshash" if omitted) will be
- looked up and hashed for each incoming TCP request. Just as
- with the equivalent ACL 'req.rdp_cookie()' function, the name
- is not case-sensitive. This mechanism is useful as a degraded
- persistence mode, as it makes it possible to always send the
- same user (or the same session ID) to the same server. If the
- cookie is not found, the normal roundrobin algorithm is
- used instead.
+ already have a cookie that would have permitted it to access this
+ server. When used without the "preserve" option, if the server
+ emits a cookie with the same name, it will be removed before
+ processing. For this reason, this mode can be used to upgrade
+ existing configurations running in the "rewrite" mode. The cookie
+ will only be a session cookie and will not be stored on the
+ client's disk. By default, unless the "indirect" option is added,
+ the server will see the cookies emitted by the client. Due to
+ caching effects, it is generally wise to add the "nocache" or
+ "postonly" keywords (see below). The "insert" keyword is not
+ compatible with "rewrite" and "prefix".
- Note that for this to work, the frontend must ensure that an
- RDP cookie is already present in the request buffer. For this
- you must use 'tcp-request content accept' rule combined with
- a 'req.rdp_cookie_cnt' ACL.
+ prefix This keyword indicates that instead of relying on a dedicated
+ cookie for the persistence, an existing one will be completed.
+ This may be needed in some specific environments where the client
+ does not support more than one single cookie and the application
+ already needs it. In this case, whenever the server sets a cookie
+ named <name>, it will be prefixed with the server's identifier
+ and a delimiter. The prefix will be removed from all client
+ requests so that the server still finds the cookie it emitted.
+ Since all requests and responses are subject to being modified,
+ this mode doesn't work with tunnel mode. The "prefix" keyword is
+ not compatible with "rewrite" and "insert". Note: it is highly
+ recommended not to use "indirect" with "prefix", otherwise server
+ cookie updates would not be sent to clients.
- This algorithm is static by default, which means that
- changing a server's weight on the fly will have no effect,
- but this can be changed using "hash-type". See also the
- "hash" option above.
+ indirect When this option is specified, no cookie will be emitted to a
+ client which already has a valid one for the server which has
+ processed the request. If the server sets such a cookie itself,
+ it will be removed, unless the "preserve" option is also set. In
+ "insert" mode, this will additionally remove cookies from the
+ requests transmitted to the server, making the persistence
+ mechanism totally transparent from an application point of view.
+ Note: it is highly recommended not to use "indirect" with
+ "prefix", otherwise server cookie updates would not be sent to
+ clients.
- log-hash Takes a comma-delimited list of converters in argument. These
- converters are applied in sequence to the input log message,
- and the result will be cast as a string then hashed according
- to the configured hash-type. The resulting hash will be used
- to select the destination server among the ones declared in
- the log backend. The goal of this algorithm is to be able to
- extract a key within the final log message using string
- converters and then be able to stick to the same server thanks
- to the hash. Only "map-based" hashes are supported for now.
- This algorithm is only usable for backends in LOG mode, for
- others, please use "hash" instead.
+ nocache This option is recommended in conjunction with the insert mode
+ when there is a cache between the client and HAProxy, as it
+ ensures that a cacheable response will be tagged non-cacheable if
+ a cookie needs to be inserted. This is important because if all
+ persistence cookies are added on a cacheable home page for
+ instance, then all customers will then fetch the page from an
+ outer cache and will all share the same persistence cookie,
+ leading to one server receiving much more traffic than others.
+ See also the "insert" and "postonly" options.
- sticky Tries to stick to the same server as much as possible. The
- first server in the list of available servers receives all
- the log messages. When the server goes DOWN, the next server
- in the list takes its place. When a previously DOWN server
- goes back UP it is added at the end of the list so that the
- sticky server doesn't change until it becomes DOWN.
+ postonly This option ensures that cookie insertion will only be performed
+ on responses to POST requests. It is an alternative to the
+ "nocache" option, because POST responses are not cacheable, so
+ this ensures that the persistence cookie will never get cached.
+ Since most sites do not need any sort of persistence before the
+ first POST which generally is a login request, this is a very
+ efficient method to optimize caching without the risk of finding a
+ persistence cookie in the cache.
+ See also the "insert" and "nocache" options.
- <arguments> is an optional list of arguments which may be needed by some
- algorithms. Right now, only "url_param", "uri" and "log-hash"
- support an optional argument.
+ preserve This option may only be used with "insert" and/or "indirect". It
+ allows the server to emit the persistence cookie itself. In this
+ case, if a cookie is found in the response, HAProxy will leave it
+ untouched. This is useful in order to end persistence after a
+ logout request for instance. For this, the server just has to
+ emit a cookie with an invalid value (e.g. empty) or with a date in
+ the past. By combining this mechanism with the "disable-on-404"
+ check option, it is possible to perform a completely graceful
+ shutdown because users will definitely leave the server after
+ they logout.
- The load balancing algorithm of a backend is set to roundrobin when no other
- algorithm, mode nor option have been set. The algorithm may only be set once
- for each backend.
+ httponly This option tells HAProxy to add an "HttpOnly" cookie attribute
+ when a cookie is inserted. This attribute is used so that a
+ user agent doesn't share the cookie with non-HTTP components.
+ Please check RFC6265 for more information on this attribute.
- With authentication schemes that require the same connection like NTLM, URI
- based algorithms must not be used, as they would cause subsequent requests
- to be routed to different backend servers, breaking the invalid assumptions
- NTLM relies on.
+ secure This option tells HAProxy to add a "Secure" cookie attribute when
+ a cookie is inserted. This attribute is used so that a user agent
+ never emits this cookie over non-secure channels, which means
+ that a cookie learned with this flag will be presented only over
+ SSL/TLS connections. Please check RFC6265 for more information on
+ this attribute.
- TCP/HTTP Examples :
- balance roundrobin
- balance url_param userid
- balance url_param session_id check_post 64
- balance hdr(User-Agent)
- balance hdr(host)
- balance hdr(Host) use_domain_only
- balance hash req.cookie(clientid)
- balance hash var(req.client_id)
- balance hash req.hdr_ip(x-forwarded-for,-1),ipmask(24)
+ domain This option allows specifying the domain at which a cookie is
+ inserted. It requires exactly one parameter: a valid domain
+ name. If the domain begins with a dot, the browser is allowed to
+ use it for any host ending with that name. It is also possible to
+ specify several domain names by invoking this option multiple
+ times. Some browsers might have small limits on the number of
+ domains, so be careful when doing that. For the record, sending
+ 10 domains to MSIE 6 or Firefox 2 works as expected.
- LOG backend examples:
- global
- log backend@mylog-rrb local0 # send all logs to mylog-rrb backend
- log backend@mylog-hash local0 # send all logs to mylog-hash backend
-
- backend mylog-rrb
- mode log
- balance roundrobin
-
- server s1 udp@127.0.0.1:514 # will receive 50% of log messages
- server s2 udp@127.0.0.1:514
+ maxidle This option allows inserted cookies to be ignored after some idle
+ time. It only works with insert-mode cookies. When a cookie is
+ sent to the client, the date this cookie was emitted is sent too.
+ Upon further presentations of this cookie, if the date is older
+ than the delay indicated by the parameter (in seconds), it will
+ be ignored. Otherwise, it will be refreshed if needed when the
+ response is sent to the client. This is particularly useful to
+ prevent users who never close their browsers from remaining for
+ too long on the same server (e.g. after a farm size change). When
+ this option is set and a cookie has no date, it is always
+ accepted, but gets refreshed in the response. This maintains the
+ ability for admins to access their sites. Cookies with a date
+ more than 24 hours in the future are ignored. Doing so
+ lets admins fix timezone issues without risking kicking users off
+ the site.
- backend mylog-hash
- mode log
+ maxlife This option allows inserted cookies to be ignored after some life
+ time, whether they're in use or not. It only works with insert
+ mode cookies. When a cookie is first sent to the client, the date
+ this cookie was emitted is sent too. Upon further presentations
+ of this cookie, if the date is older than the delay indicated by
+ the parameter (in seconds), it will be ignored. If the cookie in
+ the request has no date, it is accepted and a date will be set.
+ Cookies with a date more than 24 hours in the future are
+ ignored. Doing so lets admins fix timezone issues without risking
+ kicking users off the site. Contrary to maxidle, this value is
+ not refreshed, only the first visit date counts. Both maxidle and
+ maxlife may be used at the same time. This is particularly useful to
+ prevent users who never close their browsers from remaining for
+ too long on the same server (e.g. after a farm size change). This
+ is stronger than the maxidle method in that it forces a
+ redispatch after some absolute delay.
- # extract "METHOD URL PROTO" at the end of the log message,
- # and let haproxy hash it so that log messages generated from
- # similar requests get sent to the same syslog server:
- balance log-hash 'field(-2,\")'
+ dynamic Activate dynamic cookies. When used, a session cookie is
+ dynamically created for each server, based on the IP and port
+ of the server, and a secret key, specified in the
+ "dynamic-cookie-key" backend directive.
+ The cookie will be regenerated each time the IP address changes,
+ and is only generated for IPv4/IPv6.
- # server list here
- server s1 127.0.0.1:514
- #...
+ attr This option tells HAProxy to add an extra attribute when a
+ cookie is inserted. The attribute value can contain any
+ characters except control ones or ";". This option may be
+ repeated.
- Note: the following caveats and limitations on using the "check_post"
- extension with "url_param" must be considered :
+ There can be only one persistence cookie per HTTP backend, and it can be
+ declared in a defaults section. The value of the cookie will be the value
+ indicated after the "cookie" keyword in a "server" statement. If no cookie
+ is declared for a given server, the cookie is not set.
- - all POST requests are eligible for consideration, because there is no way
- to determine if the parameters will be found in the body or entity which
- may contain binary data. Therefore another method may be required to
- restrict consideration of POST requests that have no URL parameters in
- the body. (see acl http_end)
+ Examples :
+ cookie JSESSIONID prefix
+ cookie SRV insert indirect nocache
+ cookie SRV insert postonly indirect
+ cookie SRV insert indirect nocache maxidle 30m maxlife 8h
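+
+ As an illustration only (the attribute value and the use of a secret key
+ are arbitrary), the "attr" and "dynamic" options described above could be
+ used as follows:
+ cookie SRV insert attr "SameSite=Strict"
+ cookie SRV insert dynamic # with "dynamic-cookie-key" set in the backend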
- - using a <max_wait> value larger than the request buffer size does not
- make sense and is useless. The buffer size is set at build time, and
- defaults to 16 kB.
+ See also : "balance source", "capture cookie", "server" and "ignore-persist".
- - Content-Encoding is not supported, the parameter search will probably
- fail; and load balancing will fall back to Round Robin.
+declare capture [ request | response ] len <length>
+ Declares a capture slot.
- - Expect: 100-continue is not supported, load balancing will fall back to
- Round Robin.
+ May be used in the following contexts: tcp, http
- - Transfer-Encoding (RFC7230 3.3.1) is only supported in the first chunk.
- If the entire parameter value is not present in the first chunk, the
- selection of server is undefined (actually, defined by how little
- actually appeared in the first chunk).
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
- - This feature does not support generation of a 100, 411 or 501 response.
+ Arguments:
+ <length> is the length allowed for the capture.
- - In some cases, requesting "check_post" MAY attempt to scan the entire
- contents of a message body. Scanning normally terminates when linear
- white space or control characters are found, indicating the end of what
- might be a URL parameter list. This is probably not a concern with SGML
- type message bodies.
+ This declaration is only available in the frontend or listen section, but the
+ reserved slot can be used in the backends. The "request" keyword allocates a
+ capture slot for use in the request, and "response" allocates a capture slot
+ for use in the response.
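+
+ As an illustrative sketch (the frontend name and header are arbitrary), a
+ slot declared this way can then be filled with "http-request capture",
+ slot ids being assigned in declaration order starting from 0:
+
+ frontend fe
+ declare capture request len 32
+ http-request capture req.hdr(Host) id 0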
- See also : "dispatch", "cookie", "transparent", "hash-type".
+ See also: "capture-req", "capture-res" (sample converters),
+ "capture.req.hdr", "capture.res.hdr" (sample fetches),
+ "http-request capture" and "http-response capture".
-bind [<address>]:<port_range> [, ...] [param*]
-bind /<path> [, ...] [param*]
- Define one or several listening addresses and/or ports in a frontend.
+default-server [param*]
+ Change default options for a server in a backend
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | no
+ yes | no | yes | yes
- Arguments :
- <address> is optional and can be a host name, an IPv4 address, an IPv6
- address, or '*'. It designates the address the frontend will
- listen on. If unset, all IPv4 addresses of the system will be
- listened on. The same will apply for '*' or the system's
- special address "0.0.0.0". The IPv6 equivalent is '::'. Note
- that for UDP, specific OS features are required when binding
- on multiple addresses to ensure the correct network interface
- and source address will be used on response. In other way,
- for QUIC listeners only bind on multiple addresses if running
- with a modern enough systems.
+ Arguments:
+ <param*> is a list of parameters for this server. The "default-server"
+ keyword accepts a large number of options and has a complete
+ section dedicated to it. Please refer to section 5 for more
+ details.
- Optionally, an address family prefix may be used before the
- address to force the family regardless of the address format,
- which can be useful to specify a path to a unix socket with
- no slash ('/'). Currently supported prefixes are :
- - 'ipv4@' -> address is always IPv4
- - 'ipv6@' -> address is always IPv6
- - 'udp@' -> address is resolved as IPv4 or IPv6 and
- protocol UDP is used. Currently those listeners are
- supported only in log-forward sections.
- - 'udp4@' -> address is always IPv4 and protocol UDP
- is used. Currently those listeners are supported
- only in log-forward sections.
- - 'udp6@' -> address is always IPv6 and protocol UDP
- is used. Currently those listeners are supported
- only in log-forward sections.
- - 'unix@' -> address is a path to a local unix socket
- - 'abns@' -> address is in abstract namespace (Linux only).
- - 'abnsz@' -> address is in abstract namespace (Linux only)
- but it is explicitly zero-terminated. This means no \0
- padding is used to complete sun_path. It is useful to
- interconnect with programs that don't implement the
- default abns naming logic that haproxy uses.
- - 'fd@<n>' -> use file descriptor <n> inherited from the
- parent. The fd must be bound and may or may not already
- be listening.
- - 'sockpair@<n>'-> like fd@ but you must use the fd of a
- connected unix socket or of a socketpair. The bind waits
- to receive a FD over the unix socket and uses it as if it
- was the FD of an accept(). Should be used carefully.
- - 'quic4@' -> address is resolved as IPv4 and protocol UDP
- is used. Note that to achieve the best performance with a
- large traffic you should keep "tune.quic.socket-owner" on
- connection. Else QUIC connections will be multiplexed
- over the listener socket. Another alternative would be to
- duplicate QUIC listener instances over several threads,
- for example using "shards" keyword to at least reduce
- thread contention.
- - 'quic6@' -> address is resolved as IPv6 and protocol UDP
- is used. The performance note for QUIC over IPv4 applies
- as well.
- - 'rhttp@' [ EXPERIMENTAL ] -> used for reverse HTTP.
- Address must be a server with the format
- '<backend>/<server>'. The server will be used to
- instantiate connections to a remote address. The listener
- will try to maintain "nbconn" connections. This is an
- experimental features which requires
- "expose-experimental-directives" on a line before this
- bind.
+ Example :
+ default-server inter 1000 weight 13
- You may want to reference some environment variables in the
- address parameter, see section 2.3 about environment
- variables.
+ See also: "server" and section 5 about server options
- <port_range> is either a unique TCP port, or a port range for which the
- proxy will accept connections for the IP address specified
- above. The port is mandatory for TCP listeners. Note that in
- the case of an IPv6 address, the port is always the number
- after the last colon (':'). A range can either be :
- - a numerical port (ex: '80')
- - a dash-delimited ports range explicitly stating the lower
- and upper bounds (ex: '2000-2100') which are included in
- the range.
- Particular care must be taken against port ranges, because
- every <address:port> couple consumes one socket (= a file
- descriptor), so it's easy to consume lots of descriptors
- with a simple range, and to run out of sockets. Also, each
- <address:port> couple must be used only once among all
- instances running on a same system. Please note that binding
- to ports lower than 1024 generally require particular
- privileges to start the program, which are independent of
- the 'uid' parameter.
+default_backend <backend>
+ Specify the backend to use when no "use_backend" rule has been matched.
- <path> is a UNIX socket path beginning with a slash ('/'). This is
- alternative to the TCP listening port. HAProxy will then
- receive UNIX connections on the socket located at this place.
- The path must begin with a slash and by default is absolute.
- It can be relative to the prefix defined by "unix-bind" in
- the global section. Note that the total length of the prefix
- followed by the socket path cannot exceed some system limits
- for UNIX sockets, which commonly are set to 107 characters.
+ May be used in the following contexts: tcp, http
- <param*> is a list of parameters common to all sockets declared on the
- same line. These numerous parameters depend on OS and build
- options and have a complete section dedicated to them. Please
- refer to section 5 to for more details.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- It is possible to specify a list of address:port combinations delimited by
- commas. The frontend will then listen on all of these addresses. There is no
- fixed limit to the number of addresses and ports which can be listened on in
- a frontend, as well as there is no limit to the number of "bind" statements
- in a frontend.
+ Arguments :
+ <backend> is the name of the backend to use.
+
+ When doing content-switching between frontend and backends using the
+ "use_backend" keyword, it is often useful to indicate which backend will be
+ used when no rule has matched. It generally is the dynamic backend which
+ will catch all undetermined requests.
Example :
- listen http_proxy
- bind :80,:443
- bind 10.0.0.1:10080,10.0.0.1:10443
- bind /var/run/ssl-frontend.sock user root mode 600 accept-proxy
- listen http_https_proxy
- bind :80
- bind :443 ssl crt /etc/haproxy/site.pem
+ use_backend dynamic if url_dyn
+ use_backend static if url_css url_img extension_img
+ default_backend dynamic
- listen http_https_proxy_explicit
- bind ipv6@:80
- bind ipv4@public_ssl:443 ssl crt /etc/haproxy/site.pem
- bind unix@ssl-frontend.sock user root mode 600 accept-proxy
+ See also : "use_backend"
- listen external_bind_app1
- bind "fd@${FD_APP1}"
- listen h3_quic_proxy
- bind quic4@10.0.0.1:8888 ssl crt /etc/mycrt
-
- Note: regarding Linux's abstract namespace sockets, "abns" HAProxy sockets
- uses the whole sun_path length is used for the address length. Some
- other programs such as socat use the string length only by default.
- Pass the option ",unix-tightsocklen=0" to any abstract socket
- definition in socat to make it compatible with HAProxy's, or use the
- "abnsz" HAProxy socket family instead.
-
- See also : "source", "option forwardfor", "unix-bind" and the PROXY protocol
- documentation, and section 5 about bind options.
+description <string>
+ Describe a listen, frontend or backend.
+ May be used in the following contexts: tcp, http, log
-capture cookie <name> len <length>
- Capture and log a cookie in the request and in the response.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
- May be used in the following contexts: http
+ Arguments : string
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | no
+ Allows adding a sentence to describe the related object in the HAProxy HTML
+ stats page. The description will be printed on the right of the object name
+ it describes.
+ There is no need to backslash-escape spaces in the <string> argument.
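+
+ Example (the wording is arbitrary):
+ backend app
+ description Application servers for the public web site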
- Arguments :
- <name> is the beginning of the name of the cookie to capture. In order
- to match the exact name, simply suffix the name with an equal
- sign ('='). The full name will appear in the logs, which is
- useful with application servers which adjust both the cookie name
- and value (e.g. ASPSESSIONXXX).
- <length> is the maximum number of characters to report in the logs, which
- include the cookie name, the equal sign and the value, all in the
- standard "name=value" form. The string will be truncated on the
- right if it exceeds <length>.
+disabled
+ Disable a proxy, frontend or backend.
- Only the first cookie is captured. Both the "cookie" request headers and the
- "set-cookie" response headers are monitored. This is particularly useful to
- check for application bugs causing session crossing or stealing between
- users, because generally the user's cookies can only change on a login page.
+ May be used in the following contexts: tcp, http, log
- When the cookie was not presented by the client, the associated log column
- will report "-". When a request does not cause a cookie to be assigned by the
- server, a "-" is reported in the response column.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- The capture is performed in the frontend only because it is necessary that
- the log format does not change for a given frontend depending on the
- backends. This may change in the future. Note that there can be only one
- "capture cookie" statement in a frontend. The maximum capture length is set
- by the global "tune.http.cookielen" setting and defaults to 63 characters. It
- is not possible to specify a capture in a "defaults" section.
+ Arguments : none
- Example:
- capture cookie ASPSESSION len 32
+ The "disabled" keyword is used to disable an instance, mainly in order to
+ free up a listening port or to temporarily disable a service. The instance
+ will still be created and its configuration will be checked, but it will be
+ created in the "stopped" state and will appear as such in the statistics. It
+ will not receive any traffic nor will it send any health-checks or logs. It
+ is possible to disable many instances at once by adding the "disabled"
+ keyword in a "defaults" section.
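+
+ Example (the backend name and address are illustrative):
+ backend legacy_app
+ disabled # kept in the configuration but created in the "stopped" state
+ server s1 192.168.0.1:80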
- See also : "capture request header", "capture response header" as well as
- section 8 about logging.
+ See also : "enabled"
-capture request header <name> len <length>
- Capture and log the last occurrence of the specified request header.
+dispatch <address>:<port>
+ Set a default server address
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | no
+ no | no | yes | yes
Arguments :
- <name> is the name of the header to capture. The header names are not
- case-sensitive, but it is a common practice to write them as they
- appear in the requests, with the first letter of each word in
- upper case. The header name will not appear in the logs, only the
- value is reported, but the position in the logs is respected.
-
- <length> is the maximum number of characters to extract from the value and
- report in the logs. The string will be truncated on the right if
- it exceeds <length>.
-
- The complete value of the last occurrence of the header is captured. The
- value will be added to the logs between braces ('{}'). If multiple headers
- are captured, they will be delimited by a vertical bar ('|') and will appear
- in the same order they were declared in the configuration. Non-existent
- headers will be logged just as an empty string. Common uses for request
- header captures include the "Host" field in virtual hosting environments, the
- "Content-length" when uploads are supported, "User-agent" to quickly
- differentiate between real users and robots, and "X-Forwarded-For" in proxied
- environments to find where the request came from.
- Note that when capturing headers such as "User-agent", some spaces may be
- logged, making the log analysis more difficult. Thus be careful about what
- you log if you know your log parser is not smart enough to rely on the
- braces.
+ <address> is the IPv4 address of the default server. Alternatively, a
+ resolvable hostname is supported, but this name will be resolved
+ during start-up.
- There is no limit to the number of captured request headers nor to their
- length, though it is wise to keep them low to limit memory usage per stream.
- In order to keep log format consistent for a same frontend, header captures
- can only be declared in a frontend. It is not possible to specify a capture
- in a "defaults" section.
+ <port> is a mandatory port specification. All connections will be sent
+ to this port, and it is not permitted to use port offsets as is
+ possible with normal servers.
- Example:
- capture request header Host len 15
- capture request header X-Forwarded-For len 15
- capture request header Referer len 15
+ The "dispatch" keyword designates a default server for use when no other
+ server can take the connection. In the past it was used to forward
+ non-persistent connections to an auxiliary load balancer. Due to its
+ simple syntax, it has also been used for simple TCP relays. For clarity,
+ it is recommended not to use it, and to use the "server" directive
+ instead.
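+
+ Example (address and port are illustrative):
+ listen old_relay
+ bind :8000
+ dispatch 192.168.0.10:8000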
- See also : "capture cookie", "capture response header" as well as section 8
- about logging.
+ See also : "server"
-capture response header <name> len <length>
- Capture and log the last occurrence of the specified response header.
+dynamic-cookie-key <string>
+ Set the dynamic cookie secret key for a backend.
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | no
-
- Arguments :
- <name> is the name of the header to capture. The header names are not
- case-sensitive, but it is a common practice to write them as they
- appear in the response, with the first letter of each word in
- upper case. The header name will not appear in the logs, only the
- value is reported, but the position in the logs is respected.
-
- <length> is the maximum number of characters to extract from the value and
- report in the logs. The string will be truncated on the right if
- it exceeds <length>.
-
- The complete value of the last occurrence of the header is captured. The
- result will be added to the logs between braces ('{}') after the captured
- request headers. If multiple headers are captured, they will be delimited by
- a vertical bar ('|') and will appear in the same order they were declared in
- the configuration. Non-existent headers will be logged just as an empty
- string. Common uses for response header captures include the "Content-length"
- header which indicates how many bytes are expected to be returned, the
- "Location" header to track redirections.
-
- There is no limit to the number of captured response headers nor to their
- length, though it is wise to keep them low to limit memory usage per stream.
- In order to keep log format consistent for a same frontend, header captures
- can only be declared in a frontend. It is not possible to specify a capture
- in a "defaults" section.
-
- Example:
- capture response header Content-length len 9
- capture response header Location len 15
+ yes | no | yes | yes
- See also : "capture cookie", "capture request header" as well as section 8
- about logging.
+ Arguments : The secret key to be used.
+ When dynamic cookies are enabled (see the "dynamic" directive for cookie),
+ a dynamic cookie is created for each server (unless one is explicitly
+ specified on the "server" line), using a hash of the IP address of the
+ server, the TCP port, and the secret key.
+ This way, session persistence is ensured across multiple load balancers,
+ even if servers are dynamically added or removed.
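+
+ Example (the key value is arbitrary; it should be kept secret and be
+ identical on all load balancers sharing the persistence):
+ backend dynamic_persist
+ cookie SRV insert dynamic
+ dynamic-cookie-key "some-secret-phrase"
+ server s1 192.168.0.1:80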
-clitcpka-cnt <count>
- Sets the maximum number of keepalive probes TCP should send before dropping
- the connection on the client side.
+enabled
+ Enable a proxy, frontend or backend.
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | yes | yes | yes
- Arguments :
- <count> is the maximum number of keepalive probes.
+ Arguments : none
- This keyword corresponds to the socket option TCP_KEEPCNT. If this keyword
- is not specified, system-wide TCP parameter (tcp_keepalive_probes) is used.
- The availability of this setting depends on the operating system. It is
- known to work on Linux.
+ The "enabled" keyword is used to explicitly enable an instance, when the
+ defaults section has been set to "disabled". This is very rarely used.
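+
+ Example (a sketch; the "disabled" inherited from the defaults section is
+ overridden for this one instance):
+ defaults
+ disabled
+ listen active_service
+ enabled
+ bind :8080
+ server s1 192.168.0.1:80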
- See also : "option clitcpka", "clitcpka-idle", "clitcpka-intvl".
+ See also : "disabled"
-clitcpka-idle <timeout>
- Sets the time the connection needs to remain idle before TCP starts sending
- keepalive probes, if enabled the sending of TCP keepalive packets on the
- client side.
+errorfile <code> <file>
+ Return a file's contents instead of errors generated by HAProxy
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | yes | yes | yes
Arguments :
- <timeout> is the time the connection needs to remain idle before TCP starts
- sending keepalive probes. It is specified in seconds by default,
- but can be in any other unit if the number is suffixed by the
- unit, as explained at the top of this document.
-
- This keyword corresponds to the socket option TCP_KEEPIDLE. If this keyword
- is not specified, system-wide TCP parameter (tcp_keepalive_time) is used.
- The availability of this setting depends on the operating system. It is
- known to work on Linux.
-
- See also : "option clitcpka", "clitcpka-cnt", "clitcpka-intvl".
+ <code> is the HTTP status code. Currently, HAProxy is capable of
+ generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410,
+ 413, 414, 425, 429, 431, 500, 501, 502, 503, and 504.
+ <file> designates a file containing the full HTTP response. It is
+ recommended to follow the common practice of appending ".http" to
+ the filename so that people do not confuse the response with HTML
+ error pages, and to use absolute paths, since files are read
+ before any chroot is performed.
-clitcpka-intvl <timeout>
- Sets the time between individual keepalive probes on the client side.
+ It is important to understand that this keyword is not meant to rewrite
+ errors returned by the server, but errors detected and returned by HAProxy.
+ This is why the list of supported errors is limited to a small set.
- May be used in the following contexts: tcp, http
+ Code 200 is emitted in response to requests matching a "monitor-uri" rule.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ The files are parsed when HAProxy starts and must be valid according to the
+ HTTP specification. They should not exceed the configured buffer size
+ (BUFSIZE), which generally is 16 kB, otherwise an internal error will be
+ returned. It is also wise not to put any reference to local contents
+ (e.g. images) in order to avoid loops between the client and HAProxy when all
+ servers are down, causing an error to be returned instead of an
+ image. Finally, the response cannot exceed (tune.bufsize - tune.maxrewrite)
+ so that "http-after-response" rules still have room to operate (see
+ "tune.maxrewrite").
- Arguments :
- <timeout> is the time between individual keepalive probes. It is specified
- in seconds by default, but can be in any other unit if the number
- is suffixed by the unit, as explained at the top of this
- document.
+ The files are read at the same time as the configuration and kept in memory.
+ For this reason, the errors continue to be returned even when the process is
+ chrooted, and no file change is considered while the process is running. A
+ simple method for developing those files consists of associating them with
+ the 403 status code and requesting a blocked URL.
- This keyword corresponds to the socket option TCP_KEEPINTVL. If this keyword
- is not specified, system-wide TCP parameter (tcp_keepalive_intvl) is used.
- The availability of this setting depends on the operating system. It is
- known to work on Linux.
+ See also : "http-error", "errorloc", "errorloc302", "errorloc303"
- See also : "option clitcpka", "clitcpka-cnt", "clitcpka-idle".
+ Example :
+ errorfile 400 /etc/haproxy/errorfiles/400badreq.http
+ errorfile 408 /dev/null # work around Chrome pre-connect bug
+ errorfile 403 /etc/haproxy/errorfiles/403forbid.http
+ errorfile 503 /etc/haproxy/errorfiles/503sorry.http
-compression algo <algorithm> ...
-compression algo-req <algorithm>
-compression algo-res <algorithm>
-compression type <mime type> ...
- Enable HTTP compression.
+errorfiles <name> [<code> ...]
+ Import, fully or partially, the error files defined in the <name> http-errors
+ section.
May be used in the following contexts: http
yes | yes | yes | yes
Arguments :
- algo is followed by the list of supported compression algorithms for
- responses (legacy keyword)
- algo-req is followed by compression algorithm for request (only one is
- provided).
- algo-res is followed by the list of supported compression algorithms for
- responses.
- type is followed by the list of MIME types that will be compressed for
- responses (legacy keyword).
- type-req is followed by the list of MIME types that will be compressed for
- requests.
- type-res is followed by the list of MIME types that will be compressed for
- responses.
+ <name> is the name of an existing http-errors section.
- The currently supported algorithms are :
- identity this is mostly for debugging, and it was useful for developing
- the compression feature. Identity does not apply any change on
- data.
+ <code> is an HTTP status code. Several status codes may be listed.
+ Currently, HAProxy is capable of generating codes 200, 400, 401,
+ 403, 404, 405, 407, 408, 410, 413, 414, 425, 429, 431, 500, 501,
+ 502, 503, and 504.
- gzip applies gzip compression. This setting is only available when
- support for zlib or libslz was built in.
+ Errors defined in the http-errors section with the name <name> are imported
+ in the current proxy. If no status code is specified, all error files of the
+ http-errors section are imported. Otherwise, only error files associated to
+ the listed status codes are imported. Those error files override any custom
+ errors already defined for the proxy, and may themselves be overridden by
+ subsequent ones. Functionally, it is exactly the same as declaring all error
+ files by hand using "errorfile" directives.
- deflate same as "gzip", but with deflate algorithm and zlib format.
- Note that this algorithm has ambiguous support on many
- browsers and no support at all from recent ones. It is
- strongly recommended not to use it for anything else than
- experimentation. This setting is only available when support
- for zlib or libslz was built in.
+ See also : "http-error", "errorfile", "errorloc", "errorloc302" ,
+ "errorloc303" and section 12.4 about http-errors.
- raw-deflate same as "deflate" without the zlib wrapper, and used as an
- alternative when the browser wants "deflate". All major
- browsers understand it and despite violating the standards,
- it is known to work better than "deflate", at least on MSIE
- and some versions of Safari. Do not use it in conjunction
- with "deflate", use either one or the other since both react
- to the same Accept-Encoding token. This setting is only
- available when support for zlib or libslz was built in.
+ Example :
+ errorfiles generic
+ errorfiles site-1 403 404
- Compression will be activated depending on the Accept-Encoding request
- header. With identity, it does not take care of that header.
- If backend servers support HTTP compression, these directives
- will be no-op: HAProxy will see the compressed response and will not
- compress again. If backend servers do not support HTTP compression and
- there is Accept-Encoding header in request, HAProxy will compress the
- matching response.
- Compression is disabled when:
- * the request does not advertise a supported compression algorithm in the
- "Accept-Encoding" header
- * the response message is not HTTP/1.1 or above
- * HTTP status code is not one of 200, 201, 202, or 203
- * response contain neither a "Content-Length" header nor a
- "Transfer-Encoding" whose last value is "chunked"
- * response contains a "Content-Type" header whose first value starts with
- "multipart"
- * the response contains the "no-transform" value in the "Cache-control"
- header
- * User-Agent matches "Mozilla/4" unless it is MSIE 6 with XP SP2, or MSIE 7
- and later
- * The response contains a "Content-Encoding" header, indicating that the
- response is already compressed (see compression offload)
- * The response contains an invalid "ETag" header or multiple ETag headers
- * The payload size is smaller than the minimum size
- (see compression minsize-res)
+errorloc <code> <url>
+errorloc302 <code> <url>
+ Return an HTTP redirection to a URL instead of errors generated by HAProxy
- Note: The compression does not emit the Warning header.
+ May be used in the following contexts: http
- Examples :
- compression algo gzip
- compression type text/html text/plain
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- See also : "compression offload", "compression direction",
- "compression minsize-req" and "compression minsize-res"
+ Arguments :
+ <code> is the HTTP status code. Currently, HAProxy is capable of
+ generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410,
+ 413, 414, 425, 429, 431, 500, 501, 502, 503, and 504.
-compression minsize-req <size>
-compression minsize-res <size>
- Sets the minimum payload size in bytes for compression to be applied.
+ <url> is the exact contents of the "Location" header. It may contain
+ either a relative URI to an error page hosted on the same site,
+ or an absolute URI designating an error page on another site.
+ Special care should be given to relative URIs to avoid redirect
+ loops if the URI itself may generate the same error (e.g. 500).
- May be used in the following contexts: http
+ It is important to understand that this keyword is not meant to rewrite
+ errors returned by the server, but errors detected and returned by HAProxy.
+ This is why the list of supported errors is limited to a small set.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ Code 200 is emitted in response to requests matching a "monitor-uri" rule.
- Payloads smaller than this size will not be compressed, avoiding unnecessary
- CPU overhead for data that would not significantly benefit from compression.
- "minsize-req" applies on requests and "minsize-res" on responses.
- The default value is 0.
+ Note that both keywords return the HTTP 302 status code, which tells the
+ client to fetch the designated URL using the same HTTP method. This can be
+ quite problematic in case of non-GET methods such as POST, because the URL
+ sent to the client might not be allowed for something other than GET. To
+ work around this problem, please use "errorloc303", which sends the HTTP 303
+ status code, indicating to the client that the URL must be fetched with a GET
+ request.
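+ For instance, HAProxy-generated 503 errors can be redirected to a static
+ maintenance page hosted elsewhere (the URL below is only illustrative) :
+
+ Example :
+     errorloc302 503 http://static.example.com/maintenance.html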
-compression offload
- Makes HAProxy work as a compression offloader only.
+ See also : "http-error", "errorfile", "errorloc303"
+
+
+errorloc303 <code> <url>
+ Return an HTTP redirection to a URL instead of errors generated by HAProxy
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | yes
+ yes | yes | yes | yes
- The "offload" setting makes HAProxy remove the Accept-Encoding header to
- prevent backend servers from compressing responses. It is strongly
- recommended not to do this because this means that all the compression work
- will be done on the single point where HAProxy is located. However in some
- deployment scenarios, HAProxy may be installed in front of a buggy gateway
- with broken HTTP compression implementation which can't be turned off.
- In that case HAProxy can be used to prevent that gateway from emitting
- invalid payloads. In this case, simply removing the header in the
- configuration does not work because it applies before the header is parsed,
- so that prevents HAProxy from compressing. The "offload" setting should
- then be used for such scenarios.
+ Arguments :
+ <code> is the HTTP status code. Currently, HAProxy is capable of
+ generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410,
+ 413, 414, 425, 429, 431, 500, 501, 502, 503, and 504.
- If this setting is used in a defaults section, a warning is emitted and the
- option is ignored.
+ <url> is the exact contents of the "Location" header. It may contain
+ either a relative URI to an error page hosted on the same site,
+ or an absolute URI designating an error page on another site.
+ Special care should be given to relative URIs to avoid redirect
+ loops if the URI itself may generate the same error (e.g. 500).
- See also : "compression type", "compression algo", "compression direction"
+ It is important to understand that this keyword is not meant to rewrite
+ errors returned by the server, but errors detected and returned by HAProxy.
+ This is why the list of supported errors is limited to a small set.
-compression direction <direction>
- Makes haproxy able to compress both requests and responses.
- Valid values are "request", to compress only requests, "response", to
- compress only responses, or "both", when you want to compress both.
- The default value is "response".
+ Code 200 is emitted in response to requests matching a "monitor-uri" rule.
- May be used in the following contexts: http
+ Note that this keyword returns the HTTP 303 status code, which tells the
+ client to fetch the designated URL using the HTTP GET method. This
+ solves the usual problems associated with "errorloc" and the 302 code. It is
+ possible that some very old browsers designed before HTTP/1.1 do not support
+ it, but no such problem has been reported till now.
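+ For instance, to ensure that an error on a POST request is followed by a GET
+ on the error page (the URL below is only illustrative) :
+
+ Example :
+     errorloc303 503 http://static.example.com/sorry.html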
- See also : "compression type", "compression algo", "compression offload"
+ See also : "http-error", "errorfile", "errorloc", "errorloc302"
-cookie <name> [ rewrite | insert | prefix ] [ indirect ] [ nocache ]
- [ postonly ] [ preserve ] [ httponly ] [ secure ]
- [ domain <domain> ]* [ maxidle <idle> ] [ maxlife <life> ]
- [ dynamic ] [ attr <value> ]*
- Enable cookie-based persistence in a backend.
- May be used in the following contexts: http
+email-alert from <emailaddr>
+ Declare the from email address to be used in both the envelope and header
+ of email alerts. This is the address that email alerts are sent from.
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ May be used in the following contexts: tcp, http, log
+
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
Arguments :
- <name> is the name of the cookie which will be monitored, modified or
- inserted in order to bring persistence. This cookie is sent to
- the client via a "Set-Cookie" header in the response, and is
- brought back by the client in a "Cookie" header in all requests.
- Special care should be taken to choose a name which does not
- conflict with any likely application cookie. Also, if the same
- backends are subject to be used by the same clients (e.g.
- HTTP/HTTPS), care should be taken to use different cookie names
- between all backends if persistence between them is not desired.
- rewrite This keyword indicates that the cookie will be provided by the
- server and that HAProxy will have to modify its value to set the
- server's identifier in it. This mode is handy when the management
- of complex combinations of "Set-cookie" and "Cache-control"
- headers is left to the application. The application can then
- decide whether or not it is appropriate to emit a persistence
- cookie. Since all responses should be monitored, this mode
- doesn't work in HTTP tunnel mode. Unless the application
- behavior is very complex and/or broken, it is advised not to
- start with this mode for new deployments. This keyword is
- incompatible with "insert" and "prefix".
+ <emailaddr> is the from email address to use when sending email alerts
- insert This keyword indicates that the persistence cookie will have to
- be inserted by HAProxy in server responses if the client did not
+ Sending email alerts is enabled for the proxy only if "email-alert
+ mailers" and "email-alert to" are also set.
- already have a cookie that would have permitted it to access this
- server. When used without the "preserve" option, if the server
- emits a cookie with the same name, it will be removed before
- processing. For this reason, this mode can be used to upgrade
- existing configurations running in the "rewrite" mode. The cookie
- will only be a session cookie and will not be stored on the
- client's disk. By default, unless the "indirect" option is added,
- the server will see the cookies emitted by the client. Due to
- caching effects, it is generally wise to add the "nocache" or
- "postonly" keywords (see below). The "insert" keyword is not
- compatible with "rewrite" and "prefix".
+ See also : "email-alert level", "email-alert mailers",
+ "email-alert myhostname", "email-alert to", section 12.3 about
+ mailers.
- prefix This keyword indicates that instead of relying on a dedicated
- cookie for the persistence, an existing one will be completed.
- This may be needed in some specific environments where the client
- does not support more than one single cookie and the application
- already needs it. In this case, whenever the server sets a cookie
- named <name>, it will be prefixed with the server's identifier
- and a delimiter. The prefix will be removed from all client
- requests so that the server still finds the cookie it emitted.
- Since all requests and responses are subject to being modified,
- this mode doesn't work with tunnel mode. The "prefix" keyword is
- not compatible with "rewrite" and "insert". Note: it is highly
- recommended not to use "indirect" with "prefix", otherwise server
- cookie updates would not be sent to clients.
- indirect When this option is specified, no cookie will be emitted to a
- client which already has a valid one for the server which has
- processed the request. If the server sets such a cookie itself,
- it will be removed, unless the "preserve" option is also set. In
- "insert" mode, this will additionally remove cookies from the
- requests transmitted to the server, making the persistence
- mechanism totally transparent from an application point of view.
- Note: it is highly recommended not to use "indirect" with
- "prefix", otherwise server cookie updates would not be sent to
- clients.
+email-alert level <level>
+ Declare the maximum log level of messages for which email alerts will be
+ sent. This acts as a filter on the sending of email alerts.
- nocache This option is recommended in conjunction with the insert mode
- when there is a cache between the client and HAProxy, as it
- ensures that a cacheable response will be tagged non-cacheable if
- a cookie needs to be inserted. This is important because if all
- persistence cookies are added on a cacheable home page for
- instance, then all customers will then fetch the page from an
- outer cache and will all share the same persistence cookie,
- leading to one server receiving much more traffic than others.
- See also the "insert" and "postonly" options.
+ May be used in the following contexts: tcp, http, log
- postonly This option ensures that cookie insertion will only be performed
- on responses to POST requests. It is an alternative to the
- "nocache" option, because POST responses are not cacheable, so
- this ensures that the persistence cookie will never get cached.
- Since most sites do not need any sort of persistence before the
- first POST which generally is a login request, this is a very
- efficient method to optimize caching without risking to find a
- persistence cookie in the cache.
- See also the "insert" and "nocache" options.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
- preserve This option may only be used with "insert" and/or "indirect". It
- allows the server to emit the persistence cookie itself. In this
- case, if a cookie is found in the response, HAProxy will leave it
- untouched. This is useful in order to end persistence after a
- logout request for instance. For this, the server just has to
- emit a cookie with an invalid value (e.g. empty) or with a date in
- the past. By combining this mechanism with the "disable-on-404"
- check option, it is possible to perform a completely graceful
- shutdown because users will definitely leave the server after
- they logout.
+ Arguments :
- httponly This option tells HAProxy to add an "HttpOnly" cookie attribute
- when a cookie is inserted. This attribute is used so that a
- user agent doesn't share the cookie with non-HTTP components.
- Please check RFC6265 for more information on this attribute.
+ <level> One of the 8 syslog levels:
+ emerg alert crit err warning notice info debug
+ The above syslog levels are ordered from lowest to highest.
- secure This option tells HAProxy to add a "Secure" cookie attribute when
- a cookie is inserted. This attribute is used so that a user agent
- never emits this cookie over non-secure channels, which means
- that a cookie learned with this flag will be presented only over
- SSL/TLS connections. Please check RFC6265 for more information on
- this attribute.
+ By default, the level is alert.
- domain This option allows to specify the domain at which a cookie is
- inserted. It requires exactly one parameter: a valid domain
- name. If the domain begins with a dot, the browser is allowed to
- use it for any host ending with that name. It is also possible to
- specify several domain names by invoking this option multiple
- times. Some browsers might have small limits on the number of
- domains, so be careful when doing that. For the record, sending
- 10 domains to MSIE 6 or Firefox 2 works as expected.
+ Sending email alerts is enabled for the proxy only if "email-alert from",
+ "email-alert mailers" and "email-alert to" are also set.
- maxidle This option allows inserted cookies to be ignored after some idle
- time. It only works with insert-mode cookies. When a cookie is
- sent to the client, the date this cookie was emitted is sent too.
- Upon further presentations of this cookie, if the date is older
- than the delay indicated by the parameter (in seconds), it will
- be ignored. Otherwise, it will be refreshed if needed when the
- response is sent to the client. This is particularly useful to
- prevent users who never close their browsers from remaining for
- too long on the same server (e.g. after a farm size change). When
- this option is set and a cookie has no date, it is always
- accepted, but gets refreshed in the response. This maintains the
- ability for admins to access their sites. Cookies that have a
- date in the future further than 24 hours are ignored. Doing so
- lets admins fix timezone issues without risking kicking users off
- the site.
+ Alerts are sent when :
- maxlife This option allows inserted cookies to be ignored after some life
- time, whether they're in use or not. It only works with insert
- mode cookies. When a cookie is first sent to the client, the date
- this cookie was emitted is sent too. Upon further presentations
- of this cookie, if the date is older than the delay indicated by
- the parameter (in seconds), it will be ignored. If the cookie in
- the request has no date, it is accepted and a date will be set.
- Cookies that have a date in the future further than 24 hours are
- ignored. Doing so lets admins fix timezone issues without risking
- kicking users off the site. Contrary to maxidle, this value is
- not refreshed, only the first visit date counts. Both maxidle and
- maxlife may be used at the time. This is particularly useful to
- prevent users who never close their browsers from remaining for
- too long on the same server (e.g. after a farm size change). This
- is stronger than the maxidle method in that it forces a
- redispatch after some absolute delay.
+ * An un-paused server is marked as down and <level> is alert or lower
+ * A paused server is marked as down and <level> is notice or lower
+ * A server is marked as up or enters the drain state and <level>
+ is notice or lower
+ * "option log-health-checks" is enabled, <level> is info or lower,
+ and a health check status update occurs
- dynamic Activate dynamic cookies. When used, a session cookie is
- dynamically created for each server, based on the IP and port
- of the server, and a secret key, specified in the
- "dynamic-cookie-key" backend directive.
- The cookie will be regenerated each time the IP address change,
- and is only generated for IPv4/IPv6.
+ See also : "email-alert from", "email-alert mailers",
+ "email-alert myhostname", "email-alert to",
+ section 12.3 about mailers.
- attr This option tells HAProxy to add an extra attribute when a
- cookie is inserted. The attribute value can contain any
- characters except control ones or ";". This option may be
- repeated.
- There can be only one persistence cookie per HTTP backend, and it can be
- declared in a defaults section. The value of the cookie will be the value
- indicated after the "cookie" keyword in a "server" statement. If no cookie
- is declared for a given server, the cookie is not set.
+email-alert mailers <mailersect>
+ Declare the mailers to be used when sending email alerts
- Examples :
- cookie JSESSIONID prefix
- cookie SRV insert indirect nocache
- cookie SRV insert postonly indirect
- cookie SRV insert indirect nocache maxidle 30m maxlife 8h
+ May be used in the following contexts: tcp, http, log
- See also : "balance source", "capture cookie", "server" and "ignore-persist".
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
-declare capture [ request | response ] len <length>
- Declares a capture slot.
+ Arguments :
- May be used in the following contexts: tcp, http
+ <mailersect> is the name of the mailers section to send email alerts.
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | no
+ Sending email alerts is enabled for the proxy only if "email-alert from"
+ and "email-alert to" are also set.
- Arguments:
- <length> is the length allowed for the capture.
+ See also : "email-alert from", "email-alert level", "email-alert myhostname",
+ "email-alert to", section 12.3 about mailers.
- This declaration is only available in the frontend or listen section, but the
- reserved slot can be used in the backends. The "request" keyword allocates a
- capture slot for use in the request, and "response" allocates a capture slot
- for use in the response.
- See also: "capture-req", "capture-res" (sample converters),
- "capture.req.hdr", "capture.res.hdr" (sample fetches),
- "http-request capture" and "http-response capture".
+email-alert myhostname <hostname>
+ Declare the hostname to be used when communicating with
+ mailers.
+ May be used in the following contexts: tcp, http, log
-default-server [param*]
- Change default options for a server in a backend
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
- May be used in the following contexts: tcp, http
+ Arguments :
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ <hostname> is the hostname to use when communicating with mailers
- Arguments:
- <param*> is a list of parameters for this server. The "default-server"
- keyword accepts an important number of options and has a complete
- section dedicated to it. Please refer to section 5 for more
- details.
+ By default, the system's hostname is used.
- Example :
- default-server inter 1000 weight 13
+ Sending email alerts is enabled for the proxy only if "email-alert from",
+ "email-alert mailers" and "email-alert to" are also set.
- See also: "server" and section 5 about server options
+ See also : "email-alert from", "email-alert level", "email-alert mailers",
+ "email-alert to", section 12.3 about mailers.
-default_backend <backend>
- Specify the backend to use when no "use_backend" rule has been matched.
+email-alert to <emailaddr>
+ Declare both the recipient address in the envelope and the to address in the
+ header of email alerts. This is the address that email alerts are sent to.
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: tcp, http, log
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
Arguments :
- <backend> is the name of the backend to use.
- When doing content-switching between frontend and backends using the
- "use_backend" keyword, it is often useful to indicate which backend will be
- used when no rule has matched. It generally is the dynamic backend which
- will catch all undetermined requests.
+ <emailaddr> is the to email address to use when sending email alerts
- Example :
+ Sending email alerts is enabled for the proxy only if "email-alert mailers"
+ and "email-alert from" are also set.
- use_backend dynamic if url_dyn
- use_backend static if url_css url_img extension_img
- default_backend dynamic
+ See also : "email-alert from", "email-alert level", "email-alert mailers",
+ "email-alert myhostname", section 12.3 about mailers.
- See also : "use_backend"
+error-log-format <fmt>
+ Specifies the log format string to use in case of connection error on the
+ frontend side.
-description <string>
- Describe a listen, frontend or backend.
+ May be used in the following contexts: tcp, http
- May be used in the following contexts: tcp, http, log
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | no
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | yes
+ This directive specifies the log format string that will be used for logs
+ containing information related to errors, timeouts, retries, redispatches or
+ HTTP status code 5xx. This format will in short be used for every log line
+ that would be concerned by the "log-separate-errors" option, including
+ connection errors described in section 8.2.5.
- Arguments : string
+ If the directive is used in a defaults section, all subsequent frontends will
+ use the same log format. Please see section 8.2.6 which covers the custom log
+ format string in depth.
- Allows to add a sentence to describe the related object in the HAProxy HTML
- stats page. The description will be printed on the right of the object name
- it describes.
- No need to backslash spaces in the <string> arguments.
+ An "error-log-format" directive overrides any previous "error-log-format"
+ directives.
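+ For instance, a short error-oriented format reporting the client address and
+ the transport-level error (the format string below is only an illustration
+ built from standard log-format variables and the "fc_err_str" sample fetch) :
+
+ Example :
+     error-log-format "%ci:%cp [%tr] %ft error=%[fc_err_str]"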
-disabled
- Disable a proxy, frontend or backend.
+force-persist { if | unless } <condition>
+ Declare a condition to force persistence on down servers
- May be used in the following contexts: tcp, http, log
+ May be used in the following contexts: tcp, http
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
- Arguments : none
+ By default, requests are not dispatched to down servers. It is possible to
+ force this using "option persist", but it is unconditional and redispatches
+ to a valid server if "option redispatch" is set. That leaves very few
+ possibilities to force some requests to reach a server which is artificially
+ marked down for maintenance operations.
- The "disabled" keyword is used to disable an instance, mainly in order to
- liberate a listening port or to temporarily disable a service. The instance
- will still be created and its configuration will be checked, but it will be
- created in the "stopped" state and will appear as such in the statistics. It
- will not receive any traffic nor will it send any health-checks or logs. It
- is possible to disable many instances at once by adding the "disabled"
- keyword in a "defaults" section.
+ The "force-persist" statement allows one to declare various ACL-based
+ conditions which, when met, will cause a request to ignore the down status of
+ a server and still try to connect to it. That makes it possible to start a
+ server, still replying an error to the health checks, and run a specially
+ configured browser to test the service. Among the handy methods, one could
+ use a specific source IP address, or a specific cookie. The cookie also has
+ the advantage that it can easily be added/removed on the browser from a test
+ page. Once the service is validated, it is then possible to open the service
+ to the world by returning a valid response to health checks.
- See also : "enabled"
+ The forced persistence is enabled when an "if" condition is met, or unless an
+ "unless" condition is met. The final redispatch is always disabled when this
+ is used.
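+ For instance, testers may be allowed to reach a server in maintenance using
+ a dedicated cookie (the ACL name and cookie below are only illustrative) :
+
+ Example :
+     acl is-tester req.cook(testing) -m str enabled
+     force-persist if is-tester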
+ See also : "option redispatch", "ignore-persist", "persist",
+ and section 7 about ACL usage.
-dispatch <address>:<port>
- Set a default server address
+
+filter <name> [param*]
+ Add the filter <name> in the filter list attached to the proxy.
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- no | no | yes | yes
+ no | yes | yes | yes
Arguments :
+ <name> is the name of the filter. Officially supported filters are
+ referenced in section 9.
- <address> is the IPv4 address of the default server. Alternatively, a
- resolvable hostname is supported, but this name will be resolved
- during start-up.
+ <param*> is a list of parameters accepted by the filter <name>. The
+ parsing of these parameters is the responsibility of the
+ filter. Please refer to the documentation of the corresponding
+ filter (section 9) for all details on the supported parameters.
- <ports> is a mandatory port specification. All connections will be sent
- to this port, and it is not permitted to use port offsets as is
- possible with normal servers.
+ Multiple occurrences of the filter line can be used for the same proxy. The
+ same filter can be referenced many times if needed.
- The "dispatch" keyword designates a default server for use when no other
- server can take the connection. In the past it was used to forward non
- persistent connections to an auxiliary load balancer. Due to its simple
- syntax, it has also been used for simple TCP relays. It is recommended not to
- use it for more clarity, and to use the "server" directive instead.
+ Example:
+ listen test
+ bind *:80
- See also : "server"
+ filter trace name BEFORE-HTTP-COMP
+ filter compression
+ filter trace name AFTER-HTTP-COMP
+ compression algo gzip
+ compression offload
-dynamic-cookie-key <string>
- Set the dynamic cookie secret key for a backend.
+ server srv1 192.168.0.1:80
- May be used in the following contexts: http
+ See also : section 9.
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
- Arguments : The secret key to be used.
+fullconn <conns>
+ Specify at what backend load the servers will reach their maxconn
- When dynamic cookies are enabled (see the "dynamic" directive for cookie),
- a dynamic cookie is created for each server (unless one is explicitly
- specified on the "server" line), using a hash of the IP address of the
- server, the TCP port, and the secret key.
- That way, we can ensure session persistence across multiple load-balancers,
- even if servers are dynamically added or removed.
+ May be used in the following contexts: tcp, http
-enabled
- Enable a proxy, frontend or backend.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- May be used in the following contexts: tcp, http, log
+ Arguments :
+ <conns> is the number of connections on the backend which will make the
+ servers use the maximal number of connections.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ When a server has a "maxconn" parameter specified, it means that its number
+ of concurrent connections will never go higher. Additionally, if it has a
+ "minconn" parameter, it indicates a dynamic limit following the backend's
+ load. The server will then always accept at least <minconn> connections,
+ never more than <maxconn>, and the limit will be on the ramp between both
+ values when the backend has less than <conns> concurrent connections. This
+ makes it possible to limit the load on the servers during normal loads, but
+ push it further for important loads without overloading the servers during
+ exceptional loads.
- Arguments : none
+ Since it's hard to get this value right, HAProxy automatically sets it to
+ 10% of the sum of the maxconns of all frontends that may branch to this
+ backend (based on "use_backend" and "default_backend" rules). That way it's
+ safe to leave it unset. However, "use_backend" involving dynamic names are
+ not counted since there is no way to know if they could match or not.
- The "enabled" keyword is used to explicitly enable an instance, when the
- defaults has been set to "disabled". This is very rarely used.
+ Example :
+ # The servers will accept between 100 and 1000 concurrent connections each
+ # and the maximum of 1000 will be reached when the backend reaches 10000
+ # connections.
+ backend dynamic
+ fullconn 10000
+ server srv1 dyn1:80 minconn 100 maxconn 1000
+ server srv2 dyn2:80 minconn 100 maxconn 1000
- See also : "disabled"
+ See also : "maxconn", "server"
-errorfile <code> <file>
- Return a file contents instead of errors generated by HAProxy
+guid <string>
+ Specify a case-sensitive global unique ID for this proxy.
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ no | yes | yes | yes
- Arguments :
- <code> is the HTTP status code. Currently, HAProxy is capable of
- generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410,
- 413, 414, 425, 429, 431, 500, 501, 502, 503, and 504.
+ <string> must be unique across the whole haproxy configuration, for every
+ object type. The format is left unspecified so that users may choose
+ their own naming policy. The only restriction is its length, which
+ cannot exceed 127 characters. All alphanumerical characters plus '.',
+ ':', '-' and '_' are valid.
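+ As a short illustration, a possible declaration could look like the
+ following (the naming scheme "be:app:1" is arbitrary; any unique string
+ within the constraints above works):
+
+ Example:
+ backend app
+ guid be:app:1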
- <file> designates a file containing the full HTTP response. It is
- recommended to follow the common practice of appending ".http" to
- the filename so that people do not confuse the response with HTML
- error pages, and to use absolute paths, since files are read
- before any chroot is performed.
- It is important to understand that this keyword is not meant to rewrite
- errors returned by the server, but errors detected and returned by HAProxy.
- This is why the list of supported errors is limited to a small set.
+hash-balance-factor <factor>
+ Specify the balancing factor for bounded-load consistent hashing
- Code 200 is emitted in response to requests matching a "monitor-uri" rule.
+ May be used in the following contexts: tcp, http
- The files are parsed when HAProxy starts and must be valid according to the
- HTTP specification. They should not exceed the configured buffer size
- (BUFSIZE), which generally is 16 kB, otherwise an internal error will be
- returned. It is also wise not to put any reference to local contents
- (e.g. images) in order to avoid loops between the client and HAProxy when all
- servers are down, causing an error to be returned instead of an
- image. Finally, The response cannot exceed (tune.bufsize - tune.maxrewrite)
- so that "http-after-response" rules still have room to operate (see
- "tune.maxrewrite").
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | no | yes
- The files are read at the same time as the configuration and kept in memory.
- For this reason, the errors continue to be returned even when the process is
- chrooted, and no file change is considered while the process is running. A
- simple method for developing those files consists in associating them to the
- 403 status code and interrogating a blocked URL.
+ Arguments :
+ <factor> is the control for the maximum number of concurrent requests to
+ send to a server, expressed as a percentage of the average number
+ of concurrent requests across all of the active servers.
- See also : "http-error", "errorloc", "errorloc302", "errorloc303"
+ Specifying a "hash-balance-factor" for a server with "hash-type consistent"
+ enables an algorithm that prevents any one server from getting too many
+ requests at once, even if some hash buckets receive many more requests than
+ others. Setting <factor> to 0 (the default) disables the feature. Otherwise,
+ <factor> is a percentage greater than 100. For example, if <factor> is 150,
+ then no server will be allowed to have a load more than 1.5 times the average.
+ If server weights are used, they will be respected.
- Example :
- errorfile 400 /etc/haproxy/errorfiles/400badreq.http
- errorfile 408 /dev/null # work around Chrome pre-connect bug
- errorfile 403 /etc/haproxy/errorfiles/403forbid.http
- errorfile 503 /etc/haproxy/errorfiles/503sorry.http
+ If the first-choice server is disqualified, the algorithm will choose another
+ server based on the request hash, until a server with additional capacity is
+ found. A higher <factor> allows more imbalance between the servers, while a
+ lower <factor> means that more servers will be checked on average, affecting
+ performance. Reasonable values are from 125 to 200.
+ This setting is also used by "balance random" which internally relies on the
+ consistent hashing mechanism.
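+ As an illustration, a possible configuration could look like the following
+ (addresses are arbitrary; a factor of 150 keeps each server within 1.5
+ times the average load):
+
+ Example:
+ backend cache
+ balance uri
+ hash-type consistent
+ hash-balance-factor 150
+ server c1 192.168.0.10:80
+ server c2 192.168.0.11:80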
-errorfiles <name> [<code> ...]
- Import, fully or partially, the error files defined in the <name> http-errors
- section.
+ See also : "balance" and "hash-type".
- May be used in the following contexts: http
+hash-preserve-affinity { always | maxconn | maxqueue }
+ Specify a method for assigning streams to servers with hash load balancing
+ when servers are saturated or have a full queue.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ May be used in the following contexts: http
- Arguments :
- <name> is the name of an existing http-errors section.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- <code> is a HTTP status code. Several status code may be listed.
- Currently, HAProxy is capable of generating codes 200, 400, 401,
- 403, 404, 405, 407, 408, 410, 413, 414, 425, 429, 431, 500, 501,
- 502, 503, and 504.
+ The following values can be specified:
- Errors defined in the http-errors section with the name <name> are imported
- in the current proxy. If no status code is specified, all error files of the
- http-errors section are imported. Otherwise, only error files associated to
- the listed status code are imported. Those error files override the already
- defined custom errors for the proxy. And they may be overridden by following
- ones. Functionally, it is exactly the same as declaring all error files by
- hand using "errorfile" directives.
+ - "always" : this is the default strategy. A stream is assigned to a
+ server based on hashing irrespective of whether the server
+ is currently saturated.
- See also : "http-error", "errorfile", "errorloc", "errorloc302" ,
- "errorloc303" and section 3.7 about http-errors.
+ - "maxconn" : when selected, servers that have "maxconn" set and are
+ currently saturated will be skipped. Another server will be
+ picked by following the hashing ring. This has no effect on
+ servers that do not set "maxconn". If all servers are
+ saturated, the request is enqueued to the last server in the
+ hash ring before the initially selected server.
- Example :
- errorfiles generic
- errorfiles site-1 403 404
+ - "maxqueue" : when selected, servers that have "maxconn" set, "maxqueue"
+ set to a non-zero value (limited queue size) and currently
+ have a full queue will be skipped. Another server will be
+ picked by following the hashing ring. This has no effect on
+ servers that do not set both "maxconn" and "maxqueue".
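+ A hypothetical setup skipping saturated servers could be sketched as
+ follows (names and limits are arbitrary):
+
+ Example:
+ backend app
+ balance hdr(X-User)
+ hash-type consistent
+ hash-preserve-affinity maxconn
+ server s1 10.0.0.1:80 maxconn 100
+ server s2 10.0.0.2:80 maxconn 100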
+ See also : "maxconn", "maxqueue", "hash-balance-factor"
-errorloc <code> <url>
-errorloc302 <code> <url>
- Return an HTTP redirection to a URL instead of errors generated by HAProxy
+hash-type <method> <function> <modifier>
+ Specify a method to use for mapping hashes to servers
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ yes | no | yes | yes
Arguments :
- <code> is the HTTP status code. Currently, HAProxy is capable of
- generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410,
- 413, 414, 425, 429, 431, 500, 501, 502, 503, and 504.
+ <method> is the method used to select a server from the hash computed by
+ the <function> :
- <url> it is the exact contents of the "Location" header. It may contain
- either a relative URI to an error page hosted on the same site,
- or an absolute URI designating an error page on another site.
- Special care should be given to relative URIs to avoid redirect
- loops if the URI itself may generate the same error (e.g. 500).
+ map-based the hash table is a static array containing all alive servers.
+ The hashes will be very smooth, will consider weights, but
+ will be static in that weight changes while a server is up
+ will be ignored. This means that there will be no slow start.
+ Also, since a server is selected by its position in the array,
+ most mappings are changed when the server count changes. This
+ means that when a server goes up or down, or when a server is
+ added to a farm, most connections will be redistributed to
+ different servers. This can be inconvenient with caches for
+ instance.
- It is important to understand that this keyword is not meant to rewrite
- errors returned by the server, but errors detected and returned by HAProxy.
- This is why the list of supported errors is limited to a small set.
+ consistent the hash table is a tree filled with many occurrences of each
+ server. The hash key is looked up in the tree and the closest
+ server is chosen. This hash is dynamic, it supports changing
+ weights while the servers are up, so it is compatible with the
+ slow start feature. It has the advantage that when a server
+ goes up or down, only its associations are moved. When a
+ server is added to the farm, only a small part of the mappings
+ is redistributed, making it an ideal method for caches.
+ However, due to its principle, the distribution will never be
+ very smooth and it may sometimes be necessary to adjust a
+ server's weight or its ID to get a more balanced distribution.
+ In order to get the same distribution on multiple load
+ balancers, it is important that all servers have the exact
+ same IDs. Note: consistent hash uses sdbm and avalanche if no
+ hash function is specified.
- Code 200 is emitted in response to requests matching a "monitor-uri" rule.
+ <function> is the hash function to be used :
- Note that both keyword return the HTTP 302 status code, which tells the
- client to fetch the designated URL using the same HTTP method. This can be
- quite problematic in case of non-GET methods such as POST, because the URL
- sent to the client might not be allowed for something other than GET. To
- work around this problem, please use "errorloc303" which send the HTTP 303
- status code, indicating to the client that the URL must be fetched with a GET
- request.
+ sdbm this function was created initially for sdbm (a public-domain
+ reimplementation of ndbm) database library. It was found to do
+ well in scrambling bits, causing better distribution of the keys
+ and fewer splits. It also happens to be a good general hashing
+ function with good distribution, unless the total server weight
+ is a multiple of 64, in which case applying the avalanche
+ modifier may help.
- See also : "http-error", "errorfile", "errorloc303"
+ djb2 this function was first proposed by Dan Bernstein many years ago
+ on comp.lang.c. Studies have shown that for certain workloads this
+ function provides a better distribution than sdbm. It generally
+ works well with text-based inputs though it can perform extremely
+ poorly with numeric-only input or when the total server weight is
+ a multiple of 33, unless the avalanche modifier is also used.
+ wt6 this function was designed for HAProxy while testing other
+ functions in the past. It is not as smooth as the other ones, but
+ is much less sensitive to the input data set or to the number of
+ servers. It can make sense as an alternative to sdbm+avalanche or
+ djb2+avalanche for consistent hashing or when hashing on numeric
+ data such as a source IP address or a visitor identifier in a URL
+ parameter.
-errorloc303 <code> <url>
- Return an HTTP redirection to a URL instead of errors generated by HAProxy
-
- May be used in the following contexts: http
+ crc32 this is the most common CRC32 implementation as used in Ethernet,
+ gzip, PNG, etc. It is slower than the other ones but may provide
+ a better distribution or less predictable results especially when
+ used on strings.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ none don't hash the key; the key itself will be used as the hash. This
+ can be useful to hash the key manually with a converter beforehand
+ and let haproxy use the result directly.
- Arguments :
- <code> is the HTTP status code. Currently, HAProxy is capable of
- generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410,
- 413, 414, 425, 429, 431, 500, 501, 502, 503, and 504.
+ <modifier> indicates an optional method applied after hashing the key :
- <url> it is the exact contents of the "Location" header. It may contain
- either a relative URI to an error page hosted on the same site,
- or an absolute URI designating an error page on another site.
- Special care should be given to relative URIs to avoid redirect
- loops if the URI itself may generate the same error (e.g. 500).
+ avalanche This directive indicates that the result from the hash
+ function above should not be used in its raw form but that
+ a 4-byte full avalanche hash must be applied first. The
+ purpose of this step is to mix the resulting bits from the
+ previous hash in order to avoid any undesired effect when
+ the input contains some limited values or when the number of
+ servers is a multiple of one of the hash's components (64
+ for SDBM, 33 for DJB2). Enabling avalanche tends to make the
+ result less predictable, but it's also not as smooth as when
+ using the original function. Some testing might be needed
+ with some workloads. This hash is one of the many proposed
+ by Bob Jenkins.
- It is important to understand that this keyword is not meant to rewrite
- errors returned by the server, but errors detected and returned by HAProxy.
- This is why the list of supported errors is limited to a small set.
+ The default hash type is "map-based" and is recommended for most usages. The
+ default function is "sdbm", the selection of a function should be based on
+ the range of the values being hashed.
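+ For illustration, the following sketch selects consistent hashing with the
+ sdbm function and the avalanche modifier (the same pair consistent hashing
+ defaults to when no function is given):
+
+ Example:
+ backend cache
+ balance url_param user
+ hash-type consistent sdbm avalanche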
- Code 200 is emitted in response to requests matching a "monitor-uri" rule.
+ See also : "balance", "hash-balance-factor", "hash-preserve-affinity",
+ "server"
- Note that both keyword return the HTTP 303 status code, which tells the
- client to fetch the designated URL using the same HTTP GET method. This
- solves the usual problems associated with "errorloc" and the 302 code. It is
- possible that some very old browsers designed before HTTP/1.1 do not support
- it, but no such problem has been reported till now.
+http-after-response <action> <options...> [ { if | unless } <condition> ]
+ Access control for all Layer 7 responses (server, applet/service and internal
+ ones).
- See also : "http-error", "errorfile", "errorloc", "errorloc302"
+ May be used in the following contexts: http
+ May be used in sections: defaults | frontend | listen | backend
+ yes(!) | yes | yes | yes
-email-alert from <emailaddr>
- Declare the from email address to be used in both the envelope and header
- of email alerts. This is the address that email alerts are sent from.
+ The http-after-response statement defines a set of rules which apply to layer
+ 7 processing. The rules are evaluated in their declaration order when they
+ are met in a frontend, listen or backend section. Since these rules apply on
+ responses, the backend rules are applied first, followed by the frontend's
+ rules. Any rule may optionally be followed by an ACL-based condition, in
+ which case it will only be evaluated if the condition evaluates true.
- May be used in the following contexts: tcp, http, log
+ Unlike http-response rules, these rules are applied to all responses, those
+ coming from servers as well as those generated by HAProxy. They are
+ evaluated at the end of the response analysis, just before the data
+ forwarding phase.
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | yes
+ The condition is evaluated just before the action is executed, and the action
+ is performed exactly once. As such, there is no problem if an action changes
+ an element which is checked as part of the condition. This also means that
+ multiple actions may rely on the same condition so that the first action that
+ changes the condition's evaluation is sufficient to implicitly disable the
+ remaining actions. This is used for example when trying to assign a value to
+ a variable from various sources when it's empty. There is no limit to the
+ number of "http-after-response" statements per instance.
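+ The variable-assignment pattern mentioned above could be sketched as
+ follows (the variable and header names are hypothetical):
+
+ Example:
+ http-after-response set-var(txn.cache) res.hdr(X-Cache) if !{ var(txn.cache) -m found }
+ http-after-response set-var(txn.cache) str(none) if !{ var(txn.cache) -m found }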
- Arguments :
+ The first keyword after "http-after-response" in the syntax is the rule's
+ action, optionally followed by a varying number of arguments for the action.
+ The supported actions and their respective syntaxes are enumerated in section
+ 4.3 "Actions" (look for actions which tick "HTTP Aft").
- <emailaddr> is the from email address to use when sending email alerts
+ This directive is only available from named defaults sections, not anonymous
+ ones. Rules defined in the defaults section are evaluated before ones in the
+ associated proxy section. To avoid ambiguities, in this case the same
+ defaults section cannot be used by proxies with the frontend capability and
+ by proxies with the backend capability. It means a listen section cannot use
+ a defaults section defining such rules.
- Also requires "email-alert mailers" and "email-alert to" to be set
- and if so sending email alerts is enabled for the proxy.
+ Note: Errors emitted in early stage of the request parsing are handled by the
+ multiplexer at a lower level, before any http analysis. Thus no
+ http-after-response ruleset is evaluated on these errors.
- See also : "email-alert level", "email-alert mailers",
- "email-alert myhostname", "email-alert to", section 3.5 about
- mailers.
+ Example:
+ http-after-response set-header Strict-Transport-Security "max-age=31536000"
+ http-after-response set-header Cache-Control "no-store,no-cache,private"
+ http-after-response set-header Pragma "no-cache"
-email-alert level <level>
- Declare the maximum log level of messages for which email alerts will be
- sent. This acts as a filter on the sending of email alerts.
+http-check comment <string>
+ Defines a comment for the following http-check rule, reported in logs if
+ it fails.
- May be used in the following contexts: tcp, http, log
+ May be used in the following contexts: tcp, http
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | yes
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
Arguments :
+ <string> is the comment message to add in logs if the following http-check
+ rule fails.
- <level> One of the 8 syslog levels:
- emerg alert crit err warning notice info debug
- The above syslog levels are ordered from lowest to highest.
-
- By default level is alert
+ It only works for connect, send and expect rules. It is useful for
+ user-friendly error reporting.
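+ A minimal sketch (the comment string is arbitrary; it annotates the expect
+ rule that follows it):
+
+ Example:
+ option httpchk
+ http-check connect
+ http-check send meth GET uri /
+ http-check comment "expect a 2xx/3xx on the root URI"
+ http-check expect status 200-399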
- Also requires "email-alert from", "email-alert mailers" and
- "email-alert to" to be set and if so sending email alerts is enabled
- for the proxy.
+ See also : "option httpchk", "http-check connect", "http-check send" and
+ "http-check expect".
- Alerts are sent when :
- * An un-paused server is marked as down and <level> is alert or lower
- * A paused server is marked as down and <level> is notice or lower
- * A server is marked as up or enters the drain state and <level>
- is notice or lower
- * "option log-health-checks" is enabled, <level> is info or lower,
- and a health check status update occurs
+http-check connect [default] [port <expr>] [addr <ip>] [send-proxy]
+ [via-socks4] [ssl] [sni <sni>] [alpn <alpn>] [linger]
+ [proto <name>] [comment <msg>]
+ Opens a new connection to perform an HTTP health check
- See also : "email-alert from", "email-alert mailers",
- "email-alert myhostname", "email-alert to",
- section 3.5 about mailers.
+ May be used in the following contexts: tcp, http
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
-email-alert mailers <mailersect>
- Declare the mailers to be used when sending email alerts
+ Arguments :
+ comment <msg> defines a message to report if the rule evaluation fails.
- May be used in the following contexts: tcp, http, log
+ default Use default options of the server line to do the health
+ checks. The server options are used only if not redefined.
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | yes
+ port <expr> if not set, the check port or the server port is used.
+ It tells HAProxy where to open the connection to.
+ <port> must be a valid TCP port (an integer from 1 to
+ 65535) or a sample-fetch expression.
- Arguments :
+ addr <ip> defines the IP address to do the health check.
- <mailersect> is the name of the mailers section to send email alerts.
+ send-proxy send a PROXY protocol string
- Also requires "email-alert from" and "email-alert to" to be set
- and if so sending email alerts is enabled for the proxy.
+ via-socks4 enables outgoing health checks using upstream socks4 proxy.
- See also : "email-alert from", "email-alert level", "email-alert myhostname",
- "email-alert to", section 3.5 about mailers.
+ ssl opens a ciphered connection
+ sni <sni> specifies the SNI to use to do health checks over SSL.
-email-alert myhostname <hostname>
- Declare the to hostname address to be used when communicating with
- mailers.
+ alpn <alpn> defines which protocols to advertise with ALPN. The protocol
+ list consists of a comma-delimited list of protocol names,
+ for instance: "h2,http/1.1". If it is not set, the server ALPN
+ is used.
- May be used in the following contexts: tcp, http, log
+ proto <name> forces the multiplexer's protocol to use for this connection.
+ It must be an HTTP mux protocol and it must be usable on the
+ backend side. The list of available protocols is reported in
+ haproxy -vv.
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | yes
+ linger cleanly close the connection instead of using a single RST.
- Arguments :
+ Just like tcp-check health checks, it is possible to configure the connection
+ to use to perform HTTP health check. This directive should also be used to
+ describe a scenario involving several request/response exchanges, possibly on
+ different ports or with different servers.
- <hostname> is the hostname to use when communicating with mailers
+ When no TCP port is configured on the server line, nor through the server
+ "port" directive, the first step of the http-check sequence must specify
+ the port with an "http-check connect" rule.
- By default the systems hostname is used.
+ In an http-check ruleset, a 'connect' rule is required and must come first.
+ The purpose is to ensure administrators know exactly what they are doing.
- Also requires "email-alert from", "email-alert mailers" and
- "email-alert to" to be set and if so sending email alerts is enabled
- for the proxy.
+ Even though a connect must start the ruleset, it may still be preceded by
+ set-var, unset-var or comment rules.
- See also : "email-alert from", "email-alert level", "email-alert mailers",
- "email-alert to", section 3.5 about mailers.
+ Examples :
+ # check HTTP and HTTPs services on a server.
+ # first open port 80 thanks to server line port directive, then
+ # tcp-check opens port 443, ciphered and run a request on it:
+ option httpchk
+ http-check connect
+ http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
+ http-check expect status 200-399
+ http-check connect port 443 ssl sni haproxy.1wt.eu
+ http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
+ http-check expect status 200-399
-email-alert to <emailaddr>
- Declare both the recipient address in the envelope and to address in the
- header of email alerts. This is the address that email alerts are sent to.
+ server www 10.0.0.1 check port 80
- May be used in the following contexts: tcp, http, log
+ See also : "option httpchk", "http-check send", "http-check expect"
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | yes
- Arguments :
+http-check disable-on-404
+ Enable a maintenance mode upon HTTP/404 response to health-checks
- <emailaddr> is the to email address to use when sending email alerts
+ May be used in the following contexts: tcp, http
- Also requires "email-alert mailers" and "email-alert to" to be set
- and if so sending email alerts is enabled for the proxy.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- See also : "email-alert from", "email-alert level", "email-alert mailers",
- "email-alert myhostname", section 3.5 about mailers.
+ Arguments : none
+ When this option is set, a server which returns an HTTP code 404 will be
+ excluded from further load-balancing, but will still receive persistent
+ connections. This provides a very convenient method for Web administrators
+ to perform a graceful shutdown of their servers. It is also important to note
+ that a server which is detected as failed while it was in this mode will not
+ generate an alert, just a notice. If the server responds 2xx or 3xx again, it
+ will immediately be reinserted into the farm. The status on the stats page
+ reports "NOLB" for a server in this mode. It is important to note that this
+ option only works in conjunction with the "httpchk" option. If this option
+ is used with "http-check expect", then it has precedence over it so that 404
+ responses will still be considered as soft-stop. Note also that a stopped
+ server will stay stopped even if it replies 404s. This option is only
+ evaluated for running servers.
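+ An illustrative use (the URI and address are arbitrary):
+
+ Example:
+ backend app
+ option httpchk GET /health
+ http-check disable-on-404
+ server s1 10.0.0.1:80 check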
-error-log-format <fmt>
- Specifies the log format string to use in case of connection error on the frontend side.
+ See also : "option httpchk" and "http-check expect".
- May be used in the following contexts: tcp, http
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | no
+http-check expect [min-recv <int>] [comment <msg>]
+ [ok-status <st>] [error-status <st>] [tout-status <st>]
+ [on-success <fmt>] [on-error <fmt>] [status-code <expr>]
+ [!] <match> <pattern>
+ Make HTTP health checks consider response contents or specific status codes
- This directive specifies the log format string that will be used for logs
- containing information related to errors, timeouts, retries redispatches or
- HTTP status code 5xx. This format will in short be used for every log line
- that would be concerned by the "log-separate-errors" option, including
- connection errors described in section 8.2.5.
+ May be used in the following contexts: tcp, http
- If the directive is used in a defaults section, all subsequent frontends will
- use the same log format. Please see section 8.2.6 which covers the custom log
- format string in depth.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- "error-log-format" directive overrides previous "error-log-format"
- directives.
+ Arguments :
+ comment <msg> defines a message to report if the rule evaluation fails.
+ min-recv is optional and can define the minimum amount of data required to
+ evaluate the current expect rule. If the number of received bytes
+ is under this limit, the check will wait for more data. This
+ option can be used to resolve some ambiguous matching rules or to
+ avoid executing costly regex matches on content known to be still
+ incomplete. If an exact string is used, the minimum between the
+ string length and this parameter is used. This parameter is
+ ignored if it is set to -1. If the expect rule does not match,
+ the check will wait for more data. If set to 0, the evaluation
+ result is always conclusive.
-force-persist { if | unless } <condition>
- Declare a condition to force persistence on down servers
+ ok-status <st> is optional and can be used to set the check status if
+ the expect rule is successfully evaluated and if it is
+ the last rule in the tcp-check ruleset. "L7OK", "L7OKC",
+ "L6OK" and "L4OK" are supported :
+ - L7OK : check passed on layer 7
+ - L7OKC : check conditionally passed on layer 7, set
+ server to NOLB state.
+ - L6OK : check passed on layer 6
+ - L4OK : check passed on layer 4
+ By default "L7OK" is used.
- May be used in the following contexts: tcp, http
+ error-status <st> is optional and can be used to set the check status if
+ an error occurred during the expect rule evaluation.
+ "L7OKC", "L7RSP", "L7STS", "L6RSP" and "L4CON" are
+ supported :
+ - L7OKC : check conditionally passed on layer 7, set
+ server to NOLB state.
+ - L7RSP : layer 7 invalid response - protocol error
+ - L7STS : layer 7 response error, for example HTTP 5xx
+ - L6RSP : layer 6 invalid response - protocol error
+ - L4CON : layer 1-4 connection problem
+ By default "L7RSP" is used.
- May be used in sections: defaults | frontend | listen | backend
- no | no | yes | yes
+ tout-status <st> is optional and can be used to set the check status if
+ a timeout occurred during the expect rule evaluation.
+ "L7TOUT", "L6TOUT", and "L4TOUT" are supported :
+ - L7TOUT : layer 7 (HTTP/SMTP) timeout
+ - L6TOUT : layer 6 (SSL) timeout
+ - L4TOUT : layer 1-4 timeout
+ By default "L7TOUT" is used.
- By default, requests are not dispatched to down servers. It is possible to
- force this using "option persist", but it is unconditional and redispatches
- to a valid server if "option redispatch" is set. That leaves with very little
- possibilities to force some requests to reach a server which is artificially
- marked down for maintenance operations.
+ on-success <fmt> is optional and can be used to customize the
+ informational message reported in logs if the expect
+ rule is successfully evaluated and if it is the last rule
+ in the tcp-check ruleset. <fmt> is a Custom log format
+ string (see section 8.2.6).
- The "force-persist" statement allows one to declare various ACL-based
- conditions which, when met, will cause a request to ignore the down status of
- a server and still try to connect to it. That makes it possible to start a
- server, still replying an error to the health checks, and run a specially
- configured browser to test the service. Among the handy methods, one could
- use a specific source IP address, or a specific cookie. The cookie also has
- the advantage that it can easily be added/removed on the browser from a test
- page. Once the service is validated, it is then possible to open the service
- to the world by returning a valid response to health checks.
+ on-error <fmt> is optional and can be used to customize the
+ informational message reported in logs if an error
+ occurred during the expect rule evaluation. <fmt> is a
+ Custom log format string (see section 8.2.6).
- The forced persistence is enabled when an "if" condition is met, or unless an
- "unless" condition is met. The final redispatch is always disabled when this
- is used.
+ <match> is a keyword indicating how to look for a specific pattern in the
+ response. The keyword may be one of "status", "rstatus", "hdr",
+ "fhdr", "string", or "rstring". The keyword may be preceded by an
+ exclamation mark ("!") to negate the match. Spaces are allowed
+ between the exclamation mark and the keyword. See below for more
+ details on the supported keywords.
- See also : "option redispatch", "ignore-persist", "persist",
- and section 7 about ACL usage.
+ <pattern> is the pattern to look for. It may be a string, a regular
+ expression or a more complex pattern with several arguments. If
+ the string pattern contains spaces, they must be escaped with the
+ usual backslash ('\').
+ By default, "option httpchk" considers that response statuses 2xx and 3xx
+ are valid, and that others are invalid. When "http-check expect" is used,
+ it defines what is considered valid or invalid. Only one "http-check"
+ statement is supported in a backend. If a server fails to respond or times
+ out, the check obviously fails. The available matches are :
-filter <name> [param*]
- Add the filter <name> in the filter list attached to the proxy.
+     status <codes> : test the status codes found by parsing the <codes>
+                      string. It must be a comma-separated list of status
+                      codes or code ranges. A health check response will be
+                      considered valid if the response's status code matches
+                      any status code or falls inside any range of the list.
+                      If the "status" keyword is prefixed with "!", then the
+                      response will be considered invalid if the status code
+                      matches.
- May be used in the following contexts: tcp, http
+ rstatus <regex> : test a regular expression for the HTTP status code.
+ A health check response will be considered valid if the
+ response's status code matches the expression. If the
+ "rstatus" keyword is prefixed with "!", then the response
+ will be considered invalid if the status code matches.
+ This is mostly used to check for multiple codes.
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | yes
+ hdr { name | name-lf } [ -m <meth> ] <name>
+         [ { value | value-lf } [ -m <meth> ] <value> ] :
+ test the specified header pattern on the HTTP response
+ headers. The name pattern is mandatory but the value
+ pattern is optional. If not specified, only the header
+ presence is verified. <meth> is the matching method,
+ applied on the header name or the header value. Supported
+ matching methods are "str" (exact match), "beg" (prefix
+ match), "end" (suffix match), "sub" (substring match) or
+ "reg" (regex match). If not specified, exact matching
+ method is used. If the "name-lf" parameter is used,
+ <name> is evaluated as a Custom log format string (see
+                      section 8.2.6). If the "value-lf" parameter is used,
+                      <value> is evaluated as a Custom log format string.
+                      These parameters cannot be used with the regex matching
+                      method. Finally, the header value is considered as a
+                      comma-separated list. Note that matching is
+                      case-insensitive on header names.
- Arguments :
- <name> is the name of the filter. Officially supported filters are
- referenced in section 9.
+ fhdr { name | name-lf } [ -m <meth> ] <name>
+         [ { value | value-lf } [ -m <meth> ] <value> ] :
+ test the specified full header pattern on the HTTP
+ response headers. It does exactly the same as the "hdr"
+ keyword, except the full header value is tested, commas
+ are not considered as delimiters.
- <param*> is a list of parameters accepted by the filter <name>. The
- parsing of these parameters are the responsibility of the
- filter. Please refer to the documentation of the corresponding
- filter (section 9) for all details on the supported parameters.
+ string <string> : test the exact string match in the HTTP response body.
+ A health check response will be considered valid if the
+ response's body contains this exact string. If the
+ "string" keyword is prefixed with "!", then the response
+ will be considered invalid if the body contains this
+ string. This can be used to look for a mandatory word at
+ the end of a dynamic page, or to detect a failure when a
+ specific error appears on the check page (e.g. a stack
+ trace).
- Multiple occurrences of the filter line can be used for the same proxy. The
- same filter can be referenced many times if needed.
+ rstring <regex> : test a regular expression on the HTTP response body.
+ A health check response will be considered valid if the
+ response's body matches this expression. If the "rstring"
+ keyword is prefixed with "!", then the response will be
+ considered invalid if the body matches the expression.
+ This can be used to look for a mandatory word at the end
+ of a dynamic page, or to detect a failure when a specific
+ error appears on the check page (e.g. a stack trace).
- Example:
- listen
- bind *:80
+ string-lf <fmt> : test a Custom log format string (see section 8.2.6) match
+ in the HTTP response body. A health check response will
+ be considered valid if the response's body contains the
+                      string resulting from the evaluation of <fmt>, which
+                      follows the log-format rules. If prefixed with "!", then
+ the response will be considered invalid if the body
+ contains the string.
- filter trace name BEFORE-HTTP-COMP
- filter compression
- filter trace name AFTER-HTTP-COMP
+ It is important to note that the responses will be limited to a certain size
+ defined by the global "tune.bufsize" option, which defaults to 16384 bytes.
+  Thus, overly large responses may not contain the mandatory pattern when
+  using "string" or "rstring". If a large response is absolutely required, it
+  is possible to change the default max size by tuning this global option.
+ However, it is worth keeping in mind that parsing very large responses can
+ waste some CPU cycles, especially when regular expressions are used, and that
+ it is always better to focus the checks on smaller resources.
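+
+  For example, if responses of up to 64kB must be inspected, the limit may be
+  raised in the global section (the value below is only illustrative) :
+
+      global
+          tune.bufsize 65536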
- compression algo gzip
- compression offload
+ In an http-check ruleset, the last expect rule may be implicit. If no expect
+ rule is specified after the last "http-check send", an implicit expect rule
+ is defined to match on 2xx or 3xx status codes. It means this rule is also
+ defined if there is no "http-check" rule at all, when only "option httpchk"
+ is set.
- server srv1 192.168.0.1:80
+  Last, if "http-check expect" is combined with "http-check disable-on-404",
+  the latter has precedence when the server responds with a 404.
- See also : section 9.
+ Examples :
+      # accept statuses 200, 201 and 300 to 310 as valid
+ http-check expect status 200,201,300-310
+      # be sure a sessid cookie is set
+ http-check expect header name "set-cookie" value -m beg "sessid="
-fullconn <conns>
- Specify at what backend load the servers will reach their maxconn
+ # consider SQL errors as errors
+ http-check expect ! string SQL\ Error
+
+ # consider status 5xx only as errors
+ http-check expect ! rstatus ^5
+
+ # check that we have a correct hexadecimal tag before /html
+ http-check expect rstring <!--tag:[0-9a-f]*--></html>
+
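+      # a sketch using the optional "ok-status" argument described above:
+      # mark the server NOLB when the body reports a draining state (the
+      # "DRAINING" marker is illustrative)
+      http-check expect string DRAINING ok-status L7OKC
+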
+ See also : "option httpchk", "http-check connect", "http-check disable-on-404"
+ and "http-check send".
+
+
+http-check send [meth <method>] [{ uri <uri> | uri-lf <fmt> }] [ver <version>]
+ [hdr <name> <fmt>]* [{ body <string> | body-lf <fmt> }]
+ [comment <msg>]
+  Add an optional list of headers and/or a body to the request sent during
+  HTTP health checks.
May be used in the following contexts: tcp, http
yes | no | yes | yes
Arguments :
- <conns> is the number of connections on the backend which will make the
- servers use the maximal number of connections.
-
- When a server has a "maxconn" parameter specified, it means that its number
- of concurrent connections will never go higher. Additionally, if it has a
- "minconn" parameter, it indicates a dynamic limit following the backend's
- load. The server will then always accept at least <minconn> connections,
- never more than <maxconn>, and the limit will be on the ramp between both
- values when the backend has less than <conns> concurrent connections. This
- makes it possible to limit the load on the servers during normal loads, but
- push it further for important loads without overloading the servers during
- exceptional loads.
+ comment <msg> defines a message to report if the rule evaluation fails.
- Since it's hard to get this value right, HAProxy automatically sets it to
- 10% of the sum of the maxconns of all frontends that may branch to this
- backend (based on "use_backend" and "default_backend" rules). That way it's
- safe to leave it unset. However, "use_backend" involving dynamic names are
- not counted since there is no way to know if they could match or not.
+ meth <method> is the optional HTTP method used with the requests. When not
+ set, the "OPTIONS" method is used, as it generally requires
+ low server processing and is easy to filter out from the
+ logs. Any method may be used, though it is not recommended
+ to invent non-standard ones.
- Example :
- # The servers will accept between 100 and 1000 concurrent connections each
- # and the maximum of 1000 will be reached when the backend reaches 10000
- # connections.
- backend dynamic
- fullconn 10000
- server srv1 dyn1:80 minconn 100 maxconn 1000
- server srv2 dyn2:80 minconn 100 maxconn 1000
+    uri <uri>     is optional and sets the URI referenced in the HTTP requests
+ to the string <uri>. It defaults to "/" which is accessible
+ by default on almost any server, but may be changed to any
+ other URI. Query strings are permitted.
- See also : "maxconn", "server"
+    uri-lf <fmt>  is optional and sets the URI referenced in the HTTP requests
+ using the Custom log format <fmt> (see section 8.2.6). It
+ defaults to "/" which is accessible by default on almost any
+ server, but may be changed to any other URI. Query strings
+ are permitted.
+ ver <version> is the optional HTTP version string. It defaults to
+ "HTTP/1.0" but some servers might behave incorrectly in HTTP
+ 1.0, so turning it to HTTP/1.1 may sometimes help. Note that
+                   the Host field is mandatory in HTTP/1.1, so use the "hdr"
+                   argument to add it.
-guid <string>
- Specify a case-sensitive global unique ID for this proxy.
+ hdr <name> <fmt> adds the HTTP header field whose name is specified in
+ <name> and whose value is defined by <fmt>, which follows
+ the Custom log format rules described in section 8.2.6.
- May be used in the following contexts: tcp, http, log
+    body <string>  adds the body defined by <string> to the request sent during
+ HTTP health checks. If defined, the "Content-Length" header
+ is thus automatically added to the request.
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | yes
+    body-lf <fmt>  adds the body defined by the Custom log format <fmt> (see
+ section 8.2.6) to the request sent during HTTP health
+ checks. If defined, the "Content-Length" header is thus
+ automatically added to the request.
- <string> must be unique across all haproxy configuration on every object
- types. Format is left unspecified to allow the user to select its naming
- policy. The only restriction is its length which cannot be greater than
- 127 characters. All alphanumerical values and '.', ':', '-' and '_'
- characters are valid.
+  In addition to the request line defined by the "option httpchk" directive,
+  this one is the valid way to add some headers and optionally a body to the
+  request sent during HTTP health checks. If a body is defined, the associated
+  "Content-Length" header is automatically added. Thus, this header and the
+  "Transfer-Encoding" header should not be present in the request provided by
+  "http-check send"; if they are, they will be ignored. The old trick of
+  adding headers after the version string on the "option httpchk" line is now
+  deprecated.
+  Also, "http-check send" does not support HTTP keep-alive. Keep in mind that
+  it will automatically append a "Connection: close" header, unless a
+  Connection header has already been configured via a hdr entry.
-hash-balance-factor <factor>
- Specify the balancing factor for bounded-load consistent hashing
+  Note that the Host header and the request authority, when both defined, are
+  automatically synchronized. This means that when the HTTP request is sent
+  and a Host header is inserted into it, the request authority is updated
+  accordingly. Thus, don't be surprised if the Host header value overwrites
+  the configured request authority.
+
+  Note also that, for now, no Host header is automatically added to HTTP/1.1
+  or above requests. You should add it explicitly.
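+
+  Example:
+      # sketch: send a GET to /health in HTTP/1.1 with an explicit Host
+      # header (the URI and host name below are illustrative)
+      option httpchk
+      http-check send meth GET uri /health ver HTTP/1.1 hdr Host www.test.io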
+
+ See also : "option httpchk", "http-check send-state" and "http-check expect".
+
+
+http-check send-state
+ Enable emission of a state header with HTTP health checks
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | no | no | yes
-
- Arguments :
- <factor> is the control for the maximum number of concurrent requests to
- send to a server, expressed as a percentage of the average number
- of concurrent requests across all of the active servers.
+ yes | no | yes | yes
- Specifying a "hash-balance-factor" for a server with "hash-type consistent"
- enables an algorithm that prevents any one server from getting too many
- requests at once, even if some hash buckets receive many more requests than
- others. Setting <factor> to 0 (the default) disables the feature. Otherwise,
- <factor> is a percentage greater than 100. For example, if <factor> is 150,
- then no server will be allowed to have a load more than 1.5 times the average.
- If server weights are used, they will be respected.
+ Arguments : none
- If the first-choice server is disqualified, the algorithm will choose another
- server based on the request hash, until a server with additional capacity is
- found. A higher <factor> allows more imbalance between the servers, while a
- lower <factor> means that more servers will be checked on average, affecting
- performance. Reasonable values are from 125 to 200.
+  When this option is set, HAProxy will systematically send a special header
+  "X-Haproxy-Server-State" with a list of parameters indicating to each server
+  how it is seen by HAProxy. This can be used for instance when a server is
+  manipulated without access to HAProxy and the operator needs to know whether
+  HAProxy still sees it up or not, or if the server is the last one in a farm.
- This setting is also used by "balance random" which internally relies on the
- consistent hashing mechanism.
+  The header is composed of fields delimited by semi-colons, the first of which
+  is a word ("UP", "DOWN", "NOLB"), possibly followed by the number of valid
+  checks out of the total number before transition, just as appears in the
+  stats interface. The next fields are in the form "<variable>=<value>",
+  indicating in no specific order some values available in the stats
+  interface :
+ - a variable "address", containing the address of the backend server.
+ This corresponds to the <address> field in the server declaration. For
+ unix domain sockets, it will read "unix".
- See also : "balance" and "hash-type".
+ - a variable "port", containing the port of the backend server. This
+ corresponds to the <port> field in the server declaration. For unix
+ domain sockets, it will read "unix".
-hash-preserve-affinity { always | maxconn | maxqueue }
- Specify a method for assigning streams to servers with hash load balancing
- when servers are satured or have a full queue.
+ - a variable "name", containing the name of the backend followed by a slash
+ ("/") then the name of the server. This can be used when a server is
+ checked in multiple backends.
- May be used in the following contexts: http
+ - a variable "node" containing the name of the HAProxy node, as set in the
+ global "node" variable, otherwise the system's hostname if unspecified.
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+ - a variable "weight" indicating the weight of the server, a slash ("/")
+ and the total weight of the farm (just counting usable servers). This
+ helps to know if other servers are available to handle the load when this
+ one fails.
- The following values can be specified:
+ - a variable "scur" indicating the current number of concurrent connections
+ on the server, followed by a slash ("/") then the total number of
+ connections on all servers of the same backend.
- - "always" : this is the default strategy. A stream is assigned to a
- server based on hashing irrespective of whether the server
- is currently saturated.
+ - a variable "qcur" indicating the current number of requests in the
+ server's queue.
- - "maxconn" : when selected, servers that have "maxconn" set and are
- currently saturated will be skipped. Another server will be
- picked by following the hashing ring. This has no effect on
- servers that do not set "maxconn". If all servers are
- saturated, the request is enqueued to the last server in the
- hash ring before the initially selected server.
+ Example of a header received by the application server :
+ >>> X-Haproxy-Server-State: UP 2/3; name=bck/srv2; node=lb1; weight=1/2; \
+ scur=13/22; qcur=0
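+
+  Example of a configuration enabling it (the names are illustrative) :
+      backend bck
+          option httpchk
+          http-check send-state
+          server srv1 192.168.0.1:80 check
+          server srv2 192.168.0.2:80 check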
- - "maxqueue" : when selected, servers that have "maxconn" set, "maxqueue"
- set to a non-zero value (limited queue size) and currently
- have a full queue will be skipped. Another server will be
- picked by following the hashing ring. This has no effect on
- servers that do not set both "maxconn" and "maxqueue".
+ See also : "option httpchk", "http-check disable-on-404" and
+ "http-check send".
- See also : "maxconn", "maxqueue", "hash-balance-factor"
-hash-type <method> <function> <modifier>
- Specify a method to use for mapping hashes to servers
+http-check set-var(<var-name>[,<cond>...]) <expr>
+http-check set-var-fmt(<var-name>[,<cond>...]) <fmt>
+ This operation sets the content of a variable. The variable is declared inline.
- May be used in the following contexts: tcp, http, log
+ May be used in the following contexts: tcp, http
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
Arguments :
- <method> is the method used to select a server from the hash computed by
- the <function> :
+ <var-name> The name of the variable. Only "proc", "sess" and "check"
+ scopes can be used. See section 2.8 about variables for details.
- map-based the hash table is a static array containing all alive servers.
- The hashes will be very smooth, will consider weights, but
- will be static in that weight changes while a server is up
- will be ignored. This means that there will be no slow start.
- Also, since a server is selected by its position in the array,
- most mappings are changed when the server count changes. This
- means that when a server goes up or down, or when a server is
- added to a farm, most connections will be redistributed to
- different servers. This can be inconvenient with caches for
- instance.
+ <cond> A set of conditions that must all be true for the variable to
+ actually be set (such as "ifnotempty", "ifgt" ...). See the
+ set-var converter's description for a full list of possible
+ conditions.
- consistent the hash table is a tree filled with many occurrences of each
- server. The hash key is looked up in the tree and the closest
- server is chosen. This hash is dynamic, it supports changing
- weights while the servers are up, so it is compatible with the
- slow start feature. It has the advantage that when a server
- goes up or down, only its associations are moved. When a
- server is added to the farm, only a few part of the mappings
- are redistributed, making it an ideal method for caches.
- However, due to its principle, the distribution will never be
- very smooth and it may sometimes be necessary to adjust a
- server's weight or its ID to get a more balanced distribution.
- In order to get the same distribution on multiple load
- balancers, it is important that all servers have the exact
- same IDs. Note: consistent hash uses sdbm and avalanche if no
- hash function is specified.
+ <expr> Is a sample-fetch expression potentially followed by converters.
- <function> is the hash function to be used :
+ <fmt> This is the value expressed using Custom log format (see Custom
+ Log Format in section 8.2.6).
- sdbm this function was created initially for sdbm (a public-domain
- reimplementation of ndbm) database library. It was found to do
- well in scrambling bits, causing better distribution of the keys
- and fewer splits. It also happens to be a good general hashing
- function with good distribution, unless the total server weight
- is a multiple of 64, in which case applying the avalanche
- modifier may help.
+ Examples :
+ http-check set-var(check.port) int(1234)
+ http-check set-var-fmt(check.port) "name=%H"
- djb2 this function was first proposed by Dan Bernstein many years ago
- on comp.lang.c. Studies have shown that for certain workload this
- function provides a better distribution than sdbm. It generally
- works well with text-based inputs though it can perform extremely
- poorly with numeric-only input or when the total server weight is
- a multiple of 33, unless the avalanche modifier is also used.
- wt6 this function was designed for HAProxy while testing other
- functions in the past. It is not as smooth as the other ones, but
- is much less sensible to the input data set or to the number of
- servers. It can make sense as an alternative to sdbm+avalanche or
- djb2+avalanche for consistent hashing or when hashing on numeric
- data such as a source IP address or a visitor identifier in a URL
- parameter.
+http-check unset-var(<var-name>)
+ Free a reference to a variable within its scope.
- crc32 this is the most common CRC32 implementation as used in Ethernet,
- gzip, PNG, etc. It is slower than the other ones but may provide
- a better distribution or less predictable results especially when
- used on strings.
+ May be used in the following contexts: tcp, http
- none don't hash the key, the key will be used as a hash, this can be
- useful to manually hash the key using a converter for that purpose
- and let haproxy use the result directly.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- <modifier> indicates an optional method applied after hashing the key :
+ Arguments :
+ <var-name> The name of the variable. Only "proc", "sess" and "check"
+ scopes can be used. See section 2.8 about variables for details.
- avalanche This directive indicates that the result from the hash
- function above should not be used in its raw form but that
- a 4-byte full avalanche hash must be applied first. The
- purpose of this step is to mix the resulting bits from the
- previous hash in order to avoid any undesired effect when
- the input contains some limited values or when the number of
- servers is a multiple of one of the hash's components (64
- for SDBM, 33 for DJB2). Enabling avalanche tends to make the
- result less predictable, but it's also not as smooth as when
- using the original function. Some testing might be needed
- with some workloads. This hash is one of the many proposed
- by Bob Jenkins.
+ Examples :
+ http-check unset-var(check.port)
- The default hash type is "map-based" and is recommended for most usages. The
- default function is "sdbm", the selection of a function should be based on
- the range of the values being hashed.
- See also : "balance", "hash-balance-factor", "hash-preserve-affinity",
- "server"
+http-error status <code> [content-type <type>]
+ [ { default-errorfiles | errorfile <file> | errorfiles <name> |
+ file <file> | lf-file <file> | string <str> | lf-string <fmt> } ]
+ [ hdr <name> <fmt> ]*
+ Defines a custom error message to use instead of errors generated by HAProxy.
+
+ May be used in the following contexts: http
+
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ Arguments :
+ status <code> is the HTTP status code. It must be specified.
+ Currently, HAProxy is capable of generating codes
+ 200, 400, 401, 403, 404, 405, 407, 408, 410, 413,
+ 414, 425, 429, 431, 500, 501, 502, 503, and 504.
+
+ content-type <type> is the response content type, for instance
+ "text/plain". This parameter is ignored and should be
+ omitted when an errorfile is configured or when the
+ payload is empty. Otherwise, it must be defined.
+
+    default-errorfiles  Reset the previously defined error message for the
+                        current proxy for the status <code>. If used on a
+                        backend, the frontend error message is used, if
+                        defined. If used on a frontend, the default error
+                        message is used.
+
+ errorfile <file> designates a file containing the full HTTP response.
+ It is recommended to follow the common practice of
+ appending ".http" to the filename so that people do
+ not confuse the response with HTML error pages, and to
+ use absolute paths, since files are read before any
+ chroot is performed.
+
+ errorfiles <name> designates the http-errors section to use to import
+ the error message with the status code <code>. If no
+ such message is found, the proxy's error messages are
+ considered.
+
+ file <file> specifies the file to use as response payload. If the
+ file is not empty, its content-type must be set as
+ argument to "content-type", otherwise, any
+ "content-type" argument is ignored. <file> is
+ considered as a raw string.
+
+ string <str> specifies the raw string to use as response payload.
+ The content-type must always be set as argument to
+ "content-type".
+
+ lf-file <file> specifies the file to use as response payload. If the
+ file is not empty, its content-type must be set as
+ argument to "content-type", otherwise, any
+ "content-type" argument is ignored. <file> is
+ evaluated as a Custom log format (see section 8.2.6).
+
+ lf-string <str> specifies the log-format string to use as response
+ payload. The content-type must always be set as
+ argument to "content-type".
+
+ hdr <name> <fmt> adds to the response the HTTP header field whose name
+ is specified in <name> and whose value is defined by
+ <fmt>, which follows the Custom log format rules (see
+ section 8.2.6). This parameter is ignored if an
+ errorfile is used.
+
+  This directive may be used instead of "errorfile", to define a custom error
+  message. Like the "errorfile" directive, it is used for errors detected and
+  returned by HAProxy. If an errorfile is defined, it is parsed when HAProxy
+  starts and must be valid according to the HTTP standards. The generated
+  response must not exceed the configured buffer size (BUFSIZE), otherwise an
+  internal error will be returned. Finally, if you plan to use some
+  http-after-response rules to rewrite these errors, the reserved buffer space
+  should be available (see "tune.maxrewrite").
+
+ The files are read at the same time as the configuration and kept in memory.
+ For this reason, the errors continue to be returned even when the process is
+ chrooted, and no file change is considered while the process is running.
+
+  Note: 400/408/500 errors emitted in the early stages of request parsing are
+ handled by the multiplexer at a lower level. No custom formatting is
+ supported at this level. Thus only static error messages, defined with
+ "errorfile" directive, are supported. However, this limitation only
+ exists during the request headers parsing or between two transactions.
+
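+  Example:
+      # sketch: a plain-text maintenance message for 503 errors, and a 404
+      # imported from an http-errors section (names and text are
+      # illustrative)
+      http-error status 503 content-type "text/plain" string "Maintenance"
+      http-error status 404 errorfiles site-errors
+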
+ See also : "errorfile", "errorfiles", "errorloc", "errorloc302",
+ "errorloc303" and section 12.4 about http-errors.
-http-after-response <action> <options...> [ { if | unless } <condition> ]
- Access control for all Layer 7 responses (server, applet/service and internal
- ones).
+
+http-request <action> [options...] [ { if | unless } <condition> ]
+ Access control for Layer 7 requests
May be used in the following contexts: http
May be used in sections: defaults | frontend | listen | backend
yes(!) | yes | yes | yes
- The http-after-response statement defines a set of rules which apply to layer
- 7 processing. The rules are evaluated in their declaration order when they
- are met in a frontend, listen or backend section. Since these rules apply on
- responses, the backend rules are applied first, followed by the frontend's
- rules. Any rule may optionally be followed by an ACL-based condition, in
- which case it will only be evaluated if the condition evaluates true.
-
- Unlike http-response rules, these ones are applied on all responses, the
- server ones but also to all responses generated by HAProxy. These rules are
- evaluated at the end of the responses analysis, before the data forwarding
- phase.
+ The http-request statement defines a set of rules which apply to layer 7
+ processing. The rules are evaluated in their declaration order when they are
+ met in a frontend, listen or backend section. Any rule may optionally be
+ followed by an ACL-based condition, in which case it will only be evaluated
+ if the condition evaluates to true.
The condition is evaluated just before the action is executed, and the action
is performed exactly once. As such, there is no problem if an action changes
changes the condition's evaluation is sufficient to implicitly disable the
remaining actions. This is used for example when trying to assign a value to
a variable from various sources when it's empty. There is no limit to the
- number of "http-after-response" statements per instance.
+ number of "http-request" statements per instance.
- The first keyword after "http-after-response" in the syntax is the rule's
- action, optionally followed by a varying number of arguments for the action.
- The supported actions and their respective syntaxes are enumerated in section
- 4.3 "Actions" (look for actions which tick "HTTP Aft").
+ The first keyword after "http-request" in the syntax is the rule's action,
+ optionally followed by a varying number of arguments for the action. The
+ supported actions and their respective syntaxes are enumerated in section 4.3
+ "Actions" (look for actions which tick "HTTP Req").
This directive is only available from named defaults sections, not anonymous
ones. Rules defined in the defaults section are evaluated before ones in the
by proxies with the backend capability. It means a listen section cannot use
a defaults section defining such rules.
- Note: Errors emitted in early stage of the request parsing are handled by the
- multiplexer at a lower level, before any http analysis. Thus no
- http-after-response ruleset is evaluated on these errors.
-
Example:
- http-after-response set-header Strict-Transport-Security "max-age=31536000"
- http-after-response set-header Cache-Control "no-store,no-cache,private"
- http-after-response set-header Pragma "no-cache"
+ acl nagios src 192.168.129.3
+ acl local_net src 192.168.0.0/16
+ acl auth_ok http_auth(L1)
+ http-request allow if nagios
+ http-request allow if local_net auth_ok
+ http-request auth realm Gimme if local_net auth_ok
+ http-request deny
-http-check comment <string>
- Defines a comment for the following the http-check rule, reported in logs if
- it fails.
+ Example:
+ acl key req.hdr(X-Add-Acl-Key) -m found
+ acl add path /addacl
+ acl del path /delacl
- May be used in the following contexts: tcp, http
+ acl myhost hdr(Host) -f myhost.lst
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ http-request add-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key add
+ http-request del-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key del
- Arguments :
- <string> is the comment message to add in logs if the following http-check
- rule fails.
+ Example:
+ acl value req.hdr(X-Value) -m found
+ acl setmap path /setmap
+ acl delmap path /delmap
- It only works for connect, send and expect rules. It is useful to make
- user-friendly error reporting.
+ use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found }
- See also : "option httpchk", "http-check connect", "http-check send" and
- "http-check expect".
+ http-request set-map(map.lst) %[src] %[req.hdr(X-Value)] if setmap value
+ http-request del-map(map.lst) %[src] if delmap
+ See also : "stats http-request", section 12.2 about userlists and section 7
+ about ACL usage.
-http-check connect [default] [port <expr>] [addr <ip>] [send-proxy]
- [via-socks4] [ssl] [sni <sni>] [alpn <alpn>] [linger]
- [proto <name>] [comment <msg>]
- Opens a new connection to perform an HTTP health check
+http-response <action> <options...> [ { if | unless } <condition> ]
+ Access control for Layer 7 responses
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ May be used in sections: defaults | frontend | listen | backend
+ yes(!) | yes | yes | yes
- Arguments :
- comment <msg> defines a message to report if the rule evaluation fails.
+ The http-response statement defines a set of rules which apply to layer 7
+ processing. The rules are evaluated in their declaration order when they are
+ met in a frontend, listen or backend section. Since these rules apply on
+ responses, the backend rules are applied first, followed by the frontend's
+ rules. Any rule may optionally be followed by an ACL-based condition, in
+ which case it will only be evaluated if the condition evaluates to true.
- default Use default options of the server line to do the health
- checks. The server options are used only if not redefined.
+ The condition is evaluated just before the action is executed, and the action
+ is performed exactly once. As such, there is no problem if an action changes
+ an element which is checked as part of the condition. This also means that
+ multiple actions may rely on the same condition so that the first action that
+ changes the condition's evaluation is sufficient to implicitly disable the
+ remaining actions. This is used for example when trying to assign a value to
+ a variable from various sources when it's empty. There is no limit to the
+ number of "http-response" statements per instance.
- port <expr> if not set, check port or server port is used.
- It tells HAProxy where to open the connection to.
- <port> must be a valid TCP port source integer, from 1 to
- 65535 or an sample-fetch expression.
+ The first keyword after "http-response" in the syntax is the rule's action,
+ optionally followed by a varying number of arguments for the action. The
+ supported actions and their respective syntaxes are enumerated in section 4.3
+ "Actions" (look for actions which tick "HTTP Res").
- addr <ip> defines the IP address to do the health check.
+ This directive is only available from named defaults sections, not anonymous
+ ones. Rules defined in the defaults section are evaluated before ones in the
+ associated proxy section. To avoid ambiguities, in this case the same
+ defaults section cannot be used by proxies with the frontend capability and
+ by proxies with the backend capability. It means a listen section cannot use
+ a defaults section defining such rules.
- send-proxy send a PROXY protocol string
+ Example:
+ acl key_acl res.hdr(X-Acl-Key) -m found
- via-socks4 enables outgoing health checks using upstream socks4 proxy.
+ acl myhost hdr(Host) -f myhost.lst
- ssl opens a ciphered connection
+ http-response add-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl
+ http-response del-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl
- sni <sni> specifies the SNI to use to do health checks over SSL.
+ Example:
+ acl value res.hdr(X-Value) -m found
- alpn <alpn> defines which protocols to advertise with ALPN. The protocol
- list consists in a comma-delimited list of protocol names,
- for instance: "h2,http/1.1". If it is not set, the server ALPN
- is used.
+ use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found }
- proto <name> forces the multiplexer's protocol to use for this connection.
- It must be an HTTP mux protocol and it must be usable on the
- backend side. The list of available protocols is reported in
- haproxy -vv.
+ http-response set-map(map.lst) %[src] %[res.hdr(X-Value)] if value
+ http-response del-map(map.lst) %[src] if ! value
- linger cleanly close the connection instead of using a single RST.
+ See also : "http-request", section 12.2 about userlists and section 7 about
+ ACL usage.
- Just like tcp-check health checks, it is possible to configure the connection
- to use to perform HTTP health check. This directive should also be used to
- describe a scenario involving several request/response exchanges, possibly on
- different ports or with different servers.
+http-reuse { never | safe | aggressive | always }
+ Declare how idle HTTP connections may be shared between requests
- When there are no TCP port configured on the server line neither server port
- directive, then the first step of the http-check sequence must be to specify
- the port with a "http-check connect".
+ May be used in the following contexts: http
- In an http-check ruleset a 'connect' is required, it is also mandatory to start
- the ruleset with a 'connect' rule. Purpose is to ensure admin know what they
- do.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- When a connect must start the ruleset, if may still be preceded by set-var,
- unset-var or comment rules.
+ In order to avoid the cost of setting up new connections to backend servers
+ for each HTTP request, HAProxy tries to keep such idle connections opened
+ after being used. These connections are specific to a server and are stored
+ in a list called a pool, and are grouped together by a set of common key
+ properties. Subsequent HTTP requests will cause a lookup of a compatible
+ connection sharing identical properties in the associated pool and result in
+ this connection being reused instead of establishing a new one.
- Examples :
- # check HTTP and HTTPs services on a server.
- # first open port 80 thanks to server line port directive, then
- # tcp-check opens port 443, ciphered and run a request on it:
- option httpchk
+ A limit on the number of idle connections to keep on a server can be
+ specified via the "pool-max-conn" server keyword. Unused connections are
+ periodically purged according to the "pool-purge-delay" interval.
- http-check connect
- http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
- http-check expect status 200-399
- http-check connect port 443 ssl sni haproxy.1wt.eu
- http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
- http-check expect status 200-399
+ The following connection properties are used to determine if an idle
+ connection is eligible for reuse on a given request:
+ - source and destination addresses
+ - proxy protocol
+ - TOS and mark socket options
+ - connection name, determined either by the result of the evaluation of the
+ "pool-conn-name" expression if present, otherwise by the "sni" expression
- server www 10.0.0.1 check port 80
+ On some occasions, connection lookup or reuse is not performed due to extra
+ restrictions, as determined by the reuse strategy specified via the keyword
+ argument:
- See also : "option httpchk", "http-check send", "http-check expect"
+ - "never" : idle connections are never shared between sessions. This mode
+ may be enforced to cancel a different strategy inherited from
+ a defaults section or for troubleshooting. For example, if an
+ old bogus application considers that multiple requests over
+ the same connection come from the same client and it is not
+ possible to fix the application, it may be desirable to
+ disable connection sharing in a single backend. An example of
+ such an application could be an old HAProxy using cookie
+ insertion in tunnel mode and not checking any request past the
+ first one.
+ - "safe" : this is the default and the recommended strategy. The first
+ request of a session is always sent over its own connection,
+ and only subsequent requests may be dispatched over other
+ existing connections. This ensures that in case the server
+ closes the connection when the request is being sent, the
+ browser can decide to silently retry it. Since it is exactly
+ equivalent to regular keep-alive, there should be no side
+ effects. There is also a special handling for the connections
+ using protocols subject to Head-of-line blocking (backend with
+ h2 or fcgi). In this case, when at least one stream is
+ processed, the used connection is reserved to handle streams
+ of the same session. When no more streams are processed, the
+ connection is released and can be reused.
-http-check disable-on-404
- Enable a maintenance mode upon HTTP/404 response to health-checks
+ - "aggressive" : this mode may be useful in webservices environments where
+ all servers are not necessarily known and where it would be
+ appreciable to deliver most first requests over existing
+ connections. In this case, first requests are only delivered
+ over existing connections that have been reused at least once,
+ proving that the server correctly supports connection reuse.
+ It should only be used when it's sure that the client can
+ retry a failed request once in a while and where the benefit
+ of aggressive connection reuse significantly outweighs the
+ downsides of rare connection failures.
- May be used in the following contexts: tcp, http
+ - "always" : this mode is only recommended when the path to the server is
+ known for never breaking existing connections quickly after
+ releasing them. It allows the first request of a session to be
+ sent to an existing connection. This can provide a significant
+ performance increase over the "safe" strategy when the backend
+ is a cache farm, since such components tend to show a
+ consistent behavior and will benefit from the connection
+ sharing. It is recommended that the "http-keep-alive" timeout
+ remains low in this mode so that no dead connections remain
+ usable. In most cases, this will lead to the same performance
+ gains as "aggressive" but with more risks. It should only be
+ used when it improves the situation over "aggressive".
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ Also note that connections with certain bogus authentication schemes (relying
+ on the connection) like NTLM are marked private if possible and never shared.
+ This will not be the case, however, when using a protocol with multiplexing
+ abilities and a reuse mode above the default "safe" strategy, since in this
+ case nothing prevents the connection from being already shared.
- Arguments : none
+ The rules to decide to keep an idle connection opened or to close it after
+ processing are also governed by the "tune.pool-low-fd-ratio" (default: 20%)
+ and "tune.pool-high-fd-ratio" (default: 25%). These correspond to the
+ percentage of total file descriptors spent in idle connections above which
+ haproxy will respectively refrain from keeping a connection opened after a
+ response, and actively kill idle connections. Some setups using a very high
+ ratio of idle connections, either because of too low a global "maxconn", or
+ due to a lot of HTTP/2 or HTTP/3 traffic on the frontend (few connections)
+ but HTTP/1 connections on the backend, may observe a lower reuse rate because
+ too few connections are kept open. It may be desirable in this case to adjust
+ such thresholds or simply to increase the global "maxconn" value.
- When this option is set, a server which returns an HTTP code 404 will be
- excluded from further load-balancing, but will still receive persistent
- connections. This provides a very convenient method for Web administrators
- to perform a graceful shutdown of their servers. It is also important to note
- that a server which is detected as failed while it was in this mode will not
- generate an alert, just a notice. If the server responds 2xx or 3xx again, it
- will immediately be reinserted into the farm. The status on the stats page
- reports "NOLB" for a server in this mode. It is important to note that this
- option only works in conjunction with the "httpchk" option. If this option
- is used with "http-check expect", then it has precedence over it so that 404
- responses will still be considered as soft-stop. Note also that a stopped
- server will stay stopped even if it replies 404s. This option is only
- evaluated for running servers.
+ When thread groups are explicitly enabled, it is important to understand that
+ idle connections are only usable between threads from a same group. As such
+ it may happen that unfair load between groups leads to more idle connections
+ being needed, causing a lower reuse rate. The same solution may then be
+ applied (increase global "maxconn" or increase pool ratios).
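+
+ As an illustration, a hypothetical cache farm backend (server names and
+ addresses below are arbitrary) combining the "always" strategy with a
+ bounded idle pool could look like this:
+
+ Example :
+        backend cache_farm
+            http-reuse always
+            # keep at most 50 idle connections per server available for reuse
+            server c1 192.168.0.10:80 pool-max-conn 50
+            server c2 192.168.0.11:80 pool-max-conn 50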
- See also : "option httpchk" and "http-check expect".
+ See also : "option http-keep-alive", "pool-conn-name", "pool-max-conn",
+ "pool-purge-delay", "server maxconn", "sni", "thread-groups",
+ "tune.pool-high-fd-ratio", "tune.pool-low-fd-ratio"
-http-check expect [min-recv <int>] [comment <msg>]
- [ok-status <st>] [error-status <st>] [tout-status <st>]
- [on-success <fmt>] [on-error <fmt>] [status-code <expr>]
- [!] <match> <pattern>
- Make HTTP health checks consider response contents or specific status codes
+http-send-name-header [<header>]
+ Add the server name to a request. Use the header string given by <header>
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
Arguments :
- comment <msg> defines a message to report if the rule evaluation fails.
+ <header> The header string to use to send the server name
- min-recv is optional and can define the minimum amount of data required to
- evaluate the current expect rule. If the number of received bytes
- is under this limit, the check will wait for more data. This
- option can be used to resolve some ambiguous matching rules or to
- avoid executing costly regex matches on content known to be still
- incomplete. If an exact string is used, the minimum between the
- string length and this parameter is used. This parameter is
- ignored if it is set to -1. If the expect rule does not match,
- the check will wait for more data. If set to 0, the evaluation
- result is always conclusive.
+ The "http-send-name-header" statement causes the header field named <header>
+ to be set to the name of the target server at the moment the request is about
+ to be sent on the wire. Any existing occurrences of this header are removed.
+ Upon retries and redispatches, the header field is updated to always reflect
+ the server being attempted to connect to. Given that this header is modified
+ very late in the connection setup, it may have unexpected effects on already
+ modified headers. For example, using it with transport-level headers such as
+ connection, content-length, transfer-encoding and so on will likely result in
+ invalid requests being sent to the server. Additionally it has been reported
+ that this directive is currently being used as a way to overwrite the Host
+ header field in outgoing requests; while this trick has been known to work
+ as a side effect of the feature for some time, it is not officially supported
+ and might possibly not work anymore in a future version depending on the
+ technical difficulties this feature induces. A long-term solution instead
+ consists in fixing the application which required this trick so that it binds
+ to the correct host name.
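+
+ For example, a hypothetical backend (the header and server names below are
+ arbitrary) reporting the selected server to the application could be:
+
+ Example :
+        backend app
+            http-send-name-header X-Backend-Server
+            server app1 10.0.0.1:8080 check
+            server app2 10.0.0.2:8080 check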
- ok-status <st> is optional and can be used to set the check status if
- the expect rule is successfully evaluated and if it is
- the last rule in the tcp-check ruleset. "L7OK", "L7OKC",
- "L6OK" and "L4OK" are supported :
- - L7OK : check passed on layer 7
- - L7OKC : check conditionally passed on layer 7, set
- server to NOLB state.
- - L6OK : check passed on layer 6
- - L4OK : check passed on layer 4
- By default "L7OK" is used.
+ See also : "server"
- error-status <st> is optional and can be used to set the check status if
- an error occurred during the expect rule evaluation.
- "L7OKC", "L7RSP", "L7STS", "L6RSP" and "L4CON" are
- supported :
- - L7OKC : check conditionally passed on layer 7, set
- server to NOLB state.
- - L7RSP : layer 7 invalid response - protocol error
- - L7STS : layer 7 response error, for example HTTP 5xx
- - L6RSP : layer 6 invalid response - protocol error
- - L4CON : layer 1-4 connection problem
- By default "L7RSP" is used.
+id <value>
+ Set a persistent ID for a proxy.
- tout-status <st> is optional and can be used to set the check status if
- a timeout occurred during the expect rule evaluation.
- "L7TOUT", "L6TOUT", and "L4TOUT" are supported :
- - L7TOUT : layer 7 (HTTP/SMTP) timeout
- - L6TOUT : layer 6 (SSL) timeout
- - L4TOUT : layer 1-4 timeout
- By default "L7TOUT" is used.
+ May be used in the following contexts: tcp, http, log
- on-success <fmt> is optional and can be used to customize the
- informational message reported in logs if the expect
- rule is successfully evaluated and if it is the last rule
- in the tcp-check ruleset. <fmt> is a Custom log format
- string (see section 8.2.6).
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
- on-error <fmt> is optional and can be used to customize the
- informational message reported in logs if an error
- occurred during the expect rule evaluation. <fmt> is a
- Custom log format string (see section 8.2.6).
+ Arguments :
+   <value>   is a unique, positive integer assigned as the proxy's
+             persistent ID.
- <match> is a keyword indicating how to look for a specific pattern in the
- response. The keyword may be one of "status", "rstatus", "hdr",
- "fhdr", "string", or "rstring". The keyword may be preceded by an
- exclamation mark ("!") to negate the match. Spaces are allowed
- between the exclamation mark and the keyword. See below for more
- details on the supported keywords.
+ Set a persistent ID for the proxy. This ID must be unique and positive.
+ An unused ID will automatically be assigned if unset. The first assigned
+ value will be 1. This ID is currently only returned in statistics.
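+
+ For example, to pin a backend's ID (the value below is arbitrary):
+
+ Example :
+        backend static
+            # keep a stable ID in the statistics across configuration changes
+            id 100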
- <pattern> is the pattern to look for. It may be a string, a regular
- expression or a more complex pattern with several arguments. If
- the string pattern contains spaces, they must be escaped with the
- usual backslash ('\').
- By default, "option httpchk" considers that response statuses 2xx and 3xx
- are valid, and that others are invalid. When "http-check expect" is used,
- it defines what is considered valid or invalid. Only one "http-check"
- statement is supported in a backend. If a server fails to respond or times
- out, the check obviously fails. The available matches are :
+ignore-persist { if | unless } <condition>
+ Declare a condition to ignore persistence
- status <codes> : test the status codes found parsing <codes> string. it
- must be a comma-separated list of status codes or range
- codes. A health check response will be considered as
- valid if the response's status code matches any status
- code or is inside any range of the list. If the "status"
- keyword is prefixed with "!", then the response will be
- considered invalid if the status code matches.
+ May be used in the following contexts: tcp, http
- rstatus <regex> : test a regular expression for the HTTP status code.
- A health check response will be considered valid if the
- response's status code matches the expression. If the
- "rstatus" keyword is prefixed with "!", then the response
- will be considered invalid if the status code matches.
- This is mostly used to check for multiple codes.
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
- hdr { name | name-lf } [ -m <meth> ] <name>
- [ { value | value-lf } [ -m <meth> ] <value> :
- test the specified header pattern on the HTTP response
- headers. The name pattern is mandatory but the value
- pattern is optional. If not specified, only the header
- presence is verified. <meth> is the matching method,
- applied on the header name or the header value. Supported
- matching methods are "str" (exact match), "beg" (prefix
- match), "end" (suffix match), "sub" (substring match) or
- "reg" (regex match). If not specified, exact matching
- method is used. If the "name-lf" parameter is used,
- <name> is evaluated as a Custom log format string (see
- section 8.2.6). If "value-lf" parameter is used, <value>
- is evaluated as a log-format string. These parameters
- cannot be used with the regex matching method. Finally,
- the header value is considered as comma-separated
- list. Note that matchings are case insensitive on the
- header names.
+ By default, when cookie persistence is enabled, every request containing
+ the cookie is unconditionally persistent (assuming the target server is up
+ and running).
- fhdr { name | name-lf } [ -m <meth> ] <name>
- [ { value | value-lf } [ -m <meth> ] <value> :
- test the specified full header pattern on the HTTP
- response headers. It does exactly the same as the "hdr"
- keyword, except the full header value is tested, commas
- are not considered as delimiters.
-
- string <string> : test the exact string match in the HTTP response body.
- A health check response will be considered valid if the
- response's body contains this exact string. If the
- "string" keyword is prefixed with "!", then the response
- will be considered invalid if the body contains this
- string. This can be used to look for a mandatory word at
- the end of a dynamic page, or to detect a failure when a
- specific error appears on the check page (e.g. a stack
- trace).
+ The "ignore-persist" statement allows one to declare various ACL-based
+ conditions which, when met, will cause a request to ignore persistence.
+ This is sometimes useful to load balance requests for static files, which
+ often don't require persistence. This can also be used to fully disable
+ persistence for a specific User-Agent (for example, some web crawler bots).
- rstring <regex> : test a regular expression on the HTTP response body.
- A health check response will be considered valid if the
- response's body matches this expression. If the "rstring"
- keyword is prefixed with "!", then the response will be
- considered invalid if the body matches the expression.
- This can be used to look for a mandatory word at the end
- of a dynamic page, or to detect a failure when a specific
- error appears on the check page (e.g. a stack trace).
+ The persistence is ignored when an "if" condition is met, or conversely
+ when an "unless" condition is not met.
- string-lf <fmt> : test a Custom log format string (see section 8.2.6) match
- in the HTTP response body. A health check response will
- be considered valid if the response's body contains the
- string resulting of the evaluation of <fmt>, which
- follows the log-format rules. If prefixed with "!", then
- the response will be considered invalid if the body
- contains the string.
+ Example:
+ acl url_static path_beg /static /images /img /css
+ acl url_static path_end .gif .png .jpg .css .js
+ ignore-persist if url_static
- It is important to note that the responses will be limited to a certain size
- defined by the global "tune.bufsize" option, which defaults to 16384 bytes.
- Thus, too large responses may not contain the mandatory pattern when using
- "string" or "rstring". If a large response is absolutely required, it is
- possible to change the default max size by setting the global variable.
- However, it is worth keeping in mind that parsing very large responses can
- waste some CPU cycles, especially when regular expressions are used, and that
- it is always better to focus the checks on smaller resources.
+ See also : "force-persist", "cookie", and section 7 about ACL usage.
- In an http-check ruleset, the last expect rule may be implicit. If no expect
- rule is specified after the last "http-check send", an implicit expect rule
- is defined to match on 2xx or 3xx status codes. It means this rule is also
- defined if there is no "http-check" rule at all, when only "option httpchk"
- is set.
+load-server-state-from-file { global | local | none }
+ Allow seamless reload of HAProxy
- Last, if "http-check expect" is combined with "http-check disable-on-404",
- then this last one has precedence when the server responds with 404.
+ May be used in the following contexts: tcp, http, log
- Examples :
- # only accept status 200 as valid
- http-check expect status 200,201,300-310
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- # be sure a sessid coookie is set
- http-check expect header name "set-cookie" value -m beg "sessid="
+ This directive points HAProxy to a file where server state from the previous
+ running process has been saved. That way, when starting up, before handling
+ traffic, the new process can apply old states to servers exactly as if no
+ reload occurred. The purpose of the "load-server-state-from-file" directive
+ is to tell HAProxy which file to use. For now, the only supported modes
+ either prevent loading any state or load states from a file containing all
+ backends and servers. The state file can be generated by running the command
+ "show servers state" over the stats socket and redirecting the output.
- # consider SQL errors as errors
- http-check expect ! string SQL\ Error
+ The format of the file is versioned and is very specific. To understand it,
+ please read the documentation of the "show servers state" command (chapter
+ 9.3 of Management Guide).
- # consider status 5xx only as errors
- http-check expect ! rstatus ^5
+ Arguments:
+ global load the content of the file pointed by the global directive
+ named "server-state-file".
- # check that we have a correct hexadecimal tag before /html
- http-check expect rstring <!--tag:[0-9a-f]*--></html>
+ local load the content of the file pointed by the directive
+ "server-state-file-name" if set. If not set, then the backend
+ name is used as a file name.
- See also : "option httpchk", "http-check connect", "http-check disable-on-404"
- and "http-check send".
+ none don't load any state for this backend
+ Notes:
+ - server's IP address is preserved across reloads by default, but the
+ order can be changed thanks to the server's "init-addr" setting. This
+ means that an IP address change performed on the CLI at run time will
+ be preserved, and that any change to the local resolver (e.g. /etc/hosts)
+ will possibly not have any effect if the state file is in use.
-http-check send [meth <method>] [{ uri <uri> | uri-lf <fmt> }>] [ver <version>]
- [hdr <name> <fmt>]* [{ body <string> | body-lf <fmt> }]
- [comment <msg>]
- Add a possible list of headers and/or a body to the request sent during HTTP
- health checks.
+ - server's weight is applied from the previous running process unless it
+ has changed between the previous and new configuration files.
- May be used in the following contexts: tcp, http
+ Example: Minimal configuration
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ global
+ stats socket /tmp/socket
+ server-state-file /tmp/server_state
- Arguments :
- comment <msg> defines a message to report if the rule evaluation fails.
+ defaults
+ load-server-state-from-file global
- meth <method> is the optional HTTP method used with the requests. When not
- set, the "OPTIONS" method is used, as it generally requires
- low server processing and is easy to filter out from the
- logs. Any method may be used, though it is not recommended
- to invent non-standard ones.
+ backend bk
+ server s1 127.0.0.1:22 check weight 11
+ server s2 127.0.0.1:22 check weight 12
- uri <uri> is optional and set the URI referenced in the HTTP requests
- to the string <uri>. It defaults to "/" which is accessible
- by default on almost any server, but may be changed to any
- other URI. Query strings are permitted.
- uri-lf <fmt> is optional and set the URI referenced in the HTTP requests
- using the Custom log format <fmt> (see section 8.2.6). It
- defaults to "/" which is accessible by default on almost any
- server, but may be changed to any other URI. Query strings
- are permitted.
+ Then one can run :
- ver <version> is the optional HTTP version string. It defaults to
- "HTTP/1.0" but some servers might behave incorrectly in HTTP
- 1.0, so turning it to HTTP/1.1 may sometimes help. Note that
- the Host field is mandatory in HTTP/1.1, use "hdr" argument
- to add it.
+ socat /tmp/socket - <<< "show servers state" > /tmp/server_state
- hdr <name> <fmt> adds the HTTP header field whose name is specified in
- <name> and whose value is defined by <fmt>, which follows
- the Custom log format rules described in section 8.2.6.
+ Content of the file /tmp/server_state would be like this:
- body <string> add the body defined by <string> to the request sent during
- HTTP health checks. If defined, the "Content-Length" header
- is thus automatically added to the request.
+ 1
+ # <field names skipped for the doc example>
+ 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0
+ 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0
- body-lf <fmt> add the body defined by the Custom log format <fmt> (see
- section 8.2.6) to the request sent during HTTP health
- checks. If defined, the "Content-Length" header is thus
- automatically added to the request.
+ Example: Minimal configuration with one state file per backend
- In addition to the request line defined by the "option httpchk" directive,
- this one is the valid way to add some headers and optionally a body to the
- request sent during HTTP health checks. If a body is defined, the associate
- "Content-Length" header is automatically added. Thus, this header or
- "Transfer-encoding" header should not be present in the request provided by
- "http-check send". If so, it will be ignored. The old trick consisting to add
- headers after the version string on the "option httpchk" line is now
- deprecated.
+ global
+ stats socket /tmp/socket
+ server-state-base /etc/haproxy/states
- Also "http-check send" doesn't support HTTP keep-alive. Keep in mind that it
- will automatically append a "Connection: close" header, unless a Connection
- header has already already been configured via a hdr entry.
+ defaults
+ load-server-state-from-file local
- Note that the Host header and the request authority, when both defined, are
- automatically synchronized. It means when the HTTP request is sent, when a
- Host is inserted in the request, the request authority is accordingly
- updated. Thus, don't be surprised if the Host header value overwrites the
- configured request authority.
+ backend bk
+ server s1 127.0.0.1:22 check weight 11
+ server s2 127.0.0.1:22 check weight 12
- Note also for now, no Host header is automatically added in HTTP/1.1 or above
- requests. You should add it explicitly.
- See also : "option httpchk", "http-check send-state" and "http-check expect".
+ Then one can run :
+ socat /tmp/socket - <<< "show servers state bk" > /etc/haproxy/states/bk
-http-check send-state
- Enable emission of a state header with HTTP health checks
+ Content of the file /etc/haproxy/states/bk would be like this:
- May be used in the following contexts: tcp, http
+ 1
+ # <field names skipped for the doc example>
+ 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0
+ 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ See also: "server-state-file", "server-state-file-name", and
+ "show servers state"
- Arguments : none
- When this option is set, HAProxy will systematically send a special header
- "X-Haproxy-Server-State" with a list of parameters indicating to each server
- how they are seen by HAProxy. This can be used for instance when a server is
- manipulated without access to HAProxy and the operator needs to know whether
- HAProxy still sees it up or not, or if the server is the last one in a farm.
+log global
+log <target> [len <length>] [format <format>] [sample <ranges>:<sample_size>]
+ [profile <prof>] <facility> [<level> [<minlevel>]]
+no log
+ Enable per-instance logging of events and traffic.
- The header is composed of fields delimited by semi-colons, the first of which
- is a word ("UP", "DOWN", "NOLB"), possibly followed by a number of valid
- checks on the total number before transition, just as appears in the stats
- interface. Next headers are in the form "<variable>=<value>", indicating in
- no specific order some values available in the stats interface :
- - a variable "address", containing the address of the backend server.
- This corresponds to the <address> field in the server declaration. For
- unix domain sockets, it will read "unix".
+ May be used in the following contexts: tcp, http, log
- - a variable "port", containing the port of the backend server. This
- corresponds to the <port> field in the server declaration. For unix
- domain sockets, it will read "unix".
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- - a variable "name", containing the name of the backend followed by a slash
- ("/") then the name of the server. This can be used when a server is
- checked in multiple backends.
+ Prefix :
+    no       should be used when the logger list must be flushed, for example
+             when you do not want to inherit the default logger list. This
+             prefix does not allow arguments.
- - a variable "node" containing the name of the HAProxy node, as set in the
- global "node" variable, otherwise the system's hostname if unspecified.
+ Arguments :
+ global should be used when the instance's logging parameters are the
+ same as the global ones. This is the most common usage. "global"
+ replaces all log arguments with those of the log entries found
+ in the "global" section. Only one "log global" statement may be
+ used per instance, and this form takes no other parameter.
- - a variable "weight" indicating the weight of the server, a slash ("/")
- and the total weight of the farm (just counting usable servers). This
- helps to know if other servers are available to handle the load when this
- one fails.
+ <target> indicates where to send the logs. It takes the same format as
+ for the "global" section's logs, and can be one of :
- - a variable "scur" indicating the current number of concurrent connections
- on the server, followed by a slash ("/") then the total number of
- connections on all servers of the same backend.
+ - An IPv4 address optionally followed by a colon (':') and a UDP
+ port. If no port is specified, 514 is used by default (the
+ standard syslog port).
- - a variable "qcur" indicating the current number of requests in the
- server's queue.
+ - An IPv6 address followed by a colon (':') and optionally a UDP
+ port. If no port is specified, 514 is used by default (the
+ standard syslog port).
- Example of a header received by the application server :
- >>> X-Haproxy-Server-State: UP 2/3; name=bck/srv2; node=lb1; weight=1/2; \
- scur=13/22; qcur=0
+ - A filesystem path to a UNIX domain socket, keeping in mind
+ considerations for chroot (be sure the path is accessible
+ inside the chroot) and uid/gid (be sure the path is
+ appropriately writable).
- See also : "option httpchk", "http-check disable-on-404" and
- "http-check send".
+ - A file descriptor number in the form "fd@<number>", which may
+ point to a pipe, terminal, or socket. In this case unbuffered
+ logs are used and one writev() call per log is performed. This
+ is a bit expensive but acceptable for most workloads. Messages
+ sent this way will not be truncated but may be dropped, in
+ which case the DroppedLogs counter will be incremented. The
+ writev() call is atomic even on pipes for messages up to
+ PIPE_BUF size, which POSIX recommends to be at least 512 and
+ which is 4096 bytes on most modern operating systems. Any
+ larger message may be interleaved with messages from other
+ processes. Exceptionally for debugging purposes the file
+ descriptor may also be directed to a file, but doing so will
+ significantly slow HAProxy down as non-blocking calls will be
+ ignored. Also there will be no way to purge nor rotate this
+ file without restarting the process. Note that the configured
+ syslog format is preserved, so the output is suitable for use
+ with a TCP syslog server. See also the "short" and "raw"
+ formats below.
+ - "stdout" / "stderr", which are respectively aliases for "fd@1"
+ and "fd@2", see above.
-http-check set-var(<var-name>[,<cond>...]) <expr>
-http-check set-var-fmt(<var-name>[,<cond>...]) <fmt>
- This operation sets the content of a variable. The variable is declared inline.
+ - A ring buffer in the form "ring@<name>", which will correspond
+ to an in-memory ring buffer accessible over the CLI using the
+ "show events" command, which will also list existing rings and
+ their sizes. Such buffers are lost on reload or restart but
+ when used as a complement this can help troubleshooting by
+ having the logs instantly available. See section 12.5 about
+ rings.
- May be used in the following contexts: tcp, http
+ - A log backend in the form "backend@<name>", which will send
+ log messages to the corresponding log backend responsible for
+ sending the message to the proper server according to the
+ backend's lb settings. A log backend is a backend section with
+ "mode log" set (see "mode" for more information).
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+                 - An explicit stream address prefix such as "tcp@", "tcp6@",
+ "tcp4@" or "uxst@" will allocate an implicit ring buffer with
+ a stream forward server targeting the given address.
- Arguments :
- <var-name> The name of the variable. Only "proc", "sess" and "check"
- scopes can be used. See section 2.8 about variables for details.
+                 You may want to reference some environment variables in the
+                 address parameter; see section 2.3 about environment
+                 variables.
- <cond> A set of conditions that must all be true for the variable to
- actually be set (such as "ifnotempty", "ifgt" ...). See the
- set-var converter's description for a full list of possible
- conditions.
+ <length> is an optional maximum line length. Log lines larger than this
+ value will be truncated before being sent. The reason is that
+ syslog servers act differently on log line length. All servers
+ support the default value of 1024, but some servers simply drop
+ larger lines while others do log them. If a server supports long
+ lines, it may make sense to set this value here in order to avoid
+ truncating long lines. Similarly, if a server drops long lines,
+ it is preferable to truncate them before sending them. Accepted
+ values are 80 to 65535 inclusive. The default value of 1024 is
+ generally fine for all standard usages. Some specific cases of
+ long captures or JSON-formatted logs may require larger values.
+ You may also need to increase "tune.http.logurilen" if your
+ request URIs are truncated.
- <expr> Is a sample-fetch expression potentially followed by converters.
+ <ranges> A list of comma-separated ranges to identify the logs to sample.
+ This is used to balance the load of the logs to send to the log
+ server. The limits of the ranges cannot be null. They are numbered
+ from 1. The size or period (in number of logs) of the sample must
+ be set with <sample_size> parameter.
- <fmt> This is the value expressed using Custom log format (see Custom
- Log Format in section 8.2.6).
+  <sample_size>
+             The size of the sample in number of logs to consider when
+             balancing their logging loads. It is used to balance the load of
+             the logs to send to the syslog server. This size must be greater
+             than or equal to the maximum of the high limits of the ranges
+             (see also the <ranges> parameter).
- Examples :
- http-check set-var(check.port) int(1234)
- http-check set-var-fmt(check.port) "name=%H"
+ <format> is the log format used when generating syslog messages. It may be
+ one of the following :
+      local     Analogous to the rfc3164 syslog message format, except that
+                the hostname field is stripped. This is the default.
+                Note: option "log-send-hostname" switches the default to
+                rfc3164.
-http-check unset-var(<var-name>)
- Free a reference to a variable within its scope.
+ rfc3164 The RFC3164 syslog message format.
+ (https://tools.ietf.org/html/rfc3164)
- May be used in the following contexts: tcp, http
+ rfc5424 The RFC5424 syslog message format.
+ (https://tools.ietf.org/html/rfc5424)
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+ priority A message containing only a level plus syslog facility between
+ angle brackets such as '<63>', followed by the text. The PID,
+ date, time, process name and system name are omitted. This is
+ designed to be used with a local log server.
- Arguments :
- <var-name> The name of the variable. Only "proc", "sess" and "check"
- scopes can be used. See section 2.8 about variables for details.
+ short A message containing only a level between angle brackets such as
+ '<3>', followed by the text. The PID, date, time, process name
+ and system name are omitted. This is designed to be used with a
+ local log server. This format is compatible with what the
+ systemd logger consumes.
- Examples :
- http-check unset-var(check.port)
+ timed A message containing only a level between angle brackets such as
+ '<3>', followed by ISO date and by the text. The PID, process
+ name and system name are omitted. This is designed to be
+ used with a local log server.
+ iso A message containing only the ISO date, followed by the text.
+ The PID, process name and system name are omitted. This is
+ designed to be used with a local log server.
-http-error status <code> [content-type <type>]
- [ { default-errorfiles | errorfile <file> | errorfiles <name> |
- file <file> | lf-file <file> | string <str> | lf-string <fmt> } ]
- [ hdr <name> <fmt> ]*
- Defines a custom error message to use instead of errors generated by HAProxy.
+ raw A message containing only the text. The level, PID, date, time,
+ process name and system name are omitted. This is designed to
+ be used in containers or during development, where the severity
+ only depends on the file descriptor used (stdout/stderr).
- May be used in the following contexts: http
+ <prof> name of the optional "log-profile" section that will be
+ considered during the log building process to override some
+ log options. Check out "8.3.5. Log profiles" for more info.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ <facility> must be one of the 24 standard syslog facilities :
- Arguments :
- status <code> is the HTTP status code. It must be specified.
- Currently, HAProxy is capable of generating codes
- 200, 400, 401, 403, 404, 405, 407, 408, 410, 413,
- 414, 425, 429, 431, 500, 501, 502, 503, and 504.
+ kern user mail daemon auth syslog lpr news
+ uucp cron auth2 ftp ntp audit alert cron2
+ local0 local1 local2 local3 local4 local5 local6 local7
- content-type <type> is the response content type, for instance
- "text/plain". This parameter is ignored and should be
- omitted when an errorfile is configured or when the
- payload is empty. Otherwise, it must be defined.
+ Note that the facility is ignored for the "short" and "raw"
+ formats, but still required as a positional field. It is
+ recommended to use "daemon" in this case to make it clear that
+ it's only supposed to be used locally.
- default-errorfiles Reset the previously defined error message for current
- proxy for the status <code>. If used on a backend, the
- frontend error message is used, if defined. If used on
- a frontend, the default error message is used.
+ <level> is optional and can be specified to filter outgoing messages. By
+ default, all messages are sent. If a level is specified, only
+ messages with a severity at least as important as this level
+ will be sent. An optional minimum level can be specified. If it
+ is set, logs emitted with a more severe level than this one will
+ be capped to this level. This is used to avoid sending "emerg"
+ messages on all terminals on some default syslog configurations.
+ Eight levels are known :
- errorfile <file> designates a file containing the full HTTP response.
- It is recommended to follow the common practice of
- appending ".http" to the filename so that people do
- not confuse the response with HTML error pages, and to
- use absolute paths, since files are read before any
- chroot is performed.
+ emerg alert crit err warning notice info debug
- errorfiles <name> designates the http-errors section to use to import
- the error message with the status code <code>. If no
- such message is found, the proxy's error messages are
- considered.
+ It is important to keep in mind that it is the frontend which decides what to
+ log from a connection, and that in case of content switching, the log entries
+ from the backend will be ignored. Connections are logged at level "info".
- file <file> specifies the file to use as response payload. If the
- file is not empty, its content-type must be set as
- argument to "content-type", otherwise, any
- "content-type" argument is ignored. <file> is
- considered as a raw string.
+  However, backend log declarations define how and where server status changes
+  will be logged. Level "notice" will be used to indicate a server going up,
+  "warning" will be used for termination signals and definitive service
+  termination, and "alert" will be used when a server goes down.
- string <str> specifies the raw string to use as response payload.
- The content-type must always be set as argument to
- "content-type".
+ Note : According to RFC3164, messages are truncated to 1024 bytes before
+ being emitted.
- lf-file <file> specifies the file to use as response payload. If the
- file is not empty, its content-type must be set as
- argument to "content-type", otherwise, any
- "content-type" argument is ignored. <file> is
- evaluated as a Custom log format (see section 8.2.6).
+ Example :
+ log global
+ log stdout format short daemon # send log to systemd
+ log stdout format raw daemon # send everything to stdout
+ log stderr format raw daemon notice # send important events to stderr
+ log 127.0.0.1:514 local0 notice # only send important events
+ log tcp@127.0.0.1:514 local0 notice notice # same but limit output
+ # level and send in tcp
+ log "${LOCAL_SYSLOG}:514" local0 notice # send to local server
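+
+  As a complement to the example above, the following sketch combines log
+  sampling with an in-memory ring target. The ring name "myring" and the
+  addresses are illustrative only; see section 12.5 about ring sections.
+
+  Example :
+    ring myring
+      size 32768
+      format rfc5424
+
+    frontend fe
+      bind :8080
+      log 127.0.0.1:514 sample 1:10 local0  # send 1 log out of every 10
+      log ring@myring local0                # also keep logs in the ring
+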
- lf-string <str> specifies the log-format string to use as response
- payload. The content-type must always be set as
- argument to "content-type".
+log-format <fmt>
+ Specifies the custom log format string to use for traffic logs
- hdr <name> <fmt> adds to the response the HTTP header field whose name
- is specified in <name> and whose value is defined by
- <fmt>, which follows the Custom log format rules (see
- section 8.2.6). This parameter is ignored if an
- errorfile is used.
+ May be used in the following contexts: tcp, http
- This directive may be used instead of "errorfile", to define a custom error
- message. As "errorfile" directive, it is used for errors detected and
- returned by HAProxy. If an errorfile is defined, it is parsed when HAProxy
- starts and must be valid according to the HTTP standards. The generated
- response must not exceed the configured buffer size (BUFFSIZE), otherwise an
- internal error will be returned. Finally, if you consider to use some
- http-after-response rules to rewrite these errors, the reserved buffer space
- should be available (see "tune.maxrewrite").
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | no
- The files are read at the same time as the configuration and kept in memory.
- For this reason, the errors continue to be returned even when the process is
- chrooted, and no file change is considered while the process is running.
+ This directive specifies the log format string that will be used for all logs
+ resulting from traffic passing through the frontend using this line. If the
+ directive is used in a defaults section, all subsequent frontends will use
+ the same log format. Please see section 8.2.6 which covers the custom log
+ format string in depth.
- Note: 400/408/500 errors emitted in early stage of the request parsing are
- handled by the multiplexer at a lower level. No custom formatting is
- supported at this level. Thus only static error messages, defined with
- "errorfile" directive, are supported. However, this limitation only
- exists during the request headers parsing or between two transactions.
+ A specific log-format used only in case of connection error can also be
+ defined, see the "error-log-format" option.
- See also : "errorfile", "errorfiles", "errorloc", "errorloc302",
- "errorloc303" and section 3.7 about http-errors.
+ "log-format" directive overrides previous "option tcplog", "log-format",
+ "option httplog" and "option httpslog" directives.
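+
+  The sketch below illustrates a simple custom format; the aliases used
+  (%ci, %cp, %tr, %ft, %b, %s, %ST, %B, %{+Q}r) are standard Custom log
+  format aliases described in section 8.2.6, and the resulting layout is
+  only an example.
+
+  Example :
+    frontend www
+      mode http
+      bind :8080
+      log global
+      log-format "%ci:%cp [%tr] %ft %b/%s %ST %B %{+Q}r"
+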
+log-format-sd <fmt>
+ Specifies the Custom log format string used to produce RFC5424 structured-data
-http-request <action> [options...] [ { if | unless } <condition> ]
- Access control for Layer 7 requests
+ May be used in the following contexts: tcp, http
- May be used in the following contexts: http
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | no
- May be used in sections: defaults | frontend | listen | backend
- yes(!) | yes | yes | yes
+ This directive specifies the RFC5424 structured-data log format string that
+ will be used for all logs resulting from traffic passing through the frontend
+ using this line. If the directive is used in a defaults section, all
+ subsequent frontends will use the same log format. Please see section 8.2.6
+ which covers the log format string in depth.
- The http-request statement defines a set of rules which apply to layer 7
- processing. The rules are evaluated in their declaration order when they are
- met in a frontend, listen or backend section. Any rule may optionally be
- followed by an ACL-based condition, in which case it will only be evaluated
- if the condition evaluates to true.
+ See https://tools.ietf.org/html/rfc5424#section-6.3 for more information
+ about the RFC5424 structured-data part.
- The condition is evaluated just before the action is executed, and the action
- is performed exactly once. As such, there is no problem if an action changes
- an element which is checked as part of the condition. This also means that
- multiple actions may rely on the same condition so that the first action that
- changes the condition's evaluation is sufficient to implicitly disable the
- remaining actions. This is used for example when trying to assign a value to
- a variable from various sources when it's empty. There is no limit to the
- number of "http-request" statements per instance.
+ Note : This log format string will be used only for loggers that have set
+ log format to "rfc5424".
- The first keyword after "http-request" in the syntax is the rule's action,
- optionally followed by a varying number of arguments for the action. The
- supported actions and their respective syntaxes are enumerated in section 4.3
- "Actions" (look for actions which tick "HTTP Req").
+ Example :
+ log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"]
- This directive is only available from named defaults sections, not anonymous
- ones. Rules defined in the defaults section are evaluated before ones in the
- associated proxy section. To avoid ambiguities, in this case the same
- defaults section cannot be used by proxies with the frontend capability and
- by proxies with the backend capability. It means a listen section cannot use
- a defaults section defining such rules.
+log-steps <steps>
+ Specifies at which steps during transaction processing logs should be
+ generated.
- Example:
- acl nagios src 192.168.129.3
- acl local_net src 192.168.0.0/16
- acl auth_ok http_auth(L1)
+ May be used in the following contexts: tcp, http
- http-request allow if nagios
- http-request allow if local_net auth_ok
- http-request auth realm Gimme if local_net auth_ok
- http-request deny
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | no
- Example:
- acl key req.hdr(X-Add-Acl-Key) -m found
- acl add path /addacl
- acl del path /delacl
+  During TCP/HTTP transaction processing, HAProxy may produce logs at
+  different steps (i.e. accept, connect, request, response, close).
- acl myhost hdr(Host) -f myhost.lst
+  By default, HAProxy emits a single log per transaction, once all of the
+  items used in the logformat expression have been satisfied, which means
+  that in practice the log is usually emitted at the end of the transaction
+  (after the end of the response for HTTP or end of connection for TCP),
+  unless "option logasap" is used.
- http-request add-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key add
- http-request del-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key del
+  The "log-steps" directive makes it possible to refine the precise instants
+  at which logs will be emitted, and even to emit multiple logs for the same
+  transaction. The special value 'all' may be used to enable all available
+  log origins, making it possible to track a transaction from accept to
+  close. Individual log origins may also be specified by listing their names,
+  separated by commas, to selectively enable the steps at which logs should
+  be produced.
+
+ Common log origins are: accept, connect, request, response, close.
Example:
- acl value req.hdr(X-Value) -m found
- acl setmap path /setmap
- acl delmap path /delmap
+ frontend myfront
+ option httplog
+      log-steps accept,close  # only log accept and close for the txn
- use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found }
+  Log origins specified as "logging steps" (such as accept, close) can be
+  used as-is in log-profiles (after the 'on' directive). Combining
+  "log-steps" with log-profiles provides fine-grained control over the logs
+  automatically generated by HAProxy during transaction processing.
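+
+  A possible combination is sketched below; the profile name "myprof" and its
+  contents are hypothetical, and the exact "log-profile" syntax is described
+  in section 8.3.5.
+
+  Example :
+    log-profile myprof
+      on accept format "accepted %ci:%cp"
+      on close format "closed after %B bytes"
+
+    frontend myfront
+      mode http
+      bind :8080
+      log stdout format raw profile myprof local0
+      log-steps accept,close
+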
- http-request set-map(map.lst) %[src] %[req.hdr(X-Value)] if setmap value
- http-request del-map(map.lst) %[src] if delmap
+ See also : "log-profile"
- See also : "stats http-request", section 3.4 about userlists and section 7
- about ACL usage.
+log-tag <string>
+ Specifies the log tag to use for all outgoing logs
-http-response <action> <options...> [ { if | unless } <condition> ]
- Access control for Layer 7 responses
+ May be used in the following contexts: tcp, http, log
- May be used in the following contexts: http
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
- May be used in sections: defaults | frontend | listen | backend
- yes(!) | yes | yes | yes
+  Sets the tag field in the syslog header to this string. It defaults to the
+  log-tag set in the global section, otherwise the program name as launched
+  from the command line, which usually is "haproxy". Sometimes it can be
+  useful to differentiate between multiple processes running on the same
+  host, or to differentiate customer instances running in the same process.
+  In the backend, logs about servers going up or down will use this tag. As a
+  hint, it can be convenient to set a log-tag related to a hosted customer in
+  a defaults section, then declare all the frontends and backends for that
+  customer, and finally start another customer in a new defaults section. See
+  also the global "log-tag" directive.
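+
+  The per-customer pattern described above may look like the following
+  sketch (customer names and ports are illustrative only):
+
+  Example :
+    defaults customer-a
+      mode http
+      log global
+      log-tag cust-a
+
+    frontend fe-a
+      bind :8001
+
+    defaults customer-b
+      mode http
+      log global
+      log-tag cust-b
+
+    frontend fe-b
+      bind :8002
+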
- The http-response statement defines a set of rules which apply to layer 7
- processing. The rules are evaluated in their declaration order when they are
- met in a frontend, listen or backend section. Since these rules apply on
- responses, the backend rules are applied first, followed by the frontend's
- rules. Any rule may optionally be followed by an ACL-based condition, in
- which case it will only be evaluated if the condition evaluates to true.
+max-keep-alive-queue <value>
+ Set the maximum server queue size for maintaining keep-alive connections
- The condition is evaluated just before the action is executed, and the action
- is performed exactly once. As such, there is no problem if an action changes
- an element which is checked as part of the condition. This also means that
- multiple actions may rely on the same condition so that the first action that
- changes the condition's evaluation is sufficient to implicitly disable the
- remaining actions. This is used for example when trying to assign a value to
- a variable from various sources when it's empty. There is no limit to the
- number of "http-response" statements per instance.
+ May be used in the following contexts: http
- The first keyword after "http-response" in the syntax is the rule's action,
- optionally followed by a varying number of arguments for the action. The
- supported actions and their respective syntaxes are enumerated in section 4.3
- "Actions" (look for actions which tick "HTTP Res").
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- This directive is only available from named defaults sections, not anonymous
- ones. Rules defined in the defaults section are evaluated before ones in the
- associated proxy section. To avoid ambiguities, in this case the same
- defaults section cannot be used by proxies with the frontend capability and
- by proxies with the backend capability. It means a listen section cannot use
- a defaults section defining such rules.
+ HTTP keep-alive tries to reuse the same server connection whenever possible,
+ but sometimes it can be counter-productive, for example if a server has a lot
+ of connections while other ones are idle. This is especially true for static
+ servers.
- Example:
- acl key_acl res.hdr(X-Acl-Key) -m found
+ The purpose of this setting is to set a threshold on the number of queued
+ connections at which HAProxy stops trying to reuse the same server and prefers
+ to find another one. The default value, -1, means there is no limit. A value
+ of zero means that keep-alive requests will never be queued. For very close
+  servers which can be reached with a low latency and which are not sensitive
+  to breaking keep-alive, a low value is recommended (e.g. a local static
+  server can use a value of 10 or less). For remote servers suffering from a
+  high latency, higher values might be needed to cover for the latency and/or
+  the cost of picking a different server.
- acl myhost hdr(Host) -f myhost.lst
+  Note that this has no impact on requests which are maintained on the same
+  server following a 401 response. They will still go to the same server even
+  if they have to be queued.
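+
+  The sketch below applies a low threshold to a farm of close static servers,
+  as recommended above (addresses and names are illustrative only):
+
+  Example :
+    backend static
+      mode http
+      max-keep-alive-queue 10
+      server s1 192.168.0.10:80 maxconn 100
+      server s2 192.168.0.11:80 maxconn 100
+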
- http-response add-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl
- http-response del-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl
+ See also : "option http-server-close", "option prefer-last-server", server
+ "maxconn" and cookie persistence.
- Example:
- acl value res.hdr(X-Value) -m found
+max-session-srv-conns <nb>
+ Set the maximum number of outgoing connections we can keep idling for a given
+ client session. The default is 5 (it precisely equals MAX_SRV_LIST which is
+ defined at build time).
- use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found }
+ May be used in the following contexts: tcp, http
- http-response set-map(map.lst) %[src] %[res.hdr(X-Value)] if value
- http-response del-map(map.lst) %[src] if ! value
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
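+
+  The sketch below lowers the limit below its default of 5, as might be done
+  on a memory-constrained setup (the value is illustrative only):
+
+  Example :
+    defaults
+      mode http
+      max-session-srv-conns 2
+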
- See also : "http-request", section 3.4 about userlists and section 7 about
- ACL usage.
+maxconn <conns>
+ Fix the maximum number of concurrent connections on a frontend
-http-reuse { never | safe | aggressive | always }
- Declare how idle HTTP connections may be shared between requests
+ May be used in the following contexts: tcp, http
- May be used in the following contexts: http
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+ Arguments :
+ <conns> is the maximum number of concurrent connections the frontend will
+ accept to serve. Excess connections will be queued by the system
+ in the socket's listen queue and will be served once a connection
+ closes.
- In order to avoid the cost of setting up new connections to backend servers
- for each HTTP request, HAProxy tries to keep such idle connections opened
- after being used. These connections are specific to a server and are stored
- in a list called a pool, and are grouped together by a set of common key
- properties. Subsequent HTTP requests will cause a lookup of a compatible
- connection sharing identical properties in the associated pool and result in
- this connection being reused instead of establishing a new one.
+ If the system supports it, it can be useful on big sites to raise this limit
+ very high so that HAProxy manages connection queues, instead of leaving the
+ clients with unanswered connection attempts. This value should not exceed the
+ global maxconn. Also, keep in mind that a connection contains two buffers
+ of tune.bufsize (16kB by default) each, as well as some other data resulting
+ in about 33 kB of RAM being consumed per established connection. That means
+ that a medium system equipped with 1GB of RAM can withstand around
+ 20000-25000 concurrent connections if properly tuned.
- A limit on the number of idle connections to keep on a server can be
- specified via the "pool-max-conn" server keyword. Unused connections are
- periodically purged according to the "pool-purge-delay" interval.
+ Also, when <conns> is set to large values, it is possible that the servers
+ are not sized to accept such loads, and for this reason it is generally wise
+ to assign them some reasonable connection limits.
- The following connection properties are used to determine if an idle
- connection is eligible for reuse on a given request:
- - source and destination addresses
- - proxy protocol
- - TOS and mark socket options
- - connection name, determined either by the result of the evaluation of the
- "pool-conn-name" expression if present, otherwise by the "sni" expression
+ When this value is set to zero, which is the default, the global "maxconn"
+ value is used.
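+
+  As a sketch, the setting below accepts up to 20000 concurrent connections
+  on a frontend, which per the estimate above would consume roughly 660 MB
+  of RAM when fully loaded (the numbers are illustrative only):
+
+  Example :
+    frontend www
+      bind :80
+      maxconn 20000
+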
- In some occasions, connection lookup or reuse is not performed due to extra
- restrictions. This is determined by the reuse strategy specified via the
- keyword argument:
+ See also : "server", global section's "maxconn", "fullconn"
- - "never" : idle connections are never shared between sessions. This mode
- may be enforced to cancel a different strategy inherited from
- a defaults section or for troubleshooting. For example, if an
- old bogus application considers that multiple requests over
- the same connection come from the same client and it is not
- possible to fix the application, it may be desirable to
- disable connection sharing in a single backend. An example of
- such an application could be an old HAProxy using cookie
- insertion in tunnel mode and not checking any request past the
- first one.
- - "safe" : this is the default and the recommended strategy. The first
- request of a session is always sent over its own connection,
- and only subsequent requests may be dispatched over other
- existing connections. This ensures that in case the server
- closes the connection when the request is being sent, the
- browser can decide to silently retry it. Since it is exactly
- equivalent to regular keep-alive, there should be no side
- effects. There is also a special handling for the connections
- using protocols subject to Head-of-line blocking (backend with
- h2 or fcgi). In this case, when at least one stream is
- processed, the used connection is reserved to handle streams
- of the same session. When no more streams are processed, the
- connection is released and can be reused.
+mode { tcp|http|log|spop }
+ Set the running mode or protocol of the instance
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ tcp The instance will work in pure TCP mode. A full-duplex connection
+ will be established between clients and servers, and no layer 7
+ examination will be performed. This is the default mode. It
+ should be used for SSL, SSH, SMTP, ...
- - "aggressive" : this mode may be useful in webservices environments where
- all servers are not necessarily known and where it would be
- appreciable to deliver most first requests over existing
- connections. In this case, first requests are only delivered
- over existing connections that have been reused at least once,
- proving that the server correctly supports connection reuse.
- It should only be used when it's sure that the client can
- retry a failed request once in a while and where the benefit
- of aggressive connection reuse significantly outweighs the
- downsides of rare connection failures.
-
- - "always" : this mode is only recommended when the path to the server is
- known for never breaking existing connections quickly after
- releasing them. It allows the first request of a session to be
- sent to an existing connection. This can provide a significant
- performance increase over the "safe" strategy when the backend
- is a cache farm, since such components tend to show a
- consistent behavior and will benefit from the connection
- sharing. It is recommended that the "http-keep-alive" timeout
- remains low in this mode so that no dead connections remain
- usable. In most cases, this will lead to the same performance
- gains as "aggressive" but with more risks. It should only be
- used when it improves the situation over "aggressive".
+ http The instance will work in HTTP mode. The client request will be
+ analyzed in depth before connecting to any server. Any request
+ which is not RFC-compliant will be rejected. Layer 7 filtering,
+ processing and switching will be possible. This is the mode which
+ brings HAProxy most of its value.
- Also note that connections with certain bogus authentication schemes (relying
- on the connection) like NTLM are marked private if possible and never shared.
- This won't be the case however when using a protocol with multiplexing
- abilities and using reuse mode level value greater than the default "safe"
- strategy as in this case nothing prevents the connection from being already
- shared.
+ log When used in a backend section, it will turn the backend into a
+ log backend. Such a backend can be used as a log destination for
+ any "log" directive by using the "backend@<name>" syntax. Log
+ messages will be distributed to the servers of the backend
+ according to the lb settings, which can be configured using the
+ "balance" keyword. Log backends support UDP servers by prefixing
+ the server's address with the "udp@" prefix. Common backend and
+ server features are supported, but not TCP or HTTP specific ones.
- The rules to decide to keep an idle connection opened or to close it after
- processing are also governed by the "tune.pool-low-fd-ratio" (default: 20%)
- and "tune.pool-high-fd-ratio" (default: 25%). These correspond to the
- percentage of total file descriptors spent in idle connections above which
- haproxy will respectively refrain from keeping a connection opened after a
- response, and actively kill idle connections. Some setups using a very high
- ratio of idle connections, either because of too low a global "maxconn", or
- due to a lot of HTTP/2 or HTTP/3 traffic on the frontend (few connections)
- but HTTP/1 connections on the backend, may observe a lower reuse rate because
- too few connections are kept open. It may be desirable in this case to adjust
- such thresholds or simply to increase the global "maxconn" value.
+ spop When used in a backend section, it will turn the backend into an
+ SPOP backend. This mode is mandatory and automatically set, if
+ necessary, for backends referenced by SPOE engines.
- When thread groups are explicitly enabled, it is important to understand that
- idle connections are only usable between threads from a same group. As such
- it may happen that unfair load between groups leads to more idle connections
- being needed, causing a lower reuse rate. The same solution may then be
- applied (increase global "maxconn" or increase pool ratios).
+ When doing content switching, it is mandatory that the frontend and the
+ backend are in the same mode (generally HTTP), otherwise the configuration
+ will be refused.
- See also : "option http-keep-alive", "pool-conn-name", "pool-max-conn",
- "pool-purge-delay", "server maxconn", "sni", "thread-groups",
- "tune.pool-high-fd-ratio", "tune.pool-low-fd-ratio"
+ Example :
+ defaults http_instances
+ mode http
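+
+  As another illustrative sketch (names and addresses are hypothetical), a
+  log backend used as a destination for a frontend's "log" directive could
+  be declared as :
+
+  Example :
+      backend mysyslog
+          mode log
+          balance roundrobin
+          server log1 udp@10.0.0.1:514
+          server log2 udp@10.0.0.2:514
+
+      frontend www
+          mode http
+          bind :80
+          log backend@mysyslog local0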
-http-send-name-header [<header>]
- Add the server name to a request. Use the header string given by <header>
+monitor fail { if | unless } <condition>
+ Add a condition to report a failure to a monitor HTTP request.
May be used in the following contexts: http
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
Arguments :
- <header> The header string to use to send the server name
+ if <cond> the monitor request will fail if the condition is satisfied,
+ and will succeed otherwise. The condition should describe a
+ combined test which must induce a failure if all conditions
+ are met, for instance a low number of servers both in a
+ backend and its backup.
- The "http-send-name-header" statement causes the header field named <header>
- to be set to the name of the target server at the moment the request is about
- to be sent on the wire. Any existing occurrences of this header are removed.
- Upon retries and redispatches, the header field is updated to always reflect
- the server being attempted to connect to. Given that this header is modified
- very late in the connection setup, it may have unexpected effects on already
- modified headers. For example using it with transport-level header such as
- connection, content-length, transfer-encoding and so on will likely result in
- invalid requests being sent to the server. Additionally it has been reported
- that this directive is currently being used as a way to overwrite the Host
- header field in outgoing requests; while this trick has been known to work
- as a side effect of the feature for some time, it is not officially supported
- and might possibly not work anymore in a future version depending on the
- technical difficulties this feature induces. A long-term solution instead
- consists in fixing the application which required this trick so that it binds
- to the correct host name.
+ unless <cond> the monitor request will succeed only if the condition is
+ satisfied, and will fail otherwise. Such a condition may be
+ based on a test on the presence of a minimum number of active
+ servers in a list of backends.
- See also : "server"
+ This statement adds a condition which can force the response to a monitor
+ request to report a failure. By default, when an external component queries
+ the URI dedicated to monitoring, a 200 response is returned. When one of the
+ conditions above is met, HAProxy will return 503 instead of 200. This is
+ very useful to report a site failure to an external component which may base
+ routing advertisements between multiple sites on the availability reported by
+ HAProxy. In this case, one would rely on an ACL involving the "nbsrv"
+ criterion. Note that "monitor fail" only works in HTTP mode. Both status
+ messages may be tweaked using "errorfile" or "errorloc" if needed.
-id <value>
- Set a persistent ID to a proxy.
+ Example:
+ frontend www
+ mode http
+ acl site_dead nbsrv(dynamic) lt 2
+ acl site_dead nbsrv(static) lt 2
+ monitor-uri /site_alive
+ monitor fail if site_dead
- May be used in the following contexts: tcp, http, log
+ See also : "monitor-uri", "errorfile", "errorloc"
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | yes
- Arguments : none
+monitor-uri <uri>
+ Intercept a URI used by external components' monitor requests
- Set a persistent ID for the proxy. This ID must be unique and positive.
- An unused ID will automatically be assigned if unset. The first assigned
- value will be 1. This ID is currently only returned in statistics.
+ May be used in the following contexts: http
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
-ignore-persist { if | unless } <condition>
- Declare a condition to ignore persistence
+ Arguments :
+ <uri> is the exact URI which we want to intercept to return HAProxy's
+ health status instead of forwarding the request.
- May be used in the following contexts: tcp, http
+ When an HTTP request referencing <uri> is received on a frontend,
+ HAProxy will not forward it nor log it, but will instead return either
+ "HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on failure
+ conditions defined with "monitor fail". This is normally enough for any
+ front-end HTTP probe to detect that the service is UP and running without
+ forwarding the request to a backend server. Note that the HTTP method, the
+ version and all headers are ignored, but the request must at least be valid
+ at the HTTP level. This keyword may only be used with an HTTP-mode frontend.
- May be used in sections: defaults | frontend | listen | backend
- no | no | yes | yes
+ Monitor requests are processed very early, just after the request is parsed
+ and even before any "http-request" rule. The only rulesets applied before
+ them are the tcp-request ones. They cannot be logged either; this is
+ intentional. Only one URI may be configured for monitoring; when multiple
+ "monitor-uri" statements are present, the last one will define the URI to
+ be used. They are only used to report HAProxy's health to an upper component,
+ nothing more. However, it is possible to add any number of conditions using
+ "monitor fail" and ACLs so that the result can be adjusted to whatever check
+ can be imagined (most often the number of available servers in a backend).
- By default, when cookie persistence is enabled, every requests containing
- the cookie are unconditionally persistent (assuming the target server is up
- and running).
+ Note: if <uri> starts with a slash ('/'), the matching is performed against
+ the request's path instead of the request's uri. This is a workaround to
+ let HTTP/2 requests match the monitor-uri. Indeed, in HTTP/2, clients
+ are encouraged to send absolute URIs only.
- The "ignore-persist" statement allows one to declare various ACL-based
- conditions which, when met, will cause a request to ignore persistence.
- This is sometimes useful to load balance requests for static files, which
- often don't require persistence. This can also be used to fully disable
- persistence for a specific User-Agent (for example, some web crawler bots).
+ Example :
+ # Use /haproxy_test to report HAProxy's status
+ frontend www
+ mode http
+ monitor-uri /haproxy_test
- The persistence is ignored when an "if" condition is met, or unless an
- "unless" condition is met.
+ See also : "monitor fail"
- Example:
- acl url_static path_beg /static /images /img /css
- acl url_static path_end .gif .png .jpg .css .js
- ignore-persist if url_static
- See also : "force-persist", "cookie", and section 7 about ACL usage.
+option abortonclose
+no option abortonclose
+ Enable or disable early dropping of aborted requests pending in queues.
-load-server-state-from-file { global | local | none }
- Allow seamless reload of HAProxy
+ May be used in the following contexts: tcp, http
- May be used in the following contexts: tcp, http, log
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+ Arguments : none
- This directive points HAProxy to a file where server state from previous
- running process has been saved. That way, when starting up, before handling
- traffic, the new process can apply old states to servers exactly has if no
- reload occurred. The purpose of the "load-server-state-from-file" directive is
- to tell HAProxy which file to use. For now, only 2 arguments to either prevent
- loading state or load states from a file containing all backends and servers.
- The state file can be generated by running the command "show servers state"
- over the stats socket and redirect output.
+ In presence of very high loads, the servers will take some time to respond.
+ The per-instance connection queue will inflate, and the response time will
+ increase in proportion to the size of the queue times the average per-stream
+ response time. When clients wait for more than a few seconds, they will
+ often hit the "STOP" button on their browser, leaving a useless request in
+ the queue, and slowing down other users, and the servers as well, because the
+ request will eventually be served, then aborted at the first error
+ encountered while delivering the response.
- The format of the file is versioned and is very specific. To understand it,
- please read the documentation of the "show servers state" command (chapter
- 9.3 of Management Guide).
+ As there is no way to distinguish between a full STOP and a simple output
+ close on the client side, HTTP agents should be conservative and consider
+ that the client might only have closed its output channel while waiting for
+ the response. However, this introduces risks of congestion when lots of users
+ do the same, and is completely useless nowadays because probably no client at
+ all will close the stream while waiting for the response. Some HTTP agents
+ support this behavior (Squid, Apache, HAProxy), and others do not (TUX, most
+ hardware-based load balancers). So the probability for a closed input channel
+ to represent a user hitting the "STOP" button is close to 100%, and the risk
+ of being the single component to break rare but valid traffic is extremely
+ low, which adds to the temptation to abort a stream early while it is still
+ not served, rather than pollute the servers.
- Arguments:
- global load the content of the file pointed by the global directive
- named "server-state-file".
+ In HAProxy, the user can choose the desired behavior using the option
+ "abortonclose". By default (without the option) the behavior is HTTP
+ compliant and aborted requests will be served. But when the option is
+ specified, a stream with an incoming channel closed will be aborted while
+ it is still possible, either pending in the queue for a connection slot, or
+ during the connection establishment if the server has not yet acknowledged
+ the connection request. This considerably reduces the queue size and the load
+ on saturated servers when users are tempted to click on STOP, which in turn
+ reduces the response time for other users.
- local load the content of the file pointed by the directive
- "server-state-file-name" if set. If not set, then the backend
- name is used as a file name.
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
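+
+  As an illustrative sketch (the server name, address and limits are
+  arbitrary), this option is typically combined with bounded queues so that
+  aborted requests free queue slots quickly :
+
+  Example :
+      backend dynamic
+          option abortonclose
+          timeout queue 5s
+          server app1 192.168.0.10:80 maxconn 100 maxqueue 50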
- none don't load any stat for this backend
+ See also : "timeout queue" and server's "maxconn" and "maxqueue" parameters
- Notes:
- - server's IP address is preserved across reloads by default, but the
- order can be changed thanks to the server's "init-addr" setting. This
- means that an IP address change performed on the CLI at run time will
- be preserved, and that any change to the local resolver (e.g. /etc/hosts)
- will possibly not have any effect if the state file is in use.
- - server's weight is applied from previous running process unless it has
- has changed between previous and new configuration files.
+option accept-invalid-http-request (deprecated)
+no option accept-invalid-http-request (deprecated)
+ Enable or disable relaxing of HTTP request parsing
- Example: Minimal configuration
+ The "accept-invalid-http-request" keyword is deprecated, use "option
+ accept-unsafe-violations-in-http-request" instead.
- global
- stats socket /tmp/socket
- server-state-file /tmp/server_state
- defaults
- load-server-state-from-file global
+option accept-invalid-http-response (deprecated)
+no option accept-invalid-http-response (deprecated)
+ Enable or disable relaxing of HTTP response parsing
- backend bk
- server s1 127.0.0.1:22 check weight 11
- server s2 127.0.0.1:22 check weight 12
+ The "accept-invalid-http-response" keyword is deprecated, use "option
+ accept-unsafe-violations-in-http-response" instead.
- Then one can run :
+option accept-unsafe-violations-in-http-request
+no option accept-unsafe-violations-in-http-request
+ Enable or disable relaxing of HTTP request parsing
- socat /tmp/socket - <<< "show servers state" > /tmp/server_state
+ May be used in the following contexts: http
- Content of the file /tmp/server_state would be like this:
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- 1
- # <field names skipped for the doc example>
- 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0
- 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0
+ Arguments : none
- Example: Minimal configuration
+ By default, HAProxy complies with the different HTTP RFCs in terms of message
+ parsing. This means the message parsing is quite strict and causes an error
+ to be returned to the client for malformed messages. This is the desired
+ behavior as such malformed messages are essentially used to build attacks
+ exploiting server weaknesses, and bypass security filtering. Sometimes, a
+ buggy browser will not respect these RFCs for whatever reason (configuration,
+ implementation...) and the issue will not be immediately fixed. In such a
+ case, it is possible to relax HAProxy's parser to accept some invalid
+ requests by specifying this option. Most of the rules concern H1 parsing,
+ for historical reasons. Newer HTTP versions tend to be cleaner and
+ applications follow these protocols more strictly.
- global
- stats socket /tmp/socket
- server-state-base /etc/haproxy/states
+ When this option is set, the following rules are observed:
- defaults
- load-server-state-from-file local
+ * In H1 only, invalid characters, including NULL character, in header name
+ will be accepted;
- backend bk
- server s1 127.0.0.1:22 check weight 11
- server s2 127.0.0.1:22 check weight 12
+ * In H1 only, NULL character in header value will be accepted;
+ * The list of characters allowed to appear in a URI is well defined by
+ RFC3986, and chars 0-31, 32 (space), 34 ('"'), 60 ('<'), 62 ('>'), 92
+ ('\'), 94 ('^'), 96 ('`'), 123 ('{'), 124 ('|'), 125 ('}'), 127 (delete)
+ and anything above are normally not allowed. But here, in H1 only,
+ HAProxy will only block a number of them (0..32, 127);
- Then one can run :
+ * In H1 and H2, URLs containing fragment references ('#' after the path)
+ will be accepted;
- socat /tmp/socket - <<< "show servers state bk" > /etc/haproxy/states/bk
+ * In H1 only, no check will be performed on the authority for CONNECT
+ requests;
- Content of the file /etc/haproxy/states/bk would be like this:
+ * In H1 only, no check will be performed against the authority and the Host
+ header value.
- 1
- # <field names skipped for the doc example>
- 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0
- 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0
+ * In H1 only, tests on the HTTP version will be relaxed. It will allow
+ HTTP/0.9 GET requests to pass through (no version specified), as well as
+ different protocol names (e.g. RTSP), and multiple digits for both the
+ major and the minor version.
- See also: "server-state-file", "server-state-file-name", and
- "show servers state"
+ * In H1 only, WebSocket (RFC6455) requests failing to present a valid
+ "Sec-Websocket-Key" header field will be accepted.
+ This option should never be enabled by default as it hides application bugs
+ and opens security breaches. It should only be deployed after a problem has
+ been confirmed.
-log global
-log <target> [len <length>] [format <format>] [sample <ranges>:<sample_size>]
- [profile <prof>] <facility> [<level> [<minlevel>]]
-no log
- Enable per-instance logging of events and traffic.
+ When this option is enabled, invalid but accepted H1 requests will be
+ captured in order to permit later analysis using the "show errors" request
+ on the UNIX stats socket. Doing this also helps confirm that the issue has
+ been solved.
- May be used in the following contexts: tcp, http, log
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
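+
+  As an illustrative sketch (the frontend name is hypothetical), the option
+  is best enabled only on the specific instance facing the buggy client,
+  once the problem has been confirmed :
+
+  Example :
+      frontend legacy_clients
+          mode http
+          bind :8080
+          option accept-unsafe-violations-in-http-request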
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ See also : "option accept-unsafe-violations-in-http-response" and "show
+ errors" on the stats socket.
- Prefix :
- no should be used when the logger list must be flushed. For example,
- if you don't want to inherit from the default logger list. This
- prefix does not allow arguments.
- Arguments :
- global should be used when the instance's logging parameters are the
- same as the global ones. This is the most common usage. "global"
- replaces all log arguments with those of the log entries found
- in the "global" section. Only one "log global" statement may be
- used per instance, and this form takes no other parameter.
+option accept-unsafe-violations-in-http-response
+no option accept-unsafe-violations-in-http-response
+ Enable or disable relaxing of HTTP response parsing
- <target> indicates where to send the logs. It takes the same format as
- for the "global" section's logs, and can be one of :
+ May be used in the following contexts: http
- - An IPv4 address optionally followed by a colon (':') and a UDP
- port. If no port is specified, 514 is used by default (the
- standard syslog port).
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- - An IPv6 address followed by a colon (':') and optionally a UDP
- port. If no port is specified, 514 is used by default (the
- standard syslog port).
+ Arguments : none
- - A filesystem path to a UNIX domain socket, keeping in mind
- considerations for chroot (be sure the path is accessible
- inside the chroot) and uid/gid (be sure the path is
- appropriately writable).
+ Similarly to "option accept-unsafe-violations-in-http-request", this option
+ may be used to relax parsing rules of HTTP responses. It should only be
+ enabled for trusted legacy servers to accept some invalid responses. Most of
+ the rules concern H1 parsing, for historical reasons. Newer HTTP versions
+ tend to be cleaner and applications follow these protocols more strictly.
- - A file descriptor number in the form "fd@<number>", which may
- point to a pipe, terminal, or socket. In this case unbuffered
- logs are used and one writev() call per log is performed. This
- is a bit expensive but acceptable for most workloads. Messages
- sent this way will not be truncated but may be dropped, in
- which case the DroppedLogs counter will be incremented. The
- writev() call is atomic even on pipes for messages up to
- PIPE_BUF size, which POSIX recommends to be at least 512 and
- which is 4096 bytes on most modern operating systems. Any
- larger message may be interleaved with messages from other
- processes. Exceptionally for debugging purposes the file
- descriptor may also be directed to a file, but doing so will
- significantly slow HAProxy down as non-blocking calls will be
- ignored. Also there will be no way to purge nor rotate this
- file without restarting the process. Note that the configured
- syslog format is preserved, so the output is suitable for use
- with a TCP syslog server. See also the "short" and "raw"
- formats below.
+ When this option is set, the following rules are observed:
- - "stdout" / "stderr", which are respectively aliases for "fd@1"
- and "fd@2", see above.
+ * In H1 only, invalid characters, including NULL character, in header name
+ will be accepted;
- - A ring buffer in the form "ring@<name>", which will correspond
- to an in-memory ring buffer accessible over the CLI using the
- "show events" command, which will also list existing rings and
- their sizes. Such buffers are lost on reload or restart but
- when used as a complement this can help troubleshooting by
- having the logs instantly available.
+ * In H1 only, NULL character in header value will be accepted;
- - A log backend in the form "backend@<name>", which will send
- log messages to the corresponding log backend responsible for
- sending the message to the proper server according to the
- backend's lb settings. A log backend is a backend section with
- "mode log" set (see "mode" for more information).
+ * In H1 only, empty values or several "chunked" value occurrences for
+ Transfer-Encoding header will be accepted;
- - An explicit stream address prefix such as "tcp@","tcp6@",
- "tcp4@" or "uxst@" will allocate an implicit ring buffer with
- a stream forward server targeting the given address.
+ * In H1 only, no check will be performed against the authority and the Host
+ header value.
- You may want to reference some environment variables in the
- address parameter, see section 2.3 about environment variables.
+ * In H1 only, tests on the HTTP version will be relaxed. It will allow
+ different protocol names (e.g. RTSP), and multiple digits for both the
+ major and the minor version.
- <length> is an optional maximum line length. Log lines larger than this
- value will be truncated before being sent. The reason is that
- syslog servers act differently on log line length. All servers
- support the default value of 1024, but some servers simply drop
- larger lines while others do log them. If a server supports long
- lines, it may make sense to set this value here in order to avoid
- truncating long lines. Similarly, if a server drops long lines,
- it is preferable to truncate them before sending them. Accepted
- values are 80 to 65535 inclusive. The default value of 1024 is
- generally fine for all standard usages. Some specific cases of
- long captures or JSON-formatted logs may require larger values.
- You may also need to increase "tune.http.logurilen" if your
- request URIs are truncated.
+ * In H1 only, WebSocket (RFC6455) responses failing to present a valid
+ "Sec-Websocket-Accept" header field will be accepted.
- <ranges> A list of comma-separated ranges to identify the logs to sample.
- This is used to balance the load of the logs to send to the log
- server. The limits of the ranges cannot be null. They are numbered
- from 1. The size or period (in number of logs) of the sample must
- be set with <sample_size> parameter.
+ This option should never be enabled by default as it hides application bugs
+ and opens security breaches. It should only be deployed after a problem has
+ been confirmed.
- <sample_size>
- The size of the sample in number of logs to consider when balancing
- their logging loads. It is used to balance the load of the logs to
- send to the syslog server. This size must be greater or equal to the
- maximum of the high limits of the ranges.
- (see also <ranges> parameter).
+ When this option is enabled, erroneous header names will still be accepted in
+ responses, but the complete response will be captured in order to permit
+ later analysis using the "show errors" request on the UNIX stats socket.
+ Doing this also helps confirm that the issue has been solved.
- <format> is the log format used when generating syslog messages. It may be
- one of the following :
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
- local Analog to rfc3164 syslog message format except that hostname
- field is stripped. This is the default.
- Note: option "log-send-hostname" switches the default to
- rfc3164.
+ See also : "option accept-unsafe-violations-in-http-request" and "show
+ errors" on the stats socket.
- rfc3164 The RFC3164 syslog message format.
- (https://tools.ietf.org/html/rfc3164)
- rfc5424 The RFC5424 syslog message format.
- (https://tools.ietf.org/html/rfc5424)
+option allbackups
+no option allbackups
+ Use either all backup servers at a time or only the first one
- priority A message containing only a level plus syslog facility between
- angle brackets such as '<63>', followed by the text. The PID,
- date, time, process name and system name are omitted. This is
- designed to be used with a local log server.
+ May be used in the following contexts: tcp, http, log
- short A message containing only a level between angle brackets such as
- '<3>', followed by the text. The PID, date, time, process name
- and system name are omitted. This is designed to be used with a
- local log server. This format is compatible with what the
- systemd logger consumes.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- timed A message containing only a level between angle brackets such as
- '<3>', followed by ISO date and by the text. The PID, process
- name and system name are omitted. This is designed to be
- used with a local log server.
+ Arguments : none
- iso A message containing only the ISO date, followed by the text.
- The PID, process name and system name are omitted. This is
- designed to be used with a local log server.
+ By default, the first operational backup server gets all traffic when normal
+ servers are all down. Sometimes, it may be preferred to use multiple backups
+ at once, because one will not be enough. When "option allbackups" is enabled,
+ the load balancing will be performed among all backup servers when all normal
+ ones are unavailable. The same load balancing algorithm will be used and the
+ servers' weights will be respected. Thus, there will not be any priority
+ order between the backup servers anymore.
- raw A message containing only the text. The level, PID, date, time,
- process name and system name are omitted. This is designed to
- be used in containers or during development, where the severity
- only depends on the file descriptor used (stdout/stderr).
+ This option is mostly used with static server farms dedicated to return a
+ "sorry" page when an application is completely offline.
- <prof> name of the optional "log-profile" section that will be
- considered during the log building process to override some
- log options. Check out "8.3.5. Log profiles" for more info.
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
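+
+ Example : (an illustrative sketch; server names and addresses are
+ placeholders)
+     backend static_farm
+         option allbackups
+         balance roundrobin
+         server app1   192.168.0.1:80  check
+         server sorry1 192.168.0.10:80 check backup
+         server sorry2 192.168.0.11:80 check backup
+         # when app1 is down, both backups share the load instead of
+         # sorry1 alone taking all the traffic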
- <facility> must be one of the 24 standard syslog facilities :
- kern user mail daemon auth syslog lpr news
- uucp cron auth2 ftp ntp audit alert cron2
- local0 local1 local2 local3 local4 local5 local6 local7
+option checkcache
+no option checkcache
+ Analyze all server responses and block responses with cacheable cookies
- Note that the facility is ignored for the "short" and "raw"
- formats, but still required as a positional field. It is
- recommended to use "daemon" in this case to make it clear that
- it's only supposed to be used locally.
+ May be used in the following contexts: http
- <level> is optional and can be specified to filter outgoing messages. By
- default, all messages are sent. If a level is specified, only
- messages with a severity at least as important as this level
- will be sent. An optional minimum level can be specified. If it
- is set, logs emitted with a more severe level than this one will
- be capped to this level. This is used to avoid sending "emerg"
- messages on all terminals on some default syslog configurations.
- Eight levels are known :
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- emerg alert crit err warning notice info debug
+ Arguments : none
- It is important to keep in mind that it is the frontend which decides what to
- log from a connection, and that in case of content switching, the log entries
- from the backend will be ignored. Connections are logged at level "info".
+ Some high-level frameworks set application cookies everywhere and do not
+ always give the developer enough control over how the responses should
+ be cached. When a session cookie is returned on a cacheable object, there is a
+ high risk of session crossing or stealing between users traversing the same
+ caches. In some situations, it is better to block the response than to let
+ some sensitive session information go in the wild.
- However, backend log declaration define how and where servers status changes
- will be logged. Level "notice" will be used to indicate a server going up,
- "warning" will be used for termination signals and definitive service
- termination, and "alert" will be used for when a server goes down.
+ The option "checkcache" enables deep inspection of all server responses for
+ strict compliance with HTTP specification in terms of cacheability. It
+ carefully checks "Cache-control", "Pragma" and "Set-cookie" headers in server
+ response to check if there's a risk of caching a cookie on a client-side
+ proxy. When this option is enabled, the only responses which can be delivered
+ to the client are :
+ - all those without "Set-Cookie" header;
+ - all those with a return code other than 200, 203, 204, 206, 300, 301,
+ 404, 405, 410, 414, 501, provided that the server has not set a
+ "Cache-control: public" header field;
+ - all those that result from a request using a method other than GET, HEAD,
+ OPTIONS, TRACE, provided that the server has not set a 'Cache-Control:
+ public' header field;
+ - those with a 'Pragma: no-cache' header
+ - those with a 'Cache-control: private' header
+ - those with a 'Cache-control: no-store' header
+ - those with a 'Cache-control: max-age=0' header
+ - those with a 'Cache-control: s-maxage=0' header
+ - those with a 'Cache-control: no-cache' header
+ - those with a 'Cache-control: no-cache="set-cookie"' header
+ - those with a 'Cache-control: no-cache="set-cookie,' header
+ (allowing other fields after set-cookie)
- Note : According to RFC3164, messages are truncated to 1024 bytes before
- being emitted.
+ If a response doesn't respect these requirements, then it will be blocked
+ just as if it were from an "http-response deny" rule, with an "HTTP 502 bad
+ gateway". The session state shows "PH--" meaning that the proxy blocked the
+ response during headers processing. Additionally, an alert will be sent in
+ the logs so that admins are informed that there's something to be fixed.
- Example :
- log global
- log stdout format short daemon # send log to systemd
- log stdout format raw daemon # send everything to stdout
- log stderr format raw daemon notice # send important events to stderr
- log 127.0.0.1:514 local0 notice # only send important events
- log tcp@127.0.0.1:514 local0 notice notice # same but limit output
- # level and send in tcp
- log "${LOCAL_SYSLOG}:514" local0 notice # send to local server
+ Due to the high impact on the application, the application should be tested
+ in depth with the option enabled before going to production. It is also a
+ good practice to always activate it during tests, even if it is not used in
+ production, as it will report potentially dangerous application behaviors.
-log-format <fmt>
- Specifies the custom log format string to use for traffic logs
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
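+
+ Example : (an illustrative sketch; names and addresses are placeholders)
+     backend app
+         mode http
+         option checkcache
+         server app1 192.168.0.1:80 check
+         # a 200 response carrying "Set-Cookie" without a restrictive
+         # Cache-control header would be blocked with a 502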
- May be used in the following contexts: tcp, http
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | no
+option clitcpka
+no option clitcpka
+ Enable or disable the sending of TCP keepalive packets on the client side
- This directive specifies the log format string that will be used for all logs
- resulting from traffic passing through the frontend using this line. If the
- directive is used in a defaults section, all subsequent frontends will use
- the same log format. Please see section 8.2.6 which covers the custom log
- format string in depth.
+ May be used in the following contexts: tcp, http
- A specific log-format used only in case of connection error can also be
- defined, see the "error-log-format" option.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- "log-format" directive overrides previous "option tcplog", "log-format",
- "option httplog" and "option httpslog" directives.
+ Arguments : none
-log-format-sd <fmt>
- Specifies the Custom log format string used to produce RFC5424 structured-data
+ When there is a firewall or any session-aware component between a client and
+ a server, and when the protocol involves very long sessions with long idle
+ periods (e.g. remote desktops), there is a risk that one of the intermediate
+ components decides to expire a session which has remained idle for too long.
- May be used in the following contexts: tcp, http
+ Enabling socket-level TCP keep-alives makes the system regularly send packets
+ to the other end of the connection, leaving it active. The delay between
+ keep-alive probes is controlled by the system only and depends both on the
+ operating system and its tuning parameters.
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | no
+ It is important to understand that keep-alive packets are neither emitted
+ nor received at the application level. Only the network stacks see them.
+ For this reason, even if one side of the proxy already uses keep-alives to
+ maintain its connection alive, those keep-alive packets will not be
+ forwarded to the other side of the proxy.
- This directive specifies the RFC5424 structured-data log format string that
- will be used for all logs resulting from traffic passing through the frontend
- using this line. If the directive is used in a defaults section, all
- subsequent frontends will use the same log format. Please see section 8.2.6
- which covers the log format string in depth.
+ Please note that this has nothing to do with HTTP keep-alive.
- See https://tools.ietf.org/html/rfc5424#section-6.3 for more information
- about the RFC5424 structured-data part.
+ Using option "clitcpka" enables the emission of TCP keep-alive probes on the
+ client side of a connection, which should help when session expirations are
+ noticed between HAProxy and a client.
- Note : This log format string will be used only for loggers that have set
- log format to "rfc5424".
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
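+
+ Example : (an illustrative sketch; addresses and timeouts are placeholders)
+     listen rdp
+         mode tcp
+         bind :3389
+         option clitcpka      # let the system probe idle clients
+         timeout client 8h
+         server ts1 192.168.0.5:3389 check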
- Example :
- log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"]
+ See also : "option srvtcpka", "option tcpka"
-log-steps <steps>
- Specifies at which steps during transaction processing logs should be
- generated.
+
+option contstats
+ Enable continuous traffic statistics updates
May be used in the following contexts: tcp, http
- May be used in sections: defaults | frontend | listen | backend
+ May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | no
- During tcp/http transaction processing, haproxy may produce logs at different
- steps during the processing (ie: accept, connect, request, response, close).
+ Arguments : none
- By default, HAProxy emits a single log per transaction, once all of the
- items used in the logformat expression could be satisfied, which means
- that in practice the log is usually emitted at the end of the transaction
- (after the end of the response for HTTP or end of connection for TCP),
- unless "option logasap" is used.
+ By default, counters used for statistics calculation are incremented
+ only when a stream finishes. It works quite well when serving small
+ objects, but with big ones (for example large images or archives) or
+ with A/V streaming, a graph generated from HAProxy counters looks like
+ a hedgehog. With this option enabled, counters get incremented frequently
+ along the stream, typically every 5 seconds, which is often enough to
+ produce clean graphs. Recounting touches a hotpath directly, so it is not
+ enabled by default, as it can cause a lot of wakeups for very large session
+ counts and cause a small performance drop.
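+
+ Example : (an illustrative sketch; names and addresses are placeholders)
+     listen media
+         mode http
+         bind :8080
+         option contstats     # update counters during long transfers
+         server media1 192.168.0.2:80 check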
- The "log-steps" directive allows to refine the precise instants where
- logs will be emitted, and even permits to emit multiple logs for a
- same transaction. Special value 'all' may be used to enable all available
- log origins, making it possible to track a transaction from accept to close.
- Indidivual log origins may also be specified using their names separated by
- spaces to selectively enable when logs should be produced.
+option disable-h2-upgrade
+no option disable-h2-upgrade
+ Enable or disable the implicit HTTP/2 upgrade from an HTTP/1.x client
+ connection.
- Common log origins are: accept, connect, request, response, close.
+ May be used in the following contexts: http
- Example:
- frontend myfront
- option httplog
- log-steps accept,close #only log accept and close for the txn
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- Log origins specified as "logging steps" (such as accept, close) can be
- used as-is in log-profiles (after 'on' directive). Combining "log-steps"
- with log-profiles is really interesting to have fine-grained control over
- logs automatically generated by haproxy during transaction processing.
+ Arguments : none
- See also : "log-profile"
+ By default, HAProxy is able to implicitly upgrade an HTTP/1.x client
+ connection to an HTTP/2 connection if the first request it receives from a
+ given HTTP connection matches the HTTP/2 connection preface (i.e. the string
+ "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"). This way, it is possible to support
+ HTTP/1.x and HTTP/2 clients on non-SSL connections. This option must be
+ used to disable the implicit upgrade. Note that this implicit upgrade is
+ only supported for HTTP proxies, and thus so is this option. Note also that
+ it is possible to force HTTP/2 on clear connections by specifying "proto h2"
+ on the bind line. Finally, this option applies to all bind lines. To disable
+ implicit HTTP/2 upgrades for a specific bind line, it is possible to use
+ "proto h1".
-log-tag <string>
- Specifies the log tag to use for all outgoing logs
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
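+
+ Example : (an illustrative sketch; names are placeholders)
+     frontend www
+         mode http
+         option disable-h2-upgrade   # no implicit upgrade on any bind line
+         bind :80
+         default_backend app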
- May be used in the following contexts: tcp, http, log
+option dontlog-normal
+no option dontlog-normal
+ Enable or disable logging of normal, successful connections
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | yes
+ May be used in the following contexts: tcp, http
- Sets the tag field in the syslog header to this string. It defaults to the
- log-tag set in the global section, otherwise the program name as launched
- from the command line, which usually is "HAProxy". Sometimes it can be useful
- to differentiate between multiple processes running on the same host, or to
- differentiate customer instances running in the same process. In the backend,
- logs about servers up/down will use this tag. As a hint, it can be convenient
- to set a log-tag related to a hosted customer in a defaults section then put
- all the frontends and backends for that customer, then start another customer
- in a new defaults section. See also the global "log-tag" directive.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
-max-keep-alive-queue <value>
- Set the maximum server queue size for maintaining keep-alive connections
+ Arguments : none
- May be used in the following contexts: http
+ There are large sites dealing with several thousand connections per second
+ and for which logging is a major pain. Some of them are even forced to turn
+ logs off and cannot debug production issues. Setting this option ensures that
+ normal connections, those which experience no error, no timeout, no retry nor
+ redispatch, will not be logged. This leaves disk space for anomalies. In HTTP
+ mode, the response status code is checked and return codes 5xx will still be
+ logged.
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+ It is strongly discouraged to use this option as most of the time, the key to
+ complex issues is in the normal logs which will not be logged here. If you
+ need to separate logs, see the "log-separate-errors" option instead.
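+
+ Example : (an illustrative sketch; names are placeholders)
+     frontend www
+         mode http
+         bind :80
+         log global
+         option dontlog-normal    # keep only errors, timeouts, retries,
+                                  # redispatches and 5xx in the logs
+         default_backend app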
- HTTP keep-alive tries to reuse the same server connection whenever possible,
- but sometimes it can be counter-productive, for example if a server has a lot
- of connections while other ones are idle. This is especially true for static
- servers.
+ See also : "log", "dontlognull", "log-separate-errors" and section 8 about
+ logging.
- The purpose of this setting is to set a threshold on the number of queued
- connections at which HAProxy stops trying to reuse the same server and prefers
- to find another one. The default value, -1, means there is no limit. A value
- of zero means that keep-alive requests will never be queued. For very close
- servers which can be reached with a low latency and which are not sensible to
- breaking keep-alive, a low value is recommended (e.g. local static server can
- use a value of 10 or less). For remote servers suffering from a high latency,
- higher values might be needed to cover for the latency and/or the cost of
- picking a different server.
-
- Note that this has no impact on responses which are maintained to the same
- server consecutively to a 401 response. They will still go to the same server
- even if they have to be queued.
-
- See also : "option http-server-close", "option prefer-last-server", server
- "maxconn" and cookie persistence.
-max-session-srv-conns <nb>
- Set the maximum number of outgoing connections we can keep idling for a given
- client session. The default is 5 (it precisely equals MAX_SRV_LIST which is
- defined at build time).
+option dontlognull
+no option dontlognull
+ Enable or disable logging of null connections
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | no
-maxconn <conns>
- Fix the maximum number of concurrent connections on a frontend
+ Arguments : none
- May be used in the following contexts: tcp, http
+ In certain environments, there are components which will regularly connect to
+ various systems to ensure that they are still alive. It can be the case from
+ another load balancer as well as from monitoring systems. By default, even a
+ simple port probe or scan will produce a log. If those connections pollute
+ the logs too much, it is possible to enable option "dontlognull" to indicate
+ that a connection on which no data has been transferred will not be logged,
+ which typically corresponds to those probes. Note that errors will still be
+ returned to the client and accounted for in the stats. If this is not what is
+ desired, option http-ignore-probes can be used instead.
+
+ It is generally recommended not to use this option in uncontrolled
+ environments (e.g. internet), otherwise scans and other malicious activities
+ would not be logged.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
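+
+ Example : (an illustrative sketch; names are placeholders)
+     frontend www
+         mode http
+         bind :80
+         log global
+         option dontlognull     # silence empty port probes from monitoring
+         default_backend app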
+
+ See also : "log", "http-ignore-probes", "monitor-uri", and
+ section 8 about logging.
+
+option forwarded [ proto ]
+ [ host | host-expr <host_expr> ]
+ [ by | by-expr <by_expr> ] [ by_port | by_port-expr <by_port_expr>]
+ [ for | for-expr <for_expr> ] [ for_port | for_port-expr <for_port_expr>]
+no option forwarded
+ Enable insertion of the rfc 7239 forwarded header in requests sent to servers
+
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | no | yes | yes
Arguments :
- <conns> is the maximum number of concurrent connections the frontend will
- accept to serve. Excess connections will be queued by the system
- in the socket's listen queue and will be served once a connection
- closes.
+ <host_expr> optional argument to specify a custom sample expression
+ whose result will be used as the 'host' parameter value
- If the system supports it, it can be useful on big sites to raise this limit
- very high so that HAProxy manages connection queues, instead of leaving the
- clients with unanswered connection attempts. This value should not exceed the
- global maxconn. Also, keep in mind that a connection contains two buffers
- of tune.bufsize (16kB by default) each, as well as some other data resulting
- in about 33 kB of RAM being consumed per established connection. That means
- that a medium system equipped with 1GB of RAM can withstand around
- 20000-25000 concurrent connections if properly tuned.
+ <by_expr> optional argument to specify a custom sample expression
+ whose result will be used as the 'by' parameter nodename value
- Also, when <conns> is set to large values, it is possible that the servers
- are not sized to accept such loads, and for this reason it is generally wise
- to assign them some reasonable connection limits.
+ <for_expr> optional argument to specify a custom sample expression
+ whose result will be used as the 'for' parameter nodename value
- When this value is set to zero, which is the default, the global "maxconn"
- value is used.
+ <by_port_expr> optional argument to specify a custom sample expression
+ whose result will be used as the 'by' parameter nodeport value
- See also : "server", global section's "maxconn", "fullconn"
+ <for_port_expr> optional argument to specify a custom sample expression
+ whose result will be used as the 'for' parameter nodeport value
-mode { tcp|http|log|spop }
- Set the running mode or protocol of the instance
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
- Arguments :
- tcp The instance will work in pure TCP mode. A full-duplex connection
- will be established between clients and servers, and no layer 7
- examination will be performed. This is the default mode. It
- should be used for SSL, SSH, SMTP, ...
+ Since HAProxy works in reverse-proxy mode, servers lose some of the
+ request context (request origin: client ip address, protocol used...)
- http The instance will work in HTTP mode. The client request will be
- analyzed in depth before connecting to any server. Any request
- which is not RFC-compliant will be rejected. Layer 7 filtering,
- processing and switching will be possible. This is the mode which
- brings HAProxy most of its value.
+ A common way to address this limitation is to use the well known
+ x-forwarded-for and x-forwarded-* friends to expose some of this context
+ to the underlying servers/applications.
+ While this used to work and is widely deployed, it is not officially
+ supported by the IETF and can be the root of some interoperability as
+ well as security issues.
- log When used in a backend section, it will turn the backend into a
- log backend. Such backend can be used as a log destination for
- any "log" directive by using the "backend@<name>" syntax. Log
- messages will be distributed to the servers from the backend
- according to the lb settings which can be configured using the
- "balance" keyword. Log backends support UDP servers by prefixing
- the server's address with the "udp@" prefix. Common backend and
- server features are supported, but not TCP or HTTP specific ones.
+ To solve this, a new HTTP extension has been described by the IETF:
+ forwarded header (RFC7239).
+ More information here: https://www.rfc-editor.org/rfc/rfc7239.html
- spop When used in a backend section, it will turn the backend into a
- log backend. This mode is mandatory and automatically set, if
- necessary, for backends referenced by SPOE engines.
+ The use of this single header allows conveying numerous details within
+ the same header and, most importantly, fixes the proxy chaining issue
+ (the RFC allows multiple chained proxies to append their own values to an
+ already existing header).
- When doing content switching, it is mandatory that the frontend and the
- backend are in the same mode (generally HTTP), otherwise the configuration
- will be refused.
+ This option may be specified in defaults, listen or backend section, but it
+ will be ignored for frontend sections.
- Example :
- defaults http_instances
- mode http
+ Setting option forwarded without arguments results in using the default
+ implicit behavior.
+ The default behavior enables the proto parameter and injects the original
+ client ip.
+ The equivalent explicit/manual configuration would be:
+ option forwarded proto for
-monitor fail { if | unless } <condition>
- Add a condition to report a failure to a monitor HTTP request.
+ The keyword 'by' is used to enable the 'by' parameter ("nodename") in the
+ forwarded header. It allows embedding request proxy information.
+ The 'by' value will be set to the proxy ip (destination address).
+ If not available (i.e. a UNIX listener), 'by' will be set to
+ "unknown".
- May be used in the following contexts: http
+ The keyword 'by-expr' is used to enable the 'by' parameter ("nodename") in
+ the forwarded header. It allows embedding request proxy information.
+ The 'by' value will be set to the result of the sample expression
+ <by_expr>, if valid, otherwise it will be set to "unknown".
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | no
+ The keyword 'for' is used to enable the 'for' parameter ("nodename") in
+ the forwarded header. It allows embedding request client information.
+ The 'for' value will be set to the client ip (source address).
+ If not available (i.e. a UNIX listener), 'for' will be set to
+ "unknown".
- Arguments :
- if <cond> the monitor request will fail if the condition is satisfied,
- and will succeed otherwise. The condition should describe a
- combined test which must induce a failure if all conditions
- are met, for instance a low number of servers both in a
- backend and its backup.
+ The keyword 'for-expr' is used to enable the 'for' parameter ("nodename")
+ in the forwarded header. It allows embedding request client information.
+ The 'for' value will be set to the result of the sample expression
+ <for_expr>, if valid, otherwise it will be set to "unknown".
- unless <cond> the monitor request will succeed only if the condition is
- satisfied, and will fail otherwise. Such a condition may be
- based on a test on the presence of a minimum number of active
- servers in a list of backends.
+ The keyword 'by_port' is used to provide "nodeport" info to
+ 'by' parameter. 'by_port' requires 'by' or 'by-expr' to be set or
+ it will be ignored.
+ "nodeport" will be set to proxy (destination) port if available,
+ otherwise it will be ignored.
- This statement adds a condition which can force the response to a monitor
- request to report a failure. By default, when an external component queries
- the URI dedicated to monitoring, a 200 response is returned. When one of the
- conditions above is met, HAProxy will return 503 instead of 200. This is
- very useful to report a site failure to an external component which may base
- routing advertisements between multiple sites on the availability reported by
- HAProxy. In this case, one would rely on an ACL involving the "nbsrv"
- criterion. Note that "monitor fail" only works in HTTP mode. Both status
- messages may be tweaked using "errorfile" or "errorloc" if needed.
+ The keyword 'by_port-expr' is used to provide "nodeport" info to
+ 'by' parameter. 'by_port-expr' requires 'by' or 'by-expr' to be set or
+ it will be ignored.
+ "nodeport" will be set to the result of the sample expression
+ <by_port_expr>, if valid, otherwise it will be ignored.
- Example:
- frontend www
+ The keyword 'for_port' is used to provide "nodeport" info to
+ 'for' parameter. 'for_port' requires 'for' or 'for-expr' to be set or
+ it will be ignored.
+ "nodeport" will be set to client (source) port if available,
+ otherwise it will be ignored.
+
+ The keyword 'for_port-expr' is used to provide "nodeport" info to
+ 'for' parameter. 'for_port-expr' requires 'for' or 'for-expr' to be set or
+ it will be ignored.
+ "nodeport" will be set to the result of the sample expression
+ <for_port_expr>, if valid, otherwise it will be ignored.
+
+ Examples :
+ # Those servers want the ip address and protocol of the client request
+ # Resulting header would look like this:
+ # forwarded: proto=http;for=127.0.0.1
+ backend www_default
mode http
- acl site_dead nbsrv(dynamic) lt 2
- acl site_dead nbsrv(static) lt 2
- monitor-uri /site_alive
- monitor fail if site_dead
+ option forwarded
+ #equivalent to: option forwarded proto for
- See also : "monitor-uri", "errorfile", "errorloc"
+ # Those servers want the requested host and hashed client ip address
+ # as well as client source port (you should use seed for xxh32 if ensuring
+ # ip privacy is a concern)
+ # Resulting header would look like this:
+ # forwarded: host="haproxy.org";for="_000000007F2F367E:60138"
+ backend www_host
+ mode http
+ option forwarded host for-expr src,xxh32,hex for_port
+ # Those servers want custom data in host, for and by parameters
+ # Resulting header would look like this:
+ # forwarded: host="host.com";by=_haproxy;for="[::1]:10"
+ backend www_custom
+ mode http
+ option forwarded host-expr str(host.com) by-expr str(_haproxy) for for_port-expr int(10)
-monitor-uri <uri>
- Intercept a URI used by external components' monitor requests
+ # Those servers want random 'for' obfuscated identifiers for request
+ # tracing purposes while protecting sensitive IP information
+ # Resulting header would look like this:
+ # forwarded: for=_000000002B1F4D63
+ backend www_for_hide
+ mode http
+ option forwarded for-expr rand,hex
+
+ See also : "option forwardfor", "option originalto"
+
+option forwardfor [ except <network> ] [ header <name> ] [ if-none ]
+ Enable insertion of the X-Forwarded-For header to requests sent to servers
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | yes | yes | yes
Arguments :
- <uri> is the exact URI which we want to intercept to return HAProxy's
- health status instead of forwarding the request.
-
- When an HTTP request referencing <uri> will be received on a frontend,
- HAProxy will not forward it nor log it, but instead will return either
- "HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on failure
- conditions defined with "monitor fail". This is normally enough for any
- front-end HTTP probe to detect that the service is UP and running without
- forwarding the request to a backend server. Note that the HTTP method, the
- version and all headers are ignored, but the request must at least be valid
- at the HTTP level. This keyword may only be used with an HTTP-mode frontend.
+ <network> is an optional argument used to disable this option for sources
+ matching <network>
+ <name> an optional argument to specify a different "X-Forwarded-For"
+ header name.
- Monitor requests are processed very early, just after the request is parsed
- and even before any "http-request". The only rulesets applied before are the
- tcp-request ones. They cannot be logged either, and it is the intended
- purpose. Only one URI may be configured for monitoring; when multiple
- "monitor-uri" statements are present, the last one will define the URI to
- be used. They are only used to report HAProxy's health to an upper component,
- nothing more. However, it is possible to add any number of conditions using
- "monitor fail" and ACLs so that the result can be adjusted to whatever check
- can be imagined (most often the number of available servers in a backend).
+ Since HAProxy works in reverse-proxy mode, the servers see its IP address as
+ their client address. This is sometimes annoying when the client's IP address
+ is expected in server logs. To solve this problem, the well-known HTTP header
+ "X-Forwarded-For" may be added by HAProxy to all requests sent to the server.
+ This header contains a value representing the client's IP address. Since this
+ header is always appended at the end of the existing header list, the server
+ must be configured to always use the last occurrence of this header only. See
+ the server's manual to find how to enable use of this standard header. Note
+ that only the last occurrence of the header must be used, since it is really
+ possible that the client has already brought one.
- Note: if <uri> starts by a slash ('/'), the matching is performed against the
- request's path instead of the request's uri. It is a workaround to let
- the HTTP/2 requests match the monitor-uri. Indeed, in HTTP/2, clients
- are encouraged to send absolute URIs only.
+ The keyword "header" may be used to supply a different header name to
+ replace the default "X-Forwarded-For". This can be useful where you might
+ already have an "X-Forwarded-For" header from a different application
+ (e.g. stunnel), and you need to preserve it. It is also useful if your
+ backend server doesn't use the "X-Forwarded-For" header and requires a
+ different one (e.g. Zeus Web Servers require "X-Cluster-Client-IP").
+
+ Sometimes, a same HAProxy instance may be shared between a direct client
+ access and a reverse-proxy access (for instance when an SSL reverse-proxy is
+ used to decrypt HTTPS traffic). It is possible to disable the addition of the
+ header for a known source address or network by adding the "except" keyword
+ followed by the network address. In this case, any source IP matching the
+ network will not cause an addition of this header. Most common uses are with
+ private networks or 127.0.0.1. IPv4 and IPv6 are both supported.
+
+ Alternatively, the keyword "if-none" states that the header will only be
+ added if it is not present. This should only be used in perfectly trusted
+ environment, as this might cause a security issue if headers reaching HAProxy
+ are under the control of the end-user.
+
+ This option may be specified either in the frontend or in the backend. If at
+ least one of them uses it, the header will be added. Note that the backend's
+ setting of the header subargument takes precedence over the frontend's if
+ both are defined. In the case of the "if-none" argument, if at least one of
+ the frontend or the backend does not specify it, it wants the addition to be
+ mandatory, so it wins.
Example :
- # Use /haproxy_test to report HAProxy's status
+ # Public HTTP address also used by stunnel on the same machine
frontend www
mode http
- monitor-uri /haproxy_test
+ option forwardfor except 127.0.0.1 # stunnel already adds the header
- See also : "monitor fail"
+ # Those servers want the IP Address in X-Client
+ backend www
+ mode http
+ option forwardfor header X-Client
+ See also : "option httpclose", "option http-server-close",
+ "option http-keep-alive"
-option abortonclose
-no option abortonclose
- Enable or disable early dropping of aborted requests pending in queues.
- May be used in the following contexts: tcp, http
+option h1-case-adjust-bogus-client
+no option h1-case-adjust-bogus-client
+ Enable or disable the case adjustment of HTTP/1 headers sent to bogus clients
+
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | yes | yes | no
Arguments : none
- In presence of very high loads, the servers will take some time to respond.
- The per-instance connection queue will inflate, and the response time will
- increase respective to the size of the queue times the average per-stream
- response time. When clients will wait for more than a few seconds, they will
- often hit the "STOP" button on their browser, leaving a useless request in
- the queue, and slowing down other users, and the servers as well, because the
- request will eventually be served, then aborted at the first error
- encountered while delivering the response.
+ There is no standard case for header names because, as stated in RFC7230,
+ they are case-insensitive. So applications must handle them in a case-
+ insensitive manner. But some bogus applications violate the standards and
+ erroneously rely on the cases most commonly used by browsers. This problem
+ becomes critical with HTTP/2 because all header names must be exchanged in
+ lower case, and HAProxy follows the same convention. All header names are
+ sent in lower case to clients and servers, regardless of the HTTP version.
- As there is no way to distinguish between a full STOP and a simple output
- close on the client side, HTTP agents should be conservative and consider
- that the client might only have closed its output channel while waiting for
- the response. However, this introduces risks of congestion when lots of users
- do the same, and is completely useless nowadays because probably no client at
- all will close the stream while waiting for the response. Some HTTP agents
- support this behavior (Squid, Apache, HAProxy), and others do not (TUX, most
- hardware-based load balancers). So the probability for a closed input channel
- to represent a user hitting the "STOP" button is close to 100%, and the risk
- of being the single component to break rare but valid traffic is extremely
- low, which adds to the temptation to be able to abort a stream early while
- still not served and not pollute the servers.
+ When HAProxy receives an HTTP/1 response, its header names are converted to
+ lower case, manipulated, and sent this way to the clients. If a client is
+ known to violate the HTTP standards and to fail to process a response coming
+ from HAProxy, it is possible to transform the lower case header names to a
+ different format when the response is formatted and sent to the client, by
+ enabling this option and specifying the list of headers to be reformatted
+ using the global directives "h1-case-adjust" or "h1-case-adjust-file". This
+ must only be a temporary workaround for the time it takes the client to be
+ fixed, because clients which require such workarounds might be vulnerable to
+ content smuggling attacks and must absolutely be fixed.
- In HAProxy, the user can choose the desired behavior using the option
- "abortonclose". By default (without the option) the behavior is HTTP
- compliant and aborted requests will be served. But when the option is
- specified, a stream with an incoming channel closed will be aborted while
- it is still possible, either pending in the queue for a connection slot, or
- during the connection establishment if the server has not yet acknowledged
- the connection request. This considerably reduces the queue size and the load
- on saturated servers when users are tempted to click on STOP, which in turn
- reduces the response time for other users.
+ Please note that this option will not affect standards-compliant clients.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
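+
+ The following sketch shows how this option may combine with the global
+ "h1-case-adjust" directive (the adjusted header names, proxy names and
+ addresses are only illustrative) :
+
+ Example :
+       global
+           h1-case-adjust content-length Content-Length
+
+       frontend legacy_clients
+           mode http
+           bind :8080
+           option h1-case-adjust-bogus-client
+           default_backend app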
- See also : "timeout queue" and server's "maxconn" and "maxqueue" parameters
+ See also: "option h1-case-adjust-bogus-server", "h1-case-adjust",
+ "h1-case-adjust-file".
-option accept-invalid-http-request (deprecated)
-no option accept-invalid-http-request (deprecated)
- Enable or disable relaxing of HTTP request parsing
+option h1-case-adjust-bogus-server
+no option h1-case-adjust-bogus-server
+ Enable or disable the case adjustment of HTTP/1 headers sent to bogus servers
- The "accept-invalid-http-request" keyword is deprecated, use "option
- accept-unsafe-violations-in-http-request" instead.
+ May be used in the following contexts: http
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
-option accept-invalid-http-response (deprecated)
-no option accept-invalid-http-response (deprecated)
- Enable or disable relaxing of HTTP response parsing
+ Arguments : none
- The "accept-invalid-http-response" keyword is deprecated, use "option
- accept-unsafe-violations-in-http-response" instead.
+ There is no standard case for header names because, as stated in RFC7230,
+ they are case-insensitive. So applications must handle them in a case-
+ insensitive manner. But some bogus applications violate the standards and
+ erroneously rely on the cases most commonly used by browsers. This problem
+ becomes critical with HTTP/2 because all header names must be exchanged in
+ lower case, and HAProxy follows the same convention. All header names are
+ sent in lower case to clients and servers, regardless of the HTTP version.
+ When HAProxy receives an HTTP/1 request, its header names are converted to
+ lower case, manipulated, and sent this way to the servers. If a server is
+ known to violate the HTTP standards and to fail to process a request coming
+ from HAProxy, it is possible to transform the lower case header names to a
+ different format when the request is formatted and sent to the server, by
+ enabling this option and specifying the list of headers to be reformatted
+ using the global directives "h1-case-adjust" or "h1-case-adjust-file". This
+ must only be a temporary workaround for the time it takes the server to be
+ fixed, because servers which require such workarounds might be vulnerable to
+ content smuggling attacks and must absolutely be fixed.
-option accept-unsafe-violations-in-http-request
-no option accept-unsafe-violations-in-http-request
- Enable or disable relaxing of HTTP request parsing
+ Please note that this option will not affect standards-compliant servers.
- May be used in the following contexts: http
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
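+
+ The following sketch shows a possible combination with the global
+ "h1-case-adjust" directive (the header, proxy and server names are only
+ illustrative) :
+
+ Example :
+       global
+           h1-case-adjust host Host
+
+       backend legacy_app
+           mode http
+           option h1-case-adjust-bogus-server
+           server s1 192.168.0.10:80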
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ See also: "option h1-case-adjust-bogus-client", "h1-case-adjust",
+ "h1-case-adjust-file".
- Arguments : none
- By default, HAProxy complies with the different HTTP RFCs in terms of message
- parsing. This means the message parsing is quite strict and causes an error
- to be returned to the client for malformed messages. This is the desired
- behavior as such malformed messages are essentially used to build attacks
- exploiting server weaknesses, and bypass security filtering. Sometimes, a
- buggy browser will not respect these RCFs for whatever reason (configuration,
- implementation...) and the issue will not be immediately fixed. In such case,
- it is possible to relax HAProxy's parser to accept some invalid requests by
- specifying this option. Most of rules concern the H1 parsing for historical
- reason. Newer HTTP versions tends to be cleaner and applications follow more
- stickly these protocols.
+option http-buffer-request
+no option http-buffer-request
+ Enable or disable waiting for whole HTTP request body before proceeding
- When this option is set, the following rules are observed:
+ May be used in the following contexts: http
- * In H1 only, invalid characters, including NULL character, in header name
- will be accepted;
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- * In H1 only, NULL character in header value will be accepted;
+ Arguments : none
- * The list of characters allowed to appear in a URI is well defined by
- RFC3986, and chars 0-31, 32 (space), 34 ('"'), 60 ('<'), 62 ('>'), 92
- ('\'), 94 ('^'), 96 ('`'), 123 ('{'), 124 ('|'), 125 ('}'), 127 (delete)
- and anything above are normally not allowed. But here, in H1 only,
- HAProxy will only block a number of them (0..32, 127);
+ It is sometimes desirable to wait for the body of an HTTP request before
+ taking a decision. This is what is being done by "balance url_param" for
+ example. The first use case is to buffer requests from slow clients before
+ connecting to the server. Another use case consists in taking the routing
+ decision based on the request body's contents. This option placed in a
+ frontend or backend forces the HTTP processing to wait until either the whole
+ body is received or the request buffer is full. It can have undesired side
+ effects with some applications abusing HTTP by expecting unbuffered
+ transmissions between the frontend and the backend, so this should definitely
+ not be used by default.
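+
+ For instance, a sketch of routing on a POST parameter, which requires the
+ body to be buffered first (names and addresses are only illustrative) :
+
+ Example :
+       backend post_routing
+           mode http
+           option http-buffer-request
+           balance url_param session_id check_post
+           server s1 192.168.0.11:80
+           server s2 192.168.0.12:80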
- * In H1 and H2, URLs containing fragment references ('#' after the path)
- will be accepted;
+ See also : "option http-no-delay", "timeout http-request",
+ "http-request wait-for-body"
- * In H1 only, no check will be performed on the authority for CONNECT
- requests;
+option http-drop-request-trailers
+no option http-drop-request-trailers
+ Drop the HTTP trailers from the request when sent to the server
- * In H1 only, no check will be performed against the authority and the Host
- header value.
+ May be used in the following contexts: http
- * In H1 only, tests on the HTTP version will be relaxed. It will allow
- HTTP/0.9 GET requests to pass through (no version specified), as well as
- different protocol names (e.g. RTSP), and multiple digits for both the
- major and the minor version.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | no | yes
- * In H1 only, WebSocket (RFC6455) requests failing to present a valid
- "Sec-Websocket-Key" header field will be accepted.
+ Arguments : none
- This option should never be enabled by default as it hides application bugs
- and open security breaches. It should only be deployed after a problem has
- been confirmed.
+ When this option is enabled, any HTTP trailers found in a request will be
+ dropped before sending it to the server.
- When this option is enabled, invalid but accepted H1 requests will be
- captured in order to permit later analysis using the "show errors" request on
- the UNIX stats socket.Doing this also helps confirming that the issue has
- been solved.
+ RFC9110#section-6.5.1 states that trailer fields may be merged into the
+ header fields. This should only be done on purpose, but it may be a problem
+ for some applications, especially if malicious clients hide sensitive header
+ fields in the trailers part and some intermediaries merge them with the
+ headers with no specific checks. In that case, this option can be enabled on
+ the backend to drop any trailer fields found in requests before sending them
+ to the server.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
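+
+ A minimal illustrative sketch (the backend and server names are only
+ examples) :
+
+ Example :
+       backend app
+           mode http
+           option http-drop-request-trailers
+           server s1 192.168.0.10:80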
- See also : "option accept-unsafe-violations-in-http-response" and "show
- errors" on the stats socket.
-
+ See also: "option http-drop-response-trailers"
-option accept-unsafe-violations-in-http-response
-no option accept-unsafe-violations-in-http-response
- Enable or disable relaxing of HTTP response parsing
+option http-drop-response-trailers
+no option http-drop-response-trailers
+ Drop the HTTP trailers from the response when sent to the client
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | yes | yes | no
Arguments : none
- Similarly to "option accept-unsafe-violations-in-http-request", this option
- may be used to relax parsing rules of HTTP responses. It should only be
- enabled for trusted legacy servers to accept some invalid responses. Most of
- rules concern the H1 parsing for historical reason. Newer HTTP versions tends
- to be cleaner and applications follow more stickly these protocols.
-
- When this option is set, the following rules are observed:
+ This option is similar to "option http-drop-request-trailers" but it must be
+ used to drop trailer fields from responses before sending them to clients.
- * In H1 only, invalid characters, including NULL character, in header name
- will be accepted;
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
- * In H1 only, NULL character in header value will be accepted;
+ See also: "option http-drop-request-trailers"
- * In H1 only, empty values or several "chunked" value occurrences for
- Transfer-Encoding header will be accepted;
+option http-ignore-probes
+no option http-ignore-probes
+ Enable or disable logging of null connections and request timeouts
- * In H1 only, no check will be performed against the authority and the Host
- header value.
+ May be used in the following contexts: http
- * In H1 only, tests on the HTTP version will be relaxed. It will allow
- different protocol names (e.g. RTSP), and multiple digits for both the
- major and the minor version.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- * In H1 only, WebSocket (RFC6455) responses failing to present a valid
- "Sec-Websocket-Accept" header field will be accepted.
+ Arguments : none
- This option should never be enabled by default as it hides application bugs
- and open security breaches. It should only be deployed after a problem has
- been confirmed.
+ Recently some browsers started to implement a "pre-connect" feature
+ consisting in speculatively connecting to some recently visited web sites
+ just in case the user would like to visit them. This results in many
+ connections being established to web sites, which end up in 408 Request
+ Timeout if the timeout strikes first, or 400 Bad Request when the browser
+ decides to close them first. These pollute the logs and feed the error
+ counters. The "option dontlognull" setting already exists but is
+ insufficient in this case. Instead, this option does the following things :
+ - prevent any 400/408 message from being sent to the client if nothing
+ was received over a connection before it was closed;
+ - prevent any log from being emitted in this situation;
+ - prevent any error counter from being incremented
- When this option is enabled, erroneous header names will still be accepted in
- responses, but the complete response will be captured in order to permit
- later analysis using the "show errors" request on the UNIX stats socket.
- Doing this also helps confirming that the issue has been solved.
+ That way the empty connection is silently ignored. Note that it is better
+ not to use this unless it is clear that it is needed, because it will hide
+ real problems. The most common reason for not receiving a request and seeing
+ a 408 is due to an MTU inconsistency between the client and an intermediary
+ element such as a VPN, which blocks too large packets. These issues are
+ generally seen with POST requests as well as GET with large cookies. The logs
+ are often the only way to detect them.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
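+
+ A minimal illustrative sketch (the proxy name and port are only examples) :
+
+ Example :
+       frontend www
+           mode http
+           bind :80
+           # silently drop browser pre-connect probes instead of
+           # logging 400/408 errors
+           option http-ignore-probes
+           default_backend app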
- See also : "option accept-unsafe-violations-in-http-request" and "show
- errors" on the stats socket.
+ See also : "log", "dontlognull", "errorfile", and section 8 about logging.
-option allbackups
-no option allbackups
- Use either all backup servers at a time or only the first one
+option http-keep-alive
+no option http-keep-alive
+ Enable or disable HTTP keep-alive from client to server for HTTP/1.x
+ connections
- May be used in the following contexts: tcp, http, log
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | yes | yes | yes
Arguments : none
- By default, the first operational backup server gets all traffic when normal
- servers are all down. Sometimes, it may be preferred to use multiple backups
- at once, because one will not be enough. When "option allbackups" is enabled,
- the load balancing will be performed among all backup servers when all normal
- ones are unavailable. The same load balancing algorithm will be used and the
- servers' weights will be respected. Thus, there will not be any priority
- order between the backup servers anymore.
+ By default HAProxy operates in keep-alive mode with regards to persistent
+ HTTP/1.x connections: for each connection it processes each request and
+ response, and leaves the connection idle on both sides. This mode may be
+ changed by several options such as "option http-server-close" or "option
+ httpclose". This option allows the keep-alive mode to be set back, which can
+ be useful when another mode was used in a defaults section.
- This option is mostly used with static server farms dedicated to return a
- "sorry" page when an application is completely offline.
+ Setting "option http-keep-alive" enables HTTP keep-alive mode on the client-
+ and server- sides. This provides the lowest latency on the client side (slow
+ network) and the fastest session reuse on the server side at the expense
+ of maintaining idle connections to the servers. In general, it is possible
+ with this option to achieve approximately twice the request rate that the
+ "http-server-close" option achieves on small objects. There are mainly two
+ situations where this option may be useful :
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ - when the server is non-HTTP compliant and authenticates the connection
+ instead of requests (e.g. NTLM authentication)
+ - when the cost of establishing the connection to the server is significant
+ compared to the cost of retrieving the associated object from the server.
-option checkcache
-no option checkcache
- Analyze all server responses and block responses with cacheable cookies
+ This last case can happen when the server is a fast static server or a cache.
+
+ At the moment, logs will not indicate whether requests came from the same
+ session or not. The accept date reported in the logs corresponds to the end
+ of the previous request, and the request time corresponds to the time spent
+ waiting for a new request. The keep-alive request time is still bound to the
+ timeout defined by "timeout http-keep-alive" or "timeout http-request" if
+ not set.
+
+ This option disables and replaces any previous "option httpclose" or "option
+ http-server-close".
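+
+ For instance, a sketch reverting a mode inherited from a defaults section
+ (the proxy and server names are only illustrative) :
+
+ Example :
+       defaults
+           mode http
+           option http-server-close
+
+       listen static_cache
+           bind :8080
+           option http-keep-alive    # revert to keep-alive for this proxy
+           server cache1 192.168.0.20:80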
+
+ See also : "option httpclose", "option http-server-close",
+ "option prefer-last-server" and "option http-pretend-keepalive".
+
+
+option http-no-delay
+no option http-no-delay
+ Instruct the system to favor low interactive delays over performance in HTTP
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | yes | yes | yes
Arguments : none
- Some high-level frameworks set application cookies everywhere and do not
- always let enough control to the developer to manage how the responses should
- be cached. When a session cookie is returned on a cacheable object, there is a
- high risk of session crossing or stealing between users traversing the same
- caches. In some situations, it is better to block the response than to let
- some sensitive session information go in the wild.
-
- The option "checkcache" enables deep inspection of all server responses for
- strict compliance with HTTP specification in terms of cacheability. It
- carefully checks "Cache-control", "Pragma" and "Set-cookie" headers in server
- response to check if there's a risk of caching a cookie on a client-side
- proxy. When this option is enabled, the only responses which can be delivered
- to the client are :
- - all those without "Set-Cookie" header;
- - all those with a return code other than 200, 203, 204, 206, 300, 301,
- 404, 405, 410, 414, 501, provided that the server has not set a
- "Cache-control: public" header field;
- - all those that result from a request using a method other than GET, HEAD,
- OPTIONS, TRACE, provided that the server has not set a 'Cache-Control:
- public' header field;
- - those with a 'Pragma: no-cache' header
- - those with a 'Cache-control: private' header
- - those with a 'Cache-control: no-store' header
- - those with a 'Cache-control: max-age=0' header
- - those with a 'Cache-control: s-maxage=0' header
- - those with a 'Cache-control: no-cache' header
- - those with a 'Cache-control: no-cache="set-cookie"' header
- - those with a 'Cache-control: no-cache="set-cookie,' header
- (allowing other fields after set-cookie)
-
- If a response doesn't respect these requirements, then it will be blocked
- just as if it was from an "http-response deny" rule, with an "HTTP 502 bad
- gateway". The session state shows "PH--" meaning that the proxy blocked the
- response during headers processing. Additionally, an alert will be sent in
- the logs so that admins are informed that there's something to be fixed.
+ In HTTP, each payload is unidirectional and has no notion of interactivity.
+ Any agent is expected to queue data somewhat for a reasonably low delay.
+ There are some very rare server-to-server applications that abuse the HTTP
+ protocol and expect the payload phase to be highly interactive, with many
+ interleaved data chunks in both directions within a single request. This is
+ absolutely not supported by the HTTP specification and will not work across
+ most proxies or servers. When such applications attempt to do this through
+ HAProxy, it works but they will experience high delays due to the network
+ optimizations which favor performance by instructing the system to wait for
+ enough data to be available in order to only send full packets. Typical
+ delays are around 200 ms per round trip. Note that this only happens with
+ abnormal uses. Normal uses such as CONNECT requests and WebSockets are not
+ affected.
- Due to the high impact on the application, the application should be tested
- in depth with the option enabled before going to production. It is also a
- good practice to always activate it during tests, even if it is not used in
- production, as it will report potentially dangerous application behaviors.
+ When "option http-no-delay" is present in either the frontend or the backend
+ used by a connection, all such optimizations will be disabled in order to
+ make the exchanges as fast as possible. Of course this offers no guarantee on
+ the functionality, as it may break at any other place. But if it works via
+ HAProxy, it will work as fast as possible. This option should never be used
+ by default, and should never be used at all unless such a buggy application
+ is discovered. The impact of using this option is an increase of bandwidth
+ usage and CPU usage, which may significantly lower performance in high
+ latency environments.
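+
+ A minimal illustrative sketch (the backend and server names are only
+ examples) :
+
+ Example :
+       backend interactive_app
+           mode http
+           # buggy application interleaving data chunks in both directions
+           option http-no-delay
+           server s1 192.168.0.30:80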
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ See also : "option http-buffer-request"
-option clitcpka
-no option clitcpka
- Enable or disable the sending of TCP keepalive packets on the client side
+option http-pretend-keepalive
+no option http-pretend-keepalive
+ Define whether HAProxy will announce keepalive for HTTP/1.x connection to the
+ server or not
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | no | yes | yes
Arguments : none
- When there is a firewall or any session-aware component between a client and
- a server, and when the protocol involves very long sessions with long idle
- periods (e.g. remote desktops), there is a risk that one of the intermediate
- components decides to expire a session which has remained idle for too long.
-
- Enabling socket-level TCP keep-alives makes the system regularly send packets
- to the other end of the connection, leaving it active. The delay between
- keep-alive probes is controlled by the system only and depends both on the
- operating system and its tuning parameters.
+ When running with "option http-server-close" or "option httpclose", HAProxy
+ adds a "Connection: close" header to the HTTP/1.x request forwarded to the
+ server. Unfortunately, when some servers see this header, they automatically
+ refrain from using the chunked encoding for responses of unknown length,
+ while this is totally unrelated. The effect is that a client or a cache could
+ receive an incomplete response without being aware of it, and consider the
+ response complete.
- It is important to understand that keep-alive packets are neither emitted nor
- received at the application level. It is only the network stacks which sees
- them. For this reason, even if one side of the proxy already uses keep-alives
- to maintain its connection alive, those keep-alive packets will not be
- forwarded to the other side of the proxy.
+ By setting "option http-pretend-keepalive", HAProxy will make the server
+ believe it will keep the connection alive. The server will then not fall back
+ to the abnormal undesired behavior described above. When HAProxy gets the
+ whole response, it will close the connection with the server just as it
+ would do with "option httpclose". That way the client gets a normal response
+ and the connection is correctly closed on the server side.
- Please note that this has nothing to do with HTTP keep-alive.
+ It is recommended not to enable this option by default, because most servers
+ will more efficiently close the connection themselves after the last packet,
+ and release their buffers slightly earlier. Also, the added packet on the
+ network could slightly reduce the overall peak performance. However it is
+ worth noting that when this option is enabled, HAProxy will have slightly
+ less work to do. So if HAProxy is the bottleneck on the whole architecture,
+ enabling this option might save a few CPU cycles.
- Using option "clitcpka" enables the emission of TCP keep-alive probes on the
- client side of a connection, which should help when session expirations are
- noticed between HAProxy and a client.
+ This option may be set in backend and listen sections. If it is used in a
+ frontend section, it will be ignored and a warning will be reported during
+ startup. It is a backend-related option, so there is no real reason to set
+ it on a frontend.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
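+
+ For instance, a sketch combining it with "option http-server-close" (the
+ names and addresses are only illustrative) :
+
+ Example :
+       backend app
+           mode http
+           option http-server-close
+           option http-pretend-keepalive   # keep chunked responses working
+           server s1 192.168.0.40:80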
- See also : "option srvtcpka", "option tcpka"
-
+ See also : "option httpclose", "option http-server-close", and
+ "option http-keep-alive"
-option contstats
- Enable continuous traffic statistics updates
+option http-restrict-req-hdr-names { preserve | delete | reject }
+ Set HAProxy policy about HTTP request header names containing characters
+ outside the "[a-zA-Z0-9-]" charset
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | yes | yes | yes
- Arguments : none
+ Arguments :
+ preserve disable the filtering. It is the default mode for HTTP proxies
+ with no FastCGI application configured.
- By default, counters used for statistics calculation are incremented
- only when a stream finishes. It works quite well when serving small
- objects, but with big ones (for example large images or archives) or
- with A/V streaming, a graph generated from HAProxy counters looks like
- a hedgehog. With this option enabled counters get incremented frequently
- along the stream, typically every 5 seconds, which is often enough to
- produce clean graphs. Recounting touches a hotpath directly so it is not
- not enabled by default, as it can cause a lot of wakeups for very large
- session counts and cause a small performance drop.
+ delete remove request headers with a name containing a character
+ outside the "[a-zA-Z0-9-]" charset. It is the default mode for
+ HTTP backends with a configured FastCGI application.
-option disable-h2-upgrade
-no option disable-h2-upgrade
- Enable or disable the implicit HTTP/2 upgrade from an HTTP/1.x client
- connection.
+ reject reject the request with a 403-Forbidden response if it contains a
+ header name with a character outside the "[a-zA-Z0-9-]" charset.
+
+ This option may be used to restrict the request header names to alphanumeric
+ and hyphen characters ([A-Za-z0-9-]). This may be mandatory to interoperate
+ with non-HTTP compliant servers that fail to handle some characters in header
+ names. It may also be mandatory for FastCGI applications because all
+ non-alphanumeric characters in header names are replaced by an underscore
+ ('_'). Thus, it is easily possible to mix up header names and bypass some
+ rules. For instance, "X-Forwarded-For" and "X_Forwarded-For" headers are both
+ converted to "HTTP_X_FORWARDED_FOR" in FastCGI.
+
+ Note this option is evaluated per proxy and after the http-request rules
+ evaluation.
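+
+ A minimal illustrative sketch (the backend and server names are only
+ examples) :
+
+ Example :
+       backend fcgi_app
+           mode http
+           # avoid "X-Forwarded-For" / "X_Forwarded-For" ambiguity in FastCGI
+           option http-restrict-req-hdr-names delete
+           server s1 192.168.0.50:9000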
+
+option http-server-close
+no option http-server-close
+ Enable or disable HTTP/1.x connection closing on the server side
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | yes | yes | yes
Arguments : none
- By default, HAProxy is able to implicitly upgrade an HTTP/1.x client
- connection to an HTTP/2 connection if the first request it receives from a
- given HTTP connection matches the HTTP/2 connection preface (i.e. the string
- "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"). This way, it is possible to support
- HTTP/1.x and HTTP/2 clients on a non-SSL connections. This option must be
- used to disable the implicit upgrade. Note this implicit upgrade is only
- supported for HTTP proxies, thus this option too. Note also it is possible to
- force the HTTP/2 on clear connections by specifying "proto h2" on the bind
- line. Finally, this option is applied on all bind lines. To disable implicit
- HTTP/2 upgrades for a specific bind line, it is possible to use "proto h1".
+ By default HAProxy operates in keep-alive mode with regards to persistent
+ HTTP/1.x connections: for each connection it processes each request and
+ response, and leaves the connection idle on both sides. This mode may be
+ changed by several options such as "option http-server-close" or "option
+ httpclose". Setting "option http-server-close" enables HTTP connection-close
+ mode on the server side while keeping the ability to support HTTP keep-alive
+ and pipelining on the client side. This provides the lowest latency on the
+ client side (slow network) and the fastest session reuse on the server side
+ to save server resources, similarly to "option httpclose". It also permits
+ non-keepalive capable servers to be served in keep-alive mode to the clients
+ if they conform to the requirements of RFC7230. Please note that some servers
+ do not always conform to those requirements when they see "Connection: close"
+ in the request. The effect will be that keep-alive will never be used. A
+ workaround consists in enabling "option http-pretend-keepalive".
+
+ At the moment, logs will not indicate whether requests came from the same
+ session or not. The accept date reported in the logs corresponds to the end
+ of the previous request, and the request time corresponds to the time spent
+ waiting for a new request. The keep-alive request time is still bound to the
+ timeout defined by "timeout http-keep-alive" or "timeout http-request" if
+ not set.
+
+ This option may be set both in a frontend and in a backend. It is enabled if
+ at least one of the frontend or backend holding a connection has it enabled.
+ It disables and replaces any previous "option httpclose" or "option
+ http-keep-alive". Please check section 4 ("Proxies") to see how this option
+ combines with others when frontend and backend options differ.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
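+
+ Example (illustrative; names and addresses are placeholders) :
+ # close server-side connections after each response while keeping
+ # client-side keep-alive
+ listen web
+ mode http
+ option http-server-close
+ server srv1 192.168.1.1:80 check
+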
-option dontlog-normal
-no option dontlog-normal
- Enable or disable logging of normal, successful connections
+ See also : "option httpclose", "option http-pretend-keepalive" and
+ "option http-keep-alive".
- May be used in the following contexts: tcp, http
+option http-use-proxy-header
+no option http-use-proxy-header
+ Make use of non-standard Proxy-Connection header instead of Connection
+
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | no
Arguments : none
- There are large sites dealing with several thousand connections per second
- and for which logging is a major pain. Some of them are even forced to turn
- logs off and cannot debug production issues. Setting this option ensures that
- normal connections, those which experience no error, no timeout, no retry nor
- redispatch, will not be logged. This leaves disk space for anomalies. In HTTP
- mode, the response status code is checked and return codes 5xx will still be
- logged.
+ While RFC7230 explicitly states that HTTP/1.1 agents must use the
+ Connection header to indicate their wish of persistent or non-persistent
+ connections, both browsers and proxies ignore this header for proxied
+ connections and make use of the undocumented, non-standard Proxy-Connection
+ header instead. The issue begins when trying to put a load balancer between
+ browsers and such proxies, because there will be a difference between what
+ HAProxy understands and what the client and the proxy agree on.
- It is strongly discouraged to use this option as most of the time, the key to
- complex issues is in the normal logs which will not be logged here. If you
- need to separate logs, see the "log-separate-errors" option instead.
+ By setting this option in a frontend, HAProxy can automatically switch to use
+ that non-standard header if it sees proxied requests. A proxied request is
+ defined here as one where the URI begins with neither a '/' nor a '*'. This
+ is incompatible with the HTTP tunnel mode. Note that this option can only be
+ specified in a frontend and will affect the request along its whole life.
- See also : "log", "dontlognull", "log-separate-errors" and section 8 about
- logging.
+ Also, when this option is set, a request which requires authentication will
+ automatically switch to use proxy authentication headers if it is itself a
+ proxied request. That makes it possible to check or enforce authentication in
+ front of an existing proxy.
+ This option should normally never be used, except in front of a proxy.
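+
+ Example (illustrative; names are placeholders) :
+ # frontend placed in front of an explicit proxy, switching to the
+ # Proxy-Connection header for proxied requests
+ frontend fe_to_proxy
+ mode http
+ option http-use-proxy-header
+ default_backend proxy_farm
+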
-option dontlognull
-no option dontlognull
- Enable or disable logging of null connections
+ See also : "option httpclose", and "option http-server-close".
+
+option httpchk
+option httpchk <uri>
+option httpchk <method> <uri>
+option httpchk <method> <uri> <version>
+option httpchk <method> <uri> <version> <host>
+ Enable HTTP protocol health checks on the servers
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | no | yes | yes
- Arguments : none
+ Arguments :
+ <method> is the optional HTTP method used with the requests. When not set,
+ the "OPTIONS" method is used, as it generally requires low server
+ processing and is easy to filter out from the logs. Any method
+ may be used, though it is not recommended to invent non-standard
+ ones.
- In certain environments, there are components which will regularly connect to
- various systems to ensure that they are still alive. It can be the case from
- another load balancer as well as from monitoring systems. By default, even a
- simple port probe or scan will produce a log. If those connections pollute
- the logs too much, it is possible to enable option "dontlognull" to indicate
- that a connection on which no data has been transferred will not be logged,
- which typically corresponds to those probes. Note that errors will still be
- returned to the client and accounted for in the stats. If this is not what is
- desired, option http-ignore-probes can be used instead.
+ <uri> is the URI referenced in the HTTP requests. It defaults to " / "
+ which is accessible by default on almost any server, but may be
+ changed to any other URI. Query strings are permitted.
- It is generally recommended not to use this option in uncontrolled
- environments (e.g. internet), otherwise scans and other malicious activities
- would not be logged.
+ <version> is the optional HTTP version string. It defaults to "HTTP/1.0"
+ but some servers might behave incorrectly in HTTP 1.0, so turning
+ it to HTTP/1.1 may sometimes help. Note that the Host field is
+ mandatory in HTTP/1.1.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ <host> is the optional HTTP Host header value. It is not set by default.
+ It is a log-format string.
- See also : "log", "http-ignore-probes", "monitor-uri", and
- section 8 about logging.
+ By default, server health checks only consist in trying to establish a TCP
+ connection. When "option httpchk" is specified, a complete HTTP request is
+ sent once the TCP connection is established, and responses 2xx and 3xx are
+ considered valid, while all other ones indicate a server failure, including
+ the lack of any response.
-option forwarded [ proto ]
- [ host | host-expr <host_expr> ]
- [ by | by-expr <by_expr> ] [ by_port | by_port-expr <by_port_expr>]
- [ for | for-expr <for_expr> ] [ for_port | for_port-expr <for_port_expr>]
-no option forwarded
- Enable insertion of the rfc 7239 forwarded header in requests sent to servers
+ Combined with "http-check" directives, it is possible to customize the
+ request sent during the HTTP health checks or the matching rules on the
+ response. It is also possible to configure a send/expect sequence, just like
+ with the directive "tcp-check" for TCP health checks.
- May be used in the following contexts: http
+ The server configuration is used by default to open connections to perform
+ HTTP health checks. But it is also possible to overwrite server parameters
+ using "http-check connect" rules.
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ The "httpchk" option does not necessarily require an HTTP backend, it also
+ works with plain TCP backends. This is particularly useful to check simple
+ scripts bound to some dedicated ports using the inetd daemon. However, it
+ always internally relies on an HTX multiplexer, which means the request
+ formatting and the response parsing will be strict.
- Arguments :
- <host_expr> optional argument to specify a custom sample expression
- those result will be used as 'host' parameter value
+ Examples :
+ # Relay HTTPS traffic to Apache instance and check service availability
+ # using HTTP request "OPTIONS * HTTP/1.1" on port 80.
+ backend https_relay
+ mode tcp
+ option httpchk OPTIONS * HTTP/1.1
+ http-check send hdr Host www
+ server apache1 192.168.1.1:443 check port 80
- <by_expr> optional argument to specify a custom sample expression
- those result will be used as 'by' parameter nodename value
+ See also : "option ssl-hello-chk", "option smtpchk", "option mysql-check",
+ "option pgsql-check", "http-check" and the "check", "port" and
+ "inter" server options.
- <for_expr> optional argument to specify a custom sample expression
- those result will be used as 'for' parameter nodename value
- <by_port_expr> optional argument to specify a custom sample expression
- those result will be used as 'by' parameter nodeport value
+option httpclose
+no option httpclose
+ Enable or disable HTTP/1.x connection closing
- <for_port_expr> optional argument to specify a custom sample expression
- those result will be used as 'for' parameter nodeport value
+ May be used in the following contexts: http
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- Since HAProxy works in reverse-proxy mode, servers are losing some request
- context (request origin: client ip address, protocol used...)
+ Arguments : none
- A common way to address this limitation is to use the well known
- x-forward-for and x-forward-* friends to expose some of this context to the
- underlying servers/applications.
- While this use to work and is widely deployed, it is not officially supported
- by the IETF and can be the root of some interoperability as well as security
- issues.
+ By default HAProxy operates in keep-alive mode with regards to persistent
+ HTTP/1.x connections: for each connection it processes each request and
+ response, and leaves the connection idle on both sides. This mode may be
+ changed by several options such as "option http-server-close" or "option
+ httpclose".
- To solve this, a new HTTP extension has been described by the IETF:
- forwarded header (RFC7239).
- More information here: https://www.rfc-editor.org/rfc/rfc7239.html
+ If "option httpclose" is set, HAProxy will close the client or the server
+ connection, depending where the option is set. The frontend is considered for
+ client connections while the backend is considered for server ones. If the
+ option is set on a listener, it is applied both on client and server
+ connections. It will check if a "Connection: close" header is already set in
+ each direction, and will add one if missing.
- The use of this single header allow to convey numerous details
- within the same header, and most importantly, fixes the proxy chaining
- issue. (the rfc allows for multiple chained proxies to append their own
- values to an already existing header).
+ This option may also be combined with "option http-pretend-keepalive", which
+ will disable sending of the "Connection: close" request header, but will
+ still cause the connection to be closed once the whole response is received.
- This option may be specified in defaults, listen or backend section, but it
- will be ignored for frontend sections.
+ It disables and replaces any previous "option http-server-close" or "option
+ http-keep-alive".
- Setting option forwarded without arguments results in using default implicit
- behavior.
- Default behavior enables proto parameter and injects original client ip.
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
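+
+ Example (illustrative; names and addresses are placeholders) :
+ # force "Connection: close" in both directions
+ listen legacy_app
+ mode http
+ option httpclose
+ server app1 192.168.1.2:8080 check
+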
- The equivalent explicit/manual configuration would be:
- option forwarded proto for
+ See also : "option http-server-close".
- The keyword 'by' is used to enable 'by' parameter ("nodename") in
- forwarded header. It allows to embed request proxy information.
- 'by' value will be set to proxy ip (destination address)
- If not available (ie: UNIX listener), 'by' will be set to
- "unknown".
- The keyword 'by-expr' is used to enable 'by' parameter ("nodename") in
- forwarded header. It allows to embed request proxy information.
- 'by' value will be set to the result of the sample expression
- <by_expr>, if valid, otherwise it will be set to "unknown".
-
- The keyword 'for' is used to enable 'for' parameter ("nodename") in
- forwarded header. It allows to embed request client information.
- 'for' value will be set to client ip (source address)
- If not available (ie: UNIX listener), 'for' will be set to
- "unknown".
+option httplog [ clf ]
+ Enable logging of HTTP request, stream state and timers
- The keyword 'for-expr' is used to enable 'for' parameter ("nodename") in
- forwarded header. It allows to embed request client information.
- 'for' value will be set to the result of the sample expression
- <for_expr>, if valid, otherwise it will be set to "unknown".
+ May be used in the following contexts: http
- The keyword 'by_port' is used to provide "nodeport" info to
- 'by' parameter. 'by_port' requires 'by' or 'by-expr' to be set or
- it will be ignored.
- "nodeport" will be set to proxy (destination) port if available,
- otherwise it will be ignored.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- The keyword 'by_port-expr' is used to provide "nodeport" info to
- 'by' parameter. 'by_port-expr' requires 'by' or 'by-expr' to be set or
- it will be ignored.
- "nodeport" will be set to the result of the sample expression
- <by_port_expr>, if valid, otherwise it will be ignored.
+ Arguments :
+ clf if the "clf" argument is added, then the output format will be
+ the CLF format instead of HAProxy's default HTTP format. You can
+ use this when you need to feed HAProxy's logs through a specific
+ log analyzer which only supports the CLF format and which is not
+ extensible.
- The keyword 'for_port' is used to provide "nodeport" info to
- 'for' parameter. 'for_port' requires 'for' or 'for-expr' to be set or
- it will be ignored.
- "nodeport" will be set to client (source) port if available,
- otherwise it will be ignored.
+ By default, the log output format is very poor, as it only contains the
+ source and destination addresses, and the instance name. By specifying
+ "option httplog", each log line turns into a much richer format including,
+ but not limited to, the HTTP request, the connection timers, the stream
+ status, the connections numbers, the captured headers and cookies, the
+ frontend, backend and server name, and of course the source address and
+ ports.
- The keyword 'for_port-expr' is used to provide "nodeport" info to
- 'for' parameter. 'for_port-expr' requires 'for' or 'for-expr' to be set or
- it will be ignored.
- "nodeport" will be set to the result of the sample expression
- <for_port_expr>, if valid, otherwise it will be ignored.
+ Specifying only "option httplog" will automatically clear the 'clf' mode
+ if it was set by default.
- Examples :
- # Those servers want the ip address and protocol of the client request
- # Resulting header would look like this:
- # forwarded: proto=http;for=127.0.0.1
- backend www_default
- mode http
- option forwarded
- #equivalent to: option forwarded proto for
+ "option httplog" overrides any previous "log-format" directive.
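+
+ Example (illustrative; names are placeholders) :
+ # emit CLF-formatted logs for a legacy log analyzer
+ frontend fe_web
+ mode http
+ log global
+ option httplog clf
+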
- # Those servers want the requested host and hashed client ip address
- # as well as client source port (you should use seed for xxh32 if ensuring
- # ip privacy is a concern)
- # Resulting header would look like this:
- # forwarded: host="haproxy.org";for="_000000007F2F367E:60138"
- backend www_host
- mode http
- option forwarded host for-expr src,xxh32,hex for_port
+ See also : section 8 about logging.
- # Those servers want custom data in host, for and by parameters
- # Resulting header would look like this:
- # forwarded: host="host.com";by=_haproxy;for="[::1]:10"
- backend www_custom
- mode http
- option forwarded host-expr str(host.com) by-expr str(_haproxy) for for_port-expr int(10)
+option httpslog
+ Enable logging of HTTPS request, stream state and timers
- # Those servers want random 'for' obfuscated identifiers for request
- # tracing purposes while protecting sensitive IP information
- # Resulting header would look like this:
- # forwarded: for=_000000002B1F4D63
- backend www_for_hide
- mode http
- option forwarded for-expr rand,hex
+ May be used in the following contexts: http
- See also : "option forwardfor", "option originalto"
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
-option forwardfor [ except <network> ] [ header <name> ] [ if-none ]
- Enable insertion of the X-Forwarded-For header to requests sent to servers
+ By default, the log output format is very poor, as it only contains the
+ source and destination addresses, and the instance name. By specifying
+ "option httpslog", each log line turns into a much richer format including,
+ but not limited to, the HTTP request, the connection timers, the stream
+ status, the connections numbers, the captured headers and cookies, the
+ frontend, backend and server name, the SSL certificate verification and SSL
+ handshake statuses, and of course the source address and ports.
- May be used in the following contexts: http
+ "option httpslog" overrides any previous "log-format" directive.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ See also : section 8 about logging.
- Arguments :
- <network> is an optional argument used to disable this option for sources
- matching <network>
- <name> an optional argument to specify a different "X-Forwarded-For"
- header name.
- Since HAProxy works in reverse-proxy mode, the servers see its IP address as
- their client address. This is sometimes annoying when the client's IP address
- is expected in server logs. To solve this problem, the well-known HTTP header
- "X-Forwarded-For" may be added by HAProxy to all requests sent to the server.
- This header contains a value representing the client's IP address. Since this
- header is always appended at the end of the existing header list, the server
- must be configured to always use the last occurrence of this header only. See
- the server's manual to find how to enable use of this standard header. Note
- that only the last occurrence of the header must be used, since it is really
- possible that the client has already brought one.
+option independent-streams
+no option independent-streams
+ Enable or disable independent timeout processing for both directions
- The keyword "header" may be used to supply a different header name to replace
- the default "X-Forwarded-For". This can be useful where you might already
- have a "X-Forwarded-For" header from a different application (e.g. stunnel),
- and you need preserve it. Also if your backend server doesn't use the
- "X-Forwarded-For" header and requires different one (e.g. Zeus Web Servers
- require "X-Cluster-Client-IP").
+ May be used in the following contexts: tcp, http
- Sometimes, a same HAProxy instance may be shared between a direct client
- access and a reverse-proxy access (for instance when an SSL reverse-proxy is
- used to decrypt HTTPS traffic). It is possible to disable the addition of the
- header for a known source address or network by adding the "except" keyword
- followed by the network address. In this case, any source IP matching the
- network will not cause an addition of this header. Most common uses are with
- private networks or 127.0.0.1. IPv4 and IPv6 are both supported.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- Alternatively, the keyword "if-none" states that the header will only be
- added if it is not present. This should only be used in perfectly trusted
- environment, as this might cause a security issue if headers reaching HAProxy
- are under the control of the end-user.
+ Arguments : none
- This option may be specified either in the frontend or in the backend. If at
- least one of them uses it, the header will be added. Note that the backend's
- setting of the header subargument takes precedence over the frontend's if
- both are defined. In the case of the "if-none" argument, if at least one of
- the frontend or the backend does not specify it, it wants the addition to be
- mandatory, so it wins.
+ By default, when data is sent over a socket, both the write timeout and the
+ read timeout for that socket are refreshed, because we consider that there is
+ activity on that socket, and we have no other means of guessing if we should
+ receive data or not.
- Example :
- # Public HTTP address also used by stunnel on the same machine
- frontend www
- mode http
- option forwardfor except 127.0.0.1 # stunnel already adds the header
+ While this default behavior is desirable for almost all applications, there
+ exists a situation where it is desirable to disable it, and only refresh the
+ read timeout if there are incoming data. This happens on streams with large
+ timeouts and low amounts of exchanged data such as telnet session. If the
+ server suddenly disappears, the output data accumulates in the system's
+ socket buffers, both timeouts are correctly refreshed, and there is no way
+ to know the server does not receive them, so we don't timeout. However, when
+ the underlying protocol always echoes sent data, it would be enough by itself
+ to detect the issue using the read timeout. Note that this problem does not
+ happen with more verbose protocols because data won't accumulate long in the
+ socket buffers.
- # Those servers want the IP Address in X-Client
- backend www
- mode http
- option forwardfor header X-Client
+ When this option is set on the frontend, it will disable read timeout updates
+ on data sent to the client. There probably is little use of this case. When
+ the option is set on the backend, it will disable read timeout updates on
+ data sent to the server. Doing so will typically break large HTTP posts from
+ slow lines, so use it with caution.
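+
+ Example (illustrative; names and addresses are placeholders) :
+ # long-lived TCP relay; only incoming data refreshes the read timeout
+ listen telnet_relay
+ mode tcp
+ timeout client 1h
+ timeout server 1h
+ option independent-streams
+ server tn1 192.168.1.3:23
+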
- See also : "option httpclose", "option http-server-close",
- "option http-keep-alive"
+ See also : "timeout client", "timeout server" and "timeout tunnel"
-option h1-case-adjust-bogus-client
-no option h1-case-adjust-bogus-client
- Enable or disable the case adjustment of HTTP/1 headers sent to bogus clients
+option ldap-check
+ Use LDAPv3 health checks for server testing
- May be used in the following contexts: http
+ May be used in the following contexts: tcp
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | no | yes | yes
Arguments : none
- There is no standard case for header names because, as stated in RFC7230,
- they are case-insensitive. So applications must handle them in a case-
- insensitive manner. But some bogus applications violate the standards and
- erroneously rely on the cases most commonly used by browsers. This problem
- becomes critical with HTTP/2 because all header names must be exchanged in
- lower case, and HAProxy follows the same convention. All header names are
- sent in lower case to clients and servers, regardless of the HTTP version.
+ It is possible to test that the server correctly talks LDAPv3 instead of just
+ testing that it accepts the TCP connection. When this option is set, an
+ LDAPv3 anonymous simple bind message is sent to the server, and the response
+ is analyzed to find an LDAPv3 bind response message.
- When HAProxy receives an HTTP/1 response, its header names are converted to
- lower case and manipulated and sent this way to the clients. If a client is
- known to violate the HTTP standards and to fail to process a response coming
- from HAProxy, it is possible to transform the lower case header names to a
- different format when the response is formatted and sent to the client, by
- enabling this option and specifying the list of headers to be reformatted
- using the global directives "h1-case-adjust" or "h1-case-adjust-file". This
- must only be a temporary workaround for the time it takes the client to be
- fixed, because clients which require such workarounds might be vulnerable to
- content smuggling attacks and must absolutely be fixed.
+ The server is considered valid only when the LDAP response contains success
+ resultCode (http://tools.ietf.org/html/rfc4511#section-4.1.9).
- Please note that this option will not affect standards-compliant clients.
+ Logging of bind requests is server dependent; see your server's
+ documentation for how to configure it.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ Example :
+ option ldap-check
- See also: "option h1-case-adjust-bogus-server", "h1-case-adjust",
- "h1-case-adjust-file".
+ See also : "option httpchk"
-option h1-case-adjust-bogus-server
-no option h1-case-adjust-bogus-server
- Enable or disable the case adjustment of HTTP/1 headers sent to bogus servers
+option external-check
+ Use external processes for server health checks
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | no | yes | yes
- Arguments : none
+ It is possible to test the health of a server using an external command.
+ This is achieved by running the executable set using "external-check
+ command".
- There is no standard case for header names because, as stated in RFC7230,
- they are case-insensitive. So applications must handle them in a case-
- insensitive manner. But some bogus applications violate the standards and
- erroneously rely on the cases most commonly used by browsers. This problem
- becomes critical with HTTP/2 because all header names must be exchanged in
- lower case, and HAProxy follows the same convention. All header names are
- sent in lower case to clients and servers, regardless of the HTTP version.
+ Requires the "external-check" keyword to be set in the global section.
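+
+ Example (illustrative; the command path, names and addresses are
+ placeholders, and "external-check" must also be enabled in the global
+ section) :
+ backend be_checked
+ option external-check
+ external-check command /var/lib/haproxy/check_srv.sh
+ server srv1 192.168.1.4:80 check
+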
- When HAProxy receives an HTTP/1 request, its header names are converted to
- lower case and manipulated and sent this way to the servers. If a server is
- known to violate the HTTP standards and to fail to process a request coming
- from HAProxy, it is possible to transform the lower case header names to a
- different format when the request is formatted and sent to the server, by
- enabling this option and specifying the list of headers to be reformatted
- using the global directives "h1-case-adjust" or "h1-case-adjust-file". This
- must only be a temporary workaround for the time it takes the server to be
- fixed, because servers which require such workarounds might be vulnerable to
- content smuggling attacks and must absolutely be fixed.
+ See also : "external-check", "external-check command", "external-check path"
- Please note that this option will not affect standards-compliant servers.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
-
- See also: "option h1-case-adjust-bogus-client", "h1-case-adjust",
- "h1-case-adjust-file".
-
-
-option http-buffer-request
-no option http-buffer-request
- Enable or disable waiting for whole HTTP request body before proceeding
+option idle-close-on-response
+no option idle-close-on-response
+ Avoid closing idle frontend connections if a soft stop is in progress
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ yes | yes | yes | no
Arguments : none
- It is sometimes desirable to wait for the body of an HTTP request before
- taking a decision. This is what is being done by "balance url_param" for
- example. The first use case is to buffer requests from slow clients before
- connecting to the server. Another use case consists in taking the routing
- decision based on the request body's contents. This option placed in a
- frontend or backend forces the HTTP processing to wait until either the whole
- body is received or the request buffer is full. It can have undesired side
- effects with some applications abusing HTTP by expecting unbuffered
- transmissions between the frontend and the backend, so this should definitely
- not be used by default.
+ By default, idle connections will be closed during a soft stop. In some
+ environments, a client talking to the proxy may have prepared some idle
+ connections in order to send requests later. If there is no proper retry on
+ write errors, this can result in errors while haproxy is reloading. Even
+ though a proper implementation should retry on connection/write errors, this
+ option was introduced to support backwards compatibility with haproxy prior
+ to version 2.4. Indeed before v2.4, haproxy used to wait for a last request
+ and response to add a "connection: close" header before closing, thus
+ notifying the client that the connection would not be reusable.
- See also : "option http-no-delay", "timeout http-request",
- "http-request wait-for-body"
+ In a real life example, this behavior was seen in AWS using the ALB in front
+ of a haproxy. The end result was ALB sending 502 during haproxy reloads.
-option http-drop-request-trailers
-no option http-drop-request-trailers
- Drop the HTTP trailers from the request when sent to the server
+ Users are warned that using this option may increase the number of old
+ processes if connections remain idle for too long. Adjusting the client
+ timeouts and/or the "hard-stop-after" parameter accordingly might be
+ needed in case of frequent reloads.
+
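+ Example (illustrative; names are placeholders) :
+ # preserve pre-2.4 soft-stop behavior for clients (e.g. an external LB)
+ # that do not retry on closed idle connections
+ frontend fe_behind_lb
+ mode http
+ option idle-close-on-response
+ timeout client 30s
+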
+ See also: "timeout client", "timeout client-fin", "timeout http-request",
+ "hard-stop-after"
- May be used in the following contexts: http
+
+option log-health-checks
+no option log-health-checks
+ Enable or disable logging of health checks status updates
+
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | no | no | yes
+ yes | no | yes | yes
Arguments : none
- When this option is enabled, any HTTP trailers found in a request will be
- dropped before sending it to the server.
+ By default, failed health checks are logged if the server is UP and
+ successful health checks are logged if the server is DOWN, so the amount of
+ additional information is limited.
- RFC9110#section-6.5.1 stated that trailer fields could be merged into the
- header fields. It should be done on purpose, but it may be a problem for some
- applications, espcially if malicious clients hide sensitive header fields in
- the trailers part and some intermediaries merge them with headers with no
- specific checks. In that case, this option can be enabled on the backend to
- drop any trailer fields found in requests before sending them to the server.
+ When this option is enabled, any change of the health check status or to
+ the server's health will be logged, so that it becomes possible to know
+ that a server was failing occasional checks before crashing, or exactly when
+ it failed to respond with a valid HTTP status, then when the port started to
+ reject connections, then when the server stopped responding at all.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ Note that status changes not caused by health checks (e.g. enable/disable on
+ the CLI) are intentionally not logged by this option.
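+
+ Example (illustrative only; the backend and server names are hypothetical) :
+        backend app
+            mode http
+            log global
+            option log-health-checks
+            server srv1 10.0.0.1:80 check inter 2s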
- See also: "option http-drop-response-trailers"
+ See also: "option httpchk", "option ldap-check", "option mysql-check",
+ "option pgsql-check", "option redis-check", "option smtpchk",
+ "option tcp-check", "log" and section 8 about logging.
-option http-drop-response-trailers
-no option http-drop-response-trailers
- Drop the HTTP trailers from the response when sent to the client
- May be used in the following contexts: http
+option log-separate-errors
+no option log-separate-errors
+ Change log level for non-completely successful connections
+
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | no
Arguments : none
- This option is similar to "option http-drop-request-trailers" but it must be
- used to drop trailer fields from responses before sending them to clients.
+ Sometimes looking for errors in logs is not easy. This option makes HAProxy
+ raise the level of logs containing potentially interesting information such
+ as errors, timeouts, retries, redispatches, or HTTP status codes 5xx. The
+ level changes from "info" to "err". This makes it possible to log them
+ separately to a different file with most syslog daemons. Be careful not to
+ remove them from the original file, otherwise you would lose ordering which
+ provides very important information.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ Using this option, large sites dealing with several thousand connections per
+ second may log normal traffic to a rotating buffer and only archive smaller
+ error logs.
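+
+ Example (illustrative; addresses and facilities are hypothetical, and the
+ syslog daemon is assumed to route local0.err to a dedicated file) :
+        frontend www
+            bind :80
+            mode http
+            option httplog
+            log 127.0.0.1:514 local0 info
+            option log-separate-errors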
- See also: "option http-drop-request-trailers"
+ See also : "log", "dontlognull", "dontlog-normal" and section 8 about
+ logging.
-option http-ignore-probes
-no option http-ignore-probes
- Enable or disable logging of null connections and request timeouts
- May be used in the following contexts: http
+option logasap
+no option logasap
+ Enable or disable early logging.
+
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | no
Arguments : none
- Recently some browsers started to implement a "pre-connect" feature
- consisting in speculatively connecting to some recently visited web sites
- just in case the user would like to visit them. This results in many
- connections being established to web sites, which end up in 408 Request
- Timeout if the timeout strikes first, or 400 Bad Request when the browser
- decides to close them first. These ones pollute the log and feed the error
- counters. There was already "option dontlognull" but it's insufficient in
- this case. Instead, this option does the following things :
- - prevent any 400/408 message from being sent to the client if nothing
- was received over a connection before it was closed;
- - prevent any log from being emitted in this situation;
- - prevent any error counter from being incremented
+ By default, logs are emitted when all the log format aliases and sample
+ fetches used in the definition of the log-format string return a value, or
+ when the stream is terminated. This allows the built-in log-format strings
+ to account for the transfer time, or the number of bytes in log messages.
- That way the empty connection is silently ignored. Note that it is better
- not to use this unless it is clear that it is needed, because it will hide
- real problems. The most common reason for not receiving a request and seeing
- a 408 is due to an MTU inconsistency between the client and an intermediary
- element such as a VPN, which blocks too large packets. These issues are
- generally seen with POST requests as well as GET with large cookies. The logs
- are often the only way to detect them.
+ When handling long lived connections such as large file transfers or RDP,
+ it may take a while for the request or connection to appear in the logs.
+ Using "option logasap", the log message is created as soon as the server
+ connection is established in mode tcp, or as soon as the server sends the
+ complete headers in mode http. The missing information in the logs will be
+ the total number of bytes, which will only indicate the amount of data
+ transferred before the message was created, and the total time, which will
+ not take the remainder of the connection life or transfer time into account.
+ For the case of HTTP, it is good practice to capture the "Content-Length"
+ response header so that the logs at least indicate how many bytes are
+ expected to be transferred.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ Examples :
+ listen http_proxy 0.0.0.0:80
+ mode http
+ option httplog
+ option logasap
+ log 192.168.2.200 local3
- See also : "log", "dontlognull", "errorfile", and section 8 about logging.
+ >>> Feb 6 12:14:14 localhost \
+ haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \
+ static/srv1 9/10/7/14/+30 200 +243 - - ---- 3/1/1/1/0 1/0 \
+ "GET /image.iso HTTP/1.0"
+ See also : "option httplog", "capture response header", and section 8 about
+ logging.
-option http-keep-alive
-no option http-keep-alive
- Enable or disable HTTP keep-alive from client to server for HTTP/1.x
- connections
- May be used in the following contexts: http
+option mysql-check [ user <username> [ { post-41 | pre-41 } ] ]
+ Use MySQL health checks for server testing
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ May be used in the following contexts: tcp
- Arguments : none
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- By default HAProxy operates in keep-alive mode with regards to persistent
- HTTP/1.x connections: for each connection it processes each request and
- response, and leaves the connection idle on both sides. This mode may be
- changed by several options such as "option http-server-close" or "option
- httpclose". This option allows to set back the keep-alive mode, which can be
- useful when another mode was used in a defaults section.
+ Arguments :
+ <username> This is the username which will be used when connecting to the
+ MySQL server.
+ post-41 Send post v4.1 client compatible checks (the default)
+ pre-41 Send pre v4.1 client compatible checks
- Setting "option http-keep-alive" enables HTTP keep-alive mode on the client-
- and server- sides. This provides the lowest latency on the client side (slow
- network) and the fastest session reuse on the server side at the expense
- of maintaining idle connections to the servers. In general, it is possible
- with this option to achieve approximately twice the request rate that the
- "http-server-close" option achieves on small objects. There are mainly two
- situations where this option may be useful :
+ If you specify a username, the check consists of sending two MySQL packets,
+ one Client Authentication packet and one QUIT packet, to correctly close the
+ MySQL session. HAProxy then parses the MySQL Handshake Initialization packet
+ and/or Error packet. It is a basic but useful test which produces neither an
+ error nor an aborted connect on the server. However, it requires an unlocked
+ and authorised user without a password. To create a basic limited user in
+ MySQL with optional resource limits:
- - when the server is non-HTTP compliant and authenticates the connection
- instead of requests (e.g. NTLM authentication)
+ CREATE USER '<username>'@'<ip_of_haproxy|network_of_haproxy/netmask>'
+ /*!50701 WITH MAX_QUERIES_PER_HOUR 1 MAX_UPDATES_PER_HOUR 0 */
+ /*M!100201 MAX_STATEMENT_TIME 0.0001 */;
- - when the cost of establishing the connection to the server is significant
- compared to the cost of retrieving the associated object from the server.
+ If you don't specify a username (this is deprecated and not recommended),
+ the check only consists in parsing the MySQL Handshake Initialization packet
+ or Error packet; nothing is sent in this mode. It was reported that this can
+ cause a lockout if the check is too frequent and/or if there is not enough
+ traffic. In this case, you need to check the MySQL "max_connect_errors"
+ value: if a connection is established successfully within fewer than
+ "max_connect_errors" attempts after a previous connection was interrupted,
+ the error count for the host is cleared to zero. If HAProxy's server gets
+ blocked, the "FLUSH HOSTS" statement is the only way to unblock it.
- This last case can happen when the server is a fast static server of cache.
+ Remember that this does not check database presence nor database consistency.
+ To do this, you can use an external check with xinetd for example.
- At the moment, logs will not indicate whether requests came from the same
- session or not. The accept date reported in the logs corresponds to the end
- of the previous request, and the request time corresponds to the time spent
- waiting for a new request. The keep-alive request time is still bound to the
- timeout defined by "timeout http-keep-alive" or "timeout http-request" if
- not set.
+ The check requires MySQL >= 3.22; for older versions, please use a TCP check.
- This option disables and replaces any previous "option httpclose" or "option
- http-server-close".
+ Most often, a MySQL server needs to see the client's IP address for various
+ purposes, including IP privilege matching and connection logging. When
+ possible, it is often wise to masquerade the client's IP address when
+ connecting to the server using the "usesrc" argument of the "source" keyword,
+ which requires the transparent proxy feature to be compiled in, and the MySQL
+ server to route the client via the machine hosting HAProxy.
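+
+ Example (illustrative; names and addresses are hypothetical) :
+        backend mysql_pool
+            mode tcp
+            option mysql-check user haproxy post-41
+            server db1 10.0.0.10:3306 check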
- See also : "option httpclose",, "option http-server-close",
- "option prefer-last-server" and "option http-pretend-keepalive".
+ See also: "option httpchk"
-option http-no-delay
-no option http-no-delay
- Instruct the system to favor low interactive delays over performance in HTTP
+option nolinger
+no option nolinger
+ Enable or disable immediate session resource cleaning after close
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http, log
- May be used in sections : defaults | frontend | listen | backend
+ May be used in sections: defaults | frontend | listen | backend
yes | yes | yes | yes
Arguments : none
- In HTTP, each payload is unidirectional and has no notion of interactivity.
- Any agent is expected to queue data somewhat for a reasonably low delay.
- There are some very rare server-to-server applications that abuse the HTTP
- protocol and expect the payload phase to be highly interactive, with many
- interleaved data chunks in both directions within a single request. This is
- absolutely not supported by the HTTP specification and will not work across
- most proxies or servers. When such applications attempt to do this through
- HAProxy, it works but they will experience high delays due to the network
- optimizations which favor performance by instructing the system to wait for
- enough data to be available in order to only send full packets. Typical
- delays are around 200 ms per round trip. Note that this only happens with
- abnormal uses. Normal uses such as CONNECT requests nor WebSockets are not
- affected.
-
- When "option http-no-delay" is present in either the frontend or the backend
- used by a connection, all such optimizations will be disabled in order to
- make the exchanges as fast as possible. Of course this offers no guarantee on
- the functionality, as it may break at any other place. But if it works via
- HAProxy, it will work as fast as possible. This option should never be used
- by default, and should never be used at all unless such a buggy application
- is discovered. The impact of using this option is an increase of bandwidth
- usage and CPU usage, which may significantly lower performance in high
- latency environments.
-
- See also : "option http-buffer-request"
-
-
-option http-pretend-keepalive
-no option http-pretend-keepalive
- Define whether HAProxy will announce keepalive for HTTP/1.x connection to the
- server or not
-
- May be used in the following contexts: http
-
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments : none
-
- When running with "option http-server-close" or "option httpclose", HAProxy
- adds a "Connection: close" header to the HTTP/1.x request forwarded to the
- server. Unfortunately, when some servers see this header, they automatically
- refrain from using the chunked encoding for responses of unknown length,
- while this is totally unrelated. The effect is that a client or a cache could
- receive an incomplete response without being aware of it, and consider the
- response complete.
+ When clients or servers abort connections in a dirty way (e.g. they are
+ physically disconnected), the session timeout triggers and the session is
+ closed. But it will remain in FIN_WAIT1 state for some time in the system,
+ using some resources and possibly limiting the ability to establish newer
+ connections.
- By setting "option http-pretend-keepalive", HAProxy will make the server
- believe it will keep the connection alive. The server will then not fall back
- to the abnormal undesired above. When HAProxy gets the whole response, it
- will close the connection with the server just as it would do with the
- "option httpclose". That way the client gets a normal response and the
- connection is correctly closed on the server side.
+ When this happens, it is possible to activate "option nolinger" which forces
+ the system to immediately remove any socket's pending data on close. Thus,
+ a TCP RST is emitted, any pending data are truncated, and the session is
+ instantly purged from the system's tables. The generally visible effect for
+ a client is that responses are truncated if the close happens with a last
+ block of data (e.g. on a redirect or error response). On the server side,
+ it may help release the source ports immediately when a client abort is
+ forwarded in tunnels. In both cases, TCP resets are emitted and given that
+ the session is instantly destroyed, there will be no retransmit. On a lossy
+ network this can increase problems, especially when there is a firewall on
+ the lossy side, because the firewall might see and process the reset (hence
+ purge its session) and block any further traffic for this session, including
+ retransmits from the other side. So if the other side doesn't receive it,
+ it will never receive any RST again, and the firewall might log many blocked
+ packets.
- It is recommended not to enable this option by default, because most servers
- will more efficiently close the connection themselves after the last packet,
- and release its buffers slightly earlier. Also, the added packet on the
- network could slightly reduce the overall peak performance. However it is
- worth noting that when this option is enabled, HAProxy will have slightly
- less work to do. So if HAProxy is the bottleneck on the whole architecture,
- enabling this option might save a few CPU cycles.
+ For all these reasons, it is strongly recommended NOT to use this option,
+ unless absolutely needed as a last resort. In most situations, using the
+ "client-fin" or "server-fin" timeouts achieves similar results with a more
+ reliable behavior. On Linux it's also possible to use the "tcp-ut" bind or
+ server setting.
- This option may be set in backend and listen sections. Using it in a frontend
- section will be ignored and a warning will be reported during startup. It is
- a backend related option, so there is no real reason to set it on a
- frontend.
+ This option may be used both on frontends and backends, depending on the side
+ where it is required. Use it on the frontend for clients, and on the backend
+ for servers. While this option is technically supported in "defaults"
+ sections, it must really not be used there as it risks accidentally
+ propagating to sections that must not use it and causing problems there.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
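+
+ Example showing the recommended alternatives mentioned above instead of
+ "option nolinger" (timeout and server values are illustrative) :
+        defaults
+            timeout client-fin 1s
+            timeout server-fin 1s
+        backend app
+            server srv1 10.0.0.1:80 tcp-ut 5s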
- See also : "option httpclose", "option http-server-close", and
- "option http-keep-alive"
+ See also: "timeout client-fin", "timeout server-fin", "tcp-ut" bind or server
+ keywords.
-option http-restrict-req-hdr-names { preserve | delete | reject }
- Set HAProxy policy about HTTP request header names containing characters
- outside the "[a-zA-Z0-9-]" charset
+option originalto [ except <network> ] [ header <name> ]
+ Enable insertion of the X-Original-To header to requests sent to servers
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | yes
Arguments :
- preserve disable the filtering. It is the default mode for HTTP proxies
- with no FastCGI application configured.
+ <network> is an optional argument used to disable this option for sources
+ matching <network>
+ <name> an optional argument to specify a different "X-Original-To"
+ header name.
- delete remove request headers with a name containing a character
- outside the "[a-zA-Z0-9-]" charset. It is the default mode for
- HTTP backends with a configured FastCGI application.
+ Since HAProxy can work in transparent mode, every request from a client can
+ be redirected to the proxy and HAProxy itself can proxy every request to a
+ complex SQUID environment and the destination host from SO_ORIGINAL_DST will
+ be lost. This is annoying when you want access rules based on destination IP
+ addresses. To solve this problem, a new HTTP header "X-Original-To" may be
+ added by HAProxy to all requests sent to the server. This header contains a
+ value representing the original destination IP address. Since this header is
+ always appended at the end of the existing header list, the server must be
+ configured to always use the last occurrence of this header only, as it is
+ really possible that the client has already brought one.
- reject reject the request with a 403-Forbidden response if it contains a
- header name with a character outside the "[a-zA-Z0-9-]" charset.
+ The keyword "header" may be used to supply a different header name to replace
+ the default "X-Original-To". This can be useful where you might already
+ have a "X-Original-To" header from a different application, and you need
+ preserve it. Also if your backend server doesn't use the "X-Original-To"
+ header and requires different one.
- This option may be used to restrict the request header names to alphanumeric
- and hyphen characters ([A-Za-z0-9-]). This may be mandatory to interoperate
- with non-HTTP compliant servers that fail to handle some characters in header
- names. It may also be mandatory for FastCGI applications because all
- non-alphanumeric characters in header names are replaced by an underscore
- ('_'). Thus, it is easily possible to mix up header names and bypass some
- rules. For instance, "X-Forwarded-For" and "X_Forwarded-For" headers are both
- converted to "HTTP_X_FORWARDED_FOR" in FastCGI.
+ Sometimes, the same HAProxy instance may be shared between a direct client
+ access and a reverse-proxy access (for instance when an SSL reverse-proxy is
+ used to decrypt HTTPS traffic). It is possible to disable the addition of the
+ header for a known destination address or network by adding the "except"
+ keyword followed by the network address. In this case, any destination IP
+ matching the network will not cause an addition of this header. Most common
+ uses are with private networks or 127.0.0.1. IPv4 and IPv6 are both
+ supported.
- Note this option is evaluated per proxy and after the http-request rules
- evaluation.
+ This option may be specified either in the frontend or in the backend. If at
+ least one of them uses it, the header will be added. Note that the backend's
+ setting of the header subargument takes precedence over the frontend's if
+ both are defined.
-option http-server-close
-no option http-server-close
- Enable or disable HTTP/1.x connection closing on the server side
+ Examples :
+ # Original Destination address
+ frontend www
+ mode http
+ option originalto except 127.0.0.1
- May be used in the following contexts: http
+ # Those servers want the IP Address in X-Client-Dst
+ backend www
+ mode http
+ option originalto header X-Client-Dst
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ See also : "option httpclose", "option http-server-close".
- Arguments : none
- By default HAProxy operates in keep-alive mode with regards to persistent
- HTTP/1.x connections: for each connection it processes each request and
- response, and leaves the connection idle on both sides. This mode may be
- changed by several options such as "option http-server-close" or "option
- httpclose". Setting "option http-server-close" enables HTTP connection-close
- mode on the server side while keeping the ability to support HTTP keep-alive
- and pipelining on the client side. This provides the lowest latency on the
- client side (slow network) and the fastest session reuse on the server side
- to save server resources, similarly to "option httpclose". It also permits
- non-keepalive capable servers to be served in keep-alive mode to the clients
- if they conform to the requirements of RFC7230. Please note that some servers
- do not always conform to those requirements when they see "Connection: close"
- in the request. The effect will be that keep-alive will never be used. A
- workaround consists in enabling "option http-pretend-keepalive".
+option persist
+no option persist
+ Enable or disable forced persistence on down servers
- At the moment, logs will not indicate whether requests came from the same
- session or not. The accept date reported in the logs corresponds to the end
- of the previous request, and the request time corresponds to the time spent
- waiting for a new request. The keep-alive request time is still bound to the
- timeout defined by "timeout http-keep-alive" or "timeout http-request" if
- not set.
+ May be used in the following contexts: tcp, http
- This option may be set both in a frontend and in a backend. It is enabled if
- at least one of the frontend or backend holding a connection has it enabled.
- It disables and replaces any previous "option httpclose" or "option
- http-keep-alive". Please check section 4 ("Proxies") to see how this option
- combines with others when frontend and backend options differ.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ Arguments : none
+
+ When an HTTP request reaches a backend with a cookie which references a dead
+ server, by default it is redispatched to another server. It is possible to
+ force the request to be sent to the dead server first using "option persist"
+ if absolutely needed. A common use case is when servers are under extreme
+ load and spend their time flapping. In this case, the users would still be
+ directed to the server they opened the session on, in the hope they would be
+ correctly served. It is recommended to use "option redispatch" in conjunction
+ with this option so that in the event it would not be possible to connect to
+ the server at all (server definitely dead), the client would finally be
+ redirected to another valid server.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
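+
+ Example combining forced persistence with redispatch as recommended above
+ (illustrative; names and addresses are hypothetical) :
+        backend app
+            mode http
+            option persist
+            option redispatch
+            cookie SRV insert indirect
+            server srv1 10.0.0.1:80 check cookie s1
+            server srv2 10.0.0.2:80 check cookie s2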
- See also : "option httpclose", "option http-pretend-keepalive" and
- "option http-keep-alive".
-
-option http-use-proxy-header
-no option http-use-proxy-header
- Make use of non-standard Proxy-Connection header instead of Connection
+ See also : "option redispatch", "retries", "force-persist"
- May be used in the following contexts: http
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+option pgsql-check user <username>
+ Use PostgreSQL health checks for server testing
- Arguments : none
+ May be used in the following contexts: tcp
- While RFC7230 explicitly states that HTTP/1.1 agents must use the
- Connection header to indicate their wish of persistent or non-persistent
- connections, both browsers and proxies ignore this header for proxied
- connections and make use of the undocumented, non-standard Proxy-Connection
- header instead. The issue begins when trying to put a load balancer between
- browsers and such proxies, because there will be a difference between what
- HAProxy understands and what the client and the proxy agree on.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- By setting this option in a frontend, HAProxy can automatically switch to use
- that non-standard header if it sees proxied requests. A proxied request is
- defined here as one where the URI begins with neither a '/' nor a '*'. This
- is incompatible with the HTTP tunnel mode. Note that this option can only be
- specified in a frontend and will affect the request along its whole life.
+ Arguments :
+ <username> This is the username which will be used when connecting to the
+ PostgreSQL server.
- Also, when this option is set, a request which requires authentication will
- automatically switch to use proxy authentication headers if it is itself a
- proxied request. That makes it possible to check or enforce authentication in
- front of an existing proxy.
+ The check sends a PostgreSQL StartupMessage and waits for either an
+ Authentication request or an ErrorResponse message. It is a basic but useful
+ test which produces neither an error nor an aborted connect on the server.
+ This check is similar to the "mysql-check".
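+
+ Example (illustrative; names and addresses are hypothetical) :
+        backend pgsql_pool
+            mode tcp
+            option pgsql-check user haproxy
+            server pg1 10.0.0.20:5432 check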
- This option should normally never be used, except in front of a proxy.
+ See also: "option httpchk"
- See also : "option httpclose", and "option http-server-close".
-option httpchk
-option httpchk <uri>
-option httpchk <method> <uri>
-option httpchk <method> <uri> <version>
-option httpchk <method> <uri> <version> <host>
- Enables HTTP protocol to check on the servers health
+option prefer-last-server
+no option prefer-last-server
+ Allow multiple load balanced requests to remain on the same server
May be used in the following contexts: tcp, http
- May be used in sections : defaults | frontend | listen | backend
+ May be used in sections: defaults | frontend | listen | backend
yes | no | yes | yes
- Arguments :
- <method> is the optional HTTP method used with the requests. When not set,
- the "OPTIONS" method is used, as it generally requires low server
- processing and is easy to filter out from the logs. Any method
- may be used, though it is not recommended to invent non-standard
- ones.
+ Arguments : none
- <uri> is the URI referenced in the HTTP requests. It defaults to " / "
- which is accessible by default on almost any server, but may be
- changed to any other URI. Query strings are permitted.
+ When the load balancing algorithm in use is not deterministic, and a previous
+ request was sent to a server to which HAProxy still holds a connection, it is
+ sometimes desirable that subsequent requests on a same session go to the same
+ server as much as possible. Note that this is different from persistence, as
+ we only indicate a preference which HAProxy tries to apply without any form
+ of guarantee. The real use is for keep-alive connections sent to servers.
+ When this option is used, HAProxy will try to reuse the same connection that
+ is attached to the server instead of rebalancing to another server, which
+ would cause the connection to be closed. This can make sense for static file
+ servers. It does not make much sense to use this in combination with hashing
+ algorithms. Note that HAProxy already automatically tries to stick to a
+ server which sends a 401 or to a proxy which sends a 407 (authentication
+ required), when the load balancing algorithm is not deterministic. This is
+ mandatory for use with the broken NTLM authentication challenge, and
+ significantly helps in troubleshooting some faulty applications. Option
+ prefer-last-server might be desirable in these environments as well, to
+ avoid redistributing the traffic after every other response.
- <version> is the optional HTTP version string. It defaults to "HTTP/1.0"
- but some servers might behave incorrectly in HTTP 1.0, so turning
- it to HTTP/1.1 may sometimes help. Note that the Host field is
- mandatory in HTTP/1.1.
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
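+
+ Example for a keep-alive environment such as the NTLM case described above
+ (illustrative; names and addresses are hypothetical) :
+        backend app
+            mode http
+            balance roundrobin
+            option http-keep-alive
+            option prefer-last-server
+            server srv1 10.0.0.1:80 check
+            server srv2 10.0.0.2:80 check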
- <host> is the optional HTTP Host header value. It is not set by default.
- It is a log-format string.
+ See also: "option http-keep-alive"
- By default, server health checks only consist in trying to establish a TCP
- connection. When "option httpchk" is specified, a complete HTTP request is
- sent once the TCP connection is established, and responses 2xx and 3xx are
- considered valid, while all other ones indicate a server failure, including
- the lack of any response.
- Combined with "http-check" directives, it is possible to customize the
- request sent during the HTTP health checks or the matching rules on the
- response. It is also possible to configure a send/expect sequence, just like
- with the directive "tcp-check" for TCP health checks.
+option redispatch
+option redispatch <interval>
+no option redispatch
+ Enable or disable session redistribution in case of connection failure
- The server configuration is used by default to open connections to perform
- HTTP health checks. By it is also possible to overwrite server parameters
- using "http-check connect" rules.
+ May be used in the following contexts: tcp, http
- "httpchk" option does not necessarily require an HTTP backend, it also works
- with plain TCP backends. This is particularly useful to check simple scripts
- bound to some dedicated ports using the inetd daemon. However, it will always
- internally relies on an HTX multiplexer. Thus, it means the request
- formatting and the response parsing will be strict.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- Examples :
- # Relay HTTPS traffic to Apache instance and check service availability
- # using HTTP request "OPTIONS * HTTP/1.1" on port 80.
- backend https_relay
- mode tcp
- option httpchk OPTIONS * HTTP/1.1
- http-check send hdr Host www
- server apache1 192.168.1.1:443 check port 80
+ Arguments :
+ <interval> The optional integer value that controls how often redispatches
+ occur when retrying connections. A positive value P indicates that
+ a redispatch is desired on every Pth retry, and a negative value
+ N indicates a redispatch is desired on the Nth retry prior to the
+ last retry. For example, the default of -1 preserves the
+ historical behavior of redispatching on the last retry, a
+ positive value of 1 would indicate a redispatch on every retry,
+ and a positive value of 3 would indicate a redispatch on every
+ third retry. You can disable redispatches with a value of 0.
- See also : "option ssl-hello-chk", "option smtpchk", "option mysql-check",
- "option pgsql-check", "http-check" and the "check", "port" and
- "inter" server options.
+ In HTTP mode, if a server designated by a cookie is down, clients may
+ definitely stick to it, for example when using "option persist" or
+ "force-persist", because they cannot flush the cookie, so they will not
+ be able to access the service anymore.
-option httpclose
-no option httpclose
- Enable or disable HTTP/1.x connection closing
+ Specifying "option redispatch" will allow the proxy to break cookie or
+ consistent hash based persistence and redistribute them to a working server.
- May be used in the following contexts: http
+ Active servers are selected from a subset of the list of available
+ servers. Active servers, i.e. those that are neither down nor in
+ maintenance (whose health is not checked or that have been checked as
+ "up"), are selected in the following order:
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ 1. Any active, non-backup server, if any, or,
- Arguments : none
+ 2. If the "allbackups" option is not set, the first backup server in the
+ list, or
- By default HAProxy operates in keep-alive mode with regards to persistent
- HTTP/1.x connections: for each connection it processes each request and
- response, and leaves the connection idle on both sides. This mode may be
- changed by several options such as "option http-server-close" or "option
- httpclose".
+ 3. If the "allbackups" option is set, any backup server.
- If "option httpclose" is set, HAProxy will close the client or the server
- connection, depending where the option is set. The frontend is considered for
- client connections while the backend is considered for server ones. If the
- option is set on a listener, it is applied both on client and server
- connections. It will check if a "Connection: close" header is already set in
- each direction, and will add one if missing.
+ When a retry occurs, HAProxy tries to select a server other than the last
+ one. The new server is selected from the current list of servers.
- This option may also be combined with "option http-pretend-keepalive", which
- will disable sending of the "Connection: close" request header, but will
- still cause the connection to be closed once the whole response is received.
+ However, if the list is updated between retries (e.g., if numerous retries
+ occur and last longer than the time needed to check that a server is down,
+ remove it from the list and fall back on the list of backup servers),
+ connections may be redirected to a backup server.
- It disables and replaces any previous "option http-server-close" or "option
- http-keep-alive".
+ It also allows retrying connections to another server in case of multiple
+ connection failures. Of course, it requires having "retries" set to a nonzero
+ value.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
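+
+ As an illustration, a backend could combine "option redispatch" with an
+ interval and "retries" as follows (server names and addresses are purely
+ illustrative) :
+
+ Example :
+     backend app
+         option redispatch 3    # redispatch on every third retry
+         retries 6
+         server srv1 192.168.0.10:80 check
+         server srv2 192.168.0.11:80 check
+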
- See also : "option http-server-close".
+ See also : "option persist", "force-persist", "retries"
-option httplog [ clf ]
- Enable logging of HTTP request, stream state and timers
+option redis-check
+ Use redis health checks for server testing
- May be used in the following contexts: http
+ May be used in the following contexts: tcp
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | no | yes | yes
- Arguments :
- clf if the "clf" argument is added, then the output format will be
- the CLF format instead of HAProxy's default HTTP format. You can
- use this when you need to feed HAProxy's logs through a specific
- log analyzer which only support the CLF format and which is not
- extensible.
+ Arguments : none
- By default, the log output format is very poor, as it only contains the
- source and destination addresses, and the instance name. By specifying
- "option httplog", each log line turns into a much richer format including,
- but not limited to, the HTTP request, the connection timers, the stream
- status, the connections numbers, the captured headers and cookies, the
- frontend, backend and server name, and of course the source address and
- ports.
+ It is possible to test that the server correctly talks REDIS protocol instead
+ of just testing that it accepts the TCP connection. When this option is set,
+ a PING redis command is sent to the server, and the response is analyzed to
+ find the "+PONG" response message.
- Specifying only "option httplog" will automatically clear the 'clf' mode
- if it was set by default.
+ Example :
+ option redis-check
- "option httplog" overrides any previous "log-format" directive.
+ See also : "option httpchk", "option tcp-check", "tcp-check expect"
- See also : section 8 about logging.
-option httpslog
- Enable logging of HTTPS request, stream state and timers
+option smtpchk
+option smtpchk <hello> <domain>
+ Use SMTP health checks for server testing
- May be used in the following contexts: http
+ May be used in the following contexts: tcp
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | no | yes | yes
- By default, the log output format is very poor, as it only contains the
- source and destination addresses, and the instance name. By specifying
- "option httpslog", each log line turns into a much richer format including,
- but not limited to, the HTTP request, the connection timers, the stream
- status, the connections numbers, the captured headers and cookies, the
- frontend, backend and server name, the SSL certificate verification and SSL
- handshake statuses, and of course the source address and ports.
+ Arguments :
+ <hello> is an optional argument. It is the "hello" command to use. It can
+ be either "HELO" (for SMTP) or "EHLO" (for ESMTP). All other
+ values will be turned into the default command ("HELO").
- "option httpslog" overrides any previous "log-format" directive.
+ <domain> is the domain name to present to the server. It may only be
+ specified (and is mandatory) if the hello command has been
+ specified. By default, "localhost" is used.
- See also : section 8 about logging.
+ When "option smtpchk" is set, the health checks will consist of TCP
+ connections followed by an SMTP command. By default, this command is
+ "HELO localhost". The server's return code is analyzed and only return codes
+ starting with a "2" will be considered as valid. All other responses,
+ including a lack of response, will constitute an error and will indicate a
+ dead server.
+ This test is meant to be used with SMTP servers or relays. Depending on the
+ request, it is possible that some servers do not log each connection attempt,
+ so you may want to experiment to improve the behavior. Using telnet on port
+ 25 is often easier than adjusting the configuration.
-option independent-streams
-no option independent-streams
- Enable or disable independent timeout processing for both directions
+ Most often, an incoming SMTP server needs to see the client's IP address for
+ various purposes, including spam filtering, anti-spoofing and logging. When
+ possible, it is often wise to masquerade the client's IP address when
+ connecting to the server using the "usesrc" argument of the "source" keyword,
+ which requires the transparent proxy feature to be compiled in.
- May be used in the following contexts: tcp, http
+ Example :
+ option smtpchk HELO mydomain.org
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ See also : "option httpchk", "source"
- Arguments : none
- By default, when data is sent over a socket, both the write timeout and the
- read timeout for that socket are refreshed, because we consider that there is
- activity on that socket, and we have no other means of guessing if we should
- receive data or not.
+option socket-stats
+no option socket-stats
- While this default behavior is desirable for almost all applications, there
- exists a situation where it is desirable to disable it, and only refresh the
- read timeout if there are incoming data. This happens on streams with large
- timeouts and low amounts of exchanged data such as telnet session. If the
- server suddenly disappears, the output data accumulates in the system's
- socket buffers, both timeouts are correctly refreshed, and there is no way
- to know the server does not receive them, so we don't timeout. However, when
- the underlying protocol always echoes sent data, it would be enough by itself
- to detect the issue using the read timeout. Note that this problem does not
- happen with more verbose protocols because data won't accumulate long in the
- socket buffers.
+ Enable or disable collecting & providing separate statistics for each socket.
- When this option is set on the frontend, it will disable read timeout updates
- on data sent to the client. There probably is little use of this case. When
- the option is set on the backend, it will disable read timeout updates on
- data sent to the server. Doing so will typically break large HTTP posts from
- slow lines, so use it with caution.
+ May be used in the following contexts: tcp, http
- See also : "timeout client", "timeout server" and "timeout tunnel"
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
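+
+ For instance, a frontend binding several sockets can report separate
+ statistics for each of them, typically combined with the "name" bind
+ keyword so that each socket appears under a readable name (addresses and
+ names are purely illustrative) :
+
+ Example :
+     frontend web
+         option socket-stats
+         bind 192.168.0.1:80 name http
+         bind 192.168.0.1:8080 name http-alt
+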
-option ldap-check
- Use LDAPv3 health checks for server testing
- May be used in the following contexts: tcp
+option splice-auto
+no option splice-auto
+ Enable or disable automatic kernel acceleration on sockets in both directions
+
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | yes | yes | yes
Arguments : none
- It is possible to test that the server correctly talks LDAPv3 instead of just
- testing that it accepts the TCP connection. When this option is set, an
- LDAPv3 anonymous simple bind message is sent to the server, and the response
- is analyzed to find an LDAPv3 bind response message.
-
- The server is considered valid only when the LDAP response contains success
- resultCode (http://tools.ietf.org/html/rfc4511#section-4.1.9).
+ When this option is enabled either on a frontend or on a backend, HAProxy
+ will automatically evaluate the opportunity to use kernel tcp splicing to
+ forward data between the client and the server, in either direction. HAProxy
+ uses heuristics to estimate if kernel splicing might improve performance or
+ not. Both directions are handled independently. Note that the heuristics used
+ are not very aggressive in order to limit excessive use of splicing. This
+ option requires splicing to be enabled at compile time, and may be globally
+ disabled with the global option "nosplice". Since splice uses pipes, using it
+ requires that there are enough spare pipes.
- Logging of bind requests is server dependent see your documentation how to
- configure it.
+ Important note: kernel-based TCP splicing is a Linux-specific feature which
+ first appeared in kernel 2.6.25. It offers kernel-based acceleration to
+ transfer data between sockets without copying these data to user-space, thus
+ providing noticeable performance gains and CPU cycles savings. Since many
+ early implementations are buggy, corrupt data and/or are inefficient, this
+ feature is not enabled by default, and it should be used with extreme care.
+ While it is not possible to detect the correctness of an implementation,
+ 2.6.29 is the first version offering a properly working implementation. In
+ case of doubt, splicing may be globally disabled using the global "nosplice"
+ keyword.
Example :
- option ldap-check
+ option splice-auto
- See also : "option httpchk"
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+ See also : "option splice-request", "option splice-response", and global
+ options "nosplice" and "maxpipes"
-option external-check
- Use external processes for server health checks
- May be used in the following contexts: tcp, http, log
+option splice-request
+no option splice-request
+ Enable or disable automatic kernel acceleration on sockets for requests
+
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | yes | yes | yes
- It is possible to test the health of a server using an external command.
- This is achieved by running the executable set using "external-check
- command".
+ Arguments : none
- Requires the "external-check" global to be set.
+ When this option is enabled either on a frontend or on a backend, HAProxy
+ will use kernel tcp splicing whenever possible to forward data going from
+ the client to the server. It might still use the recv/send scheme if there
+ are no spare pipes left. This option requires splicing to be enabled at
+ compile time, and may be globally disabled with the global option "nosplice".
+ Since splice uses pipes, using it requires that there are enough spare pipes.
- See also : "external-check", "external-check command", "external-check path"
+ Important note: see "option splice-auto" for usage limitations.
+ Example :
+ option splice-request
-option idle-close-on-response
-no option idle-close-on-response
- Avoid closing idle frontend connections if a soft stop is in progress
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ See also : "option splice-auto", "option splice-response", and global options
+ "nosplice" and "maxpipes"
- May be used in the following contexts: http
+
+option splice-response
+no option splice-response
+ Enable or disable automatic kernel acceleration on sockets for responses
+
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | yes | yes | yes
Arguments : none
- By default, idle connections will be closed during a soft stop. In some
- environments, a client talking to the proxy may have prepared some idle
- connections in order to send requests later. If there is no proper retry on
- write errors, this can result in errors while haproxy is reloading. Even
- though a proper implementation should retry on connection/write errors, this
- option was introduced to support backwards compatibility with haproxy prior
- to version 2.4. Indeed before v2.4, haproxy used to wait for a last request
- and response to add a "connection: close" header before closing, thus
- notifying the client that the connection would not be reusable.
+ When this option is enabled either on a frontend or on a backend, HAProxy
+ will use kernel tcp splicing whenever possible to forward data going from
+ the server to the client. It might still use the recv/send scheme if there
+ are no spare pipes left. This option requires splicing to be enabled at
+ compile time, and may be globally disabled with the global option "nosplice".
+ Since splice uses pipes, using it requires that there are enough spare pipes.
- In a real life example, this behavior was seen in AWS using the ALB in front
- of a haproxy. The end result was ALB sending 502 during haproxy reloads.
+ Important note: see "option splice-auto" for usage limitations.
- Users are warned that using this option may increase the number of old
- processes if connections remain idle for too long. Adjusting the client
- timeouts and/or the "hard-stop-after" parameter accordingly might be
- needed in case of frequent reloads.
+ Example :
+ option splice-response
- See also: "timeout client", "timeout client-fin", "timeout http-request",
- "hard-stop-after"
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+ See also : "option splice-auto", "option splice-request", and global options
+ "nosplice" and "maxpipes"
-option log-health-checks
-no option log-health-checks
- Enable or disable logging of health checks status updates
- May be used in the following contexts: tcp, http, log
+option spop-check
+ Use SPOP health checks for server testing
+
+ May be used in the following contexts: tcp
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | no | yes | yes
Arguments : none
- By default, failed health check are logged if server is UP and successful
- health checks are logged if server is DOWN, so the amount of additional
- information is limited.
-
- When this option is enabled, any change of the health check status or to
- the server's health will be logged, so that it becomes possible to know
- that a server was failing occasional checks before crashing, or exactly when
- it failed to respond a valid HTTP status, then when the port started to
- reject connections, then when the server stopped responding at all.
+ It is possible to test that the server correctly talks SPOP protocol instead
+ of just testing that it accepts the TCP connection. When this option is set,
+ a HELLO handshake is performed between HAProxy and the server, and the
+ response is analyzed to check that no error is reported.
- Note that status changes not caused by health checks (e.g. enable/disable on
- the CLI) are intentionally not logged by this option.
+ Example :
+ option spop-check
- See also: "option httpchk", "option ldap-check", "option mysql-check",
- "option pgsql-check", "option redis-check", "option smtpchk",
- "option tcp-check", "log" and section 8 about logging.
+ See also : "option httpchk"
-option log-separate-errors
-no option log-separate-errors
- Change log level for non-completely successful connections
+option srvtcpka
+no option srvtcpka
+ Enable or disable the sending of TCP keepalive packets on the server side
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | no | yes | yes
Arguments : none
- Sometimes looking for errors in logs is not easy. This option makes HAProxy
- raise the level of logs containing potentially interesting information such
- as errors, timeouts, retries, redispatches, or HTTP status codes 5xx. The
- level changes from "info" to "err". This makes it possible to log them
- separately to a different file with most syslog daemons. Be careful not to
- remove them from the original file, otherwise you would lose ordering which
- provides very important information.
+ When there is a firewall or any session-aware component between a client and
+ a server, and when the protocol involves very long sessions with long idle
+ periods (e.g. remote desktops), there is a risk that one of the intermediate
+ components decides to expire a session which has remained idle for too long.
- Using this option, large sites dealing with several thousand connections per
- second may log normal traffic to a rotating buffer and only archive smaller
- error logs.
+ Enabling socket-level TCP keep-alives makes the system regularly send packets
+ to the other end of the connection, leaving it active. The delay between
+ keep-alive probes is controlled by the system only and depends both on the
+ operating system and its tuning parameters.
- See also : "log", "dontlognull", "dontlog-normal" and section 8 about
- logging.
+ It is important to understand that keep-alive packets are neither emitted nor
+ received at the application level. It is only the network stack that sees
+ them. For this reason, even if one side of the proxy already uses keep-alives
+ to maintain its connection alive, those keep-alive packets will not be
+ forwarded to the other side of the proxy.
+ Please note that this has nothing to do with HTTP keep-alive.
-option logasap
-no option logasap
- Enable or disable early logging.
+ Using option "srvtcpka" enables the emission of TCP keep-alive probes on the
+ server side of a connection, which should help when session expirations are
+ noticed between HAProxy and a server.
+
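+ For example, to keep long idle sessions alive between HAProxy and remote
+ desktop servers through a stateful firewall (the address is purely
+ illustrative) :
+
+ Example :
+     backend rdp_servers
+         mode tcp
+         option srvtcpka
+         server ts1 192.168.1.10:3389 check
+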
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ See also : "option clitcpka", "option tcpka"
+
+
+option ssl-hello-chk
+ Use SSLv3 client hello health checks for server testing
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ yes | no | yes | yes
Arguments : none
- By default, logs are emitted when all the log format aliases and sample
- fetches used in the definition of the log-format string return a value, or
- when the stream is terminated. This allows the built in log-format strings
- to account for the transfer time, or the number of bytes in log messages.
-
- When handling long lived connections such as large file transfers or RDP,
- it may take a while for the request or connection to appear in the logs.
- Using "option logasap", the log message is created as soon as the server
- connection is established in mode tcp, or as soon as the server sends the
- complete headers in mode http. Missing information in the logs will be the
- total number of bytes which will only indicate the amount of data transferred
- before the message was created and the total time which will not take the
- remainder of the connection life or transfer time into account. For the case
- of HTTP, it is good practice to capture the Content-Length response header
- so that the logs at least indicate how many bytes are expected to be
- transferred.
+ When some SSL-based protocols are relayed in TCP mode through HAProxy, it is
+ possible to test that the server correctly talks SSL instead of just testing
+ that it accepts the TCP connection. When "option ssl-hello-chk" is set, pure
+ SSLv3 client hello messages are sent once the connection is established to
+ the server, and the response is analyzed to find an SSL server hello message.
+ The server is considered valid only when the response contains this server
+ hello message.
- Examples :
- listen http_proxy 0.0.0.0:80
- mode http
- option httplog
- option logasap
- log 192.168.2.200 local3
+ All servers tested so far correctly reply to SSLv3 client hello messages,
+ and most servers tested do not even log the requests containing only hello
+ messages, which is appreciable.
- >>> Feb 6 12:14:14 localhost \
- haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \
- static/srv1 9/10/7/14/+30 200 +243 - - ---- 3/1/1/1/0 1/0 \
- "GET /image.iso HTTP/1.0"
+ Note that this check works even when SSL support was not built into HAProxy
+ because it forges the SSL message. When SSL support is available, it is best
+ to use native SSL health checks instead of this one.
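+
+ For instance, to check an HTTPS server relayed in pure TCP mode (the
+ address is purely illustrative) :
+
+ Example :
+     backend ssl_relay
+         mode tcp
+         option ssl-hello-chk
+         server srv1 192.168.1.1:443 check
+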
- See also : "option httplog", "capture response header", and section 8 about
- logging.
+ See also: "option httpchk", "check-ssl"
-option mysql-check [ user <username> [ { post-41 | pre-41 } ] ]
- Use MySQL health checks for server testing
+option tcp-check
+ Perform health checks using tcp-check send/expect sequences
- May be used in the following contexts: tcp
+ May be used in the following contexts: tcp, http, log
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- Arguments :
- <username> This is the username which will be used when connecting to MySQL
- server.
- post-41 Send post v4.1 client compatible checks (the default)
- pre-41 Send pre v4.1 client compatible checks
+ This health check method is intended to be combined with "tcp-check" command
+ lists in order to support send/expect types of health check sequences.
- If you specify a username, the check consists of sending two MySQL packet,
- one Client Authentication packet, and one QUIT packet, to correctly close
- MySQL session. We then parse the MySQL Handshake Initialization packet and/or
- Error packet. It is a basic but useful test which does not produce error nor
- aborted connect on the server. However, it requires an unlocked authorised
- user without a password. To create a basic limited user in MySQL with
- optional resource limits:
+ TCP checks currently support four modes of operation :
+ - no "tcp-check" directive : the health check only consists in a connection
+ attempt, which remains the default mode.
- CREATE USER '<username>'@'<ip_of_haproxy|network_of_haproxy/netmask>'
- /*!50701 WITH MAX_QUERIES_PER_HOUR 1 MAX_UPDATES_PER_HOUR 0 */
- /*M!100201 MAX_STATEMENT_TIME 0.0001 */;
-
- If you don't specify a username (it is deprecated and not recommended), the
- check only consists in parsing the Mysql Handshake Initialization packet or
- Error packet, we don't send anything in this mode. It was reported that it
- can generate lockout if check is too frequent and/or if there is not enough
- traffic. In fact, you need in this case to check MySQL "max_connect_errors"
- value as if a connection is established successfully within fewer than MySQL
- "max_connect_errors" attempts after a previous connection was interrupted,
- the error count for the host is cleared to zero. If HAProxy's server get
- blocked, the "FLUSH HOSTS" statement is the only way to unblock it.
-
- Remember that this does not check database presence nor database consistency.
- To do this, you can use an external check with xinetd for example.
-
- The check requires MySQL >=3.22, for older version, please use TCP check.
+ - "tcp-check send" or "tcp-check send-binary" only is mentioned : this is
+ used to send a string along with a connection opening. With some
+ protocols, sending a "QUIT" message, for example, prevents the
+ server from logging a connection error for each health check. The
+ check result will still be based on the ability to open the connection
+ only.
- Most often, an incoming MySQL server needs to see the client's IP address for
- various purposes, including IP privilege matching and connection logging.
- When possible, it is often wise to masquerade the client's IP address when
- connecting to the server using the "usesrc" argument of the "source" keyword,
- which requires the transparent proxy feature to be compiled in, and the MySQL
- server to route the client via the machine hosting HAProxy.
+ - "tcp-check expect" only is mentioned : this is used to test a banner.
+ The connection is opened and HAProxy waits for the server to present some
+ contents which must validate some rules. The check result will be based
+ on the matching between the contents and the rules. This is suited for
+ POP, IMAP, SMTP, FTP, SSH, TELNET.
- See also: "option httpchk"
+ - both "tcp-check send" and "tcp-check expect" are mentioned : this is
+ used to test a hello-type protocol. HAProxy sends a message, the server
+ responds and its response is analyzed. The check result will be based on
+ the matching between the response contents and the rules. This is often
+ suited for protocols which require a binding or a request/response model.
+ LDAP, MySQL, Redis and SSL are examples of such protocols, though they
+ already all have their dedicated checks with a deeper understanding of
+ the respective protocols.
+ In this mode, many questions may be sent and many answers may be
+ analyzed.
+ A fifth mode can be used to insert comments in different steps of the script.
-option nolinger
-no option nolinger
- Enable or disable immediate session resource cleaning after close
+ For each tcp-check rule you create, you can add a "comment" directive,
+ followed by a string. This string will be reported in the log and stderr in
+ debug mode. It is useful for user-friendly error reporting. The
+ "comment" is of course optional.
- May be used in the following contexts: tcp, http, log
+ During the execution of a health check, a variable scope is made available to
+ store data samples, using the "tcp-check set-var" operation. Freeing those
+ variables is possible using "tcp-check unset-var".
- May be used in sections: defaults | frontend | listen | backend
- yes | yes | yes | yes
- Arguments : none
+ Examples :
+ # perform a POP check (analyze only server's banner)
+ option tcp-check
+ tcp-check expect string +OK\ POP3\ ready comment POP\ protocol
- When clients or servers abort connections in a dirty way (e.g. they are
- physically disconnected), the session timeouts triggers and the session is
- closed. But it will remain in FIN_WAIT1 state for some time in the system,
- using some resources and possibly limiting the ability to establish newer
- connections.
+ # perform an IMAP check (analyze only server's banner)
+ option tcp-check
+ tcp-check expect string *\ OK\ IMAP4\ ready comment IMAP\ protocol
- When this happens, it is possible to activate "option nolinger" which forces
- the system to immediately remove any socket's pending data on close. Thus,
- a TCP RST is emitted, any pending data are truncated, and the session is
- instantly purged from the system's tables. The generally visible effect for
- a client is that responses are truncated if the close happens with a last
- block of data (e.g. on a redirect or error response). On the server side,
- it may help release the source ports immediately when forwarding a client
- aborts in tunnels. In both cases, TCP resets are emitted and given that
- the session is instantly destroyed, there will be no retransmit. On a lossy
- network this can increase problems, especially when there is a firewall on
- the lossy side, because the firewall might see and process the reset (hence
- purge its session) and block any further traffic for this session, including
- retransmits from the other side. So if the other side doesn't receive it,
- it will never receive any RST again, and the firewall might log many blocked
- packets.
+ # look for the redis master server after ensuring it speaks well
+ # redis protocol, then it exits properly.
+ # (send a command then analyze the response 3 times)
+ option tcp-check
+ tcp-check comment PING\ phase
+ tcp-check send PING\r\n
+ tcp-check expect string +PONG
+ tcp-check comment role\ check
+ tcp-check send info\ replication\r\n
+ tcp-check expect string role:master
+ tcp-check comment QUIT\ phase
+ tcp-check send QUIT\r\n
+ tcp-check expect string +OK
- For all these reasons, it is strongly recommended NOT to use this option,
- unless absolutely needed as a last resort. In most situations, using the
- "client-fin" or "server-fin" timeouts achieves similar results with a more
- reliable behavior. On Linux it's also possible to use the "tcp-ut" bind or
- server setting.
+ # forge an HTTP request, then analyze the response
+ # (send many headers before analyzing)
+ option tcp-check
+ tcp-check comment forge\ and\ send\ HTTP\ request
+ tcp-check send HEAD\ /\ HTTP/1.1\r\n
+ tcp-check send Host:\ www.mydomain.com\r\n
+ tcp-check send User-Agent:\ HAProxy\ tcpcheck\r\n
+ tcp-check send \r\n
+ tcp-check expect rstring HTTP/1\..\ (2..|3..) comment check\ HTTP\ response
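+
+ # sketch of the variable scope described above : set a variable before
+ # the connect and release it afterwards (the port value is purely
+ # illustrative)
+ option tcp-check
+ tcp-check set-var(check.port) int(8888)
+ tcp-check connect port 8888
+ tcp-check unset-var(check.port)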
- This option may be used both on frontends and backends, depending on the side
- where it is required. Use it on the frontend for clients, and on the backend
- for servers. While this option is technically supported in "defaults"
- sections, it must really not be used there as it risks to accidentally
- propagate to sections that must no use it and to cause problems there.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ See also : "tcp-check connect", "tcp-check expect" and "tcp-check send".
- See also: "timeout client-fin", "timeout server-fin", "tcp-ut" bind or server
- keywords.
-option originalto [ except <network> ] [ header <name> ]
- Enable insertion of the X-Original-To header to requests sent to servers
+option tcp-smart-accept
+no option tcp-smart-accept
+ Enable or disable the saving of one ACK packet during the accept sequence
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
-
- Arguments :
- <network> is an optional argument used to disable this option for sources
- matching <network>
- <name> an optional argument to specify a different "X-Original-To"
- header name.
+ yes | yes | yes | no
- Since HAProxy can work in transparent mode, every request from a client can
- be redirected to the proxy and HAProxy itself can proxy every request to a
- complex SQUID environment and the destination host from SO_ORIGINAL_DST will
- be lost. This is annoying when you want access rules based on destination ip
- addresses. To solve this problem, a new HTTP header "X-Original-To" may be
- added by HAProxy to all requests sent to the server. This header contains a
- value representing the original destination IP address. Since this must be
- configured to always use the last occurrence of this header only. Note that
- only the last occurrence of the header must be used, since it is really
- possible that the client has already brought one.
+ Arguments : none
- The keyword "header" may be used to supply a different header name to replace
- the default "X-Original-To". This can be useful where you might already
- have a "X-Original-To" header from a different application, and you need
- preserve it. Also if your backend server doesn't use the "X-Original-To"
- header and requires different one.
+ When an HTTP connection request comes in, the system acknowledges it on
+ behalf of HAProxy, then the client immediately sends its request, and the
+ system acknowledges it too while it is notifying HAProxy about the new
+ connection. HAProxy then reads the request and responds. This means that we
+ have one TCP ACK sent by the system for nothing, because the request could
+ very well be acknowledged by HAProxy when it sends its response.
- Sometimes, a same HAProxy instance may be shared between a direct client
- access and a reverse-proxy access (for instance when an SSL reverse-proxy is
- used to decrypt HTTPS traffic). It is possible to disable the addition of the
- header for a known destination address or network by adding the "except"
- keyword followed by the network address. In this case, any destination IP
- matching the network will not cause an addition of this header. Most common
- uses are with private networks or 127.0.0.1. IPv4 and IPv6 are both
- supported.
+ For this reason, in HTTP mode, HAProxy automatically asks the system to avoid
+ sending this useless ACK on platforms which support it (currently at least
+ Linux). It must not cause any problem, because the system will send it anyway
+ after 40 ms if the response takes more time than expected to come.
- This option may be specified either in the frontend or in the backend. If at
- least one of them uses it, the header will be added. Note that the backend's
- setting of the header subargument takes precedence over the frontend's if
- both are defined.
+ During complex network debugging sessions, it may be desirable to disable
+ this optimization because delayed ACKs can make troubleshooting more complex
+ when trying to identify where packets are delayed. It is then possible to
+ fall back to normal behavior by specifying "no option tcp-smart-accept".
- Examples :
- # Original Destination address
- frontend www
- mode http
- option originalto except 127.0.0.1
+ It is also possible to force it for non-HTTP proxies by simply specifying
+ "option tcp-smart-accept". For instance, it can make sense with some services
+ such as SMTP where the server speaks first.
- # Those servers want the IP Address in X-Client-Dst
- backend www
- mode http
- option originalto header X-Client-Dst
+ It is recommended to avoid forcing this option in a defaults section. In case
+ of doubt, consider setting it back to automatic values by prepending the
+ "default" keyword before it, or disabling it using the "no" keyword.
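+
+  As an illustration, the sketch below forces the option on a pure TCP
+  service (addresses and names are only examples) :
+
+  Example :
+        listen smtp-relay
+            mode tcp
+            bind :25
+            option tcp-smart-accept
+            server mta1 192.168.0.20:25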
- See also : "option httpclose", "option http-server-close".
+ See also : "option tcp-smart-connect"
-option persist
-no option persist
- Enable or disable forced persistence on down servers
+option tcp-smart-connect
+no option tcp-smart-connect
+ Enable or disable the saving of one ACK packet during the connect sequence
May be used in the following contexts: tcp, http
- May be used in sections: defaults | frontend | listen | backend
+ May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments : none
- When an HTTP request reaches a backend with a cookie which references a dead
- server, by default it is redispatched to another server. It is possible to
- force the request to be sent to the dead server first using "option persist"
- if absolutely needed. A common use case is when servers are under extreme
- load and spend their time flapping. In this case, the users would still be
- directed to the server they opened the session on, in the hope they would be
- correctly served. It is recommended to use "option redispatch" in conjunction
- with this option so that in the event it would not be possible to connect to
- the server at all (server definitely dead), the client would finally be
- redirected to another valid server.
+ On certain systems (at least Linux), HAProxy can ask the kernel not to
+ immediately send an empty ACK upon a connection request, but to directly
+ send the buffer request instead. This saves one packet on the network and
+ thus boosts performance. It can also be useful for some servers, because they
+ immediately get the request along with the incoming connection.
+
+ This feature is enabled when "option tcp-smart-connect" is set in a backend.
+ It is not enabled by default because it makes network troubleshooting more
+ complex.
+
+ It only makes sense to enable it with protocols where the client speaks first
+ such as HTTP. In other situations, if there is no data to send in place of
+ the ACK, a normal ACK is sent.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
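+
+  As an illustration, enabling the option in a backend speaking HTTP
+  (addresses and names are only examples) :
+
+  Example :
+        backend static-files
+            mode http
+            option tcp-smart-connect
+            server srv1 192.168.0.30:80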
- See also : "option redispatch", "retries", "force-persist"
+ See also : "option tcp-smart-accept"
-option pgsql-check user <username>
- Use PostgreSQL health checks for server testing
+option tcpka
+ Enable or disable the sending of TCP keepalive packets on both sides
- May be used in the following contexts: tcp
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | yes | yes | yes
- Arguments :
- <username> This is the username which will be used when connecting to
- PostgreSQL server.
+ Arguments : none
- The check sends a PostgreSQL StartupMessage and waits for either
- Authentication request or ErrorResponse message. It is a basic but useful
- test which does not produce error nor aborted connect on the server.
- This check is identical with the "mysql-check".
+ When there is a firewall or any session-aware component between a client and
+ a server, and when the protocol involves very long sessions with long idle
+ periods (e.g. remote desktops), there is a risk that one of the intermediate
+ components decides to expire a session which has remained idle for too long.
- See also: "option httpchk"
+ Enabling socket-level TCP keep-alives makes the system regularly send packets
+ to the other end of the connection, leaving it active. The delay between
+ keep-alive probes is controlled by the system only and depends both on the
+ operating system and its tuning parameters.
+ It is important to understand that keep-alive packets are neither emitted nor
+  received at the application level. It is only the network stack which sees
+  them. For this reason, even if one side of the proxy already uses
+  keep-alives to keep its connection alive, those keep-alive packets will not be
+ forwarded to the other side of the proxy.
-option prefer-last-server
-no option prefer-last-server
- Allow multiple load balanced requests to remain on the same server
+ Please note that this has nothing to do with HTTP keep-alive.
+
+ Using option "tcpka" enables the emission of TCP keep-alive probes on both
+ the client and server sides of a connection. Note that this is meaningful
+ only in "defaults" or "listen" sections. If this option is used in a
+ frontend, only the client side will get keep-alives, and if this option is
+ used in a backend, only the server side will get keep-alives. For this
+ reason, it is strongly recommended to explicitly use "option clitcpka" and
+ "option srvtcpka" when the configuration is split between frontends and
+ backends.
+
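+  As an illustration, in a "listen" section where both sides are covered
+  (addresses and names are only examples) :
+
+  Example :
+        listen remote-desktop
+            mode tcp
+            bind :3389
+            option tcpka
+            server tse1 192.168.0.10:3389
+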
+ See also : "option clitcpka", "option srvtcpka"
+
+
+option tcplog [clf]
+ Enable advanced logging of TCP connections with stream state and timers
May be used in the following contexts: tcp, http
- May be used in sections: defaults | frontend | listen | backend
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+
+ Arguments :
+    clf         if the "clf" argument is added, then the output format will
+                be the CLF format instead of HAProxy's default TCP format.
+                You can use this when you need to feed HAProxy's logs through
+                a specific log analyzer which only supports the CLF format
+                and which is not extensible. Since this format expects HTTP
+                fields, some of the values are pre-set: the HTTP request will
+                show as "TCP" and the response code as "000".
+
+ By default, the log output format is very poor, as it only contains the
+ source and destination addresses, and the instance name. By specifying
+ "option tcplog", each log line turns into a much richer format including, but
+ not limited to, the connection timers, the stream status, the connections
+ numbers, the frontend, backend and server name, and of course the source
+ address and ports. This option is useful for pure TCP proxies in order to
+ find which of the client or server disconnects or times out. For normal HTTP
+ proxies, it's better to use "option httplog" which is even more complete.
+
+ "option tcplog" overrides any previous "log-format" directive.
+
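+  As an illustration (addresses and names are only examples) :
+
+  Example :
+        frontend tcp-in
+            mode tcp
+            bind :4000
+            log global
+            option tcplog
+            default_backend tcp-servers
+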
+ See also : "option httplog", and section 8 about logging.
+
+
+option transparent
+no option transparent
+ Enable client-side transparent proxying
+
+ May be used in the following contexts: tcp, http
+
+ May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments : none
- When the load balancing algorithm in use is not deterministic, and a previous
- request was sent to a server to which HAProxy still holds a connection, it is
- sometimes desirable that subsequent requests on a same session go to the same
- server as much as possible. Note that this is different from persistence, as
- we only indicate a preference which HAProxy tries to apply without any form
- of warranty. The real use is for keep-alive connections sent to servers. When
- this option is used, HAProxy will try to reuse the same connection that is
- attached to the server instead of rebalancing to another server, causing a
- close of the connection. This can make sense for static file servers. It does
- not make much sense to use this in combination with hashing algorithms. Note,
- HAProxy already automatically tries to stick to a server which sends a 401 or
- to a proxy which sends a 407 (authentication required), when the load
- balancing algorithm is not deterministic. This is mandatory for use with the
- broken NTLM authentication challenge, and significantly helps in
- troubleshooting some faulty applications. Option prefer-last-server might be
- desirable in these environments as well, to avoid redistributing the traffic
- after every other response.
+ This option was introduced in order to provide layer 7 persistence to layer 3
+ load balancers. The idea is to use the OS's ability to redirect an incoming
+ connection for a remote address to a local process (here HAProxy), and let
+ this process know what address was initially requested. When this option is
+ used, sessions without cookies will be forwarded to the original destination
+ IP address of the incoming request (which should match that of another
+ equipment), while requests with cookies will still be forwarded to the
+ appropriate server.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ Note that contrary to a common belief, this option does NOT make HAProxy
+ present the client's IP to the server when establishing the connection.
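+
+  As an illustration, assuming the operating system was configured to
+  redirect the traffic to the proxy (addresses and names are only
+  examples) :
+
+  Example :
+        listen www
+            mode http
+            bind :80 transparent
+            option transparent
+            cookie SERVERID insert indirect
+            server app1 192.168.0.11:80 cookie s1
+            server app2 192.168.0.12:80 cookie s2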
- See also: "option http-keep-alive"
+ See also: the "usesrc" argument of the "source" keyword, and the
+ "transparent" option of the "bind" keyword.
-option redispatch
-option redispatch <interval>
-no option redispatch
- Enable or disable session redistribution in case of connection failure
+external-check command <command>
+ Executable to run when performing an external-check
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: tcp, http, log
- May be used in sections: defaults | frontend | listen | backend
+ May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
- <interval> The optional integer value that controls how often redispatches
- occur when retrying connections. Positive value P indicates a
- redispatch is desired on every Pth retry, and negative value
- N indicate a redispatch is desired on the Nth retry prior to the
- last retry. For example, the default of -1 preserves the
- historical behavior of redispatching on the last retry, a
- positive value of 1 would indicate a redispatch on every retry,
- and a positive value of 3 would indicate a redispatch on every
- third retry. You can disable redispatches with a value of 0.
+ <command> is the external command to run
+  The arguments passed to the command are:
- In HTTP mode, if a server designated by a cookie is down, clients may
- definitely stick to it, for example when using "option persist" or
- "force-persist", because they cannot flush the cookie, so they will not
- be able to access the service anymore.
+ <proxy_address> <proxy_port> <server_address> <server_port>
- Specifying "option redispatch" will allow the proxy to break cookie or
- consistent hash based persistence and redistribute them to a working server.
+ The <proxy_address> and <proxy_port> are derived from the first listener
+ that is either IPv4, IPv6 or a UNIX socket. In the case of a UNIX socket
+  listener the <proxy_address> will be the path of the socket and the
+ <proxy_port> will be the string "NOT_USED". In a backend section, it's not
+ possible to determine a listener, and both <proxy_address> and <proxy_port>
+ will have the string value "NOT_USED".
- Active servers are selected from a subset of the list of available
- servers. Active servers that are not down or in maintenance (i.e., whose
- health is not checked or that have been checked as "up"), are selected in the
- following order:
+ Some values are also provided through environment variables.
- 1. Any active, non-backup server, if any, or,
+ Environment variables :
+ HAPROXY_PROXY_ADDR The first bind address if available (or empty if not
+ applicable, for example in a "backend" section).
- 2. If the "allbackups" option is not set, the first backup server in the
- list, or
+ HAPROXY_PROXY_ID The backend id.
- 3. If the "allbackups" option is set, any backup server.
+ HAPROXY_PROXY_NAME The backend name.
- When a retry occurs, HAProxy tries to select another server than the last
- one. The new server is selected from the current list of servers.
+ HAPROXY_PROXY_PORT The first bind port if available (or empty if not
+ applicable, for example in a "backend" section or
+ for a UNIX socket).
- Sometimes, if the list is updated between retries (e.g., if numerous retries
- occur and last longer than the time needed to check that a server is down,
- remove it from the list and fall back on the list of backup servers),
- connections may be redirected to a backup server, though.
+ HAPROXY_SERVER_ADDR The server address.
- It also allows to retry connections to another server in case of multiple
- connection failures. Of course, it requires having "retries" set to a nonzero
- value.
+ HAPROXY_SERVER_CURCONN The current number of connections on the server.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ HAPROXY_SERVER_ID The server id.
- See also : "option persist", "force-persist", "retries"
+ HAPROXY_SERVER_MAXCONN The server max connections.
+ HAPROXY_SERVER_NAME The server name.
-option redis-check
- Use redis health checks for server testing
+ HAPROXY_SERVER_PORT The server port if available (or empty for a UNIX
+ socket).
- May be used in the following contexts: tcp
+ HAPROXY_SERVER_SSL "0" when SSL is not used, "1" when it is used
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ HAPROXY_SERVER_PROTO The protocol used by this server, which can be one
+ of "cli" (the haproxy CLI), "syslog" (syslog TCP
+ server), "peers" (peers TCP server), "h1" (HTTP/1.x
+ server), "h2" (HTTP/2 server), or "tcp" (any other
+ TCP server).
- Arguments : none
+ PATH The PATH environment variable used when executing
+ the command may be set using "external-check path".
- It is possible to test that the server correctly talks REDIS protocol instead
- of just testing that it accepts the TCP connection. When this option is set,
- a PING redis command is sent to the server, and the response is analyzed to
- find the "+PONG" response message.
+  If the command executes and exits with a zero status, then the check is
+  considered to have passed; otherwise the check is considered to have
+  failed.
Example :
- option redis-check
+ external-check command /bin/true
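+
+      # a fuller sketch; the script path below is hypothetical, and the
+      # global "external-check" directive must also be enabled
+      backend app
+          option external-check
+          external-check path "/usr/bin:/bin"
+          external-check command /usr/local/bin/check_backend.sh
+          server app1 192.168.0.40:80 check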
- See also : "option httpchk", "option tcp-check", "tcp-check expect"
+ See also : "external-check", "option external-check", "external-check path"
-option smtpchk
-option smtpchk <hello> <domain>
- Use SMTP health checks for server testing
+external-check path <path>
+ The value of the PATH environment variable used when running an external-check
- May be used in the following contexts: tcp
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
- <hello> is an optional argument. It is the "hello" command to use. It can
- be either "HELO" (for SMTP) or "EHLO" (for ESMTP). All other
- values will be turned into the default command ("HELO").
-
- <domain> is the domain name to present to the server. It may only be
- specified (and is mandatory) if the hello command has been
- specified. By default, "localhost" is used.
-
- When "option smtpchk" is set, the health checks will consist in TCP
- connections followed by an SMTP command. By default, this command is
- "HELO localhost". The server's return code is analyzed and only return codes
- starting with a "2" will be considered as valid. All other responses,
- including a lack of response will constitute an error and will indicate a
- dead server.
-
- This test is meant to be used with SMTP servers or relays. Depending on the
- request, it is possible that some servers do not log each connection attempt,
- so you may want to experiment to improve the behavior. Using telnet on port
- 25 is often easier than adjusting the configuration.
+    <path>    is the path used when executing the external command to run
- Most often, an incoming SMTP server needs to see the client's IP address for
- various purposes, including spam filtering, anti-spoofing and logging. When
- possible, it is often wise to masquerade the client's IP address when
- connecting to the server using the "usesrc" argument of the "source" keyword,
- which requires the transparent proxy feature to be compiled in.
+ The default path is "".
Example :
- option smtpchk HELO mydomain.org
-
- See also : "option httpchk", "source"
+ external-check path "/usr/bin:/bin"
+ See also : "external-check", "option external-check",
+ "external-check command"
-option socket-stats
-no option socket-stats
- Enable or disable collecting & providing separate statistics for each socket.
+persist rdp-cookie
+persist rdp-cookie(<name>)
+ Enable RDP cookie-based persistence
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: tcp
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
-
- Arguments : none
+ yes | no | yes | yes
+ Arguments :
+ <name> is the optional name of the RDP cookie to check. If omitted, the
+ default cookie name "msts" will be used. There currently is no
+ valid reason to change this name.
-option splice-auto
-no option splice-auto
- Enable or disable automatic kernel acceleration on sockets in both directions
+ This statement enables persistence based on an RDP cookie. The RDP cookie
+ contains all information required to find the server in the list of known
+ servers. So when this option is set in the backend, the request is analyzed
+ and if an RDP cookie is found, it is decoded. If it matches a known server
+ which is still UP (or if "option persist" is set), then the connection is
+ forwarded to this server.
- May be used in the following contexts: tcp, http
+ Note that this only makes sense in a TCP backend, but for this to work, the
+ frontend must have waited long enough to ensure that an RDP cookie is present
+ in the request buffer. This is the same requirement as with the "rdp-cookie"
+ load-balancing method. Thus it is highly recommended to put all statements in
+ a single "listen" section.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
-
- Arguments : none
-
- When this option is enabled either on a frontend or on a backend, HAProxy
- will automatically evaluate the opportunity to use kernel tcp splicing to
- forward data between the client and the server, in either direction. HAProxy
- uses heuristics to estimate if kernel splicing might improve performance or
- not. Both directions are handled independently. Note that the heuristics used
- are not much aggressive in order to limit excessive use of splicing. This
- option requires splicing to be enabled at compile time, and may be globally
- disabled with the global option "nosplice". Since splice uses pipes, using it
- requires that there are enough spare pipes.
-
- Important note: kernel-based TCP splicing is a Linux-specific feature which
- first appeared in kernel 2.6.25. It offers kernel-based acceleration to
- transfer data between sockets without copying these data to user-space, thus
- providing noticeable performance gains and CPU cycles savings. Since many
- early implementations are buggy, corrupt data and/or are inefficient, this
- feature is not enabled by default, and it should be used with extreme care.
- While it is not possible to detect the correctness of an implementation,
- 2.6.29 is the first version offering a properly working implementation. In
- case of doubt, splicing may be globally disabled using the global "nosplice"
- keyword.
+ Also, it is important to understand that the terminal server will emit this
+ RDP cookie only if it is configured for "token redirection mode", which means
+ that the "IP address redirection" option is disabled.
Example :
- option splice-auto
-
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ listen tse-farm
+ bind :3389
+ # wait up to 5s for an RDP cookie in the request
+ tcp-request inspect-delay 5s
+ tcp-request content accept if RDP_COOKIE
+ # apply RDP cookie persistence
+ persist rdp-cookie
+ # if server is unknown, let's balance on the same cookie.
+ # alternatively, "balance leastconn" may be useful too.
+ balance rdp-cookie
+ server srv1 1.1.1.1:3389
+ server srv2 1.1.1.2:3389
- See also : "option splice-request", "option splice-response", and global
- options "nosplice" and "maxpipes"
+ See also : "balance rdp-cookie", "tcp-request" and the "req.rdp_cookie" ACL.
-option splice-request
-no option splice-request
- Enable or disable automatic kernel acceleration on sockets for requests
+quic-initial <action> [ { if | unless } <condition> ]
+  Perform an action on an incoming QUIC Initial packet. Contrary to
+  "tcp-request connection", this is executed before any connection element is
+  instantiated and before the SSL handshake starts or completes, which is
+  more efficient when the goal is to reject connection attempts.
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
-
- Arguments : none
-
- When this option is enabled either on a frontend or on a backend, HAProxy
- will use kernel tcp splicing whenever possible to forward data going from
- the client to the server. It might still use the recv/send scheme if there
- are no spare pipes left. This option requires splicing to be enabled at
- compile time, and may be globally disabled with the global option "nosplice".
- Since splice uses pipes, using it requires that there are enough spare pipes.
+ yes(!) | yes | yes | no
- Important note: see "option splice-auto" for usage limitations.
+ Arguments :
+ <action> defines the action to perform if the condition applies. See
+ below.
- Example :
- option splice-request
+    <condition> is a standard layer4-only ACL-based condition (see section 7).
+                However, QUIC initial rules are executed too early for some
+                layer4 sample fetch methods, without any configuration-time
+                warning, and may result in unspecified runtime behavior,
+                although they will not crash. For now, only internal samples
+                and the layer4 "src*" and "dst*" sample fetches should be
+                considered supported.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
- See also : "option splice-auto", "option splice-response", and global options
- "nosplice" and "maxpipes"
+ This action is executed early during QUIC packet parsing. As such, only a
+ minimal list of actions is supported :
+ - accept
+ - dgram-drop
+ - reject
+ - send-retry
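+
+  As an illustration, the sketch below drops known bad sources before any
+  handshake processing (certificate and ACL file paths are only examples) :
+
+  Example :
+        frontend quic-in
+            mode http
+            bind quic4@:443 ssl crt /etc/haproxy/site.pem alpn h3
+            quic-initial dgram-drop if { src -f /etc/haproxy/blocklist.lst }
+            quic-initial send-retry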
-option splice-response
-no option splice-response
- Enable or disable automatic kernel acceleration on sockets for responses
+rate-limit sessions <rate>
+ Set a limit on the number of new sessions accepted per second on a frontend
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ yes | yes | yes | no
- Arguments : none
+ Arguments :
+ <rate> The <rate> parameter is an integer designating the maximum number
+ of new sessions per second to accept on the frontend.
- When this option is enabled either on a frontend or on a backend, HAProxy
- will use kernel tcp splicing whenever possible to forward data going from
- the server to the client. It might still use the recv/send scheme if there
- are no spare pipes left. This option requires splicing to be enabled at
- compile time, and may be globally disabled with the global option "nosplice".
- Since splice uses pipes, using it requires that there are enough spare pipes.
+ When the frontend reaches the specified number of new sessions per second, it
+ stops accepting new connections until the rate drops below the limit again.
+ During this time, the pending sessions will be kept in the socket's backlog
+ (in system buffers) and HAProxy will not even be aware that sessions are
+  pending. When applying a very low limit on a highly loaded service, it may make
+ sense to increase the socket's backlog using the "backlog" keyword.
- Important note: see "option splice-auto" for usage limitations.
+ This feature is particularly efficient at blocking connection-based attacks
+ or service abuse on fragile servers. Since the session rate is measured every
+ millisecond, it is extremely accurate. Also, the limit applies immediately,
+ no delay is needed at all to detect the threshold.
- Example :
- option splice-response
+ Example : limit the connection rate on SMTP to 10 per second max
+ listen smtp
+ mode tcp
+ bind :25
+ rate-limit sessions 10
+ server smtp1 127.0.0.1:1025
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ Note : when the maximum rate is reached, the frontend's status is not changed
+ but its sockets appear as "WAITING" in the statistics if the
+ "socket-stats" option is enabled.
- See also : "option splice-auto", "option splice-request", and global options
- "nosplice" and "maxpipes"
+ See also : the "backlog" keyword and the "fe_sess_rate" ACL criterion.
-option spop-check
- Use SPOP health checks for server testing
+redirect location <loc> [code <code>] <option> [{if | unless} <condition>]
+redirect prefix <pfx> [code <code>] <option> [{if | unless} <condition>]
+redirect scheme <sch> [code <code>] <option> [{if | unless} <condition>]
+ Return an HTTP redirection if/unless a condition is matched
- May be used in the following contexts: tcp
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ no | yes | yes | yes
- Arguments : none
+ If/unless the condition is matched, the HTTP request will lead to a redirect
+ response. If no condition is specified, the redirect applies unconditionally.
- It is possible to test that the server correctly talks SPOP protocol instead
- of just testing that it accepts the TCP connection. When this option is set,
- a HELLO handshake is performed between HAProxy and the server, and the
- response is analyzed to check no error is reported.
+ Arguments :
+ <loc> With "redirect location", the exact value in <loc> is placed into
+ the HTTP "Location" header. When used in an "http-request" rule,
+ <loc> value follows the Custom log format rules and can include
+ some dynamic values (see Custom log format in section 8.2.6).
- Example :
- option spop-check
+ <pfx> With "redirect prefix", the "Location" header is built from the
+ concatenation of <pfx> and the complete URI path, including the
+ query string, unless the "drop-query" option is specified (see
+ below). As a special case, if <pfx> equals exactly "/", then
+ nothing is inserted before the original URI. It allows one to
+ redirect to the same URL (for instance, to insert a cookie). When
+ used in an "http-request" rule, <pfx> value follows the Custom
+ Log Format rules and can include some dynamic values (see Custom
+ Log Format in section 8.2.6).
- See also : "option httpchk"
+ <sch> With "redirect scheme", then the "Location" header is built by
+ concatenating <sch> with "://" then the first occurrence of the
+ "Host" header, and then the URI path, including the query string
+ unless the "drop-query" option is specified (see below). If no
+ path is found or if the path is "*", then "/" is used instead. If
+ no "Host" header is found, then an empty host component will be
+ returned, which most recent browsers interpret as redirecting to
+ the same host. This directive is mostly used to redirect HTTP to
+ HTTPS. When used in an "http-request" rule, <sch> value follows
+ the Custom log format rules and can include some dynamic values
+ (see Custom log format in section 8.2.6).
+ <code> The code is optional. It indicates which type of HTTP redirection
+ is desired. Only codes 301, 302, 303, 307 and 308 are supported,
+ with 302 used by default if no code is specified. 301 means
+ "Moved permanently", and a browser may cache the Location. 302
+ means "Moved temporarily" and means that the browser should not
+ cache the redirection. 303 is equivalent to 302 except that the
+ browser will fetch the location with a GET method. 307 is just
+ like 302 but makes it clear that the same method must be reused.
+ Likewise, 308 replaces 301 if the same method must be used.
-option srvtcpka
-no option srvtcpka
- Enable or disable the sending of TCP keepalive packets on the server side
+ <option> There are several options which can be specified to adjust the
+ expected behavior of a redirection :
- May be used in the following contexts: tcp, http, log
+ - "drop-query"
+ When this keyword is used in a prefix-based redirection, then the
+ location will be set without any possible query-string, which is useful
+ for directing users to a non-secure page for instance. It has no effect
+ with a location-type redirect.
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ - "append-slash"
+ This keyword may be used in conjunction with "drop-query" to redirect
+ users who use a URL not ending with a '/' to the same one with the '/'.
+ It can be useful to ensure that search engines will only see one URL.
+ For this, a return code 301 is preferred.
- Arguments : none
+ - "ignore-empty"
+ This keyword only has effect when a location is produced using a log
+ format expression (i.e. when used in http-request or http-response).
+ It indicates that if the result of the expression is empty, the rule
+ should silently be skipped. The main use is to allow mass-redirects
+ of known paths using a simple map.
- When there is a firewall or any session-aware component between a client and
- a server, and when the protocol involves very long sessions with long idle
- periods (e.g. remote desktops), there is a risk that one of the intermediate
- components decides to expire a session which has remained idle for too long.
+ - "set-cookie NAME[=value]"
+ A "Set-Cookie" header will be added with NAME (and optionally "=value")
+ to the response. This is sometimes used to indicate that a user has
+ been seen, for instance to protect against some types of DoS. No other
+ cookie option is added, so the cookie will be a session cookie. Note
+ that for a browser, a sole cookie name without an equal sign is
+ different from a cookie with an equal sign.
- Enabling socket-level TCP keep-alives makes the system regularly send packets
- to the other end of the connection, leaving it active. The delay between
- keep-alive probes is controlled by the system only and depends both on the
- operating system and its tuning parameters.
+ - "set-cookie-fmt <fmt>"
+      It is equivalent to the option above, except the "Set-Cookie" header
+      will be filled with the result of the log-format string <fmt>
+      evaluation. Be careful to respect the "NAME[=value]" format because no
+      special checks are performed during the configuration parsing.
- It is important to understand that keep-alive packets are neither emitted nor
- received at the application level. It is only the network stacks which sees
- them. For this reason, even if one side of the proxy already uses keep-alives
- to maintain its connection alive, those keep-alive packets will not be
- forwarded to the other side of the proxy.
+ - "clear-cookie NAME[=]"
+ A "Set-Cookie" header will be added with NAME (and optionally "="), but
+ with the "Max-Age" attribute set to zero. This will tell the browser to
+ delete this cookie. It is useful for instance on logout pages. It is
+ important to note that clearing the cookie "NAME" will not remove a
+ cookie set with "NAME=value". You have to clear the cookie "NAME=" for
+ that, because the browser makes the difference.
- Please note that this has nothing to do with HTTP keep-alive.
+ - "keep-query"
+ When this keyword is used in a location-based redirection, then the
+ query-string of the original URI, if any, will be appended to the
+ location. If no query-string is found, nothing is added. If the
+ location already contains a query-string, the original one will be
+ appended with the '&' delimiter.
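+
+      For illustration, the "keep-query" and "set-cookie-fmt" options above
+      could be used as follows (paths and cookie names are hypothetical):
+
+         # keep the original query-string on a location redirect
+         redirect location /new-search keep-query if { path /old-search }
+
+         # fill the cookie value from a log-format expression
+         redirect location /welcome set-cookie-fmt SRC=%[src]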
- Using option "srvtcpka" enables the emission of TCP keep-alive probes on the
- server side of a connection, which should help when session expirations are
- noticed between HAProxy and a server.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
-
- See also : "option clitcpka", "option tcpka"
-
-
-option ssl-hello-chk
- Use SSLv3 client hello health checks for server testing
-
- May be used in the following contexts: tcp, http
-
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments : none
+ Example: move the login URL only to HTTPS.
+ acl clear dst_port 80
+ acl secure dst_port 8080
+ acl login_page url_beg /login
+ acl logout url_beg /logout
+ acl uid_given url_reg /login?userid=[^&]+
+ acl cookie_set hdr_sub(cookie) SEEN=1
- When some SSL-based protocols are relayed in TCP mode through HAProxy, it is
- possible to test that the server correctly talks SSL instead of just testing
- that it accepts the TCP connection. When "option ssl-hello-chk" is set, pure
- SSLv3 client hello messages are sent once the connection is established to
- the server, and the response is analyzed to find an SSL server hello message.
- The server is considered valid only when the response contains this server
- hello message.
+ redirect prefix https://mysite.com set-cookie SEEN=1 if !cookie_set
+ redirect prefix https://mysite.com if login_page !secure
+ redirect prefix http://mysite.com drop-query if login_page !uid_given
+ redirect location http://mysite.com/ if !login_page secure
+ redirect location / clear-cookie USERID= if logout
- All servers tested till there correctly reply to SSLv3 client hello messages,
- and most servers tested do not even log the requests containing only hello
- messages, which is appreciable.
+ Example: send redirects for request for articles without a '/'.
+ acl missing_slash path_reg ^/article/[^/]*$
+ redirect code 301 prefix / drop-query append-slash if missing_slash
- Note that this check works even when SSL support was not built into HAProxy
- because it forges the SSL message. When SSL support is available, it is best
- to use native SSL health checks instead of this one.
+ Example: redirect all HTTP traffic to HTTPS when SSL is handled by HAProxy.
+ redirect scheme https if !{ ssl_fc }
- See also: "option httpchk", "check-ssl"
+ Example: append 'www.' prefix in front of all hosts not having it
+ http-request redirect code 301 location \
+ http://www.%[hdr(host)]%[capture.req.uri] \
+ unless { hdr_beg(host) -i www }
+ Example: permanently redirect only old URLs to new ones
+ http-request redirect code 301 location \
+ %[path,map_str(old-blog-articles.map)] ignore-empty
-option tcp-check
- Perform health checks using tcp-check send/expect sequences
+ See section 7 about ACL usage.
- May be used in the following contexts: tcp, http, log
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+retries <value>
+ Set the number of retries to perform on a server after a failure
- This health check method is intended to be combined with "tcp-check" command
- lists in order to support send/expect types of health check sequences.
+ May be used in the following contexts: tcp, http
- TCP checks currently support 4 modes of operations :
- - no "tcp-check" directive : the health check only consists in a connection
- attempt, which remains the default mode.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- - "tcp-check send" or "tcp-check send-binary" only is mentioned : this is
- used to send a string along with a connection opening. With some
- protocols, it helps sending a "QUIT" message for example that prevents
- the server from logging a connection error for each health check. The
- check result will still be based on the ability to open the connection
- only.
+ Arguments :
+ <value> is the number of times a request or connection attempt should be
+ retried on a server after a failure.
- - "tcp-check expect" only is mentioned : this is used to test a banner.
- The connection is opened and HAProxy waits for the server to present some
- contents which must validate some rules. The check result will be based
- on the matching between the contents and the rules. This is suited for
- POP, IMAP, SMTP, FTP, SSH, TELNET.
+  By default, retries apply only to new connection attempts. However, when
+  the "retry-on" directive is used, other conditions might trigger a retry
+  (e.g. empty response, undesired status code), each of them counting as one
+  attempt. When the total number of attempts reaches the value set here, an
+  error will be returned.
- - both "tcp-check send" and "tcp-check expect" are mentioned : this is
- used to test a hello-type protocol. HAProxy sends a message, the server
- responds and its response is analyzed. the check result will be based on
- the matching between the response contents and the rules. This is often
- suited for protocols which require a binding or a request/response model.
- LDAP, MySQL, Redis and SSL are example of such protocols, though they
- already all have their dedicated checks with a deeper understanding of
- the respective protocols.
- In this mode, many questions may be sent and many answers may be
- analyzed.
+ In order to avoid immediate reconnections to a server which is restarting,
+ a turn-around timer of min("timeout connect", one second) is applied before
+ a retry occurs on the same server.
- A fifth mode can be used to insert comments in different steps of the script.
+ When "option redispatch" is set, some retries may be performed on another
+ server even if a cookie references a different server. By default this will
+ only be the last retry unless an argument is passed to "option redispatch".
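+
+  Example: a minimal sketch combining "retries" with "option redispatch"
+    (addresses and values are illustrative):
+      backend app
+         retries 2
+         option redispatch
+         server s1 10.0.0.1:80 check
+         server s2 10.0.0.2:80 check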
- For each tcp-check rule you create, you can add a "comment" directive,
- followed by a string. This string will be reported in the log and stderr in
- debug mode. It is useful to make user-friendly error reporting. The
- "comment" is of course optional.
+ See also : "option redispatch"
- During the execution of a health check, a variable scope is made available to
- store data samples, using the "tcp-check set-var" operation. Freeing those
- variable is possible using "tcp-check unset-var".
+retry-on [space-delimited list of keywords]
+ Specify when to attempt to automatically retry a failed request.
+ This setting is only valid when "mode" is set to http and is silently ignored
+ otherwise.
- Examples :
- # perform a POP check (analyze only server's banner)
- option tcp-check
- tcp-check expect string +OK\ POP3\ ready comment POP\ protocol
+ May be used in the following contexts: tcp, http
- # perform an IMAP check (analyze only server's banner)
- option tcp-check
- tcp-check expect string *\ OK\ IMAP4\ ready comment IMAP\ protocol
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- # look for the redis master server after ensuring it speaks well
- # redis protocol, then it exits properly.
- # (send a command then analyze the response 3 times)
- option tcp-check
- tcp-check comment PING\ phase
- tcp-check send PING\r\n
- tcp-check expect string +PONG
- tcp-check comment role\ check
- tcp-check send info\ replication\r\n
- tcp-check expect string role:master
- tcp-check comment QUIT\ phase
- tcp-check send QUIT\r\n
- tcp-check expect string +OK
+ Arguments :
+ <keywords> is a space-delimited list of keywords or HTTP status codes, each
+ representing a type of failure event on which an attempt to
+ retry the request is desired. Please read the notes at the
+ bottom before changing this setting. The following keywords are
+ supported :
- forge a HTTP request, then analyze the response
- (send many headers before analyzing)
- option tcp-check
- tcp-check comment forge\ and\ send\ HTTP\ request
- tcp-check send HEAD\ /\ HTTP/1.1\r\n
- tcp-check send Host:\ www.mydomain.com\r\n
- tcp-check send User-Agent:\ HAProxy\ tcpcheck\r\n
- tcp-check send \r\n
- tcp-check expect rstring HTTP/1\..\ (2..|3..) comment check\ HTTP\ response
+ none never retry
+ conn-failure retry when the connection or the SSL handshake failed
+ and the request could not be sent. This is the default.
- See also : "tcp-check connect", "tcp-check expect" and "tcp-check send".
+ empty-response retry when the server connection was closed after part
+ of the request was sent, and nothing was received from
+ the server. This type of failure may be caused by the
+ request timeout on the server side, poor network
+ condition, or a server crash or restart while
+ processing the request.
+ junk-response retry when the server returned something not looking
+                   like a complete HTTP response. This includes partial
+                   response headers as well as non-HTTP contents. It
+                   usually is a bad idea to retry on such events, which
+                   may be caused by a configuration issue (wrong server port)
+ or by the request being harmful to the server (buffer
+ overflow attack for example).
-option tcp-smart-accept
-no option tcp-smart-accept
- Enable or disable the saving of one ACK packet during the accept sequence
+  response-timeout the server timeout struck while waiting for the server
+ to respond to the request. This may be caused by poor
+ network condition, the reuse of an idle connection
+ which has expired on the path, or by the request being
+ extremely expensive to process. It generally is a bad
+ idea to retry on such events on servers dealing with
+ heavy database processing (full scans, etc) as it may
+ amplify denial of service attacks.
- May be used in the following contexts: tcp, http
+ 0rtt-rejected retry requests which were sent over early data and were
+ rejected by the server. These requests are generally
+ considered to be safe to retry.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
+ <status> any HTTP status code among "401" (Unauthorized), "403"
+ (Forbidden), "404" (Not Found), "408" (Request Timeout),
+ "421" (Misdirected Request), "425" (Too Early),
+ "429" (Too Many Requests), "500" (Server Error),
+ "501" (Not Implemented), "502" (Bad Gateway),
+ "503" (Service Unavailable), "504" (Gateway Timeout).
- Arguments : none
+ all-retryable-errors
+                   retry the request for any error that is considered
+ retryable. This currently activates "conn-failure",
+ "empty-response", "junk-response", "response-timeout",
+ "0rtt-rejected", "500", "502", "503", and "504".
- When an HTTP connection request comes in, the system acknowledges it on
- behalf of HAProxy, then the client immediately sends its request, and the
- system acknowledges it too while it is notifying HAProxy about the new
- connection. HAProxy then reads the request and responds. This means that we
- have one TCP ACK sent by the system for nothing, because the request could
- very well be acknowledged by HAProxy when it sends its response.
+ Using this directive replaces any previous settings with the new ones; it is
+ not cumulative.
- For this reason, in HTTP mode, HAProxy automatically asks the system to avoid
- sending this useless ACK on platforms which support it (currently at least
- Linux). It must not cause any problem, because the system will send it anyway
- after 40 ms if the response takes more time than expected to come.
+  Please note that using anything other than "none" and "conn-failure"
+  requires allocating a buffer and copying the whole request into it, so it
+  has memory and performance impacts. Requests not fitting in a single
+  buffer will never be retried (see the global tune.bufsize setting).
- During complex network debugging sessions, it may be desirable to disable
- this optimization because delayed ACKs can make troubleshooting more complex
- when trying to identify where packets are delayed. It is then possible to
- fall back to normal behavior by specifying "no option tcp-smart-accept".
+  You have to make sure the application has a replay protection mechanism
+  built in, such as unique transaction IDs passed in requests, or that
+  replaying the same request has no consequence; otherwise it is very
+  dangerous to use any retry-on value besides "conn-failure" and "none".
+  Static file servers and caches are
+ generally considered safe against any type of retry. Using a status code can
+ be useful to quickly leave a server showing an abnormal behavior (out of
+ memory, file system issues, etc), but in this case it may be a good idea to
+ immediately redispatch the connection to another server (please see "option
+ redispatch" for this). Last, it is important to understand that most causes
+ of failures are the requests themselves and that retrying a request causing a
+ server to misbehave will often make the situation even worse for this server,
+ or for the whole service in case of redispatch.
- It is also possible to force it for non-HTTP proxies by simply specifying
- "option tcp-smart-accept". For instance, it can make sense with some services
- such as SMTP where the server speaks first.
+ Unless you know exactly how the application deals with replayed requests, you
+ should not use this directive.
- It is recommended to avoid forcing this option in a defaults section. In case
- of doubt, consider setting it back to automatic values by prepending the
- "default" keyword before it, or disabling it using the "no" keyword.
+ The default is "conn-failure".
- See also : "option tcp-smart-connect"
+ Example:
+ retry-on 503 504
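+
+  Example: a conservative sketch only adding the events described above as
+    generally safe to retry:
+      retry-on conn-failure 0rtt-rejected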
+ See also: "retries", "option redispatch", "tune.bufsize"
-option tcp-smart-connect
-no option tcp-smart-connect
- Enable or disable the saving of one ACK packet during the connect sequence
+server <name> <address>[:[port]] [param*]
+ Declare a server in a backend
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments : none
+ no | no | yes | yes
- On certain systems (at least Linux), HAProxy can ask the kernel not to
- immediately send an empty ACK upon a connection request, but to directly
- send the buffer request instead. This saves one packet on the network and
- thus boosts performance. It can also be useful for some servers, because they
- immediately get the request along with the incoming connection.
-
- This feature is enabled when "option tcp-smart-connect" is set in a backend.
- It is not enabled by default because it makes network troubleshooting more
- complex.
-
- It only makes sense to enable it with protocols where the client speaks first
- such as HTTP. In other situations, if there is no data to send in place of
- the ACK, a normal ACK is sent.
+ Arguments :
+ <name> is the internal name assigned to this server. This name will
+ appear in logs and alerts. If "http-send-name-header" is
+ set, it will be added to the request header sent to the server.
- If this option has been enabled in a "defaults" section, it can be disabled
- in a specific instance by prepending the "no" keyword before it.
+ <address> is the IPv4 or IPv6 address of the server. Alternatively, a
+ resolvable hostname is supported, but this name will be resolved
+ during start-up. Address "0.0.0.0" or "*" has a special meaning.
+ It indicates that the connection will be forwarded to the same IP
+ address as the one from the client connection. This is useful in
+ transparent proxy architectures where the client's connection is
+ intercepted and HAProxy must forward to the original destination
+ address. This is more or less what the "transparent" keyword does
+ except that with a server it's possible to limit concurrency and
+ to report statistics. Optionally, an address family prefix may be
+ used before the address to force the family regardless of the
+ address format, which can be useful to specify a path to a unix
+ socket with no slash ('/'). Currently supported prefixes are :
+ - 'ipv4@' -> address is always IPv4
+ - 'ipv6@' -> address is always IPv6
+ - 'unix@' -> address is a path to a local unix socket
+ - 'abns@' -> address is in abstract namespace (Linux only)
+ - 'abnsz@' -> address is in abstract namespace (Linux only)
+ but it is explicitly zero-terminated. This means no \0
+ padding is used to complete sun_path. It is useful to
+ interconnect with programs that don't implement the
+ default abns naming logic that haproxy uses.
+ - 'sockpair@' -> address is the FD of a connected unix
+ socket or of a socketpair. During a connection, the
+ backend creates a pair of connected sockets, and passes
+ one of them over the FD. The bind part will use the
+ received socket as the client FD. Should be used
+ carefully.
+ - 'rhttp@' [ EXPERIMENTAL ] -> custom address family for a
+ passive server in HTTP reverse context. This is an
+                 experimental feature which requires
+ "expose-experimental-directives" on a line before this
+ server.
+ You may want to reference some environment variables in the
+ address parameter, see section 2.3 about environment
+ variables. The "init-addr" setting can be used to modify the way
+ IP addresses should be resolved upon startup.
- See also : "option tcp-smart-accept"
+ <port> is an optional port specification. If set, all connections will
+ be sent to this port. If unset, the same port the client
+ connected to will be used. The port may also be prefixed by a "+"
+ or a "-". In this case, the server's port will be determined by
+ adding this value to the client's port.
+ <param*> is a list of parameters for this server. The "server" keywords
+ accepts an important number of options and has a complete section
+ dedicated to it. Please refer to section 5 for more details.
-option tcpka
- Enable or disable the sending of TCP keepalive packets on both sides
+ Examples :
+ server first 10.1.1.1:1080 cookie first check inter 1000
+ server second 10.1.1.2:1080 cookie second check inter 1000
+ server transp ipv4@
+ server backup "${SRV_BACKUP}:1080" backup
+ server www1_dc1 "${LAN_DC1}.101:80"
+ server www1_dc2 "${LAN_DC2}.101:80"
- May be used in the following contexts: tcp, http, log
+  Note: regarding Linux's abstract namespace sockets, HAProxy's "abns"
+        sockets use the whole sun_path length for the address length. Some
+        other programs such as socat use the string length only by default.
+ Pass the option ",unix-tightsocklen=0" to any abstract socket
+ definition in socat to make it compatible with HAProxy's, or use the
+ "abnsz" HAProxy socket family instead.
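+
+  Example: unix and zero-terminated abstract namespace servers (paths and
+    names are illustrative):
+      server local_app unix@/var/run/app.sock check
+      server abns_app abnsz@myapp check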
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ See also: "default-server", "http-send-name-header" and section 5 about
+ server options
- Arguments : none
+server-state-file-name [ { use-backend-name | <file> } ]
+ Set the server state file to read, load and apply to servers available in
+ this backend.
- When there is a firewall or any session-aware component between a client and
- a server, and when the protocol involves very long sessions with long idle
- periods (e.g. remote desktops), there is a risk that one of the intermediate
- components decides to expire a session which has remained idle for too long.
+ May be used in the following contexts: tcp, http, log
- Enabling socket-level TCP keep-alives makes the system regularly send packets
- to the other end of the connection, leaving it active. The delay between
- keep-alive probes is controlled by the system only and depends both on the
- operating system and its tuning parameters.
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
- It is important to understand that keep-alive packets are neither emitted nor
- received at the application level. It is only the network stacks which sees
- them. For this reason, even if one side of the proxy already uses keep-alives
- to maintain its connection alive, those keep-alive packets will not be
- forwarded to the other side of the proxy.
+ It only applies when the directive "load-server-state-from-file" is set to
+ "local". When <file> is not provided, if "use-backend-name" is used or if
+  this directive is not set, then the backend name is used. If <file> starts
+  with a
+ slash '/', then it is considered as an absolute path. Otherwise, <file> is
+ concatenated to the global directive "server-state-base".
- Please note that this has nothing to do with HTTP keep-alive.
+ Example: the minimal configuration below would make HAProxy look for the
+ state server file '/etc/haproxy/states/bk':
- Using option "tcpka" enables the emission of TCP keep-alive probes on both
- the client and server sides of a connection. Note that this is meaningful
- only in "defaults" or "listen" sections. If this option is used in a
- frontend, only the client side will get keep-alives, and if this option is
- used in a backend, only the server side will get keep-alives. For this
- reason, it is strongly recommended to explicitly use "option clitcpka" and
- "option srvtcpka" when the configuration is split between frontends and
- backends.
+ global
+     server-state-base /etc/haproxy/states
- See also : "option clitcpka", "option srvtcpka"
+ backend bk
+ load-server-state-from-file
+ See also: "server-state-base", "load-server-state-from-file", and
+ "show servers state"
-option tcplog [clf]
- Enable advanced logging of TCP connections with stream state and timers
+server-template <prefix> <num | range> <fqdn>[:<port>] [params*]
+ Set a template to initialize servers with shared parameters.
+ The names of these servers are built from <prefix> and <num | range> parameters.
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
-
- Arguments :
- clf if the "clf" argument is added, then the output format will be
- the CLF format instead of HAProxy's default TCP format. You can
- use this when you need to feed HAProxy's logs through a specific
- log analyzer which only support the CLF format and which is not
- extensible. Since this expects an HTTP format some of the
- values have been pre set. The http request will show as TCP and
- the response code will show as 000.
-
- By default, the log output format is very poor, as it only contains the
- source and destination addresses, and the instance name. By specifying
- "option tcplog", each log line turns into a much richer format including, but
- not limited to, the connection timers, the stream status, the connections
- numbers, the frontend, backend and server name, and of course the source
- address and ports. This option is useful for pure TCP proxies in order to
- find which of the client or server disconnects or times out. For normal HTTP
- proxies, it's better to use "option httplog" which is even more complete.
-
- "option tcplog" overrides any previous "log-format" directive.
+ no | no | yes | yes
- See also : "option httplog", and section 8 about logging.
+ Arguments:
+ <prefix> A prefix for the server names to be built.
+ <num | range>
+ If <num> is provided, this template initializes <num> servers
+ with 1 up to <num> as server name suffixes. A range of numbers
+ <num_low>-<num_high> may also be used to use <num_low> up to
+ <num_high> as server name suffixes.
-option transparent
-no option transparent
- Enable client-side transparent proxying
+ <fqdn> A FQDN for all the servers this template initializes.
- May be used in the following contexts: tcp, http
+ <port> Same meaning as "server" <port> argument (see "server" keyword).
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ <params*>
+ Remaining server parameters among all those supported by "server"
+ keyword.
- Arguments : none
+ Examples:
+ # Initializes 3 servers with srv1, srv2 and srv3 as names,
+ # google.com as FQDN, and health-check enabled.
+ server-template srv 1-3 google.com:80 check
- This option was introduced in order to provide layer 7 persistence to layer 3
- load balancers. The idea is to use the OS's ability to redirect an incoming
- connection for a remote address to a local process (here HAProxy), and let
- this process know what address was initially requested. When this option is
- used, sessions without cookies will be forwarded to the original destination
- IP address of the incoming request (which should match that of another
- equipment), while requests with cookies will still be forwarded to the
- appropriate server.
+ # or
+ server-template srv 3 google.com:80 check
- Note that contrary to a common belief, this option does NOT make HAProxy
- present the client's IP to the server when establishing the connection.
+ # would be equivalent to:
+ server srv1 google.com:80 check
+ server srv2 google.com:80 check
+ server srv3 google.com:80 check
- See also: the "usesrc" argument of the "source" keyword, and the
- "transparent" option of the "bind" keyword.
-external-check command <command>
- Executable to run when performing an external-check
+source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | client | clientip } ]
+source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | hdr_ip(<hdr>[,<occ>]) } ]
+source <addr>[:<port>] [interface <name>]
+ Set the source address for outgoing connections
- May be used in the following contexts: tcp, http, log
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
- <command> is the external command to run
-
- The arguments passed to the to the command are:
-
- <proxy_address> <proxy_port> <server_address> <server_port>
+ <addr> is the IPv4 address HAProxy will bind to before connecting to a
+ server. This address is also used as a source for health checks.
- The <proxy_address> and <proxy_port> are derived from the first listener
- that is either IPv4, IPv6 or a UNIX socket. In the case of a UNIX socket
- listener the proxy_address will be the path of the socket and the
- <proxy_port> will be the string "NOT_USED". In a backend section, it's not
- possible to determine a listener, and both <proxy_address> and <proxy_port>
- will have the string value "NOT_USED".
+ The default value of 0.0.0.0 means that the system will select
+ the most appropriate address to reach its destination. Optionally
+ an address family prefix may be used before the address to force
+ the family regardless of the address format, which can be useful
+ to specify a path to a unix socket with no slash ('/'). Currently
+ supported prefixes are :
+ - 'ipv4@' -> address is always IPv4
+ - 'ipv6@' -> address is always IPv6
+ - 'unix@' -> address is a path to a local unix socket
+ - 'abns@' -> address is in abstract namespace (Linux only)
+ - 'abnsz@' -> address is in zero-terminated abstract namespace
+ (Linux only)
- Some values are also provided through environment variables.
+ You may want to reference some environment variables in the
+ address parameter, see section 2.3 about environment variables.
- Environment variables :
- HAPROXY_PROXY_ADDR The first bind address if available (or empty if not
- applicable, for example in a "backend" section).
+ <port> is an optional port. It is normally not needed but may be useful
+ in some very specific contexts. The default value of zero means
+ the system will select a free port. Note that port ranges are not
+ supported in the backend. If you want to force port ranges, you
+ have to specify them on each "server" line.
- HAPROXY_PROXY_ID The backend id.
+ <addr2> is the IP address to present to the server when connections are
+ forwarded in full transparent proxy mode. This is currently only
+ supported on some patched Linux kernels. When this address is
+ specified, clients connecting to the server will be presented
+ with this address, while health checks will still use the address
+ <addr>.
- HAPROXY_PROXY_NAME The backend name.
+ <port2> is the optional port to present to the server when connections
+ are forwarded in full transparent proxy mode (see <addr2> above).
+ The default value of zero means the system will select a free
+ port.
- HAPROXY_PROXY_PORT The first bind port if available (or empty if not
- applicable, for example in a "backend" section or
- for a UNIX socket).
+ <hdr> is the name of an HTTP header in which to fetch the IP to bind to.
+ This is the name of a comma-separated header list which can
+ contain multiple IP addresses. By default, the last occurrence is
+ used. This is designed to work with the X-Forwarded-For header
+ and to automatically bind to the client's IP address as seen
+ by the previous proxy, typically Stunnel. In order to use another
+ occurrence from the last one, please see the <occ> parameter
+ below. When the header (or occurrence) is not found, no binding
+ is performed so that the proxy's default IP address is used. Also
+ keep in mind that the header name is case insensitive, as for any
+ HTTP header.
- HAPROXY_SERVER_ADDR The server address.
+ <occ> is the occurrence number of a value to be used in a multi-value
+ header. This is to be used in conjunction with "hdr_ip(<hdr>)",
+ in order to specify which occurrence to use for the source IP
+ address. Positive values indicate a position from the first
+ occurrence, 1 being the first one. Negative values indicate
+ positions relative to the last one, -1 being the last one. This
+ is helpful for situations where an X-Forwarded-For header is set
+ at the entry point of an infrastructure and must be used several
+ proxy layers away. When this value is not specified, -1 is
+ assumed. Passing a zero here disables the feature.
- HAPROXY_SERVER_CURCONN The current number of connections on the server.
+ <name> is an optional interface name to bind to for outgoing
+ traffic. On systems supporting this feature (currently, only
+ Linux), this allows one to bind all traffic to the server to
+ this interface even if it is not the one the system would select
+ based on routing tables. This should be used with extreme care.
+ Note that using this option requires root privileges.
- HAPROXY_SERVER_ID The server id.
+ The "source" keyword is useful in complex environments where only a specific
+ address is allowed to connect to the servers. It may be needed when a
+ private address must be used through a public gateway for instance, and it is
+ known that the system cannot determine the adequate source address by itself.
- HAPROXY_SERVER_MAXCONN The server max connections.
+ An extension which is available on certain patched Linux kernels may be used
+ through the "usesrc" optional keyword. It makes it possible to connect to the
+ servers with an IP address which does not belong to the system itself. This
+ is called "full transparent proxy mode". For this to work, the destination
+ servers have to route their traffic back to this address through the machine
+ running HAProxy, and IP forwarding must generally be enabled on this machine.
- HAPROXY_SERVER_NAME The server name.
+ In this "full transparent proxy" mode, it is possible to force a specific IP
+ address to be presented to the servers. This is not much used in fact. A more
+ common use is to tell HAProxy to present the client's IP address. For this,
+ there are two methods :
- HAPROXY_SERVER_PORT The server port if available (or empty for a UNIX
- socket).
+ - present the client's IP and port addresses. This is the most transparent
+ mode, but it can cause problems when IP connection tracking is enabled on
+ the machine, because the same connection may be seen twice with different
+ states. However, this solution presents the huge advantage of not
+ limiting the system to the 64k outgoing address+port couples, because all
+ of the client ranges may be used.
- HAPROXY_SERVER_SSL "0" when SSL is not used, "1" when it is used
+ - present only the client's IP address and select a spare port. This
+ solution is still quite elegant but slightly less transparent (downstream
+ firewall logs will not match upstream's). It also presents the downside
+ of limiting the number of concurrent connections to the usual 64k ports.
+ However, since the upstream and downstream ports are different, local IP
+ connection tracking on the machine will not be upset by the reuse of the
+ same session.
- HAPROXY_SERVER_PROTO The protocol used by this server, which can be one
- of "cli" (the haproxy CLI), "syslog" (syslog TCP
- server), "peers" (peers TCP server), "h1" (HTTP/1.x
- server), "h2" (HTTP/2 server), or "tcp" (any other
- TCP server).
+ This option sets the default source for all servers in the backend. It may
+ also be specified in a "defaults" section. Finer source address specification
+ is possible at the server level using the "source" server option. Refer to
+ section 5 for more information.
- PATH The PATH environment variable used when executing
- the command may be set using "external-check path".
+ In order to work, "usesrc" requires root privileges, or on supported systems,
+ the "cap_net_raw" capability. See also the "setcap" global directive.
- If the command executed and exits with a zero status then the check is
- considered to have passed, otherwise the check is considered to have
- failed.
+ Examples :
+ backend private
+ # Connect to the servers using our 192.168.1.200 source address
+ source 192.168.1.200
- Example :
- external-check command /bin/true
+ backend transparent_ssl1
+ # Connect to the SSL farm from the client's source address
+ source 192.168.1.200 usesrc clientip
- See also : "external-check", "option external-check", "external-check path"
+ backend transparent_ssl2
+ # Connect to the SSL farm from the client's source address and port
+ # not recommended if IP conntrack is present on the local machine.
+ source 192.168.1.200 usesrc client
+
+ backend transparent_ssl3
+ # Connect to the SSL farm from the client's source address. It
+ # is more conntrack-friendly.
+ source 192.168.1.200 usesrc clientip
+
+ backend transparent_smtp
+ # Connect to the SMTP farm from the client's source address/port
+ # with Tproxy version 4.
+ source 0.0.0.0 usesrc clientip
-external-check path <path>
- The value of the PATH environment variable used when running an external-check
+ backend transparent_http
+ # Connect to the servers using the client's IP as seen by previous
+ # proxy.
+ source 0.0.0.0 usesrc hdr_ip(x-forwarded-for,-1)
+
+ See also : the "source" server option in section 5, the Tproxy patches for
+ the Linux kernel on www.balabit.com, the "bind" keyword.
+
+
+srvtcpka-cnt <count>
+ Sets the maximum number of keepalive probes TCP should send before dropping
+ the connection on the server side.
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
- <path> is the path used when executing external command to run
-
- The default path is "".
+ <count> is the maximum number of keepalive probes.
- Example :
- external-check path "/usr/bin:/bin"
+ This keyword corresponds to the socket option TCP_KEEPCNT. If this keyword
+ is not specified, system-wide TCP parameter (tcp_keepalive_probes) is used.
+ The availability of this setting depends on the operating system. It is
+ known to work on Linux.
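+ For instance, server-side probes could be capped as below (a minimal
+ sketch; the backend and server names as well as the value are only
+ illustrative) :
+
+ Example :
+        backend tcp_pool
+            mode tcp
+            option srvtcpka       # enable server-side TCP keepalives
+            srvtcpka-cnt 3        # drop after 3 unacknowledged probes
+            server srv1 192.168.0.10:7000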
- See also : "external-check", "option external-check",
- "external-check command"
+ See also : "option srvtcpka", "srvtcpka-idle", "srvtcpka-intvl".
-persist rdp-cookie
-persist rdp-cookie(<name>)
- Enable RDP cookie-based persistence
+srvtcpka-idle <timeout>
+ Sets the time the connection needs to remain idle before TCP starts sending
+ keepalive probes, when sending of TCP keepalive packets is enabled on the
+ server side.
- May be used in the following contexts: tcp
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
- <name> is the optional name of the RDP cookie to check. If omitted, the
- default cookie name "msts" will be used. There currently is no
- valid reason to change this name.
-
- This statement enables persistence based on an RDP cookie. The RDP cookie
- contains all information required to find the server in the list of known
- servers. So when this option is set in the backend, the request is analyzed
- and if an RDP cookie is found, it is decoded. If it matches a known server
- which is still UP (or if "option persist" is set), then the connection is
- forwarded to this server.
-
- Note that this only makes sense in a TCP backend, but for this to work, the
- frontend must have waited long enough to ensure that an RDP cookie is present
- in the request buffer. This is the same requirement as with the "rdp-cookie"
- load-balancing method. Thus it is highly recommended to put all statements in
- a single "listen" section.
-
- Also, it is important to understand that the terminal server will emit this
- RDP cookie only if it is configured for "token redirection mode", which means
- that the "IP address redirection" option is disabled.
+ <timeout> is the time the connection needs to remain idle before TCP starts
+ sending keepalive probes. It is specified in seconds by default,
+ but can be in any other unit if the number is suffixed by the
+ unit, as explained at the top of this document.
- Example :
- listen tse-farm
- bind :3389
- # wait up to 5s for an RDP cookie in the request
- tcp-request inspect-delay 5s
- tcp-request content accept if RDP_COOKIE
- # apply RDP cookie persistence
- persist rdp-cookie
- # if server is unknown, let's balance on the same cookie.
- # alternatively, "balance leastconn" may be useful too.
- balance rdp-cookie
- server srv1 1.1.1.1:3389
- server srv2 1.1.1.2:3389
+ This keyword corresponds to the socket option TCP_KEEPIDLE. If this keyword
+ is not specified, system-wide TCP parameter (tcp_keepalive_time) is used.
+ The availability of this setting depends on the operating system. It is
+ known to work on Linux.
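+ For instance, the idle delay could be set as below (a minimal sketch; the
+ backend and server names as well as the value are only illustrative) :
+
+ Example :
+        backend tcp_pool
+            mode tcp
+            option srvtcpka       # enable server-side TCP keepalives
+            srvtcpka-idle 5m      # start probing after 5 minutes of idle
+            server srv1 192.168.0.10:7000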
- See also : "balance rdp-cookie", "tcp-request" and the "req.rdp_cookie" ACL.
+ See also : "option srvtcpka", "srvtcpka-cnt", "srvtcpka-intvl".
-quic-initial <action> [ { if | unless } <condition> ]
- Perform an action on an incoming QUIC Initial packet. Contrary to
- "tcp-request connection", this is executed prior to any connection element
- instantiation and starting and completion of the SSL handshake, which is more
- efficient when wanting to reject connections attempts.
+srvtcpka-intvl <timeout>
+ Sets the time between individual keepalive probes on the server side.
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
- yes(!) | yes | yes | no
+ yes | no | yes | yes
Arguments :
- <action> defines the action to perform if the condition applies. See
- below.
-
- <condition> is a standard layer4-only ACL-based condition (see section 7).
- However, QUIC initial rules are executed too early even for
- some layer4 sample fetch methods despite no configuration
- warning and may result in unspecified runtime behavior,
- although they will not crash. Consider that only internal
- samples and layer4 "src*" and "dst*" are considered as
- supported for now.
+ <timeout> is the time between individual keepalive probes. It is specified
+ in seconds by default, but can be in any other unit if the number
+ is suffixed by the unit, as explained at the top of this
+ document.
+ This keyword corresponds to the socket option TCP_KEEPINTVL. If this keyword
+ is not specified, system-wide TCP parameter (tcp_keepalive_intvl) is used.
+ The availability of this setting depends on the operating system. It is
+ known to work on Linux.
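+ The three settings are typically combined as below (a minimal sketch; the
+ backend and server names as well as the values are only illustrative) :
+
+ Example :
+        backend tcp_pool
+            mode tcp
+            option srvtcpka       # enable server-side TCP keepalives
+            srvtcpka-idle 5m      # start probing after 5 minutes of idle
+            srvtcpka-intvl 10s    # then probe every 10 seconds
+            srvtcpka-cnt 3        # drop after 3 unacknowledged probes
+            server srv1 192.168.0.10:7000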
- This action is executed early during QUIC packet parsing. As such, only a
- minimal list of actions is supported :
- - accept
- - dgram-drop
- - reject
- - send-retry
+ See also : "option srvtcpka", "srvtcpka-cnt", "srvtcpka-idle".
-rate-limit sessions <rate>
- Set a limit on the number of new sessions accepted per second on a frontend
+stats admin { if | unless } <cond>
+ Enable statistics admin level if/unless a condition is matched
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
-
- Arguments :
- <rate> The <rate> parameter is an integer designating the maximum number
- of new sessions per second to accept on the frontend.
+ no | yes | yes | yes
- When the frontend reaches the specified number of new sessions per second, it
- stops accepting new connections until the rate drops below the limit again.
- During this time, the pending sessions will be kept in the socket's backlog
- (in system buffers) and HAProxy will not even be aware that sessions are
- pending. When applying very low limit on a highly loaded service, it may make
- sense to increase the socket's backlog using the "backlog" keyword.
+ This statement enables the statistics admin level if/unless a condition is
+ matched.
- This feature is particularly efficient at blocking connection-based attacks
- or service abuse on fragile servers. Since the session rate is measured every
- millisecond, it is extremely accurate. Also, the limit applies immediately,
- no delay is needed at all to detect the threshold.
+ The admin level makes it possible to enable/disable servers from the web
+ interface. By default, the statistics page is read-only for security reasons.
- Example : limit the connection rate on SMTP to 10 per second max
- listen smtp
- mode tcp
- bind :25
- rate-limit sessions 10
- server smtp1 127.0.0.1:1025
+ Currently, the POST request is limited to the buffer size minus the reserved
+ buffer space, which means that if the list of servers is too long, the
+ request won't be processed. It is recommended to alter only a few servers
+ at a time.
- Note : when the maximum rate is reached, the frontend's status is not changed
- but its sockets appear as "WAITING" in the statistics if the
- "socket-stats" option is enabled.
+ Example :
+ # statistics admin level only for localhost
+ backend stats_localhost
+ stats enable
+ stats admin if LOCALHOST
- See also : the "backlog" keyword and the "fe_sess_rate" ACL criterion.
+ Example :
+ # statistics admin level always enabled because of the authentication
+ backend stats_auth
+ stats enable
+ stats auth admin:AdMiN123
+ stats admin if TRUE
+
+ Example :
+ # statistics admin level depends on the authenticated user
+ userlist stats-auth
+ group admin users admin
+ user admin insecure-password AdMiN123
+ group readonly users haproxy
+ user haproxy insecure-password haproxy
-redirect location <loc> [code <code>] <option> [{if | unless} <condition>]
-redirect prefix <pfx> [code <code>] <option> [{if | unless} <condition>]
-redirect scheme <sch> [code <code>] <option> [{if | unless} <condition>]
- Return an HTTP redirection if/unless a condition is matched
+ backend stats_auth
+ stats enable
+ acl AUTH http_auth(stats-auth)
+ acl AUTH_ADMIN http_auth_group(stats-auth) admin
+ stats http-request auth unless AUTH
+ stats admin if AUTH_ADMIN
- May be used in the following contexts: http
+ See also : "stats enable", "stats auth", "stats http-request", section 12.2
+ about userlists and section 7 about ACL usage.
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | yes
+ssl-f-use [<sslbindconf> ...]*
+ Assign a certificate to the current frontend.
- If/unless the condition is matched, the HTTP request will lead to a redirect
- response. If no condition is specified, the redirect applies unconditionally.
+ May be used in the following contexts: tcp, http
- Arguments :
- <loc> With "redirect location", the exact value in <loc> is placed into
- the HTTP "Location" header. When used in an "http-request" rule,
- <loc> value follows the Custom log format rules and can include
- some dynamic values (see Custom log format in section 8.2.6).
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
- <pfx> With "redirect prefix", the "Location" header is built from the
- concatenation of <pfx> and the complete URI path, including the
- query string, unless the "drop-query" option is specified (see
- below). As a special case, if <pfx> equals exactly "/", then
- nothing is inserted before the original URI. It allows one to
- redirect to the same URL (for instance, to insert a cookie). When
- used in an "http-request" rule, <pfx> value follows the Custom
- Log Format rules and can include some dynamic values (see Custom
- Log Format in section 8.2.6).
+ Arguments :
+ <sslbindconf> supports the following keywords from the bind line
+ (see Section 5.1. Bind options):
- <sch> With "redirect scheme", then the "Location" header is built by
- concatenating <sch> with "://" then the first occurrence of the
- "Host" header, and then the URI path, including the query string
- unless the "drop-query" option is specified (see below). If no
- path is found or if the path is "*", then "/" is used instead. If
- no "Host" header is found, then an empty host component will be
- returned, which most recent browsers interpret as redirecting to
- the same host. This directive is mostly used to redirect HTTP to
- HTTPS. When used in an "http-request" rule, <sch> value follows
- the Custom log format rules and can include some dynamic values
- (see Custom log format in section 8.2.6).
+ - allow-0rtt
+ - alpn
+ - ca-file
+ - ca-verify-file
+ - ciphers
+ - ciphersuites
+ - client-sigalgs
+ - crl-file
+ - curves
+ - ecdhe
+ - no-alpn
+ - no-ca-names
+ - npn
+ - sigalgs
+ - ssl-min-ver
+ - ssl-max-ver
+ - verify
- <code> The code is optional. It indicates which type of HTTP redirection
- is desired. Only codes 301, 302, 303, 307 and 308 are supported,
- with 302 used by default if no code is specified. 301 means
- "Moved permanently", and a browser may cache the Location. 302
- means "Moved temporarily" and means that the browser should not
- cache the redirection. 303 is equivalent to 302 except that the
- browser will fetch the location with a GET method. 307 is just
- like 302 but makes it clear that the same method must be reused.
- Likewise, 308 replaces 301 if the same method must be used.
+ sslbindconf also supports the following keywords from the crt-store load
+ keyword (see Section 12.7.1. Load options):
- <option> There are several options which can be specified to adjust the
- expected behavior of a redirection :
+ - crt
+ - key
+ - ocsp
+ - issuer
+ - sctl
+ - ocsp-update
- - "drop-query"
- When this keyword is used in a prefix-based redirection, then the
- location will be set without any possible query-string, which is useful
- for directing users to a non-secure page for instance. It has no effect
- with a location-type redirect.
+ The certificate is assigned to a crt-list created automatically, named after
+ the frontend and prefixed by '@' (ex: '@frontend1').
- - "append-slash"
- This keyword may be used in conjunction with "drop-query" to redirect
- users who use a URL not ending with a '/' to the same one with the '/'.
- It can be useful to ensure that search engines will only see one URL.
- For this, a return code 301 is preferred.
+ This implicit crt-list will be assigned to every "ssl" bind line in a
+ frontend that does not already have the "crt" or the "crt-list" line.
+ crt-list commands from the stats socket are effective with this crt-list, so
+ one could replace, remove or add certificates and SSL options to it.
- - "ignore-empty"
- This keyword only has effect when a location is produced using a log
- format expression (i.e. when used in http-request or http-response).
- It indicates that if the result of the expression is empty, the rule
- should silently be skipped. The main use is to allow mass-redirects
- of known paths using a simple map.
+ Example :
- - "set-cookie NAME[=value]"
- A "Set-Cookie" header will be added with NAME (and optionally "=value")
- to the response. This is sometimes used to indicate that a user has
- been seen, for instance to protect against some types of DoS. No other
- cookie option is added, so the cookie will be a session cookie. Note
- that for a browser, a sole cookie name without an equal sign is
- different from a cookie with an equal sign.
+ frontend https
+ bind :443 ssl
+ bind quic4@:443 ssl
+ ssl-f-use crt foobar.pem.rsa sigalgs "RSA-PSS+SHA256"
+ ssl-f-use crt test.foobar.pem
+ ssl-f-use crt test2.foobar.crt key test2.foobar.key ocsp test2.foobar.ocsp ocsp-update on
- - "set-cookie-fmt <fmt>"
- It is equivaliant to the option above, except the "Set-Cookie" header
- will be filled with the result of the log-format string <fmt>
- evaluation. Be careful to respect the "NAME[=value]" format because no
- special check are performed during the configuration parsing.
+ See also : "crt-list" and "crt".
- - "clear-cookie NAME[=]"
- A "Set-Cookie" header will be added with NAME (and optionally "="), but
- with the "Max-Age" attribute set to zero. This will tell the browser to
- delete this cookie. It is useful for instance on logout pages. It is
- important to note that clearing the cookie "NAME" will not remove a
- cookie set with "NAME=value". You have to clear the cookie "NAME=" for
- that, because the browser makes the difference.
+stats auth <user>:<passwd>
+ Enable statistics with authentication and grant access to an account
- - "keep-query"
- When this keyword is used in a location-based redirection, then the
- query-string of the original URI, if any, will be appended to the
- location. If no query-string is found, nothing is added. If the
- location already contains a query-string, the original one will be
- appended with the '&' delimiter.
+ May be used in the following contexts: http
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- Example: move the login URL only to HTTPS.
- acl clear dst_port 80
- acl secure dst_port 8080
- acl login_page url_beg /login
- acl logout url_beg /logout
- acl uid_given url_reg /login?userid=[^&]+
- acl cookie_set hdr_sub(cookie) SEEN=1
+ Arguments :
+ <user> is a user name to grant access to
- redirect prefix https://mysite.com set-cookie SEEN=1 if !cookie_set
- redirect prefix https://mysite.com if login_page !secure
- redirect prefix http://mysite.com drop-query if login_page !uid_given
- redirect location http://mysite.com/ if !login_page secure
- redirect location / clear-cookie USERID= if logout
+ <passwd> is the cleartext password associated to this user
- Example: send redirects for request for articles without a '/'.
- acl missing_slash path_reg ^/article/[^/]*$
- redirect code 301 prefix / drop-query append-slash if missing_slash
+ This statement enables statistics with default settings, and restricts access
+ to declared users only. It may be repeated as many times as necessary to
+ allow as many users as desired. When a user tries to access the statistics
+ without a valid account, a "401 Unauthorized" response will be returned so
+ that the browser asks the user to provide a valid user and password. The
+ realm which will be returned to the browser is configurable using
+ "stats realm".
- Example: redirect all HTTP traffic to HTTPS when SSL is handled by HAProxy.
- redirect scheme https if !{ ssl_fc }
+ Since the authentication method is HTTP Basic Authentication, the passwords
+ circulate in cleartext on the network. Thus, it was decided that the
+ configuration file would also use cleartext passwords, to remind users that
+ these should not be sensitive nor shared with any other account.
- Example: append 'www.' prefix in front of all hosts not having it
- http-request redirect code 301 location \
- http://www.%[hdr(host)]%[capture.req.uri] \
- unless { hdr_beg(host) -i www }
+ It is also possible to reduce the scope of the proxies which appear in the
+ report using "stats scope".
- Example: permanently redirect only old URLs to new ones
- http-request redirect code 301 location \
- %[path,map_str(old-blog-articles.map)] ignore-empty
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on
+ unobvious default parameters.
- See section 7 about ACL usage.
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm HAProxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
-retries <value>
- Set the number of retries to perform on a server after a failure
+ See also : "stats enable", "stats realm", "stats scope", "stats uri"
- May be used in the following contexts: tcp, http
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+stats enable
+ Enable statistics reporting with default settings
- Arguments :
- <value> is the number of times a request or connection attempt should be
- retried on a server after a failure.
+ May be used in the following contexts: http
- By default, retries apply only to new connection attempts. However, when
- the "retry-on" directive is used, other conditions might trigger a retry
- (e.g. empty response, undesired status code), and each of them will count
- one attempt, and when the total number attempts reaches the value here, an
- error will be returned.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- In order to avoid immediate reconnections to a server which is restarting,
- a turn-around timer of min("timeout connect", one second) is applied before
- a retry occurs on the same server.
+ Arguments : none
- When "option redispatch" is set, some retries may be performed on another
- server even if a cookie references a different server. By default this will
- only be the last retry unless an argument is passed to "option redispatch".
+ This statement enables statistics reporting with default settings defined
+ at build time. Unless stated otherwise, these settings are used :
+ - stats uri : /haproxy?stats
+ - stats realm : "HAProxy Statistics"
+ - stats auth : no authentication
+ - stats scope : no restriction
- See also : "option redispatch"
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on
+ unobvious default parameters.
+
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm HAProxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
-retry-on [space-delimited list of keywords]
- Specify when to attempt to automatically retry a failed request.
- This setting is only valid when "mode" is set to http and is silently ignored
- otherwise.
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
- May be used in the following contexts: tcp, http
+ See also : "stats auth", "stats realm", "stats uri"
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
- Arguments :
- <keywords> is a space-delimited list of keywords or HTTP status codes, each
- representing a type of failure event on which an attempt to
- retry the request is desired. Please read the notes at the
- bottom before changing this setting. The following keywords are
- supported :
+stats hide-version
+ Enable statistics and hide HAProxy version reporting
- none never retry
+ May be used in the following contexts: http
- conn-failure retry when the connection or the SSL handshake failed
- and the request could not be sent. This is the default.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- empty-response retry when the server connection was closed after part
- of the request was sent, and nothing was received from
- the server. This type of failure may be caused by the
- request timeout on the server side, poor network
- condition, or a server crash or restart while
- processing the request.
+ Arguments : none
- junk-response retry when the server returned something not looking
- like a complete HTTP response. This includes partial
- responses headers as well as non-HTTP contents. It
- usually is a bad idea to retry on such events, which
- may be caused a configuration issue (wrong server port)
- or by the request being harmful to the server (buffer
- overflow attack for example).
+ By default, the stats page reports some useful status information along with
+ the statistics. Among them is HAProxy's version. However, it is generally
+ considered dangerous to report the precise version to anyone, as this helps
+ target known weaknesses with specific attacks. The "stats hide-version"
+ statement removes the version from the statistics report. This is recommended
+ for public sites or any site with a weak login/password.
- response-timeout the server timeout stroke while waiting for the server
- to respond to the request. This may be caused by poor
- network condition, the reuse of an idle connection
- which has expired on the path, or by the request being
- extremely expensive to process. It generally is a bad
- idea to retry on such events on servers dealing with
- heavy database processing (full scans, etc) as it may
- amplify denial of service attacks.
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on
+ unobvious default parameters.
- 0rtt-rejected retry requests which were sent over early data and were
- rejected by the server. These requests are generally
- considered to be safe to retry.
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm HAProxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
- <status> any HTTP status code among "401" (Unauthorized), "403"
- (Forbidden), "404" (Not Found), "408" (Request Timeout),
- "421" (Misdirected Request), "425" (Too Early),
- "429" (Too Many Requests), "500" (Server Error),
- "501" (Not Implemented), "502" (Bad Gateway),
- "503" (Service Unavailable), "504" (Gateway Timeout).
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
- all-retryable-errors
- retry request for any error that are considered
- retryable. This currently activates "conn-failure",
- "empty-response", "junk-response", "response-timeout",
- "0rtt-rejected", "500", "502", "503", and "504".
+ See also : "stats auth", "stats enable", "stats realm", "stats uri"
- Using this directive replaces any previous settings with the new ones; it is
- not cumulative.
- Please note that using anything other than "none" and "conn-failure" requires
- to allocate a buffer and copy the whole request into it, so it has memory and
- performance impacts. Requests not fitting in a single buffer will never be
- retried (see the global tune.bufsize setting).
+stats http-request { allow | deny | auth [realm <realm>] }
+ [ { if | unless } <condition> ]
+ Access control for statistics
- You have to make sure the application has a replay protection mechanism built
- in such as a unique transaction IDs passed in requests, or that replaying the
- same request has no consequence, or it is very dangerous to use any retry-on
- value beside "conn-failure" and "none". Static file servers and caches are
- generally considered safe against any type of retry. Using a status code can
- be useful to quickly leave a server showing an abnormal behavior (out of
- memory, file system issues, etc), but in this case it may be a good idea to
- immediately redispatch the connection to another server (please see "option
- redispatch" for this). Last, it is important to understand that most causes
- of failures are the requests themselves and that retrying a request causing a
- server to misbehave will often make the situation even worse for this server,
- or for the whole service in case of redispatch.
+ May be used in the following contexts: http
- Unless you know exactly how the application deals with replayed requests, you
- should not use this directive.
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
- The default is "conn-failure".
+  As with "http-request", this set of options allows fine-grained control of
+  access to the statistics. Each option may be followed by an if/unless
+  condition using ACLs. The first option whose condition matches (or which
+  has no condition) is final. With "deny", a 403 error is returned; with
+  "allow", normal processing is performed; with "auth", a 401/407 error is
+  returned so that the client is asked to provide a username and password.
- Example:
- retry-on 503 504
+ There is no fixed limit to the number of http-request statements per
+ instance.
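+
+  For instance, the statements below could combine an ACL with authentication
+  (the backend name, ACL name and network are only illustrative) :
+      backend stats_auth
+          stats enable
+          stats uri /admin?stats
+          # allow access from a trusted network, ask for credentials otherwise
+          acl TRUSTED src 192.168.0.0/24
+          stats http-request allow if TRUSTED
+          stats http-request auth realm HAProxy\ Statistics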
- See also: "retries", "option redispatch", "tune.bufsize"
+ See also : "http-request", section 12.2 about userlists and section 7
+ about ACL usage.
-server <name> <address>[:[port]] [param*]
- Declare a server in a backend
- May be used in the following contexts: tcp, http, log
+stats realm <realm>
+ Enable statistics and set authentication realm
+
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- no | no | yes | yes
+ yes | yes | yes | yes
Arguments :
- <name> is the internal name assigned to this server. This name will
- appear in logs and alerts. If "http-send-name-header" is
- set, it will be added to the request header sent to the server.
-
- <address> is the IPv4 or IPv6 address of the server. Alternatively, a
- resolvable hostname is supported, but this name will be resolved
- during start-up. Address "0.0.0.0" or "*" has a special meaning.
- It indicates that the connection will be forwarded to the same IP
- address as the one from the client connection. This is useful in
- transparent proxy architectures where the client's connection is
- intercepted and HAProxy must forward to the original destination
- address. This is more or less what the "transparent" keyword does
- except that with a server it's possible to limit concurrency and
- to report statistics. Optionally, an address family prefix may be
- used before the address to force the family regardless of the
- address format, which can be useful to specify a path to a unix
- socket with no slash ('/'). Currently supported prefixes are :
- - 'ipv4@' -> address is always IPv4
- - 'ipv6@' -> address is always IPv6
- - 'unix@' -> address is a path to a local unix socket
- - 'abns@' -> address is in abstract namespace (Linux only)
- - 'abnsz@' -> address is in abstract namespace (Linux only)
- but it is explicitly zero-terminated. This means no \0
- padding is used to complete sun_path. It is useful to
- interconnect with programs that don't implement the
- default abns naming logic that haproxy uses.
- - 'sockpair@' -> address is the FD of a connected unix
- socket or of a socketpair. During a connection, the
- backend creates a pair of connected sockets, and passes
- one of them over the FD. The bind part will use the
- received socket as the client FD. Should be used
- carefully.
- - 'rhttp@' [ EXPERIMENTAL ] -> custom address family for a
- passive server in HTTP reverse context. This is an
- experimental features which requires
- "expose-experimental-directives" on a line before this
- server.
- You may want to reference some environment variables in the
- address parameter, see section 2.3 about environment
- variables. The "init-addr" setting can be used to modify the way
- IP addresses should be resolved upon startup.
+    <realm>   is the name of the HTTP Basic Authentication realm reported to
+              the browser. The browser displays it in the pop-up inviting the
+              user to enter a valid username and password.
- <port> is an optional port specification. If set, all connections will
- be sent to this port. If unset, the same port the client
- connected to will be used. The port may also be prefixed by a "+"
- or a "-". In this case, the server's port will be determined by
- adding this value to the client's port.
+ The realm is read as a single word, so any spaces in it should be escaped
+ using a backslash ('\').
- <param*> is a list of parameters for this server. The "server" keywords
- accepts an important number of options and has a complete section
- dedicated to it. Please refer to section 5 for more details.
+ This statement is useful only in conjunction with "stats auth" since it is
+ only related to authentication.
- Examples :
- server first 10.1.1.1:1080 cookie first check inter 1000
- server second 10.1.1.2:1080 cookie second check inter 1000
- server transp ipv4@
- server backup "${SRV_BACKUP}:1080" backup
- server www1_dc1 "${LAN_DC1}.101:80"
- server www1_dc2 "${LAN_DC2}.101:80"
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
- Note: regarding Linux's abstract namespace sockets, "abns" HAProxy sockets
- uses the whole sun_path length is used for the address length. Some
- other programs such as socat use the string length only by default.
- Pass the option ",unix-tightsocklen=0" to any abstract socket
- definition in socat to make it compatible with HAProxy's, or use the
- "abnsz" HAProxy socket family instead.
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm HAProxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
- See also: "default-server", "http-send-name-header" and section 5 about
- server options
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
-server-state-file-name [ { use-backend-name | <file> } ]
- Set the server state file to read, load and apply to servers available in
- this backend.
+ See also : "stats auth", "stats enable", "stats uri"
- May be used in the following contexts: tcp, http, log
- May be used in sections: defaults | frontend | listen | backend
- no | no | yes | yes
+stats refresh <delay>
+ Enable statistics with automatic refresh
- It only applies when the directive "load-server-state-from-file" is set to
- "local". When <file> is not provided, if "use-backend-name" is used or if
- this directive is not set, then backend name is used. If <file> starts with a
- slash '/', then it is considered as an absolute path. Otherwise, <file> is
- concatenated to the global directive "server-state-base".
+ May be used in the following contexts: http
- Example: the minimal configuration below would make HAProxy look for the
- state server file '/etc/haproxy/states/bk':
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- global
- server-state-file-base /etc/haproxy/states
+ Arguments :
+ <delay> is the suggested refresh delay, specified in seconds, which will
+ be returned to the browser consulting the report page. While the
+ browser is free to apply any delay, it will generally respect it
+             and refresh the page this often. The refresh interval may
+ be specified in any other non-default time unit, by suffixing the
+ unit after the value, as explained at the top of this document.
- backend bk
- load-server-state-from-file
+ This statement is useful on monitoring displays with a permanent page
+ reporting the load balancer's activity. When set, the HTML report page will
+ include a link "refresh"/"stop refresh" so that the user can select whether
+ they want automatic refresh of the page or not.
- See also: "server-state-base", "load-server-state-from-file", and
- "show servers state"
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
-server-template <prefix> <num | range> <fqdn>[:<port>] [params*]
- Set a template to initialize servers with shared parameters.
- The names of these servers are built from <prefix> and <num | range> parameters.
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm HAProxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
- May be used in the following contexts: tcp, http, log
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
- May be used in sections : defaults | frontend | listen | backend
- no | no | yes | yes
+ See also : "stats auth", "stats enable", "stats realm", "stats uri"
- Arguments:
- <prefix> A prefix for the server names to be built.
- <num | range>
- If <num> is provided, this template initializes <num> servers
- with 1 up to <num> as server name suffixes. A range of numbers
- <num_low>-<num_high> may also be used to use <num_low> up to
- <num_high> as server name suffixes.
+stats scope { <name> | "." }
+ Enable statistics and limit access scope
- <fqdn> A FQDN for all the servers this template initializes.
+ May be used in the following contexts: http
- <port> Same meaning as "server" <port> argument (see "server" keyword).
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- <params*>
- Remaining server parameters among all those supported by "server"
- keyword.
+ Arguments :
+ <name> is the name of a listen, frontend or backend section to be
+ reported. The special name "." (a single dot) designates the
+ section in which the statement appears.
- Examples:
- # Initializes 3 servers with srv1, srv2 and srv3 as names,
- # google.com as FQDN, and health-check enabled.
- server-template srv 1-3 google.com:80 check
+ When this statement is specified, only the sections enumerated with this
+ statement will appear in the report. All other ones will be hidden. This
+ statement may appear as many times as needed if multiple sections need to be
+  reported. Please note that name checking is performed as a simple string
+  comparison, and that it is never verified that a given section name really
+  exists.
- # or
- server-template srv 3 google.com:80 check
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
- # would be equivalent to:
- server srv1 google.com:80 check
- server srv2 google.com:80 check
- server srv3 google.com:80 check
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm HAProxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
+ See also : "stats auth", "stats enable", "stats realm", "stats uri"
-source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | client | clientip } ]
-source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | hdr_ip(<hdr>[,<occ>]) } ]
-source <addr>[:<port>] [interface <name>]
- Set the source address for outgoing connections
+stats show-desc [ <desc> ]
+ Enable reporting of a description on the statistics page.
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ yes | yes | yes | yes
- Arguments :
- <addr> is the IPv4 address HAProxy will bind to before connecting to a
- server. This address is also used as a source for health checks.
+  Arguments :
+    <desc>    is an optional description to be reported. If unspecified, the
+              description from the global section is automatically used
+              instead.
- The default value of 0.0.0.0 means that the system will select
- the most appropriate address to reach its destination. Optionally
- an address family prefix may be used before the address to force
- the family regardless of the address format, which can be useful
- to specify a path to a unix socket with no slash ('/'). Currently
- supported prefixes are :
- - 'ipv4@' -> address is always IPv4
- - 'ipv6@' -> address is always IPv6
- - 'unix@' -> address is a path to a local unix socket
- - 'abns@' -> address is in abstract namespace (Linux only)
- - 'abnsz@' -> address is in zero-terminated abstract namespace
- (Linux only)
+  This statement is useful for users who offer shared services to their
+  customers, where the node or description should be different for each
+  customer.
- You may want to reference some environment variables in the
- address parameter, see section 2.3 about environment variables.
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+  unobvious parameters. By default, the description is not shown.
- <port> is an optional port. It is normally not needed but may be useful
- in some very specific contexts. The default value of zero means
- the system will select a free port. Note that port ranges are not
- supported in the backend. If you want to force port ranges, you
- have to specify them on each "server" line.
+ Example :
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats show-desc Master node for Europe, Asia, Africa
+ stats uri /admin?stats
+ stats refresh 5s
- <addr2> is the IP address to present to the server when connections are
- forwarded in full transparent proxy mode. This is currently only
- supported on some patched Linux kernels. When this address is
- specified, clients connecting to the server will be presented
- with this address, while health checks will still use the address
- <addr>.
+ See also: "show-node", "stats enable", "stats uri" and "description" in
+ global section.
- <port2> is the optional port to present to the server when connections
- are forwarded in full transparent proxy mode (see <addr2> above).
- The default value of zero means the system will select a free
- port.
- <hdr> is the name of a HTTP header in which to fetch the IP to bind to.
- This is the name of a comma-separated header list which can
- contain multiple IP addresses. By default, the last occurrence is
- used. This is designed to work with the X-Forwarded-For header
- and to automatically bind to the client's IP address as seen
- by previous proxy, typically Stunnel. In order to use another
- occurrence from the last one, please see the <occ> parameter
- below. When the header (or occurrence) is not found, no binding
- is performed so that the proxy's default IP address is used. Also
- keep in mind that the header name is case insensitive, as for any
- HTTP header.
+stats show-legends
+  Enable reporting of additional information on the statistics page
- <occ> is the occurrence number of a value to be used in a multi-value
- header. This is to be used in conjunction with "hdr_ip(<hdr>)",
- in order to specify which occurrence to use for the source IP
- address. Positive values indicate a position from the first
- occurrence, 1 being the first one. Negative values indicate
- positions relative to the last one, -1 being the last one. This
- is helpful for situations where an X-Forwarded-For header is set
- at the entry point of an infrastructure and must be used several
- proxy layers away. When this value is not specified, -1 is
- assumed. Passing a zero here disables the feature.
+ May be used in the following contexts: http
- <name> is an optional interface name to which to bind to for outgoing
- traffic. On systems supporting this features (currently, only
- Linux), this allows one to bind all traffic to the server to
- this interface even if it is not the one the system would select
- based on routing tables. This should be used with extreme care.
- Note that using this option requires root privileges.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- The "source" keyword is useful in complex environments where a specific
- address only is allowed to connect to the servers. It may be needed when a
- private address must be used through a public gateway for instance, and it is
- known that the system cannot determine the adequate source address by itself.
+ Arguments : none
- An extension which is available on certain patched Linux kernels may be used
- through the "usesrc" optional keyword. It makes it possible to connect to the
- servers with an IP address which does not belong to the system itself. This
- is called "full transparent proxy mode". For this to work, the destination
- servers have to route their traffic back to this address through the machine
- running HAProxy, and IP forwarding must generally be enabled on this machine.
+  Enable reporting of additional information on the statistics page :
+ - cap: capabilities (proxy)
+ - mode: one of tcp, http or health (proxy)
+ - id: SNMP ID (proxy, socket, server)
+ - IP (socket, server)
+ - cookie (backend, server)
- In this "full transparent proxy" mode, it is possible to force a specific IP
- address to be presented to the servers. This is not much used in fact. A more
- common use is to tell HAProxy to present the client's IP address. For this,
- there are two methods :
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters. Default behavior is not to show this information.
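+
+  Example (the backend name, address and cookie value below are only
+  illustrative) :
+      backend app
+          stats enable
+          stats uri /admin?stats
+          stats show-legends
+          server srv1 192.168.0.1:80 cookie s1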
- - present the client's IP and port addresses. This is the most transparent
- mode, but it can cause problems when IP connection tracking is enabled on
- the machine, because a same connection may be seen twice with different
- states. However, this solution presents the huge advantage of not
- limiting the system to the 64k outgoing address+port couples, because all
- of the client ranges may be used.
+ See also: "stats enable", "stats uri".
- - present only the client's IP address and select a spare port. This
- solution is still quite elegant but slightly less transparent (downstream
- firewalls logs will not match upstream's). It also presents the downside
- of limiting the number of concurrent connections to the usual 64k ports.
- However, since the upstream and downstream ports are different, local IP
- connection tracking on the machine will not be upset by the reuse of the
- same session.
- This option sets the default source for all servers in the backend. It may
- also be specified in a "defaults" section. Finer source address specification
- is possible at the server level using the "source" server option. Refer to
- section 5 for more information.
+stats show-modules
+  Enable display of extra statistics modules on the statistics page
- In order to work, "usesrc" requires root privileges, or on supported systems,
- the "cap_net_raw" capability. See also the "setcap" global directive.
+ May be used in the following contexts: http
- Examples :
- backend private
- # Connect to the servers using our 192.168.1.200 source address
- source 192.168.1.200
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
- backend transparent_ssl1
- # Connect to the SSL farm from the client's source address
- source 192.168.1.200 usesrc clientip
+ Arguments : none
- backend transparent_ssl2
- # Connect to the SSL farm from the client's source address and port
- # not recommended if IP conntrack is present on the local machine.
- source 192.168.1.200 usesrc client
-
- backend transparent_ssl3
- # Connect to the SSL farm from the client's source address. It
- # is more conntrack-friendly.
- source 192.168.1.200 usesrc clientip
-
- backend transparent_smtp
- # Connect to the SMTP farm from the client's source address/port
- # with Tproxy version 4.
- source 0.0.0.0 usesrc clientip
-
- backend transparent_http
- # Connect to the servers using the client's IP as seen by previous
- # proxy.
- source 0.0.0.0 usesrc hdr_ip(x-forwarded-for,-1)
-
- See also : the "source" server option in section 5, the Tproxy patches for
- the Linux kernel on www.balabit.com, the "bind" keyword.
-
-
-srvtcpka-cnt <count>
- Sets the maximum number of keepalive probes TCP should send before dropping
- the connection on the server side.
-
- May be used in the following contexts: tcp, http, log
-
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments :
- <count> is the maximum number of keepalive probes.
-
- This keyword corresponds to the socket option TCP_KEEPCNT. If this keyword
- is not specified, system-wide TCP parameter (tcp_keepalive_probes) is used.
- The availability of this setting depends on the operating system. It is
- known to work on Linux.
-
- See also : "option srvtcpka", "srvtcpka-idle", "srvtcpka-intvl".
-
-
-srvtcpka-idle <timeout>
- Sets the time the connection needs to remain idle before TCP starts sending
- keepalive probes, if enabled the sending of TCP keepalive packets on the
- server side.
-
- May be used in the following contexts: tcp, http, log
-
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments :
- <timeout> is the time the connection needs to remain idle before TCP starts
- sending keepalive probes. It is specified in seconds by default,
- but can be in any other unit if the number is suffixed by the
- unit, as explained at the top of this document.
-
- This keyword corresponds to the socket option TCP_KEEPIDLE. If this keyword
- is not specified, system-wide TCP parameter (tcp_keepalive_time) is used.
- The availability of this setting depends on the operating system. It is
- known to work on Linux.
-
- See also : "option srvtcpka", "srvtcpka-cnt", "srvtcpka-intvl".
-
-
-srvtcpka-intvl <timeout>
- Sets the time between individual keepalive probes on the server side.
-
- May be used in the following contexts: tcp, http, log
-
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments :
- <timeout> is the time between individual keepalive probes. It is specified
- in seconds by default, but can be in any other unit if the number
- is suffixed by the unit, as explained at the top of this
- document.
+  New columns are added at the end of the line, showing the extra statistics
+  values as a tooltip.
- This keyword corresponds to the socket option TCP_KEEPINTVL. If this keyword
- is not specified, system-wide TCP parameter (tcp_keepalive_intvl) is used.
- The availability of this setting depends on the operating system. It is
- known to work on Linux.
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters. Default behavior is not to show this information.
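+
+  Example (the backend shown is only illustrative) :
+      backend app
+          stats enable
+          stats uri /admin?stats
+          stats show-modules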
- See also : "option srvtcpka", "srvtcpka-cnt", "srvtcpka-idle".
+ See also: "stats enable", "stats uri".
-stats admin { if | unless } <cond>
- Enable statistics admin level if/unless a condition is matched
+stats show-node [ <name> ]
+ Enable reporting of a host name on the statistics page.
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | yes
-
- This statement enables the statistics admin level if/unless a condition is
- matched.
-
- The admin level allows to enable/disable servers from the web interface. By
- default, statistics page is read-only for security reasons.
-
- Currently, the POST request is limited to the buffer size minus the reserved
- buffer space, which means that if the list of servers is too long, the
- request won't be processed. It is recommended to alter few servers at a
- time.
+ yes | yes | yes | yes
- Example :
- # statistics admin level only for localhost
- backend stats_localhost
- stats enable
- stats admin if LOCALHOST
+ Arguments:
+ <name> is an optional name to be reported. If unspecified, the
+ node name from global section is automatically used instead.
- Example :
- # statistics admin level always enabled because of the authentication
- backend stats_auth
- stats enable
- stats auth admin:AdMiN123
- stats admin if TRUE
+  This statement is useful for users who offer shared services to their
+  customers, where the node name or description might be different on a stats
+  page provided for each customer. By default, the host name is not shown.
- Example :
- # statistics admin level depends on the authenticated user
- userlist stats-auth
- group admin users admin
- user admin insecure-password AdMiN123
- group readonly users haproxy
- user haproxy insecure-password haproxy
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
- backend stats_auth
+ Example:
+ # internal monitoring access (unlimited)
+ backend private_monitoring
stats enable
- acl AUTH http_auth(stats-auth)
- acl AUTH_ADMIN http_auth_group(stats-auth) admin
- stats http-request auth unless AUTH
- stats admin if AUTH_ADMIN
-
- See also : "stats enable", "stats auth", "stats http-request", section 3.4
- about userlists and section 7 about ACL usage.
-
-ssl-f-use [<sslbindconf> ...]*
- Assignate a certificate to the current frontend.
-
- May be used in the following contexts: tcp, http
-
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | no
-
- Arguments :
- <sslbindconf> supports the following keywords from the bind line
- (see Section 5.1. Bind options):
-
- - allow-0rtt
- - alpn
- - ca-file
- - ca-verify-file
- - ciphers
- - ciphersuites
- - client-sigalgs
- - crl-file
- - curves
- - ecdhe
- - no-alpn
- - no-ca-names
- - npn
- - sigalgs
- - ssl-min-ver
- - ssl-max-ver
- - verify
-
- sslbindconf also supports the following keywords from the crt-store load
- keyword (see Section 3.11.1. Load options):
-
- - crt
- - key
- - ocsp
- - issuer
- - sctl
- - ocsp-update
-
- Assignate a certificate <crtname> to a crt-list created automatically with the
- frontend name and prefixed by @ (ex: '@frontend1').
-
- This implicit crt-list will be assigned to every "ssl" bind lines in a
- frontend that does not already have the "crt" or the "crt-list" line.
- crt-list commands from the stats socket are effective with this crt-list, so
- one could replace, remove or add certificates and SSL options to it.
-
- Example :
+ stats show-node Europe-1
+ stats uri /admin?stats
+ stats refresh 5s
- frontend https
- bind :443 ssl
- bind quic4@:443 ssl
- ssl-f-use crt foobar.pem.rsa sigalgs "RSA-PSS+SHA256"
- ssl-f-use crt test.foobar.pem
- ssl-f-use crt test2.foobar.crt key test2.foobar.key ocsp test2.foobar.ocsp ocsp-update on
+ See also: "show-desc", "stats enable", "stats uri", and "node" in global
+ section.
- See also : "crt-list" and "crt".
-stats auth <user>:<passwd>
- Enable statistics with authentication and grant access to an account
+stats uri <prefix>
+ Enable statistics and define the URI prefix to access them
May be used in the following contexts: http
yes | yes | yes | yes
Arguments :
- <user> is a user name to grant access to
-
- <passwd> is the cleartext password associated to this user
+ <prefix> is the prefix of any URI which will be redirected to stats. This
+ prefix may contain a question mark ('?') to indicate part of a
+ query string.
- This statement enables statistics with default settings, and restricts access
- to declared users only. It may be repeated as many times as necessary to
- allow as many users as desired. When a user tries to access the statistics
- without a valid account, a "401 Forbidden" response will be returned so that
- the browser asks the user to provide a valid user and password. The real
- which will be returned to the browser is configurable using "stats realm".
+ The statistics URI is intercepted on the relayed traffic, so it appears as a
+ page within the normal application. It is strongly advised to ensure that the
+ selected URI will never appear in the application, otherwise it will never be
+ possible to reach it in the application.
- Since the authentication method is HTTP Basic Authentication, the passwords
- circulate in cleartext on the network. Thus, it was decided that the
- configuration file would also use cleartext passwords to remind the users
- that those ones should not be sensitive and not shared with any other account.
+ The default URI compiled in HAProxy is "/haproxy?stats", but this may be
+ changed at build time, so it's better to always explicitly specify it here.
+ It is generally a good idea to include a question mark in the URI so that
+ intermediate proxies refrain from caching the results. Also, since any string
+ beginning with the prefix will be accepted as a stats request, the question
+  mark helps ensure that no valid URI will begin with the same words.
- It is also possible to reduce the scope of the proxies which appear in the
- report using "stats scope".
+ It is sometimes very convenient to use "/" as the URI prefix, and put that
+ statement in a "listen" instance of its own. That makes it easy to dedicate
+ an address or a port to statistics only.
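+
+  For instance, a "listen" instance dedicated to statistics only could be
+  written as follows (the name and port below are only illustrative) :
+      listen stats_only
+          bind :8404
+          stats enable
+          stats uri /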
Though this statement alone is enough to enable statistics reporting, it is
recommended to set all other settings in order to avoid relying on default
stats uri /admin?stats
stats refresh 5s
- See also : "stats enable", "stats realm", "stats scope", "stats uri"
+ See also : "stats auth", "stats enable", "stats realm"
-stats enable
- Enable statistics reporting with default settings
+stick match <pattern> [table <table>] [{if | unless} <cond>]
+ Define a request pattern matching condition to stick a user to a server
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ no | no | yes | yes
- Arguments : none
+ Arguments :
+ <pattern> is a sample expression rule as described in section 7.3. It
+ describes what elements of the incoming request or connection
+             will be analyzed in the hope of finding a matching entry in a
+ stickiness table. This rule is mandatory.
- This statement enables statistics reporting with default settings defined
- at build time. Unless stated otherwise, these settings are used :
- - stats uri : /haproxy?stats
- - stats realm : "HAProxy Statistics"
- - stats auth : no authentication
- - stats scope : no restriction
-
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters.
+ <table> is an optional stickiness table name. If unspecified, the same
+ backend's table is used. A stickiness table is declared using
+ the "stick-table" statement.
- Example :
- # public access (limited to this backend only)
- backend public_www
- server srv1 192.168.0.1:80
- stats enable
- stats hide-version
- stats scope .
- stats uri /admin?stats
- stats realm HAProxy\ Statistics
- stats auth admin1:AdMiN123
- stats auth admin2:AdMiN321
+ <cond> is an optional matching condition. It makes it possible to match
+ on a certain criterion only when other conditions are met (or
+ not met). For instance, it could be used to match on a source IP
+ address except when a request passes through a known proxy, in
+ which case we'd match on a header containing that IP address.
- # internal monitoring access (unlimited)
- backend private_monitoring
- stats enable
- stats uri /admin?stats
- stats refresh 5s
+ Some protocols or applications require complex stickiness rules and cannot
+ always simply rely on cookies or hashing. The "stick match" statement
+ describes a rule to extract the stickiness criterion from an incoming request
+ or connection. See section 7 for a complete list of possible patterns and
+ transformation rules.
- See also : "stats auth", "stats realm", "stats uri"
+ The table has to be declared using the "stick-table" statement. It must be of
+ a type compatible with the pattern. By default it is the one which is present
+ in the same backend. It is possible to share a table with other backends by
+ referencing it using the "table" keyword. If another table is referenced,
+ the server IDs inside the backends are used. By default, all server IDs
+ start at 1 in each backend, so the server ordering is enough. But in case of
+ doubt, it is highly recommended to force server IDs using their "id" setting.
+ It is possible to restrict the conditions where a "stick match" statement
+ will apply, using "if" or "unless" followed by a condition. See section 7 for
+ ACL based conditions.
-stats hide-version
- Enable statistics and hide HAProxy version reporting
+ There is no limit on the number of "stick match" statements. The first that
+ applies and matches will cause the request to be directed to the same server
+ as was used for the request which created the entry. That way, multiple
+ matches can be used as fallbacks.
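+ For instance, two criteria may be arranged by decreasing preference order,
+ the second acting as a fallback. The sketch below is only illustrative (the
+ "sessid" URL parameter and the table layout are hypothetical) :
+
+ # prefer a session ID found in the URL, fall back on the source address
+ backend app
+ mode http
+ balance roundrobin
+ # string table fed by "stick store-request" from the URL parameter
+ stick-table type string len 32 size 100k expire 30m
+ stick match url_param(sessid)
+ stick store-request url_param(sessid)
+ # fallback criterion, kept in a table hosted by a dedicated backend
+ stick on src table srcaddr
+ server s1 192.168.1.1:80
+ server s2 192.168.1.2:80
+
+ backend srcaddr
+ stick-table type ip size 200k expire 30m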
- May be used in the following contexts: http
+ The stick rules are checked after the persistence cookies, so they will not
+ affect stickiness if a cookie has already been used to select a server. That
+ way, it becomes very easy to insert cookies and match on IP addresses in
+ order to maintain stickiness between HTTP and HTTPS.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ Example :
+ # forward SMTP users to the same server they just used for POP in the
+ # last 30 minutes
+ backend pop
+ mode tcp
+ balance roundrobin
+ stick store-request src
+ stick-table type ip size 200k expire 30m
+ server s1 192.168.1.1:110
+ server s2 192.168.1.1:110
- Arguments : none
+ backend smtp
+ mode tcp
+ balance roundrobin
+ stick match src table pop
+ server s1 192.168.1.1:25
+ server s2 192.168.1.1:25
- By default, the stats page reports some useful status information along with
- the statistics. Among them is HAProxy's version. However, it is generally
- considered dangerous to report precise version to anyone, as it can help them
- target known weaknesses with specific attacks. The "stats hide-version"
- statement removes the version from the statistics report. This is recommended
- for public sites or any site with a weak login/password.
+ See also : "stick-table", "stick on", section 11 about stick-tables, and
+ section 7 about ACLs and samples fetching.
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters.
- Example :
- # public access (limited to this backend only)
- backend public_www
- server srv1 192.168.0.1:80
- stats enable
- stats hide-version
- stats scope .
- stats uri /admin?stats
- stats realm HAProxy\ Statistics
- stats auth admin1:AdMiN123
- stats auth admin2:AdMiN321
+stick on <pattern> [table <table>] [{if | unless} <condition>]
+ Define a request pattern to associate a user to a server
- # internal monitoring access (unlimited)
- backend private_monitoring
- stats enable
- stats uri /admin?stats
- stats refresh 5s
+ May be used in the following contexts: tcp, http
- See also : "stats auth", "stats enable", "stats realm", "stats uri"
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+ Note : This form is exactly equivalent to "stick match" followed by
+ "stick store-request", all with the same arguments. Please refer
+ to both keywords for details. It is only provided as a convenience
+ for writing more maintainable configurations.
-stats http-request { allow | deny | auth [realm <realm>] }
- [ { if | unless } <condition> ]
- Access control for statistics
+ Examples :
+ # The following form ...
+ stick on src table pop if !localhost
- May be used in the following contexts: http
+ # ...is strictly equivalent to this one :
+ stick match src table pop if !localhost
+ stick store-request src table pop if !localhost
- May be used in sections: defaults | frontend | listen | backend
- no | no | yes | yes
- As "http-request", these set of options allow to fine control access to
- statistics. Each option may be followed by if/unless and acl.
- First option with matched condition (or option without condition) is final.
- For "deny" a 403 error will be returned, for "allow" normal processing is
- performed, for "auth" a 401/407 error code is returned so the client
- should be asked to enter a username and password.
+ # Use cookie persistence for HTTP, and stick on source address for HTTPS as
+ # well as HTTP without cookie. Share the same table between both accesses.
+ backend http
+ mode http
+ balance roundrobin
+ stick on src table https
+ cookie SRV insert indirect nocache
+ server s1 192.168.1.1:80 cookie s1
+ server s2 192.168.1.1:80 cookie s2
- There is no fixed limit to the number of http-request statements per
- instance.
+ backend https
+ mode tcp
+ balance roundrobin
+ stick-table type ip size 200k expire 30m
+ stick on src
+ server s1 192.168.1.1:443
+ server s2 192.168.1.1:443
- See also : "http-request", section 3.4 about userlists and section 7
- about ACL usage.
+ See also : "stick match", "stick store-request", and section 11 about
+ stick-tables.
-stats realm <realm>
- Enable statistics and set authentication realm
+stick store-request <pattern> [table <table>] [{if | unless} <condition>]
+ Define a request pattern used to create an entry in a stickiness table
- May be used in the following contexts: http
+ May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ no | no | yes | yes
Arguments :
- <realm> is the name of the HTTP Basic Authentication realm reported to
- the browser. The browser uses it to display it in the pop-up
- inviting the user to enter a valid username and password.
-
- The realm is read as a single word, so any spaces in it should be escaped
- using a backslash ('\').
-
- This statement is useful only in conjunction with "stats auth" since it is
- only related to authentication.
+ <pattern> is a sample expression rule as described in section 7.3. It
+ describes what elements of the incoming request or connection
+ will be analyzed, extracted and stored in the table once a
+ server is selected.
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters.
+ <table> is an optional stickiness table name. If unspecified, the same
+ backend's table is used. A stickiness table is declared using
+ the "stick-table" statement.
- Example :
- # public access (limited to this backend only)
- backend public_www
- server srv1 192.168.0.1:80
- stats enable
- stats hide-version
- stats scope .
- stats uri /admin?stats
- stats realm HAProxy\ Statistics
- stats auth admin1:AdMiN123
- stats auth admin2:AdMiN321
+ <cond> is an optional storage condition. It makes it possible to store
+ certain criteria only when some conditions are met (or not met).
+ For instance, it could be used to store the source IP address
+ except when the request passes through a known proxy, in which
+ case we'd store a converted form of a header containing that IP
+ address.
- # internal monitoring access (unlimited)
- backend private_monitoring
- stats enable
- stats uri /admin?stats
- stats refresh 5s
+ Some protocols or applications require complex stickiness rules and cannot
+ always simply rely on cookies or hashing. The "stick store-request" statement
+ describes a rule to decide what to extract from the request and when to do
+ it, in order to store it into a stickiness table for further requests to
+ match it using the "stick match" statement. Obviously the extracted part must
+ make sense and have a chance to be matched in a further request. Storing a
+ client's IP address for instance often makes sense. Storing an ID found in a
+ URL parameter also makes sense. Storing a source port will almost never make
+ any sense because it will be randomly matched. See section 7 for a complete
+ list of possible patterns and transformation rules.
- See also : "stats auth", "stats enable", "stats uri"
+ The table has to be declared using the "stick-table" statement. It must be of
+ a type compatible with the pattern. By default it is the one which is present
+ in the same backend. It is possible to share a table with other backends by
+ referencing it using the "table" keyword. If another table is referenced,
+ the server IDs inside the backends are used. By default, all server IDs
+ start at 1 in each backend, so the server ordering is enough. But in case of
+ doubt, it is highly recommended to force server IDs using their "id" setting.
+ It is possible to restrict the conditions where a "stick store-request"
+ statement will apply, using "if" or "unless" followed by a condition. This
+ condition will be evaluated while parsing the request, so any criteria can be
+ used. See section 7 for ACL based conditions.
-stats refresh <delay>
- Enable statistics with automatic refresh
+ There is no limit on the number of "stick store-request" statements, but
+ there is a limit of 8 simultaneous stores per request or response. This
+ makes it possible to store up to 8 criteria, all extracted from either the
+ request or the response, regardless of the number of rules. Only the 8 first
+ ones which match will be kept. Using this, it is possible to feed multiple
+ tables at once in the hope to increase the chance to recognize a user on
+ another protocol or access method. Using multiple store-request rules with
+ the same table is possible and may be used to find the best criterion to rely
+ on, by arranging the rules by decreasing preference order. Only the first
+ extracted criterion for a given table will be stored. All subsequent store-
+ request rules referencing the same table will be skipped and their ACLs will
+ not be evaluated.
- May be used in the following contexts: http
+ The "store-request" rules are evaluated once the server connection has been
+ established, so that the table will contain the real server that processed
+ the request.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ Example :
+ # forward SMTP users to the same server they just used for POP in the
+ # last 30 minutes
+ backend pop
+ mode tcp
+ balance roundrobin
+ stick store-request src
+ stick-table type ip size 200k expire 30m
+ server s1 192.168.1.1:110
+ server s2 192.168.1.1:110
- Arguments :
- <delay> is the suggested refresh delay, specified in seconds, which will
- be returned to the browser consulting the report page. While the
- browser is free to apply any delay, it will generally respect it
- and refresh the page this every seconds. The refresh interval may
- be specified in any other non-default time unit, by suffixing the
- unit after the value, as explained at the top of this document.
+ backend smtp
+ mode tcp
+ balance roundrobin
+ stick match src table pop
+ server s1 192.168.1.1:25
+ server s2 192.168.1.1:25
- This statement is useful on monitoring displays with a permanent page
- reporting the load balancer's activity. When set, the HTML report page will
- include a link "refresh"/"stop refresh" so that the user can select whether
- they want automatic refresh of the page or not.
+ See also : "stick-table", "stick on", section 11 about stick-tables, and
+ section 7 about ACLs and sample fetching.
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters.
- Example :
- # public access (limited to this backend only)
- backend public_www
- server srv1 192.168.0.1:80
- stats enable
- stats hide-version
- stats scope .
- stats uri /admin?stats
- stats realm HAProxy\ Statistics
- stats auth admin1:AdMiN123
- stats auth admin2:AdMiN321
+stick store-response <pattern> [table <table>] [{if | unless} <condition>]
+ Define a response pattern used to create an entry in a stickiness table
- # internal monitoring access (unlimited)
- backend private_monitoring
- stats enable
- stats uri /admin?stats
- stats refresh 5s
+ May be used in the following contexts: tcp, http
- See also : "stats auth", "stats enable", "stats realm", "stats uri"
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+ Arguments :
+ <pattern> is a sample expression rule as described in section 7.3. It
+ describes what elements of the response or connection will
+ be analyzed, extracted and stored in the table once a
+ server is selected.
-stats scope { <name> | "." }
- Enable statistics and limit access scope
+ <table> is an optional stickiness table name. If unspecified, the same
+ backend's table is used. A stickiness table is declared using
+ the "stick-table" statement.
- May be used in the following contexts: http
+ <cond> is an optional storage condition. It makes it possible to store
+ certain criteria only when some conditions are met (or not met).
+ For instance, it could be used to store the SSL session ID only
+ when the response is an SSL server hello.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ Some protocols or applications require complex stickiness rules and cannot
+ always simply rely on cookies or hashing. The "stick store-response"
+ statement describes a rule to decide what to extract from the response and
+ when to do it, in order to store it into a stickiness table for further
+ requests to match it using the "stick match" statement. Obviously the
+ extracted part must make sense and have a chance to be matched in a further
+ request. Storing an ID found in a header of a response makes sense.
+ See section 7 for a complete list of possible patterns and transformation
+ rules.
- Arguments :
- <name> is the name of a listen, frontend or backend section to be
- reported. The special name "." (a single dot) designates the
- section in which the statement appears.
+ The table has to be declared using the "stick-table" statement. It must be of
+ a type compatible with the pattern. By default it is the one which is present
+ in the same backend. It is possible to share a table with other backends by
+ referencing it using the "table" keyword. If another table is referenced,
+ the server IDs inside the backends are used. By default, all server IDs
+ start at 1 in each backend, so the server ordering is enough. But in case of
+ doubt, it is highly recommended to force server IDs using their "id" setting.
- When this statement is specified, only the sections enumerated with this
- statement will appear in the report. All other ones will be hidden. This
- statement may appear as many times as needed if multiple sections need to be
- reported. Please note that the name checking is performed as simple string
- comparisons, and that it is never checked that a give section name really
- exists.
+ It is possible to restrict the conditions where a "stick store-response"
+ statement will apply, using "if" or "unless" followed by a condition. This
+ condition will be evaluated while parsing the response, so any criteria can
+ be used. See section 7 for ACL based conditions.
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters.
+ There is no limit on the number of "stick store-response" statements, but
+ there is a limit of 8 simultaneous stores per request or response. This
+ makes it possible to store up to 8 criteria, all extracted from either the
+ request or the response, regardless of the number of rules. Only the 8 first
+ ones which match will be kept. Using this, it is possible to feed multiple
+ tables at once in the hope to increase the chance to recognize a user on
+ another protocol or access method. Using multiple store-response rules with
+ the same table is possible and may be used to find the best criterion to rely
+ on, by arranging the rules by decreasing preference order. Only the first
+ extracted criterion for a given table will be stored. All subsequent store-
+ response rules referencing the same table will be skipped and their ACLs will
+ not be evaluated. However, even if a store-request rule references a table, a
+ store-response rule may also use the same table. This means that each table
+ may learn exactly one element from the request and one element from the
+ response at once.
+
+ The table will contain the real server that processed the request.
Example :
- # public access (limited to this backend only)
- backend public_www
- server srv1 192.168.0.1:80
- stats enable
- stats hide-version
- stats scope .
- stats uri /admin?stats
- stats realm HAProxy\ Statistics
- stats auth admin1:AdMiN123
- stats auth admin2:AdMiN321
+ # Learn SSL session ID from both request and response and create affinity.
+ backend https
+ mode tcp
+ balance roundrobin
+ # maximum SSL session ID length is 32 bytes.
+ stick-table type binary len 32 size 30k expire 30m
- # internal monitoring access (unlimited)
- backend private_monitoring
- stats enable
- stats uri /admin?stats
- stats refresh 5s
+ acl clienthello req.ssl_hello_type 1
+ acl serverhello res.ssl_hello_type 2
- See also : "stats auth", "stats enable", "stats realm", "stats uri"
+ # use tcp content accepts to detect SSL client and server hello.
+ tcp-request inspect-delay 5s
+ tcp-request content accept if clienthello
+ # no timeout on response inspect delay by default.
+ tcp-response content accept if serverhello
-stats show-desc [ <desc> ]
- Enable reporting of a description on the statistics page.
+ # SSL session ID (SSLID) may be present on a client or server hello.
+ # Its length is coded on 1 byte at offset 43 and its value starts
+ # at offset 44.
- May be used in the following contexts: http
+ # Match and learn on request if client hello.
+ stick on req.payload_lv(43,1) if clienthello
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ # Learn on response if server hello.
+ stick store-response res.payload_lv(43,1) if serverhello
- <desc> is an optional description to be reported. If unspecified, the
- description from global section is automatically used instead.
+ server s1 192.168.1.1:443
+ server s2 192.168.1.1:443
- This statement is useful for users that offer shared services to their
- customers, where node or description should be different for each customer.
+ See also : "stick-table", "stick on", section 11 about stick-tables, and
+ section 7 about ACLs and pattern extraction.
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters. By default description is not shown.
- Example :
- # internal monitoring access (unlimited)
- backend private_monitoring
- stats enable
- stats show-desc Master node for Europe, Asia, Africa
- stats uri /admin?stats
- stats refresh 5s
+stick-table type <type> size <size> [expire <expire>] [args...]
+ Configure the stickiness table for the current section
- See also: "show-node", "stats enable", "stats uri" and "description" in
- global section.
+ May be used in the following contexts: tcp, http
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
-stats show-legends
- Enable reporting additional information on the statistics page
+ This is used to declare and configure a stick-table. Please refer to section
+ 11.1 for the complete details and the list of supported arguments. Only the
+ type and the size are mandatory.
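+ As a minimal illustration (sizes, addresses and names are placeholders), a
+ backend could track clients by source address for 30 minutes like this :
+
+ backend app
+ mode http
+ balance roundrobin
+ stick-table type ip size 200k expire 30m
+ stick on src
+ server s1 192.168.1.1:80
+ server s2 192.168.1.2:80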
- May be used in the following contexts: http
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+tcp-check comment <string>
+ Defines a comment for the following tcp-check rule, reported in logs if
+ it fails.
- Arguments : none
+ May be used in the following contexts: tcp, http, log
- Enable reporting additional information on the statistics page :
- - cap: capabilities (proxy)
- - mode: one of tcp, http or health (proxy)
- - id: SNMP ID (proxy, socket, server)
- - IP (socket, server)
- - cookie (backend, server)
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters. Default behavior is not to show this information.
+ Arguments :
+ <string> is the comment message to add in logs if the following tcp-check
+ rule fails.
- See also: "stats enable", "stats uri".
+ It only works for connect, send and expect rules. It is useful for
+ user-friendly error reporting.
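+ For instance, a comment may label each probe so that a failing step is easy
+ to identify in the logs (a minimal sketch, the probed service being purely
+ illustrative) :
+
+ option tcp-check
+ tcp-check comment POP\ phase
+ tcp-check connect port 110
+ tcp-check expect string +OK\ POP3\ ready
+ server mail 10.0.0.1 check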
+ See also : "option tcp-check", "tcp-check connect", "tcp-check send" and
+ "tcp-check expect".
-stats show-modules
- Enable display of extra statistics module on the statistics page
- May be used in the following contexts: http
+tcp-check connect [default] [port <expr>] [addr <ip>] [send-proxy] [via-socks4]
+ [ssl] [sni <sni>] [alpn <alpn>] [linger]
+ [proto <name>] [comment <msg>]
+ Opens a new connection
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ May be used in the following contexts: tcp, http, log
- Arguments : none
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- New columns are added at the end of the line containing the extra statistics
- values as a tooltip.
+ Arguments :
+ comment <msg> defines a message to report if the rule evaluation fails.
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters. Default behavior is not to show this information.
+ default Use default options of the server line to do the health
+ checks. The server options are used only if not redefined.
- See also: "stats enable", "stats uri".
+ port <expr> if not set, the check port or the server port is used.
+ It tells HAProxy where to open the connection to.
+ <expr> must be a valid TCP port, from 1 to 65535, or a
+ sample fetch expression returning such an integer.
+ addr <ip> defines the IP address to do the health check.
-stats show-node [ <name> ]
- Enable reporting of a host name on the statistics page.
+ send-proxy send a PROXY protocol string
- May be used in the following contexts: http
+ via-socks4 enables outgoing health checks using upstream socks4 proxy.
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ ssl opens a ciphered connection
- Arguments:
- <name> is an optional name to be reported. If unspecified, the
- node name from global section is automatically used instead.
+ sni <sni> specifies the SNI to use to do health checks over SSL.
- This statement is useful for users that offer shared services to their
- customers, where node or description might be different on a stats page
- provided for each customer. Default behavior is not to show host name.
+ alpn <alpn> defines which protocols to advertise with ALPN. The protocol
+ list consists of a comma-delimited list of protocol names,
+ for instance: "http/1.1,http/1.0" (without quotes).
+ If it is not set, the server ALPN is used.
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters.
-
- Example:
- # internal monitoring access (unlimited)
- backend private_monitoring
- stats enable
- stats show-node Europe-1
- stats uri /admin?stats
- stats refresh 5s
-
- See also: "show-desc", "stats enable", "stats uri", and "node" in global
- section.
-
-
-stats uri <prefix>
- Enable statistics and define the URI prefix to access them
-
- May be used in the following contexts: http
-
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | yes
+ proto <name> forces the multiplexer's protocol to use for this connection.
+ It must be a TCP mux protocol and it must be usable on the
+ backend side. The list of available protocols is reported in
+ haproxy -vv.
- Arguments :
- <prefix> is the prefix of any URI which will be redirected to stats. This
- prefix may contain a question mark ('?') to indicate part of a
- query string.
+ linger cleanly close the connection instead of using a single RST.
- The statistics URI is intercepted on the relayed traffic, so it appears as a
- page within the normal application. It is strongly advised to ensure that the
- selected URI will never appear in the application, otherwise it will never be
- possible to reach it in the application.
+ When an application listens on more than a single TCP port or when HAProxy
+ load-balances many services in a single backend, it makes sense to probe all
+ the services individually before considering a server as operational.
- The default URI compiled in HAProxy is "/haproxy?stats", but this may be
- changed at build time, so it's better to always explicitly specify it here.
- It is generally a good idea to include a question mark in the URI so that
- intermediate proxies refrain from caching the results. Also, since any string
- beginning with the prefix will be accepted as a stats request, the question
- mark helps ensuring that no valid URI will begin with the same words.
+ When no TCP port is configured on the server line and no server "port"
+ directive is set, the 'tcp-check connect port <port>' rule must be the first
+ step of the sequence.
- It is sometimes very convenient to use "/" as the URI prefix, and put that
- statement in a "listen" instance of its own. That makes it easy to dedicate
- an address or a port to statistics only.
+ In a tcp-check ruleset, a 'connect' rule is required and the ruleset must
+ start with one. The purpose is to make sure admins know exactly what they
+ are doing.
- Though this statement alone is enough to enable statistics reporting, it is
- recommended to set all other settings in order to avoid relying on default
- unobvious parameters.
+ Although a 'connect' rule must start the ruleset, it may still be preceded
+ by set-var, unset-var or comment rules.
- Example :
- # public access (limited to this backend only)
- backend public_www
- server srv1 192.168.0.1:80
- stats enable
- stats hide-version
- stats scope .
- stats uri /admin?stats
- stats realm HAProxy\ Statistics
- stats auth admin1:AdMiN123
- stats auth admin2:AdMiN321
+ Examples :
+ # check HTTP and HTTPs services on a server.
+ # first open port 80 thanks to server line port directive, then
+ # tcp-check opens port 443, ciphered and run a request on it:
+ option tcp-check
+ tcp-check connect
+ tcp-check send GET\ /\ HTTP/1.0\r\n
+ tcp-check send Host:\ haproxy.1wt.eu\r\n
+ tcp-check send \r\n
+ tcp-check expect rstring (2..|3..)
+ tcp-check connect port 443 ssl
+ tcp-check send GET\ /\ HTTP/1.0\r\n
+ tcp-check send Host:\ haproxy.1wt.eu\r\n
+ tcp-check send \r\n
+ tcp-check expect rstring (2..|3..)
+ server www 10.0.0.1 check port 80
- # internal monitoring access (unlimited)
- backend private_monitoring
- stats enable
- stats uri /admin?stats
- stats refresh 5s
+ # check both POP and IMAP from a single server:
+ option tcp-check
+ tcp-check connect port 110 linger
+ tcp-check expect string +OK\ POP3\ ready
+ tcp-check connect port 143
+ tcp-check expect string *\ OK\ IMAP4\ ready
+ server mail 10.0.0.1 check
- See also : "stats auth", "stats enable", "stats realm"
+ See also : "option tcp-check", "tcp-check send", "tcp-check expect"
-stick match <pattern> [table <table>] [{if | unless} <cond>]
- Define a request pattern matching condition to stick a user to a server
+tcp-check expect [min-recv <int>] [comment <msg>]
+ [ok-status <st>] [error-status <st>] [tout-status <st>]
+ [on-success <fmt>] [on-error <fmt>] [status-code <expr>]
+ [!] <match> <pattern>
+ Specify data to be collected and analyzed during a generic health check
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: tcp, http, log
- May be used in sections : defaults | frontend | listen | backend
- no | no | yes | yes
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
Arguments :
- <pattern> is a sample expression rule as described in section 7.3. It
- describes what elements of the incoming request or connection
- will be analyzed in the hope to find a matching entry in a
- stickiness table. This rule is mandatory.
+ comment <msg> defines a message to report if the rule evaluation fails.
- <table> is an optional stickiness table name. If unspecified, the same
- backend's table is used. A stickiness table is declared using
- the "stick-table" statement.
+ min-recv is optional and can define the minimum amount of data required to
+ evaluate the current expect rule. If the number of received bytes
+ is under this limit, the check will wait for more data. This
+ option can be used to resolve some ambiguous matching rules or to
+ avoid executing costly regex matches on content known to be still
+ incomplete. If an exact string (string or binary) is used, the
+ minimum between the string length and this parameter is used.
+ This parameter is ignored if it is set to -1. If the expect rule
+ does not match, the check will wait for more data. If set to 0,
+ the evaluation result is always conclusive.
- <cond> is an optional matching condition. It makes it possible to match
- on a certain criterion only when other conditions are met (or
- not met). For instance, it could be used to match on a source IP
- address except when a request passes through a known proxy, in
- which case we'd match on a header containing that IP address.
+ <match> is a keyword indicating how to look for a specific pattern in the
+ response. The keyword may be one of "string", "rstring", "binary" or
+ "rbinary".
+ The keyword may be preceded by an exclamation mark ("!") to negate
+ the match. Spaces are allowed between the exclamation mark and the
+ keyword. See below for more details on the supported keywords.
- Some protocols or applications require complex stickiness rules and cannot
- always simply rely on cookies nor hashing. The "stick match" statement
- describes a rule to extract the stickiness criterion from an incoming request
- or connection. See section 7 for a complete list of possible patterns and
- transformation rules.
+ ok-status <st> is optional and can be used to set the check status if
+ the expect rule is successfully evaluated and if it is
+ the last rule in the tcp-check ruleset. "L7OK", "L7OKC",
+ "L6OK" and "L4OK" are supported :
+ - L7OK : check passed on layer 7
+ - L7OKC : check conditionally passed on layer 7, set
+ server to NOLB state.
+ - L6OK : check passed on layer 6
+ - L4OK : check passed on layer 4
+ By default "L7OK" is used.
- The table has to be declared using the "stick-table" statement. It must be of
- a type compatible with the pattern. By default it is the one which is present
- in the same backend. It is possible to share a table with other backends by
- referencing it using the "table" keyword. If another table is referenced,
- the server's ID inside the backends are used. By default, all server IDs
- start at 1 in each backend, so the server ordering is enough. But in case of
- doubt, it is highly recommended to force server IDs using their "id" setting.
+ error-status <st> is optional and can be used to set the check status if
+ an error occurred during the expect rule evaluation.
+ "L7OKC", "L7RSP", "L7STS", "L6RSP" and "L4CON" are
+ supported :
+ - L7OKC : check conditionally passed on layer 7, set
+ server to NOLB state.
+ - L7RSP : layer 7 invalid response - protocol error
+ - L7STS : layer 7 response error, for example HTTP 5xx
+ - L6RSP : layer 6 invalid response - protocol error
+ - L4CON : layer 1-4 connection problem
+ By default "L7RSP" is used.
- It is possible to restrict the conditions where a "stick match" statement
- will apply, using "if" or "unless" followed by a condition. See section 7 for
- ACL based conditions.
+ tout-status <st> is optional and can be used to set the check status if
+ a timeout occurred during the expect rule evaluation.
+ "L7TOUT", "L6TOUT", and "L4TOUT" are supported :
+ - L7TOUT : layer 7 (HTTP/SMTP) timeout
+ - L6TOUT : layer 6 (SSL) timeout
+ - L4TOUT : layer 1-4 timeout
+ By default "L7TOUT" is used.
- There is no limit on the number of "stick match" statements. The first that
- applies and matches will cause the request to be directed to the same server
- as was used for the request which created the entry. That way, multiple
- matches can be used as fallbacks.
+ on-success <fmt> is optional and can be used to customize the
+ informational message reported in logs if the expect
+ rule is successfully evaluated and if it is the last rule
+ in the tcp-check ruleset. <fmt> is a Custom log format
+ (see section 8.2.6).
- The stick rules are checked after the persistence cookies, so they will not
- affect stickiness if a cookie has already been used to select a server. That
- way, it becomes very easy to insert cookies and match on IP addresses in
- order to maintain stickiness between HTTP and HTTPS.
+ on-error <fmt> is optional and can be used to customize the
+ informational message reported in logs if an error
+ occurred during the expect rule evaluation. <fmt> is a
+ Custom log format (see section 8.2.6).
- Example :
- # forward SMTP users to the same server they just used for POP in the
- # last 30 minutes
- backend pop
- mode tcp
- balance roundrobin
- stick store-request src
- stick-table type ip size 200k expire 30m
- server s1 192.168.1.1:110
- server s2 192.168.1.1:110
+ status-code <expr> is optional and can be used to set the check status code
+ reported in logs, on success or on error. <expr> is a
+ standard HAProxy expression formed by a sample-fetch
+ followed by some converters.
- backend smtp
- mode tcp
- balance roundrobin
- stick match src table pop
- server s1 192.168.1.1:25
- server s2 192.168.1.1:25
+ <pattern> is the pattern to look for. It may be a string or a regular
+ expression. If the pattern contains spaces, they must be escaped
+ with the usual backslash ('\').
+                If the match is set to binary, then the pattern must be passed
+                as an even number of hexadecimal digits, each pair of digits
+                representing one byte. The hexadecimal digits may be written
+                in upper or lower case.
- See also : "stick-table", "stick on", section 11 about stick-tables, and
- section 7 about ACLs and samples fetching.
+ The available matches are intentionally similar to their http-check cousins :
+ string <string> : test the exact string matches in the response buffer.
+ A health check response will be considered valid if the
+ response's buffer contains this exact string. If the
+ "string" keyword is prefixed with "!", then the response
+ will be considered invalid if the body contains this
+ string. This can be used to look for a mandatory pattern
+ in a protocol response, or to detect a failure when a
+ specific error appears in a protocol banner.
-stick on <pattern> [table <table>] [{if | unless} <condition>]
- Define a request pattern to associate a user to a server
+ rstring <regex> : test a regular expression on the response buffer.
+ A health check response will be considered valid if the
+ response's buffer matches this expression. If the
+ "rstring" keyword is prefixed with "!", then the response
+ will be considered invalid if the body matches the
+ expression.
- May be used in the following contexts: tcp, http
+ string-lf <fmt> : test a Custom log format match in the response's buffer.
+ A health check response will be considered valid if the
+                response's buffer contains the string resulting from the
+                evaluation of <fmt>, which follows the Custom log format
+ rules described in section 8.2.6. If prefixed with "!",
+ then the response will be considered invalid if the
+ buffer contains the string.
- May be used in sections : defaults | frontend | listen | backend
- no | no | yes | yes
+  binary <hexstring> : test that the exact string, given in its hexadecimal
+                form, matches in the response buffer. A health check
+                response will be considered valid if the response's
+                buffer contains this exact hexadecimal string.
+                The purpose is to match data on binary protocols.
- Note : This form is exactly equivalent to "stick match" followed by
- "stick store-request", all with the same arguments. Please refer
- to both keywords for details. It is only provided as a convenience
- for writing more maintainable configurations.
+ rbinary <regex> : test a regular expression on the response buffer, like
+ "rstring". However, the response buffer is transformed
+ into its hexadecimal form, including NUL-bytes. This
+ allows using all regex engines to match any binary
+ content. The hexadecimal transformation takes twice the
+ size of the original response. As such, the expected
+                pattern should work on at most half the response buffer
+ size.
- Examples :
- # The following form ...
- stick on src table pop if !localhost
+  binary-lf <hexfmt> : test a Custom log format, in its hexadecimal form,
+                against the response's buffer. A health check response
+                will be considered valid if the response's buffer
+                contains the hexadecimal string resulting from the
+                evaluation of <hexfmt>, which follows the Custom log
+                format rules (see section 8.2.6). If prefixed with "!",
+                then the response will be considered invalid if the
+                buffer contains the hexadecimal string. The hexadecimal
+                string is converted to a binary string before matching
+                the response's buffer.
- # ...is strictly equivalent to this one :
- stick match src table pop if !localhost
- stick store-request src table pop if !localhost
+ It is important to note that the responses will be limited to a certain size
+ defined by the global "tune.bufsize" option, which defaults to 16384 bytes.
+  Thus, too large responses may not contain the mandatory pattern when using
+  "string", "rstring" or "binary". If a large response is absolutely required,
+  it is possible to change the default maximum size by setting this global
+  option. However, it is worth keeping in mind that parsing very large
+  responses can waste some CPU cycles, especially when regular expressions
+  are used, and that it is always better to focus the checks on smaller
+  resources. Also, in its current state, the check will not find any string
+  or regex past a null character in the response. Similarly, it is not
+  possible to request matching the null character.
+ Examples :
+ # perform a POP check
+ option tcp-check
+ tcp-check expect string +OK\ POP3\ ready
- # Use cookie persistence for HTTP, and stick on source address for HTTPS as
- # well as HTTP without cookie. Share the same table between both accesses.
- backend http
- mode http
- balance roundrobin
- stick on src table https
- cookie SRV insert indirect nocache
- server s1 192.168.1.1:80 cookie s1
- server s2 192.168.1.1:80 cookie s2
+ # perform an IMAP check
+ option tcp-check
+ tcp-check expect string *\ OK\ IMAP4\ ready
- backend https
- mode tcp
- balance roundrobin
- stick-table type ip size 200k expire 30m
- stick on src
- server s1 192.168.1.1:443
- server s2 192.168.1.1:443
+ # look for the redis master server
+ option tcp-check
+ tcp-check send PING\r\n
+ tcp-check expect string +PONG
+ tcp-check send info\ replication\r\n
+ tcp-check expect string role:master
+ tcp-check send QUIT\r\n
+ tcp-check expect string +OK
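+
+  As a sketch, the optional arguments above may be combined on a single rule
+  to improve error reporting. The banner string and messages below are
+  illustrative :
+
+      # report a custom message and status when the banner check fails
+      option tcp-check
+      tcp-check expect comment "POP3 banner" error-status L7RSP \
+          on-error "unexpected POP3 banner" string +OK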
- See also : "stick match", "stick store-request", and section 11 about
- stick-tables.
+
+ See also : "option tcp-check", "tcp-check connect", "tcp-check send",
+ "tcp-check send-binary", "http-check expect", tune.bufsize
-stick store-request <pattern> [table <table>] [{if | unless} <condition>]
- Define a request pattern used to create an entry in a stickiness table
+tcp-check send <data> [comment <msg>]
+tcp-check send-lf <fmt> [comment <msg>]
+ Specify a string or a Custom log format to be sent as a question during a
+ generic health check
- May be used in the following contexts: tcp, http
+ May be used in the following contexts: tcp, http, log
- May be used in sections : defaults | frontend | listen | backend
- no | no | yes | yes
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
Arguments :
- <pattern> is a sample expression rule as described in section 7.3. It
- describes what elements of the incoming request or connection
- will be analyzed, extracted and stored in the table once a
- server is selected.
+ comment <msg> defines a message to report if the rule evaluation fails.
- <table> is an optional stickiness table name. If unspecified, the same
- backend's table is used. A stickiness table is declared using
- the "stick-table" statement.
+ <data> is the string that will be sent during a generic health
+ check session.
- <cond> is an optional storage condition. It makes it possible to store
- certain criteria only when some conditions are met (or not met).
- For instance, it could be used to store the source IP address
- except when the request passes through a known proxy, in which
- case we'd store a converted form of a header containing that IP
- address.
+ <fmt> is the Custom log format that will be sent, once evaluated,
+ during a generic health check session (see section 8.2.6).
- Some protocols or applications require complex stickiness rules and cannot
- always simply rely on cookies nor hashing. The "stick store-request" statement
- describes a rule to decide what to extract from the request and when to do
- it, in order to store it into a stickiness table for further requests to
- match it using the "stick match" statement. Obviously the extracted part must
- make sense and have a chance to be matched in a further request. Storing a
- client's IP address for instance often makes sense. Storing an ID found in a
- URL parameter also makes sense. Storing a source port will almost never make
- any sense because it will be randomly matched. See section 7 for a complete
- list of possible patterns and transformation rules.
+ Examples :
+ # look for the redis master server
+ option tcp-check
+ tcp-check send info\ replication\r\n
+ tcp-check expect string role:master
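+
+  As a sketch, "send-lf" evaluates its argument at check time, so variables
+  or log-format tags may be embedded in the question. The greeting below is
+  purely illustrative :
+
+      option tcp-check
+      # store the local hostname, then send it in the question
+      tcp-check set-var-fmt(check.host) "%H"
+      tcp-check send-lf "EHLO %[var(check.host)]\r\n"
+      tcp-check expect rstring ^250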
- The table has to be declared using the "stick-table" statement. It must be of
- a type compatible with the pattern. By default it is the one which is present
- in the same backend. It is possible to share a table with other backends by
- referencing it using the "table" keyword. If another table is referenced,
- the server's ID inside the backends are used. By default, all server IDs
- start at 1 in each backend, so the server ordering is enough. But in case of
- doubt, it is highly recommended to force server IDs using their "id" setting.
+ See also : "option tcp-check", "tcp-check connect", "tcp-check expect",
+ "tcp-check send-binary", tune.bufsize
- It is possible to restrict the conditions where a "stick store-request"
- statement will apply, using "if" or "unless" followed by a condition. This
- condition will be evaluated while parsing the request, so any criteria can be
- used. See section 7 for ACL based conditions.
- There is no limit on the number of "stick store-request" statements, but
- there is a limit of 8 simultaneous stores per request or response. This
- makes it possible to store up to 8 criteria, all extracted from either the
- request or the response, regardless of the number of rules. Only the 8 first
- ones which match will be kept. Using this, it is possible to feed multiple
- tables at once in the hope to increase the chance to recognize a user on
- another protocol or access method. Using multiple store-request rules with
- the same table is possible and may be used to find the best criterion to rely
- on, by arranging the rules by decreasing preference order. Only the first
- extracted criterion for a given table will be stored. All subsequent store-
- request rules referencing the same table will be skipped and their ACLs will
- not be evaluated.
+tcp-check send-binary <hexstring> [comment <msg>]
+tcp-check send-binary-lf <hexfmt> [comment <msg>]
+  Specify a string of hexadecimal digits, or a Custom log format evaluating to
+  one, to be sent as a binary question during a raw TCP health check
- The "store-request" rules are evaluated once the server connection has been
- established, so that the table will contain the real server that processed
- the request.
+ May be used in the following contexts: tcp, http, log
- Example :
- # forward SMTP users to the same server they just used for POP in the
- # last 30 minutes
- backend pop
- mode tcp
- balance roundrobin
- stick store-request src
- stick-table type ip size 200k expire 30m
- server s1 192.168.1.1:110
- server s2 192.168.1.1:110
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- backend smtp
- mode tcp
- balance roundrobin
- stick match src table pop
- server s1 192.168.1.1:25
- server s2 192.168.1.1:25
+ Arguments :
+ comment <msg> defines a message to report if the rule evaluation fails.
- See also : "stick-table", "stick on", section 11 about stick-tables, and
- section 7 about ACLs and sample fetching.
+    <hexstring>   is the hexadecimal string that will be sent, once converted
+                  to binary, during a generic health check session.
+    <hexfmt>      is the hexadecimal Custom log format that will be sent, once
+                  evaluated and converted to binary, during a generic health
+                  check session (see section 8.2.6).
-stick store-response <pattern> [table <table>] [{if | unless} <condition>]
- Define a response pattern used to create an entry in a stickiness table
+ Examples :
+ # redis check in binary
+ option tcp-check
+ tcp-check send-binary 50494e470d0a # PING\r\n
+ tcp-check expect binary 2b504F4e47 # +PONG
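+
+  As a sketch, "send-binary-lf" may build the hexadecimal payload at check
+  time, for example from a variable set earlier in the ruleset. The variable
+  name is illustrative :
+
+      option tcp-check
+      tcp-check set-var(check.ping) str(50494e470d0a)    # "PING\r\n" in hex
+      tcp-check send-binary-lf "%[var(check.ping)]"
+      tcp-check expect binary 2b504f4e47                 # +PONG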
- May be used in the following contexts: tcp, http
- May be used in sections : defaults | frontend | listen | backend
- no | no | yes | yes
+ See also : "option tcp-check", "tcp-check connect", "tcp-check expect",
+ "tcp-check send", tune.bufsize
+
+
+tcp-check set-var(<var-name>[,<cond>...]) <expr>
+tcp-check set-var-fmt(<var-name>[,<cond>...]) <fmt>
+ This operation sets the content of a variable. The variable is declared inline.
+
+ May be used in the following contexts: tcp, http, log
+
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
Arguments :
- <pattern> is a sample expression rule as described in section 7.3. It
- describes what elements of the response or connection will
- be analyzed, extracted and stored in the table once a
- server is selected.
+ <var-name> The name of the variable. Only "proc", "sess" and "check"
+ scopes can be used. See section 2.8 about variables for details.
- <table> is an optional stickiness table name. If unspecified, the same
- backend's table is used. A stickiness table is declared using
- the "stick-table" statement.
+ <cond> A set of conditions that must all be true for the variable to
+ actually be set (such as "ifnotempty", "ifgt" ...). See the
+ set-var converter's description for a full list of possible
+ conditions.
- <cond> is an optional storage condition. It makes it possible to store
- certain criteria only when some conditions are met (or not met).
- For instance, it could be used to store the SSL session ID only
- when the response is a SSL server hello.
+ <expr> Is a sample-fetch expression potentially followed by converters.
- Some protocols or applications require complex stickiness rules and cannot
- always simply rely on cookies nor hashing. The "stick store-response"
- statement describes a rule to decide what to extract from the response and
- when to do it, in order to store it into a stickiness table for further
- requests to match it using the "stick match" statement. Obviously the
- extracted part must make sense and have a chance to be matched in a further
- request. Storing an ID found in a header of a response makes sense.
- See section 7 for a complete list of possible patterns and transformation
- rules.
+ <fmt> This is the value expressed using Custom log format rules (see
+ Custom log format in section 8.2.6).
- The table has to be declared using the "stick-table" statement. It must be of
- a type compatible with the pattern. By default it is the one which is present
- in the same backend. It is possible to share a table with other backends by
- referencing it using the "table" keyword. If another table is referenced,
- the server's ID inside the backends are used. By default, all server IDs
- start at 1 in each backend, so the server ordering is enough. But in case of
- doubt, it is highly recommended to force server IDs using their "id" setting.
+ Examples :
+ tcp-check set-var(check.port) int(1234)
+ tcp-check set-var-fmt(check.name) "%H"
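+
+  As a sketch, a condition makes the assignment best-effort; here the port is
+  only set when the variable was not already set (the value is illustrative) :
+
+      tcp-check set-var(check.port,ifnotset) int(6379)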
- It is possible to restrict the conditions where a "stick store-response"
- statement will apply, using "if" or "unless" followed by a condition. This
- condition will be evaluated while parsing the response, so any criteria can
- be used. See section 7 for ACL based conditions.
- There is no limit on the number of "stick store-response" statements, but
- there is a limit of 8 simultaneous stores per request or response. This
- makes it possible to store up to 8 criteria, all extracted from either the
- request or the response, regardless of the number of rules. Only the 8 first
- ones which match will be kept. Using this, it is possible to feed multiple
- tables at once in the hope to increase the chance to recognize a user on
- another protocol or access method. Using multiple store-response rules with
- the same table is possible and may be used to find the best criterion to rely
- on, by arranging the rules by decreasing preference order. Only the first
- extracted criterion for a given table will be stored. All subsequent store-
- response rules referencing the same table will be skipped and their ACLs will
- not be evaluated. However, even if a store-request rule references a table, a
- store-response rule may also use the same table. This means that each table
- may learn exactly one element from the request and one element from the
- response at once.
+tcp-check unset-var(<var-name>)
+ Free a reference to a variable within its scope.
- The table will contain the real server that processed the request.
+ May be used in the following contexts: tcp, http, log
- Example :
- # Learn SSL session ID from both request and response and create affinity.
- backend https
- mode tcp
- balance roundrobin
- # maximum SSL session ID length is 32 bytes.
- stick-table type binary len 32 size 30k expire 30m
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- acl clienthello req.ssl_hello_type 1
- acl serverhello res.ssl_hello_type 2
+ Arguments :
+ <var-name> The name of the variable. Only "proc", "sess" and "check"
+ scopes can be used. See section 2.8 about variables for details.
- # use tcp content accepts to detects ssl client and server hello.
- tcp-request inspect-delay 5s
- tcp-request content accept if clienthello
+ Examples :
+ tcp-check unset-var(check.port)
- # no timeout on response inspect delay by default.
- tcp-response content accept if serverhello
- # SSL session ID (SSLID) may be present on a client or server hello.
- # Its length is coded on 1 byte at offset 43 and its value starts
- # at offset 44.
+tcp-request connection <action> <options...> [ { if | unless } <condition> ]
+ Perform an action on an incoming connection depending on a layer 4 condition
- # Match and learn on request if client hello.
- stick on req.payload_lv(43,1) if clienthello
+ May be used in the following contexts: tcp, http
- # Learn on response if server hello.
- stick store-response resp.payload_lv(43,1) if serverhello
+ May be used in sections : defaults | frontend | listen | backend
+ yes(!) | yes | yes | no
- server s1 192.168.1.1:443
- server s2 192.168.1.1:443
+ Arguments :
+ <action> defines the action to perform if the condition applies. See
+ below.
- See also : "stick-table", "stick on", section 11 about stick-tables, and
- section 7 about ACLs and pattern extraction.
+ <condition> is a standard layer4-only ACL-based condition (see section 7).
+ Immediately after acceptance of a new incoming connection, it is possible to
+ evaluate some conditions to decide whether this connection must be accepted
+ or dropped or have its counters tracked. Those conditions cannot make use of
+ any data contents because the connection has not been read from yet, and the
+ buffers are not yet allocated. This is used to selectively and very quickly
+ accept or drop connections from various sources with a very low overhead. If
+ some contents need to be inspected in order to take the decision, the
+ "tcp-request content" statements must be used instead.
-stick-table type <type> size <size> [expire <expire>] [args...]
- Configure the stickiness table for the current section
+ The "tcp-request connection" rules are evaluated in their exact declaration
+ order. If no rule matches or if there is no rule, the default action is to
+ accept the incoming connection. There is no specific limit to the number of
+ rules which may be inserted. Any rule may optionally be followed by an
+ ACL-based condition, in which case it will only be evaluated if the condition
+ evaluates to true.
- May be used in the following contexts: tcp, http
+ The condition is evaluated just before the action is executed, and the action
+ is performed exactly once. As such, there is no problem if an action changes
+ an element which is checked as part of the condition. This also means that
+ multiple actions may rely on the same condition so that the first action that
+ changes the condition's evaluation is sufficient to implicitly disable the
+ remaining actions. This is used for example when trying to assign a value to
+ a variable from various sources when it's empty.
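+
+ As a sketch, this pattern may be written with the "ifnotset" condition of
+ the "set-var" action. The variable name is illustrative, and
+ "fc_pp_unique_id" only yields a value when the PROXY protocol provides one :
+
+      tcp-request connection set-var(sess.peer,ifnotset) fc_pp_unique_id
+      tcp-request connection set-var(sess.peer,ifnotset) src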
- May be used in sections : defaults | frontend | listen | backend
- no | yes | yes | yes
+ The first keyword after "tcp-request connection" in the syntax is the rule's
+ action, optionally followed by a varying number of arguments for the action.
+ The supported actions and their respective syntaxes are enumerated in
+ section 4.3 "Actions" (look for actions which tick "TCP RqCon").
- This is used to declare and configure a stick-table. Please refer to section
- 11.1 for the complete details and the list of supported arguments. Only the
- type and the size are mandatory.
+ This directive is only available from named defaults sections, not anonymous
+ ones. Rules defined in the defaults section are evaluated before ones in the
+ associated proxy section. To avoid ambiguities, in this case the same
+ defaults section cannot be used by proxies with the frontend capability and
+ by proxies with the backend capability. It means a listen section cannot use
+ a defaults section defining such rules.
+ Note that the "if/unless" condition is optional. If no condition is set on
+ the action, it is simply performed unconditionally. That can be useful for
+ "track-sc*" actions as well as for changing the default action to a reject.
-tcp-check comment <string>
- Defines a comment for the following the tcp-check rule, reported in logs if
- it fails.
+ Example: accept all connections from white-listed hosts, reject too fast
+ connection without counting them, and track accepted connections.
+ This results in connection rate being capped from abusive sources.
- May be used in the following contexts: tcp, http, log
+ tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
+ tcp-request connection reject if { src_conn_rate gt 10 }
+ tcp-request connection track-sc0 src
- May be used in sections : defaults | frontend | listen | backend
- yes | no | yes | yes
+ Example: accept all connections from white-listed hosts, count all other
+ connections and reject too fast ones. This results in abusive ones
+ being blocked as long as they don't slow down.
- Arguments :
- <string> is the comment message to add in logs if the following tcp-check
- rule fails.
+ tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
+ tcp-request connection track-sc0 src
+ tcp-request connection reject if { sc0_conn_rate gt 10 }
- It only works for connect, send and expect rules. It is useful to make
- user-friendly error reporting.
+ Example: enable the PROXY protocol for traffic coming from all known proxies.
- See also : "option tcp-check", "tcp-check connect", "tcp-check send" and
- "tcp-check expect".
+ tcp-request connection expect-proxy layer4 if { src -f proxies.lst }
+ See section 7 about ACL usage.
-tcp-check connect [default] [port <expr>] [addr <ip>] [send-proxy] [via-socks4]
- [ssl] [sni <sni>] [alpn <alpn>] [linger]
- [proto <name>] [comment <msg>]
- Opens a new connection
+ See also : "tcp-request session", "tcp-request content", "stick-table"
- May be used in the following contexts: tcp, http, log
+tcp-request content <action> [{if | unless} <condition>]
+ Perform an action on a new session depending on a layer 4-7 condition
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+ May be used in the following contexts: tcp, http
+
+ May be used in sections : defaults | frontend | listen | backend
+ yes(!) | yes | yes | yes
Arguments :
- comment <msg> defines a message to report if the rule evaluation fails.
+ <action> defines the action to perform if the condition applies. See
+ below.
- default Use default options of the server line to do the health
- checks. The server options are used only if not redefined.
+ <condition> is a standard layer 4-7 ACL-based condition (see section 7).
- port <expr> if not set, check port or server port is used.
- It tells HAProxy where to open the connection to.
- <port> must be a valid TCP port source integer, from 1 to
- 65535 or an sample-fetch expression.
+ A request's contents can be analyzed at an early stage of request processing
+ called "TCP content inspection". During this stage, ACL-based rules are
+ evaluated every time the request contents are updated, until either an
+ "accept", a "reject" or a "switch-mode" rule matches, or the TCP request
+ inspection delay expires with no matching rule.
- addr <ip> defines the IP address to do the health check.
+ The first difference between these rules and "tcp-request connection" rules
+ is that "tcp-request content" rules can make use of contents to take a
+ decision. Most often, these decisions will consider a protocol recognition or
+ validity. The second difference is that content-based rules can be used in
+ both frontends and backends. In case of HTTP keep-alive with the client, all
+ tcp-request content rules are evaluated again, so HAProxy keeps a record of
+ what sticky counters were assigned by a "tcp-request connection" versus a
+ "tcp-request content" rule, and flushes all the content-related ones after
+ processing an HTTP request, so that the rules may assign them again when
+ they are evaluated for the next request. This is of particular importance
+ when the rule tracks some L7 information or when it is conditioned by an
+ L7-based ACL, since tracking may change between requests.
- send-proxy send a PROXY protocol string
+ Content-based rules are evaluated in their exact declaration order. If no
+ rule matches or if there is no rule, the default action is to accept the
+ contents. There is no specific limit to the number of rules which may be
+ inserted.
- via-socks4 enables outgoing health checks using upstream socks4 proxy.
+ While there is nothing mandatory about it, it is recommended to use
+ track-sc0 in "tcp-request connection" rules, track-sc1 for "tcp-request
+ content" rules in the frontend, and track-sc2 for "tcp-request content"
+ rules in the backend, because that makes the configuration more readable
+ and easier to troubleshoot, but this is just a guideline and all counters
+ may be used everywhere.
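+
+ Example: the recommended assignment, one counter per stage (a sketch; the
+          referenced table names are illustrative) :
+
+      # in the frontend
+      tcp-request connection track-sc0 src table per_ip_conn
+      tcp-request content track-sc1 src table per_ip_req
+      # in the backend
+      tcp-request content track-sc2 src table per_srv_req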
- ssl opens a ciphered connection
+ The first keyword after "tcp-request content" in the syntax is the rule's
+ action, optionally followed by a varying number of arguments for the action.
+ The supported actions and their respective syntaxes are enumerated in
+ section 4.3 "Actions" (look for actions which tick "TCP RqCnt").
- sni <sni> specifies the SNI to use to do health checks over SSL.
+ This directive is only available from named defaults sections, not anonymous
+ ones. Rules defined in the defaults section are evaluated before ones in the
+ associated proxy section. To avoid ambiguities, in this case the same
+ defaults section cannot be used by proxies with the frontend capability and
+ by proxies with the backend capability. It means a listen section cannot use
+ a defaults section defining such rules.
- alpn <alpn> defines which protocols to advertise with ALPN. The protocol
- list consists in a comma-delimited list of protocol names,
- for instance: "http/1.1,http/1.0" (without quotes).
- If it is not set, the server ALPN is used.
+ Note that the "if/unless" condition is optional. If no condition is set on
+ the action, it is simply performed unconditionally. That can be useful for
+ "track-sc*" actions as well as for changing the default action to a reject.
- proto <name> forces the multiplexer's protocol to use for this connection.
- It must be a TCP mux protocol and it must be usable on the
- backend side. The list of available protocols is reported in
- haproxy -vv.
+ Note also that it is recommended to use a "tcp-request session" rule to track
+ information that does *not* depend on Layer 7 contents, especially for HTTP
+ frontends. Some HTTP processing is performed at the session level and may
+ lead to an early rejection of requests. Thus, tracking at the content level
+ may be disturbed in such a case. A warning is emitted during startup to
+ discourage, as far as possible, such unreliable usage.
- linger cleanly close the connection instead of using a single RST.
+ It is perfectly possible to match layer 7 contents with "tcp-request content"
+ rules from a TCP proxy, since HTTP-specific ACL matches are able to
+ preliminarily parse the contents of a buffer before extracting the required
+ data. If the buffered contents do not parse as a valid HTTP message, then the
+ ACL does not match. The parser which is involved there is exactly the same
+ as for all other HTTP processing, so there is no risk of parsing something
+ differently. In an HTTP frontend or an HTTP backend, it is guaranteed that
+ HTTP contents will always be immediately present when the rule is evaluated
+ first because the HTTP parsing is performed in the early stages of the
+ connection processing, at the session level. But for such proxies, using
+ "http-request" rules is much more natural and recommended.
- When an application lies on more than a single TCP port or when HAProxy
- load-balance many services in a single backend, it makes sense to probe all
- the services individually before considering a server as operational.
+ Tracking layer 7 information is also possible provided that the information
+ is present when the rule is processed. The rule processing engine is able to
+ wait until the inspect delay expires when the data to be tracked is not yet
+ available.
- When there are no TCP port configured on the server line neither server port
- directive, then the 'tcp-check connect port <port>' must be the first step
- of the sequence.
+ Example:
+ tcp-request content use-service lua.deny if { src -f /etc/haproxy/blacklist.lst }
- In a tcp-check ruleset a 'connect' is required, it is also mandatory to start
- the ruleset with a 'connect' rule. Purpose is to ensure admin know what they
- do.
+ Example:
+ tcp-request content set-var(sess.my_var) src
+ tcp-request content set-var-fmt(sess.from) %[src]:%[src_port]
+ tcp-request content unset-var(sess.my_var2)
- When a connect must start the ruleset, if may still be preceded by set-var,
- unset-var or comment rules.
+ Example:
+ # Accept HTTP requests containing a Host header saying "example.com"
+ # and reject everything else. (Only works for HTTP/1 connections)
+ acl is_host_com hdr(Host) -i example.com
+ tcp-request inspect-delay 30s
+ tcp-request content accept if is_host_com
+ tcp-request content reject
- Examples :
- # check HTTP and HTTPs services on a server.
- # first open port 80 thanks to server line port directive, then
- # tcp-check opens port 443, ciphered and run a request on it:
- option tcp-check
- tcp-check connect
- tcp-check send GET\ /\ HTTP/1.0\r\n
- tcp-check send Host:\ haproxy.1wt.eu\r\n
- tcp-check send \r\n
- tcp-check expect rstring (2..|3..)
- tcp-check connect port 443 ssl
- tcp-check send GET\ /\ HTTP/1.0\r\n
- tcp-check send Host:\ haproxy.1wt.eu\r\n
- tcp-check send \r\n
- tcp-check expect rstring (2..|3..)
- server www 10.0.0.1 check port 80
+ # Accept HTTP requests containing a Host header saying "example.com"
+ # and reject everything else. (works for HTTP/1 and HTTP/2 connections)
+ acl is_host_com hdr(Host) -i example.com
+ tcp-request inspect-delay 5s
+ tcp-request content switch-mode http if HTTP
+ tcp-request content reject # non-HTTP traffic is implicit here
+ ...
+ http-request reject unless is_host_com
- # check both POP and IMAP from a single server:
- option tcp-check
- tcp-check connect port 110 linger
- tcp-check expect string +OK\ POP3\ ready
- tcp-check connect port 143
- tcp-check expect string *\ OK\ IMAP4\ ready
- server mail 10.0.0.1 check
+ Example:
+ # reject SMTP connection if client speaks first
+ tcp-request inspect-delay 30s
+ acl content_present req.len gt 0
+ tcp-request content reject if content_present
- See also : "option tcp-check", "tcp-check send", "tcp-check expect"
+ # Forward HTTPS connection only if client speaks
+ tcp-request inspect-delay 30s
+ acl content_present req.len gt 0
+ tcp-request content accept if content_present
+ tcp-request content reject
+ Example:
+ # Track the last IP (stick-table type string) from X-Forwarded-For
+ tcp-request inspect-delay 10s
+ tcp-request content track-sc0 hdr(x-forwarded-for,-1)
+ # Or track the last IP (stick-table type ip|ipv6) from X-Forwarded-For
+ tcp-request content track-sc0 req.hdr_ip(x-forwarded-for,-1)
-tcp-check expect [min-recv <int>] [comment <msg>]
- [ok-status <st>] [error-status <st>] [tout-status <st>]
- [on-success <fmt>] [on-error <fmt>] [status-code <expr>]
- [!] <match> <pattern>
- Specify data to be collected and analyzed during a generic health check
+ Example:
+ # track request counts per "base" (concatenation of Host+URL)
+ tcp-request inspect-delay 10s
+ tcp-request content track-sc0 base table req-rate
- May be used in the following contexts: tcp, http, log
+ Example: track per-frontend and per-backend counters, block abusers at the
+ frontend when the backend detects abuse (and marks gpc0).
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
+ frontend http
+ # Use General Purpose Counter 0 in SC0 as a global abuse counter
+ # protecting all our sites
+ stick-table type ip size 1m expire 5m store gpc0
+ tcp-request connection track-sc0 src
+ tcp-request connection reject if { sc0_get_gpc0 gt 0 }
+ ...
+ use_backend http_dynamic if { path_end .php }
- Arguments :
- comment <msg> defines a message to report if the rule evaluation fails.
+ backend http_dynamic
+ # if a source makes too fast requests to this dynamic site (tracked
+ # by SC1), block it globally in the frontend.
+ stick-table type ip size 1m expire 5m store http_req_rate(10s)
+ acl click_too_fast sc1_http_req_rate gt 10
+ acl mark_as_abuser sc0_inc_gpc0(http) gt 0
+ tcp-request content track-sc1 src
+ tcp-request content reject if click_too_fast mark_as_abuser
- min-recv is optional and can define the minimum amount of data required to
- evaluate the current expect rule. If the number of received bytes
- is under this limit, the check will wait for more data. This
- option can be used to resolve some ambiguous matching rules or to
- avoid executing costly regex matches on content known to be still
- incomplete. If an exact string (string or binary) is used, the
- minimum between the string length and this parameter is used.
- This parameter is ignored if it is set to -1. If the expect rule
- does not match, the check will wait for more data. If set to 0,
- the evaluation result is always conclusive.
+ See section 7 about ACL usage.
- <match> is a keyword indicating how to look for a specific pattern in the
- response. The keyword may be one of "string", "rstring", "binary" or
- "rbinary".
- The keyword may be preceded by an exclamation mark ("!") to negate
- the match. Spaces are allowed between the exclamation mark and the
- keyword. See below for more details on the supported keywords.
+ See also : "tcp-request connection", "tcp-request session",
+ "tcp-request inspect-delay", and "http-request".
- ok-status <st> is optional and can be used to set the check status if
- the expect rule is successfully evaluated and if it is
- the last rule in the tcp-check ruleset. "L7OK", "L7OKC",
- "L6OK" and "L4OK" are supported :
- - L7OK : check passed on layer 7
- - L7OKC : check conditionally passed on layer 7, set
- server to NOLB state.
- - L6OK : check passed on layer 6
- - L4OK : check passed on layer 4
- By default "L7OK" is used.
+tcp-request inspect-delay <timeout>
+ Set the maximum allowed time to wait for data during content inspection
- error-status <st> is optional and can be used to set the check status if
- an error occurred during the expect rule evaluation.
- "L7OKC", "L7RSP", "L7STS", "L6RSP" and "L4CON" are
- supported :
- - L7OKC : check conditionally passed on layer 7, set
- server to NOLB state.
- - L7RSP : layer 7 invalid response - protocol error
- - L7STS : layer 7 response error, for example HTTP 5xx
- - L6RSP : layer 6 invalid response - protocol error
- - L4CON : layer 1-4 connection problem
- By default "L7RSP" is used.
+ May be used in the following contexts: tcp, http
- tout-status <st> is optional and can be used to set the check status if
- a timeout occurred during the expect rule evaluation.
- "L7TOUT", "L6TOUT", and "L4TOUT" are supported :
- - L7TOUT : layer 7 (HTTP/SMTP) timeout
- - L6TOUT : layer 6 (SSL) timeout
- - L4TOUT : layer 1-4 timeout
- By default "L7TOUT" is used.
+ May be used in sections : defaults | frontend | listen | backend
+ yes(!) | yes | yes | yes
- on-success <fmt> is optional and can be used to customize the
- informational message reported in logs if the expect
- rule is successfully evaluated and if it is the last rule
- in the tcp-check ruleset. <fmt> is a Custom log format
- (see section 8.2.6).
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
- on-error <fmt> is optional and can be used to customize the
- informational message reported in logs if an error
- occurred during the expect rule evaluation. <fmt> is a
- Custom log format (see section 8.2.6).
+ People using HAProxy primarily as a TCP relay are often worried about the
+ risk of passing any type of protocol to a server without any analysis. In
+ order to be able to analyze the request contents, we must first withhold
+ the data then analyze them. This statement simply enables withholding of
+ data for at most the specified amount of time.
- status-code <expr> is optional and can be used to set the check status code
- reported in logs, on success or on error. <expr> is a
- standard HAProxy expression formed by a sample-fetch
- followed by some converters.
+ TCP content inspection applies very early when a connection reaches a
+ frontend, then very early when the connection is forwarded to a backend. This
+ means that a connection may experience a first delay in the frontend and a
+ second delay in the backend if both have tcp-request rules.
- <pattern> is the pattern to look for. It may be a string or a regular
- expression. If the pattern contains spaces, they must be escaped
- with the usual backslash ('\').
- If the match is set to binary, then the pattern must be passed as
- a series of hexadecimal digits in an even number. Each sequence of
- two digits will represent a byte. The hexadecimal digits may be
- used upper or lower case.
+ Note that when performing content inspection, HAProxy will evaluate all the
+ rules for every new chunk which gets in, taking into account the fact that
+ those data are partial. If no rule matches before the aforementioned delay,
+ a last check is performed upon expiration, this time considering that the
+ contents are definitive. If no delay is set, HAProxy will not wait at all
+ and will immediately apply a verdict based on the available information.
+ Obviously this is unlikely to be very useful and might even be racy, so such
+ setups are not recommended.
- The available matches are intentionally similar to their http-check cousins :
+ Note the inspection delay is shortened if a connection error or shutdown is
+ experienced or if the request buffer appears as full.
- string <string> : test the exact string matches in the response buffer.
- A health check response will be considered valid if the
- response's buffer contains this exact string. If the
- "string" keyword is prefixed with "!", then the response
- will be considered invalid if the body contains this
- string. This can be used to look for a mandatory pattern
- in a protocol response, or to detect a failure when a
- specific error appears in a protocol banner.
+ As soon as a rule matches, the request is released and continues as usual. If
+ the timeout is reached and no rule matches, the default policy will be to let
+ it pass through unaffected.
- rstring <regex> : test a regular expression on the response buffer.
- A health check response will be considered valid if the
- response's buffer matches this expression. If the
- "rstring" keyword is prefixed with "!", then the response
- will be considered invalid if the body matches the
- expression.
+ For most protocols, it is enough to set it to a few seconds, as most clients
+ send the full request immediately upon connection. Add 3 or more seconds to
+ cover TCP retransmits but that's all. For some protocols, it may make sense
+ to use large values, for instance to ensure that the client never talks
+ before the server (e.g. SMTP), or to wait for a client to talk before passing
+ data to the server (e.g. SSL). Note that the client timeout must cover at
+ least the inspection delay, otherwise it will expire first. If the client
+ closes the connection or if the buffer is full, the delay immediately expires
+ since the contents will not be able to change anymore.
- string-lf <fmt> : test a Custom log format match in the response's buffer.
- A health check response will be considered valid if the
- response's buffer contains the string resulting of the
- evaluation of <fmt>, which follows the Custom log format
- rules described in section 8.2.6. If prefixed with "!",
- then the response will be considered invalid if the
- buffer contains the string.
+ This directive is only available from named defaults sections, not anonymous
+ ones. Proxies inherit this value from their defaults section.
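+ As a minimal sketch of the SSL case above (backend name and hostname are
+ illustrative): give clients up to the inspection delay to present a TLS
+ ClientHello, then route on the SNI it carries.
+
+ Example:
+      tcp-request inspect-delay 5s
+      tcp-request content accept if { req.ssl_hello_type 1 }
+      use_backend bk_app if { req.ssl_sni -i app.example.com }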
- binary <hexstring> : test the exact string in its hexadecimal form matches
- in the response buffer. A health check response will
- be considered valid if the response's buffer contains
- this exact hexadecimal string.
- Purpose is to match data on binary protocols.
+ See also : "tcp-request content accept", "tcp-request content reject",
+ "timeout client".
- rbinary <regex> : test a regular expression on the response buffer, like
- "rstring". However, the response buffer is transformed
- into its hexadecimal form, including NUL-bytes. This
- allows using all regex engines to match any binary
- content. The hexadecimal transformation takes twice the
- size of the original response. As such, the expected
- pattern should work on at-most half the response buffer
- size.
- binary-lf <hexfmt> : test a Custom log format in its hexadecimal form match
- in the response's buffer. A health check response will
- be considered valid if the response's buffer contains
- the hexadecimal string resulting of the evaluation of
- <fmt>, which follows the Custom log format rules (see
- section 8.2.6). If prefixed with "!", then the
- response will be considered invalid if the buffer
- contains the hexadecimal string. The hexadecimal
- string is converted in a binary string before matching
- the response's buffer.
+tcp-request session <action> [{if | unless} <condition>]
+ Perform an action on a validated session depending on a layer 5 condition
- It is important to note that the responses will be limited to a certain size
- defined by the global "tune.bufsize" option, which defaults to 16384 bytes.
- Thus, too large responses may not contain the mandatory pattern when using
- "string", "rstring" or binary. If a large response is absolutely required, it
- is possible to change the default max size by setting the global variable.
- However, it is worth keeping in mind that parsing very large responses can
- waste some CPU cycles, especially when regular expressions are used, and that
- it is always better to focus the checks on smaller resources. Also, in its
- current state, the check will not find any string nor regex past a null
- character in the response. Similarly it is not possible to request matching
- the null character.
+ May be used in the following contexts: tcp, http
- Examples :
- # perform a POP check
- option tcp-check
- tcp-check expect string +OK\ POP3\ ready
+ May be used in sections : defaults | frontend | listen | backend
+ yes(!) | yes | yes | no
- # perform an IMAP check
- option tcp-check
- tcp-check expect string *\ OK\ IMAP4\ ready
+ Arguments :
+ <action> defines the action to perform if the condition applies. See
+ below.
- # look for the redis master server
- option tcp-check
- tcp-check send PING\r\n
- tcp-check expect string +PONG
- tcp-check send info\ replication\r\n
- tcp-check expect string role:master
- tcp-check send QUIT\r\n
- tcp-check expect string +OK
-
-
- See also : "option tcp-check", "tcp-check connect", "tcp-check send",
- "tcp-check send-binary", "http-check expect", tune.bufsize
-
-
-tcp-check send <data> [comment <msg>]
-tcp-check send-lf <fmt> [comment <msg>]
- Specify a string or a Custom log format to be sent as a question during a
- generic health check
-
- May be used in the following contexts: tcp, http, log
-
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments :
- comment <msg> defines a message to report if the rule evaluation fails.
-
- <data> is the string that will be sent during a generic health
- check session.
-
- <fmt> is the Custom log format that will be sent, once evaluated,
- during a generic health check session (see section 8.2.6).
-
- Examples :
- # look for the redis master server
- option tcp-check
- tcp-check send info\ replication\r\n
- tcp-check expect string role:master
-
- See also : "option tcp-check", "tcp-check connect", "tcp-check expect",
- "tcp-check send-binary", tune.bufsize
-
-
-tcp-check send-binary <hexstring> [comment <msg>]
-tcp-check send-binary-lf <hexfmt> [comment <msg>]
- Specify an hex digits string or an hex digits Custom log format to be sent as
- a binary question during a raw tcp health check
-
- May be used in the following contexts: tcp, http, log
-
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments :
- comment <msg> defines a message to report if the rule evaluation fails.
-
- <hexstring> is the hexadecimal string that will be send, once converted
- to binary, during a generic health check session.
-
- <hexfmt> is the hexadecimal Custom log format that will be send, once
- evaluated and converted to binary, during a generic health
- check session (see section 8.2.6).
-
- Examples :
- # redis check in binary
- option tcp-check
- tcp-check send-binary 50494e470d0a # PING\r\n
- tcp-check expect binary 2b504F4e47 # +PONG
-
-
- See also : "option tcp-check", "tcp-check connect", "tcp-check expect",
- "tcp-check send", tune.bufsize
-
-
-tcp-check set-var(<var-name>[,<cond>...]) <expr>
-tcp-check set-var-fmt(<var-name>[,<cond>...]) <fmt>
- This operation sets the content of a variable. The variable is declared inline.
-
- May be used in the following contexts: tcp, http, log
-
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments :
- <var-name> The name of the variable. Only "proc", "sess" and "check"
- scopes can be used. See section 2.8 about variables for details.
-
- <cond> A set of conditions that must all be true for the variable to
- actually be set (such as "ifnotempty", "ifgt" ...). See the
- set-var converter's description for a full list of possible
- conditions.
-
- <expr> Is a sample-fetch expression potentially followed by converters.
-
- <fmt> This is the value expressed using Custom log format rules (see
- Custom log format in section 8.2.6).
-
- Examples :
- tcp-check set-var(check.port) int(1234)
- tcp-check set-var-fmt(check.name) "%H"
-
-
-tcp-check unset-var(<var-name>)
- Free a reference to a variable within its scope.
-
- May be used in the following contexts: tcp, http, log
-
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments :
- <var-name> The name of the variable. Only "proc", "sess" and "check"
- scopes can be used. See section 2.8 about variables for details.
-
- Examples :
- tcp-check unset-var(check.port)
-
-
-tcp-request connection <action> <options...> [ { if | unless } <condition> ]
- Perform an action on an incoming connection depending on a layer 4 condition
-
- May be used in the following contexts: tcp, http
-
- May be used in sections : defaults | frontend | listen | backend
- yes(!) | yes | yes | no
-
- Arguments :
- <action> defines the action to perform if the condition applies. See
- below.
-
- <condition> is a standard layer4-only ACL-based condition (see section 7).
+ <condition> is a standard layer5-only ACL-based condition (see section 7).
- Immediately after acceptance of a new incoming connection, it is possible to
- evaluate some conditions to decide whether this connection must be accepted
- or dropped or have its counters tracked. Those conditions cannot make use of
- any data contents because the connection has not been read from yet, and the
- buffers are not yet allocated. This is used to selectively and very quickly
- accept or drop connections from various sources with a very low overhead. If
- some contents need to be inspected in order to take the decision, the
- "tcp-request content" statements must be used instead.
+ Once a session is validated (i.e. after all handshakes have been completed),
+ it is possible to evaluate some conditions to decide whether this session
+ must be accepted or dropped or have its counters tracked. Those conditions
+ cannot make use of any data contents because no buffers are allocated yet and
+ the processing cannot wait at this stage. The main use case is to copy some
+ early information into variables (since variables are accessible in the
+ session), or to keep track of some information collected after the handshake,
+ such as SSL-level elements (SNI, ciphers, client cert's CN) or information
+ from the PROXY protocol header (e.g. track a source forwarded this way). The
+ extracted information can thus be copied to a variable or tracked using
+ "track-sc" rules. Of course it is also possible to decide to accept/reject as
+ with other rulesets. Most operations performed here could also be performed
+ in "tcp-request content" rules, except that in HTTP these rules are evaluated
+ for each new request, and that might not always be acceptable. For example a
+ rule might increment a counter on each evaluation. It would also be possible
+ to resolve a country by geolocation from the source IP address, assign it to
+ a session-wide variable, then rewrite the source address from an HTTP header
+ for all requests. If some contents need to be inspected in
+ order to take the decision, the "tcp-request content" statements must be used
+ instead.
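+ For instance, a minimal sketch (the variable name is illustrative) copying
+ the SNI presented during the SSL handshake into a session variable that
+ later rules and log formats can reuse:
+
+ Example:
+      tcp-request session set-var(sess.sni) ssl_fc_sni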
- The "tcp-request connection" rules are evaluated in their exact declaration
+ The "tcp-request session" rules are evaluated in their exact declaration
order. If no rule matches or if there is no rule, the default action is to
- accept the incoming connection. There is no specific limit to the number of
- rules which may be inserted. Any rule may optionally be followed by an
- ACL-based condition, in which case it will only be evaluated if the condition
- evaluates to true.
-
- The condition is evaluated just before the action is executed, and the action
- is performed exactly once. As such, there is no problem if an action changes
- an element which is checked as part of the condition. This also means that
- multiple actions may rely on the same condition so that the first action that
- changes the condition's evaluation is sufficient to implicitly disable the
- remaining actions. This is used for example when trying to assign a value to
- a variable from various sources when it's empty.
+ accept the incoming session. There is no specific limit to the number of
+ rules which may be inserted.
- The first keyword after "tcp-request connection" in the syntax is the rule's
+ The first keyword after "tcp-request session" in the syntax is the rule's
action, optionally followed by a varying number of arguments for the action.
The supported actions and their respective syntaxes are enumerated in
- section 4.3 "Actions" (look for actions which tick "TCP RqCon").
+ section 4.3 "Actions" (look for actions which tick "TCP RqSes").
This directive is only available from named defaults sections, not anonymous
ones. Rules defined in the defaults section are evaluated before ones in the
the action, it is simply performed unconditionally. That can be useful for
"track-sc*" actions as well as for changing the default action to a reject.
- Example: accept all connections from white-listed hosts, reject too fast
- connection without counting them, and track accepted connections.
- This results in connection rate being capped from abusive sources.
+ Example: track the original source address by default, or the one advertised
+ in the PROXY protocol header for connection coming from the local
+ proxies. The first connection-level rule enables receipt of the
+ PROXY protocol for these ones, the second rule tracks whatever
+ address we decide to keep after optional decoding.
- tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
- tcp-request connection reject if { src_conn_rate gt 10 }
- tcp-request connection track-sc0 src
+ tcp-request connection expect-proxy layer4 if { src -f proxies.lst }
+ tcp-request session track-sc0 src
- Example: accept all connections from white-listed hosts, count all other
- connections and reject too fast ones. This results in abusive ones
- being blocked as long as they don't slow down.
+ Example: accept all sessions from white-listed hosts, reject too fast
+ sessions without counting them, and track accepted sessions.
+ This results in session rate being capped from abusive sources.
- tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
- tcp-request connection track-sc0 src
- tcp-request connection reject if { sc0_conn_rate gt 10 }
+ tcp-request session accept if { src -f /etc/haproxy/whitelist.lst }
+ tcp-request session reject if { src_sess_rate gt 10 }
+ tcp-request session track-sc0 src
- Example: enable the PROXY protocol for traffic coming from all known proxies.
+ Example: accept all sessions from white-listed hosts, count all other
+ sessions and reject too fast ones. This results in abusive ones
+ being blocked as long as they don't slow down.
- tcp-request connection expect-proxy layer4 if { src -f proxies.lst }
+ tcp-request session accept if { src -f /etc/haproxy/whitelist.lst }
+ tcp-request session track-sc0 src
+ tcp-request session reject if { sc0_sess_rate gt 10 }
See section 7 about ACL usage.
- See also : "tcp-request session", "tcp-request content", "stick-table"
+ See also : "tcp-request connection", "tcp-request content", "stick-table"
-tcp-request content <action> [{if | unless} <condition>]
- Perform an action on a new session depending on a layer 4-7 condition
+tcp-response content <action> [{if | unless} <condition>]
+ Perform an action on a session response depending on a layer 4-7 condition
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes(!) | yes | yes | yes
+ yes(!) | no | yes | yes
Arguments :
<action> defines the action to perform if the condition applies. See
<condition> is a standard layer 4-7 ACL-based condition (see section 7).
- A request's contents can be analyzed at an early stage of request processing
+ Response contents can be analyzed at an early stage of response processing
called "TCP content inspection". During this stage, ACL-based rules are
- evaluated every time the request contents are updated, until either an
- "accept", a "reject" or a "switch-mode" rule matches, or the TCP request
- inspection delay expires with no matching rule.
+ evaluated every time the response contents are updated, until either a final
+ rule matches, or a TCP response inspection delay is set and expires with no
+ matching rule.
- The first difference between these rules and "tcp-request connection" rules
- is that "tcp-request content" rules can make use of contents to take a
- decision. Most often, these decisions will consider a protocol recognition or
- validity. The second difference is that content-based rules can be used in
- both frontends and backends. In case of HTTP keep-alive with the client, all
- tcp-request content rules are evaluated again, so HAProxy keeps a record of
- what sticky counters were assigned by a "tcp-request connection" versus a
- "tcp-request content" rule, and flushes all the content-related ones after
- processing an HTTP request, so that they may be evaluated again by the rules
- being evaluated again for the next request. This is of particular importance
- when the rule tracks some L7 information or when it is conditioned by an
- L7-based ACL, since tracking may change between requests.
+ Most often, these decisions will consider a protocol recognition or validity.
Content-based rules are evaluated in their exact declaration order. If no
rule matches or if there is no rule, the default action is to accept the
contents. There is no specific limit to the number of rules which may be
inserted.
- While there is nothing mandatory about it, it is recommended to use the
- track-sc0 in "tcp-request connection" rules, track-sc1 for "tcp-request
- content" rules in the frontend, and track-sc2 for "tcp-request content"
- rules in the backend, because that makes the configuration more readable
- and easier to troubleshoot, but this is just a guideline and all counters
- may be used everywhere.
-
- The first keyword after "tcp-request content" in the syntax is the rule's
+ The first keyword after "tcp-response content" in the syntax is the rule's
action, optionally followed by a varying number of arguments for the action.
The supported actions and their respective syntaxes are enumerated in
- section 4.3 "Actions" (look for actions which tick "TCP RqCnt").
+ section 4.3 "Actions" (look for actions which tick "TCP RsCnt").
This directive is only available from named defaults sections, not anonymous
ones. Rules defined in the defaults section are evaluated before ones in the
Note that the "if/unless" condition is optional. If no condition is set on
the action, it is simply performed unconditionally. That can be useful for
- "track-sc*" actions as well as for changing the default action to a reject.
+ changing the default action to a reject.
- Note also that it is recommended to use a "tcp-request session" rule to track
- information that does *not* depend on Layer 7 contents, especially for HTTP
- frontends. Some HTTP processing are performed at the session level and may
- lead to an early rejection of the requests. Thus, the tracking at the content
- level may be disturbed in such case. A warning is emitted during startup to
- prevent, as far as possible, such unreliable usage.
- It is perfectly possible to match layer 7 contents with "tcp-request content"
- rules from a TCP proxy, since HTTP-specific ACL matches are able to
- preliminarily parse the contents of a buffer before extracting the required
- data. If the buffered contents do not parse as a valid HTTP message, then the
- ACL does not match. The parser which is involved there is exactly the same
- as for all other HTTP processing, so there is no risk of parsing something
- differently. In an HTTP frontend or an HTTP backend, it is guaranteed that
- HTTP contents will always be immediately present when the rule is evaluated
- first because the HTTP parsing is performed in the early stages of the
- connection processing, at the session level. But for such proxies, using
- "http-request" rules is much more natural and recommended.
+ It is perfectly possible to match layer 7 contents with "tcp-response
+ content" rules, but then it is important to ensure that a full response has
+ been buffered, otherwise no contents will match. In order to achieve this,
+ the best solution involves detecting the HTTP protocol during the inspection
+ period.
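+ As a hedged sketch of response inspection with a simple banner-based
+ protocol (the expected greeting is illustrative), a POP3 proxy could wait
+ for the server greeting and reject servers that do not answer with "+OK":
+
+ Example:
+      tcp-response inspect-delay 10s
+      acl pop3_ok res.payload(0,3) -m bin 2b4f4b   # "+OK"
+      tcp-response content accept if pop3_ok
+      tcp-response content reject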
- Tracking layer7 information is also possible provided that the information
- are present when the rule is processed. The rule processing engine is able to
- wait until the inspect delay expires when the data to be tracked is not yet
- available.
+ See section 7 about ACL usage.
- Example:
- tcp-request content use-service lua.deny if { src -f /etc/haproxy/blacklist.lst }
+ See also : "tcp-request content", "tcp-response inspect-delay"
- Example:
- tcp-request content set-var(sess.my_var) src
- tcp-request content set-var-fmt(sess.from) %[src]:%[src_port]
- tcp-request content unset-var(sess.my_var2)
+tcp-response inspect-delay <timeout>
+ Set the maximum allowed time to wait for a response during content inspection
- Example:
- # Accept HTTP requests containing a Host header saying "example.com"
- # and reject everything else. (Only works for HTTP/1 connections)
- acl is_host_com hdr(Host) -i example.com
- tcp-request inspect-delay 30s
- tcp-request content accept if is_host_com
- tcp-request content reject
+ May be used in the following contexts: tcp, http
- # Accept HTTP requests containing a Host header saying "example.com"
- # and reject everything else. (works for HTTP/1 and HTTP/2 connections)
- acl is_host_com hdr(Host) -i example.com
- tcp-request inspect-delay 5s
- tcp-request content switch-mode http if HTTP
- tcp-request content reject # non-HTTP traffic is implicit here
- ...
- http-request reject unless is_host_com
+ May be used in sections : defaults | frontend | listen | backend
+ yes(!) | no | yes | yes
- Example:
- # reject SMTP connection if client speaks first
- tcp-request inspect-delay 30s
- acl content_present req.len gt 0
- tcp-request content reject if content_present
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
- # Forward HTTPS connection only if client speaks
- tcp-request inspect-delay 30s
- acl content_present req.len gt 0
- tcp-request content accept if content_present
- tcp-request content reject
+ This directive is only available from named defaults sections, not anonymous
+ ones. Proxies inherit this value from their defaults section.
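+
+  Example (an illustrative sketch; the 5 second value is arbitrary) :
+        # wait up to 5 seconds for the server to start responding, and
+        # reject the connection if nothing was received in time
+        tcp-response inspect-delay 5s
+        tcp-response content accept if { res.len gt 0 }
+        tcp-response content reject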
- Example:
- # Track the last IP(stick-table type string) from X-Forwarded-For
- tcp-request inspect-delay 10s
- tcp-request content track-sc0 hdr(x-forwarded-for,-1)
- # Or track the last IP(stick-table type ip|ipv6) from X-Forwarded-For
- tcp-request content track-sc0 req.hdr_ip(x-forwarded-for,-1)
+ See also : "tcp-response content", "tcp-request inspect-delay".
- Example:
- # track request counts per "base" (concatenation of Host+URL)
- tcp-request inspect-delay 10s
- tcp-request content track-sc0 base table req-rate
- Example: track per-frontend and per-backend counters, block abusers at the
- frontend when the backend detects abuse(and marks gpc0).
+timeout check <timeout>
+ Set additional check timeout, but only after a connection has already been
+ established.
- frontend http
- # Use General Purpose Counter 0 in SC0 as a global abuse counter
- # protecting all our sites
- stick-table type ip size 1m expire 5m store gpc0
- tcp-request connection track-sc0 src
- tcp-request connection reject if { sc0_get_gpc0 gt 0 }
- ...
- use_backend http_dynamic if { path_end .php }
+ May be used in the following contexts: tcp, http, log
- backend http_dynamic
- # if a source makes too fast requests to this dynamic site (tracked
- # by SC1), block it globally in the frontend.
- stick-table type ip size 1m expire 5m store http_req_rate(10s)
- acl click_too_fast sc1_http_req_rate gt 10
- acl mark_as_abuser sc0_inc_gpc0(http) gt 0
- tcp-request content track-sc1 src
- tcp-request content reject if click_too_fast mark_as_abuser
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
- See section 7 about ACL usage.
+ Arguments:
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
- See also : "tcp-request connection", "tcp-request session",
- "tcp-request inspect-delay", and "http-request".
+ If set, HAProxy uses min("timeout connect", "inter") as a connect timeout
+ for checks and "timeout check" as an additional read timeout. The "min" is
+ used so that people running with *very* long "timeout connect" (e.g. those
+ who needed this due to the queue or tarpit) do not slow down their checks.
+ (Please also note that there is no valid reason to have such long connect
+ timeouts, because "timeout queue" and "timeout tarpit" can always be used to
+ avoid that).
-tcp-request inspect-delay <timeout>
- Set the maximum allowed time to wait for data during content inspection
+ If "timeout check" is not set HAProxy uses "inter" for complete check
+ timeout (connect + read) exactly like all <1.3.15 version.
+
+ In most cases, a check request is much simpler and faster to handle than a
+ normal request, and people usually want to kick out laggy servers quickly,
+ so this timeout should be smaller than "timeout server".
+
+ This parameter is specific to backends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it.
+
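+  Example (an illustrative sketch; the address, URI and values are arbitrary):
+        backend app
+            option httpchk GET /health
+            timeout connect 5s    # min(5s, inter) used to connect the check
+            timeout check   2s    # extra time allowed to read the response
+            server srv1 192.0.2.10:80 check inter 10s
+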
+ See also: "timeout connect", "timeout queue", "timeout server",
+ "timeout tarpit".
+
+
+timeout client <timeout>
+ Set the maximum inactivity time on the client side.
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes(!) | yes | yes | yes
+ yes | yes | yes | no
Arguments :
<timeout> is the timeout value specified in milliseconds by default, but
can be in any other unit if the number is suffixed by the unit,
as explained at the top of this document.
- People using HAProxy primarily as a TCP relay are often worried about the
- risk of passing any type of protocol to a server without any analysis. In
- order to be able to analyze the request contents, we must first withhold
- the data then analyze them. This statement simply enables withholding of
- data for at most the specified amount of time.
+ The inactivity timeout applies when the client is expected to acknowledge or
+ send data. In HTTP mode, this timeout is particularly important to consider
+ during the first phase, when the client sends the request, and during the
+ response while it is reading data sent by the server. That said, for the
+ first phase, it is preferable to set the "timeout http-request" to better
+ protect HAProxy from Slowloris-like attacks. The value is specified in
+ milliseconds by default, but can be in any other unit if the number is
+ suffixed by the unit, as specified at the top of this document. In TCP mode
+ (and to a lesser extent, in HTTP mode), it is highly recommended that the
+ client timeout remain equal to the server timeout in order to avoid
+ situations that are complex to debug. It is a good practice to cover one or
+ several TCP packet
+ losses by specifying timeouts that are slightly above multiples of 3 seconds
+ (e.g. 4 or 5 seconds). If some long-lived streams are mixed with short-lived
+ streams (e.g. WebSocket and HTTP), it's worth considering "timeout tunnel",
+ which overrides "timeout client" and "timeout server" for tunnels, as well as
+ "timeout client-fin" for half-closed connections.
- TCP content inspection applies very early when a connection reaches a
- frontend, then very early when the connection is forwarded to a backend. This
- means that a connection may experience a first delay in the frontend and a
- second delay in the backend if both have tcp-request rules.
+ This parameter is specific to frontends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it. An unspecified timeout results in an infinite timeout, which
+ is not recommended. Such a usage is accepted and works but reports a warning
+ during startup because it may result in accumulation of expired sessions in
+ the system if the system's timeouts are not configured either.
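+
+  Example (an illustrative sketch; values are arbitrary) :
+        defaults
+            mode http
+            timeout client       30s
+            timeout server       30s   # kept equal to the client timeout
+            timeout http-request 10s   # mitigates Slowloris-like attacks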
- Note that when performing content inspection, HAProxy will evaluate the whole
- rules for every new chunk which gets in, taking into account the fact that
- those data are partial. If no rule matches before the aforementioned delay,
- a last check is performed upon expiration, this time considering that the
- contents are definitive. If no delay is set, HAProxy will not wait at all
- and will immediately apply a verdict based on the available information.
- Obviously this is unlikely to be very useful and might even be racy, so such
- setups are not recommended.
+ See also : "timeout server", "timeout tunnel", "timeout http-request".
- Note the inspection delay is shortened if an connection error or shutdown is
- experienced or if the request buffer appears as full.
- As soon as a rule matches, the request is released and continues as usual. If
- the timeout is reached and no rule matches, the default policy will be to let
- it pass through unaffected.
+timeout client-fin <timeout>
+ Set the inactivity timeout on the client side for half-closed connections.
- For most protocols, it is enough to set it to a few seconds, as most clients
- send the full request immediately upon connection. Add 3 or more seconds to
- cover TCP retransmits but that's all. For some protocols, it may make sense
- to use large values, for instance to ensure that the client never talks
- before the server (e.g. SMTP), or to wait for a client to talk before passing
- data to the server (e.g. SSL). Note that the client timeout must cover at
- least the inspection delay, otherwise it will expire first. If the client
- closes the connection or if the buffer is full, the delay immediately expires
- since the contents will not be able to change anymore.
+ May be used in the following contexts: tcp, http
- This directive is only available from named defaults sections, not anonymous
- ones. Proxies inherit this value from their defaults section.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
- See also : "tcp-request content accept", "tcp-request content reject",
- "timeout client".
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ The inactivity timeout applies when the client is expected to acknowledge or
+ send data while one direction is already shut down. This timeout is different
+ from "timeout client" in that it only applies to connections which are closed
+ in one direction. This is particularly useful to avoid keeping connections in
+ FIN_WAIT state for too long when clients do not disconnect cleanly. This
+ problem is particularly common with long connections such as RDP or
+ WebSocket. Note that this timeout can override "timeout tunnel" when a
+ connection shuts down in one direction. It is also applied to idle HTTP/2
+ connections once a GOAWAY frame has been sent, as this usually indicates
+ that the connection is expected to end quickly.
+
+ This parameter is specific to frontends, but can be specified once for all in
+ "defaults" sections. By default it is not set, so half-closed connections
+ will use the other timeouts (timeout.client or timeout.tunnel).
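+
+  Example (an illustrative sketch for mixed HTTP and WebSocket traffic) :
+        listen ws
+            bind :8080
+            server app 192.0.2.5:8080
+            timeout client     30s
+            timeout tunnel     1h    # long-lived tunneled streams
+            timeout client-fin 10s   # reap half-closed connections quickly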
-tcp-request session <action> [{if | unless} <condition>]
- Perform an action on a validated session depending on a layer 5 condition
+ See also : "timeout client", "timeout server-fin", and "timeout tunnel".
+
+
+timeout client-hs <timeout>
+ Set the maximum time to wait for a client TLS handshake to complete. This
+ is usable for both TCP and QUIC connections.
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
- yes(!) | yes | yes | no
+ yes | yes | yes | no
Arguments :
- <action> defines the action to perform if the condition applies. See
- below.
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
- <condition> is a standard layer5-only ACL-based condition (see section 7).
-
- Once a session is validated, (i.e. after all handshakes have been completed),
- it is possible to evaluate some conditions to decide whether this session
- must be accepted or dropped or have its counters tracked. Those conditions
- cannot make use of any data contents because no buffers are allocated yet and
- the processing cannot wait at this stage. The main use case is to copy some
- early information into variables (since variables are accessible in the
- session), or to keep track of some information collected after the handshake,
- such as SSL-level elements (SNI, ciphers, client cert's CN) or information
- from the PROXY protocol header (e.g. track a source forwarded this way). The
- extracted information can thus be copied to a variable or tracked using
- "track-sc" rules. Of course it is also possible to decide to accept/reject as
- with other rulesets. Most operations performed here could also be performed
- in "tcp-request content" rules, except that in HTTP these rules are evaluated
- for each new request, and that might not always be acceptable. For example a
- rule might increment a counter on each evaluation. It would also be possible
- that a country is resolved by geolocation from the source IP address,
- assigned to a session-wide variable, then the source address rewritten from
- an HTTP header for all requests. If some contents need to be inspected in
- order to take the decision, the "tcp-request content" statements must be used
- instead.
-
- The "tcp-request session" rules are evaluated in their exact declaration
- order. If no rule matches or if there is no rule, the default action is to
- accept the incoming session. There is no specific limit to the number of
- rules which may be inserted.
-
- The first keyword after "tcp-request session" in the syntax is the rule's
- action, optionally followed by a varying number of arguments for the action.
- The supported actions and their respective syntaxes are enumerated in
- section 4.3 "Actions" (look for actions which tick "TCP RqSes").
-
- This directive is only available from named defaults sections, not anonymous
- ones. Rules defined in the defaults section are evaluated before ones in the
- associated proxy section. To avoid ambiguities, in this case the same
- defaults section cannot be used by proxies with the frontend capability and
- by proxies with the backend capability. It means a listen section cannot use
- a defaults section defining such rules.
-
- Note that the "if/unless" condition is optional. If no condition is set on
- the action, it is simply performed unconditionally. That can be useful for
- "track-sc*" actions as well as for changing the default action to a reject.
-
- Example: track the original source address by default, or the one advertised
- in the PROXY protocol header for connection coming from the local
- proxies. The first connection-level rule enables receipt of the
- PROXY protocol for these ones, the second rule tracks whatever
- address we decide to keep after optional decoding.
-
- tcp-request connection expect-proxy layer4 if { src -f proxies.lst }
- tcp-request session track-sc0 src
-
- Example: accept all sessions from white-listed hosts, reject too fast
- sessions without counting them, and track accepted sessions.
- This results in session rate being capped from abusive sources.
-
- tcp-request session accept if { src -f /etc/haproxy/whitelist.lst }
- tcp-request session reject if { src_sess_rate gt 10 }
- tcp-request session track-sc0 src
-
- Example: accept all sessions from white-listed hosts, count all other
- sessions and reject too fast ones. This results in abusive ones
- being blocked as long as they don't slow down.
-
- tcp-request session accept if { src -f /etc/haproxy/whitelist.lst }
- tcp-request session track-sc0 src
- tcp-request session reject if { sc0_sess_rate gt 10 }
-
- See section 7 about ACL usage.
-
- See also : "tcp-request connection", "tcp-request content", "stick-table"
-
-tcp-response content <action> [{if | unless} <condition>]
- Perform an action on a session response depending on a layer 4-7 condition
-
- May be used in the following contexts: tcp, http
-
- May be used in sections : defaults | frontend | listen | backend
- yes(!) | no | yes | yes
-
- Arguments :
- <action> defines the action to perform if the condition applies. See
- below.
-
- <condition> is a standard layer 4-7 ACL-based condition (see section 7).
-
- Response contents can be analyzed at an early stage of response processing
- called "TCP content inspection". During this stage, ACL-based rules are
- evaluated every time the response contents are updated, until either a final
- rule matches, or a TCP response inspection delay is set and expires with no
- matching rule.
-
- Most often, these decisions will consider a protocol recognition or validity.
-
- Content-based rules are evaluated in their exact declaration order. If no
- rule matches or if there is no rule, the default action is to accept the
- contents. There is no specific limit to the number of rules which may be
- inserted.
-
- The first keyword after "tcp-response content" in the syntax is the rule's
- action, optionally followed by a varying number of arguments for the action.
- The supported actions and their respective syntaxes are enumerated in
- section 4.3 "Actions" (look for actions which tick "TCP RsCnt").
-
- This directive is only available from named defaults sections, not anonymous
- ones. Rules defined in the defaults section are evaluated before ones in the
- associated proxy section. To avoid ambiguities, in this case the same
- defaults section cannot be used by proxies with the frontend capability and
- by proxies with the backend capability. It means a listen section cannot use
- a defaults section defining such rules.
-
- Note that the "if/unless" condition is optional. If no condition is set on
- the action, it is simply performed unconditionally. That can be useful for
- for changing the default action to a reject.
-
- Several types of actions are supported :
-
- It is perfectly possible to match layer 7 contents with "tcp-response
- content" rules, but then it is important to ensure that a full response has
- been buffered, otherwise no contents will match. In order to achieve this,
- the best solution involves detecting the HTTP protocol during the inspection
- period.
-
- See section 7 about ACL usage.
-
- See also : "tcp-request content", "tcp-response inspect-delay"
-
-tcp-response inspect-delay <timeout>
- Set the maximum allowed time to wait for a response during content inspection
-
- May be used in the following contexts: tcp, http
-
- May be used in sections : defaults | frontend | listen | backend
- yes(!) | no | yes | yes
-
- Arguments :
- <timeout> is the timeout value specified in milliseconds by default, but
- can be in any other unit if the number is suffixed by the unit,
- as explained at the top of this document.
-
- This directive is only available from named defaults sections, not anonymous
- ones. Proxies inherit this value from their defaults section.
-
- See also : "tcp-response content", "tcp-request inspect-delay".
-
-
-timeout check <timeout>
- Set additional check timeout, but only after a connection has been already
- established.
-
- May be used in the following contexts: tcp, http, log
-
- May be used in sections: defaults | frontend | listen | backend
- yes | no | yes | yes
-
- Arguments:
- <timeout> is the timeout value specified in milliseconds by default, but
- can be in any other unit if the number is suffixed by the unit,
- as explained at the top of this document.
-
- If set, HAProxy uses min("timeout connect", "inter") as a connect timeout
- for check and "timeout check" as an additional read timeout. The "min" is
- used so that people running with *very* long "timeout connect" (e.g. those
- who needed this due to the queue or tarpit) do not slow down their checks.
- (Please also note that there is no valid reason to have such long connect
- timeouts, because "timeout queue" and "timeout tarpit" can always be used to
- avoid that).
-
- If "timeout check" is not set HAProxy uses "inter" for complete check
- timeout (connect + read) exactly like all <1.3.15 version.
-
- In most cases check request is much simpler and faster to handle than normal
- requests and people may want to kick out laggy servers so this timeout should
- be smaller than "timeout server".
-
- This parameter is specific to backends, but can be specified once for all in
- "defaults" sections. This is in fact one of the easiest solutions not to
- forget about it.
-
- See also: "timeout connect", "timeout queue", "timeout server",
- "timeout tarpit".
-
-
-timeout client <timeout>
- Set the maximum inactivity time on the client side.
-
- May be used in the following contexts: tcp, http
-
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
-
- Arguments :
- <timeout> is the timeout value specified in milliseconds by default, but
- can be in any other unit if the number is suffixed by the unit,
- as explained at the top of this document.
-
- The inactivity timeout applies when the client is expected to acknowledge or
- send data. In HTTP mode, this timeout is particularly important to consider
- during the first phase, when the client sends the request, and during the
- response while it is reading data sent by the server. That said, for the
- first phase, it is preferable to set the "timeout http-request" to better
- protect HAProxy from Slowloris like attacks. The value is specified in
- milliseconds by default, but can be in any other unit if the number is
- suffixed by the unit, as specified at the top of this document. In TCP mode
- (and to a lesser extent, in HTTP mode), it is highly recommended that the
- client timeout remains equal to the server timeout in order to avoid complex
- situations to debug. It is a good practice to cover one or several TCP packet
- losses by specifying timeouts that are slightly above multiples of 3 seconds
- (e.g. 4 or 5 seconds). If some long-lived streams are mixed with short-lived
- streams (e.g. WebSocket and HTTP), it's worth considering "timeout tunnel",
- which overrides "timeout client" and "timeout server" for tunnels, as well as
- "timeout client-fin" for half-closed connections.
-
- This parameter is specific to frontends, but can be specified once for all in
- "defaults" sections. This is in fact one of the easiest solutions not to
- forget about it. An unspecified timeout results in an infinite timeout, which
- is not recommended. Such a usage is accepted and works but reports a warning
- during startup because it may result in accumulation of expired sessions in
- the system if the system's timeouts are not configured either.
-
- See also : "timeout server", "timeout tunnel", "timeout http-request".
-
-
-timeout client-fin <timeout>
- Set the inactivity timeout on the client side for half-closed connections.
-
- May be used in the following contexts: tcp, http
-
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
-
- Arguments :
- <timeout> is the timeout value specified in milliseconds by default, but
- can be in any other unit if the number is suffixed by the unit,
- as explained at the top of this document.
-
- The inactivity timeout applies when the client is expected to acknowledge or
- send data while one direction is already shut down. This timeout is different
- from "timeout client" in that it only applies to connections which are closed
- in one direction. This is particularly useful to avoid keeping connections in
- FIN_WAIT state for too long when clients do not disconnect cleanly. This
- problem is particularly common long connections such as RDP or WebSocket.
- Note that this timeout can override "timeout tunnel" when a connection shuts
- down in one direction. It is applied to idle HTTP/2 connections once a GOAWAY
- frame was sent, often indicating an expectation that the connection quickly
- ends.
-
- This parameter is specific to frontends, but can be specified once for all in
- "defaults" sections. By default it is not set, so half-closed connections
- will use the other timeouts (timeout.client or timeout.tunnel).
-
- See also : "timeout client", "timeout server-fin", and "timeout tunnel".
-
-
-timeout client-hs <timeout>
- Set the maximum time to wait for a client TLS handshake to complete. This is
- usable both for TCP and QUIC connections.
-
- May be used in the following contexts: tcp, http
-
- May be used in sections : defaults | frontend | listen | backend
- yes | yes | yes | no
-
- Arguments :
- <timeout> is the timeout value specified in milliseconds by default, but
- can be in any other unit if the number is suffixed by the unit,
- as explained at the top of this document.
-
- If this handshake timeout is not set, this is the client timeout which is used
- in place.
+ If this handshake timeout is not set, "timeout client" is used instead.
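+
+  Example (an illustrative sketch; "site.pem" is a placeholder) :
+        frontend https-in
+            bind :443 ssl crt site.pem
+            timeout client    30s
+            timeout client-hs 5s   # fail slow TLS handshakes early
+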
timeout connect <timeout>
- verify
<sslbindconf> also supports the following keywords from the crt-store load
- keyword (see Section 3.11.1. Load options):
+ keyword (see Section 12.7.1. Load options):
- crt
- key
| H | %tsc | termination_state with cookie status | string |
+---+------+------------------------------------------------------+---------+
- R = Restrictions : H = mode http only ; S = SSL only ; L = log only
+ R = Restrictions : H = mode http only ; S = SSL only ; L = log only
+
+
+8.3. Advanced logging options
+-----------------------------
+
+Some advanced logging options are often looked for but are not easy to find
+just by looking at the various options. Here is an entry point for the few
+options which can enable better logging. Please refer to the keywords reference
+for more information about their usage.
+
+
+8.3.1. Disabling logging of external tests
+------------------------------------------
+
+It is quite common to have some monitoring tools perform health checks on
+HAProxy. Sometimes it will be a layer 3 load-balancer such as LVS or any
+commercial load-balancer, and sometimes it will simply be a more complete
+monitoring system such as Nagios. When the tests are very frequent, users often
+ask how to disable logging for those checks. There are three possibilities :
+
+ - if connections come from everywhere and are just TCP probes, it is often
+ desired to simply disable logging of connections without data exchange, by
+ setting "option dontlognull" in the frontend. It also disables logging of
+ port scans, which may or may not be desired.
+
+ - it is possible to use the "http-request set-log-level silent" action using
+ a variety of conditions (source networks, paths, user-agents, etc).
+
+ - if the tests are performed on a known URI, use "monitor-uri" to declare
+ this URI as dedicated to monitoring. Any host sending this request will
+ only get the result of a health-check, and the request will not be logged.
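+
+For example (an illustrative sketch; the network and URI are arbitrary), the
+approaches above can be combined in a frontend :
+
+    frontend www
+        bind :80
+        option dontlognull
+        monitor-uri /haproxy-health
+        http-request set-log-level silent if { src 10.0.0.0/8 }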
+
+
+8.3.2. Logging before waiting for the stream to terminate
+---------------------------------------------------------
+
+The problem with logging at end of connection is that you have no clue about
+what is happening during very long streams, such as remote terminal sessions
+or large file downloads. This problem can be worked around by specifying
+"option logasap" in the frontend. HAProxy will then log as soon as possible,
+just before data transfer begins. This means that in case of TCP, it will still
+log the connection status to the server, and in case of HTTP, it will log just
+after processing the server headers. In this case, the number of bytes reported
+is the number of header bytes sent to the client. In order to avoid confusion
+with normal logs, the total time field and the number of bytes are prefixed
+with a '+' sign which means that real numbers are certainly larger.
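+
+For example (an illustrative sketch) :
+
+    frontend downloads
+        mode http
+        option httplog
+        option logasap   # log right after server headers are processed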
+
+
+8.3.3. Raising log level upon errors
+------------------------------------
+
+Sometimes it is more convenient to separate normal traffic from errors logs,
+for instance in order to ease error monitoring from log files. When the option
+"log-separate-errors" is used, connections which experience errors, timeouts,
+retries, redispatches or HTTP status codes 5xx will see their syslog level
+raised from "info" to "err". This will help a syslog daemon store the log in
+a separate file. It is very important to keep the errors in the normal traffic
+file too, so that log ordering is not altered. You should also be careful if
+you already have configured your syslog daemon to store all logs higher than
+"notice" in an "admin" file, because the "err" level is higher than "notice".
+
+
+8.3.4. Disabling logging of successful connections
+--------------------------------------------------
+
+Although this may sound strange at first, some large sites have to deal with
+multiple thousands of logs per second and are experiencing difficulties keeping
+them intact for a long time or detecting errors within them. If the option
+"dontlog-normal" is set on the frontend, all normal connections will not be
+logged. In this regard, a normal connection is defined as one without any
+error, timeout, retry nor redispatch. In HTTP, the status code is checked too,
+and a response with a status 5xx is not considered normal and will be logged
+too. Of course, doing is is really discouraged as it will remove most of the
+useful information from the logs. Do this only if you have no other
+alternative.
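+
+For example (an illustrative sketch) :
+
+    frontend www
+        option httplog
+        option dontlog-normal   # errors, timeouts, retries, redispatches and
+                                # 5xx responses remain logged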
+
+
+8.3.5. Log profiles
+-------------------
+
+While some directives such as "log-format", "log-format-sd", "error-log-format"
+or "log-tag" make it possible to configure log formatting globally or at the
+proxy level, it may be relevant to configure such settings as close as possible
+to the log endpoints, that is, per "log" directive.
+
+This is where the "log-profile" section comes into play: "log-profile"
+sections may be defined anywhere in the configuration. Such a section accepts
+a set of keywords that are used to describe how the logs emitted for a given
+"log" directive should be built.
+
+From a "log" directive, one can choose to use a specific log-profile by its
+name. The same profile may be used from multiple "log" directives.
+
+log-profile <name>
+ Creates a new log profile identified as <name>
+
+log-tag <string>
+ Override syslog log tag set globally or per-proxy using "log-tag" directive.
+
+on <step> [drop] [format <fmt>] [sd <sd_fmt>]
+ Override the log-format string normally used to build the log line at the
+ <step> logging step. <fmt> is used to override "log-format" or
+ "error-log-format" strings (depending on the <step>) whereas <sd_fmt> is
+ used to override "log-format-sd" string (both can be combined).
+
+ "drop" special keyword may be used to specify that no log should be
+ emitted for the given <step>. It takes precedence over "format" and
+ "sd" if previously defined.
+
+ Possible values for <step> are:
+
+ - "accept" : override log-format if the log is generated right after
+ frontend connection was accepted
+ - "request" : override log-format if the log is generated after client
+ request was received
+ - "connect" : override log-format if the log is generated after backend
+ connection establishment
+ - "response" : override log-format if the log is generated during server
+ response handling
+ - "close" : override log-format if the log is generated at the final
+ transaction (txn) step
+ - "error" : override error-log-format for if the log is generated due
+ to a transaction error
+ - "any" : override both log-format and error-log-format for all logging
+ steps, unless a more precise step override is declared.
+
+ See "do-log" action for relevant additional <step> values.
+
+ This setting is only relevant for "log" directives used from contexts where
+ using the "log-format" directive makes sense (e.g. http and tcp proxies).
+ Otherwise it is simply ignored.
+
+ Example :
+
+ log-profile myprof
+
+ log-tag "custom-tag"
+
+ on error format "%ci: error"
+ on connect drop
+ on any sd "custom-sd"
+
+ listen myproxy
+ mode http
+ option httplog
+ log-tag "normal"
+
+ log stdout format rfc5424 local0
+ # success:
+ # <134>1 2024-06-12T10:09:11.823400+02:00 - normal 224482 - - 127.0.0.1:53594 [12/Jun/2024:10:09:11.814] myproxy myproxy/<NOSRV> 0/-1/-1/-1/0 200 49 - - LR-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
+ #
+ # error:
+ # <134>1 2024-06-12T10:09:44.810929+02:00 - normal 224482 - - 127.0.0.1:59258 [12/Jun/2024:10:09:44.426] myproxy myproxy/<NOSRV> -1/-1/-1/-1/384 400 0 - - CR-- 1/1/0/0/0 0/0 "<BADREQ>"
+
+ log 127.0.0.1:514 format rfc5424 profile myprof local0
+ # success:
+ # <134>1 2024-06-12T10:09:11.823428+02:00 - custom-tag 224482 - custom-sd 127.0.0.1:53594 [12/Jun/2024:10:09:11.814] myproxy myproxy/<NOSRV> 0/-1/-1/-1/0 200 49 - - LR-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
+ #
+ # error:
+ # <134>1 2024-06-12T10:09:51.566524+02:00 - custom-tag 224482 - - 127.0.0.1: error
+
+
+8.4. Timing events
+------------------
+
+Timers provide a great help in troubleshooting network problems. All values are
+reported in milliseconds (ms). These timers should be used in conjunction with
+the stream termination flags. In TCP mode with "option tcplog" set on the
+frontend, 3 control points are reported under the form "Tw/Tc/Tt", and in HTTP
+mode, 5 control points are reported under the form "TR/Tw/Tc/Tr/Ta". In
+addition, three other measures are provided, "Th", "Ti", and "Tq".
+
+Timing events in HTTP mode:
+
+ first request 2nd request
+ |<-------------------------------->|<-------------- ...
+ t tr t tr ...
+ ---|----|----|----|----|----|----|----|----|--
+ : Th Ti TR Tw Tc Tr Td : Ti ...
+ :<---- Tq ---->: :
+ :<-------------- Tt -------------->:
+           :<----------- Tu ------------>:
+ :<--------- Ta --------->:
+
+Timing events in TCP mode:
+
+ TCP session
+ |<----------------->|
+ t t
+ ---|----|----|----|----|---
+ | Th Tw Tc Td |
+ |<------ Tt ------->|
+
+ - Th: total time to accept tcp connection and execute handshakes for low level
+ protocols. Currently, these protocols are proxy-protocol and SSL. This may
+ only happen once during the whole connection's lifetime. A large time here
+ may indicate that the client only pre-established the connection without
+ speaking, that it is experiencing network issues preventing it from
+ completing a handshake in a reasonable time (e.g. MTU issues), or that an
+ SSL handshake was very expensive to compute. Please note that this time is
+    reported only before the first request, so it is safe to average it over
+    all requests to calculate the amortized value. The second and subsequent
+    requests will always report zero here.
+
+ This timer is named %Th as a log-format alias, and fc.timer.handshake as a
+ sample fetch.
+
+ - Ti: is the idle time before the HTTP request (HTTP mode only). This timer
+ counts between the end of the handshakes and the first byte of the HTTP
+ request. When dealing with a second request in keep-alive mode, it starts
+    to count after the end of the transmission of the previous response. When a
+ multiplexed protocol such as HTTP/2 is used, it starts to count immediately
+ after the previous request. Some browsers pre-establish connections to a
+ server in order to reduce the latency of a future request, and keep them
+ pending until they need it. This delay will be reported as the idle time. A
+ value of -1 indicates that nothing was received on the connection.
+
+ This timer is named %Ti as a log-format alias, and req.timer.idle as a
+ sample fetch.
+
+ - TR: total time to get the client request (HTTP mode only). It's the time
+ elapsed between the first bytes received and the moment the proxy received
+ the empty line marking the end of the HTTP headers. The value "-1"
+ indicates that the end of headers has never been seen. This happens when
+ the client closes prematurely or times out. This time is usually very short
+ since most requests fit in a single packet. A large time may indicate a
+ request typed by hand during a test.
+
+ This timer is named %TR as a log-format alias, and req.timer.hdr as a
+ sample fetch.
+
+ - Tq: total time to get the client request from the accept date or since the
+ emission of the last byte of the previous response (HTTP mode only). It's
+ exactly equal to Th + Ti + TR unless any of them is -1, in which case it
+ returns -1 as well. This timer used to be very useful before the arrival of
+ HTTP keep-alive and browsers' pre-connect feature. It's recommended to drop
+ it in favor of TR nowadays, as the idle time adds a lot of noise to the
+ reports.
+
+ This timer is named %Tq as a log-format alias, and req.timer.tq as a
+ sample fetch.
+
+ - Tw: total time spent in the queues waiting for a connection slot. It
+ accounts for backend queue as well as the server queues, and depends on the
+ queue size, and the time needed for the server to complete previous
+ requests. The value "-1" means that the request was killed before reaching
+ the queue, which is generally what happens with invalid or denied requests.
+
+ This timer is named %Tw as a log-format alias, and req.timer.queue as a
+ sample fetch.
+
+ - Tc: total time to establish the TCP connection to the server. It's the time
+ elapsed between the moment the proxy sent the connection request, and the
+ moment it was acknowledged by the server, or between the TCP SYN packet and
+    the matching SYN/ACK packet in return. The value "-1" means that the
+    connection was never established.
+
+ This timer is named %Tc as a log-format alias, and bc.timer.connect as a
+ sample fetch.
+
+ - Tr: server response time (HTTP mode only). It's the time elapsed between
+ the moment the TCP connection was established to the server and the moment
+ the server sent its complete response headers. It purely shows its request
+ processing time, without the network overhead due to the data transmission.
+ It is worth noting that when the client has data to send to the server, for
+ instance during a POST request, the time already runs, and this can distort
+    apparent response time. For this reason, it's generally wise not to trust
+    this field too much for POST requests initiated from clients behind an
+ untrusted network. A value of "-1" here means that the last response header
+ (empty line) was never seen, most likely because the server timeout stroke
+ before the server managed to process the request or because the server
+ returned an invalid response.
+
+ This timer is named %Tr as a log-format alias, and res.timer.hdr as a
+ sample fetch.
+
+ - Td: this is the total transfer time of the response payload till the last
+ byte sent to the client. In HTTP it starts after the last response header
+ (after Tr).
+
+    The data sent are not guaranteed to be received by the client; they can
+    remain stuck in either the kernel or the network.
+
+ This timer is named %Td as a log-format alias, and res.timer.data as a
+ sample fetch.
+
+ - Ta: total active time for the HTTP request, between the moment the proxy
+ received the first byte of the request header and the emission of the last
+ byte of the response body. The exception is when the "logasap" option is
+ specified. In this case, it only equals (TR+Tw+Tc+Tr), and is prefixed with
+ a '+' sign. From this field, we can deduce "Td", the data transmission time,
+ by subtracting other timers when valid :
+
+ Td = Ta - (TR + Tw + Tc + Tr)
+
+ Timers with "-1" values have to be excluded from this equation. Note that
+ "Ta" can never be negative.
+
+ This timer is named %Ta as a log-format alias, and txn.timer.total as a
+ sample fetch.
+
+ - Tt: total stream duration time, between the moment the proxy accepted it
+ and the moment both ends were closed. The exception is when the "logasap"
+ option is specified. In this case, it only equals (Th+Ti+TR+Tw+Tc+Tr), and
+ is prefixed with a '+' sign. From this field, we can deduce "Td", the data
+ transmission time, by subtracting other timers when valid :
+
+ Td = Tt - (Th + Ti + TR + Tw + Tc + Tr)
+
+ Timers with "-1" values have to be excluded from this equation. In TCP
+ mode, "Ti", "Tq" and "Tr" have to be excluded too. Note that "Tt" can never
+ be negative and that for HTTP, Tt is simply equal to (Th+Ti+Ta).
+
+ This timer is named %Tt as a log-format alias, and fc.timer.total as a
+ sample fetch.
+
+ - Tu: total estimated time as seen from client, between the moment the proxy
+ accepted it and the moment both ends were closed, without idle time.
+ This is useful to roughly measure end-to-end time as a user would see it,
+ without idle time pollution from keep-alive time between requests. This
+    timer is only an estimation of the time seen by the user, as it assumes
+    network latency is the same in both directions. The exception is when the
+    "logasap" option is specified. In this case, it only equals
+    (Th+TR+Tw+Tc+Tr), and is
+ prefixed with a '+' sign.
+
+ This timer is named %Tu as a log-format alias, and txn.timer.user as a
+ sample fetch.
+
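+For instance, the timer aliases above can be combined into a compact,
+timing-only log line for troubleshooting. The following is a minimal sketch
+in which the frontend name, address and log destination are illustrative :
+
+    frontend www-timers
+        mode http
+        bind :8080
+        log stdout format raw local0
+        # log only the client address, the date and the timing fields
+        log-format "%ci:%cp [%tr] TR=%TR Tw=%Tw Tc=%Tc Tr=%Tr Ta=%Ta Tt=%Tt"
+        default_backend app
+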
+These timers provide precious indications on trouble causes. Since the TCP
+protocol defines retransmit delays of 3, 6, 12... seconds, we know for sure
+that timers close to multiples of 3s are nearly always related to lost packets
+due to network problems (wires, negotiation, congestion). Moreover, if "Ta" or
+"Tt" is close to a timeout value specified in the configuration, it often means
+that a stream has been aborted on timeout.
+
+Most common cases :
+
+ - If "Th" or "Ti" are close to 3000, a packet has probably been lost between
+ the client and the proxy. This is very rare on local networks but might
+ happen when clients are on far remote networks and send large requests. It
+ may happen that values larger than usual appear here without any network
+ cause. Sometimes, during an attack or just after a resource starvation has
+ ended, HAProxy may accept thousands of connections in a few milliseconds.
+ The time spent accepting these connections will inevitably slightly delay
+ processing of other connections, and it can happen that request times in the
+ order of a few tens of milliseconds are measured after a few thousands of
+ new connections have been accepted at once. Using one of the keep-alive
+ modes may display larger idle times since "Ti" measures the time spent
+ waiting for additional requests.
+
+ - If "Tc" is close to 3000, a packet has probably been lost between the
+ server and the proxy during the server connection phase. This value should
+ always be very low, such as 1 ms on local networks and less than a few tens
+ of ms on remote networks.
+
+  - If "Tr" is nearly always lower than 3000 except some rare values which
+    seem to be the average increased by 3000, there are probably some packets
+    lost between the proxy and the server.
+
+ - If "Ta" is large even for small byte counts, it generally is because
+ neither the client nor the server decides to close the connection while
+ HAProxy is running in tunnel mode and both have agreed on a keep-alive
+    connection mode. In order to solve this issue, it is necessary to specify
+ one of the HTTP options to manipulate keep-alive or close options on either
+ the frontend or the backend. Having the smallest possible 'Ta' or 'Tt' is
+ important when connection regulation is used with the "maxconn" option on
+ the servers, since no new connection will be sent to the server until
+ another one is released.
+
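+For instance, assuming a keep-alive or tunnel setup is inflating "Ta" or "Tt"
+as described above, server-side close may be enabled. This is only a sketch;
+the backend name and server address are illustrative :
+
+    backend app
+        mode http
+        # close the server-side connection after each response while
+        # keeping client-side keep-alive functional
+        option http-server-close
+        server s1 192.168.0.10:80
+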
+Other noticeable HTTP log cases ('xx' means any value to be ignored) :
+
+ TR/Tw/Tc/Tr/+Ta The "option logasap" is present on the frontend and the log
+ was emitted before the data phase. All the timers are valid
+ except "Ta" which is shorter than reality.
+
+ -1/xx/xx/xx/Ta The client was not able to send a complete request in time
+ or it aborted too early. Check the stream termination flags
+ then "timeout http-request" and "timeout client" settings.
+
+ TR/-1/xx/xx/Ta It was not possible to process the request, maybe because
+ servers were out of order, because the request was invalid
+ or forbidden by ACL rules. Check the stream termination
+ flags.
+
+ TR/Tw/-1/xx/Ta The connection could not establish on the server. Either it
+ actively refused it or it timed out after Ta-(TR+Tw) ms.
+ Check the stream termination flags, then check the
+ "timeout connect" setting. Note that the tarpit action might
+ return similar-looking patterns, with "Tw" equal to the time
+ the client connection was maintained open.
+
+ TR/Tw/Tc/-1/Ta The server has accepted the connection but did not return
+ a complete response in time, or it closed its connection
+ unexpectedly after Ta-(TR+Tw+Tc) ms. Check the stream
+ termination flags, then check the "timeout server" setting.
+
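+The timer sample fetches described above may also be used to single out
+suspicious requests at logging time. As an illustrative sketch, assuming an
+HTTP proxy where connect times close to 3s suggest a retransmit :
+
+    # raise the log level when the server connect time looks like a
+    # TCP retransmit (close to the 3s initial retransmit delay)
+    http-response set-log-level err if { bc.timer.connect ge 3000 }
+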
+
+8.5. Stream state at disconnection
+-----------------------------------
+
+TCP and HTTP logs provide a stream termination indicator in the
+"termination_state" field, just before the number of active connections. It is
+2 characters long in TCP mode, and is extended to 4 characters in HTTP mode,
+each of which has a special meaning :
+
+ - On the first character, a code reporting the first event which caused the
+ stream to terminate :
+
+ C : the TCP session was unexpectedly aborted by the client.
+
+ S : the TCP session was unexpectedly aborted by the server, or the
+ server explicitly refused it.
+
+ P : the stream or session was prematurely aborted by the proxy, because
+ of a connection limit enforcement, because a DENY filter was
+ matched, because of a security check which detected and blocked a
+            dangerous error in the server's response which might have caused
+            an information leak (e.g. a cacheable cookie).
+
+ L : the stream was locally processed by HAProxy.
+
+ R : a resource on the proxy has been exhausted (memory, sockets, source
+ ports, ...). Usually, this appears during the connection phase, and
+ system logs should contain a copy of the precise error. If this
+ happens, it must be considered as a very serious anomaly which
+ should be fixed as soon as possible by any means.
+
+ I : an internal error was identified by the proxy during a self-check.
+ This should NEVER happen, and you are encouraged to report any log
+ containing this, because this would almost certainly be a bug. It
+ would be wise to preventively restart the process after such an
+ event too, in case it would be caused by memory corruption.
+
+ D : the stream was killed by HAProxy because the server was detected
+ as down and was configured to kill all connections when going down.
+
+ U : the stream was killed by HAProxy on this backup server because an
+ active server was detected as up and was configured to kill all
+ backup connections when going up.
+
+ K : the stream was actively killed by an admin operating on HAProxy.
+
+ c : the client-side timeout expired while waiting for the client to
+ send or receive data.
+
+ s : the server-side timeout expired while waiting for the server to
+ send or receive data.
+
+ - : normal stream completion, both the client and the server closed
+ with nothing left in the buffers.
+
+ - on the second character, the TCP or HTTP stream state when it was closed :
+
+ R : the proxy was waiting for a complete, valid REQUEST from the client
+ (HTTP mode only). Nothing was sent to any server.
+
+ Q : the proxy was waiting in the QUEUE for a connection slot. This can
+ only happen when servers have a 'maxconn' parameter set. It can
+ also happen in the global queue after a redispatch consecutive to
+ a failed attempt to connect to a dying server. If no redispatch is
+ reported, then no connection attempt was made to any server.
+
+ C : the proxy was waiting for the CONNECTION to establish on the
+ server. The server might at most have noticed a connection attempt.
+
+ H : the proxy was waiting for complete, valid response HEADERS from the
+ server (HTTP only).
+
+ D : the stream was in the DATA phase.
+
+ L : the proxy was still transmitting LAST data to the client while the
+ server had already finished. This one is very rare as it can only
+ happen when the client dies while receiving the last packets.
+
+ T : the request was tarpitted. It has been held open with the client
+ during the whole "timeout tarpit" duration or until the client
+ closed, both of which will be reported in the "Tw" timer.
+
+ - : normal stream completion after end of data transfer.
+
+ - the third character tells whether the persistence cookie was provided by
+ the client (only in HTTP mode) :
+
+ N : the client provided NO cookie. This is usually the case for new
+            visitors, so counting the number of occurrences of this flag in
+            the logs generally indicates a valid trend for the site's traffic.
+
+ I : the client provided an INVALID cookie matching no known server.
+ This might be caused by a recent configuration change, mixed
+ cookies between HTTP/HTTPS sites, persistence conditionally
+ ignored, or an attack.
+
+ D : the client provided a cookie designating a server which was DOWN,
+ so either "option persist" was used and the client was sent to
+ this server, or it was not set and the client was redispatched to
+ another server.
+
+ V : the client provided a VALID cookie, and was sent to the associated
+ server.
+
+ E : the client provided a valid cookie, but with a last date which was
+ older than what is allowed by the "maxidle" cookie parameter, so
+            the cookie is considered EXPIRED and is ignored. The request will be
+ redispatched just as if there was no cookie.
+
+ O : the client provided a valid cookie, but with a first date which was
+ older than what is allowed by the "maxlife" cookie parameter, so
+            the cookie is considered too OLD and is ignored. The request will be
+ redispatched just as if there was no cookie.
+
+ U : a cookie was present but was not used to select the server because
+ some other server selection mechanism was used instead (typically a
+ "use-server" rule).
+
+ - : does not apply (no cookie set in configuration).
+
+ - the last character reports what operations were performed on the persistence
+ cookie returned by the server (only in HTTP mode) :
+
+ N : NO cookie was provided by the server, and none was inserted either.
+
+ I : no cookie was provided by the server, and the proxy INSERTED one.
+ Note that in "cookie insert" mode, if the server provides a cookie,
+ it will still be overwritten and reported as "I" here.
+
+ U : the proxy UPDATED the last date in the cookie that was presented by
+ the client. This can only happen in insert mode with "maxidle". It
+ happens every time there is activity at a different date than the
+ date indicated in the cookie. If any other change happens, such as
+ a redispatch, then the cookie will be marked as inserted instead.
+
+ P : a cookie was PROVIDED by the server and transmitted as-is.
+
+ R : the cookie provided by the server was REWRITTEN by the proxy, which
+ happens in "cookie rewrite" or "cookie prefix" modes.
+
+ D : the cookie provided by the server was DELETED by the proxy.
+
+ - : does not apply (no cookie set in configuration).
+
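+As an illustration, the "E" and "O" flags above can only appear when the
+cookie date limits are set. A minimal sketch, with illustrative names and
+addresses :
+
+    backend app
+        cookie SRVID insert indirect nocache maxidle 30m maxlife 8h
+        server s1 192.168.0.11:80 cookie s1
+        server s2 192.168.0.12:80 cookie s2
+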
+The combination of the first two flags gives a lot of information about what
+was happening when the stream or session terminated, and why it did terminate.
+It can be helpful to detect server saturation, network troubles, local system
+resource starvation, attacks, etc...
+
+The most common termination flags combinations are indicated below. They are
+alphabetically sorted, with the lowercase set just after the upper case for
+easier finding and understanding.
+
+ Flags Reason
+
+ -- Normal termination.
+
+ CC The client aborted before the connection could be established to the
+ server. This can happen when HAProxy tries to connect to a recently
+ dead (or unchecked) server, and the client aborts while HAProxy is
+ waiting for the server to respond or for "timeout connect" to expire.
+
+ CD The client unexpectedly aborted during data transfer. This can be
+ caused by a browser crash, by an intermediate equipment between the
+ client and HAProxy which decided to actively break the connection,
+ by network routing issues between the client and HAProxy, or by a
+ keep-alive stream between the server and the client terminated first
+ by the client.
+
+ cD The client did not send nor acknowledge any data for as long as the
+ "timeout client" delay. This is often caused by network failures on
+ the client side, or the client simply leaving the net uncleanly.
+
+ CH The client aborted while waiting for the server to start responding.
+ It might be the server taking too long to respond or the client
+ clicking the 'Stop' button too fast.
+
+ cH The "timeout client" stroke while waiting for client data during a
+ POST request. This is sometimes caused by too large TCP MSS values
+ for PPPoE networks which cannot transport full-sized packets. It can
+ also happen when client timeout is smaller than server timeout and
+ the server takes too long to respond.
+
+ CQ The client aborted while its stream was queued, waiting for a server
+ with enough empty slots to accept it. It might be that either all the
+ servers were saturated or that the assigned server was taking too
+ long a time to respond.
+
+ CR The client aborted before sending a full HTTP request. Most likely
+ the request was typed by hand using a telnet client, and aborted
+ too early. The HTTP status code is likely a 400 here. Sometimes this
+ might also be caused by an IDS killing the connection between HAProxy
+ and the client. "option http-ignore-probes" can be used to ignore
+ connections without any data transfer.
+
+ cR The "timeout http-request" stroke before the client sent a full HTTP
+ request. This is sometimes caused by too large TCP MSS values on the
+ client side for PPPoE networks which cannot transport full-sized
+ packets, or by clients sending requests by hand and not typing fast
+ enough, or forgetting to enter the empty line at the end of the
+ request. The HTTP status code is likely a 408 here. Note: recently,
+ some browsers started to implement a "pre-connect" feature consisting
+ in speculatively connecting to some recently visited web sites just
+ in case the user would like to visit them. This results in many
+ connections being established to web sites, which end up in 408
+ Request Timeout if the timeout strikes first, or 400 Bad Request when
+ the browser decides to close them first. These ones pollute the log
+ and feed the error counters. Some versions of some browsers have even
+ been reported to display the error code. It is possible to work
+ around the undesirable effects of this behavior by adding "option
+ http-ignore-probes" in the frontend, resulting in connections with
+ zero data transfer to be totally ignored. This will definitely hide
+ the errors of people experiencing connectivity issues though.
+
+ CT The client aborted while its stream was tarpitted. It is important to
+ check if this happens on valid requests, in order to be sure that no
+ wrong tarpit rules have been written. If a lot of them happen, it
+ might make sense to lower the "timeout tarpit" value to something
+ closer to the average reported "Tw" timer, in order not to consume
+ resources for just a few attackers.
+
+ LC The request was intercepted and locally handled by HAProxy. The
+ request was not sent to the server. It only happens with a redirect
+ because of a "redir" parameter on the server line.
+
+ LR The request was intercepted and locally handled by HAProxy. The
+ request was not sent to the server. Generally it means a redirect was
+ returned, an HTTP return statement was processed or the request was
+          handled by an applet (stats, cache, Prometheus exporter, lua applet...).
+
+ LH The response was intercepted and locally handled by HAProxy. Generally
+ it means a redirect was returned or an HTTP return statement was
+ processed.
+
+ SC The server or an equipment between it and HAProxy explicitly refused
+ the TCP connection (the proxy received a TCP RST or an ICMP message
+ in return). Under some circumstances, it can also be the network
+ stack telling the proxy that the server is unreachable (e.g. no route,
+ or no ARP response on local network). When this happens in HTTP mode,
+ the status code is likely a 502 or 503 here.
+
+ sC The "timeout connect" stroke before a connection to the server could
+ complete. When this happens in HTTP mode, the status code is likely a
+ 503 or 504 here.
+
+ SD The connection to the server died with an error during the data
+ transfer. This usually means that HAProxy has received an RST from
+ the server or an ICMP message from an intermediate equipment while
+ exchanging data with the server. This can be caused by a server crash
+ or by a network issue on an intermediate equipment.
+
+ sD The server did not send nor acknowledge any data for as long as the
+ "timeout server" setting during the data phase. This is often caused
+ by too short timeouts on L4 equipment before the server (firewalls,
+ load-balancers, ...), as well as keep-alive sessions maintained
+ between the client and the server expiring first on HAProxy.
+
+ SH The server aborted before sending its full HTTP response headers, or
+ it crashed while processing the request. Since a server aborting at
+ this moment is very rare, it would be wise to inspect its logs to
+          check whether it crashed and why. The logged request may indicate a
+ small set of faulty requests, demonstrating bugs in the application.
+ Sometimes this might also be caused by an IDS killing the connection
+ between HAProxy and the server.
+
+ sH The "timeout server" stroke before the server could return its
+ response headers. This is the most common anomaly, indicating too
+ long transactions, probably caused by server or database saturation.
+ The immediate workaround consists in increasing the "timeout server"
+ setting, but it is important to keep in mind that the user experience
+ will suffer from these long response times. The only long term
+ solution is to fix the application.
+
+ sQ The stream spent too much time in queue and has been expired. See
+ the "timeout queue" and "timeout connect" settings to find out how to
+ fix this if it happens too often. If it often happens massively in
+ short periods, it may indicate general problems on the affected
+ servers due to I/O or database congestion, or saturation caused by
+ external attacks.
+
-8.3. Advanced logging options
------------------------------
+ PC The proxy refused to establish a connection to the server because the
+ process's socket limit has been reached while attempting to connect.
+ The global "maxconn" parameter may be increased in the configuration
+ so that it does not happen anymore. This status is very rare and
+ might happen when the global "ulimit-n" parameter is forced by hand.
+
-Some advanced logging options are often looked for but are not easy to find out
-just by looking at the various options. Here is an entry point for the few
-options which can enable better logging. Please refer to the keywords reference
-for more information about their usage.
+ PD The proxy blocked an incorrectly formatted chunked encoded message in
+ a request or a response, after the server has emitted its headers. In
+ most cases, this will indicate an invalid message from the server to
+ the client. HAProxy supports chunk sizes of up to 2GB - 1 (2147483647
+ bytes). Any larger size will be considered as an error.
+
+ PH The proxy blocked the server's response, because it was invalid,
+ incomplete, dangerous (cache control), or matched a security filter.
+ In any case, an HTTP 502 error is sent to the client. One possible
+ cause for this error is an invalid syntax in an HTTP header name
+          containing unauthorized characters. It is also possible, but quite
+ rare, that the proxy blocked a chunked-encoding request from the
+ client due to an invalid syntax, before the server responded. In this
+ case, an HTTP 400 error is sent to the client and reported in the
+ logs. Finally, it may be due to an HTTP header rewrite failure on the
+ response. In this case, an HTTP 500 error is sent (see
+ "tune.maxrewrite" and "http-response strict-mode" for more
+          information).
+
-8.3.1. Disabling logging of external tests
-------------------------------------------
+ PR The proxy blocked the client's HTTP request, either because of an
+ invalid HTTP syntax, in which case it returned an HTTP 400 error to
+ the client, or because a deny filter matched, in which case it
+ returned an HTTP 403 error. It may also be due to an HTTP header
+ rewrite failure on the request. In this case, an HTTP 500 error is
+ sent (see "tune.maxrewrite" and "http-request strict-mode" for more
+          information).
+
-It is quite common to have some monitoring tools perform health checks on
-HAProxy. Sometimes it will be a layer 3 load-balancer such as LVS or any
-commercial load-balancer, and sometimes it will simply be a more complete
-monitoring system such as Nagios. When the tests are very frequent, users often
-ask how to disable logging for those checks. There are three possibilities :
+ PT The proxy blocked the client's request and has tarpitted its
+ connection before returning it a 500 server error. Nothing was sent
+ to the server. The connection was maintained open for as long as
+ reported by the "Tw" timer field.
+
- - if connections come from everywhere and are just TCP probes, it is often
- desired to simply disable logging of connections without data exchange, by
- setting "option dontlognull" in the frontend. It also disables logging of
- port scans, which may or may not be desired.
+ RC A local resource has been exhausted (memory, sockets, source ports)
+ preventing the connection to the server from establishing. The error
+ logs will tell precisely what was missing. This is very rare and can
+ only be solved by proper system tuning.
+
- - it is possible to use the "http-request set-log-level silent" action using
- a variety of conditions (source networks, paths, user-agents, etc).
+The combination of the last two flags gives a lot of information about how
+persistence was handled by the client, the server and by HAProxy. This is very
+important to troubleshoot disconnections, when users complain they have to
+re-authenticate. The commonly encountered flags are :
+
- - if the tests are performed on a known URI, use "monitor-uri" to declare
- this URI as dedicated to monitoring. Any host sending this request will
- only get the result of a health-check, and the request will not be logged.
+ -- Persistence cookie is not enabled.
+
+ NN No cookie was provided by the client, none was inserted in the
+          response. For instance, this can happen in insert mode with
+          "postonly" set on a GET request.
+
-8.3.2. Logging before waiting for the stream to terminate
-----------------------------------------------------------
+ II A cookie designating an invalid server was provided by the client,
+ a valid one was inserted in the response. This typically happens when
+ a "server" entry is removed from the configuration, since its cookie
+ value can be presented by a client when no other server knows it.
+
-The problem with logging at end of connection is that you have no clue about
-what is happening during very long streams, such as remote terminal sessions
-or large file downloads. This problem can be worked around by specifying
-"option logasap" in the frontend. HAProxy will then log as soon as possible,
-just before data transfer begins. This means that in case of TCP, it will still
-log the connection status to the server, and in case of HTTP, it will log just
-after processing the server headers. In this case, the number of bytes reported
-is the number of header bytes sent to the client. In order to avoid confusion
-with normal logs, the total time field and the number of bytes are prefixed
-with a '+' sign which means that real numbers are certainly larger.
+ NI No cookie was provided by the client, one was inserted in the
+ response. This typically happens for first requests from every user
+ in "insert" mode, which makes it an easy way to count real users.
+
+ VN A cookie was provided by the client, none was inserted in the
+ response. This happens for most responses for which the client has
+ already got a cookie.
+
-8.3.3. Raising log level upon errors
-------------------------------------
+ VU A cookie was provided by the client, with a last visit date which is
+ not completely up-to-date, so an updated cookie was provided in
+ response. This can also happen if there was no date at all, or if
+ there was a date but the "maxidle" parameter was not set, so that the
+ cookie can be switched to unlimited time.
+
-Sometimes it is more convenient to separate normal traffic from errors logs,
-for instance in order to ease error monitoring from log files. When the option
-"log-separate-errors" is used, connections which experience errors, timeouts,
-retries, redispatches or HTTP status codes 5xx will see their syslog level
-raised from "info" to "err". This will help a syslog daemon store the log in
-a separate file. It is very important to keep the errors in the normal traffic
-file too, so that log ordering is not altered. You should also be careful if
-you already have configured your syslog daemon to store all logs higher than
-"notice" in an "admin" file, because the "err" level is higher than "notice".
+ EI A cookie was provided by the client, with a last visit date which is
+ too old for the "maxidle" parameter, so the cookie was ignored and a
+ new cookie was inserted in the response.
+ OI A cookie was provided by the client, with a first visit date which is
+ too old for the "maxlife" parameter, so the cookie was ignored and a
+ new cookie was inserted in the response.
-8.3.4. Disabling logging of successful connections
---------------------------------------------------
+ DI The server designated by the cookie was down, a new server was
+ selected and a new cookie was emitted in the response.
-Although this may sound strange at first, some large sites have to deal with
-multiple thousands of logs per second and are experiencing difficulties keeping
-them intact for a long time or detecting errors within them. If the option
-"dontlog-normal" is set on the frontend, all normal connections will not be
-logged. In this regard, a normal connection is defined as one without any
-error, timeout, retry nor redispatch. In HTTP, the status code is checked too,
-and a response with a status 5xx is not considered normal and will be logged
-too. Of course, doing is is really discouraged as it will remove most of the
-useful information from the logs. Do this only if you have no other
-alternative.
+ VI The server designated by the cookie was not marked dead but could not
+ be reached. A redispatch happened and selected another one, which was
+ then advertised in the response.
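+
+As an illustration, these states can be produced by a backend using cookie
+insertion with the "maxidle" and "maxlife" parameters set. The names and
+addresses below are purely illustrative :
+
+    backend app
+        cookie SRVID insert indirect nocache maxidle 30m maxlife 8h
+        server s1 192.0.2.1:80 cookie s1
+        server s2 192.0.2.2:80 cookie s2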
-8.3.5. Log profiles
--------------------
+8.6. Non-printable characters
+-----------------------------
-While some directives such as "log-format", "log-format-sd", "error-log-format"
-or "log-tag" make it possible to configure log formatting globally or at the
-proxy level, it may be relevant to configure such settings as close as possible
-to the log endpoints, that is, per "log" directive.
+In order not to cause trouble to log analysis tools or terminals when
+consulting logs, non-printable characters are not sent as-is into log files,
+but are converted to the two-digit hexadecimal representation of their ASCII
+code, prefixed by the character '#'. The only characters that can be logged
+without being escaped are those between 32 and 126 (inclusive). Obviously, the
+escape character '#' itself is also encoded to avoid any ambiguity ("#23"). The
+same applies to the character '"', which becomes "#22", as well as to '{', '|'
+and '}' when logging headers.
-This is where "log-profile" section comes into play: "log-profile" may be
-defined anywhere in the configuration. This section accepts a set of different
-keywords that are used to describe how the logs emitted for a given `log`
-directive should be built.
+Note that the space character (' ') is not encoded in headers, which can cause
+issues for tools relying on space count to locate fields. A typical header
+containing spaces is "User-Agent".
-From a "log" directive, one can choose to use a specific log-profile by its
-name. The same profile may be used from multiple "log" directives.
+Last, it has been observed that some syslog daemons such as syslog-ng escape
+the quote ('"') with a backslash ('\'). The reverse operation can safely be
+performed since no quote may appear anywhere else in the logs.
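+
+For example, following the rules above, a captured header value containing a
+quote and a tab character (ASCII 9), such as <Mozilla "beta"<TAB>build>, would
+appear in the logs as :
+
+    Mozilla #22beta#22#09build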
-log-profile <name>
- Creates a new log profile identified as <name>
-log-tag <string>
- Override syslog log tag set globally or per-proxy using "log-tag" directive.
+8.7. Capturing HTTP cookies
+---------------------------
-on <step> [drop] [format <fmt>] [sd <sd_fmt>]
- Override the log-format string normally used to build the log line at
- <step> logging step. <fmt> is used to override "log-format" or
- "error-log-format" strings (depending on the <step>) whereas <sd_fmt> is
- used to override "log-format-sd" string (both can be combined).
+Cookie capture simplifies the tracking of a complete user session. This can be
+achieved using the "capture cookie" statement in the frontend. Please refer to
+section 4.2 for more details. Only one cookie can be captured, and the same
+cookie will simultaneously be checked in the request ("Cookie:" header) and in
+the response ("Set-Cookie:" header). The respective values will be reported in
+the HTTP logs at the "captured_request_cookie" and "captured_response_cookie"
+locations (see section 8.2.3 about HTTP log format). When either cookie is
+not seen, a dash ('-') replaces the value. This way, it is easy to detect, for
+example, when a user switches to a new session, because the server will
+reassign it a new cookie. It is also possible to detect if a server
+unexpectedly sets a wrong cookie to a client, leading to session crossing.
- "drop" special keyword may be used to specify that no log should be
- emitted for the given <step>. It takes precedence over "format" and
- "sd" if previously defined.
+ Examples :
+ # capture the first cookie whose name starts with "ASPSESSION"
+ capture cookie ASPSESSION len 32
+
+ # capture the first cookie whose name is exactly "vgnvisitor"
+ capture cookie vgnvisitor= len 32
+
+
+8.8. Capturing HTTP headers
+---------------------------
+
+Header captures are useful to track unique request identifiers set by an upper
+proxy, virtual host names, user-agents, POST content-length, referrers, etc. In
+the response, one can search for information about the response length, how the
+server asked the cache to behave, or an object location during a redirection.
+
+Header captures are performed using the "capture request header" and "capture
+response header" statements in the frontend. Please consult their definition in
+section 4.2 for more details.
+
+It is possible to include both request headers and response headers at the same
+time. Non-existent headers are logged as empty strings, and if one header
+appears more than once, only its last occurrence will be logged. Request headers
+are grouped within braces '{' and '}' in the same order as they were declared,
+and delimited with a vertical bar '|' without any space. Response headers
+follow the same representation, but are displayed after a space following the
+request headers block. These blocks are displayed just before the HTTP request
+in the logs.
+
+As a special case, it is possible to specify an HTTP header capture in a TCP
+frontend. The purpose is to enable logging of headers which will only be
+parsed if the request is later switched to an HTTP backend.
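+
+For example, a minimal sketch of such a setup, with purely illustrative names
+and addresses, could be :
+
+    frontend fe_tcp
+        mode tcp
+        # only takes effect if the request is switched to an HTTP backend
+        capture request header Host len 20
+        default_backend be_http
+
+    backend be_http
+        mode http
+        server s1 192.0.2.10:80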
+
+ Example :
+ # This instance chains to the outgoing proxy
+ listen proxy-out
+ mode http
+ option httplog
+ option logasap
+ log global
+ server cache1 192.168.1.1:3128
+
+ # log the name of the virtual server
+ capture request header Host len 20
+
+ # log the amount of data uploaded during a POST
+ capture request header Content-Length len 10
+
+ # log the beginning of the referrer
+ capture request header Referer len 20
+
+ # server name (useful for outgoing proxies only)
+ capture response header Server len 20
+
+ # logging the content-length is useful with "option logasap"
+ capture response header Content-Length len 10
+
+ # log the expected cache behavior on the response
+ capture response header Cache-Control len 8
- Possible values for <step> are:
+ # the Via header will report the next proxy's name
+ capture response header Via len 20
- - "accept" : override log-format if the log is generated right after
- frontend conn was accepted
- - "request" : override log-format if the log is generated after client
- request was received
- - "connect" : override log-format if the log is generated after backend
- connection establishment
- - "response" : override log-format if the log is generated during server
- response handling
- - "close" : override log-format if the log is generated at the final
- transaction (txn) step
- - "error" : override error-log-format for if the log is generated due
- to a transaction error
- - "any" : override both log-format and error-log-format for all logging
- steps, unless a more precise step override is declared.
+ # log the URL location during a redirection
+ capture response header Location len 20
- See "do-log" action for relevant additional <step> values.
+ >>> Aug 9 20:26:09 localhost \
+ haproxy[2022]: 127.0.0.1:34014 [09/Aug/2004:20:26:09] proxy-out \
+ proxy-out/cache1 0/0/0/162/+162 200 +350 - - ---- 0/0/0/0/0 0/0 \
+ {fr.adserver.yahoo.co||http://fr.f416.mail.} {|864|private||} \
+ "GET http://fr.adserver.yahoo.com/"
- This setting is only relevant for "log" directives used from contexts where
- using "log-format" directive makes sense (e.g.: http and tcp proxies).
- Else it will simply be ignored.
+ >>> Aug 9 20:30:46 localhost \
+ haproxy[2022]: 127.0.0.1:34020 [09/Aug/2004:20:30:46] proxy-out \
+ proxy-out/cache1 0/0/0/182/+182 200 +279 - - ---- 0/0/0/0/0 0/0 \
+ {w.ods.org||} {Formilux/0.1.8|3495|||} \
+ "GET http://trafic.1wt.eu/ HTTP/1.1"
- Example :
+ >>> Aug 9 20:30:46 localhost \
+ haproxy[2022]: 127.0.0.1:34028 [09/Aug/2004:20:30:46] proxy-out \
+ proxy-out/cache1 0/0/2/126/+128 301 +223 - - ---- 0/0/0/0/0 0/0 \
+ {www.sytadin.equipement.gouv.fr||http://trafic.1wt.eu/} \
+ {Apache|230|||http://www.sytadin.} \
+ "GET http://www.sytadin.equipement.gouv.fr/ HTTP/1.1"
- log-profile myprof
- log-tag "custom-tag"
+8.9. Examples of logs
+---------------------
- on error format "%ci: error"
- on connect drop
- on any sd "custom-sd"
+These are real-world examples of logs accompanied by an explanation. Some of
+them have been made up by hand. The syslog part has been removed for easier
+reading. Their sole purpose is to explain how to decipher them.
- listen myproxy
- mode http
- option httplog
- log-tag "normal"
+ >>> haproxy[674]: 127.0.0.1:33318 [15/Oct/2003:08:31:57.130] px-http \
+ px-http/srv1 6559/0/7/147/6723 200 243 - - ---- 5/3/3/1/0 0/0 \
+ "HEAD / HTTP/1.0"
- log stdout format rfc5424 local0
- # success:
- # <134>1 2024-06-12T10:09:11.823400+02:00 - normal 224482 - - 127.0.0.1:53594 [12/Jun/2024:10:09:11.814] myproxy myproxy/<NOSRV> 0/-1/-1/-1/0 200 49 - - LR-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
- #
- # error:
- # <134>1 2024-06-12T10:09:44.810929+02:00 - normal 224482 - - 127.0.0.1:59258 [12/Jun/2024:10:09:44.426] myproxy myproxy/<NOSRV> -1/-1/-1/-1/384 400 0 - - CR-- 1/1/0/0/0 0/0 "<BADREQ>"
+ => long request (6.5s) entered by hand through 'telnet'. The server replied
+ in 147 ms, and the session ended normally ('----')
- log 127.0.0.1:514 format rfc5424 profile myprof local0
- # success:
- # <134>1 2024-06-12T10:09:11.823428+02:00 - custom-tag 224482 - custom-sd 127.0.0.1:53594 [12/Jun/2024:10:09:11.814] myproxy myproxy/<NOSRV> 0/-1/-1/-1/0 200 49 - - LR-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
- #
- # error:
- # <134>1 2024-06-12T10:09:51.566524+02:00 - custom-tag 224482 - - 127.0.0.1: error
+ >>> haproxy[674]: 127.0.0.1:33319 [15/Oct/2003:08:31:57.149] px-http \
+ px-http/srv1 6559/1230/7/147/6870 200 243 - - ---- 324/239/239/99/0 \
+ 0/9 "HEAD / HTTP/1.0"
+ => Idem, but the request was queued in the global queue behind 9 other
+ requests, and waited there for 1230 ms.
-8.4. Timing events
-------------------
+ >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.654] px-http \
+ px-http/srv1 9/0/7/14/+30 200 +243 - - ---- 3/3/3/1/0 0/0 \
+ "GET /image.iso HTTP/1.0"
-Timers provide a great help in troubleshooting network problems. All values are
-reported in milliseconds (ms). These timers should be used in conjunction with
-the stream termination flags. In TCP mode with "option tcplog" set on the
-frontend, 3 control points are reported under the form "Tw/Tc/Tt", and in HTTP
-mode, 5 control points are reported under the form "TR/Tw/Tc/Tr/Ta". In
-addition, three other measures are provided, "Th", "Ti", and "Tq".
+ => request for a long data transfer. The "logasap" option was specified, so
+ the log was produced just before transferring data. The server replied in
+ 14 ms, 243 bytes of headers were sent to the client, and total time from
+ accept to first data byte is 30 ms.
-Timings events in HTTP mode:
+ >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.925] px-http \
+ px-http/srv1 9/0/7/14/30 502 243 - - PH-- 3/2/2/0/0 0/0 \
+ "GET /cgi-bin/bug.cgi? HTTP/1.0"
- first request 2nd request
- |<-------------------------------->|<-------------- ...
- t tr t tr ...
- ---|----|----|----|----|----|----|----|----|--
- : Th Ti TR Tw Tc Tr Td : Ti ...
- :<---- Tq ---->: :
- :<-------------- Tt -------------->:
- :<-- -----Tu--------------->:
- :<--------- Ta --------->:
+ => the proxy blocked a server response either because of an "http-response
+ deny" rule, or because the response was improperly formatted and not
+ HTTP-compliant, or because it blocked sensitive information which risked
+ being cached. In this case, the response is replaced with a "502 bad
+ gateway". The flags ("PH--") tell us that it was HAProxy who decided to
+ return the 502 and not the server.
-Timings events in TCP mode:
+ >>> haproxy[18113]: 127.0.0.1:34548 [15/Oct/2003:15:18:55.798] px-http \
+ px-http/<NOSRV> -1/-1/-1/-1/8490 -1 0 - - CR-- 2/2/2/0/0 0/0 ""
- TCP session
- |<----------------->|
- t t
- ---|----|----|----|----|---
- | Th Tw Tc Td |
- |<------ Tt ------->|
+ => the client never completed its request and aborted itself ("C---") after
+ 8.5s, while the proxy was waiting for the request headers ("-R--").
+ Nothing was sent to any server.
- - Th: total time to accept tcp connection and execute handshakes for low level
- protocols. Currently, these protocols are proxy-protocol and SSL. This may
- only happen once during the whole connection's lifetime. A large time here
- may indicate that the client only pre-established the connection without
- speaking, that it is experiencing network issues preventing it from
- completing a handshake in a reasonable time (e.g. MTU issues), or that an
- SSL handshake was very expensive to compute. Please note that this time is
- reported only before the first request, so it is safe to average it over
- all request to calculate the amortized value. The second and subsequent
- request will always report zero here.
+ >>> haproxy[18113]: 127.0.0.1:34549 [15/Oct/2003:15:19:06.103] px-http \
+ px-http/<NOSRV> -1/-1/-1/-1/50001 408 0 - - cR-- 2/2/2/0/0 0/0 ""
- This timer is named %Th as a log-format alias, and fc.timer.handshake as a
- sample fetch.
+ => The client never completed its request, which was aborted by the
+ time-out ("c---") after 50s, while the proxy was waiting for the request
+ headers ("-R--"). Nothing was sent to any server, but the proxy could
+ send a 408 return code to the client.
- - Ti: is the idle time before the HTTP request (HTTP mode only). This timer
- counts between the end of the handshakes and the first byte of the HTTP
- request. When dealing with a second request in keep-alive mode, it starts
- to count after the end of the transmission the previous response. When a
- multiplexed protocol such as HTTP/2 is used, it starts to count immediately
- after the previous request. Some browsers pre-establish connections to a
- server in order to reduce the latency of a future request, and keep them
- pending until they need it. This delay will be reported as the idle time. A
- value of -1 indicates that nothing was received on the connection.
+ >>> haproxy[18989]: 127.0.0.1:34550 [15/Oct/2003:15:24:28.312] px-tcp \
+ px-tcp/srv1 0/0/5007 0 cD 0/0/0/0/0 0/0
- This timer is named %Ti as a log-format alias, and req.timer.idle as a
- sample fetch.
+ => This log was produced with "option tcplog". The client timed out after
+ 5 seconds ("cD").
- - TR: total time to get the client request (HTTP mode only). It's the time
- elapsed between the first bytes received and the moment the proxy received
- the empty line marking the end of the HTTP headers. The value "-1"
- indicates that the end of headers has never been seen. This happens when
- the client closes prematurely or times out. This time is usually very short
- since most requests fit in a single packet. A large time may indicate a
- request typed by hand during a test.
+ >>> haproxy[18989]: 10.0.0.1:34552 [15/Oct/2003:15:26:31.462] px-http \
+ px-http/srv1 3183/-1/-1/-1/11215 503 0 - - SC-- 205/202/202/115/3 \
+ 0/0 "HEAD / HTTP/1.0"
- This timer is named %TR as a log-format alias, and req.timer.hdr as a
- sample fetch.
+ => The request took 3s to complete (probably a network problem), and the
+ connection to the server failed ('SC--') after 4 attempts of 2 seconds
+ (config says 'retries 3'), and no redispatch (otherwise we would have
+ seen "/+3"). Status code 503 was returned to the client. There were 115
+ connections on this server, 202 connections on this proxy, and 205 on
+ the global process. It is possible that the server refused the
+ connection because of too many already established.
- - Tq: total time to get the client request from the accept date or since the
- emission of the last byte of the previous response (HTTP mode only). It's
- exactly equal to Th + Ti + TR unless any of them is -1, in which case it
- returns -1 as well. This timer used to be very useful before the arrival of
- HTTP keep-alive and browsers' pre-connect feature. It's recommended to drop
- it in favor of TR nowadays, as the idle time adds a lot of noise to the
- reports.
- This timer is named %Tq as a log-format alias, and req.timer.tq as a
- sample fetch.
+9. Supported filters
+--------------------
- - Tw: total time spent in the queues waiting for a connection slot. It
- accounts for backend queue as well as the server queues, and depends on the
- queue size, and the time needed for the server to complete previous
- requests. The value "-1" means that the request was killed before reaching
- the queue, which is generally what happens with invalid or denied requests.
+Listed here are the officially supported filters with the parameters they
+accept. Depending on compile options, some of these filters might be
+unavailable. The list of available filters is reported in "haproxy -vv".
- This timer is named %Tw as a log-format alias, and req.timer.queue as a
- sample fetch.
+See also : "filter"
- - Tc: total time to establish the TCP connection to the server. It's the time
- elapsed between the moment the proxy sent the connection request, and the
- moment it was acknowledged by the server, or between the TCP SYN packet and
- the matching SYN/ACK packet in return. The value "-1" means that the
- connection never established.
+9.1. Trace
+----------
- This timer is named %Tc as a log-format alias, and bc.timer.connect as a
- sample fetch.
+filter trace [name <name>] [random-forwarding] [hexdump]
- - Tr: server response time (HTTP mode only). It's the time elapsed between
- the moment the TCP connection was established to the server and the moment
- the server sent its complete response headers. It purely shows its request
- processing time, without the network overhead due to the data transmission.
- It is worth noting that when the client has data to send to the server, for
- instance during a POST request, the time already runs, and this can distort
- apparent response time. For this reason, it's generally wise not to trust
- too much this field for POST requests initiated from clients behind an
- untrusted network. A value of "-1" here means that the last response header
- (empty line) was never seen, most likely because the server timeout stroke
- before the server managed to process the request or because the server
- returned an invalid response.
+ Arguments:
+ <name> is an arbitrary name that will be reported in
+ messages. If no name is provided, "TRACE" is used.
- This timer is named %Tr as a log-format alias, and res.timer.hdr as a
- sample fetch.
+ <quiet> inhibits trace messages.
- - Td: this is the total transfer time of the response payload till the last
- byte sent to the client. In HTTP it starts after the last response header
- (after Tr).
+ <random-forwarding> enables the random forwarding of parsed data. By
+ default, this filter forwards all previously parsed
+ data. With this parameter, it only forwards a random
+ amount of the parsed data.
- The data sent are not guaranteed to be received by the client, they can be
- stuck in either the kernel or the network.
+ <hexdump> dumps all forwarded data to the server and the client.
- This timer is named %Td as a log-format alias, and res.timer.data as a
- sample fetch.
+This filter can be used as a base to develop new filters. It defines all
+callbacks and prints a message on the standard error stream (stderr) with
+useful information for each of them. It may be useful to debug the activity of
+other filters or, quite simply, HAProxy's activity.
- - Ta: total active time for the HTTP request, between the moment the proxy
- received the first byte of the request header and the emission of the last
- byte of the response body. The exception is when the "logasap" option is
- specified. In this case, it only equals (TR+Tw+Tc+Tr), and is prefixed with
- a '+' sign. From this field, we can deduce "Td", the data transmission time,
- by subtracting other timers when valid :
+Using the <random-parsing> and/or <random-forwarding> parameters is a good way
+to test the behavior of a filter that parses data exchanged between a client
+and a server, since they add some latency to the processing.
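+
+For example, a hypothetical debugging setup could surround another filter
+with two trace filters to observe the data before and after it runs. All
+names and addresses below are illustrative :
+
+    listen test
+        mode http
+        bind *:8888
+        filter trace name BEFORE-COMP random-forwarding hexdump
+        filter compression
+        filter trace name AFTER-COMP
+        compression algo gzip
+        server s1 192.0.2.10:80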
- Td = Ta - (TR + Tw + Tc + Tr)
- Timers with "-1" values have to be excluded from this equation. Note that
- "Ta" can never be negative.
+9.2. HTTP compression
+---------------------
+
+filter compression
- This timer is named %Ta as a log-format alias, and txn.timer.total as a
- sample fetch.
+HTTP compression was moved into a filter in HAProxy 1.7. The "compression"
+keyword must still be used to enable and configure it. When no other filter is
+used, this is enough. The same holds when it is combined only with the cache
+or the fcgi-app: in that case, the compression is always performed after the
+response is stored in the cache. However, an explicit "filter compression"
+line is mandatory as soon as at least one filter other than the cache or the
+fcgi-app is used on the same listener/frontend/backend, because the filters'
+evaluation order must then be known.
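+
+For example, in this purely illustrative setup, the presence of the trace
+filter makes the explicit "filter compression" line mandatory :
+
+    backend www
+        mode http
+        filter trace name DEBUG
+        filter compression
+        compression algo gzip
+        compression type text/html text/plain
+        server s1 192.0.2.10:80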
- - Tt: total stream duration time, between the moment the proxy accepted it
- and the moment both ends were closed. The exception is when the "logasap"
- option is specified. In this case, it only equals (Th+Ti+TR+Tw+Tc+Tr), and
- is prefixed with a '+' sign. From this field, we can deduce "Td", the data
- transmission time, by subtracting other timers when valid :
+See also : "compression", section 9.4 about the cache filter and section 9.5
+ about the fcgi-app filter.
- Td = Tt - (Th + Ti + TR + Tw + Tc + Tr)
- Timers with "-1" values have to be excluded from this equation. In TCP
- mode, "Ti", "Tq" and "Tr" have to be excluded too. Note that "Tt" can never
- be negative and that for HTTP, Tt is simply equal to (Th+Ti+Ta).
+9.3. Stream Processing Offload Engine (SPOE)
+--------------------------------------------
- This timer is named %Tt as a log-format alias, and fc.timer.total as a
- sample fetch.
+filter spoe [engine <name>] config <file>
- - Tu: total estimated time as seen from client, between the moment the proxy
- accepted it and the moment both ends were closed, without idle time.
- This is useful to roughly measure end-to-end time as a user would see it,
- without idle time pollution from keep-alive time between requests. This
- timer in only an estimation of time seen by user as it assumes network
- latency is the same in both directions. The exception is when the "logasap"
- option is specified. In this case, it only equals (Th+TR+Tw+Tc+Tr), and is
- prefixed with a '+' sign.
+ Arguments :
- This timer is named %Tu as a log-format alias, and txn.timer.user as a
- sample fetch.
+ <name> is the engine name that will be used to find the right scope in
+ the configuration file. If not provided, the whole file will be
+ parsed.
-These timers provide precious indications on trouble causes. Since the TCP
-protocol defines retransmit delays of 3, 6, 12... seconds, we know for sure
-that timers close to multiples of 3s are nearly always related to lost packets
-due to network problems (wires, negotiation, congestion). Moreover, if "Ta" or
-"Tt" is close to a timeout value specified in the configuration, it often means
-that a stream has been aborted on timeout.
+ <file> is the path of the engine configuration file. This file can
+ contain configuration of several engines. In this case, each
+ part must be placed in its own scope.
-Most common cases :
+The Stream Processing Offload Engine (SPOE) is a filter communicating with
+external components. It allows some specific processing on the streams to be
+offloaded to tiered applications. The external components and the information
+exchanged with them are, for the most part, configured in dedicated files. It
+also requires dedicated backends, defined in the HAProxy configuration.
- - If "Th" or "Ti" are close to 3000, a packet has probably been lost between
- the client and the proxy. This is very rare on local networks but might
- happen when clients are on far remote networks and send large requests. It
- may happen that values larger than usual appear here without any network
- cause. Sometimes, during an attack or just after a resource starvation has
- ended, HAProxy may accept thousands of connections in a few milliseconds.
- The time spent accepting these connections will inevitably slightly delay
- processing of other connections, and it can happen that request times in the
- order of a few tens of milliseconds are measured after a few thousands of
- new connections have been accepted at once. Using one of the keep-alive
- modes may display larger idle times since "Ti" measures the time spent
- waiting for additional requests.
+SPOE communicates with external components using an in-house binary protocol,
+the Stream Processing Offload Protocol (SPOP).
- - If "Tc" is close to 3000, a packet has probably been lost between the
- server and the proxy during the server connection phase. This value should
- always be very low, such as 1 ms on local networks and less than a few tens
- of ms on remote networks.
+When the SPOE is used on a stream, a dedicated stream is spawned to handle the
+communication with the external component. The main stream is the parent of
+this "SPOE" stream, which means it is possible to retrieve variables of the
+main stream from the "SPOE" stream. See section 2.8 about variables for
+details.
- - If "Tr" is nearly always lower than 3000 except some rare values which seem
- to be the average majored by 3000, there are probably some packets lost
- between the proxy and the server.
+For all information about the SPOE configuration and the SPOP specification, see
+"doc/SPOE.txt".
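+
+A minimal sketch, in which the engine name, file path and addresses are
+purely illustrative :
+
+    frontend www
+        mode http
+        bind *:80
+        filter spoe engine my-engine config /etc/haproxy/spoe-demo.conf
+        default_backend app
+
+    backend app
+        mode http
+        server s1 192.0.2.10:80
+
+    # dedicated backend reaching the external agent; it is referenced from
+    # the SPOE configuration file, not from the filter line itself
+    backend agents
+        mode tcp
+        server agent1 192.0.2.20:12345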
- - If "Ta" is large even for small byte counts, it generally is because
- neither the client nor the server decides to close the connection while
- HAProxy is running in tunnel mode and both have agreed on a keep-alive
- connection mode. In order to solve this issue, it will be needed to specify
- one of the HTTP options to manipulate keep-alive or close options on either
- the frontend or the backend. Having the smallest possible 'Ta' or 'Tt' is
- important when connection regulation is used with the "maxconn" option on
- the servers, since no new connection will be sent to the server until
- another one is released.
+9.4. Cache
+----------
-Other noticeable HTTP log cases ('xx' means any value to be ignored) :
+filter cache <name>
- TR/Tw/Tc/Tr/+Ta The "option logasap" is present on the frontend and the log
- was emitted before the data phase. All the timers are valid
- except "Ta" which is shorter than reality.
+ Arguments :
- -1/xx/xx/xx/Ta The client was not able to send a complete request in time
- or it aborted too early. Check the stream termination flags
- then "timeout http-request" and "timeout client" settings.
+ <name> is the name of the cache section this filter will use.
- TR/-1/xx/xx/Ta It was not possible to process the request, maybe because
- servers were out of order, because the request was invalid
- or forbidden by ACL rules. Check the stream termination
- flags.
+The cache uses a filter to store cacheable responses. The HTTP rules
+"cache-store" and "cache-use" must be used to define how and when to use the
+cache. By default the corresponding filter is implicitly defined, and when no
+filters other than fcgi-app or compression are used, this is enough. In that
+case, the compression filter is always evaluated after the cache filter.
+However, an explicit "filter cache" line is mandatory as soon as at least one
+filter other than the compression or the fcgi-app is used on the same
+listener/frontend/backend, because the filters' evaluation order must then be
+known.
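+
+For example, in this purely illustrative setup, the presence of the trace
+filter makes the explicit "filter cache" line mandatory :
+
+    cache mycache
+        total-max-size 64
+        max-age 60
+
+    backend app
+        mode http
+        filter trace name DEBUG
+        filter cache mycache
+        http-request cache-use mycache
+        http-response cache-store mycache
+        server s1 192.0.2.10:80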
- TR/Tw/-1/xx/Ta The connection could not establish on the server. Either it
- actively refused it or it timed out after Ta-(TR+Tw) ms.
- Check the stream termination flags, then check the
- "timeout connect" setting. Note that the tarpit action might
- return similar-looking patterns, with "Tw" equal to the time
- the client connection was maintained open.
+See also : section 9.2 about the compression filter, section 9.5 about the
+ fcgi-app filter and section 6 about cache.
- TR/Tw/Tc/-1/Ta The server has accepted the connection but did not return
- a complete response in time, or it closed its connection
- unexpectedly after Ta-(TR+Tw+Tc) ms. Check the stream
- termination flags, then check the "timeout server" setting.
+9.5. Fcgi-app
+-------------
-8.5. Stream state at disconnection
------------------------------------
+filter fcgi-app <name>
-TCP and HTTP logs provide a stream termination indicator in the
-"termination_state" field, just before the number of active connections. It is
-2-characters long in TCP mode, and is extended to 4 characters in HTTP mode,
-each of which has a special meaning :
+ Arguments :
- - On the first character, a code reporting the first event which caused the
- stream to terminate :
+ <name> is the name of the fcgi-app section this filter will use.
- C : the TCP session was unexpectedly aborted by the client.
+The FastCGI application uses a filter to evaluate all custom parameters on the
+request path, and to process the headers on the response path. The <name> must
+reference an existing fcgi-app section. The directive "use-fcgi-app" should be
+used to define the application to use. By default the corresponding filter is
+implicitly defined, and when no filters other than cache or compression are
+used, this is enough. However, an explicit "filter fcgi-app" line is mandatory
+as soon as at least one filter other than the compression or the cache is used
+on the same backend, because the filters' evaluation order must then be known.
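+
+A minimal sketch, in which the application name, paths and address are purely
+illustrative (with no other filter declared, the fcgi-app filter remains
+implicit) :
+
+    fcgi-app php-fpm
+        log-stderr global
+        docroot /var/www/html
+        index index.php
+
+    backend be_php
+        mode http
+        use-fcgi-app php-fpm
+        server php1 127.0.0.1:9000 proto fcgi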
- S : the TCP session was unexpectedly aborted by the server, or the
- server explicitly refused it.
+See also: "use-fcgi-app", section 9.2 about the compression filter, section 9.4
+ about the cache filter and section 10 about FastCGI application.
- P : the stream or session was prematurely aborted by the proxy, because
- of a connection limit enforcement, because a DENY filter was
- matched, because of a security check which detected and blocked a
- dangerous error in server response which might have caused
- information leak (e.g. cacheable cookie).
- L : the stream was locally processed by HAProxy.
+9.6. OpenTracing
+----------------
- R : a resource on the proxy has been exhausted (memory, sockets, source
- ports, ...). Usually, this appears during the connection phase, and
- system logs should contain a copy of the precise error. If this
- happens, it must be considered as a very serious anomaly which
- should be fixed as soon as possible by any means.
+The OpenTracing filter adds native support for distributed tracing in HAProxy.
+It works by sending OpenTracing-compliant requests to one of the supported
+tracers, such as Datadog, Jaeger, Lightstep and Zipkin. Please note that the
+tracers are listed alphabetically, not by preference.
- I : an internal error was identified by the proxy during a self-check.
- This should NEVER happen, and you are encouraged to report any log
- containing this, because this would almost certainly be a bug. It
- would be wise to preventively restart the process after such an
- event too, in case it would be caused by memory corruption.
+This feature is only enabled when HAProxy was built with USE_OT=1.
- D : the stream was killed by HAProxy because the server was detected
- as down and was configured to kill all connections when going down.
+The OpenTracing filter activation is done explicitly by specifying it in the
+HAProxy configuration. If this is not done, the OpenTracing filter in no way
+participates in the work of HAProxy.
- U : the stream was killed by HAProxy on this backup server because an
- active server was detected as up and was configured to kill all
- backup connections when going up.
+filter opentracing [id <id>] config <file>
- K : the stream was actively killed by an admin operating on HAProxy.
+ Arguments :
- c : the client-side timeout expired while waiting for the client to
- send or receive data.
+ <id> is the OpenTracing filter id that will be used to find the
+ right scope in the configuration file. If no filter id is
+ specified, 'ot-filter' is used as default. If scope is not
+ specified in the configuration file, it applies to all defined
+ OpenTracing filters.
- s : the server-side timeout expired while waiting for the server to
- send or receive data.
+ <file> is the path of the OpenTracing configuration file. The same
+ file can contain configurations for multiple OpenTracing
+ filters simultaneously. In that case, either no scope is
+ defined and the same configuration applies to all filters, or
+ each filter must have its own scope defined.
- - : normal stream completion, both the client and the server closed
- with nothing left in the buffers.
+More detailed documentation related to the operation, configuration and use
+of the filter can be found in the addons/ot directory.
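A minimal activation could look like the following sketch, where the filter id, the configuration file path and the backend name are assumptions chosen for illustration:

```haproxy
frontend front-ot
    mode http
    bind *:80
    filter opentracing id ot-frontend config /etc/haproxy/ot.cfg
    default_backend back-app
```

The scope "ot-frontend" would then have to exist in /etc/haproxy/ot.cfg, unless that file defines no scope at all, in which case its configuration applies to every OpenTracing filter.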
- - on the second character, the TCP or HTTP stream state when it was closed :
+Note: The OpenTracing filter shouldn't be used for new designs as OpenTracing
+ itself is no longer maintained nor supported by its authors. A
+ replacement filter based on OpenTelemetry is currently under development
+ and is expected to be ready around HAProxy 3.2. As such OpenTracing will
+ be deprecated in 3.3 and removed in 3.5.
- R : the proxy was waiting for a complete, valid REQUEST from the client
- (HTTP mode only). Nothing was sent to any server.
- Q : the proxy was waiting in the QUEUE for a connection slot. This can
- only happen when servers have a 'maxconn' parameter set. It can
- also happen in the global queue after a redispatch consecutive to
- a failed attempt to connect to a dying server. If no redispatch is
- reported, then no connection attempt was made to any server.
+9.7. Bandwidth limitation
+-------------------------
- C : the proxy was waiting for the CONNECTION to establish on the
- server. The server might at most have noticed a connection attempt.
+filter bwlim-in <name> default-limit <size> default-period <time> [min-size <sz>]
+filter bwlim-out <name> default-limit <size> default-period <time> [min-size <sz>]
+filter bwlim-in <name> limit <size> key <pattern> [table <table>] [min-size <sz>]
+filter bwlim-out <name> limit <size> key <pattern> [table <table>] [min-size <sz>]
- H : the proxy was waiting for complete, valid response HEADERS from the
- server (HTTP only).
+ Arguments :
- D : the stream was in the DATA phase.
+ <name> is the filter name that will be used by 'set-bandwidth-limit'
+ actions to reference a specific bandwidth limitation filter.
- L : the proxy was still transmitting LAST data to the client while the
- server had already finished. This one is very rare as it can only
- happen when the client dies while receiving the last packets.
+ <size> is max number of bytes that can be forwarded over the period.
+ The value must be specified for per-stream and shared bandwidth
+ limitation filters. It follows the HAProxy size format and is
+ expressed in bytes.
- T : the request was tarpitted. It has been held open with the client
- during the whole "timeout tarpit" duration or until the client
- closed, both of which will be reported in the "Tw" timer.
+ <pattern> is a sample expression rule as described in section 7.3. It
+ describes what elements will be analyzed, extracted, combined,
+ and used to select which table entry to update the counters. It
+ must be specified for shared bandwidth limitation filters only.
- - : normal stream completion after end of data transfer.
+ <table> is an optional table to be used instead of the default one,
+ which is the stick-table declared in the current proxy. It can
+ be specified for shared bandwidth limitation filters only.
- - the third character tells whether the persistence cookie was provided by
- the client (only in HTTP mode) :
+ <time> is the default time period used to evaluate the bandwidth
+ limitation rate. It can be specified for per-stream bandwidth
+ limitation filters only. It follows the HAProxy time format and
+ is expressed in milliseconds.
- N : the client provided NO cookie. This is usually the case for new
- visitors, so counting the number of occurrences of this flag in the
- logs generally indicate a valid trend for the site frequentation.
+ <min-size> is the optional minimum number of bytes forwarded at a time by
+ a stream excluding the last packet that may be smaller. This
+ value can be specified for per-stream and shared bandwidth
+ limitation filters. It follows the HAProxy size format and is
+ expressed in bytes.
- I : the client provided an INVALID cookie matching no known server.
- This might be caused by a recent configuration change, mixed
- cookies between HTTP/HTTPS sites, persistence conditionally
- ignored, or an attack.
+Bandwidth limitation filters should be used to restrict the data forwarding
+speed at the stream level. By extension, such filters limit the network
+bandwidth consumed by a resource. Several bandwidth limitation filters can be
+used. For instance, it is possible to define a limit per source address to be
+sure a client will never consume all the network bandwidth and thereby
+penalize other clients, and another one per stream to be able to fairly handle
+several connections for a given client.
- D : the client provided a cookie designating a server which was DOWN,
- so either "option persist" was used and the client was sent to
- this server, or it was not set and the client was redispatched to
- another server.
+The definition order of these filters is important. If several bandwidth
+filters are enabled on a stream, the filtering will be applied in their
+definition order. It is also important to understand that the definition order
+of the other filters has an influence. For instance, depending on whether the
+HTTP compression filter is defined before or after a bandwidth limitation
+filter, the limit will be applied on the compressed payload or not. The same
+is true for the cache filter.
- V : the client provided a VALID cookie, and was sent to the associated
- server.
+There are two kinds of bandwidth limitation filters. The first one enforces a
+default limit and is applied per stream. The second one uses a stickiness table
+to enforce a limit equally divided between all streams sharing the same entry in
+the table.
- E : the client provided a valid cookie, but with a last date which was
- older than what is allowed by the "maxidle" cookie parameter, so
- the cookie is consider EXPIRED and is ignored. The request will be
- redispatched just as if there was no cookie.
+In addition, for a given filter, depending on the filter keyword used, the
+limitation can be applied on incoming data, received from the client and
+forwarded to a server, or on outgoing data, received from a server and sent to
+the client. To apply a limit on incoming data, "bwlim-in" keyword must be
+used. To apply it on outgoing data, "bwlim-out" keyword must be used. In both
+cases, the bandwidth limitation is applied on forwarded data, at the stream
+level.
- O : the client provided a valid cookie, but with a first date which was
- older than what is allowed by the "maxlife" cookie parameter, so
- the cookie is consider too OLD and is ignored. The request will be
- redispatched just as if there was no cookie.
+The bandwidth limitation is applied at the stream level and not at the
+connection level. For multiplexed protocols (H2, H3 and FastCGI), the streams
+of the same connection may have different limits.
- U : a cookie was present but was not used to select the server because
- some other server selection mechanism was used instead (typically a
- "use-server" rule).
+For a per-stream bandwidth limitation filter, default period and limit must be
+defined. As their names suggest, they are the default values used to setup the
+bandwidth limitation rate for a stream. However, for this kind of filter and
+only this one, it is possible to redefine these values using sample expressions
+when the filter is enabled with a TCP/HTTP "set-bandwidth-limit" action.
- - : does not apply (no cookie set in configuration).
+For a shared bandwidth limitation filter, depending on whether it is applied on
+incoming or outgoing data, the stickiness table used must store the
+corresponding bytes rate information. "bytes_in_rate(<period>)" counter must be
+stored to limit incoming data and "bytes_out_rate(<period>)" counter must be
+used to limit outgoing data.
- - the last character reports what operations were performed on the persistence
- cookie returned by the server (only in HTTP mode) :
+Finally, it is possible to set the minimum number of bytes that a bandwidth
+limitation filter can forward at a time for a given stream. It should be used
+to avoid forwarding too small amounts of data, in order to reduce the CPU
+usage. It must be defined carefully: too small a value can increase the CPU
+usage, while too high a value can increase the latency. It is also closely
+linked to the defined bandwidth limit: if it is too close to the bandwidth
+limit, some pauses may be experienced to avoid exceeding the limit, because
+too many bytes would be consumed at a time. It is highly dependent on the
+filter configuration. A good idea is to start with something around 2 TCP
+MSS, typically 2896 bytes, and tune it after some experimentation.
- N : NO cookie was provided by the server, and none was inserted either.
+ Example:
+ frontend http
+ bind *:80
+ mode http
- I : no cookie was provided by the server, and the proxy INSERTED one.
- Note that in "cookie insert" mode, if the server provides a cookie,
- it will still be overwritten and reported as "I" here.
+ # If this filter is enabled, the stream will share the download limit
+ # of 10m/s with all other streams with the same source address.
+ filter bwlim-out limit-by-src key src table limit-by-src limit 10m
- U : the proxy UPDATED the last date in the cookie that was presented by
- the client. This can only happen in insert mode with "maxidle". It
- happens every time there is activity at a different date than the
- date indicated in the cookie. If any other change happens, such as
- a redispatch, then the cookie will be marked as inserted instead.
+ # If this filter is enabled, the stream will be limited to download at 1m/s,
+ # independently of all other streams.
+ filter bwlim-out limit-by-strm default-limit 1m default-period 1s
- P : a cookie was PROVIDED by the server and transmitted as-is.
+ # Limit all streams to 1m/s (the default limit) and those accessing the
+ # internal API to 100k/s. Limit each source address to 10m/s. The shared
+ # limit is applied first. Both are limiting the download rate.
+ http-request set-bandwidth-limit limit-by-strm
+ http-request set-bandwidth-limit limit-by-strm limit 100k if { path_beg /internal }
+ http-request set-bandwidth-limit limit-by-src
+ ...
- R : the cookie provided by the server was REWRITTEN by the proxy, which
- happens in "cookie rewrite" or "cookie prefix" modes.
+ backend limit-by-src
+ # The stickiness table used by <limit-by-src> filter
+ stick-table type ip size 1m expire 3600s store bytes_out_rate(1s)
- D : the cookie provided by the server was DELETED by the proxy.
+See also : "tcp-request content set-bandwidth-limit",
+ "tcp-response content set-bandwidth-limit",
+ "http-request set-bandwidth-limit" and
+ "http-response set-bandwidth-limit".
- - : does not apply (no cookie set in configuration).
+10. FastCGI applications
+------------------------
-The combination of the two first flags gives a lot of information about what
-was happening when the stream or session terminated, and why it did terminate.
-It can be helpful to detect server saturation, network troubles, local system
-resource starvation, attacks, etc...
+HAProxy is able to send HTTP requests to Responder FastCGI applications. This
+feature was added in HAProxy 2.1. To do so, servers must be configured to use
+the FastCGI protocol (using the keyword "proto fcgi" on the server line) and a
+FastCGI application must be configured and used by the backend managing these
+servers (using the keyword "use-fcgi-app" in the proxy section). Several
+FastCGI applications may be defined, but only one can be used at a time by a
+backend.
-The most common termination flags combinations are indicated below. They are
-alphabetically sorted, with the lowercase set just after the upper case for
-easier finding and understanding.
+HAProxy implements all features of the FastCGI specification for Responder
+applications. In particular, it is able to multiplex several requests over a
+single connection.
- Flags Reason
+10.1. Setup
+-----------
- -- Normal termination.
+10.1.1. Fcgi-app section
+------------------------
- CC The client aborted before the connection could be established to the
- server. This can happen when HAProxy tries to connect to a recently
- dead (or unchecked) server, and the client aborts while HAProxy is
- waiting for the server to respond or for "timeout connect" to expire.
+fcgi-app <name>
+ Declare a FastCGI application named <name>. To be valid, at least the
+ document root must be defined.
- CD The client unexpectedly aborted during data transfer. This can be
- caused by a browser crash, by an intermediate equipment between the
- client and HAProxy which decided to actively break the connection,
- by network routing issues between the client and HAProxy, or by a
- keep-alive stream between the server and the client terminated first
- by the client.
+acl <aclname> <criterion> [flags] [operator] <value> ...
+ Declare or complete an access list.
- cD The client did not send nor acknowledge any data for as long as the
- "timeout client" delay. This is often caused by network failures on
- the client side, or the client simply leaving the net uncleanly.
+ See "acl" keyword in section 4.2 and section 7 about ACL usage for
+ details. ACLs defined for a FastCGI application are private. They cannot be
+ used by any other application or by any proxy. In the same way, ACLs defined
+ in any other section are not usable by a FastCGI application. However,
+ pre-defined ACLs are available.
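A sketch of a private ACL used inside an fcgi-app section; the application name, document root, network and parameter value are illustrative assumptions:

```haproxy
fcgi-app php-fpm
    docroot /var/www/app
    # This ACL is private to this application
    acl internal src 10.0.0.0/8
    # Use it to conditionally set a FastCGI parameter
    set-param REMOTE_USER internal-user if internal
```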
- CH The client aborted while waiting for the server to start responding.
- It might be the server taking too long to respond or the client
- clicking the 'Stop' button too fast.
+docroot <path>
+ Define the document root on the remote host. <path> will be used to build
+ the default value of FastCGI parameters SCRIPT_FILENAME and
+ PATH_TRANSLATED. It is a mandatory setting.
- cH The "timeout client" stroke while waiting for client data during a
- POST request. This is sometimes caused by too large TCP MSS values
- for PPPoE networks which cannot transport full-sized packets. It can
- also happen when client timeout is smaller than server timeout and
- the server takes too long to respond.
+index <script-name>
+ Define the script name that will be appended after a URI that ends with a
+ slash ("/") to set the default value of the FastCGI parameter SCRIPT_NAME. It
+ is an optional setting.
- CQ The client aborted while its stream was queued, waiting for a server
- with enough empty slots to accept it. It might be that either all the
- servers were saturated or that the assigned server was taking too
- long a time to respond.
+ Example :
+ index index.php
- CR The client aborted before sending a full HTTP request. Most likely
- the request was typed by hand using a telnet client, and aborted
- too early. The HTTP status code is likely a 400 here. Sometimes this
- might also be caused by an IDS killing the connection between HAProxy
- and the client. "option http-ignore-probes" can be used to ignore
- connections without any data transfer.
+log-stderr global
+log-stderr <target> [len <length>] [format <format>]
+ [sample <ranges>:<sample_size>] <facility> [<level> [<minlevel>]]
+ Enable logging of STDERR messages reported by the FastCGI application.
- cR The "timeout http-request" stroke before the client sent a full HTTP
- request. This is sometimes caused by too large TCP MSS values on the
- client side for PPPoE networks which cannot transport full-sized
- packets, or by clients sending requests by hand and not typing fast
- enough, or forgetting to enter the empty line at the end of the
- request. The HTTP status code is likely a 408 here. Note: recently,
- some browsers started to implement a "pre-connect" feature consisting
- in speculatively connecting to some recently visited web sites just
- in case the user would like to visit them. This results in many
- connections being established to web sites, which end up in 408
- Request Timeout if the timeout strikes first, or 400 Bad Request when
- the browser decides to close them first. These ones pollute the log
- and feed the error counters. Some versions of some browsers have even
- been reported to display the error code. It is possible to work
- around the undesirable effects of this behavior by adding "option
- http-ignore-probes" in the frontend, resulting in connections with
- zero data transfer to be totally ignored. This will definitely hide
- the errors of people experiencing connectivity issues though.
+ See "log" keyword in section 4.2 for details. It is an optional setting. By
+ default STDERR messages are ignored.
- CT The client aborted while its stream was tarpitted. It is important to
- check if this happens on valid requests, in order to be sure that no
- wrong tarpit rules have been written. If a lot of them happen, it
- might make sense to lower the "timeout tarpit" value to something
- closer to the average reported "Tw" timer, in order not to consume
- resources for just a few attackers.
+pass-header <name> [ { if | unless } <condition> ]
+ Specify the name of a request header which will be passed to the FastCGI
+ application. It may optionally be followed by an ACL-based condition, in
+ which case it will only be evaluated if the condition is true.
- LC The request was intercepted and locally handled by HAProxy. The
- request was not sent to the server. It only happens with a redirect
- because of a "redir" parameter on the server line.
+ Most request headers are already available to the FastCGI application,
+ prefixed with "HTTP_". Thus, this directive is only required to pass headers
+ that are purposefully omitted. Currently, the headers "Authorization",
+ "Proxy-Authorization" and hop-by-hop headers are omitted.
- LR The request was intercepted and locally handled by HAProxy. The
- request was not sent to the server. Generally it means a redirect was
- returned, an HTTP return statement was processed or the request was
- handled by an applet (stats, cache, Prometheus exported, lua applet...).
+ Note that the headers "Content-type" and "Content-length" are never passed to
+ the FastCGI application because they are already converted into parameters.
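For instance, the omitted "Authorization" header can be passed explicitly, optionally under a condition; the application name, document root and custom header are illustrative:

```haproxy
fcgi-app php-fpm
    docroot /var/www/app
    # Always pass the Authorization header, which is omitted by default
    pass-header Authorization
    # Only pass this one when the client actually sent it
    pass-header X-Request-Id if { req.hdr(X-Request-Id) -m found }
```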
- LH The response was intercepted and locally handled by HAProxy. Generally
- it means a redirect was returned or an HTTP return statement was
- processed.
+path-info <regex>
+ Define a regular expression to extract the script-name and the path-info from
+ the URL-decoded path. Thus, <regex> may have two captures: the first one to
+ capture the script name and the second one to capture the path-info. The
+ first one is mandatory, the second one is optional. This way, it is possible
+ to extract the script-name from the path ignoring the path-info. It is an
+ optional setting. If it is not defined, no matching is performed on the
+ path, and the FastCGI parameters PATH_INFO and PATH_TRANSLATED are not
+ filled.
- SC The server or an equipment between it and HAProxy explicitly refused
- the TCP connection (the proxy received a TCP RST or an ICMP message
- in return). Under some circumstances, it can also be the network
- stack telling the proxy that the server is unreachable (e.g. no route,
- or no ARP response on local network). When this happens in HTTP mode,
- the status code is likely a 502 or 503 here.
+ For security reasons, when this regular expression is defined, the newline
+ and the null characters are forbidden from the path, once URL-decoded. The
+ reason for this limitation is that otherwise the matching always fails (due
+ to a limitation in the way regular expressions are executed in HAProxy). So
+ if one of these two characters is found in the URL-decoded path, an error is
+ returned to the client. The principle of least astonishment is applied here.
- sC The "timeout connect" stroke before a connection to the server could
- complete. When this happens in HTTP mode, the status code is likely a
- 503 or 504 here.
+ Example :
+ path-info ^(/.+\.php)(/.*)?$ # both script-name and path-info may be set
+ path-info ^(/.+\.php) # the path-info is ignored
- SD The connection to the server died with an error during the data
- transfer. This usually means that HAProxy has received an RST from
- the server or an ICMP message from an intermediate equipment while
- exchanging data with the server. This can be caused by a server crash
- or by a network issue on an intermediate equipment.
+option get-values
+no option get-values
+ Enable or disable the retrieval of variables about connection management.
- sD The server did not send nor acknowledge any data for as long as the
- "timeout server" setting during the data phase. This is often caused
- by too short timeouts on L4 equipment before the server (firewalls,
- load-balancers, ...), as well as keep-alive sessions maintained
- between the client and the server expiring first on HAProxy.
+ HAProxy is able to send the record FCGI_GET_VALUES on connection
+ establishment to retrieve the value for following variables:
- SH The server aborted before sending its full HTTP response headers, or
- it crashed while processing the request. Since a server aborting at
- this moment is very rare, it would be wise to inspect its logs to
- control whether it crashed and why. The logged request may indicate a
- small set of faulty requests, demonstrating bugs in the application.
- Sometimes this might also be caused by an IDS killing the connection
- between HAProxy and the server.
+ * FCGI_MAX_REQS The maximum number of concurrent requests this
+ application will accept.
- sH The "timeout server" stroke before the server could return its
- response headers. This is the most common anomaly, indicating too
- long transactions, probably caused by server or database saturation.
- The immediate workaround consists in increasing the "timeout server"
- setting, but it is important to keep in mind that the user experience
- will suffer from these long response times. The only long term
- solution is to fix the application.
+ * FCGI_MPXS_CONNS "0" if this application does not multiplex connections,
+ "1" otherwise.
- sQ The stream spent too much time in queue and has been expired. See
- the "timeout queue" and "timeout connect" settings to find out how to
- fix this if it happens too often. If it often happens massively in
- short periods, it may indicate general problems on the affected
- servers due to I/O or database congestion, or saturation caused by
- external attacks.
+ Some FastCGI applications do not support this feature. Some others close
+ the connection immediately after sending their response. So, by default, this
+ option is disabled.
- PC The proxy refused to establish a connection to the server because the
- process's socket limit has been reached while attempting to connect.
- The global "maxconn" parameter may be increased in the configuration
- so that it does not happen anymore. This status is very rare and
- might happen when the global "ulimit-n" parameter is forced by hand.
+ Note that the maximum number of concurrent requests accepted by a FastCGI
+ application is a connection variable. It only limits the number of streams
+ per connection. If the global load must be limited on the application, the
+ server parameters "maxconn" and "pool-max-conn" must be set. In addition, if
+ an application does not support connection multiplexing, the maximum number
+ of concurrent requests is automatically set to 1.
- PD The proxy blocked an incorrectly formatted chunked encoded message in
- a request or a response, after the server has emitted its headers. In
- most cases, this will indicate an invalid message from the server to
- the client. HAProxy supports chunk sizes of up to 2GB - 1 (2147483647
- bytes). Any larger size will be considered as an error.
+option keep-conn
+no option keep-conn
+ Instruct the FastCGI application to keep the connection open or not after
+ sending a response.
- PH The proxy blocked the server's response, because it was invalid,
- incomplete, dangerous (cache control), or matched a security filter.
- In any case, an HTTP 502 error is sent to the client. One possible
- cause for this error is an invalid syntax in an HTTP header name
- containing unauthorized characters. It is also possible but quite
- rare, that the proxy blocked a chunked-encoding request from the
- client due to an invalid syntax, before the server responded. In this
- case, an HTTP 400 error is sent to the client and reported in the
- logs. Finally, it may be due to an HTTP header rewrite failure on the
- response. In this case, an HTTP 500 error is sent (see
- "tune.maxrewrite" and "http-response strict-mode" for more
- inforomation).
+ If disabled, the FastCGI application closes the connection after responding
+ to this request. By default, this option is enabled.
- PR The proxy blocked the client's HTTP request, either because of an
- invalid HTTP syntax, in which case it returned an HTTP 400 error to
- the client, or because a deny filter matched, in which case it
- returned an HTTP 403 error. It may also be due to an HTTP header
- rewrite failure on the request. In this case, an HTTP 500 error is
- sent (see "tune.maxrewrite" and "http-request strict-mode" for more
- inforomation).
+option max-reqs <reqs>
+ Define the maximum number of concurrent requests this application will
+ accept.
- PT The proxy blocked the client's request and has tarpitted its
- connection before returning it a 500 server error. Nothing was sent
- to the server. The connection was maintained open for as long as
- reported by the "Tw" timer field.
+ This option may be overwritten if the variable FCGI_MAX_REQS is retrieved
+ during connection establishment. Furthermore, if the application does not
+ support connection multiplexing, this option will be ignored. By default set
+ to 1.
- RC A local resource has been exhausted (memory, sockets, source ports)
- preventing the connection to the server from establishing. The error
- logs will tell precisely what was missing. This is very rare and can
- only be solved by proper system tuning.
+option mpxs-conns
+no option mpxs-conns
+ Enable or disable the support of connection multiplexing.
-The combination of the two last flags gives a lot of information about how
-persistence was handled by the client, the server and by HAProxy. This is very
-important to troubleshoot disconnections, when users complain they have to
-re-authenticate. The commonly encountered flags are :
+ This option may be overwritten if the variable FCGI_MPXS_CONNS is retrieved
+ during connection establishment. It is disabled by default.
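The connection-management options above can be combined in a single application definition; this is only a sketch, with an illustrative application name, document root and max-reqs value:

```haproxy
fcgi-app php-fpm
    docroot /var/www/app
    option keep-conn      # keep the connection open after a response
    option get-values     # query FCGI_MAX_REQS and FCGI_MPXS_CONNS
    option max-reqs 32    # fallback if the application does not answer
    option mpxs-conns     # assume connection multiplexing is supported
```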
- -- Persistence cookie is not enabled.
+set-param <name> <fmt> [ { if | unless } <condition> ]
+ Set a FastCGI parameter that should be passed to this application. Its
+ value, defined by <fmt>, must follow the Custom log format rules (see section
+ 8.2.6 "Custom Log format"). It may optionally be followed by an ACL-based
+ condition, in which case it will only be evaluated if the condition is true.
- NN No cookie was provided by the client, none was inserted in the
- response. For instance, this can be in insert mode with "postonly"
- set on a GET request.
+ With this directive, it is possible to overwrite the value of default FastCGI
+ parameters. If the value is evaluated to an empty string, the rule is
+ ignored. These directives are evaluated in their declaration order.
- II A cookie designating an invalid server was provided by the client,
- a valid one was inserted in the response. This typically happens when
- a "server" entry is removed from the configuration, since its cookie
- value can be presented by a client when no other server knows it.
+ Example :
+ # PHP only, required if PHP was built with --enable-force-cgi-redirect
+ set-param REDIRECT_STATUS 200
- NI No cookie was provided by the client, one was inserted in the
- response. This typically happens for first requests from every user
- in "insert" mode, which makes it an easy way to count real users.
+ set-param PHP_AUTH_DIGEST %[req.hdr(Authorization)]
- VN A cookie was provided by the client, none was inserted in the
- response. This happens for most responses for which the client has
- already got a cookie.
- VU A cookie was provided by the client, with a last visit date which is
- not completely up-to-date, so an updated cookie was provided in
- response. This can also happen if there was no date at all, or if
- there was a date but the "maxidle" parameter was not set, so that the
- cookie can be switched to unlimited time.
+10.1.2. Proxy section
+---------------------
- EI A cookie was provided by the client, with a last visit date which is
- too old for the "maxidle" parameter, so the cookie was ignored and a
- new cookie was inserted in the response.
+use-fcgi-app <name>
+ Define the FastCGI application to use for the backend.
- OI A cookie was provided by the client, with a first visit date which is
- too old for the "maxlife" parameter, so the cookie was ignored and a
- new cookie was inserted in the response.
+ Arguments :
+ <name> is the name of the FastCGI application to use.
- DI The server designated by the cookie was down, a new server was
- selected and a new cookie was emitted in the response.
+ This keyword is only available for HTTP proxies with the backend capability
+ and with at least one FastCGI server. However, FastCGI servers can be mixed
+ with HTTP servers, although this is not recommended unless there is a good
+ reason to do so (see section 10.3 about the limitations for details). Only
+ one application may be defined at a time per backend.
- VI The server designated by the cookie was not marked dead but could not
- be reached. A redispatch happened and selected another one, which was
- then advertised in the response.
+ Note that, once a FastCGI application is referenced for a backend, depending
+ on the configuration some processing may be done even if the request is not
+ sent to a FastCGI server. Rules to set parameters or pass headers to an
+ application are evaluated.
-8.6. Non-printable characters
------------------------------
+10.1.3. Example
+---------------
-In order not to cause trouble to log analysis tools or terminals during log
-consulting, non-printable characters are not sent as-is into log files, but are
-converted to the two-digits hexadecimal representation of their ASCII code,
-prefixed by the character '#'. The only characters that can be logged without
-being escaped are comprised between 32 and 126 (inclusive). Obviously, the
-escape character '#' itself is also encoded to avoid any ambiguity ("#23"). It
-is the same for the character '"' which becomes "#22", as well as '{', '|' and
-'}' when logging headers.
+ frontend front-http
+ mode http
+ bind *:80
+ bind *:
-Note that the space character (' ') is not encoded in headers, which can cause
-issues for tools relying on space count to locate fields. A typical header
-containing spaces is "User-Agent".
+ use_backend back-dynamic if { path_reg ^/.+\.php(/.*)?$ }
+ default_backend back-static
-Last, it has been observed that some syslog daemons such as syslog-ng escape
-the quote ('"') with a backslash ('\'). The reverse operation can safely be
-performed since no quote may appear anywhere else in the logs.
+ backend back-static
+ mode http
+ server www A.B.C.D:80
+ backend back-dynamic
+ mode http
+ use-fcgi-app php-fpm
+ server php-fpm A.B.C.D:9000 proto fcgi
-8.7. Capturing HTTP cookies
----------------------------
+ fcgi-app php-fpm
+ log-stderr global
+ option keep-conn
-Cookie capture simplifies the tracking a complete user session. This can be
-achieved using the "capture cookie" statement in the frontend. Please refer to
-section 4.2 for more details. Only one cookie can be captured, and the same
-cookie will simultaneously be checked in the request ("Cookie:" header) and in
-the response ("Set-Cookie:" header). The respective values will be reported in
-the HTTP logs at the "captured_request_cookie" and "captured_response_cookie"
-locations (see section 8.2.3 about HTTP log format). When either cookie is
-not seen, a dash ('-') replaces the value. This way, it's easy to detect when a
-user switches to a new session for example, because the server will reassign it
-a new cookie. It is also possible to detect if a server unexpectedly sets a
-wrong cookie to a client, leading to session crossing.
+ docroot /var/www/my-app
+ index index.php
+ path-info ^(/.+\.php)(/.*)?$
- Examples :
- # capture the first cookie whose name starts with "ASPSESSION"
- capture cookie ASPSESSION len 32
- # capture the first cookie whose name is exactly "vgnvisitor"
- capture cookie vgnvisitor= len 32
+10.2. Default parameters
+------------------------
+A Responder FastCGI application has the same purpose as a CGI/1.1 program. In
+the CGI/1.1 specification (RFC3875), several variables must be passed to the
+script. So HAProxy sets them, along with some others commonly used by FastCGI
+applications. All these variables may be overwritten, though with caution.
+
+ +-------------------+-----------------------------------------------------+
+ | AUTH_TYPE | Identifies the mechanism, if any, used by HAProxy |
+ | | to authenticate the user. Concretely, only the |
+ | | BASIC authentication mechanism is supported. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | CONTENT_LENGTH | Contains the size of the message-body attached to |
+ | | the request. It means only requests with a known |
+ | | size are considered as valid and sent to the |
+ | | application. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | CONTENT_TYPE | Contains the type of the message-body attached to |
+ | | the request. It may not be set. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | DOCUMENT_ROOT | Contains the document root on the remote host under |
+ | | which the script should be executed, as defined in |
+ | | the application's configuration. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | GATEWAY_INTERFACE | Contains the dialect of CGI being used by HAProxy |
+ | | to communicate with the FastCGI application. |
+ | | Concretely, it is set to "CGI/1.1". |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | PATH_INFO | Contains the portion of the URI path hierarchy |
+ | | following the part that identifies the script |
+ | | itself. To be set, the directive "path-info" must |
+ | | be defined. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | PATH_TRANSLATED | If PATH_INFO is set, it is its translated version. |
+ | | It is the concatenation of DOCUMENT_ROOT and |
+ |                   | PATH_INFO. If PATH_INFO is not set, this parameter  |
+ |                   | is not set either.                                  |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | QUERY_STRING | Contains the request's query string. It may not be |
+ | | set. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | REMOTE_ADDR | Contains the network address of the client sending |
+ | | the request. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | REMOTE_USER | Contains the user identification string supplied by |
+ |                   | the client as part of user authentication.          |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | REQUEST_METHOD | Contains the method which should be used by the |
+ | | script to process the request. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | REQUEST_URI | Contains the request's URI. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | SCRIPT_FILENAME   | Contains the absolute pathname of the script. It is |
+ | | the concatenation of DOCUMENT_ROOT and SCRIPT_NAME. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | SCRIPT_NAME | Contains the name of the script. If the directive |
+ | | "path-info" is defined, it is the first part of the |
+ | | URI path hierarchy, ending with the script name. |
+ | | Otherwise, it is the entire URI path. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | SERVER_NAME | Contains the name of the server host to which the |
+ | | client request is directed. It is the value of the |
+ | | header "Host", if defined. Otherwise, the |
+ | | destination address of the connection on the client |
+ | | side. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | SERVER_PORT | Contains the destination TCP port of the connection |
+ | | on the client side, which is the port the client |
+ | | connected to. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | SERVER_PROTOCOL | Contains the request's protocol. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | SERVER_SOFTWARE | Contains the string "HAProxy" followed by the |
+ | | current HAProxy version. |
+ | | |
+ +-------------------+-----------------------------------------------------+
+ | HTTPS | Set to a non-empty value ("on") if the script was |
+ | | queried through the HTTPS protocol. |
+ | | |
+ +-------------------+-----------------------------------------------------+
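+
+For example, any of these variables may be overridden from the application
+section with a "set-param" rule. A hypothetical sketch (the value below is
+purely illustrative):
+
+  fcgi-app php-fpm
+      docroot /var/www/my-app
+      # override the default SERVER_SOFTWARE variable (illustrative value)
+      set-param SERVER_SOFTWARE my-gateway/1.0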
-8.8. Capturing HTTP headers
----------------------------
-Header captures are useful to track unique request identifiers set by an upper
-proxy, virtual host names, user-agents, POST content-length, referrers, etc. In
-the response, one can search for information about the response length, how the
-server asked the cache to behave, or an object location during a redirection.
+10.3. Limitations
+-----------------
-Header captures are performed using the "capture request header" and "capture
-response header" statements in the frontend. Please consult their definition in
-section 4.2 for more details.
+The current implementation has some limitations. The first one is about the
+way some request headers are hidden from the FastCGI applications. This
+happens during the headers analysis, on the backend side, before the
+connection establishment. At this stage, HAProxy knows the backend is using a
+FastCGI application, but it doesn't know yet whether the request will be
+routed to a FastCGI server or not. But to hide request headers, it simply
+removes them from the HTX message. So, if the request is finally routed to an
+HTTP server, that server never sees these headers. For this reason, it is not
+recommended to mix FastCGI servers and HTTP servers under the same backend.
-It is possible to include both request headers and response headers at the same
-time. Non-existent headers are logged as empty strings, and if one header
-appears more than once, only its last occurrence will be logged. Request headers
-are grouped within braces '{' and '}' in the same order as they were declared,
-and delimited with a vertical bar '|' without any space. Response headers
-follow the same representation, but are displayed after a space following the
-request headers block. These blocks are displayed just before the HTTP request
-in the logs.
+Similarly, the rules "set-param" and "pass-header" are evaluated during the
+request headers analysis. So the evaluation is always performed, even if the
+request is finally forwarded to an HTTP server.
-As a special case, it is possible to specify an HTTP header capture in a TCP
-frontend. The purpose is to enable logging of headers which will be parsed in
-an HTTP backend if the request is then switched to this HTTP backend.
+Regarding the "set-param" rules, when a rule is applied, a pseudo header is
+added into the HTX message. So, just as for HTTP header rewrites, it may fail
+if the buffer is full. The "set-param" rules thus compete with "http-request"
+ones for that space.
- Example :
- # This instance chains to the outgoing proxy
- listen proxy-out
- mode http
- option httplog
- option logasap
- log global
- server cache1 192.168.1.1:3128
+Finally, all FastCGI parameters and HTTP headers are sent in a unique
+FCGI_PARAMS record. Encoding of this record must be done in one pass,
+otherwise a processing error is returned. It means that the FCGI_PARAMS
+record, once encoded, must not exceed the size of a buffer. However, there is
+no reserve to respect here.
- # log the name of the virtual server
- capture request header Host len 20
- # log the amount of data uploaded during a POST
- capture request header Content-Length len 10
+11. Stick-tables and Peers
+--------------------------
- # log the beginning of the referrer
- capture request header Referer len 20
+Stick-tables in HAProxy are a mechanism which makes it possible to associate
+a number of pieces of information and metrics with a key of a certain type,
+and this for a certain duration after the last update. This can be seen as a
+multicolumn line in a table, where the line number is defined by the key
+value, and the columns all represent distinct criteria.
- # server name (useful for outgoing proxies only)
- capture response header Server len 20
+Stick-tables were originally designed to store client-server stickiness
+information in order to maintain persistent sessions between these entities. A
+client would connect or send a request, this client would be identified via a
+discriminator (source address, cookie, URL parameter) and the chosen server
+would be stored in association with this discriminator in a stick table for a
+configurable duration so that subsequent accesses from the same client could
+automatically be routed to the same server, where the client had created its
+application session.
- # logging the content-length is useful with "option logasap"
- capture response header Content-Length len 10
+Nowadays, stick-tables can store more information than just a server number,
+elements such as activity metrics related to a specific client can be stored
+(request counts/rates, connection counts/rates, byte counts/rates etc), as well
+as some arbitrary event counters ("gpc" for "General Purpose Counters") and
+some tags to label a client with certain characteristics ("gpt" for "General
+Purpose Tag").
- # log the expected cache behavior on the response
- capture response header Cache-Control len 8
+Stick-tables may be referenced by the "stick" directives, which are used for
+client-server stickiness, by "track-sc" rules, which are used to describe what
+key to track in which table in order to collect metrics, as well as by a number
+of sample-fetch functions and converters which can perform an immediate lookup
+of a given key to retrieve a specific metric or data. The general principle is
+that updates to tables (gpt/gpc/metrics) as well as lookups of stickiness
+information refresh the accessed entry and postpone its expiration, while mere
+lookups from sample-fetch functions and converters only extract the data
+without postponing the entry's expiration.
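+
+As a hypothetical sketch (all names are illustrative), a frontend may both
+update a table via a tracking rule and merely inspect it via a converter:
+
+  frontend fe-web
+      bind *:80
+      stick-table type ipv4 size 100k expire 30s store http_req_rate(10s)
+      # updates the entry for the source address and refreshes its expiration
+      http-request track-sc0 src
+      # a mere converter lookup does not postpone the entry's expiration
+      http-request deny if { src,table_http_req_rate(fe-web) gt 100 }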
- # the Via header will report the next proxy's name
- capture response header Via len 20
+In order for the mechanism to scale and to resist HAProxy reloads and
+failover, it is possible to share stick-tables updates with other nodes called
+"peers" via the "Peers" mechanism described in section 11.2. In order to finely
+tune the communication with peers, it is possible to also decide that some
+tables only receive information from peers, or that updates from peers should
+instead be forwarded to a different table.
- # log the URL location during a redirection
- capture response header Location len 20
+Finally, stick-tables may be declared either in proxy sections (frontends,
+backends) using the "stick-table" keyword, where there may only be one per
+section and where they will get the name of that section, or in peers sections
+with the "table" keyword followed by the table's name, which permits
+declaring multiple stick-tables in the same "peers" section. If multiple
+stick-tables are needed, usually the recommended solution is either to declare
+them in a peers section (in case they intend to be shared), or to create extra
+backend sections, each with only the "stick-table" definition in them.
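+
+For example (names are illustrative), an extra backend may exist solely to
+hold a stick-table referenced from elsewhere:
+
+  backend st-src-rates
+      stick-table type ipv4 size 1m expire 10m store http_req_rate(10s)
+
+  frontend fe-web
+      bind *:80
+      http-request track-sc0 src table st-src-rates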
- >>> Aug 9 20:26:09 localhost \
- haproxy[2022]: 127.0.0.1:34014 [09/Aug/2004:20:26:09] proxy-out \
- proxy-out/cache1 0/0/0/162/+162 200 +350 - - ---- 0/0/0/0/0 0/0 \
- {fr.adserver.yahoo.co||http://fr.f416.mail.} {|864|private||} \
- "GET http://fr.adserver.yahoo.com/"
- >>> Aug 9 20:30:46 localhost \
- haproxy[2022]: 127.0.0.1:34020 [09/Aug/2004:20:30:46] proxy-out \
- proxy-out/cache1 0/0/0/182/+182 200 +279 - - ---- 0/0/0/0/0 0/0 \
- {w.ods.org||} {Formilux/0.1.8|3495|||} \
- "GET http://trafic.1wt.eu/ HTTP/1.1"
+11.1. Stick-tables declaration
+------------------------------
- >>> Aug 9 20:30:46 localhost \
- haproxy[2022]: 127.0.0.1:34028 [09/Aug/2004:20:30:46] proxy-out \
- proxy-out/cache1 0/0/2/126/+128 301 +223 - - ---- 0/0/0/0/0 0/0 \
- {www.sytadin.equipement.gouv.fr||http://trafic.1wt.eu/} \
- {Apache|230|||http://www.sytadin.} \
- "GET http://www.sytadin.equipement.gouv.fr/ HTTP/1.1"
+The declaration of a stick-table in a proxy section ("frontend", "backend",
+"listen") and in "peers" sections is very similar, with the differences being
+that the one in the peers section requires a mandatory name and doesn't take a
+"peers" option.
+In a "frontend", "backend" or "listen" section:
-8.9. Examples of logs
----------------------
+ stick-table type <type> size <size> [expire <expire>] [nopurge] [recv-only]
+ [write-to <wtable>] [srvkey <srvkey>] [store <data_type>]*
+ [brates-factor <factor>] [peers <peersect>]
-These are real-world examples of logs accompanied with an explanation. Some of
-them have been made up by hand. The syslog part has been removed for better
-reading. Their sole purpose is to explain how to decipher them.
+In a "peers" section:
- >>> haproxy[674]: 127.0.0.1:33318 [15/Oct/2003:08:31:57.130] px-http \
- px-http/srv1 6559/0/7/147/6723 200 243 - - ---- 5/3/3/1/0 0/0 \
- "HEAD / HTTP/1.0"
+ table <name> type <type> size <size> [expire <expire>] [nopurge] [recv-only]
+ [write-to <wtable>] [srvkey <srvkey>] [store <data_type>]*
+ [brates-factor <factor>]
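+
+For example, a minimal "peers" section declaring such a table could look like
+this (peer names and addresses are illustrative):
+
+  peers mycluster
+      peer haproxy1 192.168.0.1:10000
+      peer haproxy2 192.168.0.2:10000
+      table src_track type ipv4 size 1m expire 10m store conn_cur,gpc(2)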
- => long request (6.5s) entered by hand through 'telnet'. The server replied
- in 147 ms, and the session ended normally ('----')
+Arguments (mandatory ones first, then alphabetically sorted):
- >>> haproxy[674]: 127.0.0.1:33319 [15/Oct/2003:08:31:57.149] px-http \
- px-http/srv1 6559/1230/7/147/6870 200 243 - - ---- 324/239/239/99/0 \
- 0/9 "HEAD / HTTP/1.0"
+ - type <type>
+ This mandatory argument sets the key type to <type>, which
+ usually is a single word but may also have its own arguments:
- => Idem, but the request was queued in the global queue behind 9 other
- requests, and waited there for 1230 ms.
+ * ip This type should be avoided in favor of a more explicit one such
+ as "ipv4" or "ipv6". Prior to version 3.2 it was the only way to
+ configure IPv4. In 3.2, "ip" is an alias for "ipv4", and "ipv4"
+ is preferred. In a future version, "ip" will instead correspond
+ to "ipv6". It is only meant to ease the transition from pre-3.2
+ to post-3.2.
- >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.654] px-http \
- px-http/srv1 9/0/7/14/+30 200 +243 - - ---- 3/3/3/1/0 0/0 \
- "GET /image.iso HTTP/1.0"
+ * ipv4 A table declared with this type will only store IPv4 addresses.
+ This form is very compact (about 50 bytes per entry) and allows
+ very fast entry lookup and stores with almost no overhead. This
+ is mainly used to store client source IP addresses.
- => request for a long data transfer. The "logasap" option was specified, so
- the log was produced just before transferring data. The server replied in
- 14 ms, 243 bytes of headers were sent to the client, and total time from
- accept to first data byte is 30 ms.
+ * ipv6 A table declared with "type ipv6" will only store IPv6 addresses.
+ This form is very compact (about 60 bytes per entry) and allows
+ very fast entry lookup and stores with almost no overhead. This
+ is mainly used to store client source IP addresses.
- >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.925] px-http \
- px-http/srv1 9/0/7/14/30 502 243 - - PH-- 3/2/2/0/0 0/0 \
- "GET /cgi-bin/bug.cgi? HTTP/1.0"
+ * integer A table declared with "type integer" will store 32bit integers
+ which can represent a client identifier found in a request for
+ instance.
- => the proxy blocked a server response either because of an "http-response
- deny" rule, or because the response was improperly formatted and not
- HTTP-compliant, or because it blocked sensitive information which risked
- being cached. In this case, the response is replaced with a "502 bad
- gateway". The flags ("PH--") tell us that it was HAProxy who decided to
- return the 502 and not the server.
+ * string [length <len>]
+ A table declared with "type string" will store substrings of
+ up to <len> characters. If the string provided by the pattern
+ extractor is larger than <len>, it will be truncated before
+ being stored. During matching, at most <len> characters will
+ be compared between the string in the table and the extracted
+ pattern. When not specified, the string is automatically
+ limited to 32 characters. Increasing the length can have a
+ non-negligible memory usage impact.
- >>> haproxy[18113]: 127.0.0.1:34548 [15/Oct/2003:15:18:55.798] px-http \
- px-http/<NOSRV> -1/-1/-1/-1/8490 -1 0 - - CR-- 2/2/2/0/0 0/0 ""
+ * binary [length <len>]
+ A table declared with "type binary" will store binary blocks
+ of <len> bytes. If the block provided by the pattern
+ extractor is larger than <len>, it will be truncated before
+ being stored. If the block provided by the sample expression
+ is shorter than <len>, it will be padded by 0. When not
+ specified, the block is automatically limited to 32
+ bytes. Increasing the length can have a non-negligible memory
+ usage impact.
- => the client never completed its request and aborted itself ("C---") after
- 8.5s, while the proxy was waiting for the request headers ("-R--").
- Nothing was sent to any server.
+ - size <size>
+              This mandatory argument sets the maximum number of entries that
+              can fit in the table to <size>. This value directly impacts
+              memory usage. Count approximately 50 bytes per entry in
+              addition to the key size above, and optionally stored metrics,
+              plus the size of a string if any. The size supports suffixes
+              "k", "m", "g" for 2^10, 2^20 and 2^30 factors.
- >>> haproxy[18113]: 127.0.0.1:34549 [15/Oct/2003:15:19:06.103] px-http \
- px-http/<NOSRV> -1/-1/-1/-1/50001 408 0 - - cR-- 2/2/2/0/0 0/0 ""
+   - expire <delay>
+              Defines the maximum duration of an entry in the table since it
+              was last created, refreshed using a 'track-sc' rule or matched
+              using a 'stick match' or 'stick on' rule. The expiration delay
+              <delay> is defined using the standard time format, similarly to
+              the various timeouts, defaulting to milliseconds. The maximum
+              duration is slightly above 24 days. See section 2.5 for more
+              information. If this delay is not specified, sessions won't
+              automatically expire, but the oldest entries will be removed
+              upon creation once the table is full. Be sure not to use the
+              "nopurge" parameter if no expiration delay is specified.
+              Note: 'table_*' converters perform lookups but won't touch the
+              expiration timer since they don't use 'track-sc'.
- => The client never completed its request, which was aborted by the
- time-out ("c---") after 50s, while the proxy was waiting for the request
- headers ("-R--"). Nothing was sent to any server, but the proxy could
- send a 408 return code to the client.
+   - brates-factor <factor>
+              Specifies a factor to be applied to the in/out bytes rates.
+              Instead of counting each byte, blocks of bytes are counted.
+              Internally, rates are defined on 32-bit counters, limiting them
+              to about 4 billion per period. By using this parameter, it is
+              possible to have rates exceeding this 4G limit over the defined
+              period. The factor must be greater than 0 and lower than or
+              equal to 1024.
- >>> haproxy[18989]: 127.0.0.1:34550 [15/Oct/2003:15:24:28.312] px-tcp \
- px-tcp/srv1 0/0/5007 0 cD 0/0/0/0/0 0/0
+   - nopurge  indicates that we refuse to purge older entries when the table
+              is full. When not specified and the table is full when HAProxy
+              wants to store an entry in it, it will flush a few of the
+              oldest entries in order to release some space for the new
+              ones. This is most often the desired behavior. In some specific
+              cases, it will be desirable to refuse new entries instead of
+              purging the older ones. That may be the case when the amount of
+              data to store is far above the hardware limits and we prefer to
+              deny access to new clients rather than to purge entries for the
+              ones already connected. When using this parameter, be sure to
+              properly set the "expire" parameter (see above).
- => This log was produced with "option tcplog". The client timed out after
- 5 seconds ("c----").
+ - recv-only
+ indicates that we don't intend to use the table to perform updates
+ on it, but that we only plan on using the table to retrieve data
+ from a remote peer which we are interested in. Indeed, the use of
+ this keyword enables the retrieval of local-only values such as
+ "conn_cur" that are not learned by default as they would conflict
+ with local updates performed on the table by the local peer. Use
+ of this option is only relevant for tables that are not involved
+ in tracking rules or methods that perform update operations on the
+              table, or put more simply: remote tables that are only used to
+              retrieve information.
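+
+As an illustrative sketch, such a table could be declared in a "peers"
+section to observe the concurrent connections seen by remote peers:
+
+  peers mycluster
+      peer haproxy1 192.168.0.1:10000
+      peer haproxy2 192.168.0.2:10000
+      table remote_conns type ipv4 size 1m expire 1h recv-only store conn_cur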
- >>> haproxy[18989]: 10.0.0.1:34552 [15/Oct/2003:15:26:31.462] px-http \
- px-http/srv1 3183/-1/-1/-1/11215 503 0 - - SC-- 205/202/202/115/3 \
- 0/0 "HEAD / HTTP/1.0"
+ - peers <peersect>
+ Entries that are created, updated or refreshed will be sent to the
+ peers in section <peersect> for synchronization, and keys learned
+ from peers in this section will also be inserted or updated in the
+              table. Additionally, on startup, an attempt may be made to
+              learn entries from an older instance of the process, designated
+              as the "local peer" via this section.
- => The request took 3s to complete (probably a network problem), and the
- connection to the server failed ('SC--') after 4 attempts of 2 seconds
- (config says 'retries 3'), and no redispatch (otherwise we would have
- seen "/+3"). Status code 503 was returned to the client. There were 115
- connections on this server, 202 connections on this proxy, and 205 on
- the global process. It is possible that the server refused the
- connection because of too many already established.
+   - srvkey <srvkey>
+              Specifies how each server is identified for the purposes of the
+              stick table. The valid values are "name" and "addr". If "name"
+              is given, then the server is identified by its <name> argument
+              (which may be generated by a template). If "addr" is given,
+              then the server is identified by its current network address,
+              including the port. "addr" is especially useful if you are
+              using service discovery to generate the addresses for servers
+              with peered stick-tables and want to consistently use the same
+              host across peers for a stickiness token.
+ - store <data_type>
+ This is used to store additional information in the stick-table.
+ This may be used by ACLs in order to control various criteria
+ related to the activity of the client matching the stick-table.
+ For each item specified here, the size of each entry will be
+ inflated so that the additional data can fit. Several data types
+ may be stored with an entry. Multiple data types may be specified
+ after the "store" keyword, as a comma-separated list.
+ Alternatively, it is possible to repeat the "store" keyword
+ followed by one or several data types. Except for the "server_id"
+ type which is automatically detected and enabled, all data types
+ must be explicitly declared to be stored. If an ACL references a
+ data type which is not stored, the ACL will simply not match. Some
+ data types require an argument which must be passed just after the
+ type between parenthesis. See below for the supported data types
+ and their arguments.
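+
+For example (a sketch with illustrative values), several data types may be
+combined, either in a comma-separated list or by repeating "store":
+
+  backend per-client
+      stick-table type ipv4 size 100k expire 30m store conn_cur,conn_rate(10s)
+      # equivalent alternative: ... store conn_cur store conn_rate(10s)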
-9. Supported filters
---------------------
+   - write-to <wtable>
+              Specifies the name of another stick table where peers updates
+              will be written to in addition to the source table. <wtable>
+              must be of the same type as the table being defined and must
+              have the same key length, and the source table cannot be used
+              as a target table itself. Every time an entry update is
+              received on the source table through a peer, HAProxy will try
+              to refresh the related <wtable> entry. If the entry doesn't
+              exist yet, it will be created, otherwise its values will be
+              updated as well as its timer. Note that only types that are not
+              involved in arithmetic operations, such as server_id,
+              server_key and gpt, will be written to <wtable>, to prevent
+              processed values from a remote table from interfering with
+              arithmetic operations performed on the local target table
+              (i.e. to prevent a shared cumulative counter from growing
+              indefinitely). One common use of this option is to be able to
+              use sticking rules (for server persistence) in a peers cluster
+              setup, because matching keys will be learned from remote
+              tables.
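+
+As an illustrative sketch (all names are hypothetical), peer updates may be
+redirected into the table used by sticking rules:
+
+  backend back-sticky
+      stick-table type ipv4 size 1m expire 1h
+      stick on src
+      server s1 192.168.1.10:80
+      server s2 192.168.1.11:80
+
+  peers mycluster
+      peer haproxy1 192.168.0.1:10000
+      peer haproxy2 192.168.0.2:10000
+      # updates received from peers also refresh back-sticky's table
+      table learned type ipv4 size 1m expire 1h write-to back-sticky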
-Here are listed officially supported filters with the list of parameters they
-accept. Depending on compile options, some of these filters might be
-unavailable. The list of available filters is reported in haproxy -vv.
+The data types that can be associated with an entry via the "store" directive
+are listed below. It is important to keep in mind that memory requirements
+may be significant when storing many data types. Indeed, storing all the
+indicators below at once in each entry can require hundreds of bytes per
+entry, or hundreds of MB for a 1-million entries table. For this reason, the
+approximate storage size is mentioned for each type between brackets:
-See also : "filter"
+   - bytes_in_cnt [8 bytes]
+ This is the client to server byte count. It is a positive 64-bit
+ integer which counts the cumulative number of bytes received from
+ clients which matched this entry. Headers are included in the
+ count. This may be used to limit abuse of upload features on photo
+ or video servers.
-9.1. Trace
-----------
+ - bytes_in_rate(<period>) [12 bytes]
+ This is a rate counter on bytes from the client to the server.
+ It takes an integer parameter <period> which indicates in
+ milliseconds the length of the period over which the average is
+ measured. It reports the average incoming bytes rate over that
+ period, in bytes per period. It may be used to detect users which
+ upload too much and too fast. Warning: with large uploads, it is
+ possible that the amount of uploaded data will be counted once
+ upon termination, thus causing spikes in the average transfer
+ speed instead of having a smooth one. This may partially be
+ smoothed with "option contstats" though this is not perfect. Use
+              of bytes_in_cnt is recommended for better fairness.
-filter trace [name <name>] [random-forwarding] [hexdump]
+   - bytes_out_cnt [8 bytes]
+ This is the server to client byte count. It is a positive 64-bit
+ integer which counts the cumulative number of bytes sent to
+ clients which matched this entry. Headers are included in the
+ count. This may be used to limit abuse of bots sucking the whole
+ site.
- Arguments:
- <name> is an arbitrary name that will be reported in
- messages. If no name is provided, "TRACE" is used.
+ - bytes_out_rate(<period>) [12 bytes]
+ This is a rate counter on bytes from the server to the client.
+ It takes an integer parameter <period> which indicates in
+ milliseconds the length of the period over which the average is
+ measured. It reports the average outgoing bytes rate over that
+ period, in bytes per period. It may be used to detect users which
+ download too much and too fast. Warning: with large transfers, it
+ is possible that the amount of transferred data will be counted
+ once upon termination, thus causing spikes in the average transfer
+ speed instead of having a smooth one. This may partially be
+ smoothed with "option contstats" though this is not perfect
+              yet. Use of bytes_out_cnt is recommended for better fairness.
- <quiet> inhibits trace messages.
+ - conn_cnt [4 bytes]
+ This is the Connection Count. It is a positive 32-bit integer
+ which counts the absolute number of connections received from
+ clients which matched this entry. It does not mean the connections
+ were accepted, just that they were received.
- <random-forwarding> enables the random forwarding of parsed data. By
- default, this filter forwards all previously parsed
- data. With this parameter, it only forwards a random
- amount of the parsed data.
+ - conn_cur [4 bytes]
+ This is the Current Connections count. It is a positive 32-bit
+ integer which stores the concurrent connection count for the
+ entry. It is incremented once an incoming connection matches the
+ entry, and decremented once the connection leaves. That way it is
+ possible to know at any time the exact number of concurrent
+ connections for an entry. This type is not learned from other
+ peers by default as it wouldn't represent anything given that it
+ would ignore the local count. However, in combination with
+ recv-only it can be used to learn the number of concurrent
+ connections seen by peers.
- <hexdump> dumps all forwarded data to the server and the client.
+ - conn_rate(<period>) [12 bytes]
+ This is a connection frequency counter. It takes an integer
+ parameter <period> which indicates in milliseconds the length of
+ the period over which the average is measured. It reports the
+ average incoming connection rate over that period, in connections
+ per period. The result is an integer which can be matched using
+ ACLs. Whether connections are accepted or rejected has no effect
+ on their measurement.
-This filter can be used as a base to develop new filters. It defines all
-callbacks and print a message on the standard error stream (stderr) with useful
-information for all of them. It may be useful to debug the activity of other
-filters or, quite simply, HAProxy's activity.
+ - glitch_cnt [4 bytes]
+ This is the front glitches count. It is a positive 32-bit integer
+ which counts the cumulative number of glitches reported on a front
+ connection. Glitches correspond to either unusual or unexpected
+ actions (protocol-wise) from the client that could indicate a
+ badly defective client or possibly an attacker. As such, this
+ counter can help decide how to act with such clients.
-Using <random-parsing> and/or <random-forwarding> parameters is a good way to
-tests the behavior of a filter that parses data exchanged between a client and
-a server by adding some latencies in the processing.
+ - glitch_rate(<period>) [12 bytes]
+ This is a frequency counter on glitches. It takes an integer
+ parameter <period> which indicates in milliseconds the length of
+ the period over which the average is measured. It reports the
+ average front glitches rate over that period. It may be used to
+ detect defective clients or potential attackers that perform
+ uncommon or unexpected actions from a protocol point of view,
+ provided that HAProxy flagged them as such.
+ - gpc(<nb>) [4 * <nb> bytes]
+ This is an array of <nb> General Purpose Counter elements. This is
+ an array of positive 32-bit integers which may be used to count
+ anything. Most of the time they will be used as incremental
+ counters on some entries, for instance to note that a limit is
+ reached and trigger some actions. This array is limited to a
+ maximum of 100 elements: gpc0 to gpc99, to ensure that the build
+ of a peer update message can fit into the buffer. Users should
+ take into consideration that a large number of counters will
+ increase the data size and the traffic load using peers protocol
+ since all data/counters are pushed each time any of them is
+ updated. This data_type will exclude the usage of the legacy
+ data_types 'gpc0' and 'gpc1' on the same table. Using the 'gpc'
+ array data_type, all 'gpc0' and 'gpc1' related sample fetch
+ functions and actions will apply to the two first elements of this
+ array.
-9.2. HTTP compression
----------------------
+ - gpc_rate(<nb>,<period>) [12 * <nb> bytes]
+ This is an array of increment rates of General Purpose Counters
+ over a period. Those elements are positive 32-bit integers which
+ may be used for anything. Just like <gpc>, they count events, but
+ instead of keeping a cumulative number, they maintain the rate at
+ which the counter is incremented. Most of the time it will be
+ used to measure the frequency of occurrence of certain events
+ (e.g. requests to a specific URL). This array is limited to a
+ maximum of 100 elements: gpc_rate(100) allowing the storage of
+ the rates of gpc0 to gpc99, to ensure that the build of a peer
+ update message can fit into the buffer. The array cannot contain
+ less than 1 element: use gpc_rate(1) if you want to store only
+ the rate of gpc0. Users should take into consideration that a
+ large number of counters will increase the data size and the
+ traffic load using peers protocol
+ since all data/counters are pushed each time any of them is
+ updated. This data_type will exclude the usage of the legacy
+ data_types 'gpc0_rate' and 'gpc1_rate' on the same table. Using
+ the 'gpc_rate' array data_type, all 'gpc0' and 'gpc1' related
+ fetches and actions will apply to the two first elements of this
+ array.
-filter compression
+ - gpc0 [4 bytes]
+ This is the first General Purpose Counter. It is a positive 32-bit
+ integer which may be used for anything. Most of the time
+ it will be used to put a special tag on some entries, for instance
+ to note that a specific behavior was detected and must be known
+ for future matches.
-The HTTP compression has been moved in a filter in HAProxy 1.7. "compression"
-keyword must still be used to enable and configure the HTTP compression. And
-when no other filter is used, it is enough. When used with the cache or the
-fcgi-app enabled, it is also enough. In this case, the compression is always
-done after the response is stored in the cache. But it is mandatory to
-explicitly use a filter line to enable the HTTP compression when at least one
-filter other than the cache or the fcgi-app is used for the same
-listener/frontend/backend. This is important to know the filters evaluation
-order.
+ - gpc0_rate(<period>) [12 bytes]
+ This is the increment rate of the first General Purpose Counter
+ over a period. It is a positive 32-bit integer which may
+ be used for anything. Just like <gpc0>, it counts events, but
+ instead of keeping a cumulative number, it maintains the rate at
+ which the counter is incremented. Most of the time it will be used
+ to measure the frequency of occurrence of certain events
+ (e.g. requests to a specific URL).
-See also : "compression", section 9.4 about the cache filter and section 9.5
- about the fcgi-app filter.
+ - gpc1 [4 bytes]
+ This is the second General Purpose Counter. It is a positive
+ 32-bit integer which may be used for anything. Most of the
+ time it will be used to put a special tag on some entries, for
+ instance to note that a specific behavior was detected and must be
+ known for future matches.
+ - gpc1_rate(<period>) [12 bytes]
+ This is the increment rate of the second General Purpose Counter
+ over a period. It is a positive 32-bit integer which may
+ be used for anything. Just like <gpc1>, it counts events, but
+ instead of keeping a cumulative number, it maintains the rate at
+ which the counter is incremented. Most of the time it will be used
+ to measure the frequency of occurrence of certain events
+ (e.g. requests to a specific URL).
-9.3. Stream Processing Offload Engine (SPOE)
---------------------------------------------
+ - gpt(<nb>) [4 * <nb> bytes]
+ This is an array of <nb> General Purpose Tags elements. This is an
+ array of positive 32-bit integers which may be used for anything.
+ Most of the time they will be used to put special tags on some
+ entries, for instance to note that a specific behavior was
+ detected and must be known for future matches. This array is
+ limited to a maximum of 100 elements: gpt(100) allowing the
+ storage of gpt0 to gpt99, to ensure that the build of a peer
+ update message can fit into the buffer. The array cannot contain
+ less than 1 element: use gpt(1) if you want to store only the
+ tag gpt0. Users should take into consideration that a large number
+ of counters will increase the data size and the traffic load using
+ peers protocol since all data/counters are pushed each time any of
+ them is updated. This data_type will exclude the usage of the
+ legacy data_type 'gpt0' on the same table. Using the 'gpt' array
+ data_type, all 'gpt0' related fetches and actions will apply to
+ the first element of this array.
-filter spoe [engine <name>] config <file>
+ - gpt0 [4 bytes]
+ This is the first General Purpose Tag. It is a positive 32-bit
+ integer which may be used for anything. Most of the time it will
+ be used to put a special tag on some entries, for instance to note
+ that a specific behavior was detected and must be known for future
+ matches.
- Arguments :
+ - http_req_cnt [4 bytes]
+ This is the HTTP request Count. It is a positive 32-bit integer
+ which counts the absolute number of HTTP requests received from
+ clients which matched this entry. It does not matter whether they
+ are valid requests or not. Note that this is different from
+ sessions when keep-alive is used on the client side.
- <name> is the engine name that will be used to find the right scope in
- the configuration file. If not provided, all the file will be
- parsed.
+ - http_req_rate(<period>) [12 bytes]
+ This is a request frequency counter. It takes an integer parameter
+ <period> which indicates in milliseconds the length of the period
+ over which the average is measured. It reports the average HTTP
+ request rate over that period, in requests per period. The result
+ is an integer which can be matched using ACLs. It does not matter
+ whether they are valid requests or not. Note that this is
+ different from sessions when keep-alive is used on the client
+ side.
- <file> is the path of the engine configuration file. This file can
- contain configuration of several engines. In this case, each
- part must be placed in its own scope.
+ - http_err_cnt [4 bytes]
+ This is the HTTP request Error Count. It is a positive 32-bit
+ integer which counts the absolute number of HTTP requests errors
+ induced by clients which matched this entry. Errors are counted on
+ invalid and truncated requests, as well as on denied or tarpitted
+ requests, and on failed authentications. If the server responds
+ with 4xx, then the request is also counted as an error since it's
+ an error triggered by the client (e.g. vulnerability scan).
-The Stream Processing Offload Engine (SPOE) is a filter communicating with
-external components. It allows the offload of some specifics processing on the
-streams in tiered applications. These external components and information
-exchanged with them are configured in dedicated files, for the main part. It
-also requires dedicated backends, defined in HAProxy configuration.
+ - http_err_rate(<period>) [12 bytes]
+ This is an HTTP request frequency counter. It takes an integer
+ parameter <period> which indicates in milliseconds the length of
+ the period over which the average is measured. It reports the
+ average HTTP request error rate over that period, in requests per
+ period (see http_err_cnt above for what is accounted as an
+ error). The result is an integer which can be matched using ACLs.
-SPOE communicates with external components using an in-house binary protocol,
-the Stream Processing Offload Protocol (SPOP).
+ - http_fail_cnt [4 bytes]
+ This is the HTTP response Failure Count. It is a positive 32-bit
+ integer which counts the absolute number of HTTP response failures
+ induced by servers which matched this entry. Errors are counted on
+ invalid and truncated responses, as well as any 5xx response other
+ than 501 or 505. It is intended to be combined with path or URI
+ to detect service failures.
-When the SPOE is used on a stream, a dedicated stream is spawned to handle the
-communication with the external component. The main stream is the parent stream
-of this "SPOE" stream. It means it is possible to retrieve variables of the
-main stream from the "SPOE" stream. See section 2.8 about variables for
-details.
+ - http_fail_rate(<period>) [12 bytes]
+ This is an HTTP response failure frequency counter. It takes an
+ integer parameter <period> which indicates in milliseconds the
+ length of the period over which the average is measured. It
+ reports the average HTTP response failure rate over that period,
+ in requests per period (see http_fail_cnt above for what is
+ accounted as a failure). The result is an integer which can be
+ matched using ACLs.
-For all information about the SPOE configuration and the SPOP specification, see
-"doc/SPOE.txt".
+ - server_id [4 bytes]
+ This is an integer which holds the numeric ID of the server a
+ request was assigned to. It is used by the "stick match", "stick
+ store", and "stick on" rules. It is automatically enabled when
+ referenced. It is important to understand that stickiness based on
+ learning information has some limitations, including the fact that
+ all learned associations are lost upon restart unless peers are
+ properly configured to transfer such information upon restart
+ (recommended). In general it can be good as a complement to other
+ stickiness mechanisms but not always as the sole mechanism.
-9.4. Cache
-----------
+ - sess_cnt [4 bytes]
+ This is the Session Count. It is a positive 32-bit integer which
+ counts the absolute number of sessions received from clients which
+ matched this entry. A session is a connection that was accepted by
+ the layer 4 rules ("tcp-request connection").
-filter cache <name>
+ - sess_rate(<period>) [12 bytes]
+ This is a session frequency counter. It takes an integer parameter
+ <period> which indicates in milliseconds the length of the period
+ over which the average is measured. It reports the average
+ incoming session rate over that period, in sessions per
+ period. The result is an integer which can be matched using ACLs.
- Arguments :
+Example:
+ # Keep track of counters of up to 1 million IP addresses over 5 minutes
+ # and store a general purpose counter and the average connection rate
+ # computed over a sliding window of 30 seconds.
+ stick-table type ip size 1m expire 5m store gpc0,conn_rate(30s)
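+
+The counters above are typically used with "track-sc" rules and ACLs. As an
+illustrative sketch only (the frontend name "fe_web" is hypothetical):
+
+Example:
+    # Track each source address and reject, with status 429, those whose
+    # http_req_rate over the last 10 seconds exceeds 100 requests.
+    frontend fe_web
+        bind *:80
+        mode http
+        stick-table type ip size 1m expire 10m store http_req_rate(10s)
+        http-request track-sc0 src
+        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }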
- <name> is name of the cache section this filter will use.
+See also : "stick match", "stick on", "stick store-request", "track-sc",
+ section 2.5 about time format, section 11.2 about peers, section 9.7
+ about bandwidth limitations, and section 7 about ACLs.
-The cache uses a filter to store cacheable responses. The HTTP rules
-"cache-store" and "cache-use" must be used to define how and when to use a
-cache. By default the corresponding filter is implicitly defined. And when no
-other filters than fcgi-app or compression are used, it is enough. In such
-case, the compression filter is always evaluated after the cache filter. But it
-is mandatory to explicitly use a filter line to use a cache when at least one
-filter other than the compression or the fcgi-app is used for the same
-listener/frontend/backend. This is important to know the filters evaluation
-order.
-See also : section 9.2 about the compression filter, section 9.5 about the
- fcgi-app filter and section 6 about cache.
+11.2. Peers declaration
+-----------------------
+It is possible to propagate entries of any data-types in stick-tables between
+several HAProxy instances over TCP connections in a multi-master fashion. Each
+instance pushes its local updates and insertions to remote peers. The pushed
+values overwrite remote ones without aggregation.
-9.5. Fcgi-app
--------------
+One exception is the data type "conn_cur" which is never learned from peers by
+default as it is supposed to reflect local values. Earlier versions used to
+synchronize it by default which was known to cause negative values in active-
+active setups, and always-growing values upon reloads or active-passive
+switches because the local value would reflect more connections than locally
+present. However there are some setups where it could be relevant to learn
+this value from peers, for instance when the table is a passive remote table
+solely used to learn/monitor data from it without relying on it for write-
+oriented operations or updates. To achieve this, the "recv-only" keyword can
+be added on the table declaration. In any case, the "conn_cur" info is always
+pushed so that monitoring systems can watch it.
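+
+The passive use case described above may be sketched as follows (peer names
+and addresses are hypothetical):
+
+Example:
+    # This table is a passive remote table: "conn_cur" is learned from
+    # remote peers thanks to "recv-only" instead of being kept purely local.
+    peers mypeers
+        peer haproxy1 192.168.0.1:1024
+        peer haproxy2 192.168.0.2:1024
+        table mon type ip size 100k expire 5m store conn_cur recv-only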
-filter fcgi-app <name>
+Interrupted exchanges are automatically detected and recovered from the last
+known point. In addition, during a soft restart, the old process connects to
+the new one using such a TCP connection to push all its entries before the new
+process tries to connect to other peers. That ensures very fast replication
+during a reload, it typically takes a fraction of a second even for large
+tables.
- Arguments :
+Note that Server IDs are used to identify servers remotely, so it is important
+that configurations look similar or at least that the same IDs are forced on
+each server on all participants.
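+
+One way to achieve this is to force the IDs explicitly with the "id" server
+parameter; backend and server names below are hypothetical:
+
+Example:
+    # Forcing identical IDs on all participants ensures that "server_id"
+    # entries learned from a peer designate the same servers everywhere.
+    backend mybackend
+        server app1 192.168.0.30:80 id 1
+        server app2 192.168.0.31:80 id 2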
- <name> is name of the fcgi-app section this filter will use.
+peers <peersect>
+ Creates a new peer list with name <peersect>. It is an independent section,
+ which is referenced by one or more stick-tables.
-The FastCGI application uses a filter to evaluate all custom parameters on the
-request path, and to process the headers on the response path. the <name> must
-reference an existing fcgi-app section. The directive "use-fcgi-app" should be
-used to define the application to use. By default the corresponding filter is
-implicitly defined. And when no other filters than cache or compression are
-used, it is enough. But it is mandatory to explicitly use a filter line to a
-fcgi-app when at least one filter other than the compression or the cache is
-used for the same backend. This is important to know the filters evaluation
-order.
+bind [<address>]:port [param*]
+bind /<path> [param*]
+ Defines the binding parameters of the local peer of this "peers" section.
+ Such lines are not supported with "peer" line in the same "peers" section.
-See also: "use-fcgi-app", section 9.2 about the compression filter, section 9.4
- about the cache filter and section 10 about FastCGI application.
+disabled
+ Disables a peers section. It disables both listening and any synchronization
+ related to this section. This is provided to disable synchronization of stick
+ tables without having to comment out all "peers" references.
+default-bind [param*]
+ Defines the binding parameters for the local peer, except its address.
-9.6. OpenTracing
-----------------
+default-server [param*]
+ Changes the default options for a server in a "peers" section.
-The OpenTracing filter adds native support for using distributed tracing in
-HAProxy. This is enabled by sending an OpenTracing compliant request to one
-of the supported tracers such as Datadog, Jaeger, Lightstep and Zipkin tracers.
-Please note: tracers are not listed by any preference, but alphabetically.
+ Arguments:
+ <param*> is a list of parameters for this server. The "default-server"
+ keyword accepts a large number of options and has a complete
+ section dedicated to it. In a peers section, the transport
+ parameters of a "default-server" line are supported. Please refer
+ to section 5 for more details, and the "server" keyword below in
+ this section for some of the restrictions.
-This feature is only enabled when HAProxy was built with USE_OT=1.
+ See also: "server" and section 5 about server options
-The OpenTracing filter activation is done explicitly by specifying it in the
-HAProxy configuration. If this is not done, the OpenTracing filter in no way
-participates in the work of HAProxy.
+enabled
+ This re-enables a peers section which was previously disabled via the
+ "disabled" keyword.
-filter opentracing [id <id>] config <file>
+log <target> [len <length>] [format <format>] [sample <ranges>:<sample_size>]
+ <facility> [<level> [<minlevel>]]
+ "peers" sections support the same "log" keyword as for the proxies to
+ log information about the "peers" listener. See "log" option for proxies for
+ more details.
- Arguments :
+peer <peername> [<address>]:port [param*]
+peer <peername> /<path> [param*]
+ Defines a peer inside a peers section.
+ If <peername> is set to the local peer name (by default hostname, or forced
+ using "-L" command line option or "localpeer" global configuration setting),
+ HAProxy will listen for incoming remote peer connection on the provided
+ address. Otherwise, the address defines where to connect to in order to join
+ the remote peer, and <peername> is used at the protocol level to identify and
+ validate the remote peer on the server side.
- <id> is the OpenTracing filter id that will be used to find the
- right scope in the configuration file. If no filter id is
- specified, 'ot-filter' is used as default. If scope is not
- specified in the configuration file, it applies to all defined
- OpenTracing filters.
+ During a soft restart, the local peer address is used by the old instance to
+ connect to the new one and initiate a complete replication (teaching process).
- <file> is the path of the OpenTracing configuration file. The same
- file can contain configurations for multiple OpenTracing
- filters simultaneously. In that case we do not need to define
- scope so the same configuration applies to all filters or each
- filter must have its own scope defined.
+ It is strongly recommended to have the exact same peers declaration on all
+ peers and to only rely on the "-L" command line argument or the "localpeer"
+ global configuration setting to change the local peer name. This makes it
+ easier to maintain coherent configuration files across all peers.
-More detailed documentation related to the operation, configuration and use
-of the filter can be found in the addons/ot directory.
+ You may want to reference some environment variables in the address
+ parameter, see section 2.3 about environment variables.
-Note: The OpenTracing filter shouldn't be used for new designs as OpenTracing
- itself is no longer maintained nor supported by its authors. A
- replacement filter base on OpenTelemetry is currently under development
- and is expected to be ready around HAProxy 3.2. As such OpenTracing will
- be deprecated in 3.3 and removed in 3.5.
+ Note: "peer" keyword may transparently be replaced by "server" keyword (see
+ "server" keyword explanation below).
+server <peername> [<address>:<port>] [param*]
+server <peername> [/<path>] [param*]
+ As previously mentioned, "peer" keyword may be replaced by "server" keyword
+ with a support for all "server" parameters found in 5.2 paragraph that are
+ related to transport settings. If the underlying peer is local, the address
+ parameter must not be present; it must be provided on a "bind" line (see
+ "bind" keyword of this "peers" section).
-9.7. Bandwidth limitation
---------------------------
+ A number of "server" parameters are irrelevant for "peers" sections. Peers by
+ nature do not support dynamic host name resolution nor health checks, hence
+ parameters like "init_addr", "resolvers", "check", "agent-check", or "track"
+ are not supported. Similarly, there is no load balancing nor stickiness, thus
+ parameters such as "weight" or "cookie" have no effect.
-filter bwlim-in <name> default-limit <size> default-period <time> [min-size <sz>]
-filter bwlim-out <name> default-limit <size> default-period <time> [min-size <sz>]
-filter bwlim-in <name> limit <size> key <pattern> [table <table>] [min-size <sz>]
-filter bwlim-out <name> limit <size> key <pattern> [table <table>] [min-size <sz>]
+ Example:
+ # The old way.
+ peers mypeers
+ peer haproxy1 192.168.0.1:1024
+ peer haproxy2 192.168.0.2:1024
+ peer haproxy3 10.2.0.1:1024
- Arguments :
+ backend mybackend
+ mode tcp
+ balance roundrobin
+ stick-table type ip size 20k peers mypeers
+ stick on src
- <name> is the filter name that will be used by 'set-bandwidth-limit'
- actions to reference a specific bandwidth limitation filter.
+ server srv1 192.168.0.30:80
+ server srv2 192.168.0.31:80
- <size> is max number of bytes that can be forwarded over the period.
- The value must be specified for per-stream and shared bandwidth
- limitation filters. It follows the HAProxy size format and is
- expressed in bytes.
+ Example:
+ peers mypeers
+ bind 192.168.0.1:1024 ssl crt mycerts/pem
+ default-server ssl verify none
+ server haproxy1 #local peer
+ server haproxy2 192.168.0.2:1024
+ server haproxy3 10.2.0.1:1024
- <pattern> is a sample expression rule as described in section 7.3. It
- describes what elements will be analyzed, extracted, combined,
- and used to select which table entry to update the counters. It
- must be specified for shared bandwidth limitation filters only.
+shards <shards>
- <table> is an optional table to be used instead of the default one,
- which is the stick-table declared in the current proxy. It can
- be specified for shared bandwidth limitation filters only.
+ In some configurations, one would like to distribute the stick-table contents
+ to some peers in place of sending all the stick-table contents to each peer
+ declared in the "peers" section. In such cases, "shards" specifies the
+ number of peers involved in this stick-table contents distribution.
+ See also "shard" server parameter.
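+
+  As an illustrative sketch only (peer names and addresses are hypothetical),
+  each peer is assigned to one of the shards with the "shard" server
+  parameter:
+
+  Example:
+      # Distribute the table contents over 3 shards instead of pushing
+      # everything to every peer.
+      peers mypeers
+          shards 3
+          bind 192.168.0.1:1024
+          server haproxy1 shard 1   # local peer
+          server haproxy2 192.168.0.2:1024 shard 2
+          server haproxy3 10.2.0.1:1024 shard 3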
- <time> is the default time period used to evaluate the bandwidth
- limitation rate. It can be specified for per-stream bandwidth
- limitation filters only. It follows the HAProxy time format and
- is expressed in milliseconds.
+table <tablename> type {ip | integer | string [len <length>] | binary [len <length>]}
+ size <size> [expire <expire>] [write-to <wtable>] [nopurge] [store <data_type>]*
+ [recv-only]
- <min-size> is the optional minimum number of bytes forwarded at a time by
- a stream excluding the last packet that may be smaller. This
- value can be specified for per-stream and shared bandwidth
- limitation filters. It follows the HAProxy size format and is
- expressed in bytes.
+ Configure a stickiness table for the current section. This line is parsed
+ exactly the same way as the "stick-table" keyword in other sections, except
+ for the "peers" argument which is not required here and with an additional
+ mandatory first parameter to designate the stick-table. Contrary to other
+ sections, there may be several "table" lines in "peers" sections (see also
+ the complete definition of the "table" and "stick-table" keywords in
+ section 11.1 above).
-Bandwidth limitation filters should be used to restrict the data forwarding
-speed at the stream level. By extension, such filters limit the network
-bandwidth consumed by a resource. Several bandwidth limitation filters can be
-used. For instance, it is possible to define a limit per source address to be
-sure a client will never consume all the network bandwidth, thereby penalizing
-other clients, and another one per stream to be able to fairly handle several
-connections for a given client.
+ Also be aware of the fact that "peers" sections have their own stick-table
+ namespaces to avoid collisions between stick-table names identical in
+ different "peers" sections. This is internally handled by prepending the
+ "peers" section name to the stick-table name, followed by a '/' character.
+ If somewhere else in the configuration file you have to refer to such
+ stick-tables declared in "peers" sections you must use the prefixed version
+ of the stick-table name as follows:
-The definition order of these filters is important. If several bandwidth
-filters are enabled on a stream, the filtering will be applied in their
-definition order. It is also important to understand the definition order of
-the other filters have an influence. For instance, depending on the HTTP
-compression filter is defined before or after a bandwidth limitation filter,
-the limit will be applied on the compressed payload or not. The same is true
-for the cache filter.
+ peers mypeers
+ peer A ...
+ peer B ...
+ table t1 ...
-There are two kinds of bandwidth limitation filters. The first one enforces a
-default limit and is applied per stream. The second one uses a stickiness table
-to enforce a limit equally divided between all streams sharing the same entry in
-the table.
+ frontend fe1
+ tcp-request content track-sc0 src table mypeers/t1
-In addition, for a given filter, depending on the filter keyword used, the
-limitation can be applied on incoming data, received from the client and
-forwarded to a server, or on outgoing data, received from a server and sent to
-the client. To apply a limit on incoming data, "bwlim-in" keyword must be
-used. To apply it on outgoing data, "bwlim-out" keyword must be used. In both
-cases, the bandwidth limitation is applied on forwarded data, at the stream
-level.
+ It is also this prefixed version of the stick-table name which must be
+ used to refer to stick-tables through the CLI.
-The bandwidth limitation is applied at the stream level and not at the
-connection level. For multiplexed protocols (H2, H3 and FastCGI), the streams
-of the same connection may have different limits.
+ Regarding the peers protocol, as only peers belonging to the same section
+ may communicate with each other, there is no need for such a distinction.
+ Several "peers" sections may declare stick-tables with the same name.
+ It is the shorter version of the stick-table name which is sent over the
+ network, prefixed only with a '/' character to avoid name collisions
+ between stick-tables declared as backends and stick-tables declared in
+ "peers" sections, as in the following unusual but supported configuration:
-For a per-stream bandwidth limitation filter, default period and limit must be
-defined. As their names suggest, they are the default values used to setup the
-bandwidth limitation rate for a stream. However, for this kind of filter and
-only this one, it is possible to redefine these values using sample expressions
-when the filter is enabled with a TCP/HTTP "set-bandwidth-limit" action.
+ peers mypeers
+ peer A ...
+ peer B ...
+ table t1 type string size 10m store gpc0
-For a shared bandwidth limitation filter, depending on whether it is applied on
-incoming or outgoing data, the stickiness table used must store the
-corresponding bytes rate information. "bytes_in_rate(<period>)" counter must be
-stored to limit incoming data and "bytes_out_rate(<period>)" counter must be
-used to limit outgoing data.
+ backend t1
+ stick-table type string size 10m store gpc0 peers mypeers
-Finally, it is possible to set the minimum number of bytes that a bandwidth
-limitation filter can forward at a time for a given stream. It should be used
-to not forward too small amount of data, to reduce the CPU usage. It must
-carefully be defined. Too small, a value can increase the CPU usage. Too high,
-it can increase the latency. It is also highly linked to the defined bandwidth
-limit. If it is too close to the bandwidth limit, some pauses may be
-experienced to not exceed the limit because too many bytes will be consumed at
-a time. It is highly dependent on the filter configuration. A good idea is to
-start with something around 2 TCP MSS, typically 2896 bytes, and tune it after
-some experimentations.
+ Here the "t1" table declared in the "mypeers" section has "mypeers/t1" as
+ its global name, while the "t1" table declared as a backend keeps "t1" as
+ its global name. But at the peers protocol level, the former is named
+ "/t1" and the latter remains "t1".
- Example:
- frontend http
- bind *:80
- mode http
+12. Other sections
+------------------
- # If this filter is enabled, the stream will share the download limit
- # of 10m/s with all other streams with the same source address.
- filter bwlim-out limit-by-src key src table limit-by-src limit 10m
+The sections described below are less commonly used and usually support only a
+few parameters. There is no implicit relation between any of them. They're all
+started using a single keyword. None of them is permitted before a "global"
+section. The support for some of them might depend on build options
+(e.g. anything SSL-related).
- # If this filter is enabled, the stream will be limited to download at 1m/s,
- # independently of all other streams.
- filter bwlim-out limit-by-strm default-limit 1m default-period 1s
+12.1. Traces
+------------
- # Limit all streams to 1m/s (the default limit) and those accessing the
- # internal API to 100k/s. Limit each source address to 10m/s. The shared
- # limit is applied first. Both are limiting the download rate.
- http-request set-bandwidth-limit limit-by-strm
- http-request set-bandwidth-limit limit-by-strm limit 100k if { path_beg /internal }
- http-request set-bandwidth-limit limit-by-src
- ...
+For debugging purposes, it is possible to activate traces on an HAProxy
+subsystem. This will dump debug messages about a specific subsystem. It is a
+very powerful tool to diagnose issues. Traces can be dynamically configured via
+the CLI. It is also possible to predefine some settings in the configuration
+file, in dedicated "traces" sections. More details about traces can be found in
+the management guide. It remains a developer tool used during complex
+debugging sessions. It is pretty verbose and has a cost, so use it with
+caution. And because it is a developer tool, there is no guarantee of
+backward compatibility of this section.
- backend limit-by-src
- # The stickiness table used by <limit-by-src> filter
- stick-table type ip size 1m expire 3600s store bytes_out_rate(1s)
+traces
+ Starts a new traces section. One or multiple "traces" sections may be
+ used. All directives are evaluated in the declaration order, the last ones
+ overriding previous ones.
-See also : "tcp-request content set-bandwidth-limit",
- "tcp-response content set-bandwidth-limit",
- "http-request set-bandwidth-limit" and
- "http-response set-bandwidth-limit".
+trace <source> <args...>
+ Configures one "trace" subsystem. Each of them can be found in the management
+ manual, and follows the exact same syntax. Any output that the "trace"
+ command would produce will be emitted during the parsing step of the
+ section. Most of the time these will be errors and warnings, but certain
+ incomplete commands might list permissible choices. This command is not meant
+ for regular use, it will generally only be suggested by developers during
+ complex debugging sessions. It is important to keep in mind that depending on
+ the trace level and details, enabling traces can severely degrade the global
+ performance. Please refer to the management manual for the statements syntax.
-10. FastCGI applications
--------------------------
+ Example:
+ ring buf1
+ size 10485760 # 10MB
+ format timed
+ backing-file /tmp/h1.traces
-HAProxy is able to send HTTP requests to Responder FastCGI applications. This
-feature was added in HAProxy 2.1. To do so, servers must be configured to use
-the FastCGI protocol (using the keyword "proto fcgi" on the server line) and a
-FastCGI application must be configured and used by the backend managing these
-servers (using the keyword "use-fcgi-app" into the proxy section). Several
-FastCGI applications may be defined, but only one can be used at a time by a
-backend.
+ ring buf2
+ size 10485760 # 10MB
+ format timed
+ backing-file /tmp/h2.traces
-HAProxy implements all features of the FastCGI specification for Responder
-application. Especially it is able to multiplex several requests on a simple
-connection.
+ traces
+ trace h1 sink buf1 level developer verbosity complete start now
+ trace h2 sink buf2 level developer verbosity complete start now
-10.1. Setup
------------
+12.2. Userlists
+---------------
-10.1.1. Fcgi-app section
---------------------------
+It is possible to control access to frontend/backend/listen sections or to
+http stats by allowing only authenticated and authorized users. To do this,
+it is required to create at least one userlist and to define users.
-fcgi-app <name>
- Declare a FastCGI application named <name>. To be valid, at least the
- document root must be defined.
+userlist <listname>
+ Creates a new userlist with the name <listname>. Many independent userlists
+ can be used to store authentication & authorization data for independent
+ customers.
-acl <aclname> <criterion> [flags] [operator] <value> ...
- Declare or complete an access list.
+group <groupname> [users <user>,<user>,(...)]
+ Adds group <groupname> to the current userlist. It is also possible to
+ attach users to this group by using a comma separated list of names
+ preceded by the "users" keyword.
- See "acl" keyword in section 4.2 and section 7 about ACL usage for
- details. ACLs defined for a FastCGI application are private. They cannot be
- used by any other application or by any proxy. In the same way, ACLs defined
- in any other section are not usable by a FastCGI application. However,
- Pre-defined ACLs are available.
+user <username> [password|insecure-password <password>]
+ [groups <group>,<group>,(...)]
+ Adds user <username> to the current userlist. Both secure (encrypted) and
+ insecure (unencrypted) passwords can be used. Encrypted passwords are
+ evaluated using the crypt(3) function, so depending on the system's
+ capabilities, different algorithms are supported. For example, modern Glibc
+ based Linux systems support MD5, SHA-256, SHA-512, and, of course, the
+ classic DES-based method of encrypting passwords.
-docroot <path>
- Define the document root on the remote host. <path> will be used to build
- the default value of FastCGI parameters SCRIPT_FILENAME and
- PATH_TRANSLATED. It is a mandatory setting.
+ Attention: Be aware that using encrypted passwords might cause significantly
+ increased CPU usage, depending on the number of requests, and the algorithm
+ used. For any of the hashed variants, the password for each request must
+ be processed through the chosen algorithm, before it can be compared to the
+ value specified in the config file. Most current algorithms are deliberately
+ designed to be expensive to compute to achieve resistance against brute
+ force attacks. They do not simply salt/hash the clear text password once,
+ but thousands of times. This can quickly become a major factor in HAProxy's
+ overall CPU consumption, and can even lead to application crashes!
-index <script-name>
- Define the script name that will be appended after an URI that ends with a
- slash ("/") to set the default value of the FastCGI parameter SCRIPT_NAME. It
- is an optional setting.
+ To address the high CPU usage of hash functions, one approach is to reduce
+ the number of rounds of the hash function (SHA family algorithms) or decrease
+ the "cost" of the function, if the algorithm supports it.
- Example :
- index index.php
+ As a side note, musl (e.g. Alpine Linux) implementations are known to be
+ slower than their glibc counterparts when calculating hashes, so you might
+ want to consider this aspect too.
-log-stderr global
-log-stderr <target> [len <length>] [format <format>]
- [sample <ranges>:<sample_size>] <facility> [<level> [<minlevel>]]
- Enable logging of STDERR messages reported by the FastCGI application.
+ Example:
+ userlist L1
+ group G1 users tiger,scott
+ group G2 users xdb,scott
- See "log" keyword in section 4.2 for details. It is an optional setting. By
- default STDERR messages are ignored.
+ user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91
+ user scott insecure-password elgato
+ user xdb insecure-password hello
-pass-header <name> [ { if | unless } <condition> ]
- Specify the name of a request header which will be passed to the FastCGI
- application. It may optionally be followed by an ACL-based condition, in
- which case it will only be evaluated if the condition is true.
+ userlist L2
+ group G1
+ group G2
- Most request headers are already available to the FastCGI application,
- prefixed with "HTTP_". Thus, this directive is only required to pass headers
- that are purposefully omitted. Currently, the headers "Authorization",
- "Proxy-Authorization" and hop-by-hop headers are omitted.
+ user tiger password $6$k6y3o.eP$JlKBx(...)xHSwRv6J.C0/D7cV91 groups G1
+ user scott insecure-password elgato groups G1,G2
+ user xdb insecure-password hello groups G2
- Note that the headers "Content-type" and "Content-length" are never passed to
- the FastCGI application because they are already converted into parameters.
+ Please note that both lists are functionally identical.
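+
+ As an illustration, a userlist can then be referenced from a proxy section
+ to enforce HTTP basic authentication, using the "http_auth" ACL fetch and
+ the "http-request auth" rule. The frontend name and realm below are
+ arbitrary; this is only a sketch based on userlist "L1" above:
+
+ Example:
+ frontend www
+ bind *:80
+ mode http
+ acl auth_ok http_auth(L1)
+ http-request auth realm MySite unless auth_ok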
-path-info <regex>
- Define a regular expression to extract the script-name and the path-info from
- the URL-decoded path. Thus, <regex> may have two captures: the first one to
- capture the script name and the second one to capture the path-info. The
- first one is mandatory, the second one is optional. This way, it is possible
- to extract the script-name from the path ignoring the path-info. It is an
- optional setting. If it is not defined, no matching is performed on the
- path. and the FastCGI parameters PATH_INFO and PATH_TRANSLATED are not
- filled.
- For security reason, when this regular expression is defined, the newline and
- the null characters are forbidden from the path, once URL-decoded. The reason
- to such limitation is because otherwise the matching always fails (due to a
- limitation one the way regular expression are executed in HAProxy). So if one
- of these two characters is found in the URL-decoded path, an error is
- returned to the client. The principle of least astonishment is applied here.
+12.3. Mailers
+-------------
- Example :
- path-info ^(/.+\.php)(/.*)?$ # both script-name and path-info may be set
- path-info ^(/.+\.php) # the path-info is ignored
+It is possible to send email alerts when the state of servers changes.
+If configured, email alerts are sent to each mailer that is configured
+in a mailers section. Email is sent to mailers through Lua (see
+examples/lua/mailers.lua).
-option get-values
-no option get-values
- Enable or disable the retrieve of variables about connection management.
+mailers <mailersect>
+ Creates a new mailer list with the name <mailersect>. It is an
+ independent section which is referenced by one or more proxies.
- HAProxy is able to send the record FCGI_GET_VALUES on connection
- establishment to retrieve the value for following variables:
+mailer <mailername> <ip>:<port>
+ Defines a mailer inside a mailers section.
- * FCGI_MAX_REQS The maximum number of concurrent requests this
- application will accept.
+ Example:
+ global
+ # mailers.lua file as provided in the git repository
+ # adjust path as needed
+ lua-load examples/lua/mailers.lua
- * FCGI_MPXS_CONNS "0" if this application does not multiplex connections,
- "1" otherwise.
+ mailers mymailers
+ mailer smtp1 192.168.0.1:587
+ mailer smtp2 192.168.0.2:587
- Some FastCGI applications does not support this feature. Some others close
- the connection immediately after sending their response. So, by default, this
- option is disabled.
+ backend mybackend
+ mode tcp
+ balance roundrobin
- Note that the maximum number of concurrent requests accepted by a FastCGI
- application is a connection variable. It only limits the number of streams
- per connection. If the global load must be limited on the application, the
- server parameters "maxconn" and "pool-max-conn" must be set. In addition, if
- an application does not support connection multiplexing, the maximum number
- of concurrent requests is automatically set to 1.
+ email-alert mailers mymailers
+ email-alert from test1@horms.org
+ email-alert to test2@horms.org
-option keep-conn
-no option keep-conn
- Instruct the FastCGI application to keep the connection open or not after
- sending a response.
+ server srv1 192.168.0.30:80
+ server srv2 192.168.0.31:80
- If disabled, the FastCGI application closes the connection after responding
- to this request. By default, this option is enabled.
+timeout mail <time>
+ Defines the time available for a mail/connection to be made and sent to
+ the mail server. If not defined, the default value is 10 seconds. To allow
+ for at least two SYN-ACK packets to be sent during the initial TCP handshake,
+ it is advised to keep this value above 4 seconds.
-option max-reqs <reqs>
- Define the maximum number of concurrent requests this application will
- accept.
+ Example:
+ mailers mymailers
+ timeout mail 20s
+ mailer smtp1 192.168.0.1:587
- This option may be overwritten if the variable FCGI_MAX_REQS is retrieved
- during connection establishment. Furthermore, if the application does not
- support connection multiplexing, this option will be ignored. By default set
- to 1.
+12.4. HTTP-errors
+-----------------
-option mpxs-conns
-no option mpxs-conns
- Enable or disable the support of connection multiplexing.
+It is possible to globally declare several groups of HTTP errors, to be
+imported afterwards in any proxy section. The same group may be referenced in
+several places and can be fully or partially imported.
- This option may be overwritten if the variable FCGI_MPXS_CONNS is retrieved
- during connection establishment. It is disabled by default.
+http-errors <name>
+ Create a new http-errors group with the name <name>. It is an independent
+ section that may be referenced by one or more proxies using its name.
-set-param <name> <fmt> [ { if | unless } <condition> ]
- Set a FastCGI parameter that should be passed to this application. Its
- value, defined by <fmt> must follows the Custom log format rules (see section
- 8.2.6 "Custom Log format"). It may optionally be followed by an ACL-based
- condition, in which case it will only be evaluated if the condition is true.
+errorfile <code> <file>
+ Associates the contents of a file with an HTTP error code.
+
+ Arguments :
+ <code> is the HTTP status code. Currently, HAProxy is capable of
+ generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410,
+ 425, 429, 500, 501, 502, 503, and 504.
- With this directive, it is possible to overwrite the value of default FastCGI
- parameters. If the value is evaluated to an empty string, the rule is
- ignored. These directives are evaluated in their declaration order.
+ <file> designates a file containing the full HTTP response. It is
+ recommended to follow the common practice of appending ".http" to
+ the filename so that people do not confuse the response with HTML
+ error pages, and to use absolute paths, since files are read
+ before any chroot is performed.
- Example :
- # PHP only, required if PHP was built with --enable-force-cgi-redirect
- set-param REDIRECT_STATUS 200
+ Please refer to the "errorfile" keyword in section 4 for details.
- set-param PHP_AUTH_DIGEST %[req.hdr(Authorization)]
+ Example:
+ http-errors website-1
+ errorfile 400 /etc/haproxy/errorfiles/site1/400.http
+ errorfile 404 /etc/haproxy/errorfiles/site1/404.http
+ errorfile 408 /dev/null # work around Chrome pre-connect bug
+ http-errors website-2
+ errorfile 400 /etc/haproxy/errorfiles/site2/400.http
+ errorfile 404 /etc/haproxy/errorfiles/site2/404.http
+ errorfile 408 /dev/null # work around Chrome pre-connect bug
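+
+ A declared group may then be imported from a proxy section with the
+ "errorfiles" keyword (see section 4.2), either entirely or for a subset of
+ status codes only. The frontend names below are arbitrary; this is only a
+ sketch based on the groups above:
+
+ Example:
+ frontend site1
+ mode http
+ errorfiles website-1 # import all files declared in the group
+
+ frontend site2
+ mode http
+ errorfiles website-2 404 # import only the 404 file from the group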
-10.1.2. Proxy section
----------------------
+12.5. Rings
+-----------
-use-fcgi-app <name>
- Define the FastCGI application to use for the backend.
+It is possible to globally declare ring-buffers, to be used as a target for
+log servers or traces.
- Arguments :
- <name> is the name of the FastCGI application to use.
+ring <ringname>
+ Creates a new ring-buffer with name <ringname>.
- This keyword is only available for HTTP proxies with the backend capability
- and with at least one FastCGI server. However, FastCGI servers can be mixed
- with HTTP servers. But except there is a good reason to do so, it is not
- recommended (see section 10.3 about the limitations for details). Only one
- application may be defined at a time per backend.
+backing-file <path>
+ This replaces the regular memory allocation by a RAM-mapped file to store the
+ ring. This can be useful for collecting traces or logs for post-mortem
+ analysis, without having to attach a slow client to the CLI. Newer contents
+ will automatically replace older ones so that the latest contents are always
+ available. The contents written to the ring will be visible in that file once
+ the process stops (most often they will even be seen very soon after but
+ there is no such guarantee since writes are not synchronous).
- Note that, once a FastCGI application is referenced for a backend, depending
- on the configuration some processing may be done even if the request is not
- sent to a FastCGI server. Rules to set parameters or pass headers to an
- application are evaluated.
+ When this option is used, the total storage area is reduced by the size of
+ the "struct ring" that starts at the beginning of the area, and that is
+ required to recover the area's contents. The file will be created with the
+ starting user's ownership, with mode 0600 and will be of the size configured
+ by the "size" directive. When the directive is parsed (thus even during
+ config checks), any existing non-empty file will first be renamed with the
+ extra suffix ".bak", and any previously existing file with suffix ".bak" will
+ be removed. This ensures that instant reload or restart of the process will
+ not wipe precious debugging information, and will leave time for an admin to
+ spot this new ".bak" file and to archive it if needed. As such, after a crash
+ the file designated by <path> will contain the freshest information, and if
+ the service is restarted, the "<path>.bak" file will have it instead. This
+ means that the total storage capacity required will be double of the ring
+ size. Failures to rotate the file are silently ignored, so placing the file
+ into a directory without write permissions will be sufficient to avoid the
+ backup file if not desired.
+ WARNING: there are stability and security implications in using this feature.
+ First, backing the ring to a slow device (e.g. physical hard drive) may cause
+ perceptible slowdowns during accesses, and possibly even panics if too many
+ threads compete for accesses. Second, an external process modifying the area
+ could cause the haproxy process to crash or to overwrite some of its own
+ memory with traces. Third, if the file system fills up before the ring,
+ writes to the ring may cause the process to crash.
-10.1.3. Example
----------------
+ The information present in this ring is structured and is NOT directly
+ readable using a text editor (even though most of it looks barely readable).
+ The output of this file is only intended for developers.
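+
+ As an illustration, a ring may be backed by a RAM-mapped file to preserve
+ the latest log messages across a crash for post-mortem analysis. This is
+ only a sketch; the ring name and the path are arbitrary:
+
+ Example:
+ ring post-mortem
+ size 10485760 # 10MB, doubled on disk by the ".bak" rotation
+ format timed
+ backing-file /var/lib/haproxy/post-mortem.ring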
- frontend front-http
- mode http
- bind *:80
- bind *:
+description <text>
+ The description is an optional description string of the ring. It will
+ appear on the CLI. By default, <name> is reused to fill this field.
- use_backend back-dynamic if { path_reg ^/.+\.php(/.*)?$ }
- default_backend back-static
+format <format>
+ Format used to store events into the ring buffer.
- backend back-static
- mode http
- server www A.B.C.D:80
+ Arguments:
+ <format> is the log format used when generating syslog messages. It may be
+ one of the following :
- backend back-dynamic
- mode http
- use-fcgi-app php-fpm
- server php-fpm A.B.C.D:9000 proto fcgi
+ iso A message containing only the ISO date, followed by the text.
+ The PID, process name and system name are omitted. This is
+ designed to be used with a local log server.
- fcgi-app php-fpm
- log-stderr global
- option keep-conn
+ local Analogous to the rfc3164 syslog message format except that the
+ hostname field is stripped. This is the default.
+ Note: option "log-send-hostname" switches the default to
+ rfc3164.
- docroot /var/www/my-app
- index index.php
- path-info ^(/.+\.php)(/.*)?$
+ raw A message containing only the text. The level, PID, date, time,
+ process name and system name are omitted. This is designed to be
+ used in containers or during development, where the severity
+ only depends on the file descriptor used (stdout/stderr).
+ rfc3164 The RFC3164 syslog message format.
+ (https://tools.ietf.org/html/rfc3164)
-10.2. Default parameters
-------------------------
+ rfc5424 The RFC5424 syslog message format.
+ (https://tools.ietf.org/html/rfc5424)
-A Responder FastCGI application has the same purpose as a CGI/1.1 program. In
-the CGI/1.1 specification (RFC3875), several variables must be passed to the
-script. So HAProxy set them and some others commonly used by FastCGI
-applications. All these variables may be overwritten, with caution though.
+ short A message containing only a level between angle brackets such as
+ '<3>', followed by the text. The PID, date, time, process name
+ and system name are omitted. This is designed to be used with a
+ local log server. This format is compatible with what the systemd
+ logger consumes.
- +-------------------+-----------------------------------------------------+
- | AUTH_TYPE | Identifies the mechanism, if any, used by HAProxy |
- | | to authenticate the user. Concretely, only the |
- | | BASIC authentication mechanism is supported. |
- | | |
- +-------------------+-----------------------------------------------------+
- | CONTENT_LENGTH | Contains the size of the message-body attached to |
- | | the request. It means only requests with a known |
- | | size are considered as valid and sent to the |
- | | application. |
- | | |
- +-------------------+-----------------------------------------------------+
- | CONTENT_TYPE | Contains the type of the message-body attached to |
- | | the request. It may not be set. |
- | | |
- +-------------------+-----------------------------------------------------+
- | DOCUMENT_ROOT | Contains the document root on the remote host under |
- | | which the script should be executed, as defined in |
- | | the application's configuration. |
- | | |
- +-------------------+-----------------------------------------------------+
- | GATEWAY_INTERFACE | Contains the dialect of CGI being used by HAProxy |
- | | to communicate with the FastCGI application. |
- | | Concretely, it is set to "CGI/1.1". |
- | | |
- +-------------------+-----------------------------------------------------+
- | PATH_INFO | Contains the portion of the URI path hierarchy |
- | | following the part that identifies the script |
- | | itself. To be set, the directive "path-info" must |
- | | be defined. |
- | | |
- +-------------------+-----------------------------------------------------+
- | PATH_TRANSLATED | If PATH_INFO is set, it is its translated version. |
- | | It is the concatenation of DOCUMENT_ROOT and |
- | | PATH_INFO. If PATH_INFO is not set, this parameters |
- | | is not set too. |
- | | |
- +-------------------+-----------------------------------------------------+
- | QUERY_STRING | Contains the request's query string. It may not be |
- | | set. |
- | | |
- +-------------------+-----------------------------------------------------+
- | REMOTE_ADDR | Contains the network address of the client sending |
- | | the request. |
- | | |
- +-------------------+-----------------------------------------------------+
- | REMOTE_USER | Contains the user identification string supplied by |
- | | client as part of user authentication. |
- | | |
- +-------------------+-----------------------------------------------------+
- | REQUEST_METHOD | Contains the method which should be used by the |
- | | script to process the request. |
- | | |
- +-------------------+-----------------------------------------------------+
- | REQUEST_URI | Contains the request's URI. |
- | | |
- +-------------------+-----------------------------------------------------+
- | SCRIPT_FILENAME | Contains the absolute pathname of the script. it is |
- | | the concatenation of DOCUMENT_ROOT and SCRIPT_NAME. |
- | | |
- +-------------------+-----------------------------------------------------+
- | SCRIPT_NAME | Contains the name of the script. If the directive |
- | | "path-info" is defined, it is the first part of the |
- | | URI path hierarchy, ending with the script name. |
- | | Otherwise, it is the entire URI path. |
- | | |
- +-------------------+-----------------------------------------------------+
- | SERVER_NAME | Contains the name of the server host to which the |
- | | client request is directed. It is the value of the |
- | | header "Host", if defined. Otherwise, the |
- | | destination address of the connection on the client |
- | | side. |
- | | |
- +-------------------+-----------------------------------------------------+
- | SERVER_PORT | Contains the destination TCP port of the connection |
- | | on the client side, which is the port the client |
- | | connected to. |
- | | |
- +-------------------+-----------------------------------------------------+
- | SERVER_PROTOCOL | Contains the request's protocol. |
- | | |
- +-------------------+-----------------------------------------------------+
- | SERVER_SOFTWARE | Contains the string "HAProxy" followed by the |
- | | current HAProxy version. |
- | | |
- +-------------------+-----------------------------------------------------+
- | HTTPS | Set to a non-empty value ("on") if the script was |
- | | queried through the HTTPS protocol. |
- | | |
- +-------------------+-----------------------------------------------------+
+ priority A message containing only a level plus syslog facility between
+ angle brackets such as '<63>', followed by the text. The PID,
+ date, time, process name and system name are omitted. This is
+ designed to be used with a local log server.
+ timed A message containing only a level between angle brackets such as
+ '<3>', followed by ISO date and by the text. The PID, process
+ name and system name are omitted. This is designed to be
+ used with a local log server.
-10.3. Limitations
-------------------
+maxlen <length>
+ The maximum length of an event message stored into the ring,
+ including formatted header. If an event message is longer than
+ <length>, it will be truncated to this length.
-The current implementation have some limitations. The first one is about the
-way some request headers are hidden to the FastCGI applications. This happens
-during the headers analysis, on the backend side, before the connection
-establishment. At this stage, HAProxy know the backend is using a FastCGI
-application but it don't know if the request will be routed to a FastCGI server
-or not. But to hide request headers, it simply removes them from the HTX
-message. So, if the request is finally routed to an HTTP server, it never see
-these headers. For this reason, it is not recommended to mix FastCGI servers
-and HTTP servers under the same backend.
+server <name> <address> [param*]
+ Used to configure a syslog TCP server to forward messages from the ring
+ buffer. This supports all "server" parameters found in section 5.2. Some of
+ these parameters are irrelevant for "ring" sections. Important point: there
+ is little reason to add more than one server to a ring, because all servers
+ will receive the exact same copy of the ring contents, and as such the ring
+ will progress at the speed of the slowest server. If one server does not
+ respond, it will prevent old messages from being purged and may block new
+ messages from being inserted into the ring. The proper way to send messages
+ to multiple servers is to use one distinct ring per log server, not to
+ attach multiple servers to the same ring. Note that the specific server
+ directive "log-proto" is used to set the protocol used to send messages.
-Similarly, the rules "set-param" and "pass-header" are evaluated during the
-request headers analysis. So the evaluation is always performed, even if the
-requests is finally forwarded to an HTTP server.
+size <size>
+ This is the optional size in bytes for the ring-buffer. Default value is
+ set to BUFSIZE.
-About the rules "set-param", when a rule is applied, a pseudo header is added
-into the HTX message. So, the same way than for HTTP header rewrites, it may
-fail if the buffer is full. The rules "set-param" will compete with
-"http-request" ones.
+timeout connect <timeout>
+ Set the maximum time to wait for a connection attempt to a server to succeed.
-Finally, all FastCGI params and HTTP headers are sent into a unique record
-FCGI_PARAM. Encoding of this record must be done in one pass, otherwise a
-processing error is returned. It means the record FCGI_PARAM, once encoded,
-must not exceeds the size of a buffer. However, there is no reserve to respect
-here.
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+timeout server <timeout>
+ Set the maximum time for pending data staying in the output buffer.
-11. Stick-tables and Peers
---------------------------
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
-Stick-tables in HAProxy are a mechanism which permits to associate a certain
-number of information and metrics with a key of a certain type, and this for a
-certain duration after the last update. This can be seen as a multicolumn line
-in a table, where the line number is defined by the key value, and the columns
-all represent distinct criteria.
+ Example:
+ global
+ log ring@myring local7
-Stick-tables were originally designed to store client-server stickiness
-information in order to maintain persistent sessions between these entities. A
-client would connect or send a request, this client would be identified via a
-discriminator (source address, cookie, URL parameter) and the chosen server
-would be stored in association with this discriminator in a stick table for a
-configurable duration so that subsequent accesses from the same client could
-automatically be routed to the same server, where the client had created its
-application session.
+ ring myring
+ description "My local buffer"
+ format rfc5424
+ maxlen 1200
+ size 32764
+ timeout connect 5s
+ timeout server 10s
+ server mysyslogsrv 127.0.0.1:6514 log-proto octet-count
-Nowadays, stick-tables can store more information than just a server number,
-elements such as activity metrics related to a specific client can be stored
-(request counts/rates, connection counts/rates, byte counts/rates etc), as well
-as some arbitrary event counters ("gpc" for "General Purpose Counters") and
-some tags to label a client with certain characteristics ("gpt" for "General
-Purpose Tag").
+12.6. Log forwarding
+--------------------
-Stick-tables may be referenced by the "stick" directives, which are used for
-client-server stickiness, by "track-sc" rules, which are used to describe what
-key to track in which table in order to collect metrics, as well as by a number
-of sample-fetch functions and converters which can perform an immediate lookup
-of a given key to retrieve a specific metric or data. The general principle is
-that updates to tables (gpt/gpc/metrics) as well as lookups of stickiness
-information refresh the accessed entry and postpone its expiration, while mere
-lookups from sample-fetch functions and converters only extract the data
-without postponing the entry's expiration.
+It is possible to declare one or multiple log forwarding sections.
+HAProxy will forward all received log messages to a list of log servers.
-In order for the mechanism to scale and to resist to HAProxy reloads and fail-
-over, it is possible to share stick-tables updates with other nodes called
-"peers" via the "Peers" mechanism described in section 11.2. In order to finely
-tune the communication with peers, it is possible to also decide that some
-tables only receive information from peers, or that updates from peers should
-instead be forwarded to a different table.
+log-forward <name>
+ Creates a new log forwarder proxy identified as <name>.
-Finally, stick-tables may be declared either in proxy sections (frontends,
-backends) using the "stick-table" keyword, where there may only be one per
-section and where they will get the name of that section, or in peers sections
-with the "table" keyword followed by the table's name, and which permits to
-declare multiple stick-tables in the same "peers" section. If multiple
-stick-tables are needed, usually the recommended solution is either to declare
-them in a peers section (in case they intend to be shared), or to create extra
-backend sections, each with only the "stick-table" definition in them.
+backlog <conns>
+ Gives hints to the system about the approximate listen backlog size desired
+ on incoming connection accepts.
+bind <addr> [param*]
+ Used to configure a stream log listener to receive messages to forward.
+ This supports the "bind" parameters found in section 5.1, including those
+ about SSL, but some statements such as "alpn" may be irrelevant for the
+ syslog protocol over TCP.
+ Those listeners support both "Octet Counting" and "Non-Transparent-Framing"
+ modes as defined in RFC 6587.
-11.1. stick-table declaration
------------------------------
+dgram-bind <addr> [param*]
+ Used to configure a datagram log listener to receive messages to forward.
+ Addresses must be in IPv4 or IPv6 form, followed by a port. This supports
+ some of the "bind" parameters found in section 5.1, among which
+ "interface", "namespace" or "transparent", the other ones being
+ silently ignored as irrelevant for the UDP/syslog case.
-The declaration of a stick-table in a proxy section ("frontend", "backend",
-"listen") and in "peers" sections is very similar, with the differences being
-that the one in the peers section requires a mandatory name and doesn't take a
-"peers" option.
+log global
+log <target> [len <length>] [format <format>] [sample <ranges>:<sample_size>]
+ <facility> [<level> [<minlevel>]]
+  Used to configure target log servers. See more details in the proxies
+  documentation.
+  If no format is specified, HAProxy tries to keep the incoming log format.
+  The configured facility is ignored, except if the incoming message does
+  not carry a facility but one is mandatory in the outgoing format.
+  If there is no timestamp available in the input format, but the field
+  exists in the output format, HAProxy will use the local date.
-In a "frontend", "backend" or "listen" section:
+ Example:
+ global
+ log stderr format iso local7
- stick-table type <type> size <size> [expire <expire>] [nopurge] [recv-only]
- [write-to <wtable>] [srvkey <srvkey>] [store <data_type>]*
- [brates-factor <factor>] [peers <peersect>]
+ ring myring
+ description "My local buffer"
+ format rfc5424
+ maxlen 1200
+ size 32764
+ timeout connect 5s
+ timeout server 10s
+ # syslog tcp server
+ server mysyslogsrv 127.0.0.1:514 log-proto octet-count
-In a "peers" section:
+  log-forward syslog-loadb
+ dgram-bind 127.0.0.1:1514
+ bind 127.0.0.1:1514
+ # all messages on stderr
+ log global
+ # all messages on local tcp syslog server
+ log ring@myring local0
+ # load balance messages on 4 udp syslog servers
+ log 127.0.0.1:10001 sample 1:4 local0
+ log 127.0.0.1:10002 sample 2:4 local0
+ log 127.0.0.1:10003 sample 3:4 local0
+ log 127.0.0.1:10004 sample 4:4 local0
- table <name> type <type> size <size> [expire <expire>] [nopurge] [recv-only]
- [write-to <wtable>] [srvkey <srvkey>] [store <data_type>]*
- [brates-factor <factor>]
+maxconn <conns>
+ Fix the maximum number of concurrent connections on a log forwarder.
+ 10 is the default.
-Arguments (mandatory ones first, then alphabetically sorted):
+timeout client <timeout>
+ Set the maximum inactivity time on the client side.
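+
+  The two settings above can be combined on a listener; a minimal sketch
+  (the address, section name and values are arbitrary):
+
+    log-forward limited
+        bind 127.0.0.1:3514
+        maxconn 100
+        timeout client 30s
+        log global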
- - type <type>
- This mandatory argument sets the key type to <type>, which
- usually is a single word but may also have its own arguments:
+option assume-rfc6587-ntf
+  Directs HAProxy to always treat incoming TCP log streams as using
+  non-transparent framing. This option simplifies the framing logic and
+  ensures consistent handling of messages, which is particularly useful
+  when dealing with improperly formed starting characters.
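+
+  The example below forces non-transparent framing on a TCP listener; a
+  minimal sketch (the address and section name are arbitrary):
+
+    log-forward tcplogs
+        bind 127.0.0.1:2514
+        option assume-rfc6587-ntf
+        log global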
- * ip This type should be avoided in favor of a more explicit one such
- as "ipv4" or "ipv6". Prior to version 3.2 it was the only way to
- configure IPv4. In 3.2, "ip" is an alias for "ipv4", and "ipv4"
- is preferred. In a future version, "ip" will instead correspond
- to "ipv6". It is only meant to ease the transition from pre-3.2
- to post-3.2.
+option dont-parse-log
+  Enables HAProxy to relay syslog messages without attempting to parse and
+  restructure them, which is useful for forwarding messages that may not
+  conform to traditional formats. This option should be used with the
+  "format raw" setting on destination log targets to ensure the original
+  message content is preserved.
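+
+  The example below relays incoming messages untouched to a remote syslog
+  server; a minimal sketch (the addresses are arbitrary):
+
+    log-forward rawlogs
+        dgram-bind 127.0.0.1:1514
+        option dont-parse-log
+        log 127.0.0.1:5514 format raw local0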
- * ipv4 A table declared with this type will only store IPv4 addresses.
- This form is very compact (about 50 bytes per entry) and allows
- very fast entry lookup and stores with almost no overhead. This
- is mainly used to store client source IP addresses.
+option host { replace | fill | keep | append }
+  Set the strategy applied in the log-forward section to the syslog
+  hostname field of outbound rfc3164 or rfc5424 messages.
- * ipv6 A table declared with "type ipv6" will only store IPv6 addresses.
- This form is very compact (about 60 bytes per entry) and allows
- very fast entry lookup and stores with almost no overhead. This
- is mainly used to store client source IP addresses.
+   replace If the input message already contains a value for the hostname
+           field, we replace it with the source IP address from the sender.
+           If the input message doesn't contain a value for the hostname
+           field (ie: '-' as input rfc5424 message or non-compliant rfc3164
+           or rfc5424 message), we use the source IP address from the
+           sender as the hostname field.
- * integer A table declared with "type integer" will store 32bit integers
- which can represent a client identifier found in a request for
- instance.
+ fill If input message already contains a value for the hostname field,
+ we keep it.
+ If input message doesn't contain a value for the hostname field
+           (ie: '-' as input rfc5424 message or non-compliant rfc3164 or
+ rfc5424 message), we use the source IP address from the sender as
+ hostname field.
+ (This is the default)
- * string [length <len>]
- A table declared with "type string" will store substrings of
- up to <len> characters. If the string provided by the pattern
- extractor is larger than <len>, it will be truncated before
- being stored. During matching, at most <len> characters will
- be compared between the string in the table and the extracted
- pattern. When not specified, the string is automatically
- limited to 32 characters. Increasing the length can have a
- non-negligible memory usage impact.
+ keep If input message already contains a value for the hostname field,
+ we keep it.
+ If input message doesn't contain a value for the hostname field,
+ we set it to 'localhost' (rfc3164) or '-' (rfc5424).
- * binary [length <len>]
- A table declared with "type binary" will store binary blocks
- of <len> bytes. If the block provided by the pattern
- extractor is larger than <len>, it will be truncated before
- being stored. If the block provided by the sample expression
- is shorter than <len>, it will be padded by 0. When not
- specified, the block is automatically limited to 32
- bytes. Increasing the length can have a non-negligible memory
- usage impact.
+ append If input message already contains a value for the hostname field,
+ we append a comma followed by the IP address from the sender.
+ If input message doesn't contain a value for the hostname field,
+ we use the source IP address from the sender.
- - size <size>
- This mandatory argument sets maximum number of entries that can
- fit in the table to <size>. This value directly impacts memory
- usage. Count approximately 50 bytes per entry in addition to the
- key size above, and optionally stored metrics, plus the size of a
- string if any. The size supports suffixes "k", "m", "g" for 2^10,
- 2^20 and 2^30 factors.
+ For all options above, if the source IP address from the sender is not
+ available (ie: UNIX/ABNS socket), then the resulting strategy is "keep".
- - expire <delay>
- Defines the maximum duration of an entry in the table since it was
- last created, refreshed using 'track-sc' or matched using 'stick
- match' or 'stick on' rule. The expiration delay <delay> is defined
- using the standard time format, similarly as the various timeouts,
- defaulting to milliseconds. The maximum duration is slightly above
- 24 days. See section 2.5 for more information. If this delay is
- not specified, sessions won't automatically expire, but oldest
- entries will be removed upon creation once full. Be sure not to
- use the "nopurge" parameter if not expiration delay is specified.
- Note: 'table_*' converters performs lookups but won't update touch
- expire since they don't require 'track-sc'.
+  Note that this option is only relevant for the rfc3164 or rfc5424
+  destination log formats. Otherwise setting the option will have no
+  visible effect.
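+
+  For instance, to always stamp the sender's address as the hostname on
+  rfc5424 output; a minimal sketch (addresses are arbitrary):
+
+    log-forward hostfix
+        dgram-bind 0.0.0.0:1514
+        option host replace
+        log 192.168.0.10:514 format rfc5424 local0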
+
+12.7. Certificate Storage
+-------------------------
- - brates-factor <factor>
- Specifies a factor to be applied to in/out bytes rate. Instead of
- counting each bytes, blocks of bytes are counted. Internally,
- rates are defined on 32-bits counters, limiting them to about 4
- billion per period. By using this parameter, it is possible to
- have rates exceeding this 4G limit over the defined period. The
- factor must be greater than 0 and lower than or equal to 1024.
+HAProxy uses an internal storage mechanism to load and store certificates used
+in the configuration. This storage can be configured by using a "crt-store"
+section. It allows configuring certificate definitions and which files should
+be loaded into it. A certificate definition must be written before it is used
+elsewhere in the configuration.
- - nopurge indicates that we refuse to purge older entries when the table is
- full. When not specified and the table is full when HAProxy wants
- to store an entry in it, it will flush a few of the oldest entries
- in order to release some space for the new ones. This is most
- often the desired behavior. In some specific cases, it will be
- desirable to refuse new entries instead of purging the older ones.
- That may be the case when the amount of data to store is far above
- the hardware limits and we prefer not to offer access to new
- clients than to reject the ones already connected. When using this
- parameter, be sure to properly set the "expire" parameter (see
- above).
+crt-store [<name>]
- - recv-only
- indicates that we don't intend to use the table to perform updates
- on it, but that we only plan on using the table to retrieve data
- from a remote peer which we are interested in. Indeed, the use of
- this keyword enables the retrieval of local-only values such as
- "conn_cur" that are not learned by default as they would conflict
- with local updates performed on the table by the local peer. Use
- of this option is only relevant for tables that are not involved
- in tracking rules or methods that perform update operations on the
- table, or put simpler: remote tables that are only used to
- retrieve information.
+The "crt-store" takes an optional name in argument. If a name is specified,
+every certificate of this store must be referenced using "@<name>/<crt>" or
+"@<name>/<alias>".
- - peers <peersect>
- Entries that are created, updated or refreshed will be sent to the
- peers in section <peersect> for synchronization, and keys learned
- from peers in this section will also be inserted or updated in the
- table. Additionally, on startup, an attempt may be done to learn
- entries from an older instance of the process, designated as the
- "local peer" via this section.
+Files in the certificate storage can also be updated dynamically with the CLI.
+See "set ssl cert" in section 9.3 of the management guide.
- - srvkey <srvkey>
- Specifies how each server is identified for the purposes of the
- stick table. The valid values are "name" and "addr". If "name" is
- given, then <name> argument for the server (may be generated by a
- template). If "addr" is given, then the server is identified by
- its current network address, including the port. "addr" is
- especially useful if you are using service discovery to generate
- the addresses for servers with peered stick-tables and want to
- consistently use the same host across peers for a stickiness
- token.
- - store <data_type>
- This is used to store additional information in the stick-table.
- This may be used by ACLs in order to control various criteria
- related to the activity of the client matching the stick-table.
- For each item specified here, the size of each entry will be
- inflated so that the additional data can fit. Several data types
- may be stored with an entry. Multiple data types may be specified
- after the "store" keyword, as a comma-separated list.
- Alternatively, it is possible to repeat the "store" keyword
- followed by one or several data types. Except for the "server_id"
- type which is automatically detected and enabled, all data types
- must be explicitly declared to be stored. If an ACL references a
- data type which is not stored, the ACL will simply not match. Some
- data types require an argument which must be passed just after the
- type between parenthesis. See below for the supported data types
- and their arguments.
+The following keywords are supported in the "crt-store" section :
+ - crt-base
+ - key-base
+ - load
- - write-to <wtable>
- Specifies the name of another stick table where peers updates will
- be written to in addition to the source table. <wtable> must be of
- the same type as the table being defined and must have the same
- key length, and source table cannot be used as a target table
- itself. Every time an entry update will be received on the source
- table through a peer, HAProxy will try to refresh related <wtable>
- entry. If the entry doesn't exist yet, it will be created, else
- its values will be updated as well as its timer. Note that only
- types that are not involved in arithmetic ops such as server_id,
- server_key and gpt will be written to <wtable> to prevent
- processed values from a remote table from interfering with
- arithmetic operations performed on the local target table. (ie:
- prevent shared cumulative counter from growing indefinitely) One
- common use of this option is to be able to use sticking rules (for
- server persistence) in a peers cluster setup, because matching
- keys will be learned from remote tables.
+crt-base <dir>
+ Assigns a default directory to fetch SSL certificates from when a relative
+ path is used with "crt" directives. Absolute locations specified prevail and
+ ignore "crt-base". When used in a crt-store, the crt-base of the global
+ section is ignored.
-The data types that can be associated with an entry via the "store" directive
-are listed below. It is important to keep in mind that memory requirements may
-be important when storing many data types. Indeed, storing all indicators below
-at once in each entry can requires hundreds of bytes per entry, or hundreds of
-MB for a 1-million entries table. For this reason, the approximate storage size
-is mentioned for each type between brackets:
+key-base <dir>
+ Assigns a default directory to fetch SSL private keys from when a relative
+ path is used with "key" directives. Absolute locations specified prevail and
+ ignore "key-base". When used in a crt-store, the key-base of the global
+ section is ignored.
- - bytes_in_cnt [4 bytes]
- This is the client to server byte count. It is a positive 64-bit
- integer which counts the cumulative number of bytes received from
- clients which matched this entry. Headers are included in the
- count. This may be used to limit abuse of upload features on photo
- or video servers.
+load [crt <filename>] [param*]
+ Load SSL files in the certificate storage. For the parameter list, see section
+  "12.7.1. Load options".
- - bytes_in_rate(<period>) [12 bytes]
- This is a rate counter on bytes from the client to the server.
- It takes an integer parameter <period> which indicates in
- milliseconds the length of the period over which the average is
- measured. It reports the average incoming bytes rate over that
- period, in bytes per period. It may be used to detect users which
- upload too much and too fast. Warning: with large uploads, it is
- possible that the amount of uploaded data will be counted once
- upon termination, thus causing spikes in the average transfer
- speed instead of having a smooth one. This may partially be
- smoothed with "option contstats" though this is not perfect. Use
- of byte_in_cnt is recommended for better fairness.
+Example:
- - bytes_out_cnt [4 bytes]
- This is the server to client byte count. It is a positive 64-bit
- integer which counts the cumulative number of bytes sent to
- clients which matched this entry. Headers are included in the
- count. This may be used to limit abuse of bots sucking the whole
- site.
+ crt-store
+ load crt "site1.crt" key "site1.key" ocsp "site1.ocsp" alias "site1"
+ load crt "site2.crt" key "site2.key"
- - bytes_out_rate(<period>) [12 bytes]
- This is a rate counter on bytes from the server to the client.
- It takes an integer parameter <period> which indicates in
- milliseconds the length of the period over which the average is
- measured. It reports the average outgoing bytes rate over that
- period, in bytes per period. It may be used to detect users which
- download too much and too fast. Warning: with large transfers, it
- is possible that the amount of transferred data will be counted
- once upon termination, thus causing spikes in the average transfer
- speed instead of having a smooth one. This may partially be
- smoothed with "option contstats" though this is not perfect
- yet. Use of byte_out_cnt is recommended for better fairness.
+ frontend in2
+ bind *:443 ssl crt "@/site1" crt "site2.crt"
- - conn_cnt [4 bytes]
- This is the Connection Count. It is a positive 32-bit integer
- which counts the absolute number of connections received from
- clients which matched this entry. It does not mean the connections
- were accepted, just that they were received.
+ crt-store web
+ crt-base /etc/ssl/certs/
+ key-base /etc/ssl/private/
+ load crt "site3.crt" alias "site3"
+ load crt "site4.crt" key "site4.key"
- - conn_cur [4 bytes]
- This is the Current Connections count. It is a positive 32-bit
- integer which stores the concurrent connection count for the
- entry. It is incremented once an incoming connection matches the
- entry, and decremented once the connection leaves. That way it is
- possible to know at any time the exact number of concurrent
- connections for an entry. This type is not learned from other
- peers by default as it wouldn't represent anything given that it
- would ignore the local count. However, in combination with
- recv-only it can be used to learn the number of concurrent
- connections seen by peers.
+ frontend in2
+     bind *:443 ssl crt "@web/site3" crt "@web/site4.crt"
- - conn_rate(<period>) [12 bytes]
- This is a connection frequency counter. It takes an integer
- parameter <period> which indicates in milliseconds the length of
- the period over which the average is measured. It reports the
- average incoming connection rate over that period, in connections
- per period. The result is an integer which can be matched using
- ACLs. Whether connections are accepted or rejected has no effect
- on their measurement.
+12.7.1. Load options
+--------------------
- - glitch_cnt [4 bytes]
- This is the front glitches count. It is a positive 32-bit integer
- which counts the cumulative number of glitches reported on a front
- connection. Glitches correspond to either unusual or unexpected
- actions (protocol- wise) from the client that could indicate a
- badly defective client or possibly an attacker. As such, this
- counter can help in order to decide how to act with them in such
- case.
+Load SSL files in the certificate storage. The load keyword can take multiple
+parameters which are listed below. These keywords are also usable in a
+crt-list.
- - glitch_rate(<period>) [12 bytes]
- This is a frequency counter on glitches. It takes an integer
- parameter <period> which indicates in milliseconds the length of
- the period over which the average is measured. It reports the
- average front glitches rate over that period. It may be used to
- detect defective clients or potential attackers that perform
- uncommon or unexpected actions from a protocol point of view,
- provided that HAProxy flagged them them as such.
+crt <filename>
+ This argument is mandatory, it loads a PEM which must contain the public
+ certificate but could also contain the intermediate certificates and the
+ private key. If no private key is provided in this file, a key can be provided
+ with the "key" keyword.
- - gpc(<nb>) [4 * <nb> bytes]
- This is an array of <nb> General Purpose Counter elements. This is
- an array of positive 32-bit integers which may be used to count
- anything. Most of the time they will be used as a incremental
- counters on some entries, for instance to note that a limit is
- reached and trigger some actions. This array is limited to a
- maximum of 100 elements: gpc0 to gpc99, to ensure that the build
- of a peer update message can fit into the buffer. Users should
- take in consideration that a large amount of counters will
- increase the data size and the traffic load using peers protocol
- since all data/counters are pushed each time any of them is
- updated. This data_type will exclude the usage of the legacy
- data_types 'gpc0' and 'gpc1' on the same table. Using the 'gpc'
- array data_type, all 'gpc0' and 'gpc1' related sample fetch
- functions and actions will apply to the two first elements of this
- array.
+acme <string>
+  This option allows configuring the ACME protocol for a given certificate.
+ This is an experimental feature which needs the
+ "expose-experimental-directives" keyword in the global section.
- - gpc_rate(<nb>,<period>) [12 * <nb> bytes]
- This is an array of increment rates of General Purpose Counters
- over a period. Those elements are positive 32-bit integers which
- may be used for anything. Just like <gpc>, the count events, but
- instead of keeping a cumulative number, they maintain the rate at
- which the counter is incremented. Most of the time it will be
- used to measure the frequency of occurrence of certain events
- (e.g. requests to a specific URL). This array is limited to a
- maximum of 100 elements: gpt(100) allowing the storage of gpc0 to
- gpc99, to ensure that the build of a peer update message can fit
- into the buffer. The array cannot contain less than 1 element:
- use gpc(1) if you want to store only the counter gpc0. Users
- should take in consideration that a large amount of counters will
- increase the data size and the traffic load using peers protocol
- since all data/counters are pushed each time any of them is
- updated. This data_type will exclude the usage of the legacy
- data_types 'gpc0_rate' and 'gpc1_rate' on the same table. Using
- the 'gpc_rate' array data_type, all 'gpc0' and 'gpc1' related
- fetches and actions will apply to the two first elements of this
- array.
+ See also Section 12.8 ("ACME") and "domains" in this section.
+
+alias <string>
+  Optional argument. Allows naming the certificate with an alias, so it can
+  be referenced by this alias in the configuration. An alias must be
+  prefixed with '@/' when used elsewhere in the configuration.
+
+domains <string>
+  Configure the list of domains that will be used for ACME certificates. The
+  first domain of the list is used as the CN. Domains in the list are
+  separated by commas.
- - gpc0 [4 bytes]
- This is the first General Purpose Counter. It is a positive 32-bit
- integer integer which may be used for anything. Most of the time
- it will be used to put a special tag on some entries, for instance
- to note that a specific behavior was detected and must be known
- for future matches.
+ See also Section 12.8 ("ACME") and "acme" in this section.
- - gpc0_rate(<period>) [12 bytes]
- This is the increment rate of the first General Purpose Counter
- over a period. It is a positive 32-bit integer integer which may
- be used for anything. Just like <gpc0>, it counts events, but
- instead of keeping a cumulative number, it maintains the rate at
- which the counter is incremented. Most of the time it will be used
- to measure the frequency of occurrence of certain events
- (e.g. requests to a specific URL).
+ Example:
- - gpc1 [4 bytes]
- This is the second General Purpose Counter. It is a positive
- 32-bit integer integer which may be used for anything. Most of the
- time it will be used to put a special tag on some entries, for
- instance to note that a specific behavior was detected and must be
- known for future matches.
+ load crt "example.com.pem" acme LE domains "bar.example.com,foo.example.com"
- - gpc1_rate(<period>) [12 bytes]
- This is the increment rate of the second General Purpose Counter
- over a period. It is a positive 32-bit integer integer which may
- be used for anything. Just like <gpc1>, it counts events, but
- instead of keeping a cumulative number, it maintains the rate at
- which the counter is incremented. Most of the time it will be used
- to measure the frequency of occurrence of certain events
- (e.g. requests to a specific URL).
+key <filename>
+  This argument is optional. Load a private key in PEM format. If a private
+  key was already provided by the "crt" file, the key loaded here overrides
+  it.
- - gpt(<nb>) [4 * <nb> bytes]
- This is an array of <nb> General Purpose Tags elements. This is an
- array of positive 32-bit integers which may be used for anything.
- Most of the time they will be used to put a special tags on some
- entries, for instance to note that a specific behavior was
- detected and must be known for future matches. This array is
- limited to a maximum of 100 elements: gpt(100) allowing the
- storage of gpt0 to gpt99, to ensure that the build of a peer
- update message can fit into the buffer. The array cannot contain
- less than 1 element: use gpt(1) if you want to to store only the
- tag gpt0. Users should take in consideration that a large amount
- of counters will increase the data size and the traffic load using
- peers protocol since all data/counters are pushed each time any of
- them is updated. This data_type will exclude the usage of the
- legacy data_type 'gpt0' on the same table. Using the 'gpt' array
- data_type, all 'gpt0' related fetches and actions will apply to
- the first element of this array.
+ocsp <filename>
+ This argument is optional, it loads an OCSP response in DER format. It can
+ be updated with the CLI.
- - gpt0 [4 bytes]
- This is the first General Purpose Tag. It is a positive 32-bit
- integer which may be used for anything. Most of the time it will
- be used to put a special tag on some entries, for instance to note
- that a specific behavior was detected and must be known for future
- matches.
+issuer <filename>
+ This argument is optional. Load the OCSP issuer in PEM format. In order to
+ identify which certificate an OCSP Response applies to, the issuer's
+ certificate is necessary. If the issuer's certificate is not found in the
+  "crt" file, it can be loaded from a file with this argument.
- - http_req_cnt [4 bytes]
- This is the HTTP request Count. It is a positive 32-bit integer
- which counts the absolute number of HTTP requests received from
- clients which matched this entry. It does not matter whether they
- are valid requests or not. Note that this is different from
- sessions when keep-alive is used on the client side.
+sctl <filename>
+  This argument is optional. It enables support for the Certificate
+  Transparency (RFC 6962) TLS extension. The file must contain a valid
+  Signed Certificate Timestamp List, as described in the RFC. The file is
+  parsed to check basic syntax, but no signatures are verified.
- - http_req_rate(<period>) [12 bytes]
- This is a request frequency counter. It takes an integer parameter
- <period> which indicates in milliseconds the length of the period
- over which the average is measured. It reports the average HTTP
- request rate over that period, in requests per period. The result
- is an integer which can be matched using ACLs. It does not matter
- whether they are valid requests or not. Note that this is
- different from sessions when keep-alive is used on the client
- side.
+ocsp-update [ off | on ]
+ Enable automatic OCSP response update when set to 'on', disable it otherwise.
+ Its value defaults to 'off'.
+ To enable the OCSP auto update on a bind line, you can use this option in a
+ crt-store or you can use the global option "tune.ocsp-update.mode".
+  If a given certificate is used in multiple crt-lists with different
+  values of the 'ocsp-update' option set, an error will be raised. Likewise,
+  if a certificate
+ inherits from the global option on a bind line and has an incompatible
+ explicit 'ocsp-update' option set in a crt-list, the same error will be
+ raised.
- - http_err_cnt [4 bytes]
- This is the HTTP request Error Count. It is a positive 32-bit
- integer which counts the absolute number of HTTP requests errors
- induced by clients which matched this entry. Errors are counted on
- invalid and truncated requests, as well as on denied or tarpitted
- requests, and on failed authentications. If the server responds
- with 4xx, then the request is also counted as an error since it's
- an error triggered by the client (e.g. vulnerability scan).
+ Examples:
- - http_err_rate(<period>) [12 bytes]
- This is an HTTP request frequency counter. It takes an integer
- parameter <period> which indicates in milliseconds the length of
- the period over which the average is measured. It reports the
- average HTTP request error rate over that period, in requests per
- period (see http_err_cnt above for what is accounted as an
- error). The result is an integer which can be matched using ACLs.
+ Here is an example configuration enabling it with a crt-list:
- - http_fail_cnt [4 bytes]
- This is the HTTP response Failure Count. It is a positive 32-bit
- integer which counts the absolute number of HTTP response failures
- induced by servers which matched this entry. Errors are counted on
- invalid and truncated responses, as well as any 5xx response other
- than 501 or 505. It aims at being used combined with path or URI
- to detect service failures.
+ haproxy.cfg:
+ frontend fe
+ bind :443 ssl crt-list haproxy.list
- - http_fail_rate(<period>) [12 bytes]
- This is an HTTP response failure frequency counter. It takes an
- integer parameter <period> which indicates in milliseconds the
- length of the period over which the average is measured. It
- reports the average HTTP response failure rate over that period,
- in requests per period (see http_fail_cnt above for what is
- accounted as a failure). The result is an integer which can be
- matched using ACLs.
+ haproxy.list:
+ server_cert.pem [ocsp-update on] foo.bar
- - server_id [4 bytes]
- This is an integer which holds the numeric ID of the server a
- request was assigned to. It is used by the "stick match", "stick
- store", and "stick on" rules. It is automatically enabled when
- referenced. It is important to understand that stickiness based on
- learning information has some limitations, including the fact that
- all learned associations are lost upon restart unless peers are
- properly configured to transfer such information upon restart
- (recommended). In general it can be good as a complement to other
- stickiness mechanisms but not always as the sole mechanism.
+ Here is an example configuration enabling it with a crt-store:
- - sess_cnt [4 bytes]
- This is the Session Count. It is a positive 32-bit integer which
- counts the absolute number of sessions received from clients which
- matched this entry. A session is a connection that was accepted by
- the layer 4 rules ("tcp-request connection").
+ haproxy.cfg:
- - sess_rate(<period>) [12 bytes]
- This is a session frequency counter. It takes an integer parameter
- <period> which indicates in milliseconds the length of the period
- over which the average is measured. It reports the average
- incoming session rate over that period, in sessions per
- period. The result is an integer which can be matched using ACLs.
+ crt-store
+ load crt foobar.pem ocsp-update on
-Example:
- # Keep track of counters of up to 1 million IP addresses over 5 minutes
- # and store a general purpose counter and the average connection rate
- # computed over a sliding window of 30 seconds.
- stick-table type ip size 1m expire 5m store gpc0,conn_rate(30s)
+ frontend fe
+ bind :443 ssl crt foobar.pem
-See also : "stick match", "stick on", "stick store-request", "track-sc",
- section 2.5 about time format, section 11.2 about peers, section 9.7
- about bandwidth limitations, and section 7 about ACLs.
+ When the option is set to 'on', HAProxy will try to fetch an OCSP response
+ whenever an OCSP URI is found in the frontend's certificate. The only
+ limitation of this mode is that the certificate's issuer must be known in
+ order for the OCSP certid to be built.
+ Each OCSP response will be updated at least once an hour, and even more
+ frequently if a given OCSP response has an expire date earlier than this one
+ hour limit. A minimum update interval of 5 minutes will still exist in order
+ to avoid updating too often responses that have a really short expire time or
+ even no 'Next Update' at all. Because of this hard limit, please note that
+ when auto update is set to 'on', any OCSP response loaded during init will
+ not be updated until at least 5 minutes, even if its expire time ends before
+ now+5m. This should not be too much of a hassle since an OCSP response must
+ be valid when it gets loaded during init (its expire time must be in the
+ future) so it is unlikely that this response expires in such a short time
+ after init.
+ On the other hand, if a certificate has an OCSP uri specified and no OCSP
+ response, setting this option to 'on' for the given certificate will ensure
+ that the OCSP response gets fetched automatically right after init.
+ The default minimum and maximum delays (5 minutes and 1 hour respectively)
+ can be configured by the "ocsp-update.maxdelay" and "ocsp-update.mindelay"
+ global options.
+
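+ For example, to widen this update window, the two options can be set in the
+ global section (values are in seconds; these particular numbers are only an
+ illustration):
+
+ global
+ ocsp-update.mindelay 600
+ ocsp-update.maxdelay 7200
+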
+ Whenever an OCSP response is updated by the auto update task or following a
+ call to the "update ssl ocsp-response" CLI command, a dedicated log line is
+ emitted. It uses a dedicated format that starts with the "<OCSP-UPDATE>"
+ header, followed by specific OCSP-related information:
+ - the path of the corresponding frontend certificate
+ - a numerical update status
+ - a textual update status
+ - the number of update failures for the given response
+ - the number of update successes for the given response
+ See "show ssl ocsp-updates" CLI command for a full list of error codes and
+ error messages. This line is emitted regardless of the success or failure of
+ the concerned OCSP response update.
+ The OCSP request/response is sent and received through an http_client
+ instance that has the dontlog-normal option set and that uses the regular
+ HTTP log format in case of error (unreachable OCSP responder for instance).
+ If such an error occurs, another log line that contains HTTP-related
+ information will then be emitted alongside the "regular" OCSP one (which will
+ likely have "HTTP error" as text status).
+ Here are two examples of such log lines, with a successful OCSP update log
+ line first and then an example of an HTTP error with the two different lines
+ (lines were split and the URL was shortened for readability):
+ <133>Mar 6 11:16:53 haproxy[14872]: <OCSP-UPDATE> /path_to_cert/foo.pem 1 \
+ "Update successful" 0 1
+ <133>Mar 6 11:18:55 haproxy[14872]: <OCSP-UPDATE> /path_to_cert/bar.pem 2 \
+ "HTTP error" 1 0
+ <133>Mar 6 11:18:55 haproxy[14872]: -:- [06/Mar/2023:11:18:52.200] \
+ <OCSP-UPDATE> -/- 2/0/-1/-1/3009 503 217 - - SC-- 0/0/0/0/3 0/0 {} \
+ "GET http://127.0.0.1:12345/MEMwQT HTTP/1.1"
-11.2. Peers declaration
------------------------
+ Troubleshooting:
+ A common error that can happen with Let's Encrypt certificates is when the
+ DNS resolution provides an IPv6 address while your system does not have a
+ valid outgoing IPv6 route. In such a case, you can either create the
+ appropriate route or set the "httpclient.resolvers.prefer ipv4" option in
+ the global section.
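+ A minimal sketch of that workaround in the global section:
+
+ global
+ httpclient.resolvers.prefer ipv4
+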
+ In case of "OCSP response check failure" error, you might want to check that
+ the issuer certificate that you provided is valid.
+ A more precise error message might also be displayed in parentheses after
+ the "generic" error message. This can happen for "OCSP response check
+ failure" or "Error during insertion" errors.
-It is possible to propagate entries of any data-types in stick-tables between
-several HAProxy instances over TCP connections in a multi-master fashion. Each
-instance pushes its local updates and insertions to remote peers. The pushed
-values overwrite remote ones without aggregation.
+12.8. ACME
+----------
-One exception is the data type "conn_cur" which is never learned from peers by
-default as it is supposed to reflect local values. Earlier versions used to
-synchronize it by default which was known to cause negative values in active-
-active setups, and always-growing values upon reloads or active-passive
-switches because the local value would reflect more connections than locally
-present. However there are some setups where it could be relevant to learn
-this value from peers, for instance when the table is a passive remote table
-solely used to learn/monitor data from it without relying on it for write-
-oriented operations or updates. To achieve this, the "recv-only" keyword can
-be added on the table declaration. In any case, the "conn_cur" info is always
-pushed so that monitoring systems can watch it.
+acme <name>
+
+The ACME protocol can be configured using the "acme" section. The section takes
+a "<name>" argument, which is used to link a certificate to the section.
+
+The ACME section allows configuring HAProxy as an ACMEv2 client. This feature
+is experimental, meaning that "expose-experimental-directives" must be set in
+the global section before it can be used.
-Interrupted exchanges are automatically detected and recovered from the last
-known point. In addition, during a soft restart, the old process connects to
-the new one using such a TCP connection to push all its entries before the new
-process tries to connect to other peers. That ensures very fast replication
-during a reload, it typically takes a fraction of a second even for large
-tables.
+Current limitations as of 3.2: the feature is limited to the HTTP-01 challenge
+for now. HAProxy's architecture is non-blocking, and disk access is not
+supposed to happen once the configuration is loaded, because it could block
+the event loop and stall traffic on the same thread. This means that the
+certificates and keys generated by HAProxy need to be dumped from outside
+HAProxy using "dump ssl cert" on the stats socket.
+External Account Binding (EAB) is not supported.
-Note that Server IDs are used to identify servers remotely, so it is important
-that configurations look similar or at least that the same IDs are forced on
-each server on all participants.
+The ACME scheduler starts at HAProxy startup. It loops over the certificates
+and starts an ACME renewal task for a certificate once the current time is
+past notAfter - (notAfter - notBefore) / 12, or notAfter - 7 days if
+notBefore is not defined. The scheduler then sleeps and wakes up again after
+12 hours.
+It is possible to start a renewal task manually with "acme renew".
+See also "acme status" in the management guide.
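+For example, with a 90-day Let's Encrypt certificate, (notAfter - notBefore)
+/ 12 is 7.5 days, so the renewal task starts as soon as less than 7.5 days of
+validity remain, and the scheduler, waking up every 12 hours, will catch that
+point at most 12 hours late.
+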
-peers <peersect>
- Creates a new peer list with name <peersect>. It is an independent section,
- which is referenced by one or more stick-tables.
+The following keywords are usable in the ACME section:
-bind [<address>]:port [param*]
-bind /<path> [param*]
- Defines the binding parameters of the local peer of this "peers" section.
- Such lines are not supported with "peer" line in the same "peers" section.
+account-key <filename>
+ Configure the path to the account key, which needs to be generated before
+ launching HAProxy. If no "account-key" keyword is used, the acme section
+ will try to load a file named "<name>.account.key". If that file doesn't
+ exist, HAProxy will generate one using the parameters from the acme
+ section.
-disabled
- Disables a peers section. It disables both listening and any synchronization
- related to this section. This is provided to disable synchronization of stick
- tables without having to comment out all "peers" references.
+ You can also manually generate an RSA private key with openssl:
-default-bind [param*]
- Defines the binding parameters for the local peer, excepted its address.
+ openssl genrsa -out account.key 2048
-default-server [param*]
- Change default options for a server in a "peers" section.
+ Or an ECDSA one:
- Arguments:
- <param*> is a list of parameters for this server. The "default-server"
- keyword accepts an important number of options and has a complete
- section dedicated to it. In a peers section, the transport
- parameters of a "default-server" line are supported. Please refer
- to section 5 for more details, and the "server" keyword below in
- this section for some of the restrictions.
+ openssl ecparam -name secp384r1 -genkey -noout -out account.key
- See also: "server" and section 5 about server options
+bits <number>
+ Configure the number of bits used to generate an RSA key. Defaults to 2048.
+ Setting too high a value can trigger a warning if your machine is not
+ powerful enough (this can be tuned with "warn-blocked-traffic-after", but
+ blocking the traffic for too long could trigger the watchdog).
-enabled
- This re-enables a peers section which was previously disabled via the
- "disabled" keyword.
+challenge <string>
+ Takes a challenge type as parameter; this must be either "HTTP-01" or
+ "DNS-01". When not set, the default is HTTP-01.
-log <target> [len <length>] [format <format>] [sample <ranges>:<sample_size>]
- <facility> [<level> [<minlevel>]]
- "peers" sections support the same "log" keyword as for the proxies to
- log information about the "peers" listener. See "log" option for proxies for
- more details.
+contact <string>
+ The contact email that will be associated with the account key in the CA.
-peer <peername> [<address>]:port [param*]
-peer <peername> /<path> [param*]
- Defines a peer inside a peers section.
- If <peername> is set to the local peer name (by default hostname, or forced
- using "-L" command line option or "localpeer" global configuration setting),
- HAProxy will listen for incoming remote peer connection on the provided
- address. Otherwise, the address defines where to connect to in order to join
- the remote peer, and <peername> is used at the protocol level to identify and
- validate the remote peer on the server side.
+curves <string>
+ When using the ECDSA keytype, configure the curve to use. The default is
+ P-384.
- During a soft restart, local peer address is used by the old instance to
- connect the new one and initiate a complete replication (teaching process).
+directory <string>
+ This keyword configures the directory URL for the CA used by this acme
+ section. This keyword is mandatory as there is no default URL.
- It is strongly recommended to have the exact same peers declaration on all
- peers and to only rely on the "-L" command line argument or the "localpeer"
- global configuration setting to change the local peer name. This makes it
- easier to maintain coherent configuration files across all peers.
+ Example:
+ directory https://acme-staging-v02.api.letsencrypt.org/directory
- You may want to reference some environment variables in the address
- parameter, see section 2.3 about environment variables.
+keytype <string>
+ Configure the type of key that will be generated. The value can be either
+ "RSA" or "ECDSA". You can also configure the "curves" for ECDSA and the
+ number of "bits" for RSA. By default, ECDSA keys using the P-384 curve are
+ generated.
- Note: "peer" keyword may transparently be replaced by "server" keyword (see
- "server" keyword explanation below).
+map <map>
+ Configure the map which will be used to store tokens (keys) and thumbprints
+ (values), which is useful to reply to a challenge when multiple accounts
+ are used. The acme task adds entries before validating the challenge and
+ removes them at the end of the task.
-server <peername> [<address>:<port>] [param*]
-server <peername> [/<path>] [param*]
- As previously mentioned, "peer" keyword may be replaced by "server" keyword
- with a support for all "server" parameters found in 5.2 paragraph that are
- related to transport settings. If the underlying peer is local, the address
- parameter must not be present; it must be provided on a "bind" line (see
- "bind" keyword of this "peers" section).
+Example:
- A number of "server" parameters are irrelevant for "peers" sections. Peers by
- nature do not support dynamic host name resolution nor health checks, hence
- parameters like "init_addr", "resolvers", "check", "agent-check", or "track"
- are not supported. Similarly, there is no load balancing nor stickiness, thus
- parameters such as "weight" or "cookie" have no effect.
+ global
+ expose-experimental-directives
+ httpclient.resolvers.prefer ipv4
- Example:
- # The old way.
- peers mypeers
- peer haproxy1 192.168.0.1:1024
- peer haproxy2 192.168.0.2:1024
- peer haproxy3 10.2.0.1:1024
+ frontend in
+ bind *:80
+ bind *:443 ssl
+ http-request return status 200 content-type text/plain lf-string "%[path,field(-1,/)].%[path,field(-1,/),map(virt@acme)]\n" if { path_beg '/.well-known/acme-challenge/' }
+ ssl-f-use crt "foo.example.com.pem.rsa" acme LE1 domains "foo.example.com,bar.example.com"
+ ssl-f-use crt "foo.example.com.pem.ecdsa" acme LE2 domains "foo.example.com,bar.example.com"
- backend mybackend
- mode tcp
- balance roundrobin
- stick-table type ip size 20k peers mypeers
- stick on src
+ acme LE1
+ directory https://acme-staging-v02.api.letsencrypt.org/directory
+ account-key /etc/haproxy/letsencrypt.account.key
+ contact john.doe@example.com
+ challenge HTTP-01
+ keytype RSA
+ bits 2048
+ map virt@acme
- server srv1 192.168.0.30:80
- server srv2 192.168.0.31:80
+ acme LE2
+ directory https://acme-staging-v02.api.letsencrypt.org/directory
+ account-key /etc/haproxy/letsencrypt.account.key
+ contact john.doe@example.com
+ challenge HTTP-01
+ keytype ECDSA
+ curves P-384
+ map virt@acme
- Example:
- peers mypeers
- bind 192.168.0.1:1024 ssl crt mycerts/pem
- default-server ssl verify none
- server haproxy1 #local peer
- server haproxy2 192.168.0.2:1024
- server haproxy3 10.2.0.1:1024
+12.9. Programs (deprecated)
+---------------------------
-shards <shards>
+This section is deprecated and should disappear in HAProxy 3.3. It can easily
+be replaced by a separate process manager: systemd unit files or sysvinit
+scripts are more reliable alternatives. In Docker environments, alternatives
+such as s6 or supervisord can also be used.
- In some configurations, one would like to distribute the stick-table contents
- to some peers in place of sending all the stick-table contents to each peer
- declared in the "peers" section. In such cases, "shards" specifies the
- number of peer involved in this stick-table contents distribution.
- See also "shard" server parameter.
+In master-worker mode, it is possible to launch external binaries with the
+master; these processes are called programs. They are launched and managed
+the same way as the workers.
-table <tablename> type {ip | integer | string [len <length>] | binary [len <length>]}
- size <size> [expire <expire>] [write-to <wtable>] [nopurge] [store <data_type>]*
- [recv-only]
+Since version 3.1, the program section behaves slightly differently: the
+section is parsed and the program is started by the master, but the rest of
+the configuration is loaded in the worker. This means the program
+configuration is completely separate from the worker configuration, and a
+program can be re-executed upon a reload even if the worker configuration is
+wrong.
- Configure a stickiness table for the current section. This line is parsed
- exactly the same way as the "stick-table" keyword in others section, except
- for the "peers" argument which is not required here and with an additional
- mandatory first parameter to designate the stick-table. Contrary to others
- sections, there may be several "table" lines in "peers" sections (see also
- the complete definition of the "table" and "stick-table" keywords in
- section 11.1 above).
+During a reload of HAProxy, those processes go through the same sequence as a
+worker:
- Also be aware of the fact that "peers" sections have their own stick-table
- namespaces to avoid collisions between stick-table names identical in
- different "peers" section. This is internally handled prepending the "peers"
- sections names to the name of the stick-tables followed by a '/' character.
- If somewhere else in the configuration file you have to refer to such
- stick-tables declared in "peers" sections you must use the prefixed version
- of the stick-table name as follows:
+ - the master is re-executed
+ - the master sends a SIGUSR1 signal to the program
+ - if "option start-on-reload" is not disabled, the master launches a new
+ instance of the program
- peers mypeers
- peer A ...
- peer B ...
- table t1 ...
+During a stop, or restart, a SIGTERM is sent to the programs.
- frontend fe1
- tcp-request content track-sc0 src table mypeers/t1
+program <name>
+ Declares a new program section. This section creates an instance named
+ <name> which is visible in "show proc" on the master CLI (see section 9.4
+ "Master CLI" in the management guide).
- This is also this prefixed version of the stick-table names which must be
- used to refer to stick-tables through the CLI.
+command <command> [arguments*]
+ Define the command to start with optional arguments. The command is looked
+ up in the current PATH if it does not include an absolute path. This is a
+ mandatory option of the program section. Arguments containing spaces must
+ be enclosed in single or double quotes, or be prefixed by a backslash.
- About "peers" protocol, as only "peers" belonging to the same section may
- communicate with each others, there is no need to do such a distinction.
- Several "peers" sections may declare stick-tables with the same name.
- This is shorter version of the stick-table name which is sent over the network.
- There is only a '/' character as prefix to avoid stick-table name collisions between
- stick-tables declared as backends and stick-table declared in "peers" sections
- as follows in this weird but supported configuration:
+user <user name>
+ Changes the executed command user ID to the <user name> from /etc/passwd.
+ See also "group".
- peers mypeers
- peer A ...
- peer B ...
- table t1 type string size 10m store gpc0
+group <group name>
+ Changes the executed command group ID to the <group name> from /etc/group.
+ See also "user".
- backend t1
- stick-table type string size 10m store gpc0 peers mypeers
+option start-on-reload
+no option start-on-reload
+ Start (or not) a new instance of the program upon a reload of the master.
+ The default is to start a new instance. This option may only be used in a
+ program section.
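+
+ A minimal example, starting a hypothetical helper binary alongside the
+ workers (the binary path and its arguments are only an illustration):
+
+ program my-helper
+ command /usr/local/bin/my-helper --port 5555
+ user nobody
+ no option start-on-reload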
- Here "t1" table declared in "mypeers" section has "mypeers/t1" as global name.
- "t1" table declared as a backend as "t1" as global name. But at peer protocol
- level the former table is named "/t1", the latter is again named "t1".
/*
* Local variables: