Tim Düsterhus reported an annoying problem in the H2 decoder related to
an ambiguity in the H2 spec. The spec says in section 10.3 that HTTP/2
allows header field values that are not valid (since they are treated as
binary) and, at the same time, that an H2-to-H1 gateway must be careful to
reject headers whose values contain \0, \r or \n.
Till now, for the sake of maintaining end-to-end binary transparency in
H2-to-H2, the H2 mux would not reject such values since it does not know
what version will be used on the other side.
In theory we should in fact perform such a check when converting an HTX
header to H1. But this causes a problem as it means that all our rule sets,
sample fetches, captures, logs or redirects may still find an LF in a header
coming from H2. Also in 2.0 and older in legacy mode, the frames are instantly
converted to H1 and HTX couldn't help there. So this means that in practice
we must refrain from delivering such a header upwards, regardless of any
outgoing protocol consideration.
Applying such a lookup on all headers leaving the mux comes with a
significant performance hit, especially for large ones. A first attempt
was made at placing this into the HPACK decoder to refrain from learning
invalid literals but error reporting becomes more complicated. Additional
tests show that doing this within the HTX transcoding loop benefits from
the hot L1 cache, and that by skipping up to 8 bytes per iteration the
CPU cost remains within noise margin, around ~0.5%.
This patch must be backported as far as 1.8 since this bug could be
exploited and serve as the base for an attack. In 2.0 and earlier the
fix must also be added to functions h2_make_h1_request() and
h2_make_h1_trailers() to handle legacy mode. It relies on previous patch
"MINOR: ist: add ist_find_ctl()" to speed up the control bytes lookup.
All credits go to Tim for his detailed bug report and his initial patch.
Willy Tarreau [Fri, 22 Nov 2019 14:58:53 +0000 (15:58 +0100)]
MINOR: ist: add ist_find_ctl()
This new function looks for the first control character in a string (a
char whose value is between 0x00 and 0x1F included) and returns it, or
NULL if there is none. It is optimized for quickly evicting non-matching
strings and scans ~0.43 bytes per cycle. It can be used as an accelerator
when it's needed to look up several of these characters (e.g. CR/LF/NUL).
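For illustration only, here is a minimal standalone sketch of such an
8-bytes-at-a-time scan using the classic "any byte below 0x20" SWAR test;
this is an assumption of how it can be done, not the actual ist_find_ctl()
code:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Return a pointer to the first control character (0x00..0x1F) found in
     * the <len> bytes at <p>, or NULL if there is none. Non-matching 8-byte
     * words are evicted with a single test before falling back to a
     * byte-by-byte scan.
     */
    static const char *find_ctl(const char *p, size_t len)
    {
        const uint64_t ones = 0x0101010101010101ULL;

        while (len >= 8) {
            uint64_t x;

            memcpy(&x, p, 8); /* unaligned-safe load */
            /* sets the high bit of every byte of <x> that is below 0x20 */
            if ((x - ones * 0x20) & ~x & (ones * 0x80))
                break; /* at least one CTL among these 8 bytes */
            p   += 8;
            len -= 8;
        }
        for (; len; p++, len--)
            if ((unsigned char)*p < 0x20)
                return p;
        return NULL;
    }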
Tim Duesterhus [Sat, 23 Nov 2019 22:52:30 +0000 (23:52 +0100)]
BUG/MINOR: ssl: Stop passing dynamic strings as format arguments
gcc complains rightfully:
src/ssl_sock.c: In function ‘ssl_sock_prepare_all_ctx’:
src/ssl_sock.c:5507:3: warning: format not a string literal and no format arguments [-Wformat-security]
ha_warning(errmsg);
^
src/ssl_sock.c:5509:3: warning: format not a string literal and no format arguments [-Wformat-security]
ha_alert(errmsg);
^
src/ssl_sock.c: In function ‘cli_io_handler_commit_cert’:
src/ssl_sock.c:10208:3: warning: format not a string literal and no format arguments [-Wformat-security]
chunk_appendf(trash, err);
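The usual fix for -Wformat-security is to pass the dynamic string through a
constant "%s" format; a sketch of that pattern (not necessarily the exact
patch):

    ha_warning("%s", errmsg);
    ha_alert("%s", errmsg);
    chunk_appendf(trash, "%s", err);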
Lukas Tribus [Sun, 24 Nov 2019 17:20:40 +0000 (18:20 +0100)]
BUG/MINOR: ssl: fix curve setup with LibreSSL
Since commit 9a1ab08 ("CLEANUP: ssl-sock: use HA_OPENSSL_VERSION_NUMBER
instead of OPENSSL_VERSION_NUMBER") we restrict LibreSSL to the OpenSSL
1.0.1 API, to avoid breaking LibreSSL every minute. We set
HA_OPENSSL_VERSION_NUMBER to 0x1000107fL if LibreSSL is detected and
only allow curves to be configured if HA_OPENSSL_VERSION_NUMBER is at
least 0x1000200fL.
However, all relevant LibreSSL releases actually support setting curves,
which is now broken. Fix this by always allowing curve configuration when
using LibreSSL.
Reported on GitHub in issue #366.
Fixes: 9a1ab08 ("CLEANUP: ssl-sock: use HA_OPENSSL_VERSION_NUMBER instead
of OPENSSL_VERSION_NUMBER").
William Dauchy [Sat, 23 Nov 2019 20:14:33 +0000 (21:14 +0100)]
MINOR: ssl: fix possible null dereference in error handling
Recent commit 8b453912ce9a ("MINOR: ssl: ssl_sock_prepare_ctx() return an error code")
converted all error handling; in that patch `err` is supposed to be tested
every time, but three of these tests are missing. I did not find a plausible
explanation for it.
This should fix issue #374.
Fixes: 8b453912ce9a ("MINOR: ssl: ssl_sock_prepare_ctx() return an error code")
Reported-by: Илья Шипицин <chipitsine@gmail.com>
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
DOC: Add documentation about the use-service action
The use-service action may be used in tcp-request and http-request rules. It was
added to customize HAProxy's reply to a client using an applet (initially a Lua
applet). But the documentation was missing.
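For example, a minimal hedged configuration using the action could look like
this (assuming the built-in Prometheus exporter service is available as the
applet):

    frontend stats
        bind :8404
        # let an applet (here the Prometheus exporter service) build the reply
        http-request use-service prometheus-exporter if { path /metrics }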
Willy Tarreau [Thu, 21 Nov 2019 17:48:20 +0000 (18:48 +0100)]
MINOR: version: emit the link to the known bugs in output of "haproxy -v"
The link to the known bugs page for the current version is built and
reported there. When it is a development version (less than 2 dots),
instead a link to github open issues is reported as there's no way to
be sure about the current situation in this case and it's better that
users report their trouble there.
It's not always easy for end users to know the lifecycle of the version
they are using. This patch introduces a "Status" line in the output of
"haproxy -vv" indicating whether it's a development, stable, long-term
supported version, possibly with an estimated end of life for the branch
when it can be anticipated (e.g. for stable versions). This field should
be adjusted when creating a major release to reflect the new status.
It may make sense to backport this to other branches to clarify the
situation.
MEDIUM: ssl/cli: apply SSL configuration on SSL_CTX during commit
Apply the configuration of the ssl_bind_conf on the generated SSL_CTX.
It's a little bit hacky at the moment because the ssl_sock_prepare_ctx()
function was made for the configuration parsing, not for being used at
runtime. Only the 'verify' bind keyword seems to cause a file access, so
we prevent it before calling the function.
MINOR: ssl: ssl_sock_prepare_ctx() return an error code
Rework ssl_sock_prepare_ctx() so it fills a buffer with the error
messages instead of using ha_alert()/ha_warning(). Also returns an error
code (ERR_*) instead of the number of errors.
Daniel Corbett [Sun, 17 Nov 2019 14:48:56 +0000 (09:48 -0500)]
MEDIUM: dns: Add resolve-opts "ignore-weight"
It was noted in #48 that there are times when a configuration
may use the server-template directive with SRV records and
simultaneously want to control weights using an agent-check or
through the runtime api. This patch adds a new option
"ignore-weight" to the "resolve-opts" directive.
When specified, any weight indicated within an SRV record will
be ignored. This is for both initial resolution and ongoing
resolution.
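A hedged configuration sketch of the new option (names and addresses are only
examples):

    resolvers mydns
        nameserver ns1 10.0.0.1:53

    backend be_app
        # SRV record weights are ignored, so weights can be driven by an
        # agent-check or the runtime API instead
        server-template app 10 _http._tcp.app.example.com resolvers mydns resolve-opts ignore-weight check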
Willy Tarreau [Wed, 20 Nov 2019 15:45:15 +0000 (16:45 +0100)]
DOC: internal: document the init calls
INITCALLs are used a lot in the code now and were not documented, resulting
in each user having to grep for functions in other files. This doc is not
perfect but aims at improving the situation. It documents what's been
available since 1.9 and may be backported there if it helps though it's
unlikely to be needed as it's mostly aimed at developers.
BUG/MINOR: stream-int: Fix si_cs_recv() return value
The previous patch on this function (36b536d6c "BUG/MEDIUM: stream-int: Don't
loose events on the CS when an EOS is reported") contains a bug. The return
value is based on the conn-stream's flags. But it may be reset if the CS is
closed. Ironically it was exactly the purpose of this patch...
DOC: Update http-buffer-request description to remove the part about chunks
The limitation on the first chunk for chunked requests was true for the legacy
HTTP mode. But it does not exist with HTX. Because the legacy HTTP mode was
removed in 2.1, this limitation no longer exists.
BUG/MINOR: http-ana: Properly catch aborts during the payload forwarding
When no data filter is registered on a channel, if the message length is known,
the HTTP payload is infinitely forwarded to save calls to process_stream(). When
we finally fall back again into the XFER_BODY analyzers, we detect the end of the
message by checking channel flags. If CF_EOI or CF_SHUTR is set, we switch the
message to the DONE state. For CF_EOI, it is relevant. But not for CF_SHUTR: a
shutdown for reads without the end of input must be interpreted as an abort for
messages with a known length.
Because of this bug, some aborts are not properly handled and reported. Instead,
they are interpreted as legitimate shutdowns.
BUG/MINOR: mux-h1: Fix tunnel mode detection on the response path
There are two issues with the way tunnel mode is detected on the response
path. First, when a response with an unknown content length is handled, the
request is also switched to tunnel mode. This is obviously wrong. Because it is
done on the server side only (so not during the request parsing), it has no
noticeable effect.
The second issue is about the way protocol upgrades are handled. The request is
switched to tunnel mode from the time the 101 response is processed. So an
unfinished request may be switched to tunnel mode too early. It is not a common
use case, but a protocol upgrade on a POST is allowed. Thus, parsing of the
payload may be hijacked. It is especially bad for chunked payloads.
Now, the conditions to switch the request to tunnel mode reflect what should be
done, especially for the second issue: we wait for the end of the request before
switching it to tunnel mode.
This patch must be backported to 2.0 and 1.9. Note that these versions are only
affected by the second issue but the patch cannot easily be split.
BUILD: debug: Avoid warnings in dev mode with -O2 because of some BUG_ON tests
Some BUG_ON() tests emit a warning because of a potential null pointer
dereference on an HTX block. In fact, it should never happen, but now, GCC is
happy.
MINOR: contrib/prometheus-exporter: Add a param to ignore servers in maintenance
By passing the parameter "no-maint" in the query-string, it is now possible to
ignore servers in maintenance. It means that the metrics for servers in this
state will not be exported.
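For example, assuming the exporter is mapped to /metrics on port 8404:

    $ curl 'http://haproxy.local:8404/metrics?no-maint'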
MINOR: contrib/prometheus-exporter: filter exported metrics by scope
Now, the prometheus exporter parses the HTTP query-string to filter or to adapt
the exported metrics. In this first version, it is only possible to select the
scopes of metrics to export. To do so, one or more parameters with "scope" as
name must be passed in the query-string, with one of those values: global,
frontend, backend, server or '*' (means all). A scope parameter with no value
means to filter out all scopes (nothing is returned). The scope parameters are
parsed in their appearance order in the query-string. So an empty scope will
reset all scopes already parsed. But it can be overridden by following scope
parameters in the query-string. By default everything is exported.
The filtering can also be done in the prometheus scraping configuration, but the
general aim is to optimise the source of data to improve load and scraping time.
This is particularly true for huge configurations with thousands of backends and
servers.
Also note that this configuration was possible on the previous official haproxy
exporter but with even more parameters to select the needed metrics. Here we
thought it was sufficient to simply avoid a given type of metric. However, more
filters are still possible.
Thanks to William Dauchy. This patch is based on his work.
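A few hedged query examples, assuming the exporter is mapped to /metrics:

    $ curl 'http://haproxy.local:8404/metrics?scope=frontend&scope=backend'   # only these scopes
    $ curl 'http://haproxy.local:8404/metrics?scope=*'                        # everything (default)
    $ curl 'http://haproxy.local:8404/metrics?scope='                         # nothing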
BUG/MEDIUM: stream-int: Don't loose events on the CS when an EOS is reported
In si_cs_recv(), when a shutdown for reads is handled, the conn-stream may be
closed. It happens when the output channel is closed for writes or if
SI_FL_NOHALF is set on the stream-interface. In this case, the conn-stream's flags
are reset. Thus, if an error (CS_FL_ERROR) or an end of input (CS_FL_EOI) is
reported by the mux, the event is lost. si_cs_recv() does not report these
events by itself. It relies on si_cs_process() to report them to the
stream-interface and/or the channel.
For instance, if CS_FL_EOS and CS_FL_EOI are set by the H1 multiplexer during a
call to si_cs_recv() on the server side, if the conn-stream is closed (read0 +
SI_FL_NOHALF), the CS_FL_EOI flag is lost. Thus, this may lead the stream to
interpret it as a server abort.
Now, conn-stream's flags are processed at the end of si_cs_recv(). The function
is responsible for setting the right flags on the stream-interface and/or the
channel. With this patch, the function is now almost linear. Except for some early
checks at the beginning, there is only one return statement. It also fixes a
potential bug caused by an inconsistency between the splicing and the buffered
receipt: in the first case, CS_FL_EOS is handled before errors on the connection
or the conn-stream; in the second one, it is the opposite.
BUG/MINOR: peers: "peer alive" flag not reset when deconnecting.
The peer flags (->flags member of peer struct) are reset by __peer_session_deinit()
function. PEER_F_ALIVE flag which is used by the heartbeat part of the peer protocol
to mark a peer as being alive was not reset by this function. This simple patch
adds the missing statement.
Note that, at this time, there was no identified issue due to this missing reset.
BUG/MEDIUM: mworker: don't fill the -sf argument with -1 during the reexec
Upon a reexec_on_failure, if the process tried to exit after the
initialization of the process structure but before it was filled with a
PID, the PID in the mworker_proc structure is set to -1.
In this particular case the -sf argument is filled with -1 and haproxy
will exit with the usage message because of that argument.
BUG/MINOR: peers: Wrong null "server_name" data field handling.
As the peers protocol expects to parse at least one encoded integer value for
each stick-table data field, even when not configured on the local side, we
must emit something for the "server_name" data field even if it has not been
set (no server was configured, for instance).
As this data field starts with an encoded integer giving the length of the
remaining data (the dictionary cache entry), we encode a length of 0 when
emitting such an absent dictionary cache entry.
On the remote side, when we decode such an integer with 0 as value, we stop
parsing the data field and that's it.
MINOR: peers: Add debugging information to "show peers".
This patch adds three counters to the "peer" struct to help in debugging
peers protocol issues:
->no_hbt counts the number of reconnection periods without receiving a heartbeat
->new_conn counts the number of reconnections after ->reconnect timeout expirations.
->proto_err counts the number of protocol errors.
Also add RX/TX heartbeat counters to the "peer" struct to get an idea of which
peers are alive and which are not.
Dump the values of these counters on the CLI via the "show peers" command.
MINOR: peers: Always show the table info for disconnected peers.
This patch enables us to dump the stick-table information of remote or local peers
without an already opened peer session. This may also be the case for the local peer
during synchronizations with an old process (reload).
Emmanuel Hocdet [Mon, 4 Nov 2019 14:49:46 +0000 (15:49 +0100)]
BUG/MINOR: ssl: fix crt-list neg filter for openssl < 1.1.1
Certificate selection in client_hello_cb (openssl >= 1.1.1) correctly
handles crt-list neg filter. Certificate selection for openssl < 1.1.1
has not been touched for a while: its crt-list neg filter is not the same
as its counterpart and is wrong. Fix it to mimic the behavior of its
counterpart.
Emmanuel Hocdet [Mon, 4 Nov 2019 17:19:32 +0000 (18:19 +0100)]
BUG/MINOR: ssl: ssl_pkey_info_index ex_data can store a dereferenced pointer
With CLI cert update, sni_ctx can be removed at runtime. ssl_pkey_info_index
ex_data is filled with one of the sni_ctx.kinfo pointers, but an SSL_CTX can be
shared between sni_ctx entries. Removing and freeing a sni_ctx can lead to a
segfault when ssl_pkey_info_index ex_data is used (in ssl_sock_get_pkey_algo). Removing the
dependency on ssl_pkey_info_index ex_data is the easiest way to fix the issue.
William Dauchy [Sun, 17 Nov 2019 14:47:16 +0000 (15:47 +0100)]
MINOR: init: avoid code duplication while setting identify
Since the introduction of mworker, the setuid/setgid code was duplicated in
two places; try to improve that by creating a dedicated function.
This patch does not introduce any functional change.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
William Dauchy [Sun, 17 Nov 2019 14:47:15 +0000 (15:47 +0100)]
BUG/MINOR: init: fix set-dumpable when using uid/gid
In mworker mode used with uid/gid settings, it was not possible to get
a coredump despite the set-dumpable option.
Indeed, the prctl(2) manual page specifies that the dumpable attribute is
reverted to `/proc/sys/fs/suid_dumpable` in a few conditions, such as when
the process's effective user or group is changed.
This patch moves the whole set-dumpable logic before the polling code in
order to catch all possible cases where we could have changed the
uid/gid. It however does not cover the possible segfault at startup.
This patch should be backported to 2.0.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Willy Tarreau [Fri, 15 Nov 2019 17:49:37 +0000 (18:49 +0100)]
[RELEASE] Released version 2.1-dev5
Released version 2.1-dev5 with the following main changes :
- BUG/MEDIUM: ssl/cli: don't alloc path when cert not found
- BUG/MINOR: ssl/cli: unable to update a certificate without bundle extension
- BUG/MINOR: ssl/cli: fix an error when a file is not found
- MINOR: ssl/cli: replace the default_ctx during 'commit ssl cert'
- DOC: fix date and http_date keywords syntax
- MINOR: peers: Add "log" directive to "peers" section.
- BUG/MEDIUM: mux-h1: Disable splicing for chunked messages
- BUG/MEDIUM: stream: Be sure to support splicing at the mux level to enable it
- MINOR: flt_trace: Rename macros to print trace messages
- MINOR: trace: Add a set of macros to trace events if HA is compiled with debug
- MEDIUM: stream/trace: Register a new trace source with its events
- MINOR: doc: http-reuse connection pool fix
- BUG/MEDIUM: stream: Be sure to release allocated captures for TCP streams
- MINOR: http-ana: Remove the unused function http_reset_txn()
- BUG/MINOR: action: do-resolve now use cached response
- BUG: dns: timeout resolve not applied for valid resolutions
- DOC: management: fix typo on "cache_lookups" stats output
- BUG/MINOR: stream: init variables when the list is empty
- BUG/MEDIUM: tasks: Make tasklet_remove_from_tasklet_list() no matter the tasklet.
- BUG/MINOR: queue/threads: make the queue unlinking atomic
- BUG/MEDIUM: Make sure we leave the session list in session_free().
- CLEANUP: session: slightly simplify idle connection cleanup logic
- MINOR: memory: also poison the area on freeing
- CLEANUP: cli: use srv_shutdown_streams() instead of open-coding it
- CLEANUP: stats: use srv_shutdown_streams() instead of open-coding it
- BUG/MEDIUM: listeners: always pause a listener on out-of-resource condition
- BUILD: contrib/da: remove an "unused" warning
- BUG/MEDIUM: filters: Don't call TCP callbacks for HTX streams
- MEDIUM: filters: Adapt filters API to allow again TCP filtering on HTX streams
- MINOR: freq_ctr: Make the sliding window sums thread-safe
- MINOR: stream: Remove the lock on the proxy to update time stats
- MINOR: counters: Add fields to store the max observed for {q,c,d,t}_time
- MINOR: stats: Report max times in addition of the averages for sessions
- MINOR: contrib/prometheus-exporter: Report metrics about max times for sessions
- BUG/MINOR: contrib/prometheus-exporter: Rename some metrics
- MINOR: contrib/prometheus-exporter: report the number of idle conns per server
- DOC: Add missing stats fields in the management manual
- BUG/MINOR: mux-h1: Properly catch parsing errors on payload and trailers
- BUG/MINOR: mux-h1: Don't set CS_FL_EOS on a read0 when receiving data to pipe
- MINOR: mux-h1: Set EOI on the conn-stream when EOS is reported in TUNNEL state
- MINOR: sink: Set the default max length for a message to BUFSIZE
- MINOR: ring: make the parse function automatically set the handler/release
- BUG/MINOR: log: make "show startup-log" use a ring buffer instead
- MINOR: stick-table: allow sc-set-gpt0 to set value from an expression
Cédric Dufour [Wed, 6 Nov 2019 17:38:53 +0000 (18:38 +0100)]
MINOR: stick-table: allow sc-set-gpt0 to set value from an expression
Allow the sc-set-gpt0 action to set GPT0 to a value dynamically evaluated from
its <expr> argument (in addition to the existing static <int> alternative).
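A hedged configuration sketch showing both forms (the tracking key and samples
are arbitrary examples):

    frontend fe
        bind :8080
        stick-table type ip size 1m expire 1h store gpt0
        http-request track-sc0 src
        # static form (pre-existing): always store the integer 1
        http-request sc-set-gpt0(0) 1 if { path_beg /static }
        # dynamic form added by this patch: store a value computed from an expression
        http-request sc-set-gpt0(0) rand(100) if { path_beg /dynamic }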
Willy Tarreau [Fri, 15 Nov 2019 14:16:57 +0000 (15:16 +0100)]
BUG/MINOR: log: make "show startup-log" use a ring buffer instead
The copy of the startup logs used to rely on a re-allocated memory area
on the fly, that would attempt to be delivered at once over the CLI. But
if it's too large (too many warnings) it will take time to start up, and
may not even show up on the CLI as it doesn't fit in a buffer.
The ring buffer infrastructure solves all this with no more code, let's
switch to this instead. It simply requires a parsing function to attach
the ring via ring_attach_cli() and all the rest is automatically handled.
Initially this was imagined as a code cleanup, until a test with a config
involving 100k backends and just one occurrence of
"load-server-state-from-file global" in the defaults section took approx
20 minutes to parse due to the O(N^2) cost of concatenating the warnings
resulting in ~1 TB of data to be copied, while it took only 0.57s with
the ring.
Ideally this patch should be backported to 2.0 and 1.9, though it relies
on the ring infrastructure which will then also need to be backported.
Configs able to trigger the bug are uncommon, so another workaround for
older versions without backporting the rings would consist in simply
limiting the size of the error message in print_message() to something
always printable, which will only return the first errors.
Willy Tarreau [Fri, 15 Nov 2019 14:07:21 +0000 (15:07 +0100)]
MINOR: ring: make the parse function automatically set the handler/release
ring_attach_cli() is called by the keyword parsing function to dump a
ring to the CLI. It can only work with a specific handler and release
function. Let's make it set them appropriately instead of having the
caller know these functions. This way adding a command to dump a ring
is as simple as declaring a parsing function calling ring_attach_cli().
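A rough sketch of what such a parsing function could look like after this
change, assuming the 2.1-era CLI keyword API; "my_ring" is a hypothetical ring:

    /* "show my-ring" : dump the hypothetical ring <my_ring> on the CLI */
    static int cli_parse_show_myring(char **args, char *payload,
                                     struct appctx *appctx, void *private)
    {
        if (!cli_has_level(appctx, ACCESS_LVL_OPER))
            return 1;
        /* the handler and release callbacks are now set by ring_attach_cli() */
        return ring_attach_cli(my_ring, appctx);
    }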
MINOR: sink: Set the default max length for a message to BUFSIZE
It was set to MAX_SYSLOG_LEN (1K). That is a bit short to print debug
traces, especially when part of a buffer is dumped. Now, the maximum length is
set to BUFSIZE (16K).
BUG/MINOR: mux-h1: Don't set CS_FL_EOS on a read0 when receiving data to pipe
This is mandatory to process input one more time to add the EOM in the HTX
message and to set CS_FL_EOI on the conn-stream. Otherwise, in the stream, a
SHUTR will be reported on the corresponding channel without the EOI. It may be
erroneously interpreted as an abort.
BUG/MINOR: mux-h1: Properly catch parsing errors on payload and trailers
Errors during the payload or the trailers parsing are reported with the
HTX_FL_PARSING_ERROR flag on the HTX message and not a negative return
value. This change was introduced when the functions converting an H1 message to
an HTX one were moved to a dedicated file. But the H1 mux was not fully updated
accordingly.
No backport needed except if the commits about file h1_htx.c are backported.
MINOR: contrib/prometheus-exporter: Report metrics about max times for sessions
Now, for the sessions, the maximum times (queue, connect, response, total) are
reported in addition to the averages over the last 1024 connections. These
metrics are reported per backend and per server. Here are the metric names:
MINOR: stats: Report max times in addition of the averages for sessions
Now, for the sessions, the maximum times (queue, connect, response, total) are
reported in addition to the averages over the last 1024 connections. These
values are called qtime_max, ctime_max, rtime_max and ttime_max.
MINOR: counters: Add fields to store the max observed for {q,c,d,t}_time
For backends and servers, some average times over the last 1024 connections are
already calculated. For the moment, the averages of the time passed in the
queue, the connect time, the response time (for HTTP sessions only) and the total
time are calculated. Now, in addition, the maximum times observed for these
values are also stored.
These new counters are cleared like all other max values with the CLI
command "clear counters".
MEDIUM: filters: Adapt filters API to allow again TCP filtering on HTX streams
This change makes the payload filtering uniform between TCP and HTTP
filters. Now, in TCP, like in HTTP, there is only one callback responsible for
forwarding data. Thus, the old callbacks, tcp_data() and tcp_forward_data(), are
replaced by a single callback function, tcp_payload(). This new callback gets
the offset in the payload to (re)start the filtering and the maximum amount of
data it can forward. It is the filter's responsibility to be compatible with HTX
streams. If not, it must not set the flag FLT_CFG_FL_HTX.
Because of this change, nxt and fwd offsets are no longer needed. Thus they are
removed from the filter structure with their update functions,
flt_change_next_size() and flt_change_forward_size(). Moreover, the trace filter
has been updated accordingly.
This patch breaks the compatibility with the old API. Thus it should probably
not be backported. But, AFAIK, there is no TCP filter, thus the breakage is very
limited.
BUG/MEDIUM: filters: Don't call TCP callbacks for HTX streams
For now, TCP callbacks are incompatible with the HTX streams because they are
designed to manipulate raw buffers. A new callback will probably be added to be
used in both modes, raw and HTX. So, for HTX streams, these callbacks are
ignored. This should not be a real problem because there are no known filters,
except the trace filter, implementing these callbacks.
Willy Tarreau [Fri, 15 Nov 2019 09:20:07 +0000 (10:20 +0100)]
BUG/MEDIUM: listeners: always pause a listener on out-of-resource condition
A corner case was opened in the listener_accept() code by commit 3f0d02bbc2
("MAJOR: listener: do not hold the listener lock in listener_accept()"). The
issue is when one listener (or a group of) managed to eat all the proxy's or
all the process's maxconn, and another listener tries to accept a new socket.
This results in the atomic increment detecting the excess connection count
and aborting immediately, without pausing the listener, thus the call is
immediately performed again. This doesn't happen when the test is run on a
single listener because this listener got limited when crossing the limit.
But with 2 or more listeners, we don't have this luxury.
The solution consists in limiting the listener as soon as we have to
decline accepting an incoming connection. This means that the listener
will not be marked full yet if it gets the exact connection count but
this is not a problem in practice since all other listeners will only be
marked full after their first attempt. Thus from now on, a listener is
only full once it has already failed taking an incoming connection.
This bug was definitely responsible for the unreproducible occasional
reports of high CPU usage showing epoll_wait() returning immediately
without accepting an incoming connection, like in bug #129.
Willy Tarreau [Thu, 14 Nov 2019 15:42:04 +0000 (16:42 +0100)]
CLEANUP: stats: use srv_shutdown_streams() instead of open-coding it
The "shutdown sessions" admin-mode command used to open-code the list
traversal while there's already a function for this: srv_shutdown_streams().
Better use it.
Willy Tarreau [Thu, 14 Nov 2019 15:37:16 +0000 (16:37 +0100)]
CLEANUP: cli: use srv_shutdown_streams() instead of open-coding it
The "shutdown session server" command used to open-code the list traversal
while there's already a function for this: srv_shutdown_streams(). Better
use it.
Since previous commit a132e5efa9 ("BUG/MEDIUM: Make sure we leave the
session list in session_free().") it's pointless to delete the conn
element inside "if" blocks given that the second test is always true
as well. Let's simplify this with a single LIST_DEL_INIT() before the
test.
Olivier Houchard [Thu, 14 Nov 2019 18:26:14 +0000 (19:26 +0100)]
BUG/MEDIUM: Make sure we leave the session list in session_free().
In session_free(), if we're about to destroy a connection that had no mux,
make sure we leave the session_list before calling conn_free(). Otherwise,
conn_free() would call session_unown_conn(), which would potentially free
the associated srv_list, but session_free() also frees it, so that would
lead to a double free, and random memory corruption.
Willy Tarreau [Thu, 14 Nov 2019 13:58:39 +0000 (14:58 +0100)]
BUG/MINOR: queue/threads: make the queue unlinking atomic
There is a very short race in the queues which happens in the following
situation:
- stream A on thread 1 is being processed by a server
- stream B on thread 2 waits in the backend queue for a server
- stream B on thread 2 is fed up with waiting and expires, calls
stream_free() which calls pendconn_free(), which sees the
stream attached
- at the exact same instant, stream A finishes on thread 1, sees
one stream is waiting (B), detaches it and wakes it up
- stream B continues pendconn_free() and calls pendconn_unlink()
- pendconn_unlink() now detaches the node again and performs a
second deletion (harmless since idempotent), and decrements
srv/px->nbpend again
=> the number of connections on the proxy or server may reach -1 if/when
this race occurs.
It is extremely tight as it can only occur during the test on p->leaf_p
though it has been witnessed at least once. The solution consists in
testing leaf_p again once the lock is held to make sure the element was
not removed in the meantime.
This should be backported to 2.0 and 1.9, probably even 1.8.
BUG/MEDIUM: tasks: Make tasklet_remove_from_tasklet_list() no matter the tasklet.
In tasklet_remove_from_tasklet_list(), we can be called for a tasklet that is
either in the private task list, or in the shared tasklet list. Take that into
account and always use MT_LIST_DEL() to remove it, otherwise if we're in the
shared list and another thread attempts to add a tasklet in it, bad things
will happen.
__tasklet_remove_from_tasklet_list() is left unchanged, it's only supposed
to be used by process_runnable_task() to remove tasks/tasklets from the private
task list.
This should not be backported.
This should fix github issue #357.
Jerome Magnin [Sat, 9 Nov 2019 17:00:47 +0000 (18:00 +0100)]
BUG/MINOR: stream: init variables when the list is empty
We need to call vars_init() when the list is empty otherwise we
can't use variables in the response scope. This regression was
introduced by cda7f3f5 (MINOR: stream: don't prune variables if
the list is empty).
The following config reproduces the issue:
defaults
    mode http

frontend in
    bind *:11223
    http-request set-var(req.foo) str("foo") if { path /bar }
    http-request set-header bar %[var(req.foo)] if { var(req.foo) -m found }
    http-response set-var(res.bar) str("bar")
    http-response set-header foo %[var(res.bar)] if { var(res.bar) -m found }
    use_backend out

backend out
    server s1 127.0.0.1:11224

listen back
    bind *:11224
    http-request deny deny_status 200
BUG: dns: timeout resolve not applied for valid resolutions
Documentation states that the interval between 2 DNS resolution is
driven by "timeout resolve <time>" directive.
From a code point of view, this was applied unless the latest status of
the resolution was VALID. In such a case, "hold valid" was enforced.
This is a bug, because "hold" timers are not there to drive how often we
want to trigger a DNS resolution, but rather how long we want to keep the
information if the status of the resolution itself has changed.
This avoids flapping and prevents shutting down an entire backend when a
DNS server is not answering.
This issue was reported by hamshiva in github issue #345.
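For reference, a hedged example of the two settings involved:

    resolvers mydns
        nameserver ns1 10.0.0.1:53
        # how often a resolution is triggered
        timeout resolve 10s
        # how long the last answer is kept when the resolution status changes
        hold valid 30s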
Baptiste Assmann [Wed, 30 Oct 2019 15:06:53 +0000 (16:06 +0100)]
BUG/MINOR: action: do-resolve now use cached response
As reported by David Birdsong on the ML, the HTTP action do-resolve does
not use the DNS cache.
Actually, the action is "registered" to the resolution for the said name to
be resolved and waits until another requester triggers it. Once the
resolution is finished, the action is updated with the result.
To trigger this, you must have a server with runtime DNS resolution
enabled and run a do-resolve action with the same fqdn AND the same
resolvers section.
This patch fixes this behavior by ensuring the resolution associated to
the action has a valid answer which is not considered as expired. If
those conditions are valid, then we can use it (it's the "cache").
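A hedged configuration sketch of the situation described above, where the
action and a server share the same fqdn and resolvers section (names are
examples):

    resolvers mydns
        nameserver ns1 10.0.0.1:53

    frontend fe
        bind :8080
        # do-resolve on the same fqdn and the same resolvers section as the
        # server below, which is the situation that triggered the bug
        http-request do-resolve(txn.dstip,mydns,ipv4) str(app.example.com)
        http-request set-header X-Resolved %[var(txn.dstip)]
        default_backend be_app

    backend be_app
        server app app.example.com:80 resolvers mydns check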
MINOR: http-ana: Remove the unused function http_reset_txn()
Since the legacy HTTP mode was removed, the stream is always released at the end
of each HTTP transaction and a new one is created to handle the next request for
keep-alive connections. So the HTTP transaction is no longer reset and the
function http_reset_txn() can be removed.
BUG/MEDIUM: stream: Be sure to release allocated captures for TCP streams
All TCP and HTTP captures are stored in 2 arrays, one for the request and
another for the response. In HAProxy 1.5, these arrays are part of the HTTP
transaction and thus are released during its cleanup. Because in this version,
the transaction is part of the stream (in 1.5, streams are still called
sessions), the cleanup is always performed, for HTTP and TCP streams.
In HAProxy 1.6, the HTTP transaction was moved out from the stream and is now
dynamically allocated only when required (because of an HTTP proxy or an HTTP
sample fetch). In addition, still in 1.6, the captures arrays were moved from
the HTTP transaction to the stream. This way, it is still possible to capture
elements from TCP rules for a full TCP stream. Unfortunately, the release is
still exclusively performed during the HTTP transaction cleanup. Thus, for a TCP
stream where the HTTP transaction is not required, the TCP captures, if any, are
never released.
Now, all captures are released when the stream is freed. This fixes the memory
leak for TCP streams. For streams with an HTTP transaction, the captures are now
released when the transaction is reset and not systematically during its
cleanup.
MEDIUM: stream/trace: Register a new trace source with its events
Runtime traces are now supported for the streams, only if compiled with
debug. process_stream() is covered as well as TCP/HTTP analyzers and filters.
In traces, the first argument is always a stream. So it is easy to get the info
about the channels and the stream-interfaces. The second argument, when defined,
is always an HTTP transaction. And the third one is an HTTP message. The trace
message is adapted to report HTTP info when possible.
MINOR: trace: Add a set of macros to trace events if HA is compiled with debug
The macros DBG_TRACE_*() can be used instead of existing trace macros to emit
trace messages in debug mode only, ie, when HAProxy is compiled with DEBUG_FULL
or DEBUG_DEV. Otherwise, these macros do nothing. So it is possible to add
traces for development purpose without impacting performance of production
instances.
MINOR: flt_trace: Rename macros to print trace messages
Names of these macros may enter in conflict with the macros of the runtime
tracing mechanism. So the prefix "FLT_" has been added to avoid any ambiguities.
BUG/MEDIUM: stream: Be sure to support splicing at the mux level to enable it
Despite the addition of the mux layer, no change has been made on how to enable
TCP splicing in process_stream(). We still check if the transport layers on both
sides support splicing, but we don't check whether the muxes support it. So it is
possible to start to splice data with an unencrypted H2 connection on a side and
an H1 connection on the other. This leads to a freeze of the stream until a
client or server timeout is reached.
This patch fixes a part of issue #356. It must be backported as far as 1.8.
BUG/MEDIUM: mux-h1: Disable splicing for chunked messages
The mux H1 announces the support of the TCP splicing. It only works for payload
data. It works for messages with an explicit content-length or for tunnelled
data. For chunked messages, the mux H1 should normally not try to xfer more than
the current chunk through the pipe. Unfortunately, this works on the read side
but the send is completely bogus. During the output formatting, the announced
size of chunks does not handle the size that will be spliced. Because there is
no formatting when spliced data are sent, the produced message is malformed and
rejected by the peer.
For now, because it is quick and simple, the TCP splicing is disabled for
chunked messages. I will try to enable it again in a proper way. I don't know
for now if it will be backportable in previous versions. This will depend on the
amount of changes required to handle it.
This patch fixes a part of the issue #356. It must be backported to 2.0 and 1.9.
MINOR: peers: Add "log" directive to "peers" section.
This patch is easy to review: let's call the parse_logsrv() function to parse
the "log" directive, as is already done for other sections such as proxies.
This enables us to log incoming TCP connections to the listeners of "peers"
sections.
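A hedged example of what this enables (addresses are placeholders):

    peers mypeers
        log 127.0.0.1:514 local0 info
        peer hap1 192.168.0.1:10000
        peer hap2 192.168.0.2:10000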
Cyril Bonté [Tue, 5 Nov 2019 22:13:59 +0000 (23:13 +0100)]
DOC: fix date and http_date keywords syntax
These keywords received a second argument with commit ae6f125 ("MINOR:
sample: add us/ms support to date/http_date"). Each argument is optional,
it's not either both or none.
MINOR: ssl/cli: replace the default_ctx during 'commit ssl cert'
If the SSL_CTX of a previous instance (ckch_inst) was used as a
default_ctx, replace the default_ctx of the bind_conf by the first
SSL_CTX inserted in the SNI tree.
Use the RWLOCK of the sni tree to handle the change of the default_ctx.
BUG/MINOR: ssl/cli: fix an error when a file is not found
When trying to update a certificate <file>.{rsa,ecdsa,dsa} which does not
exist, while <file> was used as a regular file in the
configuration, the error was ambiguous. Correct it so we can return a
"certificate not found" error.
BUG/MINOR: ssl/cli: unable to update a certificate without bundle extension
Commit bc6ca7c ("MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'")
broke the ability to commit a unique certificate which does not use a
bundle extension .{rsa,ecdsa,dsa}.
BUG/MEDIUM: ssl/cli: don't alloc path when cert not found
When doing an 'ssl set cert' with a certificate which does not exist in
configuration, the appctx->ctx.ssl.old_ckchs->path was duplicated while
appctx->ctx.ssl.old_ckchs was NULL, resulting in a NULL dereference.
Move the code so the 'not referenced' error is done before this.
Willy Tarreau [Sun, 3 Nov 2019 14:43:10 +0000 (15:43 +0100)]
[RELEASE] Released version 2.1-dev4
Released version 2.1-dev4 with the following main changes :
- BUG/MINOR: cli: don't call the kw->io_release if kw->parse failed
- BUG/MINOR: mux-h2: Don't pretend mux buffers aren't full anymore if nothing sent
- BUG/MAJOR: stream-int: Don't receive data from mux until SI_ST_EST is reached
- DOC: remove obsolete section about header manipulation
- BUG/MINOR: ssl/cli: cleanup on cli_parse_set_cert error
- MINOR: ssl/cli: rework the 'set ssl cert' IO handler
- BUILD: CI: comment out cygwin build, upgrade various ssl libraries
- DOC: Improve documentation of http-re(quest|sponse) replace-(header|value|uri)
- BUILD/MINOR: tools: shut up the format truncation warning in get_gmt_offset()
- BUG/MINOR: spoe: fix off-by-one length in UUID format string
- BUILD/MINOR: ssl: shut up a build warning about format truncation
- BUILD: do not disable -Wformat-truncation anymore
- MINOR: chunk: add chunk_istcat() to concatenate an ist after a chunk
- Revert "MINOR: istbuf: add b_fromist() to make a buffer from an ist"
- MINOR: mux: Add a new method to get informations about a mux.
- BUG/MEDIUM: stream_interface: Only use SI_ST_RDY when the mux is ready.
- BUG/MEDIUM: servers: Only set SF_SRV_REUSED if the connection if fully ready.
- MINOR: doc: fix busy-polling performance reference
- MINOR: config: allow no set-dumpable config option
- MINOR: init: always fail when setrlimit fails
- MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'
- CLEANUP: ssl/cli: remove leftovers of bundle/certs (it < 2)
- REGTEST: vtest can now enable mcli with its own flag
- BUG/MINOR: config: Update cookie domain warn to RFC6265
- MINOR: sample: add us/ms support to date/http_date
- BUG/MINOR: ssl/cli: check trash allocation in cli_io_handler_commit_cert()
- BUG/MEDIUM: mux-h2: report no available stream on a connection having errors
- BUG/MEDIUM: mux-h2: immediately remove a failed connection from the idle list
- BUG/MEDIUM: mux-h2: immediately report connection errors on streams
- BUG/MINOR: stats: properly check the path and not the whole URI
- BUG/MINOR: ssl: segfault in cli_parse_set_cert with old openssl/boringssl
- BUG/MINOR: ssl: ckch->chain must be initialized
- BUG/MINOR: ssl: double free on error for ckch->{key,cert}
- MINOR: ssl: BoringSSL ocsp_response does not need issuer
- BUG/MEDIUM: ssl/cli: fix dot research in cli_parse_set_cert
- MINOR: backend: Add srv_name sample fetche
- DOC: Add GitHub issue config.yml
Willy Tarreau [Thu, 31 Oct 2019 14:50:28 +0000 (15:50 +0100)]
BUG/MINOR: stats: properly check the path and not the whole URI
Since we now have full URIs with h2, stats may fail to work over H2
so we must carefully only check the path there if the stats URI was
passed with a path only. This way it remains possible to intercept
proxy requests to report stats on explicit domains but it continues
to work as expected on origin requests.
Willy Tarreau [Thu, 31 Oct 2019 14:48:18 +0000 (15:48 +0100)]
BUG/MEDIUM: mux-h2: immediately report connection errors on streams
In case a stream tries to send on a connection error, we must report the
error so that the stream interface keeps the data available and may safely
retry on another connection. Till now this would happen only before the
connection was established, not in case of a failed handshake or an early
GOAWAY for example.
Willy Tarreau [Thu, 31 Oct 2019 14:36:30 +0000 (15:36 +0100)]
BUG/MEDIUM: mux-h2: immediately remove a failed connection from the idle list
If a connection faces an error or a timeout, it must be removed from its
idle list ASAP. We certainly don't want to risk sending new streams on
it.
This should be backported to 2.0 (replacing MT_LIST_DEL with LIST_DEL_LOCKED)
and 1.9 (there's no lock there, the idle lists are per-thread and per-server
however a LIST_DEL_INIT will be needed).
Willy Tarreau [Thu, 31 Oct 2019 14:10:03 +0000 (15:10 +0100)]
BUG/MEDIUM: mux-h2: report no available stream on a connection having errors
If an H2 mux has met an error, we must not report available streams
anymore, or it risks accumulating new streams while not being able
to process them.
Damien Claisse [Wed, 30 Oct 2019 15:57:28 +0000 (15:57 +0000)]
MINOR: sample: add us/ms support to date/http_date
It can be sometimes interesting to have a timestamp with a
resolution of less than a second.
It is currently painful to obtain this, because the concatenation
of date and date_us leads to a shorter timestamp during the first
100ms of each second, which is not parseable and needs ugly ACLs
in the configuration to prepend 0s when needed.
To improve this, add an optional <unit> parameter to date sample
to report an integer with desired unit.
Also support this unit in http_date converter to report
a date string with sub-second precision.
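A hedged usage sketch (the header name is just an example):

    # millisecond-resolution timestamp, no date/date_us concatenation needed
    http-request set-header X-Request-Ts %[date(0,ms)]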
Joao Morais [Thu, 31 Oct 2019 00:04:00 +0000 (21:04 -0300)]
BUG/MINOR: config: Update cookie domain warn to RFC6265
The domain option of the cookie keyword allows to define which domain or
domains should use the cookie value of a cookie-based server
affinity. If the domain does not start with a dot, the user agent should
only use the cookie on hosts that matches the provided domains. If the
configured domain starts with a dot, the user agent can use the cookie
with any host ending with the configured domain.
The haproxy config parser helps the admin by warning about a potentially buggy
config: defining a domain without an embedded dot which does not start
with a dot, which is forbidden by the RFC.
The current condition to issue the warning implements RFC2109. This
change updates the implementation to RFC6265 which allows domain without
a leading dot.
Should be backported to all supported versions. The feature exists at least
since 1.5.
Jerome Magnin [Wed, 30 Oct 2019 17:27:45 +0000 (18:27 +0100)]
REGTEST: vtest can now enable mcli with its own flag
VTest can now enable mworker and mcli with separate flags, so let's
update the vtc files that need it. This also allows us to revert the change
made with 1545a59c ("REGTESTS: make seamless-reload depend on 1.9
and above").
MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'
This patch splits the 'set ssl cert' CLI command into 2 commands.
The previous way of updating the certificate on the CLI was limited with
the bundles. It was only able to apply one of the tree part of the
certificate during an update, which mean that we needed 3 updates to
update a full 3 certs bundle.
It was also not possible to apply atomically several part of a
certificate with the ability to rollback on error. (For example applying
a .pem, then a .ocsp, then a .sctl)
The command 'set ssl cert' will now duplicate the certificate (or
bundle) and update it in a temporary transaction.
The second command 'commit ssl cert' will commit all the changes made
during the transaction for the certificate.
This commit breaks the ability to update a certificate which was used as
a unique file and as a bundle in the HAProxy configuration. This way of
using the certificates wasn't making any sense.
Example:
// For a bundle:
$ echo -e "set ssl cert localhost.pem.rsa <<\n$(cat kikyo.pem.rsa)\n" | socat /tmp/sock1 -
Transaction created for certificate localhost.pem!
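And then, following the example above, the transaction would be committed
with (sketch):
$ echo "commit ssl cert localhost.pem" | socat /tmp/sock1 -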
William Dauchy [Sun, 27 Oct 2019 19:08:11 +0000 (20:08 +0100)]
MINOR: init: always fail when setrlimit fails
This patch introduces a strict-limits parameter which enforces the
setrlimit setting instead of a warning. This option can be forcibly
disabled with the "no" keyword.
The general aim of this patch is to avoid bad surprises on a production
environment where you change the maxconn for example, a new fd limit is
calculated, but cannot be set because of sysfs setting. In that case you
might want to have an explicit failure to be aware of it before seeing
your traffic going down. During a global rollout it is also useful to
explictly fail as most progressive rollout would simply check the
general health check of the process.
As discussed, the plan is to make the strict mode the default starting from v2.3.
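A hedged configuration sketch:

    global
        # fail at startup instead of only warning when setrlimit() cannot
        # apply the computed limits; may be negated with "no strict-limits"
        strict-limits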
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
William Dauchy [Sun, 27 Oct 2019 19:08:10 +0000 (20:08 +0100)]
MINOR: config: allow no set-dumpable config option
In global config parsing, we currently expect to have a possible "no"
keyword (KWN_NO), but we never allow it in config parsing.
Another patch could have been to simply remove the code handling a
possible KWN_NO.
Take this opportunity to update the documentation of set-dumpable.
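A hedged sketch of both forms now accepted in the global section:

    global
        set-dumpable
        # the negated form is now parsed as well:
        # no set-dumpable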
Signed-off-by: William Dauchy <w.dauchy@criteo.com>