MEDIUM: h1: Don't wake the H1 tasklet if we got the whole request.
In h1_rcv_buf(), don't wake the H1 tasklet to attempt to receive more data
if we got the whole request. Doing so would lead to a recv, and maybe a
subscribe, that may not be needed.
If the connection is keep-alive, the tasklet will be woken up later by
h1_detach(), so that we'll be able to get the next request, or an end of
connection.
MEDIUM: h1: Don't try to subscribe if we managed to read data.
In h1_recv(), don't subscribe if we managed to receive data. We may not have
to, if we received a complete request, and a new receive will be attempted
later, as the tasklet is woken up either by h1_rcv_buf() or by h1_detach().
DOC: improve the wording in CONTRIBUTING about how to document a bug fix
Insufficiently described bug fixes are still too frequent. It's a real
pain to create each new maintenance release, as 3/4 of the time is spent
trying to guess what problem a patch fixes, which is already important
in order to decide whether to pick the fix or not, but is even more
crucial in order to write understandable release notes.
Christopher rightfully demands that a patch tagged "BUG" MUST ABSOLUTELY
describe the problem and why this problem is a bug. Describing the fix
is one thing but if the bug is unknown, why would there be a fix? How
can a stable maintainer be convinced to take a fix if its author didn't
care about checking whether it was a real bug? This patch tries to
explain a bit better what really needs to appear in the commit message
and how to describe a bug.
BUG/MINOR: log: make sure writev() is not interrupted on a file output
Since 1.9 we support sending logs to various non-blocking outputs like
stdout/stderr or files, by using writev() which guarantees that it only
returns after having written everything or nothing. However the syscall
may be interrupted while doing so, and this is visible when writing to
a tty during debug sessions, as some logs occasionally appear interleaved
if an xterm or SSH connection is not very fast. Performance here is not a
critical concern, log correctness is. Let's simply take the logger's lock
around the writev() call to prevent multiple senders from stepping onto
each other's toes.
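A minimal sketch of the approach, with illustrative names rather than the
actual log.c code, is simply to hold one lock across the whole writev() call:

    #include <sys/types.h>
    #include <sys/uio.h>
    #include <pthread.h>

    /* One global lock serializes senders so that a log entry's writev()
     * cannot interleave with another sender's output. */
    static pthread_mutex_t logger_lock = PTHREAD_MUTEX_INITIALIZER;

    static ssize_t send_log_line(int fd, const struct iovec *iov, int iovcnt)
    {
        ssize_t ret;

        pthread_mutex_lock(&logger_lock);
        ret = writev(fd, iov, iovcnt);   /* whole entry emitted under the lock */
        pthread_mutex_unlock(&logger_lock);
        return ret;
    }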
BUG/MEDIUM: streams: Don't switch the SI to SI_ST_DIS if we have data to send.
In sess_established(), don't immediately switch the backend stream_interface
to SI_ST_DIS if we only got a SHUTR. We may still have something to send,
e.g. if the request is a POST, and we should be switched to SI_ST_DIS later
when the shutw will happen.
BUG/MEDIUM: lb-chash: Fix the realloc() when the number of nodes is increased
When the number of nodes is increased because the server weight is changed, the
nodes array must be reallocated. But its new size is not correctly set: only the
total number of nodes is used to compute the new size, while it must also depend
on the size of a node. The new size must be the total number of nodes times the
size of a node.
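As a minimal sketch of the corrected computation, with an illustrative node
type rather than the real lb_chash structures:

    #include <stdlib.h>

    struct chash_node { unsigned int key; void *data; };   /* illustrative */

    static struct chash_node *resize_nodes(struct chash_node *nodes, size_t count)
    {
        /* new size = number of nodes times the size of one node */
        struct chash_node *tmp = realloc(nodes, count * sizeof(*nodes));
        return tmp;   /* caller keeps the old array if tmp is NULL */
    }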
This issue was reported on Github (#189).
This patch must be backported to all versions since 1.6.
This one was added by commit daacf3664 ("BUG/MEDIUM: protocols: add a
global lock for the init/deinit stuff") but I forgot to add it to the
include file, breaking DEBUG_THREAD.
MEDIUM: mux-h1: Add the support of headers adjustment for bogus HTTP/1 apps
There is no standard case for HTTP header names because, as stated in
RFC 7230, they are case-insensitive. So applications must handle them in a
case-insensitive manner. But some bogus applications erroneously rely on the
case used by most browsers. This problem becomes critical with HTTP/2
because all header names must be exchanged in lowercase. And HAProxy uses the
same convention. All header names are sent in lowercase to clients and servers,
regardless of the HTTP version.
This design choice is linked to the HTX implementation. So, for previous
versions (2.0 and 1.9), a workaround is to disable the HTX mode to fall
back to the legacy HTTP mode.
Since the legacy HTTP mode was removed, some users reported interoperability
issues because their applications were no longer able to handle HTTP/1 messages
received from HAProxy. So, we've decided to add a way to change the case of some
headers before sending them. It is now possible to define a "mapping" between a
lowercase header name and a version supported by the bogus application. To do
so, you must use the global directives "h1-case-adjust" and
"h1-case-adjust-file". Then options "h1-case-adjust-bogus-client" and
"h1-case-adjust-bogus-server" may be used in proxy sections to enable the
conversion. See the configuration manual for more info.
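The mapping concept can be sketched as follows; this is only an illustration
of the idea, not the actual mux-h1 implementation, and the table entries are
examples:

    #include <strings.h>

    struct h1_case_map { const char *lower; const char *adjusted; };

    static const struct h1_case_map case_map[] = {
        { "content-length", "Content-Length" },   /* example entries */
        { "content-type",   "Content-Type"   },
        { NULL, NULL }
    };

    /* Return the spelling expected by the bogus peer, or the lowercase
     * name unchanged when no mapping is configured. */
    static const char *adjust_header_case(const char *name)
    {
        const struct h1_case_map *m;

        for (m = case_map; m->lower; m++)
            if (strcasecmp(name, m->lower) == 0)
                return m->adjusted;
        return name;
    }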
Of course, our advice is to urgently upgrade these applications for
interoperability concerns and because they may be vulnerable to various types of
content smuggling attacks. But if you are really forced to use an unmaintained
bogus application, you may use these directives, at your own risk.
If it is relevant, this feature may be backported to 2.0.
There is one unprotected call to stop_proxy() from the manage_proxy()
task, so there is a single caller by definition, but there is also
another such call from the CLI's "shutdown frontend" parser. This
one does it under the proxy's lock but the first one doesn't use it.
Thus it is theoretically possible to corrupt the list of listeners in a
proxy by issuing "shutdown frontend" and SIGUSR1 exactly at the same
time. While it sounds particularly contrived or stupid, it could
possibly happen with automated tools that would send actions via
various channels. This could cause the process to loop forever or
to crash and thus stop faster than expected.
BUG/MEDIUM: protocols: add a global lock for the init/deinit stuff
Dragan Dosen found that the listeners lock is not sufficient to protect
the listeners list when proxies are stopping because the listeners are
also unlinked from the protocol list, and under certain situations like
bombing with soft-stop signals or shutting down many frontends in parallel
from multiple CLI connections, it could be possible to provoke multiple
instances of delete_listener() to be called in parallel for different
listeners, thus corrupting the protocol lists.
Such operations are pretty rare, they are performed once per proxy upon
startup and once per proxy on shutdown. Thus there is no point trying
to optimize anything and we can use a global lock to protect the protocol
lists during these manipulations.
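A minimal sketch of the idea, with illustrative types rather than the real
protocol structures: the rare unlink operation simply runs under a single
global lock:

    #include <pthread.h>

    struct listener { struct listener *next; };   /* illustrative */

    static pthread_mutex_t proto_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct listener *proto_listeners;

    static void delete_listener_sketch(struct listener *l)
    {
        struct listener **prev = &proto_listeners;

        pthread_mutex_lock(&proto_lock);
        while (*prev && *prev != l)
            prev = &(*prev)->next;
        if (*prev)
            *prev = l->next;          /* unlink under the global lock */
        pthread_mutex_unlock(&proto_lock);
    }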
This fix (or a variant) will have to be backported as far as 1.8.
BUG/CRITICAL: http_ana: Fix parsing of malformed cookies which start by a delimiter
When client-side or server-side cookies are parsed, HAProxy enters an
infinite loop if a Cookie/Set-Cookie header value starts with a delimiter (a comma
or a semicolon). Depending on the operating system, the service may become
degraded, unresponsive, or may trigger haproxy's watchdog causing a service stop
or automatic restart.
To fix this bug, in the loop parsing the attributes, we must be sure to always
skip delimiters once the first attribute-value pair has been parsed, whether it
is empty or not. The credit for the fix goes to Olivier.
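The progress rule can be illustrated with a small standalone sketch (not the
actual http_ana code):

    #include <stdio.h>
    #include <string.h>

    /* After handling one attribute, always consume the delimiters, even when
     * the attribute was empty, so a value such as ";foo=bar" cannot stall
     * the parser on the same position forever. */
    static void parse_cookie_attrs(const char *val)
    {
        const char *p = val, *end = val + strlen(val);

        while (p < end) {
            const char *att_beg = p, *att_end = p;

            while (att_end < end && *att_end != ';' && *att_end != ',')
                att_end++;

            if (att_end > att_beg)
                printf("attribute: %.*s\n", (int)(att_end - att_beg), att_beg);

            /* the fix: unconditionally skip delimiters before looping again */
            p = att_end;
            while (p < end && (*p == ';' || *p == ',' || *p == ' '))
                p++;
        }
    }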
CVE-2019-14241 was assigned to this bug. This patch fixes the Github issue #181.
This patch must be backported to 2.0 and 1.9. However, the patch will have to be
adapted.
Empty error files may be used to disable the sending of any message for specific
error codes. A common use-case is to use the file "/dev/null". This way the
default error message is overridden and no message is returned to the client. It
was supported in the legacy HTTP mode, but not in HTX. Because of a bug, such
messages triggered an error.
This patch must be backported to 2.0 and 1.9. However, the patch will have to be
adapted.
BUG/MINOR: http_ana: Be sure to have an allocated buffer to generate an error
In http_reply_and_close() and http_server_error(), we must be sure to have an
allocated buffer (buf.size > 0) to consider it as a valid HTX message. For now,
there is no way to hit this bug. But a fix to support "empty" error messages in
HTX is pending. Such empty messages, after parsing, will be converted into
unallocated buffer (buf.size == 0).
This patch must be backported to 2.0 and 1.9. However, the patch will have to be
adapted.
BUG/MEDIUM: tcp-checks: do not dereference inexisting conn_stream
Github user @jpulz reported a crash with tcp-checks in issue #184
where cs==NULL. If we enter the function with cs==NULL and check->result
!= CHK_RES_UNKNOWN, we'll go directly to out_end_tcpcheck and dereference
cs. We must validate there that cs is valid (and conn at the same time
since it would be NULL as well).
BUG/MINOR: mux-h1: Close server connection if input data remains in h1_detach()
With the previous commit 03627245c ("BUG/MEDIUM: mux-h1: Trim excess server data
at the end of a transaction"), we try to avoid handling junk data coming from a
server as a response. But it only works for data already received. Starting from
the moment a server sends an invalid response, it is safer to close the
connection too, because more data may come afterwards and there is no good
reason to handle it.
So now, when a conn_stream is detached from a server connection, if there are
some unexpected input data, we simply trim them and close the connection
ASAP. We don't close it immediately only if there are still some outgoing data
to deliver to the server.
MEDIUM: backend: remove impossible cases from connect_server()
Now that we start by releasing any possibly existing connection,
the conditions simplify a little bit and some of the complex cases
can be removed. A few comments were also added for non-obvious cases.
MEDIUM: backend: always release any existing prior connection in connect_server()
When entering connect_server() we're not supposed to have a connection
already, except when retrying a failed connection, which is pretty rare.
Let's simplify the code by starting to unconditionally release any existing
connection. For now we don't go further, as this change alone will lead to
quite some simplification that'd rather be done as a separate cleanup.
MEDIUM: lua: do not allocate the remote connection anymore
Lua cosockets do not need to allocate the remote connection anymore.
However this was trickier than expected because some tests were made
on this remote connection's existence to detect establishment instead
of relying on the stream interface's state (which is how it's now done).
The flag SF_ADDR_SET was set a bit too early (before assigning the
address) so this was moved to the right place. It should not have had
any impact beyond confusing debugging.
The only remaining occurrence of the remote connection knowledge now
is for getsockname() which requires access to the connection to issue the
syscall, and it's unlikely that we'll need to change this before
QUIC or so.
MAJOR: stream: store the target address into s->target_addr
When forcing the outgoing address of a connection, till now we used to
allocate this outgoing connection and set the address into it, then set
SF_ADDR_SET. With connection reuse this causes a whole lot of issues and
difficulties in the code.
Thanks to the previous changes, it is now possible to store the target
address into the stream instead, and copy the address from the stream to
the connection when initializing the connection. assign_server_address()
does this and as a result SF_ADDR_SET now reflects the presence of the
target address in the stream, not in the connection. The http_proxy mode,
the peers and the master's CLI now use the same mechanism. For now the
existing connection code was not removed to limit the amount of tricky
changes, but the allocated connection is not used anymore.
This change also revealed a latent issue that we've been having around
option http_proxy : the address was set in the connection but neither the
SF_ADDR_SET nor the SF_ASSIGNED flags were set. It looks like the connection
could only be established because it existed with a non-null
destination address.
MINOR: stream: add a new target_addr entry in the stream structure
The purpose will be to store the target address there and not to
allocate a connection just for this anymore. For now it's only placed
in the struct, a few fields were moved to plug some holes, and the
entry is freed on release (never allocated yet for now). This must
have no impact. Note that in order to fit, the store_count which
previously was an int was turned into a short, which is way more
than enough given that the hard-coded limit is 8.
MINOR: connection: don't use clear_addr() anymore, just release the address
Now that we have dynamically allocated addresses, there's no need to
clear an address before reusing it, just release it. Note that this
is not equivalent to saying that an address is never zero, as shown in
assign_server_address() where an address 0.0.0.0 can still be assigned
to a connection for the time it takes to modify it.
Now addresses are dynamically allocated when needed. Each connection is
created with src=dst=NULL, these entries are allocated on the fly, and
released when the connection is released.
MEDIUM: connection: make sure all address producers allocate their address
This commit places calls to sockaddr_alloc() at the places where an address
is needed, and makes sure that the allocation is properly tested. This does
not add too many error paths since connection allocations are already in the
vicinity and share the same error paths. In the two cases where clear_addr()
used to be called, the address is simply not allocated instead.
MINOR: connection: create a new pool for struct sockaddr_storage
This pool will be used to allocate storage for source and destination
addresses used in connections. Two functions sockaddr_{alloc,free}()
were added and will have to be used everywhere an address is needed.
These ones are safe for progressive replacement as they check that the
existing pointer is set before replacing it. The pool is not yet used
during allocation nor freeing. Also they operate on pointers to pointers
so they will perform checks and replace values. The free one nulls the
pointer.
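Their contract can be sketched as follows; calloc()/free() stand in for the
dedicated pool used by the real helpers:

    #include <stdlib.h>
    #include <sys/socket.h>

    /* Allocate only when the caller's pointer is still NULL, and clear the
     * pointer on free, matching the contract described above. */
    static struct sockaddr_storage *sockaddr_alloc_sketch(struct sockaddr_storage **sap)
    {
        if (!*sap)
            *sap = calloc(1, sizeof(**sap));
        return *sap;      /* NULL on allocation failure */
    }

    static void sockaddr_free_sketch(struct sockaddr_storage **sap)
    {
        free(*sap);
        *sap = NULL;      /* the caller's pointer is nulled */
    }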
MEDIUM: backend: turn all conn->addr.{from,to} to conn->{src,dst}
All reads were carefully reviewed for only reading already checked
values. Assignments were commented indicating that an allocation will
be needed once they become dynamic. The memset() used to clear the
addresses should then be turned to a free() and a NULL assignment.
MINOR: checks: replace conn->addr.to with conn->dst
Two places will require a dynamic address allocation since the connection
is created from scratch. For the source address it looks like the
clear_addr() call will simply have to be removed as the pointer will
already be NULL.
MINOR: tcp: replace conn->addr.{from,to} with conn->{src,dst}
Most of the locations were already safe, only two places needed to have
one extra check to avoid assuming that cli_conn->src is necessarily set
(it is in practice but let's stay safe).
MINOR: session: use conn->src instead of conn->addr.from
In session_accept_fd() we'll soon have to dynamically allocate the
address, or better, steal it from the caller and define a strict calling
convention regarding who's responsible for the freeing. In the simpler
session_prepare_log_prefix(), just add an attempt to retrieve the address
if not yet set and do not dereference it on failure.
MINOR: stream: switch from conn->addr.{from,to} to conn->{src,dst}
No allocation is needed there. Some extra checks were added in the
stream dump code to make sure the source address is effectively valid
(it always is but it doesn't cost much to be certain).
MINOR: htx: switch from conn->addr.{from,to} to conn->{src,dst}
One place (transparent proxy) will require an allocation when the
address becomes dynamic. A few dereferences of the family were adjusted
to preliminary check for the address pointer to exist at all. The
remaining operations were already performed under control of a
successful retrieval.
MINOR: peers: use conn->dst for the peer's target address
The target address is duplicated from the peer's configured one. For
now we keep the target address as-is but we'll have to dynamically
allocate it and place it into the stream instead. Maybe a sockaddr_dup()
will help by the way.
The "show peers" part is safe as it's already called after checking
the addresses' validity.
MINOR: lua: switch to conn->dst for a connection's target address
This one will soon need a dynamic allocation, though this will be
temporary as ideally the address will be placed on the stream and no
connection will be allocated anymore.
MINOR: connection: use conn->{src,dst} instead of &conn->addr.{from,to}
This is in preparation for the switch to dynamic address allocation,
let's migrate the code using the old fields to the pointers instead.
Note that no extra check was added for now, the purpose is only to
get the code to use the pointers and still work.
In the proxy protocol message handling we make sure the addresses are
properly allocated before declaring them unset.
At the moment we're facing difficulties with connection reuse based on
the fact that connections may be allocated very early only to set a
target address in transparent mode. With the imminent removal of the
legacy mode, the connection reuse by a same stream will not exist
anymore and all this awful complexity is not justified anymore. However
we still need to be able to assign addresses somewhere.
Thus instead of allocating a connection, we'll only place addresses where
needed in the stream during operations. But this takes quite some room
(typically 128 bytes). This is a nice opportunity for cleaning all this
up and dynamically allocating the address fields, which will result
in actually saving memory from connection structs since most of the time
the client's "to" address is not used and the server's "from" is not used
either, thus saving ~256 bytes per end-to-end connection.
For now these new "src" and "dst" pointers point to addr.from and addr.to.
This will allow us to smoothly update the whole code to use these pointers
prior to going further and switching them to pools.
CLEANUP: connection: remove the now unused conn_get_{from,to}_addr()
These functions are not used anymore. They didn't report failures and
as such were often misused. conn_get_src() and conn_get_dst() now
replaced them everywhere.
MINOR: http: check the source address via conn_get_src() in sample fetch functions
In smp_fetch_url32_src() and smp_fetch_base32_src() it's better to
validate that the source address was properly initialized since it
will soon be dynamic, thus let's call conn_get_src().
MINOR: stream/cli: use conn_get_{src,dst} in "show sess" and "show peers" output
The stream output requires retrieving connection sources and
destinations. The previous call involving conn_get_{to,from}_addr()
was missing a status check which has now been integrated with the
new call since these places already handle connection errors there.
The same code parts were reused for "show peers" and were modified
similarly.
MINOR: stream-int: use conn_get_{src,dst} in conn_si_send_proxy()
These ones replace the previous conn_get_{from,to}_addr() used to wait
for the connection establishment before sending a LOCAL line. The
error handling was preserved.
MINOR: backend: switch to conn_get_{src,dst}() for port and address mapping
The backend connect code uses conn_get_{from,to}_addr to forward addresses
in transparent mode and to map server ports, without really checking if the
operation succeeds. In preparation of future changes, let's switch to
conn_get_{src,dst}() and integrate status check for possible failures.
MINOR: frontend: switch to conn_get_{src,dst}() for logging and debugging
The frontend accept code uses conn_get_{from,to}_addr for logging and
debugging, without really checking if the operation succeeds. In
preparation of future changes, let's switch to conn_get_{src,dst}() and
integrate status check for possible failures.
MINOR: connection: add conn_get_src() and conn_get_dst()
These functions currently are the same as conn_get_from_addr() and
conn_get_to_addr() respectively except that they return a status for
the operation that the caller can test.
BUG/MEDIUM: mux-h1: Trim excess server data at the end of a transaction
At the end of a transaction, when the conn_stream is detached from the H1
connection, on the server side, we must release the input buffer to trim any
excess data received from the server to be sure to block invalid responses. A
typical example of such data would be from a buggy server responding to a HEAD
with some data, or sending more than the advertised content-length.
This issue was reported on GitHub. See issue #176.
MINOR: config: Warn only if the option http-use-htx is used with "no" prefix
No warning message is emitted anymore if the option is used to enable the
HTX. But it is still displayed when the "no" prefix is used to disable the HTX
explicitly. So, for existing configs, we display a warning only if there is a
change in the behavior of HAProxy between 2.1 and the previous versions.
BUG/MINOR: checks: do not exit tcp-checks from the middle of the loop
There's a comment above tcpcheck_main() clearly stating that no return
statement should be placed in the middle, still we did have one after
installing the mux. It looks mostly harmless though as it will only
fail to mark the server as being in error in case of allocation failure
or config issue.
This fix should be backported to 2.0 and probably 1.9 as well.
BUG/MINOR: session: Send a default HTTP error if accept fails for a H1 socket
If session_accept_fd() fails for a raw HTTP socket, we try to send an HTTP error
500. But we must not rely on error messages of the proxy or on the array
http_err_chunks because these are HTX messages. And it would be too expensive
to convert an HTX message to a raw message at this place. So instead, we send a
default HTTP error message from the array http_err_msgs.
BUG/MINOR: session: Emit an HTTP error if accept fails only for H1 connection
If session_accept_fd() fails for a raw HTTP socket, we try to send an HTTP error
500. But we must also make sure it is an HTTP/1 connection. We cannot rely on
the mux at this stage, because the error, if any, happens before or during its
creation. So, instead, we check if the mux_proto is specified or not. Indeed,
the mux h1 cannot be forced on the bind line and there is no ALPN to choose
another mux on a raw socket. So if there is no mux_proto defined for a raw HTTP
socket, we are sure to have an HTTP/1 connection.
MINOR: http: Don't store raw HTTP errors in chunks anymore
Default HTTP error messages are stored in an array of chunks. And since the HTX
was added, these messages are also converted to HTX and stored in another
array. But now, the first array is not used anymore because the legacy HTTP mode
was removed.
So now, only the array with the HTX messages is kept. The other one was
removed.
MINOR: global: Preset tune.max_http_hdr to its default value
By default, this tune parameter is set to MAX_HTTP_HDR. This assignment is done
after the configuration parsing, when we check the configuration validity. So
during the configuration parsing, its value is 0. Now, it is set to MAX_HTTP_HDR
from the start. So, it is possible to rely on it during the configuration
parsing.
MINOR: proxy: Remove support of the option 'http-tunnel'
The option 'http-tunnel' is deprecated and it was only used in the legacy HTTP
mode. So this option is now totally ignored and a warning is emitted during
HAProxy startup if it is found in a configuration file.
REORG: proto_htx: Move HTX analyzers & co to http_ana.{c,h} files
The old module proto_http does not exist anymore. All code dedicated to the HTTP
analysis is now grouped in the file proto_htx.c. So, to finish the polishing
after removing the legacy HTTP code, the proto_htx.{c,h} files have been moved to
http_ana.{c,h} files.
In addition, all HTX analyzers and related functions prefixed with "htx_" have
been renamed to start with "http_" instead.
Many flags of the HTTP transaction (TX_*) are now unused and useless. So the
flags TX_WAIT_CLEANUP, TX_HDR_CONN_*, TX_CON_CLO_SET and TX_CON_KAL_SET were
removed. Most of TX_CON_WANT_* were also removed. Only TX_CON_WANT_TUN has been
kept.
MINOR: hlua: Remove useless test on TX_CON_WANT_* flags
When an HTTP applet is initialized, it is useless to force server-close mode on
the HTTP transaction because the connection mode is now handled by muxes. In
HTX, during analysis, the flag TX_CON_WANT_CLO is set by default in
htx_wait_for_request(), and TX_CON_WANT_SCL is never tested anywhere.
First of all, all legacy HTTP analyzers and all functions exclusively used by
them were removed. So most of the functions in proto_http.{c,h} were
removed. Only functions to deal with the HTTP transaction have been kept. Then,
http_msg and hdr_idx modules were entirely removed. And finally the structure
http_msg was stripped of all the information that was only useful to the legacy
HTTP code. The structure hdr_ctx was also removed because it is now unused, just
like the unused states
in the enum h1_state. Note that the memory pool "hdr_idx" was removed and
"http_txn" is now smaller.
MAJOR: filters: Remove code relying on the legacy HTTP mode
This commit breaks the compatibility with filters still relying on the legacy
HTTP code. The legacy callbacks were removed (http_data, http_chunk_trailers and
http_forward_data).
For now, the filters must still set the flag FLT_CFG_FL_HTX to be used on HTX
streams.
MINOR: stats: Remove code relying on the legacy HTTP mode
The part of the applet dealing with raw buffer was removed, for the HTTP part
only. So the old functions stats_send_http_headers() and
stats_send_http_redirect() were removed and replaced by the htx ones. The legacy
applet I/O handler was replaced by the htx one. And the parsing of POST data was
purged of the legacy HTTP code.
MINOR: flt_trace: Remove code relying on the legacy HTTP mode
The legacy HTTP callbacks were removed (trace_http_data,
trace_http_chunk_trailers and trace_http_forward_data). And the loop on the HTTP
headers was updated to only handle HTX messages.
MEDIUM: compression: Remove code relying on the legacy HTTP mode
The legacy HTTP callbacks were removed (comp_http_data, comp_http_chunk_trailers
and comp_http_forward_data). Functions emitting compressed chunks of data for
the legacy HTTP mode were also removed. The state for the compression filter was
updated accordingly. The compression context and the algorithm used to compress
data are the only useful information remaining.
MEDIUM: cache: Remove code relying on the legacy HTTP mode
The applet delivering cached objects based on the legacy HTTP code was removed,
as was the filter callback cache_store_http_forward_data(). And the action analyzing
the response coming from the server to store it in the cache or not was purged
of the legacy HTTP code.
MEDIUM: backend: Remove code relying on the HTTP legacy mode
The L7 load-balancing algorithms are concerned (uri, url_param and hdr), the
"sni" parameter on the server line and the "source" parameter on the server line
when used with "use_src hdr_ip()".
MINOR: proxy: Don't adjust connection mode of HTTP proxies anymore
This was only used for the legacy HTTP mode where the connection mode was
handled by the HTTP analyzers. In HTX, the function http_adjust_conn_mode() does
nothing. The connection mode is handled by the muxes.
MINOR: proxy: Remove tests on the option 'http-use-htx' during H1 upgrade
To know if an upgrade from TCP to H1 must be performed, we now only need to know
if a non-HTX stream is assigned to an HTTP backend. So we no longer rely on
the flag PR_O2_USE_HTX to handle such upgrades.
MINOR: stream: Remove tests on the option 'http-use-htx' in stream_new()
All streams created for an HTTP proxy must now use the HTX internal
representation. So, it is no longer necessary to test the flag PR_O2_USE_HTX. It
means a stream is an HTX stream if the frontend is an HTTP proxy or if the
frontend multiplexer, if any, sets the flag MX_FL_HTX.
MEDIUM: http_fetch: Remove code relying on HTTP legacy mode
Since the legacy HTTP mode is disabled, all HTTP sample fetches work on HTX
streams. So it is safe to remove all code relying on HTTP legacy mode. Among
other things, the function smp_prefetch_http() was removed with the associated
macros CHECK_HTTP_MESSAGE_FIRST() and CHECK_HTTP_MESSAGE_FIRST_PERM().
MINOR: stream: Rely on HTX analyzers instead of legacy HTTP ones
Since the legacy HTTP mode is disabled, old HTTP analyzers do nothing but call
those of the HTX. So, it is safe to directly call HTX analyzers from
process_stream().
MINOR: connection: Remove the multiplexer protocol PROTO_MODE_HTX
Since the legacy HTTP mode is disabled and no multiplexer relies on it anymore,
there is no reason to have 2 multiplexer protocols for HTTP. So the protocol
PROTO_MODE_HTX was removed and all HTTP multiplexers now use PROTO_MODE_HTTP.