BUG/MEDIUM: mux-h1: Ignore header modifications about payload representation
During the message analysis, we now ignore modifications of the payload
representation if only the headers are updated and not the meta-data. This
covers, for instance, a C-L header removed to add a T-E one, or the opposite,
via HTTP actions. Such changes are ignored because it is extremely hard to be
sure the payload will be properly formatted.
This issue has existed since the HTX was introduced and it was never
reported. Thus, there is no reason to backport this patch for now. It relies
on the following commits:
* MINOR: mux-h1: Add flags if outgoing msg contains a header about its payload
* MINOR: mux-h1: Rely on H1S_F_HAVE_CHNK to add T-E in outgoing messages
* BUG/MEDIUM: mux-h1: Add C-L header in outgoing message if it was removed
BUG/MEDIUM: mux-h1: Add C-L header in outgoing message if it was removed
If a C-L header was found while parsing a message but was removed via an
HTTP action, it is re-added during the message formatting. Indeed, if headers
about the payload are modified, the meta-data of the message must also be
updated. Otherwise, it is not possible to guarantee the message will be
properly formatted.
To do so, we rely on the flag H1S_F_HAVE_CLEN.
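As an illustration, here is a minimal sketch of the idea; the helper and
field names below (clen_found, h1_add_clen_header(), h1s->body_len) are
simplified and not the actual mux-h1 code:

    /* When formatting the outgoing headers: the parser saw a C-L header
     * (H1S_F_HAVE_CLEN is set) but it is no longer present in the HTX
     * headers, so re-add it from the known body length to keep the
     * outgoing message consistent with its meta-data.
     */
    if ((h1s->flags & H1S_F_HAVE_CLEN) && !clen_found)
        h1_add_clen_header(outbuf, h1s->body_len);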
This patch should not be backported unless an issue is explicitly
reported. It relies on "MINOR: mux-h1: Add flags if outgoing msg contains a
header about its payload".
MINOR: mux-h1: Rely on H1S_F_HAVE_CHNK to add T-E in outgoing messages
If a message is declared to have a known length but no C-L or T-E header is
set, a "Transfer-Encoding: chunked" header is automatically added. It is
useful for H2/H3 messages with no C-L header. There is now a flag to know
whether this header was found or added, so we use it.
MINOR: mux-h1: Add flags if outgoing msg contains a header about its payload
If a "Content-length" or "Transfer-Encoding; chunked" headers is found or
inserted in an outgoing message, a specific flag is now set on the H1
stream. H1S_F_HAVE_CLEN is set for "Content-length" header and
H1S_F_HAVE_CHNK for "Transfer-Encoding: chunked".
This will be useful to properly format outgoing messages, even if one of
these headers was removed by hand (with no update of the message meta-data).
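A rough sketch of how such flags can be declared and set; the bit values
and the header-scanning details here are illustrative, not the exact haproxy
code:

    /* Illustrative flag definitions on the H1 stream (h1s->flags) */
    #define H1S_F_HAVE_CLEN  0x00004000 /* C-L header found or added */
    #define H1S_F_HAVE_CHNK  0x00008000 /* T-E: chunked found or added */

    /* while emitting each outgoing header <n>: <v> */
    if (isteqi(n, ist("content-length")))
        h1s->flags |= H1S_F_HAVE_CLEN;
    else if (isteqi(n, ist("transfer-encoding")) && isteqi(v, ist("chunked")))
        h1s->flags |= H1S_F_HAVE_CHNK;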
REGTESTS: filters: Don't set C-L header in the successful response to CONNECT
In the random-forwarding.vtc script, adding a "Content-Length: 0" header in
the successful response to the CONNECT request is invalid, and it may also
lead to a wrong check on the response. The "rxresp" directive doesn't handle
CONNECT responses. Thus "-no_obj" must be added instead, to be sure the
payload won't be retrieved or expected.
BUG/MEDIUM: h1: Ignore C-L value in the H1 parser if T-E is also set
In fact, during parsing there is already a test to remove the Content-Length
header if a Transfer-Encoding one is found. However, in the parser, the
Content-Length value was still used to set the body length (the final one
and the remaining one). This value was thus also used to set the extra field
in the HTX message and was then used during the sending stage to announce
the chunk size.
So, the Content-Length header value must be ignored by the H1 parser to
properly reformat the message when it is sent.
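The resulting parser rule boils down to something like this minimal sketch
(it uses the real H1_MF_* flags and h1m fields, but the control flow is
simplified):

    /* after all headers were parsed: if Transfer-Encoding was seen, the
     * Content-Length value must not contribute to the announced body
     * length, so drop it entirely.
     */
    if (h1m->flags & H1_MF_CHNK) {
        h1m->flags &= ~H1_MF_CLEN;
        h1m->body_len = h1m->curr_len = 0;
    }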
This patch must be backported as far as 2.6. Lower versions don't handle
this case.
BUG/MINOR: mux-h1: Ignore C-L when sending H1 messages if T-E is also set
In fact, it is already done but both flags (H1_MF_CLEN and H1_MF_CHUNK) are
set on the H1 parser. This is error-prone when H1 messages are sent,
especially because most of the time the "Content-Length" case is processed
before the "chunked" one. This may lead to computing the wrong chunk size
and missing the last chunk.
This patch must be backported as far as 2.6. This case is not handled in 2.4
and lower.
BUG/MINOR: mux-h1: Handle read0 in rcv_pipe() only when data receipt was tried
In the rcv_pipe() callback we must be careful not to report the end of
stream too early because some data may still be present in the input
buffer. If we report an EOS here, this will block the subsequent call to
rcv_buf() from processing the remaining input data. This only happens when
we try a last rcv_pipe() when the xfer length is unknown and all data was
already received in the input buffer. Concretely, this happens with a
payload larger than one buffer but smaller than two.
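A sketch of the intended guard; the condition names are simplified (notably
"read0_pending" and "tried_receiving") and do not map one-to-one to the
mux-h1 code:

    /* in h1_rcv_pipe(): only report EOS when a receive was actually
     * attempted and nothing remains bufferized, otherwise let the
     * subsequent rcv_buf() call drain the input buffer first.
     */
    if (read0_pending && tried_receiving && !b_data(&h1c->ibuf))
        se_fl_set(h1s->sd, SE_FL_EOS);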
BUG/MEDIUM: hlua: Initialize appctx used by a lua socket on connect only
The appctx used by a lua socket was synchronously initialized after the
appctx creation. The connect itself is performed later. This is an issue
because the script may be interrupted between the two operations. In this
case, the stream attached to the appctx is woken up before any destination
is set. The stream tries to connect but, without a destination, it fails.
When the lua script is rescheduled and the connect is performed, the
connection has already failed and an error is returned.
To fix the issue, we must be sure not to wake up the stream before the
connect. To do so, we must defer the appctx initialization. It is now
performed on connect.
This patch relies on the following commits:
* MINOR: hlua: Test the hlua struct first when the lua socket is connecting
* MINOR: hlua: Save the lua socket's server in its context
* MINOR: hlua: Save the lua socket's timeout in its context
* MINOR: hlua: Don't perform operations on a not connected socket
* MINOR: hlua: Set context's appctx when the lua socket is created
MINOR: hlua: Save the lua socket's server in its context
For the same reason as for the timeout, the server used by a lua socket is
now saved in its context. This will be mandatory to fix issues with the lua
sockets.
MINOR: hlua: Save the lua socket's timeout in its context
When the lua socket timeout is set, it is now saved in its context. If there
is already a stream attached to the appctx, the timeout is then immediately
modified. Otherwise, it is modified when the stream is created, thus during
the appctx initialization.
For now, the appctx is initialized when it is created. But this will change
to fix issues with the lua sockets. Thus, this patch is mandatory.
MINOR: hlua: Don't perform operations on a not connected socket
There is nothing that prevents someone from creating a lua socket and trying
to receive or write before the connection is established or after the
shutdown is performed. The same is true when info about the socket is
retrieved.
It is not an issue because this will fail later. But now, we check whether
the socket is connected earlier. It is more efficient, and it will also be
mandatory to fix issues with the lua sockets.
MINOR: hlua: Set context's appctx when the lua socket is created
The lua socket's context references the owning appctx. It used to be set
when the appctx was initialized. It is now set when the appctx is created.
It is a small change but it will be required to fix several issues with the
lua sockets.
BUILD: pool: Fix GCC error about potential null pointer dereference
In pool_gc(), GCC 13.2.1 reports an error about a potential null pointer
dereference:
src/pool.c: In function ‘pool_gc’:
src/pool.c:807:64: error: potential null pointer dereference [-Werror=null-dereference]
807 | entry->buckets[bucket].free_list = temp->next;
| ~~~~^~~~~~
There is no issue here because the "bucket" variable cannot be greater than
CONFIG_HAP_POOL_BUCKETS. But to make GCC happy, we now break out of the loop
if it is greater than or equal to CONFIG_HAP_POOL_BUCKETS.
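The workaround has roughly this shape; it is a simplified sketch of the
pattern, not a copy of pool_gc():

    /* look for the next bucket with a non-empty free list; the explicit
     * break makes the upper bound obvious to the compiler, so it can no
     * longer assume the free_list (and thus temp) may be dereferenced
     * past the end of the array.
     */
    while (!entry->buckets[bucket].free_list) {
        bucket++;
        if (bucket >= CONFIG_HAP_POOL_BUCKETS)
            break;
    }
    if (bucket >= CONFIG_HAP_POOL_BUCKETS)
        continue; /* nothing to release in this pool */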
MINOR: proto_reverse_connect: support source address setting
Support backend configuration for explicit source address on
pre-connect. These settings can be specified via "source" backend
keyword or directly on the server line.
Previously, all source parameters triggered a BUG_ON() when binding a
reverse connect listener. This was done because some settings are
incompatible with the reverse connect context: this is the case for all
source settings which do not specify a fixed address but rather rely on a
frontend connection. Indeed, in the case of pre-connect, the connection is
initiated on its own, without any previous frontend connection.
This patch allows using a source parameter with a fixed address. All other
settings (usesrc client/clientip/hdr_ip) are still rejected on listener
binding. On connection init, alloc_bind_address() is used to set the
optional source address.
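For example, such a configuration could look like this (illustrative
setup):

    backend be_preconnect
        # fixed source address: now accepted for pre-connect
        source 192.0.2.10
        # also valid directly on the server line:
        server srv1 203.0.113.5:443 source 192.0.2.11
        # "source 0.0.0.0 usesrc clientip" would be rejected at binding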
MINOR: backend: refactor specific source address allocation
Refactor alloc_bind_address() function which is used to allocate a
sockaddr if a connection to a target server relies on a specific source
address setting.
The main objective of this change is to be able to use this function
outside of backend module, namely for preconnections using a reverse
server. As such, this function is now exported globally.
For reverse connect, there is no stream instance. As such, the parts of the
function which relied on it were reduced to the minimum. Now, the stream is
only used if a non-static address is configured, which is useful for usesrc
client|clientip|hdr_ip. These options make no sense for reverse connect, so
it should be safe to use the same function.
MINOR: quic: handle perm error on bind during runtime
Improve EACCES permission errors encountered when using a QUIC connection
socket at runtime:
* The first occurrence of the error on the process will generate a log
warning. This should prevent users from using a privileged port without the
mandatory access rights.
* The socket mode will automatically fall back to the listener socket for
the receiver instance. This requires duplicating the settings from the
bind_conf to the receiver instance to support configurations with multiple
addresses on the same bind line.
Define a new bind option quic-socket:
quic-socket [ connection | listener ]
This new setting works in conjunction with the existing global setting
tune.quic.socket-owner and reuses the same semantics.
The purpose of this setting is to allow disabling connection socket usage on
listener instances individually. This will notably be useful when it needs
to be deactivated after encountering a fatal permission error on bind() at
runtime.
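For instance, assuming a typical HTTP/3 bind line, the option could be used
as follows (illustrative configuration):

    frontend fe_quic
        # force per-connection sockets on this listener only
        bind quic4@0.0.0.0:443 ssl crt /etc/haproxy/site.pem alpn h3 quic-socket connection
        # or stick to the shared listener socket:
        # bind quic4@0.0.0.0:443 ssl crt /etc/haproxy/site.pem alpn h3 quic-socket listener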
DOC: sample: Add a comment in 'check_operator' to explain why 'vars_check_arg' should ignore the 'err' buffer
This extra comment ensures that we do not try to pass an 'err' argument to
'vars_check_arg', otherwise some warnings will be raised if an operator is
given an integer directly in the configuration file.
The "check_operator" function is used for all the operator converters
such as "and", "or", "add"...
With such a converter that accepts a variable name as well as an
integer, the "vars_check_arg" call is expected to fail when an integer
is provided. Passing an "err" variable has the unwanted side effect of
raising a warning during init for a configuration such as the following:
http-request set-query "s=%[rand,add(20)]"
which raises the following warning:
[WARNING] (33040) : config : parsing [hap.cfg:14] : invalid
variable name '20'. A variable name must be start by its scope. The
scope can be 'proc', 'sess', 'txn', 'req', 'res' or 'check'.
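The resulting call in check_operator() is therefore of this shape (sketch,
simplified from the real code):

    /* deliberately pass NULL instead of an err pointer: when the
     * argument is a plain integer such as in add(20), vars_check_arg()
     * is expected to fail and we silently fall back to integer parsing,
     * without emitting the "invalid variable name" warning at init time.
     */
    if (!vars_check_arg(&args[0], NULL)) {
        /* not a variable name: parse the argument as an integer */
    }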
BUG/MAJOR: plock: fix major bug in pl_take_w() introduced with EBO
When EBO was brought to pl_take_w() by plock commit 60d750d ("plock: use
EBO when waiting for readers to leave in take_w() and stow()"), a mistake
was made: the mask against which the current value of the lock is tested
excludes the first reader like in stow(), but it must not because it was
just obtained via an ldadd() which means that it doesn't count itself.
The problem this causes is that if there is exactly one reader when a
writer grabs the lock, the writer will not wait for it to leave before
starting its operations.
The solution consists in checking for any reader in the IF. However the
mask passed to pl_wait_unlock_*() must still exclude the lowest bit as
it's verified after a subsequent load.
Kudos to Remi Tricot-Le Breton for reporting and bisecting this issue
with a reproducer.
No backport is needed since this was brought in 2.9-dev3 with commit 8178a5211 ("MAJOR: threads/plock: update the embedded library again").
The code is now on par with plock commit ada70fe.
BUG/MINOR: proto_reverse_connect: fix FD leak upon connect
new_reverse_conn() was creating its own socket with
sock_create_server_socket(). However the connect is done with
conn->ctrl->connect(), which is tcp_connect_server().
tcp_connect_server() also creates its own socket and sets it in the struct
conn, leaving the previous socket unclosed and leaking at each attempt.
This patch fixes the issue by letting tcp_connect_server() handle the socket
part, and removes it from new_reverse_conn().
MINOR: tcp_act: remove limitation on protocol for attach-srv
This patch allows specifying "tcp-request session attach-srv" without
requiring that each associated bind line mandates HTTP/2 usage. If an
unsupported protocol is targeted by this rule, conn_install_mux_fe() is
responsible for rejecting it.
This change is mandatory to be able to mix attach-srv and standard
non-reversable connections on the same bind instances. An ACL can be used to
activate attach-srv only under some conditions.
MINOR: connection: define mux flag for reverse support
Add a new MUX flag MX_FL_REVERSABLE. This value is used to indicate that the
MUX instance supports connection reversal. For the moment, only the HTTP/2
multiplexer is flagged with it.
This allows dynamically checking whether reversal can be completed during
MUX installation. This will allow relaxing the config-writing requirement
for 'tcp-request session attach-srv', which currently cannot be mixed with
non-HTTP/2 listener instances, even if used conditionally with an ACL.
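Conceptually, the check during MUX installation looks like this; the sketch
uses the real MX_FL_REVERSABLE flag but the surrounding names (notably
conn_reversal_pending()) are illustrative:

    /* in conn_install_mux_fe(): a connection targeted for reversal can
     * only proceed if the selected mux advertises reversal support.
     */
    if (conn_reversal_pending(conn) &&
        !(mux_ops->flags & MX_FL_REVERSABLE)) {
        conn->err_code = CO_ER_REVERSE;
        return -1;
    }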
MINOR: connection: define error for reverse connect
Define a new connection error code CO_ER_REVERSE. This will be used to
report an issue which happens on a connection targeted for reversal before
the reverse process is completed.
Fix the parser for the tcp-request session attach-srv rule. Before this
commit, it was impossible to use an anonymous ACL with it. This was caused
by a badly implemented support for the optional name argument.
BUG/MINOR: proto_reverse_connect: fix FD leak on connection error
Listener using "rev@" address is responsible to setup connection and
reverse it using a server instance. If an error occured before reversal
is completed, proper freeing must be taken care of by the listener as no
session exists for this.
Currently, there are two locations where a connection is freed on error
before reversal inside the reverse_connect protocol. Both of these were
incomplete, as several functions must be used to ensure the connection is
properly freed. This commit fixes this by reusing the same cleaning
mechanism used inside the H2 multiplexer.
One of the biggest drawbacks before this patch was that the connection FD
was not properly removed from the fdtab, which caused a file-descriptor
leak.
MINOR: stream: fix output alignment of stuck thread dumps
Since commit c185bc465 ("MEDIUM: stream: now provide full stream dumps
in case of loops"), the stuck threads show the stream's pointer in the
margin since it appears immediately after a line feed. Let's add it after
the prefix and "stream=" to make the output more readable.
While running h2load tests with h3, we noticed this behavior:
Client ---- INIT no token SCID = a , DCID = A ---> Server (1)
Client <--- RETRY+TOKEN DCID = a, SCID = B ---- Server (2)
Client ---- INIT+TOKEN SCID = a , DCID = B ---> Server (3)
Client <--- INIT DCID = a, SCID = C ---- Server (4)
Client ---- INIT+TOKEN SCID = a, DCID = C ---> Server (5)
With (5) dropped by haproxy due to token validation.
Indeed the previous patch added the SCID of the retry packet to the aad used
to cipher the token. It was useful to validate that the next INIT packets
including the token are sent by the client using the newly provided SCID as
DCID, as mentioned in RFC 9000.
But this stateless information is lost on INIT packets received after the
first outgoing INIT packet from the server, because the client is also
supposed to re-use the latest received SCID a second time for its new
DCID. This breaks the token validation on those last packets and they are
dropped by haproxy.
It was discussed there:
https://mailarchive.ietf.org/arch/msg/quic/7kXVvzhNCpgPk6FwtyPuIC6tRk0/
To sum up: it is not the role of the server to verify the re-use of the
retry's SCID as DCID in further client INIT packets.
The previous patch must be reverted in all versions where it was backported
(presumably as far as 2.6).
MEDIUM: stream: now provide full stream dumps in case of loops
When a stream is caught looping, we produce some output to help figure out
its internal state and explain why it's looping. The problem is that this
debug output is quite old and the info it provides is quite insufficient to
debug a modern process, and since such bugs happen only once or twice a year
the situation doesn't improve.
On the other hand the output of "show sess all" is extremely detailed
and kept up to date with code evolutions since it's a heavily used
debugging tool.
This commit replaces the call to the totally outdated stream_dump() with
a call to strm_dump_to_buffer(), and removes the filters dump since they
are already emitted there, and it now produces much more exploitable
output:
Note that the output is subject to the global anon key so that IPs and
object names can be anonymized if required. It could make sense to
backport this and the few related previous patches next time such an
issue is reported.
MINOR: streams: add support for line prefixes to strm_dump_to_buffer()
Now the function can prepend every new line with a caller-fed prefix that
will later be used for indenting. The caller has to feed the prefix for the
first line itself though, making it possible to append the first line at the
end of an existing one.
MINOR: stream: make stream_dump() always multi-line
There used to be two working modes for this function, a single-line one
and a multi-line one, the difference being made on the "eol" argument
which could contain either a space or an LF (and with the prefix being
adjusted accordingly). Let's get rid of the single-line mode as it's
what limits the output contents because it's difficult to produce
exploitable structured data this way. It was only used in the rare case
of spinning streams and applets and these are the ones lacking info. Now
a spinning stream produces:
[ALERT] (3511) : A bogus STREAM [0x227e7b0] is spinning at 5581202 calls per second and refuses to die, aborting now! Please report this error to developers:
strm=0x227e7b0,c4a src=127.0.0.1 fe=public be=public dst=s1
txn=0x2041650,3000 txn.req=MSG_DONE,4c txn.rsp=MSG_RPBEFORE,0
rqf=1840000 rqa=8000 rpf=80000000 rpa=1400000
scf=0x24af280,EST,482 scb=0x24af430,EST,1411
af=(nil),0 sab=(nil),0
cof=0x7fdb28026630,300:H1(0x24a6f60)/RAW((nil))/tcpv4(33)
cob=0x23199f0,10000300:H1(0x24af630)/RAW((nil))/tcpv4(32)
filters={}
call trace(11):
(...)
MINOR: stream: make strm_dump_to_buffer() show the list of filters
That's one of the rare pieces of information that was not present in
the full dump and only in the short one, the list of filters the stream
is subscribed to (however the current filter was present and more
detailed).
CLEANUP: stream: make strm_dump_to_buffer() take a const stream
Now that we don't need a variable anymore, let's pass a const stream. It
will remove any doubt about what can happen to the stream when the function
is called from inspection points (show sess etc).
CLEANUP: stream: use const filters in the dump function
The strm_dump_to_buffer() function requires a variable stream only
for a few functions in it that do not take a const. strm_flt() is
one of them (and for good reasons since most call places want to
update filters). Here we know we won't modify the filter nor the
stream so let's directly access the strm_flt in the stream and assign
it to a const filter. This will also catch any future accidental change.
MINOR: stream: split stats_dump_full_strm_to_buffer() in two
The function only works with the CLI's appctx and does most of the
convenient work of dumping a stream into a buffer (well, the trash
buffer for now). Let's split it in two so that most of the work is
done in a generic function and that the CLI-specific function relies
on that one.
The diff looks huge due to the changed indent caused by the extraction
of the switch/case statement, but when looked at using diff -b it's
small.
CLEANUP: stream: make the dump code not depend on the CLI appctx
The HA_ANON_CLI() helper relies on the CLI appctx and prevents the code
from being made more generic. Let's extract the CLI's anon key separately
and pass it via HA_ANON_STR() instead.
CLEANUP: freq_ctr: make all freq_ctr readers take a const
Since 2.4-dev18 with commit b4476c6a8 ("CLEANUP: freq_ctr: make
arguments of freq_ctr_total() const"), most of the freq_ctr readers
should be fine with a const, except that they were not updated to
reflect this and they continue to force variable on some functions
that call them. Let's update this. This could even be backported if
needed.
BUG/MINOR: mux-quic: remove full demux flag on ncbuf release
When the rcv_buf stream callback is invoked, the mux tasklet is woken up if
demux was previously blocked due to lack of buffer space. A BUG_ON() is
present to ensure there is data in the qcs Rx buffer. If this is not the
case, the wakeup is unneeded:
BUG_ON(!ncb_data(&qcs->rx.ncbuf, 0));
This BUG_ON() may be triggered if RESET_STREAM is received after demux has
been blocked. On reset, the Rx buffer is purged according to RFC 9000, which
allows discarding any data not yet consumed. This will trigger the BUG_ON()
assertion if the rcv_buf stream callback is invoked after this.
To prevent the BUG_ON() crash, just clear the demux block flag each time the
Rx buffer is purged. This accordingly covers RESET_STREAM reception.
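A minimal sketch of the fix; QC_CF_DEM_FULL is assumed here to be the demux
block flag and the purge helper name is simplified:

    /* wherever the Rx ncbuf is purged (e.g. RESET_STREAM handling):
     * also clear the demux block flag so that a later rcv_buf() does
     * not wake the tasklet with an empty buffer and trip the BUG_ON().
     */
    static void qcs_purge_rx(struct qcs *qcs)
    {
        ncb_init(&qcs->rx.ncbuf, 0);        /* discard unconsumed data */
        qcs->qcc->flags &= ~QC_CF_DEM_FULL; /* demux no longer blocked */
    }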
This should be backported up to 2.7.
This may fix github issue #2293.
This bug relies on several preconditions, so its occurrence is rare. It was
reproduced by using a custom client which posts data big enough to fill the
buffer, then emits a RESET_STREAM in place of a proper FIN. Moreover, the
mux code was edited to artificially stall the stream read to force demux
blocking.
MINOR: support for http-request set-timeout client
Added "set-timeout" for the frontend side of the session, so it can be used
to set custom per-client timeouts if needed. Also added the
"cur_client_timeout" fetch to sample the client timeout.
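Example usage (illustrative configuration):

    frontend fe_main
        bind :8080
        # give clients flagged as slow more time
        http-request set-timeout client 60s if { req.hdr(x-slow-client) -m found }
        # the effective value can be sampled, e.g. into a variable
        http-request set-var(txn.ctimeout) cur_client_timeout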
Released version 2.9-dev6 with the following main changes :
- BUG/MINOR: quic: fdtab array underflow access
- DEBUG: pools: always record the caller for uncached allocs as well
- DEBUG: pools: pass the caller pointer to the check functions and macros
- DEBUG: pools: make pool_check_pattern() take a pointer to the pool
- DEBUG: pools: inspect pools on fatal error and dump information found
- BUG/MEDIUM: quic: quic_cc_conn ->cntrs counters unreachable
- DEBUG: pools: also print the item's pointer when crashing
- DEBUG: pools: also print the value of the tag when it doesn't match
- DEBUG: pools: print the contents surrounding the expected tag location
- MEDIUM: pools: refine pool size rounding
- BUG/MEDIUM: hlua: don't pass stale nargs argument to lua_resume()
- BUG/MINOR: hlua/init: coroutine may not resume itself
- BUG/MEDIUM: mux-fcgi: Don't swap trash and dbuf when handling STDERR records
- BUG/MINOR: promex: fix backend_agg_check_status
- BUG/MEDIUM: master/cli: Pin the master CLI on the first thread of the group 1
- MAJOR: import: update mt_list to support exponential back-off
- CLEANUP: pools: simplify the pool expression when no pool was matched in dump
- MINOR: samples: implement bytes_in and bytes_out samples
- DOC: configuration: add %[req.ver] sample to %HV
- BUG/MINOR: quic: Leak of frames to send.
- DOC: configuration: add %[query] to %HQ
- BUG/MINOR: freq_ctr: fix possible negative rate with the scaled API
- BUG/MAJOR: mux-h2: Report a protocol error for any DATA frame before headers
- BUILD: quic: fix build on centos 8 and USE_QUIC_OPENSSL_COMPAT
- Revert "MAJOR: import: update mt_list to support exponential back-off"
- BUG/MINOR: server: add missing free for server->rdr_pfx
- REGTESTS: ssl: skip OCSP test w/ WolfSSL
- REGTESTS: ssl: skip generate-certificates test w/ wolfSSL
- MINOR: logs: clarify the check of the log range
- MINOR: log: remove the unused curr_idx in struct smp_log_range
- CLEANUP: logs: rename a confusing local variable "curr_rg" to "smp_rg"
- MINOR: logs: use a single index to store the current range and index
- MEDIUM: logs: atomically check and update the log sample index
- CLEANUP: ring: rename the ring lock "RING_LOCK" instead of "LOGSRV_LOCK"
- BUG/MEDIUM: http-ana: Try to handle response before handling server abort
- MEDIUM: tools/ip: v4tov6() and v6tov4() rework
- MINOR: pattern/ip: offload ip conversion logic to helper functions
- MINOR: pattern: fix pat_{parse,match}_ip() function comments
- MINOR: pattern/ip: simplify pat_match_ip() function
- BUG/MEDIUM: server/cli: don't delete a dynamic server that has streams
- MINOR: hlua: Add support for the "http-after-res" action
- BUG/MINOR: proto_reverse_connect: fix preconnect with startup name resolution
- MINOR: proto_reverse_connect: prevent transparent server for pre-connect
- CI: cirrus-ci: display gdb bt if any
- MEDIUM: sample: Enhances converter "bytes" to take variable names as arguments
- MEDIUM: sample: Small fix in function check_operator for error reporting
- MINOR: quic: handle external extra CIDs generator.
- BUG/MINOR: proto_reverse_connect: set default maxconn
- MINOR: proto_reverse_connect: refactor preconnect failure
- MINOR: proto_reverse_connect: remove unneeded wakeup
- MINOR: proto_reverse_connect: emit log for preconnect
MINOR: proto_reverse_connect: emit log for preconnect
Add reporting using send_log() for the preconnect operation. This is minimal
but ensures we understand the current status of a listener in active reverse
connect.
To limit the logging quantity, only important transitions are considered.
This requires implementing a minimal state machine as a new field in the
receiver structure.
Here are the logs produced:
* Initiating : first time preconnect is enabled on a listener
* Error : last preconnect attempt interrupted on a connection error
* Reaching maxconn : all necessary connections were reversed and are
operational on a listener
MINOR: proto_reverse_connect: remove unneeded wakeup
No need to use task_wakeup() on rev_bind_listener() to bootstrap
preconnect. A similar call is done on rev_enable_listener(), which serves
both for bootstrap and also later to reinitiate attempts to maintain maxconn
if connections are freed.
MINOR: proto_reverse_connect: refactor preconnect failure
When a connection is freed during preconnect before reversal, the error must
be notified to the listener to remove any connection reference and rearm a
new preconnect attempt. Currently, this can occur through 2 code paths:
* conn_free() called directly by the H2 mux
* an error during conn_create_mux(). For this case, the connection is
flagged with CO_FL_ERROR and the reverse_connect task is woken up. The
process task handler is then responsible for calling conn_free() for such a
connection.
Duplicated steps were done both in conn_free() and in the process task
handler. These are now removed. To facilitate code maintenance, the
dedicated operations have been centralized in a new function
rev_notify_preconn_err() which is called by conn_free().
BUG/MINOR: proto_reverse_connect: set default maxconn
If maxconn is not set for preconnect, it assumes we want to establish a
single connection. However, this does not work properly in case the
connection is closed after reversal. The listener is not resumed by the
protocol layer to attempt a new preconnect.
To fix this, explicitly set maxconn to 1 in the listener instance if none is
defined. This ensures the behavior is consistent. A BUG_ON() has been added
to validate that we never try to use a listener with a maxconn of 0.
MINOR: quic: handle external extra CIDs generator.
This patch adds the ability to externalize and customize the code
of the computation of extra CIDs after the first one was derived from
the ODCID.
This is to prepare interoperability with extra components such as
different QUIC proxies or routers for instance.
To do so, the patch defines two function callbacks:
- the first one computes a 64-bit hash from the first generated CID (itself
still derived from the ODCID). The resulting hash is stored into the
'quic_conn'; 64 bits is chosen large enough to store an entire haproxy CID.
- the second one re-uses the previously computed hash to derive an extra CID
using the custom algorithm. If not set, haproxy will continue to choose a
randomized CID value.
Both functions also receive the 'cluster_secret' as an argument: this way,
it is usable for obfuscation or ciphering.
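A sketch of what such a callback pair could look like; the typedef names and
exact signatures below are illustrative, not the exported API:

    /* 1st callback: compute a 64-bit hash from the first generated CID;
     * the result is stored in the quic_conn for later derivations.
     */
    typedef uint64_t (*qc_cid_hash64_fn)(const unsigned char *cid, size_t cid_len,
                                         const unsigned char *secret, size_t secret_len);

    /* 2nd callback: derive an extra CID from that hash; when unset,
     * haproxy keeps choosing a randomized CID value.
     */
    typedef int (*qc_cid_derive_fn)(unsigned char *out, size_t out_len,
                                    uint64_t hash64,
                                    const unsigned char *secret, size_t secret_len);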
MEDIUM: sample: Small fix in function check_operator for error reporting
When function "check_operator" calls function "vars_check_arg" to decode
a variable, it passes in NULL value for pointer to the char array meant
for capturing the error message. This commit replaces NULL with the
pointer to the real char array. This should help in correct error
reporting.
MEDIUM: sample: Enhances converter "bytes" to take variable names as arguments
Prior to this commit, the converter "bytes" took only integer values as
arguments. After this commit, it can take variable names as inputs. This
allows us to dynamically determine the offset/length and capture them in
variables. These variables can then be used with the converter.
Example use case: parsing a token present in a request header.
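For instance, the offset and length may now come from variables
(illustrative configuration; a real use case would compute them from the
request):

    http-request set-var(txn.off) int(2)
    http-request set-var(txn.len) int(8)
    # extract 8 bytes starting at offset 2 from the header value
    http-request set-var(txn.tok) req.hdr(x-token),bytes(txn.off,txn.len)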
MINOR: proto_reverse_connect: prevent transparent server for pre-connect
Prevent using transparent servers for pre-connect on startup by emitting
a fatal error. This is used to ensure we never try to connect to a
target with an unspecified destination address or port.
BUG/MINOR: proto_reverse_connect: fix preconnect with startup name resolution
The addr member of the server structure is not set consistently depending on
the server address type. When using the <IP:PORT> notation, its port is
properly set. However, when using <HOSTNAME:PORT>, only the IP address is
set after startup name resolution, and the port is left at 0.
This behavior causes preconnect to not be functional when using a server
with a hostname for startup name resolution. Indeed, only srv.addr is used
as the connect argument through the function new_reverse_conn(). To fix
this, rely on srv.svc_port: this member is always set for servers using an
IP or a hostname. This is similar to what connect_server() does on the
backend side.
MINOR: hlua: Add support for the "http-after-res" action
This commit introduces support for the "http-after-res" action in
hlua, enabling the invocation of a Lua function in a
"http-after-response" rule. With this enhancement, a Lua action can be
registered using the "http-after-res" action type:
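    -- minimal illustrative sketch; the action name and body below are
    -- examples, not taken from the actual commit
    core.register_action("myaction", { "http-after-res" }, function(txn)
        txn:set_var("res.processed", true)
    end)

Such an action can then be referenced from the configuration with an
"http-after-response lua.myaction" rule.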
BUG/MEDIUM: server/cli: don't delete a dynamic server that has streams
In cli_parse_delete_server(), we take care of checking that the server is in
MAINT and that the cur_sess counter is set to 0, in the hope that no
connection/stream resources continue to point to the server; otherwise we
refuse to delete it.
As shown in GH #2298, this is not sufficient.
Indeed, when the server option "on-marked-down shutdown-sessions" is not
used, server streams are not purged when srv enters maintenance mode.
As such, there could be remaining streams that point to the server. To
detect this, a secondary check on srv->cur_sess counter was performed in
cli_parse_delete_server(). Unfortunately, there are some code paths that
could lead to cur_sess being decremented, and not resulting in a stream
being actually shutdown. As such, if the delete_server cli is handled
right after cur_sess has been decremented with streams still pointing to
the server, we could face some nasty bugs where stream->srv_conn could
point to garbage memory area, as described in the original github report.
To make the check more reliable prior to deleting the server, we don't
rely exclusively on cur_sess and directly check that the server is not
used in any stream through the srv_has_stream() helper function.
Thanks to @capflam who found out the root cause of the bug and greatly
helped to provide the fix.
MINOR: pattern/ip: simplify pat_match_ip() function
pat_match_ip() has been updated several times over the last decade to
introduce new features, but it was never cleaned up.
The result is that the function is pretty hard to read, and there are
multiple duplicated code blocks so it becomes error-prone to maintain it,
plus it bloats the haproxy binary for nothing.
In this patch, we move the tree search (ip4 / ip6) logic into 2 dedicated
helper functions. This allows us to refactor pat_match_ip() without touching
the original behavior.
MINOR: pattern/ip: offload ip conversion logic to helper functions
Now that v4tov6() and v6tov4() were reworked to match behavior from
pat_match_ip() function in ("MINOR: tools/ip: v4tov6() and v6tov4()
rework"), we can remove code duplication in pat_match_ip() by directly
using those dedicated functions where relevant.
The v4tov6() and v6tov4() helper functions were initially implemented in
4f92d3200 ("[MEDIUM] IPv6 support for stick-tables"). However, since
ceb4ac9c3 ("MEDIUM: acl: support IPv6 address matching"), support for legacy
ip6 to ip4 conversion formats was added, with the parsing logic directly
performed in acl_match_ip (which later became pat_match_ip).
The issue is that the original v6tov4() function which is used for sample
expression handling lacks those additional formats, so we could face
inconsistencies depending on whether we rely on ip4/ip6 conversions from an
acl context or an expression context.
To unify ip4/ip6 automatic mapping behavior, we reworked v4tov6 and v6tov4
functions so that they now behave like in pat_match_ip() function.
Note: '6to4 (RFC3056)' and 'RFC4291 ipv4 compatible address' formats are
still supported for legacy purposes despite being deprecated for a while
now.
BUG/MEDIUM: http-ana: Try to handle response before handling server abort
In the request analyser responsible for forwarding the request, we try to
detect a server abort to stop the request forwarding. However, we must be
careful not to block the response processing, if any. Indeed, it is possible
to get the response and the server abort at the same time. In this case, we
must try to forward the response to the client first.
So to fix the issue, in the request analyser we no longer handle the server
abort if the response channel is not empty. In the end, the response
analyser is able to detect the server abort if it is relevant. Otherwise,
the stream will be woken up after the response forwarding and the server
abort should be handled at this stage.
This patch should be backported as far as 2.7 only because the risk of
breakage is high. And it is probably a good idea to wait a bit before
backporting it.
CLEANUP: ring: rename the ring lock "RING_LOCK" instead of "LOGSRV_LOCK"
The ring lock was initially mostly used for the logs and used to inherit
its name in lock stats. Now that it's exclusively used by rings, let's
rename it accordingly.
MEDIUM: logs: atomically check and update the log sample index
The log server lock is pretty visible in perf top when using log samples
because it's taken for each server in turn while trying to validate and
update the log server's index. Let's change this for a CAS, since we have
the index and the range at hand now. This allows us to remove the logsrv
lock.
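The update then follows the classic CAS-loop shape; this is a simplified
sketch around the curr_rg_idx field, not the exact process_send_log() code:

    /* atomically consume one log slot: recompute the next range/index
     * pair from the currently observed one, and retry if another thread
     * updated it in between.
     */
    struct smp_info *inf = /* the log sampling info at hand */;
    uint64_t old_rg_idx, new_rg_idx;

    old_rg_idx = HA_ATOMIC_LOAD(&inf->curr_rg_idx);
    do {
        new_rg_idx = next_rg_idx(old_rg_idx); /* illustrative helper */
    } while (!HA_ATOMIC_CAS(&inf->curr_rg_idx, &old_rg_idx, new_rg_idx));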
The test on 4 servers now shows a 3.7 times improvement thanks to much
lower contention. Without log sampling a test producing 4.4M logs/s
delivers 4.4M logs/s at 21 CPUs used, everything spent in the kernel.
After enabling 4 samples (1:4, 2:4, 3:4 and 4:4), the throughput would
previously drop to 1.13M log/s with 37 CPUs used and 75% spent in
process_send_log(). Now with this change, 4.25M logs/s are emitted,
using 26 CPUs and 22% in process_send_log(). That's a 3.7x throughput
improvement for a 30% global CPU usage reduction, but in practice it
mostly shows that the performance drop caused by having samples is much
less noticeable (each of the 4 servers has its index updated for each
log).
Note that in order to even avoid incrementing an index for each log srv
that is consulted, it would be more convenient to have a single index
per frontend and apply the modulus on each log server in turn to see if
the range has to be updated. It would then only perform one write per
range switch. However the place where this is done doesn't have access
to a frontend, so some changes would need to be performed for this, and
it would require to update the current range independently in each
logsrv, which is not necessarily easier since we don't know yet if we
can commit it.
MINOR: logs: use a single index to store the current range and index
By using a single long long to store both the current range and the
next index, we'll make it possible to perform atomic operations instead
of locking. Let's only regroup them for now under a new "curr_rg_idx".
The upper word is the range, the lower is the index.
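Conceptually, the packing works like this (sketch):

    uint32_t rg = 3, idx = 12345;

    /* pack: upper 32 bits hold the current range, lower 32 the index */
    uint64_t curr_rg_idx = ((uint64_t)rg << 32) | idx;

    /* unpack */
    uint32_t cur_rg  = curr_rg_idx >> 32;
    uint32_t cur_idx = curr_rg_idx & 0xffffffffU;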
CLEANUP: logs: rename a confusing local variable "curr_rg" to "smp_rg"
The variable curr_rg in process_send_log() is misleading because it is
not related to the integer curr_rg that's used to calculate it, instead
it's a pointer to the current smp_log_range from smp_rgs[], so let's call
it "smp_rg" as a singular for this "smp_rgs" and put an end to this
confusion.
MINOR: log: remove the unused curr_idx in struct smp_log_range
This index is useless because it only serves to know when the global
index reached the end, while the global one already knows it. Let's
just drop it and perform the test on the global range.
It was verified with the following config that the first server continues
to take 1/10 of the traffic, the 2nd one 2/10, the 3rd one 3/10 and the
4th one 4/10:
MINOR: logs: clarify the check of the log range
The test of the log range is not very clear, in part due to the reuse of the
"curr_idx" name that happens at two levels. The call to in_smp_log_range()
applies to the smp_info's index to which 1 is added: it verifies that the
next index is still within the current range.
Let's just have a local variable "next_index" in process_send_log() that
gets assigned the next index (current+1) and compare it to the current
range's boundaries. This makes the test much clearer. We can then simply
remove in_smp_log_range() which is no longer needed.
BUG/MINOR: server: add missing free for server->rdr_pfx
rdr_pfx was not being freed during server cleanup, leading to a small memory
leak when the "redir" argument was used on a server line (HTTP only).
This should be backported to all stable versions.
[For 2.6 and 2.7: the free should be performed in srv_drop() directly.
For older versions: free in deinit() function near the free for the
cookie string]
Revert "MAJOR: import: update mt_list to support exponential back-off"
The list iterator is broken. As found by Fred, running QUIC single-threaded
shows that only the first connection is accepted because the accepter relies
on the element being initialized once detached (which is expected and
matches what MT_LIST_DELETE_SAFE() used to do before). However, while doing
this in the quic_sock code seems to work, doing it inside the macro shows
total breakage and the unit test doesn't work anymore (random crashes). Thus
it looks like the fix is not trivial; let's roll this back for the time it
will take to fix the loop.
BUILD: quic: fix build on centos 8 and USE_QUIC_OPENSSL_COMPAT
When using USE_QUIC_OPENSSL_COMPAT=1 on centos-8 the build fails this way:
In file included from src/quic_openssl_compat.c:11:
/usr/include/openssl/kdf.h:33:46: error: unknown type name 'va_list'
int EVP_KDF_vctrl(EVP_KDF_CTX *ctx, int cmd, va_list args);
This is because openssl/kdf.h is included before openssl-compat.h.
BUG/MAJOR: mux-h2: Report a protocol error for any DATA frame before headers
If any DATA frame is received before all headers are fully received, a
protocol error must be reported. It is required by the HTTP/2 RFC, but it is
also important because the HTTP analyzers expect the first HTX block to be a
start-line. It leads to a crash if this requirement is not respected.
For instance, it is possible to trigger a crash by sending an interim
message with a DATA frame (it may be an empty DATA frame with the ES flag).
AFAIK, only the server side is affected by this bug.
To fix the issue, a protocol error is reported for the stream.
This patch should fix the issue #2291. It must be backported as far as 2.2
(and probably to 2.0 too).
BUG/MINOR: freq_ctr: fix possible negative rate with the scaled API
In 1.9 with commit 627505d36 ("MINOR: freq_ctr: add swrate_add_scaled() to
work with large samples") we got the ability to indicate when adding some
values that they represent a number of samples. However there is an issue in
the calculation: the number of samples that is added to the sum before the
division, in order to avoid fading away too fast, is multiplied by the
scale. The problem it causes is that this is done in the negative part of
the expression, so as soon as the sum of old_sum and v*s is too small (e.g.
zero), we end up with a negative value of -s.
This is visible in "show pools" which occasionally report a very large
value on "needed_avg" since 2.9, though the bug has been there for longer.
Indeed in 2.9 since they're hashed in buckets, it suffices that any
thread got one such error once for the sum to be wrong. One possible
impact is memory usage not shrinking after a short burst due to pools
refraining from releasing objects, believing they don't have enough.
This must be backported to all versions. Note that the opportunistic
version can be dropped before 2.8.
BUG/MINOR: quic: Leak of frames to send.
In very rare cases, it is possible that packets are detected as lost, their
frames requeued, then the connection is released for any reason (to be
killed because of a sendto() fatal failure, for instance) without those
frames being released. Such frames are lost and never released because the
function which releases their packet number spaces does not release the
frames which are still enqueued to be sent.
CLEANUP: pools: simplify the pool expression when no pool was matched in dump
When dumping pool information, we make a special case of the condition
where the pool couldn't be identified and we consider that it was the
correct one. In the code arrangements brought by commit efc46dede ("DEBUG:
pools: inspect pools on fatal error and dump information found"), a
ternary expression for testing this depends on the "if" block condition
so this can be simplified and will make Coverity happy. This was reported
in GH #2290.
MAJOR: import: update mt_list to support exponential back-off
The new mt_list code supports exponential back-off on conflict, which
is important for use cases where there is contention on a large number
of threads. The API evolved a little bit and required some updates:
- mt_list_for_each_entry_safe() is now in upper case to explicitly
show that it is a macro, and only uses the back element, doesn't
require a secondary pointer for deletes anymore.
- MT_LIST_DELETE_SAFE() doesn't exist anymore, instead one just has
to set the list iterator to NULL so that it is not re-inserted
into the list and the list is spliced there. One must be careful
because it was usually performed before freeing the element. Now
instead the element must be nulled before the continue/break.
- MT_LIST_LOCK_ELT() and MT_LIST_UNLOCK_ELT() have always been
unclear. They were replaced by mt_list_cut_around() and
mt_list_connect_elem() which more explicitly detach the element
and reconnect it into the list.
- MT_LIST_APPEND_LOCKED() was only in haproxy so it was left as-is
in list.h. It may however possibly benefit from being upstreamed.
This required tiny adaptations to event_hdl.c and quic_sock.c. The test case
was updated and the API doc added. Note that in order to keep include files
small, the struct mt_list definition remains in list-t.h (part of the
internal API) and was ifdef'd out in mt_list.h.
A test on QUIC with both quictls 1.1.1 and wolfssl 5.6.3 on ARM64 with
80 threads shows a drastic reduction of CPU usage thanks to this and
the refined memory barriers. Please note that the CPU usage on OpenSSL
3.0.9 is significantly higher due to the excessive use of atomic ops
by openssl, but 3.1 is only slightly above 1.1.1 though:
- before: 35 Gbps, 3.5 Mpps, 7800% CPU
- after: 41 Gbps, 4.2 Mpps, 2900% CPU
BUG/MEDIUM: master/cli: Pin the master CLI on the first thread of the group 1
There is no reason to start the master CLI on several threads and on several
groups. And in fact, it must not be done otherwise the same FD is inserted
several times in the fdtab, leading to a crash during startup because of a
BUG_ON(). It happens when several groups are configured.
To fix the bug the master CLI is now pinned on the first thread of the first
group.
This patch should fix the issue #2259 and must be backported to 2.8.
BUG/MINOR: promex: fix backend_agg_check_status
When a server is in maintenance, check.status is no longer updated.
Therefore, we shouldn't consider check.status if the checks are not
active. This check was properly implemented in the
haproxy_server_check_status metric, but wasn't carried over to
backend_agg_check_status, which introduced inconsistencies between the two
metrics.
BUG/MEDIUM: mux-fcgi: Don't swap trash and dbuf when handling STDERR records
Trash chunks are buffers but are not allocated from the buffers pool. And
the "trash" chunk is static and thread-local. These are two reasons not to
swap it with a regular buffer allocated from the buffers pool.
Unfortunately, it is exactly what is performed in the FCGI mux when a STDERR
record is handled. b_xfer() is used to copy data from the demux buffer to
the trash to format the error message. A zero-copy via a swap may be
performed. In this case, this leads to a memory corruption and a crash
because, some time later, the demux buffer is released because it is empty,
while it is in fact the trash chunk.
b_force_xfer() must be used instead. This function forces the copy.
This patch must be backported as far as 2.2. For 2.4 and 2.2, b_force_xfer()
does not exist. For these versions, the following commit must be backported
too:
BUG/MINOR: hlua/init: coroutine may not resume itself
It's not supported to call lua_resume with <L> and <from> designating
the same lua coroutine. It didn't cause visible bugs so far because
Lua 5.3 used to be more permissive about this, and moreover, yielding
is not involved during the hlua init state.
But this is wrong usage, and the doc clearly specifies that the <from>
argument can be NULL when there is no such coroutine, which is the case
here.
This should be backported in every stable versions.
BUG/MEDIUM: hlua: don't pass stale nargs argument to lua_resume()
Once the call returns, we may call the function again with the same hlua
context when E_YIELD is returned (the execution was interrupted and may be
resumed through another lua_resume() call).
The 3rd argument to lua_resume(), 'nargs', is a hint passed to Lua to
know how many (optional) arguments were pushed on the stack prior to
resuming the execution (arguments that Lua will then expose to the Lua
script).
But here is the catch: we never reset lua->nargs between successive
lua_resume() calls, meaning that next lua_resume() calls will still
inherit from the initial nargs value that was set in hlua ctx prior
to calling hlua_ctx_resume() (our wrapper function) for the first time.
This is problematic, because despite not being explicitly mentioned in
the Lua documentation, passed arguments (to which `nargs` refer to), are
already consumed once lua_resume() returns.
This means that we cannot keep calling lua_resume() with non-zero nargs
if we don't push new arguments on the stack prior to resuming lua after
the initial call: nargs is proper to a single lua_resume() invocation.
Despite improper use of lua_resume() for a long time, this didn't cause
visible issues in the past with Lua 5.3, but it is particularly sensitive
starting with Lua 5.4.3 due to debugging hooks improvements that led to
some internal changes (see: lua/lua@58aa09a). Not using nargs properly
now exposes us to undefined behavior when resuming after a yield triggered
from a debugging hook, which may cause running scripts to crash
unexpectedly: for instance with Lua raising errors and complaining about
values being NULL where it should not be the case.
For reference, this issue was initially raised on the Lua mailing list:
http://lua-users.org/lists/lua-l/2023-09/msg00005.html
In this patch, we immediately reset nargs when lua_resume() returns to
prevent any misuse.
It should be backported to every maintained versions.
MEDIUM: pools: refine pool size rounding
The pool sizes were rounded up a little bit too much with commit 30f931ead
("BUG/MEDIUM: pools: fix the minimum allocation size"). The goal was in fact
to make sure they were always at least large enough to store 2 list heads,
and stuffing this into the alignment calculation resulted in the size always
being rounded up to this size. This is
problematic because it means that the appended tag at the end doesn't
always catch potential overflows since more bytes than needed are
allocated. Moreover, this test was later reinforced by commit b5ba09ed5
("BUG/MEDIUM: pools: ensure items are always large enough for the
pool_cache_item"), proving that the first test was not always sufficient.
This needs to be reworked to proceed correctly:
- the two lists are needed when the object is in the cache, hence
when we don't care about the tag, which means that the tag's size,
if any, can easily cover for the missing bytes to reach that size.
This is actually what was already being checked for.
- the rounding should not be performed (beyond the size of a word to
preserve pointer alignment) when pool tagging is enabled, otherwise
we don't detect small overflows. It means that there will be less
merging when proceeding like this. Tests show that we merge 93 pools
into 36 without tags and 43 with tags enabled.
- the rounding should not consider the extra size, since it's already
done when calculating the allocated size later (i.e. don't round up
twice). The difference is subtle but it's what makes sure the tag
immediately follows the area instead of starting from the end.
Thanks to this, now when writing one byte too many at the end of a struct
stream, the error is instantly caught.
DEBUG: pools: print the contents surrounding the expected tag location
When no tag matches a known pool, we can inspect around to help figure
what could have possibly overwritten memory. The contents are printed
one machine word per line in hex, then using printable characters, and
when they can be resolved to a pointer, either the pool's pointer name
or a resolvable symbol with offset. The goal here is to help recognize
what is easily identifiable in memory.
For example, a test patch making stream_free() overflow by one byte produced
the following output:
FATAL: pool inconsistency detected in thread 1: tag mismatch on free().
caller: 0x59e968 (stream_free+0x6d8/0xa0a)
item: 0x13df5c1
pool: 0x12782c0 ('stream', size 888, real 904, users 1)
Tag does not match (0x4f00000000012782). Tag does not match any other pool.
Contents around address 0x13df5c1+888=0x13df939:
0x13df918 [00 00 00 00 00 00 00 00] [........]
0x13df920 [00 00 00 00 00 00 00 00] [........]
0x13df928 [00 00 00 00 00 00 00 00] [........]
0x13df930 [00 00 00 00 00 00 00 00] [........]
0x13df938 [c0 82 27 01 00 00 00 00] [..'.....] [pool:stream]
0x13df940 [4f c0 59 00 00 00 00 00] [O.Y.....] [stream_new+0x4f/0xbec]
0x13df948 [49 46 49 43 41 54 45 2d] [IFICATE-]
0x13df950 [81 02 00 00 00 00 00 00] [........]
0x13df958 [df 13 00 00 00 00 00 00] [........]
Other possible callers:
(...)
We notice that the tag references pool_head_stream with the allocation
point in stream_new. Another benefit is that a caller may be figured
from the tag even if the "caller" feature is not enabled, because upon
a free() we always put the caller's location into the tag. This should
be sufficient to debug most cases that normally require gdb.
MEDIUM: quic: Allow the quic_conn memory to be asap released.
BUG/MEDIUM: quic: quic_cc_conn ->cntrs counters unreachable
When sending packets from quic_cc_conn_io_cb(), e.g. when the quic_conn
object has been released and replaced by a lighter one (quic_cc_conn), some
counters may have to be incremented. But they were not reachable because
they were not shared between the quic_conn and quic_cc_conn structs.
To fix this, one has only to move the ->cntrs counters from quic_conn to the
QUIC_CONN_COMMON struct which is shared between both quic_conn and
quic_cc_conn.
Thank you to Tristan for having reported this in GH #2247.
DEBUG: pools: inspect pools on fatal error and dump information found
It's a bit frustrating sometimes to see pool checks catch a bug but not
provide exploitable information without a core.
Here we're adding a function "pool_inspect_item()" which is called just
before aborting in pool_check_pattern() and POOL_DEBUG_CHECK_MARK() and
which will display the error type, the pool's pointer and name, and will
try to check if the item's tag matches the pool, and if not, will iterate
over all pools to see if one would be a better candidate, then will try
to figure the last known caller and possibly other likely candidates if
the pool's tag is not sufficiently trusted. This typically helps better
diagnose corruption in use-after-free scenarios, or freeing to a pool
that differs from the one the object was allocated from, and will also
indicate calling points that may help figure where an object was last
released or allocated. The info is printed on stderr just before the
backtrace.
For example, the recent off-by-one test in the PPv2 changes would have
produced the following output in vtest logs:
*** h1 debug|FATAL: pool inconsistency detected in thread 1: tag mismatch on free().
*** h1 debug| caller: 0x62bb87 (conn_free+0x147/0x3c5)
*** h1 debug| pool: 0x2211ec0 ('pp_tlv_256', size 304, real 320, users 1)
*** h1 debug|Tag does not match. Possible origin pool(s):
*** h1 debug| tag: @0x2565530 = 0x2216740 (pp_tlv_128, size 176, real 192, users 1)
*** h1 debug|Recorded caller if pool 'pp_tlv_128':
*** h1 debug| @0x2565538 (+0184) = 0x62c76d (conn_recv_proxy+0x4cd/0xa24)
A mismatch in the allocated/released pool is already visible, and the
callers confirm it once resolved, where the allocator indeed allocates
from pp_tlv_128 and conn_free() releases to pp_tlv_256:
$ addr2line -spafe ./haproxy <<< $'0x62bb87\n0x62c76d'
0x000000000062bb87: conn_free at connection.c:568
0x000000000062c76d: conn_recv_proxy at connection.c:1177
DEBUG: pools: pass the caller pointer to the check functions and macros
In preparation for more detailed pool error reports, let's pass the
caller pointers to the check functions. This will be useful to produce
messages indicating where the issue happened.
DEBUG: pools: always record the caller for uncached allocs as well
When recording the caller of a pool_alloc(), we currently store it only when
the object comes from the cache and never when it comes from the heap.
There's no valid reason for this except that the caller's pointer was not
passed to pool_alloc_nocache(), so NULL used to be set there. Let's just
pass it down the chain.
BUG/MINOR: quic: fdtab array underflow access
When using the listener socket as file descriptor, the qc->fd value is -1.
In this case one must not access the fdtab[qc->fd] element to change its
value.
This bug could have been detected by ASan with such a backtrace:
=================================================================
==402222==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fa8ecf417ex7fa8e915cf90 sp 0x7fa8e915cf88
WRITE of size 8 at 0x7fa8ecf417e8 thread T6
#0 0x55707a0bf18a in qc_new_cc_conn src/quic_conn.c:838
#1 0x55707a0c6dc0 in quic_conn_release src/quic_conn.c:1408
#2 0x55707a10916f in quic_close src/xprt_quic.c:35
#3 0x55707a0cec77 in conn_xprt_close include/haproxy/connection.h:153
#4 0x55707a0ceed0 in conn_full_close include/haproxy/connection.h:197
#5 0x55707a0ec253 in qcc_release src/mux_quic.c:2412
#6 0x55707a0ec7d0 in qcc_io_cb src/mux_quic.c:2443
#7 0x55707a63ff2a in run_tasks_from_lists src/task.c:596
#8 0x55707a641cc9 in process_runnable_tasks src/task.c:876
#9 0x55707a56f7b2 in run_poll_loop src/haproxy.c:2954
#10 0x55707a5705fd in run_thread_poll_loop src/haproxy.c:3153
#11 0x7fa8f9450ea6 in start_thread nptl/pthread_create.c:477
#12 0x7fa8f936ea2e in __clone (/lib/x86_64-linux-gnu/libc.so.6+0xfba2e)
0x7fa8ecf417e8 is located 24 bytes to the left of 134217728-byte region [0x7fa8e
allocated by thread T0 here:
#0 0x7fa8f9a37037 in __interceptor_calloc ../../../../src/libsanitizer/asan/
#1 0x55707a71a61d in init_pollers src/fd.c:1161
#2 0x55707a56cdf1 in init src/haproxy.c:2672
#3 0x55707a5714c2 in main src/haproxy.c:3298
#4 0x7fa8f9296d09 in __libc_start_main ../csu/libc-start.c:308
Thread T6 created by T0 here:
#0 0x7fa8f99e22a2 in __interceptor_pthread_create ../../../../src/libsanitizpp:214
#1 0x55707a748a21 in setup_extra_threads src/thread.c:252
#2 0x55707a5735c9 in main src/haproxy.c:3844
#3 0x7fa8f9296d09 in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: heap-buffer-overflow src/quic_conn.c:838 in qc_new_cc
Shadow bytes around the buggy address:
0x0ff59d9e02a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0ff59d9e02b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0ff59d9e02c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0ff59d9e02d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0ff59d9e02e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0ff59d9e02f0: fa fa fa fa fa fa fa fa fa fa fa fa fa[fa]fa fa
0x0ff59d9e0300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff59d9e0310: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff59d9e0320: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff59d9e0330: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff59d9e0340: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==402222==ABORTING
Aborted
Thanks to @Tristan971 for reporting this bug in GH #2247.
Released version 2.9-dev5 with the following main changes :
- BUG/MEDIUM: mux-h2: fix crash when checking for reverse connection after error
- BUILD: import: guard plock.h against multiple inclusion
- BUILD: pools: import plock.h to build even without thread support
- BUG/MINOR: ssl/cli: can't find ".crt" files when replacing a certificate
- BUG/MINOR: stream: protect stream_dump() against incomplete streams
- DOC: config: mention uid dependency on the tune.quic.socket-owner option
- MEDIUM: capabilities: enable support for Linux capabilities
- CLEANUP/MINOR: connection: Improve consistency of PPv2 related constants
- MEDIUM: connection: Generic, list-based allocation and look-up of PPv2 TLVs
- MEDIUM: sample: Add fetch for arbitrary TLVs
- MINOR: sample: Refactor fc_pp_authority by wrapping the generic TLV fetch
- MINOR: sample: Refactor fc_pp_unique_id by wrapping the generic TLV fetch
- MINOR: sample: Add common TLV types as constants for fc_pp_tlv
- MINOR: ssl_sock: avoid iterating realloc(+1) on stored context
- DOC: ssl: add some comments about the non-obvious session allocation stuff
- CLEANUP: ssl: keep a pointer to the server in ssl_sock_init()
- MEDIUM: ssl_sock: always use the SSL's server name, not the one from the tid
- MEDIUM: server/ssl: place an rwlock in the per-thread ssl server session
- MINOR: server/ssl: maintain an index of the last known valid SSL session
- MINOR: server/ssl: clear the shared good session index on failure
- MEDIUM: server/ssl: pick another thread's session when we have none yet
- MINOR: activity: report the current run queue size
- BUG/MINOR: checks: do not queue/wake a bounced check
- MINOR: checks: start the checks in sleeping state
- MINOR: checks: pin the check to its thread upon wakeup
- MINOR: check: remember when we migrate a check
- MINOR: check/activity: collect some per-thread check activity stats
- MINOR: checks: maintain counters of active checks per thread
- MINOR: check: also consider the random other thread's active checks
- MEDIUM: checks: search more aggressively for another thread on overload
- MEDIUM: checks: implement a queue in order to limit concurrent checks
- MINOR: checks: also consider the thread's queue for rebalancing
- DEBUG: applet: Properly report opposite SC expiration dates in traces
- BUG/MEDIUM: stconn: Update stream expiration date on blocked sends
- BUG/MINOR: stconn: Don't report blocked sends during connection establishment
- BUG/MEDIUM: stconn: Wake applets on sending path if there is a pending shutdown
- BUG/MEDIUM: stconn: Don't block sends if there is a pending shutdown
- BUG/MINOR: quic: Possible skipped RTT sampling
- MINOR: quic: Add a trace to quic_release_frm()
- BUG/MAJOR: quic: Really ignore malformed ACK frames.
- BUG/MINOR: quic: Unchecked pointer to packet number space dereferenced
- BUG/MEDIUM: connection: fix pool free regression with recent ppv2 TLV patches
- BUG/MEDIUM: h1-htx: Ensure chunked parsing with full output buffer
- BUG/MINOR: stream: further protect stream_dump() against incomplete sessions
- DOC: configuration: update examples for req.ver
- MINOR: properly mark the end of the CLI command in error messages
- BUILD: ssl: Build with new cryptographic library AWS-LC
- REGTESTS: ssl: skip ssl_dh test with AWS-LC
- BUILD: bug: make BUG_ON() void to avoid a rare warning
- BUILD: checks: shut up yet another stupid gcc warning
- MINOR: cpuset: add ha_cpuset_isset() to check for the presence of a CPU in a set
- MINOR: cpuset: add ha_cpuset_or() to bitwise-OR two CPU sets
- MINOR: cpuset: centralize a reliable bound cpu detection
- MEDIUM: threads: detect incomplete CPU bindings
- MEDIUM: threads: detect excessive thread counts vs cpu-map
- BUILD: quic: Compilation issue on 32-bits systems with quic_may_send_bytes()
- BUG/MINOR: quic: Unchecked pointer to Handshake packet number space
- MINOR: global: export the display_version() symbol
- MEDIUM: mworker: display a more accessible message when a worker crashes
- MINOR: httpclient: allow to configure the retries
- MINOR: httpclient: allow to configure the timeout.connect
- BUG/MINOR: quic: Wrong RTT adjusments
- BUG/MINOR: quic: Wrong RTT computation (srtt and rtt_var)
- BUG/MINOR: stconn: Don't inhibit shutdown on connection on error
- BUG/MEDIUM: applet: Fix API for function to push new data in channels buffer
- BUG/MEDIUM: stconn: Report read activity when a stream is attached to front SC
- BUG/MEDIUM: applet: Report an error if applet request more room on aborted SC
- BUG/MEDIUM: stconn/stream: Forward shutdown on write timeout
- BUG/MEDIUM: stconn: Always update stream's expiration date after I/O
- BUG/MINOR: applet: Always expect data when CLI is waiting for a new command
- BUG/MINOR: ring/cli: Don't expect input data when showing events
- BUG/MINOR: quic: Dereferenced unchecked pointer to Handshake packet number space
- BUG/MINOR: hlua/action: incorrect message on E_YIELD error
- MINOR: http_ana: position the FINAL flag for http_after_res execution
- CI: scripts: add support to build-ssl.sh to download and build AWS-LC
- CI: add support to matrix.py to determine the latest AWS-LC release
- CI: Update matrix.py so all code is contained in functions.
- CI: github: Add a weekly CI run building with AWS-LC
- MINOR: ring: add a function to compute max ring payload
- BUG/MEDIUM: ring: adjust maxlen consistency check
- MINOR: sink: simplify post_sink_resolve function
- MINOR: log/sink: detect when log maxlen exceeds sink size
- MINOR: sink: inform the user when logs will be implicitly truncated
- MEDIUM: sink: don't perform implicit truncations when maxlen is not set
- MINOR: log: move log-forwarders cleanup in log.c
- MEDIUM: httpclient/logs: rely on per-proxy post-check instead of global one
- MINOR: log: add dup_logsrv() helper function
- MEDIUM: log/sink: make logsrv postparsing more generic
- MEDIUM: fcgi-app: properly postresolve logsrvs
- MEDIUM: spoe-agent: properly postresolve log rings
- MINOR: sink: add helper function to deallocate sink struct
- MEDIUM: sink/ring: introduce high level ring creation helper function
- MEDIUM: sink: add sink_finalize() function
- CLEANUP: log: remove unnecessary trim in __do_send_log
- MINOR: cache: Change hash function in default normalizer used in case of "vary"
- MINOR: tasks/stats: report the number of niced tasks in "show info"
- CI: Update to actions/checkout@v4
- MINOR: ssl: add support for 'curves' keyword on server lines
- BUG/MINOR: quic: Wrong cluster secret initialization
- CLEANUP: quic: Remove useless free_quic_tx_pkts() function.
- MEDIUM: init: initialize the trash earlier
- MINOR: tools: add function read_line_to_trash() to read a line of a file
- MINOR: cfgparse: use read_line_to_trash() to read from /sys
- MEDIUM: cfgparse: assign NUMA affinity to cpu-maps
- MINOR: cpuset: dynamically allocate cpu_map
- REORG: cpuset: move parse_cpu_set() and parse_cpumap() to cpuset.c
- CI: musl: highlight section if there are coredumps
- CI: musl: drop shopt in workflow invocation