Willy Tarreau [Wed, 21 Dec 2022 14:23:04 +0000 (15:23 +0100)]
BUILD: makefile: rework 51D to split v3/v4
There are multiple options for 51DEGREES, v3/v4, threading or not,
pattern/trie for v3, vhash for v4, use of libatomic, etc. While the
current rules deal with all of that correctly, it's too difficult to
focus on one version because the two are interleaved for every single
option. Let's just split them into two independent blocks. This removes
some if/endif, and makes reading it much more straightforward.
Some conditional blocks have become out of control over time and are
totally unreadable. It took 15 minutes to figure out which "endif" matched
which "if" in the PCRE one for example, and DA and 51D also use multiple
levels that are not easy to sort out.
Let's reindent the whole thing. Most places that were already indented
used 2 spaces per level, so here we're keeping that principle. It was
just not done on the last two blocks, which are used to define some rules,
because we don't want spaces before rule names. A few had the opening
condition indicated on the endif line.
It would be desirable to preserve this more maintainable layout over
time.
Willy Tarreau [Wed, 21 Dec 2022 16:27:46 +0000 (17:27 +0100)]
BUILD: makefile: properly report USE_PCRE/USE_PCRE2 in features
The PCRE/PCRE2 CFLAGS forcefully add -DUSE_PCRE or -DUSE_PCRE2 because
we want USE_STATIC_PCRE or USE_PCRE_JIT to implicitly enable them.
However, doing it this way is incorrect because the option is not visible
in BUILD_FEATURES, and for example, some regtests depending on such
features (such as map_redirect.vtc) would be skipped if only the static
or jit versions are enabled.
The correct way to do this is to always set USE_PCRE feature for such
variants instead of adding the define.
This could almost be backported but would require backporting other
makefile patches and likely only has effects on the reg-tests at the
moment, so it's probably not worth the hassle.
Willy Tarreau [Wed, 21 Dec 2022 10:59:33 +0000 (11:59 +0100)]
BUILD: makefile: add an implicit USE_MATH variable for -lm
Lua and 51d make use of -lm, which would be better served by having its
own option than by being passed in the LDFLAGS. It also simplifies linking
against a static version of libm. The option uses its own LDFLAGS which
are automatically collected into OPTIONS_LDFLAGS.
Willy Tarreau [Wed, 21 Dec 2022 09:46:07 +0000 (10:46 +0100)]
BUILD: makefile: never force -latomic, set USE_LIBATOMIC instead
Two places, 51Dv4 and AIX7.2, used to forcefully add -latomic to the
ldflags (and via different variables). This must not be done because
it depends on compiler, arch, etc. USE_LIBATOMIC=implicit is much
better: it allows the user to forcefully disable it if undesired.
The LIBATOMIC_LDFLAGS are set to -latomic and automatically added
to OPTIONS_LDFLAGS.
It will make this dependency appear in haproxy -vv but that's not
an issue and it may even sometimes help when troubleshooting.
Willy Tarreau [Thu, 15 Dec 2022 10:55:51 +0000 (11:55 +0100)]
BUILD: makefile: do not restrict Lua's prepend path to empty LUA_LIB_NAME
The HLUA_PREPEND_PATH and HLUA_PREPEND_CPATH settings were only applied
when LUA_LIB_NAME was empty, otherwise they were silently ignored. Let's
take them out of that conditional block as it makes no sense to enforce
such a restriction (the main reason in fact is that this whole block is
unreadable).
Also take this opportunity to unfold the last two nested tests of
LUA_LIB_NAME and put comments around certain blocks to know what "endif"
matches what "if".
Willy Tarreau [Thu, 15 Dec 2022 09:47:14 +0000 (10:47 +0100)]
BUILD: makefile: make sure LUA_INC and LUA_LIB are always initialized
While LUA_INC is sometimes set in the makefile (only when LUA_LIB_NAME
is not set), LUA_LIB is never pre-initialized and faces the risk of
being accidentally inherited from the environment. Let's make sure both
are properly reset first when not explicitly set. For this we always
set LUA_INC based on the autodetection if it's not set, and always
pre-initialize LUA_LIB to empty. This also helps make that block
slightly less difficult to understand.
Willy Tarreau [Wed, 14 Dec 2022 17:46:48 +0000 (18:46 +0100)]
BUILD: makefile: reference libdl only once
There used to be special cases where USE_DL was only for the SSL library,
then for Lua, then was used globally, but each of them kept their own copy
of -ldl. When building on a system supporting libdl, with SSL and Lua
enabled, no less than 3 -ldl are found on the linker's command line.
What matters is only that it's close to the end, so let's remove the old
specific ones and move the global one to the end. The option now uses its
own DL_LDFLAGS that is automatically collected into OPTIONS_LDFLAGS.
Willy Tarreau [Wed, 14 Dec 2022 17:38:58 +0000 (18:38 +0100)]
BUILD: makefile: make sure to also ignore SSL_INC when using wolfssl
I got a build error when adding USE_OPENSSL_WOLFSSL to my make command
line because SSL_INC was still set and caused some conflicting headers
to be included first. There's already an exclusion test for the wolfssl
variant used for SSL_LIB, make it also cover SSL_INC to avoid this.
This may be backported to 2.7 to ease testing of wolfssl.
Willy Tarreau [Wed, 14 Dec 2022 17:18:41 +0000 (18:18 +0100)]
BUILD: makefile: clean the wolfssl include and lib generation rules
The default include paths for wolfssl didn't match the explicit pattern
one. This was causing some confusion about what to look for, complicating
the rules and causing /usr/local/include to be automatically included if a
path was not set.
Let's just proceed as we usually do, i.e. pass -I only when a path is
specified, so that it works similarly to openssl. Let's also simplify
the LDFLAG rule at the same time.
This may be backported to 2.7 to ease testing of wolfssl.
Willy Tarreau [Thu, 22 Dec 2022 16:58:24 +0000 (17:58 +0100)]
BUILD: makefile: ensure that all USE_* handlers appear before CFLAGS are used
It happens that a few "if USE_foo" were placed too low in the makefile,
and would mostly work by luck thanks to not using variables that were
already referenced before. The opentracing include is even trickier
because it extends OPTIONS_CFLAGS that was last read a few lines before
being included, but it only works because COPTS is defined as a macro and
not a variable, so it will be evaluated later. At least now it doesn't
touch OPTIONS_* anymore and since it's cleanly arranged, it will work by
default via the flags collector.
Let's just move these late USE_* handlers higher up and place a visible
delimiter after them as a reminder not to add any below.
Willy Tarreau [Thu, 22 Dec 2022 17:24:46 +0000 (18:24 +0100)]
BUILD: makefile: start to automatically collect CFLAGS/LDFLAGS
Now OPTIONS_CFLAGS and OPTIONS_LDFLAGS don't need to be set anymore
for options USE_xxx that set xxx_CFLAGS or xxx_LDFLAGS. These ones
will be automatically collected.
The only entry for now that was ready for this was PCRE2, so it was
adjusted so as not to append to OPTIONS_LDFLAGS anymore. More will
come later.
Willy Tarreau [Thu, 22 Dec 2022 18:44:35 +0000 (19:44 +0100)]
BUILD: makefile: add a function to collect all options' CFLAGS/LDFLAGS
The new function collect_opts_flags now scans all USE_* options defined
in use_opts and appends the corresponding *_CFLAGS and *_LDFLAGS to
OPTIONS_{C,LD}FLAGS respectively. This will be useful to get rid of all
the individual concatenations to these variables.
Willy Tarreau [Thu, 22 Dec 2022 14:47:47 +0000 (15:47 +0100)]
BUILD: makefile: initialize all build options' variables at once
A lot of _SRC, _INC, _LIB etc variables are set and expected to be
initialized to an empty string by default. However, an in-depth
review of all of them showed that WOLFSSL_{INC,LIB}, SSL_{INC,LIB},
LUA_{INC,LIB}, and maybe others were not always initialized and could
sometimes leak from the environment and as such cause strange build
issues when running from cascaded scripts that had exported them.
The approach taken here consists in iterating over all USE_* options
and unsetting any _SRC, _INC, _LIB, _CFLAGS and _LDFLAGS that follow
the same name. For the few variables whose names don't exactly
match the build option (SSL & WOLFSSL), these ones are specifically
added to the list. The few that were explicitly cleared in their own
sections were just removed since not needed anymore. Note that an
"undefine" command appeared in GNU make 3.82 but since we support
older ones we can only initialize the variables to an empty string
here. It's not a problem in practice.
We're now certain that these variables are empty wherever they are
used, and that it is possible to just append to them, or use them
as-is.
Willy Tarreau [Thu, 22 Dec 2022 18:32:24 +0000 (19:32 +0100)]
BUILD: makefile: move common options-oriented macros to include/make/options.mk
Some macros and functions are barely understandable and are only used
to iterate over known options from the use_opts list. Better assign
them a name and move them into a dedicated file to clean the makefile
a little bit. Now at least "use_opts" only appears once, where it is
defined. This also made it possible to completely remove the BUILD_FEATURES
macro that caused some confusion until the previous commit.
Willy Tarreau [Wed, 21 Dec 2022 10:57:43 +0000 (11:57 +0100)]
BUILD: makefile: build the features list dynamically
The BUILD_FEATURES string was created too early to inherit implicit
additions. This could make the features list report that some features
were disabled while they had later been enabled. Better make it a macro
that is interpreted where needed based on the current state of each
option.
Amaury Denoyelle [Thu, 22 Dec 2022 17:56:09 +0000 (18:56 +0100)]
BUG/MINOR: mux-quic: ignore remote unidirectional stream close
Remove ABORT_NOW() on remote unidirectional stream closure. This is
required to ensure our implementation is flexible enough to not fail on
unknown stream types.
Note that for the moment the MAX_STREAMS_UNI flow-control frame is never
emitted. This should be unnecessary for HTTP/3, which makes limited use
of unidirectional streams, but may be required if other application
protocols are supported in the future.
ABORT_NOW() was triggered by s2n-quic which opens an unknown
unidirectional stream with greasing. This was detected by QUIC interop
runner on the http3 test case.
MINOR: h3: use stream error when needed instead of connection
Use a stream error when possible instead of always closing the whole
connection. This requires a new field <err> in h3s structure.
Slightly change the decoding loop to facilitate error propagation. It
will be interrupted as soon as <h3s.err> or <h3c.err> is non-null. In
the latter case, a CONNECTION_CLOSE is requested through
qcc_emit_cc_app().
For stream errors, the H3 layer uses qcc_abort_stream_read() coupled with
qcc_reset_stream(). This is in conformance with RFC 9114, which
recommends using STOP_SENDING + RESET_STREAM emission on stream errors.
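A minimal sketch of the resulting dispatch, with stand-in types (the
qcc_* functions are the ones named above; the structures and control
flow are illustrative only):

  /* stand-ins for the real h3s/h3c structures of src/h3.c */
  struct h3s_sk { int err; };
  struct h3c_sk { int err; };

  static void h3_err_dispatch_sk(struct h3s_sk *h3s, struct h3c_sk *h3c)
  {
      if (h3s->err) {
          /* stream level, as RFC 9114 recommends:
           * qcc_abort_stream_read(qcs);       -> STOP_SENDING
           * qcc_reset_stream(qcs, h3s->err);  -> RESET_STREAM  */
      }
      else if (h3c->err) {
          /* connection level:
           * qcc_emit_cc_app(qcc, h3c->err);   -> CONNECTION_CLOSE */
      }
  }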
This commit is part of implementing H3 errors at the stream level.
Implement STOP_SENDING. This is divided into two main functions:
* qcc_abort_stream_read() which can be used by the application protocol to
request a STOP_SENDING. This sets the flag QC_SF_READ_ABORTED.
* qcs_send_reset() is a static function called after the preceding one.
It will send a STOP_SENDING via qcc_send().
The QC_SF_READ_ABORTED flag is now properly used: if activated on a stream
during qcc_recv(), the <qcc.app_ops.decode_qcs> callback is skipped. Also,
aborting reads on an unknown remote unidirectional stream is now fully
supported with the emission of a STOP_SENDING as specified by RFC 9000.
This commit is part of implementing H3 errors at the stream level. This
will allow the H3 layer to request the peer to close its endpoint for
an error on a stream.
Implement RESET_STREAM reception by mux-quic. On reception, the qcs instance
will be marked as remotely closed and its Rx buffer released. The stream
layer will be flagged on error if still attached.
This commit is part of implementing H3 errors at the stream level.
Indeed, on H3 stream errors, STOP_SENDING + RESET_STREAM should be
emitted. The STOP_SENDING will in turn generate a RESET_STREAM from the
remote peer, which will be handled thanks to this patch.
MINOR: mux-quic: do not count stream flow-control if already closed
It is unnecessary to increase stream credit once its size is known.
Indeed, a peer cannot send a greater offset than the value advertised.
Otherwise, the connection will be closed on STREAM reception with
FINAL_SIZE_ERROR.
This commit is a small optimization and may prevent the emission of
unneeded MAX_STREAM_DATA frames on some occasions.
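A sketch of the idea with made-up field names (the real qcs fields
differ):

  #include <stdint.h>

  struct qcs_sk { uint64_t rx_consumed; int final_size_known; };

  /* once the final size is known, the peer may not send beyond it
   * without triggering FINAL_SIZE_ERROR, so advertising more credit
   * via MAX_STREAM_DATA would only produce useless frames */
  static int qcs_may_grow_credit_sk(const struct qcs_sk *qcs)
  {
      return !qcs->final_size_known;
  }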
Amaury Denoyelle [Wed, 21 Dec 2022 09:21:58 +0000 (10:21 +0100)]
MEDIUM: mux-quic: implement shutw
Implement mux_ops shutw operation for QUIC mux. A RESET_STREAM is
emitted unless the stream is already closed because all data or a
RESET_STREAM has already been transmitted.
This operation is notably useful when upper stream layer wants to close
the connection early due to an error.
This was tested by using an HTTP server which listens with PROXY protocol
support. The corresponding server line in the haproxy configuration
deliberately does not specify send-proxy. This causes the server to close
the connection abruptly. Without this patch, nothing was done on the QUIC
stream, which was kept open until the whole connection was closed. Now, a
proper RESET_STREAM is emitted to report the error.
MEDIUM: httpclient: change the default log format to skip duplicate proxy data
The httpclient emits logs in the httplog format; however, it still
displays the frontend, the backend and the server.
In the case of the httpclient we only need to know that we are using the
httpclient, so the backend and server information is irrelevant.
In the case of extra code, the name of the proxy can be long and would be
displayed twice, which is not useful.
This is the same log-format as the httplog but the %b/%s is now -/- so
the format is still compatible with an httplog parser.
MINOR: httpclient: don't add body when istlen is empty
Don't try to create a request with a body in httpclient_req_gen() if the
payload ist has a ptr but no len.
Sometimes people have their httpclient stuck because they use an ist
with a data ptr but no len. Check the len so this mistake doesn't block
the client.
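A minimal sketch of the guard, with a stand-in for haproxy's ist type
(include/import/ist.h):

  #include <stddef.h>

  struct ist { char *ptr; size_t len; };
  static inline size_t istlen(const struct ist ist) { return ist.len; }

  /* a payload with a data ptr but a zero len must not be treated as a
   * body, otherwise the httpclient waits forever for data */
  static int payload_has_body(const struct ist payload)
  {
      return istlen(payload) != 0;
  }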
BUG/MINOR: pool/stats: Use ullong to report total pool usage in bytes in stats
The same change was already performed for the cli. The stats applet and the
prometheus exporter are also concerned. Both use the stats API and rely on
pool functions to get total pool usage in bytes. pool_total_allocated() and
pool_total_used() must return 64-bit unsigned integers to avoid any wrapping
around 4G.
BUG/MEDIUM: mux-h2: Refuse interim responses with end-stream flag set
As stated in RFC 9113#8.1, a HEADERS frame with the ES flag set that carries
an informational status code is malformed. However, there was no test on this
condition.
On 2.4 and higher, it is hard to predict consequences of this bug because
end of the message is only reported with a flag. But on 2.2 and lower, it
leads to a crash because there is an unexpected extra EOM block at the end
of an interim response.
Now, when an ES flag is detected on a HEADERS frame for an interim message, a
stream error is sent (RST_STREAM/PROTOCOL_ERROR).
This patch should solve the issue #1972. It should be backported as far as
2.0.
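A sketch of the new check (names illustrative, not the actual mux-h2
code):

  /* RFC 9113 #8.1: a HEADERS frame carrying a 1xx (interim) status
   * must not have the ES (END_STREAM) flag set */
  static int interim_headers_valid_sk(int status, int es_flag)
  {
      if (status >= 100 && status < 200 && es_flag)
          return 0;  /* malformed: RST_STREAM(PROTOCOL_ERROR) */
      return 1;
  }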
REGTESTS: ssl: Add tests for ocsp auto update mechanism
Tests a subpart of the ocsp auto update feature. It will mainly focus on
the 'auto' mode since the 'on' one relies strongly on timers way too
long to be used in a regtest context.
MEDIUM: ssl: Start update task if at least one ocsp-update option is set to on
This patch effectively enables the ocsp auto update mechanism. If at
least one ocsp-update option is enabled in a crt-list, then the ocsp
auto update task is created. It will look into the dedicated ocsp update
tree for the next entry to be updated, use the http_client to send the
ocsp request to the proper responder, validate the received ocsp
response and update the ocsp response tree before finally reinserting
the entry in the ocsp update tree (with a next update time set to
now+1H).
The main task will then sleep until another entry needs to be updated.
The task gets scheduled after config check in order to avoid trying to
update ocsp responses while configuration is still being parsed (and
certificates and actual ocsp responses are loaded).
This patch contains the main function of the ocsp auto update mechanism
as well as an init and destroy function of the task used for this.
The task is not created in this patch but in a later one.
The function has two distinct parts and the branching to one or the
other is completely based on the fact that the cur_ocsp pointer of the
ssl_ocsp_task_ctx member is set.
If the pointer is not set, we need to look at the first item of the
update tree and see if it needs to be updated. If it does not, we simply
wait until the time is right and leave the task asleep. If it does need to
be updated, we simply build and send the corresponding ocsp request
thanks to the http_client. The task is then sent to sleep with an expire
time set to infinity. The http_client will wake it back up once the
response is received (or a timeout occurs). Just note that during this
whole process the certificate_ocsp object corresponding to the entry
being updated is taken out of the update tree and only stored in the
ssl_ocsp_task_ctx context.
Once the task is woken up by the http_client, it branches to the
response processing part of the function which basically checks that the
response is valid and inserts it into the ocsp_response tree. The task
then goes back to sleep until another entry needs to be updated.
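A skeleton of that two-part control flow (hypothetical names; only the
branching on cur_ocsp is meant to match the description above):

  struct ocsp_upd_ctx_sk { void *cur_ocsp; };

  static void ocsp_update_task_sk(struct ocsp_upd_ctx_sk *ctx)
  {
      if (ctx->cur_ocsp) {
          /* woken up by the http_client: validate the response, insert
           * it in the ocsp_response tree, reinsert the entry in the
           * update tree at now + 1H, then clear cur_ocsp */
          ctx->cur_ocsp = NULL;
          return;
      }
      /* no update in progress: peek at the first entry of the update
       * tree; if its time has not come, sleep until then; otherwise
       * take it out of the tree, keep it in cur_ocsp, send the OCSP
       * request via the http_client and sleep with an infinite expire */
  }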
MEDIUM: ssl: Insert ocsp responses in update tree when needed
When 'ocsp-update' is enabled for a given certificate, we need to insert
the certificate_ocsp member of this certificate in the OCSP update tree
as well as the already existing OCSP response tree. For such an entry to
be created, the certificate needs to contain an "OCSP URI" field, and we
also need to know the certificate's issuer, that is used to build the
OCSP_CERTID. When no OCSP response is known for a given certificate, an
empty certificate_ocsp object gets created so that it can be inserted in
the ocsp update tree.
The entry is inserted on the first spot of the update tree since its
expire time is 0. Then whenever the update task is started, it will try
to get responses for those certificates first.
In order for the update process to work, we also need to store some
information relative to the main certificate into the certificate_ocsp
structure. This avoids having to keep a reference to a ckch in an ocsp
tree entry.
This patch adds a reference to the certificate chain as well as the ocsp
issuer that might have been filled during init into the certificate_ocsp
object. It also gets the ocsp uri at this time since it is contained in
the server's certificate. We only take the first uri that might be
contained in the certificate though.
Those fields are only filled when ocsp auto update is enabled for the
concerned certificate.
MINOR: ssl: Store 'ocsp-update' mode in the ckch_data and check for inconsistencies
The 'ocsp-update' option is parsed at the same time as all the other
bind line options but it does not actually have anything to do with the
bind line since it concerns the frontend certificate instead. For that
reason, we should have a means to identify inconsistencies in the
configuration and raise an error when a given certificate has two
different ocsp-update modes specified in one or more crt-lists.
The simplest way to do it is to store the ocsp update mode directly in
the ckch and not only in the ssl_bind_conf.
This option will define how the ocsp update mechanism behaves. The
option can either be set to 'on' or 'off' and can only be specified in a
crt-list entry so that we ensure that it concerns a single certificate.
The 'off' mode is the default one and corresponds to the old behavior
(no automatic update).
When the option is set to 'on', we will try to get an ocsp response
whenever an ocsp uri can be found in the frontend's certificate. The
only limitation of this mode is that the certificate's issuer will have
to be known in order for the OCSP certid to be built.
This patch only adds the parsing of the option. The full functionality
will come in a later commit.
MINOR: ssl: Add ocsp_update_tree and helper functions
The OCSP update tree holds ocsp responses that will need to be updated
automatically. The entries are inserted in an eb64_tree where the keys
are the absolute time after which the entry will need to be updated.
MEDIUM: ssl: Add ocsp_certid in ckch structure and discard ocsp buffer early
The ocsp_response member of the cert_key_and_chain structure is only
used temporarily. During a standard init process where an ocsp response
is provided, this ocsp file is first copied into the ocsp_response
buffer without any ocsp-related parsing (see
ssl_sock_load_ocsp_response_from_file), and then the contents are
actually interpreted and inserted into the actual ocsp tree
(cert_ocsp_tree) later in the process (see ssl_sock_load_ocsp). If the
response was deemed valid, it is then copied into the actual
ocsp_response structure's 'response' field (see
ssl_sock_load_ocsp_response). From this point, the ocsp_response field
of the cert_key_and_chain object could be discarded since actual ocsp
operations will be based on the certificate_ocsp object.
The only remaining runtime use of the ckch's ocsp_response field was in
the CLI, and more precisely in the 'show ssl cert' mechanism.
This constraint could be removed by adding an OCSP_CERTID directly in
the ckch because the buffer was only used to get this id.
This patch then adds the OCSP_CERTID pointer in the ckch, it clears the
ocsp_response buffer early and simplifies the ckch_store_build_certid
function.
The new "update ssl ocsp-response <certfile>" CLI command allows to
update the stored OCSP response for a given certificate. It relies on
the http_client which is used to send an HTTP request to the OCSP
responder whose URI can be extracted from the certificate.
This command won't work for a certificate that did not have a stored
OCSP response yet.
MINOR: ssl: Add helper function that checks the validity of an OCSP response
This helper function will check that an OCSP response is valid, meaning
that the proper "Content-Type: application/ocsp-response" header is
present and the data itself is a proper OCSP_RESPONSE that can be
checked thanks to the issuer certificate.
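The rough shape of such a check using standard OpenSSL OCSP calls (the
helper's actual name and signature in haproxy may differ; the
Content-Type check happens separately on the HTTP response headers):

  #include <openssl/ocsp.h>

  static int ocsp_reply_looks_valid_sk(const unsigned char *der, long len)
  {
      OCSP_RESPONSE *resp = d2i_OCSP_RESPONSE(NULL, &der, len);

      if (!resp)
          return 0;
      if (OCSP_response_status(resp) != OCSP_RESPONSE_STATUS_SUCCESSFUL) {
          OCSP_RESPONSE_free(resp);
          return 0;
      }
      /* ... then verify the signature against the issuer certificate
       * with OCSP_basic_verify() before accepting the response ... */
      OCSP_RESPONSE_free(resp);
      return 1;
  }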
The tree that contains OCSP responses is never locked despite being used
at runtime for OCSP stapling as well as the CLI through "set ssl cert"
and "set ssl ocsp-response" commands.
Everything works though because the certificate_ocsp structure is
refcounted and the tree's entries are cleaned up when SSL_CTXs are
destroyed (thanks to an ex_data entry in which the certificate_ocsp
pointer is stored).
This new lock will come to use when the OCSP auto update mechanism is
fully implemented because this new feature will be based on another tree
that stores the same certificate_ocsp members and updates their contents
periodically.
Willy Tarreau [Wed, 21 Dec 2022 08:16:55 +0000 (09:16 +0100)]
BUG/MINOR: quic: do not allocate more rxbufs than necessary
When QUIC thread binding was fixed by commit f5a0c8abf ("MEDIUM: quic:
respect the threads assigned to a bind line"), one point was overlooked
regarding rxbuf allocation. Indeed, there's one rxbuf per listener and
per bound thread. Originally the loop would iterate over all threads,
but this is not needed anymore and causes lots of memory to be allocated
in scenarios where shards are used, the worst one being "shards by-thread"
which allocates N^2 buffers for N threads. This gives us 2304 buffers
(or 576 MB of RAM) for 48 threads!
Let's only allocate one buffer per bound thread on each listener to fix
this.
This should be backported to 2.7 and generally wherever the commit
above is backported. It depends on the previous commit below:
"BUG/MEDIUM: quic: properly take shards into account on bind lines"
Willy Tarreau [Wed, 21 Dec 2022 08:09:19 +0000 (09:09 +0100)]
BUG/MEDIUM: quic: properly take shards into account on bind lines
Shards were completely forgotten in commit f5a0c8abf ("MEDIUM: quic:
respect the threads assigned to a bind line"). The thread mask is
taken from the bind_conf, but since shards were introduced in 2.5,
the per-listener mask is held by the receiver and can be smaller
than the bind_conf's mask.
The effect here is that the traffic is not distributed to the
appropriate thread. At first glance it's not dramatic since it remains
one of the threads eligible for the bind_conf, but it still means that
in some contexts such as "shards by-thread", some concurrency may
persist on listeners while they're expected to be alone. One identified
impact is that it requires more rxbufs than necessary, but there may
possibly be other not yet identified side effects.
This must be backported to 2.7 and everywhere the commit above is
backported.
Amaury Denoyelle [Tue, 20 Dec 2022 13:47:16 +0000 (14:47 +0100)]
BUG/MEDIUM: mux-quic: fix double delete from qcc.opening_list
qcs instances for bidirectional streams are inserted in
<qcc.opening_list>. It is removed from the list once a full HTTP request
has been parsed. This is required to implement http-request timeout.
In case a stream is deleted before receiving a full HTTP request, it also
must be removed from <qcc.opening_list>. This was not the case in the first
implementation but has been fixed by the following patch:
641a65ff3cccd394eed49378c6ccdb8ba0a101a7
BUG/MINOR: mux-quic: remove qcs from opening-list on free
This means that now a stream can be deleted from the list in two
different functions. Sadly, as LIST_DELETE was used in both cases,
nothing prevented a double-deletion from the list, even though
LIST_INLIST was used. Both calls are replaced with LIST_DEL_INIT which
is idempotent.
This bug causes memory corruption which results in most cases in a
segfault, most of the time outside of the mux-quic code itself. It was
first found by gabrieltz, who reported it in GitHub issue #1903. Big
thanks to him for his testing.
This bug also causes failures on several 'M' transfer test cases of the
QUIC interop-runner. The s2n-quic client is particularly useful in this
case as its segfault triggers were most of the time on the LIST_DELETE
operation itself. This is probably due to it encapsulating the HEADERS
frame with the FIN bit delayed in a following empty STREAM frame.
This must be backported wherever the above patch is, up to 2.6.
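For reference, a minimal model of the two macros, in the spirit of
include/haproxy/list.h, showing why one is idempotent and the other is
not:

  struct list { struct list *n, *p; };

  #define LIST_INIT(l)     do { (l)->n = (l)->p = (l); } while (0)

  /* unlinks the element but leaves its pointers dangling: a second
   * call corrupts whatever the stale neighbours have become */
  #define LIST_DELETE(l)   do { (l)->n->p = (l)->p; (l)->p->n = (l)->n; } while (0)

  /* re-points the element to itself after unlinking, so a second call
   * only "removes" it from itself: a harmless no-op */
  #define LIST_DEL_INIT(l) do { LIST_DELETE(l); LIST_INIT(l); } while (0)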
Willy Tarreau [Mon, 19 Dec 2022 16:26:25 +0000 (17:26 +0100)]
MINOR: pool: only use opportunistic versions of the swrate_add() functions
We don't need to know very accurately how much RAM is needed in a pool,
however we must not spend time competing with other threads trying to be
the one with the most accurate value. Let's use the "_opportunistic"
variants of swrate_add() which will simply cause some updates to be
dropped in case of thread contention. This should significantly improve
the situation when dealing with many threads and small per-thread caches.
Performance gains of up to 1-2% were observed on 48-thread systems thanks
to this alone.
Willy Tarreau [Mon, 19 Dec 2022 16:19:45 +0000 (17:19 +0100)]
MINOR: freq_ctr: add opportunistic versions of swrate_add()
Some uses of swrate_add() only consist in getting a rough estimate of
a frequency. There are cases where speed matters more than accuracy
(e.g. pools). For such use cases, let's just stop looping on the CAS:
if the update fails, another thread is already providing input, and
it's not dramatic to lose the race. All these functions are now
suffixed with "_opportunistic".
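A sketch of the opportunistic variant using standard C11 atomics (the
real code uses haproxy's atomic macros; the update formula follows
swrate_add()):

  #include <stdatomic.h>

  static inline void swrate_add_opportunistic_sk(_Atomic unsigned int *sum,
                                                 unsigned int n,
                                                 unsigned int v)
  {
      unsigned int old_sum = atomic_load_explicit(sum, memory_order_relaxed);
      unsigned int new_sum = old_sum - (old_sum + n - 1) / n + v;

      /* single attempt: losing the race merely drops one sample */
      atomic_compare_exchange_weak_explicit(sum, &old_sum, new_sum,
                                            memory_order_relaxed,
                                            memory_order_relaxed);
  }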
Willy Tarreau [Mon, 19 Dec 2022 07:15:57 +0000 (08:15 +0100)]
MINOR: pool: make the thread-local hot cache size configurable
Till now it was only possible to change the thread-local hot cache size
at build time using CONFIG_HAP_POOL_CACHE_SIZE. But during benchmarks,
huge contention was sometimes noticed in the lower-level memory
allocators, indicating that larger caches could be beneficial, especially
on machines whose CPUs have large L2 caches.
Given that the checks against this value were no longer on a hot path,
there was no reason to keep forcing it to be tuned at build time. So
this patch allows it to be set via tune.memory-hot-size.
It's worth noting that during the boot phase the value remains zero so
that it's possible to know if the value was set or not, which opens the
possibility that we try to automatically adjust it based on the per-cpu
L2 cache size or the use of certain protocols (none of this is done yet).
Willy Tarreau [Tue, 20 Dec 2022 10:54:35 +0000 (11:54 +0100)]
OPTIM: pool: split the read_mostly from read_write parts in pool_head
Performance profiling on a 48-thread machine showed a lot of time spent
in pool_free(), precisely at the point where pool->limit was retrieved.
And the reason is simple. Some parts of the pool_head are heavily updated
only when facing a cache miss ("allocated", "used", "needed_avg"), while
others are always accessed (limit, flags, size). The fact that both
sets of fields were stored in the same cache line made it very difficult
for each thread to access this precious info even when working with its
own cache.
By just splitting the fields apart, a test on QUIC (which stresses pools
a lot) more than doubled performance from 42 Gbps to 96 Gbps!
Given that the patch only reorders fields and addresses such a significant
contention, it should be backported to 2.7 and 2.6.
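The layout idea, sketched with illustrative fields (not the exact
pool_head definition), assuming 64-byte cache lines:

  struct pool_head_sk {
      /* read-mostly: looked at on every alloc/free */
      unsigned int limit;
      unsigned int flags;
      unsigned int size;

      /* heavily written on cache misses: gets its own cache line */
      _Alignas(64) unsigned int allocated;
      unsigned int used;
      unsigned int needed_avg;
  };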
BUG/MEDIUM: stats: Rely on a local trash buffer to dump the stats
It is possible to block the stats applet if a line exceeds the free space in
the response buffer while the buffer is empty. It is only an issue in HTTP
because of the HTX overhead and, AFAIK, only with json output.
In this case, the applet is unable to write anything in the response buffer
and waits for some free space to proceed further. On the other hand, because
the response channel is empty, nothing is sent and thus no space can be
freed. At this stage, the stream and the applet are blocked waiting for the
other side.
To avoid this situation, we must take care to not dump a line exceeding the
free space in the HTX message. It means we can no longer rely on the global
trash buffer. At least, not directly. The trick is to use a local trash
buffer, mapped on the global one but with a different size. We use b_make()
to do so. The local trash buffer is thread local to avoid any concurrency
issue.
It is a valid fix. However it could be good to review the internal API of
the stats applet to not rely on a global variable.
This patch should solve issue #1873. It must be backported at least as far as
2.6. Older versions must be evaluated first but it is probably possible to
hit this bug with long proxy/server names.
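A sketch of the b_make() trick with stand-ins for the buffer API
(include/haproxy/buf.h); the size computation is the illustrative part:

  #include <stddef.h>

  struct buffer { size_t size; char *area; size_t data; size_t head; };

  static inline struct buffer b_make(char *area, size_t size, size_t head,
                                     size_t data)
  {
      struct buffer b = { .size = size, .area = area, .data = data,
                          .head = head };
      return b;
  }

  /* same storage as the global trash, smaller usable size: no dumped
   * line can ever exceed the room left in the HTX message */
  static struct buffer stats_local_trash_sk(char *trash_area,
                                            size_t htx_free_space)
  {
      return b_make(trash_area, htx_free_space, 0, 0);
  }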
BUG/MINOR: mux-h1: Never handle error at mux level for running connection
During the request parsing, we must be sure to never handle errors at the
mux level if the connection is running (or closing). The error must be
handled by the upper layer.
It should never happen. But there is an edge case that was not properly
handled. If all data are received in the first packet with the read0 and the
request is truncated on the payload, the stream-connector is
created, so the H1C is in RUNNING state. But an error is reported because
the request is truncated. In this specific case, the error is handled by the
mux while it should not be.
This patch is related to #1966. It must be backported to 2.7.
BUG/MINOR: mux-h1: Report EOS on parsing/internal error for not running stream
When an error occurs during request parsing while the stream is not
running, an EOS must be reported. It is not an issue for an embryonic
connection because the H1 stream is orphan. However, it is an issue with
connections upgraded from TCP to H1. In this case, the upgrade is not
performed because there is an early error. However the H1 stream is not
orphan and is not destroyed. The H1 multiplexer will wait for the detach
event. But without EOS, the upper layer is unable to perform the shutdown.
This patch is related to #1966. It must be backported to 2.7. Older versions
are not affected by this issue.
Bertrand Jacquin [Sat, 17 Dec 2022 21:39:38 +0000 (21:39 +0000)]
BUG/MEDIUM: tests: use tmpdir to create UNIX socket
testdir can be a very long path since it depends on the source
directory path; this can lead to failures during tests when the UNIX socket
path exceeds the maximum allowed length of 97 characters as defined in
str2sa_range().
BUILD: peers: peers-t.h depends on stick-table-t.h
peers-t.h uses "struct stktable" as well as STKTABLE_DATA_TYPES which
are defined in stick-table-t.h. It worked by accident because
stick-table-t.h was always included first, but it could provoke build
issues with EXTRA code.
Willy Tarreau [Thu, 15 Dec 2022 18:34:23 +0000 (19:34 +0100)]
BUILD: 51d: fix build issue with recent compilers
With gcc-11.2 and binutils-2.37 I'm getting link errors due to multiply
defined symbols when enabling USE_51DEGREES_V4. This is caused by two
variables being present in hash.h instead of hash.c, hence they're
defined twice.
This patch just moves them to hash.c and turns their declarations into
extern ones.
No backport is needed since this was introduced in 2.8-dev.
Amaury Denoyelle [Wed, 14 Dec 2022 17:04:06 +0000 (18:04 +0100)]
BUG/MINOR: quic: fix crash on PTO rearm if anti-amplification reset
There is a possible segfault when accessing qc->timer_task in
quic_conn_io_cb() without testing it. It seems however very rare as it
requires several conditions to be met:
* quic_conn must be in CLOSING state after having sent a
CONNECTION_CLOSE which frees the qc.timer_task
* the quic_conn handshake must still be in progress: in fact, qc.timer_task
is accessed on this path because of the anti-amplification limit being
lifted.
I was thus far unable to trigger it, but benchmarking tests seem to have
fired it with the following backtrace as a result:
#0 _task_wakeup (f=4096, caller=0x5620ed004a40 <_.46868>, t=0x0) at include/haproxy/task.h:195
195 state = _HA_ATOMIC_OR_FETCH(&t->state, f);
[Current thread is 1 (Thread 0x7fc714ff1700 (LWP 14305))]
(gdb) bt
#0 _task_wakeup (f=4096, caller=0x5620ed004a40 <_.46868>, t=0x0) at include/haproxy/task.h:195
#1 quic_conn_io_cb (t=0x7fc5d0e07060, context=0x7fc5d0df49c0, state=<optimized out>) at src/quic_conn.c:4393
#2 0x00005620ecedab6e in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:596
#3 0x00005620ecedb63c in process_runnable_tasks () at src/task.c:861
#4 0x00005620ecea971a in run_poll_loop () at src/haproxy.c:2913
#5 0x00005620ecea9cf9 in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3102
#6 0x00007fc773c3f609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#7 0x00007fc77372d133 in clone () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) up
#1 quic_conn_io_cb (t=0x7fc5d0e07060, context=0x7fc5d0df49c0, state=<optimized out>) at src/quic_conn.c:4393
4393 task_wakeup(qc->timer_task, TASK_WOKEN_MSG);
(gdb) p qc
$1 = (struct quic_conn *) 0x7fc5d0df49c0
(gdb) p qc->timer_task
$2 = (struct task *) 0x0
This is a follow-up of "BUG/MINOR: stats: fix show stat json buffer limitation".
However this time this is purely preemptive as we did not reach the buffer
limitation yet. But now is the proper time so that this won't be an issue
in the upcoming versions.
BUG/MINOR: stats: fix show stat json buffer limitation
The json output type is a lot more verbose than the other output types.
Because of this and the increasing number of metrics implemented within
haproxy, we started to reach the max bufsize limit (which defaults to 16k)
when dumping stats to json since 2.6-dev1.
This results in stats output being truncated with
"[{"errorStr":"output buffer too short"}]"
This was reported by Gabriel in #1964.
Thanks to "MINOR: stats: introduce stats field ctx", we can now make
multipart (using multiple buffers) dumping, in case a single buffer is not big
enough to hold the complete stat line.
For now, only stats_dump_fields_json() makes use of it as it is by
far the most verbose stats output type.
(csv, typed and html outputs should be good for a while and may use this
capability if the need arises in some distant future)
--
It could be backported to 2.6 and 2.7.
This commit depends on:
- MINOR: stats: provide ctx for dumping functions
- MINOR: stats: introduce stats field ctx
Add a new value in stats ctx: field.
Implement field support in line dumping parent functions
stats_print_proxy_field_json() and stats_dump_proxy_to_buffer().
This will allow child dumping functions to support partial line dumping
when needed, i.e. when the dumping buffer is exhausted: do a partial send and
wait for a new buffer to finish the dump. Thanks to the field ctx, the function
can resume dumping where it left off on the previous (unterminated) invocation.
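The shape of that resume logic, with made-up names:

  struct dump_ctx_sk { int field; };

  static int dump_line_sk(struct dump_ctx_sk *ctx, int nb_fields)
  {
      for (int f = ctx->field; f < nb_fields; f++) {
          int ok = 1;  /* stands for "field f fit in the buffer" */

          if (!ok) {
              ctx->field = f;  /* buffer full: save position, retry later */
              return 0;
          }
      }
      ctx->field = 0;          /* line fully dumped: reset for the next */
      return 1;
  }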
This is a minor refactor to allow stats_dump_info_* and stats_dump_fields_*
functions to directly access the stat ctx pointer instead of having stat
ctx struct members explicitly passed to them.
This will allow dumping functions to benefit from upcoming ctx updates.
BUG/MINOR: ssl: Fix memory leak of find_chain in ssl_sock_load_cert_chain
The certificate chain that gets passed in the SSL_CTX through
SSL_CTX_set1_chain has its reference counter increased by OpenSSL
itself. But since the ssl_sock_load_cert_chain function might create a
brand new certificate chain if none exists in the ckch_data
(sk_X509_new_null), we ended up returning a new certificate chain
to the caller that was never destroyed.
This patch can be backported to all stable branches but it might need to
be reworked for branches older than 2.4 because of commit ec805a32b9
that refactored the modified code.
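A sketch of the pattern (not the verbatim haproxy code):
SSL_CTX_set1_chain() takes its own references on the chain, so a chain
built locally must still be released by the caller:

  #include <openssl/ssl.h>

  static int set_chain_sk(SSL_CTX *ctx, STACK_OF(X509) *existing_chain)
  {
      STACK_OF(X509) *chain = existing_chain ? existing_chain
                                             : sk_X509_new_null();
      int ret = SSL_CTX_set1_chain(ctx, chain);

      if (!existing_chain)
          sk_X509_pop_free(chain, X509_free);  /* drop the local chain */
      return ret;
  }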
MINOR: ssl: Remove unneeded buffer allocation in show ocsp-response
When calling 'show ssl ocsp-response' from the CLI, a temporary buffer
was created in parse_binary when we could just use a local static buffer
instead. This does not change the behavior of the function, it just
simplifies it.
Amaury Denoyelle [Thu, 15 Dec 2022 09:58:05 +0000 (10:58 +0100)]
MINOR: h3: check return values of htx_add_* on headers parsing
Check return values of htx_add_header()/htx_add_eof() during H3 HEADERS
conversion to HTX. In case of error, the connection is interrupted with
a CONNECTION_CLOSE.
This commit is useful to detect abnormal situation on headers parsing.
It should be backported up to 2.7.
Amaury Denoyelle [Thu, 15 Dec 2022 09:53:55 +0000 (10:53 +0100)]
BUG/MINOR: h3: fix memleak on HEADERS parsing failure
If an error is triggered on H3 HEADERS parsing, the allocated buffer for
HTX data is not freed.
To prevent this memleak, all return paths have been centralized using
goto statements.
Also, as a small bonus, offer_buffers() is not called anymore if the buffer
is not freed because the sedesc has taken it. However this change has
probably no noticeable effect as dynamic buffer management is not
functional currently.
Amaury Denoyelle [Thu, 15 Dec 2022 08:18:25 +0000 (09:18 +0100)]
BUG/MEDIUM: h3: fix cookie header parsing
Cookie headers are treated specially to merge multiple occurrences into a
single HTX header. This is handled in an if-block condition inside the
'while (1)' loop for headers parsing. The length value of the ist
representing the cookie header is set to -1 by http_cookie_register(). The
problem is that a continue statement is then used without
incrementing 'hdr_idx' to move on to the next header.
Before the aforementioned patch, the bug was hidden: on the next while
iteration, no isteq() invocation matches with the cookie header length
now set to -1. htx_add_header() fails silently because the length is
invalid. hdr_idx is finally incremented, which allows parsing to proceed
normally with the next header.
Now, a cookie header with length -1 does not pass the test on header name
conformance introduced by the above patch. Thus, a spurious
RESET_STREAM is emitted. This behavior has been reported on the mailing
list by Shawn Heisey who found out that browsers disabled H3 usage due
to the RESET_STREAM received. Big thanks to him for his testing on the
master branch.
This issue is simply resolved by incrementing hdr_idx before the continue
statement. It could have been detected earlier if htx_add_header()
return value was checked. This will be the subject of a dedicated commit
outside of the backport scope.
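The shape of the fixed loop, with stand-in types (only the placement of
the hdr_idx increment matters here):

  #include <stddef.h>
  #include <string.h>

  struct ist_sk { char *ptr; size_t len; };
  struct http_hdr_sk { struct ist_sk n, v; };

  static int is_cookie_sk(struct ist_sk n)
  {
      return n.len == 6 && memcmp(n.ptr, "cookie", 6) == 0;
  }

  static void h3_headers_loop_sk(struct http_hdr_sk *list)
  {
      int hdr_idx = 0;

      while (list[hdr_idx].n.len) {
          if (is_cookie_sk(list[hdr_idx].n)) {
              /* http_cookie_register() merges the value and marks the
               * entry's length as -1 */
              hdr_idx++;   /* the fix: move past this header */
              continue;
          }
          /* ... convert the header to HTX ... */
          hdr_idx++;
      }
  }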
Amaury Denoyelle [Thu, 15 Dec 2022 08:27:34 +0000 (09:27 +0100)]
MINOR: http-htx: add BUG_ON to prevent API error on http_cookie_register
http_cookie_register() must be called on first invocation with the last
two arguments both pointing to a negative value. After it, they will be
updated to a valid index.
We must never have only the last argument as NULL, as this would cause
invalid array addressing. To clarify this, a BUG_ON statement is
introduced.
BUG/MINOR: mux-h1: Fix test instead a BUG_ON() in h1_send_error()
In the previous patch (86924532db "BUG/MINOR: mux-h1: Fix test instead a
BUG_ON() in h1_send_error()"), a BUG_ON() condition was inverted by error in
h1_send_error(). The stream-connector must be NULL to be able to destroy the H1
stream.
This patch must be backported with the commit above (to 2.7).
BUG/MEDIUM: mux-h1: Don't release H1 stream upgraded from TCP on error
When an error occurs during request parsing, the H1 multiplexer is
responsible for sending a response to the client and releasing the H1 stream
and the H1 connection. In HTTP mode, it is not an issue because at this
stage the H1 connection is in embryonic state. Thus it can be released
immediately.
However, it is a problem if the connection was first upgraded from a TCP
connection. In this case, a stream-connector is attached. The H1 stream is
not orphan. Thus it must not be released at this stage. It must be detached
first. Otherwise a BUG_ON() is triggered in h1s_destroy().
So now, the H1S is destroyed on early errors but only if the H1C is in
embryonic state.
This patch may be related to #1966. It must be backported to 2.7.
BUG/MEDIUM: h3: parse content-length and reject invalid messages
Ensure that if a request contains a content-length header it matches
the total size of the following DATA frames. This is in conformance with
HTTP/3 RFC 9114.
For the moment, this kind of errors triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
This must be backported up to 2.6. It relies on the previous commit:
MINOR: http: extract content-length parsing from H2
MINOR: http: extract content-length parsing from H2
Extract the function h2_parse_cont_len_header() into the generic HTTP module.
This allows it to be reused by all HTTP/x parsers. The function is now
available as http_parse_cont_len_header().
Most notably, this will be reused in the next bugfix for the H3 parser.
This is necessary to check that the content-length header matches the length
of the DATA frames.
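An illustrative version of the resulting H3-side check (field names are
made up):

  #include <stdint.h>

  struct h3s_len_sk { uint64_t data_len; uint64_t clen; int have_clen; };

  /* data_len accumulates DATA frame payload sizes; at end of message
   * it must equal the announced content-length */
  static int h3_body_len_valid_sk(const struct h3s_len_sk *h3s)
  {
      return !h3s->have_clen || h3s->data_len == h3s->clen;
  }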
BUG/MEDIUM: h3: reject request with invalid pseudo header
RFC 9114 dictates several requirements for pseudo header usage in H3
request. Previously, only minimal checks were implemented. Enforce all
the following requirements with this patch:
* reject request with undefined or invalid pseudo header
* reject request with duplicated pseudo header
* reject non-CONNECT request with missing mandatory pseudo header
* reject request with pseudo header after standard ones
For the moment, this kind of errors triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
BUG/MEDIUM: h3: reject request with invalid header name
Reject requests containing an invalid header name. This concerns every
header containing an uppercase letter or a non-HTTP token such as a space.
For the moment, this kind of errors triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
Thanks to Yuki Mogi from FFRI Security, Inc. for having reported this.
BUG/MEDIUM: resolvers: Use tick_first() to update the resolvers task timeout
In resolv_update_resolvers_timeout(), the resolvers task timeout is updated
by checking running and waiting resolutions. However, to find the next
wakeup date, the MIN() operator is used to compare ticks. Ticks must never be
compared with such operators; tick helper functions must be used instead, to
properly handle the TICK_ETERNITY value. In this case, tick_first() must be
used instead of MIN().
It is an old bug but it is pretty visible since the commit fdecaf6ae4
("BUG/MINOR: resolvers: do not run the timeout task when there's no
resolution"). Because of this bug, the resolvers task timeout may be set to
TICK_ETERNITY, stopping periodic resolutions.
This patch should solve the issue #1962. It must be backported to all stable
versions.
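For reference, a model of the helpers (after include/haproxy/ticks.h):
TICK_ETERNITY is 0, so a plain MIN() would always pick "never", and
ticks also wrap, requiring a signed-difference comparison:

  #define TICK_ETERNITY 0

  static inline int tick_isset(int expire) { return expire != TICK_ETERNITY; }

  static inline int tick_first(int t1, int t2)
  {
      if (!tick_isset(t1)) return t2;
      if (!tick_isset(t2)) return t1;
      return ((t1 - t2) <= 0) ? t1 : t2;  /* wrap-safe "earliest" */
  }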
BUG/MEDIUM: freq-ctr: Don't compute overshoot value for empty counters
The function computing the excess of events of a frequency counter over the
current period for a target frequency must handle empty counters (no event
and no start date). In this case, no excess must be reported.
Because of this bug, long pauses may be experienced on the bandwidth
limitation filter.
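A sketch of the guard, with the counter fields described above (the
surrounding computation is elided):

  #include <stdint.h>

  struct freq_ctr_sk { uint32_t curr_tick; uint32_t curr_ctr; uint32_t prev_ctr; };

  static uint32_t overshoot_sk(const struct freq_ctr_sk *ctr)
  {
      if (!ctr->curr_ctr && !ctr->curr_tick)
          return 0;  /* empty counter: no event and no start date */
      /* ... normal overshoot computation for non-empty counters ... */
      return 0;
  }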
BUG/MINOR: startup: don't use internal proxies to compute the maxconn
With internal proxies that have SSL activated (the httpclient for example),
the automatic computation of the maxconn is wrong because these proxies
are always activated by default.
This patch fixes the issue by not counting these internal proxies during
the computation.
Ilya Shipitsin [Mon, 12 Dec 2022 14:15:22 +0000 (19:15 +0500)]
CI: github: split ssl lib selection based on git branch
when *SSL_VERSION="latest" behaviour was introduced, it seems to be fine
for development branches, but too intrusive for stable branches.
let us limit "latest" semantic only for development builds, if branch name
contains "haproxy-" it is supposed to be stable branch, no latest openssl
should be taken
[wla: must be backported as far as 2.6]
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
Amaury Denoyelle [Mon, 12 Dec 2022 08:59:50 +0000 (09:59 +0100)]
BUG/MINOR: mux-quic: handle properly alloc error in qcs_new()
Use qcs_free() on allocation failure in qcs_new(). This ensures that all
qcs content is properly deallocated and prevents memleaks. Most notably,
the qcs instance is now removed from the qcc tree.
This bug is labelled as MINOR as it occurs only on qcs allocation
failure due to memory exhaustion.
BUG/MINOR: mux-quic: remove qcs from opening-list on free
qcs instances for bidirectional streams are inserted in
<qcc.opening_list>. It is removed from the list once a full HTTP request
has been parsed. This is required to implement http-request timeout.
If a qcs instance is freed before receiving a full HTTP request, it must
be removed from the <qcc.opening_list>. Else a segfault will occur in
qcc_refresh_timeout() when accessing a dangling pointer.
For the moment this bug was not reproduced in production. This is
because there exist only a few rare cases where a qcs is freed before
HTTP request parsing. However, as error detection will be improved on
H3, this will occur more frequently in the near future.
Amaury Denoyelle [Mon, 12 Dec 2022 10:24:05 +0000 (11:24 +0100)]
BUG/MINOR: quic: handle alloc failure on qc_new_conn() for owned socket
This patch is the follow-up of the previous fix:
BUG/MINOR: quic: properly handle alloc failure in qc_new_conn()
The quic_conn owned socket FD is initialized as soon as possible in
qc_new_conn(). This guarantees that we can safely call
quic_conn_release() on allocation failure. This function internally uses
qc_release_fd() to free the socket FD unless it has been initialized to
an invalid FD value.
Without this patch, a segfault will occur if one inner allocation of
qc_new_conn() fails before qc.fd is initialized.
This change is linked to quic-conn owned socket implementation.
This should be backported up to 2.7.
Amaury Denoyelle [Mon, 12 Dec 2022 10:22:42 +0000 (11:22 +0100)]
BUG/MINOR: quic: properly handle alloc failure in qc_new_conn()
qc_new_conn() is used to allocate a quic_conn instance and its various
internal members. If one allocation fails, quic_conn_release() is used
to cleanup things.
For the moment, pool_zalloc() is used which ensures that all content is
null. However, some members must be initialized to special values
to be able to use quic_conn_release() safely. This is the case for
quic_conn lists and its tasklet.
Also, some quic_conn internal allocation functions were doing their own
cleanup on failure without resetting pointers to NULL. This caused an issue
with quic_conn_release() which also frees these members. To fix this, these
functions now only return an error without cleaning up. It is the caller's
responsibility to free the allocated content, which is done via
quic_conn_release().
Without this patch, allocation failure in qc_new_conn() would often
result in segfault. This was reproduced easily using fail-alloc at 10%.