BUILD: incompatible pointer type suspected with -DDEBUG_UNIT
src/jws.c: In function '__jws_init':
src/jws.c:594:38: error: passing argument 2 of 'hap_register_unittest' from incompatible pointer type [-Wincompatible-pointer-types]
594 | hap_register_unittest("jwk", jwk_debug);
| ^~~~~~~~~
| |
| int (*)(int, char **)
In file included from include/haproxy/api.h:36,
from include/import/ebtree.h:251,
from include/import/ebmbtree.h:25,
from include/haproxy/jwt-t.h:25,
from src/jws.c:5:
include/haproxy/init.h:37:52: note: expected 'int (*)(void)' but argument is of type 'int (*)(int, char **)'
37 | void hap_register_unittest(const char *name, int (*fct)());
| ~~~~~~^~~~~~
GCC 15 reports an error because the function pointer's arguments are not
declared in the registration function: GCC 15 defaults to C23, where an
empty parameter list in a prototype means (void), so the mismatch is now
an error.
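A minimal sketch of the kind of fix this calls for, assuming the unit-test
callbacks all take argc/argv (the exact prototype retained upstream may
differ):

    /* declare the pointer's arguments so it matches callbacks such as
     * int jwk_debug(int argc, char **argv) */
    void hap_register_unittest(const char *name,
                               int (*fct)(int argc, char **argv));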
CLEANUP: acme: stored value is overwritten before it can be used
>>> CID 1609049: Code maintainability issues (UNUSED_VALUE)
>>> Assigning value "NULL" to "new_ckchs" here, but that stored value is overwritten before it can be used.
592 struct ckch_store *old_ckchs, *new_ckchs = NULL;
Coverity reported an issue where a variable is initialized to NULL and
then directly overwritten with another value. This is harmless, but this
patch removes the useless initialization.
MINOR: debug: detect call instructions and show the branch target in backtraces
In backtraces, sometimes it's difficult to know what was called by a
given point, because some functions can be fairly long making one
doubt about the correct pointer of unresolved ones, others might
just use a tail branch instead of a call + return, etc. On common
architectures (x86 and aarch64), it's not difficult to detect and
decode a relative call, so let's do it on both of these platforms
and show the branch location after a '>'. Example:
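For illustration, a simplified, hypothetical sketch of how such a
relative call can be decoded from the bytes preceding a return address
(not the actual haproxy code):

    #include <stdint.h>

    /* given a return address, try to decode the preceding direct call
     * and return its target, or NULL if not recognizable */
    static const void *decode_call_target(const unsigned char *ret_addr)
    {
    #if defined(__x86_64__) || defined(__i386__)
            /* direct near call: 0xE8 + 32-bit displacement relative to
             * the next instruction (i.e. ret_addr) */
            if (ret_addr[-5] == 0xE8) {
                    int32_t disp;
                    __builtin_memcpy(&disp, ret_addr - 4, sizeof(disp));
                    return ret_addr + disp;
            }
    #elif defined(__aarch64__)
            /* BL: 0b100101 + imm26, a signed word offset relative to
             * the BL instruction itself (ret_addr - 4) */
            uint32_t insn;
            __builtin_memcpy(&insn, ret_addr - 4, sizeof(insn));
            if ((insn & 0xFC000000u) == 0x94000000u) {
                    int32_t imm = (int32_t)(insn << 6) >> 6; /* sign-extend imm26 */
                    return ret_addr - 4 + (int64_t)imm * 4;
            }
    #endif
            return NULL;
    }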
MINOR: debug: in call traces, dump the 8 bytes before the return address, not after
In call traces, we're interested in seeing the code that was executed, not
the code that was not yet. The return address is where the CPU will return
to, so we want to see the bytes that precede this location. In the example
below on x86 we can clearly see a number of direct "call" instructions
(0xe8 + 4 bytes). There are also indirect calls (0xffd0) that cannot be
exploited but it gives insights about where the code branched, which will
not always be the function above it if that one used tail branching for
example. Here's an example dump output:
MINOR: tools: let dump_addr_and_bytes() support dumping before the offset
For code dumps, dumping from the return address is pointless; what is
interesting is to dump before the return address to read the machine
code that was executed before branching. Let's just make the function
support negative sizes to indicate that we're dumping this number of
bytes leading up to the address instead of starting from it. In this
case, in order to distinguish the two, we're using a '<' instead of '[' to
start the series of bytes, indicating where the bytes expand and where
they stop. For example we can now see this:
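As a standalone illustration of the convention (a hedged sketch, not
haproxy's actual dump_addr_and_bytes() prototype):

    #include <stdio.h>

    /* n > 0: dump n bytes starting at addr; n < 0: dump |n| bytes that
     * precede addr, opening with '<' instead of '[' to show where the
     * bytes expand */
    static void dump_bytes(const void *addr, int n)
    {
            const unsigned char *p = addr;
            int i = (n < 0) ? n : 0;
            int end = (n < 0) ? 0 : n;

            printf("%p %c", addr, (n < 0) ? '<' : '[');
            for (; i < end; i++)
                    printf("%02x ", p[i]);
            printf("]\n");
    }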
DEBUG: counters: add the ability to enable/disable updating the COUNT_IF counters
These counters can have a noticeable cost on large machines, though not
dramatic. There's no single good choice to keep them enabled or disabled.
This commit adds multiple choices:
- DEBUG_COUNTERS set to 2 will automatically enable them by default, while
1 will disable them by default
- the global "debug.counters on/off" will allow changing the setting at
  boot, regardless of DEBUG_COUNTERS, as long as it was at least 1.
- the CLI "debug counters on/off" will also allow changing the value at
  run time, allowing one to observe a phenomenon while it's happening,
  or to disable the counters if their cost is suspected to be too high.
Finally, the "debug counters" command will append "(stopped)" at the end
of the CNT lines when these counters are stopped.
Note that the whole mechanism would easily support being extended to all
counter types by specifying the types to apply to, but it doesn't seem
useful at all and would require the user to also type "cnt" on debug
lines. This may easily be changed in the future if it's found relevant.
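As a rough sketch of the resulting gating (names are hypothetical, the
real macro differs), the counter update ends up guarded both at compile
time and at run time:

    #if defined(DEBUG_COUNTERS) && (DEBUG_COUNTERS >= 1)
    extern int debug_counters_enabled; /* hypothetical runtime switch */
    #define COUNT_IF(cond) do {                                        \
            static unsigned long __cnt; /* per-call-place counter */   \
            if (debug_counters_enabled && (cond))                      \
                    __atomic_add_fetch(&__cnt, 1, __ATOMIC_RELAXED);   \
    } while (0)
    #else
    #define COUNT_IF(cond) do { (void)sizeof(cond); } while (0)
    #endif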
DEBUG: counters: make COUNT_IF() only appear at DEBUG_COUNTERS>=1
COUNT_IF() is convenient but can be heavy since some of them were found
to trigger often (roughly 1 counter per request on avg). This might even
have an impact on large setups due to the cost of a shared cache line
bouncing between multiple cores. For now there's no way to disable it,
so let's only enable it when DEBUG_COUNTERS is 1 or above. A future
change will make it configurable.
DEBUG: rename DEBUG_GLITCHES to DEBUG_COUNTERS and enable it by default
Till now the per-line glitches counters were only enabled with the
confusingly named DEBUG_GLITCHES (which would not turn glitches off
when disabled). Let's instead change it to DEBUG_COUNTERS and make sure
it's enabled by default (though it can still be disabled with
-DDEBUG_COUNTERS=0, just like for DEBUG_STRICT). It will later be
expanded to cover more counters.
DEBUG: init: report invalid characters in debug description strings
It's easy to leave some trailing \n or even other characters that can
mangle the debug output. Let's verify at boot time that the debug sections
are clean by checking for chars 0x20 to 0x7e inclusive. This is very simple
to do and it managed to find another one in a multi-line message:
[WARNING] (23696) : Invalid character 0x0a at position 96 in description string at src/cli.c:2516 _send_status()
This way new offending code will be spotted before being committed.
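A minimal sketch of such a scan (with a hypothetical desc/where pair;
the real check iterates over the registered debug sections):

    /* reject any byte outside printable ASCII 0x20-0x7e */
    const char *p;

    for (p = desc; *p; p++) {
            if ((unsigned char)*p < 0x20 || (unsigned char)*p > 0x7e) {
                    ha_warning("Invalid character 0x%02x at position %d "
                               "in description string at %s\n",
                               (unsigned char)*p, (int)(p - desc), where);
                    break;
            }
    }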
BUG/MINOR: debug: remove the trailing \n from BUG_ON() statements
These ones were added by mistake during the change of the cfgparse
mechanism in 3.1, but they're corrupting the output of "debug counters"
by leaving stray ']' on their own lines. We could possibly check them
all once at boot but it doesn't seem worth it.
/usr/bin/ld: src/thread.o: warning: relocation against `thread_cpus_enabled_at_boot' in read-only section `.text'
/usr/bin/ld: src/thread.o: in function `thread_detect_count':
/home/vk/projects/haproxy/src/thread.c:1619: undefined reference to `thread_cpus_enabled_at_boot'
/usr/bin/ld: /home/vk/projects/haproxy/src/thread.c:1619: undefined reference to `thread_cpus_enabled_at_boot'
/usr/bin/ld: /home/vk/projects/haproxy/src/thread.c:1620: undefined reference to `thread_cpus_enabled_at_boot'
/usr/bin/ld: warning: creating DT_TEXTREL in a PIE
collect2: error: ld returned 1 exit status
make: *** [Makefile:1044: haproxy] Error 1
thread_cpus_enabled_at_boot is only available when compiled with
USE_THREAD=1, which is now the default for most targets.
In some cases, we need to recompile in mono-thread mode, thus
thread_cpus_enabled_at_boot should be protected with USE_THREAD in
thread_detect_count().
thread_detect_count() is always called during process initialization,
regardless of multi-thread support. It sets some defaults in
global.nbthread and global.nbtgroups.
This patch is related to GitHub issue #2916.
No backport is needed, as this was added in 3.2-dev9.
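The shape of such a guard, as a hedged sketch (simplified from the
description above):

    /* only reference the thread-only symbol when built with thread
     * support, and fall back to a single CPU otherwise */
    int cpus = 1;

    #ifdef USE_THREAD
    cpus = thread_cpus_enabled_at_boot;
    #endif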
BUG/MINOR: acme: key not restored upon error in acme_res_certificate()
When receiving the final certificate, it needs to be loaded by
ssl_sock_load_pem_into_ckch(). However this function will remove any
existing private key in the struct ckch_store.
In order to fix the issue, the ptr to the key is swapped with a NULL
ptr, and restored once the new certificate is committed.
However there was a discrepancy in the error path: when
ssl_sock_load_pem_into_ckch() fails, the pointer is lost.
This patch fixes the issue by restoring the pointer in the error path.
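A hedged sketch of the swap/restore pattern (simplified; the actual
struct members and function arguments differ):

    EVP_PKEY *key = ckchs->data->key; /* save the existing private key */

    ckchs->data->key = NULL;          /* don't let the load drop it */
    if (ssl_sock_load_pem_into_ckch(path, buf, ckchs->data, &err) != 0) {
            ckchs->data->key = key;   /* error path: restore the key */
            goto error;
    }
    ckchs->data->key = key;           /* restored once committed */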
Schedule the retries with a 3s exponential timer. This is a temporary
measure, as the client should follow the Retry-After field for
rate-limiting on every request (https://datatracker.ietf.org/doc/html/rfc8555#section-6.6).
MEDIUM: acme: replace the previous ckch instance with new ones
This step is the latest to have a usable ACME certificate in haproxy.
It looks for the previous certificate, locks the "BIG CERTIFICATE LOCK",
copies every instance, deploys the new ones, and removes the previous ones.
This is done in one step in a function which does not yield, so it could
be problematic if you have thousands of instances to handle.
It still lacks the rate limiting which is mandatory for use in
production, as well as more cleanup and deinit work.
Copy the original ckch_store instead of creating a new one. This allows
the new store to inherit the ckch_conf from the previous structure when
doing a ckchs_dup(). The ckch_conf contains the SAN for ACME.
Free the previous PKEY since a new one is generated.
MINOR: acme: implement retrieval of the certificate
Once the Order status is "valid", the certificate URL is accessible;
this patch implements the retrieval of the certificate, which is stored
in ctx->store.
MINOR: acme: implement a check on the challenge status
This patch implements a check on the challenge URL: once haproxy has
asked for the challenge to be verified, it must verify the status of the
challenge resolution and check that no error occurred.
This patch sends the "{}" message to specify that a challenge is ready.
It iterates over every challenge URL in the authorization list from the
acme_ctx.
This allows the ACME server to proceed to the challenge validation.
https://www.rfc-editor.org/rfc/rfc8555#section-7.5.1
MINOR: acme: get the challenges object from the Auth URL
This patch implements the retrieval of the challenge objects from the
authorization URLs. A challenge object contains a token and a challenge
URL that needs to be called once the challenge is set up.
Each authorization URL contains multiple challenge objects, usually one
per challenge type (HTTP-01, DNS-01, ALPN-01...). We only need to keep
the one that is relevant to our configuration.
This patch implements the newOrder action in the ACME task: in order to
ask for a new certificate, a list of SANs is sent as a JWS payload.
The ACME server replies with a list of Authorization URLs; one
Authorization is created per SAN on an Order.
The authorization URLs are stored in a linked list of 'struct acme_auth'
in acme_ctx, so we can get the challenge URLs from them later.
The Location header is also stored, as it is the URL of the order object.
MINOR: acme: add the acme section in the configuration parser
Add a configuration parser for the new acme section. The section is
configured this way:
acme letsencrypt
uri https://acme-staging-v02.api.letsencrypt.org/directory
account account.key
contact foobar@example.com
challenge HTTP-01
When unspecified, the challenge defaults to HTTP-01, and the account key
to "<section_name>.account.key".
Sections are stored in a linked list of acme_cfg structures. The
configuration parsing is mostly resolved in the post-section parser
cfg_postsection_acme(), which is called after an acme section is parsed.
MINOR: master/cli: support bidirectional communications with workers
Some rare commands in the worker require their input to be kept open,
and terminate when it's closed ("show events -w", "wait"). Others maintain
a per-session context ("set anon on"). But in its default operation
mode, the master CLI passes commands one at a time to the worker, and
closes the CLI's input channel so that the command can immediately
close upon response. This effectively prevents these two specific cases
from being used.
Here the approach that we take is to introduce a bidirectional mode to
connect to the worker, where everything sent to the master is immediately
forwarded to the worker (including the raw command), allowing multiple
commands to be queued at once in the same session, and continuing to
watch the input to detect when the client closes. It must be a client's choice
however, since doing so means that the client cannot batch many commands
at once to the master process, but must wait for these commands to complete
before sending new ones. For this reason we use the prefix "@@<pid>" for
this. It works exactly like "@" except that it maintains the channel
open during the whole execution. Similarly to "@<pid>" with no command,
"@@<pid>" will simply open an interactive CLI session to the worker, that
will be ended by "quit" or by closing the connection. This can be convenient
for the user, and possibly for clients willing to dedicate a connection to
the worker.
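As a hypothetical illustration of the interactive form (socket path,
worker pid and ring name are examples only):

    $ socat stdio /var/run/haproxy-master.sock
    @@1
    set anon on
    show events buf0 -w
    quit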
DOC: management: add a paragraph about the limitations of the '@' prefix
The '@' prefix permits executing a single command at once in a worker.
It is very handy but comes with some limitations affecting rare commands
(one command per session, input closed), which are better documented
since they can seldom have user-visible effects.
DOC: management: slightly clarify the prefix role of the '@' command
While the examples were clear, the text did not fully imply what was
reflected there. Better have the text explicitly mention that the
'@' command may be used as a prefix or wrapper in front of a command
as well as a standalone command.
Released version 3.2-dev10 with the following main changes :
- REORG: ssl: move curves2nid and nid2nist to ssl_utils
- BUG/MEDIUM: stream: Fix a possible freeze during a forced shut on a stream
- MEDIUM: stream: Save SC and channel flags earlier in process_stream()
- BUG/MINOR: peers: fix expire learned from a peer not converted from ms to ticks
- BUG/MEDIUM: peers: prevent learning expiration too far in future from unsync node
- CI: spell check: allow manual trigger
- CI: codespell: add "pres" to spellcheck whitelist
- CLEANUP: assorted typo fixes in the code, commits and doc
- CLEANUP: atomics: remove support for gcc < 4.7
- CLEANUP: atomics: also replace __sync_synchronize() with __atomic_thread_fence()
- TESTS: Fix build for filltab25.c
- MEDIUM: ssl: replace "crt" lines by "ssl-f-use" lines
- DOC: configuration: replace "crt" by "ssl-f-use" in listeners
- MINOR: backend: mark srv as nonnull in alloc_dst_address()
- BUG/MINOR: server: ensure check-reuse-pool is copied from default-server
- MINOR: server: activate automatically check reuse for rhttp@ protocol
- MINOR: check/backend: support conn reuse with SNI
- MINOR: check: implement check-pool-conn-name srv keyword
- MINOR: task: add thread safe notification_new and notification_wake variants
- BUG/MINOR: hlua_fcn: fix potential UAF with Queue:pop_wait()
- MINOR: hlua_fcn: register queue class using hlua_register_metatable()
- MINOR: hlua: add core.wait()
- MINOR: hlua: core.wait() takes optional delay parameter
- MINOR: hlua: split hlua_applet_tcp_recv_yield() in two functions
- MINOR: hlua: add AppletTCP:try_receive()
- MINOR: hlua_fcn: add Queue:alarm()
- MEDIUM: task: make notification_* API thread safe by default
- CLEANUP: log: adjust _lf_cbor_encode_byte() comment
- MEDIUM: ssl/crt-list: warn on negative wildcard filters
- MEDIUM: ssl/crt-list: warn on negative filters only
- BUILD: atomics: fix build issue on non-x86/non-arm systems
- BUG/MINOR: log: fix CBOR encoding with LOG_VARTEXT_START() + lf_encode_chunk()
- BUG/MEDIUM: sample: fix risk of overflow when replacing multiple regex back-refs
- DOC: configuration: rework the crt-list section
- MINOR: ring: support arbitrary delimiters through ring_dispatch_messages()
- MINOR: ring/cli: support delimiting events with a trailing \0 on "show events"
- DEV: h2: fix h2-tracer.lua nil value index
- BUG/MINOR: backend: do not use the source port when hashing clientip
- BUG/MINOR: hlua: fix invalid errmsg use in hlua_init()
- MINOR: proxy: add setup_new_proxy() function
- MINOR: checks: mark CHECKS-FE dummy frontend as internal
- MINOR: flt_spoe: mark spoe agent frontend as internal
- MEDIUM: tree-wide: avoid manually initializing proxies
- MINOR: proxy: add deinit_proxy() helper func
- MINOR: checks: deinit checks_fe upon deinit
- MINOR: flt_spoe: deinit spoe agent proxy upon agent release
MINOR: flt_spoe: deinit spoe agent proxy upon agent release
Even though spoe agent proxy is statically allocated, it uses the proxy
API and is initialized like a regular proxy, thus specific cleanup is
required upon release. This is not tagged as a bug because as of now this
would only cause some minor memory leak upon deinit.
We check the presence of proxy->id to know if it was initialized since
we cannot rely on a pointer for that.
In this patch we try to use the proxy API init functions as much as
possible to avoid code redundancy and prevent proxy initialization
errors. As such, we prefer using alloc_new_proxy() and setup_new_proxy()
instead of manually allocating the proxy pointer and performing the
base init ourselves.
MINOR: checks: mark CHECKS-FE dummy frontend as internal
The CHECKS-FE frontend is a dummy frontend used to create check
sessions; as such, it is internal and should not be exposed to the user.
Better mark it as internal using the PR_CAP_INT capability to prevent
the proxy API from ever exposing it.
Split alloc_new_proxy() in two functions: the preparing part is now
handled by setup_new_proxy() which can be called individually, while
alloc_new_proxy() takes care of allocating a new proxy struct and then
calling setup_new_proxy() with the freshly allocated proxy.
BUG/MINOR: backend: do not use the source port when hashing clientip
The server's "usesrc" keyword supports among other options "client"
and "clientip". The former means we bind to the client's IP and port
to connect to the server, while the latter means we bind to its IP
only. It's done in two steps, first alloc_bind_address() retrieves
the IP address and port, and second, tcp_connect_server() decides
to either bind to the IP only or IP+port.
The problem comes with idle connection pools, which hash all the
parameters: the hash is calculated before (and ideally without) calling
tcp_connect_server(), and it considers the whole struct sockaddr_storage
for the hash, except that both client and clientip entirely fill it with
the client's address. This means that both client and clientip make use
of the source port in the hash calculation, making idle connections
almost never reusable with "usesrc clientip", while they should be
reusable for clients coming from the same source. A workaround is to
force the source port to zero using "tcp-request session set-src-port
int(0)", but it's ugly.
Let's fix this by properly zeroing the port for AF_INET/AF_INET6 addresses.
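A minimal sketch of that zeroing (the variable name is hypothetical):

    struct sockaddr_storage *ss = &src_addr;

    /* drop the source port so only the IP takes part in the hash */
    switch (ss->ss_family) {
    case AF_INET:
            ((struct sockaddr_in *)ss)->sin_port = 0;
            break;
    case AF_INET6:
            ((struct sockaddr_in6 *)ss)->sin6_port = 0;
            break;
    }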
This can be backported to 2.4. Thanks to Sebastien Gross for providing a
reproducer for this problem.
Nick Ramirez reported the following error while testing the h2-tracer.lua
script:
Lua filter 'h2-tracer' : [state-id 0] runtime error: /etc/haproxy/h2-tracer.lua:227: attempt to index a nil value (field '?') from /etc/haproxy/h2-tracer.lua:227: in function line 109.
It is caused by h2ff being indexed with an out-of-bounds value. Indeed,
h2ff is indexed with the frame type, which can potentially be > 9 (not
common nor observed during Willy's tests), while h2ff only defines
indexes from 0 to 9.
The fix was provided by Willy; it consists in skipping h2ff indexing
when the frame type is > 9. It was confirmed that doing so fixes the
error.
MINOR: ring/cli: support delimiting events with a trailing \0 on "show events"
At the moment it is not supported to produce multi-line events on the
"show events" output, simply because the LF character is used as the
default end-of-event mark. However it could be convenient to produce
well-formatted multi-line events, e.g. in JSON or other formats. UNIX
utilities have already faced similar needs in the past and added
"-print0" to "find" and "-0" to "xargs" to mention that the delimiter
is the NUL character. This makes perfect sense since it's never present
in contents, so let's do exactly the same here.
Thus from now on, "show events <ring> -0" will delimit messages using
a \0 instead of a \n, permitting a better and safer encapsulation.
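For instance (hypothetical ring name and socket path):

    $ echo "show events buf0 -0" | socat stdio /var/run/haproxy.sock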
MINOR: ring: support arbitrary delimiters through ring_dispatch_messages()
In order to support delimiting output events with other characters than
just the LF, let's pass the delimiter through the API. The default remains
the LF, used by applet_append_line(), and ignored by the log forwarder.
BUG/MEDIUM: sample: fix risk of overflow when replacing multiple regex back-refs
Aleandro Prudenzano of Doyensec and Edoardo Geraci of Codean Labs
reported a bug in sample_conv_regsub(), which can cause replacements
of multiple back-references to overflow the temporary trash buffer.
The problem happens when doing "regsub(match,replacement,g)": we're
replacing every occurrence of "match" with "replacement" in the input
sample, which requires a length check. For this, a max is applied, so
that a replacement may not use more than the remaining length in the
buffer. However, the length check is made on the replaced pattern and
not on the temporary buffer used to carry the new string. This results
in the remaining size being considered usable for each input match,
which can go beyond the temporary buffer size if more than one
occurrence has to be replaced with something larger than the remaining
room.
The fix proposed by Aleandro and Edoardo is the correct one (check on
"trash" not "output"), and is the one implemented in this patch.
While it is very unlikely that a config will replace multiple short
patterns each with a larger one in a request, this possibility cannot
be entirely ruled out (e.g. mask a known, short IP address using
"XXX.XXX.XXX.XXX"). However when this happens, the replacement pattern
will be static, and not be user-controlled, which is why this patch is
marked as medium.
The bug was introduced in 2.2 with commit 07e1e3c93e ("MINOR: sample:
regsub now supports backreferences"), so it must be backported to all
versions.
Special thanks go to Aleandro and Edoardo for reporting this bug with
a simple reproducer and a fix.
BUG/MINOR: log: fix CBOR encoding with LOG_VARTEXT_START() + lf_encode_chunk()
There have been some reports that using the %HV logformat alias with the
CBOR encoder would produce an invalid CBOR payload according to some CBOR
implementations such as "cbor.me". Indeed, with the below log-format:
cbor.me would complain with a "bytes/text mismatch (ASCII-8BIT != UTF-8)
in streaming string" error message.
It is due to the version string being first announced as text, while CBOR
encoder actually encodes it as byte string later when lf_encode_chunk()
is used.
In fact it affects all patterns combining LOG_VARTEXT_START() with
lf_encode_chunk() which means %HM, %HU, %HQ, %HPO and %HP are also
affected. To fix the issue, in _lf_encode_bytes() (which is
lf_encode_chunk() helper), we now check if we are inside a VARTEXT (we
can tell it if ctx->in_text is true), in which case we consider that we
already announced the current data as regular text so we keep the same
type to encode the bytes from the chunk to prevent inconsistencies.
BUILD: atomics: fix build issue on non-x86/non-arm systems
Commit f435a2e518 ("CLEANUP: atomics: also replace __sync_synchronize()
with __atomic_thread_fence()") replaced the builtins used for barriers,
but the different API required an argument while the macros didn't specify
any, resulting in double parentheses that were causing obscure build errors
such as "called object type 'void' is not a function or function pointer".
Let's just specify the args for the macro. No backport is needed.
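Schematically (a hedged sketch with a hypothetical macro name):

    /* __atomic_thread_fence() requires a memory order, unlike the old
     * __sync_synchronize() which took no argument */
    #define HA_BARRIER() __atomic_thread_fence(__ATOMIC_SEQ_CST)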
MEDIUM: ssl/crt-list: warn on negative filters only
Negative SNI filters on crt-list lines only have a meaning when they
match a positive wildcard filter. This patch adds a warning which is
emitted when trying to use negative filters without any wildcard on the
same line.
MEDIUM: ssl/crt-list: warn on negative wildcard filters
Negative wildcard filters were always a no-op, and are not useful for
anything unless you want to use !* alone to remove every name from a
certificate.
This is confusing and the documentation never stated it correctly. This
patch adds a warning during the bind initialization if it finds one;
only !* alone does not emit a warning.
This patch was done during the debugging of issue #2900.
MEDIUM: task: make notification_* API thread safe by default
Some notification_* functions were not thread safe by default as they
assumed only one producer would emit events for registered tasks.
While this suited the Lua sockets use-case well, it proved to be a
limitation with some other event sources (i.e. the Lua Queue class).
Instead of having to deal with both the non-thread-safe and thread-safe
(_mt suffix) variants, which is error-prone, let's make the entire API
thread safe regarding the event list.
Pruning functions still require that only one thread executes them,
with Lua this is always the case because there is one cleanup list
per context.
This is the non-blocking variant of AppletTCP:receive(). It doesn't
take any argument; instead it tries to read as much data as is available
at once. If no data is available, an empty string is returned.
BUG/MINOR: hlua_fcn: fix potential UAF with Queue:pop_wait()
If Queue:pop_wait() is executed from a stream context and pop_wait() is
aborted due to a Lua or resource error, then the waiting object pointing
to the task will still be registered, so if the task eventually
disappears, Queue:push() may try to wake an invalid task pointer.
To prevent this bug from happening, we now rely on notification_* API to
deliver waiting signals. This way signals are properly garbage collected
when a lua context is destroyed.
It should be backported in 2.8 with 86fb22c55 ("MINOR: hlua_fcn: add Queue
class").
This patch depends on ("MINOR: task: add thread safe notification_new and
notification_wake variants")
MINOR: task: add thread safe notification_new and notification_wake variants
notification_new and notification_wake were historically meant to be
called by a single thread doing both the init and the wakeup for other
tasks waiting on the signals.
In this patch, we extend the API so that notification_new and
notification_wake have thread-safe variants that can safely be used with
multiple threads registering on the same list of events and multiple
threads pushing updates on the list.
This commit is a direct follow-up of the previous one. It defines a new
server keyword check-pool-conn-name. It is used as the default value for
the name parameter of idle connection hash generation.
Its behavior is similar to server keyword pool-conn-name, but reserved
for checks reuse. If check-pool-conn-name is set, it is used in priority
to match a connection for reuse. If unset, a fallback is performed on
check-sni.
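A hypothetical server line using it could look like this (names and
address are examples only):

    server srv1 192.168.0.1:443 ssl check check-reuse-pool \
        check-pool-conn-name srv1.example.com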
Support for connection reuse during server checks was implemented
recently. This is activated with the server keyword check-reuse-pool.
Similarly to stream processing via connect_backend(), a connection hash
is calculated when trying to perform reuse for checks. This is necessary
to retrieve for a connection which shares the check connect parameters.
However, idle connections can additionnally be tagged using a
pool-conn-name or SNI under connect_backend(). Check reuse does not test
these values, which prevent to retrieve a matching connection.
Improve this by using the "check-sni" value as the idle connection hash
input for check reuse. The be_calculate_conn_hash() API has been
adjusted so that the name value can be passed as input, both when using
streams and checks.
Even with the current patch, there are still some scenarios which could
not be covered for check connection reuse, most notably when using a
dynamic pool-conn-name/SNI value. It is however at least sufficient to
cover the simpler cases.
MINOR: server: activate automatically check reuse for rhttp@ protocol
Without check-reuse-pool, it is impossible to perform checks on a server
using the rhttp@ protocol. This is due to the inherent nature of the
protocol, which does not implement an active connect method.
Thus, ensure that check-reuse-pool is always set when a reverse HTTP
server is declared. This reduces server configuration and should prevent
any omission. Note that it is still required to add the "check" server
keyword to activate server checks.
BUG/MINOR: server: ensure check-reuse-pool is copied from default-server
Duplicate server check.reuse_pool boolean value in srv_settings_cpy().
This is necessary to ensure that check-reuse-pool value can be set via
default-server or server-template.
MINOR: backend: mark srv as nonnull in alloc_dst_address()
Server instance can be NULL on connect_server(), either when dispatch or
transparent proxy are active. However, in alloc_dst_address() access to
<srv> is safe thanks to SF_ASSIGNED stream flag. Add an ASSUME_NONNULL()
to reflect this state.
This should fix coverity report from github issue #2922.
DOC: configuration: replace "crt" by "ssl-f-use" in listeners
Replace the "crt" keyword from the frontend section with a "ssl-f-use"
keyword, "crt" could be ambigous in case we don't want to put a
certificate filename.
MEDIUM: ssl: replace "crt" lines by "ssl-f-use" lines
The new "crt" lines in frontend and listen sections are confusing:
- a filename is mandatory but we could need a syntax without the
filename in the future, if the filename is generated for example
- there is no clue about the fact that its only used on the frontend
side when reading the line
A new "ssl-f-use" line replaces the "crt" line, but a "crt" keyword
can be used on this line. "f" indicates that this is the frontend
configuration, a "ssl-b-use" keyword could be used in the future.
The "crt" lines only appeared in 3.2-dev so this won't change anything
for people using configurations from previous major versions.
The old __sync_* API is no longer necessary since we do not support
gcc before 4.7 anymore. Let's just get rid of this code, the file is
still ugly enough without it.
MEDIUM: stream: Save SC and channel flags earlier in process_stream()
At the beginning of process_stream(), the flags of the stream connectors
and channels are saved to be able to handle changes performed in
sub-functions (for instance in analyzers). But some operations were
performed before saving these flags: synchronous receives and forced
shutdowns. While it seems to be safe for now, it is a bit annoying
because some events could be missed.
So, to avoid bugs in the future, the channels and stream connectors flags
are now really saved before any other processing.
BUG/MEDIUM: stream: Fix a possible freeze during a forced shut on a stream
When a forced shutdown is performed on a stream, it is possible to
freeze it indefinitely because it is performed in an unexpected way from
process_stream()'s point of view, especially when the stream is waiting
for a server connection. The events sequence is a bit complex, but in
the end the stream remains blocked in turn-around state and no event is
triggered to unblock it.
While trying to fix the issue, we considered it was safer to rethink the
feature. The idea is to quickly shut down a stream to release resources,
for instance to be able to delete a server. So, instead of scheduling a
shutdown, it is more efficient to trigger an error and detach the stream
from the server if necessary. The same code as the one used to deal with
connection errors in back_handle_st_cer() is used.
This patch must be slowly backported as far as 2.6.
Released version 3.2-dev9 with the following main changes :
- MINOR: quic: move global tune options into quic_tune
- CLEANUP: quic: reorganize TP flow-control initialization
- MINOR: quic: ignore uni-stream for initial max data TP
- MINOR: mux-quic: define config for max-data
- MINOR: quic: define max-stream-data configuration as a ratio
- MEDIUM: lb-chash: add directive hash-preserve-affinity
- MEDIUM: pools: be a bit smarter when merging comparable size pools
- REGTESTS: disable the test balance/balance-hash-maxqueue
- BUG/MINOR: log: fix gcc warn about truncating NUL terminator while init char arrays
- CI: fedora rawhide: allow "on: workflow_dispatch" in forks
- CI: fedora rawhide: install "awk" as a dependency
- CI: spellcheck: allow "on: workflow_dispatch" in forks
- CI: coverity scan: allow "on: workflow_dispatch" in forks
- CI: cross compile: allow "on: workflow_dispatch" in forks
- CI: Illumos: allow "on: workflow_dispatch" in forks
- CI: NetBSD: allow "on: workflow_dispatch" in forks
- CI: QUIC Interop on AWS-LC: allow "on: workflow_dispatch" in forks
- CI: QUIC Interop on LibreSSL: allow "on: workflow_dispatch" in forks
- MINOR: compiler: add __nonstring macro
- MINOR: thread: dump the CPU topology in thread_map_to_groups()
- MINOR: cpu-set: compare two cpu sets with ha_cpuset_isequal()
- MINOR: cpu-set: add a new function to print cpu-sets in human-friendly mode
- MINOR: cpu-topo: add a dump of thread-to-CPU mapping to -dc
- MINOR: cpu-topo: pass an extra argument to ha_cpu_policy
- MINOR: cpu-topo: add new cpu-policies "group-by-2-clusters" and above
- BUG/MINOR: config: silence .notice/.warning/.alert in discovery mode
- EXAMPLES: add "games.cfg" and an example game in Lua
- MINOR: jws: emit the JWK thumbprint
- TESTS: jws: change the jwk format
- MINOR: ssl/ckch: add substring parser for ckch_conf
- MINOR: mt_list: Implement mt_list_try_lock_prev().
- MINOR: lbprm: Add method to deinit server and proxy
- MINOR: threads: Add HA_RWLOCK_TRYRDTOWR()
- MAJOR: leastconn: Revamp the way servers are ordered.
- BUG/MINOR: ssl/ckch: leak in error path
- BUILD: ssl/ckch: potential null pointer dereference
- MINOR: log: support "raw" logformat node typecast
- CLEANUP: assorted typo fixes in the code and comments
- DOC: config: fix two missing "content" in "tcp-request" examples
- MINOR: cpu-topo: cpu_dump_topology() SMT info check little optimisation
- BUILD: compiler: undefine the CONCAT() macro if already defined
- BUG/MEDIUM: leastconn: Don't try to reposition if the server is down
- BUG/MINOR: rhttp: fix incorrect dst/dst_port values
- BUG/MINOR: backend: do not overwrite srv dst address on reuse
- BUG/MEDIUM: backend: fix reuse with set-dst/set-dst-port
- MINOR: sample: define bc_reused fetch
- REGTESTS: extend conn reuse test with transparent proxy
- MINOR: backend: fix comment when killing idle conns
- MINOR: backend: adjust conn_backend_get() API
- MINOR: backend: extract conn hash calculation from connect_server()
- MINOR: backend: extract conn reuse from connect_server()
- MINOR: backend: remove stream usage on connection reuse
- MINOR: check: define check-reuse-pool server keyword
- MEDIUM: check: implement check-reuse-pool
- BUILD: backend: silence a build warning when not using ssl
- BUILD: quic_sock: address a strict-aliasing build warning with gcc 5 and 6
- BUILD: ssl_ckch: use my_strndup() instead of strndup()
- DOC: update INSTALL to reflect the minimum compiler version
DOC: update INSTALL to reflect the minimum compiler version
The mt_list update in 3.1 mandated the support for c11-like atomics that
arrived with gcc-4.7. As such, older versions are no longer supported.
For special cases in single-threaded environments, mt_lists could be
replaced with regular lists but it doesn't seem worth the hassle. It
was verified that gcc 4.7 to 14 and clang 3.0 and 19 do build fine.
That leaves us with 10 years of coverage of compiler versions, which
remains reasonable assuming that users of old ultra-stable systems are
unlikely to upgrade haproxy without touching the rest of the system.
BUILD: ssl_ckch: use my_strndup() instead of strndup()
Not all systems have strndup(), that's why we have our "my_strndup()",
so let's make use of it here. This fixes the build on Solaris 10.
No backport is needed, this was just merged with commit fdcb97614c
("MINOR: ssl/ckch: add substring parser for ckch_conf").
BUILD: quic_sock: address a strict-aliasing build warning with gcc 5 and 6
The UDP GSO code emits a build warning with older toolchains (gcc 5 and 6):
src/quic_sock.c: In function 'cmsg_set_gso':
src/quic_sock.c:683:2: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*((uint16_t *)CMSG_DATA(c)) = gso_size;
^
Let's just use the write_u16() function that's made for this purpose.
It was verified that for all versions from 5 to 13, gcc produces the
exact same code with the fix (and without the warning). It arrived in
3.1 with commit 448d3d388a ("MINOR: quic: add GSO parameter on quic_sock
send API") so this can be backported there.
BUILD: backend: silence a build warning when not using ssl
Since recent commit ee94a6cfc1 ("MINOR: backend: extract conn reuse
from connect_server()") a build warning "set but not used" on the
"reuse" variable is emitted, because indeed the variable is now only
checked when SSL is in use. Let's just mark it as such.
MEDIUM: check: implement check-reuse-pool
Implement the possibility to reuse idle connections when performing
server checks. This is done thanks to the recently introduced functions
be_calculate_conn_hash() and be_reuse_connection().
One side effect of this change is that be_calculate_conn_hash() can now
be called with a NULL stream instance. As such, parts of the functions
are adjusted accordingly.
Note that to simplify configuration, connection reuse is not performed
if any specific check connection parameters are defined on the server
line or via the tcp-check connect rule. This is performed via newly
defined tcpcheck_use_nondefault_connect().