Amaury Denoyelle [Thu, 27 Mar 2025 09:37:48 +0000 (10:37 +0100)]
MINOR: backend: extract conn reuse from connect_server()
Following the previous patch, the part directly related to connection
reuse is extracted from connect_server(). It is now defined in a new
function be_reuse_connection().
Amaury Denoyelle [Fri, 28 Mar 2025 16:42:14 +0000 (17:42 +0100)]
MINOR: backend: extract conn hash calculation from connect_server()
On connection reuse, a hash is first calculated. It is generated from
various connection parameters, to retrieve a matching connection.
Extract hash calculation from connect_server() into a new dedicated
function be_calculate_conn_hash(). The objective is to be able to
perform connection reuse for checks, without connect_server() invocation
which relies on a stream instance.
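For illustration, a minimal sketch of what such a hash computation can
look like (hypothetical names, not the actual haproxy code; haproxy
bundles xxHash, which is used here as the hash function):

    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include "xxhash.h"

    /* Sketch only: fold every parameter that must match between the
     * stream and a candidate idle connection into a single 64-bit key. */
    static uint64_t be_calculate_conn_hash_sketch(const struct sockaddr_storage *dst,
                                                  socklen_t dst_len, const char *sni)
    {
        uint64_t h = 0;

        h = XXH64(dst, dst_len, h);           /* destination address */
        if (sni)
            h = XXH64(sni, strlen(sni), h);   /* TLS SNI, if any */
        /* ... source address, proxy-protocol, mark/tos would follow ... */
        return h;
    }

A connection is then only eligible for reuse when its stored hash equals
the one computed for the new request.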
Amaury Denoyelle [Mon, 31 Mar 2025 13:59:01 +0000 (15:59 +0200)]
MINOR: backend: adjust conn_backend_get() API
The main objective of this patch is to remove the stream instance from
conn_backend_get() parameters. This will allow reuse to be performed
outside of stream contexts, for example for checks.
Amaury Denoyelle [Thu, 27 Mar 2025 16:39:39 +0000 (17:39 +0100)]
MINOR: backend: fix comment when killing idle conns
Previously, if a server reached its pool-high-count limit, connections
were killed in connect_server() when reuse was not possible. However,
this is now performed even if reuse is done, since the following patch:
b3397367dc7cec9e78c62c54efc24d9db5cde2d2
MEDIUM: connections: Kill connections even if we are reusing one.
Thus, adjust the related comment to reflect this state.
REGTESTS: extend conn reuse test with transparent proxy
Recently, work on connection reuse revealed an issue when mixed with
transparent proxy and set-dst. This patch rewrites the related regtests
to be able to catch this now-fixed bug.
Note that it is the first regtest which relies on the recently
introduced bc_reused sample fetch. This fetch could be reused in other
connection reuse regtests to simplify them.
Amaury Denoyelle [Mon, 31 Mar 2025 15:57:56 +0000 (17:57 +0200)]
BUG/MEDIUM: backend: fix reuse with set-dst/set-dst-port
On backend connection reuse, a hash is calculated from various
parameters, to ensure the selected connection matches the requested
parameters. Notably, the destination address is one of these parameters.
However, it is only taken into account when using a transparent server
(server address 0.0.0.0).
This may cause an issue where an incorrect connection is reused, one
which does not target the correct destination address. This may be the
case if set-dst/set-dst-port is used with a transparent proxy (proxy
option transparent).
The fix is simple enough. Destination address is now always used as
input to the connection reuse hash.
This must be backported up to 2.6. Note that for reverse HTTP to work,
it relies on the following patch, which ensures destination address
remains NULL in this case.
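The essence of the fix, as a fragment (hypothetical names, not the
literal code, consistent with the hash sketch above):

    /* before: destination only fed to the hash for transparent servers */
    if (srv_is_transparent(srv))
        hash = XXH64(dst, dst_len, hash);

    /* after: destination address is always part of the reuse hash */
    hash = XXH64(dst, dst_len, hash);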
Amaury Denoyelle [Thu, 27 Mar 2025 16:06:06 +0000 (17:06 +0100)]
BUG/MINOR: backend: do not overwrite srv dst address on reuse
Previously, the destination address of a backend connection was
systematically reassigned. However, this step is unnecessary on
connection reuse. Indeed, reuse should only be conducted with a
connection using the same destination address, matching the stream
requirements.
This patch removes this unnecessary assignment. It is now only performed
when reuse cannot be conducted and a new connection is instantiated.
Functionally speaking, this patch should not change anything in theory,
as reuse is performed in conformance with the destination address.
However, it appears that this was not always properly enforced. The
systematic assignment of the destination address hid these issues, so
it is now removed. The identified bogus cases will be fixed in the
following patches.
This should be backported to all stable versions.
With a @rhttp server, connect is not possible; transfer is only
possible via idle connection reuse. The server does not have any network
address. Thus, it is unnecessary to allocate the stream destination
address prior to connection reuse. This patch adjusts this by fixing
alloc_dst_address() to take this into account.
Prior to this patch, alloc_dst_address() would incorrectly assimilate a
@rhttp server to a transparent proxy mode. The stream destination
address would be copied from the destination address. The connection
address would then be rewritten with this incorrect value. This did not
impact connect or reuse, as the destination address is only used in the
idle conn hash calculation for transparent servers. However, it caused
incorrect values for the dst/dst_port samples.
BUG/MEDIUM: leastconn: Don't try to reposition if the server is down
It may happen that the server is going down, and fwlc_srv_reposition()
is still called, because streams still attached to the server are
being terminated.
So in fwlc_srv_reposition(), just do nothing if we've been removed from
the tree.
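For illustration, the guard amounts to something like this (a sketch;
the real function also handles the locking around the tree):

    /* do nothing if the server was already removed from the tree,
     * e.g. because it is going down */
    static void fwlc_srv_reposition(struct server *s)
    {
        if (!s->lb_tree)
            return;
        /* ... recompute the key and reposition the server ... */
    }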
BUILD: compiler: undefine the CONCAT() macro if already defined
As Ilya reported in issue #2911, the CONCAT() macro breaks on NetBSD
which defines its own as __CONCAT() (which is exactly the same). Let's
just undefine it before ours to fix the issue instead of renaming, but
keep ours so that we don't have doubts about what we're running with.
Note that the patch introducing this breaking change was backported
to 3.0.
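The resulting guard is the usual pattern; sketched:

    /* NetBSD's <sys/cdefs.h> may already define CONCAT(); drop it and
     * keep our definition so we always know which one is in effect. */
    #ifdef CONCAT
    #undef CONCAT
    #endif
    #define CONCAT(a, b) a##b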
DOC: config: fix two missing "content" in "tcp-request" examples
As reported by Uku Sõrmus in GitHub issue #2917, two "tcp-request" rules
in an example were mistakenly missing the "content" hook, rendering them
invalid.
"raw" logformat node typecast is a special value (unlike str,bool,int..)
which tells haproxy to completely ignore logformat options (including
encoding ones) and force binary output for the current node only. It is
mainly intended for use with JSON or CBOR encoders in order to generate
nested CBOR or nested JSON by storing intermediate log-formats within
variables and assembling the final object in the parent log-format.
Olivier Houchard [Wed, 26 Mar 2025 16:16:48 +0000 (16:16 +0000)]
MAJOR: leastconn: Revamp the way servers are ordered.
For leastconn, servers used to just be stored in an ebtree.
Each server would be one node.
Change that so that nodes contain multiple mt_lists. Each list
contains servers that share the same key (typically meaning
they have the same number of connections). Using mt_lists means
that as long as tree elements already exist, moving a server from
one tree element to another no longer requires the lbprm write
lock.
We use multiple mt_lists to reduce the contention when moving
a server from one tree element to another. A list in the new
element will be chosen randomly.
We no longer remove a tree element as soon as it no longer
contains any server. Instead, we keep a list of all elements,
and when we need a new element, we take one from that list if it
contains any, otherwise we allocate a new one. Keeping nodes in
the tree ensures that we very rarely have to take the lbprm write
lock (as it only happens when we're moving the server to a
position for which no element is currently in the tree).
The number of mt_lists used is defined as FWLC_NB_LISTS.
The number of tree elements we want to keep is defined as
FWLC_MIN_FREE_ENTRIES, both in defaults.h.
The values used were picked after experimentation, and
seem to be the best compromise between performance and
memory usage.
Doing that gives a good performance boost when a lot of
servers are used.
With a configuration using 500 servers, before this patch,
about 830000 requests per second could be processed; with
this patch, about 1550000 requests per second are
processed, on a 64-core AMD, using 1200 concurrent connections.
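For illustration, the resulting layout can be sketched like this (field
and helper names are assumptions, not the actual haproxy structures):

    #define FWLC_NB_LISTS 4   /* illustrative; the real value is in defaults.h */

    /* One tree element is shared by all servers having the same key
     * (i.e. the same number of connections). */
    struct fwlc_tree_elt {
        struct eb32_node node;                /* keyed by connection count */
        struct mt_list lists[FWLC_NB_LISTS];  /* servers sharing this key */
    };

    /* Moving a server to another element only locks one mt_list, not
     * the whole tree; the destination list is picked randomly to
     * spread contention: */
    unsigned int idx = ha_random32() % FWLC_NB_LISTS;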
Olivier Houchard [Tue, 25 Mar 2025 16:24:46 +0000 (16:24 +0000)]
MINOR: lbprm: Add method to deinit server and proxy
Add two new methods to lbprm, server_deinit() and proxy_deinit(),
in case something should be done at the lbprm level when
removing servers and proxies.
Implement mt_list_try_lock_prev(), which does the same thing
as mt_list_lock_prev(), except that if the list is locked, it
returns { NULL, NULL } instead of waiting.
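A hedged usage sketch (the unlock helper name is an assumption):

    struct mt_list save;

    /* try to lock the link before <item>; returns { NULL, NULL }
     * immediately if the list is already locked */
    save = mt_list_try_lock_prev(&item->list);
    if (!save.next)
        return 0;   /* contended: let the caller retry or back off */
    /* ... modify the list ... */
    mt_list_unlock_link(save);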
jwk_thumbprint() is a function which implements RFC 7638 and emits a
JWK thumbprint using an EVP_PKEY.
EVP_PKEY_EC_to_pub_jwk() and EVP_PKEY_RSA_to_pub_jwk() were changed in
order to match what is required to emit a thumbprint (i.e. no spaces or
line breaks, and fields in lexicographic order)
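For reference, RFC 7638 defines the thumbprint as the SHA-256 hash of a
canonical JSON object containing only the required public members,
sorted lexicographically. A minimal sketch for an EC P-256 key, using
OpenSSL (buffer handling simplified; x and y are assumed to be already
base64url-encoded):

    #include <stdio.h>
    #include <openssl/sha.h>

    static void ec_jwk_thumbprint(const char *x, const char *y,
                                  unsigned char out[SHA256_DIGEST_LENGTH])
    {
        char jwk[512];
        int len;

        /* members in lexicographic order: crv, kty, x, y; no spaces */
        len = snprintf(jwk, sizeof(jwk),
                       "{\"crv\":\"P-256\",\"kty\":\"EC\",\"x\":\"%s\",\"y\":\"%s\"}",
                       x, y);
        SHA256((unsigned char *)jwk, len, out);
    }

The resulting digest is then base64url-encoded to form the thumbprint.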
EXAMPLES: add "games.cfg" and an example game in Lua
The purpose is mainly to exhibit certain limitations that come with such
less common programming models, to show users how to program interactive
tools in Lua, and how to connect interactively.
Other use cases that could be envisioned are "top" and various monitoring
utilities, with sliding graphs etc. Lua is particularly attractive for
this usage, easy to program, well known from most AI tools (including its
integration into haproxy), making such programs very quick to obtain in
their basic form, and to improve later.
A very limited example game is provided, following the principle of a
very popular one, where the player must compose lines from falling
pieces. It quickly revealed the need for the ability to enforce a
timeout on applet:receive(). Other identified limitations include the
difficulty, from the Lua side, of monitoring multiple events at once,
but it seems that
callbacks and/or event dispatchers would be useful here.
At the moment the CLI is not workable (its interactivity was broken in 2.9
when line buffering was adopted), though it was verified that it works
with older releases.
The command needed to connect to the game is displayed as a notice message
during boot.
BUG/MINOR: config: silence .notice/.warning/.alert in discovery mode
When first pre-parsing the config to detect the presence or absence of
the master mode, we must not emit messages because they are not supposed
to be visible at this point, otherwise they appear twice each. The
pre-parsing, also called discovery mode, is only for internal use,
thus it should remain silent.
This should be backported to 3.1 where this mode was introduced.
Willy Tarreau [Mon, 31 Mar 2025 13:00:57 +0000 (15:00 +0200)]
MINOR: cpu-topo: add new cpu-policies "group-by-2-clusters" and above
This adds "group-by-{2,3,4}-clusters", which, as the name implies,
creates one thread group per X clusters. This can be useful when CPUs
are split into too small clusters, as well as when the total number
of assigned cores is not even between the clusters, to try to spread
the load between groups that differ less.
Willy Tarreau [Mon, 31 Mar 2025 12:36:26 +0000 (14:36 +0200)]
MINOR: cpu-topo: add a dump of thread-to-CPU mapping to -dc
When emitting the CPU topology info with -dc, also emit a list of
thread-to-CPU mapping. The group/thread and thread ID are emitted
with the list of their CPUs on each line. The count of CPUs is shown
to ease comparisons, and as much as possible, we try to pack identical
lines within a group by showing thread ranges.
Willy Tarreau [Mon, 31 Mar 2025 12:35:09 +0000 (14:35 +0200)]
MINOR: cpu-set: add a new function to print cpu-sets in human-friendly mode
The new function "print_cpu_set()" will print cpu sets in a human-friendly
way, with commas and dashes for intervals. The goal is to keep them compact
enough.
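The classic approach is to coalesce consecutive IDs into ranges while
walking the set; a standalone sketch (not the actual print_cpu_set()
code, and limited to one machine word here):

    #include <stdio.h>

    /* print the set bits of <mask> as e.g. "0-3,8,10-11" */
    static void print_cpu_mask(unsigned long mask)
    {
        int cpu = 0, first = 1;

        while (mask) {
            if (mask & 1) {
                int start = cpu;

                while (mask & 2) {        /* extend the current run */
                    mask >>= 1;
                    cpu++;
                }
                printf("%s%d", first ? "" : ",", start);
                if (cpu > start)
                    printf("-%d", cpu);
                first = 0;
            }
            mask >>= 1;
            cpu++;
        }
        putchar('\n');
    }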
Willy Tarreau [Mon, 31 Mar 2025 13:40:41 +0000 (15:40 +0200)]
MINOR: thread: dump the CPU topology in thread_map_to_groups()
It was previously done in thread_detect_count() but that's not quite
handy because we still don't know about the groups setting. Better do
it slightly later and have all the relevant info instead.
GCC 15 throws the following warning on fixed-size char arrays if they
do not contain a terminating NUL:
src/tools.c:2041:25: error: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (17 chars into 16 available) [-Werror=unterminated-string-initialization]
2041 | const char hextab[16] = "0123456789ABCDEF";
We are using a couple of such definitions for some constants. Converting them
to flexible arrays, like: hextab[] = "0123456789ABCDEF", may have
consequences, as enlarged arrays won't fit anymore where they were
possibly located due to memory alignment constraints.
GCC adds a 'nonstring' variable attribute for such char arrays, but
clang and other compilers don't have it. Let's wrap 'nonstring' with our
__nonstring macro, which will test whether the compiler supports this
attribute.
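A plausible shape for such a wrapper (a sketch; the actual macro in the
haproxy headers may differ):

    /* define __nonstring only where the compiler knows the attribute,
     * so the annotation is a no-op elsewhere */
    #ifndef __nonstring
    # if defined(__has_attribute)
    #  if __has_attribute(nonstring)
    #   define __nonstring __attribute__((nonstring))
    #  endif
    # endif
    # ifndef __nonstring
    #  define __nonstring
    # endif
    #endif

    const char hextab[16] __nonstring = "0123456789ABCDEF";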
Willy Tarreau [Tue, 25 Mar 2025 17:19:05 +0000 (18:19 +0100)]
REGTESTS: disable the test balance/balance-hash-maxqueue
This test brought by commit 8ed1e91efd ("MEDIUM: lb-chash: add directive
hash-preserve-affinity") seems to have hit a limitation of what can be
expressed in vtc, as it would be desirable to have one server response
release two clients at once but the various attempts using barriers
have failed so far. The test seems to work fine locally but still fails
almost 100% of the time on the CI, so it remains timing dependent in
some ways. Tests have been done with nbthread 1, pool-idle-shared off,
http-reuse never (since it always fails locally), etc., but to no avail.
Let's just mark it broken in case we later figure out another way to fix
it. It's still usable locally most of the time, though.
Willy Tarreau [Thu, 20 Mar 2025 09:48:37 +0000 (10:48 +0100)]
MEDIUM: pools: be a bit smarter when merging comparable size pools
By default, pools of comparable sizes are merged together. However, the
current algorithm is dumb: it rounds the requested size to the next
multiple of 16 and compares the sizes like this. This results in many
entries which are already multiples of 16 not being merged, for example
1024 and 1032 are separate, 65536 and 65540 are separate, 48 and 56 are
separate (though 56 merges with 64).
This commit changes this to consider not just the entry size but also the
average entry size, that is, it compares the average size of all objects
sharing the pool with the size of the object looking for a pool. If the
object is no more than 1% bigger or smaller than the current average
size, or if it is neither 16 bytes smaller nor larger, then it can be
merged.
Also, it always respects exact matches in order to avoid merging objects
into larger pools or worse, extending existing ones for no reason, and
when there's a tie, it always avoids extending an existing pool.
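In other words, the acceptance test roughly becomes (a sketch of the
rule as described, not the literal code):

    /* can an object of <size> join a pool whose current average object
     * size is <avg>? exact matches always win. */
    static int may_merge(size_t size, size_t avg)
    {
        if (size == avg)
            return 1;                              /* exact match */
        if (size <= avg + avg / 100 && size + size / 100 >= avg)
            return 1;                              /* within ~1% of the average */
        if (size <= avg + 16 && size + 16 >= avg)
            return 1;                              /* within 16 bytes */
        return 0;
    }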
Also, we now visit all existing pools in order to spot the best one, we
do not stop anymore at the smallest one large enough. Theoretically this
could cost a bit of CPU but in practice it's O(N^2) with N quite small
(typically in the order of 100) and the cost at each step is very low
(compare a few integer values). But as a side effect, pools are no
longer sorted by size, "show pools bysize" is needed for this.
This causes the objects to be much better grouped together, accepting to
use a little bit more sometimes to avoid fragmentation, without causing
everyone to be merged into the same pool. Thanks to this we're now
seeing 36 pools instead of 48 by default, with some very nice examples
of compact grouping:
On a very unscientific test consisting in sending 1 million H1 requests
and 1 million H2 requests to the stats page, we're seeing an ~6% lower
memory usage with the patch:
before the patch:
Total: 48 pools, 4120832 bytes allocated, 4120832 used (~3555680 by thread caches).
after the patch:
Total: 36 pools, 3880648 bytes allocated, 3880648 used (~3299064 by thread caches).
This should be taken with care however since pools allocate and release
in batches.
When using hash-based load balancing, requests are always assigned to
the server corresponding to the hash bucket for the balancing key,
without taking maxconn or maxqueue into account, unlike in other load
balancing methods like 'first'. This adds a new backend directive that
can be used to take maxconn and possibly maxqueue into account in that
context. This
can be used when hashing is desired to achieve cache locality, but
sending requests to a different server is preferable to queuing for a
long time or failing requests when the initial server is saturated.
By default, affinity is preserved as was the case previously. When
'hash-preserve-affinity' is set to 'maxqueue', servers are considered
successively in the order of the hash ring until a server that does not
have a full queue is found.
When 'maxconn' is set on a server, queueing cannot be disabled, as
'maxqueue=0' means unlimited. To support picking a different server
when a server is at 'maxconn' irrespective of the queue,
'hash-preserve-affinity' can be set to 'maxconn'.
Amaury Denoyelle [Wed, 19 Mar 2025 16:19:35 +0000 (17:19 +0100)]
MINOR: mux-quic: define config for max-data
Define a new global configuration tune.quic.frontend.max-data. This
allows users to explicitly set the value for the corresponding QUIC TP
initial-max-data, with direct impact on haproxy memory consumption.
Amaury Denoyelle [Thu, 20 Mar 2025 15:18:13 +0000 (16:18 +0100)]
MINOR: quic: ignore uni-stream for initial max data TP
Initial TP value for max-data is automatically calculated to be adjusted
to the maximum number of opened streams over a QUIC connection. This
took into account both max-streams-bidi-remote and uni-streams. By
default, this is equivalent to 100 + 3 = 103 max opened streams.
This patch simplifies the calculation by only using bidirectional
streams. Uni streams are ignored because they are only used for HTTP/3
control exchanges, which should only represent a few bytes. For now,
users can only configure the max number of remote bidi streams, so the
simplified calculation should make more sense to them.
Note that this relies on the assumption that HTTP/3 is used as
application protocol. To support other protocols, it may be necessary to
review this and take into account both local bidi and uni streams.
Adjust initialization of flow-control transport parameters via
quic_transport_params_init().
This is purely cosmetic, with some comments added. It is also a
preparatory step for future patches with addition of new configuration
keywords related to flow-control TP values.
Amaury Denoyelle [Mon, 17 Mar 2025 14:15:55 +0000 (15:15 +0100)]
MINOR: quic: move global tune options into quic_tune
A new structure quic_tune has recently been defined. Its purpose is to
store global options related to QUIC. Previously, only the tunable to
toggle pacing was stored in it.
This commit moves several QUIC-related tunables from the global to the
quic_tune structure. This better centralizes QUIC configuration options
and gives
room for future generic options.
Willy Tarreau [Fri, 21 Mar 2025 16:33:36 +0000 (17:33 +0100)]
[RELEASE] Released version 3.2-dev8
Released version 3.2-dev8 with the following main changes :
- MINOR: jws: implement JWS signing
- TESTS: jws: implement a test for JWS signing
- CI: github: add "jose" to apt dependencies
- CLEANUP: log-forward: remove useless options2 init
- CLEANUP: log: add syslog_process_message() helper
- MINOR: proxy: add proxy->options3
- MINOR: log: migrate log-forward options from proxy->options2 to options3
- MINOR: log: provide source address information in syslog_process_message()
- MINOR: tools: only print address in sa2str() when port == -1
- MINOR: log: add "option host" log-forward option
- MINOR: log: handle log-forward "option host"
- MEDIUM: log: change default "host" strategy for log-forward section
- BUG/MEDIUM: thread: use pthread_self() not ha_pthread[tid] in set_affinity
- MINOR: compiler: add a simple macro to concatenate resolved strings
- MINOR: compiler: add a new __decl_thread_var() macro to declare local variables
- BUILD: tools: silence a build warning when USE_THREAD=0
- BUILD: backend: silence a build warning when threads are disabled
- DOC: management: rename some last occurences from domain "dns" to "resolvers"
- BUG/MINOR: stats: fix capabilities and hide settings for some generic metrics
- MINOR: cli: export cli_io_handler() to ease symbol resolution
- MINOR: tools: improve symbol resolution without dl_addr
- MINOR: tools: ease the declaration of known symbols in resolve_sym_name()
- MINOR: tools: teach resolve_sym_name() a few more common symbols
- BUILD: tools: avoid a build warning on gcc-4.8 in resolve_sym_name()
- DEV: ncpu: also emulate sysconf() for _SC_NPROCESSORS_*
- DOC: design-thoughts: commit numa-auto.txt
- MINOR: cpuset: make the API support negative CPU IDs
- MINOR: thread: rely on the cpuset functions to count bound CPUs
- MINOR: cpu-topo: add ha_cpu_topo definition
- MINOR: cpu-topo: allocate and initialize the ha_cpu_topo array.
- MINOR: cpu-topo: rely on _SC_NPROCESSORS_CONF to trim maxcpus
- MINOR: cpu-topo: add a function to dump CPU topology
- MINOR: cpu-topo: update CPU topology from excluded CPUs at boot
- REORG: cpu-topo: move bound cpu detection from cpuset to cpu-topo
- MINOR: cpu-topo: add detection of online CPUs on Linux
- MINOR: cpu-topo: add detection of online CPUs on FreeBSD
- MINOR: cpu-topo: try to detect offline cpus at boot
- MINOR: cpu-topo: add CPU topology detection for linux
- MINOR: cpu-topo: also store the sibling ID with SMT
- MINOR: cpu-topo: add NUMA node identification to CPUs on Linux
- MINOR: cpu-topo: add NUMA node identification to CPUs on FreeBSD
- MINOR: thread: turn thread_cpu_mask_forced() into an init-time variable
- MINOR: cfgparse: move the binding detection into numa_detect_topology()
- MINOR: cfgparse: use already known offline CPU information
- MINOR: global: add a command-line option to enable CPU binding debugging
- MINOR: cpu-topo: add a new "cpu-set" global directive to choose cpus
- MINOR: cpu-topo: add "drop-cpu" and "only-cpu" to cpu-set
- MEDIUM: thread: start to detect thread groups and threads min/max
- MEDIUM: cpu-topo: make sure to properly assign CPUs to threads as a fallback
- MEDIUM: thread: reimplement first numa node detection
- MEDIUM: cfgparse: remove now unused numa & thread-count detection
- MINOR: cpu-topo: refine cpu dump output to better show kept/dropped CPUs
- MINOR: cpu-topo: fall back to nominal_perf and scaling_max_freq for the capacity
- MINOR: cpu-topo: use cpufreq before acpi cppc
- MINOR: cpu-topo: boost the capacity of performance cores with cpufreq
- MINOR: cpu-topo: skip CPU detection when /sys/.../cpu does not exist
- MINOR: cpu-topo: skip identification of non-existing CPUs
- MINOR: cpu-topo: skip CPU properties that we've verified do not exist
- MINOR: cpu-topo: implement a sorting mechanism for CPU index
- MINOR: cpu-topo: implement a sorting mechanism by CPU locality
- MINOR: cpu-topo: implement a CPU sorting mechanism by cluster ID
- MINOR: cpu-topo: ignore single-core clusters
- MINOR: cpu-topo: assign clusters to cores without and renumber them
- MINOR: cpu-topo: make sure we don't leave unassigned IDs in the cpu_topo
- MINOR: cpu-topo: assign an L3 cache if more than 2 L2 instances
- MINOR: cpu-topo: renumber cores to avoid holes and make them contiguous
- MINOR: cpu-topo: add a function to sort by cluster+capacity
- MINOR: cpu-topo: consider capacity when forming clusters
- MINOR: cpu-topo: create an array of the clusters
- MINOR: cpu-topo: ignore excess of too small clusters
- MINOR: cpu-topo: add "only-node" and "drop-node" to cpu-set
- MINOR: cpu-topo: add "only-thread" and "drop-thread" to cpu-set
- MINOR: cpu-topo: add "only-core" and "drop-core" to cpu-set
- MINOR: cpu-topo: add "only-cluster" and "drop-cluster" to cpu-set
- MINOR: cpu-topo: add a CPU policy setting to the global section
- MINOR: cpu-topo: add a 'first-usable-node' cpu policy
- MEDIUM: cpu-topo: use the "first-usable-node" cpu-policy by default
- CLEANUP: thread: now remove the temporary CPU node binding code
- MINOR: cpu-topo: add cpu-policy "group-by-cluster"
- MEDIUM: cpu-topo: let the "group-by-cluster" split groups
- MINOR: cpu-topo: add a new "performance" cpu-policy
- MINOR: cpu-topo: add a new "efficiency" cpu-policy
- MINOR: cpu-topo: add a new "resource" cpu-policy
- MINOR: jws: add new functions in jws.h
- MINOR: cpu-topo: fix unused stack var 'cpu2' reported by coverity
- MINOR: hlua: add an optional timeout to AppletTCP:receive()
- MINOR: jws: use jwt_alg type instead of a char
- BUG/MINOR: log: prevent saddr NULL deref in syslog_io_handler()
- MINOR: stream: decrement srv->served after detaching from the list
- BUG/MINOR: hlua: fix optional timeout argument index for AppletTCP:receive()
- MINOR: server: simplify srv_has_streams()
- CLEANUP: server: make it clear that srv_check_for_deletion() is thread-safe
- MINOR: cli/server: don't take thread isolation to check for srv-removable
- BUG/MINOR: limits: compute_ideal_maxconn: don't cap remain if fd_hard_limit=0
- MINOR: limits: fix check_if_maxsock_permitted description
- BUG/MEDIUM: hlua/cli: fix cli applet UAF in hlua_applet_wakeup()
- MINOR: tools: path_base() concatenates a path with a base path
- MEDIUM: ssl/ckch: make the ckch_conf more generic
- BUG/MINOR: mux-h2: Reset streams with NO_ERROR code if full response was already sent
- MINOR: stats: add .generic explicit field in stat_col struct
- MINOR: stats: STATS_PX_CAP___B_ macro
- MINOR: stats: add .cap for some static metrics
- MINOR: stats: use stat_col storage stat_cols_info
- MEDIUM: promex: switch to using stat_cols_info for global metrics
- MINOR: promex: expose ST_I_INF_WARNINGS (AKA total_warnings) metric
- MEDIUM: promex: switch to using stat_cols_px for front/back/server metrics
- MINOR: stats: explicitly add frontend cap for ST_I_PX_REQ_TOT
- CLEANUP: promex: remove unused PROMEX_FL_{INFO,FRONT,BACK,LI,SRV} flags
- BUG/MEDIUM: mux-quic: fix crash on RS/SS emission if already close local
- BUG/MINOR: mux-quic: remove extra BUG_ON() in _qcc_send_stream()
- MEDIUM: mt_list: Reduce the max number of loops with exponential backoff
- MINOR: stats: add alt_name field to stat_col struct
- MINOR: stats: add alt name info to stat_cols_info where relevant
- MINOR: promex: get rid of promex_global_metric array
- MINOR: stats-proxy: add alt_name field for ME_NEW_{FE,BE,PX} helpers
- MINOR: stats-proxy: add alt name info to stat_cols_px where relevant
- MINOR: promex: get rid of promex_st_metrics array
- MINOR: pools: rename the "by_what" field of the show pools context to "how"
- MINOR: cli/pools: record the list of pool registrations even when merging them
Willy Tarreau [Wed, 19 Mar 2025 10:33:36 +0000 (11:33 +0100)]
MINOR: cli/pools: record the list of pool registrations even when merging them
By default, create_pool() tries to merge similar pools into one. But when
dealing with certain bugs, it's hard to say which ones were merged together.
We do have the information at registration time, so let's just create a
list of registrations ("pool_registration") attached to each pool, that
will store that information. It can then be consulted on the CLI using
"show pools detailed", where the names, sizes, alignment and flags are
reported.
Willy Tarreau [Wed, 19 Mar 2025 10:16:58 +0000 (11:16 +0100)]
MINOR: pools: rename the "by_what" field of the show pools context to "how"
The goal will be to support other dump options. We don't need 32 bits to
express sorting criteria, let's reserve only 4 bits for them and leave
the remaining ones unused.
In this patch we pursue the work started in a5aadbd ("MEDIUM: promex:
switch to using stat_cols_px for front/back/server metrics"):
Indeed, while having ".promex_name" info in stat_cols_info generic array
was confusing, Willy suggested that we have ".alt_name" which stays
generic and may be considered by alternative exporters for metric naming.
For now, only promex exporter will make use of it.
Thanks to this, we can completely get rid of the promex_st_metrics
array. The other main benefit is that it will be much harder to
overlook a promex metric definition now, because .alt_name has more
visibility in the main metric array than in an addon file.
MINOR: stats-proxy: add alt_name field for ME_NEW_{FE,BE,PX} helpers
For now alt_name is systematically set to NULL. Thanks to this change
we may easily add an alt_name to existing metrics. Also, by requiring an
explicit value, it gives this field more visibility.
MINOR: promex: get rid of promex_global_metric array
In this patch we pursue the work started in 1adc796 ("MEDIUM: promex:
switch to using stat_cols_info for global metrics"):
Indeed, while having ".promex_name" info in stat_cols_info generic array
was confusing, Willy suggested that we have ".alt_name" which stays
generic and may be considered by alternative exporters for metric naming.
For now, only promex exporter will make use of it.
Thanks to this, we can completely get rid of the
promex_global_metric array. The other main benefit is that it will be
much harder to overlook a promex metric definition now, because
.alt_name has more visibility in the main metric array than in an addon
file.
MINOR: stats: add alt_name field to stat_col struct
alt_name will be used by metric exporters to know how the metric should be
presented to the user. If the alt_name is NULL, the metric should be
ignored. For now only promex exporter will make use of this.
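Conceptually, the column definition gains one field (simplified sketch,
not the full structure):

    struct stat_col {
        const char *name;       /* internal stats name */
        const char *desc;       /* human-readable description */
        const char *alt_name;   /* exporter-facing name, NULL = not exported */
        /* ... capabilities, type, generic bit, ... */
    };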
Olivier Houchard [Fri, 21 Mar 2025 09:56:40 +0000 (09:56 +0000)]
MEDIUM: mt_list: Reduce the max number of loops with exponential backoff
Reduce the max number of loops in the mt_list code while waiting for
a lock to be available with exponential backoff. It's been observed that
the current value led to severe performance degradation at least on
some hardware, hopefully this value will be acceptable everywhere.
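The pattern in question looks roughly like this (a generic sketch with
assumed names; the real constants live in the mt_list code):

    unsigned int wait = 1;

    /* spin with exponentially growing pauses, capped so that a
     * contended lock cannot park a thread for too long */
    while (!try_take_lock(l)) {
        for (unsigned int i = 0; i < wait; i++)
            __builtin_ia32_pause();   /* x86; a portable cpu_relax() otherwise */
        if (wait < MAX_WAIT)          /* MAX_WAIT: the lowered cap */
            wait <<= 1;
    }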
Amaury Denoyelle [Thu, 20 Mar 2025 17:10:56 +0000 (18:10 +0100)]
BUG/MINOR: mux-quic: remove extra BUG_ON() in _qcc_send_stream()
The following patch fixed a BUG_ON() which could be triggered if RS/SS
emission was scheduled after stream local closure:
7ee1279f4b8416435faba5cb93a9be713f52e4df
BUG/MEDIUM: mux-quic: fix crash on RS/SS emission if already close local
qcc_send_stream() was rewritten as a wrapper around an internal
_qcc_send_stream() used to bypass the faulty BUG_ON(). However, an extra
unnecessary BUG_ON() was added by mistake in _qcc_send_stream().
This should not cause any issue, as the BUG_ON() is only active if <urg>
argument is false, which is not the case for RS/SS emission. However,
this patch is labelled as a bug as this BUG_ON() is unnecessary and may
cause issues in the future.
This should be backported up to 2.8, after the above mentioned patch.
Amaury Denoyelle [Thu, 20 Mar 2025 15:01:16 +0000 (16:01 +0100)]
BUG/MEDIUM: mux-quic: fix crash on RS/SS emission if already close local
A BUG_ON() is present in qcc_send_stream() to ensure that emission is
never performed with a stream already closed locally. However, this
function is also used for RESET_STREAM/STOP_SENDING emission. No
protection exists to ensure that RS/SS is not scheduled after stream
local closure, which would result in this BUG_ON() crash.
This crash can be triggered with the following QUIC client sequence:
1. SS is emitted to open a new stream. QUIC-MUX schedules an RS emission
in response, and the stream is locally closed.
2. An invalid HTTP/3 request is sent on the same stream, for example
with duplicated pseudo-headers. The objective is to ensure
qcc_abort_stream_read() is called after stream closure, which results
in the following backtrace.
0x000055555566a620 in qcc_send_stream (qcs=0x7ffff0061420, urg=1, count=0) at src/mux_quic.c:1633
1633 BUG_ON(qcs_is_close_local(qcs));
[ ## gdb ## ] bt
#0 0x000055555566a620 in qcc_send_stream (qcs=0x7ffff0061420, urg=1, count=0) at src/mux_quic.c:1633
#1 0x000055555566a921 in qcc_abort_stream_read (qcs=0x7ffff0061420) at src/mux_quic.c:1658
#2 0x0000555555685426 in h3_rcv_buf (qcs=0x7ffff0061420, b=0x7ffff748d3f0, fin=0) at src/h3.c:1454
#3 0x0000555555668a67 in qcc_decode_qcs (qcc=0x7ffff0049eb0, qcs=0x7ffff0061420) at src/mux_quic.c:1315
#4 0x000055555566c76e in qcc_recv (qcc=0x7ffff0049eb0, id=12, len=0, offset=23, fin=0 '\000',
data=0x7fffe0049c1c "\366\r,\230\205\354\234\301;\2563\335\037k\306\334\037\260", <incomplete sequence \323>) at src/mux_quic.c:1901
#5 0x0000555555692551 in qc_handle_strm_frm (pkt=0x7fffe00484b0, strm_frm=0x7ffff00539e0, qc=0x7fffe0049220, fin=0 '\000') at src/quic_rx.c:635
#6 0x0000555555694530 in qc_parse_pkt_frms (qc=0x7fffe0049220, pkt=0x7fffe00484b0, qel=0x7fffe0075fc0) at src/quic_rx.c:980
#7 0x0000555555696c7a in qc_treat_rx_pkts (qc=0x7fffe0049220) at src/quic_rx.c:1324
#8 0x00005555556b781b in quic_conn_app_io_cb (t=0x7fffe0037f20, context=0x7fffe0049220, state=49232) at src/quic_conn.c:601
#9 0x0000555555d53788 in run_tasks_from_lists (budgets=0x7ffff748e2b0) at src/task.c:603
#10 0x0000555555d541ae in process_runnable_tasks () at src/task.c:886
#11 0x00005555559c39e9 in run_poll_loop () at src/haproxy.c:2858
#12 0x00005555559c41ea in run_thread_poll_loop (data=0x55555629fb40 <ha_thread_info+64>) at src/haproxy.c:3075
The proper solution is to not execute this BUG_ON() for RS/SS emission.
Indeed, it is valid and can be useful to emit these frames, even after
stream local closure.
To implement this, qcc_send_stream() has been rewritten as a mere
wrapper function around the new internal _qcc_send_stream(). The latter
is used only by QMUX for STREAM, RS and SS emission. The application
layer continues to use the original function for STREAM emission, with
the BUG_ON() still in place there.
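The resulting shape is the classic checked-wrapper pattern; sketched
from the description above:

    /* internal variant: no close-local assertion, usable by QMUX for
     * STREAM, RESET_STREAM and STOP_SENDING emission alike */
    static void _qcc_send_stream(struct qcs *qcs, int urg, int count)
    {
        /* ... schedule emission ... */
    }

    /* public variant kept for the application layer's STREAM emission:
     * sending data on a locally closed stream is still a bug there */
    void qcc_send_stream(struct qcs *qcs, int urg, int count)
    {
        BUG_ON(qcs_is_close_local(qcs));
        _qcc_send_stream(qcs, urg, count);
    }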
MINOR: stats: explicitly add frontend cap for ST_I_PX_REQ_TOT
While being a generic metric, ST_I_PX_REQ_TOT is handled specifically
for the frontend case. But the frontend capability isn't set for that
metric. This is actually quite misleading, because the capability may be
checked to see whether the metric is relevant for a given scope, yet the
metric is relevant for the frontend scope.
In this patch we thus add the frontend capability for the metric.
MEDIUM: promex: switch to using stat_cols_px for front/back/server metrics
Now that the stat_cols_px array contains all the info that prometheus
requires, stop using the promex_st_metrics array that contains redundant
info.
As for ("MEDIUM: promex: switch to using stat_cols_info for global
metrics"), initial goal was to completely get rid of promex_st_metrics
array, but it turns out it is still required but only for the name
mapping part now. So in this commit we change it from complex structure
array (with redundant info) to a simple ist array with the
metric id:promex name mapping. If a metric name is not defined there, then
promex ignores it.
It has been requested to have the ST_I_INF_WARNINGS metric available
from prometheus, so let's define it in the promex_global_metrics ist
array so that prometheus starts advertising it.
MEDIUM: promex: switch to using stat_cols_info for global metrics
Now that the stat_cols_info array contains all the info that prometheus
requires, stop using the promex_global_metrics array that contains
redundant info.
Initial goal was to completely drop the promex_global_metrics array.
However it was deemed no longer relevant as prometheus stats rely on a
custom name that cannot be derived from stat_cols_info[], unless we add
a specific ".promex_name" field or similar to name the stats for
prometheus. This is what was carried over on a first attempt but it proved
to burden stat_cols_info[] array (not only memory wise, it is quite
confusing to see promex in the main codebase, given that prometheus is
shipped as an optional add-on).
The new strategy consists in revamping the promex_global_metrics array
from promex_metric (with all redundant fields for metrics) to a simple
ID<==>IST mapping. If the metric is mapped, then it means promex addon
should advertise it (using the name provided in the mapping). Now for
all the metric retrieval, no longer rely on built-in hardcoded values
but instead leverage the new stat cols API.
The tricky part is the .type association, because the general rule
doesn't apply to all metrics: it seems we stated that some non-counter
oriented metrics (at least from haproxy's point of view) had to be
presented as counter metrics. So in this patch we add some special
treatment for
those metrics to emulate the old behavior. If that's not relevant in the
future, it may be removed. But this requires ensuring that promex users
will properly cope with that change. At least for now, no change of
behavior should be expected.
Use stat_col storage for stat_cols_info[] array instead of name_desc.
As documented in 65624876f ("MINOR: stats: introduce a more expressive
stat definition method"), stat_col supersedes name_desc storage but
it remains backward compatible. Here we migrate to the new API to be
able to further extend stat_cols_info[] in following patches.
Goal is to merge promex metrics definition into the main one.
Promex metrics will use the metric capability to know available scopes,
thus only metrics relevant for prometheus were updated.
MINOR: stats: add .generic explicit field in stat_col struct
Further extend logic implemented in 65624876 ("MINOR: stats: introduce a
more expressive stat definition method") and 4e9e8418 ("MINOR: stats:
prepare stats-file support for values other than FN_COUNTER"): we don't
rely anymore on the presence of the capability to know if the metric is
generic or not. This is because it prevents us from setting a capability
on static statistics. Yet it could be useful to set the capability even
on static metrics, thus we add a dedicated .generic bit to tell haproxy
that the metric is generic and can be handled automatically by the API.
Also, ME_NEW_* helpers are now explicitly associated with generic metric
definitions (as was already the case before) to avoid ambiguities.
It may change in the future as we may need to use the new definition
method to define static metrics (without the generic bit set). But for
now it isn't the case, as this new definition method was implemented
for generic
metrics support in the first place. If we want to define static metrics
using the API, we could add a new set of helpers for instance.
BUG/MINOR: mux-h2: Reset streams with NO_ERROR code if full response was already sent
On the frontend side, when a stream is shut while the response was
already fully sent, it was cancelled by sending a RST_STREAM(CANCEL)
frame. However, this is not accurate. The CANCEL error code must only be
used if the response headers were sent, but not the full response. As
stated in RFC 9113, when the response was fully sent, to stop the
request sending, a RST_STREAM with an error code of NO_ERROR must be
sent.
This patch should solve the issue #1219. It must be backported to all stable
versions.
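The decision boils down to a single test (sketch; the flag name is an
assumption):

    /* per RFC 9113: once the full response was sent, stop the request
     * with NO_ERROR; CANCEL is only correct before the response is done */
    int err = (h2s->flags & H2_SF_RESPONSE_DONE) ?
              H2_ERR_NO_ERROR : H2_ERR_CANCEL;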
The ckch_store_load_files() function makes specific processing for
PARSE_TYPE_STR as if it were a type only used for paths.
This patch changes the way it's done a little bit: PARSE_TYPE_STR is
only meant to strdup() a string and store the resulting pointer in the
ckch_conf structure.
Any processing regarding the path is now done in the callback.
Since the callbacks were basically doing the same thing, they were
transformed into the DECLARE_CKCH_CONF_LOAD() macros, which allow
some templating of these functions.
The resulting ckch_conf_load_* functions will do the same as before,
except they will also do the path processing instead of letting
ckch_store_load_files() do it, which means we don't need the "base"
member anymore in the struct ckch_conf_kws.
MINOR: tools: path_base() concatenates a path with a base path
With the SSL configuration, crt-base and key-base are often used; these
keywords concatenate the base path with the path when the path does not
start with '/'.
This is done in several places in the code, so a function doing this
helps standardize the code.
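Its expected behavior can be sketched as follows (simplified
illustration, not the exact prototype):

    #include <stdio.h>

    /* concatenate <base> and <path> unless <path> is already absolute,
     * mirroring what crt-base/key-base do */
    static int path_base_sketch(char *dst, size_t len,
                                const char *path, const char *base)
    {
        if (*path == '/' || !base)
            return snprintf(dst, len, "%s", path);
        return snprintf(dst, len, "%s/%s", base, path);
    }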
BUG/MEDIUM: hlua/cli: fix cli applet UAF in hlua_applet_wakeup()
Recent commit e5e36ce09 ("BUG/MEDIUM: hlua/cli: Fix lua CLI commands
to work with applet's buffers") revealed a bug in hlua cli applet
handling.
Indeed, playing with Willy's lua tetris script on the cli, a segfault
would be encountered when forcefully closing the session by sending a
CTRL+C on the terminal.
In fact the crash was caused by a UAF: while the cli applet was already
freed, the lua task responsible for waking it up would still point to it.
Thus hlua_applet_wakeup() could be called even if the applet didn't exist
anymore.
To fix the issue, in hlua_cli_io_release_fct() we must also free the hlua
task linked to the applet, like we already do for
hlua_applet_tcp_release() and hlua_applet_http_release().
While this bug exists in stable versions (where the fix should be
backported too as a precaution), it only seems to be triggered starting
with 3.0.
BUG/MINOR: limits: compute_ideal_maxconn: don't cap remain if fd_hard_limit=0
'global.fd_hard_limit' stays uninitialized if haproxy is started with
-m (global.rlimit_memmax). 'remain' is the MAX between the soft and hard
process fd limits. It will always be bigger than 'global.fd_hard_limit'
(0) in this case.
So, if we unconditionally reassign 'remain' to 'global.fd_hard_limit',
the 'maxconn' calculated from it may even be negative, and the
DEFAULT_MAXCONN (100) will be set as the 'ideal_maxconn'.
During the 'global.maxconn' calculations in set_global_maxconn(), if the
provided 'global.rlimit_memmax' is quite big, the system checks will
refuse the 'global.maxconn' calculated from it and we will fall back to
the 'ideal_maxconn', which is 100.
The same problem exists for configs with SSL frontends and backends.
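The fix amounts to making the cap conditional; schematically:

    /* only cap by fd_hard_limit when it was actually set: a zero limit
     * means "no user-provided limit", not "limit to zero" */
    if (global.fd_hard_limit && remain > global.fd_hard_limit)
        remain = global.fd_hard_limit;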
Willy Tarreau [Tue, 18 Mar 2025 10:41:51 +0000 (11:41 +0100)]
MINOR: cli/server: don't take thread isolation to check for srv-removable
Thanks to the previous commits, we now know that "wait srv-removable"
does not require thread isolation, as long as 3372a2ea00 ("BUG/MEDIUM:
queues: Stricly respect maxconn for outgoing connections") and c880c32b16
("MINOR: stream: decrement srv->served after detaching from the list")
are present. Let's just get rid of thread_isolate() here, which can
consume a lot of CPU on highly threaded machines when removing many
servers at once.
Willy Tarreau [Tue, 18 Mar 2025 10:38:56 +0000 (11:38 +0100)]
CLEANUP: server: make it clear that srv_check_for_deletion() is thread-safe
This function was marked as requiring thread isolation because its code
was extracted from cli_parse_delete_server() and was running under
isolation. But upon closer inspection, and using atomic loads to check
a few counters, it is actually safe to run without isolation, so let's
reflect that in its description.
However, it remains true that cli_parse_delete_server() continues to call
it under isolation.
Willy Tarreau [Tue, 18 Mar 2025 10:30:37 +0000 (11:30 +0100)]
MINOR: server: simplify srv_has_streams()
Now that thanks to commit c880c32b16 ("MINOR: stream: decrement
srv->served after detaching from the list") we can trust srv->served,
let's use it and no longer loop on threads when checking if a server
still has streams attached to it. This will be much cheaper and will
result in keeping isolation for a shorter time in the "wait" command.
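With a trustworthy srv->served, the check collapses to a single atomic
load (sketch):

    /* a server with no streams attached necessarily has ->served == 0,
     * so there is no need to walk the per-thread stream lists anymore */
    static inline int srv_has_streams(struct server *srv)
    {
        return HA_ATOMIC_LOAD(&srv->served) != 0;
    }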
BUG/MINOR: hlua: fix optional timeout argument index for AppletTCP:receive()
Baptiste reported that when using the new optional timeout argument
introduced in 19e48f2 ("MINOR: hlua: add an optional timeout to
AppletTCP:receive()"), the following error would occur at some point:
runtime error: file.lua:lineno: bad argument #-2 to 'receive' (number
expected, got light userdata) from [C]: in method 'receive...
In fact this is caused by exp_date being retrieved using relative index
-1 instead of absolute index 3. Indeed, while using a relative index is
fine most of the time when we trust the stack, when combined with
yielding, the top of the stack when resuming is not necessarily the same
as when the function was first called (i.e. if some data was pushed onto
the stack in the yieldable function itself). As such, it is safer to use
an explicit index to access the exp_date variable at position 3 on the
stack.
It was confirmed that doing so addresses the issue.
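In Lua C API terms, the one-line difference is (illustrative):

    /* relative indexes count from the current top, which may move across
     * yields; absolute index 3 always designates the third argument */
    exp_date = lua_tonumber(L, 3);   /* was: lua_tonumber(L, -1) */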
Willy Tarreau [Tue, 18 Mar 2025 10:24:56 +0000 (11:24 +0100)]
MINOR: stream: decrement srv->served after detaching from the list
In commit 3372a2ea00 ("BUG/MEDIUM: queues: Stricly respect maxconn for
outgoing connections"), it has been ensured that srv->served is held
as long as possible around the periods where a stream is attached to a
server. However, it's decremented early when entering sess_change_server,
and actually just before detaching from that server's list. While there
is theoretically nothing wrong with this, it prevents us from looking at
this counter to know if streams are still using a server or not.
We could imagine decrementing it much later but that wouldn't work with
leastconn, since that algo needs ->served to be final before calling
lbprm.server_drop_conn(). Thus what we're doing here is to detach from
the server, then decrement ->served, and only then call the LB callback
to update the server's position in the tree. At this moment the stream
doesn't know the server anymore anyway (except via this function's
local variable) so it's safe to consider that no stream knows the server
once the variable reaches zero.
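The resulting sequence in sess_change_server() is, schematically (a
sketch of the order described above, not the literal code):

    /* 1. detach the stream from the server's list */
    stream_detach_from_srv(strm, srv);   /* hypothetical helper name */

    /* 2. only then release the slot: once ->served reaches zero, no
     *    stream can still be using this server */
    _HA_ATOMIC_DEC(&srv->served);

    /* 3. finally let the LB algorithm reposition the server, with
     *    ->served already final (required by leastconn) */
    if (srv->proxy->lbprm.server_drop_conn)
        srv->proxy->lbprm.server_drop_conn(srv);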
BUG/MINOR: log: prevent saddr NULL deref in syslog_io_handler()
In ad0133cc ("MINOR: log: handle log-forward "option host""), we
de-reference saddr without first checking if saddr is NULL. In practice
saddr shouldn't be NULL, but it may be the case if a memory error
happens in the tcp syslog handler, so we must assume that it can be NULL
at some point.
To fix the bug, we simply check for NULL before de-referencing it
under syslog_io_handler(), as the function comment suggests.
Willy Tarreau [Mon, 17 Mar 2025 15:19:34 +0000 (16:19 +0100)]
MINOR: hlua: add an optional timeout to AppletTCP:receive()
TCP services might want to be interactive, and without a timeout on
receive(), the possibilities are a bit limited. Let's add an optional
timeout in the 3rd argument to possibly limit the wait time. In this
case if the timeout strikes before the requested size is complete,
a possibly incomplete block will be returned.
MINOR: cpu-topo: fix unused stack var 'cpu2' reported by coverity
Coverity has reported that cpu2 seems sometimes unused in
cpu_fixup_topology():
*** CID 1593776: Code maintainability issues (UNUSED_VALUE)
/src/cpu_topo.c: 690 in cpu_fixup_topology()
684 continue;
685
686 if (ha_cpu_topo[cpu].cl_gid != curr_id) {
687 if (curr_id >= 0 && cl_cpu <= 2)
688 small_cl++;
689 cl_cpu = 0;
>>> CID 1593776: Code maintainability issues (UNUSED_VALUE)
>>> Assigning value from "cpu" to "cpu2" here, but that stored value is overwritten before it can be used.
690 cpu2 = cpu;
691 curr_id = ha_cpu_topo[cpu].cl_gid;
692 }
693 cl_cpu++;
694 }
695
That's it: the 'cpu2' automatic/stack variable is used only in for()
loop scopes to save the CPU IDs we are interested in. In the loop
pointed out by coverity, this variable is not used for further
processing within the loop's scope, and it is always reinitialized in
the following loops.
Willy Tarreau [Fri, 14 Mar 2025 17:16:52 +0000 (18:16 +0100)]
MINOR: cpu-topo: add a new "resource" cpu-policy
This cpu policy keeps the smallest CPU cluster. This can
be used to limit the resource usage to the strict minimum
that still delivers decent performance, for example to
try to further reduce power consumption or minimize the
number of cores needed on some rented systems for a
sidecar setup, in order to scale the system down more
easily. Note that if a single cluster is present, it
will still be fully used.
When started on a 64-core EPYC gen3, it uses only one CCX
with 8 cores and 16 threads, all in the same group.
Willy Tarreau [Fri, 14 Mar 2025 16:58:27 +0000 (17:58 +0100)]
MINOR: cpu-topo: add a new "efficiency" cpu-policy
This cpu policy tries to evict performant core clusters and only
focuses on efficiency-oriented ones. On an intel i9-14900k, we can
get 525k rps using 8 performance cores, versus 405k when using all
24 efficiency cores. In some cases the power savings might be more
desirable (e.g. scalability tests on a developer's laptop), or the
performance cores might be better suited for another component
(application or security component).
Willy Tarreau [Fri, 14 Mar 2025 14:09:07 +0000 (15:09 +0100)]
MINOR: cpu-topo: add a new "performance" cpu-policy
This cpu policy tries to evict efficient core clusters and only
focuses on performance-oriented ones. On an intel i9-14900k, we can
get 525k rps using only 8 cores this way, versus 594k when using all
24 cores. The gains from using all these cores are not significant
enough to waste them on this. Also these cores can be much slower
at doing SSL handshakes so it can make sense to evict them. Better
keep the efficiency cores for network interrupts for example.
Also, on a developer's machine it can be convenient to keep all these
cores for the local tasks and extra tools (load generators etc).
Willy Tarreau [Thu, 13 Mar 2025 14:41:00 +0000 (15:41 +0100)]
MEDIUM: cpu-topo: let the "group-by-cluster" split groups
When a cluster is too large to fit into a single group, let's split it
into two equal groups, which will still be allowed to use all the CPUs
of the cluster. This allows haproxy to start all the threads with a
minimum number of groups (e.g. 2x40 for 80 cores).
This policy forms thread groups from the CPU clusters, and binds all
the threads in them to all the CPUs of the cluster. This is recommended
on systems with bad inter-CCX latencies. It was shown to simply triple
the performance with queuing on a 64-core EPYC without having to
manually assign the cores with cpu-map.
Willy Tarreau [Tue, 11 Mar 2025 16:14:03 +0000 (17:14 +0100)]
CLEANUP: thread: now remove the temporary CPU node binding code
This is now superseded by the default "safe" cpu-policy, and every time
it's used, that code was bypassed anyway since global.nbthread was set.
We can now safely remove it. Note that for other policies which do not
set a thread count nor further restrict CPUs (such as "none", or even
"safe" when finding a single node), we continue to go through the fallback
code that automatically assigns CPUs to threads and counts them.
Willy Tarreau [Tue, 11 Mar 2025 16:12:25 +0000 (17:12 +0100)]
MEDIUM: cpu-topo: use the "first-usable-node" cpu-policy by default
This now turns the cpu-policy to "first-usable-node" by default, so that
we preserve the current default behavior consisting in binding to the
first node if nothing was forced. If a second node is found,
global.nbthread is set and the previous code will be skipped.
Willy Tarreau [Thu, 6 Mar 2025 07:40:21 +0000 (08:40 +0100)]
MINOR: cpu-topo: add a 'first-usable-node' cpu policy
This is a reimplementation of the current default policy. It binds to
the first node having usable CPUs if found, and drops CPUs from the
second and next nodes.
Willy Tarreau [Mon, 3 Mar 2025 12:44:11 +0000 (13:44 +0100)]
MINOR: cpu-topo: add a CPU policy setting to the global section
We'll need to let the user decide what's best for their workload, and in
order to do this we'll have to provide tunable options. For that, we're
introducing struct ha_cpu_policy which contains a name, a description
and a function pointer. The purpose will be to use that function pointer
to choose the best CPUs to use and now to set the number of threads and
thread-groups, that will be called during the thread setup phase. The
only supported policy for now is "none" which doesn't set/touch anything
(i.e. all available CPUs are used).
Willy Tarreau [Thu, 27 Feb 2025 10:54:06 +0000 (11:54 +0100)]
MINOR: cpu-topo: add "only-cluster" and "drop-cluster" to cpu-set
These are processed after the topology is detected, and they allow to
restrict binding to or evict CPUs matching the indicated hardware
cluster number(s). It can be used to bind to only some clusters, such
as CCX or different energy efficiency cores. For this reason, here we
use the cluster's local ID (local to the node).
Willy Tarreau [Thu, 27 Feb 2025 10:49:51 +0000 (11:49 +0100)]
MINOR: cpu-topo: add "only-core" and "drop-core" to cpu-set
These are processed after the topology is detected, and they allow to
restrict binding to or evict CPUs matching the indicated hardware
core number(s). It can be used to bind to only some clusters as well
as to evict efficient cores whose number is known.
Willy Tarreau [Thu, 27 Feb 2025 09:59:10 +0000 (10:59 +0100)]
MINOR: cpu-topo: add "only-thread" and "drop-thread" to cpu-set
These are processed after the topology is detected, and they allow to
restrict binding to or evict CPUs matching the indicated hardware
thread number(s). It can be used to reserve even threads for HW IRQs
and odd threads for haproxy, for example, or to evict efficient cores
that only have thread #0.