Olivier Houchard [Wed, 15 Jan 2025 15:37:35 +0000 (16:37 +0100)]
MINOR: proxies/servers: Calculate queueslength and use it.
For both proxies and servers, properly calculate queueslength, which is
the total number of elements across their queues (as they currently only
use one queue, it is equivalent to the number of elements of that queue),
and use it instead of the queue's length.
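For illustration, a minimal sketch of the computation, using hypothetical
field names (the real per-thread group layout is introduced by the
companion patches):

    /* hypothetical layout, for illustration only */
    struct tgrp_queue {
        unsigned int length;   /* number of elements in this group's queue */
    };

    /* total number of queued elements across all thread groups: this is
     * the value "queueslength" is meant to hold */
    static unsigned int queues_total_length(const struct tgrp_queue *q, int nbtgroups)
    {
        unsigned int total = 0;
        int i;

        for (i = 0; i < nbtgroups; i++)
            total += q[i].length;
        return total;
    }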
Olivier Houchard [Wed, 15 Jan 2025 15:16:18 +0000 (16:16 +0100)]
MINOR: Add fields to the per-thread group field in struct server.
Add a per-thread group queue and associated fields to the per-thread group
field in struct server, as well as a new field, queueslength.
This is currently unused, so should change nothing.
Olivier Houchard [Wed, 15 Jan 2025 15:10:43 +0000 (16:10 +0100)]
MINOR: proxies: Add a per-thread group field to struct proxy.
Add a per-thread group field to struct proxy, which will contain a struct
queue, as well as a new field, "queueslength".
This is currently unused, so should change nothing.
Please note that proxy_init_per_thr() must now be called for each proxy
once the number of thread groups is known.
Willy Tarreau [Mon, 27 Jan 2025 14:41:26 +0000 (15:41 +0100)]
MINOR: epoll: permit to mask certain specific events
A few times in the past we've seen cases where epoll was caught reporting
a wrong event that caused trouble (e.g. spuriously reporting HUP or RDHUP
after a successful connect()). The new tune.epoll.mask-events directive
makes it possible to mask events such as ERR, HUP and RDHUP and convert
them to IN events that are processed by the regular receive path. This
should help better diagnose and troubleshoot issues such as this one, as
well as rule out such a cause when similar issues are reported:
CLEANUP: tree-wide: define and use acl_match_cond() helper
acl_match_cond() combines acl_exec_cond() + acl_pass() and a check on
condition->pol (to check whether the cond is inverted) in order to return
either 0 if the cond doesn't match or 1 if it matches (or is NULL).
Thanks to this we can simplify some redundant constructs that iterate
over rules and evaluate whether the condition matches or not.
Conditions for tcp-request inspect-content and tcp-response
inspect-content couldn't be simplified because they perform an extra
check for missing data, and thus still need to leverage acl_exec_cond().
It's best to display the patch using "-w", like "git show xxxx -w",
because some blocks had to be re-indented after the cleanup, which
makes the patch hard to review by default.
CLEANUP: ssl: move ssl_sock_gencert_load_ca declaration in ssl_gencert.h
As ssl_sock_gencert_load_ca and ssl_sock_gencert_free_ca are compiled only if
SSL_NO_GENERATE_CERTIFICATES is not defined, let's align things and move these
declarations into ssl_gencert.h.
CLEANUP: ssl: rename ssl_sock_load_ca to ssl_sock_gencert_load_ca
ssl_sock_load_ca is defined in ssl_gencert.c and compiled only if
SSL_NO_GENERATE_CERTIFICATES is not defined. Its name is a bit confusing, as
one may think at first glance that it's a generic function which is also
used to load the CA file provided via the 'ca-file' keyword
(ssl_set_verify_locations_file is used in that case).
So let's rename ssl_sock_load_ca to ssl_sock_gencert_load_ca. The same is
applied to ssl_sock_free_ca.
BUG/MINOR: ssl: put ssl_sock_load_ca under SSL_NO_GENERATE_CERTIFICATES
ssl_sock_load_ca and ssl_sock_free_ca definitions are compiled only if
SSL_NO_GENERATE_CERTIFICATES is not set. When this define is set and haproxy
is built, the linker throws an error. So, let's fix this.
Willy Tarreau [Fri, 24 Jan 2025 10:01:06 +0000 (11:01 +0100)]
[RELEASE] Released version 3.2-dev4
Released version 3.2-dev4 with the following main changes :
- BUG/MINOR: stktable: fix big-endian compatiblity in smp_to_stkey()
- MINOR: stktable: add stkey_to_smp() helper
- MINOR: stktable: add stksess_getkey() helper
- MINOR: stktable: add sc[0-2]_key fetches
- BUG/MEDIUM: queues: Adjust the proxy counters when appropriate
- MINOR: trace: add help message for -dt argument
- MINOR: trace: ensure -dt priority over traces config section
- MINOR: trace: support all source alias on -dt
- BUG/MINOR: quic: reject NEW_TOKEN frames from clients
- MINOR: stktable: fix potential build issue in smp_to_stkey
- BUG/MEDIUM: stktable: fix missing lock on some table converters
- BUG/MEDIUM: promex: Use right context pointers to dump backends extra-counters
- MINOR: stktable: fix potential build issue in smp_to_stkey (2nd try)
- MINOR: stktable: add smp_fetch_stksess() helper function
- MEDIUM: stktable: split src-based key smp_fetch_sc functions
- MEDIUM: stktable: split sc_ and src_ fetch lookup logics
- MEDIUM: stktable: leverage smp_fetch_* helpers from sample conv
- DOC: config: unify sample conv|fetches optional arguments syntax
- DOC: config: stick-table converters support implicit <table> argument
- DOC: config: stick-table converter do accept ANY-typed input
- DOC: config: clarify return type for some stick-table converters
- DOC: config: refer to canonical sticktable converters for src_* fetches
- CLEANUP: stktable: move sample_conv_table_bytes_out_rate()
- MINOR: stktable: add table_{inc,clr}_gpc* converters
- BUG/MAJOR: quic: reject too large CRYPTO frames
- BUG/MAJOR: log/sink: possible sink collision in sink_new_from_srv()
- BUG/MINOR: init: set HAPROXY_STARTUP_VERSION from the variable, not the macro
- REORG: version: move the remaining BUILD_* stuff from haproxy.c to version.c
- BUG/MINOR: quic: ensure a detached coalesced packet can't access its neighbours
- MINOR: quic: Add a BUG_ON() on quic_tx_packet refcount
- BUILD: quic: Move an ASSUME_NONNULL() for variable which is not null
- BUG/MEDIUM: mux-h1: Properly close H1C if an error is reported before sending data
- CLEANUP: quic: remove unused prototype
- MINOR: quic: rename pacing_rate cb to pacing_inter
- BUG/MINOR: quic: do not increase congestion window if app limited
- MINOR: mux-quic: increment pacing retry counter on expired
- MEDIUM: quic: implement credit based pacing
- MEDIUM: mux-quic: reduce pacing CPU usage with passive wait
- MEDIUM: quic: use dynamic credit for pacing
- MINOR: quic: remove unused pacing burst in bind_conf/quic_cc_path
- MINOR: quic: adapt credit based pacing to BBR
- MINOR: tools: add errname to print errno macro name
- MINOR: debug: debug_parse_cli_show_dev: use errname
- MINOR: debug: show boot and runtime process settings in table
MINOR: debug: debug_parse_cli_show_dev: use errname
Let's use errname, introduced in the previous commit, in the output of
"show dev". This output is destined for engineers, so there is no need to
provide the long descriptions of errnos given by strerror.
MINOR: tools: add errname to print errno macro name
Add a helper to print the name of the macro corresponding to an errno value,
for example "EINVAL" for errno=22. This may be helpful for debugging and for
use in the output of some CLI commands. The switch-case in errname() contains
only the errnos currently used in the code, so it needs to be extended if one
starts to use new syscalls.
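A minimal sketch of what such a helper looks like (illustrative only; the
actual errname() in src/tools.c covers more errnos):

    #include <errno.h>

    static const char *errname(int err)
    {
        /* only the errnos currently used in the code are listed; the switch
         * must be extended when new syscalls start being used */
        switch (err) {
        case EINVAL: return "EINVAL";
        case ENOMEM: return "ENOMEM";
        case EBADF:  return "EBADF";
        case EAGAIN: return "EAGAIN";
        default:     return "UNKNOWN";
        }
    }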
Amaury Denoyelle [Thu, 23 Jan 2025 14:24:09 +0000 (15:24 +0100)]
MINOR: quic: adapt credit based pacing to BBR
Credit-based pacing has been further refined to be able to calculate the
burst size dynamically based on congestion parameters. However, the BBR
algorithm already provides a pacing rate and a burst size (labelled
send_quantum) for 1ms of emission.
Adapt quic_pacing_reload() to use the BBR values to compute the pacing
credit. This is done via the pacing_burst callback, which is now only
defined for BBR. For other algorithms, the burst size over 1ms is
determined from the congestion window size and the RTT.
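A rough sketch of the non-BBR fallback described above (hypothetical
parameter names; cwnd and mtu in bytes, srtt in milliseconds):

    #include <stdint.h>

    /* number of packets that may be emitted in one burst to cover roughly
     * 1ms of emission with a smoothed congestion window */
    static uint64_t pacing_burst_1ms(uint64_t cwnd, uint64_t mtu, uint64_t srtt_ms)
    {
        uint64_t bytes_per_ms = cwnd / (srtt_ms ? srtt_ms : 1);
        uint64_t pkts = bytes_per_ms / (mtu ? mtu : 1500);

        return pkts ? pkts : 1;   /* always allow at least one packet */
    }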
Amaury Denoyelle [Thu, 23 Jan 2025 16:06:04 +0000 (17:06 +0100)]
MINOR: quic: remove unused pacing burst in bind_conf/quic_cc_path
The pacing burst size is now dynamic. As such, the configuration value has
been removed and the related fields in the bind_conf and quic_cc_path
structures can be safely removed.
Amaury Denoyelle [Wed, 22 Jan 2025 16:26:13 +0000 (17:26 +0100)]
MEDIUM: quic: use dynamic credit for pacing
Major improvements have been introduced in pacing recently. Most
notably, QMUX schedules emission with a millisecond resolution, which
allows the use of passive waiting and is much more CPU friendly.
However, an issue remains with the pacing max credit. Unless BBR is
used, it is fixed to the configured value from the quic-cc-algo bind
statement. This is not practical: if too low, it may drastically
reduce performance due to the 1ms sleep resolution; if too high, some
clients will suffer from too much packet loss.
This commit fixes the issue by implementing a dynamic maximum credit
value based on the network conditions specific to each client.
The calculation fixes a maximum value which should allow the current QMUX
tasklet context to emit enough data to cover the delay until the next
tasklet invocation. As such, avg_loop_us is used to detect the process
load. If it is too small, 1.5ms is used as the minimal value, to cover the
extra delay incurred by the system which will happen for a default 1ms
sleep.
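A rough sketch of the idea with hypothetical names (avg_loop_us is in
microseconds; the exact computation in the patch may differ):

    #include <stdint.h>

    /* maximum pacing credit, in packets, so that one tasklet run can cover
     * the expected delay before the next invocation; <inter_ns> is the
     * inter-packet delay reported by the congestion algorithm */
    static uint64_t pacing_max_credit(uint64_t avg_loop_us, uint64_t inter_ns)
    {
        uint64_t window_ns = avg_loop_us * 1000ULL;

        if (window_ns < 1500000ULL)     /* 1.5ms floor for the 1ms sleep */
            window_ns = 1500000ULL;
        return inter_ns ? (window_ns / inter_ns) + 1 : 1;
    }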
Amaury Denoyelle [Wed, 22 Jan 2025 16:31:10 +0000 (17:31 +0100)]
MEDIUM: mux-quic: reduce pacing CPU usage with passive wait
The pacing algorithm has been revamped in the previous commit to implement a
credit-based solution. This is a far more adaptive solution, in particular
because it allows catching up when the pause between pacing emissions was
longer than expected.
This allows QMUX to remove the active loop based on tasklet wake-up.
Instead, a new task is used when emission should be paced. The main
advantage is that CPU usage is drastically reduced.
The new pacing task timer is reset each time qcc_io_send() is invoked. The
timer is set only if the pacing engine reports that emission must be
interrupted. In this case the timer is set via qcc_wakeup_pacing() to the
delay reported by the congestion algorithm, or 1ms if the delay is too
short. At the end of qcc_io_cb(), the pacing task is queued if the timer has
been set.
Pacing task execution is simple enough: it immediately wakes up the QCC I/O
handler.
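A simplified sketch of the wakeup path, assuming haproxy's task API (the
exact code in the patch may differ):

    /* arm the pacing task for the delay reported by the congestion
     * algorithm, with a 1ms floor matching the scheduler resolution */
    static void qcc_wakeup_pacing_sketch(struct task *pacing_task, uint64_t delay_ns)
    {
        uint64_t delay_ms = delay_ns / 1000000ULL;

        pacing_task->expire = tick_add(now_ms, delay_ms ? delay_ms : 1);
        task_queue(pacing_task);
    }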
Note that to have decent performance, this requires a large enough burst to
be defined in the configuration of quic-cc-algo. However, this value is
common to every client of the listener, which may cause too much loss under
some network conditions. This will be addressed in a future patch.
Amaury Denoyelle [Fri, 10 Jan 2025 16:18:54 +0000 (17:18 +0100)]
MEDIUM: quic: implement credit based pacing
Implement a new method for QUIC pacing emission based on credit. This
represents the number of packets which can be emitted in a single burst.
After emission, the number of emitted packets is decremented from the
credit. Several emissions can be conducted in the same sequence until the
credit is completely consumed.
When a new emission sequence is initiated (i.e. under a new QMUX tasklet
invocation), the credit is refilled according to the delay which occurred
between the last and the current emission context.
The main advantage of this new mechanism is that it allows several
emissions to be conducted in the same task context without having to wait
between each invocation. A wait is only forced if pacing is expired, which
is now equivalent to having a null credit.
Furthermore, if the delay between two emission sequences is smaller than
expected, the credit is only partially refilled. This allows emission to
restart without having to wait for the whole credit to be available.
On the implementation side, a new field <credit> is available in the
quic_pacer structure. It is automatically decremented on
quic_pacing_sent_done() invocation. Also, a new function
quic_pacing_reload() must be used by QUIC MUX when a new emission
sequence is initiated to refill the credit. The <next> field from
quic_pacer has been removed.
For the moment, the credit is based on the burst configured via the
quic-cc-algo keyword, or directly reported by BBR.
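A minimal sketch of the refill logic (hypothetical names; <inter_ns> is the
inter-packet delay reported by the congestion algorithm):

    #include <stdint.h>

    /* refill <credit> (in packets) proportionally to the time elapsed since
     * the last emission sequence, without ever exceeding <max_credit> */
    static uint64_t pacing_refill(uint64_t credit, uint64_t max_credit,
                                  uint64_t elapsed_ns, uint64_t inter_ns)
    {
        uint64_t refill = inter_ns ? elapsed_ns / inter_ns : max_credit;

        credit += refill;
        return credit > max_credit ? max_credit : credit;
    }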
Amaury Denoyelle [Fri, 10 Jan 2025 16:20:55 +0000 (17:20 +0100)]
MINOR: mux-quic: increment pacing retry counter on expired
A field <paced_sent_ctr> in the quic_pacer structure is used to report the
number of occurrences where emission has been interrupted due to pacing.
However, it was not incremented when QUIC MUX had to pause emission
immediately because pacing was still not yet expired.
Fix this by incrementing <paced_sent_ctr> in qcc_io_send() prior to
emission if pacing is expired. Note that the incrementation is only done
once if the tasklet is then repeatedly woken up until the timer is
expired.
BUG/MINOR: quic: do not increase congestion window if app limited
Previously, the congestion window was increased each time a new
acknowledgement was received. However, it did not take into account the
window filling level. Under network conditions with negligible loss, this
causes the window to be incremented up to the maximum value (by default
480k), even though the application does not have enough data to fill it.
In most cases, this issue is not noticeable. However, it may lead to
excessive memory consumption when a QUIC connection is suddenly
interrupted, as in this case haproxy will fill the window with
retransmissions. It even caused OOM crashes when thousands of clients
were interrupted at once in a local network benchmark.
Fix this by first checking the window level prior to every incrementation,
via a new helper function quic_cwnd_may_increase(). It was arbitrarily
decided that the window must be at least 50% full when the ACK is
handled before incrementing it. This value is a good compromise to keep the
window in check while still allowing fast increments when needed.
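A minimal sketch of the check (simplified, operating on plain integers
rather than on haproxy's internal structures):

    #include <stdint.h>

    /* only allow the window to grow if it is at least 50% full when the
     * ACK is handled */
    static int quic_cwnd_may_increase_sketch(uint64_t cwnd, uint64_t in_flight)
    {
        return 2 * in_flight >= cwnd;
    }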
Note that this patch only concerns the cubic and newreno algorithms. BBR
already has its own notion of application-limited, which ensures the window
is only incremented when necessary.
Amaury Denoyelle [Thu, 23 Jan 2025 09:28:57 +0000 (10:28 +0100)]
MINOR: quic: rename pacing_rate cb to pacing_inter
Rename one of the congestion algorithms' pacing callbacks from pacing_rate
to pacing_inter. This better reflects that this function returns a delay
(in nanoseconds) which should be applied between each packet emission to
fill the congestion window with a perfectly smoothed emission.
BUG/MEDIUM: mux-h1: Properly close H1C if an error is reported before sending data
It is possible to have front H1 connections waiting for the client timeout
while they should be closed because a connection error was reported before
sending an error message to the client. It is not a leak because the
connections are closed when the timeout expires, but it is a waste of
resources, especially if the client timeout is high.
When an early error message must be sent to the client, if an error was
already detected, no data are sent and the output buffer is released. At
this stage, the H1 connection is in CLOSING state and it must be
released. But because of a bug, this is not performed. The client timeout is
rearmed and the H1 connection is only closed when it expires.
To fix the issue, the condition to close an H1C must also be evaluated when
an error is detected before sending data.
It is only an issue with idle client connections, because there is no H1
stream in that case and the error message is generated by the mux itself.
BUILD: quic: Move an ASSUME_NONNULL() for variable which is not null
Some new compilers warn that the <oldest_lost> variable can be null even
though this cannot be the case, as mentioned by the comment about an already
present ASSUME_NONNULL() call, as follows:
src/quic_loss.c: In function ‘qc_release_lost_pkts’:
src/quic_loss.c:307:86: error: potential null pointer dereference [-Werror=null-dereference]
307 | unsigned int period = newest_lost->time_sent_ms - oldest_lost->time_sent_ms;
| ~~~~~~~~~~~^~~~~~~~~~~~~~
Move up this ASSUME_NONNULL() statement to please these compilers.
Must be backported as far as 2.6 to ease any further backports around this
code part.
MINOR: quic: Add a BUG_ON() on quic_tx_packet refcount
It is definitely a bug to call quic_tx_packet_refdec() to decrement the
reference counter of a TX packet, and possibly to release its memory, when
this counter is negative or null.
This counter is incremented when a TX frm with some allocated memory is
attached to the packet and when the packet is inserted into a data
structure, if needed (list or tree).
Should be easily backported as far as 2.6 to ease any further backports
around this code part.
BUG/MINOR: quic: ensure a detached coalesced packet can't access its neighbours
Reset the ->prev and ->next fields of a coalesced TX packet to ensure it
cannot access its neighbours after it is supposed to have been detached from
them by calling quic_tx_packet_dgram_detach().
There are two cases where a packet can be coalesced with a previously built
one: when it is built into the same datagram without GSO (and flagged with
QUIC_FL_TX_PACKET_COALESCED), or when it is sent from the same sendto()
syscall with GSO (not flagged with QUIC_FL_TX_PACKET_COALESCED).
Willy Tarreau [Mon, 20 Jan 2025 16:49:55 +0000 (17:49 +0100)]
REORG: version: move the remaining BUILD_* stuff from haproxy.c to version.c
version.c tries to centralize all variables conveying version information,
but there's still an issue with the BUILD_* variables which are only
passed to haproxy.o and are only updated when that one is rebuilt. This
is not very logical given that we can end up with values there which
contradict info from version.c.
Better move all of these to version.c which is systematically rebuilt.
Most of these variables only end up as string concatenations at the
moment. Some of them are even duplicated. In version.c we now have one
variable (or constant) for each of them and haproxy.c references them
in messages. This is much more logical and easier to maintain in a
consistent state.
The patch looks a bit large but it really only moves the ifdefed string
assignments from one file to another, placing them into variables.
Willy Tarreau [Mon, 20 Jan 2025 16:21:19 +0000 (17:21 +0100)]
BUG/MINOR: init: set HAPROXY_STARTUP_VERSION from the variable, not the macro
This environment variable was added by commit d4c0be6b20 ("MINOR: startup:
HAPROXY_STARTUP_VERSION contains the version used to start"). However, it's
set from the macro that is passed during the build process instead of being
set from the variable that's kept up to date in version.c. The difference
is visible only during debugging/bisecting because only changed files and
version.o are rebuilt, but not necessarily haproxy.o, which is where the
environment variable is set. This means that the version exposed in the
environment is not necessarily the same as the one presented in
"haproxy -v" during such debugging sessions.
This should be backported to 2.8. It has no impact at all on regularly
built binaries.
BUG/MAJOR: log/sink: possible sink collision in sink_new_from_srv()
sink_new_from_srv() leverages sink_new_buf() with the server id as name;
sink_new_buf() then calls __sink_new() with the provided name.
Unfortunately sink_new() is designed in such a way that it will first look
up the list of existing sinks to check if a sink already exists with the
given name, in which case the existing sink is returned. While this
behavior may be error-prone, it is actually up to the caller to ensure
that the provided name is unique if it really expects a unique sink
pointer.
Due to this bug in sink_new_from_srv(), multiple tcp servers with the same
name defined in distinct log backends would end up sharing the same sink,
which means messages sent to one of the servers would also be forwarded to
all servers with the same name across all log backend sections defined in
the config, which is obviously an issue and could even raise security
concerns.
Example:
  defaults
    log backend@log-1 local0

  backend log-1
    mode log
    server s1 127.0.0.1:514

  backend log-2
    mode log
    server s1 127.0.0.1:5114
With the above config, logs sent to log-1/s1 would also end up being sent
to log-2/s1 due to server id "s1" being used for tcp servers in distinct
log backends.
To fix the issue, we now prefix the sink name with the backend name: the
back_name/srv_id combination is known to be unique (the backend name
serves as a namespace).
This bug was reported by GH user @landon-lengyel under #2846.
UDP servers (with the udp@ prefix before the address) are not affected as
they don't make use of the sink facility.
As a workaround, one should manually ensure that all tcp servers across
different log backends (backends with "mode log" enabled) use unique names.
This bug was introduced in e58a9b4 ("MINOR: sink: add sink_new_from_srv()
function") thus it exists since the introduction of log backends in 2.9,
which means this patch should be backported up to 2.9.
Amaury Denoyelle [Mon, 20 Jan 2025 09:15:12 +0000 (10:15 +0100)]
BUG/MAJOR: quic: reject too large CRYPTO frames
Received CRYPTO frames are inserted into an ncbuf to handle out-of-order
reception via ncb_add(). They are stored at the position relative to the
frame offset, minus a base offset which corresponds to the in-order data
length already handled.
Previously, no check was implemented on the frame offset value prior to
ncb_add(), which could easily trigger a crash if the relative offset was too
large. Fix this by first ensuring that the frame can be stored in the
buffer before the ncb_add() invocation. If this is not the case, the
connection is closed with error CRYPTO_BUFFER_EXCEEDED, as required by the
QUIC specification.
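A simplified sketch of the added check (not the exact haproxy code;
<base_off> is the in-order data length already delivered):

    #include <stddef.h>
    #include <stdint.h>

    /* return non-zero if a CRYPTO frame at <offset>/<len> fits in a
     * reassembly buffer of <bufsize> bytes whose origin is <base_off>; a
     * zero return must lead to closing the connection with the
     * CRYPTO_BUFFER_EXCEEDED error */
    static int crypto_frm_fits(uint64_t offset, uint64_t len,
                               uint64_t base_off, size_t bufsize)
    {
        return offset + len <= base_off + bufsize;
    }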
As discussed in GH #2423, there are some cases where src_{inc,clr}_gpc*
is not sufficient because we need to perform the lookup on a specific
key. Indeed, just like we did in e642916 ("MEDIUM: stktable: leverage
smp_fetch_* helpers from sample conv"), we can easily implement new
table converters based on existing fetches. This is what we do in
this patch.
Also the doc was updated so that src_{inc,clr}_gpc* fetches now point to
their generic equivalent table_{inc,clr}_gpc*. Indeed, src_{inc,clr}_gpc*
are simply aliases.
sample_conv_table_bytes_out_rate() was defined in the middle of other
stick-table sample convs without any ordering logic. Let's put it
where it belongs, right after sample_conv_table_bytes_in_rate().
DOC: config: refer to canonical sticktable converters for src_* fetches
When available, to prevent doc duplication, let's make src_* fetches
point to equivalent table_* converters, as they are in fact aliases
for src,table_* converters.
DOC: config: clarify return type for some stick-table converters
Some stick-table converters such as "table_gpt" erroneously suggest that
the returned type is a boolean while in fact it is integer type, as
properly documented for the sample fetch equivalents.
DOC: config: stick-table converter do accept ANY-typed input
Since 2d17db58 ("MINOR: stick-table: change all stick-table converters'
inputs to SMP_T_ANY"), all stick-table converters accept ANY input
type as parameter, this means that it does no longer restrict the key as
a string representation of the input. However the doc wasn't updated when
the change was made. Moreover, some converters document the updated behavior
while others don't, which is kind of confusing, let's fix that.
The most common way (and, it seems, the proper way) to declare optional
arguments in a sample fetch or converter's prototype is to declare
them between square brackets, including the leading comma (because the
comma should be omitted if the argument is not provided). Also, when
multiple optional arguments are found, we should apply the same logic
recursively.
In this patch we fix prototypes that include optional arguments and don't
follow this syntax. This improves readability and sets the norm for
upcoming sample fetches/converters.
MEDIUM: stktable: leverage smp_fetch_* helpers from sample conv
In this patch we try to prevent code duplication: some fetches and sample
converters do the exact same thing, except that the converter takes the
argument as input data. Until now, both the converter and the fetch
had their own (copy-pasted) implementation, with the fetch-specific or
converter-specific lookup part.
Thanks to previous commits, we now have generic sample fetch helpers
that take the stkctr as argument, so let's leverage them directly
from the converter functions when available. This removes a lot of code
duplication and should make code maintenance easier in the future.
MEDIUM: stktable: split sc_ and src_ fetch lookup logics
While this patch adds more insertions than deletions, it actually tries to
simplify the lookup logic for sc_ and src_ stick-table fetches.
Indeed, the smp_create_src_stkctr() and smp_fetch_sc_stkctr() combination
was used everywhere the fetch supports the sc_ and src_ forms, and
smp_fetch_sc_stkctr() even integrated some of the src-oriented fetch logic.
Not only was this confusing, but it made the task of adding new generic
fetches even more complex.
Thus in this patch we completely dedicate smp_fetch_sc_stkctr() to sc_-
oriented fetches, while smp_create_src_stkctr() is now renamed to
smp_fetch_src_stkctr() and can now work on its own for src_-oriented
fetches. It takes an additional parameter, "create", to tell the function
whether the entry should be created if it doesn't exist yet.
Now it's up to the calling function to know if it should be using the
sc_ oriented fetch or the src_ oriented one based on the input keyword.
In this patch we split several sample fetch functions that are leveraged
by the "src-" fetches, such as smp_fetch_sc_inc_gpc().
Indeed, for all of them, we add an intermediate helper function that takes
a stkctr pointer as parameter and performs the logic, leaving the lookup
part in the calling function. Before this patch the existing functions were
doing the lookup plus the fetch logic. Thanks to this patch it will become
easier to add generic converters taking a lookup key as input.
Please note that this patch doesn't render well using "git show" or
"git diff". For all the functions listed above, a new helper function was
defined right above it, with the same name without "_sc". These new
functions perform the fetch part, while the original ones (with "_sc")
now simply perform the lookup and then leverage the corresponding fetch
helper.
MINOR: stktable: add smp_fetch_stksess() helper function
smp_fetch_stksess(table, smp, create) performs a lookup in <table> by
using <smp> as a key. It returns the matching entry on success and NULL on
failure. <create> can be set to 1 to force the entry creation.
We then use this helper everywhere relevant to prevent code duplication.
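A minimal sketch of the helper, assuming the existing stick-table API and
simplified compared to the real implementation:

    /* lookup <smp> as a key in <t>, optionally creating the entry */
    static struct stksess *smp_fetch_stksess_sketch(struct stktable *t,
                                                    struct sample *smp, int create)
    {
        struct stktable_key *key = smp_to_stkey(smp, t);

        if (!key)
            return NULL;
        return create ? stktable_get_entry(t, key) : stktable_lookup_key(t, key);
    }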
MINOR: stktable: fix potential build issue in smp_to_stkey (2nd try)
As discussed in GH #2838, the previous fix f399dbf
("MINOR: stktable: fix potential build issue in smp_to_stkey") which
attempted to remove the conversion ambiguity and prevent a build warning
proved to be insufficient.
This time, we implement Willy's suggestion, which is to use a union to
perform the conversion.
Hopefully this should fix GH #2838. If that's the case (and only in that
case), then this patch may be backported with f399dbf (else the patch
won't apply) anywhere b59d1fd ("BUG/MINOR: stktable: fix big-endian
compatiblity in smp_to_stkey()") was backported.
BUG/MEDIUM: promex: Use right context pointers to dump backends extra-counters
When backend extra counters are dumped, the wrong pointer was used in the
promex context to retrieve the stats module. p[1] must be used instead of
p[2]. Because of this typo, an infinite loop could be experienced if the
output buffer is full during this stage. But in all cases an overflow is
possible, leading to memory corruption.
This patch may be related to issue #2831. It must be backported as far as
3.0.
BUG/MEDIUM: stktable: fix missing lock on some table converters
In 819fc6f563
("MEDIUM: threads/stick-tables: handle multithreads on stick tables"),
sample fetch and action functions were properly guarded with stksess
read/write locks for read and write operations respectively, but the
sample_conv_table functions leveraged by "table_*" converters were
overlooked.
This bug was not known to cause issues in existing deployments yet (at
least it was not reported), but due to its nature it can theoretically lead
to inconsistent values being reported by "table_*" converters if the value
is being updated by another thread in parallel.
It should be backported to all stable versions.
[ada: for versions < 3.0, glitch_cnt and glitch_rate samples should be
ignored as they first appeared in 3.0]
MINOR: stktable: fix potential build issue in smp_to_stkey
smp_to_stkey() uses an ambiguous cast from a 64-bit integer to a 32-bit
unsigned integer. While it is intended, let's make the cast less
ambiguous by explicitly casting the right-hand part of the assignment to
the proper type.
The command line argument -dt can be used to activate traces during startup.
Via its optional argument, it is possible to change the settings for a
particular trace source. It is also possible to update every registered
source by specifying an empty name.
Support the trace source alias "all". This is an alternative to the
empty name to update every source.
MINOR: trace: ensure -dt priority over traces config section
Traces can be activated on startup either via the -dt command line argument
or via the traces configuration section. This can cause confusion, as it
may not be clear whether a trace source can be completed or overridden by
one or the other.
Fix the precedence to give priority to the command line argument.
Now, each trace source configured via -dt is first reset to a default
state before applying the new settings. Then, it is impossible to change a
trace source via the configuration file if it was already targeted via the
-dt argument.
Traces can be activated on startup via the -dt command line argument. To
facilitate its usage, display a usage description and examples when
"help" is specified.
BUG/MEDIUM: queues: Adjust the proxy counters when appropriate
In process_srv_queue(), if we manage to successfully run an extra task,
don't forget to adjust the proxy's totpend and served counters accordingly.
Having an inaccurate served counter could lead to various subtle bugs, as
it is used when making load balancing decisions.
As discussed in GH #1750, we were lacking a sample fetch to be able to
retrieve the key from the currently tracked counter entry. To do so, the
sc_key fetch can now be used. It returns a sample with the correct type
(the table key type) corresponding to the tracked counter entry (from
previous track-sc rules).
If no entry is currently tracked, it returns nothing.
It can be used using the standard form "sc_key(<sc_number>)" or the legacy
forms: "sc0_key", "sc1_key", "sc2_key".
stksess_getkey(t, ts) returns a stktable_key struct pointer filled with
data from the input <ts> entry in the <t> table. The returned pointer uses
the static_table_key variable. Indeed, the stktable_key struct is more
convenient to manipulate than having to deal with key extraction from the
stksess struct directly.
BUG/MINOR: stktable: fix big-endian compatiblity in smp_to_stkey()
When smp_to_stkey() deals with SINT samples, since stick-tables deal with
32-bit integers while an SINT sample is a 64-bit integer, an in-place
conversion was done in smp_to_stkey(): the 64-bit integer was truncated
before the key would point to it. Unfortunately this only works on
little-endian architectures, because on big-endian ones the key would point
to the wrong 32-bit range.
To fix the issue and make the conversion endian-proof, let's re-assign
the sample as a 32-bit integer before the key points to it.
Thanks to Willy for having spotted the bug and suggested the above fix.
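A small standalone illustration of the pitfall (not the actual haproxy
code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t v64 = 0x00000000deadbeefULL;
        uint32_t by_value = (uint32_t)v64;      /* 0xdeadbeef on any endianness */
        uint32_t by_alias = *(uint32_t *)&v64;  /* 0xdeadbeef on little endian,
                                                 * 0x00000000 on big endian:
                                                 * this was the bug */

        printf("%x %x\n", by_value, by_alias);
        return 0;
    }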
Willy Tarreau [Thu, 9 Jan 2025 08:21:04 +0000 (09:21 +0100)]
[RELEASE] Released version 3.2-dev3
Released version 3.2-dev3 with the following main changes :
- DOC: config: add missing "track-sc0" in action keywords matrix
- BUG/MINOR: stktable: invalid use of stkctr_set_entry() with mixed table types
- BUG/MAJOR: mux-quic: fix BUG_ON on empty STREAM emission
- BUG/MEDIUM: mux-h2: Count copied data when looping on RX bufs in h2_rcv_buf()
- Revert "BUG/MAJOR: mux-quic: fix BUG_ON on empty STREAM emission"
- BUG/MAJOR: mux-quic: properly fix BUG_ON on empty STREAM emission
- MINOR: mux-quic: add traces on sd attach
- BUG/MEDIUM: mux-quic: do not attach on already closed stream
- BUG/MINOR: compression: handle a possible strdup() failure
- BUG/MINOR: pool: handle a possible strdup() failure
- BUG/MINOR: cfgparse-tcp: handle a possible strdup() failure
- BUG/MINOR: log: Allow to use if/unless conditionnals for do-log action
- MINOR: config: Alert about extra arguments for errorfile and errorloc
- BUG/MINOR: mux-quic: fix wakeup on qcc_set_error()
- MINOR: mux-quic: change return value of qcs_attach_sc()
- BUG/MINOR: mux-quic: handle closure of uni-stream
- BUG/MEDIUM: promex/resolvers: Don't dump metrics if no nameserver is defined
- BUG/MAJOR: ssl/ocsp: fix NULL conn object dereferencing to access QUIC TLS counters
- MEDIUM: errors: get rid of shm_open()
- BUILD: makefile: do not clean standalone binaries on a simple "make clean"
- BUILD: makefile: add a qinfo macro to pass info in quiet mode
- DEV: ncpu: add a simple utility to help with NUMA development
- DEV: ncpu: implement a wrapper mode
- DEV: ncpu: make the wrapper work both as a lib and executable
- BUG/MEDIUM: h1-htx: Properly handle bodyless messages
- MINOR: tools: add a few functions to simply check for a file's existence
Willy Tarreau [Mon, 6 Jan 2025 18:34:41 +0000 (19:34 +0100)]
MINOR: tools: add a few functions to simply check for a file's existence
In many places we'd like to be able to simply construct a path from a
format string and check if that path corresponds to an existing file,
directory etc. Here we add 3 functions: a generic one to test that a
path corresponds to a given file mode (e.g. S_IFDIR, S_IFREG etc), and
two other ones specifically checking for a file or a dir, for easier
use.
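A minimal sketch of the generic helper, with hypothetical names (the actual
function names are not quoted in this message):

    #include <stdarg.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* build a path from a printf-like format string and return non-zero if
     * it exists with the expected type (<mode> is S_IFREG, S_IFDIR, ...) */
    static int path_has_mode(mode_t mode, const char *fmt, ...)
    {
        char path[4096];
        struct stat st;
        va_list args;

        va_start(args, fmt);
        vsnprintf(path, sizeof(path), fmt, args);
        va_end(args);

        return stat(path, &st) == 0 && (st.st_mode & S_IFMT) == mode;
    }

    /* the two convenience wrappers would simply pass S_IFREG or S_IFDIR */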
During H1 parsing, there are some post-parsing checks to detect bodyless
messages and switch the parsing to the DONE state. However, a case was not
properly handled: responses to HEAD requests with a "transfer-encoding"
header. The response parser remained blocked waiting for the response body.
To fix the issue, the post-parsing was slightly modified. Instead of trying
to handle bodyless messages in a common way between the request and the
response, it is now performed in the dedicated post-parsing functions. It is
easier to enumerate all cases, especially because there is already a test
for responses to HEAD requests.
This patch should fix the issue #2836. It must be backported as far as 2.9.
Trying to preload such an executable indeed returns:
ERROR: ld.so: object '/path/to/ncpu.so' from LD_PRELOAD cannot be preloaded (cannot dynamically load position-independent executable): ignored.
Note that the code still supports it since libc.so is both an executable
and a lib. The approach taken here is the same as in the nousr.so wrapper.
It consists in dropping the DF_1_PIE flag from the resulting executable
since it's what the dynamic linker is looking for. This flag is found in
FLAGS_1 in the .dynamic section. As readelf -a suggests, it's after the
tag 0x6ffffffb. The value is 0x08000000. We're using objdump to figure out
the length and offset of the struct, dd to extract the 3 parts, and sed to
patch the binary.
It's likely that it will only work on 64-bit little endian, though tests
should be performed to see what to do on other platforms. At least on
x86_64, ld.so is happy and it continues to be possible to use the binary
as a .so, and that's the platform where most of the development happens, so
that's fine.
In any case the wrapper and the standard shared lib are still made as two
distinct files so that it's possible to use the non-patched version on
unsupported OSes or architectures.
Willy Tarreau [Wed, 8 Jan 2025 08:04:09 +0000 (09:04 +0100)]
DEV: ncpu: implement a wrapper mode
The wrapper mode allows the library to present itself via LD_PRELOAD before
loading haproxy, which is often more convenient since it allows passing the
number of CPUs as an argument. However, this mode is no longer supported by
modern glibcs, so a future patch will come to implement a trick that was
tested to work at least on x86.
Willy Tarreau [Wed, 8 Jan 2025 07:48:16 +0000 (08:48 +0100)]
DEV: ncpu: add a simple utility to help with NUMA development
Collecting captures of /sys isn't sufficient for NUMA development because
haproxy detects the number of CPUs at boot time and will not be able to
inspect more than this number. Let's just have a small utility to report
a fake number of CPUs, that will be loaded using LD_PRELOAD. It checks
the NCPU variable if it exists and will present this number of CPUs, or
if it does not exist, will expose the maximum supported number.
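A simplified sketch of how such an LD_PRELOAD interposer can work (the real
ncpu utility likely intercepts more entry points; this only covers
sysconf()):

    /* build with: cc -shared -fPIC -o ncpu_sketch.so ncpu_sketch.c -ldl */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdlib.h>
    #include <unistd.h>

    long sysconf(int name)
    {
        static long (*real_sysconf)(int);
        const char *e;

        if (!real_sysconf)
            real_sysconf = dlsym(RTLD_NEXT, "sysconf");

        if ((name == _SC_NPROCESSORS_ONLN || name == _SC_NPROCESSORS_CONF) &&
            (e = getenv("NCPU")) != NULL)
            return atol(e);             /* fake CPU count requested by the user */

        return real_sysconf(name);      /* defer to the real implementation */
    }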
Willy Tarreau [Wed, 8 Jan 2025 09:29:39 +0000 (10:29 +0100)]
BUILD: makefile: add a qinfo macro to pass info in quiet mode
Some commands such as $(cmd_CC) etc already handle the quiet vs verbose
mode in the makefile, but sometimes we may want to pass other info. The
new "qinfo" macro can be called with a 9-char string argument (spaces
included) as a prefix for some commands, to emit that string when in
quiet mode. The caller must fill the spaces needed for alignment. E.g:
Willy Tarreau [Wed, 8 Jan 2025 10:22:18 +0000 (11:22 +0100)]
BUILD: makefile: do not clean standalone binaries on a simple "make clean"
Running "make clean" currently gets rid of a number of auxiliary tools,
including the standalone ones that do not depend on haproxy's build
options. This is a bit annoying as they have to be rebuilt each time.
Let's move them to the distclean target instead.
Since 5ee266b7 ("MINOR: error: simplify startup_logs_init_shm"), the FD
of the startup logs is always closed and the HAPROXY_STARTUPLOGS_FD
variable is not used anymore, which means we only need an mmap.
Indeed the shm_open() function was only needed to keep the shm across the
exec() calls of the master so we could get the logs stored there after
doing the final exec() in wait mode. Since the wait mode doesn't exist
anymore and the parsing is done in a worker, we only need to share a
memory zone between the master and the worker.
This patch removes shm_open() and replaces it with a simple mmap(); this
way the shared startup-logs become more portable and USE_SHM_OPEN is not
required anymore.
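A minimal sketch of the replacement (simplified compared to the real
startup-logs code):

    #include <stddef.h>
    #include <sys/mman.h>

    /* allocate a zone shared between the master and the workers it fork()s:
     * an anonymous shared mapping is enough since the zone no longer needs
     * to survive an exec() */
    static void *alloc_shared_zone(size_t size)
    {
        void *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);

        return area == MAP_FAILED ? NULL : area;
    }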
This bug arrived with this commit in the current dev branch:
056ec51c26 MEDIUM: ssl/ocsp: counters for OCSP stapling
and could occur for QUIC connections during the handshake when the underlying
<conn> connection object is not yet initialized. In this case the TLS
counters attached to the TLS listeners cannot be accessed through this object
but only through the QUIC connection object.
Modify the code to initialize the listener (<li> variable) for both QUIC
and TCP connections, then initialize the variables for the TLS counters
if the listener is also initialized.
Thank you to @Tristan971 for having reported this issue in GH #2833.
Must be backported with the commit mentioned above if it is planned to be
backported.
BUG/MEDIUM: promex/resolvers: Don't dump metrics if no nameserver is defined
A 'resolvers' section may be defined without any nameserver. In that case,
we must take care not to dump the corresponding Prometheus metrics. However
there is an issue that could lead to a crash or a strange infinite loop
because we are looping on an empty list and, at some point, we are
dereferencing an invalid pointer.
There is an issue because the loop on the nameservers of a resolvers section
is performed via callback functions and not the standard list_for_each_entry
macro. So we must take care to properly detect the end of the list and empty
lists of nameservers. But the fix is not so simple because resolvers
sections with and without nameservers may be mixed.
To fix the issue, in rslv_promex_start_ts() and rslv_promex_next_ts(), when
the next resolvers section must be evaluated, a loop is now used to properly
skip empty sections.
This patch is related to #2831. Not sure it fixes it. It must be backported
as far as 3.0.
This commit is a direct follow-up to the previous one. As already
described, a previous fix was merged to prevent the streamdesc attach
operation on already completed QCS instances scheduled for purging. This
was implemented by skipping app proto decoding.
However, this has a bad side-effect for remote uni-directional streams.
If a FIN stream frame is received on such a stream, it will be considered
complete because streamdescs are never attached to a uni stream. Due to
the mentioned new fix, this prevents analysis of this last frame for
every uni stream.
To fix this, do not skip app proto decoding anymore for completed QCS.
Instead, update qcs_attach_sc() to transform it into a no-op if the QCS is
already fully closed before streamdesc instantiation. However, the
success return value is still used to prevent an invalid decoding error
report.
The impact of this bug should be minor. Indeed, HTTP/3 and QPACK uni
streams are never closed by the client as this is invalid per the
spec. The only issue was that this prevented QUIC MUX from closing the
connection with error H3_ERR_CLOSED_CRITICAL_STREAM.
This must be backported along with the previous patch, at least to 3.1,
and eventually to 2.8 if the mentioned patches are merged there.
MINOR: mux-quic: change return value of qcs_attach_sc()
A recent fix was introduced to ensure that a streamdesc instance won't
be attached to an already completed QCS which is eligible for purging.
This was performed by skipping application protocol decoding if a QCS is
in such a state. Here is the patch responsible for this change:
caf60ac696a29799631a76beb16d0072f65eef12
BUG/MEDIUM: mux-quic: do not attach on already closed stream
However, this is too restrictive, in particular for unidirectional streams
where no streamdesc is ever attached. To fix this behavior, the
qcs_attach_sc() API has first been modified. Instead of returning a
streamdesc instance, it returns either 0 on success or a negative error
code.
There should be no functional changes with this patch. It is only meant to
make it possible to extend qcs_attach_sc() with the possibility of skipping
streamdesc instantiation while still keeping a success return value.
This should be backported wherever the above patch has been merged. For
the record, it was scheduled for immediate backport on 3.1, plus merging
on older releases up to 2.8 after a period of observation.
BUG/MINOR: mux-quic: fix wakeup on qcc_set_error()
The following patch was a major refactoring of QUIC MUX. It removed the
pacing-specific code path. In particular, the qcc_wakeup() utility function
was removed and replaced by direct tasklet_wakeup() usage.
41f0472d967b2deb095d5adc8a167da973fbee3d
MEDIUM: mux-quic: remove pacing specific code on qcc_io_cb
However, an incorrect substitution was performed in qcc_set_error(). As
such, there was no explicit wakeup in case an error is detected by QUIC
MUX or the app protocol layer. This may lead to missing error reporting
to clients.
Fix this by re-adding tasklet_wakeup() usage in qcc_set_error().
This must be backported up to 3.1 where the above patch is scheduled.
MINOR: config: Alert about extra arguments for errorfile and errorloc
The errorfile and errorloc directives expect exactly two arguments. But
extra arguments were just ignored while an error should be emitted. It is
now fixed.
This patch could be backported as far as 2.2 if necessary.
BUG/MINOR: log: Allow to use if/unless conditionnals for do-log action
The do-log action does not accept arguments for now. But an error was
triggered if any extra argument was found, preventing the use of if/unless
conditionals.
When an action is parsed, expected arguments must be tested to detect
missing ones but not unexpected extra arguments, because this should be
performed by the conditional parser. So just removing the test in the
do-log parser function is enough to fix the issue.
Amaury Denoyelle [Tue, 31 Dec 2024 15:06:32 +0000 (16:06 +0100)]
BUG/MEDIUM: mux-quic: do not attach on already closed stream
Due to QUIC packet reordering, a stream may be opened via a new
RESET_STREAM or STOP_SENDING frame. This would cause either the Tx or Rx
channel to be immediately closed.
This can cause an issue with the current QUIC MUX implementation with QCS
purging. QCS instances are inserted into the QCC purge list when the
transfer can be considered as completed. In most cases, this happens after
a full request/response exchange. However, it can also happen after request
reception if RESET_STREAM/STOP_SENDING are received first.
A BUG_ON() crash will occur if a STREAM frame is received afterwards. In
this case, a streamdesc instance will be attached via qcs_attach_sc() to
handle the new request. However, the QCS is already considered eligible for
purging. It could cause it to be released while its streamdesc instance
remains. A BUG_ON() crash detects this problem in qcc_purge_streams().
To fix this, extend qcc_decode_qcs() to skip the app proto rcv_buf
invocation if the QCS is considered completed. A similar condition was
already implemented when read was previously aborted after a
STOP_SENDING emission by QUIC MUX.
This crash was reproduced on haproxy.org. Here is the output of the
backtrace:
Core was generated by `./haproxy-dev -db -f /etc/haproxy/haproxy-current.cfg -sf 16495'.
Program terminated with signal SIGILL, Illegal instruction.
#0 0x00000000004e442b in qcc_purge_streams (qcc=0x774cca0) at src/mux_quic.c:2661
2661 BUG_ON_HOT(!qcs_is_completed(qcs));
[Current thread is 1 (LWP 1457)]
[ ## gdb ## ] bt
#0 0x00000000004e442b in qcc_purge_streams (qcc=0x774cca0) at src/mux_quic.c:2661
#1 0x00000000004e4db7 in qcc_io_process (qcc=0x774cca0) at src/mux_quic.c:2744
#2 0x00000000004e5a54 in qcc_io_cb (t=0x7f71193940c0, ctx=0x774cca0, status=573504) at src/mux_quic.c:2886
#3 0x0000000000b4f792 in run_tasks_from_lists (budgets=0x7ffdcea1e670) at src/task.c:603
#4 0x0000000000b5012f in process_runnable_tasks () at src/task.c:883
#5 0x00000000007de4a3 in run_poll_loop () at src/haproxy.c:2771
#6 0x00000000007deb9f in run_thread_poll_loop (data=0x1335a00 <ha_thread_info>) at src/haproxy.c:2985
#7 0x00000000007dfd8d in main (argc=6, argv=0x7ffdcea1e958) at src/haproxy.c:3570
This BUG_ON() crash can only happen since the 3.1 refactoring. Indeed, the
purge list was only implemented in this version. As such, please backport
it to 3.1 immediately. However, a logic issue remains for older versions as
a stream could be attached to a fully closed QCS. Thus, it should be
backported up to 2.8, this time after a period of observation.
Amaury Denoyelle [Tue, 31 Dec 2024 14:18:52 +0000 (15:18 +0100)]
MINOR: mux-quic: add traces on sd attach
Add traces into qcs_attach_sc(). This function is called when a request
is received on a QCS stream and a streamdesc instance is attached. This
will be useful to facilitate debugging.
BUG/MAJOR: mux-quic: properly fix BUG_ON on empty STREAM emission
Properly fix the BUG_ON() occurrence when QUIC MUX emits only empty STREAM
frames. This was addressed by a previous patch but it caused another
regression, so a revert was needed.
The BUG_ON() on the qcc_build_frms() return value is invalid. Indeed,
qcc_build_frms() may return 0, but this does not imply that the frame list
is empty, as encoded frames can have a zero-length payload. As such,
simply remove this invalid BUG_ON().
The above patch tried to fix a BUG_ON() occurrence when the MUX only emitted
empty STREAM frames via qcc_build_frms(). The return value of qcs_send() was
changed from the payload length of the STREAM frame to the whole frame
length. However, this is invalid as this return value is used to ensure
connection flow-control is not exceeded on sending retry. This causes an
occurrence of a BUG_ON() crash in qcc_io_send() as the send-list is not
properly purged after QCS emission.
Revert this incorrect fix. The original issue will be properly dealt with
in the next commit.
This commit must be backported to 3.1 if the reverted commit was already
applied on it.
BUG/MEDIUM: mux-h2: Count copied data when looping on RX bufs in h2_rcv_buf()
When data was copied from RX buffers to the channel buffer, more data than
expected could be moved because the amount of data copied was never
decremented from the limit. This could lead to a stream deadlock when the
compression filter was in use.
The issue was introduced by commit 4eb3ff1 ("MAJOR: mux-h2: make streams use
the connection's buffers") but revealed by 3816c38 ("MAJOR: mux-h2: permit a
stream to allocate as many buffers as desired").
Because an H2 stream can now have several RX buffers, in h2_rcv_buf() we
loop on these buffers to fill the channel buffer. However, we must still
take care to respect the limit and not copy too much data. But the
"count" variable was never decremented to reflect the amount of data
already copied, so it was possible to exceed the limit.
It was an issue when the compression filter was in use because the channel
buffer could be fully filled, preventing the compression from being
performed. When this happened, the stream was infinitely blocked because
the compression filter was asking for some space but nothing was scheduled
to be forwarded.
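A minimal sketch of the fixed loop shape (hypothetical helper, not the
actual h2_rcv_buf() code):

    #include <stddef.h>

    size_t copy_one_rx_buf(size_t max);   /* hypothetical: copies at most <max> bytes */

    static size_t copy_rx_bufs(size_t count)
    {
        size_t total = 0;

        while (count) {
            size_t copied = copy_one_rx_buf(count);

            if (!copied)
                break;
            count -= copied;   /* the missing decrement: deduct what was copied */
            total += copied;
        }
        return total;
    }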
This patch should fix the issue #2826. It must be backported to 3.1.
Amaury Denoyelle [Tue, 31 Dec 2024 14:21:19 +0000 (15:21 +0100)]
BUG/MAJOR: mux-quic: fix BUG_ON on empty STREAM emission
A BUG_ON() is present in qcc_io_send() to ensure that the encoded frame list
is empty if qcc_build_frms() previously returned 0.
This BUG_ON() may be triggered if an empty STREAM frame is encoded for a
standalone FIN. Indeed, qcc_build_frms() returns the sum of all STREAM
payload lengths. In case only empty STREAM frames are generated, the return
value will be 0, despite new frames being encoded and inserted into the
frame list.
To fix this, change the return value of qcs_send(). It now returns the
whole STREAM frame length, both header and payload included. This
ensures that qcc_build_frms() won't return a null value if new frames are
encoded, even empty ones.
BUG/MINOR: stktable: invalid use of stkctr_set_entry() with mixed table types
Some actions such as "sc0_get_gpc0" (using smp_fetch_sc_stkctr()
internally) can take an optional table name as parameter to perform the
lookup on a different table from the tracked one but using the key from
the tracked entry. It is done by leveraging the stktable_lookup() function
which was originally meant to perform intra-table lookups.
Calling sc0_get_gpc0() with a different table name will result in
stktable_lookup() being called to perform lookup using a stktsess from
a different table. While it is theorically fine, it comes with a pitfall:
both tables (the one from where the stktsess originates and the actual
target table) should rely on the exact same key type and length.
Failure to do so actually results in undefined behavior, because the key
type and/or length from one table is used to perform the lookup in
another table, while the underlying lookup API expects explicit type and
key length.
For instance, consider the below example:
  peers testpeers
    bind 127.0.0.1:10001
    server localhost

    table test type binary len 1 size 100k expire 1h store gpc0
    table test2 type string size 100k expire 1h store gpc0
Performing a curl request to localhost:8080 will cause uninitialized reads
because the string "ok" from the test2 table will be compared as a string
against the "AA" binary sample which is not NULL-terminated:
==2450742== Conditional jump or move depends on uninitialised value(s)
==2450742== at 0x484F238: strlen (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2450742== by 0x27BCE6: stktable_lookup (stick_table.c:539)
==2450742== by 0x281470: smp_fetch_sc_stkctr (stick_table.c:3580)
==2450742== by 0x283083: smp_fetch_sc_get_gpc0 (stick_table.c:3788)
==2450742== by 0x2A805C: sample_process (sample.c:1376)
So let's prevent that by adding some comments to the stktable_set_entry()
function description, and by adding a check in smp_fetch_sc_stkctr() to
ensure both the source stksess and the target table share the same key
properties.
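A minimal sketch of such a sanity check (the type and key_size fields do
exist in struct stktable, but this is not the exact code from the patch):

    /* refuse cross-table lookups when the key type or length differ */
    static int stk_keys_compatible(const struct stktable *src,
                                   const struct stktable *dst)
    {
        return src->type == dst->type && src->key_size == dst->key_size;
    }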
While it could be relevant to backport this to all stable versions, it is
probably safer to wait for some time before doing so, to ensure that no
existing configs rely on this ambiguity, because the fact that the target
table and the source stksess entry need to share the same key type and
length is not explicitly documented.
DOC: config: add missing "track-sc0" in action keywords matrix
In d54e8f8107 ("DOC: config: reorganize actions into their own section"),
the "track-sc0" keyword was properly documented but the keyword was not
placed in the action keywords matrix alongside the other track-sc*
statements. It was probably overlooked, so let's fix that.
Willy Tarreau [Wed, 25 Dec 2024 14:17:01 +0000 (15:17 +0100)]
[RELEASE] Released version 3.2-dev2
Released version 3.2-dev2 with the following main changes :
- MINOR: build: define DEBUG_STRESS
- MINOR: applet: define applet_putchk_stress() alternative
- MINOR: stats: use stress mode to force reentrant dumps
- CI: scripts: add support for AWS-LC-FIPS in build-ssl.sh
- MINOR: ssl: add "FIPS" details in haproxy -vv
- MEDIUM: ssl: rename 'OpenSSL' by 'SSL library' in haproxy -vv
- CI: github: let's add an AWS-LC-FIPS job
- MINOR: window_filter: rely on the time to update the filter samples (QUIC/BBR)
- BUG/MINOR: quic: wrong logical statement in in_recovery_period() (BBR)
- BUG/MINOR: quic: fix BBB max bandwidth oscillation issue.
- BUG/MINOR: quic: wrong bbr_target_inflight() implementation
- BUG/MINOR: quic: remove max_bw filter from delivery rate sampling
- BUG/MINOR: quic: underflow issue for bbr_inflight_hi_from_lost_packet()
- BUG/MINOR: quic: reduce packet losses at least during ProbeBW_CRUISE (BBR)
- MINOR: quic: reduce the private data size of QUIC cc algos
- CLEANUP: quic: remove a wrong comment about ->app_limited (drs)
- BUG/MINOR: quic: fix the wrong tracked recovery start time value
- BUG/MINOR: quic: too permissive exit condition for high loss detection in Startup (BBR)
- BUG/MINOR: cli: cli_snd_buf: preserve \r\n for payload lines
- REGTESTS: ssl: add a PEM with mix of LF and CRLF line endings
- BUG/MINOR: quic: missing Startup accelerating probing bw states
- CLEANUP: quic: Rename some BBR functions in relation with bw probing
- REORG: startup: move global.maxconn calculations in limits.c
- REORG: startup: move code that applies limits to limits.c
- REORG: startup: move nofile limit checks in limits.c
- MINOR: ssl: add utils functions to extract X509 notAfter date
- MINOR: ssl/cli: allow to filter expired certificates with 'show ssl sni'
- MINOR: ssl/cli: add -A to the 'show ssl sni' command description
- BUG/MINOR: ssl/cli: 'show ssl cert' escape the first '*' of a filename
- BUG/MINOR: ssl/cli: 'show ssl crl-file' escape the first '*' of a filename
- BUG/MINOR: ssl/cli: 'show ssl ca-file' escape the first '*' of a filename
- BUG/MEDIUM: stconn: Only consider I/O timers to update stream's expiration date
- BUG/MEDIUM: queues: Make sure we call process_srv_queue() when leaving
- BUG/MEDIUM: queues: Do not use pendconn_grab_from_px().
- CLEANUP: queues: Remove pendconn_grab_from_px().
- BUILD: debug: only dump/reset glitch counters when really defined
- MINOR: compiler: add a __has_builtin() macro to detect features more easily
- MINOR: compiler: rely on builtin detection for __builtin_unreachable()
- MINOR: compiler: add a new "ASSUME" macro to help the compiler
- MINOR: compiler: also enable __builtin_assume() for ASSUME()
- MINOR: compiler: add ASSUME_NONNULL() to tell the compiler a pointer is valid
- MINOR: bug: make BUG_ON() fall back to ASSUME
- CLEANUP: cache: use ASSUME_NONNULL() instead of DISGUISE()
- CLEANUP: hlua: use ASSUME_NONNULL() instead of ALREADY_CHECKED()
- CLEANUP: htx: use ASSUME_NONNULL() to mark the start line as non-null
- CLEANUP: mux-fcgi: use ASSUME_NONNULL() to indicate that the first block exists
- CLEANUP: stats: use ASSUME_NONNULL() to indicate that the first block exists
- CLEANUP: quic: replace ALREADY_CHECKED() with ASSUME_NONNULL() at a few places
- CLEANUP: ssl-sock: drop two now unneeded ALREADY_CHECKED()
- BUG/MEDIUM: mux-quic: do not mix qcc_io_send() return codes with pacing
- CLEANUP: mux-quic: remove unused qcc member send_retry_list
- MINOR: quic: add traces
- MINOR: mux-quic: refactor wait-for-handshake support
- MEDIUM/OPTIM: mux-quic: define a recv_list for demux resumption
- MEDIUM/OPTIM: mux-quic: implement purg_list
- MINOR: mux-quic: extract code to build STREAM frames list
- MINOR: mux-quic: split STREAM and RS/SS emission
- MEDIUM/OPTIM: mux-quic: do not rebuild frms list on every send
- MEDIUM: mux-quic: remove pacing specific code on qcc_io_cb
- MINOR: trace: implement tracing disabling API
- MINOR: mux-quic: hide traces when woken up on pacing only
- MINOR: ssl/cli: add a 'Uncommitted' status for 'show ssl' commands
- MINOR: ssl/ocsp: Add extra details in error logs when possible
- BUILD: ssl/ocsp: error: ‘%.*s’ directive argument is null
- MEDIUM: ssl/ocsp: OCSP response is expired with OCSP_MAX_RESPONSE_TIME_SKEW
- MINOR: ssl: improve HAVE_SSL_OCSP ifdef
- DOC: config: add example for server "track" keyword
- DOC: config: reorder "tune.lua.*" keywords by alphabetical order
- DOC: config: add "tune.lua.burst-timeout" to the list of global parameters
- MINOR: hlua: add option to preserve bool type from smp to lua
- REGTESTS: fix lua-based regtests using tune.lua.smp-preserve-bool
- BUG/MEDIUM: mux-quic: prevent BUG_ON() by refreshing frms on MAX_DATA
- CLEANUP: mux-quic: remove dead err label in qcc_build_frms()
- BUG/MINOR: h2/rhttp: fix HTTP2 conn counters on reverse
- MINOR: hlua: rename "tune.lua.preserve-smp-bool" to "tune.lua.bool-sample-conversion"
- MINOR: ssl: change visibility of ssl_stats_module
- MINOR: ssl: rework the error management in the OCSP callback
- MEDIUM: ssl/ocsp: counters for OCSP stapling
- CI: limit aws-lc and libressl Quic Interop to "haproxy" only
- BUG/MEDIUM: queue: Make process_srv_queue return the number of streams
- CI: github: try to build the latest WolfSSL master weekly
- CI: github: activate ASAN on the WolfSSL weekly job
- BUG/MINOR: stats: fix segfault caused by uninitialized value in "show schema json"
- MINOR: stktable: add stktable_get_data_type_idx() helper function
- MINOR: stktable: support optional index for array types in {set, clear, show} table commands
- CI: scripts: allow to build wolfssl with --enable-debug
- CI: github: activate debug in wolfssl weekly build
- BUG/MEDIUM: queues: Strictly respect maxconn for outgoing connections
- MEDIUM: queue: Handle the race condition between queue and dequeue differently
- CLEANUP: Remove pendconn_must_try_again().
- BUILD: compat: add missing fcntl.h before defining F_SETPIPE_SZ
- BUILD: mworker: always initialize the saveptr of strtok_r()
- BUILD: limits: make normalize_rlim() take an rlim_t to fix build on m68k
- BUG/MINOR: checks: handle a possible strdup() failure
- BUG/MINOR: listener: handle a possible strdup() failure
- BUG/MINOR: mux_h1: handle a possible strdup() failure
- BUG/MINOR: debug: handle a possible strdup() failure
The reason is the comparison between a ulong limit and RLIM_INFINITY.
Indeed, on m68k, rlim_t is an unsigned long long. Let's just change
the function's input type to take an rlim_t instead. This also allows
us to get rid of the casts at the call sites.
This can be backported to 3.1, though it's not important given the low
prevalence of this platform for such use cases.
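A minimal sketch of the idea, in case it helps picture the change; the
function name matches the commit but the body below is purely illustrative
and is not HAProxy's actual code:

    #include <sys/resource.h>

    /* Illustrative only: by taking an rlim_t directly, the comparison
     * with RLIM_INFINITY stays within a single type, and callers no
     * longer need to cast the values obtained from getrlimit().
     */
    rlim_t normalize_rlim(rlim_t rlim)
    {
        /* treat "unlimited" as 0 so callers handle it as a plain number */
        return (rlim == RLIM_INFINITY) ? 0 : rlim;
    }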
Willy Tarreau [Wed, 25 Dec 2024 11:10:13 +0000 (12:10 +0100)]
BUILD: mworker: always initialize the saveptr of strtok_r()
Building with some libcs which define strtok_r() as an inline function
can yield a possibly uninitialized warning due to a loop dereferencing
this save pointer early, even though the doc clearly mentions that it
is ignored. This is actually more of a mismatch between the compiler
and the libc (gcc-4.7 and glibc-2.23 in that case). It's trivial to
set s2 to NULL here so let's do it to please this old couple. Note
that while the warning is triggered in all supported versions, there's
no point backporting it since it's unlikely this combination will be
relevant outside of backwards compatibility checks now.
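Below is a small illustrative snippet (not the mworker code itself) showing
the pattern: the save pointer is explicitly set to NULL before the first
strtok_r() call, which costs nothing and silences the "may be used
uninitialized" warning on such old gcc/glibc pairs:

    #include <stdio.h>
    #include <string.h>

    /* illustrative parser: s2 is initialized even though strtok_r()
     * is documented to ignore its initial value on the first call
     */
    void parse_words(char *line)
    {
        char *s2 = NULL;
        char *word;

        for (word = strtok_r(line, " \t", &s2); word;
             word = strtok_r(NULL, " \t", &s2))
            printf("word: %s\n", word);
    }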
Willy Tarreau [Wed, 25 Dec 2024 10:53:11 +0000 (11:53 +0100)]
BUILD: compat: add missing fcntl.h before defining F_SETPIPE_SZ
In 1.5-dev8, 13 years ago, support for setting pipe size was added by
commit bd9a0a778 ("OPTIM/MINOR: make it possible to change pipe size
(tune.pipesize)"). For compatibility purposes, it was defining
F_SETPIPE_SZ in compat.h if it was not set. Apparently fcntl.h was always
included before compat.h, so F_SETPIPE_SZ was already defined by then.
Now in 3.2-dev1, commit fbc534a6f ("REORG: startup: move nofile limit
checks in limits.c") reordered a few includes and ended up with
mworker-prog.c including compat.h before fcntl.h, causing a redefinition
error on certain libcs:
  CC      src/mworker-prog.o
In file included from /usr/include/bits/fcntl.h:61:0,
                 from /usr/include/fcntl.h:35,
                 from include/haproxy/limits.h:11,
                 from include/haproxy/mworker.h:18,
                 from src/mworker-prog.c:27:
/usr/include/bits/fcntl-linux.h:203:0: warning: "F_SETPIPE_SZ" redefined [enabled by default]
In file included from include/haproxy/api-t.h:35:0,
                 from include/haproxy/api.h:33,
                 from src/mworker-prog.c:23:
include/haproxy/compat.h:161:0: note: this is the location of the previous definition
Let's simply include fcntl.h in compat.h before the macro is redefined.
There's normally no need to backport this, though it's harmless to do
it if needed.
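The shape of the fix, shown here as a simplified sketch rather than the
exact compat.h hunk: pull in the system fcntl.h first, then only provide a
fallback definition when the libc really lacks one (the numeric value below
is the usual Linux one and only illustrative):

    /* simplified sketch of the compat.h approach */
    #include <fcntl.h>

    #ifndef F_SETPIPE_SZ
    #define F_SETPIPE_SZ 1031  /* fallback, only when the libc lacks it */
    #endif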
MEDIUM: queue: Handle the race condition between queue and dequeue differently
There is a small race condition between a server checking whether there
is something left in the proxy queue, and a stream being added to that
queue. If the server checks just before the stream is added to the queue,
and it no longer has any stream to deal with, then nothing will take
care of the stream, which may stay in the queue forever.
This was worked around with commit 5541d4995d, by checking for that exact
condition after adding the stream to the queue, and trying again to get
a server assigned if it is detected.
That fix led to multiple infinite loops, which got fixed, but it is not
unlikely that it could happen again. So let's fix the initial problem
differently: a single server may mark itself as ready, and it removes
itself once used. The principle is that when we discover that the just
queued stream is alone with no active request anywhere to dequeue it,
instead of rebalancing it, it will be assigned to that current "ready"
server that is available to handle it. The extra cost of the atomic ops
is negligible since the situation is super rare.
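To make the principle more concrete, here is a heavily simplified sketch of
that "ready server" slot; the names and structure are hypothetical and do
not match HAProxy's queue code:

    #include <stdatomic.h>
    #include <stddef.h>

    struct server;

    /* single slot holding a server that found nothing left to dequeue */
    static _Atomic(struct server *) ready_srv;

    /* called by a server that just drained its queue and has spare capacity */
    void mark_server_ready(struct server *srv)
    {
        atomic_store(&ready_srv, srv);
    }

    /* called right after queuing a stream, when the race is detected:
     * the slot is consumed atomically so it is used at most once
     */
    struct server *grab_ready_server(void)
    {
        return atomic_exchange(&ready_srv, NULL);
    }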
Olivier Houchard [Tue, 17 Dec 2024 13:30:46 +0000 (13:30 +0000)]
BUG/MEDIUM: queues: Strictly respect maxconn for outgoing connections
The "served" field of struct server is used to know how many connections
are currently in use for a server. But served used to be incremented way
after the server was picked, so there were race conditions that could
lead to more than maxconn connections being allocated for one server. To
fix this, increment served much earlier, and make sure at that point that
it never goes past maxconn.
We now should never have more outgoing connections than set by maxconn.
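As an illustration of the approach (hypothetical names, not the actual
struct server fields), reserving the slot with a compare-and-swap loop
guarantees that served can never cross maxconn:

    #include <stdatomic.h>

    struct server_slots {
        _Atomic int served;   /* connections currently in use */
        int maxconn;          /* hard limit, 0 means unlimited */
    };

    /* returns 1 if a slot was reserved, 0 if the server is already full */
    int reserve_server_slot(struct server_slots *s)
    {
        int cur = atomic_load(&s->served);

        do {
            if (s->maxconn && cur >= s->maxconn)
                return 0;   /* would exceed maxconn, pick another server */
        } while (!atomic_compare_exchange_weak(&s->served, &cur, cur + 1));

        return 1;
    }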
MINOR: stktable: support optional index for array types in {set, clear, show} table commands
As discussed in GH #2286, {set, clear, show} table commands were unable
to deal with array types such as gpt, because they handled such types as
non-array types, so only the first entry (i.e. gpt[0]) was considered.
In this patch we add extra logic around array-types handling so that
it is possible to specify an array index right after the type, like this:
set table peer/table key mykey data.gpt[2] value
# where 2 is the entry index that we want to access
If no index is specified, then it implicitly defaults to 0 to mimic
previous behavior.
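For illustration only, here is a sketch of the parsing step, splitting an
argument such as "gpt[2]" into a type name and an index; the function name
and prototype are made up and do not claim to match stktable.c:

    #include <stdlib.h>
    #include <string.h>

    /* splits "gpt[2]" into name "gpt" and index 2; the index defaults
     * to 0 when no bracket is present, mimicking the old behavior.
     * Returns 0 on success, -1 on a malformed argument.
     */
    int split_data_type(const char *arg, char *name, size_t name_len,
                        unsigned int *idx)
    {
        const char *bracket = strchr(arg, '[');
        size_t len = bracket ? (size_t)(bracket - arg) : strlen(arg);
        char *end;

        if (len + 1 > name_len)
            return -1;
        memcpy(name, arg, len);
        name[len] = '\0';

        *idx = 0;
        if (!bracket)
            return 0;

        *idx = strtoul(bracket + 1, &end, 10);
        return (*end == ']' && end[1] == '\0') ? 0 : -1;
    }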
BUG/MINOR: stats: fix segfault caused by uninitialized value in "show schema json"
Since b3d5708 ("MINOR: stats: remove implicit static trash_chunk usage")
a segfault can occur when issuing "show schema json" on the stats socket.
Indeed, now the dumping functions don't rely on trash_chunk anymore, but
instead they rely on the appctx->chunk buffer. However, unlike other
stats dumping commands, "show schema json" only has an io handler,
and no parse function. With other commands, the parse function is
responsible for pre-setting some data, including applet ctx reservation.
Thus, since "show schema json" lacks a parse function, the applet ctx is
used uninitialized, which is obviously a bug.
To fix the issue we simply add a parse function for "show schema json",
although all it does for now is calling applet_reserve_svcctx() for the
current applet ctx.
This issue was reported by @dsuch in GH #2825. It must be backported up
to 3.0.
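A sketch of what such a parse function can look like, assuming the usual
CLI parse prototype and a show_stat_ctx service context (both are
assumptions here, the actual types live in the stats code):

    #include <haproxy/applet.h>
    #include <haproxy/cli.h>
    #include <haproxy/stats.h>

    /* minimal parse function: its only job is to reserve the applet's
     * service context before the io handler runs, so the chunk-based
     * dump code never sees an uninitialized ctx
     */
    static int cli_parse_show_schema_json(char **args, char *payload,
                                          struct appctx *appctx, void *private)
    {
        applet_reserve_svcctx(appctx, sizeof(struct show_stat_ctx));
        return 0;
    }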
Olivier Houchard [Mon, 23 Dec 2024 14:17:25 +0000 (14:17 +0000)]
BUG/MEDIUM: queue: Make process_srv_queue return the number of streams
Make process_srv_queue() return the number of streams unqueued, as
pendconn_grab_from_px() did, as that number is used by
srv_update_status() to generate logs.
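A tiny sketch of the new contract, with a hypothetical helper standing in
for the real dequeuing logic:

    struct server;

    /* hypothetical helper: wakes up one pending stream, returns 0 when empty */
    int dequeue_one_pendconn(struct server *srv);

    /* count how many streams were unqueued and hand the number back to the
     * caller (e.g. srv_update_status() uses it to generate logs)
     */
    int process_srv_queue_sketch(struct server *srv)
    {
        int unqueued = 0;

        while (dequeue_one_pendconn(srv))
            unqueued++;
        return unqueued;
    }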