This bug was introduced with 969e212 ("MINOR: log: add dup_logsrv() helper
function")
When duplicating an existing log entry, we must take care to inherit from
its original ->ref if it is set, because not doing so would make 28ac0999
("MINOR: log: Keep the ref when a log server is copied to avoid duplicate entries")
ineffective given that global log directives will lose their original
reference when duplicated recursively (at least twice), which is what
happens when global log directives are first inherited to defaults which
are then inherited to a regular proxy at the end of the chain.
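The idea of the fix can be sketched as follows (standalone and simplified;
only the relevant field is shown, the rest of the copy is elided):

    #include <stdlib.h>
    #include <string.h>

    struct logsrv {
        struct logsrv *ref; /* original entry this one was duplicated from */
        /* ... other members elided ... */
    };

    static struct logsrv *dup_logsrv(const struct logsrv *src)
    {
        struct logsrv *cpy = malloc(sizeof(*cpy));

        if (!cpy)
            return NULL;
        memcpy(cpy, src, sizeof(*cpy));
        /* inherit the original reference when <src> is itself a duplicate,
         * so that duplicate detection keeps working across recursive
         * inheritance (global -> defaults -> proxy):
         */
        cpy->ref = src->ref ? src->ref : (struct logsrv *)src;
        return cpy;
    }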
This can be easily reproduced using the following configuration:
|global
| log stdout format raw local0
|
|defaults
| log global
|
|frontend test
| log global
| ...
Logs from the "test" proxy will be duplicated because test incorrectly
inherited the global "log" directives twice, which 28ac0999 would
normally detect and prevent.
No backport needed unless 969e212 gets backported.
BUG/MINOR: sample: Fix bytes converter if offset is bigger than sample length
When the bytes converter was improved to be able to use variables (915e48675
["MEDIUM: sample: Enhances converter "bytes" to take variable names as
arguments"]), the behavior of the sample slightly change. A failure is
reported if the given offset is bigger than the sample length. Before, a
empty binary sample was returned.
This patch fixes the converter to restore the original behavior. The
function was also refactored to properly handle failures by removing the
SMP_F_MAY_CHANGE flag. Because the converter now handles variables, the
conversion to an integer may fail. In this case the SMP_F_MAY_CHANGE flag
must be removed to be sure the caller will not retry.
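The restored behavior can be sketched like this (standalone, with
illustrative names; in the real converter a true failure additionally
clears the SMP_F_MAY_CHANGE flag so the caller will not retry):

    #include <stddef.h>

    struct bin_sample {
        const char *data;
        size_t len;
    };

    /* "bytes(offset[,len])" semantics on a binary sample: an offset past
     * the end of the sample yields an empty sample, not a failure */
    static int bytes_apply(struct bin_sample *smp, size_t offset, size_t maxlen)
    {
        if (offset >= smp->len) {
            smp->len = 0; /* restored behavior: empty binary sample */
            return 1;
        }
        smp->data += offset;
        smp->len  -= offset;
        if (maxlen && smp->len > maxlen)
            smp->len = maxlen;
        return 1;
    }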
This patch should fix the issue #2335. No backport needed except if commit
above is backported.
Willy Tarreau [Mon, 13 Nov 2023 08:17:05 +0000 (09:17 +0100)]
BUG/MEDIUM: proxy: always initialize the default settings after init
The proxy's initialization is rather odd. First, init_new_proxy() is
called to zero all the lists and certain values, except those that can
come from defaults, which are initialized by proxy_preset_defaults().
The default server settings are also only set there.
This results in these settings not being set for a number of internal
proxies that do not explicitly call proxy_preset_defaults() after
allocation, such as sink and log forwarders.
This was revealed by last commit 79aa63823 ("MINOR: server: always
initialize pp_tlvs for default servers") which crashes in log parsers
when applied to certain proxies which did not initialize their default
servers.
In theory this should be backported, however it would be desirable to
wait a bit before backporting it, in case certain parts would rely on
these elements not being initialized.
Willy Tarreau [Mon, 13 Nov 2023 07:46:51 +0000 (08:46 +0100)]
MINOR: server: always initialize pp_tlvs for default servers
In commit 6f4bfed3a ("MINOR: server: Add parser support for
set-proxy-v2-tlv-fmt") a suspicious check for a NULL srv_tlv was placed
in the list_for_each_entry(), that should not be needed. In practice,
it's caused by the list head not being initialized, hence the first
element is NULL, as shown by Alexander's reproducer below which crashes
if the test in the loop is removed:
backend dummy
default-server send-proxy-v2 set-proxy-v2-tlv-fmt(0xE1) %[fc_pp_tlv(0xE1)]
server dummy_server 127.0.0.1:2319
The right place to initialize this field is proxy_preset_defaults().
We'd really need a function to initialize a server :-/
The check in the loop was removed. No backport is needed.
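The underlying issue can be illustrated with a standalone sketch (minimal
list in the spirit of haproxy's struct list; names are illustrative):

    /* minimal circular list: an initialized head points to itself */
    struct list { struct list *n, *p; };
    #define LIST_INIT(lh) ((lh)->n = (lh)->p = (lh))

    struct srv_defaults {
        struct list pp_tlvs; /* head of the default-server TLV list */
        /* ... */
    };

    static void preset_defaults(struct srv_defaults *def)
    {
        /* without this init, the head contains zeroes/garbage and the
         * first "element" seen when walking the list is invalid, which
         * is what the removed NULL check was papering over */
        LIST_INIT(&def->pp_tlvs);
    }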
BUG/MINOR: quic: Useless use of non-contiguous buffer for in order CRYPTO data
This issue could be reproduced with a TLS client certificate verification to
generate enough CRYPTO data between the client and haproxy and with dev/udp/udp-perturb
as network perturbator. Haproxy could crash because of a BUG_ON() call as soon
as out-of-order data were buffered into a non-contiguous buffer.
There is no need to pass a non-NULL non-contiguous buffer to qc_ssl_provide_quic_data()
from qc_ssl_provide_all_quic_data() which handles in-order CRYPTO data which
have not been buffered. If one is passed, the first call to qc_ssl_provide_quic_data()
to process the first block of in-order data leads the non-contiguous buffer
head to be advanced to a wrong offset, by <len> bytes, which is the length of
the in-order CRYPTO frame. This is detected by a BUG_ON() as follows:
BUG/MINOR: sink: don't learn srv port from srv addr
Since 04276f3d ("MEDIUM: server: split the address and the port into two
different fields") we should not use srv->addr to store server's port
and rely on srv->svc_port instead.
For sink servers, we correctly set ->svc_port upon server creation but
we didn't use it when initializing the address for the connection.
As a result, FQDN resolution will not work properly with sink servers.
Fortunately, this used to work by accident because sink servers were
resolved using the PA_O_RESOLVE flag in str2sa_range(), which made the
srv->addr contain the port in addition to the address.
But this will fail to work when FQDN resolution is postponed because only
->svc_port will contain the proper server port upon resolution.
For instance, FQDN resolution with servers from log backends (which are
resolved as regular servers, that is, without the PA_O_RESOLVE) will fail
to work because of this.
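The gist of the fix can be sketched as follows (standalone, using a plain
sockaddr_in for illustration; srv->svc_port is the real field, the helper
name is made up):

    #include <arpa/inet.h>
    #include <string.h>

    /* fill the connection address from the resolved IP and the server's
     * configured port, instead of trusting a port that an early
     * str2sa_range() resolution may have left in srv->addr */
    static void set_sink_srv_addr(struct sockaddr_in *out,
                                  const struct in_addr *resolved_ip,
                                  unsigned short svc_port)
    {
        memset(out, 0, sizeof(*out));
        out->sin_family = AF_INET;
        out->sin_addr   = *resolved_ip;
        out->sin_port   = htons(svc_port);
    }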
This may be backported as far as 2.2 even though the bug didn't have
noticeable effects for versions below 2.9.
[In 2.2, sink_forward_session_init() didn't exist; the fix should be applied
in sink_forward_session_create() instead]
This bug was introduced with 29b76ca ("BUG/MEDIUM: server/log: "mode log"
after server keyword causes crash")
Indeed, we cannot safely rely on addr_proto being set when str2sa_range()
returns in parse_server() (even if SRV_PARSE_PARSE_ADDR is set), because
proto lookup might be bypassed when FQDN addresses are involved.
Unfortunately, the above patch wrongly assumed that proto would always
be set when SRV_PARSE_PARSE_ADDR was passed to parse_server() (so when
str2sa_range() was called), resulting in invalid postparsing checks being
performed, which could as well lead to crashes with log backends
("mode log" set) because some postparsing init was skipped as a result of
proto not being set and this wasn't expected later in the init code.
To fix this, we now make use of the previous patch to perform server's
address compatibility checks on hints that are always set when
str2sa_range() successfully returns.
For log backend, we're also adding a complementary test to check if the
address family is of expected type, else we report an error, plus we're
moving the postinit logic into the log API since _srv_check_proxy_mode() is
only meant to check proxy mode compatibility and we were abusing it.
This patch depends on:
- "MINOR: tools: make str2sa_range() directly return type hints"
No backport required unless 29b76ca gets backported.
MINOR: tools: make str2sa_range() directly return type hints
str2sa_range() already allows the caller to provide <proto> in order to
get a pointer on the protocol matching the string input, thanks to
5fc9328a ("MINOR: tools: make str2sa_range() directly return the protocol").
However, as stated into the commit message, there is a trick:
"we can fail to return a protocol in case the caller
accepts an fqdn for use later. This is what servers do and in this
case it is valid to return no protocol"
In this case, we're unable to return the protocol because the protocol lookup
depends on both the [proto type + xprt type] and the [family type] being
known.
While family type might not be directly resolved when fqdn is involved
(because family type might be discovered using DNS queries), proto type
and xprt type are already known. As such, the caller might be interested
in knowing those address related hints even if the address family type is
not yet resolved and thus the matching protocol cannot be looked up.
Thus in this patch we add the optional net_addr_type (custom type)
argument to str2sa_range() to enable the caller to check the protocol type
and transport type when the function succeeds.
BUG/MEDIUM: applet: Remove appctx from buffer wait list on release
For now, the appctx is removed from the buffer wait list when it is
freed. However, when it is released, it is not necessarily freed
immediately. But it is detached from the SC. If it is still registered in
the buffer wait list, it could then be woken up to get a buffer. At this
stage it is totally unexpected, especially because we must access the SC.
The fix is obvious, the appctx must be removed from the buffer wait list on
release.
Note this bug exists because the appctx was moved at the mux level.
Willy Tarreau [Fri, 10 Nov 2023 15:26:32 +0000 (16:26 +0100)]
DOC: config: use the word 'backend' instead of 'proxy' in 'track' description
User @nwehrman reported in issue #2328 that the use of "proxy" instead
of "backend" in the argument of the "track" server keyword is confusing.
Admittedly, all other places in the doc use "backend/server" instead of
"proxy/server", so let's update it for the sake of consistency.
Amaury Denoyelle [Wed, 25 Oct 2023 08:50:47 +0000 (10:50 +0200)]
MEDIUM: quic: release conn socket before using quic_cc_conn
After emission/reception of a CONNECTION_CLOSE, a connection enters the
CLOSING state. In this state, only minimal exchanges occur as only the
packets which contained the CONNECTION_CLOSE frame can be re-emitted. In
conformance with the RFC, most resources are released and the quic_conn
instance is converted to the lighter quic_cc_conn.
Push this optimization further by closing the quic_conn socket FD before
switching to a quic_cc_conn. This means that quic_cc_conn will rely on the
listener socket for its send/recv operations. This should not impact
performance as, as stated, input/output are minimal in this state.
This patch should improve FD consumption as prior to this a socket FD
was kept during the closing delay which could cause maxsock to be
reached for other connections.
Note that fd member is kept in QUIC_CONN_COMMON and not removed from
quic_cc_conn. This is because quic_cc_conn relies on qc_snd_buf() which
accesses this field.
As a side effect of this change, jobs accounting for quic_conn is also
updated. quic_cc_conn instances are now not counted as jobs. Indeed, the
main objective of jobs is to prevent the haproxy process from being
stopped with data truncation. However, this relies on the connection using
its own socket, as the listener socket is shut down unconditionally on
shutdown.
A consequence of the jobs handling change is that the haproxy process will
be closed if only quic_cc_conn instances are present, thus preventing the
closing state from being respected. In case of a reload, if a client missed
a CONNECTION_CLOSE frame just before process shutdown, it will probably
receive a Stateless Reset when it retries sending.
This change is considered safe as, for now, haproxy only emits
CONNECTION_CLOSE on error conditions (such as protocol violation or
timeout), where suffering from data truncation is expected. However, if
connection closing is reused by haproxy to implement clean shutdown, it
will be necessary to delay CONNECTION_CLOSE frame emission to ensure no
data truncation happens here.
Amaury Denoyelle [Thu, 26 Oct 2023 12:15:41 +0000 (14:15 +0200)]
MEDIUM: quic: respect closing state even on soft-stop
Prior to this patch, a special condition was set when the idle timer was
rearmed for closing connections during haproxy process stopping. In this
case, the timeout was ditched and the idle task woken up immediately.
The objective was to quickly release closing connections so as not to
make process stopping too long. However, this does not conform to RFC
9000 recommendations and may cause some clients to miss a
CONNECTION_CLOSE in case of packet loss.
A recent fix introduced a shorter timeout for the closing state. Now a
connection should only be left in this state for one second or less.
This greatly reduces the importance of the stopping special condition.
Thus, this patch removes it completely.
Willy Tarreau [Thu, 9 Nov 2023 15:53:32 +0000 (16:53 +0100)]
OPTIM: mux-h2: don't allocate more buffers per connections than streams
When an H2 mux works with a slow downstream connection and without the
mux-to-mux mode, it is possible that a single stream will allocate all 32
buffers in the connection. This is not desirable at all because 1) it
brings no value, and 2) it allocates a lot of memory per connection,
which, in addition to the memory cost itself, tends to degrade performance
due to cache thrashing.
This patch improves the situation by refraining from sending data frames
over a connection when more mbufs than streams are allocated. On a test
featuring 10k connections each with a single stream reading from the
cache, this patch reduces the RAM usage from ~180k buffers to ~20k bufs,
and improves the bandwidth. This may even be backported later to recent
versions to improve memory usage. Note however that it is efficient only
when combined with e16762f8a ("OPTIM: mux-h2: call h2_send() directly
from h2_snd_buf()"), and tends to slightly reduce the single-stream
performance without it, so in case of a backport, the two need to be
considered together.
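The throttling condition itself is simple and can be sketched like this
(illustrative names, not the exact h2c fields):

    /* decide whether a stream may keep filling DATA frames into the
     * connection's mbuf ring: once more buffers than streams are
     * allocated, stop producing and let the tasklet flush first */
    static int h2_may_buffer_data(unsigned int allocated_mbufs,
                                  unsigned int nb_streams)
    {
        return allocated_mbufs <= nb_streams;
    }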
Willy Tarreau [Thu, 9 Nov 2023 11:05:08 +0000 (12:05 +0100)]
MINOR: task/debug: make task_queue() and task_schedule() possible callers
It's common to see process_stream() being woken up by wake_expired_tasks
in the profiling output, without knowing which timeout was set to cause
this. By making it possible to record the call places of task_queue()
and task_schedule(), and by making wake_expired_tasks() explicitly not
replace it, we'll be able to know which task_queue() or task_schedule()
was triggered for a given wakeup.
Willy Tarreau [Thu, 9 Nov 2023 11:03:24 +0000 (12:03 +0100)]
MINOR: task/debug: explicitly support passing a null caller to wakeup functions
This is used for tracing and profiling. By permitting to have a NULL
caller, we allow a caller to explicitly pass zero to state that the
current caller must not be replaced. This will soon be used by
wake_expired_tasks() to avoid replacing a caller in the expire loop.
A client may send multiple INITIAL packets if the ClientHello is too big
for only one. In case a Retry token is used, the client must reuse it for
every INITIAL packet.
On the haproxy server side, there was an inconsistency in the handling of
these packets depending on the socket mode:
* when using listener socket, token is always revalidated.
* when using connection socket, token check is bypassed. This is because
quic_conn instance is known through its socket and thus
quic_rx_pkt_retrieve_conn() is not necessary.
RFC 9000 does not seem to mandate retry token validation after the
first INITIAL packet per connection. Thus, this patch chooses to bypass
the check every time the connection instance is known, as this indicates
that a previous token was already validated.
QUIC connections are pushed manually into a dedicated listener queue
when they are ready to be accepted. This happens after handshake
finalization or on 0-RTT packet reception. Listener is then woken up to
dequeue them with listener_accept().
This patch counts the number of connections currently stored in
the accept queue. If a certain limit is reached, INITIAL packets are
dropped on reception to prevent further QUIC connection allocations.
This should help to preserve system resources.
This limit is automatically derived from the listener backlog. Half of
its value is reserved for handshakes and the other half for the accept
queue. By default, backlog is equal to maxconn, which guarantees that
there can be no more than maxconn connections in handshake or waiting
to be accepted.
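For illustration, the resulting check can be sketched like this (names
are made up):

    /* threshold for dropping INITIAL packets, derived from the listener
     * backlog: half is reserved for handshakes, half for the accept queue */
    static int quic_accept_queue_full(unsigned int queued_conns,
                                      unsigned int backlog)
    {
        return queued_conns >= backlog / 2;
    }

INITIAL packets are then dropped on reception as long as this condition
holds.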
Implement a limit per listener for concurrent number of QUIC
connections. When reached, INITIAL packets for new connections are
automatically dropped until the number of handshakes is reduced.
The limit value is automatically based on listener backlog, which itself
defaults to maxconn.
This feature is important to ensure CPU and memory resources are not
consumed if too many handshake attempts are started in parallel.
Special care is taken if a connection is released before handshake
completion. In this case, the counter must be decremented. This forces us
to ensure that member <qc.state> is set early in qc_new_conn() before any
quic_conn_release() invocation.
Accounting is implemented for half-open connections, which represent QUIC
connections waiting for handshake completion. When a certain limit is
reached, the Retry mechanism is automatically activated prior to
instantiating new connections.
The issue with this behavior is that two notions are mixed: the QUIC
connection handshake phase and Retry, which is a mechanism against
amplification attacks. As such, only peer address validation should be
taken into account to activate Retry protection.
This patch chooses to reduce the scope of half_open_conn. Only
connections waiting for peer address validation are now accounted for.
Most notably, connections instantiated after a validated Retry token
check are not accounted.
One impact of this patch is that it should prevent the Retry mechanism
from being activated too early, in particular if multiple handshakes are
too slow. Another limitation should be implemented to protect against
this scenario.
When a new QUIC connection is created, server considers peer address as
not yet validated. The server must limit its sending up to 3 times the
content already received. This is a defensive measure to avoid flooding
a remote host victim of address spoofing.
This patch adjusts the condition to consider the peer address as
validated. Two conditions are now considered:
* successful handling of a received HANDSHAKE packet. This was already
done before although implemented in a different way.
* validation of a Retry token. This was not considered prior to this patch
despite the RFC recommendation.
This patch also adjusts how a connection is internally labelled as using
a validated peer address. Before, the above conditions were checked via
quic_peer_validated_addr(). Now, a flag QUIC_FL_CONN_PEER_VALIDATED_ADDR
is set to label this. It already existed prior to this patch but was
only used for quic_cc_conn. This should now be more explicit.
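A minimal sketch of the two conditions (the flag name is the real one, the
helper and its arguments are illustrative):

    #define QUIC_FL_CONN_PEER_VALIDATED_ADDR 0x00000001u /* illustrative value */

    /* mark the peer address as validated under either condition */
    static void qc_update_peer_validation(unsigned int *conn_flags,
                                          int handshake_pkt_handled,
                                          int retry_token_validated)
    {
        if (handshake_pkt_handled || retry_token_validated)
            *conn_flags |= QUIC_FL_CONN_PEER_VALIDATED_ADDR;
    }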
BUG/MEDIUM: mux-h1: Exit early if fast-forward is not supported by opposite SC
The commit 4be0c7c65 ("MEDIUM: stconn/muxes: Loop on data fast-forwarding to
forward at least a buffer") introduced a regression. In h1_fastfwd(), if
data fast-forwarding is not supported by the opposite SC, we must exit
without calling se_done_ff(). Otherwise a BUG_ON() will be triggered because
the opposite mux has no .done_fastfwd() callback function.
Defining a master CLI without the master-worker mode has emitted a warning
since version 1.8. This patch enforces the behavior by forbidding the
usage of the -S option without the master-worker mode.
The documentation about -q seems wrong: it does not only suppress messages
after startup, it disables all messages. Output was always quieted with
the stdio_quiet() function.
MEDIUM: errors: move the MODE_QUIET test in print_message()
Move the MODE_QUIET and MODE_VERBOSE tests into print_message() so we
always output to the startup-logs even with MODE_QUIET.
ha_warning(), ha_alert() and ha_notice() do not check MODE_QUIET
and MODE_VERBOSE anymore; it is done before the fprintf() in
print_message().
BUG/MEDIUM: stconn: Don't update stream expiration date if already expired
The commit 08d7169f4 ("MINOR: stconn: Don't queue stream task in past in
sc_notify()") tried to fix issues with expiration dates set in the past for
the stream in sc_notify(). However some cases remain where the stream
expiration date may already be expired before recomputing it. This happens
when an event is reported by the mux exactly when a timeout is triggered. In
this case, depending on the scheduling, the SC may be woken up before the
stream. For these cases, we fall into the BUG_ON() preventing queuing in
the past.
So, it remains unexpected to queue a task in the past. The BUG_ON() is
correct at this place. We must just avoid recomputing the stream expiration
date if it is already expired. At worst, the stream will be woken up for
nothing. But it is not really a big deal because it will only happen on
timeouts from time to time. It is so sporadic that we can ignore it from a
performance point of view.
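The logic can be sketched with haproxy-style ticks, where 0 means
"eternity" (standalone, illustrative helpers):

    #define TICK_ETERNITY 0

    static inline int tick_is_expired(unsigned int t, unsigned int now)
    {
        return t != TICK_ETERNITY && (int)(t - now) <= 0;
    }

    /* keep the current date if it is already expired, so the task is
     * never re-queued in the past; the stream will be woken up anyway */
    static unsigned int next_stream_expire(unsigned int cur, unsigned int now,
                                           unsigned int recomputed)
    {
        if (tick_is_expired(cur, now))
            return cur;
        return recomputed;
    }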
This patch must be backported to 2.8. Be careful to remove the BUG_ON() in
the 2.8.
BUG/MEDIUM: quic: Possible crashes during secrets allocations (heavy load)
This bug could be reproduced with -dMfail option and detected by libasan.
When the TLS secrets allocations fail, quic_tls_ctx_secs_free()
is called. It resets the already initialized secrets. Some were detected
as initialized when they were not, or with a non-initialized length, which
leads to big "memset(0)" calls detected by libasan.
Ensure that all the secrets are really initialized with correct lengths.
BUG/MEDIUM: quic: Avoid some crashes upon TX packet allocation failures
If a TX packet cannot be allocated (by qc_build_pkt()), as it can be coalesced
with another one, this leaves remaining prepared data in the TX buffer that
were never sent. Then haproxy crashes upon a BUG_ON() triggered by the next
call to qc_txb_release(). This may happen only during handshakes.
To fix this, qc_build_pkt() returns a new -3 error to detect such allocation
failures, which is from now on followed by a call to qc_purge_txbuf() to
send the prepared TX data and purge the TX buffer.
BUG/MEDIUM: quic: Possible crashes when sending too short Initial packets
This may happen during handshakes when Handshake packets cannot be coalesced
with a first Initial packet because of TX frame allocation failures (from
qc_build_frms()). This leads to too short (not padded) Initial packets being
sent. This is detected by a BUG_ON() in qc_send_ppkts().
To avoid this, a Handshake packet without the ack-eliciting frames which
should have been built by qc_build_frms() is built.
BUG/MEDIUM: quic: Avoid trying to send ACK frames from an empty ack ranges tree
This may happen upon ack range allocation failures (from
quic_update_ack_ranges_list()). This can lead to empty trees of ack ranges
being used to build ACK frames, which is not good at all. Furthermore this
is detected by a BUG_ON() (in qc_do_build_pkt()).
To avoid this, simply update the acknowledgement state of the connection
only if quic_update_ack_ranges_list() succeeds, as it fails only in case of
memory allocation failures.
BUG/MEDIUM: quic: Too short Initial packet sent (enc. level allocation failed)
If the Handshake encryption level could not be allocated, this could lead
to too short Initial packets being sent because no Handshake CRYPTO frames
were generated. Furthermore, in such an allocation failure case, the
connection should be closed as soon as possible. This is done by making
ha_quic_set_encryption_secrets() return 0 upon an encryption level
allocation failure.
Also fix a typo in the trace in relation to this allocation failure.
BUG/MINOR: quic: idle timer task requeued in the past
When the idle timer expired with a mux still present, this task was not
freed and was even requeued with a timer in the past.
Fix this issue by calling task_destroy() in this case. As the task is freed,
its handler must return NULL, setting the local <t> variable to NULL in
every case.
Also ensure that this timer task is not armed again after having been
released, returning early from qc_idle_timer_do_rearm() when this is the
case.
MINOR: quic: release the TLS context asap from quic_conn_release()
There was no reason not to release the TLS/SSL QUIC connection context as
soon as possible from quic_conn_release(), before allocating a "closing
connection" connection (quic_cc_conn struct).
MEDIUM: quic: Heavy task mode with non contiguously bufferized CRYPTO data
This patch sets the handshake task in heavy task mode when receiving
out-of-order CRYPTO data which result in in-order buffered CRYPTO data.
This is done thanks to a non-contiguous buffer and from
qc_handle_crypto_frm() after having potentially buffered CRYPTO data in
this buffer.
qc_treat_rx_crypto_frms() is no longer called from qc_treat_rx_pkts();
instead this is where the task is set in heavy task mode. Consequently,
it is the job of qc_ssl_provide_all_quic_data() to directly call
qc_treat_rx_crypto_frms() to provide the in-order buffered CRYPTO data to
the TLS stack. As this function releases the non-contiguous buffer for the
CRYPTO data, if possible, there is no need to do that from
qc_treat_rx_crypto_frms() anymore.
Add a new pool for the CRYPTO data frames received in order.
Add ->rx.crypto_frms list to each encryption level to store such frames
when they are received in order from qc_handle_crypto_frm().
Also set the handshake task (qc_conn_io_cb()) in heavy task mode from
this function after having received such frames. When this task
detects that it is set in heavy mode, it calls the newly implemented
qc_ssl_provide_all_quic_data() function to provide the CRYPTO data to the
TLS stack.
Modify quic_conn_enc_level_uninit() to release these CRYPTO frames
when releasing the encryption level they are in relation with.
MINOR: stconn/mux-h2: Use an iobuf flag to report EOI to consumer side during FF
The IOBUF_FL_EOI iobuf flag is now set by the producer to notify the
consumer that the end of input was reached. Thanks to this flag, we can
remove the ugly hack in h2_done_ff() testing the opposite SE flags.
Of course, for now, it works and it is good enough. But we must keep in
mind that EOI is always forwarded from the producer side to the consumer
side in this case. But if this changes, a new CO_RFL_ flag will have to be
added to instruct the producer whether it can forward EOI or not.
MEDIUM: stconn/muxes: Loop on data fast-forwarding to forward at least a buffer
In the mux-to-mux data forwarding, we now try, as far as possible, to send
at least a full buffer. Of course, if the consumer side is congested or if
nothing more can be received, we leave. But the idea is to retry
fast-forwarding data if less than a buffer was forwarded. It is only
performed for buffer fast-forwarding, not splicing.
The idea behind this patch is to optimise the forwarding, when a first
forward was performed to complete a buffer with some existing data. In this
case, the amount of data forwarded is artificially limited because we are
using a non-empty buffer. But without this limitation, it is highly probable
that a full buffer could have been sent. And indeed, with an H2 client, a
significant improvement was observed during our tests.
To do so, the .done_fastfwd() callback function must be able to deal with
interim forwards. Especially for the H2 mux, the H2_SF_NOTIFIED flag must
be removed from the H2S on the last call only. Otherwise, the H2 stream can
be blocked by itself because it is in the send_list. The IOBUF_FL_INTERIM_FF
iobuf flag is used to notify the consumer it is not the last call. This flag
is then removed on the last call.
Willy Tarreau [Wed, 8 Nov 2023 15:44:20 +0000 (16:44 +0100)]
BUG/MEDIUM: pool: try once to allocate from another bucket if empty
In order to limit inter-thread contention on the global pool, in 2.9-dev3
with commit 7bf829ace ("MAJOR: pools: move the shared pool's free_list
over multiple buckets"), it was decided that if the selected bucket had
an empty free list, we would simply give up and fall back to the OS
allocator.
But this causes allocations to be made from the OS for certain threads,
to be released to overloaded pools that are sent back to the OS. One
visible effect is that sending a lot of traffic using h2load with 100
parallel streams over 100 connections causes 5-10k buffers to be
allocated, then reducing the load to only 10 connections doesn't make
these allocations go down, just because some buckets are no longer
visited.
Tests show that giving a second chance to pick another bucket in this
case is sufficient to visit all other buckets and recycle their pending
objects. Now "show pools" that starts at 10k buffers at 100 connections
goes down to about 150 with 1 connection and 100 streams in a fraction
of a second.
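The shape of the retry can be sketched like this (illustrative, not the
exact pool code):

    /* try the selected bucket, then exactly one other one before giving
     * up and falling back to the OS allocator */
    static void *pool_pick(void *(*try_bucket)(unsigned int bucket),
                           unsigned int bucket, unsigned int nb_buckets)
    {
        void *item = try_bucket(bucket);

        if (!item && nb_buckets > 1)
            item = try_bucket((bucket + 1) % nb_buckets); /* second chance */
        return item; /* NULL: the caller falls back to the OS allocator */
    }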
No backport is needed, as the issue is only in 2.9.
Willy Tarreau [Wed, 8 Nov 2023 15:44:20 +0000 (16:44 +0100)]
BUG/MINOR: pool: check one other random bucket on alloc conflict
Since 2.9-dev3 with commit 7bf829ace ("MAJOR: pools: move the shared
pool's free_list over multiple buckets"), the global pool supports
multiple heads to reduce inter-thread contention. However, when
grabbing a freelist head fails because another thread is already
picking from it, we just skip to the next one and try again.
Unfortunately, it still maintains a bit of contention between thread
pairs when for some reason only a few threads are used. This may
happen for example when running on a 4- or 8-thread system and
the two most active ones end up on adjacent buckets.
A better and much simpler solution consists in visiting a random bucket
instead of the current one. Tests show that the CPU usage spent in
pool_refill_local_from_shared() decreases at low numbers of connections
(hence threads).
No backport is needed, as the issue is only in 2.9.
Willy Tarreau [Wed, 8 Nov 2023 15:35:49 +0000 (16:35 +0100)]
BUG/MEDIUM: pool: fix releasable pool calculation when overloaded
In 2.6-dev1, the method used to decide how many pool entries could be
released at once was revisited to support releases in batches. This was
done with commits 91a8e28f9 ("MINOR: pool: add a function to estimate
how many may be released at once") and 361e31e3f ("MEDIUM: pool: compute
the number of evictable entries once per pool").
The first commit takes care of the possible inconsistency between the
moment the allocated count and the used count are read, but unfortunately
fixed it the wrong way, by adjusting "used" to match "alloc" whenever it
was lower (i.e. almost always). This results in a nasty case which is that
as soon as the allocated value becomes higher than the estimated count of
needed entries, we end up returning pool->minavail, which causes very
small batches to be released, starting from commit 1513c5479 ("MEDIUM:
pools: release cached objects in batches").
The problem was further amplified in 2.9-dev3 with commit 7bf829ace
("MAJOR: pools: move the shared pool's free_list over multiple buckets")
because it now becomes possible for a thread to allocate from one bucket
and release into a few other different ones, causing an accumulation of
entries in that bucket.
The fix is trivial: simply adjust the alloc counter if the used one is
higher, before performing the computations.
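A sketch of the corrected estimate (standalone; the exact formula around
pool->minavail is simplified here):

    /* estimate how many entries may be released at once; reconcile the
     * racily-read counters by raising <alloc>, never lowering <used> */
    static unsigned int releasable(unsigned int alloc, unsigned int used,
                                   unsigned int needed, unsigned int minavail)
    {
        if (used > alloc)
            alloc = used; /* the old code instead raised <used> to <alloc> */
        if (alloc <= needed + minavail)
            return 0;
        return alloc - needed - minavail;
    }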
BUG/MEDIUM: freq-ctr: Don't report overshoot for long inactivity period
The function returning the excess of events over the current period for a
target frequency (the overshoot) has a flaw if the inactivity period is too
long. In this case, the result may overflow. Instead of being negative, a
very high positive value is returned.
This function is used by the bandwidth limitation filter. It means that
after a long inactivity period, a huge burst may be detected while it
should not be.
In fact, the problem arises from the moment we're past the current period.
In this case, we should not report any overshoot and just get the number of
remaining events as usual.
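The intended behavior can be sketched like this (illustrative, not the
exact freq-ctr code):

    /* events in excess over the budget for the current period; once the
     * period is entirely elapsed, report no overshoot instead of letting
     * the computation overflow into a huge positive value */
    static long overshoot(unsigned int now_ms, unsigned int period_start_ms,
                          unsigned int period_ms, unsigned int events,
                          unsigned int freq)
    {
        unsigned int elapsed = now_ms - period_start_ms;

        if (elapsed >= period_ms)
            return 0;
        return (long)events - (long)((unsigned long long)freq * elapsed / period_ms);
    }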
BUG/MINOR: mux-h1: Properly handle http-request and http-keep-alive timeouts
It is now the turn of the H1 mux to be fixed to properly handle http-request
and http-keep-alive timeouts. It is quite surprising, but it has been broken
since 2.2. For idle connections on the client side, the smallest value
between the client timeout and the http-request/http-keep-alive timeout is
used while the client timeout should only be used if the other ones are not
defined. So, if the client timeout is the smallest value, the keep-alive
timeout is not respected.
It is only an issue for idle client connections. The http-request timeout is
respected from the moment part of the next request has been received.
This patch should fix the issue #2334. It must be backported as far as 2.2.
But be careful during the backports. The H1 mux has evolved a lot since 2.2.
MINOR: stktable/cli: support v6tov4 and v4tov6 conversions
Add a special treatment for the IPV4 and IPV6 cases in the
table_process_entry_per_key() function so that the input string is parsed
on a best-effort basis (STR to pseudo type ADDR): the input format is first
considered over the table type, and smp_to_stkey() then does the type
conversion for us when needed.
This patch heavily depends on:
- "MEDIUM: stktable/cli: simplify entry key handling"
And optionally depends on:
- 72514a44 ("MEDIUM: tools/ip: v4tov6() and v6tov4() rework")
Make use of smp_to_stkey() in table_process_entry_per_key() to simplify
key handling and leverage auto type conversions from sample API.
One noticeable side effect is that integer input checks will be relaxed
given that the c_str2int() sample conv is more permissive than the
integrated table_process_entry_per_key() integer parser.
BUG/MINOR: stick-table/cli: Check for invalid ipv4 key
When an ipv4 key is used to filter a CLI command on a stick table
(clear/set/show table ...), the inetaddr_host()+htonl() combination was
used with no error checking.
Instead, we now use inet_pton(), which is what we use for ipv6 addresses
since b7c962b0c0 ("BUG/MINOR: stick-table/cli: Check for invalid ipv6 key")
Doing this allows us to easily check for parsing errors: we're trading off
some parsing efficiency to better catch input errors and ensure we get
similar behavior between ipv4 and ipv6 address handling.
This patch may be backported to all supported versions.
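The new parsing can be sketched with the standard call (inet_pton() is
POSIX; the helper name is made up):

    #include <arpa/inet.h>

    /* strict ipv4 key parsing: unlike inetaddr_host()+htonl(), inet_pton()
     * reports malformed input so the CLI can reject the key */
    static int parse_ipv4_key(const char *str, struct in_addr *out)
    {
        return inet_pton(AF_INET, str, out) == 1;
    }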
BUG/MINOR: mux-h1: Release empty ibuf during data fast-forwarding
We must take care to release the H1 input buffer when it is emptied during
the fast-forwarding negotiation. Otherwise, it may be kept allocated for a
while, waiting for the next "normal" receive or the H1C release.
Use the backend connect timeout when a new connection is instantiated for
rhttp. This ensures that if the connect operation fails after a certain
delay, the reverse_connect listener task is woken up. This allows freeing
the current connection and retrying a new connect.
As a consequence of this change, rev_process() may be woken up even if the
connection is not reported with CO_FL_ERROR. This happens if the timeout
fired before any network-reported issue. Connection freeing is adjusted as
in this case the MUX instance is already allocated. Use the destroy
callback to release the MUX context prior to the connection itself.
This patch is really useful as a side measure for a haproxy bug impacting
connect with SSL for both backend connections and active reverse connect.
This is caused by the delayed allocation of the MUX instance. Asynchronous
connect errors detected at the socket layer are not notified to upper
layers. Currently, only the connect timeout allows such a failed connection
to be released.
BUG/MEDIUM: mux-h1: Be sure xprt support splicing to use it during fast-forward
The commit d6d4abdc3 ("BUILD: mux-h1: Fix build without kernel splicing
support") introduced a regression. The kernel support for the underlying
XPRT is no longer checked. So it is possible to enable splicing for SSL
connections. This of course leads to a segfault.
This patch restores the test on the xprt rcv_pipe/snd_pipe functions.
This patch should fix a crash reported by Tristan in #2095
(#issuecomment-1788949014). No backport needed.
BUG/MEDIUM: quic: fix sslconns on quic_conn alloc failure
QUIC connections are accounted inside global sslconns. As with QUIC
actconn, it suffered from a similar issue if an intermediary allocation
failed inside qc_new_conn().
Fix this similarly by moving increment operation inside qc_new_conn().
Increment and error path are now centralized and much easier to
validate.
The consequences are similar to the actconn fix: on memory allocation
failure, global sslconns may wrap, this time blocking any future QUIC or
SSL connections on the process.
Increment is always done prior to real allocation to guarantee minimal
resource consumption. Special care is taken to ensure there will always
be one decrement operation for each increment. To help this, decrement
is centralized in quic_conn_release().
This behaves incorrectly in case of an intermediary allocation failure
inside qc_new_conn(). In this case, quic_conn_release() will decrement
actconn. Then, a NULL qc is returned in quic_rx_pkt_retrieve_conn()
which will also decrement the counter on its own error code path.
To properly fix this, the actconn incrementation has been moved directly
inside qc_new_conn(). It is thus easier to cover every case:
* if alloc failure before or on pool_head_quic_conn, actconn is
decremented manually at the end of qc_new_conn()
* after this step, actconn will be decremented by quic_conn_release()
either on intermediary alloc failure or on proper connection release
This bug happens on memory allocation failure so it should be rare.
However, its impact is not negligible as if the actconn counter is wrapped
it will block any future connection allocation for both QUIC and TCP.
One small downside of this change is that a CID is now always allocated
before quic_conn even if maxconn will be reached. However, this is
considered as of minor importance compared to a more robust code.
DOC: stconn: Improve comments about lra and fsb usage
Recent fixes have shown that <lra> and <fsb> uses were not pretty clear. So
let's try to improve the documentation about these values, especially when
<lra> is updated and how to use it.
BUG/MINOR: stconn: Sanitize report for read activity
When an EOS or EOI is detected on the endpoint and the event is reported
at the SC level, a read activity must be reported. It is not really a big
deal because these flags already inhibit any read timeout. But it is
consistent with the <lra> comment. In addition, no read activity is
reported on abort. It is an up-down event and it is not an event unblocking
the reads. So there is no reason to report a read activity.
BUG/MEDIUM: Don't apply a max value on room_needed in sc_need_room()
In sc_need_room(), we compute the maximum room that can be requested to
restart reading, to be sure to be able to unblock the SC, at worst when the
buffer is emptied. Here, the buffer reserve is counted in, but this is an
issue.
Counting the reserve can lead to a wicked bug with the H1 multiplexer, when
small amounts of data are found at the end of the HTX buffer. In this case,
to avoid wrapping, the H1 mux requests more room. It is an optimization to
be able to resync the buffer with the consumer side and to perform
zero-copy transfers. However, if this amount of data is smaller than the
reserve and if the consumer is congested, we fall into a loop because the
wrong value is used to request more room. The H1 mux continues to pretend
there is not enough space in the buffer, while the effective requested
value is lower than the free space in the buffer. While the consumer is
congested and does not consume these data, there is no way to stop the
loop.
We can fix the function by removing the buffer reserve from the
computation. But it remains a dangerous decision to apply a max value on
room_needed. It is safer to require that the caller sets a correct value.
For now, this is true. But in the end, it is totally unexpected to wait for
more room than an empty buffer can contain.
MINOR: stconn: Don't queue stream task in past in sc_notify()
A task must never be queued in the past. However, in sc_notify(), the
stream task, if not woken up, is queued. Thanks to previous fixes, the
stream task expiration date should be correct. But to prevent any issue, a
BUG_ON() is added to be sure it never happens. I guess a good idea could be
to remove it or change it to BUG_ON_HOT() for the final release.
BUG/MEDIUM: stconn: Don't report rcv/snd expiration date if SC cannot expire
When the receive or send expiration date of a stream-connector is
retrieved, we now automatically check if it may expire. If not,
TICK_ETERNITY is returned.
The expiration dates of the frontend and backend stream-connectors are used
to compute the stream expiration date. This operation is performed at 2
places: at the end of process_stream() and in sc_notify() if the stream is
not woken up.
With this patch, there is no special change for process_stream() because it
was already handled. It makes things a little simpler. However, it fixes
sc_notify() by avoiding erroneously computing an expiration date in the
past. This highly reduces the stream wakeups when there is contention on
the consumer side.
The bug was introduced with the commit 8073094bf ("BUG/MEDIUM: stconn:
Always update stream's expiration date after I/O"). It was an error to
unconditionally set the stream expiration date, without testing blocking
conditions on both SCs.
BUG/MEDIUM: stconn: Report send activity during mux-to-mux fast-forward
When data are directly forwarded from a mux to the opposite one, we must
not forget to report send activity when data are successfully sent, or a
blocked send when data are blocked. It is important because otherwise, if
the transfer is quite long, longer than the client or server timeout, an
error may be triggered because the write timeout is reached.
H1, H2 and PT muxes are concerned. To fix the issue, the done_fastfwd()
callback now returns the amount of data consumed. This way it is possible
to update/reset the FSB data accordingly.
Willy Tarreau [Sat, 4 Nov 2023 08:38:16 +0000 (09:38 +0100)]
[RELEASE] Released version 2.9-dev9
Released version 2.9-dev9 with the following main changes :
- DOC: internal: filters: fix reference to entities.pdf
- BUG/MINOR: ssl: load correctly @system-ca when ca-base is define
- MINOR: lua: Add flags to configure logging behaviour
- MINOR: lua: change tune.lua.log.stderr default from 'on' to 'auto'
- BUG/MINOR: backend: fix wrong BUG_ON for avail conn
- BUG/MAJOR: backend: fix idle conn crash under low FD
- MINOR: backend: refactor insertion in avail conns tree
- DEBUG: mux-h2/flags: fix list of h2c flags used by the flags decoder
- BUG/MEDIUM: server/log: "mode log" after server keyword causes crash
- MINOR: connection: add conn_pr_mode_to_proto_mode() helper func
- BUG/MEDIUM: server: "proto" not working for dynamic servers
- MINOR: server: add helper function to detach server from proxy list
- DEBUG: add a tainted flag when ha_panic() is called
- DEBUG: lua: add tainted flags for stuck Lua contexts
- DEBUG: pools: detect that malloc_trim() is in progress
- BUG/MINOR: quic: do not consider idle timeout on CLOSING state
- MINOR: frontend: implement a dedicated actconn increment function
- BUG/MINOR: ssl: use a thread-safe sslconns increment
- MEDIUM: quic: count quic_conn instance for maxconn
- MEDIUM: quic: count quic_conn for global sslconns
- BUG/MINOR: ssl: suboptimal certificate selection with TLSv1.3 and dual ECDSA/RSA
- REGTESTS: ssl: update the filters test for TLSv1.3 and sigalgs
- BUG/MINOR: mux-quic: fix early close if unset client timeout
- BUG/MEDIUM: ssl: segfault when cipher is NULL
- BUG/MINOR: tcpcheck: Report hexstring instead of binary one on check failure
- MEDIUM: systemd: be more verbose about the reload
- MINOR: sample: Add fetcher for getting all cookie names
- BUG/MINOR: proto_reverse_connect: support SNI on active connect
- MINOR: proxy/stktable: add resolve_stick_rule helper function
- BUG/MINOR: stktable: missing free in parse_stick_table()
- BUG/MINOR: cfgparse/stktable: fix error message on stktable_init() failure
- MINOR: stktable: stktable_init() sets err_msg on error
- MINOR: stktable: check if a type should be used as-is
- MEDIUM: stktable/peers: "write-to" local table on peer updates
- CI: github: update wolfSSL to 5.6.4
- DOC: install: update the wolfSSL required version
- MINOR: server: Add parser support for set-proxy-v2-tlv-fmt
- MINOR: connection: Send out generic, user-defined server TLVs
- BUG/MEDIUM: pattern: don't trim pools under lock in pat_ref_purge_range()
- MINOR: mux-h2: always use h2_send() in h2_done_ff(), not h2_process()
- OPTIM: mux-h2: call h2_send() directly from h2_snd_buf()
- BUG/MINOR: server: remove some incorrect free() calls on null elements
Willy Tarreau [Sat, 4 Nov 2023 07:56:01 +0000 (08:56 +0100)]
BUG/MINOR: server: remove some incorrect free() calls on null elements
In commit 6f4bfed3a ("MINOR: server: Add parser support for
set-proxy-v2-tlv-fmt") a few free() calls were made to an element on
error path when it was detected it was NULL. It doesn't have any
effect, however there was one case of use-after-free at the end of
srv_settings_cpy() that was caught by gcc due to attempting to free
the element after freeing its holder.
Willy Tarreau [Sat, 4 Nov 2023 07:34:23 +0000 (08:34 +0100)]
OPTIM: mux-h2: call h2_send() directly from h2_snd_buf()
This makes it possible to eliminate full buffers very quickly and to
recycle them much faster, resulting in higher transfer rates and lower
memory usage at the same time. We just wake the tasklet up if it succeeded
so that h2_process() and friends are called to finalize what needs to be.
For regular buffer sizes, the performance level becomes quite close to
the one obtained with the zero-copy mechanism (zero-copy remains much
faster with non-default buffer sizes). The memory savings are huge with
default buffer size: at 64c * 100 streams on a single thread, we used
to forward 4.4 Gbps of traffic using 10400 buffers. After the change,
the performance reaches 5.9 Gbps with only 22-24 buffers, since they
are quickly recycled. That's a saving of 160 MB of RAM.
A concern was an increase in the number of syscalls but this is not
the case, the numbers remained exactly the same before and after.
Some experiments were made to try to cork data and not send incomplete
buffers, and that always negated these gains. One explanation might be
that keeping a first buffer with only HEADERS frames is sufficient to
prevent a zero-copy of the data coming in a next snd_buf() call. This
still needs to be studied anyway.
Willy Tarreau [Sat, 4 Nov 2023 07:12:17 +0000 (08:12 +0100)]
MINOR: mux-h2: always use h2_send() in h2_done_ff(), not h2_process()
By calling h2_process(), the code would theoretically make it possible
for a synchronous ->wake() call to provoke an indirect call to h2_snd_buf()
while we're in h2_done_ff(), which could be quite bad. The current
conditions do not permit it right now but this could easily break by
accident. Better use h2_send() and wake the task up if needed. Precise
performance tests showed no change.
Willy Tarreau [Sat, 4 Nov 2023 06:55:37 +0000 (07:55 +0100)]
BUG/MEDIUM: pattern: don't trim pools under lock in pat_ref_purge_range()
There's a subtle issue that results from pat_ref_purge_range() trying
to release memory. Since commit 0d93a8186 ("MINOR: pools: work around
possibly slow malloc_trim() during gc") that was backported to 2.3,
trim_all_pools() now protects itself against concurrent malloc() and
free() by isolating itself. The problem is that pat_ref_purge_range()
must be called under a lock, which is precisely what's done in
cli_io_handler_clear_map(). Thus during a clearing of a map, if
another thread tries to access or update an entry in the same map, it
will wait for the ref->lock to be released, and trim_all_pools() will
wait for all threads to be harmless, thus causing a deadlock. Note
that disabling memory trimming cannot work around the problem here
because it's tested only under isolation.
The solution here consists in moving the call to trim_all_pools() to
the caller, out of the lock.
MINOR: connection: Send out generic, user-defined server TLVs
To follow up on the implementation of the new set-proxy-v2-tlv-fmt
keyword in the server, the connection is updated to use the previously
allocated TLVs. If no value was specified, we send out an empty TLV.
As the feature is fully working with this commit, documentation and a
test for the server and default-server are added as well.
MINOR: server: Add parser support for set-proxy-v2-tlv-fmt
This commit introduces a generic server-side parsing of type-value pair
arguments and allocation of a TLV list via a new keyword called
set-proxy-v2-tlv-fmt.
This makes it possible to 1) forward any TLV type with the help of
fc_pp_tlv, and 2) generally send out any TLV type and value via a log
format expression.
To have this fully working the connection will need to be updated in
a follow-up commit to actually respect the new server TLV list.
MEDIUM: stktable/peers: "write-to" local table on peer updates
In this patch, we add the possibility to declare on a table definition
("table" in peer section, or "stick-table" in proxy section) that we
want the remote/peer updates on that table to be pushed on a local
haproxy table in addition to the source table.
Consider this example:
|peers mypeers
| peer local 127.0.0.1:3334
| peer clust 127.0.0.1:3333
| table t1.local type string size 10m store server_id,server_key expire 30s
| table t1.clust type string size 10m store server_id,server_key write-to mypeers/t1.local expire 30s
With this setup, we consider haproxy uses t1.local as cache/local table
for read and write operations, and that t1.clust is a remote table
containing data processed from t1.local and similar tables from other
haproxy peers in a cluster setup. The t1.clust table will be used to
refresh the local/cache one via the "write-to" statement.
What will happen is that every time haproxy sees entry updates for the
t1.clust table, it will overwrite the t1.local table with fresh data and
update the entry expiration timer. If the t1.local entry doesn't exist
yet (key doesn't exist), it will automatically be created. Note that only
types that cannot be used for arithmetic ops will be handled, and this
to prevent processed values from the remote table from interfering with
computations based on values from the local table (ie: to prevent
cumulative counters from growing indefinitely).
"write-to" will only push supported types if they both exist in the source
and the target table. Be careful with server_id and server_key storage
because they are often declared implicitly when referencing a table in
sticking rules but it is required to declare them explicitly for them to
be pushed between a remote and a local table through the "write-to" option.
Also note that the "write-to" target table should have the same type as
the source one, and that the key length should be strictly equal,
otherwise haproxy will raise an error due to the tables being
incompatible. A table that is already being written to cannot be used
as a source table for a "write-to" target.
Thanks to this patch, it will now be possible to use sticking rules in a
peer cluster context by using a local table as a local cache which
will be automatically refreshed by one or multiple remote table(s).
This commit depends on:
- "MINOR: stktable: stktable_init() sets err_msg on error"
- "MINOR: stktable: check if a type should be used as-is"
MINOR: stktable: check if a type should be used as-is
stick table types now have an extra bit named 'as_is' that allows us to
check if such a type should be used as-is or if it may be involved in
arithmetic operations such as counters. This can be useful since those
types are not common and may require specific handling.
e.g.: stktable_data_types[data_type].as_is will be set to 1 if the type
cannot be used in arithmetic operations.
BUG/MINOR: cfgparse/stktable: fix error message on stktable_init() failure
As a result of a copy-paste error in 1b8e68e ("MEDIUM: stick-table: Stop
handling stick-tables as proxies."), postparsing stktable_init() failures
were reported as such for named peer tables:
"Proxy 'table_name': failed to initialize stick table."
Now they are correctly reported like this:
"Parsing [file:line]: failed to initialize 'table_name' stick-table."
This should be backported to all stable versions.
BUG/MINOR: stktable: missing free in parse_stick_table()
When "peers" keyword is encountered within a stick table definition,
peers.name hint gets replaced with a new copy of the provided name using
strdup(). However, there is no detection on whether the name was
previously set or not, so it is currently allowed to reuse the keyword
multiple time to overwrite previous value, but here we forgot to free
previous value for peers.name before assigning it to a new one.
This should be backported to all stable versions.
BUG/MINOR: proto_reverse_connect: support SNI on active connect
SNI may be specified on a server line for connecting to the remote host.
This requires manually setting it on the connection via
ssl_sock_set_servername().
This step was missing when a server line was used for active reverse
HTTP. Fix this by adding the missing ssl_sock_set_servername()
invocation inside new_reverse_conn().
Note that for the moment, no session is instantiated to carry active
reverse connections. A direct consequence of this is that SNI sample
retrieval may crash if it depends on session parameters. This should be
fixed by a later commit. In the meantime, this patch is sufficient to
support simple SNI values such as constant expressions.
Ruei-Bang Chen [Fri, 27 Oct 2023 20:59:21 +0000 (13:59 -0700)]
MINOR: sample: Add fetcher for getting all cookie names
This new fetcher can be used to extract the list of cookie names from
Cookie request header or from Set-Cookie response header depending on
the stream direction. There is an optional argument that can be used
as the delimiter (which is assumed to be the first character of the
argument) between cookie names. The default delimiter is comma (,).
Note that we will treat the Cookie request header as a semi-colon
separated list of cookies and each Set-Cookie response header as
a single cookie and extract the cookie names accordingly.
When the `haproxy -c` check during the reload fails, no error is output
in the logs, which can be quite bothersome when trying to understand
what's going on.
This patch removes the -q option on the check so we can see the error
with `journalctl -u haproxy` or `systemctl status haproxy`
This will change the behavior when the check works, and will display
"Configuration file is valid"
Note that in some cases this test could be completely removed, because
the master process loads the configuration itself and is able to keep
the previous workers running when the reload fails. It is interesting
to disable the test when there are a lot of certificates or files to
load, to divide the reload time by 2.
BUG/MINOR: tcpcheck: Report hexstring instead of binary one on check failure
When an expect rule fails for a tcp-check, information about the expect
rule is dumped in the report. For a check on a binary string, a hexstring
is used in the configuration but the decoded string is dumped. It is a
problem because it can contain special characters. And it is not really
handy because there is no correspondence with the config.
So, now, the hexstring is dumped in the report. This way, we are sure
there are no special characters and it is easy to find it in the
configuration.
This patch should solve the issue #2326. It must be backported as far as
2.2.
The patch which fixes the certificate selection uses
SSL_CIPHER_get_id() to skip the SCSV ciphers without checking if cipher
is NULL. This patch fixes the issue by skipping any NULL cipher in the
iteration.
Amaury Denoyelle [Thu, 26 Oct 2023 16:17:29 +0000 (18:17 +0200)]
BUG/MINOR: mux-quic: fix early close if unset client timeout
When no client timeout is defined in the configuration, the QCC timeout
task is never allocated. However, a NULL timeout task is also used as a
criterion in qcc_is_dead() to consider that the MUX instance should be
released because its timeout had fired earlier.
This bug causes every connection to be closed by haproxy side with a
CONNECTION_CLOSE. This is notable when using several streams per
connection with only the first stream completed and the others failed.
To fix this, change the timeout task allocation policy. It is now always
allocated. This means that if no timeout is defined, it will never be
run. This is not considered a waste of resources as having no timeout in
the configuration is an exceptional case. However, this has the
advantage of simplifying the rest of the code, which can now check for the
task instance without an extra check on the timeout value.
This bug is labelled as minor as it only occurs if no client timeout is
defined, a condition which already reports a warning on startup as it may
cause unexpected behavior.
REGTESTS: ssl: update the filters test for TLSv1.3 and sigalgs
Signature algorithms allow us to select the right certificates when
using TLSv1.3. This patch updates the ssl_crt-list_filters.vtc regtest to
do more precise testing with TLSv1.3 in addition to TLSv1.2.
This allows us to test correctly bug #2300.
It could be backported to 2.8 with the previous fix for certificate
selection.
BUG/MINOR: ssl: suboptimal certificate selection with TLSv1.3 and dual ECDSA/RSA
When using TLSv1.3, the signature algorithms extension is used to choose
the right ECDSA or RSA certificate.
However there was an old test for previous version of TLS (< 1.3) which
was testing if the cipher is compatible with ECDSA when an ECDSA
signature algorithm is used. This test was relying on
SSL_CIPHER_get_auth_nid(cipher) == NID_auth_ecdsa to verify if the
cipher is still good.
Problem is, with TLSv1.3, all ciphersuites are compatible with any
authentication algorithm, but SSL_CIPHER_get_auth_nid(cipher) does not
return NID_auth_ecdsa, but NID_auth_any.
Because of this, with TLSv1.3 when both ECDSA and RSA certificates are
available for a domain, the ECDSA one is not chosen in priority.
This patch also introduces a test on the cipher IDs for the signaling
ciphersuites, because they would always return NID_auth_any, and are not
relevent for this selection.
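A minimal sketch of the resulting selection logic (the loop context
and variable names are assumptions; the two IDs are the standard SCSV
signaling ciphersuites):
| /* for each cipher offered by the client */
| uint32_t id = SSL_CIPHER_get_id(cipher);
|
| if (id == 0x030000FFUL || id == 0x03005600UL)
|     continue; /* signaling suites: not relevant for this selection */
| if (SSL_CIPHER_get_auth_nid(cipher) == NID_auth_ecdsa
|     || SSL_CIPHER_get_auth_nid(cipher) == NID_auth_any)
|     has_ecdsa_cipher = 1; /* TLSv1.3 suites match any auth algorithm */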
Amaury Denoyelle [Wed, 25 Oct 2023 13:38:50 +0000 (15:38 +0200)]
MEDIUM: quic: count quic_conn for global sslconns
Similar to the previous commit, which checks for maxconn before
allocating a QUIC connection, this patch checks for maxsslconn at the
same step. This is necessary as a QUIC connection cannot run without
an SSL context.
This should be backported up to 2.6. It relies on the following patch :
"BUG/MINOR: ssl: use a thread-safe sslconns increment"
Amaury Denoyelle [Wed, 25 Oct 2023 08:52:23 +0000 (10:52 +0200)]
MEDIUM: quic: count quic_conn instance for maxconn
Increment actconn and check the maxconn limit when a quic_conn is
instantiated. This is necessary because, prior to this patch,
quic_conn instances were not counted: the global actconn was only
incremented after the handshake completed and the connection structure
was allocated.
The increment is done using increment_actconn() on INITIAL packet
parsing if a new connection is about to be created. If the limit is
reached, the allocation is cancelled and the INITIAL packet is dropped.
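A minimal sketch of this path (the exact call site and labels are
assumptions, and increment_actconn() is assumed to return non-zero on
success):
| /* during INITIAL packet parsing, right before creating the new
|  * quic_conn */
| if (!increment_actconn()) {
|     /* global maxconn reached: cancel the allocation and drop
|      * the INITIAL packet */
|     goto drop;
| }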
The decrement is done in quic_conn_release(). This means that
quic_cc_conn instances are not taken into account. This seems safe
enough because quic_cc_conn instances only serve a minimal purpose.
The counterpart of this change is that maxconn must not be checked a
second time when listener_accept() runs over a QUIC connection. For
this, a new bind_conf flag BC_O_XPRT_MAXCONN is set for listeners
whose maxconn is already accounted for by the lower layer. For the
moment, it is positioned only for QUIC listeners.
Without this patch, the haproxy process could suffer from heavy
memory/CPU load if the number of concurrent handshakes is high.
This patch is not considered a bug fix per se. However, it has the
major benefit of protecting against too many QUIC handshakes. As such,
it should be backported up to 2.6. For this, it relies on the
following patch :
"MINOR: frontend: implement a dedicated actconn increment function"
Amaury Denoyelle [Wed, 25 Oct 2023 13:38:04 +0000 (15:38 +0200)]
BUG/MINOR: ssl: use a thread-safe sslconns increment
Each time a new SSL context is allocated, global.sslconns is
incremented. If global.maxsslconn is reached, the allocation is
cancelled.
This procedure was not entirely thread-safe because the check and the
increment were conducted at different stages. This could lead to
global.maxsslconn being slightly exceeded when several threads
allocate an SSL context while sslconns is near the limit.
To fix this, use a CAS operation in a do/while loop. This code is
similar to the actconn/maxconn increment for connections.
A new function increment_sslconn() is defined for this operation. For
the moment, only the SSL code uses it. However, it is expected that
QUIC will also use it to count QUIC connections as SSL ones.
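A minimal sketch of such a CAS loop (the helper name comes from this
description; the body and counter types are assumptions):
| static inline int increment_sslconn(void)
| {
|     unsigned int count = HA_ATOMIC_LOAD(&global.sslconns);
|
|     do {
|         if (global.maxsslconn && count >= global.maxsslconn)
|             return 0; /* limit reached: refuse the new SSL context */
|     } while (!HA_ATOMIC_CAS(&global.sslconns, &count, count + 1));
|     return 1; /* one slot successfully reserved */
| }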
This should be backported to all stable releases. Note that prior to
2.6, sslconns was outside of the global struct, so this commit will
have to be slightly adjusted.
Amaury Denoyelle [Wed, 25 Oct 2023 13:32:28 +0000 (15:32 +0200)]
MINOR: frontend: implement a dedicated actconn increment function
When a new frontend connection is instantiated, the actconn global
counter is incremented. If the global maxconn value is reached, the
connection is cancelled. This ensures that system limits remain under
control.
Prior to this patch, the atomic check/increment operations were done
directly in listener_accept(). Move them into a dedicated function,
increment_actconn(), in the frontend module. This will be useful when
QUIC connections are counted in the actconn counter.
Amaury Denoyelle [Wed, 25 Oct 2023 12:45:53 +0000 (14:45 +0200)]
BUG/MINOR: quic: do not consider idle timeout on CLOSING state
When entering the closing state, a QUIC connection is maintained for a
certain delay. The principle is to ensure the other peer has received
the CONNECTION_CLOSE frame. In case of packet duplication/reordering,
the CONNECTION_CLOSE is reemitted.
The QUIC RFC recommends using at least 3 times the PTO value. However,
prior to this patch, haproxy instead used the maximum of 3 times the
PTO and the connection idle timeout. In the default case, the idle
timeout is set to 30s, which is most of the time much larger than the
PTO. This has the downside of keeping the connection in memory far
longer than needed, whereas all resources could be released much
earlier.
Fix this behavior by using 3 times the PTO in closing or draining
state. This value is capped at 1s, which ensures that most connections
are covered. If a connection runs with a very high RTT, it must not
impact the whole process and should be released within a reasonable
delay.
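A minimal sketch of the timer computation (the names are assumptions;
quic_pto() stands for whatever returns the current PTO in
milliseconds):
| /* closing/draining: keep the connection just long enough for the
|  * peer to receive CONNECTION_CLOSE, then release everything */
| unsigned int delay = 3 * quic_pto(qc);
|
| if (delay > 1000)
|     delay = 1000; /* cap at 1s so a huge RTT cannot pin the conn */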
Willy Tarreau [Wed, 25 Oct 2023 13:42:27 +0000 (15:42 +0200)]
DEBUG: pools: detect that malloc_trim() is in progress
Now when calling ha_panic() with a thread still under malloc_trim(),
we'll set a new tainted flag to easily report it, and the output trace
will report that this condition happened and will suggest using the
no-memory-trimming option to avoid it in the future.
Willy Tarreau [Wed, 25 Oct 2023 13:02:59 +0000 (15:02 +0200)]
DEBUG: lua: add tainted flags for stuck Lua contexts
William suggested that, since we can detect the presence of Lua in
the stack, we combine this with stuck detection to set a new pair of
flags indicating a stuck Lua context and a stuck Lua shared context.
Now, executing an infinite loop in a Lua sample fetch function with
yield disabled crashes with tainted=0xe40 if loaded from a lua-load
statement, or tainted=0x640 from a lua-load-per-thread statement.
In addition, at the end of the panic dump, we can check if Lua was
seen stuck and emit recommendations about lua-load-per-thread and
the choice of dependencies depending on the presence of threads
and/or shared context.
Willy Tarreau [Wed, 25 Oct 2023 12:34:08 +0000 (14:34 +0200)]
DEBUG: add a tainted flag when ha_panic() is called
This will make it easier to know that the panic function was called,
for the occasional case where the dump crashes and/or the stack is
corrupted and not very exploitable. Now at least it will be sufficient
to check the tainted value to know that someone called ha_panic(), and
it will also be usable to condition extra analysis.
MINOR: server: add helper function to detach server from proxy list
Remove some code duplication by introducing a basic helper function
to detach a server from its parent proxy. It is supported to call
the function even if the server is not yet listed in the proxy list,
in which case the function does nothing. In delete_server(), we
previously performed some BUG_ON() checks to ensure that the detach
always succeeded, given that we were certain the server was in the
proxy list because it was retrieved through get_backend_server().
However this test is superfluous: we can safely assume that the
operation will always succeed if get_backend_server() returned
non-NULL (we're under full thread isolation), and if that's not the
case, then we have a bigger API issue anyway.
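A minimal sketch of such a helper (the name and list layout are
assumptions; the proxy is assumed to keep its servers in a
singly-linked list):
| /* unlink <srv> from its proxy's server list; does nothing if the
|  * server is not (yet) in the list */
| static void srv_detach(struct server *srv)
| {
|     struct server **cur = &srv->proxy->srv;
|
|     while (*cur && *cur != srv)
|         cur = &(*cur)->next;
|     if (*cur)
|         *cur = srv->next;
| }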
BUG/MEDIUM: server: "proto" not working for dynamic servers
In 304672320e ("MINOR: server: support keyword proto in 'add server' cli"),
the conn_get_best_mux_entry() function was used improperly:
First, the server's proxy mode was directly passed as the "proto_mode"
argument to conn_get_best_mux_entry(), but this is strictly invalid
because, while there is some relationship between proto modes and proxy
modes, they don't use the same storage mechanism and cannot be used
interchangeably.
Because of this bug, conn_get_best_mux_entry() would not work at all for
TCP, because PR_MODE_TCP equals 0 whereas PROTO_MODE_TCP equals 1.
Then another, less severe, bug remains:
as its name and description imply, conn_get_best_mux_entry() will try
its best to return something to the user, only using the keyword
(mux_proto) input as a hint to return the most relevant mux within the
list of muxes that are compatible with the proto_side and proto_mode
values.
This means that even if mux_proto cannot be found or is not available
with the current proto_side and proto_mode values,
conn_get_best_mux_entry() will most probably fall back to a more
generic mux.
However in cli_parse_add_server(), we directly check the result of
conn_get_best_mux_entry() and consider that it will return NULL if the
provided keyword hint for mux_proto cannot be found. This results in
the function not raising errors as expected, because most of the time,
if the expected proto cannot be found, we'll silently switch to the
fallback one despite the user providing an explicit proto.
To fix that, we store the result of conn_get_best_mux_entry() and
compare the returned mux proto name with the one we're expecting, as is
originally done in cfgparse during initial server keyword parsing.
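A minimal sketch of the fixed check (variable names are assumptions;
isteq() compares the requested name with the one of the returned mux):
| /* convert the proxy mode, look up the mux, then verify that the
|  * explicitly requested proto is really the one that was selected */
| proto_mode = conn_pr_mode_to_proto_mode(srv->proxy->mode);
| mux_ent = conn_get_best_mux_entry(srv->mux_proto->token,
|                                   PROTO_SIDE_BE, proto_mode);
| if (!mux_ent || !isteq(srv->mux_proto->token, mux_ent->token)) {
|     /* the explicit "proto" was not found: raise an error instead
|      * of silently using the generic fallback */
| }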
This patch depends on:
- "MINOR: connection: add conn_pr_mode_to_proto_mode() helper func"