MINOR: Add sample fetches to get the frontend and backend stream ID
"fc.id" and "bc.id" sample fetches can now be used to get, respectively, the
frontend or the backend stream ID. They rely on ->sctl() callback function
on the mux attached to the corresponding SC.
It means these sample fetches work only for connection, not applets, and
from the time a multiplexer is installed.
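As an illustration (the frontend layout and field names below are made up
for the example, not part of the patch), both IDs could be appended to a
custom log format:
    frontend fe_main
        mode http
        bind :8080
        default_backend be_main
        # fc.id: stream ID on the frontend connection (e.g. H2 stream ID)
        # bc.id: stream ID on the backend connection, once established
        log-format "%ci:%cp [%tr] %ft %b/%s %ST %B fs=%[fc.id] bs=%[bc.id]"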
MINOR: muxes: Implement ->sctl() callback for muxes and return the stream id
All muxes now implement the ->sctl() callback function and are able to
return the stream ID. For the PT multiplexer, it is always 0. For the H1
multiplexer, it is the request count for the current H1 connection (added for
this purpose). For the FCGI, H2 and QUIC muxes, the stream ID is returned.
The stream ID is returned as a signed 64-bit integer.
MINOR: muxes: Add a callback function to send commands to mux streams
Just like the ->ctl() callback function, used to send commands to mux
connections, the ->sctl() callback function can now be used to send commands
to mux streams. The first command, MUX_SCTL_SID, is a way to request the mux
stream ID.
MINOR: muxes: Rename mux_ctl_type values to use MUX_CTL_ prefix
Instead of the generic MUX_ prefix, we now use the MUX_CTL_ prefix for all
mux_ctl_type values. This avoids any ambiguities with other enums, especially
with a new one that will be added to get information on mux streams.
MINOR: stream: add a sample fetch to get the number of connection retries
"txn.conn_retries" can now be used to get the number of connection
retries. This value is only stable once the connection is fully
established. For HTTP sessions, L7-retries must also be passed.
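For example (an illustrative snippet; the header name is arbitrary), the
value could be exposed once the response is about to leave haproxy:
    http-after-response set-header X-Conn-Retries %[txn.conn_retries]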
MINOR: stream: Expose session terminate state via a new sample fetch
It is now possible to retrieve the session termination state, using
"txn.sess_term_state". The sample fetch returns the 2-character session
termination state.
Of course, the result of this sample fetch is volatile. It is subject to
change. It is also useless most of the time because no termination state is
set before the end of the transaction. It should only be useful in
http-after-response rule sets. It may also be used to customize the logs
using a log-format directive.
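A minimal illustration (header name and log layout chosen here for the
example only):
    # in an http-after-response rule set, once the state is known
    http-after-response set-header X-Term-State %[txn.sess_term_state]
    # or in a custom log format
    log-format "%ci:%cp [%tr] %ft %b/%s %ST %B term=%[txn.sess_term_state]"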
MEDIUM: http-ana: Set termination state before returning haproxy response
When, for any reason and at any step, HAProxy decides to interrupt an HTTP
transaction, it returns a dedicated response to the client (possibly empty)
and it sets the stream flags used to produce the session termination state.
Both operations were performed in no particular order, depending on the code
path. Most of the time, the HAProxy response was produced first.
With this patch, the stream flags for the termination state are now set
first. This way, these flags become visible from http-after-response rule
sets. Only errors occurring while the HAProxy response is generated are
reported later.
MINOR: http-fetch: Add a sample to get the transaction status code
It was already possible to get the status code of the HTTP response and the
one received from the server. Thanks to 'txn.status', it is now possible to
get the transaction status code. It is equivalent to '%ST' in log-format.
Most of the time, it is the same as 'status', except if the status code of
the HTTP reply does not match the one used to interrupt the transaction. For
instance, an error file mapped on status 400 but containing a 404.
We clearly state that the 'status' sample returns the status code the client
will receive, if no change happens on the HTTP response. This should avoid
ambiguities with the 'server_status' sample fetch.
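For example (an illustrative configuration; the error file path and header
name are placeholders), with an error file mapped on 400 that actually
contains a 404 response, 'status' reports what the client receives (404)
while 'txn.status' keeps the code used to interrupt the transaction (400):
    errorfile 400 /etc/haproxy/errors/404-body.http
    http-after-response set-header X-Txn-Status %[txn.status]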
MINOR: http-fetch: Add a sample to retrieve the server status code
The code returned by the "status" sample fetch is the one in the HTTP
response at the moment the sample is evaluated. It may be the status code in
the server response or the one of the HAProxy reply in case of error, deny,
redirect...
However, it can be handy to retrieve the status code returned by the
server, when an HTTP response was really received from it. This is the
purpose of the "server_status" sample fetch. The server status code itself
is stored in the HTTP txn.
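As a sketch (field names are arbitrary), both values could be logged side
by side to spot responses rewritten by HAProxy:
    log-format "%ci:%cp [%tr] %ft %b/%s sent=%[status] srv=%[server_status]"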
Amaury Denoyelle [Tue, 28 Nov 2023 15:01:38 +0000 (16:01 +0100)]
MINOR: h3: use correct error code for missing SETTINGS
Each received HTTP/3 frame is checked to ensure it is valid given the
type of stream and its current status. This was implemented via
h3_is_frame_valid().
Previously, no distinction was made for the error code, so every failure
triggered a CONNECTION_CLOSE_APP with code H3_FRAME_UNEXPECTED. However,
this function also ensures that the first frame received on the control
stream is of type SETTINGS. If not, the error code to use is
H3_MISSING_SETTINGS.
To support this, adjust the function prototype. Instead of returning a
boolean, 0 is returned for success, or an HTTP/3 error code otherwise. The
function is renamed h3_check_frame_valid() to reflect the return type change.
This is not considered a bug as the connection was previously correctly
closed on a missing SETTINGS, albeit with a non-conforming error code. It's
not deemed sufficient to be backported.
Amaury Denoyelle [Tue, 28 Nov 2023 11:00:40 +0000 (12:00 +0100)]
BUG/MINOR: h3: always reject PUSH_PROMISE
The condition for checking PUSH_PROMISE was not correctly interpreted
from the RFC. Initially, such a frame was only rejected for streams
initiated from the client side.
In fact, the RFC indicates that PUSH_PROMISE frames are never sent by a
client. Thus, they can be rejected in every case until HTTP/3 is implemented
on the backend side.
Amaury Denoyelle [Tue, 28 Nov 2023 14:59:38 +0000 (15:59 +0100)]
BUG/MINOR: h3: fix TRAILERS encoding
HTTP/3 trailers encoding was never working as intended. This is because
h3_trailers_to_htx() manipulated a newly allocated buffer instead of the
already existing channel one. Thus, the HTX message handled by the stream
was incomplete as it lacked the trailers and EOM.
Fix this by reusing the already allocated channel buffer in
h3_trailers_to_htx().
This bug was detected by simulating TRAILERS emission, which generates a
CL--- state due to the missing request side termination signal. Its impact
is deemed minimal as trailers are pretty infrequent for now in HTTP/3.
BUG/MEDIUM: mux-quic: Stop zero-copy FF during nego if input is not empty
When the producer negotiates with the QUIC mux to perform a zero-copy
fast-forward, data in the input buffer are first transferred into the H3
buffer. However, after the transfer, if the input buffer is not empty, the
data fast-forwarding must be stopped. In this case, qmux_nego_ff() must
return 0.
BUG/MEDIUM: master/cli: Properly pin the master CLI on thread 1 / group 1
A previous fix was pushed for that (13fb7170be "BUG/MEDIUM: master/cli: Pin
the master CLI on the first thread of the group 1"). Unfortunately, instead
of the master CLI, it was the sockpairs between the master and the workers
that were pinned to the first thread of group 1. So the crash was still
there.
So, again, to fix the bug, the master CLI is now pinned on the first thread
of the first group.
This patch should fix issue #2259 and must be backported to 2.8.
BUG/MINOR: compression: possible NULL dereferences in comp_prepare_compress_request()
This bug was introduced in ead43fe4f2 ("MEDIUM: compression: Make it so
we can compress requests as well.")
Two cases were not properly handled, resulting in two possible NULL
dereferences leading to crashes in the function at runtime:
- when the backend didn't define any compression options so its comp
pointer is NULL (ie: if only the frontend defines some comp options)
- when neither the frontend nor the backend set a compression algo
but at least one of the two defined some other comp options (comp
pointer set)
For the first case, we added the missing checks to make sure we don't
read the ->comp pointer if it is NULL.
For the second case, we properly return from the function if no
compression algo is defined, because there is no default value that could
be used as a fallback.
MEDIUM: log/balance: support FQDN for UDP log servers
In the previous log backend implementation, we created a pseudo log target
for each declared log server, and we made the log target's address point
to the actual server address to save some time and prevent unnecessary
copies.
But this was done without knowing that when an FQDN is involved (more
broadly, when dns/resolution is involved), the "port" part of the server
addr should not be relied upon, and we should explicitly use ->svc_port for
that purpose.
With that in mind, and thanks to the previous commit, some changes were
required: we allocate a dedicated addr within the log target when the target
is in DGRAM mode. The addr is first initialized with known values and is
then updated automatically by _srv_set_inetaddr() during runtime.
(the change is atomic so readers don't need to worry about it)
The addr of a server "log target" (INET/DGRAM mode) is now made of the
combination of the server's address (lacking the port part) and the
server's svc_port.
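An illustrative declaration (backend and server names are placeholders) of
a UDP log target by FQDN in a log backend:
    backend syslog_servers
        mode log
        # the FQDN is resolved like any other server address and may be
        # updated at runtime; the resolved address is combined with the
        # configured port (svc_port) to build the DGRAM log target
        server s1 udp@logsrv.example.com:514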
BUG/MAJOR: server/addr: fix a race during server addr:svc_port updates
For inet families (IP4/IP6), it is expected that the server's addr/port
might be updated at runtime from DNS, the cli or lua for instance.
Such updates were performed under the server's lock.
Unfortunately, most readers such as backend.c or sink.c perform the read
without taking the server's lock because they can't afford slowing down
their processing for a type of event which is normally rare. But this could
result in bad values being read for the server addr:svc_port tuple (ie:
during connection establishment) as a result of concurrent updates from
external components, which can obviously cause some undesirable effects.
Instead of slowing the readers down, as we consider server addr changes
to be relatively rare, we take another approach and update the
addr:port atomically by performing changes under full thread isolation
when a new change is requested. The changes are performed by a dedicated
task which takes care of isolating the current thread and doesn't depend
on other threads (independent code path) to protect against deadlocks.
As such, server addr:port changes will now be performed atomically, but
they will not be processed instantly; they will be translated to events
that the dedicated task will pick up from time to time to apply the
pending changes.
This bug existed for a very long time and has never been reported so
far. It was discovered by reading the code during the implementation
of log backend ("mode log" in backends). As it involves changes in
sensitive areas as well as thread isolation, it is probably not
worth considering backporting it for now, unless it is proven that it
will help to solve bugs that are actually encountered in the field.
This patch depends on:
- 24da4d3 ("MINOR: tools: use const for read only pointers in ip{cmp,cpy}")
- c886fb5 ("MINOR: server/ip: centralize server ip updates")
- the event_hdl API (first seen on 2.8)
- 683b2ae ("MINOR: server/event_hdl: add SERVER_INETADDR event")
- "BUG/MEDIUM: server/event_hdl: memory overrun in _srv_event_hdl_prepare_inetaddr()"
- "MINOR: event_hdl: add global tunables"
Note that the patch may be reworked so that it doesn't depend on the
event_hdl API for older versions; the approach would remain the same:
this would result in a larger patch due to the need to manually
implement a global queue of pending updates with its dedicated task
responsible for picking updates and committing them. An alternative
approach could consist in per-server, lock-protected, temporary
addr:svc_port storage dedicated to "updaters", where only the most
recent values would be kept. The sync task would then use them as
source values to atomically update the addr:svc_port members that the
runtime readers are actually using.
The local variable "event_hdl_async_max_notif_at_once" which was
introduced with the event_hdl API was left as is but with a TODO note
telling that we should make it a global tunable.
Well, we're doing this now. To prepare for upcoming tunables related to
event_hdl API, we add a dedicated struct named event_hdl_tune which is
globally exposed through the event_hdl header file so that it may be used
from everywhere. The struct is automatically initialized in
event_hdl_init() according to defaults.h.
"event_hdl_async_max_notif_at_once" now becomes
"event_hdl_tune.max_events_at_once" with it's dedicated
configuation keyword: "tune.events.max-events-at-once".
We're also taking this opportunity to raise the default value from 10
to 100 since it's seems quite reasonnable given existing async event_hdl
users.
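For example, the per-wakeup budget can now be tuned from the global
section (here lowered from the new default of 100):
    global
        tune.events.max-events-at-once 50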
BUG/MEDIUM: server/event_hdl: memory overrun in _srv_event_hdl_prepare_inetaddr()
As reported in GH #2358, #2359, #2360, #2361 and #2362: ipv6 address
handling may cause a memory overrun due to struct in6_addr being handled
as sockaddr_in6, which is larger. Moreover, the source variable wasn't
properly read from, since the raw value was used as a pointer instead of
pointing to the actual variable's address.
This bug was introduced by 6fde37e046
("MINOR: server/event_hdl: add SERVER_INETADDR event")
Unfortunately for us, gcc didn't catch this, and this actually used to
"work" by accident since the in6_addr struct is made of an array, so not
passing the pointer explicitly still resolved to the proper starting
address.
Fortunately this was caught by Coverity, so thanks to Ilya for that.
The fix is simple: we simply copy the whole in6_addr struct by accessing
it using a pointer and using the proper struct size for the copy.
MINOR: mworker/cli: implements the customized payload pattern for master CLI
Implements the customized payload pattern for the master CLI.
The pattern is stored in the stream in char pcli_payload_pat[8].
The principle is basically the same as the CLI one: it looks for '<<',
then stores what's between '<<' and '\n', and looks for it to exit the
payload mode.
The CLI payload syntax has a limitation: it can't handle payloads
with empty lines, which is a common problem when uploading a PEM file
over the CLI.
This patch implements a way to customize the ending pattern of the CLI,
so we can look for something other than an empty line.
A char cli_payload_pat[8] is used in the appctx to store the customized
pattern. The pattern can't be more than 7 characters and can still be
empty to match an empty line.
cli_io_handler() identifies the pattern and stores it, and
cli_parse_request() identifies the end of the payload.
If the customized pattern between "<<" and "\n" is more than 7
characters, it is not considered as a pattern.
This patch only implements the parser for the 'stats socket'; another
patch is needed for the 'master CLI'.
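A hypothetical usage example over the stats socket (socket path, file name
and pattern are chosen for the illustration), uploading a PEM file that
contains empty lines by using a custom end-of-payload pattern:
    $ printf 'set ssl cert site.pem <<%%EOP%%\n%s\n%%EOP%%\n' \
          "$(cat site.pem.new)" | socat stdio /var/run/haproxy.sock
    $ echo "commit ssl cert site.pem" | socat stdio /var/run/haproxy.sock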
BUG/MINOR: cache: Remove incomplete entries from the cache when stream is closed
When a stream is interrupted by the client before the full answer is
stored in the cache, we end up with an incomplete entry in the cache
that cannot be overwritten until it "naturally" expires. In such a case,
we call the cache filter's cache_store_strm_deinit callback without ever
calling cache_store_http_end which means that the 'complete' flag is
never set on the concerned cache_entry.
This patch adds a check on the 'complete' flag in the strm_deinit
callback and removes the entry from the cache if it is incomplete.
A way to exhibit this bug is to try to get the same "big" response on
multiple clients at the same time thanks to h2load for instance, and to
interrupt the client side before the answer can be fully stored in the
cache.
This patch can be backported up to 2.4 but it will need some rework
starting with branch 2.8 because of the latest cache changes.
As this function only does a few things and has a not very well chosen
name, remove it and replace it with its statements at the unique location
where it is called.
Move qc_notify_send() from quic_tx.c to quic_conn.c. Note that it was already
exported from both quic_conn.h and quic_tx.h. Modify this latter header
to fix the duplication.
BUILD: quic: Several compiler warns fixes after retry module creation
Such a warning appeared after adding quic_retry.h, which only includes
headers for types (quic_cid-t.h, clock-t.h...):
In file included from include/haproxy/quic_retry.h:12,
from src/quic_retry.c:5:
include/haproxy/quic_cid-t.h:26:26: error: field ‘seq_num’ has incomplete type
26 | struct eb64_node seq_num;
Add a new C file, quic_retry.c, for the QUIC retry feature, and move the
following functions into it: quic_saddr_cpy() (from quic_tx.c),
quic_generate_retry_token_aad(), quic_generate_retry_token(),
parse_retry_token() and quic_retry_token_check().
REORG: quic: Move qc_may_probe_ipktns() to quic_tls.h
This function is in relation with the Initial packet number space which is
more linked to the QUIC TLS specifications. Let's move it to quic_tls.h
to be inlined.
REORG: quic: Move QUIC path definitions/declarations to quic_cc module
Move quic_path struct from quic_conn-t.h to quic_cc-t.h and rename it to quic_cc_path.
Update the code accordingly.
Also move some inlined functions related to the QUIC path to quic_cc.h.
REORG: quic: Move several inlined functions from quic_conn.h
Move quic_pkt_type(), quic_saddr_cpy(), quic_write_uint32(), max_available_room(),
max_stream_data_size(), quic_packet_number_length(), quic_packet_number_encode()
and quic_compute_ack_delay_us() to quic_tx.c because they are only used in this file.
Also move quic_ack_delay_ms() and quic_read_uint32() to quic_tx.c because they
are used only in this file.
Move quic_rx_packet_refinc() and quic_rx_packet_refdec() to quic_rx.h header.
Move qc_el_rx_pkts(), qc_el_rx_pkts_del() and qc_list_qel_rx_pkts() to quic_tls.h
header.
REORG: quic: Move QUIC CRYPTO stream definitions/declarations to QUIC TLS
Move quic_cstream struct definition from quic_conn-t.h to quic_tls-t.h.
Its pool is also moved from quic_conn module to quic_tls. Same thing for
quic_cstream_new() and quic_cstream_free().
Fix build issues such as:
In file included from src/quic_tx.c:15:
include/haproxy/quic_tx.h:51:23: warning: ‘struct quic_rx_packet’
It is unclear why the compiler only warns about such missing header
inclusions now. It should have complained a long time ago, during the big
QUIC source code split.
REORG: quic: Add a new module to handle QUIC connection IDs
Move quic_cid and quic_connection_id from quic_conn-t.h to the new quic_cid-t.h header.
Move the definitions of quic_stateless_reset_token_init(), quic_derive_cid(),
new_quic_cid(), quic_get_cid_tid() and retrieve_qc_conn_from_cid() to quic_cid.c,
a new C file.
BUG/MEDIUM: mux-h2: Remove H2_SF_NOTIFIED flag for H2S blocked on fast-forward
When an H2 stream is blocked during data fast-forwarding, we must take care
to remove the H2_SF_NOTIFIED flag. This was only performed when data
fast-forwarding was attempted. However, if the H2 stream was blocked for any
reason, this flag was not removed. During our tests, we found it was
possible to infinitely block a connection because one of its streams was in
the send_list with the flag set. In this case, the stream was no longer
woken up to resume the sends, blocking all other streams.
BUG/MEDIUM: stconn: Don't perform zero-copy FF if opposite SC is blocked
When zero-copy data fast-forwarding is in use, if the opposite SC is blocked,
there is no reason to try to fast-forward more data. Worse, in some cases,
this can lead to a receive loop on the producer side while the consumer side
is blocked.
CONNECTION_CLOSE_APP encoding is broken, which prevents the sending of
every packet with such a frame. This bug has always been present in QUIC
haproxy. However, it was slightly hidden by the previous code
which always initialized all frame members to zero, which was sufficient
to ensure CONNECTION_CLOSE_APP encoding was ok. The below patch changes
this behavior by removing this costly initialization step.
Now, frame members must always be initialized individually according to the
type of frame used. However, for CONNECTION_CLOSE_APP this was not
done, as qc_cc_build_frm() accessed the wrong union member, referring to a
CONNECTION_CLOSE instead.
This bug was detected when trying to generate an HTTP/3 error. The
CONNECTION_CLOSE_APP frame encoding failed due to a non-initialized
<reason_phrase_len> which was too big. This was reported by the
following trace:
"frame building error : qc@0x5555561b86c0 idle_timer_task@0x5555561e5050 flags=0x86038058 CONNECTION_CLOSE_APP"
This must be backported up to 2.6. This is necessary even if the above
commit is not, as the previous code was also buggy, albeit with a different
behavior.
Willy Tarreau [Tue, 28 Nov 2023 08:15:26 +0000 (09:15 +0100)]
OPTIM: mux-h2/zero-copy: don't allocate more buffers per connections than streams
It's the exact same as commit 0a7ab7067 ("OPTIM: mux-h2: don't allocate
more buffers per connections than streams"), but for the zero-copy case
this time. Previously it was only done on the regular snd_buf() path, but
this one is needed as well. A transfer on 16 parallel streams now consumes
half of the memory, and a single stream consumes much less.
An alternate approach would be worth investigating in the future, based
on the same principle as the CF_STREAMER_FAST at the higher level: in
short, by monitoring how many mux buffers we write at once before refilling
them, we would get an idea of how much is worth keeping in buffers max,
given that anything beyond would just waste memory. Some tests show that
a single buffer already seems almost as good, except for single-stream
transfers, which is why it's worth spending more time on this.
Amaury Denoyelle [Wed, 22 Nov 2023 16:27:57 +0000 (17:27 +0100)]
MINOR: trace: support -dt optional format
Add an optional argument to "-dt". This argument is interpreted as a
list of trace statements separated by commas. For each statement,
a specific trace source name can be specified, or none to act on all
sources. Using a double-colon separator, it is possible to add
specifications on the wanted level and verbosity.
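For illustration (the exact level and verbosity names accepted depend on
each trace source), invocations in the spirit of the description above
could look like:
    # all sources, error level, output on stderr
    $ haproxy -db -f haproxy.cfg -dt
    # only the h3 and quic sources, with an explicit level and verbosity
    $ haproxy -db -f haproxy.cfg -dt h3:error,quic:error:minimal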
Amaury Denoyelle [Wed, 22 Nov 2023 16:25:52 +0000 (17:25 +0100)]
MINOR: trace: parse level in a function
Extract the conversion of the level string argument to an integer value
into a dedicated internal function, trace_parse_level(). This function is
used for CLI trace parsing and will also be useful for the "-dt" process
argument.
Amaury Denoyelle [Wed, 22 Nov 2023 13:58:59 +0000 (14:58 +0100)]
MINOR: trace: define simple -dt argument
Add '-dt' haproxy process argument. This will automatically activate all
trace sources on stderr with the error level. This could be useful to
troubleshoot issues such as protocol violations.
Amaury Denoyelle [Mon, 27 Nov 2023 13:49:33 +0000 (14:49 +0100)]
BUILD: map: fix build warning
The <pattern> field pointer of the pat_ref_elt structure has been replaced
by a zero-length array. As such, it's now unneeded to check for a NULL
address before printing it.
This type conversion was done in the following commit: 3ac9912837118a81f3291e106ce9a7c4be6c934f
OPTIM: pattern: save memory and time using ebst instead of ebis
The current patch is mandatory to fix the following GCC warning:
CC src/map.o
src/map.c: In function ‘cli_io_handler_map_lookup’:
src/map.c:549:54: error: the comparison will always evaluate as ‘true’ for the address of ‘pattern’ will never be NULL [-Werror=address]
549 | if (pat->ref && pat->ref->pattern)
|
No need to backport it unless the above commit is.
Willy Tarreau [Sun, 26 Nov 2023 10:56:08 +0000 (11:56 +0100)]
OPTIM: pattern: save memory and time using ebst instead of ebis
In the pat_ref_elt struct, the pattern string is stored outside of the
node element, using a pointer to an strdup(). Not only this needlessly
wastes at least 16-24 bytes per entry (8 for the pointer, 8-16 for the
allocator), it also makes the tree descent less efficient since both
the node and the string have to be visited for each layer (hence at least
two cache lines). Let's use an ebmb storage and place the pattern right
at the end of the pat_ref_elt, making it a variable-sized element instead.
The set-map test below jumps from 173 to 182 kreq/s/core, and the memory
usage drops from 356 MB to 324 MB:
Willy Tarreau [Fri, 24 Nov 2023 15:45:34 +0000 (16:45 +0100)]
MINOR: task/profiling: do not record task_drop_running() as a caller
task_drop_running() is used to remove the RUNNING bit and to check whether
the task got a new wakeup from itself while it was running. Thus,
each time task_drop_running() marks itself as a caller, it in fact
removes the previous caller that woke up the task, such as below:
Tasks activity over 10.439 sec till 0.000 sec ago:
function calls cpu_tot cpu_avg lat_tot lat_avg
task_run_applet 57895273 6.396m 6.628us 2.733h 170.0us <- run_tasks_from_lists@src/task.c:658 task_drop_running
Better not mark this function as a caller and keep the original one:
Tasks activity over 13.834 sec till 0.000 sec ago:
function calls cpu_tot cpu_avg lat_tot lat_avg
task_run_applet 62424582 5.825m 5.599us 5.717h 329.7us <- sc_app_chk_rcv_applet@src/stconn.c:952 appctx_wakeup
BUG/MEDIUM: mux-h1: Properly ignore trailers when a content-length is announced
It is not possible in H1, but in H2 (and probably H3) it is possible to have
trailers at the end of a message while a Content-Length was announced.
However, depending on whether the trailers are received with the last HTX
DATA block and whether zero-copy forwarding is used, a processing error may
be triggered, leading to a 500-internal-error.
To fix the issue, when a content-length is announced and all the payload was
processed, we switch the message to the H1_MSG_DONE state only if the
end-of-message was also reported (HTX_FL_EOM flag set). Otherwise, it is
switched to the H1_MSG_TRAILERS state to be able to properly ignore the
trailers, if any.
The patch must be backported as far as 2.4. Be careful, this part was highly
refactored. The patch will have to be adapted to be backported.
MINOR: mworker/cli: implement hard-reload over the master CLI
The mworker mode never had a proper 'hard-stop' (-st) for the reload;
this is a mode which was commonly used with the daemon mode, but it was
never implemented in mworker mode.
This patch fixes the problem by implementing a "hard-reload" command
over the master CLI. It does the same as the "reload" command, but
instead of waiting for the connections to stop in the previous process,
it immediately quits the previous process after binding.
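Assuming the master CLI socket was bound with -S on the path below, the
new command is issued like the existing "reload" one:
    $ echo "hard-reload" | socat stdio /var/run/haproxy-master.sock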
MEDIUM: ssl: use ssl_sock_chose_sni_ctx() in the clienthello callback
This patch removes the code which selects the SSL certificate in the
OpenSSL Client Hello callback, to use the ssl_sock_chose_sni_ctx()
function which does the same.
The biggest part of the function which remains is the extraction of the
servername, ciphers and sigalgs, because it is done manually by parsing
the TLS extensions.
This is not supposed to change anything functionally.
MINOR: ssl: move certificate selection in a dedicate function
The certificate selection used in the WolfSSL cert_cb and in the OpenSSL
clienthello callback is the same; the function was duplicated to achieve
this.
This patch moves the selection code to a common function called
ssl_sock_chose_sni_ctx().
The servername string is still lowercased in the callback, however the
search for the first dot in the string (wildp) is done in
ssl_sock_chose_sni_ctx().
The function uses the same certificate selection algorithm as before; it
needs to know whether RSA or ECDSA is required, the bind_conf to perform
the lookup, and the servername string.
MEDIUM: ssl: implement rsa/ecdsa selection with WolfSSL
PR https://github.com/wolfSSL/wolfssl/pull/6963 implements primitives to
extract ciphers and signature algorithms.
It allows choosing a certificate depending on the sigalgs and
ciphers presented by the client (RSA or ECDSA).
Since WolfSSL does not implement the clienthello callback, the patch
uses the certificate callback (SSL_CTX_set_cert_cb()).
The callback is inspired by our clienthello callback, however the
extraction of client ciphers and sigalgs is simpler:
wolfSSL_get_sigalg_info() and wolfSSL_get_ciphersuite_info() are used.
This is not enabled by default yet as the PR was not merged.
Following the previous commit: in this patch we add the "syslog" output as
a possible return value for the Proxy.get_mode() function, since log
backends may now be enumerated from lua with 9a74a6c ("MAJOR: log:
introduce log backends").
The Proxy.get_mode() function internally relies on proxy_mode_str() to
return the proxy mode. The current function description is exhaustive about
the possible outputs of the function. I can't tell if this is relevant or
not, but it is subject to changes. Here it is the case: the documentation
indicates that the "health" mode may be returned, which cannot happen
since 77e0daef9 ("MEDIUM: proxy: remove obsolete "mode health"").
In 6e0425b718 ("DOC: config: Add documentation about TCP/HTTP rules in
defaults section") an error was made: the restriction note about the
setting not being inherited from anonymous default section was added
by mistake in the "timeout check" documentation. But it is wrong,
"timeout check" behaves like other "timeout" directives for proxy
sections.
DOC: config: specify supported sections for "max-session-srv-conns"
There was no info about supported sections for "max-session-srv-conns"
proxy directive. A quick look at the code tells us that it may be used
in proxies with the FE capability set.
MINOR: log/backend: prevent "use-server" rules use with LOG mode
server_rules declared using "use-server" keyword within a proxy are not
supported inside a log backend (with "mode log" set), so we report a
warning to the user and reset the setting.
MINOR: proxy: add free_server_rules() helper function
Take the px->server_rules freeing part out of free_proxy() and make it
a dedicated helper function so that it becomes possible to use it from
anywhere.
MINOR: proxy: add free_logformat_list() helper function
There are multiple places inside free_proxy() where we need to perform
the exact same operation: freeing a logformat list which includes freeing
every member.
To prevent code duplication, we add the free_logformat_list() function
that takes such list as parameter and does all the freeing job on its
own.
MINOR: backend: remove invalid mode test for "hash-balance-factor"
This is a leftover from 1e0093a317 ("MINOR: backend/balance: "balance"
requires TCP or HTTP mode").
Indeed, we cannot perform the test during parsing as the effective proxy
type is not yet known. Moreover, thanks to b61147fd ("MEDIUM: log/balance:
merge tcp/http algo with log ones") we could potentially benefit from
this setting even in log mode, but for now it is ignored by all log
compatible load-balancing algorithms.
MINOR: tools: use const for read only pointers in ip{cmp,cpy}
In this patch we fix the prototype for ipcmp() and ipcpy() functions so
that input pointers that are used exclusively for reads are used as const
pointers. This way, the compiler can safely assume that those variables
won't be altered by the function.
In this patch we add support for a new SERVER event in the
event_hdl API.
SERVER_INETADDR is implemented as an advanced server event.
It is published each time the server's ip address or port is
about to change (ie: from the cli, dns, lua...).
SERVER_INETADDR data is an event_hdl_cb_data_server_inetaddr struct
that provides additional info related to the server inet addr change,
but it can be cast as a regular event_hdl_cb_data_server struct if the
additional info is not needed.
Willy Tarreau [Fri, 24 Nov 2023 07:14:31 +0000 (08:14 +0100)]
[RELEASE] Released version 2.9-dev11
Released version 2.9-dev11 with the following main changes :
- BUG/MINOR: startup: set GTUNE_SOCKET_TRANSFER correctly
- BUG/MINOR: sock: mark abns sockets as non-suspendable and always unbind them
- BUILD: cache: fix build error on older compilers
- BUG/MAJOR: quic: complete thread migration before tcp-rules
- BUG/MEDIUM: quic: Possible crash for connections to be killed
- MINOR: quic: remove unneeded QUIC specific stopping function
- MINOR: acl: define explicit HTTP_3.0
- DEBUG: connection/flags: update flags for reverse HTTP
- BUILD: log: silence a build warning when threads are disabled
- MINOR: quic: Add traces to debug frames handling during retransmissions
- BUG/MEDIUM: quic: Possible crash during retransmissions and heavy load
- BUG/MINOR: quic: Possible leak of TX packets under heavy load
- BUG/MINOR: quic: Possible RX packet memory leak under heavy load
- BUG/MINOR: server: do not leak default-server in defaults sections
- DEBUG: tinfo: store the pthread ID and the stack pointer in tinfo
- MINOR: debug: start to create a new struct post_mortem
- MINOR: debug: add OS/hardware info to the post_mortem struct
- MINOR: debug: report in port_mortem whether a container was detected
- MINOR: debug: report in post_mortem if the container techno used is docker
- MINOR: debug: detect CPU model and store it in post_mortem
- MINOR: debug: report any detected hypervisor in post_mortem
- MINOR: debug: collect some boot-time info related to the process
- MINOR: debug: copy the thread info into the post_mortem struct
- MINOR: debug: dump the mapping of the libs into post_mortem
- MINOR: debug: add the ability to enter components in the post_mortem struct
- MINOR: init: add info about the main program to the post_mortem struct
- DOC: management: document "show dev"
- CLEANUP: assorted typo fixes in the code and comments
- CI: limit codespell checks to main repo, not forks
- DOC: 51d: updated 51Degrees repo URL for v3.2.10
- DOC: install: update the list of openssl versions
- MINOR: ext-check: add an option to preserve environment variables
- BUG/MEDIUM: mux-h1: Don't set CO_SFL_MSG_MORE flag on last fast-forward send
- MINOR: rhttp: rename proto_reverse_connect
- MINOR: rhttp: large renaming to use rhttp prefix
- MINOR: rhttp: add count of active conns per thread
- MEDIUM: rhttp: support multi-thread active connect
- MINOR: listener: allow thread kw for rhttp bind
- DOC: rhttp: replace maxconn by nbconn
- MINOR: log/balance: rename "log-sticky" to "sticky"
- MEDIUM: mux-quic: Add consumer-side fast-forwarding support
- MAJOR: h3: Implement zero-copy support to send DATA frame
MAJOR: h3: Implement zero-copy support to send DATA frame
When possible, we try to send DATA frames without copying data. To do so, we
swap the input buffer with the QCS tx buffer. It is only possible iff:
* There is only one HTX block of data at the beginning of the message
* The amount of data to send is equal to the size of the HTX data block
* The QCS tx buffer is empty
In this case, both buffers are swapped. The frame metadata are written at
the beginning of the buffer, before the data, in the area where the HTX
structure is stored.
Willy Tarreau [Thu, 23 Nov 2023 17:19:41 +0000 (18:19 +0100)]
MINOR: log/balance: rename "log-sticky" to "sticky"
After giving it some thought, it could pretty well happen that other
protocols benefit from the sticky algorithm that some used to emulate
using a "stick-on int(0)" or things like this previously. So better
rename it to "sticky" right now instead of having to keep that "log-"
prefix forever. It's still limited to logs, of course, only the algo
is renamed in the config.
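An illustrative log backend using the renamed algorithm (addresses are
placeholders):
    backend graylog
        mode log
        balance sticky
        server s1 udp@192.0.2.10:514
        server s2 udp@192.0.2.11:514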
Amaury Denoyelle [Thu, 23 Nov 2023 16:37:26 +0000 (17:37 +0100)]
DOC: rhttp: replace maxconn by nbconn
Usage of existing "maxconn" for rhttp listeners configuration was
replaced recently by a new dedicating "nbconn" keyword. Update the
documentation part to reflect this.
Amaury Denoyelle [Thu, 23 Nov 2023 16:31:05 +0000 (17:31 +0100)]
MINOR: listener: allow thread kw for rhttp bind
Thanks to the previous commit, a reverse HTTP listener is able to distribute
actively opened connections across its threads. To be able to exploit
this, allow the "thread" keyword for such a listener.
An extra check is added to explicitly forbid a reverse bind from spanning
multiple thread groups. Without this, multiple listener instances would
be created, each with its own "nbconn" value. This may surprise users,
so for now it is better to deactivate this possibility.
Amaury Denoyelle [Wed, 22 Nov 2023 17:02:37 +0000 (18:02 +0100)]
MEDIUM: rhttp: support multi-thread active connect
Implement support for active reverse HTTP task migration across listener
threads. This operation is done each time a new reversable connection
is to be instantiated. Instead of directly allocating the connection, a
lookup is done among all the listener threads.
A comparison is done to select the thread with the smallest number of
current reverse connections. If the thread found is different from the
current one, the connection allocation is delayed and the task is
rescheduled on the chosen thread. The connection will then be created
and pinned on the new thread. This mechanism allows balancing reverse
HTTP connections across different threads.
Note that rhttp_set_affinity is still defined to disable thread
migration on accept. This is necessary as it's unsafe to move an
existing connection to another thread. However, active reverse task
migration should be sufficient to distribute connections across several
threads. Better than that, this design allows differentiating standard
frontend and reversable connections. The latter are designed to be
long-lived, so it is useful to base their distribution solely on the
other reversed connections.
Amaury Denoyelle [Wed, 22 Nov 2023 16:55:58 +0000 (17:55 +0100)]
MINOR: rhttp: add count of active conns per thread
Add a new member <nb_rhttp_conns> in the thread_ctx structure. Its purpose
is to count the current number of opened reverse HTTP connections
with regard to their listener membership.
This patch will be useful to support multi-threading for active reverse
HTTP, in order to select the least loaded thread.
Note that even though accesses to <nb_rhttp_conns> are only done by the
current thread, atomic operations are used. This is because once
multi-thread support is added, external threads will also retrieve
values from the others.
Amaury Denoyelle [Tue, 21 Nov 2023 10:10:34 +0000 (11:10 +0100)]
MINOR: rhttp: large renaming to use rhttp prefix
The previous commit renamed the 'proto_reverse_connect' module to
'proto_rhttp'. This commit follows up by replacing various custom prefixes
with 'rhttp_' to make the code uniform.
Note that the 'reverse_' prefix was kept in the connection module. This is
because if a new reversable protocol not based on HTTP is implemented,
it may be necessary to reuse the same connection functions, which are
protocol agnostic.
BUG/MEDIUM: mux-h1: Don't set CO_SFL_MSG_MORE flag on last fast-forward send
In the mux-to-mux fast-forwarding, when end-of-input is reached on the producer
side, the consumer side must not set the CO_SFL_MSG_MORE flag on send. It means
the H1C_F_CO_MSG_MORE flag must be removed from the H1 connection.
Willy Tarreau [Thu, 23 Nov 2023 15:48:48 +0000 (16:48 +0100)]
MINOR: ext-check: add an option to preserve environment variables
In Github issue #2128, @jvincze84 explained the complexity of using
external checks in some advanced setups due to the systematic purge of
environment variables, and expressed the desire to preserve the
existing environment. During the discussion an agreement was found
around having an option to "external-check" to do that and that
solution was tested and confirmed to work by user @nyxi.
This patch just cleans this up, implements the option as
"preserve-env" and documents it. The default behavior does not change,
the environment is still purged, unless "preserve-env" is passed. The
choice of not using "import-env" instead was made so that we could
later use it to name specific variables that have to be imported
instead of keeping the whole environment.
The patch is simple enough that it could be backported if needed (and
was in fact tested on 2.6 first).
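A minimal sketch of the new option (global placement and the check script
path are assumptions for the example):
    global
        # keep the caller's environment instead of purging it
        external-check preserve-env
    backend be_app
        option external-check
        external-check command /usr/local/bin/app_check.sh
        server s1 192.0.2.20:80 check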