BUG/MEDIUM: samples: Fix handling of SMP_T_METH samples
Samples of type SMP_T_METH were not properly handled in smp_dup(),
smp_is_safe() and smp_is_rw(). For "other" methods, for instance PATCH, a
fallback was performed on the SMP_T_STR type. Only the buffer to consider
changes: "smp->data.u.meth.str" should be used for SMP_T_METH samples
while "smp->data.u.str" should be used for SMP_T_STR samples. However, in
smp_dup(), the result was stored in the wrong buffer, the string one
instead of the method one. And in smp_is_safe() and smp_is_rw(), the
method buffer was not used at all.
We now take care to use the right buffer.
This patch must be backported to all stable versions.
BUG/MINOR: http-act: validate decoded lengths in *-headers-bin
http_action_set_headers_bin() decodes varint name and value lengths
from a binary sample but never validates that the decoded length
fits in the remaining sample data before constructing the ist.
If the value's varint decodes to a large number with only a few
bytes following, v.len exceeds the buffer and http_add_header()
memcpys past the sample, copying adjacent heap data into a header
sent to the backend (or client, with http-response).
The intended source for this action is the hdrs_bin sample fetch
which produces well-formed output, but nothing prevents an admin
from feeding it req.body or another untrusted source. With:
BUG/MINOR: spoe: fix pointer arithmetic overflow in spoe_decode_buffer()
decode_varint() has no iteration cap and accepts varints decoding to
any uint64_t value. When sz is large enough that p + sz wraps modulo
2^64, the check "p + sz > end" passes, *buf is set to the wrapped
pointer, and the caller's parsing loop continues from an arbitrary
relative offset before the demux buffer.
A malicious SPOE agent sending an AGENT_HELLO frame with a key-name
length varint of 0xfffffffffffff000 causes spop_conn_handle_hello()
to dereference memory ~64KB before the dbuf allocation, resulting in
SIGSEGV (DoS) or, if the read lands on live heap data, parser
confusion. The relative offset is fully attacker-controlled and
ASLR-independent.
Compare against the remaining length instead of computing p + sz.
Since p <= end is guaranteed after a successful decode_varint(),
end - p is non-negative.
This patch must be backported to all stable versions.
BUG/MINOR: resolvers: fix memory leak on AAAA additional records
Commit c84c15d3938a ("BUG/MINOR: resolvers: Apply dns-accept-family
setting on additional records") converted a switch statement to an
if/else chain but left the break; in the AAAA branch. In the new
form, break exits the surrounding for loop instead of a switch case.
For every AAAA additional record in an SRV response:
- answer_record allocated at line 1460 is never freed and never
inserted into answer_tree -> ~580 bytes leaked per response
- all subsequent additional records in the response are silently
discarded
A DNS server controlling SRV responses for haproxy service discovery
can leak memory at MB/min rates given default resolution intervals.
Also breaks IPv6 SRV target resolution outright since the AAAA record
is leaked rather than attached to its SRV entry.
MINOR: lua: add tune.lua.openlibs to restrict loaded Lua standard libraries
HAProxy has always called luaL_openlibs() unconditionally, which opens
all standard Lua libraries including io, os, package and debug. This
makes it impossible to prevent Lua scripts from executing binaries
(os.execute, io.popen), loading native C modules (package/require), or
bypassing any Lua-level sandbox via the debug library.
Add a new global directive tune.lua.openlibs that accepts a comma-separated
list of library names to load:
tune.lua.openlibs none # only base + coroutine
tune.lua.openlibs string,math,table,utf8 # safe libs only
tune.lua.openlibs all # default, same as before
The base and coroutine libraries are always loaded regardless: base provides
core Lua functions that HAProxy relies on, and coroutine is required because
HAProxy overrides coroutine.create() with its own safe implementation.
When all libraries are enabled (the default), the fast path still calls
luaL_openlibs() directly with no overhead. A parse error is returned if
the directive appears after lua-load or lua-load-per-thread (the Lua state
is already initialised at that point), or if 'none' is combined with other
library names. Note that fork() and new thread creation are already blocked
by default regardless of this setting (see "insecure-fork-wanted").
BUG/MAJOR: slz: always make sure to limit fixed output to less than worst case literals
Literals are sent in two ways:
- in EOB state, unencoded and prefixed with their length
- in FIXED state, huffman-encoded
And references are only sent in FIXED state.
The API promises that the amount of data will not grow by more than
5 bytes every 65535 input bytes (the comment was adjusted to remind
this last point). This is guaranteed by the literal encoding in EOB
state (BT, LEN, NLEN + bytes), which is supposed to be the worst
case by design.
However, as reported by Greg KH, this is currently not true: the test
that decides whether or not to switch to FIXED state to send references
doesn't properly account for the number of bytes needed to roll back
to the *exact* same state in EOB, which means sending EOB, BT,
alignment, LEN and NLEN in addition to the referenced bytes, versus
sending the encoding for the reference. By not taking into account the
cost of returning to the initial state (BT+LEN+NLEN), it was possible
to stay too long in the FIXED state and to consume the extra bytes that
are needed to return to the EOB state, resulting in producing much more
data in case of multiple switchovers (up to 6.25% increase was measured
in tests, or 1/16, which matches worst case estimates based on the code).
And this check is only valid when starting from EOB (in order to restore
the same state that offers this guarantee). When already in FIXED state,
the encoded reference is always smaller than or same size as the data.
The smallest match length we support is 4 bytes, and when encoded this
is no more than 28 bits, so it is safe to stay in FIXED state as long
as needed while checking the possibility of switching back to EOB.
This very slightly reduces the compression ratio (-0.17% on a linux
kernel source) but makes sure we respect the API promise of no more
than 5 extra bytes per 65535 of input. A side effect of the slightly
simpler check is an ~7.5% performance increase in compression speed.
Many thanks to Greg for the detailed report allowing to reproduce
the issue.
Note: in haproxy's default configuration (tune.bufsize 16384,
tune.maxrewrite 1024), this problem cannot be triggered, because the
reserve limits input to 15360 bytes, and the overflow is maximum
960 bytes resulting in 16320 bytes total, which still fits into the
buffer. However, reducing tune.maxrewrite below 964, or tune.bufsize
above 17408 can result in overflows for specially crafted patterns.
A workaround for larger buffers consists in always setting tune.maxrewrite
to at least 1/16 of tune.bufsize.
MEDIUM: check: Revamp the way the protocol and xprt are determined
Storing the protocol directly into the check was not a good idea,
because the protocol may not be determined until after a DNS resolution
on the server, and may even change at runtime, if the DNS changes.
What we can, however, figure out at start up, is the net_addr_type,
which will contain all that we need to find out which protocol to use
later.
Also revert the changes made by commit 07edaed1918a6433126b4d4d61b7f7b0e9324b30
that would not reuse the server xprt if a different alpn is set for
checks. The alpn is just a string, and should not influence the choice
of the xprt.
We'll now make sure to use the server xprt, unless an address is
provided, in which case we'll use whatever xprt matches that address, or
a port, in which case we'll assume we want TCP, and use check_ssl to
know whether we want the SSL xprt or not.
Now that the check contains all that is needed to know which protocol to
look up, always just use that when creating a new check connection if it
is the default check connection, and for now, always use TCP when a
tcp-check or http-check connect rule is used (which means those can't be
used for QUIC so far).
Commit 1b0dfff552713274b95c81594b153104e215ec81 attempted to make it so
the mux would expect a QUIC-like protocol or not, however it only ensured
that we would not instantiate a non-QUIC mux on a QUIC protocol, not that
we would avoid instantiating a QUIC mux on a non-QUIC protocol, so fix
that.
CI: github: fix vtest path to allow correct caching
The vtest binary does not seem to be cached correctly by actions/cache;
the cause of the problem seems to be that the binary is installed outside
the GitHub workspace. This patch installs the binary in ~/vtest/ to fix
the issue.
A WIP version of the patch was applied before the actual patch by
accident. The correct patch is 2db801c ("BUG/MINOR: hlua: fix stack
overflow in httpclient headers conversion")
Node.js 20 actions are deprecated. The following actions are running on
Node.js 20 and may not work as expected: actions/cache@v4. Actions will
be forced to run with Node.js 24 by default starting June 2nd, 2026.
Node.js 20 will be removed from the runner on September 16th, 2026.
Please check if updated versions of these actions are available that
support Node.js 24. To opt into Node.js 24 now, set the
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the
runner or in your workflow file. Once Node.js 24 becomes the default,
you can temporarily opt out by setting
ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see:
https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/
BUG/MEDIUM: connection: Wake the stconn on error when failing to create mux
When the app_ops were removed, direct calls to the SC wake callback function
were replaced by tasklet wakeups. However, in conn_create_mux(), the call
was replaced by a direct call to sc_conn_process(), which is only usable
when the SC is attached to a stream. A backend mux may also be created for
a healthcheck, and in that context sc_conn_process() cannot be called.
Because of this bug, crashes can be experienced when an error is triggered
during an SSL connection attempt from a healthcheck.
To fix the issue, the call to sc_conn_process() was replaced by a tasklet
wakeup.
This patch should fix the issue #3326. No backport needed.
The VTest2 tarball URL at code.vinyl-cache.org/vtest/VTest2/archive/main.tar.gz
no longer works. Switch scripts/build-vtest.sh to use a git clone of the
repository instead.
Add a cache step in the setup-vtest CI action so VTest is only rebuilt
when its HEAD commit changes, keyed on the runner OS and the VTest2 HEAD
SHA.
BUG/MINOR: peers: fix OOB heap write in dictionary cache update
When a peer sends a dictionary entry update with a value (the else
branch at line 2109), the entry id decoded from the wire was never
validated against dc->max_entries before being used as an array index
into dc->rx[].
A malicious peer can send id=N where N > 128 (PEER_STKT_CACHE_MAX_ENTRIES)
to:
- dc->rx[id-1].de at line 2123: OOB read followed by atomic decrement
and potential free of an attacker-controlled pointer via
dict_entry_unref()
- dc->rx[id-1].de = de at line 2124: OOB write of a heap pointer at
an attacker-controlled offset (16-byte stride, ~64 GiB range)
The bounds check was added to the key-only branch in commit f9e51beec
("BUG/MINOR: peers: Do not ignore a protocol error for dictionary
entries.") but was never added to the with-value branch. The bug has
been present since dictionary support was introduced in commit
8d78fa7def5c ("MINOR: peers: Make peers protocol support new
"server_name" data type.").
Reachable from any TCP client that knows the configured peer name
(no cryptographic authentication on the peers protocol). Requires a
stick-table with "store server_key" in the configuration.
Fix by hoisting the bounds check above the branch so it covers both
paths.
BUG/MEDIUM: chunk: fix infinite loop in get_larger_trash_chunk()
When the input chunk is already the large buffer (chk->size ==
large_trash_size), the <= comparison still matched and returned
another large buffer of the same size. Callers that retry on a
non-NULL return value (sample.c:4567 in json_query) loop forever.
The json_query infinite loop is trivially triggered: mjson_unescape()
returns -1 not only when the output buffer is too small but also for
any \uXXYY escape where XX != "00" (mjson.c:305) and for invalid
escapes like \q. The retry loop assumes -1 always means "grow the
buffer", so a 14-byte JSON body of {"k":"\u0100"} hangs the worker
thread permanently. Send N such requests to exhaust all worker
threads.
Use < instead of <= so a chunk that is already large yields NULL.
This also fixes the json converter overflow at sample.c:2869 where
no recheck happens after the "growth" returned a same-size buffer.
Introduced in commit ce912271db4e ("MEDIUM: chunk: Add support for
large chunks"). No backport needed.
BUG/MEDIUM: chunk: fix typo allocating small trash with bufsize_large
A copy-paste error in alloc_trash_buffers_per_thread() passes
global.tune.bufsize_large to alloc_small_trash_buffers() instead of
global.tune.bufsize_small. This sets small_trash_size = bufsize_large.
When tune.bufsize.large is configured, get_larger_trash_chunk() then
incorrectly matches a large buffer against small_trash_size at line
169 and "grows" it to a regular (smaller) buffer. b_xfer() at line
179 attempts to copy the large buffer's contents into the smaller one:
- Default builds (DEBUG_STRICT=1): BUG_ON in __b_putblk() aborts
the process -> remote DoS
- DEBUG_STRICT=0 builds: BUG_ON becomes ASSUME() and the compiler
elides the check -> heap overflow with attacker-controlled bytes
Reachable via the json converter (sample.c:2862) when escaping
~bufsize_large/6 control characters in attacker-supplied data such
as a request header or body.
Introduced in commit 92a24a4e875b ("MEDIUM: chunk: Add support for
small chunks"). No backport needed.
BUG/MINOR: hlua: fix format-string vulnerability in Patref error path
hlua_error() is a printf-family function (calls vsnprintf), but
hlua_patref_set, hlua_patref_add, and _hlua_patref_add_bulk pass
errmsg directly as the format string. errmsg is built by pattern.c
helpers that embed the user-supplied key or value verbatim, e.g.
pat_ref_set_elt() generates "unable to parse '<value>'".
A Lua script calling:
ref:set("key", "%p.%p.%p.%p.%p.%p.%p.%p")
against a map with an integer output type (where the parse fails)
gets stack/register contents formatted into the (nil, err) return
value -> ASLR/canary leak. With %n and no _FORTIFY_SOURCE this
becomes an arbitrary write primitive.
This must be backported as far as the Patref Lua API exists.
BUG/MINOR: hlua: fix stack overflow in httpclient headers conversion
hlua_httpclient_table_to_hdrs() declares a VLA of size
global.tune.max_http_hdr (default 101) on the stack but never checks
hdr_num against that bound. A Lua script that supplies a header table
with more than 101 values writes struct http_hdr entries (two ist =
two heap pointers + two lengths) past the end of the VLA, smashing
the stack frame.
Trigger from any Lua action/task/service:
local hc = core.httpclient()
local v = {}
for i = 1, 300 do v[i] = "x" end
hc:get{ url = "http://127.0.0.1/", headers = { ["X"] = v } }
Each out-of-bounds entry writes a heap pointer (controllable
allocation contents via istdup) plus an attacker-chosen length onto
the stack, overwriting the saved return address.
[wla: this is only reachable if the Lua script passes more than
max_http_hdr header values, which requires access to the script itself]
This must be backported as far as the httpclient Lua API exists.
Signed-off-by: William Lallemand <wlallemand@haproxy.com>
BUG: hlua: fix stack overflow in httpclient headers conversion
hlua_httpclient_table_to_hdrs() declares a VLA of size
global.tune.max_http_hdr (default 101) on the stack but never checks
hdr_num against that bound. A Lua script that supplies a header table
with more than 101 values writes struct http_hdr entries (two ist =
two heap pointers + two lengths) past the end of the VLA, smashing
the stack frame.
Trigger from any Lua action/task/service:
local hc = core.httpclient()
local v = {}
for i = 1, 300 do v[i] = "x" end
hc:get{ url = "http://127.0.0.1/", headers = { ["X"] = v } }
Each out-of-bounds entry writes a heap pointer (controllable
allocation contents via istdup) plus an attacker-chosen length onto
the stack, overwriting the saved return address. With no stack
canary, this is direct RCE; with a canary, it requires a leak first.
Reachable from any deployment that loads Lua scripts. While Lua
scripts are nominally trusted, this turns "can edit Lua" into "can
execute arbitrary native code", which is a meaningful boundary in
many setups (Lua sandbox escape).
This must be backported as far as the httpclient Lua API exists.
BUG/MEDIUM: jwe: fix memory leak in jwt_decrypt_secret with var argument
When the secret argument to jwt_decrypt_secret is a variable
(ARGT_VAR) rather than a literal string, alloc_trash_chunk() is
called to hold the base64-decoded secret but the buffer is never
released. The end: label frees input, decrypted_cek, out, and the
decoded_items array but not secret.
Each request leaks one trash chunk (~tune.bufsize, default 16KB).
At ~65000 requests per GiB this allows slow memory exhaustion DoS
against any config of the form:
BUG/MEDIUM: jwt: fix heap overflow in ECDSA signature DER conversion
convert_ecdsa_sig() calls i2d_ECDSA_SIG(ecdsa_sig, &p) where p
points into signature->area, a trash chunk of tune.bufsize bytes
(default 16384). i2d writes with no output bound.
The raw R||S input can be up to bufsize bytes (filled by
base64urldec at jwt.c:520-527), giving bignum_len up to 8192. The
DER encoding adds a SEQUENCE header (2-4 bytes), two INTEGER headers
(2-4 bytes each), and up to two leading-zero sign-padding bytes when
the bignum high bit is set. With two 8192-byte bignums having the
high bit set, the encoding is ~16398 bytes, overflowing the 16384-
byte buffer by ~14 bytes.
Triggered by any JWT with alg=ES256/384/512 and a ~21830-character
base64url signature. The signature does not need to verify
successfully; the overflow happens before verification. Reachable
from any config using jwt_verify with an EC algorithm.
Also fixes the existing wrong check: i2d returns -1 on error which
became SIZE_MAX in the size_t signature->data, defeating the
"== 0" test.
This must be backported as far as JWT support exists.
BUG/MEDIUM: jwe: fix NULL deref crash with empty CEK and non-dir alg
In sample_conv_jwt_decrypt_secret(), when a JWE token has an empty
encrypted-key section but the algorithm is not "dir" (e.g. A128KW),
neither branch initializes decrypted_cek. The NULL pointer is then
passed to decrypt_ciphertext() which dereferences it:
- For GCM encodings: aes_process() calls b_orig(NULL) -> SIGSEGV
- For CBC encodings: b_data(NULL) at jwe.c:463 -> SIGSEGV
A single HTTP request with a crafted Authorization header crashes the
worker process. Trigger token (JOSE header {"alg":"A128KW","enc":"A128GCM"},
empty CEK section between the two dots):
Reachable in any configuration using the jwt_decrypt_secret converter.
The other two decrypt converters (jwt_decrypt_jwk, jwt_decrypt_cert)
already have the check.
This must be backported as far as JWE support exists.
BUG/MEDIUM: payload: validate SNI name_len in req.ssl_sni
The 16-bit name_len field is read directly from the ClientHello and
stored as the sample length without any validation against srv_len,
ext_len, or the channel buffer size. A 65-byte ClientHello with
name_len=0xffff produces a sample claiming 65535 bytes of data when
only ~4 bytes are actually present in the buffer.
Downstream consumers then read tens of kilobytes past the channel
buffer:
- pattern.c:741 XXH3() hashes 65535 bytes -> ~50KB OOB heap read
- sample.c smp_dup memcpy if large trash configured
- log-format %[req.ssl_sni] leaks heap contents to logs/headers
Reachable pre-authentication on any TCP frontend using req.ssl_sni
(req_ssl_sni), which is the documented way to do SNI-based content
switching in TCP mode. No SSL handshake is required; the parser
runs on raw buffer contents in tcp-request content rules.
Bug introduced in commit d4c33c8889ec3 (2013). The ALPN parser in
the same file at line 1044 has the equivalent check; SNI never did.
This must be backported to all supported versions.
BUG/MEDIUM: tcpcheck: Properly retrieve tcpcheck type to install the best mux
When the healthcheck section support was added, the tcpcheck type was moved
into the tcpcheck ruleset. However, the conn_install_mux_chk() function was
not updated accordingly, so the TCP mode was always returned.
No backport needed. This patch is related to #3324 but it is not the root
cause of the issue.
As reported by GH @phihos on GH #3320, using the shm-stats-file feature
with objects exceeding 127 chars would result in the object name being
unexpectedly truncated, while the GUID API supports up to 128 chars.
Indeed, with the config below, and shm-stats-file enabled:
server s1 127.0.0.1:1 guid srv:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:SRV_1 disabled
server s10 127.0.0.1:1 guid srv:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:SRV_10 disabled
haproxy would store the second server object with the same id as the first
one, but upon reload, only the first one would be restored, which would
eventually cause shm-stats-file slot exhaustion with repetitive reloads.
@phihos found out the underlying issue: in counters.c we used snprintf()
with sizeof(shm_obj->guid) - 1 as the <size> parameter, while we should
have used sizeof(shm_obj->guid) instead, since the size of shm_obj->guid
already takes the terminating NUL byte into account.
So we simply apply the fix suggested by @phihos, and hopefully this should
solve the shm-stats-file slot leak that was observed.
Unfortunately, for now, we cannot warn the user that a duplicate
shm-stats-file object was found, because we accept duplicate objects
by design for 2 reasons. The first one is for a new process to be able
to change the object type for a previously known GUID while allowing
previous processes to use the old object as long as they are alive.
The second reason is that upon startup we cannot afford to scan the
whole object list, as soon as we find a match (type + GUID), we bind
the object, and this way we avoid unnecessary lookup time.
Perhaps we have room for improvement in the future, but for now let's
keep it this way.
It should be backported to 3.3.
Big thanks to @phihos for the bug description, analysis and
suggestions.
BUG/MEDIUM: tcpcheck/server: Fix parsing of healthcheck param for dynamic servers
The parsing of the "healthcheck" parameter for dynamic servers was
incomplete. The post-config step was missing, leading to a crash because
the ruleset pointer was NULL.
To fix the issue, check_server_tcpcheck() function is called in
cli_parse_add_server().
DOC: config: Fix two typos in the server param "healthcheck" description
There were 2 typos here. First, the 'k' was missing from the parameter
name. Then "sectino" was used in the description instead of "section".
Let's fix them.
BUG/MEDIUM: mux-h1: Disable 0-copy forwarding when draining the request
When an early response is sent to the client and the H1 connection is
switched to the draining state, we must take care to disable the 0-copy data
forwarding because the backend side is no longer there. It is an issue
because this prevents any regular receive from being performed.
This patch should fix the issue #3316. It must be backported as far as 3.0.
BUG/MEDIUM: haterm: Move all init functions of haterm in haterm_init.c
Functions used to initialize haterm (the splicing and the response buffers)
were defined and registered in haterm.c. The problem is that this file is
compiled with haproxy, so it may be an issue. And for the splicing part,
warnings may be emitted when haproxy is started.
To avoid any issue during haproxy startup and to avoid initializing some
parts of haterm, all init functions were moved into the haterm_init.c file.
REGTESTS: add a test for "filter-sequence" directive
We add a reg-test, filter_sequence.vtc, with an associated lua file
dummy_filters.lua to cover the "filter-sequence" directive and
ensure it is working as expected, on both the request and response
paths.
This regtest will only be effective starting with 3.4-dev0.
This is another pre-requisite work for upcoming decompression filter.
In this patch we implement the "filter-sequence" directive which can be
used in a proxy section (frontend, backend, listen) and takes 2 parameters.
The first one is the direction (request or response); the second one
is a comma-separated list of filter names previously declared on the
proxy using the "filter" keyword.
The main goal of this directive is to be able to instruct haproxy in which
order the filters should be executed on request and response paths,
especially if the ordering between request and response handling must
differ, and without relying on the filter declaration ordering (within
the proxy) which is used by default by haproxy.
Another benefit of this feature is that it becomes possible to "ignore"
a previously declared filter on the proxy. Indeed, when filter-sequence
is defined for a given direction (request/response), then it will be used
over the implicit filter ordering, but if a filter which was previously
declared is not specified in the related filter-sequence, it will not be
executed on purpose. This can be used as a way to temporarily disable a
filter without completely removing its configuration.
Documentation was updated (check examples for more info)
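A hypothetical configuration sketch of the behaviors described above (the exact filter names and placement are illustrative, based only on this description):

```
listen app
    bind :8080
    filter trace
    filter compression
    # explicit per-direction ordering, overriding declaration order
    filter-sequence request trace,compression
    # "trace" is deliberately left out: it will not run on responses
    filter-sequence response compression
```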
MINOR: filters: add filter name to flt_conf struct
The flt_conf struct stores the filter id, which is used internally to
match the filter against a static pointer identifier, and also as
descriptive text to describe the filter. But the id is not consistent
with the public name as used in the configuration (for instance when
selecting a filter through the 'filter' directive).
What we do in this patch is add the flt_conf->name member, which
stores the real filter name as seen in the configuration. This will
allow filters to be selected by their name from other directives in
the configuration.
DOC: config: fix ambiguous info in log-steps directive description
log-steps takes <steps> as parameter. <steps> is made of individual
log origins separated by commas, as shown in the examples, but the
directive's description says it should be separated by spaces, which
is wrong.
Released version 3.4-dev8 with the following main changes :
- MINOR: log: split do_log() in do_log() + do_log_ctx()
- MINOR: log: provide a way to override logger->profile from process_send_log_ctx
- MINOR: log: support optional 'profile <log_profile_name>' argument to do-log action
- BUG/MINOR: sock: adjust accept() error messages for ENFILE and ENOMEM
- BUG/MINOR: qpack: fix 62-bit overflow and 1-byte OOB reads in decoding
- MEDIUM: sched: do not run a same task multiple times in series
- MINOR: sched: do not requeue a tasklet into the current queue
- MINOR: sched: do not punish self-waking tasklets anymore
- MEDIUM: sched: do not punish self-waking tasklets if TASK_WOKEN_ANY
- MEDIUM: sched: change scheduler budgets to lower TL_BULK
- MINOR: mux-h2: assign a limited frames processing budget
- BUILD: sched: fix leftover of debugging test in single-run changes
- BUG/MEDIUM: acme: fix multiple resource leaks in acme_x509_req()
- MINOR: http_htx: use enum for arbitrary values in conf_errors
- MINOR: http_htx: rename fields in struct conf_errors
- MINOR: http_htx: split check/init of http_errors
- MINOR/OPTIM: http_htx: lookup once http_errors section on check/init
- MEDIUM: proxy: remove http-errors limitation for dynamic backends
- BUG/MINOR: acme: leak of ext_san upon insertion error
- BUG/MINOR: acme: wrong error when checking for duplicate section
- BUG/MINOR: acme/cli: wrong argument check in 'acme renew'
- BUG/MINOR: http_htx: fix null deref in http-errors config check
- MINOR: buffers: Move small buffers management from quic to dynbuf part
- MINOR: dynbuf: Add helper functions to alloc large and small buffers
- MINOR: quic: Use b_alloc_small() to allocate a small buffer
- MINOR: config: Relax tests on the configured size of small buffers
- MINOR: config: Report the warning when invalid large buffer size is set
- MEDIUM: htx: Add htx_xfer function to replace htx_xfer_blks
- MINOR: htx: Add helper functions to xfer a message to smaller or larger one
- MINOR: http-ana: Use HTX API to move to a large buffer
- MEDIUM: chunk: Add support for small chunks
- MEDIUM: stream: Try to use a small buffer for HTTP request on queuing
- MEDIUM: stream: Try to use small buffer when TCP stream is queued
- MEDIUM: stconn: Use a small buffer if possible for L7 retries
- MEDIUM: tree-wide: Rely on htx_xfer() instead of htx_xfer_blks()
- Revert "BUG/MEDIUM: mux-h2: make sure to always report pending errors to the stream"
- MEDIUM: mux-h2: Stop dealing with HTX flags transfer in h2_rcv_buf()
- MEDIUM: tcpcheck: Use small buffer if possible for healthchecks
- MINOR: proxy: Review options flags used to configure healthchecks
- DOC: config: Fix alphabetical ordering of proxy options
- DOC: config: Fix alphabetical ordering of external-check directives
- MINOR: proxy: Add use-small-buffers option to set where to use small buffers
- DOC: config: Add missing 'status-code' param for 'http-check expect' directive
- DOC: config: Reorder params for 'tcp-check expect' directive
- BUG/MINOR: acme: NULL check on my_strndup()
- BUG/MINOR: acme: free() DER buffer on a2base64url error path
- BUG/MINOR: acme: replace atol with len-bounded __strl2uic() for retry-after
- BUG/MINOR: acme/cli: fix argument check and error in 'acme challenge_ready'
- BUILD: tools: potential null pointer dereference in dl_collect_libs_cb
- BUG/MINOR: ech: permission checks on the CLI
- BUG/MINOR: acme: permission checks on the CLI
- BUG/MEDIUM: check: Don't reuse the server xprt if we should not
- MINOR: checks: Store the protocol to be used in struct check
- MINOR: protocols: Add a new proto_is_quic() function
- MEDIUM: connections: Enforce mux protocol requirements
- MEDIUM: server: remove a useless memset() in srv_update_check_addr_port.
- BUG/MINOR: config: Warn only if warnif_cond_conflicts report a conflict
- BUG/MINOR: config: Properly test warnif_misplaced_* return values
- BUG/MINOR: http-ana: Only consider client abort for abortonclose
- BUG/MEDIUM: acme: skip doing challenge if it is already valid
- MINOR: connections: Enhance tune.idle-pool.shared
- BUG/MINOR: acme: fix task allocation leaked upon error
- BUG/MEDIUM: htx: Fix htx_xfer() to consume more data than expected
- CI: github: fix tag listing by implementing proper API pagination
- CLEANUP: fix typos and spelling in comments and documentation
- BUG/MINOR: quic: close conn on packet reception with incompatible frame
- CLEANUP: stconn: Remove usless sc_new_from_haterm() declaration
- BUG/MINOR: stconn: Always declare the SC created from healthchecks as a back SC
- MINOR: stconn: flag the stream endpoint descriptor when the app has started
- MINOR: mux-h2: report glitches on early RST_STREAM
- BUG/MINOR: net_helper: fix length controls on ip.fp tcp options parsing
- BUILD: net_helper: fix unterminated comment that broke the build
- MINOR: resolvers: basic TXT record implementation
- MINOR: acme: store the TXT record in auth->token
- MEDIUM: acme: add dns-01 DNS propagation pre-check
- MEDIUM: acme: new 'challenge-ready' option
- DOC: configuration: document challenge-ready and dns-delay options for ACME
- SCRIPTS: git-show-backports: list new commits and how to review them with -L
- BUG/MEDIUM: ssl/cli: tls-keys commands warn when accessed without admin level
- BUG/MEDIUM: ssl/ocsp: ocsp commands warn when accessed without admin level
- BUG/MEDIUM: map/cli: map/acl commands warn when accessed without admin level
- BUG/MEDIUM: ssl/cli: tls-keys commands are missing permission checks
- BUG/MEDIUM: ssl/ocsp: ocsp commands are missing permission checks
- BUG/MEDIUM: map/cli: CLI commands lack admin permission checks
- DOC: configuration: mention QUIC server support
- MEDIUM: Add set-headers-bin, add-headers-bin and del-headers-bin actions
- BUG/MEDIUM: mux-h1: Don't set MSG_MORE on bodyless responses forwarded to client
- BUG/MINOR: http_act: Properly handle decoding errors in *-headers-bin actions
- MEDIUM: stats: Hide the version by default and add stats-showversion
- MINOR: backends: Don't update last_sess if it did not change
- MINOR: servers: Don't update last_sess if it did not change
- MINOR: ssl/log: add keylog format variables and env vars
- DOC: configuration: update tune.ssl.keylog URL to IETF draft
- BUG/MINOR: http_act: Make set/add-headers-bin compatible with ACL conditions
- MINOR: action: Add a sample expression field in arguments used by HTTP actions
- MEDIUM: http_act: Rework *-headers-bin actions
- BUG/MINOR: tcpcheck: Remove unexpected flag on tcpcheck rules for httpchk option
- MEDIUM: tcpcheck: Refactor how tcp-check rulesets are stored
- MINOR: tcpcheck: Deal with disable-on-404 and send-state in the tcp-check itself
- BUG/MINOR: tcpcheck: Don't enable http_needed when parsing HTTP samples
- MINOR: tcpcheck: Use tcpcheck flags to know a healthcheck uses SSL connections
- BUG/MINOR: tcpcheck: Use tcpcheck context for expressions parsing
- CLEANUP: tcpcheck: Don't needlessly expose proxy_parse_tcpcheck()
- MINOR: tcpcheck: Add a function to stringify the healthcheck type
- MEDIUM: tcpcheck: Split parsing functions to prepare healthcheck sections parsing
- MEDIUM: tcpcheck: Add parsing support for healthcheck sections
- MINOR: tcpcheck: Extract tcpcheck ruleset post-config in a dedicated function
- MEDIUM: tcpcheck/server: Add healthcheck server keyword
- REGTESTS: tcpcheck: Add a script to check healthcheck section
- MINOR: acme: add 'dns-timeout' keyword for dns-01 challenge
- CLEANUP: net_helper: fix typo in comment
- MINOR: acme: set the default dns-delay to 30s
- MINOR: connection: add function to identify a QUIC connection
- MINOR: quic: refactor frame parsing
- MINOR: quic: refactor frame encoding
- BUG/MINOR: quic: fix documentation for transport params decoding
- MINOR: quic: split transport params decoding/check
- MINOR: quic: remove useless quic_tp_dec_err type
- MINOR: quic: define QMux transport parameters frame type
- MINOR: quic: implement QMux transport params frame parser/builder
- MINOR: mux-quic: move qcs stream member into tx inner struct
- MINOR: mux-quic: prepare Tx support for QMux
- MINOR: mux-quic: convert init/closure for QMux compatibility
- MINOR: mux-quic: protect qcc_io_process for QMux
- MINOR: mux-quic: prepare traces support for QMux
- MINOR: quic: abstract stream type in qf_stream frame
- MEDIUM: mux-quic: implement QMux receive
- MINOR: mux-quic: handle flow-control frame on qstream read
- MINOR: mux-quic: define Rx connection buffer for QMux
- MINOR: mux_quic: implement qstrm rx buffer realign
- MEDIUM: mux-quic: implement QMux send
- MINOR: mux-quic: implement qstream send callback
- MINOR: mux-quic: define Tx connection buffer for QMux
- MINOR: xprt_qstrm: define new xprt module for QMux protocol
- MINOR: xprt_qstrm: define callback for ALPN retrieval
- MINOR: xprt_qstrm: implement reception of transport parameters
- MINOR: xprt_qstrm: implement sending of transport parameters
- MEDIUM: ssl: load xprt_qstrm after handshake completion
- MINOR: mux-quic: use QMux transport parameters from qstrm xprt
- MAJOR: mux-quic: activate QMux for frontend side
- MAJOR: mux-quic: activate QMux on the backend side
- MINOR: acme: split the CLI wait from the resolve wait
- MEDIUM: acme: initialize the dns timer starting from the first DNS request
- DEBUG: connection/flags: add QSTRM flags for the decoder
- BUG/MINOR: mux_quic: fix uninit for QMux emission
- MINOR: acme: remove remaining CLI wait in ACME_RSLV_TRIGGER
- MEDIUM: acme: split the initial delay from the retry DNS delay
- BUG/MINOR: cfgcond: properly set the error pointer on evaluation error
- BUG/MINOR: cfgcond: always set the error string on openssl_version checks
- BUG/MINOR: cfgcond: always set the error string on awslc_api checks
- BUG/MINOR: cfgcond: fail cleanly on missing argument for "feature"
- MINOR: ssl: add the ssl_fc_crtname sample fetch
- MINOR: hasterm: Change hstream_add_data() to prepare zero-copy data forwarding
- MEDIUM: haterm: Add support for 0-copy data forwarding and option to disable it
- MEDIUM: haterm: Prepare support for splicing by initializing a master pipe
- MEDIUM: haterm: Add support for splicing and option to disable it
- MINOR: haterm: Handle boolean request options as flags
- MINOR: haterm: Add an request option to disable splicing
- BUG/MINOR: ssl: fix memory leak in ssl_fc_crtname by using SSL_CTX ex_data index
BUG/MINOR: ssl: fix memory leak in ssl_fc_crtname by using SSL_CTX ex_data index
The ssl_crtname_index was registered with SSL_get_ex_new_index(), but the
certificate name is stored on an SSL_CTX object via SSL_CTX_set_ex_data().
The free callback is only invoked for the object type matching the index
registration, so the strdup'd name was never freed when the SSL_CTX was
released.
Fix this by using SSL_CTX_get_ex_new_index() instead, which ensures the
free callback fires when the SSL_CTX is destroyed.
MINOR: haterm: Handle boolean request options as flags
The following request options are now handled as flags:
- ?k=1 => flag HS_ST_OPT_CHUNK_RES is set
- ?c=0 => flag HS_ST_OPT_NO_CACHE is set
- ?R=1 => flag HS_ST_OPT_RANDOM_RES is set
- ?A=A => flag HS_ST_OPT_REQ_AFTER_RES is set.
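For illustration only, the mapping above can be sketched as a small flag parser; the flag values and the parse_req_opts() helper below are hypothetical stand-ins, not the actual haterm code:

```c
#include <assert.h>
#include <string.h>

/* Illustrative flag values; the real HS_ST_OPT_* definitions live in
 * haterm's own headers. */
#define HS_ST_OPT_CHUNK_RES      0x01  /* ?k=1: chunked response */
#define HS_ST_OPT_NO_CACHE       0x02  /* ?c=0: disable caching */
#define HS_ST_OPT_RANDOM_RES     0x04  /* ?R=1: random response body */
#define HS_ST_OPT_REQ_AFTER_RES  0x08  /* ?A=A: request after response */

/* Fold the boolean request options found in a query string into a
 * single bitmask. */
static unsigned int parse_req_opts(const char *qs)
{
    unsigned int flags = 0;

    if (strstr(qs, "k=1"))
        flags |= HS_ST_OPT_CHUNK_RES;
    if (strstr(qs, "c=0"))
        flags |= HS_ST_OPT_NO_CACHE;
    if (strstr(qs, "R=1"))
        flags |= HS_ST_OPT_RANDOM_RES;
    if (strstr(qs, "A=A"))
        flags |= HS_ST_OPT_REQ_AFTER_RES;
    return flags;
}
```

Checking an option then becomes a simple bit test such as (flags & HS_ST_OPT_NO_CACHE) instead of one boolean field per option.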
MEDIUM: haterm: Prepare support for splicing by initializing a master pipe
Now that the zero-copy data forwarding is supported, we will add splicing
support. To do so, we first create a master pipe with vmsplice() during
haterm startup. This is only performed if splicing is supported, and its
size can be configured by setting the "tune.pipesize" global parameter.
This master pipe will be used to fill the pipe shared with the client.
MEDIUM: haterm: Add support for 0-copy data forwarding and option to disable it
The support for the zero-copy data forwarding was added and enabled by
default. The command line option '-dZ' was also added to disable the
feature.
Concretely, when haterm pushes the response payload, if the zero-copy
forwarding is supported, a dedicated function is used to do so.
hstream_ff_snd() relies on se_nego_ff() to know how much data can be sent
and, at the end, on se_done_ff() to really send the data.
The hstream_add_ff_data() function was added to perform the raw copy of the
payload into the sedesc I/O buffer.
MINOR: hasterm: Change hstream_add_data() to prepare zero-copy data forwarding
The hstream_add_data() function is renamed to hstream_add_htx_data() because
there will be a similar function to add data in zero-copy forwarding
mode. The function was also adapted to take the data length to add as a
parameter and to return the number of written bytes.
MINOR: ssl: add the ssl_fc_crtname sample fetch
This new sample fetch returns the name of the certificate selected for
an incoming SSL/TLS connection, as it would appear in "show ssl cert".
It may be a filename with its relative or absolute path, or an alias,
depending on how the certificate was declared in the configuration.
The certificate name is stored as ex_data on the SSL_CTX at load time
in ckch_inst_new_load_store(), and freed via a dedicated free callback.
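As a usage sketch, only the ssl_fc_crtname name comes from the patch; the surrounding configuration below is illustrative:

```
frontend fe
    bind :443 ssl crt /etc/haproxy/certs/
    # expose the name of the certificate that was selected for this connection
    http-response set-header X-Cert-Name %[ssl_fc_crtname]
```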
BUG/MINOR: cfgcond: fail cleanly on missing argument for "feature"
The "feature" predicate takes a feature name as argument. Not passing one
will cause strstr() to always find something, including at the end of the
string, and to read past the end, which ASAN detects. We need to check that
we did not reach the end before proceeding.
This bug was reported by OSS Fuzz here:
https://issues.oss-fuzz.com/issues/499133314
The issue is present since 2.4 with commit 58ca706e16 ("MINOR: config:
add predicate "feature" to detect certain built-in features") so this
fix must be backported to all stable versions.
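The shape of the fix can be sketched as follows; this is a simplified stand-in for the real predicate, with a hypothetical feature_enabled() helper and feature-list format, not HAProxy's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical lookup: <list> is a space-separated feature list such as
 * "+OPENSSL +PCRE2", <arg> is the feature name to look for.
 * Returns 1 if the feature is present, 0 otherwise. */
static int feature_enabled(const char *list, const char *arg)
{
    const char *p;
    size_t len = arg ? strlen(arg) : 0;

    /* the fix: a missing/empty argument must fail cleanly instead of
     * letting the strstr() loop match at the end of the string */
    if (!len)
        return 0;

    for (p = list; (p = strstr(p, arg)); p += len) {
        /* require a whole-word match, delimited by '+'/' ' or the ends */
        if ((p == list || p[-1] == '+' || p[-1] == ' ') &&
            (p[len] == '\0' || p[len] == ' '))
            return 1;
    }
    return 0;
}
```

The important part is the early return on a missing or empty argument, which is what the original code lacked.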
BUG/MINOR: cfgcond: always set the error string on awslc_api checks
Using awslc_api_before() with an invalid argument results in "(null)"
appearing in the error message due to -1 being returned without the
error message being filled. Let's always fill the error message on error.
This was introduced in 3.3 with commit 3d15c07ed0 ("MINOR: cfgcond: add
"awslc_api_atleast" and "awslc_api_before""), and this fix must be
backported to 3.3.
BUG/MINOR: cfgcond: always set the error string on openssl_version checks
Using openssl_version_before() with an invalid argument results in "(null)"
appearing in the error message due to -1 being returned without the error
message being filled. Let's always fill the error message on error.
This was introduced in 2.5 with commit 3aeb3f9347 ("MINOR: cfgcond:
implements openssl_version_atleast and openssl_version_before"), and
this fix must be backported to 2.6.
BUG/MINOR: cfgcond: properly set the error pointer on evaluation error
cfg_eval_condition() says that the <errptr> pointer will be set upon
error. However, cfg_eval_cond_expr() can fail (e.g. failure to handle
a dynamic argument) but would branch to "done" and leave errptr unset.
Let's check for this case as well.
This bug was reported by OSS Fuzz here:
https://issues.oss-fuzz.com/issues/499135825
The bug was introduced in 2.5 around commit ca81887599 ("MINOR:
cfgcond: insert an expression between the condition and the term") so
the fix must be backported as far as 2.6.
MEDIUM: acme: split the initial delay from the retry DNS delay
The previous ACME_RSLV_WAIT state served a dual role: it applied the
initial dns-delay before the first DNS probe and also handled the
delay between retries. There was no way to simply wait a fixed delay
before submitting the challenge without also triggering DNS pre-checks.
Replace ACME_RSLV_WAIT with two distinct states:
- ACME_INITIAL_DELAY: an optional initial wait before proceeding,
only applied when "challenge-ready" includes the new "delay" keyword
- ACME_RSLV_RETRY_DELAY: the delay between resolution retries, always
applied when DNS pre-checks are in progress
The new "delay" keyword in "challenge-ready" can be used standalone
(wait then submit the challenge directly) or combined with "dns" (wait
then start the DNS pre-checks). When "delay" is not set, the first DNS
probe fires immediately.
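An illustrative acme section using the new keyword; the section layout and surrounding keywords are sketched from this changelog, not copied from the documentation:

```
acme LE
    challenge dns-01
    # wait dns-delay once, then start the DNS pre-checks
    challenge-ready dns delay
    dns-delay 30s
    # or, standalone: wait once, then submit the challenge directly
    # challenge-ready delay
```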
MINOR: acme: remove remaining CLI wait in ACME_RSLV_TRIGGER
The TASK_WOKEN_TIMER check that previously handled the case where
RSLV_TRIGGER was reached directly from the CLI command is therefore dead
code and can be removed.
BUG/MINOR: mux_quic: fix uninit for QMux emission
Fix the following build warning emitted by older compilers for the
<orig_frm> variable in the qcc_qstrm_send_frames() function:
src/mux_quic_qstrm.c:266:17: warning: 'orig_frm' may be used
uninitialized in this function [-Wmaybe-uninitialized]
The variable is now explicitly initialized to NULL on each loop iteration,
which should prevent this warning. Note that for code clarity, the variable
was renamed <next_frm>.
MEDIUM: acme: initialize the dns timer starting from the first DNS request
Previously the dns timeout timer was initialized in ACME_RSLV_WAIT,
before the initial dns-delay expires. This meant the countdown started
before any DNS request was actually sent, so the effective timeout was
shorter than expected by one dns-delay period.
Move the initialization to ACME_RSLV_TRIGGER so the timer starts only
when the first DNS resolution attempt is triggered. Update the
documentation to clarify this behaviour.
Amaury Denoyelle [Fri, 27 Mar 2026 13:29:09 +0000 (14:29 +0100)]
MINOR: mux-quic: use QMux transport parameters from qstrm xprt
Defines an API for xprt_qstrm so that the QMux transport parameters can
be retrieved by the MUX layer on its initialization. This concerns both
local and remote parameters.
Functions xprt_qstrm_lparams/rparams() are defined and exported for
this. They are both used in qmux_init() if QMux protocol is active.
Amaury Denoyelle [Wed, 25 Mar 2026 13:17:38 +0000 (14:17 +0100)]
MEDIUM: ssl: load xprt_qstrm after handshake completion
On SSL handshake completion, the MUX layer can be initialized if this was
not already done. However, for the QMux protocol, it is necessary to first
perform the transport parameters exchange, via the new xprt_qstrm layer.
This patch ensures this is performed if any CO_FL_QSTRM_* flag is set on
the connection.
Also, the SSL layer registers itself via add_xprt. This ensures that it
can be used by xprt_qstrm for the emission/reception of the necessary
frames.
Amaury Denoyelle [Wed, 25 Mar 2026 13:14:20 +0000 (14:14 +0100)]
MINOR: xprt_qstrm: implement sending of transport parameters
This patch implements QMux emission of transport parameters via
xprt_qstrm. Similarly to reception, this is performed in conn_send_qstrm(),
which uses the lower xprt snd_buf operation. The connection must first be
flagged with CO_FL_QSTRM_SEND to trigger this step.
Amaury Denoyelle [Wed, 25 Mar 2026 08:05:21 +0000 (09:05 +0100)]
MINOR: xprt_qstrm: implement reception of transport parameters
Extend xprt_qstrm to implement the reception of QMux transport
parameters. This is performed via conn_recv_qstrm() which relies on the
lower xprt rcv_buf operation. Once received, parameters are kept in
xprt_qstrm context, so that the MUX can retrieve them on init.
For the reception of parameters to be active, the connection must first
be flagged with CO_FL_QSTRM_RECV.
Amaury Denoyelle [Wed, 25 Mar 2026 08:03:41 +0000 (09:03 +0100)]
MINOR: xprt_qstrm: define callback for ALPN retrieval
Add get_alpn operation support for xprt_qstrm. This simply acts as a
passthrough method to the underlying XPRT layer.
This function is necessary for QMux when running above SSL, as mux-quic
will access ALPN during its initialization in order to instantiate the
proper application protocol layer.
Amaury Denoyelle [Tue, 24 Mar 2026 15:58:48 +0000 (16:58 +0100)]
MINOR: xprt_qstrm: define new xprt module for QMux protocol
Define a new XPRT layer for the new QMux protocol. Its role will be to
perform the initial exchange of transport parameters.
On completion, contrary to an XPRT handshake, xprt_qstrm will first init
the MUX and then remove itself. This is necessary so that the
parameters can be retrieved by the MUX during its initialization.
This patch only declares the new xprt_qstrm along with basic operations.
Future commits will implement the proper reception/emission steps.
MINOR: mux-quic: define Tx connection buffer for QMux
Similarly to reception, a new buffer is defined in QCC connection to
handle emission for QMux protocol. This replaces the trash buffer usage
in qcc_qstrm_send_frames().
This buffer is necessary to handle partial emission. On retry, the
buffer must be completely emitted before starting to send new frames.
Amaury Denoyelle [Fri, 27 Mar 2026 09:16:56 +0000 (10:16 +0100)]
MINOR: mux-quic: implement qstream send callback
Each time a QUIC frame is emitted, mux-quic layer is notified via a
callback to update the underlying QCS. For QUIC, this is performed via
qc_stream_desc element.
In QMux protocol, this can be simplified as there is no
qc_stream_desc/quic_conn layer interaction. Instead, each time snd_buf
is called, QCS can be updated immediately using its return value. This
is performed via a new function qstrm_ctrl_send().
Its work is similar to the QUIC equivalent but in a simpler mode. In
particular, sent data can be immediately removed from the Tx buffer as
there is no need for retransmission when running above TCP.
Amaury Denoyelle [Fri, 27 Mar 2026 13:41:40 +0000 (14:41 +0100)]
MEDIUM: mux-quic: implement QMux send
This patch implements mux-quic emission for the new QMux protocol. This
is performed via the new function qcc_qstrm_send_frames(). Its interface
is similar to the QUIC equivalent: it takes a list of frames and
encodes them into a buffer before sending it via snd_buf.
Contrary to QUIC, a check on the CO_FL_ERROR flag is performed prior to
every qcc_qstrm_send_frames() invocation to interrupt emission. This is
necessary as the transport layer may set it during snd_buf. This is not
currently the case for the quic_conn layer, but maybe a similar mechanism
should be implemented for QUIC in the future as well.
MINOR: mux_quic: implement qstrm rx buffer realign
The previous patch defined a new QCC buffer member to implement QMux
reception. This patch completes it by performing a realign on this buffer
during qcc_qstrm_recv(). This is necessary when there is not enough
contiguous data to read a whole frame.
Amaury Denoyelle [Fri, 27 Mar 2026 09:14:39 +0000 (10:14 +0100)]
MINOR: mux-quic: define Rx connection buffer for QMux
When QMux is used, mux-quic must actively perform the reception of new
content. This has been implemented by the previous patch.
The current patch extends this by defining a buffer on the QCC dedicated
to this operation. This replaces the usage of the trash buffer and is
necessary to deal with incomplete reads.
Amaury Denoyelle [Fri, 27 Mar 2026 09:15:13 +0000 (10:15 +0100)]
MINOR: mux-quic: handle flow-control frame on qstream read
Implements parsing of frames related to flow-control for mux-quic
running on the new QMux protocol. This simply calls qcc_recv_*() MUX
functions already used by QUIC.
Amaury Denoyelle [Fri, 27 Mar 2026 13:39:34 +0000 (14:39 +0100)]
MEDIUM: mux-quic: implement QMux receive
This patch implements a new function qcc_qstrm_recv() dedicated to the
new QMux protocol. It is responsible for performing data reception via the
rcv_buf() callback. This is defined in a new mux_quic_strm module.
Read data is parsed into frames. Each frame is handled via standard
mux-quic functions. Currently, only the STREAM and RESET_STREAM types are
implemented.
One major difference between QUIC and QMux is that mux-quic is passive
on the reception side with the former protocol. For the new one, mux-quic
becomes active. Thus, a new call to qcc_qstrm_recv() is performed via
qcc_io_recv().
Amaury Denoyelle [Wed, 10 Dec 2025 09:43:36 +0000 (10:43 +0100)]
MINOR: quic: abstract stream type in qf_stream frame
STREAM frame will also be used by the new QMux protocol. This requires
some adaptation in the qf_stream structure. Reference to qc_stream_desc
object is replaced by a generic void* pointer.
This change is necessary as QMux protocol will not use any
qc_stream_desc elements for emission.
Amaury Denoyelle [Thu, 26 Mar 2026 14:03:04 +0000 (15:03 +0100)]
MINOR: mux-quic: prepare traces support for QMux
Ensure mux-quic traces will be compatible with the new QMux protocol.
This is necessary as the quic_conn element is accessed to display some
transport information. Use conn_is_quic() to protect these accesses.
MINOR: mux-quic: convert init/closure for QMux compatibility
Ensure mux-quic operations related to initialization and shutdown will
be compatible with the new QMux protocol. This requires using
conn_is_quic() before any access to the quic_conn element, in
qmux_init(), qcc_shutdown() and qcc_release().
Amaury Denoyelle [Thu, 26 Mar 2026 13:57:49 +0000 (14:57 +0100)]
MINOR: mux-quic: prepare Tx support for QMux
Adapts mux-quic functions related to emission for future QMux protocol
support.
In short, QCS will not use a qc_stream_desc object but instead a plain
buffer. This is inserted as a union in the QCS structure. Every access to
the QUIC qc_stream_desc is protected by a prior conn_is_quic() check.
Also, pacing is useless for QMux and is thus disabled for this protocol.
Amaury Denoyelle [Tue, 31 Mar 2026 15:55:10 +0000 (17:55 +0200)]
MINOR: mux-quic: move qcs stream member into tx inner struct
Move the <stream> field from the qcs type into the inner structure 'tx'.
This change is only a minor refactoring without any impact. It is cleaner
as the Rx buffer elements are already present in the 'rx' inner structure.
This reorganization is performed before the introduction of a new Tx
buffer field used for the QMux protocol.
Amaury Denoyelle [Mon, 30 Mar 2026 14:39:57 +0000 (16:39 +0200)]
MINOR: quic: implement QMux transport params frame parser/builder
Implement the parse/build methods for the QX_TRANSPORT_PARAMETER frame.
Both functions may fail, due to insufficient buffer space (encoding) or a
truncated frame (parsing).
Amaury Denoyelle [Wed, 12 Feb 2025 16:54:13 +0000 (17:54 +0100)]
MINOR: quic: define QMux transport parameters frame type
Define a new frame type for QMux transport parameter exchange. Frame
type is 0x3f5153300d0a0d0a and is declared as an extra frame, outside of
quic_frame_parsers / quic_frame_builders.
The next patch will implement parsing/encoding of this frame payload.
MINOR: quic: remove useless quic_tp_dec_err type
The previous patch refactored QUIC transport parameters decoding and
validity checks. These two operations are now performed in two distinct
functions. This renders the quic_tp_dec_err type useless, so this patch
removes it. Function returns are converted to a simple integer value.
MINOR: quic: split transport params decoding/check
The quic_transport_params_decode() function is used for decoding received
parameters. Prior to this patch, it also contained validity checks on
some of the parameters. Finally, it also tested that the mandatory
parameters were indeed found.
This patch separates these two parts. Parameter validity is now tested in
a new function, quic_transport_params_check(), which can be called just
after the decode operation.
This patch will be useful for the QMux protocol, as it allows reusing the
decode operation without executing the checks that are tied to the QUIC
specification, in particular for mandatory parameters.
BUG/MINOR: quic: fix documentation for transport params decoding
The documentation for the functions related to transport parameters
decoding is unclear or sometimes completely wrong about the meaning of the
<server> argument. It must be set to reflect the origin of the parameters,
contrary to what was implied in the function comments.
Fix this by rewriting the comments related to this <server> argument.
This should prevent mistakes in the future.
This is purely a documentation fix. However, it could be useful to
backport it up to 2.6.
Amaury Denoyelle [Mon, 30 Mar 2026 12:11:17 +0000 (14:11 +0200)]
MINOR: quic: refactor frame encoding
This patch is a direct follow-up of the previous one. This time,
refactoring is performed on qc_build_frm() which is used for frame
encoding.
The function prototype has changed: the packet argument is removed. To be
able to check frame validity against a packet, one can use the new parent
function qc_build_frm_pkt(), which relies on qc_build_frm().
As with the previous patch, no functional change is expected. The
objective is to facilitate a future QMux implementation.
Amaury Denoyelle [Wed, 19 Feb 2025 13:53:14 +0000 (14:53 +0100)]
MINOR: quic: refactor frame parsing
This patch refactors parsing in the QUIC frame module. The function
qc_parse_frm() has been split into three:
* qc_parse_frm_type()
* qc_parse_frm_pkt()
* qc_parse_frm_payload()
No functional change. The main objective of this patch is to facilitate
a QMux implementation. One of the gains is the ability to manipulate QUIC
frames without any reference to a QUIC packet, as it is irrelevant for
QMux. Also, quic_set_connection_close() calls are extracted as this
relies on the qc type. The caller is now responsible for setting the
required error code.
MINOR: acme: set the default dns-delay to 30s
Set the default dns-delay to 30s so it can be more efficient with fast
DNS providers. The dns-timeout is set to 600s by default, so this does
not have a big impact: it will only perform more checks and allow the
challenge to be started more quickly.
MINOR: acme: add 'dns-timeout' keyword for dns-01 challenge
When using the dns-01 challenge method with "challenge-ready dns", HAProxy
retries DNS resolution indefinitely at the interval set by "dns-delay". This
adds a "dns-timeout" keyword to set a maximum duration for the DNS check phase
(default: 600s). If the next resolution attempt would be scheduled beyond that
deadline, the renewal is aborted with an explicit error message.
A new "dnsstarttime" field is stored in the acme_ctx to record when DNS
resolution began, used to evaluate the timeout on each retry.
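A minimal sketch of the relevant keywords together; the values shown are the defaults mentioned above, and the surrounding section layout is assumed:

```
acme LE
    challenge dns-01
    challenge-ready dns
    dns-delay 30s      # interval between DNS pre-check attempts
    dns-timeout 600s   # abort the renewal if the DNS check phase exceeds this
```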
MEDIUM: tcpcheck/server: Add healthcheck server keyword
Thanks to this patch, it is now possible to specify a healthcheck section
on the server line. In that case, the server will use the tcpcheck
defined in the corresponding healthcheck section instead of the proxy's
one.
MEDIUM: tcpcheck: Add parsing support for healthcheck sections
The tcpcheck_ruleset struct was extended to host a config part that will
be used for healthcheck sections. This config part is mainly used to store
elements for the server's tcpcheck part.
When a healthcheck section is parsed, a ruleset is created with its name
(which must be unique). "*healthcheck-{NAME}" is used for these rulesets,
so it is not possible to mix them with regular rulesets.
For now, in a healthcheck section, the type must be defined, based on the
option name (tcp-check, httpchk, redis-check...). In addition, several
"tcp-check" or "http-check" rules can be specified, depending on the
healthcheck type.
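Putting the section and the server keyword together, a configuration could look like this; the exact syntax is sketched from the description above, not taken from the final documentation:

```
healthcheck mycheck
    option httpchk GET /status
    http-check expect status 200

backend app
    server srv1 192.0.2.10:80 check healthcheck mycheck
```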
MEDIUM: tcpcheck: Split parsing functions to prepare healthcheck sections parsing
The functions used to parse directives related to tcpchecks were split to
have a first step testing the proxy and creating the tcpcheck ruleset if
necessary, and a second step filling the ruleset. The aim of this patch is
to prepare the parsing of healthcheck sections. In this context, only the
second step will be used.