v4tov6() and v6tov4() helper functions were initially implemented in
4f92d3200 ("[MEDIUM] IPv6 support for stick-tables").
However, since ceb4ac9c3 ("MEDIUM: acl: support IPv6 address matching"),
support for legacy ip6-to-ip4 conversion formats was added, with the
parsing logic performed directly in acl_match_ip() (which later became
pat_match_ip()).
The issue is that the original v6tov4() function used for sample
expression handling lacks those additional formats, so we could face
inconsistencies depending on whether we rely on ip4/ip6 conversions from
an ACL context or an expression context.
To unify the ip4/ip6 automatic mapping behavior, we reworked the v4tov6()
and v6tov4() functions so that they now behave like pat_match_ip().
Note: '6to4 (RFC3056)' and 'RFC4291 ipv4 compatible address' formats are
still supported for legacy purposes despite being deprecated for a while
now.
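For reference, here is a minimal sketch of the three recognized mappings
(a simplified illustration mirroring pat_match_ip()'s logic, not
HAProxy's exact code; v6tov4_sketch is a hypothetical name):

    #include <arpa/inet.h>
    #include <string.h>

    /* Returns 1 and fills <out> if <in6> embeds an IPv4 address in one
     * of the three supported formats, otherwise returns 0. */
    static int v6tov4_sketch(struct in_addr *out, const struct in6_addr *in6)
    {
        const unsigned char *p = in6->s6_addr;
        static const unsigned char mapped[12] = { 0,0,0,0,0,0,0,0,0,0,0xff,0xff };
        static const unsigned char compat[12] = { 0 };

        if (memcmp(p, mapped, 12) == 0 ||   /* ::ffff:A.B.C.D, IPv4-mapped (RFC4291) */
            memcmp(p, compat, 12) == 0) {   /* ::A.B.C.D, IPv4-compatible, deprecated */
            memcpy(&out->s_addr, p + 12, 4);
            return 1;
        }
        if (p[0] == 0x20 && p[1] == 0x02) { /* 2002::/16, 6to4 (RFC3056) */
            memcpy(&out->s_addr, p + 2, 4); /* v4 address lives in bits 16..47 */
            return 1;
        }
        return 0;
    }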
BUG/MEDIUM: http-ana: Try to handle response before handling server abort
In the request analyser responsible for forwarding the request, we try to
detect the server abort in order to stop the request forwarding. However, we
must be careful not to block the response processing, if any. Indeed, it is
possible to receive the response and the server abort at the same time. In
this case, we must try to forward the response to the client first.
So, to fix the issue, the request analyser no longer handles the server
abort when the response channel is not empty. In the end, the response
analyser is able to detect the server abort if it is relevant. Otherwise,
the stream will be woken up after the response forwarding and the server
abort will be handled at this stage.
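Schematically, the added guard looks like this (a sketch with a
hypothetical helper name, not the literal analyser code):

    /* request forwarding analyser: postpone server-abort handling while
     * a response is pending so it can be forwarded to the client first */
    if (server_abort_detected(s) && channel_is_empty(&s->res)) {
        /* now it is safe to stop the request forwarding */
    }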
This patch should be backported as far as 2.7 only, because the risk of
breakage is high. And it is probably a good idea to wait a bit before
backporting it.
CLEANUP: ring: rename the ring lock "RING_LOCK" instead of "LOGSRV_LOCK"
The ring lock was initially mostly used for the logs and used to inherit
its name in lock stats. Now that it's exclusively used by rings, let's
rename it accordingly.
MEDIUM: logs: atomically check and update the log sample index
The log server lock is pretty visible in perf top when using log samples
because it's taken for each server in turn while trying to validate and
update the log server's index. Let's change this for a CAS, since we now
have the index and the range at hand. This allows us to remove the logsrv
lock.
The test on 4 servers now shows a 3.7 times improvement thanks to much
lower contention. Without log sampling a test producing 4.4M logs/s
delivers 4.4M logs/s at 21 CPUs used, everything spent in the kernel.
After enabling 4 samples (1:4, 2:4, 3:4 and 4:4), the throughput would
previously drop to 1.13M log/s with 37 CPUs used and 75% spent in
process_send_log(). Now with this change, 4.25M logs/s are emitted,
using 26 CPUs and 22% in process_send_log(). That's a 3.7x throughput
improvement for a 30% global CPU usage reduction, but in practice it
mostly shows that the performance drop caused by having samples is much
less noticeable (each of the 4 servers has its index updated for each
log).
Note that in order to avoid even incrementing an index for each logsrv
that is consulted, it would be more convenient to have a single index
per frontend and apply the modulus on each log server in turn to see if
the range has to be updated. It would then only perform one write per
range switch. However, the place where this is done doesn't have access
to a frontend, so some changes would be needed for this, and it would
require updating the current range independently in each logsrv, which
is not necessarily easier since we don't know yet if we can commit it.
MINOR: logs: use a single index to store the current range and index
By using a single long long to store both the current range and the
next index, we'll make it possible to perform atomic operations instead
of locking. Let's only regroup them for now under a new "curr_rg_idx".
The upper word is the range, the lower is the index.
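A minimal sketch of the packing and of the lockless update it enables
(simplified, illustrative field handling, not the exact HAProxy code):

    /* curr_rg_idx: upper 32 bits = current range, lower 32 bits = index */
    uint64_t old, new;

    old = HA_ATOMIC_LOAD(&smp_info->curr_rg_idx);
    do {
        uint32_t rg  = old >> 32;
        uint32_t idx = (uint32_t)old + 1;
        /* ... switch <rg> to the next range when <idx> leaves it ... */
        new = ((uint64_t)rg << 32) | idx;
    } while (!HA_ATOMIC_CAS(&smp_info->curr_rg_idx, &old, new));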
CLEANUP: logs: rename a confusing local variable "curr_rg" to "smp_rg"
The variable curr_rg in process_send_log() is misleading because it is
not related to the integer curr_rg that's used to calculate it, instead
it's a pointer to the current smp_log_range from smp_rgs[], so let's call
it "smp_rg" as a singular for this "smp_rgs" and put an end to this
confusion.
MINOR: log: remove the unused curr_idx in struct smp_log_range
This index is useless because it only serves to know when the global
index reached the end, while the global one already knows it. Let's
just drop it and perform the test on the global range.
It was verified with a config using four sampled log servers that the
first server continues to take 1/10 of the traffic, the 2nd one 2/10,
the 3rd one 3/10 and the 4th one 4/10.
The test of the log range is not very clear, in part due to the
reuse of the "curr_idx" name that happens at two levels. The call
to in_smp_log_range() applies to the smp_info's index to which 1 is
added: it verifies that the next index is still within the current
range.
Let's just have a local variable "next_index" in process_send_log()
that gets assigned the next index (current+1) and compare it to the
current range's boundaries. This makes the test much clearer. We can
then simply remove in_smp_log_range() that's no longer needed.
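Schematically, the clarified test looks like this (a simplified sketch,
assuming the low/high fields of struct smp_log_range):

    /* process_send_log(): does the next log fall into this range? */
    uint next_index = (curr_rg_idx & 0xffffffff) + 1;
    if (next_index >= smp_rg->low && next_index <= smp_rg->high) {
        /* this log is sampled by the current server */
    }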
BUG/MINOR: server: add missing free for server->rdr_pfx
rdr_pfx was not being freed during server cleanup, leading to a small
memory leak when the "redir" argument was used on a server line (HTTP
only).
This should be backported to every stable version.
[For 2.6 and 2.7: the free should be performed in srv_drop() directly.
For older versions: free in deinit() function near the free for the
cookie string]
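The fix itself is a one-liner; schematically, in srv_drop() (mirroring
the other string members released during server cleanup):

    ha_free(&srv->rdr_pfx);   /* free the "redir" prefix string */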
The list iterator is broken. As found by Fred, running QUIC single-
threaded shows that only the first connection is accepted because the
accepter relies on the element being initialized once detached (which
is expected and matches what MT_LIST_DELETE_SAFE() used to do before).
However, while doing this in the quic_sock code seems to work, doing it
inside the macro shows total breakage and the unit test doesn't work
anymore (random crashes). Thus it looks like the fix is not trivial;
let's roll this back for the time it will take to fix the loop.
BUILD: quic: fix build on centos 8 and USE_QUIC_OPENSSL_COMPAT
When building with USE_QUIC_OPENSSL_COMPAT=1 on CentOS 8, the build fails
this way:
In file included from src/quic_openssl_compat.c:11:
/usr/include/openssl/kdf.h:33:46: error: unknown type name 'va_list'
int EVP_KDF_vctrl(EVP_KDF_CTX *ctx, int cmd, va_list args);
This is because openssl/kdf.h is included before openssl-compat.h.
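The fix amounts to reordering the includes; schematically, in
src/quic_openssl_compat.c (assuming the compat header pulls in the
definitions needed by kdf.h, such as va_list):

    #include <haproxy/openssl-compat.h>   /* must come before kdf.h */
    #include <openssl/kdf.h>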
BUG/MAJOR: mux-h2: Report a protocol error for any DATA frame before headers
If any DATA frame is received before all headers are fully received, a
protocol error must be reported. This is required by the HTTP/2 RFC, but it
is also important because the HTTP analyzers expect the first HTX block to
be a start-line. It leads to a crash if this requirement is not respected.
For instance, it is possible to trigger a crash by sending an interim
message with a DATA frame (it may be an empty DATA frame with the ES
flag). AFAIK, only the server side is affected by this bug.
To fix the issue, a protocol error is reported for the stream.
This patch should fix issue #2291. It must be backported as far as 2.2
(and probably to 2.0 too).
BUG/MINOR: freq_ctr: fix possible negative rate with the scaled API
In 1.9, with commit 627505d36 ("MINOR: freq_ctr: add swrate_add_scaled()
to work with large samples"), we got the ability to indicate, when adding
some values, that they represent a number of samples. However, there is an
issue in the calculation: the number of samples that is added to the sum
before the division, in order to avoid fading away too fast, is multiplied
by the scale. The problem this causes is that the multiplication happens in
the negative part of the expression, so as soon as the sum of old_sum and
v*s is too small (e.g. zero), we end up with a negative value of -s.
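Schematically, with <o> the old sum, <v> the value, <s> the sample count
and <n> the period (an illustration of the issue and of one possible
corrected form, not the literal code):

    /* buggy: the "+n" rounding term gets multiplied by the scale, so
     * with o == 0 and v == 0 the result underflows to -s */
    new_sum = o + v * s - (o + n) * s / n;

    /* possible fix: scale the sum but keep the rounding term unscaled,
     * so the result can no longer go negative */
    new_sum = o + v * s - (o * s + n - 1) / n;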
This is visible in "show pools", which occasionally reports a very large
value for "needed_avg" since 2.9, though the bug has been there for longer.
Indeed, since 2.9 these values are hashed in buckets, so it suffices that
any thread hit this error once for the sum to be wrong. One possible
impact is memory usage not shrinking after a short burst, due to pools
refraining from releasing objects, believing they don't have enough.
This must be backported to all versions. Note that the opportunistic
version can be dropped before 2.8.
In very rare cases, it is possible that packets are detected as lost, their
frames requeued, then the connection released for some reason (e.g. killed
because of a fatal sendto() failure) without those frames being released.
Such frames are lost and never released because the function which releases
their packet number spaces does not release the frames which are still
enqueued to be resent.
CLEANUP: pools: simplify the pool expression when no pool was matched in dump
When dumping pool information, we make a special case of the condition
where the pool couldn't be identified and we consider that it was the
correct one. In the code arrangements brought by commit efc46dede ("DEBUG:
pools: inspect pools on fatal error and dump information found"), a
ternary expression for testing this depends on the "if" block condition
so this can be simplified and will make Coverity happy. This was reported
in GH #2290.
MAJOR: import: update mt_list to support exponential back-off
The new mt_list code supports exponential back-off on conflict, which
is important for use cases where there is contention on a large number
of threads. The API evolved a little bit and required some updates:
- mt_list_for_each_entry_safe() is now in upper case to explicitly
show that it is a macro; it only uses the back element and doesn't
require a secondary pointer for deletes anymore.
- MT_LIST_DELETE_SAFE() doesn't exist anymore; instead one just has
to set the list iterator to NULL so that the element is not re-inserted
into the list and the list is spliced there. One must be careful
because this was usually performed before freeing the element. Now
the iterator must be nulled before the continue/break (see the
sketch after this list).
- MT_LIST_LOCK_ELT() and MT_LIST_UNLOCK_ELT() have always been
unclear. They were replaced by mt_list_cut_around() and
mt_list_connect_elem() which more explicitly detach the element
and reconnect it into the list.
- MT_LIST_APPEND_LOCKED() was only in haproxy so it was left as-is
in list.h. It may however possibly benefit from being upstreamed.
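Here is a sketch of the new deletion idiom (hypothetical struct foo and
must_remove(), for illustration only):

    /* save the pointer, null the iterator so the loop does not re-link
     * the element into the list, then release it */
    MT_LIST_FOR_EACH_ENTRY_SAFE(item, &head, list, back) {
        if (must_remove(item)) {
            struct foo *f = item;
            item = NULL;          /* prevents re-insertion on continue */
            free(f);
            continue;
        }
    }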
This required tiny adaptations to event_hdl.c and quic_sock.c. The
test case was updated and the API doc added. Note that in order to
keep include files small, the struct mt_list definition remains in
list-t.h (part of the internal API) and was ifdef'd out in mt_list.h.
A test on QUIC with both quictls 1.1.1 and wolfssl 5.6.3 on ARM64 with
80 threads shows a drastic reduction of CPU usage thanks to this and
the refined memory barriers. Please note that the CPU usage on OpenSSL
3.0.9 is significantly higher due to the excessive use of atomic ops
by openssl, but 3.1 is only slightly above 1.1.1 though:
- before: 35 Gbps, 3.5 Mpps, 7800% CPU
- after: 41 Gbps, 4.2 Mpps, 2900% CPU
BUG/MEDIUM: master/cli: Pin the master CLI on the first thread of group 1
There is no reason to start the master CLI on several threads and on several
groups. And in fact, it must not be done, otherwise the same FD is inserted
several times in the fdtab, leading to a crash during startup because of a
BUG_ON(). This happens when several groups are configured.
To fix the bug the master CLI is now pinned on the first thread of the first
group.
This patch should fix the issue #2259 and must be backported to 2.8.
When a server is in maintenance, check.status is no longer updated.
Therefore, we shouldn't consider check.status when checks are not active.
This condition was properly implemented in the haproxy_server_check_status
metric, but wasn't carried over to backend_agg_check_status, which
introduced inconsistencies between the two metrics.
BUG/MEDIUM: mux-fcgi: Don't swap trash and dbuf when handling STDERR records
Trash chunks are buffers, but they are not allocated from the buffer pool.
And the "trash" chunk is static and thread-local. These are two reasons not
to swap it with a regular buffer allocated from the buffer pool.
Unfortunately, that is exactly what is performed in the FCGI mux when a
STDERR record is handled. b_xfer() is used to copy data from the demux
buffer to the trash to format the error message. A zero-copy via a swap may
be performed. In this case, this leads to a memory corruption and a crash
because, some time later, the demux buffer is released because it is
empty. And it is in fact the trash chunk.
b_force_xfer() must be used instead. This function forces the copy.
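Schematically, the difference (a simplified sketch based on the buffer
API from include/haproxy/buf.h):

    /* b_xfer() may swap the two buffers instead of copying when the
     * destination is empty and everything fits: fatal here, since the
     * trash chunk must never end up back in the buffer pool.
     * b_force_xfer() always copies the bytes. */
    b_force_xfer(&trash, &fconn->dbuf, b_data(&fconn->dbuf));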
This patch must be backported as far as 2.2. For 2.4 and 2.2,
b_force_xfer() does not exist; for these versions, the commit introducing
b_force_xfer() must be backported too.
BUG/MINOR: hlua/init: coroutine may not resume itself
It's not supported to call lua_resume() with <L> and <from> designating
the same Lua coroutine. It didn't cause visible bugs so far because
Lua 5.3 used to be more permissive about this, and moreover, yielding
is not involved during the hlua init state.
But this is wrong usage, and the doc clearly specifies that the <from>
argument can be NULL when there is no such coroutine, which is the case
here.
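Schematically (Lua 5.3 C API signature shown; 5.4 adds an nresults
output argument):

    /* before: <from> wrongly designated the coroutine being resumed */
    lua_resume(hlua->T, hlua->T, hlua->nargs);

    /* after: no "from" coroutine exists during init, so pass NULL */
    lua_resume(hlua->T, NULL, hlua->nargs);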
This should be backported to every stable version.
Once the call returns, we may call the function again with the same
hlua context when E_YIELD is returned (the execution was interrupted
and may be resumed through another lua_resume() call).
The 3rd argument to lua_resume(), 'nargs', is a hint passed to Lua to
know how many (optional) arguments were pushed on the stack prior to
resuming the execution (arguments that Lua will then expose to the Lua
script).
But here is the catch: we never reset lua->nargs between successive
lua_resume() calls, meaning that subsequent lua_resume() calls will still
inherit the initial nargs value that was set in the hlua ctx prior
to calling hlua_ctx_resume() (our wrapper function) for the first time.
This is problematic because, despite not being explicitly mentioned in
the Lua documentation, the passed arguments (to which nargs refers) are
already consumed once lua_resume() returns.
This means that we cannot keep calling lua_resume() with non-zero nargs
if we don't push new arguments on the stack prior to resuming Lua after
the initial call: nargs is proper to a single lua_resume() invocation.
Despite improper use of lua_resume() for a long time, this didn't cause
visible issues in the past with Lua 5.3, but it is particularly sensitive
starting with Lua 5.4.3 due to debugging hooks improvements that led to
some internal changes (see: lua/lua@58aa09a). Not using nargs properly
now exposes us to undefined behavior when resuming after a yield triggered
from a debugging hook, which may cause running scripts to crash
unexpectedly: for instance with Lua raising errors and complaining about
values being NULL where it should not be the case.
For reference, this issue was initially raised on the Lua mailing list:
http://lua-users.org/lists/lua-l/2023-09/msg00005.html
In this patch, we immediately reset nargs when lua_resume() returns to
prevent any misuse.
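Schematically (per the Lua 5.3 signature, with hlua field names as in
the description above):

    /* the pushed arguments are consumed by this resume, so nargs must
     * not be replayed on later resumes of the same coroutine */
    ret = lua_resume(lua->T, NULL, lua->nargs);
    lua->nargs = 0;   /* reset immediately to prevent any misuse */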
It should be backported to every maintained version.
The pool sizes were rounded up a little bit too much with commit
30f931ead ("BUG/MEDIUM: pools: fix the minimum allocation size"). The
goal was in fact to make sure they were always at least large enough to
store 2 list heads, and stuffing this into the alignment calculation
resulted in the size being always rounded up to this size. This is
problematic because it means that the appended tag at the end doesn't
always catch potential overflows since more bytes than needed are
allocated. Moreover, this test was later reinforced by commit b5ba09ed5
("BUG/MEDIUM: pools: ensure items are always large enough for the
pool_cache_item"), proving that the first test was not always sufficient.
This needs to be reworked to proceed correctly:
- the two lists are needed when the object is in the cache, hence
when we don't care about the tag, which means that the tag's size,
if any, can easily cover for the missing bytes to reach that size.
This is actually what was already being checked for.
- the rounding should not be performed (beyond the size of a word to
preserve pointer alignment) when pool tagging is enabled, otherwise
we don't detect small overflows. It means that there will be less
merging when proceeding like this. Tests show that we merge 93 pools
into 36 without tags and 43 with tags enabled.
- the rounding should not consider the extra size, since it's already
done when calculating the allocated size later (i.e. don't round up
twice). The difference is subtle but it's what makes sure the tag
immediately follows the area instead of starting from the end.
Thanks to this, now when writing one byte too many at the end of a struct
stream, the error is instantly caught.
DEBUG: pools: print the contents surrounding the expected tag location
When no tag matches a known pool, we can inspect around to help figure
what could have possibly overwritten memory. The contents are printed
one machine word per line in hex, then using printable characters, and
when they can be resolved to a pointer, either the pool's pointer name
or a resolvable symbol with offset. The goal here is to help recognize
what is easily identifiable in memory.
For example, a test patch applied to stream_free() to corrupt memory
produces the following output:
FATAL: pool inconsistency detected in thread 1: tag mismatch on free().
caller: 0x59e968 (stream_free+0x6d8/0xa0a)
item: 0x13df5c1
pool: 0x12782c0 ('stream', size 888, real 904, users 1)
Tag does not match (0x4f00000000012782). Tag does not match any other pool.
Contents around address 0x13df5c1+888=0x13df939:
0x13df918 [00 00 00 00 00 00 00 00] [........]
0x13df920 [00 00 00 00 00 00 00 00] [........]
0x13df928 [00 00 00 00 00 00 00 00] [........]
0x13df930 [00 00 00 00 00 00 00 00] [........]
0x13df938 [c0 82 27 01 00 00 00 00] [..'.....] [pool:stream]
0x13df940 [4f c0 59 00 00 00 00 00] [O.Y.....] [stream_new+0x4f/0xbec]
0x13df948 [49 46 49 43 41 54 45 2d] [IFICATE-]
0x13df950 [81 02 00 00 00 00 00 00] [........]
0x13df958 [df 13 00 00 00 00 00 00] [........]
Other possible callers:
(...)
We notice that the tag references pool_head_stream with the allocation
point in stream_new. Another benefit is that a caller may be figured
from the tag even if the "caller" feature is not enabled, because upon
a free() we always put the caller's location into the tag. This should
be sufficient to debug most cases that normally require gdb.
MEDIUM: quic: Allow the quic_conn memory to be released ASAP
When sending packets from quic_cc_conn_io_cb(), e.g. when the quic_conn
object has been released and replaced by a lighter one (quic_cc_conn),
some counters may have to be incremented. But they were not reachable
because they were not shared between the quic_conn and quic_cc_conn
structs.
To fix this, one only has to move the ->cntrs counters from quic_conn
to the QUIC_CONN_COMMON struct, which is shared between both quic_conn
and quic_cc_conn.
Thank you to Tristan for having reported this in GH #2247.
DEBUG: pools: inspect pools on fatal error and dump information found
It's a bit frustrating sometimes to see pool checks catch a bug but not
provide exploitable information without a core.
Here we're adding a function "pool_inspect_item()" which is called just
before aborting in pool_check_pattern() and POOL_DEBUG_CHECK_MARK() and
which will display the error type, the pool's pointer and name, and will
try to check if the item's tag matches the pool, and if not, will iterate
over all pools to see if one would be a better candidate, then will try
to figure the last known caller and possibly other likely candidates if
the pool's tag is not sufficiently trusted. This typically helps better
diagnose corruption in use-after-free scenarios, or freeing to a pool
that differs from the one the object was allocated from, and will also
indicate calling points that may help figure where an object was last
released or allocated. The info is printed on stderr just before the
backtrace.
For example, the recent off-by-one test in the PPv2 changes would have
produced the following output in vtest logs:
*** h1 debug|FATAL: pool inconsistency detected in thread 1: tag mismatch on free().
*** h1 debug| caller: 0x62bb87 (conn_free+0x147/0x3c5)
*** h1 debug| pool: 0x2211ec0 ('pp_tlv_256', size 304, real 320, users 1)
*** h1 debug|Tag does not match. Possible origin pool(s):
*** h1 debug| tag: @0x2565530 = 0x2216740 (pp_tlv_128, size 176, real 192, users 1)
*** h1 debug|Recorded caller if pool 'pp_tlv_128':
*** h1 debug| @0x2565538 (+0184) = 0x62c76d (conn_recv_proxy+0x4cd/0xa24)
A mismatch in the allocated/released pool is already visible, and the
callers confirm it once resolved, where the allocator indeed allocates
from pp_tlv_128 and conn_free() releases to pp_tlv_256:
$ addr2line -spafe ./haproxy <<< $'0x62bb87\n0x62c76d'
0x000000000062bb87: conn_free at connection.c:568
0x000000000062c76d: conn_recv_proxy at connection.c:1177
DEBUG: pools: pass the caller pointer to the check functions and macros
In preparation for more detailed pool error reports, let's pass the
caller pointers to the check functions. This will be useful to produce
messages indicating where the issue happened.
DEBUG: pools: always record the caller for uncached allocs as well
When recording the caller of a pool_alloc(), we currently store it only
when the object comes from the cache and never when it comes from the
heap. There's no valid reason for this except that the caller's pointer
was not passed to pool_alloc_nocache(), so it used to set NULL there.
Let's just pass it down the chain.
When using the listener socket as file descriptor, the qc->fd value is -1.
In this case, one must not access the fdtab[qc->fd] element to change its
value.
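Schematically, the guard looks like this (a sketch; the exact assignment
performed in qc_new_cc_conn() is illustrative here):

    /* qc->fd is -1 when the connection shares the listener's socket,
     * so fdtab may only be touched for connection-owned sockets */
    if (qc->fd >= 0)
        fdtab[qc->fd].owner = cc_qc;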
This bug could have been detected by ASan with such a backtrace:
=================================================================
==402222==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fa8ecf417e8 at pc ... bp 0x7fa8e915cf90 sp 0x7fa8e915cf88
WRITE of size 8 at 0x7fa8ecf417e8 thread T6
#0 0x55707a0bf18a in qc_new_cc_conn src/quic_conn.c:838
#1 0x55707a0c6dc0 in quic_conn_release src/quic_conn.c:1408
#2 0x55707a10916f in quic_close src/xprt_quic.c:35
#3 0x55707a0cec77 in conn_xprt_close include/haproxy/connection.h:153
#4 0x55707a0ceed0 in conn_full_close include/haproxy/connection.h:197
#5 0x55707a0ec253 in qcc_release src/mux_quic.c:2412
#6 0x55707a0ec7d0 in qcc_io_cb src/mux_quic.c:2443
#7 0x55707a63ff2a in run_tasks_from_lists src/task.c:596
#8 0x55707a641cc9 in process_runnable_tasks src/task.c:876
#9 0x55707a56f7b2 in run_poll_loop src/haproxy.c:2954
#10 0x55707a5705fd in run_thread_poll_loop src/haproxy.c:3153
#11 0x7fa8f9450ea6 in start_thread nptl/pthread_create.c:477
#12 0x7fa8f936ea2e in __clone (/lib/x86_64-linux-gnu/libc.so.6+0xfba2e)
0x7fa8ecf417e8 is located 24 bytes to the left of 134217728-byte region [0x7fa8e
allocated by thread T0 here:
#0 0x7fa8f9a37037 in __interceptor_calloc ../../../../src/libsanitizer/asan/
#1 0x55707a71a61d in init_pollers src/fd.c:1161
#2 0x55707a56cdf1 in init src/haproxy.c:2672
#3 0x55707a5714c2 in main src/haproxy.c:3298
#4 0x7fa8f9296d09 in __libc_start_main ../csu/libc-start.c:308
Thread T6 created by T0 here:
#0 0x7fa8f99e22a2 in __interceptor_pthread_create ../../../../src/libsanitizpp:214
#1 0x55707a748a21 in setup_extra_threads src/thread.c:252
#2 0x55707a5735c9 in main src/haproxy.c:3844
#3 0x7fa8f9296d09 in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: heap-buffer-overflow src/quic_conn.c:838 in qc_new_cc
Shadow bytes around the buggy address:
0x0ff59d9e02a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0ff59d9e02b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0ff59d9e02c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0ff59d9e02d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0ff59d9e02e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0ff59d9e02f0: fa fa fa fa fa fa fa fa fa fa fa fa fa[fa]fa fa
0x0ff59d9e0300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff59d9e0310: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff59d9e0320: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff59d9e0330: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff59d9e0340: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==402222==ABORTING
Aborted
Thank you to @Tristan971 for having reported this bug in GH #2247.
Released version 2.9-dev5 with the following main changes :
- BUG/MEDIUM: mux-h2: fix crash when checking for reverse connection after error
- BUILD: import: guard plock.h against multiple inclusion
- BUILD: pools: import plock.h to build even without thread support
- BUG/MINOR: ssl/cli: can't find ".crt" files when replacing a certificate
- BUG/MINOR: stream: protect stream_dump() against incomplete streams
- DOC: config: mention uid dependency on the tune.quic.socket-owner option
- MEDIUM: capabilities: enable support for Linux capabilities
- CLEANUP/MINOR: connection: Improve consistency of PPv2 related constants
- MEDIUM: connection: Generic, list-based allocation and look-up of PPv2 TLVs
- MEDIUM: sample: Add fetch for arbitrary TLVs
- MINOR: sample: Refactor fc_pp_authority by wrapping the generic TLV fetch
- MINOR: sample: Refactor fc_pp_unique_id by wrapping the generic TLV fetch
- MINOR: sample: Add common TLV types as constants for fc_pp_tlv
- MINOR: ssl_sock: avoid iterating realloc(+1) on stored context
- DOC: ssl: add some comments about the non-obvious session allocation stuff
- CLEANUP: ssl: keep a pointer to the server in ssl_sock_init()
- MEDIUM: ssl_sock: always use the SSL's server name, not the one from the tid
- MEDIUM: server/ssl: place an rwlock in the per-thread ssl server session
- MINOR: server/ssl: maintain an index of the last known valid SSL session
- MINOR: server/ssl: clear the shared good session index on failure
- MEDIUM: server/ssl: pick another thread's session when we have none yet
- MINOR: activity: report the current run queue size
- BUG/MINOR: checks: do not queue/wake a bounced check
- MINOR: checks: start the checks in sleeping state
- MINOR: checks: pin the check to its thread upon wakeup
- MINOR: check: remember when we migrate a check
- MINOR: check/activity: collect some per-thread check activity stats
- MINOR: checks: maintain counters of active checks per thread
- MINOR: check: also consider the random other thread's active checks
- MEDIUM: checks: search more aggressively for another thread on overload
- MEDIUM: checks: implement a queue in order to limit concurrent checks
- MINOR: checks: also consider the thread's queue for rebalancing
- DEBUG: applet: Properly report opposite SC expiration dates in traces
- BUG/MEDIUM: stconn: Update stream expiration date on blocked sends
- BUG/MINOR: stconn: Don't report blocked sends during connection establishment
- BUG/MEDIUM: stconn: Wake applets on sending path if there is a pending shutdown
- BUG/MEDIUM: stconn: Don't block sends if there is a pending shutdown
- BUG/MINOR: quic: Possible skipped RTT sampling
- MINOR: quic: Add a trace to quic_release_frm()
- BUG/MAJOR: quic: Really ignore malformed ACK frames.
- BUG/MINOR: quic: Unchecked pointer to packet number space dereferenced
- BUG/MEDIUM: connection: fix pool free regression with recent ppv2 TLV patches
- BUG/MEDIUM: h1-htx: Ensure chunked parsing with full output buffer
- BUG/MINOR: stream: further protect stream_dump() against incomplete sessions
- DOC: configuration: update examples for req.ver
- MINOR: properly mark the end of the CLI command in error messages
- BUILD: ssl: Build with new cryptographic library AWS-LC
- REGTESTS: ssl: skip ssl_dh test with AWS-LC
- BUILD: bug: make BUG_ON() void to avoid a rare warning
- BUILD: checks: shut up yet another stupid gcc warning
- MINOR: cpuset: add ha_cpuset_isset() to check for the presence of a CPU in a set
- MINOR: cpuset: add ha_cpuset_or() to bitwise-OR two CPU sets
- MINOR: cpuset: centralize a reliable bound cpu detection
- MEDIUM: threads: detect incomplete CPU bindings
- MEDIUM: threads: detect excessive thread counts vs cpu-map
- BUILD: quic: Compilation issue on 32-bits systems with quic_may_send_bytes()
- BUG/MINOR: quic: Unchecked pointer to Handshake packet number space
- MINOR: global: export the display_version() symbol
- MEDIUM: mworker: display a more accessible message when a worker crashes
- MINOR: httpclient: allow to configure the retries
- MINOR: httpclient: allow to configure the timeout.connect
- BUG/MINOR: quic: Wrong RTT adjusments
- BUG/MINOR: quic: Wrong RTT computation (srtt and rtt_var)
- BUG/MINOR: stconn: Don't inhibit shutdown on connection on error
- BUG/MEDIUM: applet: Fix API for function to push new data in channels buffer
- BUG/MEDIUM: stconn: Report read activity when a stream is attached to front SC
- BUG/MEDIUM: applet: Report an error if applet request more room on aborted SC
- BUG/MEDIUM: stconn/stream: Forward shutdown on write timeout
- BUG/MEDIUM: stconn: Always update stream's expiration date after I/O
- BUG/MINOR: applet: Always expect data when CLI is waiting for a new command
- BUG/MINOR: ring/cli: Don't expect input data when showing events
- BUG/MINOR: quic: Dereferenced unchecked pointer to Handshke packet number space
- BUG/MINOR: hlua/action: incorrect message on E_YIELD error
- MINOR: http_ana: position the FINAL flag for http_after_res execution
- CI: scripts: add support to build-ssl.sh to download and build AWS-LC
- CI: add support to matrix.py to determine the latest AWS-LC release
- CI: Update matrix.py so all code is contained in functions.
- CI: github: Add a weekly CI run building with AWS-LC
- MINOR: ring: add a function to compute max ring payload
- BUG/MEDIUM: ring: adjust maxlen consistency check
- MINOR: sink: simplify post_sink_resolve function
- MINOR: log/sink: detect when log maxlen exceeds sink size
- MINOR: sink: inform the user when logs will be implicitly truncated
- MEDIUM: sink: don't perform implicit truncations when maxlen is not set
- MINOR: log: move log-forwarders cleanup in log.c
- MEDIUM: httpclient/logs: rely on per-proxy post-check instead of global one
- MINOR: log: add dup_logsrv() helper function
- MEDIUM: log/sink: make logsrv postparsing more generic
- MEDIUM: fcgi-app: properly postresolve logsrvs
- MEDIUM: spoe-agent: properly postresolve log rings
- MINOR: sink: add helper function to deallocate sink struct
- MEDIUM: sink/ring: introduce high level ring creation helper function
- MEDIUM: sink: add sink_finalize() function
- CLEANUP: log: remove unnecessary trim in __do_send_log
- MINOR: cache: Change hash function in default normalizer used in case of "vary"
- MINOR: tasks/stats: report the number of niced tasks in "show info"
- CI: Update to actions/checkout@v4
- MINOR: ssl: add support for 'curves' keyword on server lines
- BUG/MINOR: quic: Wrong cluster secret initialization
- CLEANUP: quic: Remove useless free_quic_tx_pkts() function.
- MEDIUM: init: initialize the trash earlier
- MINOR: tools: add function read_line_to_trash() to read a line of a file
- MINOR: cfgparse: use read_line_to_trash() to read from /sys
- MEDIUM: cfgparse: assign NUMA affinity to cpu-maps
- MINOR: cpuset: dynamically allocate cpu_map
- REORG: cpuset: move parse_cpu_set() and parse_cpumap() to cpuset.c
- CI: musl: highlight section if there are coredumps
- CI: musl: drop shopt in workflow invocation
REORG: cpuset: move parse_cpu_set() and parse_cpumap() to cpuset.c
These ones were still in cfgparse.c but they're not specific to the
config at all and may actually be used even when parsing cpu list
entries in /sys. Better move them where they can be reused.
cpu_map is 8.2kB/entry and there's one such entry per group, that's
~520kB total. In addition, the init code is still in haproxy.c enclosed
in ifdefs. Let's make this a dynamically allocated array in the cpuset
code and remove that init code.
Later we may even consider reallocating it once the number of threads
and groups is known, in order to shrink it a little bit, as the typical
setup with a single group will only need 8.2kB, thus saving half a MB
of RAM. This would require that the upper bound is placed in a variable
though.
MEDIUM: cfgparse: assign NUMA affinity to cpu-maps
Do not force affinity on the process, instead let's just apply it to
cpu-map, it will automatically be used later in the init process. We
can do this because we know that cpu-map was not set when we're using
this detection code.
This is much saner, as we don't need to manipulate the process' affinity
at this point in time, and just update the info that the user omitted to
set by themselves, which guarantees a better long-term consistency with
the documented feature.
MINOR: cfgparse: use read_line_to_trash() to read from /sys
It's easier to use this function now, as it natively supports variable
fields in the file's path. This also removes read_file_from_trash(),
which was only used here and was static.
MINOR: tools: add function read_line_to_trash() to read a line of a file
This function takes on input a printf format for the file name, making
it particularly suitable for /proc or /sys entries which take a lot of
numbers. It also automatically trims the trailing CR and/or LF chars.
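A hypothetical usage sketch (the path and return convention are
illustrative):

    /* printf-style path; trailing CR/LF are trimmed automatically */
    if (read_line_to_trash("/sys/devices/system/node/node%d/cpumap", node) >= 0) {
        /* trash.area now holds the line's contents */
    }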
More and more utility functions rely on the trash, while most of the init
code doesn't have access to it because it's initialized very late (in
PRE_CHECK for the initial one). It's a pool, and it purposely supports
being reallocated, so let's initialize it in STG_POOL so that early
STG_INIT code can at least use it.
The function generate_random_cluster_secret(), which initializes the
cluster secret when it is not supplied by configuration, is buggy: there is
a 1/256 chance that the cluster secret string is empty.
To fix this, the cluster secret is now stored as a reduced size, namely the
first 128 bits of its own SHA1 (160-bit) digest, when defined by
configuration. If this is not the case, it is initialized with a 128-bit
random value. Thus, the cluster secret is always initialized.
As the cluster secret is always initialized, several tests are from now on
useless. This patch removes such tests (if (global.cluster_secret)) in the
QUIC code and at parsing time: there is no need to check that a cluster
secret was initialized for the "quic-force-retry" option.
MINOR: tasks/stats: report the number of niced tasks in "show info"
We currently know the number of tasks in the run queue that are niced,
and we don't expose it. It's too bad because it can give a hint about
what share of the load is relevant. For example if one runs a Lua
script that was purposely reniced, or if a stats page or the CLI is
hammered with slow operations, seeing them appear there can help
identify what part of the load is not caused by the traffic, and
improve monitoring systems or autoscalers.
MINOR: cache: Change hash function in default normalizer used in case of "vary"
When building the secondary signature for cache entries when vary is
enabled, the referer part of the signature was a simple crc32 of the
first referer header.
This patch changes it to a 64-bit hash based on the xxhash algorithm,
with a random seed built during init. This will prevent "malicious" hash
collisions between entries of the cache.
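A minimal sketch of the idea (hypothetical variable names; the seed
would be set once at boot, e.g. from ha_random64()):

    /* 64-bit seeded hash of the first Referer header */
    static uint64_t referer_seed;   /* initialized once at startup */
    uint64_t ref_hash = XXH64(referer, referer_len, referer_seed);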
CLEANUP: log: remove unnecessary trim in __do_send_log
Since both sink_write and fd_write_frag_line take the maxlen parameter
as argument, there is no added value in trimming the msg parameter before
passing it to those functions.
To further clean the code and remove duplication, some sink postparsing
and sink->sft finalization is now performed in a dedicated function
named sink_finalize().
MINOR: sink: add helper function to deallocate sink struct
In this patch we move the sink freeing logic outside of the sink_deinit()
function in order to create a sink_free() helper function that can be
used on error paths, for example.
MEDIUM: log/sink: make logsrv postparsing more generic
We previously had postparsing logic, but only for logsrv sinks. Now we
need to perform this operation on logsrvs directly instead of sinks, to
prepare for additional postparsing logic that is not sink-specific.
To do this, we migrated post_sink_resolve() and sink_postresolve_logsrvs()
to their postresolve_logsrvs() and postresolve_logsrv_list() equivalents.
Then, we split postresolve_logsrv_list() so that the sink-only logic stays
in sink.c (sink_resolve_logsrv_buffer() function), and the "generic"
target part stays in log.c as resolve_logsrv().
Error message formatting was preserved as far as possible, but some slight
variations are to be expected.
As for the functional aspect, no change should be expected.
MEDIUM: httpclient/logs: rely on per-proxy post-check instead of global one
httpclient used to register a global post-check function to iterate over
all known proxies and post-initialize httpclient related ones (mainly
for logs initialization).
But we currently have an issue: the post_sink_resolve() function, which is
also registered using the REGISTER_POST_CHECK() macro, conflicts with the
httpclient_postcheck() function.
This is because post_sink_resolve() relies on proxy->logsrvs to be
correctly initialized already, and httpclient_postcheck() may create
and insert new logsrvs entries to existing proxies when executed.
So depending on which function runs first, we could run into trouble.
Fortunately, to this day, everything works "by accident" due to the
http_client.c file being loaded before the sink.c file when compiling the
source code.
But as soon as we would move one of the two functions to other files, or
if we rename files or make changes to the Makefile build recipe, we could
break this at any time.
To prevent post_sink_resolve() from randomly failing in the future, we now
make httpclient postcheck rely on per-proxy post-checks by slightly
modifying httpclient_postcheck() function so that it can be registered
using REGISTER_POST_PROXY_CHECK() macro.
As per-proxy post-check functions are executed right after config parsing
for each known proxy (vs global post-check which are executed a bit later
in the init process), we can be certain that functions registered using
global post-check macro, ie: post_sink_resolve(), will always be executed
after httpclient postcheck, effectively resolving the ordering conflict.
This should normally not cause visible behavior changes, and while it
could be considered a bug, it's probably not worth backporting since the
only way to trigger the issue is through code refactors, unless we want
to backport it to ease code maintenance, in which case it should easily
apply to >= 2.7.
MEDIUM: sink: don't perform implicit truncations when maxlen is not set
maxlen now defaults to ~0 (instead of BUFSIZE) to make sure no implicit
truncation is performed when the option is not specified, since the doc
doesn't mention any default value for maxlen. As such, if the payload is
too big, it will be dropped (the expected default behavior).
Previously, with a test ring whose maxlen (e.g. 1000) was smaller than the
log directive's maxlen, emitted logs would be silently truncated to 1000.
In this patch we're adding an extra check in post_sink_resolve() to detect
this kind of confusing setups and warn the user about the implicit
truncation when DIAG mode is on.
This commit depends on:
- "MINOR: sink: simplify post_sink_resolve function"
MINOR: log/sink: detect when log maxlen exceeds sink size
To prevent logs from being silently (and unexpectedly) dropped at runtime,
we check that the maxlen parameter of the log directives is strictly
smaller than the targeted ring size.
|global
| tune.bufsize 16384
| log tcp@127.0.0.1:514 len 32768
| log myring@127.0.0.1:514 len 32768
|ring myring
| # no explicit size
On such configs, a diag warning will be reported.
This commit depends on:
- "MINOR: sink: simplify post_sink_resolve function"
- "MINOR: ring: add a function to compute max ring payload"
When the user specifies a maxlen parameter that is greater than the size
of a given ring section, a warning is emitted to inform that the maxlen
exceeds the size, and then maxlen is forced to the size.
The logic is good but imprecise, because it doesn't take into account the
slight overhead of storing payloads in the ring.
In practice, we cannot store a single message whose length exactly equals
the ring size: doing so would result in the message being dropped at
runtime.
Thanks to the ring_max_payload() function introduced in "MINOR: ring: add
a function to compute max ring payload", we can now deduce the maximum
value for the maxlen parameter before it could result in messages being
dropped.
When the maxlen value is set to an improper value, the warning will be
emitted and maxlen will be forced to the maximum "single" payload length
that can fit in the ring buffer, preventing messages from being dropped
unexpectedly.
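A sketch of the computation (illustrative field handling; varint_bytes()
returns the number of bytes needed to varint-encode a value):

    /* a stored message needs its payload plus a varint-encoded length
     * prefix, so the usable payload is a bit less than the ring size */
    size_t ring_max_payload_sketch(size_t ring_size)
    {
        size_t max = ring_size - 1;   /* reserve the ring's own byte */
        max -= varint_bytes(max);     /* minus the length prefix */
        return max;
    }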
This commit depends on:
- "MINOR: ring: add a function to compute max ring payload"
Andrew Hopkins [Tue, 5 Sep 2023 23:32:50 +0000 (16:32 -0700)]
CI: Update matrix.py so all code is contained in functions.
Refactor matrix.py so all the logic is contained inside either
helper functions or a new main function. Run the new main function
by default. This lets other GitHub actions use functions in the
python code without generating the whole matrix.
Andrew Hopkins [Tue, 5 Sep 2023 23:27:28 +0000 (16:27 -0700)]
CI: add support to matrix.py to determine the latest AWS-LC release
Refactor the existing OpenSSL tag parsing logic to share some of the
GitHub tag logic. OpenSSL and AWS-LC don't follow the same naming
convention, so each library has its own sorting logic.
MINOR: http_ana: position the FINAL flag for http_after_res execution
Ensure that the ACT_OPT_FINAL flag is always set when executing actions
from http_after_res context.
This will permit Lua functions to be executed as http_after_res actions,
since hlua_ctx_resume() automatically disables "yielding" when this flag
is set: the hlua handler will only allow one-shot executions at this point
(Lua or not, we don't want to reschedule http_after_res actions).
BUG/MINOR: hlua/action: incorrect message on E_YIELD error
When hlua_action error messages were reworked in d5b073cf1
("MINOR: lua: Improve error message"), an error was made for the
E_YIELD case.
Indeed, everywhere the E_YIELD error is handled, a "yield is not allowed"
or similar error message is reported to the user. But instead we currently
have: "aborting Lua processing on expired timeout", which is quite
misleading because this error message normally refers to the HLUA_E_ETMOUT
case.
Thus, we now report the proper error message thanks to this patch.
This should be backported to all stable versions.
[on 2.0, the patch needs to be slightly adapted]
BUG/MINOR: ring/cli: Don't expect input data when showing events
The "show events" command may wait for now events if "-w" option is used. In
this case, no timeout must be triggered. So we explicitly state no input
data are expected. This disables the read timeout on the client side.
This patch should be backported to 2.8. It is probably useless to backport
it further. In all cases, it depends on the commit "BUG/MINOR: applet:
Always expect data when CLI is waiting for a new command"
BUG/MINOR: applet: Always expect data when CLI is waiting for a new command
There is a mechanism for applets to disable the read timeout on the
opposite side if they are not waiting for any data. Of course, there is
also a way to re-activate it, but it must be explicitly handled by the
applets.
For the CLI, some commands may state that no input data are expected. So
we must be sure to reset this state when the applet is waiting for a new
command. For now, it is not a bug because no CLI command uses this
mechanism.
BUG/MEDIUM: stconn: Always update stream's expiration date after I/O
It is a revert of following patches:
* d7111e7ac ("MEDIUM: stconn: Don't requeue the stream's task after I/O")
* 3479d99d5 ("BUG/MEDIUM: stconn: Update stream expiration date on blocked sends")
Because the first one is reverted, the second one is useless and can be reverted
too.
The issue here is that I/O may be performed without stream wakeup. So if no
expiration date was set on the last call to process_stream(), the stream is
never rescheduled and no timeout can be detected. This especially happens on
TCP streams because fast-forward is enabled very early.
Instead of tracking all places where the stream's expiration data must be
updated, it is now centralized in sc_notify(), as it was performed before
the timeout refactoring.
BUG/MEDIUM: stconn/stream: Forward shutdown on write timeout
The commit 7f59d68fe2 ("BUG/MEDIUM: stconn: Flush output data before
forwarding close to write side") introduced a regression. When a write
timeout is detected, the shutdown is no longer forwarded. Depending on the
channel states, this may block the processing, waiting for the client or
the server to leave.
The commit above tries to avoid truncating messages on shutdown. But on a
write timeout, if the channel is not empty, there is nothing more we can
do to send these data: it means the endpoint is unable to send them. In
this case, we must forward the shutdown.
BUG/MEDIUM: applet: Report an error if applet request more room on aborted SC
If an abort was performed and the applet still requests more room, it
means the applet has not properly handled the error on its own. At least
the CLI applet is concerned. Instead of reviewing all applets, the error
is now handled in the task_run_applet() function.
Because of this bug, a session could be blocked infinitely and could also
lead to a wakeup loop.
This patch must only be backported to 2.8 for now, and only to lower
versions if a bug is reported, because it is a bit sensitive and the code
in older versions is quite different.
BUG/MEDIUM: stconn: Report read activity when a stream is attached to front SC
It only concerns the front SC, but it is important to report a read
activity when a stream is created and attached to the front SC, especially
in TCP. In HTTP, when this happens, the request was necessarily received.
But in TCP, the client may open a connection without sending anything. We
must still report a first read activity in this case to be able to
properly report the client timeout.
BUG/MEDIUM: applet: Fix API for function to push new data in channels buffer
All applets only check for the -1 error value (need room) from applet_put*
functions, while the underlying functions may also return -2 if the input
is closed or -3 if the data length is invalid. This means applets already
handle the other cases on their own.
The API should be fixed, but for now, to ease backports, we only fix the
applet_put* functions to always return -1 on error. This way, at least
from the applets' point of view, the API is consistent.
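Schematically, the normalization inside the applet_put* wrappers (a
sketch; the underlying call shown is illustrative):

    ret = ci_putblk(sc_ic(sc), blk, len);
    if (ret < 0)
        ret = -1;   /* fold -2 (input closed) and -3 (invalid length)
                     * into -1 so existing "== -1" checks keep working */
    return ret;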
This patch should be backported to 2.8, probably not further, except if we
suspect it could fix a bug.
BUG/MINOR: stconn: Don't inhibit shutdown on connection on error
In the SC function responsible for performing the shutdown, there is a
statement inhibiting the shutdown if an error was encountered on the SC.
This statement is inherited from very old versions and should in fact be
removed: the error may be set from the stream, and in this case the
shutdown must be performed. In all cases, it is not a big deal if the
shutdown is performed twice, because the underlying functions already
handle multiple calls.
This patch does not fix any bug. Thus there is no reason to backport it.
BUG/MINOR: quic: Wrong RTT computation (srtt and rtt_var)
Due to the fact that several variables' values (rtt_var, srtt) were stored
as multiples of their real values, some calculations were less accurate
than expected.
Stop storing 4*rtt_var and 8*srtt values, and adjust all the impacted
statements.
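For reference, a sketch of the unscaled RFC 6298-style update, keeping
both variables in real units (illustrative, not the literal code):

    /* diff = |srtt - rtt| */
    rtt_var = (3 * rtt_var + diff) / 4;
    srtt    = (7 * srtt + rtt) / 8;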
MINOR: httpclient: allow to configure the timeout.connect
When using the httpclient, one could be bothered by it returning after a
very long time when failing. By default the httpclient has 3 retries and a
connect timeout of 5s, which can result in a pause of 20s upon failure.
This patch allows the user to configure the "timeout connect" of the
httpclient so it could reduce the time to return an error.
When using the httpclient, one could be bothered by it returning after a
very long time when failing. By default the httpclient has 3 retries and a
connect timeout of 5s, which can result in a pause of 20s upon failure.
This patch allows the user to configure the retries of the httpclient so
it could reduce the time to return an error.
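A minimal configuration sketch (assuming the global keyword names
introduced by these two patches):
|global
|  httpclient.retries 1
|  httpclient.timeout.connect 2s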
MEDIUM: mworker: display a more accessible message when a worker crashes
Should fix issue #1034.
Display a more accessible message about what to do when a worker crashes.
Example:
$ ./haproxy -W -f haproxy.cfg
[NOTICE] (308877) : New worker (308884) forked
[NOTICE] (308877) : Loading success.
[NOTICE] (308877) : haproxy version is 2.9-dev4-d90d3b-58
[NOTICE] (308877) : path to executable is ./haproxy
[ALERT] (308877) : Current worker (308884) exited with code 139 (Segmentation fault)
[WARNING] (308877) : A worker process unexpectedly died and this can only be explained by a bug in haproxy or its dependencies.
Please check that you are running an up to date and maintained version of haproxy and open a bug report.
HAProxy version 2.9-dev4-d90d3b-58 2023/09/05 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 6.2.0-31-generic #31-Ubuntu SMP PREEMPT_DYNAMIC Mon Aug 14 13:42:26 UTC 2023 x86_64
[ALERT] (308877) : exit-on-failure: killing every processes with SIGTERM
[WARNING] (308877) : All workers exited. Exiting... (139)
MEDIUM: threads: detect excessive thread counts vs cpu-map
This detects when more threads are bound via cpu-map than there are CPUs
enabled in cpu-map, or when there are more total threads than the total
number of CPUs available at boot (for unbound threads) and configured for
bound threads. In this case, a warning is emitted to explain the problems
this will cause and how to address the situation.
Note that some configurations will not be detected as faulty because
the algorithmic complexity to resolve all arrangements grows in O(N!).
This means that having 3 threads on 2 CPUs and one thread on 2 CPUs
will not be detected as it's 4 threads for 4 CPUs. But at least configs
such as T0:(1,4) T1:(1,4) T2:(2,4) T3:(3,4) will not trigger a warning
since they're valid.
It's very easy to mess up some cpu-map directives and to leave some
threads unbound. Let's add a test that checks that either all threads are
bound or none are, so that we do not face the intermediate situation where
some are pinned and others are left wandering around, possibly on the same
CPUs as the bound ones.
Note that this should not be backported, or maybe turned into a
notice only, as it appears that it will easily catch invalid
configs and that may break updates for some users.
MINOR: cpuset: centralize a reliable bound cpu detection
Till now the CPUs that were bound were only retrieved in
thread_cpus_enabled() in order to count the number of CPUs allowed,
and it relied on arch-specific code.
Let's slightly arrange this into ha_cpuset_detect_bound() that
reuses the ha_cpuset struct and the accompanying code. This makes
the code much clearer without having to carry along some arch-specific
stuff out of this area.
Note that the macOS-specific code used in thread.c only counts online
CPUs but does not retrieve a mask, so for now we can't infer anything from
it and can't implement this there.
In addition and more importantly, this function is reliable in that
it will only return a value when the detection is accurate, and will
not return incomplete sets on operating systems where we don't have
an exact list, such as online CPUs.
BUILD: checks: shut up yet another stupid gcc warning
gcc has always had hallucinations regarding value ranges, and this one is
interesting, affecting branches 4.7 to 11.3 at least. When building
without threads, the randomly picked new_tid is reduced to a multiply by 1
shifted right by 32 bits, hence a constant output of 0, which shows this
warning:
src/check.c: In function 'process_chk_conn':
src/check.c:1150:32: warning: array subscript [-1, 0] is outside array bounds of 'struct thread_ctx[1]' [-Warray-bounds]
In file included from include/haproxy/thread.h:28,
from include/haproxy/list.h:26,
from include/haproxy/action.h:28,
from src/check.c:31:
or this one when trying to force the test to see that it cannot be zero(!):
src/check.c: In function 'process_chk_conn':
src/check.c:1150:54: warning: array subscript [0, 0] is outside array bounds of 'struct thread_ctx[1]' [-Warray-bounds]
1150 | uint t2_act = _HA_ATOMIC_LOAD(&ha_thread_ctx[thr2].active_checks);
| ~~~~~~~~~~~~~^~~~~~
include/haproxy/atomic.h:66:40: note: in definition of macro 'HA_ATOMIC_LOAD'
66 | #define HA_ATOMIC_LOAD(val) *(val)
| ^~~
src/check.c:1150:24: note: in expansion of macro '_HA_ATOMIC_LOAD'
1150 | uint t2_act = _HA_ATOMIC_LOAD(&ha_thread_ctx[thr2].active_checks);
| ^~~~~~~~~~~~~~~
Let's just add an ALREADY_CHECKED() statement there, no other check seems
to get rid of it. No backport is needed.
BUILD: bug: make BUG_ON() void to avoid a rare warning
When building without threads, the recently introduced BUG_ON(tid != 0)
turns into a constant expression that evaluates to 0 and is not used,
resulting in this warning:
src/connection.c: In function 'conn_free':
src/connection.c:584:3: warning: statement with no effect [-Wunused-value]
This is because the whole thing is declared as an expression for clarity.
Make it return void to avoid this. No backport is needed.
Andrew Hopkins [Thu, 6 Jul 2023 22:41:46 +0000 (15:41 -0700)]
BUILD: ssl: Build with new cryptographic library AWS-LC
This adds a new option for the Makefile USE_OPENSSL_AWSLC, and
update the documentation with instructions to use HAProxy with
AWS-LC.
Update the type of the OCSP callback retrieved with
SSL_CTX_get_tlsext_status_cb to the actual type for libcrypto versions
greater than 1.0.2. This doesn't affect OpenSSL, which casts the callback
to void* in SSL_CTX_ctrl.
MINOR: properly mark the end of the CLI command in error messages
In several places in the file src/ssl_ckch.c, in the messages about the
incorrect use of a CLI command, the end of the CLI command was not
correctly marked with the closing quote character (').