BUG/MEDIUM: peers: Don't set PEERS_F_RESYNC_PROCESS flag on a peer
The bug was introduced by commit 9425aeaffb ("BUG/MAJOR: peers: Update peers
section state in a thread-safe manner"). A peers section flag was
mistakenly set on a peer. Just remove it.
This patch must only be backported if the above commit is backported.
BUG/MINOR: fd: my_closefrom() on Linux could skip contiguous series of sockets
We received a detailed analysis showing that our optimization consisting
in using poll() to detect already closed FDs within a 1024-FD range has an
issue with the case where 1024 consecutive FDs are open (hence do not show
POLLNVAL) and none of them reports any activity. In this case poll()
returns zero updates and we would just skip the loop that inspects all the
FDs to close the valid ones. One visible effect is that the called programs
might occasionally see some FDs being exposed in the low range of their fd
space, possibly making the process run out of FDs when trying to open a
file for example.
Note that this is actually a fix for commit b8e602cb1b ("BUG/MINOR: fd:
make sure my_closefrom() doesn't miss some FDs") that already faced a
more common form of this problem (incomplete but non-empty FDs reported).
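To illustrate the failure mode, here is a hypothetical sketch of such a
poll()-based loop (not HAProxy's actual code): with 1024 consecutive open
but idle FDs, poll() returns 0, so that return value must not be used to
skip the closing pass.

    #include <poll.h>
    #include <unistd.h>

    /* Hypothetical sketch of the technique described above. poll() with
     * events=0 flags closed FDs with POLLNVAL; every other FD in the
     * range is open and must be closed. Its return value only counts
     * FDs with a non-zero revents, so 1024 open-but-idle FDs yield 0,
     * and that value must not be used to skip the closing loop.
     */
    static void closefrom_sketch(int start, int maxfd)
    {
        struct pollfd pfd[1024];
        int i, count;

        while (start < maxfd) {
            count = (maxfd - start < 1024) ? maxfd - start : 1024;
            for (i = 0; i < count; i++) {
                pfd[i].fd = start + i;
                pfd[i].events = 0;
            }
            if (poll(pfd, count, 0) < 0) {
                /* poll() itself failed: close the whole range */
                for (i = 0; i < count; i++)
                    close(start + i);
            } else {
                /* do NOT skip this loop when poll() returned 0 */
                for (i = 0; i < count; i++)
                    if (!(pfd[i].revents & POLLNVAL))
                        close(pfd[i].fd);
            }
            start += count;
        }
    }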
BUG/MINOR: sock: handle a weird condition with connect()
As reported on github issue #2491, there's a very strange situation where
epoll_wait() appears to report EPOLLERR only (and not IN/OUT/HUP etc. as
normally happens with EPOLLERR), and when connect() is called again to
check the state of the ongoing connection, it returns EALREADY, basically
saying "no news, please wait". This obviously triggers a wakeup loop. For
now it has remained impossible to reproduce this issue outside of the
reporter's environment, but it is definitely a situation the loop cannot
get out of.
The workaround here is to address the lowest level cause we can act on,
which is to avoid returning to wait if EPOLLERR was returned. Indeed, in
this case we know it will loop, so we must definitely take this one into
account. We only do that after connect() asks us to wait, so that a
properly established connection with a queued error at the end of an
exchange will not be diverted and will be handled as usual.
This should be backported to approximately all versions, at least as far
as 2.4 according to the reporter who observed it there.
Thanks to @donnyxray for their useful captures isolating the problem.
MEDIUM: muxes: Use one callback function to shut a mux stream
The mux-ops .shutr and .shutw callback functions are merged into a single
function, called .shut. The shutdown mode is still passed as an argument,
and muxes are responsible for testing it. Concretely, the .shut() function
of each mux is now the content of the old .shutw() followed by the content
of the old .shutr().
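As a rough illustration (a sketch with assumed names and flag values, not
the actual HAProxy definitions), the merged callback takes this shape:

    /* Sketch only: mode flag names and the callback signature are
     * assumptions for illustration, not HAProxy's exact definitions. */
    enum se_shut_mode_sketch {
        SHUT_WR = 0x1,   /* what .shutw() used to handle */
        SHUT_RD = 0x2,   /* what .shutr() used to handle */
    };

    static void mux_shut_sketch(void *stream, enum se_shut_mode_sketch mode)
    {
        if (mode & SHUT_WR) {
            /* old .shutw() body: flush/close the write side */
        }
        if (mode & SHUT_RD) {
            /* old .shutr() body: stop/abort the read side */
        }
    }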
MEDIUM: stconn: Use one function to shut connection and applet endpoints
The se_shutdown() function is now used to perform a shutdown on both
connection endpoints and applet endpoints. The sc_conn_shut() function was
removed and the appctx_shut() function was updated to only deal with the
applet-specific parts.
MEDIUM: stconn: Explicitly pass shut modes to shut applet endpoints
It is the same as the previous patch but for applets. Here there was
already only one function, but with this patch the appctx_shut() function
was modified to explicitly take the shutdown mode as a parameter. In
addition, appctx_shutw() was removed.
MEDIUM: stconn: Use only one SC function to shut connection endpoints
The SC API to perform shutdowns on connection endpoints was unified to have
only one function, sc_conn_shut(), with read/write shut modes passed
explicitly. It means sc_conn_shutr() and sc_conn_shutw() were removed. The
next step is to do the same at the mux level.
MINOR: stconn/connection: Move shut modes at the SE descriptor level
CO_SHR_* and CO_SHW_* modes are in fact used by the stream-connectors to
instruct the muxes how streams must be shut down. It is then the mux's
responsibility to decide if it must be propagated to the connection layer or
not. And in this case, the modes above are only tested to pass a boolean
(clean or not).
So, it is not consistent to still use connection-related modes for
information set at an upper layer and never used by the connection layer
itself.
These modes are thus moved to the sedesc level and merged into a single
enum. The idea is to add more modes, not necessarily mutually exclusive, to pass
more info to the muxes. For now, it is a one-for-one renaming.
MINOR: mux-pt: Test conn flags instead of sedesc ones to perform a full close
In .shutr and .shutw callback functions, we must rely on the connection
flags (CO_FL_SOCK_RD_SH/WR_SH) to decide whether to fully close the connection
instead of using the sedesc flags. In the end, for the PT multiplexer, it is
equivalent, but it is more logical and consistent this way.
Since the beginning, this function has returned a pointer to an appctx while
it should be a void pointer. It is the caller's responsibility to cast it to
the right type, the corresponding mux stream in this case.
However, it is not a big deal because this function is unused for now. Only
the unsafe one is used.
MINOR: stats: Get the right prototype for stats_dump_html_end().
When the stats code was reorganized and the prototype of
stats_dump_html_end() was moved to its own header, the function arguments
were missed. Fix that.
MEDIUM: ssl: crt-base and key-base local keywords for crt-store
Add support for crt-base and key-base local keywords for the crt-store.
current_crtbase and current_keybase are filled with a copy of the global
keyword argument when a crt-store is declared, and updated with a new
path when the keywords are in the crt-store section.
The ckch_conf_kws[] array was updated with &current_crtbase and
&current_keybase instead of the global_ssl ones so the parser can use
them.
The keyword must be used before any "load" line in a crt-store section.
Example:
crt-store web
    crt-base /etc/ssl/certs/
    key-base /etc/ssl/private/
    load crt "site3.crt" alias "site3"
    load crt "site4.crt" key "site4.key"
Extract functions related to HTML stats webpage from stats.c into a new
module named stats-html. This reduces stats.c to roughly half of its
original size.
A static variable trash_chunk was used as an implicit buffer in most stats
output functions. It was a one-line buffer used as temporary storage
before emitting to the final applet or CLI buffer.
Replace it with a buffer defined in the show_stat_ctx structure, which can
be retrieved in most stats output functions. An additional parameter was
added to the functions where the context was not already used. This
renders the code cleaner and will allow stats.c to be split into several
source files.
As a result of the new member in show_stat_ctx, the per-command context max
size has increased. This forces an increase of APPLET_MAX_SVCCTX to ensure
the pool size is big enough. Increase it to 128 bytes, which includes some
extra room for the future.
MINOR: peers: stop relying on srv->addr to find peer port
Now that peers entirely rely on peer->srv for connection settings, and
that it was confirmed that it works properly thanks to previous commit,
let's finish what we started in f6ae258 ("MINOR: peers: rely on srv->addr
and remove peer->addr") and stop using srv->addr to find out peers port
and instead rely on srv->svc_port as it's already done for other proxy
types.
BUG/MEDIUM: peers: fix localpeer regression with 'bind+server' config style
A dumb mistake was made in f6ae25858 ("MINOR: peers: rely on srv->addr
and remove peer->addr"). I completely overlooked the part where the bind
address settings are used as implicit server's address settings when the
peers are declared using the new bind+server config style (which is the
new recommended method to declare peers as it follows the same logic as
the one used in other proxy sections).
As such, peers synchronization fails to work between the previous and new
processes (localpeer mechanism) upon reload when declaring peers this way:
global
    localpeer local

peers mypeers
    bind 127.0.0.1:10001
    server local
And one has to use the 'old' config style to make it work:
global
    localpeer local

peers mypeers
    peer local 127.0.0.1:10001
--
To fix the issue, let's explicitly set the server's addr:port
according to the bind's address settings (only the first listener is
considered) when the local peer was declared using the 'bind+server' method.
The expected arguments were not specified in the
prepare_caps_from_permitted_set() function declaration. This is an issue
for some compilers, for instance clang. And in any case, such declarations
are unexpected and deprecated.
No backport needed, except if f0b6436f57 ("MEDIUM: capabilities: check
process capabilities sets") is backported.
BUG/MEDIUM: peers: Fix exit condition when max-updates-at-once is reached
When a peer applet is pushing updates, we limit the number of updates sent at
once via a global parameter so as to not spend too much time in the applet. On
interrupt, we claimed more room in order to be woken up quickly. However, this
statement is only true if something was pushed in the buffer. Otherwise,
with an empty buffer, if the stream itself is not woken up, the applet also
remains blocked because there is no send activity on the other side to
unblock it.
In this case, instead of requesting more room, it is sufficient to state that
the applet has more data to send.
BUG/MEDIUM: spoe: Always retry when an applet fails to send a frame
This bug is related to the previous one ("BUG/MEDIUM: applet: Fix applet
API to put input data in a buffer"). The applet_putblk() function returns
-1 on error and this should always be interpreted as a lack of room in the
buffer. However, in the SPOE, this was processed as an I/O error.
BUG/MEDIUM: applet: Fix applet API to put input data in a buffer
applet_putblk() and co were added to simplify applets. In 2.8, a fix was
pushed to deal with all errors as a room error because the vast majority of
applets didn't expect other kinds of errors. The API was changed with the
commit 389b7d1f7b ("BUG/MEDIUM: applet: Fix API for function to push new
data in channels buffer").
Unfortunately, and for an unknown reason, the fix was totally broken. The
checks on the channel functions were just wrong and inconsistent. The
applet_putblk() function is especially affected because the error is
returned but no flag is set on the SC to request more room. Because of this
bug, applets relying on it may be blocked, waiting for more room, and never
woken up.
The crt-store load line parser relies on offsets of members of the
ckch_conf struct. However, the new "alias" keyword has an offset of -1,
because it does not need to be used. The plan was to handle it that way
in the parser, but this wasn't supported yet. So -1 was still used in an
offset computation whose result was never used, but ASAN could see the
problem.
This patch fixes the issue by using a signed type for the offset value,
so any negative value is skipped. It also introduces a PARSE_TYPE_NONE
for the parser.
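A hypothetical sketch of the resulting parser-table shape (names and types
assumed for illustration, not the actual ckch_conf definitions):

    #include <stddef.h>
    #include <sys/types.h>

    /* Sketch only: a signed offset lets entries such as "alias" carry
     * -1 and be skipped before any pointer arithmetic happens. */
    enum parse_type_sketch { PARSE_TYPE_STR, PARSE_TYPE_NONE };

    struct ckch_conf_kw_sketch {
        const char *name;
        ssize_t offset;     /* signed: -1 means "no storage" */
        enum parse_type_sketch type;
    };

    static void parse_kw_sketch(const struct ckch_conf_kw_sketch *kw,
                                void *conf, const char *value)
    {
        if (kw->offset < 0 || kw->type == PARSE_TYPE_NONE)
            return;                 /* nothing to store: skip */
        /* ... otherwise store value at (char *)conf + kw->offset ... */
        (void)conf; (void)value;
    }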
The crt-store load line now allows an alias to be set. This alias is used
as the key in the ckch_tree instead of the certificate path. This way an
alias can be referenced in the configuration with the '@/' prefix.
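As an illustration based on the description above (file names made up; the
reference syntax for a named store is assumed to be '@<store>/<alias>',
with the bare '@/' prefix for the default store):

    crt-store web
        load crt "site3.crt" alias "site3"

    frontend fe
        bind :443 ssl crt "@web/site3"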
MEDIUM: evports: permit to report multiple events at once
Since the beginning in 2.0 the nevlist parameter was set to 1 before
calling port_getn(), which means that a single FD event will be reported
per polling loop. This is extremely inefficient, and all the code was
designed to use global.tune.maxpollevents. It looks like it's a leftover
of a temporary debugging change. No apparent issues were found by setting
it to a higher value, so better do that.
That code is not much used nowadays with Solaris disappearing from the
landscape, so even if this definitely was a bug, it's preferable not to
backport that fix as it could uncover other subtle bugs that were never
raised yet.
BUG/MEDIUM: evports: do not clear returned events list on signal
Since 2.0 with commit 0ba4f483d2 ("MAJOR: polling: add event ports
support (Solaris)"), the polling system on Solaris suffers from a
signal handling problem. It turns out that this API is very bizarre,
as reported events are automatically unregistered and their counter
is updated in the same variable that was used to pass the count on
input, making it difficult to handle certain error codes (how should
one handle ENOSYS for example?). And to complete everything, the API
is able to return both EINTR and an event if a signal is reported.
The code tries to deal with certain such cases (e.g. ETIME for timeout
can also report an event), otherwise it defaults to clearing the
event counter upon error. This has the effect that EINTR clears the
list of events, which are also automatically cleared from the set by
the system.
This is visible when using external checks where the SIGCHLD of the
exiting child causes a wakeup that ruins the event counter and causes
endless loops, apparently due to the queued inter-thread byte in the
pipe used to wake threads up that never gets removed in this case.
Note that extcheck would also deserve deeper investigation because it
can immediately re-trigger a check in such a case, which is not normal.
Removing the wiping of the nevlist variable fixes the problem.
This can be backported to all versions since it affects 2.0.
BUILD: xxhash: silence a build warning on Solaris + gcc-5.5
Testing an undefined macro emits warnings due to -Wundef, and we have
exactly one such case in xxhash:
include/import/xxhash.h:3390:42: warning: "__cplusplus" is not defined [-Wundef]
#if ((defined(sun) || defined(__sun)) && __cplusplus) /* Solaris includes __STDC_VERSION__ with C++. Tested with GCC 5.5 */
Let's just prepend "defined(__cplusplus) &&" before __cplusplus to
resolve the problem. Upstream is still affected apparently.
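Concretely, the guard changes as follows (the "before" line is quoted from
the warning above; the "after" line applies the described fix):

    /* before: evaluates __cplusplus even when undefined, tripping -Wundef */
    #if ((defined(sun) || defined(__sun)) && __cplusplus)
    #endif

    /* after: the added defined() test short-circuits the -Wundef warning */
    #if ((defined(sun) || defined(__sun)) && defined(__cplusplus) && __cplusplus)
    #endif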
Gcc before 7 does really not like direct operations on cast pointers
such as "((struct a*)b)->c += d;". It turns our that we have exactly
that construct in 3.0 since commit 5baa9ea168 ("MEDIUM: cache: Save
body size of cached objects and track it on delivery").
It's generally sufficient to use an intermediary variable such as :
"({ (struct a*) _ = b; _; })->c +=d;" but that's ugly. Fortunately
DISGUISE() implicitly does something very similar and works fine, so
let's use that.
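As a sketch of the workaround (the DISGUISE() definition below is a
simplified assumption, not the exact macro from HAProxy's compiler.h):

    /* Simplified sketch: pass the pointer through an asm barrier so the
     * old compiler no longer sees a direct operation on a cast pointer. */
    #define DISGUISE(v) ({ typeof(v) _v = (v); __asm__("" : "+r"(_v)); _v; })

    struct a { long c; };

    static void add_sketch(void *b, long d)
    {
        /* gcc < 7 rejects: ((struct a *)b)->c += d; */
        DISGUISE((struct a *)b)->c += d;
    }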
BUG/MEDIUM: stconn: Don't forward channel data if input data must be filtered
Once data are received and placed in a channel buffer, if it is possible,
outgoing data are immediately forwarded. But we must take care to not do so
if there is also pending input data and a filter registered on the
channel. It is especially important for HTX streams because the HTX may be
altered, especially the extra field. And it is indeed an issue with the HTTP
compression filter and the H1 multiplexer. The wrong chunk size may be
announced, leading to an internal error.
This patch should fix the issue #2530. It must be backported to all stable
versions.
MINOR: peer: Restore previous peer flags value to ease debugging
The last fixes on the peers to improve the locking mechanism introduced new
peer flags and the value of some old flags was changed. This was done in the
commit 9b78e33837 ("MINOR: peers: Add 2 peer flags about the peer learn
status"). But, to ease the debugging of the peers team, old values are
restored.
This patch must be backported with the commit above.
MEDIUM: peers: Only lock one peer at a time in the sync process function
Thanks to all previous changes, it is now possible to stop locking all
peers at once in the resync process function. Peers are locked one after
the other. When a peer is locked, another one may also be locked when all
peers sharing the same shard must be updated. Otherwise, at any time, at
most one peer is locked. This should significantly improve the situation.
This patch depends on the following patches:
* BUG/MAJOR: peers: Update peers section state in a thread-safe manner
* BUG/MINOR: peers: Report a resync was explicitly requested in a thread-safe manner
* MINOR: peers: Add functions to commit peer changes from the resync task
* MINOR: peers: slightly adapt part processing the stopping signal
* MINOR: peers: Add flags to report the peer state to the resync task
* MINOR: peers: Add 2 peer flags about the peer learn status
* MINOR: peers: Split resync process function to separate running/stopping states
It may be good to backport it to 2.9. The whole series should fix the issue #2470.
BUG/MAJOR: peers: Update peers section state in a thread-safe manner
It is the main part of this series. In the peer applet, only the peer flags
are updated. It is now the responsibility of the resync process function to
check changes on each peer to update the peers section state accordingly.
Concretely, changes on the connection state (accepted, connected, released or
renewed) are first reported at the peer level and then handled in
__process_peer_state() function.
In the same manner, when the learn status of a peer changes, the peers
section state is no longer updated immediately. The resync task is woken up
to deal with these changes.
Thanks to these changes, the peers should now really be thread-safe.
This patch relies on the following ones:
* BUG/MINOR: peers: Report a resync was explicitly requested in a thread-safe manner
* MINOR: peers: Add functions to commit peer changes from the resync task
* MINOR: peers: slightly adapt part processing the stopping signal
* MINOR: peers: Add flags to report the peer state to the resync task
* MINOR: peers: Add 2 peer flags about the peer learn status
* MINOR: peers: Split resync process function to separate running/stopping states
No bug was reported about the thread-safety of peers. Only a performance
issue was encountered with a huge number of peers (> 50). So there is no
reason to backport all these patches further than 2.9.
BUG/MINOR: peers: Report a resync was explicitly requested in a thread-safe manner
Flags on the peers section state must be updated in a thread-safe manner.
This is not true today. With this patch, we take care that the
PEERS_F_RESYNC_REQUESTED flag is only set by the resync task. To do so, a
peer flag is used. This flag is only set once and never removed; it is just
used for debugging purposes. So it is enough to set it on a peer and make
sure to report it on the peers section when the sync task is executed.
This patch relies on previous ones:
* MINOR: peers: Add functions to commit peer changes from the resync task
* MINOR: peers: slightly adapt part processing the stopping signal
* MINOR: peers: Add flags to report the peer state to the resync task
* MINOR: peers: Add 2 peer flags about the peer learn status
* MINOR: peers: Split resync process function to separate running/stopping states
MINOR: peers: Add functions to commit peer changes from the resync task
For now, nothing is done in these functions. This patch only prepares the
main part of the refactoring of the peers locking mechanism.
These functions will be responsible for checking the peers' state and
their learn status to update the peers section flags accordingly.
MINOR: peers: slightly adapt part processing the stopping signal
The signal and the PEERS_F_DONOTSTOP flag are now handled in the loop on peers
to force sessions shutdown. We will need to loop on all peers to update their
state. It is easier this way.
MINOR: peers: Add flags to report the peer state to the resync task
Like the previous patch, this one is also part of the refactoring of the
peer locking mechanism. Here we add flags to represent a transitional
state for a peer. It will be the resync task's responsibility to update
the peers state accordingly.
A peer may be in 4 transitional states:
* accepted : a connection was accepted from a peer
* connected: a connection to a peer was established
* released : a peer session was released
* renewed  : a peer session was released because it was replaced by a new
  one. Concretely, this is equivalent to released+accepted
If none of these flags is set, it means the transition, if any, was
processed by the resync task, or no transition happened.
MINOR: peers: Add 2 peer flags about the peer learn status
The PEER_F_LEARN_PROCESS and PEER_F_LEARN_FINISHED flags are added to
help fix a locking issue in the peers. Indeed, a peer is able to update
the peers "section" state under its own lock. Because the resync task
locks all peers at once, there is no conflict at this level. But there is
nothing to prevent two peers from updating the peers state at the same
time. No real issue was observed in practice, but it is a theoretical
thread-safety issue, and it means the locking mechanism of the peers must
be reviewed.
In this context, the 2 flags above will help move all updates of the peers
state into the scope of the resync task. Each peer will be able to update
its own state and the resync task will be responsible for updating the
peers state accordingly.
MINOR: peers: Split resync process function to separate running/stopping states
The function responsible for dealing with the resync between all peers is
now split into two subfunctions. The first one is used when HAProxy is
running while the other one is used in the soft-stop case.
This patch is required to be able to refactor locking mechanism of the peers.
BUG/MEDIUM: grpc: Fix several unaligned 32/64 bits accesses
There were several places in grpc and its dependency protobuf where
unaligned accesses were done. Read accesses to 32-bit (resp. 64-bit)
values should be performed by read_u32() (resp. read_u64()).
Replace these unaligned read accesses with correct calls to these
functions. The same fixes apply to doubles and floats.
Such unaligned read accesses could lead to crashes with bus errors on CPU
architectures which do not fix them up at run time.
This patch depends on this previous commit: 861199fa71 MINOR: net_helper: Add support for floats/doubles.
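For reference, such helpers boil down to memcpy-based accessors; here is a
minimal sketch of the idea (the actual implementations live in HAProxy's
net_helper.h):

    #include <stdint.h>
    #include <string.h>

    /* Sketch of an alignment-safe 32-bit read: memcpy lets the compiler
     * emit whatever load sequence the architecture needs, instead of a
     * direct dereference that may fault on strict-alignment CPUs. */
    static inline uint32_t read_u32_sketch(const void *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }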
Add crt-base support for "crt-store". It will be used by 'crt', 'ocsp',
'issuer' and 'sctl' load line parameters.
In order to keep compatibility with previous configurations and scripts
for the CLI, a crt-store load line will save its ckch_store using the
absolute crt path (crt-base included) as the ckch tree key. This way, a
`show ssl cert` on the CLI will always have the completed path.
BUG/MAJOR: ring: use the correct size to reallocate startup_logs
In 3.0-dev, with commit 7c9ce715c9 ("MINOR: ring: make callers use
ring_data() and ring_size(), not ring->buf"), we made startup_logs_dup()
use ring_size() to get the old ring size and pass it to ring_new() to
create a new ring. But due to the ambiguity of the allocate vs usable
size, this resulted in slightly shrinking the buffer compared to the
previous one, occasionally causing crashes if the first one was already
full of warnings, as seen in GH issue #2529. We need to use the allocated
size instead, thanks to the function brought by previous commit.
No backport is needed, this only affects 3.0-dev. Thanks to @felipewd
for the detailed report that allowed us to spot the problem.
MINOR: ring: clarify the usage of ring_size() and add ring_allocated_size()
There's currently an ambiguity around ring_size(): it's said to return
the allocated size but returns the usable size. We can't change it as
it's used everywhere in the code like this. Let's fix the comment and
add ring_allocated_size() instead for anything related to allocation.
BUG/MINOR: lru: fix the standalone test case for invalid revision
In 2.6, a build issue for LRU in standalone test mode was addressed by
commit bf9c07fd9 ("BUILD/DEBUG: lru: update the standalone code to
support the revision"), but using revision 1 while looking up rev 0
results in 100% misses. Let's fix this and commit with revision 0 as
well.
No backport is needed, this only happens when hacking on the code.
MINOR: listener/protocol: add proto name in alerts
Frontend and listen sections allow an unlimited number of bind statements,
and it is common to have one bind statement per supported protocol, like
below:
listen test
    mode http
    bind quic4@0.0.0.0:443 name quic ssl crt ...
    bind 0.0.0.0:443 name https ssl alpn http/1.1,h2 crt ...
    bind 0.0.0.0:8080 ...
    ...
It seems useful to show the corresponding protocol name in alerts and
warnings when a problem occurs with port binding, connection resuming or
sharding. This helps to figure out immediately which bind statement has a
wrong setting or which protocol module is the root cause of the issue.
DEBUG: pools: report the data around the offending area in case of mismatch
When the integrity check fails, it's useful to get a dump of the area
around the first faulty byte. That's what this patch does. For example
it now shows this before reporting info about the tag itself:
REORG: pool: move the area dump with symbol resolution to tools.c
This function is particularly useful to dump unknown areas watching
for opportunistic symbols, so let's move it to tools.c so that we can
reuse it a little bit more.
When corruption is detected in an object, the report often says that the
tag doesn't match the pool, but it should also be checked whether it
matches the location of an earlier pool_free() call, which is recorded
when -dMcaller is used. That's what we're doing now.
BUG/MAJOR: stick-tables: fix race with peers in entry expiration
In 2.9 with commit 7968fe3889 ("MEDIUM: stick-table: change the ref_cnt
atomically") we significantly relaxed the stick-tables locking when
dealing with peers by adjusting the ref_cnt atomically and moving it
out of the lock.
However it opened a tiny window that became problematic in 3.0-dev7
when the table's contention was lowered by commit 1a088da7c2 ("MAJOR:
stktable: split the keys across multiple shards to reduce contention").
What happens is that some peers may access the entry for reading at the
moment it's about to expire. While the read accesses used to push the data
remain unnoticed (at worst we occasionally push garbage), the release of
the refcount causes a new write that may damage anything else. The
scenario is the following:
Here it's clear that the bottom part of peer_send_teachmsgs() believes it
is protected but may act on freed data.
This is more visible when enabling -dMtag,no-merge,integrity because
the ATOMIC_DEC(&ref_cnt) decrements one byte in the area, which makes
the eviction check fail while the tag has the address of the left
__stksess_free(), proving a completed pool_free() before the decrement,
and the anomaly there is pretty visible in the crash dump. Changing
INC()/DEC() with ADD(2)/DEC(2) shows that the byte is now off by two,
confirming that the operation happened there.
The solution is not very hard: it consists in checking the ref_cnt again
after grabbing the lock, and doing both before deleting the element, so
that we have the guarantee that either the peer has not taken the entry
yet or that it has already started taking it.
This was proven to be sufficient, as instead of crashing after 3s of
injection with 4 peers, 16 threads and 130k RPS, it survived for 15mn.
In order to stress the setup, a config involving 4+ peers, tracking
HTTP requests with randoms and applying a bwlim-out filter with a
random key, with a client made of 160 h2 conns downloading 10 streams
of 4MB objects in parallel managed to trigger it within a few seconds:
backend bk
    server s1 198.18.0.30:8000
    server s2 198.18.0.34:8000

backend tbl
    stick-table type ip size 1000k expire 1s store http_req_cnt,bytes_in_rate(1s),bytes_out_rate(1s) peers peers
This seems to be very dependent on the timing and setup though.
This will need to be backported to 2.9. This part of the code was
reindented with shards but the block should remain mostly unchanged.
The logic to apply is the same.
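As a sketch of the fixed ordering (toy types and locks, not HAProxy's
actual shard code):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    /* Toy model of the fixed ordering (not HAProxy's types): readers
     * look the entry up and bump ref_cnt under the read lock, so
     * checking ref_cnt only after taking the write lock, and before
     * unlinking, guarantees no peer can be using a freed entry. */
    struct ts_sketch { atomic_int ref_cnt; };

    static pthread_rwlock_t shard_lock = PTHREAD_RWLOCK_INITIALIZER;

    static int expire_sketch(struct ts_sketch **slot)
    {
        struct ts_sketch *ts;

        pthread_rwlock_wrlock(&shard_lock);
        ts = *slot;
        if (!ts || atomic_load(&ts->ref_cnt) != 0) {
            pthread_rwlock_unlock(&shard_lock);
            return 0;             /* referenced by a peer: keep it */
        }
        *slot = NULL;             /* unlink while still locked */
        pthread_rwlock_unlock(&shard_lock);
        free(ts);
        return 1;
    }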
BUG/MEDIUM: peers/trace: fix crash when listing event types
Sending "trace peers event" on the CLI crashes because the event list
in the peers is not finished. This was introduced in 2.4 by commit d865935f32 ("MINOR: peers: Add traces to peer_treat_updatemsg().")
so this must be backported to 2.4.
CLEANUP: stick-tables: always respect the to_batch limit when trashing
When adding the shards support to tables with commit 1a088da7c ("MAJOR:
stktable: split the keys across multiple shards to reduce contention"),
the condition to stop eliminating entries once the batch size is reached
relies on a pre-decrement of the max_search counter. But the code now goes
back into the outer loop, which doesn't check it; the next pre-decrement,
performed when entering the next shard, makes the counter even more
negative and properly stops, but at first glance it looks like an int
overflow (which it is not). Let's make sure the outer loop stops on this
condition so that we don't continue searching when the limit is reached.
BUG/MEDIUM: stick-tables: fix the task's next expiration date
While changing the stick-table indexing that led to commit 1a088da7c
("MAJOR: stktable: split the keys across multiple shards to reduce
contention"), I met a problem with the task's expiration date being
incorrectly updated, I fixed it and apparently I committed the wrong
version :-/
The effect is that the task's date is only correctly reset if the
table is empty, otherwise the task wakes up again and is queued at
the previous date, eating 100% CPU. The tick_isfirst() must not be
used when storing the last result.
No backport is needed as this was only merged in 3.0-dev7.
MINOR: ssl/crtlist: alloc ssl_conf only when a valid keyword is found
crt-list will be enhanced with ckch_conf keywords; however, these keywords
do not fill the 'ssl_conf' structure. So we don't need to allocate the
ssl_conf for every option between [ ], but only when we find a relevant
one.
BUG/MEDIUM: cache/stats: Handle inbuf allocation failure in the I/O handler
When cache and stats applets were changed to use their own buffers, a change
was also performed to no longer access the stream from the I/O
handler. Among other things, the HTTP start-line of the request is now
retrieved to get the method. But when these changes were introduced, inbuf
buffer allocation failures were not handled.
It is of course not so common. But if this happens, a crash may be
experienced. To fix the issue, we now check for inbuf allocation failures
before accessing it.
We observed that a dynamic server whose health check is down for longer
than the slowstart delay at startup doesn't trigger the warmup phase; it
receives full traffic immediately. This has been confirmed by checking the
haproxy UI: the weight is immediately the full one (e.g. 75/75), without any
throttle applied. Further tests showed that it was similar if it was in
maintenance, and even when entering a down or maintenance state after
being up.
Another issue is that if the server is down for less time than
slowstart, when it comes back up, it briefly has a much higher weight
than expected for a slowstart.
An easy way to reproduce is to do the following:
- Add a server with e.g. a 20s slowstart and a weight of 10 in config
file
- Put it in maintenance using CLI (set server be1/srv1 state maint)
- Wait more than 20s, enable it again (set server be1/srv1 state ready)
- Observe UI, weight will show 10/10 immediately.
If the server was down for less than 20s, you'd briefly see inconsistent
weight and throttle values, e.g. a 50% throttle and a weight of 5 if the
server comes back up after 10s, before going back to
6% after a second or two.
Code analysis shows that the logic in server_recalc_eweight stops the
warmup task by setting the server's next state to SRV_ST_RUNNING if it
didn't change state for longer than the slowstart duration, regardless
of its current state. As a consequence, a server being down or disabled
for longer than the slowstart duration will never enter the warmup phase
when it comes back up.
Regarding the weight when the server comes back up, the issue is that even
if the server is down, we still compute its next weight as if it was up,
hence when it comes back up, it can briefly have a much higher weight
than expected during slowstart, until the warmup task is called again
after last_change is updated.
DOC: install: clarify the build process by splitting it into subsections
The doc about the build process has grown to a point where it was painful
to read when searching a specific element. This commit cuts it into a few
sub-categories for ease of searching, and it also adds a summary of the
most commonly used makefile variables, their usage and default settings.
CLEANUP: makefile: make the output of the "opts" target more readable
"make opts" is nice because it shows all options being used, but it
does so in a copy-pastable way that aggregates everything on a single
line, rendering very poorly and making it hard to spot the relevant
variables.
Let's break lines and append a trailing backslash to them instead. This
gives something much more readable which remains copy-pastable. Options
can be inspected and more easily replicated. It was also verified that
copy-pasting the whole block after "make" does continue to work like
before and produces the same output.
This one is often used as a gateway to inject regular CFLAGS, even though
not designed for this. It's now ignored, but any attempt at setting it
reports a warning suggesting to use CFLAGS or ARCH_FLAGS instead.
BUILD: makefile: do not pass warnings to VERBOSE_CFLAGS
The VERBOSE_CFLAGS variable is used to report the CFLAGS used in the
output of "haproxy -vv". It currently contains all -Wxxx and -Wno-xxx
because there was no other possibility till now, but these warnings
are highly specific to the compiler version in use, are automatically
detected and do not bring value being presented here. Worse, by
encouraging users to copy-paste them, they can end up with warnings
that are not relevant to their build environment.
In addition, their presence there doesn't give indication of whether
or not they triggered, so they cannot vouch for the output code quality.
Better just not report them there. However, as part of VERBOSE_CFLAGS
they were also used to detect whether or not to rebuild, via build_opts,
so they still have to be explicitly mentioned there if we want to make
sure that changing warning options triggers a rebuild to see their
effect.
Now we can have more useful outputs like this one, which indicates precisely
what to use to safely rebuild the executable:
BUILD: makefile: split WARN_CFLAGS from SPEC_CFLAGS
It's currently not possible to only set some -Wno... without breaking
the -W... and conversely. Let's split both sets apart so that it's now
possible to set -W... alone in WARN_CFLAGS to enable only some warnings,
and pass the -Wno... in SPEC_CFLAGS without losing the enabled ones.
BUILD: makefile: extract -Werror/-Wfatal-errors from automatic CFLAGS
The compiler-specific CFLAGS that are automatically detected (SPEC_CFLAGS)
are currently the ones affected by ERR and FAILFAST. Not only this makes
these options useless as soon as SPEC_CFLAGS is forced, but it also means
that any change to them causes a rebuild, so disabling FAILFAST or
enabling ERR to get more info on a faulty object causes a full rebuild
of all others. Let's just move them to ERROR_CFLAGS that only contains
these two. It's not documented outside of the makefile because it's not
supposed to be touched.
BUILD: makefile: add FAILFAST to select the -Wfatal-errors behavior
-Wfatal-errors is set by default and is not supported on older compilers.
Since it's part of all the automatically detected flags, it's painful to
remove when needed. Also it's a matter of taste, some developers might
prefer to get a long list of all errors at once, others prefer that the
build stops immediately after the root cause.
The default is now back to no -Wfatal-errors, and when FAILFAST is set to
any non-empty non-zero value, -Wfatal-errors is added:
BUILD: makefile: make the ERR variable also support 0
It's among the options that change a lot on the developer's side and it's
tempting to change from ERR=1 to ERR=0 on the make command line by reusing
the history, except it doesn't work. Let's explicitly permit ERR=0 to
disable -Werror like ERR= does.
BUILD: makefile: move the fwrapv option to STD_CFLAGS
We now have a separate CFLAGS variable for sensitive options that affect
code generation and/or standard compliance. This one must not be mixed
with the warnings auto-detection because changing such warnings can
result in losing the options. Now it's easier to affect them separately.
The option was not listed in the series of variables useful to packagers
because they're not supposed to touch it.
BUILD: makefile: extract ARCH_FLAGS out of LDFLAGS
ARCH_FLAGS used to be merged into LDFLAGS so that it was not possible to
pass extra options to LDFLAGS without losing ARCH_FLAGS. This commit now
splits them apart and leaves LDFLAGS empty by default. The doc explains
how to use it for rpath and such occasional use cases.
BUILD: makefile: drop the ARCH variable and better document ARCH_FLAGS
ARCH_FLAGS was always present and is documented as being fed to both
CC and LD during the build. This is meant for options that need to be
consistent between the two stages such as -pg, -flto, -fsanitize=address,
-m64, -g etc. Its doc was lacking a bit of clarity though, and it was
not enumerated in the makefile's variables list.
ARCH however was only documented as affecting ARCH_FLAGS, and was just
never used as the only two really usable and supported ARCH_FLAGS options
were -m32 and -m64. In addition it was even written in the makefile that
it was CPU that was affecting the ARCH_FLAGS. Let's just drop ARCH and
improve the documentation on ARCH_FLAGS. Again, if ARCH is set, a warning
is emitted explaining how to proceed.
ARCH_FLAGS is now preset to -g so that we finally have a correct place
to deal with such debugging options that need to be passed to both
stages. The fedora and musl CI workflows were updated to also use it
instead of sticking to duplicate DEBUG_CFLAGS+LDFLAGS.
It's also worth noting that BUILD_ARCH was being passed to the build
process and never used anywhere in the code, so its removal will not
be noticed.
The CPU variable, when used, is almost always exclusively used with
"generic" to disable any CPU-specific optimizations, or "native" to
enable "-march=native". Other options are not used and are just making
CPU_CFLAGS more confusing.
This commit just drops all pre-configured variants and replaces them
with documentation about examples of supported options. CPU_CFLAGS is
preserved as it appears that it's mostly used as a proxy to inject the
distro's CFLAGS, and it's just empty by default.
The CPU variable is checked, and if set to anything but "generic", it
emits a warning about its deprecation and invites the user to read
INSTALL.
Users who would just set CPU_CFLAGS will be able to continue to do so,
those who were using CPU=native will have to pass CPU_CFLAGS=-march=native
and those who were passing one of the other options will find it in the
doc as well.
Note that this also removes the "CPU=" line from haproxy -vv, that most
users got used to seeing set to "generic" or occasionally "native"
anyway, thus that didn't provide any useful information.
BUILD: makefile: move -O2 from CPU_CFLAGS to OPT_CFLAGS
CPU_CFLAGS is meant to set the CPU-specific options (-mcpu, -march etc).
The fact that it also includes the optimization level is annoying because
one cannot be set without replacing the other. Let's move the optimization
level to a new independent OPT_CFLAGS that is added early to the list, so
that other CFLAGS (including CPU_CFLAGS) can continue to override it if
necessary.
These settings were appended to the final build CFLAGS and used to
contain a mix of obsolete settings that can equally be passed in one
of the many other variables such as DEFINE or more recently CFLAGS.
Let's just drop the obsolete comment about it, and check if anything
was forced there, then emit a warning suggesting to move that to other
variables such as DEFINE or CFLAGS, so as to be kind to package
maintainers.
BUILD: makefile: allow to use CFLAGS to append build options
CFLAGS has always been a troublemaker because the variable was preset
based on other options, including dynamically detected ones, so
overriding it would just lose the original contents, forcing users
to resort to various alternatives such as DEFINE, ADDINC or SMALL_OPTS.
Now that the variable's usage was cleared, let's just preset it to
empty (and it MUST absolutely remain like this) and append it at the
end of the compiler's options. This now makes it possible to change an
optimization level, force a CPU type or disable a warning, as users
commonly expect from CFLAGS passed to a makefile, and not to override
*all* the compiler flags as it has progressively become.
BUILD: makefile: get rid of the config CFLAGS variable
CFLAGS currently is a concatenation of 4 other variables, some of which
are dynamically determined. This has long been totally unusable to pass
any extra option. Let's just get rid of it and pass the 4 variables at
the 3 only places CFLAGS was used. This will later allow us to make
CFLAGS something really usable.
This also has the benefit of implicitly restoring the build on AIX5
which needs to disable DEBUG_CFLAGS to solve symbol issues when built
with -g. Indeed, that one got ignored since the targets moved past the
CFLAGS definition which collects DEBUG_CFLAGS.
BUILD: pools: make DEBUG_MEMORY_POOLS=1 the default option
This option has been set by default for a very long time and also
complicates the manipulation of the DEBUG variable. Let's make it
the official default and permit unsetting it by setting it to zero.
The other pool-related DEBUG options were adjusted to also explicitly
check for the zero value for consistency.
We continue to carry it in the makefile, which adds to the difficulty
of passing new options. Let's make DEBUG_STRICT=1 the default so that
one has to explicitly pass DEBUG_STRICT=0 to disable it. This allows us
to remove the option from the default DEBUG variable in the makefile.
BUILD: cache: fix non-inline vs inline declaration mismatch to silence a warning
Some compilers report this on the cache:
src/cache.c:235: warning: 'release_entry_locked' declared inline after being called
src/cache.c:235: warning: previous declaration of 'release_entry_locked' was here
And indeed, the function is first declared non-inline and later inline.
Let's just set the inline status from the beginning. It's not really
needed to backport this.
BUG/MINOR: debug: make sure DEBUG_STRICT=0 does work as documented
Setting DEBUG_STRICT=0 only validates the defined(DEBUG_STRICT) test
regarding DEBUG_STRICT_ACTION, which is equivalent to DEBUG_STRICT>=0.
Let's make sure the test checks for >0 so that DEBUG_STRICT=0 properly
disables DEBUG_STRICT.
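The change boils down to this kind of preprocessor guard:

    /* before: matches DEBUG_STRICT=0 as well */
    #if defined(DEBUG_STRICT)
    #endif

    /* after: DEBUG_STRICT=0 really disables it */
    #if defined(DEBUG_STRICT) && (DEBUG_STRICT > 0)
    #endif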
BUILD: atomic: fix peers build regression on gcc < 4.7 after recent changes
Recent commit 4c1480f13b ("MINOR: stick-tables: mark the seen stksess
with a flag "seen"") introduced a build regression on older versions of
gcc before 4.7. In the old __sync_ API, the HA_ATOMIC_LOAD()
implementation uses an intermediary return value called "ret" that has
the same name as the variable passed as argument to the macro in the
aforementioned commit. As such, the compiler complains with a cryptic
error:
src/peers.c: In function 'peer_teach_process_stksess_lookup':
src/peers.c:1502: error: invalid type argument of '->' (have 'int')
The solution is to avoid referencing the argument in the expression and
using an intermediary variable for the pointer as done elsewhere in the
code. It seems there's no other place affected with this. It probably
does not need to be backported since this code is antique and very rarely
used nowadays.
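Here is a self-contained toy reproducing the capture (the macro below is a
simplified stand-in for the old __sync_-based HA_ATOMIC_LOAD(), not its
exact definition):

    /* The macro's internal temporary is named "ret"; if the argument
     * expression also references a variable named "ret", the expansion
     * operates on the int temporary, hence "invalid type argument of
     * '->' (have 'int')". */
    #define LOAD_SKETCH(addr) ({ typeof(*(addr)) ret; ret = *(addr); ret; })

    struct obj { int flags; };

    static int read_flags(struct obj *ret)
    {
        /* broken: return LOAD_SKETCH(&ret->flags); */
        struct obj *p = ret;            /* the fix: intermediary variable */
        return LOAD_SKETCH(&p->flags);
    }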
Using an invalid GUID for guid_insert() causes a crash. This is easily
reproducible using, for example, an invalid character with the "guid"
keyword. Here is the provided backtrace:
Thread 1 "haproxy" received signal SIGSEGV, Segmentation fault.
0x00005555561fda95 in guid_insert (objt=0x520000002080, uid=0x519000002dac "@foo2", errmsg=0x7ffff4c0a7a0)
at src/guid.c:83
83 ha_free(&guid->node.key);
This error is in the guid_insert() cleanup parts. The GUID node is not
allocated in case of an early error, so it must not be dereferenced to
free guid.node.key. Fix this simply by using an intermediary key pointer.
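Here is a simplified sketch of the fix pattern (toy code, not the actual
src/guid.c):

    #include <stdlib.h>
    #include <string.h>

    /* Sketch: on an early validation error the node was never set up,
     * so freeing through guid->node.key dereferenced garbage. Keeping
     * the key in a local pointer makes the error path always safe. */
    static int guid_insert_sketch(const char *uid)
    {
        char *key = NULL;

        if (strchr(uid, '@'))        /* invalid character: early error */
            goto err;
        key = strdup(uid);
        if (!key)
            goto err;
        /* ... allocate the node, attach key, insert into the tree ... */
        return 0;
     err:
        free(key);                   /* NULL or an owned copy: safe */
        return 1;
    }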
William rightfully reported that not supporting =0 to disable a USE_xxx
option is sometimes painful (e.g. a script might do USE_xxx=$(command)).
It's not that difficult to handle actually, we just need to consider the
value 0 as empty at the few places that test for an empty string in
options.mk, and in each "ifneq" test in the main Makefile, so let's do
that. We even take care of preserving the original value in the build
options string so that building with USE_OPENSSL=0 will be reported
as-is in haproxy -vv, and with "-OPENSSL" in the feature list.
BUILD: makefile: warn about unknown USE_* variables
William suggested that it would be nice to warn about unknown USE_*
variables to more easily catch misspelled ones. The valid ones are
present in use_opts, so by appending "=%" to each of them, we can
build a series of patterns to exclude from MAKEOVERRIDES and emit
a warning for the ones that stand out.
Example:
$ make TARGET=linux-glibc USE_QUIC_COMPAT_OPENSSL=1
Makefile:338: Warning: ignoring unknown build option: USE_QUIC_COMPAT_OPENSSL=1
CC src/slz.o
BUG/MEDIUM: http-ana: Deliver 502 on keep-alive for fresh server connection
In HTTP keep-alive, if we face a connection error to the server while
sending the request, the error should not be reported, and the client-side
connection should simply be closed, so that the client knows it can retry.
If the error happens during the connection stage, there are two cases. We
have a connection timeout or an allocation error. In this case, the 503
response must be skipped if it is not the first request on the client-side
connection. Or we have a connection error. In this case, the 503 response
must be skipped if it is a reused server connection. Otherwise, during the
connection stage, the 503-Service-unavailable response is delivered to the
client. This part works properly.
If the error happens after this stage, the 502-Bad-gateway response
delivering should only be based on the server-side connection status. For a
reused server connection, the client-side connection must be closed with no
response. However, for a fresh server-side connection, a 502-Bad-gateway
response must be delivered to the client. Unfortunately, this part is
buggy. Only the client-side connection state is considered and the response
is skipped if it is not the first request for the same client connection.
The bug is not so visible in HTTP/1.1, but in H2 and H3 it is pretty
annoying because, on a given connection, requests are multiplexed in
parallel. It means there is no first request. So, because of this bug, for
H2 and H3, 502-Bad-gateway responses caused by a connection error before
receiving the response are always skipped.
To fix the issue, in the http_wait_for_response() analyser, we must only
rely on the SF_SRV_REUSED stream flag to decide whether to skip the 502
response. This flag is set if the server connection was reused.
The bug has been there for a while. The SF_SRV_REUSED flag was added in
version 1.5 especially to fix this kind of bug, but only the 503 case was
fixed.
This patch should fix the issue #2285. It must be backported to all
stable versions.
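In short, the decision must depend only on whether the server connection
was reused; a minimal sketch (the flag name comes from the commit, its
value and the helper are illustrative):

    #define SF_SRV_REUSED_SKETCH 0x1   /* illustrative value only */

    /* Past the connection stage, a server error yields a 502 only for
     * a fresh server connection; on a reused one the client connection
     * is closed silently so the client can retry, regardless of how
     * many requests were already multiplexed on it (H2/H3). */
    static int must_send_502(unsigned int stream_flags)
    {
        return !(stream_flags & SF_SRV_REUSED_SKETCH);
    }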
OPTIM: quic: do not call qc_prep_pkts() if everything sent
qc_send() is implemented as a loop to repeatedly invoke
qc_prep_pkts()/qc_send_ppkts(). This ensures that all data are emitted
even if bigger than a single Tx buffer instance. This is useful if the
congestion window is empty but big enough for application data.
Looping is interrupted if qc_prep_pkts() returns a negative error
code, for example due to no space left in the congestion window. It can
also return 0 if there is no input data to send, which also interrupts
the loop.
To limit this last case, remove the quic_enc_level from send_list each
time everything was already sent via qc_prep_pkts(). The loop can then be
interrupted as soon as send_list is empty, avoiding an extra superfluous
call to qc_prep_pkts().