Willy Tarreau [Thu, 19 Oct 2017 04:28:23 +0000 (06:28 +0200)]
MINOR: ist: add ist0() to add a trailing zero to a string.
This function modifies the string to add a zero after the end, and returns
the start pointer. The purpose is to use it on strings extracted by parsers
from larger strings cut with delimiters that are not important and can be
destroyed. It allows any such string to be used with regular string
functions. It's also convenient to use with printf() to show data extracted
from writable areas.
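As a purely illustrative sketch (not necessarily the exact committed code),
assuming the usual struct ist layout with a .ptr and a .len field, such a
helper boils down to:

  static inline char *ist0_sketch(struct ist ist)
  {
          ist.ptr[ist.len] = '\0'; /* needs one writable byte past the end */
          return ist.ptr;
  }

which allows e.g. printf("%s\n", ist0_sketch(word)) on extracts taken from a
writable parsing buffer.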
Willy Tarreau [Thu, 19 Oct 2017 12:58:40 +0000 (14:58 +0200)]
MINOR: channel: make the channel be a const in all {ci,co}_get* functions
There's no point having the channel marked writable as these functions
only extract data from the channel. The code was retrieved from their
ci/co ancestors.
Willy Tarreau [Thu, 19 Oct 2017 12:32:15 +0000 (14:32 +0200)]
REORG: channel: finally rename the last bi_* / bo_* functions
For HTTP/2 we'll need some buffer-only equivalent functions to some of
the ones applying to channels and still squatting the bi_* / bo_*
namespace. Since these names have been misleading for quite some
time now and are really getting annoying, it's time to rename them. This
commit will use "ci/co" as the prefix (for "channel in", "channel out")
instead of "bi/bo". The following ones were renamed :
Willy Tarreau [Mon, 16 Oct 2017 12:01:18 +0000 (14:01 +0200)]
MINOR: buffer: add buffer_space_wraps()
This function returns true if the available buffer space wraps. This
will be used to detect if it's worth realigning a buffer when it lacks
contiguous space.
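As an illustration only (not the committed implementation), with the 1.8-era
struct buffer layout where <o> output bytes precede <p> and <i> input bytes
follow it, both possibly wrapping at data+size, the test can be sketched as:

  static inline int buffer_space_wraps_sketch(const struct buffer *buf)
  {
          const char *end     = buf->data + buf->size;
          const char *in_end  = buf->p + buf->i; /* end of input data, unwrapped */
          const char *out_beg = buf->p - buf->o; /* start of output data, unwrapped */

          if (in_end > end)        /* input already wraps: free area is contiguous */
                  return 0;
          if (out_beg < buf->data) /* output wraps: free area is contiguous too */
                  return 0;
          /* free space wraps when a part remains at the end and another at the start */
          return (in_end < end) && (out_beg > buf->data);
  }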
MINOR: buffer: add two functions to inject data into buffers
bi_istput() injects the ist string into the input region of the buffer,
it will be used to feed small data chunks into the conn_stream. bo_istput()
does the same into the output region of the buffer, it will be used to send
data via the transport layer and assumes there's no input data.
MINOR: buffer: add a function to match against string patterns
In order to match known patterns in a wrapping buffer, we'll introduce new
string manipulation functions for buffers. The new function b_isteq()
relies on an ist string for the pattern and compares it against any
location in the buffer relative to <p>. The second function bi_eat()
is specially designed to match input contents.
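The comparison logic can be sketched like this (names and prototype are
illustrative, not the committed ones); the caller is assumed to have checked
that enough readable bytes are present:

  static inline int b_isteq_sketch(const struct buffer *b, size_t ofs,
                                   const struct ist pat)
  {
          const char *end = b->data + b->size;
          const char *p   = b->p + ofs;        /* offset relative to <p> */
          size_t i;

          if (p >= end)                        /* wrap the starting point */
                  p -= b->size;
          for (i = 0; i < pat.len; i++) {
                  if (*p != pat.ptr[i])
                          return 0;
                  if (++p >= end)              /* wrap while walking */
                          p = b->data;
          }
          return 1;
  }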
Willy Tarreau [Wed, 18 Oct 2017 06:32:12 +0000 (08:32 +0200)]
MINOR: buffer: add bo_del() to delete a number of characters from output
This simply reduces the amount of output data in the buffer after
it has been transferred, in a way that is more natural than
fiddling with buf->o. b_del() was renamed to bi_del() to avoid any
ambiguity (it's not yet used).
Olivier Houchard [Tue, 17 Oct 2017 17:23:25 +0000 (19:23 +0200)]
BUG/MINOR: stats: Clear a bit more counters in cli_parse_clear_counters().
Clear MaxSslRate, SslFrontendMaxKeyRate and SslBackendMaxKeyRate when
"clear counters" is used; this was probably forgotten when those counters
were added.
[wt: this can probably be backported as far as 1.5 in dumpstats.c]
Willy Tarreau [Wed, 18 Oct 2017 09:39:33 +0000 (11:39 +0200)]
BUG/MINOR: tools: fix my_htonll() on x86_64
Commit 36eb3a3 ("MINOR: tools: make my_htonll() more efficient on x86_64")
brought an incorrect asm statement missing the input constraint, so the
input value was not necessarily placed in the same register as the
output one, resulting in random output. It happens to work when building at
-O0 but not above. This was only detected in the HTTP/2 parser, but in
mainline it could only affect the integer to binary sample cast.
No backport is needed since this bug was only introduced in the development
branch.
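For reference, the difference boils down to the asm constraints; a hedged
sketch of the x86_64 variant:

  static inline unsigned long long my_htonll_sketch(unsigned long long x)
  {
          /* broken form: __asm__("bswapq %0" : "=r"(x));
           * without an input constraint, gcc may pick a register for the
           * output that doesn't hold <x>, producing random output at -O1+.
           * The "0" constraint below ties the input to output operand 0 so
           * the value really gets swapped in place. */
          __asm__("bswapq %0" : "=r"(x) : "0"(x));
          return x;
  }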
Willy Tarreau [Tue, 17 Oct 2017 14:33:46 +0000 (16:33 +0200)]
BUG/MINOR: stream-int: don't set MSG_MORE on SHUTW_NOW without AUTO_CLOSE
Since around 1.5-dev12, we've been setting MSG_MORE on send() on various
conditions, including the fact that SHUTW_NOW is present, but we don't
check that it's accompanied with AUTO_CLOSE. The result is that on requests
immediately followed by a close (where AUTO_CLOSE is not set), the request
gets delayed in the TCP stack before being sent to the server. This is
visible with the H2 code where the end-of-stream flag is set on requests,
but probably happens when a POLL_HUP is detected along with the request.
The (lack of) presence of option abortonclose has no effect here since we
never send the SHUTW along with the request.
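The idea of the fix can be sketched as below; the flag names are the real
channel/connection ones, but the exact test in the send path may differ:

  /* only merge the pending shutdown into MSG_MORE when the close will
   * really follow the data, i.e. when AUTO_CLOSE is set as well.
   * <oc> is the output channel, <send_flag> the flags passed to snd_buf(). */
  if ((oc->flags & (CF_SHUTW | CF_SHUTW_NOW | CF_AUTO_CLOSE)) ==
      (CF_SHUTW_NOW | CF_AUTO_CLOSE))
          send_flag |= CO_SFL_MSG_MORE;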
The hour part of the timezone offset was multiplied by 60 instead of
3600, resulting in an inaccurate expiry. This bug was introduced in
1.6-dev1 by commit 4f3c87a ("BUG/MEDIUM: ssl: Fix to not serve expired
OCSP responses."), so this fix must be backported into 1.7 and 1.6.
Emeric Brun [Tue, 3 Oct 2017 12:46:45 +0000 (14:46 +0200)]
MAJOR: servers: propagate server status changes asynchronously.
In order to prepare multi-thread development, the code was reworked
to propagate changes asynchronously.
Servers with pending status changes are registered in a list
and this list is processed and emptied once per 'run poll' loop.
Operational status changes are performed before administrative
status changes.
In the case of multiple operational or administrative status changes
in the same 'run poll' loop iteration, those changes are
merged to reach only the targeted status.
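The pattern can be sketched as follows (illustrative names only, including the
hypothetical update_entry member; the real servers code differs):

  struct list pending_srv_updates = LIST_HEAD_INIT(pending_srv_updates);

  void srv_register_update_sketch(struct server *srv)
  {
          /* only record that this server has pending changes */
          if (LIST_ISEMPTY(&srv->update_entry))
                  LIST_ADDQ(&pending_srv_updates, &srv->update_entry);
  }

  void srv_apply_pending_updates_sketch(void) /* once per 'run poll' loop */
  {
          struct server *srv, *back;

          list_for_each_entry_safe(srv, back, &pending_srv_updates, update_entry) {
                  /* merge and apply operational changes first, then admin ones */
                  LIST_DEL(&srv->update_entry);
                  LIST_INIT(&srv->update_entry);
          }
  }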
Willy Tarreau [Fri, 13 Oct 2017 09:03:15 +0000 (11:03 +0200)]
MINOR: payload: add new sample fetch functions to process distcc protocol
When using haproxy in front of distccd, it's possible to provide significant
improvements by only connecting when the preprocessing is completed, and by
selecting different farms depending on the payload size. This patch provides
two new sample fetch functions :
Willy Tarreau [Fri, 13 Oct 2017 09:46:26 +0000 (11:46 +0200)]
MINOR: server: add the srv_queue() sample fetch method
srv_queue([<backend>/]<server>) : integer
Returns an integer value corresponding to the number of connections currently
pending in the designated server's queue. If <backend> is omitted, then the
server is looked up in the current backend. It can sometimes be used together
with the "use-server" directive to force the use of a known faster server when
it is not too loaded. See also the "srv_conn", "avg_queue" and "queue" sample
fetch methods.
Willy Tarreau [Sun, 8 Oct 2017 19:32:53 +0000 (21:32 +0200)]
MINOR: session: remove the list of streams from struct session
Commit bcb86ab ("MINOR: session: add a streams field to the session
struct") added this list of streams that is not needed anymore. Let's
get rid of it now.
Willy Tarreau [Sun, 8 Oct 2017 20:26:03 +0000 (22:26 +0200)]
MINOR: compiler: restore the likely() wrapper for gcc 5.x
After some tests, gcc 5.x produces better code with likely()
than without, contrary to gcc 4.x where it was better to disable
it. Let's re-enable it for 5 and above.
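For reference, the wrapper is the classic __builtin_expect construct, roughly:

  #if defined(__GNUC__) && (__GNUC__ < 5)
  /* gcc 4.x tended to produce worse code with the hint, keep it a no-op */
  #define likely(x)   (x)
  #define unlikely(x) (x)
  #else
  #define likely(x)   (__builtin_expect((x) != 0, 1))
  #define unlikely(x) (__builtin_expect((x) != 0, 0))
  #endif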
BUILD/MINOR: 51d: fix warning when building with 51Degrees release version 3.2.12.12
The warning appears when building with a 51Degrees release that uses the new
Hash Trie algorithm (release version 3.2.12.12):
src/51d.c: In function init_51degrees:
src/51d.c:566:2: warning: enumeration value DATA_SET_INIT_STATUS_TOO_MANY_OPEN_FILES not handled in switch [-Wswitch]
switch (_51d_dataset_status) {
^
Bin Wang [Fri, 15 Sep 2017 06:56:40 +0000 (14:56 +0800)]
BUG/MAJOR: stream-int: don't re-arm recv if send fails
When
1) HAProxy is configured to enable splice in both directions, and
2) after some high load, there are 2 input channels with their socket buffers
being non-empty and their pipes being full at the same time, sitting in `fd_cache`
without any other fds,
the 2 channels will repeatedly be stopped for receiving (pipe full) and woken
up for receiving (data in socket), thus getting in and out of `fd_cache`, making
their fds swap locations in `fd_cache`.
There is a `if (entry < fd_cache_num && fd_cache[entry] != fd) continue;`
statement in `fd_process_cached_events` to prevent frequent polling, but since
the only 2 fds are constantly swapping location, `fd_cache[entry] != fd` will
always hold true, thus HAProxy can't make any progress.
The root cause of the issue is twofold :
- there is a single fd_cache, for next events and for the ones being
processed, while using two distinct arrays would avoid the problem.
- the write side of the stream interface wakes the read side up even
when it couldn't write, and this one really is a bug.
Due to CF_WRITE_PARTIAL not being cleared during fast forwarding, a failed
send() attempt will still cause ->chk_rcv() to be called on the other side,
re-creating an entry for its connection fd in the cache, causing the same
sequence to be repeated indefinitely without any opportunity to make progress.
CF_WRITE_PARTIAL used to be used for what is present in these tests : check
if a recent write operation was performed. It's part of the CF_WRITE_ACTIVITY
set and is tested to check if timeouts need to be updated. It's also used to
detect if a failed connect() may be retried.
What this patch does is use CF_WROTE_DATA() to check for a successful write
for connection retransmits, and to clear CF_WRITE_PARTIAL before preparing
to send in stream_int_notify(). This way, timeouts are still updated each
time a write succeeds, but chk_rcv() won't be called anymore after a failed
write.
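The principle can be sketched like this (illustrative fragment, not the exact
stream_int_notify() code):

  /* forget the "recently wrote" indication before trying to send */
  oc->flags &= ~CF_WRITE_PARTIAL;

  si_conn_send(conn);                  /* may write nothing, e.g. on EAGAIN */

  if (oc->flags & CF_WRITE_PARTIAL) {
          /* some data was really written during this pass: only now is it
           * worth waking the other side's reader (->chk_rcv()) */
  }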
It seems the fix is required all the way down to 1.5.
Without this patch, the only workaround at this point is to disable splicing
in at least one direction. Strictly speaking, splicing is not absolutely
required, as regular forwarding could theoretically cause the issue to happen
if the timing is appropriate, but in practice it appears impossible to
reproduce it without splicing, and even with splicing it may vary.
The following config manages to reproduce it after a few attempts (haproxy
going 100% CPU and having to be killed) :
BUG/MEDIUM: http: Return an error when url_dec sample converter failed
The url_dec sample converter uses the url_decode function to decode a URL. This
function fails by returning -1 when an invalid character is found. But the
sample converter never checked the return value and used it as the length of the
decoded string. Because it always reported success, the invalid sample (with a string
length set to -1) could be used by other sample fetches or sample converters,
leading to undefined behavior like segfault.
The fix is pretty simple, url_dec sample converter just needs to return an error
when url_decode fails.
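A hedged sketch of the fix in the converter (the surrounding sample code is
simplified):

  int len = url_decode(smp->data.u.str.str); /* decodes in place, -1 on error */

  if (len < 0)
          return 0;                          /* report the converter failure */
  smp->data.u.str.len = len;
  return 1;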
Willy Tarreau [Wed, 4 Oct 2017 18:24:54 +0000 (20:24 +0200)]
BUG/MEDIUM: cli: fix "show fd" crash when dumping closed FDs
I misplaced the "if (!fdt.owner)" test so it can occasionally crash
when dumping an fd that's already been closed but still appears in
the table. It's not critical since this was not pushed into any
release nor backported though.
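The intended ordering is simply to test the copied entry's owner before
dereferencing anything, e.g.:

  fdt = fdtab[fd];
  if (!fdt.owner)
          continue; /* fd was closed in the meantime, nothing to dump */
  /* ... only now dereference fdt.owner and dump the fd's state ... */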
Willy Tarreau [Wed, 4 Oct 2017 16:05:01 +0000 (18:05 +0200)]
MEDIUM: checks: do not allocate a permanent connection anymore
Health checks currently cheat: they allocate a connection upon startup and never
release it; it's only recycled. The problem with doing this is that it
prevents the connection code from evolving towards multiplexing.
This code ensures that it's safe for the checks to run without a connection
all the time. Given that the code heavily relies on CO_FL_ERROR to signal
check errors, it is not trivial but in practice this is the principle adopted
here :
- the connection is not allocated anymore on startup
- new checks are not supposed to have a connection, so an attempt is made
to allocate this connection in the check task's context. If it fails,
the check is aborted on a resource error, and the rare code on this path
verifying the connection was adjusted to check for its existence (in
practice, to avoid closing it)
- returning checks necessarily have a valid connection (which may possibly
be closed).
- a "tcp-check connect" rule tries to allocate a new connection before
releasing the previous one (but after closing it), so that if it fails,
it still keeps the previous connection in a closed state. This ensures
a connection is always valid here
Now it works well on all tested cases (regular and TCP checks, even with
multiple reconnections), including when the connection is forced to NULL or
randomly allocated.
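The allocate-before-release idea used for "tcp-check connect" can be sketched
as follows (illustrative; error handling and helper names simplified):

  /* the previous connection has just been closed but is still owned */
  struct connection *conn = conn_new();

  if (!conn) {
          /* keep the old, closed connection so the check still owns one,
           * and report a resource error for this run */
          set_server_check_status(check, HCHK_STATUS_SOCKERR, "out of memory");
          return;
  }
  conn_free(check->conn);
  check->conn = conn;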
Willy Tarreau [Wed, 4 Oct 2017 16:41:00 +0000 (18:41 +0200)]
MEDIUM: checks: make tcpcheck_main() indicate if it recycled a connection
The tcp-checks are very fragile. They can modify a connection's FD by
closing and reopening a socket without informing the connection layer,
which may then possibly touch the wrong fd. Given that the events are
only cleared and the fd is freshly created, there should be no visible
side effect: the old fd is deleted, so even if its flags get cleared
they already were, and the new fd already has them cleared as well, so it's
a NOP. Regardless, this is too fragile and will not survive threads.
In order to address this situation, this patch makes tcpcheck_main()
indicate if it closed a connection and report it to wake_srv_chk(), which
will then report it to the connection's fd handler so that it refrains
from updating the connection polling and the fd. Instead the connection
polling status is updated in the wake() function.
Willy Tarreau [Wed, 4 Oct 2017 14:21:19 +0000 (16:21 +0200)]
MINOR: checks: don't create then kill a dummy connection before tcp-checks
When tcp-checks are in use, a connection starts to be created, then it's
destroyed so that tcp-check can recreate its own. Now we directly move
to tcpcheck_main() when it's detected that tcp-check is in use.
Willy Tarreau [Wed, 4 Oct 2017 13:58:52 +0000 (15:58 +0200)]
BUG/MINOR: tcp-check: don't initialize then break a connection starting with a comment
The following config :
backend tcp9000
        option tcp-check
        tcp-check comment "this is a comment"
        tcp-check connect port 10000
        server srv 127.0.0.1:9000 check inter 1s
will result in a connection being first made to port 9000 then immediately
destroyed and re-created on port 10000, because the first rule is a comment
and doesn't match the test for the first rule being a connect(). It's
mostly harmless (unless the server really must not receive empty
connections) and the workaround simply consists in removing the comment.
Let's proceed like in other places where we simply skip leading comments.
A new function was made to make this lookup less boring. The fix should be
backported to 1.7 and 1.6.
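The lookup can be sketched like this (illustrative; the committed helper may
differ):

  /* return the first non-comment rule of a tcp-check rule list, or NULL */
  static struct tcpcheck_rule *first_tcpcheck_rule_sketch(struct list *rules)
  {
          struct tcpcheck_rule *r;

          list_for_each_entry(r, rules, list)
                  if (r->action != TCPCHK_ACT_COMMENT)
                          return r;
          return NULL;
  }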
Willy Tarreau [Wed, 4 Oct 2017 13:25:38 +0000 (15:25 +0200)]
CLEANUP: checks: do not allocate a connection for process checks
Since this connection is not used at all anymore, do not allocate it.
It was verified that check successes and failures (both synchronous
and asynchronous) continue to be properly reported.
Willy Tarreau [Wed, 4 Oct 2017 13:19:26 +0000 (15:19 +0200)]
CLEANUP: checks: don't report the fork() error twice
Upon fork() error, a first report is immediately made by connect_proc_chk()
via set_server_check_status(), then process_chk_proc() detects the error
code and makes up a dummy connection error to call chk_report_conn_err(),
which tries to retrieve the errno code from the connection, fails, then
saves the status message from the check, fails all "if" tests on its path
related to the connection then resets the check's state to the current one
with the current status message. All this useless chain is the only reason
why process checks require a connection! Let's simply get rid of this second
useless call.
Willy Tarreau [Wed, 4 Oct 2017 13:07:02 +0000 (15:07 +0200)]
CLEANUP: checks: remove misleading comments and statuses for external process
The external process check code abused copy-pasting a little bit, to
the point of making it look like it requires a connection... The initialization
code only returns SF_ERR_NONE and SF_ERR_RESOURCE, so the other case can
be folded there. The code now only uses the connection to report the
error status.
Willy Tarreau [Wed, 4 Oct 2017 12:47:29 +0000 (14:47 +0200)]
MINOR: checks: make chk_report_conn_err() take a check, not a connection
Amazingly, this function takes a connection to report an error and is used
by process checks, placing a hard dependency between the connection and the
check, which prevents the mux from being completely implemented. Let's first get
rid of this.
Willy Tarreau [Wed, 4 Oct 2017 12:43:44 +0000 (14:43 +0200)]
BUG/MINOR: unix: properly check for octal digits in the "mode" argument
A config containing "stats socket /path/to/socket mode admin" used to
silently start and be unusable (mode 0, level user) because the "mode"
parser doesn't take care of non-digits. Now it properly reports :
This can probably be backported to 1.7, 1.6 and 1.5, though reporting
parsing errors in very old versions probably isn't a good idea if the
feature was left unused for years.
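The kind of validation involved can be sketched with strtol() in base 8 plus
an explicit error report (illustrative only, not the committed parser):

  char *endptr;
  long mode = strtol(arg, &endptr, 8);

  if (!*arg || *endptr || mode < 0 || mode > 0777) {
          memprintf(err, "'mode' : missing or invalid value '%s' (octal integer expected)", arg);
          return ERR_ALERT | ERR_FATAL;
  }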
Willy Tarreau [Wed, 4 Oct 2017 09:58:22 +0000 (11:58 +0200)]
BUG/MEDIUM: tcp-check: don't call tcpcheck_main() from the I/O handlers!
This function can destroy a socket and create a new one, resulting in a
change of FD on the connection between recv() and send() for example,
which is absolutely not permitted, and can result in various funny games
like polling not being properly updated (or with the flags from a previous
fd) etc.
Let's only call this from the wake() callback which is more tolerant.
Ideally the operations should be made even more reliable by returning
a specific value to indicate that the connection was released and that
another one was created. But this is hazardous for stable releases as
it may reveal other issues.
Willy Tarreau [Wed, 4 Oct 2017 09:38:08 +0000 (11:38 +0200)]
BUG/MINOR: tcp-check: don't quit with pending data in the send buffer
In the rare case where the "tcp-check send" directive is the last one in
the list, it leaves the loop without sending the data. Fortunately, the
polling is still enabled on output, resulting in the connection handler
calling back to send what remains, but this is ugly and not very reliable.
Willy Tarreau [Wed, 4 Oct 2017 06:45:19 +0000 (08:45 +0200)]
BUG/MEDIUM: tcp-check: properly indicate polling state before performing I/O
While porting the connection to use the mux layer, it appeared that
tcp-checks wouldn't receive anymore because the polling is not enabled
before attempting to call xprt->rcv_buf() or xprt->snd_buf(), and it
is illegal to call these functions with polling disabled as they
directly manipulate the FD state, resulting in an inconsistency where
the FD is enabled and the connection's polling flags disabled.
Till now it happened to work only because when recv() fails on EAGAIN
it calls fd_cant_recv() which enables polling while signaling the
failure, so that next time the message is received. But the connection's
polling is never enabled, and any tiny change resulting in a call to
conn_data_update_polling() immediately disables reading again.
It's likely that this problem already happens on some corner cases
such as multi-packet responses. It definitely breaks as soon as the
response buffer is full but we don't support consuming more than one
response buffer.
This fix should be backported to 1.7 and 1.6. In order to check for the
proper behaviour, this tcp-check must work and clearly show an SSH
banner in recvfrom() as observed under strace, otherwise it's broken :
Willy Tarreau [Wed, 4 Oct 2017 05:48:56 +0000 (07:48 +0200)]
CLEANUP: checks: don't set conn->handle.fd to -1
This used to be needed a long time ago (before tcp_checks) to know
whether a check was in progress, but this is not true anymore and it even
becomes wrong after the check is reused, as conn_init() initializes it
to DEAD_FD_MAGIC.
A regression has been introduced in commit
00005ce5a14310d248c9f20af9ef258d245d43b1: the port being changed is the
one from 'cli_conn->addr.from' instead of 'cli_conn->addr.to'.
This patch fixes the regression.
Backport status: should be backported to HAProxy 1.7 and above.
MINOR: ist: add a macro to ease const array initialization
It's not possible to use strlen() in const arrays even with const
strings, but we can use sizeof-1 via a macro. Let's provide this in
the IST() macro, as it saves the developer from having to count the
characters.
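A hedged sketch of what such a macro can look like, assuming the usual
struct ist layout (.ptr, .len); the committed definition may differ in details:

  #define IST_SKETCH(str) { .ptr = (char *)(str), .len = sizeof(str) - 1 }

  /* usable in const array initializers, no strlen() needed */
  static const struct ist methods[] = {
          IST_SKETCH("GET"),
          IST_SKETCH("HEAD"),
          IST_SKETCH("POST"),
  };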
MINOR: connection: adjust CO_FL_NOTIFY_DATA after removal of flags
After the removal of CO_FL_DATA_RD_SH and CO_FL_DATA_WR_SH, the
aggregate mask CO_FL_NOTIFY_DATA was not updated. It happens that
now CO_FL_NOTIFY_DATA and CO_FL_NOTIFY_DONE are similar, which may
reveal some overlap between the ->wake and ->xprt_done callbacks.
We'll see after the mux changes if both are still required.
These ones are the same as the previous ones but for 64 bit values.
We're using my_ntohll() and my_htonll() from standard.h for the byte
order conversion.
These ones are the equivalent of the read_* functions. They support
writing unaligned words, possibly wrapping, in host and network order.
The write_i*() functions were not implemented since the caller can
already use the unsigned version.
MINOR: net_helper: add functions to read from vectors
This patch adds the ability to read from a wrapping memory area (ie:
buffers). The new functions are called "readv_<type>". The original
ones were renamed to start with "read_" to make the difference more
obvious between the read method and the returned type.
It's worth noting that the memory barrier in readv_bytes() is critical,
as otherwise gcc decides that it doesn't need the resulting data, but
even worse, removes the length checks in readv_u64() and happily
performs an out-of-bounds unaligned read using read_u64()! Such
"optimizations" are a bit borderline, especially when they impact
security like this...
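A heavily hedged sketch of the idea (the real prototypes in net_helper.h
differ): copy possibly-wrapping bytes into a local area, then place a compiler
barrier so that neither the copy nor the callers' bound checks can be
optimised away:

  static inline void readv_bytes_sketch(void *dst, size_t size, const char *src,
                                        const char *area, const char *wrap)
  {
          char *out = dst;
          size_t i;

          for (i = 0; i < size; i++) {
                  out[i] = *src++;
                  if (src >= wrap)
                          src = area;          /* wrap to the area's start */
          }
          __asm__ volatile("" ::: "memory");   /* compiler barrier */
  }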
These ones return respectively the pointer to the end of the buffer and
the distance between b->p and the end. These will simplify a bit some
new code needed to parse directly from a wrapping buffer.
MINOR: tools: make my_htonll() more efficient on x86_64
The current construct was made when developing on a 32-bit machine.
Having a simple bswap operation replaced with 2 bswap, 2 shift and
2 or is quite a waste of precious cycles... Let's provide a trivial
asm-based implementation for x86_64.
BUG/MINOR: dns: Fix check on nameserver in snr_resolution_cb
snr_resolution_cb can be called with <nameserver> parameter set to NULL. So we
must check it before using it. This is done most of the time, except when we
deal with an invalid DNS response.
BUG/MEDIUM: compression: Fix check on txn in smp_fetch_res_comp_algo
The check was totally messed up. In the worst case, it led to a crash, when
res.comp_algo sample fetch was retrieved on uncompressed response (with the
compression enabled).
MEDIUM: session: count the frontend's connections at a single place
There are several places where we see feconn++, feconn--, totalconn++ and
an increment on the frontend's number of connections and connection rate.
This is done exactly once per session in each direction, so better take
care of this counter in the session and simplify the callers. At least it
ensures a better symmetry. It also ensures consistency as till now the
lua/spoe/peers frontend didn't have these counters properly set, which can
be useful at least for troubleshooting.
MEDIUM: session: factor out duplicated code for conn_complete_session
session_accept_fd() may either successfully complete a session creation,
or defer it to conn_complete_session() depending on whether a handshake
remains to be performed or not. The problem is that all the code after
the handshake was duplicated between the two functions.
This patch makes session_accept_fd() synchronously call
conn_complete_session() to finish the session creation. It is only needed
to check if the session's task has to be released or not at the end, which
is fairly minimal. This way there is now a single place where the sessions
are created.
MINOR: session: small cleanup of conn_complete_session()
Commit 8e3c6ce ("MEDIUM: connection: get rid of data->init() which was
not for data") simplified conn_complete_session() but introduced a
confusing check which cannot happen on CO_FL_HANDSHAKE. Make it clear
that this call is final and will either succeed and complete the
session or fail.
Instead of duplicating some sensitive listener-specific code in the
session and in the stream code, let's call listener_release() when
releasing a connection attached to a listener.
MEDIUM: session: take care of incrementing/decrementing jobs
Each user of a session increments/decrements the jobs variable at its
own place, resulting in a real mess and inconsistencies between them.
Let's have session_new() increment jobs and session_free() decrement
it.
MINOR: listeners: make listeners count consistent with reality
Some places call delete_listener() then decrement the number of
listeners and jobs. At least one other place calls delete_listener()
without doing so, but since it's in deinit(), it's harmless and cannot
risk causing zombie processes to survive. Given that the number of
listeners and jobs is incremented when creating the listeners, it's
much more logical to symmetrically decrement them when deleting such
listeners.
This function is used to create a series of listeners for a specific
address and a port range. It automatically calls the matching protocol
handlers to add them to the relevant lists. This way cfgparse doesn't
need to manipulate listeners anymore. As an added bonus, the memory
allocation is checked.
MINOR: unix: remove the now unused proto_uxst.h file
Since everything is self contained in proto_uxst.c there's no need to
export anything. The same should be done for proto_tcp.c but the file
contains other stuff that's not related to the TCP protocol itself
and which should first be moved somewhere else.
MINOR: protocols: register the ->add function and stop calling them directly
cfgparse has no business directly calling each individual protocol's 'add'
function to create a listener. Now that they're all registered, better
perform a protocol lookup on the family and have a standard ->add method
for all of them.
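The lookup plus generic ->add call can be sketched as below (illustrative
fragment; error handling and exact arguments simplified):

  struct protocol *proto = protocol_by_family(ss.ss_family);

  if (!proto || !proto->add)
          return 0; /* unsupported address family for a listener */
  proto->add(listener, port);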
MINOR: protocols: always pass a "port" argument to the listener creation
It's a shame that cfgparse() has to make special cases of each protocol
just to cast the port to the target address family. Let's pass the port
in argument to the function. The unix listener simply ignores it.
MINOR: frontend: don't retrieve ALPN on the critical path
It's pointless to read it on each and every accept(), as we only need
it for reporting in debugging mode a few lines later. Let's move this
part to the relevant block.
MINOR: peers: don't reference the incoming listener on outgoing connections
Since v1.7 it's pointless to reference a listener when creating a session
for an outgoing connection, it only complicates the code. SPOE and Lua were
cleaned up in 1.8-dev1 but the peers code was forgotten. This patch fixes
this by not assigning such a listener for outgoing connections. It also has
the extra benefit of not discounting the outgoing connections from the number
of allowed incoming connections (the code currently adds a safety margin of
3 extra connections to take care of this).
BUILD: Makefile: improve detection of support for compiler warnings
Some compiler versions don't emit an error when facing an unknown
-Wno-* option unless another error is reported, resulting in all -Wno-*
options being enabled by default and being reported as wrong with
build errors. Let's create a new "cc-nowarn" function to disable
warnings only after checking that the positive one is supported.
BUILD: Makefile: shut certain gcc/clang stupid warnings
The recent gcc and clang are utterly broken and apparently written by
people who don't use them anymore, because they emit warnings that are
impossible to disable in the code, which is the opposite of what a
warning should do. It is however possible to disable these warnings on
the command line.
This patch adds when supported :
-Wno-format-truncation: bogus warning which is triggered on each
snprintf() call based on the input type instead of the variables
ranges, resulting in the impossibility to use "%02d" and similar.
-Wno-address-of-packed-member: emitted for each and every line in
ebtree.h by recent clang. Probably that the warning's author has
never understood the use cases of packed structs and should be
taught the use cases of the language he writes the compiler for.
-Wno-null-dereference: emitted by clang on *(int *)0 = 0. The code
will be updated to use a volatile instead but this recent change
of behaviour will certainly cause quite some bugs in decades of
existing code.
Feel free to report new such stupid warnings and to propose patches
to complete this list.
BUILD: Makefile: add a function to detect support by the compiler of certain options
The recent gcc and clang are utterly broken and apparently written by
people who don't use them anymore, because they emit warnings that are
impossible to disable in the code, which is the opposite of what a
warning should do. It is however possible to disable these warnings on
the command line, but not in a backwards-compatible way.
Thus here we create a new function which detects if the compiler supports
certain options, and which adds them if supported.
MINOR: cli: add socket commands and config to prepend informational messages with severity
Adds cli commands to change at runtime whether informational messages
are prepended with severity level or not, with support for numeric and
worded severity in line with syslog severity level.
Adds stats socket config keyword severity-output to set default behavior
per socket on startup.
MINOR: tasks: Move Lua notification from Lua to tasks
These notification management functions and structs are generic and
it is better to move them into the common parts.
The notification management functions and structs have names
containing some "lua" references because they were written for
Lua. This patch also removes these references.
Thierry FOURNIER [Thu, 31 Aug 2017 18:35:18 +0000 (20:35 +0200)]
MINOR: xref: Add a new xref system
xref is used to create a relation between two elements.
Once an element is released, it breaks the relation. If the
relation is already broken, it frees the xref struct.
The pointer between two elements is a sort of refcount with
max value 1. The relation is only between two elements.
The pointer and the type of element a and b are conventional.
Note that xref is initialised from the Lua files because Lua is
currently the only user.
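A minimal sketch of the idea (illustrative only; the real implementation
notably has to care about concurrency):

  struct xref_sketch {
          struct xref_sketch *peer; /* the other element, NULL once broken */
  };

  static inline void xref_create_sketch(struct xref_sketch *a, struct xref_sketch *b)
  {
          a->peer = b;
          b->peer = a;
  }

  static inline void xref_disconnect_sketch(struct xref_sketch *x)
  {
          if (x->peer)
                  x->peer->peer = NULL; /* break the relation on the other side */
          x->peer = NULL;
  }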
BUG/MEDIUM: http: Close streams for connections closed before a redirect
A previous fix was made to prevent the connection to a server if a redirect was
performed during the request processing when we wait to keep the client
connection alive. This fix introduced a pernicious bug. If a client closes its
connection immediately after sending a request, it is possible to keep the
stream alive infinitely. This happens when the connection closure is caught when the
request is received, before the request parsing.
To be more specific, this happens because the close event is not "forwarded",
first because of the call to "channel_dont_connect" in the function
"http_apply_redirect_rule", then because we want to keep the client connection
alive, we explicitly call "channel_dont_close" in the function
"http_request_forward_body".
So, to fix the bug, instead of blocking the server connection, we force its
shutdown. This will force the stream to re-evaluate all connection states, so it
will detect that the client has closed its connection.
MINOR: ssl: rework smp_fetch_ssl_fc_cl_str without internal ssl use
smp_fetch_ssl_fc_cl_str has very limited usage (it only works with openssl == 1.0.2
compiled with the option enable-ssl-trace). It uses the internal cipher.algorithm_ssl
attribute and SSL_CIPHER_standard_name (available with ssl-trace).
This patch implements this (debug) function in a standard way. It uses the common
SSL_CIPHER_get_name to display the cipher name. It works with openssl >= 1.0.2
and boringssl.
It causes some trouble reported by Manu :
listen tls
[...]
server bla 127.0.0.1:8080
[ALERT] 248/130258 (21960) : parsing [/etc/haproxy/test.cfg:53] : 'server bla' : no method found to resolve address '(null)'
[ALERT] 248/130258 (21960) : Failed to initialize server(s) addr.
According to Nenad :
"It's not a good way to fix the issue we were experiencing
before. It will need a bigger rewrite, because the logic in
srv_iterate_initaddr needs to be changed."
BUG/MINOR: server: Remove FQDN requirement for using init-addr and state file
Historically the DNS was the only way of updating the server IP dynamically
and the init-addr processing and state file load required the server to have
an FQDN defined. Given that we can now update the IP through the socket as
well and also can have different init-addr values (like IP and 'none') - this
requirement needs to be removed.
This function should be called by the poller to set FD_POLL_* flags on an FD and
update its state if needed. This function has been added to ease threads support
integration.
BUG/MEDIUM: epoll: ensure we always consider HUP and ERR
Since commit 5be2f35 ("MAJOR: polling: centralize calls to I/O callbacks")
that came into 1.6-dev1, each poller deals with its own events and decides
to signal ability to receive or send on a file descriptor based on the
active events on the file descriptor.
The commit above was incorrectly done for the epoll code. Instead of
checking the active events on the fd, it checks for the new events. In
general these ones are the same for POLL_IN and POLL_OUT since they
are always cleared prior to being computed, but it is possible that
POLL_HUP and POLL_ERR were initially reported and are not reported
again (especially for HUP). This could happen for example if POLL_HUP
and POLL_IN were received together, the pending data exactly correspond
to a full buffer which is read at once, preventing the POLL_HUP from
being dealt with in the same call, and on the next call only POLL_OUT
is reported (eg: to emit some response or peers protocol ACKs). In this
case fd_may_recv() will not be enabled anymore and the close event will
be missed.
It seems quite hard to trigger this case, though it might explain some
of the rare missed close events that were detected in the past on the
peers.
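The principle of the fix can be sketched like this (simplified from the epoll
loop):

  unsigned int e = epoll_events[count].events;

  /* accumulate the newly reported events into the fd's active events */
  fdtab[fd].ev |= ((e & EPOLLIN)  ? FD_POLL_IN  : 0) |
                  ((e & EPOLLOUT) ? FD_POLL_OUT : 0) |
                  ((e & EPOLLERR) ? FD_POLL_ERR : 0) |
                  ((e & EPOLLHUP) ? FD_POLL_HUP : 0);

  /* decide on readiness from the accumulated events, not only the new ones */
  if (fdtab[fd].ev & (FD_POLL_IN | FD_POLL_HUP | FD_POLL_ERR))
          fd_may_recv(fd);
  if (fdtab[fd].ev & (FD_POLL_OUT | FD_POLL_ERR))
          fd_may_send(fd);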
Emeric Brun [Thu, 31 Aug 2017 12:41:55 +0000 (14:41 +0200)]
MEDIUM: check: server states and weight propagation re-work
The server state and weight were reworked to handle
"pending" values updated by checks/CLI/LUA/agent.
These values are committed to be propagated to the
LB stack.
In further development related to multi-threading, the commit
will be handled inside a sync point.
Pending values are named using the prefix 'next_'.
Current values used by the LB stack are named 'cur_'.
MINOR: http: Use a trash chunk to store decoded string of the HTTP auth header
This string is used in sample fetches so it is safe to use a preallocated trash
chunk instead of a buffer dynamically allocated during HAProxy startup.