Willy Tarreau [Tue, 28 May 2019 08:58:50 +0000 (10:58 +0200)]
MEDIUM: htx: make htx_add_data() never defragment the buffer
Now, instead of trying to fit 100% of the input data into the output
buffer at the risk of defragmenting it, we only put what fits into it
and return the number of bytes transferred. In a test, compared to the
previous commit, it increases the cached data rate from 44 Gbps to
55 Gbps and saves a lot in case of large buffers : with a 1 MB buffer,
uncached transfers jumped from 700 Mbps to 30 Gbps.
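To illustrate the new contract (generic C with made-up names, not the htx API itself), something along these lines:

    #include <string.h>
    #include <stddef.h>

    /* Illustrative sketch with hypothetical names: copy only what fits into
     * the remaining room and report how much was taken, instead of forcing
     * everything in at the cost of a defragmentation.
     */
    static size_t add_data_sketch(char *out, size_t out_cap, size_t *out_len,
                                  const char *in, size_t in_len)
    {
        size_t room = out_cap - *out_len;
        size_t take = in_len < room ? in_len : room;

        memcpy(out + *out_len, in, take);
        *out_len += take;
        return take;                  /* anything from 0 to in_len */
    }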
Willy Tarreau [Tue, 28 May 2019 08:30:11 +0000 (10:30 +0200)]
MINOR: htx: make htx_add_data() return the transmitted byte count
In order to later allow htx_add_data() to transmit partial blocks and
avoid defragmenting the buffer, we'll need to return the number of bytes
consumed. This first modification makes the function do this and its
callers take this into account. At the moment the function still works
atomically so it returns either the block size or zero. However all
call places have been adapted to consider any value between zero and
the block size.
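A hypothetical adapted call place, reusing add_data_sketch() from the sketch above, could look like this:

    /* Hypothetical adapted caller: advance the input by whatever was
     * consumed and stop cleanly when the output is full.
     */
    static void send_all_sketch(char *out, size_t out_cap, size_t *out_len,
                                const char *in, size_t in_len)
    {
        while (in_len) {
            size_t ret = add_data_sketch(out, out_cap, out_len, in, in_len);

            if (!ret)                 /* output full: retry once room is freed */
                break;
            in     += ret;
            in_len -= ret;
        }
    }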
Olivier Houchard [Thu, 23 May 2019 16:41:47 +0000 (18:41 +0200)]
MINOR: ssl: Don't forget to call the close method of the underlying xprt.
In ssl_sock_close(), don't forget to call the underlying xprt's close method
if it exists. For now it's harmless not to do so, because the only available
layer is the raw socket, which doesn't have a close method, but that will
change when we implement QUIC.
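A minimal sketch of that chaining, with made-up types standing in for the real xprt structures:

    #include <stddef.h>

    /* Made-up types for illustration: the ops table of the layer below has
     * an optional close() hook which the SSL layer must forward to.
     */
    struct xprt_ops_sketch {
        void (*close)(void *xprt_ctx);      /* may be NULL (raw socket) */
    };

    struct ssl_ctx_sketch {
        const struct xprt_ops_sketch *xprt; /* underlying transport */
        void *xprt_ctx;
    };

    static void ssl_close_sketch(struct ssl_ctx_sketch *ctx)
    {
        /* ... release the SSL session and context ... */
        if (ctx->xprt && ctx->xprt->close)  /* forward to the layer below */
            ctx->xprt->close(ctx->xprt_ctx);
    }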
Willy Tarreau [Tue, 28 May 2019 06:26:17 +0000 (08:26 +0200)]
BUG/MEDIUM: http: fix "http-request reject" when not final
When "http-request reject" was introduced in 1.8 with commit 53275e8b0
("MINOR: http: implement the "http-request reject" rule"), it was already
broken. The code mentions "it always returns ACT_RET_STOP" and obviously
a gross copy-paste made it return ACT_RET_CONT. If the rule is the last
one, it properly blocks, but if it is not the last one, it gets ignored,
as can be seen with a simple configuration such as the following :
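The original example is not reproduced here; a hypothetical, abbreviated configuration of this shape exhibits the problem (the reject is followed by another http-request rule and gets skipped):

    frontend www
        bind :8080
        http-request reject if { path_beg /forbidden }
        http-request set-header X-Debug on
        default_backend app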
MINOR: htx: don't rely on htx_find_blk() anymore in the function htx_truncate()
The function htx_find_blk() is only used by one function, htx_truncate(). And
because it does nothing very smart, we don't use it anymore. It will be
removed by another commit.
MEDIUM: filters/htx: Filter body relatively to the first block
Filters filtering the HTX body, in the http_payload callback, must now loop on
an HTX message starting from the first block position. The offset passed as
parameter is relative to this position, not to the head one. This is mandatory
because, once filtered, data are now forwarded using the function
channel_htx_fwd_payload(), so the first block position is always updated.
MINOR: channel/htx: Add functions to forward a part or all HTX payload
The functions channel_htx_fwd_payload() and channel_htx_fwd_all() should now be
used to forward, respectively, a part of the HTX payload or all of it. These
functions forward data and update the first block position.
MINOR: stats/htx: don't use the first block position but the head one
Applets must never rely on the first block position to consume an HTX
message. The head position must be used instead. For the request it is always
the start-line. At this stage, it is not a bug, because the first position of
the request is never changed by HTX analysers.
MEDIUM: htx: Store the first block position instead of the start-line one
We don't store the start-line position anymore in the HTX message. Instead we
store the first block position to analyze. For now, it is almost the same. But
once all the changes on this part are done, this position will have to be used
by HTX analyzers, and only in the analysis context, to know where the analysis
should start.
When new blocks are added to an HTX message, if the first block position is not
defined, it is set. When the block pointed to by it is removed, it is set to the
following block. -1 remains the value to unset the position. The first block
position is unset when the HTX message is empty. It may also be unset on a
non-empty message, meaning all blocks have already been analyzed.
From the HTX analyzers' point of view, this position is always set during headers
analysis. When they are waiting for a request or a response, if it is unset, it
means the analysis should wait. But once the analysis is started, and as long as
headers are not forwarded, it points to the message start-line.
As mentioned, outside the HTX analysis, no code must rely on the first block
position. So multiplexers and applets must always use the head position to start
a loop on an HTX message.
MINOR: channel/htx: Add function to forward headers of an HTX message
The function channel_htx_fwd_headers() should now be used by HTX analyzers to
forward all headers of an HTX message, from the start-line to the corresponding
EOH. It takes care to update the start-line position.
MEDIUM: htx: 1xx messages are now part of the final responses
1xx informational messages (all except 101) are now part of the HTTP response,
semantically speaking. These messages are not followed by an EOM anymore,
because a final response is always expected. All these parts can also be
transferred to the channel at the same time, if possible. The HTX response
analyzer has been updated to forward them in a loop, like the legacy one.
MINOR: htx: Be sure to xfer all headers at once in htx_xfer_blks()
In the function htx_xfer_blks(), we take care to transfer all the headers at
once. When the current block is a start-line, we check if there is enough space
to transfer all headers too. If not, and if the destination is empty, a parsing
error is reported on the source.
The H2 multiplexer is the only one to use this function. When a parsing error is
reported during the transfer, the flag CS_FL_EOI is also set on the conn_stream.
MINOR: htx: Add a field to set the memory used by headers in the HTX start-line
The field hdrs_bytes has been added to the structure htx_sl. It should be used
to set how many bytes are held by all headers, from the start-line to the
corresponding EOH block. It must be set to -1 if it is unknown.
MINOR: stream-int: Don't use the flag CO_RFL_KEEP_RSV anymore in si_cs_recv()
Because channel_recv_max() always returns the right value, for both HTX and
legacy streams, we don't need to set this flag. The multiplexers don't use it
anymore.
MINOR: mux-h2: Use the count value received from the SI in h2_rcv_buf()
Now, the SI calls h2_rcv_buf() with the right count value, so we can rely on
it. Unlike in the H1 multiplexer, this is fairly easy for the H2 multiplexer
because the HTX message already exists: we only transfer blocks from the H2S to
the channel. And this part is handled by htx_xfer_blks().
MEDIUM: mux-h1: Use the count value received from the SI in h1_rcv_buf()
Now, the SI calls h1_rcv_buf() with the right count value, so we can rely on
it. During parsing, we now really respect this value, to be sure never to
exceed it. To do so, once the headers are parsed, we estimate the size of the
HTX message before copying data.
BUG/MINOR: htx: Change htx_xfer_blk() to also count metadata
This patch makes the function more accurate. Thanks to the function
htx_get_max_blksz(), the transfer of data has been simplified. Note that now the
total number of bytes copied (metadata + payload) is returned. This slightly
changes how the function is used in the H2 multiplexer.
This function should be used to get the maximum size for a block, not exceeding
the max amount of bytes passed in argument. Thus max may be set to -1 to have no
limit.
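A sketch of that contract, with an assumed signature (the real helper works on the htx structure itself):

    #include <stdint.h>

    /* Assumed-for-illustration helper: clamp the available room by the
     * caller's byte budget, where a budget of -1 means "no limit".
     */
    static inline uint32_t max_blksz_sketch(uint32_t room, int32_t max)
    {
        if (max == -1 || (uint32_t)max > room)
            return room;
        return (uint32_t)max;
    }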
MINOR: channel/htx: Call channel_htx_recv_max() from channel_recv_max()
When channel_recv_max() is called for an HTX stream, we fall back on the HTX
version. This function is called from si_cs_recv(). This will let us pass the
max amount of bytes to read to HTX multiplexers.
MEDIUM: http/htx: Perform analysis relatively to the first block
The first block is the start-line, if defined. Otherwise it is the head of the
HTX message. So now, during HTTP analysis, lookups are all done using the first
block instead of the head. Concretely, for now, it is the same because only one
HTTP message is stored at a time in an HTX message. 1xx informational messages
are handled separately from the final response and from each other. But it will
make sense once the 1xx informational messages and the associated final
response are stored in the same HTX message.
MINOR: http/htx: Use sl_pos directly to replace the start-line
Since the HTX start-line is now referenced by position instead of by its payload
address, it is much easier to replace it. There is no need to search for the
right block by comparing payload addresses to find the start-line. It is enough
to get the block at the position sl_pos.
MINOR: htx: Replace the function http_find_stline() by http_get_stline()
Now, we only return the start-line. If not found, NULL is returned. No lookup is
performed and the HTX message is no longer updated. It is now the caller's
responsibility to update the position of the start-line to the right value. So
when it is not found, i.e. sl_pos is set to -1, it means the last start-line has
already been processed and the next one has not been inserted yet.
It is mandatory to rely on this kind of guarantee to store 1xx informational
responses and the final response in the same HTX message.
MINOR: mux-h2/htx: Get the start-line from the head when HEADERS frame is built
In the H2 multiplexer, when a HEADERS frame is built before being sent, we have
the guarantee that the start-line is the head of the HTX message. It is safer to
rely on this fact than on the sl_pos value. For now, it is safe to use sl_pos in
muxes because HTTP 1xx messages are considered full messages in HTX and only one
HTTP message can be stored at a time in HTX. But we are trying to handle 1xx
messages as part of the response message. This way, an HTTP response will be
the sum of all 1xx informational messages followed by the final response. So it
will be possible to have several start-lines in the same HTX message, and
sl_pos will point to the first unprocessed start-line from the analyzers'
point of view.
MINOR: htx: Add functions to get the first block of an HTX message
It is the first block relative to the start-line. So it is the start-line if
its position is set (sl_pos != -1), otherwise it is the head. The functions
htx_get_first() and htx_get_first_blk() can be used to get it. This change is
mandatory to consider 1xx informational messages as part of a response.
MINOR: htx: Store the head position instead of the wrap one
The head of an HTX message is heavily used whereas the wrap position is only
used when a block is added or removed. So it is more logical to store the head
position in the HTX message instead of the wrap one. The wrap position can be
easily deduced. To get it, the new function htx_get_wrap() may be used.
Willy Tarreau [Mon, 27 May 2019 17:31:06 +0000 (19:31 +0200)]
MEDIUM: config: now alert when two servers have the same name
We've been emitting warnings for over 5 years (since 1.5-dev22) about
configs accidentally carrying multiple servers with the same name in the
same backend, and this starts to cause some real trouble in dynamic
environments since it's still very difficult to accurately process
a state-file and we still can't transport a server's name over the
peers protocol because of this.
It's about time to force users to fix their configs if they still haven't,
given that there is zero technical justification for doing this beyond the
"yyp" (or copy-paste accident) when editing the config.
The message remains as clear as before, indicating the file and lines
of the conflict so that the user can easily fix it.
Willy Tarreau [Mon, 27 May 2019 15:37:20 +0000 (17:37 +0200)]
BUG/MEDIUM: threads: fix double-word CAS on non-optimized 32-bit platforms
On armv7 haproxy doesn't work because of the fixes on the double-word
CAS. There are two issues. The first one is that the last argument in
case of dwcas is a pointer to the set of values and not a value ; the
second is that it's not enough to cast the data as (void*) since it will
be a single word. Let's fix this by using the pointers as arrays of
longs. This was tested on i386, armv7, x86_64 and aarch64 and it is now
fine. An alternate approach using a struct was attempted as well but it
produced less optimal code.
This fix must be backported to 1.9. This fixes github issue #105.
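For illustration only, using GCC/Clang builtins rather than haproxy's actual macro, the convention on a 32-bit target looks like this:

    #include <stdint.h>
    #include <string.h>

    /* Illustration only: on 32-bit, the "double word" is 64 bits, and both
     * the expected and the new values are passed as pointers to arrays of
     * two longs rather than as scalars cast to (void*).
     */
    static inline int dwcas_sketch(uint64_t *target, unsigned long comp[2],
                                   const unsigned long set[2])
    {
        uint64_t exp, des;

        memcpy(&exp, comp, sizeof(exp));
        memcpy(&des, set, sizeof(des));
        if (__atomic_compare_exchange_n(target, &exp, des, 0,
                                        __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
            return 1;
        memcpy(comp, &exp, sizeof(exp));    /* report the observed value */
        return 0;
    }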
Willy Tarreau [Mon, 27 May 2019 06:10:11 +0000 (08:10 +0200)]
BUG/MEDIUM: queue: fix the tree walk in pendconn_redistribute.
In pendconn_redistribute() we scan the queue using eb32_next() on the
node we've just deleted, which is wrong since the node is not in the
tree anymore, and it could dereference one node that has already been
released by another thread. Note that we cannot use eb32_first() in the
loop here instead because we need to skip pendconns having SF_FORCE_PRST.
Instead, let's keep a copy of the next node before deleting it.
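The safe walk boils down to this sketch (tree root name and surrounding pendconn logic assumed/omitted):

    #include <eb32tree.h>   /* ebtree library bundled with haproxy */

    /* Sketch of the safe walk over a server's queue: save the successor
     * before detaching the current node, since eb32_next() must not be
     * called on a node that is no longer in the tree.
     */
    void walk_sketch(struct eb_root *queue_root)
    {
        struct eb32_node *node, *next;

        node = eb32_first(queue_root);
        while (node) {
            next = eb32_next(node);     /* grab it while still in the tree */
            /* ... skip SF_FORCE_PRST entries, otherwise eb32_delete(node)
             * and redispatch the pendconn ...
             */
            node = next;
        }
    }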
In addition, the pendconn retrieved there is wrong, it uses &node as
the pointer instead of node, resulting in very quick crashes when the
server list is scanned.
Fortunately this only happens when "option redispatch" is used in
conjunction with "maxconn" on server lines, "cookie" for the stickiness,
and when a server goes down with entries in its queue.
This bug was introduced by commit 0355dabd7 ("MINOR: queue: replace
the linked list with a tree") so the fix must be backported to 1.9.
Willy Tarreau [Mon, 27 May 2019 08:17:05 +0000 (10:17 +0200)]
BUG/MAJOR: lb/threads: make sure the avoided server is not full on second pass
In fwrr_get_next_server(), we optionally pass a server to avoid. It
usually points to the current server during a redispatch operation. If
this server is usable, an "avoided" pointer is set and we continue to
look for another server. If in the end no other server is found, then
we fall back to this avoided one, which is still better than nothing.
The problem that may arise with threads is that in the meantime, this
avoided server might have received extra connections and might not be
usable anymore. This causes it to be queued a second time in the "full"
list and the loop to search for a server again, ending up on this one
again and so on.
This patch makes sure that we break out of the loop when we have to
pick the avoided server. It's probably what the code intended to do
as the current break statement causes fwrr_update_position() and
fwrr_dequeue_srv() to be called again on the avoided server.
It must be backported to 1.9 and 1.8, and seems appropriate for older
versions though it's unclear what the impact of this bug might be
there since the race doesn't exist and we're left with the double
update of the server's position.
Willy Tarreau [Mon, 27 May 2019 05:03:38 +0000 (07:03 +0200)]
MINOR: cli/activity: add 3 general purpose counters in development mode
The unused fd_del and fd_skip were being abused during debugging sessions
as general purpose event counters. With their removal, let's officially
have dedicated counters for such use cases. These counters are called
"ctr0".."ctr2" and are listed at the end when DEBUG_DEV is set.
Willy Tarreau [Sun, 26 May 2019 09:50:08 +0000 (11:50 +0200)]
BUILD: connections: shut up gcc about impossible out-of-bounds warning
Since commit 88698d9 ("MEDIUM: connections: Add a way to control the
number of idling connections.") when building without threads, gcc
complains that the operations made on the idle_orphan_conns[] list are
out of bounds, which is always false since 1) <i> can only equal zero,
and 2) given it's equal to <tid> we never even enter the loop. But as
usual it thinks it knows better, so let's mask the origin of this <i>
value to shut it up. Another solution consists in making <i> unsigned
and adding an explicit range check.
Willy Tarreau [Sun, 26 May 2019 08:08:28 +0000 (10:08 +0200)]
MAJOR: mux-h2: switch to next mux buffer on buffer full condition.
Now when we fail to send because the mux buffer is full, before giving
up and marking MFULL, we try to allocate another buffer in the mux's
ring to try again. Thanks to this (and provided there are enough buffers
allocated to the mux's ring), a single stream picked in the send_list
cannot steal all the mux's room at once. For this, we expand the ring
size to 31 buffers as it seems to be optimal on benchmarks since it
divides the number of context switches by 3. It will inflate each H2
conn's memory by 1 kB.
The bandwidth is now much more stable. Prior to this, a test on
h2->h1 with very large objects (1 GB), a few tens of connections and
a few tens of streams per connection would show a varying performance
between 34 and 95 Gbps on 2 cores/4 threads, with h2_snd_buf() stopped
on a buffer full condition between 300000 and 600000 times per second.
Now the performance is constantly between 88 and 96 Gbps. Measures show
that buffer full conditions are met around only 159 times per second
in this case, or roughly 2000 to 4000 times less often.
Willy Tarreau [Sun, 26 May 2019 08:05:50 +0000 (10:05 +0200)]
CLEANUP: mux-h2: consistently use a local variable for the mbuf
This makes the code more readable and reduces the calls to br_tail().
In addition, all calls to h2_get_buf() are now made via this local
variable, which should significantly help for retries.
Willy Tarreau [Sun, 26 May 2019 07:49:17 +0000 (09:49 +0200)]
MEDIUM: mux-h2: make the send() function iterate over all mux buffers
Now send() uses a loop to iterate over all buffers to be sent. These
buffers are released and deleted from the vector once completely sent.
If any buffer gets released, offer_buffers() is called to wake up some
waiters.
Willy Tarreau [Sun, 26 May 2019 07:38:07 +0000 (09:38 +0200)]
MEDIUM: mux-h2: replace all occurrences of mbuf with a buffer ring
For now it's only one buffer long so the head and tail are always the
same, thus it doesn't change what used to work. In short, br_tail(h2c->mbuf)
was inserted everywhere we used to have h2c->mbuf.
Willy Tarreau [Fri, 24 May 2019 12:55:06 +0000 (14:55 +0200)]
MINOR: buffer: add a new buffer ring API to manipulate rings of buffers
The purpose is to manipulate rings made of series of buffers so that
it is possible to continue to work on a next buffer once one is full.
This will be used by muxes to deal with contention between multiple
streams and a single output buffer. No data is expected to span over
multiple buffers, all of them will be used like a regular buffer. This
will significantly limit the amount of changes and the code complexity
while still supporting larger output buffering.
The ring is made of head and tail indexes, both of which point to a
buffer descriptor. At least one descriptor is always valid, so it could
be seen as a form of pagination always presenting one buffer. The root
of the ring is itself stored into a buffer descriptor so that the user
only has to declare a buffer array and to call br_init() on it in order
to use it.
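A rough sketch of that layout, where every field use shown here is an assumption made for illustration:

    #include <stddef.h>

    /* Rough sketch, names and field reuse assumed: r[0] is the root
     * descriptor whose fields are reused to store the ring size and the
     * head/tail indexes; r[1..nbufs-1] are the actual buffers. At least
     * one descriptor is always valid.
     */
    struct buf_sketch {
        size_t size;
        void  *area;
        size_t data;    /* in the root: tail index */
        size_t head;    /* in the root: head index */
    };

    static void br_init_sketch(struct buf_sketch *r, size_t nbufs)
    {
        r->size = nbufs;
        r->area = NULL;
        r->head = r->data = 1;
    }

    static struct buf_sketch *br_tail_sketch(struct buf_sketch *r)
    {
        return r + r->data;
    }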
Willy Tarreau [Sun, 26 May 2019 07:25:59 +0000 (09:25 +0200)]
CLEANUP: debug: remove the TRACE() macro
It has not been used for many years, is unlikely to be reused and
conflicts with the similarly named macro in flt_trace, causing warnings
at build time when including debug.h in low-level files. Let's simply
remove it.
Willy Tarreau [Fri, 24 May 2019 17:42:18 +0000 (19:42 +0200)]
MEDIUM: mux-h2: avoid doing expensive buffer realigns when not absolutely needed
Transferring large objects over H2 sometimes shows unexplained performance
variations. A long analysis resulted in the following discovery. Often the
mux buffer looks like this :
[ empty_head | data | empty_tail ]
Typical numbers are (very common) :
- empty_head = 31
- empty_tail = 16 (total free=47)
- data = 16337
- size = 16384
- data to copy: 43
The reason for these holes is the blocking factors that are not always
the same in and out (due to keeping 9 bytes for the frame size, or the
56 bytes corresponding to the HTX header). This can easily happen 10000
times a second if the network bandwidth permits it!
In this case, while copying a DATA frame we find that the buffer has its
free space wrapped so we decide to realign it to optimize the copy. It's
possible that this practice stems from the code used to emit headers,
which does not support fragmentation and had no other option left.
But it comes with two problems :
- we don't check if the data fits, which results in a memcpy for nothing
- we can move huge amounts of data to just copy a small block.
This patch addresses this in two ways (see the sketch below) :
- first, by not forcing a data realignment if what we have to copy does
not fit, as this is totally pointless ;
- second, by refusing to move too large data blocks. The threshold was
set to 1 kB, because it may make sense to move 1 kB of data to copy
a 15 kB one at once, which will then leave a single 16 kB block, but
it doesn't make sense to move 15 kB to copy just 1 kB. In all cases
the data would fit and would just be split into two blocks, which is
not very expensive, hence the low limit of 1 kB.
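Sketch of the two checks (generic code; the real patch operates on the mux buffer):

    #include <stddef.h>

    /* Generic sketch of the heuristic: realign only when the copy would fit
     * at all, and only when the amount of data to shift stays small.
     */
    static int want_realign_sketch(size_t data_len, size_t free_total,
                                   size_t to_copy)
    {
        if (to_copy > free_total)   /* won't fit anyway: realign is pointless */
            return 0;
        if (data_len > 1024)        /* too much data to move for the gain */
            return 0;
        return 1;
    }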
With such changes, realignments are very rare, they show up around once
every 15 seconds at 60 Gbps, and look like this, resulting in a much more
stable bit rate :
Willy Tarreau [Sat, 25 May 2019 17:54:40 +0000 (19:54 +0200)]
OPTIM: freq-ctr: don't take the date lock for most updates
It's amazing that the value was still incremented under the date lock,
let's first use an atomic increment for the counter and move it out of
the date lock to reduce contention. These are just counters, we don't
need to take locks if we're not rotating, atomic ops are enough. This
patch does this, and leaves the lock for when the period is over. It's
important to note that some values might be added just before or just
after a rotation but this is not a problem since we don't care if a
value is counted in the previous or next period when it's exactly on
the edge. Great care was taken to ensure that the current counter is
always atomically updated.
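The locking pattern boils down to this sketch, with hypothetical names and C11 atomics standing in for haproxy's own atomic macros:

    #include <stdatomic.h>
    #include <pthread.h>
    #include <time.h>

    /* Hypothetical sketch: the hot path only does a relaxed atomic add; the
     * lock is taken solely when the one-second period must be rotated.
     */
    struct freq_ctr_sketch {
        unsigned int    curr_sec;   /* start of the current period */
        atomic_uint     curr_ctr;   /* events in the current period */
        unsigned int    prev_ctr;   /* events in the previous period */
        pthread_mutex_t rot_lock;   /* assumed initialized; rotation only */
    };

    static void freq_ctr_add_sketch(struct freq_ctr_sketch *ctr, unsigned int inc)
    {
        unsigned int now = (unsigned int)time(NULL);

        if (now == ctr->curr_sec) {
            atomic_fetch_add_explicit(&ctr->curr_ctr, inc, memory_order_relaxed);
            return;
        }

        pthread_mutex_lock(&ctr->rot_lock);
        if (now != ctr->curr_sec) {         /* recheck under the lock */
            ctr->prev_ctr = atomic_exchange(&ctr->curr_ctr, 0);
            ctr->curr_sec = now;
        }
        atomic_fetch_add_explicit(&ctr->curr_ctr, inc, memory_order_relaxed);
        pthread_mutex_unlock(&ctr->rot_lock);
    }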
Other minor cleanups were performed, such as avoiding reloading the
value from memory after a CAS, or using &~1 instead of two shifts to
remove the lowest bit.
Michael Prokop [Fri, 24 May 2019 08:25:45 +0000 (10:25 +0200)]
DOC: fix typos
s/accidently/accidentally/
s/any ot these messages/any of these messages/
s/catched/caught/
s/completly/completely/
s/convertor/converter/
s/desribing/describing/
s/developper/developer/
s/eventhough/even though/
s/exectution/execution/
s/functionnality/functionality/
s/If it receive a/If it receives a/
s/In can even/It can even/
s/informations/information/
s/it will be remove /it will be removed /
s/langage/language/
s/mentionned/mentioned/
s/negociated/negotiated/
s/Optionnaly/Optionally/
s/ouputs/outputs/
s/outweights/outweighs/
s/ressources/resources/
BUG/MINOR: htx: Remove a forgotten while loop in htx_defrag()
Fortunately, this loop does nothing. Otherwise it would have led to an infinite
loop. It was probably forgotten during a refactoring, in the early stage of the
HTX.
BUG/MEDIUM: proto-htx: Don't forward too much data when 1xx responses are handled
When a 1xx response is processed, we forward it immediately. But another message
may already be in the channel's buffer, waiting to be processed. This may be
another 1xx response or the final one. So instead of forwarding everything, we
must take care to only forward the processed 1xx response.
BUG/MINOR: mux-h1: Report EOI instead of EOS on parsing error or H2 upgrade
When a parsing error occurs in the H1 multiplexer, we stop copying HTX
blocks, so the error may be reported with an empty HTX message, for instance if
the headers parsing failed. When it happens, the flag CS_FL_EOS is also set on
the conn_stream. But it is an error. Most of the time, it is set on established
connections, so it is not really an issue. But if it happens when the server
connection is not fully established, the connection is shut down immediately and
the stream-interface is switched from SI_ST_CON to SI_ST_DIS/CLO. So HTX
analyzers have no chance to catch the error.
Instead of setting CS_FL_EOS, it is far better to set CS_FL_EOI, which is the
right flag to use. The same is also done on H2 upgrade. As a side effect of this
fix, in the stream-interface code, we must now set the flag CF_READ_PARTIAL on
the channel when the flag CF_EOI is set. This guarantees that the stream is
woken up when EOI is reported to the channel while no data are received.
BUG/MINOR: mux-h2: Count EOM in bytes sent when a HEADERS frame is formatted
In HTX, when a HEADERS frame is formatted before being sent to the client or the
server, if an EOM is found because there is no body, we must count it in the
number of bytes sent.
BUG/MINOR: lua: Set right direction and flags on new HTTP objects
When a LUA HTTP object is created using the current TXN object, it is important
to also set the right direction and flags, using the ones from the TXN object.
This patch may be backported to all supported branches with the lua
support. But, it seems to have no impact for now.
BUG/MINOR: proto-htx: Try to keep connections alive on redirect
As far as possible, we try to keep the connections alive on redirect. It's
possible when the request has no body or when the request parsing is finished.
Willy Tarreau [Thu, 23 May 2019 10:23:55 +0000 (12:23 +0200)]
MINOR: stats: report the global output bit rate in human readable form
The stats page now reports the per-process output bit rate and applies
the usual conversions needed to turn the TCP payload rate to an Ethernet
bit rate in order to give a reasonably accurate estimate of how far from
interface saturation we are.
Willy Tarreau [Thu, 23 May 2019 09:39:14 +0000 (11:39 +0200)]
MINOR: raw_sock: report global traffic statistics
Many times we've been missing per-process traffic statistics. While it
didn't make sense in multi-process mode, with threads it does. Thus we
now have a counter of bytes emitted by raw_sock, and a freq counter for
these as well. However, freq_ctr are limited to 32 bits, and given that
loads of 300 Gbps have already been reached over a loopback using
splicing, we need to downscale this a bit. Here we're storing 1/32 of
the byte rate, which gives a theoretical limit of 128 GB/s or ~1 Tbps,
which is more than enough. Let's have fun re-reading this sentence in
2029 :-) The values can be read in "show info" output on the CLI.
Willy Tarreau [Thu, 23 May 2019 08:20:55 +0000 (10:20 +0200)]
BUILD: watchdog: condition it to USE_RT
It's needed on Linux to have access to timerfd_*, and on FreeBSD this
lib is needed as well, though not enabled in our default build. We can
see later if it's OK to enable it, for now let's fix the build issues.
Willy Tarreau [Thu, 23 May 2019 06:40:50 +0000 (08:40 +0200)]
BUILD: signals: FreeBSD has SI_LWP instead of SI_TKILL
SI_TKILL is for Linux. We're again in the non-portable area. Both OSes
use macros to define these values so we can #ifdef them. Let's make
SI_TKILL defined based on SI_LWP when only the latter is defined.
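The shim likely boils down to something of this shape:

    /* likely shape of the shim described above */
    #if !defined(SI_TKILL) && defined(SI_LWP)
    #define SI_TKILL SI_LWP
    #endif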
Willy Tarreau [Thu, 23 May 2019 06:36:29 +0000 (08:36 +0200)]
BUILD: watchdog: use si_value.sival_int, not si_int for the timer's value
Bah, the linux manpage suggests using si_int but it's a fake, it's only
a define on sigval.sival_int where sigval is defined as si_value. Let's
use si_value.sival_int, at least it builds on both Linux and FreeBSD. It's
likely that this code will have to be limited to a small subset of OSes
if it causes difficulties like this.
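An illustrative handler reading the value the portable way:

    #include <signal.h>

    /* Illustrative handler: si_value.sival_int is the portable spelling;
     * si_int is only a Linux convenience alias for the same field.
     */
    static void wdt_handler_sketch(int sig, siginfo_t *si, void *arg)
    {
        int thr = si->si_value.sival_int;   /* value armed with the timer */

        (void)sig; (void)arg; (void)thr;
        /* ... check whether thread <thr> is stuck ... */
    }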
Willy Tarreau [Wed, 22 May 2019 18:48:33 +0000 (20:48 +0200)]
[RELEASE] Released version 2.0-dev4
Released version 2.0-dev4 with the following main changes :
- BUILD: enable freebsd builds on cirrus-ci
- BUG/MINOR: http_fetch: Rely on the smp direction for "cookie()" and "hdr()"
- MEDIUM: Make 'option forceclose' actually warn
- MEDIUM: Make 'resolution_pool_size' directive fatal
- DOC: management: place "show activity" at the right place
- MINOR: cli/activity: show the dumping thread ID starting at 1
- MINOR: task: export global_task_mask
- MINOR: cli/debug: add a thread dump function
- BUG/MEDIUM: streams: Don't use CF_EOI to decide if the request is complete.
- BUG/MEDIUM: streams: Try to L7 retry before aborting the connection.
- BUG/MINOR: debug: make ha_task_dump() always check the task before dumping it
- BUG/MINOR: debug: make ha_task_dump() actually dump the requested task
- MINOR: debug: make ha_thread_dump() and ha_task_dump() take a buffer
- BUG/MINOR: debug: don't check the call date on tasklets
- MINOR: thread: implement ha_thread_relax()
- MINOR: task: put barriers after each write to curr_task
- MINOR: task: always reset curr_task when freeing a task or tasklet
- MINOR: stream: detach the stream from its own task on stream_free()
- MEDIUM: debug/threads: implement an advanced thread dump system
- REGTEST: extend the check duration on tls_health_checks and mark it slow
- DOC: fix "successful" typo
- MINOR: init: setenv HAPROXY_CFGFILES
- MINOR: threads/init: synchronize the threads startup
- MEDIUM: init/mworker: make the pipe register function a regular initcall
- CLEANUP: memory: make the fault injection code use the OTHER_LOCK label
- CLEANUP: threads: remove the now unused START_LOCK label
- MINOR: init/threads: make the global threads an array of structs
- MINOR: threads: add each thread's clockid into the global thread_info
- CLEANUP: stream: remove an obsolete debugging test
- MINOR: tools: add dump_hex()
- MINOR: debug: implement ha_panic()
- MINOR: debug/cli: add some debugging commands for developers
- MINOR: tools: provide a may_access() function and make dump_hex() use it
- MINOR: debug: make ha_panic() report threads starting at 1
- REORG: compat: move some integer limit definitions from standard.h to compat.h
- REORG: threads: move the struct thread_info from global.h to hathreads.h
- MINOR: compat: make sure to always define clockid_t
- MINOR: threads: always place the clockid in the struct thread_info
- MINOR: threads: add a thread-local thread_info pointer "ti"
- MINOR: time: move the cpu, mono, and idle time to thread_info
- MINOR: time: add a function to retrieve another thread's cputime
- MINOR: debug: report each thread's cpu usage in "show thread"
- BUILD: threads: only assign the clock_id when supported
- BUILD: makefile: use USE_OBSOLETE_LINKER for solaris
- BUILD: makefile: remove -fomit-frame-pointer optimisation (solaris)
- MAJOR: polling: add event ports support (Solaris)
- BUG/MEDIUM: streams: Don't switch from SI_ST_CON to SI_ST_DIS on read0.
- CLEANUP: time: refine the test on _POSIX_TIMERS
- MINOR: compat: define a new empty type empty_t for non-implemented fields
- CLEANUP: time: switch clockid_t to empty_t when not available
- BUG/MINOR: mworker: Fix memory leak of mworker_proc members
- CLEANUP: objtype: make obj_type() and obj_type_name() take consts
- MINOR: debug: switch to SIGURG for thread dumps
- CLEANUP: threads: really move thread_info to hathreads.c
- MINOR: threads: make threads_{harmless|want_rdv}_mask constant 0 without threads
- CLEANUP: debug: always report harmless/want_rdv even without threads
- MINOR: threads: implement ha_tkill() and ha_tkillall()
- CLEANUP: debug: make use of ha_tkill() and remove ifdefs
- MINOR: stream: introduce a stream_dump() function and use it in stream_dump_and_crash()
- MINOR: debug: dump streams when an applet, iocb or stream is known
- MINOR: threads: add a "stuck" flag to the thread_info struct
- MINOR: threads: add a timer_t per thread in thread_info
- MAJOR: watchdog: implement a thread lockup detection mechanism
- MINOR: stream: remove the cpu time detection from process_stream()
- MINOR: connection: report the mux names in "haproxy -vv"
- CLEANUP: mux-h1: use "H1" and not "h1" as the mux's name
- BUG/MEDIUM: WURFL: segfault in wurfl-get() with missing info.
- MINOR: WURFL: call header_retireve_callback() in dummy library
- MINOR: WURFL: fixed Engine load failed error when wurfl-information-list contains wurfl_root_id
- MINOR: WURFL: shows log messages during module initialization
- MINOR: WURFL: removes heading wurfl-information-separator from wurfl-get-all() and wurfl-get() results
- MINOR: WURFL: wurfl_get() and wurfl_get_all() now return an empty string if device detection fails
- MEDIUM: WURFL: HTX awareness.
- MINOR: WURFL: module version bump to 2.0
- MINOR: WURFL: do not emit warnings when not configured
- CONTRIB: wurfl: address 3 build issues in the wurfl dummy library
- BUG/MEDIUM: init/threads: provide per-thread alloc/free function callbacks
- BUILD: travis: add sanitizers to travis-ci builds
- BUILD: time: remove the test on _POSIX_C_SOURCE
- CLEANUP: build: rename some build macros to use the USE_* ones
- CLEANUP: raw_sock: remove support for very old linux splice bug workaround
- BUG/MEDIUM: dns: make the port numbers unsigned
- MEDIUM: config: deprecate the antique req* and rsp* commands
Willy Tarreau [Wed, 22 May 2019 18:34:35 +0000 (20:34 +0200)]
MEDIUM: config: deprecate the antique req* and rsp* commands
These commands don't follow the same flow as the rest of the commands,
each of them iterates over all header lines before switching to the
next directive. In addition they make no distinction between start
line and headers and can lead to unparsable rewrites which are very
difficult to deal with internally.
Most of them are still occasionally found in configurations, mainly
because of the usual "we've always done it this way". By marking them
deprecated and emitting a warning and recommendation on first use of
each of them, we will raise users' awareness regarding the
cleaner, faster and more reliable alternatives.
Some use cases of "reqrep" still appear from time to time for URL
rewriting that is not so convenient with other rules. But at least
users facing this requirement will explain their use case so that we
can best serve them. Some discussion started on this subject in a
thread linked to from github issue #100.
The goal is to remove them in 2.1 since they require reparsing the
result before indexing it and we don't want this hack to live long.
The following directives were marked deprecated :
Willy Tarreau [Wed, 22 May 2019 18:07:45 +0000 (20:07 +0200)]
BUG/MEDIUM: dns: make the port numbers unsigned
Mustafa Yildirim reported in Discourse that ports >32767 advertised
in SRV records are wrong. Given the high value they definitely
correspond to a sign extension of a negative number. The cause was
indeed that the port is declared as a signed int in the dns_answer_item
structure, and Lukas confirmed in github issue #103 that turning it to
unsigned addresses the issue.
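For illustration only (not the actual dns_answer_item code), the class of bug looks like this:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustration of the symptom: a port above 32767 stored through a
     * signed type gets sign-extended when widened, while an unsigned type
     * keeps the wire value.
     */
    int main(void)
    {
        uint16_t wire_port = 35000;             /* SRV port as advertised */
        int16_t  as_signed = (int16_t)wire_port;
        int      widened   = as_signed;         /* -30536 after extension */
        unsigned kept      = wire_port;         /* 35000, as expected */

        printf("%d vs %u\n", widened, kept);
        return 0;
    }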
It is worth noting that there are other such fields in this structure
that don't look right (ttl, priority, class, type) and that someone
should audit this part to be certain they are properly typed.
This fix must be backported to 1.9 and likely to 1.8 as well.
Willy Tarreau [Wed, 22 May 2019 17:55:24 +0000 (19:55 +0200)]
CLEANUP: raw_sock: remove support for very old linux splice bug workaround
We've been dealing with a workaround for a bug in splice that used to
affect version 2.6.25 to 2.6.27.12 and which was fixed 10 years ago
in kernel versions which are not supported anymore. Given that people
who would use a kernel in such a range would face much more serious
stability and security issues, it's about time to get rid of this
workaround and of the ASSUME_SPLICE_WORKS build option used to disable
it.
Willy Tarreau [Wed, 22 May 2019 17:24:06 +0000 (19:24 +0200)]
CLEANUP: build: rename some build macros to use the USE_* ones
We still have quite a number of build macros which are mapped 1:1 to a
USE_something setting in the makefile but which have a different name.
This patch cleans this up by renaming them to use the USE_something
one, allowing to clean up the makefile and make it more obvious when
reading the code what build option needs to be added.
Willy Tarreau [Wed, 22 May 2019 17:12:54 +0000 (19:12 +0200)]
BUILD: time: remove the test on _POSIX_C_SOURCE
It seems it's not defined on FreeBSD while it's mentioned on Linux that
clock_gettime() can be detected using this. Given that we also have the
test for _POSIX_TIMERS>0 that should cover it well enough. If it breaks
on other systems, we'll see.
Report was here :
https://github.com/haproxy/haproxy/runs/133866993
Ilya Shipitsin [Wed, 22 May 2019 15:12:15 +0000 (20:12 +0500)]
BUILD: travis: add sanitizers to travis-ci builds
full list of changes:
use TARGET=osx instead of generic for osx builds,
add USE_PCRE_JIT=1, USE_GETADDRINFO=1 to build matrix,
enable address sanitizer for clang
enable device detection: WURFL, DEVICEATLAS
Willy Tarreau [Wed, 22 May 2019 12:42:12 +0000 (14:42 +0200)]
BUG/MEDIUM: init/threads: provide per-thread alloc/free function callbacks
We currently have the ability to register functions to be called early
on thread creation and at thread deinitialization. It turns out this is
not sufficient because certain such functions may use resources that are
being allocated by the other ones, thus creating a race condition depending
only on the linking order. For example the mworker needs to register a
file descriptor while the pollers will reallocate the fd_updt[] array.
Similarly logs and trashes may be used by some init functions while it's
unclear whether they have been deduplicated.
The same issue happens on deinit, if the fd_updt[] or trash is released
before some functions finish to use them, we'll get into trouble.
This patch creates a couple of early and late callbacks for per-thread
allocation/freeing of resources. A few init functions were moved there,
and the fd init code was split between the two (since it used to both
allocate and initialize at once). This way the init/deinit sequence is
expected to be safe now.
This patch should be backported to 1.9 as at least the trash/log issue
seems to be present. The run_thread_poll_loop() code is a bit different
there as the mworker is not a callback, but it will have no effect and
it's enough to drop the mworker changes.
This bug was reported by Ilya Shipitsin in github issue #104.
Willy Tarreau [Wed, 22 May 2019 12:01:22 +0000 (14:01 +0200)]
MINOR: WURFL: do not emit warnings when not configured
At the moment the WURFL module emits 3 lines of warnings upon startup
when it is not referenced in the configuration file, which is quite
confusing. Let's make sure to keep it silent when not configured, as
detected by the absence of the wurfl-data-file statement.
mbellomi [Tue, 21 May 2019 13:44:53 +0000 (15:44 +0200)]
MINOR: WURFL: call header_retireve_callback() in dummy library
The current coverage of the dummy library was limited because the callbacks
passed to wurfl_lookup() were not called. Now we do call them with one existing
and one non-existing headers to make sure that ha_wurfl_retrieve_header() is
covered by the tests as well.
mbellomi [Tue, 21 May 2019 13:32:48 +0000 (15:32 +0200)]
BUG/MEDIUM: WURFL: segfault in wurfl-get() with missing info.
A segfault may happen in ha_wurfl_get() when dereferencing information not
present in wurfl-information-list. Check the node retrieved from the tree,
not its container.
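This is the classic container_of-on-NULL pattern; an illustrative sketch:

    #include <stddef.h>

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct tree_node_sketch { void *left, *right; };
    struct info_sketch { const char *name; struct tree_node_sketch node; };

    /* Illustrative only: the lookup may return NULL, and container_of(NULL)
     * yields a bogus non-NULL pointer, so the NULL check must be done on the
     * tree node itself, not on its container.
     */
    static struct info_sketch *to_info_sketch(struct tree_node_sketch *found)
    {
        if (!found)                 /* right: test the node from the tree */
            return NULL;
        return container_of(found, struct info_sketch, node);
    }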
Willy Tarreau [Wed, 22 May 2019 09:36:54 +0000 (11:36 +0200)]
CLEANUP: mux-h1: use "H1" and not "h1" as the mux's name
The mux's name is the only one reported in lower case in "show sess"
or "haproxy -vv" while the other ones are upper case, so it loses and
the other ones win :-)
Willy Tarreau [Wed, 22 May 2019 06:57:01 +0000 (08:57 +0200)]
MINOR: stream: remove the cpu time detection from process_stream()
It was not as efficient as the watchdog in that it would only trigger
after the problem had resolved itself, and still required a huge margin
to make sure we didn't trigger for an invalid reason. This used to leave
little indication about the cause. Better use the watchdog now and
improve it if needed.
The detector of unkillable tasks remains active though.
Willy Tarreau [Fri, 3 May 2019 11:52:18 +0000 (13:52 +0200)]
MAJOR: watchdog: implement a thread lockup detection mechanism
Since threads were introduced, we've naturally had a number of bugs
related to locking issues. In addition we've also got some issues
with corrupted lists in certain rare cases not necessarily involving
threads. Not only these events cause a lot of trouble to the production
as it is very hard to detect that the process is stuck in a loop and
doesn't deliver the service anymore, but it's often difficult (or too
late) to collect more debugging information.
The patch presented here implements a lockup detection mechanism, also
known as "watchdog". The principle is that (on systems supporting it),
each thread will have its own CPU timer which progresses as the thread
consumes CPU cycles, and when a deadline is met, a signal is delivered
(SIGALRM here since it doesn't interrupt gdb by default).
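The arming sequence is roughly the following sketch; all names and the exact flow here are assumptions for illustration:

    #include <signal.h>
    #include <time.h>
    #include <pthread.h>

    /* Rough sketch of the mechanism: each thread arms a timer on its own
     * CPU-time clock; SIGALRM fires once <sec> seconds of CPU have been
     * consumed, and the handler can recover the thread id from
     * si_value.sival_int.
     */
    static void arm_watchdog_sketch(int thr_id, time_t sec)
    {
        struct sigevent   sev = { 0 };
        struct itimerspec its = { 0 };
        clockid_t clk;
        timer_t   tid;

        pthread_getcpuclockid(pthread_self(), &clk);

        sev.sigev_notify          = SIGEV_SIGNAL;
        sev.sigev_signo           = SIGALRM;   /* doesn't stop gdb by default */
        sev.sigev_value.sival_int = thr_id;    /* recovered in the handler */
        timer_create(clk, &sev, &tid);

        its.it_value.tv_sec = sec;             /* e.g. 1s of CPU time */
        timer_settime(tid, 0, &its, NULL);     /* re-armed on each poll() exit */
    }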
The thread handling this signal (which is not necessarily the one which
triggered the timer) figures the thread ID from the signal arguments and
checks if it's really stuck by looking at the time spent since last exit
from poll() and by checking that the thread's scheduler is still alive
(so that even when dealing with configuration issues resulting in insane
amount of tasks being called in turn, it is not possible to accidentally
trigger it). Checking the scheduler's activity will usually result in a
second chance, thus doubling the detection time.
In order not to incorrectly flag a thread as being the cause of the
lockup, the thread_harmless_mask is checked : a thread could very well
be spinning on itself waiting for all other threads to join (typically
what happens when issuing "show sess"). In this case, once all threads
but one (or two) have joined, all the innocent ones are marked harmless
and will not trigger the timer. Only the ones not reacting will.
The deadline is set to one second, which already appears impossible to
reach, especially since it's 1 second of CPU usage, not elapsed time
with the CPU being preempted by other threads/processes/hypervisor. In
practice due to the scheduler's health verification it takes up to two
seconds to decide to panic.
Once all conditions are met, the goal is to crash from the offending
thread. So if it's the current one, we call ha_panic() otherwise the
signal is bounced to the offending thread which deals with it. This
will result in all threads being woken up in turn to dump their context,
the whole state is emitted on stderr in hope that it can be logged, and
the process aborts, leaving a chance for a core to be dumped and for a
service manager to restart it.
An alternative mechanism could be implemented for systems unable to
wake up a thread once its CPU clock reaches a deadline (e.g. FreeBSD).
Instead of waking the timer each and every deadline, it is possible to
use a standard timer which is reset each time we leave poll(). Since
the signal handler rechecks the CPU consumption this will also work.
However a totally idle process may trigger it from time to time which
may or may not confuse some debugging sessions. The same is true for
alarm() which could be another option for systems not having such a
broad choice of timers (but it seems that in this case they will not
have per-thread CPU measurements available either).
The feature is currently implemented only when threads are enabled in
order to keep the code clean, since the main purpose is to detect and
address inter-thread deadlocks. But if it proves useful for other
situations this condition might be relaxed.
Willy Tarreau [Tue, 21 May 2019 18:01:26 +0000 (20:01 +0200)]
MINOR: threads: add a timer_t per thread in thread_info
This will be used by the watchdog to detect that a thread locked up.
It's only defined on platforms supporting it. This patch only reserves
the room for the timer in the struct. A special value was reserved for
the uninitialized timer. The problem is that the POSIX API was horribly
designed, defining no invalid value, thus for each timer it is required
to keep a second variable to indicate whether it's valid. A quick check
shows that defining a 32-bit invalid value is not something uncommon
across other implementations, with ~0 being common. Let's try with this
and if it causes issues we can revisit this decision.
Willy Tarreau [Wed, 22 May 2019 05:06:44 +0000 (07:06 +0200)]
MINOR: threads: add a "stuck" flag to the thread_info struct
This flag is constantly cleared by the scheduler and will be set by the
watchdog timer to detect stuck threads. It is also set by the "show
threads" command so that it is easy to spot if the situation has evolved
between two subsequent calls : if the first "show threads" shows no stuck
thread and the second one shows such a stuck thread, it indicates that
this thread didn't manage to make any forward progress since the previous
call, which is extremely suspicious.
Willy Tarreau [Wed, 22 May 2019 07:43:09 +0000 (09:43 +0200)]
MINOR: debug: dump streams when an applet, iocb or stream is known
Whenever we can retrieve a valid stream pointer, we now call stream_dump()
to get a detailed dump of the stream currently running on the processor.
This is used by "show threads" and by ha_panic().