Willy Tarreau [Wed, 22 Nov 2017 14:47:29 +0000 (15:47 +0100)]
MINOR: pools: implement DEBUG_UAF to detect use after free
This code has been used successfully a few times in the past to detect
that a pool was used after being freed. Its main goal is to allocate a
full page for each object so that they are always released individually
and unmapped from memory. This way if any part of the code references the
object after it was freed and before it is reallocated, a segfault occurs at
the exact offending location. It does a few extra things such as writing
to the memory area before freeing to detect double-frees and free of
read-only areas, and placing the data at the end of the page instead of
the beginning so that out of bounds accesses are easier to spot. The
amount of memory used with this is huge (about 10 times the regular
usage) but it can be useful sometimes.
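The idea can be sketched as follows (a hypothetical minimal allocator illustrating the technique, assuming POSIX mmap()/munmap(); not haproxy's actual pool code):

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical sketch of the DEBUG_UAF idea: each object gets its own
 * page(s), with the payload placed at the END of the mapping so that
 * out-of-bounds accesses are easier to spot, and the pages are unmapped
 * on free so any later access faults at the offending location.
 * Note: the returned pointer is not aligned here; real code would be.
 */
static void *uaf_alloc(size_t size)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t area = ((size + page - 1) / page) * page;
    char *map = mmap(NULL, area, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (map == MAP_FAILED)
        return NULL;
    return map + area - size; /* object ends on the mapping's last byte */
}

static void uaf_free(void *ptr, size_t size)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t area = ((size + page - 1) / page) * page;
    char *map = (char *)ptr + size - area;

    /* writing before freeing detects double-frees and read-only areas */
    memset(ptr, 0x5e, size);
    munmap(map, area); /* any further access now segfaults */
}
```

With this scheme a use-after-free dereferences an unmapped page and crashes exactly where the faulty access happens, at the cost of one full page per object.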
Olivier Houchard [Wed, 22 Nov 2017 18:12:10 +0000 (19:12 +0100)]
MINOR: ssl: Don't disable early data handling if we could not write.
If for some reason we can't write early data, don't give up on reading them:
there may still be early data to be read, and if we don't read them, openssl's
internal state might become inconsistent and the handshake will fail.
Olivier Houchard [Wed, 22 Nov 2017 16:38:37 +0000 (17:38 +0100)]
BUG/MINOR: ssl: Always start the handshake if we can't send early data.
The current code only tries to do the handshake, in case we can't send early
data, if we're acting as a client. This is wrong: it has to be done on the
server side too, or we end up in an infinite loop.
Willy Tarreau [Wed, 22 Nov 2017 15:53:53 +0000 (16:53 +0100)]
BUG/MEDIUM: deinit: correctly deinitialize the proxy and global listener tasks
While using mmap() to allocate pools for debugging purposes, kill -USR1 caused
libc aborts in deinit() on two calls to free() on proxies' tasks and the global
listener task. The issue comes from the fact that we're using free() to release
a task instead of task_free(), so a task allocated from a pool was released
using a different method.
This bug has been there since at least 1.5, so a backport is desirable to all
maintained versions.
BUG/MEDIUM: cache: use key=0 as a condition for freeing
The cache was trying to remove objects from the tree while they had
already been removed from it. We now set the key to 0 as a marker so that
we don't try to remove the object from the tree while we are still using it.
The cli command "show cache" displays the status of the cache. The first
displayed line is the shctx information, including how many available
blocks it contains (blocks are 1kB by default).
The next lines list the objects stored in the cache tree: the pointer,
the size of the object and how many blocks it uses, a refcount for the
number of users of the object, and the remaining expiration time (which
can be negative if expired).
MEDIUM: shctx: use unsigned int for len and block_count
Allows bigger objects to be cached in the shctx: the first
implementation only stored small SSL sessions, but we now want to store
bigger HTTP responses.
Willy Tarreau [Tue, 14 Nov 2017 14:01:22 +0000 (15:01 +0100)]
CONTRIB: spoa_example: remove last dependencies on type "sample"
Since it's an external agent, it's confusing that it uses haproxy's internal
types, and this seems to have encouraged other implementations to do so.
Let's completely remove any reference to struct sample and use the
native DATA types instead of converting to and from haproxy's sample
types.
Lukas Tribus [Tue, 21 Nov 2017 11:39:34 +0000 (12:39 +0100)]
BUG/MINOR: systemd: ignore daemon mode
Since we switched to notify mode in the systemd unit file in commit d6942c8, haproxy won't start if the daemon keyword is present in the
configuration.
This change makes sure that haproxy remains in foreground when using
systemd mode and adds a note in the documentation.
Willy Tarreau [Tue, 21 Nov 2017 20:01:29 +0000 (21:01 +0100)]
BUG/MEDIUM: h2: always reassemble the Cookie request header field
The special case of the Cookie header field was overlooked in the
implementation, considering that most servers do handle cookie lists,
but as reported here on discourse it's not the case at all :
This patch fixes this by skipping all occurrences of the Cookie header
in the request while building the H1 request, and then building a single
Cookie header with all values appended at once, according to what is
requested in RFC7540#8.1.2.5.
In order to build the list of values, the list struct is used as a linked
list (as there can't be more cookies than headers). This makes the list
walking quite efficient and ensures all values are quickly found without
having to rescan the list.
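The reassembly itself can be sketched like this (a hypothetical standalone helper, not the actual H2 code): the collected Cookie values are joined into a single header value with "; " as separator, as RFC 7540 #8.1.2.5 requires.

```c
#include <stddef.h>
#include <string.h>

/* Join several Cookie header values into one value using "; " as the
 * separator, as required when converting an H2 request to H1.
 * Hypothetical helper for illustration, not haproxy's implementation.
 * Returns the length written, or -1 if <out> is too small.
 */
static int join_cookies(const char *const vals[], int nvals,
                        char *out, size_t outsz)
{
    size_t len = 0;

    if (outsz == 0)
        return -1;

    for (int i = 0; i < nvals; i++) {
        size_t vlen = strlen(vals[i]);

        if (i) {
            if (len + 2 > outsz)
                return -1;
            memcpy(out + len, "; ", 2);
            len += 2;
        }
        if (len + vlen + 1 > outsz) /* +1 keeps room for the NUL */
            return -1;
        memcpy(out + len, vals[i], vlen);
        len += vlen;
    }
    out[len] = '\0';
    return (int)len;
}
```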
A test case provided by Lukas shows that it properly works :
Willy Tarreau [Tue, 21 Nov 2017 19:03:02 +0000 (20:03 +0100)]
MEDIUM: h2: change hpack_decode_headers() to only provide a list of headers
The current H2 to H1 protocol conversion presents some issues which will
require performing some processing on certain headers before writing them,
so it's not possible to convert HPACK to H1 on the fly.
This commit modifies the headers decoding so that it now works in two
phases : hpack_decode_headers() only decodes the HPACK stream in the
HEADERS frame and puts the result into a list. Headers which require
storage (huffman-compressed or from the dynamic table) are stored in
a chunk allocated by the H2 demuxer. Then once the headers are properly
decoded into this list, h2_make_h1_request() is called with this list
to produce the HTTP/1.1 request into the destination buffer. The list
necessarily enforces a limit. Here we use 2*MAX_HTTP_HDR, which means
that we can have as many individual cookies as we have regular headers
if a client decides to break their cookies into multiple values. This
seems reasonable and will allow the H1 parser to decide whether it's
too much or not.
Thus the output stream is not produced on the fly anymore, which will
permit dealing with certain corner cases like repairing the Cookie header
(which for now is not done).
In order to limit header duplication and parsing, the known pseudo headers
continue to be passed by their index : the name element in the list then
has a NULL pointer and the value is the pseudo header's index. Given that
these ones represent about half of the incoming requests and need to be
found quickly, it maintains an acceptable level of performance.
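The list convention described above can be illustrated like this (hypothetical types for the sketch, not haproxy's actual structures):

```c
#include <stddef.h>

/* A decoded header is a name/value pair. A known pseudo-header is
 * encoded with a NULL name pointer, the value field then carrying the
 * pseudo-header's index instead of a string. Hypothetical sketch of
 * the convention, not haproxy's real types.
 */
struct h2_hdr {
    const char *name;    /* NULL => pseudo-header passed by index */
    union {
        const char *str; /* literal value when name != NULL */
        int idx;         /* pseudo-header index when name == NULL */
    } value;
};

static int h2_hdr_is_pseudo(const struct h2_hdr *h)
{
    return h->name == NULL;
}
```

This keeps the frequent pseudo-headers cheap to recognize (a single NULL check) without duplicating their names in the decoded list.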
The code was significantly reduced by doing this because the original code
had to deal with HPACK and H1 combinations (eg: indexed vs not indexed, etc)
and now the HPACK decoding is totally focused on the decompression, and
the H1 encoding doesn't have to deal with the issue of wrapping input for
example.
One bug was addressed here (though it couldn't happen at the moment). The
H2 demuxer used to detect a failure to write the request into the H1 buffer
and would then detect if the output buffer wraps, realign it and try again.
The problem by doing so was that the HPACK context was already modified and
not rewindable. Thus the size check is now performed first and a failure is
reported if it doesn't fit.
Willy Tarreau [Tue, 21 Nov 2017 18:55:27 +0000 (19:55 +0100)]
MEDIUM: h2: add a function to emit an HTTP/1 request from a headers list
The current H2 to H1 protocol conversion presents some issues which will
require performing some processing on certain headers before writing them,
so it's not possible to convert HPACK to H1 on the fly.
Here we introduce a function which performs half of what hpack_decode_header()
used to do, which is to take a list of headers on input and emit the
corresponding request in HTTP/1.1 format. The code is the same and functions
were renamed to be prefixed with "h2" instead of "hpack", though it ends
up being simpler as the various HPACK-specific cases could be fused into
a single one (ie: add header).
Moving this part here makes a lot of sense as now this code is specific to
what is documented in HTTP/2 RFC 7540 and will be able to deal with special
cases related to H2 to H1 conversion enumerated in section 8.1.
Various error codes which were previously assigned to HPACK were never
used (aside being negative) and were all replaced by -1 with a comment
indicating what error was detected. The code could be further factored
thanks to this but this commit focuses on compatibility first.
Willy Tarreau [Tue, 21 Nov 2017 18:36:21 +0000 (19:36 +0100)]
BUG/MEDIUM: h2: properly report connection errors in headers and data handlers
We used to return >0, indicating success, when an error was present on the
connection, preventing the caller from detecting and handling it. This for
example happens when sending too many headers in a frame, making the request
impossible to decompress.
Willy Tarreau [Mon, 20 Nov 2017 20:27:45 +0000 (21:27 +0100)]
BUILD: h2: mark some inlined functions "unused"
Clang complains that h2_get_n64() is not used, and a few other protocol
specific functions may fall in that category depending on how the code
evolves. Better mark them unused to silence the warning since it's on
purpose.
Willy Tarreau [Mon, 20 Nov 2017 20:22:17 +0000 (21:22 +0100)]
BUILD: compiler: add a new type modifier __maybe_unused
While gcc only emits warnings about unused static functions, Clang also
emits such a warning when the functions are inlined. This is a bit
annoying at certain places where functions are provided to manipulate
multiple data types and are not yet used. Let's have a type modifier
"__maybe_unused" which sets the "unused" attribute like the Linux kernel
does. It's elegant as it allows the code author to indicate that they know
this element might be unused. It works on variables as well, which
is convenient to remove ifdefs around local variables in certain functions,
but doesn't work on labels.
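A minimal sketch of such a modifier, assuming a GCC/Clang-compatible compiler (the hedge for other compilers is an empty definition):

```c
/* Define __maybe_unused the way the Linux kernel does: on GCC/Clang it
 * expands to the "unused" attribute, silencing warnings for inline
 * functions and variables the author knows may go unused.
 */
#ifdef __GNUC__
#define __maybe_unused __attribute__((unused))
#else
#define __maybe_unused
#endif

/* no warning even if never called, including when inlined under Clang */
static inline int __maybe_unused twice(int x)
{
    return x * 2;
}

static int use_it(void)
{
    /* also works on local variables, removing #ifdefs around them */
    int __maybe_unused dbg = 42;

    return twice(21);
}
```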
Willy Tarreau [Mon, 20 Nov 2017 20:11:12 +0000 (21:11 +0100)]
BUILD: ebtree: don't redefine types u32/s32 in scope-aware trees
Clang emits a warning about these types being redefined in eb32sctree
while they are already defined in eb32tree. Let's simply not redefine
them if eb32tree was already included.
Pieter Baauw reported a build issue affecting haproxy after plock was
included. It happens that expressions of the form :
if ((const) ? (expr1) : (expr2))
do_something()
always produce code for both expr1 and expr2 on Clang when building
without optimization. The resulting asm code is even funny, basically
doing :
mov reg, 1
cmp reg, 1
...
This causes our sizeof() tests to fail to build because we purposely
dereference a fake function that reports the location and nature of the
inconsistency, but this fake function appears in the object code despite
all conditions being there to avoid it.
However the compiler is still smart enough to optimize away code doing
if (const)
do_something()
So we simply repeat the condition before do_something(), and the dummy
function is not referenced anymore unless really required.
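The pattern can be sketched as follows (hypothetical names, not the actual plock code): a compile-time check references an undefined function so that a failure shows up as a link error, and the constant condition is repeated so even unoptimized builds drop the dead reference.

```c
/* Never defined on purpose: if the check fails, the reference survives
 * and the link error names the problem. Hypothetical sketch of the
 * technique described above.
 */
extern void size_check_failed(void);

/* Repeating the constant condition in a plain "if" lets Clang at -O0
 * eliminate the dead call, which it would not do inside the single
 * "(const) ? a : b" form.
 */
#define CHECK_SIZE(type, expected) do {                         \
    if (sizeof(type) != (expected)) {                           \
        if (sizeof(type) != (expected))                         \
            size_check_failed();                                \
    }                                                           \
} while (0)

static int sizes_ok(void)
{
    CHECK_SIZE(char, 1); /* always true, so the call is dropped */
    return 1;
}
```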
There are a few inlines such as pl_barrier() and pl_cpu_relax() which
are used a lot. Unfortunately, while building test code at -O0, inlining
is disabled and these ones are called a lot and show up a lot in any
profile, are traced into when single-stepping with a debugger, etc, thus
they are polluting the landscape. Since they're single-asm statements,
there is no reason for not turning them into macros.
The result becomes fairly visible here at -O0 :
$ size treelock.inline treelock.macro
   text    data     bss     dec     hex filename
  11431     692     656   12779    31eb treelock.inline
  10967     692     656   12315    301b treelock.macro
And it was verified that regularly optimized code remains strictly identical.
Local variables "l", "i" and "ret" were renamed "__pl_l", "__pl_i" and
"__pl_r" respectively, to limit the risk of conflicts with existing
variables in application code.
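Such single-asm macros can be sketched like this (illustrative definitions assuming GCC/Clang inline asm; the real plock code covers more architectures):

```c
/* Compiler-only barrier: a single asm statement, so making it a macro
 * costs nothing even at -O0 where inlining is disabled.
 */
#define pl_barrier()   __asm__ volatile("" ::: "memory")

/* CPU pause hint for spin loops; "rep; nop" is the x86 PAUSE opcode,
 * other architectures fall back to a plain compiler barrier here.
 */
#if defined(__x86_64__) || defined(__i386__)
#define pl_cpu_relax() __asm__ volatile("rep; nop" ::: "memory")
#else
#define pl_cpu_relax() __asm__ volatile("" ::: "memory")
#endif

/* toy spin loop using the macros; returns the number of iterations */
static int spin_until(volatile int *flag)
{
    int loops = 0;

    while (!*flag) {
        pl_cpu_relax();
        loops++;
    }
    pl_barrier();
    return loops;
}
```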
Tim Duesterhus [Mon, 20 Nov 2017 14:58:35 +0000 (15:58 +0100)]
MEDIUM: mworker: Add systemd `Type=notify` support
This patch adds support for `Type=notify` to the systemd unit.
Supporting `Type=notify` improves both starting and reloading
of the unit, because systemd is notified when the action has completed.
See this quote from `systemd.service(5)`:
> Note however that reloading a daemon by sending a signal (as with the
> example line above) is usually not a good choice, because this is an
> asynchronous operation and hence not suitable to order reloads of
> multiple services against each other. It is strongly recommended to
> set ExecReload= to a command that not only triggers a configuration
> reload of the daemon, but also synchronously waits for it to complete.
By making systemd aware of a reload in progress it is able to wait until
the reload actually succeeded.
This patch introduces both a new `USE_SYSTEMD` build option which controls
including the sd-daemon library as well as a `-Ws` runtime option which
runs haproxy in master-worker mode with systemd support.
When haproxy is running in master-worker mode with systemd support it will
send status messages to systemd using `sd_notify(3)` in the following cases:
- The master process forked off the worker processes (READY=1)
- The master process entered the `mworker_reload()` function (RELOADING=1)
- The master process received the SIGUSR1 or SIGTERM signal (STOPPING=1)
Change the unit file to specify `Type=notify` and replace master-worker
mode (`-W`) with master-worker mode with systemd support (`-Ws`).
Future evolutions of this feature could include making use of the `STATUS`
feature of `sd_notify()` to send information about the number of active
connections to systemd. This would require bidirectional communication
between the master and the workers and thus is left for future work.
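The resulting unit file change can be sketched as follows (paths and the reload signal are illustrative of the master-worker conventions described above, not a verbatim copy of the shipped unit):

```ini
[Service]
# sketch of the Type=notify change; paths may differ per distribution
Type=notify
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
```

With `Type=notify`, systemd considers the service started only once the master has sent `READY=1`, and a reload is complete only when `READY=1` follows the `RELOADING=1` notification.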
Willy Tarreau [Sat, 18 Nov 2017 10:26:20 +0000 (11:26 +0100)]
BUG/MINOR: stream-int: don't try to read again when CF_READ_DONTWAIT is set
Commit 9aaf778 ("MAJOR: connection : Split struct connection into struct
connection and struct conn_stream.") had to change the way the stream
interface deals with incoming data to accommodate the mux. A break
statement got lost during a change, leading to the receive call being
performed twice even when CF_READ_DONTWAIT is set. The most noticeable
effect is that it made the bug described in commit 33982cb ("BUG/MAJOR:
stream: ensure analysers are always called upon close") much easier to
reproduce as it would appear even with an HTTP frontend.
Let's just restore the stream-interface flag and the break here, as in
the previous code.
No backport is needed as this was introduced during 1.8-dev.
Willy Tarreau [Mon, 20 Nov 2017 14:37:13 +0000 (15:37 +0100)]
BUG/MAJOR: stream: ensure analysers are always called upon close
A recent issue affecting HTTP/2 + redirect + cache has uncovered an old
problem affecting all existing versions regarding the way events are
reported to analysers.
It happens that when an event is reported, analysers see it and may
decide to temporarily pause processing and prevent other analysers from
processing the same event. Then the event may be cleared and upon the
next call to the analysers, some of them will never see it.
This is exactly what happens with CF_READ_NULL if it is received before
the request is processed, like during redirects : the first time, some
analysers see it, pause, then the event may be converted to a SHUTW and
cleared, and on next call, there's nothing to process. In practice it's
hard to get the CF_READ_NULL flag during the request because requests
have CF_READ_DONTWAIT, preventing the read0 from happening. But on
HTTP/2 it's presented along with any incoming request. Also on a TCP
frontend the flag is not set and it's possible to read the NULL before
the request is parsed.
This causes a problem when filters are present because flt_end_analyse
needs to be called to release allocated resources and remove the
CF_FLT_ANALYZE flag. And the loss of this event prevents the analyser
from being called and from removing itself, preventing the connection
from ever ending.
This problem just shows that the event processing needs a serious revamp
after 1.8. In the meantime we can deal with the really problematic case,
which is that we *want* to call analysers if CF_SHUTW is set on either side,
as it's the last opportunity to terminate a processing. It may
occasionally result in some analysers being called for nothing in half-
closed situations but it will take care of the issue.
An example of problematic configuration triggering the bug in 1.7 is :
This fix must be backported to 1.7 as well as any version where commit c0c672a ("BUG/MINOR: http: Fix conditions to clean up a txn and to
handle the next request") was backported. This commit didn't cause the
bug but made it much more likely to happen.
Willy Tarreau [Sat, 18 Nov 2017 14:39:10 +0000 (15:39 +0100)]
BUG/MEDIUM: stream: don't automatically forward connect nor close
Upon stream instantiation, we used to enable channel auto connect
and auto close to ease TCP processing. But commit 9aaf778 ("MAJOR:
connection : Split struct connection into struct connection and
struct conn_stream.") has revealed that it was a bad idea because
this commit enables reading of the trailing shutdown that may follow
a small request, resulting in a read and a shutr turned into shutw
before the stream even has a chance to apply the filters. This
causes an issue with impossible situations where the backend stream
interface is still in SI_ST_INI with a closed output, which blocks
some streams for example when performing a redirect with filters
enabled.
Let's change this so that we only enable these two flags if there is
no analyser on the stream. This way process_stream() has a chance to
let the analysers decide whether or not to allow the shutdown event
to be transferred to the other side.
It doesn't seem possible to trigger this issue before 1.8, so for now
it is preferable not to backport this fix.
Willy Tarreau [Sun, 19 Nov 2017 08:55:29 +0000 (09:55 +0100)]
[RELEASE] Released version 1.8-rc4
Released version 1.8-rc4 with the following main changes :
- BUG/MEDIUM: cache: does not cache if no Content-Length
- BUILD: thread/pipe: fix build without threads
- BUG/MINOR: spoe: check buffer size before acquiring or releasing it
- MINOR: debug/flags: Add missing flags
- MINOR: threads: Use __decl_hathreads to declare locks
- BUG/MINOR: buffers: Fix b_alloc_margin to be "functionally" thread-safe
- BUG/MAJOR: ebtree/scope: fix insertion and removal of duplicates in scope-aware trees
- BUG/MAJOR: ebtree/scope: fix lookup of next node in scope-aware trees
- MINOR: ebtree/scope: add a function to find next node from a parent
- MINOR: ebtree/scope: simplify the lookup functions by using eb32sc_next_with_parent()
- BUG/MEDIUM: mworker: Fix re-exec when haproxy is started from PATH
- BUG/MEDIUM: cache: use msg->sov to forward header
- MINOR: cache: forward data with headers
- MINOR: cache: disable cache if shctx_row_data_append fail
- BUG/MINOR: threads: tid_bit must be a unsigned long
- CLEANUP: tasks: Remove useless double test on rq_next
- BUG/MEDIUM: standard: itoa_str/idx and quote_str/idx must be thread-local
- MINOR: tools: add a function to dump a scope-aware tree to a file
- MINOR: tools: improve the DOT dump of the ebtree
- MINOR: tools: emphasize the node being worked on in the tree dump
- BUG/MAJOR: ebtree/scope: properly tag upper nodes during insertion
- DOC: peers: Add a first version of peers protocol v2.1.
- CONTRIB: Wireshark dissector for HAProxy Peer Protocol.
- MINOR: mworker: display an accurate error when the reexec fail
- BUG/MEDIUM: mworker: wait again for signals when execvp fail
- BUG/MEDIUM: mworker: does not deinit anymore
- BUG/MEDIUM: mworker: does not close inherited FD
- MINOR: tests: add a python wrapper to test inherited fd
- BUG/MINOR: Allocate the log buffers before the proxies startup
- MINOR: tasks: Use a bitfield to track tasks activity per-thread
- MAJOR: polling: Use active_tasks_mask instead of tasks_run_queue
- MINOR: applets: Use a bitfield to track applets activity per-thread
- MAJOR: polling: Use active_applets_mask instead of applets_active_queue
- MEDIUM: applets: Don't process more than 200 active applets at once
- MINOR: stream: Add thread-mask of tasks/FDs/applets in "show sess all" command
- MINOR: SSL: Store the ASN1 representation of client sessions.
- MINOR: ssl: Make sure we don't shutw the connection before the handshake.
- BUG/MEDIUM: deviceatlas: ignore not valuable HTTP request data
David Carlier [Fri, 17 Nov 2017 08:47:25 +0000 (08:47 +0000)]
BUG/MEDIUM: deviceatlas: ignore not valuable HTTP request data
A customer reported a crash when some headers within the HTTP request
were not set, leading the module to crash. The module now ignores them,
since empty data has no value for the detection.
Needs to be backported to 1.7.
Olivier Houchard [Thu, 16 Nov 2017 16:42:52 +0000 (17:42 +0100)]
MINOR: SSL: Store the ASN1 representation of client sessions.
Instead of storing the SSL_SESSION pointer directly in the struct server,
store the ASN1 representation; otherwise session resumption is broken with
TLS 1.3 when multiple outgoing connections want to use the same session.
MEDIUM: applets: Don't process more than 200 active applets at once
Now, we process at most 200 active applets per call to applet_run_active. We use
the same limit as the tasks. With the cache filter and the SPOE, the number of
active applets can now be huge. So, it is important to limit the number of
applets processed in applet_run_active.
MAJOR: polling: Use active_applets_mask instead of applets_active_queue
applets_active_queue is the active queue size. It is a global variable, so it
is suboptimal: we may be led to consider there are active applets for a
thread while in fact all active applets are assigned to the other threads. In
such cases, the polling loop will be evaluated many more times than necessary.
Instead, we now check if the thread id is set in the bitfield active_applets_mask.
This is specific to threads, no backport is needed.
MINOR: applets: Use a bitfield to track applets activity per-thread
A bitfield has been added to know if there are runnable applets for a
thread. When an applet is woken up, the bits corresponding to its thread_mask
are set. When all active applets for a thread have been processed, the thread
is removed from active ones by unsetting its tid_bit from the bitfield.
MAJOR: polling: Use active_tasks_mask instead of tasks_run_queue
tasks_run_queue is the run queue size. It is a global variable, so it is
suboptimal: we may be led to consider there are active tasks for a
thread while in fact all active tasks are assigned to the other threads. In
such cases, the polling loop will be evaluated many more times than necessary.
Instead, we now check if the thread id is set in the bitfield active_tasks_mask.
Another change has been made in process_runnable_tasks. Now, we always limit the
number of tasks processed to 200.
This is specific to threads, no backport is needed.
MINOR: tasks: Use a bitfield to track tasks activity per-thread
A bitfield has been added to know if there are runnable tasks for a thread. When
a task is woken up, the bits corresponding to its thread_mask are set. When all
tasks for a thread have been evaluated without any wakeup, the thread is removed
from active ones by unsetting its tid_bit from the bitfield.
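The mechanism described in these two commits can be sketched with C11 atomics (hypothetical names mirroring the ones above, not haproxy's actual code):

```c
#include <stdatomic.h>

/* One bit per thread (its tid_bit). Waking a task sets the bits of its
 * thread_mask; once a thread has drained its own work it clears its
 * tid_bit, so the polling loop can cheaply skip idle threads.
 * Hypothetical sketch of the active_tasks_mask idea.
 */
static _Atomic unsigned long active_tasks_mask;

static void task_wakeup_mask(unsigned long thread_mask)
{
    atomic_fetch_or(&active_tasks_mask, thread_mask);
}

static void thread_done(unsigned long tid_bit)
{
    atomic_fetch_and(&active_tasks_mask, ~tid_bit);
}

static int thread_has_work(unsigned long tid_bit)
{
    return (atomic_load(&active_tasks_mask) & tid_bit) != 0;
}
```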
BUG/MINOR: Allocate the log buffers before the proxies startup
Since the commit cd7879adc ("BUG/MEDIUM: threads: Run the poll loop on the main
thread too"), the log buffers are allocated after the proxies startup. So log
messages produced during this startup was ignored.
To fix the bug, we restore the initialization of these buffers before proxies
startup.
This is specific to threads, no backport is needed.
Do not use the deinit() function during a reload: it's dangerous and
might be subject to double frees, segfaults and hazardous behavior if
it's called twice when execvp fails.
BUG/MEDIUM: mworker: wait again for signals when execvp fail
After execvp fails, signals were ignored, preventing any further reload
attempt. This is now fixed by returning to the top of the mworker_wait()
function once execvp has failed.
Willy Tarreau [Wed, 15 Nov 2017 18:38:29 +0000 (19:38 +0100)]
BUG/MAJOR: ebtree/scope: properly tag upper nodes during insertion
Christopher found a case where some tasks would remain unseen in the run
queue and would spontaneously appear after certain apparently unrelated
operations performed by the other thread.
It's in fact the insertion which is not correct: the node serving as the
top of a duplicate tree wasn't properly updated, just like the top of each
subtree in a duplicate tree. This had the effect that after some removals,
the incorrectly tagged node would hide the underlying ones, which would
then suddenly re-appear once they were removed.
Willy Tarreau [Wed, 15 Nov 2017 17:51:29 +0000 (18:51 +0100)]
MINOR: tools: emphasize the node being worked on in the tree dump
Now we can show in dotted red the node being removed or surrounded in red
a node having been inserted, and add a description on the graph related to
the operation in progress for example.
Willy Tarreau [Wed, 15 Nov 2017 16:49:54 +0000 (17:49 +0100)]
MINOR: tools: improve the DOT dump of the ebtree
Use a smaller and cleaner fixed font, use upper case to indicate sides on
branches, remove the useless node/leaf markers on branches since the colors
already indicate them, and show the node's key as it helps spot the matching
leaf.
MINOR: cache: disable cache if shctx_row_data_append fail
Disable the cache if appending data failed. It should never happen
because the allocated row size is at least equal to the size of the
object to allocate.
Use msg->sov to forward headers instead of msg->eoh. The latter can cause
some problems because eoh does not contain the last \r\n, and the filter
does not support sending the headers partially.
Tim Duesterhus [Sun, 12 Nov 2017 16:39:18 +0000 (17:39 +0100)]
BUG/MEDIUM: mworker: Fix re-exec when haproxy is started from PATH
If haproxy is started using the name of the binary only (i.e.
not using a relative or absolute path) the `execv` in
`mworker_reload` fails with `ENOENT`, because it does not
examine the `PATH`:
[WARNING] 315/161139 (7) : Reexecuting Master process
[WARNING] 315/161139 (7) : Cannot allocate memory
[WARNING] 315/161139 (7) : Failed to reexecute the master process [7]
The error messages are misleading, because the return value of
`execv` is not checked. This should be fixed in a separate commit.
Once this happened the master process ignores any further
signals sent by the administrator.
Replace `execv` with `execvp` to establish the expected
behaviour.
Willy Tarreau [Mon, 13 Nov 2017 18:13:06 +0000 (19:13 +0100)]
MINOR: ebtree/scope: add a function to find next node from a parent
Several parts of the code need to access the next node but don't start
from a node, only from a tagged parent link. Even eb32sc_next() does this.
Let's provide this function to prepare a cleanup of the lookup function.
Willy Tarreau [Mon, 13 Nov 2017 17:55:44 +0000 (18:55 +0100)]
BUG/MAJOR: ebtree/scope: fix lookup of next node in scope-aware trees
The eb32sc_walk_down_left() function needs to be able to go up when
it doesn't find a matching entry because this situation may always
happen, especially when fixing two constraints (scope + value). It
also happens after certain removal situations where some bits remain
on some intermediary nodes in the tree.
In addition, the algorithm for deciding to take the right branch is
wrong, as it would take it if the current node shows a scope that
doesn't match the required one.
The current code is flaky in that it returns NULL when the bottom
has been reached, and it's up to the caller to visit other nodes above.
In addition to being complex it's not reliable, and it was noticed a
few times that some tasks could remain lying in the tree after heavy
insertion/removals under multi-threaded workloads.
Now instead we make eb32sc_walk_down_left() visit the leftmost branch
that matches the scope, and automatically go up to visit the closest
matching right branch. This effectively does the same operations as a
next() operation but in reverse order (down then up instead of up then
down).
The eb32sc_next() function now becomes very simple again and matches
the original one, and the initial issues cannot be met anymore.
No backport is needed, this is purely 1.8-specific.
Willy Tarreau [Mon, 13 Nov 2017 15:16:09 +0000 (16:16 +0100)]
BUG/MAJOR: ebtree/scope: fix insertion and removal of duplicates in scope-aware trees
Commit ca30839 and following ("MINOR: ebtree: implement the scope-aware
functions for eb32") improperly dealt with the scope in duplicate trees.
The insertion was too lenient in that it would always mark the whole
rightmost chain below the insertion point, and the removal could leave
marks of non-existing scopes causing next()/first() to visit the wrong
branch and return NULL.
For insertion, we must only tag the nodes between the head of the dup
tree and the insertion point, which is the top of the lowest subtree. For
removal, the new scope must be calculated by ORing the scopes of the
two new branches and is irrelevant to the previous values.
No backport is needed, this is purely 1.8-specific.
BUG/MINOR: buffers: Fix b_alloc_margin to be "functionally" thread-safe
b_alloc_margin is, strictly speaking, thread-safe. It will not crash
HAProxy. But its contract is not respected anymore in a multithreaded
environment. In this function, we need to be sure to have <margin> buffers
available in the pool after the allocation. To have this guarantee, we must
lock the memory pool during the whole operation. This also means we must call
the internal, lockless memory functions (prefixed with '__').
For the record, this patch fixes a pernicious bug which happens after a soft
reload, where some streams can be blocked infinitely, waiting for a buffer in
the buffer_wq list. This happens because, during a soft reload, pool_gc2 is
called, making some calls to b_alloc_fast fail.
This is specific to threads, no backport is needed.
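The contract can be sketched like this (a hypothetical toy pool, not haproxy's code): the availability check and the allocation must happen under the same lock, otherwise another thread can consume the spare buffers between the two steps.

```c
#include <pthread.h>

/* Toy pool illustrating the b_alloc_margin contract: after we take a
 * buffer, <margin> buffers must still be available. Checking and
 * allocating under one lock is what provides that guarantee.
 */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static int pool_avail = 8; /* free buffers currently in the pool */

static int alloc_with_margin(int margin)
{
    int ok = 0;

    pthread_mutex_lock(&pool_lock);
    if (pool_avail > margin) { /* <margin> buffers remain after us */
        pool_avail--;          /* internal, lock-free style alloc */
        ok = 1;
    }
    pthread_mutex_unlock(&pool_lock);
    return ok;
}
```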
MINOR: threads: Use __decl_hathreads to declare locks
This macro should be used to declare variables or struct members depending on
the USE_THREAD compile option. It avoids the encapsulation of such declarations
between #ifdef/#endif. It is used to declare all lock variables.
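A sketch of such a macro (illustrative definition following the description above):

```c
/* Emit the declaration only when threads are compiled in, avoiding
 * #ifdef/#endif around every lock variable or struct member.
 */
#ifdef USE_THREAD
#define __decl_hathreads(decl) decl
#else
#define __decl_hathreads(decl)
#endif

struct my_counter {
    unsigned value;
    __decl_hathreads(unsigned long lock;) /* only with USE_THREAD */
};
```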
BUG/MINOR: spoe: check buffer size before acquiring or releasing it
In spoe_acquire_buffer and spoe_release_buffer, instead of checking the buffer
against buf_empty, we now check its size. It is important because when an
allocation fails, it will be set to buf_wanted. In both cases, the size is 0.
It is a proactive bug fix, no real problem was observed till now. It cannot be
backported as is in 1.7 because of all changes made on the SPOE in 1.8.
Willy Tarreau [Sat, 11 Nov 2017 16:58:31 +0000 (17:58 +0100)]
BUILD: thread/pipe: fix build without threads
Marcus Rückert reported that commit d8b3b65 ("BUG/MEDIUM: splice/threads:
pipe reuse list was not protected.") broke threadless support. Add the
required #ifdef.
Willy Tarreau [Sat, 11 Nov 2017 08:06:48 +0000 (09:06 +0100)]
[RELEASE] Released version 1.8-rc3
Released version 1.8-rc3 with the following main changes :
- BUILD: use MAXPATHLEN instead of NAME_MAX.
- BUG/MAJOR: threads/checks: add 4 missing spin_unlock() in various functions
- BUG/MAJOR: threads/server: missing unlock in CLI fqdn parser
- BUG/MINOR: cli: do not perform an invalid action on "set server check-port"
- BUG/MAJOR: threads/checks: wrong use of SPIN_LOCK instead of SPIN_UNLOCK
- CLEANUP: checks: remove return statements in locked functions
- BUG/MINOR: cli: add severity in "set server addr" parser
- CLEANUP: server: get rid of return statements in the CLI parser
- BUG/MAJOR: cli/streams: missing unlock on exit "show sess"
- BUG/MAJOR: threads/dns: add missing unlock on allocation failure path
- BUG/MAJOR: threads/lb: fix missing unlock on consistent hash LB
- BUG/MAJOR: threads/lb: fix missing unlock on map-based hash LB
- BUG/MEDIUM: threads/stick-tables: close a race condition on stktable_trash_expired()
- BUG/MAJOR: h2: set the connection's task to NULL when no client timeout is set
- BUG/MAJOR: thread/listeners: enable_listener must not call unbind_listener()
- BUG/MEDIUM: threads: don't try to free build option message on exit
- MINOR: applets: no need to check for runqueue's emptiness in appctx_res_wakeup()
- MINOR: add master-worker in the warning about nbproc
- MINOR: mworker: allow pidfile in mworker + foreground
- MINOR: mworker: write parent pid in the pidfile
- MINOR: mworker: do not store child pid anymore in the pidfile
- MINOR: ebtree: implement the scope-aware functions for eb32
- MEDIUM: ebtree: specify the scope of every node inserted via eb32sc
- MINOR: ebtree: update the eb32sc parent node's scope on delete
- MEDIUM: ebtree: only consider the branches matching the scope in lookups
- MINOR: ebtree: implement eb32sc_lookup_ge_or_first()
- MAJOR: task: make use of the scope-aware ebtree functions
- MINOR: task: simplify wake_expired_tasks() to avoid unlocking in the loop
- MEDIUM: task: change the construction of the loop in process_runnable_tasks()
- MINOR: threads: use faster locks for the spin locks
- MINOR: tasks: only visit filled task slots after processing them
- MEDIUM: tasks: implement a lockless scheduler for single-thread usage
- BUG/MINOR: dns: Don't try to get the server lock if it's already held.
- BUG/MINOR: dns: Don't lock the server lock in snr_check_ip_callback().
- DOC: Add note about encrypted password CPU usage
- BUG/MINOR: h2: set the "HEADERS_SENT" flag on stream, not connection
- BUG/MEDIUM: h2: properly send an RST_STREAM on mux stream error
- BUG/MEDIUM: h2: properly send the GOAWAY frame in the mux
- BUG/MEDIUM: h2: don't try (and fail) to send non-existing data in the mux
- MEDIUM: h2: remove the H2_SS_RESET intermediate state
- BUG/MEDIUM: h2: fix some wrong error codes on connections
- BUILD: threads: Rename SPIN/RWLOCK macros using HA_ prefix
- BUILD: enable USE_THREAD for Solaris build.
- BUG/MEDIUM: h2: don't close the connection if there are data left
- MINOR: h2: don't re-enable the connection's task when we're closing
- BUG/MEDIUM: h2: properly set H2_SF_ES_SENT when sending the final frame
- BUG/MINOR: h2: correctly check for H2_SF_ES_SENT before closing
- MINOR: h2: add new stream flag H2_SF_OUTGOING_DATA
- BUG/MINOR: h2: don't send GOAWAY on failed response
- BUG/MEDIUM: splice/threads: pipe reuse list was not protected.
- BUG/MINOR: comp: fix compilation warning when compiling without compression.
- BUG/MINOR: stream-int: don't set MSG_MORE on closed request path
- BUG/MAJOR: threads/tasks: fix the scheduler again
- BUG/MINOR: ssl: Don't assume we have an ssl_bind_conf because a SNI is matched.
- MINOR: ssl: Handle session resumption with TLS 1.3
- MINOR: ssl: Spell 0x10101000L correctly.
- MINOR: ssl: Handle sending early data to server.
- BUILD: ssl: fix build of backend without ssl
- BUILD: shctx: do not depend on openssl anymore
- BUG/MINOR: h1: make the HTTP/1 status code parser check for digits
- BUG/MEDIUM: h2: reject non-3-digit status codes
- BUG/MEDIUM: stream-int: Don't lose write notifications when a stream is woken up
- BUG/MINOR: pattern: Rely on the sample type to copy it in pattern_exec_match
- BUG/MEDIUM: h2: split the function to send RST_STREAM
- BUG/MEDIUM: h1: ensure the chunk size parser can deal with full buffers
- MINOR: tools: don't use unlikely() in hex2i()
- BUG/MEDIUM: h2: support orphaned streams
- BUG/MEDIUM: threads/cli: fix "show sess" locking on release
- CLEANUP: mux: remove the unused "release()" function
- MINOR: cli: make "show fd" report the fd's thread mask
- BUG/MEDIUM: stream: don't ignore res.analyse_exp anymore
- CLEANUP: global: introduce variable pid_bit to avoid shifts with relative_pid
- MEDIUM: http: always reject the "PRI" method
Willy Tarreau [Fri, 10 Nov 2017 18:38:10 +0000 (19:38 +0100)]
MEDIUM: http: always reject the "PRI" method
This method is reserved for the HTTP/2 connection preface, must never
be used otherwise and must be rejected. In normal situations it doesn't
happen, but it may be visible if a TCP frontend has alpn "h2" enabled
and forwards to an HTTP backend which tries to parse the request. Before
this patch the wrong request would be passed to the backend server; now
a 400 bad request is properly returned.
This patch should probably be backported to stable versions.
Willy Tarreau [Fri, 10 Nov 2017 18:08:14 +0000 (19:08 +0100)]
CLEANUP: global: introduce variable pid_bit to avoid shifts with relative_pid
At a number of places, bitmasks are used for process affinity and to map
listeners to processes. Every time 1UL<<(relative_pid-1) is used. Let's
create a "pid_bit" variable corresponding to this value to clean this up.
It happens that no single analyser has ever needed to set res.analyse_exp,
so process_stream() didn't consider it when computing the next task
expiration date. Since Lua actions were introduced in 1.6, it can be
needed by http-response actions for example, so let's ensure it's properly
handled.
Thanks to Nick Dimov for reporting this bug. The fix needs to be
backported to 1.7 and 1.6.
Willy Tarreau [Fri, 10 Nov 2017 15:53:09 +0000 (16:53 +0100)]
MINOR: cli: make "show fd" report the fd's thread mask
This is useful to know what thread(s) an fd is scheduled to be
handled on. It's worth noting that at the moment the "show fd"
command doesn't seem totally thread-safe.
Willy Tarreau [Fri, 10 Nov 2017 15:43:05 +0000 (16:43 +0100)]
CLEANUP: mux: remove the unused "release()" function
In commit 53a4766 ("MEDIUM: connection: start to introduce a mux layer
between xprt and data") we introduced a release() function which ends
up never being used. Let's get rid of it now.
Willy Tarreau [Fri, 10 Nov 2017 10:42:33 +0000 (11:42 +0100)]
BUG/MEDIUM: h2: support orphaned streams
When a stream_interface performs a shutw() then a shutr(), the stream
is marked closed. Then cs_destroy() calls h2_detach() and it cannot
fail since we're on the leaving path of the caller. The problem is that
in order to close streams we usually have to send either an empty DATA
frame with the ES flag set or an RST_STREAM frame, and the mux buffer
might already be full, forcing the stream to be queued. The forced
removal of this stream causes this last message to silently disappear,
and the client to wait forever for a response.
This commit ensures we can detach the conn_stream from the h2 stream
if the stream is blocked, effectively making the h2 stream an orphan,
ensures that the mux can deal with orphaned streams after processing
them, and that the demux can kill them upon receipt of GOAWAY.
Willy Tarreau [Fri, 10 Nov 2017 10:19:54 +0000 (11:19 +0100)]
MINOR: tools: don't use unlikely() in hex2i()
This small inline function causes some pain to the compiler when used
inside other functions due to its use of the unlikely() hint for non-digits.
It causes the letters to be processed far away in the calling function and
makes the code less efficient. Removing these unlikely() hints sped up
chunk size parsing by around 5%.
Willy Tarreau [Fri, 10 Nov 2017 10:17:08 +0000 (11:17 +0100)]
BUG/MEDIUM: h1: ensure the chunk size parser can deal with full buffers
The HTTP/1 code always has the reserve left available so the buffer is
never full there. But with HTTP/2 we have to deal with full buffers,
and it happens that the chunk size parser cannot tell the difference
between a full buffer and an empty one since it compares the start and
the stop pointer.
Let's change this to instead deal with the number of bytes left to process.
As a side effect, this code ends up being about 10% faster than the previous
one, even on HTTP/1.
Willy Tarreau [Fri, 10 Nov 2017 09:05:24 +0000 (10:05 +0100)]
BUG/MEDIUM: h2: split the function to send RST_STREAM
There is an issue with how the RST_STREAM frames are sent. Some of
them are sent from the demux, either for valid or for closed streams,
and some are sent from the mux always for valid streams. At the moment
the demux stream ID is used, which is wrong for all streams being muxed,
and sometimes results in certain bad HTTP responses causing the emission
of an RST_STREAM referencing stream zero. In addition, the stream's
blocked flags could be updated even if the stream was the shared closed
or idle one.
We really need to split the function for the two distinct use cases where
one is used to send an RST on a condition detected at the connection level
(such as a closed stream) and the other one is used to send an RST for a
condition detected at the stream level. The first one is used only in the
demux, and the other one only by a valid stream.
BUG/MINOR: pattern: Rely on the sample type to copy it in pattern_exec_match
To be thread safe, the function pattern_exec_match copies data (the pattern
and the inner sample) into thread-local variables. But when the sample is
duplicated, we must check its own type and not the pattern's.
This is specific to threads, no backport is needed.
BUG/MEDIUM: stream-int: Don't lose write notifications when a stream is woken up
When a write activity is reported on a channel, it is important to keep this
information for the stream because it takes part in triggering the analyzers.
When some data are written, the flag CF_WRITE_PARTIAL is set. It participates
in the task's timeout updates and in the stream's waking. It is also used in
the CF_MASK_ANALYSER mask to trigger the channel analyzers. In the past, it was
cleared by process_stream. Because of a bug (fixed in commit 95fad5ba4
["BUG/MAJOR: stream-int: don't re-arm recv if send fails"]), it is now cleared
before each send and in stream_int_notify. So it is possible to lose this
information before process_stream is called, preventing the analyzers from
being called, and possibly leading to a stalled stream.
Today, this happens in HTTP/2 when you request the stat page or when you use
the cache filter. In fact, it happens whenever the response is sent by an
applet. In HTTP/1, everything seems to work as expected.
To fix the problem, we need to distinguish the write activity reported to the
lower layers from the one reported to the stream. So the flag CF_WRITE_EVENT
has been added to notify the stream of write activity on a channel. It is set
when a send succeeds and reset by process_stream. It is also used in
CF_MASK_ANALYSER. Finally, it is checked in stream_int_notify to wake up a
stream and in channel_check_timeouts.
This bug is probably present in 1.7 but it seems to have no effect there, so
for now there is no need to backport it.
Willy Tarreau [Thu, 9 Nov 2017 10:23:00 +0000 (11:23 +0100)]
BUG/MEDIUM: h2: reject non-3-digit status codes
If the H1 parser reported a status code length not consisting of
exactly 3 digits, the error case was confused with a lack of buffer
room and caused the parser to loop infinitely.
Willy Tarreau [Thu, 9 Nov 2017 10:15:45 +0000 (11:15 +0100)]
BUG/MINOR: h1: make the HTTP/1 status code parser check for digits
The H1 parser used by the H2 gateway was a bit lax and could validate
non-numbers in the status code. Since it computes the code on the fly
it's problematic, as "30:" is read as status code 310. Let's properly
check that it's a number now. No backport needed.
Willy Tarreau [Wed, 8 Nov 2017 13:33:36 +0000 (14:33 +0100)]
BUILD: shctx: do not depend on openssl anymore
The build breaks on a machine without openssl/crypto.h because shctx
still loads openssl-compat.h while it doesn't need it anymore since
the code was moved:
In file included from src/shctx.c:20:0:
include/proto/openssl-compat.h:3:28: fatal error: openssl/crypto.h: No such file or directory
#include <openssl/crypto.h>
Willy Tarreau [Wed, 8 Nov 2017 13:25:59 +0000 (14:25 +0100)]
BUILD: ssl: fix build of backend without ssl
Commit 522eea7 ("MINOR: ssl: Handle sending early data to server.") added
a dependency on SRV_SSL_O_EARLY_DATA which only exists when USE_OPENSSL
is defined (which is probably not the best solution) and breaks the build
when ssl is not enabled. Just add an ifdef USE_OPENSSL around the block
for now.
This adds a new keyword on the "server" line, "allow-0rtt". If set, we'll try
to send early data to the server, as long as the client sent early data. If
the server rejects the early data, we no longer have them and can't resend
them, so the only option we have is to send back a 425, and we need to be
sure the client knows how to interpret it correctly.
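An illustrative configuration fragment using the new keyword; addresses and names are hypothetical:

```
backend tls13_servers
    mode http
    # allow-0rtt asks HAProxy to forward the client's TLS early data
    # to this server when the client sent some.
    server srv1 192.168.0.10:443 ssl verify none allow-0rtt
```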
MINOR: ssl: Handle session resumption with TLS 1.3
With TLS 1.3, sessions aren't established until after the main handshake
has completed. So we can't just rely on calling SSL_get1_session(). Instead,
we now register a callback for the "new session" event. This should work for
previous versions of TLS as well.
Willy Tarreau [Wed, 8 Nov 2017 13:05:19 +0000 (14:05 +0100)]
BUG/MAJOR: threads/tasks: fix the scheduler again
My recent change in commit ce4e0aa ("MEDIUM: task: change the construction
of the loop in process_runnable_tasks()") was bogus as it used to keep the
rq_next across an unlock/lock sequence, occasionally leading to crashes for
tasks that are eligible to any thread. We must use the lookup call for each
new batch instead. The problem is easily triggered with a configuration
such as:
    global
        nbthread 4

    listen check
        mode http
        bind 0.0.0.0:8080
        redirect location /
        option httpchk GET /
        server s1 127.0.0.1:8080 check inter 1
        server s2 127.0.0.1:8080 check inter 1
Thanks to Olivier for diagnosing this one. No backport is needed.
Willy Tarreau [Tue, 7 Nov 2017 14:07:25 +0000 (15:07 +0100)]
BUG/MINOR: stream-int: don't set MSG_MORE on closed request path
Commit 4ac4928 ("BUG/MINOR: stream-int: don't set MSG_MORE on SHUTW_NOW
without AUTO_CLOSE") was incomplete. H2 reveals another situation where
the input stream is marked closed with the request and we set MSG_MORE,
causing a delay before the request leaves.
Better avoid setting the flag on the request path for close cases in
general.