Willy Tarreau [Thu, 22 Dec 2016 14:59:02 +0000 (15:59 +0100)]
MEDIUM: spoe: don't create a dummy listener for outgoing connections
The code currently creates a listener only to ensure that sess->li is
properly populated, and to retrieve the frontend (which is also available
directly from the session).
It turns out that the current infrastructure (for the most part) already
supports not having any listener on a session (since Lua does the same),
except for the following places which were not yet converted:
- session_count_new() : used by session_accept_fd(), i.e. never for spoe
- session_accept_fd() : never used here, an applet initiates the session
- session_prepare_log_prefix() : embryonic sessions only, thus unused
- session_kill_embryonic() : same
- conn_complete_session() : same
- build_log_line() for fields %cp, %fp and %ft : unused here but may change
- http_wait_for_request() and subsequent functions : unused here
Thus for now it's as safe to run SPOE without a listener as it is for Lua,
and this was an obstacle to some cleanups of the listener code. The
places above should be plugged so that it becomes safe over the long term
as well.
An alternative in the future might be to create a dummy listener that
outgoing connections could use just to avoid keeping a null here.
Willy Tarreau [Thu, 22 Dec 2016 17:14:41 +0000 (18:14 +0100)]
MINOR: tcp-rules: check that the listener exists before updating its counters
The tcp rules may be applied to a TCP stream initiated by applets (spoe,
lua, peers, later H2). These do not necessarily have a valid listener,
so we must verify the field is not null before updating the stats. For now
there's no way to trigger this bug because lua and peers don't have analysers,
h2 is not implemented and spoe has a dummy listener. But this threatens to
break at any moment.
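As a rough illustration of the guard this describes (the field and counter names below are assumptions, not taken from the patch):
```
/* illustrative only: update listener counters only when the session
 * actually has a listener; applet-initiated sessions may not */
if (sess->fe)
	sess->fe->fe_counters.denied_req++;
if (sess->listener && sess->listener->counters)
	sess->listener->counters->denied_req++;
```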
Thierry FOURNIER [Mon, 19 Dec 2016 15:50:42 +0000 (16:50 +0100)]
BUG/MINOR: stats: fix be/sessions/current output in typed stats
"scur" was typed as "limit" (FO_CONFIG) and "config value" (FN_LIMIT).
The real types of "scur" are "metric" (FO_METRIC) and "gauge"
(FN_GAUGE). FO_METRIC and FN_GAUGE are the value 0.
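For illustration, such a one-line type fix could look like the sketch below; mkf_u32(), ST_F_SCUR and px->beconn are assumptions about the stats code, not copied from the patch:
```
/* before: origin/nature flags copied by mistake from a limit field */
stats[ST_F_SCUR] = mkf_u32(FO_CONFIG|FN_LIMIT, px->beconn);
/* after: FO_METRIC and FN_GAUGE are both 0, so no flag is needed */
stats[ST_F_SCUR] = mkf_u32(0, px->beconn);
```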
Willy Tarreau [Thu, 22 Dec 2016 16:57:46 +0000 (17:57 +0100)]
BUG/MEDIUM: ssl: avoid double free when releasing bind_confs
ssl_sock functions don't mark pointers as NULL after freeing them. So
if a "bind" line specifies some SSL settings without the "ssl" keyword,
they will get freed at the end of check_config_validity(), then freed
a second time on exit. Simply mark the pointers as NULL to fix this.
This fix needs to be backported to 1.7 and 1.6.
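The general shape of the fix, shown as a standalone sketch rather than the actual ssl_sock code:
```
#include <stdlib.h>

/* illustrative subset of the SSL settings attached to a "bind" line */
struct bind_ssl_conf {
	char *ciphers;
	char *ca_file;
};

/* Freeing without clearing leaves dangling pointers which a later
 * cleanup pass (eg. at exit) will free a second time. Setting them
 * to NULL makes that second pass harmless. */
static void release_ssl_conf(struct bind_ssl_conf *conf)
{
	free(conf->ciphers);
	conf->ciphers = NULL;
	free(conf->ca_file);
	conf->ca_file = NULL;
}
```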
Willy Tarreau [Thu, 22 Dec 2016 20:54:21 +0000 (21:54 +0100)]
BUG/MEDIUM: ssl: properly reset the reused_sess during a forced handshake
We have a bug when SSL reuse is disabled on the server side: we reset
the context but do not set it to NULL, causing multiple frees of the
same entry. It seems like this bug cannot appear as-is with the current
code (or the conditions to get it are not obvious) but it did definitely
strike when trying to fix another bug with the SNI which forced a new
handshake.
This fix should be backported to 1.7, 1.6 and 1.5.
Willy Tarreau [Thu, 22 Dec 2016 18:46:17 +0000 (19:46 +0100)]
MEDIUM: compression: move the zlib-specific stuff from global.h to compression.c
This finishes cleaning up the zlib-specific parts. It also unbreaks recent
commit b97c6fb ("CLEANUP: compression: use the build options list to report
the algos") which broke USE_ZLIB due to MAXWBITS not being defined anymore
in haproxy.c.
It's worth mentioning that some of them used to have incorrect sign
checks possibly resulting in some negative values being used. All of
them are now checked for being positive.
Willy Tarreau [Wed, 21 Dec 2016 21:44:46 +0000 (22:44 +0100)]
MINOR: cfgparse: move parsing of "ca-base" and "crt-base" to ssl_sock
This removes 2 #ifdefs and makes the code much cleaner. The controls
are still there and the two parsers have been merged into a single
function ssl_parse_global_ca_crt_base().
It's worth noting that there's still a check to prevent a change when
the value was already specified. This test seems useless and possibly
counter-productive; it may have to be revisited later, but for now it
was implemented identically.
Willy Tarreau [Wed, 21 Dec 2016 21:41:44 +0000 (22:41 +0100)]
MINOR: cfgparse: add two new functions to check arguments count
We already had alertif_too_many_args{,_idx}(), but those are
specifically designed for use in cfgparse. Outside of it we're
trying to avoid calling Alert() all the time so we need an
equivalent using a pointer to an error message.
The new functions, too_many_args{,_idx}(), do exactly this.
They don't take the file name nor the line number which they have
no use for but instead they take an optional pointer to an error
message and the pointer to the error code is optional as well.
With (NULL, NULL) they'll simply check the validity and return a
verdict. They are quite convenient for use in isolated keyword
parsers.
These two new functions as well as the previous ones have all been
exported.
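A hypothetical use in an isolated keyword parser; the prototype below is inferred from the description above, not copied from the patch:
```
/* accept at most one argument after the keyword; on excess the helper
 * fills *err with an allocated message and returns non-zero. Passing
 * (NULL, NULL) instead would only return the verdict. */
static int parse_my_keyword(char **args, char **err)
{
	if (too_many_args(1, args, err, NULL))
		return -1;

	/* ... handle args[0] (the keyword) and args[1] ... */
	return 0;
}
```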
Willy Tarreau [Wed, 21 Dec 2016 20:25:06 +0000 (21:25 +0100)]
CLEANUP: da: move global settings out of the global section
We replaced global.deviceatlas with global_deviceatlas since there's no need
to store all this into the global section. This removes the last #ifdefs,
and now the code is 100% self-contained in da.c. The file da.h has been
removed because it was only used to load dac.h, which is more easily
loaded directly from da.c. It provides another good example of how to
integrate code in the future without touching the core parts.
Willy Tarreau [Wed, 21 Dec 2016 20:18:44 +0000 (21:18 +0100)]
CLEANUP: 51d: move global settings out of the global section
We replaced global._51degrees with global_51degrees since there's no need
to store all this into the global section. This removes the last #ifdefs,
and now the code is 100% self-contained in 51d.c. The file 51d.h has been
removed because it was only used to load 51Degrees.h, which is more easily
loaded from 51d.c. It provides a good example of how to integrate code in
the future without touching the core parts.
Willy Tarreau [Wed, 21 Dec 2016 13:57:34 +0000 (14:57 +0100)]
CLEANUP: wurfl: move global settings out of the global section
We replaced global.wurfl with global_wurfl since there's no need to store
all this into the global section. This removes the last #ifdefs, and now
the code is 100% self-contained in wurfl.c. It provides a good example of
how to integrate code in the future without touching the core parts.
Willy Tarreau [Wed, 21 Dec 2016 19:59:01 +0000 (20:59 +0100)]
CLEANUP: 51d: register the deinitialization function
deinit_51degrees() is not called anymore from haproxy.c, removing
2 #ifdefs and one include. The function was made static. The include
file still includes 51Degrees.h which is needed by global.h and 51d.c
so it was not touched beyond this last function removal.
Willy Tarreau [Wed, 21 Dec 2016 19:52:38 +0000 (20:52 +0100)]
CLEANUP: wurfl: register the deinit function via the dedicated list
By registering the deinit function we avoid another #ifdef in haproxy.c.
The ha_wurfl_deinit() function has been made static and unexported. Now
proto/wurfl.h is totally empty, the code being self-contained in wurfl.c,
so the useless .h has been removed.
Willy Tarreau [Wed, 21 Dec 2016 19:46:26 +0000 (20:46 +0100)]
MINOR: haproxy: add a registration for post-deinit functions
The 3 device detection engines stop at the same place in deinit()
with the usual #ifdefs. Similar to the other functions, we can have
some late deinitialization functions. These functions do not return
anything, however, so we have to use a different type.
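A minimal sketch of such a registration mechanism, using HAProxy-style list macros; the structure, function names and error handling are assumptions:
```
/* registry of functions to run after deinit(); they return nothing */
struct post_deinit_fct {
	struct list list;
	void (*fct)(void);
};

static struct list post_deinit_list = LIST_HEAD_INIT(post_deinit_list);

void hap_register_post_deinit(void (*fct)(void))
{
	struct post_deinit_fct *p = calloc(1, sizeof(*p));

	if (!p)
		exit(1);
	p->fct = fct;
	LIST_ADDQ(&post_deinit_list, &p->list);
}

/* called once at the very end of deinit() */
static void run_post_deinit(void)
{
	struct post_deinit_fct *p;

	list_for_each_entry(p, &post_deinit_list, list)
		p->fct();
}
```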
Willy Tarreau [Wed, 21 Dec 2016 19:39:16 +0000 (20:39 +0100)]
CLEANUP: da: make use of the late init registration code
Instead of having a #ifdef in the main init code we now use the registered
init functions. Doing so also enables error checking as errors were previously
reported as alerts but ignored. Also they were incorrect as the 'status'
variable was hidden by a second one and was always reporting DA_SYS (which
is apparently an error) in every case including the case where no file was
loaded. The init_deviceatlas() function was unexported since it's not used
outside of this place anymore.
Willy Tarreau [Wed, 21 Dec 2016 19:30:05 +0000 (20:30 +0100)]
CLEANUP: 51d: make use of the late init registration
This removes some #ifdefs from the main haproxy code path. Function
init_51degrees() now returns ERR_* instead of exit(1) on error, and
this function was made static and is not exported anymore.
Willy Tarreau [Wed, 21 Dec 2016 19:20:17 +0000 (20:20 +0100)]
CLEANUP: wurfl: make use of the late init registration
This removes some #ifdefs from the main haproxy code path and enables
error checking. The current code only makes use of warnings even for
some errors that look serious. While this choice is questionable, it
has been kept as-is, and only the return codes were adapted to ERR_WARN
to at least report that some warnings were emitted. ha_wurfl_init() was
unexported as it's not needed anymore.
Willy Tarreau [Wed, 21 Dec 2016 19:04:48 +0000 (20:04 +0100)]
CLEANUP: checks: make use of the post-init registration to start checks
Instead of calling the checks directly from the init code, we now
register the start_checks() function to be run at this point. This
also allows us to unexport the check init function and to remove one
include from haproxy.c.
Willy Tarreau [Wed, 21 Dec 2016 18:57:00 +0000 (19:57 +0100)]
MINOR: haproxy: add a registration for post-check functions
There's a significant amount of late initialization calls which are
performed after the point where we exit in check mode. These calls
are used to allocate resources and perform certain slow operations.
Let's have a way to register some functions which need to be called
there instead of having this multitude of #ifdefs in the init path.
Willy Tarreau [Wed, 21 Dec 2016 18:30:30 +0000 (19:30 +0100)]
CLEANUP: compression: use the build options list to report the algos
This removes 2 #ifdefs, an include, an ugly construct and a wild "extern"
declaration from haproxy.c. The message indicating that compression is
*not* enabled is not there anymore.
Willy Tarreau [Wed, 21 Dec 2016 17:43:10 +0000 (18:43 +0100)]
MINOR: haproxy: add a registration for build options
Many extensions now report some build options to ease debugging, but
this is now being done at the expense of code maintainability. Let's
provide a registration function to do this so that we can start to
remove most of the #ifdefs from haproxy.c (18 currently just for a
single function).
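For example, an extension could announce itself at load time roughly as below; the registration function's name and its "must free" flag are assumptions drawn from the description:
```
/* run before main() so the string is known before -vv is processed;
 * the second argument is assumed to mean "free this string on exit" */
__attribute__((constructor))
static void announce_my_extension(void)
{
	char *ptr = NULL;

	memprintf(&ptr, "Built with my-extension support (version %s).", "1.0");
	hap_register_build_opts(ptr, 1);
}
```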
Thierry FOURNIER [Sat, 17 Dec 2016 11:09:51 +0000 (12:09 +0100)]
BUG/MINOR: lua: memleak when Lua/cli fails
If the memory allocator fails, it returns an error code, but execution
continues. If the Lua/cli initializer fails, the allocated struct is not
released.
When data is sent from a cosocket, the action is done in the context of the
applet running a Lua stack and not in the context of the applet owning the
cosocket. So we must explicitly take care that this latter applet has a buffer
(the req buffer of the cosocket).
Thierry FOURNIER [Fri, 16 Dec 2016 07:48:32 +0000 (08:48 +0100)]
MINOR/DOC: lua: just clarify one thing
In the case of an applet, the Lua context is taken from the session
when we get the private values. This patch just updates the comments
associated with this action because it is not obvious.
Willy Tarreau [Fri, 16 Dec 2016 07:02:21 +0000 (08:02 +0100)]
CONTRIB: tcploop: add limits.h to fix build issue with some compilers
Just got this while cross-compiling:
tcploop.c: In function 'tcp_recv':
tcploop.c:444:48: error: 'INT_MAX' undeclared (first use in this function)
tcploop.c:444:48: note: each undeclared identifier is reported only once for each function it appears in
Willy Tarreau [Fri, 16 Dec 2016 17:47:27 +0000 (18:47 +0100)]
MINOR: appctx/cli: remove the "tlskeys" entry from the appctx union
This one now migrates to the general purpose cli.p0 for the ref pointer,
cli.i0 for the dump_all flag and cli.i1 for the dump_keys_index. A few
comments were added.
The applet.h file doesn't depend on openssl anymore. It's worth noting
that the previous dependency was accidental and only used to work because
all files including this one used to have openssl included prior to
loading this file.
Willy Tarreau [Fri, 16 Dec 2016 17:23:39 +0000 (18:23 +0100)]
MINOR: appctx/cli: remove the "server_state" entry from the appctx union
This one now migrates to the general purpose cli.p0 for the proxy pointer,
cli.p1 for the server pointer, and cli.i0 for the proxy's instance if only
one has to be dumped.
Willy Tarreau [Fri, 16 Dec 2016 11:37:03 +0000 (12:37 +0100)]
MINOR: cli: add two general purpose pointers and integers in the CLI struct
Most of the keywords don't need to have their own entry in the appctx
union, they just need to reuse some generic pointers like we're used
to doing in the appctx with st{0,1,2}. This patch adds p0, p1, i0, i1
and initializes them to zero before calling the parser. This way some
of the simplest existing keywords will be able to disappear from the
union.
It's worth noting that this is an extension to what was initially
attempted via the "private" member that I removed a few patches ago
because I did not understand how it was supposed to be used. Here the fact that
we share the same union will force us to be stricter: the code either
uses the general purpose variables or it uses its own fields but not
both.
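A simplified sketch of the storage this adds (the real appctx contains many more members than shown here):
```
struct appctx {
	/* ... st0/st1/st2 and the other common fields ... */
	union {
		struct {
			void *p0, *p1; /* general purpose pointers for CLI keywords */
			int i0, i1;    /* general purpose integers, zeroed before parse() */
			/* CLI-specific fields (error message, ...) follow in the real code */
		} cli;
		/* dedicated contexts for the heavier keywords remain here */
	} ctx;
};
```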
Willy Tarreau [Fri, 16 Dec 2016 11:33:47 +0000 (12:33 +0100)]
CLEANUP: stats: move a misplaced stats context initialization
This is a leftover from the cleanup campaign, the stats scope was still
initialized by the CLI instead of being initialized by the stats keyword
parsers. This should probably be backported to 1.7 to make the code more
consistent.
Willy Tarreau [Fri, 16 Dec 2016 11:14:12 +0000 (12:14 +0100)]
CLEANUP: applet: group all CLI contexts together
The appctx storage became a real mess over the years. It now contains
mostly CLI-specific parts that share the same storage as the "cli" part
which in fact only contains the fields needed to pass an error message
to the caller, and it also has room for a few other regular applets which
may become more and more common.
This first patch moves the parts around in the union so that all
standard applet parts are grouped together and the CLI-specific ones
are grouped together. It also adds a few comments to indicate what
certain parts are used for since it's sometimes a bit confusing.
Willy Tarreau [Fri, 16 Dec 2016 16:59:25 +0000 (17:59 +0100)]
MINOR: cli: automatically enable a CLI I/O handler when there's no parser
Sometimes a registered keyword will not need any specific parsing or
initialization, so it's annoying to have to write an empty parsing
function returning zero just for this.
This patch makes it possible to automatically call a keyword's I/O
handler when the parsing function is not defined, while still allowing
a parser to set the I/O handler itself.
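The dispatch can be sketched as below; the cli_kw fields and return conventions are assumptions drawn from the description:
```
/* after looking up the keyword <kw> matching the command in <args> */
if (!kw->parse) {
	/* no parser: directly install the keyword's I/O handler */
	appctx->io_handler = kw->io_handler;
	appctx->io_release = kw->io_release;
}
else if (kw->parse(args, appctx, kw->private) != 0) {
	/* non-zero: processing is finished (error or immediate completion) */
	return 1;
}
```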
Thierry FOURNIER [Fri, 16 Dec 2016 12:07:22 +0000 (13:07 +0100)]
MINOR: lua/signals: Remove Lua part from signals.
The signals system embedded in Lua can be transformed into general purpose
signals code. To reach this goal, this patch removes the Lua part of the
signals.
This is an easy job, because Lua is useless with signals; only two
prototypes change.
Thierry FOURNIER [Fri, 16 Dec 2016 10:54:07 +0000 (11:54 +0100)]
MEDIUM: lua: use memory pool for hlua struct in applets
The struct hlua size is 128 bytes. This size is the biggest of all the elements
of the union embedded in the appctx struct. With HTTP/2, it is possible that this
appctx struct will be used many times for each connection, so the 128 bytes are
a little bit heavy for the global memory consumption.
This patch replaces the embedded hlua struct with a pointer and an associated memory
pool. Now, the memory for Lua is allocated only if it is required.
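A sketch of the pool-based allocation using the 1.7-era pool API (pool_create2()/pool_alloc2()); struct my_applet_ctx and the call sites are purely illustrative:
```
static struct pool_head *pool2_hlua;

/* done once at startup */
static void hlua_init_pool(void)
{
	pool2_hlua = pool_create2("hlua", sizeof(struct hlua), MEM_F_SHARED);
}

/* done only when an applet really needs a Lua context */
static int hlua_applet_alloc(struct my_applet_ctx *ctx)
{
	ctx->hlua = pool_alloc2(pool2_hlua);
	if (!ctx->hlua)
		return 0; /* let the caller report the allocation failure */
	return 1;
}
```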
Willy Tarreau [Fri, 16 Dec 2016 11:56:31 +0000 (12:56 +0100)]
BUG/MINOR: cli: "show cli sockets" would always report process 64
Another small bug in "show cli sockets" made the last fix always report
process 64 due to a signedness issue in the shift operation when building
the mask.
Willy Tarreau [Wed, 14 Dec 2016 14:50:35 +0000 (15:50 +0100)]
CLEANUP: applet/table: add an "action" entry in ->table context
Just like previous patch, this was the only other user of the "private"
field of the applet. It used to store a copy of the keyword's action.
Let's just put it into ->table->action and use it from there. It also
slightly simplifies the code by removing a few pointer to integer casts.
Willy Tarreau [Wed, 14 Dec 2016 14:41:45 +0000 (15:41 +0100)]
CLEANUP: applet/lua: create a dedicated ->fcn entry in hlua_cli context
We have very few users of the appctx's private field which was introduced
prior to the split of the CLI. Unfortunately it was not removed once the
split was done. This commit simply introduces hlua_cli->fcn which is the pointer to
the Lua function that the Lua code used to store in this private pointer.
Willy Tarreau [Tue, 13 Dec 2016 14:21:25 +0000 (15:21 +0100)]
BUG/MINOR: stream-int: automatically release SI_FL_WAIT_DATA on SHUTW_NOW
While developing an experimental applet performing only one read per full
line, it appeared that it would be woken up for the client's close, not
read all data (missing LF), then wait for a subsequent call, and would only
be woken up on client timeout to finish the read. The reason is that we
preset SI_FL_WAIT_DATA in the stream-interface's flags to avoid a fast loop,
but there's nothing which can remove this flag until there's a read operation.
We must definitely remove it in stream_int_notify() each time we're called
with CF_SHUTW_NOW because we know there will be no more subsequent read
and we don't want an applet which keeps the WANT_GET flag to block on this.
This fix should be backported to 1.7 and 1.6 though it's uncertain whether
cli, peers, lua or spoe really are affected there.
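The essence of the fix, as a hedged sketch of the extra condition added in stream_int_notify():
```
/* <chn> is the channel carrying CF_SHUTW_NOW when stream_int_notify()
 * is called: no more reads will ever happen, so do not leave
 * SI_FL_WAIT_DATA armed or an applet keeping WANT_GET would only be
 * woken up again on the client timeout. */
if (chn->flags & CF_SHUTW_NOW)
	si->flags &= ~SI_FL_WAIT_DATA;
```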
Willy Tarreau [Wed, 14 Dec 2016 15:44:45 +0000 (16:44 +0100)]
SCRIPTS: git-show-backports: add -H to use the hash of the commit message
Sometimes certain commits don't contain useful tracking information but
we'd still like to be able to report them. Here we implement a hash on
the author's name, e-mail and date, the subject and the body before the
first s-o-b or cherry-picked line. These parts are supposed to be
reasonably invariant across backports and are usable to compute an
invariant hash of a given commit. When we don't find ancestry in a
commit, we try this method (if -H is specified) to compare commits
hashes and we can report a match. The equivalent commit is reported
as "XXXX+?" to indicate that it's an apparent backport but we don't
know how far it goes.
Another case arises. Now HAProxy sends a final message (typically
with "http-request deny"). Once the message is sent, the response
channel flags are not modified.
HAProxy executes a Lua sample-fetch for building logs, and the
result is ignored because the response flag remains set to the value
HTTP_MSG_RPBEFORE. So the Lua function hlua_check_proto() wants to
guarantee the valid state of the buffer and asks for aborting the
request.
The function check_proto() is not the right way to ensure request
consistency. The real question is not "Is the message valid?", but
"Is the validity of the message unchanged?"
This patch memorizes the parser state before entering the Lua
code and performs a check when leaving the Lua code. If the parser
state has regressed, the request is aborted because the HTTP message
is degraded.
This patch should be backported to versions 1.6 and 1.7.
Luca Pizzamiglio [Mon, 12 Dec 2016 09:56:56 +0000 (10:56 +0100)]
BUILD/MEDIUM: Fixing the build using LibreSSL
This fixes the build using LibreSSL as the OpenSSL implementation.
Currently, LibreSSL 2.4.4 provides the same API as OpenSSL 1.0.1x,
but it redefines the OpenSSL version number as 2.0.x, breaking all
checks against OpenSSL 1.1.x.
The patch solves the issue by checking the definition of the symbol
LIBRESSL_VERSION_NUMBER when OpenSSL 1.1.x features are requested.
BUG/MAJOR: Fix how the list of entities waiting for a buffer is handled
When an entity tries to get a buffer and it cannot be allocated, for example
because the number of buffers which may be allocated per process is limited,
this entity is added to a list (called <buffer_wq>) and waits for an available
buffer.
Historically, the <buffer_wq> list was logically attached to streams because
they were the only entities likely to be added to it. Now, applets can also be
waiting for a free buffer. And with filters, we could imagine having yet more
entities waiting for a buffer. So it makes sense to have a generic list.
Anyway, with the current design there is a bug. When an applet fails to get a
buffer, it will wait. But we add the stream attached to the applet to
<buffer_wq>, instead of the applet itself. So when a buffer is available, we
wake up the stream and not the waiting applet. So it is possible to have
waiting applets that are never woken up.
So, now, <buffer_wq> is independent of streams. And we really add the waiting
entity to <buffer_wq>. To be generic, the entity is responsible for defining the
callback used to wake it up.
In addition, applets will still request an input buffer when they become
active. But they will no longer be put to sleep if no buffer is available. So it
is the responsibility of the applet I/O handler to check whether this buffer is
allocated or not. This way, an applet can decide whether this buffer is required
or not and can do additional processing if not.
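The generic waiter described above can be pictured as a small structure like this (names illustrative):
```
/* one entry per entity waiting for a buffer; the callback knows how to
 * wake the target up (eg. task_wakeup() for a stream, appctx_wakeup()
 * for an applet). */
struct buffer_wait {
	void *target;             /* the waiting entity */
	int (*wakeup_cb)(void *); /* callback used to wake <target> up */
	struct list list;         /* position in buffer_wq */
};
```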
BUG/MEDIUM: stream: Save unprocessed events for a stream
A stream can be awakened for different reasons. During its processing, it can be
stopped early if no buffer is available. In this situation, the reason why the
stream was awakened is lost, because we rely on the task state, which is reset
after each processing loop.
In many cases, that's not a big deal. But it can be useful to accumulate the
task states if the stream processing is interrupted, especially if some filters
need to be called.
To be clearer, here is a simple example:
1) A stream is awakened with the reason TASK_WOKEN_MSG.
2) Because no buffer is available, the processing is interrupted and the stream
goes back to sleep. The task state is reset.
3) Some buffers become available, so the stream is awakened with the reason
TASK_WOKEN_RES. At this step, the previous reason (TASK_WOKEN_MSG) is lost.
Now, the task states are saved for a stream and reset only when the stream
processing is not interrupted. The corresponding bitfield represents the pending
events for a stream, and we use it instead of the task state during the
stream processing.
Note that TASK_WOKEN_TIMER and TASK_WOKEN_RES are always removed because these
events are always handled during the stream processing.
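A sketch of the accumulation described above; <interrupted> stands for whatever condition (such as a missing buffer) aborts the processing early:
```
/* top of the stream processing function: accumulate wake-up reasons */
s->pending_events |= (t->state & TASK_WOKEN_ANY);

/* ... processing, possibly aborted early if no buffer is available ... */

/* timer and resource events are always handled, so never keep them */
s->pending_events &= ~(TASK_WOKEN_TIMER | TASK_WOKEN_RES);

if (!interrupted)
	s->pending_events = 0; /* fully processed: start clean on the next wake-up */
```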
MINOR: task: Rename run_queue and run_queue_cur counters
<run_queue> is used to track the number of tasks in the run queue and
<run_queue_cur> is a copy used for reporting purposes. These counters have
been renamed, respectively, to <tasks_run_queue> and <tasks_run_queue_cur>, so
the naming is consistent between tasks and applets.
[wt: needed for next fixes, backport to 1.7 and 1.6]
[wt: while it could seem suspicious, the preceding call to
dump_servers_state() indeed flushes the trash in case anything is
emitted. No backport needed though.]
MINOR: Do not forward the header "Expect: 100-continue" when the option http-buffer-request is set
When the option "http-buffer-request" is set, HAProxy itself sends the
"HTTP/1.1 100 Continue" response in order to retrieve the POST content.
When HAProxy forwards the request, it sends the body directly after the
headers. The header "Expect: 100-continue" was sent with the headers.
This header is useless because the body will be sent in all cases, and
the server response is not removed by haproxy.
This patch removes the header "Expect: 100-continue" if HAProxy sent it
itself.
Marcin Deranek [Mon, 12 Dec 2016 13:08:05 +0000 (14:08 +0100)]
MINOR: proxy: Add fe_name/be_name fetchers next to existing fe_id/be_id
These 2 patches add the ability to fetch the frontend/backend name in your
logic, so they can be used later to make routing decisions (fe_name) or
to take some actions based on the backend which responded to the request (be_name).
In our case we needed a fetcher to be able to extract information we
needed from the frontend name.
Willy Tarreau [Sun, 11 Dec 2016 21:12:33 +0000 (22:12 +0100)]
BUILD: rearrange target files by build time
When doing a parallel build on multiple CPUs it's common that at the end
a few CPUs only are busy compiling very large files while the other ones
have finished. By placing the largest files first, we can ensure that in
the worst case they are present from the beginning to the end, and that
other processes are free to take smaller files. This ordering was made
based on a measurement consisting of counting the number of times a given
file appears in the build. The top ten looks like this:
Only a few files were moved, ssl_sock would need to be moved as well but
that would not be a convenient thing to do in the makefile. This new
order saves about 10-15% of build time on 4 CPUs, which is nice.
(http|tcp)-(request|response) actions cannot take arguments from the
configuration file. Arguments are useful for executing the action with
a special context.
This patch adds the possibility of passing arguments to an action. It
runs exactly like sample fetches and other Lua wrappers.
BUG/MEDIUM: variables: some variable names can hide others
Variables are compared only by their text; the final '\0' (or the
string length) is not checked. So the variable name "txn.internal"
matches another one called "txn.int".
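The flaw and its fix can be illustrated with a standalone comparison helper (this is not the actual vars.c code):
```
#include <string.h>

/* buggy: only compares <len> bytes, so "txn.int" matches the first
 * characters of "txn.internal" and the two names collide */
static int var_name_eq_buggy(const char *a, const char *b, size_t len)
{
	return memcmp(a, b, len) == 0;
}

/* fixed: names are equal only if their lengths match too */
static int var_name_eq(const char *a, size_t alen,
                       const char *b, size_t blen)
{
	return alen == blen && memcmp(a, b, alen) == 0;
}
```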
Matthieu Guegan [Mon, 5 Dec 2016 10:35:54 +0000 (11:35 +0100)]
BUG/MINOR: http: don't send an extra CRLF after a Set-Cookie in a redirect
While investigating a keep-alive issue with CloudFlare, we[1] found that
when using the 'set-cookie' option in a redirect (302), HAProxy adds
an extra `\r\n`.
Triggering rule:
`http-request redirect location / set-cookie Cookie=value if [...]`
Expected result:
```
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /
Set-Cookie: Cookie=value; path=/;
Connection: close
```
This extra `\r\n` seems to be harmless with another HAProxy instance in
front of it (sanitizing) or when using a browser. But we confirm that
the CloudFlare NGINX implementation is not able to handle this. It
seems that the combination of 'Content-length: 0' and the extra carriage
return breaks the RFC (to be confirmed).
When looking into the code, this carriage-return was already present in
1.3.X versions but just before closing the connection, which was OK I
think. Then, with 1.4.X, the keep-alive feature was added and this piece
of code remained unchanged.
[1] all credit for the bug finding goes to CloudFlare Support Team
[wt: the bug was indeed present since the Set-Cookie was introduced
in 1.3.16, by commit 0140f25 ("[MINOR] redirect: add support for
"set-cookie" and "clear-cookie"") so backporting to all supported
versions is desired]
Ben Shillito [Fri, 2 Dec 2016 14:25:37 +0000 (14:25 +0000)]
DOC: Added 51Degrees conv and fetch functions to documentation.
Definitions and examples for 51d.single and 51d.all have been added to
configuration.txt so they now appear in the online documentation in addition
to the README. The 51degrees-property-name-list entry has also been
updated to make it clear that multiple properties can be added.
Willy Tarreau [Mon, 5 Dec 2016 13:50:15 +0000 (14:50 +0100)]
BUG/MEDIUM: cli: fix "show stat resolvers" and "show tls-keys"
The recent CLI reorganization managed to break these two commands
by having their parser return 1 (indicating an end of processing)
instead of 0 to indicate new calls to the io handler were needed.
Namely, the faulty commits are 69e9644 ("REORG: cli: move show stat resolvers to dns.c") and 32af203 ("REORG: cli: move ssl CLI functions to ssl_sock.c").
The fix is trivial and there is no other loss of functionality. Thanks
to Dragan Dosen for reporting the issue and the faulty commits. The
backport is needed in 1.7.
Dragan Dosen [Thu, 24 Nov 2016 10:33:12 +0000 (11:33 +0100)]
BUG/MINOR: cli: allow the backslash to be escaped on the CLI
In 1.5-dev20, commit 48bcfda ("MEDIUM: dumpstat: make the CLI parser
understand the backslash as an escape char") introduced support for
backslash on the CLI, but it strips all backslashes in all arguments
instead of only unescaping them, making it impossible to pass a
backslash in an argument.
This will allow us to use a backslash in a command over the socket, e.g.
"add acl #0 ABC\\XYZ".
Willy Tarreau [Tue, 29 Nov 2016 20:47:02 +0000 (21:47 +0100)]
OPTIM: stream-int: don't disable polling anymore on DONT_READ
Commit 5fddab0 ("OPTIM: stream_interface: disable reading when
CF_READ_DONTWAIT is set") improved the connection layer's efficiency
back in 1.5-dev13 by avoiding successive read attempts on an active
FD. But by disabling this on a polled FD, it causes an unpleasant
side effect which is that the FD that was subscribed to polling is
suddenly stopped and may need to be re-enabled once the kernel
starts to slow down on data eviction (eg: saturated server at the
other end, bursty traffic caused by too large maxpollevents).
This behaviour is observable with persistent connections when there
is a large enough connection count so that there's no data in the
early connection and polling is required, because there are then
up to 4 epoll_ctl() calls per request. It's important that the
server is slower than haproxy to cause some delays when reading the
response.
The current connection layer as designed in 1.6 with the FD cache
doesn't require this trick anymore, though it still benefits from
it when it saves an FD from being uselessly polled. But compared
to the increased cost of enabling and disabling poll all the time,
it's still better to disable it. In some cases it's possible to
observe a performance increase as high as 30% by avoiding this
epoll_ctl() dance.
In the end we only want to disable it when the FD is speculatively
read and not when it's polled. For this we introduce a new function
__conn_data_done_recv() which is used to indicate that we're done
with recv() and not interested in new attempts. If/when we later
support event-triggered epoll, this function will have to change
a bit to do the same even in the polled case.
A quick test with keep-alive requests run on a dual-core / dual-
thread Atom shows a significant improvement:
single process, 0 bytes :
before: Requests per second: 12243.20 [#/sec] (mean)
after: Requests per second: 13354.54 [#/sec] (mean)
single process, 4k :
before: Requests per second: 9639.81 [#/sec] (mean)
after: Requests per second: 10991.89 [#/sec] (mean)