This function should be called by the poller to set FD_POLL_* flags on an FD and
update its state if needed. It has been added to ease the integration of
threads support.
BUG/MEDIUM: epoll: ensure we always consider HUP and ERR
Since commit 5be2f35 ("MAJOR: polling: centralize calls to I/O callbacks")
that came into 1.6-dev1, each poller deals with its own events and decides
to signal ability to receive or send on a file descriptor based on the
active events on the file descriptor.
The commit above was incorrectly done for the epoll code. Instead of
checking the active events on the fd, it checks for the new events. In
general these ones are the same for POLL_IN and POLL_OUT since they
are always cleared prior to being computed, but it is possible that
POLL_HUP and POLL_ERR were initially reported and are not reported
again (especially for HUP). This could happen for example if POLL_HUP
and POLL_IN were received together, the pending data exactly correspond
to a full buffer which is read at once, preventing the POLL_HUP from
being dealt with in the same call, and on the next call only POLL_OUT
is reported (e.g. to emit some response or peers protocol ACKs). In this
case fd_may_recv() will not be enabled anymore and the close event will
be missed.
It seems quite hard to trigger this case, though it might explain some
of the rare missed close events that were detected in the past on the
peers.
Emeric Brun [Thu, 31 Aug 2017 12:41:55 +0000 (14:41 +0200)]
MEDIUM: check: server states and weight propagation re-work
The server state and weight handling was reworked to support
"pending" values updated by checks/CLI/LUA/agent.
These values are committed to be propagated to the
LB stack.
In further development related to multi-threading, the commit
will be handled in a sync point.
Pending values are named using the prefix 'next_'.
Current values used by the LB stack are named 'cur_'.
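As a rough sketch of the convention (field names assumed for illustration,
not taken verbatim from the patch), the server struct carries paired fields:

    enum srv_state { SRV_ST_STOPPED, SRV_ST_RUNNING };  /* illustrative */

    struct server {
        enum srv_state cur_state;   /* committed value used by the LB stack */
        enum srv_state next_state;  /* pending value from checks/CLI/LUA/agent */
        int cur_eweight;            /* committed effective weight */
        int next_eweight;           /* pending effective weight */
    };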
MINOR: http: Use a trash chunk to store decoded string of the HTTP auth header
This string is used in sample fetches so it is safe to use a preallocated trash
chunk instead of a buffer dynamically allocated during HAProxy startup.
MINOR: stick-tables: Make static_table_key a struct variable instead of a pointer
First, this variable does not need to be publicly exposed because it is only
used by stick_table functions. So we declare it as a global static in the
stick_table.c file. Then, it is useless to use a pointer. Using a plain struct
variable avoids any dynamic allocation.
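A minimal sketch of the change (the key type is stubbed for illustration):

    /* before: struct stktable_key *static_table_key, allocated at startup */
    /* after: a plain variable, private to stick_table.c */
    static struct stktable_key {
        void *key;                 /* illustrative stub of the real type */
        unsigned long key_len;
    } static_table_key;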
MINOR: buffers: Move swap_buffer into buffer.c and add deinit_buffer function
swap_buffer is a global variable only used by buffer_slow_realign. So it has
been moved from global.h to buffer.c and it is allocated by the init_buffer
function. The deinit_buffer function has been added to release it. It is also
used to destroy the buffers' pool.
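A sketch of the resulting code, assuming the buffer size is passed in
(the real functions may take it from the global tuning settings):

    #include <stdlib.h>

    static char *swap_buffer;  /* only used by buffer_slow_realign() */

    int init_buffer(size_t bufsize)
    {
        swap_buffer = calloc(1, bufsize);
        return swap_buffer != NULL;
    }

    void deinit_buffer(void)
    {
        /* also the place where the buffers' pool is destroyed */
        free(swap_buffer);
        swap_buffer = NULL;
    }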
MINOR: logs: Realloc log buffers only after the config is parsed and checked
During the configuration parsing, log buffers are reallocated when
global.max_syslog_len is updated. This can happen several times. So, instead
of doing it several times, we do it only once after the configuration parsing.
MINOR: chunks: Use dedicated function to init/deinit trash buffers
Now, we use init_trash_buffers and deinit_trash_buffers to, respectively,
initialize and deinitialize trash buffers (trash, trash_buf1 and trash_buf2).
These functions have been introduced to be used by threads, to deal with
thread-local trash buffers.
The check_status field in the CSV stats output is conditionally prefixed
with "* " if a check is currently underway. This can trip tools that
parse the CSV output and compare against a well known list of values.
This commit just adds this bit to the documentation.
BUG/MEDIUM: http: Fix a regression bug when a HTTP response is in TUNNEL mode
Unfortunately, a regression bug was introduced in the commit 1486b0ab
("BUG/MEDIUM: http: Switch HTTP responses in TUNNEL mode when body length is
undefined"). HTTP responses with undefined body length are blocked until timeout
when the compression is enabled. This bug was fixed in commit 69744d92
("BUG/MEDIUM: http: Fix blocked HTTP/1.0 responses when compression is
enabled").
The bug is still the same. We do not forward response data because we are
waiting for the synchronization between the HTTP request and the response.
To fix the bug, the conditions to infinitely forward channel data have been
slightly relaxed. Now, it is done if there is no more analyzer registered on
the channel or if the _FLT_END analyzer is still there but without the flag
CF_FLT_ANALYZE. This last condition is only possible when a channel is waiting
for the end of the other side. So, fundamentally, it means that no one is
analyzing the channel anymore. This is a transitional state during a sync
phase.
Emmanuel Hocdet [Wed, 9 Aug 2017 16:26:20 +0000 (18:26 +0200)]
MINOR: ssl: remove duplicate ssl_methods in struct bind_conf
Patch "MINOR: ssl: support ssl-min-ver and ssl-max-ver with crt-list"
introduce ssl_methods in struct ssl_bind_conf. struct bind_conf have now
ssl_methods and ssl_conf.ssl_methods (unused). It's error-prone. This patch
remove the duplicate structure to avoid any confusion.
As per a recent mailing list discussion, suggesting specific cipher
settings is not too helpful, because they depend on a lot of factors,
ranging from client capabilities, available TLS libraries, new
security research, and others.
To keep the documentation from becoming stale -- and potentially
wrong/dangerous -- this commit adds links to Mozilla's well-known
TLS blog, as well as to their configuration generator.
Willy Tarreau [Wed, 30 Aug 2017 07:59:52 +0000 (09:59 +0200)]
MEDIUM: connection: remove useless flag CO_FL_DATA_WR_SH
After careful inspection, this flag is set at exactly two places:
  - once in the health-check receive callback after receipt of a
    response
  - once in the stream interface's shutw() code where CF_SHUTW is
    always set on chn->flags
The flag was checked in the checks before deciding to send data, but
when it is set, the wake() callback immediately closes the connection
so the CO_FL_SOCK_WR_SH flag is also set.
The flag was also checked in si_conn_send(), but checking the channel's
flag instead is enough and even reveals that one check involving it
could never match.
So it's time to remove this flag and replace its check with a check of
CF_SHUTW in the stream interface. This way each layer is responsible
for its shutdown, this will ease insertion of the mux layer.
Willy Tarreau [Wed, 30 Aug 2017 05:35:35 +0000 (07:35 +0200)]
MEDIUM: connection: remove useless flag CO_FL_DATA_RD_SH
This flag is both confusing and wrong. It is supposed to report the
fact that the data layer has received a shutdown, but in fact this is
reported by CO_FL_SOCK_RD_SH which is set by the transport layer after
this condition is detected. The only case where the flag above is set
is in the stream interface where CF_SHUTR is also set on the receiving
channel.
In addition, it was checked in the health checks code (while never set)
and was always tested jointly with CO_FL_SOCK_RD_SH everywhere, except in
conn_data_read0_pending() which incorrectly doesn't match the second
time it's called and is fortunately protected by an extra check on
(ic->flags & CF_SHUTR).
This patch gets rid of the flag completely. Now conn_data_read0_pending()
accurately reports the fact that the transport layer has detected the end
of the stream, regardless of the fact that this state was already consumed,
and the stream interface watches ic->flags&CF_SHUTR to know if the channel
was already closed by the upper layer (which it already used to do).
The now unused conn_data_read0() function was removed.
Willy Tarreau [Mon, 28 Aug 2017 17:02:51 +0000 (19:02 +0200)]
MEDIUM: session: add a pointer to a struct task in the session
The session may need to enforce a timeout when waiting for a handshake.
Till now we used a trick to avoid allocating a pointer, we used to set
the connection's owner to the task and set the task's context to the
session, so that it was possible to circle between all of them. The
problem is that we'll really need to pass the pointer to the session
to the upper layers during initialization and that the only place to
store it is conn->owner, which is squatted for this trick.
So this patch moves the struct task* into the session where it should
always have been and ensures conn->owner points to the session until
the data layer is properly initialized.
Willy Tarreau [Mon, 28 Aug 2017 14:28:47 +0000 (16:28 +0200)]
CLEANUP: listener: remove the unused handler field
Historically listeners used to have a handler depending on the upper
layer. But now it's exclusively process_stream() and nothing uses it
anymore so it can safely be removed.
Willy Tarreau [Mon, 28 Aug 2017 14:22:54 +0000 (16:22 +0200)]
MEDIUM: stream: make stream_new() allocate its own task
Currently a task is allocated in session_new() and serves two purposes :
- either the handshake is complete and it is offered to the stream via
the second arg of stream_new()
- or the handshake is not complete and it's diverted to be used as a
timeout handler for the embryonic session and repurposed once we land
into conn_complete_session()
Furthermore, the task's process() function was taken from the listener's
handler in conn_complete_session() prior to being replaced by a call to
stream_new(). This will become a serious mess with the mux.
Since it's impossible to have a stream without a task, this patch removes
the second arg from stream_new() and makes this function allocate its own
task. In session_accept_fd(), we now only allocate the task if needed for
the embryonic session and delete it later.
Willy Tarreau [Mon, 28 Aug 2017 13:46:01 +0000 (15:46 +0200)]
MEDIUM: connection: get rid of data->init() which was not for data
The ->init() callback of the connection's data layer was only used to
complete the session's initialisation since sessions and streams were
split apart in 1.6. The problem is that it creates a big confusion in
the layers' roles as the session has to register a dummy data layer
when waiting for a handshake to complete, then hand it off to the
stream which will replace it.
The real need is to notify that the transport has finished initializing.
This should enable a better splitting between these layers.
This patch thus introduces a connection-specific callback called
xprt_done_cb() which informs about handshake successes or failures. With
this, data->init() can disappear, CO_FL_INIT_DATA as well, and we don't
need to register a dummy data->wake() callback to be notified of errors.
Willy Tarreau [Tue, 29 Aug 2017 14:40:59 +0000 (16:40 +0200)]
BUG/MINOR: stream-int: don't check the CO_FL_CURR_WR_ENA flag
The stream interface chk_snd() code checks if the connection has already
subscribed to write events in order to avoid attempting a useless write()
which will fail. But it used to check both the CO_FL_CURR_WR_ENA and the
CO_FL_DATA_WR_ENA flags, while the former may only be present without the
latter if either the other side just disabled writing and has not synchronized
yet (which is harmless) or if it's currently performing a handshake, which
is being checked by the next condition and will be better dealt with by
properly subscribing to the data events.
This code was added back in 1.5-dev20 to limit the number of useless calls
to splice() but both flags were checked at once while only CO_FL_DATA_WR_ENA
was needed. This bug seems to have no impact other than making code changes
more painful. This fix may be backported down to 1.5, though it is unlikely to
be needed there.
Willy Tarreau [Thu, 24 Aug 2017 12:31:19 +0000 (14:31 +0200)]
REORG/MEDIUM: connection: introduce the notion of connection handle
Till now connections used to rely exclusively on file descriptors. It
was planned in the past that alternative solutions would be implemented,
leading to member "union t" presenting sock.fd only for now.
With QUIC, the connection will need to continue to exist but will not
rely on a file descriptor but a connection ID.
So this patch introduces a "connection handle" which is either a file
descriptor or a connection ID, to replace the existing "union t". We've
now removed the intermediate "struct sock" which was never used. There
is no functional change at all, though the struct connection was inflated
by 32 bits on 64-bit platforms due to alignment.
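A minimal sketch of the idea (member names assumed):

    /* the connection handle: today a file descriptor, later (QUIC) a
     * connection ID; it replaces the former "union t" and "struct sock" */
    union conn_handle {
        int fd;   /* file descriptor, for regular sockets */
    };

    struct connection {
        union conn_handle handle;
        /* ... */
    };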
Willy Tarreau [Wed, 23 Aug 2017 09:37:48 +0000 (11:37 +0200)]
OPTIM: lua: don't add "Connection: close" on the response
Haproxy doesn't need this anymore, we're wasting cycles checking for
a Connection header in order to add "Connection: close" only in the
1.1 case so that haproxy sees it and removes it. All tests were run
in 1.0 and 1.1, with/without the request header, and in the various
keep-alive/close modes, with/without compression, and everything works
fine. It's worth noting that this header was inherited from the stats
applet and that the same cleanup probably ought to be done there as
well.
Willy Tarreau [Wed, 23 Aug 2017 09:24:47 +0000 (11:24 +0200)]
OPTIM: lua: don't use expensive functions to parse headers in the HTTP applet
In the HTTP applet, we have to parse the response headers provided by
the application and to produce a response. strcasecmp() is expensive,
and chunk_append() even more as it uses a format string.
Here we check the string length before calling strcasecmp(), which
results in strcasecmp() being called only on the relevant header in
practice due to very few collisions on the name lengths, effectively
dividing the number of calls by 3, and we replace chunk_appendf()
with memcpy() as we already know the string lengths.
Doing just this makes the "hello-world" applet 5% faster, reaching
41400 requests/s on a core i5-3320M.
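The technique can be sketched with a hypothetical helper (not the applet's
actual code): the length test filters out almost every candidate, so the
expensive case-insensitive comparison only runs on a plausible match.

    #include <stddef.h>
    #include <strings.h>

    /* return non-zero if the header <name> of length <len> equals <ref>,
     * whose length <ref_len> is known at the call site */
    static inline int header_is(const char *name, size_t len,
                                const char *ref, size_t ref_len)
    {
        return len == ref_len && strncasecmp(name, ref, len) == 0;
    }

    /* usage: header_is(name, len, "content-length", 14) */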
Willy Tarreau [Wed, 23 Aug 2017 08:52:20 +0000 (10:52 +0200)]
BUG/MEDIUM: stream: properly set the required HTTP analysers on use-service
Commit 4850e51 ("BUG/MAJOR: lua: Do not force the HTTP analysers in
use-services") fixed a bug in how services are used in Lua, but this
fix broke the ability for Lua services to support keep-alive.
The cause is that we branch to a service while we have not yet set the
body analysers on the request nor the response, and when we start to
deal with the response we don't have any request analyser anymore. This
leads the response forward engine to detect an error and abort. It's
very likely that this also causes some random truncation of responses
though this has not been observed during the tests.
The root cause is not the Lua part in fact, the commit above was correct,
the problem is the implementation of the "use-service" action. When done
in an HTTP request, it bypasses the load balancing decisions and the
connect() phase. These ones are normally the ones preparing the request
analysers to parse the body when keep-alive is set. This should be dealt
with in the main process_use_service() function in fact.
That's what this patch does. If process_use_service() is called from the
http-request rule set, it enables the XFER_BODY analyser on the request
(since the same is always set on the response). Note that it's exactly
what is being done on the stats page which properly supports keep-alive
and compression.
This fix must be backported to 1.7 and 1.6 as the breakage appeared in 1.6.3.
Willy Tarreau [Wed, 23 Aug 2017 07:32:06 +0000 (09:32 +0200)]
MINOR: lua: properly process the contents of the content-length field
The header's value was parsed with atoi() then compared against -1,
meaning that all the unparsable stuff returning zero was not considered
and that all multiples of 2^32 + 0xFFFFFFFF would continue to emit a
chunk.
Now instead we parse the value using a long long, only accept positive
values and consider all unparsable values as incorrect and switch to
either close or chunked encoding. This is more in line with what a
client (including haproxy's parser) would expect.
This may be backported as a cleanup to stable versions, though it's
really unlikely that Lua applications are facing side effects of this.
Indeed, responses with status codes 1xx, 204 and 304 do not contain any
body and the message ends immediately after the empty header (cf. RFC 7230)
so by emitting a 0<CR><LF> we're disturbing keep-alive responses. There's
a workaround against this for now which consists in always emitting
"Content-length: 0" but it may not be cool with 304 when clients use
the headers to update their cache.
This fix must be backported to stable versions back to 1.6.
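A sketch of the stricter parsing described above (hypothetical helper; the
patch's actual code may differ):

    #include <errno.h>
    #include <stdlib.h>

    /* return the body length, or -1 if the value is unparsable or
     * negative, in which case the caller switches to close/chunked */
    static long long parse_content_length(const char *value)
    {
        char *end;
        long long len;

        errno = 0;
        len = strtoll(value, &end, 10);
        if (errno || end == value || *end != '\0' || len < 0)
            return -1;
        return len;
    }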
Willy Tarreau [Wed, 23 Aug 2017 14:07:33 +0000 (16:07 +0200)]
BUG/MAJOR: lua: fix the impact of the scheduler changes again
Commit d1aa41f ("BUG/MAJOR: lua: properly dequeue hlua_applet_wakeup()
for new scheduler") tried to address the side effects of the scheduler
changes on Lua, but it was not enough. Having some Lua code send data
in chunks separated by one second each clearly shows busy polling being
done.
The issue was tracked down to hlua_applet_wakeup() being woken up on
timer expiration, and returning itself without clearing the timeout,
causing the task to be re-inserted with an expiration date in the past,
thus firing again. In the past it was not a problem, as returning NULL
was enough to clear the timer. Now we can't rely on this anymore so
it's important to clear this timeout.
No backport is needed, this issue is specific to 1.8-dev and results
from an incomplete fix in the commit above.
Willy Tarreau [Tue, 22 Aug 2017 10:01:26 +0000 (12:01 +0200)]
BUG/MEDIUM: dns: fix accepted_payload_size parser to avoid integer overflow
Since commit 9d8dbbc ("MINOR: dns: Maximum DNS udp payload set to 8192") it's
possible to specify a packet size, but passing too large a size or a negative
size is not detected and results in memset() being performed over a 2GB+ area
upon receipt of the first DNS response, causing runtime crashes.
We now check that the size is not smaller than the smallest packet which is
the DNS header size (12 bytes).
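A minimal sketch of the added bound check (constant name assumed):

    #define DNS_HEADER_SIZE 12   /* smallest possible DNS packet */

    /* a size below the DNS header (including a negative one promoted to
     * a huge unsigned value) must never reach the memset() */
    static int dns_payload_size_valid(int size)
    {
        return size >= DNS_HEADER_SIZE;
    }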
Baptiste Assmann [Mon, 21 Aug 2017 11:21:48 +0000 (13:21 +0200)]
BUG/MINOR: dns: wrong resolution interval lead to 100% CPU
Since the DNS layer split and the use of the obj_type structure, we did not
properly update the code used to compute the interval between two
resolutions.
A nasty loop was then created when:
- resolver's hold.valid is shorter than servers' check.inter
- a valid response is available in the DNS cache
A task was woken up for a server's resolution. The server picks up the IP
from the cache and returns without updating the 'last update' timestamp of
the resolution (which is normal...). Then the task is woken up again for
the same server.
The fix now properly computes the interval between two resolutions,
and the cache is used properly while a new resolution is triggered if
the data is not fresh enough.
Baptiste Assmann [Fri, 18 Aug 2017 21:36:07 +0000 (23:36 +0200)]
MINOR: dns: make SRV record processing more verbose
For troubleshooting purpose, it may be important to know when a server
got its fqdn updated by a SRV record.
This patch makes HAProxy report such events through stderr and the logs.
Baptiste Assmann [Mon, 21 Aug 2017 14:51:09 +0000 (16:51 +0200)]
MINOR: dns: automatic reduction of DNS accepted payload size
RFC 6891 states that if a DNS client announces "big" payload size and
doesn't receive a response (because some equipment on the path may
block/drop UDP fragmented packets), then it should try asking for
smaller responses.
Baptiste Assmann [Fri, 18 Aug 2017 21:35:08 +0000 (23:35 +0200)]
MINOR: dns: Maximum DNS udp payload set to 8192
Following up on the DNS extension introduction, this patch aims at making the
computation of the maximum number of records in a DNS response dynamic.
This computation is based on the announced payload size accepted by
HAProxy.
Baptiste Assmann [Mon, 14 Aug 2017 08:37:46 +0000 (10:37 +0200)]
BUG/MINOR: dns: server set by SRV records stay in "no resolution" status
This patch fixes a bug where some servers managed by SRV record query
types never ever recover from a "no resolution" status.
The problem is due to a wrong function being called when breaking the
server/resolution (A/AAAA) relationship: this is performed when a server's SRV
record disappears from the SRV response.
BUG/MINOR: Wrong type used as argument for spoe_decode_buffer().
Contrary to 64-bit libCs where the size of the size_t type is 8, on 32-bit
systems the size of size_t is 4 (the size of a long), which is not equal to
the size of the uint64_t type.
This was revealed by GCC warnings like this one on 32-bit systems:
src/flt_spoe.c:2259:40: warning: passing argument 4 of spoe_decode_buffer from
incompatible pointer type
if (spoe_decode_buffer(&p, end, &str, &sz) == -1)
^
As the already existing code using spoe_decode_buffer() already uses such
pointers to uint64_t in place of pointers to size_t (most of this code is in
the contrib directory), this simple patch modifies the prototype of
spoe_decode_buffer() to use a pointer to uint64_t in place of a pointer to
size_t, uint64_t being the type ultimately required by decode_varint().
MINOR: http: export some of the HTTP parser macros
The two macros EXPECT_LF_HERE and EAT_AND_JUMP_OR_RETURN were exported
for use outside the HTTP parser. They now take extra arguments to avoid
implicit pointers and jump labels. These will be used to reimplement a
minimalist HTTP/1 parser in the H1->H2 gateway.
Willy Tarreau [Wed, 9 Aug 2017 21:36:48 +0000 (23:36 +0200)]
TESTS: ist: add a test file for the functions
This test file covers the various functions provided by ist.h. It allows
both to test them for absence of regression, and to observe the code
emitted at different optimization levels.
Willy Tarreau [Tue, 30 May 2017 15:49:36 +0000 (17:49 +0200)]
MINOR: ist: implement very simple indirect strings
For HPACK we'll need to perform a lot of string manipulation between the
dynamic headers table and the output stream, and we need an efficient way
to deal with that, considering that the zero character is not an end of
string marker here. It turns out that gcc supports returning structs from
functions and is able to place up to two words directly in registers when
-freg-struct-return is used, which is the case by default on x86 and armv8. On
other architectures the caller reserves some stack space where the callee
can write, which is equivalent to passing a pointer to the return value.
So let's implement a few functions to deal with this as the resulting code
will be optimized on certain architectures where retrieving the length of
a string will simply consist in reading one of the two returned registers.
Extreme care was taken to ensure that the compiler gets maximum opportunities
to optimize out every bit of unused code. This is also the reason why no
call to regular string functions (such as strlen(), memcmp(), memcpy() etc)
were used. The code involving them is often larger than when they are open
coded. Given that strings are usually very small, especially when manipulating
headers, the time spent calling a function optimized for large vectors often
ends up being higher than the few cycles needed to count a few bytes.
An issue was met with __builtin_strlen() which can automatically convert
a constant string to its constant length. It doesn't accept NULLs and there
is no way to hide them using expressions as the check is made before the
optimizer is called. On gcc 4 and above, using an intermediary variable
is enough to hide it. On older versions, calls to ist() with an explicit
NULL argument will issue a warning. There is normally no reason to do this
but taking care of it as well as possible still seems important.
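A minimal sketch of such an indirect string (the real ist.h carries many
more helpers than shown here):

    #include <stddef.h>

    /* an indirect string: pointer + length, no NUL terminator required */
    struct ist {
        char *ptr;
        size_t len;
    };

    /* returning the struct by value lets gcc place both words directly
     * in registers on x86/armv8 */
    static inline struct ist ist2(const void *ptr, size_t len)
    {
        return (struct ist){ .ptr = (char *)ptr, .len = len };
    }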
Willy Tarreau [Tue, 27 Jun 2017 13:25:14 +0000 (15:25 +0200)]
MEDIUM: session: do not free a session until no stream references it
We now refrain from clearing a session's variables, counters, and from
releasing it as long as at least one stream references it. For now it
never happens but with H2 this will be mandatory to avoid double frees.
Willy Tarreau [Tue, 27 Jun 2017 13:20:05 +0000 (15:20 +0200)]
MINOR: stream: link the stream to its session
Now each stream is added to the session's list of streams, so that it
will be possible to know all the streams belonging to a session, and
to know if any stream is still attached to a session.
MINOR: chunks: add chunk_memcpy() and chunk_memcat()
These two functions respectively copy a memory area onto the chunk, and
append the contents of a memory area over a chunk. They are convenient
to prepare binary output data to be sent and will be used for HTTP/2.
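A sketch of the two functions under an assumed chunk layout (bounds checks
omitted for brevity):

    #include <string.h>

    struct chunk {              /* illustrative stub */
        char *str;
        size_t len;
        size_t size;
    };

    /* copy a memory area onto the chunk, replacing its contents */
    static inline void chunk_memcpy(struct chunk *chk,
                                    const char *src, size_t len)
    {
        memcpy(chk->str, src, len);
        chk->len = len;
    }

    /* append the contents of a memory area to the chunk */
    static inline void chunk_memcat(struct chunk *chk,
                                    const char *src, size_t len)
    {
        memcpy(chk->str + chk->len, src, len);
        chk->len += len;
    }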
Baptiste Assmann [Fri, 18 Aug 2017 08:15:42 +0000 (10:15 +0200)]
MINOR: dns: default "hold obsolete" timeout set to 0
The "hold obsolete" timer is used to prevent HAProxy from moving a server to
an other IP or from considering the server as DOWN if the IP currently
affected to this server has not been seen for this period of time in DNS
responses.
That said, historically, HAProxy used to update servers as soon as the IP
had disappeared from the response. The current default timeout breaks this
historical behavior and may change HAProxy's behavior when people
upgrade to 1.8.
This patch changes the default value to 0 to keep backward compatibility.
Baptiste Assmann [Sun, 13 Aug 2017 22:13:01 +0000 (00:13 +0200)]
MINOR: dns: enable edns0 extension and make accepted payload size tunable
Edns extensions may be used to negotiate some settings between a DNS
client and a server.
For now we only use it to announce the maximum response payload size accepted
by HAProxy.
This size can be set through a configuration parameter in the resolvers
section. If not set, it defaults to 512 bytes.
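For illustration, a hypothetical resolvers section setting this parameter
(addresses are examples):

    resolvers mydns
        nameserver ns1 192.0.2.53:53
        accepted_payload_size 8192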
Baptiste Assmann [Mon, 14 Aug 2017 14:35:45 +0000 (16:35 +0200)]
MINOR: dns: enable caching of responses for server set by a SRV record
The function srv_set_fqdn() is used to update a server's fqdn and set
accordingly its DNS resolution.
Current implementation prevents a server whose update is triggered by a
SRV record from being linked to an existing resolution in the cache (if
applicable).
This patch aims at fixing this.
Baptiste Assmann [Mon, 14 Aug 2017 14:38:29 +0000 (16:38 +0200)]
MINOR: dns: ability to use a SRV resolution for multiple backends
Current code implementation prevents multiple backends from relying on
the same SRV resolution. Actually, only the first backend which triggers
the resolution gets updated.
This patch makes HAProxy process the whole list of the 'curr'
requesters to apply the changes everywhere (hence, the cache also applies
to SRV records...)
Baptiste Assmann [Sat, 12 Aug 2017 09:16:55 +0000 (11:16 +0200)]
MINOR: dns: make debugging function dump_dns_config() compatible with SRV records
This function is particularly useful when debugging DNS resolution at
run time in HAProxy.
SRV records must be read differently, hence we have to update this
function.
Baptiste Assmann [Fri, 11 Aug 2017 08:37:20 +0000 (10:37 +0200)]
MINOR: dns: update dns response buffer reading pointer due to SRV record
DNS SRV records use "dns name compression" to store the target name.
The "dns compression" principle is simple. Let's take the name below:
3336633266663038.red.default.svc.cluster.local.
It can be stored "as is" in the response or it can be compressed like
this: 3336633266663038<POINTER>
and <POINTER> would point to the string
'.red.default.svc.cluster.local.' available in the question section for
example.
This mechanism allows storing much more data in a single DNS response.
This means the field "record->data_len", which stores the size of the
record (hence the whole string, uncompressed), can't be used to move the
pointer forward when reading responses. We must use the "offset" integer,
which holds the real number of bytes occupied by the target name.
If we don't do that, we can properly read the first SRV record, then we
lose alignment and start reading unrelated data (still in the
response), leading to a false negative error treated as an "invalid"
response...
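An illustrative sketch (not the parser's actual code) of computing the
on-the-wire size of a possibly-compressed name:

    /* return the number of bytes the (possibly compressed) DNS name at
     * <p> occupies on the wire, i.e. the offset to advance the reading
     * pointer by, or -1 on malformed input; a 2-byte compression
     * pointer (top bits set to 11) terminates the name */
    static long dns_name_wire_len(const unsigned char *p,
                                  const unsigned char *end)
    {
        const unsigned char *start = p;

        while (p < end) {
            if ((*p & 0xC0) == 0xC0)              /* compression pointer */
                return (p + 2 <= end) ? (long)(p + 2 - start) : -1;
            if (*p == 0)                          /* root label: end */
                return (long)(p + 1 - start);
            if (p + 1 + *p > end)                 /* label overflows */
                return -1;
            p += 1 + *p;                          /* skip one label */
        }
        return -1;
    }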
Baptiste Assmann [Fri, 11 Aug 2017 08:31:22 +0000 (10:31 +0200)]
MINOR: dns: update record dname matching for SRV query types
DNS response for SRV queries look like this:
- query dname looks like '_http._tcp.red.default.svc.cluster.local'
- answer record dname looks like
'3336633266663038.red.default.svc.cluster.local.'
Of course, it never matches... and it triggers many false positives in
the current code (which is suitable for A/AAAA/CNAME).
This patch simply ignores this dname matching in the case of SRV query
type.
Baptiste Assmann [Fri, 11 Aug 2017 07:58:27 +0000 (09:58 +0200)]
MINOR: dns: Update analysis of TRUNCATED response for SRV records
The first implementation of the DNS parser used to consider TRUNCATED
responses as errors and triggered a failover to another query type
(usually A to AAAA or vice-versa).
When we query for SRV records, a TRUNCATED response still contains valid
records we can exploit, so we shouldn't trigger a failover in such case.
Note that we had to move the matching against the flag later in the
response parsing (actually, until we can read the query type...).
Olivier Houchard [Mon, 14 Aug 2017 13:59:44 +0000 (15:59 +0200)]
CLEANUP: raw_sock: Use a better name for the constructor than __ssl_sock_deinit()
I just noticed the raw socket constructor was called __ssl_sock_deinit,
which is a bit confusing, and wrong twice, so the attached patch renames it
to __raw_sock_init, which seems more correct.
Willy Tarreau [Thu, 17 Aug 2017 13:54:46 +0000 (15:54 +0200)]
BUG/MAJOR: stream: in stream_free(), close the front endpoint and not the origin
stream_free() used to close the front connection by using s->sess->origin,
instead of using s->si[0].end. This is very visible in HTTP/2 where the
front connection is abusively closed and causes all sort of issues including
crashes caused by double closes due to the same origin being referenced many
times.
It's also suspected that it may have caused some of the early issues met
during the Lua development.
It's uncertain whether stable branches are affected. It might be worth
backporting it once it has been confirmed not to create new impacts.
Willy Tarreau [Wed, 16 Aug 2017 13:35:19 +0000 (15:35 +0200)]
BUILD/MINOR: build without openssl still broken
As mentioned in commit cf4e496c9 ("BUG/MEDIUM: build without openssl broken"),
commit 872f9c213 ("MEDIUM: ssl: add basic support for OpenSSL crypto engine")
broke the build without openssl support. But the former only fixed it when
openssl is not enabled, not when it's not installed on the system:
In file included from src/haproxy.c:112:
include/proto/ssl_sock.h:24:25: openssl/ssl.h: No such file or directory
In file included from src/haproxy.c:112:
include/proto/ssl_sock.h:45: error: syntax error before "SSL_CTX"
include/proto/ssl_sock.h:75: error: syntax error before '*' token
include/proto/ssl_sock.h:75: warning: type defaults to `int' in declaration of `ssl_sock_create_cert'
include/proto/ssl_sock.h:75: warning: data definition has no type or storage class
include/proto/ssl_sock.h:76: error: syntax error before '*' token
include/proto/ssl_sock.h:76: warning: type defaults to `int' in declaration of `ssl_sock_get_generated_cert'
include/proto/ssl_sock.h:76: warning: data definition has no type or storage class
include/proto/ssl_sock.h:77: error: syntax error before '*' token
Now we also surround the include with #ifdef USE_OPENSSL to fix this. No
backport is needed since openssl async engines were not backported.
Emmanuel Hocdet [Fri, 11 Aug 2017 08:56:00 +0000 (10:56 +0200)]
BUILD: ssl: replace SSL_CTX_get0_privatekey for openssl < 1.0.2
Commit 48a8332a introduced SSL_CTX_get0_privatekey in openssl-compat.h, but
SSL_CTX_get0_privatekey accesses an internal structure and can't be a
candidate for openssl-compat.h. The workaround with openssl < 1.0.2 is to use
SSL_new then SSL_get_privatekey.
Willy Tarreau [Wed, 9 Aug 2017 14:35:44 +0000 (16:35 +0200)]
BUILD/MINOR: cli: shut a minor gcc warning in "show fd"
Recent commit 7a4a0ac ("MINOR: cli: add a new "show fd" command") introduced
a warning when building at -O2 and above. The compiler doesn't know if a
variable's value might have changed between two if blocks so warns that some
values might be used uninitialized, which is not the case. Let's simply
initialize them to shut the warning.
Make it so that for each server, instead of specifying a hostname, one can use
an SRV label.
When doing so, haproxy will first resolve the SRV label, then apply the
resulting hostnames, as well as port and weight (priority is ignored right
now), to each server using the SRV label.
It is resolved periodically, and any server disappearing from the SRV records
will be removed, and any server appearing will be added, assuming there are
free servers in haproxy.
As DNS servers may not return all IPs in one answer, we want to cache the
previous entries. Those entries are removed when considered obsolete, which
happens when the IP hasn't been returned by the DNS server for a time
defined in the "hold obsolete" parameter of the resolver section. The default
is 30s.
Emmanuel Hocdet [Wed, 9 Aug 2017 09:24:25 +0000 (11:24 +0200)]
MINOR: ssl: allow to start without certificate if strict-sni is set
With strict-sni, an ssl connection will fail if no certificate matches. With
no certificate on the bind line, all ssl connections fail, which is consistent
with the behavior of strict-sni. When 'generate-certificates' is set,
'strict-sni' is never used. When 'strict-sni' is set, default_ctx is never
used. Allow starting without a certificate only in this case.
The use case is to start haproxy with ssl before customers start to use
certificates. Typically with 'crt' pointing to an empty directory and the
'strict-sni' parameter.
BUG/MEDIUM: ssl: Fix regression about certificates generation
Since commit f6b37c67 ("BUG/MEDIUM: ssl: in bind line, ssl-options after
'crt' are ignored."), certificate generation is broken.
To generate a certificate, we retrieved the private key of the default
certificate using the SSL object. But since the commit f6b37c67, the SSL object
is created with a dummy certificate (initial_ctx).
So to fix the bug, we directly use the default certificate in the bind_conf
structure. We use the SSL_CTX_get0_privatekey function to do so. Because this
function does not exist for OpenSSL < 1.0.2 and for LibreSSL, it has been added
in openssl-compat.h with the right #ifdef.
This one dumps the fdtab for all active FDs with some quickly interpretable
characters to read the flags (like upper case=set, lower case=unset). It
can probably be improved to report fdupdt[] and/or fdinfo[] but at least it
provides a good start and allows to see how FDs are seen. When the fd owner
is a connection, its flags are also reported as it can help compare with the
polling status, and the target (fe/px/sv) as well. When it's a listener, the
listener's state is reported as well as the frontend it belongs to.
BUG/MEDIUM: stream: don't retry SSL connections which fail the SNI name check
Commits 2ab8867 ("MINOR: ssl: compare server certificate names to the
SNI on outgoing connections") and 96c7b8d ("BUG/MINOR: ssl: Fix check
against SNI during server certificate verification") made it possible
to check that the server's certificate matches the name presented in
the SNI field. While it solves a class of problems, it opens another
one which is that by failing such a connection, we'll retry it and put
more load on the server. It can be a real problem if a user can trigger
this issue, which is what will very often happen when the SNI is forwarded
from the client to the server.
This patch solves this by detecting that this very specific hostname
verification failed and that the hostname was provided using SNI, and
then it simply disables retries and the failure is immediate.
At the time of writing this patch, the previous patches were not backported
(yet), so no backport is needed for this one unless the aforementioned
patches are backported as well. This patch requires previous patches
"BUG/MINOR: ssl: make use of the name in SNI before verifyhost" and
"MINOR: ssl: add a new error code for wrong server certificates".
MINOR: ssl: add a new error code for wrong server certificates
If a server presents an unexpected certificate to haproxy, that is, a
certificate that doesn't match the expected name as configured in
verifyhost or as requested using SNI, we want to store that precious
information. Fortunately we have access to the connection in the
verification callback so it's possible to store an error code there.
For this purpose we use CO_ER_SSL_MISMATCH_SNI (for when the cert name
didn't match the one requested using SNI) and CO_ER_SSL_MISMATCH for
when it doesn't match verifyhost.
BUG/MINOR: ssl: make use of the name in SNI before verifyhost
Commit 2ab8867 ("MINOR: ssl: compare server certificate names to the SNI
on outgoing connections") introduced the ability to check server cert
names against the name provided with in the SNI, but verifyhost was kept
as a way to force the name to check against. This was a mistake, because :
- if an SNI is used, any static hostname in verifyhost will be wrong;
worse, if it matches while the SNI doesn't, the server presented
the wrong certificate;
- there's no way to have a default name to check against for health
checks anymore because the point above mandates the removal of the
verifyhost directive
This patch reverses the ordering of the check : whenever SNI is used, the
name provided always has precedence (i.e. the server must always present a
certificate that matches the requested name). And if no SNI is provided,
then verifyhost is used, and will be configured to match the server's
default certificate name. This will work both when SNI is not used and
for health checks.
If the commit 2ab8867 is backported in 1.7 and/or 1.6, this one must be
backported too.
BUG/MINOR: ssl: Fix check against SNI during server certificate verification
This patch fixes the commit 2ab8867 ("MINOR: ssl: compare server certificate
names to the SNI on outgoing connections")
When we check the certificate sent by a server, in the verify callback, we get
the SNI from the session (SSL_SESSION object). In OpenSSL, tlsext_hostname value
for this session is copied from the ssl connection (SSL object). But the copy is
done only if the "server_name" extension is found in the server hello
message. This means the server has found a certificate matching the client's
SNI.
When the server returns a default certificate not matching the client's SNI, it
doesn't set any "server_name" extension in the server hello message. So no SNI
is set on the SSL session and SSL_SESSION_get0_hostname always returns NULL.
To fix the problem, we get the SNI directly from the SSL connection. It is
always defined with the value set by the client.
If the commit 2ab8867 is backported in 1.7 and/or 1.6, this one must be
backported too.
Note: it's worth mentioning that by making the SNI check work, we
introduce another problem by which failed SNI checks can cause
long connection retries on the server, and in certain cases the
SNI value used comes from the client. So this patch series must
not be backported until this issue is resolved.
While playing with the Lua API I've noticed that the core.proxies attribute
doesn't return all the proxies, more precisely the ones with the same name
(e.g. for a frontend and a backend with the same name it would only return
the latter one).
So, this patch fixes this problem without breaking the actual behaviour.
We have two cases of proxies with frontend/backend capabilities:
The first case is the listen. This case is not a problem because the
proxy object processes these two entities as only one, and it is the
expected behavior. With this case the "proxies" list works fine.
The second case is the frontend and backend with the same name. I think
that this case is possible for compatibility with the 'listen' declaration.
These two proxies with the same name and different capabilities must not be
processed with the same object (different statistics, different orders). In
fact, one of the two objects crushes the other one, which is no longer
accessible.
To fix this problem, this patch adds two lists, "frontends" and
"backends"; each of these lists contains the specialized proxies, but note
that "listen" proxies are declared in each list.
This is just for convenience and uniformity: Proxy.servers/listeners
returns a table/hash of objects with names as keys, but for example when
I want to pass such an object to some other Lua function I have to manually
copy the name (or wrap the object), since the object itself doesn't
expose name info.
This patch simply adds the proxy name as member of the proxy object.
BUG/MAJOR: lua: properly dequeue hlua_applet_wakeup() for new scheduler
The recent scheduler change broke the Lua co-sockets due to
hlua_applet_wakeup() returning NULL after waking the applet up. With the
previous scheduler, returning NULL was a way to do nothing on return.
With the new one it keeps TASK_RUNNING set, causing all new notifications
to end up in t->pending_state instead of t->state, and prevents the
task from being added into the run queue again, so it's never woken
up anymore.
The applet keeps waking up, causing hlua_socket_handler() to do nothing
new, then si_applet_wake_cb() calling stream_int_notify() to try to wake
the task up, which it can't do due to the TASK_RUNNING flag, then decide
that since the associated task is not in the run queue, it needs to call
stream_int_update_applet() to propagate the update. This last one finds
that the applet needs to be woken up to deal with the last reported events
and calling appctx_wakeup() again. Previously, this situation didn't exist
because the task was always added in the run queue despite the TASK_RUNNING
flag.
By returning the task instead in hlua_applet_wakeup(), we can ensure its
flag is properly cleared and the task is requeued if needed or just sits
waiting for new events to happen.
This fix requires the previous ones ("BUG/MINOR: lua: always detach the
tcp/http tasks before freeing them" and "MINOR: task: always preinitialize
the task's timeout in task_init()").
Thanks to Thierry, Christopher and Emeric for the long head-scratching
session!
No backport is needed as the bug doesn't appear in older versions and
it's unsure whether we'll not break something by backporting it.
MINOR: task: always preinitialize the task's timeout in task_init()
task_init() is called exclusively by task_new() which is the only way
to create a task. Most callers set t->expire to TICK_ETERNITY, some set
it to another value and a few like Lua don't set it at all as they don't
need a timeout, causing random values to be used in case the task gets
queued.
Let's always set t->expire to TICK_ETERNITY in task_init() so that all
tasks are now initialized in a clean state.
This patch can be backported as it will definitely make the code more
robust (at least the Lua code, possibly other places).
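A minimal sketch (the task struct is stubbed; TICK_ETERNITY is haproxy's
"no timeout" sentinel):

    #define TICK_ETERNITY 0

    struct task {
        unsigned int expire;   /* illustrative stub of the real struct */
    };

    /* called from task_new() only: every task now starts with a clean,
     * eternal timeout, so queuing an un-timed task is safe */
    static inline struct task *task_init(struct task *t)
    {
        t->expire = TICK_ETERNITY;
        return t;
    }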
BUG/MINOR: lua: always detach the tcp/http tasks before freeing them
In hlua_{http,tcp}_applet_release(), a call to task_free() is performed
to release the task, but no task_delete() is made on these tasks. Till
now it wasn't much of a problem because this was normally not done with
the task in the run queue, and the task was never put into the wait queue
since it doesn't have any timer. But with threading it will become an
issue. And not having this already prevents another bug from being fixed.
Thanks to Christopher for spotting this one. A backport to 1.7 and 1.6 is
preferred for safety.