Willy Tarreau [Sun, 31 Mar 2013 21:14:46 +0000 (23:14 +0200)]
MEDIUM: acl: support using sample fetches directly in ACLs
It now becomes possible to use sample fetches directly as ACL fetch
methods. In this case, the matching method is mandatory. This makes it
possible to build more ACL combinations from existing fetches, and will
limit the need for new ACLs whenever everything required to build them is
already available as sample fetches and matches.
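A brief sketch of what this enables, using fetches from this series (the ACL names are made up) :
    # a raw sample fetch used as an ACL fetch ; "-m" is mandatory here
    acl has_sess req.cook(SESSID) -m found
    acl img_host req.hdr(host) -m beg img.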
Willy Tarreau [Sun, 31 Mar 2013 20:59:32 +0000 (22:59 +0200)]
MEDIUM: acl: have a pointer to the keyword name in acl_expr
The acl_expr struct used to hold a pointer to the ACL keyword. But since
we now have all the relevant pointers, we don't need that anymore, we just
need the pointer to the keyword as a string in order to return warnings
and error messages.
So let's change this in order to remove the dependency on the acl_keyword
struct from acl_expr.
As part of this change, acl_cond_kw_conflicts(), which used to return a
pointer to an ACL keyword, now returns a const char * for the same reason.
Willy Tarreau [Sun, 31 Mar 2013 20:13:34 +0000 (22:13 +0200)]
MAJOR: acl: add option -m to change the pattern matching method
ACL expressions now support "-m" in addition to "-i" and "-f". This new
option is followed by the name of the pattern matching method to apply
to the extracted sample. This makes it possible to reuse existing sample
fetch methods with other matching methods (eg: regex). A "found" matching
method ignores any pattern and only verifies that the required sample was
found (useful for cookies).
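For instance (a sketch ; the fetch/pattern combinations are hypothetical) :
    # reuse an existing sample fetch with a regex match
    acl is_static path -m reg \.(gif|png|css|js)$
    # "found" ignores any pattern : only check that the cookie exists
    acl has_trk req.cook(TRK) -m found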
Willy Tarreau [Sun, 31 Mar 2013 16:49:18 +0000 (18:49 +0200)]
MINOR: http: replace acl_parse_ver with acl_parse_str
The HTTP version used in ACLs has long been matched as a string, yet
it still had its own parser. This makes no sense ; switch it to use
the standard string parser.
Willy Tarreau [Mon, 25 Mar 2013 07:12:18 +0000 (08:12 +0100)]
MAJOR: acl: convert all ACL requires to SMP use+val instead of ->requires
The ACLs now use the fetch's ->use and ->val to decide upon compatibility
between the place where they are used and where the information is fetched.
The code is capable of reporting warnings about very fine incompatibilities
between certain fetches and an exact usage location, so it is expected that
some new warnings will be emitted on some existing configurations.
Two degrees of detection are provided :
- detecting ACLs that never match
- detecting keywords that are ignored
All tests show that this seems to work well, though bugs are still possible.
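As a hypothetical sketch of the first category, a response-only fetch used in a request-time rule can now be flagged :
    acl not_found status 404     # "status" only exists once a response is seen
    block if not_found           # request-time rule : can never match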
Willy Tarreau [Sun, 24 Mar 2013 06:22:08 +0000 (07:22 +0100)]
MEDIUM: proxy: remove acl_requires and just keep a flag "http_needed"
Proxy's acl_requires was a copy of all bits taken from ACLs, but we'll
get rid of ACL flags and only rely on sample fetches soon. The proxy's
acl_requires was only used to allocate an HTTP context when needed, and
was even forced in HTTP mode. So it is better to have a flag which says
exactly what it is used for.
Willy Tarreau [Sun, 24 Mar 2013 00:34:58 +0000 (01:34 +0100)]
CLEANUP: acl: remove ACL hooks which were never used
These hooks, which established the relation between ACL_USE_* and the location
where ACLs were used, were never used, because they were superseded by the
sample capabilities. Remove them now.
Willy Tarreau [Mon, 14 Jan 2013 15:07:52 +0000 (16:07 +0100)]
MINOR: payload: add new direction-explicit sample fetches
Similarly to the previous commit fixing "hdr" and "cookie" in HTTP, we have to
deal with "payload" and "payload_lv", which are request-only for ACLs but
req/resp for sample fetches depending on the context, and to a lesser extent
with other req_* and rep_* fetches. So let's add explicit "req." and "res."
variants and make the ACLs rely on those instead.
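A minimal sketch with the new direction-explicit names (the rule itself is hypothetical) :
    # match a TLS handshake record at the start of the request payload
    acl is_tls req.payload(0,1) -m bin 16
    tcp-request content accept if is_tls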
Willy Tarreau [Mon, 14 Jan 2013 14:56:36 +0000 (15:56 +0100)]
MINOR: http: add new direction-explicit sample fetches for headers and cookies
Since "hdr" and "cookie" were ambiguously referring to the request or response
depending on the context, we need a way to explicitly specify the direction.
By prefixing the fetch names with "req." and "res.", we can now restrict such
fetches to the appropriate direction. At the moment the fetches are explicitly
declared, but later we might think about having an automatic match when "req."
or "res." appears. These explicit fetches are now used by the relevant ACLs.
Willy Tarreau [Fri, 11 Jan 2013 15:56:48 +0000 (16:56 +0100)]
MAJOR: acl: remove the arg_mask from the ACL definition and use the sample fetch's
Now that ACLs solely rely on sample fetch functions, make them use the
same arg mask. All inconsistencies have been fixed separately prior to
this patch, so this patch almost only adds a new pointer indirection
and removes all references to ARG*() in the definitions.
The parsing is still performed by the ACL code though.
Willy Tarreau [Fri, 11 Jan 2013 14:49:37 +0000 (15:49 +0100)]
MAJOR: acl: make all ACLs reference the fetch function via a sample.
ACL fetch functions used to directly reference a fetch function. Now
that all ACL fetches have their sample fetches equivalent, we can make
ACLs reference a sample fetch keyword instead.
In order to simplify the code, a sample keyword name may be NULL if it
is the same as the ACL's, which is the most common case.
A minor change appeared : http_auth always expects one argument, though
the ACL allowed it to be missing and reported it as such afterwards, so
the ACL was fixed to match this. This is not really a bug.
Most of them won't bring much benefit at the moment, or are even aliases of
existing ones ; however, they'll be needed for the ACL->SMP convergence.
A new val_usr() function was added to resolve userlist names into pointers.
The http_auth_group ACL forgot to make its first argument mandatory, so
there was a check in cfgparse to report a vague error. Now that args are
correctly parsed, let's report something more precise.
All urlp* ACLs now support an optional 3rd argument like their sample
counterparts, which is the optional delimiter.
The fetch functions have been renamed "smp_fetch_*".
Some argument controls on the sample keywords have been relaxed so that we
can soon use them for ACLs :
- cookie now accepts an optional name ; it will return the
first matching cookie if the name is not set ;
- same for set-cookie and hdr
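The urlp change above means that both of these now work (a sketch ; the parameter names are made up) :
    acl has_id  urlp(id)    -m found     # default '&' delimiter
    acl has_sid urlp(sid,;) -m found     # explicit ';' delimiter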
Willy Tarreau [Mon, 7 Jan 2013 20:59:07 +0000 (21:59 +0100)]
MEDIUM: samples: move payload-based fetches and ACLs to their own file
The file acl.c is a real mess : it contains both functions to parse and
process ACLs, and some sample extraction functions which act on buffers.
Some other payload analysers were arbitrarily dispatched to proto_tcp.c.
So now we're moving all payload-based fetches and ACLs to payload.c,
which is capable of extracting data from buffers and relies only on
protocol-independent code. That way we can safely inflate this file
and only use the other ones when some fetches are really specific (eg:
HTTP, SSL, ...).
As a result of this cleanup, the following new sample fetches became
available even if they're not really useful :
The function 'acl_fetch_nothing' was wrong and never used anywhere so it
was removed.
The "rdp_cookie" sample fetch used to have a mandatory argument while it
was optional in ACLs, which are supposed to iterate over RDP cookies. So
we're making it optional as a fetch too, and it will return the first one.
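With the argument now optional, both forms work (sketch) :
    stick on rdp_cookie              # first RDP cookie found
    stick on rdp_cookie(mstshash)    # a specific cookie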
Willy Tarreau [Mon, 7 Jan 2013 14:42:20 +0000 (15:42 +0100)]
MEDIUM: samples: use new flags to describe compatibility between fetches and their usages
Sample fetches were relying on two flags, SMP_CAP_REQ/SMP_CAP_RES, to describe
whether they were compatible with request rules or with response rules. This
was never reliable because we need a finer granularity (eg: an HTTP request
method needs to parse an HTTP request, and is available past this point).
Some fetches are also dependent on the context (eg: "hdr" uses the request or
the response depending on where it's involved, causing some ambiguity).
In order to solve this, we need to precisely indicate in fetches what they
use, and their users will have to compare with what they have.
So now we have a bunch of bits indicating where the sample is fetched in the
processing chain, with a few variants indicating for some of them if it is
permanent or volatile (eg: an HTTP status is stored into the transaction so
it is permanent, despite being caught in the response contents).
The fetches also have a second mask indicating their validity domain. This one
is computed from a conversion table at registration time, so there is no need
to do it by hand. This validity domain consists of a bitmask with one bit
set for each usage point in the processing chain. Some provisions were made
for upcoming controls such as connection-based TCP rules which apply on top of
the connection layer but before instantiating the session.
Then everywhere a fetch is used, the bit for the control point is checked in
the fetch's validity domain, and it becomes possible to finely ensure that a
fetch will work or not.
Note that we need these two separate bitfields because some fetches are usable
both in request and response (eg: "hdr", "payload"). So the keyword will have
a "use" field made of a combination of several SMP_USE_* values, which will be
converted into a wider list of SMP_VAL_* flags.
The knowledge of permanent vs dynamic information has disappeared for now, as
it was never used. Later we'll probably reintroduce it differently when
dealing with variables. Its only use at the moment could have been to avoid
caching a dynamic rate measurement, but nothing is cached as of now.
Willy Tarreau [Fri, 4 Jan 2013 15:31:47 +0000 (16:31 +0100)]
MEDIUM: acl: remove flag ACL_MAY_LOOKUP which is improperly used
This flag is used on ACL matches that support looking up patterns in trees.
At the moment, only strings and IPs support tree-based lookups, but the flag
is also randomly set on integers and binary data, and is not even consistently
set on strings or IPs.
Better get rid of this mess by relying only on the matching function to
decide whether or not it supports tree-based lookups ; this is safer and
easier to maintain.
Willy Tarreau [Fri, 29 Mar 2013 11:31:49 +0000 (12:31 +0100)]
BUG/CRITICAL: using HTTP information in tcp-request content may crash the process
During normal HTTP request processing, request buffers are realigned if
there are less than global.maxrewrite bytes available after them, in
order to leave enough room for rewriting headers after the request. This
is done in http_wait_for_request().
However, if some HTTP inspection happens during a "tcp-request content"
rule, this realignment is not performed. In theory this is not a problem
because empty buffers are always aligned and TCP inspection happens at
the beginning of a connection. But with HTTP keep-alive, it also happens
at the beginning of each subsequent request. So if a second request was
pipelined by the client before the first one had a chance to be forwarded,
the second request will not be realigned. Then, http_wait_for_request()
will not perform such a realignment either because the request was
already parsed and marked as such. The consequence of this is that the
rewrite of a sufficient number of such pipelined, unaligned requests may
leave less room past the request being processed than the configured
reserve, which can lead to a buffer overflow if request processing appends
some data past the end of the buffer.
A number of conditions are required for the bug to be triggered :
- HTTP keep-alive must be enabled ;
- HTTP inspection in TCP rules must be used ;
- some request appending rules are needed (reqadd, x-forwarded-for)
- since empty buffers are always realigned, the client must pipeline
enough requests so that the buffer always contains something till
the point where there is no more room for rewriting.
While such a configuration is quite unlikely to be met (which is
confirmed by the bug's lifetime), a few people do use these features
together for very specific usages. And more importantly, writing such
a configuration and the request to attack it is trivial.
A quick workaround consists in forcing keep-alive off by adding
"option httpclose" or "option forceclose" in the frontend. Alternatively,
disabling HTTP-based TCP inspection rules is enough if the application
supports it.
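A minimal sketch of the workaround (the frontend name is hypothetical) :
    frontend fe_http
        option httpclose    # force keep-alive off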
At first glance, this bug does not look like it could lead to remote code
execution, as the overflowing part is controlled by the configuration and
not by the user. But some deeper analysis should be performed to confirm
this. And anyway, corrupting the process' memory and crashing it is quite
trivial.
Special thanks go to Yves Lafon from the W3C who reported this bug and
deployed significant efforts to collect the relevant data needed to
understand it in less than one week.
CVE-2013-1912 was assigned to this issue.
Note that 1.4 is also affected so the fix must be backported.
BUG/MAJOR: http: fix regression introduced by commit d655ffe
Sander Klein reported that since last snapshot, some downloads would
hang from nginx but succeed from apache. The culprit was not too hard
to find given the low number of recent changes affecting the data path.
Commit d655ffe slightly reorganized the HTTP state machine and
introduced this regression. The reason is that we must never jump
into the MSG_DONE case without first flushing remaining data, because
this is not done anymore afterwards. This part is scheduled for
being reorganized since it's totally ugly, especially since we added
compression, and this regression is an illustration of its lack of
readability. The issue is entirely dependent on the server close
sequence, which explains why it was reproducible only with nginx here.
Lukas Tribus [Tue, 2 Apr 2013 14:43:24 +0000 (16:43 +0200)]
BUILD: add explicit support for TFO with USE_TFO
TCP Fast Open is supported in server mode since Linux 3.7, but current
libcs don't define TCP_FASTOPEN=23. Introduce the new USE flag USE_TFO
to define it manually in compat.h. Also note this in the TFO-related
documentation.
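A build sketch (the TARGET value is only an example) :
    make TARGET=linux2628 USE_TFO=1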
BUG/MEDIUM: ssl: improve error processing and reporting in ssl_sock_load_cert_list_file()
fe61656b added the ability to load a list of certificates from a file,
but error control was incomplete and misleading, as some errors such
as missing files were not reported, and errors reported with Alert()
instead of memprintf() were inappropriate and mixed with upper errors.
Also, the code really supports a single SNI filter right now, so let's
correct it and the doc for that, leaving room for later change if needed.
Emmanuel Hocdet [Tue, 22 Jan 2013 14:31:15 +0000 (15:31 +0100)]
MEDIUM: ssl: add mapping from SNI to cert file using "crt-list"
It designates a list of PEM files with an optional list of SNI filters
per certificate, with the following format for each line :
<crtfile>[ <snifilter>]*
Wildcards are supported in the SNI filter. The certificates will be
presented to clients who provide a valid TLS Server Name Indication
field matching one of the SNI filters. If no SNI filter is specified,
the CN and alt subjects are used.
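A sketch of a crt-list file and its use (paths and names are hypothetical ; per the fix above, a single SNI filter per certificate is supported for now) :
    # /etc/haproxy/crt-list.txt
    /etc/haproxy/certs/www.pem      www.example.org
    /etc/haproxy/certs/default.pem
    # haproxy.cfg
    bind :443 ssl crt-list /etc/haproxy/crt-list.txt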
Formerly, if A was replaced by B, and then B by C before
A finished exiting, we didn't wait for B to finish, so it
ended up as a zombie process.
Fix this by waiting for every child we spawn.
BUG/MAJOR: http: use a static storage for sample fetch context
Baptiste Assmann reported that the cook*() ACLs do not work anymore.
The reason is the way we store the hdr_ctx between subsequent calls
to smp_fetch_cookie() since commit 3740635b (1.5-dev10).
The smp->ctx.a[] storage holds up to 8 pointers. It is not meant for
generic storage. We used to store hdr_ctx in the ctx, but while it used
to just fit for smp_fetch_hdr(), it does not for smp_fetch_cookie()
since we stored it at offset 2.
The correct solution is to use this storage to store a pointer to the
current hdr_ctx struct which is statically allocated.
The "osx" target may now be passed in the TARGET variable. It supports
the same features as FreeBSD and allows its users to use the GNU makefile
instead of the platform-specific makefile which lacks some features.
This makes it possible to build haproxy for unknown targets and still have
poll(). If for any reason a target does not support it, just passing
USE_POLL="" disables it.
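Build sketches for both changes :
    make TARGET=osx
    make TARGET=generic USE_POLL=""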
Alex Davies [Sat, 2 Mar 2013 16:04:50 +0000 (16:04 +0000)]
DOCS: Add explanation of intermediate certs to crt parameter
This change makes the "crt" block of the documentation easier to use
for those not clear on what needs to go in what file, specifically for
those using CAs that require intermediate certificates.
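A hypothetical sketch of assembling such a file (the intermediate certificate goes after the server certificate ; file names are made up) :
    cat site.crt intermediate.crt site.key > /etc/haproxy/certs/site.pem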
BUG/MEDIUM: tools: vsnprintf() is not always reliable on Solaris
Seen on Solaris 8, calling vsnprintf() with a null-size results
in the output size not being computed. This causes some random
behaviour including crashes when trying to display error messages
when loading an invalid configuration.
Willy Tarreau [Sun, 31 Mar 2013 12:41:15 +0000 (14:41 +0200)]
BUG/MAJOR: ev_select: disable the select() poller if maxsock > FD_SETSIZE
Some recent glibc updates have added controls on FD_SET/FD_CLR/FD_ISSET
that crash the program if it tries to use a file descriptor larger than
FD_SETSIZE.
For this reason, we now control the compatibility between global.maxsock
and FD_SETSIZE, and refuse to use select() if too many FDs are
expected to be used. Note that on Solaris, FD_SETSIZE is already forced
to 65536, and that FreeBSD and OpenBSD allow it to be redefined, though
this is not needed thanks to kqueue which is much more efficient.
In practice, since poll() is enabled on all targets, it should not cause
any problem, unless it is explicitly disabled.
This change must be backported to 1.4 because the crashes caused by glibc
have already been reported on this version.
Willy Tarreau [Sun, 31 Mar 2013 12:06:57 +0000 (14:06 +0200)]
MEDIUM: poll: do not use FD_* macros anymore
Some recent glibc updates have added controls on FD_SET/FD_CLR/FD_ISSET
that crash the program if it tries to use a file descriptor larger than
FD_SETSIZE.
Do not rely on FD_* macros anymore and replace them with bit fields.
Willy Tarreau [Tue, 26 Mar 2013 00:08:21 +0000 (01:08 +0100)]
BUG/MEDIUM: http: fix another issue caused by http-send-name-header
An issue reported by David Coulson is that when using http-send-name-header,
the response processing would randomly be performed. The issue was first
diagnosed by Cyril Bonté as being related to a time race when processing
the closing of the response.
In practice, the issue is a bit trickier. It happens that
http_send_name_header() did not update msg->sol after a rewrite. This
counter is supposed to point to the beginning of the message's body
once headers are scheduled for being forwarded. And not updating it
means that the first forwarding of the request headers in
http_request_forward_body() does not send the correct count, leaving
some bytes in chn->to_forward.
Then if the server sends its response in a single packet with the
close, the stream interface switches to state SI_ST_DIS which in
turn moves to SI_ST_CLO in process_session(), closing the
outgoing connection. This is detected by http_request_forward_body(),
which then switches the request message to the error state, and syncs
all FSMs and removes any response analyser.
The response analyser being removed, no processing is performed on
the response buffer, which is tunnelled as-is to the client.
Of course, the correct fix consists in having http_send_name_header()
update msg->sol. Normally this ought not to have been needed, but it
is an abuse to modify data already scheduled for being forwarded, so
it is expected that such specific handling has to be done there. Better
not have generic functions deal with such cases, so that it does not
become the standard.
Note: 1.4 does not have this issue even if it does not update the
pointer either, because it forwards from msg->som which is not
updated at the moment the connect() succeeds. So no backport is
required.
Willy Tarreau [Mon, 25 Mar 2013 18:16:31 +0000 (19:16 +0100)]
BUG/MEDIUM: config: ACL compatibility check on "redirect" was wrong
The check was made on "cond" instead of "rule->cond", so it never
emitted any warning since either the rule was NULL or it was set to
the last condition met.
This is 1.5-specific and the bug was introduced by commit 4baae248
in 1.5-dev17, so no backport is needed.
Willy Tarreau [Sun, 24 Mar 2013 06:33:22 +0000 (07:33 +0100)]
BUG/MEDIUM: http: add-header should not emit "-" for empty fields
Patch 6cbbdbf3 fixed the missing "-" delimiters in logs, but it caused
them to be emitted with "http-request add-header", even though it was
correctly fixed for the unique-id format. Fix this by simply removing
LOG_OPT_MANDATORY in this case.
Willy Tarreau [Mon, 11 Mar 2013 00:20:04 +0000 (01:20 +0100)]
MAJOR: tools: support environment variables in addresses
Now that all addresses are parsed using str2sa_range(), it becomes easy
to add support for environment variables and use them everywhere an address
is needed. Environment variables are used as $VAR or ${VAR} as in shell.
Any number of variables may compose an address, allowing various fantasies
such as "fd@${FD_HTTP}" or "${LAN_DC1}.1:80".
These are usable in logs, bind, servers, peers, stats socket, source,
dispatch, and check addresses.
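A configuration sketch with hypothetical variable names :
    frontend fe_http
        bind ${LAN_DC1}.10:80
    backend be_app
        server s1 ${LAN_DC1}.1:80 check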
Willy Tarreau [Sun, 10 Mar 2013 22:51:38 +0000 (23:51 +0100)]
MAJOR: listener: support inheriting a listening fd from the parent
Using the address syntax "fd@<num>", a listener may inherit a file
descriptor that the caller process has already bound and passed as
this number. The fd's socket family is detected using getsockname(),
and the usual initialization is performed through the existing code
for that family, but the socket creation is skipped.
Whether the parent has performed the listen() call or not is not
important as this is detected.
For UNIX sockets, we immediately clear the path after preparing a
socket so that we never remove it in case an abort would happen due
to a late error during startup.
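A sketch, assuming the parent passed a bound socket as fd 4 :
    frontend fe_inherited
        bind fd@4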
Willy Tarreau [Sun, 10 Mar 2013 20:32:12 +0000 (21:32 +0100)]
MEDIUM: tools: support specifying explicit address families in str2sa_range()
This change allows one to force the address family in any address parsed
by str2sa_range() by specifying it as a prefix followed by '@' then the
address. Currently supported address prefixes are 'ipv4@', 'ipv6@', 'unix@'.
This also helps force resolution of host names (when getaddrinfo is used),
and forces the family of the empty address (eg: 'ipv4@' = 0.0.0.0 while
'ipv6@' = ::).
The main benefit is that unix sockets can now get a local name without
being forced to begin with a slash. This is useful during development as
it is no longer necessary to have the stats socket sent to /tmp.
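For example (a sketch) :
    stats socket unix@haproxy.stats    # relative path, no leading slash
    bind ipv6@:8443                    # force the IPv6 family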
Willy Tarreau [Sun, 10 Mar 2013 18:44:48 +0000 (19:44 +0100)]
CLEANUP: config: do not use multiple errmsg at once
Several of the parsing functions made use of multiple errmsg/err_msg
variables which had to be freed, while there is already one in each
function that is freed upon exit. Adapt the code to use the existing
variable exclusively.
Willy Tarreau [Sun, 10 Mar 2013 18:27:44 +0000 (19:27 +0100)]
CLEANUP: minor cleanup in str2sa_range() and str2ip()
Don't use separate statically allocated addresses for str2ip and
str2sa_range ; use the same one. The inet and unix code paths have been
split a little better to improve readability.
Willy Tarreau [Sun, 10 Mar 2013 17:51:54 +0000 (18:51 +0100)]
MEDIUM: config: add complete support for str2sa_range() in 'source' and 'usesrc'
The 'source' and 'usesrc' statements now completely rely on str2sa_range() to
parse an address. A test is made to ensure that the address family supports
connect().
Willy Tarreau [Mon, 4 Mar 2013 18:56:20 +0000 (19:56 +0100)]
MEDIUM: config: make str2listener() use str2sa_range() to parse unix addresses
Now that str2sa_range() knows how to parse UNIX addresses, make str2listener()
use it. It simplifies the function. Next step consists in unifying the error
handling to further simplify the call.
Tests have been done and show that unix sockets are correctly handled, with
and without prefixes, both for global stats and normal "bind" statements.
Willy Tarreau [Mon, 4 Mar 2013 18:48:14 +0000 (19:48 +0100)]
MEDIUM: tools: make str2sa_range() parse unix addresses too
str2sa_range() now considers that any address beginning with '/' is a UNIX
address. It is compatible with all callers at the moment since all of them
perform this test and use a different parser for such addresses. However,
some parsers (eg: servers) still don't check for unix addresses.
Willy Tarreau [Mon, 4 Mar 2013 17:22:00 +0000 (18:22 +0100)]
MINOR: tools: prepare str2sa_range() to accept a prefix
We'll need str2sa_range() to support a prefix for unix sockets. Since
we don't always want to use it (eg: stats socket), let's not take it
unconditionally from global but let the caller pass it.
Willy Tarreau [Mon, 4 Mar 2013 19:07:44 +0000 (20:07 +0100)]
BUG/MEDIUM: checks: don't call connect() on unsupported address families
At the moment, all address families supported on a "server" statement support
a connect() method, but this will soon change with the generalization of
str2sa_range(). Checks currently call ->connect() unconditionally so let's
add a check for this.
Willy Tarreau [Mon, 4 Mar 2013 18:53:29 +0000 (19:53 +0100)]
BUG/MEDIUM: stats: never apply "unix-bind prefix" to the global stats socket
The "unix-bind prefix" feature was made for explicit "bind" statements. Since
the stats socket was changed to use str2listener(), it implicitly inherited
from this feature. But both are defined in the global section, and we don't
want them to be position-dependent.
So let's make str2listener() explicitly not apply the unix-bind prefix to the
global stats frontend.
This only affects 1.5-dev so it does not need any backport.
Willy Tarreau [Wed, 6 Mar 2013 14:28:17 +0000 (15:28 +0100)]
BUG/MEDIUM: tools: fix bad character handling in str2sa_range()
Commit d4448bc8 brought support for parsing port ranges, but invalid
characters are not properly handled and can result in a crash while
parsing the configuration if an invalid character is present in the
port, because the return value is set to NULL then dereferenced.
Willy Tarreau [Mon, 4 Mar 2013 06:31:08 +0000 (07:31 +0100)]
BUG/MINOR: syscall: fix NR_accept4 system call on sparc/linux
An invalid copy-paste called it NR_splice instead of NR_accept4.
This does not lead to real issues because if this define is used,
then the code cannot compile since NR_accept4 is still missing.
Willy Tarreau [Thu, 21 Feb 2013 06:46:09 +0000 (07:46 +0100)]
MINOR: ssl: add a global tunable for the max SSL/TLS record size
Add new tunable "tune.ssl.maxrecord".
Over SSL/TLS, the client can decipher the data only once it has received
a full record. With large records, it means that clients might have to
download up to 16kB of data before starting to process them. Limiting the
record size can improve page load times on browsers located over high
latency or low bandwidth networks. It is suggested to find optimal values
which fit into 1 or 2 TCP segments (generally 1448 bytes over Ethernet
with TCP timestamps enabled, or 1460 when timestamps are disabled), keeping
in mind that SSL/TLS adds some overhead. Typical values of 1419 and
gave good results during tests. Use "strace -e trace=write" to find the
best value.
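Configuration sketch :
    global
        tune.ssl.maxrecord 1419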
Willy Tarreau [Wed, 20 Feb 2013 16:26:02 +0000 (17:26 +0100)]
MEDIUM: config: make use of str2sa_range() instead of str2sa()
When parsing the config, we now use str2sa_range() to detect when
ranges or port offsets were improperly used. Among the new checks
are "log", "source", "addr", "usesrc" which previously didn't check
for extra parameters.
Willy Tarreau [Wed, 20 Feb 2013 14:55:15 +0000 (15:55 +0100)]
MEDIUM: tools: make str2sa_range support all address syntaxes
Right now we have multiple methods for parsing IP addresses in the
configuration. This is quite painful. This patch aims at adapting
str2sa_range() to make it support all formats, so that the callers
perform the appropriate tests on the return values. str2sa() was
changed to simply return str2sa_range().
The output values are now the following ones (taken from the comment
on top of the function).
Converts <str> to a locally allocated struct sockaddr_storage *, and a port
range or offset consisting of two integers that the caller will have to
check to find the relevant input format. The following formats are supported :
The detection of a port range or increment by the caller is made by
comparing <low> and <high>. If both are equal, then port 0 means no port
was specified. The caller may pass NULL for <low> and <high> if it is not
interested in retrieving port ranges.
Note that <addr> above may also be :
- empty ("") => family will be AF_INET and address will be INADDR_ANY
- "*" => family will be AF_INET and address will be INADDR_ANY
- "::" => family will be AF_INET6 and address will be IN6ADDR_ANY
- a host name => family and address will depend on host name resolving.
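For instance, a port range on a bind line is returned through <low> and <high> (sketch) :
    listen ftp_data
        bind 10.0.0.1:20000-20099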
Willy Tarreau [Sat, 16 Feb 2013 22:49:04 +0000 (23:49 +0100)]
MEDIUM: halog: add support for counting per source address (-ic)
This is the same as -uc except that instead of counting URLs, it
counts source addresses. The reported times are request times and
not response times.
The code becomes heavily ugly, the url struct is being abused to
store an address, and there are no more bit fields available. The
code needs a major revamp.
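Usage sketch :
    halog -ic < haproxy.log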
Sean Carey [Fri, 15 Feb 2013 22:39:18 +0000 (23:39 +0100)]
BUG/MEDIUM: config: fix parser crash with bad bind or server address
If an address is improperly formatted in a bind or server statement
and haproxy is built to use getaddrinfo, then a crash may occur
upon the call to freeaddrinfo().
Thanks to Jon Meredith for helping me patch this for SmartOS,
I am not a C/GDB wizard.
Willy Tarreau [Wed, 13 Feb 2013 11:39:06 +0000 (12:39 +0100)]
BUILD: improve the makefile's support for libpcre
Currently when cross-compiling, it's generally necessary to force
PCREDIR, to which the Makefile automatically appends /include and /lib.
Unfortunately, on most 64-bit Linux distros the lib path is /lib64
instead, which is really annoying to fix in the Makefile.
So now we're computing PCRE_INC and PCRE_LIB from PCREDIR and using
these instead. If one wants to force the paths individually, it is
possible to set them instead of setting PCREDIR. The old behaviour
of not passing anything to the compiler when PCREDIR is forced to blank
is preserved.
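A cross-build sketch with split paths (the paths are hypothetical) :
    make TARGET=linux2628 USE_PCRE=1 \
         PCRE_INC=/opt/cross/usr/include PCRE_LIB=/opt/cross/usr/lib64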
Willy Tarreau [Wed, 13 Feb 2013 11:47:12 +0000 (12:47 +0100)]
BUILD: fix a warning emitted by isblank() on non-c99 compilers
Commit a2b9dad introduced use of isblank() which is not present everywhere
and requires -std=c99 or various other defines. Here on gcc-3.4 with glibc
2.3 it emits a warning though it works :
src/checks.c: In function 'event_srv_chk_r':
src/checks.c:1007: warning: implicit declaration of function 'isblank'
This macro matches only 2 values, so better replace it with an explicit match.
Simon Horman [Tue, 12 Feb 2013 01:45:54 +0000 (10:45 +0900)]
MEDIUM: checks: Add agent health check
Support an agent health check performed by opening a TCP socket to a
pre-defined port and reading an ASCII string. The string should have one of
the following forms :
i.   An ASCII representation of a positive integer percentage,
     e.g. "75%".
     Values in this format will set the weight proportional to the initial
     weight of a server as configured when haproxy starts.
ii.  The string "drain".
     This will cause the weight of a server to be set to 0, and thus it will
     not accept any new connections other than those that are accepted via
     persistence.
iii. The string "down", optionally followed by a description string.
     Mark the server as down and log the description string as the reason.
iv.  The string "stopped", optionally followed by a description string.
     This currently has the same behaviour as down (iii).
v.   The string "fail", optionally followed by a description string.
     This currently has the same behaviour as down (iii).
An agent health check may be configured using "option lb-agent-chk".
The use of an alternate check port, used to obtain the agent health check
information described above as opposed to the port of the service,
may be useful in conjunction with this option.
e.g.
option lb-agent-chk
server http1_1 10.0.0.10:80 check port 10000 weight 100
Simon Horman [Tue, 12 Feb 2013 01:45:53 +0000 (10:45 +0900)]
MEDIUM: server: Tighten up parsing of weight string
Detect:
* Empty weight string, including no digits before '%' in relative
weight string
* Trailing garbage, including between the last integer and '%'
in relative weights
The motivation for this is to allow the weight string to be safely
logged if it is successfully parsed by this function.
Simon Horman [Tue, 12 Feb 2013 01:45:51 +0000 (10:45 +0900)]
MEDIUM: server: Break out set weight processing code
Break out set weight processing code.
This is in preparation for reusing the code.
Also, remove duplicate check in nested if clauses.
(px->lbprm.algo & BE_LB_PROP_DYN) is checked by
the immediate outer if clause, so there is no need
to check it a second time.
Simon Horman [Wed, 13 Feb 2013 08:48:00 +0000 (17:48 +0900)]
BUG/MINOR: Correct logic in cut_crlf()
This corrects what appears to be logic errors in cut_crlf().
I assume that the intention of this function is to truncate a
string at the first CR or LF. However, LFs are currently ignored.
Also use '\0' instead of 0 as the null character, a cosmetic change.
Cc: Krzysztof Piotr Oledzki <ole@ans.pl>
Signed-off-by: Simon Horman <horms@verge.net.au>
[WT: this fix may be backported to 1.4 too]
Currently, to reload haproxy configuration, you have to use "-sf".
There is a problem with this way of doing things. First of all, in the systemd world,
reload commands should be "oneshot" ones, which means they should not be the new main
process but rather a tool which makes a call to it and then exits. With the current approach,
the reload command is the new main command and moreover, it makes the previous one exit.
Systemd only tracks the main program ; seeing it end, it assumes it either finished or
failed, and kills everything remaining as a garbage collector. We then end up with no
haproxy running at all.
This patch adds a wrapper around haproxy ; no changes at all have been made to haproxy
itself, so it's not intrusive and doesn't change anything for other hosts. What this
wrapper basically does is launch haproxy as a child, listen for the SIGUSR2 signal
(chosen so as not to conflict with haproxy itself), and spawn a new haproxy with "-sf"
as a child to relay the first one.
MEDIUM: New cli option -Ds for systemd compatibility
This patch adds a new option "-Ds" which is exactly like "-D", but instead of
forking n times to get n jobs running and then exiting, it waits for all the
children it just created. With this done, haproxy becomes more systemd-compliant,
without changing anything for other systems.
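Usage sketch (paths hypothetical) :
    haproxy -Ds -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid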