OPTIM: log: resolve logformat options during postparsing
In lf_buildctx_prepare(), we perform costly bitwise operations for every
node to resolve node options and check for incompatibilities with the
global options.
In fact, all this logic may safely be performed during postparsing. This
is what we're doing in this commit. Doing so saves us from unnecessary
runtime checks and could help speed up sess_build_logline().
Since the checks are not as costly as before (they are now performed
during postparsing and not on the log building path anymore), a
complementary check for the OPT_HTTP vs OPT_ENCODE incompatibility was
added: encoding is ignored if the HTTP option is set, unless the HTTP
option wasn't set globally while encoding was, in which case encoding
takes precedence.
Thanks to this patch, lf_buildctx_prepare() now only takes care of
assigning the proper typecast and option settings depending on whether
it is used from a global or per-node context, and prepares the
CBOR-specific structure members when the CBOR encode option is set.
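To illustrate the idea, here is a minimal sketch of the kind of check that
can now run once at postparsing time instead of for each log line; flag and
variable names are illustrative, not the exact HAProxy code:

    /* resolve per-node options once, at postparsing time */
    if ((node->options & LOG_OPT_HTTP) && (node->options & LOG_OPT_ENCODE)) {
        if (!(g_options & LOG_OPT_HTTP) && (g_options & LOG_OPT_ENCODE))
            node->options &= ~LOG_OPT_HTTP;   /* global encoding takes precedence */
        else
            node->options &= ~LOG_OPT_ENCODE; /* HTTP option set: encoding is ignored */
    }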
Willy Tarreau [Sat, 4 May 2024 08:16:05 +0000 (10:16 +0200)]
[RELEASE] Released version 3.0-dev10
Released version 3.0-dev10 with the following main changes :
- BUG/MEDIUM: cache: Vary not working properly on anything other than accept-encoding
- REGTESTS: cache: Add test on 'vary' other than accept-encoding
- BUG/MINOR: stats: replace objt_* by __objt_* macros
- CLEANUP: tools/cbor: rename cbor_encode_ctx struct members
- MINOR: log/cbor: _lf_cbor_encode_byte() explicitly requires non-NULL ctx
- BUG/MINOR: log: fix global lf_expr node options behavior
- CLEANUP: log: add a macro to know if a lf_node is configurable
- MINOR: httpclient: allow to use absolute URI with new flag HC_F_HTTPROXY
- MINOR: ssl: introduce ocsp_update.http_proxy for ocsp-update keyword
- BUG/MINOR: log/encode: consider global options for key encoding
- BUG/MINOR: log/encode: fix potential NULL-dereference in LOGCHAR()
- BUG/MINOR: log: fix global lf_expr node options behavior (2nd try)
- MINOR: log/cbor: _lf_cbor_encode_byte() explicitly requires non-NULL ctx (again)
- BUG/MEDIUM: log: don't ignore disabled node's options
- BUG/MINOR: stconn: don't wake up an applet waiting on buffer allocation
- MINOR: sock: rename sock to sock_fd in sock_create_server_socket
- MEDIUM: proto_uxst: take in account server namespace
- MEDIUM: unix sock: use my_socketat to create bind socket
- MINOR: sock_set_mark: take sock family in account
- MEDIUM: proto: make common fd checks in sock_create_server_socket
- MINOR: sock: add EPERM case in sock_handle_system_err
- MINOR: capabilities: add cap_sys_admin support
- CLEANUP: ssl: clean the includes in ssl_ocsp.c
- CLEANUP: ssl: move the global ocsp-update options parsing to ssl_ocsp.c
- MINOR: stats: fix visual alignment for stat_cols_px definition
- MINOR: stats: convert req_tot as generic column
- MINOR: stats: prepare stats-file support for values other than FN_COUNTER
- MINOR: counters: move freq-ctr from proxy/server into counters struct
- MINOR: stats: support rate in stats-file
- MINOR: stats: convert rate as generic column for proxy stats
- MINOR: counters: move last_change into counters struct
- MINOR: stats: support age in stats-file
- MINOR: stats: convert age as generic column for proxy stat
- CLEANUP: ssl: rename new_ckch_store_load_files_path() to ckch_store_new_load_files_path()
- MINOR: ssl: rename ocsp_update.http_proxy into ocsp-update.httpproxy
- REORG: stats: define stats-proxy source module
- MINOR: stats: extract proxy clear-counter in a dedicated function
- REGTESTS: stats: add test stats-file counters preload
- CI: netbsd: adjust packages after NetBSD-10 released
- CLEANUP: assorted typo fixes in the code and comments
- REGTESTS: replace REQUIRE_VERSION by version_atleast
- MEDIUM: log: optimizing tmp->type handling in sess_build_logline()
- BUG/MINOR: log: prevent double spaces emission in sess_build_logline()
- OPTIM: log: declare empty buffer as global variable
- OPTIM: log: use thread local lf_buildctx to stop pushing it on the stack
- OPTIM: log: use lf_buildctx's buffer instead of temporary stack buffers
- OPTIM: log: speedup date printing in sess_build_logline() when no encoding is used
OPTIM: log: speedup date printing in sess_build_logline() when no encoding is used
In sess_build_logline(), we have multiple fields such as '%t' that build
a fixed-length string out of a date struct and then print it using
lf_rawtext(). In fact, printing it using lf_rawtext() is only mandatory
to deal with encoding options, but when no encoding is used we can output
the result to tmplog directly. Since most dates generate between 25 and 30
chars, doing so spares us from writing them twice and could help make
sess_build_logline() a bit faster when no encoding is used (to match the
performance of the pre-encoding patch series).
OPTIM: log: use lf_buildctx's buffer instead of temporary stack buffers
Now that lf_buildctx isn't pushed on the stack anymore, let's take this
opportunity to store a small 256-byte buffer within it, and then use this
buffer as a general-purpose buffer to build fixed-length strings that are
then printed using the lf_{raw}text() functions. By doing so we stop
relying on temporary stack buffers.
OPTIM: log: use thread local lf_buildctx to stop pushing it on the stack
Following the previous commit's logic, let's move the lf_buildctx ctx away
from sess_build_logline() to stop abusing the stack by pushing a large
structure each time sess_build_logline() is called. Also, don't memset
the structure on each invocation, but only reset members explicitly when
required.
For that we now declare one static lf_buildctx per thread (using
THREAD_LOCAL) and make sess_build_logline() refer to it using a pointer.
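A minimal sketch of the idea (the declaration below is illustrative, the
real structure holds more members):

    /* one build context per thread, reused across calls */
    static THREAD_LOCAL struct lf_buildctx lf_buildctx;

    /* inside sess_build_logline(): */
    struct lf_buildctx *ctx = &lf_buildctx;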
OPTIM: log: declare empty buffer as global variable
The 'empty' buffer is used in sess_build_logline() inside a loop, and since
it is only read from and never modified, until recently it ended up being
cached most of the time and didn't cause overhead despite the systematic
push on the stack.
However, due to the recent encoding work and the new variables added on the
stack, we're starting to reach a stack limit, and declaring the 'empty'
buffer within the loop seems to cause non-negligible CPU overhead.
Since the variable isn't modified during log generation, let's declare
'empty' buffer as a global variable outside from sess_build_logline()
to prevent pushing it on the stack for each node evaluation.
BUG/MINOR: log: prevent double spaces emission in sess_build_logline()
Christian reported in GH #2556 that since 3.0-dev double spaces may be
found in log messages in some cases where it was not the case before.
As we were able to easily reproduce, a quick bisect led us to c6a7138
("MINOR: log: simplify last_isspace in sess_build_logline()"). While
it is true that all switch cases set the last_isspace variable to 0,
there was a subtlety for some fields such as '%hr', '%hrl', '%hs' or
'%hsl' and I overlooked it. Indeed, for '%hr', last_isspace was only set
to 0 if data was emitted, else the assignment didn't occur.
But with c6a7138, last_isspace is always set to 0 as long as the current
node type is not a separator. Because of that, if no data is emitted for
the current node value, and a space was already emitted prior to the
current node, then an extra space could be emitted after the node,
resulting in two spaces being emitted.
Note that while c6a7138 introduces a slight behavior regression regarding
the last_isspace logic with the specific fields mentioned above, this
behavior could already be triggered with a failing or empty logformat
node sample expression. Consider this logformat expression:
log-format "%{-M}o | %[str()] |"
str() will not print anything, and since we disabled mandatory option with
'-M', nothing gets printed for the node sample expression. As a result, we
have the following output:
"| |"
Instead of (when mandatory option is enabled):
"| - |"
Thus in order to stick to the historical behavior, systematically set
last_isspace to 0 for EXPR nodes, and only set last_isspace to 0 when
data was written for TAG nodes. This way, '%hr', '%hrl', '%hs' or
'%hsl' should behave as before.
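A simplified sketch of the resulting behavior (illustrative, not the actual
code layout):

    switch (node->type) {
    case LOG_FMT_SEPARATOR:
        if (!last_isspace) {
            LOGCHAR(' ');
            last_isspace = 1;
        }
        break;
    case LOG_FMT_TAG:
        start = tmplog;
        /* ... emit the tag value ... */
        if (tmplog != start)
            last_isspace = 0;  /* only cleared when data was written */
        break;
    case LOG_FMT_EXPR:
        /* ... emit the expression value ... */
        last_isspace = 0;      /* systematically cleared */
        break;
    }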
MEDIUM: log: optimizing tmp->type handling in sess_build_logline()
Instead of chaining 2 switch-cases and performing encoding checks for all
nodes, let's actually split the logic in 2: first handle simple node types
(text/separator), and then handle dynamic node types (tag, expr). Encoding
options are only evaluated for dynamic node types.
Also, last_isspace is always set to 0 after the next_fmt label, since this
label is only used for dynamic nodes, thus != LOG_FMT_SEPARATOR.
Since LF_NODE_WITH_OPT() macro (which was introduced recently) is now
unused, let's get rid of it.
No functional change should be expected.
(Use diff -w to check patch changes since reindentation makes the patch
look heavy, but in fact it remains fairly small)
REGTESTS: stats: add test stats-file counters preload
Define a simple regtest to check stats-file loading on startup. A sample
stats-file is written with some invalid values which should be silently
ignored.
MINOR: stats: extract proxy clear-counter in a dedicated function
Split the code related to proxies list looping in cli_parse_clear_counters()
into a new dedicated function. This function is placed in the new module
stats-proxy.
REORG: stats: define stats-proxy source module
Create a new module stats-proxy. Move stats functions related to proxies
list looping into it. This allows the stats source file to be reduced,
dividing its size by half.
Extend generic stat column support to be able to fully support the age
stats type. Several changes were required.
On output, me_generate_field() has been updated to report the difference
between the current tick and the stored value for the FN_AGE type. Also, if
an age stat is hidden in show stats, -1 is returned instead of an empty
metric, which is the value marking an age as unset.
On counters preload, load_ctr() was updated to handle FN_AGE. A similar
subtraction is applied to the current tick value.
MINOR: counters: move last_change into counters struct
last_change was a member present in both the proxy and server structs. It
is used as an age statistic to report the last update of the object.
Move last_change into fe_counters/be_counters. This is necessary to be
able to manipulate it through a generic stat column and report it into
the stats-file.
Note that there is a change for the proxy structure with now 2 different
last_change values, on the frontend and backend sides. Special care was
taken to ensure that the value is initialized only on the proxy side. The
other value is set to 0 unless a listen proxy is instantiated. For the
moment, only the backend counter is reported in stats. However, with now
two distinct values, stats could be extended to report it on both sides.
MINOR: stats: convert rate as generic column for proxy stats
Convert every FN_RATE in stat_cols_px[] to a generic column. Thanks to the
prior patch, their values are automatically dumped into the stats-file and
the corresponding freq-ctr is preloaded on process startup.
MINOR: stats: support rate in stats-file
Implement support for the FN_RATE stat column in the stats-file.
For the output part, only a minimal change is required. Reuse the function
read_freq_ctr() to print the same value in both the stats output and the
stats-file dump.
For counter preloading, define a new utility function preload_freq_ctr().
This can be used to initialize a freq-ctr type by preloading the previous
period value. Reuse this function in load_ctr() during stats-file parsing.
At the moment, no rate column is defined as generic. Thus, this commit
does not bring any functional change. This will change as soon as FN_RATE
columns are converted to generic columns.
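A minimal sketch of the preloading idea, assuming a simplified freq counter
made of a current and a previous period (the real preload_freq_ctr() and
freq_ctr layout may differ):

    static void preload_freq_ctr(struct freq_ctr *ctr, unsigned int prev)
    {
        ctr->prev_ctr  = prev;     /* previous period value read from the stats-file */
        ctr->curr_ctr  = 0;        /* nothing counted yet in the current period */
        ctr->curr_tick = now_ms;   /* anchor the current period on "now" */
    }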
MINOR: counters: move freq-ctr from proxy/server into counters struct
Move the freq-ctr fields defined in the proxy or server structures into
their dedicated fe_counters/be_counters structs.
Functionally, no change here. This commit will allow rate stat columns to
be converted to generic ones, which is mandatory to manipulate them in
the stats-file.
MINOR: stats: prepare stats-file support for values other than FN_COUNTER
Currently, only FN_COUNTER stats are dumped and preloaded via a stats-file.
Thus in several places we relied on the assumption that only FN_COUNTER
is valid in a stats-file context.
New stats types will soon be implemented, as they are also eligible for
statistics reloading on process startup. Thus, prepare the stats-file
functions to remove any FN_COUNTER restriction.
As part of this change, generate_stat_tree() now uses stcol_is_generic()
for stats name tree indexing before stats-file parsing.
Also related to stats-file parsing, the individual counter preloading step
has been extracted from line parsing into a dedicated new function,
load_ctr(). This will allow it to be extended to support multiple counter
preloading mechanisms depending on the stats type.
MINOR: stats: convert req_tot as generic column
The req_tot counter is a special case as it is not managed identically
on the frontend and backend sides.
On the backend side, this metric is available directly in be_counters,
which allows a generic stat column definition to be used.
On the frontend side however, the metric value is an aggregate of
multiple fe_counters values. This has been the case since the split
between HTTP versions introduced in the following patch :
This difference cannot be handled automatically by me_generate_field().
Add a special case in the function to produce it on the frontend side
reusing the aggregated value. This is not done however for the stats-file
as there is no counter to preload.
MINOR: capabilities: add cap_sys_admin support
If the 'namespace' keyword is used in the backend server settings and/or in
the bind string, it means that the haproxy process will call setns() to
change its default namespace to the configured one and then create a
socket in this new namespace. The setns() syscall requires the
CAP_SYS_ADMIN capability in the process Effective set (see man 2 setns).
Otherwise, the process must be run as root.
To avoid running haproxy as root, let's add the cap_sys_admin capability in
the same way as we already added support for some other network
capabilities.
As CAP_SYS_ADMIN belongs to the CAP_SYS_* capabilities type, let's add a
separate flag LSTCHK_SYSADM for it. This flag is set if the 'namespace'
keyword was found during configuration parsing. The flag may be unset only
in prepare_caps_for_setuid() or in prepare_caps_from_permitted_set(), which
inspect the process EUID/RUID and its Effective and Permitted capability
sets.
If the system doesn't support Linux capabilities or 'cap_sys_admin' was not
set via 'setcap', but the 'namespace' keyword is present in the
configuration, we keep the previous strict behaviour. The process, having
changed its uid to a non-privileged user, will terminate with an alert.
This alert invites the user to recheck the configuration.
In the case where haproxy starts and runs under a non-root user and
'cap_sys_admin' is not set, but the 'namespace' keyword is present, this
patch does not change the previous behaviour either. We still let the user
try the configuration, but we inform via a warning that unexpected things,
like socket creation errors, may occur.
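As an illustration (the binary path, backend and namespace names below are
examples, not taken from the patch), granting the capability on the binary
and referencing a namespace could look like this:

    # grant the file capability so setns() is allowed without running as root
    setcap cap_sys_admin+ep /usr/local/sbin/haproxy

    backend be_app
        server app1 192.168.0.10:8080 namespace appnet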
MINOR: sock: add EPERM case in sock_handle_system_err
setns() may return EPERM if the thread that tries to move into a different
namespace does not have the CAP_SYS_ADMIN capability in its Effective set.
So, extending sock_handle_system_err() with this error allows an
appropriate log message to be sent and SF_ERR_PRXCOND (SC termination
flag in the log) to be set as the stream termination error code. This error
code can simply be checked with SF_ERR_MASK at the protocol layer.
MEDIUM: proto: make common fd checks in sock_create_server_socket
quic_connect_server(), tcp_connect_server() and uxst_connect_server()
duplicate the same code to check the different errnos that socket() and
setns() may return. They also duplicate some runtime condition checks
applied to the obtained server socket fd.
So, in order to remove these duplications and to improve code readability,
let's encapsulate the handling of the socket() and setns() errnos in
sock_handle_system_err(). It must be called just before the fd's runtime
condition checks, which we also move into sock_create_server_socket() for
the same reason.
MINOR: sock_set_mark: take sock family in account
The SO_MARK, SO_USER_COOKIE and SO_RTABLE socket options (used to set a
special mark/ID on the socket in order to perform mark-based routing) are
only supported by AF_INET sockets. So, let's check the socket address
family when entering this function.
MEDIUM: unix sock: use my_socketat to create bind socket
It is better to use my_socket_at() in order to create the UNIX listener's
socket. my_socket_at() takes into account a network namespace that may be
configured for a frontend in the bind line:
frontend fe_foo
...
bind uxst@frontend.sock user haproxy group haproxy mode 660 namespace frontend
This way, namespace-aware applications such as netstat will see this
listening socket in the 'frontend' namespace and not in the root namespace
as was the case before.
It is important to mention that the fixes in the Linux kernel referenced
above allow connecting to this listener's socket from the root namespace
and from any other namespace. A UNIX domain socket is protected by its
permission set, which must be set with caution on its inode.
MEDIUM: proto_uxst: take in account server namespace
It is better to use sock_create_server_socket() in the UNIX stream protocol
implementation, as this function calls my_socket_at() and the latter takes
into account the server network namespace, which could be configured as in
the example below:
backend be_bar
...
server rpicam0 /run/ustreamer.sock namespace foonet
So, for a UNIX domain socket used as the address of some backend server,
this patch makes it possible to perform the connect() to this backend
server from the same network namespace where the server is running, or
where its listening socket was created.
Using sock_create_server_socket() in the UNIX stream protocol
implementation also makes the code of uxst_connect_server() more uniform
with tcp_connect_server() and quic_connect_server().
MINOR: sock: rename sock to sock_fd in sock_create_server_socket
Renaming sock to sock_fd makes it clearer that sock_create_server_socket()
returns the fd of the newly created server socket and that we then check
this fd.
As we heavily use the "fd" variable name in all protocol implementations,
let's prefix this one with the name of its object file: sock.o.
BUG/MINOR: stconn: don't wake up an applet waiting on buffer allocation
Since the extension of the buffers API to applets in 3.0-dev, an applet
may find itself unable to allocate a buffer, and will block respectively
on APPCTX_FL_OUTBLK_ALLOC or APPCTX_FL_INBLK_ALLOC depending on the
direction. However the code in sc_applet_process() doesn't consider this
situation when deciding to wake up an applet, so when the condition
arises, the applet keeps ringing and is killed by the loop detector.
The fix is trivial and simply consists in checking for the flags above.
No backport is needed since this is new in 3.0.
In 3f2e8d0ed ("MEDIUM: log: lf_* build helpers now take a ctx argument")
I made a mistake, because starting with this commit it is no longer
possible from a node to disable global logformat options.
The result is that when an option is set globally, it cannot be disabled
anymore.
For instance, it is not possible to do this anymore:
log-format "%{+X}o %{-X}Ts"
The original intent was to prevent encoding options from being
disabled once enabled globally, because when encoding is enabled globally
we start the object enumeration right away (ie: in CBOR and JSON we
announce the dynamic map, and for each node we announce the key..), thus it
doesn't make sense to mix encoding types there. This doesn't apply when
encoding is only used per-node, in which case only the value gets encoded,
so it remains possible to print one value in a JSON/CBOR-compatible format
while the next one is printed as-is.
Thus, to restore the original behavior, slightly change the logic in
lf_buildctx_prepare() so that only global encoding options take the
precedence over node's options (instead of all options).
MINOR: log/cbor: _lf_cbor_encode_byte() explicitly requires non-NULL ctx (again)
The BUG_ON() statement that was added in 9bdea51 ("MINOR: log/cbor:
_lf_cbor_encode_byte() explicitly requires non-NULL ctx") isn't
sufficient as Coverity still thinks the lf_buildctx itself may be NULL
as shown in GH #2554. In fact the original report complains about the
lf_buildctx itself and I didn't understand it properly, so let's add
another check in the BUG_ON() to ensure both cbor_ctx and cbor_ctx->ctx
are not NULL, since a NULL there is not expected when used properly.
BUG/MINOR: log: fix global lf_expr node options behavior (2nd try)
In 98b44e8 ("BUG/MINOR: log: fix global lf_expr node options behavior"),
I properly restored global node options behavior for when encoding is
not used, however the fix is not optimal when encoding is involved:
Indeed, encoding logic in sess_build_logline() relies on global node
options to know if encoding must be handled expression-wide or
individually. However, because of the above fix, if an expression is
made of 1 or multiple nodes that all set an encoding option manually
(without '%o'), we consider that the option was set globally, but
that's probably not what the user intended. Instead we should only
evaluate global options from '%o', so that it remains possible to
skip global encoding when needed.
BUG/MINOR: log/encode: fix potential NULL-dereference in LOGCHAR()
When CBOR encoding was added in c614fd3b9 ("MINOR: log: add +cbor encoding
option"), in LOGCHAR() we forgot to check that we don't assign the NULL
value to tmplog (as we assume that tmplog cannot be NULL at the end of
sess_build_logline()).
BUG/MINOR: log/encode: consider global options for key encoding
In sess_build_logline(), contrary to what's stated in the comment
"only consider global ctx for key encoding", we check for
LOG_OPT_ENCODE flag on the current ctx options instead of global
ones. Because of this, we could end up doing the wrong thing if the
previous node had encoding enabled but it isn't set globally for
instance.
To fix the issue, let's simply check the presence of the flag on
g_options before entering the "key encoding" block.
This bug was introduced with 3f7c8387 ("MINOR: log: add +json encoding
option"), no backport needed.
MINOR: ssl: introduce ocsp_update.http_proxy for ocsp-update keyword
The ocsp_update.http_proxy global option allows an HTTP proxy address to
be set, which will be used to send the OCSP update request with an
absolute-form URI.
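A hypothetical configuration excerpt (the proxy address is illustrative;
note that the keyword was later renamed to ocsp-update.httpproxy, as listed
in the changelog above):

    global
        ocsp_update.http_proxy 127.0.0.1:3128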
CLEANUP: log: add a macro to know if a lf_node is configurable
LF_NODE_WITH_OPT(node) returns true if the node's options may be set and
thus should be considered. The logic is based on the logformat node's type:
for now only TAG and EXPR nodes can be configured.
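An illustrative sketch of the macro's intent (the exact type names may
differ in the code):

    #define LF_NODE_WITH_OPT(node) \
            ((node)->type == LOG_FMT_TAG || (node)->type == LOG_FMT_EXPR)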
BUG/MINOR: log: fix global lf_expr node options behavior
In 507223d5 ("MINOR: log: global lf_expr node options"), a mistake was
made because it was assumed that only the last occurence of %o
(LOG_FMT_GLOBAL) should be kept as global node options.
However, although not documented, it is possible to have multiple %o
within a single logformat expression to change the global settings on the
fly.
Prior to 3f2e8d0ed ("MEDIUM: log: lf_* build helpers now take a ctx
argument"), this would output something like this:
test1=18B test2=395 test3=18B
This is because the global options are properly updated as the lf_expr
string is parsed. But now, due to 507223d5 and 3f2e8d0ed, only the last %o
occurrence is considered. With the above example, this gives:
test1=18B test2=18B test3=18B
To restore the historical behavior, let's partially revert 507223d5: to
compute the global node options, we now start with all options enabled, and
then for each configurable node in lf_expr_postcheck() we keep the options
common to the current node and the previous nodes using AND masking. This
way we really end up with the options common to all nodes.
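A minimal sketch of the AND-masking idea, with illustrative names (the real
lf_expr_postcheck() obviously does more than this):

    uint64_t g_options = ~0ULL;            /* start with all options enabled */

    list_for_each_entry(node, &expr->nodes.list, list) {
        if (LF_NODE_WITH_OPT(node))
            g_options &= node->options;    /* keep only options common to every node */
    }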
MINOR: log/cbor: _lf_cbor_encode_byte() explicitly requires non-NULL ctx
As shown in GH #2550, Coverity is tempted to think that a NULL dereference
can occur in _lf_cbor_encode_byte() due to the user ctx being dereferenced
from cbor_ctx, while Coverity thinks that cbor_ctx may be NULL.
In practice this cannot happen, because _lf_cbor_encode_byte() is
only leveraged through a function pointer that is set in conjunction with
the function pointer ctx (which isn't NULL). All this logic is done inside
lf_buildctx_prepare() when LOG_OPT_ENCODE_CBOR is set.
Since Coverity doesn't seem to understand the logic properly, it
might as well confuse humans, so let's make it clear in
_lf_cbor_encode_byte() that we expect a non-NULL ctx by adding a BUG_ON().
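The added check boils down to something like this (the e_fct_ctx member
name comes from the cleanup commit renaming the cbor_encode_ctx members;
treat this as a sketch rather than the exact code):

    /* inside _lf_cbor_encode_byte(): both the cbor context and its
     * attached user context are expected to be non-NULL here */
    BUG_ON(!cbor_ctx || !cbor_ctx->e_fct_ctx);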
CLEANUP: tools/cbor: rename cbor_encode_ctx struct members
Rename e_byte_fct to e_fct_byte and e_fct_byte_ctx to e_fct_ctx, and
adjust some comments to make it clear that e_fct_ctx is here to provide
additional user-ctx to the custom cbor encode function pointers.
For now, only e_fct_byte function may be provided, but we could imagine
having e_fct_int{16,32,64}() one day to speed up the encoding when we
know we can encode multiple bytes at a time, but for now it's not worth
the hassle.
BUG/MINOR: stats: replace objt_* by __objt_* macros
Update parse_stat_line() used during stats-file parsing. For each line,
the GUID is extracted first to access the object instance. obj_type()
is then invoked to retrieve the correct object type.
Replace the objt_* macros by their __objt_* equivalents to mark the result
as safe and non-NULL.
This should fix coverity report from github issue #2550.
REGTESTS: cache: Add test on 'vary' other than accept-encoding
A bug related to vary and the 'accept-encoding' header was fixed in
"BUG/MEDIUM: cache: Vary not working properly on anything other than
accept-encoding". This patch adds tests specific to this bug.
BUG/MEDIUM: cache: Vary not working properly on anything other than accept-encoding
If a response varies on anything other than accept-encoding (origin or
referer) but still contains an 'Encoding' header, the cached responses
were never sent back.
This is because of the 'set_secondary_key_encoding' call that always
filled the accept-encoding part of the secondary signature with the
response's actual encoding, regardless of whether the response varies on
this or not. This meant that the accept-encoding part of the signature
could be non-null in the cached entry which made the
'get_secondary_entry' calls in 'http_action_req_cache_use' always fail
because in those cases the request's secondary signature always had a
null accept-encoding part.
Released version 3.0-dev9 with the following main changes :
- BUILD: ssl: use %zd for sizeof() in ssl_ckch.c
- MINOR: backend: use be_counters for health down accounting
- BUG/MINOR: backend: use cum_sess counters instead of cum_conn
- BUG/MINOR: stats: fix stot metric for listeners
- REGTESTS: use -dI for insecure fork by default in the regtest scripts
- MINOR: stats: rename proxy stats
- MINOR: stats: rename ambiguous stat_l and stat_count
- MINOR: stats: rename info stats
- MINOR: stats: use stricter naming stats/field/line
- MINOR: stats: use STAT_F_* prefix for flags
- BUG/MEDIUM: applet: Let's applets decide if they have more data to deliver
- BUILD: stick-tables: silence build warnings when threads are disabled
- MINOR: tools: Rename `ha_generate_uuid` to `ha_generate_uuid_v4`
- MINOR: Add `ha_generate_uuid_v7`
- MINOR: Add support for UUIDv7 to the `uuid` sample fetch
- MEDIUM: shctx: Naming shared memory context
- BUG/MINOR: h1: fix detection of upper bytes in the URI
- MINOR: intops: add a pair of functions to check multi-byte ranges
- TESTS: add a unit test for the multi-byte range checks
- CLEANUP: h1: make use of the multi-byte matching functions
- REGTESTS: ssl: Remove "sleep" calls from ocsp auto update test
- BUG/MEDIUM: peers: Automatically start to learn on local peer
- BUG/MEDIUM: peers: Reprocess peer state after all session shutdowns
- MINOR: peers: Remove unused PEERS_F_RESYNC_REQUESTED flag
- MINOR: peers: Don't set TEACH flags on a peer from the sync task
- MINOR: peers: Use a peer flag to block the applet waiting ack of the sync task
- BUG/MEDIUM: peers: Wait for sync task ack when a resynchro is finished
- MINOR: peers: Remove unused PEERS_F_RESYNC_PROCESS flag
- MINOR: applet: Add a function to know the side where an applet was created
- MEDIUM: peers: Simplify the peer flags dealing with the connection state
- MEDIUM: peers: Use true states for the peer applets as seen from outside
- MEDIUM: peers: Use true states for the learn state of a peer
- MINOR: peers: Start learning for local peer before receiving messages
- MINOR: peers: Rename PEERS_F_TEACH_COMPLETE to PEERS_F_LOCAL_TEACH_COMPLETE
- MINOR: peers: Reorder and slightly rename PEER flags
- MINOR: peers: Reorder and rename PEERS flags
- REORG: peers: Move peer and peers flags in the corresponding header file
- DEV: flags/peers: Decode PEER and PEERS flags
- MINOR: peers: Add comment on processing functions of the sync task
- MINOR: peers: Use a static variable to wait a resync on reload
- BUG/MEDIUM: peers: Use atomic operations on peers flags when necessary
- REORG: peers: Rename all occurrences to 'ps' variable
- BUG/MINOR: peers: Don't wait for a remote resync if there no remote peer
- MINOR: stats: update ambiguous "metrics" naming to "stat_cols"
- MINOR: stats: introduce a more expressive stat definition method
- MINOR: stats: implement automatic metric generation from stat_col
- MINOR: stats: hide some columns in output
- MEDIUM: stats: convert counters to new column definition
- MINOR: stats: define stats-file output format support
- MEDIUM: stats: implement dump stats-file CLI
- MINOR: ist: define iststrip() new function
- MINOR: guid: define guid_is_valid_fmt()
- MINOR: stats: apply stats-file on process startup
- MINOR: stats: parse header lines from stats-file
- MINOR: stats: parse values from stats-file
- MEDIUM: stats: define stats-file keyword
- BUG/MINOR: mworker: reintroduce way to disable seamless reload with -x /dev/null
- CLEANUP: log: remove unused checks for encode_{chunk,string}
- MINOR: log: store lf_expr nodes inside substruct
- MINOR: log: global lf_expr node options
- CLEANUP: log: simplify complex values usages in sess_build_logline()
- MINOR: log: skip custom logformat_node name if empty
- MINOR: log: add lf_int() wrapper to print integers
- MINOR: log: add lf_rawtext{_len}() functions
- MEDIUM: log: pass date strings to lf_rawtext()
- MEDIUM: log: write raw strings using lf_rawtext()
- MEDIUM: log: use lf_rawtext for lf_ip() and lf_port() hex strings
- MINOR: log: explicitly handle %ts and %tsc as text strings
- MINOR: log: use LOG_VARTEXT_{START,END} to enclose text strings
- MINOR: log: make all lf_* sess build helper static
- MINOR: log: merge lf_encode_string() and lf_encode_chunk() logic
- MEDIUM: log: lf_* build helpers now take a ctx argument
- MINOR: log: expose node typecast in lf_buildctx struct
- MINOR: log: postpone conversion for sample expressions in sess_build_logline()
- MINOR: log: add LOG_OPT_NONE flag
- MINOR: log: add no_escape_map to bypass escape with _lf_encode_bytes()
- MINOR: log: add +bin logformat node option
- MINOR: log: add +json encoding option
- MINOR: tools: add cbor encode helpers
- MINOR: log: add +cbor encoding option
- MINOR: log: support true cbor binary encoding
- CLEANUP: dynbuf: move the reserve and limit parsers to dynbuf.c
- MINOR: list: add a macro to detect that a list contains at most one element
- MINOR: cli/wait: rename the condition "srv-unused" to "srv-removable"
MINOR: cli/wait: rename the condition "srv-unused" to "srv-removable"
As previously discussed, "srv-unused" is sufficiently ambiguous to cause
some trouble over the long term. Better use "srv-removable" to indicate
that the server is removable, and if the conditions to delete a server
change over time, the wait condition will be adjusted without renaming
it.
MINOR: list: add a macro to detect that a list contains at most one element
The new LIST_ATMOST1() test verifies that the designated element is either
alone or points on both sides to the same element. This is used to detect
that a list has at most a single element, or that an element about to be
deleted was the last one of a list.
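Based on that description, the test boils down to something like the
following sketch (an element alone in a list points to itself on both
sides, so a single comparison covers both cases):

    /* illustrative sketch of the macro's logic */
    #define LIST_ATMOST1(el) ((el)->n == (el)->p)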
CLEANUP: dynbuf: move the reserve and limit parsers to dynbuf.c
I just added a new setting to set the number of reserved buffers, only to
discover we already had one... Let's move the parsing of this keyword
(tune.buffers.reserve) and of tune.buffers.limit to dynbuf.c where they
belong.
MINOR: log: support true cbor binary encoding
CBOR in hex format as implemented in the previous commit is convenient
because the produced output is portable and can easily be embedded in
regular syslog payloads.
However, one of the goals of the CBOR implementation is to be able to
produce a "Concise Binary" object representation. Here is an excerpt from
the cbor.io website:
"Some applications also benefit from CBOR itself being encoded in
binary. This saves bulk and allows faster processing."
Currently we don't offer that with '+cbor', quite the opposite actually,
since a text string encoded with the '+cbor' option will be larger than a
text string encoded with '+json' or without encoding at all, because for
each CBOR binary byte, 2 characters are emitted.
Fortunately, the sink/log API allows binary data to be passed as a
parameter; this is because all relevant functions in the chain don't rely
on a terminating NULL byte and take a string pointer + string length as
parameters. We can actually rely on this property to support the '+bin'
option when combined with '+cbor' to produce RAW binary CBOR output.
Be careful though, as this is only intended for use with set-var-fmt or to
send binary data to capable UDP/ring endpoints.
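For instance, assuming the options can be combined the same way as other
logformat options, a binary CBOR log-format could look like this (for a
ring or set-var-fmt target, not a plain syslog server):

log-format "%{+cbor,+bin}o %[int(4)] test %(named_field)[str(ok)]"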
MINOR: log: add +cbor encoding option
In this patch, we make use of the CBOR (RFC8949) encode helper functions
from the previous commit to implement the '+cbor' encoding option for log
formats. The logic behind it is pretty similar to the '+json' encoding
option, except that the produced output is a CBOR payload written in HEX
format so that it remains usable with regular syslog endpoints.
Example:
log-format "%{+cbor}o %[int(4)] test %(named_field)[str(ok)]"
MINOR: tools: add cbor encode helpers
Add cbor helpers to encode strings (bytes/text) and integers according to
RFC8949, and also add a cbor_encode_ctx struct to pass encoding options
such as how to encode a single byte.
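For reference, here is a minimal standalone sketch of the RFC8949 argument
encoding these helpers revolve around (this is not HAProxy's actual
implementation; per the RFC, the major type goes in the 3 high bits of the
initial byte and the argument in the 5 low bits or in the following bytes):

    #include <stdint.h>
    #include <stddef.h>

    /* write a CBOR initial byte (+ extended argument) for <major>/<arg>,
     * return the number of bytes written; 32/64-bit forms omitted */
    static size_t cbor_put_hdr(uint8_t *out, uint8_t major, uint64_t arg)
    {
        if (arg < 24) {
            out[0] = (uint8_t)((major << 5) | arg);
            return 1;
        }
        if (arg <= 0xff) {
            out[0] = (uint8_t)((major << 5) | 24);
            out[1] = (uint8_t)arg;
            return 2;
        }
        if (arg <= 0xffff) {
            out[0] = (uint8_t)((major << 5) | 25);
            out[1] = (uint8_t)(arg >> 8);
            out[2] = (uint8_t)arg;
            return 3;
        }
        return 0; /* larger arguments would use additional info 26 or 27 */
    }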
MINOR: log: add +json encoding option
In this patch, we add the "+json" log format option that can be set
globally or per log format node.
What it does is set the LOG_OPT_ENCODE_JSON flag for the
current context which is provided to all lf_* log building functions.
This way, all lf_* helpers are now aware of this option and try to comply
with the JSON specification when the option is set.
If the option is set globally, then sess_build_logline() will produce a
map-like object with key=val pairs for named logformat nodes.
(logformat nodes that don't have a name are simply ignored).
Example:
log-format "%{+json}o %[int(4)] test %(named_field)[str(ok)]"
Will produce:
{"named_field": "ok"}
If the option isn't set globally, but on a specific node instead, then
only the value will be encoded according to JSON specification.
MINOR: log: add +bin logformat node option
Support the '+bin' option argument on logformat nodes to try to preserve
the binary output type with binary sample expressions.
For this, we rely on the log/sink API, which is capable of conveying binary
data since all related functions don't search for a terminating NULL byte
in the provided log payload, as they take a string pointer and a string
length as arguments.
Example:
log-format "%{+bin}o %[bin(00AABB)]"
Will produce:
00aabb
(output was piped to `hexdump -ve '1/1 "%.2x"'` to dump raw bytes as HEX
characters)
This should be used carefully, because many syslog endpoints don't expect
binary data (especially NULL bytes). This is mainly intended for use with
set-var-fmt actions or with ring/udp log endpoints that know how to deal
with such binary payloads.
Also, this option is only supported globally (for use with '%o'); it will
not have any effect when set on an individual node (it makes no sense to
have binary data in the middle of a log payload that was started without
the binary data option).
MINOR: log: postpone conversion for sample expressions in sess_build_logline()
In sess_build_logline(), for sample expression nodes, instead of directly
calling sample_fetch_as_type(... SMP_T_STR), let's first process the
sample using sample_process(), and then proceed with the conversion to
str if required.
Doing so will allow us to implement type casting and preserving logic.
MEDIUM: log: lf_* build helpers now take a ctx argument
Add an internal lf_buildctx struct that is only used inside the
sess_build_logline() scope and is passed to the lf_* log building helpers
to expose the current building context. For now, the node options and the
in_text counter are stored in the ctx struct. Thanks to this change, lf_*
building functions don't depend on a logformat_node struct pointer, and
may be used in a standalone manner as long as a build context is provided.
Also, global options are now handled explicitly in sess_build_logline() to
make sure that they are always considered even if they were not
duplicated on every node.
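An illustrative sketch of what such a context may carry (field names are
assumptions based on this series, not the exact HAProxy definition):

    struct lf_buildctx {
        int  options;     /* LOG_OPT_* flags in effect for the current node */
        int  typecast;    /* expected output type, if any */
        int  in_text;     /* counter for variable-length text being built */
        char _buf[256];   /* scratch buffer added by a later OPTIM commit */
    };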
MINOR: log: merge lf_encode_string() and lf_encode_chunk() logic
lf_encode_string() and lf_encode_chunk() are pretty similar. The
only difference is the stopping behavior: encode_chunk stops at a given
position while encode_string stops when encountering '\0'. Moreover,
both functions leverage the tools.c encode helpers, but because of the
LOG_OPT_ESC option, they reimplement those helpers with added logic.
Instead of having to deal with code duplication, which makes both functions
harder to maintain, let's define a _lf_encode_bytes() helper function
which satisfies both lf_encode_string()'s and lf_encode_chunk()'s needs
while keeping the function as simple as possible.
_lf_encode_bytes() itself is made of multiple static inline helper
functions, in an attempt to keep checks outside of the core loop for
better performance.
MINOR: log: use LOG_VARTEXT_{START,END} to enclose text strings
Rename LOGQUOTE_{START,END} macros to more generic LOG_VARTEXT_{START,END}
in order to prepare for new encoding types that rely on specific treatment
for variable-length texts. No functional change should be expected.
MINOR: log: explicitly handle %ts and %tsc as text strings
Build fixed-length strings for %ts and %tsc to be able to print them
using lf_rawtext_len(), this way it will be easier to encode them
when new encoding options will be added.
MEDIUM: log: use lf_rawtext for lf_ip() and lf_port() hex strings
Same as the previous commit, but for ip and port oriented values when
+X option is provided.
No functional change should be expected.
Because of this patch, we add a little overhead because we first generate
the text into a temporary variable and then use lf_rawtext() to print it.
Thus we have a double-copy, and this could have some performance
implications that were not yet evaluated. Due to the small number of bytes
that can end up being copied twice, we could be lucky and have no visible
performance impact, but if we happen to see a significant impact, it could
be useful to add a passthrough mechanism (to keep historical behavior)
when no encoding is involved.
MEDIUM: log: write raw strings using lf_rawtext()
Make use of the previous commit to print strings that should not be
modified.
For instance, when the +X option is provided, we have to print numerical
values in ASCII HEX form. For that, we used snprintf() to output the
result to the log output buffer directly, but now we build the string in
a temporary buffer of fixed size and then print it using lf_rawtext(),
which takes care of the encoding options.
Because of this patch, we add a little overhead because we first generate
the text into a temporary variable and then use lf_rawtext() to print it.
Thus we have a double-copy, and this could have some performance
implications that were not yet evaluated. Due to the small number of bytes
that can end up being copied twice, we could be lucky and have no visible
performance impact, but if we happen to see a significant impact, it could
be useful to add a passthrough mechanism (to keep historical behavior)
when no encoding is involved.
MEDIUM: log: pass date strings to lf_rawtext()
Don't directly call functions that take a date as argument and output the
string representation to the log output buffer in sess_build_logline();
instead, build the strings in temporary buffers of fixed size (fortunately
such functions, like date2str_log() and gmt2str_log(), produce strings of
known size), and then print the result using the lf_rawtext() helper
function. This way, we will be able to encode them automatically as
regular string/text when new encoding methods are added.
Because of this patch, we add a little overhead because we first generate
the text into a temporary variable and then use lf_rawtext() to print it.
Thus we have a double-copy, and this could have some performance
implications that were not yet evaluated. Due to the small number of bytes
that can end up being copied twice (< 30), we could be lucky and have no
visible performance impact, but if we happen to see a significant impact,
it could be useful to add a passthrough mechanism (to keep historical
behavior) when no encoding is involved.
MINOR: log: add lf_rawtext{_len}() functions
Similar to lf_text_{len}(), except that quoting and mandatory options are
ignored. Use this to print the input string without any modification
(except for the encoding logic).
MINOR: log: add lf_int() wrapper to print integers
Wrap ltoa(), lltoa(), ultoa() and utoa_pad() functions that are used by
sess_build_logline() to print numerical values by implementing a dedicated
helper named lf_int() that takes <dft_hld> as argument to know how to
write the integer by default (when no encoding is specified).
LF_INT_UTOA_PAD_4 is used to emulate utoa_pad(x, 4) since it's found only
once under sess_build_logline(), thus there is no need to pass an extra
parameter to lf_int() function.
MINOR: log: skip custom logformat_node name if empty
But we may also optionally set the expected node type by appending
':type' after the name, the type being either sint, str or bool, like this:
log-format "%(string_as_int:sint)[str(14)]"
However, it is currently not possible to provide a type without providing
a name that is at least 1 character long. But it could be useful to provide
a type without setting a name, for typecasting purposes only, like this:
log-format "%(:sint)[bool(true)]"
Thus, in order to allow this usage, don't set node->name if the node name
is not at least 1 character long. By doing so, node->name will remain NULL
and will not be considered, but the typecast setting will.
CLEANUP: log: simplify complex values usages in sess_build_logline()
Make the sess_build_logline() switch-case more readable by performing some
simplifications: complex values are first extracted into a temporary
variable so that it's easier to refer to them, and from a single place.
CLEANUP: log: remove unused checks for encode_{chunk,string}
Thanks to 8226e92eb ("BUG/MINOR: tools/log: invalid
encode_{chunk,string} usage"), we only need to check for NULL return
value from encode_{chunk,string}() and escape_string() to know if the
call failed.
MINOR: stats: parse values from stats-file
This patch implements parsing of counter value lines from the stats-file.
It reuses the domain context previously set by the last header line. Each
value is separated by a ',' character, matching the list of column
names described by the header line.
This is implemented via the static function parse_stat_line(). It first
extracts a GUID and retrieves the object instance. Then each numerical
value is parsed and the object counters are updated. For the moment, only
U64 counter metrics are supported. parse_stat_line() is called on each
line until a new header line is found.
MINOR: stats: parse header lines from stats-file
This patch implements parsing of header lines from the stats-file.
A header line is defined as starting with the '#' character. It is directly
followed by a domain name. For the moment, either 'fe' or 'be' is
allowed. The following lines will contain counter values relative to
the domain context until the next header line.
This is implemented via the static function parse_header_line(). It first
sets the domain context used during apply_stats_file(). A stats column
array is generated to record the order in which columns are stored.
This will be reused to parse the values on the following lines.
If an invalid line is found and no header was parsed, consider the
stats-file as ill-formatted and stop parsing. This allows parsing to be
interrupted immediately if a garbage file was used, without emitting a
ton of warnings to the user.
MINOR: stats: apply stats-file on process startup
This commit is the first one of a series to implement the preloading of
haproxy counters via stats-file parsing.
This patch defines a basic apply_stats_file() function. It implements
reading a stats-file line by line, without any parsing for the moment.
It is called automatically on process startup via init().
Amaury Denoyelle [Thu, 28 Mar 2024 13:53:52 +0000 (14:53 +0100)]
MEDIUM: stats: implement dump stats-file CLI
Define a new CLI command "dump stats-file" with its handler
cli_parse_dump_stat_file(). It will loop twice on proxies_list to dump
first frontend and then backend side. It reuses the common function
stats_dump_stat_to_buffer(), using STAT_F_BOUND to restrict on the
correct side.
A new module stats-file.c is added to regroup functions specific to
stats-file. It defines two main functions :
* stats_dump_file_header() to generate the list of columns prefixed
by the line context, either "#fe" or "#be"
* stats_dump_fields_file() to generate each stats line. Objects without
a GUID are skipped. Each stat entry is separated by a comma.
For the moment, the stats-file does not support statistics modules. As
such, the stats_dump_*_line() functions are updated to prevent looping
over stats modules on stats-file output.
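To give an idea of the shape of the output, here is a purely hypothetical
excerpt (the GUIDs, column names and exact separators are illustrative
only, not the real format):

    #fe guid,req_tot,conn_rate
    front1,1234,56
    #be guid,req_tot
    back1,789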
MINOR: stats: define stats-file output format support
Prepare the stats functions to handle a new format labelled "stats-file".
Its purpose is to generate a statistics dump with a format close to the
CSV output. Such output will then be used to preload haproxy internal
counters on process startup.
The stats-file output differs from a standard CSV on several points. First,
only an excerpt of all statistics is output. All values that do not
make sense to preload are excluded. For the moment, the stats-file only
lists stats fully defined via the "struct stat_col" method. Contrary to a
CSV, all columns of a stats-file will be filled. As such, an empty field
value is used to mark stats which should not be output.
Some adaptations specific to the stats-file are necessary in
me_generate_field(). First, the stats-file will output values from the
frontend and backend sides separately, with their own respective sets of
columns. As such, an empty field value is returned if the stat is not
defined for either frontend/listener, or backend/server, when outputting
the other side. Also, as the stats-file does not support empty columns,
stcol_hide() is not used for it.
A minor adjustment was necessary for stats_fill_fe_line() to pass the
context flags. This is necessary to detect the stat output format. All
other listener/server/backend corresponding functions already have it.
MEDIUM: stats: convert counters to new column definition
Convert most of the proxy counter statistics to the new "struct stat_col"
definition. Remove their corresponding switch..case entries in the
stats_fill_*_line() functions. Their values are automatically calculated
via me_generate_field() invocation.
Along with this, also complete stcol_hide() for when some stats should be
hidden.
Only a few counters were not converted. This is because they rely on
values stored outside of the fe/be_counters structure, which
me_generate_field() cannot use for now.
MINOR: stats: hide some columns in output
Metric style stats can be automatically calculated since the introduction
of metric_generate() when using "struct stat_col" as input. This would
allow statistics generation to be centralized. However, some stats are not
output under specific conditions. For example, health check failures
on a server are only reported if checks are active.
To support this, define a new function metric_hide(). It is called by
metric_generate(). If true, it will skip the metric calculation and return
an empty field value instead. This allows "stat_col" metrics to be defined
and calculated with metric_generate() while hiding them under certain
circumstances.
MINOR: stats: implement automatic metric generation from stat_col
This commit is a direct follow-up of the previous one, which defined a new
type "struct stat_col" to fully describe a statistic entry.
Define a new function metric_generate(). For metric statistics, it is
able to automatically calculate a stat value field from the "offsets" of a
"struct stat_col". Use it in the stats_fill_*_stats() functions. Maintain
a fallback to the previously used switch-case for old-style statistics.
This commit does not introduce any functional change as currently no
statistic is defined as "struct stat_col". This will be the subject of a
future commit.
Amaury Denoyelle [Thu, 28 Mar 2024 13:53:39 +0000 (14:53 +0100)]
MINOR: stats: introduce a more expressive stat definition method
Previously, statistics were simply defined as a list of name_desc, as
for example "stat_cols_px" for proxy stats. No notion of type was fixed
for each stat definition. This correspondence was done individually
inside the stats_fill_*_line() functions, which renders the process of
defining new statistics tedious.
Implement a more expressive stat definition method via a new API. A new
type "struct stat_col", for stat column, is defined to replace name_desc
usage. It contains a field to store the stat nature and format. A
<cap> field is also defined to be able to define a proxy stat only for
certain types of objects.
This new type is also further extended to include counter offsets. This
allows a method to be defined to automatically generate a stat value
field from a "struct stat_col". This will be the subject of a future
commit.
The new type "struct stat_col" is fully compatible with name_desc. This
allows stats definitions to be converted gradually. The focus will first
be on proxy counters, to implement statistics preservation on reload.
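An illustrative sketch of what such a definition may look like (member
names are assumptions derived from the description above, not the actual
declaration):

    struct stat_col {
        struct name_desc desc;  /* column name and description */
        uint8_t nature;         /* FN_COUNTER, FN_RATE, FN_AGE, ... */
        uint8_t format;         /* FF_U64, ... */
        uint32_t cap;           /* which proxy object types the column applies to */
        size_t offsets[2];      /* counter offsets inside fe_counters/be_counters */
    };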
MINOR: stats: update ambiguous "metrics" naming to "stat_cols"
The name "metrics" was chosen to represent the various list of haproxy
exposed statistics. However, it is deemed as ambiguous as some stats are
indeed metric in the true sense, but some are not, as highlighted by
various "enum field_origin" values.
Replace it by the new name "stat_cols" for statistic columns. Along with
the already existing notion of stat lines it should better reflect its
purpose.
BUG/MINOR: peers: Don't wait for a remote resync if there no remote peer
When a resync is needed, a local resync is first tried and if it does not
work, a remote resync is tried. It happens when the worker is started for
instance. There is a timeout to wait for the local resync, except for the
first start. And if the local resync fails or times out, the same timeout
is applied to the remote resync. This one is always applied, even if there
is no remote peer.
On the other hand, on reload, if the old worker has never performed its
resync, it does not try to resync the new worker. And here there is an
issue. On the first reload, when there is no remote peer, we must wait for
the resync timeout to expire to have a chance to resync the new worker. If
the reload happens too early, there is no resync at all. Concretely, after
a fresh start, if a reload happens in the first 5 seconds, there is no
resync with the new worker. The issue only concerns the first reload and
affects the second worker.
To fix the issue, we must only skip the remote resync if there is no remote
peer. This way, on a fresh start, the worker is immediately considered
resynced. The local resync is skipped because it is the first worker and
the remote resync is skipped because there is no remote peer.
This patch must be backported to all stable versions.
REORG: peers: Rename all occurrences to 'ps' variable
In loops on the peer list in the code, the 'ps' variable was used as a
shortcut for the peer session. However, it may be confused with the peers
section too. So, all occurrences of the 'ps' variable were renamed to
'peer'.
BUG/MEDIUM: peers: Use atomic operations on peers flags when necessary
Peers flags are mainly used from the sync task. At least, they are only
updated by the sync task. However, there is one place where a peer may
read these flags: when the message marking the end of a synchro is sent.
So, to be sure the value retrieved at this place is consistent, we must use
an atomic operation to read it. And of course, from the sync task, atomic
operations must be used to update the peers flags. However, from the sync
task, there is no reason to use atomic operations to read the flags
because they cannot be updated from anywhere else.
MINOR: peers: Use a static variable to wait a resync on reload
When a process is reloaded, the old process must perform a synchronization
with the new process. To do so, the sync task notifies the local peer to
proceed and waits. Internally, the sync task used the PEERS_F_DONOTSTOP
flag to know it should wait. However, this flag was only set/unset in a
single function. There is no real reason to use a flag for this. A static
variable set to 1 when the resync starts and to 0 when it finishes is
enough.