BUG/MEDIUM: stconn: Report a send activity everytime data were sent
When read/write timeouts were refactored in 2.8, we decided to change when a
send activity had to be reported. Before, every time some data were sent, a
send activity was reported, and the channel's wex timer was updated. During
the refactoring, we decided to limit send activity to sends that empty the
channel's buffer, consuming all outgoing data. The idea behind this change
was to protect haproxy against clients consuming data very slowly.
However, it is too strict. Some muxes that are congested but still active can
hit the client or the server timeout. It seems a bit unfair. It is especially
visible with QUIC/H3, but it is probably also possible with H2 if the window
size is small.
"log-bufsize" may now be used for a log server (in a log backend) to
configure the bufsize of implicit ring associated to the server (which
defaults to BUFSIZE).
REGTEST: add a test for log-backend used as a log target
This regtest declares and uses 3 log backends, one of which has TCP syslog
servers declared in it while the other ones use UDP syslog servers.
Some tests aim at testing log distribution reliability by leveraging the
log-balance hash algorithm with a key extracted from the request URL, and
the dummy vtest syslog servers ensure that messages are sent to the
correct endpoint. Overall this regtest covers essential parts of the log
message distribution and log-balancing logic involved with log backends.
It also leverages the log-forward section to perform the TCP->UDP
translation required to test UDP endpoints, since vtest syslog servers
work in UDP mode (see the sketch below).
Finally, we have some tests to ensure that the server queuing/dequeuing
and failover (backup) logic works properly.
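For reference, such a TCP->UDP translation can be achieved with a log-forward
section along these lines (addresses and ports are purely illustrative):

    log-forward tcp2udp
        bind 127.0.0.1:2514            # accept syslog over TCP
        log 127.0.0.1:1514 local0      # relay to the UDP syslog endpoint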
MEDIUM: log/balance: support for the "hash" lb algorithm
The hash lb algorithm can be configured with the "log-balance hash <cnv_list>"
directive. With this algorithm, the user specifies a converter list with
<cnv_list>.
The produced log message will be passed as-is to the provided converter
list, and the resulting hash will be used to select the log server that
will receive the log message.
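For example (backend, servers and converter list are made up; any converter
chain producing a key would do), the raw log message could be hashed like
this:

    backend logs-hash
        mode log
        log-balance hash lower,crc32    # converter list applied to the log message
        server s1 udp@192.0.2.1:514
        server s2 udp@192.0.2.2:514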
sample_process() was also split in 2 parts in order to be able to only
process the converter part of a sample expression from an existing input
sample struct passed as parameter.
MINOR: lbprm: compute the hash avalanche in gen_hash()
Instead of systematically computing the avalanche hash right after the
gen_hash() call, do it directly inside the gen_hash() function to ensure
the avalanche setting is always taken into account.
MINOR: lbprm: support for the "none" hash-type function
Allow the use of the "none" hash-type function so that the key resulting
from the sample expression is directly used as the hash.
This can be useful to do the hashing manually using available hashing
converters, or even custom ones, and then inform haproxy that it can
directly rely on the sample expression result, which is explicitly handled
as an integer in this case.
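As a hedged illustration of the intent (backend, sample expression and
servers are made up, and it is assumed here that "none" plugs into the usual
hash-type function slot), the hash could be computed explicitly via a
converter and its integer result used as-is:

    backend app
        balance hash req.hdr(x-user-id),djb2   # hash computed by a converter
        hash-type consistent none              # use the integer result directly
        server s1 192.0.2.1:80
        server s2 192.0.2.2:80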
MINOR: log/balance: support for the "random" lb algorithm
In this patch we add basic support for the random algorithm:
the random algorithm picks a random server using the result of the
statistical_prng() function as if it were a hash key, and then computes the
related server ID.
There is no support for the <draw> parameter (which is implemented for
tcp/http load-balancing), because we don't have the required metrics to
evaluate a server's load in log backends for the moment. Moreover, it would
add more complexity to the __do_send_log_backend() function, so we'll keep
it this way for now, but this might be needed in the future.
MINOR: log/balance: support for the "sticky" lb algorithm
The sticky algorithm always tries to send log messages to the first server in
the farm. The server will stay in front during queue and dequeue
operations (no other server can steal its place), unless it becomes
unavailable, in which case it will be replaced by another server from
the tree.
Using "mode log" in a backend section turns the proxy in a log backend
which can be used to log-balance logs between multiple log targets
(udp or tcp servers)
log backends can be used as regular log targets using the log directive
with "backend@be_name" prefix, like so:
| log backend@mybackend local0
A log backend will distribute log messages to servers according to the
log load-balancing algorithm that can be set using the "log-balance"
option from the log backend section. For now, only the roundrobin
algorithm is supported and set by default.
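As a minimal sketch (names and addresses are illustrative), such a log
backend could be declared as:

    backend mybackend
        mode log
        log-balance roundrobin          # default algorithm, shown explicitly
        server s1 udp@192.0.2.1:514
        server s2 192.0.2.2:514         # plain address: TCP syslog server

and referenced from any "log" line with "log backend@mybackend local0" as
shown above.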
This helper function can be used to create a new sink from an existing
server struct (and thus existing proxy as well), in order to spare some
resources when possible.
MEDIUM: sink: inherit from caller fmt in ring_write() when rings didn't set one
Implicit rings were automatically forced to the parent logger format, but
this was done upon ring creation.
This is quite restrictive because we might want to choose the desired
format right before generating the log header (ie: when producing the
log message), depending on the logger (log directive) that is
responsible for the log message, and with the current logic this is not
possible. (To this day, we still have a dedicated implicit ring per log
directive, but this might change.)
In ring_write(), we check if the sink->fmt is specified:
- defined: we use it since it is the most precise format
(ie: for named rings)
- undefined: then we fallback to the format from the logger
With this change, implicit rings' format is now set to UNSPEC upon
creation. This is safe because the log header building function
automatically enforces the "raw" format when UNSPEC is set. And since
logger->format also defaults to "raw", no change of default behavior
should be expected.
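To illustrate the precedence described above (ring name, size, timeouts and
address are arbitrary): a named ring with an explicit format keeps it, while
the logger's own format only applies when the ring's format is left
unspecified:

    ring myring
        format rfc5424             # explicit ring format: used as-is by ring_write()
        size 32768
        timeout connect 5s
        timeout server 10s
        server s1 127.0.0.1:6514

    global
        log ring@myring local0     # the logger format is only a fallback here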
MEDIUM: sink/log: stop relying on AF_UNSPEC for rings
Since a5b325f92 ("MINOR: protocol: add a real family for existing FDs"),
we no longer rely on AF_UNSPEC for buffer rings in do_send_log.
But we kept it as a parsing hint to differentiate between implicit and
named rings during ring buffer postparsing.
However it is still a bit confusing and forces us to systematically rely
on target->addr, even for named buffer rings where it doesn't make much
sense anymore.
Now that target->addr was made a pointer in a recent commit, we can
choose not to initialize it when not needed (i.e.: named rings) and use
this as a hint to distinguish implicit rings during init since they rely
on the addr struct to temporarily store the ring's address until the ring
is actually created during postparsing step.
DOC: config: log <address> becomes log <target> in "log" related doc
This is a follow-up of the previous commit to emphasize that the "log"
directive allows providing a log target which may directly be a server
address but may also be a log transport facility such as rings. Thus we
use the term "target" instead of "address" to make it more generic.
Log targets were previously embedded directly in the logger struct
(formerly named logsrv) and could not be used outside of this context.
In this patch, we're introducing log_target type with the associated
helper functions so that it becomes possible to declare and use log
targets outside of loggers scope.
When the 'log' directive was implemented, the internal representation was
named 'struct logsrv', because the 'log' directive would directly point
to the log target, which used to be a (UDP) log server exclusively at
that time, hence the name.
But things have become more complex, since today 'log' directive can point
to ring targets (implicit, or named) for example.
Indeed, a 'log' directive no longer references the "final" server to
which the log will be sent; instead, it describes which log API and
parameters to use for transporting the log messages to the proper log
destination.
So now the term 'logsrv' is rather confusing and prevents us from
introducing a new level of abstraction, because new abstractions would be
mixed up with logsrv.
So in order to better designate this 'log' directive, and make it more
generic, we chose the word 'logger' which now replaces logsrv everywhere
it was used in the code (including related comments).
This is an internal rewording, so no functional change should be expected
on the user side.
Amaury Denoyelle [Thu, 12 Oct 2023 16:15:01 +0000 (18:15 +0200)]
BUG/MEDIUM: quic-conn: free unsent frames on retransmit to prevent crash
Since the following patch :
commit 33c49cec987c1dcd42d216c6d075fb8260058b16
MINOR: quic: Make qc_dgrams_retransmit() return a status.
the retransmission process is interrupted as soon as a fatal send error has
been encountered. However, this may leave frames in the local list. This
causes several issues: a memory leak and a potential crash.
The crash happens because leaked frames are duplicates of an origin
frame via qc_dup_pkt_frms(). If an ACK arrives later for the origin
frame, all duplicated frames are also freed. During qc_frm_free(),
the LIST_DEL_INIT() operation is invalid as it still references the local
list used inside qc_dgrams_retransmit().
This bug was reproduced using the following injection from another
machine :
$ h2load --npn-list h3 -t 8 -c 10000 -m 1 -n 2000000000 \
https://<host>:<port>/?s=4m
Haproxy was compiled using ASAN. The crash resulted in the following
trace :
==332748==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7fff82bf9d78 at pc 0x556facd3b95a bp 0x7fff82bf8b20 sp 0x7fff82bf8b10
WRITE of size 8 at 0x7fff82bf9d78 thread T0
#0 0x556facd3b959 in qc_frm_free include/haproxy/quic_frame.h:273
#1 0x556facd59501 in qc_release_frm src/quic_conn.c:1724
#2 0x556facd5a07f in quic_stream_try_to_consume src/quic_conn.c:1803
#3 0x556facd5abe9 in qc_treat_acked_tx_frm src/quic_conn.c:1866
#4 0x556facd5b3d8 in qc_ackrng_pkts src/quic_conn.c:1928
#5 0x556facd60187 in qc_parse_ack_frm src/quic_conn.c:2354
#6 0x556facd693a1 in qc_parse_pkt_frms src/quic_conn.c:3203
#7 0x556facd7531a in qc_treat_rx_pkts src/quic_conn.c:4606
#8 0x556facd7a528 in quic_conn_app_io_cb src/quic_conn.c:5059
#9 0x556fad3284be in run_tasks_from_lists src/task.c:596
#10 0x556fad32a3fa in process_runnable_tasks src/task.c:876
#11 0x556fad24a676 in run_poll_loop src/haproxy.c:2968
#12 0x556fad24b510 in run_thread_poll_loop src/haproxy.c:3167
#13 0x556fad24e7ff in main src/haproxy.c:3857
#14 0x7fae30ddd0b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x240b2)
#15 0x556facc9375d in _start (/opt/haproxy-quic-2.8/haproxy+0x1ea75d)
Address 0x7fff82bf9d78 is located in stack of thread T0 at offset 40 in frame
#0 0x556facd74ede in qc_treat_rx_pkts src/quic_conn.c:4580
Amaury Denoyelle [Wed, 11 Oct 2023 15:32:04 +0000 (17:32 +0200)]
BUG/MINOR: mux-quic: fix free on qcs-new fail alloc
qcs_new() allocates several elements in intermediary steps. All elements
must first be properly initialized to be able to free qcs instance in
case of an intermediary failure.
Previously, qc_stream_desc allocation was done in the middle of
qcs_new(), before some elements' initialization. In case this fails, a
crash can happen as some elements are left uninitialized.
To fix this, move qc_stream_desc allocation at the end of qcs_new().
This ensures that all qcs elements are initialized first.
Amaury Denoyelle [Wed, 11 Oct 2023 14:04:35 +0000 (16:04 +0200)]
BUG/MINOR: quic: fix free on quic-conn fail alloc
qc_new_conn() allocates several elements in intermediary steps. If one
of them fails, a global free is done on the quic_conn and its elements.
This requires that most elements are first initialized to NULL or
equivalent to ensure the freeing operation is done only on proper values.
One of these elements is qc.tx.cc_buf_area. It was initialized too late,
which could cause crashes. This was introduced by 9f7cfb0a56352188854bdaef9617ca836c2a30c9
MEDIUM: quic: Allow the quic_conn memory to be asap released.
Amaury Denoyelle [Wed, 11 Oct 2023 13:40:38 +0000 (15:40 +0200)]
BUG/MINOR: quic: fix qc.cids access on quic-conn fail alloc
CIDs tree is now allocated dynamically since the following commit : 276697438d50456f92487c990f20c4d726dfdb96
MINOR: quic: Use a pool for the connection ID tree.
This can cause a crash if qc_new_conn() is interrupted due to an
intermediary failed allocation. When freeing all connection members,
free_quic_conn_cids() is used. However, this function does not support a
NULL cids.
To fix this, simply check that cids is NULL during free_quic_conn_cids()
prologue.
Willy Tarreau [Thu, 12 Oct 2023 12:01:49 +0000 (14:01 +0200)]
BUG/MAJOR: connection: make sure to always remove a connection from the tree
Since commit 5afcb686b ("MAJOR: connection: purge idle conn by last usage")
in 2.9-dev4, the test on conn->toremove_list added to conn_get_idle_flag()
in 2.8 by commit 3a7b539b1 ("BUG/MEDIUM: connection: Preserve flags when a
conn is removed from an idle list") becomes misleading. Indeed, now both
toremove_list and idle_list are shared by a union since the presence in
these lists is mutually exclusive. However, in conn_get_idle_flag() we
check for the presence in the toremove_list to decide whether or not to
delete the connection from the tree. This test now fails because instead
it sees the presence in the idle or safe list via the union, and concludes
the element must not be removed. Thus the element remains in the tree and
can be found later after the connection is released, causing crashes that
Tristan reported in issue #2292.
The following config is sufficient to reproduce it with 2 threads:
With Amaury we analyzed the conditions in which the function is called
in order to figure a better condition for the test and concluded that
->toremove_list is never filled there so we can safely remove that part
from the test and just move the flag retrieval back to what it was prior
to the 2.8 patch above. Note that the patch is not reverted though, as
the parts that would drop the unexpected flags removal are unchanged.
This patch must NOT be backported. The code in 2.8 works correctly, it's
only the change in 2.9 that makes it misbehave.
Willy Tarreau [Thu, 12 Oct 2023 12:14:20 +0000 (14:14 +0200)]
CLEANUP: connection: drop an uneeded leftover cast
In conn_delete_from_tree() there remains a cast of the toremove_list
to struct list while the introduction of the union precisely was to
avoid this cast. It's a leftover from the first version of patch 5afcb686b
("MAJOR: connection: purge idle conn by last usage") merged in 2.9-dev4,
let's fix that.
The HTTP/3 specification has several requirements when parsing the authority
or host header inside a request. However, until now it was only partially
implemented.
This commit fixes this by ensuring the following :
* reject an empty authority/host header
* reject a host header if an authority was found with a different value
* reject a request with neither an authority nor a host header
BUG/MINOR: mux-quic: support initial 0 max-stream-data
Support stream opening with an initial max-stream-data of 0.
In the normal case, QC_SF_BLK_SFCTL is set when a qcs instance cannot
transfer more data due to flow-control. This flag is set when
transferring data from the MUX to the quic-conn instance.
However, it's possible to define an initial value of 0 for
max-stream-data. In this case, the qcs instance is blocked despite
QC_SF_BLK_SFCTL not being set. No STREAM frame is prepared for this stream
as it's not possible to emit any byte, so the QC_SF_BLK_SFCTL flag is never
set.
This behavior should cause no harm. However, it can trigger a BUG_ON()
crash in qcc_io_send(). Indeed, when sending is retried, it ensures that
only qcs instances waiting for a new qc_stream_buf or with
QC_SF_BLK_SFCTL set are present in the send_list.
To fix this, initialize qcs with an msd of 0 and QC_SF_BLK_SFCTL set.
The flag is removed only if the transport parameter msd value is non-null.
BUG/MEDIUM: mux-quic: fix RESET_STREAM on send-only stream
When receiving a RESET_STREAM on a send-only stream, it is mandatory to
close the connection with a STREAM_STATE_ERROR error. However, this was
badly implemented as it caused two invocations of qcc_set_error(), which
is forbidden by the mux-quic API.
To fix this, rely on qcc_get_qcs() to properly detect the error, and remove
the qcc_set_error() usage from qcc_recv_reset_stream().
RFC 9000 indicates that a QUIC packet with no frame must trigger a
connection closure with PROTOCOL_VIOLATION error code. Implement this
via an early return inside qc_parse_pkt_frms().
Move all QUIC trace definitions from quic_conn.h to quic_trace-t.h. Also
move the trace_quic macro definition into quic_trace.h to avoid multiple
definitions. This forces all QUIC source files that rely on traces to
include it, while reducing the size of quic_conn.h.
BUG/MINOR: quic: Avoid crashing with unsupported cryptographic algos
This bug was detected when compiling haproxy against the aws-lc TLS stack
during QUIC interop runner tests. Some algorithms could be negotiated by
haproxy through the TLS stack but were not fully supported by haproxy's QUIC
implementation. This led tls_aead() to return NULL (same thing for tls_md()
and tls_hp()). As these functions' return values were never checked, this
could trigger segfaults.
To fix this, the connection is closed as soon as possible with a
handshake_failure(40) TLS alert. Note that as the TLS stack successfully
negotiates an algorithm, it provides haproxy with CRYPTO data before entering
the ->set_encryption_secrets() callback. This is why this callback
(ha_set_encryption_secrets() on haproxy side) is modified to release all
the CRYPTO frames before triggering a CONNECTION_CLOSE with a TLS alert. This
is done by calling qc_release_pktns_frms() for all the packet number spaces.
Also modify some quic_tls_keys_hexdump calls to avoid crashes when the ->aead
or ->hp EVP_CIPHER are NULL.
Modify qc_release_pktns_frms() to do nothing if the packet number space passed
as parameter is not initialized.
This bug does not impact the QUIC TLS compatibility mode (USE_QUIC_OPENSSL_COMPAT).
Thank you to @ilia-shipitsin for having reported this issue in GH #2309.
CLEANUP: ssl: remove compat functions for openssl < 1.0.0
The openssl-compat.h file has some functions which were implemented in
order to provide compatibility with openssl < 1.0.0. Most of them were
there to support the 0.9.8 version, but we don't support this version anymore.
This patch removes the deprecated code from openssl-compat.h.
Willy Tarreau [Fri, 6 Oct 2023 20:03:17 +0000 (22:03 +0200)]
[RELEASE] Released version 2.9-dev7
Released version 2.9-dev7 with the following main changes :
- MINOR: support for http-request set-timeout client
- BUG/MINOR: mux-quic: remove full demux flag on ncbuf release
- CLEANUP: freq_ctr: make all freq_ctr readers take a const
- CLEANUP: stream: make the dump code not depend on the CLI appctx
- MINOR: stream: split stats_dump_full_strm_to_buffer() in two
- CLEANUP: stream: use const filters in the dump function
- CLEANUP: stream: make strm_dump_to_buffer() take a const stream
- MINOR: stream: make strm_dump_to_buffer() take an arbitrary buffer
- MINOR: stream: make strm_dump_to_buffer() show the list of filters
- MINOR: stream: make stream_dump() always multi-line
- MINOR: streams: add support for line prefixes to strm_dump_to_buffer()
- MEDIUM: stream: now provide full stream dumps in case of loops
- MINOR: debug: use the more detailed stream dump in panics
- CLEANUP: stream: remove the now unused stream_dump() function
- Revert "BUG/MEDIUM: quic: missing check of dcid for init pkt including a token"
- MINOR: stream: fix output alignment of stuck thread dumps
- BUG/MINOR: proto_reverse_connect: fix FD leak on connection error
- BUG/MINOR: tcp_act: fix attach-srv rule ACL parsing
- MINOR: connection: define error for reverse connect
- MINOR: connection: define mux flag for reverse support
- MINOR: tcp_act: remove limitation on protocol for attach-srv
- BUG/MINOR: proto_reverse_connect: fix FD leak upon connect
- BUG/MAJOR: plock: fix major bug in pl_take_w() introduced with EBO
- Revert "MEDIUM: sample: Small fix in function check_operator for eror reporting"
- DOC: sample: Add a comment in 'check_operator' to explain why 'vars_check_arg' should ignore the 'err' buffer
- DEV: sslkeylogger: handle file opening error
- MINOR: quic: define quic-socket bind setting
- MINOR: quic: handle perm error on bind during runtime
- MINOR: backend: refactor specific source address allocation
- MINOR: proto_reverse_connect: support source address setting
- BUILD: pool: Fix GCC error about potential null pointer dereference
- MINOR: hlua: Set context's appctx when the lua socket is created
- MINOR: hlua: Don't preform operations on a not connected socket
- MINOR: hlua: Save the lua socket's timeout in its context
- MINOR: hlua: Save the lua socket's server in its context
- MINOR: hlua: Test the hlua struct first when the lua socket is connecting
- BUG/MEDIUM: hlua: Initialize appctx used by a lua socket on connect only
- DEBUG: mux-h1: Fix event label from trace messages about payload formatting
- BUG/MINOR: mux-h1: Handle read0 in rcv_pipe() only when data receipt was tried
- BUG/MINOR: mux-h1: Ignore C-L when sending H1 messages if T-E is also set
- BUG/MEDIUM: h1: Ignore C-L value in the H1 parser if T-E is also set
- REGTESTS: filters: Don't set C-L header in the successful response to CONNECT
- MINOR: mux-h1: Add flags if outgoing msg contains a header about its payload
- MINOR: mux-h1: Rely on H1S_F_HAVE_CHNK to add T-E in outgoing messages
- BUG/MEDIUM: mux-h1: Add C-L header in outgoing message if it was removed
- BUG/MEDIUM: mux-h1; Ignore headers modifications about payload representation
- BUG/MINOR: h1-htx: Keep flags about C-L/T-E during HEAD response parsing
- MINOR: h1-htx: Declare successful tunnel establishment as bodyless
- BUILD: quic: allow USE_QUIC to work with AWSLC
- CI: github: add USE_QUIC=1 to aws-lc build
- BUG/MINOR: hq-interop: simplify parser requirement
- MEDIUM: cache: Add "Origin" header to secondary cache key
- MINOR: haproxy: permit to register features during boot
- MINOR: tcp_rules: tcp-{request,response} requires TCP or HTTP mode
- MINOR: stktable: "stick" requires TCP or HTTP mode
- MINOR: filter: "filter" requires TCP or HTTP mode
- MINOR: backend/balance: "balance" requires TCP or HTTP mode
- MINOR: flt_http_comp: "compression" requires TCP or HTTP mode
- MINOR: http_htx/errors: prevent the use of some keywords when not in tcp/http mode
- MINOR: fcgi-app: "use-fcgi-app" requires TCP or HTTP mode
- MINOR: cfgparse-listen: "http-send-name-header" requires TCP or HTTP mode
- MINOR: cfgparse-listen: "dynamic-cookie-key" requires TCP or HTTP mode
- MINOR: proxy: dynamic-cookie CLIs require TCP or HTTP mode
- MINOR: cfgparse-listen: "http-reuse" requires TCP or HTTP mode
- MINOR: proxy: report a warning for max_ka_queue in proxy_cfg_ensure_no_http()
- MINOR: cfgparse-listen: warn when use-server rules is used in wrong mode
- DOC: config: unify "log" directive doc
- MINOR: sink/log: fix some typos around postparsing logic
- MINOR: sink: remove useless check after sink creation
- MINOR: sink: don't rely on p->parent in sink appctx
- MINOR: sink: don't rely on forward_px to init sink forwarding
- MINOR: sink: refine forward_px usage
- MINOR: sink: function to add new sink servers
- BUG/MEDIUM: stconn: Fix comparison sign in sc_need_room()
- BUG/MEDIUM: actions: always apply a longest match on prefix lookup
Willy Tarreau [Fri, 6 Oct 2023 14:51:41 +0000 (16:51 +0200)]
BUG/MEDIUM: actions: always apply a longest match on prefix lookup
Many actions take arguments after a parenthesis. When this happens, they
have to be tagged in the parser with KWF_MATCH_PREFIX so that a sub-word
is sufficient (since by default the whole block including the parenthesis
is taken).
The problem with this is that the parser stops on the first match. This
was OK years ago when there were very few actions, but over time new ones
were added and many actions are the prefix of another one (e.g. "set-var"
is the prefix of "set-var-fmt"). And what happens in this case is that the
first word is picked. Most often that doesn't cause trouble because such
similar-looking actions involve the same custom parser so actually the
wrong selection of the first entry results in the correct parser to be
used anyway and the error to be silently hidden.
But it's getting worse when accidentally declaring prefixes in multiple
files, because in this case it will solely depend on the object file link
order: if the longest name appears first, it will be properly detected,
but if it appears last, its other prefix will be detected and might very
well not be related at all and use a distinct parser. And this is random
enough to make some actions succeed or fail depending on the build options
that affect the linkage order. Worse: what if a keyword is the prefix of
another one, with a different parser but a compatible syntax ? It could
seem to work by accident but not do the expected operations.
The correct solution is to always look for the longest matching name.
This way the correct keyword will always be matched and used, and there
will no longer be any risk of randomly picking the wrong one.
This fix must be backported to the relevant stable releases.
BUG/MEDIUM: stconn: Fix comparison sign in sc_need_room()
sc_need_room() function may be called with a negative value. In this case,
the intent is to be notified if any space was made in the channel buffer. In
the function, we get the min between the requested room and the maximum
possible room in the buffer, considering it may be an HTX buffer.
However this max value is unsigned and leads to an unsigned comparison,
casting the negative value to an unsigned value. Of course, in this case,
this always leads to the wrong result. This bug seems to have no effect but
it is hard to be sure.
To fix the issue, we take care to respect the requested room sign by casting
the max value to a signed integer.
MINOR: sink: don't rely on p->parent in sink appctx
Remove the unnecessary dependency on the proxy->parent pointer in the
sink appctx functions by directly using the sink sft from
applet->svcctx to get back to the sink-related structs.
Thanks to this, the proxy used for a ringbuf does not have to be exclusive
to a single sink anymore.
MINOR: sink: remove useless check after sink creation
It's useless to check if the sink has been created with the BUF type after
calling sink_new_buf(), since the goal of the function is to create
a new sink of the BUF type.
MINOR: sink/log: fix some typos around postparsing logic
Fix some typos that were overlooked during the recent log/sink
API improvements. This patch also makes sink_new_from_logsrv() static
since it is not used outside of sink.c.
"log" directive description was found 2 times in the configuration file:
First, in 3.1 in the "global parameters" chapter, and then in 4.2 in the
per-proxy keyword options.
Both descriptions are almost identical: having to maintain the "same"
documentation in 2 different places is error-prone. Because of this, some
details were added to one of them but missing from the other, and
vice-versa, probably because one didn't see that the "log" directive was
also documented elsewhere.
To prevent the 2 descriptions from further diverging, and make it easier
to maintain, we merge them in the per-proxy "log" directive description
(in 4.2 chapter), and we add a pointer to it in the global "log" to
encourage the user to refer to the per-proxy "log" documentation for
usage details.
MINOR: cfgparse-listen: warn when use-server rules is used in wrong mode
haproxy will now report a warning when the "use-server" keyword is used
within a backend that doesn't support server rules, to inform the user that
such rules will be ignored.
To this day, only TCP and HTTP backends can make use of them.
MINOR: proxy: report a warning for max_ka_queue in proxy_cfg_ensure_no_http()
Display a warning when max_ka_queue is set (which is the case when the
"max-keep-alive-queue" directive is used within a proxy section) to inform
the user that this directive depends on the "http" mode to work and thus
will safely be ignored.
Willy Tarreau [Fri, 6 Oct 2023 08:45:16 +0000 (10:45 +0200)]
MINOR: haproxy: permit to register features during boot
The regtests are using the "feature()" predicate but this one can only
rely on build-time options. It would be nice if some runtime-specific
options could be detected at boot time so that regtests could more
flexibly adapt to what is supported (capabilities, splicing, etc).
Similarly, certain features that are currently enabled with USE_XXX
could also be automatically detected at build time using ifdefs and
would simplify the configuration, but then we'd lose the feature
report in the feature list which is convenient for regtests.
This patch makes sure that haproxy -vv shows the variable's contents
and not the macro's contents, and adds a new hap_register_feature()
to allow the code to register a new keyword.
MEDIUM: cache: Add "Origin" header to secondary cache key
This patch adds a hash of the Origin header to the cache's secondary key.
This makes it possible to store responses that have a "Vary: Origin" header
in the cache when vary is enabled.
This cannot be considered as a means to manage CORS requests though, it
only processes the Origin header and hashes the presented value without
any form of URI normalization.
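For reference, Vary processing (and thus secondary keys such as this Origin
hash) only takes effect when it is enabled in the cache section, e.g. (cache
name and size are arbitrary):

    cache mycache
        total-max-size 64
        process-vary on            # enables secondary-key handling for Vary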
This need was expressed by Philipp Hossner in GitHub issue #251.
Co-Authored-by: Philipp Hossner <philipp.hossner@posteo.de>
hq-interop should be limited to QUIC testing. As such, its code should
be kept as simple as possible and not implement too many things.
This patch fixes issues which may cause rare QUIC interop failures :
- remove some unneeded BUG_ON() as parser should not be too strict
- remove support of partial message parsing
- ensure buffer data does not wrap as it was not properly handled. In
any case, this should never happen as only a single message will be
stored for each qcs buffer.
This patch fixes the build with AWSLC and USE_QUIC=1, this is only meant
to be able to build for now and it's not feature complete.
The set_encryption_secrets callback has been split into set_read_secret
and set_write_secret.
Missing features:
- 0RTT was disabled.
- TLS1_3_CK_CHACHA20_POLY1305_SHA256, TLS1_3_CK_AES_128_CCM_SHA256 were disabled
- clienthello callback is missing, certificate selection could be
limited (RSA + ECDSA at the same time)
MINOR: h1-htx: Declare successful tunnel establishment as bodyless
Successful responses to a CONNECT or to an upgrade request have no payload.
Be explicit on this point by setting the HTX_SL_F_BODYLESS_RESP flag on the
HTX start-line.
BUG/MINOR: h1-htx: Keep flags about C-L/T-E during HEAD response parsing
When a response to a HEAD request is parsed, the flags indicating whether the
content length is set or the payload is chunked must be preserved. This is
important because of the previous fix. Otherwise, these headers will be
removed from the response sent to the client.
This patch must only be backported if "BUG/MEDIUM: mux-h1; Ignore headers
modifications about payload representation" is backported.
BUG/MEDIUM: mux-h1; Ignore headers modifications about payload representation
We now ignore modifications made during the message analysis about the payload
representation if only headers are updated and not meta-data. This means a C-L
header removed to add a T-E one, or the opposite, via HTTP actions. These kinds
of changes are ignored because it is extremely hard to be sure the payload
will be properly formatted.
This has been an issue since HTX was introduced and it was never reported.
Thus, there is no reason to backport this patch for now. It relies on the
following commits:
* MINOR: mux-h1: Add flags if outgoing msg contains a header about its payload
* MINOR: mux-h1: Rely on H1S_F_HAVE_CHNK to add T-E in outgoing messages
* BUG/MEDIUM: mux-h1: Add C-L header in outgoing message if it was removed
BUG/MEDIUM: mux-h1: Add C-L header in outgoing message if it was removed
If a C-L header was found during parsing of a message but was removed via
an HTTP action, it is re-added during the message formatting. Indeed, if
headers about the payload are modified, the meta-data of the message must
also be updated. Otherwise, it is not possible to guarantee the message will
be properly formatted.
To do so, we rely on the flag H1S_F_HAVE_CLEN.
This patch should not be backported unless an issue is explicitly
reported. It relies on "MINOR: mux-h1: Add flags if outgoing msg contains a
header about its payload".
MINOR: mux-h1: Rely on H1S_F_HAVE_CHNK to add T-E in outgoing messages
If a message is declared to have a known length but no C-L or T-E headers
are set, a "Transfer-Encoding: chunked" header is automatically added. It is
useful for H2/H3 messages with no C-L header. There is now a flag to know
whether this header was found or added, so we use it.
MINOR: mux-h1: Add flags if outgoing msg contains a header about its payload
If a "Content-length" or "Transfer-Encoding; chunked" headers is found or
inserted in an outgoing message, a specific flag is now set on the H1
stream. H1S_F_HAVE_CLEN is set for "Content-length" header and
H1S_F_HAVE_CHNK for "Transfer-Encoding: chunked".
This will be useful to properly format outgoing messages, even if one of
these headers was removed by hand (with no update of the message meta-data).
REGTESTS: filters: Don't set C-L header in the successful response to CONNECT
In the random-forwarding.vtc script, adding a "Content-Length: 0" header in
the successful response to the CONNECT request is invalid, but it may also
lead to a wrong check on the response. The "rxresp" directive doesn't handle
CONNECT responses. Thus "-no_obj" must be added instead, to be sure the
payload won't be retrieved or expected.
BUG/MEDIUM: h1: Ignore C-L value in the H1 parser if T-E is also set
In fact, during the parsing there is already a test to remove the
Content-Length header if a Transfer-Encoding one is found. However, in the
parser, the content-length value was still used to set the body length (the
final one and the remaining one). This value is thus also used to set the
extra field in the HTX message and is then used during the sending stage to
announce the chunk size.
So, Content-Length header value must be ignored by the H1 parser to properly
reformat the message when it is sent.
This patch must be backported as far as 2.6. Lower versions don't handle
this case.
BUG/MINOR: mux-h1: Ignore C-L when sending H1 messages if T-E is also set
In fact, it is already done, but both flags (H1_MF_CLEN and H1_MF_CHUNK) are
set on the H1 parser. Thus it is error-prone when H1 messages are sent,
especially because most of the time, the "Content-length" case is processed
before the "chunked" one. This may lead to computing the wrong chunk size and
missing the last chunk.
This patch must be backported as far as 2.6. This case is not handled in 2.4
and lower.
BUG/MINOR: mux-h1: Handle read0 in rcv_pipe() only when data receipt was tried
In the rcv_pipe() callback we must be careful not to report the end of stream
too early because some data may still be present in the input buffer. If we
report an EOS here, this will block the subsequent call to rcv_buf() from
processing the remaining input data. This only happens when we try a last
rcv_pipe() when the xfer length is unknown and all data was already received
in the input buffer. Concretely this happens with a payload larger than one
buffer but smaller than two buffers.
BUG/MEDIUM: hlua: Initialize appctx used by a lua socket on connect only
The appctx used by a lua socket was synchronously initialized after the
appctx creation. The connect itself is performed later. However, this is an
issue because the script may be interrupted between the two operations. In
this case, the stream attached to the appctx is woken up before any
destination is set. The stream will try to connect but, without a
destination, it fails. When the lua script is rescheduled and the connect is
performed, the connection has already failed and an error is returned.
To fix the issue, we must be sure not to wake up the stream before the
connect. To do so, we must defer the appctx initialization. It is now
performed on connect.
This patch relies on the following commits:
* MINOR: hlua: Test the hlua struct first when the lua socket is connecting
* MINOR: hlua: Save the lua socket's server in its context
* MINOR: hlua: Save the lua socket's timeout in its context
* MINOR: hlua: Don't preform operations on a not connected socket
* MINOR: hlua: Set context's appctx when the lua socket is created
MINOR: hlua: Save the lua socket's server in its context
For the same reason as the timeout, the server used by a lua socket is now
saved in its context. This will be mandatory to fix issues with the lua
sockets.
MINOR: hlua: Save the lua socket's timeout in its context
When the lua socket timeout is set, it is now saved in its context. If there
is already a stream attached to the appctx, the timeout is then immediately
modified. Otherwise, it is modified when the stream is created, thus during
the appctx initialization.
For now, the appctx is initialized when it is created. But this will change
to fix issues with the lua sockets. Thus, this patch is mandatory.
MINOR: hlua: Don't preform operations on a not connected socket
There is nothing that prevents someone from creating a lua socket and trying
to receive or to write before the connection is established or after the
shutdown is performed. The same is true when info about the socket is
retrieved.
It is not an issue because this will fail later. But now, we check whether
the socket is connected or not earlier. It is more efficient, but it will
also be mandatory to fix issues with the lua sockets.
MINOR: hlua: Set context's appctx when the lua socket is created
The lua socket's context referenced the owning appctx. It was set when the
appctx was initialized. It is now performed when the appctx is created. It
is a small change but this will be required to fix several issues with the
lua sockets.
BUILD: pool: Fix GCC error about potential null pointer dereference
In pool_gc(), GCC 13.2.1 reports an error about a potential null pointer
dereference:
src/pool.c: In function ‘pool_gc’:
src/pool.c:807:64: error: potential null pointer dereference [-Werror=null-dereference]
807 | entry->buckets[bucket].free_list = temp->next;
| ~~~~^~~~~~
There is no issue here because the "bucket" variable cannot be greater than
CONFIG_HAP_POOL_BUCKETS. But to make GCC happy, we now break out of the loop
if it is greater than or equal to CONFIG_HAP_POOL_BUCKETS.
MINOR: proto_reverse_connect: support source address setting
Support backend configuration for explicit source address on
pre-connect. These settings can be specified via "source" backend
keyword or directly on the server line.
Previously, all source parameters triggered a BUG_ON() when binding a
reverse connect listener. This was done because some settings are
incompatible with reverse connect context : this is the case for all
source settings which do not specify a fixed address but rather rely on
a frontend connection. Indeed, in case of preconnect, the connection is
initiated on its own, without the existence of a previous frontend
connection.
This patch allows using a source parameter with a fixed address. All
other settings (usesrc client/clientip/hdr_ip) are rejected on listener
binding. On connection init, alloc_bind_address() is used to set the
optional source address.
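A hedged sketch of what is now accepted vs. still rejected (backend name and
address are illustrative; the pre-connect/reverse server line itself is
omitted here):

    backend be_preconnect
        source 192.0.2.10                  # fixed address: now allowed
        # source 0.0.0.0 usesrc clientip   -> still rejected: relies on a frontend connection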
MINOR: backend: refactor specific source address allocation
Refactor alloc_bind_address() function which is used to allocate a
sockaddr if a connection to a target server relies on a specific source
address setting.
The main objective of this change is to be able to use this function
outside of backend module, namely for preconnections using a reverse
server. As such, this function is now exported globally.
For reverse connect, there is no stream instance. As such, the function
parts which relied on it were reduced to the minimum. Now, the stream is
only used if a non-static address is configured, which is useful for
usesrc client|clientip|hdr_ip. These options make no sense for reverse
connect, so it should be safe to use the same function.
MINOR: quic: handle perm error on bind during runtime
Improve the handling of EACCES permission errors encountered when using the
QUIC connection socket at runtime:
* The first occurrence of the error on the process will generate a log
warning. This should prevent users from using a privileged port
without the mandatory access rights.
* Socket mode will automatically fall back to the listener socket for the
receiver instance. This requires duplicating the settings from the
bind_conf to the receiver instance to support configurations with
multiple addresses on the same bind line.
Define a new bind option quic-socket :
quic-socket [ connection | listener ]
This new setting works in conjunction with the existing global setting
tune.quic.socket-owner and reuses the same semantics.
The purpose of this setting is to allow disabling connection socket
usage on individual listener instances. This will notably be useful
when it needs to be deactivated after encountering a fatal permission
error on bind() at runtime.
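For instance, connection-socket usage can be disabled on a single bind line
(address, certificate path and ALPN are placeholders):

    frontend fe_quic
        bind quic4@0.0.0.0:443 ssl crt /path/to/cert.pem alpn h3 quic-socket listener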
DOC: sample: Add a comment in 'check_operator' to explain why 'vars_check_arg' should ignore the 'err' buffer
This extra comment ensures that we do not try to pass an 'err' argument
to 'vars_check_arg', otherwise some warnings will be raised if an
operator is given an integer directly in the configuration file.
The "check_operator" function is used for all the operator converters
such as "and", "or", "add"...
With such a converter that accepts a variable name as well as an
integer, the "vars_check_arg" call is expected to fail when an integer
is provided. Passing an "err" variable has the unwanted side effect of
raising a warning during init for a configuration such as the following:
http-request set-query "s=%[rand,add(20)]"
which raises the following warning:
[WARNING] (33040) : config : parsing [hap.cfg:14] : invalid
variable name '20'. A variable name must be start by its scope. The
scope can be 'proc', 'sess', 'txn', 'req', 'res' or 'check'.