Fix parsing of certificate validator responses (#452)
If a certificate validator did not end its response with an end-of-line
or whitespace character, then Squid, while parsing the response,
read bytes beyond the end of the buffer where the response is
stored.
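The defensive pattern is simple; a minimal sketch (illustrative code,
not the actual Squid parser) of scanning a token without assuming a
trailing delimiter exists:

    // Illustrative sketch, not Squid's actual validator parser.
    // Returns the length of the first token, never reading past buf + len.
    static size_t tokenLength(const char *buf, size_t len)
    {
        size_t i = 0;
        // stop at the buffer end even when no delimiter was received
        while (i < len && buf[i] != ' ' && buf[i] != '\n')
            ++i;
        return i;
    }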
Fix rock disk entry contamination related to aborted swapouts (#444)
Also probably fixed hit response corruption related to aborted rock
swapins.
The following disk entry contamination sequence was observed in detailed
cache logs during high-load Polygraph tests. Some of the observed
level-1 errors matched those in real-world deployments.
1. Worker A schedules AX – a request to write a piece of entry A to disk
slot X. The disker is busy writing and reading other slots for worker
A and other workers so AX stays in the A-to-Disker queue for a while.
2. Entry A aborts swapout (for any reason, including network errors
while receiving the being-stored response). Squid makes disk slot X
available for other entries to use. AX stays queued.
3. Another worker B picks up disk slot X (from the shared free disk slot
pool) and schedules BX, a request to write a piece of entry B to disk
slot X. BX gets queued in a B-to-Disker queue. AX stays queued.
4. The disker satisfies write request BX. Disk slot X contains entry B
contents now. AX stays queued.
5. The disker satisfies write request AX. Disk slot X is a part of entry
B slot chain but contains former entry A contents now! HTTP requests
for entry B now read entry A leftovers from disk and complain about
metadata mismatches (at best) or get wrong response contents (at
worst).
To prevent premature disk slot reuse, we now keep disk slots reserved
while they are in the disk queue, even if the corresponding cache entry
is long gone: Individual disk write requests now "own" the slot they are
writing. The Rock::IoState object owns reserved but not yet used slots
so that they can be freed when the object is gone. The disk entry owns
the (successfully written) slots added to its chain in the map.
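The resulting ownership rule is loosely analogous to shared ownership
of a resource. A toy C++ sketch (illustrative only; Squid does not
literally use shared_ptr here):

    // Toy model: the slot returns to the free pool only after the
    // last owner (entry or queued write request) releases it.
    #include <iostream>
    #include <memory>

    struct Slot {
        int id = 0;
        ~Slot() { std::cout << "slot " << id << " returned to free pool\n"; }
    };
    using SlotReservation = std::shared_ptr<Slot>;

    int main() {
        auto entryA = std::make_shared<Slot>();
        entryA->id = 7;
        SlotReservation queuedWriteAX = entryA; // queued AX co-owns slot X
        entryA.reset();        // entry A aborts its swapout ...
        queuedWriteAX.reset(); // ... but slot X is freed only now
    }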
The new slot ownership scheme required changes in what metadata the
writing code has to maintain. For example, we now keep the address of
the previous slot in the entry chain so that we can update its .next
field after a successful disk write. Also, the old code detecting
dropped write requests could no longer rely on the now-empty .next field
in the previous map entry. The rewritten code numbers I/O transactions
so that out-of-order replies can be detected without using the map.
I tried to simplify the metadata maintenance code by shortening
reservation lifetimes and using just-in-time [first] slot reservations.
The new code may also leak fewer slots when facing C++ exceptions.
As for reading, I realized that we had no protection from dropped rock
read requests. If the first read request is dropped, the metadata
decoding would probably fail, but if subsequent reads are skipped, the
client may be fed data that is missing those skipped blocks. I did
not try to reproduce these problems, but they are now fixed using the
same I/O transaction numbering mechanism that the writing code now uses.
Negative length checks in store_client.cc treat dropped reads as errors.
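A minimal sketch of the transaction numbering idea (names are
illustrative; the real bookkeeping lives in the rock I/O code):

    // Illustrative sketch: each scheduled I/O carries a monotonically
    // increasing ID; a reply without the next expected ID reveals a
    // dropped or out-of-order I/O.
    #include <cstdint>

    class IoXactionTracker {
        uint64_t nextToSchedule = 0;
        uint64_t nextExpectedReply = 0;
    public:
        uint64_t schedule() { return nextToSchedule++; }
        // false means an earlier request was dropped or replies were
        // reordered; treat the affected swapout/swapin as failed
        bool noteReply(const uint64_t id) { return id == nextExpectedReply++; }
    };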
I also removed the commented-out "partial writing" code because IoState
class member changes should expose any dangerous merge problems.
GCC-9 combined with Squid's use of -Werror makes these warnings hard
errors which can no longer be ignored. We are thus required
to alter this third-party code when it is built for Squid.
Truncation of these strings is fine. Rather than suppress the
GCC warnings, switch to xstrncpy() which has similar
behaviour but guarantees that a c-string terminator exists within
the copied range limit (removing the need for two -1 hacks).
This change will add terminators on path and device_type
values in the rare case of overly long configured values.
It is not clear what ancient Domain Controllers would do
when handed an unterminated c-string in those cases, but it was
unlikely to be good.
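For reference, the xstrncpy() contract can be sketched as follows (a
simplified stand-in, not Squid's actual implementation):

    // Simplified stand-in for xstrncpy(): copy at most n-1 characters
    // and always write a terminating NUL, unlike strncpy().
    #include <cstddef>

    char *xstrncpy_sketch(char *dst, const char *src, size_t n)
    {
        char *d = dst;
        if (!n)
            return dst;
        while (--n && *src)
            *d++ = *src++; // copy at most n-1 characters
        *d = '\0';         // the terminator is always written
        return dst;
    }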
Partial disk writes may be useful for CF disk slaves and SMP disk hit
readers, but their correct implementation requires a lot of additional
work, the current implementation is insufficient/buggy, and partially
written entries were probably never read because Rock writers do not
enable shared/appending locks.
Here is a brief (but complicated) history of the issue, for the record:
1. 807feb1 adds partial disk writes "in order to propagate data from
the hit writer to the collapsed hit readers". The readers
probably could not read any data though because the disk entry
was still exclusively locked for writing. The developers either
did not realize that or intended to address it later, but did not
document their intentions -- all this development was happening
on a fast-moving CF development branch.
2. 0b8be48 makes those partial disk writes conditional on CF being
enabled. It is not clear whether the developers wanted to reduce
the scope of a risky feature or did not realize that non-CF use
cases also benefit from partial writes (when fully supported).
3. ce49546 adds low-level appending lock support but does not
actually use append locks for any caches.
4. 4475555 adds appending support to the shared memory cache.
5. 4976925 explicitly disables partial disk writes, acknowledging
that they were probably never used (by readers) anyway due to the
lack of an Ipc::StoreMap::startAppending() call. The same commit
documents that partial writes caused problems (for writers) in
performance tests.
6. 5296bbd re-enables partial disk writes (for writers) after fixing
problems detected earlier in performance tests. This commit does
not add the critical (for readers) startAppending() call. It
looks like the lack of that call was overlooked, again!
When Squid parses entries from the /etc/hosts file, it lowercases
them all (see bug 3040). If a cache_peer hostname is uppercase, that
leads to DNS resolution failure. Lowercasing the cache_peer host fixes
this issue.
This change may expose broken Squid configurations that
incorrectly relied on non-lowercase peer host names to
bypass Squid's "is this cache_peer different from me?"
check, though such configurations should encounter
forwarding loop errors later anyway.
Bug 4957: Multiple XSS issues in cachemgr.cgi (#429)
The cachemgr.cgi web module of the Squid proxy is vulnerable
to XSS issues. The vulnerable parameters "user_name" and "auth"
have insufficient sanitization in place.
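The usual fix pattern for such reflected XSS is to HTML-escape
parameters before echoing them into generated pages. A minimal sketch
(illustrative, not the exact cachemgr.cgi change):

    // Illustrative escaping helper, not the exact cachemgr.cgi code.
    #include <string>

    static std::string htmlQuote(const std::string &in)
    {
        std::string out;
        for (const char c : in) {
            switch (c) {
            case '<': out += "&lt;"; break;
            case '>': out += "&gt;"; break;
            case '&': out += "&amp;"; break;
            case '"': out += "&quot;"; break;
            case '\'': out += "&#39;"; break;
            default: out += c;
            }
        }
        return out;
    }
    // e.g., emit htmlQuote(user_name) instead of the raw user_name value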
Prevent TLS transaction stalls by preserving flags.read_pending (#423)
DoSelect() cleared flags.read_pending just like it clears the read
handler, but that was wrong: Unlike the read handler, the "pending read"
state is not managed by the read _handler_. It is managed by the reading
_method_. DoSelect() calls the read handler, so it has the right to clear the
handler before that call -- the called handler is given an opportunity
to set another read handler as needed. DoSelect() does not call the
reading method.
In most cases, the reading method is called by the reading handler so
the code appears to "work". However, if the reading handler has to defer
the read, it will not call the reading method. Deferred reads are
naturally ignorant about the high-level read_pending flag so they cannot
(and should not) attempt to preserve that flag. The flag and the
associated "I have buffered data to give to the reader" state get lost.
When the deferred read is kicked, it starts waiting for socket I/O.
The loss of the flag usually leads to (minor) response time increases
because the peer socket will have more data available for reading,
triggering a call to the read handler which (this time) calls the
reading method which supplies the previously buffered data. Thus, the
code usually appears to "work" even when the bug is triggered.
However, when the buffered data contained the _last_ response byte, the
peer socket often remains idle (not ready for reading), and the
transaction gets stalled, eventually timing out/aborting without
delivering the last byte(s) to the client. Such bugs have been
observed in the wild.
This change delegates read_pending management to the reading method.
When the method is not called (e.g., because reading is deferred), the
flag remains set and continues to reflect the current reading state.
When the reading method is "downgraded" from tls_read_method() to
default_read_method(), the new code clears flags.read_pending to avoid
endless "this FD is ready for reading" looping. The downgrade itself is
a dangerous operation because it may result in the loss of the already
buffered data. I hope Squid does not lose data when downgrading (and
added comments to justify that hope), but I cannot guarantee that. This
change itself does not introduce data losses.
I also made fde I/O methods private to improve our chances of preventing
a similar bug if future code changes carelessly reset them.
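The adopted ownership rule can be modeled in a few lines (names are
illustrative, not the real fde/comm types): only the reading method
updates read_pending, so a deferred read leaves the "I have buffered
data" state intact:

    // Illustrative model, not Squid's fde/comm code.
    #include <algorithm>
    #include <string>

    class TlsConnModel {
        std::string buffered; // data already decrypted by the TLS layer
    public:
        bool read_pending = false;

        void noteDecrypted(const std::string &s) { buffered += s; read_pending = true; }

        // the reading method consumes buffered data and maintains the
        // flag; the select loop never touches read_pending
        size_t readMethod(char *out, size_t len) {
            const size_t n = std::min(len, buffered.size());
            std::copy_n(buffered.begin(), n, out);
            buffered.erase(0, n);
            read_pending = !buffered.empty();
            return n;
        }
    };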
FreeBSD defines FD_NONE in /usr/include/fcntl.h to be magic to
the system. We are not using that name explicitly anywhere, but
it may make sense to keep it around as a default value for
fd_type. Rename the symbol to avoid the clash and fix the build
on FreeBSD.
Bug 4842: Memory leak when http_reply_access uses external_acl (#424)
Http::One::Server::handleReply() sets AccessLogEntry::reply which may
already be set. It is already set, for example, when the ACL code
has already called syncAle() because external ACLs require an ALE.
Squid converted any invalid response shorter than 4 bytes into an
invalid "HTTP/1.1 0 Init" response (with those received characters and a
CRLFCRLF suffix as a body). In some cases (e.g., with ICAP RESPMOD), the
resulting body was not sent to the client at all.
Now Squid handles such responses the same way it handles any non-HTTP/1
(and non-ICY) response, converting it into a valid HTTP/200 response
with an X-Transformed-From:HTTP/0.9 header and received bytes as
a message body.
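For example (an illustrative exchange; generated headers are
abbreviated and may differ), a server answering with the 3 bytes "hi"
plus LF now yields a response along the lines of:

    HTTP/1.1 200 OK
    X-Transformed-From: HTTP/0.9

    hi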
Amos Jeffries [Sat, 8 Jun 2019 11:40:40 +0000 (11:40 +0000)]
Fix GCC-9 build issues (#413)
GCC-9 continues the development track started with GCC-8,
producing more warnings and errors about possible code issues,
which Squid's use of "-Wall -Werror" turns into hard build
failures:
error: ‘strncpy’ output may be truncated copying 6 bytes from a
string of length 6 [-Werror=stringop-truncation]
error: ‘%s’ directive argument is null
[-Werror=format-overflow=]
error: ‘void* memset(void*, int, size_t)’ clearing an object of
type ... with no trivial copy-assignment; use assignment or
value-initialization instead [-Werror=class-memaccess]
error: ‘void* memset(void*, int, size_t)’ clearing an object of
non-trivial type ...; use assignment or value-initialization
instead [-Werror=class-memaccess]
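The class-memaccess cases follow the fix pattern suggested by the
compiler itself; a brief sketch:

    // Illustrative type; the real warnings hit various Squid classes.
    #include <string>

    struct Example { std::string name; int ttl = 0; }; // non-trivial type

    void reset(Example &e)
    {
        // memset(&e, 0, sizeof(e)); // GCC-9: -Werror=class-memaccess
        e = Example();               // value-initialization instead
    }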
Also, segmentation faults with minimal builds have been
traced to std::string template differences between
optimized and non-optimized object binaries. These result in
cppunit (built with optimizations) crashing unit tests when
freeing memory. Work around that temporarily by removing the use
of --disable-optimizations from minimal builds.
Calling virginBodySending.plan() has nothing to do with ICAP trailers
support. These (expensive!) plans are necessary if and only if we may
echo the virgin message back to Squid.
Since 6fbd6e, ICAP code always planned to echo the virgin body, even
when no echoing was needed. Echoing requires buffering the virgin
message body, so these plans stalled responses with bodies exceeding
~65K buffer capacity when, for example, the ICAP server did not respond
until getting the entire request body.
Amos Jeffries [Sat, 29 Dec 2018 23:58:49 +0000 (12:58 +1300)]
Move session tls-options= assignment out of CreateSession() function
The original implementation utilized OpenSSL inheritance of
library-specific options from SSL_CTX* to SSL* structures.
This does not work with GnuTLS, which requires setting the
options on each session object.
The workaround of looking up the session options inside CreateSession()
only looked up client connection details, which is broken when
creating server sessions.
* Extend Security::CreateSession() to take a PeerOptions reference
containing the specific config details to associate with the new
session (if any).
* Move selection of CachePeer vs tls_outgoing_* options
out to Security::CreateClientSession().
* Pass the Any::PortCfg::secure details into the
Security::CreateServerSession().
Amos Jeffries [Sat, 29 Dec 2018 22:56:07 +0000 (11:56 +1300)]
Separate tls-min-version= config from options= config
.. until later when parsing the library-specific strings
into their parsedOptions binary form.
This allows multiple options= and tls-min-version= settings,
with only the latest of each being used, independent of what
is going on with the other config settings.
Also, add nil definitions for the NO_TLSv* OpenSSL options
to make the tls-min-version= not result in fatal errors when
using minimalized OpenSSL library builds.
Amos Jeffries [Thu, 27 Dec 2018 09:17:49 +0000 (22:17 +1300)]
Delay parsing of options= parameters
.. until context (or session) creation time after squid.conf parsing.
Since a nil value in parsedOptions has meaning as default settings,
we cannot use that to indicate that a re-parse is required. Add a
flag, set to indicate that changes have been made, and check for a
re-parse when using parsedOptions to update a context or session.
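A condensed sketch of that pattern (member names are illustrative):

    // Illustrative model of the deferred-parse pattern, not the real
    // PeerOptions class.
    #include <string>

    class PeerOptionsModel {
        std::string sslOptions;  // raw option text from squid.conf
        long parsedOptions = 0;  // nil is meaningful: default settings
        bool optsReparse = true; // records that sslOptions changed
    public:
        void addOptions(const std::string &s) {
            sslOptions += s;
            optsReparse = true;  // parse later, not at squid.conf time
        }
        long options() {         // called at context/session creation
            if (optsReparse) {
                optsReparse = false;
                parsedOptions = parse(sslOptions);
            }
            return parsedOptions;
        }
    private:
        static long parse(const std::string &s) { return s.empty() ? 0 : 1; } // stand-in
    };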
Amos Jeffries [Thu, 27 Dec 2018 06:42:31 +0000 (19:42 +1300)]
Update PeekingPeerConnector to use PeerOptions API
.. for SSL_set_options() call.
Allowing parsedOptions to be private within PeerOptions
which is needed to safely delay its value being set until
after squid.conf line is fully parsed.
Amos Jeffries [Sat, 10 Nov 2018 04:00:12 +0000 (17:00 +1300)]
Fix tls-min-version= being ignored
An audit-required change made PeerOptions::parse() call
parseOptions() when 'options=' altered sslOptions, instead of
delaying the parse to context creation.
This missed the fact that for GnuTLS the tlsMinVersion was
also updating the sslOptions string rather than the
parsedOptions variable later in the configuration process.
Call parseOptions() to reset the parsedOptions value whenever
the sslOptions string is altered.
Amish [Wed, 22 May 2019 05:42:06 +0000 (05:42 +0000)]
Bug 4912: same-name notes being appended instead of replaced (#393)
When an auth helper replies with clt_conn_tag=foo (or any KV pair)
and later replaces it with clt_conn_tag=bar, the new pair was getting
appended instead of replacing the old one, i.e. notes would contain
two KV pairs:
clt_conn_tag=foo
clt_conn_tag=bar
This fixes it and also fixes:
https://bugs.squid-cache.org/show_bug.cgi?id=4912
Amos Jeffries [Tue, 21 May 2019 21:31:31 +0000 (21:31 +0000)]
Replace uudecode with libnettle base64 decoder (#406)
Since RFC 7235 updated the HTTP Authentication credentials token
to the token68 character set, it is possible that characters
uudecode cannot cope with are received.
The Nettle decoder better handles characters which are valid
token68 but are not to be used in a Basic auth token.
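A sketch of Nettle-based decoding (API names per nettle/base64.h;
exact parameter types vary slightly across Nettle releases), where
invalid or truncated input is rejected rather than misdecoded:

    // Illustrative sketch, not Squid's exact decoding code.
    #include <nettle/base64.h>
    #include <cstring>

    // dst must hold at least BASE64_DECODE_LENGTH(strlen(src)) bytes
    static bool decodeToken(const char *src, uint8_t *dst, size_t *dstLen)
    {
        struct base64_decode_ctx ctx;
        base64_decode_init(&ctx);
        if (!base64_decode_update(&ctx, dstLen, dst, strlen(src), src))
            return false; // character outside the base64 alphabet
        return base64_decode_final(&ctx) != 0; // reject truncated groups
    }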
Matthieu Herrb [Mon, 13 May 2019 08:45:57 +0000 (08:45 +0000)]
Ignore ECONNABORTED in accept(2) (#404)
An aborted connection attempt does not affect the listening socket's
ability to accept other connections. If the error is not ignored,
Squid gets stuck after logging an oldAccept error.
This bug fix was motivated by accept(2) changes in OpenBSD v6.5 that
resulted in new ECONNABORTED errors under regular deployment conditions:
https://github.com/openbsd/src/commit/c255b5a
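The fix pattern in miniature (illustrative code, not Squid's comm
layer verbatim):

    // Illustrative accept loop; Squid's comm layer is event-driven.
    #include <cerrno>
    #include <sys/socket.h>

    int acceptOne(const int listenFd)
    {
        for (;;) {
            const int fd = accept(listenFd, nullptr, nullptr);
            if (fd >= 0)
                return fd;
            if (errno == ECONNABORTED)
                continue; // only that one connection died; keep listening
            return -1;    // other errors are real problems
        }
    }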
Amos Jeffries [Sat, 4 May 2019 06:53:45 +0000 (06:53 +0000)]
Bug 4942: --with-filedescriptors does not do anything (#395)
SQUID_CHECK_MAXFD has been unconditionally overwriting any
user-defined limit with an auto-detected limit from the build
machine. The change causing this was an incomplete fix for
bug 3970 added to v3.3 and later releases.
Fixing that problem has two notable side effects:
* the user-defined value now has the FD property checks applied
to it (multiple of 64, too-few, etc). This means warnings will
start to appear in build logs for a number of custom
configurations. We should expect an increase in questions
about that.
* builds which have previously been passing in outrageous values
will actually start to use those values as the SQUID_MAXFD
limit. This may result in surprising memory consumption or
performance issues. Hopefully the warnings and new messages
displaying auto-detected limit separate from the value used
will reduce the admin surprise, but may not.
This PR also includes cleanup of the autoconf syntax within the
SQUID_CHECK_MAXFD macro and moves the ./configure warnings about
possible issues into that check macro.
RFC 8586: Loop Detection in Content Delivery Networks (#396)
Support the CDN-Loop header as a source for loop detection.
This header is only relevant to CDN installations, for which the
surrogate_id configuration directive specifies the authoritative
ID.
Squid does not add this header by default, preferring to use the
Via mechanism instead. Administrators may add it to requests
with the request_header_add directive or remove with
request_header_remove.
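An illustrative configuration for a CDN node (the ID value is
hypothetical; directive syntax per squid.conf documentation):

    # hypothetical CDN node identity and loop-detection header
    surrogate_id cdn-node-1.example.com
    request_header_add CDN-Loop cdn-node-1.example.com all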
When MIT or Heimdal Kerberos libraries are installed at a custom
location there may be several krb5-config scripts installed. The one
located at the user-provided path (if any) needs to take preference.
From the current code's point of view, this is a false positive
because the private constructor which lacked initialization was only
used by other constructors that do initialize httpStatus.
Alex Rousskov [Mon, 1 Apr 2019 16:58:36 +0000 (16:58 +0000)]
Bug 4796: comm.cc !isOpen(conn->fd) assertion when rotating logs (#382)
Squid abandoned cache.log file descriptor maintenance, calling fd_open()
but then closing the descriptor without fd_close(). If the original file
descriptor value was reused for another purpose, Squid would either hit
the reported assertion or log a "Closing open FD" WARNING (depending on
the new purpose). The cache.log file descriptor is closed on log
rotation and reconfiguration events.
This short-term solution avoids assertions and WARNINGs but sacrifices
cache.log listing in fd_table and, hence, mgr:filedescriptors reports.
The correct long-term solution is to properly maintain descriptor meta
information across cache.log closures/openings, but doing so from inside
of debug.cc is technically difficult due to linking boundaries/problems.
Support forwarding of bumped, reencrypted HTTPS requests through a
cache_peer using a standard HTTP CONNECT tunnel.
The new Http::Tunneler class establishes HTTP CONNECT tunnels through
forward proxies. It is used by TunnelStateData and FwdState classes.
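For orientation, the tunnel setup looks roughly like this on the wire
(an illustrative exchange, not literal Squid output):

    CONNECT origin.example.com:443 HTTP/1.1
    Host: origin.example.com:443

    HTTP/1.1 200 Connection Established

    (TLS bytes for origin.example.com now flow through the cache_peer)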
Just like before these changes, when a cache_peer replies to CONNECT
with an error response, only the HTTP response headers are forwarded to
the client, and then the connection is closed.
There is no support for triggering client authentication when a
cache_peer configuration instructs the bumping Squid to relay
authentication info contained in the client CONNECT request. The
bumping Squid still responds with HTTP 200 (Connection Established) to
the client CONNECT request (to see the TLS client handshake) _before_
selecting the cache_peer.
HTTPS cache_peers are not yet supported primarily because Squid cannot
do TLS-in-TLS with a single fde::ssl state; SslBump and the HTTPS proxy
client/tunneling code would need a dedicated TLS connection each.
Alex Rousskov [Tue, 19 Mar 2019 20:30:55 +0000 (20:30 +0000)]
When using OpenSSL, trust intermediate CAs from trusted stores (#383)
According to [1], GnuTLS and NSS do that by default.
Use case: Chrome and Mozilla no longer trust Symantec root CAs _but_
still trust several whitelisted Symantec intermediate CAs [2]. Squid
built with OpenSSL cannot do that without X509_V_FLAG_PARTIAL_CHAIN.
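The flag is applied to an OpenSSL verification store; a minimal sketch
(the real change lives in Squid's TLS code):

    // Minimal sketch; Squid applies this inside its security layer.
    #include <openssl/x509_vfy.h>

    void trustIntermediateAnchors(X509_STORE *store)
    {
        // allow trusted intermediate CAs to terminate a chain
        X509_STORE_set_flags(store, X509_V_FLAG_PARTIAL_CHAIN);
    }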
Fix incremental parsing of chunked quoted extensions (#310)
Before this change, incremental parsing of quoted chunked extensions
was broken for two reasons:
* Http::One::Parser::skipLineTerminator() unexpectedly threw after
partially received quoted chunk extension value.
* When Http::One::Tokenizer was unable to parse a quoted extension,
it incorrectly restored the input buffer to the beginning of the
extension value (instead of the extension itself), thus making
further incremental parsing iterations impossible.
IMO, the reason for this problem was that Http::One::Tokenizer::qdText()
could not distinguish two cases (returning false in both):
* the end of the quoted string not yet reached
* an input error, e.g., wrong/unexpected character
A possible approach could be to improve Http::One::Tokenizer, making
it aware of a "needs more data" state. However, to be acceptable,
these improvements should be done in the base Parser::Tokenizer
class instead. These changes seem to be non-trivial and could be
done separately and later.
Another approach, used here, is to simplify the complex and
error-prone chunked extensions parsing algorithm, fixing the
incremental parsing bugs while still parsing incrementally in almost
all cases. A performance regression could be expected only in the
relatively rare cases of partially received or malformed extensions.
Also:
* fixed parsing of partial use-original-body extension values
* do not treat an invalid use-original-body as an unknown extension
* optimization: parse use-original-body extension only in ICAP context
(i.e., where it is expected)
* improvement: added a new API to TeChunkedParser to specify known
chunked extensions list
Amos Jeffries [Thu, 7 Mar 2019 13:50:38 +0000 (13:50 +0000)]
Bug 4928: Cannot convert non-IPv4 to IPv4 (#379)
… when reaching client_ip_max_connections
The client_ip_max_connections limit is checked before the TCP
dst-IP is located for the newly received TCP connection. This
leaves Squid unable to fetch the NFMARK or similar details later
on (they do not exist for [::]).
Move client_ip_max_connections test later in the TCP accept
process to ensure dst-IP is known when the error is produced.
Support logformat %codes in error page templates (#365)
... using `@Squid{%code}` syntax which allows Squid to distinguish newly
supported logformat %codes like `%http::<rm` from classic single-letter
error page %codes like `%M` (that are never enclosed in `@Squid{}`).
This improvement gives error pages access to more information about the
failed transaction and paves the way towards unifying the two partially
duplicated %code interfaces (and underlying code).
Some single-letter error page %codes have similar logformat %codes. For
example, error page `%i` will often be replaced with the same characters
as `@Squid{%>a}`. However, admins should not rely on expansion
equivalence because most codes may be rendered differently under some
conditions, and the two rendering algorithms are not kept in sync. For
example, error page `%i` does not account for log_uses_indirect_client.
Only a few letters remain unused by the error page %codes. Most
(possibly all) new codes should be added to the logformat instead,
within an appropriate error_page:: namespace if needed.
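For illustration, an error page template fragment mixing the two kinds
of %codes might look like this (the surrounding HTML is hypothetical;
the codes are those discussed above):

    <p>Request method (classic code): %M</p>
    <p>Client address (classic code): %i</p>
    <p>Client address (logformat code): @Squid{%>a}</p>
    <p>Another logformat code: @Squid{%http::<rm}</p>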
Side effects of updating error page parsing and assembly code include:
* Leave bad %codes in deny_info URLs instead of removing them. This
helps in triage and is consistent with how bad %codes are already
handled in error pages.
* Report bad deny_info %codes at startup and reconfiguration. Startup
errors should eventually become fatal, just like logformat errors are.
* Report bad error page %codes at startup and reconfiguration (except
for `%;` sequences found in all existing error pages templates).
* Fixed parsing of non-default error-details.txt that did not end with
an empty line.
* Fixed unterminated c-string printing when reporting error-details.txt
parsing problems.
This assertion could be triggered by various swapout failures for
ufs/aufs/diskd cache_dir entries.
The bug was caused by the 4310f8b change related to the storeSwapOutFileClosed()
method. Before that change, swapout failures resulted in
StoreEntry::swap_status set to SWAPOUT_NONE, preventing
another/asserting iteration of StoreEntry::swapOut().
This fix adds SWAPOUT_FAILED swap status for marking swapout failures
(instead of reviving and abusing SWAPOUT_NONE), making the code more
reliable.
Also removed storeSwapOutFileNotify() implementation. We should not
waste time on maintaining an unused method that now contains conflicting
assertions: swappingOut() and !hasDisk().
Alex Rousskov [Sun, 24 Feb 2019 03:28:47 +0000 (03:28 +0000)]
Fixed squidclient authentication after 4b19fa9 (Bug 4843 pt2) (#373)
* squidclient -U sent Proxy-Authorization instead of Authorization.
Code duplication bites again.
* squidclient -U and -u could send random garbage after the correct
[Proxy-]Authorization value, as exposed by Coverity CID 1441999: Unused
value (UNUSED_VALUE). Coverity missed this deeper problem, but
analyzing its report led to the discovery of the two bugs fixed here.
Also reduced authentication-related code duplication.
Bug 4864: !Comm::MonitorsRead assertion in maybeReadVirginBody() (#351)
This assertion is probably triggered when Squid retries/reforwards
server-first or step2+ bumped connections (after they fail).
Retrying/reforwarding such pinned connections is wrong because the
corresponding client-to-Squid TLS connection was negotiated based on the
now-failed Squid-to-server TLS connection, and there is no mechanism to
ensure that the new Squid-to-server TLS connection will have exactly the
same properties. Squid should forward the error to the client instead.
Also fixed peer selection code that could return more than one PINNED
path with only the first path having the destination of the actual
pinned connection. To reduce the chances of similar future bugs, and to
polish the code, peer selection now returns a nil path to indicate a
PINNED decision. After all, the selection code decides to use a pinned
connection (whatever it is) rather than a specific pinned _destination_.
Added %proxy_protocol::>h logformat code for logging received PROXY
protocol TLV values and passing them to adaptation services and helpers.
For simplicity, this implementation treats TLV values as text and reuses
the existing HTTP header field logging interfaces. Support for binary
TLV value logging can be added later if needed.
Also support logging of metadata extracted from the fixed portion of a
PROXY protocol header and referenced as "pseudo headers": :command,
:version, :src_addr, :dst_addr, :src_port, and :dst_port.
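Illustrative squid.conf usage (the logformat name and log path are
hypothetical; the %code syntax follows the header-field logging
convention described above):

    # hypothetical format logging PROXY protocol pseudo headers
    logformat pproto %proxy_protocol::>h{:src_addr} %proxy_protocol::>h{:src_port}
    access_log daemon:/var/log/squid/pproto.log pproto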
Also fixed several bugs in the old PROXY protocol v1/v2 parsing code:
* Buffer overrun in ConnStateData::parseProxy2p0(). The available
local SBuf could be less than sizeof(pax), resulting in copying
excessive bytes with SBuf::rawContent().
* Incorrect processing of malformed v1 headers lacking CRLF in
ConnStateData::parseProxy1p0(), which waited for more data
even if the buffer size already exceeded the maximum v1 header
size.
* Incorrect processing of partially buffered v1 headers, when only
the initial (magic) part of the header has been received. The old
code resulted in an error instead of waiting for more data in this
case.
* Incorrect v1 header framing for proto=UNKNOWN headers.
The code used only LF while the protocol requires CRLF for it.
* Do not use address information from v2 header if the header
proto is UNSPEC.
* Incorrect v1 magic expectations (a 6-character `PROXY ` instead of the
proper 5-character `PROXY` sequence) leading to mishandling of
non-PROXY input. For example, receiving `PROXY\r\n` would result in
"need more data" outcome (and long timeout) instead of an immediate
error.
Also eliminated code duplication in HttpHeader::getByNameListMember()
and HttpHeader::getListMember(), moving the common part into a
separate getListMember() method.
Also eliminated code duplication and probably fixed a bug with applying
the client_netmask parameter in the ConnStateData constructor. The mask
should not be applied to localhost and IPv6 addresses, but it was.
Also parse PROXY protocol v2 LOCAL addresses (for logging purposes). In
compliance with the PROXY protocol specs, LOCAL addresses are still
unused for connection routing.
Restored the natural order of the following two notifications:
* BodyConsumer::noteMoreBodyDataAvailable() and
* BodyConsumer::noteBodyProductionEnded() or noteBodyProducerAborted().
Commit b599471 unintentionally reordered those two notifications. Client
kids (and possibly other BodyConsumers) relied on the natural order to
end their work. If an HttpStateData job was done with the Squid-to-peer
connection and only waiting for the last adapted body bytes, it would
get stuck and leak many objects. This use case was not tested during b599471 work.
Reuse reserved Negotiate and NTLM helpers after an idle timeout (#59)
Squid can be killed or maimed by enough clients that start multi-step
connection authentication but never follow up with the second HTTP
request while keeping their HTTP connection open. Affected helpers
remain in the "reserved" state and cannot be reused for other clients.
Observed helper exhaustion has happened without any malicious intent.
To address the problem, we add a helper reservation timeout. Timed out
reserved helpers may be reused by new clients/connections. To minimize
problems with slow-to-resume-authentication clients, timed out reserved
helpers are not reused until there are no unreserved running helpers
left. The reservations are tracked using unique integer IDs.
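A toy model of the reservation rule (illustrative; Squid's helper
management is more involved):

    // Toy model of helper reservation with timeout-based reuse.
    #include <ctime>
    #include <vector>

    struct Helper {
        bool reserved = false;
        std::time_t reservedSince = 0;
        unsigned reservationId = 0; // unique ID ties requests to owners
    };

    Helper *reserveHelper(std::vector<Helper> &helpers, std::time_t now,
                          std::time_t timeout, unsigned &nextId)
    {
        Helper *expired = nullptr;
        for (auto &h : helpers) {
            if (!h.reserved) {           // prefer an unreserved helper
                h.reserved = true;
                h.reservedSince = now;
                h.reservationId = ++nextId;
                return &h;
            }
            if (now - h.reservedSince >= timeout)
                expired = &h;            // candidate for reuse
        }
        if (expired) {                   // reuse only as a last resort
            expired->reservedSince = now;
            expired->reservationId = ++nextId;
        }
        return expired;
    }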
Also fixed Squid crashes caused by unexpected helper termination -- the
raw UserRequest::authserver pointer could point to a deleted helper.
mahdi1001 [Sun, 10 Feb 2019 08:08:55 +0000 (08:08 +0000)]
Add support for buffer-size= to UDP logging (#359)
Allow admin control of buffering for log outputs written to UDP
receivers using the buffer-size= parameter.
buffer-size=0byte disables buffering and sends UDP packets
immediately regardless of line size.
When non-0 values are used, lines shorter than the buffer may be
delayed and aggregated into a later UDP packet.
Log lines larger than the buffer size will be sent immediately
and may trigger delivery of previously buffered content to
retain log order (at time of send, not UDP arrival).
To avoid truncation problems known with common recipients
the buffer size remains capped at 1400 bytes.
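Illustrative usage (receiver address is hypothetical; option syntax
per the access_log documentation):

    # send each line immediately in its own UDP packet
    access_log udp://203.0.113.9:514 logformat=squid buffer-size=0bytes
    # aggregate small lines, within the 1400-byte cap
    access_log udp://203.0.113.9:514 logformat=squid buffer-size=1400bytes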
Amos Jeffries [Fri, 25 Jan 2019 16:21:38 +0000 (16:21 +0000)]
Cleanup and simplify unit tests (#336)
Polish unit tests to reduce their dependency lists, in line with
the updated test documentation requirements.
Some further cleanup of unit tests documentation based on
experience pruning existing unit tests.
* Only the test logic files actually need to be distributed by
a test. All files being tested are supposed to be distributed
from elsewhere and the test should rely on that to prevent
future issues like the base/RefCount.h bug mentioned below.
* Start deprecating TESTSOURCES, which causes more
trouble than benefit by pulling in too many needless
dependencies, e.g. the globals.cc objects and symbols. The tools it
used to provide are largely superseded by the stub mechanisms.
Add missing stub files necessary for pruning dependencies. Also
fix some bugs in existing stub files.
Fix base/RefCount.h distribution. This file was not included in
the base/libbase.la dependency list and was only distributed
indirectly thanks to the unit tests' existence, which in some builds
could cause it to be missing from generated minimal tarballs.
Systems which have been partially 'IPv6 disabled' may allow
IPv6 sockets to be opened and used while lacking the IPv6 loopback
address.
Implement the outstanding TODO to detect such failures and
disable IPv6 support properly within Squid when they are found.
This should fix bug 4915 (auth_param helper startup) and similar
external_acl_type helper issues. For security reasons such helpers are
not permitted to use the machine's default IP address, which is
globally accessible.
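The probe behind this detection can be sketched as follows
(illustrative; the real check lives in Squid's IP layer):

    // Rough sketch: IPv6 counts as usable only if a socket can
    // actually bind the IPv6 loopback address.
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstring>

    bool ipv6LoopbackWorks()
    {
        const int fd = socket(AF_INET6, SOCK_STREAM, 0);
        if (fd < 0)
            return false;                // no IPv6 sockets at all
        sockaddr_in6 sa;
        std::memset(&sa, 0, sizeof(sa)); // POD struct: memset is fine here
        sa.sin6_family = AF_INET6;
        sa.sin6_addr = in6addr_loopback; // ::1, any ephemeral port
        const bool ok = bind(fd, reinterpret_cast<sockaddr *>(&sa), sizeof(sa)) == 0;
        close(fd);
        return ok;
    }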
Fail Rock swapout if the disk dropped some of the write requests (#352)
Detecting dropped writes earlier is more than a TODO item: If the last
entry write was successful, the whole entry becomes available for hits
immediately. IpcIoFile::checkTimeouts(), which runs every 7 seconds
(IpcIoFile::Timeout), would eventually notify Rock about the timeout,
allowing Rock to release the failed entry, but that notification may
be too late.
The precise outcome of hitting an entry with a missing on-disk slice is
unknown (because the bug was detected by temporary hit validation code
that turned such hits into misses), but SWAPFAIL is the best we could
hope for.
Initialize StoreMapSlice when reserving a new cache slot (#350)
Rock sets the StoreMapSlice::next field when sending a slice to disk. To
avoid writing slice A twice, Rock allocates a new slice B to prime
A.next right before writing A. Scheduling A's writing and, sometimes,
lack of data to fill B create a gap between B's allocation and B's
writing (which sets B.next). During that time, A.next points to B, but
B.next is untouched.
If writing slice A or swapout in general fails, the chain of failed
entry slices (now containing both A and B) is freed. If untouched B.next
contains garbage, then freeChainAt() adds "random" slices after B to the
free slice pool. Subsequent swapouts use those incorrectly freed slices,
effectively overwriting portions of random cache entries, corrupting the
cache.
How did B.next get dirty in the first place? freeChainAt() cleans the
slices it frees, but Rock also makes direct noteFreeMapSlice() calls.
Shared memory cache may have avoided this corruption because it makes no
such calls.
Ipc::StoreMap::prepFreeSlice() now clears allocated slices. Long-term,
we may be able to move free slice management into StoreMap to automate
this cleanup.
Also simplified and polished slot allocation code a little, removing the
Rock::IoState::reserveSlotForWriting() middleman. This change also
improves the symmetry between Rock and shared memory cache code.
Before this fix, Squid sometimes logged the following error:
BUG: Worker I/O pop queue for ... overflow: ...
The bug could result in truncated hit responses, reduced hit ratio, and,
combined with buggy lost I/O handling code (GitHub PR #352), even cache
corruption.
The bug could be triggered by the following sequence of events:
* Disker dequeues one I/O request from the worker push queue.
* Worker pushes more I/O requests to that disker, reaching 1024 requests
in its push queue (QueueCapacity or just "N" below). No overflow here!
* Worker process is suspended (or is just too busy to pop I/O results).
* Disker satisfies all 1+N requests, adding each to the worker pop queue
and overflows that queue when adding the last processed request.
This fix limits worker push so that the sum of all pending requests
never exceeds (pop) queue capacity. This approach will continue to work
even if diskers are enhanced to dequeue multiple requests for seek
optimization and/or priority-based scheduling.
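The arithmetic of the fix in miniature (names are illustrative):

    // Toy model: a worker may push only while the total of results
    // not yet popped plus queued requests stays below capacity N.
    struct DiskerQueuePair {
        static const int QueueCapacity = 1024; // "N" in the text above
        int pending = 0; // pushed to the disker, result not yet popped

        bool canPush() const { return pending < QueueCapacity; }
        void notePush()      { ++pending; }
        void notePop()       { --pending; } // worker popped a result
    };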
Pop queue and push queue can still accommodate N requests each. The fix
appears to reduce supported disker "concurrency" levels from 2N down to
N pending I/O requests, reducing queue memory utilization. However, the
actual reduction is from N+1 to N: Since a worker pops all its satisfied
requests before queuing a new one, there could never be more than N+1
pending requests (N in the push queue and 1 worked on by the disker).
We left the BUG reporting and handling intact. There are no known bugs
in that code now. If the bug never surfaces again, it can be replaced
with code that translates low-level queue overflow exception into a
user-friendly TextException.
Alex Rousskov [Tue, 8 Jan 2019 15:14:18 +0000 (15:14 +0000)]
Fix BodyPipe/Sink memory leaks associated with auto-consumption (#348)
Auto-consumption happens (and could probably leak memory) in many cases,
but this leak was exposed by an eCAP service that blocked or replaced
virgin messages.
The BodySink job termination algorithm relies on body production
notifications. A BodySink job created after the body production had
ended can never stop and, hence, leaks (leaking the associated BodyPipe
object with it). Such a job is also useless: If production is over,
there is no need to free space for more body data! This change avoids
creating such leaking and useless jobs.
Amos Jeffries [Sun, 6 Jan 2019 13:22:19 +0000 (13:22 +0000)]
Bug 4875 pt2: GCC-8 compile errors with -O3 optimization (#288)
At -O3 optimization, GCC-8's static analyzer detects that
optimized code elides initialization on paths that do not
use the configuration variables, and warns about it.
Refactor the parseTimeLine() API to return the parsed
values so that there is no need to initialize anything prior
to parsing.
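The refactoring pattern in miniature (an illustrative function; the
real parseTimeLine() handles squid.conf time units and limits):

    // Illustrative sketch of the return-a-value refactoring.
    #include <chrono>
    #include <stdexcept>
    #include <string>

    // before: void parseTimeLine(time_t &result) could leave result
    // untouched on some paths, inviting warnings at call sites;
    // after: every path either returns a parsed value or throws
    std::chrono::seconds parseSeconds(const std::string &token)
    {
        if (token.empty())
            throw std::invalid_argument("missing time value");
        return std::chrono::seconds(std::stol(token));
    }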
Amish [Wed, 2 Jan 2019 11:51:45 +0000 (11:51 +0000)]
basic_ldap_auth: Return BH on internal errors; polished messages (#347)
Basic LDAP auth helper now returns BH instead of ERR in case of errors
other than LDAP_SECURITY_ERROR, per helper guidelines.
Motivation: I have a wrapper around Basic LDAP auth helper. If an LDAP
server is down, then the helper returns BH, and the wrapper uses
a fallback authentication source.
Also converted printf() to SEND_*() macros and reduced message
verbosity.
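Illustrative helper replies under the new behaviour (the message text
is hypothetical): a rejected login still yields ERR, while an
unreachable LDAP server now yields BH:

    ERR message="Login failed"
    BH message="Cannot contact LDAP server"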
Daris A Nevil [Mon, 17 Dec 2018 17:38:01 +0000 (17:38 +0000)]
Add %ssl::<cert macro for logging server X.509 certificate (#316)
We have chosen the PEM format instead of, for example, raw DER format
because most programs exchange certificates using PEM format and because
logging raw binary values would be unusual for Squid logformat.
The current support is limited to SslBump step3 which parses and stores
the peer certificate. TODO: Support all from-Squid TLS connections.
Fixed forward_max_tries documentation and implementation (#277)
Before 1c8f25b, FwdState::n_tries counted the total number of forwarding
attempts, including pinned and persistent connection retries. Since that
revision, it started counting just those retries. What should n_tries
count? The counter is used to honor the forward_max_tries directive, but
that directive was documented to limit the number of _different_ paths
to try. Neither 1c8f25b~1 nor 1c8f25b code matched that documentation!
Continuing to count just pinned and persistent connection retries (as in 1c8f25b) would violate any reasonable forward_max_tries intent and admin
expectations. There are two ways to fix this problem, synchronizing code
and documentation:
* Count just the attempts to use a different forwarding path, matching
forward_max_tries documentation but not what Squid has ever done. This
approach makes it difficult for an admin to limit the total number of
forwarding attempts in environments where, say, the second attempt is
unlikely to succeed and will just incur wasteful delays (Squid bug
4788 report is probably about one of such use cases). Also,
implementing this approach may be more difficult because it requires
adding a new counter for retries and, for some interpretations of
"different", even a container of previously visited paths.
* Count all forwarding attempts (as before 1c8f25b) and adjust
forward_max_tries documentation to match this historical behavior.
This approach does not have known unique flaws.
Also fixed FwdState::n_tries off-by-one comparison bug discussed during
Squid bug 4788 triage.
Also fixed admin concern behind Squid bug 4788 "forward_max_tries 1 does
not prevent some retries": While the old forward_max_tries documentation
actually excluded pconn retries, technically invalidating the bug
report, the admin now has a knob to limit those retries.
Amos Jeffries [Fri, 16 Nov 2018 06:04:45 +0000 (06:04 +0000)]
SourceLayout: Cleanup unit tests (#324)
Polish up and add some structure to the unit tests in
src/Makefile.am. Documenting current best practice for new test
additions and shuffling of lines around to make the tests easily
managed in future, consistent with that practice.
There are two functional changes amongst the non-logic changes:
1) testUfs and testRock src/fs/ tests are updated to use
automake conditionals to wrap the entire set of automake lists,
not just the part adding the test binary to those built. This
reduces the compiler work creating dependency information for
them in builds where they are not going to be used.
2) testRefCount has its missing LDFLAGS list added.
Also, shuffling of most SOURCES lists into nodist_*_SOURCES to
comply with the documented practice is left to followup work
pruning those lists down.