Amos Jeffries [Wed, 19 May 2021 11:37:34 +0000 (11:37 +0000)]
Bug 4832: '!schemeAccess' assertion on exit (#819)
* Add test to detect the bug and prevent regressions
* delete the access list if it has not been cleared by the time the
Auth::Config instance is destroyed. This is what the
free_AuthSchemes() code does once the function call
nesting and debugs are pruned away.
Joshua Rogers [Mon, 17 May 2021 03:43:48 +0000 (03:43 +0000)]
Simplified Http::Message::parse() (#814)
size_t hdr_len cannot be negative, and sanityCheckStartLine() already
rejects zero values. We now use (and clearly document) the latter fact.
Also replaced a level-1 warning about "Too large" headers with a level-3
debugging message because huge headers are not a Squid problem, and the
problem should already be visible in access.log records.
Alex Rousskov [Sat, 8 May 2021 07:51:32 +0000 (07:51 +0000)]
API to separate Config.x users/parsers from Config.y details (#811)
The new API avoids several kinds of unwanted code dependencies:
* Config.x users do not need to know the type of any Config.y component.
Forward declarations of Config.y types are sufficient.
* Config.x parsers do not need to know about Config.y parsers. Config.x
parsers can also be hidden from Config.x users.
* The centralized parsing code does not need to contain Config.x parsing
code (for any x) and does not maintain "generic" parser registration.
The compiler/linker do that for us _without_ losing C++ type safety.
Correct API implementation may also help separate the active Config
object from the being-parsed configuration, to eventually support safe
"hot" reconfiguration.
This change does not convert any existing Config fields and does not
imply a policy that existing fields must be converted when touched.
The API was tested on a typical new Config.x component (to be merged
separately). It is likely to evolve, especially when support for
multi-directive components is added.
Activated by using TYPE names with two colons (e.g., TYPE: Security::X).
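The linker-driven registration idea above can be sketched with a file-scope registrar object; all names here (Registry, RegisterParser, ParseDirective, securityXParser) are illustrative stand-ins, not Squid's actual API:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch: each Config.x component registers its own parser
// via a file-scope static object; the central code never names the type.
using ConfigParser = std::function<void(const std::string &value)>;

static std::map<std::string, ConfigParser> &Registry()
{
    static std::map<std::string, ConfigParser> registry; // built on first use
    return registry;
}

struct RegisterParser {
    RegisterParser(const std::string &type, ConfigParser parser)
    {
        Registry().emplace(type, std::move(parser));
    }
};

// In a Config.x translation unit: registration happens at load time,
// without the central parsing code knowing about this component.
static std::string parsedValue;
static RegisterParser securityXParser("Security::X",
    [](const std::string &v) { parsedValue = v; });

// The central code dispatches purely by TYPE name.
bool ParseDirective(const std::string &type, const std::string &value)
{
    const auto it = Registry().find(type);
    if (it == Registry().end())
        return false; // unknown TYPE
    it->second(value);
    return true;
}
```

The compiler/linker wire the component in; adding a new Config.x component requires no edits to the central dispatcher.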
Amos Jeffries [Thu, 6 May 2021 08:38:29 +0000 (08:38 +0000)]
Fix GCC -Werror=range-loop-construct (#808)
This warning detects unnecessary object copying in C++ range loops, with
a focus on large objects and copies triggered by implicit type
conversions. Included in -Wall.
error: loop variable ... creates a copy from type ...
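A minimal illustration of the warned-about pattern and its fix (hypothetical helper names):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// GCC's -Wrange-loop-construct flags loops like this one, where every
// iteration copies a (potentially large) element.
std::size_t TotalLengthCopying(const std::vector<std::string> &names)
{
    std::size_t total = 0;
    for (const std::string name : names) // copies each element; GCC warns
        total += name.size();
    return total;
}

// The fix: bind the loop variable by (const) reference so no copy is made.
std::size_t TotalLength(const std::vector<std::string> &names)
{
    std::size_t total = 0;
    for (const auto &name : names) // no copy, no warning
        total += name.size();
    return total;
}
```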
Alex Rousskov [Mon, 3 May 2021 21:40:14 +0000 (21:40 +0000)]
Stop processing a response if the Store entry is gone (#806)
HttpStateData::processReply() is usually called synchronously, after
checking the Store entry status, but there are other call chains.
StoreEntry::isAccepting() adds a STORE_PENDING check to the ENTRY_ABORTED
check. An accepting entry is required for writing into Store. In theory,
an entry may stop accepting new writes (without being aborted) if
FwdState or another entry co-owner writes an error response due to a
timeout or some other problem that happens while we are waiting for an
I/O callback or some such.
N.B. HTTP and FTP code cannot use StoreEntry::isAccepting() directly
because their network readers may not be the ones writing into Store --
the content may go through the adaptation layer first and that layer
might complete the store entry before the entire peer response is
received. For example, imagine an adaptation service that wants to log
the whole response containing a virus but also replaces that (large)
response with a small error reply.
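The isAccepting() idea can be modeled with a toy entry class (not Squid's actual StoreEntry; names and fields are illustrative):

```cpp
// Minimal model of the isAccepting() rule: writing into Store requires an
// entry that is still pending and has not been aborted.
enum StoreStatus { STORE_PENDING, STORE_OK };

struct EntryModel {
    StoreStatus status = STORE_PENDING;
    bool aborted = false;

    // An entry accepts new writes only while pending and not aborted.
    bool isAccepting() const { return status == STORE_PENDING && !aborted; }
};

bool TryWrite(EntryModel &e)
{
    if (!e.isAccepting())
        return false; // a co-owner completed or aborted the entry
    // ... append response bytes to the entry here ...
    return true;
}
```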
Use already parsed request-target URL in cache manager and
update CacheManager to Tokenizer-based URL parsing
Removed the use of sscanf() and regex string processing, which have
proven problematic on many levels, most particularly with regard
to tolerance of normally harmless garbage syntax in received
URLs.
Support for generic URI schemes is added, possibly resolving some
issues reported with ftp:// URLs and manager access via ftp_port
sockets.
Truly generic support for /squid-internal-mgr/ path prefix is
added, fixing some user confusion about its use on cache_object:
scheme URLs.
TODO: support for single-name parameters and URL #fragments
is left to future updates, as is refactoring the QueryParams
data storage to avoid SBuf data copying.
Traffic parsing errors should be reported at level 2 (or below) because
Squid admins can usually do nothing about them and a noisy cache.log
hides important problems that they can and should do something about.
TODO: Detail this and similar parsing errors for %err_detail logging.
Alex Rousskov [Tue, 16 Mar 2021 06:23:02 +0000 (06:23 +0000)]
Replace defective Must2(c, "not c") API (#785)
... with Must3(c, "c", Here())
* We should not make a Must*() API different from the standard C++
static_assert() API. It is just too confusing, especially since the
two calls are very closely related.
* We should not tell Must*() writers to specify one condition (i.e. what
must happen) but then describe the opposite condition (i.e. what went
wrong). When the text describes the opposite condition, it is easy for
a human writing or thinking about the text to type the wrong
condition: Must2(got < need, "incomplete message") looks correct!
* We should not keep the same macro name when changing the meaning of
the second parameter. Fortunately, adding a third argument to the
macro fits nicely into how modern Squid code should pass the source
code location.
Must3() does not support SBuf descriptions. I tried to support them, but
doing so without duplicating non-trivial code is too difficult, and
since the current code does not actually use them, wasteful.
If future (complex) code needs SBuf conditions, then it should probably
use an explicit throw statement instead, especially since Must*() is,
like assert(), supposed to quickly check source code sanity rather than
validate unsafe input. A compile-time condition description and the
source code location is usually enough for sanity checks, while proper
reporting of invalid input usually also requires parsing context info.
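A hedged re-creation of the Must3() shape described above (the real Squid macro, Here(), and SourceLocation differ in detail; the trailing underscores mark these names as illustrative):

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Illustrative source-location carrier, standing in for Squid's Here().
struct SourceLocation_ {
    const char *file;
    int line;
};
#define Here_() SourceLocation_{__FILE__, __LINE__}

// The key property matching static_assert(): the reported text is the
// *condition itself*, so writers cannot describe the wrong condition.
inline void MustCheck_(bool condition, const char *conditionText,
                       const SourceLocation_ &where)
{
    if (condition)
        return;
    std::ostringstream msg;
    msg << "check failed: " << conditionText
        << " at " << where.file << ':' << where.line;
    throw std::runtime_error(msg.str());
}

#define Must3_(condition) MustCheck_((condition), #condition, Here_())
```

With this shape, `Must3_(got < need)` reports the literal text "got < need" plus the source location, so there is no second, hand-written description to get wrong.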
Alex Rousskov [Tue, 16 Mar 2021 01:09:34 +0000 (01:09 +0000)]
Delete a few unnecessary checks for standard C++ features (#779)
* nullptr ought to be provided by any C++ compiler we support
* HAVE_NULLPTR_T was unused while std::nullptr_t is used w/o it
* HAVE_UNIQUE_PTR was almost unused while std::unique_ptr is used w/o it
Alex Rousskov [Mon, 15 Mar 2021 14:05:05 +0000 (14:05 +0000)]
Fix HttpHeaderStats definition to include hoErrorDetail (#787)
... when Squid is built --with-openssl.
We were "lucky" that the memory area after HttpHeaderStats was not,
apparently, used for anything important enough when HttpHeader::parse(),
indirectly called from errorInitialize() during initial Squid
configuration, was writing to it.
Detected by using AddressSanitizer.
The bug was created in commit 02259ff and cemented by commit 2673511.
Alex Rousskov [Thu, 4 Mar 2021 02:51:18 +0000 (02:51 +0000)]
Maintenance: Make boilerplate copyright years update optional (#782)
Running with `--check-and-update-copyright no` allows developers to
format sources and/or perform other essential source maintenance actions
without changing the copyright year in 2000+ files.
No change to default ./scripts/source-maintenance.sh operation.
Also improved/updated scripts/source-maintenance.sh usage reporting and
added support for --help/-h options.
Handling missing issuer certificates for TLSv1.3 (#766)
Prior to TLS v1.3 Squid could detect and fetch missing intermediate
server certificates by parsing TLS ServerHello. TLS v1.3 encrypts the
relevant part of the handshake, making such "prefetch" impossible.
Instead of looking for certificates in TLS ServerHello, Squid now waits
for the OpenSSL built-in certificate validation to fail with an
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY error. Squid-supplied
verify_callback function tells OpenSSL to ignore that error. Squid
SSL_connect()-calling code detects that the error was ignored and, if
possible, fetches the missing certificates and orchestrates certificate
chain validation outside the SSL_connect() sequence. If that validation
is successful, Squid continues with SSL_connect(). See comments inside
Security::PeerConnector::negotiate() for low-level details.
In some cases, OpenSSL is able to complete SSL_connect() with an ignored
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY error. If Squid validation
fails afterwards, the TLS connection is closed (before any payload is
exchanged). Ideally, the negotiation with an untrusted server should not
complete, but the BIO changes required to prevent such premature
completion are probably not worth the complexity, especially since
all of this is just a workaround.
The long-term solution is adding SSL_ERROR_WANT_RETRY_VERIFY to OpenSSL,
giving an application a chance to download the missing certificates
during SSL_connect() negotiations. We are assisting the OpenSSL team with
that change, but it will not be available until at least OpenSSL v3.0.
This description and changes are not specific to SslBump code paths.
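The ignore-then-revalidate control flow above can be modeled without OpenSSL; every type and value below is a stand-in for illustration, not the real OpenSSL API:

```cpp
// Stand-ins for OpenSSL notions; values are illustrative only.
enum VerifyError { VERIFY_OK, ERR_UNABLE_TO_GET_ISSUER_LOCALLY, ERR_OTHER };

struct SessionModel {
    VerifyError ignoredError = VERIFY_OK; // recorded by the callback
};

// Verify-callback model: ignore exactly the missing-issuer error so the
// handshake can proceed, recording it for the caller to handle later.
bool VerifyCallbackModel(SessionModel &s, VerifyError err)
{
    if (err == ERR_UNABLE_TO_GET_ISSUER_LOCALLY) {
        s.ignoredError = err;
        return true; // tell the library to continue anyway
    }
    return err == VERIFY_OK; // all other errors still fail the handshake
}

// Caller model: after the handshake step, detect the ignored error,
// fetch the missing certificates, and validate outside the handshake.
bool CompleteValidationModel(SessionModel &s, bool fetchedChainValidates)
{
    if (s.ignoredError != ERR_UNABLE_TO_GET_ISSUER_LOCALLY)
        return true; // nothing was ignored; normal path
    // ... fetch missing intermediates here, then re-run validation ...
    return fetchedChainValidates; // close the connection on failure
}
```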
Alex Rousskov [Fri, 19 Feb 2021 16:14:37 +0000 (16:14 +0000)]
Bug 3556: "FD ... is not an open socket" for accept() problems (#777)
Many things could go wrong after Squid successfully accept(2)ed a socket
and before that socket was registered with Comm. During that window, the
socket is stored in a refcounted Connection object. When that object was
auto-destroyed on the error handling path, its attempt to auto-close the
socket would trigger level-1 BUG 3556 errors because the socket was not
yet opened from Comm point of view. This change eliminates that "already
in Connection but not yet in Comm" window.
The fixed BUG 3556 errors stalled affected clients and leaked their FDs.
TODO: Keeping that window closed should not require human effort, but
achieving that goal probably requires significant changes. We are
investigating.
uhliarik [Thu, 18 Feb 2021 01:08:40 +0000 (01:08 +0000)]
Fix build on Fedora Rawhide (#772)
* add SYSTEMD_LIBS to all binaries using client_side.cc, fixing linking
* add `<limits>` to all sources using std::numeric_limits, fixing gcc-11
builds
Protect SMP kids from unsolicited IPC answers (#771)
When an SMP kid restarts, it recreates its IPC socket to flush any old
messages, but it might still receive IPC messages intended for its
previous OS process because the sender may write the message _after_
kid's IPC socket is flushed. Squid IPC communication is connectionless,
so the sender cannot easily detect the reopening of the recipient socket
to prevent this race condition. Some notifications are desirable across
kid restarts, so properly switching to connection-oriented IPC
communication would only complicate things further.
This change protects kids from, for lack of a better term, an
"unsolicited" answer: An answer to a question the recipient did not ask.
When allowed to reach regular message-handling code, unsolicited answers
result in misleading diagnostics, may trigger assertions, and might even
result in a bad recipient state. For example, a kid might think it has
been successfully registered with Coordinator while its registration
attempt was actually dropped due to a Coordinator restart.
Our protection targets one specific use case: Responses that were born
(or became) "unsolicited" due to a recipient restart. Other problematic
cases may not require any protection, may require a very different
protection mechanism (e.g. cryptography), may deal with requests rather
than responses, or even cannot be reliably detected. For example:
* messages sent by a malicious attacker
* requests sent by a misconfigured Squid instance
* requests sent by a previous Squid instance
* messages sent by a no-longer-running kid process
* messages sent by buggy Squid code
----
Also marked a few out-of-scope problems/improvements, including a bug:
Improved handling of Coordinator and Strand exceptions exposed and
partially addressed an old problem: When configured to listen on an
inaccessible port, Squid used to kill the Coordinator job, resulting in
subsequent kid registration timeouts. Squid now correctly keeps the
Coordinator job running, logging a detailed report:
Still, the affected kids do not report the port opening problem and
continue running, listening on the problem-free ports (if any).
Depending on the exception timing, similar behavior was possible before
these changes. The correct action here is to send the port opening error
to the kid process without throwing the putFd() exception, but we should
decide how Squid should handle such inaccessible configured ports first.
Detail certificate validation errors during TLS handshake (#770)
Fix certificate validation error handling in Security::Connect/Accept().
The existing validation details were not propagated/copied to IoResult,
requiring the caller to extract them via ssl_ex_index_ssl_error_detail.
The clunky approach even required a special "ErrorDetail generations"
API to figure out which error detail is "primary": the one received in
IoResult or the just extracted one. That API is removed now.
This change is used by the upcoming improvements that fetch missing TLS
v1.3 server certificates, but it also has an immediate positive effect
on the existing reporting of the client certificate validation errors.
Currently, only a general TLS error is reported for those cases because
Security::Accept() code forgot to check ssl_ex_index_ssl_error_detail.
Alex Rousskov [Mon, 8 Feb 2021 22:49:42 +0000 (22:49 +0000)]
Allow 1xx and 204 responses with Transfer-Encoding headers (#769)
HTTP servers MUST NOT send those header fields in those responses, but
some do, possibly because they compute the same basic headers for all
responses, regardless of the status code. Item 1 in RFC 7230 Section
3.3.3 is very clear about message framing in these cases. We have been
ignoring Content-Length under the same conditions since at least 2018.
We should be consistent and apply the same logic to Transfer-Encoding.
I also polished the Transfer-Encoding handling comment for clarity sake.
No functionality changes other than minor debugging improvements.
* replaced identical (except for the message kind value) HereIamMessage
and StrandSearchResponse classes with StrandMessage
* reduced code duplication with a new StrandMessage::NotifyCoordinator()
* split TypedMsgHdr::type() into unchecked rawType() and checked type()
* renamed and documented several Ipc::MessageType enum values
The above code improvements will help with adding more IPC messages.
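The rawType()/type() split can be sketched with a model class (not the real TypedMsgHdr; enum values are illustrative):

```cpp
#include <stdexcept>

// Illustrative message-type enum; real Ipc::MessageType values differ.
enum MessageTypeModel { mtNone = 0, mtRegisterStrand, mtStrandReady, mtEnd };

class TypedMsgHdrModel {
public:
    explicit TypedMsgHdrModel(int raw): raw_(raw) {}

    // unchecked accessor: for diagnostics and validity tests
    int rawType() const { return raw_; }

    // checked accessor: callers can rely on getting a valid enum value
    MessageTypeModel type() const {
        if (raw_ <= mtNone || raw_ >= mtEnd)
            throw std::runtime_error("unexpected IPC message type");
        return static_cast<MessageTypeModel>(raw_);
    }

private:
    int raw_; // as received; may be garbage from a stale or buggy sender
};
```

Separating the two keeps "is this message sane?" questions (rawType) distinct from "what kind of message is this?" questions (type), so invalid wire data cannot leak into switch statements as a bogus enum value.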
Detail client closures of CONNECT tunnels during TLS handshake (#691)
... and improve detailing of other errors.
Many admins cannot triage TLS client failures, and even Squid developers
often cannot diagnose TLS problems without requiring detailed debugging
logs of failing transactions. The problem is especially bad for busy
proxies where debugging individual transactions is often impractical.
We enhance existing error detailing code so that more information is
logged via the existing %err_code/%err_detail logformat codes.
Propagating low-level error details required significant enhancements
and refactoring. We also built initial scaffolding for better error
detailing by GnuTLS-driven code and documented several key
error-handling APIs, exposing a few out-of-scope problems.
Also checkLogging() once, after consuming unparsed input attributed to a
transaction: Due to fake CONNECT requests, from-client read errors, and
possibly other complications, we may have a transaction that did not
consume every input byte available to it. That transaction is still
responsible for reporting those unparsed bytes (e.g., by logging the
number of bytes read on a connection and the number of parsed bytes).
Also fixed passing wrong (errno vs. size) or stale (requested vs. read)
I/O size to connFinishedWithConn(); now shouldCloseOnEof(). The bad
value was "correct" (i.e. zero) in many cases, obscuring the bug.
Bug 5057: "Generated response lacks status code" (#752)
... for responses carrying a status-code with the numerical value of 0.
Upon receiving a response with such a status-code (e.g., 0 or 000),
Squid reported a (misleading) level-1 BUG message and sent a 500
"Internal Error" response to the client.
This BUG message exposed a different/bigger problem: Squid parser
declared such a response valid, while other Squid code could easily
misinterpret its 0 status-code value as scNone which has very special
internal meaning.
A similar problem existed for received responses with status-codes that
HTTP WG considers semantically invalid (0xx, 6xx, and higher values).
Various range-based status-code checks could misinterpret such a
received status-code as being cachable, as indicating a control message,
or as having special for-internal-use values scInvalidHeader and
scHeaderTooLarge.
Unfortunately, HTTP/1 does not explicitly define how a response with a
status-code having an invalid response class (e.g., 000 or 600)
should be handled, but there may be an HTTP WG consensus that such
status-codes are semantically invalid:
Since leaking semantically invalid response status-codes into Squid code
is dangerous for response retries, routing, caching, etc. logic, we now
reject such responses at response parsing time.
Also fixed logging of the (last) received status-code (%<Hs) when we
cannot parse the response status-line or headers: We now store the
received status-code (if we can parse it) in peer_reply_status, even if
it is too short or has a wrong response class. Prior to this change,
%<Hs was either not logged at all or, during retries, recorded a stale
value from the previous successfully parsed response.
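The parse-time rejection above can be sketched as a small helper (illustrative code, not Squid's actual parser):

```cpp
#include <string>

// Sketch: accept exactly three digits and require a 1xx-5xx response
// class. Returns -1 for unparseable codes and for codes whose response
// class is semantically invalid (000, 0xx, 6xx and higher).
int ParseStatusCode(const std::string &field)
{
    if (field.size() != 3)
        return -1; // status-code is exactly three digits
    int code = 0;
    for (const auto ch : field) {
        if (ch < '0' || ch > '9')
            return -1;
        code = code * 10 + (ch - '0');
    }
    const int responseClass = code / 100;
    if (responseClass < 1 || responseClass > 5)
        return -1; // reject before 0 can be misread as scNone, etc.
    return code;
}
```

Rejecting at parse time keeps special internal values like scNone and scInvalidHeader from ever colliding with received wire data.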
Alex Rousskov [Tue, 10 Nov 2020 21:42:18 +0000 (21:42 +0000)]
Transactions exceeding client_lifetime are logged as _ABORTED (#748)
... rather than timed out (_TIMEOUT).
To record the right cause of death, we have to call terminateAll()
rather than setting logType.err.timedout directly. Otherwise, when
ConnStateData::swanSong() calls terminateAll(0), it overwrites our
direct setting.
Since commit 5ef5e5c, a socket write timeout triggers two things:
* reporting of a write error to the socket writer (as designed/expected)
* reporting of a socket read timeout to the socket reader (unexpected).
The exact outcome probably depends on the transaction state, but one
known manifestation of this bug is the following level-1 message in
cache.log, combined with an access.log record showing a
much-shorter-than-client_lifetime transaction response time.
WARNING: Closing client connection due to lifetime timeout
Alex Rousskov [Wed, 4 Nov 2020 14:27:22 +0000 (14:27 +0000)]
Optimization: Avoid more SBuf::cow() reallocations (#744)
This optimization contains two parts:
1. A no-brainer part that allows SBuf to reuse MemBlob area previously
used by other SBufs sharing the same MemBlob. To see this change,
follow the "cowAvoided" code path modifications in SBuf::cow().
2. A part based on a rule of thumb: memmove is usually better than
malloc+memcpy. This part of the optimization (follow the "cowShift"
path) is only activated if somebody has consume()d from the buffer
earlier. The implementation is based on the heuristic that most
consuming callers follow the usual append-consume-append-... usage
pattern and want to preserve their buffer capacity.
MemBlob::consume() API mimics SBuf::consume() and std::string::erase(),
ignoring an excessive number of bytes rather than throwing an error.
Also detailed an old difference between an SBuf::cow() requiring just a
new buffer allocation and the one also requiring data copying.
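The "cowShift" heuristic can be illustrated with a toy buffer (not SBuf/MemBlob; the sketch assumes appended data always fits within the fixed capacity):

```cpp
#include <cstddef>
#include <cstring>
#include <string>

// Toy buffer illustrating the memmove-over-realloc rule of thumb: when
// bytes have been consume()d from the front, shift the tail left and
// reuse the existing capacity instead of allocating a fresh buffer.
class ShiftBuffer {
public:
    explicit ShiftBuffer(std::size_t capacity)
        : buf_(new char[capacity]), cap_(capacity) {}
    ~ShiftBuffer() { delete[] buf_; }
    ShiftBuffer(const ShiftBuffer &) = delete;
    ShiftBuffer &operator=(const ShiftBuffer &) = delete;

    // Sketch precondition: live bytes plus n never exceed capacity.
    void append(const char *data, std::size_t n) {
        if (off_ + size_ + n > cap_ && off_ > 0) {
            // reclaim the consumed prefix instead of reallocating
            std::memmove(buf_, buf_ + off_, size_);
            off_ = 0;
        }
        std::memcpy(buf_ + off_ + size_, data, n);
        size_ += n;
    }

    void consume(std::size_t n) {
        if (n > size_)
            n = size_; // like SBuf/std::string::erase: ignore the excess
        off_ += n;
        size_ -= n;
    }

    std::string view() const { return std::string(buf_ + off_, size_); }

private:
    char *buf_;
    std::size_t cap_;
    std::size_t off_ = 0;  // consumed prefix length
    std::size_t size_ = 0; // live bytes after off_
};
```

This matches the usual append-consume-append usage pattern: callers keep their capacity, and a memmove of the short live tail replaces a malloc+memcpy of the whole buffer.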
Alex Rousskov [Tue, 27 Oct 2020 23:33:39 +0000 (23:33 +0000)]
Do not send keep-alive or close in HTTP Upgrade requests (#732)
The presence of a Connection:keep-alive or Connection:close header in an
Upgrade request sent by Squid breaks some Google Voice services. FWIW,
these headers are not present in RFC 7230 Upgrade example, and popular
client requests do not send them.
* In most cases, Squid was sending Connection:keep-alive which is
redundant in Upgrade requests (because they are HTTP/1.1).
* In rare cases (e.g., server_persistent_connections=off), Squid was
sending Connection:close. Since we cannot send that header and remain
compatible with popular Upgrade-dependent services, we now do not send
it but treat the connection as non-persistent if the server does not
upgrade and expects us to continue reusing the connection.
Avoid null pointer dereferences when dynamic_cast'ing to SwapDir (#743)
Detected by Coverity. CID 1461158: Null pointer dereferences
(FORWARD_NULL).
When fixing these problems, we moved a few cache_dir iterations into
Disks.cc by applying the "Only Store::Disks can iterate cache_dirs"
design principle to the changed code. The Disks class is responsible for
maintaining (and will eventually encapsulate all the knowledge of) the
set of cache_dirs. Adjusted cache_cf.cc no longer depends on Disk.h.
Removed StoreClient::created() and improved PURGE code quality (#734)
The StoreClient::created() callback method was probably added in the
hope of making Store lookups asynchronous, but that functionality was
never implemented, leaving convoluted and dangerous synchronous created() call
chains behind. Moreover, properly supporting asynchronous Store lookups
in modern code is likely to require a very different API.
Removal of created() allowed us to greatly simplify PURGE processing,
eliminating some tricky state, such as `purging` and `lookingforstore`.
Also removed all Store::getPublic*() methods as no longer used.
Amos Jeffries [Sat, 17 Oct 2020 05:33:11 +0000 (05:33 +0000)]
Cleanup helper.h classes (#719)
Polish up the classes in helper.h to use proper constructors as the
caller API for creation + initialization. Use C++11 initialization for
default values.
* the lack of "virtual" in the helper class destructor declaration could
create memory leaks in future (poorly) refactored code; the gained
protection is probably worth adding the (currently unused) virtual
table to the helper class; resolves Coverity issue CID 1461141
(VIRTUAL_DTOR)
* missing Comm::Connection timers initialization on helper startup
* multiple initialization of values used for state accounting
* initialization of booleans to non-0 integer values
* initialization of char using numeric values
* missing mgr:filedescriptors description for helper sockets
Do not duplicate free disk slots on diskers restart (#731)
When a disker process starts, it scans the on-disk storage to populate
shared-memory indexes of cached entries and unused/free slots. This
process may take more than ten minutes for large caches. Squid workers
use these indexes as they are being populated by diskers - workers do
not wait for the slow index rebuild process to finish. Cached entries
can be retrieved and misses can be cached almost immediately.
The disker does not "lock" the free slots to itself because the disker
does not want to preclude workers from caching new entries while the
disker is scanning the rock storage to build a complete index of old
cached entries (and free slots). The disker knows that it shares the
disk slot index with workers and is careful to populate the indexes
without confusing workers.
However, if the disker process is restarted for any reason (e.g., a
crash or kid registration timeout), the disker starts scanning its
on-disk storage from the beginning, adding to the indexes that already
contain some entries (added by the first disker incarnation and adjusted
by workers). An attempt to index the same cached object twice may remove
that object. Such a removal would be wasteful but not dangerous.
Indexing a free/unused slot twice can be disastrous:
* If Squid is lucky, the disker quickly hits an assertion (or a fatal
exception) when trying to add the already free slot to the free slot
collection, as long as no worker starts using the free slot between
additions (detailed in the next bullet).
* Unfortunately, there is also a good chance that a worker starts using
the free slot before the (restarted) disker adds it the second time.
In this case, the "double free" event cannot be detected. Both free
slot copies (pointing to the same disk location) will eventually be
used by a worker to cache new objects. In the worst case, it may lead
to completely wrong cached response content being served to an
unsuspecting user. The risk is partially mitigated by the fact that
disker crashes/restarts are rare.
Now, if a disker did not finish indexing before being restarted, it
resumes from the next db slot, thus avoiding indexing the same slot
twice. In other words, the disker forgets/ignores all the slots scanned
prior to the restart. Squid logs "Resuming indexing cache_dir..."
instead of the usual "Loading cache_dir..." to mark these (hopefully
rare) occurrences.
Also simplified code that delays post-indexing revalidation of cache
entries (i.e. store_dirs_rebuilding hacks). We touched that code because
the updated rock code will now refuse to reindex the already indexed
cache_dir. That decision relies on shared memory info and should not be
made where the old code was fiddling with store_dirs_rebuilding level.
After several attempts resulted in subtle bugs, we decided to simplify
that hack to reduce the risks of mismanaging store_dirs_rebuilding.
Adjusted old level-1 "Store rebuilding is ... complete" messages to
report more details (especially useful when rebuilding kid crashes). The
code now also reports some of the "unknown rebuild goal" UFS cases
better, but more work is needed in that area.
Also updated several rebuild-related counters to use int64_t instead of
int. Those changes stemmed from the need to add a new counter
(StoreRebuildData::validations), and we did not want to add an int
counter that will sooner or later overflow (especially when counting db
slots (across all cache_dirs) rather than just cache entries (from one
cache_dir)). That new counter interacted with several others, so we
had to update them as well. Long-term, all old StoreRebuildData counters
and the cache_dir code feeding them should be updated/revised.
Amos Jeffries [Fri, 28 Aug 2020 18:47:04 +0000 (18:47 +0000)]
Replaced X-Cache and X-Cache-Lookup headers with Cache-Status (#705)
See https://tools.ietf.org/html/draft-ietf-httpbis-cache-header
Also switched to identifying Squid instance in the header using
unique_hostname(), fixing a bug affecting proxies that share the same
visible_hostname in a cluster. The Cache-Status field values will now
point to a specific proxy in such a cluster.
The new initial lookup reporting (formerly X-Cache-Lookup)
implementation has several differences:
* The reporting of the lookup is now unconditional, just like the
Cache-Status reporting itself. The dropped X-Cache-Lookup required
--enable-cache-digests. We removed that precondition because
Cache-Status already relays quite a bit of information, and sharing
lookup details is unlikely to tilt the balance in most environments.
The original lookup reporting code was conditional because the feature
was added for Cache Digests debugging, not because we wanted to hide
the info. Folks later discovered that the info is useful in many other
cases. If we do want to hide this information, it should be done
directly rather than via a (no) Cache Digests side effect.
* The initial lookup classification can no longer be overwritten by
additional Store lookups. Official code allowed such rewrites due to
implementation bugs. If we only report one lookup, the first lookup
classification is the most valuable one and almost eliminates doubts
about the cache state at the request time. Ideally, we should also
exclude various internal Store lookup checks that may hide a matching
cache entry, but that exclusion is difficult to implement with the
current needlessly asynchronous create() Store API.
* Lookup reporting now covers more use cases. The official code probably
missed or mishandled various PURGE/DELETE use cases and did not
distinguish absence of Store lookup because of CC:no-cache from other
lookup bypass cases (e.g., errors). More work is probably needed to
cover every lookup-avoiding case.
Amos Jeffries [Sun, 16 Aug 2020 02:21:22 +0000 (02:21 +0000)]
Improve Transfer-Encoding handling (#702)
Reject messages containing Transfer-Encoding header with coding other
than chunked or identity. Squid does not support other codings.
For simplicity's and security's sake, also reject messages where
Transfer-Encoding contains unnecessarily complex values that are
technically equivalent to "chunked" or "identity" (e.g., ",,chunked" or
"identity, chunked").
RFC 7230 formally deprecated and removed identity coding, but it is
still used by some agents.
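The strict value check can be sketched as follows (illustrative helper, not Squid's actual parser; it assumes codings are compared case-insensitively and that anything other than a lone "chunked" or "identity" counts as unnecessarily complex):

```cpp
#include <cctype>
#include <string>

enum TeCoding { teBad, teChunked, teIdentity };

// Sketch: the whole Transfer-Encoding value must be exactly "chunked" or
// exactly "identity" (ignoring surrounding whitespace and letter case).
// Equivalent-but-complex values such as ",,chunked" or
// "identity, chunked" are rejected along with unsupported codings.
TeCoding ParseTransferEncoding(std::string value)
{
    const auto first = value.find_first_not_of(" \t");
    if (first == std::string::npos)
        return teBad; // empty or all-whitespace value
    const auto last = value.find_last_not_of(" \t");
    value = value.substr(first, last - first + 1);

    for (auto &ch : value) // codings are case-insensitive tokens
        ch = static_cast<char>(std::tolower(static_cast<unsigned char>(ch)));

    if (value == "chunked")
        return teChunked;
    if (value == "identity")
        return teIdentity;
    return teBad; // unsupported coding or unnecessarily complex value
}
```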
Amos Jeffries [Fri, 14 Aug 2020 15:05:31 +0000 (15:05 +0000)]
Forbid obs-fold and bare CR whitespace in framing header fields (#701)
Header folding has been used for various attacks on HTTP proxies and
servers. RFC 7230 prohibits sending obs-fold (in any header field) and
allows the recipient to reject messages with folded headers. To reduce
the attack space, Squid now rejects messages with folded Content-Length
and Transfer-Encoding header field values. TODO: Follow RFC 7230 status
code recommendations when rejecting.
Bare CR is a CR character that is not followed by a LF character.
Similar to folding, bare CRs have been used for various attacks. HTTP
does not allow sending bare CRs in Content-Length and Transfer-Encoding
header field values. Squid now rejects messages with bare CR characters
in those framing-related field values.
When rejecting, Squid informs the admin with a level-1 WARNING such as
obs-fold seen in framing-sensitive Content-Length: ...
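The two rejection rules can be sketched with hypothetical helpers operating on an already-extracted field value (not Squid's actual parser):

```cpp
#include <cstddef>
#include <string>

// obs-fold: a CRLF followed by SP or HTAB inside the field value.
bool HasObsFold(const std::string &v)
{
    for (std::size_t i = 0; i + 2 < v.size(); ++i) {
        if (v[i] == '\r' && v[i + 1] == '\n' &&
            (v[i + 2] == ' ' || v[i + 2] == '\t'))
            return true;
    }
    return false;
}

// bare CR: a CR character not immediately followed by LF.
bool HasBareCr(const std::string &v)
{
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (v[i] == '\r' && (i + 1 >= v.size() || v[i + 1] != '\n'))
            return true;
    }
    return false;
}

// Framing-sensitive values (Content-Length, Transfer-Encoding) must be
// free of both.
bool AcceptFramingFieldValue(const std::string &v)
{
    return !HasObsFold(v) && !HasBareCr(v);
}
```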
Ignore SMP queue responses made stale by worker restarts (#711)
When a worker restarts (for any reason), the disker-to-worker queue may
contain disker responses to I/O requests sent by the previous
incarnation of the restarted worker process (the "previous generation"
responses). Since the current response:request mapping mechanism relies
on a 32-bit integer counter, and a worker process always starts counting
from 0, there is a small chance that the restarted worker may see a
previous generation response that accidentally matches the current
generation request ID.
For writing transactions, accepting a previous generation response may
mean unlocking a cache entry too soon, making not yet written slots
available to other workers that might read wrong content. For reading
transactions, accepting a previous generation response may mean
immediately serving wrong response content (content that has already
been overwritten on disk with the information that the restarted worker
is now waiting for).
To avoid these problems, each disk I/O request now stores the worker
process ID. Workers ignore responses to requests originated by a
different/mismatching worker ID.
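The response filtering can be sketched with model types (illustrative, not Squid's actual IPC structures):

```cpp
// Model of a disker-to-worker response: the 32-bit request counter alone
// is ambiguous across worker restarts, so the originating worker's
// process ID is carried alongside it.
struct DiskResponseModel {
    unsigned requestId; // 32-bit counter; restarts from 0 after a restart
    int workerPid;      // the process that originated the request
};

// A worker accepts a response only if both the process ID and the
// request ID match; stale previous-generation answers are ignored.
bool ResponseIsForUs(const DiskResponseModel &r,
                     unsigned expectedId, int ourPid)
{
    if (r.workerPid != ourPid)
        return false; // a previous worker incarnation asked this question
    return r.requestId == expectedId;
}
```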
Alex Rousskov [Mon, 10 Aug 2020 18:46:02 +0000 (18:46 +0000)]
Do not send keep-alive in 101 (Switching Protocols) responses (#709)
... because it breaks clients using websocket_client[1] library and is
redundant in our HTTP/1.1 control messages anyway.
I suspect that at least some buggy clients are confused by a multi-value
Connection field rather than the redundant keep-alive signal itself, but
let's try to follow RFC 7230 Upgrade example more closely this time and
send no keep-alive at all.
trapexit [Sun, 9 Aug 2020 06:14:51 +0000 (06:14 +0000)]
Add http_port sslflags=CONDITIONAL_AUTH (#510)
Enabling this flag removes SSL_VERIFY_FAIL_IF_NO_PEER_CERT from the
SSL_CTX_set_verify() mode flags, meaning client certificate verification
occurs only if the client supplies a certificate.
Amos Jeffries [Thu, 6 Aug 2020 22:49:34 +0000 (22:49 +0000)]
Fix: cannot stat tests/STUB.h No such file or directory (#707)
Since 2b5ebbe (PR #670), we have been seeing "random" build failures.
The tests/Stub.am generated by source-maintenance.sh has not included
the tests/STUB.h file for explicit distribution. The source file was
built and included only when seen as a dependency of the files using it.
When stubs are copied for use by the extra binaries from tools/
directory, there is a secondary side effect from "make -j":
* When the -j concurrency is small, tests/STUB.h gets copied during the
first cycle, and parallel builds compiling other copied files succeed.
* When the -j concurrency is large, tests/STUB.h may be deferred to a
later copy cycle, and the first actual compile task needing it fails
with `cannot stat 'src/tests/STUB.h': No such file or directory`.
Add tests/STUB.h to src/Makefile.am EXTRA_DIST to restore the previous
distribution behavior (when STUB_SOURCE contained it explicitly).
Update the pinger source paths that were omitted in 2b5ebbe.
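The fix described above amounts to one Makefile.am addition, sketched here:

```
# src/Makefile.am: distribute the stub header explicitly so that it
# exists before any parallel compile job in tools/ needs it
EXTRA_DIST += \
        tests/STUB.h
```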
Amos Jeffries [Tue, 4 Aug 2020 04:34:32 +0000 (04:34 +0000)]
Enforce token characters for field-name (#700)
RFC 7230 defines field-name as a token. Request splitting and cache
poisoning attacks have used non-token characters to fool broken HTTP
agents behind or in front of Squid for years. This change should
significantly reduce that abuse.
If we discover exceptional situations that need special treatment, the
relaxed parser can allow them on a case-by-case basis (while being extra
careful about framing-related header fields), just like we already
tolerate some header whitespace (e.g., between the response header
field-name and colon).
Do not stall while debugging a scan of an empty store_table (#699)
Non-SMP Squid and each SMP kid allocate a store_table hash. With large
caches, some allocated store_table may have millions of buckets.
Recently we discovered that it is almost impossible to debug SMP Squid
with a large but mostly empty disk cache because the disker registration
times out while store_table is being built -- the disker process is
essentially blocked on a very long debugging loop.
The code suspends the loop every 500 entries (to take care of tasks like
kid registration), but there are no pauses when scanning millions of
empty hash buckets where every bucket prints two debug lines.
Squid no longer reports empty store_table buckets explicitly. When
dealing with large caches, the debugged process may still be blocked for
a few hundred milliseconds (instead of many seconds) while scanning the
entire (mostly empty) store_table. Optimizing that should be done as a
part of the complex "store search" API refactoring.
peerDigestHandleReply() was missing a premature EOF check. The existing
peerDigestFetchedEnough() cannot detect EOF because it does not have
access to receivedData.length used to indicate the EOF condition. We did
not adjust peerDigestFetchedEnough() because it is abused to check both
post-I/O state and the state after each digest processing step. The
latter invocations lack access to receivedData.length and should not
really bother with EOF anyway.
Revamped LruMap implementation, adding support for value-specific TTLs,
value-specific memory consumption estimates, memory pooling for value
wrappers, and polishing code. Renamed the result to ClpMap.
Fixed memory leak of generated fake SSL certificates in Squids
configured with dynamic_cert_mem_cache_size=0 (the default is 4MB).
Controversially changed sslcrtvalidator_program cache and TTL defaults:
* After we started accounting for key lengths in validator cache
entries, the old 2048-byte default size became not just ridiculously
small but insufficient to cache the vast majority of entries, because
an entry key contains a PEM certificate sent to the validator (among
other validation parameters). A PEM certificate usually consumes at
least a few kilobytes. Leaving the old 2KB default would essentially
mean that we are disabling caching (by default).
* And if we fix one default, it makes sense to fix the other as well.
The old 60 second TTL default felt too conservative. Most certificate
invalidations last days, not seconds. We picked one hour because that
value is still fairly conservative, is used as the default by several
other Squid caches, and should allow for a decent cache hit ratio.
Dropped support for the undocumented "sslcrtvalidator_program ttl=-1"
hack that meant unlimited TTL. Large TTL values should be used instead.
TODO: The option parser should be revamped to reject too-large values.
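The adjusted directive might look like this in squid.conf; the helper path and cache size are made up for illustration, and ttl is in seconds:

```
# A cache sized for multi-kilobyte PEM-bearing keys and a one-hour TTL
# (the old ttl=-1 "unlimited" hack is no longer accepted)
sslcrtvalidator_program cache=262144 ttl=3600 /usr/local/libexec/ssl_crt_validator
```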
Report SMP store queues state (mgr:store_queues) (#690)
The state of two Store queues is reported: Transients notification queue
(a.k.a. Collapsed Forwarding queue) and SMP I/O request queue (used for
communication with rock diskers). Each worker and disker kid reports its
view of that kid's incoming and outgoing queues state, including a small
sample (up to 7 items) of queued messages. These kid-specific reports
are YAML-compliant.
With the exception of a field labeled "other", each queue report is
self-consistent despite accessing data shared among kids -- the reported
combination of values did exist at the snapshot collection time.
The special "other" field represents a message counter maintained by the
other side of the queue. In most cases, that field will be close to its
correct value but, because the other process updates this non-atomic
counter concurrently, it may be virtually anything. We think it is
better to
report (without officially naming) this field because it may be useful
in triage regardless of these caveats. Making the counter atomic just
for these occasional reports is not worth the performance overheads.
Also fixed testStore linking problems (on some platforms) that were
caused by the wrong order of libdiskio and libipc dependencies.
Alex Rousskov [Mon, 6 Jul 2020 08:04:31 +0000 (08:04 +0000)]
Honor on_unsupported_protocol for intercepted https_port (#689)
... when Squid discovers a non-TLS client while parsing its handshake.
For https_port traffic, ConnStateData::switchToHttps() relies on start()
to set preservingClientData_ correctly, but shouldPreserveClientData(),
called by start() to set preservingClientData_, was not preserving TLS
bytes in the https_port start() context. Typical debug messages:
parseTlsHandshake: Got something other than TLS ... Cannot SslBump
tunnelOnError: may have forgotten client data; send error: 40
The failure leads to a cache miss for one of the requests. The miss
transaction will overwrite the already cached entry, possibly creating
more misses due to long write locks used by the disk cache. The problem
does not affect collapsed requests. It affects regular cache hits.
The failure happens because two requests are trying to create a
Transients entry at about the same time. Only one can succeed. With the
current ReadWriteLock-based design, some contention is unavoidable, but
the contention window was rather large/long:
* The critical section essentially started when Controller::find() (in
each request worker) checked Transients and did not find a matching
entry there. That failed lookup put each request into "create a new
Transients entry" mode.
* A worker exited that critical section when openForWriting() in
Transients::addEntry() returned (with a write lock to the
ready-to-be-filled entry for one of the two requests and with a
failure for the other).
This change reduces the contention window to the bare minimum necessary
to create and fill a Transients entry inside openOrCreateForReading().
In the micro-tests that exposed this problem, the probability of a
failure is reduced from more than 70% to less than 3%. YMMV.
Also split addEntry() into two directional methods because, while they
are structured similarly, they now share almost no code.
This bug was caused by a4d576d. ResolvedPeers::retryPath() used
connection object pointers to find the original path location under the
incorrect assumption that the Connection pointers never change. That
assumption was wrong for persistent connections: ResolvedPeers stores a
half-baked Connection object rather than the corresponding open
Connection object that the unfortunate caller got from fwdPconnPool and
passed to ResolvedPeers::retryPath().
While working on a fix, we discovered a second reason why the pointer
comparison was a bad idea (and why a simpler fix of comparing addresses
rather than pointers was also a bad idea): The peer selection code
promises to deliver unique destinations, but we now suspect that, under
presumably rare circumstances, it may deliver duplicates. This broken
promise is now an out-of-scope XXX in PeerSelector::addSelection().
Squid now eliminates the search (and the previously documented concern
about its slow speed) by remembering Connection's position in the
destination array. The position is remembered in a smart Connection
pointer (compatible with Comm::ConnectionPointer) so that the rest of
the code (that does not really care about all this) is preserved largely
unchanged. Most changes are just renaming the pointer type.
Also freshened a FwdState call and comment made stale by 25b0ce4.
Also marked an out-of-scope problem in maybeGivePrimeItsChance() caller.
Alex Rousskov [Thu, 25 Jun 2020 09:39:45 +0000 (09:39 +0000)]
Avoid FwdState::serverConn while establishing a CONNECT tunnel (#681)
Master/v6 commit 25b0ce4 missed two cases where serverConnection() was
used prematurely: One case requires --enable-delay-pools, and the other
does not actually dereference the prematurely used serverConnection().
Both cases establish a CONNECT tunnel through a cache_peer.
DrDaveD [Tue, 23 Jun 2020 18:41:15 +0000 (18:41 +0000)]
Bug 5051: Some collapsed revalidation responses never expire (#652)
Since negative caching support was repaired in master commit 91870bf,
negatively cached responses have been found to last indefinitely when
cache revalidation happens.
New revalidation requests were collapsing on a negatively cached
response forever because handleIMS() logic does not validate response
freshness (still assuming that the reply came in response to the current
request even though that assumption could be false since collapsed
revalidation support was added in master commit 1a210de).
Clearing the ENTRY_REQUIRES_COLLAPSING flag when hitting the negatively
cached collapsed revalidation response for the first time works around
this "lack of freshness check" problem. The same solution existed in the
official code for positive responses. However, this solution is partial
and unreliable because there is no guarantee that the clearing code will
be reached (and reached while the cached response is still fresh).
Also added additional partial protections against collapsing on entries
abandoned by their writers, including idle hittingRequiresCollapsing()
StoreEntry objects.
Also fixed a tiny race condition missed in master commit d1d3b4d which
addressed a much bigger (and more frequent) problem. I am not aware of
any real-world cases where this race condition surfaced, but they would
probably manifest in unwarranted failures to collapse.
Amos Jeffries [Mon, 22 Jun 2020 18:01:29 +0000 (18:01 +0000)]
Fix "make check" with GCC-10 (#677)
The latest GCC-10 release on some OSes (Debian initially) no
longer locates files in "." by default. Fix the Makefile hack
pulling tests/ files into tools/ for stub linking.
Also, Squid #includes under src/ are supposed to always
use paths relative to src/. Fix wrong STUB.h references.
Preserve caller context in MemObject::abort callbacks (#671)
Store abused time-based event API to deliver abort notifications, losing
transaction context in the process. Further refactoring may crystallize
the currently vague/obscured "store writer" concept. Such refactoring is
outside this project scope but will benefit from these changes.
Also avoids creation of new source files, as (poorly) illustrated by the
single-byte src/tunnel.cc file created (after being removed) above.
Also improves diagnostics when dealing with misspelled file names:
$ ./scripts/formater.pl src/tunnel.ccsrc/FwdState.cc
Can not open input file: src/tunnel.ccsrc/FwdState.cc.astylebak
Can't open output file: src/tunnel.ccsrc/FwdState.cc
Reforward CONNECT after TLS handshake failure with peer (#489)
When Squid received a CONNECT request and attempted to establish a
secure connection to an SSL cache_peer, it did not try the next
available destination after a TLS negotiation failure.
Why retry such TLS negotiation failures? Peer B may not have the same
misconfiguration that led to the negotiation failure when talking to
peer A. Admins may improve fault tolerance by using pools of different
peers or pools of identical peers that are reconfigured one by one. And
FwdState already does this -- flags.connected_okay is not set until TLS
negotiation is successful.
SslBump: Support parsing GREASEd (and future) TLS handshakes (#663)
A peeking or staring Squid aborted TLS connections containing a GREASE
version in the supported_versions handshake extension (e.g., 0x3a3a).
Here is a sample cache.log error (debug_options ALL,1 83,2):
The same problem would apply to some other "unsupported" (and currently
non-existent) TLS versions (e.g., 0x0400). The growing popularity of
GREASE values exposed the bug (just like RFC 8701 was designed to do).
Squid now ignores all unsupported-by-Squid TLS versions in handshake
extensions. Squid still requires a supported TLS version for
framing-related handshake fields -- no changes there.
It is difficult to define "supported" in this context precisely because
a peeking Squid only observes the TLS handshake. Our handshake parser
may report the following versions: SSL v2.0, SSL v3.0, and TLS v1.x
(with x >= 0). This logic allows us to safely parse the handshake (no
framing errors) and continue to make version-based decisions like
disabling OpenSSL TLS v1.3 support when the agents are negotiating TLS
v1.3+ (master cd29a42). Also, access logs benefit from this "support"
going beyond TLS v1.2 even though Squid cannot bump most TLS v1.3+
handshakes.
Squid was and still is compliant with the "MUST NOT treat GREASE values
differently from any unknown value" requirement of RFC 8701.
Also added source code comments to mark other places where unsupported
(for some definition of "support") TLS values, including GREASE values
may appear.
Folks studying debugging logs (where GREASE values may appear) will
probably either recognize their easy-to-remember 0x?A?A pattern or
not really know/care about RFC 8701 and its special values.
Amos Jeffries [Wed, 10 Jun 2020 21:19:26 +0000 (21:19 +0000)]
Maintenance: Remove FIXME and \todo labels (#647)
Update old ad-hoc FIXME labels and experimental use of doxygen \todo
listings to use XXX or TODO labels for consistency's sake.
Approximate current definitions (to be formalized elsewhere):
* XXX - a problem, often a serious one
* TODO - a problem, often a minor one
* BUG - a problem that is worth informing the admin about
Existing TODO, XXX, and BUG labels may not fit the above definitions.