Alex Rousskov [Wed, 14 Jul 2021 21:45:01 +0000 (17:45 -0400)]
fixup: Do not rely on dieOnConnectionFailure() to throw (so much)
That method should probably be rewritten to avoid excessive throwing in
favor of mustStop() or a similar call, but that polishing is outside this
branch's scope. For now, let's just minimize future code changes by not
relying on dieOnConnectionFailure() unwinding the stack.
Alex Rousskov [Wed, 14 Jul 2021 21:22:50 +0000 (17:22 -0400)]
Simplified PeerPoolMgr by removing excessive conn closure monitoring
... and possibly even fixed branch PeerPoolMgr changes.
Since commit 25b0ce4, Security::BlindPeerConnector (or, to be more
precise, its Security::PeerConnector parent) owns the connection being
secured, including informing the requestor (here, PeerPoolMgr) about
connection closures. This removal should have been done in 25b0ce4.
Removing closure monitoring here and now allows us to easily fix
branch-added, callback-related closure handling that might have been
racy: a connection closure notification could have arrived _after_ we
cancelled encryptionWait for other reasons, triggering a Must()
violation.
Alex Rousskov [Wed, 14 Jul 2021 20:58:10 +0000 (16:58 -0400)]
Fix Tunneler handling of last-resort callback requirements
... and simplify the corresponding code.
Events like noteAbort() and callException() terminate the job but are
not bugs that should be reported by swanSong().
Not treating (rare) cancelled callbacks specially in swanSong() allows
us to avoid making cleanup code even more complex than it already is,
undoing branch changes that introduced closeQuietly().
With JobWait API used by the callers, we no longer need to rely on some
unrelated async job call noticing the callback cancellation. We are now
guaranteed to receive a noteAbort() call.
I also removed a call to the always-true parent doneAll() because no
other job, not even a parent's job, can force Downloader to keep running
-- if we have notified the requestor, we are done! Squid code is not yet
consistent in applying that principle.
Alex Rousskov [Wed, 14 Jul 2021 20:18:11 +0000 (16:18 -0400)]
fixup: Fix Downloader handling of last-resort callback requirements
... and simplify the corresponding code.
The previous branch commit missed a callException() case that stops the
job but should not be reported as a BUG related to callback maintenance
(it is up to the callException() implementation to decide how to report
the caught exception). There might be (or there will be) other, similar
cases where the job is stopped prematurely for some non-BUG reason
beyond swanSong()'s knowledge.
Alex Rousskov [Wed, 14 Jul 2021 19:07:06 +0000 (15:07 -0400)]
Require strict JobWait::start/finish pairing
I hesitated to remove the !fooWait asserts that many fooWait.start()
callers had at the beginning of their start-waiting methods. The earlier
we assert, the better, and those early asserts looked correct. However,
there is also a danger in asserting too early or for no good reason (as
the code gets refactored), and most of the code separating the current
asserts from the fooWait.start() calls is relatively simple job
configuration manipulation that is unlikely to significantly alter the
caller state or confuse developers during assertion triage. Keeping
these early asserts consistent across future changes would also be
difficult. Thus, I decided that it is better to remove those asserts
while relying on the JobWait asserts now built into the JobWait class.
I considered making JobWait::finish() forgiving, conditional on being in
waiting() state but succumbed to pressures to address concerns about
possible message corruption or incorrect routing decisions due to
mismatched/out-of-order callbacks.
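For illustration only, here is a minimal sketch of the strict pairing
now enforced (names and members below are assumptions, not the actual
JobWait code):

    // hypothetical sketch; not the actual JobWait implementation
    #include <cassert>

    class PairedWaitSketch {
    public:
        bool waiting() const { return waiting_; }

        void start() {          // must not already be waiting
            assert(!waiting());
            waiting_ = true;
        }

        void finish() {         // must be waiting; no forgiving mode
            assert(waiting());
            waiting_ = false;
        }

    private:
        bool waiting_ = false;
    };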
AFAICT, the only way a caller can decide whether to call fail() or
saveError() is to somehow know that fail() should only be called for
ERR_ZERO_SIZE_OBJECT errors. That knowledge is hidden inside branch
code. Moreover, that distinction may change in the future without
saveError() callers' knowledge. In other words, today, saveError()
callers are correct, but tomorrow some of them will become wrong because
of changes in cleanupOnFail() that they do not call. Developers
modifying cleanupOnFail() can easily miss the hidden fact that some of
the saveError() callers should be kept in sync with cleanupOnFail().
Besides saving CPU cycles on that trivial ERR_ZERO_SIZE_OBJECT check, I
saw no value in this dangerous split of fail() into fail() and
saveError(). The branch commit 0523b93 message did not help me discover
that value.
However, commit 0523b93 was based on code that already had
doFail()/cleanup() methods that seem unnecessary to a current diff
reader like me. Perhaps the prior existence of those methods implies
that I am missing some other reasons behind that split...
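To make the discussed split concrete, here is a rough, self-contained
sketch (hypothetical names and simplified logic; the real FwdState code
differs): fail() performs both steps, while saveError() callers silently
skip the cleanup step and must be kept in sync with it by hand.

    // hypothetical, simplified sketch of the fail()/saveError() split
    enum ErrType { ERR_ZERO_SIZE_OBJECT, ERR_OTHER };

    struct ForwarderSketch {
        ErrType savedError = ERR_OTHER;

        void saveError(const ErrType err) { savedError = err; } // step 1 only
        void cleanupOnFail() { /* close connections, etc. */ }  // step 2

        void fail(const ErrType err) {
            saveError(err);                   // every caller needs this step
            if (err == ERR_ZERO_SIZE_OBJECT)  // only fail() knows about this
                cleanupOnFail();              // rare, error-dependent cleanup
        }
    };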
Alex Rousskov [Thu, 17 Dec 2020 21:55:43 +0000 (16:55 -0500)]
fixup: Simplify and protect private PeerConnectionPointer::connection_
... from being cleared by the (difficult-to-track) 3rd-party code.
AFAICT, these 3rd-party resets were (ab)used to prevent the pending
connection from being closed after a successful hand-off to a helper
job. This commit essentially replaces
    try {
        ... step creation/scheduling code here ...
        conn = nullptr;
    } catch (...) {
        if (conn) // we still own the connection
            closePendingConnection(conn);
        else
            cancelStep();
    }
with the equivalent but much simpler
    try {
        ... step creation/scheduling code here ...
    } catch (...) {
        closePendingConnection(conn); // we still own the connection
    }
I think we can and should assert that the helper job is
* either scheduled to start (no exception is thrown in this case and the
connection ownership passes to the helper job)
* or is not scheduled to start (an exception is thrown and the
connection ownership stays with the caller).
The PR code assumed the existence of a third outcome: The helper job was
scheduled to start but the same code that successfully scheduled the job
to start threw an exception. I believe that if this outcome is possible,
then there is a bug somewhere outside of advanceDestination(). We should
fix any such bugs, but advanceDestination() should assume that there are
none because working around such bugs is too costly.
With this approach, we do not need to clear the conn pointer in the step
creation/scheduling code to signal a successful hand-off -- if we see
the exception, we know that we still own the connection and do not need
to cancel the step (which failed to start).
FWIW, PeerConnectionPointer::print(), at least, thinks that "we should
see no combinations of a nil connection and a set position". External
resets would invalidate that assumption. This is a secondary/minor
reason, of course.
That said, I am pretty sure that the (future) connection ownership
object will be modified on the path touched by this commit because the
connection ownership does change hands on that path. Thus, we may have
to adjust advanceDestination() and step profiles again. Hopefully, by
the time we are ready for that adjustment, the code will not have to
sneak into PeerConnectionPointer private areas to pass conn ownership.
Alex Rousskov [Tue, 8 Dec 2020 02:01:35 +0000 (21:01 -0500)]
fixup: Renamed JobCallbackPointer to JobWait, polished method names
... and synced the corresponding data member names.
The new class is not really a smart pointer (to a callback). With some
effort, it can be viewed as a smart pointer to a helper job, but that is
not really its primary purpose/role. The class is actually focusing on
the _wait_ (for our helper job calling our callback).
Renaming data members probably increases the diff, but the vast majority
of changes appear to be necessary for other reasons anyway.
Also swapped the JobWait::start() arguments for an arguably more natural
or "chronological" order.
Also found (two instances of) a branch bug in icap/Xaction.cc (now XXXs)
and sketched a TODO solution.
    x.h: // #include "helper.h"
    x.h: class X { ... JobCallbackPointer<Helper> helper; };
Most code #includes x.h without using or "seeing" X constructor or
destructor implementations. Such code does not really use the helper
data member and, hence, compiles fine without #including helper.h. Thus,
x.h should not #include helper.h.
This commit also removes some branch-added #includes (e.g., HttpRequest)
that were probably added in the hope of fixing some other build problems.
It is possible that some of these changes will need to be undone and/or
refactored to accommodate older compilers. I am using GCC v9.3.0.
Alex Rousskov [Sat, 5 Dec 2020 01:49:26 +0000 (20:49 -0500)]
fixup: Disabled dangerous copying of job callbacks
AsyncCalls are not designed to be copied, and JobCallbackPointer is,
essentially, an AsyncCall (among other things). Also, JobCallbackPointer
destructor semantics are difficult to properly combine with copying.
Essentially the same information is available via the callback. And if
we discover a need to provide the same info in some future use cases, it
is very likely to still default to the callback settings IMO.
This change also removes strange-looking (IMO) magic numbers from many
JobCallbackPointer field constructors. They were debug sections, but
they did not look like debug sections due to insufficient interpretation
context.
The method was only used for reporting JobCallbackPointer in debug logs.
For that purpose, we can use JobCallbackPointer::print() to provide both
the helper job ID and the callback ID. While either one can usually be
derived from the other, having access to both can save quite a bit of
log searching. The two can be reported in a compact way: `call24<-job6`.
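A minimal sketch of that compact report, with hypothetical member names
chosen for illustration:

    // hypothetical sketch of "call24<-job6" style reporting
    #include <ostream>

    struct WaitReportSketch {
        int callbackId = 0; // e.g., 24
        int jobId = 0;      // e.g., 6

        void print(std::ostream &os) const {
            if (callbackId && jobId)
                os << "call" << callbackId << "<-job" << jobId;
            else
                os << "[no wait]";
        }
    };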
HappyConnOpener::status() refactoring was necessary to use
JobCallbackPointer::print() but it is a step forward anyway. After we
redesign the status() API (which is an old TODO), new status() code
would use stream-based reporting like the refactored code does.
JobCallbackPointer is a general-purpose class. The proposed notifier API
is specific to one-per-pointer, parameterless job notifications. The
first aspect (i.e. limiting notifications to just one notification kind
per pointer instance) cannot be easily fixed by generalizing the API.
It also feels wrong to expect JobCallbackPointer users to pre-register
notification calls they might want to make in the future.
Finally, the motivation for introducing JobCallbackPointer::notifier
feels insufficient at best. There is nothing conceptually wrong with
exposing the helper job pointer to that job initiator -- initiators may
have legitimate reasons to want to message the helper job. The wrapped
noteCandidatesChange() is one such legitimate notification example.
Bug 5057: "Generated response lacks status code" (#752)
... for responses carrying status-code with numerical value of 0.
Upon receiving a response with such a status-code (e.g., 0 or 000),
Squid reported a (misleading) level-1 BUG message and sent a 500
"Internal Error" response to the client.
This BUG message exposed a different/bigger problem: Squid parser
declared such a response valid, while other Squid code could easily
misinterpret its 0 status-code value as scNone which has very special
internal meaning.
A similar problem existed for received responses with status-codes that
HTTP WG considers semantically invalid (0xx, 6xx, and higher values).
Various range-based status-code checks could misinterpret such a
received status-code as being cachable, as indicating a control message,
or as having special for-internal-use values scInvalidHeader and
scHeaderTooLarge.
Unfortunately, HTTP/1 does not explicitly define how a response with a
status-code having an invalid response class (e.g., 000 or 600) should
be handled, but there may be an HTTP WG consensus that such status-codes
are semantically invalid.
Since leaking semantically invalid response status-codes into Squid code
is dangerous for response retries, routing, caching, etc. logic, we now
reject such responses at response parsing time.
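For illustration, a hedged sketch of the kind of check implied above
(not the actual Squid parser code): only status-codes with a valid
response class (1xx through 5xx) survive parsing.

    // hypothetical sketch; the real parser integrates this differently
    static bool validResponseStatusCode(const int code) {
        // 0 would collide with the internal scNone meaning;
        // 0xx, 6xx, and larger values have no valid response class
        return code >= 100 && code <= 599;
    }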
Also fixed logging of the (last) received status-code (%<Hs) when we
cannot parse the response status-line or headers: We now store the
received status-code (if we can parse it) in peer_reply_status, even if
it is too short or has a wrong response class. Prior to this change,
%<Hs was either not logged at all or, during retries, recorded a stale
value from the previous successfully parsed response.
Alex Rousskov [Tue, 10 Nov 2020 21:42:18 +0000 (21:42 +0000)]
Transactions exceeding client_lifetime are logged as _ABORTED (#748)
... rather than timed out (_TIMEOUT).
To record the right cause of death, we have to call terminateAll()
rather than setting logType.err.timedout directly. Otherwise, when
ConnStateData::swanSong() calls terminateAll(0), it overwrites our
direct setting.
Since commit 5ef5e5c, a socket write timeout triggers two things:
* reporting of a write error to the socket writer (as designed/expected)
* reporting of a socket read timeout to the socket reader (unexpected).
The exact outcome probably depends on the transaction state, but one
known manifestation of this bug is the following level-1 message in
cache.log, combined with an access.log record showing a
much-shorter-than-client_lifetime transaction response time.
WARNING: Closing client connection due to lifetime timeout
Alex Rousskov [Wed, 4 Nov 2020 14:27:22 +0000 (14:27 +0000)]
Optimization: Avoid more SBuf::cow() reallocations (#744)
This optimization contains two parts:
1. A no-brainer part that allows SBuf to reuse MemBlob area previously
used by other SBufs sharing the same MemBlob. To see this change,
follow the "cowAvoided" code path modifications in SBuf::cow().
2. A part based on a rule of thumb: memmove is usually better than
malloc+memcpy. This part of the optimization (follow the "cowShift"
path) is only activated if somebody has consume()d from the buffer
earlier. The implementation is based on the heuristic that most
consuming callers follow the usual append-consume-append-... usage
pattern and want to preserve their buffer capacity.
MemBlob::consume() API mimics SBuf::consume() and std::string::erase(),
ignoring excessive number of bytes rather than throwing an error.
Also detailed an old difference between an SBuf::cow() requiring just a
new buffer allocation and the one also requiring data copying.
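A rough, self-contained illustration of the rule of thumb behind the
"cowShift" path (not the actual MemBlob/SBuf code): when earlier bytes
have been consume()d, shifting the remaining bytes to the front of the
existing allocation with memmove preserves the buffer capacity and
avoids a malloc+memcpy.

    // hypothetical sketch of the memmove-over-reallocation heuristic
    #include <cstddef>
    #include <cstring>

    struct BufferSketch {
        char *mem;              // allocated area
        std::size_t capacity;   // total allocated bytes
        std::size_t offset;     // bytes consume()d so far
        std::size_t size;       // bytes still stored, starting at mem + offset

        void shiftInsteadOfReallocating() {
            if (offset > 0) {
                std::memmove(mem, mem + offset, size); // reuse the same allocation
                offset = 0;                            // capacity is preserved
            }
        }
    };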
Alex Rousskov [Tue, 27 Oct 2020 23:33:39 +0000 (23:33 +0000)]
Do not send keep-alive or close in HTTP Upgrade requests (#732)
The presence of a Connection:keep-alive or Connection:close header in an
Upgrade request sent by Squid breaks some Google Voice services. FWIW,
these headers are not present in the RFC 7230 Upgrade example, and
popular client requests do not send them.
* In most cases, Squid was sending Connection:keep-alive which is
redundant in Upgrade requests (because they are HTTP/1.1).
* In rare cases (e.g., server_persistent_connections=off), Squid was
sending Connection:close. Since we cannot send that header and remain
compatible with popular Upgrade-dependent services, we now do not send
it but treat the connection as non-persistent if the server does not
upgrade and expects us to continue reusing the connection.
Avoid null pointer dereferences when dynamic_cast'ing to SwapDir (#743)
Detected by Coverity. CID 1461158: Null pointer dereferences
(FORWARD_NULL).
When fixing these problems, we moved a few cache_dir iterations into
Disks.cc by applying the "Only Store::Disks can iterate cache_dirs"
design principle to the changed code. The Disks class is responsible for
maintaining (and will eventually encapsulate all the knowledge of) the
set of cache_dirs. Adjusted cache_cf.cc no longer depends on Disk.h.
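A minimal, self-contained sketch of the null-checked cast pattern behind
the Coverity fix (types here are hypothetical stand-ins for Store and
SwapDir):

    // hypothetical illustration of a null-checked dynamic_cast
    struct StorageSketch { virtual ~StorageSketch() = default; };
    struct DiskStorageSketch: StorageSketch { /* cache_dir-like */ };

    static void useIfDisk(StorageSketch *s) {
        if (const auto disk = dynamic_cast<DiskStorageSketch *>(s)) {
            (void)disk; // dereference only inside this branch
        }
        // an unchecked dereference of the cast result here would be the bug
    }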
Removed StoreClient::created() and improved PURGE code quality (#734)
The StoreClient::created() callback method was probably added in hope to
make Store lookups asynchronous, but that functionality has not been
implemented, leaving convoluted and dangerous synchronous created() call
chains behind. Moreover, properly supporting asynchronous Store lookups
in modern code is likely to require a very different API.
Removal of created() allowed us to greatly simplify PURGE processing,
eliminating some tricky state, such as `purging` and `lookingforstore`.
Also removed all Store::getPublic*() methods as no longer used.
Amos Jeffries [Sat, 17 Oct 2020 05:33:11 +0000 (05:33 +0000)]
Cleanup helper.h classes (#719)
Polish up the classes in helper.h to use proper constructors as the
caller API for creation + initialization. Use C++11 initialization for
default values (a brief sketch follows the list below).
* no "virtual" in helper class destructor declaration could create
memory leaks in future (poorly) refactored code; the gained protection
is probably worth adding the (currently unused) virtual table to the
helper class; resolves Coverity issue CID 1461141 (VIRTUAL_DTOR)
* missing Comm::Connection timers initialization on helper startup
* multiple initialization of values used for state accounting
* initialization of booleans to non-0 integer values
* initialization of char using numeric values
* missing mgr:filedescriptors description for helper sockets
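A brief, hypothetical sketch of the constructor/initialization patterns
described above (not the actual helper.h classes):

    // hypothetical illustration; not the real helper classes
    class HelperSketch {
    public:
        explicit HelperSketch(const char *aName): name(aName) {} // creation+initialization API
        virtual ~HelperSketch() = default; // virtual dtor guards future subclassing

    private:
        const char *name = nullptr; // C++11 in-class initializers for defaults
        bool active = false;        // booleans initialized as booleans, not ints
        char status = '\0';         // char initialized with a character value
        int requestsSent = 0;
    };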
Do not duplicate free disk slots on diskers restart (#731)
When a disker process starts, it scans the on-disk storage to populate
shared-memory indexes of cached entries and unused/free slots. This
process may take more than ten minutes for large caches. Squid workers
use these indexes as they are being populated by diskers - workers do
not wait for the slow index rebuild process to finish. Cached entries
can be retrieved and misses can be cached almost immediately.
The disker does not "lock" the free slots to itself because the disker
does not want to preclude workers from caching new entries while the
disker is scanning the rock storage to build a complete index of old
cached entries (and free slots). The disker knows that it shares the
disk slot index with workers and is careful to populate the indexes
without confusing workers.
However, if the disker process is restarted for any reason (e.g., a
crash or kid registration timeout), the disker starts scanning its
on-disk storage from the beginning, adding to the indexes that already
contain some entries (added by the first disker incarnation and adjusted
by workers). An attempt to index the same cached object twice may remove
that object. Such a removal would be wasteful but not dangerous.
Indexing a free/unused slot twice can be disastrous:
* If Squid is lucky, the disker quickly hits an assertion (or a fatal
exception) when trying to add the already free slot to the free slot
collection, as long as no worker starts using the free slot between
additions (detailed in the next bullet).
* Unfortunately, there is also a good chance that a worker starts using
the free slot before the (restarted) disker adds it the second time.
In this case, the "double free" event cannot be detected. Both free
slot copies (pointing to the same disk location) will eventually be
used by a worker to cache new objects. In the worst case, it may lead
to completely wrong cached response content being served to an
unsuspecting user. The risk is partially mitigated by the fact that
disker crashes/restarts are rare.
Now, if a disker did not finish indexing before being restarted, it
resumes from the next db slot, thus avoiding indexing the same slot
twice. In other words, the disker forgets/ignores all the slots scanned
prior to the restart. Squid logs "Resuming indexing cache_dir..."
instead of the usual "Loading cache_dir..." to mark these (hopefully
rare) occurrences.
Also simplified code that delays post-indexing revalidation of cache
entries (i.e. store_dirs_rebuilding hacks). We touched that code because
the updated rock code will now refuse to reindex the already indexed
cache_dir. That decision relies on shared memory info and should not be
made where the old code was fiddling with store_dirs_rebuilding level.
After several attempts resulted in subtle bugs, we decided to simplify
that hack to reduce the risks of mismanaging store_dirs_rebuilding.
Adjusted old level-1 "Store rebuilding is ... complete" messages to
report more details (especially useful when rebuilding kid crashes). The
code now also reports some of the "unknown rebuild goal" UFS cases
better, but more work is needed in that area.
Also updated several rebuild-related counters to use int64_t instead of
int. Those changes stemmed from the need to add a new counter
(StoreRebuildData::validations), and we did not want to add an int
counter that will sooner or later overflow (especially when counting db
slots (across all cache_dirs) rather than just cache entries (from one
cache_dir)). That new counter interacted with several others, so we
had to update them as well. Long-term, all old StoreRebuildData counters
and the cache_dir code feeding them should be updated/revised.
Amos Jeffries [Fri, 28 Aug 2020 18:47:04 +0000 (18:47 +0000)]
Replaced X-Cache and X-Cache-Lookup headers with Cache-Status (#705)
See https://tools.ietf.org/html/draft-ietf-httpbis-cache-header
Also switched to identifying Squid instance in the header using
unique_hostname(), fixing a bug affecting proxies that share the same
visible_hostname in a cluster. The Cache-Status field values will now
point to a specific proxy in such a cluster.
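For illustration only (exact parameters depend on the transaction and
the draft revision), a Cache-Status value naming a specific proxy might
look like:

    Cache-Status: proxy1.example.net;hit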
The new initial lookup reporting (formerly X-Cache-Lookup)
implementation has several differences:
* The reporting of the lookup is now unconditional, just like the
Cache-Status reporting itself. The dropped X-Cache-Lookup required
--enable-cache-digests. We removed that precondition because
Cache-Status already relays quite a bit of information, and sharing
lookup details is unlikely to tilt the balance in most environments.
The original lookup reporting code was conditional because the feature
was added for Cache Digests debugging, not because we wanted to hide
the info. Folks later discovered that the info is useful in many other
cases. If we do want to hide this information, it should be done
directly rather than via a (no) Cache Digests side effect.
* The initial lookup classification can no longer be overwritten by
additional Store lookups. Official code allowed such rewrites due to
implementation bugs. If we only report one lookup, the first lookup
classification is the most valuable one and almost eliminates doubts
about the cache state at the request time. Ideally, we should also
exclude various internal Store lookup checks that may hide a matching
cache entry, but that exclusion is difficult to implement with the
current needlessly asynchronous create() Store API.
* Lookup reporting now covers more use cases. The official code probably
missed or mishandled various PURGE/DELETE use cases and did not
distinguish absence of Store lookup because of CC:no-cache from other
lookup bypass cases (e.g., errors). More work is probably needed to
cover every lookup-avoiding case.
Amos Jeffries [Sun, 16 Aug 2020 02:21:22 +0000 (02:21 +0000)]
Improve Transfer-Encoding handling (#702)
Reject messages containing Transfer-Encoding header with coding other
than chunked or identity. Squid does not support other codings.
For simplicity's and security's sake, also reject messages where
Transfer-Encoding contains unnecessarily complex values that are
technically equivalent to "chunked" or "identity" (e.g., ",,chunked" or
"identity, chunked").
RFC 7230 formally deprecated and removed identity coding, but it is
still used by some agents.
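A rough sketch of the acceptance rule described above (illustrative
only; the real parsing is header-aware and case-insensitive): only an
exact "chunked" or "identity" value is accepted, so technically
equivalent list forms are rejected along with unsupported codings.

    #include <string>

    // hypothetical sketch of the strict Transfer-Encoding value check
    static bool acceptableTransferEncoding(const std::string &value) {
        return value == "chunked" || value == "identity";
    }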
Amos Jeffries [Fri, 14 Aug 2020 15:05:31 +0000 (15:05 +0000)]
Forbid obs-fold and bare CR whitespace in framing header fields (#701)
Header folding has been used for various attacks on HTTP proxies and
servers. RFC 7230 prohibits sending obs-fold (in any header field) and
allows the recipient to reject messages with folded headers. To reduce
the attack space, Squid now rejects messages with folded Content-Length
and Transfer-Encoding header field values. TODO: Follow RFC 7230 status
code recommendations when rejecting.
Bare CR is a CR character that is not followed by a LF character.
Similar to folding, bare CRs have been used for various attacks. HTTP
does not allow sending bare CRs in Content-Length and Transfer-Encoding
header field values. Squid now rejects messages with bare CR characters
in those framing-related field values.
When rejecting, Squid informs the admin with a level-1 WARNING such as
obs-fold seen in framing-sensitive Content-Length: ...
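A minimal sketch of the bare-CR rule for framing-sensitive field values
(illustrative; not the actual parser code): a CR is tolerable only when
immediately followed by LF.

    #include <string>

    // hypothetical check for a bare CR inside a field value
    static bool hasBareCr(const std::string &fieldValue) {
        for (std::string::size_type i = 0; i < fieldValue.size(); ++i) {
            if (fieldValue[i] == '\r' &&
                    (i + 1 >= fieldValue.size() || fieldValue[i + 1] != '\n'))
                return true; // CR not followed by LF
        }
        return false;
    }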
Ignore SMP queue responses made stale by worker restarts (#711)
When a worker restarts (for any reason), the disker-to-worker queue may
contain disker responses to I/O requests sent by the previous
incarnation of the restarted worker process (the "previous generation"
responses). Since the current response:request mapping mechanism relies
on a 32-bit integer counter, and a worker process always starts counting
from 0, there is a small chance that the restarted worker may see a
previous generation response that accidentally matches the current
generation request ID.
For writing transactions, accepting a previous generation response may
mean unlocking a cache entry too soon, making not yet written slots
available to other workers that might read wrong content. For reading
transactions, accepting a previous generation response may mean
immediately serving wrong response content (that has already been
overwritten on disk with the information that the restarted worker is
now waiting for).
To avoid these problems, each disk I/O request now stores the worker
process ID. Workers ignore responses to requests originated by a
different/mismatching worker ID.
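A rough sketch of the stale-response filter described above (field and
type names here are assumptions, not the actual SMP queue structures):

    // hypothetical sketch of tagging disk I/O messages with the worker PID
    #include <cstdint>
    #include <sys/types.h>

    struct DiskIoMessageSketch {
        uint32_t requestId;  // 32-bit counter; restarts from 0 with the worker
        pid_t workerPid;     // identifies the requesting worker incarnation
    };

    // a worker ignores responses addressed to a previous incarnation of itself
    static bool staleResponse(const DiskIoMessageSketch &response, const pid_t myPid) {
        return response.workerPid != myPid;
    }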
Alex Rousskov [Mon, 10 Aug 2020 18:46:02 +0000 (18:46 +0000)]
Do not send keep-alive in 101 (Switching Protocols) responses (#709)
... because it breaks clients using websocket_client[1] library and is
redundant in our HTTP/1.1 control messages anyway.
I suspect that at least some buggy clients are confused by a multi-value
Connection field rather than the redundant keep-alive signal itself, but
let's try to follow RFC 7230 Upgrade example more closely this time and
send no keep-alive at all.
trapexit [Sun, 9 Aug 2020 06:14:51 +0000 (06:14 +0000)]
Add http_port sslflags=CONDITIONAL_AUTH (#510)
Enabling this flag removes SSL_VERIFY_FAIL_IF_NO_PEER_CERT from the
SSL_CTX_set_verify() mode, meaning that client certificate verification
occurs if and only if the client provides a certificate.
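For illustration, a hedged sketch of the OpenSSL mode difference
(simplified; the real code derives the mode from the configured
http_port sslflags):

    #include <openssl/ssl.h>

    // hypothetical helper showing the effect of sslflags=CONDITIONAL_AUTH
    static void configureClientCertVerification(SSL_CTX *ctx, const bool conditionalAuth,
            int (*verifyCallback)(int, X509_STORE_CTX *))
    {
        int mode = SSL_VERIFY_PEER;
        if (!conditionalAuth)
            mode |= SSL_VERIFY_FAIL_IF_NO_PEER_CERT; // reject clients that send no certificate
        SSL_CTX_set_verify(ctx, mode, verifyCallback);
    }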
Amos Jeffries [Thu, 6 Aug 2020 22:49:34 +0000 (22:49 +0000)]
Fix: cannot stat tests/STUB.h No such file or directory (#707)
Since 2b5ebbe (PR #670), we have been seeing "random" build failures.
The tests/Stub.am generated by source-maintenance.sh has not included
the tests/STUB.h file for explicit distribution. The source file was
built and included only when seen as a dependency of the files using it.
When stubs are copied for use by the extra binaries from tools/
directory, there is a secondary side effect from "make -j":
* When the -j concurrency is small, tests/STUB.h gets copied during the
first cycle, and parallel builds compiling other copied files succeed.
* When the -j concurrency is large, tests/STUB.h may be deferred to a
later copy cycle, and the first actual compile task needing it fails
with `cannot stat 'src/tests/STUB.h': No such file or directory`.
Add tests/STUB.h to src/Makefile.am EXTRA_DIST to restore the previous
distribution behavior (when STUB_SOURCE contained it explicitly).
Update the pinger source paths that were omitted in 2b5ebbe.
Amos Jeffries [Tue, 4 Aug 2020 04:34:32 +0000 (04:34 +0000)]
Enforce token characters for field-name (#700)
RFC 7230 defines field-name as a token. Request splitting and cache
poisoning attacks have used non-token characters to fool broken HTTP
agents behind or in front of Squid for years. This change should
significantly reduce that abuse.
If we discover exceptional situations that need special treatment, the
relaxed parser can allow them on a case-by-case basis (while being extra
careful about framing-related header fields), just like we already
tolerate some header whitespace (e.g., between the response header
field-name and colon).
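A minimal sketch of the RFC 7230 token test behind this change
(illustrative; the real parser uses Squid's CharacterSet machinery and
stays ASCII-only):

    #include <cctype>
    #include <cstring>

    // RFC 7230 tchar: "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "."
    //               / "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
    static bool isTokenChar(const unsigned char c) {
        // note: std::isalnum() is locale-dependent; a real parser would use a fixed ASCII set
        if (std::isalnum(c))
            return true;
        return c != '\0' && std::strchr("!#$%&'*+-.^_`|~", c) != nullptr;
    }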
Do not stall while debugging a scan of an empty store_table (#699)
Non-SMP Squid and each SMP kid allocate a store_table hash. With large
caches, some allocated store_table may have millions of buckets.
Recently we discovered that it is almost impossible to debug SMP Squid
with a large but mostly empty disk cache because the disker registration
times out while store_table is being built -- the disker process is
essentially blocked on a very long debugging loop.
The code suspends the loop every 500 entries (to take care of tasks like
kid registration), but there are no pauses when scanning millions of
empty hash buckets where every bucket prints two debug lines.
Squid now does not report empty store_table buckets explicitly. When
dealing with large caches, the debugged process may still be blocked for
a few hundred milliseconds (instead of many seconds) while scanning the
entire (mostly empty) store_table. Optimizing that should be done as a
part of the complex "store search" API refactoring.
peerDigestHandleReply() was missing a premature EOF check. The existing
peerDigestFetchedEnough() cannot detect EOF because it does not have
access to receivedData.length used to indicate the EOF condition. We did
not adjust peerDigestFetchedEnough() because it is abused to check both
post-I/O state and the state after each digest processing step. The
latter invocations lack access to receivedData.length and should not
really bother with EOF anyway.
Revamped LruMap implementation, adding support for value-specific TTLs,
value-specific memory consumption estimates, memory pooling for value
wrappers, and polishing code. Renamed the result to ClpMap.
Fixed memory leak of generated fake SSL certificates in Squids
configured with dynamic_cert_mem_cache_size=0 (the default is 4MB).
Controversially changed sslcrtvalidator_program cache and TTL defaults:
* After we started accounting for key lengths in validator cache
entries, the old 2048 bytes default size became not just ridiculously
small but insufficient to cache the vast majority of cache entries
because an entry key contains a PEM certificate sent to the validator
(among other validation parameters). A PEM certificate usually
consumes at least a few kilobytes. Leaving the old 2KB default would
essentially mean that we are disabling caching (by default).
* And if we fix one default, it makes sense to fix the other as well.
The old 60 second TTL default felt too conservative. Most certificate
invalidations last days, not seconds. We picked one hour because that
value is still fairly conservative, used as the default by several
other Squid caches, and should allow for decent cache hit ratio.
Dropped support for the undocumented "sslcrtvalidator_program ttl=-1"
hack that meant unlimited TTL. Large TTL values should be used instead.
TODO: The option parser should be revamped to reject too-large values.
JobCallbackPointer::cancel() now checks the precondition itself
... instead of checking it in each caller. In the future, we can find a
better name for the method to avoid confusion about 'cancelling'
non-existent callbacks.
Moved the job notification functionality to JobCallbackPointer
This improvement allowed us to get rid of the JobCallbackPointer::job()
getter.
The notifier registration requires both the method name and the method
pointer, which looks redundant. TODO: this can probably be improved by
passing the method pointer to a new macro that converts it to a string.
This should have been done in a9e67903, when the 'opener' field was
moved from Ftp::Channel. That move was justified by the fact that the
pre-a9e67903 code used only the 'Ftp::Client::data::opener' field
(Ftp::Client::ctrl::opener was always nil). In other words, only the
data channel spawned the ConnOpener job.
Log::TcpLogger simply created the ConnOpener job, without storing the
callback and notifying it in case of destruction. The new
Log::TcpLogger::opener field is now responsible for this.
Refactored Ftp::Channel::opener with JobCallbackPointer
partially addressing the first TODO in 553e9da.
I removed the old Ftp::Channel::opener (which was probably misplaced)
and created a new Ftp::Client::opener.
Also fixed swanSong() for Http::Tunneler and PeerConnector jobs so that
it is aware of possible callback cancelling. For example, if the caller
has cancelled the job, the job should not treat this as an error and
should just perform some cleanup (closing connections).
TunnelStateData spawns three jobs (HappyConnOpener, BlindPeerConnector
and Tunneler), passing callbacks to them. For each job/callback pair,
it may need to do common tasks, such as cancelling the callback or
notifying the job of abort. Instead of duplicating the TunnelStateData
code for each job, it is better to move this functionality into a new
class.
TODO: find similar cases outside tunnel.cc and use JobCallbackPointer
there.
TODO: teach JobCallbackPointer to send 'noteAbort' notifications.
The fail() method performs two actions: saving an error and performing
some cleanup operations. However, in reality, most of the callers need
only the first step because the second (cleanup) depends on a rare
ERR_ZERO_SIZE_OBJECT condition. We could probably optimize all these
callers by calling saveError() instead of fail(), but this could cause
some difficulties in the future if the cleanup condition changes.
However, I think it is reasonable to do this for internal callers only
(i.e., in FwdState itself).
Report SMP store queues state (mgr:store_queues) (#690)
The state of two Store queues is reported: Transients notification queue
(a.k.a. Collapsed Forwarding queue) and SMP I/O request queue (used for
communication with rock diskers). Each worker and disker kid reports its
view of that kid's incoming and outgoing queues state, including a small
sample (up to 7 items) of queued messages. These kid-specific reports
are YAML-compliant.
With the exception of a field labeled "other", each queue report is
self-consistent despite accessing data shared among kids -- the reported
combination of values did exist at the snapshot collection time.
The special "other" field represents a message counter maintained by the
other side of the queue. In most cases, that field will be close to its
correct value, but, due to modifications by the other process of a
non-atomic value, it may be virtually anything. We think it is better to
report (without officially naming) this field because it may be useful
in triage regardless of these caveats. Making the counter atomic just
for these occasional reports is not worth the performance overheads.
Also fixed testStore linking problems (on some platforms) that were
caused by the wrong order of libdiskio and libipc dependencies.
The 8c9a47c solution was incomplete. Though it fixed one problem in
advanceDestination(), resetting the flags on an error, another problem
still persisted: the job (which could still be running after the
exception) should not call back this (already failed) destination after
retryOrBail() starts another destination.
Both problems should be resolved now by means of callbacks
stored in TunnelStateData, which are cancelled when needed.
Alex Rousskov [Mon, 6 Jul 2020 08:04:31 +0000 (08:04 +0000)]
Honor on_unsupported_protocol for intercepted https_port (#689)
... when Squid discovers a non-TLS client while parsing its handshake.
For https_port traffic, ConnStateData::switchToHttps() relies on start()
to set preservingClientData_ correctly, but shouldPreserveClientData(),
called by start() to set preservingClientData_, was not preserving TLS
bytes in the https_port start() context. Typical debug messages:
parseTlsHandshake: Got something other than TLS ... Cannot SslBump
tunnelOnError: may have forgotten client data; send error: 40