Alex Rousskov [Mon, 9 Aug 2021 02:46:31 +0000 (22:46 -0400)]
Keep Connection and other objects in sync with Comm in closure callbacks
There are lots of small bugs, inconsistencies, and other problems in
Connection closure handlers. It is not clear whether any of those
problems could result in serious runtime errors or leaks. In theory, the
rest of the code could neutralize their negative side effects. However,
even in that case, it was just a matter of time before the next bug bit
us due to stale Connection::fd and the like. These changes themselves
carry elevated risk, but I think we have to make them to get closer to
reliable code as far as Connection maintenance is concerned; otherwise,
we will keep chasing their deadly side effects.
Long-term, all these manual efforts to keep things in sync should become
unnecessary with the introduction of appropriate Connection ownership
APIs that automatically maintain the corresponding environments (TODO).
Also marked a few newly uncovered bugs in the official code.
Alex Rousskov [Fri, 6 Aug 2021 20:20:33 +0000 (16:20 -0400)]
Improved AsyncJob::Start() API and reduced code duplication
Every JobWait::start() call was followed by an AsyncJob::Start() call.
The relationship is natural: JobWait::start() should call Start()
directly (see the sketch after the list below).
AsyncJob::Start() used to return a job pointer, but that was a bad idea:
* It implies that a failed Start() will return a nil pointer, and that
the caller should check the result. Neither is true.
* It encourages callers to dereference the returned pointer to adjust
the job. That technically works (today) but violates the rules of
communicating with an async job. The Start() method is the boundary
after which the job is deemed asynchronous.
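For illustration, here is a minimal sketch of the combined call; the
member and parameter names are assumptions, not the actual Squid
declarations:

    template <class Job>
    void
    JobWait<Job>::start(const CbcPointer<Job> &aJob, const AsyncCall::Pointer &aCallback)
    {
        assert(!waiting()); // strict start/finish pairing
        job_ = aJob;           // remember whom we are waiting for
        callback_ = aCallback; // and which call will end the wait
        AsyncJob::Start(aJob.get()); // returns nothing; no pointer to misuse
    }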
Also removed old "and wait for..." comments because the code itself
became clear enough, and those comments were becoming increasingly stale
(because they duplicated the callback names above them).
The problem is real, but there are other similarly problematic places in
the official code. If we fix this one place, then we probably should fix
all other known ones. However,
* the related exceptions ought to be rare (there are no known ones);
* Squid will probably recover even if the affected Connection objects
are orphaned; and
* the appropriate fix (i.e. custom try/catch blocks) is not the right
long-term solution -- we need a better Connection storage API that
would do the right thing automatically.
Given the combination of these factors, I think it is best to do nothing
here/now but add the right long-term API in a dedicated branch.
Alex Rousskov [Fri, 6 Aug 2021 15:44:37 +0000 (11:44 -0400)]
Refactored ICAP connection-establishing code
... to delay Connection ownership until the ICAP connection is ready.
This change addresses primary Connection ownership concerns (as they
apply to this ICAP code) except orphaning of the temporary Connection
object by helper job start exceptions (now an explicit XXX). For
example, the transaction no longer shares a Connection object with
ConnOpener and IcapPeerConnector jobs.
Also removed (unused except for debugging) Icap::Xaction::stopReason
that was shadowing AsyncJob::stopReason, effectively breaking debugging.
Alex Rousskov [Thu, 5 Aug 2021 21:56:12 +0000 (17:56 -0400)]
Revised Connection cloning during ConnOpener creation
Item 1 in recent branch commit 21f1082 correctly identified the
Connection sharing problem, but its solution (i.e. let ConnOpener clone
the passed Connection) was not only somewhat expensive but buggy:
ConnOpener assumed that cloneDestinationDetails() is the right cloning
method, but that method missed Connection properties like tos and nfmark
(at least). Another method, cloneIdentDetails(), would copy those, but
calling that method in non-IDENT contexts would look odd, and even
that method could have missed something (now or in the future).
The core problem here was not in the correct cloning method selection,
but that ConnOpener is not in a position to select at all! ConnOpener
should be using/forwarding all the provided Connection details, not
guessing which ones are relevant to the _caller_. After that became
clear, we had two choices:
A. ConnOpener clones everything. This is a simple, safe choice. However,
it means double-cloning: First, the caller would need to clone to
copy just the right set of Connection members (or effectively clear
the irrelevant ones). And then ConnOpener would clone again to break
the Connection ownership link with poorly implemented callers. In
some cases, that cloning was already following ResolvedPeers cloning,
resulting in triple cloning of each used destination on the common
forwarding path! Such cloning is both expensive and complicates
triage by creating long chains of Connection IDs.
B. ConnOpener clones nothing; ConnOpener creator is responsible for
supplying ConnOpener with a non-owned Connection object (that does
not need to be cloned by ConnOpener). This is a clean, efficient,
forward-looking solution but it is a bit less safe because the
compiler will not identify buggy creators that share a Connection
object they own with ConnOpener.
After weighing all these impossible-to-compare factors, we went with B.
Implementing B required adjusting several callers that were still
passing the Connections they owned to ConnOpener.
Also, after comparing the needs of existing and newly added ("clone
everything") Connection cloning method callers, we decided there is no
need to maintain three different methods. All existing callers should be
fine with a single method because none of them suffers from "extra"
copying of members that others need. Right now, the new cloneProfile()
method copies everything except FD and a few special-purpose members
(with documented reasons for not copying).
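A sketch of the resulting method shape; beyond fd, the exact member set
and names here are assumptions rather than the actual Squid code:

    Comm::ConnectionPointer
    Comm::Connection::cloneProfile() const
    {
        const Comm::ConnectionPointer c = new Comm::Connection();
        // copy destination and profile metadata
        c->local = local;
        c->remote = remote;
        c->peerType = peerType;
        c->flags = flags;
        c->tos = tos;
        c->nfmark = nfmark;
        // deliberately not copied: fd and a few special-purpose members
        return c;
    }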
Alex Rousskov [Mon, 2 Aug 2021 01:46:49 +0000 (21:46 -0400)]
Fixed ConnOpener callback's syncWithComm()
The stale CommConnectCbParams override was testing unused (i.e. always
negative) CommConnectCbParams::fd and was trying to cancel the callback
that most (possibly all) recipients rely on: ConnOpener users expect a
negative answer rather than no answer at all.
Alex Rousskov [Sun, 1 Aug 2021 21:20:48 +0000 (17:20 -0400)]
Adjusted ConnOpener API to fix several problems
1. Many ConnOpener users are written to keep a ConnectionPointer to the
destination given to ConnOpener. This means that their connection
magically opens when ConnOpener successfully connects, before
ConnOpener has a chance to notify the user about the changes. Having
multiple concurrent connection owners is always dangerous, and the
user cannot even have a close handler registered for its now-open
connection. When something happens to ConnOpener or its answer, the
user job may be in trouble.
ConnOpener now clones the destination parameter, refusing to tie the
ConnOpener connection to the ConnOpener creator's connection. This
addresses the concern, but requires adjustment 2.
2. Refactored ConnOpener users to stop assuming that the answer contains
a pointer to their connection object. After adjustment 1 above, it
does not. HappyConnOpener relied on that assumption quite a bit, so we
had to refactor it to use two custom callback methods instead of one
with a complicated if-statement distinguishing prime from spare
attempts. This refactoring is an overall improvement because it
simplifies the code. Other ConnOpener users just needed to remove a
few no longer valid paranoid assertions/Musts.
3. ConnOpener users were forced to remember to close params.conn when
processing negative answers. Some, naturally, forgot, triggering
warnings about orphaned connections (e.g., Ident and TcpLogger).
ConnOpener now closes its open connection before sending a negative
answer.
4. ConnOpener would trigger orphan connection warnings if the job ended
after opening the connection but without supplying the connection to
the requestor (e.g., because the requestor has gone away). Now,
ConnOpener explicitly closes its open connection if it has not been
sent to the requestor. (A recipient-side sketch combining these
adjustments follows this list.)
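Taken together, a recipient callback after these adjustments reduces to
something like the following sketch; handleFailure() and closer_ are
illustrative names, not actual Squid code:

    void
    SomeConnOpenerUser::noteConnectDone(const CommConnectCbParams &params)
    {
        if (params.flag != Comm::OK) {
            // negative answer: ConnOpener has already closed its connection
            handleFailure(params.xerrno);
            return;
        }
        // positive answer: we now own a connection nobody else shared
        conn_ = params.conn;
        comm_add_close_handler(conn_->fd, closer_); // monitoring is possible now
    }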
Also fixed Comm::ConnOpener::cleanFd() debugging that was incorrectly
saying that the method closes the temporary descriptor.
Also added Comm::Connection::cloneDestinationDetails() debugging to
simplify tracking dependencies between half-baked Connection objects
carrying destination/flags/other metadata and open Connection objects
created by ConnOpener using that metadata (which are later delivered to
ConnOpener users and, in some cases, replace those half-baked
connections mentioned earlier). Long-term, we need to find a better way
to express these and other Connection states/stages than comments and
debugging messages.
Apart from Downloader, there are several other doneAll() methods
needing refactoring as well (e.g., ConnOpener::doneAll()). We
should refactor them all at once (probably in a separate PR).
Alex Rousskov [Fri, 16 Jul 2021 15:51:43 +0000 (11:51 -0400)]
Simplified Gopher client code
Its fake deleteThis() and swanSong() methods were not really needed,
swanSong() was doing the cleanup that the destructor should be doing
(because the _real_ swanSong() is not guaranteed to be called), and
swanSong() implementation was actually a little buggy (creating a
dangling StoreEntry pointer).
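The underlying principle, as a sketch with illustrative member names:
guaranteed cleanup belongs in the destructor because swanSong() may
never run:

    class GopherStateData
    {
    public:
        ~GopherStateData() {
            // runs on every deletion path, unlike swanSong()
            if (entry)
                entry->unlock("gopherState");
        }

        StoreEntry *entry = nullptr;
    };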
Alex Rousskov [Thu, 15 Jul 2021 21:58:50 +0000 (17:58 -0400)]
fixup: Moved JobWait classes to their own files
Technically, there may exist some code that uses JobWait but does not
make any JobCalls itself. And the new classes are big/interesting
enough to deserve their own files anyway.
Alex Rousskov [Thu, 15 Jul 2021 20:49:36 +0000 (16:49 -0400)]
fixup: Moved Job_type-agnostic JobWait code to a .cc file
It is possible to make most JobWait users template-free. I did that
and then reverted those changes because they:
* Decrease code readability: The wait member declaration no longer makes
it clear what job kind that wait member is waiting for/dedicated to.
* Decrease JobWait safety (a user may give the wrong job to the
wait.start() call).
* Require strange/ugly tricks to make sure that users of the templated
version do not call the non-templated start() which does not set the
type-aware JobWaitAndCommunicate job_ member.
A couple of other, very minor JobWait polishing touches.
Alex Rousskov [Thu, 15 Jul 2021 18:34:43 +0000 (14:34 -0400)]
Fix PeerConnector handling of last-resort callback requirements
... and simplify the corresponding code.
Events like handleStopRequest() and callException() stop the job but
should not be reported as a BUG (e.g., it would be up to
callException() to decide how to report the caught exception). There
might (or will) be other, similar cases where the job is
stopped prematurely for some non-BUG reason beyond swanSong() knowledge.
The existence of non-bug cases does not mean there could be no bugs
worth reporting here, but until they can be identified more reliably
than all these benign/irrelevant cases, reporting no BUGs is a (much)
lesser evil.
The swanSong() method does not need to remember to cancel waits. That
will happen automatically upon imminent job destruction.
Adaptation::Icap::Xaction::noteCommClosed() does not need to remember to
cancel encryption wait because the encryptor will see the same
connection closure notification. The Xaction object should not even have
the connection closure handler while another job is negotiating TLS, but
that is one of the problems I hope we do not have to fix now.
Also added nasty XXXs. This old code needs some love, but fixing these
old problems is far from trivial and is outside this branch scope.
Alex Rousskov [Wed, 14 Jul 2021 21:45:01 +0000 (17:45 -0400)]
fixup: Do not rely on dieOnConnectionFailure() to throw (so much)
That method should probably be rewritten to avoid excessive throwing in
favor of mustStop() or a similar call, but that polishing is outside
this branch scope. Let's just attempt to minimize future code changes
by not relying on dieOnConnectionFailure() unwinding the stack.
Alex Rousskov [Wed, 14 Jul 2021 21:22:50 +0000 (17:22 -0400)]
Simplified PeerPoolMgr by removing excessive conn closure monitoring
... and possibly even fixed branch PeerPoolMgr changes.
Since commit 25b0ce4, Security::BlindPeerConnector (or, to be more
precise, its Security::PeerConnector parent) owns the connection being
secured, including informing the requestor (here, PeerPoolMgr) about
connection closures. This removal should have been done in 25b0ce4.
Removing closure monitoring here/now allows us to easily fix
branch-added callback-related closure handling that might have been racy
with respect to connection closure notification that could have arrived
_after_ we cancelled encryptionWait for other reasons, triggering a
Must() violation.
Alex Rousskov [Wed, 14 Jul 2021 20:58:10 +0000 (16:58 -0400)]
Fix Tunneler handling of last-resort callback requirements
... and simplify the corresponding code.
Events like noteAbort() and callException() terminate the job but are
not bugs that should be reported by swanSong().
Not treating (rare) cancelled callbacks specially in swanSong() allows
us to avoid making cleanup code even more complex than it already is,
undoing branch changes that introduced closeQuietly().
With JobWait API used by the callers, we no longer need to rely on some
unrelated async job call noticing the callback cancellation. We are now
guaranteed to receive a noteAbort() call.
I also removed a call to the always-true parent doneAll() because no
other job, not even a parent's, can force Downloader to keep running --
if
we have notified the requestor, we are done! Squid code is not
consistent in application of that principle (yet).
Alex Rousskov [Wed, 14 Jul 2021 20:18:11 +0000 (16:18 -0400)]
fixup: Fix Downloader handling of last-resort callback requirements
... and simplify the corresponding code.
The previous branch commit missed a callException() case that stops the
job but should not be reported as a BUG related to callback maintenance
(it would be up to the callException() implementation to decide how to
report the caught exception). There might (or will) be other,
similar cases where the job is stopped prematurely for some non-BUG
reason beyond swanSong() knowledge.
Alex Rousskov [Wed, 14 Jul 2021 19:07:06 +0000 (15:07 -0400)]
Require strict JobWait::start/finish pairing
I hesitated to remove the !fooWait asserts that many fooWait.start()
callers had at the beginning of their start-waiting methods. The earlier
we assert, the better, and those early asserts looked correct. However,
there is also a danger in asserting too early or for no good reason (as
the code gets refactored), and most of the code separating the current
asserts from the fooWait.start() calls consists of relatively simple job
configuration manipulations that are unlikely to significantly alter the
caller state or confuse developers during assertion triage. Keeping
these early asserts consistent across future changes would also be
difficult. Thus, I decided that it is better to remove those asserts
while relying on JobWait asserts, now built into the JobWait class.
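A sketch of the pairing checks now centralized in the JobWait class
(member and method names are assumptions):

    void
    JobWaitBase::start_(const AsyncJob::Pointer &aJob, const AsyncCall::Pointer &aCall)
    {
        assert(!waiting()); // strict pairing: no overlapping waits
        job_ = aJob;
        callback_ = aCall;
    }

    void
    JobWaitBase::finish()
    {
        assert(waiting()); // strict pairing: finish() is legal only while waiting
        job_.clear();
        callback_ = nullptr;
    }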
I considered making JobWait::finish() forgiving, conditional on being in
waiting() state but succumbed to pressures to address concerns about
possible message corruption or incorrect routing decisions due to
mismatched/out-of-order callbacks.
AFAICT, the only way a caller can decide whether to call fail() or
saveError() is to somehow know that fail() should only be called for
ERR_ZERO_SIZE_OBJECT errors. That knowledge is hidden inside branch
code. Moreover, that distinction may change in the future without
saveError() callers' knowledge. In other words, today, saveError()
callers are correct, but tomorrow some of them will become wrong because
of changes in cleanupOnFail() that they do not call. Developers
modifying cleanupOnFail() can easily miss the hidden fact that some of
the saveError() callers should be kept in sync with cleanupOnFail().
Besides saving CPU cycles on that trivial ERR_ZERO_SIZE_OBJECT check, I
saw no value in this dangerous split of fail() into fail() and
saveError(). The branch commit 0523b93 message did not help me discover
that value.
However, commit 0523b93 was based on code that already had
doFail()/cleanup() methods that seem unnecessary to the current diff
reader like me. Perhaps the prior existence of those methods implies
that I am missing some other reasons behind that split...
Alex Rousskov [Thu, 17 Dec 2020 21:55:43 +0000 (16:55 -0500)]
fixup: Simplify and protect private PeerConnectionPointer::connection_
... from being cleared by the (difficult-to-track) 3rd-party code.
AFAICT, these 3rd-party resets were (ab)used to prevent the pending
connection from being closed after a successful hand-off to a helper
job. This commit essentially replaces
    try {
        ... step creation/scheduling code here ...
        conn = nullptr;
    } catch (...) {
        if (conn) // we still own the connection
            closePendingConnection(conn);
        else
            cancelStep();
    }

with the equivalent but much simpler

    try {
        ... step creation/scheduling code here ...
    } catch (...) {
        closePendingConnection(conn); // we still own the connection
    }
I think we can and should assert that the helper job is
* either scheduled to start (no exception is thrown in this case and the
connection ownership passes to the helper job)
* or is not scheduled to start (an exception is thrown and the
connection ownership stays with the caller).
The PR code assumed the existence of a third outcome: The helper job was
scheduled to start but the same code that successfully scheduled the job
to start threw an exception. I believe that if this outcome is possible,
then there is a bug somewhere outside of advanceDestination(). We should
fix any such bugs, but advanceDestination() should assume that there are
none because working around such bugs is too costly.
With this approach, we do not need to clear the conn pointer in the step
creation/scheduling code to signal a successful hand-off -- if we see
the exception, we know that we still own the connection and do not need
to cancel the step (which failed to start).
FWIW, PeerConnectionPointer::print(), at least, thinks that "we should
see no combinations of a nil connection and a set position". External
resets would invalidate that assumption. This is a secondary/minor
reason, of course.
That said, I am pretty sure that the (future) connection ownership
object will be modified on the path modified by this commit because the
connection ownership does change hands on that path. Thus, we may have
to adjust advanceDestination() and step profiles again. Hopefully, by
the time we are ready for that adjustment, the code will not have to
sneak into PeerConnectionPointer private areas to pass conn ownership.
Alex Rousskov [Tue, 8 Dec 2020 02:01:35 +0000 (21:01 -0500)]
fixup: Renamed JobCallbackPointer to JobWait, polished method names
... and synced the corresponding data member names.
The new class is not really a smart pointer (to a callback). With some
effort, it can be viewed as a smart pointer to a helper job, but that is
not really its primary purpose/role. The class is actually focusing on
the _wait_ (for our helper job calling our callback).
Renaming data members probably increases the diff, but the vast majority
of changes appear to be necessary for other reasons anyway.
Also swapped the JobWait::start() arguments for an arguably more natural
or "chronological" order.
Also found (two instances of) a branch bug in icap/Xaction.cc (now XXXs)
and sketched a TODO solution.
    x.h: // #include "helper.h"
    x.h: class X { ... JobCallbackPointer<Helper> helper; };
Most code #includes x.h without using or "seeing" X constructor or
destructor implementations. Such code does not really use the helper
data member and, hence, compiles fine without #including helper.h. Thus,
x.h should not #include helper.h.
This commit also removes some branch-added #includes (e.g., HttpRequest)
that were probably added in hope to fix some other build problems.
It is possible that some of these changes will need to be undone and/or
refactored to accommodate older compilers. I am using GCC v9.3.0.
Alex Rousskov [Sat, 5 Dec 2020 01:49:26 +0000 (20:49 -0500)]
fixup: Disabled dangerous copying of job callbacks
AsyncCalls are not designed to be copied and JobCallbackPointer is,
essentially, an AsyncCall (among other things). Also, JobCallbackPointer
destructor semantics are difficult to combine properly with copying.
Essentially the same information is available via the callback. And if
we discover a need to provide the same info in some future use cases, it
is very likely to still default to the callback settings IMO.
This change also removes strange-looking (IMO) magic numbers from many
JobCallbackPointer field constructors. They were debug sections, but
they did not look like ones due to insufficient interpretation context.
The method was only used for reporting JobCallbackPointer in debug logs.
For that purpose, we can use JobCallbackPointer::print() to provide both
the helper job ID and the callback ID. While either one can usually be
derived from another, having access to both can save quite a bit of log
searching. The two can be reported in a compact way: `call24<-job6`.
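For example, a print() implementation along these lines (field names
are assumptions) yields that compact form:

    void
    JobCallbackPointer::print(std::ostream &os) const
    {
        // reports both IDs, e.g., "call24<-job6"
        if (callback_)
            os << callback_->id;
        os << "<-";
        if (const auto job = job_.get())
            os << job->id;
    }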
HappyConnOpener::status() refactoring was necessary to use
JobCallbackPointer::print() but it is a step forward anyway. After we
redesign the status() API (which is an old TODO), new status() code
would use stream-based reporting like the refactored code does.
JobCallbackPointer is a general-purpose class. The proposed notifier API
is specific to one-per-pointer, parameterless job notifications. The
first aspect (i.e. limiting notifications to just one notification kind
per pointer instance) cannot be easily fixed by generalizing the API.
It also feels wrong to expect JobCallbackPointer users to pre-register
notification calls they might want to make in the future.
Finally, the motivation for introducing JobCallbackPointer::notifier
feels insufficient at best. There is nothing conceptually wrong with
exposing the helper job pointer to that job initiator -- initiators may
have legitimate reasons to want to message the helper job. The wrapped
noteCandidatesChange() is one such legitimate notification example.
Bug 5057: "Generated response lacks status code" (#752)
... for responses carrying a status-code with a numerical value of 0.
Upon receiving a response with such a status-code (e.g., 0 or 000),
Squid reported a (misleading) level-1 BUG message and sent a 500
"Internal Error" response to the client.
This BUG message exposed a different/bigger problem: Squid parser
declared such a response valid, while other Squid code could easily
misinterpret its 0 status-code value as scNone which has very special
internal meaning.
A similar problem existed for received responses with status-codes that
HTTP WG considers semantically invalid (0xx, 6xx, and higher values).
Various range-based status-code checks could misinterpret such a
received status-code as being cachable, as indicating a control message,
or as having special for-internal-use values scInvalidHeader and
scHeaderTooLarge.
Unfortunately, HTTP/1 does not explicitly define how a response with a
status-code having an invalid response class (e.g., 000 or 600)
should be handled, but there may be an HTTP WG consensus that such
status-codes are semantically invalid:
Since leaking semantically invalid response status-codes into Squid code
is dangerous for response retry, routing, caching, and other logic, we
now reject such responses at response parsing time.
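For illustration only, the acceptance rule now boils down to requiring
a known-valid response class; a standalone sketch, not the actual
parser code:

    // accept only semantically valid HTTP status-code classes (1xx-5xx);
    // rejects 0, 000, other 0xx, 6xx, and higher values
    static bool
    acceptableStatusCode(const int code)
    {
        return code >= 100 && code <= 599;
    }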
Also fixed logging of the (last) received status-code (%<Hs) when we
cannot parse the response status-line or headers: We now store the
received status-code (if we can parse it) in peer_reply_status, even if
it is too short or has a wrong response class. Prior to this change,
%<Hs was either not logged at all or, during retries, recorded a stale
value from the previous successfully parsed response.
Alex Rousskov [Tue, 10 Nov 2020 21:42:18 +0000 (21:42 +0000)]
Transactions exceeding client_lifetime are logged as _ABORTED (#748)
... rather than timed out (_TIMEOUT).
To record the right cause of death, we have to call terminateAll()
rather than setting logType.err.timedout directly. Otherwise, when
ConnStateData::swanSong() calls terminateAll(0), it overwrites our
direct setting.
Since commit 5ef5e5c, a socket write timeout triggers two things:
* reporting of a write error to the socket writer (as designed/expected)
* reporting of a socket read timeout to the socket reader (unexpected).
The exact outcome probably depends on the transaction state, but one
known manifestation of this bug is the following level-1 message in
cache.log, combined with an access.log record showing a
much-shorter-than-client_lifetime transaction response time.
    WARNING: Closing client connection due to lifetime timeout
Alex Rousskov [Wed, 4 Nov 2020 14:27:22 +0000 (14:27 +0000)]
Optimization: Avoid more SBuf::cow() reallocations (#744)
This optimization contains two parts:
1. A no-brainer part that allows SBuf to reuse MemBlob area previously
used by other SBufs sharing the same MemBlob. To see this change,
follow the "cowAvoided" code path modifications in SBuf::cow().
2. A part based on a rule of thumb: memmove is usually better than
malloc+memcpy. This part of the optimization (follow the "cowShift"
path) is only activated if somebody has consume()d from the buffer
earlier. The implementation is based on the heuristic that most
consuming callers follow the usual append-consume-append-... usage
pattern and want to preserve their buffer capacity.
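A self-contained illustration of the rule of thumb behind the cowShift
path: shift the unconsumed bytes to the start of the buffer we already
own instead of allocating a new buffer and copying into it (generic
code, not the actual SBuf/MemBlob implementation):

    #include <cstring>

    // preserve capacity for the append-consume-append-... usage pattern
    static void
    shiftInsteadOfRealloc(char *buf, size_t &start, const size_t length)
    {
        if (start && length)
            std::memmove(buf, buf + start, length); // handles overlap, unlike memcpy
        start = 0; // future appends reuse the tail capacity we just freed
    }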
MemBlob::consume() API mimics SBuf::consume() and std::string::erase(),
ignoring an excessive number of bytes rather than throwing an error.
Also detailed an old difference between an SBuf::cow() requiring just a
new buffer allocation and the one also requiring data copying.
Alex Rousskov [Tue, 27 Oct 2020 23:33:39 +0000 (23:33 +0000)]
Do not send keep-alive or close in HTTP Upgrade requests (#732)
A presence of a Connection:keep-alive or Connection:close header in an
Upgrade request sent by Squid breaks some Google Voice services. FWIW,
these headers are not present in RFC 7230 Upgrade example, and popular
client requests do not send them.
* In most cases, Squid was sending Connection:keep-alive which is
redundant in Upgrade requests (because they are HTTP/1.1).
* In rare cases (e.g., server_persistent_connections=off), Squid was
sending Connection:close. Since we cannot send that header and remain
compatible with popular Upgrade-dependent services, we now do not send
it but treat the connection as non-persistent if the server does not
upgrade and expects us to continue reusing the connection.
Avoid null pointer dereferences when dynamic_cast'ing to SwapDir (#743)
Detected by Coverity. CID 1461158: Null pointer dereferences
(FORWARD_NULL).
When fixing these problems, we moved a few cache_dir iterations into
Disks.cc by applying the "Only Store::Disks can iterate cache_dirs"
design principle to the changed code. The Disks class is responsible for
maintaining (and will eventually encapsulate all the knowledge of) the
set of cache_dirs. Adjusted cache_cf.cc no longer depends on Disk.h.
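The general fix pattern is to test the dynamic_cast result before use;
a minimal sketch, where storeEntity and doGoodThings() are illustrative
placeholders:

    // unsafe: dereferences a possibly-nil cast result
    //     dynamic_cast<SwapDir *>(storeEntity)->doGoodThings();
    // safe: act only when the entity really is a cache_dir
    if (const auto swapDir = dynamic_cast<SwapDir *>(storeEntity))
        swapDir->doGoodThings();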
Removed StoreClient::created() and improved PURGE code quality (#734)
The StoreClient::created() callback method was probably added in the
hope of making Store lookups asynchronous, but that functionality has
not been
implemented, leaving convoluted and dangerous synchronous created() call
chains behind. Moreover, properly supporting asynchronous Store lookups
in modern code is likely to require a very different API.
Removal of created() allowed us to greatly simplify PURGE processing,
eliminating some tricky state, such as `purging` and `lookingforstore`.
Also removed all Store::getPublic*() methods as no longer used.
Amos Jeffries [Sat, 17 Oct 2020 05:33:11 +0000 (05:33 +0000)]
Cleanup helper.h classes (#719)
Polish up the classes in helper.h to use proper constructors as the
caller API for creation + initialization. Use C++11 initialization for
default values.
* no "virtual" in helper class destructor declaration could create
memory leaks in future (poorly) refactored code; the gained protection
is probably worth adding the (currently unused) virtual table to the
helper class; resolves Coverity issue CID 1461141 (VIRTUAL_DTOR)
* missing Comm::Connection timers initialization on helper startup
* multiple initialization of values used for state accounting
* initialization of booleans to non-0 integer values
* initialization of char using numeric values
* missing mgr:filedescriptors description for helper sockets
Do not duplicate free disk slots on diskers restart (#731)
When a disker process starts, it scans the on-disk storage to populate
shared-memory indexes of cached entries and unused/free slots. This
process may take more than ten minutes for large caches. Squid workers
use these indexes as they are being populated by diskers - workers do
not wait for the slow index rebuild process to finish. Cached entries
can be retrieved and misses can be cached almost immediately.
The disker does not "lock" the free slots to itself because the disker
does not want to preclude workers from caching new entries while the
disker is scanning the rock storage to build a complete index of old
cached entries (and free slots). The disker knows that it shares the
disk slot index with workers and is careful to populate the indexes
without confusing workers.
However, if the disker process is restarted for any reason (e.g., a
crash or kid registration timeout), the disker starts scanning its
on-disk storage from the beginning, adding to the indexes that already
contain some entries (added by the first disker incarnation and adjusted
by workers). An attempt to index the same cached object twice may remove
that object. Such a removal would be wasteful but not dangerous.
Indexing a free/unused slot twice can be disastrous:
* If Squid is lucky, the disker quickly hits an assertion (or a fatal
exception) when trying to add the already free slot to the free slot
collection, as long as no worker starts using the free slot between
additions (detailed in the next bullet).
* Unfortunately, there is also a good chance that a worker starts using
the free slot before the (restarted) disker adds it the second time.
In this case, the "double free" event cannot be detected. Both free
slot copies (pointing to the same disk location) will eventually be
used by a worker to cache new objects. In the worst case, it may lead
to completely wrong cached response content being served to an
unsuspecting user. The risk is partially mitigated by the fact that
disker crashes/restarts are rare.
Now, if a disker did not finish indexing before being restarted, it
resumes from the next db slot, thus avoiding indexing the same slot
twice. In other words, the disker forgets/ignores all the slots scanned
prior to the restart. Squid logs "Resuming indexing cache_dir..."
instead of the usual "Loading cache_dir..." to mark these (hopefully
rare) occurrences.
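A sketch of the resumption decision; the shared-memory bookkeeping and
names here are illustrative, not the actual rock code:

    // on disker (re)start: consult shared info about prior indexing progress
    if (const auto resumeSlot = sharedRebuildProgress.nextUnscannedSlot()) {
        debugs(47, DBG_IMPORTANT, "Resuming indexing cache_dir...");
        currentSlot = *resumeSlot; // skip slots the previous incarnation indexed
    } else {
        debugs(47, DBG_IMPORTANT, "Loading cache_dir...");
        currentSlot = 0; // fresh scan from the beginning
    }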
Also simplified code that delays post-indexing revalidation of cache
entries (i.e. store_dirs_rebuilding hacks). We touched that code because
the updated rock code will now refuse to reindex the already indexed
cache_dir. That decision relies on shared memory info and should not be
made where the old code was fiddling with store_dirs_rebuilding level.
After several attempts resulted in subtle bugs, we decided to simplify
that hack to reduce the risks of mismanaging store_dirs_rebuilding.
Adjusted old level-1 "Store rebuilding is ... complete" messages to
report more details (especially useful when rebuilding kid crashes). The
code now also reports some of the "unknown rebuild goal" UFS cases
better, but more work is needed in that area.
Also updated several rebuild-related counters to use int64_t instead of
int. Those changes stemmed from the need to add a new counter
(StoreRebuildData::validations), and we did not want to add an int
counter that will sooner or later overflow (especially when counting db
slots (across all cache_dirs) rather than just cache entries (from one
cache_dir)). That new counter interacted with several others, so we
had to update them as well. Long-term, all old StoreRebuildData counters
and the cache_dir code feeding them should be updated/revised.
Amos Jeffries [Fri, 28 Aug 2020 18:47:04 +0000 (18:47 +0000)]
Replaced X-Cache and X-Cache-Lookup headers with Cache-Status (#705)
See https://tools.ietf.org/html/draft-ietf-httpbis-cache-header
Also switched to identifying Squid instance in the header using
unique_hostname(), fixing a bug affecting proxies that share the same
visible_hostname in a cluster. The Cache-Status field values will now
point to a specific proxy in such a cluster.
The new initial lookup reporting (formerly X-Cache-Lookup)
implementation has several differences:
* The reporting of the lookup is now unconditional, just like the
Cache-Status reporting itself. The dropped X-Cache-Lookup required
--enable-cache-digests. We removed that precondition because
Cache-Status already relays quite a bit of information, and sharing
lookup details is unlikely to tilt the balance in most environments.
The original lookup reporting code was conditional because the feature
was added for Cache Digests debugging, not because we wanted to hide
the info. Folks later discovered that the info is useful in many other
cases. If we do want to hide this information, it should be done
directly rather than via a (no) Cache Digests side effect.
* The initial lookup classification can no longer be overwritten by
additional Store lookups. Official code allowed such rewrites due to
implementation bugs. If we only report one lookup, the first lookup
classification is the most valuable one and almost eliminates doubts
about the cache state at the request time. Ideally, we should also
exclude various internal Store lookup checks that may hide a matching
cache entry, but that exclusion is difficult to implement with the
current needlessly asynchronous create() Store API.
* Lookup reporting now covers more use cases. The official code probably
missed or mishandled various PURGE/DELETE use cases and did not
distinguish absence of Store lookup because of CC:no-cache from other
lookup bypass cases (e.g., errors). More work is probably needed to
cover every lookup-avoiding case.
Amos Jeffries [Sun, 16 Aug 2020 02:21:22 +0000 (02:21 +0000)]
Improve Transfer-Encoding handling (#702)
Reject messages containing Transfer-Encoding header with coding other
than chunked or identity. Squid does not support other codings.
For the sake of simplicity and security, also reject messages where
Transfer-Encoding contains unnecessary complex values that are
technically equivalent to "chunked" or "identity" (e.g., ",,chunked" or
"identity, chunked").
RFC 7230 formally deprecated and removed identity coding, but it is
still used by some agents.
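For illustration, the check treats the whole field value strictly
instead of parsing the coding list; a standalone sketch, not Squid's
parser (which would also normalize case and surrounding whitespace):

    #include <string>

    // accept only simple, unambiguous Transfer-Encoding values, rejecting
    // technically equivalent forms like ",,chunked" or "identity, chunked"
    static bool
    acceptableTransferEncoding(const std::string &fieldValue)
    {
        return fieldValue == "chunked" || fieldValue == "identity";
    }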
Amos Jeffries [Fri, 14 Aug 2020 15:05:31 +0000 (15:05 +0000)]
Forbid obs-fold and bare CR whitespace in framing header fields (#701)
Header folding has been used for various attacks on HTTP proxies and
servers. RFC 7230 prohibits sending obs-fold (in any header field) and
allows the recipient to reject messages with folded headers. To reduce
the attack space, Squid now rejects messages with folded Content-Length
and Transfer-Encoding header field values. TODO: Follow RFC 7230 status
code recommendations when rejecting.
Bare CR is a CR character that is not followed by a LF character.
Similar to folding, bare CRs have been used for various attacks. HTTP
does not allow sending bare CRs in Content-Length and Transfer-Encoding
header field values. Squid now rejects messages with bare CR characters
in those framing-related field values.
When rejecting, Squid informs the admin with a level-1 WARNING such as
    obs-fold seen in framing-sensitive Content-Length: ...
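A standalone sketch of the two checks applied to framing-sensitive
field values (not the actual parser code):

    #include <string>

    // true if the value contains obs-fold (a folded line) or a bare CR
    static bool
    hasForbiddenFramingChars(const std::string &v)
    {
        for (size_t i = 0; i < v.size(); ++i) {
            if (v[i] == '\n')
                return true; // obs-fold: the value spans multiple lines
            if (v[i] == '\r' && (i + 1 >= v.size() || v[i + 1] != '\n'))
                return true; // bare CR: a CR not followed by an LF
        }
        return false;
    }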
Ignore SMP queue responses made stale by worker restarts (#711)
When a worker restarts (for any reason), the disker-to-worker queue may
contain disker responses to I/O requests sent by the previous
incarnation of the restarted worker process (the "previous generation"
responses). Since the current response:request mapping mechanism relies
on a 32-bit integer counter, and a worker process always starts counting
from 0, there is a small chance that the restarted worker may see a
previous generation response that accidentally matches the current
generation request ID.
For writing transactions, accepting a previous generation response may
mean unlocking a cache entry too soon, making not yet written slots
available to other workers that might read wrong content. For reading
transactions, accepting a previous generation response may mean
immediately serving wrong response content (that has already been
overwritten on disk with the information that the restarted worker is
now waiting for).
To avoid these problems, each disk I/O request now stores the worker
process ID. Workers ignore responses to requests originated by a
different/mismatching worker ID.
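A sketch of the added tagging and filtering; the message layout and
names are illustrative, not the actual Ipc code:

    #include <cstdint>
    #include <unistd.h>

    struct DiskIoMsg {           // illustrative stand-in for the real message
        uint32_t requestId = 0;  // 32-bit counter; every worker restarts at 0
        pid_t workerPid = 0;     // new: identifies the requestor incarnation
    };

    // a worker drops responses addressed to its previous incarnation
    static bool
    staleResponse(const DiskIoMsg &response)
    {
        return response.workerPid != getpid();
    }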
Alex Rousskov [Mon, 10 Aug 2020 18:46:02 +0000 (18:46 +0000)]
Do not send keep-alive in 101 (Switching Protocols) responses (#709)
... because it breaks clients using websocket_client[1] library and is
redundant in our HTTP/1.1 control messages anyway.
I suspect that at least some buggy clients are confused by a multi-value
Connection field rather than the redundant keep-alive signal itself, but
let's try to follow RFC 7230 Upgrade example more closely this time and
send no keep-alive at all.
trapexit [Sun, 9 Aug 2020 06:14:51 +0000 (06:14 +0000)]
Add http_port sslflags=CONDITIONAL_AUTH (#510)
Enabling this flag removes SSL_VERIFY_FAIL_IF_NO_PEER_CERT from the
SSL_CTX_set_verify callback. This means that client certificate
verification occurs iff the client provides a certificate.
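In OpenSSL terms, the flag changes the verification mode roughly as in
this sketch (configureClientVerification() is an illustrative wrapper):

    #include <openssl/ssl.h>

    static void
    configureClientVerification(SSL_CTX *ctx, const bool conditionalAuth)
    {
        int mode = SSL_VERIFY_PEER;
        if (!conditionalAuth)
            mode |= SSL_VERIFY_FAIL_IF_NO_PEER_CERT; // require a client certificate
        // with CONDITIONAL_AUTH: supplied certificates are verified,
        // but a missing certificate no longer fails the handshake
        SSL_CTX_set_verify(ctx, mode, nullptr); // nullptr: default verify procedure
    }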
Amos Jeffries [Thu, 6 Aug 2020 22:49:34 +0000 (22:49 +0000)]
Fix: cannot stat tests/STUB.h No such file or directory (#707)
Since 2b5ebbe (PR #670), we have been seeing "random" build failures.
The tests/Stub.am generated by source-maintenance.sh has not included
the tests/STUB.h file for explicit distribution. The source file was
built and included only when seen as a dependency of the files using it.
When stubs are copied for use by the extra binaries from tools/
directory, there is a secondary side effect from "make -j":
* When the -j concurrency is small, tests/STUB.h gets copied during the
first cycle, and parallel builds compiling other copied files succeed.
* When the -j concurrency is large, tests/STUB.h may be deferred to a
later copy cycle, and the first actual compile task needing it fails
with `cannot stat 'src/tests/STUB.h': No such file or directory`.
Add tests/STUB.h to src/Makefile.am EXTRA_DIST to restore the previous
distribution behavior (when STUB_SOURCE contained it explicitly).
Update the pinger source paths that were omitted in 2b5ebbe.
Amos Jeffries [Tue, 4 Aug 2020 04:34:32 +0000 (04:34 +0000)]
Enforce token characters for field-name (#700)
RFC 7230 defines field-name as a token. Request splitting and cache
poisoning attacks have used non-token characters to fool broken HTTP
agents behind or in front of Squid for years. This change should
significantly reduce that abuse.
If we discover exceptional situations that need special treatment, the
relaxed parser can allow them on a case-by-case basis (while being extra
careful about framing-related header fields), just like we already
tolerate some header whitespace (e.g., between the response header
field-name and colon).
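For reference, a standalone sketch of the RFC 7230 tchar test now
enforced for field-names (not Squid's actual parser):

    #include <cctype>
    #include <cstring>

    // tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
    //         "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
    static bool
    isTchar(const unsigned char c)
    {
        return std::isalnum(c) || (c && std::strchr("!#$%&'*+-.^_`|~", c));
    }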
Do not stall while debugging a scan of an empty store_table (#699)
Non-SMP Squid and each SMP kid allocate a store_table hash. With large
caches, some allocated store_table may have millions of buckets.
Recently we discovered that it is almost impossible to debug SMP Squid
with a large but mostly empty disk cache because the disker registration
times out while store_table is being built -- the disker process is
essentially blocked on a very long debugging loop.
The code suspends the loop every 500 entries (to take care of tasks like
kid registration), but there are no pauses when scanning millions of
empty hash buckets where every bucket prints two debug lines.
Squid now does not report empty store_table buckets explicitly. When
dealing with large caches, the debugged process may still be blocked for
a few hundred milliseconds (instead of many seconds) while scanning the
entire (mostly empty) store_table. Optimizing that should be done as a
part of the complex "store search" API refactoring.
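The reporting loop change, as a sketch; the hash-walk details and
helper names are illustrative:

    // report only occupied buckets; empty ones no longer produce debug lines
    for (int i = 0; i < store_table->size; ++i) {
        if (!store_table->buckets[i])
            continue; // empty bucket: skip silently
        dumpBucket(i); // illustrative reporting helper
        if ((++reported % 500) == 0)
            suspendBriefly(); // illustrative: the existing periodic pause
    }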
peerDigestHandleReply() was missing a premature EOF check. The existing
peerDigestFetchedEnough() cannot detect EOF because it does not have
access to receivedData.length used to indicate the EOF condition. We did
not adjust peerDigestFetchedEnough() because it is abused to check both
post-I/O state and the state after each digest processing step. The
latter invocations lack access to receivedData.length and should not
really bother with EOF anyway.