Alex Rousskov [Fri, 21 May 2021 18:47:36 +0000 (18:47 +0000)]
Bug 4528: ICAP transactions quit on async DNS lookups (#795)
The bug directly affected some ICAP OPTIONS transactions and indirectly
affected some ICAP REQMOD/RESPMOD transactions:
* OPTIONS: When a transaction needed to look up an IP address of the
ICAP service, and that address was not cached by Squid, it ended
prematurely because Adaptation::Icap::Xaction::doneAll() was unaware
of ipcache_nbgethostbyname()'s async nature. This bug is fixed now.
* REQMOD/RESPMOD: Adaptation::Icap::ModXact masked the _direct_ effects
of the bug: ModXact::startWriting() sets state.writing before calling
openConnection() which schedules the DNS lookup. That "I am still
writing" state makes ModXact::doneAll() false while a REQMOD or
RESPMOD transaction waits for the DNS lookup.
However, REQMOD and RESPMOD transactions that require an OPTIONS
transaction (because the service options have never been fetched
before or have expired) could still fail because the OPTIONS
transaction they trigger could fail as described in the first bullet.
For example, the first few REQMOD and RESPMOD transactions for a given
service -- all those started before the DNS lookup completes and Squid
caches its result -- could fail this way. With the OPTIONS now fixed,
these REQMOD and RESPMOD transactions should work correctly.
Alex Rousskov [Mon, 3 May 2021 21:40:14 +0000 (21:40 +0000)]
Stop processing a response if the Store entry is gone (#806)
HttpStateData::processReply() is usually called synchronously, after
checking the Store entry status, but there are other call chains.
StoreEntry::isAccepting() adds STORE_PENDING check to the ENTRY_ABORTED
check. An accepting entry is required for writing into Store. In theory,
an entry may stop accepting new writes (without being aborted) if
FwdState or another entry co-owner writes an error response due to a
timeout or some other problem that happens while we are waiting for an
I/O callback or some such.
N.B. HTTP and FTP code cannot use StoreEntry::isAccepting() directly
because their network readers may not be the ones writing into Store --
the content may go through the adaptation layer first and that layer
might complete the store entry before the entire peer response is
received. For example, imagine an adaptation service that wants to log
the whole response containing a virus but also replaces that (large)
response with a small error reply.
Use already parsed request-target URL in cache manager and
update CacheManager to Tokenizer-based URL parsing
Remove the use of sscanf() and regex string processing, which have
proven to be problematic on many levels, most particularly with
regard to tolerating normally harmless garbage syntax in received
URLs.
Support for generic URI schemes is added, possibly resolving some
issues reported with ftp:// URLs and manager access via ftp_port
sockets.
Truly generic support for /squid-internal-mgr/ path prefix is
added, fixing some user confusion about its use on cache_object:
scheme URLs.
TODO: support for single-name parameters and URL #fragments
are left to future updates. As is refactoring the QueryParams
data storage to avoid SBuf data copying.
Traffic parsing errors should be reported at level 2 (or below) because
Squid admins can usually do nothing about them and a noisy cache.log
hides important problems that they can and should do something about.
TODO: Detail this and similar parsing errors for %err_detail logging.
Alex Rousskov [Mon, 15 Mar 2021 14:05:05 +0000 (14:05 +0000)]
Fix HttpHeaderStats definition to include hoErrorDetail (#787)
... when Squid is built --with-openssl.
We were "lucky" that the memory area after HttpHeaderStats was not,
apparently, used for anything important enough when HttpHeader::parse(),
indirectly called from errorInitialize() during initial Squid
configuration, was writing to it.
Detected by using AddressSanitizer.
The bug was created in commit 02259ff and cemented by commit 2673511.
Alex Rousskov [Fri, 19 Feb 2021 16:14:37 +0000 (16:14 +0000)]
Bug 3556: "FD ... is not an open socket" for accept() problems (#777)
Many things could go wrong after Squid successfully accept(2)ed a socket
and before that socket was registered with Comm. During that window, the
socket is stored in a refcounted Connection object. When that object was
auto-destroyed on the error handling path, its attempt to auto-close the
socket would trigger level-1 BUG 3556 errors because the socket was not
yet opened from Comm's point of view. This change eliminates that "already
in Connection but not yet in Comm" window.
Before this fix, BUG 3556 errors stalled affected clients and leaked their FDs.
TODO: Keeping that window closed should not require a human effort, but
achieving that goal probably requires significant changes. We are
investigating.
Since commit 5ef5e5c, a socket write timeout triggers two things:
* reporting of a write error to the socket writer (as designed/expected)
* reporting of a socket read timeout to the socket reader (unexpected).
The exact outcome probably depends on the transaction state, but one
known manifestation of this bug is the following level-1 message in
cache.log, combined with an access.log record showing a
much-shorter-than-client_lifetime transaction response time.
WARNING: Closing client connection due to lifetime timeout
Alex Rousskov [Tue, 10 Nov 2020 21:42:18 +0000 (21:42 +0000)]
Transactions exceeding client_lifetime are logged as _ABORTED (#748)
... rather than timed out (_TIMEOUT).
To record the right cause of death, we have to call terminateAll()
rather than setting logType.err.timedout directly. Otherwise, when
ConnStateData::swanSong() calls terminateAll(0), it overwrites our
direct setting.
Amos Jeffries [Sun, 16 Aug 2020 02:21:22 +0000 (02:21 +0000)]
Improve Transfer-Encoding handling (#702)
Reject messages containing Transfer-Encoding header with coding other
than chunked or identity. Squid does not support other codings.
For the sake of simplicity and security, also reject messages where
Transfer-Encoding contains unnecessarily complex values that are
technically equivalent to "chunked" or "identity" (e.g., ",,chunked" or
"identity, chunked").
RFC 7230 formally deprecated and removed identity coding, but it is
still used by some agents.
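The stricter policy might be sketched like this (a hypothetical checker, not Squid's actual header parser):

```cpp
#include <string>

// Accept only a value that is exactly "chunked" or exactly "identity";
// anything else, including technically equivalent list forms such as
// ",,chunked", is rejected.
bool acceptableTransferEncoding(const std::string &value) {
    return value == "chunked" || value == "identity";
}
```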
Amos Jeffries [Fri, 14 Aug 2020 15:05:31 +0000 (15:05 +0000)]
Forbid obs-fold and bare CR whitespace in framing header fields (#701)
Header folding has been used for various attacks on HTTP proxies and
servers. RFC 7230 prohibits sending obs-fold (in any header field) and
allows the recipient to reject messages with folded headers. To reduce
the attack space, Squid now rejects messages with folded Content-Length
and Transfer-Encoding header field values. TODO: Follow RFC 7230 status
code recommendations when rejecting.
Bare CR is a CR character that is not followed by a LF character.
Similar to folding, bare CRs have been used for various attacks. HTTP
does not allow sending bare CRs in Content-Length and Transfer-Encoding
header field values. Squid now rejects messages with bare CR characters
in those framing-related field values.
When rejecting, Squid informs the admin with a level-1 WARNING such as
obs-fold seen in framing-sensitive Content-Length: ...
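A minimal model of the framing-value check (hypothetical helper; Squid's real parser works on raw buffers rather than extracted std::string values):

```cpp
#include <string>

// An extracted field value containing any CR or LF indicates either an
// obs-fold (CRLF followed by whitespace) or a bare CR; both are forbidden
// in framing-sensitive fields such as Content-Length and Transfer-Encoding.
bool hasForbiddenFramingChars(const std::string &fieldValue) {
    for (char c : fieldValue)
        if (c == '\r' || c == '\n')
            return true;
    return false;
}
```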
Amos Jeffries [Tue, 4 Aug 2020 04:34:32 +0000 (04:34 +0000)]
Enforce token characters for field-name (#700)
RFC 7230 defines field-name as a token. Request splitting and cache
poisoning attacks have used non-token characters to fool broken HTTP
agents behind or in front of Squid for years. This change should
significantly reduce that abuse.
If we discover exceptional situations that need special treatment, the
relaxed parser can allow them on a case-by-case basis (while being extra
careful about framing-related header fields), just like we already
tolerate some header whitespace (e.g., between the response header
field-name and colon).
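The RFC 7230 token rule being enforced can be sketched as (hypothetical helpers, not Squid's parser):

```cpp
#include <cctype>
#include <string>

// tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
//         "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA   (RFC 7230)
bool isTokenChar(unsigned char c) {
    if (std::isalnum(c))
        return true;
    switch (c) {
    case '!': case '#': case '$': case '%': case '&': case '\'':
    case '*': case '+': case '-': case '.': case '^': case '_':
    case '`': case '|': case '~':
        return true;
    default:
        return false;
    }
}

// A field-name is one or more tchar characters.
bool validFieldName(const std::string &name) {
    if (name.empty())
        return false;
    for (unsigned char c : name)
        if (!isTokenChar(c))
            return false;
    return true;
}
```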
Do not stall while debugging a scan of an empty store_table (#699)
Non-SMP Squid and each SMP kid allocate a store_table hash. With large
caches, some allocated store_table may have millions of buckets.
Recently we discovered that it is almost impossible to debug SMP Squid
with a large but mostly empty disk cache because the disker registration
times out while store_table is being built -- the disker process is
essentially blocked on a very long debugging loop.
The code suspends the loop every 500 entries (to take care of tasks like
kid registration), but there are no pauses when scanning millions of
empty hash buckets where every bucket prints two debug lines.
Squid no longer reports empty store_table buckets explicitly. When
dealing with large caches, the debugged process may still be blocked for
a few hundred milliseconds (instead of many seconds) while scanning the
entire (mostly empty) store_table. Optimizing that should be done as a
part of the complex "store search" API refactoring.
peerDigestHandleReply() was missing a premature EOF check. The existing
peerDigestFetchedEnough() cannot detect EOF because it does not have
access to receivedData.length used to indicate the EOF condition. We did
not adjust peerDigestFetchedEnough() because it is abused to check both
post-I/O state and the state after each digest processing step. The
latter invocations lack access to receivedData.length and should not
really bother with EOF anyway.
Alex Rousskov [Mon, 6 Jul 2020 08:04:31 +0000 (08:04 +0000)]
Honor on_unsupported_protocol for intercepted https_port (#689)
... when Squid discovers a non-TLS client while parsing its handshake.
For https_port traffic, ConnStateData::switchToHttps() relies on start()
to set preservingClientData_ correctly, but shouldPreserveClientData(),
called by start() to set preservingClientData_, was not preserving TLS
bytes in the https_port start() context. Typical debug messages:
parseTlsHandshake: Got something other than TLS ... Cannot SslBump
tunnelOnError: may have forgotten client data; send error: 40
SslBump: Support parsing GREASEd (and future) TLS handshakes (#663)
A peeking or staring Squid aborted TLS connections containing a GREASE
version in the supported_versions handshake extension (e.g., 0x3a3a).
Here is a sample cache.log error (debug_options ALL,1 83,2):
The same problem would apply to some other "unsupported" (and currently
non-existent) TLS versions (e.g., 0x0400). The growing popularity of
GREASE values exposed the bug (just like RFC 8710 was designed to do).
Squid now ignores all unsupported-by-Squid TLS versions in handshake
extensions. Squid still requires a supported TLS version for
framing-related handshake fields -- no changes there.
It is difficult to define "supported" in this context precisely because
a peeking Squid only observes the TLS handshake. Our handshake parser
may report the following versions: SSL v2.0, SSL v3.0, and TLS v1.x
(with x >= 0). This logic allows us to safely parse the handshake (no
framing errors) and continue to make version-based decisions like
disabling OpenSSL TLS v1.3 support when the agents are negotiating TLS
v1.3+ (master cd29a42). Also, access logs benefit from this "support"
going beyond TLS v1.2 even though Squid cannot bump most TLS v1.3+
handshakes.
Squid was and still is compliant with the "MUST NOT treat GREASE values
differently from any unknown value" requirement of RFC 8710.
Also added source code comments to mark other places where unsupported
(for some definition of "support") TLS values, including GREASE values
may appear.
Folks studying debugging logs (where GREASE values may appear) will
probably either recognize their easy-to-remember 0x?A?A pattern or
not really know/care about RFC 8710 and its special values.
DrDaveD [Fri, 26 Jun 2020 12:09:19 +0000 (07:09 -0500)]
Bug 5051: Some collapsed revalidation responses never expire (#683)
* clear collapsed revalidation on a negative response
* fixup: Reduce chances of forgetting to call clearPublicKeyScope()
... in other/future sendClientUpstreamResponse() cases.
TODO:
1. Handle sendClientOldEntry() cases. We probably do not want future
collapsed revalidation clients to accidentally collapse on the new
entry that was rejected by this code. After all, such collapsing is
exactly what caused bug 5051 AFAICT!
2. Consider cases where a collapsed revalidation did not reach
handleIMSReply(). For example, imagine a client that created the
collapsed revalidation entry and then disconnected or died. How to
protect other clients on hitting that essentially stale entry
forever, keeping bug 5051 alive?
Co-authored-by: Alex Rousskov <rousskov@measurement-factory.com>
Amos Jeffries [Thu, 21 May 2020 14:42:02 +0000 (14:42 +0000)]
Add flexible RFC 3986 URI encoder (#617)
Use AnyP::Uri namespace to self-document encoder scope and
coding type.
Use SBuf and CharacterSet for more flexible input and actions
than the previous RFC 1738 encoder, allowing callers to trivially
determine which characters are encoded.
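The flexible-encoder idea might look roughly like this (a sketch using std::string in place of SBuf/CharacterSet; percentEncode is an invented name):

```cpp
#include <string>

// The caller supplies the set of characters that must be percent-encoded;
// everything else passes through unchanged (RFC 3986 style).
std::string percentEncode(const std::string &in, const std::string &charsToEncode) {
    static const char hex[] = "0123456789ABCDEF";
    std::string out;
    for (unsigned char c : in) {
        if (charsToEncode.find(static_cast<char>(c)) != std::string::npos) {
            out += '%';
            out += hex[c >> 4];
            out += hex[c & 0xF];
        } else {
            out += static_cast<char>(c);
        }
    }
    return out;
}
```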
SslBump: Disable OpenSSL TLSv1.3 support for older TLS traffic (#620)
* SslBump: Disable OpenSSL TLSv1.3 support for older TLS traffic (#588)
This change fixes connections stalled while being peeked at during
step2, from IE11 and Firefox v56 running on Windows 10 (at least),
producing "Handshake with SSL server failed" cache.log errors with this
OpenSSL detail:
Disabling TLS v1.3 support for older TLS connections is required
because, in the affected environments, OpenSSL detects and, for some
unknown reason, blocks a "downgrade" when a server claims support for
TLS v1.3 but then accepts a TLS v1.2 connection from an older client.
This is a Measurement Factory project
* Fixed TLS selected_version parsing and debugging
The description of the expected input was given to the wrong parsing
function. This typo may have affected parsing because it told the TLS
version tokenizer that more data may be expected for the already fully
extracted extension. I believe that the lie could affect error
diagnostic when parsing malformed input, but had no effect on handling
well-formed TLS handshakes (other than less-specific debugging).
Detected by Coverity. CID 1462621: Incorrect expression (NO_EFFECT)
Fixed prohibitively slow search for new SMP shm pages (#523)
The original Ipc::Mem::PageStack algorithm used an optimistic linear
search to locate the next free page. Measurements showed that, in
certain cases, that search could take seconds on busy caches, iterating
over millions of page index items and effectively stalling all workers
while showing 100% CPU utilization.
The new code uses a deterministic stack. It is still lock-free. The spin
loops around stack head pointer updates are expected to quit after at
most a few iterations, even with a large number of workers. These loops
do not have ABA update problems. They are not spin locks.
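The ABA-avoidance idea can be illustrated with a versioned head word (a model only, not the real Ipc::Mem::PageStack code; all names here are invented):

```cpp
#include <atomic>
#include <cstdint>

// The stack head packs a value together with a monotonically growing version
// counter into one 64-bit word, so a compare-exchange cannot succeed against
// a stale head even if the value part has returned to an old state (the
// classic ABA hazard).
inline uint64_t packHead(uint32_t value, uint32_t version) {
    return (static_cast<uint64_t>(version) << 32) | value;
}
inline uint32_t headValue(uint64_t word) { return static_cast<uint32_t>(word); }
inline uint32_t headVersion(uint64_t word) { return static_cast<uint32_t>(word >> 32); }

// The spin loop around the head update: lock-free, not a spin lock, because
// some thread always makes progress and the loop quits after a few tries.
void setHead(std::atomic<uint64_t> &head, uint32_t newValue) {
    uint64_t old = head.load();
    while (!head.compare_exchange_weak(old, packHead(newValue, headVersion(old) + 1))) {
        // compare_exchange_weak refreshed `old`; just retry
    }
}
```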
Alex Rousskov [Wed, 13 May 2020 14:05:00 +0000 (14:05 +0000)]
Validate Content-Length value prefix (#629)
The new code detects all invalid Content-Length prefixes but the old
code was already rejecting most invalid prefixes using strtoll(). The
newly covered (and now rejected) invalid characters are
* explicit "+" sign;
* explicit "-" sign in "-0" values;
* isspace(3) characters that are not (relaxed) OWS characters.
In most deployment environments, the last set is probably empty because
the relaxed OWS set has all the POSIX/C isspace(3) characters but the
new line, and the new line is unlikely to sneak in past other checks.
Thank you, Amit Klein <amit.klein@safebreach.com>, for elevating the
importance of this 2016 TODO (added in commit a1b9ec2).
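The prefix rules above can be sketched as follows (a hypothetical checker; Squid's relaxed OWS handling is more involved than the space-and-tab skip shown here):

```cpp
#include <cctype>
#include <string>

// After optional relaxed OWS (modeled as space and horizontal tab), a
// Content-Length value must start with a digit: no "+", no "-", and no
// other isspace(3) characters such as VT or CR.
bool validContentLengthPrefix(const std::string &rawValue) {
    std::string::size_type i = 0;
    while (i < rawValue.size() && (rawValue[i] == ' ' || rawValue[i] == '\t'))
        ++i; // skip relaxed OWS
    return i < rawValue.size() && std::isdigit(static_cast<unsigned char>(rawValue[i]));
}
```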
gcc-8+ build error: undefined reference to __atomic_is_lock_free (#625)
Compilers warned about AC_SEARCH_LIBS(__atomic_load_8)-generated code.
Newer, stricter compilers (e.g., gcc-8) exit with an error, resulting
in AC_SEARCH_LIBS failure when determining whether libatomic is needed.
More at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=907277#30
It is unclear whether autoconf will ever handle this case better. Let's
use a custom-made test for now. The current test refuses to build Squid
on platforms where a program using std::atomic<T>::is_lock_free() cannot
be successfully linked (either with or without libatomic) for the
largest atomic T used by Squid (i.e., a 64-bit integer).
Linking with libatomic may be required for many reasons that we do not
fully understand, but we know that std::atomic<T>::is_lock_free() does
require linking with libatomic in some environments, even where T is an
inlineable atomic. That is why we use that as a test case.
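The probe program that ./configure effectively builds might look like this (an approximation; the actual test code lives in Squid's configure machinery):

```cpp
#include <atomic>
#include <cstdint>

// If this translation unit links (with or without -latomic), the platform
// supports Squid's largest atomic type. The is_lock_free() call itself is
// what may pull in the libatomic dependency on some platforms.
bool atomicProbe() {
    std::atomic<uint64_t> v(0);
    ++v;
    v.is_lock_free(); // result unused; the link dependency is the real test
    return v.load() == 1;
}
```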
Restore PURGE miss replies to be "404 Not Found", not "0 Init" (#586)
Since commit 6956579, PURGE requests resulted in invalid HTTP responses
with zero status code when Store lacked both GET and HEAD entries with
the requested URI.
Also adjusted Http::StatusLine packing code to avoid generating similar
invalid responses in the future (and to attract developer attention to
their presence in the code logic with a BUG message).
DrDaveD [Wed, 6 May 2020 07:12:15 +0000 (02:12 -0500)]
Bug 5030: Negative responses are never cached (#614)
Negative caching was blocked by checkCachable().
Since 3e98df2, Squid cached ENTRY_NEGCACHED entries in memory cache
only. Back then, storeCheckSwapable() prevented what later became
ENTRY_NEGCACHED entries from going to disk. That design was obscured by
commit 8350fe9, which renamed storeCheckSwapable() to
storeCheckCachable().
Commit 97754f5 violated that (obscured) design by adding a
checkCachable() call to StoreEntry::memoryCachable(), effectively
blocking ENTRY_NEGCACHED entries from the memory cache as well. That
call should have been added, but checkCachable() should not have denied
caching rights to ENTRY_NEGCACHED -- the corresponding check should have
been moved into StoreEntry::mayStartSwapOut().
By removing ENTRY_NEGCACHED from checkCachable(), we now allow
ENTRY_NEGCACHED entries into both memory and disk caches, subject to all
the other checks. We allow ENTRY_NEGCACHED to be cached on disk because
negative responses are fundamentally no different than positive ones:
HTTP allows caching of 4xx and 5xx responses expiring in the future.
Hopefully, the increased disk cache traffic will not be a problem.
Alex Rousskov [Thu, 23 Apr 2020 11:56:35 +0000 (05:56 -0600)]
Bug 5041: Missing Debug::Extra breaks build on hosts with systemd (#611)
* Bug 5041: Missing Debug::Extra breaks build on hosts with systemd
Master commit 6fa8c66 (i.e. Bug 5016 fix) relied on Debug::Extra added
by master commit (ccfbe8f) that was not ported to v4. The port of the
former master commit lacked the required piece of the latter commit.
The problem is invisible on hosts without a systemd package (that Squid
can find/use) and with Squids explicitly ./configured --without-systemd.
* "Minimum features" build test should be --without-systemd
* LDFLAGS were missing SYSTEMD_LIBS in builds with systemd support
Amos Jeffries [Mon, 20 May 2019 11:23:13 +0000 (11:23 +0000)]
ESI: convert parse exceptions into 500 status response (#411)
Produce a valid HTTP 500 status reply and continue operations when
ESI parser throws an exception. This will prevent incomplete ESI
responses reaching clients on server errors. Such responses might
have been cacheable and thus corrupted, albeit corrupted consistently
and at source by the reverse-proxy delivering them.
ESI: throw on large stack recursions (#408)
This reduces the impact on concurrent clients to only those
accessing the malformed resource.
Depending on what type of recursion is being performed the
resource may appear to the client with missing segments, or
not at all.
DrDaveD [Fri, 21 Feb 2020 05:12:04 +0000 (05:12 +0000)]
Bug 5022: Reconfigure kills Coordinator in SMP+ufs configurations (#556)
In these unsupported SMP+ufs configurations, depending on the deployment
specifics, the Coordinator process could exit due to swap state file
opening errors:
kid11| FATAL: UFSSwapDir::openLog: Failed to open swap log.
squidcontrib [Wed, 29 Jan 2020 06:10:04 +0000 (06:10 +0000)]
Remove pointer from the input of Digest nonce hashes (#549)
This is a follow-up to #491 (b20ce97), which hashed what was previously
revealed as plaintext. Removing the pointer from the input to the hash
removes the possibility that someone could recover a pointer by
reversing a hash. Having the pointer as input was not adding anything:
Squid remembers all outstanding nonces, so it really only requires
uniqueness, which is already guaranteed by the
authenticateDigestNonceFindNonce loop.
huaraz [Sat, 25 Jan 2020 03:36:49 +0000 (03:36 +0000)]
kerberos_ldap_group: fix encryption type for cross realm check (#542)
Newer setups require AESxxx encryption but old cross-realm
tickets are still using RC4. Remove the use of the cached client
ticket encryption type and use the configured default list
(which must include AESxxx and RC4).
Marcos Mello [Thu, 23 Jan 2020 12:07:40 +0000 (12:07 +0000)]
Bug 5016: systemd thinks Squid is ready before Squid listens (#539)
Use systemd API to send start-up completion notification if built
with libsystemd support. New configure option --with-systemd
can be used to force enable or disable the feature (default:
auto-detect on Linux platforms).
Bug 4864: !Comm::MonitorsRead assertion in maybeReadVirginBody() (#351)
This assertion is probably triggered when Squid retries/reforwards
server-first or step2+ bumped connections (after they fail).
Retrying/reforwarding such pinned connections is wrong because the
corresponding client-to-Squid TLS connection was negotiated based on the
now-failed Squid-to-server TLS connection, and there is no mechanism to
ensure that the new Squid-to-server TLS connection will have exactly the
same properties. Squid should forward the error to client instead.
Also fixed peer selection code that could return more than one PINNED
path with only the first path having the destination of the actual
pinned connection.
This is a Measurement Factory project
This is a limited equivalent to master branch commit 3dde9e52
Supply ALE to request_header_add/reply_header_add (#564)
Supply ALE to request_header_add and reply_header_add ACLs that need it
(e.g., external, annotate_client, and annotate_transaction ACLs). Fixes
"ACL is used in context without an ALE state" errors when external ACLs
are used in the same context (other ACLs do not yet properly disclose
that they need ALE).
Also provides HTTP reply to reply_header_add ACLs.
DrDaveD [Mon, 30 Dec 2019 20:43:33 +0000 (20:43 +0000)]
Bug 4735: Truncated chunked responses cached as whole (#528)
Mark responses received without the last chunk as responses that have
bad (and, hence, unknown) message body length (i.e. ENTRY_BAD_LENGTH).
If such responses were being cached, they will be released and will
stop being shareable.
Fix server_cert_fingerprint on cert validator-reported errors (#522)
The server_cert_fingerprint ACL mismatched when sslproxy_cert_error
directive was applied to validation errors reported by the certificate
validator because the ACL could not find the server certificate.
Fix the parsing of listings received from FTP services.
Also relaxed size/filename grammar used for DOS listings: Tolerate
multiple spaces between the size and the filename.
Fix shared memory size calculation on 64-bit systems (#520)
Since commit 2253ee0, the wrong type (uint32 instead of size_t) was used
to calculate the PagePool::theLevels size. theLevels memory (positioned
by different and correct code) did not overlap with the raw pages
buffer, but the raw pages buffer could, in some cases, be 32 bits short,
placing the last 4 bytes of the last page outside of allocated memory.
In practice, shared memory allocations are page-aligned, and the
difference in 4 bytes was probably compensated by the extra allocated
bytes in most (or perhaps even all) cases.
jijiwawa [Sat, 23 Nov 2019 10:24:41 +0000 (10:24 +0000)]
Bug 5008: SIGBUS in PagePool::level() with custom rock slot size (#515)
SMP Squids were crashing on arm64 due to incorrect memory alignment of
Ipc::Mem::PagePool::theLevels array. The relative position of the array
depends on the number of workers and the number of pages (influenced by
the cache capacity and slot size), so some configurations worked OK.
We have to manually align manually positioned fields inside shared
memory segments. Thankfully, C++11 provides alignment-computing APIs.
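The required rounding can be sketched as follows (a hypothetical helper; the actual fix uses C++11 alignment-computing APIs such as alignof):

```cpp
#include <cstddef>

// When laying out fields by hand inside a shared memory segment, each
// field's offset must be rounded up to that field's natural alignment;
// otherwise, strict platforms such as arm64 raise SIGBUS on access.
std::size_t alignedOffset(std::size_t rawOffset, std::size_t alignment) {
    return (rawOffset + alignment - 1) / alignment * alignment;
}
```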
Alex Rousskov [Sat, 23 Nov 2019 09:18:24 +0000 (09:18 +0000)]
Bug 5009: Build failure with older clang libc++ (#514)
Older clang libc++ implementations correctly reject implicit usage of an
explicit (in C++11) std::map copy constructor with "chosen constructor
is explicit in copy-initialization" errors. The same code becomes legal
in C++14[1], so newer libc++ implementation allow implicit usage (even
in C++11), but there is no need for copy-initialization here at all.
Evidently, libstdc++ has never declared constructors explicit.
The bug was seen with Apple clang in Xcode 5.1.1 (roughly upstream clang
3.4) and Xcode 6.2 (roughly upstream clang 3.5), both using libc++.
Amos Jeffries [Mon, 18 Nov 2019 12:06:56 +0000 (01:06 +1300)]
Fix detection of sys/sysctl.h (#511)
Make sure we test the EUI-specific headers using the same flags
chosen for the final build operations. This should make the
test detect the header as unavailable if the user options
would make the compiler #warning a fatal error later.
squidcontrib [Sun, 20 Oct 2019 18:59:08 +0000 (18:59 +0000)]
Hash Digest noncedata (#491)
These commits together
1. Hash the noncedata for Digest nonces before encoding,
to match the documentation.
2. Encode Digest nonces using hex, rather than base64.
Fix rock disk entry contamination related to aborted swapouts (#444)
Also probably fixed hit response corruption related to aborted rock
swapins.
The following disk entry contamination sequence was observed in detailed
cache logs during high-load Polygraph tests. Some of the observed
level-1 errors matched those in real-world deployments.
1. Worker A schedules AX – a request to write a piece of entry A to disk
slot X. The disker is busy writing and reading other slots for worker
A and other workers so AX stays in the A-to-Disker queue for a while.
2. Entry A aborts swapout (for any reason, including network errors
while receiving the being-stored response). Squid makes disk slot X
available for other entries to use. AX stays queued.
3. Another worker B picks up disk slot X (from the shared free disk slot
pool) and schedules BX, a request to write a piece of entry B to disk
slot X. BX gets queued in a B-to-Disker queue. AX stays queued.
4. The disker satisfies write request BX. Disk slot X contains entry B
contents now. AX stays queued.
5. The disker satisfies write request AX. Disk slot X is a part of entry
B slot chain but contains former entry A contents now! HTTP requests
for entry B now read entry A leftovers from disk and complain about
metadata mismatches (at best) or get wrong response contents (at
worst).
To prevent premature disk slot reuse, we now keep disk slots reserved
while they are in the disk queue, even if the corresponding cache entry
is long gone: Individual disk write requests now "own" the slot they are
writing. The Rock::IoState object owns reserved but not yet used slots
so that they can be freed when the object is gone. The disk entry owns
the (successfully written) slots added to its chain in the map.
The new slot ownership scheme required changes in what metadata the
writing code has to maintain. For example, we now keep the address of
the previous slot in the entry chain so that we can update its .next
field after a successful disk write. Also, the old code detecting
dropped write requests could no longer rely on the now-empty .next field
in the previous map entry. The rewritten code numbers I/O transactions
so that out-of-order replies can be detected without using the map.
I tried to simplify the metadata maintenance code by shortening
reservation lifetimes and using just-in-time [first] slot reservations.
The new code may also leak fewer slots when facing C++ exceptions.
As for reading, I realized that we had no protection from dropped rock
read requests. If the first read request is dropped, the metadata
decoding would probably fail but if subsequent reads are skipped, the
client may be fed with data that is missing those skipped blocks. I did
not try to reproduce these problems, but they are now fixed using the
same I/O transaction numbering mechanism that the writing code now uses.
Negative length checks in store_client.cc treat dropped reads as errors.
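The transaction-numbering idea can be modeled like this (a hypothetical single-issuer sketch; the real rock code tracks numbers per queue and handles stale replies rather than simply failing them):

```cpp
#include <cstdint>

// The issuer stamps each I/O request with a strictly increasing number and
// remembers the stamp of the last reply it accepted; a reply whose stamp is
// not the next expected one reveals a dropped or out-of-order request
// without consulting the shared map.
struct IoSequence {
    uint64_t nextToIssue = 1;
    uint64_t lastAccepted = 0;

    // stamp a new I/O request
    uint64_t issue() { return nextToIssue++; }

    // accept a reply only if it is the next expected one
    bool accept(uint64_t replyNumber) {
        if (replyNumber != lastAccepted + 1)
            return false;
        lastAccepted = replyNumber;
        return true;
    }
};
```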
I also removed commented out "partial writing" code because IoState
class member changes should expose any dangerous merge problems.
urnHandleReply() may be called several times while copying the entry
from the store. Each time it must use the buffer length that is left
over from the previous call.
Also, do not abandon a URN entry that still has clients attached.
Also allow urnHandleReply() to produce a reply if it receives a
zero-sized buffer. This may happen after the entry has been fully
stored.
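The remaining-length accounting can be modeled as follows (a hypothetical sketch; the capacity value and member names are invented):

```cpp
#include <cstddef>

// Each urnHandleReply()-style callback must append at the current offset
// and request at most the space left, instead of reusing the full buffer
// size on every invocation.
struct ReplyBuffer {
    static constexpr std::size_t capacity = 4096; // invented fixed size
    std::size_t used = 0;

    std::size_t spaceLeft() const { return capacity - used; }
    void noteAppended(std::size_t n) { used += n; }
};
```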