Alex Rousskov [Mon, 29 Jul 2013 00:43:55 +0000 (18:43 -0600)]
Re-enabled on-disk collapsing of entries after fixing related code.
Since we started writing partial entries, we cannot rely on negative sidNext
marking the end of the slice/write sequence. Added a WriteRequest::eof field
to signal that end explicitly.
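A minimal sketch of the new field (surrounding members are simplified and only illustrative of the Rock write-request layout, not the actual code):
    #include <cstdint>
    // Sketch only: with partial entries being written, a negative sidNext no
    // longer implies "last write", so the end is signalled explicitly.
    class WriteRequestSketch
    {
    public:
        int64_t sidCurrent = -1; // slot being written
        int64_t sidNext = -1;    // next slot, if one is already reserved
        bool eof = false;        // true only for the final write of the entry
    };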
Do not leak db slices when write fails or IoState is closed before the write
succeeds.
Handle store client requesting an offset we have not stored yet. This might
happen for collapsed hits (and also if the client is buggy). May need more
work to slow the reader down.
Do not update various shared stats until the corresponding slot is written.
Alex Rousskov [Mon, 29 Jul 2013 00:27:23 +0000 (18:27 -0600)]
Improved STORE_MEM_CLIENT detection.
IN_MEMORY mem_status does not guarantee that the entire object is in the
memory cache. We may be just loading it from a shared memory cache, and
loading may fail. We may have nibbled at the entry already (although that may
not be possible, not sure). The whole memory/disk store_client designation
probably needs more work, but the now-removed condition was causing
store_client.cc:445: "STORE_DISK_CLIENT == getType()" assertions.
Alex Rousskov [Sat, 27 Jul 2013 17:19:29 +0000 (11:19 -0600)]
Keep anchor.basics.swap_file_sz in sync with slice sizes.
The old code updated anchor.basics.swap_file_sz _after_ copying all of the
available data into shared memory. An exception in the copying loop (e.g., the
map is out of available slots) could prevent that update. For another worker,
the entry would then appear to be fully completed (no writer, last slice size
stable, and the last slice pointer is -1) and that worker would assert due to
anchor.basics.swap_file_sz mismatching the sum of slice sizes.
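A rough sketch of the intended loop shape, with hypothetical helper names standing in for the real copying code:
    #include <cstdint>
    #include <vector>
    struct AnchorBasicsSketch { uint64_t swap_file_sz = 0; };
    struct AnchorSketch { AnchorBasicsSketch basics; };
    // copyOneSlice() stands in for the real per-slice copy; assume it may
    // throw when the shared map runs out of free slots.
    static size_t copyOneSlice(AnchorSketch &, const std::vector<char> &slice)
    {
        return slice.size();
    }
    void copyToSharedMemory(AnchorSketch &anchor,
                            const std::vector<std::vector<char>> &slices)
    {
        for (const auto &slice : slices) {
            const size_t copied = copyOneSlice(anchor, slice); // may throw
            // Update immediately rather than after the loop: if a later
            // iteration throws, swap_file_sz still matches the sum of the
            // slice sizes that other workers can already see.
            anchor.basics.swap_file_sz += copied;
        }
    }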
Alex Rousskov [Wed, 24 Jul 2013 21:48:45 +0000 (15:48 -0600)]
Disconnect StoreEntries before deleting their memory objects.
The new cleanup order helps identify the right Rock entry state (reading or
writing) and avoid assertions related to state identification bugs (such
as unlocking a writing entry for reading).
Similar to the memory cache code, we should not disconnect disk entries during
shutdown because Store::Root() may be missing by then.
Alex Rousskov [Wed, 24 Jul 2013 21:45:02 +0000 (15:45 -0600)]
Avoid !writeableAnchor_ assertions when Squid shuts down.
A shutting down Squid deletes locked StoreEntry objects, which may trigger
deletion of Rock::IoState that is still writing to disk. We should fix the
shutdown sequence. Meanwhile, the Rock::IoState code does not need to mislead
admins with an assert.
Alex Rousskov [Mon, 22 Jul 2013 17:04:00 +0000 (11:04 -0600)]
Fixed StoreEntry::mayStartSwapOut() logic to handle terminated swapouts.
StoreEntry::mayStartSwapOut() should return true if a swapout can start. If
swapout was started earlier but then terminated for some reason (setting sio
to nil), the method should not return true. Checking swap_status ==
SWAPOUT_DONE does not work reliably because the status may be reset to
SWAPOUT_NONE in some cases (and the check was too late anyway). Checking
decision == swPossible does not work at all because while swapout start was
possible at some point, it is no longer possible after we started swapping
out.
Added MemObject::SwapOut::swStarted to detect started swapouts reliably.
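A simplified sketch of the resulting check; the real code keeps more state, and the started flag shown here may be represented differently (for example, as a swapout decision value):
    // Sketch only: once a swapout has started, it must never be restarted,
    // even if the sio was reset to nil when the swapout was terminated.
    struct SwapOutSketch {
        void *sio = nullptr;     // nil again after a terminated swapout
        bool swStarted = false;  // set when swapout starts; never cleared
    };
    bool mayStartSwapOutSketch(const SwapOutSketch &swapout)
    {
        if (swapout.sio)
            return false; // a swapout is already in progress
        if (swapout.swStarted)
            return false; // started earlier and then terminated; do not retry
        return true;      // the other eligibility checks would follow here
    }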
Alex Rousskov [Wed, 10 Jul 2013 00:41:01 +0000 (18:41 -0600)]
Use Rock::IoState::writeableAnchor_ to detect rock entries open for writing.
The mere presence of e.mem_obj->swapout.sio is not reliable enough because we
may switch from writing to reading while the [writing] sio is still around.
More explicitly disabled on-disk collapsing of entries. The relevant code is
unstable under load [at least when combined with memory caching]. We were not
calling Ipc::StoreMap::startAppending() before, so we probably were not fully
collapsing entries on disk before these temporary changes.
Added an XXX to mark an assert() that may fail if we allow on-disk collapsing.
Alex Rousskov [Tue, 2 Jul 2013 19:23:49 +0000 (13:23 -0600)]
Broadcast mem-cache writer departure to transient readers (in more/all cases).
Moved transientsAbandon() call to MemStore::disconnect() to make sure we
catch all cases where a mem-cache writer stops updating the cache entry.
Transient readers need to know so that they do not get stuck when a writer
disappears.
transientsAbandon() needs a StoreEntry, so MemStore::disconnect() requires one now.
Alex Rousskov [Mon, 1 Jul 2013 19:59:32 +0000 (13:59 -0600)]
Do not become a store_client for entries that are not backed by Store.
If we ignore cache backing when becoming a store client, then
StoreEntry::storeClientType() is going to make us a DISK_CLIENT by default.
If there is no disk cache or it cannot be used for our entry, we will assert
in store_client constructor. Prevent those assertions by checking earlier in
StoreEntry::validToSend().
Alex Rousskov [Mon, 1 Jul 2013 02:25:50 +0000 (20:25 -0600)]
Several fixes and improvements to help collapsed forwarding work reliably:
Removed ENTRY_CACHABLE. AFAICT, it was just negating RELEASE_REQUEST.
Broadcast transients index instead of key because key may become private.
Squid uses private keys to resolve store_table collisions (among other
things). Thus, a public entry may become private at any time, at any worker.
Using keys results in collapsed entries getting stuck waiting for an update.
The transients index remains constant and can be used for reliable
synchronization.
Using the transients index, however, requires storing a pointer to the transient
entry corresponding to that index. Otherwise, there is no API to find the
entry object when a notification comes: Store::Root().get() needs a key.
Mark an entry for release when setting its key from public to private. The old
code was only logging SWAP_LOG_DEL, but we now need to prevent requests in
other workers from collapsing on top of a now-private cache entry. In many
cases, such an entry is in trouble (but not all cases because private keys are
also used for store_table collision resolution).
Fixed syncing of abandoned entries.
Prevent new requests from collapsing on writer-less transient entries.
Alex Rousskov [Thu, 27 Jun 2013 21:26:57 +0000 (15:26 -0600)]
Tightened StoreEntry locking. Fixed entry touching and synchronization code:
Tightened StoreEntry locking code to use accessors instead of manipulating the
locking counter directly. Helps with detecting locking bugs. Do not consider
STORE_PENDING and SWAPOUT_WRITING entries locked by default because it is
confusing and might even leave entries with a zero lock_count but locked() in
the global table. Entry users should lock them instead.
StoreController::get() is now the only place where we touch() a store entry.
We used to touch entries every time they were locked, which possibly did not
touch some entries often enough (e.g. during Vary mismatches and such where
the get() entry is discarded) and definitely touched some entries too often
(every time the entry was locked multiple times during the same master
transaction). This addresses a design bug marked RBC 20050104.
Fixed interpretation of IN_MEMORY status. The status means that the store
entry was, at some point, fully loaded into memory. And since we prohibit
trimming of IN_MEMORY entries, it should still be fully loaded. Collapsing
changes started to use IN_MEMORY for partially loaded entries, which helps
detect entries associated with the [shared] memory cache, but goes against
old Squid code assumptions, triggering assertions.
Handle synchronization of entries the worker is writing. Normally, the writing
worker will not receive synchronization notifications (it will send them) but
a stale notification is possible and should not lead to asserts. The worker
writing an entry will see a false mem_obj->smpCollapsed.
Do not re-anchor entries that were already anchored, fully loaded (ioDone),
and are now disassociated from the [shared] memory cache.
For shared caching to work reliably, StoreEntry::setReleaseFlag() should mark
cache entries for future release. We should not wait for release() time.
Waiting creates stuck entries because Squid sometimes changes the key from
public to private and collapsed forwarding broadcasts are incapable of
tracking such key changes (but they are capable of detecting entries abandoned
by their writers via the deletion mark in the transients table).
Alex Rousskov [Tue, 25 Jun 2013 17:51:30 +0000 (11:51 -0600)]
Avoid "STORE_DISK_CLIENT == getType()" assertions for ENTRY_ABORTED clients
and no disk cache configured.
StoreEntry::abort() makes entry STORE_OK, which makes
storeClientNoMoreToSend() return false for entries with unknown objectLen(),
triggering a disk read for some of them (when store_client::doCopy() cannot
schedule a memory read). If the entry is not really on disk, we hit an
assertion in store_client::scheduleDiskRead().
Alex Rousskov [Tue, 25 Jun 2013 16:06:37 +0000 (10:06 -0600)]
Various fixes related to overlapping and collapsed entry caching.
Wrote Transients description, replacing an irrelevant copy-pasted comment.
Maintain proper transient entry locks, distinguishing reading and writing
cases.
Fixed transients synchronization logic. Store::get() must not return
incomplete from-cache entries, except for local or transient ones. Otherwise,
the returned entry will not be updated when its remote writer makes changes.
Marked entries fully loaded from the shared memory cache as STORE_OK.
Avoid caching ENTRY_SPECIAL in the shared memory cache for now. This is not
strictly necessary, I think, but it simplifies shared caching logs when
triaging start-test-analyze test cases. The restriction can be removed
when ENTRY_SPECIAL generation code becomes shared cache-aware, for example.
Fixed copy-paste error in Transients::disconnect().
Changed CollapsedForwarding::Broadcast() profile in preparation for excluding
broadcasts for entries without remote readers.
Do not purge entire cache entries just because we have to trim their RAM
footprint. The old code assumed that non-swappable entries may not have any
other stored content (which is no longer correct because they may still reside
in the shared memory cache) so it almost made sense to purge them, but it is
possible for clients to use partial in-RAM data when serving range requests,
so we should not be purging unless there are other reasons to do that. This
may expose client-side bugs if the hit validation code is not checking for RAM
entries being incomplete.
Allow MemObject::trimUnSwappable() to be called when there is nothing to trim.
This used to be a special case in StoreEntry::trimMemory(), but we do not need
it anymore after the above change.
Added transient and shared memory indexes to StoreEntry debugging summaries.
Alex Rousskov [Tue, 25 Jun 2013 15:39:10 +0000 (09:39 -0600)]
Mark client streams that sent everything as STREAM_COMPLETE.
The old code used STREAM_UNPLANNED_COMPLETE if the completed stream was
associated with a non-persistent connection, which did not make sense to me
and, IIRC, led to store entry aborts even though the entries were not damaged
in any way.
This change may expose other subtle bugs, but none are known at this time.
See also:
http://www.squid-cache.org/mail-archive/squid-dev/200702/0017.html
http://www.squid-cache.org/mail-archive/squid-dev/201102/0210.html
Alex Rousskov [Mon, 24 Jun 2013 17:05:13 +0000 (11:05 -0600)]
Removed StoreEntry::hidden_mem_obj.
Replaced MemObject::url with MemObject::urlXXX() and storeId().
* Replace StoreEntry::hidden_mem_obj hack with explicit MemObject::setUris().
We need MemObject to tie Store::get() results to locked memory cache entries
and such but Store::get() does not know the entry URIs so we had to use fake
"TBD" URIs instead. The hidden_mem_obj hack was added to minimize chances
that those temporary "TBD" URIs are going to be logged or forwarded.
However, new code uses MemObject cache ties a lot more, and it became too
cumbersome and error prone to always check whether there is a hidden object
holding indexes of locked StoreMap entries. It should be easier to ensure
that true URIs are set after Store::get() instead.
* Provide accessors for MemObject::url (which is actually a store ID these
days) and MemObject::log_url (which is usually the same as the url so we now
do not allocate it when it is the same). These accessors allow us to verify
that the caller is not going to use an undefined URI or Store ID because some
code forgot to set them explicitly.
* Add urlXXX() to mark old callers that appear to assume that MemObject::url
still holds a URI (instead of StoreID). Fixing those callers is outside this
project scope, but this was a good opportunity to identify/mark them because
we needed to hide raw Store ID field name ("url") anyway.
Alex Rousskov [Sat, 22 Jun 2013 15:24:34 +0000 (09:24 -0600)]
Various shared memory-based collapsed forwarding improvements and fixes.
Lock transient entries while in use. Transient entry presence is used
to detect collapsed entry aborts for not-yet-cached entries.
Store current transient locks and memory cache entry state in MemObject. Why
not in StoreEntry like the disk cache does? To avoid penalizing those Stores
that keep idle StoreEntries in RAM.
Mark collapsing entries specially (in MemObject) so that we can stop updating
(un-tie) local entries that tried to collapse but did not like the collapsed
hit object that they started to get from another worker. When this happens,
the client side creates a new StoreEntry, but without a flag Store cannot tell
whether that entry needs to be kept in sync with the collapsed writer because
both the old entry and the new one have the same key. We may eventually find
a better way to distinguish the two cases.
Do not require MemObjects to be disassociated from various caches during
shutdown because Squid is currently incapable of maintaining Store::Root()
during shutdown.
Support incremental shared memory caching. Maintain and honor the
ENTRY_FWD_HDR_WAIT flag. Maintain shared memory cache reading/writing states.
Better updates of collapsed entries. Detect aborted entries. Do not release
entries that are not yet cached anywhere at the update time.
Alex Rousskov [Sat, 22 Jun 2013 15:11:30 +0000 (09:11 -0600)]
Properly reinitialize reused anchor.start and slice.size.
Since we allowed readers and [appending] writers to share an entry, it is
no longer possible to implement abortIo(). The caller must either close
the reading entry or abort the writing one, depending on the caller's lock.
Alex Rousskov [Fri, 21 Jun 2013 22:04:04 +0000 (16:04 -0600)]
Make !lock.readers and !lock.writers assertions safe.
The lock class used the readers level counter to count both attempts to read
and current readers. The attempts part made assertions declaring that there
should be no readers unsafe because even a writing entry may have a reading
attempt. The same goes for the writers counter: a reading entry may have a
writing attempt.
We now segregate the attempts level, which is internal information required
for the shared lock to work, from counting the number of successful attempts
(i.e., actual readers and writers), which is public information useful for
assertions, stats, etc.
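A loose sketch of the counter separation using std::atomic (names and logic simplified; this is not the actual Ipc::ReadWriteLock code, and unlock methods are omitted):
    #include <atomic>
    class RwLockSketch
    {
    public:
        bool lockShared() {
            ++readLevel;            // announce a reading attempt first
            if (writeLevel == 0) {  // no writing attempt in progress
                ++readers;          // successful lock: safe to assert on
                return true;
            }
            --readLevel;            // failed attempt: undo the announcement
            return false;
        }
        bool lockExclusive() {
            if (writeLevel++ == 0 && readLevel == 0) {
                ++writers;          // successful lock: safe to assert on
                return true;
            }
            --writeLevel;           // failed attempt: undo the announcement
            return false;
        }
        // attempts: internal bookkeeping; may be non-zero during failures
        std::atomic<int> readLevel{0}, writeLevel{0};
        // successful locks only: public info for assertions and stats
        std::atomic<int> readers{0}, writers{0};
    };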
Alex Rousskov [Fri, 21 Jun 2013 00:50:35 +0000 (18:50 -0600)]
Fixed ipc/Queue notification race leading to stuck, overflowing queues.
The writer calling OneToOneUniQueue::push() must tell readers if it places the
first item into a previously empty queue. We used to determine emptiness prior
to incrementing the queue size. That created a window between the wasEmpty
calculation and queuing the new item (by incrementing the queue size). During
that window, the readers could pop() all previously queued items (resulting in
an empty queue), but since that happened after wasEmpty was computed to be
false, the writer would not notify them about the new item it just placed, and
they would get stuck, eventually resulting in queue overflow errors.
The fix attempts to increment the queue size and extract the previous size
value atomically.
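A minimal sketch of the fix idea using std::atomic (illustrative; not the actual OneToOneUniQueue code):
    #include <atomic>
    class UniQueueSizeSketch
    {
    public:
        // Returns true when the caller must notify the reader because the
        // queue was empty immediately before this push.
        bool pushedFirstItem() {
            // fetch_add() increments the size and returns the value it had
            // just before the increment, as one atomic operation, closing
            // the window between the "was empty" test and the enqueue.
            return theSize.fetch_add(1) == 0;
        }
    private:
        std::atomic<int> theSize{0};
    };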
Alex Rousskov [Sat, 8 Jun 2013 00:56:36 +0000 (18:56 -0600)]
Simplified MemObject::write() API.
The API required a callback, but the call was always synchronous and the
required callback mechanism could not reliably support an async call anyway.
The method adjusted the buffer offset to become relative to headers rather
than body. While the intent to separate headers from body is noble, none of
the existing caches support that separation, and a different API will be
needed to support it correctly anyway. For now, let's reduce the number of
special cases and offset manipulations.
Alex Rousskov [Fri, 7 Jun 2013 23:34:36 +0000 (17:34 -0600)]
Support an "appending" read/write lock state that can be shared by readers
and the writer. The writer promises not to update key metadata (except growing
the object size and next pointers) and readers promise to be careful when
reading growing slices.
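A small sketch of the resulting reader/writer contract for a growing slice (hypothetical names; real slices live in shared memory, use Squid's own atomics, and have bounds checks omitted here):
    #include <atomic>
    #include <cstring>
    struct GrowingSliceSketch
    {
        char payload[4096];
        std::atomic<size_t> size{0}; // only grows while the writer appends
        void append(const char *buf, size_t len) {  // writer side
            std::memcpy(payload + size, buf, len);  // place the bytes first...
            size += len;                            // ...then publish them
        }
        // Reader side: snapshot the size once and never read past it, even
        // though the writer may keep growing the slice concurrently.
        size_t readFrom(size_t offset, char *out) const {
            const size_t seen = size;
            const size_t avail = (offset < seen) ? seen - offset : 0;
            std::memcpy(out, payload + offset, avail);
            return avail;
        }
    };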
Support copying of partially cached entries from the shared memory cache to
local RAM. This is required for collapsed shared memory hits to receive new
data during broadcasted updates.
Properly unlock objects in the shared memory cache when their entries are
abandoned by a worker. This was not necessary before because we never locked
memory cache entries for more than a single method call. Now, with partially
cached entries support, the locks may persist much longer.
Properly delete objects from the shared memory cache when they are purged by a
worker. Before this change, locally purged objects may have stayed in the
shared memory cache.
Update disk cache index _after_ the changes are written to disk. Another
worker may be using that index and will expect to find the indexed slices on
disk. Disk queues are not FIFOs across workers.
Made CollapsedForwarding work better in non-SMP mode.
Polished broadcasting code. We need to broadcast entry key because the entry
may not have any other information (it may no longer be cached by the sender,
for example).
Implemented "anchoring" in-transit entries when the writer caches the
corresponding object. This allows the reader's entry object to reflect its
cached status and, hence, be able to ask for cached data during broadcasted
entry updates. Still need to handle the case where the writer does not cache
the object (by aborting collapsed hit).
Dmitry Kurochkin [Wed, 29 May 2013 16:04:40 +0000 (10:04 -0600)]
Added BaseMultiQueue class, a common base of the old FewToFewBiQueue class and
the new MultiQueue class.
Added MultiQueue, a lockless fixed-capacity bidirectional queue for a limited
number of processes. Any process may send data to and receive from any other
process (including itself). Used for collapsed forwarding notifications.
Added CollapsedForwarding class to send and handle received collapsed
forwarding notifications using MultiQueue.
Write partial Rock pages to disk in order to propagate data from the hit
writer to the collapsed hit readers. Send collapsed forwarding notification
after data was written to disk.
Still missing: code to share locked StoreMap entries, kick collapsed hit
readers, and disable notifications in no-daemon mode.
Amos Jeffries [Wed, 20 Mar 2013 04:48:17 +0000 (22:48 -0600)]
Fix bogus 'invalid response' message on URL rewriter interface
The empty-line response from a rewriter or redirector should be converted
to an ERR reply code in the new API. It was being left as Unknown.
While this reply used to be valid only on the URL helper interface, and it
would be more appropriate to map other helpers to BH, the ERR response
seems to be safe for use on any of the helper interfaces for an empty-line
response. At worst it will prevent the lookup being retried on another,
possibly better-working helper instance.
Amos Jeffries [Mon, 18 Mar 2013 10:10:13 +0000 (04:10 -0600)]
Polish: clarify authenticate_ip_ttl code
This patch alters the directive implementation to perform the TTL addition
only when setting the expiry value. This improves speed a little when
comparing timestamps, and allows the config file to display a 1-second TTL
instead of displaying 0 seconds while actually being 0-1 seconds.
This resolves some confusion about why the max_user_ip ACL still works when
the TTL is set to 0 seconds.
Also, document the AuthUserIP class used to store the IP information.
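A tiny sketch of the arithmetic change (hypothetical struct; the real state lives in the AuthUserIP records mentioned above):
    #include <ctime>
    struct AuthUserIpSketch
    {
        time_t ip_expiretime = 0;
        // The TTL is added once, when the record is created or refreshed...
        void touch(time_t now, int ttlSeconds) {
            ip_expiretime = now + ttlSeconds;
        }
        // ...so later checks are a plain timestamp comparison, with no TTL
        // addition repeated on every lookup.
        bool expired(time_t now) const {
            return ip_expiretime <= now;
        }
    };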
Amos Jeffries [Mon, 18 Mar 2013 04:55:51 +0000 (22:55 -0600)]
SourceLayout: shuffle HttpStatusLine into http/libsquid-http.la
* moves HttpStatusLine.* to http/StatusLine.*
* renames HttpStatusLine to Http::StatusLine
* renames httpStatusLine*() functions as members of Http::StatusLine
* shuffles StatusCode string conversion function into http/StatusCode
* makes the reason parameter of the StatusLine::set() function optional.
There is no logic change involved, but callers no longer need to
set it to the status code string explicitly, nor set it to NULL
explicitly, unless intending to replace an existing status string.
* adds const-correctness and documentation to StatusLine symbols.
Alex Rousskov [Thu, 14 Mar 2013 23:04:37 +0000 (17:04 -0600)]
Fix concurrency support in stateless helpers: Parse multiple replies correctly.
When multiple helper replies were read at the same time, the old code moved \0
(the former EoM mark) to the front of the buffer after handling the first reply,
which prevented the remaining replies from being parsed.
The code also did not terminate the remaining replies correctly after moving
them to the beginning of the buffer. As far as I could test, such termination
is accidentally(?) not necessary, but I could not figure out why and added it
anyway.
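A rough sketch of the intended parsing shape (illustrative only, not the actual helper.cc code; the reply handler and buffer layout are hypothetical):
    #include <cstring>
    // Parse every complete, newline-terminated reply in the accumulated
    // buffer, then keep only the unparsed tail at the front, re-terminated,
    // instead of planting a \0 at the front and losing later replies.
    void parseAccumulatedReplies(char *buf, size_t &used, size_t capacity,
                                 void (*handleOneReply)(const char *))
    {
        char *start = buf;
        while (char *eom = static_cast<char *>(
                   std::memchr(start, '\n', used - (start - buf)))) {
            *eom = '\0';               // terminate just this reply
            handleOneReply(start);
            start = eom + 1;           // move on to the next reply, if any
        }
        const size_t leftover = used - (start - buf);
        std::memmove(buf, start, leftover); // keep the partial reply, if any
        used = leftover;
        if (used < capacity)
            buf[used] = '\0';          // re-terminate the remaining bytes
    }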
Amos Jeffries [Mon, 11 Mar 2013 23:28:51 +0000 (17:28 -0600)]
Fix SSL Bump bypass for intercepted traffic
The SSL-bump bypass code on intercepted HTTPS traffic generates a fake
CONNECT request from the original destination IP:port in an attempt to
trigger a TCP tunnel being opened for the un-bumped data to be
transferred over.
The current implementation breaks in two situations:
1) when IPv6 traffic is intercepted
The generated URL field does not account for the additional [] bracketing
required when an IPv6 address and port are combined.
The resulting fake requests look like:
CONNECT ::1:443 HTTP/1.1
Host: ::1
... which are both invalid, and will fail to parse. Breaking IPv6 HTTPS
interception bypass.
Resolve this by using Ip::Address::ToURL() function which was created
for the purpose of generating URL hostnames from raw-IP + port with
the bracketing inserted when required.
2) when a non-443 port is being intercepted
The generated Host: header is missing the port, and Squid's Host: header
validity checks will reject the outbound
CONNECT 127.0.0.1:8443 HTTP/1.1
Host: 127.0.0.1
... this is an invalid request. Squid is currently ignoring the Host:
header. However, Squid's tunnel.cc does make use of peering and may relay
the fake request's Host: to upstream peers, where we cannot be so sure what
will happen.
Resolve this issue by re-using the generated IP:port string for both the URL
and Host: fields, which preserves the port in Host: regardless of its value.
This also means there is an unnecessary :443 tagged on for most HTTPS
traffic; however, the omission of the port from the Host: header is only a MAY
and this should not cause any issues.
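A small sketch of the formatting rule being relied on (illustrative; it does not reproduce the Ip::Address::ToURL() implementation):
    #include <cstdio>
    #include <string>
    // Raw-IP origins need [] brackets when the IP is IPv6 and a port is
    // appended; reusing the same authority string for the CONNECT URL and
    // the Host: header keeps the port in both places.
    std::string ipPortToAuthority(const std::string &rawIp, unsigned short port)
    {
        const bool isIpv6 = rawIp.find(':') != std::string::npos;
        char buf[128];
        std::snprintf(buf, sizeof(buf), isIpv6 ? "[%s]:%hu" : "%s:%hu",
                      rawIp.c_str(), port);
        return buf;
    }
    // e.g. ipPortToAuthority("::1", 443) yields "[::1]:443"
    //      ipPortToAuthority("127.0.0.1", 8443) yields "127.0.0.1:8443"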
Amos Jeffries [Thu, 7 Mar 2013 23:40:02 +0000 (12:40 +1300)]
Regression fix: Accept-Language header parse
When handling error page negotiation, the header parse that detects the
language code can enter an infinite loop. Recover the 3.1 series behaviour
and fix an additional pre-existing off-by-one error.
The errors were introduced in trunk rev.11496 in 3.2.0.9.
Amos Jeffries [Sun, 3 Mar 2013 12:44:30 +0000 (05:44 -0700)]
Fix authentication headers sent on peer digest requests
Cache digest fetches have been sending the cache_peer login= option
value without sanitizing it for special-case values used internally
by Squid. This causes authentication failure on peers which are checking
user credentials.
Tianyin Xu [Sun, 3 Mar 2013 07:10:22 +0000 (00:10 -0700)]
Make all the parameter names and options case sensitive
Changes "strcasecmp" to "strcmp".
This mainly deals with constant configuration options (e.g., enumerative
options and boolean options). For directive names, behaviour is already
consistent (case sensitive) because the parser functions are auto-generated.
The case sensitivity of the following parameter values is not changed:
- user and group names
- host names
- domain and realm names
- ACL names
- filesystem names
- options in request/response/digest messages
The cases which were earlier causing a lot of RAM 'leaks' have been
resolved already and the remaining causes appear to all be in components
with short packet handling pathways where the orphan is not wasting much
in the way of RAM bytes or FD time.
The trace is left at level-4 for future debugging if necessary.
Amos Jeffries [Tue, 26 Feb 2013 00:34:52 +0000 (13:34 +1300)]
MacOS: reduce the testRock unit test UDS path
On MacOS shm_open() requires the name entry to be less than 31 bytes
long. The garbage name used by testRock was 35 bytes and did not really
describe what it was used for in the test anyway.
TODO: find out and fix why MacOS still responds with EINVAL once the path
is set to a usable length.
Amos Jeffries [Sun, 24 Feb 2013 07:26:26 +0000 (00:26 -0700)]
MacOS: workaround compiler errors and case-insensitivity
MacOS GCC version implicitly searches the local directory for .h includes
despite the absence of -I. in the provided options.
Furthermore it searches with case-insensitive filenames due to the
underlying case-insensitive filesystem.
The combined result is that libacl .cc files include their local copy of
acl/Url.h instead of the base directory's src/URL.h, which was the one needed.
The long term fix will be to shuffle URL.h and its related code into
a convenience library. For now we can avoid issues by prefixing the full
src/ path to the includes.
Amos Jeffries [Fri, 22 Feb 2013 13:26:12 +0000 (02:26 +1300)]
SourceLayout: shuffle BasicAuthQueueNode to Auth:: namespace
... and document what it is used for by authentication.
There is only one logic change in this patch. The QueueNode destructor
is added to clear the queued CBDATA entries when the queue is deleted.
Previously the pointer was just erased in hopes that the queue was
notified prior to deletion.
Amos Jeffries [Mon, 18 Feb 2013 13:02:42 +0000 (02:02 +1300)]
Removes the domain from the cache_peer server pconn key
Under the squid-3.2 pconn model, the IP:port specifying the destination
is part of the key and can be used to strictly filter selection when
locating a pconn. This means the domain is no longer a necessary part
of the key.
A Squid using cache_peer can see a large number of wasted idle connections
to its peers due to the key domain value if the peer hostname is not
substituted properly. There is also a similar effect when contacting
servers with virtual hosted domains.
Also, a bug was located where peer host and name= values were being used
inconsistently as the domain marker, resulting in failed pop() operations
and extra FD usage.
This has been tested for several months now with only socket usage
benefits seen in several production networks.
NOTE: previous experience some years back with pconn has demonstrated
several broken web servers which assume all requests on a persistent
connection are for the same virtual host. For now this change avoids
altering the behaviour on DIRECT traffic for this reason.
Since debugs() is a macro, it should not change static Debugs::level
before putting the debug message to the internal stream. Otherwise we
encounter problems when the debug message itself contains calls to debugs().
Alex Rousskov [Fri, 15 Feb 2013 18:12:40 +0000 (11:12 -0700)]
mem_hdr::hasContigousContentRange() should return true for empty ranges
mem_hdr::hasContigousContentRange() is called exclusively from
StoreEntry::mayStartSwapOut() via MemObject::isContiguous(). In theory, that
mayStartSwapOut() call may happen when we have not written anything to mem_hdr
yet (not even HTTP response headers). In that case, mayStartSwapOut() should not
refuse to swap the entry out on non-contiguous grounds, but it was doing that
because hasContigousContentRange() mishandled the empty range case.
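A minimal sketch of the boundary case (hypothetical signature; the real check walks mem_hdr nodes):
    #include <cstdint>
    // An empty range is trivially contiguous: with nothing written yet,
    // there is nothing that could be fragmented.
    bool hasContiguousRangeSketch(int64_t rangeStart, int64_t rangeEnd)
    {
        if (rangeEnd <= rangeStart)
            return true; // empty range: do not refuse swapout on these grounds
        // ... otherwise walk the stored nodes, checking for gaps ...
        return false;    // placeholder for the real node walk
    }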
XXX: Calling hasContigousContentRange() [just] from mayStartSwapOut() is just
wrong because what may be contiguous now may become fragmented later.
Alex Rousskov [Fri, 15 Feb 2013 02:28:12 +0000 (19:28 -0700)]
Polished StoreEntry debugging to report more info, less noise.
Start with a relatively unique "e:" prefix to ease finding entries with
specific properties in a large cache.log.
Only report fileno and cache_dir ID if they are set.
Only report non-default states and use mnemonics for them.
Report all set flags using mnemonics.
Report entry address in RAM.
After a few lookups, mnemonics allow reading (and searching for) many logged
entry states without referring to source code. While nobody is likely to
remember all flags by heart, the most commonly used/important ones should be
easy to remember.