rousskov [Sun, 18 Oct 1998 07:30:09 +0000 (07:30 +0000)]
- added missing case counters to refreshCheck()
request_reload2ims_stale, request_reload_stale,
min_age_override_exp_fresh, min_age_override_lmt_fresh
- display only non-zero stats
- hack to double check that all cases are counted
("total" line will not display 100% if not)
rousskov [Sat, 17 Oct 1998 10:34:08 +0000 (10:34 +0000)]
- eliminated refreshWhen() which was out-of-sync with refreshCheck()
potentially causing under-utilized cache digests
- maintain refreshCheck statistics on per-protocol basis so we
can tell why ICP or Digests return too many misses, etc.
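One hypothetical way to lay the counters out per protocol; none of these names are the real ones:

    /* Sketch: keep refreshCheck() outcomes per protocol, so one can tell
     * whether HTTP, ICP, or Cache Digest checks produce the misses. */
    enum { RC_PROTO_HTTP, RC_PROTO_ICP, RC_PROTO_DIGEST, RC_PROTO_MAX };
    enum { RC_FRESH, RC_STALE, RC_OUTCOME_MAX };

    static int RefreshCount[RC_PROTO_MAX][RC_OUTCOME_MAX];

    /* every caller states which protocol it is asking on behalf of */
    static int
    refreshCheckCounted(int proto, int stale)
    {
        RefreshCount[proto][stale ? RC_STALE : RC_FRESH]++;
        return stale;
    }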
wessels [Sat, 17 Oct 1998 01:19:29 +0000 (01:19 +0000)]
we used to CLOSE persistent connections if the ENTRY_BAD_LENGTH bit
was set (because we didn't get exactly content-length bytes). But
when we read TOO MUCH we can still handle this gracefully. Instead of
closing on the != condition, we now close only on the < condition.
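A minimal sketch of the new test, with hypothetical names:

    /* Sketch: can the connection stay persistent once the reply body
     * has been read? */
    static int
    connectionStillUsable(long content_length, long bytes_read)
    {
        if (content_length < 0)
            return 1;           /* no Content-Length to validate against */
        /*
         * old rule:  bytes_read != content_length  ->  close
         * new rule:  only a short body forces a close; excess bytes can
         *            be discarded and the connection reused anyway
         */
        return bytes_read < content_length ? 0 : 1;
    }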
wessels [Fri, 16 Oct 1998 05:44:37 +0000 (05:44 +0000)]
clientKeepaliveNextRequest() used to assume (assert) that conn->chr
was either NULL, or if conn->chr was set, then conn->chr->entry must
also be set. But the client request could be blocked on ACL or
redirector operations, in which case conn->chr->entry is NULL.
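Roughly the new control flow, as a sketch with simplified, hypothetical types:

    #include <stddef.h>

    struct entrySketch { int placeholder; };
    struct chrSketch { struct entrySketch *entry; };
    struct connSketch { struct chrSketch *chr; };

    static void
    keepaliveNextRequestSketch(struct connSketch *conn)
    {
        /* old code effectively demanded:
         *     assert(conn->chr == NULL || conn->chr->entry != NULL);
         * which aborts for requests blocked on ACLs or the redirector */
        if (conn->chr == NULL)
            return;             /* nothing queued yet: wait for a request */
        if (conn->chr->entry == NULL)
            return;             /* queued but not being served yet: just wait */
        /* otherwise kick the pending request as before */
    }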
wessels [Wed, 14 Oct 1998 05:39:08 +0000 (05:39 +0000)]
From: Q <q@fan.net.au>
Here is a patch to make squid2-p1 work properly when it's both an
ipf-transparent proxy and a local http-accelerator at the same time. It
also closes a potential DoS window.
wessels [Wed, 14 Oct 1998 02:38:42 +0000 (02:38 +0000)]
- Changed storeClientCopy2() so that it keeps sending the remainder
of a STORE_ABORTED request, instead of cutting off the client as
soon as the object becomes aborted.
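The idea, reduced to a sketch with hypothetical names:

    /* Sketch: instead of failing the client as soon as the entry is
     * marked aborted, keep handing out the bytes that were stored
     * before the abort and only stop when they run out. */
    static int
    storeClientShouldKeepCopying(int entry_aborted, long copy_offset,
        long stored_size)
    {
        if (!entry_aborted)
            return 1;                       /* normal copy path */
        /* aborted: the client may still be behind the data we already have */
        return copy_offset < stored_size;   /* 1: send remainder, 0: finish */
    }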
wessels [Fri, 9 Oct 1998 23:46:35 +0000 (23:46 +0000)]
Changed the policy of storeReleaseRequest() which used to require
the entry to be locked. This was to prevent RELEASE_REQUEST entries
from getting stranded.
But there were some places where we manually set the RELEASE_REQUEST
bit for possibly-unlocked objects. This sucks because we also need
to clear ENTRY_CACHABLE whenever we set RELEASE_REQUEST.
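A sketch of the combined bit handling, using made-up flag names and a simplified entry type:

    /* Sketch: the two bits belong together, so whoever requests a release
     * should go through one helper that also drops the cachable bit. */
    #define SK_ENTRY_CACHABLE   0x01u
    #define SK_RELEASE_REQUEST  0x02u

    struct entryFlagsSketch { unsigned int flags; };

    static void
    releaseRequestSketch(struct entryFlagsSketch *e)
    {
        if (e->flags & SK_RELEASE_REQUEST)
            return;                         /* already marked */
        e->flags |= SK_RELEASE_REQUEST;
        e->flags &= ~SK_ENTRY_CACHABLE;     /* a released entry is never cachable */
    }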
- changed the way Range requests are handled:
- do not "advertise" our ability to process ranges at all
- on hits, handle simple ranges and forward complex ones
- on misses, fetch the whole document for simple ranges
and forward range request for complex ranges
The change should decrease the number of cases where clients
such as Adobe Acrobat Reader get confused when we send a "200" response
instead of "206" (because we cannot handle complex ranges, even for hits).
Note: Support for complex ranges requires storage of partial objects.
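The policy above as a sketch; httpHdrRangeWillBeComplex() (see below) would supply the complex/simple decision, the rest of the names are hypothetical:

    typedef enum {
        RANGE_HANDLE_HERE,          /* satisfy the range from our copy */
        RANGE_FORWARD_AS_IS,        /* pass the range request to the server */
        RANGE_FETCH_WHOLE_OBJECT    /* strip Range, fetch and cache it all */
    } range_action_t;

    static range_action_t
    rangeAction(int is_hit, int range_is_complex)
    {
        if (is_hit)
            return range_is_complex ? RANGE_FORWARD_AS_IS : RANGE_HANDLE_HERE;
        /* miss */
        return range_is_complex ? RANGE_FORWARD_AS_IS : RANGE_FETCH_WHOLE_OBJECT;
    }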
- fixed(?) cbdata handling by the peer_digest module
there should be fewer coredumps on reconfigure now
however, the digest structure might leak when a peer is gone -- TBF
- added support for undocumented Request-Range header
the header is used by Netscape (with Adobe plugin working)
Request-Range is assumed to match the Range header, so only
Request-Range is actually used when both are provided.
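A sketch of the lookup order under that assumption, with a hypothetical signature:

    /* Sketch: given the raw values of the two headers (NULL when absent),
     * Request-Range wins because it is assumed to carry the same specs. */
    static const char *
    pickRangeSpec(const char *request_range, const char *range)
    {
        return request_range ? request_range : range;
    }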
- fixed a bug when one of the merged ranges could get lost
- commented out merging of overlapping ranges
because it might hurt the efficiency of some clients
- added httpHdrRangeWillBeComplex()
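As an illustration only, one plausible notion of "simple" is a single range that is a prefix of the object; the real criteria of httpHdrRangeWillBeComplex() may differ:

    struct rangeSpecSketch {
        long offset;            /* first byte, 0-based */
        long length;            /* byte count, -1 for "to end of object" */
    };

    static int
    rangeWillBeComplexSketch(const struct rangeSpecSketch *specs, int count)
    {
        if (count != 1)
            return 1;           /* multiple (possibly merged) ranges */
        if (specs[0].offset != 0)
            return 1;           /* does not start at the beginning */
        return 0;               /* a plain prefix: simple */
    }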