wessels [Fri, 21 Aug 1998 14:40:57 +0000 (14:40 +0000)]
Fixed up ugly confusion with public keys and RELEASE_REQUEST states.
Some other failed assertions led me to an object that had a public cache
key but was not being swapped out, because the proxy-only option
made us call storeReleaseRequest early (before reading any server reply).
RELEASE_REQUEST objects should never be given public keys.
storeReleaseRequest now clears the ENTRY_CACHABLE bit to help ensure
this doesn't happen.
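The fix can be sketched as follows. This is a simplified stand-in, not the real Squid code: the flag values and the `storeEntryAllowsPublicKey()` helper are invented for illustration; only the behavior of `storeReleaseRequest()` mirrors the change described above.

```c
/* Simplified stand-ins for the real StoreEntry flag bits; the actual
 * Squid structure carries many more flags than shown here. */
#define ENTRY_CACHABLE  0x01
#define RELEASE_REQUEST 0x02

typedef struct {
    unsigned int flags;
} StoreEntry;

/* Per this fix: marking an entry for release also clears the cachable
 * bit, so the entry can never later be given a public key. */
void storeReleaseRequest(StoreEntry *e)
{
    e->flags |= RELEASE_REQUEST;
    e->flags &= ~ENTRY_CACHABLE;
}

/* A public key is only allowed on entries that are still cachable and
 * not marked for release. */
int storeEntryAllowsPublicKey(const StoreEntry *e)
{
    return (e->flags & ENTRY_CACHABLE) && !(e->flags & RELEASE_REQUEST);
}
```

With this, a proxy-only entry released before any server reply can no longer be promoted to a public key.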
wessels [Fri, 21 Aug 1998 04:21:02 +0000 (04:21 +0000)]
- Added httpMaybeRemovePublic() to purge public objects for
certain responses even though they are uncachable. This is
needed, for example, when an initially cachable object
later becomes uncachable.
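A minimal sketch of the idea, with an invented in-memory URI table standing in for the real store index; only the purge-on-uncachable-reply logic mirrors the commit:

```c
#include <string.h>
#include <stddef.h>

/* Invented in-memory index of public objects, keyed by URI. */
#define MAX_PUBLIC 16

static const char *public_uris[MAX_PUBLIC];

int storeGetPublic(const char *uri)
{
    int i;
    for (i = 0; i < MAX_PUBLIC; i++)
        if (public_uris[i] && strcmp(public_uris[i], uri) == 0)
            return 1;
    return 0;
}

void storeAddPublic(const char *uri)
{
    int i;
    for (i = 0; i < MAX_PUBLIC; i++)
        if (public_uris[i] == NULL) {
            public_uris[i] = uri;
            return;
        }
}

/* If the new reply is uncachable, purge any stale public copy so an
 * object that was cachable earlier does not linger in the cache. */
void httpMaybeRemovePublic(const char *uri, int reply_is_cachable)
{
    int i;
    if (reply_is_cachable)
        return;
    for (i = 0; i < MAX_PUBLIC; i++)
        if (public_uris[i] && strcmp(public_uris[i], uri) == 0)
            public_uris[i] = NULL;
}
```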
wessels [Thu, 20 Aug 1998 05:10:30 +0000 (05:10 +0000)]
We have free-memory-read (FMR) bugs with peer pointers in the ps_state
structure. When a reconfigure occurs during ICP queries and a timeout
fires, the ->first_parent_miss peer will have been freed.
Using cbdata here would be too ugly: we would need a lot of locks and
unlocks, and what would we do when the first_parent_miss peer is no
longer valid? Re-select?
This approach saves sockaddr_in values for the peers. We look up the
actual peer structure with whichPeer() when we really need the peers.
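The approach can be sketched like this. The structures are heavily simplified relative to real Squid (the `peerAdd`/`peersFree` helpers are invented for the example), but the point survives: ps_state stores a sockaddr_in by value, and whichPeer() maps it back to a live peer only when one is needed, so a reconfigure cannot leave a dangling pointer.

```c
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>

typedef struct _peer {
    struct sockaddr_in in_addr;
    struct _peer *next;
} peer;

static peer *Peers = NULL;      /* head of the current peer list */

peer *peerAdd(struct in_addr addr, unsigned short port)
{
    peer *p = calloc(1, sizeof(peer));
    p->in_addr.sin_family = AF_INET;
    p->in_addr.sin_addr = addr;
    p->in_addr.sin_port = port;
    p->next = Peers;
    Peers = p;
    return p;
}

/* Free the whole peer list, as a reconfigure would. */
void peersFree(void)
{
    while (Peers) {
        peer *next = Peers->next;
        free(Peers);
        Peers = next;
    }
}

/* Look up the live peer matching a saved address; returns NULL if the
 * peer disappeared in a reconfigure, so callers can re-select safely. */
peer *whichPeer(const struct sockaddr_in *from)
{
    peer *p;
    for (p = Peers; p; p = p->next)
        if (p->in_addr.sin_addr.s_addr == from->sin_addr.s_addr &&
            p->in_addr.sin_port == from->sin_port)
            return p;
    return NULL;
}

typedef struct {
    struct sockaddr_in first_parent_miss;   /* saved by value, never dangles */
} ps_state;
```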
wessels [Thu, 20 Aug 1998 05:07:23 +0000 (05:07 +0000)]
move whitespace-skipping block inside the while loop below it so we skip
leading whitespace on subsequent requests, and work around broken user
agents that send extra CRLFs after a POST
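The skipping step itself is trivial; a standalone sketch (the helper name is invented — in Squid this runs inline in the request-parsing loop, once per request rather than once per read):

```c
#include <ctype.h>

/* Skip leading whitespace (including the stray CRLFs some broken user
 * agents send after a POST body) before parsing the next request. */
const char *skipRequestWhitespace(const char *buf)
{
    while (*buf && isspace((unsigned char) *buf))
        buf++;
    return buf;
}
```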
wessels [Tue, 18 Aug 1998 01:19:33 +0000 (01:19 +0000)]
From: Stewart Forster <slf@connect.com.au>
The following patches do some cosmetic changes (REQ_NOCACHE_HACK ->
REQ_NOCACHE_IMS), and introduce a new check into refreshCheck to force an
IMS refresh check if REQ_NOCACHE_IMS is set.
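The new check can be sketched as a decision helper. This is not the real refreshCheck signature — the function and its arguments are invented for illustration; only the rule "REQ_NOCACHE_IMS forces a refresh check" comes from the patch description:

```c
/* REQ_NOCACHE_IMS is the renamed REQ_NOCACHE_HACK flag. */
#define REQ_NOCACHE_IMS 0x01

/* Return nonzero when the entry must be revalidated: either the normal
 * freshness rules say it is stale, or REQ_NOCACHE_IMS forces an
 * If-Modified-Since refresh check regardless. */
int refreshShouldValidate(unsigned int req_flags, int stale_by_rules)
{
    if (req_flags & REQ_NOCACHE_IMS)
        return 1;
    return stale_by_rules;
}
```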
wessels [Mon, 17 Aug 1998 23:17:45 +0000 (23:17 +0000)]
From: Stewart Forster <slf@connect.com.au>
Just recently our caches ran into trouble because the base Squid 1.2
code doesn't delete objects fast enough under high load. The old code
would remove at most 50 objects per second. When pulling in more
than that (as we often do), the disks start to fill, and once they do,
the disk selection algorithm defaults to sending everything to the
first specified cache_dir. Further, doing 50 deletes at once is also
taxing on the ASYNC threads.
The applied patch makes object deletion more continuous by deleting
objects every 1/10th of a second, and speeds this up to as many objects
per second as Squid can manage if the disks start to fill past the high
water mark.
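The pacing described above might look like this. The scaling rule here is invented, not the patch's exact formula; it only illustrates "steady trickle normally, ramp up past the high water mark":

```c
/* Called every 1/10th second: how many objects to delete this tick.
 * Below the high water mark, a steady background trickle; above it,
 * delete more aggressively the fuller the disks are (illustrative
 * linear ramp, not the real patch's formula). */
int storeDeleteCountThisTick(int used_pct, int high_water_pct)
{
    if (used_pct <= high_water_pct)
        return 1;
    return 1 + 2 * (used_pct - high_water_pct);
}
```

Spreading deletions across ticks also avoids the bursts of 50 simultaneous deletes that taxed the ASYNC threads.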
wessels [Mon, 17 Aug 1998 22:38:07 +0000 (22:38 +0000)]
moved clientAccessCheck() call to AFTER a block for non-GET
requests to copy body bytes and maybe disable read handlers.
This was done because, in the experimental optimistic-IO code, the
request is DONE after the clientAccessCheck call and the request data
structure has been freed, resulting in free-memory reads (FMRs) etc.
It works here only because, without optimistic IO, we get at least one
select loop before the request is complete.