Amos Jeffries [Sat, 6 Feb 2010 06:32:11 +0000 (19:32 +1300)]
Author: Henrik Nordstrom <hno@squid-cache.org>
Clean up use of httpReplySetHeaders to be consistent across the code, and
remove the unneeded http_version argument.
Amos Jeffries [Fri, 5 Feb 2010 23:27:27 +0000 (12:27 +1300)]
Author: Jean-Gabriel Dick <jean-gabriel.dick@curie.fr>
Bug 1843: multicast-siblings cache_peer option for optimising multicast ICP relations
'multicast-siblings': this option is meant to be used only for cache peers of
type "multicast". It instructs Squid that ALL members of this multicast group
have a "sibling" relationship with it, not "parent". This is an optimization
that avoids useless multicast queries to a multicast group when the requested
object would only ever be fetched from a "parent" cache anyway. It is useful, e.g.,
when configuring a pool of redundant Squid proxies that are members of the same
multicast group.
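A minimal squid.conf sketch of the setup described above (addresses, ports, and TTL are placeholders):

```
# ICP queries go to the multicast group; multicast-siblings marks every
# group member as a sibling, so no parent-only objects are queried for
cache_peer 224.9.9.9 multicast 3128 3130 ttl=16 multicast-siblings
# each real peer still needs its own entry to accept the ICP replies
cache_peer 192.0.2.10 sibling 3128 3130 multicast-responder
```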
Amos Jeffries [Sun, 31 Jan 2010 06:20:21 +0000 (19:20 +1300)]
Author: Graham Keeling <graham@equiinet.com>
WCCPv1 not connecting to router correctly
I am coming across a problem with WCCPv1...
squid-2.5 connects to UDP port 2048, I get replies, and everything else then works.
squid-3.1 looks like it is trying to connect to UDP port 0 on the cisco.
[and fails to work]
I have looked at the src/wccp.c for squid-2.5, and it is clear that the port is
being set to 2048 for the connection to the router.
I have also looked at the source for 2.6, 2.7 and 3.0 (src/wccp.cc for this
version).
In all those, it appears to be setting the port on the outgoing connection.
Add the http::>ha format code and make http::>h log virgin request headers
This patch:
- Modifies the existing "http::>h" format code to log HTTP request headers
before any adaptation and redirection.
- Adds the new format code "http::>ha", which allows the user to log HTTP
request headers or header fields after adaptation and redirection.
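A squid.conf sketch using the two codes (the logformat name and log path are placeholders):

```
# %>h = virgin request headers; %>ha = headers after adaptation/redirection
logformat withheaders %ts.%03tu %>a %rm %ru "%>h" "%>ha"
access_log /var/log/squid/access.log withheaders
```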
Amos Jeffries [Thu, 21 Jan 2010 10:01:16 +0000 (23:01 +1300)]
Author: Wolfgang Nothdurft <wolfgang@linogate.de>
Bug 2730: Regressions in follow_x_forwarded_for since Squid-2
Two Major Regressions:
* Omitted testing for trust of the directly connecting client.
This is critical in trusting the header content itself.
The absence permitted remote clients to forge X-Forwarded-For
and gain access to resources through Squid.
(mitigated by the following)
* Bad logic in implementing the trust model caused any XFF
header containing untrusted IPs to be dropped in its entirety.
As a result, clients transiting more than one proxy hierarchy were
incorrectly logged and reported in the second.
Some polishing alterations to the existing logic:
* Testing the direct client address for trust means the testing must be
fully asynchronous ('slow'). This also avoids the memory leaks found on occasion.
* acl_uses_indirect_client is not strictly needed to test multiple levels
of X-Forwarded-For properly. The entire list of IPs is now always
tested until an untrusted one is found or an ACL failure occurs.
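A squid.conf sketch of the trust model being fixed: only the directly connected frontend proxy is trusted, and the X-Forwarded-For entries it supplies are walked until an untrusted IP is found (the address is a placeholder):

```
# trust XFF content only when the direct client is our own frontend
acl frontend src 192.0.2.10
follow_x_forwarded_for allow frontend
follow_x_forwarded_for deny all
```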
Amos Jeffries [Tue, 12 Jan 2010 06:44:41 +0000 (19:44 +1300)]
Full re-working of the way AcceptFD are handled in Squid
Previously:
a pre-defined fdc_table was initialized and left waiting just in case
any of the available FD were needed as a listening port, at which point
a handler was assigned and select was set up. Much effort was wasted
initializing the array on startup and shutdown, plus various unnecessary
maintenance references in comm on every FD closure.
Now:
one merely opens a socket, defines the AsyncCall handler for the
listening FD, and creates a ListenStateData object from the two. When a
listener socket is no longer needed, deleting the ListenStateData closes
the FD. Callers are responsible for maintaining a pointer to the
ListenStateData.
ListenStateData silently does all the comm processing for setting up and
watching for new connections, including whether accept() on a port is
delayed until more FD are available. When any accept() is completed
it sends the resulting FD and ConnectionDetails objects to the assigned
callback.
COMM_ERR_CLOSING and re-scheduling are now not generally relevant
to layers higher than comm on listening sockets. The callbacks never get
called at all unless a real new connection has been accepted. The old code
is still used internally by the comm layer, but details of error handling
and re-scheduling of accept() no longer leak out into higher level code.
ListenStateData can be created in use-once or accept-many form.
* use-once are intended for short temporary listening sockets such as
FTP passive data links. A single inbound connection is allowed after
which the handler self-destructs.
* accept-many are for long term FD such as the http_port listeners which
continuously accept new connections and pass them all to the same handler
until something causes the ListenStateData to die (deletion, or self
destruct under fatal socket IO errors).
The pre-existing AcceptLimiter is slightly remodeled to work with
ListenStateData* instead of an intermediary callback reference type,
and also altered from LIFO to FIFO behavior for better client response
times under load.
All the code relevant to comm layer processing of accept() is bundled
into libcomm-listener.la.
TODO:
* wrap the SSL handshake into ListenStateData. That way SSL on some
sockets can be treated as a real transport layer and for example
httpsAccept and httpAccept merged into one function.
* there is one uncovered bug: when the accept() handler is deleted
and the port closed while an accept has been deferred, the deferred queue
needs to be cleared of all (multiple) deferral events, or they will
be scheduled uselessly and may segfault with FD errors.
This is only hit if Squid is shut down during active load limiting.
Amos Jeffries [Tue, 12 Jan 2010 05:31:55 +0000 (18:31 +1300)]
Language Updates: Polish for manuals translation
* Mark up *.8 and *.1 for po4a garbage strings elimination
* Polishes existing *.8 and *.1 to standard texts
* Moves squid.8.in to src/ next to its binary
* Updated manuals.pot and current .po
* Adds en_AU.po from Rosetta Project
Amos Jeffries [Tue, 12 Jan 2010 04:06:23 +0000 (17:06 +1300)]
Standardize the manual page format
* markup static *.8 and *.1 for po4a eliding garbage strings
* standardize texts for some common strings
* Also moves the squid.8 document to src/ where the matching binary is
* Update .pot and .po
Amos Jeffries [Sun, 10 Jan 2010 12:40:53 +0000 (01:40 +1300)]
Language Updates: manual pages
* Update manuals.pot after helper shuffling
* Add po4a infrastructure to build .pot and .po
* Modify some manual sources to clean up the translation strings
TODO:
* more cleanup modification of existing man(8) sources
* addition of new man(8) pages for other helpers and tools
Amos Jeffries [Sat, 2 Jan 2010 05:00:34 +0000 (18:00 +1300)]
Author: Francesco Chemolli <kinkie@squid-cache.org>
Helper Multiplexer
The helper multiplexer's purpose is to relieve some of the burden
squid has when dealing with slow helpers. It does so by acting as a
middleman between squid and the actual helpers, talking to squid via
the multiplexed variant of the helper protocol and to the helpers
via the non-multiplexed variant.
Helpers are started on demand, and in theory the muxer can handle up to
1k helpers per instance. It's up to squid to decide how many helpers
to start.
The muxer knows nothing about the actual messages being passed around,
and as such can't really (yet?) compensate for broken helpers.
It is not yet able to manage dying helpers, but it will.
The multiplexer can be controlled using various signals:
- SIGHUP: dump the state of all helpers to STDERR
Amos Jeffries [Sat, 2 Jan 2010 04:32:46 +0000 (17:32 +1300)]
Add client_ip_max_connections
Given some incentive after deep consideration of the slowloris claims:
while I still do not believe Squid is vulnerable per se, and some people
have tested and found no such failures as claimed for the DoS attack,
we found we could provide better administrative controls. This is one such
control that has been asked about many times but did not yet exist. It operates
essentially the same as the maxconn ACL, but does not require HTTP headers
and other request data to fully exist as ACLs do.
It is tested immediately after accept() and is request type agnostic, right
down to DNS TCP requests. So care is warranted in hierarchy situations or where
clients may be behind NAT.
Utilizes the client DB to monitor accepted TCP links. Operates prior to
everything else so as to eliminate resource usage in the blocking case and
close the window of opportunity for dribble-attacks etc.
The default (-1) keeps the status quo of no limits.
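A squid.conf sketch of the new directive (the limit value is a placeholder; -1, the default, disables the check):

```
# refuse more than 50 concurrent TCP connections from any one client IP,
# tested immediately after accept() and before any request parsing
client_ip_max_connections 50
```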