Amos Jeffries [Sat, 2 Jan 2010 05:00:34 +0000 (18:00 +1300)]
Author: Francesco Chemolli <kinkie@squid-cache.org>
Helper Multiplexer
The helper multiplexer's purpose is to relieve some of the burden
squid has when dealing with slow helpers. It does so by acting as a
middleman between squid and the actual helpers, talking to squid via
the multiplexed variant of the helper protocol and to the helpers
via the non-multiplexed variant.
Helpers are started on demand, and in theory the muxer can handle up to
1k helpers per instance. It's up to squid to decide how many helpers
to start.
The muxer knows nothing about the actual messages being passed around,
and as such can't really (yet?) compensate for broken helpers.
It is not yet able to manage dying helpers, but it will.
The helper can be controlled using various signals:
- SIGHUP: dump the state of all helpers to STDERR
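The channel-ID translation the muxer performs can be sketched roughly as follows. This is a minimal Python illustration, not Squid's actual code: the multiplexed (concurrent) helper protocol prefixes each request line with a numeric channel ID, while plain helpers speak lines without it; the `demux`/`remux` names are mine.

```python
# Sketch of the channel-ID handling a helper multiplexer performs.
# The multiplexed helper protocol prefixes each line with a channel ID;
# non-multiplexed helpers receive and return bare payload lines.

def demux(line: str) -> tuple[int, str]:
    """Split 'ID payload' from Squid into (channel, payload)."""
    channel, _, payload = line.partition(" ")
    return int(channel), payload

def remux(channel: int, reply: str) -> str:
    """Re-attach the channel ID to a plain helper's reply."""
    return f"{channel} {reply}"

channel, payload = demux("42 GET http://example.com/")
# ... forward `payload` to a non-multiplexed helper, read its reply ...
assert remux(channel, "OK") == "42 OK"
```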
Amos Jeffries [Sat, 2 Jan 2010 04:32:46 +0000 (17:32 +1300)]
Add client_ip_max_connections
Given some incentive after deep consideration of the slowloris claims.
While I still do not believe Squid is vulnerable per se, and people who
have tested it found no such failures as the DoS attack claims, we found
we could provide better administrative controls. This is one such control
that has been asked about many times but did not yet exist. It operates
essentially the same as the maxconn ACL, but does not require HTTP headers
and other request data to fully exist the way ACLs do.
It is tested immediately after accept() and is request type agnostic, right
down to DNS TCP requests. So care is warranted in hierarchy situations or where
clients may be behind NAT.
Utilizes the client DB to monitor accepted TCP links. Operates prior to
everything else so as to eliminate resource usage in the blocking case and
close the window of opportunity for dribble-attacks etc.
Default (-1) is to keep the status-quo of no limits.
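A hypothetical squid.conf fragment using the directive described above; the value 50 is purely illustrative, and -1 is the default noted in this log.

```
# Limit each client IP address to at most 50 concurrent TCP connections,
# tested immediately after accept(). The default of -1 keeps the
# status quo of no limit.
client_ip_max_connections 50
```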
Fixed output for disk-io modules.
Refactored some --enable options to SQUID_YESNO.
Made option-parsing for --with-coss-membuf-size stricter and refactored it.
Refactored auth-modules detection and handling.
Implemented SQUID_DEFINE_UNQUOTED
Some reformatting
Restructured eCAP, ICAP, WCCP, WCCPv2, KILL_PARENT_HACK to use SQUID_DEFINE_UNQUOTED
Renamed SELECT_TYPE
Removed unnecessary pauses when configuring
Renamed some internal variables to be standard-compliant.
Reworked handling of some transparent interception options.
Cleaned up some modules-detection code
Renamed some vars to standard-compliant
Performed some aesthetic fixups
Made some tests more consistent
Prettified EUI and SSL
Amos Jeffries [Mon, 21 Dec 2009 12:13:11 +0000 (01:13 +1300)]
Author: Tsantilas Christos <chtsanti@users.sourceforge.net>
Append the _ABORTED or _TIMEDOUT suffixes to the action access.log field.
* When an HTTP connection with a client times out, append _TIMEDOUT suffix
to the Squid result code field in access.log.
* When an HTTP connection with the client is terminated prematurely by
Squid, append _ABORTED suffix to the result code field in access.log.
Premature connection termination may happen when, for example, I/O
errors or server side-problems force Squid to terminate the master
transaction and close all associated connections.
The above changes make it possible to identify failed transactions even
when they have 200/200 received/sent response status codes and a
"successful" Squid result code (e.g., TCP_MISS). This is important when
one does not want 1-hour "stuck" transactions for 15-byte GIFs to
significantly skew the mean response time statistics. Such transactions
eventually terminate due to, say, TCP errors, and the old code would
record huge response times for successful-looking transactions.
The development sponsored by the Measurement Factory
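The suffix rule described above can be sketched as a tiny helper. Only the `_TIMEDOUT`/`_ABORTED` suffix strings come from this log; the function name, parameters, and the way timeout/abort conditions are detected are illustrative.

```python
# Sketch of appending the access.log result-code suffixes described above.
# Timeout takes the suffix when both conditions somehow apply; Squid's
# actual precedence is not specified here.

def log_result_code(code: str, timed_out: bool, aborted: bool) -> str:
    if timed_out:
        return code + "_TIMEDOUT"   # client HTTP connection timed out
    if aborted:
        return code + "_ABORTED"    # Squid terminated the connection early
    return code

assert log_result_code("TCP_MISS", timed_out=True, aborted=False) == "TCP_MISS_TIMEDOUT"
assert log_result_code("TCP_MISS", timed_out=False, aborted=True) == "TCP_MISS_ABORTED"
```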
Amos Jeffries [Mon, 21 Dec 2009 12:05:22 +0000 (01:05 +1300)]
Author: Tsantilas Christos <chtsanti@users.sourceforge.net>
Add support for write timeouts.
The development sponsored by the Measurement Factory
Description:
The write I/O timeout should trigger if Squid has data to write but
the connection is not ready to accept more data for the specified time.
If the write times out, the Comm caller's write handler is called with
an ETIMEDOUT COMM_ERROR error.
Comm may process a single write request in several chunks, without
caller's knowledge. The waiting time is reset internally by Comm after
each chunk is written.
Default timeout value is 15 minutes.
The implementation requires no changes in Comm callers but it adds write
timeouts to all connections, including the connections that have
context-specific write timeouts. I think that is fine.
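The per-chunk reset behaviour described above can be illustrated with a short sketch. This is not Comm's code: the function names are mine, and a real implementation would block waiting for writability rather than loop; the point is only that the deadline restarts after every successfully written chunk, so a slow-but-progressing write never times out while a stalled one does.

```python
import time

WRITE_TIMEOUT = 15 * 60  # default write timeout: 15 minutes

def write_all(send_chunk, data, chunk_size=4096, timeout=WRITE_TIMEOUT):
    """Write `data` in chunks; reset the deadline after each chunk written."""
    deadline = time.monotonic() + timeout
    offset = 0
    while offset < len(data):
        written = send_chunk(data[offset:offset + chunk_size])
        if written > 0:
            offset += written
            deadline = time.monotonic() + timeout  # progress: restart timer
        elif time.monotonic() >= deadline:
            # the connection accepted no data for `timeout` seconds
            raise TimeoutError("ETIMEDOUT: peer not accepting data")
        # (a real implementation would wait for writability here)
    return offset

assert write_all(lambda chunk: len(chunk), b"x" * 10000) == 10000
```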
Amos Jeffries [Mon, 21 Dec 2009 12:00:15 +0000 (01:00 +1300)]
Regression Fix: Make Squid abort on parse failures.
The addition of multi-file parsing and catching of thrown errors between
them caused any errors in sub-files to be non-fatal and allow Squid to
run as if everything was normal, even if parts of the config were not
being loaded.
Squid will now propagate the error exception out and exit with a count of
the errors found.
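The intended behaviour can be sketched like this. All names here are illustrative, not Squid's parser: errors thrown while parsing each sub-file are counted instead of silently swallowed, and the process exits with the count.

```python
import sys

def parse_config_files(files, parse_one):
    """Parse each config file; exit with the number of files that failed."""
    errors = 0
    for name in files:
        try:
            parse_one(name)
        except ValueError as err:    # a parse failure in this sub-file
            errors += 1
            print(f"FATAL: {name}: {err}", file=sys.stderr)
    if errors:
        sys.exit(errors)             # abort with a count of the errors found

def parse_one(name):
    """Stand-in parser: pretend files named 'bad*' contain errors."""
    if "bad" in name:
        raise ValueError("unrecognised directive")

parse_config_files(["squid.conf"], parse_one)  # no errors: returns normally
```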
Amos Jeffries [Sun, 20 Dec 2009 10:18:22 +0000 (23:18 +1300)]
Bug 2811: pt 1: Correct Peer table OID numbering
The IPv6 alterations to the cache_peer table display should not have
altered existing OID numbers. This fixes that by bumping the new table
format to a new OID number .1.3.6.1.4.1.3495.1.5.1.3 for version 3 of the
table.
NP: version 1 of the table was in 2.0->2.5, and 3.0
version 2 of the table was in 2.6+
Amos Jeffries [Sat, 19 Dec 2009 05:47:00 +0000 (18:47 +1300)]
Polish on-demand helpers a little more
* logic for the implicit external_acl_type idle= parameter was bad and
could result in idle=9999999 if max<startup.
Fix that and remove the possibility of max<startup.
* add concurrency back into the config dump displays
* fully drop the auth_param concurrency parameter for consistency.
Amos Jeffries [Wed, 16 Dec 2009 03:46:59 +0000 (16:46 +1300)]
Run helpers on-demand
For some config backwards compatibility the maximum is kept as a single
integer first parameter to the *children directives.
Default setting changes:
Instead of starting N helpers on startup and each reconfigure this
makes the default zero and the configured value a maximum cap.
The default maximum is raised from 5 to 20 for all helpers except
for dnsservers where the maximum is raised to the old documented
maximum of 32.
Obsoleted settings:
url_rewrite_concurrency
- replaced by the concurrency=N option now available on all *_children
directives (including auth_param X children).
To avoid compile problems this directive had to be fully dropped.
auth_param X concurrency N
- as above. However this option could be retained, deprecated for
future removal.
Behavior changes:
Whenever a request needs to use a helper and none are immediately
available, Squid tests whether it is okay to start a new one and, if so,
does so.
The "helpers dying too fast" warning and Squid shutdown behaviour have
been modified. Squid will now not care about dying helpers as long as
more than startup=N remain active. If a death leaves fewer than startup=N
running and is hit twice in less than 30 seconds, the warning message is
displayed and Squid aborts, same as before.
NP: with startup=0 (the new default), helpers dying before or after
their first use will not crash Squid, but may result in a loop of
hung/failed requests and WILL result in a great many helper-failed
warnings in cache.log.
If needed we can bump the startup default back to 1 to avoid all that.
Or add a special check to kill squid if helpers die during startup and
provide a clearer log message "Foo helper is dying before we can finish
starting it" etc.
TODO: the current patch has no way to dynamically decrease the number of
helpers. Only a reconfigure or helper dying can do that.
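The on-demand startup logic described above can be sketched as a small pool. This is illustrative only, not Squid's helper code: nothing is spawned up front, and a new helper is started only when a request finds none idle and the configured maximum has not been reached.

```python
class HelperPool:
    """Minimal sketch of on-demand helper startup with a maximum cap."""

    def __init__(self, maximum, spawn):
        self.maximum = maximum   # first parameter of the *children directives
        self.spawn = spawn       # factory that starts one helper process
        self.idle = []
        self.running = 0

    def acquire(self):
        if self.idle:
            return self.idle.pop()
        if self.running < self.maximum:   # okay to start a new one?
            self.running += 1
            return self.spawn()
        return None                       # cap reached: caller must queue

    def release(self, helper):
        self.idle.append(helper)

pool = HelperPool(maximum=2, spawn=lambda: object())
a = pool.acquire()
b = pool.acquire()
assert pool.acquire() is None   # cap reached: no third helper is started
pool.release(a)
assert pool.acquire() is a      # an idle helper is reused, not respawned
```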
The patch allows Squid v3.1 to build on RHEL 5.4 which has autoconf v2.59.
Without the patch, USE_DISKIO_AIO is zero but the corresponding AIO files
are compiled, leading to errors. I do not know if other platforms are
affected.
Amos Jeffries [Fri, 11 Dec 2009 14:15:28 +0000 (03:15 +1300)]
Bug 2395: FTP errors not displayed
* Fix PUT and other errors hanging
* Fix assertion "entry->store_status == STORE_PENDING" caused by FTP
* Several variable-shadowing cases resolved for the fix.
Amos Jeffries [Thu, 3 Dec 2009 02:38:11 +0000 (15:38 +1300)]
Account for mem_node overheads inside cache_mem
This makes Squid include the overhead memory space when determining the
number of data pages available in the cache_mem memory space, forming a
much better limit on memory cache usage.
This does NOT solve any issues created by sizeof(mem_node) being unaligned
with the system malloc implementation page size. That still needs to be
resolved.
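The accounting change amounts to a one-line arithmetic difference, sketched below. The overhead constant is a hypothetical stand-in for sizeof(mem_node) bookkeeping, not a value from this log.

```python
SM_PAGE_SIZE = 4096      # data payload bytes held per mem_node
MEM_NODE_OVERHEAD = 24   # hypothetical per-node bookkeeping overhead

def max_pages(cache_mem_bytes):
    """Pages that fit in cache_mem once per-node overhead is counted."""
    return cache_mem_bytes // (SM_PAGE_SIZE + MEM_NODE_OVERHEAD)

# Counting the overhead yields fewer pages than the naive
# cache_mem // SM_PAGE_SIZE estimate, i.e. a tighter, truthful limit:
assert max_pages(256 * 1024 * 1024) < (256 * 1024 * 1024) // SM_PAGE_SIZE
```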
Amos Jeffries [Wed, 2 Dec 2009 22:27:33 +0000 (11:27 +1300)]
Bug 2830: clarify where NULL byte is in headers.
Debug printing used to naturally stop string output at the null byte.
This should show the first segment of headers up to the NULL and the
segment of headers after it, so that it is clear to the admin that there
are more headers _after_ the portion that used to be logged.
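A rough sketch of the fix, not Squid's debug code: instead of letting string printing stop at the first NUL byte, both segments are shown with a visible marker where the NUL sits. The function name and marker are illustrative.

```python
def show_headers_with_nul(raw: bytes) -> str:
    """Render headers so content after an embedded NUL stays visible."""
    before, nul, after = raw.partition(b"\x00")
    if not nul:
        return raw.decode("latin-1")          # no NUL: print as before
    return (before.decode("latin-1")
            + "{NULL}"                        # mark where the NUL byte sits
            + after.decode("latin-1"))

out = show_headers_with_nul(b"Host: example.com\r\n\x00X-Hidden: yes\r\n")
assert "{NULL}" in out
assert "X-Hidden" in out   # headers after the NUL are no longer lost
```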
Factored out m4 method to look for modules.
Added CC and CXX to state save/commit/rollback helper.
Removed redundant --enable-async-io configure argument