Amos Jeffries [Fri, 2 Jan 2015 13:15:24 +0000 (05:15 -0800)]
Bug 3754: configure doesn't detect IPFilter 5.1.2 system headers
Solaris 10+ bundles IPFilter code natively, but the IPFilter
headers contain a duplicate definition of minor_t which does
not match the existing OS definition.
The result is that no applications (such as Squid) will build
on Solaris with the natively provided headers.
Also, the upstream IPFilter code separate from Solaris contains
the same minor_t definition so building against a separate
install of IPFilter does not fix the issue.
We must instead play preprocessor games, #define'ing minor_t to a
different real name while the OS headers are included and restoring
its own name for the IPFilter headers.
Thanks to Yuri Voinov for sponsoring the Solaris 10
machine and environment resources for this fix.
Amos Jeffries [Thu, 1 Jan 2015 08:57:18 +0000 (00:57 -0800)]
Cleanup: fix most 'unused parameter' warnings
... and several bugs hidden by lack of this check:
* url_rewrite_timeout parser/dumper using the wrong cf.data.pre
parameter definition.
* url_rewrite_timeout parser/dumper using the wrong object for
state data: a global Config object instead of the parameter object,
preventing future use of multiple Config objects. There is more
to be done, as the Timeout value itself is not stored as part of
the object that apparently details the timeout.
* request_header_add directive dump() omitting directive
name in mgr:config output.
* dead code in the HTCP packet handlers for NOP, MON and SET.
* mime icons download operation incorrectly initialized;
it was using the 'view' access parameter to set the download
access permission.
* peerCountHandleIcpReply() assertions testing validity
after the pointers were already used. This could lead to a
segfault on errors; it now leads to assertion logging.
Only the default built code was checked and updated at this
time. There are 62 known warnings still appearing due to
parameters being only used inside conditional code, possibly
more issues in code not enabled in this build and certainly
a lot more in the stubs and unit tests which were not checked.
Fixed handling of invalid SSL server certificates when splicing connections.
An unpatched Squid in peek-and-splice mode may splice connections after
receiving a malformed or unsupported SSL server Hello message. This may
happen even if sslproxy_cert_error tells Squid to honor the error. After
this change, Squid honors sslproxy_cert_error setting when:
* no server certificate was found and checked using Squid validation procedure
(e.g., because the SSL server Hello response was malformed or unsupported); or
* Squid server certificate validation procedure has failed.
If the certificate error is not allowed, Squid terminates the server connection
and attempts to bump the client connection to deliver the error message to the
user.
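For example, a configuration that refuses to tolerate any server certificate error, including the newly covered cases, might look like this (an illustrative squid.conf snippet, not a recommended policy):

```
# Deny every server certificate error, including the cases covered by this
# change: no certificate could be validated, or validation itself failed.
sslproxy_cert_error deny all
```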
Amos Jeffries [Tue, 30 Dec 2014 13:40:33 +0000 (05:40 -0800)]
Fix 64-bit compile issues in rev.13785
The Nettle 3.0 library API imported and used by rev.13785 defines
function symbols with size_t parameters where earlier libraries used
'unsigned'. This matters on 64-bit systems where unsigned is an
'int'-sized type and size_t a 'long'; implicit pointer conversion is
not possible.
Explicitly detect the existence of the size_t API at ./configure time
and use the built-in logic if the supplied Nettle library is an older
version.
Amos Jeffries [Tue, 30 Dec 2014 10:22:29 +0000 (02:22 -0800)]
basic_msnt_multi_domain_auth: Superseded by basic_smb_lm_auth
This helper consisted of a Perl script requiring the special Perl
SMB:Authen module and the Samba nmblookup helper to operate.
It performs the same operations as the basic_smb_lm_auth helper,
so it is not actually needed.
It also carries a slightly ambiguous copyright license: it was
published to the squid-users mailing list as effectively Public
Domain, free for any use, but without an explicit statement to
that effect.
Amos Jeffries [Tue, 30 Dec 2014 09:09:27 +0000 (01:09 -0800)]
Crypto-NG: Base64 crypto replacement
The existing Squid base64 code had ambiguous copyright licensing. In
particular it only referenced a dead URL for source copyright
ownership details. In all likelihood this was an Open Source
implementation, but we don't have a sufficient record of the original
license terms to be certain without a long investigation.
It has also been heavily modified and customized over the decades
since being imported, which complicates the issue a lot.
It also does not match any of the common industry context-based API
patterns for encoders/decoders.
This patch replaces that logic with GPLv2 licensed code from the
Nettle crypto library. Either linking the library dynamically or in
its absence embedding the logic via our libmiscencoding library.
It also updates all code to the new API and, as a byproduct, removes
several layers of deprecated wrapper functions which have accumulated
over the years.
This patch adds a new configuration option, 'pconn_lifetime', to let
users set the desired maximum lifetime of a persistent connection.
When set, Squid will close a now-idle persistent connection that has
exceeded the configured lifetime instead of moving the connection into
the idle connection pool (or equivalent). It has no effect on
ongoing/active transactions. Connection lifetime is the time period
from the connection acceptance or opening time until "now".
This limit is useful in environments with long-lived connections
where Squid configuration or environmental factors change during a
single connection lifetime. If unrestricted, some connections may
last for hours and even days, ignoring those changes that should
have affected their behavior or their existence.
This option has the following behaviour when pipelined requests arrive
on a connection whose lifetime has expired:
1. finish interpreting the Nth request, then check whether
pconn_lifetime has expired;
2. if pconn_lifetime has expired, stop further reading and do not
interpret any already-read raw bytes of the N+1st request;
3. otherwise, read and interpret the raw bytes of the N+1st request
and go to #1.
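A minimal squid.conf sketch (the directive name comes from this change; the time value is illustrative):

```
# Close persistent connections after one hour of total lifetime instead of
# returning them to the idle pool.
pconn_lifetime 1 hour
```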
Amos Jeffries [Sun, 21 Dec 2014 16:28:17 +0000 (08:28 -0800)]
Windows: fix getaddrinfo, getnameinfo, inet_ntop and inet_pton detection
These API symbols are not always defined as functions, and appear in
varying locations. AC_REPLACE_FUNCS cannot handle that kind of
complexity, so we must use AC_CHECK_DECL instead and provide the
sequence of #include directives necessary to identify their existence.
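The shift can be sketched in configure.ac terms (the macro arguments below are illustrative, not the exact Squid probe):

```m4
dnl AC_REPLACE_FUNCS([inet_ntop]) cannot cope with symbols that may be
dnl macros or live in unusual headers; probe the declaration instead,
dnl supplying the needed #include sequence:
AC_CHECK_DECL([inet_ntop],
  [AC_DEFINE([HAVE_DECL_INET_NTOP], [1], [inet_ntop is declared])],
  [],
  [[#include <ws2tcpip.h>]])
```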
Markus Moeller [Fri, 19 Dec 2014 22:16:42 +0000 (14:16 -0800)]
negotiate_kerberos_auth: MEMORY keytab and replay cache support
1) Checks for MEMORY: keytab support and reads the keytab from disk into
MEMORY to improve performance (i.e. the keytab is read only at startup
and never again).
2) Adds an option for the replay cache type. Allows setting the replay
cache to none to improve performance (may reduce security a bit).
3) Adds an option for the replay cache directory. If /var/tmp is not
the best location, you can choose a different one.
Fix peek-and-splice mode: certificate validation for domain mismatch errors
Currently Squid does not check for domain mismatch errors while
validating the server certificate in peek-and-splice mode, even if the
server hostname is known from SNI information or from the CONNECT
request string.
Amos Jeffries [Fri, 19 Dec 2014 16:26:44 +0000 (08:26 -0800)]
MemPool the debug output stream buffers
The CurrentDebug output stream controller for cache.log was
defined as a std::ostringstream object and allocated with
new/delete on each call to debugs().
The std::ostringstream is defined as a templated output stream
which uses the std::allocator<char> built into libc when it is
new()'d. Since this is all internal to the STL library
definitions, it links against the libc global-scope allocator.
However, there is no matching deallocator definition, and when
the object is delete()'d the standard C++ operator overloading
rules make the global-scope SquidNew.h definition of
::operator delete() the method of deallocation. That uses
free() internally.
To resolve the mismatch of new()/free() we must define a
wrapper class with explicit class-scope new/delete operators
instead of relying on weak linkages to overloaded global scope
operators.
As a result of the mismatch, the memory was new()'d and free()'d,
as detected by Valgrind.
Amos Jeffries [Thu, 18 Dec 2014 12:12:33 +0000 (01:12 +1300)]
Bug 1961, Bug 429: Add asterisk to class URL
This does not yet perform any of the outgoing request mapping from
path-less URI required by current RFC 7231.
Squid already allows these URI in OPTIONS and TRACE requests (only).
It does make a start by cleaning up the current special-case handling
of the "*" URI to be matched by the URI class/namespace method and SBuf
comparisons instead of C-strings.
Support http_access denials of SslBump "peeked" connections.
If an SSL connection is "peeked", it is currently not possible to deny it
with http_access. For example, the following configuration denies all plain
HTTP requests as expected but allows all CONNECTs (and all subsequent
encrypted/spliced HTTPS requests inside the allowed CONNECT tunnels):
http_access deny all
ssl_bump peek all
ssl_bump splice all
The bug results in insecure bumping configurations and/or forces admins to
abuse ssl_bump directive (during step1 of bumping) for access control (as a
partial workaround).
This change sends all SSL tunnels (CONNECT and transparent) through http_access
(and adaptation, etc.) checks during bumping step1. If (real or fake) CONNECT is
denied during step1, then Squid does not connect to the SSL server, but bumps
the client connection, and then delivers an error page (in response to the
first decrypted GET). The behavior is similar to what Squid has already been
doing for server certificate validation errors.
Technical notes
----------------
Before these changes:
* When a transparent SSL connection is being bumped, if we decide to splice
during step1, then we splice the connections without any http_access
checks. The (spliced) connection is always established.
* When a CONNECT tunnel is being bumped at step1, if peek/stare/server-first
mode is selected, and our http_access check fails, then:
1) We create an error page and proceed with SSL bumping, expecting
to serve the error after the client SSL connection is negotiated.
2) We start forwarding SSL Hello to the server, to peek/stare at (or
server-first bump) the server connection.
3) If we then decide to splice the connection during step2 or step3, then
we splice, and the error page never gets served to the client!
After these changes:
* During transparent SSL bumping, if we decide to splice at step1, do not
splice the connection immediately, but create a fake CONNECT request first
and send it through the callout code (http_access check, ICAP/ECAP, etc.).
If that fake CONNECT is denied, the code path described below kicks in.
* When an error page is generated during CONNECT or transparent bumping
(e.g. because an http_access check has failed), we switch to the
"client-first" bumping mode and then serve the error page to the client
(upon receiving the first regular request on the bumped connection).
Bug 4164: SEGFAULT when the %W formatting code is used in error pages
Squid will crash inside ErrorState::Dump if no authentication is
configured. In this case ErrorState::auth_user_request is NULL, and
trying to access a method of this object causes Squid to segfault.
Hussam Al-Tayeb [Tue, 16 Dec 2014 12:23:58 +0000 (01:23 +1300)]
Bug 3826: pt 2: Provide a systemd .service file for Squid
Created with help from davidstrauss in #systemd channel and provided
as a working example for package distributors to use. It is not
installed by a 'make install' build of Squid.
For now SMP support is not available to Squid controlled by systemd.
That part of the bug 3826 issue has yet to be resolved.
Amos Jeffries [Thu, 11 Dec 2014 08:35:32 +0000 (00:35 -0800)]
Update Http::ProtocolVersion() to initializer functions
The Http::ProtocolVersion(*) does not work sufficiently well as a class
hierarchy.
Convert Http::ProtocolVersion to two functions:
* Http::ProtocolVersion() providing the default Squid HTTP version
level, and
* Http::ProtocolVersion(unsigned, unsigned) providing the HTTP version
details for the given level.
NP: using two overloaded functions instead of one with default
parameter values because with HTTP/0.x and HTTP/2.x we cannot safely
default just the minor value, i.e. using two functions prevents
mistakenly producing HTTP/2.1, HTTP/0.1 or HTTP/1.0 if the second
parameter is omitted.
All variables must now be of type AnyP::ProtocolVersion, and should be
constructed from an appropriate Foo::ProtocolVersion() function.
Amos Jeffries [Mon, 8 Dec 2014 11:25:58 +0000 (03:25 -0800)]
Update localnet definition for RFC 6890
RFC 6890 details updated IP address reservations for Carrier-Grade NAT
and confirms the registration of the "this" network range's legitimacy
amongst other non-relevant address range allocations.