The following Squid configuration uses a src ACL with sslproxy_cert_error:
acl me src 172.16.101.51
sslproxy_cert_error allow me
Cache log shows that the source IP address is missing when the 'me' ACL
is checked for sslproxy_cert_error:
| ACL::checklistMatches: checking 'me'
| aclIpAddrNetworkCompare: compare: *[::]/[ff...ff] ([::])* vs ...
| aclIpMatchIp: '[::]' NOT found
The problem is that HttpRequest::client_addr is not set for the fake
HTTPS request created to initiate the bump-server-first procedure.
This patch tries to handle at least the following three cases when we are
reporting error details to the user and logging error details:
1) Shallow error: The same code discovers the error and creates the
error page. The request details will be in sync with the error page
details because they are discovered at the same time, by the same code.
2) Honored deep error: Somewhere deep inside, say, ICAP or DNS code, an
error was detected and detailed. The error condition/answer slowly and
asynchronously propagated to the place where the error page is being
created. We want to preserve that original deep detail if any or provide
the current high-level detail if no deep detail is available.
3) Bypassed deep error1 followed by error2: Somewhere deep inside, say,
ICAP or DNS code, error1 was detected and detailed. The error1 condition
started propagating up but the ICAP or DNS transaction was eventually
successfully retried. Later, a deep or shallow error2 was discovered.
The error1 detail became irrelevant when we started retrying the failed
transaction.
This patch:
- Resets the error details when ICAP transactions are retried, when adaptation
  services are retried or replaced by a fail-over service, and when the
  forwarding code retries the connection to the destination servers.
- On SslBump errors, logs the error details in both the master CONNECT request
  and the first tunnelled GET request. To achieve this, the error details
  of the bump-server-first fake request are saved to the CONNECT HttpRequest
  object, and the logging of the CONNECT request is delayed until we have the
  bump-server-first answer (freeAllContexts is called after the SSL server
  answered).
- Fixes the cases where we set custom error codes (internal Squid error codes)
  on the ErrorState::xerrno member. This member is only for system errors.
- We should not set ErrorState::xerrno to a system errno unless we know
  that the current system error triggered the error page generation.
  This patch sets this member only to the system errno passed by a Squid
  API (e.g., the AsyncCalls API).
  This also fixes a possible bug in the gopher.cc subsystem.
- We now set HttpRequest::detailError inside the ErrorState::BuildHttpReply
  method, where we have all the information required to correctly build
  the HttpRequest error details.
Alex Rousskov [Fri, 20 Apr 2012 17:18:17 +0000 (11:18 -0600)]
Polished sslproxy_cert_sign and sslproxy_cert_adapt documentation.
Most importantly, we now explicitly document that sslproxy_cert_adapt stops
searching for other ACL matches within the same adaptation algorithm group
once the first matching sslproxy_cert_adapt is found within an adaptation
algorithm group.
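For example, with a hypothetical configuration such as the following (the
intranet ACL name and domains are invented for illustration):

acl intranet dstdomain .example.com
sslproxy_cert_adapt setCommonName{proxy.example.com} intranet
sslproxy_cert_adapt setCommonName all

If the intranet ACL matches, the first setCommonName rule is applied and the
second setCommonName line is never evaluated; rules belonging to a different
adaptation algorithm group, such as setValidAfter, would still be checked.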
Ssl certificate domain mismatch errors on IP-based URLs
The ssl::certDomainMismatch ACL cannot be used with IP-based URLs. For example,
let's assume that a user enters https://74.125.65.99/, using a Google IP
address in the URL instead of www.google.com. If SslBump is used with
"sslproxy_cert_error allow all" and "sslproxy_cert_adapt setCommonName ssl::certDomainMismatch",
the browser displays a "Server's certificate does not match the URL"
error.
This is because, in all cases where we have the IP address instead of the
hostname, we detect the certificate domain mismatch error only when the first
GET request comes. At the time the sslproxy_cert_adapt access list is
processed, the error has not been detected yet.
For intercepted connections this is the desired behaviour.
This patch fixes bump-server-first to check for domain-mismatch errors while
retrieving the SSL certificate from the server, hoping that CONNECT uses
a user-entered address (a host name or a user-entered IP).
One can hit this assertion when there are two "sslproxy_cert_adapt setCommonName"
configuration lines, the first with a parameter but the second without one:
sslproxy_cert_adapt setCommonName{toThisOne} AN_IP
sslproxy_cert_adapt setCommonName AN_IP
Inside the ConnStateData::buildSslCertGenerationParams method, inside the loop
for (sslproxy_cert_adapt *ca = Config.ssl_client.cert_adapt; ....) {...}
the second time the loop is entered, param is NULL and is never set because
certProperties.setCommonName is already set, so we hit the assertion.
* relay "Permanent Redirect" message on status line
* MAY cache these responses with heuristics
* accept this status as a redirect status from URL redirectors
Alex Rousskov [Tue, 10 Apr 2012 04:26:14 +0000 (22:26 -0600)]
Bug 3441: Part 3: Replace corrupted v1 swap.state with new v2 format.
A fix for bug 3408 changed the offset at which we start writing dirty
swap.state entries from StoreSwapLogHeader::record_size to StoreSwapLogHeader
size. However, the log-reading code still read the entries starting from the
old offset (which is required to remain compatible with how a clean swap.state
is written).
Wrong starting offset essentially means that the vast majority of read
swap.state entries were bogus. They could only match some real entry when 64*n
is divisible by 12 and perhaps when their random data just happened to match a
real entry. Part 2 of this bug fix (trunk r11995) started to pad the [dirty]
swap.state header to start entry writing at StoreSwapLogHeader::record_size
boundary.
Changes specific to Part 3:
Unfortunately, since old v1 logs could contain completely bogus entries as the
result of being read (at some point) from the wrong offset, we should not load
v1 logs any more (neither dirty nor clean because what looks clean now could
be based on a previously dirty and, hence, corrupted log). This forced us to
raise the swap.state format version from 1 to 2.
After this change, if a v1 swap log is detected, Squid ignores it and does a
from-directory rebuild as if no swap.state files were found.
Since we had to change swap.state format version, we also made log entry size
and composition the same across virtually all platforms; added checksums so
that a similar bug would not go unnoticed for so long (and would not result in
log corruption); and increased the size of time-related entries to avoid the
"year 2038" problem.
The swap log entries are still written to disk in host byte order.
We now also zero the [dirty] swap.state header padding to prevent random and
potentially sensitive garbage in logs.
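The checksum idea can be sketched as follows. This is a minimal illustration
only: the field names, sizes, and XOR scheme below are hypothetical and do not
reproduce Squid's actual StoreSwapLogData layout.

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Hypothetical v2 swap log entry with a fixed-size, checksummed layout.
struct SwapLogEntryV2 {
    uint8_t  op;             // entry kind: add or delete
    uint8_t  checksum;       // covers every other byte of the entry
    int32_t  swap_filen;     // on-disk file number
    int64_t  timestamp;      // 64-bit times avoid the "year 2038" problem
    int64_t  lastref;
    int64_t  expires;
    int64_t  lastmod;
    uint64_t swap_file_sz;
    uint16_t refcount;
    uint16_t flags;
    unsigned char key[16];   // MD5 of the cache key
};

// XOR all bytes except the checksum byte itself. An entry read from a
// wrong offset (i.e., effectively random bytes) is unlikely to verify.
uint8_t swapLogChecksum(const SwapLogEntryV2 &e) {
    const unsigned char *bytes = reinterpret_cast<const unsigned char *>(&e);
    const size_t skip = offsetof(SwapLogEntryV2, checksum);
    uint8_t sum = 0;
    for (size_t i = 0; i < sizeof(e); ++i)
        if (i != skip)
            sum ^= bytes[i];
    return sum;
}
```

A writer would zero the whole entry before filling it in (just as the patch
zeroes the header padding), so that padding bytes cannot make the checksum
nondeterministic.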
Cache index rebuild kinds are now reported using the following three labels:
* Rebuild using a swap log created by Squid during clean shutdown: "clean log"
* Rebuild using a swap log accumulated by a running Squid: "dirty log"
* Rebuild using directory scan: "no log"
The first kind used to be reported as CLEAN and the other two as DIRTY rebuild.
Customers want Squid to detect domain mismatch as early as possible so
that Squid uses a minimal valid fake certificate and serves a
[customized] Squid error. Currently, the browser always displays the
built-in error even if CONNECT has a domain name.
This patch tells Squid to use the CONNECT host name if not peeking, or if it
is not an IP while peeking. It also sends SNI information if the host name is
not an IP.
Also, on error pages, the server certificate CN is used as the hostname only
if the CONNECT host name is not an IP address.
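The host-selection rule above can be sketched like this. The function and
parameter names, and the crude IPv4 test, are hypothetical illustrations, not
Squid's actual code:

```cpp
#include <cctype>
#include <string>

// Crude check: treat a string of digits and dots as a raw IPv4 address.
static bool looksLikeRawIPv4(const std::string &host) {
    if (host.empty())
        return false;
    for (const char c : host)
        if (!std::isdigit(static_cast<unsigned char>(c)) && c != '.')
            return false; // anything else means a domain name
    return true;
}

// Pick the hostname used for the generated (fake) certificate.
std::string hostForFakeCert(const std::string &connectHost,
                            const std::string &serverCertCN,
                            const bool peeking) {
    // Use the CONNECT host when not peeking, or when it is a real
    // domain name (not an IP) even while peeking.
    if (!peeking || !looksLikeRawIPv4(connectHost))
        return connectHost;
    // Otherwise fall back to the CN of the true server certificate.
    return serverCertCN;
}
```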
Add checks to ensure that a cached certificate is valid for the current request
Add checks in Ssl::certificateMatchesProperties to ensure:
- The CN of the cached certificate matches the requested CN
- The "Not After" and "Not Before" fields of the cached certificate are valid
The Ssl::CommonHostName and getOrganization functions are moved to gadgets.cc
to allow use by the ssl_crtd daemon
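The two checks can be sketched as below. This is a hypothetical illustration
(struct, field, and function names invented here); the real code inspects X509
fields via OpenSSL rather than a plain struct:

```cpp
#include <ctime>
#include <string>

// Simplified view of the properties of a cached certificate.
struct CachedCertInfo {
    std::string commonName;
    time_t notBefore; // start of the validity period
    time_t notAfter;  // end of the validity period
};

// A cached certificate is reusable only if its CN matches the requested
// CN and its validity window covers the current time.
bool cachedCertMatches(const CachedCertInfo &cert,
                       const std::string &requestedCN,
                       const time_t now) {
    if (cert.commonName != requestedCN)
        return false; // CN mismatch: cannot reuse
    if (now < cert.notBefore || now > cert.notAfter)
        return false; // expired or not yet valid
    return true;
}
```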
Amos Jeffries [Thu, 29 Mar 2012 09:22:41 +0000 (21:22 +1200)]
Polish: de-duplicate UDP port dialers
This creates a Comm::UdpOpenDialer class which replaces the ICP, HTCP and
SNMP start-listening dialer classes. Their code was very close to
identical anyway.
ICP and HTCP can now also use the dialer's Comm::Connection parameter
instead of assuming that the callback relates to the global incoming
port variable.
According to RFC 5280: "When extensions are used, as expected in this profile,
version MUST be 3 (value is 2)". This patch sets the generated certificate's
version to 3 when the subjectAltName extension is copied from the mimicked
certificate.
At least one case has been reported where Squid crashed with a segfault
because the signing certificate was NULL.
This patch:
- Adds assertion checks inside the buildSslCertGenerationParams and
  Ssl::certificateMatchesProperties functions to avoid segfaults
- If the signing certificate is not given in the http_port configuration,
  or the given certificate filename is not valid, Squid does not start.
- Creates the http_port_list::configureSslServerContext method and moves
  into it the cache_cf.cc code that was responsible for initializing SSL
  contexts and the SslBump feature.
Honor the "deny" part of "foobar deny ACL" options - Temporary patch
When AuthenticateAcl() and aclMatchExternal() were converted to use extended
authentication ACL states (r11644 and r11645 dated 2011-08-14), the result of
those function calls was set as the current checklist answer. This was
incorrect because those functions do not make allow/deny decisions. They only
tell us whether the ACL part of the allow/deny rule matches. If there is a
match, the ACCESS_ALLOWED/ACCESS_DENIED answer depends on whether it is an
allow or deny rule.
For example, "http_access deny BadGuys" should deny access when the BadGuys
ACL matches, but it was allowing access instead.
Without authentication, bump-server-first CONNECT requests allow uncontrolled
SSL handshakes with origin servers, which is not desirable if the proxy
operator does not want to allow users to access external resources anonymously.
Authenticating CONNECT requests is troublesome because when CONNECT
authentication fails, the proxy has difficulties communicating details of the
error to the browser, due to security vulnerabilities discussed at
https://bugzilla.mozilla.org/show_bug.cgi?id=479880
This patch implements the following logic to allow for seamless authentication
of CONNECT requests in a bump-server-first setup:
- Process http_access. Authenticate CONNECT request if needed, which may
require several HTTP CONNECT exchanges. This should be already supported.
- If access is allowed, use Connect-To-Server-First (for bumped connections)
  or normal TCP tunneling (for regular connections). This should be already
  supported.
- If access is denied, check ssl_bump and delay the error (for bumped
connections) or serve the error immediately (for regular connections).
This needs work.
"Delaying the error" in this context means remembering the error, responding
with 200 Established, establishing a bumped secure connection with the client,
not connecting to the origin server at all, and serving the error to the client
when the first encapsulated request comes.
Alex Rousskov [Thu, 8 Mar 2012 01:50:04 +0000 (18:50 -0700)]
Do not assert if we fail to compose ssl_crtd request. Do blocking generation.
Users report assertions when OpenSSL fails to write a true server certificate
to memory. Since that certificate is received from a 3rd party, we should
not assert that it is writeable. Besides, OpenSSL may have limitations/bugs
even if dealing with valid certificates.
If we fail to compose a request, we now try the good old blocking in-process
certificate generation.
Currently, it is not known what exactly causes OpenSSL to fail as we are
unable to trigger the assertion in a controlled test.
Bug fix: ssl_crtd crashes when accessing HTTPS sites with a domain name exceeding 64 characters
Squid tries to generate a certificate for long domain names, which is not
possible.
According to RFC 5280 (Section A.1), the common name length in a certificate
can be at most 64 characters. Therefore it is not possible to generate a valid
certificate with the above domain name as common name.
This patch does not allow the use of common names longer than 64 bytes in the
setCommonName adaptation algorithm. Also, if OpenSSL fails to read the subject
name from the mimicked certificate, no subject is set on the generated
certificate (currently, ssl_crtd crashes).
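The 64-byte limit check might look like this. The constant comes from
RFC 5280's ub-common-name upper bound; the function name is a hypothetical
illustration, not Squid's actual code:

```cpp
#include <string>

// RFC 5280 (Appendix A.1) limits a certificate common name to 64 characters.
const std::string::size_type MaxCommonNameLength = 64;

// Only non-empty names within the RFC limit may become a fake cert's CN.
bool canUseAsCommonName(const std::string &name) {
    return !name.empty() && name.size() <= MaxCommonNameLength;
}
```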
Simplify Ssl::ServerPeeker and rename it to Ssl::ServerBump
Currently, Ssl::ServerPeeker is a job that does virtually nothing and leaks on
SSL connection establishment errors. This patch converts this class into a
non-job class which ConnStateData uses to maintain SSL certificate peeking
state.
The Ssl::ServerPeeker class is renamed to Ssl::ServerBump.
ConnStateData::bumpErrorEntry, ConnStateData::bumpSslErrorNoList, and
ConnStateData::bumpServerCert are now members of this class.
Bug fix: The '%I' formatting code in error pages shows '[unknown]' in the case of a SQUID_X509_V_ERR_DOMAIN_MISMATCH error
Update the server IP address and server hostname in the HttpRequest::hier
object in the case of a SQUID_X509_V_ERR_DOMAIN_MISMATCH error, to be used
for the %I field of the generated error page
Bug fix: Current serial number generation code does not produce a stable serial number for self-signed certificates
Squid always sends the signing certificate to the ssl_crtd daemon, even for
self-signed certificates, because the signing certificate may be used by the
certificate adaptation algorithms. The ssl_crtd daemon currently ignores the
signing certificate in the case of self-signed certificates. As a result, a
random number is used as the serial number of the generated certificate.
This patch also uses 0 as the serial number of the temporary intermediate
certificate used to generate the final certificate serial number, in the case
where the signing certificate is not given.
Alex Rousskov [Wed, 29 Feb 2012 06:32:14 +0000 (23:32 -0700)]
Better helper-to-Squid buffer size management.
The minimum buffer size is reduced from 8KB to 4KB after a squid-dev
discussion to prevent wasting of "several hundred KB of unused permanent
memory on some installations".
We now increase the buffer if we cannot parse the helper response message.
The maximum buffer size is now 32KB. This should be enough for all known
helper responses.
We now warn if the read buffer reaches its capacity and kill the offending
helper explicitly. An increase in maximum buffer capacity to 32KB should make
such events rare.
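The grow-then-kill policy above can be sketched as follows. The 4KB and 32KB
sizes come from the text; the function and constant names are hypothetical:

```cpp
#include <cstddef>

const size_t MinHelperBufSize = 4 * 1024;  // initial read buffer size
const size_t MaxHelperBufSize = 32 * 1024; // hard cap on buffer growth

// Returns the buffer capacity to try after a failed response parse,
// or 0 to signal that the helper should be warned about and killed
// because the buffer already reached its maximum capacity.
size_t nextHelperBufSize(const size_t current) {
    if (current >= MaxHelperBufSize)
        return 0; // cannot grow further: kill the offending helper
    const size_t doubled = current * 2;
    return doubled > MaxHelperBufSize ? MaxHelperBufSize : doubled;
}
```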
Motivation: ssl_crtd helper may produce responses exceeding 9907 bytes in size
(and possibly much larger if multiple chained certificates need to be returned
to Squid). The old helper.cc code would fill the read buffer completely,
schedule a read for zero bytes, receive zero bytes, declare an EOF condition,
and close the stream (which kills ssl_crtd). Due to insufficient information
logged, the observable symptoms were pretty much the same as if ssl_crtd
closed the stream first, indicating a ssl_crtd bug.
The "trimMemory for unswappable objects" fix (trunk r11969,r11970) exposed
ENTRY_SPECIAL objects such as icons to memory trimming which they cannot
handle because Squid does not reload the missing parts of special objects.
Further testing showed that a special object may be cached in shared RAM
and/or might even be purged from all caches completely. This may explain why
icons eventually "disappear" in some environments.
We now treat special objects as not belonging to memory or disk Stores and do
not ask those Stores to manage them. This separation needs more work, but it
passes basic tests, and it is the right step towards creating a proper storage
dedicated to those objects.
Alex Rousskov [Tue, 28 Feb 2012 23:45:23 +0000 (16:45 -0700)]
Better helper-to-Squid buffer size management.
HONDA Hirofumi [Tue, 28 Feb 2012 17:52:21 +0000 (10:52 -0700)]
Bug 3502: client timeout uses server-side read_timeout, not request_timeout
I have also adjusted request_timeout description in squid.conf to clarify that
request_timeout applies to receiving complete HTTP request headers and not
just the first header byte. We reset the connection timeout to
clientLifetimeTimeout after parsing request headers.
https_port was correctly using Config.Timeout.request already.
Alex Rousskov [Tue, 28 Feb 2012 00:30:47 +0000 (17:30 -0700)]
Bug 3497: Bad ssl_crtd db size file causes infinite loop.
The db size file may become empty when Squid runs out of disk space. Ignoring
db size reading errors led to bogus db sizes used as looping condition. This
fix honors reading errors and also terminates the loop when no more
certificates can be removed. Both errors and removal failure are fatal to
ssl_crtd.
A positive side-effect of this fix is one less call to the relatively
expensive file-reading size()/readSize() methods under normal conditions.
I also removed "minimum db size" check because it did not seem to be in sync
with other ssl_crtd parameters such as fs block size and because its overall
purpose was unclear. The check was also removed by the original bug reporter.
TODO: Remaining problems include: ssl_crtd should not exit just because it
cannot write something to disk. A proper reporting/debugging API is missing.
Guy Helmer [Tue, 28 Feb 2012 00:22:38 +0000 (17:22 -0700)]
Bug 3497: Bad ssl_crtd db size file causes infinite loop.
When there is an error and we know the intended server name from CONNECT
request, we should use that name for the CN in the fake certificate instead
of mimicking the received true server certificate CN.
Alex Rousskov [Sat, 25 Feb 2012 16:44:36 +0000 (09:44 -0700)]
Mark requests on re-pinned connections to avoid them being pconnPush()ed
causing "fd_table[conn->fd].halfClosedReader != NULL" comm assertions later.
Forward.cc comments imply that request->flags.pinned is set by ConnStateData
but that is a lie. The flag is set by forward.cc itself. It was set for PINNED
peers having a valid pinned connection only. When we retry a pinned pconn
race, we still have a PINNED peer but the failed connection prevents us from
setting the flag. If we successfully re-pin later, we must set the flag.
request->flags.pinned essentially means "the connection is or should be
pinned".
Alex Rousskov [Fri, 24 Feb 2012 22:29:40 +0000 (15:29 -0700)]
Ssl::ServerPeeker must become a store_client to prevent [error] entry trimming
We do not need a store client during the certificate peeking stage because we
do not send the error to the client and only accumulate what is being written
to the store. However, if there is no store client then Store will trim the
entry and we will hit a