basic_smb_auth.sh delivers the credentials via the environment in
the form "$USER%$PASSWORD", which is not what smbclient expects. This seems to
result from obsolete or misleading smbclient documentation. While it is
perfectly valid to deliver the credentials in this form via the command-line
parameter -U, for example in
Jeff Licquia [Fri, 31 Jul 2015 20:13:45 +0000 (13:13 -0700)]
basic_smb_auth: doesn't handle passwords with backslashes
From: Jeff Licquia <jlicquia@scinet.springfieldclinic.com>
Subject: squid: SMB auth proxy has problems with some passwords
Date: Tue, 18 Jul 2000 12:45:01 -0500 (CDT)
The SMB authenticator doesn't handle passwords with backslashes in them
correctly. The fix appears to be easy; just put a -r in the "read SMBPASS"
line in smb_auth.sh.
John M Cooper [Fri, 31 Jul 2015 20:12:12 +0000 (13:12 -0700)]
basic_smb_auth: nmblookup fails when smb.conf contains WINS servers
From: John M Cooper
To: Debian Bug Tracking System
Subject: squid: smb_auth does not work with a wins server defined in smb.conf
Date: 28 Jan 2002 17:46:13 +0000
If you define a WINS server in the file /etc/samba/smb.conf then the
smb_auth script gets the wrong Domain Controller IP address.
There should be a change to smb_auth.sh at line 50:
adding the extra "\..+" to the pattern stops the WINS server entries
from being returned by the nmblookup command.
Increasingly, code used inside squid.conf parsing is capable of throwing
exceptions to signal errors. Catch any unexpected exceptions that reach
the config parse initiator(s) and report them as a FATAL event before
self-destructing.
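As a rough stand-alone sketch of that pattern (a hypothetical parseConfigFile(),
not Squid's actual parser entry point):
    #include <cstdlib>
    #include <iostream>
    #include <stdexcept>

    // Placeholder for the real squid.conf parsing, which may throw on errors.
    static void parseConfigFile(const char *path)
    {
        if (!path)
            throw std::runtime_error("no configuration file given");
        // ... parse directives here ...
    }

    int main(int argc, char **argv)
    {
        try {
            parseConfigFile(argc > 1 ? argv[1] : nullptr);
        } catch (const std::exception &e) {
            // report as a FATAL event before self-destructing
            std::cerr << "FATAL: bad configuration: " << e.what() << std::endl;
            return EXIT_FAILURE;
        } catch (...) {
            std::cerr << "FATAL: bad configuration: unknown exception" << std::endl;
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }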
Bug 3345: Support %un (any available user name) format code for external ACLs.
The same %un code, with the same meaning, is already supported in access.log.
In an external ACL request, it expands to the first available user name
from the following list of information sources:
- authenticated user name, like %ul or %LOGIN
- user name supplied by an external ACL to Squid via the "user=..."
key=value pair, like %ue or %EXT_USER
- SSL client name, like %us
- ident user name, like %ui
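For illustration, a hypothetical squid.conf fragment passing %un to an external
ACL helper (the helper path and ACL names here are made up):
    external_acl_type usercheck %un /usr/local/libexec/ext_user_check
    acl knownUsers external usercheck
    http_access allow knownUsers
The helper then receives whichever user name was available first from the list
of sources above.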
Based on Amos Jeffries 2011 patch and "arronax28" design:
http://www.squid-cache.org/mail-archive/squid-dev/201112/0080.html
with TODO completion by Measurement Factory
... from 8 to 8196 before initial congestion message appears.
Modern networks can be quite busy and even amateur installations have a
much higher I/O throughput than Squid was originally designed for. This
often results in a series of "Queue congestion" warnings appearing on
startup before Squid learns what the local environment requires.
The new limit helps to cater for this and reduce the frequency of
unnecessary warnings. They may still occur, so debug output is also
updated to show what the queue length has grown to with each warning.
Also update the congestion counter from 32-bit to 64-bit unsigned,
since the new limit already consumes half the available growth bits in
a 32-bit integer.
NP: this update was triggered by reports from admins whose proxies needed
to expand AIO queues to over 4K entries on startup.
Improve handling of client connections on shutdown
When Squid instances which are processing a lot of traffic, using persistent
client connections, or dealing with long-duration requests are shut down,
they can exit with a lot of connections still open. The
shutdown_lifetime directive exists to allow time for existing
transactions to complete, but this is not always possible and has no
effect on idle connections.
The result is a large dump of aborted FD entries being logged as the TCP
sockets get abruptly reset, and potentially the cache objects of still-active
transactions being "corrupted" in the process.
This makes ConnStateData and its children implement Runner API callbacks
to receive signals about Squid shutdown, which allows their close()
handlers to be run properly and make use of the AsyncCalls API. Idle client
connections are closed immediately on the startShutdown() signal, so the
CPU cycles spent closing them happen during the shutdown grace period.
An extra 0-delay event step is added to the SignalEngine shutdown sequence,
and a new Runner registry hook 'endingShutdown' is added to signal that
the shutdown_lifetime grace period for closing active transactions is over.
All network FD sockets should be considered unusable for read()/write() at
that point since close handlers may have already been scheduled by other
Runners. AsyncCalls may still be scheduled to release resources.
Also adds a DeregisterRunner() API action to remove Runners dynamically
from the registered set.
* shutdown grace period ends:
- remaining client connections closed
* shutdown finishes:
- main signal and Async loop halted
- all memory free'd
Server connections which are PINNED or in active use during the
endingShutdown execution will be closed cleanly as a side-effect of the
client closures. Otherwise there is no change (yet) to the behaviour of
server connections or other FD sockets on shutdown.
Avoid SSL certificate db corruption with empty index.txt as a symptom.
* Detect cases where the size file is corrupted or has a clearly wrong
value. Automatically rebuild the database in such cases.
* Teach ssl_crtd to keep running if it is unable to store the generated
certificate in the database. Return the generated certificate to Squid
and log an error message in such cases.
Background:
There are cases where ssl_crtd may corrupt its certificate database.
The known cases manifest themselves with an empty db index file. When
that happens, ssl_crtd helpers quit, SSL bumping does not work any more,
and the certificate DB has to be deleted and re-initialized.
We do not know exactly what causes corruption in deployments, but one
known trigger that is easy to reproduce in a lab is the block size
change in the ssl_crtd configuration. That change has the following
side-effects:
1. When ssl_crtd removes certificates, it computes their size using a
different block size than the one used to store the certificates.
This may result in negative database sizes.
2. Signed/unsigned conversion results in a huge number near LONG_MAX,
which is then written to the "size" file.
3. The ssl_crtd helper removes all certificates from the database trying to
make space for new certificates.
4. The ssl_crtd helper refuses to store new certificates because the
database size (as described by the "size" file) still exceeds the
configured limit.
5. The ssl_crtd helper exits because it cannot store new certificates
in the database. No helper response is sent to Squid in this case.
Most likely, there are other corruption triggers -- the database
management code is of overall poor quality. This change resolves some
of the underlying problems in the hope of addressing at least some of the
unknown triggers as well as the known one.
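As a stand-alone illustration of steps 1 and 2 above (simplified arithmetic,
not the actual ssl_crtd code):
    #include <iostream>

    int main()
    {
        long dbSize = 2048;        // bytes recorded in the "size" file
        long removedBytes = 4096;  // certificate size computed with the *new* block size
        dbSize -= removedBytes;    // the recorded database size goes negative

        // Writing the value back through an unsigned type turns it into a huge
        // positive number, so the database then always "exceeds" its size limit.
        const unsigned long stored = static_cast<unsigned long>(dbSize);
        std::cout << dbSize << " is stored as " << stored << std::endl;
        return 0;
    }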
Errors served using invalid certificates when dealing with SSL server errors.
When a bumping Squid needs to send a Squid-generated error "page" over a
secure connection, it needs to generate a certificate for that connection.
Prior to these changes, several scenarios could lead to Squid generating
a certificate that clients could not validate. In those cases, the user would
get a cryptic and misleading browser error instead of a Squid-generated
error page with useful details about the problem.
One example is a server certificate that is rejected by the certificate
validation helper. Squid no longer uses the CN from that certificate to
generate a fake certificate.
Another example is a user accessing an origin server using one of its
"alternative names" and getting a Squid-generated certificate containing just
the server common name (CN).
These changes make sure that the certificate for error pages is generated
using the SNI (when peeking or staring, if available) or the CONNECT host name
(including in server-first bumping mode). We now update the
ConnStateData::sslCommonName field (used as the CN field for generated
certificates) only _after_ the server certificate is successfully validated.
Currently, Squid cannot redirect intercepted connections that are subject to
SslBump rules to _originserver_ cache_peer. For example, consider Squid that
enforces "safe search" by redirecting clients to forcesafesearch.example.com.
Consider a TLS client that tries to connect to www.example.com. Squid needs to
send that client to forcesafesearch.example.com (without changing the host
header and SNI information; those would still point to www.example.com for
safe search to work as intended!).
The admin may configure Squid to send intercepted clients to an originserver
cache_peer with the forcesafesearch.example.com address. Such a configuration
does not currently work together with ssl_bump peek/splice rules.
This patch:
* Fixes src/neighbors.cc bug which prevented CONNECT requests from going
to originserver cache peers. This bug affects both true CONNECT requests
and intercepted SSL/TLS connections (with fake CONNECT requests). Squid
used CachePeer::in_addr.port, which is apparently not meant to be used for the
HTTP port. HTTP checks should use CachePeer::http_port instead.
* Changes Squid to not initiate SSL/TLS connection to cache_peer for
true CONNECT requests.
* Allows forwarding being-peeked-at (or stared-at) connections to originserver
cache_peers.
The bug fix described in the first bullet makes the last two changes
necessary.
Alex Rousskov [Wed, 1 Jul 2015 06:26:38 +0000 (23:26 -0700)]
Do not blindly forward cache peer CONNECT responses.
Squid blindly forwards cache peer CONNECT responses to clients. This
may break things if the peer responds with something like HTTP 403
(Forbidden) and keeps the connection with Squid open:
- The client application issues a CONNECT request.
- Squid forwards this request to a cache peer.
- Cache peer correctly responds back with a "403 Forbidden".
- Squid does not parse cache peer response and
just forwards it as if it was a Squid response to the client.
- The TCP connections are not closed.
At this stage, Squid is unaware that the CONNECT request has failed. All
subsequent requests on the user agent TCP connection are treated as
tunnelled traffic. Squid is forwarding these requests to the peer on the
TCP connection previously used for the 403-ed CONNECT request, without
proper processing. The additional headers which should have been applied
by Squid to these requests are not applied, and the requests are being
forwarded to the cache peer even though the Squid configuration may
state that these requests must go directly to the origin server.
This fixes Squid to parse cache peer responses and, if an error response is
found, respond with "502 Bad Gateway" to the client and close the
connections.
Amos Jeffries [Sun, 28 Jun 2015 10:13:58 +0000 (03:13 -0700)]
Use relative-URL in errorpage.css for SN.png
Modern browsers now seem to be accepting relative-URLs in CSS, and Squid
global_internal_static non-https:// URLs are working (bug 4132). So we
can do this now without as many failures.
Amos Jeffries [Sun, 28 Jun 2015 10:09:15 +0000 (03:09 -0700)]
Fix CONNECT failover to IPv4 after trying broken IPv6 servers
This makes CONNECT tunnel connection attempts obey forward_timeout
and continue retrying instead of aborting with a client error when one
possible server hits a connect_timeout.
Alex Rousskov [Sun, 28 Jun 2015 10:05:58 +0000 (03:05 -0700)]
Fixed segfault when freeing https_port clientca on reconfigure or exit.
AnyP::PortCfg::clientCA list was double-freed because the SSL context takes
ownership of the STACK_OF(X509_NAME) supplied via SSL_CTX_set_client_CA_list(),
but Squid was not aware of that. Squid now supplies a clone of clientCA.
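A hedged illustration of the resulting ownership rule, using plain OpenSSL
calls rather than the actual Squid code:
    #include <openssl/ssl.h>

    // SSL_CTX_set_client_CA_list() takes ownership of the stack it is given,
    // so pass a clone and keep freeing the original list ourselves.
    static void installClientCaList(SSL_CTX *sslContext, STACK_OF(X509_NAME) *clientCA)
    {
        if (sslContext && clientCA)
            SSL_CTX_set_client_CA_list(sslContext, SSL_dup_CA_list(clientCA));
        // clientCA is still owned by the caller and is released later with
        // sk_X509_NAME_pop_free(clientCA, X509_NAME_free).
    }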
This bug can be caused by certificates that do not contain a CN field. In this
case the Ssl::ErrorDetail::cn method may return NULL, causing an assertion
somewhere inside the Ssl::ErrorDetail::buildDetail method, which always expects
a non-NULL value from Ssl::ErrorDetail::cn and similar methods.
This patch hardens the Ssl::ErrorDetail error formatting functions to
always check for NULL values and also to avoid sending wrong information
for various certificate fields in the case of an error while extracting the
information from the certificate.
Fix assertion comm.cc:759: "Comm::IsConnOpen(conn)" in ConnStateData::getSslContextDone
This is an assertion inside ConnStateData::getSslContextDone while
setting a timeout. The reason is that the ConnStateData::clientConnection
may be closed while waiting for a response from the ssl_crtd helper.
Amos Jeffries [Fri, 5 Jun 2015 23:38:34 +0000 (16:38 -0700)]
Bug 3875: bad mimeLoadIconFile error handling
Improve the MimeIcon reliability when filesystem I/O errors or others
cause the icon data to not be loadable.
The loading process is re-worked to guarantee that once the
MimeIcon::created callback occurs it will result in a valid StoreEntry in
the cache representing the wanted icon.
* If the image can be loaded without any issues it will be placed in
the cache as a 200 response.
* If errors prevent the image being loaded or necessary parameters
(size and mtime) being known, a 204 object will be placed into the cache.
NP: There is no clear agreement on 204 being 'the best' status for this
case. 500 Internal Error is also appropriate. I have used 204 since:
* the bug is not in the client's request (eliminating 400, 404, etc),
* a 500 would be revealing details about server internals unnecessarily
often and incur extra complexity creating the error page.
* 204 also avoids needing to send Content-Length, Cache-Control headers
and a body object (bandwidth saving over a 500 status).
NP: This started with just correcting the errno usage, but other bugs
promptly started appearing once I got to seriously testing this load
process. So far it fixes:
* several assertions resulting from StoreEntry being left invalid in
cache limbo between created hash entries and valid mem_obj data.
* repeated attempts on startup to load absent icon files which don't
exist in the filesystem.
* buffer overflow on misconfigured or corrupt mime.conf file entries
* incorrect debugs() messages about file I/O errors
* large error pages delivered when icons are not installed (when it does
not assert from the StoreEntry)
This patch allows user_cert and ca_cert ACLs to match arbitrary
stand-alone OIDs (not DN/C/O/CN/L/ST objects or their substrings).
For example, it should be able to match certificates that have the
1.3.6.1.4.1.1814.3.1.14 OID in the certificate Subject or Issuer field.
The Squid configuration would look like this:
acl User_Cert-TrustedCustomerNum user_cert 1.3.6.1.4.1.1814.3.1.14 1001
Bug 3329: The server side pinned connection is not closed properly
... in ConnStateData::clientPinnedConnectionClosed CommClose handler.
Squid enters a buggy state when an idle connection pinned to a peer closes:
- The ConnStateData::clientPinnedConnectionRead, the pinned peer
connection read handler, is called with the io.flag set to
Comm::ERR_CLOSING. The read handler does not close the peer
Comm::Connection object. This is correct and expected -- the I/O
handler must exit on ERR_CLOSING without doing anything.
- The ConnStateData::clientPinnedConnectionClosed close handler is called,
but it does not close the peer Comm::Connection object either. Again,
this is correct and expected -- the close handler is not the place to
close a being-closed connection.
- The corresponding fde object is marked as closed (fde::flags.open
is false), but the peer Comm::Connection object is still open
(Comm::Connection.fd >= 0)! From this point on, we have an inconsistency
between the peer Comm::Connection object state and the real world.
- When the ConnStateData::pinning::serverConnection object is later
destroyed (by refcounting), it will try to close its fd. If that fd
is already in use (e.g., by another Comm::Connection), bad things
happen (crashes, segfaults, etc). Otherwise (i.e., if that fd is
not open), comm_close may cry about BUG 3556 (or worse).
To fix this problem, we must not allow Comm::Connections to get out
of sync with fd_table, even when a descriptor is closed without going
through Connection::close(). There are two ways to accomplish that:
* Change Comm to always store Comm::Connections and similar high-level
objects instead of fdes. This is a huge change that has been long on
the TODO list (those "other high-level objects" are one of the primary
obstacles there because not everything with an FD is a Connection).
* Notify Comm::Connections about closure in their closing handlers
(this change). This design relies on every Comm::Connection having
a close handler that notifies it. It may take us some time to reach
that goal, but this change is the first step providing the necessary
API, a known bug fix, and a few preventive changes.
This change:
- Adds a new Comm::Connection::noteClosure() method to inform the
Comm::Connection object that somebody is closing its FD.
- Uses the new method inside ConnStateData::clientPinnedConnectionClosed
handler to inform the ConnStateData::pinning::serverConnection object
that its FD is being closed.
- Replaces comm_close calls which may cause bug #3329 in other places with
Comm::Connection->close() calls.
Initially based on Nathan Hoad research for bug 3329.
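A stand-alone sketch of the noteClosure() idea, using a hypothetical simplified
class rather than the real Comm::Connection:
    #include <unistd.h>

    // A close handler can tell the object that its descriptor is already being
    // closed, so the destructor will not later close (and possibly clobber) a
    // descriptor it no longer owns.
    class Connection
    {
    public:
        explicit Connection(int aFd): fd(aFd) {}
        ~Connection() { closeFd(); }

        void closeFd() {
            if (fd >= 0) {
                ::close(fd);
                fd = -1;
            }
        }

        // called from the descriptor's close handler when somebody else closes the FD
        void noteClosure() { fd = -1; }

    private:
        int fd;
    };

    int main()
    {
        const int rawFd = ::dup(1);
        Connection pinned(rawFd);  // stands in for pinning::serverConnection
        ::close(rawFd);            // the FD is closed behind the object's back
        pinned.noteClosure();      // the close handler keeps the object in sync
        return 0;                  // destructor no longer double-closes a (possibly reused) FD
    }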
The Adaptation::Icap::Xaction::swanSong may try to use an invalid
Icap::Xaction::cs object (a Comm::ConnOpener object) if the Comm::ConnOpener
is already gone (because its job finished) but the Xaction::noteCommConnected
method has not been called yet.
This patch makes the Adaptation::Icap::Xaction::cs object a CbcPointer instead
of a raw pointer and checks if the Xaction::cs object is still valid before
using it.
Fix "Not enough space to hold server hello message" error message
This patch merges the Ssl::ClientBio and Ssl::ServerBio read buffering code
into the Ssl::Bio::readAndBuffer method and uses MemBuf::potentialSpaceSize
instead of MemBuf::spaceSize to check the space available for the SSL hello
message buffer, taking into account the space available after a possible
buffer grow.
Amos Jeffries [Fri, 22 May 2015 04:55:35 +0000 (21:55 -0700)]
Prevent unused ssl_crtd helpers being run
The conditions for when to start ssl_crtd helpers were ignoring the
generate-host-certificates=off option, meaning most ssl-bump installs
were running them needlessly.
Alex Dowad [Fri, 22 May 2015 04:47:22 +0000 (21:47 -0700)]
Fix incorrect use of errno in various libcomm.la places
Fix problems with 'errno' in TcpAcceptor::Listen, Comm::HandleRead, and
Comm::HandleWrite. 'errno' is only valid after a standard library function
returns an error. Also, we must avoid calling out to other functions before
recording the value of 'errno', since they might overwrite it.
Alex Dowad [Fri, 22 May 2015 04:33:41 +0000 (21:33 -0700)]
Fix signal.h usage to resolve compiler warning
When included, musl libc's sys/signal.h issues a compiler warning
stating that signal.h should be used directly instead. If gcc is
treating all warnings as errors, this breaks the build.
glibc's sys/signal.h does not contain any definitions; all it does
is include signal.h (indirectly). So directly including signal.h
doesn't break anything with glibc.
Nathan Hoad [Fri, 22 May 2015 04:26:17 +0000 (21:26 -0700)]
Fix missing external ACL helper notes
external ACL helper notes are only added to the HTTP request that
kicked off the external ACL lookup, not to requests satisfied by cached
ACL responses.
This means if you set notes from an external ACL that are used for
some processing in other ACLs, or post-processing on logs, things
may be missed.
Inside IdleConnList::findUseable the IdleConnList::removeAt call can delete
the IdleConnList object itself. The IdleConnList::clearHandlers call made
immediately after removeAt will then try to use the invalid "this" object in
a comm_read_cancel function call, causing this assertion or similar ones.
This patch fixes the IdleConnList::findUseable, IdleConnList::pop and
IdleConnList::findAndClose methods to call IdleConnList::clearHandlers before
IdleConnList::removeAt is called.
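A stand-alone sketch of that ordering rule, using a hypothetical simplified
list rather than the real IdleConnList:
    #include <cstddef>
    #include <vector>

    class IdleList
    {
    public:
        // Correct order: cancel the entry's handlers while the entry (and, in
        // Squid, possibly the whole list) still exists, then remove the entry.
        int popUsable() {
            if (fds.empty())
                return -1;
            const int fd = fds.back();
            clearHandlers(fd);          // needs "this" and the entry: must run first
            removeAt(fds.size() - 1);   // in Squid, this call may end up deleting "this"
            return fd;
        }
        void add(int fd) { fds.push_back(fd); }

    private:
        void clearHandlers(int) { /* cancel pending comm reads, etc. */ }
        void removeAt(std::size_t i) { fds.erase(fds.begin() + i); }
        std::vector<int> fds;
    };

    int main()
    {
        IdleList idle;
        idle.add(42);
        return idle.popUsable() == 42 ? 0 : 1;
    }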
On connect failures, comm_connect_addr sets xerrno to 0
and returns Comm::OK. This causes problems for ConnOpener
class users, which believe the connection is established and
ready for use.
Amos Jeffries [Sat, 9 May 2015 11:48:35 +0000 (04:48 -0700)]
Docs: shuffle SMP specific options to the top of squid.conf
The workers directive is required to be used before several other
directives. It makes little sense to document it after the controls
which depend on it.
Make a new config section to contain the SMP specific options.
Amos Jeffries [Sat, 9 May 2015 11:43:14 +0000 (04:43 -0700)]
CacheMgr: display 'client_db off' instead of 0 clients accessing cache
... to clarify why there is no record of even the mgr request happening.
The client_db mechanism needs to be enabled and measuring traffic for
any useful client counter value to exist.
The maximum buffer size for holding Server and Client SSL hello messages is
only 16KB, which is not enough to hold a Hello message which includes some
extensions and one, two, or more certificates.
This patch increases the maximum size to 65535 and also adds some checks to
avoid Squid crashes in case the hello message buffer overflows.
comm_connect_addr() uses errno to determine whether library calls like connect()
are successful. Its callers also use errno for extra information on the cause
of any problem. However, after calling library functions like connect(),
comm_connect_addr() calls other library functions which can overwrite errno.
As the errno manpage explains, "a function that succeeds is allowed to change
errno". So even when nothing is wrong, comm_connect_addr() may return an error
flag if libc sets errno. And when something *is* wrong, incorrect error information
may be returned to the caller because errno was overwritten with a different code.
Correct this by using our own error code variable which is set only when a library
call fails. To avoid breaking callers, set errno before returning.
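A stand-alone sketch of that errno discipline (simplified, not the actual
comm_connect_addr code):
    #include <cerrno>
    #include <sys/socket.h>

    // Capture errno immediately after the failing call, before any later library
    // call can overwrite it; callers read the xerrno they were given, not errno.
    int connectOnce(int fd, const struct sockaddr *addr, socklen_t len, int &xerrno)
    {
        xerrno = 0;
        if (connect(fd, addr, len) == 0)
            return 0;      // success: errno is not meaningful here
        xerrno = errno;    // record the cause right away ...
        // ... debugging or cleanup done after this point can no longer clobber it
        return -1;
    }

    int main()
    {
        int xerrno = 0;
        connectOnce(-1, nullptr, 0, xerrno);  // fails; xerrno records the cause
        return xerrno ? 0 : 1;
    }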
Fix 'access_log none' to prevent following logs being used
The documented behaviour of "access_log none", preventing logging by any
access_log lines following the directive, has not been working in
Squid-3 for some time.
Since the 'none' type does not have a log module associated with it, the
entire switch logic where its abort is checked for was being skipped.
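For illustration, a hypothetical squid.conf fragment relying on that documented
behaviour (the ACL name and log path are made up):
    acl internalUsers src 10.0.0.0/8
    access_log none internalUsers
    access_log daemon:/var/log/squid/access.log squid
With this fix, requests matching internalUsers stop at the 'none' line and are
no longer written by the daemon log line below it.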
Unexpected SQUID_X509_V_ERR_DOMAIN_MISMATCH errors while accessing sites with valid certificates
A "const char *" pointer retrieved using the SBuf::c_str() method may attached
to an SSL object using the SSL_set_ex_data method as server name used to
validate server certificates. This pointer may become invalid, causing
the SQUID_X509_V_ERR_DOMAIN_MISMATCH errors.
This patch changes the type of the ssl_ex_index_server index used with the
SSL_set_ex_data function to be an SBuf object.
Portability: Add hacks to define C++11 explicit N-bit type limits
Add cstdint and stdint.h to libcompat headers and ensure that type limits
used by Squid are always available. Mostly this involves shuffling
existing hacks into the compat headers but the UINT32_* limits are new.
The SSL_get_peer_certificate OpenSSL function increases the reference count of
the X509 object it returns, so an X509 object retrieved using this function
must be released with X509_free after use.
This patch uses the Ssl::X509_Pointer TidyPointer to release the X509 object
retrieved with the SSL_get_peer_certificate function inside the
Ssl::PeerConnector::handleNegotiateError method.
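A hedged illustration of that rule with plain OpenSSL calls (not Squid's
TidyPointer wrapper):
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    // The returned X509 has its reference count incremented, so every successful
    // call must be paired with X509_free().
    bool peerHasCertificate(SSL *ssl)
    {
        X509 *cert = SSL_get_peer_certificate(ssl);  // may return NULL
        if (!cert)
            return false;
        // ... inspect the certificate here ...
        X509_free(cert);  // drop the reference taken by SSL_get_peer_certificate()
        return true;
    }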
Despite the "must match" comment, MAX_AUTHTOKEN_LEN in
auth/UserRequest.h got out of sync with similar constants in Negotiate helpers.
A 32KB buffer cannot fit some helper requests (e.g., those carrying Privilege
Attribute Certificate information in the client's Kerberos ticket). Each truncated
request blocks the negotiate helper channel, eventually causing helper queue
overflow and possibly killing Squid.
This patch increases MAX_AUTHTOKEN_LEN in UserRequest.h to 65535 which
is also the maximum used by the negotiate helpers. The patch also adds checks
to avoid sending truncated requests, treating them as helper errors instead.
This patch adds code in squid to control SslBump behavior when dealing with
"resuming SSL/TLS sessions". Without these changes, SslBump usually terminates
all resuming sessions with an error because such sessions do not include
server certificates, preventing Squid from successfully validating the server
identity.
After these changes, Squid splices resuming sessions. Splicing is the right
choice because Squid most likely has spliced the original connections that the
client and server are trying to resume now.
Without SslBump, session resumption would just work, and SslBump behaviour
should approach that ideal.
Future projects may add ACL checks for allowing resuming sessions and may
add more complex algorithms, including maintaining an SMP-shared
cache of sessions that may be resumed in the future and evaluating
client/server attempts to resume a session using that cache.
This patch also makes SSL client Hello message parsing more robust and
adds an SSL server Hello message parser.
Also add support for the NPN (Next Protocol Negotiation) and ALPN
(Application-Layer Protocol Negotiation) TLS extensions, required to
correctly bump web clients that support these extensions.
Technical details
-----------------
In Peek mode, the old Squid code would forward the client Hello message to the
server. If the server tries to resume the previous (spliced) SSL session with
the client, then Squid SSL code gets an ssl/PeerConnector.cc "ccs received
early" error (or similar) because the Squid SSL object expects a server
certificate and does not know anything about the session being resumed.
With this patch, Squid detects session resumption attempts and splices them.
There are two mechanisms in SSL/TLS for resuming sessions: the traditional
shared session IDs and the TLS ticket extension:
* If Squid detects a shared ID in both client and server Hello messages, then
Squid decides whether the session is being resumed by comparing those client
and server shared IDs. If (and only if) the IDs are the same, then Squid
assumes that it is dealing with a resuming session (using session IDs).
* If Squid detects a TLS ticket in the client Hello message and TLS ticket
support in the server Hello message as well as a Change Cipher Spec or a New
TLS Ticket message (following the server Hello message), then (and only then)
Squid assumes that it is dealing with a resuming session (using TLS tickets).
The TLS tickets check is not performed if Squid detects a shared session ID
in both client and server Hello messages.
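A stand-alone sketch of that decision logic, with simplified types and names
rather than Squid's actual parser structures:
    #include <string>

    struct HelloInfo {
        std::string sessionId;   // empty when no session ID was sent
        bool tlsTicket = false;  // ticket offered (client) or acknowledged (server)
    };

    bool looksLikeResumedSession(const HelloInfo &client, const HelloInfo &server,
                                 bool ccsOrNewTicketAfterServerHello)
    {
        // Shared session IDs take precedence: resumed only when both sides
        // sent the same non-empty ID; the ticket check is then skipped.
        if (!client.sessionId.empty() && !server.sessionId.empty())
            return client.sessionId == server.sessionId;

        // TLS tickets: client offered one, server advertised ticket support, and a
        // ChangeCipherSpec or NewSessionTicket followed the server Hello message.
        return client.tlsTicket && server.tlsTicket && ccsOrNewTicketAfterServerHello;
    }

    int main()
    {
        HelloInfo c, s;
        c.sessionId = s.sessionId = "abc";
        return looksLikeResumedSession(c, s, false) ? 0 : 1;
    }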
NPN and ALPN TLS extensions
---------------------------
Even though Squid has some SSL hello message parsing code, we rely on
OpenSSL for full parsing. OpenSSL is used in peek and splice modes to parse the
server hello message, check for errors and verify server certificates.
If OpenSSL, while parsing the server hello message, finds an extension enabled
in the server hello message which is not enabled on its side, it fails with an
error ("...parse tlsext...").
OpenSSL supports the NPN TLS extension and, from the 1.0.2 release, also the
ALPN TLS extension. In peek mode we are forwarding the client SSL hello message
as-is, and if this message includes support for the NPN or ALPN TLS extensions
it is possible that the SSL server supports them and includes the related
extensions in its response. OpenSSL will fail if support for these extensions
is not enabled on its side.
This patch handles NPN (TLSEXT_TYPE_next_proto_neg) as follows:
try to select the http/1.1 protocol from the server's protocol list. If
http/1.1 is not supported then SSL bumping will fail. This is valid
because HTTP is the only protocol we support in Squid.
Splicing is not affected.
Also add support for the ALPN TLS extension. This extension is a replacement
for the NPN extension. The client sends a list of supported protocols. In the
case of stare mode Squid now sends only http as the supported protocol. In the
case of server-first or client-first bumping modes, Squid does enable this
extension.
NPN is supported by the Chromium browser and ALPN is supported by Firefox.
Support for ALPN was added in the OpenSSL 1.0.2 release.
These extensions are used to support SPDY and similar protocols.
The fix for Bug 3664 "ssl_crtd fails to build on OpenSolaris/OpenIndiana/Solaris 11"
introduced a regression on BSD and Linux where lockf() implementations appear not to
lock the entire file correctly or as reliably as flock().
Reverting the flock/lockf change for non-Solaris operating systems.
Add server_name ACL matching server name(s) obtained from various sources
... such as CONNECT request URI, client SNI, and SSL server certificate CN.
During each SslBump step, Squid improves its understanding of a "true server
name", with a bias towards server-provided (and Squid-validated) information.
The server-provided server names are retrieved from the server certificate CN
and Subject Alternate Names. The new server_name ACL matches any of alternate
names and CN. If the CN or an alternate name is a wildcard, then the new ACL
matches any domain that matches the domain with the wildcard.
Other than supporting many sources of server name information (including
sources that may supply Squid with multiple server name variants and
wildcards), the new ACL is similar to dstdomain.
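For illustration, a hypothetical peek-and-splice squid.conf fragment using the
new ACL (registered in squid.conf as ssl::server_name; domains are made up):
    acl step1 at_step SslBump1
    acl spliceDomains ssl::server_name .example.com .example.net
    ssl_bump peek step1
    ssl_bump splice spliceDomains
    ssl_bump bump all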
Invalid request->clientConnectionManager object used by Ssl::PeerConnector::handleNegotiateError
This patch adds the Ssl::ServerBio::bumpMode() method to retrieve the configured
mode from a ServerBio object, and uses this method for checking the bumping
mode inside the Ssl::PeerConnector::handleNegotiateError method.
After a failed http_access ACL check of an HTTP request tunneled through an
SSL-bumped connection, the SSL bumping code tries to re-setup the connection
for client-first bumping mode to serve the error, crashing Squid.
It can be hard to determine what simple operations (i.e. cow(), grow()) are
being done on what SBuf object. Add the SBuf::id to debugs() output on
many more operations.
When Squid generates an error page which contains the "%m" formatting code
but the authentication information is not available, Squid dies with a
segfault.