Bug 4682: When client-first bumping mode is used Squid can ignore http_access
denials
Squid fails to identify HTTP requests which are tunneled inside an already
established client-first bumped tunnel and, as a result, ignores
http_access denials for these requests.
Fixes Squid documentation to correctly describe Squid's behavior when the
"bump" action is selected at step SslBump1. In this case Squid selects
the client-first bumping mode.
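For reference, a minimal squid.conf sketch of the documented behavior (the ACL name is illustrative; at_step and ssl_bump are standard directives):

```
# Selecting "bump" for a connection still at step 1 (before any
# peeking) puts Squid into client-first bumping mode for it.
acl step1 at_step SslBump1
ssl_bump bump step1
ssl_bump splice all
```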
Bug 4659 - sslproxy_foreign_intermediate_certs does not work
The sslproxy_foreign_intermediate_certs directive does not work after r14769.
The bug is caused by incorrect use of the X509_check_issued() OpenSSL API call.
Now that Squid sends an explicit '-' for the trailing %DATA parameter
when there are no ACL parameters, this helper needs to cope with that on
'active mode' session lookups when login/logout are not being performed.
Squid does not send CONNECT request to adaptation services
if the "ssl_bump splice" rule matched at step 2. This adaptation
is important because the CONNECT request gains SNI information during
the second SslBump step. This is a regression bug, possibly caused by
the Squid bug 4529 fix (trunk commits r14913 and r14914).
Count failures and use peer-specific connect timeouts when tunneling.
Fixed two bugs with tunneling CONNECT requests (or equivalent traffic)
through a cache_peer:
1. Not detecting dead cache_peers due to missing code to count peer
connect failures. TLS/SSL-level failures were detected (for "tls"
cache_peers) but TCP/IP connect(2) failures were not (for all peers).
2. Origin server connect_timeout used instead of peer_connect_timeout or
a peer-specific connect-timeout=N (where configured).
The regular forwarding code path does not have the above bugs. This
change reduces code duplication across the two code paths (that
duplication probably caused these bugs in the first place), but a lot
more work is needed in that direction.
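For illustration, the directives involved in the second bug, with example values (connect-timeout=N is an existing per-peer cache_peer option):

```
# After this fix, CONNECT tunnels through a peer use the peer
# timeouts below instead of the origin-server connect_timeout.
connect_timeout 60 seconds           # origin servers only
peer_connect_timeout 30 seconds      # default for all cache_peers
cache_peer proxy.example.com parent 3128 0 connect-timeout=10
```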
The 5-second forwarding timeout hack has been in Squid since
forward_timeout inception (r6733). It is not without problems (now
marked with an XXX), but I left it as is to avoid opening another
Pandora box. The hack now applies to the tunneling code path as well.
Cleanup: remove redundant IntRange class from StoreMeta.cc
Use the Range<> template we have for generic ranges.
Move the Range.h template definition to src/base/. It is only used by
code in src/.
Also, include a small performance improvement for StoreMeta::validLength():
store the valid TLV length limits in a static instead of generating a
new object instance on each call.
QA: allow test-suite to be run without a full build
The squid.conf processing tests have been assuming a full 'make check' was
run and generated a squid binary in the build directory.
This change allows callers to also run these tests on an arbitrary 'squid'
binary by using the command:
make --eval="BIN_DIR=/path" -C test-suite squid-conf-tests
where /path is the path under which a squid binary already exists.
Amos Jeffries [Fri, 31 Mar 2017 18:43:20 +0000 (07:43 +1300)]
Bug 4610: cleanup of Berkeley DB related checks
Most of the logic seems to be hangovers from when the session helper was
using the Berkeley DB v1.85 compatibility interface. Some of it is
possibly still necessary for the time_quota helper, but that helper has
not been using it so far and needs an upgrade to match what happened to
session helper.
Changes:
* The helpers needing -ldb will not be built unless the library and
headers are available. So we can drop the Makefile LIB_DB substitutions
and always just link -ldb explicitly to these helpers.
NP: Anyone who needs small minimal binaries, can build with the
--as-needed linker flag, or without these helpers. This change has no
effect on other helpers or the main squid binary.
* Since we no longer need to check if -ldb is necessary, we can drop the
configure.ac and acinclude logic detecting that.
* Remove unused AC_CHECK_DECL(dbopen, ...)
- resolves one "FIXME"
* Fix the time_quota helper check to only scan db.h header file contents
if that file exists, and if the db_185.h file is not being used
instead.
* Fix the session helper check to only try compiling with the db.h
header if that header actually exists.
* De-duplicate the library header file detection shared by configure.ac
and the helpers required.m4 files (after the above two changes).
Amos Jeffries [Sat, 18 Mar 2017 04:25:24 +0000 (17:25 +1300)]
Add move semantics to remaining HTTP Parser hierarchy
A destructor is required because this hierarchy contains virtuals, which in
turn means the compiler will not add a move constructor by default. So we
must add the default ones in ourselves.
Squid may fail to load cache entry metadata for several very different
reasons, including the following two relatively common ones:
* A cache_dir entry corruption.
* Huge cache_dir entry metadata that does not fit into the I/O buffer
used for loading entry metadata.
Knowing the exact failure reason may help triage and guide development.
We refactored existing checks to distinguish various error cases,
including the two above. Refactoring also reduced code duplication.
These improvements also uncovered and fixed a null pointer dereference
inside ufsdump.cc (but ufsdump does not even build right now for reasons
unrelated to these changes).
Amos Jeffries [Wed, 15 Mar 2017 15:41:41 +0000 (04:41 +1300)]
Cleanup: Migrate Http1::Parser child classes to C++11 initialization
Also, add move semantics to Http1::RequestParser. This apparently will
make the clear() operators faster as they no longer have to data-copy,
at least once the base Parser class supports move as well.
It also contains a small experiment to see if a virtual destructor alone
allows an automatic move constructor to be added by the compiler.
Alex Rousskov [Fri, 3 Mar 2017 23:18:25 +0000 (16:18 -0700)]
Fixed URI scheme case-sensitivity treatment broken since r14802.
A parsed value for the AnyP::UriScheme image constructor parameter was
stored without toLower() canonicalization for known protocols (e.g.,
Squid would store "HTTP" instead of "http" after successfully parsing
"HTTP://EXAMPLE.COM/" in urlParseFinish()). Without that
canonicalization step, Squid violated various HTTP caching rules related
to URI comparison (and served fewer hits) when dealing with absolute
URLs containing non-lowercase HTTP scheme.
According to my limited tests, URL-based ACLs are not affected by this
bug, but I have not investigated how URL-based ACL code differs from
caching code when it comes to stored URL access and whether some ACLs
are actually affected in some environments.
Fix two read-ahead problems related to delay pools (or lack thereof).
1. Honor EOF on Squid-to-server connections with full read ahead buffers
and no clients when --enable-delay-pools is used without any delay
pools configured in squid.conf.
Since trunk r6150.
Squid delays reading from the server after buffering read_ahead_gap
bytes that are not yet sent to the client. A delayed read is normally
resumed after Squid sends more buffered bytes to the client. See
readAheadPolicyCanRead() and kickReads().
However, Squid was not resuming the delayed read after all Store clients
were gone. If quick_abort prevents Squid from immediately closing the
corresponding Squid-to-server connection, then the connection gets stuck
until read_timeout (15m), even if the server closes much sooner, --
without reading from the server, Squid cannot detect the connection
closure. The affected connections enter the CLOSE_WAIT state.
Kicking delayed read when the last client leaves fixes the problem. The
removal of any client, including the last one, may change
readAheadPolicyCanRead() answer and, hence, deserves a kickReads() call.
Why "without any delay pools configured in squid.conf"? When classic
(i.e., delay_pool_*) delay pools are configured, Squid kicks all delayed
reads every second. That periodic kicking is an old design bug, but it
resumes stuck reads when all Store clients are gone. Without classic
delay pools, there is no periodic kicking. This fix does not address
that old bug but removes Squid hidden dependence on its side effect.
Note that the Squid-to-server connections with full read-ahead buffers
still remain "stuck" if there are non-reading clients. There is nothing
Squid can do about them because we cannot reliably detect EOF without
reading at least one byte and such reading is not allowed by the read
ahead gap. In other words, non-reading clients still stall server
connections.
While fixing this, I moved all CheckQuickAbort() tests into
CheckQuickAbortIsReasonable() because we need a boolean function to
avoid kicking aborted entries and because the old separation was rather
awkward -- CheckQuickAbort() contained "reasonable" tests that were not
in CheckQuickAbortIsReasonable(). All the aborting tests and their order
were preserved during this move. The moved tests gained debugging.
According to the existing test order in CheckQuickAbortIsReasonable(),
the above problem can be caused by:
* non-private responses with a known content length
* non-private responses with unknown content length, having quick_abort_min
set to -1 KB.
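The directives involved in problem 1, sketched with example values:

```
# With a full read-ahead buffer and no remaining clients, these
# settings previously left the fetch stuck (CLOSE_WAIT) until
# read_timeout expired.
read_ahead_gap 16 KB       # unsent-to-client buffering limit
quick_abort_min -1 KB      # never abort: keep retrieving after clients leave
read_timeout 15 minutes    # how long the stuck connection lingered
```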
2. Honor read_ahead_gap with --disable-delay-pools.
Since trunk r13954.
This fix also addresses "Perhaps these two calls should both live
in MemObject" comment and eliminates existing code duplication.
Amos Jeffries [Fri, 3 Mar 2017 11:52:37 +0000 (00:52 +1300)]
Bug 4671 pt4: refactor Format::assemble()
* replace the String local with an SBuf to get appendf()
* overdue removal of empty lines and '!= NULL' conditions
* further reduce the scope of many 'out' assignments
* use sizeof(tmp) instead of '1024'
* Fixes many GCC 7 compile errors from snprintf() being called with a
too-small buffer.
* update the for-loops in Adaptation::History to C++11 and produce output
in an SBuf. Removing need for iterator typedef's and resolving more GCC 7
warnings about too-small buffers for snprintf().
Amos Jeffries [Fri, 3 Mar 2017 11:41:07 +0000 (00:41 +1300)]
Bug 4671 pt3: remove limit on FTP realm strings
Convert ftpRealm() from generating char* to SBuf. This fixes issues identified
by GCC 7 where the realm string may be longer than the available buffer and
gets truncated.
The size of the buffer made the occurrence rather rare, but it is still
possible.
Amos Jeffries [Thu, 23 Feb 2017 10:02:10 +0000 (23:02 +1300)]
Fix another Must(!= NULL) coverity issue.
The issue is that Coverity Scan gets confused by the implicit casting of
NULL to a Pointer, thinking that a 'true' comparison result is possible
when NULL is involved. The != should still compile to the correct checks.
Amos Jeffries [Wed, 22 Feb 2017 17:39:44 +0000 (06:39 +1300)]
Cleanup: convert SBuf to C++11 initialization
This should resolve many Coverity uninitialized member warnings
caused by the SBuf stub linked to pinger helper being confused
with the sbuf/libsbuf.la SBuf constructor definition.
Amos Jeffries [Mon, 20 Feb 2017 12:51:03 +0000 (01:51 +1300)]
TLS: refactor Security::ContextPointer to a std::shared_ptr
These pointers now use the same construction pattern tested out with
Security::SessionPointer.
It also fixes a reference counting bug in GnuTLS code paths where the
PeerConnector::initialize() method would be passed a temporary Pointer
and thus free the context/credentials before they were used by the
session verify logic.
Add response delay pools feature for Squid-to-client speed limiting.
The feature restricts Squid-to-client bandwidth only. It applies to
both cache hits and misses.
* Rationale *
This may be useful for limiting the bandwidth of specific responses.
There are situations when doing this is hardly possible (or impossible)
by means of netfilter/iptables, which operate on TCP/IP packets and IP
address information for filtering. In other words, sometimes it is
problematic to 'extract' a single response from the TCP/IP data flow at
the system level. For example, a single Squid-to-client TCP connection
can transmit multiple responses (persistent connections, pipelining or
HTTP/2 connection multiplexing) or be encrypted (HTTPS proxy mode).
* Description *
When Squid starts delivering the final HTTP response to a client,
Squid checks response_delay_pool_access rules (supporting fast ACLs
only), in the order they were declared. The first rule with a
matching ACL wins. If (and only if) an "allow" rule won, Squid
assigns the response to the corresponding named delay pool.
If a response is assigned to a delay pool, the response becomes
subject to the configured bucket and aggregate bandwidth limits of
that pool, similar to the current "class 2" server-side delay pools,
but with a brand new, dedicated "individual" filled bucket assigned to
the matched response.
The new feature serves the same purpose as the existing client-side
pools: both features limit Squid-to-client bandwidth. Their common
interface was placed into a new base BandwidthBucket class. The
difference is that client-side pools do not aggregate clients and
always use one bucket per client IP. It is possible for a response to
become subject to both of these pools; in such situations, only the
matched response delay pool will be used for Squid-to-client speed limiting.
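A hypothetical configuration sketch of the feature as described above (the ACL and the pool option names are assumptions derived from this description, not a definitive syntax):

```
# Limit matching responses via a dedicated per-response bucket plus
# a shared aggregate, similar to class 2 server-side pools.
acl bigDownloads rep_header Content-Type -i application/octet-stream
response_delay_pool slow_pool \
    individual-restore=50000 individual-maximum=100000 \
    aggregate-restore=200000 aggregate-maximum=400000
response_delay_pool_access slow_pool allow bigDownloads
response_delay_pool_access slow_pool deny all
```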
* Limitations *
The accurate SMP support (with the aggregate bucket shared among
workers) is outside this patch scope. In SMP configurations,
Squid should automatically divide the aggregate_speed_limit and
max_aggregate_size values among the configured number of Squid
workers.
* Also: *
Fixed ClientDelayConfig which did not perform cleanup on
destruction, causing memory problems detected by Valgrind. It was not
possible to fix this with minimal changes because of linker problems
with SquidConfig while checking with test-builds.sh. So I had
to refactor ClientDelayConfig module, separating configuration code
(old ClientDelayConfig class) from configured data (a new
ClientDelayPools class) and minimizing dependencies with SquidConfig.
Emmanuel Fuste [Wed, 8 Feb 2017 19:12:00 +0000 (08:12 +1300)]
digest_ldap_auth: Add -r option to clamp the realm to a fixed value
Some historic Digest Auth implementations do not include the realm in the
digest password attribute. The password is effectively stored as "HA1"
instead of "REALM:HA1".
The realm cannot simply be ignored due to:
1) the realm is both the salting value used within the hash and the
scope limitation on what inputs from HTTP are used to compare against
the A1, and
2) Squid does not itself verify that the realm received was the one
offered and leaves the comparison to the backend system. There is some
possibility the authentication system is using multiple security realms
and Squid's realm string is just an offer.
Not having the realm tied to the credentials in the backend storage
leaves this particular helper with no option but to trust that the realm
sent (probably over clear-text) by any client/attacker actually matches
the salting. That allows remote senders to manipulate the realm string
they send in order to perform a collision attack against the stored
password: they no longer have to find and prove knowledge of the
password, but merely find a collision between its hash and arbitrary
realm strings.
Old Digest systems are not the safest things to begin with. They also
tend to use MD5 hashing which was the only one available for many years
and relatively easy to find collisions for.
To resolve all these problems allow the -l parameter to accept an empty
string ('-l "" ') when the -r option provides a fixed realm.
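A hypothetical invocation sketch (only -r and -l are taken from this change; the remaining LDAP options are deliberately elided):

```
# -r clamps the realm to a fixed value; -l may then be empty because
# the password attribute stores bare HA1 without a realm prefix.
auth_param digest program /usr/lib/squid/digest_ldap_auth \
    -r "Example Realm" -l "" ...other LDAP options...
```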
Amos Jeffries [Tue, 7 Feb 2017 08:20:39 +0000 (21:20 +1300)]
GCC7: raise FTP Gateway CTRL channel buffer to 16KB
Fixes
error: %s directive output may be truncated writing up to 8191 bytes
into a region of size 1019
note: snprintf output between 8 and 8199 bytes into a destination of
size 1024
Bump SSL client on [more] errors encountered before ssl_bump evaluation
... such as ERR_ACCESS_DENIED with HTTP/403 Forbidden triggered by an
http_access deny rule match.
The old code allowed ssl_bump step1 rules to be evaluated in the
presence of an error. An ssl_bump splicing decision would then trigger
the useless "send the error to the client now" processing logic instead
of going down the "to serve an error, bump the client first" path.
Furthermore, the ssl_bump evaluation result itself could be surprising
to the admin because ssl_bump (and most other) rules are not meant to be
evaluated for a transaction in an error state. This complicated triage.
Also polished an important comment to clarify that we want to bump on
error if (and only if) the SslBump feature is applicable to the failed
transaction (i.e., if the ssl_bump rules would have been evaluated if
there were no prior errors). The old comment could have been
misinterpreted that ssl_bump rules must be evaluated to allow an
"ssl_bump splice" match to hide the error.