Alex Rousskov [Sun, 26 Feb 2017 08:46:24 +0000 (21:46 +1300)]
Fix crash when configuring with invalid delay_parameters restore value.
... like none/none. Introduced in the revision that fixed another, much
bigger delay_parameters parsing bug.
TODO: Reject all invalid input, including restore/max of "-/100".
TODO: Fix misleading/wrong associated error messages. For example:
ERROR: invalid delay rate 'none/none'. Expecting restore/max or 'none'
ERROR: restore rate in '1/none' is not a number.
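A minimal squid.conf sketch of the syntax in question (values are illustrative):

```
# delay_parameters expects restore/max byte-rate pairs per delay class:
delay_parameters 1 64000/64000
# 'none' alone disables the limit:
delay_parameters 1 none
# Forms like the following are invalid; "none/none" crashed Squid before
# this fix, and mixed forms like "-/100" are still not rejected:
# delay_parameters 1 none/none
# delay_parameters 1 -/100
```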
Bump SSL client on [more] errors encountered before ssl_bump evaluation
... such as ERR_ACCESS_DENIED with HTTP/403 Forbidden triggered by an
http_access deny rule match.
The old code allowed ssl_bump step1 rules to be evaluated in the
presence of an error. An ssl_bump splicing decision would then trigger
the useless "send the error to the client now" processing logic instead
of going down the "to serve an error, bump the client first" path.
Furthermore, the ssl_bump evaluation result itself could be surprising
to the admin because ssl_bump (and most other) rules are not meant to be
evaluated for a transaction in an error state. This complicated triage.
Also polished an important comment to clarify that we want to bump on
error if (and only if) the SslBump feature is applicable to the failed
transaction (i.e., if the ssl_bump rules would have been evaluated if
there were no prior errors). The old comment could have been
misinterpreted that ssl_bump rules must be evaluated to allow an
"ssl_bump splice" match to hide the error.
SSLv2 records force SslBump bumping despite a matching step2 peek rule.
If Squid receives a valid TLS Hello encapsulated into ancient SSLv2
records (observed on Solaris 10), the old code ignored the step2 peek
decision and bumped the transaction instead.
Now Squid peeks (or stares) at the origin server as configured, even
after detecting (and parsing) SSLv2 records.
Mitigate DoS attacks that use client-initiated SSL/TLS renegotiation.
There is a well-known DoS attack using client-initiated SSL/TLS
renegotiation. The severity or uniqueness of this attack method
is disputed, but many believe it is serious/real.
There is even a (disputed) CVE-2011-1473:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-1473
The old Squid code tried to disable client-initiated renegotiation, but
it did not work reliably (or at all), depending on Squid version, due
to OpenSSL API changes and conflicting SslBump callbacks. That
code is now removed and client-initiated renegotiations are allowed.
With this change, Squid aborts the TLS connection, with a level-1 ERROR
message if the rate of client-initiated renegotiate requests exceeds
5 requests in 10 seconds (approximately). This protection and the rate
limit are currently hard-coded but the rate is not expected to be
exceeded under normal circumstances.
Amos Jeffries [Fri, 27 Jan 2017 15:00:05 +0000 (04:00 +1300)]
Detect HTTP header ACL issues
rep_header and req_header ACL types cannot match multiple different
headers in one test (unlike what Squid-2 appears to have done). Produce
an ERROR and ignore the extra line(s) instead of silently changing
all the previous regexes to match the second header name.
Also detect and ERROR when the header name is missing entirely. Ignore
these lines instead of asserting.
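A hypothetical squid.conf fragment illustrating the new behaviour (the header and ACL names are made up):

```
# OK: one header name per req_header/rep_header ACL
acl hasLang req_header Accept-Language -i .
# ERROR (extra line now ignored): a second header name on the same ACL
# would previously retarget all earlier regexes to the new header:
# acl hasLang req_header Accept-Encoding gzip
```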
Update External ACL helpers error handling and caching
The helper protocol for external ACLs [1] defines three possible return values:
OK - Success. ACL test matches.
ERR - Success. ACL test fails to match.
BH - Failure. The helper encountered a problem.
The external ACL helpers distributed with Squid currently do not follow this
definition. For example, upon a connection error, ERR is returned:
$ ext_ldap_group_acl ... -d
ext_ldap_group_acl: WARNING: could not bind to binddn 'Can't contact LDAP server'
ERR
This makes it impossible to distinguish "no match" from "error", so Squid
negatively caches the "ERR" result even when it actually represents an error.
Moreover, there are multiple problems inside Squid when trying to handle BH
responses:
- Squid-5 and Squid-4 retry requests for BH responses but crash after the
maximum number of retries (currently 2) is reached.
- If an external ACL helper always returns BH (e.g., because the LDAP server
is down), Squid keeps sending new requests to the helper indefinitely.
Fix "Source and destination overlap in memcpy" Valgrind errors
Before this patch, source and destination arguments in
log_quoted_string() could point to the same static memory area, causing
multiple Valgrind-reported errors. Fixed by creating another buffer to
store the quote-processed output string.
Reduce crashes due to unexpected ClientHttpRequest termination.
The underlying problem has been known since r13480: If a
ClientHttpRequest job ends without Http::Stream (and ConnStateData)
knowledge, then Squid is likely to segfault or assert. This patch does
not resolve the underlying issue (a proper fix would require
architectural changes in a consensus-lacking area) but makes an
unexpected ClientHttpRequest job destruction less likely.
BodyPipe and Adaptation-related exceptions are the major causes of
unexpected ClientHttpRequest job destruction. This patch handles them by
closing the client connection. Connection closure should trigger an
orderly top-down cleanup, including Http::Stream, ConnStateData, and
ClientHttpRequest destruction.
If there is no connection to close, then the exception is essentially
ignored with a level-1 error message disclosing the problem. The side
effects of ignoring such exceptions are unknown, but without a client
connection, it is our hope that they would be relatively benign.
Amos Jeffries [Mon, 26 Dec 2016 02:22:00 +0000 (15:22 +1300)]
Bug 3940 pt2: Make 'cache deny' do what is documented
Instead of overriding whatever cacheability was previously set to
(including changing non-cacheables to be cacheable) actually
prevent both cache read and write.
Do not share private responses with collapsed client(s).
This excessive sharing problem with collapsed forwarding code has
several layers. In most cases, the core CF code does not share
uncachable or private responses with collapsed clients because of the
refreshCheckHTTP() check. However, some responses might not be subject
to that (or equivalent) check. More importantly, collapsed revalidation
code does not check its responses at all and, hence, easily shares
private responses.
This short-term fix incorrectly assumes that an entry may become private
(KEY_PRIVATE) only when it cannot be shared among multiple clients
(e.g., because of a Cache-Control:private response header). However,
there are a few other cases when an entry becomes private. One of them
is a DISK_NO_SPACE_LEFT error inside storeSwapOutFileClosed() where
StoreEntry::releaseRequest() sets KEY_PRIVATE for a sharable entry [that
may still be perfectly preserved in the memory cache]. Consequently, the
short-term fix reduces CF effectiveness. The extent of this reduction is
probably environment-dependent.
Also: do not re-use SET_COOKIE headers for collapsed revalidation slaves,
i.e., adhere to the same requirement as for regular response HITs.
Garri Djavadyan [Thu, 15 Dec 2016 09:36:34 +0000 (22:36 +1300)]
Bug 3940 (partial): hostHeaderVerify failures MISS when they should be HIT
This fixes the critical condition so that these requests can be HITs.
However, not all code correctly sets flags.noCache and flags.cacheable
(see bugzilla), so there may be other fixes needed after this.
The following sequence of events triggers this assertion:
- The server sends a 1xx control message.
- http.cc schedules ConnStateData::sendControlMsg call.
- Before sendControlMsg is fired, http.cc detects an error (e.g., I/O
error or timeout) and starts writing the reply to the user.
- The ConnStateData::sendControlMsg is fired, starts writing 1xx, and
hits the "no concurrent writes" assertion.
We could only reproduce this sequence in the lab after changing Squid
code to trigger a timeout at the right moment, but the sequence looks
plausible. Other event sequences might result in the same outcome.
To avoid concurrent writes, Squid now drops the control message if
Http::One::Server detects that a reply is already being written. Also,
ConnStateData delays reply writing until a pending control message write
has been completed.
Bug 4004 partial: Fix segfault via Ftp::Client::readControlReply
Added nil dereference checks for Ftp::Client::ctrl.conn, including:
- Ftp::Client::handlePasvReply() and handleEpsvReply() that dereference
ctrl.conn in DBG_IMPORTANT messages.
- Many functions inside FtpClient.cc and FtpGateway.cc files.
TODO: We need to find a better way to handle nil ctrl.conn. It is only
a matter of time before we forget to add another dereference check or
discover a place we missed during this change.
Also disabled forwarding of EPRT and PORT commands to origin servers.
Squid support for those commands is broken and their forwarding may
cause segfaults (bug #4004). Active FTP is still supported, of course.
Alex Rousskov [Mon, 14 Nov 2016 12:40:51 +0000 (01:40 +1300)]
Honor SBufReservationRequirements::minSize regardless of idealSize.
In a fully specified SBufReservationRequirements, idealSize would
naturally match or exceed minSize. However, the idealSize default value
(zero) may not. We should honor minSize regardless of idealSize, just as
the API documentation promises to do.
No runtime changes expected right now because the only existing user of
SBufReservationRequirements sets .idealSize to CLIENT_REQ_BUF_SZ (4096)
and .minSize to 1024.
Fix ssl::server_name ACL badly broken since inception.
The original server_name code mishandled all SNI checks and some rare
host checks:
* The SNI-derived value was pointing to an already freed memory storage.
* Missing host-derived values were not detected (host() is never nil).
* Mismatches were re-checked with an undocumented "none" value
instead of being treated as mismatches.
Same for ssl::server_name_regex.
Also set SNI for more server-first and client-first transactions.
Amos Jeffries [Sun, 30 Oct 2016 09:45:03 +0000 (22:45 +1300)]
HTTP/1.1: make Vary:* objects cacheable
Under new clauses from RFC 7231 section 7.1.4, an HTTP response
containing the header Vary:* (wildcard variant) can be cached, but
requires revalidation with the server before each use.
Use the new mandatory revalidation flags to allow storing of any
wildcard Vary:* response.
Note that responses with headers like Vary:A,B,C,* are equivalent
to Vary:*. The cache key string for these objects is normalized.
Squid crashed because HttpMsg::body_pipe was used without checking that it
was initialized. The message lacks a body pipe when it has no body or an
empty body.
HTTP: MUST ignore a [revalidation] response with an older Date header.
Before this patch, Squid violated the RFC 7234 section 4 MUST
requirement: "When more than one suitable response is stored, a cache
MUST use the most recent response (as determined by the Date header
field)." This problem may be damaging in cache hierarchies where
parent caches may have different responses. Trusting the older response
may lead to excessive IMS requests, cache thrashing and other problems.
Alex Rousskov [Sun, 9 Oct 2016 14:30:11 +0000 (03:30 +1300)]
Hide OpenSSL tricks from Valgrind far-reaching initialization errors.
This change has no effect unless ./configured --with-valgrind-debug.
OpenSSL, including its Assembly code, contains many optimizations and
timing defenses that Valgrind misinterprets as uninitialized value
usage. Most of those tricks can be disabled by #defining PURIFY when
building OpenSSL, but some are not protected with PURIFY and most
OpenSSL libraries are (and should be) built without that #define.
To make matters worse, once Valgrind misdetects uninitialized memory, it
will complain about every usage of that memory. Those complaints create
a lot of noise, complicate triage, and effectively mask true bugs.
AFAICT, they cannot be suppressed by listing the source of that memory.
For example, this OpenSSL Assembly trick:
Uninitialised value was created by a stack allocation
at 0x556C2F7: aesni_cbc_encrypt (aesni-x86_64.s:2081)
Triggers many false errors like this one:
Conditional jump or move depends on uninitialised value(s)
by 0x750838: Debug::Finish()
by 0x942E68: Http::One::ResponseParser::parse(SBuf const&)
...
This change marks OpenSSL-returned decrypted bytes as initialized. This
might miss some true OpenSSL bugs, but we should focus on Squid bugs.
Bug 4471: revalidation doesn't work when expired cached object lacks Last-Modified.
The bug was caused by the fact that Squid used only Last-Modified header
value for evaluating entry's last modification time while making an
internal revalidation request. So, without Last-Modified it was not
possible to correctly fill If-Modified-Since header value. As a result,
the revalidation request was not initiated when it should be.
To fix this problem, we should generate If-Modified-Since based on other
data, showing how "old" the cached response is, such as Date and Age
header values. Both Date and Age header values are utilized while
calculating entry's timestamp. Therefore, we should use the timestamp if
Last-Modified is unavailable.
TODO: HttpRequest::imslen support looks broken/deprecated. Using this
field inside StoreEntry::modifiedSince() leads to several useless checks
and probably may cause other [hidden] problems.
... also address Bug 4311 and partially address Bug 2833 and Bug 4471.
Extend collapsed_forwarding functionality to internal revalidation
requests. This implementation does not support Vary-controlled cache
objects and is limited to SMP-unaware caching environments, where each
Squid worker knows nothing about requests and caches handled by other
workers. However, it also lays critical groundwork for future SMP-aware
collapsed revalidation support.
Prior to these changes, multiple concurrent HTTP requests for the same
stale cached object always resulted in multiple internal revalidation
requests sent by Squid to the origin server. Those internal requests
were likely to result in multiple competing Squid cache updates, causing
cache misses and/or more internal revalidation requests, negating
collapsed forwarding savings.
Internal cache revalidation requests are collapsed if and only if
collapsed_forwarding is enabled. There is no option to control just
revalidation collapsing because there is no known use case for it.
* Public revalidation keys
Each Store entry has a unique key. Keys are used to find entries in the
Store (both already cached/swapped_out entries and not). Public keys are
normally tied to the request method and target URI. Same request
properties normally lead to the same public key, making cache hits
possible. If we were to calculate a public key for an internal
revalidation request, it would have been the same as the public key of
the stale cache entry being revalidated. Adding a revalidation response
to Store would have purged that being-revalidated cached entry, even if
the revalidation response itself was not cachable.
To avoid purging being-revalidated cached entries, the old code used
private keys for internal revalidation requests. Private keys are always
unique and cannot accidentally purge a public entry. On the other hand,
for concurrent [revalidation] requests to find the store entry to
collapse on, that store entry has to have a public key!
We resolved this conflict by adding "scope" to public keys:
* Regular/old public keys have default empty scope (that does not affect
key generation). The code not dealing with collapsed revalidation
continues to work as before. All entries stored in caches continue to
have the same keys (with an empty scope).
* When collapsed forwarding is enabled, collapsable internal
revalidation requests get public keys with a "revalidation" scope
(that is fed to the MD5 hash when the key is generated). Such a
revalidation request can find a matching store entry created by
another revalidation request (and collapse on it), but cannot clash
with the entry being revalidated (because that entry key is using a
different [empty] scope).
This change not only enables collapsing of internal revalidation
requests within one worker, but opens the way for SMP-aware workers to
share information about collapsed revalidation requests, similar to how
those workers already share information about being-swapped-out cache
entries.
After receiving the revalidation response, each collapsed revalidation
request may call updateOnNotModified() to update the stale entry [with
the same revalidation response!]. Concurrent entry updates would have
wasted many resources, especially for disk-cached entries that support
header updates, and may have purged being-revalidated entries due to
locking conflicts among updating transactions. To minimize these
problems, we adjusted header and entry metadata updating logic to skip
the update if nothing has changed since the last update.
Also fixed Bug 4311: Collapsed forwarding deadlocks for SMP Squids using
SMP-unaware caches. Collapsed transactions stalled without getting a
response because they were waiting for the shared "transients" table
updates. The table was created by Store::Controller but then abandoned (not
updated) by SMP-unaware caches. Now, the transients table is not created
at all unless SMP-aware caches are present. This fix should also address
complaints about shared memory being used for Squid instances without
SMP-aware caches.
A combination of SMP-aware and SMP-unaware caches is still not supported
and is likely to stall collapsed transactions if they are enabled. Note
that, by default, the memory cache is SMP-aware in SMP environments.
Alex Rousskov [Fri, 23 Sep 2016 12:40:44 +0000 (00:40 +1200)]
Bug 3819: "fd >= 0" assertion in file_write() during reconfiguration
Other possible assertions include "NumberOfUFSDirs" in UFSSwapDir and
"fd >= 0" in file_close(). And Fs::Ufs::UFSStoreState::drainWriteQueue()
may segfault while dereferencing nil theFile, although I am not sure
that bug is caused specifically by reconfiguration.
Since trunk r9181.3.1, reconfiguration is done in at least two steps:
First, mainReconfigureStart() closes/cleans/disables all modules
supporting hot reconfiguration and then, at the end of the event loop
iteration, mainReconfigureFinish() opens/configures/enables them again.
UFS code hits assertions if it has to log entries or rebuild swap.state
between those two steps. The tiny assertion opportunity window hides
these bugs from most proxies that are not doing enough disk I/O or are
not being reconfigured frequently enough.
Asynchronous UFS cache_dirs such as diskd were the most exposed, but
even blocking UFS caching code could probably hit [rebuild] assertions.
The swap.state rebuilding (always initiated at startup) probably did not
work as intended if reconfiguration happened during the rebuild time
because reconfiguration closed the swap.state file being rebuilt. We now
protect that swap.state file and delay rebuilding progress until
reconfiguration is over.
TODO: To squash more similar bugs, we need to move hot reconfiguration
support into the modules so that each module can reconfigure instantly.
Alex Rousskov [Fri, 23 Sep 2016 11:11:48 +0000 (23:11 +1200)]
Do reset $HOME if needed after r13435. Minimize putenv() memory leaks.
While r13435 is attributed to me, the wrong condition was not mine. This
is a good example of why "cmp() op 0" pattern is usually safer because
the "==" or "!=" operator "tells" us whether the strings are equal,
unlike "!cmp()" that is often misread as "not equal".
Alex Rousskov [Wed, 21 Sep 2016 02:09:36 +0000 (14:09 +1200)]
Bug 4228: ./configure bug/typo in r14394.
The bug resulted in "test: too many arguments" error messages when
running ./configure and effectively replaced AC_MSG_ERROR() with
AC_MSG_WARN() for missing but required Heimdal and GNU GSS Kerberos
libraries.
Fix logged request size (%http::>st) and other size-related %codes.
The %[http::]>st logformat code should log the actual number of
[dechunked] bytes received from the client. However, for requests with
Content-Length, Squid was logging the sum of the request header size and
the Content-Length header field value instead. The logged value was
wrong when the client sent fewer than Content-Length body bytes.
Also:
* Fixed %http::>h and %http::<h format codes for icap_log. The old code
was focusing on preserving the "request header" (i.e. not "response
header") meaning of the %http::>h logformat code, but since ICAP
services deal with (and log) both HTTP requests and responses, that
focus on the HTTP message kind actually complicates ICAP logging
configuration and will eventually require introduction of new %http
logformat codes that would be meaningless in access.log context.
- In ICAP context, %http::>h should log to-be-adapted HTTP message
headers embedded in an ICAP request (HTTP request headers in REQMOD;
HTTP response headers in RESPMOD). Before this change, %http::>h
logged constant/"FYI" HTTP request headers in RESPMOD.
- Similarly, %http::<h should log adapted HTTP message headers
embedded in an ICAP response (HTTP request headers in regular
REQMOD; HTTP response headers in RESPMOD and during request
satisfaction in REQMOD). Before this change, %http::<h logged "-" in
REQMOD.
In other words, in ICAP logging context, the ">" and "<" characters
should indicate ICAP message kind (where the being-logged HTTP message
is embedded), not HTTP message kind, even for %http codes.
TODO: %http::>h code logs "-" in RESPMOD because AccessLogEntry does
not store virgin HTTP response headers.
* Correctly initialize ICAP ALE HTTP fields related to HTTP message
sizes for icap_log, including %http::>st, %http::<st, %http::>sh, and
%http::>sh format codes.
* Initialize HttpMsg::hdr_sz in HTTP header parsing code. Among other
uses, this field is needed to initialize HTTP fields inside ICAP ALE.
* Synced default icap_log format documentation with the current code.
HTTP: do not allow Proxy-Connection to override Connection header
Proxy-Connection header is never actually valid, it is relevant in
HTTP/1.0 messages only when Connection header is missing and not
relevant at all in HTTP/1.1 messages.
This fixes part of the behaviour, making Squid use Connection header
for persistence (keep-alive vs close) checking if one is present
instead of letting Proxy-Connection override it.
TODO: Proxy-Connection still needs to be ignored completely when
the message version is HTTP/1.1.
SSL CN wildcard must only match a single domain component [fragment].
When comparing the requested domain name with a certificate Common Name,
Squid expanded the wildcard to cover more than one domain name label (a.k.a.
component), violating an RFC 2818 requirement[1]. For example, Squid
thought that wrong.host.example.com matched a *.example.com CN.
[1] "the wildcard character * ... is considered to match any single
domain name component or component fragment. E.g., *.a.com matches
foo.a.com but not bar.foo.a.com".
In other contexts (e.g., ACLs), wildcards expand to all components.
matchDomainName() now accepts a mdnRejectSubsubDomains flag that selects
the right behavior for CN match validation.
The old boolean honorWildcards parameter was replaced with a flag, for
clarity and consistency's sake.
This patch also handles cases where the host name consists only of dots
(e.g., a malformed Host header or SNI info); the old code had undefined
behaviour in these cases. Moreover, it handles cases where a certificate
contains a zero-length string as the CN or an alternative name.
HTTP: MUST always revalidate Cache-Control:no-cache responses.
Squid MUST NOT use a CC:no-cache response for satisfying subsequent
requests without successful validation with the origin server. Squid
violated this MUST because Squid mapped both "no-cache" and
"must-revalidate"/etc. directives to the same ENTRY_REVALIDATE flag
which was interpreted as "revalidate when the cached response becomes
stale" rather than "revalidate on every request".
This patch splits ENTRY_REVALIDATE into:
- ENTRY_REVALIDATE_ALWAYS, for entries with CC:no-cache and
- ENTRY_REVALIDATE_STALE, for entries with other related CC directives
like CC:must-revalidate and CC:proxy-revalidate.
Reusing ENTRY_CACHABLE_RESERVED_FOR_FUTURE_USE value for the more
forgiving ENTRY_REVALIDATE_STALE (instead of the more aggressive
ENTRY_REVALIDATE_ALWAYS) fixes the bug for the already cached entries -
they will be revalidated and stored with the correct flag on the next
request.
Squid segfault via Ftp::Client::readControlReply().
Ftp::Client::scheduleReadControlReply(), which may be called from the
asynchronous start() or readControlReply()/handleControlReply()
handlers, does not check whether the control connection is still usable
before using it.
The Ftp::Server::stopWaitingForOrigin() notification may come after
Ftp::Server (or an external force) has started closing the control
connection but before the Ftp::Server job became unreachable for
notifications. Writing a response in this state leads to assertions.
Other, currently unknown paths may lead to the same write-after-close
problems. This change protects all asynchronous notification methods
(except the connection closure handler itself) from exposing underlying
code to a closing control connection. This is very similar to checking
for ERR_CLOSING in Comm handlers.
Alex Rousskov [Thu, 30 Jun 2016 20:51:54 +0000 (08:51 +1200)]
Do not make bogus recvmsg(2) calls when closing UDS sockets.
comm_empty_os_read_buffers() assumes that all non-blocking
FD_READ_METHODs can read into an opaque buffer filled with random
characters. That assumption is wrong for UDS sockets that require an
initialized msghdr structure. Feeding random data to recvmsg(2) leads to
confusing errors, at best. Squid does not log those errors, but they
are visible in, for example, strace:
recvmsg(17, 0x7fffbb, MSG_DONTWAIT) = -1 EMSGSIZE (Message too long)
comm_empty_os_read_buffers() is meant to prevent TCP RST packets. The
function now ignores UDS sockets that are not used for TCP.
TODO: Useless reads may also exist for UDP and some TCP sockets.
This change fixes logic bugs that mostly affect performance: In micro-
tests, this change gives 10% performance improvement.
maybeMakeSpaceAvailable() is called with an essentially random in.buf.
The method must prepare in.buf for the next network read. The old code
was not doing that [well enough], leading to performance problems.
In some environments, in.buf often ends up having tiny space exceeding 2
bytes (e.g., 6 bytes). This happens, for example, when Squid creates and
parses a fake CONNECT request. The old code often left such tiny in.bufs
"as is" because we tried to ensure that we have at least 2 bytes to read
instead of trying to provide a reasonable amount of buffer space for the
next network read. Tiny buffers naturally result in tiny network reads,
which are very inefficient, especially for non-incremental parsers.
I have removed the explicit "2 byte" space checks: Both the new and the
old code do not _guarantee_ that at least 2 bytes of buffer space are
always available, and the caller does not check that condition either.
If some other code relies on it, more fixes will be needed (but this
change is not breaking that guarantee -- either it was broken earlier or
was never fully enforced). In practice, only buffers approaching
Config.maxRequestBufferSize limit may violate this guarantee AFAICT, and
those buffers ought to be rare, so the bug, if any, remains unnoticed.
Another subtle maybeMakeSpaceAvailable() problem was that the code
contained its own buffer capacity increase algorithm (n^2 growth).
However, increasing buffer capacity exponentially does not make much
sense because network read sizes are not going to increase
exponentially. Also, memAllocString()/memAllocate() overwrites n^2 growth
with its own logic. Besides, it is buffer _space_, not the total
capacity that should be increased. More work is needed to better match
Squid buffer size for from-user network reads with the TCP stack buffers
and traffic patterns.
Both the old and the new code reallocate in.buf MemBlobs. However, the
new code leaves "reallocate or memmove" decision to the new
SBuf::reserve(), opening the possibility for future memmove
optimizations that SBuf/MemBlob do not currently support.
It is probably wrong that in.buf points to an essentially random MemBlob
outside ConnStateData control, but this change does not attempt to fix that.
Alex Rousskov [Wed, 15 Jun 2016 22:08:16 +0000 (10:08 +1200)]
Do not allow low-level debugging to hide important/critical messages.
Removed debugs() side effects that inadvertently resulted in some
important/critical messages logged at the wrong debugging level and,
hence, becoming invisible to the admin. The removed side effects set the
"current" debugging level when a debugs() parameter called a function
that also called debugs(). The last nested debugs() call affected the
level of all debugs() still in progress!
Related changes:
* Reentrant debugging messages no longer clobber parent messages. Each
debugging message is logged separately, in the natural order of
debugs() calls that would have happened if debugs() were a function
(that gets already evaluated arguments) and not a macro (that
evaluates its arguments in the middle of the call). This order is
"natural" because good macros work like functions from the caller
point of view.
* Assertions hit while evaluating debugs() parameters are now logged
instead of being lost with the being-built debugs() log line.
* 10-20% faster debugs() performance because we no longer allocate a new
std::ostringstream buffer for the vast majority of debugs() calls.
Only reentrant calls get a new buffer.
* Removed old_debug(), addressing an old "needs to die" to-do.
* Removed do_debug() that changed debugging level while testing whether
debugging is needed. Use side-effect-free Debug::Enabled() instead.
Also removed the OutStream wrapper class. The wrapper was added in trunk
revision 13767 that promised to (but did not?) MemPool the debug output
buffers. We no longer "new" the buffer stream so a custom new() method
would be unused. Besides, the r13767 explanation implied that providing
a Child::new() method would somehow overwrite Parent::allocator_type,
which did not compute for me. Finally, Squid "new"s other allocator-
enabled STL objects without overriding their new methods so either the
same problem is still there or it did not exist (or was different?).
Also removed Debug::xassert() because the debugs() assertions now work
OK without that hack.
Alex Rousskov [Sat, 21 May 2016 15:52:02 +0000 (03:52 +1200)]
Fix icons loading speed.
Since trunk r14100 (Bug 3875: bad mimeLoadIconFile error handling), each
icon was read from disk and written to Store one character at a time. I
did not measure startup delays in production, but in debugging runs,
fixing this bug sped up icons loading from 1 minute to 4 seconds.
Steve Hill [Tue, 17 May 2016 14:58:50 +0000 (02:58 +1200)]
Support unified EUI format code in external_acl_type
Squid supports %>eui as a logformat specifier, which produces an EUI-48
for IPv4 clients and an EUI-64 for IPv6 clients. However, this is not
allowed as a format specifier for external ACLs, and you have to use
%SRCEUI48 and %SRCEUI64 instead. %SRCEUI48 is only useful for IPv4
clients and %SRCEUI64 is only useful for IPv6 clients, so supporting
both v4 and v6 is a bit messy.
This patch adds the %>eui specifier for external ACLs; it behaves in the
same way as the logformat specifier.
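A hypothetical configuration fragment using the new specifier (the helper path and ACL names are made up):

```
# %>eui yields EUI-48 for IPv4 clients and EUI-64 for IPv6 clients,
# replacing the split %SRCEUI48/%SRCEUI64 pair:
external_acl_type eui_check %>eui /usr/local/bin/check_eui
acl trusted_dev external eui_check
http_access allow trusted_dev
```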