The Ftp::Server::stopWaitingForOrigin() notification may come after
Ftp::Server (or an external force) has started closing the control
connection but before the Ftp::Server job becomes unreachable for
notifications. Writing a response in this state leads to assertions.
Other, currently unknown paths may lead to the same write-after-close
problems. This change protects all asynchronous notification methods
(except the connection closure handler itself) from exposing underlying
code to a closing control connection. This is very similar to checking
for ERR_CLOSING in Comm handlers.
Alex Rousskov [Wed, 15 Jun 2016 15:37:44 +0000 (09:37 -0600)]
Do not make bogus recvmsg(2) calls when closing UDS sockets.
comm_empty_os_read_buffers() assumes that all non-blocking
FD_READ_METHODs can read into an opaque buffer filled with random
characters. That assumption is wrong for UDS sockets that require an
initialized msghdr structure. Feeding random data to recvmsg(2) leads to
confusing errors, at best. Squid does not log those errors, but they
are visible in, for example, strace:
recvmsg(17, 0x7fffbb, MSG_DONTWAIT) = -1 EMSGSIZE (Message too long)
comm_empty_os_read_buffers() is meant to prevent TCP RST packets. The
function now ignores UDS sockets that are not used for TCP.
TODO: Useless reads may also exist for UDP and some TCP sockets.
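The premise can be illustrated with a self-contained sketch (hypothetical names, not Squid's comm_empty_os_read_buffers()): draining a UDS datagram socket requires a zero-initialized msghdr with a valid iovec, unlike a plain read into an opaque buffer:

```cpp
#include <cassert>
#include <cstring>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

// Drain one pending datagram from a UDS socket without blocking.
// The memset() is the important part: feeding recvmsg(2) an
// uninitialized msghdr is what produced the confusing errors above.
static ssize_t drainUdsSocket(int fd) {
    char buf[4096];
    struct iovec iov;
    iov.iov_base = buf;
    iov.iov_len = sizeof(buf);
    struct msghdr msg;
    memset(&msg, 0, sizeof(msg)); // all unused fields must be zeroed
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    return recvmsg(fd, &msg, MSG_DONTWAIT);
}
```

A socketpair(AF_UNIX, SOCK_DGRAM, ...) pair can be used to exercise it.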
Alex Rousskov [Tue, 14 Jun 2016 16:54:23 +0000 (04:54 +1200)]
Fixed Server::maybeMakeSpaceAvailable() logic.
This change fixes logic bugs that mostly affect performance: in
micro-tests, it gives a 10% performance improvement for intercepted
"fast peek at SNI and splice" SslBump configurations. A similar
improvement is expected for future plain HTTP/2 parsers.
maybeMakeSpaceAvailable() is called with an essentially random inBuf.
The method must prepare inBuf for the next network read. The old code
was not doing that [well enough], leading to performance problems.
In some environments, inBuf often ends up with a tiny space barely exceeding
2 bytes (e.g., 6 bytes). This happens, for example, when Squid creates and
parses a fake CONNECT request. The old code often left such tiny inBufs
"as is" because it tried to ensure at least 2 bytes of space to read into
instead of trying to provide a reasonable amount of buffer space for the
next network read. Tiny buffers naturally result in tiny network reads,
which are very inefficient, especially for non-incremental parsers.
I have removed the explicit "2 byte" space checks: Both the new and the
old code do not _guarantee_ that at least 2 bytes of buffer space are
always available, and the caller does not check that condition either.
If some other code relies on it, more fixes will be needed (but this
change is not breaking that guarantee -- either it was broken earlier or
was never fully enforced). In practice, only buffers approaching the
Config.maxRequestBufferSize limit may violate this guarantee AFAICT, and
such buffers ought to be rare, so the bug, if any, remains unnoticed.
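A minimal sketch of the intended behavior, using std::string in place of SBuf (the names and sizes here are illustrative assumptions, not Squid's actual SBuf/Config code):

```cpp
#include <algorithm>
#include <cassert>
#include <string>

static const size_t ReadBufSize = 4096;      // assumed per-read target
static const size_t MaxBufSize = 64 * 1024;  // stands in for Config.maxRequestBufferSize

// Prepare buf for the next network read: instead of checking for a
// ">= 2 bytes" minimum, grow the buffer so the read has a reasonable
// amount of room, up to the configured limit. Returns true if some
// usable space is now available.
static bool maybeMakeSpaceAvailable(std::string &buf) {
    const size_t spaceNow = buf.capacity() - buf.size();
    if (spaceNow >= ReadBufSize)
        return true; // already enough room; leave the buffer alone
    const size_t wantedCapacity = std::min(MaxBufSize, buf.size() + ReadBufSize);
    if (wantedCapacity <= buf.capacity())
        return buf.capacity() > buf.size(); // at the limit; any leftover space will do
    buf.reserve(wantedCapacity); // may reallocate, like SBuf::reserve()
    return buf.capacity() > buf.size();
}
```

After this call, a tiny 6-byte space becomes a full read-sized space rather than forcing a 6-byte network read.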
Another subtle maybeMakeSpaceAvailable() problem was that the code
contained its own buffer capacity increase algorithm (n^2 growth).
However, increasing buffer capacity exponentially does not make much
sense because network read sizes are not going to grow exponentially.
Also, memAllocString()/memAllocate() overwrite the n^2 growth with their
own logic. Besides, it is buffer _space_, not the total capacity, that
should be increased. More work is needed to better match
Squid buffer size for from-user network reads with the TCP stack buffers
and traffic patterns.
Both the old and the new code reallocate inBuf MemBlobs. However, the
new code leaves "reallocate or memmove" decision to the new
SBuf::reserve(), opening the possibility for future memmove
optimizations that SBuf/MemBlob do not currently support.
It is probably wrong that inBuf points to an essentially random MemBlob
outside Server control but this change does not attempt to fix that.
This patch adds support for mimicking the TLS Authority Key Identifier
certificate extension in Squid-generated TLS certificates: if the origin
server certificate has that extension, the generated certificate (via the
ssl_crtd daemon or internally) should have the same extension, with the
same set of fields if possible.
Amos Jeffries [Tue, 14 Jun 2016 06:22:55 +0000 (18:22 +1200)]
Cleanup cppunit detection and use
The cppunit-config tool was apparently replaced by a pkg-config .pc file
years ago and is now in the process of being removed from some operating
systems, notably Fedora.
This means our present way of detecting it for use by "make check" will
increasingly fail.
This converts configure.ac to the pkg-config method of detection and
updates the --with-cppunit-basedir parameter to --without-cppunit,
matching our naming and usage scheme for other similar options. If a
=PATH is explicitly provided, cppunit is assumed to exist at that
location without configure-time checking.
During start-up, Valgrind reported many errors with a similar
message: "Use of uninitialised value of size 8...". These errors
were caused by an HttpRequestMethod& parameter used during private key
generation: it was treated as raw void* memory there. HttpRequestMethod
is a non-POD type and has some "padding" bytes, which are not initialized
by the constructor.
The patch simplifies private key generation by using a simple counter
instead, which is sufficient to create a unique key within a single Squid
process. Besides fixing the Valgrind errors, this solution saves many CPU
cycles wasted on generating MD5 hashes just to "guarantee" uniqueness
within a single process.
Amos Jeffries [Sat, 11 Jun 2016 05:28:18 +0000 (17:28 +1200)]
Fix GCC 4.8 eCAP builds after rev.14701
It seems GCC 4.8, at least, does not reorder linker flags, so we cannot
use the particular LDADD approach there. Use explicit setting of LIBS and
CXXFLAGS instead.
Amos Jeffries [Fri, 10 Jun 2016 04:53:56 +0000 (16:53 +1200)]
Bug 4446: undefined reference to 'libecap::Name::Name'
Add autoconf link check for -lecap if it is going to be used.
Also, clean up the adaptation dependency situation. Automake *_DEPENDENCIES
should no longer be used, and there is no need for both AC_SUBST component
lists and ENABLE_FOO conditionals to be defined. This allows simpler
configure.ac logic for both eCAP and ICAP.
The libsecurity ServerOptions and PeerOptions class methods are now
supposed to be the API for creating SSL contexts for https_port,
cache_peer and outgoing connections.
Continue the API transition by making the callers of sslCreate*Context()
functions use the libsecurity API instead and repurpose the now obsolete
functions into the Ssl:: namespace to initialize the keying material and
other not-yet-converted OpenSSL state details of an existing context.
A side effect of this is that GnuTLS contexts are now actually created
and initialized as far as they can be.
SSL-Bump context initialization is not altered by this.
Before this change, transactions initiating a refresh were still marked
as TCP_HIT*. If such a transaction was terminated (for any reason)
before receiving an IMS reply, it was logged with that misleading tag.
Now, such transactions are logged using TCP_REFRESH[_ABORTED].
After the refresh (successful or otherwise), the tag changes to one of
the other TCP_REFRESH_* values, as before.
Amos Jeffries [Mon, 30 May 2016 01:55:32 +0000 (13:55 +1200)]
Deprecating SMB LanMan helpers
Bring the SMB LanMan helpers one step closer to removal by dropping them
from the set of helpers which are auto-detected and built by default
with Squid.
They are still available for the minority using them, but they now need
to be explicitly listed in the ./configure options to be built.
Amos Jeffries [Fri, 27 May 2016 12:58:15 +0000 (00:58 +1200)]
Remove ie_refresh configuration option
This option was provided as a hack to work around problems in MSIE 5.01
and older.
Since those MSIE versions are long deprecated and have not registered on
the popularity charts for more than 5 years, I think it's time we removed
this hack.
Alex Rousskov [Fri, 27 May 2016 12:46:02 +0000 (00:46 +1200)]
Replace new/delete operators using modern C++ rules.
This change was motivated by "Mismatched free()/delete/delete[]" errors
reported by valgrind and mused about in Squid source code.
I speculate that the old new/delete replacement code was the result of
slow accumulation of working hacks to accommodate various environments,
as compiler support for the feature evolved. The cumulative result does
not actually work well (see the above paragraph), and the replacement
functions had the following visible coding problems according to [1,2]:
a) Declared with non-standard profiles that included throw specifiers.
b) Declared inline. C++ says that the results of inline declarations
have unspecified effects. In Squid, they probably necessitated
complex compiler-specific "extern inline" workarounds.
c) Defined in the header file. C++ says that defining replacements "in
any source file" is enough and that multiple replacements per
program (which is what a header file definition produces) result in
"undefined behavior".
d) Declared inconsistently (only 2 out of 4 flavors). Declaring one base
flavor should be sufficient, but if we declare more, we should
declare all of them.
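For illustration, a replacement following rules (a) through (d) might look like this sketch (not Squid's actual code): defined once in a source file, not inline, without throw specifiers, with the base flavors declared consistently:

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Count allocations to show the replacements are actually in effect.
static std::size_t allocations = 0;

// Standard-profile replacement: no throw specifier, not inline,
// defined in exactly one source file of the program.
void *operator new(std::size_t size) {
    ++allocations;
    if (void *p = std::malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();
}

// Matching base delete flavor; noexcept, as the standard requires.
void operator delete(void *p) noexcept {
    std::free(p);
}

// Sized flavor simply delegates; declaring it avoids mismatched pairs.
void operator delete(void *p, std::size_t) noexcept {
    std::free(p);
}
```

The array flavors default to calling these base flavors, which is why replacing the base flavors is sufficient.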
The replacements were not provided to clang (trunk r13219), but there
was no explanation why. This patch does not change that exclusion.
I have no idea whether any of the old hacks are still necessary in some
cases. However, I suspect that either we do not care much if the
replacements are not enabled on some poorly supported platforms OR we
can disable them (or make them work) using much simpler hacks for the
platforms we do care about.
Alex Rousskov [Mon, 23 May 2016 23:20:27 +0000 (17:20 -0600)]
Bug 4517 error: comparison between signed and unsigned integer
The old cast is required when size_t is unsigned (as it should be).
The new cast is required when size_t is signed (as it may be).
We could cast just the left-hand side to be signed instead, but it feels
slightly wrong to do that here because all values we are casting are
meant to be unsigned and, hence, in theory, might overflow in some
future version of the code if we cast them to a signed value now and
forget to fix that cast later while adding support for larger values.
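The trade-off can be sketched with an illustrative function (not the actual Bug 4517 code): the unsigned operands are cast explicitly, so a future value exceeding the signed range fails at the cast site rather than silently on the signed left-hand side:

```cpp
#include <cassert>
#include <cstddef>

// Compare a possibly-negative signed quantity against a sum of size_t
// values without triggering -Wsign-compare. Casting each unsigned
// operand (rather than the signed side) keeps the overflow risk visible
// at the cast, where it can be fixed when larger values arrive.
static bool fitsIn(long long available, size_t needA, size_t needB) {
    return available >= static_cast<long long>(needA)
                        + static_cast<long long>(needB);
}
```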
Alex Rousskov [Fri, 20 May 2016 18:16:19 +0000 (12:16 -0600)]
Fixed icons loading speed.
Since trunk r14100 (Bug 3875: bad mimeLoadIconFile error handling), each
icon was read from disk and written to Store one character at a time. I
did not measure startup delays in production, but in debugging runs,
fixing this bug sped up icons loading from 1 minute to 4 seconds.
Alex Rousskov [Fri, 20 May 2016 17:19:44 +0000 (11:19 -0600)]
Never enable OPENSSL_HELLO_OVERWRITE_HACK automatically.
OPENSSL_HELLO_OVERWRITE_HACK, a.k.a adjustSSL(), a.k.a. "splice after
stare and bump after peek" hack requires updating internal/private
OpenSSL structures. The hack also relies on the SSL client making TLS
negotiation decisions similar to those of our OpenSSL version.
Squid used to enable this hack if it could compile the sources, but:
* The hack works well in fewer and fewer cases.
* Making its behavior reliable is virtually impossible.
* Maintaining this hack is increasingly difficult, especially after
OpenSSL has changed its internal structures in v1.1.
* The combination of other bugs (fixed in r14670) and TLS extensions in
popular browsers effectively disabled this hack for a while, and
nobody (that we know of) noticed.
This temporary change disables the hack even if it can be compiled. If
an admin is willing to take the risks, they may enable it manually by
setting SQUID_USE_OPENSSL_HELLO_OVERWRITE_HACK macro value to 1 during
the build.
If, after this experimental change, we get no complaints (that we can
address), the hack will be completely removed from Squid sources.
Alex Rousskov [Fri, 20 May 2016 16:49:07 +0000 (10:49 -0600)]
Check for SSL_CIPHER_get_id() support required in adjustSSL().
Our adjustSSL() hack requires SSL_CIPHER_get_id() since trunk r14670,
but that OpenSSL function is not available in some environments, leading
to compilation failures.
Alex Rousskov [Fri, 20 May 2016 13:20:27 +0000 (01:20 +1200)]
Do not allow low-level debugging to hide important/critical messages.
Removed debugs() side effects that inadvertently caused some
important/critical messages to be logged at the wrong debugging level
and, hence, to become invisible to the admin. The removed side effects
set the "current" debugging level when a debugs() parameter called a
function that also called debugs(). The last nested debugs() call
affected the level of all debugs() still in progress!
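The per-message buffering idea can be sketched as follows (a hypothetical simplification, not Squid's Debug class): each in-progress message gets its own buffer, so a nested message logged while a parent is being built cannot clobber the parent or change its level:

```cpp
#include <cassert>
#include <memory>
#include <sstream>
#include <string>
#include <vector>

// One buffer per nesting level; only reentrant calls allocate extras.
class DebugChannel {
public:
    std::ostringstream &start() {
        buffers.emplace_back(new std::ostringstream);
        return *buffers.back();
    }
    void finish(std::string &log) {
        log += buffers.back()->str();
        log += '\n';
        buffers.pop_back(); // the parent buffer, if any, is untouched
    }
private:
    std::vector<std::unique_ptr<std::ostringstream>> buffers;
};
```

A nested message finishes (and is logged) first, in the natural order of the calls, while the outer message continues accumulating.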
Related changes:
* Reentrant debugging messages no longer clobber parent messages. Each
debugging message is logged separately, in the natural order of
debugs() calls that would have happened if debugs() were a function
(that gets already evaluated arguments) and not a macro (that
evaluates its arguments in the middle of the call). This order is
"natural" because good macros work like functions from the caller
point of view.
* Assertions hit while evaluating debugs() parameters are now logged
instead of being lost with the being-built debugs() log line.
* 10-20% faster debugs() performance because we no longer allocate a new
std::ostringstream buffer for the vast majority of debugs() calls.
Only reentrant calls get a new buffer.
* Removed old_debug(), addressing an old "needs to die" to-do.
* Removed do_debug() that changed debugging level while testing whether
debugging is needed. Use side-effect-free Debug::Enabled() instead.
Also removed the OutStream wrapper class. The wrapper was added in trunk
revision 13767 that promised to (but did not?) MemPool the debug output
buffers. We no longer "new" the buffer stream so a custom new() method
would be unused. Besides, the r13767 explanation implied that providing
a Child::new() method would somehow overwrite Parent::allocator_type,
which did not compute for me. Finally, Squid "new"s other allocator-
enabled STL objects without overriding their new methods so either the
same problem is still there or it did not exist (or was different?).
Also removed Debug::xassert() because the debugs() assertions now work
OK without that hack.
Amos Jeffries [Fri, 20 May 2016 08:28:33 +0000 (20:28 +1200)]
HTTP/1.1: unfold mime header blocks
Enact the RFC 7230 section 3 requirement that proxies remove obs-fold from
received mime header blocks and drop all lines that start with whitespace
immediately following the request-line.
Also;
* Shuffle the DelimiterCharacters() and RelaxedDelimiters() helper methods
to Http::One::Parser for use in processing mime.
* Extend the headersEnd() function with a wrapper to avoid SBuf::c_str()
and to efficiently report whether obs-fold exists without needing to
re-scan the block.
* Document the mime_header.h API function(s).
* Add Http::One::CrLf() to present a SBuf CRLF constant. This can be used
instead of the many local string definitions we have now.
The implementation could perhaps be done better for performance, but that
is not done here.
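A minimal sketch of the unfolding requirement (illustrative only, not the Http::One implementation): each obs-fold, a CRLF or bare LF followed by SP/HTAB, collapses to a single space:

```cpp
#include <cassert>
#include <string>

// Replace RFC 7230 obs-fold sequences in a mime header block with one SP.
static std::string unfold(const std::string &mime) {
    std::string out;
    size_t i = 0;
    while (i < mime.size()) {
        size_t eol = i;
        if (mime[i] == '\r' && i + 1 < mime.size() && mime[i+1] == '\n')
            eol = i + 2; // CRLF
        else if (mime[i] == '\n')
            eol = i + 1; // bare LF
        if (eol != i && eol < mime.size() &&
                (mime[eol] == ' ' || mime[eol] == '\t')) {
            out += ' '; // the whole fold becomes one SP
            i = eol;
            while (i < mime.size() && (mime[i] == ' ' || mime[i] == '\t'))
                ++i; // collapse the folding whitespace run
        } else {
            out += mime[i++]; // ordinary character or real line ending
        }
    }
    return out;
}
```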
Amos Jeffries [Fri, 20 May 2016 05:25:58 +0000 (17:25 +1200)]
squidclient: Improve shell-escape support in -H option
The -H parameter takes a string with some limited shell-escaped
characters. Previously, only \n was expanded, to the CRLF sequence;
other shell-escaped characters were left untouched.
However, to properly test headers containing weird CR, LF and HTAB
positioning, squidclient needs to be able to receive these special
characters individually and thus unescape them.
Add a new function to perform shell unescaping of the special characters
\\, \r, \n, and \t. All other escaped characters are un-escaped to
themselves, and a warning is printed at verbosity level 1.
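The described unescaping might look like this sketch (illustrative; the actual squidclient function may differ):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Expand \\, \r, \n and \t; any other escaped character unescapes to
// itself, with a warning when verbosity >= 1.
static std::string shellUnescape(const std::string &s, int verbosity = 0) {
    std::string out;
    for (size_t i = 0; i < s.size(); ++i) {
        if (s[i] != '\\' || i + 1 == s.size()) {
            out += s[i]; // ordinary character (or trailing backslash)
            continue;
        }
        const char c = s[++i];
        switch (c) {
        case '\\': out += '\\'; break;
        case 'r':  out += '\r'; break;
        case 'n':  out += '\n'; break;
        case 't':  out += '\t'; break;
        default:
            if (verbosity >= 1)
                fprintf(stderr, "warning: unknown escape \\%c\n", c);
            out += c; // un-escapes to itself
        }
    }
    return out;
}
```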
Alex Rousskov [Wed, 18 May 2016 22:46:52 +0000 (16:46 -0600)]
Delete cbdata-protected data when built --with-valgrind-debug.
Valgrind was correctly reporting every cbdata allocation as leaking!
AFAICT, these regressions were introduced by a combination of trunk
r13977 (Bug 4215: double-free in CBDATA) and trunk r13909 (de-duplicate
cbdata deallocate actions).
Also fixed and polished cbdata debugging that was printing mismatching
Allocating/Freeing pointer values and synced scripts/find-alive.pl.
Currently, peeking at step2 and then splicing at step2, after the SNI is
received, is slow.
Most of the performance overhead comes from OpenSSL. However, Squid does
not need OpenSSL to peek at the SNI. It only needs to get the client TLS
Hello message, analyze it to retrieve the SNI, and then splice at step2.
This patch:
- Postpones creation of the OpenSSL connection (i.e., SSL) object for the
accepted TCP connection until after we peek at the SNI (after step2).
- Implements the Parser::BinaryTokenizer parser for extracting
byte-oriented fields from raw input.
- Reimplements the SSL/TLS handshake message parser using the
BinaryTokenizer, and removes the old buggy parsing code from ssl/bio.cc.
- Adjusts the ConnStateData, Ssl::Bio, and Ssl::PeerConnector classes to
use the new parsers and parsing results.
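The BinaryTokenizer idea of pulling network-order fields from raw input can be sketched as follows (the API names are illustrative, not the actual Parser::BinaryTokenizer interface):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <string>

// Extract fixed-size big-endian fields from a raw byte buffer, throwing
// when the input runs out (the caller treats that as "need more data").
class BinaryTokenizer {
public:
    explicit BinaryTokenizer(const std::string &raw) : data(raw), pos(0) {}

    uint8_t uint8() { return static_cast<uint8_t>(next(1)[0]); }

    uint16_t uint16() { // TLS fields are network (big-endian) order
        const std::string b = next(2);
        return static_cast<uint16_t>(
            (static_cast<uint8_t>(b[0]) << 8) | static_cast<uint8_t>(b[1]));
    }

private:
    std::string next(size_t n) {
        if (pos + n > data.size())
            throw std::runtime_error("need more input");
        const std::string field = data.substr(pos, n);
        pos += n;
        return field;
    }

    std::string data;
    size_t pos;
};
```

Parsing the first bytes of a TLS record (content type, then protocol version) becomes two calls.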
Alex Rousskov [Wed, 18 May 2016 05:21:28 +0000 (23:21 -0600)]
HttpHeaderEntry leaks and HttpHeader::len corruption by delById().
AFAICT, this regression was introduced in trunk r14285 (Refactor
HttpHeader into gperf-generated perfect hash) and became severe after
trunk r14659 started calling delById() on virtually every request.
Allow chunking the last HTTP response on a connection.
Squid should avoid signaling the message end by connection closure
because it hurts message integrity and sometimes performance. Squid
now chunks if:
1. the response has a body;
2. the client claims HTTP/1.1 support; and
3. Squid is not going to send a Content-Length header.
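The three conditions can be expressed as a trivial predicate (a sketch with illustrative names, not Squid's actual code):

```cpp
#include <cassert>

// Chunk the response when all three conditions listed above hold.
static bool shouldChunk(bool responseHasBody, bool clientClaimsHttp11,
                        bool willSendContentLength) {
    return responseHasBody && clientClaimsHttp11 && !willSendContentLength;
}
```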
AFAICT, Squid used to exclude to-be-closed connections from chunking
because chunking support was added (trunk r10781) specifically to
optimize persistent connection reuse and closing connections were
incorrectly excluded as a non-interesting/out-of-scope case. And/or
perhaps we did not realize the dangers of signaling the message end
by connection closure.
- Replace the Handshake::details pointer with an always-available object.
- Replace Security::ProtocolVersion and its "int" representation in the
TlsDetails and NegotiationHistory classes with the existing
AnyP::ProtocolVersion.
- Fix TlsDetails::compressMethod: clients may send a compression methods
list with a NULL compression method.
Rename it to TlsDetails::compressionSupported.
- Other minor fixes.
Amos Jeffries [Wed, 4 May 2016 03:31:48 +0000 (15:31 +1200)]
Fix SIGSEGV in ESIContext response handling
An HttpReply pointer was being unlocked without having been locked,
resulting in a double free. Make it use RefCount instead of manual
locking to ensure that lock/unlock is always symmetrical.
Alex Rousskov [Tue, 3 May 2016 17:18:39 +0000 (11:18 -0600)]
Optimization: Spend less CPU and RAM on adjustSSL(). Speed gain: ~5%.
Do not store extension types just to iterate over them in adjustSSL().
Check for extension support while parsing instead. Since the list of
OpenSSL-supported extensions is constant (does not depend on the
connection), we do not need to create and index extension storage once
for each TLS connection; we now do it once per worker lifetime instead.
Use std::unordered_set instead of std::list for ciphers. Most real-world
cipher lists probably contain dozens of 2-byte entries, making std::list
storage a poor choice. Unlike TLS extensions, supported ciphers depend
on the connection so we have to store all of them to check whether each
stored cipher is supported for the SSL connection object created later.
Having an O(1) lookup speeds up that last check a lot compared to the
old linear search across all stored ciphers.
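The data-structure choice can be illustrated with a sketch (the cipher IDs are example values, not a supported-cipher list):

```cpp
#include <cassert>
#include <unordered_set>

// Cipher IDs parsed from the client Hello. An unordered_set gives
// amortized O(1) membership checks, versus an O(n) scan of a std::list
// for every supported-cipher lookup.
typedef std::unordered_set<unsigned> CipherSet;

static bool cipherSupported(const CipherSet &parsed, unsigned cipherId) {
    return parsed.find(cipherId) != parsed.end();
}
```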
Do fast adjustSSL() checks before the longer cipher loop check.
Added TLSEXT_TYPE_signature_algorithms(13) and
TLSEXT_TYPE_next_proto_neg(13172) to the list of TLS extensions
supported by OpenSSL and recognized by Squid. Recognizing these
extensions is necessary for adjustSSL() to work in more real-world
cases.
Also sorted TLSEXT_TYPE_* entries and replaced "#if 0" code with a way
to build Squid to recognize more extensions as OpenSSL's list grows.
Alex Rousskov [Mon, 2 May 2016 16:38:05 +0000 (10:38 -0600)]
Finalized BinaryTokenizer context handling. Polished.
No more funny context fields inside TLS structures. Context is handled
by the parsing code without needlessly storing it long-term.
Hid TLS structures/parsers used exclusively by
Security::HandshakeParser inside security/Handshake.cc to simplify API.
Also skipped unused ServerHello.random (instead of storing it in
TlsDetails::clientRandom) and replaced SQUID_TLS_RANDOM_SIZE macro
with a regular C++ constant.
Amos Jeffries [Mon, 2 May 2016 06:09:13 +0000 (18:09 +1200)]
HTTP/1.1: normalize Host header
When an absolute-URI is provided, the Host header should be ignored.
However, some code still uses Host directly, so normalize it using the
previously sanitized URL authority value before doing any further
request processing.
For now preserve the case where Host is completely absent. That matters
to the CVE-2009-0801 protection.
This also has the desirable side effect of removing multiple or duplicate
Host header entries.
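A sketch of the normalization rule (hypothetical helper, not Squid's actual code): the sanitized authority overwrites any sent Host value, while a completely absent Host stays absent:

```cpp
#include <cassert>
#include <string>

// Overwrite the Host header with the sanitized URL authority, unless
// Host was completely absent (preserved for CVE-2009-0801 protection).
// Overwriting with one value also collapses duplicate Host entries.
static void normalizeHost(const std::string &urlAuthority,
                          bool hostWasPresent, std::string &hostHeader) {
    if (!hostWasPresent)
        return; // preserve complete absence of Host
    hostHeader = urlAuthority;
}
```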
Nathan Hoad [Mon, 2 May 2016 03:17:18 +0000 (15:17 +1200)]
Prevent Squid forcing -b 2048 into the arguments for sslcrtd_program
Previously Squid assumed it was running with the default sslcrtd_program, which
takes an argument for the FS block size. This causes issues for administrators
that use their own helpers that happen to take a -b argument that means
something else entirely, causing confusion and preventing them from removing
this argument.
A summary of the changes:
* Move the block size retrieval from Squid into security_file_certgen. It
does not use fsBlockSize as that introduces a lot of dependencies on
unrelated Squid code, e.g. fde, Debug, MemBuf.
* Make the -b argument mostly redundant, but leave it there so
administrators can overrule xstatvfs.
* Fix a small typo.
This work is submitted on behalf of Bloomberg L.P.
Alex Rousskov [Sun, 1 May 2016 21:37:52 +0000 (15:37 -0600)]
Accumulate fewer unknown-size responses to avoid overwhelming disks.
Start swapping out an unknown-size entry as soon as size-based cache_dir
selection is no longer affected by the entry growth. If the entry
eventually exceeds the selected cache_dir entry size limits, terminate
the swapout.
The following description assumes that Squid deals with a cachable
response that lacks a Content-Length header. These changes should not
affect other responses.
Prior to these changes, StoreEntry::mayStartSwapOut() delayed swapout
decision until the entire response was received or the already
accumulated portion of the response exceeded the [global] store entry
size limit, whichever came first. This logic protected Store from
entries with unknown sizes. AFAICT, that protection existed for two
reasons:
* Incorrect size-based cache_dir selection: When cache_dirs use
different min-size and/or max-size settings, Squid cannot tell which
cache_dir the unknown-size entry belongs to and, hence, may select the
wrong cache_dir.
* Disk bandwidth/space waste: If the growing entry exceeds all cache_dir
max-size limits, the swapout has to be aborted, resulting in waste of
previously spent resources (primarily disk bandwidth and space).
The costs of those protections include RAM waste (up to the maximum cachable
object size for each of the concurrent unknown-size entry downloads) and
sudden disk overload (when the entire delayed entry is written to disk
in a large burst of write requests initiated from a tight doPages() loop
at the end of the swapout sequence when the entry size becomes known).
The latter cost is especially high because swapping the entire large
object out in one function call can easily overflow disker queues and/or
block Squid while the OS drains disk write buffers in an emergency mode.
FWIW, AFAICT, cache_dir selection protection was the only reason for
introducing response accumulation (trunk r4446). The RAM cost was
realized a year later (r4954), and the disk costs were realized during
this project.
This change reduces those costs by starting to swap out an unknown-size
entry ASAP, usually immediately. In most caching environments, most
large cachable entries should be cached. It is usually better to spend
[disk] resources gradually than to deal with sudden bursts [of disk
write requests]. Occasional jolts make high performance unsustainable.
This change does not affect size-based cache_dir selection: Squid still
delays swapout until future entry growth cannot affect that selection.
Fortunately, in most configurations, the correct selection can happen
immediately because cache_dirs lack explicit min-size and/or max-size
settings and simply rely on the *same-for-all* minimum_object_size and
maximum_object_size values.
We could make the trade-off between costly protections (i.e., accumulate
until the entry size is known) and occasional gradual resource waste
(i.e., start swapping out ASAP) configurable. However, I think it is
best to wait for the use case that requires such configuration and can
guide the design of those new configuration options.
Side changes:
* Honor forgotten minimum_object_size for cache_dirs without min-size in
Store::Disk::objectSizeIsAcceptable() and fix its initial value to
correctly detect a manually configured cache_dir min-size (which may
be zero). However, the fixed bug is probably hidden by another (yet
unfixed) bug: checkTooSmall() forgets about cache_dirs with min-size!
* Allow unknown-size objects into the shared memory cache, whose code
could handle partial writes (since the collapsed forwarding changes?).
* Fixed Rock::SwapDir::canStore() handling of unknown-size objects. I do
not see how such objects could get that far before, but if they could,
most would probably be cached because the bug would hide the unknown
size from Store::Disk::canStore() that declares them unstorable.
Alex Rousskov [Sat, 30 Apr 2016 03:38:26 +0000 (21:38 -0600)]
Stop parsing SSL records after a fatal SSL Alert.
The fatal alert sender should close the connection. Waiting for the next
record is pointless and will obscure the problem when we eventually read
the EOF on the socket.
Alex Rousskov [Sat, 30 Apr 2016 03:35:46 +0000 (21:35 -0600)]
Separated BinaryTokenizer commits from context debugging. Polished.
Commits are relatively rare events specific to incremental parsing. Most
parsers are not incremental and do not commit/rollback. However, all
parsers need to debug what they parse. Thus, it was wrong to combine
commits with context debugging.
BinaryTokenizer single-context debugging did not support nested contexts
(such as Hello.version.major) and reported wrong FieldGroup sizes for
some parsed structures. The new BinaryTokenizerContext does not have
these problems and is more general (but still needs more polishing
work).
Also polished many field names, comments, debug messages, and some code.
Earlier code required the caller to first create a PString object and
then extract its .data field to get to the vector contents as an SBuf.
However, that approach just increased code and logging noise because
that SBuf itself (including SBuf.length()) is all the caller may need!
Alex Rousskov [Fri, 29 Apr 2016 00:27:45 +0000 (18:27 -0600)]
Fixed parsing of server-sent SNI.
The old code could not handle the empty SNI extension that most servers
send. RFC 6066 prose instructs servers to send empty SNI extensions, and
the formal SNI grammar is apparently client-specific. We are not the
only ones confused by that, because there are servers that send
empty ServerNameLists, which are actually prohibited by the grammar.
Alex Rousskov [Sat, 23 Apr 2016 04:46:12 +0000 (22:46 -0600)]
Removed HandshakeParser::parseError and hid/renamed its parseDone.
The presence of two "persistent" parsing outcomes (done and error) was
confusing to the callers: Which one do I check and when? The adjusted
interface uses exceptions for errors and a false parseHandshake() return
value to signal "need more data". This simplifies the API and untangles
the calling code quite a bit.
Cleanup: convert late initialized objects to MEMPROXY_CLASS
Convert all the objects using the libmem "old API" for as-needed pools
to using the MEMPROXY_CLASS() API which is better designed for late
initialization.
Alex Rousskov [Fri, 22 Apr 2016 00:12:20 +0000 (18:12 -0600)]
Lack of data at EOF is a parsing error, not a "need more" condition.
Branch changes did not distinguish "need more and can expect more" from
"need more but got everything" situations. When one of the inner parsers
that deal with "complete" buffers failed, it would throw an
InsufficientInput exception, and the higher-level parsing code would
interpret that as a sign that more SSL records/fragments were needed,
resulting in stuck transactions instead of parsing errors.
Unlike parsing errors, stuck transactions cannot be bypassed or ignored
so the difference is important.