* The "mem-loaded all" message was printing -1 instead of the
accumulated object size. It also deserves a lower debugging level
because it happens at most once per transaction.
Fix parsing of certificate validator responses (#452)
If a certificate validator did not end its response with an end-of-line
or whitespace character, then Squid, while parsing the response,
accessed the bytes after the end of the buffer where the response is
stored.
GCC-9 with Squid's use of -Werror makes these warnings hard
errors which can no longer be ignored. We are thus required
to alter this third-party code when built for Squid.
Truncation of these strings is fine. Rather than suppress
the GCC warnings, switch to xstrncpy(), which has similar
behaviour but guarantees a C-string terminator exists within
the copied range limit (removing the need for two -1 hacks).
This change will add terminators on path and device_type
values in the rare case of overly long configured values.
It is not clear what ancient Domain Controllers would do
when handed an unterminated C-string in those cases, but it
was unlikely to be good.
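The termination guarantee can be illustrated with a small stand-in (the real xstrncpy() lives in Squid's compat code; this sketch only mirrors the semantics described above):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Illustrative stand-in for Squid's xstrncpy(): copy at most n-1
// characters and always write a terminating NUL within the n-byte
// limit, so callers need no "-1 hacks" to guarantee termination.
static char *xstrncpy_sketch(char *dst, const char *src, std::size_t n)
{
    if (n == 0)
        return dst;
    std::size_t i = 0;
    for (; i < n - 1 && src[i] != '\0'; ++i)
        dst[i] = src[i];
    dst[i] = '\0'; // terminator guaranteed inside the copied range
    return dst;
}
```

Unlike strncpy(), the destination is terminated even when the source is longer than the buffer, which is exactly the overly-long path/device_type case described above.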
Partial disk writes may be useful for CF disk slaves and SMP disk hit
readers, but their correct implementation requires a lot of additional
work, the current implementation is insufficient/buggy, and partially
written entries were probably never read because Rock writers do not
enable shared/appending locks.
Here is a brief (but complicated) history of the issue, for the record:
1. 807feb1 adds partial disk writes "in order to propagate data from
the hit writer to the collapsed hit readers". The readers
probably could not read any data though because the disk entry
was still exclusively locked for writing. The developers either
did not realize that or intended to address it later, but did not
document their intentions -- all this development was happening
on a fast-moving CF development branch.
2. 0b8be48 makes those partial disk writes conditional on CF being
enabled. It is not clear whether the developers wanted to reduce
the scope of a risky feature or did not realize that non-CF use
cases also benefit from partial writes (when fully supported).
3. ce49546 adds low-level appending lock support but does not
actually use append locks for any caches.
4. 4475555 adds appending support to the shared memory cache.
5. 4976925 explicitly disables partial disk writes, acknowledging
that they were probably never used (by readers) anyway due to the
lack of a Ipc::StoreMap::startAppending() call. The same commit
documents that partial writes caused problems (for writers) in
performance tests.
6. 5296bbd re-enables partial disk writes (for writers) after fixing
problems detected earlier in performance tests. This commit does
not add the critical (for readers) startAppending() call. It
looks like the lack of that call was overlooked, again!
When parsing entries from the /etc/hosts file, they are all
lowercased (see bug 3040). If a cache_peer hostname is
uppercase, it will lead to DNS resolution failure. Lowercasing
the cache_peer host fixes this issue.
This change may expose broken Squid configurations that
incorrectly relied on non-lowercase peer host names to
bypass Squid's "is this cache_peer different from me?"
check. Such configurations should encounter forwarding loop
errors later anyway.
Bug 4957: Multiple XSS issues in cachemgr.cgi (#429)
The cachemgr.cgi web module of the Squid proxy is vulnerable
to XSS issues. The vulnerable parameters "user_name" and "auth"
have insufficient sanitization in place.
FreeBSD defines FD_NONE in /usr/include/fcntl.h to be magic to
the system. We are not using that name explicitly anywhere, but
it may make sense to keep it around as a default value for
fd_type. Rename the symbol to avoid the clash and fix the build
on FreeBSD.
Bug 4842: Memory leak when http_reply_access uses external_acl (#424)
Http::One::Server::handleReply() sets AccessLogEntry::reply, which
may already be set. For example, it is set when the ACL code
has already called syncAle() because external ACLs require an ALE.
Squid converted any invalid response shorter than 4 bytes into an
invalid "HTTP/1.1 0 Init" response (with those received characters and a
CRLFCRLF suffix as a body). In some cases (e.g., with ICAP RESPMOD), the
resulting body was not sent to the client at all.
Now Squid handles such responses the same way it handles any non-HTTP/1
(and non-ICY) response, converting it into a valid HTTP/200 response
with an X-Transformed-From:HTTP/0.9 header and received bytes as
a message body.
Amos Jeffries [Sat, 8 Jun 2019 11:40:40 +0000 (11:40 +0000)]
Fix GCC-9 build issues (#413)
GCC-9 continues the development track started with GCC-8
producing more warnings and errors about possible code issues
which Squid's use of "-Wall -Werror" turns into hard build
failures:
error: 'strncpy' output may be truncated copying 6 bytes from a
string of length 6 [-Werror=stringop-truncation]
error: '%s' directive argument is null
[-Werror=format-overflow=]
error: 'void* memset(void*, int, size_t)' clearing an object of
type ... with no trivial copy-assignment; use assignment or
value-initialization instead [-Werror=class-memaccess]
error: 'void* memset(void*, int, size_t)' clearing an object of
non-trivial type ...; use assignment or value-initialization
instead [-Werror=class-memaccess]
Also, segmentation faults with minimal builds have been
identified as std::string template differences between
optimized and non-optimized object binaries. This results in
cppunit (built with optimizations) crashing unit tests when
freeing memory. Work around that temporarily by removing the use
of --disable-optimizations from minimal builds.
Amos Jeffries [Thu, 6 Jun 2019 12:06:41 +0000 (00:06 +1200)]
Bug 4953: to_localhost does not include :: (#410)
Some OSes treat an unspecified destination address as an
implicit localhost connection attempt. Add ::/128 alongside the
to_localhost 0.0.0.0/32 address to let admin forbid these
connections when DNS entries wrongly contain [::].
Also, adjust ::1 to ::1/128 to match IPv4 range-based definition
and clarify that IPv6 localhost is /128 rather than /127.
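With this change, the shipped to_localhost definition can be expected to resemble the following sketch (the address list follows the description above; the exact default in squid.conf.documented may differ):

```
# to_localhost matches destinations some OSes treat as localhost
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1/128 ::/128

# let admins forbid such requests, e.g. when DNS wrongly returns [::]
http_access deny to_localhost
```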
Amos Jeffries [Sat, 10 Nov 2018 04:00:12 +0000 (17:00 +1300)]
Fix tls-min-version= being ignored
An audit-required change made PeerOptions::parse() call
parseOptions() when 'options=' altered sslOptions, instead of
delaying the parse to context creation.
That change missed the fact that, for GnuTLS, tlsMinVersion was
also updating the sslOptions string rather than the
parsedOptions variable later in the configuration process.
Call parseOptions() to reset the parsedOptions value whenever
sslOptions string is altered.
Amos Jeffries [Tue, 21 May 2019 21:31:31 +0000 (21:31 +0000)]
Replace uudecode with libnettle base64 decoder (#406)
Since RFC 7235 updated the HTTP Authentication credentials
token to the token68 character set, it is possible to receive
characters that uudecode cannot cope with.
The Nettle decoder better handles characters which are valid
but not meant to be used in Basic auth tokens.
Matthieu Herrb [Mon, 13 May 2019 08:45:57 +0000 (08:45 +0000)]
Bug 4889: Ignore ECONNABORTED in accept(2) (#404)
An aborted connection attempt does not affect the listening
socket's ability to accept other connections. If the error is
not ignored, Squid gets stuck after logging an oldAccept error.
This bug fix was motivated by accept(2) changes in OpenBSD v6.5 that
resulted in new ECONNABORTED errors under regular deployment conditions:
https://github.com/openbsd/src/commit/c255b5a
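A minimal sketch of the errno classification involved (a hypothetical helper, not Squid's actual Comm::TcpAcceptor code):

```cpp
#include <cassert>
#include <cerrno>

// Hypothetical helper: decide whether an accept(2) failure leaves
// the listening socket healthy. ECONNABORTED only means one queued
// connection attempt was aborted by the peer, so the caller should
// ignore it and keep accepting.
static bool transientAcceptError(const int err)
{
    switch (err) {
    case ECONNABORTED: // aborted attempt; listener still usable
    case EINTR:        // interrupted by a signal; just retry
        return true;
    default:
        return false;  // e.g., EMFILE/ENFILE: genuine trouble
    }
}
```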
Amos Jeffries [Sat, 4 May 2019 06:53:45 +0000 (06:53 +0000)]
Bug 4942: --with-filedescriptors does not do anything (#395)
SQUID_CHECK_MAXFD has been unconditionally overwriting any
user-defined limit with an auto-detected limit from the build
machine. The change causing this was an incomplete fix for
bug 3970 added to v3.3 and later releases.
Fixing that problem has two notable side effects:
* the user-defined value now has the FD property checks applied
to it (multiple of 64, too-few, etc). This means warnings will
start to appear in build logs for a number of custom
configurations. We should expect an increase in questions
about that.
* builds which have previously been passing in outrageous values
will actually start to use those values as the SQUID_MAXFD
limit. This may result in surprising memory consumption or
performance issues. Hopefully the warnings and new messages
displaying auto-detected limit separate from the value used
will reduce the admin surprise, but may not.
This PR also includes cleanup of the autoconf syntax within the
SQUID_CHECK_MAXFD macro and moves the ./configure warnings about
possible issues into that check macro.
When MIT or Heimdal Kerberos libraries are installed at a custom
location, there may be several krb5-config scripts installed. The
one located at the user-provided path (if any) needs to have
preference.
This assertion could be triggered by various swapout failures for
ufs/aufs/diskd cache_dir entries.
The bug was caused by 4310f8b change related to storeSwapOutFileClosed()
method. Before that change, swapout failures resulted in
StoreEntry::swap_status set to SWAPOUT_NONE, preventing
another/asserting iteration of StoreEntry::swapOut().
This fix adds SWAPOUT_FAILED swap status for marking swapout failures
(instead of reviving and abusing SWAPOUT_NONE), making the code more
reliable.
Also removed storeSwapOutFileNotify() implementation. We should not
waste time on maintaining an unused method that now contains conflicting
assertions: swappingOut() and !hasDisk().
Alex Rousskov [Mon, 1 Apr 2019 16:58:36 +0000 (16:58 +0000)]
Bug 4796: comm.cc !isOpen(conn->fd) assertion when rotating logs (#382)
Squid abandoned cache.log file descriptor maintenance, calling fd_open()
but then closing the descriptor without fd_close(). If the original file
descriptor value was reused for another purpose, Squid would either hit
the reported assertion or log a "Closing open FD" WARNING (depending on
the new purpose). The cache.log file descriptor is closed on log
rotation and reconfiguration events.
This short-term solution avoids assertions and WARNINGs but sacrifices
cache.log listing in fd_table and, hence, mgr:filedescriptors reports.
The correct long-term solution is to properly maintain descriptor meta
information across cache.log closures/openings, but doing so from inside
of debug.cc is technically difficult due to linking boundaries/problems.
Alex Rousskov [Tue, 19 Mar 2019 20:30:55 +0000 (20:30 +0000)]
When using OpenSSL, trust intermediate CAs from trusted stores (#383)
According to [1], GnuTLS and NSS do that by default.
Use case: Chrome and Mozilla no longer trust Symantec root CAs _but_
still trust several whitelisted Symantec intermediate CAs[2]. Squid
built with OpenSSL cannot do that without X509_V_FLAG_PARTIAL_CHAIN.
Amos Jeffries [Thu, 7 Mar 2019 13:50:38 +0000 (13:50 +0000)]
Bug 4928: Cannot convert non-IPv4 to IPv4 (#379)
... when reaching client_ip_max_connections
The client_ip_max_connections limit is checked before the TCP
dst-IP is located for the newly received TCP connection. This
leaves Squid unable to fetch the NFMARK or similar details later
on (they do not exist for [::]).
Move the client_ip_max_connections test later in the TCP accept
process to ensure the dst-IP is known when the error is produced.
Alex Rousskov [Sun, 24 Feb 2019 03:28:47 +0000 (03:28 +0000)]
Fixed squidclient authentication after 4b19fa9 (Bug 4843 pt2) (#373)
* squidclient -U sent Proxy-Authorization instead of Authorization.
Code duplication bites again.
* squidclient -U and -u could send random garbage after the correct
[Proxy-]Authorization value, as exposed by Coverity CID 1441999: Unused
value (UNUSED_VALUE). Coverity missed this deeper problem, but
analyzing its report led to the discovery of the two bugs fixed here.
Also reduced authentication-related code duplication.
Conflicts:
tools/squidclient/squidclient.cc
mahdi1001 [Sun, 24 Feb 2019 09:24:14 +0000 (12:54 +0330)]
Add support for buffer-size= to UDP logging #359 (#377)
* Add support for buffer-size= to UDP logging #359
Allow admin control of buffering for log outputs written to UDP
receivers using the buffer-size= parameter.
buffer-size=0byte disables buffering and sends UDP packets
immediately regardless of line size.
When non-0 values are used, lines shorter than the buffer may be
delayed and aggregated into a later UDP packet.
Log lines larger than the buffer size will be sent immediately
and may trigger delivery of previously buffered content to
retain log order (at time of send, not UDP arrival).
To avoid truncation problems known with common recipients
the buffer size remains capped at 1400 bytes.
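For illustration, a squid.conf line using the new parameter might look like this (the collector address is hypothetical; "squid" is the built-in logformat name):

```
# buffer up to the 1400-byte cap before emitting a UDP datagram
access_log udp://203.0.113.10:5140 buffer-size=1400bytes squid

# or disable buffering and send each log line immediately
#access_log udp://203.0.113.10:5140 buffer-size=0byte squid
```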
Restored the natural order of the following two notifications:
* BodyConsumer::noteMoreBodyDataAvailable() and
* BodyConsumer::noteBodyProductionEnded() or noteBodyProducerAborted().
Commit b599471 unintentionally reordered those two notifications. Client
kids (and possibly other BodyConsumers) relied on the natural order to
end their work. If an HttpStateData job was done with the Squid-to-peer
connection and only waiting for the last adapted body bytes, it would
get stuck and leak many objects. This use case was not tested during b599471 work.
Amish [Wed, 2 Jan 2019 11:51:45 +0000 (11:51 +0000)]
basic_ldap_auth: Return BH on internal errors; polished messages (#347)
Basic LDAP auth helper now returns BH instead of ERR in case of errors
other than LDAP_SECURITY_ERROR, per helper guidelines.
Motivation: I have a wrapper around Basic LDAP auth helper. If an LDAP
server is down, then the helper returns BH, and the wrapper uses
a fallback authentication source.
Also converted printf() to SEND_*() macros and reduced message
verbosity.
Systems which have been partially 'IPv6 disabled' may allow
sockets to be opened and used but lack the IPv6 loopback
address.
Implement the outstanding TODO to detect such failures and
disable IPv6 support properly within Squid when they are found.
This should fix bug 4915 auth_param helper startup and similar
external_acl_type helper issues. For security, such helpers are
not permitted to use the machine default IP address, which is
globally accessible.
Fail Rock swapout if the disk dropped some of the write requests (#352)
Detecting dropped writes earlier is more than a TODO: If the last entry
write was successful, the whole entry becomes available for hits
immediately. IpcIoFile::checkTimeouts() that runs every 7 seconds
(IpcIoFile::Timeout) would eventually notify Rock about the timeout,
allowing Rock to release the failed entry, but that notification may
be too late.
The precise outcome of hitting an entry with a missing on-disk slice is
unknown (because the bug was detected by temporary hit validation code
that turned such hits into misses), but SWAPFAIL is the best we could
hope for.
Initialize StoreMapSlice when reserving a new cache slot (#350)
Rock sets the StoreMapSlice::next field when sending a slice to disk. To
avoid writing slice A twice, Rock allocates a new slice B to prime
A.next right before writing A. Scheduling A's writing and, sometimes,
lack of data to fill B create a gap between B's allocation and B's
writing (which sets B.next). During that time, A.next points to B, but
B.next is untouched.
If writing slice A or swapout in general fails, the chain of failed
entry slices (now containing both A and B) is freed. If untouched B.next
contains garbage, then freeChainAt() adds "random" slices after B to the
free slice pool. Subsequent swapouts use those incorrectly freed slices,
effectively overwriting portions of random cache entries, corrupting the
cache.
How did B.next get dirty in the first place? freeChainAt() cleans the
slices it frees, but Rock also makes direct noteFreeMapSlice() calls.
Shared memory cache may have avoided this corruption because it makes no
such calls.
Ipc::StoreMap::prepFreeSlice() now clears allocated slices. Long-term,
we may be able to move free slice management into StoreMap to automate
this cleanup.
Also simplified and polished slot allocation code a little, removing the
Rock::IoState::reserveSlotForWriting() middleman. This change also
improves the symmetry between Rock and shared memory cache code.
Before this fix, Squid sometimes logged the following error:
BUG: Worker I/O pop queue for ... overflow: ...
The bug could result in truncated hit responses, reduced hit ratio, and,
combined with buggy lost I/O handling code (GitHub PR #352), even cache
corruption.
The bug could be triggered by the following sequence of events:
* Disker dequeues one I/O request from the worker push queue.
* Worker pushes more I/O requests to that disker, reaching 1024 requests
in its push queue (QueueCapacity or just "N" below). No overflow here!
* Worker process is suspended (or is just too busy to pop I/O results).
* Disker satisfies all 1+N requests, adding each to the worker pop queue
and overflows that queue when adding the last processed request.
This fix limits worker push so that the sum of all pending requests
never exceeds (pop) queue capacity. This approach will continue to work
even if diskers are enhanced to dequeue multiple requests for seek
optimization and/or priority-based scheduling.
Pop queue and push queue can still accommodate N requests each. The fix
appears to reduce supported disker "concurrency" levels from 2N down to
N pending I/O requests, reducing queue memory utilization. However, the
actual reduction is from N+1 to N: Since a worker pops all its satisfied
requests before queuing a new one, there could never be more than N+1
pending requests (N in the push queue and 1 worked on by the disker).
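The invariant behind the fix can be modeled with a toy queue pair (names and structure are illustrative, not Squid's Ipc queue code): the worker refuses to push while the total number of pending requests would exceed the pop queue capacity N.

```cpp
#include <cassert>
#include <cstddef>

// Toy model of the worker<->disker queue pair described above.
// The worker may only push while the total number of pending
// requests -- those in the push queue, those taken by the disker
// but not yet answered, and unread results in the pop queue --
// stays below the per-queue capacity N.
struct QueuePair {
    std::size_t capacity;      // N: capacity of each one-way queue
    std::size_t pushQueue = 0; // requests waiting for the disker
    std::size_t inDisker = 0;  // dequeued, result not yet produced
    std::size_t popQueue = 0;  // results waiting for the worker

    explicit QueuePair(std::size_t n) : capacity(n) {}

    std::size_t pending() const { return pushQueue + inDisker + popQueue; }

    // the fix: refuse to push once pending() would exceed N
    bool workerPush() {
        if (pending() >= capacity)
            return false;
        ++pushQueue;
        return true;
    }

    void diskerDequeue() {
        if (pushQueue) { --pushQueue; ++inDisker; }
    }

    // returns false on pop queue overflow (the logged BUG)
    bool diskerSatisfy() {
        if (!inDisker) return false;
        if (popQueue >= capacity) return false; // overflow!
        --inDisker; ++popQueue;
        return true;
    }
};
```

In the overflow scenario above, the worker's push attempts now fail once N requests are pending, so a suspended worker can never face more than N un-popped results.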
We left the BUG reporting and handling intact. There are no known bugs
in that code now. If the bug never surfaces again, it can be replaced
with code that translates low-level queue overflow exception into a
user-friendly TextException.
Alex Rousskov [Tue, 8 Jan 2019 15:14:18 +0000 (15:14 +0000)]
Fix BodyPipe/Sink memory leaks associated with auto-consumption (#348)
Auto-consumption happens (and could probably leak memory) in many cases,
but this leak was exposed by an eCAP service that blocked or replaced
virgin messages.
The BodySink job termination algorithm relies on body production
notifications. A BodySink job created after the body production had
ended can never stop and, hence, leaks (leaking the associated BodyPipe
object with it). Such a job is also useless: If production is over,
there is no need to free space for more body data! This change avoids
creating such leaking and useless jobs.
Amos Jeffries [Sun, 6 Jan 2019 13:22:19 +0000 (13:22 +0000)]
Bug 4875 pt2: GCC-8 compile errors with -O3 optimization (#288)
At -O3 optimization, GCC-8's static analyzer detects that the
optimized code elides initialization on paths that do not use
the configuration variables, and warns about it.
Refactor the parseTimeLine() API to return the parsed
values so that there is no need to initialize anything prior
to parsing.
Fixed forward_max_tries documentation and implementation (#277)
Before 1c8f25b, FwdState::n_tries counted the total number of forwarding
attempts, including pinned and persistent connection retries. Since that
revision, it started counting just those retries. What should n_tries
count? The counter is used to honor the forward_max_tries directive, but
that directive was documented to limit the number of _different_ paths
to try. Neither 1c8f25b~1 nor 1c8f25b code matched that documentation!
Continuing to count just pinned and persistent connection retries
(as in 1c8f25b) would violate any reasonable forward_max_tries
intent and admin expectations. There are two ways to fix this
problem, synchronizing code and documentation:
* Count just the attempts to use a different forwarding path, matching
forward_max_tries documentation but not what Squid has ever done. This
approach makes it difficult for an admin to limit the total number of
forwarding attempts in environments where, say, the second attempt is
unlikely to succeed and will just incur wasteful delays (Squid bug
4788 report is probably about one of such use cases). Also,
implementing this approach may be more difficult because it requires
adding a new counter for retries and, for some interpretations of
"different", even a container of previously visited paths.
* Count all forwarding attempts (as before 1c8f25b) and adjust
forward_max_tries documentation to match this historical behavior.
This approach does not have known unique flaws.
Also fixed FwdState::n_tries off-by-one comparison bug discussed during
Squid bug 4788 triage.
Also fixed admin concern behind Squid bug 4788 "forward_max_tries 1 does
not prevent some retries": While the old forward_max_tries documentation
actually excluded pconn retries, technically invalidating the bug
report, the admin now has a knob to limit those retries.
chi-mf [Tue, 30 Oct 2018 04:48:40 +0000 (04:48 +0000)]
Fix netdb exchange with a TLS cache_peer (#307)
Squid uses http-scheme URLs when sending netdb exchange (and possibly
other) requests to a cache_peer. If a DIRECT path is selected for that
cache_peer URL, then Squid sends a clear text HTTP request to that
cache_peer. If that cache_peer expects a TLS connection, it will reject
that request (with, e.g., error:transaction-end-before-headers),
resulting in an HTTP 503 or 504 netdb fetch error.
Work around this by adding an internalRemoteUri() parameter to
indicate whether an https or http URL scheme should be used. Netdb
fetches from
CachePeer::secure peers now get an https scheme and, hence, a TLS
connection.
chi-mf [Thu, 25 Oct 2018 13:33:06 +0000 (13:33 +0000)]
Update netdb when tunneling requests (#314)
Updating netdb on tunneled transactions (e.g., CONNECT requests) is
especially important for origin servers that are only reached via
tunnels. Without updates, requests for such sites may always go
through a cache_peer, even if a direct connection to them is much
faster.
Logging client "handshake" bytes is useful in at least two contexts:
* Runtime traffic bypass and bumping/splicing decisions. Identifying
popular clients like Skype for Business (that uses a TLS handshake but
then may not speak TLS) is critical for handling their traffic
correctly. Squid does not have enough ACLs to interrogate most TLS
handshake aspects. Adding more ACLs may still be a good idea, but
initial sketches for SfB handshakes showed rather complex
ACLs/configurations, _and_ no reasonable ACLs would be able to handle
non-TLS handshakes. An external ACL receiving the handshake is in a
much better position to analyze/fingerprint it according to custom
admin needs.
* A logged handshake can be used to analyze new/unusual traffic or even
trigger security-related alarms.
The current support is limited to cases where Squid was saving handshake
for other reasons. With enough demand, this initial support can be
extended to all protocols and port configurations.
flozilla [Wed, 24 Oct 2018 12:12:01 +0000 (14:12 +0200)]
Fix memory leak when parsing SNMP packet (#313)
SNMP queries denied by snmp_access rules and queries with certain
unsupported SNMPv2 commands were leaking a few hundred bytes each. Such
queries trigger "SNMP agent query DENIED from..." WARNINGs in cache.log.
Certificate fields injection via %D in ERR_SECURE_CONNECT_FAIL (#306)
%ssl_subject, %ssl_ca_name, and %ssl_cn values were not properly escaped
when %D code was expanded in HTML context of the ERR_SECURE_CONNECT_FAIL
template. This bug affects all ERR_SECURE_CONNECT_FAIL page templates
containing %D, including the default template.
Other error pages are not vulnerable because Squid does not populate %D
with certificate details in other contexts (yet).
Thanks to Nikolas Lohmann [eBlocker] for identifying the problem.
TODO: If those certificate details become needed for ACL checks or other
non-HTML purposes, make their HTML-escaping conditional.
Eneas Queiroz [Wed, 10 Oct 2018 16:45:29 +0000 (16:45 +0000)]
Allow compilation with minimal OpenSSL (#281)
Updated use of OpenSSL deprecated API, so that Squid can be compiled
with OpenSSL built with the OPENSSL_NO_DEPRECATED option. Such OpenSSL
builds are useful for saving storage space on embedded systems.
Also added compat/openssl.h -- a centralized OpenSSL portability shim.
Including it is now required before #including openssl/*.h headers.
chi-mf [Wed, 10 Oct 2018 07:50:52 +0000 (07:50 +0000)]
Fixed %USER_CA_CERT_xx and %USER_CERT_xx crashes (#301)
The bug was introduced in 4e56d7f6 when the formatting code was moved
into Format::Format::assemble() where the old "format" loop variable is
a Format data member with the right type but (usually) the wrong value.
Bug 4893: Malformed %>ru URIs for CONNECT requests (#299)
Commit bec110e (a.k.a. v4 commit fbbd5cd5) broke CONNECT URI logging
because it incorrectly assumed that URI::absolute() supports all URIs.
As a result, Squid logged CONNECT URLs as "://host:port".
Also fixed a similar wrong assumption in ACLFilledChecklist::verifyAle()
which may affect URL-related ACL checks for CONNECT requests, albeit
only in already buggy cases where Squid warns about "ALE missing URL".
Bug 4885: Excessive memory usage when running out of descriptors (#291)
TcpAcceptor now stops listening when it cannot accept due to FD limits.
We also no longer defer/queue the same limited TcpAcceptor multiple
times. These changes prevent unbounded memory growth and improve
performance of Squids running out of file descriptors. They should have
no impact on other Squids.
Bug 4875 pt1: GCC-8 compile errors with -O3 optimization (#287)
Use xstrncpy instead of strncat for String appending
Our xstrncpy() is safer, not assuming the existing char*
is nul-terminated and accounting explicitly for the
nul-terminator byte.
GCC-8 -O3 optimizations were exposing a strncat() output
truncation of the terminator when insufficient space was
available in the String buffer.
We suspect the GCC error to be a false positive for -O3
builds and, even if it is accurate, these changes should
not affect builds with lower optimization levels.
This change also fixes icc builds: Commit 39cca4e missed noexcept
specification for nothrow variants of new and delete operators,
and the icc compiler did not like that.
Furthermore, we can simplify the replacements because, according
to cppreference, with C++11, "replacing the throwing single object
allocation functions is sufficient to handle all [allocations and
deallocations]".
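A minimal sketch of that guidance (illustrative counters, not the actual commit 39cca4e code): only the throwing single-object operator new/delete are replaced, and the array and nothrow forms forward to them by default; note the noexcept that icc insisted on.

```cpp
#include <cstdlib>
#include <new>

// Count live allocations via the two replaceable single-object
// functions. Per cppreference, the default array and nothrow
// forms forward to these, so replacing just this pair is enough.
static std::size_t liveAllocs = 0;

void *operator new(std::size_t size)
{
    if (void *p = std::malloc(size ? size : 1)) {
        ++liveAllocs;
        return p;
    }
    throw std::bad_alloc(); // only the throwing form may throw
}

void operator delete(void *p) noexcept // noexcept is mandatory here
{
    if (p) {
        --liveAllocs;
        std::free(p);
    }
}

// C++14 sized delete forwards to the unsized replacement
void operator delete(void *p, std::size_t) noexcept
{
    operator delete(p);
}
```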
Bug 4877: Add missing text about external_acl_type %DATA changes (#276)
Conversion of external_acl_type to using logformat macros was
not quite seamless. The %DATA macro now expands to a dash '-' to
prevent helpers using it explicitly from receiving an incorrect
number of fields (and misaligned input) on their input lines.
Unfortunately, that also results in the implicit use of that
macro expanding to non-whitespace ('-'). That small fact was not
documented in the initial v4 release notes and config texts.
Bug 4716: Blank lines in cachemgr.conf are not skipped (#274)
The default cachemgr.conf contains three lines other than
comments. Two of them are blank, the third is "localhost".
These blank lines show up in the "Cache Server" list in the
CGI output.
Amos Jeffries [Tue, 7 Aug 2018 13:00:02 +0000 (13:00 +0000)]
Update systemd dependencies in squid.service (#264)
The network.target is not sufficient to guarantee network
interfaces and IPs are assigned and available. Particularly when
systemd is not in charge of the IP assignment itself.
Use network-online.target as well, which should ensure network
is properly configured and online before starting Squid.
Packing reply headers into StoreEntry/ShmWriter directly means numerous
tiny append() calls which involve expensive mem_node/slice searches. For
example, every two-byte ": " and CRLF delimiter is packed separately.
When dealing with an HTTP request header that Squid can parse but
that contains a request URI exceeding the 8K length limit, Squid
should log the URL (prefix) instead of a dash. Logging the URL
helps with triaging
these unusual requests. The older %ru (LFT_REQUEST_URI) was already
logging these huge URLs, but %>ru (LFT_CLIENT_REQ_URI) was logging a
dash. Now both log the URL (or its prefix).
As a side effect, %>ru now also logs error:request-too-large,
error:transaction-end-before-headers and other Squid-specific
pseudo-URLs, as appropriate.
Also refactored request- and URI-recording code to reduce chances of
similar inconsistencies reappearing in the future.
Also, honor strip_query_terms in %ru for large URLs. Not stripping query
string in %ru was a security problem.
Also fixed a bug with "redirected" flag calculation in
ClientHttpRequest::handleAdaptedHeader(). In general, http->url and
request->url should not be compared directly, because the latter always
passes through uri_whitespace cleanup, while the former does not.
Also fixed a bug with possibly wrong %ru after redirection:
ClientHttpRequest::log_uri was not updated in this case.
Also initialize AccessLogEntry::request and AccessLogEntry::notes ASAP.
Before this change, these fields were initialized in
ClientHttpRequest::doCallouts(). It is better to initialize them just
after the request object is created so that ACLs, running before
doCallouts(), could have them at hand. Such ACLs include
force_request_body_continuation and spoof_client_ip.
Also synced %ru and %>ru documentation with the current code.