Add response delay pools feature for Squid-to-client speed limiting.
The feature restricts Squid-to-client bandwidth only. It applies to
both cache hits and misses.
* Rationale *
This may be useful for limiting the bandwidth of specific responses.
There are situations where doing this with netfilter/iptables, which
filter based on TCP/IP packets and IP address information, is
difficult or impossible. In other words, it is sometimes problematic
to 'extract' a single response from the TCP/IP data flow at the
system level. For example, a single Squid-to-client TCP connection
can carry multiple responses (persistent connections, pipelining, or
HTTP/2 connection multiplexing) or be encrypted (HTTPS proxy mode).
* Description *
When Squid starts delivering the final HTTP response to a client,
Squid checks response_delay_pool_access rules (supporting fast ACLs
only), in the order they were declared. The first rule with a
matching ACL wins. If (and only if) an "allow" rule won, Squid
assigns the response to the corresponding named delay pool.
If a response is assigned to a delay pool, the response becomes
subject to the configured bucket and aggregate bandwidth limits of
that pool, similar to the current "class 2" server-side delay pools,
but with a dedicated, newly filled "individual" bucket assigned to
the matched response.
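For illustration, a hypothetical squid.conf fragment using the new
directives (the directive names come from this description; the
option names and values are assumptions, not authoritative):

```
# Assumed option names; consult the directive documentation for the
# final syntax.
acl big_downloads rep_mime_type -i ^application/octet-stream$

response_delay_pool slow_pool \
    individual-restore=50000 individual-maximum=100000 \
    aggregate-restore=500000 aggregate-maximum=1000000

# First matching rule wins; only "allow" assigns the response.
response_delay_pool_access slow_pool allow big_downloads
```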
The new feature serves the same purpose as the existing client-side
pools: both features limit Squid-to-client bandwidth. Their common
interface was placed into a new base BandwidthBucket class. The
difference is that client-side pools do not aggregate clients and
always use one bucket per client IP. It is possible for a response to
be subject to both kinds of pools. In such situations, only the
matched response delay pool is used for Squid-to-client speed
limiting.
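The shared bucket interface can be pictured as a plain token bucket.
Below is a minimal standalone sketch of that idea; it is not the
actual Squid BandwidthBucket class, and all member names and units
are illustrative:

```cpp
#include <algorithm>

// Illustrative token bucket: bytes are restored at a fixed rate and
// drained as data is written to the client.
class BandwidthBucket {
public:
    // fillRate: bytes/s restored; capacity: maximum bucket size in
    // bytes; initialLevel: bytes available at creation time.
    BandwidthBucket(double fillRate, double capacity, double initialLevel)
        : fillRate_(fillRate), capacity_(capacity), level_(initialLevel) {}

    // Restore fillRate_ * seconds bytes, capped at capacity_.
    void refill(double seconds) {
        level_ = std::min(capacity_, level_ + fillRate_ * seconds);
    }

    // Drain up to `wanted` bytes; returns how many bytes the caller
    // may send now.
    double drain(double wanted) {
        const double granted = std::min(wanted, level_);
        level_ -= granted;
        return granted;
    }

    double level() const { return level_; }

private:
    double fillRate_;
    double capacity_;
    double level_;
};
```

A response assigned to a pool would drain both its individual bucket
and the pool's aggregate bucket, sending only what both allow.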
* Limitations *
The accurate SMP support (with the aggregate bucket shared among
workers) is outside this patch scope. In SMP configurations,
Squid should automatically divide the aggregate_speed_limit and
max_aggregate_size values among the configured number of Squid
workers.
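A worked example of that division, using illustrative numbers:

```
workers 4
# If aggregate_speed_limit is 1000000 bytes/s, each of the 4 workers
# gets 1000000 / 4 = 250000 bytes/s, so the combined
# Squid-to-client bandwidth stays near the configured limit even
# though the workers do not share a bucket.
```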
* Also: *
Fixed ClientDelayConfig, which did not perform cleanup on
destruction, causing memory problems detected by Valgrind. It was not
possible to fix this with minimal changes because of linker problems
with SquidConfig when checking with test-builds.sh. So I had to
refactor the ClientDelayConfig module, separating configuration code
(the old ClientDelayConfig class) from configured data (a new
ClientDelayPools class) and minimizing dependencies on SquidConfig.
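The resulting split might look roughly like this; a heavily
simplified hypothetical sketch, not the real classes (which carry
more state and parsing logic):

```cpp
#include <cstdint>
#include <vector>

// Configured data: plain storage with no parsing logic and no
// SquidConfig dependency. Illustrative fields only.
struct ClientDelayPool {
    int rate = 0;               // bytes/s restored to the bucket
    int64_t highwatermark = 0;  // maximum bucket size, in bytes
};

class ClientDelayPools {
public:
    static ClientDelayPools &Instance() {
        static ClientDelayPools instance;
        return instance;
    }
    std::vector<ClientDelayPool> pools;
private:
    ClientDelayPools() = default;
};

// Configuration code: fills and frees the data object, so cleanup
// no longer depends on SquidConfig destruction order.
class ClientDelayConfig {
public:
    void addPool(int rate, int64_t highwatermark) {
        ClientDelayPools::Instance().pools.push_back({rate, highwatermark});
    }
    void freePools() { ClientDelayPools::Instance().pools.clear(); }
};
```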