Non-SMP Squid and each SMP kid allocate a store_table hash. With large
caches, an allocated store_table may have millions of buckets.
Recently we discovered that it is almost impossible to debug SMP Squid
with a large but mostly empty disk cache because the disker registration
times out while store_table is being built -- the disker process is
essentially blocked on a very long debugging loop.
The code suspends the loop every 500 entries (to take care of tasks like
kid registration), but empty buckets contain no entries, so scanning
millions of them triggers no pauses even though every scanned bucket
prints two debug lines.
Squid no longer reports empty store_table buckets explicitly. When
dealing with large caches, the debugged process may still be blocked for
a few hundred milliseconds (instead of many seconds) while scanning the
entire (mostly empty) store_table. Optimizing that should be done as
part of the complex "store search" API refactoring.
{
/* probably need to lock the store entries...
* we copy them all to prevent races on the links. */
- debugs(47, 3, "Store::LocalSearch::copyBucket #" << bucket);
assert (!entries.size());
hash_link *link_ptr = NULL;
hash_link *link_next = NULL;
entries.push_back(e);
}
+ // minimize debugging: we may be called more than a million times on startup
+ if (const auto count = entries.size())
+ debugs(47, 8, "bucket #" << bucket << " entries: " << count);
+
++bucket;
- debugs(47,3, "got entries: " << entries.size());
}