Most of the time, all sink applets (which are responsible for relaying
messages from the ring to the tcp server endpoints) would end up being
assigned to the first available thread (tid:0). This resulted in excessive
CPU usage on a single thread when multiple sink servers were defined (no
matter if they were spread over multiple "ring" sections) and a significant
message load was pushed through them over the ring API.
This patch is similar to
34e4085f ("MEDIUM: peers: Balance applets across
threads") but for sinks. We use a slightly different approach here: instead
of picking the thread with the fewest applets, we elect a random thread.
This already proves sufficient to alleviate the issue.
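For illustration only, here is a minimal sketch of the two strategies side
by side. Only statistical_prng_range() and global.nbthread come from the
HAProxy tree (include paths assumed); the per-thread counters and the
least-loaded scan are a simplified, hypothetical rendition of what the
peers patch does, not the actual peers code.

    #include <haproxy/global.h>  /* global.nbthread (assumed path) */
    #include <haproxy/tools.h>   /* statistical_prng_range() (assumed path) */

    /* Hypothetical rendition of the peers approach from 34e4085f: scan
     * per-thread applet counters and pick the least loaded thread.
     */
    static int pick_least_loaded(const unsigned int *applets_per_thread)
    {
        int thr, best = 0;

        for (thr = 1; thr < global.nbthread; thr++)
            if (applets_per_thread[thr] < applets_per_thread[best])
                best = thr;
        return best;
    }

    /* Approach taken here for sinks: simply elect a random thread,
     * with no bookkeeping at all.
     */
    static int pick_random_thread(void)
    {
        return statistical_prng_range(global.nbthread);
    }

The random election trades perfect balance for simplicity: it needs no
shared counters, and with several sink servers it spreads the applets
well enough across threads to stop a single thread from saturating.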
In case we want a better load distribution later, we should consider
breaking existing connections and re-establishing them on a new thread
when we find out that they start monopolizing a CPU thread (ie: after a
certain number of messages, for instance). The way tcpchecks migrate
between threads could also serve as inspiration.
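Purely as an illustration of that possible follow-up (not part of this
patch), here is a sketch of what such per-target accounting could look
like; the struct, fields, helper and threshold below are all hypothetical:

    #define SFT_REBALANCE_MSGS 100000ULL  /* arbitrary example threshold */

    struct appctx;  /* opaque here */

    /* hypothetical per-target state, mirroring the sft pointer seen in
     * the hunk below; no such message counter exists in this patch.
     */
    struct sft_example {
        unsigned long long forwarded;  /* messages relayed so far */
        struct appctx *appctx;         /* current forwarding applet */
    };

    /* hypothetical helper: breaks the current connection/applet so that
     * the regular (asynchronous) startup path re-creates it and elects a
     * new random thread, as in the hunk below.
     */
    void sft_example_release(struct sft_example *sft);

    /* to be called each time a message is successfully forwarded */
    static inline void sft_example_account(struct sft_example *sft)
    {
        if (++sft->forwarded % SFT_REBALANCE_MSGS == 0)
            sft_example_release(sft);
    }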
This patch depends on the previous one ("MEDIUM: sink: start applets
asynchronously").
     if (sft->srv->log_proto == SRV_LOG_PROTO_OCTET_COUNTING)
         applet = &sink_forward_oc_applet;
-    appctx = appctx_new_here(applet, NULL);
+    appctx = appctx_new_on(applet, NULL, statistical_prng_range(global.nbthread));
     if (!appctx)
         goto out_close;
     appctx->svcctx = (void *)sft;