protocols. This limit is large enough and is intentionally left undocumented. The same
limit is applied on the first steps of the decoding for the same reason.
-tune.idle-pool.shared { on | off }
- Enables ('on') or disables ('off') sharing of idle connection pools between
- threads for a same server. The default is to share them between threads in
- order to minimize the number of persistent connections to a server, and to
- optimize the connection reuse rate. But to help with debugging or when
+tune.idle-pool.shared { full | on | off }
+ Controls sharing idle connection pools between threads for a same server.
+ It can be enabled for all threads in the same thread group ('on'), enabled
+ for all threads ('full') or disabled ('off'). The default is to share them
+ between threads in the same thread group ('on'), in order to minimize the
+ number of persistent connections to a server, and to optimize the connection
+ reuse rate. Sharing with threads from other thread groups can have a
+ performance impact, and is not enabled by default, but can be useful if
+ maximizing connection reuse is a priority. To help with debugging or when
suspecting a bug in HAProxy around connection reuse, it can be convenient to
- forcefully disable this idle pool sharing between multiple threads, and force
- this option to "off". The default is on. It is strongly recommended against
- disabling this option without setting a conservative value on "pool-low-conn"
- for all servers relying on connection reuse to achieve a high performance
- level, otherwise connections might be closed very often as the thread count
- increases. Note that in any case, connections are only shared between threads
- of the same thread group. This means that systems with many NUMA nodes may
- show slightly more persistent connections while machines with unified caches
- and many CPU cores per node may experience higher CPU usage. In the latter
- case, the "max-thread-per-group" tunable may be used to improve the behavior.
+ forcefully disable this idle pool sharing between multiple threads,
+ and force this option to "off". It is strongly recommended against disabling
+ this option without setting a conservative value on "pool-low-conn" for all
+ servers relying on connection reuse to achieve a high performance level,
+ otherwise connections might be closed very often as the thread count
+ increases.
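As an illustrative sketch (not part of the patch itself), the three accepted values could appear in the global section as follows, assuming the syntax shown in the keyword header above:

```
global
    # default: share idle connections between threads of the same thread group
    tune.idle-pool.shared on

    # share across all thread groups to maximize reuse (may cost performance)
    # tune.idle-pool.shared full

    # disable sharing entirely; pair with a conservative "pool-low-conn"
    # tune.idle-pool.shared off
```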
tune.idletimer <timeout>
Sets the duration after which HAProxy will consider that an empty buffer is
connections.
 Note that using connections from other thread groups can incur performance
 penalties, so it should not be used unless really needed.
+ Note that this behavior is now controlled by "tune.idle-pool.shared"; this
+ keyword is only kept for compatibility with older configurations and will
+ be deprecated.
tune.vars.global-max-size <size>
tune.vars.proc-max-size <size>
REGISTER_SERVER_DEINIT(srv_close_idle_conns);
-/* config parser for global "tune.idle-pool.shared", accepts "on" or "off" */
+/* config parser for global "tune.idle-pool.shared", accepts "full", "on" or "off" */
static int cfg_parse_idle_pool_shared(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
char **err)
if (too_many_args(1, args, err, NULL))
return -1;
- if (strcmp(args[1], "on") == 0)
+ if (strcmp(args[1], "full") == 0) {
global.tune.options |= GTUNE_IDLE_POOL_SHARED;
- else if (strcmp(args[1], "off") == 0)
+ global.tune.tg_takeover = FULL_THREADGROUP_TAKEOVER;
+ } else if (strcmp(args[1], "on") == 0) {
+ global.tune.options |= GTUNE_IDLE_POOL_SHARED;
+ global.tune.tg_takeover = RESTRICTED_THREADGROUP_TAKEOVER;
+ } else if (strcmp(args[1], "off") == 0) {
global.tune.options &= ~GTUNE_IDLE_POOL_SHARED;
- else {
- memprintf(err, "'%s' expects either 'on' or 'off' but got '%s'.", args[0], args[1]);
+ global.tune.tg_takeover = NO_THREADGROUP_TAKEOVER;
+ } else {
+		memprintf(err, "'%s' expects 'full', 'on' or 'off' but got '%s'.", args[0], args[1]);
return -1;
}
return 0;