From: Willy Tarreau
Date: Fri, 14 Jun 2019 06:30:10 +0000 (+0200)
Subject: BUG/MINOR: task: prevent schedulable tasks from starving under high I/O activity
X-Git-Tag: v2.0.0~46
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=3cec0f94f3464368f9e5762e6d194f1b2c66e70f;p=thirdparty%2Fhaproxy.git

BUG/MINOR: task: prevent schedulable tasks from starving under high I/O activity

With both I/O and tasks in the same tasklet list, we now have a very
smooth and responsive scheduler, providing good fairness between I/O
activities. With the lower layers relying heavily on tasklets (I/O
wakeup, subscribe, etc), there may often be a large number of totally
autonomous tasklets doing their own business, such as forwarding data
between two muxes.

But the task scheduler historically refrained from picking tasks from
the priority-ordered run queue to put them into the tasklet list until
the latter had fewer than max_runqueue_depth entries. This was to make
sure that low-latency, high-priority tasks would have an opportunity to
be dequeued before others, even if they arrived late. But the counter
used for this is still the tasklet list size, which contains countless
I/O events. This causes unfairness between unbounded I/Os and bounded
tasks, resulting for example in the CLI responding more slowly when
forwarding 40 Gbps of HTTP traffic spread over a thousand connections.

A good solution consists in sticking to the initial intent of
max_runqueue_depth, which is to limit the number of tasks in the list
(to maintain fairness between them), not to limit the total number of
entries, tasklets included. It just turns out that task_list_size
initially was this task counter and changed over time into a tasklet
list size. Let's simply refrain from updating it for pure tasklets so
that it takes back its original role of counting real tasks, as its
name implies. With this change the CLI becomes instantly responsive
under load again.

This patch may be backported to 1.9, though it requires some careful
checks.
---

diff --git a/include/proto/task.h b/include/proto/task.h
index d39d0f4d4f..5f4940cbe1 100644
--- a/include/proto/task.h
+++ b/include/proto/task.h
@@ -231,11 +231,11 @@ static inline void tasklet_wakeup(struct tasklet *tl)
 	if (!LIST_ISEMPTY(&tl->list))
 		return;
 	LIST_ADDQ(&task_per_thread[tid].task_list, &tl->list);
-	task_per_thread[tid].task_list_size++;
 	_HA_ATOMIC_ADD(&tasks_run_queue, 1);
 }
 
+/* may only be used for real tasks */
 static inline void task_insert_into_tasklet_list(struct task *t)
 {
 	struct tasklet *tl;
@@ -252,7 +252,8 @@ static inline void task_insert_into_tasklet_list(struct task *t)
 static inline void __task_remove_from_tasklet_list(struct task *t)
 {
 	LIST_DEL_INIT(&((struct tasklet *)t)->list);
-	task_per_thread[tid].task_list_size--;
+	if (!TASK_IS_TASKLET(t))
+		task_per_thread[tid].task_list_size--;
 	_HA_ATOMIC_SUB(&tasks_run_queue, 1);
 }
 
@@ -361,7 +362,6 @@ static inline void tasklet_free(struct tasklet *tl)
 {
 	if (!LIST_ISEMPTY(&tl->list)) {
 		LIST_DEL(&tl->list);
-		task_per_thread[tid].task_list_size--;
 		_HA_ATOMIC_SUB(&tasks_run_queue, 1);
 	}
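
For readers who do not have the scheduler in mind, below is a minimal
standalone sketch of the mechanism the commit message describes. It is
not HAProxy's real code (the real logic lives around
process_runnable_tasks() and tasklet_wakeup()); all fake_* names and
the before_fix flag are invented for illustration, and 200 is only the
assumed tune.runqueue-depth default here.

	/* Minimal sketch, NOT HAProxy's real code: fake_* identifiers and
	 * the before_fix flag are invented to illustrate the fix above. */

	struct fake_entry { struct fake_entry *next; };

	static struct fake_entry *fake_run_queue;     /* priority-ordered real tasks */
	static struct fake_entry *fake_shared_list;   /* tasks + I/O tasklets        */
	static unsigned int task_list_size;           /* the counter the fix narrows */
	static unsigned int max_runqueue_depth = 200; /* assumed default depth       */

	static void fake_push(struct fake_entry **list, struct fake_entry *e)
	{
		e->next = *list;
		*list = e;
	}

	/* A tasklet wakes up and joins the shared list. Before the fix it
	 * also bumped task_list_size; with thousands of I/O tasklets in
	 * flight, the counter stayed above max_runqueue_depth, the loop
	 * below never ran, and real tasks starved (hence the sluggish CLI
	 * under 40 Gbps of forwarded traffic). */
	static void fake_tasklet_wakeup(struct fake_entry *tl, int before_fix)
	{
		fake_push(&fake_shared_list, tl);
		if (before_fix)
			task_list_size++;  /* the bug: tasklets counted as tasks */
	}

	/* The scheduler's task-picking step: real tasks may enter the
	 * shared list only while fewer than max_runqueue_depth *tasks*
	 * are already queued there, which preserves fairness among tasks
	 * no matter how many tasklets are in flight. */
	static void fake_pick_tasks(void)
	{
		while (fake_run_queue && task_list_size < max_runqueue_depth) {
			struct fake_entry *t = fake_run_queue;
			fake_run_queue = t->next;
			fake_push(&fake_shared_list, t);
			task_list_size++;  /* after the fix: real tasks only */
		}
	}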