From: Aurelien DARRAGON
Date: Wed, 7 Aug 2024 16:04:08 +0000 (+0200)
Subject: MINOR: server: ensure max_events_at_once > 0 in server_atomic_sync()
X-Git-Tag: v3.1-dev5~10
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=a6d1eb8f5d81df0d18e0c9f07c6d199dc39f0893;p=thirdparty%2Fhaproxy.git

MINOR: server: ensure max_events_at_once > 0 in server_atomic_sync()

In 8f1fd96 ("BUG/MEDIUM: server/addr: fix tune.events.max-events-at-once
event miss and leak"), we added a comment saying that
tune.events.max-events-at-once is assumed to be strictly positive. It is
so because the keyword parser forces values between 1 and 10000: we don't
want less than 1 because it wouldn't make any sense, and 10k max because
beyond that we could create contention in server_atomic_sync().

Now, as the above commit implements a do..while loop, it heavily relies
on the fact that the budget is at least 1. Upon soft-stop, we break away
from the loop without decrementing the budget. With all that in mind, it
is safe to assume that the 'remain' counter will only fall to 0 if the
task runs out of budget while doing work, in which case the task still
exists and must be rescheduled.

As seen in GH #2667, this assumption was ambiguous, so let's make it
official by adding a pair of BUG_ON() checks that make it explicit that
it works because 'remain' cannot be 0 unless the entire budget was
consumed.

No backport needed.
---

diff --git a/src/server.c b/src/server.c
index 2f27df8538..c6f39012f5 100644
--- a/src/server.c
+++ b/src/server.c
@@ -227,6 +227,8 @@ static struct task *server_atomic_sync(struct task *task, void *context, unsigne
 	unsigned int remain = event_hdl_tune.max_events_at_once; // to limit max number of events per batch
 	struct event_hdl_async_event *event;
 
+	BUG_ON(remain == 0); // event_hdl_tune.max_events_at_once is expected to be > 0
+
 	/* check for new server events that we care about */
 	do {
 		event = event_hdl_async_equeue_pop(&server_atomic_sync_queue);
@@ -317,7 +319,7 @@ static struct task *server_atomic_sync(struct task *task, void *context, unsigne
 			srv_set_addr_desc(srv, 1);
 		}
 		event_hdl_async_free_event(event);
-	} while (--remain); // event_hdl_tune.max_events_at_once is expected to be > 0
+	} while (--remain);
 
 	/* some events possibly required thread_isolation:
 	 * now that we are done, we must leave thread isolation before
@@ -334,6 +336,7 @@ static struct task *server_atomic_sync(struct task *task, void *context, unsigne
 	 * Reschedule the task to finish where we left off if
 	 * there are remaining events in the queue.
 	 */
+	BUG_ON(task == NULL); // ending event doesn't decrement remain
 	if (!event_hdl_async_equeue_isempty(&server_atomic_sync_queue))
 		task_wakeup(task, TASK_WOKEN_OTHER);
 }
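
Below is a minimal, standalone sketch of the batching pattern that the two
assertions protect. It is not HAProxy code: the queue is a plain counter,
pop_event()/sync_run() are made-up names, and assert() stands in for
BUG_ON(); only the shape of the do..while budget logic is kept. It
illustrates why 'remain' can only reach 0 by consuming the whole budget,
so a zero budget must never enter the loop, and running out of budget
means the task is still alive and must be woken up again.

#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-ins: a counter instead of HAProxy's async event
 * queue, and assert() instead of BUG_ON().
 */
static unsigned int queue_len = 25;                 /* pending events (made up) */
static const unsigned int max_events_at_once = 10;  /* parser enforces 1..10000 */

static int pop_event(void)
{
	if (queue_len == 0)
		return 0;        /* queue empty: nothing popped */
	queue_len--;
	return 1;
}

/* One "task run": consume at most max_events_at_once events, then tell
 * the caller whether another run is needed to drain the rest.
 */
static int sync_run(void)
{
	unsigned int remain = max_events_at_once;

	assert(remain > 0); /* mirrors the first BUG_ON(): budget must be >= 1 */

	do {
		if (!pop_event())
			return 0;    /* queue drained within budget: no reschedule */
		/* ... event handling would happen here ... */
	} while (--remain);

	/* budget fully consumed while doing work: the task is still alive
	 * and must be rescheduled if events remain (second BUG_ON()'s logic)
	 */
	return queue_len != 0;
}

int main(void)
{
	int wakeups = 0;

	while (sync_run())          /* stands in for task_wakeup(): run again */
		wakeups++;
	printf("queue drained after %d extra wakeup(s), %u event(s) left\n",
	       wakeups, queue_len);
	return 0;
}

With the made-up numbers above (25 queued events, budget of 10), the sketch
drains the queue in three runs, i.e. two extra wakeups after the first one.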