From: Christopher Faulet
Date: Fri, 10 Nov 2017 09:39:16 +0000 (+0100)
Subject: BUG/MINOR: buffers: Fix b_alloc_margin to be "functionally" thread-safe
X-Git-Tag: v1.8-rc4~33
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=fa5c812a6bab78a0c0c53ccfeb9393dd3dcaec80;p=thirdparty%2Fhaproxy.git

BUG/MINOR: buffers: Fix b_alloc_margin to be "functionally" thread-safe

b_alloc_margin is, strictly speaking, thread-safe: it will not crash
HAProxy. But its contract is no longer respected in a multithreaded
environment. This function must guarantee that buffers are still
available in the pool after the allocation. To provide that guarantee,
we must hold the memory pool lock for the whole operation, which also
means we must call the internal, lockless memory functions (prefixed
with '__').

For the record, this patch fixes a pernicious bug that happens after a
soft reload, where some streams can be blocked indefinitely, waiting for
a buffer in the buffer_wq list. This happens because pool_gc2 is called
during a soft reload, making some calls to b_alloc_fast fail.

This is specific to threads, no backport is needed.
---

diff --git a/include/common/buffer.h b/include/common/buffer.h
index 4216d937c7..fe6913c0fe 100644
--- a/include/common/buffer.h
+++ b/include/common/buffer.h
@@ -722,26 +722,44 @@ static inline void b_free(struct buffer **buf)
  * a response buffer could not be allocated anymore, resulting in a deadlock.
  * This means that we sometimes need to try to allocate extra entries even if
  * only one buffer is needed.
+ *
+ * We need to lock the pool here to be sure that buffers are still available
+ * after the allocation, regardless of how many threads are doing it at the
+ * same time. So, we use internal, lockless memory functions ('__' prefix).
  */
 static inline struct buffer *b_alloc_margin(struct buffer **buf, int margin)
 {
-	struct buffer *next;
+	struct buffer *b;
 
 	if ((*buf)->size)
 		return *buf;
 
+	*buf = &buf_wanted;
+	HA_SPIN_LOCK(POOL_LOCK, &pool2_buffer->lock);
+
 	/* fast path */
-	if ((pool2_buffer->allocated - pool2_buffer->used) > margin)
-		return b_alloc_fast(buf);
+	if ((pool2_buffer->allocated - pool2_buffer->used) > margin) {
+		b = __pool_get_first(pool2_buffer);
+		if (likely(b)) {
+			HA_SPIN_UNLOCK(POOL_LOCK, &pool2_buffer->lock);
+			b->size = pool2_buffer->size - sizeof(struct buffer);
+			b_reset(b);
+			*buf = b;
+			return b;
+		}
+	}
 
-	next = pool_refill_alloc(pool2_buffer, margin);
-	if (!next)
-		return next;
+	/* slow path, uses malloc() */
+	b = __pool_refill_alloc(pool2_buffer, margin);
 
-	next->size = pool2_buffer->size - sizeof(struct buffer);
-	b_reset(next);
-	*buf = next;
-	return next;
+	HA_SPIN_UNLOCK(POOL_LOCK, &pool2_buffer->lock);
+
+	if (b) {
+		b->size = pool2_buffer->size - sizeof(struct buffer);
+		b_reset(b);
+		*buf = b;
+	}
+	return b;
 }
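
To see why the lock must cover both the margin check and the allocation, here
is a minimal standalone sketch of the same check-then-act pattern. It is NOT
HAProxy code: struct pool, pool_get_locked, pool_alloc_margin and
pool_free_entry are simplified stand-ins for pool_head, __pool_get_first,
b_alloc_margin and pool_free2, and a pthread mutex replaces the POOL_LOCK
spinlock.

#include <pthread.h>
#include <stdlib.h>

/* Simplified stand-in for a memory pool. Idle entries form a singly-linked
 * free list, with the "next" pointer stored in the entry's first bytes. */
struct pool {
	pthread_mutex_t lock;
	int allocated;   /* entries ever obtained from malloc() */
	int used;        /* entries currently handed out */
	void *free_list; /* idle entries; allocated - used == list length */
	size_t size;     /* size of one entry, >= sizeof(void *) */
};

/* Pop one idle entry. The caller must already hold pool->lock; this mirrors
 * the role of the lockless __pool_get_first(). */
static void *pool_get_locked(struct pool *p)
{
	void *e = p->free_list;

	if (e) {
		p->free_list = *(void **)e;
		p->used++;
	}
	return e;
}

/* Allocate only if more than <margin> spare entries remain. Holding the lock
 * across the test AND the pop is the whole point: with two separately locked
 * steps, another thread could consume the spare entries in between, silently
 * breaking the margin guarantee even though nothing ever crashes. */
static void *pool_alloc_margin(struct pool *p, int margin)
{
	void *e = NULL;

	pthread_mutex_lock(&p->lock);
	if (p->allocated - p->used > margin)
		e = pool_get_locked(p);
	if (!e && (e = malloc(p->size)) != NULL) {
		/* slow path: grow the pool by one entry; the real
		 * __pool_refill_alloc() allocates enough spare entries to
		 * restore the margin. */
		p->allocated++;
		p->used++;
	}
	pthread_mutex_unlock(&p->lock);
	return e;
}

/* Return an entry to the free list. */
static void pool_free_entry(struct pool *p, void *e)
{
	pthread_mutex_lock(&p->lock);
	*(void **)e = p->free_list;
	p->free_list = e;
	p->used--;
	pthread_mutex_unlock(&p->lock);
}

Note one refinement in the actual patch that the sketch omits for brevity: on
the fast path, b_alloc_margin releases the spinlock before initializing the
buffer, keeping the critical section as short as possible.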