From: Willy Tarreau
Date: Sun, 17 Mar 2024 15:54:36 +0000 (+0100)
Subject: CLEANUP: ring: use only curr_cell and not next_cell in the main write loop
X-Git-Tag: v3.0-dev6~4
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=4bc81ec9857a3b427caba65353fd3816c1d0d651;p=thirdparty%2Fhaproxy.git

CLEANUP: ring: use only curr_cell and not next_cell in the main write loop

It turns out that we can drop one variable from the loop, which clobbers
one less register and makes it slightly faster on Cortex A72.
---

diff --git a/src/ring.c b/src/ring.c
index 8d46679cca..1b9cd8d8e6 100644
--- a/src/ring.c
+++ b/src/ring.c
@@ -266,8 +266,6 @@ ssize_t ring_write(struct ring *ring, size_t maxlen, const struct ist pfx[], siz
 	 * comes in and becomes the leader in turn.
 	 */
 
-	next_cell = &cell;
-
 	/* Wait for another thread to take the lead or for the tail to
 	 * be available again. It's critical to be read-only in this
 	 * loop so as not to lose time synchronizing cache lines. Also,
@@ -276,7 +274,7 @@ ssize_t ring_write(struct ring *ring, size_t maxlen, const struct ist pfx[], siz
 	 */
 
 	while (1) {
-		if ((next_cell = HA_ATOMIC_LOAD(ring_queue_ptr)) != &cell)
+		if ((curr_cell = HA_ATOMIC_LOAD(ring_queue_ptr)) != &cell)
 			goto wait_for_flush;
 		__ha_cpu_relax_for_read();
 
@@ -296,7 +294,6 @@ ssize_t ring_write(struct ring *ring, size_t maxlen, const struct ist pfx[], siz
 	 * which we'll confirm by trying to reset the queue. If we're
 	 * still the leader, we're done.
 	 */
-	curr_cell = &cell;
 	if (HA_ATOMIC_CAS(ring_queue_ptr, &curr_cell, NULL))
 		break; // Won!
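
The reason the extra assignment can go away is that the wait loop already
leaves &cell in curr_cell whenever it falls through to the confirmation
step: the atomic load either observes a different queue head and jumps to
wait_for_flush, or it observes &cell, which is exactly the expected value
the CAS needs. The sketch below illustrates that pattern with standard C11
atomics; it is not HAProxy code, and queue_head, my_cell, tail_is_free()
and wait_for_flush() are made-up stand-ins for ring_queue_ptr, &cell and
the surrounding ring logic.

/* Minimal sketch of the "reuse the loaded value as the CAS expected value"
 * pattern, under the assumptions stated above. The same local variable that
 * receives the spin-loop load is later passed as the expected value of the
 * compare-exchange, so no second local (and no extra register) is needed.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct cell { struct cell *next; };

static _Atomic(struct cell *) queue_head;

/* stand-ins for the real "tail available" test and the waiter path */
extern bool tail_is_free(void);
extern void wait_for_flush(struct cell *my_cell);

void become_leader_or_wait(struct cell *my_cell)
{
	struct cell *curr_cell;

	/* read-only wait: either another thread takes the lead, or the
	 * tail becomes available and we try to confirm our leadership.
	 */
	while (1) {
		curr_cell = atomic_load_explicit(&queue_head, memory_order_acquire);
		if (curr_cell != my_cell) {
			/* another thread replaced us at the head of the queue */
			wait_for_flush(my_cell);
			return;
		}

		/* a cpu_relax()/pause hint would typically go here */
		if (!tail_is_free())
			continue;

		/* here curr_cell == my_cell, so it can directly serve as the
		 * expected value of the CAS without being reassigned first.
		 */
		if (atomic_compare_exchange_strong(&queue_head, &curr_cell, NULL))
			break; /* won: we are still the leader */
	}
}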