From: Christoph Hellwig
Date: Mon, 23 Mar 2026 07:50:54 +0000 (+0100)
Subject: xfs: don't decrement the buffer LRU count for in-use buffers
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=8166876aadef90744bb26addc9c5a16b1c8341b5;p=thirdparty%2Fkernel%2Flinux.git

xfs: don't decrement the buffer LRU count for in-use buffers

XFS buffers are added to the LRU when they are unused, but are only
removed from the LRU lazily when the LRU list scan finds a used buffer.
So far this only happens when the LRU counter hits 0, which is
suboptimal: buffers that were added to the LRU but are in use again
still consume LRU scanning resources and are aged while actually in
use.

Fix this by checking for in-use buffers and removing them from the LRU
before decrementing the LRU counter.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Signed-off-by: Carlos Maiolino
---

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index e4b65d0c9ef0b..ee8c3944015a1 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1524,23 +1524,25 @@ xfs_buftarg_isolate(
 		return LRU_SKIP;
 
 	/*
-	 * Decrement the b_lru_ref count unless the value is already
-	 * zero. If the value is already zero, we need to reclaim the
-	 * buffer, otherwise it gets another trip through the LRU.
+	 * If the buffer is in use, remove it from the LRU for now. We can't
+	 * free it while someone is using it, and we should also not count an
+	 * eviction pass for it, just as if it hadn't been added to the LRU
+	 * yet.
 	 */
-	if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
+	if (bp->b_lockref.count > 0) {
+		list_lru_isolate(lru, &bp->b_lru);
 		spin_unlock(&bp->b_lockref.lock);
-		return LRU_ROTATE;
+		return LRU_REMOVED;
 	}
 
 	/*
-	 * If the buffer is in use, remove it from the LRU for now as we can't
-	 * free it. It will be freed when the last reference drops.
+	 * Decrement the b_lru_ref count unless the value is already
+	 * zero. If the value is already zero, we need to reclaim the
+	 * buffer, otherwise it gets another trip through the LRU.
 	 */
-	if (bp->b_lockref.count > 0) {
-		list_lru_isolate(lru, &bp->b_lru);
+	if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
 		spin_unlock(&bp->b_lockref.lock);
-		return LRU_REMOVED;
+		return LRU_ROTATE;
 	}
 
 	lockref_mark_dead(&bp->b_lockref);