mm/zsmalloc: clarify class per-fullness zspage counts
author: Chengming Zhou <chengming.zhou@linux.dev>
Thu, 27 Jun 2024 07:59:58 +0000 (15:59 +0800)
committer: Andrew Morton <akpm@linux-foundation.org>
Fri, 12 Jul 2024 22:52:12 +0000 (15:52 -0700)
We always use insert_zspage() and remove_zspage() to update a zspage's
fullness location, which keeps the per-fullness accounting correct.

But this special async free path uses "splice" instead of remove_zspage(),
so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never decremented.

Clean things up by decrementing the count while iterating over the zspage
free list.

This doesn't actually fix anything.  ZS_INUSE_RATIO_0 is just a
"placeholder" which is never used anywhere.

Link: https://lkml.kernel.org/r/20240627075959.611783-1-chengming.zhou@linux.dev
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/zsmalloc.c

index fec1a39e5bbebd6365ee72790186f6108b575199..7fc25fa4e6b348bbf7b3b900dcea1dee1fbae19f 100644 (file)
@@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
 
                class = zspage_class(pool, zspage);
                spin_lock(&class->lock);
+               class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
                __free_zspage(pool, class, zspage);
                spin_unlock(&class->lock);
        }