From: Chengming Zhou
Date: Thu, 27 Jun 2024 07:59:58 +0000 (+0800)
Subject: mm/zsmalloc: clarify class per-fullness zspage counts
X-Git-Tag: v6.11-rc1~85^2~58
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=538148f9ba9e3136a877881e72ccbe56733daae2;p=thirdparty%2Fkernel%2Flinux.git

mm/zsmalloc: clarify class per-fullness zspage counts

We always use insert_zspage() and remove_zspage() to update a zspage's
fullness location, which keeps the accounting correct. But the special
async free path uses list splicing instead of remove_zspage(), so the
per-fullness zspage count for ZS_INUSE_RATIO_0 is never decreased.
Clean things up by decreasing the count while iterating over the
zspage free list.

This doesn't actually fix anything. ZS_INUSE_RATIO_0 is just a
"placeholder" which is never used anywhere.

Link: https://lkml.kernel.org/r/20240627075959.611783-1-chengming.zhou@linux.dev
Signed-off-by: Chengming Zhou
Cc: Minchan Kim
Cc: Sergey Senozhatsky
Signed-off-by: Andrew Morton
---

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index fec1a39e5bbeb..7fc25fa4e6b34 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
 
 		class = zspage_class(pool, zspage);
 		spin_lock(&class->lock);
+		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
 		spin_unlock(&class->lock);
 	}
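
For illustration, below is a minimal userspace toy model of the
accounting pattern (hypothetical code, not the kernel's zsmalloc; the
struct and helper names merely mirror the identifiers in the diff).
The insert helper keeps the per-fullness counter in sync, a bulk
splice does not, so the free loop has to decrement once per zspage,
analogous to the class_stat_dec() call the patch adds:

#include <assert.h>
#include <stdio.h>

#define ZS_INUSE_RATIO_0 0
#define NR_FULLNESS      1

struct zspage { struct zspage *next; };

struct size_class {
	struct zspage *fullness_list[NR_FULLNESS];
	int class_stat[NR_FULLNESS];
};

/* Insert keeps the per-fullness counter in sync, like the kernel's
 * insert_zspage() bumping the class stat. */
static void insert_zspage(struct size_class *c, struct zspage *z, int f)
{
	z->next = c->fullness_list[f];
	c->fullness_list[f] = z;
	c->class_stat[f]++;
}

/* Bulk move, like list_splice_tail_init(): the counter is NOT touched. */
static struct zspage *splice_init(struct size_class *c, int f)
{
	struct zspage *head = c->fullness_list[f];

	c->fullness_list[f] = NULL;
	return head;
}

int main(void)
{
	struct size_class c = { { NULL }, { 0 } };
	struct zspage pages[3];
	struct zspage *z, *next;
	int i;

	for (i = 0; i < 3; i++)
		insert_zspage(&c, &pages[i], ZS_INUSE_RATIO_0);
	assert(c.class_stat[ZS_INUSE_RATIO_0] == 3);

	/* Async free path: splice everything off, then free one by one. */
	for (z = splice_init(&c, ZS_INUSE_RATIO_0); z; z = next) {
		next = z->next;
		/* The patch's fix: decrement while iterating the list. */
		c.class_stat[ZS_INUSE_RATIO_0]--;
		/* ... the real code calls __free_zspage() here ... */
	}

	/* Without the per-iteration decrement this would still be 3. */
	assert(c.class_stat[ZS_INUSE_RATIO_0] == 0);
	printf("ZS_INUSE_RATIO_0 count after free: %d\n",
	       c.class_stat[ZS_INUSE_RATIO_0]);
	return 0;
}

Built with e.g. "cc toy.c -o toy && ./toy", the final assertion holds
only because of the per-iteration decrement; dropping it would leave
the counter stuck at 3, which is exactly the stale ZS_INUSE_RATIO_0
count described in the commit message.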