From 1b6a58e205ed0bbeeeca46388f0649f322b04f06 Mon Sep 17 00:00:00 2001
From: Huan Yang
Date: Fri, 25 Apr 2025 11:19:25 +0800
Subject: [PATCH] mm/memcg: use kmem_cache when alloc memcg pernode info

When tracing mem_cgroup_per_node allocations with kmalloc ftrace:

  kmalloc: call_site=mem_cgroup_css_alloc+0x1d8/0x5b4 ptr=00000000d798700c
  bytes_req=2896 bytes_alloc=4096 gfp_flags=GFP_KERNEL|__GFP_ZERO node=0
  accounted=false

This shows the slab allocator handing out 4096B chunks for the 2896B
mem_cgroup_per_node allocation, because:

1. The slab allocator predefines kmalloc bucket sizes from 64B up to 8192B.
2. The mem_cgroup_per_node allocation size (2896B, per the trace above)
   falls between the 2KB and 4KB slabs.
3. The allocator rounds up to the nearest larger slab (4KB), wasting
   ~1.2KB of memory per memcg allocation, per node (see the sketch after
   this list).
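
For illustration, a minimal userspace sketch of this rounding, assuming
power-of-two buckets starting at 64B and ignoring the kernel's
non-power-of-two 96B/192B kmalloc caches:

  /* Hypothetical userspace model of kmalloc-style bucket rounding. */
  #include <stdio.h>

  static size_t bucket_size(size_t size)
  {
          size_t bucket = 64;     /* smallest assumed kmalloc bucket */

          while (bucket < size)
                  bucket <<= 1;   /* round up to the next power of two */
          return bucket;
  }

  int main(void)
  {
          size_t req = 2896;      /* bytes_req from the ftrace output above */

          printf("req=%zu alloc=%zu wasted=%zu\n",
                 req, bucket_size(req), bucket_size(req) - req);
          return 0;
  }

Compiled with any C compiler, this prints req=2896 alloc=4096 wasted=1200,
matching the ftrace numbers above.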
This patch introduces a dedicated kmem_cache for mem_cgroup_per_node
structs, achieving precise memory allocation. Post-patch ftrace
verification shows:

  kmem_cache_alloc: call_site=mem_cgroup_css_alloc+0x1b8/0x5d4
  ptr=000000002989e63a bytes_req=2896 bytes_alloc=2944
  gfp_flags=GFP_KERNEL|__GFP_ZERO node=0 accounted=false

Each mem_cgroup_per_node allocation now takes 2944 bytes (including the
hardware cacheline alignment), compared with 4096 bytes before, avoiding
the ~1.2KB of per-node waste.
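
This is the usual dedicated-cache pattern. A simplified sketch of it, with
hypothetical example_init()/example_alloc() wrappers standing in for the
real call sites, mem_cgroup_init() and alloc_mem_cgroup_per_node_info() in
mm/memcontrol.c:

  /* Simplified kernel-side sketch; struct mem_cgroup_per_node is
   * defined in the memcg internals.
   */
  #include <linux/slab.h>

  static struct kmem_cache *memcg_pn_cachep;

  static int __init example_init(void)
  {
          /* KMEM_CACHE() derives the cache name and object size from
           * the struct itself; SLAB_HWCACHE_ALIGN pads objects to the
           * hardware cacheline, which is where the 2944B object size
           * in the trace comes from.
           */
          memcg_pn_cachep = KMEM_CACHE(mem_cgroup_per_node,
                                       SLAB_PANIC | SLAB_HWCACHE_ALIGN);
          return 0;
  }

  static struct mem_cgroup_per_node *example_alloc(int node)
  {
          /* Zeroed, node-local allocation, replacing kzalloc_node(). */
          return kmem_cache_alloc_node(memcg_pn_cachep,
                                       GFP_KERNEL | __GFP_ZERO, node);
  }

SLAB_PANIC matches the existing "mem_cgroup" cache: both are created once
at boot and must not fail.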
Link: https://lkml.kernel.org/r/20250425031935.76411-4-link@vivo.com
Signed-off-by: Huan Yang
Acked-by: Shakeel Butt
Acked-by: Johannes Weiner
Cc: Francesco Valla
Cc: guoweikang
Cc: Huang Shijie
Cc: KP Singh
Cc: Michal Hocko
Cc: Muchun Song
Cc: "Paul E . McKenney"
Cc: Petr Mladek
Cc: Rasmus Villemoes
Cc: Raul E Rangel
Cc: Roman Gushchin
Cc: "Uladzislau Rezki (Sony)"
Cc: Vlastimil Babka
Cc: Matthew Wilcox
Signed-off-by: Andrew Morton
---
 mm/memcontrol.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6d713fe10221f..8ed265852423f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -97,6 +97,7 @@ static bool cgroup_memory_nokmem __ro_after_init;
 static bool cgroup_memory_nobpf __ro_after_init;
 
 static struct kmem_cache *memcg_cachep;
+static struct kmem_cache *memcg_pn_cachep;
 
 #ifdef CONFIG_CGROUP_WRITEBACK
 static DECLARE_WAIT_QUEUE_HEAD(memcg_cgwb_frn_waitq);
@@ -3614,7 +3615,8 @@ static bool alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 {
 	struct mem_cgroup_per_node *pn;
 
-	pn = kzalloc_node(sizeof(*pn), GFP_KERNEL, node);
+	pn = kmem_cache_alloc_node(memcg_pn_cachep, GFP_KERNEL | __GFP_ZERO,
+				   node);
 	if (!pn)
 		return false;
 
@@ -5075,6 +5077,9 @@ int __init mem_cgroup_init(void)
 	memcg_cachep = kmem_cache_create("mem_cgroup", memcg_size, 0,
 					 SLAB_PANIC | SLAB_HWCACHE_ALIGN, NULL);
 
+	memcg_pn_cachep = KMEM_CACHE(mem_cgroup_per_node,
+				     SLAB_PANIC | SLAB_HWCACHE_ALIGN);
+
 	return 0;
 }
 
--
2.39.5