From: Qi Zheng
Date: Fri, 27 Mar 2026 10:16:28 +0000 (+0800)
Subject: mm: memcontrol: correct the type of stats_updates to unsigned long
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=616795d7db00377b76f6918fafd32f61af7f78f0;p=thirdparty%2Fkernel%2Flinux.git

mm: memcontrol: correct the type of stats_updates to unsigned long

Patch series "fix unexpected type conversions and potential overflows", v3.

As Harry Yoo pointed out [1], in scenarios where massive state updates
occur (e.g., during the reparenting of LRU folios), the values passed to
memcg stat update functions can accumulate and exceed the upper limit of
a 32-bit integer.  If the parameter types are not large enough (such as
'int') or are handled incorrectly, this can lead to severe truncation,
potential overflow, and unexpected type conversion bugs.

This series addresses these issues by correcting the parameter types in
the relevant functions and by fixing an implicit conversion bug in
memcg_state_val_in_pages().

This patch (of 3):

memcg_rstat_updated() tracks updates for vmstats_percpu->state and
lruvec_stats_percpu->state.  Since these state values are of type long,
change the val parameter passed to memcg_rstat_updated() to long as well.

Correspondingly, change the type of stats_updates in struct
memcg_vmstats_percpu and struct memcg_vmstats from unsigned int and
atomic_t to unsigned long and atomic_long_t respectively, to prevent
potential overflow when handling large state updates during the
reparenting of LRU folios.
Link: https://lore.kernel.org/cover.1774604356.git.zhengqi.arch@bytedance.com
Link: https://lore.kernel.org/a5b0b468e7b4fe5f26c50e36d5d016f16d92f98f.1774604356.git.zhengqi.arch@bytedance.com
Link: https://lore.kernel.org/all/acDxaEgnqPI-Z4be@hyeyoo/ [1]
Signed-off-by: Qi Zheng
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Harry Yoo (Oracle)
Cc: Allen Pais
Cc: Axel Rasmussen
Cc: Baoquan He
Cc: David Hildenbrand
Cc: Hamza Mahfooz
Cc: Hugh Dickins
Cc: Imran Khan
Cc: Johannes Weiner
Cc: Kamalesh Babulal
Cc: Lance Yang
Cc: Michal Hocko
Cc: Michal Koutný
Cc: Muchun Song
Cc: Roman Gushchin
Cc: Shakeel Butt
Cc: Usama Arif
Cc: Wei Xu
Cc: Yuanchu Xie
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b696823b34d0..4ee668c20fa6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -608,7 +608,7 @@ static inline int memcg_events_index(enum vm_event_item idx)
 
 struct memcg_vmstats_percpu {
 	/* Stats updates since the last flush */
-	unsigned int			stats_updates;
+	unsigned long			stats_updates;
 
 	/* Cached pointers for fast iteration in memcg_rstat_updated() */
 	struct memcg_vmstats_percpu __percpu	*parent_pcpu;
@@ -639,7 +639,7 @@ struct memcg_vmstats {
 	unsigned long		events_pending[NR_MEMCG_EVENTS];
 
 	/* Stats updates since the last flush */
-	atomic_t		stats_updates;
+	atomic_long_t		stats_updates;
 };
 
 /*
@@ -665,16 +665,16 @@ static u64 flush_last_time;
 
 static bool memcg_vmstats_needs_flush(struct memcg_vmstats *vmstats)
 {
-	return atomic_read(&vmstats->stats_updates) >
+	return atomic_long_read(&vmstats->stats_updates) >
 		MEMCG_CHARGE_BATCH * num_online_cpus();
 }
 
-static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val,
+static inline void memcg_rstat_updated(struct mem_cgroup *memcg, long val,
 				       int cpu)
 {
 	struct memcg_vmstats_percpu __percpu *statc_pcpu;
 	struct memcg_vmstats_percpu *statc;
-	unsigned int stats_updates;
+	unsigned long stats_updates;
 
 	if (!val)
 		return;
@@ -697,7 +697,7 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val,
 			continue;
 
 		stats_updates = this_cpu_xchg(statc_pcpu->stats_updates, 0);
-		atomic_add(stats_updates, &statc->vmstats->stats_updates);
+		atomic_long_add(stats_updates, &statc->vmstats->stats_updates);
 	}
 }
 
@@ -705,7 +705,7 @@ static void __mem_cgroup_flush_stats(struct mem_cgroup *memcg, bool force)
 {
 	bool needs_flush = memcg_vmstats_needs_flush(memcg->vmstats);
 
-	trace_memcg_flush_stats(memcg, atomic_read(&memcg->vmstats->stats_updates),
+	trace_memcg_flush_stats(memcg, atomic_long_read(&memcg->vmstats->stats_updates),
 				force, needs_flush);
 
 	if (!force && !needs_flush)
@@ -4413,8 +4413,8 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 	}
 	WRITE_ONCE(statc->stats_updates, 0);
 	/* We are in a per-cpu loop here, only do the atomic write once */
-	if (atomic_read(&memcg->vmstats->stats_updates))
-		atomic_set(&memcg->vmstats->stats_updates, 0);
+	if (atomic_long_read(&memcg->vmstats->stats_updates))
+		atomic_long_set(&memcg->vmstats->stats_updates, 0);
 }
 
 static void mem_cgroup_fork(struct task_struct *task)