From: Qi Zheng
Date: Thu, 5 Mar 2026 11:52:35 +0000 (+0800)
Subject: mm: thp: prevent memory cgroup release in folio_split_queue_lock{_irqsave}()
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=681d325b23dccbf8f6beda18dc1a61d8e3c715cf;p=thirdparty%2Fkernel%2Flinux.git

mm: thp: prevent memory cgroup release in folio_split_queue_lock{_irqsave}()

In the near future, a folio will no longer pin its corresponding memory
cgroup.  To ensure safety, it will only be appropriate to hold the rcu
read lock or acquire a reference to the memory cgroup returned by
folio_memcg(), thereby preventing it from being released.

In the current patch, the rcu read lock is employed to safeguard against
the release of the memory cgroup in folio_split_queue_lock{_irqsave}().

Link: https://lore.kernel.org/ca2957c0df1126b2c71b40c738018fd5255525a6.1772711148.git.zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng
Reviewed-by: Harry Yoo
Acked-by: Johannes Weiner
Acked-by: Shakeel Butt
Acked-by: David Hildenbrand (Red Hat)
Acked-by: Muchun Song
Cc: Allen Pais
Cc: Axel Rasmussen
Cc: Baoquan He
Cc: Chengming Zhou
Cc: Chen Ridong
Cc: Hamza Mahfooz
Cc: Hugh Dickins
Cc: Imran Khan
Cc: Kamalesh Babulal
Cc: Lance Yang
Cc: Liam Howlett
Cc: Lorenzo Stoakes (Oracle)
Cc: Michal Hocko
Cc: Michal Koutný
Cc: Mike Rapoport
Cc: Muchun Song
Cc: Nhat Pham
Cc: Roman Gushchin
Cc: Suren Baghdasaryan
Cc: Usama Arif
Cc: Vlastimil Babka
Cc: Wei Xu
Cc: Yosry Ahmed
Cc: Yuanchu Xie
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 958b580c6619..970e077019b7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1218,13 +1218,29 @@ retry:
 
 static struct deferred_split *folio_split_queue_lock(struct folio *folio)
 {
-	return split_queue_lock(folio_nid(folio), folio_memcg(folio));
+	struct deferred_split *queue;
+
+	rcu_read_lock();
+	queue = split_queue_lock(folio_nid(folio), folio_memcg(folio));
+	/*
+	 * The memcg destruction path acquires this lock before reparenting,
+	 * so once the lock is held it is safe to drop the rcu read lock.
+	 */
+	rcu_read_unlock();
+
+	return queue;
 }
 
 static struct deferred_split *
 folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
 {
-	return split_queue_lock_irqsave(folio_nid(folio), folio_memcg(folio), flags);
+	struct deferred_split *queue;
+
+	rcu_read_lock();
+	queue = split_queue_lock_irqsave(folio_nid(folio), folio_memcg(folio), flags);
+	rcu_read_unlock();
+
+	return queue;
 }
 
 static inline void split_queue_unlock(struct deferred_split *queue)