From: Muchun Song
Date: Thu, 5 Mar 2026 11:52:32 +0000 (+0800)
Subject: mm: mglru: prevent memory cgroup release in mglru
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=c29f90a2dac18bdd407eafc4cbaa57f14665393a;p=thirdparty%2Fkernel%2Flinux.git

mm: mglru: prevent memory cgroup release in mglru

In the near future, a folio will no longer pin its corresponding memory
cgroup.  It will then only be safe to call folio_memcg() while holding
the RCU read lock, or after acquiring a reference to the memory cgroup
it returns, thereby preventing the cgroup from being released.

This patch employs the RCU read lock to safeguard against the release
of the memory cgroup in mglru.  It serves as a preparatory measure for
the reparenting of the LRU pages.

Link: https://lore.kernel.org/9d887662a9d39c425742dd8468e3123316bccfe3.1772711148.git.zhengqi.arch@bytedance.com
Signed-off-by: Muchun Song
Signed-off-by: Qi Zheng
Acked-by: Shakeel Butt
Reviewed-by: Harry Yoo
Cc: Allen Pais
Cc: Axel Rasmussen
Cc: Baoquan He
Cc: Chengming Zhou
Cc: Chen Ridong
Cc: David Hildenbrand
Cc: Hamza Mahfooz
Cc: Hugh Dickins
Cc: Imran Khan
Cc: Johannes Weiner
Cc: Kamalesh Babulal
Cc: Lance Yang
Cc: Liam Howlett
Cc: Lorenzo Stoakes (Oracle)
Cc: Michal Hocko
Cc: Michal Koutný
Cc: Mike Rapoport
Cc: Muchun Song
Cc: Nhat Pham
Cc: Roman Gushchin
Cc: Suren Baghdasaryan
Cc: Usama Arif
Cc: Vlastimil Babka
Cc: Wei Xu
Cc: Yosry Ahmed
Cc: Yuanchu Xie
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 031fbd35ae10..6f3f9e20ff67 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3440,8 +3440,10 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
 	if (folio_nid(folio) != pgdat->node_id)
 		return NULL;
 
+	rcu_read_lock();
 	if (folio_memcg(folio) != memcg)
-		return NULL;
+		folio = NULL;
+	rcu_read_unlock();
 
 	return folio;
 }
@@ -4211,12 +4213,12 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw, unsigned int nr)
 	unsigned long addr = pvmw->address;
 	struct vm_area_struct *vma = pvmw->vma;
 	struct folio *folio = pfn_folio(pvmw->pfn);
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg;
 	struct pglist_data *pgdat = folio_pgdat(folio);
-	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
-	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
-	DEFINE_MAX_SEQ(lruvec);
-	int gen = lru_gen_from_seq(max_seq);
+	struct lruvec *lruvec;
+	struct lru_gen_mm_state *mm_state;
+	unsigned long max_seq;
+	int gen;
 
 	lockdep_assert_held(pvmw->ptl);
 	VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio);
@@ -4251,6 +4253,12 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw, unsigned int nr)
 		}
 	}
 
+	memcg = get_mem_cgroup_from_folio(folio);
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	max_seq = READ_ONCE((lruvec)->lrugen.max_seq);
+	gen = lru_gen_from_seq(max_seq);
+	mm_state = get_mm_state(lruvec);
+
 	lazy_mmu_mode_enable();
 
 	pte -= (addr - start) / PAGE_SIZE;
@@ -4300,6 +4308,8 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw, unsigned int nr)
 	if (mm_state && suitable_to_scan(i, young))
 		update_bloom_filter(mm_state, max_seq, pvmw->pmd);
 
+	mem_cgroup_put(memcg);
+
 	return true;
 }