From: Greg Kroah-Hartman
Date: Wed, 21 Feb 2024 10:37:14 +0000 (+0100)
Subject: drop mm patch
X-Git-Tag: v4.19.307~29
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=db9f59a543d3b317cb5c8f43c7f1707587f94d40;p=thirdparty%2Fkernel%2Fstable-queue.git

drop mm patch
---

diff --git a/queue-4.19/mm-memcontrol-decouple-reference-counting-from-page-accounting.patch b/queue-4.19/mm-memcontrol-decouple-reference-counting-from-page-accounting.patch
deleted file mode 100644
index 29f1db00a6e..00000000000
--- a/queue-4.19/mm-memcontrol-decouple-reference-counting-from-page-accounting.patch
+++ /dev/null
@@ -1,104 +0,0 @@
-From 1a3e1f40962c445b997151a542314f3c6097f8c3 Mon Sep 17 00:00:00 2001
-From: Johannes Weiner
-Date: Thu, 6 Aug 2020 23:20:45 -0700
-Subject: mm: memcontrol: decouple reference counting from page accounting
-
-From: Johannes Weiner
-
-commit 1a3e1f40962c445b997151a542314f3c6097f8c3 upstream.
-
-The reference counting of a memcg is currently coupled directly to how
-many 4k pages are charged to it. This doesn't work well with Roman's new
-slab controller, which maintains pools of objects and doesn't want to keep
-an extra balance sheet for the pages backing those objects.
-
-This unusual refcounting design (reference counts usually track pointers
-to an object) is only for historical reasons: memcg used to not take any
-css references and simply stalled offlining until all charges had been
-reparented and the page counters had dropped to zero. When we got rid of
-the reparenting requirement, the simple mechanical translation was to take
-a reference for every charge.
-
-More historical context can be found in commit e8ea14cc6ead ("mm:
-memcontrol: take a css reference for each charged page"), commit
-64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning tricks") and
-commit b2052564e66d ("mm: memcontrol: continue cache reclaim from offlined
-groups").
-
-The new slab controller exposes the limitations in this scheme, so let's
-switch it to a more idiomatic reference counting model based on actual
-kernel pointers to the memcg:
-
-- The per-cpu stock holds a reference to the memcg its caching
-
-- User pages hold a reference for their page->mem_cgroup. Transparent
-  huge pages will no longer acquire tail references in advance, we'll
-  get them if needed during the split.
-
-- Kernel pages hold a reference for their page->mem_cgroup
-
-- Pages allocated in the root cgroup will acquire and release css
-  references for simplicity. css_get() and css_put() optimize that.
-
-- The current memcg_charge_slab() already hacked around the per-charge
-  references; this change gets rid of that as well.
-
-- tcp accounting will handle reference in mem_cgroup_sk_{alloc,free}
-
-Roman:
-1) Rebased on top of the current mm tree: added css_get() in
-   mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
-2) I've reformatted commit references in the commit log to make
-   checkpatch.pl happy.
-
-[hughd@google.com: remove css_put_many() from __mem_cgroup_clear_mc()]
-  Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2007302011450.2347@eggly.anvils
-
-Signed-off-by: Johannes Weiner
-Signed-off-by: Roman Gushchin
-Signed-off-by: Hugh Dickins
-Signed-off-by: Andrew Morton
-Reviewed-by: Shakeel Butt
-Acked-by: Roman Gushchin
-Acked-by: Michal Hocko
-Cc: Christoph Lameter
-Cc: Tejun Heo
-Cc: Vlastimil Babka
-Link: http://lkml.kernel.org/r/20200623174037.3951353-6-guro@fb.com
-Signed-off-by: Linus Torvalds
-Fixes: cdec2e4265df ("memcg: coalesce charging via percpu storage")
-Signed-off-by: GONG, Ruiqi
-Signed-off-by: Greg Kroah-Hartman
----
- mm/memcontrol.c | 6 ++++++
- 1 file changed, 6 insertions(+)
-
---- a/mm/memcontrol.c
-+++ b/mm/memcontrol.c
-@@ -2015,6 +2015,9 @@ static void drain_stock(struct memcg_sto
- {
- 	struct mem_cgroup *old = stock->cached;
-
-+	if (!old)
-+		return;
-+
- 	if (stock->nr_pages) {
- 		page_counter_uncharge(&old->memory, stock->nr_pages);
- 		if (do_memsw_account())
-@@ -2022,6 +2025,8 @@ static void drain_stock(struct memcg_sto
- 		css_put_many(&old->css, stock->nr_pages);
- 		stock->nr_pages = 0;
- 	}
-+
-+	css_put(&old->css);
- 	stock->cached = NULL;
- }
-
-@@ -2057,6 +2062,7 @@ static void refill_stock(struct mem_cgro
- 	stock = this_cpu_ptr(&memcg_stock);
- 	if (stock->cached != memcg) { /* reset if necessary */
- 		drain_stock(stock);
-+		css_get(&memcg->css);
- 		stock->cached = memcg;
- 	}
- 	stock->nr_pages += nr_pages;
diff --git a/queue-5.4/mm-memcontrol-decouple-reference-counting-from-page-accounting.patch b/queue-5.4/mm-memcontrol-decouple-reference-counting-from-page-accounting.patch
deleted file mode 100644
index 058fc0c8ce7..00000000000
--- a/queue-5.4/mm-memcontrol-decouple-reference-counting-from-page-accounting.patch
+++ /dev/null
@@ -1,104 +0,0 @@
-From 1a3e1f40962c445b997151a542314f3c6097f8c3 Mon Sep 17 00:00:00 2001
-From: Johannes Weiner
-Date: Thu, 6 Aug 2020 23:20:45 -0700
-Subject: mm: memcontrol: decouple reference counting from page accounting
-
-From: Johannes Weiner
-
-commit 1a3e1f40962c445b997151a542314f3c6097f8c3 upstream.
-
-The reference counting of a memcg is currently coupled directly to how
-many 4k pages are charged to it. This doesn't work well with Roman's new
-slab controller, which maintains pools of objects and doesn't want to keep
-an extra balance sheet for the pages backing those objects.
-
-This unusual refcounting design (reference counts usually track pointers
-to an object) is only for historical reasons: memcg used to not take any
-css references and simply stalled offlining until all charges had been
-reparented and the page counters had dropped to zero. When we got rid of
-the reparenting requirement, the simple mechanical translation was to take
-a reference for every charge.
-
-More historical context can be found in commit e8ea14cc6ead ("mm:
-memcontrol: take a css reference for each charged page"), commit
-64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning tricks") and
-commit b2052564e66d ("mm: memcontrol: continue cache reclaim from offlined
-groups").
-
-The new slab controller exposes the limitations in this scheme, so let's
-switch it to a more idiomatic reference counting model based on actual
-kernel pointers to the memcg:
-
-- The per-cpu stock holds a reference to the memcg its caching
-
-- User pages hold a reference for their page->mem_cgroup. Transparent
-  huge pages will no longer acquire tail references in advance, we'll
-  get them if needed during the split.
-
-- Kernel pages hold a reference for their page->mem_cgroup
-
-- Pages allocated in the root cgroup will acquire and release css
-  references for simplicity. css_get() and css_put() optimize that.
-
-- The current memcg_charge_slab() already hacked around the per-charge
-  references; this change gets rid of that as well.
-
-- tcp accounting will handle reference in mem_cgroup_sk_{alloc,free}
-
-Roman:
-1) Rebased on top of the current mm tree: added css_get() in
-   mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
-2) I've reformatted commit references in the commit log to make
-   checkpatch.pl happy.
-
-[hughd@google.com: remove css_put_many() from __mem_cgroup_clear_mc()]
-  Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2007302011450.2347@eggly.anvils
-
-Signed-off-by: Johannes Weiner
-Signed-off-by: Roman Gushchin
-Signed-off-by: Hugh Dickins
-Signed-off-by: Andrew Morton
-Reviewed-by: Shakeel Butt
-Acked-by: Roman Gushchin
-Acked-by: Michal Hocko
-Cc: Christoph Lameter
-Cc: Tejun Heo
-Cc: Vlastimil Babka
-Link: http://lkml.kernel.org/r/20200623174037.3951353-6-guro@fb.com
-Signed-off-by: Linus Torvalds
-Fixes: cdec2e4265df ("memcg: coalesce charging via percpu storage")
-Signed-off-by: GONG, Ruiqi
-Signed-off-by: Greg Kroah-Hartman
----
- mm/memcontrol.c | 6 ++++++
- 1 file changed, 6 insertions(+)
-
---- a/mm/memcontrol.c
-+++ b/mm/memcontrol.c
-@@ -2214,6 +2214,9 @@ static void drain_stock(struct memcg_sto
- {
- 	struct mem_cgroup *old = stock->cached;
-
-+	if (!old)
-+		return;
-+
- 	if (stock->nr_pages) {
- 		page_counter_uncharge(&old->memory, stock->nr_pages);
- 		if (do_memsw_account())
-@@ -2221,6 +2224,8 @@ static void drain_stock(struct memcg_sto
- 		css_put_many(&old->css, stock->nr_pages);
- 		stock->nr_pages = 0;
- 	}
-+
-+	css_put(&old->css);
- 	stock->cached = NULL;
- }
-
-@@ -2256,6 +2261,7 @@ static void refill_stock(struct mem_cgro
- 	stock = this_cpu_ptr(&memcg_stock);
- 	if (stock->cached != memcg) { /* reset if necessary */
- 		drain_stock(stock);
-+		css_get(&memcg->css);
- 		stock->cached = memcg;
- 	}
- 	stock->nr_pages += nr_pages;
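Taken together, the six insertions in each queue implement one pattern: the per-cpu charge stock pins the memcg it caches with a single css reference, taken in refill_stock() when the cached group changes and dropped in drain_stock(), which now also bails out early when nothing is cached. Unlike the full upstream commit, these stable hunks only add that pin; the existing css_put_many() per-page reference drop stays in place, as the context lines show. The fragment below is a minimal userspace sketch of the pattern, not kernel code: struct cgroup, struct stock and the cg_get()/cg_put() helpers are illustrative stand-ins for struct mem_cgroup, struct memcg_stock_pcp and css_get()/css_put(), and the page-counter accounting is reduced to a plain counter.

/*
 * Illustrative userspace model of the stock-pinning pattern above.
 * NOT kernel code: names and helpers are stand-ins, not the memcg API.
 */
#include <assert.h>
#include <stdio.h>

struct cgroup {
	const char *name;
	long refcnt;			/* models the css reference count */
};

struct stock {
	struct cgroup *cached;		/* pinned by exactly one reference while cached */
	unsigned long nr_pages;		/* locally batched charge */
};

static void cg_get(struct cgroup *cg) { cg->refcnt++; }
static void cg_put(struct cgroup *cg) { assert(cg->refcnt > 0); cg->refcnt--; }

/* Models drain_stock() after the patch: flush the batched charge and
 * drop the single reference that pinned the cached group. */
static void drain_stock(struct stock *st)
{
	struct cgroup *old = st->cached;

	if (!old)			/* the NULL check the backport adds */
		return;

	st->nr_pages = 0;		/* stands in for page_counter_uncharge() */
	cg_put(old);			/* the css_put() the backport adds */
	st->cached = NULL;
}

/* Models refill_stock(): switching the cached group takes one reference. */
static void refill_stock(struct stock *st, struct cgroup *cg, unsigned long nr)
{
	if (st->cached != cg) {		/* reset if necessary */
		drain_stock(st);
		cg_get(cg);		/* the css_get() the backport adds */
		st->cached = cg;
	}
	st->nr_pages += nr;
}

int main(void)
{
	struct cgroup a = { "A", 1 }, b = { "B", 1 };	/* 1 = the caller's own reference */
	struct stock st = { 0 };

	refill_stock(&st, &a, 32);	/* A pinned: refcnt goes 1 -> 2 */
	refill_stock(&st, &a, 32);	/* same group cached: no extra reference */
	refill_stock(&st, &b, 16);	/* drains A (back to 1), pins B (2) */
	drain_stock(&st);		/* B unpinned, back to 1 */

	printf("A=%ld B=%ld\n", a.refcnt, b.refcnt);
	return 0;
}

Because the pin follows the cached pointer rather than the batched page count, drain_stock() can run at any time, even with nr_pages == 0, without dropping a reference it never took.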