+++ /dev/null
-From 1a3e1f40962c445b997151a542314f3c6097f8c3 Mon Sep 17 00:00:00 2001
-From: Johannes Weiner <hannes@cmpxchg.org>
-Date: Thu, 6 Aug 2020 23:20:45 -0700
-Subject: mm: memcontrol: decouple reference counting from page accounting
-
-From: Johannes Weiner <hannes@cmpxchg.org>
-
-commit 1a3e1f40962c445b997151a542314f3c6097f8c3 upstream.
-
-The reference counting of a memcg is currently coupled directly to how
-many 4k pages are charged to it. This doesn't work well with Roman's new
-slab controller, which maintains pools of objects and doesn't want to keep
-an extra balance sheet for the pages backing those objects.
-
-This unusual refcounting design (reference counts usually track pointers
-to an object) is only for historical reasons: memcg used to not take any
-css references and simply stalled offlining until all charges had been
-reparented and the page counters had dropped to zero. When we got rid of
-the reparenting requirement, the simple mechanical translation was to take
-a reference for every charge.
-
-More historical context can be found in commit e8ea14cc6ead ("mm:
-memcontrol: take a css reference for each charged page"), commit
-64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning tricks") and
-commit b2052564e66d ("mm: memcontrol: continue cache reclaim from offlined
-groups").
-
-The new slab controller exposes the limitations in this scheme, so let's
-switch it to a more idiomatic reference counting model based on actual
-kernel pointers to the memcg:
-
-- The per-cpu stock holds a reference to the memcg it's caching
-
-- User pages hold a reference for their page->mem_cgroup. Transparent
- huge pages will no longer acquire tail references in advance, we'll
- get them if needed during the split.
-
-- Kernel pages hold a reference for their page->mem_cgroup
-
-- Pages allocated in the root cgroup will acquire and release css
- references for simplicity. css_get() and css_put() optimize that.
-
-- The current memcg_charge_slab() already hacked around the per-charge
- references; this change gets rid of that as well.
-
-- tcp accounting will handle references in mem_cgroup_sk_{alloc,free}
-
-Roman:
-1) Rebased on top of the current mm tree: added css_get() in
- mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
-2) I've reformatted commit references in the commit log to make
- checkpatch.pl happy.
-
-[hughd@google.com: remove css_put_many() from __mem_cgroup_clear_mc()]
- Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2007302011450.2347@eggly.anvils
-
-Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
-Signed-off-by: Roman Gushchin <guro@fb.com>
-Signed-off-by: Hugh Dickins <hughd@google.com>
-Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-Reviewed-by: Shakeel Butt <shakeelb@google.com>
-Acked-by: Roman Gushchin <guro@fb.com>
-Acked-by: Michal Hocko <mhocko@suse.com>
-Cc: Christoph Lameter <cl@linux.com>
-Cc: Tejun Heo <tj@kernel.org>
-Cc: Vlastimil Babka <vbabka@suse.cz>
-Link: http://lkml.kernel.org/r/20200623174037.3951353-6-guro@fb.com
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Fixes: cdec2e4265df ("memcg: coalesce charging via percpu storage")
-Signed-off-by: GONG, Ruiqi <gongruiqi1@huawei.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- mm/memcontrol.c | 6 ++++++
- 1 file changed, 6 insertions(+)
-
---- a/mm/memcontrol.c
-+++ b/mm/memcontrol.c
-@@ -2015,6 +2015,9 @@ static void drain_stock(struct memcg_sto
- {
- struct mem_cgroup *old = stock->cached;
-
-+ if (!old)
-+ return;
-+
- if (stock->nr_pages) {
- page_counter_uncharge(&old->memory, stock->nr_pages);
- if (do_memsw_account())
-@@ -2022,6 +2025,8 @@ static void drain_stock(struct memcg_sto
- css_put_many(&old->css, stock->nr_pages);
- stock->nr_pages = 0;
- }
-+
-+ css_put(&old->css);
- stock->cached = NULL;
- }
-
-@@ -2057,6 +2062,7 @@ static void refill_stock(struct mem_cgro
- stock = this_cpu_ptr(&memcg_stock);
- if (stock->cached != memcg) { /* reset if necessary */
- drain_stock(stock);
-+ css_get(&memcg->css);
- stock->cached = memcg;
- }
- stock->nr_pages += nr_pages;
+++ /dev/null
-From 1a3e1f40962c445b997151a542314f3c6097f8c3 Mon Sep 17 00:00:00 2001
-From: Johannes Weiner <hannes@cmpxchg.org>
-Date: Thu, 6 Aug 2020 23:20:45 -0700
-Subject: mm: memcontrol: decouple reference counting from page accounting
-
-From: Johannes Weiner <hannes@cmpxchg.org>
-
-commit 1a3e1f40962c445b997151a542314f3c6097f8c3 upstream.
-
-The reference counting of a memcg is currently coupled directly to how
-many 4k pages are charged to it. This doesn't work well with Roman's new
-slab controller, which maintains pools of objects and doesn't want to keep
-an extra balance sheet for the pages backing those objects.
-
-This unusual refcounting design (reference counts usually track pointers
-to an object) is only for historical reasons: memcg used to not take any
-css references and simply stalled offlining until all charges had been
-reparented and the page counters had dropped to zero. When we got rid of
-the reparenting requirement, the simple mechanical translation was to take
-a reference for every charge.
-
-More historical context can be found in commit e8ea14cc6ead ("mm:
-memcontrol: take a css reference for each charged page"), commit
-64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning tricks") and
-commit b2052564e66d ("mm: memcontrol: continue cache reclaim from offlined
-groups").
-
-The new slab controller exposes the limitations in this scheme, so let's
-switch it to a more idiomatic reference counting model based on actual
-kernel pointers to the memcg:
-
-- The per-cpu stock holds a reference to the memcg it's caching
-
-- User pages hold a reference for their page->mem_cgroup. Transparent
- huge pages will no longer acquire tail references in advance, we'll
- get them if needed during the split.
-
-- Kernel pages hold a reference for their page->mem_cgroup
-
-- Pages allocated in the root cgroup will acquire and release css
- references for simplicity. css_get() and css_put() optimize that.
-
-- The current memcg_charge_slab() already hacked around the per-charge
- references; this change gets rid of that as well.
-
-- tcp accounting will handle references in mem_cgroup_sk_{alloc,free}
-
-Roman:
-1) Rebased on top of the current mm tree: added css_get() in
- mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
-2) I've reformatted commit references in the commit log to make
- checkpatch.pl happy.
-
-[hughd@google.com: remove css_put_many() from __mem_cgroup_clear_mc()]
- Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2007302011450.2347@eggly.anvils
-
-Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
-Signed-off-by: Roman Gushchin <guro@fb.com>
-Signed-off-by: Hugh Dickins <hughd@google.com>
-Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-Reviewed-by: Shakeel Butt <shakeelb@google.com>
-Acked-by: Roman Gushchin <guro@fb.com>
-Acked-by: Michal Hocko <mhocko@suse.com>
-Cc: Christoph Lameter <cl@linux.com>
-Cc: Tejun Heo <tj@kernel.org>
-Cc: Vlastimil Babka <vbabka@suse.cz>
-Link: http://lkml.kernel.org/r/20200623174037.3951353-6-guro@fb.com
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Fixes: cdec2e4265df ("memcg: coalesce charging via percpu storage")
-Signed-off-by: GONG, Ruiqi <gongruiqi1@huawei.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- mm/memcontrol.c | 6 ++++++
- 1 file changed, 6 insertions(+)
-
---- a/mm/memcontrol.c
-+++ b/mm/memcontrol.c
-@@ -2214,6 +2214,9 @@ static void drain_stock(struct memcg_sto
- {
- struct mem_cgroup *old = stock->cached;
-
-+ if (!old)
-+ return;
-+
- if (stock->nr_pages) {
- page_counter_uncharge(&old->memory, stock->nr_pages);
- if (do_memsw_account())
-@@ -2221,6 +2224,8 @@ static void drain_stock(struct memcg_sto
- css_put_many(&old->css, stock->nr_pages);
- stock->nr_pages = 0;
- }
-+
-+ css_put(&old->css);
- stock->cached = NULL;
- }
-
-@@ -2256,6 +2261,7 @@ static void refill_stock(struct mem_cgro
- stock = this_cpu_ptr(&memcg_stock);
- if (stock->cached != memcg) { /* reset if necessary */
- drain_stock(stock);
-+ css_get(&memcg->css);
- stock->cached = memcg;
- }
- stock->nr_pages += nr_pages;
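The hunks above change the per-cpu stock to hold exactly one css reference for the memcg it caches (taken in refill_stock() when the cached memcg changes, dropped in drain_stock()), rather than one reference per stocked page. The following is a toy model of that scheme, using simplified stand-in types (`struct memcg`, `struct stock`, a plain `int` refcount) rather than the real kernel structures; it is a sketch of the refcounting pattern, not kernel code.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for mem_cgroup and memcg_stock_pcp. */
struct memcg {
	int refcount;               /* stands in for the css refcount */
};

struct stock {
	struct memcg *cached;
	unsigned int nr_pages;
};

static void css_get(struct memcg *m) { m->refcount++; }
static void css_put(struct memcg *m) { m->refcount--; }

/* Mirrors the patched drain_stock(): bail out if nothing is cached,
 * flush the stocked pages, then drop the single reference that was
 * pinning the cached memcg. */
static void drain_stock(struct stock *stock)
{
	struct memcg *old = stock->cached;

	if (!old)
		return;             /* the early return added by the patch */

	stock->nr_pages = 0;        /* stand-in for page_counter_uncharge() */
	css_put(old);               /* drop the one cache reference */
	stock->cached = NULL;
}

/* Mirrors the patched refill_stock(): a reference is taken only when
 * the cached memcg actually changes, independent of nr_pages. */
static void refill_stock(struct stock *stock, struct memcg *memcg,
			 unsigned int nr_pages)
{
	if (stock->cached != memcg) {   /* reset if necessary */
		drain_stock(stock);
		css_get(memcg);         /* one reference for the cache */
		stock->cached = memcg;
	}
	stock->nr_pages += nr_pages;
}
```

With this model, refilling the same memcg repeatedly leaves its refcount unchanged, and switching the stock to a different memcg drops the old pin and takes a new one — the reference count tracks the `cached` pointer, which is the idiomatic model the commit message describes.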