From: Greg Kroah-Hartman
Date: Mon, 26 Feb 2024 07:17:15 +0000 (+0100)
Subject: 4.19-stable patches
X-Git-Tag: v4.19.308~64
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=787a29010d0382b8af09971298004a44265e48a6;p=thirdparty%2Fkernel%2Fstable-queue.git

4.19-stable patches

added patches:
	mm-memcontrol-switch-to-rcu-protection-in-drain_all_stock.patch
---

diff --git a/queue-4.19/mm-memcontrol-switch-to-rcu-protection-in-drain_all_stock.patch b/queue-4.19/mm-memcontrol-switch-to-rcu-protection-in-drain_all_stock.patch
new file mode 100644
index 00000000000..0836ac22119
--- /dev/null
+++ b/queue-4.19/mm-memcontrol-switch-to-rcu-protection-in-drain_all_stock.patch
@@ -0,0 +1,79 @@
+From e1a366be5cb4f849ec4de170d50eebc08bb0af20 Mon Sep 17 00:00:00 2001
+From: Roman Gushchin
+Date: Mon, 23 Sep 2019 15:34:58 -0700
+Subject: mm: memcontrol: switch to rcu protection in drain_all_stock()
+
+From: Roman Gushchin
+
+commit e1a366be5cb4f849ec4de170d50eebc08bb0af20 upstream.
+
+Commit 72f0184c8a00 ("mm, memcg: remove hotplug locking from try_charge")
+introduced css_tryget()/css_put() calls in drain_all_stock(), which are
+supposed to protect the target memory cgroup from being released during
+the mem_cgroup_is_descendant() call.
+
+However, it's not completely safe.  In theory, memcg can go away between
+reading stock->cached pointer and calling css_tryget().
+
+This can happen if drain_all_stock() races with drain_local_stock()
+performed on the remote cpu as a result of a work, scheduled by the
+previous invocation of drain_all_stock().
+
+The race is a bit theoretical and there are few chances to trigger it, but
+the current code looks a bit confusing, so it makes sense to fix it
+anyway.  The code looks like as if css_tryget() and css_put() are used to
+protect stocks drainage.  It's not necessary because stocked pages are
+holding references to the cached cgroup.  And it obviously won't work for
+works, scheduled on other cpus.
+
+So, let's read the stock->cached pointer and evaluate the memory cgroup
+inside a rcu read section, and get rid of css_tryget()/css_put() calls.
+
+Link: http://lkml.kernel.org/r/20190802192241.3253165-1-guro@fb.com
+Signed-off-by: Roman Gushchin
+Acked-by: Michal Hocko
+Cc: Hillf Danton
+Cc: Johannes Weiner
+Cc: Vladimir Davydov
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Fixes: cdec2e4265df ("memcg: coalesce charging via percpu storage")
+Signed-off-by: GONG, Ruiqi
+Signed-off-by: Greg Kroah-Hartman
+---
+ mm/memcontrol.c |   17 +++++++++--------
+ 1 file changed, 9 insertions(+), 8 deletions(-)
+
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2094,21 +2094,22 @@ static void drain_all_stock(struct mem_c
+ 	for_each_online_cpu(cpu) {
+ 		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
+ 		struct mem_cgroup *memcg;
++		bool flush = false;
+ 
++		rcu_read_lock();
+ 		memcg = stock->cached;
+-		if (!memcg || !stock->nr_pages || !css_tryget(&memcg->css))
+-			continue;
+-		if (!mem_cgroup_is_descendant(memcg, root_memcg)) {
+-			css_put(&memcg->css);
+-			continue;
+-		}
+-		if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
++		if (memcg && stock->nr_pages &&
++		    mem_cgroup_is_descendant(memcg, root_memcg))
++			flush = true;
++		rcu_read_unlock();
++
++		if (flush &&
++		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
+ 			if (cpu == curcpu)
+ 				drain_local_stock(&stock->work);
+ 			else
+ 				schedule_work_on(cpu, &stock->work);
+ 		}
+-		css_put(&memcg->css);
+ 	}
+ 	put_cpu();
+ 	mutex_unlock(&percpu_charge_mutex);
diff --git a/queue-4.19/series b/queue-4.19/series
index bf20630316a..cd2fef46409 100644
--- a/queue-4.19/series
+++ b/queue-4.19/series
@@ -25,3 +25,4 @@ virtio-blk-ensure-no-requests-in-virtqueues-before-d.patch
 s390-qeth-fix-potential-loss-of-l3-ip-in-case-of-net.patch
 pmdomain-renesas-r8a77980-sysc-cr7-must-be-always-on.patch
 ib-hfi1-fix-sdma.h-tx-num_descs-off-by-one-error.patch
+mm-memcontrol-switch-to-rcu-protection-in-drain_all_stock.patch
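
For readers following the change, here is a simplified, illustrative sketch of the drain_all_stock() per-cpu loop as it looks with the patch above applied; it is not part of the patch, and all identifiers (stock, root_memcg, curcpu, FLUSHING_CACHED_CHARGE) are taken from the hunk above.  The cached memcg is only inspected inside an rcu_read_lock()/rcu_read_unlock() section, which sets a local flush flag; the drain itself is kicked off outside the RCU read section, without css_tryget()/css_put(), because the stocked pages already hold references to the cached cgroup.

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		struct mem_cgroup *memcg;
		bool flush = false;

		/* Inspect the cached pointer under RCU only; take no reference. */
		rcu_read_lock();
		memcg = stock->cached;
		if (memcg && stock->nr_pages &&
		    mem_cgroup_is_descendant(memcg, root_memcg))
			flush = true;
		rcu_read_unlock();

		/* Act on the decision outside the RCU read section. */
		if (flush &&
		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				drain_local_stock(&stock->work);
			else
				schedule_work_on(cpu, &stock->work);
		}
	}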