--- /dev/null
+From zhangqiao22@huawei.com Thu Mar 17 11:21:46 2022
+From: Zhang Qiao <zhangqiao22@huawei.com>
+Date: Thu, 17 Mar 2022 10:41:57 +0800
+Subject: cpuset: Fix unsafe lock order between cpuset lock and cpus lock
+To: "Greg Kroah-Hartman" <gregkh@linuxfoundation.org>, "Michal Koutný" <mkoutny@suse.com>
+Cc: <linux-kernel@vger.kernel.org>, <stable@vger.kernel.org>, Zhao Gongyi <zhaogongyi@huawei.com>, Waiman Long <longman@redhat.com>, Tejun Heo <tj@kernel.org>, Juri Lelli <juri.lelli@redhat.com>
+Message-ID: <1ea13066-aa98-ead2-f50f-f62d030ce3c5@huawei.com>
+
+From: Zhang Qiao <zhangqiao22@huawei.com>
+
+The backported commit 4eec5fe1c680a ("cgroup/cpuset: Fix a race
+between cpuset_attach() and cpu hotplug") looks suspicious on this
+branch: it predates commit d74b27d63a8b ("cgroup/cpuset: Change
+cpuset_rwsem and hotplug lock order") v5.4-rc1~176^2~30, so the
+locking order here is still: cpuset lock first, then cpus lock.
+
+Fix it by taking the locks in the correct order, and narrow the
+cpus-locked region since only set_cpus_allowed_ptr() needs the
+protection of the cpus lock.
+
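+In outline, the resulting ordering in cpuset_attach() is (a simplified
+sketch using the same calls as the diff below; the attach preparation
+and the task loop are omitted):
+
+    mutex_lock(&cpuset_mutex);      /* cpuset lock taken first */
+
+    get_online_cpus();              /* cpus lock only where offline races */
+    WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+    put_online_cpus();
+
+    mutex_unlock(&cpuset_mutex);    /* release in reverse order */
+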
+Fixes: 4eec5fe1c680a ("cgroup/cpuset: Fix a race between cpuset_attach() and cpu hotplug")
+Reported-by: Michal Koutný <mkoutny@suse.com>
+Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/cgroup/cpuset.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1528,9 +1528,13 @@ static void cpuset_attach(struct cgroup_
+ cgroup_taskset_first(tset, &css);
+ cs = css_cs(css);
+
+- cpus_read_lock();
+ mutex_lock(&cpuset_mutex);
+
++ /*
++  * The cpus lock must be held here because a concurrent cpu
++  * offline event can cause set_cpus_allowed_ptr() to fail.
++  */
++ get_online_cpus();
+ /* prepare for attach */
+ if (cs == &top_cpuset)
+ cpumask_copy(cpus_attach, cpu_possible_mask);
+@@ -1549,6 +1553,7 @@ static void cpuset_attach(struct cgroup_
+ cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
+ cpuset_update_task_spread_flag(cs, task);
+ }
++ put_online_cpus();
+
+ /*
+ * Change mm for all threadgroup leaders. This is expensive and may
+@@ -1584,7 +1589,6 @@ static void cpuset_attach(struct cgroup_
+ wake_up(&cpuset_attach_wq);
+
+ mutex_unlock(&cpuset_mutex);
+- cpus_read_unlock();
+ }
+
+ /* The various types of files and directories in a cpuset file system */
--- /dev/null
+From liqiong@nfschina.com Thu Mar 17 11:25:02 2022
+From: liqiong <liqiong@nfschina.com>
+Date: Thu, 17 Feb 2022 19:54:16 +0800
+Subject: mm: fix null pointer dereference in migrate[_huge]_page_move_mapping()
+To: david@redhat.com, gregkh@linuxfoundation.org
+Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, liqiong <liqiong@nfschina.com>, stable@vger.kernel.org
+Message-ID: <20220217115416.55835-1-liqiong@nfschina.com>
+
+From: liqiong <liqiong@nfschina.com>
+
+Upstream no longer uses a radix tree in migrate.c, so this patch
+is not needed upstream.
+
+Both functions look up a slot and then dereference the returned
+pointer; if that pointer is null, the kernel crashes.
+
+The 'numad' service calls migrate_pages() periodically. If a slot
+has been replaced meanwhile (cache eviction), radix_tree_lookup_slot()
+returns a null pointer and the kernel crashes:
+
+"numad": crash> bt
+[exception RIP: migrate_page_move_mapping+337]
+
+Check the pointer returned by radix_tree_lookup_slot() and bail out
+with -EAGAIN instead of dereferencing a null pointer.
+
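+The dereference happens in radix_tree_deref_slot_protected(), which
+is fed the lookup result unconditionally; in outline (a simplified
+sketch of the 4.19 code around the first hunk below):
+
+    pslot = radix_tree_lookup_slot(&mapping->i_pages,
+                                   page_index(page));
+    /* pslot is NULL when the entry was evicted under us */
+    if (page_count(page) != expected_count ||
+        /* crashes here: *pslot with pslot == NULL */
+        radix_tree_deref_slot_protected(pslot,
+                        &mapping->i_pages.xa_lock) != page) {
+            xa_unlock_irq(&mapping->i_pages);
+            return -EAGAIN;
+    }
+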
+Cc: <stable@vger.kernel.org> # linux-4.19.y
+Signed-off-by: liqiong <liqiong@nfschina.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/migrate.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -472,6 +472,10 @@ int migrate_page_move_mapping(struct add
+
+ pslot = radix_tree_lookup_slot(&mapping->i_pages,
+ page_index(page));
++ if (pslot == NULL) {
++ xa_unlock_irq(&mapping->i_pages);
++ return -EAGAIN;
++ }
+
+ expected_count += hpage_nr_pages(page) + page_has_private(page);
+ if (page_count(page) != expected_count ||
+@@ -590,6 +594,10 @@ int migrate_huge_page_move_mapping(struc
+ xa_lock_irq(&mapping->i_pages);
+
+ pslot = radix_tree_lookup_slot(&mapping->i_pages, page_index(page));
++ if (pslot == NULL) {
++ xa_unlock_irq(&mapping->i_pages);
++ return -EAGAIN;
++ }
+
+ expected_count = 2 + page_has_private(page);
+ if (page_count(page) != expected_count ||