--- /dev/null
+From f231fe4235e22e18d847e05cbe705deaca56580a Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <david@redhat.com>
+Date: Fri, 18 Oct 2019 20:20:05 -0700
+Subject: hugetlbfs: don't access uninitialized memmaps in pfn_range_valid_gigantic()
+
+From: David Hildenbrand <david@redhat.com>
+
+commit f231fe4235e22e18d847e05cbe705deaca56580a upstream.
+
+Uninitialized memmaps contain garbage and in the worst case trigger
+kernel BUGs, especially with CONFIG_PAGE_POISONING. They should not get
+touched.
+
+Let's make sure that we only consider online memory (managed by the
+buddy) that has initialized memmaps. ZONE_DEVICE is not applicable.
+
+page_zone() will call page_to_nid(), which will trigger
+VM_BUG_ON_PGFLAGS(PagePoisoned(page), page) with CONFIG_PAGE_POISONING
+and CONFIG_DEBUG_VM_PGFLAGS when called on uninitialized memmaps. This
+can be the case when an offline memory block (e.g., never onlined) is
+spanned by a zone.
+
+Note: As explained by Michal in [1], alloc_contig_range() will verify
+the range. So it boils down to the wrong access in this function.
+
+[1] http://lkml.kernel.org/r/20180423000943.GO17484@dhcp22.suse.cz
+
+Link: http://lkml.kernel.org/r/20191015120717.4858-1-david@redhat.com
+Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online") [visible after d0dc12e86b319]
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Reported-by: Michal Hocko <mhocko@kernel.org>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
+Cc: Anshuman Khandual <anshuman.khandual@arm.com>
+Cc: <stable@vger.kernel.org> [4.13+]
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/hugetlb.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1073,11 +1073,10 @@ static bool pfn_range_valid_gigantic(str
+ struct page *page;
+
+ for (i = start_pfn; i < end_pfn; i++) {
+- if (!pfn_valid(i))
++ page = pfn_to_online_page(i);
++ if (!page)
+ return false;
+
+- page = pfn_to_page(i);
+-
+ if (page_zone(page) != z)
+ return false;
+
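
For reference, pfn_to_online_page() — the helper this and several of the
following patches switch to — only returns a page for pfns in online
memory sections, whose memmaps are guaranteed to be initialized, while
pfn_valid() merely says a memmap exists, not that it was initialized.
A sketch of the helper as it looked around this time (see
include/linux/memory_hotplug.h for the authoritative definition):

    static inline struct page *pfn_to_online_page(unsigned long pfn)
    {
            unsigned long nr = pfn_to_section_nr(pfn);

            /* Only online sections have initialized memmaps. */
            if (nr < NR_MEM_SECTIONS && online_section_nr(nr) &&
                pfn_valid_within(pfn))
                    return pfn_to_page(pfn);
            return NULL;
    }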
--- /dev/null
+From 3d7fed4ad8ccb691d217efbb0f934e6a4df5ef91 Mon Sep 17 00:00:00 2001
+From: Jane Chu <jane.chu@oracle.com>
+Date: Mon, 14 Oct 2019 14:12:29 -0700
+Subject: mm/memory-failure: poison read receives SIGKILL instead of SIGBUS if mmaped more than once
+
+From: Jane Chu <jane.chu@oracle.com>
+
+commit 3d7fed4ad8ccb691d217efbb0f934e6a4df5ef91 upstream.
+
+Mmap /dev/dax more than once, then read the poison location using the
+address from one of the mappings. The other mappings, since they do not
+have the poisoned page mapped in, will cause SIGKILLs to be delivered
+to the process. SIGKILL takes precedence over SIGBUS, so the user
+process loses the opportunity to handle the UE (uncorrectable error).
+
+Although one may add MAP_POPULATE to mmap(2) to work around the issue,
+MAP_POPULATE makes mapping 128GB of pmem several orders of magnitude
+slower, so it isn't always an option.
+
+Details -
+
+ ndctl inject-error --block=10 --count=1 namespace6.0
+
+ ./read_poison -x dax6.0 -o 5120 -m 2
+ mmaped address 0x7f5bb6600000
+ mmaped address 0x7f3cf3600000
+ doing local read at address 0x7f3cf3601400
+ Killed
+
+Console messages in instrumented kernel -
+
+ mce: Uncorrected hardware memory error in user-access at edbe201400
+ Memory failure: tk->addr = 7f5bb6601000
+ Memory failure: address edbe201: call dev_pagemap_mapping_shift
+ dev_pagemap_mapping_shift: page edbe201: no PUD
+ Memory failure: tk->size_shift == 0
+ Memory failure: Unable to find user space address edbe201 in read_poison
+ Memory failure: tk->addr = 7f3cf3601000
+ Memory failure: address edbe201: call dev_pagemap_mapping_shift
+ Memory failure: tk->size_shift = 21
+ Memory failure: 0xedbe201: forcibly killing read_poison:22434 because of failure to unmap corrupted page
+ => to deliver SIGKILL
+ Memory failure: 0xedbe201: Killing read_poison:22434 due to hardware memory corruption
+ => to deliver SIGBUS
+
+Link: http://lkml.kernel.org/r/1565112345-28754-3-git-send-email-jane.chu@oracle.com
+Signed-off-by: Jane Chu <jane.chu@oracle.com>
+Suggested-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Reviewed-by: Dan Williams <dan.j.williams@intel.com>
+Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Michal Hocko <mhocko@kernel.org>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/memory-failure.c | 22 +++++++++++++---------
+ 1 file changed, 13 insertions(+), 9 deletions(-)
+
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -202,7 +202,6 @@ struct to_kill {
+ struct task_struct *tsk;
+ unsigned long addr;
+ short size_shift;
+- char addr_valid;
+ };
+
+ /*
+@@ -327,22 +326,27 @@ static void add_to_kill(struct task_stru
+ }
+ }
+ tk->addr = page_address_in_vma(p, vma);
+- tk->addr_valid = 1;
+ if (is_zone_device_page(p))
+ tk->size_shift = dev_pagemap_mapping_shift(p, vma);
+ else
+ tk->size_shift = compound_order(compound_head(p)) + PAGE_SHIFT;
+
+ /*
+- * In theory we don't have to kill when the page was
+- * munmaped. But it could be also a mremap. Since that's
+- * likely very rare kill anyways just out of paranoia, but use
+- * a SIGKILL because the error is not contained anymore.
++ * Send SIGKILL if "tk->addr == -EFAULT". Also, since
++ * "tk->size_shift" is always non-zero for !is_zone_device_page(),
++ * "tk->size_shift == 0" effectively checks for no mapping on
++ * ZONE_DEVICE. Indeed, when a devdax page is mmapped N times
++ * into a process' address space, it's possible that not all N
++ * VMAs contain mappings for the page, but at least one VMA does.
++ * Only deliver SIGBUS with a payload derived from the VMA that
++ * has a mapping for the page.
+ */
+- if (tk->addr == -EFAULT || tk->size_shift == 0) {
++ if (tk->addr == -EFAULT) {
+ pr_info("Memory failure: Unable to find user space address %lx in %s\n",
+ page_to_pfn(p), tsk->comm);
+- tk->addr_valid = 0;
++ } else if (tk->size_shift == 0) {
++ kfree(tk);
++ return;
+ }
+ get_task_struct(tsk);
+ tk->tsk = tsk;
+@@ -369,7 +373,7 @@ static void kill_procs(struct list_head
+ * make sure the process doesn't catch the
+ * signal and then access the memory. Just kill it.
+ */
+- if (fail || tk->addr_valid == 0) {
++ if (fail || tk->addr == -EFAULT) {
+ pr_err("Memory failure: %#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
+ pfn, tk->tsk->comm, tk->tsk->pid);
+ do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
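
The user-visible effect of the fix: SIGBUS, unlike SIGKILL, can be
caught, and its BUS_MCEERR_AR payload carries the failing address and
granularity. A minimal sketch of a handler that consumes that payload
(illustrative userspace code, not part of the patch):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Handle an uncorrectable memory error reported by the kernel. */
    static void on_sigbus(int sig, siginfo_t *si, void *ctx)
    {
            if (si->si_code == BUS_MCEERR_AR)
                    fprintf(stderr, "UE at %p, lsb %d\n",
                            si->si_addr, si->si_addr_lsb);
            _exit(1);
    }

    int main(void)
    {
            struct sigaction sa = {
                    .sa_sigaction = on_sigbus,
                    .sa_flags = SA_SIGINFO,
            };

            sigaction(SIGBUS, &sa, NULL);
            /* ... mmap the dax device and read the poisoned page ... */
            return 0;
    }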
--- /dev/null
+From 96c804a6ae8c59a9092b3d5dd581198472063184 Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <david@redhat.com>
+Date: Fri, 18 Oct 2019 20:19:23 -0700
+Subject: mm/memory-failure.c: don't access uninitialized memmaps in memory_failure()
+
+From: David Hildenbrand <david@redhat.com>
+
+commit 96c804a6ae8c59a9092b3d5dd581198472063184 upstream.
+
+We should use pfn_to_online_page() so we do not access uninitialized
+memmaps. Reshuffle the code so we don't have to duplicate the error
+message.
+
+Link: http://lkml.kernel.org/r/20191009142435.3975-3-david@redhat.com
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online") [visible after d0dc12e86b319]
+Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Michal Hocko <mhocko@kernel.org>
+Cc: <stable@vger.kernel.org> [4.13+]
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/memory-failure.c | 14 ++++++++------
+ 1 file changed, 8 insertions(+), 6 deletions(-)
+
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1258,17 +1258,19 @@ int memory_failure(unsigned long pfn, in
+ if (!sysctl_memory_failure_recovery)
+ panic("Memory failure on page %lx", pfn);
+
+- if (!pfn_valid(pfn)) {
++ p = pfn_to_online_page(pfn);
++ if (!p) {
++ if (pfn_valid(pfn)) {
++ pgmap = get_dev_pagemap(pfn, NULL);
++ if (pgmap)
++ return memory_failure_dev_pagemap(pfn, flags,
++ pgmap);
++ }
+ pr_err("Memory failure: %#lx: memory outside kernel control\n",
+ pfn);
+ return -ENXIO;
+ }
+
+- pgmap = get_dev_pagemap(pfn, NULL);
+- if (pgmap)
+- return memory_failure_dev_pagemap(pfn, flags, pgmap);
+-
+- p = pfn_to_page(pfn);
+ if (PageHuge(p))
+ return memory_failure_hugetlb(pfn, flags);
+ if (TestSetPageHWPoison(p)) {
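
The reshuffled check exploits the fact that ZONE_DEVICE pages are
pfn_valid() yet never online, so the three possible cases separate
cleanly. In outline (a sketch of the resulting flow, with the branch
outcomes as comments):

    p = pfn_to_online_page(pfn);
    if (p) {
            /* online, buddy-managed memory: memmap initialized, proceed */
    } else if (pfn_valid(pfn) && (pgmap = get_dev_pagemap(pfn, NULL))) {
            /* ZONE_DEVICE: take the memory_failure_dev_pagemap() path */
    } else {
            /* offline memory or hole: memmap may be garbage, -ENXIO */
    }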
--- /dev/null
+From a26ee565b6cd8dc2bf15ff6aa70bbb28f928b773 Mon Sep 17 00:00:00 2001
+From: Qian Cai <cai@lca.pw>
+Date: Fri, 18 Oct 2019 20:19:29 -0700
+Subject: mm/page_owner: don't access uninitialized memmaps when reading /proc/pagetypeinfo
+
+From: Qian Cai <cai@lca.pw>
+
+commit a26ee565b6cd8dc2bf15ff6aa70bbb28f928b773 upstream.
+
+Uninitialized memmaps contain garbage and in the worst case trigger
+kernel BUGs, especially with CONFIG_PAGE_POISONING. They should not get
+touched.
+
+For example, when not onlining a memory block that is spanned by a zone
+and reading /proc/pagetypeinfo with CONFIG_DEBUG_VM_PGFLAGS and
+CONFIG_PAGE_POISONING, we can trigger a kernel BUG:
+
+ :/# echo 1 > /sys/devices/system/memory/memory40/online
+ :/# echo 1 > /sys/devices/system/memory/memory42/online
+ :/# cat /proc/pagetypeinfo > test.file
+ page:fffff2c585200000 is uninitialized and poisoned
+ raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
+ raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
+ page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
+ There is not page extension available.
+ ------------[ cut here ]------------
+ kernel BUG at include/linux/mm.h:1107!
+ invalid opcode: 0000 [#1] SMP NOPTI
+
+Please note that this change does not affect ZONE_DEVICE, because
+pagetypeinfo_showmixedcount_print() is called from
+mm/vmstat.c:pagetypeinfo_showmixedcount() only for populated zones, and
+ZONE_DEVICE is never populated (zone->present_pages is always 0).
+
+[david@redhat.com: move check to outer loop, add comment, rephrase description]
+Link: http://lkml.kernel.org/r/20191011140638.8160-1-david@redhat.com
+Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online") # visible after d0dc12e86b319
+Signed-off-by: Qian Cai <cai@lca.pw>
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
+Cc: Miles Chen <miles.chen@mediatek.com>
+Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
+Cc: Qian Cai <cai@lca.pw>
+Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: <stable@vger.kernel.org> [4.13+]
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/page_owner.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/mm/page_owner.c
++++ b/mm/page_owner.c
+@@ -273,7 +273,8 @@ void pagetypeinfo_showmixedcount_print(s
+ * not matter as the mixed block count will still be correct
+ */
+ for (; pfn < end_pfn; ) {
+- if (!pfn_valid(pfn)) {
++ page = pfn_to_online_page(pfn);
++ if (!page) {
+ pfn = ALIGN(pfn + 1, MAX_ORDER_NR_PAGES);
+ continue;
+ }
+@@ -281,13 +282,13 @@ void pagetypeinfo_showmixedcount_print(s
+ block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+ block_end_pfn = min(block_end_pfn, end_pfn);
+
+- page = pfn_to_page(pfn);
+ pageblock_mt = get_pageblock_migratetype(page);
+
+ for (; pfn < block_end_pfn; pfn++) {
+ if (!pfn_valid_within(pfn))
+ continue;
+
++ /* The pageblock is online, no need to recheck. */
+ page = pfn_to_page(pfn);
+
+ if (page_zone(page) != zone)
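
Checking pfn_to_online_page() once per pageblock is sufficient here
because memory is onlined in units of whole sections and, on common
configurations, a section is at least a pageblock, so a pageblock is
either fully online or not at all. The loop shape after the patch, in
outline:

    for (; pfn < end_pfn; ) {
            page = pfn_to_online_page(pfn);         /* once per block */
            if (!page) {
                    pfn = ALIGN(pfn + 1, MAX_ORDER_NR_PAGES);
                    continue;
            }
            block_end_pfn = min(ALIGN(pfn + 1, pageblock_nr_pages),
                                end_pfn);
            for (; pfn < block_end_pfn; pfn++) {
                    if (!pfn_valid_within(pfn))
                            continue;
                    page = pfn_to_page(pfn);        /* block is online */
                    /* ... accumulate mixed-block statistics ... */
            }
    }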
--- /dev/null
+From e4f8e513c3d353c134ad4eef9fd0bba12406c7c8 Mon Sep 17 00:00:00 2001
+From: Qian Cai <cai@lca.pw>
+Date: Mon, 14 Oct 2019 14:11:51 -0700
+Subject: mm/slub: fix a deadlock in show_slab_objects()
+
+From: Qian Cai <cai@lca.pw>
+
+commit e4f8e513c3d353c134ad4eef9fd0bba12406c7c8 upstream.
+
+A long time ago we fixed a similar deadlock in show_slab_objects() [1].
+However, apparently due to commits like 01fb58bcba63 ("slab: remove
+synchronous synchronize_sched() from memcg cache deactivation path")
+and 03afc0e25f7f ("slab: get_online_mems for
+kmem_cache_{create,destroy,shrink}"), this kind of deadlock is back:
+merely reading files in /sys/kernel/slab will generate the lockdep
+splat below.
+
+Since the "mem_hotplug_lock" here is only to obtain a stable online node
+mask while racing with NUMA node hotplug, in the worst case, the results
+may me miscalculated while doing NUMA node hotplug, but they shall be
+corrected by later reads of the same files.
+
+ WARNING: possible circular locking dependency detected
+ ------------------------------------------------------
+ cat/5224 is trying to acquire lock:
+ ffff900012ac3120 (mem_hotplug_lock.rw_sem){++++}, at:
+ show_slab_objects+0x94/0x3a8
+
+ but task is already holding lock:
+ b8ff009693eee398 (kn->count#45){++++}, at: kernfs_seq_start+0x44/0xf0
+
+ which lock already depends on the new lock.
+
+ the existing dependency chain (in reverse order) is:
+
+ -> #2 (kn->count#45){++++}:
+ lock_acquire+0x31c/0x360
+ __kernfs_remove+0x290/0x490
+ kernfs_remove+0x30/0x44
+ sysfs_remove_dir+0x70/0x88
+ kobject_del+0x50/0xb0
+ sysfs_slab_unlink+0x2c/0x38
+ shutdown_cache+0xa0/0xf0
+ kmemcg_cache_shutdown_fn+0x1c/0x34
+ kmemcg_workfn+0x44/0x64
+ process_one_work+0x4f4/0x950
+ worker_thread+0x390/0x4bc
+ kthread+0x1cc/0x1e8
+ ret_from_fork+0x10/0x18
+
+ -> #1 (slab_mutex){+.+.}:
+ lock_acquire+0x31c/0x360
+ __mutex_lock_common+0x16c/0xf78
+ mutex_lock_nested+0x40/0x50
+ memcg_create_kmem_cache+0x38/0x16c
+ memcg_kmem_cache_create_func+0x3c/0x70
+ process_one_work+0x4f4/0x950
+ worker_thread+0x390/0x4bc
+ kthread+0x1cc/0x1e8
+ ret_from_fork+0x10/0x18
+
+ -> #0 (mem_hotplug_lock.rw_sem){++++}:
+ validate_chain+0xd10/0x2bcc
+ __lock_acquire+0x7f4/0xb8c
+ lock_acquire+0x31c/0x360
+ get_online_mems+0x54/0x150
+ show_slab_objects+0x94/0x3a8
+ total_objects_show+0x28/0x34
+ slab_attr_show+0x38/0x54
+ sysfs_kf_seq_show+0x198/0x2d4
+ kernfs_seq_show+0xa4/0xcc
+ seq_read+0x30c/0x8a8
+ kernfs_fop_read+0xa8/0x314
+ __vfs_read+0x88/0x20c
+ vfs_read+0xd8/0x10c
+ ksys_read+0xb0/0x120
+ __arm64_sys_read+0x54/0x88
+ el0_svc_handler+0x170/0x240
+ el0_svc+0x8/0xc
+
+ other info that might help us debug this:
+
+ Chain exists of:
+ mem_hotplug_lock.rw_sem --> slab_mutex --> kn->count#45
+
+ Possible unsafe locking scenario:
+
+ CPU0 CPU1
+ ---- ----
+ lock(kn->count#45);
+ lock(slab_mutex);
+ lock(kn->count#45);
+ lock(mem_hotplug_lock.rw_sem);
+
+ *** DEADLOCK ***
+
+ 3 locks held by cat/5224:
+ #0: 9eff00095b14b2a0 (&p->lock){+.+.}, at: seq_read+0x4c/0x8a8
+ #1: 0eff008997041480 (&of->mutex){+.+.}, at: kernfs_seq_start+0x34/0xf0
+ #2: b8ff009693eee398 (kn->count#45){++++}, at:
+ kernfs_seq_start+0x44/0xf0
+
+ stack backtrace:
+ Call trace:
+ dump_backtrace+0x0/0x248
+ show_stack+0x20/0x2c
+ dump_stack+0xd0/0x140
+ print_circular_bug+0x368/0x380
+ check_noncircular+0x248/0x250
+ validate_chain+0xd10/0x2bcc
+ __lock_acquire+0x7f4/0xb8c
+ lock_acquire+0x31c/0x360
+ get_online_mems+0x54/0x150
+ show_slab_objects+0x94/0x3a8
+ total_objects_show+0x28/0x34
+ slab_attr_show+0x38/0x54
+ sysfs_kf_seq_show+0x198/0x2d4
+ kernfs_seq_show+0xa4/0xcc
+ seq_read+0x30c/0x8a8
+ kernfs_fop_read+0xa8/0x314
+ __vfs_read+0x88/0x20c
+ vfs_read+0xd8/0x10c
+ ksys_read+0xb0/0x120
+ __arm64_sys_read+0x54/0x88
+ el0_svc_handler+0x170/0x240
+ el0_svc+0x8/0xc
+
+I think it is important to mention that this doesn't expose
+show_slab_objects() to a use-after-free. There is only a single path
+that might really race here, namely the slab hotplug notifier callback
+__kmem_cache_shrink() (via slab_mem_going_offline_callback()), but that
+path doesn't really destroy the kmem_cache_node data structures.
+
+[1] http://lkml.iu.edu/hypermail/linux/kernel/1101.0/02850.html
+
+[akpm@linux-foundation.org: add comment explaining why we don't need mem_hotplug_lock]
+Link: http://lkml.kernel.org/r/1570192309-10132-1-git-send-email-cai@lca.pw
+Fixes: 01fb58bcba63 ("slab: remove synchronous synchronize_sched() from memcg cache deactivation path")
+Fixes: 03afc0e25f7f ("slab: get_online_mems for kmem_cache_{create,destroy,shrink}")
+Signed-off-by: Qian Cai <cai@lca.pw>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Cc: Christoph Lameter <cl@linux.com>
+Cc: Pekka Enberg <penberg@kernel.org>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Cc: Tejun Heo <tj@kernel.org>
+Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
+Cc: Roman Gushchin <guro@fb.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/slub.c | 13 +++++++++++--
+ 1 file changed, 11 insertions(+), 2 deletions(-)
+
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -4797,7 +4797,17 @@ static ssize_t show_slab_objects(struct
+ }
+ }
+
+- get_online_mems();
++ /*
++ * It is impossible to take "mem_hotplug_lock" here with "kernfs_mutex"
++ * already held, as that would conflict with an existing lock order:
++ *
++ * mem_hotplug_lock->slab_mutex->kernfs_mutex
++ *
++ * We don't really need mem_hotplug_lock (to hold off
++ * slab_mem_going_offline_callback) here because slab's memory hot
++ * unplug code doesn't destroy the kmem_cache->node[] data.
++ */
++
+ #ifdef CONFIG_SLUB_DEBUG
+ if (flags & SO_ALL) {
+ struct kmem_cache_node *n;
+@@ -4838,7 +4848,6 @@ static ssize_t show_slab_objects(struct
+ x += sprintf(buf + x, " N%d=%lu",
+ node, nodes[node]);
+ #endif
+- put_online_mems();
+ kfree(nodes);
+ return x + sprintf(buf + x, "\n");
+ }
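
In lock-chain form, the inversion this removes (the read side must not
nest mem_hotplug_lock under the kernfs node count when the shutdown
side already establishes the opposite order):

    cache shutdown:   mem_hotplug_lock -> slab_mutex -> kn->count
    sysfs read (old): kn->count -> mem_hotplug_lock      <- inversion
    sysfs read (new): kn->count only; mem_hotplug_lock not taken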
--- /dev/null
+From c07d0073b9ec80a139d07ebf78e9c30d2a28279e Mon Sep 17 00:00:00 2001
+From: Faiz Abbas <faiz_abbas@ti.com>
+Date: Tue, 15 Oct 2019 00:08:49 +0530
+Subject: mmc: cqhci: Commit descriptors before setting the doorbell
+
+From: Faiz Abbas <faiz_abbas@ti.com>
+
+commit c07d0073b9ec80a139d07ebf78e9c30d2a28279e upstream.
+
+Add a write memory barrier to make sure that descriptors are actually
+written to memory, before ringing the doorbell.
+
+Signed-off-by: Faiz Abbas <faiz_abbas@ti.com>
+Acked-by: Adrian Hunter <adrian.hunter@intel.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/mmc/host/cqhci.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/mmc/host/cqhci.c
++++ b/drivers/mmc/host/cqhci.c
+@@ -617,7 +617,8 @@ static int cqhci_request(struct mmc_host
+ cq_host->slot[tag].flags = 0;
+
+ cq_host->qcnt += 1;
+-
++ /* Make sure descriptors are ready before ringing the doorbell */
++ wmb();
+ cqhci_writel(cq_host, 1 << tag, CQHCI_TDBR);
+ if (!(cqhci_readl(cq_host, CQHCI_TDBR) & (1 << tag)))
+ pr_debug("%s: cqhci: doorbell not set for tag %d\n",
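
The pattern enforced here is the usual descriptor-ring ordering rule:
all descriptor stores must be visible before the doorbell store that
lets the controller fetch them. cqhci_writel() can expand to
writel_relaxed() (no implied barrier), hence the explicit wmb(). In
outline, with prepare_task_descriptor() standing in for the
cqhci_prep_* helpers in this file:

    prepare_task_descriptor(cq_host, mrq, tag); /* DMA-visible memory */
    wmb();                       /* descriptors before doorbell */
    cqhci_writel(cq_host, 1 << tag, CQHCI_TDBR); /* controller fetches */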
drm-amdgpu-bail-earlier-when-amdgpu.cik_-si_support-is-not-set-to-1.patch
drivers-base-memory.c-don-t-access-uninitialized-memmaps-in-soft_offline_page_store.patch
fs-proc-page.c-don-t-access-uninitialized-memmaps-in-fs-proc-page.c.patch
+mmc-cqhci-commit-descriptors-before-setting-the-doorbell.patch
+mm-memory-failure.c-don-t-access-uninitialized-memmaps-in-memory_failure.patch
+mm-slub-fix-a-deadlock-in-show_slab_objects.patch
+mm-page_owner-don-t-access-uninitialized-memmaps-when-reading-proc-pagetypeinfo.patch
+hugetlbfs-don-t-access-uninitialized-memmaps-in-pfn_range_valid_gigantic.patch
+mm-memory-failure-poison-read-receives-sigkill-instead-of-sigbus-if-mmaped-more-than-once.patch
+xtensa-drop-export_symbol-for-outs-ins.patch
--- /dev/null
+From 8b39da985194aac2998dd9e3a22d00b596cebf1e Mon Sep 17 00:00:00 2001
+From: Max Filippov <jcmvbkbc@gmail.com>
+Date: Mon, 14 Oct 2019 15:48:19 -0700
+Subject: xtensa: drop EXPORT_SYMBOL for outs*/ins*
+
+From: Max Filippov <jcmvbkbc@gmail.com>
+
+commit 8b39da985194aac2998dd9e3a22d00b596cebf1e upstream.
+
+Custom outs*/ins* implementations are long gone from the xtensa port;
+remove the matching EXPORT_SYMBOLs.
+This fixes the following build warnings issued by modpost since commit
+15bfc2348d54 ("modpost: check for static EXPORT_SYMBOL* functions"):
+
+ WARNING: "insb" [vmlinux] is a static EXPORT_SYMBOL
+ WARNING: "insw" [vmlinux] is a static EXPORT_SYMBOL
+ WARNING: "insl" [vmlinux] is a static EXPORT_SYMBOL
+ WARNING: "outsb" [vmlinux] is a static EXPORT_SYMBOL
+ WARNING: "outsw" [vmlinux] is a static EXPORT_SYMBOL
+ WARNING: "outsl" [vmlinux] is a static EXPORT_SYMBOL
+
+Cc: stable@vger.kernel.org
+Fixes: d38efc1f150f ("xtensa: adopt generic io routines")
+Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/xtensa/kernel/xtensa_ksyms.c | 7 -------
+ 1 file changed, 7 deletions(-)
+
+--- a/arch/xtensa/kernel/xtensa_ksyms.c
++++ b/arch/xtensa/kernel/xtensa_ksyms.c
+@@ -119,13 +119,6 @@ EXPORT_SYMBOL(__invalidate_icache_range)
+ // FIXME EXPORT_SYMBOL(screen_info);
+ #endif
+
+-EXPORT_SYMBOL(outsb);
+-EXPORT_SYMBOL(outsw);
+-EXPORT_SYMBOL(outsl);
+-EXPORT_SYMBOL(insb);
+-EXPORT_SYMBOL(insw);
+-EXPORT_SYMBOL(insl);
+-
+ extern long common_exception_return;
+ EXPORT_SYMBOL(common_exception_return);
+
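
The warnings arise because insb()/outsb() and friends come from the
generic helpers on xtensa, which are defined static inline, so the
EXPORT_SYMBOL entries were exporting names with no global definition.
Roughly the shape modpost now flags (simplified):

    /* asm-generic/io.h: the only definition is static inline */
    static inline void insb(unsigned long addr, void *buffer,
                            unsigned int count)
    { /* ... */ }

    /* arch/xtensa/kernel/xtensa_ksyms.c, before this patch: */
    EXPORT_SYMBOL(insb); /* "insb" [vmlinux] is a static EXPORT_SYMBOL */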