--- /dev/null
+From 02186d8897d49b0afd3c80b6cf23437d91024065 Mon Sep 17 00:00:00 2001
+From: Dan Williams <dan.j.williams@intel.com>
+Date: Fri, 18 Sep 2020 12:51:15 -0700
+Subject: dm/dax: Fix table reference counts
+
+From: Dan Williams <dan.j.williams@intel.com>
+
+commit 02186d8897d49b0afd3c80b6cf23437d91024065 upstream.
+
+A recent fix to the dm_dax_supported() flow uncovered a latent bug. When
+dm_get_live_table() fails, the caller is still required to drop the
+srcu_read_lock(). Without this change the lvm2 test-suite triggers this
+warning:
+
+ # lvm2-testsuite --only pvmove-abort-all.sh
+
+ WARNING: lock held when returning to user space!
+ 5.9.0-rc5+ #251 Tainted: G OE
+ ------------------------------------------------
+ lvm/1318 is leaving the kernel with locks still held!
+ 1 lock held by lvm/1318:
+ #0: ffff9372abb5a340 (&md->io_barrier){....}-{0:0}, at: dm_get_live_table+0x5/0xb0 [dm_mod]
+
+...and later on this hang signature:
+
+ INFO: task lvm:1344 blocked for more than 122 seconds.
+ Tainted: G OE 5.9.0-rc5+ #251
+ "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ task:lvm state:D stack: 0 pid: 1344 ppid: 1 flags:0x00004000
+ Call Trace:
+ __schedule+0x45f/0xa80
+ ? finish_task_switch+0x249/0x2c0
+ ? wait_for_completion+0x86/0x110
+ schedule+0x5f/0xd0
+ schedule_timeout+0x212/0x2a0
+ ? __schedule+0x467/0xa80
+ ? wait_for_completion+0x86/0x110
+ wait_for_completion+0xb0/0x110
+ __synchronize_srcu+0xd1/0x160
+ ? __bpf_trace_rcu_utilization+0x10/0x10
+ __dm_suspend+0x6d/0x210 [dm_mod]
+ dm_suspend+0xf6/0x140 [dm_mod]
+
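+In outline (condensed from the hunk below), the fixed dm_dax_supported()
+keeps a single exit path so that dm_put_live_table() runs, and the SRCU
+read lock taken by dm_get_live_table() is dropped, even when no live
+table is available:
+
+	bool ret = false;
+	int srcu_idx;
+
+	map = dm_get_live_table(md, &srcu_idx);	/* takes srcu_read_lock() */
+	if (!map)
+		goto out;
+	ret = dm_table_supports_dax(map, device_supports_dax, &blocksize);
+out:
+	dm_put_live_table(md, srcu_idx);	/* always drops the lock */
+	return ret;
+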
+Fixes: 7bf7eac8d648 ("dax: Arrange for dax_supported check to span multiple devices")
+Cc: <stable@vger.kernel.org>
+Cc: Jan Kara <jack@suse.cz>
+Cc: Alasdair Kergon <agk@redhat.com>
+Cc: Mike Snitzer <snitzer@redhat.com>
+Reported-by: Adrian Huang <ahuang12@lenovo.com>
+Reviewed-by: Ira Weiny <ira.weiny@intel.com>
+Tested-by: Adrian Huang <ahuang12@lenovo.com>
+Link: https://lore.kernel.org/r/160045867590.25663.7548541079217827340.stgit@dwillia2-desk3.amr.corp.intel.com
+Signed-off-by: Dan Williams <dan.j.williams@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/md/dm.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1112,15 +1112,16 @@ static bool dm_dax_supported(struct dax_
+ {
+ struct mapped_device *md = dax_get_private(dax_dev);
+ struct dm_table *map;
++ bool ret = false;
+ int srcu_idx;
+- bool ret;
+
+ map = dm_get_live_table(md, &srcu_idx);
+ if (!map)
+- return false;
++ goto out;
+
+ ret = dm_table_supports_dax(map, device_supports_dax, &blocksize);
+
++out:
+ dm_put_live_table(md, srcu_idx);
+
+ return ret;
--- /dev/null
+From 29231826f3bd65500118c473fccf31c0cf14dbc0 Mon Sep 17 00:00:00 2001
+From: Quentin Perret <qperret@google.com>
+Date: Wed, 16 Sep 2020 18:18:25 +0100
+Subject: ehci-hcd: Move include to keep CRC stable
+
+From: Quentin Perret <qperret@google.com>
+
+commit 29231826f3bd65500118c473fccf31c0cf14dbc0 upstream.
+
+The CRC calculation done by genksyms is triggered when the parser hits
+EXPORT_SYMBOL*() macros. At this point, genksyms recursively expands the
+types of the function parameters, and uses that as the input for the CRC
+calculation. In the case of forward-declared structs, the type expands
+to 'UNKNOWN'. Following this, it appears that the result of the
+expansion of each type is cached somewhere, and seems to be re-used
+when/if the same type is seen again for another exported symbol in the
+same C file.
+
+Unfortunately, this can cause CRC 'stability' issues when a struct
+definition becomes visible in the middle of a C file. For example, let's
+assume code with the following pattern:
+
+ struct foo;
+
+ int bar(struct foo *arg)
+ {
+ /* Do work ... */
+ }
+ EXPORT_SYMBOL_GPL(bar);
+
+ /* This contains struct foo's definition */
+ #include "foo.h"
+
+ int baz(struct foo *arg)
+ {
+ /* Do more work ... */
+ }
+ EXPORT_SYMBOL_GPL(baz);
+
+Here, baz's CRC will be computed using the expansion of struct foo that
+was cached after bar's CRC calculation ('UNKNOWN' here). But if
+EXPORT_SYMBOL_GPL(bar) is removed from the file (because of e.g. symbol
+trimming using CONFIG_TRIM_UNUSED_KSYMS), struct foo will be expanded
+late, during baz's CRC calculation, which now has visibility over the
+full struct definition, hence resulting in a different CRC for baz.
+
+The proper fix for this certainly is in genksyms, but that will take me
+some time to get right. In the meantime, we have seen one occurrence of
+this in the ehci-hcd code which hits this problem because of the way it
+includes C files halfway through the code together with an unlucky mix
+of symbol trimming.
+
+In order to work around this, move the include done in ehci-hub.c early
+in ehci-hcd.c, hence making sure the struct definitions are visible to
+the entire file. This improves CRC stability of the ehci-hcd exports
+even when symbol trimming is enabled.
+
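+Applied to the pattern above, the workaround amounts to hoisting the
+include ahead of the first export so that both CRCs are computed
+against the full struct definition (a sketch, not the actual ehci
+code):
+
+	#include "foo.h"	/* struct foo fully defined up front */
+
+	int bar(struct foo *arg) { /* ... */ }
+	EXPORT_SYMBOL_GPL(bar);
+
+	int baz(struct foo *arg) { /* ... */ }
+	EXPORT_SYMBOL_GPL(baz);
+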
+Acked-by: Alan Stern <stern@rowland.harvard.edu>
+Cc: stable <stable@vger.kernel.org>
+Signed-off-by: Quentin Perret <qperret@google.com>
+Link: https://lore.kernel.org/r/20200916171825.3228122-1-qperret@google.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/usb/host/ehci-hcd.c | 1 +
+ drivers/usb/host/ehci-hub.c | 1 -
+ 2 files changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -22,6 +22,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/usb.h>
+ #include <linux/usb/hcd.h>
++#include <linux/usb/otg.h>
+ #include <linux/moduleparam.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/debugfs.h>
+--- a/drivers/usb/host/ehci-hub.c
++++ b/drivers/usb/host/ehci-hub.c
+@@ -14,7 +14,6 @@
+ */
+
+ /*-------------------------------------------------------------------------*/
+-#include <linux/usb/otg.h>
+
+ #define PORT_WAKE_BITS (PORT_WKOC_E|PORT_WKDISC_E|PORT_WKCONN_E)
+
--- /dev/null
+From 9683182612214aa5f5e709fad49444b847cd866a Mon Sep 17 00:00:00 2001
+From: Pavel Tatashin <pasha.tatashin@soleen.com>
+Date: Fri, 18 Sep 2020 21:20:31 -0700
+Subject: mm/memory_hotplug: drain per-cpu pages again during memory offline
+
+From: Pavel Tatashin <pasha.tatashin@soleen.com>
+
+commit 9683182612214aa5f5e709fad49444b847cd866a upstream.
+
+There is a race during page offline that can lead to an infinite loop:
+a page never ends up on a buddy list and __offline_pages() keeps
+retrying infinitely or until a termination signal is received.
+
+Thread#1 - a new process:
+
+load_elf_binary
+ begin_new_exec
+ exec_mmap
+ mmput
+ exit_mmap
+ tlb_finish_mmu
+ tlb_flush_mmu
+ release_pages
+ free_unref_page_list
+ free_unref_page_prepare
+ set_pcppage_migratetype(page, migratetype);
+ // Set page->index migration type below MIGRATE_PCPTYPES
+
+Thread#2 - hot-removes memory
+__offline_pages
+ start_isolate_page_range
+ set_migratetype_isolate
+ set_pageblock_migratetype(page, MIGRATE_ISOLATE);
+ // set pageblock migration type to MIGRATE_ISOLATE
+ drain_all_pages(zone);
+ // drain per-cpu page lists to buddy allocator.
+
+Thread#1 - continue
+ free_unref_page_commit
+ migratetype = get_pcppage_migratetype(page);
+ // get old migration type
+ list_add(&page->lru, &pcp->lists[migratetype]);
+ // add new page to already drained pcp list
+
+Thread#2
+Never drains pcp again, and therefore gets stuck in the loop.
+
+The fix is to try to drain per-cpu lists again after
+check_pages_isolated_cb() fails.
+
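+Condensed from the hunk below (other work in the loop body omitted),
+the isolation re-check in __offline_pages() then retries with an extra
+drain:
+
+	do {
+		/* ... migrate and free the range ... */
+		ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
+					    NULL, check_pages_isolated_cb);
+		if (ret)
+			/* may have raced with a pcp free; flush once more */
+			drain_all_pages(zone);
+	} while (ret);
+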
+Fixes: c52e75935f8d ("mm: remove extra drain pages on pcp list")
+Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Acked-by: David Rientjes <rientjes@google.com>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Acked-by: David Hildenbrand <david@redhat.com>
+Cc: Oscar Salvador <osalvador@suse.de>
+Cc: Wei Yang <richard.weiyang@gmail.com>
+Cc: <stable@vger.kernel.org>
+Link: https://lkml.kernel.org/r/20200903140032.380431-1-pasha.tatashin@soleen.com
+Link: https://lkml.kernel.org/r/20200904151448.100489-2-pasha.tatashin@soleen.com
+Link: http://lkml.kernel.org/r/20200904070235.GA15277@dhcp22.suse.cz
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/memory_hotplug.c | 14 ++++++++++++++
+ mm/page_isolation.c | 8 ++++++++
+ 2 files changed, 22 insertions(+)
+
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1566,6 +1566,20 @@ static int __ref __offline_pages(unsigne
+ /* check again */
+ ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
+ NULL, check_pages_isolated_cb);
++ /*
++ * per-cpu pages are drained in start_isolate_page_range, but if
++ * there are still pages that are not free, make sure that we
++ * drain again, because when we isolated range we might
++ * have raced with another thread that was adding pages to pcp
++ * list.
++ *
++ * Forward progress should be still guaranteed because
++ * pages on the pcp list can only belong to MOVABLE_ZONE
++ * because has_unmovable_pages explicitly checks for
++ * PageBuddy on freed pages on other zones.
++ */
++ if (ret)
++ drain_all_pages(zone);
+ } while (ret);
+
+ /* Ok, all of our target is isolated.
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -187,6 +187,14 @@ __first_valid_page(unsigned long pfn, un
+ * pageblocks we may have modified and return -EBUSY to caller. This
+ * prevents two threads from simultaneously working on overlapping ranges.
+ *
++ * Please note that there is no strong synchronization with the page allocator
++ * either. Pages might be freed while their page blocks are marked ISOLATED.
++ * In some cases pages might still end up on pcp lists and that would allow
++ * for their allocation even when they are in fact isolated already. Depending
++ * on how strong of a guarantee the caller needs drain_all_pages might be needed
++ * (e.g. __offline_pages will need to call it after check for isolated range for
++ * a next retry).
++ *
+ * Return: the number of isolated pageblocks on success and -EBUSY if any part
+ * of range cannot be isolated.
+ */
--- /dev/null
+From 437ef802e0adc9f162a95213a3488e8646e5fc03 Mon Sep 17 00:00:00 2001
+From: Alexey Kardashevskiy <aik@ozlabs.ru>
+Date: Tue, 8 Sep 2020 11:51:06 +1000
+Subject: powerpc/dma: Fix dma_map_ops::get_required_mask
+
+From: Alexey Kardashevskiy <aik@ozlabs.ru>
+
+commit 437ef802e0adc9f162a95213a3488e8646e5fc03 upstream.
+
+There are 2 problems with it:
+ 1. "<" vs expected "<<"
+ 2. the shift count covers only the IOMMU page number, not the
+    address, as the IOMMU page shift is missing.
+
+This did not hit us before f1565c24b596 ("powerpc: use the generic
+dma_ops_bypass mode") because we had additional code to handle bypass
+mask, so this chunk (almost?) never executed. However, there were
+reports that aacraid does not work with "iommu=nobypass".
+
+After f1565c24b596, aacraid (and probably others which call
+dma_get_required_mask() before setting the mask) was unable to enable
+64bit DMA and fell back to using the IOMMU, which was known not to work;
+one of the problems is a double free of an IOMMU page.
+
+This fixes DMA for aacraid, both with and without "iommu=nobypass" in
+the kernel command line. Verified with "stress-ng -d 4".
+
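+A sketch of the corrected computation (matching the hunk below): the
+old "1ULL < (...)" was a comparison evaluating to 0 or 1 rather than a
+shift, and the shift count now includes the IOMMU page shift so the
+result is a byte-address mask rather than a page-number mask:
+
+	mask = 1ULL << (fls_long(tbl->it_offset + tbl->it_size) +
+			tbl->it_page_shift - 1);
+	mask += mask - 1;	/* fill in all lower bits */
+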
+Fixes: 6a5c7be5e484 ("powerpc: Override dma_get_required_mask by platform hook and ops")
+Cc: stable@vger.kernel.org # v3.2+
+Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/20200908015106.79661-1-aik@ozlabs.ru
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/powerpc/kernel/dma-iommu.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/arch/powerpc/kernel/dma-iommu.c
++++ b/arch/powerpc/kernel/dma-iommu.c
+@@ -160,7 +160,8 @@ u64 dma_iommu_get_required_mask(struct d
+ return bypass_mask;
+ }
+
+- mask = 1ULL < (fls_long(tbl->it_offset + tbl->it_size) - 1);
++ mask = 1ULL << (fls_long(tbl->it_offset + tbl->it_size) +
++ tbl->it_page_shift - 1);
+ mask += mask - 1;
+
+ return mask;
--- /dev/null
+From b6186d7fb53349efd274263a45f0b08749ccaa2d Mon Sep 17 00:00:00 2001
+From: Harald Freudenberger <freude@linux.ibm.com>
+Date: Wed, 9 Sep 2020 11:59:43 +0200
+Subject: s390/zcrypt: fix kmalloc 256k failure
+
+From: Harald Freudenberger <freude@linux.ibm.com>
+
+commit b6186d7fb53349efd274263a45f0b08749ccaa2d upstream.
+
+Tests showed that under stress conditions the kernel may temporarily
+fail to allocate 256k with kmalloc. This fix therefore reworks the
+related code in the cca_findcard2() function to use kvmalloc instead.
+
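+As in the hunk below, the change switches the allocation to the
+kvmalloc family, which falls back to vmalloc when physically contiguous
+pages are not available, and pairs it with kvfree():
+
+	device_status = kvmalloc_array(MAX_ZDEV_ENTRIES_EXT,
+				       sizeof(struct zcrypt_device_status_ext),
+				       GFP_KERNEL);
+	if (!device_status)
+		return -ENOMEM;
+	/* ... query and filter the cards ... */
+	kvfree(device_status);
+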
+Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
+Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
+Cc: Stable <stable@vger.kernel.org>
+Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/s390/crypto/zcrypt_ccamisc.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+--- a/drivers/s390/crypto/zcrypt_ccamisc.c
++++ b/drivers/s390/crypto/zcrypt_ccamisc.c
+@@ -1684,9 +1684,9 @@ int cca_findcard2(u32 **apqns, u32 *nr_a
+ *nr_apqns = 0;
+
+ /* fetch status of all crypto cards */
+- device_status = kmalloc_array(MAX_ZDEV_ENTRIES_EXT,
+- sizeof(struct zcrypt_device_status_ext),
+- GFP_KERNEL);
++ device_status = kvmalloc_array(MAX_ZDEV_ENTRIES_EXT,
++ sizeof(struct zcrypt_device_status_ext),
++ GFP_KERNEL);
+ if (!device_status)
+ return -ENOMEM;
+ zcrypt_device_status_mask_ext(device_status);
+@@ -1754,7 +1754,7 @@ int cca_findcard2(u32 **apqns, u32 *nr_a
+ verify = 0;
+ }
+
+- kfree(device_status);
++ kvfree(device_status);
+ return rc;
+ }
+ EXPORT_SYMBOL(cca_findcard2);
--- /dev/null
+From 1ec882fc81e3177faf055877310dbdb0c68eb7db Mon Sep 17 00:00:00 2001
+From: Christophe Leroy <christophe.leroy@csgroup.eu>
+Date: Fri, 18 Sep 2020 21:20:28 -0700
+Subject: selftests/vm: fix display of page size in map_hugetlb
+
+From: Christophe Leroy <christophe.leroy@csgroup.eu>
+
+commit 1ec882fc81e3177faf055877310dbdb0c68eb7db upstream.
+
+The displayed size is in bytes while the text says it is in kB.
+
+Shift it by 10 to actually display kilobytes.
+
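+For example, with 2 MB hugepages the shift is 21: the old code printed
+"2097152 kB hugepages", whereas 1 << (21 - 10) = 2048 gives the
+intended "2048 kB hugepages".
+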
+Fixes: fa7b9a805c79 ("tools/selftest/vm: allow choosing mem size and page size in map_hugetlb")
+Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Cc: <stable@vger.kernel.org>
+Link: https://lkml.kernel.org/r/e27481224564a93d14106e750de31189deaa8bc8.1598861977.git.christophe.leroy@csgroup.eu
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/testing/selftests/vm/map_hugetlb.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/tools/testing/selftests/vm/map_hugetlb.c
++++ b/tools/testing/selftests/vm/map_hugetlb.c
+@@ -83,7 +83,7 @@ int main(int argc, char **argv)
+ }
+
+ if (shift)
+- printf("%u kB hugepages\n", 1 << shift);
++ printf("%u kB hugepages\n", 1 << (shift - 10));
+ else
+ printf("Default size hugepages\n");
+ printf("Mapping %lu Mbytes\n", (unsigned long)length >> 20);
input-i8042-add-entroware-proteus-el07r4-to-nomux-and-reset-lists.patch
serial-8250_pci-add-realtek-816a-and-816b.patch
x86-boot-compressed-disable-relocation-relaxation.patch
+s390-zcrypt-fix-kmalloc-256k-failure.patch
+ehci-hcd-move-include-to-keep-crc-stable.patch
+powerpc-dma-fix-dma_map_ops-get_required_mask.patch
+selftests-vm-fix-display-of-page-size-in-map_hugetlb.patch
+dm-dax-fix-table-reference-counts.patch
+mm-memory_hotplug-drain-per-cpu-pages-again-during-memory-offline.patch