From: Sasha Levin Date: Fri, 18 Jun 2021 11:47:19 +0000 (-0400) Subject: Fixes for 4.9 X-Git-Tag: v5.4.128~61 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=2b469b33607cedf4f82c31583f19e07d41acc226;p=thirdparty%2Fkernel%2Fstable-queue.git Fixes for 4.9 Signed-off-by: Sasha Levin --- diff --git a/queue-4.9/dmaengine-qcom_hidma_mgmt-depends-on-has_iomem.patch b/queue-4.9/dmaengine-qcom_hidma_mgmt-depends-on-has_iomem.patch new file mode 100644 index 00000000000..5b3140e1987 --- /dev/null +++ b/queue-4.9/dmaengine-qcom_hidma_mgmt-depends-on-has_iomem.patch @@ -0,0 +1,50 @@ +From fe57c619f2e2ac175dd82373f99b47b2bfc3cc3c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 21 May 2021 19:13:11 -0700 +Subject: dmaengine: QCOM_HIDMA_MGMT depends on HAS_IOMEM + +From: Randy Dunlap + +[ Upstream commit 0cfbb589d67f16fa55b26ae02b69c31b52e344b1 ] + +When CONFIG_HAS_IOMEM is not set/enabled, certain iomap() family +functions [including ioremap(), devm_ioremap(), etc.] are not +available. +Drivers that use these functions should depend on HAS_IOMEM so that +they do not cause build errors. + +Rectifies these build errors: +s390-linux-ld: drivers/dma/qcom/hidma_mgmt.o: in function `hidma_mgmt_probe': +hidma_mgmt.c:(.text+0x780): undefined reference to `devm_ioremap_resource' +s390-linux-ld: drivers/dma/qcom/hidma_mgmt.o: in function `hidma_mgmt_init': +hidma_mgmt.c:(.init.text+0x126): undefined reference to `of_address_to_resource' +s390-linux-ld: hidma_mgmt.c:(.init.text+0x16e): undefined reference to `of_address_to_resource' + +Fixes: 67a2003e0607 ("dmaengine: add Qualcomm Technologies HIDMA channel driver") +Signed-off-by: Randy Dunlap +Reported-by: kernel test robot +Cc: Sinan Kaya +Cc: Vinod Koul +Cc: dmaengine@vger.kernel.org +Link: https://lore.kernel.org/r/20210522021313.16405-3-rdunlap@infradead.org +Signed-off-by: Vinod Koul +Signed-off-by: Sasha Levin +--- + drivers/dma/qcom/Kconfig | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig +index a7761c4025f4..a97c7123d913 100644 +--- a/drivers/dma/qcom/Kconfig ++++ b/drivers/dma/qcom/Kconfig +@@ -9,6 +9,7 @@ config QCOM_BAM_DMA + + config QCOM_HIDMA_MGMT + tristate "Qualcomm Technologies HIDMA Management support" ++ depends on HAS_IOMEM + select DMA_ENGINE + help + Enable support for the Qualcomm Technologies HIDMA Management. +-- +2.30.2 + diff --git a/queue-4.9/dmaengine-stedma40-add-missing-iounmap-on-error-in-d.patch b/queue-4.9/dmaengine-stedma40-add-missing-iounmap-on-error-in-d.patch new file mode 100644 index 00000000000..62907dd7c4c --- /dev/null +++ b/queue-4.9/dmaengine-stedma40-add-missing-iounmap-on-error-in-d.patch @@ -0,0 +1,40 @@ +From b6176f40d76d5a35dfbb99481568946a0c2871d7 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 18 May 2021 22:11:08 +0800 +Subject: dmaengine: stedma40: add missing iounmap() on error in d40_probe() + +From: Yang Yingliang + +[ Upstream commit fffdaba402cea79b8d219355487d342ec23f91c6 ] + +Add the missing iounmap() before return from d40_probe() +in the error handling case. 
+ +Fixes: 8d318a50b3d7 ("DMAENGINE: Support for ST-Ericssons DMA40 block v3") +Reported-by: Hulk Robot +Signed-off-by: Yang Yingliang +Reviewed-by: Linus Walleij +Link: https://lore.kernel.org/r/20210518141108.1324127-1-yangyingliang@huawei.com +Signed-off-by: Vinod Koul +Signed-off-by: Sasha Levin +--- + drivers/dma/ste_dma40.c | 3 +++ + 1 file changed, 3 insertions(+) + +diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c +index 68b41daab3a8..bf7105814ee7 100644 +--- a/drivers/dma/ste_dma40.c ++++ b/drivers/dma/ste_dma40.c +@@ -3674,6 +3674,9 @@ static int __init d40_probe(struct platform_device *pdev) + + kfree(base->lcla_pool.base_unaligned); + ++ if (base->lcpa_base) ++ iounmap(base->lcpa_base); ++ + if (base->phy_lcpa) + release_mem_region(base->phy_lcpa, + base->lcpa_size); +-- +2.30.2 + diff --git a/queue-4.9/mm-hwpoison-change-pagehwpoison-behavior-on-hugetlb-.patch b/queue-4.9/mm-hwpoison-change-pagehwpoison-behavior-on-hugetlb-.patch new file mode 100644 index 00000000000..9df79fbb346 --- /dev/null +++ b/queue-4.9/mm-hwpoison-change-pagehwpoison-behavior-on-hugetlb-.patch @@ -0,0 +1,248 @@ +From 7e7c6ab11455dd54ef4d84ff556d73aa898aadd0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 10 Jul 2017 15:47:38 -0700 +Subject: mm: hwpoison: change PageHWPoison behavior on hugetlb pages + +From: Naoya Horiguchi + +[ Upstream commit b37ff71cc626a0c1b5e098ff9a0b723815f6aaeb ] + +We'd like to narrow down the error region in memory error on hugetlb +pages. However, currently we set PageHWPoison flags on all subpages in +the error hugepage and add # of subpages to num_hwpoison_pages, which +doesn't fit our purpose. + +So this patch changes the behavior and we only set PageHWPoison on the +head page then increase num_hwpoison_pages only by 1. This is a +preparation for narrow-down part which comes in later patches. 
+ +Link: http://lkml.kernel.org/r/1496305019-5493-4-git-send-email-n-horiguchi@ah.jp.nec.com +Signed-off-by: Naoya Horiguchi +Cc: Michal Hocko +Cc: "Aneesh Kumar K.V" +Cc: Anshuman Khandual +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Sasha Levin +--- + include/linux/swapops.h | 9 ----- + mm/memory-failure.c | 87 ++++++++++++----------------------------- + 2 files changed, 24 insertions(+), 72 deletions(-) + +diff --git a/include/linux/swapops.h b/include/linux/swapops.h +index 5c3a5f3e7eec..c5ff7b217ee6 100644 +--- a/include/linux/swapops.h ++++ b/include/linux/swapops.h +@@ -196,15 +196,6 @@ static inline void num_poisoned_pages_dec(void) + atomic_long_dec(&num_poisoned_pages); + } + +-static inline void num_poisoned_pages_add(long num) +-{ +- atomic_long_add(num, &num_poisoned_pages); +-} +- +-static inline void num_poisoned_pages_sub(long num) +-{ +- atomic_long_sub(num, &num_poisoned_pages); +-} + #else + + static inline swp_entry_t make_hwpoison_entry(struct page *page) +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index d6524dce43b2..ad156b42d2ad 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1010,22 +1010,6 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, + return ret; + } + +-static void set_page_hwpoison_huge_page(struct page *hpage) +-{ +- int i; +- int nr_pages = 1 << compound_order(hpage); +- for (i = 0; i < nr_pages; i++) +- SetPageHWPoison(hpage + i); +-} +- +-static void clear_page_hwpoison_huge_page(struct page *hpage) +-{ +- int i; +- int nr_pages = 1 << compound_order(hpage); +- for (i = 0; i < nr_pages; i++) +- ClearPageHWPoison(hpage + i); +-} +- + /** + * memory_failure - Handle memory failure of a page. + * @pfn: Page Number of the corrupted page +@@ -1051,7 +1035,6 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + struct page *hpage; + struct page *orig_head; + int res; +- unsigned int nr_pages; + unsigned long page_flags; + + if (!sysctl_memory_failure_recovery) +@@ -1065,24 +1048,23 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + + p = pfn_to_page(pfn); + orig_head = hpage = compound_head(p); ++ ++ /* tmporary check code, to be updated in later patches */ ++ if (PageHuge(p)) { ++ if (TestSetPageHWPoison(hpage)) { ++ pr_err("Memory failure: %#lx: already hardware poisoned\n", pfn); ++ return 0; ++ } ++ goto tmp; ++ } + if (TestSetPageHWPoison(p)) { + pr_err("Memory failure: %#lx: already hardware poisoned\n", + pfn); + return 0; + } + +- /* +- * Currently errors on hugetlbfs pages are measured in hugepage units, +- * so nr_pages should be 1 << compound_order. OTOH when errors are on +- * transparent hugepages, they are supposed to be split and error +- * measurement is done in normal page units. So nr_pages should be one +- * in this case. +- */ +- if (PageHuge(p)) +- nr_pages = 1 << compound_order(hpage); +- else /* normal page or thp */ +- nr_pages = 1; +- num_poisoned_pages_add(nr_pages); ++tmp: ++ num_poisoned_pages_inc(); + + /* + * We need/can do nothing about count=0 pages. +@@ -1110,12 +1092,11 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + if (PageHWPoison(hpage)) { + if ((hwpoison_filter(p) && TestClearPageHWPoison(p)) + || (p != hpage && TestSetPageHWPoison(hpage))) { +- num_poisoned_pages_sub(nr_pages); ++ num_poisoned_pages_dec(); + unlock_page(hpage); + return 0; + } + } +- set_page_hwpoison_huge_page(hpage); + res = dequeue_hwpoisoned_huge_page(hpage); + action_result(pfn, MF_MSG_FREE_HUGE, + res ? 
MF_IGNORED : MF_DELAYED); +@@ -1138,7 +1119,7 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + pr_err("Memory failure: %#lx: thp split failed\n", + pfn); + if (TestClearPageHWPoison(p)) +- num_poisoned_pages_sub(nr_pages); ++ num_poisoned_pages_dec(); + put_hwpoison_page(p); + return -EBUSY; + } +@@ -1202,14 +1183,14 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + */ + if (!PageHWPoison(p)) { + pr_err("Memory failure: %#lx: just unpoisoned\n", pfn); +- num_poisoned_pages_sub(nr_pages); ++ num_poisoned_pages_dec(); + unlock_page(hpage); + put_hwpoison_page(hpage); + return 0; + } + if (hwpoison_filter(p)) { + if (TestClearPageHWPoison(p)) +- num_poisoned_pages_sub(nr_pages); ++ num_poisoned_pages_dec(); + unlock_page(hpage); + put_hwpoison_page(hpage); + return 0; +@@ -1228,14 +1209,6 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + put_hwpoison_page(hpage); + return 0; + } +- /* +- * Set PG_hwpoison on all pages in an error hugepage, +- * because containment is done in hugepage unit for now. +- * Since we have done TestSetPageHWPoison() for the head page with +- * page lock held, we can safely set PG_hwpoison bits on tail pages. +- */ +- if (PageHuge(p)) +- set_page_hwpoison_huge_page(hpage); + + /* + * It's very difficult to mess with pages currently under IO +@@ -1407,7 +1380,6 @@ int unpoison_memory(unsigned long pfn) + struct page *page; + struct page *p; + int freeit = 0; +- unsigned int nr_pages; + static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL, + DEFAULT_RATELIMIT_BURST); + +@@ -1452,8 +1424,6 @@ int unpoison_memory(unsigned long pfn) + return 0; + } + +- nr_pages = 1 << compound_order(page); +- + if (!get_hwpoison_page(p)) { + /* + * Since HWPoisoned hugepage should have non-zero refcount, +@@ -1483,10 +1453,8 @@ int unpoison_memory(unsigned long pfn) + if (TestClearPageHWPoison(page)) { + unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n", + pfn, &unpoison_rs); +- num_poisoned_pages_sub(nr_pages); ++ num_poisoned_pages_dec(); + freeit = 1; +- if (PageHuge(page)) +- clear_page_hwpoison_huge_page(page); + } + unlock_page(page); + +@@ -1612,14 +1580,10 @@ static int soft_offline_huge_page(struct page *page, int flags) + ret = -EIO; + } else { + /* overcommit hugetlb page will be freed to buddy */ +- if (PageHuge(page)) { +- set_page_hwpoison_huge_page(hpage); ++ SetPageHWPoison(page); ++ if (PageHuge(page)) + dequeue_hwpoisoned_huge_page(hpage); +- num_poisoned_pages_add(1 << compound_order(hpage)); +- } else { +- SetPageHWPoison(page); +- num_poisoned_pages_inc(); +- } ++ num_poisoned_pages_inc(); + } + return ret; + } +@@ -1728,15 +1692,12 @@ static int soft_offline_in_use_page(struct page *page, int flags) + + static void soft_offline_free_page(struct page *page) + { +- if (PageHuge(page)) { +- struct page *hpage = compound_head(page); ++ struct page *head = compound_head(page); + +- set_page_hwpoison_huge_page(hpage); +- if (!dequeue_hwpoisoned_huge_page(hpage)) +- num_poisoned_pages_add(1 << compound_order(hpage)); +- } else { +- if (!TestSetPageHWPoison(page)) +- num_poisoned_pages_inc(); ++ if (!TestSetPageHWPoison(head)) { ++ num_poisoned_pages_inc(); ++ if (PageHuge(head)) ++ dequeue_hwpoisoned_huge_page(head); + } + } + +-- +2.30.2 + diff --git a/queue-4.9/mm-hwpoison-introduce-memory_failure_hugetlb.patch b/queue-4.9/mm-hwpoison-introduce-memory_failure_hugetlb.patch new file mode 100644 index 00000000000..5f419a3e0b7 --- /dev/null +++ 
b/queue-4.9/mm-hwpoison-introduce-memory_failure_hugetlb.patch @@ -0,0 +1,234 @@ +From bcf33601706a65343648271e046faea9f011a14c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 10 Jul 2017 15:47:47 -0700 +Subject: mm: hwpoison: introduce memory_failure_hugetlb() + +From: Naoya Horiguchi + +[ Upstream commit 761ad8d7c7b5485bb66fd5bccb58a891fe784544 ] + +memory_failure() is a big function and hard to maintain. Handling +hugetlb- and non-hugetlb- case in a single function is not good, so this +patch separates PageHuge() branch into a new function, which saves many +PageHuge() check. + +Link: http://lkml.kernel.org/r/1496305019-5493-7-git-send-email-n-horiguchi@ah.jp.nec.com +Signed-off-by: Naoya Horiguchi +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Sasha Levin +--- + mm/memory-failure.c | 134 +++++++++++++++++++++++++++----------------- + 1 file changed, 82 insertions(+), 52 deletions(-) + +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index ad156b42d2ad..d3986a58ca89 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1010,6 +1010,76 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, + return ret; + } + ++static int memory_failure_hugetlb(unsigned long pfn, int trapno, int flags) ++{ ++ struct page_state *ps; ++ struct page *p = pfn_to_page(pfn); ++ struct page *head = compound_head(p); ++ int res; ++ unsigned long page_flags; ++ ++ if (TestSetPageHWPoison(head)) { ++ pr_err("Memory failure: %#lx: already hardware poisoned\n", ++ pfn); ++ return 0; ++ } ++ ++ num_poisoned_pages_inc(); ++ ++ if (!(flags & MF_COUNT_INCREASED) && !get_hwpoison_page(p)) { ++ /* ++ * Check "filter hit" and "race with other subpage." ++ */ ++ lock_page(head); ++ if (PageHWPoison(head)) { ++ if ((hwpoison_filter(p) && TestClearPageHWPoison(p)) ++ || (p != head && TestSetPageHWPoison(head))) { ++ num_poisoned_pages_dec(); ++ unlock_page(head); ++ return 0; ++ } ++ } ++ unlock_page(head); ++ dissolve_free_huge_page(p); ++ action_result(pfn, MF_MSG_FREE_HUGE, MF_DELAYED); ++ return 0; ++ } ++ ++ lock_page(head); ++ page_flags = head->flags; ++ ++ if (!PageHWPoison(head)) { ++ pr_err("Memory failure: %#lx: just unpoisoned\n", pfn); ++ num_poisoned_pages_dec(); ++ unlock_page(head); ++ put_hwpoison_page(head); ++ return 0; ++ } ++ ++ if (!hwpoison_user_mappings(p, pfn, trapno, flags, &head)) { ++ action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED); ++ res = -EBUSY; ++ goto out; ++ } ++ ++ res = -EBUSY; ++ ++ for (ps = error_states;; ps++) ++ if ((p->flags & ps->mask) == ps->res) ++ break; ++ ++ page_flags |= (p->flags & (1UL << PG_dirty)); ++ ++ if (!ps->mask) ++ for (ps = error_states;; ps++) ++ if ((page_flags & ps->mask) == ps->res) ++ break; ++ res = page_action(ps, p, pfn); ++out: ++ unlock_page(head); ++ return res; ++} ++ + /** + * memory_failure - Handle memory failure of a page. 
+ * @pfn: Page Number of the corrupted page +@@ -1047,33 +1117,22 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + } + + p = pfn_to_page(pfn); +- orig_head = hpage = compound_head(p); +- +- /* tmporary check code, to be updated in later patches */ +- if (PageHuge(p)) { +- if (TestSetPageHWPoison(hpage)) { +- pr_err("Memory failure: %#lx: already hardware poisoned\n", pfn); +- return 0; +- } +- goto tmp; +- } ++ if (PageHuge(p)) ++ return memory_failure_hugetlb(pfn, trapno, flags); + if (TestSetPageHWPoison(p)) { + pr_err("Memory failure: %#lx: already hardware poisoned\n", + pfn); + return 0; + } + +-tmp: ++ orig_head = hpage = compound_head(p); + num_poisoned_pages_inc(); + + /* + * We need/can do nothing about count=0 pages. + * 1) it's a free page, and therefore in safe hand: + * prep_new_page() will be the gate keeper. +- * 2) it's a free hugepage, which is also safe: +- * an affected hugepage will be dequeued from hugepage freelist, +- * so there's no concern about reusing it ever after. +- * 3) it's part of a non-compound high order page. ++ * 2) it's part of a non-compound high order page. + * Implies some kernel user: cannot stop them from + * R/W the page; let's pray that the page has been + * used and will be freed some time later. +@@ -1084,31 +1143,13 @@ tmp: + if (is_free_buddy_page(p)) { + action_result(pfn, MF_MSG_BUDDY, MF_DELAYED); + return 0; +- } else if (PageHuge(hpage)) { +- /* +- * Check "filter hit" and "race with other subpage." +- */ +- lock_page(hpage); +- if (PageHWPoison(hpage)) { +- if ((hwpoison_filter(p) && TestClearPageHWPoison(p)) +- || (p != hpage && TestSetPageHWPoison(hpage))) { +- num_poisoned_pages_dec(); +- unlock_page(hpage); +- return 0; +- } +- } +- res = dequeue_hwpoisoned_huge_page(hpage); +- action_result(pfn, MF_MSG_FREE_HUGE, +- res ? MF_IGNORED : MF_DELAYED); +- unlock_page(hpage); +- return res; + } else { + action_result(pfn, MF_MSG_KERNEL_HIGH_ORDER, MF_IGNORED); + return -EBUSY; + } + } + +- if (!PageHuge(p) && PageTransHuge(hpage)) { ++ if (PageTransHuge(hpage)) { + lock_page(p); + if (!PageAnon(p) || unlikely(split_huge_page(p))) { + unlock_page(p); +@@ -1154,7 +1195,7 @@ tmp: + } + } + +- lock_page(hpage); ++ lock_page(p); + + /* + * The page could have changed compound pages during the locking. +@@ -1184,32 +1225,21 @@ tmp: + if (!PageHWPoison(p)) { + pr_err("Memory failure: %#lx: just unpoisoned\n", pfn); + num_poisoned_pages_dec(); +- unlock_page(hpage); +- put_hwpoison_page(hpage); ++ unlock_page(p); ++ put_hwpoison_page(p); + return 0; + } + if (hwpoison_filter(p)) { + if (TestClearPageHWPoison(p)) + num_poisoned_pages_dec(); +- unlock_page(hpage); +- put_hwpoison_page(hpage); ++ unlock_page(p); ++ put_hwpoison_page(p); + return 0; + } + +- if (!PageHuge(p) && !PageTransTail(p) && !PageLRU(p)) ++ if (!PageTransTail(p) && !PageLRU(p)) + goto identify_page_state; + +- /* +- * For error on the tail page, we should set PG_hwpoison +- * on the head page to show that the hugepage is hwpoisoned +- */ +- if (PageHuge(p) && PageTail(p) && TestSetPageHWPoison(hpage)) { +- action_result(pfn, MF_MSG_POISONED_HUGE, MF_IGNORED); +- unlock_page(hpage); +- put_hwpoison_page(hpage); +- return 0; +- } +- + /* + * It's very difficult to mess with pages currently under IO + * and in many cases impossible, so we just avoid it here. 
+@@ -1258,7 +1288,7 @@ identify_page_state: + break; + res = page_action(ps, p, pfn); + out: +- unlock_page(hpage); ++ unlock_page(p); + return res; + } + EXPORT_SYMBOL_GPL(memory_failure); +-- +2.30.2 + diff --git a/queue-4.9/mm-memory-failure-make-sure-wait-for-page-writeback-.patch b/queue-4.9/mm-memory-failure-make-sure-wait-for-page-writeback-.patch new file mode 100644 index 00000000000..ef2f687553e --- /dev/null +++ b/queue-4.9/mm-memory-failure-make-sure-wait-for-page-writeback-.patch @@ -0,0 +1,84 @@ +From 09ba9806ce09602cac6a49367a08971bd3ce6669 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 15 Jun 2021 18:23:32 -0700 +Subject: mm/memory-failure: make sure wait for page writeback in + memory_failure + +From: yangerkun + +[ Upstream commit e8675d291ac007e1c636870db880f837a9ea112a ] + +Our syzkaller trigger the "BUG_ON(!list_empty(&inode->i_wb_list))" in +clear_inode: + + kernel BUG at fs/inode.c:519! + Internal error: Oops - BUG: 0 [#1] SMP + Modules linked in: + Process syz-executor.0 (pid: 249, stack limit = 0x00000000a12409d7) + CPU: 1 PID: 249 Comm: syz-executor.0 Not tainted 4.19.95 + Hardware name: linux,dummy-virt (DT) + pstate: 80000005 (Nzcv daif -PAN -UAO) + pc : clear_inode+0x280/0x2a8 + lr : clear_inode+0x280/0x2a8 + Call trace: + clear_inode+0x280/0x2a8 + ext4_clear_inode+0x38/0xe8 + ext4_free_inode+0x130/0xc68 + ext4_evict_inode+0xb20/0xcb8 + evict+0x1a8/0x3c0 + iput+0x344/0x460 + do_unlinkat+0x260/0x410 + __arm64_sys_unlinkat+0x6c/0xc0 + el0_svc_common+0xdc/0x3b0 + el0_svc_handler+0xf8/0x160 + el0_svc+0x10/0x218 + Kernel panic - not syncing: Fatal exception + +A crash dump of this problem show that someone called __munlock_pagevec +to clear page LRU without lock_page: do_mmap -> mmap_region -> do_munmap +-> munlock_vma_pages_range -> __munlock_pagevec. + +As a result memory_failure will call identify_page_state without +wait_on_page_writeback. And after truncate_error_page clear the mapping +of this page. end_page_writeback won't call sb_clear_inode_writeback to +clear inode->i_wb_list. That will trigger BUG_ON in clear_inode! + +Fix it by checking PageWriteback too to help determine should we skip +wait_on_page_writeback. + +Link: https://lkml.kernel.org/r/20210604084705.3729204-1-yangerkun@huawei.com +Fixes: 0bc1f8b0682c ("hwpoison: fix the handling path of the victimized page frame that belong to non-LRU") +Signed-off-by: yangerkun +Acked-by: Naoya Horiguchi +Cc: Jan Kara +Cc: Theodore Ts'o +Cc: Oscar Salvador +Cc: Yu Kuai +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Sasha Levin +--- + mm/memory-failure.c | 7 ++++++- + 1 file changed, 6 insertions(+), 1 deletion(-) + +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index d3986a58ca89..448f5decf95c 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1237,7 +1237,12 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + return 0; + } + +- if (!PageTransTail(p) && !PageLRU(p)) ++ /* ++ * __munlock_pagevec may clear a writeback page's LRU flag without ++ * page_lock. We need wait writeback completion for this page or it ++ * may trigger vfs BUG while evict inode. 
++ */ ++ if (!PageTransTail(p) && !PageLRU(p) && !PageWriteback(p)) + goto identify_page_state; + + /* +-- +2.30.2 + diff --git a/queue-4.9/series b/queue-4.9/series index cdf3ba351de..e8b237290cb 100644 --- a/queue-4.9/series +++ b/queue-4.9/series @@ -12,3 +12,8 @@ rtnetlink-fix-missing-error-code-in-rtnl_bridge_noti.patch net-x25-return-the-correct-errno-code.patch net-return-the-correct-errno-code.patch fib-return-the-correct-errno-code.patch +dmaengine-qcom_hidma_mgmt-depends-on-has_iomem.patch +dmaengine-stedma40-add-missing-iounmap-on-error-in-d.patch +mm-hwpoison-change-pagehwpoison-behavior-on-hugetlb-.patch +mm-hwpoison-introduce-memory_failure_hugetlb.patch +mm-memory-failure-make-sure-wait-for-page-writeback-.patch
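
For reference, the stedma40 change queued above restores the usual probe() unwind rule: every mapping taken during probe() must be released on each later failure path, in reverse order of acquisition. Below is a minimal sketch of that pattern for a hypothetical platform driver; it is not part of any queued patch, and the names foo_probe, foo_priv and the resource layout are assumptions made for illustration only.

/*
 * Illustrative sketch, not a queued patch: the generic ioremap()/iounmap()
 * unwind pattern that the d40_probe() fix above reinstates.
 */
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct foo_priv {
	void __iomem *regs;	/* mapping that must not leak on error */
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_priv *priv;
	struct resource *res;
	int irq, ret;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res) {
		ret = -ENODEV;
		goto free_priv;
	}

	priv->regs = ioremap(res->start, resource_size(res));
	if (!priv->regs) {
		ret = -ENOMEM;
		goto free_priv;
	}

	irq = platform_get_irq(pdev, 0);
	if (irq < 0) {
		ret = irq;
		/* without this label the mapping would leak, as in d40_probe() */
		goto unmap;
	}

	platform_set_drvdata(pdev, priv);
	return 0;

unmap:
	iounmap(priv->regs);
free_priv:
	kfree(priv);
	return ret;
}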