From: Greg Kroah-Hartman Date: Thu, 18 Apr 2019 15:56:09 +0000 (+0200) Subject: 4.19-stable patches X-Git-Tag: v4.9.170~13 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=6350b593745920342756a6c34fed682f60096d72;p=thirdparty%2Fkernel%2Fstable-queue.git 4.19-stable patches added patches: ib-hfi1-failed-to-drain-send-queue-when-qp-is-put-into-error-state.patch mm-hide-incomplete-nr_indirectly_reclaimable-in-proc-zoneinfo.patch mm-hide-incomplete-nr_indirectly_reclaimable-in-sysfs.patch --- diff --git a/queue-4.19/ib-hfi1-failed-to-drain-send-queue-when-qp-is-put-into-error-state.patch b/queue-4.19/ib-hfi1-failed-to-drain-send-queue-when-qp-is-put-into-error-state.patch new file mode 100644 index 00000000000..c49458845f3 --- /dev/null +++ b/queue-4.19/ib-hfi1-failed-to-drain-send-queue-when-qp-is-put-into-error-state.patch @@ -0,0 +1,62 @@ +From 662d66466637862ef955f7f6e78a286d8cf0ebef Mon Sep 17 00:00:00 2001 +From: Kaike Wan +Date: Mon, 18 Mar 2019 09:55:19 -0700 +Subject: IB/hfi1: Failed to drain send queue when QP is put into error state + +From: Kaike Wan + +commit 662d66466637862ef955f7f6e78a286d8cf0ebef upstream. + +When a QP is put into error state, all pending requests in the send work +queue should be drained. The following sequence of events could lead to a +failure, causing a request to hang: + +(1) The QP builds a packet and tries to send through SDMA engine. + However, PIO engine is still busy. Consequently, this packet is put on + the QP's tx list and the QP is put on the PIO waiting list. The field + qp->s_flags is set with HFI1_S_WAIT_PIO_DRAIN; + +(2) The QP is put into error state by the user application and + notify_error_qp() is called, which removes the QP from the PIO waiting + list and the packet from the QP's tx list. In addition, qp->s_flags is + cleared of RVT_S_ANY_WAIT_IO bits, which does not include + HFI1_S_WAIT_PIO_DRAIN bit; + +(3) The hfi1_schedule_send() function is called to drain the QP's send + queue.
Subsequently, hfi1_do_send() is called. Since the flag bit + HFI1_S_WAIT_PIO_DRAIN is set in qp->s_flags, hfi1_send_ok() fails. As + a result, hfi1_do_send() bails out without draining any request from + the send queue; + +(4) The PIO engine completes the sending and tries to wake up any QP on + its waiting list. But the QP has been removed from the PIO waiting + list and therefore is kept in sleep forever. + +The fix is to clear qp->s_flags of HFI1_S_ANY_WAIT_IO bits in step (2). +HFI1_S_ANY_WAIT_IO includes RVT_S_ANY_WAIT_IO and HFI1_S_WAIT_PIO_DRAIN. + +Fixes: 2e2ba09e48b7 ("IB/rdmavt, IB/hfi1: Create device dependent s_flags") +Cc: # 4.19.x+ +Reviewed-by: Mike Marciniszyn +Reviewed-by: Alex Estrin +Signed-off-by: Kaike Wan +Signed-off-by: Dennis Dalessandro +Signed-off-by: Jason Gunthorpe +Signed-off-by: Greg Kroah-Hartman + + +--- + drivers/infiniband/hw/hfi1/qp.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/drivers/infiniband/hw/hfi1/qp.c ++++ b/drivers/infiniband/hw/hfi1/qp.c +@@ -784,7 +784,7 @@ void notify_error_qp(struct rvt_qp *qp) + write_seqlock(lock); + if (!list_empty(&priv->s_iowait.list) && + !(qp->s_flags & RVT_S_BUSY)) { +- qp->s_flags &= ~RVT_S_ANY_WAIT_IO; ++ qp->s_flags &= ~HFI1_S_ANY_WAIT_IO; + list_del_init(&priv->s_iowait.list); + priv->s_iowait.lock = NULL; + rvt_put_qp(qp); diff --git a/queue-4.19/mm-hide-incomplete-nr_indirectly_reclaimable-in-proc-zoneinfo.patch b/queue-4.19/mm-hide-incomplete-nr_indirectly_reclaimable-in-proc-zoneinfo.patch new file mode 100644 index 00000000000..d553fd4653e --- /dev/null +++ b/queue-4.19/mm-hide-incomplete-nr_indirectly_reclaimable-in-proc-zoneinfo.patch @@ -0,0 +1,69 @@ +From c29f9010a35604047f96a7e9d6cbabfa36d996d1 Mon Sep 17 00:00:00 2001 +From: Roman Gushchin +Date: Tue, 30 Oct 2018 17:48:25 +0000 +Subject: mm: hide incomplete nr_indirectly_reclaimable in /proc/zoneinfo + +From: Roman Gushchin + +[fixed differently upstream, this is a work-around to resolve it for 4.19.y] + 
+Yongqin reported that /proc/zoneinfo format is broken in 4.14 +due to commit 7aaf77272358 ("mm: don't show nr_indirectly_reclaimable +in /proc/vmstat") + +Node 0, zone DMA + per-node stats + nr_inactive_anon 403 + nr_active_anon 89123 + nr_inactive_file 128887 + nr_active_file 47377 + nr_unevictable 2053 + nr_slab_reclaimable 7510 + nr_slab_unreclaimable 10775 + nr_isolated_anon 0 + nr_isolated_file 0 + <...> + nr_vmscan_write 0 + nr_vmscan_immediate_reclaim 0 + nr_dirtied 6022 + nr_written 5985 + 74240 + ^^^^^^^^^^ + pages free 131656 + +The problem is caused by the nr_indirectly_reclaimable counter, +which is hidden from the /proc/vmstat, but not from the +/proc/zoneinfo. Let's fix this inconsistency and hide the +counter from /proc/zoneinfo exactly as from /proc/vmstat. + +BTW, in 4.19+ the counter has been renamed and exported by +the commit b29940c1abd7 ("mm: rename and change semantics of +nr_indirectly_reclaimable_bytes"), so there is no such a problem +anymore. + +Cc: # 4.14.x-4.18.x +Fixes: 7aaf77272358 ("mm: don't show nr_indirectly_reclaimable in /proc/vmstat") +Reported-by: Yongqin Liu +Signed-off-by: Roman Gushchin +Cc: Vlastimil Babka +Cc: Andrew Morton +Signed-off-by: Greg Kroah-Hartman +Signed-off-by: Greg Kroah-Hartman + +--- + mm/vmstat.c | 4 ++++ + 1 file changed, 4 insertions(+) + +--- a/mm/vmstat.c ++++ b/mm/vmstat.c +@@ -1547,6 +1547,10 @@ static void zoneinfo_show_print(struct s + if (is_zone_first_populated(pgdat, zone)) { + seq_printf(m, "\n per-node stats"); + for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) { ++ /* Skip hidden vmstat items. 
*/ ++ if (*vmstat_text[i + NR_VM_ZONE_STAT_ITEMS + ++ NR_VM_NUMA_STAT_ITEMS] == '\0') ++ continue; + seq_printf(m, "\n %-12s %lu", + vmstat_text[i + NR_VM_ZONE_STAT_ITEMS + + NR_VM_NUMA_STAT_ITEMS], diff --git a/queue-4.19/mm-hide-incomplete-nr_indirectly_reclaimable-in-sysfs.patch b/queue-4.19/mm-hide-incomplete-nr_indirectly_reclaimable-in-sysfs.patch new file mode 100644 index 00000000000..4bcfeb04a88 --- /dev/null +++ b/queue-4.19/mm-hide-incomplete-nr_indirectly_reclaimable-in-sysfs.patch @@ -0,0 +1,51 @@ +From khlebnikov@yandex-team.ru Thu Apr 18 17:53:53 2019 +From: Konstantin Khlebnikov +Date: Tue, 09 Apr 2019 20:05:43 +0300 +Subject: [PATCH 4.19.y 2/2] mm: hide incomplete nr_indirectly_reclaimable in sysfs +To: stable@vger.kernel.org +Cc: linux-mm@kvack.org, Roman Gushchin , Vlastimil Babka +Message-ID: <155482954368.2823.12386748649541618609.stgit@buzz> + +From: Konstantin Khlebnikov + +In upstream branch this fixed by commit b29940c1abd7 ("mm: rename and +change semantics of nr_indirectly_reclaimable_bytes"). + +This fixes /sys/devices/system/node/node*/vmstat format: + +... +nr_dirtied 6613155 +nr_written 5796802 + 11089216 +... + +Cc: # 4.19.y +Fixes: 7aaf77272358 ("mm: don't show nr_indirectly_reclaimable in /proc/vmstat") +Signed-off-by: Konstantin Khlebnikov +Cc: Roman Gushchin +Cc: Vlastimil Babka +Signed-off-by: Greg Kroah-Hartman +--- + drivers/base/node.c | 7 ++++++- + 1 file changed, 6 insertions(+), 1 deletion(-) + +--- a/drivers/base/node.c ++++ b/drivers/base/node.c +@@ -197,11 +197,16 @@ static ssize_t node_read_vmstat(struct d + sum_zone_numa_state(nid, i)); + #endif + +- for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) ++ for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) { ++ /* Skip hidden vmstat items. 
*/ ++ if (*vmstat_text[i + NR_VM_ZONE_STAT_ITEMS + ++ NR_VM_NUMA_STAT_ITEMS] == '\0') ++ continue; + n += sprintf(buf+n, "%s %lu\n", + vmstat_text[i + NR_VM_ZONE_STAT_ITEMS + + NR_VM_NUMA_STAT_ITEMS], + node_page_state(pgdat, i)); ++ } + + return n; + } diff --git a/queue-4.19/series b/queue-4.19/series index db552e01fc9..b635fd5555e 100644 --- a/queue-4.19/series +++ b/queue-4.19/series @@ -105,3 +105,6 @@ f2fs-fix-to-dirty-inode-for-i_mode-recovery.patch include-linux-swap.h-use-offsetof-instead-of-custom-.patch bpf-fix-use-after-free-in-bpf_evict_inode.patch tools-include-adopt-linux-bits.h.patch +ib-hfi1-failed-to-drain-send-queue-when-qp-is-put-into-error-state.patch +mm-hide-incomplete-nr_indirectly_reclaimable-in-proc-zoneinfo.patch +mm-hide-incomplete-nr_indirectly_reclaimable-in-sysfs.patch