thirdparty/kernel/stable-queue.git / releases/4.14.111/ib-mlx4-increase-the-timeout-for-cm-cache.patch
From 9ab807b63d4a76c4cd97d733d3c58a83fcbc1d45 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?H=C3=A5kon=20Bugge?= <haakon.bugge@oracle.com>
Date: Sun, 17 Feb 2019 15:45:12 +0100
Subject: IB/mlx4: Increase the timeout for CM cache
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

[ Upstream commit 2612d723aadcf8281f9bf8305657129bd9f3cd57 ]

Using CX-3 virtual functions, either from a bare-metal machine or
pass-through from a VM, MAD packets are proxied through the PF driver.

Since the VF drivers have separate namespaces for MAD Transaction Ids
(TIDs), the PF driver has to re-map the TIDs and keep the bookkeeping
in a cache.

Following the RDMA Connection Manager (CM) protocol, it is clear when
an entry has to be evicted from the cache. But life is not perfect:
remote peers may die or be rebooted. Hence, a timeout is used to wipe
out a cache entry, after which the PF driver assumes the remote peer
has gone.

During workloads where a high number of QPs are destroyed concurrently,
an excessive number of CM DREQ retries has been observed.

The problem can be demonstrated in a bare-metal environment, where two
nodes have instantiated 8 VFs each. These are dual-ported HCAs, so we
have 16 vPorts per physical server.

64 processes are associated with each vPort and create and destroy
one QP for each of the remote 64 processes. That is, 1024 QPs per
vPort, 16K QPs in all. The QPs are created/destroyed using the
CM.

When tearing down these 16K QPs, excessive CM DREQ retries (and
duplicates) are observed. With some cat/paste/awk wizardry on the
infiniband_cm sysfs counters, we observe the following sums over the
16 vPorts on one of the nodes:

cm_rx_duplicates:
      dreq  2102
cm_rx_msgs:
      drep  1989
      dreq  6195
       rep  3968
       req  4224
       rtu  4224
cm_tx_msgs:
      drep  4093
      dreq 27568
       rep  4224
       req  3968
       rtu  3968
cm_tx_retries:
      dreq 23469

Note that the active/passive side is equally distributed between the
two nodes.

Enabling pr_debug in cm.c gives tons of:

[171778.814239] <mlx4_ib> mlx4_ib_multiplex_cm_handler: id{slave:
1,sl_cm_id: 0xd393089f} is NULL!

By increasing the CM_CLEANUP_CACHE_TIMEOUT from 5 to 30 seconds, the
tear-down phase of the application is reduced from approximately 90 to
50 seconds. Retries/duplicates are also significantly reduced:

cm_rx_duplicates:
      dreq  2460
[]
cm_tx_retries:
      dreq  3010
       req    47

Increasing the timeout further didn't help, as these duplicates and
retries stem from a too-short CMA timeout, which was 20 (~4 seconds)
on the systems. By increasing the CMA timeout to 22 (~17 seconds), the
numbers fell to about 10 for both of them.

Adjustment of the CMA timeout is not part of this commit.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Acked-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/infiniband/hw/mlx4/cm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
index fedaf8260105..8c79a480f2b7 100644
--- a/drivers/infiniband/hw/mlx4/cm.c
+++ b/drivers/infiniband/hw/mlx4/cm.c
@@ -39,7 +39,7 @@
 
 #include "mlx4_ib.h"
 
-#define CM_CLEANUP_CACHE_TIMEOUT (5 * HZ)
+#define CM_CLEANUP_CACHE_TIMEOUT (30 * HZ)
 
 struct id_map_entry {
	struct rb_node node;
--
2.19.1