From: Greg Kroah-Hartman
Date: Thu, 27 Apr 2017 10:21:00 +0000 (+0200)
Subject: 4.4-stable patches
X-Git-Tag: v4.4.65~15
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=e961a0f128f10a70500ff1d93e784d215851395d;p=thirdparty%2Fkernel%2Fstable-queue.git

4.4-stable patches

added patches:
tipc-correct-error-in-node-fsm.patch
tipc-make-dist-queue-pernet.patch
tipc-make-sure-ipv6-header-fits-in-skb-headroom.patch
tipc-re-enable-compensation-for-socket-receive-buffer-double-counting.patch
---

diff --git a/queue-4.10/series b/queue-4.10/series
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/queue-4.4/series b/queue-4.4/series
new file mode 100644
index 00000000000..ced80b217a8
--- /dev/null
+++ b/queue-4.4/series
@@ -0,0 +1,4 @@
+tipc-make-sure-ipv6-header-fits-in-skb-headroom.patch
+tipc-make-dist-queue-pernet.patch
+tipc-re-enable-compensation-for-socket-receive-buffer-double-counting.patch
+tipc-correct-error-in-node-fsm.patch
diff --git a/queue-4.4/tipc-correct-error-in-node-fsm.patch b/queue-4.4/tipc-correct-error-in-node-fsm.patch
new file mode 100644
index 00000000000..7860b13228c
--- /dev/null
+++ b/queue-4.4/tipc-correct-error-in-node-fsm.patch
@@ -0,0 +1,136 @@
+From c4282ca76c5b81ed73ef4c5eb5c07ee397e51642 Mon Sep 17 00:00:00 2001
+From: Jon Paul Maloy
+Date: Wed, 8 Jun 2016 12:00:04 -0400
+Subject: tipc: correct error in node fsm
+
+From: Jon Paul Maloy
+
+commit c4282ca76c5b81ed73ef4c5eb5c07ee397e51642 upstream.
+
+commit 88e8ac7000dc ("tipc: reduce transmission rate of reset messages
+when link is down") revealed a flaw in the node FSM, as defined in
+the log of commit 66996b6c47ed ("tipc: extend node FSM").
+
+We see the following scenario:
+1: Node B receives a RESET message from node A before its link endpoint
+   is fully up, i.e., the node FSM is in state SELF_UP_PEER_COMING. This
+   event will not change the node FSM state, but the (distinct) link FSM
+   will move to state RESETTING.
+2: As an effect of the previous event, the local endpoint on B will
+   declare node A lost, and post the event SELF_DOWN to its node
+   FSM. This moves the FSM state to SELF_DOWN_PEER_LEAVING, meaning
+   that no messages will be accepted from A until it receives another
+   RESET message that confirms that A's endpoint has been reset. This
+   is wasteful, since we know this as a fact already from the first
+   received RESET, but worse is that the link instance's FSM has not
+   wasted this information, but instead moved on to state ESTABLISHING,
+   meaning that it repeatedly sends out ACTIVATE messages to the reset
+   peer A.
+3: Node A will receive one of the ACTIVATE messages, move its link FSM
+   to state ESTABLISHED, and start repeatedly sending out STATE messages
+   to node B.
+4: Node B will consistently drop these messages, since it can only
+   accept a RESET according to its node FSM.
+5: After four lost STATE messages node A will reset its link and start
+   repeatedly sending out RESET messages to B.
+6: Because of the reduced send rate for RESET messages, it is very
+   likely that A will receive an ACTIVATE (which is sent out at a much
+   higher frequency) before it gets the chance to send a RESET, and A
+   may hence quickly move back to state ESTABLISHED and continue sending
+   out STATE messages, which will again be dropped by B.
+7: GOTO 5.
+8: After having repeated the cycle 5-7 a number of times, node A will
+   by chance get in between with sending a RESET, and the situation is
+   resolved.
+
+Unfortunately, we have seen that it may take a substantial amount of
+time before this vicious loop is broken, sometimes in the order of
+minutes.
+
+We fix this by making a small correction to the node FSM: when a node
+in state SELF_UP_PEER_COMING receives a SELF_DOWN event, it now moves
+directly back to state SELF_DOWN_PEER_DOWN, instead of, as before, to
+SELF_DOWN_PEER_LEAVING. This is logically consistent, since we don't
+need to wait for RESET confirmation from an endpoint that we already
+know has been reset. It also means that node B in the scenario above
+will not be dropping incoming STATE messages, and the link can come up
+immediately.
+
+Finally, a symmetry comparison reveals that the FSM has a similar
+error when receiving the event PEER_DOWN in state PEER_UP_SELF_COMING.
+Instead of moving to PEER_DOWN_SELF_LEAVING, it should move directly
+to SELF_DOWN_PEER_DOWN. Although we have never seen any negative effect
+of this logical error, we choose to fix this one, too.
+
+The node FSM looks as follows after those changes:
+
+ +----------------------------------------+
+ | PEER_DOWN_EVT|
+ | |
+ +------------------------+----------------+ |
+ |SELF_DOWN_EVT | | |
+ | | | |
+ | +-----------+ +-----------+ |
+ | |NODE_ | |NODE_ | |
+ | +----------|FAILINGOVER|<---------|SYNCHING |-----------+ |
+ | |SELF_ +-----------+ FAILOVER_+-----------+ PEER_ | |
+ | |DOWN_EVT | A BEGIN_EVT A | DOWN_EVT| |
+ | | | | | | | |
+ | | | | | | | |
+ | | |FAILOVER_ |FAILOVER_ |SYNCH_ |SYNCH_ | |
+ | | |END_EVT |BEGIN_EVT |BEGIN_EVT|END_EVT | |
+ | | | | | | | |
+ | | | | | | | |
+ | | | +--------------+ | | |
+ | | +-------->| SELF_UP_ |<-------+ | |
+ | | +-----------------| PEER_UP |----------------+ | |
+ | | |SELF_DOWN_EVT +--------------+ PEER_DOWN_EVT| | |
+ | | | A A | | |
+ | | | | | | | |
+ | | | PEER_UP_EVT| |SELF_UP_EVT | | |
+ | | | | | | | |
+ V V V | | V V V
++------------+ +-----------+ +-----------+ +------------+
+|SELF_DOWN_ | |SELF_UP_ | |PEER_UP_ | |PEER_DOWN |
+|PEER_LEAVING| |PEER_COMING| |SELF_COMING| |SELF_LEAVING|
++------------+ +-----------+ +-----------+ +------------+
+ | | A A | |
+ | | | | | |
+ | SELF_ | |SELF_ |PEER_ |PEER_ |
+ | DOWN_EVT| |UP_EVT |UP_EVT |DOWN_EVT |
+ | | | | | |
+ | | | | | |
+ | | +--------------+ | |
+ |PEER_DOWN_EVT +--->| SELF_DOWN_ |<---+ SELF_DOWN_EVT|
+ +------------------->| PEER_DOWN |<--------------------+
+ +--------------+
+
+Acked-by: Ying Xue
+Signed-off-by: Jon Maloy
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ net/tipc/node.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -728,7 +728,7 @@ static void tipc_node_fsm_evt(struct tip
+ 		state = SELF_UP_PEER_UP;
+ 		break;
+ 	case SELF_LOST_CONTACT_EVT:
+-		state = SELF_DOWN_PEER_LEAVING;
++		state = SELF_DOWN_PEER_DOWN;
+ 		break;
+ 	case SELF_ESTABL_CONTACT_EVT:
+ 	case PEER_LOST_CONTACT_EVT:
+@@ -747,7 +747,7 @@ static void tipc_node_fsm_evt(struct tip
+ 		state = SELF_UP_PEER_UP;
+ 		break;
+ 	case PEER_LOST_CONTACT_EVT:
+-		state = SELF_LEAVING_PEER_DOWN;
++		state = SELF_DOWN_PEER_DOWN;
+ 		break;
+ 	case SELF_LOST_CONTACT_EVT:
+ 	case PEER_ESTABL_CONTACT_EVT:
diff --git a/queue-4.4/tipc-make-dist-queue-pernet.patch b/queue-4.4/tipc-make-dist-queue-pernet.patch
new file mode 100644
index 00000000000..15eefb1c0e7
--- /dev/null
+++ b/queue-4.4/tipc-make-dist-queue-pernet.patch
@@ -0,0 +1,106 @@
+From 541726abe7daca64390c2ec34e6a203145f1686d Mon Sep 17 00:00:00 2001
+From: Erik Hugne
+Date: Thu, 7 Apr 2016 10:40:43 -0400
+Subject: tipc: make dist queue pernet
+
+From: Erik Hugne
+
+commit 541726abe7daca64390c2ec34e6a203145f1686d upstream.
+
+Nametable updates received from the network that cannot be applied
+immediately are placed on a defer queue. This queue is global to the
+TIPC module, which might cause problems when using TIPC in containers.
+To prevent nametable updates from escaping into the wrong namespace,
+we make the queue pernet instead.
+
+Signed-off-by: Erik Hugne
+Signed-off-by: Jon Maloy
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ net/tipc/core.c       |    1 +
+ net/tipc/core.h       |    3 +++
+ net/tipc/name_distr.c |   16 +++++++---------
+ 3 files changed, 11 insertions(+), 9 deletions(-)
+
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -69,6 +69,7 @@ static int __net_init tipc_init_net(stru
+ 	if (err)
+ 		goto out_nametbl;
+ 
++	INIT_LIST_HEAD(&tn->dist_queue);
+ 	err = tipc_topsrv_start(net);
+ 	if (err)
+ 		goto out_subscr;
+--- a/net/tipc/core.h
++++ b/net/tipc/core.h
+@@ -103,6 +103,9 @@ struct tipc_net {
+ 	spinlock_t nametbl_lock;
+ 	struct name_table *nametbl;
+ 
++	/* Name dist queue */
++	struct list_head dist_queue;
++
+ 	/* Topology subscription server */
+ 	struct tipc_server *topsrv;
+ 	atomic_t subscription_count;
+--- a/net/tipc/name_distr.c
++++ b/net/tipc/name_distr.c
+@@ -40,11 +40,6 @@
+ 
+ int sysctl_tipc_named_timeout __read_mostly = 2000;
+ 
+-/**
+- * struct tipc_dist_queue - queue holding deferred name table updates
+- */
+-static struct list_head tipc_dist_queue = LIST_HEAD_INIT(tipc_dist_queue);
+-
+ struct distr_queue_item {
+ 	struct distr_item i;
+ 	u32 dtype;
+@@ -340,9 +335,11 @@ static bool tipc_update_nametbl(struct n
+  * tipc_named_add_backlog - add a failed name table update to the backlog
+  *
+  */
+-static void tipc_named_add_backlog(struct distr_item *i, u32 type, u32 node)
++static void tipc_named_add_backlog(struct net *net, struct distr_item *i,
++				   u32 type, u32 node)
+ {
+ 	struct distr_queue_item *e;
++	struct tipc_net *tn = net_generic(net, tipc_net_id);
+ 	unsigned long now = get_jiffies_64();
+ 
+ 	e = kzalloc(sizeof(*e), GFP_ATOMIC);
+@@ -352,7 +349,7 @@ static void tipc_named_add_backlog(struc
+ 	e->node = node;
+ 	e->expires = now + msecs_to_jiffies(sysctl_tipc_named_timeout);
+ 	memcpy(e, i, sizeof(*i));
+-	list_add_tail(&e->next, &tipc_dist_queue);
++	list_add_tail(&e->next, &tn->dist_queue);
+ }
+ 
+ /**
+@@ -362,10 +359,11 @@ static void tipc_named_add_backlog(struc
+ void tipc_named_process_backlog(struct net *net)
+ {
+ 	struct distr_queue_item *e, *tmp;
++	struct tipc_net *tn = net_generic(net, tipc_net_id);
+ 	char addr[16];
+ 	unsigned long now = get_jiffies_64();
+ 
+-	list_for_each_entry_safe(e, tmp, &tipc_dist_queue, next) {
++	list_for_each_entry_safe(e, tmp, &tn->dist_queue, next) {
+ 		if (time_after(e->expires, now)) {
+ 			if (!tipc_update_nametbl(net, &e->i, e->node, e->dtype))
+ 				continue;
+@@ -405,7 +403,7 @@ void tipc_named_rcv(struct net *net, str
+ 	node = msg_orignode(msg);
+ 	while (count--) {
+ 		if (!tipc_update_nametbl(net, item, node, mtype))
+-			tipc_named_add_backlog(item, mtype, node);
++			tipc_named_add_backlog(net, item, mtype, node);
+ 		item++;
+ 	}
+ 	kfree_skb(skb);
diff --git a/queue-4.4/tipc-make-sure-ipv6-header-fits-in-skb-headroom.patch b/queue-4.4/tipc-make-sure-ipv6-header-fits-in-skb-headroom.patch
new file mode 100644
index 00000000000..6590aa63239
--- /dev/null
+++ b/queue-4.4/tipc-make-sure-ipv6-header-fits-in-skb-headroom.patch
@@ -0,0 +1,34 @@
+From 9bd160bfa27fa41927dbbce7ee0ea779700e09ef Mon Sep 17 00:00:00 2001
+From: Richard Alpe
+Date: Mon, 14 Mar 2016 09:43:52 +0100
+Subject: tipc: make sure IPv6 header fits in skb headroom
+
+From: Richard Alpe
+
+commit 9bd160bfa27fa41927dbbce7ee0ea779700e09ef upstream.
+
+Expand headroom further in order to be able to fit the larger IPv6
+header. Prior to this patch, the missing headroom caused an
+skb_under_panic for certain TIPC packets when using IPv6 UDP
+bearer(s).
+
+Signed-off-by: Richard Alpe
+Acked-by: Jon Maloy
+Signed-off-by: David S. Miller
+Signed-off-by: Jon Maloy
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ net/tipc/udp_media.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -52,7 +52,7 @@
+ /* IANA assigned UDP port */
+ #define UDP_PORT_DEFAULT	6118
+ 
+-#define UDP_MIN_HEADROOM	28
++#define UDP_MIN_HEADROOM	48
+ 
+ static const struct nla_policy tipc_nl_udp_policy[TIPC_NLA_UDP_MAX + 1] = {
+ 	[TIPC_NLA_UDP_UNSPEC]	= {.type = NLA_UNSPEC},
diff --git a/queue-4.4/tipc-re-enable-compensation-for-socket-receive-buffer-double-counting.patch b/queue-4.4/tipc-re-enable-compensation-for-socket-receive-buffer-double-counting.patch
new file mode 100644
index 00000000000..f7bbd445d62
--- /dev/null
+++ b/queue-4.4/tipc-re-enable-compensation-for-socket-receive-buffer-double-counting.patch
@@ -0,0 +1,49 @@
+From 7c8bcfb1255fe9d929c227d67bdcd84430fd200b Mon Sep 17 00:00:00 2001
+From: Jon Paul Maloy
+Date: Mon, 2 May 2016 11:58:45 -0400
+Subject: tipc: re-enable compensation for socket receive buffer double counting
+
+From: Jon Paul Maloy
+
+commit 7c8bcfb1255fe9d929c227d67bdcd84430fd200b upstream.
+
+In the refactoring commit d570d86497ee ("tipc: enqueue arrived buffers
+in socket in separate function") we accidentally replaced the test
+
+if (sk->sk_backlog.len == 0)
+	atomic_set(&tsk->dupl_rcvcnt, 0);
+
+with
+
+if (sk->sk_backlog.len)
+	atomic_set(&tsk->dupl_rcvcnt, 0);
+
+This effectively disables the compensation we have for the double
+receive buffer accounting that occurs temporarily when buffers are
+moved from the backlog to the socket receive queue. Until now, this
+has gone unnoticed because of the large receive buffer limits we are
+applying, but the compensation becomes indispensable when we reduce
+this buffer limit later in this series.
+
+We now fix this by inverting the mentioned condition.
+
+Acked-by: Ying Xue
+Signed-off-by: Jon Maloy
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ net/tipc/socket.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1755,7 +1755,7 @@ static void tipc_sk_enqueue(struct sk_bu
+ 
+ 	/* Try backlog, compensating for double-counted bytes */
+ 	dcnt = &tipc_sk(sk)->dupl_rcvcnt;
+-	if (sk->sk_backlog.len)
++	if (!sk->sk_backlog.len)
+ 		atomic_set(dcnt, 0);
+ 	lim = rcvbuf_limit(sk, skb) + atomic_read(dcnt);
+ 	if (likely(!sk_add_backlog(sk, skb, lim)))
diff --git a/queue-4.9/series b/queue-4.9/series
new file mode 100644
index 00000000000..e69de29bb2d