--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
+Date: Tue, 30 Jul 2019 22:21:41 -0500
+Subject: atm: iphase: Fix Spectre v1 vulnerability
+
+From: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
+
+[ Upstream commit ea443e5e98b5b74e317ef3d26bcaea54931ccdee ]
+
+The variable 'board' is controlled by user-space, hence leading to a
+potential exploitation of the Spectre variant 1 vulnerability.
+
+This issue was detected with the help of Smatch:
+
+drivers/atm/iphase.c:2765 ia_ioctl() warn: potential spectre issue 'ia_dev' [r] (local cap)
+drivers/atm/iphase.c:2774 ia_ioctl() warn: possible spectre second half. 'iadev'
+drivers/atm/iphase.c:2782 ia_ioctl() warn: possible spectre second half. 'iadev'
+drivers/atm/iphase.c:2816 ia_ioctl() warn: possible spectre second half. 'iadev'
+drivers/atm/iphase.c:2823 ia_ioctl() warn: possible spectre second half. 'iadev'
+drivers/atm/iphase.c:2830 ia_ioctl() warn: potential spectre issue '_ia_dev' [r] (local cap)
+drivers/atm/iphase.c:2845 ia_ioctl() warn: possible spectre second half. 'iadev'
+drivers/atm/iphase.c:2856 ia_ioctl() warn: possible spectre second half. 'iadev'
+
+Fix this by sanitizing board before using it to index ia_dev and _ia_dev
+
+Notice that given that speculation windows are large, the policy is
+to kill the speculation on the first load and not worry if it can be
+completed with a dependent load/store [1].
+
+[1] https://lore.kernel.org/lkml/20180423164740.GY17484@dhcp22.suse.cz/
+
+Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/atm/iphase.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/drivers/atm/iphase.c
++++ b/drivers/atm/iphase.c
+@@ -63,6 +63,7 @@
+ #include <asm/byteorder.h>
+ #include <linux/vmalloc.h>
+ #include <linux/jiffies.h>
++#include <linux/nospec.h>
+ #include "iphase.h"
+ #include "suni.h"
+ #define swap_byte_order(x) (((x & 0xff) << 8) | ((x & 0xff00) >> 8))
+@@ -2760,8 +2761,11 @@ static int ia_ioctl(struct atm_dev *dev,
+ }
+ if (copy_from_user(&ia_cmds, arg, sizeof ia_cmds)) return -EFAULT;
+ board = ia_cmds.status;
+- if ((board < 0) || (board > iadev_count))
+- board = 0;
++
++ if ((board < 0) || (board > iadev_count))
++ board = 0;
++ board = array_index_nospec(board, iadev_count + 1);
++
+ iadev = ia_dev[board];
+ switch (ia_cmds.cmd) {
+ case MEMDUMP:
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Sudarsana Reddy Kalluru <skalluru@marvell.com>
+Date: Tue, 23 Jul 2019 19:32:41 -0700
+Subject: bnx2x: Disable multi-cos feature.
+
+From: Sudarsana Reddy Kalluru <skalluru@marvell.com>
+
+[ Upstream commit d1f0b5dce8fda09a7f5f04c1878f181d548e42f5 ]
+
+Commit 3968d38917eb ("bnx2x: Fix Multi-Cos.") which enabled multi-cos
+feature after prolonged time in driver added some regression causing
+numerous issues (sudden reboots, tx timeout etc.) reported by customers.
+We plan to backout this commit and submit proper fix once we have root
+cause of issues reported with this feature enabled.
+
+Fixes: 3968d38917eb ("bnx2x: Fix Multi-Cos.")
+Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
+Signed-off-by: Manish Chopra <manishc@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -1936,8 +1936,7 @@ u16 bnx2x_select_queue(struct net_device
+ }
+
+ /* select a non-FCoE queue */
+- return fallback(dev, skb, NULL) %
+- (BNX2X_NUM_ETH_QUEUES(bp) * bp->max_cos);
++ return fallback(dev, skb, NULL) % (BNX2X_NUM_ETH_QUEUES(bp));
+ }
+
+ void bnx2x_set_num_queues(struct bnx2x *bp)
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:41 AM CEST
+From: Arnd Bergmann <arnd@arndb.de>
+Date: Tue, 30 Jul 2019 21:25:20 +0200
+Subject: compat_ioctl: pppoe: fix PPPOEIOCSFWD handling
+
+From: Arnd Bergmann <arnd@arndb.de>
+
+[ Upstream commit 055d88242a6046a1ceac3167290f054c72571cd9 ]
+
+Support for handling the PPPOEIOCSFWD ioctl in compat mode was added in
+linux-2.5.69 along with hundreds of other commands, but was always broken
+since only the structure is compatible, but the command number is not,
+due to the size being sizeof(size_t), or at first
+sizeof(sizeof(struct sockaddr_pppox)), which is different on 64-bit
+architectures.
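+
+As a quick illustration (not part of the patch, and assuming the usual
+uapi definition PPPOEIOCSFWD == _IOW(0xB1, 0, size_t)), the argument
+size is encoded in the command number itself, so the same macro expands
+to different values for 32-bit and 64-bit userspace:
+
+  /* hedged sketch: plain userspace program, asm-generic ioctl encoding */
+  #include <stdio.h>
+  #include <sys/ioctl.h>
+
+  int main(void)
+  {
+          /* sizeof(size_t) lands in bits 16..29 of the number:
+           * 8 on 64-bit -> 0x4008b100, 4 on 32-bit -> 0x4004b100,
+           * hence the PPPOEIOCSFWD32 alias added below.
+           */
+          printf("PPPOEIOCSFWD = %#lx\n",
+                 (unsigned long)_IOW(0xB1, 0, size_t));
+          return 0;
+  }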
+
+Guillaume Nault adds:
+
+ And the implementation was broken until 2016 (see 29e73269aa4d ("pppoe:
+ fix reference counting in PPPoE proxy")), and nobody ever noticed. I
+ should probably have removed this ioctl entirely instead of fixing it.
+ Clearly, it has never been used.
+
+Fix it by adding a compat_ioctl handler for all pppoe variants that
+translates the command number and then calls the regular ioctl function.
+
+All other ioctl commands handled by pppoe are compatible between 32-bit
+and 64-bit, and require compat_ptr() conversion.
+
+This should apply to all stable kernels.
+
+Acked-by: Guillaume Nault <g.nault@alphalink.fr>
+Signed-off-by: Arnd Bergmann <arnd@arndb.de>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ppp/pppoe.c | 3 +++
+ drivers/net/ppp/pppox.c | 13 +++++++++++++
+ drivers/net/ppp/pptp.c | 3 +++
+ fs/compat_ioctl.c | 3 ---
+ include/linux/if_pppox.h | 3 +++
+ net/l2tp/l2tp_ppp.c | 3 +++
+ 6 files changed, 25 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ppp/pppoe.c
++++ b/drivers/net/ppp/pppoe.c
+@@ -1120,6 +1120,9 @@ static const struct proto_ops pppoe_ops
+ .recvmsg = pppoe_recvmsg,
+ .mmap = sock_no_mmap,
+ .ioctl = pppox_ioctl,
++#ifdef CONFIG_COMPAT
++ .compat_ioctl = pppox_compat_ioctl,
++#endif
+ };
+
+ static const struct pppox_proto pppoe_proto = {
+--- a/drivers/net/ppp/pppox.c
++++ b/drivers/net/ppp/pppox.c
+@@ -22,6 +22,7 @@
+ #include <linux/string.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
++#include <linux/compat.h>
+ #include <linux/errno.h>
+ #include <linux/netdevice.h>
+ #include <linux/net.h>
+@@ -103,6 +104,18 @@ int pppox_ioctl(struct socket *sock, uns
+
+ EXPORT_SYMBOL(pppox_ioctl);
+
++#ifdef CONFIG_COMPAT
++int pppox_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
++{
++ if (cmd == PPPOEIOCSFWD32)
++ cmd = PPPOEIOCSFWD;
++
++ return pppox_ioctl(sock, cmd, (unsigned long)compat_ptr(arg));
++}
++
++EXPORT_SYMBOL(pppox_compat_ioctl);
++#endif
++
+ static int pppox_create(struct net *net, struct socket *sock, int protocol,
+ int kern)
+ {
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -633,6 +633,9 @@ static const struct proto_ops pptp_ops =
+ .recvmsg = sock_no_recvmsg,
+ .mmap = sock_no_mmap,
+ .ioctl = pppox_ioctl,
++#ifdef CONFIG_COMPAT
++ .compat_ioctl = pppox_compat_ioctl,
++#endif
+ };
+
+ static const struct pppox_proto pppox_pptp_proto = {
+--- a/fs/compat_ioctl.c
++++ b/fs/compat_ioctl.c
+@@ -894,9 +894,6 @@ COMPATIBLE_IOCTL(PPPIOCDISCONN)
+ COMPATIBLE_IOCTL(PPPIOCATTCHAN)
+ COMPATIBLE_IOCTL(PPPIOCGCHAN)
+ COMPATIBLE_IOCTL(PPPIOCGL2TPSTATS)
+-/* PPPOX */
+-COMPATIBLE_IOCTL(PPPOEIOCSFWD)
+-COMPATIBLE_IOCTL(PPPOEIOCDFWD)
+ /* Big A */
+ /* sparc only */
+ /* Big Q for sound/OSS */
+--- a/include/linux/if_pppox.h
++++ b/include/linux/if_pppox.h
+@@ -84,6 +84,9 @@ extern int register_pppox_proto(int prot
+ extern void unregister_pppox_proto(int proto_num);
+ extern void pppox_unbind_sock(struct sock *sk);/* delete ppp-channel binding */
+ extern int pppox_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
++extern int pppox_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
++
++#define PPPOEIOCSFWD32 _IOW(0xB1 ,0, compat_size_t)
+
+ /* PPPoX socket states */
+ enum {
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -1686,6 +1686,9 @@ static const struct proto_ops pppol2tp_o
+ .recvmsg = pppol2tp_recvmsg,
+ .mmap = sock_no_mmap,
+ .ioctl = pppox_ioctl,
++#ifdef CONFIG_COMPAT
++ .compat_ioctl = pppox_compat_ioctl,
++#endif
+ };
+
+ static const struct pppox_proto pppol2tp_proto = {
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Cong Wang <xiyou.wangcong@gmail.com>
+Date: Mon, 22 Jul 2019 21:43:00 -0700
+Subject: ife: error out when nla attributes are empty
+
+From: Cong Wang <xiyou.wangcong@gmail.com>
+
+[ Upstream commit c8ec4632c6ac9cda0e8c3d51aa41eeab66585bd5 ]
+
+act_ife at least requires TCA_IFE_PARMS, so we have to bail out
+when there is no attribute passed in.
+
+Reported-by: syzbot+fbb5b288c9cb6a2eeac4@syzkaller.appspotmail.com
+Fixes: ef6980b6becb ("introduce IFE action")
+Cc: Jamal Hadi Salim <jhs@mojatatu.com>
+Cc: Jiri Pirko <jiri@resnulli.us>
+Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/sched/act_ife.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -484,6 +484,11 @@ static int tcf_ife_init(struct net *net,
+ int ret = 0;
+ int err;
+
++ if (!nla) {
++ NL_SET_ERR_MSG_MOD(extack, "IFE requires attributes to be passed");
++ return -EINVAL;
++ }
++
+ err = nla_parse_nested(tb, TCA_IFE_MAX, nla, ife_policy, NULL);
+ if (err < 0)
+ return err;
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+Date: Wed, 24 Jul 2019 20:00:42 +0800
+Subject: ip6_gre: reload ipv6h in prepare_ip6gre_xmit_ipv6
+
+From: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+
+[ Upstream commit 3bc817d665ac6d9de89f59df522ad86f5b5dfc03 ]
+
+Since ip6_tnl_parse_tlv_enc_lim() can call pskb_may_pull()
+which may change skb->data, so we need to re-load ipv6h at
+the right place.
+
+Fixes: 898b29798e36 ("ip6_gre: Refactor ip6gre xmit codes")
+Cc: William Tu <u9012063@gmail.com>
+Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+Acked-by: William Tu <u9012063@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ipv6/ip6_gre.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -680,12 +680,13 @@ static int prepare_ip6gre_xmit_ipv6(stru
+ struct flowi6 *fl6, __u8 *dsfield,
+ int *encap_limit)
+ {
+- struct ipv6hdr *ipv6h = ipv6_hdr(skb);
++ struct ipv6hdr *ipv6h;
+ struct ip6_tnl *t = netdev_priv(dev);
+ __u16 offset;
+
+ offset = ip6_tnl_parse_tlv_enc_lim(skb, skb_network_header(skb));
+ /* ip6_tnl_parse_tlv_enc_lim() might have reallocated skb->head */
++ ipv6h = ipv6_hdr(skb);
+
+ if (offset > 0) {
+ struct ipv6_tlv_tnl_enc_lim *tel;
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+Date: Fri, 26 Jul 2019 00:40:17 +0800
+Subject: ip6_tunnel: fix possible use-after-free on xmit
+
+From: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+
+[ Upstream commit 01f5bffad555f8e22a61f4b1261fe09cf1b96994 ]
+
+ip4ip6/ip6ip6 tunnels run iptunnel_handle_offloads on xmit, which
+can cause a possible use-after-free when accessing the iph/ipv6h
+pointer, since the packet will be 'uncloned' via pskb_expand_head if
+it is a cloned gso skb.
+
+Fixes: 0e9a709560db ("ip6_tunnel, ip6_gre: fix setting of DSCP on encapsulated packets")
+Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ipv6/ip6_tunnel.c | 6 ++----
+ 1 file changed, 2 insertions(+), 4 deletions(-)
+
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1283,12 +1283,11 @@ ip4ip6_tnl_xmit(struct sk_buff *skb, str
+ }
+
+ fl6.flowi6_uid = sock_net_uid(dev_net(dev), NULL);
++ dsfield = INET_ECN_encapsulate(dsfield, ipv4_get_dsfield(iph));
+
+ if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6))
+ return -1;
+
+- dsfield = INET_ECN_encapsulate(dsfield, ipv4_get_dsfield(iph));
+-
+ skb_set_inner_ipproto(skb, IPPROTO_IPIP);
+
+ err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
+@@ -1372,12 +1371,11 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, str
+ }
+
+ fl6.flowi6_uid = sock_net_uid(dev_net(dev), NULL);
++ dsfield = INET_ECN_encapsulate(dsfield, ipv6_get_dsfield(ipv6h));
+
+ if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6))
+ return -1;
+
+- dsfield = INET_ECN_encapsulate(dsfield, ipv6_get_dsfield(ipv6h));
+-
+ skb_set_inner_ipproto(skb, IPPROTO_IPV6);
+
+ err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+Date: Thu, 25 Jul 2019 11:07:56 +0800
+Subject: ipip: validate header length in ipip_tunnel_xmit
+
+From: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+
+[ Upstream commit 47d858d0bdcd47cc1c6c9eeca91b091dd9e55637 ]
+
+We need the same checks introduced by commit cb9f1b783850
+("ip: validate header length on virtual device xmit") for
+ipip tunnel.
+
+Fixes: cb9f1b783850b ("ip: validate header length on virtual device xmit")
+Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ipv4/ipip.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/net/ipv4/ipip.c
++++ b/net/ipv4/ipip.c
+@@ -281,6 +281,9 @@ static netdev_tx_t ipip_tunnel_xmit(stru
+ const struct iphdr *tiph = &tunnel->parms.iph;
+ u8 ipproto;
+
++ if (!pskb_inet_may_pull(skb))
++ goto tx_error;
++
+ switch (skb->protocol) {
+ case htons(ETH_P_IP):
+ ipproto = IPPROTO_IPIP;
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Jiri Pirko <jiri@mellanox.com>
+Date: Wed, 31 Jul 2019 09:33:14 +0300
+Subject: mlxsw: spectrum: Fix error path in mlxsw_sp_module_init()
+
+From: Jiri Pirko <jiri@mellanox.com>
+
+[ Upstream commit 28fe79000e9b0a6f99959869947f1ca305f14599 ]
+
+In case sp2 pci driver registration fails, fix the error path to
+start with sp1 pci driver unregister.
+
+Fixes: c3ab435466d5 ("mlxsw: spectrum: Extend to support Spectrum-2 ASIC")
+Signed-off-by: Jiri Pirko <jiri@mellanox.com>
+Signed-off-by: Ido Schimmel <idosch@mellanox.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/mellanox/mlxsw/spectrum.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -5032,7 +5032,7 @@ static int __init mlxsw_sp_module_init(v
+ return 0;
+
+ err_sp2_pci_driver_register:
+- mlxsw_pci_driver_unregister(&mlxsw_sp2_pci_driver);
++ mlxsw_pci_driver_unregister(&mlxsw_sp1_pci_driver);
+ err_sp1_pci_driver_register:
+ mlxsw_core_driver_unregister(&mlxsw_sp2_driver);
+ err_sp2_core_driver_register:
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Matteo Croce <mcroce@redhat.com>
+Date: Thu, 1 Aug 2019 14:13:30 +0200
+Subject: mvpp2: fix panic on module removal
+
+From: Matteo Croce <mcroce@redhat.com>
+
+[ Upstream commit 944a83a2669ae8aa2c7664e79376ca7468eb0a2b ]
+
+mvpp2 uses a delayed workqueue to gather traffic statistics.
+On module removal the workqueue can be destroyed before calling
+cancel_delayed_work_sync() on its works.
+Fix it by moving the destroy_workqueue() call after mvpp2_port_remove().
+Also remove an unneeded call to flush_workqueue().
+
+ # rmmod mvpp2
+ [ 2743.311722] mvpp2 f4000000.ethernet eth1: phy link down 10gbase-kr/10Gbps/Full
+ [ 2743.320063] mvpp2 f4000000.ethernet eth1: Link is Down
+ [ 2743.572263] mvpp2 f4000000.ethernet eth2: phy link down sgmii/1Gbps/Full
+ [ 2743.580076] mvpp2 f4000000.ethernet eth2: Link is Down
+ [ 2744.102169] mvpp2 f2000000.ethernet eth0: phy link down 10gbase-kr/10Gbps/Full
+ [ 2744.110441] mvpp2 f2000000.ethernet eth0: Link is Down
+ [ 2744.115614] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
+ [ 2744.115615] Mem abort info:
+ [ 2744.115616] ESR = 0x96000005
+ [ 2744.115617] Exception class = DABT (current EL), IL = 32 bits
+ [ 2744.115618] SET = 0, FnV = 0
+ [ 2744.115619] EA = 0, S1PTW = 0
+ [ 2744.115620] Data abort info:
+ [ 2744.115621] ISV = 0, ISS = 0x00000005
+ [ 2744.115622] CM = 0, WnR = 0
+ [ 2744.115624] user pgtable: 4k pages, 39-bit VAs, pgdp=0000000422681000
+ [ 2744.115626] [0000000000000000] pgd=0000000000000000, pud=0000000000000000
+ [ 2744.115630] Internal error: Oops: 96000005 [#1] SMP
+ [ 2744.115632] Modules linked in: mvpp2(-) algif_hash af_alg nls_iso8859_1 nls_cp437 vfat fat xhci_plat_hcd m25p80 spi_nor xhci_hcd mtd usbcore i2c_mv64xxx sfp usb_common marvell10g phy_generic spi_orion mdio_i2c i2c_core mvmdio phylink sbsa_gwdt ip_tables x_tables autofs4 [last unloaded: mvpp2]
+ [ 2744.115654] CPU: 3 PID: 8357 Comm: kworker/3:2 Not tainted 5.3.0-rc2 #1
+ [ 2744.115655] Hardware name: Marvell 8040 MACCHIATOBin Double-shot (DT)
+ [ 2744.115665] Workqueue: events_power_efficient phylink_resolve [phylink]
+ [ 2744.115669] pstate: a0000085 (NzCv daIf -PAN -UAO)
+ [ 2744.115675] pc : __queue_work+0x9c/0x4d8
+ [ 2744.115677] lr : __queue_work+0x170/0x4d8
+ [ 2744.115678] sp : ffffff801001bd50
+ [ 2744.115680] x29: ffffff801001bd50 x28: ffffffc422597600
+ [ 2744.115684] x27: ffffff80109ae6f0 x26: ffffff80108e4018
+ [ 2744.115688] x25: 0000000000000003 x24: 0000000000000004
+ [ 2744.115691] x23: ffffff80109ae6e0 x22: 0000000000000017
+ [ 2744.115694] x21: ffffffc42c030000 x20: ffffffc42209e8f8
+ [ 2744.115697] x19: 0000000000000000 x18: 0000000000000000
+ [ 2744.115699] x17: 0000000000000000 x16: 0000000000000000
+ [ 2744.115701] x15: 0000000000000010 x14: ffffffffffffffff
+ [ 2744.115702] x13: ffffff8090e2b95f x12: ffffff8010e2b967
+ [ 2744.115704] x11: ffffff8010906000 x10: 0000000000000040
+ [ 2744.115706] x9 : ffffff80109223b8 x8 : ffffff80109223b0
+ [ 2744.115707] x7 : ffffffc42bc00068 x6 : 0000000000000000
+ [ 2744.115709] x5 : ffffffc42bc00000 x4 : 0000000000000000
+ [ 2744.115710] x3 : 0000000000000000 x2 : 0000000000000000
+ [ 2744.115712] x1 : 0000000000000008 x0 : ffffffc42c030000
+ [ 2744.115714] Call trace:
+ [ 2744.115716] __queue_work+0x9c/0x4d8
+ [ 2744.115718] delayed_work_timer_fn+0x28/0x38
+ [ 2744.115722] call_timer_fn+0x3c/0x180
+ [ 2744.115723] expire_timers+0x60/0x168
+ [ 2744.115724] run_timer_softirq+0xbc/0x1e8
+ [ 2744.115727] __do_softirq+0x128/0x320
+ [ 2744.115731] irq_exit+0xa4/0xc0
+ [ 2744.115734] __handle_domain_irq+0x70/0xc0
+ [ 2744.115735] gic_handle_irq+0x58/0xa8
+ [ 2744.115737] el1_irq+0xb8/0x140
+ [ 2744.115738] console_unlock+0x3a0/0x568
+ [ 2744.115740] vprintk_emit+0x200/0x2a0
+ [ 2744.115744] dev_vprintk_emit+0x1c8/0x1e4
+ [ 2744.115747] dev_printk_emit+0x6c/0x7c
+ [ 2744.115751] __netdev_printk+0x104/0x1d8
+ [ 2744.115752] netdev_printk+0x60/0x70
+ [ 2744.115756] phylink_resolve+0x38c/0x3c8 [phylink]
+ [ 2744.115758] process_one_work+0x1f8/0x448
+ [ 2744.115760] worker_thread+0x54/0x500
+ [ 2744.115762] kthread+0x12c/0x130
+ [ 2744.115764] ret_from_fork+0x10/0x1c
+ [ 2744.115768] Code: aa1403e0 97fffbbe aa0003f5 b4000700 (f9400261)
+
+Fixes: 118d6298f6f0 ("net: mvpp2: add ethtool GOP statistics")
+Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
+Signed-off-by: Matteo Croce <mcroce@redhat.com>
+Acked-by: Antoine Tenart <antoine.tenart@bootlin.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5358,9 +5358,6 @@ static int mvpp2_remove(struct platform_
+
+ mvpp2_dbgfs_cleanup(priv);
+
+- flush_workqueue(priv->stats_queue);
+- destroy_workqueue(priv->stats_queue);
+-
+ fwnode_for_each_available_child_node(fwnode, port_fwnode) {
+ if (priv->port_list[i]) {
+ mutex_destroy(&priv->port_list[i]->gather_stats_lock);
+@@ -5369,6 +5366,8 @@ static int mvpp2_remove(struct platform_
+ i++;
+ }
+
++ destroy_workqueue(priv->stats_queue);
++
+ for (i = 0; i < MVPP2_BM_POOLS_NUM; i++) {
+ struct mvpp2_bm_pool *bm_pool = &priv->bm_pools[i];
+
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Matteo Croce <mcroce@redhat.com>
+Date: Sun, 28 Jul 2019 02:46:45 +0200
+Subject: mvpp2: refactor MTU change code
+
+From: Matteo Croce <mcroce@redhat.com>
+
+[ Upstream commit 230bd958c2c846ee292aa38bc6b006296c24ca01 ]
+
+The MTU change code can call napi_disable() with the device already down,
+leading to a deadlock. Also, a lot of code is duplicated unnecessarily.
+
+Rework mvpp2_change_mtu() to avoid the deadlock and remove duplicated code.
+
+Fixes: 3f518509dedc ("ethernet: Add new driver for Marvell Armada 375 network unit")
+Signed-off-by: Matteo Croce <mcroce@redhat.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 41 +++++++-----------------
+ 1 file changed, 13 insertions(+), 28 deletions(-)
+
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -3501,6 +3501,7 @@ static int mvpp2_set_mac_address(struct
+ static int mvpp2_change_mtu(struct net_device *dev, int mtu)
+ {
+ struct mvpp2_port *port = netdev_priv(dev);
++ bool running = netif_running(dev);
+ int err;
+
+ if (!IS_ALIGNED(MVPP2_RX_PKT_SIZE(mtu), 8)) {
+@@ -3509,40 +3510,24 @@ static int mvpp2_change_mtu(struct net_d
+ mtu = ALIGN(MVPP2_RX_PKT_SIZE(mtu), 8);
+ }
+
+- if (!netif_running(dev)) {
+- err = mvpp2_bm_update_mtu(dev, mtu);
+- if (!err) {
+- port->pkt_size = MVPP2_RX_PKT_SIZE(mtu);
+- return 0;
+- }
++ if (running)
++ mvpp2_stop_dev(port);
+
++ err = mvpp2_bm_update_mtu(dev, mtu);
++ if (err) {
++ netdev_err(dev, "failed to change MTU\n");
+ /* Reconfigure BM to the original MTU */
+- err = mvpp2_bm_update_mtu(dev, dev->mtu);
+- if (err)
+- goto log_error;
++ mvpp2_bm_update_mtu(dev, dev->mtu);
++ } else {
++ port->pkt_size = MVPP2_RX_PKT_SIZE(mtu);
+ }
+
+- mvpp2_stop_dev(port);
+-
+- err = mvpp2_bm_update_mtu(dev, mtu);
+- if (!err) {
+- port->pkt_size = MVPP2_RX_PKT_SIZE(mtu);
+- goto out_start;
++ if (running) {
++ mvpp2_start_dev(port);
++ mvpp2_egress_enable(port);
++ mvpp2_ingress_enable(port);
+ }
+
+- /* Reconfigure BM to the original MTU */
+- err = mvpp2_bm_update_mtu(dev, dev->mtu);
+- if (err)
+- goto log_error;
+-
+-out_start:
+- mvpp2_start_dev(port);
+- mvpp2_egress_enable(port);
+- mvpp2_ingress_enable(port);
+-
+- return 0;
+-log_error:
+- netdev_err(dev, "failed to change MTU\n");
+ return err;
+ }
+
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
+Date: Mon, 29 Jul 2019 12:28:41 +0300
+Subject: net: bridge: delete local fdb on device init failure
+
+From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
+
+[ Upstream commit d7bae09fa008c6c9a489580db0a5a12063b97f97 ]
+
+On initialization failure we have to delete the local fdb which was
+inserted due to the default pvid creation. This problem has been present
+since the inception of default_pvid. Note that currently there are 2 cases:
+1) in br_dev_init() when br_multicast_init() fails
+2) if register_netdevice() fails after calling ndo_init()
+
+This patch takes care of both since br_vlan_flush() is called on both
+occasions. Also the new fdb delete would be a no-op on normal bridge
+device destruction since the local fdb would've been already flushed by
+br_dev_delete(). This is not an issue for ports since nbp_vlan_init() is
+called last when adding a port thus nothing can fail after it.
+
+Reported-by: syzbot+88533dc8b582309bf3ee@syzkaller.appspotmail.com
+Fixes: 5be5a2df40f0 ("bridge: Add filtering support for default_pvid")
+Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/bridge/br_vlan.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -677,6 +677,11 @@ void br_vlan_flush(struct net_bridge *br
+
+ ASSERT_RTNL();
+
++ /* delete auto-added default pvid local fdb before flushing vlans
++ * otherwise it will be leaked on bridge device init failure
++ */
++ br_fdb_delete_by_port(br, NULL, 0, 1);
++
+ vg = br_vlan_group(br);
+ __vlan_flush(vg);
+ RCU_INIT_POINTER(br->vlgrp, NULL);
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
+Date: Tue, 30 Jul 2019 14:21:00 +0300
+Subject: net: bridge: mcast: don't delete permanent entries when fast leave is enabled
+
+From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
+
+[ Upstream commit 5c725b6b65067909548ac9ca9bc777098ec9883d ]
+
+When permanent entries were introduced by the commit below, they were
+exempt from timing out and thus igmp leave wouldn't affect them, unless
+fast leave was enabled on the port (an option which was added before
+permanent entries existed). It shouldn't matter whether fast leave is
+enabled or not: if the user added a permanent entry, it shouldn't be
+deleted on igmp leave.
+
+Before:
+$ echo 1 > /sys/class/net/eth4/brport/multicast_fast_leave
+$ bridge mdb add dev br0 port eth4 grp 229.1.1.1 permanent
+$ bridge mdb show
+dev br0 port eth4 grp 229.1.1.1 permanent
+
+< join and leave 229.1.1.1 on eth4 >
+
+$ bridge mdb show
+$
+
+After:
+$ echo 1 > /sys/class/net/eth4/brport/multicast_fast_leave
+$ bridge mdb add dev br0 port eth4 grp 229.1.1.1 permanent
+$ bridge mdb show
+dev br0 port eth4 grp 229.1.1.1 permanent
+
+< join and leave 229.1.1.1 on eth4 >
+
+$ bridge mdb show
+dev br0 port eth4 grp 229.1.1.1 permanent
+
+Fixes: ccb1c31a7a87 ("bridge: add flags to distinguish permanent mdb entires")
+Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/bridge/br_multicast.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1621,6 +1621,9 @@ br_multicast_leave_group(struct net_brid
+ if (!br_port_group_equal(p, port, src))
+ continue;
+
++ if (p->flags & MDB_PG_FLAGS_PERMANENT)
++ break;
++
+ rcu_assign_pointer(*pp, p->next);
+ hlist_del_init(&p->mglist);
+ del_timer(&p->timer);
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Jiri Pirko <jiri@mellanox.com>
+Date: Sun, 28 Jul 2019 14:56:36 +0200
+Subject: net: fix ifindex collision during namespace removal
+
+From: Jiri Pirko <jiri@mellanox.com>
+
+[ Upstream commit 55b40dbf0e76b4bfb9d8b3a16a0208640a9a45df ]
+
+Commit aca51397d014 ("netns: Fix arbitrary net_device-s corruptions
+on net_ns stop.") introduced a possibility to hit a BUG in case device
+is returning back to init_net and two following conditions are met:
+1) dev->ifindex value is used in a name of another "dev%d"
+ device in init_net.
+2) dev->name is used by another device in init_net.
+
+Under real life circumstances this is hard to get. Therefore this has
+been present happily for over 10 years. To reproduce:
+
+$ ip a
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
+ link/ether 86:89:3f:86:61:29 brd ff:ff:ff:ff:ff:ff
+3: enp0s2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
+ link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
+$ ip netns add ns1
+$ ip -n ns1 link add dummy1ns1 type dummy
+$ ip -n ns1 link add dummy2ns1 type dummy
+$ ip link set enp0s2 netns ns1
+$ ip -n ns1 link set enp0s2 name dummy0
+[ 100.858894] virtio_net virtio0 dummy0: renamed from enp0s2
+$ ip link add dev4 type dummy
+$ ip -n ns1 a
+1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+2: dummy1ns1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
+ link/ether 16:63:4c:38:3e:ff brd ff:ff:ff:ff:ff:ff
+3: dummy2ns1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
+ link/ether aa:9e:86:dd:6b:5d brd ff:ff:ff:ff:ff:ff
+4: dummy0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
+ link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
+$ ip a
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
+ link/ether 86:89:3f:86:61:29 brd ff:ff:ff:ff:ff:ff
+4: dev4: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
+ link/ether 5a:e1:4a:b6:ec:f8 brd ff:ff:ff:ff:ff:ff
+$ ip netns del ns1
+[ 158.717795] default_device_exit: failed to move dummy0 to init_net: -17
+[ 158.719316] ------------[ cut here ]------------
+[ 158.720591] kernel BUG at net/core/dev.c:9824!
+[ 158.722260] invalid opcode: 0000 [#1] SMP KASAN PTI
+[ 158.723728] CPU: 0 PID: 56 Comm: kworker/u2:1 Not tainted 5.3.0-rc1+ #18
+[ 158.725422] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-2.fc30 04/01/2014
+[ 158.727508] Workqueue: netns cleanup_net
+[ 158.728915] RIP: 0010:default_device_exit.cold+0x1d/0x1f
+[ 158.730683] Code: 84 e8 18 c9 3e fe 0f 0b e9 70 90 ff ff e8 36 e4 52 fe 89 d9 4c 89 e2 48 c7 c6 80 d6 25 84 48 c7 c7 20 c0 25 84 e8 f4 c8 3e
+[ 158.736854] RSP: 0018:ffff8880347e7b90 EFLAGS: 00010282
+[ 158.738752] RAX: 000000000000003b RBX: 00000000ffffffef RCX: 0000000000000000
+[ 158.741369] RDX: 0000000000000000 RSI: ffffffff8128013d RDI: ffffed10068fcf64
+[ 158.743418] RBP: ffff888033550170 R08: 000000000000003b R09: fffffbfff0b94b9c
+[ 158.745626] R10: fffffbfff0b94b9b R11: ffffffff85ca5cdf R12: ffff888032f28000
+[ 158.748405] R13: dffffc0000000000 R14: ffff8880335501b8 R15: 1ffff110068fcf72
+[ 158.750638] FS: 0000000000000000(0000) GS:ffff888036000000(0000) knlGS:0000000000000000
+[ 158.752944] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+[ 158.755245] CR2: 00007fe8b45d21d0 CR3: 00000000340b4005 CR4: 0000000000360ef0
+[ 158.757654] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+[ 158.760012] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
+[ 158.762758] Call Trace:
+[ 158.763882] ? dev_change_net_namespace+0xbb0/0xbb0
+[ 158.766148] ? devlink_nl_cmd_set_doit+0x520/0x520
+[ 158.768034] ? dev_change_net_namespace+0xbb0/0xbb0
+[ 158.769870] ops_exit_list.isra.0+0xa8/0x150
+[ 158.771544] cleanup_net+0x446/0x8f0
+[ 158.772945] ? unregister_pernet_operations+0x4a0/0x4a0
+[ 158.775294] process_one_work+0xa1a/0x1740
+[ 158.776896] ? pwq_dec_nr_in_flight+0x310/0x310
+[ 158.779143] ? do_raw_spin_lock+0x11b/0x280
+[ 158.780848] worker_thread+0x9e/0x1060
+[ 158.782500] ? process_one_work+0x1740/0x1740
+[ 158.784454] kthread+0x31b/0x420
+[ 158.786082] ? __kthread_create_on_node+0x3f0/0x3f0
+[ 158.788286] ret_from_fork+0x3a/0x50
+[ 158.789871] ---[ end trace defd6c657c71f936 ]---
+[ 158.792273] RIP: 0010:default_device_exit.cold+0x1d/0x1f
+[ 158.795478] Code: 84 e8 18 c9 3e fe 0f 0b e9 70 90 ff ff e8 36 e4 52 fe 89 d9 4c 89 e2 48 c7 c6 80 d6 25 84 48 c7 c7 20 c0 25 84 e8 f4 c8 3e
+[ 158.804854] RSP: 0018:ffff8880347e7b90 EFLAGS: 00010282
+[ 158.807865] RAX: 000000000000003b RBX: 00000000ffffffef RCX: 0000000000000000
+[ 158.811794] RDX: 0000000000000000 RSI: ffffffff8128013d RDI: ffffed10068fcf64
+[ 158.816652] RBP: ffff888033550170 R08: 000000000000003b R09: fffffbfff0b94b9c
+[ 158.820930] R10: fffffbfff0b94b9b R11: ffffffff85ca5cdf R12: ffff888032f28000
+[ 158.825113] R13: dffffc0000000000 R14: ffff8880335501b8 R15: 1ffff110068fcf72
+[ 158.829899] FS: 0000000000000000(0000) GS:ffff888036000000(0000) knlGS:0000000000000000
+[ 158.834923] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+[ 158.838164] CR2: 00007fe8b45d21d0 CR3: 00000000340b4005 CR4: 0000000000360ef0
+[ 158.841917] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+[ 158.845149] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
+
+Fix this by checking if a device with the same name exists in init_net
+and falling back to the original code - dev%d to allocate the name - in
+case it does.
+
+This was found using syzkaller.
+
+Fixes: aca51397d014 ("netns: Fix arbitrary net_device-s corruptions on net_ns stop.")
+Signed-off-by: Jiri Pirko <jiri@mellanox.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/core/dev.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -9510,6 +9510,8 @@ static void __net_exit default_device_ex
+
+ /* Push remaining network devices to init_net */
+ snprintf(fb_name, IFNAMSIZ, "dev%d", dev->ifindex);
++ if (__dev_get_by_name(&init_net, fb_name))
++ snprintf(fb_name, IFNAMSIZ, "dev%%d");
+ err = dev_change_net_namespace(dev, &init_net, fb_name);
+ if (err) {
+ pr_emerg("%s: failed to move %s to init_net: %d\n",
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Edward Srouji <edwards@mellanox.com>
+Date: Tue, 23 Jul 2019 10:12:55 +0300
+Subject: net/mlx5: Fix modify_cq_in alignment
+
+From: Edward Srouji <edwards@mellanox.com>
+
+[ Upstream commit 7a32f2962c56d9d8a836b4469855caeee8766bd4 ]
+
+Fix modify_cq_in alignment to match the device specification.
+After this fix the 'cq_umem_valid' field will be at the right offset.
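+
+As a quick sanity check (not part of the patch), the original 0x600-bit
+reserved block starting at bit offset 0x280 is only re-split, so the
+total width of the structure is unchanged:
+
+  0x60 + 0x1 (cq_umem_valid) + 0x1f + 0x580 = 0x600 bits
+
+and cq_umem_valid ends up at bit offset 0x280 + 0x60 = 0x2e0, matching
+the reserved_at_2e1 and reserved_at_300 fields in the hunk below.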
+
+Cc: <stable@vger.kernel.org> # 4.19
+Fixes: bd37197554eb ("net/mlx5: Update mlx5_ifc with DEVX UID bits")
+Signed-off-by: Edward Srouji <edwards@mellanox.com>
+Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
+Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
+Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/mlx5/mlx5_ifc.h | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -5623,7 +5623,12 @@ struct mlx5_ifc_modify_cq_in_bits {
+
+ struct mlx5_ifc_cqc_bits cq_context;
+
+- u8 reserved_at_280[0x600];
++ u8 reserved_at_280[0x60];
++
++ u8 cq_umem_valid[0x1];
++ u8 reserved_at_2e1[0x1f];
++
++ u8 reserved_at_300[0x580];
+
+ u8 pas[0][0x40];
+ };
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Mark Zhang <markz@mellanox.com>
+Date: Tue, 9 Jul 2019 05:37:12 +0300
+Subject: net/mlx5: Use reversed order when unregister devices
+
+From: Mark Zhang <markz@mellanox.com>
+
+[ Upstream commit 08aa5e7da6bce1a1963f63cf32c2e7ad434ad578 ]
+
+When lag is active, which is controlled by the bonded mlx5e netdev, mlx5
+interface unregistering must happen in the reverse order, where rdma is
+unregistered (unloaded) first, to guarantee all references to the lag
+context in hardware are removed, and then the mlx5e netdev interface is
+removed, which will clean up the lag context from hardware.
+
+Without this fix during destroy of LAG interface, we observed following
+errors:
+ * mlx5_cmd_check:752:(pid 12556): DESTROY_LAG(0x843) op_mod(0x0) failed,
+ status bad parameter(0x3), syndrome (0xe4ac33)
+ * mlx5_cmd_check:752:(pid 12556): DESTROY_LAG(0x843) op_mod(0x0) failed,
+ status bad parameter(0x3), syndrome (0xa5aee8).
+
+Fixes: a31208b1e11d ("net/mlx5_core: New init and exit flow for mlx5_core")
+Reviewed-by: Parav Pandit <parav@mellanox.com>
+Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
+Signed-off-by: Mark Zhang <markz@mellanox.com>
+Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/mellanox/mlx5/core/dev.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -307,7 +307,7 @@ void mlx5_unregister_device(struct mlx5_
+ struct mlx5_interface *intf;
+
+ mutex_lock(&mlx5_intf_mutex);
+- list_for_each_entry(intf, &intf_list, list)
++ list_for_each_entry_reverse(intf, &intf_list, list)
+ mlx5_remove_device(intf, priv);
+ list_del(&priv->dev_list);
+ mutex_unlock(&mlx5_intf_mutex);
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Qian Cai <cai@lca.pw>
+Date: Thu, 1 Aug 2019 09:52:54 -0400
+Subject: net/mlx5e: always initialize frag->last_in_page
+
+From: Qian Cai <cai@lca.pw>
+
+[ Upstream commit 60d60c8fbd8d1acf25b041ecd72ae4fa16e9405b ]
+
+The commit 069d11465a80 ("net/mlx5e: RX, Enhance legacy Receive Queue
+memory scheme") introduced an undefined behaviour below due to
+"frag->last_in_page" is only initialized in mlx5e_init_frags_partition()
+when,
+
+if (next_frag.offset + frag_info[f].frag_stride > PAGE_SIZE)
+
+or after bailing out of the loop,
+
+for (i = 0; i < mlx5_wq_cyc_get_size(&rq->wqe.wq); i++)
+
+As a result, some "frag" entries could have an uninitialized value of
+"last_in_page".
+
+Later, get_frag() obtains those "frag" entries and checks
+"frag->last_in_page" in mlx5e_put_rx_frag(), triggering the error during
+boot. Fix it by always
+initializing "frag->last_in_page" to "false" in
+mlx5e_init_frags_partition().
+
+UBSAN: Undefined behaviour in
+drivers/net/ethernet/mellanox/mlx5/core/en_rx.c:325:12
+load of value 170 is not a valid value for type 'bool' (aka '_Bool')
+Call trace:
+ dump_backtrace+0x0/0x264
+ show_stack+0x20/0x2c
+ dump_stack+0xb0/0x104
+ __ubsan_handle_load_invalid_value+0x104/0x128
+ mlx5e_handle_rx_cqe+0x8e8/0x12cc [mlx5_core]
+ mlx5e_poll_rx_cq+0xca8/0x1a94 [mlx5_core]
+ mlx5e_napi_poll+0x17c/0xa30 [mlx5_core]
+ net_rx_action+0x248/0x940
+ __do_softirq+0x350/0x7b8
+ irq_exit+0x200/0x26c
+ __handle_domain_irq+0xc8/0x128
+ gic_handle_irq+0x138/0x228
+ el1_irq+0xb8/0x140
+ arch_cpu_idle+0x1a4/0x348
+ do_idle+0x114/0x1b0
+ cpu_startup_entry+0x24/0x28
+ rest_init+0x1ac/0x1dc
+ arch_call_rest_init+0x10/0x18
+ start_kernel+0x4d4/0x57c
+
+Fixes: 069d11465a80 ("net/mlx5e: RX, Enhance legacy Receive Queue memory scheme")
+Signed-off-by: Qian Cai <cai@lca.pw>
+Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -420,12 +420,11 @@ static inline u64 mlx5e_get_mpwqe_offset
+
+ static void mlx5e_init_frags_partition(struct mlx5e_rq *rq)
+ {
+- struct mlx5e_wqe_frag_info next_frag, *prev;
++ struct mlx5e_wqe_frag_info next_frag = {};
++ struct mlx5e_wqe_frag_info *prev = NULL;
+ int i;
+
+ next_frag.di = &rq->wqe.di[0];
+- next_frag.offset = 0;
+- prev = NULL;
+
+ for (i = 0; i < mlx5_wq_cyc_get_size(&rq->wqe.wq); i++) {
+ struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:41 AM CEST
+From: Ariel Levkovich <lariel@mellanox.com>
+Date: Sat, 6 Jul 2019 18:06:15 +0300
+Subject: net/mlx5e: Prevent encap flow counter update async to user query
+
+From: Ariel Levkovich <lariel@mellanox.com>
+
+[ Upstream commit 90bb769291161cf25a818d69cf608c181654473e ]
+
+This patch prevents a race between user invoked cached counters
+query and a neighbor last usage updater.
+
+The cached flow counter stats can be queried by calling
+"mlx5_fc_query_cached" which provides the number of bytes and
+packets that passed via this flow since the last time this counter
+was queried.
+It does so by subtracting the last saved stats from the current, cached
+stats and then updating the last saved stats with the cached stats.
+It also provides the lastuse value for that flow.
+
+Since "mlx5e_tc_update_neigh_used_value" needs to retrieve the
+last usage time of encapsulation flows, it calls the flow counter
+query method periodically and async to user queries of the flow counter
+using cls_flower.
+This call is causing the driver to update the last reported bytes and
+packets from the cache and therefore, future user queries of the flow
+stats will return lower-than-expected numbers for bytes and packets,
+since the last saved stats in the driver were updated async to the last
+saved stats in cls_flower.
+
+This causes wrong stats presentation of encapsulation flows to user.
+
+Since the neighbor usage updater only needs the lastuse stats from the
+cached counter, the fix is to use a dedicated lastuse query call that
+returns the lastuse value without synching between the cached stats and
+the last saved stats.
+
+Fixes: f6dfb4c3f216 ("net/mlx5e: Update neighbour 'used' state using HW flow rules counters")
+Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
+Reviewed-by: Roi Dayan <roid@mellanox.com>
+Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 4 ++--
+ drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c | 5 +++++
+ include/linux/mlx5/fs.h | 1 +
+ 3 files changed, 8 insertions(+), 2 deletions(-)
+
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -992,13 +992,13 @@ void mlx5e_tc_encap_flows_del(struct mlx
+ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe)
+ {
+ struct mlx5e_neigh *m_neigh = &nhe->m_neigh;
+- u64 bytes, packets, lastuse = 0;
+ struct mlx5e_tc_flow *flow;
+ struct mlx5e_encap_entry *e;
+ struct mlx5_fc *counter;
+ struct neigh_table *tbl;
+ bool neigh_used = false;
+ struct neighbour *n;
++ u64 lastuse;
+
+ if (m_neigh->family == AF_INET)
+ tbl = &arp_tbl;
+@@ -1015,7 +1015,7 @@ void mlx5e_tc_update_neigh_used_value(st
+ list_for_each_entry(flow, &e->flows, encap) {
+ if (flow->flags & MLX5E_TC_FLOW_OFFLOADED) {
+ counter = mlx5_flow_rule_counter(flow->rule[0]);
+- mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);
++ lastuse = mlx5_fc_query_lastuse(counter);
+ if (time_after((unsigned long)lastuse, nhe->reported_lastuse)) {
+ neigh_used = true;
+ break;
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+@@ -321,6 +321,11 @@ int mlx5_fc_query(struct mlx5_core_dev *
+ }
+ EXPORT_SYMBOL(mlx5_fc_query);
+
++u64 mlx5_fc_query_lastuse(struct mlx5_fc *counter)
++{
++ return counter->cache.lastuse;
++}
++
+ void mlx5_fc_query_cached(struct mlx5_fc *counter,
+ u64 *bytes, u64 *packets, u64 *lastuse)
+ {
+--- a/include/linux/mlx5/fs.h
++++ b/include/linux/mlx5/fs.h
+@@ -188,6 +188,7 @@ int mlx5_modify_rule_destination(struct
+ struct mlx5_fc *mlx5_flow_rule_counter(struct mlx5_flow_handle *handler);
+ struct mlx5_fc *mlx5_fc_create(struct mlx5_core_dev *dev, bool aging);
+ void mlx5_fc_destroy(struct mlx5_core_dev *dev, struct mlx5_fc *counter);
++u64 mlx5_fc_query_lastuse(struct mlx5_fc *counter);
+ void mlx5_fc_query_cached(struct mlx5_fc *counter,
+ u64 *bytes, u64 *packets, u64 *lastuse);
+ int mlx5_fc_query(struct mlx5_core_dev *dev, struct mlx5_fc *counter,
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: "René van Dorst" <opensource@vdorst.com>
+Date: Sat, 27 Jul 2019 11:40:11 +0200
+Subject: net: phylink: Fix flow control for fixed-link
+
+From: "René van Dorst" <opensource@vdorst.com>
+
+[ Upstream commit 8aace4f3eba2a3ceb431e18683ea0e1ecbade5cd ]
+
+In phylink_parse_fixedlink() the pl->link_config.advertising bits are
+ANDed with pl->supported, while pl->supported is zeroed and only the
+speed/duplex modes and MII bits are set.
+So pl->link_config.advertising always loses the flow control/pause bits.
+
+By setting the Pause and Asym_Pause bits in pl->supported, flow control
+works again when the devicetree "pause" property is set in the fixed-link
+node and the MAC advertises that it supports pause.
+
+Results with this patch.
+
+Legend:
+- DT = 'Pause' is set in the fixed-link in devicetree.
+- validate() = ‘Yes’ means phylink_set(mask, Pause) is set in the
+ validate().
+- flow = results reported my link is Up line.
+
++-----+------------+-------+
+| DT | validate() | flow |
++-----+------------+-------+
+| Yes | Yes | rx/tx |
+| No | Yes | off |
+| Yes | No | off |
++-----+------------+-------+
+
+Fixes: 9525ae83959b ("phylink: add phylink infrastructure")
+Signed-off-by: René van Dorst <opensource@vdorst.com>
+Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/phy/phylink.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -226,6 +226,8 @@ static int phylink_parse_fixedlink(struc
+ __ETHTOOL_LINK_MODE_MASK_NBITS, true);
+ linkmode_zero(pl->supported);
+ phylink_set(pl->supported, MII);
++ phylink_set(pl->supported, Pause);
++ phylink_set(pl->supported, Asym_Pause);
+ if (s) {
+ __set_bit(s->bit, pl->supported);
+ } else {
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
+Date: Thu, 25 Jul 2019 12:07:12 -0600
+Subject: net: qualcomm: rmnet: Fix incorrect UL checksum offload logic
+
+From: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
+
+[ Upstream commit a7cf3d24ee6081930feb4c830a7f6f16ebe31c49 ]
+
+The udp_ip4_ind bit is set only for IPv4 UDP non-fragmented packets
+so that the hardware can flip the checksum to 0xFFFF if the computed
+checksum is 0 per RFC768.
+
+However, this bit had to be set for IPv6 UDP non-fragmented packets
+as well, per hardware requirements. Otherwise, IPv6 UDP packets with a
+computed checksum of 0 were transmitted by hardware as-is and were
+dropped in the network (IPv6 makes the UDP checksum mandatory, so an
+all-zero checksum on the wire is invalid).
+
+In addition to setting this bit for IPv6 UDP, the field is also
+appropriately renamed to udp_ind as part of this change.
+
+Fixes: 5eb5f8608ef1 ("net: qualcomm: rmnet: Add support for TX checksum offload")
+Cc: Sean Tranchetti <stranche@codeaurora.org>
+Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h | 2 +-
+ drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 13 +++++++++----
+ 2 files changed, 10 insertions(+), 5 deletions(-)
+
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+@@ -59,7 +59,7 @@ struct rmnet_map_dl_csum_trailer {
+ struct rmnet_map_ul_csum_header {
+ __be16 csum_start_offset;
+ u16 csum_insert_offset:14;
+- u16 udp_ip4_ind:1;
++ u16 udp_ind:1;
+ u16 csum_enabled:1;
+ } __aligned(1);
+
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+@@ -215,9 +215,9 @@ rmnet_map_ipv4_ul_csum_header(void *iphd
+ ul_header->csum_insert_offset = skb->csum_offset;
+ ul_header->csum_enabled = 1;
+ if (ip4h->protocol == IPPROTO_UDP)
+- ul_header->udp_ip4_ind = 1;
++ ul_header->udp_ind = 1;
+ else
+- ul_header->udp_ip4_ind = 0;
++ ul_header->udp_ind = 0;
+
+ /* Changing remaining fields to network order */
+ hdr++;
+@@ -248,6 +248,7 @@ rmnet_map_ipv6_ul_csum_header(void *ip6h
+ struct rmnet_map_ul_csum_header *ul_header,
+ struct sk_buff *skb)
+ {
++ struct ipv6hdr *ip6h = (struct ipv6hdr *)ip6hdr;
+ __be16 *hdr = (__be16 *)ul_header, offset;
+
+ offset = htons((__force u16)(skb_transport_header(skb) -
+@@ -255,7 +256,11 @@ rmnet_map_ipv6_ul_csum_header(void *ip6h
+ ul_header->csum_start_offset = offset;
+ ul_header->csum_insert_offset = skb->csum_offset;
+ ul_header->csum_enabled = 1;
+- ul_header->udp_ip4_ind = 0;
++
++ if (ip6h->nexthdr == IPPROTO_UDP)
++ ul_header->udp_ind = 1;
++ else
++ ul_header->udp_ind = 0;
+
+ /* Changing remaining fields to network order */
+ hdr++;
+@@ -428,7 +433,7 @@ sw_csum:
+ ul_header->csum_start_offset = 0;
+ ul_header->csum_insert_offset = 0;
+ ul_header->csum_enabled = 0;
+- ul_header->udp_ip4_ind = 0;
++ ul_header->udp_ind = 0;
+
+ priv->stats.csum_sw++;
+ }
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Jia-Ju Bai <baijiaju1990@gmail.com>
+Date: Mon, 29 Jul 2019 16:24:33 +0800
+Subject: net: sched: Fix a possible null-pointer dereference in dequeue_func()
+
+From: Jia-Ju Bai <baijiaju1990@gmail.com>
+
+[ Upstream commit 051c7b39be4a91f6b7d8c4548444e4b850f1f56c ]
+
+In dequeue_func(), there is an if statement on line 74 to check whether
+skb is NULL:
+ if (skb)
+
+Even when skb is NULL, it is still dereferenced on line 77:
+ prefetch(&skb->end);
+
+Thus, a possible null-pointer dereference may occur.
+
+To fix this bug, skb->end is only prefetched when skb is not NULL.
+
+This bug is found by a static analysis tool STCheck written by us.
+
+Fixes: 76e3cc126bb2 ("codel: Controlled Delay AQM")
+Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
+Reviewed-by: Jiri Pirko <jiri@mellanox.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/sched/sch_codel.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/net/sched/sch_codel.c
++++ b/net/sched/sch_codel.c
+@@ -71,10 +71,10 @@ static struct sk_buff *dequeue_func(stru
+ struct Qdisc *sch = ctx;
+ struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);
+
+- if (skb)
++ if (skb) {
+ sch->qstats.backlog -= qdisc_pkt_len(skb);
+-
+- prefetch(&skb->end); /* we'll need skb_shinfo() */
++ prefetch(&skb->end); /* we'll need skb_shinfo() */
++ }
+ return skb;
+ }
+
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Roman Mashak <mrv@mojatatu.com>
+Date: Fri, 2 Aug 2019 15:16:46 -0400
+Subject: net sched: update vlan action for batched events operations
+
+From: Roman Mashak <mrv@mojatatu.com>
+
+[ Upstream commit b35475c5491a14c8ce7a5046ef7bcda8a860581a ]
+
+Add get_fill_size() routine used to calculate the action size
+when building a batch of events.
+
+Fixes: c7e2b9689 ("sched: introduce vlan action")
+Signed-off-by: Roman Mashak <mrv@mojatatu.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/sched/act_vlan.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+--- a/net/sched/act_vlan.c
++++ b/net/sched/act_vlan.c
+@@ -296,6 +296,14 @@ static int tcf_vlan_search(struct net *n
+ return tcf_idr_search(tn, a, index);
+ }
+
++static size_t tcf_vlan_get_fill_size(const struct tc_action *act)
++{
++ return nla_total_size(sizeof(struct tc_vlan))
++ + nla_total_size(sizeof(u16)) /* TCA_VLAN_PUSH_VLAN_ID */
++ + nla_total_size(sizeof(u16)) /* TCA_VLAN_PUSH_VLAN_PROTOCOL */
++ + nla_total_size(sizeof(u8)); /* TCA_VLAN_PUSH_VLAN_PRIORITY */
++}
++
+ static struct tc_action_ops act_vlan_ops = {
+ .kind = "vlan",
+ .type = TCA_ACT_VLAN,
+@@ -305,6 +313,7 @@ static struct tc_action_ops act_vlan_ops
+ .init = tcf_vlan_init,
+ .cleanup = tcf_vlan_cleanup,
+ .walk = tcf_vlan_walker,
++ .get_fill_size = tcf_vlan_get_fill_size,
+ .lookup = tcf_vlan_search,
+ .size = sizeof(struct tcf_vlan),
+ };
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Dmytro Linkin <dmitrolin@mellanox.com>
+Date: Thu, 1 Aug 2019 13:02:51 +0000
+Subject: net: sched: use temporary variable for actions indexes
+
+From: Dmytro Linkin <dmitrolin@mellanox.com>
+
+[ Upstream commit 7be8ef2cdbfe41a2e524b7c6cc3f8e6cfaa906e4 ]
+
+Currently the init call of all actions (except ipt) initializes their
+'parm' structure as a direct pointer to the nla data in the skb. This
+leads to a race
+condition when some of the filter actions were initialized successfully
+(and were assigned with idr action index that was written directly
+into nla data), but then were deleted and retried (due to following
+action module missing or classifier-initiated retry), in which case
+action init code tries to insert action to idr with index that was
+assigned on previous iteration. During retry the index can be reused
+by another action that was inserted concurrently, which causes
+unintended action sharing between filters.
+To fix the described race condition, save the action idr index to a
+temporary stack-allocated variable instead of to the nla data.
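+
+Distilled, the change applied to each action's init routine looks
+roughly like the sketch below (TCA_FOO_PARMS and act_foo_ops are
+placeholder names, not actual identifiers from this patch):
+
+  u32 index;
+
+  parm = nla_data(tb[TCA_FOO_PARMS]);
+  index = parm->index;    /* work on a stack copy of the index ... */
+  err = tcf_idr_check_alloc(tn, &index, a, bind);
+  if (!err) {
+          ret = tcf_idr_create(tn, index, est, a, &act_foo_ops,
+                               bind, true);
+          if (ret) {
+                  /* ... so a retry never sees a stale index written
+                   * back into the nla data.
+                   */
+                  tcf_idr_cleanup(tn, index);
+                  return ret;
+          }
+  }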
+
+Fixes: 0190c1d452a9 ("net: sched: atomically check-allocate action")
+Signed-off-by: Dmytro Linkin <dmitrolin@mellanox.com>
+Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
+Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/sched/act_bpf.c | 9 +++++----
+ net/sched/act_connmark.c | 9 +++++----
+ net/sched/act_csum.c | 9 +++++----
+ net/sched/act_gact.c | 8 +++++---
+ net/sched/act_ife.c | 8 +++++---
+ net/sched/act_mirred.c | 13 +++++++------
+ net/sched/act_nat.c | 9 +++++----
+ net/sched/act_pedit.c | 10 ++++++----
+ net/sched/act_police.c | 8 +++++---
+ net/sched/act_sample.c | 10 +++++-----
+ net/sched/act_simple.c | 10 ++++++----
+ net/sched/act_skbedit.c | 11 ++++++-----
+ net/sched/act_skbmod.c | 11 ++++++-----
+ net/sched/act_tunnel_key.c | 8 +++++---
+ net/sched/act_vlan.c | 16 +++++++++-------
+ 15 files changed, 85 insertions(+), 64 deletions(-)
+
+--- a/net/sched/act_bpf.c
++++ b/net/sched/act_bpf.c
+@@ -287,6 +287,7 @@ static int tcf_bpf_init(struct net *net,
+ struct tcf_bpf *prog;
+ bool is_bpf, is_ebpf;
+ int ret, res = 0;
++ u32 index;
+
+ if (!nla)
+ return -EINVAL;
+@@ -299,13 +300,13 @@ static int tcf_bpf_init(struct net *net,
+ return -EINVAL;
+
+ parm = nla_data(tb[TCA_ACT_BPF_PARMS]);
+-
+- ret = tcf_idr_check_alloc(tn, &parm->index, act, bind);
++ index = parm->index;
++ ret = tcf_idr_check_alloc(tn, &index, act, bind);
+ if (!ret) {
+- ret = tcf_idr_create(tn, parm->index, est, act,
++ ret = tcf_idr_create(tn, index, est, act,
+ &act_bpf_ops, bind, true);
+ if (ret < 0) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+
+--- a/net/sched/act_connmark.c
++++ b/net/sched/act_connmark.c
+@@ -104,6 +104,7 @@ static int tcf_connmark_init(struct net
+ struct tcf_connmark_info *ci;
+ struct tc_connmark *parm;
+ int ret = 0;
++ u32 index;
+
+ if (!nla)
+ return -EINVAL;
+@@ -117,13 +118,13 @@ static int tcf_connmark_init(struct net
+ return -EINVAL;
+
+ parm = nla_data(tb[TCA_CONNMARK_PARMS]);
+-
+- ret = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ ret = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (!ret) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_connmark_ops, bind, false);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+
+--- a/net/sched/act_csum.c
++++ b/net/sched/act_csum.c
+@@ -55,6 +55,7 @@ static int tcf_csum_init(struct net *net
+ struct tc_csum *parm;
+ struct tcf_csum *p;
+ int ret = 0, err;
++ u32 index;
+
+ if (nla == NULL)
+ return -EINVAL;
+@@ -66,13 +67,13 @@ static int tcf_csum_init(struct net *net
+ if (tb[TCA_CSUM_PARMS] == NULL)
+ return -EINVAL;
+ parm = nla_data(tb[TCA_CSUM_PARMS]);
+-
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (!err) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_csum_ops, bind, true);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+ ret = ACT_P_CREATED;
+--- a/net/sched/act_gact.c
++++ b/net/sched/act_gact.c
+@@ -64,6 +64,7 @@ static int tcf_gact_init(struct net *net
+ struct tc_gact *parm;
+ struct tcf_gact *gact;
+ int ret = 0;
++ u32 index;
+ int err;
+ #ifdef CONFIG_GACT_PROB
+ struct tc_gact_p *p_parm = NULL;
+@@ -79,6 +80,7 @@ static int tcf_gact_init(struct net *net
+ if (tb[TCA_GACT_PARMS] == NULL)
+ return -EINVAL;
+ parm = nla_data(tb[TCA_GACT_PARMS]);
++ index = parm->index;
+
+ #ifndef CONFIG_GACT_PROB
+ if (tb[TCA_GACT_PROB] != NULL)
+@@ -91,12 +93,12 @@ static int tcf_gact_init(struct net *net
+ }
+ #endif
+
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (!err) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_gact_ops, bind, true);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+ ret = ACT_P_CREATED;
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -482,6 +482,7 @@ static int tcf_ife_init(struct net *net,
+ u8 *saddr = NULL;
+ bool exists = false;
+ int ret = 0;
++ u32 index;
+ int err;
+
+ if (!nla) {
+@@ -509,7 +510,8 @@ static int tcf_ife_init(struct net *net,
+ if (!p)
+ return -ENOMEM;
+
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0) {
+ kfree(p);
+ return err;
+@@ -521,10 +523,10 @@ static int tcf_ife_init(struct net *net,
+ }
+
+ if (!exists) {
+- ret = tcf_idr_create(tn, parm->index, est, a, &act_ife_ops,
++ ret = tcf_idr_create(tn, index, est, a, &act_ife_ops,
+ bind, true);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ kfree(p);
+ return ret;
+ }
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -104,6 +104,7 @@ static int tcf_mirred_init(struct net *n
+ struct net_device *dev;
+ bool exists = false;
+ int ret, err;
++ u32 index;
+
+ if (!nla) {
+ NL_SET_ERR_MSG_MOD(extack, "Mirred requires attributes to be passed");
+@@ -117,8 +118,8 @@ static int tcf_mirred_init(struct net *n
+ return -EINVAL;
+ }
+ parm = nla_data(tb[TCA_MIRRED_PARMS]);
+-
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+@@ -135,21 +136,21 @@ static int tcf_mirred_init(struct net *n
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ NL_SET_ERR_MSG_MOD(extack, "Unknown mirred option");
+ return -EINVAL;
+ }
+
+ if (!exists) {
+ if (!parm->ifindex) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ NL_SET_ERR_MSG_MOD(extack, "Specified device does not exist");
+ return -EINVAL;
+ }
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_mirred_ops, bind, true);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+ ret = ACT_P_CREATED;
+--- a/net/sched/act_nat.c
++++ b/net/sched/act_nat.c
+@@ -45,6 +45,7 @@ static int tcf_nat_init(struct net *net,
+ struct tc_nat *parm;
+ int ret = 0, err;
+ struct tcf_nat *p;
++ u32 index;
+
+ if (nla == NULL)
+ return -EINVAL;
+@@ -56,13 +57,13 @@ static int tcf_nat_init(struct net *net,
+ if (tb[TCA_NAT_PARMS] == NULL)
+ return -EINVAL;
+ parm = nla_data(tb[TCA_NAT_PARMS]);
+-
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (!err) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_nat_ops, bind, false);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+ ret = ACT_P_CREATED;
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -149,6 +149,7 @@ static int tcf_pedit_init(struct net *ne
+ struct tcf_pedit *p;
+ int ret = 0, err;
+ int ksize;
++ u32 index;
+
+ if (!nla) {
+ NL_SET_ERR_MSG_MOD(extack, "Pedit requires attributes to be passed");
+@@ -178,18 +179,19 @@ static int tcf_pedit_init(struct net *ne
+ if (IS_ERR(keys_ex))
+ return PTR_ERR(keys_ex);
+
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (!err) {
+ if (!parm->nkeys) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ NL_SET_ERR_MSG_MOD(extack, "Pedit requires keys to be passed");
+ ret = -EINVAL;
+ goto out_free;
+ }
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_pedit_ops, bind, false);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ goto out_free;
+ }
+ ret = ACT_P_CREATED;
+--- a/net/sched/act_police.c
++++ b/net/sched/act_police.c
+@@ -85,6 +85,7 @@ static int tcf_police_init(struct net *n
+ struct qdisc_rate_table *R_tab = NULL, *P_tab = NULL;
+ struct tc_action_net *tn = net_generic(net, police_net_id);
+ bool exists = false;
++ u32 index;
+ int size;
+
+ if (nla == NULL)
+@@ -101,7 +102,8 @@ static int tcf_police_init(struct net *n
+ return -EINVAL;
+
+ parm = nla_data(tb[TCA_POLICE_TBF]);
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+@@ -109,10 +111,10 @@ static int tcf_police_init(struct net *n
+ return 0;
+
+ if (!exists) {
+- ret = tcf_idr_create(tn, parm->index, NULL, a,
++ ret = tcf_idr_create(tn, index, NULL, a,
+ &act_police_ops, bind, false);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+ ret = ACT_P_CREATED;
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -43,7 +43,7 @@ static int tcf_sample_init(struct net *n
+ struct tc_action_net *tn = net_generic(net, sample_net_id);
+ struct nlattr *tb[TCA_SAMPLE_MAX + 1];
+ struct psample_group *psample_group;
+- u32 psample_group_num, rate;
++ u32 psample_group_num, rate, index;
+ struct tc_sample *parm;
+ struct tcf_sample *s;
+ bool exists = false;
+@@ -59,8 +59,8 @@ static int tcf_sample_init(struct net *n
+ return -EINVAL;
+
+ parm = nla_data(tb[TCA_SAMPLE_PARMS]);
+-
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+@@ -68,10 +68,10 @@ static int tcf_sample_init(struct net *n
+ return 0;
+
+ if (!exists) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_sample_ops, bind, true);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+ ret = ACT_P_CREATED;
+--- a/net/sched/act_simple.c
++++ b/net/sched/act_simple.c
+@@ -88,6 +88,7 @@ static int tcf_simp_init(struct net *net
+ struct tcf_defact *d;
+ bool exists = false;
+ int ret = 0, err;
++ u32 index;
+
+ if (nla == NULL)
+ return -EINVAL;
+@@ -100,7 +101,8 @@ static int tcf_simp_init(struct net *net
+ return -EINVAL;
+
+ parm = nla_data(tb[TCA_DEF_PARMS]);
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+@@ -111,15 +113,15 @@ static int tcf_simp_init(struct net *net
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return -EINVAL;
+ }
+
+ if (!exists) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_simp_ops, bind, false);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+
+--- a/net/sched/act_skbedit.c
++++ b/net/sched/act_skbedit.c
+@@ -107,6 +107,7 @@ static int tcf_skbedit_init(struct net *
+ u16 *queue_mapping = NULL, *ptype = NULL;
+ bool exists = false;
+ int ret = 0, err;
++ u32 index;
+
+ if (nla == NULL)
+ return -EINVAL;
+@@ -153,8 +154,8 @@ static int tcf_skbedit_init(struct net *
+ }
+
+ parm = nla_data(tb[TCA_SKBEDIT_PARMS]);
+-
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+@@ -165,15 +166,15 @@ static int tcf_skbedit_init(struct net *
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return -EINVAL;
+ }
+
+ if (!exists) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_skbedit_ops, bind, true);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+
+--- a/net/sched/act_skbmod.c
++++ b/net/sched/act_skbmod.c
+@@ -88,12 +88,12 @@ static int tcf_skbmod_init(struct net *n
+ struct nlattr *tb[TCA_SKBMOD_MAX + 1];
+ struct tcf_skbmod_params *p, *p_old;
+ struct tc_skbmod *parm;
++ u32 lflags = 0, index;
+ struct tcf_skbmod *d;
+ bool exists = false;
+ u8 *daddr = NULL;
+ u8 *saddr = NULL;
+ u16 eth_type = 0;
+- u32 lflags = 0;
+ int ret = 0, err;
+
+ if (!nla)
+@@ -122,10 +122,11 @@ static int tcf_skbmod_init(struct net *n
+ }
+
+ parm = nla_data(tb[TCA_SKBMOD_PARMS]);
++ index = parm->index;
+ if (parm->flags & SKBMOD_F_SWAPMAC)
+ lflags = SKBMOD_F_SWAPMAC;
+
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+@@ -136,15 +137,15 @@ static int tcf_skbmod_init(struct net *n
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return -EINVAL;
+ }
+
+ if (!exists) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_skbmod_ops, bind, true);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -224,6 +224,7 @@ static int tunnel_key_init(struct net *n
+ __be16 flags;
+ u8 tos, ttl;
+ int ret = 0;
++ u32 index;
+ int err;
+
+ if (!nla) {
+@@ -244,7 +245,8 @@ static int tunnel_key_init(struct net *n
+ }
+
+ parm = nla_data(tb[TCA_TUNNEL_KEY_PARMS]);
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+@@ -338,7 +340,7 @@ static int tunnel_key_init(struct net *n
+ }
+
+ if (!exists) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_tunnel_key_ops, bind, true);
+ if (ret) {
+ NL_SET_ERR_MSG(extack, "Cannot create TC IDR");
+@@ -384,7 +386,7 @@ err_out:
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+
+--- a/net/sched/act_vlan.c
++++ b/net/sched/act_vlan.c
+@@ -118,6 +118,7 @@ static int tcf_vlan_init(struct net *net
+ u8 push_prio = 0;
+ bool exists = false;
+ int ret = 0, err;
++ u32 index;
+
+ if (!nla)
+ return -EINVAL;
+@@ -129,7 +130,8 @@ static int tcf_vlan_init(struct net *net
+ if (!tb[TCA_VLAN_PARMS])
+ return -EINVAL;
+ parm = nla_data(tb[TCA_VLAN_PARMS]);
+- err = tcf_idr_check_alloc(tn, &parm->index, a, bind);
++ index = parm->index;
++ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+@@ -145,7 +147,7 @@ static int tcf_vlan_init(struct net *net
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return -EINVAL;
+ }
+ push_vid = nla_get_u16(tb[TCA_VLAN_PUSH_VLAN_ID]);
+@@ -153,7 +155,7 @@ static int tcf_vlan_init(struct net *net
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return -ERANGE;
+ }
+
+@@ -167,7 +169,7 @@ static int tcf_vlan_init(struct net *net
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return -EPROTONOSUPPORT;
+ }
+ } else {
+@@ -181,16 +183,16 @@ static int tcf_vlan_init(struct net *net
+ if (exists)
+ tcf_idr_release(*a, bind);
+ else
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return -EINVAL;
+ }
+ action = parm->v_action;
+
+ if (!exists) {
+- ret = tcf_idr_create(tn, parm->index, est, a,
++ ret = tcf_idr_create(tn, index, est, a,
+ &act_vlan_ops, bind, true);
+ if (ret) {
+- tcf_idr_cleanup(tn, parm->index);
++ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+
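Every hunk in the patch above applies the same mechanical change: the index supplied by user space is copied from parm->index (which points into the received netlink message) into a local u32 before it is handed to tcf_idr_check_alloc(), so the helper no longer writes through the message buffer. A condensed sketch of the resulting pattern, using a placeholder "foo" action (struct tc_foo, foo_net_id and act_foo_ops are illustrative; the tcf_* helpers are the real ones):

    /* Sketch only: condenses the per-module change above; error paths
     * other than the create failure are omitted.
     */
    static int example_foo_alloc(struct tc_action_net *tn, struct tc_foo *parm,
                                 struct nlattr *est, struct tc_action **a,
                                 int bind)
    {
            u32 index = parm->index;        /* work on a copy, never on parm->index */
            int err;

            err = tcf_idr_check_alloc(tn, &index, a, bind);
            if (err < 0)
                    return err;

            if (!err) {                     /* no existing action bound to this index */
                    err = tcf_idr_create(tn, index, est, a, &act_foo_ops,
                                         bind, true);
                    if (err) {
                            tcf_idr_cleanup(tn, index);
                            return err;
                    }
            }
            return 0;
    }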
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Ursula Braun <ubraun@linux.ibm.com>
+Date: Fri, 2 Aug 2019 10:16:38 +0200
+Subject: net/smc: do not schedule tx_work in SMC_CLOSED state
+
+From: Ursula Braun <ubraun@linux.ibm.com>
+
+[ Upstream commit f9cedf1a9b1cdcfb0c52edb391d01771e43994a4 ]
+
+The setsockopt options TCP_NODELAY and TCP_CORK may schedule the
+tx worker. Make sure the socket has not already been moved into the
+SMC_CLOSED state (for instance by a shutdown(SHUT_RDWR) call) before
+doing so.
+
+Reported-by: syzbot+92209502e7aab127c75f@syzkaller.appspotmail.com
+Reported-by: syzbot+b972214bb803a343f4fe@syzkaller.appspotmail.com
+Fixes: 01d2f7e2cdd31 ("net/smc: sockopts TCP_NODELAY and TCP_CORK")
+Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
+Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/smc/af_smc.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1680,14 +1680,18 @@ static int smc_setsockopt(struct socket
+ }
+ break;
+ case TCP_NODELAY:
+- if (sk->sk_state != SMC_INIT && sk->sk_state != SMC_LISTEN) {
++ if (sk->sk_state != SMC_INIT &&
++ sk->sk_state != SMC_LISTEN &&
++ sk->sk_state != SMC_CLOSED) {
+ if (val && !smc->use_fallback)
+ mod_delayed_work(system_wq, &smc->conn.tx_work,
+ 0);
+ }
+ break;
+ case TCP_CORK:
+- if (sk->sk_state != SMC_INIT && sk->sk_state != SMC_LISTEN) {
++ if (sk->sk_state != SMC_INIT &&
++ sk->sk_state != SMC_LISTEN &&
++ sk->sk_state != SMC_CLOSED) {
+ if (!val && !smc->use_fallback)
+ mod_delayed_work(system_wq, &smc->conn.tx_work,
+ 0);
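Both hunks enforce the same rule: the tx worker may only be (re)armed while the socket is still in a state that can transmit. Condensed into one hypothetical helper (smc_maybe_schedule_tx() does not exist in the kernel; the state constants and fields are the ones used above, and the per-option val handling is left out):

    /* Illustrative sketch of the guard added above, not kernel code. */
    static void smc_maybe_schedule_tx(struct smc_sock *smc, struct sock *sk)
    {
            if (sk->sk_state == SMC_INIT ||
                sk->sk_state == SMC_LISTEN ||
                sk->sk_state == SMC_CLOSED)
                    return;         /* not connected yet, or already torn down */

            if (!smc->use_fallback)
                    mod_delayed_work(system_wq, &smc->conn.tx_work, 0);
    }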
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Johan Hovold <johan@kernel.org>
+Date: Mon, 5 Aug 2019 12:00:55 +0200
+Subject: NFC: nfcmrvl: fix gpio-handling regression
+
+From: Johan Hovold <johan@kernel.org>
+
+[ Upstream commit c3953a3c2d3175d2f9f0304c9a1ba89e7743c5e4 ]
+
+Fix two reset-gpio sanity checks that were never converted to use
+gpio_is_valid(), and make sure the UART-driver module parameter and the
+USB driver also use -EINVAL to indicate a missing reset line.
+
+This specifically prevents the UART and USB drivers from inadvertently
+trying to request and use gpio 0, and also avoids triggering a WARN() in
+gpio_to_desc() during probe when no valid reset line has been specified.
+
+Fixes: e33a3f84f88f ("NFC: nfcmrvl: allow gpio 0 for reset signalling")
+Reported-by: syzbot+cf35b76f35e068a1107f@syzkaller.appspotmail.com
+Tested-by: syzbot+cf35b76f35e068a1107f@syzkaller.appspotmail.com
+Signed-off-by: Johan Hovold <johan@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/nfc/nfcmrvl/main.c | 4 ++--
+ drivers/nfc/nfcmrvl/uart.c | 4 ++--
+ drivers/nfc/nfcmrvl/usb.c | 1 +
+ 3 files changed, 5 insertions(+), 4 deletions(-)
+
+--- a/drivers/nfc/nfcmrvl/main.c
++++ b/drivers/nfc/nfcmrvl/main.c
+@@ -244,7 +244,7 @@ void nfcmrvl_chip_reset(struct nfcmrvl_p
+ /* Reset possible fault of previous session */
+ clear_bit(NFCMRVL_PHY_ERROR, &priv->flags);
+
+- if (priv->config.reset_n_io) {
++ if (gpio_is_valid(priv->config.reset_n_io)) {
+ nfc_info(priv->dev, "reset the chip\n");
+ gpio_set_value(priv->config.reset_n_io, 0);
+ usleep_range(5000, 10000);
+@@ -255,7 +255,7 @@ void nfcmrvl_chip_reset(struct nfcmrvl_p
+
+ void nfcmrvl_chip_halt(struct nfcmrvl_private *priv)
+ {
+- if (priv->config.reset_n_io)
++ if (gpio_is_valid(priv->config.reset_n_io))
+ gpio_set_value(priv->config.reset_n_io, 0);
+ }
+
+--- a/drivers/nfc/nfcmrvl/uart.c
++++ b/drivers/nfc/nfcmrvl/uart.c
+@@ -26,7 +26,7 @@
+ static unsigned int hci_muxed;
+ static unsigned int flow_control;
+ static unsigned int break_control;
+-static unsigned int reset_n_io;
++static int reset_n_io = -EINVAL;
+
+ /*
+ ** NFCMRVL NCI OPS
+@@ -231,5 +231,5 @@ MODULE_PARM_DESC(break_control, "Tell if
+ module_param(hci_muxed, uint, 0);
+ MODULE_PARM_DESC(hci_muxed, "Tell if transport is muxed in HCI one.");
+
+-module_param(reset_n_io, uint, 0);
++module_param(reset_n_io, int, 0);
+ MODULE_PARM_DESC(reset_n_io, "GPIO that is wired to RESET_N signal.");
+--- a/drivers/nfc/nfcmrvl/usb.c
++++ b/drivers/nfc/nfcmrvl/usb.c
+@@ -305,6 +305,7 @@ static int nfcmrvl_probe(struct usb_inte
+
+ /* No configuration for USB */
+ memset(&config, 0, sizeof(config));
++ config.reset_n_io = -EINVAL;
+
+ nfc_info(&udev->dev, "intf %p id %p\n", intf, id);
+
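The original `if (priv->config.reset_n_io)` tests were wrong because, in the legacy GPIO API, 0 is a valid GPIO number; only negative values mean "no GPIO". That is why the UART module parameter and the USB config now default to -EINVAL. A small sketch of the convention (gpio_is_valid() and gpio_set_value() are the real helpers; the surrounding function is illustrative):

    #include <linux/gpio.h>

    /* Illustrative only: shows why -EINVAL is the "no reset line" sentinel
     * instead of 0 -- gpio_is_valid() is false for any negative number,
     * while gpio 0 can be a real line on some SoCs.
     */
    static void example_assert_reset(int reset_n_io)
    {
            if (!gpio_is_valid(reset_n_io))   /* bails out for -EINVAL, proceeds for gpio 0 */
                    return;

            gpio_set_value(reset_n_io, 0);
    }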
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Claudiu Manoil <claudiu.manoil@nxp.com>
+Date: Thu, 25 Jul 2019 16:33:18 +0300
+Subject: ocelot: Cancel delayed work before wq destruction
+
+From: Claudiu Manoil <claudiu.manoil@nxp.com>
+
+[ Upstream commit c5d139697d5d9ecf9c7cd92d7d7838a173508900 ]
+
+Make sure the delayed work for the stats update is not pending before
+the workqueue is destroyed.
+This fixes the module unload path.
+The issue has been there since day 1.
+
+Fixes: a556c76adc05 ("net: mscc: Add initial Ocelot switch support")
+
+Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
+Reviewed-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/mscc/ocelot.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1767,6 +1767,7 @@ EXPORT_SYMBOL(ocelot_init);
+
+ void ocelot_deinit(struct ocelot *ocelot)
+ {
++ cancel_delayed_work(&ocelot->stats_work);
+ destroy_workqueue(ocelot->stats_queue);
+ mutex_destroy(&ocelot->stats_lock);
+ }
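The added line follows the usual teardown order for self-rearming delayed work: cancel it before the workqueue it is queued on is destroyed. A generic sketch under assumed names (example_priv and example_remove() are hypothetical; the workqueue API calls are real):

    #include <linux/workqueue.h>

    /* Hypothetical driver teardown illustrating the ordering above. */
    struct example_priv {
            struct workqueue_struct *wq;
            struct delayed_work stats_work;
    };

    static void example_remove(struct example_priv *priv)
    {
            /* cancel_delayed_work_sync() would be the stricter choice if the
             * work item could still be executing and must finish first.
             */
            cancel_delayed_work(&priv->stats_work);   /* drop any pending (re)arm */
            destroy_workqueue(priv->wq);              /* now safe to tear down the wq */
    }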
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:41 AM CEST
+From: Heiner Kallweit <hkallweit1@gmail.com>
+Date: Sat, 27 Jul 2019 12:45:10 +0200
+Subject: r8169: don't use MSI before RTL8168d
+
+From: Heiner Kallweit <hkallweit1@gmail.com>
+
+[ Upstream commit 003bd5b4a7b4a94b501e3a1e2e7c9df6b2a94ed4 ]
+
+It was reported that after resuming from suspend, networking fails with
+the error "do_IRQ: 3.38 No irq handler for vector", see [0]. Enabling WoL
+can work around the issue, but the only actual fix is to disable MSI.
+So let's mimic the behavior of the vendor driver and disable MSI on
+all chip versions before RTL8168d.
+
+[0] https://bugzilla.kernel.org/show_bug.cgi?id=204079
+
+Fixes: 6c6aa15fdea5 ("r8169: improve interrupt handling")
+Reported-by: Dušan Dragić <dragic.dusan@gmail.com>
+Tested-by: Dušan Dragić <dragic.dusan@gmail.com>
+Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/realtek/r8169.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7239,13 +7239,18 @@ static int rtl_alloc_irq(struct rtl8169_
+ {
+ unsigned int flags;
+
+- if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
++ switch (tp->mac_version) {
++ case RTL_GIGA_MAC_VER_02 ... RTL_GIGA_MAC_VER_06:
+ RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
+ RTL_W8(tp, Cfg9346, Cfg9346_Lock);
++ /* fall through */
++ case RTL_GIGA_MAC_VER_07 ... RTL_GIGA_MAC_VER_24:
+ flags = PCI_IRQ_LEGACY;
+- } else {
++ break;
++ default:
+ flags = PCI_IRQ_ALL_TYPES;
++ break;
+ }
+
+ return pci_alloc_irq_vectors(tp->pci_dev, 1, 1, flags);
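The switch above keys the IRQ allocation flags off the chip generation: everything before RTL8168d is limited to a single legacy INTx vector, and the oldest chips additionally get the MSIEnable bit cleared in Config2. A minimal sketch of the flag selection only (example_alloc_irq() and the before_rtl8168d flag are illustrative; pci_alloc_irq_vectors() and the PCI_IRQ_* constants are real):

    #include <linux/pci.h>

    /* Illustrative helper: single vector, legacy-only for old chips. */
    static int example_alloc_irq(struct pci_dev *pdev, bool before_rtl8168d)
    {
            unsigned int flags = before_rtl8168d ? PCI_IRQ_LEGACY
                                                 : PCI_IRQ_ALL_TYPES;

            return pci_alloc_irq_vectors(pdev, 1, 1, flags);
    }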
hid-wacom-fix-bit-shift-for-cintiq-companion-2.patch
hid-add-quirk-for-hp-x1200-pixart-oem-mouse.patch
ib-directly-cast-the-sockaddr-union-to-aockaddr.patch
+atm-iphase-fix-spectre-v1-vulnerability.patch
+bnx2x-disable-multi-cos-feature.patch
+ife-error-out-when-nla-attributes-are-empty.patch
+ip6_gre-reload-ipv6h-in-prepare_ip6gre_xmit_ipv6.patch
+ip6_tunnel-fix-possible-use-after-free-on-xmit.patch
+ipip-validate-header-length-in-ipip_tunnel_xmit.patch
+mlxsw-spectrum-fix-error-path-in-mlxsw_sp_module_init.patch
+mvpp2-fix-panic-on-module-removal.patch
+mvpp2-refactor-mtu-change-code.patch
+net-bridge-delete-local-fdb-on-device-init-failure.patch
+net-bridge-mcast-don-t-delete-permanent-entries-when-fast-leave-is-enabled.patch
+net-fix-ifindex-collision-during-namespace-removal.patch
+net-mlx5e-always-initialize-frag-last_in_page.patch
+net-mlx5-use-reversed-order-when-unregister-devices.patch
+net-phylink-fix-flow-control-for-fixed-link.patch
+net-qualcomm-rmnet-fix-incorrect-ul-checksum-offload-logic.patch
+net-sched-fix-a-possible-null-pointer-dereference-in-dequeue_func.patch
+net-sched-update-vlan-action-for-batched-events-operations.patch
+net-sched-use-temporary-variable-for-actions-indexes.patch
+net-smc-do-not-schedule-tx_work-in-smc_closed-state.patch
+nfc-nfcmrvl-fix-gpio-handling-regression.patch
+ocelot-cancel-delayed-work-before-wq-destruction.patch
+tipc-compat-allow-tipc-commands-without-arguments.patch
+tun-mark-small-packets-as-owned-by-the-tap-sock.patch
+net-mlx5-fix-modify_cq_in-alignment.patch
+net-mlx5e-prevent-encap-flow-counter-update-async-to-user-query.patch
+r8169-don-t-use-msi-before-rtl8168d.patch
+compat_ioctl-pppoe-fix-pppoeiocsfwd-handling.patch
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Taras Kondratiuk <takondra@cisco.com>
+Date: Mon, 29 Jul 2019 22:15:07 +0000
+Subject: tipc: compat: allow tipc commands without arguments
+
+From: Taras Kondratiuk <takondra@cisco.com>
+
+[ Upstream commit 4da5f0018eef4c0de31675b670c80e82e13e99d1 ]
+
+Commit 2753ca5d9009 ("tipc: fix uninit-value in tipc_nl_compat_doit")
+broke older tipc tools that use compat interface (e.g. tipc-config from
+tipcutils package):
+
+% tipc-config -p
+operation not supported
+
+The commit started to reject TIPC netlink compat messages that do not
+have attributes. It is too restrictive because some such messages are
+valid (they don't need any arguments):
+
+% grep 'tx none' include/uapi/linux/tipc_config.h
+#define TIPC_CMD_NOOP 0x0000 /* tx none, rx none */
+#define TIPC_CMD_GET_MEDIA_NAMES 0x0002 /* tx none, rx media_name(s) */
+#define TIPC_CMD_GET_BEARER_NAMES 0x0003 /* tx none, rx bearer_name(s) */
+#define TIPC_CMD_SHOW_PORTS 0x0006 /* tx none, rx ultra_string */
+#define TIPC_CMD_GET_REMOTE_MNG 0x4003 /* tx none, rx unsigned */
+#define TIPC_CMD_GET_MAX_PORTS 0x4004 /* tx none, rx unsigned */
+#define TIPC_CMD_GET_NETID 0x400B /* tx none, rx unsigned */
+#define TIPC_CMD_NOT_NET_ADMIN 0xC001 /* tx none, rx none */
+
+This patch relaxes the original fix and rejects messages without
+arguments only if such arguments are expected by a command (req_type is
+non-zero).
+
+Fixes: 2753ca5d9009 ("tipc: fix uninit-value in tipc_nl_compat_doit")
+Cc: stable@vger.kernel.org
+Signed-off-by: Taras Kondratiuk <takondra@cisco.com>
+Acked-by: Ying Xue <ying.xue@windriver.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/tipc/netlink_compat.c | 11 +++++++----
+ 1 file changed, 7 insertions(+), 4 deletions(-)
+
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -55,6 +55,7 @@ struct tipc_nl_compat_msg {
+ int rep_type;
+ int rep_size;
+ int req_type;
++ int req_size;
+ struct net *net;
+ struct sk_buff *rep;
+ struct tlv_desc *req;
+@@ -257,7 +258,8 @@ static int tipc_nl_compat_dumpit(struct
+ int err;
+ struct sk_buff *arg;
+
+- if (msg->req_type && !TLV_CHECK_TYPE(msg->req, msg->req_type))
++ if (msg->req_type && (!msg->req_size ||
++ !TLV_CHECK_TYPE(msg->req, msg->req_type)))
+ return -EINVAL;
+
+ msg->rep = tipc_tlv_alloc(msg->rep_size);
+@@ -354,7 +356,8 @@ static int tipc_nl_compat_doit(struct ti
+ {
+ int err;
+
+- if (msg->req_type && !TLV_CHECK_TYPE(msg->req, msg->req_type))
++ if (msg->req_type && (!msg->req_size ||
++ !TLV_CHECK_TYPE(msg->req, msg->req_type)))
+ return -EINVAL;
+
+ err = __tipc_nl_compat_doit(cmd, msg);
+@@ -1276,8 +1279,8 @@ static int tipc_nl_compat_recv(struct sk
+ goto send;
+ }
+
+- len = nlmsg_attrlen(req_nlh, GENL_HDRLEN + TIPC_GENL_HDRLEN);
+- if (!len || !TLV_OK(msg.req, len)) {
++ msg.req_size = nlmsg_attrlen(req_nlh, GENL_HDRLEN + TIPC_GENL_HDRLEN);
++ if (msg.req_size && !TLV_OK(msg.req, msg.req_size)) {
+ msg.rep = tipc_get_err_tlv(TIPC_CFG_NOT_SUPPORTED);
+ err = -EOPNOTSUPP;
+ goto send;
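The relaxed check separates two cases that the original fix conflated: commands whose req_type is zero legitimately carry no TLV at all, while commands with a non-zero req_type must still provide a well-formed TLV. A small illustrative helper (not kernel code; req_type/req_size mirror the fields added above):

    #include <linux/errno.h>
    #include <linux/types.h>

    /* Sketch of the relaxed validation rule, not the kernel function. */
    static int example_compat_check(int req_type, int req_size, bool tlv_type_ok)
    {
            if (!req_type)
                    return 0;               /* e.g. TIPC_CMD_SHOW_PORTS: "tx none" */

            if (!req_size || !tlv_type_ok)
                    return -EINVAL;         /* arguments expected but missing/bad */

            return 0;
    }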
--- /dev/null
+From foo@baz Thu 08 Aug 2019 08:50:40 AM CEST
+From: Alexis Bauvin <abauvin@scaleway.com>
+Date: Tue, 23 Jul 2019 16:23:01 +0200
+Subject: tun: mark small packets as owned by the tap sock
+
+From: Alexis Bauvin <abauvin@scaleway.com>
+
+[ Upstream commit 4b663366246be1d1d4b1b8b01245b2e88ad9e706 ]
+
+- v1 -> v2: Move skb_set_owner_w to __tun_build_skb to reduce patch size
+
+Small packets going out of a tap device go through an optimized code
+path that uses build_skb() rather than sock_alloc_send_pskb(). The
+latter calls skb_set_owner_w(), but the small packet code path does not.
+
+The net effect is that small packets are not owned by the userland
+application's socket (e.g. QEMU), while large packets are.
+This can be seen with a TCP session, where packets are not owned when
+the window size is small enough (around PAGE_SIZE), while they are once
+the window grows (note that this requires the host to support virtio
+tso for the guest to offload segmentation).
+All this leads to inconsistent behaviour in the kernel, especially in
+netfilter modules that use sk->socket (e.g. xt_owner).
+
+Fixes: 66ccbc9c87c2 ("tap: use build_skb() for small packet")
+Signed-off-by: Alexis Bauvin <abauvin@scaleway.com>
+Acked-by: Jason Wang <jasowang@redhat.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/tun.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1682,6 +1682,7 @@ static struct sk_buff *tun_build_skb(str
+
+ skb_reserve(skb, pad - delta);
+ skb_put(skb, len);
++ skb_set_owner_w(skb, tfile->socket.sk);
+ get_page(alloc_frag->page);
+ alloc_frag->offset += buflen;
+