--- /dev/null
+From foo@baz Mon 18 May 2020 02:43:44 PM CEST
+From: Cong Wang <xiyou.wangcong@gmail.com>
+Date: Thu, 7 May 2020 12:19:03 -0700
+Subject: net: fix a potential recursive NETDEV_FEAT_CHANGE
+
+From: Cong Wang <xiyou.wangcong@gmail.com>
+
+[ Upstream commit dd912306ff008891c82cd9f63e8181e47a9cb2fb ]
+
+syzbot managed to trigger a recursive NETDEV_FEAT_CHANGE event
+between a bonding master and slave. I found a reproducer
+for this:
+
+ ip li set bond0 up
+ ifenslave bond0 eth0
+ brctl addbr br0
+ ethtool -K eth0 lro off
+ brctl addif br0 bond0
+ ip li set br0 up
+
+When a NETDEV_FEAT_CHANGE event is triggered on a bonding slave,
+the bonding driver captures it and calls bond_compute_features()
+to fix up its master's and the other slaves' features. However,
+when syncing with its lower devices via netdev_sync_lower_features(),
+this event is triggered again on the slaves when the LRO feature
+fails to change, so it bounces back and forth recursively until
+the kernel stack is exhausted.
+
+Commit 17b85d29e82c intentionally lets __netdev_update_features()
+return -1 for such a failure case, so we have to just rely on
+the existing check inside netdev_sync_lower_features() and skip
+the NETDEV_FEAT_CHANGE event only for this specific failure case.
+
+Fixes: fd867d51f889 ("net/core: generic support for disabling netdev features down stack")
+Reported-by: syzbot+e73ceacfd8560cc8a3ca@syzkaller.appspotmail.com
+Reported-by: syzbot+c2fb6f9ddcea95ba49b5@syzkaller.appspotmail.com
+Cc: Jarod Wilson <jarod@redhat.com>
+Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
+Cc: Josh Poimboeuf <jpoimboe@redhat.com>
+Cc: Jann Horn <jannh@google.com>
+Reviewed-by: Jay Vosburgh <jay.vosburgh@canonical.com>
+Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
+Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/core/dev.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6939,11 +6939,13 @@ static void netdev_sync_lower_features(s
+ netdev_dbg(upper, "Disabling feature %pNF on lower dev %s.\n",
+ &feature, lower->name);
+ lower->wanted_features &= ~feature;
+- netdev_update_features(lower);
++ __netdev_update_features(lower);
+
+ if (unlikely(lower->features & feature))
+ netdev_WARN(upper, "failed to disable %pNF on %s!\n",
+ &feature, lower->name);
++ else
++ netdev_features_change(lower);
+ }
+ }
+ }
--- /dev/null
+From foo@baz Mon 18 May 2020 02:43:44 PM CEST
+From: Paolo Abeni <pabeni@redhat.com>
+Date: Fri, 8 May 2020 19:28:34 +0200
+Subject: net: ipv4: really enforce backoff for redirects
+
+From: Paolo Abeni <pabeni@redhat.com>
+
+[ Upstream commit 57644431a6c2faac5d754ebd35780cf43a531b1a ]
+
+In commit b406472b5ad7 ("net: ipv4: avoid mixed n_redirects and
+rate_tokens usage") I missed the fact that a 0 'rate_tokens' will
+bypass the backoff algorithm.
+
+Since rate_tokens is cleared after a redirect silence, and never
+incremented on redirects, if the host keeps receiving packets
+requiring a redirect it will keep replying, ignoring the backoff.
+
+Additionally, the 'rate_last' field will be updated with the
+cadence of the ingress packet requiring redirect. If that rate is
+high enough, that will prevent the host from generating any
+other kind of ICMP message.
+
+The check for a zero 'rate_tokens' value was likely a shortcut
+to avoid the more complex backoff algorithm after a redirect
+silence period. Address the issue by checking 'n_redirects'
+instead, which is incremented on each successful redirect and
+does not interfere with other ICMP replies.
+
+Fixes: b406472b5ad7 ("net: ipv4: avoid mixed n_redirects and rate_tokens usage")
+Reported-and-tested-by: Colin Walters <walters@redhat.com>
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ipv4/route.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -898,7 +898,7 @@ void ip_rt_send_redirect(struct sk_buff
+ /* Check for load limit; set rate_last to the latest sent
+ * redirect.
+ */
+- if (peer->rate_tokens == 0 ||
++ if (peer->n_redirects == 0 ||
+ time_after(jiffies,
+ (peer->rate_last +
+ (ip_rt_redirect_load << peer->n_redirects)))) {
--- /dev/null
+From foo@baz Mon 18 May 2020 02:43:44 PM CEST
+From: Paolo Abeni <pabeni@redhat.com>
+Date: Tue, 12 May 2020 14:43:14 +0200
+Subject: netlabel: cope with NULL catmap
+
+From: Paolo Abeni <pabeni@redhat.com>
+
+[ Upstream commit eead1c2ea2509fd754c6da893a94f0e69e83ebe4 ]
+
+The cipso and calipso code can set the MLS_CAT attribute on
+successful parsing even if the corresponding catmap has
+not been allocated, depending on the current configuration and
+external input.
+
+Later, the selinux code tries to access the catmap if the MLS_CAT
+flag is present, via netlbl_catmap_getlong(). That may cause a NULL
+pointer dereference while processing incoming network traffic.
+
+Address the issue by setting the MLS_CAT flag only if the catmap
+is actually allocated. Additionally, let netlbl_catmap_getlong()
+cope with a NULL catmap.
+
+Reported-by: Matthew Sheets <matthew.sheets@gd-ms.com>
+Fixes: 4b8feff251da ("netlabel: fix the horribly broken catmap functions")
+Fixes: ceba1832b1b2 ("calipso: Set the calipso socket label to match the secattr.")
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Acked-by: Paul Moore <paul@paul-moore.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ipv4/cipso_ipv4.c | 6 ++++--
+ net/ipv6/calipso.c | 3 ++-
+ net/netlabel/netlabel_kapi.c | 6 ++++++
+ 3 files changed, 12 insertions(+), 3 deletions(-)
+
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -1272,7 +1272,8 @@ static int cipso_v4_parsetag_rbm(const s
+ return ret_val;
+ }
+
+- secattr->flags |= NETLBL_SECATTR_MLS_CAT;
++ if (secattr->attr.mls.cat)
++ secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+ }
+
+ return 0;
+@@ -1453,7 +1454,8 @@ static int cipso_v4_parsetag_rng(const s
+ return ret_val;
+ }
+
+- secattr->flags |= NETLBL_SECATTR_MLS_CAT;
++ if (secattr->attr.mls.cat)
++ secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+ }
+
+ return 0;
+--- a/net/ipv6/calipso.c
++++ b/net/ipv6/calipso.c
+@@ -1061,7 +1061,8 @@ static int calipso_opt_getattr(const uns
+ goto getattr_return;
+ }
+
+- secattr->flags |= NETLBL_SECATTR_MLS_CAT;
++ if (secattr->attr.mls.cat)
++ secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+ }
+
+ secattr->type = NETLBL_NLTYPE_CALIPSO;
+--- a/net/netlabel/netlabel_kapi.c
++++ b/net/netlabel/netlabel_kapi.c
+@@ -748,6 +748,12 @@ int netlbl_catmap_getlong(struct netlbl_
+ if ((off & (BITS_PER_LONG - 1)) != 0)
+ return -EINVAL;
+
++ /* a null catmap is equivalent to an empty one */
++ if (!catmap) {
++ *offset = (u32)-1;
++ return 0;
++ }
++
+ if (off < catmap->startbit) {
+ off = catmap->startbit;
+ *offset = off;
--- /dev/null
+From foo@baz Mon 18 May 2020 02:43:44 PM CEST
+From: Zefan Li <lizefan@huawei.com>
+Date: Sat, 9 May 2020 11:32:10 +0800
+Subject: netprio_cgroup: Fix unlimited memory leak of v2 cgroups
+
+From: Zefan Li <lizefan@huawei.com>
+
+[ Upstream commit 090e28b229af92dc5b40786ca673999d59e73056 ]
+
+If systemd is configured to use hybrid mode, which enables the use of
+both cgroup v1 and v2, systemd will create a new cgroup on both the
+default root (v2) and the netprio_cgroup hierarchy (v1) for a new
+session and attach the task to both cgroups. If the task then does any
+networking, the v2 cgroup can never be freed after the session exits.
+
+One of our machines ran into OOM due to this memory leak.
+
+In the scenario described above, when sk_alloc() is called,
+cgroup_sk_alloc() thinks it is in v2 mode, so it stores
+the cgroup pointer in sk->sk_cgrp_data and increments
+the cgroup refcnt; but then sock_update_netprioidx()
+thinks it is in v1 mode, so it overwrites sk->sk_cgrp_data
+with the netprioidx value, and the cgroup refcnt is never dropped.
+
+Currently we do the mode switch when someone writes to the ifpriomap
+cgroup control file. The easiest fix is to also do the switch when
+a task is attached to a new cgroup.
+
+Fixes: bd1060a1d671 ("sock, cgroup: add sock->sk_cgroup")
+Reported-by: Yang Yingliang <yangyingliang@huawei.com>
+Tested-by: Yang Yingliang <yangyingliang@huawei.com>
+Signed-off-by: Zefan Li <lizefan@huawei.com>
+Acked-by: Tejun Heo <tj@kernel.org>
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/core/netprio_cgroup.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/net/core/netprio_cgroup.c
++++ b/net/core/netprio_cgroup.c
+@@ -237,6 +237,8 @@ static void net_prio_attach(struct cgrou
+ struct task_struct *p;
+ struct cgroup_subsys_state *css;
+
++ cgroup_sk_alloc_disable();
++
+ cgroup_taskset_for_each(p, css, tset) {
+ void *v = (void *)(unsigned long)css->cgroup->id;
+
--- /dev/null
+From foo@baz Mon 18 May 2020 02:43:44 PM CEST
+From: "Maciej Żenczykowski" <maze@google.com>
+Date: Tue, 5 May 2020 11:57:23 -0700
+Subject: Revert "ipv6: add mtu lock check in __ip6_rt_update_pmtu"
+
+From: "Maciej Żenczykowski" <maze@google.com>
+
+[ Upstream commit 09454fd0a4ce23cb3d8af65066c91a1bf27120dd ]
+
+This reverts commit 19bda36c4299ce3d7e5bce10bebe01764a655a6d:
+
+| ipv6: add mtu lock check in __ip6_rt_update_pmtu
+|
+| Prior to this patch, ipv6 didn't do mtu lock check in ip6_update_pmtu.
+| It leaded to that mtu lock doesn't really work when receiving the pkt
+| of ICMPV6_PKT_TOOBIG.
+|
+| This patch is to add mtu lock check in __ip6_rt_update_pmtu just as ipv4
+| did in __ip_rt_update_pmtu.
+
+The above reasoning is incorrect. IPv6 *requires* ICMP-based pmtu to work.
+There's already a comment to this effect elsewhere in the kernel:
+
+ $ git grep -p -B1 -A3 'RTAX_MTU lock'
+ net/ipv6/route.c=4813=
+
+ static int rt6_mtu_change_route(struct fib6_info *f6i, void *p_arg)
+ ...
+ /* In IPv6 pmtu discovery is not optional,
+ so that RTAX_MTU lock cannot disable it.
+ We still use this lock to block changes
+ caused by addrconf/ndisc.
+ */
+
+This reverts to the pre-4.9 behaviour.
+
+Cc: Eric Dumazet <edumazet@google.com>
+Cc: Willem de Bruijn <willemb@google.com>
+Cc: Xin Long <lucien.xin@gmail.com>
+Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
+Signed-off-by: Maciej Żenczykowski <maze@google.com>
+Fixes: 19bda36c4299 ("ipv6: add mtu lock check in __ip6_rt_update_pmtu")
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ipv6/route.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1373,8 +1373,10 @@ static void __ip6_rt_update_pmtu(struct
+ {
+ struct rt6_info *rt6 = (struct rt6_info *)dst;
+
+- if (dst_metric_locked(dst, RTAX_MTU))
+- return;
++ /* Note: do *NOT* check dst_metric_locked(dst, RTAX_MTU)
++ * IPv6 pmtu discovery isn't optional, so 'mtu lock' cannot disable it.
++ * [see also comment in rt6_mtu_change_route()]
++ */
+
+ dst_confirm(dst);
+ mtu = max_t(u32, mtu, IPV6_MIN_MTU);
gcc-10-disable-array-bounds-warning-for-now.patch
gcc-10-disable-stringop-overflow-warning-for-now.patch
gcc-10-disable-restrict-warning-for-now.patch
+net-fix-a-potential-recursive-netdev_feat_change.patch
+netlabel-cope-with-null-catmap.patch
+revert-ipv6-add-mtu-lock-check-in-__ip6_rt_update_pmtu.patch
+net-ipv4-really-enforce-backoff-for-redirects.patch
+netprio_cgroup-fix-unlimited-memory-leak-of-v2-cgroups.patch