From: Greg Kroah-Hartman
Date: Sat, 26 Sep 2015 19:21:02 +0000 (-0700)
Subject: 3.10-stable patches
X-Git-Tag: v4.1.9~12
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=e6c9126ebd79c5644805cc8a4f77e1b17a945b72;p=thirdparty%2Fkernel%2Fstable-queue.git

3.10-stable patches

added patches:
	bonding-correct-the-mac-address-for-follow-fail_over_mac-policy.patch
	bonding-fix-destruction-of-bond-with-devices-different-from-arphrd_ether.patch
	bridge-mdb-fix-double-add-notification.patch
	bridge-mdb-zero-out-the-local-br_ip-variable-before-use.patch
	inet-frags-fix-defragmented-packet-s-ip-header-for-af_packet.patch
	ipv6-lock-socket-in-ip6_datagram_connect.patch
	ipv6-make-mld-packets-to-only-be-processed-locally.patch
	isdn-gigaset-reset-tty-receive_room-when-attaching-ser_gigaset.patch
	net-call-rcu_read_lock-early-in-process_backlog.patch
	net-clone-skb-before-setting-peeked-flag.patch
	net-fix-skb-csum-races-when-peeking.patch
	net-fix-skb_set_peeked-use-after-free-bug.patch
	net-pktgen-fix-race-between-pktgen_thread_worker-and-kthread_stop.patch
	net-tipc-initialize-security-state-for-new-connection-socket.patch
	netlink-don-t-hold-mutex-in-rcu-callback-when-releasing-mmapd-ring.patch
	rds-fix-an-integer-overflow-test-in-rds_info_getsockopt.patch
---

diff --git a/queue-3.10/bonding-correct-the-mac-address-for-follow-fail_over_mac-policy.patch b/queue-3.10/bonding-correct-the-mac-address-for-follow-fail_over_mac-policy.patch
new file mode 100644
index 00000000000..b447c258e1d
--- /dev/null
+++ b/queue-3.10/bonding-correct-the-mac-address-for-follow-fail_over_mac-policy.patch
@@ -0,0 +1,79 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: dingtianhong
+Date: Thu, 16 Jul 2015 16:30:02 +0800
+Subject: bonding: correct the MAC address for "follow" fail_over_mac policy
+
+From: dingtianhong
+
+[ Upstream commit a951bc1e6ba58f11df5ed5ddc41311e10f5fd20b ]
+
+The "follow" fail_over_mac policy is useful for multiport devices that
+either become confused or incur a performance penalty when multiple
+ports are programmed with the same MAC address, but with this policy
+the same MAC address can still end up on two ports, via these steps:
+
+1) echo +eth0 > /sys/class/net/bond0/bonding/slaves
+   bond0 now has the same MAC address as eth0; call it MAC1.
+
+2) echo +eth1 > /sys/class/net/bond0/bonding/slaves
+   eth1 is the backup and has MAC2.
+
+3) ifconfig eth0 down
+   eth1 becomes the active slave and the bond swaps the MACs of eth0
+   and eth1, so eth1 has MAC1 and eth0 has MAC2.
+
+4) ifconfig eth1 down
+   there is now no active slave; eth1 still has MAC1 and eth0 has MAC2.
+
+5) ifconfig eth0 up
+   eth0 becomes the active slave again and the bond sets eth0 to MAC1.
+
+Something is wrong here: if you now bring eth1 up, eth0 and eth1 will
+have the same MAC address, which breaks this policy for ACTIVE_BACKUP
+mode.
+
+Fix this by finding the old active slave and swapping the MAC
+addresses before changing the active slave.
+
+Signed-off-by: Ding Tianhong
+Tested-by: Nikolay Aleksandrov
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/net/bonding/bond_main.c | 19 +++++++++++++++++++
+ 1 file changed, 19 insertions(+)
+
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -876,6 +876,22 @@ static void bond_mc_swap(struct bonding
+ 	}
+ }
+ 
++static struct slave *bond_get_old_active(struct bonding *bond,
++					 struct slave *new_active)
++{
++	struct slave *slave;
++
++	bond_for_each_slave(bond, slave) {
++		if (slave == new_active)
++			continue;
++
++		if (ether_addr_equal(bond->dev->dev_addr, slave->dev->dev_addr))
++			return slave;
++	}
++
++	return NULL;
++}
++
+ /*
+  * bond_do_fail_over_mac
+  *
+@@ -919,6 +935,9 @@ static void bond_do_fail_over_mac(struct
+ 	write_unlock_bh(&bond->curr_slave_lock);
+ 	read_unlock(&bond->lock);
+ 
++	if (!old_active)
++		old_active = bond_get_old_active(bond, new_active);
++
+ 	if (old_active) {
+ 		memcpy(tmp_mac, new_active->dev->dev_addr, ETH_ALEN);
+ 		memcpy(saddr.sa_data, old_active->dev->dev_addr,
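Editorial aside, not part of the queued patch above: the new
bond_get_old_active() helper's whole job is to find the slave, other
than the one being promoted, that still carries the bond's own MAC
address. A minimal user-space sketch of that rule; all types, names,
and values here are hypothetical stand-ins, not kernel code:

#include <stdio.h>
#include <string.h>

#define ETH_ALEN 6

struct fake_slave {
	const char *name;
	unsigned char mac[ETH_ALEN];
};

/* Mirrors the loop in the diff: skip the slave we are failing over to,
 * return whichever other slave still holds the bond's MAC. */
static struct fake_slave *find_old_active(const unsigned char *bond_mac,
					  struct fake_slave *slaves, int n,
					  struct fake_slave *new_active)
{
	for (int i = 0; i < n; i++) {
		if (&slaves[i] == new_active)
			continue;
		if (!memcmp(bond_mac, slaves[i].mac, ETH_ALEN))
			return &slaves[i];	/* still carries the bond's MAC */
	}
	return NULL;
}

int main(void)
{
	/* State after step 4 of the commit message: eth1 kept MAC1. */
	unsigned char mac1[ETH_ALEN] = { 0x02, 0, 0, 0, 0, 1 };
	struct fake_slave slaves[] = {
		{ "eth0", { 0x02, 0, 0, 0, 0, 2 } },
		{ "eth1", { 0x02, 0, 0, 0, 0, 1 } },
	};
	struct fake_slave *old = find_old_active(mac1, slaves, 2, &slaves[0]);

	printf("old active: %s\n", old ? old->name : "none");	/* eth1 */
	return 0;
}

With eth1 found as the old active, the MAC swap can happen before the
active slave changes, which is exactly what the hunk above adds.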
diff --git a/queue-3.10/bonding-fix-destruction-of-bond-with-devices-different-from-arphrd_ether.patch b/queue-3.10/bonding-fix-destruction-of-bond-with-devices-different-from-arphrd_ether.patch
new file mode 100644
index 00000000000..14ff55b6862
--- /dev/null
+++ b/queue-3.10/bonding-fix-destruction-of-bond-with-devices-different-from-arphrd_ether.patch
@@ -0,0 +1,101 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Nikolay Aleksandrov
+Date: Wed, 15 Jul 2015 21:52:51 +0200
+Subject: bonding: fix destruction of bond with devices different from arphrd_ether
+
+From: Nikolay Aleksandrov
+
+[ Upstream commit 06f6d1094aa0992432b1e2a0920b0ee86ccd83bf ]
+
+When the bonding module is being unloaded and the netdevice notifier is
+unregistered, it executes NETDEV_UNREGISTER for each device, which
+should remove the bond's proc entry. But if the enslaved device is not
+of ARPHRD_ETHER type and comes in front of the bonding device, it may
+execute bond_release_and_destroy() first, which releases the last slave
+and destroys the bond device, leaving the proc entry behind. We then
+get the following error (with dynamic debug enabled for
+bond_netdev_event to see the order of events):
+[ 908.963051] eql: event: 9
+[ 908.963052] eql: IFF_SLAVE
+[ 908.963054] eql: event: 2
+[ 908.963056] eql: IFF_SLAVE
+[ 908.963058] eql: event: 6
+[ 908.963059] eql: IFF_SLAVE
+[ 908.963110] bond0: Releasing active interface eql
+[ 908.976168] bond0: Destroying bond bond0
+[ 908.976266] bond0 (unregistering): Released all slaves
+[ 908.984097] ------------[ cut here ]------------
+[ 908.984107] WARNING: CPU: 0 PID: 1787 at fs/proc/generic.c:575
+remove_proc_entry+0x112/0x160()
+[ 908.984110] remove_proc_entry: removing non-empty directory
+'net/bonding', leaking at least 'bond0'
+[ 908.984111] Modules linked in: bonding(-) eql(O) 9p nfsd auth_rpcgss
+oid_registry nfs_acl nfs lockd grace fscache sunrpc crct10dif_pclmul
+crc32_pclmul crc32c_intel ghash_clmulni_intel ppdev qxl drm_kms_helper
+snd_hda_codec_generic aesni_intel ttm aes_x86_64 glue_helper pcspkr lrw
+gf128mul ablk_helper cryptd snd_hda_intel virtio_console snd_hda_codec
+psmouse serio_raw snd_hwdep snd_hda_core 9pnet_virtio 9pnet evdev joydev
+drm virtio_balloon snd_pcm snd_timer snd soundcore i2c_piix4 i2c_core
+pvpanic acpi_cpufreq parport_pc parport processor thermal_sys button
+autofs4 ext4 crc16 mbcache jbd2 hid_generic usbhid hid sg sr_mod cdrom
+ata_generic virtio_blk virtio_net floppy ata_piix e1000 libata ehci_pci
+virtio_pci scsi_mod uhci_hcd ehci_hcd virtio_ring virtio usbcore
+usb_common [last unloaded: bonding]
+
+[ 908.984168] CPU: 0 PID: 1787 Comm: rmmod Tainted: G W O
+4.2.0-rc2+ #8
+[ 908.984170] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
+[ 908.984172] 0000000000000000 ffffffff81732d41 ffffffff81525b34
+ffff8800358dfda8
+[ 908.984175] ffffffff8106c521 ffff88003595af78 ffff88003595af40
+ffff88003e3a4280
+[ 908.984178] ffffffffa058d040 0000000000000000 ffffffff8106c59a
+ffffffff8172ebd0
+[ 908.984181] Call Trace:
+[ 908.984188] [] ? dump_stack+0x40/0x50
+[ 908.984193] [] ? warn_slowpath_common+0x81/0xb0
+[ 908.984196] [] ? warn_slowpath_fmt+0x4a/0x50
+[ 908.984199] [] ? remove_proc_entry+0x112/0x160
+[ 908.984205] [] ? bond_destroy_proc_dir+0x26/0x30
+[bonding]
+[ 908.984208] [] ? bond_net_exit+0x8e/0xa0 [bonding]
+[ 908.984217] [] ? ops_exit_list.isra.4+0x37/0x70
+[ 908.984225] [] ?
+unregister_pernet_operations+0x8d/0xd0
+[ 908.984228] [] ?
+unregister_pernet_subsys+0x1d/0x30
+[ 908.984232] [] ? bonding_exit+0x23/0xdba [bonding]
+[ 908.984236] [] ? SyS_delete_module+0x18a/0x250
+[ 908.984241] [] ? task_work_run+0x89/0xc0
+[ 908.984244] [] ?
+entry_SYSCALL_64_fastpath+0x16/0x75
+[ 908.984247] ---[ end trace 7c006ed4abbef24b ]---
+
+Thus remove the proc entry manually if bond_release_and_destroy() is
+used. Because of the checks in bond_remove_proc_entry() it's not a
+problem for a bond device to change namespaces (the bug fixed by the
+Fixes commit) but since commit
+f9399814927ad ("bonding: Don't allow bond devices to change network
+namespaces.") that can't happen anyway.
+
+Reported-by: Carol Soto
+Signed-off-by: Nikolay Aleksandrov
+Fixes: a64d49c3dd50 ("bonding: Manage /proc/net/bonding/ entries from
+ the netdev events")
+Tested-by: Carol L Soto
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/net/bonding/bond_main.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2188,6 +2188,7 @@ static int bond_release_and_destroy(str
+ 		bond_dev->priv_flags |= IFF_DISABLE_NETPOLL;
+ 		pr_info("%s: destroying bond %s.\n",
+ 			bond_dev->name, bond_dev->name);
++		bond_remove_proc_entry(bond);
+ 		unregister_netdevice(bond_dev);
+ 	}
+ 	return ret;

diff --git a/queue-3.10/bridge-mdb-fix-double-add-notification.patch b/queue-3.10/bridge-mdb-fix-double-add-notification.patch
new file mode 100644
index 00000000000..7ace1ebbc34
--- /dev/null
+++ b/queue-3.10/bridge-mdb-fix-double-add-notification.patch
@@ -0,0 +1,41 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Nikolay Aleksandrov
+Date: Mon, 13 Jul 2015 06:36:19 -0700
+Subject: bridge: mdb: fix double add notification
+
+From: Nikolay Aleksandrov
+
+[ Upstream commit 5ebc784625ea68a9570d1f70557e7932988cd1b4 ]
+
+Since the mdb add/del code was introduced there have been 2 br_mdb_notify
+calls when doing br_mdb_add() resulting in 2 notifications on each add.
+
+Example:
+ Command: bridge mdb add dev br0 port eth1 grp 239.0.0.1 permanent
+ Before patch:
+ root@debian:~# bridge monitor all
+ [MDB]dev br0 port eth1 grp 239.0.0.1 permanent
+ [MDB]dev br0 port eth1 grp 239.0.0.1 permanent
+
+ After patch:
+ root@debian:~# bridge monitor all
+ [MDB]dev br0 port eth1 grp 239.0.0.1 permanent
+
+Signed-off-by: Nikolay Aleksandrov
+Fixes: cfd567543590 ("bridge: add support of adding and deleting mdb entries")
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/bridge/br_mdb.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+--- a/net/bridge/br_mdb.c
++++ b/net/bridge/br_mdb.c
+@@ -345,7 +345,6 @@ static int br_mdb_add_group(struct net_b
+ 		return -ENOMEM;
+ 	rcu_assign_pointer(*pp, p);
+ 
+-	br_mdb_notify(br->dev, port, group, RTM_NEWMDB);
+ 	return 0;
+ }
+ 

diff --git a/queue-3.10/bridge-mdb-zero-out-the-local-br_ip-variable-before-use.patch b/queue-3.10/bridge-mdb-zero-out-the-local-br_ip-variable-before-use.patch
new file mode 100644
index 00000000000..ec36a1a9227
--- /dev/null
+++ b/queue-3.10/bridge-mdb-zero-out-the-local-br_ip-variable-before-use.patch
@@ -0,0 +1,57 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Nikolay Aleksandrov
+Date: Tue, 7 Jul 2015 15:55:56 +0200
+Subject: bridge: mdb: zero out the local br_ip variable before use
+
+From: Nikolay Aleksandrov
+
+[ Upstream commit f1158b74e54f2e2462ba5e2f45a118246d9d5b43 ]
+
+Since commit b0e9a30dd669 ("bridge: Add vlan id to multicast groups")
+there's a check in br_ip_equal() for a matching vlan id, but the mdb
+functions were not modified to use it (or at least zero it), so when an
+entry was added it would have a garbage vlan id (from the local br_ip
+variable in __br_mdb_add/del) and this would prevent it from being
+matched and also deleted. So zero out the whole local ip var to protect
+ourselves from future changes and also to fix the current bug; since
+there's no vlan id support in the mdb uapi, always use vlan id 0.
+Example before patch:
+root@debian:~# bridge mdb add dev br0 port eth1 grp 239.0.0.1 permanent
+root@debian:~# bridge mdb
+dev br0 port eth1 grp 239.0.0.1 permanent
+root@debian:~# bridge mdb del dev br0 port eth1 grp 239.0.0.1 permanent
+RTNETLINK answers: Invalid argument
+
+After patch:
+root@debian:~# bridge mdb add dev br0 port eth1 grp 239.0.0.1 permanent
+root@debian:~# bridge mdb
+dev br0 port eth1 grp 239.0.0.1 permanent
+root@debian:~# bridge mdb del dev br0 port eth1 grp 239.0.0.1 permanent
+root@debian:~# bridge mdb
+
+Signed-off-by: Nikolay Aleksandrov
+Fixes: b0e9a30dd669 ("bridge: Add vlan id to multicast groups")
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/bridge/br_mdb.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/net/bridge/br_mdb.c
++++ b/net/bridge/br_mdb.c
+@@ -368,6 +368,7 @@ static int __br_mdb_add(struct net *net,
+ 	if (!p || p->br != br || p->state == BR_STATE_DISABLED)
+ 		return -EINVAL;
+ 
++	memset(&ip, 0, sizeof(ip));
+ 	ip.proto = entry->addr.proto;
+ 	if (ip.proto == htons(ETH_P_IP))
+ 		ip.u.ip4 = entry->addr.u.ip4;
+@@ -417,6 +418,7 @@ static int __br_mdb_del(struct net_bridg
+ 	if (timer_pending(&br->multicast_querier_timer))
+ 		return -EBUSY;
+ 
++	memset(&ip, 0, sizeof(ip));
+ 	ip.proto = entry->addr.proto;
+ 	if (ip.proto == htons(ETH_P_IP))
+ 		ip.u.ip4 = entry->addr.u.ip4;

diff --git a/queue-3.10/inet-frags-fix-defragmented-packet-s-ip-header-for-af_packet.patch b/queue-3.10/inet-frags-fix-defragmented-packet-s-ip-header-for-af_packet.patch
new file mode 100644
index 00000000000..a112dc4845e
--- /dev/null
+++ b/queue-3.10/inet-frags-fix-defragmented-packet-s-ip-header-for-af_packet.patch
@@ -0,0 +1,58 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Edward Hyunkoo Jee
+Date: Tue, 21 Jul 2015 09:43:59 +0200
+Subject: inet: frags: fix defragmented packet's IP header for af_packet
+
+From: Edward Hyunkoo Jee
+
+[ Upstream commit 0848f6428ba3a2e42db124d41ac6f548655735bf ]
+
+When ip_frag_queue() computes positions, it assumes that the passed
+sk_buff does not contain L2 headers.
+
+However, when PACKET_FANOUT_FLAG_DEFRAG is used, IP reassembly
+functions can be called on outgoing packets that contain L2 headers.
+
+Also, IPv4 checksum is not corrected after reassembly.
+
+Fixes: 7736d33f4262 ("packet: Add pre-defragmentation support for ipv4 fanouts.")
+Signed-off-by: Edward Hyunkoo Jee
+Signed-off-by: Eric Dumazet
+Cc: Willem de Bruijn
+Cc: Jerry Chu
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/ipv4/ip_fragment.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -356,7 +356,7 @@ static int ip_frag_queue(struct ipq *qp,
+ 	ihl = ip_hdrlen(skb);
+ 
+ 	/* Determine the position of this fragment. */
+-	end = offset + skb->len - ihl;
++	end = offset + skb->len - skb_network_offset(skb) - ihl;
+ 	err = -EINVAL;
+ 
+ 	/* Is this the final fragment? */
+@@ -386,7 +386,7 @@ static int ip_frag_queue(struct ipq *qp,
+ 		goto err;
+ 
+ 	err = -ENOMEM;
+-	if (pskb_pull(skb, ihl) == NULL)
++	if (!pskb_pull(skb, skb_network_offset(skb) + ihl))
+ 		goto err;
+ 
+ 	err = pskb_trim_rcsum(skb, end - offset);
+@@ -627,6 +627,9 @@ static int ip_frag_reasm(struct ipq *qp,
+ 	iph->frag_off = qp->q.max_size ? htons(IP_DF) : 0;
+ 	iph->tot_len = htons(len);
+ 	iph->tos |= ecn;
++
++	ip_send_check(iph);
++
+ 	IP_INC_STATS_BH(net, IPSTATS_MIB_REASMOKS);
+ 	qp->q.fragments = NULL;
+ 	qp->q.fragments_tail = NULL;

diff --git a/queue-3.10/ipv6-lock-socket-in-ip6_datagram_connect.patch b/queue-3.10/ipv6-lock-socket-in-ip6_datagram_connect.patch
new file mode 100644
index 00000000000..55f38906149
--- /dev/null
+++ b/queue-3.10/ipv6-lock-socket-in-ip6_datagram_connect.patch
@@ -0,0 +1,126 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Eric Dumazet
+Date: Tue, 14 Jul 2015 08:10:22 +0200
+Subject: ipv6: lock socket in ip6_datagram_connect()
+
+From: Eric Dumazet
+
+[ Upstream commit 03645a11a570d52e70631838cb786eb4253eb463 ]
+
+ip6_datagram_connect() is doing a lot of socket changes without
+socket being locked.
+
+This looks wrong, at least for udp_lib_rehash() which could corrupt
+lists because of concurrent udp_sk(sk)->udp_portaddr_hash accesses.
+
+Signed-off-by: Eric Dumazet
+Acked-by: Herbert Xu
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/net/ip.h    | 1 +
+ net/ipv4/datagram.c | 16 ++++++++++++----
+ net/ipv6/datagram.c | 20 +++++++++++++++-----
+ 3 files changed, 28 insertions(+), 9 deletions(-)
+
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -141,6 +141,7 @@ static inline struct sk_buff *ip_finish_
+ }
+ 
+ /* datagram.c */
++int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len);
+ extern int ip4_datagram_connect(struct sock *sk,
+ 				struct sockaddr *uaddr, int addr_len);
+ 
+--- a/net/ipv4/datagram.c
++++ b/net/ipv4/datagram.c
+@@ -20,7 +20,7 @@
+ #include 
+ #include 
+ 
+-int ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
++int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ {
+ 	struct inet_sock *inet = inet_sk(sk);
+ 	struct sockaddr_in *usin = (struct sockaddr_in *) uaddr;
+@@ -39,8 +39,6 @@ int ip4_datagram_connect(struct sock *sk
+ 
+ 	sk_dst_reset(sk);
+ 
+-	lock_sock(sk);
+-
+ 	oif = sk->sk_bound_dev_if;
+ 	saddr = inet->inet_saddr;
+ 	if (ipv4_is_multicast(usin->sin_addr.s_addr)) {
+@@ -81,9 +79,19 @@ int ip4_datagram_connect(struct sock *sk
+ 	sk_dst_set(sk, &rt->dst);
+ 	err = 0;
+ out:
+-	release_sock(sk);
+ 	return err;
+ }
++EXPORT_SYMBOL(__ip4_datagram_connect);
++
++int ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
++{
++	int res;
++
++	lock_sock(sk);
++	res = __ip4_datagram_connect(sk, uaddr, addr_len);
++	release_sock(sk);
++	return res;
++}
+ EXPORT_SYMBOL(ip4_datagram_connect);
+ 
+ /* Because UDP xmit path can manipulate sk_dst_cache without holding
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -40,7 +40,7 @@ static bool ipv6_mapped_addr_any(const s
+ 	return ipv6_addr_v4mapped(a) && (a->s6_addr32[3] == 0);
+ }
+ 
+-int ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
++static int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ {
+ 	struct sockaddr_in6 *usin = (struct sockaddr_in6 *) uaddr;
+ 	struct inet_sock *inet = inet_sk(sk);
+@@ -56,7 +56,7 @@ int ip6_datagram_connect(struct sock *sk
+ 	if (usin->sin6_family == AF_INET) {
+ 		if (__ipv6_only_sock(sk))
+ 			return -EAFNOSUPPORT;
+-		err = ip4_datagram_connect(sk, uaddr, addr_len);
++		err = __ip4_datagram_connect(sk, uaddr, addr_len);
+ 		goto ipv4_connected;
+ 	}
+ 
+@@ -99,9 +99,9 @@ int ip6_datagram_connect(struct sock *sk
+ 		sin.sin_addr.s_addr = daddr->s6_addr32[3];
+ 		sin.sin_port = usin->sin6_port;
+ 
+-		err = ip4_datagram_connect(sk,
+-					   (struct sockaddr *) &sin,
+-					   sizeof(sin));
++		err = __ip4_datagram_connect(sk,
++					     (struct sockaddr *) &sin,
++					     sizeof(sin));
+ 
+ ipv4_connected:
+ 		if (err)
+@@ -204,6 +204,16 @@ out:
+ 	fl6_sock_release(flowlabel);
+ 	return err;
+ }
++
++int ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
++{
++	int res;
++
++	lock_sock(sk);
++	res = __ip6_datagram_connect(sk, uaddr, addr_len);
++	release_sock(sk);
++	return res;
++}
+ EXPORT_SYMBOL_GPL(ip6_datagram_connect);
+ 
+ void ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err,
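Editorial aside, not part of the queued patch above: the refactor in
that diff is the classic "split into an unlocked __helper plus a
locking wrapper" shape. A user-space sketch of the same shape, with a
pthread mutex standing in for lock_sock(); all names are hypothetical
and this is not the kernel API:

#include <pthread.h>
#include <stdio.h>

struct fake_sock {
	pthread_mutex_t lock;
	int daddr;	/* state that must only change under the lock */
};

/* Does the real work; every caller must already hold sk->lock,
 * like __ip6_datagram_connect() after the patch. */
static int __fake_connect(struct fake_sock *sk, int daddr)
{
	sk->daddr = daddr;	/* stands in for several dependent updates */
	return 0;
}

/* Public entry point: take the lock, call the helper, drop the lock,
 * like the new ip6_datagram_connect() wrapper. */
static int fake_connect(struct fake_sock *sk, int daddr)
{
	int res;

	pthread_mutex_lock(&sk->lock);
	res = __fake_connect(sk, daddr);
	pthread_mutex_unlock(&sk->lock);
	return res;
}

int main(void)
{
	struct fake_sock sk = { PTHREAD_MUTEX_INITIALIZER, 0 };

	printf("connect: %d\n", fake_connect(&sk, 42));
	return 0;
}

The reason the patch needs the split, rather than just adding
lock_sock() at the top, is reentrancy: the v6 path already holds the
lock when it falls through to the v4 logic, so it must call the
unlocked __ helper instead of the public entry point.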
diff --git a/queue-3.10/ipv6-make-mld-packets-to-only-be-processed-locally.patch b/queue-3.10/ipv6-make-mld-packets-to-only-be-processed-locally.patch
new file mode 100644
index 00000000000..4414f7c2afe
--- /dev/null
+++ b/queue-3.10/ipv6-make-mld-packets-to-only-be-processed-locally.patch
@@ -0,0 +1,40 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Angga
+Date: Fri, 3 Jul 2015 14:40:52 +1200
+Subject: ipv6: Make MLD packets to only be processed locally
+
+From: Angga
+
+[ Upstream commit 4c938d22c88a9ddccc8c55a85e0430e9c62b1ac5 ]
+
+Before commit daad151263cf ("ipv6: Make ipv6_is_mld() inline and use it
+from ip6_mc_input().") MLD packets were only processed locally. After the
+change, a copy of each MLD packet also goes through ip6_mr_input(), causing
+an MRT6MSG_NOCACHE message to be generated to user space.
+
+Make MLD packets be processed locally only.
+
+Fixes: daad151263cf ("ipv6: Make ipv6_is_mld() inline and use it from ip6_mc_input().")
+Signed-off-by: Hermin Anggawijaya
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/ipv6/ip6_input.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -325,10 +325,10 @@ int ip6_mc_input(struct sk_buff *skb)
+ 			if (offset < 0)
+ 				goto out;
+ 
+-			if (!ipv6_is_mld(skb, nexthdr, offset))
+-				goto out;
++			if (ipv6_is_mld(skb, nexthdr, offset))
++				deliver = true;
+ 
+-			deliver = true;
++			goto out;
+ 		}
+ 		/* unknown RA - process it normally */
+ 	}

diff --git a/queue-3.10/isdn-gigaset-reset-tty-receive_room-when-attaching-ser_gigaset.patch b/queue-3.10/isdn-gigaset-reset-tty-receive_room-when-attaching-ser_gigaset.patch
new file mode 100644
index 00000000000..89827107e41
--- /dev/null
+++ b/queue-3.10/isdn-gigaset-reset-tty-receive_room-when-attaching-ser_gigaset.patch
@@ -0,0 +1,52 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Tilman Schmidt
+Date: Tue, 14 Jul 2015 00:37:13 +0200
+Subject: isdn/gigaset: reset tty->receive_room when attaching ser_gigaset
+
+From: Tilman Schmidt
+
+[ Upstream commit fd98e9419d8d622a4de91f76b306af6aa627aa9c ]
+
+Commit 79901317ce80 ("n_tty: Don't flush buffer when closing ldisc"),
+first merged in kernel release 3.10, caused the following regression
+in the Gigaset M101 driver:
+
+Before that commit, when closing the N_TTY line discipline in
+preparation to switching to N_GIGASET_M101, receive_room would be
+reset to a non-zero value by the call to n_tty_flush_buffer() in
+n_tty's close method. With the removal of that call, receive_room
+might be left at zero, blocking data reception on the serial line.
+
+The present patch fixes that regression by setting receive_room
+to an appropriate value in the ldisc open method.
+
+Fixes: 79901317ce80 ("n_tty: Don't flush buffer when closing ldisc")
+Signed-off-by: Tilman Schmidt
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/isdn/gigaset/ser-gigaset.c | 11 ++++++++++-
+ 1 file changed, 10 insertions(+), 1 deletion(-)
+
+--- a/drivers/isdn/gigaset/ser-gigaset.c
++++ b/drivers/isdn/gigaset/ser-gigaset.c
+@@ -524,9 +524,18 @@ gigaset_tty_open(struct tty_struct *tty)
+ 	cs->hw.ser->tty = tty;
+ 	atomic_set(&cs->hw.ser->refcnt, 1);
+ 	init_completion(&cs->hw.ser->dead_cmp);
+-
+ 	tty->disc_data = cs;
+ 
++	/* Set the amount of data we're willing to receive per call
++	 * from the hardware driver to half of the input buffer size
++	 * to leave some reserve.
++	 * Note: We don't do flow control towards the hardware driver.
++	 * If more data is received than will fit into the input buffer,
++	 * it will be dropped and an error will be logged. This should
++	 * never happen as the device is slow and the buffer size ample.
++	 */
++	tty->receive_room = RBUFSIZE/2;
++
+ 	/* OK.. Initialization of the datastructures and the HW is done.. Now
+ 	 * startup system and notify the LL that we are ready to run
+ 	 */

diff --git a/queue-3.10/net-call-rcu_read_lock-early-in-process_backlog.patch b/queue-3.10/net-call-rcu_read_lock-early-in-process_backlog.patch
new file mode 100644
index 00000000000..4818ded0441
--- /dev/null
+++ b/queue-3.10/net-call-rcu_read_lock-early-in-process_backlog.patch
@@ -0,0 +1,151 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Julian Anastasov
+Date: Thu, 9 Jul 2015 09:59:10 +0300
+Subject: net: call rcu_read_lock early in process_backlog
+
+From: Julian Anastasov
+
+[ Upstream commit 2c17d27c36dcce2b6bf689f41a46b9e909877c21 ]
+
+Incoming packet should be either in backlog queue or
+in RCU read-side section. Otherwise, the final sequence of
+flush_backlog() and synchronize_net() may miss packets
+that can run without device reference:
+
+CPU 1                  CPU 2
+                       skb->dev: no reference
+                       process_backlog:__skb_dequeue
+                       process_backlog:local_irq_enable
+
+on_each_cpu for
+flush_backlog => IPI(hardirq): flush_backlog
+ - packet not found in backlog
+
+                       CPU delayed ...
+synchronize_net
+- no ongoing RCU
+read-side sections
+
+netdev_run_todo,
+rcu_barrier: no
+ongoing callbacks
+                       __netif_receive_skb_core:rcu_read_lock
+                       - too late
+free dev
+                       process packet for freed dev
+
+Fixes: 6e583ce5242f ("net: eliminate refcounting in backlog queue")
+Cc: Eric W. Biederman
+Cc: Stephen Hemminger
+Signed-off-by: Julian Anastasov
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/core/dev.c | 29 ++++++++++++++---------------
+ 1 file changed, 14 insertions(+), 15 deletions(-)
+
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3443,8 +3443,6 @@ static int __netif_receive_skb_core(stru
+ 
+ 	pt_prev = NULL;
+ 
+-	rcu_read_lock();
+-
+ another_round:
+ 	skb->skb_iif = skb->dev->ifindex;
+ 
+@@ -3454,7 +3452,7 @@ another_round:
+ 	    skb->protocol == cpu_to_be16(ETH_P_8021AD)) {
+ 		skb = vlan_untag(skb);
+ 		if (unlikely(!skb))
+-			goto unlock;
++			goto out;
+ 	}
+ 
+ #ifdef CONFIG_NET_CLS_ACT
+@@ -3479,7 +3477,7 @@ skip_taps:
+ #ifdef CONFIG_NET_CLS_ACT
+ 	skb = handle_ing(skb, &pt_prev, &ret, orig_dev);
+ 	if (!skb)
+-		goto unlock;
++		goto out;
+ ncls:
+ #endif
+ 
+@@ -3494,7 +3492,7 @@ ncls:
+ 		if (vlan_do_receive(&skb))
+ 			goto another_round;
+ 		else if (unlikely(!skb))
+-			goto unlock;
++			goto out;
+ 	}
+ 
+ 	rx_handler = rcu_dereference(skb->dev->rx_handler);
+@@ -3506,7 +3504,7 @@ ncls:
+ 		switch (rx_handler(&skb)) {
+ 		case RX_HANDLER_CONSUMED:
+ 			ret = NET_RX_SUCCESS;
+-			goto unlock;
++			goto out;
+ 		case RX_HANDLER_ANOTHER:
+ 			goto another_round;
+ 		case RX_HANDLER_EXACT:
+@@ -3558,8 +3556,6 @@ drop:
+ 		ret = NET_RX_DROP;
+ 	}
+ 
+-unlock:
+-	rcu_read_unlock();
+ out:
+ 	return ret;
+ }
+@@ -3606,29 +3602,30 @@ static int __netif_receive_skb(struct sk
+  */
+ int netif_receive_skb(struct sk_buff *skb)
+ {
++	int ret;
++
+ 	net_timestamp_check(netdev_tstamp_prequeue, skb);
+ 
+ 	if (skb_defer_rx_timestamp(skb))
+ 		return NET_RX_SUCCESS;
+ 
++	rcu_read_lock();
++
+ #ifdef CONFIG_RPS
+ 	if (static_key_false(&rps_needed)) {
+ 		struct rps_dev_flow voidflow, *rflow = &voidflow;
+-		int cpu, ret;
+-
+-		rcu_read_lock();
+-
+-		cpu = get_rps_cpu(skb->dev, skb, &rflow);
++		int cpu = get_rps_cpu(skb->dev, skb, &rflow);
+ 
+ 		if (cpu >= 0) {
+ 			ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
+ 			rcu_read_unlock();
+ 			return ret;
+ 		}
+-		rcu_read_unlock();
+ 	}
+ #endif
+-	return __netif_receive_skb(skb);
++	ret = __netif_receive_skb(skb);
++	rcu_read_unlock();
++	return ret;
+ }
+ EXPORT_SYMBOL(netif_receive_skb);
+ 
+@@ -4038,8 +4035,10 @@ static int process_backlog(struct napi_s
+ 		unsigned int qlen;
+ 
+ 		while ((skb = __skb_dequeue(&sd->process_queue))) {
++			rcu_read_lock();
+ 			local_irq_enable();
+ 			__netif_receive_skb(skb);
++			rcu_read_unlock();
+ 			local_irq_disable();
+ 			input_queue_head_incr(sd);
+ 			if (++work >= quota) {
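Editorial aside, not part of the queued patch above: the invariant the
process_backlog fix enforces is that a packet is always either visible
in the backlog queue or inside a read-side critical section, never
neither. A single-threaded user-space walk-through of that ordering,
with a pthread rwlock standing in for RCU (read lock ~ rcu_read_lock,
taking the write lock ~ synchronize_net); everything here is a
hypothetical sketch, and the real bug needs two CPUs racing:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_rwlock_t net_lock = PTHREAD_RWLOCK_INITIALIZER;
static int *dev;		/* stands in for the device a packet points at */
static int *queue_slot;		/* a one-element "backlog queue" */

static void process_backlog_step(void)
{
	/* Take the read-side lock BEFORE dequeueing, as the patch does;
	 * from here to the unlock the "device" cannot be freed. */
	pthread_rwlock_rdlock(&net_lock);
	int *skb_dev = queue_slot;	/* __skb_dequeue */
	queue_slot = NULL;
	if (skb_dev)
		printf("delivered to dev %d\n", *skb_dev);
	pthread_rwlock_unlock(&net_lock);
}

static void unregister_dev(void)
{
	queue_slot = NULL;		/* flush_backlog: drop queued packets */
	/* synchronize_net: wait until no reader can still see dev. */
	pthread_rwlock_wrlock(&net_lock);
	pthread_rwlock_unlock(&net_lock);
	free(dev);			/* now safe to free */
}

int main(void)
{
	dev = malloc(sizeof(*dev));
	*dev = 7;
	queue_slot = dev;
	process_backlog_step();
	unregister_dev();
	return 0;
}

The buggy ordering dequeued first and took the read lock later, so a
packet could be in neither the queue (flush misses it) nor a read-side
section (synchronize_net does not wait for it), and the device could
be freed under it.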
diff --git a/queue-3.10/net-clone-skb-before-setting-peeked-flag.patch b/queue-3.10/net-clone-skb-before-setting-peeked-flag.patch
new file mode 100644
index 00000000000..0ad1d1d4daa
--- /dev/null
+++ b/queue-3.10/net-clone-skb-before-setting-peeked-flag.patch
@@ -0,0 +1,108 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Herbert Xu
+Date: Mon, 13 Jul 2015 16:04:13 +0800
+Subject: net: Clone skb before setting peeked flag
+
+From: Herbert Xu
+
+[ Upstream commit 738ac1ebb96d02e0d23bc320302a6ea94c612dec ]
+
+Shared skbs must not be modified and this is crucial for broadcast
+and/or multicast paths where we use it as an optimisation to avoid
+unnecessary cloning.
+
+The function skb_recv_datagram breaks this rule by setting peeked
+without cloning the skb first. This causes funky races which lead
+to a double-free.
+
+This patch fixes this by cloning the skb and replacing the skb
+in the list when setting skb->peeked.
+
+Fixes: a59322be07c9 ("[UDP]: Only increment counter on first peek/recv")
+Reported-by: Konstantin Khlebnikov
+Signed-off-by: Herbert Xu
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/core/datagram.c | 41 ++++++++++++++++++++++++++++++++++++++---
+ 1 file changed, 38 insertions(+), 3 deletions(-)
+
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -128,6 +128,35 @@ out_noerr:
+ 	goto out;
+ }
+ 
++static int skb_set_peeked(struct sk_buff *skb)
++{
++	struct sk_buff *nskb;
++
++	if (skb->peeked)
++		return 0;
++
++	/* We have to unshare an skb before modifying it. */
++	if (!skb_shared(skb))
++		goto done;
++
++	nskb = skb_clone(skb, GFP_ATOMIC);
++	if (!nskb)
++		return -ENOMEM;
++
++	skb->prev->next = nskb;
++	skb->next->prev = nskb;
++	nskb->prev = skb->prev;
++	nskb->next = skb->next;
++
++	consume_skb(skb);
++	skb = nskb;
++
++done:
++	skb->peeked = 1;
++
++	return 0;
++}
++
+ /**
+  *	__skb_recv_datagram - Receive a datagram skbuff
+  *	@sk: socket
+@@ -162,7 +191,9 @@ out_noerr:
+ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags,
+ 				    int *peeked, int *off, int *err)
+ {
++	struct sk_buff_head *queue = &sk->sk_receive_queue;
+ 	struct sk_buff *skb, *last;
++	unsigned long cpu_flags;
+ 	long timeo;
+ 	/*
+ 	 * Caller is allowed not to check sk->sk_err before skb_recv_datagram()
+@@ -181,8 +212,6 @@ struct sk_buff *__skb_recv_datagram(stru
+ 		 * Look at current nfs client by the way...
+ 		 * However, this function was correct in any case. 8)
+ 		 */
+-		unsigned long cpu_flags;
+-		struct sk_buff_head *queue = &sk->sk_receive_queue;
+ 		int _off = *off;
+ 
+ 		last = (struct sk_buff *)queue;
+@@ -196,7 +225,11 @@ struct sk_buff *__skb_recv_datagram(stru
+ 					_off -= skb->len;
+ 					continue;
+ 				}
+-				skb->peeked = 1;
++
++				error = skb_set_peeked(skb);
++				if (error)
++					goto unlock_err;
++
+ 				atomic_inc(&skb->users);
+ 			} else
+ 				__skb_unlink(skb, queue);
+@@ -216,6 +249,8 @@ struct sk_buff *__skb_recv_datagram(stru
+ 
+ 	return NULL;
+ 
++unlock_err:
++	spin_unlock_irqrestore(&queue->lock, cpu_flags);
+ no_packet:
+ 	*err = error;
+ 	return NULL;

diff --git a/queue-3.10/net-fix-skb-csum-races-when-peeking.patch b/queue-3.10/net-fix-skb-csum-races-when-peeking.patch
new file mode 100644
index 00000000000..568a52e2e44
--- /dev/null
+++ b/queue-3.10/net-fix-skb-csum-races-when-peeking.patch
@@ -0,0 +1,41 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Herbert Xu
+Date: Mon, 13 Jul 2015 20:01:42 +0800
+Subject: net: Fix skb csum races when peeking
+
+From: Herbert Xu
+
+[ Upstream commit 89c22d8c3b278212eef6a8cc66b570bc840a6f5a ]
+
+When we calculate the checksum on the recv path, we store the
+result in the skb as an optimisation in case we need the checksum
+again down the line.
+
+This is in fact bogus for the MSG_PEEK case as this is done without
+any locking. So multiple threads can peek and then store the result
+to the same skb, potentially resulting in bogus skb states.
+
+This patch fixes this by only storing the result if the skb is not
+shared. This preserves the optimisations for the few cases where
+it can be done safely due to locking or other reasons, e.g., SIOCINQ.
+
+Signed-off-by: Herbert Xu
+Acked-by: Eric Dumazet
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/core/datagram.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -700,7 +700,8 @@ __sum16 __skb_checksum_complete_head(str
+ 	if (likely(!sum)) {
+ 		if (unlikely(skb->ip_summed == CHECKSUM_COMPLETE))
+ 			netdev_rx_csum_fault(skb->dev);
+-		skb->ip_summed = CHECKSUM_UNNECESSARY;
++		if (!skb_shared(skb))
++			skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 	}
+ 	return sum;
+ }

diff --git a/queue-3.10/net-fix-skb_set_peeked-use-after-free-bug.patch b/queue-3.10/net-fix-skb_set_peeked-use-after-free-bug.patch
new file mode 100644
index 00000000000..73419344f83
--- /dev/null
+++ b/queue-3.10/net-fix-skb_set_peeked-use-after-free-bug.patch
@@ -0,0 +1,76 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Herbert Xu
+Date: Tue, 4 Aug 2015 15:42:47 +0800
+Subject: net: Fix skb_set_peeked use-after-free bug
+
+From: Herbert Xu
+
+[ Upstream commit a0a2a6602496a45ae838a96db8b8173794b5d398 ]
+
+The commit 738ac1ebb96d02e0d23bc320302a6ea94c612dec ("net: Clone
+skb before setting peeked flag") introduced a use-after-free bug
+in skb_recv_datagram. This is because skb_set_peeked may create
+a new skb and free the existing one. As it stands the caller will
+continue to use the old freed skb.
+
+This patch fixes it by making skb_set_peeked return the new skb
+(or the old one if unchanged).
+
+Fixes: 738ac1ebb96d ("net: Clone skb before setting peeked flag")
+Reported-by: Brenden Blanco
+Signed-off-by: Herbert Xu
+Tested-by: Brenden Blanco
+Reviewed-by: Konstantin Khlebnikov
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/core/datagram.c | 13 +++++++------
+ 1 file changed, 7 insertions(+), 6 deletions(-)
+
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -128,12 +128,12 @@ out_noerr:
+ 	goto out;
+ }
+ 
+-static int skb_set_peeked(struct sk_buff *skb)
++static struct sk_buff *skb_set_peeked(struct sk_buff *skb)
+ {
+ 	struct sk_buff *nskb;
+ 
+ 	if (skb->peeked)
+-		return 0;
++		return skb;
+ 
+ 	/* We have to unshare an skb before modifying it. */
+ 	if (!skb_shared(skb))
+@@ -141,7 +141,7 @@ static int skb_set_peeked(struct sk_buff
+ 
+ 	nskb = skb_clone(skb, GFP_ATOMIC);
+ 	if (!nskb)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	skb->prev->next = nskb;
+ 	skb->next->prev = nskb;
+@@ -154,7 +154,7 @@ static int skb_set_peeked(struct sk_buff
+ done:
+ 	skb->peeked = 1;
+ 
+-	return 0;
++	return skb;
+ }
+ 
+ /**
+@@ -226,8 +226,9 @@ struct sk_buff *__skb_recv_datagram(stru
+ 					continue;
+ 				}
+ 
+-				error = skb_set_peeked(skb);
+-				if (error)
++				skb = skb_set_peeked(skb);
++				error = PTR_ERR(skb);
++				if (IS_ERR(skb))
+ 					goto unlock_err;
+ 
+ 				atomic_inc(&skb->users);
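Editorial aside, not part of the queued patches: the three skb fixes
above (clone before peek, the csum race, and the skb_set_peeked
use-after-free) all orbit one rule: never modify a buffer that may be
shared; unshare it first, and hand the replacement back to the caller.
A minimal refcount sketch of that rule; the types and names are
hypothetical, not the sk_buff API:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf {
	int refcnt;
	int peeked;	/* the flag we want to set */
};

static int buf_shared(const struct buf *b)
{
	return b->refcnt > 1;	/* analogous to skb_shared() */
}

/* Returns the buffer the caller should keep using: the original if it
 * was private, or a fresh clone if it was shared. Mirrors how the
 * use-after-free fix made skb_set_peeked() return the new skb. */
static struct buf *buf_set_peeked(struct buf *b)
{
	if (!b->peeked && buf_shared(b)) {
		struct buf *clone = malloc(sizeof(*clone));

		if (!clone)
			return NULL;
		memcpy(clone, b, sizeof(*clone));
		clone->refcnt = 1;
		b->refcnt--;	/* drop our reference to the shared one */
		b = clone;
	}
	b->peeked = 1;
	return b;
}

int main(void)
{
	struct buf *b = malloc(sizeof(*b));
	struct buf *mine;

	b->refcnt = 2;	/* another holder exists, like a multicast path */
	b->peeked = 0;

	mine = buf_set_peeked(b);
	printf("shared copy peeked=%d, private copy peeked=%d\n",
	       b->peeked, mine->peeked);	/* 0 and 1 */

	free(mine);
	if (--b->refcnt == 0)	/* the other holder eventually drops it */
		free(b);
	else
		free(b);
	return 0;
}

The original bug in this trilogy was exactly the missing hand-back: the
helper cloned and freed the old buffer but the caller kept the stale
pointer, which is why the final fix changes the return type rather than
the cloning logic.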
diff --git a/queue-3.10/net-pktgen-fix-race-between-pktgen_thread_worker-and-kthread_stop.patch b/queue-3.10/net-pktgen-fix-race-between-pktgen_thread_worker-and-kthread_stop.patch
new file mode 100644
index 00000000000..56f47ada3b9
--- /dev/null
+++ b/queue-3.10/net-pktgen-fix-race-between-pktgen_thread_worker-and-kthread_stop.patch
@@ -0,0 +1,35 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Oleg Nesterov
+Date: Wed, 8 Jul 2015 21:42:11 +0200
+Subject: net: pktgen: fix race between pktgen_thread_worker() and kthread_stop()
+
+From: Oleg Nesterov
+
+[ Upstream commit fecdf8be2d91e04b0a9a4f79ff06499a36f5d14f ]
+
+pktgen_thread_worker() is obviously racy, kthread_stop() can come
+between the kthread_should_stop() check and set_current_state().
+
+Signed-off-by: Oleg Nesterov
+Reported-by: Jan Stancek
+Reported-by: Marcelo Leitner
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/core/pktgen.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -3377,8 +3377,10 @@ static int pktgen_thread_worker(void *ar
+ 	pktgen_rem_thread(t);
+ 
+ 	/* Wait for kthread_stop */
+-	while (!kthread_should_stop()) {
++	for (;;) {
+ 		set_current_state(TASK_INTERRUPTIBLE);
++		if (kthread_should_stop())
++			break;
+ 		schedule();
+ 	}
+ 	__set_current_state(TASK_RUNNING);

diff --git a/queue-3.10/net-tipc-initialize-security-state-for-new-connection-socket.patch b/queue-3.10/net-tipc-initialize-security-state-for-new-connection-socket.patch
new file mode 100644
index 00000000000..d3956f6be4a
--- /dev/null
+++ b/queue-3.10/net-tipc-initialize-security-state-for-new-connection-socket.patch
@@ -0,0 +1,42 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Stephen Smalley
+Date: Tue, 7 Jul 2015 09:43:45 -0400
+Subject: net/tipc: initialize security state for new connection socket
+
+From: Stephen Smalley
+
+[ Upstream commit fdd75ea8df370f206a8163786e7470c1277a5064 ]
+
+Calling connect() with an AF_TIPC socket would trigger a series
+of error messages from SELinux along the lines of:
+SELinux: Invalid class 0
+type=AVC msg=audit(1434126658.487:34500): avc: denied { }
+ for pid=292 comm="kworker/u16:5" scontext=system_u:system_r:kernel_t:s0
+ tcontext=system_u:object_r:unlabeled_t:s0 tclass=
+ permissive=0
+
+This was due to a failure to initialize the security state of the new
+connection sock by the tipc code, leaving it with junk in the security
+class field and an unlabeled secid. Add a call to security_sk_clone()
+to inherit the security state from the parent socket.
+
+Reported-by: Tim Shearer
+Signed-off-by: Stephen Smalley
+Acked-by: Paul Moore
+Acked-by: Ying Xue
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/tipc/socket.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1528,6 +1528,7 @@ static int accept(struct socket *sock, s
+ 	res = tipc_create(sock_net(sock->sk), new_sock, 0, 0);
+ 	if (res)
+ 		goto exit;
++	security_sk_clone(sock->sk, new_sock->sk);
+ 
+ 	new_sk = new_sock->sk;
+ 	new_tsock = tipc_sk(new_sk);
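Editorial aside on the pktgen fix two patches above, not part of any
queued patch: the kernel idiom it restores (set the task state, then
re-check the stop flag, then sleep) closes a lost-wakeup window, and
the portable user-space equivalent is a condition variable with the
predicate re-checked under the mutex. A minimal sketch; names are
hypothetical, and it assumes compiling with cc -pthread:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int should_stop;

static void *worker(void *arg)
{
	pthread_mutex_lock(&lock);
	/* Like the patched loop: the flag is re-checked only after we
	 * are already "prepared to sleep" (holding the mutex), so a
	 * stop request cannot slip between the check and the sleep. */
	while (!should_stop)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);

	/* The "kthread_stop" side: set the flag, then wake the thread,
	 * both under the same lock that guards the flag. */
	pthread_mutex_lock(&lock);
	should_stop = 1;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);

	pthread_join(t, NULL);
	printf("stopped\n");
	return 0;
}

The buggy kernel loop checked kthread_should_stop() while still
runnable; a stop that arrived right after the check, but before
set_current_state(), would issue a wakeup that hit a running task and
was lost, leaving the worker asleep forever.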
diff --git a/queue-3.10/netlink-don-t-hold-mutex-in-rcu-callback-when-releasing-mmapd-ring.patch b/queue-3.10/netlink-don-t-hold-mutex-in-rcu-callback-when-releasing-mmapd-ring.patch
new file mode 100644
index 00000000000..df6277a1c49
--- /dev/null
+++ b/queue-3.10/netlink-don-t-hold-mutex-in-rcu-callback-when-releasing-mmapd-ring.patch
@@ -0,0 +1,210 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Florian Westphal
+Date: Tue, 21 Jul 2015 16:33:50 +0200
+Subject: netlink: don't hold mutex in rcu callback when releasing mmapd ring
+
+From: Florian Westphal
+
+[ Upstream commit 0470eb99b4721586ccac954faac3fa4472da0845 ]
+
+Kirill A. Shutemov says:
+
+This simple test-case triggers a few locking asserts in the kernel:
+
+int main(int argc, char **argv)
+{
+	unsigned int block_size = 16 * 4096;
+	struct nl_mmap_req req = {
+		.nm_block_size = block_size,
+		.nm_block_nr = 64,
+		.nm_frame_size = 16384,
+		.nm_frame_nr = 64 * block_size / 16384,
+	};
+	unsigned int ring_size;
+	int fd;
+
+	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);
+	if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof(req)) < 0)
+		exit(1);
+	if (setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof(req)) < 0)
+		exit(1);
+
+	ring_size = req.nm_block_nr * req.nm_block_size;
+	mmap(NULL, 2 * ring_size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
+	return 0;
+}
+
++++ exited with 0 +++
+BUG: sleeping function called from invalid context at /home/kas/git/public/linux-mm/kernel/locking/mutex.c:616
+in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: init
+3 locks held by init/1:
+ #0: (reboot_mutex){+.+...}, at: [] SyS_reboot+0xa9/0x220
+ #1: ((reboot_notifier_list).rwsem){.+.+..}, at: [] __blocking_notifier_call_chain+0x39/0x70
+ #2: (rcu_callback){......}, at: [] rcu_do_batch.isra.49+0x160/0x10c0
+Preemption disabled at:[] __delay+0xf/0x20
+
+CPU: 1 PID: 1 Comm: init Not tainted 4.1.0-00009-gbddf4c4818e0 #253
+Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Debian-1.8.2-1 04/01/2014
+ ffff88017b3d8000 ffff88027bc03c38 ffffffff81929ceb 0000000000000102
+ 0000000000000000 ffff88027bc03c68 ffffffff81085a9d 0000000000000002
+ ffffffff81ca2a20 0000000000000268 0000000000000000 ffff88027bc03c98
+Call Trace:
+ [] dump_stack+0x4f/0x7b
+ [] ___might_sleep+0x16d/0x270
+ [] __might_sleep+0x4d/0x90
+ [] mutex_lock_nested+0x2f/0x430
+ [] ? _raw_spin_unlock_irqrestore+0x5d/0x80
+ [] ? __this_cpu_preempt_check+0x13/0x20
+ [] netlink_set_ring+0x1ed/0x350
+ [] ? netlink_undo_bind+0x70/0x70
+ [] netlink_sock_destruct+0x80/0x150
+ [] __sk_free+0x1d/0x160
+ [] sk_free+0x19/0x20
+[..]
+
+Cong Wang says:
+
+We can't hold mutex lock in a rcu callback, [..]
+
+Thomas Graf says:
+
+The socket should be dead at this point. It might be simpler to
+add a netlink_release_ring() function which doesn't require
+locking at all.
+
+Reported-by: "Kirill A. Shutemov"
+Diagnosed-by: Cong Wang
+Suggested-by: Thomas Graf
+Signed-off-by: Florian Westphal
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/netlink/af_netlink.c | 79 +++++++++++++++++++++++++++--------------------
+ 1 file changed, 47 insertions(+), 32 deletions(-)
+
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -214,25 +214,52 @@ err1:
+ 	return NULL;
+ }
+ 
++
++static void
++__netlink_set_ring(struct sock *sk, struct nl_mmap_req *req, bool tx_ring, void **pg_vec,
++		   unsigned int order)
++{
++	struct netlink_sock *nlk = nlk_sk(sk);
++	struct sk_buff_head *queue;
++	struct netlink_ring *ring;
++
++	queue = tx_ring ? &sk->sk_write_queue : &sk->sk_receive_queue;
++	ring = tx_ring ? &nlk->tx_ring : &nlk->rx_ring;
++
++	spin_lock_bh(&queue->lock);
++
++	ring->frame_max = req->nm_frame_nr - 1;
++	ring->head = 0;
++	ring->frame_size = req->nm_frame_size;
++	ring->pg_vec_pages = req->nm_block_size / PAGE_SIZE;
++
++	swap(ring->pg_vec_len, req->nm_block_nr);
++	swap(ring->pg_vec_order, order);
++	swap(ring->pg_vec, pg_vec);
++
++	__skb_queue_purge(queue);
++	spin_unlock_bh(&queue->lock);
++
++	WARN_ON(atomic_read(&nlk->mapped));
++
++	if (pg_vec)
++		free_pg_vec(pg_vec, order, req->nm_block_nr);
++}
++
+ static int netlink_set_ring(struct sock *sk, struct nl_mmap_req *req,
+-			    bool closing, bool tx_ring)
++			    bool tx_ring)
+ {
+ 	struct netlink_sock *nlk = nlk_sk(sk);
+ 	struct netlink_ring *ring;
+-	struct sk_buff_head *queue;
+ 	void **pg_vec = NULL;
+ 	unsigned int order = 0;
+-	int err;
+ 
+ 	ring = tx_ring ? &nlk->tx_ring : &nlk->rx_ring;
+-	queue = tx_ring ? &sk->sk_write_queue : &sk->sk_receive_queue;
+ 
+-	if (!closing) {
+-		if (atomic_read(&nlk->mapped))
+-			return -EBUSY;
+-		if (atomic_read(&ring->pending))
+-			return -EBUSY;
+-	}
++	if (atomic_read(&nlk->mapped))
++		return -EBUSY;
++	if (atomic_read(&ring->pending))
++		return -EBUSY;
+ 
+ 	if (req->nm_block_nr) {
+ 		if (ring->pg_vec != NULL)
+@@ -264,31 +291,19 @@ static int netlink_set_ring(struct sock
+ 			return -EINVAL;
+ 	}
+ 
+-	err = -EBUSY;
+ 	mutex_lock(&nlk->pg_vec_lock);
+-	if (closing || atomic_read(&nlk->mapped) == 0) {
+-		err = 0;
+-		spin_lock_bh(&queue->lock);
+-
+-		ring->frame_max = req->nm_frame_nr - 1;
+-		ring->head = 0;
+-		ring->frame_size = req->nm_frame_size;
+-		ring->pg_vec_pages = req->nm_block_size / PAGE_SIZE;
+-
+-		swap(ring->pg_vec_len, req->nm_block_nr);
+-		swap(ring->pg_vec_order, order);
+-		swap(ring->pg_vec, pg_vec);
+-
+-		__skb_queue_purge(queue);
+-		spin_unlock_bh(&queue->lock);
+-
+-		WARN_ON(atomic_read(&nlk->mapped));
++	if (atomic_read(&nlk->mapped) == 0) {
++		__netlink_set_ring(sk, req, tx_ring, pg_vec, order);
++		mutex_unlock(&nlk->pg_vec_lock);
++		return 0;
+ 	}
++
+ 	mutex_unlock(&nlk->pg_vec_lock);
+ 
+ 	if (pg_vec)
+ 		free_pg_vec(pg_vec, order, req->nm_block_nr);
+-	return err;
++
++	return -EBUSY;
+ }
+ 
+ static void netlink_mm_open(struct vm_area_struct *vma)
+@@ -762,10 +777,10 @@ static void netlink_sock_destruct(struct
+ 
+ 		memset(&req, 0, sizeof(req));
+ 		if (nlk->rx_ring.pg_vec)
+-			netlink_set_ring(sk, &req, true, false);
++			__netlink_set_ring(sk, &req, false, NULL, 0);
+ 		memset(&req, 0, sizeof(req));
+ 		if (nlk->tx_ring.pg_vec)
+-			netlink_set_ring(sk, &req, true, true);
++			__netlink_set_ring(sk, &req, true, NULL, 0);
+ 	}
+ #endif /* CONFIG_NETLINK_MMAP */
+ 
+@@ -2017,7 +2032,7 @@ static int netlink_setsockopt(struct soc
+ 			return -EINVAL;
+ 		if (copy_from_user(&req, optval, sizeof(req)))
+ 			return -EFAULT;
+-		err = netlink_set_ring(sk, &req, false,
++		err = netlink_set_ring(sk, &req,
+ 				       optname == NETLINK_TX_RING);
+ 		break;
+ 	}

diff --git a/queue-3.10/rds-fix-an-integer-overflow-test-in-rds_info_getsockopt.patch b/queue-3.10/rds-fix-an-integer-overflow-test-in-rds_info_getsockopt.patch
new file mode 100644
index 00000000000..3907c60143b
--- /dev/null
+++ b/queue-3.10/rds-fix-an-integer-overflow-test-in-rds_info_getsockopt.patch
@@ -0,0 +1,36 @@
+From foo@baz Sat Sep 26 11:20:32 PDT 2015
+From: Dan Carpenter
+Date: Sat, 1 Aug 2015 15:33:26 +0300
+Subject: rds: fix an integer overflow test in rds_info_getsockopt()
+
+From: Dan Carpenter
+
+[ Upstream commit 468b732b6f76b138c0926eadf38ac88467dcd271 ]
+
+"len" is a signed integer. We check that len is not negative, so it
+goes from zero to INT_MAX. PAGE_SIZE is unsigned long so the comparison
+is type promoted to unsigned long. ULONG_MAX - 4095 is higher than
+INT_MAX so the condition can never be true.
+
+I don't know if this is harmful but it seems safe to limit "len" to
+INT_MAX - 4095.
+
+Fixes: a8c879a7ee98 ('RDS: Info and stats')
+Signed-off-by: Dan Carpenter
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/rds/info.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/rds/info.c
++++ b/net/rds/info.c
+@@ -176,7 +176,7 @@ int rds_info_getsockopt(struct socket *s
+ 
+ 	/* check for all kinds of wrapping and the like */
+ 	start = (unsigned long)optval;
+-	if (len < 0 || len + PAGE_SIZE - 1 < len || start + len < start) {
++	if (len < 0 || len > INT_MAX - PAGE_SIZE + 1 || start + len < start) {
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}

diff --git a/queue-3.10/series b/queue-3.10/series
index f27beb69ada..310c047034c 100644
--- a/queue-3.10/series
+++ b/queue-3.10/series
@@ -31,3 +31,19 @@ hfs-hfsplus-cache-pages-correctly-between-bnode_create-and-bnode_free.patch
 sctp-fix-asconf-list-handling.patch
 vhost-scsi-potential-memory-corruption.patch
 x86-bpf_jit-fix-compilation-of-large-bpf-programs.patch
+ipv6-make-mld-packets-to-only-be-processed-locally.patch
+net-tipc-initialize-security-state-for-new-connection-socket.patch
+bridge-mdb-zero-out-the-local-br_ip-variable-before-use.patch
+net-pktgen-fix-race-between-pktgen_thread_worker-and-kthread_stop.patch
+net-call-rcu_read_lock-early-in-process_backlog.patch
+net-clone-skb-before-setting-peeked-flag.patch
+net-fix-skb-csum-races-when-peeking.patch
+net-fix-skb_set_peeked-use-after-free-bug.patch
+bridge-mdb-fix-double-add-notification.patch
+isdn-gigaset-reset-tty-receive_room-when-attaching-ser_gigaset.patch
+ipv6-lock-socket-in-ip6_datagram_connect.patch
+bonding-fix-destruction-of-bond-with-devices-different-from-arphrd_ether.patch
+bonding-correct-the-mac-address-for-follow-fail_over_mac-policy.patch
+inet-frags-fix-defragmented-packet-s-ip-header-for-af_packet.patch
+netlink-don-t-hold-mutex-in-rcu-callback-when-releasing-mmapd-ring.patch
+rds-fix-an-integer-overflow-test-in-rds_info_getsockopt.patch
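Editorial aside on the rds fix above, not part of any queued patch: a
minimal, self-contained user-space sketch of why the old
"len + PAGE_SIZE - 1 < len" test can never fire once len is promoted to
unsigned long. A 64-bit platform is assumed, and a local 4096 constant
stands in for PAGE_SIZE:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 4096;	/* stand-in for PAGE_SIZE */
	int len = INT_MAX;			/* worst case a caller can pass */

	/* Old test: "len" is promoted to unsigned long, so on 64-bit the
	 * sum cannot wrap and the comparison is never true. */
	if (len + page_size - 1 < (unsigned long)len)
		printf("old test: overflow caught\n");
	else
		printf("old test: overflow missed\n");	/* this prints */

	/* New test: keep the limit itself below INT_MAX instead, as the
	 * patched condition does. */
	if (len > (long)(INT_MAX - page_size + 1))
		printf("new test: overflow caught\n");	/* this prints */

	return 0;
}

The design point is that mixing a signed length with an unsigned long
constant silently moves the whole comparison into the unsigned domain,
so wrap-around checks written in the signed mindset become dead code;
the fix re-expresses the bound so the interesting comparison stays
within the signed range.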