kernel: update to 3.10.30.
src/patches/linux-3.10.25-imq.patch
1diff -ruN linux-3.10.27/drivers/net/imq.c linux-3.10.27-imq/drivers/net/imq.c
2--- linux-3.10.27/drivers/net/imq.c 1970-01-01 01:00:00.000000000 +0100
3+++ linux-3.10.27-imq/drivers/net/imq.c 2014-01-18 10:19:59.342342913 +0100
4@@ -0,0 +1,1001 @@
5+/*
6+ * Pseudo-driver for the intermediate queue device.
7+ *
8+ * This program is free software; you can redistribute it and/or
9+ * modify it under the terms of the GNU General Public License
10+ * as published by the Free Software Foundation; either version
11+ * 2 of the License, or (at your option) any later version.
12+ *
13+ * Authors: Patrick McHardy, <kaber@trash.net>
14+ *
15+ * The first version was written by Martin Devera, <devik@cdi.cz>
16+ *
17+ * Credits: Jan Rafaj <imq2t@cedric.vabo.cz>
18+ * - Update patch to 2.4.21
19+ * Sebastian Strollo <sstrollo@nortelnetworks.com>
20+ * - Fix "Dead-loop on netdevice imq"-issue
21+ * Marcel Sebek <sebek64@post.cz>
22+ * - Update to 2.6.2-rc1
23+ *
24+ * After some time of inactivity there is a group taking care
25+ * of IMQ again: http://www.linuximq.net
26+ *
27+ *
28+ * 2004/06/30 - New version of IMQ patch to kernels <=2.6.7
29+ * including the following changes:
30+ *
31+ * - Correction of ipv6 support "+"s issue (Hasso Tepper)
32+ * - Correction of imq_init_devs() issue that resulted in
33+ * kernel OOPS unloading IMQ as module (Norbert Buchmuller)
34+ * - Addition of functionality to choose number of IMQ devices
35+ * during kernel config (Andre Correa)
36+ * - Addition of functionality to choose how IMQ hooks on
37+ * PRE and POSTROUTING (after or before NAT) (Andre Correa)
38+ * - Cosmetic corrections (Norbert Buchmuller) (Andre Correa)
39+ *
40+ *
41+ * 2005/12/16 - IMQ versions between 2.6.7 and 2.6.13 were
42+ * released with almost no problems. 2.6.14-x was released
43+ * with some important changes: nfcache was removed. After
44+ * some weeks of trouble we figured out that some IMQ fields
45+ * in skb were missing in skbuff.c - skb_clone and copy_skb_header.
46+ * These functions are correctly patched by this new patch version.
47+ *
48+ * Thanks for all who helped to figure out all the problems with
49+ * 2.6.14.x: Patrick McHardy, Rune Kock, VeNoMouS, Max CtRiX,
50+ * Kevin Shanahan, Richard Lucassen, Valery Dachev (hopefully
51+ * I didn't forget anybody). I apologize again for my lack of time.
52+ *
53+ *
54+ * 2008/06/17 - 2.6.25 - Changed imq.c to use qdisc_run() instead
55+ * of qdisc_restart() and moved qdisc_run() to tasklet to avoid
56+ * recursive locking. New initialization routines to fix 'rmmod' not
57+ * working anymore. Used code from ifb.c. (Jussi Kivilinna)
58+ *
59+ * 2008/08/06 - 2.6.26 - (JK)
60+ * - Replaced tasklet with 'netif_schedule()'.
61+ * - Cleaned up and added comments for imq_nf_queue().
62+ *
63+ * 2009/04/12
64+ * - Add skb_save_cb/skb_restore_cb helper functions for backing
65+ * up the control buffer. This is needed because the qdisc layer
66+ * on kernels 2.6.27 and newer overwrites the control buffer. (Jussi Kivilinna)
67+ * - Add better locking for IMQ device. Hopefully this will solve
68+ * SMP issues. (Jussi Kivilinna)
69+ * - Port to 2.6.27
70+ * - Port to 2.6.28
71+ * - Port to 2.6.29 + fix rmmod not working
72+ *
73+ * 2009/04/20 - (Jussi Kivilinna)
74+ * - Use netdevice feature flags to avoid extra packet handling
75+ * by core networking layer and possibly increase performance.
76+ *
77+ * 2009/09/26 - (Jussi Kivilinna)
78+ * - Add imq_nf_reinject_lockless to fix deadlock with
79+ * imq_nf_queue/imq_nf_reinject.
80+ *
81+ * 2009/12/08 - (Jussi Kivilinna)
82+ * - Port to 2.6.32
83+ * - Add check for skb->nf_queue_entry==NULL in imq_dev_xmit()
84+ * - Also add better error checking for skb->nf_queue_entry usage
85+ *
86+ * 2010/02/25 - (Jussi Kivilinna)
87+ * - Port to 2.6.33
88+ *
89+ * 2010/08/15 - (Jussi Kivilinna)
90+ * - Port to 2.6.35
91+ * - Simplify hook registration by using nf_register_hooks.
92+ * - nf_reinject doesn't need spinlock around it, therefore remove
93+ * imq_nf_reinject function. Other nf_reinject users protect
94+ * their own data with spinlock. With IMQ, however, all the data
95+ * that is needed is stored per skbuff, so no locking is needed.
96+ * - Changed IMQ to use 'separate' NF_IMQ_QUEUE instead of
97+ * NF_QUEUE, this allows working coexistence of IMQ and other
98+ * NF_QUEUE users.
99+ * - Make IMQ multi-queue. Number of IMQ device queues can be
100+ * increased with the 'numqueues' module parameter. Default number
101+ * of queues is 1, in other words by default IMQ works as
102+ * single-queue device. Multi-queue selection is based on
103+ * IFB multi-queue patch by Changli Gao <xiaosuo@gmail.com>.
104+ *
105+ * 2011/03/18 - (Jussi Kivilinna)
106+ * - Port to 2.6.38
107+ *
108+ * 2011/07/12 - (syoder89@gmail.com)
109+ * - Fix a crash that happens when the receiving interface has more
110+ * than one queue (add missing skb_set_queue_mapping in
111+ * imq_select_queue).
112+ *
113+ * 2011/07/26 - (Jussi Kivilinna)
114+ * - Add queue mapping checks for packets exiting IMQ.
115+ * - Port to 3.0
116+ *
117+ * 2011/08/16 - (Jussi Kivilinna)
118+ * - Clear IFF_TX_SKB_SHARING flag that was added for linux 3.0.2
119+ *
120+ * 2011/11/03 - Germano Michel <germanomichel@gmail.com>
121+ * - Fix IMQ for net namespaces
122+ *
123+ * 2011/11/04 - Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
124+ * - Port to 3.1
125+ * - Clean-up, move 'get imq device pointer by imqX name' to
126+ * separate function from imq_nf_queue().
127+ *
128+ * 2012/01/05 - Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
129+ * - Port to 3.2
130+ *
131+ * 2012/03/19 - Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
132+ * - Port to 3.3
133+ *
134+ * 2012/12/12 - Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
135+ * - Port to 3.7
136+ * - Fix checkpatch.pl warnings
137+ *
138+ * 2013/09/10 - Jussi Kivilinna <jussi.kivilinna@iki.fi>
139+ * - Fixed GSO handling for 3.10, see imq_nf_queue() for comments.
140+ * - Don't copy skb->cb_next when copying or cloning skbuffs.
141+ *
142+ * Also, many thanks to Pablo Sebastian Greco for making the initial
143+ * patch and to those who helped with the testing.
144+ *
145+ * More info at: http://www.linuximq.net/ (Andre Correa)
146+ */
147+
148+#include <linux/module.h>
149+#include <linux/kernel.h>
150+#include <linux/moduleparam.h>
151+#include <linux/list.h>
152+#include <linux/skbuff.h>
153+#include <linux/netdevice.h>
154+#include <linux/etherdevice.h>
155+#include <linux/rtnetlink.h>
156+#include <linux/if_arp.h>
157+#include <linux/netfilter.h>
158+#include <linux/netfilter_ipv4.h>
159+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
160+ #include <linux/netfilter_ipv6.h>
161+#endif
162+#include <linux/imq.h>
163+#include <net/pkt_sched.h>
164+#include <net/netfilter/nf_queue.h>
165+#include <net/sock.h>
166+#include <linux/ip.h>
167+#include <linux/ipv6.h>
168+#include <linux/if_vlan.h>
169+#include <linux/if_pppox.h>
170+#include <net/ip.h>
171+#include <net/ipv6.h>
172+
173+static int imq_nf_queue(struct nf_queue_entry *entry, unsigned queue_num);
174+
175+static nf_hookfn imq_nf_hook;
176+
177+static struct nf_hook_ops imq_ops[] = {
178+ {
179+ /* imq_ingress_ipv4 */
180+ .hook = imq_nf_hook,
181+ .owner = THIS_MODULE,
182+ .pf = PF_INET,
183+ .hooknum = NF_INET_PRE_ROUTING,
184+#if defined(CONFIG_IMQ_BEHAVIOR_BA) || defined(CONFIG_IMQ_BEHAVIOR_BB)
185+ .priority = NF_IP_PRI_MANGLE + 1,
186+#else
187+ .priority = NF_IP_PRI_NAT_DST + 1,
188+#endif
189+ },
190+ {
191+ /* imq_egress_ipv4 */
192+ .hook = imq_nf_hook,
193+ .owner = THIS_MODULE,
194+ .pf = PF_INET,
195+ .hooknum = NF_INET_POST_ROUTING,
196+#if defined(CONFIG_IMQ_BEHAVIOR_AA) || defined(CONFIG_IMQ_BEHAVIOR_BA)
197+ .priority = NF_IP_PRI_LAST,
198+#else
199+ .priority = NF_IP_PRI_NAT_SRC - 1,
200+#endif
201+ },
202+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
203+ {
204+ /* imq_ingress_ipv6 */
205+ .hook = imq_nf_hook,
206+ .owner = THIS_MODULE,
207+ .pf = PF_INET6,
208+ .hooknum = NF_INET_PRE_ROUTING,
209+#if defined(CONFIG_IMQ_BEHAVIOR_BA) || defined(CONFIG_IMQ_BEHAVIOR_BB)
210+ .priority = NF_IP6_PRI_MANGLE + 1,
211+#else
212+ .priority = NF_IP6_PRI_NAT_DST + 1,
213+#endif
214+ },
215+ {
216+ /* imq_egress_ipv6 */
217+ .hook = imq_nf_hook,
218+ .owner = THIS_MODULE,
219+ .pf = PF_INET6,
220+ .hooknum = NF_INET_POST_ROUTING,
221+#if defined(CONFIG_IMQ_BEHAVIOR_AA) || defined(CONFIG_IMQ_BEHAVIOR_BA)
222+ .priority = NF_IP6_PRI_LAST,
223+#else
224+ .priority = NF_IP6_PRI_NAT_SRC - 1,
225+#endif
226+ },
227+#endif
228+};
229+
230+#if defined(CONFIG_IMQ_NUM_DEVS)
231+static int numdevs = CONFIG_IMQ_NUM_DEVS;
232+#else
233+static int numdevs = IMQ_MAX_DEVS;
234+#endif
235+
236+static struct net_device *imq_devs_cache[IMQ_MAX_DEVS];
237+
238+#define IMQ_MAX_QUEUES 32
239+static int numqueues = 1;
240+static u32 imq_hashrnd;
241+
242+static inline __be16 pppoe_proto(const struct sk_buff *skb)
243+{
244+ return *((__be16 *)(skb_mac_header(skb) + ETH_HLEN +
245+ sizeof(struct pppoe_hdr)));
246+}
247+
248+static u16 imq_hash(struct net_device *dev, struct sk_buff *skb)
249+{
250+ unsigned int pull_len;
251+ u16 protocol = skb->protocol;
252+ u32 addr1, addr2;
253+ u32 hash, ihl = 0;
254+ union {
255+ u16 in16[2];
256+ u32 in32;
257+ } ports;
258+ u8 ip_proto;
259+
260+ pull_len = 0;
261+
262+recheck:
263+ switch (protocol) {
264+ case htons(ETH_P_8021Q): {
265+ if (unlikely(skb_pull(skb, VLAN_HLEN) == NULL))
266+ goto other;
267+
268+ pull_len += VLAN_HLEN;
269+ skb->network_header += VLAN_HLEN;
270+
271+ protocol = vlan_eth_hdr(skb)->h_vlan_encapsulated_proto;
272+ goto recheck;
273+ }
274+
275+ case htons(ETH_P_PPP_SES): {
276+ if (unlikely(skb_pull(skb, PPPOE_SES_HLEN) == NULL))
277+ goto other;
278+
279+ pull_len += PPPOE_SES_HLEN;
280+ skb->network_header += PPPOE_SES_HLEN;
281+
282+ protocol = pppoe_proto(skb);
283+ goto recheck;
284+ }
285+
286+ case htons(ETH_P_IP): {
287+ const struct iphdr *iph = ip_hdr(skb);
288+
289+ if (unlikely(!pskb_may_pull(skb, sizeof(struct iphdr))))
290+ goto other;
291+
292+ addr1 = iph->daddr;
293+ addr2 = iph->saddr;
294+
295+ ip_proto = !(ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) ?
296+ iph->protocol : 0;
297+ ihl = ip_hdrlen(skb);
298+
299+ break;
300+ }
301+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
302+ case htons(ETH_P_IPV6): {
303+ const struct ipv6hdr *iph = ipv6_hdr(skb);
304+ __be16 fo = 0;
305+
306+ if (unlikely(!pskb_may_pull(skb, sizeof(struct ipv6hdr))))
307+ goto other;
308+
309+ addr1 = iph->daddr.s6_addr32[3];
310+ addr2 = iph->saddr.s6_addr32[3];
311+ ihl = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &ip_proto,
312+ &fo);
313+ if (unlikely((int)ihl < 0)) /* ihl is u32; catch ipv6_skip_exthdr() error */
314+ goto other;
315+
316+ break;
317+ }
318+#endif
319+ default:
320+other:
321+ if (pull_len != 0) {
322+ skb_push(skb, pull_len);
323+ skb->network_header -= pull_len;
324+ }
325+
326+ return (u16)(ntohs(protocol) % dev->real_num_tx_queues);
327+ }
328+
329+ if (addr1 > addr2)
330+ swap(addr1, addr2);
331+
332+ switch (ip_proto) {
333+ case IPPROTO_TCP:
334+ case IPPROTO_UDP:
335+ case IPPROTO_DCCP:
336+ case IPPROTO_ESP:
337+ case IPPROTO_AH:
338+ case IPPROTO_SCTP:
339+ case IPPROTO_UDPLITE: {
340+ if (likely(skb_copy_bits(skb, ihl, &ports.in32, 4) >= 0)) {
341+ if (ports.in16[0] > ports.in16[1])
342+ swap(ports.in16[0], ports.in16[1]);
343+ break;
344+ }
345+ /* fall-through */
346+ }
347+ default:
348+ ports.in32 = 0;
349+ break;
350+ }
351+
352+ if (pull_len != 0) {
353+ skb_push(skb, pull_len);
354+ skb->network_header -= pull_len;
355+ }
356+
357+ hash = jhash_3words(addr1, addr2, ports.in32, imq_hashrnd ^ ip_proto);
358+
359+ return (u16)(((u64)hash * dev->real_num_tx_queues) >> 32);
360+}
361+
362+static inline bool sk_tx_queue_recorded(struct sock *sk)
363+{
364+ return (sk_tx_queue_get(sk) >= 0);
365+}
366+
367+static struct netdev_queue *imq_select_queue(struct net_device *dev,
368+ struct sk_buff *skb)
369+{
370+ u16 queue_index = 0;
371+ u32 hash;
372+
373+ if (likely(dev->real_num_tx_queues == 1))
374+ goto out;
375+
376+ /* IMQ can be receiving ingress or egress packets. */
377+
378+ /* Check first if rx_queue is set */
379+ if (skb_rx_queue_recorded(skb)) {
380+ queue_index = skb_get_rx_queue(skb);
381+ goto out;
382+ }
383+
384+ /* Check if socket has tx_queue set */
385+ if (sk_tx_queue_recorded(skb->sk)) {
386+ queue_index = sk_tx_queue_get(skb->sk);
387+ goto out;
388+ }
389+
390+ /* Try to use the socket hash */
391+ if (skb->sk && skb->sk->sk_hash) {
392+ hash = skb->sk->sk_hash;
393+ queue_index =
394+ (u16)(((u64)hash * dev->real_num_tx_queues) >> 32);
395+ goto out;
396+ }
397+
398+ /* Generate hash from packet data */
399+ queue_index = imq_hash(dev, skb);
400+
401+out:
402+ if (unlikely(queue_index >= dev->real_num_tx_queues))
403+ queue_index = (u16)((u32)queue_index % dev->real_num_tx_queues);
404+
405+ skb_set_queue_mapping(skb, queue_index);
406+ return netdev_get_tx_queue(dev, queue_index);
407+}
408+
409+static struct net_device_stats *imq_get_stats(struct net_device *dev)
410+{
411+ return &dev->stats;
412+}
413+
414+/* called for packets kfree'd in qdiscs at places other than enqueue */
415+static void imq_skb_destructor(struct sk_buff *skb)
416+{
417+ struct nf_queue_entry *entry = skb->nf_queue_entry;
418+
419+ skb->nf_queue_entry = NULL;
420+
421+ if (entry) {
422+ nf_queue_entry_release_refs(entry);
423+ kfree(entry);
424+ }
425+
426+ skb_restore_cb(skb); /* kfree backup */
427+}
428+
429+static void imq_done_check_queue_mapping(struct sk_buff *skb,
430+ struct net_device *dev)
431+{
432+ unsigned int queue_index;
433+
434+ /* Don't let queue_mapping be left too large after exiting IMQ */
435+ if (likely(skb->dev != dev && skb->dev != NULL)) {
436+ queue_index = skb_get_queue_mapping(skb);
437+ if (unlikely(queue_index >= skb->dev->real_num_tx_queues)) {
438+ queue_index = (u16)((u32)queue_index %
439+ skb->dev->real_num_tx_queues);
440+ skb_set_queue_mapping(skb, queue_index);
441+ }
442+ } else {
443+ /* skb->dev was IMQ device itself or NULL, be on safe side and
444+ * just clear queue mapping.
445+ */
446+ skb_set_queue_mapping(skb, 0);
447+ }
448+}
449+
450+static netdev_tx_t imq_dev_xmit(struct sk_buff *skb, struct net_device *dev)
451+{
452+ struct nf_queue_entry *entry = skb->nf_queue_entry;
453+
454+ skb->nf_queue_entry = NULL;
455+ dev->trans_start = jiffies;
456+
457+ dev->stats.tx_bytes += skb->len;
458+ dev->stats.tx_packets++;
459+
460+ if (unlikely(entry == NULL)) {
461+ /* We don't know what is going on here... the packet is queued for
462+ * the imq device, but (probably) not by us.
463+ *
464+ * If this packet was not sent here by imq_nf_queue(), then
465+ * skb_save_cb() was not used and kfree_skb() should not show:
466+ * WARNING: IMQ: kfree_skb: skb->cb_next:..
467+ * and/or
468+ * WARNING: IMQ: kfree_skb: skb->nf_queue_entry...
469+ *
470+ * However if this message is shown, then IMQ is somehow broken
471+ * and you should report this to linuximq.net.
472+ */
473+
474+ /* imq_dev_xmit() is a black hole that eats all packets; report
475+ * that we ate this packet happily and increase the dropped counter.
476+ */
477+
478+ dev->stats.tx_dropped++;
479+ dev_kfree_skb(skb);
480+
481+ return NETDEV_TX_OK;
482+ }
483+
484+ skb_restore_cb(skb); /* restore skb->cb */
485+
486+ skb->imq_flags = 0;
487+ skb->destructor = NULL;
488+
489+ imq_done_check_queue_mapping(skb, dev);
490+
491+ nf_reinject(entry, NF_ACCEPT);
492+
493+ return NETDEV_TX_OK;
494+}
495+
496+static struct net_device *get_imq_device_by_index(int index)
497+{
498+ struct net_device *dev = NULL;
499+ struct net *net;
500+ char buf[8];
501+
502+ /* get device by name and cache result */
503+ snprintf(buf, sizeof(buf), "imq%d", index);
504+
505+ /* Search device from all namespaces. */
506+ for_each_net(net) {
507+ dev = dev_get_by_name(net, buf);
508+ if (dev)
509+ break;
510+ }
511+
512+ if (WARN_ON_ONCE(dev == NULL)) {
513+ /* IMQ device not found. Exotic config? */
514+ return ERR_PTR(-ENODEV);
515+ }
516+
517+ imq_devs_cache[index] = dev;
518+ dev_put(dev);
519+
520+ return dev;
521+}
522+
523+static struct nf_queue_entry *nf_queue_entry_dup(struct nf_queue_entry *e)
524+{
525+ struct nf_queue_entry *entry = kmemdup(e, e->size, GFP_ATOMIC);
526+ if (entry) {
527+ if (nf_queue_entry_get_refs(entry))
528+ return entry;
529+ kfree(entry);
530+ }
531+ return NULL;
532+}
533+
534+#ifdef CONFIG_BRIDGE_NETFILTER
535+/* When called from bridge netfilter, skb->data must point to MAC header
536+ * before calling skb_gso_segment(). Else, original MAC header is lost
537+ * and segmented skbs will be sent to wrong destination.
538+ */
539+static void nf_bridge_adjust_skb_data(struct sk_buff *skb)
540+{
541+ if (skb->nf_bridge)
542+ __skb_push(skb, skb->network_header - skb->mac_header);
543+}
544+
545+static void nf_bridge_adjust_segmented_data(struct sk_buff *skb)
546+{
547+ if (skb->nf_bridge)
548+ __skb_pull(skb, skb->network_header - skb->mac_header);
549+}
550+#else
551+#define nf_bridge_adjust_skb_data(s) do {} while (0)
552+#define nf_bridge_adjust_segmented_data(s) do {} while (0)
553+#endif
554+
555+static void free_entry(struct nf_queue_entry *entry)
556+{
557+ nf_queue_entry_release_refs(entry);
558+ kfree(entry);
559+}
560+
561+static int __imq_nf_queue(struct nf_queue_entry *entry, struct net_device *dev);
562+
563+static int __imq_nf_queue_gso(struct nf_queue_entry *entry,
564+ struct net_device *dev, struct sk_buff *skb)
565+{
566+ int ret = -ENOMEM;
567+ struct nf_queue_entry *entry_seg;
568+
569+ nf_bridge_adjust_segmented_data(skb);
570+
571+ if (skb->next == NULL) { /* last packet, no need to copy entry */
572+ struct sk_buff *gso_skb = entry->skb;
573+ entry->skb = skb;
574+ ret = __imq_nf_queue(entry, dev);
575+ if (ret)
576+ entry->skb = gso_skb;
577+ return ret;
578+ }
579+
580+ skb->next = NULL;
581+
582+ entry_seg = nf_queue_entry_dup(entry);
583+ if (entry_seg) {
584+ entry_seg->skb = skb;
585+ ret = __imq_nf_queue(entry_seg, dev);
586+ if (ret)
587+ free_entry(entry_seg);
588+ }
589+ return ret;
590+}
591+
592+static int imq_nf_queue(struct nf_queue_entry *entry, unsigned queue_num)
593+{
594+ struct sk_buff *skb, *segs;
595+ struct net_device *dev;
596+ unsigned int queued;
597+ int index, retval, err;
598+
599+ index = entry->skb->imq_flags & IMQ_F_IFMASK;
600+ if (unlikely(index > numdevs - 1)) {
601+ if (net_ratelimit())
602+ pr_warn("IMQ: invalid device specified, highest is %u\n",
603+ numdevs - 1);
604+ retval = -EINVAL;
605+ goto out_no_dev;
606+ }
607+
608+ /* check for imq device by index from cache */
609+ dev = imq_devs_cache[index];
610+ if (unlikely(!dev)) {
611+ dev = get_imq_device_by_index(index);
612+ if (IS_ERR(dev)) {
613+ retval = PTR_ERR(dev);
614+ goto out_no_dev;
615+ }
616+ }
617+
618+ if (unlikely(!(dev->flags & IFF_UP))) {
619+ entry->skb->imq_flags = 0;
620+ retval = -ECANCELED;
621+ goto out_no_dev;
622+ }
623+
624+ if (!skb_is_gso(entry->skb))
625+ return __imq_nf_queue(entry, dev);
626+
627+ /* Since 3.10.x, GSO handling moved here as a result of upstream commit
628+ * a5fedd43d5f6c94c71053a66e4c3d2e35f1731a2 (netfilter: move
629+ * skb_gso_segment into nfnetlink_queue module).
630+ *
631+ * The following code replicates the GSO handling from
632+ * 'net/netfilter/nfnetlink_queue_core.c':nfqnl_enqueue_packet().
633+ */
634+
635+ skb = entry->skb;
636+
637+ switch (entry->pf) {
638+ case NFPROTO_IPV4:
639+ skb->protocol = htons(ETH_P_IP);
640+ break;
641+ case NFPROTO_IPV6:
642+ skb->protocol = htons(ETH_P_IPV6);
643+ break;
644+ }
645+
646+ nf_bridge_adjust_skb_data(skb);
647+ segs = skb_gso_segment(skb, 0);
648+ /* Does not use PTR_ERR to limit the number of error codes that can be
649+ * returned by nf_queue. For instance, callers rely on -ECANCELED to
650+ * mean 'ignore this hook'.
651+ */
652+ err = -ENOBUFS;
653+ if (IS_ERR(segs))
654+ goto out_err;
655+ queued = 0;
656+ err = 0;
657+ do {
658+ struct sk_buff *nskb = segs->next;
659+ if (nskb && nskb->next)
660+ nskb->cb_next = NULL;
661+ if (err == 0)
662+ err = __imq_nf_queue_gso(entry, dev, segs);
663+ if (err == 0)
664+ queued++;
665+ else
666+ kfree_skb(segs);
667+ segs = nskb;
668+ } while (segs);
669+
670+ if (queued) {
671+ if (err) /* some segments are already queued */
672+ free_entry(entry);
673+ kfree_skb(skb);
674+ return 0;
675+ }
676+
677+out_err:
678+ nf_bridge_adjust_segmented_data(skb);
679+ retval = err;
680+out_no_dev:
681+ return retval;
682+}
683+
684+static int __imq_nf_queue(struct nf_queue_entry *entry, struct net_device *dev)
685+{
686+ struct sk_buff *skb_orig, *skb, *skb_shared;
687+ struct Qdisc *q;
688+ struct netdev_queue *txq;
689+ spinlock_t *root_lock;
690+ int users;
691+ int retval = -EINVAL;
692+ unsigned int orig_queue_index;
693+
694+ dev->last_rx = jiffies;
695+
696+ skb = entry->skb;
697+ skb_orig = NULL;
698+
699+ /* skb has owner? => make clone */
700+ if (unlikely(skb->destructor)) {
701+ skb_orig = skb;
702+ skb = skb_clone(skb, GFP_ATOMIC);
703+ if (unlikely(!skb)) {
704+ retval = -ENOMEM;
705+ goto out;
706+ }
707+ skb->cb_next = NULL;
708+ entry->skb = skb;
709+ }
710+
711+ skb->nf_queue_entry = entry;
712+
713+ dev->stats.rx_bytes += skb->len;
714+ dev->stats.rx_packets++;
715+
716+ if (!skb->dev) {
717+ /* skb->dev == NULL causes problems, try to find the cause. */
718+ if (net_ratelimit()) {
719+ dev_warn(&dev->dev,
720+ "received packet with skb->dev == NULL\n");
721+ dump_stack();
722+ }
723+
724+ skb->dev = dev;
725+ }
726+
727+ /* Disables softirqs for lock below */
728+ rcu_read_lock_bh();
729+
730+ /* Multi-queue selection */
731+ orig_queue_index = skb_get_queue_mapping(skb);
732+ txq = imq_select_queue(dev, skb);
733+
734+ q = rcu_dereference(txq->qdisc);
735+ if (unlikely(!q->enqueue))
736+ goto packet_not_eaten_by_imq_dev;
737+
738+ root_lock = qdisc_lock(q);
739+ spin_lock(root_lock);
740+
741+ users = atomic_read(&skb->users);
742+
743+ skb_shared = skb_get(skb); /* increase reference count by one */
744+
745+ /* backup skb->cb, as qdisc layer will overwrite it */
746+ skb_save_cb(skb_shared);
747+ qdisc_enqueue_root(skb_shared, q); /* might kfree_skb */
748+
749+ if (likely(atomic_read(&skb_shared->users) == users + 1)) {
750+ kfree_skb(skb_shared); /* decrease reference count by one */
751+
752+ skb->destructor = &imq_skb_destructor;
753+
754+ /* cloned? */
755+ if (unlikely(skb_orig))
756+ kfree_skb(skb_orig); /* free original */
757+
758+ spin_unlock(root_lock);
759+ rcu_read_unlock_bh();
760+
761+ /* schedule qdisc dequeue */
762+ __netif_schedule(q);
763+
764+ retval = 0;
765+ goto out;
766+ } else {
767+ skb_restore_cb(skb_shared); /* restore skb->cb */
768+ skb->nf_queue_entry = NULL;
769+ /*
770+ * The qdisc dropped the packet and decreased its reference
771+ * count, so we don't want to free it again as that would
772+ * actually destroy the skb.
773+ */
774+ spin_unlock(root_lock);
775+ goto packet_not_eaten_by_imq_dev;
776+ }
777+
778+packet_not_eaten_by_imq_dev:
779+ skb_set_queue_mapping(skb, orig_queue_index);
780+ rcu_read_unlock_bh();
781+
782+ /* cloned? restore original */
783+ if (unlikely(skb_orig)) {
784+ kfree_skb(skb);
785+ entry->skb = skb_orig;
786+ }
787+ retval = -1;
788+out:
789+ return retval;
790+}
791+
792+static unsigned int imq_nf_hook(unsigned int hook, struct sk_buff *pskb,
793+ const struct net_device *indev,
794+ const struct net_device *outdev,
795+ int (*okfn)(struct sk_buff *))
796+{
797+ return (pskb->imq_flags & IMQ_F_ENQUEUE) ? NF_IMQ_QUEUE : NF_ACCEPT;
798+}
799+
800+static int imq_close(struct net_device *dev)
801+{
802+ netif_stop_queue(dev);
803+ return 0;
804+}
805+
806+static int imq_open(struct net_device *dev)
807+{
808+ netif_start_queue(dev);
809+ return 0;
810+}
811+
812+static const struct net_device_ops imq_netdev_ops = {
813+ .ndo_open = imq_open,
814+ .ndo_stop = imq_close,
815+ .ndo_start_xmit = imq_dev_xmit,
816+ .ndo_get_stats = imq_get_stats,
817+};
818+
819+static void imq_setup(struct net_device *dev)
820+{
821+ dev->netdev_ops = &imq_netdev_ops;
822+ dev->type = ARPHRD_VOID;
823+ dev->mtu = 16000; /* too small? */
824+ dev->tx_queue_len = 11000; /* too big? */
825+ dev->flags = IFF_NOARP;
826+ dev->features = NETIF_F_SG | NETIF_F_FRAGLIST |
827+ NETIF_F_GSO | NETIF_F_HW_CSUM |
828+ NETIF_F_HIGHDMA;
829+ dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE |
830+ IFF_TX_SKB_SHARING);
831+}
832+
833+static int imq_validate(struct nlattr *tb[], struct nlattr *data[])
834+{
835+ int ret = 0;
836+
837+ if (tb[IFLA_ADDRESS]) {
838+ if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN) {
839+ ret = -EINVAL;
840+ goto end;
841+ }
842+ if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS]))) {
843+ ret = -EADDRNOTAVAIL;
844+ goto end;
845+ }
846+ }
847+ return 0;
848+end:
849+ pr_warn("IMQ: imq_validate failed (%d)\n", ret);
850+ return ret;
851+}
852+
853+static struct rtnl_link_ops imq_link_ops __read_mostly = {
854+ .kind = "imq",
855+ .priv_size = 0,
856+ .setup = imq_setup,
857+ .validate = imq_validate,
858+};
859+
860+static const struct nf_queue_handler imq_nfqh = {
861+ .outfn = imq_nf_queue,
862+};
863+
864+static int __init imq_init_hooks(void)
865+{
866+ int ret;
867+
868+ nf_register_queue_imq_handler(&imq_nfqh);
869+
870+ ret = nf_register_hooks(imq_ops, ARRAY_SIZE(imq_ops));
871+ if (ret < 0)
872+ nf_unregister_queue_imq_handler();
873+
874+ return ret;
875+}
876+
877+static int __init imq_init_one(int index)
878+{
879+ struct net_device *dev;
880+ int ret;
881+
882+ dev = alloc_netdev_mq(0, "imq%d", imq_setup, numqueues);
883+ if (!dev)
884+ return -ENOMEM;
885+
886+ ret = dev_alloc_name(dev, dev->name);
887+ if (ret < 0)
888+ goto fail;
889+
890+ dev->rtnl_link_ops = &imq_link_ops;
891+ ret = register_netdevice(dev);
892+ if (ret < 0)
893+ goto fail;
894+
895+ return 0;
896+fail:
897+ free_netdev(dev);
898+ return ret;
899+}
900+
901+static int __init imq_init_devs(void)
902+{
903+ int err, i;
904+
905+ if (numdevs < 1 || numdevs > IMQ_MAX_DEVS) {
906+ pr_err("IMQ: numdevs has to be between 1 and %u\n",
907+ IMQ_MAX_DEVS);
908+ return -EINVAL;
909+ }
910+
911+ if (numqueues < 1 || numqueues > IMQ_MAX_QUEUES) {
912+ pr_err("IMQ: numqueues has to be between 1 and %u\n",
913+ IMQ_MAX_QUEUES);
914+ return -EINVAL;
915+ }
916+
917+ get_random_bytes(&imq_hashrnd, sizeof(imq_hashrnd));
918+
919+ rtnl_lock();
920+ err = __rtnl_link_register(&imq_link_ops);
921+
922+ for (i = 0; i < numdevs && !err; i++)
923+ err = imq_init_one(i);
924+
925+ if (err) {
926+ __rtnl_link_unregister(&imq_link_ops);
927+ memset(imq_devs_cache, 0, sizeof(imq_devs_cache));
928+ }
929+ rtnl_unlock();
930+
931+ return err;
932+}
933+
934+static int __init imq_init_module(void)
935+{
936+ int err;
937+
938+#if defined(CONFIG_IMQ_NUM_DEVS)
939+ BUILD_BUG_ON(CONFIG_IMQ_NUM_DEVS > 16);
940+ BUILD_BUG_ON(CONFIG_IMQ_NUM_DEVS < 2);
941+ BUILD_BUG_ON(CONFIG_IMQ_NUM_DEVS - 1 > IMQ_F_IFMASK);
942+#endif
943+
944+ err = imq_init_devs();
945+ if (err) {
946+ pr_err("IMQ: Error trying imq_init_devs(net)\n");
947+ return err;
948+ }
949+
950+ err = imq_init_hooks();
951+ if (err) {
952+ pr_err("IMQ: Error trying imq_init_hooks()\n");
953+ rtnl_link_unregister(&imq_link_ops);
954+ memset(imq_devs_cache, 0, sizeof(imq_devs_cache));
955+ return err;
956+ }
957+
958+ pr_info("IMQ driver loaded successfully. (numdevs = %d, numqueues = %d)\n",
959+ numdevs, numqueues);
960+
961+#if defined(CONFIG_IMQ_BEHAVIOR_BA) || defined(CONFIG_IMQ_BEHAVIOR_BB)
962+ pr_info("\tHooking IMQ before NAT on PREROUTING.\n");
963+#else
964+ pr_info("\tHooking IMQ after NAT on PREROUTING.\n");
965+#endif
966+#if defined(CONFIG_IMQ_BEHAVIOR_AB) || defined(CONFIG_IMQ_BEHAVIOR_BB)
967+ pr_info("\tHooking IMQ before NAT on POSTROUTING.\n");
968+#else
969+ pr_info("\tHooking IMQ after NAT on POSTROUTING.\n");
970+#endif
971+
972+ return 0;
973+}
974+
975+static void __exit imq_unhook(void)
976+{
977+ nf_unregister_hooks(imq_ops, ARRAY_SIZE(imq_ops));
978+ nf_unregister_queue_imq_handler();
979+}
980+
981+static void __exit imq_cleanup_devs(void)
982+{
983+ rtnl_link_unregister(&imq_link_ops);
984+ memset(imq_devs_cache, 0, sizeof(imq_devs_cache));
985+}
986+
987+static void __exit imq_exit_module(void)
988+{
989+ imq_unhook();
990+ imq_cleanup_devs();
991+ pr_info("IMQ driver unloaded successfully.\n");
992+}
993+
994+module_init(imq_init_module);
995+module_exit(imq_exit_module);
996+
997+module_param(numdevs, int, 0);
998+module_param(numqueues, int, 0);
999+MODULE_PARM_DESC(numdevs, "number of IMQ devices (how many imq* devices will be created)");
1000+MODULE_PARM_DESC(numqueues, "number of queues per IMQ device");
1001+MODULE_AUTHOR("http://www.linuximq.net");
1002+MODULE_DESCRIPTION("Pseudo-driver for the intermediate queue device. See http://www.linuximq.net/ for more information.");
1003+MODULE_LICENSE("GPL");
1004+MODULE_ALIAS_RTNL_LINK("imq");
1005+
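The queue selection above (imq_hash() and imq_select_queue()) maps a 32-bit hash onto a transmit queue with a multiply-shift, (u16)(((u64)hash * real_num_tx_queues) >> 32), instead of a modulo. A minimal standalone sketch of that mapping, outside the patch and purely for illustration:

#include <stdio.h>
#include <stdint.h>

/* Same arithmetic as imq_hash()/imq_select_queue(): scales the hash
 * into [0, nqueues) without a division, distributing uniformly. */
static uint16_t map_to_queue(uint32_t hash, uint32_t nqueues)
{
	return (uint16_t)(((uint64_t)hash * nqueues) >> 32);
}

int main(void)
{
	const uint32_t hashes[] = { 0x00000000u, 0x40000000u,
				    0x80000000u, 0xffffffffu };
	unsigned int i;

	for (i = 0; i < 4; i++)
		printf("hash 0x%08x -> queue %u of 4\n",
		       (unsigned int)hashes[i],
		       (unsigned int)map_to_queue(hashes[i], 4));
	return 0;
}

The driver uses this trick twice: on skb->sk->sk_hash in imq_select_queue() and on the packet-derived jhash in imq_hash().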
1006diff -ruN linux-3.10.27/drivers/net/Kconfig linux-3.10.27-imq/drivers/net/Kconfig
1007--- linux-3.10.27/drivers/net/Kconfig 2014-01-16 00:29:14.000000000 +0100
1008+++ linux-3.10.27-imq/drivers/net/Kconfig 2014-01-18 10:19:59.341342885 +0100
1009@@ -207,6 +207,125 @@
1010 depends on RIONET
1011 default "128"
1012
1013+config IMQ
1014+ tristate "IMQ (intermediate queueing device) support"
1015+ depends on NETDEVICES && NETFILTER
1016+ ---help---
1017+ The IMQ device(s) is used as a placeholder for QoS queueing
1018+ disciplines. Every packet entering/leaving the IP stack can be
1019+ directed through the IMQ device where it's enqueued/dequeued to the
1020+ attached qdisc. This allows you to treat network devices as classes
1021+ and distribute bandwidth among them. Iptables is used to specify
1022+ through which IMQ device, if any, packets travel.
1023+
1024+ More information at: http://www.linuximq.net/
1025+
1026+ To compile this driver as a module, choose M here: the module
1027+ will be called imq. If unsure, say N.
1028+
1029+choice
1030+ prompt "IMQ behavior (PRE/POSTROUTING)"
1031+ depends on IMQ
1032+ default IMQ_BEHAVIOR_AB
1033+ help
1034+ This setting defines how IMQ behaves with respect to its
1035+ hooking in PREROUTING and POSTROUTING.
1036+
1037+ IMQ can work in any of the following ways:
1038+
1039+ PREROUTING | POSTROUTING
1040+ -----------------|-------------------
1041+ #1 After NAT | After NAT
1042+ #2 After NAT | Before NAT
1043+ #3 Before NAT | After NAT
1044+ #4 Before NAT | Before NAT
1045+
1046+ The default behavior is to hook after NAT on PREROUTING
1047+ and before NAT on POSTROUTING (#2, matching IMQ_BEHAVIOR_AB).
1048+
1049+ These settings are especially useful when trying to use IMQ
1050+ to shape NATed clients.
1051+
1052+ More information can be found at: www.linuximq.net
1053+
1054+ If unsure, leave the default settings alone.
1055+
1056+config IMQ_BEHAVIOR_AA
1057+ bool "IMQ AA"
1058+ help
1059+ This setting defines how IMQ behaves with respect to its
1060+ hooking in PREROUTING and POSTROUTING.
1061+
1062+ Choosing this option will make IMQ hook like this:
1063+
1064+ PREROUTING: After NAT
1065+ POSTROUTING: After NAT
1066+
1067+ More information can be found at: www.linuximq.net
1068+
1069+ If unsure, leave the default settings alone.
1070+
1071+config IMQ_BEHAVIOR_AB
1072+ bool "IMQ AB"
1073+ help
1074+ This setting defines how IMQ behaves with respect to its
1075+ hooking in PREROUTING and POSTROUTING.
1076+
1077+ Choosing this option will make IMQ hook like this:
1078+
1079+ PREROUTING: After NAT
1080+ POSTROUTING: Before NAT
1081+
1082+ More information can be found at: www.linuximq.net
1083+
1084+ If unsure, leave the default settings alone.
1085+
1086+config IMQ_BEHAVIOR_BA
1087+ bool "IMQ BA"
1088+ help
1089+ This setting defines how IMQ behaves with respect to its
1090+ hooking in PREROUTING and POSTROUTING.
1091+
1092+ Choosing this option will make IMQ hook like this:
1093+
1094+ PREROUTING: Before NAT
1095+ POSTROUTING: After NAT
1096+
1097+ More information can be found at: www.linuximq.net
1098+
1099+ If unsure, leave the default settings alone.
1100+
1101+config IMQ_BEHAVIOR_BB
1102+ bool "IMQ BB"
1103+ help
1104+ This setting defines how IMQ behaves with respect to its
1105+ hooking in PREROUTING and POSTROUTING.
1106+
1107+ Choosing this option will make IMQ hook like this:
1108+
1109+ PREROUTING: Before NAT
1110+ POSTROUTING: Before NAT
1111+
1112+ More information can be found at: www.linuximq.net
1113+
1114+ If unsure, leave the default settings alone.
1115+
1116+endchoice
1117+
1118+config IMQ_NUM_DEVS
1119+ int "Number of IMQ devices"
1120+ range 2 16
1121+ depends on IMQ
1122+ default "16"
1123+ help
1124+ This setting defines how many IMQ devices will be created.
1125+
1126+ The default value is 16.
1127+
1128+ More information can be found at: www.linuximq.net
1129+
1130+ If unsure, leave the default settings alone.
1131+
1132 config TUN
1133 tristate "Universal TUN/TAP device driver support"
1134 select CRC32
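A plausible .config fragment for the IMQ options added above (illustrative values; the behavior and device count are site decisions, not mandated by the patch):

CONFIG_IMQ=m
CONFIG_IMQ_BEHAVIOR_AB=y
CONFIG_IMQ_NUM_DEVS=16

When IMQ is built as a module, the device and queue counts can also be set at load time via the module parameters declared in imq.c, e.g. modprobe imq numdevs=2 numqueues=4.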
1135diff -ruN linux-3.10.27/drivers/net/Makefile linux-3.10.27-imq/drivers/net/Makefile
1136--- linux-3.10.27/drivers/net/Makefile 2014-01-16 00:29:14.000000000 +0100
1137+++ linux-3.10.27-imq/drivers/net/Makefile 2014-01-18 10:19:59.341342885 +0100
1138@@ -9,6 +9,7 @@
1139 obj-$(CONFIG_DUMMY) += dummy.o
1140 obj-$(CONFIG_EQUALIZER) += eql.o
1141 obj-$(CONFIG_IFB) += ifb.o
1142+obj-$(CONFIG_IMQ) += imq.o
1143 obj-$(CONFIG_MACVLAN) += macvlan.o
1144 obj-$(CONFIG_MACVTAP) += macvtap.o
1145 obj-$(CONFIG_MII) += mii.o
1146diff -ruN linux-3.10.27/include/linux/imq.h linux-3.10.27-imq/include/linux/imq.h
1147--- linux-3.10.27/include/linux/imq.h 1970-01-01 01:00:00.000000000 +0100
1148+++ linux-3.10.27-imq/include/linux/imq.h 2014-01-18 10:19:59.342342913 +0100
1149@@ -0,0 +1,13 @@
1150+#ifndef _IMQ_H
1151+#define _IMQ_H
1152+
1153+/* IFMASK (16 device indexes, 0 to 15) and flag(s) fit in 5 bits */
1154+#define IMQ_F_BITS 5
1155+
1156+#define IMQ_F_IFMASK 0x0f
1157+#define IMQ_F_ENQUEUE 0x10
1158+
1159+#define IMQ_MAX_DEVS (IMQ_F_IFMASK + 1)
1160+
1161+#endif /* _IMQ_H */
1162+
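To sketch how these masks are consumed: the companion iptables IMQ target (shipped separately from this kernel patch) is expected to store xt_imq_info.todev in the low bits of skb->imq_flags and set IMQ_F_ENQUEUE; imq_nf_hook() then returns NF_IMQ_QUEUE and imq_nf_queue() recovers the device index with IMQ_F_IFMASK. A userspace-style illustration of that encoding:

#include <stdio.h>

#define IMQ_F_IFMASK	0x0f
#define IMQ_F_ENQUEUE	0x10

int main(void)
{
	unsigned int todev = 3;	/* hypothetical xt_imq_info.todev: imq3 */
	unsigned char imq_flags = (todev & IMQ_F_IFMASK) | IMQ_F_ENQUEUE;

	printf("imq_flags = 0x%02x -> device %u, enqueue = %d\n",
	       imq_flags, (unsigned int)(imq_flags & IMQ_F_IFMASK),
	       !!(imq_flags & IMQ_F_ENQUEUE));
	return 0;
}

With the matching userspace patch a typical rule looks like: iptables -t mangle -A PREROUTING -i ppp0 -j IMQ --todev 0.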
1163diff -ruN linux-3.10.27/include/linux/netfilter/xt_IMQ.h linux-3.10.27-imq/include/linux/netfilter/xt_IMQ.h
1164--- linux-3.10.27/include/linux/netfilter/xt_IMQ.h 1970-01-01 01:00:00.000000000 +0100
1165+++ linux-3.10.27-imq/include/linux/netfilter/xt_IMQ.h 2014-01-18 10:19:59.342342913 +0100
1166@@ -0,0 +1,9 @@
1167+#ifndef _XT_IMQ_H
1168+#define _XT_IMQ_H
1169+
1170+struct xt_imq_info {
1171+ unsigned int todev; /* target imq device */
1172+};
1173+
1174+#endif /* _XT_IMQ_H */
1175+
1176diff -ruN linux-3.10.27/include/linux/netfilter_ipv4/ipt_IMQ.h linux-3.10.27-imq/include/linux/netfilter_ipv4/ipt_IMQ.h
1177--- linux-3.10.27/include/linux/netfilter_ipv4/ipt_IMQ.h 1970-01-01 01:00:00.000000000 +0100
1178+++ linux-3.10.27-imq/include/linux/netfilter_ipv4/ipt_IMQ.h 2014-01-18 10:19:59.343342933 +0100
1179@@ -0,0 +1,10 @@
1180+#ifndef _IPT_IMQ_H
1181+#define _IPT_IMQ_H
1182+
1183+/* Backwards compatibility for old userspace */
1184+#include <linux/netfilter/xt_IMQ.h>
1185+
1186+#define ipt_imq_info xt_imq_info
1187+
1188+#endif /* _IPT_IMQ_H */
1189+
1190diff -ruN linux-3.10.27/include/linux/netfilter_ipv6/ip6t_IMQ.h linux-3.10.27-imq/include/linux/netfilter_ipv6/ip6t_IMQ.h
1191--- linux-3.10.27/include/linux/netfilter_ipv6/ip6t_IMQ.h 1970-01-01 01:00:00.000000000 +0100
1192+++ linux-3.10.27-imq/include/linux/netfilter_ipv6/ip6t_IMQ.h 2014-01-18 10:19:59.343342933 +0100
1193@@ -0,0 +1,10 @@
1194+#ifndef _IP6T_IMQ_H
1195+#define _IP6T_IMQ_H
1196+
1197+/* Backwards compatibility for old userspace */
1198+#include <linux/netfilter/xt_IMQ.h>
1199+
1200+#define ip6t_imq_info xt_imq_info
1201+
1202+#endif /* _IP6T_IMQ_H */
1203+
1204diff -ruN linux-3.10.27/include/linux/skbuff.h linux-3.10.27-imq/include/linux/skbuff.h
1205--- linux-3.10.27/include/linux/skbuff.h 2014-01-16 00:29:14.000000000 +0100
1206+++ linux-3.10.27-imq/include/linux/skbuff.h 2014-01-18 10:18:22.220271201 +0100
1207@@ -33,6 +33,9 @@
1208 #include <linux/dma-mapping.h>
1209 #include <linux/netdev_features.h>
1210 #include <net/flow_keys.h>
1211+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1212+#include <linux/imq.h>
1213+#endif
1214
1215 /* Don't change this without changing skb_csum_unnecessary! */
1216 #define CHECKSUM_NONE 0
1217@@ -414,6 +417,9 @@
1218 * first. This is owned by whoever has the skb queued ATM.
1219 */
1220 char cb[48] __aligned(8);
1221+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1222+ void *cb_next;
1223+#endif
1224
1225 unsigned long _skb_refdst;
1226 #ifdef CONFIG_XFRM
1227@@ -449,6 +455,9 @@
1228 #if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE)
1229 struct nf_conntrack *nfct;
1230 #endif
1231+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1232+ struct nf_queue_entry *nf_queue_entry;
1233+#endif
1234 #ifdef CONFIG_BRIDGE_NETFILTER
1235 struct nf_bridge_info *nf_bridge;
1236 #endif
1237@@ -487,7 +496,9 @@
1238 __u8 encapsulation:1;
1239 /* 7/9 bit hole (depending on ndisc_nodetype presence) */
1240 kmemcheck_bitfield_end(flags2);
1241-
1242+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1243+ __u8 imq_flags:IMQ_F_BITS;
1244+#endif
1245 #ifdef CONFIG_NET_DMA
1246 dma_cookie_t dma_cookie;
1247 #endif
1248@@ -616,7 +627,10 @@
1249 {
1250 return (struct rtable *)skb_dst(skb);
1251 }
1252-
1253+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1254+extern int skb_save_cb(struct sk_buff *skb);
1255+extern int skb_restore_cb(struct sk_buff *skb);
1256+#endif
1257 extern void kfree_skb(struct sk_buff *skb);
1258 extern void kfree_skb_list(struct sk_buff *segs);
1259 extern void skb_tx_error(struct sk_buff *skb);
1260@@ -2735,6 +2749,10 @@
1261 nf_conntrack_get(src->nfct);
1262 dst->nfctinfo = src->nfctinfo;
1263 #endif
1264+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1265+ dst->imq_flags = src->imq_flags;
1266+ dst->nf_queue_entry = src->nf_queue_entry;
1267+#endif
1268 #ifdef CONFIG_BRIDGE_NETFILTER
1269 dst->nf_bridge = src->nf_bridge;
1270 nf_bridge_get(src->nf_bridge);
1271diff -ruN linux-3.10.27/include/net/netfilter/nf_queue.h linux-3.10.27-imq/include/net/netfilter/nf_queue.h
1272--- linux-3.10.27/include/net/netfilter/nf_queue.h 2014-01-16 00:29:14.000000000 +0100
1273+++ linux-3.10.27-imq/include/net/netfilter/nf_queue.h 2014-01-18 10:19:59.345342949 +0100
1274@@ -29,6 +29,12 @@
1275 void nf_register_queue_handler(const struct nf_queue_handler *qh);
1276 void nf_unregister_queue_handler(void);
1277 extern void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict);
1278+extern void nf_queue_entry_release_refs(struct nf_queue_entry *entry);
1279+
1280+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1281+extern void nf_register_queue_imq_handler(const struct nf_queue_handler *qh);
1282+extern void nf_unregister_queue_imq_handler(void);
1283+#endif
1284
1285 bool nf_queue_entry_get_refs(struct nf_queue_entry *entry);
1286 void nf_queue_entry_release_refs(struct nf_queue_entry *entry);
1287diff -ruN linux-3.10.27/include/uapi/linux/netfilter.h linux-3.10.27-imq/include/uapi/linux/netfilter.h
1288--- linux-3.10.27/include/uapi/linux/netfilter.h 2014-01-16 00:29:14.000000000 +0100
1289+++ linux-3.10.27-imq/include/uapi/linux/netfilter.h 2014-01-18 10:19:59.345342949 +0100
1290@@ -13,7 +13,8 @@
1291 #define NF_QUEUE 3
1292 #define NF_REPEAT 4
1293 #define NF_STOP 5
1294-#define NF_MAX_VERDICT NF_STOP
1295+#define NF_IMQ_QUEUE 6
1296+#define NF_MAX_VERDICT NF_IMQ_QUEUE
1297
1298 /* we overload the higher bits for encoding auxiliary data such as the queue
1299 * number or errno values. Not nice, but better than additional function
1300diff -ruN linux-3.10.27/net/core/dev.c linux-3.10.27-imq/net/core/dev.c
1301--- linux-3.10.27/net/core/dev.c 2014-01-16 00:29:14.000000000 +0100
1302+++ linux-3.10.27-imq/net/core/dev.c 2014-01-18 10:19:59.347342963 +0100
1303@@ -129,6 +129,9 @@
1304 #include <linux/inetdevice.h>
1305 #include <linux/cpu_rmap.h>
1306 #include <linux/static_key.h>
1307+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1308+#include <linux/imq.h>
1309+#endif
1310
1311 #include "net-sysfs.h"
1312
1313@@ -2573,7 +2576,12 @@
1314 }
1315 }
1316
1317+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1318+ if (!list_empty(&ptype_all) &&
1319+ !(skb->imq_flags & IMQ_F_ENQUEUE))
1320+#else
1321 if (!list_empty(&ptype_all))
1322+#endif
1323 dev_queue_xmit_nit(skb, dev);
1324
1325 skb_len = skb->len;
1326diff -ruN linux-3.10.27/net/core/skbuff.c linux-3.10.27-imq/net/core/skbuff.c
1327--- linux-3.10.27/net/core/skbuff.c 2014-01-16 00:29:14.000000000 +0100
1328+++ linux-3.10.27-imq/net/core/skbuff.c 2014-01-18 10:19:59.348342972 +0100
1329@@ -73,6 +73,9 @@
1330
1331 struct kmem_cache *skbuff_head_cache __read_mostly;
1332 static struct kmem_cache *skbuff_fclone_cache __read_mostly;
1333+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1334+static struct kmem_cache *skbuff_cb_store_cache __read_mostly;
1335+#endif
1336
1337 static void sock_pipe_buf_release(struct pipe_inode_info *pipe,
1338 struct pipe_buffer *buf)
1339@@ -92,6 +95,82 @@
1340 return 1;
1341 }
1342
1343+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1344+/* Control buffer save/restore for IMQ devices */
1345+struct skb_cb_table {
1346+ char cb[48] __aligned(8);
1347+ void *cb_next;
1348+ atomic_t refcnt;
1349+};
1350+
1351+static DEFINE_SPINLOCK(skb_cb_store_lock);
1352+
1353+int skb_save_cb(struct sk_buff *skb)
1354+{
1355+ struct skb_cb_table *next;
1356+
1357+ next = kmem_cache_alloc(skbuff_cb_store_cache, GFP_ATOMIC);
1358+ if (!next)
1359+ return -ENOMEM;
1360+
1361+ BUILD_BUG_ON(sizeof(skb->cb) != sizeof(next->cb));
1362+
1363+ memcpy(next->cb, skb->cb, sizeof(skb->cb));
1364+ next->cb_next = skb->cb_next;
1365+
1366+ atomic_set(&next->refcnt, 1);
1367+
1368+ skb->cb_next = next;
1369+ return 0;
1370+}
1371+EXPORT_SYMBOL(skb_save_cb);
1372+
1373+int skb_restore_cb(struct sk_buff *skb)
1374+{
1375+ struct skb_cb_table *next;
1376+
1377+ if (!skb->cb_next)
1378+ return 0;
1379+
1380+ next = skb->cb_next;
1381+
1382+ BUILD_BUG_ON(sizeof(skb->cb) != sizeof(next->cb));
1383+
1384+ memcpy(skb->cb, next->cb, sizeof(skb->cb));
1385+ skb->cb_next = next->cb_next;
1386+
1387+ spin_lock(&skb_cb_store_lock);
1388+
1389+ if (atomic_dec_and_test(&next->refcnt))
1390+ kmem_cache_free(skbuff_cb_store_cache, next);
1391+
1392+ spin_unlock(&skb_cb_store_lock);
1393+
1394+ return 0;
1395+}
1396+EXPORT_SYMBOL(skb_restore_cb);
1397+
1398+static void skb_copy_stored_cb(struct sk_buff *new, const struct sk_buff *__old)
1399+{
1400+ struct skb_cb_table *next;
1401+ struct sk_buff *old;
1402+
1403+ if (!__old->cb_next) {
1404+ new->cb_next = NULL;
1405+ return;
1406+ }
1407+
1408+ spin_lock(&skb_cb_store_lock);
1409+
1410+ old = (struct sk_buff *)__old;
1411+
1412+ next = old->cb_next;
1413+ atomic_inc(&next->refcnt);
1414+ new->cb_next = next;
1415+
1416+ spin_unlock(&skb_cb_store_lock);
1417+}
1418+#endif
1419
1420 /* Pipe buffer operations for a socket. */
1421 static const struct pipe_buf_operations sock_pipe_buf_ops = {
1422@@ -582,6 +661,28 @@
1423 WARN_ON(in_irq());
1424 skb->destructor(skb);
1425 }
1426+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1427+ /*
1428+ * This should not happen. When it does, avoid memleak by restoring
1429+ * the chain of cb-backups.
1430+ */
1431+ while (skb->cb_next != NULL) {
1432+ if (net_ratelimit())
1433+ pr_warn("IMQ: kfree_skb: skb->cb_next: %p\n",
1434+ skb->cb_next);
1435+
1436+ skb_restore_cb(skb);
1437+ }
1438+ /*
1439+ * This should not happen either, nf_queue_entry is nullified in
1440+ * imq_dev_xmit(). If we have non-NULL nf_queue_entry then we are
1441+ * leaking entry pointers, maybe memory. We don't know if this is a
1442+ * pointer to already-freed memory, or whether it should be freed here.
1443+ * If this happens we need to add refcounting etc. for nf_queue_entry.
1444+ */
1445+ if (skb->nf_queue_entry && net_ratelimit())
1446+ pr_warn("%s\n", "IMQ: kfree_skb: skb->nf_queue_entry != NULL");
1447+#endif
1448 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
1449 nf_conntrack_put(skb->nfct);
1450 #endif
1451@@ -713,6 +814,10 @@
1452 new->sp = secpath_get(old->sp);
1453 #endif
1454 memcpy(new->cb, old->cb, sizeof(old->cb));
1455+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1456+ new->cb_next = NULL;
1457+ /*skb_copy_stored_cb(new, old);*/
1458+#endif
1459 new->csum = old->csum;
1460 new->local_df = old->local_df;
1461 new->pkt_type = old->pkt_type;
1462@@ -3093,6 +3198,13 @@
1463 0,
1464 SLAB_HWCACHE_ALIGN|SLAB_PANIC,
1465 NULL);
1466+#if defined(CONFIG_IMQ) || defined(CONFIG_IMQ_MODULE)
1467+ skbuff_cb_store_cache = kmem_cache_create("skbuff_cb_store_cache",
1468+ sizeof(struct skb_cb_table),
1469+ 0,
1470+ SLAB_HWCACHE_ALIGN|SLAB_PANIC,
1471+ NULL);
1472+#endif
1473 }
1474
1475 /**
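The pair added above gives skb->cb a small refcounted backup chain: skb_save_cb() pushes a copy before the qdisc layer scribbles over the control buffer, and skb_restore_cb() pops it again in imq_dev_xmit() or imq_skb_destructor(). A self-contained userspace sketch of the same push/pop contract (simplified: no refcounting, locking or slab cache, which the kernel version needs for cloned skbs):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct cb_backup {
	char cb[48];
	struct cb_backup *next;
};

struct fake_skb {
	char cb[48];
	struct cb_backup *cb_next;
};

/* push a copy of cb onto the backup chain, like skb_save_cb() */
static int save_cb(struct fake_skb *skb)
{
	struct cb_backup *b = malloc(sizeof(*b));

	if (!b)
		return -1;
	memcpy(b->cb, skb->cb, sizeof(skb->cb));
	b->next = skb->cb_next;
	skb->cb_next = b;
	return 0;
}

/* pop one level, like skb_restore_cb() */
static void restore_cb(struct fake_skb *skb)
{
	struct cb_backup *b = skb->cb_next;

	if (!b)
		return;
	memcpy(skb->cb, b->cb, sizeof(skb->cb));
	skb->cb_next = b->next;
	free(b);
}

int main(void)
{
	struct fake_skb skb = { .cb = "netfilter state", .cb_next = NULL };

	save_cb(&skb);				/* before qdisc_enqueue_root() */
	strcpy(skb.cb, "qdisc scribble");	/* qdisc overwrites cb */
	restore_cb(&skb);			/* on dequeue in imq_dev_xmit() */
	printf("cb after restore: %s\n", skb.cb);
	return 0;
}

The refcount in the real struct skb_cb_table exists so skb_copy_stored_cb() could share one backup between an skb and its clone; note that __copy_skb_header() above deliberately sets new->cb_next = NULL instead of calling it, per the 2013/09/10 changelog entry.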
1476diff -ruN linux-3.10.27/net/core/skbuff.c.orig linux-3.10.27-imq/net/core/skbuff.c.orig
1477--- linux-3.10.27/net/core/skbuff.c.orig 1970-01-01 01:00:00.000000000 +0100
1478+++ linux-3.10.27-imq/net/core/skbuff.c.orig 2014-01-16 00:29:14.000000000 +0100
1479@@ -0,0 +1,3503 @@
1480+/*
1481+ * Routines having to do with the 'struct sk_buff' memory handlers.
1482+ *
1483+ * Authors: Alan Cox <alan@lxorguk.ukuu.org.uk>
1484+ * Florian La Roche <rzsfl@rz.uni-sb.de>
1485+ *
1486+ * Fixes:
1487+ * Alan Cox : Fixed the worst of the load
1488+ * balancer bugs.
1489+ * Dave Platt : Interrupt stacking fix.
1490+ * Richard Kooijman : Timestamp fixes.
1491+ * Alan Cox : Changed buffer format.
1492+ * Alan Cox : destructor hook for AF_UNIX etc.
1493+ * Linus Torvalds : Better skb_clone.
1494+ * Alan Cox : Added skb_copy.
1495+ * Alan Cox : Added all the changed routines Linus
1496+ * only put in the headers
1497+ * Ray VanTassle : Fixed --skb->lock in free
1498+ * Alan Cox : skb_copy copy arp field
1499+ * Andi Kleen : slabified it.
1500+ * Robert Olsson : Removed skb_head_pool
1501+ *
1502+ * NOTE:
1503+ * The __skb_ routines should be called with interrupts
1504+ * disabled, or you better be *real* sure that the operation is atomic
1505+ * with respect to whatever list is being frobbed (e.g. via lock_sock()
1506+ * or via disabling bottom half handlers, etc).
1507+ *
1508+ * This program is free software; you can redistribute it and/or
1509+ * modify it under the terms of the GNU General Public License
1510+ * as published by the Free Software Foundation; either version
1511+ * 2 of the License, or (at your option) any later version.
1512+ */
1513+
1514+/*
1515+ * The functions in this file will not compile correctly with gcc 2.4.x
1516+ */
1517+
1518+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
1519+
1520+#include <linux/module.h>
1521+#include <linux/types.h>
1522+#include <linux/kernel.h>
1523+#include <linux/kmemcheck.h>
1524+#include <linux/mm.h>
1525+#include <linux/interrupt.h>
1526+#include <linux/in.h>
1527+#include <linux/inet.h>
1528+#include <linux/slab.h>
1529+#include <linux/netdevice.h>
1530+#ifdef CONFIG_NET_CLS_ACT
1531+#include <net/pkt_sched.h>
1532+#endif
1533+#include <linux/string.h>
1534+#include <linux/skbuff.h>
1535+#include <linux/splice.h>
1536+#include <linux/cache.h>
1537+#include <linux/rtnetlink.h>
1538+#include <linux/init.h>
1539+#include <linux/scatterlist.h>
1540+#include <linux/errqueue.h>
1541+#include <linux/prefetch.h>
1542+
1543+#include <net/protocol.h>
1544+#include <net/dst.h>
1545+#include <net/sock.h>
1546+#include <net/checksum.h>
1547+#include <net/xfrm.h>
1548+
1549+#include <asm/uaccess.h>
1550+#include <trace/events/skb.h>
1551+#include <linux/highmem.h>
1552+
1553+struct kmem_cache *skbuff_head_cache __read_mostly;
1554+static struct kmem_cache *skbuff_fclone_cache __read_mostly;
1555+
1556+static void sock_pipe_buf_release(struct pipe_inode_info *pipe,
1557+ struct pipe_buffer *buf)
1558+{
1559+ put_page(buf->page);
1560+}
1561+
1562+static void sock_pipe_buf_get(struct pipe_inode_info *pipe,
1563+ struct pipe_buffer *buf)
1564+{
1565+ get_page(buf->page);
1566+}
1567+
1568+static int sock_pipe_buf_steal(struct pipe_inode_info *pipe,
1569+ struct pipe_buffer *buf)
1570+{
1571+ return 1;
1572+}
1573+
1574+
1575+/* Pipe buffer operations for a socket. */
1576+static const struct pipe_buf_operations sock_pipe_buf_ops = {
1577+ .can_merge = 0,
1578+ .map = generic_pipe_buf_map,
1579+ .unmap = generic_pipe_buf_unmap,
1580+ .confirm = generic_pipe_buf_confirm,
1581+ .release = sock_pipe_buf_release,
1582+ .steal = sock_pipe_buf_steal,
1583+ .get = sock_pipe_buf_get,
1584+};
1585+
1586+/**
1587+ * skb_panic - private function for out-of-line support
1588+ * @skb: buffer
1589+ * @sz: size
1590+ * @addr: address
1591+ * @msg: skb_over_panic or skb_under_panic
1592+ *
1593+ * Out-of-line support for skb_put() and skb_push().
1594+ * Called via the wrapper skb_over_panic() or skb_under_panic().
1595+ * Keep out of line to prevent kernel bloat.
1596+ * __builtin_return_address is not used because it is not always reliable.
1597+ */
1598+static void skb_panic(struct sk_buff *skb, unsigned int sz, void *addr,
1599+ const char msg[])
1600+{
1601+ pr_emerg("%s: text:%p len:%d put:%d head:%p data:%p tail:%#lx end:%#lx dev:%s\n",
1602+ msg, addr, skb->len, sz, skb->head, skb->data,
1603+ (unsigned long)skb->tail, (unsigned long)skb->end,
1604+ skb->dev ? skb->dev->name : "<NULL>");
1605+ BUG();
1606+}
1607+
1608+static void skb_over_panic(struct sk_buff *skb, unsigned int sz, void *addr)
1609+{
1610+ skb_panic(skb, sz, addr, __func__);
1611+}
1612+
1613+static void skb_under_panic(struct sk_buff *skb, unsigned int sz, void *addr)
1614+{
1615+ skb_panic(skb, sz, addr, __func__);
1616+}
1617+
1618+/*
1619+ * kmalloc_reserve is a wrapper around kmalloc_node_track_caller that tells
1620+ * the caller if emergency pfmemalloc reserves are being used. If it is and
1621+ * the socket is later found to be SOCK_MEMALLOC then PFMEMALLOC reserves
1622+ * may be used. Otherwise, the packet data may be discarded until enough
1623+ * memory is free
1624+ */
1625+#define kmalloc_reserve(size, gfp, node, pfmemalloc) \
1626+ __kmalloc_reserve(size, gfp, node, _RET_IP_, pfmemalloc)
1627+
1628+static void *__kmalloc_reserve(size_t size, gfp_t flags, int node,
1629+ unsigned long ip, bool *pfmemalloc)
1630+{
1631+ void *obj;
1632+ bool ret_pfmemalloc = false;
1633+
1634+ /*
1635+ * Try a regular allocation, when that fails and we're not entitled
1636+ * to the reserves, fail.
1637+ */
1638+ obj = kmalloc_node_track_caller(size,
1639+ flags | __GFP_NOMEMALLOC | __GFP_NOWARN,
1640+ node);
1641+ if (obj || !(gfp_pfmemalloc_allowed(flags)))
1642+ goto out;
1643+
1644+ /* Try again but now we are using pfmemalloc reserves */
1645+ ret_pfmemalloc = true;
1646+ obj = kmalloc_node_track_caller(size, flags, node);
1647+
1648+out:
1649+ if (pfmemalloc)
1650+ *pfmemalloc = ret_pfmemalloc;
1651+
1652+ return obj;
1653+}
1654+
1655+/* Allocate a new skbuff. We do this ourselves so we can fill in a few
1656+ * 'private' fields and also do memory statistics to find all the
1657+ * [BEEP] leaks.
1658+ *
1659+ */
1660+
1661+struct sk_buff *__alloc_skb_head(gfp_t gfp_mask, int node)
1662+{
1663+ struct sk_buff *skb;
1664+
1665+ /* Get the HEAD */
1666+ skb = kmem_cache_alloc_node(skbuff_head_cache,
1667+ gfp_mask & ~__GFP_DMA, node);
1668+ if (!skb)
1669+ goto out;
1670+
1671+ /*
1672+ * Only clear those fields we need to clear, not those that we will
1673+ * actually initialise below. Hence, don't put any more fields after
1674+ * the tail pointer in struct sk_buff!
1675+ */
1676+ memset(skb, 0, offsetof(struct sk_buff, tail));
1677+ skb->head = NULL;
1678+ skb->truesize = sizeof(struct sk_buff);
1679+ atomic_set(&skb->users, 1);
1680+
1681+#ifdef NET_SKBUFF_DATA_USES_OFFSET
1682+ skb->mac_header = ~0U;
1683+#endif
1684+out:
1685+ return skb;
1686+}
1687+
1688+/**
1689+ * __alloc_skb - allocate a network buffer
1690+ * @size: size to allocate
1691+ * @gfp_mask: allocation mask
1692+ * @flags: If SKB_ALLOC_FCLONE is set, allocate from fclone cache
1693+ * instead of head cache and allocate a cloned (child) skb.
1694+ * If SKB_ALLOC_RX is set, __GFP_MEMALLOC will be used for
1695+ * allocations in case the data is required for writeback
1696+ * @node: numa node to allocate memory on
1697+ *
1698+ * Allocate a new &sk_buff. The returned buffer has no headroom and a
1699+ * tail room of at least size bytes. The object has a reference count
1700+ * of one. The return is the buffer. On a failure the return is %NULL.
1701+ *
1702+ * Buffers may only be allocated from interrupts using a @gfp_mask of
1703+ * %GFP_ATOMIC.
1704+ */
1705+struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
1706+ int flags, int node)
1707+{
1708+ struct kmem_cache *cache;
1709+ struct skb_shared_info *shinfo;
1710+ struct sk_buff *skb;
1711+ u8 *data;
1712+ bool pfmemalloc;
1713+
1714+ cache = (flags & SKB_ALLOC_FCLONE)
1715+ ? skbuff_fclone_cache : skbuff_head_cache;
1716+
1717+ if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
1718+ gfp_mask |= __GFP_MEMALLOC;
1719+
1720+ /* Get the HEAD */
1721+ skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
1722+ if (!skb)
1723+ goto out;
1724+ prefetchw(skb);
1725+
1726+ /* We do our best to align skb_shared_info on a separate cache
1727+ * line. It usually works because kmalloc(X > SMP_CACHE_BYTES) gives
1728+ * aligned memory blocks, unless SLUB/SLAB debug is enabled.
1729+ * Both skb->head and skb_shared_info are cache line aligned.
1730+ */
1731+ size = SKB_DATA_ALIGN(size);
1732+ size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
1733+ data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
1734+ if (!data)
1735+ goto nodata;
1736+ /* kmalloc(size) might give us more room than requested.
1737+ * Put skb_shared_info exactly at the end of allocated zone,
1738+ * to allow max possible filling before reallocation.
1739+ */
1740+ size = SKB_WITH_OVERHEAD(ksize(data));
1741+ prefetchw(data + size);
1742+
1743+ /*
1744+ * Only clear those fields we need to clear, not those that we will
1745+ * actually initialise below. Hence, don't put any more fields after
1746+ * the tail pointer in struct sk_buff!
1747+ */
1748+ memset(skb, 0, offsetof(struct sk_buff, tail));
1749+ /* Account for allocated memory : skb + skb->head */
1750+ skb->truesize = SKB_TRUESIZE(size);
1751+ skb->pfmemalloc = pfmemalloc;
1752+ atomic_set(&skb->users, 1);
1753+ skb->head = data;
1754+ skb->data = data;
1755+ skb_reset_tail_pointer(skb);
1756+ skb->end = skb->tail + size;
1757+#ifdef NET_SKBUFF_DATA_USES_OFFSET
1758+ skb->mac_header = ~0U;
1759+ skb->transport_header = ~0U;
1760+#endif
1761+
1762+ /* make sure we initialize shinfo sequentially */
1763+ shinfo = skb_shinfo(skb);
1764+ memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
1765+ atomic_set(&shinfo->dataref, 1);
1766+ kmemcheck_annotate_variable(shinfo->destructor_arg);
1767+
1768+ if (flags & SKB_ALLOC_FCLONE) {
1769+ struct sk_buff *child = skb + 1;
1770+ atomic_t *fclone_ref = (atomic_t *) (child + 1);
1771+
1772+ kmemcheck_annotate_bitfield(child, flags1);
1773+ kmemcheck_annotate_bitfield(child, flags2);
1774+ skb->fclone = SKB_FCLONE_ORIG;
1775+ atomic_set(fclone_ref, 1);
1776+
1777+ child->fclone = SKB_FCLONE_UNAVAILABLE;
1778+ child->pfmemalloc = pfmemalloc;
1779+ }
1780+out:
1781+ return skb;
1782+nodata:
1783+ kmem_cache_free(cache, skb);
1784+ skb = NULL;
1785+ goto out;
1786+}
1787+EXPORT_SYMBOL(__alloc_skb);
1788+
1789+/**
1790+ * build_skb - build a network buffer
1791+ * @data: data buffer provided by caller
1792+ * @frag_size: size of fragment, or 0 if head was kmalloced
1793+ *
1794+ * Allocate a new &sk_buff. Caller provides space holding head and
1795+ * skb_shared_info. @data must have been allocated by kmalloc()
1796+ * The return is the new skb buffer.
1797+ * On a failure the return is %NULL, and @data is not freed.
1798+ * Notes :
1799+ * Before IO, driver allocates only data buffer where NIC put incoming frame
1800+ * Driver should add room at head (NET_SKB_PAD) and
1801+ * MUST add room at tail (SKB_DATA_ALIGN(skb_shared_info))
1802+ * After IO, driver calls build_skb(), to allocate sk_buff and populate it
1803+ * before giving packet to stack.
1804+ * RX rings only contains data buffers, not full skbs.
1805+ */
1806+struct sk_buff *build_skb(void *data, unsigned int frag_size)
1807+{
1808+ struct skb_shared_info *shinfo;
1809+ struct sk_buff *skb;
1810+ unsigned int size = frag_size ? : ksize(data);
1811+
1812+ skb = kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC);
1813+ if (!skb)
1814+ return NULL;
1815+
1816+ size -= SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
1817+
1818+ memset(skb, 0, offsetof(struct sk_buff, tail));
1819+ skb->truesize = SKB_TRUESIZE(size);
1820+ skb->head_frag = frag_size != 0;
1821+ atomic_set(&skb->users, 1);
1822+ skb->head = data;
1823+ skb->data = data;
1824+ skb_reset_tail_pointer(skb);
1825+ skb->end = skb->tail + size;
1826+#ifdef NET_SKBUFF_DATA_USES_OFFSET
1827+ skb->mac_header = ~0U;
1828+ skb->transport_header = ~0U;
1829+#endif
1830+
1831+ /* make sure we initialize shinfo sequentially */
1832+ shinfo = skb_shinfo(skb);
1833+ memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
1834+ atomic_set(&shinfo->dataref, 1);
1835+ kmemcheck_annotate_variable(shinfo->destructor_arg);
1836+
1837+ return skb;
1838+}
1839+EXPORT_SYMBOL(build_skb);
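To see how the contract above plays out, here is a minimal usage sketch, assuming <linux/skbuff.h>, <linux/slab.h> and <linux/string.h> are available; demo_build_rx_skb() is a hypothetical helper that copies a received frame into a kmalloc()ed head:

	/* Hypothetical: wrap a caller-owned, kmalloc()ed buffer in an skb. */
	static struct sk_buff *demo_build_rx_skb(const void *frame, unsigned int len)
	{
		unsigned int size = SKB_DATA_ALIGN(NET_SKB_PAD + len) +
				    SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
		struct sk_buff *skb;
		void *buf;

		buf = kmalloc(size, GFP_ATOMIC);
		if (!buf)
			return NULL;
		memcpy(buf + NET_SKB_PAD, frame, len);

		skb = build_skb(buf, 0);	/* frag_size == 0: head was kmalloced */
		if (!skb) {
			kfree(buf);		/* build_skb() does not free @data */
			return NULL;
		}
		skb_reserve(skb, NET_SKB_PAD);	/* headroom, as the notes recommend */
		skb_put(skb, len);		/* mark the payload as used data */
		return skb;
	}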
1840+
1841+struct netdev_alloc_cache {
1842+ struct page_frag frag;
1843+ /* We maintain a pagecount bias, so that we don't dirty the cache line
1844+ * containing page->_count every time we allocate a fragment.
1845+ */
1846+ unsigned int pagecnt_bias;
1847+};
1848+static DEFINE_PER_CPU(struct netdev_alloc_cache, netdev_alloc_cache);
1849+
1850+static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
1851+{
1852+ struct netdev_alloc_cache *nc;
1853+ void *data = NULL;
1854+ int order;
1855+ unsigned long flags;
1856+
1857+ local_irq_save(flags);
1858+ nc = &__get_cpu_var(netdev_alloc_cache);
1859+ if (unlikely(!nc->frag.page)) {
1860+refill:
1861+ for (order = NETDEV_FRAG_PAGE_MAX_ORDER; ;) {
1862+ gfp_t gfp = gfp_mask;
1863+
1864+ if (order)
1865+ gfp |= __GFP_COMP | __GFP_NOWARN;
1866+ nc->frag.page = alloc_pages(gfp, order);
1867+ if (likely(nc->frag.page))
1868+ break;
1869+ if (--order < 0)
1870+ goto end;
1871+ }
1872+ nc->frag.size = PAGE_SIZE << order;
1873+recycle:
1874+ atomic_set(&nc->frag.page->_count, NETDEV_PAGECNT_MAX_BIAS);
1875+ nc->pagecnt_bias = NETDEV_PAGECNT_MAX_BIAS;
1876+ nc->frag.offset = 0;
1877+ }
1878+
1879+ if (nc->frag.offset + fragsz > nc->frag.size) {
1880+ /* avoid unnecessary locked operations if possible */
1881+ if ((atomic_read(&nc->frag.page->_count) == nc->pagecnt_bias) ||
1882+ atomic_sub_and_test(nc->pagecnt_bias, &nc->frag.page->_count))
1883+ goto recycle;
1884+ goto refill;
1885+ }
1886+
1887+ data = page_address(nc->frag.page) + nc->frag.offset;
1888+ nc->frag.offset += fragsz;
1889+ nc->pagecnt_bias--;
1890+end:
1891+ local_irq_restore(flags);
1892+ return data;
1893+}
1894+
1895+/**
1896+ * netdev_alloc_frag - allocate a page fragment
1897+ * @fragsz: fragment size
1898+ *
1899+ * Allocates a frag from a page for receive buffer.
1900+ * Uses GFP_ATOMIC allocations.
1901+ */
1902+void *netdev_alloc_frag(unsigned int fragsz)
1903+{
1904+ return __netdev_alloc_frag(fragsz, GFP_ATOMIC | __GFP_COLD);
1905+}
1906+EXPORT_SYMBOL(netdev_alloc_frag);
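__netdev_alloc_skb() below combines this allocator with build_skb(); the same pattern can be sketched in isolation (demo_frag_skb() is a hypothetical helper; softirq context assumed):

	static struct sk_buff *demo_frag_skb(unsigned int len)
	{
		unsigned int fragsz = SKB_DATA_ALIGN(len + NET_SKB_PAD) +
				      SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
		void *data = netdev_alloc_frag(fragsz);
		struct sk_buff *skb;

		if (!data)
			return NULL;
		skb = build_skb(data, fragsz);
		if (unlikely(!skb))	/* frags are page-backed: put_page(), */
			put_page(virt_to_head_page(data));	/* not kfree() */
		return skb;
	}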
1907+
1908+/**
1909+ * __netdev_alloc_skb - allocate an skbuff for rx on a specific device
1910+ * @dev: network device to receive on
1911+ * @length: length to allocate
1912+ * @gfp_mask: get_free_pages mask, passed to alloc_skb
1913+ *
1914+ * Allocate a new &sk_buff and assign it a usage count of one. The
1915+ * buffer has unspecified headroom built in. Users should allocate
1916+ * the headroom they think they need without accounting for the
1917+ * built in space. The built in space is used for optimisations.
1918+ *
1919+ * %NULL is returned if there is no free memory.
1920+ */
1921+struct sk_buff *__netdev_alloc_skb(struct net_device *dev,
1922+ unsigned int length, gfp_t gfp_mask)
1923+{
1924+ struct sk_buff *skb = NULL;
1925+ unsigned int fragsz = SKB_DATA_ALIGN(length + NET_SKB_PAD) +
1926+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
1927+
1928+ if (fragsz <= PAGE_SIZE && !(gfp_mask & (__GFP_WAIT | GFP_DMA))) {
1929+ void *data;
1930+
1931+ if (sk_memalloc_socks())
1932+ gfp_mask |= __GFP_MEMALLOC;
1933+
1934+ data = __netdev_alloc_frag(fragsz, gfp_mask);
1935+
1936+ if (likely(data)) {
1937+ skb = build_skb(data, fragsz);
1938+ if (unlikely(!skb))
1939+ put_page(virt_to_head_page(data));
1940+ }
1941+ } else {
1942+ skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask,
1943+ SKB_ALLOC_RX, NUMA_NO_NODE);
1944+ }
1945+ if (likely(skb)) {
1946+ skb_reserve(skb, NET_SKB_PAD);
1947+ skb->dev = dev;
1948+ }
1949+ return skb;
1950+}
1951+EXPORT_SYMBOL(__netdev_alloc_skb);
1952+
1953+void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
1954+ int size, unsigned int truesize)
1955+{
1956+ skb_fill_page_desc(skb, i, page, off, size);
1957+ skb->len += size;
1958+ skb->data_len += size;
1959+ skb->truesize += truesize;
1960+}
1961+EXPORT_SYMBOL(skb_add_rx_frag);
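A short illustrative caller, assuming the driver wants to keep using the page afterwards and therefore gives the skb its own reference (demo_attach_page() is hypothetical):

	static void demo_attach_page(struct sk_buff *skb, struct page *page,
				     unsigned int datalen)
	{
		get_page(page);		/* the skb takes its own page reference */
		/* frag slot 0, offset 0; truesize accounts for the whole page */
		skb_add_rx_frag(skb, 0, page, 0, datalen, PAGE_SIZE);
	}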
1962+
1963+static void skb_drop_list(struct sk_buff **listp)
1964+{
1965+ kfree_skb_list(*listp);
1966+ *listp = NULL;
1967+}
1968+
1969+static inline void skb_drop_fraglist(struct sk_buff *skb)
1970+{
1971+ skb_drop_list(&skb_shinfo(skb)->frag_list);
1972+}
1973+
1974+static void skb_clone_fraglist(struct sk_buff *skb)
1975+{
1976+ struct sk_buff *list;
1977+
1978+ skb_walk_frags(skb, list)
1979+ skb_get(list);
1980+}
1981+
1982+static void skb_free_head(struct sk_buff *skb)
1983+{
1984+ if (skb->head_frag)
1985+ put_page(virt_to_head_page(skb->head));
1986+ else
1987+ kfree(skb->head);
1988+}
1989+
1990+static void skb_release_data(struct sk_buff *skb)
1991+{
1992+ if (!skb->cloned ||
1993+ !atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
1994+ &skb_shinfo(skb)->dataref)) {
1995+ if (skb_shinfo(skb)->nr_frags) {
1996+ int i;
1997+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
1998+ skb_frag_unref(skb, i);
1999+ }
2000+
2001+ /*
2002+ * If the skb buffer is from userspace, we need to notify the
2003+ * caller that the lower device's DMA has completed;
2004+ */
2005+ if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
2006+ struct ubuf_info *uarg;
2007+
2008+ uarg = skb_shinfo(skb)->destructor_arg;
2009+ if (uarg->callback)
2010+ uarg->callback(uarg, true);
2011+ }
2012+
2013+ if (skb_has_frag_list(skb))
2014+ skb_drop_fraglist(skb);
2015+
2016+ skb_free_head(skb);
2017+ }
2018+}
2019+
2020+/*
2021+ * Free the skbuff's memory without cleaning its state.
2022+ */
2023+static void kfree_skbmem(struct sk_buff *skb)
2024+{
2025+ struct sk_buff *other;
2026+ atomic_t *fclone_ref;
2027+
2028+ switch (skb->fclone) {
2029+ case SKB_FCLONE_UNAVAILABLE:
2030+ kmem_cache_free(skbuff_head_cache, skb);
2031+ break;
2032+
2033+ case SKB_FCLONE_ORIG:
2034+ fclone_ref = (atomic_t *) (skb + 2);
2035+ if (atomic_dec_and_test(fclone_ref))
2036+ kmem_cache_free(skbuff_fclone_cache, skb);
2037+ break;
2038+
2039+ case SKB_FCLONE_CLONE:
2040+ fclone_ref = (atomic_t *) (skb + 1);
2041+ other = skb - 1;
2042+
2043+ /* The clone portion is available for
2044+ * fast-cloning again.
2045+ */
2046+ skb->fclone = SKB_FCLONE_UNAVAILABLE;
2047+
2048+ if (atomic_dec_and_test(fclone_ref))
2049+ kmem_cache_free(skbuff_fclone_cache, other);
2050+ break;
2051+ }
2052+}
2053+
2054+static void skb_release_head_state(struct sk_buff *skb)
2055+{
2056+ skb_dst_drop(skb);
2057+#ifdef CONFIG_XFRM
2058+ secpath_put(skb->sp);
2059+#endif
2060+ if (skb->destructor) {
2061+ WARN_ON(in_irq());
2062+ skb->destructor(skb);
2063+ }
2064+#if IS_ENABLED(CONFIG_NF_CONNTRACK)
2065+ nf_conntrack_put(skb->nfct);
2066+#endif
2067+#ifdef CONFIG_BRIDGE_NETFILTER
2068+ nf_bridge_put(skb->nf_bridge);
2069+#endif
2070+/* XXX: IS this still necessary? - JHS */
2071+#ifdef CONFIG_NET_SCHED
2072+ skb->tc_index = 0;
2073+#ifdef CONFIG_NET_CLS_ACT
2074+ skb->tc_verd = 0;
2075+#endif
2076+#endif
2077+}
2078+
2079+/* Free everything but the sk_buff shell. */
2080+static void skb_release_all(struct sk_buff *skb)
2081+{
2082+ skb_release_head_state(skb);
2083+ if (likely(skb->head))
2084+ skb_release_data(skb);
2085+}
2086+
2087+/**
2088+ * __kfree_skb - private function
2089+ * @skb: buffer
2090+ *
2091+ * Free an sk_buff. Release anything attached to the buffer.
2092+ * Clean the state. This is an internal helper function. Users should
2093+ * always call kfree_skb().
2094+ */
2095+
2096+void __kfree_skb(struct sk_buff *skb)
2097+{
2098+ skb_release_all(skb);
2099+ kfree_skbmem(skb);
2100+}
2101+EXPORT_SYMBOL(__kfree_skb);
2102+
2103+/**
2104+ * kfree_skb - free an sk_buff
2105+ * @skb: buffer to free
2106+ *
2107+ * Drop a reference to the buffer and free it if the usage count has
2108+ * hit zero.
2109+ */
2110+void kfree_skb(struct sk_buff *skb)
2111+{
2112+ if (unlikely(!skb))
2113+ return;
2114+ if (likely(atomic_read(&skb->users) == 1))
2115+ smp_rmb();
2116+ else if (likely(!atomic_dec_and_test(&skb->users)))
2117+ return;
2118+ trace_kfree_skb(skb, __builtin_return_address(0));
2119+ __kfree_skb(skb);
2120+}
2121+EXPORT_SYMBOL(kfree_skb);
2122+
2123+void kfree_skb_list(struct sk_buff *segs)
2124+{
2125+ while (segs) {
2126+ struct sk_buff *next = segs->next;
2127+
2128+ kfree_skb(segs);
2129+ segs = next;
2130+ }
2131+}
2132+EXPORT_SYMBOL(kfree_skb_list);
2133+
2134+/**
2135+ * skb_tx_error - report an sk_buff xmit error
2136+ * @skb: buffer that triggered an error
2137+ *
2138+ * Report xmit error if a device callback is tracking this skb.
2139+ * skb must be freed afterwards.
2140+ */
2141+void skb_tx_error(struct sk_buff *skb)
2142+{
2143+ if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
2144+ struct ubuf_info *uarg;
2145+
2146+ uarg = skb_shinfo(skb)->destructor_arg;
2147+ if (uarg->callback)
2148+ uarg->callback(uarg, false);
2149+ skb_shinfo(skb)->tx_flags &= ~SKBTX_DEV_ZEROCOPY;
2150+ }
2151+}
2152+EXPORT_SYMBOL(skb_tx_error);
2153+
2154+/**
2155+ * consume_skb - free an skbuff
2156+ * @skb: buffer to free
2157+ *
2158+ * Drop a reference to the buffer and free it if the usage count has hit
2159+ * zero. Functions identically to kfree_skb, but kfree_skb assumes that
2160+ * the frame is being dropped after a failure and notes that.
2161+ */
2162+void consume_skb(struct sk_buff *skb)
2163+{
2164+ if (unlikely(!skb))
2165+ return;
2166+ if (likely(atomic_read(&skb->users) == 1))
2167+ smp_rmb();
2168+ else if (likely(!atomic_dec_and_test(&skb->users)))
2169+ return;
2170+ trace_consume_skb(skb);
2171+ __kfree_skb(skb);
2172+}
2173+EXPORT_SYMBOL(consume_skb);
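The drop/consume split only changes which tracepoint fires (trace_kfree_skb vs trace_consume_skb above); the memory handling is identical. A sketch of the convention in a hypothetical TX-completion handler:

	static void demo_tx_complete(struct sk_buff *skb, bool xmit_ok)
	{
		if (xmit_ok)
			consume_skb(skb);	/* delivered: not a drop event */
		else
			kfree_skb(skb);		/* error path: recorded as a drop */
	}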
2174+
2175+static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
2176+{
2177+ new->tstamp = old->tstamp;
2178+ new->dev = old->dev;
2179+ new->transport_header = old->transport_header;
2180+ new->network_header = old->network_header;
2181+ new->mac_header = old->mac_header;
2182+ new->inner_transport_header = old->inner_transport_header;
2183+ new->inner_network_header = old->inner_network_header;
2184+ new->inner_mac_header = old->inner_mac_header;
2185+ skb_dst_copy(new, old);
2186+ new->rxhash = old->rxhash;
2187+ new->ooo_okay = old->ooo_okay;
2188+ new->l4_rxhash = old->l4_rxhash;
2189+ new->no_fcs = old->no_fcs;
2190+ new->encapsulation = old->encapsulation;
2191+#ifdef CONFIG_XFRM
2192+ new->sp = secpath_get(old->sp);
2193+#endif
2194+ memcpy(new->cb, old->cb, sizeof(old->cb));
2195+ new->csum = old->csum;
2196+ new->local_df = old->local_df;
2197+ new->pkt_type = old->pkt_type;
2198+ new->ip_summed = old->ip_summed;
2199+ skb_copy_queue_mapping(new, old);
2200+ new->priority = old->priority;
2201+#if IS_ENABLED(CONFIG_IP_VS)
2202+ new->ipvs_property = old->ipvs_property;
2203+#endif
2204+ new->pfmemalloc = old->pfmemalloc;
2205+ new->protocol = old->protocol;
2206+ new->mark = old->mark;
2207+ new->skb_iif = old->skb_iif;
2208+ __nf_copy(new, old);
2209+#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE)
2210+ new->nf_trace = old->nf_trace;
2211+#endif
2212+#ifdef CONFIG_NET_SCHED
2213+ new->tc_index = old->tc_index;
2214+#ifdef CONFIG_NET_CLS_ACT
2215+ new->tc_verd = old->tc_verd;
2216+#endif
2217+#endif
2218+ new->vlan_proto = old->vlan_proto;
2219+ new->vlan_tci = old->vlan_tci;
2220+
2221+ skb_copy_secmark(new, old);
2222+}
2223+
2224+/*
2225+ * You should not add any new code to this function. Add it to
2226+ * __copy_skb_header above instead.
2227+ */
2228+static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
2229+{
2230+#define C(x) n->x = skb->x
2231+
2232+ n->next = n->prev = NULL;
2233+ n->sk = NULL;
2234+ __copy_skb_header(n, skb);
2235+
2236+ C(len);
2237+ C(data_len);
2238+ C(mac_len);
2239+ n->hdr_len = skb->nohdr ? skb_headroom(skb) : skb->hdr_len;
2240+ n->cloned = 1;
2241+ n->nohdr = 0;
2242+ n->destructor = NULL;
2243+ C(tail);
2244+ C(end);
2245+ C(head);
2246+ C(head_frag);
2247+ C(data);
2248+ C(truesize);
2249+ atomic_set(&n->users, 1);
2250+
2251+ atomic_inc(&(skb_shinfo(skb)->dataref));
2252+ skb->cloned = 1;
2253+
2254+ return n;
2255+#undef C
2256+}
2257+
2258+/**
2259+ * skb_morph - morph one skb into another
2260+ * @dst: the skb to receive the contents
2261+ * @src: the skb to supply the contents
2262+ *
2263+ * This is identical to skb_clone except that the target skb is
2264+ * supplied by the user.
2265+ *
2266+ * The target skb is returned upon exit.
2267+ */
2268+struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src)
2269+{
2270+ skb_release_all(dst);
2271+ return __skb_clone(dst, src);
2272+}
2273+EXPORT_SYMBOL_GPL(skb_morph);
2274+
2275+/**
2276+ * skb_copy_ubufs - copy userspace skb frags buffers to kernel
2277+ * @skb: the skb to modify
2278+ * @gfp_mask: allocation priority
2279+ *
2280+ * This must be called on SKBTX_DEV_ZEROCOPY skb.
2281+ * It will copy all frags into kernel and drop the reference
2282+ * to userspace pages.
2283+ *
2284+ * If this function is called from an interrupt, @gfp_mask must be
2285+ * %GFP_ATOMIC.
2286+ *
2287+ * Returns 0 on success or a negative error code on failure
2288+ * to allocate kernel memory to copy to.
2289+ */
2290+int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
2291+{
2292+ int i;
2293+ int num_frags = skb_shinfo(skb)->nr_frags;
2294+ struct page *page, *head = NULL;
2295+ struct ubuf_info *uarg = skb_shinfo(skb)->destructor_arg;
2296+
2297+ for (i = 0; i < num_frags; i++) {
2298+ u8 *vaddr;
2299+ skb_frag_t *f = &skb_shinfo(skb)->frags[i];
2300+
2301+ page = alloc_page(gfp_mask);
2302+ if (!page) {
2303+ while (head) {
2304+ struct page *next = (struct page *)head->private;
2305+ put_page(head);
2306+ head = next;
2307+ }
2308+ return -ENOMEM;
2309+ }
2310+ vaddr = kmap_atomic(skb_frag_page(f));
2311+ memcpy(page_address(page),
2312+ vaddr + f->page_offset, skb_frag_size(f));
2313+ kunmap_atomic(vaddr);
2314+ page->private = (unsigned long)head;
2315+ head = page;
2316+ }
2317+
2318+ /* skb frags release userspace buffers */
2319+ for (i = 0; i < num_frags; i++)
2320+ skb_frag_unref(skb, i);
2321+
2322+ uarg->callback(uarg, false);
2323+
2324+ /* skb frags point to kernel buffers */
2325+ for (i = num_frags - 1; i >= 0; i--) {
2326+ __skb_fill_page_desc(skb, i, head, 0,
2327+ skb_shinfo(skb)->frags[i].size);
2328+ head = (struct page *)head->private;
2329+ }
2330+
2331+ skb_shinfo(skb)->tx_flags &= ~SKBTX_DEV_ZEROCOPY;
2332+ return 0;
2333+}
2334+EXPORT_SYMBOL_GPL(skb_copy_ubufs);
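skb_orphan_frags() (used by skb_clone() just below) is the usual entry point and only falls through to skb_copy_ubufs() for zerocopy skbs; a hypothetical direct caller would look like:

	static int demo_make_frags_kernel(struct sk_buff *skb)
	{
		if (!(skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY))
			return 0;			/* nothing to copy */
		return skb_copy_ubufs(skb, GFP_ATOMIC);	/* interrupt-safe mask */
	}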
2335+
2336+/**
2337+ * skb_clone - duplicate an sk_buff
2338+ * @skb: buffer to clone
2339+ * @gfp_mask: allocation priority
2340+ *
2341+ * Duplicate an &sk_buff. The new one is not owned by a socket. Both
2342+ * copies share the same packet data but not structure. The new
2343+ * buffer has a reference count of 1. If the allocation fails the
2344+ * function returns %NULL otherwise the new buffer is returned.
2345+ *
2346+ * If this function is called from an interrupt, @gfp_mask must be
2347+ * %GFP_ATOMIC.
2348+ */
2349+
2350+struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
2351+{
2352+ struct sk_buff *n;
2353+
2354+ if (skb_orphan_frags(skb, gfp_mask))
2355+ return NULL;
2356+
2357+ n = skb + 1;
2358+ if (skb->fclone == SKB_FCLONE_ORIG &&
2359+ n->fclone == SKB_FCLONE_UNAVAILABLE) {
2360+ atomic_t *fclone_ref = (atomic_t *) (n + 1);
2361+ n->fclone = SKB_FCLONE_CLONE;
2362+ atomic_inc(fclone_ref);
2363+ } else {
2364+ if (skb_pfmemalloc(skb))
2365+ gfp_mask |= __GFP_MEMALLOC;
2366+
2367+ n = kmem_cache_alloc(skbuff_head_cache, gfp_mask);
2368+ if (!n)
2369+ return NULL;
2370+
2371+ kmemcheck_annotate_bitfield(n, flags1);
2372+ kmemcheck_annotate_bitfield(n, flags2);
2373+ n->fclone = SKB_FCLONE_UNAVAILABLE;
2374+ }
2375+
2376+ return __skb_clone(n, skb);
2377+}
2378+EXPORT_SYMBOL(skb_clone);
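Because clones share payload, a writer must unshare before editing; skb_cow_head() does that via pskb_expand_head() when needed. A minimal sketch (demo_decrement_ttl() is hypothetical, the IP checksum update is omitted for brevity, and skb->data is assumed to point at the IP header):

	static int demo_decrement_ttl(struct sk_buff *skb)
	{
		struct iphdr *iph;

		if (skb_cow_head(skb, 0))	/* unshare the header if cloned */
			return -ENOMEM;
		iph = (struct iphdr *)skb->data;
		iph->ttl--;			/* safe: the head is now private */
		return 0;
	}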
2379+
2380+static void skb_headers_offset_update(struct sk_buff *skb, int off)
2381+{
2382+ /* {transport,network,mac}_header and tail are relative to skb->head */
2383+ skb->transport_header += off;
2384+ skb->network_header += off;
2385+ if (skb_mac_header_was_set(skb))
2386+ skb->mac_header += off;
2387+ skb->inner_transport_header += off;
2388+ skb->inner_network_header += off;
2389+ skb->inner_mac_header += off;
2390+}
2391+
2392+static void copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
2393+{
2394+#ifndef NET_SKBUFF_DATA_USES_OFFSET
2395+ /*
2396+ * Shift between the two data areas in bytes
2397+ */
2398+ unsigned long offset = new->data - old->data;
2399+#endif
2400+
2401+ __copy_skb_header(new, old);
2402+
2403+#ifndef NET_SKBUFF_DATA_USES_OFFSET
2404+ skb_headers_offset_update(new, offset);
2405+#endif
2406+ skb_shinfo(new)->gso_size = skb_shinfo(old)->gso_size;
2407+ skb_shinfo(new)->gso_segs = skb_shinfo(old)->gso_segs;
2408+ skb_shinfo(new)->gso_type = skb_shinfo(old)->gso_type;
2409+}
2410+
2411+static inline int skb_alloc_rx_flag(const struct sk_buff *skb)
2412+{
2413+ if (skb_pfmemalloc(skb))
2414+ return SKB_ALLOC_RX;
2415+ return 0;
2416+}
2417+
2418+/**
2419+ * skb_copy - create private copy of an sk_buff
2420+ * @skb: buffer to copy
2421+ * @gfp_mask: allocation priority
2422+ *
2423+ * Make a copy of both an &sk_buff and its data. This is used when the
2424+ * caller wishes to modify the data and needs a private copy of the
2425+ * data to alter. Returns %NULL on failure or the pointer to the buffer
2426+ * on success. The returned buffer has a reference count of 1.
2427+ *
2428+ * As a by-product, this function converts a non-linear &sk_buff to a
2429+ * linear one, so that the &sk_buff becomes completely private and the
2430+ * caller is allowed to modify all the data of the returned buffer. This
2431+ * means that this function is not recommended for use when only the
2432+ * header is going to be modified. Use pskb_copy() instead.
2433+ */
2434+
2435+struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask)
2436+{
2437+ int headerlen = skb_headroom(skb);
2438+ unsigned int size = skb_end_offset(skb) + skb->data_len;
2439+ struct sk_buff *n = __alloc_skb(size, gfp_mask,
2440+ skb_alloc_rx_flag(skb), NUMA_NO_NODE);
2441+
2442+ if (!n)
2443+ return NULL;
2444+
2445+ /* Set the data pointer */
2446+ skb_reserve(n, headerlen);
2447+ /* Set the tail pointer and length */
2448+ skb_put(n, skb->len);
2449+
2450+ if (skb_copy_bits(skb, -headerlen, n->head, headerlen + skb->len))
2451+ BUG();
2452+
2453+ copy_skb_header(n, skb);
2454+ return n;
2455+}
2456+EXPORT_SYMBOL(skb_copy);
2457+
2458+/**
2459+ * __pskb_copy - create copy of an sk_buff with private head.
2460+ * @skb: buffer to copy
2461+ * @headroom: headroom of new skb
2462+ * @gfp_mask: allocation priority
2463+ *
2464+ * Make a copy of both an &sk_buff and part of its data, located
2465+ * in the header. Fragmented data remains shared. This is used when
2466+ * the caller wishes to modify only the header of the &sk_buff and
2467+ * needs a private copy of the header to alter. Returns %NULL on
2468+ * failure or the pointer to the buffer on success.
2469+ * The returned buffer has a reference count of 1.
2470+ */
2471+
2472+struct sk_buff *__pskb_copy(struct sk_buff *skb, int headroom, gfp_t gfp_mask)
2473+{
2474+ unsigned int size = skb_headlen(skb) + headroom;
2475+ struct sk_buff *n = __alloc_skb(size, gfp_mask,
2476+ skb_alloc_rx_flag(skb), NUMA_NO_NODE);
2477+
2478+ if (!n)
2479+ goto out;
2480+
2481+ /* Set the data pointer */
2482+ skb_reserve(n, headroom);
2483+ /* Set the tail pointer and length */
2484+ skb_put(n, skb_headlen(skb));
2485+ /* Copy the bytes */
2486+ skb_copy_from_linear_data(skb, n->data, n->len);
2487+
2488+ n->truesize += skb->data_len;
2489+ n->data_len = skb->data_len;
2490+ n->len = skb->len;
2491+
2492+ if (skb_shinfo(skb)->nr_frags) {
2493+ int i;
2494+
2495+ if (skb_orphan_frags(skb, gfp_mask)) {
2496+ kfree_skb(n);
2497+ n = NULL;
2498+ goto out;
2499+ }
2500+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
2501+ skb_shinfo(n)->frags[i] = skb_shinfo(skb)->frags[i];
2502+ skb_frag_ref(skb, i);
2503+ }
2504+ skb_shinfo(n)->nr_frags = i;
2505+ }
2506+
2507+ if (skb_has_frag_list(skb)) {
2508+ skb_shinfo(n)->frag_list = skb_shinfo(skb)->frag_list;
2509+ skb_clone_fraglist(n);
2510+ }
2511+
2512+ copy_skb_header(n, skb);
2513+out:
2514+ return n;
2515+}
2516+EXPORT_SYMBOL(__pskb_copy);
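Choosing between the two copy primitives above, as the kernel-doc advises (demo_private_copy() is a hypothetical helper):

	static struct sk_buff *demo_private_copy(struct sk_buff *skb, bool data_too)
	{
		if (data_too)			/* full private copy; linearizes */
			return skb_copy(skb, GFP_ATOMIC);
		/* header-only copy: frags stay shared with the original */
		return __pskb_copy(skb, skb_headroom(skb), GFP_ATOMIC);
	}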
2517+
2518+/**
2519+ * pskb_expand_head - reallocate header of &sk_buff
2520+ * @skb: buffer to reallocate
2521+ * @nhead: room to add at head
2522+ * @ntail: room to add at tail
2523+ * @gfp_mask: allocation priority
2524+ *
2525+ * Expands (or creates an identical copy, if @nhead and @ntail are zero)
2526+ * the header of the skb. The &sk_buff itself is not changed and MUST have
2527+ * a reference count of 1. Returns zero on success or a negative error
2528+ * code if expansion failed. In the latter case, the &sk_buff is not changed.
2529+ *
2530+ * All the pointers pointing into skb header may change and must be
2531+ * reloaded after call to this function.
2532+ */
2533+
2534+int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
2535+ gfp_t gfp_mask)
2536+{
2537+ int i;
2538+ u8 *data;
2539+ int size = nhead + skb_end_offset(skb) + ntail;
2540+ long off;
2541+
2542+ BUG_ON(nhead < 0);
2543+
2544+ if (skb_shared(skb))
2545+ BUG();
2546+
2547+ size = SKB_DATA_ALIGN(size);
2548+
2549+ if (skb_pfmemalloc(skb))
2550+ gfp_mask |= __GFP_MEMALLOC;
2551+ data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
2552+ gfp_mask, NUMA_NO_NODE, NULL);
2553+ if (!data)
2554+ goto nodata;
2555+ size = SKB_WITH_OVERHEAD(ksize(data));
2556+
2557+ /* Copy only real data... and, alas, the header. This should be
2558+ * optimized for the case when the header is empty.
2559+ */
2560+ memcpy(data + nhead, skb->head, skb_tail_pointer(skb) - skb->head);
2561+
2562+ memcpy((struct skb_shared_info *)(data + size),
2563+ skb_shinfo(skb),
2564+ offsetof(struct skb_shared_info, frags[skb_shinfo(skb)->nr_frags]));
2565+
2566+ /*
2567+ * If shinfo is shared we must drop the old head gracefully, but if it
2568+ * is not we can just free the old head and leave the existing refcount
2569+ * alone, since all we did is relocate the values.
2570+ */
2571+ if (skb_cloned(skb)) {
2572+ /* copy this zero copy skb frags */
2573+ if (skb_orphan_frags(skb, gfp_mask))
2574+ goto nofrags;
2575+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
2576+ skb_frag_ref(skb, i);
2577+
2578+ if (skb_has_frag_list(skb))
2579+ skb_clone_fraglist(skb);
2580+
2581+ skb_release_data(skb);
2582+ } else {
2583+ skb_free_head(skb);
2584+ }
2585+ off = (data + nhead) - skb->head;
2586+
2587+ skb->head = data;
2588+ skb->head_frag = 0;
2589+ skb->data += off;
2590+#ifdef NET_SKBUFF_DATA_USES_OFFSET
2591+ skb->end = size;
2592+ off = nhead;
2593+#else
2594+ skb->end = skb->head + size;
2595+#endif
2596+ skb->tail += off;
2597+ skb_headers_offset_update(skb, off);
2598+ /* Only adjust this if it actually is csum_start rather than csum */
2599+ if (skb->ip_summed == CHECKSUM_PARTIAL)
2600+ skb->csum_start += nhead;
2601+ skb->cloned = 0;
2602+ skb->hdr_len = 0;
2603+ skb->nohdr = 0;
2604+ atomic_set(&skb_shinfo(skb)->dataref, 1);
2605+ return 0;
2606+
2607+nofrags:
2608+ kfree(data);
2609+nodata:
2610+ return -ENOMEM;
2611+}
2612+EXPORT_SYMBOL(pskb_expand_head);
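A common caller-side pattern, sketched with a hypothetical helper: guarantee headroom before pushing an encapsulation header, then re-derive any cached pointers into the head, as the comment above requires:

	static int demo_ensure_headroom(struct sk_buff *skb, unsigned int needed)
	{
		if (skb_headroom(skb) >= needed && !skb_cloned(skb))
			return 0;	/* enough room and the head is writable */
		/* grow (and unshare) the head; may overshoot, which is harmless */
		return pskb_expand_head(skb, SKB_DATA_ALIGN(needed), 0, GFP_ATOMIC);
	}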
2613+
2614+/* Make private copy of skb with writable head and some headroom */
2615+
2616+struct sk_buff *skb_realloc_headroom(struct sk_buff *skb, unsigned int headroom)
2617+{
2618+ struct sk_buff *skb2;
2619+ int delta = headroom - skb_headroom(skb);
2620+
2621+ if (delta <= 0)
2622+ skb2 = pskb_copy(skb, GFP_ATOMIC);
2623+ else {
2624+ skb2 = skb_clone(skb, GFP_ATOMIC);
2625+ if (skb2 && pskb_expand_head(skb2, SKB_DATA_ALIGN(delta), 0,
2626+ GFP_ATOMIC)) {
2627+ kfree_skb(skb2);
2628+ skb2 = NULL;
2629+ }
2630+ }
2631+ return skb2;
2632+}
2633+EXPORT_SYMBOL(skb_realloc_headroom);
2634+
2635+/**
2636+ * skb_copy_expand - copy and expand sk_buff
2637+ * @skb: buffer to copy
2638+ * @newheadroom: new free bytes at head
2639+ * @newtailroom: new free bytes at tail
2640+ * @gfp_mask: allocation priority
2641+ *
2642+ * Make a copy of both an &sk_buff and its data and while doing so
2643+ * allocate additional space.
2644+ *
2645+ * This is used when the caller wishes to modify the data and needs a
2646+ * private copy of the data to alter as well as more space for new fields.
2647+ * Returns %NULL on failure or the pointer to the buffer
2648+ * on success. The returned buffer has a reference count of 1.
2649+ *
2650+ * You must pass %GFP_ATOMIC as the allocation priority if this function
2651+ * is called from an interrupt.
2652+ */
2653+struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
2654+ int newheadroom, int newtailroom,
2655+ gfp_t gfp_mask)
2656+{
2657+ /*
2658+ * Allocate the copy buffer
2659+ */
2660+ struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom,
2661+ gfp_mask, skb_alloc_rx_flag(skb),
2662+ NUMA_NO_NODE);
2663+ int oldheadroom = skb_headroom(skb);
2664+ int head_copy_len, head_copy_off;
2665+ int off;
2666+
2667+ if (!n)
2668+ return NULL;
2669+
2670+ skb_reserve(n, newheadroom);
2671+
2672+ /* Set the tail pointer and length */
2673+ skb_put(n, skb->len);
2674+
2675+ head_copy_len = oldheadroom;
2676+ head_copy_off = 0;
2677+ if (newheadroom <= head_copy_len)
2678+ head_copy_len = newheadroom;
2679+ else
2680+ head_copy_off = newheadroom - head_copy_len;
2681+
2682+ /* Copy the linear header and data. */
2683+ if (skb_copy_bits(skb, -head_copy_len, n->head + head_copy_off,
2684+ skb->len + head_copy_len))
2685+ BUG();
2686+
2687+ copy_skb_header(n, skb);
2688+
2689+ off = newheadroom - oldheadroom;
2690+ if (n->ip_summed == CHECKSUM_PARTIAL)
2691+ n->csum_start += off;
2692+#ifdef NET_SKBUFF_DATA_USES_OFFSET
2693+ skb_headers_offset_update(n, off);
2694+#endif
2695+
2696+ return n;
2697+}
2698+EXPORT_SYMBOL(skb_copy_expand);
2699+
2700+/**
2701+ * skb_pad - zero pad the tail of an skb
2702+ * @skb: buffer to pad
2703+ * @pad: space to pad
2704+ *
2705+ * Ensure that a buffer is followed by a padding area that is zero
2706+ * filled. Used by network drivers which may DMA or transfer data
2707+ * beyond the buffer end onto the wire.
2708+ *
2709+ * May return an error in out-of-memory cases. The skb is freed on error.
2710+ */
2711+
2712+int skb_pad(struct sk_buff *skb, int pad)
2713+{
2714+ int err;
2715+ int ntail;
2716+
2717+ /* If the skbuff is non-linear, tailroom is always zero. */
2718+ if (!skb_cloned(skb) && skb_tailroom(skb) >= pad) {
2719+ memset(skb->data+skb->len, 0, pad);
2720+ return 0;
2721+ }
2722+
2723+ ntail = skb->data_len + pad - (skb->end - skb->tail);
2724+ if (likely(skb_cloned(skb) || ntail > 0)) {
2725+ err = pskb_expand_head(skb, 0, ntail, GFP_ATOMIC);
2726+ if (unlikely(err))
2727+ goto free_skb;
2728+ }
2729+
2730+ /* FIXME: The use of this function with non-linear skb's really needs
2731+ * to be audited.
2732+ */
2733+ err = skb_linearize(skb);
2734+ if (unlikely(err))
2735+ goto free_skb;
2736+
2737+ memset(skb->data + skb->len, 0, pad);
2738+ return 0;
2739+
2740+free_skb:
2741+ kfree_skb(skb);
2742+ return err;
2743+}
2744+EXPORT_SYMBOL(skb_pad);
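A typical driver use, padding short Ethernet frames before DMA (demo_pad_to_min() is hypothetical; ETH_ZLEN comes from <linux/if_ether.h>; note that skb->len is not extended, the zeroed bytes simply live past the tail):

	static int demo_pad_to_min(struct sk_buff *skb)
	{
		if (skb->len >= ETH_ZLEN)
			return 0;
		return skb_pad(skb, ETH_ZLEN - skb->len); /* frees skb on error */
	}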
2745+
2746+/**
2747+ * skb_put - add data to a buffer
2748+ * @skb: buffer to use
2749+ * @len: amount of data to add
2750+ *
2751+ * This function extends the used data area of the buffer. If this would
2752+ * exceed the total buffer size the kernel will panic. A pointer to the
2753+ * first byte of the extra data is returned.
2754+ */
2755+unsigned char *skb_put(struct sk_buff *skb, unsigned int len)
2756+{
2757+ unsigned char *tmp = skb_tail_pointer(skb);
2758+ SKB_LINEAR_ASSERT(skb);
2759+ skb->tail += len;
2760+ skb->len += len;
2761+ if (unlikely(skb->tail > skb->end))
2762+ skb_over_panic(skb, len, __builtin_return_address(0));
2763+ return tmp;
2764+}
2765+EXPORT_SYMBOL(skb_put);
2766+
2767+/**
2768+ * skb_push - add data to the start of a buffer
2769+ * @skb: buffer to use
2770+ * @len: amount of data to add
2771+ *
2772+ * This function extends the used data area of the buffer at the buffer
2773+ * start. If this would exceed the total buffer headroom the kernel will
2774+ * panic. A pointer to the first byte of the extra data is returned.
2775+ */
2776+unsigned char *skb_push(struct sk_buff *skb, unsigned int len)
2777+{
2778+ skb->data -= len;
2779+ skb->len += len;
2780+ if (unlikely(skb->data<skb->head))
2781+ skb_under_panic(skb, len, __builtin_return_address(0));
2782+ return skb->data;
2783+}
2784+EXPORT_SYMBOL(skb_push);
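Taken together with skb_reserve(), these two helpers support the usual build-a-frame pattern; a minimal sketch, assuming <linux/if_ether.h> for ETH_HLEN and a hypothetical caller-supplied payload:

	static struct sk_buff *demo_build_frame(const void *payload, unsigned int plen)
	{
		struct sk_buff *skb = alloc_skb(ETH_HLEN + plen, GFP_KERNEL);

		if (!skb)
			return NULL;
		skb_reserve(skb, ETH_HLEN);		   /* headroom for the push */
		memcpy(skb_put(skb, plen), payload, plen); /* extend the tail */
		memset(skb_push(skb, ETH_HLEN), 0, ETH_HLEN); /* prepend the header */
		return skb;
	}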
2785+
2786+/**
2787+ * skb_pull - remove data from the start of a buffer
2788+ * @skb: buffer to use
2789+ * @len: amount of data to remove
2790+ *
2791+ * This function removes data from the start of a buffer, returning
2792+ * the memory to the headroom. A pointer to the next data in the buffer
2793+ * is returned. Once the data has been pulled future pushes will overwrite
2794+ * the old data.
2795+ */
2796+unsigned char *skb_pull(struct sk_buff *skb, unsigned int len)
2797+{
2798+ return skb_pull_inline(skb, len);
2799+}
2800+EXPORT_SYMBOL(skb_pull);
2801+
2802+/**
2803+ * skb_trim - remove end from a buffer
2804+ * @skb: buffer to alter
2805+ * @len: new length
2806+ *
2807+ * Cut the length of a buffer down by removing data from the tail. If
2808+ * the buffer is already under the length specified it is not modified.
2809+ * The skb must be linear.
2810+ */
2811+void skb_trim(struct sk_buff *skb, unsigned int len)
2812+{
2813+ if (skb->len > len)
2814+ __skb_trim(skb, len);
2815+}
2816+EXPORT_SYMBOL(skb_trim);
2817+
2818+/* Trims skb to length len. It can change skb pointers.
2819+ */
2820+
2821+int ___pskb_trim(struct sk_buff *skb, unsigned int len)
2822+{
2823+ struct sk_buff **fragp;
2824+ struct sk_buff *frag;
2825+ int offset = skb_headlen(skb);
2826+ int nfrags = skb_shinfo(skb)->nr_frags;
2827+ int i;
2828+ int err;
2829+
2830+ if (skb_cloned(skb) &&
2831+ unlikely((err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC))))
2832+ return err;
2833+
2834+ i = 0;
2835+ if (offset >= len)
2836+ goto drop_pages;
2837+
2838+ for (; i < nfrags; i++) {
2839+ int end = offset + skb_frag_size(&skb_shinfo(skb)->frags[i]);
2840+
2841+ if (end < len) {
2842+ offset = end;
2843+ continue;
2844+ }
2845+
2846+ skb_frag_size_set(&skb_shinfo(skb)->frags[i++], len - offset);
2847+
2848+drop_pages:
2849+ skb_shinfo(skb)->nr_frags = i;
2850+
2851+ for (; i < nfrags; i++)
2852+ skb_frag_unref(skb, i);
2853+
2854+ if (skb_has_frag_list(skb))
2855+ skb_drop_fraglist(skb);
2856+ goto done;
2857+ }
2858+
2859+ for (fragp = &skb_shinfo(skb)->frag_list; (frag = *fragp);
2860+ fragp = &frag->next) {
2861+ int end = offset + frag->len;
2862+
2863+ if (skb_shared(frag)) {
2864+ struct sk_buff *nfrag;
2865+
2866+ nfrag = skb_clone(frag, GFP_ATOMIC);
2867+ if (unlikely(!nfrag))
2868+ return -ENOMEM;
2869+
2870+ nfrag->next = frag->next;
2871+ consume_skb(frag);
2872+ frag = nfrag;
2873+ *fragp = frag;
2874+ }
2875+
2876+ if (end < len) {
2877+ offset = end;
2878+ continue;
2879+ }
2880+
2881+ if (end > len &&
2882+ unlikely((err = pskb_trim(frag, len - offset))))
2883+ return err;
2884+
2885+ if (frag->next)
2886+ skb_drop_list(&frag->next);
2887+ break;
2888+ }
2889+
2890+done:
2891+ if (len > skb_headlen(skb)) {
2892+ skb->data_len -= skb->len - len;
2893+ skb->len = len;
2894+ } else {
2895+ skb->len = len;
2896+ skb->data_len = 0;
2897+ skb_set_tail_pointer(skb, len);
2898+ }
2899+
2900+ return 0;
2901+}
2902+EXPORT_SYMBOL(___pskb_trim);
2903+
2904+/**
2905+ * __pskb_pull_tail - advance tail of skb header
2906+ * @skb: buffer to reallocate
2907+ * @delta: number of bytes to advance tail
2908+ *
2909+ * The function makes sense only on a fragmented &sk_buff: it expands
2910+ * the header, moving its tail forward and copying the necessary data
2911+ * from the fragmented part.
2912+ *
2913+ * The &sk_buff MUST have a reference count of 1.
2914+ *
2915+ * Returns %NULL (and the &sk_buff does not change) if the pull failed,
2916+ * or the value of the new tail of the skb on success.
2917+ *
2918+ * All the pointers pointing into skb header may change and must be
2919+ * reloaded after call to this function.
2920+ */
2921+
2922+/* Moves the tail of the skb head forward, copying data from the
2923+ * fragmented part when necessary.
2924+ * 1. It may fail due to allocation failure.
2925+ * 2. It may change skb pointers.
2926+ *
2927+ * It is pretty complicated. Luckily, it is called only in exceptional cases.
2928+ */
2929+unsigned char *__pskb_pull_tail(struct sk_buff *skb, int delta)
2930+{
2931+ /* If the skb does not have enough free space at the tail, get a new
2932+ * one plus 128 bytes for future expansion. If we have enough room
2933+ * at the tail, reallocate without expansion only if the skb is cloned.
2934+ */
2935+ int i, k, eat = (skb->tail + delta) - skb->end;
2936+
2937+ if (eat > 0 || skb_cloned(skb)) {
2938+ if (pskb_expand_head(skb, 0, eat > 0 ? eat + 128 : 0,
2939+ GFP_ATOMIC))
2940+ return NULL;
2941+ }
2942+
2943+ if (skb_copy_bits(skb, skb_headlen(skb), skb_tail_pointer(skb), delta))
2944+ BUG();
2945+
2946+ /* Optimization: no fragments, no reason to pre-estimate the
2947+ * size of the pulled pages. Superb.
2948+ */
2949+ if (!skb_has_frag_list(skb))
2950+ goto pull_pages;
2951+
2952+ /* Estimate size of pulled pages. */
2953+ eat = delta;
2954+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
2955+ int size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
2956+
2957+ if (size >= eat)
2958+ goto pull_pages;
2959+ eat -= size;
2960+ }
2961+
2962+ /* If we need to update the frag list, we are in trouble.
2963+ * Certainly, it is possible to add an offset to the skb data,
2964+ * but taking into account that pulling is expected to be a
2965+ * very rare operation, it is worth fighting against further
2966+ * bloating of the skb head and crucifying ourselves here instead.
2967+ * Pure masochism, indeed. 8)8)
2968+ */
2969+ if (eat) {
2970+ struct sk_buff *list = skb_shinfo(skb)->frag_list;
2971+ struct sk_buff *clone = NULL;
2972+ struct sk_buff *insp = NULL;
2973+
2974+ do {
2975+ BUG_ON(!list);
2976+
2977+ if (list->len <= eat) {
2978+ /* Eaten as whole. */
2979+ eat -= list->len;
2980+ list = list->next;
2981+ insp = list;
2982+ } else {
2983+ /* Eaten partially. */
2984+
2985+ if (skb_shared(list)) {
2986+ /* Sucks! We need to fork list. :-( */
2987+ clone = skb_clone(list, GFP_ATOMIC);
2988+ if (!clone)
2989+ return NULL;
2990+ insp = list->next;
2991+ list = clone;
2992+ } else {
2993+ /* This may be pulled without
2994+ * problems. */
2995+ insp = list;
2996+ }
2997+ if (!pskb_pull(list, eat)) {
2998+ kfree_skb(clone);
2999+ return NULL;
3000+ }
3001+ break;
3002+ }
3003+ } while (eat);
3004+
3005+ /* Free pulled out fragments. */
3006+ while ((list = skb_shinfo(skb)->frag_list) != insp) {
3007+ skb_shinfo(skb)->frag_list = list->next;
3008+ kfree_skb(list);
3009+ }
3010+ /* And insert new clone at head. */
3011+ if (clone) {
3012+ clone->next = list;
3013+ skb_shinfo(skb)->frag_list = clone;
3014+ }
3015+ }
3016+ /* Success! Now we may commit changes to skb data. */
3017+
3018+pull_pages:
3019+ eat = delta;
3020+ k = 0;
3021+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
3022+ int size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
3023+
3024+ if (size <= eat) {
3025+ skb_frag_unref(skb, i);
3026+ eat -= size;
3027+ } else {
3028+ skb_shinfo(skb)->frags[k] = skb_shinfo(skb)->frags[i];
3029+ if (eat) {
3030+ skb_shinfo(skb)->frags[k].page_offset += eat;
3031+ skb_frag_size_sub(&skb_shinfo(skb)->frags[k], eat);
3032+ eat = 0;
3033+ }
3034+ k++;
3035+ }
3036+ }
3037+ skb_shinfo(skb)->nr_frags = k;
3038+
3039+ skb->tail += delta;
3040+ skb->data_len -= delta;
3041+
3042+ return skb_tail_pointer(skb);
3043+}
3044+EXPORT_SYMBOL(__pskb_pull_tail);
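Callers rarely use this directly; pskb_may_pull() is the usual front end and only falls through to __pskb_pull_tail() when the linear part is too short. A sketch (demo_parse_proto() is hypothetical; skb->data is assumed to point at the IP header):

	static int demo_parse_proto(struct sk_buff *skb)
	{
		const struct iphdr *iph;

		if (!pskb_may_pull(skb, sizeof(struct iphdr)))
			return -EINVAL;			/* truncated packet */
		iph = (const struct iphdr *)skb->data;	/* reload: head may move */
		return iph->protocol;
	}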
3045+
3046+/**
3047+ * skb_copy_bits - copy bits from skb to kernel buffer
3048+ * @skb: source skb
3049+ * @offset: offset in source
3050+ * @to: destination buffer
3051+ * @len: number of bytes to copy
3052+ *
3053+ * Copy the specified number of bytes from the source skb to the
3054+ * destination buffer.
3055+ *
3056+ * CAUTION!
3057+ * If its prototype is ever changed,
3058+ * check the arch/{*}/net/{*}.S files,
3059+ * since it is called from BPF assembly code.
3060+ */
3061+int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len)
3062+{
3063+ int start = skb_headlen(skb);
3064+ struct sk_buff *frag_iter;
3065+ int i, copy;
3066+
3067+ if (offset > (int)skb->len - len)
3068+ goto fault;
3069+
3070+ /* Copy header. */
3071+ if ((copy = start - offset) > 0) {
3072+ if (copy > len)
3073+ copy = len;
3074+ skb_copy_from_linear_data_offset(skb, offset, to, copy);
3075+ if ((len -= copy) == 0)
3076+ return 0;
3077+ offset += copy;
3078+ to += copy;
3079+ }
3080+
3081+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
3082+ int end;
3083+ skb_frag_t *f = &skb_shinfo(skb)->frags[i];
3084+
3085+ WARN_ON(start > offset + len);
3086+
3087+ end = start + skb_frag_size(f);
3088+ if ((copy = end - offset) > 0) {
3089+ u8 *vaddr;
3090+
3091+ if (copy > len)
3092+ copy = len;
3093+
3094+ vaddr = kmap_atomic(skb_frag_page(f));
3095+ memcpy(to,
3096+ vaddr + f->page_offset + offset - start,
3097+ copy);
3098+ kunmap_atomic(vaddr);
3099+
3100+ if ((len -= copy) == 0)
3101+ return 0;
3102+ offset += copy;
3103+ to += copy;
3104+ }
3105+ start = end;
3106+ }
3107+
3108+ skb_walk_frags(skb, frag_iter) {
3109+ int end;
3110+
3111+ WARN_ON(start > offset + len);
3112+
3113+ end = start + frag_iter->len;
3114+ if ((copy = end - offset) > 0) {
3115+ if (copy > len)
3116+ copy = len;
3117+ if (skb_copy_bits(frag_iter, offset - start, to, copy))
3118+ goto fault;
3119+ if ((len -= copy) == 0)
3120+ return 0;
3121+ offset += copy;
3122+ to += copy;
3123+ }
3124+ start = end;
3125+ }
3126+
3127+ if (!len)
3128+ return 0;
3129+
3130+fault:
3131+ return -EFAULT;
3132+}
3133+EXPORT_SYMBOL(skb_copy_bits);
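Because it walks the linear part, the page frags, and the frag list, skb_copy_bits() lets callers peek at data without linearizing; a hypothetical sketch reading both transport ports at offset @thoff:

	static int demo_peek_ports(const struct sk_buff *skb, int thoff,
				   __be16 ports[2])
	{
		/* returns -EFAULT if the packet is shorter than requested */
		return skb_copy_bits(skb, thoff, ports, 2 * sizeof(__be16));
	}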
3134+
3135+/*
3136+ * Callback from splice_to_pipe(), used to release pages left at the
3137+ * end of the spd in case we errored out while filling the pipe.
3138+ */
3139+static void sock_spd_release(struct splice_pipe_desc *spd, unsigned int i)
3140+{
3141+ put_page(spd->pages[i]);
3142+}
3143+
3144+static struct page *linear_to_page(struct page *page, unsigned int *len,
3145+ unsigned int *offset,
3146+ struct sock *sk)
3147+{
3148+ struct page_frag *pfrag = sk_page_frag(sk);
3149+
3150+ if (!sk_page_frag_refill(sk, pfrag))
3151+ return NULL;
3152+
3153+ *len = min_t(unsigned int, *len, pfrag->size - pfrag->offset);
3154+
3155+ memcpy(page_address(pfrag->page) + pfrag->offset,
3156+ page_address(page) + *offset, *len);
3157+ *offset = pfrag->offset;
3158+ pfrag->offset += *len;
3159+
3160+ return pfrag->page;
3161+}
3162+
3163+static bool spd_can_coalesce(const struct splice_pipe_desc *spd,
3164+ struct page *page,
3165+ unsigned int offset)
3166+{
3167+ return spd->nr_pages &&
3168+ spd->pages[spd->nr_pages - 1] == page &&
3169+ (spd->partial[spd->nr_pages - 1].offset +
3170+ spd->partial[spd->nr_pages - 1].len == offset);
3171+}
3172+
3173+/*
3174+ * Fill page/offset/length into spd, if it can hold more pages.
3175+ */
3176+static bool spd_fill_page(struct splice_pipe_desc *spd,
3177+ struct pipe_inode_info *pipe, struct page *page,
3178+ unsigned int *len, unsigned int offset,
3179+ bool linear,
3180+ struct sock *sk)
3181+{
3182+ if (unlikely(spd->nr_pages == MAX_SKB_FRAGS))
3183+ return true;
3184+
3185+ if (linear) {
3186+ page = linear_to_page(page, len, &offset, sk);
3187+ if (!page)
3188+ return true;
3189+ }
3190+ if (spd_can_coalesce(spd, page, offset)) {
3191+ spd->partial[spd->nr_pages - 1].len += *len;
3192+ return false;
3193+ }
3194+ get_page(page);
3195+ spd->pages[spd->nr_pages] = page;
3196+ spd->partial[spd->nr_pages].len = *len;
3197+ spd->partial[spd->nr_pages].offset = offset;
3198+ spd->nr_pages++;
3199+
3200+ return false;
3201+}
3202+
3203+static bool __splice_segment(struct page *page, unsigned int poff,
3204+ unsigned int plen, unsigned int *off,
3205+ unsigned int *len,
3206+ struct splice_pipe_desc *spd, bool linear,
3207+ struct sock *sk,
3208+ struct pipe_inode_info *pipe)
3209+{
3210+ if (!*len)
3211+ return true;
3212+
3213+ /* skip this segment if already processed */
3214+ if (*off >= plen) {
3215+ *off -= plen;
3216+ return false;
3217+ }
3218+
3219+ /* ignore any bits we already processed */
3220+ poff += *off;
3221+ plen -= *off;
3222+ *off = 0;
3223+
3224+ do {
3225+ unsigned int flen = min(*len, plen);
3226+
3227+ if (spd_fill_page(spd, pipe, page, &flen, poff,
3228+ linear, sk))
3229+ return true;
3230+ poff += flen;
3231+ plen -= flen;
3232+ *len -= flen;
3233+ } while (*len && plen);
3234+
3235+ return false;
3236+}
3237+
3238+/*
3239+ * Map linear and fragment data from the skb to spd. It reports true if the
3240+ * pipe is full or if we already spliced the requested length.
3241+ */
3242+static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe,
3243+ unsigned int *offset, unsigned int *len,
3244+ struct splice_pipe_desc *spd, struct sock *sk)
3245+{
3246+ int seg;
3247+
3248+ /* map the linear part :
3249+ * If skb->head_frag is set, this 'linear' part is backed by a
3250+ * fragment, and if the head is not shared with any clones then
3251+ * we can avoid a copy since we own the head portion of this page.
3252+ */
3253+ if (__splice_segment(virt_to_page(skb->data),
3254+ (unsigned long) skb->data & (PAGE_SIZE - 1),
3255+ skb_headlen(skb),
3256+ offset, len, spd,
3257+ skb_head_is_locked(skb),
3258+ sk, pipe))
3259+ return true;
3260+
3261+ /*
3262+ * then map the fragments
3263+ */
3264+ for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) {
3265+ const skb_frag_t *f = &skb_shinfo(skb)->frags[seg];
3266+
3267+ if (__splice_segment(skb_frag_page(f),
3268+ f->page_offset, skb_frag_size(f),
3269+ offset, len, spd, false, sk, pipe))
3270+ return true;
3271+ }
3272+
3273+ return false;
3274+}
3275+
3276+/*
3277+ * Map data from the skb to a pipe. Should handle both the linear part,
3278+ * the fragments, and the frag list. It does NOT handle frag lists within
3279+ * the frag list, if such a thing exists. We'd probably need to recurse to
3280+ * handle that cleanly.
3281+ */
3282+int skb_splice_bits(struct sk_buff *skb, unsigned int offset,
3283+ struct pipe_inode_info *pipe, unsigned int tlen,
3284+ unsigned int flags)
3285+{
3286+ struct partial_page partial[MAX_SKB_FRAGS];
3287+ struct page *pages[MAX_SKB_FRAGS];
3288+ struct splice_pipe_desc spd = {
3289+ .pages = pages,
3290+ .partial = partial,
3291+ .nr_pages_max = MAX_SKB_FRAGS,
3292+ .flags = flags,
3293+ .ops = &sock_pipe_buf_ops,
3294+ .spd_release = sock_spd_release,
3295+ };
3296+ struct sk_buff *frag_iter;
3297+ struct sock *sk = skb->sk;
3298+ int ret = 0;
3299+
3300+ /*
3301+ * __skb_splice_bits() only fails if the output has no room left,
3302+ * so no point in going over the frag_list for the error case.
3303+ */
3304+ if (__skb_splice_bits(skb, pipe, &offset, &tlen, &spd, sk))
3305+ goto done;
3306+ else if (!tlen)
3307+ goto done;
3308+
3309+ /*
3310+ * now see if we have a frag_list to map
3311+ */
3312+ skb_walk_frags(skb, frag_iter) {
3313+ if (!tlen)
3314+ break;
3315+ if (__skb_splice_bits(frag_iter, pipe, &offset, &tlen, &spd, sk))
3316+ break;
3317+ }
3318+
3319+done:
3320+ if (spd.nr_pages) {
3321+ /*
3322+ * Drop the socket lock, otherwise we have reverse
3323+ * locking dependencies between sk_lock and i_mutex
3324+ * here as compared to sendfile(). We enter here
3325+ * with the socket lock held, and splice_to_pipe() will
3326+ * grab the pipe inode lock. For sendfile() emulation,
3327+ * we call into ->sendpage() with the i_mutex lock held
3328+ * and networking will grab the socket lock.
3329+ */
3330+ release_sock(sk);
3331+ ret = splice_to_pipe(pipe, &spd);
3332+ lock_sock(sk);
3333+ }
3334+
3335+ return ret;
3336+}
3337+
3338+/**
3339+ * skb_store_bits - store bits from kernel buffer to skb
3340+ * @skb: destination buffer
3341+ * @offset: offset in destination
3342+ * @from: source buffer
3343+ * @len: number of bytes to copy
3344+ *
3345+ * Copy the specified number of bytes from the source buffer to the
3346+ * destination skb. This function handles all the messy bits of
3347+ * traversing fragment lists and such.
3348+ */
3349+
3350+int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len)
3351+{
3352+ int start = skb_headlen(skb);
3353+ struct sk_buff *frag_iter;
3354+ int i, copy;
3355+
3356+ if (offset > (int)skb->len - len)
3357+ goto fault;
3358+
3359+ if ((copy = start - offset) > 0) {
3360+ if (copy > len)
3361+ copy = len;
3362+ skb_copy_to_linear_data_offset(skb, offset, from, copy);
3363+ if ((len -= copy) == 0)
3364+ return 0;
3365+ offset += copy;
3366+ from += copy;
3367+ }
3368+
3369+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
3370+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
3371+ int end;
3372+
3373+ WARN_ON(start > offset + len);
3374+
3375+ end = start + skb_frag_size(frag);
3376+ if ((copy = end - offset) > 0) {
3377+ u8 *vaddr;
3378+
3379+ if (copy > len)
3380+ copy = len;
3381+
3382+ vaddr = kmap_atomic(skb_frag_page(frag));
3383+ memcpy(vaddr + frag->page_offset + offset - start,
3384+ from, copy);
3385+ kunmap_atomic(vaddr);
3386+
3387+ if ((len -= copy) == 0)
3388+ return 0;
3389+ offset += copy;
3390+ from += copy;
3391+ }
3392+ start = end;
3393+ }
3394+
3395+ skb_walk_frags(skb, frag_iter) {
3396+ int end;
3397+
3398+ WARN_ON(start > offset + len);
3399+
3400+ end = start + frag_iter->len;
3401+ if ((copy = end - offset) > 0) {
3402+ if (copy > len)
3403+ copy = len;
3404+ if (skb_store_bits(frag_iter, offset - start,
3405+ from, copy))
3406+ goto fault;
3407+ if ((len -= copy) == 0)
3408+ return 0;
3409+ offset += copy;
3410+ from += copy;
3411+ }
3412+ start = end;
3413+ }
3414+ if (!len)
3415+ return 0;
3416+
3417+fault:
3418+ return -EFAULT;
3419+}
3420+EXPORT_SYMBOL(skb_store_bits);
3421+
3422+/* Checksum skb data. */
3423+
3424+__wsum skb_checksum(const struct sk_buff *skb, int offset,
3425+ int len, __wsum csum)
3426+{
3427+ int start = skb_headlen(skb);
3428+ int i, copy = start - offset;
3429+ struct sk_buff *frag_iter;
3430+ int pos = 0;
3431+
3432+ /* Checksum header. */
3433+ if (copy > 0) {
3434+ if (copy > len)
3435+ copy = len;
3436+ csum = csum_partial(skb->data + offset, copy, csum);
3437+ if ((len -= copy) == 0)
3438+ return csum;
3439+ offset += copy;
3440+ pos = copy;
3441+ }
3442+
3443+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
3444+ int end;
3445+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
3446+
3447+ WARN_ON(start > offset + len);
3448+
3449+ end = start + skb_frag_size(frag);
3450+ if ((copy = end - offset) > 0) {
3451+ __wsum csum2;
3452+ u8 *vaddr;
3453+
3454+ if (copy > len)
3455+ copy = len;
3456+ vaddr = kmap_atomic(skb_frag_page(frag));
3457+ csum2 = csum_partial(vaddr + frag->page_offset +
3458+ offset - start, copy, 0);
3459+ kunmap_atomic(vaddr);
3460+ csum = csum_block_add(csum, csum2, pos);
3461+ if (!(len -= copy))
3462+ return csum;
3463+ offset += copy;
3464+ pos += copy;
3465+ }
3466+ start = end;
3467+ }
3468+
3469+ skb_walk_frags(skb, frag_iter) {
3470+ int end;
3471+
3472+ WARN_ON(start > offset + len);
3473+
3474+ end = start + frag_iter->len;
3475+ if ((copy = end - offset) > 0) {
3476+ __wsum csum2;
3477+ if (copy > len)
3478+ copy = len;
3479+ csum2 = skb_checksum(frag_iter, offset - start,
3480+ copy, 0);
3481+ csum = csum_block_add(csum, csum2, pos);
3482+ if ((len -= copy) == 0)
3483+ return csum;
3484+ offset += copy;
3485+ pos += copy;
3486+ }
3487+ start = end;
3488+ }
3489+ BUG_ON(len);
3490+
3491+ return csum;
3492+}
3493+EXPORT_SYMBOL(skb_checksum);
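skb_checksum() returns a 32-bit partial sum in the same form csum_partial() uses; folding it gives the final 16-bit Internet checksum. A minimal sketch over the whole packet (demo_full_csum() is hypothetical):

	static __sum16 demo_full_csum(const struct sk_buff *skb)
	{
		return csum_fold(skb_checksum(skb, 0, skb->len, 0));
	}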
3494+
3495+/* Both of above in one bottle. */
3496+
3497+__wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset,
3498+ u8 *to, int len, __wsum csum)
3499+{
3500+ int start = skb_headlen(skb);
3501+ int i, copy = start - offset;
3502+ struct sk_buff *frag_iter;
3503+ int pos = 0;
3504+
3505+ /* Copy header. */
3506+ if (copy > 0) {
3507+ if (copy > len)
3508+ copy = len;
3509+ csum = csum_partial_copy_nocheck(skb->data + offset, to,
3510+ copy, csum);
3511+ if ((len -= copy) == 0)
3512+ return csum;
3513+ offset += copy;
3514+ to += copy;
3515+ pos = copy;
3516+ }
3517+
3518+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
3519+ int end;
3520+
3521+ WARN_ON(start > offset + len);
3522+
3523+ end = start + skb_frag_size(&skb_shinfo(skb)->frags[i]);
3524+ if ((copy = end - offset) > 0) {
3525+ __wsum csum2;
3526+ u8 *vaddr;
3527+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
3528+
3529+ if (copy > len)
3530+ copy = len;
3531+ vaddr = kmap_atomic(skb_frag_page(frag));
3532+ csum2 = csum_partial_copy_nocheck(vaddr +
3533+ frag->page_offset +
3534+ offset - start, to,
3535+ copy, 0);
3536+ kunmap_atomic(vaddr);
3537+ csum = csum_block_add(csum, csum2, pos);
3538+ if (!(len -= copy))
3539+ return csum;
3540+ offset += copy;
3541+ to += copy;
3542+ pos += copy;
3543+ }
3544+ start = end;
3545+ }
3546+
3547+ skb_walk_frags(skb, frag_iter) {
3548+ __wsum csum2;
3549+ int end;
3550+
3551+ WARN_ON(start > offset + len);
3552+
3553+ end = start + frag_iter->len;
3554+ if ((copy = end - offset) > 0) {
3555+ if (copy > len)
3556+ copy = len;
3557+ csum2 = skb_copy_and_csum_bits(frag_iter,
3558+ offset - start,
3559+ to, copy, 0);
3560+ csum = csum_block_add(csum, csum2, pos);
3561+ if ((len -= copy) == 0)
3562+ return csum;
3563+ offset += copy;
3564+ to += copy;
3565+ pos += copy;
3566+ }
3567+ start = end;
3568+ }
3569+ BUG_ON(len);
3570+ return csum;
3571+}
3572+EXPORT_SYMBOL(skb_copy_and_csum_bits);
3573+
3574+void skb_copy_and_csum_dev(const struct sk_buff *skb, u8 *to)
3575+{
3576+ __wsum csum;
3577+ long csstart;
3578+
3579+ if (skb->ip_summed == CHECKSUM_PARTIAL)
3580+ csstart = skb_checksum_start_offset(skb);
3581+ else
3582+ csstart = skb_headlen(skb);
3583+
3584+ BUG_ON(csstart > skb_headlen(skb));
3585+
3586+ skb_copy_from_linear_data(skb, to, csstart);
3587+
3588+ csum = 0;
3589+ if (csstart != skb->len)
3590+ csum = skb_copy_and_csum_bits(skb, csstart, to + csstart,
3591+ skb->len - csstart, 0);
3592+
3593+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
3594+ long csstuff = csstart + skb->csum_offset;
3595+
3596+ *((__sum16 *)(to + csstuff)) = csum_fold(csum);
3597+ }
3598+}
3599+EXPORT_SYMBOL(skb_copy_and_csum_dev);
3600+
3601+/**
3602+ * skb_dequeue - remove from the head of the queue
3603+ * @list: list to dequeue from
3604+ *
3605+ * Remove the head of the list. The list lock is taken so the function
3606+ * may be used safely with other locking list functions. The head item is
3607+ * returned or %NULL if the list is empty.
3608+ */
3609+
3610+struct sk_buff *skb_dequeue(struct sk_buff_head *list)
3611+{
3612+ unsigned long flags;
3613+ struct sk_buff *result;
3614+
3615+ spin_lock_irqsave(&list->lock, flags);
3616+ result = __skb_dequeue(list);
3617+ spin_unlock_irqrestore(&list->lock, flags);
3618+ return result;
3619+}
3620+EXPORT_SYMBOL(skb_dequeue);
3621+
3622+/**
3623+ * skb_dequeue_tail - remove from the tail of the queue
3624+ * @list: list to dequeue from
3625+ *
3626+ * Remove the tail of the list. The list lock is taken so the function
3627+ * may be used safely with other locking list functions. The tail item is
3628+ * returned or %NULL if the list is empty.
3629+ */
3630+struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list)
3631+{
3632+ unsigned long flags;
3633+ struct sk_buff *result;
3634+
3635+ spin_lock_irqsave(&list->lock, flags);
3636+ result = __skb_dequeue_tail(list);
3637+ spin_unlock_irqrestore(&list->lock, flags);
3638+ return result;
3639+}
3640+EXPORT_SYMBOL(skb_dequeue_tail);
3641+
3642+/**
3643+ * skb_queue_purge - empty a list
3644+ * @list: list to empty
3645+ *
3646+ * Delete all buffers on an &sk_buff list. Each buffer is removed from
3647+ * the list and one reference dropped. This function takes the list
3648+ * lock and is atomic with respect to other list locking functions.
3649+ */
3650+void skb_queue_purge(struct sk_buff_head *list)
3651+{
3652+ struct sk_buff *skb;
3653+ while ((skb = skb_dequeue(list)) != NULL)
3654+ kfree_skb(skb);
3655+}
3656+EXPORT_SYMBOL(skb_queue_purge);
3657+
3658+/**
3659+ * skb_queue_head - queue a buffer at the list head
3660+ * @list: list to use
3661+ * @newsk: buffer to queue
3662+ *
3663+ * Queue a buffer at the start of the list. This function takes the
3664+ * list lock and can be used safely with other locking &sk_buff
3665+ * functions.
3666+ *
3667+ * A buffer cannot be placed on two lists at the same time.
3668+ */
3669+void skb_queue_head(struct sk_buff_head *list, struct sk_buff *newsk)
3670+{
3671+ unsigned long flags;
3672+
3673+ spin_lock_irqsave(&list->lock, flags);
3674+ __skb_queue_head(list, newsk);
3675+ spin_unlock_irqrestore(&list->lock, flags);
3676+}
3677+EXPORT_SYMBOL(skb_queue_head);
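Since every helper here takes the queue lock with irqsave, a producer in IRQ context and a consumer in process context need no extra locking of their own; a hypothetical sketch using only the functions defined above:

	static struct sk_buff_head demo_queue;	/* skb_queue_head_init() at probe */

	static void demo_producer(struct sk_buff *skb)	/* any context */
	{
		skb_queue_head(&demo_queue, skb);
	}

	static struct sk_buff *demo_consumer(void)	/* NULL when empty */
	{
		return skb_dequeue_tail(&demo_queue);	/* FIFO w.r.t. the head */
	}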
3678+
3679+/**
3680+ * skb_queue_tail - queue a buffer at the list tail
3681+ * @list: list to use
3682+ * @newsk: buffer to queue
3683+ *
3684+ * Queue a buffer at the tail of the list. This function takes th