From: Julian Anastasov
Date: Thu, 9 Jul 2015 06:59:10 +0000 (+0300)
Subject: net: call rcu_read_lock early in process_backlog
X-Git-Tag: v3.4.111~87
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=eda27b22b978281403a883591331e7c65bf4a8f5;p=thirdparty%2Fkernel%2Fstable.git

net: call rcu_read_lock early in process_backlog

commit 2c17d27c36dcce2b6bf689f41a46b9e909877c21 upstream.

An incoming packet should be either in the backlog queue or in an
RCU read-side section. Otherwise, the final sequence of
flush_backlog() and synchronize_net() may miss packets
that can run without a device reference:

CPU 1                  CPU 2
                       skb->dev: no reference
                       process_backlog:__skb_dequeue
                       process_backlog:local_irq_enable

on_each_cpu for
flush_backlog => IPI(hardirq): flush_backlog
  - packet not found in backlog

                       CPU delayed ...
synchronize_net
- no ongoing RCU
read-side sections

netdev_run_todo,
rcu_barrier: no
ongoing callbacks
                       __netif_receive_skb_core:rcu_read_lock
                       - too late
free dev
                       process packet for freed dev

Fixes: 6e583ce5242f ("net: eliminate refcounting in backlog queue")
Cc: Eric W. Biederman
Cc: Stephen Hemminger
Signed-off-by: Julian Anastasov
Signed-off-by: David S. Miller
[lizf: Backported to 3.4:
 - adjust context
 - no need to change "goto unlock" to "goto out"]
Signed-off-by: Zefan Li
---

diff --git a/net/core/dev.c b/net/core/dev.c
index 1e363d06b2e73..4f679bf4f12e0 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3191,8 +3191,6 @@ static int __netif_receive_skb(struct sk_buff *skb)
 
 	pt_prev = NULL;
 
-	rcu_read_lock();
-
 another_round:
 	__this_cpu_inc(softnet_data.processed);
 
@@ -3287,7 +3285,6 @@ ncls:
 	}
 
 out:
-	rcu_read_unlock();
 	return ret;
 }
 
@@ -3308,29 +3305,30 @@ out:
  */
 int netif_receive_skb(struct sk_buff *skb)
 {
+	int ret;
+
 	net_timestamp_check(netdev_tstamp_prequeue, skb);
 
 	if (skb_defer_rx_timestamp(skb))
 		return NET_RX_SUCCESS;
 
+	rcu_read_lock();
+
 #ifdef CONFIG_RPS
 	if (static_key_false(&rps_needed)) {
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
-		int cpu, ret;
-
-		rcu_read_lock();
-
-		cpu = get_rps_cpu(skb->dev, skb, &rflow);
+		int cpu = get_rps_cpu(skb->dev, skb, &rflow);
 
 		if (cpu >= 0) {
 			ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 			rcu_read_unlock();
 			return ret;
 		}
-		rcu_read_unlock();
 	}
 #endif
-	return __netif_receive_skb(skb);
+	ret = __netif_receive_skb(skb);
+	rcu_read_unlock();
+	return ret;
 }
 EXPORT_SYMBOL(netif_receive_skb);
 
@@ -3721,8 +3719,10 @@ static int process_backlog(struct napi_struct *napi, int quota)
 		unsigned int qlen;
 
 		while ((skb = __skb_dequeue(&sd->process_queue))) {
+			rcu_read_lock();
 			local_irq_enable();
 			__netif_receive_skb(skb);
+			rcu_read_unlock();
 			local_irq_disable();
 			input_queue_head_incr(sd);
 			if (++work >= quota) {
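
For readers less familiar with the pattern: the sketch below is a stand-alone
user-space analogue (written against liburcu's classic unprefixed API, not the
kernel code) of the ordering the patch enforces in process_backlog(). The
reader thread enters its RCU read-side critical section before dereferencing
the object it picked up, so the updater's synchronize_rcu() cannot return
while the object is still in use. Names such as struct item and its payload
field are made up for the illustration; the build line is an assumption.

/* Illustrative only; build with something like: gcc rcu_demo.c -lurcu -lpthread */
#include <urcu.h>		/* rcu_read_lock(), rcu_read_unlock(), synchronize_rcu() */
#include <pthread.h>
#include <stdlib.h>

struct item {
	int payload;		/* stands in for the skb's reference to its device */
};

static struct item *shared;	/* RCU-protected pointer, like a struct net_device */

static void *reader(void *arg)
{
	rcu_register_thread();	/* liburcu readers must register themselves */

	for (int i = 0; i < 100000; i++) {
		/*
		 * The whole use of the object sits inside the read-side
		 * section - this mirrors taking rcu_read_lock() before
		 * local_irq_enable()/__netif_receive_skb() in the patch.
		 */
		rcu_read_lock();
		struct item *it = rcu_dereference(shared);
		if (it)
			(void)it->payload;	/* "process the packet" */
		rcu_read_unlock();
	}

	rcu_unregister_thread();
	return NULL;
}

static void *updater(void *arg)
{
	for (int i = 0; i < 1000; i++) {
		struct item *newp = calloc(1, sizeof(*newp));
		struct item *oldp = shared;

		newp->payload = i;
		rcu_assign_pointer(shared, newp);
		synchronize_rcu();	/* analogous to flush_backlog() + synchronize_net() */
		free(oldp);		/* safe: no reader can still see oldp */
	}
	return NULL;
}

int main(void)
{
	pthread_t r, u;

	shared = calloc(1, sizeof(*shared));
	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&u, NULL, updater, NULL);
	pthread_join(r, NULL);
	pthread_join(u, NULL);
	free(shared);
	return 0;
}

If the reader instead dereferenced shared before rcu_read_lock(), the updater
could complete synchronize_rcu() and free the object in that window, which is
exactly the race the diagram above describes for a dequeued skb.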