ipv4: disable bh while doing route gc
author    Marcelo Ricardo Leitner <mleitner@redhat.com>
          Mon, 13 Oct 2014 17:03:30 +0000 (14:03 -0300)
committer Ben Hutchings <ben@decadent.org.uk>
          Wed, 5 Nov 2014 20:27:47 +0000 (20:27 +0000)
Further tests revealed that moving the garbage collector to a work
queue and protecting it with a spinlock may leave the system prone to
soft lockups if the bottom half gets very busy.

It was reproduced with a set of firewall rules that REJECTed packets. If
the NIC bottom half handler ends up running on the same CPU that is
running the garbage collector on a very large cache, the garbage
collector will not be able to do its job due to the amount of work
needed for handling the REJECTs, and also won't reschedule.

The fix is to disable bottom halves while garbage collecting, as was
already the case originally (most calls to it came from softirqs).
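
The locking pattern the patch switches to can be sketched as follows
(an illustrative kernel-style fragment, not code from the patch itself):
spin_lock_bh() disables softirq processing on the local CPU before
taking the lock, so a busy NIC bottom half cannot preempt the garbage
collector while it holds rt_gc_lock.

```c
static DEFINE_SPINLOCK(rt_gc_lock);

static void rt_gc_sketch(void)
{
        spin_lock_bh(&rt_gc_lock);      /* softirqs off on this CPU */
        /* ... walk and prune the route cache ... */
        spin_unlock_bh(&rt_gc_lock);    /* softirqs re-enabled */
}
```

With plain spin_lock(), a softirq arriving on the same CPU would run
to completion before the lock holder could continue, starving the GC
under heavy REJECT traffic; the _bh variant defers that work until the
lock is released.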

Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
net/ipv4/route.c

index 6b7108e53fd92c1123736f1ba2bdb98606c4d7f5..8e79a9e04276c7a1ea4d45368909596aa68ee35d 100644 (file)
@@ -1000,7 +1000,7 @@ static void __do_rt_garbage_collect(int elasticity, int min_interval)
         * do not make it too frequently.
         */
 
-       spin_lock(&rt_gc_lock);
+       spin_lock_bh(&rt_gc_lock);
 
        RT_CACHE_STAT_INC(gc_total);
 
@@ -1103,7 +1103,7 @@ work_done:
            dst_entries_get_slow(&ipv4_dst_ops) < ipv4_dst_ops.gc_thresh)
                expire = ip_rt_gc_timeout;
 out:
-       spin_unlock(&rt_gc_lock);
+       spin_unlock_bh(&rt_gc_lock);
 }
 
 static void __rt_garbage_collect(struct work_struct *w)