From 3c93717cfa51316e4dbb471e7c0f9d243359d5f8 Mon Sep 17 00:00:00 2001
From: Alex,Shi <alex.shi@intel.com>
Date: Thu, 17 Jun 2010 14:08:13 +0800
Subject: sched: Fix over-scheduling bug

From: Alex,Shi <alex.shi@intel.com>

commit 3c93717cfa51316e4dbb471e7c0f9d243359d5f8 upstream.

Commit e70971591 ("sched: Optimize unused cgroup configuration") introduced
an imbalanced scheduling bug.

If we do not use CGROUP, update_h_load() never updates h_load. When the
system has far more tasks than logical CPUs, the stale
cfs_rq[cpu]->h_load value causes load_balance() to pull too many tasks
to the local CPU from the busiest CPU, so the busiest CPU keeps rotating
in a round robin. That hurts performance.

The issue was originally found with a scientific calculation workload
developed by Yanmin. With that commit, the workload's performance drops
by about 40%.

 CPU before after

 00 :  2 : 7
 01 :  1 : 7
 02 : 11 : 6
 03 : 12 : 7
 04 :  6 : 6
 05 : 11 : 7
 06 : 10 : 6
 07 : 12 : 7
 08 : 11 : 6
 09 : 12 : 6
 10 :  1 : 6
 11 :  1 : 6
 12 :  6 : 6
 13 :  2 : 6
 14 :  2 : 6
 15 :  1 : 6

Reviewed-by: Yanmin zhang <yanmin.zhang@intel.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1276754893.9452.5442.camel@debian>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

---
 kernel/sched.c |    3 ---
 1 file changed, 3 deletions(-)

--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1717,9 +1717,6 @@ static void update_shares_locked(struct
 
 static void update_h_load(long cpu)
 {
-	if (root_task_group_empty())
-		return;
-
 	walk_tg_tree(tg_load_down, tg_nop, (void *)cpu);
 }
 