From 3c93717cfa51316e4dbb471e7c0f9d243359d5f8 Mon Sep 17 00:00:00 2001
From: Alex,Shi <alex.shi@intel.com>
Date: Thu, 17 Jun 2010 14:08:13 +0800
Subject: sched: Fix over-scheduling bug

From: Alex,Shi <alex.shi@intel.com>

commit 3c93717cfa51316e4dbb471e7c0f9d243359d5f8 upstream.

Commit e70971591 ("sched: Optimize unused cgroup configuration") introduced
an imbalanced scheduling bug.

If CGROUP is not used, update_h_load() does not update h_load. When the
system has far more runnable tasks than logical CPUs, the stale
cfs_rq[cpu]->h_load value causes load_balance() to pull too many tasks to
the local CPU from the busiest CPU, so the busiest CPU keeps changing in a
round-robin fashion. That hurts performance.

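To illustrate the mechanism, here is a minimal, self-contained sketch
(not the kernel's actual code; the helper load_to_pull() and the numbers
are made up for demonstration). The idea is that the fair-group balancer
scales the load it wants to move from a group by the group's weight
relative to its hierarchical load (h_load); if update_h_load() returns
early and h_load is left at 0, that estimate explodes and far too many
tasks get pulled.

    #include <stdio.h>

    /* Hypothetical helper mirroring the weight/h_load scaling idea. */
    static unsigned long load_to_pull(unsigned long rem_load_move,
                                      unsigned long cfs_rq_weight,
                                      unsigned long h_load)
    {
            /* "+ 1" only guards against dividing by zero */
            return rem_load_move * cfs_rq_weight / (h_load + 1);
    }

    int main(void)
    {
            unsigned long rem    = 1024;  /* load still to be moved */
            unsigned long weight = 4096;  /* group weight on the busiest CPU */

            /* h_load kept in sync with the weight: pull roughly what was asked */
            printf("h_load updated: pull %lu\n",
                   load_to_pull(rem, weight, weight));

            /* h_load never updated (stale 0): estimate is ~4096x too large */
            printf("h_load stale:   pull %lu\n",
                   load_to_pull(rem, weight, 0));
            return 0;
    }

Compiled and run, the second line reports a pull target thousands of times
larger than the first, which is the over-scheduling behaviour described
above.
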
The issue was originally found with a scientific-calculation workload
developed by Yanmin. With that commit applied, the workload's performance
drops by about 40%.

CPU     before    after

00   :    2     :    7
01   :    1     :    7
02   :   11     :    6
03   :   12     :    7
04   :    6     :    6
05   :   11     :    7
06   :   10     :    6
07   :   12     :    7
08   :   11     :    6
09   :   12     :    6
10   :    1     :    6
11   :    1     :    6
12   :    6     :    6
13   :    2     :    6
14   :    2     :    6
15   :    1     :    6

Reviewed-by: Yanmin zhang <yanmin.zhang@intel.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1276754893.9452.5442.camel@debian>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

---
 kernel/sched.c |    3 ---
 1 file changed, 3 deletions(-)

--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1717,9 +1717,6 @@ static void update_shares_locked(struct
 
 static void update_h_load(long cpu)
 {
-	if (root_task_group_empty())
-		return;
-
 	walk_tg_tree(tg_load_down, tg_nop, (void *)cpu);
 }
 