--- /dev/null
+From amitarora@in.ibm.com Sun Sep 5 16:36:14 2010
+From: Amit Arora <amitarora@in.ibm.com>
+Date: Wed, 19 May 2010 14:35:57 +0530
+Subject: sched: kill migration thread in CPU_POST_DEAD instead of CPU_DEAD
+To: stable <stable@kernel.org>
+Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Greg KH <greg@kroah.com>
+Message-ID: <a5ed721109833bb5953fd5e19c51e0c9e231d36e.1283514306.git.efault@gmx.de>
+
+From: Amit Arora <amitarora@in.ibm.com>
+
+[Fixed in a different manner upstream, due to rewrites in this area]
+
+Problem: In a stress test where heavy workloads were running alongside regular
+CPU offlining and onlining, a hang was observed. The system hangs at the point
+where migration_call() tries to kill the migration_thread of the dying CPU,
+which has just been moved to the current CPU. This migration thread never gets
+a chance to run (and die), because rt_throttled is set to 1 on current and is
+never cleared: the hrtimer that is supposed to replenish the rt bandwidth
+(sched_rt_period_timer) is tied to the CPU we just marked dead!
+
+Solution: This patch moves the killing of the migration thread to the
+"CPU_POST_DEAD" event. By then, all timers (including sched_rt_period_timer)
+should have been migrated, along with the other callbacks.
+
+Signed-off-by: Amit Arora <amitarora@in.ibm.com>
+Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
+Signed-off-by: Mike Galbraith <efault@gmx.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
+---
+ kernel/sched.c | 16 +++++++++++++---
+ 1 file changed, 13 insertions(+), 3 deletions(-)
+
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -7752,14 +7752,24 @@ migration_call(struct notifier_block *nf
+ cpu_rq(cpu)->migration_thread = NULL;
+ break;
+
++ case CPU_POST_DEAD:
++ /*
++ * Bring the migration thread down in CPU_POST_DEAD event,
++ * since the timers should have got migrated by now and thus
++ * we should not see a deadlock between trying to kill the
++ * migration thread and the sched_rt_period_timer.
++ */
++ rq = cpu_rq(cpu);
++ kthread_stop(rq->migration_thread);
++ put_task_struct(rq->migration_thread);
++ rq->migration_thread = NULL;
++ break;
++
+ case CPU_DEAD:
+ case CPU_DEAD_FROZEN:
+ cpuset_lock(); /* around calls to cpuset_cpus_allowed_lock() */
+ migrate_live_tasks(cpu);
+ rq = cpu_rq(cpu);
+- kthread_stop(rq->migration_thread);
+- put_task_struct(rq->migration_thread);
+- rq->migration_thread = NULL;
+ /* Idle task back to normal (off runqueue, low prio) */
+ spin_lock_irq(&rq->lock);
+ update_rq_clock(rq);
--- /dev/null
+From efault@gmx.de Sun Sep 5 16:38:18 2010
+From: Mike Galbraith <efault@gmx.de>
+Date: Thu, 26 Aug 2010 05:29:16 +0200
+Subject: sched: revert stable c6fc81a sched: Fix a race between ttwu() and migrate_task()
+To: stable <stable@kernel.org>
+Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Greg KH <greg@kroah.com>
+Message-ID: <08bb7f240b9a67919a23b9da22affb4ec0ab8cf4.1283514306.git.efault@gmx.de>
+
+From: Mike Galbraith <efault@gmx.de>
+
+This commit does not appear to have been meant for 32-stable, and it causes
+LTP's cpusets testcases to fail; revert it.
+
+Original commit text:
+
+sched: Fix a race between ttwu() and migrate_task()
+
+Based on commit e2912009fb7b715728311b0d8fe327a1432b3f79 upstream, but
+done differently as this issue is not present in .33 or .34 kernels due
+to rework in this area.
+
+If a task is in the TASK_WAKING state, then try_to_wake_up() is working
+on it, and it will place it on the correct cpu.
+
+This commit ensures that neither migrate_task() nor __migrate_task()
+calls set_task_cpu(p) while p is in the TASK_WAKING state. Otherwise,
+there could be two concurrent calls to set_task_cpu(p), resulting in
+the task's cfs_rq being inconsistent with its cpu.
+
+Signed-off-by: Mike Galbraith <efault@gmx.de>
+Cc: Ingo Molnar <mingo@elte.hu>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
+
+---
+ kernel/sched.c | 9 ++++-----
+ 1 file changed, 4 insertions(+), 5 deletions(-)
+
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -2123,10 +2123,12 @@ migrate_task(struct task_struct *p, int
+
+ /*
+ * If the task is not on a runqueue (and not running), then
+- * the next wake-up will properly place the task.
++ * it is sufficient to simply update the task's cpu field.
+ */
+- if (!p->se.on_rq && !task_running(rq, p))
++ if (!p->se.on_rq && !task_running(rq, p)) {
++ set_task_cpu(p, dest_cpu);
+ return 0;
++ }
+
+ init_completion(&req->done);
+ req->task = p;
+@@ -7217,9 +7219,6 @@ static int __migrate_task(struct task_st
+ /* Already moved. */
+ if (task_cpu(p) != src_cpu)
+ goto done;
+- /* Waking up, don't get in the way of try_to_wake_up(). */
+- if (p->state == TASK_WAKING)
+- goto fail;
+ /* Affinity changed (again). */
+ if (!cpumask_test_cpu(dest_cpu, &p->cpus_allowed))
+ goto fail;
 x86-tsc-sched-recompute-cyc2ns_offset-s-during-resume-from-sleep-states.patch
 pci-msi-remove-unsafe-and-unnecessary-hardware-access.patch
 pci-msi-restore-read_msi_msg_desc-add-get_cached_msi_msg_desc.patch
+sched-kill-migration-thread-in-cpu_post_dead-instead-of-cpu_dead.patch
+sched-revert-stable-c6fc81a-sched-fix-a-race-between-ttwu-and-migrate_task.patch