From: Greg Kroah-Hartman
Date: Wed, 17 Apr 2013 14:37:17 +0000 (-0700)
Subject: 3.0-stable patches
X-Git-Tag: v3.8.9~31
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=2f0f6052c701a721f63304ced312fad81ae84246;p=thirdparty%2Fkernel%2Fstable-queue.git

3.0-stable patches

added patches:
	hrtimer-don-t-reinitialize-a-cpu_base-lock-on-cpu_up.patch
---

diff --git a/queue-3.0/hrtimer-don-t-reinitialize-a-cpu_base-lock-on-cpu_up.patch b/queue-3.0/hrtimer-don-t-reinitialize-a-cpu_base-lock-on-cpu_up.patch
new file mode 100644
index 00000000000..40fc6bb2f6e
--- /dev/null
+++ b/queue-3.0/hrtimer-don-t-reinitialize-a-cpu_base-lock-on-cpu_up.patch
@@ -0,0 +1,89 @@
+From 84cc8fd2fe65866e49d70b38b3fdf7219dd92fe0 Mon Sep 17 00:00:00 2001
+From: Michael Bohan
+Date: Tue, 19 Mar 2013 19:19:25 -0700
+Subject: hrtimer: Don't reinitialize a cpu_base lock on CPU_UP
+
+From: Michael Bohan
+
+commit 84cc8fd2fe65866e49d70b38b3fdf7219dd92fe0 upstream.
+
+The current code makes the assumption that a cpu_base lock won't be
+held if the CPU corresponding to that cpu_base is offline, which isn't
+always true.
+
+If a hrtimer is not queued, then it will not be migrated by
+migrate_hrtimers() when a CPU is offlined. Therefore, the hrtimer's
+cpu_base may still point to a CPU which has subsequently gone offline
+if the timer wasn't enqueued at the time the CPU went down.
+
+Normally this wouldn't be a problem, but a cpu_base's lock is blindly
+reinitialized each time a CPU is brought up. If a CPU is brought
+online during the period that another thread is performing a hrtimer
+operation on a stale hrtimer, then the lock will be reinitialized
+under its feet, and a SPIN_BUG() like the following will be observed:
+
+<0>[ 28.082085] BUG: spinlock already unlocked on CPU#0, swapper/0/0
+<0>[ 28.087078] lock: 0xc4780b40, value 0x0 .magic: dead4ead, .owner: /-1, .owner_cpu: -1
+<4>[ 42.451150] [] (unwind_backtrace+0x0/0x120) from [] (do_raw_spin_unlock+0x44/0xdc)
+<4>[ 42.460430] [] (do_raw_spin_unlock+0x44/0xdc) from [] (_raw_spin_unlock+0x8/0x30)
+<4>[ 42.469632] [] (_raw_spin_unlock+0x8/0x30) from [] (__hrtimer_start_range_ns+0x1e4/0x4f8)
+<4>[ 42.479521] [] (__hrtimer_start_range_ns+0x1e4/0x4f8) from [] (hrtimer_start+0x20/0x28)
+<4>[ 42.489247] [] (hrtimer_start+0x20/0x28) from [] (rcu_idle_enter_common+0x1ac/0x320)
+<4>[ 42.498709] [] (rcu_idle_enter_common+0x1ac/0x320) from [] (rcu_idle_enter+0xa0/0xb8)
+<4>[ 42.508259] [] (rcu_idle_enter+0xa0/0xb8) from [] (cpu_idle+0x24/0xf0)
+<4>[ 42.516503] [] (cpu_idle+0x24/0xf0) from [] (rest_init+0x88/0xa0)
+<4>[ 42.524319] [] (rest_init+0x88/0xa0) from [] (start_kernel+0x3d0/0x434)
+
+As an example, this particular crash occurred when hrtimer_start() was
+executed on CPU #0. The code locked the hrtimer's current cpu_base
+corresponding to CPU #1. CPU #0 then tried to switch the hrtimer's
+cpu_base to an optimal CPU which was online. In this case, it selected
+the cpu_base corresponding to CPU #3.
+
+Before it could proceed, CPU #1 came online and reinitialized the
+spinlock corresponding to its cpu_base. Thus now CPU #0 held a lock
+which was reinitialized. When CPU #0 finally ended up unlocking the
+old cpu_base corresponding to CPU #1 so that it could switch to CPU
+#3, we hit this SPIN_BUG() above while in switch_hrtimer_base().
+
+CPU #0                                  CPU #1
+----                                    ----
+...
+hrtimer_start()
+lock_hrtimer_base(base #1)
+...                                     init_hrtimers_cpu()
+switch_hrtimer_base()                   ...
+...                                     raw_spin_lock_init(&cpu_base->lock)
+raw_spin_unlock(&cpu_base->lock)        ...
+
+Solve this by statically initializing the lock.
+
+Signed-off-by: Michael Bohan
+Link: http://lkml.kernel.org/r/1363745965-23475-1-git-send-email-mbohan@codeaurora.org
+Signed-off-by: Thomas Gleixner
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ kernel/hrtimer.c |    3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/kernel/hrtimer.c
++++ b/kernel/hrtimer.c
+@@ -61,6 +61,7 @@
+ DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
+ {
+ 
++	.lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock),
+ 	.clock_base =
+ 	{
+ 		{
+@@ -1640,8 +1641,6 @@ static void __cpuinit init_hrtimers_cpu(
+ 	struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
+ 	int i;
+ 
+-	raw_spin_lock_init(&cpu_base->lock);
+-
+ 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
+ 		cpu_base->clock_base[i].cpu_base = cpu_base;
+ 		timerqueue_init_head(&cpu_base->clock_base[i].active);
diff --git a/queue-3.0/series b/queue-3.0/series
new file mode 100644
index 00000000000..f98165c08e1
--- /dev/null
+++ b/queue-3.0/series
@@ -0,0 +1 @@
+hrtimer-don-t-reinitialize-a-cpu_base-lock-on-cpu_up.patch
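
For anyone studying the race rather than just applying the patch, below is a
minimal userspace sketch of the same hazard. It is not part of the queued
patch: POSIX threads and a pthread mutex stand in for the per-CPU bases and
the raw spinlock, and every name in it (base_lock, timer_op, cpu_up) is
invented for illustration. Thread A plays the CPU #0 path that holds a
cpu_base lock across an hrtimer operation; thread B plays the CPU #1
bring-up path. Because the lock is statically initialized, echoing the
__RAW_SPIN_LOCK_UNLOCKED() initializer the patch adds, the bring-up path has
nothing left to reinitialize.

/*
 * Userspace analogy of the race described above (illustrative only,
 * not kernel code).  base_lock plays the role of a cpu_base->lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* The fix pattern: the lock is initialized exactly once, statically. */
static pthread_mutex_t base_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for CPU #0: lock_hrtimer_base() ... raw_spin_unlock() */
static void *timer_op(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&base_lock);
	usleep(1000);	/* window during which the other "CPU" comes up */
	pthread_mutex_unlock(&base_lock);
	return NULL;
}

/* Stand-in for CPU #1: init_hrtimers_cpu() during CPU_UP */
static void *cpu_up(void *arg)
{
	(void)arg;
	/*
	 * The buggy pattern would reinitialize the lock here, e.g.
	 *	pthread_mutex_init(&base_lock, NULL);
	 * clobbering a lock the other thread may hold at this moment.
	 * With the static initializer above there is nothing to do.
	 */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, timer_op, NULL);
	pthread_create(&b, NULL, cpu_up, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("done: the lock was never reinitialized while held");
	return 0;
}

Build with "cc -pthread sketch.c". Moving the initialization into cpu_up(),
i.e. enabling the commented pthread_mutex_init() call, recreates the
equivalent of the raw_spin_lock_init() that this patch removes from
init_hrtimers_cpu().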