--- /dev/null
+From stable-bounces@linux.kernel.org Wed Jun 4 15:09:45 2008
+Date: Wed, 4 Jun 2008 22:05:29 GMT
+Message-Id: <200806042205.m54M5TBH012074@hera.kernel.org>
+From: jejb@kernel.org
+To: jejb@kernel.org, stable@kernel.org
+Subject: x86, fpu: fix CONFIG_PREEMPT=y corruption of application's FPU stack
+
+From: Suresh Siddha <suresh.b.siddha@intel.com>
+
+upstream commit: 870568b39064cab2dd971fe57969916036982862
+
+Jürgen Mell reported an FPU state corruption bug under CONFIG_PREEMPT,
+and bisected it to commit v2.6.19-1363-gacc2076, "i386: add sleazy FPU
+optimization".
+
+Add tsk_used_math() checks to prevent calling math_state_restore(),
+which can sleep in the !tsk_used_math() case. This prevents making a
+blocking call in __switch_to().
+
+Apparently the "fpu_counter > 5" check is not enough, as in some
+signal handling and fork/exec scenarios fpu_counter > 5 and
+!tsk_used_math() can both be true at the same time.
+
+It's a side effect, though. This is the failing scenario (a toy
+user-space model of these steps follows the list):
+
+1. Process 'A' is in save_i387_ia32(), just after clear_used_math().
+
+2. It takes an interrupt and is preempted out.
+
+3. At the next context switch back to process 'A', the kernel tries to
+   restore the math state proactively and sees fpu_counter > 5 and
+   !tsk_used_math().
+
+4. This results in init_fpu() being called from __switch_to()'s
+   math_state_restore(), which reinitializes the FPU state. That
+   corrupted state is then saved/restored (save_i387_fxsave() and
+   restore_i387_fxsave()) during the remaining part of the signal
+   handling after the context switch.
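+
+Below is a minimal user-space toy model of the failure, a sketch only:
+the struct, the helper functions and the simulated "preemption" (the
+racy call order in main()) are simplified stand-ins invented for this
+illustration, not the kernel's actual definitions.
+
+#include <stdio.h>
+#include <stdbool.h>
+
+struct task {
+	bool used_math;       /* stand-in for tsk_used_math() state */
+	unsigned fpu_counter; /* consecutive timeslices using the FPU */
+	double fpu_state;     /* stand-in for the FPU register file */
+};
+
+/* Stand-in for tsk_used_math(). */
+static bool tsk_used_math(const struct task *t)
+{
+	return t->used_math;
+}
+
+/* Stand-in for init_fpu(): (re)initializes math state from scratch. */
+static void init_fpu(struct task *t)
+{
+	t->fpu_state = 0.0;   /* the application's value is lost here */
+	t->used_math = true;
+}
+
+/* Stand-in for math_state_restore(): initializes first if the task
+ * has no math state -- with the old check, this is what hit 'A'. */
+static void math_state_restore(struct task *t)
+{
+	if (!tsk_used_math(t))
+		init_fpu(t);
+	/* the (now wrong) state would be loaded into registers here */
+}
+
+int main(void)
+{
+	struct task a = { .used_math = true, .fpu_counter = 7,
+			  .fpu_state = 3.14 };
+
+	/* Step 1: 'A' in save_i387_ia32(), after clear_used_math(). */
+	a.used_math = false;
+
+	/* Steps 2-3: preempted; on switch-in the old check looks only
+	 * at fpu_counter, so math_state_restore() runs anyway. */
+	if (a.fpu_counter > 5)               /* old, broken check */
+		math_state_restore(&a);      /* init_fpu() wipes state */
+
+	/* Step 4: the zeroed state is what the signal path now sees. */
+	printf("old check: fpu_state = %g (was 3.14)\n", a.fpu_state);
+
+	/* With the fix, the tsk_used_math() guard skips the restore
+	 * and the application's saved state is left alone. */
+	struct task b = { .used_math = false, .fpu_counter = 7,
+			  .fpu_state = 3.14 };
+	if (tsk_used_math(&b) && b.fpu_counter > 5)  /* fixed check */
+		math_state_restore(&b);
+	printf("new check: fpu_state = %g (preserved)\n", b.fpu_state);
+	return 0;
+}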
+
+Bisected-by: Jürgen Mell <j.mell@t-online.de>
+Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
+Tested-by: Jürgen Mell <j.mell@t-online.de>
+Signed-off-by: Ingo Molnar <mingo@elte.hu>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Chris Wright <chrisw@sous-sol.org>
+---
+ arch/x86/kernel/process_32.c | 5 ++++-
+ arch/x86/kernel/process_64.c | 5 ++++-
+ 2 files changed, 8 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -710,8 +710,11 @@ struct task_struct * __switch_to(struct
+ /* If the task has used fpu the last 5 timeslices, just do a full
+ * restore of the math state immediately to avoid the trap; the
+ * chances of needing FPU soon are obviously high now
++ *
++ * tsk_used_math() checks prevent calling math_state_restore(),
++ * which can sleep in the case of !tsk_used_math()
+ */
+- if (next_p->fpu_counter > 5)
++ if (tsk_used_math(next_p) && next_p->fpu_counter > 5)
+ math_state_restore();
+
+ /*
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -696,8 +696,11 @@ __switch_to(struct task_struct *prev_p,
+ /* If the task has used fpu the last 5 timeslices, just do a full
+ * restore of the math state immediately to avoid the trap; the
+ * chances of needing FPU soon are obviously high now
++ *
++ * tsk_used_math() checks prevent calling math_state_restore(),
++ * which can sleep in the case of !tsk_used_math()
+ */
+- if (next_p->fpu_counter>5)
++ if (tsk_used_math(next_p) && next_p->fpu_counter > 5)
+ math_state_restore();
+ return prev_p;
+ }