--- /dev/null
+From 0c824b51b338c808de650b440ba5f9f4a725f7fc Mon Sep 17 00:00:00 2001
+From: Tony Battersby <tonyb@cybernetics.com>
+Date: Tue, 16 Oct 2007 22:29:52 +0200
+Subject: [PATCH] ide: fix serverworks.c UDMA regression
+Message-Id: <200710162125.41987.bzolnier@gmail.com>
+
+From: Tony Battersby <tonyb@cybernetics.com>
+
+patch 0c824b51b338c808de650b440ba5f9f4a725f7fc in mainline.
+
+The patch described by the following excerpt from ChangeLog-2.6.22 makes
+it impossible to use UDMA on a Tyan S2707 motherboard (SvrWks CSB5):
+
+commit 2d5eaa6dd744a641e75503232a01f52d0768884c
+Author: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
+Date: Thu May 10 00:01:08 2007 +0200
+
+ ide: rework the code for selecting the best DMA transfer mode (v3)
+
+ ...
+
+This one-line patch against 2.6.23 fixes the problem.
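+
+For context, svwks_udma_filter() returns a bitmask in which bit n set
+means UDMA mode n is allowed, so before this patch "mode = 3" fell
+through to the default case and returned 0x00, disabling UDMA entirely.
+A standalone sketch of that mapping (illustrative only, not the
+driver's exact code):
+
+	#include <stdint.h>
+
+	/* Hypothetical helper: one bit per permitted UDMA mode. */
+	uint8_t udma_mask_for_mode(uint8_t mode)
+	{
+		switch (mode) {
+		case 3:  return 0x3f;	/* UDMA 0-5 */
+		case 2:  return 0x1f;	/* UDMA 0-4 */
+		case 1:  return 0x07;	/* UDMA 0-2 */
+		default: return 0x00;	/* no UDMA modes allowed */
+		}
+	}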
+
+Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
+Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
+
+---
+ drivers/ide/pci/serverworks.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/ide/pci/serverworks.c
++++ b/drivers/ide/pci/serverworks.c
+@@ -101,6 +101,7 @@ static u8 svwks_udma_filter(ide_drive_t
+ mode = 2;
+
+ switch(mode) {
++ case 3: mask = 0x3f; break;
+ case 2: mask = 0x1f; break;
+ case 1: mask = 0x07; break;
+ default: mask = 0x00; break;
--- /dev/null
+From stable-bounces@linux.kernel.org Tue Oct 16 23:25:28 2007
+From: akpm@linux-foundation.org
+Date: Tue, 16 Oct 2007 23:18:32 -0700
+Subject: writeback: don't propagate AOP_WRITEPAGE_ACTIVATE
+To: torvalds@linux-foundation.org
+Cc: akpm@linux-foundation.org, stable@kernel.org
+Message-ID: <200710170618.l9H6IWq3005517@imap1.linux-foundation.org>
+
+
+From: Andrew Morton <akpm@linux-foundation.org>
+
+patch e423003028183df54f039dfda8b58c49e78c89d7 in mainline.
+
+This is a writeback-internal marker, but we're propagating it all the way back
+to userspace!
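+
+AOP_WRITEPAGE_ACTIVATE is a hint from ->writepage() to the writeback
+loop ("re-activate this page, I didn't write it"); it is not an error,
+so the loop must unlock the page and report success upward, as the diff
+below does.  A standalone illustration of the same pattern (hypothetical
+names, not kernel code): an internal status code is absorbed by the
+layer that understands it and never escapes to callers.
+
+	#include <stdio.h>
+
+	#define INTERNAL_DEFER 1000	/* stand-in for AOP_WRITEPAGE_ACTIVATE */
+
+	static int write_one(int page)
+	{
+		/* The backend may ask to defer a page; that is not a failure. */
+		return (page % 3 == 0) ? INTERNAL_DEFER : 0;
+	}
+
+	static int write_all(int npages)
+	{
+		for (int i = 0; i < npages; i++) {
+			int ret = write_one(i);
+			if (ret == INTERNAL_DEFER)
+				ret = 0;	/* consume the internal marker here */
+			if (ret)
+				return ret;	/* only real errors propagate */
+		}
+		return 0;
+	}
+
+	int main(void)
+	{
+		printf("%d\n", write_all(10));	/* prints 0: nothing leaked */
+		return 0;
+	}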
+
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+
+
+---
+ mm/page-writeback.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -674,8 +674,10 @@ retry:
+
+ ret = (*writepage)(page, wbc, data);
+
+- if (unlikely(ret == AOP_WRITEPAGE_ACTIVATE))
++ if (unlikely(ret == AOP_WRITEPAGE_ACTIVATE)) {
+ unlock_page(page);
++ ret = 0;
++ }
+ if (ret || (--(wbc->nr_to_write) <= 0))
+ done = 1;
+ if (wbc->nonblocking && bdi_write_congested(bdi)) {
--- /dev/null
+From edaf420fdc122e7a42326fe39274c8b8c9b19d41 Mon Sep 17 00:00:00 2001
+From: Dave Johnson <djohnson@sw.starentnetworks.com>
+Date: Tue, 23 Oct 2007 22:37:22 +0200
+Subject: [PATCH] x86: fix TSC clock source calibration error
+Message-ID: <20071018085713.GA11022@elte.hu>
+
+From: Dave Johnson <djohnson@sw.starentnetworks.com>
+
+patch edaf420fdc122e7a42326fe39274c8b8c9b19d41 in mainline.
+
+I ran into this problem on a system that was unable to obtain NTP sync
+because the clock was running very slow (over 10000ppm slow). ntpd had
+declared all of its peers 'reject' with a 'peer_dist' reason.
+
+On investigation, the tsc_khz variable was significantly incorrect,
+causing xtime to run slow. After a reboot tsc_khz was correct, so I
+did a reboot test to see how often the problem occurred:
+
+Test was done on a 2000 MHz Xeon system. Of 689 reboots, 8 of them
+had unacceptable tsc_khz values (>500ppm):
+
+ range of tsc_khz # of boots % of boots
+ ---------------- ---------- ----------
+ < 1999750 0 0.000%
+1999750 - 1999800 21 3.048%
+1999800 - 1999850 166 24.128%
+1999850 - 1999900 241 35.029%
+1999900 - 1999950 211 30.669%
+1999950 - 2000000 42 6.105%
+2000000 - 2000050 0 0.000%
+2000050 - 2000100 0 0.000%
+ [...]
+2000100 - 2015000 1 0.145% << BAD
+2015000 - 2030000 6 0.872% << BAD
+2030000 - 2045000 1 0.145% << BAD
+2045000 < 0 0.000%
+
+The worst boot was 2032.577 MHz, over 1.5% off!
+
+It appears that on rare occasions, mach_countup() is taking longer to
+complete than necessary.
+
+I suspect that this is caused by the CPU taking a periodic SMI
+interrupt right at the end of the 30ms calibration loop. This would
+cause the loop to delay while the SMI BIOS handler runs. The resulting
+TSC value is beyond what it actually should be, resulting in a higher
+tsc_khz.
+
+The patch below makes native_calculate_cpu_khz() take the best
+(shortest duration, lowest khz) run of its 3 calibration loops. If an
+SMI goes off causing a bad result (long duration, higher khz), it will
+be discarded.
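+
+A user-space approximation of the "keep the shortest run" idea
+(illustrative only; the kernel times mach_countup() against the TSC,
+here a hypothetical 30 ms busy-wait stands in for it):
+
+	#include <stdint.h>
+	#include <time.h>
+	#include <x86intrin.h>
+
+	static void delay_30ms(void)
+	{
+		struct timespec t0, t1;
+		clock_gettime(CLOCK_MONOTONIC, &t0);
+		do
+			clock_gettime(CLOCK_MONOTONIC, &t1);
+		while ((t1.tv_sec - t0.tv_sec) * 1000000000LL +
+		       (t1.tv_nsec - t0.tv_nsec) < 30000000LL);
+	}
+
+	uint64_t calibrate_tsc_khz(void)
+	{
+		uint64_t best = UINT64_MAX;
+
+		for (int i = 0; i < 3; i++) {
+			uint64_t start = __rdtsc();
+			delay_30ms();
+			uint64_t delta = __rdtsc() - start;
+			if (delta < best)	/* an SMI only makes a run longer */
+				best = delta;
+		}
+		return best / 30;	/* cycles per 30 ms -> cycles/ms = kHz */
+	}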
+
+With the patch applied, 300 boots of the same system produce good
+results:
+
+ range of tsc_khz # of boots % of boots
+ ---------------- ---------- ----------
+ < 1999750 0 0.000%
+1999750 - 1999800 30 10.000%
+1999800 - 1999850 166 55.333%
+1999850 - 1999900 89 29.667%
+1999900 - 1999950 15 5.000%
+1999950 < 0 0.000%
+
+Problem was found and tested against 2.6.18. Patch is against 2.6.22.
+
+Signed-off-by: Dave Johnson <djohnson@sw.starentnetworks.com>
+Signed-off-by: Ingo Molnar <mingo@elte.hu>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
+
+---
+ arch/i386/kernel/tsc.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/arch/i386/kernel/tsc.c
++++ b/arch/i386/kernel/tsc.c
+@@ -122,7 +122,7 @@ unsigned long native_calculate_cpu_khz(v
+ {
+ unsigned long long start, end;
+ unsigned long count;
+- u64 delta64;
++ u64 delta64 = (u64)ULLONG_MAX;
+ int i;
+ unsigned long flags;
+
+@@ -134,6 +134,7 @@ unsigned long native_calculate_cpu_khz(v
+ rdtscll(start);
+ mach_countup(&count);
+ rdtscll(end);
++ delta64 = min(delta64, (end - start));
+ }
+ /*
+ * Error: ECTCNEVERSET
+@@ -144,8 +145,6 @@ unsigned long native_calculate_cpu_khz(v
+ if (count <= 1)
+ goto err;
+
+- delta64 = end - start;
+-
+ /* cpu freq too fast: */
+ if (delta64 > (1ULL<<32))
+ goto err;