--- /dev/null
+From jacliburn@bellsouth.net Sat Jun 21 23:04:00 2008
+From: Jay Cliburn <jacliburn@bellsouth.net>
+Date: Thu, 19 Jun 2008 20:27:55 -0500
+Subject: atl1: relax eeprom mac address error check
+To: stable@kernel.org
+Cc: csnook@redhat.com, advantis@gmx.net, jgarzik@redhat.com
+Message-ID: <20080619202755.7a934026@osprey.hogchain.net>
+
+
+From: Radu Cristescu <advantis@gmx.net>
+
+upstream commit: 58c7821c4264a7ddd6f0c31c5caaf393b3897f10
+
+The atl1 driver tries to determine the MAC address as follows (a rough C
+sketch of the fallback order appears after this list):
+
+ - If an EEPROM exists, read the MAC address from EEPROM and
+ validate it.
+ - If an EEPROM doesn't exist, try to read a MAC address from
+ SPI flash.
+ - If that fails, try to read a MAC address directly from the
+ MAC Station Address register.
+ - If that fails, assign a random MAC address provided by the
+ kernel.
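+
+A rough C sketch of that fallback order, with the change this patch makes
+marked. The helper names here (eeprom_present, read_eeprom_mac, read_spi_mac,
+read_station_addr, is_valid_mac, assign_random_mac) are hypothetical
+stand-ins, not the driver's real functions:
+
+	static int get_permanent_mac(unsigned char *addr)
+	{
+		if (eeprom_present()) {
+			if (read_eeprom_mac(addr) == 0 && is_valid_mac(addr))
+				return 0;
+			/* after this patch: an invalid (e.g. all-zeros)
+			 * EEPROM address no longer returns an error here;
+			 * we fall through and keep looking */
+		}
+		if (read_spi_mac(addr) == 0 && is_valid_mac(addr))
+			return 0;
+		if (read_station_addr(addr) == 0 && is_valid_mac(addr))
+			return 0;
+		assign_random_mac(addr);	/* kernel-provided random MAC */
+		return 0;
+	}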
+
+We now have a report of a system fitted with an EEPROM containing all
+zeros where we expect the MAC address to be, and we currently handle
+this as an error condition. Turns out, on this system the BIOS writes
+a valid MAC address to the NIC's MAC Station Address register, but we
+never try to read it because we return an error when we find the all-
+zeros address in EEPROM.
+
+This patch relaxes the error check and continues looking for a MAC
+address even if it finds an illegal one in EEPROM.
+
+http://ubuntuforums.org/showthread.php?t=562617
+
+[jacliburn@bellsouth.net: backport to 2.6.25.7]
+
+Signed-off-by: Radu Cristescu <advantis@gmx.net>
+Signed-off-by: Jay Cliburn <jacliburn@bellsouth.net>
+Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
+
+---
+ drivers/net/atl1/atl1_hw.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+--- a/drivers/net/atl1/atl1_hw.c
++++ b/drivers/net/atl1/atl1_hw.c
+@@ -250,7 +250,6 @@ static int atl1_get_permanent_address(st
+ memcpy(hw->perm_mac_addr, eth_addr, ETH_ALEN);
+ return 0;
+ }
+- return 1;
+ }
+
+ /* see if SPI FLAGS exist ? */
+atl1-relax-eeprom-mac-address-error-check.patch
reinstate-zero_page-optimization-in-get_user_pages-and-fix-xip.patch
sctp-make-sure-n-sizeof-does-not-overflow.patch
+x86-use-bootmem_exclusive-on-32-bit.patch
+x86-set-pae-physical_mask_shift-to-44-bits.patch
--- /dev/null
+From jejb@kernel.org Sat Jun 21 23:06:28 2008
+From: Jeremy Fitzhardinge <jeremy@goop.org>
+Date: Fri, 20 Jun 2008 21:32:12 GMT
+Subject: x86: set PAE PHYSICAL_MASK_SHIFT to 44 bits.
+To: jejb@kernel.org, stable@kernel.org
+Message-ID: <200806202132.m5KLWCHB017874@hera.kernel.org>
+
+From: Jeremy Fitzhardinge <jeremy@goop.org>
+
+commit ad524d46f36bbc32033bb72ba42958f12bf49b06 upstream
+
+When a 64-bit x86 processor runs in 32-bit PAE mode, a pte can
+potentially have the same number of physical address bits as the
+64-bit host ("Enhanced Legacy PAE Paging"). This means, in theory,
+we could have up to 52 bits of physical address in a pte.
+
+The 32-bit kernel uses a 32-bit unsigned long to represent a pfn.
+This means that it can only represent physical addresses up to 32+12=44
+bits wide. Rather than widening pfns everywhere, just set 2^44 as the
+Linux x86_32-PAE architectural limit for physical address size.
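+
+A standalone arithmetic check of the 32+12=44 figure (plain C, not kernel
+code; pfn_bits and page_shift are just local restatements of the numbers in
+the text above):
+
+	#include <stdio.h>
+	#include <stdint.h>
+
+	int main(void)
+	{
+		unsigned pfn_bits = 32;		/* pfn held in a 32-bit unsigned long */
+		unsigned page_shift = 12;	/* 4 KiB pages */
+
+		/* Largest physical byte address a 32-bit pfn can name. */
+		uint64_t max_phys = ((uint64_t)1 << (pfn_bits + page_shift)) - 1;
+		/* Mask implied by __PHYSICAL_MASK_SHIFT == 44. */
+		uint64_t mask = ((uint64_t)1 << 44) - 1;
+
+		/* Both print 0xfffffffffff, i.e. 16 TiB - 1. */
+		printf("max addressable phys byte: 0x%llx\n",
+		       (unsigned long long)max_phys);
+		printf("44-bit physical mask:      0x%llx\n",
+		       (unsigned long long)mask);
+		return 0;
+	}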
+
+This is a bugfix for two cases:
+1. running a 32-bit PAE kernel on a machine with
+ more than 64GB RAM.
+2. running a 32-bit PAE Xen guest on a host machine with
+   more than 64GB RAM.
+
+In both cases, a pte could need to have more than 36 bits of physical
+address, and masking it to 36 bits will cause fairly severe havoc.
+
+Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
+Cc: Jan Beulich <jbeulich@novell.com>
+Signed-off-by: Ingo Molnar <mingo@elte.hu>
+Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
+
+---
+ include/asm-x86/page_32.h | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/include/asm-x86/page_32.h
++++ b/include/asm-x86/page_32.h
+@@ -14,7 +14,8 @@
+ #define __PAGE_OFFSET _AC(CONFIG_PAGE_OFFSET, UL)
+
+ #ifdef CONFIG_X86_PAE
+-#define __PHYSICAL_MASK_SHIFT 36
++/* 44=32+12, the limit we can fit into an unsigned long pfn */
++#define __PHYSICAL_MASK_SHIFT 44
+ #define __VIRTUAL_MASK_SHIFT 32
+ #define PAGETABLE_LEVELS 3
+
--- /dev/null
+From jejb@kernel.org Sat Jun 21 23:05:46 2008
+From: Bernhard Walle <bwalle@suse.de>
+Date: Fri, 20 Jun 2008 21:31:06 GMT
+Subject: x86: use BOOTMEM_EXCLUSIVE on 32-bit
+To: jejb@kernel.org, stable@kernel.org
+Message-ID: <200806202131.m5KLV67N017665@hera.kernel.org>
+
+From: Bernhard Walle <bwalle@suse.de>
+
+commit d3942cff620bea073fc4e3c8ed878eb1e84615ce upstream
+
+This patch uses BOOTMEM_EXCLUSIVE for the crashkernel reservation on i386 as
+well and prints an error message on failure.
+
+The patch is still for 2.6.26 since it is a bug fix only. The unification
+of reserve_crashkernel() between i386 and x86_64 should be done for 2.6.27.
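+
+The behavioural difference, in a rough sketch (the check and message mirror
+the hunk below; the BOOTMEM_DEFAULT semantics stated here are what the old
+code relied on):
+
+	/* BOOTMEM_DEFAULT: an overlap with an existing reservation is not
+	 * reported, so the old code could "reserve" memory that was already
+	 * claimed and never notice.
+	 * BOOTMEM_EXCLUSIVE: reserve_bootmem() returns a negative value on
+	 * such a conflict, which the new code checks and reports:
+	 */
+	if (reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE) < 0) {
+		printk(KERN_INFO "crashkernel reservation "
+			"failed - memory is in use\n");
+		return;
+	}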
+
+Signed-off-by: Bernhard Walle <bwalle@suse.de>
+Signed-off-by: Ingo Molnar <mingo@elte.hu>
+Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
+
+---
+ arch/x86/kernel/setup_32.c | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/setup_32.c
++++ b/arch/x86/kernel/setup_32.c
+@@ -483,10 +483,16 @@ static void __init reserve_crashkernel(v
+ (unsigned long)(crash_size >> 20),
+ (unsigned long)(crash_base >> 20),
+ (unsigned long)(total_mem >> 20));
++
++ if (reserve_bootmem(crash_base, crash_size,
++ BOOTMEM_EXCLUSIVE) < 0) {
++ printk(KERN_INFO "crashkernel reservation "
++ "failed - memory is in use\n");
++ return;
++ }
++
+ crashk_res.start = crash_base;
+ crashk_res.end = crash_base + crash_size - 1;
+- reserve_bootmem(crash_base, crash_size,
+- BOOTMEM_DEFAULT);
+ } else
+ printk(KERN_INFO "crashkernel reservation failed - "
+ "you have to specify a base address\n");