mm-pagewalk.c-walk_page_range-should-avoid-vm_pfnmap-areas.patch
mm-thp-use-pmd_populate-to-update-the-pmd-with-pgtable_t-pointer.patch
scsi-ipr-need-to-reset-adapter-after-the-6th-eeh-error.patch
+x86-allow-fpu-to-be-used-at-interrupt-time-even-with-eagerfpu.patch
+x86-64-init-fix-a-possible-wraparound-bug-in-switchover-in-head_64.s.patch
+x86-range-fix-missing-merge-during-add-range.patch
+x86-crc32-pclmul-fix-build-with-older-binutils.patch
--- /dev/null
+From e9d0626ed43a41a3fc526d1df06122b0d4eac174 Mon Sep 17 00:00:00 2001
+From: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
+Date: Tue, 14 May 2013 14:48:58 +0800
+Subject: x86-64, init: Fix a possible wraparound bug in switchover in head_64.S
+
+From: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
+
+commit e9d0626ed43a41a3fc526d1df06122b0d4eac174 upstream.
+
+In head_64.S, a switchover is used to handle the kernel crossing the
+1G and 512G boundaries.
+
+And commit 8170e6bed465b4b0c7687f93e9948aca4358a33b
+ x86, 64bit: Use a #PF handler to materialize early mappings on demand
+said:
+ During the switchover in head_64.S, before #PF handler is available,
+ we use three pages to handle kernel crossing 1G, 512G boundaries with
+ sharing page by playing games with page aliasing: the same page is
+ mapped twice in the higher-level tables with appropriate wraparound.
+
+But look at the switchover code where we set up the PUD table:
+114 addq $4096, %rdx
+115 movq %rdi, %rax
+116 shrq $PUD_SHIFT, %rax
+117 andl $(PTRS_PER_PUD-1), %eax
+118 movq %rdx, (4096+0)(%rbx,%rax,8)
+119 movq %rdx, (4096+8)(%rbx,%rax,8)
+
+It seems line 119 has a potential bug. For example,
+if the kernel is loaded at physical address 511G+1008M, that is
+ 000000000 111111111 111111000 000000000000000000000
+and the kernel _end is 512G+2M, that is
+ 000000001 000000000 000000001 000000000000000000000
+So in this example, when we use the 2nd page to set up the PUD table
+(lines 114-119), rax is 511.
+In line 118, we put rdx, which is the address of the PMD page (the 3rd
+page), into entry 511 of the PUD table. But in line 119, the entry we
+calculate from (4096+8)(%rbx,%rax,8) lies beyond the PUD page; it
+should instead wrap around to entry 0 of the PUD table.
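+
+A minimal userspace C sketch of the index arithmetic (illustration
+only, not kernel code; the x86-64 values of PUD_SHIFT and PTRS_PER_PUD
+are assumed):
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    #define PUD_SHIFT    30              /* each PUD entry maps 1G */
+    #define PTRS_PER_PUD 512
+
+    int main(void)
+    {
+        /* kernel loaded at 511G + 1008M, as in the example above */
+        uint64_t phys = (511ULL << 30) + (1008ULL << 20);
+        unsigned idx = (phys >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
+
+        printf("line 118 writes entry %u\n", idx);       /* 511 */
+        /* line 119 blindly writes entry idx + 1 == 512, one slot
+         * past the end of the 512-entry PUD page */
+        printf("line 119 writes entry %u\n", idx + 1);   /* 512 */
+        /* the fix masks again, so the write wraps to entry 0 */
+        printf("with the fix:   entry %u\n",
+               (idx + 1) & (PTRS_PER_PUD - 1));          /* 0 */
+        return 0;
+    }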
+
+The patch fixes the bug.
+
+Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
+Link: http://lkml.kernel.org/r/5191DE5A.3020302@cn.fujitsu.com
+Signed-off-by: Yinghai Lu <yinghai@kernel.org>
+Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/head_64.S | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -115,8 +115,10 @@ startup_64:
+ movq %rdi, %rax
+ shrq $PUD_SHIFT, %rax
+ andl $(PTRS_PER_PUD-1), %eax
+- movq %rdx, (4096+0)(%rbx,%rax,8)
+- movq %rdx, (4096+8)(%rbx,%rax,8)
++ movq %rdx, 4096(%rbx,%rax,8)
++ incl %eax
++ andl $(PTRS_PER_PUD-1), %eax
++ movq %rdx, 4096(%rbx,%rax,8)
+
+ addq $8192, %rbx
+ movq %rdi, %rax
--- /dev/null
+From 5187b28ff08249ab8a162e802209ed04e271ca02 Mon Sep 17 00:00:00 2001
+From: Pekka Riikonen <priikone@iki.fi>
+Date: Mon, 13 May 2013 14:32:07 +0200
+Subject: x86: Allow FPU to be used at interrupt time even with eagerfpu
+
+From: Pekka Riikonen <priikone@iki.fi>
+
+commit 5187b28ff08249ab8a162e802209ed04e271ca02 upstream.
+
+With the addition of eagerfpu, irq_fpu_usable() now returns false
+negatives, especially in the case of ksoftirqd and an interrupted idle
+task, two common cases for FPU use (for example in networking/crypto).
+With eagerfpu=off, FPU use is possible in those contexts. This is
+because of the eagerfpu check in interrupted_kernel_fpu_idle():
+
+...
+ * For now, with eagerfpu we will return interrupted kernel FPU
+ * state as not-idle. TBD: Ideally we can change the return value
+ * to something like __thread_has_fpu(current). But we need to
+ * be careful of doing __thread_clear_has_fpu() before saving
+ * the FPU etc for supporting nested uses etc. For now, take
+ * the simple route!
+...
+ if (use_eager_fpu())
+ return 0;
+
+As eagerfpu is automatically "on" on those CPUs that also have features
+like AES-NI, this patch changes the eagerfpu check to return 1 as long
+as kernel_fpu_begin() has not been called yet. Once it has been called,
+__thread_has_fpu() starts returning 0.
+
+Notice that with eagerfpu, __thread_has_fpu() is always true initially.
+FPU use is thus always possible no matter what task is under us, unless
+the state has already been saved with kernel_fpu_begin().
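+
+A sketch of the usage pattern this enables (crypto_softirq_work() is a
+hypothetical example function; kernel_fpu_begin/end() and
+irq_fpu_usable() are the existing interfaces):
+
+    static void crypto_softirq_work(u8 *data, size_t len)
+    {
+        if (irq_fpu_usable()) {
+            kernel_fpu_begin();  /* saves current FPU state if needed */
+            /* ... SSE/AES-NI accelerated path ... */
+            kernel_fpu_end();
+        } else {
+            /* ... scalar fallback ... */
+        }
+    }
+
+Before this patch, the accelerated branch was rarely taken on eagerfpu
+CPUs when the interrupt landed on a kernel task such as ksoftirqd or
+the idle task, forcing the fallback path.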
+
+[ hpa: this is a performance regression, not a correctness regression,
+ but since it can be quite serious on CPUs which need encryption at
+ interrupt time I am marking this for urgent/stable. ]
+
+Signed-off-by: Pekka Riikonen <priikone@iki.fi>
+Link: http://lkml.kernel.org/r/alpine.GSO.2.00.1305131356320.18@git.silcnet.org
+Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/i387.c | 14 +++++---------
+ 1 file changed, 5 insertions(+), 9 deletions(-)
+
+--- a/arch/x86/kernel/i387.c
++++ b/arch/x86/kernel/i387.c
+@@ -22,23 +22,19 @@
+ /*
+ * Were we in an interrupt that interrupted kernel mode?
+ *
+- * For now, with eagerfpu we will return interrupted kernel FPU
+- * state as not-idle. TBD: Ideally we can change the return value
+- * to something like __thread_has_fpu(current). But we need to
+- * be careful of doing __thread_clear_has_fpu() before saving
+- * the FPU etc for supporting nested uses etc. For now, take
+- * the simple route!
+- *
+ * On others, we can do a kernel_fpu_begin/end() pair *ONLY* if that
+ * pair does nothing at all: the thread must not have fpu (so
+ * that we don't try to save the FPU state), and TS must
+ * be set (so that the clts/stts pair does nothing that is
+ * visible in the interrupted kernel thread).
++ *
++ * Except for the eagerfpu case when we return 1 unless we've already
++ * been eager and saved the state in kernel_fpu_begin().
+ */
+ static inline bool interrupted_kernel_fpu_idle(void)
+ {
+ if (use_eager_fpu())
+- return 0;
++ return __thread_has_fpu(current);
+
+ return !__thread_has_fpu(current) &&
+ (read_cr0() & X86_CR0_TS);
+@@ -78,8 +74,8 @@ void __kernel_fpu_begin(void)
+ struct task_struct *me = current;
+
+ if (__thread_has_fpu(me)) {
+- __save_init_fpu(me);
+ __thread_clear_has_fpu(me);
++ __save_init_fpu(me);
+ /* We do 'stts()' in __kernel_fpu_end() */
+ } else if (!use_eager_fpu()) {
+ this_cpu_write(fpu_owner_task, NULL);
--- /dev/null
+From 2baad6121e2b2fa3428ee6cb2298107be11ab23a Mon Sep 17 00:00:00 2001
+From: Jan Beulich <JBeulich@suse.com>
+Date: Wed, 29 May 2013 13:43:54 +0100
+Subject: x86, crc32-pclmul: Fix build with older binutils
+
+From: Jan Beulich <JBeulich@suse.com>
+
+commit 2baad6121e2b2fa3428ee6cb2298107be11ab23a upstream.
+
+binutils prior to 2.18 (e.g. the ones found on SLE10) don't support
+assembling PEXTRD, so a macro-based approach like the one for PCLMULQDQ
+in the same file should be used.
+
+This requires making the helper macros capable of recognizing 32-bit
+general purpose register operands.
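+
+For instance, with these macros in place,
+
+    PEXTRD 0x01, %xmm1, %eax
+
+should emit the same bytes an SSE4.1-aware assembler produces for
+"pextrd $0x01, %xmm1, %eax" (byte values worked out by hand from the
+macros below, so treat them as illustrative):
+
+    66 0f 3a 16 c8 01
+    |  |        |  `-- imm8
+    |  |        `----- ModRM: 0xc0 | eax(0) | xmm1(1) << 3 = 0xc8
+    |  `-------------- three-byte opcode 0f 3a 16
+    `----------------- operand-size prefix from PFX_OPD_SIZE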
+
+[ hpa: tagging for stable as it is a low risk build fix ]
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Link: http://lkml.kernel.org/r/51A6142A02000078000D99D8@nat28.tlf.novell.com
+Cc: Alexander Boyko <alexander_boyko@xyratex.com>
+Cc: Herbert Xu <herbert@gondor.apana.org.au>
+Cc: Huang Ying <ying.huang@intel.com>
+Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/crypto/crc32-pclmul_asm.S | 2 -
+ arch/x86/include/asm/inst.h | 74 ++++++++++++++++++++++++++++++++++++-
+ 2 files changed, 73 insertions(+), 3 deletions(-)
+
+--- a/arch/x86/crypto/crc32-pclmul_asm.S
++++ b/arch/x86/crypto/crc32-pclmul_asm.S
+@@ -241,6 +241,6 @@ fold_64:
+ pand %xmm3, %xmm1
+ PCLMULQDQ 0x00, CONSTANT, %xmm1
+ pxor %xmm2, %xmm1
+- pextrd $0x01, %xmm1, %eax
++ PEXTRD 0x01, %xmm1, %eax
+
+ ret
+--- a/arch/x86/include/asm/inst.h
++++ b/arch/x86/include/asm/inst.h
+@@ -9,12 +9,68 @@
+
+ #define REG_NUM_INVALID 100
+
+-#define REG_TYPE_R64 0
+-#define REG_TYPE_XMM 1
++#define REG_TYPE_R32 0
++#define REG_TYPE_R64 1
++#define REG_TYPE_XMM 2
+ #define REG_TYPE_INVALID 100
+
++ .macro R32_NUM opd r32
++ \opd = REG_NUM_INVALID
++ .ifc \r32,%eax
++ \opd = 0
++ .endif
++ .ifc \r32,%ecx
++ \opd = 1
++ .endif
++ .ifc \r32,%edx
++ \opd = 2
++ .endif
++ .ifc \r32,%ebx
++ \opd = 3
++ .endif
++ .ifc \r32,%esp
++ \opd = 4
++ .endif
++ .ifc \r32,%ebp
++ \opd = 5
++ .endif
++ .ifc \r32,%esi
++ \opd = 6
++ .endif
++ .ifc \r32,%edi
++ \opd = 7
++ .endif
++#ifdef CONFIG_X86_64
++ .ifc \r32,%r8d
++ \opd = 8
++ .endif
++ .ifc \r32,%r9d
++ \opd = 9
++ .endif
++ .ifc \r32,%r10d
++ \opd = 10
++ .endif
++ .ifc \r32,%r11d
++ \opd = 11
++ .endif
++ .ifc \r32,%r12d
++ \opd = 12
++ .endif
++ .ifc \r32,%r13d
++ \opd = 13
++ .endif
++ .ifc \r32,%r14d
++ \opd = 14
++ .endif
++ .ifc \r32,%r15d
++ \opd = 15
++ .endif
++#endif
++ .endm
++
+ .macro R64_NUM opd r64
+ \opd = REG_NUM_INVALID
++#ifdef CONFIG_X86_64
+ .ifc \r64,%rax
+ \opd = 0
+ .endif
+@@ -63,6 +119,7 @@
+ .ifc \r64,%r15
+ \opd = 15
+ .endif
++#endif
+ .endm
+
+ .macro XMM_NUM opd xmm
+@@ -118,10 +175,13 @@
+ .endm
+
+ .macro REG_TYPE type reg
++ R32_NUM reg_type_r32 \reg
+ R64_NUM reg_type_r64 \reg
+ XMM_NUM reg_type_xmm \reg
+ .if reg_type_r64 <> REG_NUM_INVALID
+ \type = REG_TYPE_R64
++ .elseif reg_type_r32 <> REG_NUM_INVALID
++ \type = REG_TYPE_R32
+ .elseif reg_type_xmm <> REG_NUM_INVALID
+ \type = REG_TYPE_XMM
+ .else
+@@ -162,6 +222,16 @@
+ .byte \imm8
+ .endm
+
++ .macro PEXTRD imm8 xmm gpr
++ R32_NUM extrd_opd1 \gpr
++ XMM_NUM extrd_opd2 \xmm
++ PFX_OPD_SIZE
++ PFX_REX extrd_opd1 extrd_opd2
++ .byte 0x0f, 0x3a, 0x16
++ MODRM 0xc0 extrd_opd1 extrd_opd2
++ .byte \imm8
++ .endm
++
+ .macro AESKEYGENASSIST rcon xmm1 xmm2
+ XMM_NUM aeskeygen_opd1 \xmm1
+ XMM_NUM aeskeygen_opd2 \xmm2
--- /dev/null
+From fbe06b7bae7c9cf6ab05168fce5ee93b2f4bae7c Mon Sep 17 00:00:00 2001
+From: Yinghai Lu <yinghai@kernel.org>
+Date: Fri, 17 May 2013 11:49:10 -0700
+Subject: x86, range: fix missing merge during add range
+
+From: Yinghai Lu <yinghai@kernel.org>
+
+commit fbe06b7bae7c9cf6ab05168fce5ee93b2f4bae7c upstream.
+
+Christian found that v3.9 does not work on an E350 when EFI is enabled.
+
+[ 1.658832] Trying to unpack rootfs image as initramfs...
+[ 1.679935] BUG: unable to handle kernel paging request at ffff88006e3fd000
+[ 1.686940] IP: [<ffffffff813661df>] memset+0x1f/0xb0
+[ 1.692010] PGD 1f77067 PUD 1f7a067 PMD 61420067 PTE 0
+
+but the early memtest reports that all memory can be accessed without
+problems.
+
+The early page tables are set up in the following sequence:
+[ 0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
+[ 0.000000] init_memory_mapping: [mem 0x6e600000-0x6e7fffff]
+[ 0.000000] init_memory_mapping: [mem 0x6c000000-0x6e5fffff]
+[ 0.000000] init_memory_mapping: [mem 0x00100000-0x6bffffff]
+[ 0.000000] init_memory_mapping: [mem 0x6e800000-0x6ea07fff]
+but later efi_enter_virtual_mode() wrongly tries to set up the mapping
+again:
+[ 0.010644] pid_max: default: 32768 minimum: 301
+[ 0.015302] init_memory_mapping: [mem 0x640c5000-0x6e3fcfff]
+That means the pfn_range_is_mapped() check fails even though the range
+is already mapped.
+
+It turns out that we have a bug in add_range_with_merge(): it does not
+merge ranges properly when a newly added range fills the hole between
+two existing ranges. In this case, [mem 0x00100000-0x6bffffff] is the
+hole between [mem 0x00000000-0x000fffff] and [mem 0x6c000000-0x6e7fffff].
+
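+A minimal standalone sketch of the failure mode (a simplified
+re-implementation for illustration, not the kernel code; ranges are
+taken as end-exclusive):
+
+    #include <stdio.h>
+
+    struct range { unsigned long start, end; };  /* end is exclusive */
+
+    /* Model of the pre-fix logic: merge with the first touching or
+     * overlapping range, then return without re-scanning the array. */
+    static int add_old(struct range *r, int nr,
+                       unsigned long s, unsigned long e)
+    {
+        for (int i = 0; i < nr; i++) {
+            unsigned long cs = s > r[i].start ? s : r[i].start;
+            unsigned long ce = e < r[i].end ? e : r[i].end;
+
+            if (cs > ce)        /* no overlap and not adjacent */
+                continue;
+            r[i].start = s < r[i].start ? s : r[i].start;
+            r[i].end   = e > r[i].end   ? e : r[i].end;
+            return nr;  /* bug: the grown range may now touch another */
+        }
+        r[nr].start = s;
+        r[nr].end = e;
+        return nr + 1;
+    }
+
+    int main(void)
+    {
+        struct range r[4] = {
+            { 0x00000000, 0x00100000 },
+            { 0x6c000000, 0x6e800000 },
+        };
+        int nr = 2;
+
+        /* the new range exactly fills the hole between the two above */
+        nr = add_old(r, nr, 0x00100000, 0x6c000000);
+
+        /* prints two touching ranges; a correct merge would leave the
+         * single range [0-0x6e800000) */
+        for (int i = 0; i < nr; i++)
+            printf("[%#lx-%#lx)\n", r[i].start, r[i].end);
+        return 0;
+    }
+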
+Fix add_range_with_merge() by making it call itself recursively: clear
+the slot that was just merged and re-add the grown range, so the scan
+restarts and any existing range the grown one now touches is merged as
+well.
+
+Reported-by: "Christian König" <christian.koenig@amd.com>
+Signed-off-by: Yinghai Lu <yinghai@kernel.org>
+Link: http://lkml.kernel.org/r/CAE9FiQVofGoSk7q5-0irjkBxemqK729cND4hov-1QCBJDhxpgQ@mail.gmail.com
+Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/range.c | 8 +++++---
+ 1 file changed, 5 insertions(+), 3 deletions(-)
+
+--- a/kernel/range.c
++++ b/kernel/range.c
+@@ -48,9 +48,11 @@ int add_range_with_merge(struct range *r
+ final_start = min(range[i].start, start);
+ final_end = max(range[i].end, end);
+
+- range[i].start = final_start;
+- range[i].end = final_end;
+- return nr_range;
++ /* clear it and add it back for further merge */
++ range[i].start = 0;
++ range[i].end = 0;
++ return add_range_with_merge(range, az, nr_range,
++ final_start, final_end);
+ }
+
+ /* Need to add it: */