From: Greg Kroah-Hartman
Date: Wed, 18 Aug 2010 01:11:44 +0000 (-0700)
Subject: another .27 patch
X-Git-Tag: v2.6.27.52~9
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=fb18d338fdf470b503d3f54a2153b1c6e910f933;p=thirdparty%2Fkernel%2Fstable-queue.git

another .27 patch
---

diff --git a/review-2.6.27/mm-fix-page-table-unmap-for-stack-guard-page-properly.patch b/review-2.6.27/mm-fix-page-table-unmap-for-stack-guard-page-properly.patch
new file mode 100644
index 00000000000..44129cbaaa2
--- /dev/null
+++ b/review-2.6.27/mm-fix-page-table-unmap-for-stack-guard-page-properly.patch
@@ -0,0 +1,64 @@
+From 11ac552477e32835cb6970bf0a70c210807f5673 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds
+Date: Sat, 14 Aug 2010 11:44:56 -0700
+Subject: mm: fix page table unmap for stack guard page properly
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Linus Torvalds
+
+commit 11ac552477e32835cb6970bf0a70c210807f5673 upstream.
+
+We do in fact need to unmap the page table _before_ doing the whole
+stack guard page logic, because if it is needed (mainly 32-bit x86 with
+PAE and CONFIG_HIGHPTE, but other architectures may use it too) then it
+will do a kmap_atomic/kunmap_atomic.
+
+And those kmaps will create an atomic region that we cannot do
+allocations in.  However, the whole stack expand code will need to do
+anon_vma_prepare() and vma_lock_anon_vma() and they cannot do that in an
+atomic region.
+
+Now, a better model might actually be to do the anon_vma_prepare() when
+_creating_ a VM_GROWSDOWN segment, and not have to worry about any of
+this at page fault time.  But in the meantime, this is the
+straightforward fix for the issue.
+
+See https://bugzilla.kernel.org/show_bug.cgi?id=16588 for details.
+
+Reported-by: Wylda
+Reported-by: Sedat Dilek
+Reported-by: Mike Pagano
+Reported-by: François Valenduc
+Tested-by: Ed Tomlinson
+Cc: Pekka Enberg
+Cc: Greg KH
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/memory.c |    9 ++++-----
+ 1 file changed, 4 insertions(+), 5 deletions(-)
+
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -2428,14 +2428,13 @@ static int do_anonymous_page(struct mm_s
+ 	spinlock_t *ptl;
+ 	pte_t entry;
+ 
+-	if (check_stack_guard_page(vma, address) < 0) {
+-		pte_unmap(page_table);
++	pte_unmap(page_table);
++
++	/* Check if we need to add a guard page to the stack */
++	if (check_stack_guard_page(vma, address) < 0)
+ 		return VM_FAULT_SIGBUS;
+-	}
+ 
+ 	/* Allocate our own private page. */
+-	pte_unmap(page_table);
+-
+ 	if (unlikely(anon_vma_prepare(vma)))
+ 		goto oom;
+ 	page = alloc_zeroed_user_highpage_movable(vma, address);
diff --git a/review-2.6.27/series b/review-2.6.27/series
index 7c26ef1fcb3..cc279ed03ef 100644
--- a/review-2.6.27/series
+++ b/review-2.6.27/series
@@ -2,3 +2,4 @@ mm-keep-a-guard-page-below-a-grow-down-stack-segment.patch
 mm-fix-missing-page-table-unmap-for-stack-guard-page-failure-case.patch
 x86-don-t-send-sigbus-for-kernel-page-faults.patch
 mm-pass-correct-mm-when-growing-stack.patch
+mm-fix-page-table-unmap-for-stack-guard-page-properly.patch