From: Greg Kroah-Hartman
Date: Wed, 11 Mar 2015 13:49:01 +0000 (+0100)
Subject: 3.10-stable patches
X-Git-Tag: v3.10.72~48
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=ee5c8e98a9e93d71f79886a0270523154bd2654a;p=thirdparty%2Fkernel%2Fstable-queue.git

3.10-stable patches

added patches:
	mm-compaction-fix-wrong-order-check-in-compact_finished.patch
	mm-hugetlb-add-migration-entry-check-in-__unmap_hugepage_range.patch
	mm-hugetlb-add-migration-hwpoisoned-entry-check-in-hugetlb_change_protection.patch
	mm-memory.c-actually-remap-enough-memory.patch
	mm-mmap.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
	mm-nommu.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
---

diff --git a/queue-3.10/mm-compaction-fix-wrong-order-check-in-compact_finished.patch b/queue-3.10/mm-compaction-fix-wrong-order-check-in-compact_finished.patch
new file mode 100644
index 00000000000..48b386cfdc8
--- /dev/null
+++ b/queue-3.10/mm-compaction-fix-wrong-order-check-in-compact_finished.patch
@@ -0,0 +1,60 @@
+From 372549c2a3778fd3df445819811c944ad54609ca Mon Sep 17 00:00:00 2001
+From: Joonsoo Kim
+Date: Thu, 12 Feb 2015 14:59:50 -0800
+Subject: mm/compaction: fix wrong order check in compact_finished()
+
+From: Joonsoo Kim
+
+commit 372549c2a3778fd3df445819811c944ad54609ca upstream.
+
+What we want to check here is whether there is highorder freepage in buddy
+list of other migratetype in order to steal it without fragmentation.
+But, current code just checks cc->order which means allocation request
+order. So, this is wrong.
+
+Without this fix, non-movable synchronous compaction below pageblock order
+would not stopped until compaction is complete, because migratetype of
+most pageblocks are movable and high order freepage made by compaction is
+usually on movable type buddy list.
+
+There is some report related to this bug. See below link.
+
+  http://www.spinics.net/lists/linux-mm/msg81666.html
+
+Although the issued system still has load spike comes from compaction,
+this makes that system completely stable and responsive according to his
+report.
+
+stress-highalloc test in mmtests with non movable order 7 allocation
+doesn't show any notable difference in allocation success rate, but, it
+shows more compaction success rate.
+
+Compaction success rate (Compaction success * 100 / Compaction stalls, %)
+18.47 : 28.94
+
+Fixes: 1fb3f8ca0e92 ("mm: compaction: capture a suitable high-order page immediately when it is made available")
+Signed-off-by: Joonsoo Kim
+Acked-by: Vlastimil Babka
+Reviewed-by: Zhang Yanfei
+Cc: Mel Gorman
+Cc: David Rientjes
+Cc: Rik van Riel
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/compaction.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -881,7 +881,7 @@ static int compact_finished(struct zone
+ 			return COMPACT_PARTIAL;
+
+ 		/* Job done if allocation would set block type */
+-		if (cc->order >= pageblock_order && area->nr_free)
++		if (order >= pageblock_order && area->nr_free)
+ 			return COMPACT_PARTIAL;
+ 	}
+
diff --git a/queue-3.10/mm-hugetlb-add-migration-entry-check-in-__unmap_hugepage_range.patch b/queue-3.10/mm-hugetlb-add-migration-entry-check-in-__unmap_hugepage_range.patch
new file mode 100644
index 00000000000..1c905f6a5f2
--- /dev/null
+++ b/queue-3.10/mm-hugetlb-add-migration-entry-check-in-__unmap_hugepage_range.patch
@@ -0,0 +1,51 @@
+From 9fbc1f635fd0bd28cb32550211bf095753ac637a Mon Sep 17 00:00:00 2001
+From: Naoya Horiguchi
+Date: Wed, 11 Feb 2015 15:25:32 -0800
+Subject: mm/hugetlb: add migration entry check in __unmap_hugepage_range
+
+From: Naoya Horiguchi
+
+commit 9fbc1f635fd0bd28cb32550211bf095753ac637a upstream.
+
+If __unmap_hugepage_range() tries to unmap the address range over which
+hugepage migration is on the way, we get the wrong page because pte_page()
+doesn't work for migration entries. This patch simply clears the pte for
+migration entries as we do for hwpoison entries.
+
+Fixes: 290408d4a2 ("hugetlb: hugepage migration core")
+Signed-off-by: Naoya Horiguchi
+Cc: Hugh Dickins
+Cc: James Hogan
+Cc: David Rientjes
+Cc: Mel Gorman
+Cc: Johannes Weiner
+Cc: Michal Hocko
+Cc: Rik van Riel
+Cc: Andrea Arcangeli
+Cc: Luiz Capitulino
+Cc: Nishanth Aravamudan
+Cc: Lee Schermerhorn
+Cc: Steve Capper
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/hugetlb.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2451,9 +2451,10 @@ again:
+ 			continue;
+
+ 		/*
+-		 * HWPoisoned hugepage is already unmapped and dropped reference
++		 * Migrating hugepage or HWPoisoned hugepage is already
++		 * unmapped and its refcount is dropped, so just clear pte here.
+ 		 */
+-		if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
++		if (unlikely(!pte_present(pte))) {
+ 			huge_pte_clear(mm, address, ptep);
+ 			continue;
+ 		}
diff --git a/queue-3.10/mm-hugetlb-add-migration-hwpoisoned-entry-check-in-hugetlb_change_protection.patch b/queue-3.10/mm-hugetlb-add-migration-hwpoisoned-entry-check-in-hugetlb_change_protection.patch
new file mode 100644
index 00000000000..5c76a4ef3bc
--- /dev/null
+++ b/queue-3.10/mm-hugetlb-add-migration-hwpoisoned-entry-check-in-hugetlb_change_protection.patch
@@ -0,0 +1,70 @@
+From a8bda28d87c38c6aa93de28ba5d30cc18e865a11 Mon Sep 17 00:00:00 2001
+From: Naoya Horiguchi
+Date: Wed, 11 Feb 2015 15:25:28 -0800
+Subject: mm/hugetlb: add migration/hwpoisoned entry check in hugetlb_change_protection
+
+From: Naoya Horiguchi
+
+commit a8bda28d87c38c6aa93de28ba5d30cc18e865a11 upstream.
+
+There is a race condition between hugepage migration and
+change_protection(), where hugetlb_change_protection() doesn't care about
+migration entries and wrongly overwrites them. That causes unexpected
+results like kernel crash. HWPoison entries also can cause the same
+problem.
+
+This patch adds is_hugetlb_entry_(migration|hwpoisoned) check in this
+function to do proper actions.
+
+Fixes: 290408d4a2 ("hugetlb: hugepage migration core")
+Signed-off-by: Naoya Horiguchi
+Cc: Hugh Dickins
+Cc: James Hogan
+Cc: David Rientjes
+Cc: Mel Gorman
+Cc: Johannes Weiner
+Cc: Michal Hocko
+Cc: Rik van Riel
+Cc: Andrea Arcangeli
+Cc: Luiz Capitulino
+Cc: Nishanth Aravamudan
+Cc: Lee Schermerhorn
+Cc: Steve Capper
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/hugetlb.c | 21 ++++++++++++++++++++-
+ 1 file changed, 20 insertions(+), 1 deletion(-)
+
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3116,7 +3116,26 @@ unsigned long hugetlb_change_protection(
+ 			pages++;
+ 			continue;
+ 		}
+-		if (!huge_pte_none(huge_ptep_get(ptep))) {
++		pte = huge_ptep_get(ptep);
++		if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
++			spin_unlock(ptl);
++			continue;
++		}
++		if (unlikely(is_hugetlb_entry_migration(pte))) {
++			swp_entry_t entry = pte_to_swp_entry(pte);
++
++			if (is_write_migration_entry(entry)) {
++				pte_t newpte;
++
++				make_migration_entry_read(&entry);
++				newpte = swp_entry_to_pte(entry);
++				set_huge_pte_at(mm, address, ptep, newpte);
++				pages++;
++			}
++			spin_unlock(ptl);
++			continue;
++		}
++		if (!huge_pte_none(pte)) {
+ 			pte = huge_ptep_get_and_clear(mm, address, ptep);
+ 			pte = pte_mkhuge(huge_pte_modify(pte, newprot));
+ 			pte = arch_make_huge_pte(pte, vma, NULL, 0);
diff --git a/queue-3.10/mm-memory.c-actually-remap-enough-memory.patch b/queue-3.10/mm-memory.c-actually-remap-enough-memory.patch
new file mode 100644
index 00000000000..46c6ca14f7b
--- /dev/null
+++ b/queue-3.10/mm-memory.c-actually-remap-enough-memory.patch
@@ -0,0 +1,36 @@
+From 9cb12d7b4ccaa976f97ce0c5fd0f1b6a83bc2a75 Mon Sep 17 00:00:00 2001
+From: Grazvydas Ignotas
+Date: Thu, 12 Feb 2015 15:00:19 -0800
+Subject: mm/memory.c: actually remap enough memory
+
+From: Grazvydas Ignotas
+
+commit 9cb12d7b4ccaa976f97ce0c5fd0f1b6a83bc2a75 upstream.
+
+For whatever reason, generic_access_phys() only remaps one page, but
+actually allows to access arbitrary size. It's quite easy to trigger
+large reads, like printing out large structure with gdb, which leads to a
+crash. Fix it by remapping correct size.
+
+Fixes: 28b2ee20c7cb ("access_process_vm device memory infrastructure")
+Signed-off-by: Grazvydas Ignotas
+Cc: Rik van Riel
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/memory.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4088,7 +4088,7 @@ int generic_access_phys(struct vm_area_s
+ 	if (follow_phys(vma, addr, write, &prot, &phys_addr))
+ 		return -EINVAL;
+
+-	maddr = ioremap_prot(phys_addr, PAGE_SIZE, prot);
++	maddr = ioremap_prot(phys_addr, PAGE_ALIGN(len + offset), prot);
+ 	if (write)
+ 		memcpy_toio(maddr + offset, buf, len);
+ 	else
diff --git a/queue-3.10/mm-mmap.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch b/queue-3.10/mm-mmap.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
new file mode 100644
index 00000000000..9411482bbdd
--- /dev/null
+++ b/queue-3.10/mm-mmap.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
@@ -0,0 +1,63 @@
+From 5703b087dc8eaf47bfb399d6cf512d471beff405 Mon Sep 17 00:00:00 2001
+From: Roman Gushchin
+Date: Wed, 11 Feb 2015 15:28:39 -0800
+Subject: mm/mmap.c: fix arithmetic overflow in __vm_enough_memory()
+
+From: Roman Gushchin
+
+commit 5703b087dc8eaf47bfb399d6cf512d471beff405 upstream.
+
+I noticed, that "allowed" can easily overflow by falling below 0,
+because (total_vm / 32) can be larger than "allowed". The problem
+occurs in OVERCOMMIT_NONE mode.
+
+In this case, a huge allocation can success and overcommit the system
+(despite OVERCOMMIT_NONE mode). All subsequent allocations will fall
+(system-wide), so system become unusable.
+
+The problem was masked out by commit c9b1d0981fcc
+("mm: limit growth of 3% hardcoded other user reserve"),
+but it's easy to reproduce it on older kernels:
+1) set overcommit_memory sysctl to 2
+2) mmap() large file multiple times (with VM_SHARED flag)
+3) try to malloc() large amount of memory
+
+It also can be reproduced on newer kernels, but miss-configured
+sysctl_user_reserve_kbytes is required.
+
+Fix this issue by switching to signed arithmetic here.
+
+[akpm@linux-foundation.org: use min_t]
+Signed-off-by: Roman Gushchin
+Cc: Andrew Shewmaker
+Cc: Rik van Riel
+Cc: Konstantin Khlebnikov
+Reviewed-by: Michal Hocko
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/mmap.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -127,7 +127,7 @@ EXPORT_SYMBOL_GPL(vm_memory_committed);
+  */
+ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
+ {
+-	unsigned long free, allowed, reserve;
++	long free, allowed, reserve;
+
+ 	vm_acct_memory(pages);
+
+@@ -193,7 +193,7 @@ int __vm_enough_memory(struct mm_struct
+ 	 */
+ 	if (mm) {
+ 		reserve = sysctl_user_reserve_kbytes >> (PAGE_SHIFT - 10);
+-		allowed -= min(mm->total_vm / 32, reserve);
++		allowed -= min_t(long, mm->total_vm / 32, reserve);
+ 	}
+
+ 	if (percpu_counter_read_positive(&vm_committed_as) < allowed)
diff --git a/queue-3.10/mm-nommu.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch b/queue-3.10/mm-nommu.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
new file mode 100644
index 00000000000..25bcebb2d7e
--- /dev/null
+++ b/queue-3.10/mm-nommu.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
@@ -0,0 +1,61 @@
+From 8138a67a5557ffea3a21dfd6f037842d4e748513 Mon Sep 17 00:00:00 2001
+From: Roman Gushchin
+Date: Wed, 11 Feb 2015 15:28:42 -0800
+Subject: mm/nommu.c: fix arithmetic overflow in __vm_enough_memory()
+
+From: Roman Gushchin
+
+commit 8138a67a5557ffea3a21dfd6f037842d4e748513 upstream.
+
+I noticed that "allowed" can easily overflow by falling below 0, because
+(total_vm / 32) can be larger than "allowed". The problem occurs in
+OVERCOMMIT_NONE mode.
+
+In this case, a huge allocation can success and overcommit the system
+(despite OVERCOMMIT_NONE mode). All subsequent allocations will fall
+(system-wide), so system become unusable.
+
+The problem was masked out by commit c9b1d0981fcc
+("mm: limit growth of 3% hardcoded other user reserve"),
+but it's easy to reproduce it on older kernels:
+1) set overcommit_memory sysctl to 2
+2) mmap() large file multiple times (with VM_SHARED flag)
+3) try to malloc() large amount of memory
+
+It also can be reproduced on newer kernels, but miss-configured
+sysctl_user_reserve_kbytes is required.
+
+Fix this issue by switching to signed arithmetic here.
+
+Signed-off-by: Roman Gushchin
+Cc: Andrew Shewmaker
+Cc: Rik van Riel
+Cc: Konstantin Khlebnikov
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/nommu.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/mm/nommu.c
++++ b/mm/nommu.c
+@@ -1898,7 +1898,7 @@ EXPORT_SYMBOL(unmap_mapping_range);
+  */
+ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
+ {
+-	unsigned long free, allowed, reserve;
++	long free, allowed, reserve;
+
+ 	vm_acct_memory(pages);
+
+@@ -1963,7 +1963,7 @@ int __vm_enough_memory(struct mm_struct
+ 	 */
+ 	if (mm) {
+ 		reserve = sysctl_user_reserve_kbytes >> (PAGE_SHIFT - 10);
+-		allowed -= min(mm->total_vm / 32, reserve);
++		allowed -= min_t(long, mm->total_vm / 32, reserve);
+ 	}
+
+ 	if (percpu_counter_read_positive(&vm_committed_as) < allowed)
diff --git a/queue-3.10/series b/queue-3.10/series
index 8574c40a164..ec2ae31e5f1 100644
--- a/queue-3.10/series
+++ b/queue-3.10/series
@@ -13,3 +13,9 @@ macvtap-make-sure-neighbour-code-can-push-ethernet-header.patch
 usb-plusb-add-support-for-national-instruments-host-to-host-cable.patch
 udp-only-allow-ufo-for-packets-from-sock_dgram-sockets.patch
 team-don-t-traverse-port-list-using-rcu-in-team_set_mac_address.patch
+mm-hugetlb-add-migration-hwpoisoned-entry-check-in-hugetlb_change_protection.patch
+mm-hugetlb-add-migration-entry-check-in-__unmap_hugepage_range.patch
+mm-mmap.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
+mm-nommu.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
+mm-compaction-fix-wrong-order-check-in-compact_finished.patch
+mm-memory.c-actually-remap-enough-memory.patch