--- /dev/null
+From 372549c2a3778fd3df445819811c944ad54609ca Mon Sep 17 00:00:00 2001
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Date: Thu, 12 Feb 2015 14:59:50 -0800
+Subject: mm/compaction: fix wrong order check in compact_finished()
+
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+
+commit 372549c2a3778fd3df445819811c944ad54609ca upstream.
+
+What we want to check here is whether a high-order freepage exists in the
+buddy list of another migratetype, so that we can steal it without causing
+fragmentation.  But the current code just checks cc->order, which is the
+allocation request order, so the check is wrong.
+
+Without this fix, non-movable synchronous compaction below pageblock order
+would not stop until compaction is complete, because most pageblocks have
+the movable migratetype and the high-order freepages created by compaction
+usually end up on the movable buddy list.
+
+There is a report related to this bug; see the link below.
+
+ http://www.spinics.net/lists/linux-mm/msg81666.html
+
+Although the affected system still sees load spikes caused by compaction,
+this change makes the system completely stable and responsive according to
+the reporter.
+
+The stress-highalloc test in mmtests with non-movable order-7 allocations
+doesn't show any notable difference in allocation success rate, but it
+does show a higher compaction success rate.
+
+Compaction success rate (Compaction success * 100 / Compaction stalls, %)
+before : after = 18.47 : 28.94
+
+Fixes: 1fb3f8ca0e92 ("mm: compaction: capture a suitable high-order page immediately when it is made available")
+Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Reviewed-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
+Cc: Mel Gorman <mgorman@suse.de>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Rik van Riel <riel@redhat.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/compaction.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -881,7 +881,7 @@ static int compact_finished(struct zone
+ return COMPACT_PARTIAL;
+
+ /* Job done if allocation would set block type */
+- if (cc->order >= pageblock_order && area->nr_free)
++ if (order >= pageblock_order && area->nr_free)
+ return COMPACT_PARTIAL;
+ }
+
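+
+As an illustration (this sketch is not part of the upstream patch, and the
+MAX_ORDER/pageblock_order values and the bare nr_free array are made-up
+simplifications), a minimal userspace model of the compact_finished() loop
+shows why comparing the request order (cc->order) against pageblock_order
+never triggers for small requests, while comparing the scanned order does:
+
+#include <stdbool.h>
+#include <stdio.h>
+
+#define MAX_ORDER       11
+#define PAGEBLOCK_ORDER  9
+
+struct free_area { unsigned long nr_free; };
+
+static bool job_done(const struct free_area *area, int cc_order, bool fixed)
+{
+    for (int order = cc_order; order < MAX_ORDER; order++) {
+        if (!area[order].nr_free)
+            continue;
+        /* buggy code tests the request order, the fix tests the scanned order */
+        int checked = fixed ? order : cc_order;
+        if (checked >= PAGEBLOCK_ORDER)
+            return true;    /* pageblock-sized free page exists, job done */
+    }
+    return false;
+}
+
+int main(void)
+{
+    struct free_area area[MAX_ORDER] = { 0 };
+
+    area[PAGEBLOCK_ORDER].nr_free = 1;    /* compaction produced an order-9 page */
+
+    printf("buggy check finishes: %d\n", job_done(area, 7, false));    /* 0: keeps compacting */
+    printf("fixed check finishes: %d\n", job_done(area, 7, true));     /* 1: stops */
+    return 0;
+}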
--- /dev/null
+From 9fbc1f635fd0bd28cb32550211bf095753ac637a Mon Sep 17 00:00:00 2001
+From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Date: Wed, 11 Feb 2015 15:25:32 -0800
+Subject: mm/hugetlb: add migration entry check in __unmap_hugepage_range
+
+From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+
+commit 9fbc1f635fd0bd28cb32550211bf095753ac637a upstream.
+
+If __unmap_hugepage_range() tries to unmap an address range over which
+hugepage migration is in progress, we get the wrong page because pte_page()
+doesn't work for migration entries.  This patch simply clears the pte for
+migration entries, as we already do for hwpoison entries.
+
+Fixes: 290408d4a2 ("hugetlb: hugepage migration core")
+Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: James Hogan <james.hogan@imgtec.com>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Mel Gorman <mel@csn.ul.ie>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Michal Hocko <mhocko@suse.cz>
+Cc: Rik van Riel <riel@redhat.com>
+Cc: Andrea Arcangeli <aarcange@redhat.com>
+Cc: Luiz Capitulino <lcapitulino@redhat.com>
+Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
+Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
+Cc: Steve Capper <steve.capper@linaro.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/hugetlb.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2451,9 +2451,10 @@ again:
+ continue;
+
+ /*
+- * HWPoisoned hugepage is already unmapped and dropped reference
++ * Migrating hugepage or HWPoisoned hugepage is already
++ * unmapped and its refcount is dropped, so just clear pte here.
+ */
+- if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
++ if (unlikely(!pte_present(pte))) {
+ huge_pte_clear(mm, address, ptep);
+ continue;
+ }
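+
+As an illustration (not part of the upstream patch; the entry kinds are
+reduced to a hypothetical enum rather than real pte encodings), the point of
+switching to !pte_present() is that migration entries, like hwpoison
+entries, are software-defined non-present entries with no page that
+pte_page() could legitimately return, so the unmap path must just clear
+them:
+
+#include <stdbool.h>
+#include <stdio.h>
+
+enum hpte { HPTE_NORMAL, HPTE_HWPOISON, HPTE_MIGRATION };
+
+/* both software entries have the hardware present bit clear */
+static bool hpte_present(enum hpte e) { return e == HPTE_NORMAL; }
+
+static const char *unmap_action(enum hpte e)
+{
+    /*
+     * The old code special-cased only HPTE_HWPOISON, so HPTE_MIGRATION
+     * fell through to pte_page() and produced a bogus struct page.
+     * Testing !pte_present() covers both non-present cases.
+     */
+    if (!hpte_present(e))
+        return "clear the pte and continue";
+    return "look up the page and unmap it";
+}
+
+int main(void)
+{
+    printf("normal:    %s\n", unmap_action(HPTE_NORMAL));
+    printf("hwpoison:  %s\n", unmap_action(HPTE_HWPOISON));
+    printf("migration: %s\n", unmap_action(HPTE_MIGRATION));
+    return 0;
+}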
--- /dev/null
+From a8bda28d87c38c6aa93de28ba5d30cc18e865a11 Mon Sep 17 00:00:00 2001
+From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Date: Wed, 11 Feb 2015 15:25:28 -0800
+Subject: mm/hugetlb: add migration/hwpoisoned entry check in hugetlb_change_protection
+
+From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+
+commit a8bda28d87c38c6aa93de28ba5d30cc18e865a11 upstream.
+
+There is a race condition between hugepage migration and
+change_protection(): hugetlb_change_protection() doesn't care about
+migration entries and wrongly overwrites them.  That causes unexpected
+results such as a kernel crash.  HWPoison entries can also cause the same
+problem.
+
+This patch adds is_hugetlb_entry_(migration|hwpoisoned) checks to this
+function so that such entries are handled properly.
+
+Fixes: 290408d4a2 ("hugetlb: hugepage migration core")
+Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: James Hogan <james.hogan@imgtec.com>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Mel Gorman <mel@csn.ul.ie>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Michal Hocko <mhocko@suse.cz>
+Cc: Rik van Riel <riel@redhat.com>
+Cc: Andrea Arcangeli <aarcange@redhat.com>
+Cc: Luiz Capitulino <lcapitulino@redhat.com>
+Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
+Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
+Cc: Steve Capper <steve.capper@linaro.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/hugetlb.c | 21 ++++++++++++++++++++-
+ 1 file changed, 20 insertions(+), 1 deletion(-)
+
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3116,7 +3116,26 @@ unsigned long hugetlb_change_protection(
+ pages++;
+ continue;
+ }
+- if (!huge_pte_none(huge_ptep_get(ptep))) {
++ pte = huge_ptep_get(ptep);
++ if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
++ spin_unlock(ptl);
++ continue;
++ }
++ if (unlikely(is_hugetlb_entry_migration(pte))) {
++ swp_entry_t entry = pte_to_swp_entry(pte);
++
++ if (is_write_migration_entry(entry)) {
++ pte_t newpte;
++
++ make_migration_entry_read(&entry);
++ newpte = swp_entry_to_pte(entry);
++ set_huge_pte_at(mm, address, ptep, newpte);
++ pages++;
++ }
++ spin_unlock(ptl);
++ continue;
++ }
++ if (!huge_pte_none(pte)) {
+ pte = huge_ptep_get_and_clear(mm, address, ptep);
+ pte = pte_mkhuge(huge_pte_modify(pte, newprot));
+ pte = arch_make_huge_pte(pte, vma, NULL, 0);
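+
+A sketch of the decision table the patched function follows (a simplified
+userspace model with an enum standing in for real pte/swap-entry encodings,
+not kernel code): hwpoison entries are skipped, write migration entries are
+downgraded to read so that a writable pte is not silently restored when
+migration completes after write permission was removed, and only present
+ptes get newprot applied:
+
+#include <stdio.h>
+
+enum hpte { PRESENT, HWPOISON, MIGRATION_READ, MIGRATION_WRITE };
+
+static enum hpte change_protection_one(enum hpte e)
+{
+    switch (e) {
+    case HWPOISON:
+        return e;               /* page is gone; leave the marker alone */
+    case MIGRATION_WRITE:
+        /*
+         * make_migration_entry_read(): the pte restored after migration
+         * must not be writable if the new protection dropped write.
+         */
+        return MIGRATION_READ;
+    case MIGRATION_READ:
+        return e;               /* nothing to adjust */
+    case PRESENT:
+    default:
+        return PRESENT;         /* normal path: re-install pte with newprot */
+    }
+}
+
+int main(void)
+{
+    const char *name[] = {
+        "present", "hwpoison", "migration-read", "migration-write"
+    };
+
+    for (int e = PRESENT; e <= MIGRATION_WRITE; e++)
+        printf("%-15s -> %s\n", name[e], name[change_protection_one(e)]);
+    return 0;
+}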
--- /dev/null
+From 9cb12d7b4ccaa976f97ce0c5fd0f1b6a83bc2a75 Mon Sep 17 00:00:00 2001
+From: Grazvydas Ignotas <notasas@gmail.com>
+Date: Thu, 12 Feb 2015 15:00:19 -0800
+Subject: mm/memory.c: actually remap enough memory
+
+From: Grazvydas Ignotas <notasas@gmail.com>
+
+commit 9cb12d7b4ccaa976f97ce0c5fd0f1b6a83bc2a75 upstream.
+
+For whatever reason, generic_access_phys() only remaps one page, but
+actually allows accesses of arbitrary size.  It's quite easy to trigger
+large reads, for example by printing out a large structure with gdb, which
+leads to a crash.  Fix it by remapping the correct size.
+
+Fixes: 28b2ee20c7cb ("access_process_vm device memory infrastructure")
+Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
+Cc: Rik van Riel <riel@redhat.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/memory.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4088,7 +4088,7 @@ int generic_access_phys(struct vm_area_s
+ if (follow_phys(vma, addr, write, &prot, &phys_addr))
+ return -EINVAL;
+
+- maddr = ioremap_prot(phys_addr, PAGE_SIZE, prot);
++ maddr = ioremap_prot(phys_addr, PAGE_ALIGN(len + offset), prot);
+ if (write)
+ memcpy_toio(maddr + offset, buf, len);
+ else
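+
+A small worked example of the size arithmetic (assuming 4 KiB pages; the
+address and length are made up): an access that starts near the end of a
+page touches more than one page of the remapped window, which is exactly
+what PAGE_ALIGN(len + offset) accounts for:
+
+#include <stdio.h>
+
+#define PAGE_SIZE     4096UL
+#define PAGE_MASK     (~(PAGE_SIZE - 1))
+#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)
+
+int main(void)
+{
+    unsigned long addr   = 0x10000f00UL;  /* access starts near the end of a page */
+    unsigned long len    = 0x300UL;       /* e.g. gdb dumping a large structure */
+    unsigned long offset = addr & (PAGE_SIZE - 1);   /* 0xf00 */
+
+    printf("bytes touched past maddr: %#lx\n", offset + len);             /* 0x1200 */
+    printf("old ioremap_prot length:  %#lx\n", PAGE_SIZE);                /* 0x1000: too small */
+    printf("new ioremap_prot length:  %#lx\n", PAGE_ALIGN(len + offset)); /* 0x2000 */
+    return 0;
+}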
--- /dev/null
+From 5703b087dc8eaf47bfb399d6cf512d471beff405 Mon Sep 17 00:00:00 2001
+From: Roman Gushchin <klamm@yandex-team.ru>
+Date: Wed, 11 Feb 2015 15:28:39 -0800
+Subject: mm/mmap.c: fix arithmetic overflow in __vm_enough_memory()
+
+From: Roman Gushchin <klamm@yandex-team.ru>
+
+commit 5703b087dc8eaf47bfb399d6cf512d471beff405 upstream.
+
+I noticed that "allowed" can easily wrap around below zero, because
+(total_vm / 32) can be larger than "allowed".  The problem occurs in
+OVERCOMMIT_NEVER mode (overcommit_memory = 2).
+
+In this case, a huge allocation can succeed and overcommit the system
+(despite the "never overcommit" setting).  All subsequent allocations will
+fail (system-wide), so the system becomes unusable.
+
+The problem was masked out by commit c9b1d0981fcc
+("mm: limit growth of 3% hardcoded other user reserve"),
+but it's easy to reproduce on older kernels:
+1) set the overcommit_memory sysctl to 2
+2) mmap() a large file multiple times (with the VM_SHARED flag)
+3) try to malloc() a large amount of memory
+
+It can also be reproduced on newer kernels, but a misconfigured
+sysctl_user_reserve_kbytes is required.
+
+Fix this issue by switching to signed arithmetic here.
+
+[akpm@linux-foundation.org: use min_t]
+Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
+Cc: Andrew Shewmaker <agshew@gmail.com>
+Cc: Rik van Riel <riel@redhat.com>
+Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
+Reviewed-by: Michal Hocko <mhocko@suse.cz>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/mmap.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -127,7 +127,7 @@ EXPORT_SYMBOL_GPL(vm_memory_committed);
+ */
+ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
+ {
+- unsigned long free, allowed, reserve;
++ long free, allowed, reserve;
+
+ vm_acct_memory(pages);
+
+@@ -193,7 +193,7 @@ int __vm_enough_memory(struct mm_struct
+ */
+ if (mm) {
+ reserve = sysctl_user_reserve_kbytes >> (PAGE_SHIFT - 10);
+- allowed -= min(mm->total_vm / 32, reserve);
++ allowed -= min_t(long, mm->total_vm / 32, reserve);
+ }
+
+ if (percpu_counter_read_positive(&vm_committed_as) < allowed)
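+
+A userspace sketch with made-up numbers of why the unsigned declaration
+breaks the final "committed < allowed" test in __vm_enough_memory(); the
+same reasoning applies to the identical mm/nommu.c hunk in the next patch:
+
+#include <stdio.h>
+
+int main(void)
+{
+    /* made-up numbers, chosen only to trigger the wrap-around */
+    unsigned long total_vm  = 200000;   /* pages mapped by the process */
+    unsigned long committed = 2000;     /* pages already committed */
+
+    /* stand-in for min(mm->total_vm / 32, reserve); assume total_vm / 32 wins */
+    unsigned long reserve = total_vm / 32;   /* 6250 */
+
+    unsigned long allowed_old = 1000;   /* old declaration: unsigned long */
+    long allowed_new = 1000;            /* new declaration: long */
+
+    allowed_old -= reserve;             /* wraps to a huge positive value */
+    allowed_new -= (long)reserve;       /* stays negative: -5250 */
+
+    printf("unsigned allowed %lu -> admits allocation: %d\n",
+           allowed_old, committed < allowed_old);          /* admits: 1 */
+    printf("signed   allowed %ld -> admits allocation: %d\n",
+           allowed_new, (long)committed < allowed_new);    /* admits: 0 */
+    return 0;
+}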
--- /dev/null
+From 8138a67a5557ffea3a21dfd6f037842d4e748513 Mon Sep 17 00:00:00 2001
+From: Roman Gushchin <klamm@yandex-team.ru>
+Date: Wed, 11 Feb 2015 15:28:42 -0800
+Subject: mm/nommu.c: fix arithmetic overflow in __vm_enough_memory()
+
+From: Roman Gushchin <klamm@yandex-team.ru>
+
+commit 8138a67a5557ffea3a21dfd6f037842d4e748513 upstream.
+
+I noticed that "allowed" can easily wrap around below zero, because
+(total_vm / 32) can be larger than "allowed".  The problem occurs in
+OVERCOMMIT_NEVER mode (overcommit_memory = 2).
+
+In this case, a huge allocation can succeed and overcommit the system
+(despite the "never overcommit" setting).  All subsequent allocations will
+fail (system-wide), so the system becomes unusable.
+
+The problem was masked out by commit c9b1d0981fcc
+("mm: limit growth of 3% hardcoded other user reserve"),
+but it's easy to reproduce on older kernels:
+1) set the overcommit_memory sysctl to 2
+2) mmap() a large file multiple times (with the VM_SHARED flag)
+3) try to malloc() a large amount of memory
+
+It can also be reproduced on newer kernels, but a misconfigured
+sysctl_user_reserve_kbytes is required.
+
+Fix this issue by switching to signed arithmetic here.
+
+Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
+Cc: Andrew Shewmaker <agshew@gmail.com>
+Cc: Rik van Riel <riel@redhat.com>
+Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/nommu.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/mm/nommu.c
++++ b/mm/nommu.c
+@@ -1898,7 +1898,7 @@ EXPORT_SYMBOL(unmap_mapping_range);
+ */
+ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
+ {
+- unsigned long free, allowed, reserve;
++ long free, allowed, reserve;
+
+ vm_acct_memory(pages);
+
+@@ -1963,7 +1963,7 @@ int __vm_enough_memory(struct mm_struct
+ */
+ if (mm) {
+ reserve = sysctl_user_reserve_kbytes >> (PAGE_SHIFT - 10);
+- allowed -= min(mm->total_vm / 32, reserve);
++ allowed -= min_t(long, mm->total_vm / 32, reserve);
+ }
+
+ if (percpu_counter_read_positive(&vm_committed_as) < allowed)
usb-plusb-add-support-for-national-instruments-host-to-host-cable.patch
udp-only-allow-ufo-for-packets-from-sock_dgram-sockets.patch
team-don-t-traverse-port-list-using-rcu-in-team_set_mac_address.patch
+mm-hugetlb-add-migration-hwpoisoned-entry-check-in-hugetlb_change_protection.patch
+mm-hugetlb-add-migration-entry-check-in-__unmap_hugepage_range.patch
+mm-mmap.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
+mm-nommu.c-fix-arithmetic-overflow-in-__vm_enough_memory.patch
+mm-compaction-fix-wrong-order-check-in-compact_finished.patch
+mm-memory.c-actually-remap-enough-memory.patch