--- /dev/null
+From 51bb1a4093cc68bc16b282548d9cee6104be0ef1 Mon Sep 17 00:00:00 2001
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Date: Thu, 13 Nov 2014 15:19:14 -0800
+Subject: mm/page_alloc: add freepage on isolate pageblock to correct buddy list
+
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+
+commit 51bb1a4093cc68bc16b282548d9cee6104be0ef1 upstream.
+
+In free_pcppages_bulk(), the cached migratetype of a freepage is used
+to determine which buddy list the freepage will be added to. This
+information is stored when the freepage is added to the pcp list, so if
+isolation of this freepage's pageblock begins after that point, the
+cached information can be stale. In other words, it still holds the
+original migratetype rather than MIGRATE_ISOLATE.
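+
+For reference, the caching happens on the free_hot_cold_page() path; a
+paraphrased sketch (not an exact hunk from this tree) of where the
+migratetype is cached when a page is put on the pcp list looks roughly
+like this:
+
+	/* free_hot_cold_page(): before the page goes onto the pcp list */
+	migratetype = get_pfnblock_migratetype(page, pfn);
+	set_freepage_migratetype(page, migratetype);	/* cache it in the page */
+	...
+	list_add(&page->lru, &pcp->lists[migratetype]);
+
+Nothing updates this cached value if the pageblock is isolated later,
+which is exactly the staleness described above.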
+
+There are two problems caused by this stale information.
+
+One is that we can't keep these freepages from being allocated.
+Although the pageblock is isolated, the freepage is added to a normal
+buddy list, so it can be allocated without any restriction. The other
+problem is incorrect freepage accounting: freepages on an isolate
+pageblock should not be counted in the number of free pages.
+
+The following is the relevant code snippet from free_pcppages_bulk():
+
+ /* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
+ __free_one_page(page, page_to_pfn(page), zone, 0, mt);
+ trace_mm_page_pcpu_drain(page, 0, mt);
+ if (likely(!is_migrate_isolate_page(page))) {
+ __mod_zone_page_state(zone, NR_FREE_PAGES, 1);
+ if (is_migrate_cma(mt))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
+ }
+
+As the snippet above shows, the current code already handles the
+second problem, incorrect freepage accounting, by re-fetching the
+pageblock migratetype through is_migrate_isolate_page(page).
+
+But, because this re-fetched information isn't used by
+__free_one_page(), the first problem is not solved. This patch
+re-fetches the pageblock migratetype before __free_one_page() and uses
+it for __free_one_page().
+
+In addition to moving the re-fetch earlier, this patch applies an
+optimization: the migratetype is re-fetched only if the zone has an
+isolated pageblock. Pageblock isolation is a rare event, so this
+optimization avoids the re-fetch in the common case.
+
+This patch also corrects the migratetype in the tracepoint output.
+
+Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Acked-by: Minchan Kim <minchan@kernel.org>
+Acked-by: Michal Nazarewicz <mina86@mina86.com>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
+Cc: Mel Gorman <mgorman@suse.de>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
+Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
+Cc: Tang Chen <tangchen@cn.fujitsu.com>
+Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Cc: Wen Congyang <wency@cn.fujitsu.com>
+Cc: Marek Szyprowski <m.szyprowski@samsung.com>
+Cc: Laura Abbott <lauraa@codeaurora.org>
+Cc: Heesub Shin <heesub.shin@samsung.com>
+Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
+Cc: Ritesh Harjani <ritesh.list@gmail.com>
+Cc: Gioh Kim <gioh.kim@lge.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/page_alloc.c | 13 ++++++++-----
+ 1 file changed, 8 insertions(+), 5 deletions(-)
+
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -716,14 +716,17 @@ static void free_pcppages_bulk(struct zo
+ /* must delete as __free_one_page list manipulates */
+ list_del(&page->lru);
+ mt = get_freepage_migratetype(page);
++ if (unlikely(has_isolate_pageblock(zone))) {
++ mt = get_pageblock_migratetype(page);
++ if (is_migrate_isolate(mt))
++ goto skip_counting;
++ }
++ __mod_zone_freepage_state(zone, 1, mt);
++
++skip_counting:
+ /* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
+ __free_one_page(page, page_to_pfn(page), zone, 0, mt);
+ trace_mm_page_pcpu_drain(page, 0, mt);
+- if (likely(!is_migrate_isolate_page(page))) {
+- __mod_zone_page_state(zone, NR_FREE_PAGES, 1);
+- if (is_migrate_cma(mt))
+- __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
+- }
+ } while (--to_free && --batch_free && !list_empty(list));
+ }
+ spin_unlock(&zone->lock);
--- /dev/null
+From ad53f92eb416d81e469fa8ea57153e59455e7175 Mon Sep 17 00:00:00 2001
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Date: Thu, 13 Nov 2014 15:19:11 -0800
+Subject: mm/page_alloc: fix incorrect isolation behavior by rechecking migratetype
+
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+
+commit ad53f92eb416d81e469fa8ea57153e59455e7175 upstream.
+
+Before describing the bugs themselves, I first define what counts as a
+freepage.
+
+ 1. pages on the buddy list are counted as freepages.
+ 2. pages on the isolate migratetype buddy list are *not* counted as freepages.
+ 3. pages on the cma buddy list are counted as CMA freepages, too.
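+
+For reference, rules 1 and 3 are what the freepage accounting helper
+encodes; a minimal sketch of __mod_zone_freepage_state() as it looks in
+kernels of this era (paraphrased, not taken from a hunk in this series)
+is:
+
+	static inline void __mod_zone_freepage_state(struct zone *zone,
+					int nr_pages, int migratetype)
+	{
+		__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
+		if (is_migrate_cma(migratetype))
+			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
+	}
+
+Rule 2 is enforced by the callers, which are expected to skip this
+helper entirely for pages on an isolate buddy list.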
+
+Now, I describe the problems and the related patches.
+
+Patch 1: There is a race condition when getting the pageblock
+migratetype that results in misplacement of freepages on the buddy
+list, an incorrect freepage count, and unavailability of freepages.
+
+Patch 2: Freepages on the pcp list can carry stale cached information
+used to determine which buddy list they should go to. This causes
+misplacement of freepages on the buddy list and an incorrect freepage
+count.
+
+Patch 4: Merging of freepages from pageblocks with different
+migratetypes causes a freepage accounting problem. This patch fixes it.
+
+Without patchset [3], the above problems don't happen in my CMA
+allocation test, because CMA reserved pages aren't used at all, so
+there is no chance for the above race.
+
+With patchset [3], I ran a simple CMA allocation test and got the
+result below:
+
+ - Virtual machine, 4 cpus, 1024 MB memory, 256 MB CMA reservation
+ - run kernel build (make -j16) on background
+ - 30 CMA allocation attempts (8 MB * 30 = 240 MB) at 5 second intervals
+ - Result: more than 5000 pages go missing from the freepage count
+
+With patchset [3] and this patchset, no freepage counts are missed, so
+I conclude that the problems are solved.
+
+These problems also occur in my simple memory offlining test
+environment.
+
+This patch (of 4):
+
+There are two paths that reach the core free function of the buddy
+allocator, __free_one_page(): one is free_one_page()->__free_one_page()
+and the other is
+free_hot_cold_page()->free_pcppages_bulk()->__free_one_page(). Each
+path has a race condition causing serious problems. This patch focuses
+on the first type of freepath; the following patch will solve the
+problem in the second type of freepath.
+
+In the first type of freepath, we fetch the migratetype of the page
+being freed without holding the zone lock, so it can be racy. There
+are two cases of this race.
+
+ 1. pages are added to the isolate buddy list after restoring the
+ original migratetype
+
+ CPU1 CPU2
+
+ get migratetype => return MIGRATE_ISOLATE
+ call free_one_page() with MIGRATE_ISOLATE
+
+ grab the zone lock
+ unisolate pageblock
+ release the zone lock
+
+ grab the zone lock
+ call __free_one_page() with MIGRATE_ISOLATE
+ freepage go into isolate buddy list,
+ although pageblock is already unisolated
+
+This can cause two problems. One is that we can't use this page
+anymore until the next isolation attempt on this pageblock, because
+the freepage is on the isolate buddy list. The other is that freepage
+accounting can be wrong due to merging between different buddy lists.
+Freepages on the isolate buddy list aren't counted as freepages, but
+ones on a normal buddy list are. If a merge happens, a buddy freepage
+on a normal buddy list is inevitably moved to the isolate buddy list
+without any adjustment of the freepage accounting, so the count can
+become incorrect.
+
+ 2. pages are added to a normal buddy list while the pageblock is
+ isolated. It is similar to the case above.
+
+This can also cause two problems. One is that we can't keep these
+freepages from being allocated. Although this pageblock is isolated,
+the freepage is added to a normal buddy list so that it can be
+allocated without any restriction. The other problem is the same as in
+case 1, that is, incorrect freepage accounting.
+
+This race condition can be prevented by checking the migratetype again
+while holding the zone lock. Because that is a somewhat heavy operation
+and isn't needed in the common case, we want to avoid rechecking as
+much as possible. So this patch introduces a new field,
+nr_isolate_pageblock, in struct zone to check whether the zone contains
+any isolated pageblock. With this, we can avoid re-checking the
+migratetype in the common case and do it only if there is an isolated
+pageblock or the migratetype is MIGRATE_ISOLATE. This solves the
+above-mentioned problems.
+
+Changes from v3:
+Add one more check in free_one_page() that tests whether the
+migratetype is MIGRATE_ISOLATE. Without this, the above-mentioned
+case 1 could happen.
+
+Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Acked-by: Minchan Kim <minchan@kernel.org>
+Acked-by: Michal Nazarewicz <mina86@mina86.com>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
+Cc: Mel Gorman <mgorman@suse.de>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
+Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
+Cc: Tang Chen <tangchen@cn.fujitsu.com>
+Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Cc: Wen Congyang <wency@cn.fujitsu.com>
+Cc: Marek Szyprowski <m.szyprowski@samsung.com>
+Cc: Laura Abbott <lauraa@codeaurora.org>
+Cc: Heesub Shin <heesub.shin@samsung.com>
+Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
+Cc: Ritesh Harjani <ritesh.list@gmail.com>
+Cc: Gioh Kim <gioh.kim@lge.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ include/linux/mmzone.h | 9 +++++++++
+ include/linux/page-isolation.h | 8 ++++++++
+ mm/page_alloc.c | 11 +++++++++--
+ mm/page_isolation.c | 2 ++
+ 4 files changed, 28 insertions(+), 2 deletions(-)
+
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -431,6 +431,15 @@ struct zone {
+ */
+ int nr_migrate_reserve_block;
+
++#ifdef CONFIG_MEMORY_ISOLATION
++ /*
++ * Number of isolated pageblock. It is used to solve incorrect
++ * freepage counting problem due to racy retrieving migratetype
++ * of pageblock. Protected by zone->lock.
++ */
++ unsigned long nr_isolate_pageblock;
++#endif
++
+ #ifdef CONFIG_MEMORY_HOTPLUG
+ /* see spanned/present_pages for more description */
+ seqlock_t span_seqlock;
+--- a/include/linux/page-isolation.h
++++ b/include/linux/page-isolation.h
+@@ -2,6 +2,10 @@
+ #define __LINUX_PAGEISOLATION_H
+
+ #ifdef CONFIG_MEMORY_ISOLATION
++static inline bool has_isolate_pageblock(struct zone *zone)
++{
++ return zone->nr_isolate_pageblock;
++}
+ static inline bool is_migrate_isolate_page(struct page *page)
+ {
+ return get_pageblock_migratetype(page) == MIGRATE_ISOLATE;
+@@ -11,6 +15,10 @@ static inline bool is_migrate_isolate(in
+ return migratetype == MIGRATE_ISOLATE;
+ }
+ #else
++static inline bool has_isolate_pageblock(struct zone *zone)
++{
++ return false;
++}
+ static inline bool is_migrate_isolate_page(struct page *page)
+ {
+ return false;
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -740,9 +740,16 @@ static void free_one_page(struct zone *z
+ if (nr_scanned)
+ __mod_zone_page_state(zone, NR_PAGES_SCANNED, -nr_scanned);
+
++ if (unlikely(has_isolate_pageblock(zone) ||
++ is_migrate_isolate(migratetype))) {
++ migratetype = get_pfnblock_migratetype(page, pfn);
++ if (is_migrate_isolate(migratetype))
++ goto skip_counting;
++ }
++ __mod_zone_freepage_state(zone, 1 << order, migratetype);
++
++skip_counting:
+ __free_one_page(page, pfn, zone, order, migratetype);
+- if (unlikely(!is_migrate_isolate(migratetype)))
+- __mod_zone_freepage_state(zone, 1 << order, migratetype);
+ spin_unlock(&zone->lock);
+ }
+
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -60,6 +60,7 @@ out:
+ int migratetype = get_pageblock_migratetype(page);
+
+ set_pageblock_migratetype(page, MIGRATE_ISOLATE);
++ zone->nr_isolate_pageblock++;
+ nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE);
+
+ __mod_zone_freepage_state(zone, -nr_pages, migratetype);
+@@ -83,6 +84,7 @@ void unset_migratetype_isolate(struct pa
+ nr_pages = move_freepages_block(zone, page, migratetype);
+ __mod_zone_freepage_state(zone, nr_pages, migratetype);
+ set_pageblock_migratetype(page, migratetype);
++ zone->nr_isolate_pageblock--;
+ out:
+ spin_unlock_irqrestore(&zone->lock, flags);
+ }
--- /dev/null
+From 8f82b55dd558a74fc33d69a1f2c2605d0cd2c908 Mon Sep 17 00:00:00 2001
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Date: Thu, 13 Nov 2014 15:19:18 -0800
+Subject: mm/page_alloc: move freepage counting logic to __free_one_page()
+
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+
+commit 8f82b55dd558a74fc33d69a1f2c2605d0cd2c908 upstream.
+
+All the callers of __free_one_page() have similar freepage counting
+logic, so we can move it into __free_one_page(). This reduces the line
+count and helps future maintenance.
+
+This is also a preparation step for "mm/page_alloc: restrict max order
+of merging on isolated pageblock", which fixes the freepage counting
+problem for freepages of more than pageblock order.
+
+Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
+Cc: Mel Gorman <mgorman@suse.de>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Minchan Kim <minchan@kernel.org>
+Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
+Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
+Cc: Tang Chen <tangchen@cn.fujitsu.com>
+Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Cc: Wen Congyang <wency@cn.fujitsu.com>
+Cc: Marek Szyprowski <m.szyprowski@samsung.com>
+Cc: Michal Nazarewicz <mina86@mina86.com>
+Cc: Laura Abbott <lauraa@codeaurora.org>
+Cc: Heesub Shin <heesub.shin@samsung.com>
+Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
+Cc: Ritesh Harjani <ritesh.list@gmail.com>
+Cc: Gioh Kim <gioh.kim@lge.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/page_alloc.c | 14 +++-----------
+ 1 file changed, 3 insertions(+), 11 deletions(-)
+
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -578,6 +578,8 @@ static inline void __free_one_page(struc
+ return;
+
+ VM_BUG_ON(migratetype == -1);
++ if (!is_migrate_isolate(migratetype))
++ __mod_zone_freepage_state(zone, 1 << order, migratetype);
+
+ page_idx = pfn & ((1 << MAX_ORDER) - 1);
+
+@@ -716,14 +718,9 @@ static void free_pcppages_bulk(struct zo
+ /* must delete as __free_one_page list manipulates */
+ list_del(&page->lru);
+ mt = get_freepage_migratetype(page);
+- if (unlikely(has_isolate_pageblock(zone))) {
++ if (unlikely(has_isolate_pageblock(zone)))
+ mt = get_pageblock_migratetype(page);
+- if (is_migrate_isolate(mt))
+- goto skip_counting;
+- }
+- __mod_zone_freepage_state(zone, 1, mt);
+
+-skip_counting:
+ /* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
+ __free_one_page(page, page_to_pfn(page), zone, 0, mt);
+ trace_mm_page_pcpu_drain(page, 0, mt);
+@@ -746,12 +743,7 @@ static void free_one_page(struct zone *z
+ if (unlikely(has_isolate_pageblock(zone) ||
+ is_migrate_isolate(migratetype))) {
+ migratetype = get_pfnblock_migratetype(page, pfn);
+- if (is_migrate_isolate(migratetype))
+- goto skip_counting;
+ }
+- __mod_zone_freepage_state(zone, 1 << order, migratetype);
+-
+-skip_counting:
+ __free_one_page(page, pfn, zone, order, migratetype);
+ spin_unlock(&zone->lock);
+ }
--- /dev/null
+From 3c605096d3158216ba9326a16266f6ba128c2c8d Mon Sep 17 00:00:00 2001
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Date: Thu, 13 Nov 2014 15:19:21 -0800
+Subject: mm/page_alloc: restrict max order of merging on isolated pageblock
+
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+
+commit 3c605096d3158216ba9326a16266f6ba128c2c8d upstream.
+
+The current pageblock isolation logic can isolate each pageblock
+individually. This causes a freepage accounting problem if a freepage
+of pageblock order on an isolated pageblock is merged with another
+freepage on a normal pageblock. We can prevent such merging by
+restricting the max order of merging to pageblock order when the
+freepage is on an isolated pageblock.
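+
+As a concrete illustration (assuming the common x86 configuration of
+pageblock_order == 9, i.e. 512-page pageblocks): an order-9 freepage on
+a normal pageblock contributes 512 to NR_FREE_PAGES when it is freed.
+If the adjacent order-9 page on an isolated pageblock is freed later,
+it is not counted, but __free_one_page() happily merges the two into an
+order-10 page on the isolate buddy list. The 512 pages from the normal
+pageblock are now unallocatable yet still counted, so NR_FREE_PAGES is
+overstated by 512 until the accounting is corrected elsewhere.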
+
+A side effect of this change is that there can be un-merged buddy
+freepages even after pageblock isolation is finished, because undoing
+pageblock isolation merely moves freepages from the isolate buddy list
+to a normal buddy list without considering merging. So the patch also
+makes undoing pageblock isolation consider freepage merging. On
+un-isolation, a freepage of more than pageblock order and its buddy are
+checked. If they are on a normal pageblock, instead of just moving the
+freepage, we isolate it and free it so that it gets merged.
+
+Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
+Cc: Mel Gorman <mgorman@suse.de>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Minchan Kim <minchan@kernel.org>
+Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
+Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
+Cc: Tang Chen <tangchen@cn.fujitsu.com>
+Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Cc: Wen Congyang <wency@cn.fujitsu.com>
+Cc: Marek Szyprowski <m.szyprowski@samsung.com>
+Cc: Michal Nazarewicz <mina86@mina86.com>
+Cc: Laura Abbott <lauraa@codeaurora.org>
+Cc: Heesub Shin <heesub.shin@samsung.com>
+Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
+Cc: Ritesh Harjani <ritesh.list@gmail.com>
+Cc: Gioh Kim <gioh.kim@lge.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/internal.h | 25 +++++++++++++++++++++++++
+ mm/page_alloc.c | 41 ++++++++++++++---------------------------
+ mm/page_isolation.c | 41 +++++++++++++++++++++++++++++++++++++++--
+ 3 files changed, 78 insertions(+), 29 deletions(-)
+
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -108,6 +108,31 @@ extern pmd_t *mm_find_pmd(struct mm_stru
+ /*
+ * in mm/page_alloc.c
+ */
++
++/*
++ * Locate the struct page for both the matching buddy in our
++ * pair (buddy1) and the combined O(n+1) page they form (page).
++ *
++ * 1) Any buddy B1 will have an order O twin B2 which satisfies
++ * the following equation:
++ * B2 = B1 ^ (1 << O)
++ * For example, if the starting buddy (buddy2) is #8 its order
++ * 1 buddy is #10:
++ * B2 = 8 ^ (1 << 1) = 8 ^ 2 = 10
++ *
++ * 2) Any buddy B will have an order O+1 parent P which
++ * satisfies the following equation:
++ * P = B & ~(1 << O)
++ *
++ * Assumption: *_mem_map is contiguous at least up to MAX_ORDER
++ */
++static inline unsigned long
++__find_buddy_index(unsigned long page_idx, unsigned int order)
++{
++ return page_idx ^ (1 << order);
++}
++
++extern int __isolate_free_page(struct page *page, unsigned int order);
+ extern void __free_pages_bootmem(struct page *page, unsigned int order);
+ extern void prep_compound_page(struct page *page, unsigned long order);
+ #ifdef CONFIG_MEMORY_FAILURE
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -468,29 +468,6 @@ static inline void rmv_page_order(struct
+ }
+
+ /*
+- * Locate the struct page for both the matching buddy in our
+- * pair (buddy1) and the combined O(n+1) page they form (page).
+- *
+- * 1) Any buddy B1 will have an order O twin B2 which satisfies
+- * the following equation:
+- * B2 = B1 ^ (1 << O)
+- * For example, if the starting buddy (buddy2) is #8 its order
+- * 1 buddy is #10:
+- * B2 = 8 ^ (1 << 1) = 8 ^ 2 = 10
+- *
+- * 2) Any buddy B will have an order O+1 parent P which
+- * satisfies the following equation:
+- * P = B & ~(1 << O)
+- *
+- * Assumption: *_mem_map is contiguous at least up to MAX_ORDER
+- */
+-static inline unsigned long
+-__find_buddy_index(unsigned long page_idx, unsigned int order)
+-{
+- return page_idx ^ (1 << order);
+-}
+-
+-/*
+ * This function checks whether a page is free && is the buddy
+ * we can do coalesce a page and its buddy if
+ * (a) the buddy is not in a hole &&
+@@ -570,6 +547,7 @@ static inline void __free_one_page(struc
+ unsigned long combined_idx;
+ unsigned long uninitialized_var(buddy_idx);
+ struct page *buddy;
++ int max_order = MAX_ORDER;
+
+ VM_BUG_ON(!zone_is_initialized(zone));
+
+@@ -578,15 +556,24 @@ static inline void __free_one_page(struc
+ return;
+
+ VM_BUG_ON(migratetype == -1);
+- if (!is_migrate_isolate(migratetype))
++ if (is_migrate_isolate(migratetype)) {
++ /*
++ * We restrict max order of merging to prevent merge
++ * between freepages on isolate pageblock and normal
++ * pageblock. Without this, pageblock isolation
++ * could cause incorrect freepage accounting.
++ */
++ max_order = min(MAX_ORDER, pageblock_order + 1);
++ } else {
+ __mod_zone_freepage_state(zone, 1 << order, migratetype);
++ }
+
+- page_idx = pfn & ((1 << MAX_ORDER) - 1);
++ page_idx = pfn & ((1 << max_order) - 1);
+
+ VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
+ VM_BUG_ON_PAGE(bad_range(zone, page), page);
+
+- while (order < MAX_ORDER-1) {
++ while (order < max_order - 1) {
+ buddy_idx = __find_buddy_index(page_idx, order);
+ buddy = page + (buddy_idx - page_idx);
+ if (!page_is_buddy(page, buddy, order))
+@@ -1487,7 +1474,7 @@ void split_page(struct page *page, unsig
+ }
+ EXPORT_SYMBOL_GPL(split_page);
+
+-static int __isolate_free_page(struct page *page, unsigned int order)
++int __isolate_free_page(struct page *page, unsigned int order)
+ {
+ unsigned long watermark;
+ struct zone *zone;
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -76,17 +76,54 @@ void unset_migratetype_isolate(struct pa
+ {
+ struct zone *zone;
+ unsigned long flags, nr_pages;
++ struct page *isolated_page = NULL;
++ unsigned int order;
++ unsigned long page_idx, buddy_idx;
++ struct page *buddy;
+
+ zone = page_zone(page);
+ spin_lock_irqsave(&zone->lock, flags);
+ if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
+ goto out;
+- nr_pages = move_freepages_block(zone, page, migratetype);
+- __mod_zone_freepage_state(zone, nr_pages, migratetype);
++
++ /*
++ * Because freepage with more than pageblock_order on isolated
++ * pageblock is restricted to merge due to freepage counting problem,
++ * it is possible that there is free buddy page.
++ * move_freepages_block() doesn't care of merge so we need other
++ * approach in order to merge them. Isolation and free will make
++ * these pages to be merged.
++ */
++ if (PageBuddy(page)) {
++ order = page_order(page);
++ if (order >= pageblock_order) {
++ page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
++ buddy_idx = __find_buddy_index(page_idx, order);
++ buddy = page + (buddy_idx - page_idx);
++
++ if (!is_migrate_isolate_page(buddy)) {
++ __isolate_free_page(page, order);
++ set_page_refcounted(page);
++ isolated_page = page;
++ }
++ }
++ }
++
++ /*
++ * If we isolate freepage with more than pageblock_order, there
++ * should be no freepage in the range, so we could avoid costly
++ * pageblock scanning for freepage moving.
++ */
++ if (!isolated_page) {
++ nr_pages = move_freepages_block(zone, page, migratetype);
++ __mod_zone_freepage_state(zone, nr_pages, migratetype);
++ }
+ set_pageblock_migratetype(page, migratetype);
+ zone->nr_isolate_pageblock--;
+ out:
+ spin_unlock_irqrestore(&zone->lock, flags);
++ if (isolated_page)
++ __free_pages(isolated_page, order);
+ }
+
+ static inline struct page *
sparc64-fix-crashes-in-schizo_pcierr_intr_other.patch
sparc64-do-irq_-enter-exit-around-generic_smp_call_function.patch
sparc32-implement-xchg-and-atomic_xchg-using-atomic_hash-locks.patch
+zram-avoid-kunmap_atomic-of-a-null-pointer.patch
+mm-page_alloc-fix-incorrect-isolation-behavior-by-rechecking-migratetype.patch
+mm-page_alloc-add-freepage-on-isolate-pageblock-to-correct-buddy-list.patch
+mm-page_alloc-move-freepage-counting-logic-to-__free_one_page.patch
+mm-page_alloc-restrict-max-order-of-merging-on-isolated-pageblock.patch
--- /dev/null
+From c406515239376fc93a30d5d03192182160cbd3fb Mon Sep 17 00:00:00 2001
+From: Weijie Yang <weijie.yang@samsung.com>
+Date: Thu, 13 Nov 2014 15:19:05 -0800
+Subject: zram: avoid kunmap_atomic() of a NULL pointer
+
+From: Weijie Yang <weijie.yang@samsung.com>
+
+commit c406515239376fc93a30d5d03192182160cbd3fb upstream.
+
+zram could kunmap_atomic() a NULL pointer in a rare situation: a zram
+page becomes a fully zeroed page after a partial write I/O. The
+current code doesn't handle this case and performs kunmap_atomic() on
+a NULL pointer, which panics the kernel.
+
+This patch fixes this issue.
+
+Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
+Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
+Cc: Dan Streetman <ddstreet@ieee.org>
+Cc: Nitin Gupta <ngupta@vflare.org>
+Cc: Weijie Yang <weijie.yang.kh@gmail.com>
+Acked-by: Jerome Marchand <jmarchan@redhat.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/block/zram/zram_drv.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -476,7 +476,8 @@ static int zram_bvec_write(struct zram *
+ }
+
+ if (page_zero_filled(uncmem)) {
+- kunmap_atomic(user_mem);
++ if (user_mem)
++ kunmap_atomic(user_mem);
+ /* Free memory associated with this sector now. */
+ bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
+ zram_free_page(zram, index);