--- /dev/null
+From 401507d67d5c2854f5a88b3f93f64fc6f267bca5 Mon Sep 17 00:00:00 2001
+From: Wang Nan <wangnan0@huawei.com>
+Date: Wed, 29 Oct 2014 14:50:18 -0700
+Subject: cgroup/kmemleak: add kmemleak_free() for cgroup deallocations.
+
+From: Wang Nan <wangnan0@huawei.com>
+
+commit 401507d67d5c2854f5a88b3f93f64fc6f267bca5 upstream.
+
+Commit ff7ee93f4715 ("cgroup/kmemleak: Annotate alloc_page() for cgroup
+allocations") introduced kmemleak_alloc() for alloc_page_cgroup(), but
+the corresponding kmemleak_free() is missing, which wrongly disables
+kmemleak after memory offlining. A log is pasted at the end of this
+commit message.
+
+This patch adds kmemleak_free() to free_page_cgroup(). During page
+offlining, it removes the corresponding entries from the kmemleak
+rbtree, so the freed memory can be allocated again by other subsystems
+without killing kmemleak.
+
+ bash # for x in 1 2 3 4; do echo offline > /sys/devices/system/memory/memory$x/state ; sleep 1; done ; dmesg | grep leak
+
+ Offlined Pages 32768
+ kmemleak: Cannot insert 0xffff880016969000 into the object search tree (overlaps existing)
+ CPU: 0 PID: 412 Comm: sleep Not tainted 3.17.0-rc5+ #86
+ Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
+ Call Trace:
+ dump_stack+0x46/0x58
+ create_object+0x266/0x2c0
+ kmemleak_alloc+0x26/0x50
+ kmem_cache_alloc+0xd3/0x160
+ __sigqueue_alloc+0x49/0xd0
+ __send_signal+0xcb/0x410
+ send_signal+0x45/0x90
+ __group_send_sig_info+0x13/0x20
+ do_notify_parent+0x1bb/0x260
+ do_exit+0x767/0xa40
+ do_group_exit+0x44/0xa0
+ SyS_exit_group+0x17/0x20
+ system_call_fastpath+0x16/0x1b
+
+ kmemleak: Kernel memory leak detector disabled
+ kmemleak: Object 0xffff880016900000 (size 524288):
+ kmemleak: comm "swapper/0", pid 0, jiffies 4294667296
+ kmemleak: min_count = 0
+ kmemleak: count = 0
+ kmemleak: flags = 0x1
+ kmemleak: checksum = 0
+ kmemleak: backtrace:
+ log_early+0x63/0x77
+ kmemleak_alloc+0x4b/0x50
+ init_section_page_cgroup+0x7f/0xf5
+ page_cgroup_init+0xc5/0xd0
+ start_kernel+0x333/0x408
+ x86_64_start_reservations+0x2a/0x2c
+ x86_64_start_kernel+0xf5/0xfc
+
+Fixes: ff7ee93f4715 (cgroup/kmemleak: Annotate alloc_page() for cgroup allocations)
+Signed-off-by: Wang Nan <wangnan0@huawei.com>
+Acked-by: Johannes Weiner <hannes@cmpxchg.org>
+Acked-by: Michal Hocko <mhocko@suse.cz>
+Cc: Steven Rostedt <rostedt@goodmis.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/page_cgroup.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/mm/page_cgroup.c
++++ b/mm/page_cgroup.c
+@@ -171,6 +171,7 @@ static void free_page_cgroup(void *addr)
+ sizeof(struct page_cgroup) * PAGES_PER_SECTION;
+
+ BUG_ON(PageReserved(page));
++ kmemleak_free(addr);
+ free_pages_exact(addr, table_size);
+ }
+ }
--- /dev/null
+From ea5d05b34aca25c066e0699512d0ffbd8ee6ac3e Mon Sep 17 00:00:00 2001
+From: Jan Kara <jack@suse.cz>
+Date: Wed, 29 Oct 2014 14:50:44 -0700
+Subject: lib/bitmap.c: fix undefined shift in __bitmap_shift_{left|right}()
+
+From: Jan Kara <jack@suse.cz>
+
+commit ea5d05b34aca25c066e0699512d0ffbd8ee6ac3e upstream.
+
+If __bitmap_shift_left() or __bitmap_shift_right() are asked to shift by
+a multiple of BITS_PER_LONG, they will try to shift a long value by
+BITS_PER_LONG bits which is undefined. Change the functions to avoid
+the undefined shift.
+
+Coverity id: 1192175
+Coverity id: 1192174
+Signed-off-by: Jan Kara <jack@suse.cz>
+Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ lib/bitmap.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -131,7 +131,9 @@ void __bitmap_shift_right(unsigned long
+ lower = src[off + k];
+ if (left && off + k == lim - 1)
+ lower &= mask;
+- dst[k] = upper << (BITS_PER_LONG - rem) | lower >> rem;
++ dst[k] = lower >> rem;
++ if (rem)
++ dst[k] |= upper << (BITS_PER_LONG - rem);
+ if (left && k == lim - 1)
+ dst[k] &= mask;
+ }
+@@ -172,7 +174,9 @@ void __bitmap_shift_left(unsigned long *
+ upper = src[k];
+ if (left && k == lim - 1)
+ upper &= (1UL << left) - 1;
+- dst[k + off] = lower >> (BITS_PER_LONG - rem) | upper << rem;
++ dst[k + off] = upper << rem;
++ if (rem)
++ dst[k + off] |= lower >> (BITS_PER_LONG - rem);
+ if (left && k + off == lim - 1)
+ dst[k + off] &= (1UL << left) - 1;
+ }
--- /dev/null
+From 5ddacbe92b806cd5b4f8f154e8e46ac267fff55c Mon Sep 17 00:00:00 2001
+From: Yu Zhao <yuzhao@google.com>
+Date: Wed, 29 Oct 2014 14:50:26 -0700
+Subject: mm: free compound page with correct order
+
+From: Yu Zhao <yuzhao@google.com>
+
+commit 5ddacbe92b806cd5b4f8f154e8e46ac267fff55c upstream.
+
+A compound page should be freed by put_page() or free_pages() with the
+correct order. Not doing so leaks the tail pages.
+
+The compound order can be obtained from compound_order(), or we could
+use HPAGE_PMD_ORDER in our case. Some people would argue the latter is
+faster, but I prefer the former, which is more general.
+
+This bug was observed not just on our servers (the worst case we saw is
+11G leaked on a 48G machine) but also on our workstations running an
+Ubuntu-based distro.
+
+ $ cat /proc/vmstat | grep thp_zero_page_alloc
+ thp_zero_page_alloc 55
+ thp_zero_page_alloc_failed 0
+
+This means (thp_zero_page_alloc - 1) * (2M - 4K) of memory has leaked.
+
+Fixes: 97ae17497e99 ("thp: implement refcounting for huge zero page")
+Signed-off-by: Yu Zhao <yuzhao@google.com>
+Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: Andrea Arcangeli <aarcange@redhat.com>
+Cc: Mel Gorman <mel@csn.ul.ie>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Bob Liu <lliubbo@gmail.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/huge_memory.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -199,7 +199,7 @@ retry:
+ preempt_disable();
+ if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
+ preempt_enable();
+- __free_page(zero_page);
++ __free_pages(zero_page, compound_order(zero_page));
+ goto retry;
+ }
+
+@@ -231,7 +231,7 @@ static unsigned long shrink_huge_zero_pa
+ if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
+ struct page *zero_page = xchg(&huge_zero_page, NULL);
+ BUG_ON(zero_page == NULL);
+- __free_page(zero_page);
++ __free_pages(zero_page, compound_order(zero_page));
+ return HPAGE_PMD_NR;
+ }
+
--- /dev/null
+From 2f7dd7a4100ad4affcb141605bef178ab98ccb18 Mon Sep 17 00:00:00 2001
+From: Johannes Weiner <hannes@cmpxchg.org>
+Date: Thu, 2 Oct 2014 16:16:57 -0700
+Subject: mm: memcontrol: do not iterate uninitialized memcgs
+
+From: Johannes Weiner <hannes@cmpxchg.org>
+
+commit 2f7dd7a4100ad4affcb141605bef178ab98ccb18 upstream.
+
+The cgroup iterators yield css objects that have not yet gone through
+css_online(), but they are not complete memcgs at this point and so the
+memcg iterators should not return them. Commit d8ad30559715 ("mm/memcg:
+iteration skip memcgs not yet fully initialized") set out to implement
+exactly this, but it uses CSS_ONLINE, a cgroup-internal flag that does
+not meet the ordering requirements for memcg, and so the iterator may
+skip over initialized groups, or return partially initialized memcgs.
+
+The cgroup core cannot reasonably provide a clear answer on whether the
+object around the css has been fully initialized, as that depends on
+controller-specific locking and lifetime rules. Thus, introduce a
+memcg-specific flag that is set after the memcg has been initialized in
+css_online(), and read before mem_cgroup_iter() callers access the memcg
+members.
+
+Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Tejun Heo <tj@kernel.org>
+Acked-by: Michal Hocko <mhocko@suse.cz>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: <stable@vger.kernel.org> [3.12+]
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/memcontrol.c | 35 +++++++++++++++++++++++++++++++----
+ 1 file changed, 31 insertions(+), 4 deletions(-)
+
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -292,6 +292,9 @@ struct mem_cgroup {
+ /* vmpressure notifications */
+ struct vmpressure vmpressure;
+
++ /* css_online() has been completed */
++ int initialized;
++
+ /*
+ * the counter to account for mem+swap usage.
+ */
+@@ -1127,9 +1130,21 @@ skip_node:
+ * skipping css reference should be safe.
+ */
+ if (next_css) {
+- if ((next_css == &root->css) ||
+- ((next_css->flags & CSS_ONLINE) && css_tryget(next_css)))
+- return mem_cgroup_from_css(next_css);
++ struct mem_cgroup *memcg = mem_cgroup_from_css(next_css);
++
++ if (next_css == &root->css)
++ return memcg;
++
++ if (css_tryget(next_css)) {
++ /*
++ * Make sure the memcg is initialized:
++			 * mem_cgroup_css_online() orders the
++ * initialization against setting the flag.
++ */
++ if (smp_load_acquire(&memcg->initialized))
++ return memcg;
++ css_put(next_css);
++ }
+
+ prev_css = next_css;
+ goto skip_node;
+@@ -6538,6 +6553,7 @@ mem_cgroup_css_online(struct cgroup_subs
+ {
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ struct mem_cgroup *parent = mem_cgroup_from_css(css_parent(css));
++ int ret;
+
+ if (css->cgroup->id > MEM_CGROUP_ID_MAX)
+ return -ENOSPC;
+@@ -6574,7 +6590,18 @@ mem_cgroup_css_online(struct cgroup_subs
+ }
+ mutex_unlock(&memcg_create_mutex);
+
+- return memcg_init_kmem(memcg, &mem_cgroup_subsys);
++ ret = memcg_init_kmem(memcg, &mem_cgroup_subsys);
++ if (ret)
++ return ret;
++
++ /*
++ * Make sure the memcg is initialized: mem_cgroup_iter()
++ * orders reading memcg->initialized against its callers
++ * reading the memcg members.
++ */
++ smp_store_release(&memcg->initialized, 1);
++
++ return 0;
+ }
+
+ /*
--- /dev/null
+From 84ce0f0e94ac97217398b3b69c21c7a62ebeed05 Mon Sep 17 00:00:00 2001
+From: Jan Kara <jack@suse.cz>
+Date: Wed, 22 Oct 2014 20:13:39 -0600
+Subject: scsi: Fix error handling in SCSI_IOCTL_SEND_COMMAND
+
+From: Jan Kara <jack@suse.cz>
+
+commit 84ce0f0e94ac97217398b3b69c21c7a62ebeed05 upstream.
+
+When sg_scsi_ioctl() fails to prepare the request to submit in
+blk_rq_map_kern(), we jump to a label where we just end up copying the
+(luckily zeroed-out) kernel buffer to userspace instead of reporting
+the error. Fix the problem by jumping to the right label.
+
+CC: Jens Axboe <axboe@kernel.dk>
+CC: linux-scsi@vger.kernel.org
+Coverity-id: 1226871
+Signed-off-by: Jan Kara <jack@suse.cz>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Fixed up the now-unused out label.
+
+Signed-off-by: Jens Axboe <axboe@fb.com>
+
+---
+ block/scsi_ioctl.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/block/scsi_ioctl.c
++++ b/block/scsi_ioctl.c
+@@ -489,7 +489,7 @@ int sg_scsi_ioctl(struct request_queue *
+
+ if (bytes && blk_rq_map_kern(q, rq, buffer, bytes, __GFP_WAIT)) {
+ err = DRIVER_ERROR << 24;
+- goto out;
++ goto error;
+ }
+
+ memset(sense, 0, sizeof(sense));
+@@ -499,7 +499,6 @@ int sg_scsi_ioctl(struct request_queue *
+
+ blk_execute_rq(q, disk, rq, 0);
+
+-out:
+ err = rq->errors & 0xff; /* only 8 bit SCSI status */
+ if (err) {
+ if (rq->sense_len && rq->sense) {
usb-do-not-allow-usb_alloc_streams-on-unconfigured-devices.patch
usb-kobil_sct-fix-non-atomic-allocation-in-write-path.patch
usb-remove-references-to-non-existent-plat_s5p-symbol.patch
+sh-fix-sh770x-scif-memory-regions.patch
+mm-free-compound-page-with-correct-order.patch
+cgroup-kmemleak-add-kmemleak_free-for-cgroup-deallocations.patch
+mm-memcontrol-do-not-iterate-uninitialized-memcgs.patch
+lib-bitmap.c-fix-undefined-shift-in-__bitmap_shift_-left-right.patch
+scsi-fix-error-handling-in-scsi_ioctl_send_command.patch
--- /dev/null
+From 5417421b270229bfce0795ccc99a4b481e4954ca Mon Sep 17 00:00:00 2001
+From: Andriy Skulysh <askulysh@gmail.com>
+Date: Wed, 29 Oct 2014 14:50:59 -0700
+Subject: sh: fix sh770x SCIF memory regions
+
+From: Andriy Skulysh <askulysh@gmail.com>
+
+commit 5417421b270229bfce0795ccc99a4b481e4954ca upstream.
+
+The scif1_resources and scif2_resources regions overlap. The actual
+SCIF region size is 0x10.
+
+This is a regression from commit d850acf975be ("sh: Declare SCIF
+register base and IRQ as resources").
+
+Signed-off-by: Andriy Skulysh <askulysh@gmail.com>
+Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+Cc: Geert Uytterhoeven <geert@linux-m68k.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/sh/kernel/cpu/sh3/setup-sh770x.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/arch/sh/kernel/cpu/sh3/setup-sh770x.c
++++ b/arch/sh/kernel/cpu/sh3/setup-sh770x.c
+@@ -118,7 +118,7 @@ static struct plat_sci_port scif0_platfo
+ };
+
+ static struct resource scif0_resources[] = {
+- DEFINE_RES_MEM(0xfffffe80, 0x100),
++ DEFINE_RES_MEM(0xfffffe80, 0x10),
+ DEFINE_RES_IRQ(evt2irq(0x4e0)),
+ };
+
+@@ -143,7 +143,7 @@ static struct plat_sci_port scif1_platfo
+ };
+
+ static struct resource scif1_resources[] = {
+- DEFINE_RES_MEM(0xa4000150, 0x100),
++ DEFINE_RES_MEM(0xa4000150, 0x10),
+ DEFINE_RES_IRQ(evt2irq(0x900)),
+ };
+
+@@ -169,7 +169,7 @@ static struct plat_sci_port scif2_platfo
+ };
+
+ static struct resource scif2_resources[] = {
+- DEFINE_RES_MEM(0xa4000140, 0x100),
++ DEFINE_RES_MEM(0xa4000140, 0x10),
+ DEFINE_RES_IRQ(evt2irq(0x880)),
+ };
+