--- /dev/null
+From 40f705a736eac10e7dca7ab5dd5ed675a6df031d Mon Sep 17 00:00:00 2001
+From: Jann Horn <jann@thejh.net>
+Date: Wed, 9 Sep 2015 15:38:30 -0700
+Subject: fs: Don't dump core if the corefile would become world-readable.
+
+From: Jann Horn <jann@thejh.net>
+
+commit 40f705a736eac10e7dca7ab5dd5ed675a6df031d upstream.
+
+On a filesystem like vfat, all files are created with the same owner
+and mode independent of who created the file. When a vfat filesystem
+is mounted with root as owner of all files and read access for everyone,
+root's processes left world-readable coredumps on it (but other
+users' processes only left empty corefiles when given write access
+because of the uid mismatch).
+
+Given that the old behavior was inconsistent and insecure, I don't see
+a problem with changing it. Now, all processes refuse to dump core unless
+the resulting corefile will only be readable by their owner.
+
+Signed-off-by: Jann Horn <jann@thejh.net>
+Acked-by: Kees Cook <keescook@chromium.org>
+Cc: Al Viro <viro@zeniv.linux.org.uk>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/coredump.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -678,11 +678,15 @@ void do_coredump(const siginfo_t *siginf
+ if (!S_ISREG(inode->i_mode))
+ goto close_fail;
+ /*
+- * Dont allow local users get cute and trick others to coredump
+- * into their pre-created files.
++ * Don't dump core if the filesystem changed owner or mode
++ * of the file during file creation. This is an issue when
++ * a process dumps core while its cwd is e.g. on a vfat
++ * filesystem.
+ */
+ if (!uid_eq(inode->i_uid, current_fsuid()))
+ goto close_fail;
++ if ((inode->i_mode & 0677) != 0600)
++ goto close_fail;
+ if (!(cprm.file->f_mode & FMODE_CAN_WRITE))
+ goto close_fail;
+ if (do_truncate(cprm.file->f_path.dentry, 0, 0, cprm.file))
--- /dev/null
+From fbb1816942c04429e85dbf4c1a080accc534299e Mon Sep 17 00:00:00 2001
+From: Jann Horn <jann@thejh.net>
+Date: Wed, 9 Sep 2015 15:38:28 -0700
+Subject: fs: if a coredump already exists, unlink and recreate with O_EXCL
+
+From: Jann Horn <jann@thejh.net>
+
+commit fbb1816942c04429e85dbf4c1a080accc534299e upstream.
+
+It was possible for an attacking user to trick root (or another user) into
+writing his coredumps into an attacker-readable, pre-existing file using
+rename() or link(), causing the disclosure of secret data from the victim
+process' virtual memory. Depending on the configuration, it was also
+possible to trick root into overwriting system files with coredumps. Fix
+that issue by never writing coredumps into existing files.
+
+Requirements for the attack:
+ - The attack only applies if the victim's process has a nonzero
+ RLIMIT_CORE and is dumpable.
+ - The attacker can trick the victim into coredumping into an
+ attacker-writable directory D, either because the core_pattern is
+ relative and the victim's cwd is attacker-writable or because an
+ absolute core_pattern pointing to a world-writable directory is used.
+ - The attacker has one of these:
+ A: on a system with protected_hardlinks=0:
+ execute access to a folder containing a victim-owned,
+ attacker-readable file on the same partition as D, and the
+ victim-owned file will be deleted before the main part of the attack
+ takes place. (In practice, there are lots of files that fulfill
+ this condition, e.g. entries in Debian's /var/lib/dpkg/info/.)
+ This does not apply to most Linux systems because most distros set
+ protected_hardlinks=1.
+ B: on a system with protected_hardlinks=1:
+ execute access to a folder containing a victim-owned,
+ attacker-readable and attacker-writable file on the same partition
+ as D, and the victim-owned file will be deleted before the main part
+ of the attack takes place.
+ (This seems to be uncommon.)
+ C: on any system, independent of protected_hardlinks:
+ write access to a non-sticky folder containing a victim-owned,
+ attacker-readable file on the same partition as D
+ (This seems to be uncommon.)
+
+The basic idea is that the attacker moves the victim-owned file to where
+he expects the victim process to dump its core. The victim process dumps
+its core into the existing file, and the attacker reads the coredump from
+it.
+
+If the attacker can't move the file because he does not have write access
+to the containing directory, he can instead link the file to a directory
+he controls, then wait for the original link to the file to be deleted
+(because the kernel checks that the link count of the corefile is 1).
+
+A less reliable variant that requires D to be non-sticky works with link()
+and does not require deletion of the original link: link() the file into
+D, but then unlink() it directly before the kernel performs the link count
+check.
+
+On systems with protected_hardlinks=0, this variant allows an attacker to
+not only gain information from coredumps, but also clobber existing,
+victim-writable files with coredumps. (This could theoretically lead to a
+privilege escalation.)
+
+Signed-off-by: Jann Horn <jann@thejh.net>
+Cc: Kees Cook <keescook@chromium.org>
+Cc: Al Viro <viro@zeniv.linux.org.uk>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/coredump.c | 38 ++++++++++++++++++++++++++++++++------
+ 1 file changed, 32 insertions(+), 6 deletions(-)
+
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -506,10 +506,10 @@ void do_coredump(const siginfo_t *siginf
+ const struct cred *old_cred;
+ struct cred *cred;
+ int retval = 0;
+- int flag = 0;
+ int ispipe;
+ struct files_struct *displaced;
+- bool need_nonrelative = false;
++ /* require nonrelative corefile path and be extra careful */
++ bool need_suid_safe = false;
+ bool core_dumped = false;
+ static atomic_t core_dump_count = ATOMIC_INIT(0);
+ struct coredump_params cprm = {
+@@ -543,9 +543,8 @@ void do_coredump(const siginfo_t *siginf
+ */
+ if (__get_dumpable(cprm.mm_flags) == SUID_DUMP_ROOT) {
+ /* Setuid core dump mode */
+- flag = O_EXCL; /* Stop rewrite attacks */
+ cred->fsuid = GLOBAL_ROOT_UID; /* Dump root private */
+- need_nonrelative = true;
++ need_suid_safe = true;
+ }
+
+ retval = coredump_wait(siginfo->si_signo, &core_state);
+@@ -626,7 +625,7 @@ void do_coredump(const siginfo_t *siginf
+ if (cprm.limit < binfmt->min_coredump)
+ goto fail_unlock;
+
+- if (need_nonrelative && cn.corename[0] != '/') {
++ if (need_suid_safe && cn.corename[0] != '/') {
+ printk(KERN_WARNING "Pid %d(%s) can only dump core "\
+ "to fully qualified path!\n",
+ task_tgid_vnr(current), current->comm);
+@@ -634,8 +633,35 @@ void do_coredump(const siginfo_t *siginf
+ goto fail_unlock;
+ }
+
++ /*
++ * Unlink the file if it exists unless this is a SUID
++ * binary - in that case, we're running around with root
++ * privs and don't want to unlink another user's coredump.
++ */
++ if (!need_suid_safe) {
++ mm_segment_t old_fs;
++
++ old_fs = get_fs();
++ set_fs(KERNEL_DS);
++ /*
++ * If it doesn't exist, that's fine. If there's some
++ * other problem, we'll catch it at the filp_open().
++ */
++ (void) sys_unlink((const char __user *)cn.corename);
++ set_fs(old_fs);
++ }
++
++ /*
++ * There is a race between unlinking and creating the
++ * file, but if that causes an EEXIST here, that's
++ * fine - another process raced with us while creating
++ * the corefile, and the other process won. To userspace,
++ * what matters is that at least one of the two processes
++ * writes its coredump successfully, not which one.
++ */
+ cprm.file = filp_open(cn.corename,
+- O_CREAT | 2 | O_NOFOLLOW | O_LARGEFILE | flag,
++ O_CREAT | 2 | O_NOFOLLOW |
++ O_LARGEFILE | O_EXCL,
+ 0600);
+ if (IS_ERR(cprm.file))
+ goto fail_unlock;
--- /dev/null
+From ee5d004fd0591536a061451eba2b187092e9127c Mon Sep 17 00:00:00 2001
+From: NeilBrown <neilb@suse.com>
+Date: Wed, 22 Jul 2015 10:20:07 +1000
+Subject: md: flush ->event_work before stopping array.
+
+From: NeilBrown <neilb@suse.com>
+
+commit ee5d004fd0591536a061451eba2b187092e9127c upstream.
+
+The 'event_work' worker used by dm-raid may still be running
+when the array is stopped. This can result in an oops.
+
+So flush the workqueue on which it is run after detaching
+and before destroying the device.
+
+Reported-by: Heinz Mauelshagen <heinzm@redhat.com>
+Signed-off-by: NeilBrown <neilb@suse.com>
+Fixes: 9d09e663d550 ("dm: raid456 basic support")
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/md/md.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -5365,6 +5365,8 @@ static void __md_stop(struct mddev *mdde
+ {
+ struct md_personality *pers = mddev->pers;
+ mddev_detach(mddev);
++ /* Ensure ->event_work is done */
++ flush_workqueue(md_misc_wq);
+ spin_lock(&mddev->lock);
+ mddev->ready = 0;
+ mddev->pers = NULL;
--- /dev/null
+From 299b0685e31c9f3dcc2d58ee3beca761a40b44b3 Mon Sep 17 00:00:00 2001
+From: NeilBrown <neilb@suse.com>
+Date: Mon, 6 Jul 2015 17:37:49 +1000
+Subject: md/raid10: always set reshape_safe when initializing reshape_position.
+
+From: NeilBrown <neilb@suse.com>
+
+commit 299b0685e31c9f3dcc2d58ee3beca761a40b44b3 upstream.
+
+'reshape_position' tracks where in the reshape we have reached.
+'reshape_safe' tracks where in the reshape we have safely recorded
+in the metadata.
+
+These are compared to determine when to update the metadata.
+So it is important that reshape_safe is initialised properly.
+Currently it isn't. When starting a reshape from the beginning
+it usually has the correct value by luck. But when reducing the
+number of devices in a RAID10, it has the wrong value and this leads
+to the metadata not being updated correctly.
+This can lead to corruption if the reshape is not allowed to complete.
+
+This patch is suitable for any -stable kernel which supports RAID10
+reshape, which is 3.5 and later.
+
+Fixes: 3ea7daa5d7fd ("md/raid10: add reshape support")
+Signed-off-by: NeilBrown <neilb@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/md/raid10.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -3566,6 +3566,7 @@ static struct r10conf *setup_conf(struct
+ /* far_copies must be 1 */
+ conf->prev.stride = conf->dev_sectors;
+ }
++ conf->reshape_safe = conf->reshape_progress;
+ spin_lock_init(&conf->device_lock);
+ INIT_LIST_HEAD(&conf->retry_list);
+
+@@ -3770,7 +3771,6 @@ static int run(struct mddev *mddev)
+ }
+ conf->offset_diff = min_offset_diff;
+
+- conf->reshape_safe = conf->reshape_progress;
+ clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+ clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
+@@ -4113,6 +4113,7 @@ static int raid10_start_reshape(struct m
+ conf->reshape_progress = size;
+ } else
+ conf->reshape_progress = 0;
++ conf->reshape_safe = conf->reshape_progress;
+ spin_unlock_irq(&conf->device_lock);
+
+ if (mddev->delta_disks && mddev->bitmap) {
+@@ -4180,6 +4181,7 @@ abort:
+ rdev->new_data_offset = rdev->data_offset;
+ smp_wmb();
+ conf->reshape_progress = MaxSector;
++ conf->reshape_safe = MaxSector;
+ mddev->reshape_position = MaxSector;
+ spin_unlock_irq(&conf->device_lock);
+ return ret;
+@@ -4534,6 +4536,7 @@ static void end_reshape(struct r10conf *
+ md_finish_reshape(conf->mddev);
+ smp_wmb();
+ conf->reshape_progress = MaxSector;
++ conf->reshape_safe = MaxSector;
+ spin_unlock_irq(&conf->device_lock);
+
+ /* read-ahead size must cover two whole stripes, which is
--- /dev/null
+From 2d5b569b665ea6d0b15c52529ff06300de81a7ce Mon Sep 17 00:00:00 2001
+From: NeilBrown <neilb@suse.com>
+Date: Mon, 6 Jul 2015 12:49:23 +1000
+Subject: md/raid5: avoid races when changing cache size.
+
+From: NeilBrown <neilb@suse.com>
+
+commit 2d5b569b665ea6d0b15c52529ff06300de81a7ce upstream.
+
+Cache size can grow or shrink due to various pressures at
+any time. So when we resize the cache as part of a 'grow'
+operation (i.e. change the size to allow more devices) we need
+to block that automatic growing/shrinking.
+
+So introduce a mutex. Automatic grow/shrink uses mutex_trylock()
+and simply doesn't bother if the lock is already held.
+Resizing the whole cache holds the mutex to ensure that
+the correct number of new stripes is allocated.
+
+This bug can result in some stripes not being freed when an
+array is stopped. This leads to the kmem_cache not being
+freed, and a subsequent array can try to use the same kmem_cache
+and get confused.
+
+Fixes: edbe83ab4c27 ("md/raid5: allow the stripe_cache to grow and shrink.")
+Signed-off-by: NeilBrown <neilb@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+
+---
+ drivers/md/raid5.c | 31 +++++++++++++++++++++++++------
+ drivers/md/raid5.h | 3 ++-
+ 2 files changed, 27 insertions(+), 7 deletions(-)
+
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2151,6 +2151,9 @@ static int resize_stripes(struct r5conf
+ if (!sc)
+ return -ENOMEM;
+
++ /* Need to ensure auto-resizing doesn't interfere */
++ mutex_lock(&conf->cache_size_mutex);
++
+ for (i = conf->max_nr_stripes; i; i--) {
+ nsh = alloc_stripe(sc, GFP_KERNEL);
+ if (!nsh)
+@@ -2167,6 +2170,7 @@ static int resize_stripes(struct r5conf
+ kmem_cache_free(sc, nsh);
+ }
+ kmem_cache_destroy(sc);
++ mutex_unlock(&conf->cache_size_mutex);
+ return -ENOMEM;
+ }
+ /* Step 2 - Must use GFP_NOIO now.
+@@ -2213,6 +2217,7 @@ static int resize_stripes(struct r5conf
+ } else
+ err = -ENOMEM;
+
++ mutex_unlock(&conf->cache_size_mutex);
+ /* Step 4, return new stripes to service */
+ while(!list_empty(&newstripes)) {
+ nsh = list_entry(newstripes.next, struct stripe_head, lru);
+@@ -5846,12 +5851,14 @@ static void raid5d(struct md_thread *thr
+ pr_debug("%d stripes handled\n", handled);
+
+ spin_unlock_irq(&conf->device_lock);
+- if (test_and_clear_bit(R5_ALLOC_MORE, &conf->cache_state)) {
++ if (test_and_clear_bit(R5_ALLOC_MORE, &conf->cache_state) &&
++ mutex_trylock(&conf->cache_size_mutex)) {
+ grow_one_stripe(conf, __GFP_NOWARN);
+ /* Set flag even if allocation failed. This helps
+ * slow down allocation requests when mem is short
+ */
+ set_bit(R5_DID_ALLOC, &conf->cache_state);
++ mutex_unlock(&conf->cache_size_mutex);
+ }
+
+ async_tx_issue_pending_all();
+@@ -5883,18 +5890,22 @@ raid5_set_cache_size(struct mddev *mddev
+ return -EINVAL;
+
+ conf->min_nr_stripes = size;
++ mutex_lock(&conf->cache_size_mutex);
+ while (size < conf->max_nr_stripes &&
+ drop_one_stripe(conf))
+ ;
++ mutex_unlock(&conf->cache_size_mutex);
+
+
+ err = md_allow_write(mddev);
+ if (err)
+ return err;
+
++ mutex_lock(&conf->cache_size_mutex);
+ while (size > conf->max_nr_stripes)
+ if (!grow_one_stripe(conf, GFP_KERNEL))
+ break;
++ mutex_unlock(&conf->cache_size_mutex);
+
+ return 0;
+ }
+@@ -6360,11 +6371,18 @@ static unsigned long raid5_cache_scan(st
+ struct shrink_control *sc)
+ {
+ struct r5conf *conf = container_of(shrink, struct r5conf, shrinker);
+- int ret = 0;
+- while (ret < sc->nr_to_scan) {
+- if (drop_one_stripe(conf) == 0)
+- return SHRINK_STOP;
+- ret++;
++ unsigned long ret = SHRINK_STOP;
++
++ if (mutex_trylock(&conf->cache_size_mutex)) {
++ ret= 0;
++ while (ret < sc->nr_to_scan) {
++ if (drop_one_stripe(conf) == 0) {
++ ret = SHRINK_STOP;
++ break;
++ }
++ ret++;
++ }
++ mutex_unlock(&conf->cache_size_mutex);
+ }
+ return ret;
+ }
+@@ -6433,6 +6451,7 @@ static struct r5conf *setup_conf(struct
+ goto abort;
+ spin_lock_init(&conf->device_lock);
+ seqcount_init(&conf->gen_lock);
++ mutex_init(&conf->cache_size_mutex);
+ init_waitqueue_head(&conf->wait_for_stripe);
+ init_waitqueue_head(&conf->wait_for_overlap);
+ INIT_LIST_HEAD(&conf->handle_list);
+--- a/drivers/md/raid5.h
++++ b/drivers/md/raid5.h
+@@ -482,7 +482,8 @@ struct r5conf {
+ */
+ int active_name;
+ char cache_name[2][32];
+- struct kmem_cache *slab_cache; /* for allocating stripes */
++ struct kmem_cache *slab_cache; /* for allocating stripes */
++ struct mutex cache_size_mutex; /* Protect changes to cache size */
+
+ int seq_flush, seq_write;
+ int quiesce;
--- /dev/null
+From 49895bcc7e566ba455eb2996607d6fbd3447ce16 Mon Sep 17 00:00:00 2001
+From: NeilBrown <neilb@suse.com>
+Date: Mon, 3 Aug 2015 17:09:57 +1000
+Subject: md/raid5: don't let shrink_slab shrink too far.
+
+From: NeilBrown <neilb@suse.com>
+
+commit 49895bcc7e566ba455eb2996607d6fbd3447ce16 upstream.
+
+I have a report of drop_one_stripe() called from
+raid5_cache_scan() apparently finding ->max_nr_stripes == 0.
+
+This should not be allowed.
+
+So add a test to keep max_nr_stripes above min_nr_stripes.
+
+Also use a 'mask' rather than a 'mod' in drop_one_stripe
+to ensure 'hash' is valid even if max_nr_stripes does reach zero.
+
+
+Fixes: edbe83ab4c27 ("md/raid5: allow the stripe_cache to grow and shrink.")
+Reported-by: Tomas Papan <tomas.papan@gmail.com>
+Signed-off-by: NeilBrown <neilb@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/md/raid5.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2245,7 +2245,7 @@ static int resize_stripes(struct r5conf
+ static int drop_one_stripe(struct r5conf *conf)
+ {
+ struct stripe_head *sh;
+- int hash = (conf->max_nr_stripes - 1) % NR_STRIPE_HASH_LOCKS;
++ int hash = (conf->max_nr_stripes - 1) & STRIPE_HASH_LOCKS_MASK;
+
+ spin_lock_irq(conf->hash_locks + hash);
+ sh = get_free_stripe(conf, hash);
+@@ -6375,7 +6375,8 @@ static unsigned long raid5_cache_scan(st
+
+ if (mutex_trylock(&conf->cache_size_mutex)) {
+ ret= 0;
+- while (ret < sc->nr_to_scan) {
++ while (ret < sc->nr_to_scan &&
++ conf->max_nr_stripes > conf->min_nr_stripes) {
+ if (drop_one_stripe(conf) == 0) {
+ ret = SHRINK_STOP;
+ break;
--- /dev/null
+From 71f8a4b81d040b3d094424197ca2f1bf811b1245 Mon Sep 17 00:00:00 2001
+From: Jialing Fu <jlfu@marvell.com>
+Date: Fri, 28 Aug 2015 11:13:09 +0800
+Subject: mmc: core: fix race condition in mmc_wait_data_done
+
+From: Jialing Fu <jlfu@marvell.com>
+
+commit 71f8a4b81d040b3d094424197ca2f1bf811b1245 upstream.
+
+The following panic was captured on kernel 3.14, but the issue still
+exists in the latest kernel.
+---------------------------------------------------------------------
+[ 20.738217] c0 3136 (Compiler) Unable to handle kernel NULL pointer dereference
+at virtual address 00000578
+......
+[ 20.738499] c0 3136 (Compiler) PC is at _raw_spin_lock_irqsave+0x24/0x60
+[ 20.738527] c0 3136 (Compiler) LR is at _raw_spin_lock_irqsave+0x20/0x60
+[ 20.740134] c0 3136 (Compiler) Call trace:
+[ 20.740165] c0 3136 (Compiler) [<ffffffc0008ee900>] _raw_spin_lock_irqsave+0x24/0x60
+[ 20.740200] c0 3136 (Compiler) [<ffffffc0000dd024>] __wake_up+0x1c/0x54
+[ 20.740230] c0 3136 (Compiler) [<ffffffc000639414>] mmc_wait_data_done+0x28/0x34
+[ 20.740262] c0 3136 (Compiler) [<ffffffc0006391a0>] mmc_request_done+0xa4/0x220
+[ 20.740314] c0 3136 (Compiler) [<ffffffc000656894>] sdhci_tasklet_finish+0xac/0x264
+[ 20.740352] c0 3136 (Compiler) [<ffffffc0000a2b58>] tasklet_action+0xa0/0x158
+[ 20.740382] c0 3136 (Compiler) [<ffffffc0000a2078>] __do_softirq+0x10c/0x2e4
+[ 20.740411] c0 3136 (Compiler) [<ffffffc0000a24bc>] irq_exit+0x8c/0xc0
+[ 20.740439] c0 3136 (Compiler) [<ffffffc00008489c>] handle_IRQ+0x48/0xac
+[ 20.740469] c0 3136 (Compiler) [<ffffffc000081428>] gic_handle_irq+0x38/0x7c
+----------------------------------------------------------------------
+In SMP, "mrq" is subject to a race condition between the two paths below:
+path1: CPU0: <tasklet context>
+ static void mmc_wait_data_done(struct mmc_request *mrq)
+ {
+ mrq->host->context_info.is_done_rcv = true;
+ //
+ // If CPU0 has just finished "is_done_rcv = true" in path1, and at
+ // this moment an IRQ or an icache miss occurs on CPU0,
+ // what happens on CPU1 (path2)?
+ //
+ // If the mmcqd thread on CPU1 (path2) hasn't gone to sleep yet,
+ // path2 has a chance to return from wait_event_interruptible
+ // in mmc_wait_for_data_req_done and continue on to the next
+ // mmc_request (mmc_blk_rw_rq_prep).
+ //
+ // Within mmc_blk_rw_rq_prep, mrq is cleared to 0.
+ // If the line below then still reads the host through "mrq" as
+ // compiled, the panic we traced happens.
+ wake_up_interruptible(&mrq->host->context_info.wait);
+ }
+
+path2: CPU1: <The mmcqd thread runs mmc_queue_thread>
+ static int mmc_wait_for_data_req_done(...
+ {
+ ...
+ while (1) {
+ wait_event_interruptible(context_info->wait,
+ (context_info->is_done_rcv ||
+ context_info->is_new_req));
+ static void mmc_blk_rw_rq_prep(...
+ {
+ ...
+ memset(brq, 0, sizeof(struct mmc_blk_request));
+
+This issue occurs only very rarely; however, adding mdelay(1) in
+mmc_wait_data_done as below makes it easy to reproduce.
+
+ static void mmc_wait_data_done(struct mmc_request *mrq)
+ {
+ mrq->host->context_info.is_done_rcv = true;
++ mdelay(1);
+ wake_up_interruptible(&mrq->host->context_info.wait);
+ }
+
+At runtime, an IRQ or an icache miss may occur at exactly the point
+where the mdelay(1) was inserted.
+
+This patch reads the mmc_context_info pointer at the beginning of the
+function, which avoids this race condition.
+
+Signed-off-by: Jialing Fu <jlfu@marvell.com>
+Tested-by: Shawn Lin <shawn.lin@rock-chips.com>
+Fixes: 2220eedfd7ae ("mmc: fix async request mechanism ....")
+Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/mmc/core/core.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -330,8 +330,10 @@ EXPORT_SYMBOL(mmc_start_bkops);
+ */
+ static void mmc_wait_data_done(struct mmc_request *mrq)
+ {
+- mrq->host->context_info.is_done_rcv = true;
+- wake_up_interruptible(&mrq->host->context_info.wait);
++ struct mmc_context_info *context_info = &mrq->host->context_info;
++
++ context_info->is_done_rcv = true;
++ wake_up_interruptible(&context_info->wait);
+ }
+
+ static void mmc_wait_done(struct mmc_request *mrq)
--- /dev/null
+From 0dafa60eb2506617e6968b97cc5a44914a7fb1a6 Mon Sep 17 00:00:00 2001
+From: Jisheng Zhang <jszhang@marvell.com>
+Date: Tue, 18 Aug 2015 16:21:39 +0800
+Subject: mmc: sdhci: also get preset value and driver type for MMC_DDR52
+
+From: Jisheng Zhang <jszhang@marvell.com>
+
+commit 0dafa60eb2506617e6968b97cc5a44914a7fb1a6 upstream.
+
+commit bb8175a8aa42 ("mmc: sdhci: clarify DDR timing mode between
+SD-UHS and eMMC") added MMC_DDR52 as eMMC's DDR mode to be
+distinguished from SD-UHS, but it missed setting driver type for
+MMC_DDR52 timing mode.
+
+So sometimes we get the following error on Marvell BG2Q DMP board:
+
+[ 1.559598] mmcblk0: error -84 transferring data, sector 0, nr 8, cmd
+response 0x900, card status 0xb00
+[ 1.569314] mmcblk0: retrying using single block read
+[ 1.575676] mmcblk0: error -84 transferring data, sector 2, nr 6, cmd
+response 0x900, card status 0x0
+[ 1.585202] blk_update_request: I/O error, dev mmcblk0, sector 2
+[ 1.591818] mmcblk0: error -84 transferring data, sector 3, nr 5, cmd
+response 0x900, card status 0x0
+[ 1.601341] blk_update_request: I/O error, dev mmcblk0, sector 3
+
+This patch fixes this by adding the missing driver type setting.
+
+Fixes: bb8175a8aa42 ("mmc: sdhci: clarify DDR timing mode ...")
+Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/mmc/host/sdhci.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1146,6 +1146,7 @@ static u16 sdhci_get_preset_value(struct
+ preset = sdhci_readw(host, SDHCI_PRESET_FOR_SDR104);
+ break;
+ case MMC_TIMING_UHS_DDR50:
++ case MMC_TIMING_MMC_DDR52:
+ preset = sdhci_readw(host, SDHCI_PRESET_FOR_DDR50);
+ break;
+ case MMC_TIMING_MMC_HS400:
+@@ -1598,7 +1599,8 @@ static void sdhci_do_set_ios(struct sdhc
+ (ios->timing == MMC_TIMING_UHS_SDR25) ||
+ (ios->timing == MMC_TIMING_UHS_SDR50) ||
+ (ios->timing == MMC_TIMING_UHS_SDR104) ||
+- (ios->timing == MMC_TIMING_UHS_DDR50))) {
++ (ios->timing == MMC_TIMING_UHS_DDR50) ||
++ (ios->timing == MMC_TIMING_MMC_DDR52))) {
+ u16 preset;
+
+ sdhci_enable_preset_value(host, true);
--- /dev/null
+From 143b648ddf1583905fa15d32be27a31442fc7933 Mon Sep 17 00:00:00 2001
+From: Adam Lee <adam.lee@canonical.com>
+Date: Mon, 3 Aug 2015 14:33:28 +0800
+Subject: mmc: sdhci-pci: set the clear transfer mode register quirk for O2Micro
+
+From: Adam Lee <adam.lee@canonical.com>
+
+commit 143b648ddf1583905fa15d32be27a31442fc7933 upstream.
+
+This patch fixes an MMC not-working issue on O2Micro/BayHub hosts,
+which require the transfer mode register to be cleared when sending
+a command without DMA.
+
+Signed-off-by: Peter Guo <peter.guo@bayhubtech.com>
+Signed-off-by: Adam Lee <adam.lee@canonical.com>
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/mmc/host/sdhci-pci.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/mmc/host/sdhci-pci.c
++++ b/drivers/mmc/host/sdhci-pci.c
+@@ -549,6 +549,7 @@ static int jmicron_resume(struct sdhci_p
+ static const struct sdhci_pci_fixes sdhci_o2 = {
+ .probe = sdhci_pci_o2_probe,
+ .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
++ .quirks2 = SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD,
+ .probe_slot = sdhci_pci_o2_probe_slot,
+ .resume = sdhci_pci_o2_resume,
+ };
--- /dev/null
+From b1b4e435e4ef7de77f07bf2a42c8380b960c2d44 Mon Sep 17 00:00:00 2001
+From: Helge Deller <deller@gmx.de>
+Date: Thu, 3 Sep 2015 22:45:21 +0200
+Subject: parisc: Filter out spurious interrupts in PA-RISC irq handler
+
+From: Helge Deller <deller@gmx.de>
+
+commit b1b4e435e4ef7de77f07bf2a42c8380b960c2d44 upstream.
+
+When detecting a serial port on newer PA-RISC machines (with iosapic) we have a
+long way to go: find the right IRQ line, register it, then register the
+serial port and the irq handler for the serial port. During this phase
+spurious interrupts for the serial port may happen, which then crash the
+kernel because the action handler might not have been set up yet.
+
+So, basically it's a race condition between the serial port hardware and the
+CPU which sets up the necessary fields in the irq structs. The main reason for
+this race is that we unmask the serial port irqs too early, without having set
+up everything properly beforehand (which isn't easily possible because we need
+the IRQ number to register the serial ports).
+
+This patch is a work-around for this problem. It adds checks to the CPU irq
+handler to verify if the IRQ action field has been initialized already. If not,
+we just skip this interrupt (which isn't critical for a serial port at bootup).
+The real fix would probably involve rewriting all PA-RISC specific IRQ code
+(for CPU, IOSAPIC, GSC and EISA) to use IRQ domains with proper parenting of
+the irq chips and proper irq enabling along this line.
+
+This bug has been in the PA-RISC port since the beginning, but the crashes
+happened very rarely with currently used hardware. But on the latest machine
+which I bought (a C8000 workstation), which uses the fastest CPUs (4 x PA8900,
+1GHz) and which has the largest possible L1 cache size (64MB each), the kernel
+crashed at every boot because of this race. So, without this patch the machine
+would currently be unusable.
+
+For the record, here is the flow logic:
+1. serial_init_chip() in 8250_gsc.c calls iosapic_serial_irq().
+2. iosapic_serial_irq() calls txn_alloc_irq() to find the irq.
+3. iosapic_serial_irq() calls cpu_claim_irq() to register the CPU irq
+4. cpu_claim_irq() unmasks the CPU irq (which it shouldn't!)
+5. serial_init_chip() then registers the 8250 port.
+Problems:
+- In step 4 the CPU irq shouldn't have been unmasked yet; that should
+  happen only after step 5
+- If a serial irq arrives after step 4 but before step 5 has finished,
+  the kernel will crash
+
+Signed-off-by: Helge Deller <deller@gmx.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/parisc/kernel/irq.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/arch/parisc/kernel/irq.c
++++ b/arch/parisc/kernel/irq.c
+@@ -507,8 +507,8 @@ void do_cpu_irq_mask(struct pt_regs *reg
+ struct pt_regs *old_regs;
+ unsigned long eirr_val;
+ int irq, cpu = smp_processor_id();
+-#ifdef CONFIG_SMP
+ struct irq_desc *desc;
++#ifdef CONFIG_SMP
+ cpumask_t dest;
+ #endif
+
+@@ -521,8 +521,12 @@ void do_cpu_irq_mask(struct pt_regs *reg
+ goto set_out;
+ irq = eirr_to_irq(eirr_val);
+
+-#ifdef CONFIG_SMP
++ /* Filter out spurious interrupts, mostly from serial port at bootup */
+ desc = irq_to_desc(irq);
++ if (unlikely(!desc->action))
++ goto set_out;
++
++#ifdef CONFIG_SMP
+ cpumask_copy(&dest, desc->irq_data.affinity);
+ if (irqd_is_per_cpu(&desc->irq_data) &&
+ !cpumask_test_cpu(smp_processor_id(), &dest)) {
--- /dev/null
+From 1b59ddfcf1678de38a1f8ca9fb8ea5eebeff1843 Mon Sep 17 00:00:00 2001
+From: John David Anglin <dave.anglin@bell.net>
+Date: Mon, 7 Sep 2015 20:13:28 -0400
+Subject: parisc: Use double word condition in 64bit CAS operation
+
+From: John David Anglin <dave.anglin@bell.net>
+
+commit 1b59ddfcf1678de38a1f8ca9fb8ea5eebeff1843 upstream.
+
+The attached change fixes the condition used in the "sub" instruction.
+A double word comparison is needed. This fixes the 64-bit LWS CAS
+operation on 64-bit kernels.
+
+I can now enable 64-bit atomic support in GCC.
+
+Signed-off-by: John David Anglin <dave.anglin>
+Signed-off-by: Helge Deller <deller@gmx.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/parisc/kernel/syscall.S | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -821,7 +821,7 @@ cas2_action:
+ /* 64bit CAS */
+ #ifdef CONFIG_64BIT
+ 19: ldd,ma 0(%sr3,%r26), %r29
+- sub,= %r29, %r25, %r0
++ sub,*= %r29, %r25, %r0
+ b,n cas2_end
+ 20: std,ma %r24, 0(%sr3,%r26)
+ copy %r0, %r28
--- /dev/null
+From e02a653e15d8d32e9e768fd99a3271aafe5c5d77 Mon Sep 17 00:00:00 2001
+From: Helge Deller <deller@gmx.de>
+Date: Wed, 2 Sep 2015 18:17:29 +0200
+Subject: PCI,parisc: Enable 64-bit bus addresses on PA-RISC
+
+From: Helge Deller <deller@gmx.de>
+
+commit e02a653e15d8d32e9e768fd99a3271aafe5c5d77 upstream.
+
+Commit 3a9ad0b ("PCI: Add pci_bus_addr_t") unconditionally introduced usage of
+64-bit PCI bus addresses on all 64-bit platforms, which broke PA-RISC.
+
+It turned out that due to enabling the 64-bit addresses, the PCI logic decided
+to use the GMMIO instead of the LMMIO region. This commit simply disables
+registering the GMMIO region, and thus we fall back to using the LMMIO
+region as before.
+
+Reverts commit 45ea2a5fed6dacb9bb0558d8b21eacc1c45d5bb4
+("PCI: Don't use 64-bit bus addresses on PA-RISC")
+
+To: linux-parisc@vger.kernel.org
+Cc: linux-pci@vger.kernel.org
+Cc: Bjorn Helgaas <bhelgaas@google.com>
+Cc: Meelis Roos <mroos@linux.ee>
+Signed-off-by: Helge Deller <deller@gmx.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/parisc/lba_pci.c | 7 +++++--
+ drivers/pci/Kconfig | 2 +-
+ 2 files changed, 6 insertions(+), 3 deletions(-)
+
+--- a/drivers/parisc/lba_pci.c
++++ b/drivers/parisc/lba_pci.c
+@@ -1556,8 +1556,11 @@ lba_driver_probe(struct parisc_device *d
+ if (lba_dev->hba.lmmio_space.flags)
+ pci_add_resource_offset(&resources, &lba_dev->hba.lmmio_space,
+ lba_dev->hba.lmmio_space_offset);
+- if (lba_dev->hba.gmmio_space.flags)
+- pci_add_resource(&resources, &lba_dev->hba.gmmio_space);
++ if (lba_dev->hba.gmmio_space.flags) {
++ /* pci_add_resource(&resources, &lba_dev->hba.gmmio_space); */
++ pr_warn("LBA: Not registering GMMIO space %pR\n",
++ &lba_dev->hba.gmmio_space);
++ }
+
+ pci_add_resource(&resources, &lba_dev->hba.bus_num);
+
+--- a/drivers/pci/Kconfig
++++ b/drivers/pci/Kconfig
+@@ -2,7 +2,7 @@
+ # PCI configuration
+ #
+ config PCI_BUS_ADDR_T_64BIT
+- def_bool y if (ARCH_DMA_ADDR_T_64BIT || (64BIT && !PARISC))
++ def_bool y if (ARCH_DMA_ADDR_T_64BIT || 64BIT)
+ depends on PCI
+
+ config PCI_MSI
--- /dev/null
+From 5f1b2f77646fc0ef2f36fc554f5722a1381d0892 Mon Sep 17 00:00:00 2001
+From: Mitja Spes <mitja@lxnav.com>
+Date: Wed, 2 Sep 2015 10:02:29 +0200
+Subject: rtc: abx80x: fix RTC write bit
+
+From: Mitja Spes <mitja@lxnav.com>
+
+commit 5f1b2f77646fc0ef2f36fc554f5722a1381d0892 upstream.
+
+Fix the RTC write bit as per the application manual.
+
+Signed-off-by: Mitja Spes <mitja@lxnav.com>
+Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/rtc/rtc-abx80x.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/rtc/rtc-abx80x.c
++++ b/drivers/rtc/rtc-abx80x.c
+@@ -28,7 +28,7 @@
+ #define ABX8XX_REG_WD 0x07
+
+ #define ABX8XX_REG_CTRL1 0x10
+-#define ABX8XX_CTRL_WRITE BIT(1)
++#define ABX8XX_CTRL_WRITE BIT(0)
+ #define ABX8XX_CTRL_12_24 BIT(6)
+
+ #define ABX8XX_REG_CFG_KEY 0x1f
--- /dev/null
+From 1fb1c35f56bb6ab4a65920c648154b0f78f634a5 Mon Sep 17 00:00:00 2001
+From: Joonyoung Shim <jy0922.shim@samsung.com>
+Date: Wed, 12 Aug 2015 19:21:46 +0900
+Subject: rtc: s3c: fix disabled clocks for alarm
+
+From: Joonyoung Shim <jy0922.shim@samsung.com>
+
+commit 1fb1c35f56bb6ab4a65920c648154b0f78f634a5 upstream.
+
+The clock enable/disable code for the alarm was removed in
+commit 24e1455493da ("drivers/rtc/rtc-s3c.c: delete duplicate clock
+control"), so the clocks stay disabled even when an alarm is set and the
+alarm interrupt can never fire.
+
+The s3c_rtc_setaie() function can be called several times with the same
+value of the 'enabled' argument, so it needs to track whether the clocks
+are currently enabled or not.
+
+Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
+Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/rtc/rtc-s3c.c | 24 ++++++++++++++++++------
+ 1 file changed, 18 insertions(+), 6 deletions(-)
+
+--- a/drivers/rtc/rtc-s3c.c
++++ b/drivers/rtc/rtc-s3c.c
+@@ -39,6 +39,7 @@ struct s3c_rtc {
+ void __iomem *base;
+ struct clk *rtc_clk;
+ struct clk *rtc_src_clk;
++ bool clk_disabled;
+
+ struct s3c_rtc_data *data;
+
+@@ -71,9 +72,12 @@ static void s3c_rtc_enable_clk(struct s3
+ unsigned long irq_flags;
+
+ spin_lock_irqsave(&info->alarm_clk_lock, irq_flags);
+- clk_enable(info->rtc_clk);
+- if (info->data->needs_src_clk)
+- clk_enable(info->rtc_src_clk);
++ if (info->clk_disabled) {
++ clk_enable(info->rtc_clk);
++ if (info->data->needs_src_clk)
++ clk_enable(info->rtc_src_clk);
++ info->clk_disabled = false;
++ }
+ spin_unlock_irqrestore(&info->alarm_clk_lock, irq_flags);
+ }
+
+@@ -82,9 +86,12 @@ static void s3c_rtc_disable_clk(struct s
+ unsigned long irq_flags;
+
+ spin_lock_irqsave(&info->alarm_clk_lock, irq_flags);
+- if (info->data->needs_src_clk)
+- clk_disable(info->rtc_src_clk);
+- clk_disable(info->rtc_clk);
++ if (!info->clk_disabled) {
++ if (info->data->needs_src_clk)
++ clk_disable(info->rtc_src_clk);
++ clk_disable(info->rtc_clk);
++ info->clk_disabled = true;
++ }
+ spin_unlock_irqrestore(&info->alarm_clk_lock, irq_flags);
+ }
+
+@@ -128,6 +135,11 @@ static int s3c_rtc_setaie(struct device
+
+ s3c_rtc_disable_clk(info);
+
++ if (enabled)
++ s3c_rtc_enable_clk(info);
++ else
++ s3c_rtc_disable_clk(info);
++
+ return 0;
+ }
+
--- /dev/null
+From ff02c0444b83201ff76cc49deccac8cf2bffc7bc Mon Sep 17 00:00:00 2001
+From: Joonyoung Shim <jy0922.shim@samsung.com>
+Date: Fri, 21 Aug 2015 18:43:41 +0900
+Subject: rtc: s5m: fix to update ctrl register
+
+From: Joonyoung Shim <jy0922.shim@samsung.com>
+
+commit ff02c0444b83201ff76cc49deccac8cf2bffc7bc upstream.
+
+According to the datasheet, the S2MPS13X and S2MPS14X should update the
+write buffer by setting the WUDR bit high after the ctrl register is
+written.
+
+If not, the ALARM interrupt of rtc-s5m does not fire the first time the
+tools/testing/selftests/timers/rtctest.c test program is run with the
+hour format set to 12-hour mode on an Odroid-XU3 board.
+
+Another issue is that the RTC does not keep time on the Odroid-XU3 board
+across a power cycle, even with the RTC battery connected. This can be
+solved by setting the WUDR and RUDR bits high at the same time after the
+RTC_CTRL register is written. That matches the condition for writing only
+the ALARM registers, so it applies only to the S2MPS14; on the S2MPS13
+the WUDR and A_UDR bits should be set high instead.
+
+No reasonable description of this fix could be found in the datasheet,
+but similar code exists in the RTC drivers of the hardkernel kernel and
+the vendor kernel.
+
+Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
+Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/rtc/rtc-s5m.c | 10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+--- a/drivers/rtc/rtc-s5m.c
++++ b/drivers/rtc/rtc-s5m.c
+@@ -635,6 +635,16 @@ static int s5m8767_rtc_init_reg(struct s
+ case S2MPS13X:
+ data[0] = (0 << BCD_EN_SHIFT) | (1 << MODEL24_SHIFT);
+ ret = regmap_write(info->regmap, info->regs->ctrl, data[0]);
++ if (ret < 0)
++ break;
++
++ /*
++ * Should set WUDR & (RUDR or AUDR) bits to high after writing
++ * RTC_CTRL register like writing Alarm registers. We can't find
++ * the description from datasheet but vendor code does that
++ * really.
++ */
++ ret = s5m8767_rtc_set_alarm_reg(info);
+ break;
+
+ default:
sunrpc-xs_reset_transport-must-mark-the-connection-as-disconnected.patch
sunrpc-ensure-that-we-wait-for-connections-to-complete-before-retrying.patch
sunrpc-lock-the-transport-layer-on-shutdown.patch
+rtc-s3c-fix-disabled-clocks-for-alarm.patch
+rtc-s5m-fix-to-update-ctrl-register.patch
+rtc-abx80x-fix-rtc-write-bit.patch
+pci-parisc-enable-64-bit-bus-addresses-on-pa-risc.patch
+parisc-use-double-word-condition-in-64bit-cas-operation.patch
+parisc-filter-out-spurious-interrupts-in-pa-risc-irq-handler.patch
+vmscan-fix-increasing-nr_isolated-incurred-by-putback-unevictable-pages.patch
+fs-if-a-coredump-already-exists-unlink-and-recreate-with-o_excl.patch
+fs-don-t-dump-core-if-the-corefile-would-become-world-readable.patch
+mmc-sdhci-pci-set-the-clear-transfer-mode-register-quirk-for-o2micro.patch
+mmc-sdhci-also-get-preset-value-and-driver-type-for-mmc_ddr52.patch
+mmc-core-fix-race-condition-in-mmc_wait_data_done.patch
+md-raid5-avoid-races-when-changing-cache-size.patch
+md-raid5-don-t-let-shrink_slab-shrink-too-far.patch
+md-raid10-always-set-reshape_safe-when-initializing-reshape_position.patch
+md-flush-event_work-before-stopping-array.patch
--- /dev/null
+From c54839a722a02818677bcabe57e957f0ce4f841d Mon Sep 17 00:00:00 2001
+From: Jaewon Kim <jaewon31.kim@samsung.com>
+Date: Tue, 8 Sep 2015 15:02:21 -0700
+Subject: vmscan: fix increasing nr_isolated incurred by putback unevictable pages
+
+From: Jaewon Kim <jaewon31.kim@samsung.com>
+
+commit c54839a722a02818677bcabe57e957f0ce4f841d upstream.
+
+reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
+the number of pages removed from the candidate list. But shrink_page_list()
+puts back mlocked pages without passing them to the caller and without
+counting them as nr_reclaimed. This increases nr_isolated.
+
+To fix this, this patch changes shrink_page_list() to pass unevictable
+pages back to the caller, which will then take care of those pages.
+
+Minchan said:
+
+It fixes two issues.
+
+1. With unevictable pages, cma_alloc will be successful.
+
+Strictly speaking, cma_alloc in the current kernel can fail because of
+unevictable pages.
+
+2. Fix leaking of the NR_ISOLATED counter in vmstat.
+
+With it, too_many_isolated() works. Otherwise, it could cause a hang
+until the process gets SIGKILL.
+
+Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
+Acked-by: Minchan Kim <minchan@kernel.org>
+Cc: Mel Gorman <mgorman@techsingularity.net>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/vmscan.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1153,7 +1153,7 @@ cull_mlocked:
+ if (PageSwapCache(page))
+ try_to_free_swap(page);
+ unlock_page(page);
+- putback_lru_page(page);
++ list_add(&page->lru, &ret_pages);
+ continue;
+
+ activate_locked: