--- /dev/null
+From 90de827b9c238f8d8209bc7adc70190575514315 Mon Sep 17 00:00:00 2001
+From: Soren Brinkmann <soren.brinkmann@xilinx.com>
+Date: Wed, 19 Jun 2013 10:53:03 -0700
+Subject: arm: multi_v7_defconfig: Enable Zynq UART driver
+
+From: Soren Brinkmann <soren.brinkmann@xilinx.com>
+
+commit 90de827b9c238f8d8209bc7adc70190575514315 upstream.
+
+Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
+Signed-off-by: Michal Simek <michal.simek@xilinx.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/arm/configs/multi_v7_defconfig | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/arch/arm/configs/multi_v7_defconfig
++++ b/arch/arm/configs/multi_v7_defconfig
+@@ -48,6 +48,8 @@ CONFIG_SERIAL_SIRFSOC=y
+ CONFIG_SERIAL_SIRFSOC_CONSOLE=y
+ CONFIG_SERIAL_VT8500=y
+ CONFIG_SERIAL_VT8500_CONSOLE=y
++CONFIG_SERIAL_XILINX_PS_UART=y
++CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
+ CONFIG_IPMI_HANDLER=y
+ CONFIG_IPMI_SI=y
+ CONFIG_I2C=y
--- /dev/null
+From 7ba3ec5749ddb61f79f7be17b5fd7720eebc52de Mon Sep 17 00:00:00 2001
+From: Jan Kara <jack@suse.cz>
+Date: Tue, 5 Nov 2013 01:15:38 +0100
+Subject: ext2: Fix fs corruption in ext2_get_xip_mem()
+
+From: Jan Kara <jack@suse.cz>
+
+commit 7ba3ec5749ddb61f79f7be17b5fd7720eebc52de upstream.
+
+Commit 8e3dffc651cb "Ext2: mark inode dirty after the function
+dquot_free_block_nodirty is called" unveiled a bug in __ext2_get_block()
+called from ext2_get_xip_mem(). That function called ext2_get_block()
+mistakenly asking it to map 0 blocks while 1 was intended. Before the
+above-mentioned commit, things worked out fine by luck, but after that
+commit we started reporting that we had allocated 0 blocks while we had
+in fact allocated 1 block, and thus the allocation loop kept going until
+all blocks in the filesystem were exhausted.
+
+Fix the problem by properly asking for one block, and also add an
+assertion in ext2_get_blocks() to catch similar problems.
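+
+For context, ext2_get_block() derives the number of blocks to map from
+the buffer_head it is passed, roughly like this (a sketch of the
+mainline logic, not a verbatim quote):
+
+        /* inside ext2_get_block() */
+        unsigned max_blocks = bh_result->b_size >> inode->i_blkbits;
+        ret = ext2_get_blocks(inode, iblock, max_blocks, bh_result, create);
+
+so the zeroed buffer_head in __ext2_get_block() (b_size == 0) asked for
+0 blocks; setting b_size to one block, as done below, makes the request
+match the intent.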
+
+Reported-and-tested-by: Andiry Xu <andiry.xu@gmail.com>
+Signed-off-by: Jan Kara <jack@suse.cz>
+Cc: Wang Nan <wangnan0@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/ext2/inode.c | 2 ++
+ fs/ext2/xip.c | 1 +
+ 2 files changed, 3 insertions(+)
+
+--- a/fs/ext2/inode.c
++++ b/fs/ext2/inode.c
+@@ -632,6 +632,8 @@ static int ext2_get_blocks(struct inode
+ int count = 0;
+ ext2_fsblk_t first_block = 0;
+
++ BUG_ON(maxblocks == 0);
++
+ depth = ext2_block_to_path(inode,iblock,offsets,&blocks_to_boundary);
+
+ if (depth == 0)
+--- a/fs/ext2/xip.c
++++ b/fs/ext2/xip.c
+@@ -35,6 +35,7 @@ __ext2_get_block(struct inode *inode, pg
+ int rc;
+
+ memset(&tmp, 0, sizeof(struct buffer_head));
++ tmp.b_size = 1 << inode->i_blkbits;
+ rc = ext2_get_block(inode, pgoff, &tmp, create);
+ *result = tmp.b_blocknr;
+
--- /dev/null
+From 0c740d0afc3bff0a097ad03a1c8df92757516f5c Mon Sep 17 00:00:00 2001
+From: Oleg Nesterov <oleg@redhat.com>
+Date: Tue, 21 Jan 2014 15:49:56 -0800
+Subject: introduce for_each_thread() to replace the buggy while_each_thread()
+
+From: Oleg Nesterov <oleg@redhat.com>
+
+commit 0c740d0afc3bff0a097ad03a1c8df92757516f5c upstream.
+
+while_each_thread() and next_thread() should die; almost every lockless
+usage is wrong.
+
+1. Unless g == current, the lockless while_each_thread() is not safe.
+
+ while_each_thread(g, t) can loop forever if g exits; next_thread()
+ can't reach the unhashed thread in this case. Note that this can
+ happen even if g is the group leader, since it can exec.
+
+2. Even if while_each_thread() itself was correct, people often use
+ it wrongly.
+
+ It was never safe to just take rcu_read_lock() and loop unless
+ you verify that pid_alive(g) == T; even the first next_thread()
+ can point to already freed/reused memory.
+
+This patch adds signal_struct->thread_head and task->thread_node to
+create the normal rcu-safe list with the stable head. The new
+for_each_thread(g, t) helper is always safe under rcu_read_lock() as
+long as this task_struct can't go away.
+
+Note: of course it is ugly to have both task_struct->thread_node and the
+old task_struct->thread_group; we will kill the latter later, after we
+change the users of while_each_thread() to use for_each_thread().
+
+Perhaps we can kill it even before we convert all users; we could
+reimplement next_thread(t) using the new thread_head/thread_node. But
+we can't do this right now because it would lead to subtle behavioural
+changes. For example, do/while_each_thread() always sees at least one
+task, while for_each_thread() can do nothing if the whole thread group
+has died. Another example is thread_group_empty(): currently its
+semantics are not clear unless thread_group_leader(p) holds, and we need
+to audit the callers before we can change it.
+
+So this patch adds the new interface, which has to coexist with the old
+one for some time; hopefully the next changes will be more or less
+straightforward and the old one will go away soon.
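+
+As a minimal usage sketch (the iterator and rcu_read_lock() are the real
+interfaces; 'task', 't' and do_something() are illustrative names only),
+a lockless walk over the threads of a task whose task_struct is pinned
+looks like:
+
+        struct task_struct *t;
+
+        rcu_read_lock();
+        for_each_thread(task, t) {
+                /* cannot loop forever even if 'task' has exited; the
+                 * walk simply visits whatever threads are still on the
+                 * stable signal_struct->thread_head list */
+                do_something(t);
+        }
+        rcu_read_unlock();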
+
+Signed-off-by: Oleg Nesterov <oleg@redhat.com>
+Reviewed-by: Sergey Dyasly <dserrg@gmail.com>
+Tested-by: Sergey Dyasly <dserrg@gmail.com>
+Reviewed-by: Sameer Nanda <snanda@chromium.org>
+Acked-by: David Rientjes <rientjes@google.com>
+Cc: "Eric W. Biederman" <ebiederm@xmission.com>
+Cc: Frederic Weisbecker <fweisbec@gmail.com>
+Cc: Mandeep Singh Baines <msb@chromium.org>
+Cc: "Ma, Xindong" <xindong.ma@intel.com>
+Cc: Michal Hocko <mhocko@suse.cz>
+Cc: "Tu, Xiaobing" <xiaobing.tu@intel.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Li Zefan <lizefan@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ include/linux/init_task.h | 2 ++
+ include/linux/sched.h | 12 ++++++++++++
+ kernel/exit.c | 1 +
+ kernel/fork.c | 7 +++++++
+ 4 files changed, 22 insertions(+)
+
+--- a/include/linux/init_task.h
++++ b/include/linux/init_task.h
+@@ -40,6 +40,7 @@ extern struct fs_struct init_fs;
+
+ #define INIT_SIGNALS(sig) { \
+ .nr_threads = 1, \
++ .thread_head = LIST_HEAD_INIT(init_task.thread_node), \
+ .wait_chldexit = __WAIT_QUEUE_HEAD_INITIALIZER(sig.wait_chldexit),\
+ .shared_pending = { \
+ .list = LIST_HEAD_INIT(sig.shared_pending.list), \
+@@ -213,6 +214,7 @@ extern struct task_group root_task_group
+ [PIDTYPE_SID] = INIT_PID_LINK(PIDTYPE_SID), \
+ }, \
+ .thread_group = LIST_HEAD_INIT(tsk.thread_group), \
++ .thread_node = LIST_HEAD_INIT(init_signals.thread_head), \
+ INIT_IDS \
+ INIT_PERF_EVENTS(tsk) \
+ INIT_TRACE_IRQFLAGS \
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -480,6 +480,7 @@ struct signal_struct {
+ atomic_t sigcnt;
+ atomic_t live;
+ int nr_threads;
++ struct list_head thread_head;
+
+ wait_queue_head_t wait_chldexit; /* for wait4() */
+
+@@ -1160,6 +1161,7 @@ struct task_struct {
+ /* PID/PID hash table linkage. */
+ struct pid_link pids[PIDTYPE_MAX];
+ struct list_head thread_group;
++ struct list_head thread_node;
+
+ struct completion *vfork_done; /* for vfork() */
+ int __user *set_child_tid; /* CLONE_CHILD_SETTID */
+@@ -2167,6 +2169,16 @@ extern bool current_is_single_threaded(v
+ #define while_each_thread(g, t) \
+ while ((t = next_thread(t)) != g)
+
++#define __for_each_thread(signal, t) \
++ list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node)
++
++#define for_each_thread(p, t) \
++ __for_each_thread((p)->signal, t)
++
++/* Careful: this is a double loop, 'break' won't work as expected. */
++#define for_each_process_thread(p, t) \
++ for_each_process(p) for_each_thread(p, t)
++
+ static inline int get_nr_threads(struct task_struct *tsk)
+ {
+ return tsk->signal->nr_threads;
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -74,6 +74,7 @@ static void __unhash_process(struct task
+ __this_cpu_dec(process_counts);
+ }
+ list_del_rcu(&p->thread_group);
++ list_del_rcu(&p->thread_node);
+ }
+
+ /*
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1045,6 +1045,11 @@ static int copy_signal(unsigned long clo
+ sig->nr_threads = 1;
+ atomic_set(&sig->live, 1);
+ atomic_set(&sig->sigcnt, 1);
++
++ /* list_add(thread_node, thread_head) without INIT_LIST_HEAD() */
++ sig->thread_head = (struct list_head)LIST_HEAD_INIT(tsk->thread_node);
++ tsk->thread_node = (struct list_head)LIST_HEAD_INIT(sig->thread_head);
++
+ init_waitqueue_head(&sig->wait_chldexit);
+ sig->curr_target = tsk;
+ init_sigpending(&sig->shared_pending);
+@@ -1471,6 +1476,8 @@ static struct task_struct *copy_process(
+ p->group_leader = current->group_leader;
+ list_add_tail_rcu(&p->thread_group,
+ &p->group_leader->thread_group);
++ list_add_tail_rcu(&p->thread_node,
++ &p->signal->thread_head);
+ }
+ attach_pid(p, PIDTYPE_PID, pid);
+ nr_threads++;
--- /dev/null
+From 80628ca06c5d42929de6bc22c0a41589a834d151 Mon Sep 17 00:00:00 2001
+From: Oleg Nesterov <oleg@redhat.com>
+Date: Wed, 3 Jul 2013 15:08:30 -0700
+Subject: kernel/fork.c:copy_process(): unify CLONE_THREAD-or-thread_group_leader code
+
+From: Oleg Nesterov <oleg@redhat.com>
+
+commit 80628ca06c5d42929de6bc22c0a41589a834d151 upstream.
+
+Cleanup and preparation for the next changes.
+
+Move the "if (clone_flags & CLONE_THREAD)" code down under "if
+(likely(p->pid))" and turn it into into the "else" branch. This makes the
+process/thread initialization more symmetrical and removes one check.
+
+Signed-off-by: Oleg Nesterov <oleg@redhat.com>
+Cc: "Eric W. Biederman" <ebiederm@xmission.com>
+Cc: Michal Hocko <mhocko@suse.cz>
+Cc: Pavel Emelyanov <xemul@parallels.com>
+Cc: Sergey Dyasly <dserrg@gmail.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Li Zefan <lizefan@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/fork.c | 15 +++++++--------
+ 1 file changed, 7 insertions(+), 8 deletions(-)
+
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1448,14 +1448,6 @@ static struct task_struct *copy_process(
+ goto bad_fork_free_pid;
+ }
+
+- if (clone_flags & CLONE_THREAD) {
+- current->signal->nr_threads++;
+- atomic_inc(&current->signal->live);
+- atomic_inc(&current->signal->sigcnt);
+- p->group_leader = current->group_leader;
+- list_add_tail_rcu(&p->thread_group, &p->group_leader->thread_group);
+- }
+-
+ if (likely(p->pid)) {
+ ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace);
+
+@@ -1472,6 +1464,13 @@ static struct task_struct *copy_process(
+ list_add_tail(&p->sibling, &p->real_parent->children);
+ list_add_tail_rcu(&p->tasks, &init_task.tasks);
+ __this_cpu_inc(process_counts);
++ } else {
++ current->signal->nr_threads++;
++ atomic_inc(&current->signal->live);
++ atomic_inc(&current->signal->sigcnt);
++ p->group_leader = current->group_leader;
++ list_add_tail_rcu(&p->thread_group,
++ &p->group_leader->thread_group);
+ }
+ attach_pid(p, PIDTYPE_PID, pid);
+ nr_threads++;
--- /dev/null
+From 4d4048be8a93769350efa31d2482a038b7de73d0 Mon Sep 17 00:00:00 2001
+From: Oleg Nesterov <oleg@redhat.com>
+Date: Tue, 21 Jan 2014 15:50:01 -0800
+Subject: oom_kill: add rcu_read_lock() into find_lock_task_mm()
+
+From: Oleg Nesterov <oleg@redhat.com>
+
+commit 4d4048be8a93769350efa31d2482a038b7de73d0 upstream.
+
+find_lock_task_mm() expects to be called under rcu or tasklist lock, but
+it seems that at least oom_unkillable_task()->task_in_mem_cgroup() and
+mem_cgroup_out_of_memory()->oom_badness() can call it lockless.
+
+Perhaps we could fix the callers, but this patch simply adds the rcu lock
+inside find_lock_task_mm(). This also allows us to simplify one of its
+callers, oom_kill_process(), a bit.
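+
+A sketch of the resulting calling convention (the caller body below is
+illustrative; it mirrors what oom_kill_process() does after this change):
+
+        p = find_lock_task_mm(victim);  /* rcu_read_lock() is taken inside */
+        if (!p) {
+                put_task_struct(victim);
+                return;
+        }
+        /* p->mm is stable here: the helper returns with task_lock(p)
+         * held, and the caller drops it with task_unlock() when done. */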
+
+Signed-off-by: Oleg Nesterov <oleg@redhat.com>
+Cc: Sergey Dyasly <dserrg@gmail.com>
+Cc: Sameer Nanda <snanda@chromium.org>
+Cc: "Eric W. Biederman" <ebiederm@xmission.com>
+Cc: Frederic Weisbecker <fweisbec@gmail.com>
+Cc: Mandeep Singh Baines <msb@chromium.org>
+Cc: "Ma, Xindong" <xindong.ma@intel.com>
+Reviewed-by: Michal Hocko <mhocko@suse.cz>
+Cc: "Tu, Xiaobing" <xiaobing.tu@intel.com>
+Acked-by: David Rientjes <rientjes@google.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Li Zefan <lizefan@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/oom_kill.c | 12 ++++++++----
+ 1 file changed, 8 insertions(+), 4 deletions(-)
+
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -102,14 +102,19 @@ struct task_struct *find_lock_task_mm(st
+ {
+ struct task_struct *t;
+
++ rcu_read_lock();
++
+ for_each_thread(p, t) {
+ task_lock(t);
+ if (likely(t->mm))
+- return t;
++ goto found;
+ task_unlock(t);
+ }
++ t = NULL;
++found:
++ rcu_read_unlock();
+
+- return NULL;
++ return t;
+ }
+
+ /* return true if the task is not adequate as candidate victim task. */
+@@ -461,10 +466,8 @@ void oom_kill_process(struct task_struct
+ }
+ read_unlock(&tasklist_lock);
+
+- rcu_read_lock();
+ p = find_lock_task_mm(victim);
+ if (!p) {
+- rcu_read_unlock();
+ put_task_struct(victim);
+ return;
+ } else if (victim != p) {
+@@ -490,6 +493,7 @@ void oom_kill_process(struct task_struct
+ * That thread will now get access to memory reserves since it has a
+ * pending fatal signal.
+ */
++ rcu_read_lock();
+ for_each_process(p)
+ if (p->mm == mm && !same_thread_group(p, victim) &&
+ !(p->flags & PF_KTHREAD)) {
--- /dev/null
+From 1da4db0cd5c8a31d4468ec906b413e75e604b465 Mon Sep 17 00:00:00 2001
+From: Oleg Nesterov <oleg@redhat.com>
+Date: Tue, 21 Jan 2014 15:49:58 -0800
+Subject: oom_kill: change oom_kill.c to use for_each_thread()
+
+From: Oleg Nesterov <oleg@redhat.com>
+
+commit 1da4db0cd5c8a31d4468ec906b413e75e604b465 upstream.
+
+Change oom_kill.c to use for_each_thread() rather than the racy
+while_each_thread() which can loop forever if we race with exit.
+
+Note also that most users were buggy even if while_each_thread() was
+fine: the task can exit even _before_ rcu_read_lock().
+
+Fortunately the new for_each_thread() only requires a stable
+task_struct, so this change fixes both problems.
+
+Signed-off-by: Oleg Nesterov <oleg@redhat.com>
+Reviewed-by: Sergey Dyasly <dserrg@gmail.com>
+Tested-by: Sergey Dyasly <dserrg@gmail.com>
+Reviewed-by: Sameer Nanda <snanda@chromium.org>
+Cc: "Eric W. Biederman" <ebiederm@xmission.com>
+Cc: Frederic Weisbecker <fweisbec@gmail.com>
+Cc: Mandeep Singh Baines <msb@chromium.org>
+Cc: "Ma, Xindong" <xindong.ma@intel.com>
+Reviewed-by: Michal Hocko <mhocko@suse.cz>
+Cc: "Tu, Xiaobing" <xiaobing.tu@intel.com>
+Acked-by: David Rientjes <rientjes@google.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Li Zefan <lizefan@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/oom_kill.c | 20 ++++++++++----------
+ 1 file changed, 10 insertions(+), 10 deletions(-)
+
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -59,7 +59,7 @@ static bool has_intersects_mems_allowed(
+ {
+ struct task_struct *start = tsk;
+
+- do {
++ for_each_thread(start, tsk) {
+ if (mask) {
+ /*
+ * If this is a mempolicy constrained oom, tsk's
+@@ -77,7 +77,7 @@ static bool has_intersects_mems_allowed(
+ if (cpuset_mems_allowed_intersects(current, tsk))
+ return true;
+ }
+- } while_each_thread(start, tsk);
++ }
+
+ return false;
+ }
+@@ -97,14 +97,14 @@ static bool has_intersects_mems_allowed(
+ */
+ struct task_struct *find_lock_task_mm(struct task_struct *p)
+ {
+- struct task_struct *t = p;
++ struct task_struct *t;
+
+- do {
++ for_each_thread(p, t) {
+ task_lock(t);
+ if (likely(t->mm))
+ return t;
+ task_unlock(t);
+- } while_each_thread(p, t);
++ }
+
+ return NULL;
+ }
+@@ -301,7 +301,7 @@ static struct task_struct *select_bad_pr
+ unsigned long chosen_points = 0;
+
+ rcu_read_lock();
+- do_each_thread(g, p) {
++ for_each_process_thread(g, p) {
+ unsigned int points;
+
+ switch (oom_scan_process_thread(p, totalpages, nodemask,
+@@ -323,7 +323,7 @@ static struct task_struct *select_bad_pr
+ chosen = p;
+ chosen_points = points;
+ }
+- } while_each_thread(g, p);
++ }
+ if (chosen)
+ get_task_struct(chosen);
+ rcu_read_unlock();
+@@ -406,7 +406,7 @@ void oom_kill_process(struct task_struct
+ {
+ struct task_struct *victim = p;
+ struct task_struct *child;
+- struct task_struct *t = p;
++ struct task_struct *t;
+ struct mm_struct *mm;
+ unsigned int victim_points = 0;
+ static DEFINE_RATELIMIT_STATE(oom_rs, DEFAULT_RATELIMIT_INTERVAL,
+@@ -437,7 +437,7 @@ void oom_kill_process(struct task_struct
+ * still freeing memory.
+ */
+ read_lock(&tasklist_lock);
+- do {
++ for_each_thread(p, t) {
+ list_for_each_entry(child, &t->children, sibling) {
+ unsigned int child_points;
+
+@@ -455,7 +455,7 @@ void oom_kill_process(struct task_struct
+ get_task_struct(victim);
+ }
+ }
+- } while_each_thread(p, t);
++ }
+ read_unlock(&tasklist_lock);
+
+ rcu_read_lock();
--- /dev/null
+From ad96244179fbd55b40c00f10f399bc04739b8e1f Mon Sep 17 00:00:00 2001
+From: Oleg Nesterov <oleg@redhat.com>
+Date: Tue, 21 Jan 2014 15:50:00 -0800
+Subject: oom_kill: has_intersects_mems_allowed() needs rcu_read_lock()
+
+From: Oleg Nesterov <oleg@redhat.com>
+
+commit ad96244179fbd55b40c00f10f399bc04739b8e1f upstream.
+
+At least out_of_memory() calls has_intersects_mems_allowed() without
+even rcu_read_lock(); this is obviously buggy.
+
+Add the necessary rcu_read_lock(). This means that we cannot simply
+return from the loop; we need "bool ret" and "break".
+
+While at it, swap the names of the task_struct pointers (the argument
+and the local). This cleans up the code a little bit and avoids the
+unnecessary initialization.
+
+Signed-off-by: Oleg Nesterov <oleg@redhat.com>
+Reviewed-by: Sergey Dyasly <dserrg@gmail.com>
+Tested-by: Sergey Dyasly <dserrg@gmail.com>
+Reviewed-by: Sameer Nanda <snanda@chromium.org>
+Cc: "Eric W. Biederman" <ebiederm@xmission.com>
+Cc: Frederic Weisbecker <fweisbec@gmail.com>
+Cc: Mandeep Singh Baines <msb@chromium.org>
+Cc: "Ma, Xindong" <xindong.ma@intel.com>
+Reviewed-by: Michal Hocko <mhocko@suse.cz>
+Cc: "Tu, Xiaobing" <xiaobing.tu@intel.com>
+Acked-by: David Rientjes <rientjes@google.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Li Zefan <lizefan@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/oom_kill.c | 19 +++++++++++--------
+ 1 file changed, 11 insertions(+), 8 deletions(-)
+
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -47,18 +47,20 @@ static DEFINE_SPINLOCK(zone_scan_lock);
+ #ifdef CONFIG_NUMA
+ /**
+ * has_intersects_mems_allowed() - check task eligiblity for kill
+- * @tsk: task struct of which task to consider
++ * @start: task struct of which task to consider
+ * @mask: nodemask passed to page allocator for mempolicy ooms
+ *
+ * Task eligibility is determined by whether or not a candidate task, @tsk,
+ * shares the same mempolicy nodes as current if it is bound by such a policy
+ * and whether or not it has the same set of allowed cpuset nodes.
+ */
+-static bool has_intersects_mems_allowed(struct task_struct *tsk,
++static bool has_intersects_mems_allowed(struct task_struct *start,
+ const nodemask_t *mask)
+ {
+- struct task_struct *start = tsk;
++ struct task_struct *tsk;
++ bool ret = false;
+
++ rcu_read_lock();
+ for_each_thread(start, tsk) {
+ if (mask) {
+ /*
+@@ -67,19 +69,20 @@ static bool has_intersects_mems_allowed(
+ * mempolicy intersects current, otherwise it may be
+ * needlessly killed.
+ */
+- if (mempolicy_nodemask_intersects(tsk, mask))
+- return true;
++ ret = mempolicy_nodemask_intersects(tsk, mask);
+ } else {
+ /*
+ * This is not a mempolicy constrained oom, so only
+ * check the mems of tsk's cpuset.
+ */
+- if (cpuset_mems_allowed_intersects(current, tsk))
+- return true;
++ ret = cpuset_mems_allowed_intersects(current, tsk);
+ }
++ if (ret)
++ break;
+ }
++ rcu_read_unlock();
+
+- return false;
++ return ret;
+ }
+ #else
+ static bool has_intersects_mems_allowed(struct task_struct *tsk,
netfilter-nf_conntrack-avoid-large-timeout-for-mid-stream-pickup.patch
arm-7748-1-oabi-handle-faults-when-loading-swi-instruction-from-userspace.patch
serial-8250_dma-check-the-result-of-tx-buffer-mapping.patch
+ext2-fix-fs-corruption-in-ext2_get_xip_mem.patch
+arm-multi_v7_defconfig-enable-zynq-uart-driver.patch
+kernel-fork.c-copy_process-unify-clone_thread-or-thread_group_leader-code.patch
+introduce-for_each_thread-to-replace-the-buggy-while_each_thread.patch
+oom_kill-change-oom_kill.c-to-use-for_each_thread.patch
+oom_kill-has_intersects_mems_allowed-needs-rcu_read_lock.patch
+oom_kill-add-rcu_read_lock-into-find_lock_task_mm.patch
+vm_is_stack-use-for_each_thread-rather-then-buggy-while_each_thread.patch
--- /dev/null
+From 4449a51a7c281602d3a385044ab928322a122a02 Mon Sep 17 00:00:00 2001
+From: Oleg Nesterov <oleg@redhat.com>
+Date: Fri, 8 Aug 2014 14:19:17 -0700
+Subject: vm_is_stack: use for_each_thread() rather then buggy while_each_thread()
+
+From: Oleg Nesterov <oleg@redhat.com>
+
+commit 4449a51a7c281602d3a385044ab928322a122a02 upstream.
+
+Aleksei hit a soft lockup while reading /proc/PID/smaps. David
+investigated the problem and suggested the right fix.
+
+while_each_thread() is racy and should die; this patch updates
+vm_is_stack().
+
+Signed-off-by: Oleg Nesterov <oleg@redhat.com>
+Reported-by: Aleksei Besogonov <alex.besogonov@gmail.com>
+Tested-by: Aleksei Besogonov <alex.besogonov@gmail.com>
+Suggested-by: David Rientjes <rientjes@google.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Li Zefan <lizefan@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/util.c | 9 +++------
+ 1 file changed, 3 insertions(+), 6 deletions(-)
+
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -272,17 +272,14 @@ pid_t vm_is_stack(struct task_struct *ta
+
+ if (in_group) {
+ struct task_struct *t;
+- rcu_read_lock();
+- if (!pid_alive(task))
+- goto done;
+
+- t = task;
+- do {
++ rcu_read_lock();
++ for_each_thread(task, t) {
+ if (vm_is_stack_for_task(t, vma)) {
+ ret = t->pid;
+ goto done;
+ }
+- } while_each_thread(task, t);
++ }
+ done:
+ rcu_read_unlock();
+ }