--- /dev/null
+From 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 Mon Sep 17 00:00:00 2001
+From: chenjie <chenjie6@huawei.com>
+Date: Wed, 29 Nov 2017 16:10:54 -0800
+Subject: mm/madvise.c: fix madvise() infinite loop under special circumstances
+
+From: chenjie <chenjie6@huawei.com>
+
+commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 upstream.
+
+MADV_WILLNEED has always been a noop for DAX (formerly XIP) mappings.
+Unfortunately madvise_willneed() doesn't communicate this information
+properly to the generic madvise syscall implementation. The calling
+convention there is quite subtle: madvise_vma() is supposed to either
+return an error or update *prev; otherwise the main loop will never
+advance to the next vma and will keep looping forever with no way to
+get out of the kernel.
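+
+For reference, the relevant loop in the syscall looks roughly like this
+(a simplified sketch of the 3.x-era SYSCALL_DEFINE3(madvise); locking
+and the unmapped-hole handling are omitted):
+
+	for (;;) {
+		tmp = min(vma->vm_end, end);
+		error = madvise_vma(vma, &prev, start, tmp, behavior);
+		if (error)
+			goto out;
+		start = tmp;
+		if (prev && start < prev->vm_end)
+			start = prev->vm_end;
+		if (start >= end)
+			goto out;
+		/*
+		 * If madvise_vma() did not move prev forward, this
+		 * re-selects the vma we just processed, forever.
+		 */
+		vma = prev ? prev->vm_next : find_vma(current->mm, start);
+	}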
+
+It seems this has been broken since its introduction. Nobody has
+noticed because nobody seems to be using MADV_WILLNEED on these DAX
+mappings.
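+
+A minimal user-space sketch of the trigger (error handling omitted;
+/mnt/dax/file is a hypothetical path on a DAX-mounted filesystem, and
+the advised range deliberately extends past the mapping so the loop
+has to advance to the next vma):
+
+	#include <fcntl.h>
+	#include <sys/mman.h>
+	#include <unistd.h>
+
+	int main(void)
+	{
+		long pg = sysconf(_SC_PAGESIZE);
+		int fd = open("/mnt/dax/file", O_RDWR | O_CREAT, 0600);
+		void *p;
+
+		ftruncate(fd, pg);
+		p = mmap(NULL, pg, PROT_READ | PROT_WRITE,
+			 MAP_SHARED, fd, 0);
+		/* On affected kernels this call never returns:
+		 * madvise_willneed() returns 0 without updating prev,
+		 * so the vma walk keeps revisiting the same DAX vma. */
+		madvise(p, 2 * pg, MADV_WILLNEED);
+		return 0;
+	}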
+
+[mhocko@suse.com: rewrite changelog]
+Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxuenan@huawei.com
+Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place")
+Signed-off-by: chenjie <chenjie6@huawei.com>
+Signed-off-by: guoxuenan <guoxuenan@huawei.com>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Cc: Minchan Kim <minchan@kernel.org>
+Cc: zhangyi (F) <yi.zhang@huawei.com>
+Cc: Miao Xie <miaoxie@huawei.com>
+Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
+Cc: Shaohua Li <shli@fb.com>
+Cc: Andrea Arcangeli <aarcange@redhat.com>
+Cc: Mel Gorman <mgorman@techsingularity.net>
+Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
+Cc: Rik van Riel <riel@redhat.com>
+Cc: Carsten Otte <cotte@de.ibm.com>
+Cc: Dan Williams <dan.j.williams@intel.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/madvise.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -221,9 +221,9 @@ static long madvise_willneed(struct vm_a
+ {
+ struct file *file = vma->vm_file;
+
++ *prev = vma;
+ #ifdef CONFIG_SWAP
+ if (!file || mapping_cap_swap_backed(file->f_mapping)) {
+- *prev = vma;
+ if (!file)
+ force_swapin_readahead(vma, start, end);
+ else
+@@ -241,7 +241,6 @@ static long madvise_willneed(struct vm_a
+ return 0;
+ }
+
+- *prev = vma;
+ start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+ if (end > vma->vm_end)
+ end = vma->vm_end;
--- /dev/null
+From a8f97366452ed491d13cf1e44241bc0b5740b1f0 Mon Sep 17 00:00:00 2001
+From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
+Date: Mon, 27 Nov 2017 06:21:25 +0300
+Subject: mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
+
+From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+
+commit a8f97366452ed491d13cf1e44241bc0b5740b1f0 upstream.
+
+Currently we unconditionally make the page table entry dirty in
+touch_pmd(). This can result in a false positive from
+can_follow_write_pmd().
+
+We can avoid that situation by making the page table entry dirty only
+when the caller asks for write access -- FOLL_WRITE.
+
+The patch also changes touch_pud() in the same way.
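+
+For context, the check that gets fooled is in the FOLL_FORCE/FOLL_COW
+path of follow_trans_huge_pmd(); in upstream mm/huge_memory.c it reads
+roughly:
+
+	/*
+	 * FOLL_FORCE can write to even unwritable pmd's, but only
+	 * after we've gone through a COW cycle and they are dirty.
+	 */
+	static inline bool can_follow_write_pmd(pmd_t pmd,
+						unsigned int flags)
+	{
+		return pmd_write(pmd) ||
+		       ((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+			pmd_dirty(pmd));
+	}
+
+If touch_pmd() has already dirtied the pmd on a read-only access, the
+pmd_dirty() test no longer proves that a COW cycle really happened.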
+
+Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: Hugh Dickins <hughd@google.com>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+[Salvatore Bonaccorso: backport for 3.16:
+ - Adjust context
+ - Drop specific part for PUD-sized transparent hugepages. Support
+ for PUD-sized transparent hugepages was added in v4.11-rc1
+]
+Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/huge_memory.c | 14 ++++----------
+ 1 file changed, 4 insertions(+), 10 deletions(-)
+
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1240,17 +1240,11 @@ struct page *follow_trans_huge_pmd(struc
+ VM_BUG_ON_PAGE(!PageHead(page), page);
+ if (flags & FOLL_TOUCH) {
+ pmd_t _pmd;
+- /*
+- * We should set the dirty bit only for FOLL_WRITE but
+- * for now the dirty bit in the pmd is meaningless.
+- * And if the dirty bit will become meaningful and
+- * we'll only set it with FOLL_WRITE, an atomic
+- * set_bit will be required on the pmd to set the
+- * young bit, instead of the current set_pmd_at.
+- */
+- _pmd = pmd_mkyoung(pmd_mkdirty(*pmd));
++ _pmd = pmd_mkyoung(*pmd);
++ if (flags & FOLL_WRITE)
++ _pmd = pmd_mkdirty(_pmd);
+ if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
+- pmd, _pmd, 1))
++ pmd, _pmd, flags & FOLL_WRITE))
+ update_mmu_cache_pmd(vma, addr, pmd);
+ }
+ if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
 netlink-add-a-start-callback-for-starting-a-netlink-dump.patch
 ipsec-fix-aborted-xfrm-policy-dump-crash.patch
+mm-thp-do-not-make-page-table-dirty-unconditionally-in-touch_pd.patch
+mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch