--- /dev/null
+From 3625c2c234ef66acf21a72d47a5ffa94f6c5ebf2 Mon Sep 17 00:00:00 2001
+From: Jan Beulich <JBeulich@suse.com>
+Date: Tue, 26 Jan 2016 04:15:18 -0700
+Subject: x86/mm: Fix types used in pgprot cacheability flags translations
+
+From: Jan Beulich <JBeulich@suse.com>
+
+commit 3625c2c234ef66acf21a72d47a5ffa94f6c5ebf2 upstream.
+
+For PAE kernels "unsigned long" is not suitable to hold page protection
+flags, since _PAGE_NX doesn't fit there. This is the reason for quite a
+few W+X pages getting reported as insecure during boot (observed namely
+for the entire initrd range).
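+
+For illustration only (not part of this patch): with PAE enabled
+pgprotval_t is a 64-bit type while unsigned long is only 32 bits wide,
+so storing the protection bits in an unsigned long silently drops the
+upper half, including _PAGE_NX (bit 63):
+
+	pgprotval_t prot = _PAGE_PRESENT | _PAGE_RW | _PAGE_NX;
+	unsigned long val = prot;	/* upper 32 bits, and thus NX, are lost */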
+
+Fixes: 281d4078be ("x86: Make page cache mode a real type")
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Juergen Gross <JGross@suse.com>
+Link: http://lkml.kernel.org/r/56A7635602000078000CAFF1@prv-mh.provo.novell.com
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/include/asm/pgtable_types.h | 6 ++----
+ 1 file changed, 2 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -363,20 +363,18 @@ static inline enum page_cache_mode pgpro
+ }
+ static inline pgprot_t pgprot_4k_2_large(pgprot_t pgprot)
+ {
++ pgprotval_t val = pgprot_val(pgprot);
+ pgprot_t new;
+- unsigned long val;
+
+- val = pgprot_val(pgprot);
+ pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+ ((val & _PAGE_PAT) << (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
+ return new;
+ }
+ static inline pgprot_t pgprot_large_2_4k(pgprot_t pgprot)
+ {
++ pgprotval_t val = pgprot_val(pgprot);
+ pgprot_t new;
+- unsigned long val;
+
+- val = pgprot_val(pgprot);
+ pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+ ((val & _PAGE_PAT_LARGE) >>
+ (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
--- /dev/null
+From 742563777e8da62197d6cb4b99f4027f59454735 Mon Sep 17 00:00:00 2001
+From: Matt Fleming <matt@codeblueprint.co.uk>
+Date: Fri, 29 Jan 2016 11:36:10 +0000
+Subject: x86/mm/pat: Avoid truncation when converting cpa->numpages to address
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Matt Fleming <matt@codeblueprint.co.uk>
+
+commit 742563777e8da62197d6cb4b99f4027f59454735 upstream.
+
+There are a couple of nasty truncation bugs lurking in the pageattr
+code that can be triggered when mapping EFI regions, e.g. when we pass
+a cpa->pgd pointer. Because cpa->numpages is a 32-bit value, shifting
+left by PAGE_SHIFT will truncate the resultant address to 32-bits.
+
+Viorel-Cătălin managed to trigger this bug on his Dell machine that
+provides a ~5GB EFI region which requires 1236992 pages to be mapped.
+When calling populate_pud() the end of the region gets calculated
+incorrectly in the following buggy expression,
+
+ end = start + (cpa->numpages << PAGE_SHIFT);
+
+And only 188416 pages are mapped. Next, populate_pud() gets invoked
+for a second time because of the loop in __change_page_attr_set_clr(),
+only this time no pages get mapped because shifting the remaining
+number of pages (1048576) by PAGE_SHIFT is zero. At which point the
+loop in __change_page_attr_set_clr() spins forever because we fail to
+make any progress.
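+
+For illustration, the numbers above worked through in 32-bit arithmetic:
+
+	1236992 << PAGE_SHIFT = 0x12e000000 -> truncated to 0x2e000000,
+	                        i.e. 188416 pages' worth of address space
+	1048576 << PAGE_SHIFT = 0x100000000 -> truncated to 0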
+
+Hitting this bug depends very much on the virtual address we pick to
+map the large region at and how many pages we map on the initial run
+through the loop. This explains why this issue was only recently hit
+with the introduction of commit
+
+ a5caa209ba9c ("x86/efi: Fix boot crash by mapping EFI memmap
+ entries bottom-up at runtime, instead of top-down")
+
+It's interesting to note that safe uses of cpa->numpages do exist in
+the pageattr code. If instead of shifting ->numpages we multiply by
+PAGE_SIZE, no truncation occurs because PAGE_SIZE is a UL value, and
+so the result is unsigned long.
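+
+A minimal sketch of the difference (illustration only, not from this
+patch):
+
+	int numpages = 1236992;
+	unsigned long bad  = numpages << PAGE_SHIFT; /* shift done as int, truncates  */
+	unsigned long good = numpages * PAGE_SIZE;   /* promoted to unsigned long, OK */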
+
+To avoid surprises when users try to convert very large cpa->numpages
+values to addresses, change the data type from 'int' to 'unsigned
+long', thereby making it suitable for shifting by PAGE_SHIFT without
+any type casting.
+
+The alternative would be to make liberal use of casting, but that is
+far more likely to cause problems in the future when someone adds more
+code and fails to cast properly; this bug was difficult enough to
+track down in the first place.
+
+Reported-and-tested-by: Viorel-Cătălin Răpițeanu <rapiteanu.catalin@gmail.com>
+Acked-by: Borislav Petkov <bp@alien8.de>
+Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
+Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
+Link: https://bugzilla.kernel.org/show_bug.cgi?id=110131
+Link: http://lkml.kernel.org/r/1454067370-10374-1-git-send-email-matt@codeblueprint.co.uk
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/mm/pageattr.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -33,7 +33,7 @@ struct cpa_data {
+ pgd_t *pgd;
+ pgprot_t mask_set;
+ pgprot_t mask_clr;
+- int numpages;
++ unsigned long numpages;
+ int flags;
+ unsigned long pfn;
+ unsigned force_split : 1;
+@@ -1345,7 +1345,7 @@ static int __change_page_attr_set_clr(st
+ * CPA operation. Either a large page has been
+ * preserved or a single page update happened.
+ */
+- BUG_ON(cpa->numpages > numpages);
++ BUG_ON(cpa->numpages > numpages || !cpa->numpages);
+ numpages -= cpa->numpages;
+ if (cpa->flags & (CPA_PAGES_ARRAY | CPA_ARRAY))
+ cpa->curpage++;