From: Greg Kroah-Hartman
Date: Mon, 22 Feb 2016 03:29:51 +0000 (-0800)
Subject: 4.4-stable patches
X-Git-Tag: v3.10.98~7
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=d15d11e5144c66c34df4d2239f375585e1d83d51;p=thirdparty%2Fkernel%2Fstable-queue.git

4.4-stable patches

added patches:
	x86-mm-fix-types-used-in-pgprot-cacheability-flags-translations.patch
	x86-mm-pat-avoid-truncation-when-converting-cpa-numpages-to-address.patch
---

diff --git a/queue-4.4/x86-mm-fix-types-used-in-pgprot-cacheability-flags-translations.patch b/queue-4.4/x86-mm-fix-types-used-in-pgprot-cacheability-flags-translations.patch
new file mode 100644
index 00000000000..8e9c3e6793f
--- /dev/null
+++ b/queue-4.4/x86-mm-fix-types-used-in-pgprot-cacheability-flags-translations.patch
@@ -0,0 +1,50 @@
+From 3625c2c234ef66acf21a72d47a5ffa94f6c5ebf2 Mon Sep 17 00:00:00 2001
+From: Jan Beulich
+Date: Tue, 26 Jan 2016 04:15:18 -0700
+Subject: x86/mm: Fix types used in pgprot cacheability flags translations
+
+From: Jan Beulich
+
+commit 3625c2c234ef66acf21a72d47a5ffa94f6c5ebf2 upstream.
+
+For PAE kernels "unsigned long" is not suitable to hold page protection
+flags, since _PAGE_NX doesn't fit there. This is the reason for quite a
+few W+X pages getting reported as insecure during boot (observed namely
+for the entire initrd range).
+
+Fixes: 281d4078be ("x86: Make page cache mode a real type")
+Signed-off-by: Jan Beulich
+Reviewed-by: Juergen Gross
+Link: http://lkml.kernel.org/r/56A7635602000078000CAFF1@prv-mh.provo.novell.com
+Signed-off-by: Thomas Gleixner
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/include/asm/pgtable_types.h |    6 ++----
+ 1 file changed, 2 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -363,20 +363,18 @@ static inline enum page_cache_mode pgpro
+ }
+ static inline pgprot_t pgprot_4k_2_large(pgprot_t pgprot)
+ {
++	pgprotval_t val = pgprot_val(pgprot);
+ 	pgprot_t new;
+-	unsigned long val;
+
+-	val = pgprot_val(pgprot);
+ 	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+ 		((val & _PAGE_PAT) << (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
+ 	return new;
+ }
+ static inline pgprot_t pgprot_large_2_4k(pgprot_t pgprot)
+ {
++	pgprotval_t val = pgprot_val(pgprot);
+ 	pgprot_t new;
+-	unsigned long val;
+
+-	val = pgprot_val(pgprot);
+ 	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+ 		((val & _PAGE_PAT_LARGE) >>
+ 		 (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
diff --git a/queue-4.4/x86-mm-pat-avoid-truncation-when-converting-cpa-numpages-to-address.patch b/queue-4.4/x86-mm-pat-avoid-truncation-when-converting-cpa-numpages-to-address.patch
new file mode 100644
index 00000000000..8135f1b36a7
--- /dev/null
+++ b/queue-4.4/x86-mm-pat-avoid-truncation-when-converting-cpa-numpages-to-address.patch
@@ -0,0 +1,87 @@
+From 742563777e8da62197d6cb4b99f4027f59454735 Mon Sep 17 00:00:00 2001
+From: Matt Fleming
+Date: Fri, 29 Jan 2016 11:36:10 +0000
+Subject: x86/mm/pat: Avoid truncation when converting cpa->numpages to address
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Matt Fleming
+
+commit 742563777e8da62197d6cb4b99f4027f59454735 upstream.
+
+There are a couple of nasty truncation bugs lurking in the pageattr
+code that can be triggered when mapping EFI regions, e.g. when we pass
+a cpa->pgd pointer. Because cpa->numpages is a 32-bit value, shifting
+left by PAGE_SHIFT will truncate the resultant address to 32-bits.
+
+Viorel-Cătălin managed to trigger this bug on his Dell machine that
+provides a ~5GB EFI region which requires 1236992 pages to be mapped.
+When calling populate_pud() the end of the region gets calculated
+incorrectly in the following buggy expression,
+
+	end = start + (cpa->numpages << PAGE_SHIFT);
+
+And only 188416 pages are mapped. Next, populate_pud() gets invoked
+for a second time because of the loop in __change_page_attr_set_clr(),
+only this time no pages get mapped because shifting the remaining
+number of pages (1048576) by PAGE_SHIFT is zero. At which point the
+loop in __change_page_attr_set_clr() spins forever because we fail to
+make any progress.
+
+Hitting this bug depends very much on the virtual address we pick to
+map the large region at and how many pages we map on the initial run
+through the loop. This explains why this issue was only recently hit
+with the introduction of commit
+
+  a5caa209ba9c ("x86/efi: Fix boot crash by mapping EFI memmap
+  entries bottom-up at runtime, instead of top-down")
+
+It's interesting to note that safe uses of cpa->numpages do exist in
+the pageattr code. If instead of shifting ->numpages we multiply by
+PAGE_SIZE, no truncation occurs because PAGE_SIZE is a UL value, and
+so the result is unsigned long.
+
+To avoid surprises when users try to convert very large cpa->numpages
+values to addresses, change the data type from 'int' to 'unsigned
+long', thereby making it suitable for shifting by PAGE_SHIFT without
+any type casting.
+
+The alternative would be to make liberal use of casting, but that is
+far more likely to cause problems in the future when someone adds more
+code and fails to cast properly; this bug was difficult enough to
+track down in the first place.
+
+Reported-and-tested-by: Viorel-Cătălin Răpițeanu
+Acked-by: Borislav Petkov
+Cc: Sai Praneeth Prakhya
+Signed-off-by: Matt Fleming
+Link: https://bugzilla.kernel.org/show_bug.cgi?id=110131
+Link: http://lkml.kernel.org/r/1454067370-10374-1-git-send-email-matt@codeblueprint.co.uk
+Signed-off-by: Thomas Gleixner
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/mm/pageattr.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -33,7 +33,7 @@ struct cpa_data {
+ 	pgd_t		*pgd;
+ 	pgprot_t	mask_set;
+ 	pgprot_t	mask_clr;
+-	int		numpages;
++	unsigned long	numpages;
+ 	int		flags;
+ 	unsigned long	pfn;
+ 	unsigned	force_split : 1;
+@@ -1345,7 +1345,7 @@ static int __change_page_attr_set_clr(st
+ 		 * CPA operation. Either a large page has been
+ 		 * preserved or a single page update happened.
+ 		 */
+-		BUG_ON(cpa->numpages > numpages);
++		BUG_ON(cpa->numpages > numpages || !cpa->numpages);
+ 		numpages -= cpa->numpages;
+ 		if (cpa->flags & (CPA_PAGES_ARRAY | CPA_ARRAY))
+ 			cpa->curpage++;