From: Sasha Levin
Date: Sun, 4 Dec 2022 23:57:13 +0000 (-0500)
Subject: Fixes for 4.14
X-Git-Tag: v4.9.335~37^2~1
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=b069ef5ee8d1aeb03c1b4a9f89255d5df682bb55;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 4.14

Signed-off-by: Sasha Levin
---

diff --git a/queue-4.14/series b/queue-4.14/series
index ba2b9479132..7322a15059d 100644
--- a/queue-4.14/series
+++ b/queue-4.14/series
@@ -72,3 +72,4 @@ nvme-restrict-management-ioctls-to-admin.patch
 x86-tsx-add-a-feature-bit-for-tsx-control-msr-support.patch
 x86-pm-add-enumeration-check-before-spec-msrs-save-restore-setup.patch
 bluetooth-l2cap-fix-accepting-connection-request-for-invalid-spsm.patch
+x86-ioremap-fix-page-aligned-size-calculation-in-__i.patch
diff --git a/queue-4.14/x86-ioremap-fix-page-aligned-size-calculation-in-__i.patch b/queue-4.14/x86-ioremap-fix-page-aligned-size-calculation-in-__i.patch
new file mode 100644
index 00000000000..03fb4d082d1
--- /dev/null
+++ b/queue-4.14/x86-ioremap-fix-page-aligned-size-calculation-in-__i.patch
@@ -0,0 +1,54 @@
+From d6cba237c830767e0b56f2ec61a234e22391c35f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 4 Dec 2022 13:52:01 -0800
+Subject: x86/ioremap: Fix page aligned size calculation in __ioremap_caller()
+
+From: Michael Kelley
+
+[ Upstream commit 4dbd6a3e90e03130973688fd79e19425f720d999 ]
+
+Current code re-calculates the size after aligning the starting and
+ending physical addresses on a page boundary. But the re-calculation
+also embeds the masking of high order bits that exceed the size of
+the physical address space (via PHYSICAL_PAGE_MASK). If the masking
+removes any high order bits, the size calculation results in a huge
+value that is likely to immediately fail.
+
+Fix this by re-calculating the page-aligned size first. Then mask any
+high order bits using PHYSICAL_PAGE_MASK.
+
+Fixes: ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")
+Signed-off-by: Michael Kelley
+Signed-off-by: Borislav Petkov
+Acked-by: Dave Hansen
+Cc:
+Link: https://lore.kernel.org/r/1668624097-14884-2-git-send-email-mikelley@microsoft.com
+Signed-off-by: Sasha Levin
+---
+ arch/x86/mm/ioremap.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index 3faf9667cc40..13ac4bc1a2dc 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -124,9 +124,15 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
+ 	 * Mappings have to be page-aligned
+ 	 */
+ 	offset = phys_addr & ~PAGE_MASK;
+-	phys_addr &= PHYSICAL_PAGE_MASK;
++	phys_addr &= PAGE_MASK;
+ 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
+ 
++	/*
++	 * Mask out any bits not part of the actual physical
++	 * address, like memory encryption bits.
++	 */
++	phys_addr &= PHYSICAL_PAGE_MASK;
++
+ 	retval = reserve_memtype(phys_addr, (u64)phys_addr + size,
+ 				 pcm, &new_pcm);
+ 	if (retval) {
+-- 
+2.35.1