From: Greg Kroah-Hartman
Date: Thu, 11 Jun 2020 11:21:49 +0000 (+0200)
Subject: 4.14-stable patches
X-Git-Tag: v5.4.47~137
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=7332ac070662082bb212e387d2caaaab496643e6;p=thirdparty%2Fkernel%2Fstable-queue.git

4.14-stable patches

added patches:
      arch-openrisc-fix-issues-with-access_ok.patch
      fix-acccess_ok-on-alpha-and-sh.patch
      lib-reduce-user_access_begin-boundaries-in-strncpy_from_user-and-strnlen_user.patch
      make-user_access_begin-do-access_ok.patch
      serial-imx-fix-handling-of-tc-irq-in-combination-with-dma.patch
      x86-uaccess-inhibit-speculation-past-access_ok-in-user_access_begin.patch
---
diff --git a/queue-4.14/arch-openrisc-fix-issues-with-access_ok.patch b/queue-4.14/arch-openrisc-fix-issues-with-access_ok.patch
new file mode 100644
index 00000000000..5b78f5db393
--- /dev/null
+++ b/queue-4.14/arch-openrisc-fix-issues-with-access_ok.patch
@@ -0,0 +1,46 @@
+From 9cb2feb4d21d97386eb25c7b67e2793efcc1e70a Mon Sep 17 00:00:00 2001
+From: Stafford Horne
+Date: Tue, 8 Jan 2019 22:15:15 +0900
+Subject: arch/openrisc: Fix issues with access_ok()
+
+From: Stafford Horne
+
+commit 9cb2feb4d21d97386eb25c7b67e2793efcc1e70a upstream.
+
+The commit 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'")
+exposed incorrect implementations of access_ok() macro in several
+architectures. This change fixes 2 issues found in OpenRISC.
+
+OpenRISC was not properly using parenthesis for arguments and also using
+arguments twice. This patch fixes those 2 issues.
+
+I test booted this patch with v5.0-rc1 on qemu and it's working fine.
+
+Cc: Guenter Roeck
+Cc: Linus Torvalds
+Reported-by: Linus Torvalds
+Signed-off-by: Stafford Horne
+Signed-off-by: Linus Torvalds
+Signed-off-by: Miles Chen
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/openrisc/include/asm/uaccess.h |    8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/arch/openrisc/include/asm/uaccess.h
++++ b/arch/openrisc/include/asm/uaccess.h
+@@ -58,8 +58,12 @@
+ /* Ensure that addr is below task's addr_limit */
+ #define __addr_ok(addr) ((unsigned long) addr < get_fs())
+
+-#define access_ok(type, addr, size) \
+-	__range_ok((unsigned long)addr, (unsigned long)size)
++#define access_ok(type, addr, size) \
++({ \
++	unsigned long __ao_addr = (unsigned long)(addr); \
++	unsigned long __ao_size = (unsigned long)(size); \
++	__range_ok(__ao_addr, __ao_size); \
++})
+
+ /*
+  * These are the main single-value transfer routines. They automatically
diff --git a/queue-4.14/fix-acccess_ok-on-alpha-and-sh.patch b/queue-4.14/fix-acccess_ok-on-alpha-and-sh.patch
new file mode 100644
index 00000000000..d1f43285406
--- /dev/null
+++ b/queue-4.14/fix-acccess_ok-on-alpha-and-sh.patch
@@ -0,0 +1,126 @@
+From 94bd8a05cd4de344a9a57e52ef7d99550251984f Mon Sep 17 00:00:00 2001
+From: Linus Torvalds
+Date: Sun, 6 Jan 2019 11:15:04 -0800
+Subject: Fix 'acccess_ok()' on alpha and SH
+
+From: Linus Torvalds
+
+commit 94bd8a05cd4de344a9a57e52ef7d99550251984f upstream.
+
+Commit 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'")
+broke both alpha and SH booting in qemu, as noticed by Guenter Roeck.
+
+It turns out that the bug wasn't actually in that commit itself (which
+would have been surprising: it was mostly a no-op), but in how the
+addition of access_ok() to the strncpy_from_user() and strnlen_user()
+functions now triggered the case where those functions would test the
+access of the very last byte of the user address space.
+
+The string functions actually did that user range test before too, but
+they did it manually by just comparing against user_addr_max(). But
+with user_access_begin() doing the check (using "access_ok()"), it now
+exposed problems in the architecture implementations of that function.
+
+For example, on alpha, the access_ok() helper macro looked like this:
+
+    #define __access_ok(addr, size) \
+        ((get_fs().seg & (addr | size | (addr+size))) == 0)
+
+and what it basically tests is if any of the high bits get set (the
+USER_DS masking value is 0xfffffc0000000000).
+
+And that's completely wrong for the "addr+size" check. Because it's
+off-by-one for the case where we check to the very end of the user
+address space, which is exactly what the strn*_user() functions do.
+
+Why? Because "addr+size" will be exactly the size of the address space,
+so trying to access the last byte of the user address space will fail
+the __access_ok() check, even though it shouldn't. As a result, the
+user string accessor functions failed consistently - because they
+literally don't know how long the string is going to be, and the max
+access is going to be that last byte of the user address space.
+
+Side note: that alpha macro is buggy for another reason too - it re-uses
+the arguments twice.
+
+And SH has another version of almost the exact same bug:
+
+    #define __addr_ok(addr) \
+        ((unsigned long __force)(addr) < current_thread_info()->addr_limit.seg)
+
+so far so good: yes, a user address must be below the limit. But then:
+
+    #define __access_ok(addr, size) \
+        (__addr_ok((addr) + (size)))
+
+is wrong with the exact same off-by-one case: the case when "addr+size"
+is exactly _equal_ to the limit is actually perfectly fine (think "one
+byte access at the last address of the user address space")
+
+The SH version is actually seriously buggy in another way: it doesn't
+actually check for overflow, even though it did copy the _comment_ that
+talks about overflow.
+
+So it turns out that both SH and alpha actually have completely buggy
+implementations of access_ok(), but they happened to work in practice
+(although the SH overflow one is a serious serious security bug, not
+that anybody likely cares about SH security).
+
+This fixes the problems by using a similar macro on both alpha and SH.
+It isn't trying to be clever, the end address is based on this logic:
+
+        unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;
+
+which basically says "add start and length, and then subtract one unless
+the length was zero". We can't subtract one for a zero length, or we'd
+just hit an underflow instead.
+
+For a lot of access_ok() users the length is a constant, so this isn't
+actually as expensive as it initially looks.
+
+Reported-and-tested-by: Guenter Roeck
+Cc: Matt Turner
+Cc: Yoshinori Sato
+Signed-off-by: Linus Torvalds
+Signed-off-by: Miles Chen
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/alpha/include/asm/uaccess.h |    8 +++++---
+ arch/sh/include/asm/uaccess.h    |    7 +++++--
+ 2 files changed, 10 insertions(+), 5 deletions(-)
+
+--- a/arch/alpha/include/asm/uaccess.h
++++ b/arch/alpha/include/asm/uaccess.h
+@@ -30,11 +30,13 @@
+  * Address valid if:
+  *  - "addr" doesn't have any high-bits set
+  *  - AND "size" doesn't have any high-bits set
+- *  - AND "addr+size" doesn't have any high-bits set
++ *  - AND "addr+size-(size != 0)" doesn't have any high-bits set
+  *  - OR we are in kernel mode.
+  */
+-#define __access_ok(addr, size) \
+-	((get_fs().seg & (addr | size | (addr+size))) == 0)
++#define __access_ok(addr, size) ({ \
++	unsigned long __ao_a = (addr), __ao_b = (size); \
++	unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b; \
++	(get_fs().seg & (__ao_a | __ao_b | __ao_end)) == 0; })
+
+ #define access_ok(type, addr, size) \
+ ({ \
+--- a/arch/sh/include/asm/uaccess.h
++++ b/arch/sh/include/asm/uaccess.h
+@@ -16,8 +16,11 @@
+  * sum := addr + size;  carry? --> flag = true;
+  * if (sum >= addr_limit) flag = true;
+  */
+-#define __access_ok(addr, size) \
+-	(__addr_ok((addr) + (size)))
++#define __access_ok(addr, size) ({ \
++	unsigned long __ao_a = (addr), __ao_b = (size); \
++	unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b; \
++	__ao_end >= __ao_a && __addr_ok(__ao_end); })
++
+ #define access_ok(type, addr, size) \
+ 	(__chk_user_ptr(addr), \
+ 	 __access_ok((unsigned long __force)(addr), (size)))
diff --git a/queue-4.14/lib-reduce-user_access_begin-boundaries-in-strncpy_from_user-and-strnlen_user.patch b/queue-4.14/lib-reduce-user_access_begin-boundaries-in-strncpy_from_user-and-strnlen_user.patch
new file mode 100644
index 00000000000..218b95dd5d5
--- /dev/null
+++ b/queue-4.14/lib-reduce-user_access_begin-boundaries-in-strncpy_from_user-and-strnlen_user.patch
@@ -0,0 +1,89 @@
+From ab10ae1c3bef56c29bac61e1201c752221b87b41 Mon Sep 17 00:00:00 2001
+From: Christophe Leroy
+Date: Thu, 23 Jan 2020 08:34:18 +0000
+Subject: lib: Reduce user_access_begin() boundaries in strncpy_from_user() and strnlen_user()
+
+From: Christophe Leroy
+
+commit ab10ae1c3bef56c29bac61e1201c752221b87b41 upstream.
+
+The range passed to user_access_begin() by strncpy_from_user() and
+strnlen_user() starts at 'src' and goes up to the limit of userspace
+although reads will be limited by the 'count' param.
+
+On 32 bits powerpc (book3s/32) access has to be granted for each
+256Mbytes segment and the cost increases with the number of segments to
+unlock.
+
+Limit the range with 'count' param.
+
+Fixes: 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'")
+Signed-off-by: Christophe Leroy
+Signed-off-by: Linus Torvalds
+Signed-off-by: Miles Chen
+Signed-off-by: Greg Kroah-Hartman
+---
+ lib/strncpy_from_user.c |   14 +++++++-------
+ lib/strnlen_user.c      |   14 +++++++-------
+ 2 files changed, 14 insertions(+), 14 deletions(-)
+
+--- a/lib/strncpy_from_user.c
++++ b/lib/strncpy_from_user.c
+@@ -29,13 +29,6 @@ static inline long do_strncpy_from_user(
+ 	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
+ 	unsigned long res = 0;
+
+-	/*
+-	 * Truncate 'max' to the user-specified limit, so that
+-	 * we only have one limit we need to check in the loop
+-	 */
+-	if (max > count)
+-		max = count;
+-
+ 	if (IS_UNALIGNED(src, dst))
+ 		goto byte_at_a_time;
+
+@@ -113,6 +106,13 @@ long strncpy_from_user(char *dst, const
+ 		unsigned long max = max_addr - src_addr;
+ 		long retval;
+
++		/*
++		 * Truncate 'max' to the user-specified limit, so that
++		 * we only have one limit we need to check in the loop
++		 */
++		if (max > count)
++			max = count;
++
+ 		kasan_check_write(dst, count);
+ 		check_object_size(dst, count, false);
+ 		if (user_access_begin(VERIFY_READ, src, max)) {
+--- a/lib/strnlen_user.c
++++ b/lib/strnlen_user.c
+@@ -32,13 +32,6 @@ static inline long do_strnlen_user(const
+ 	unsigned long c;
+
+ 	/*
+-	 * Truncate 'max' to the user-specified limit, so that
+-	 * we only have one limit we need to check in the loop
+-	 */
+-	if (max > count)
+-		max = count;
+-
+-	/*
+ 	 * Do everything aligned. But that means that we
+ 	 * need to also expand the maximum..
+ */ +@@ -114,6 +107,13 @@ long strnlen_user(const char __user *str + unsigned long max = max_addr - src_addr; + long retval; + ++ /* ++ * Truncate 'max' to the user-specified limit, so that ++ * we only have one limit we need to check in the loop ++ */ ++ if (max > count) ++ max = count; ++ + if (user_access_begin(VERIFY_READ, str, max)) { + retval = do_strnlen_user(str, count, max); + user_access_end(); diff --git a/queue-4.14/make-user_access_begin-do-access_ok.patch b/queue-4.14/make-user_access_begin-do-access_ok.patch new file mode 100644 index 00000000000..06824520d3d --- /dev/null +++ b/queue-4.14/make-user_access_begin-do-access_ok.patch @@ -0,0 +1,201 @@ +From 594cc251fdd0d231d342d88b2fdff4bc42fb0690 Mon Sep 17 00:00:00 2001 +From: Linus Torvalds +Date: Fri, 4 Jan 2019 12:56:09 -0800 +Subject: make 'user_access_begin()' do 'access_ok()' + +From: Linus Torvalds + +commit 594cc251fdd0d231d342d88b2fdff4bc42fb0690 upstream. + +Originally, the rule used to be that you'd have to do access_ok() +separately, and then user_access_begin() before actually doing the +direct (optimized) user access. + +But experience has shown that people then decide not to do access_ok() +at all, and instead rely on it being implied by other operations or +similar. Which makes it very hard to verify that the access has +actually been range-checked. + +If you use the unsafe direct user accesses, hardware features (either +SMAP - Supervisor Mode Access Protection - on x86, or PAN - Privileged +Access Never - on ARM) do force you to use user_access_begin(). But +nothing really forces the range check. + +By putting the range check into user_access_begin(), we actually force +people to do the right thing (tm), and the range check vill be visible +near the actual accesses. We have way too long a history of people +trying to avoid them. 
+
+Signed-off-by: Linus Torvalds
+Signed-off-by: Miles Chen
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/x86/include/asm/uaccess.h             |   12 +++++++++++-
+ drivers/gpu/drm/i915/i915_gem_execbuffer.c |   17 +++++++++++++++--
+ include/linux/uaccess.h                    |    2 +-
+ kernel/compat.c                            |    6 ++----
+ kernel/exit.c                              |    6 ++----
+ lib/strncpy_from_user.c                    |    9 +++++----
+ lib/strnlen_user.c                         |    9 +++++----
+ 7 files changed, 41 insertions(+), 20 deletions(-)
+
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -711,7 +711,17 @@ extern struct movsl_mask {
+  * checking before using them, but you have to surround them with the
+  * user_access_begin/end() pair.
+  */
+-#define user_access_begin() __uaccess_begin()
++static __must_check inline bool user_access_begin(int type,
++						  const void __user *ptr,
++						  size_t len)
++{
++	if (unlikely(!access_ok(type, ptr, len)))
++		return 0;
++	__uaccess_begin();
++	return 1;
++}
++
++#define user_access_begin(a, b, c) user_access_begin(a, b, c)
+ #define user_access_end() __uaccess_end()
+
+ #define unsafe_put_user(x, ptr, err_label) \
+--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+@@ -1566,7 +1566,9 @@ static int eb_copy_relocations(const str
+ 	 * happened we would make the mistake of assuming that the
+ 	 * relocations were valid.
+	 */
+-	user_access_begin();
++	if (!user_access_begin(VERIFY_WRITE, urelocs, size))
++		goto end_user;
++
+ 	for (copied = 0; copied < nreloc; copied++)
+ 		unsafe_put_user(-1,
+ 				&urelocs[copied].presumed_offset,
+@@ -2601,6 +2603,7 @@ i915_gem_execbuffer2(struct drm_device *
+ 	struct drm_i915_gem_execbuffer2 *args = data;
+ 	struct drm_i915_gem_exec_object2 *exec2_list;
+ 	struct drm_syncobj **fences = NULL;
++	const size_t count = args->buffer_count;
+ 	int err;
+
+ 	if (args->buffer_count < 1 || args->buffer_count > SIZE_MAX / sz - 1) {
+@@ -2649,7 +2652,17 @@ i915_gem_execbuffer2(struct drm_device *
+ 		unsigned int i;
+
+ 		/* Copy the new buffer offsets back to the user's exec list. */
+-		user_access_begin();
++		/*
++		 * Note: count * sizeof(*user_exec_list) does not overflow,
++		 * because we checked 'count' in check_buffer_count().
++		 *
++		 * And this range already got effectively checked earlier
++		 * when we did the "copy_from_user()" above.
++		 */
++		if (!user_access_begin(VERIFY_WRITE, user_exec_list,
++				       count * sizeof(*user_exec_list)))
++			goto end_user;
++
+ 		for (i = 0; i < args->buffer_count; i++) {
+ 			if (!(exec2_list[i].offset & UPDATE))
+ 				continue;
+--- a/include/linux/uaccess.h
++++ b/include/linux/uaccess.h
+@@ -267,7 +267,7 @@ extern long strncpy_from_unsafe(char *ds
+ 	probe_kernel_read(&retval, addr, sizeof(retval))
+
+ #ifndef user_access_begin
+-#define user_access_begin() do { } while (0)
++#define user_access_begin(type, ptr, len) access_ok(type, ptr, len)
+ #define user_access_end() do { } while (0)
+ #define unsafe_get_user(x, ptr, err) do { if (unlikely(__get_user(x, ptr))) goto err; } while (0)
+ #define unsafe_put_user(x, ptr, err) do { if (unlikely(__put_user(x, ptr))) goto err; } while (0)
+--- a/kernel/compat.c
++++ b/kernel/compat.c
+@@ -437,10 +437,9 @@ long compat_get_bitmap(unsigned long *ma
+ 	bitmap_size = ALIGN(bitmap_size, BITS_PER_COMPAT_LONG);
+ 	nr_compat_longs = BITS_TO_COMPAT_LONGS(bitmap_size);
+
+-	if (!access_ok(VERIFY_READ, umask, bitmap_size / 8))
++	if (!user_access_begin(VERIFY_READ, umask, bitmap_size / 8))
+ 		return -EFAULT;
+
+-	user_access_begin();
+ 	while (nr_compat_longs > 1) {
+ 		compat_ulong_t l1, l2;
+ 		unsafe_get_user(l1, umask++, Efault);
+@@ -467,10 +466,9 @@ long compat_put_bitmap(compat_ulong_t __
+ 	bitmap_size = ALIGN(bitmap_size, BITS_PER_COMPAT_LONG);
+ 	nr_compat_longs = BITS_TO_COMPAT_LONGS(bitmap_size);
+
+-	if (!access_ok(VERIFY_WRITE, umask, bitmap_size / 8))
++	if (!user_access_begin(VERIFY_WRITE, umask, bitmap_size / 8))
+ 		return -EFAULT;
+
+-	user_access_begin();
+ 	while (nr_compat_longs > 1) {
+ 		unsigned long m = *mask++;
+ 		unsafe_put_user((compat_ulong_t)m, umask++, Efault);
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -1597,10 +1597,9 @@ SYSCALL_DEFINE5(waitid, int, which, pid_
+ 	if (!infop)
+ 		return err;
+
+-	if (!access_ok(VERIFY_WRITE, infop, sizeof(*infop)))
++	if (!user_access_begin(VERIFY_WRITE, infop, sizeof(*infop)))
+ 		return -EFAULT;
+
+-	user_access_begin();
+ 	unsafe_put_user(signo, &infop->si_signo, Efault);
+ 	unsafe_put_user(0, &infop->si_errno, Efault);
+ 	unsafe_put_user(info.cause, &infop->si_code, Efault);
+@@ -1725,10 +1724,9 @@ COMPAT_SYSCALL_DEFINE5(waitid,
+ 	if (!infop)
+ 		return err;
+
+-	if (!access_ok(VERIFY_WRITE, infop, sizeof(*infop)))
++	if (!user_access_begin(VERIFY_WRITE, infop, sizeof(*infop)))
+ 		return -EFAULT;
+
+-	user_access_begin();
+ 	unsafe_put_user(signo, &infop->si_signo, Efault);
+ 	unsafe_put_user(0, &infop->si_errno, Efault);
+ 	unsafe_put_user(info.cause, &infop->si_code, Efault);
+--- a/lib/strncpy_from_user.c
++++ b/lib/strncpy_from_user.c
+@@ -115,10 +115,11 @@ long strncpy_from_user(char *dst, const
+
+ 	kasan_check_write(dst, count);
+ 	check_object_size(dst, count, false);
+-	user_access_begin();
+-	retval = do_strncpy_from_user(dst, src, count, max);
+-	user_access_end();
+-	return retval;
++	if (user_access_begin(VERIFY_READ, src, max)) {
++		retval = do_strncpy_from_user(dst, src, count, max);
++		user_access_end();
++		return retval;
++	}
+ 	}
+ 	return -EFAULT;
+ }
+--- a/lib/strnlen_user.c
++++ b/lib/strnlen_user.c
+@@ -114,10 +114,11 @@ long strnlen_user(const char __user *str
+ 		unsigned long max = max_addr - src_addr;
+ 		long retval;
+
+-	user_access_begin();
+-	retval = do_strnlen_user(str, count, max);
+-	user_access_end();
+-	return retval;
++	if (user_access_begin(VERIFY_READ, str, max)) {
++		retval = do_strnlen_user(str, count, max);
++		user_access_end();
++		return retval;
++	}
+ 	}
+ 	return 0;
+ }
diff --git a/queue-4.14/serial-imx-fix-handling-of-tc-irq-in-combination-with-dma.patch b/queue-4.14/serial-imx-fix-handling-of-tc-irq-in-combination-with-dma.patch
new file mode 100644
index 00000000000..cb802a228ba
--- /dev/null
+++ b/queue-4.14/serial-imx-fix-handling-of-tc-irq-in-combination-with-dma.patch
@@ -0,0 +1,73 @@
+From 1866541492641c02874bf51f9d8712b5510f2c64 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Uwe=20Kleine-K=C3=B6nig?=
+Date: Fri, 2 Mar 2018 11:07:28 +0100
+Subject: serial: imx: Fix handling of TC irq in combination with DMA
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Uwe Kleine-König
+
+commit 1866541492641c02874bf51f9d8712b5510f2c64 upstream.
+
+When using RS485 half duplex the Transmitter Complete irq is needed to
+determine the moment when the transmitter can be disabled. When using
+DMA this irq must only be enabled when DMA has completed to transfer all
+data. Otherwise the CPU might busily trigger this irq which is not
+properly handled and so the also pending irq for the DMA transfer cannot
+trigger.
+
+Signed-off-by: Uwe Kleine-König
+[Backport to v4.14]
+Signed-off-by: Frieder Schrempf
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/tty/serial/imx.c |   22 ++++++++++++++++++----
+ 1 file changed, 18 insertions(+), 4 deletions(-)
+
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -538,6 +538,11 @@ static void dma_tx_callback(void *data)
+
+ 	if (!uart_circ_empty(xmit) && !uart_tx_stopped(&sport->port))
+ 		imx_dma_tx(sport);
++	else if (sport->port.rs485.flags & SER_RS485_ENABLED) {
++		temp = readl(sport->port.membase + UCR4);
++		temp |= UCR4_TCEN;
++		writel(temp, sport->port.membase + UCR4);
++	}
+
+ 	spin_unlock_irqrestore(&sport->port.lock, flags);
+ }
+@@ -555,6 +560,10 @@ static void imx_dma_tx(struct imx_port *
+ 	if (sport->dma_is_txing)
+ 		return;
+
++	temp = readl(sport->port.membase + UCR4);
++	temp &= ~UCR4_TCEN;
++	writel(temp, sport->port.membase + UCR4);
++
+ 	sport->tx_bytes = uart_circ_chars_pending(xmit);
+
+ 	if (xmit->tail < xmit->head || xmit->head == 0) {
+@@ -617,10 +626,15 @@ static void imx_start_tx(struct uart_por
+ 		if (!(port->rs485.flags & SER_RS485_RX_DURING_TX))
+ 			imx_stop_rx(port);
+
+-		/* enable transmitter and shifter empty irq */
+-		temp = readl(port->membase + UCR4);
+-		temp |= UCR4_TCEN;
+-		writel(temp, port->membase + UCR4);
++		/*
++		 * Enable transmitter and shifter empty irq only if DMA is off.
++		 * In the DMA case this is done in the tx-callback.
++		 */
++		if (!sport->dma_is_enabled) {
++			temp = readl(port->membase + UCR4);
++			temp |= UCR4_TCEN;
++			writel(temp, port->membase + UCR4);
++		}
+ 	}
+
+ 	if (!sport->dma_is_enabled) {
diff --git a/queue-4.14/series b/queue-4.14/series
index 7aa47feaebb..2411b65cc0c 100644
--- a/queue-4.14/series
+++ b/queue-4.14/series
@@ -1,2 +1,8 @@
 ipv6-fix-ipv6_addrform-operation-logic.patch
 vxlan-avoid-infinite-loop-when-suppressing-ns-messages-with-invalid-options.patch
+make-user_access_begin-do-access_ok.patch
+fix-acccess_ok-on-alpha-and-sh.patch
+arch-openrisc-fix-issues-with-access_ok.patch
+x86-uaccess-inhibit-speculation-past-access_ok-in-user_access_begin.patch
+lib-reduce-user_access_begin-boundaries-in-strncpy_from_user-and-strnlen_user.patch
+serial-imx-fix-handling-of-tc-irq-in-combination-with-dma.patch
diff --git a/queue-4.14/x86-uaccess-inhibit-speculation-past-access_ok-in-user_access_begin.patch b/queue-4.14/x86-uaccess-inhibit-speculation-past-access_ok-in-user_access_begin.patch
new file mode 100644
index 00000000000..f49dc15abb1
--- /dev/null
+++ b/queue-4.14/x86-uaccess-inhibit-speculation-past-access_ok-in-user_access_begin.patch
@@ -0,0 +1,45 @@
+From 6e693b3ffecb0b478c7050b44a4842854154f715 Mon Sep 17 00:00:00 2001
+From: Will Deacon
+Date: Sat, 19 Jan 2019 21:56:05 +0000
+Subject: x86: uaccess: Inhibit speculation past access_ok() in user_access_begin()
+
+From: Will Deacon
+
+commit 6e693b3ffecb0b478c7050b44a4842854154f715 upstream.
+
+Commit 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'")
+makes the access_ok() check part of the user_access_begin() preceding a
+series of 'unsafe' accesses. This has the desirable effect of ensuring
+that all 'unsafe' accesses have been range-checked, without having to
+pick through all of the callsites to verify whether the appropriate
+checking has been made.
+
+However, the consolidated range check does not inhibit speculation, so
+it is still up to the caller to ensure that they are not susceptible to
+any speculative side-channel attacks for user addresses that ultimately
+fail the access_ok() check.
+
+This is an oversight, so use __uaccess_begin_nospec() to ensure that
+speculation is inhibited until the access_ok() check has passed.
+
+Reported-by: Julien Thierry
+Signed-off-by: Will Deacon
+Signed-off-by: Linus Torvalds
+Cc: Miles Chen
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/include/asm/uaccess.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -717,7 +717,7 @@ static __must_check inline bool user_acc
+ {
+ 	if (unlikely(!access_ok(type, ptr, len)))
+ 		return 0;
+-	__uaccess_begin();
++	__uaccess_begin_nospec();
+ 	return 1;
+ }