From: Greg Kroah-Hartman Date: Wed, 26 Feb 2020 10:47:06 +0000 (+0100) Subject: 4.9-stable patches X-Git-Tag: v4.4.215~68 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=c48036cd785945388ebfd5f7344c770c3c2ec915;p=thirdparty%2Fkernel%2Fstable-queue.git 4.9-stable patches added patches: revert-ipc-sem-remove-uneeded-sem_undo_list-lock-usage-in-exit_sem.patch tty-serial-atmel-manage-shutdown-in-case-of-rs485-or-iso7816-mode.patch tty-serial-imx-setup-the-correct-sg-entry-for-tx-dma.patch x86-mce-amd-fix-kobject-lifetime.patch x86-mce-amd-publish-the-bank-pointer-only-after-setup-has-succeeded.patch --- diff --git a/queue-4.9/revert-ipc-sem-remove-uneeded-sem_undo_list-lock-usage-in-exit_sem.patch b/queue-4.9/revert-ipc-sem-remove-uneeded-sem_undo_list-lock-usage-in-exit_sem.patch new file mode 100644 index 00000000000..c8880c10cb9 --- /dev/null +++ b/queue-4.9/revert-ipc-sem-remove-uneeded-sem_undo_list-lock-usage-in-exit_sem.patch @@ -0,0 +1,135 @@ +From edf28f4061afe4c2d9eb1c3323d90e882c1d6800 Mon Sep 17 00:00:00 2001 +From: Ioanna Alifieraki +Date: Thu, 20 Feb 2020 20:04:00 -0800 +Subject: Revert "ipc,sem: remove uneeded sem_undo_list lock usage in exit_sem()" + +From: Ioanna Alifieraki + +commit edf28f4061afe4c2d9eb1c3323d90e882c1d6800 upstream. + +This reverts commit a97955844807e327df11aa33869009d14d6b7de0. + +Commit a97955844807 ("ipc,sem: remove uneeded sem_undo_list lock usage +in exit_sem()") removes a lock that is needed. This leads to a process +looping infinitely in exit_sem() and can also lead to a crash. There is +a reproducer available in [1] and with the commit reverted the issue +does not reproduce anymore. + +Using the reproducer found in [1] is fairly easy to reach a point where +one of the child processes is looping infinitely in exit_sem between +for(;;) and if (semid == -1) block, while it's trying to free its last +sem_undo structure which has already been freed by freeary(). + +Each sem_undo struct is on two lists: one per semaphore set (list_id) +and one per process (list_proc). The list_id list tracks undos by +semaphore set, and the list_proc by process. + +Undo structures are removed either by freeary() or by exit_sem(). The +freeary function is invoked when the user invokes a syscall to remove a +semaphore set. During this operation freeary() traverses the list_id +associated with the semaphore set and removes the undo structures from +both the list_id and list_proc lists. + +For this case, exit_sem() is called at process exit. Each process +contains a struct sem_undo_list (referred to as "ulp") which contains +the head for the list_proc list. When the process exits, exit_sem() +traverses this list to remove each sem_undo struct. As in freeary(), +whenever a sem_undo struct is removed from list_proc, it is also removed +from the list_id list. + +Removing elements from list_id is safe for both exit_sem() and freeary() +due to sem_lock(). Removing elements from list_proc is not safe; +freeary() locks &un->ulp->lock when it performs +list_del_rcu(&un->list_proc) but exit_sem() does not (locking was +removed by commit a97955844807 ("ipc,sem: remove uneeded sem_undo_list +lock usage in exit_sem()"). + +This can result in the following situation while executing the +reproducer [1] : Consider a child process in exit_sem() and the parent +in freeary() (because of semctl(sid[i], NSEM, IPC_RMID)). + + - The list_proc for the child contains the last two undo structs A and + B (the rest have been removed either by exit_sem() or freeary()). 
+ + - The semid for A is 1 and semid for B is 2. + + - exit_sem() removes A and at the same time freeary() removes B. + + - Since A and B have different semid sem_lock() will acquire different + locks for each process and both can proceed. + +The bug is that they remove A and B from the same list_proc at the same +time because only freeary() acquires the ulp lock. When exit_sem() +removes A it makes ulp->list_proc.next to point at B and at the same +time freeary() removes B setting B->semid=-1. + +At the next iteration of for(;;) loop exit_sem() will try to remove B. + +The only way to break from for(;;) is for (&un->list_proc == +&ulp->list_proc) to be true which is not. Then exit_sem() will check if +B->semid=-1 which is and will continue looping in for(;;) until the +memory for B is reallocated and the value at B->semid is changed. + +At that point, exit_sem() will crash attempting to unlink B from the +lists (this can be easily triggered by running the reproducer [1] a +second time). + +To prove this scenario instrumentation was added to keep information +about each sem_undo (un) struct that is removed per process and per +semaphore set (sma). + + CPU0 CPU1 + [caller holds sem_lock(sma for A)] ... + freeary() exit_sem() + ... ... + ... sem_lock(sma for B) + spin_lock(A->ulp->lock) ... + list_del_rcu(un_A->list_proc) list_del_rcu(un_B->list_proc) + +Undo structures A and B have different semid and sem_lock() operations +proceed. However they belong to the same list_proc list and they are +removed at the same time. This results into ulp->list_proc.next +pointing to the address of B which is already removed. + +After reverting commit a97955844807 ("ipc,sem: remove uneeded +sem_undo_list lock usage in exit_sem()") the issue was no longer +reproducible. + +[1] https://bugzilla.redhat.com/show_bug.cgi?id=1694779 + +Link: http://lkml.kernel.org/r/20191211191318.11860-1-ioanna-maria.alifieraki@canonical.com +Fixes: a97955844807 ("ipc,sem: remove uneeded sem_undo_list lock usage in exit_sem()") +Signed-off-by: Ioanna Alifieraki +Acked-by: Manfred Spraul +Acked-by: Herton R. Krzesinski +Cc: Arnd Bergmann +Cc: Catalin Marinas +Cc: +Cc: Joel Fernandes (Google) +Cc: Davidlohr Bueso +Cc: Jay Vosburgh +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + ipc/sem.c | 6 ++---- + 1 file changed, 2 insertions(+), 4 deletions(-) + +--- a/ipc/sem.c ++++ b/ipc/sem.c +@@ -2159,11 +2159,9 @@ void exit_sem(struct task_struct *tsk) + ipc_assert_locked_object(&sma->sem_perm); + list_del(&un->list_id); + +- /* we are the last process using this ulp, acquiring ulp->lock +- * isn't required. 
Besides that, we are also protected against +- * IPC_RMID as we hold sma->sem_perm lock now +- */ ++ spin_lock(&ulp->lock); + list_del_rcu(&un->list_proc); ++ spin_unlock(&ulp->lock); + + /* perform adjustments registered in un */ + for (i = 0; i < sma->sem_nsems; i++) { diff --git a/queue-4.9/series b/queue-4.9/series index 6c8aed27be6..598e00c8959 100644 --- a/queue-4.9/series +++ b/queue-4.9/series @@ -124,3 +124,8 @@ usb-fix-novation-sourcecontrol-xl-after-suspend.patch usb-hub-don-t-record-a-connect-change-event-during-reset-resume.patch staging-rtl8188eu-fix-potential-security-hole.patch staging-rtl8188eu-fix-potential-overuse-of-kernel-memory.patch +x86-mce-amd-publish-the-bank-pointer-only-after-setup-has-succeeded.patch +x86-mce-amd-fix-kobject-lifetime.patch +tty-serial-atmel-manage-shutdown-in-case-of-rs485-or-iso7816-mode.patch +tty-serial-imx-setup-the-correct-sg-entry-for-tx-dma.patch +revert-ipc-sem-remove-uneeded-sem_undo_list-lock-usage-in-exit_sem.patch diff --git a/queue-4.9/tty-serial-atmel-manage-shutdown-in-case-of-rs485-or-iso7816-mode.patch b/queue-4.9/tty-serial-atmel-manage-shutdown-in-case-of-rs485-or-iso7816-mode.patch new file mode 100644 index 00000000000..d47a1fb78e9 --- /dev/null +++ b/queue-4.9/tty-serial-atmel-manage-shutdown-in-case-of-rs485-or-iso7816-mode.patch @@ -0,0 +1,36 @@ +From 04b5bfe3dc94e64d0590c54045815cb5183fb095 Mon Sep 17 00:00:00 2001 +From: Nicolas Ferre +Date: Mon, 10 Feb 2020 16:20:53 +0100 +Subject: tty/serial: atmel: manage shutdown in case of RS485 or ISO7816 mode + +From: Nicolas Ferre + +commit 04b5bfe3dc94e64d0590c54045815cb5183fb095 upstream. + +In atmel_shutdown() we call atmel_stop_rx() and atmel_stop_tx() functions. +Prevent the rx restart that is implemented in RS485 or ISO7816 modes when +calling atmel_stop_tx() by using the atomic information tasklet_shutdown +that is already in place for this purpose. + +Fixes: 98f2082c3ac4 ("tty/serial: atmel: enforce tasklet init and termination sequences") +Signed-off-by: Nicolas Ferre +Cc: stable +Link: https://lore.kernel.org/r/20200210152053.8289-1-nicolas.ferre@microchip.com +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/tty/serial/atmel_serial.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +--- a/drivers/tty/serial/atmel_serial.c ++++ b/drivers/tty/serial/atmel_serial.c +@@ -501,7 +501,8 @@ static void atmel_stop_tx(struct uart_po + atmel_uart_writel(port, ATMEL_US_IDR, atmel_port->tx_done_mask); + + if (atmel_uart_is_half_duplex(port)) +- atmel_start_rx(port); ++ if (!atomic_read(&atmel_port->tasklet_shutdown)) ++ atmel_start_rx(port); + + } + diff --git a/queue-4.9/tty-serial-imx-setup-the-correct-sg-entry-for-tx-dma.patch b/queue-4.9/tty-serial-imx-setup-the-correct-sg-entry-for-tx-dma.patch new file mode 100644 index 00000000000..61761cd5fb8 --- /dev/null +++ b/queue-4.9/tty-serial-imx-setup-the-correct-sg-entry-for-tx-dma.patch @@ -0,0 +1,110 @@ +From f76707831829530ffdd3888bebc108aecefccaa0 Mon Sep 17 00:00:00 2001 +From: Fugang Duan +Date: Tue, 11 Feb 2020 14:16:01 +0800 +Subject: tty: serial: imx: setup the correct sg entry for tx dma +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Fugang Duan + +commit f76707831829530ffdd3888bebc108aecefccaa0 upstream. + +There has oops as below happen on i.MX8MP EVK platform that has +6G bytes DDR memory. 
+ +when (xmit->tail < xmit->head) && (xmit->head == 0), +it setups one sg entry with sg->length is zero: + sg_set_buf(sgl + 1, xmit->buf, xmit->head); + +if xmit->buf is allocated from >4G address space, and SDMA only +support <4G address space, then dma_map_sg() will call swiotlb_map() +to do bounce buffer copying and mapping. + +But swiotlb_map() don't allow sg entry's length is zero, otherwise +report BUG_ON(). + +So the patch is to correct the tx DMA scatter list. + +Oops: +[ 287.675715] kernel BUG at kernel/dma/swiotlb.c:497! +[ 287.680592] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP +[ 287.686075] Modules linked in: +[ 287.689133] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.4.3-00016-g3fdc4e0-dirty #10 +[ 287.696872] Hardware name: FSL i.MX8MP EVK (DT) +[ 287.701402] pstate: 80000085 (Nzcv daIf -PAN -UAO) +[ 287.706199] pc : swiotlb_tbl_map_single+0x1fc/0x310 +[ 287.711076] lr : swiotlb_map+0x60/0x148 +[ 287.714909] sp : ffff800010003c00 +[ 287.718221] x29: ffff800010003c00 x28: 0000000000000000 +[ 287.723533] x27: 0000000000000040 x26: ffff800011ae0000 +[ 287.728844] x25: ffff800011ae09f8 x24: 0000000000000000 +[ 287.734155] x23: 00000001b7af9000 x22: 0000000000000000 +[ 287.739465] x21: ffff000176409c10 x20: 00000000001f7ffe +[ 287.744776] x19: ffff000176409c10 x18: 000000000000002e +[ 287.750087] x17: 0000000000000000 x16: 0000000000000000 +[ 287.755397] x15: 0000000000000000 x14: 0000000000000000 +[ 287.760707] x13: ffff00017f334000 x12: 0000000000000001 +[ 287.766018] x11: 00000000001fffff x10: 0000000000000000 +[ 287.771328] x9 : 0000000000000003 x8 : 0000000000000000 +[ 287.776638] x7 : 0000000000000000 x6 : 0000000000000000 +[ 287.781949] x5 : 0000000000200000 x4 : 0000000000000000 +[ 287.787259] x3 : 0000000000000001 x2 : 00000001b7af9000 +[ 287.792570] x1 : 00000000fbfff000 x0 : 0000000000000000 +[ 287.797881] Call trace: +[ 287.800328] swiotlb_tbl_map_single+0x1fc/0x310 +[ 287.804859] swiotlb_map+0x60/0x148 +[ 287.808347] dma_direct_map_page+0xf0/0x130 +[ 287.812530] dma_direct_map_sg+0x78/0xe0 +[ 287.816453] imx_uart_dma_tx+0x134/0x2f8 +[ 287.820374] imx_uart_dma_tx_callback+0xd8/0x168 +[ 287.824992] vchan_complete+0x194/0x200 +[ 287.828828] tasklet_action_common.isra.0+0x154/0x1a0 +[ 287.833879] tasklet_action+0x24/0x30 +[ 287.837540] __do_softirq+0x120/0x23c +[ 287.841202] irq_exit+0xb8/0xd8 +[ 287.844343] __handle_domain_irq+0x64/0xb8 +[ 287.848438] gic_handle_irq+0x5c/0x148 +[ 287.852185] el1_irq+0xb8/0x180 +[ 287.855327] cpuidle_enter_state+0x84/0x360 +[ 287.859508] cpuidle_enter+0x34/0x48 +[ 287.863083] call_cpuidle+0x18/0x38 +[ 287.866571] do_idle+0x1e0/0x280 +[ 287.869798] cpu_startup_entry+0x20/0x40 +[ 287.873721] rest_init+0xd4/0xe0 +[ 287.876949] arch_call_rest_init+0xc/0x14 +[ 287.880958] start_kernel+0x420/0x44c +[ 287.884622] Code: 9124c021 9417aff8 a94363f7 17ffffd5 (d4210000) +[ 287.890718] ---[ end trace 5bc44c4ab6b009ce ]--- +[ 287.895334] Kernel panic - not syncing: Fatal exception in interrupt +[ 287.901686] SMP: stopping secondary CPUs +[ 288.905607] SMP: failed to stop secondary CPUs 0-1 +[ 288.910395] Kernel Offset: disabled +[ 288.913882] CPU features: 0x0002,2000200c +[ 288.917888] Memory Limit: none +[ 288.920944] ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]--- + +Reported-by: Eagle Zhou +Tested-by: Eagle Zhou +Signed-off-by: Fugang Duan +Cc: stable +Fixes: 7942f8577f2a ("serial: imx: TX DMA: clean up sg initialization") +Reviewed-by: Uwe Kleine-König +Link: 
https://lore.kernel.org/r/1581401761-6378-1-git-send-email-fugang.duan@nxp.com +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/tty/serial/imx.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/drivers/tty/serial/imx.c ++++ b/drivers/tty/serial/imx.c +@@ -532,7 +532,7 @@ static void imx_dma_tx(struct imx_port * + + sport->tx_bytes = uart_circ_chars_pending(xmit); + +- if (xmit->tail < xmit->head) { ++ if (xmit->tail < xmit->head || xmit->head == 0) { + sport->dma_tx_nents = 1; + sg_init_one(sgl, xmit->buf + xmit->tail, sport->tx_bytes); + } else { diff --git a/queue-4.9/x86-mce-amd-fix-kobject-lifetime.patch b/queue-4.9/x86-mce-amd-fix-kobject-lifetime.patch new file mode 100644 index 00000000000..7dea2371a30 --- /dev/null +++ b/queue-4.9/x86-mce-amd-fix-kobject-lifetime.patch @@ -0,0 +1,87 @@ +From 51dede9c05df2b78acd6dcf6a17d21f0877d2d7b Mon Sep 17 00:00:00 2001 +From: Thomas Gleixner +Date: Thu, 13 Feb 2020 19:01:34 +0100 +Subject: x86/mce/amd: Fix kobject lifetime + +From: Thomas Gleixner + +commit 51dede9c05df2b78acd6dcf6a17d21f0877d2d7b upstream. + +Accessing the MCA thresholding controls in sysfs concurrently with CPU +hotplug can lead to a couple of KASAN-reported issues: + + BUG: KASAN: use-after-free in sysfs_file_ops+0x155/0x180 + Read of size 8 at addr ffff888367578940 by task grep/4019 + +and + + BUG: KASAN: use-after-free in show_error_count+0x15c/0x180 + Read of size 2 at addr ffff888368a05514 by task grep/4454 + +for example. Both result from the fact that the threshold block +creation/teardown code frees the descriptor memory itself instead of +defining proper ->release function and leaving it to the driver core to +take care of that, after all sysfs accesses have completed. + +Do that and get rid of the custom freeing code, fixing the above UAFs in +the process. + + [ bp: write commit message. 
] + +Fixes: 95268664390b ("[PATCH] x86_64: mce_amd support for family 0x10 processors") +Signed-off-by: Thomas Gleixner +Signed-off-by: Borislav Petkov +Cc: +Link: https://lkml.kernel.org/r/20200214082801.13836-1-bp@alien8.de +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/kernel/cpu/mcheck/mce_amd.c | 17 +++++++++++------ + 1 file changed, 11 insertions(+), 6 deletions(-) + +--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c ++++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c +@@ -846,9 +846,12 @@ static const struct sysfs_ops threshold_ + .store = store, + }; + ++static void threshold_block_release(struct kobject *kobj); ++ + static struct kobj_type threshold_ktype = { + .sysfs_ops = &threshold_ops, + .default_attrs = default_attrs, ++ .release = threshold_block_release, + }; + + static const char *get_name(unsigned int bank, struct threshold_block *b) +@@ -1073,8 +1076,12 @@ static int threshold_create_device(unsig + return err; + } + +-static void deallocate_threshold_block(unsigned int cpu, +- unsigned int bank) ++static void threshold_block_release(struct kobject *kobj) ++{ ++ kfree(to_block(kobj)); ++} ++ ++static void deallocate_threshold_block(unsigned int cpu, unsigned int bank) + { + struct threshold_block *pos = NULL; + struct threshold_block *tmp = NULL; +@@ -1084,13 +1091,11 @@ static void deallocate_threshold_block(u + return; + + list_for_each_entry_safe(pos, tmp, &head->blocks->miscj, miscj) { +- kobject_put(&pos->kobj); + list_del(&pos->miscj); +- kfree(pos); ++ kobject_put(&pos->kobj); + } + +- kfree(per_cpu(threshold_banks, cpu)[bank]->blocks); +- per_cpu(threshold_banks, cpu)[bank]->blocks = NULL; ++ kobject_put(&head->blocks->kobj); + } + + static void __threshold_remove_blocks(struct threshold_bank *b) diff --git a/queue-4.9/x86-mce-amd-publish-the-bank-pointer-only-after-setup-has-succeeded.patch b/queue-4.9/x86-mce-amd-publish-the-bank-pointer-only-after-setup-has-succeeded.patch new file mode 100644 index 00000000000..5107017eb75 --- /dev/null +++ b/queue-4.9/x86-mce-amd-publish-the-bank-pointer-only-after-setup-has-succeeded.patch @@ -0,0 +1,104 @@ +From 6e5cf31fbe651bed7ba1df768f2e123531132417 Mon Sep 17 00:00:00 2001 +From: Borislav Petkov +Date: Tue, 4 Feb 2020 13:28:41 +0100 +Subject: x86/mce/amd: Publish the bank pointer only after setup has succeeded + +From: Borislav Petkov + +commit 6e5cf31fbe651bed7ba1df768f2e123531132417 upstream. + +threshold_create_bank() creates a bank descriptor per MCA error +thresholding counter which can be controlled over sysfs. It publishes +the pointer to that bank in a per-CPU variable and then goes on to +create additional thresholding blocks if the bank has such. + +However, that creation of additional blocks in +allocate_threshold_blocks() can fail, leading to a use-after-free +through the per-CPU pointer. + +Therefore, publish that pointer only after all blocks have been setup +successfully. 
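+
+[ Editorial sketch, not part of the upstream commit: a minimal, self-contained
+  illustration of the "initialize fully, publish last" ordering this patch
+  enforces, assuming nothing beyond the description above. All names here
+  (example_bank, setup_blocks, published_bank, example_create_bank) are
+  invented for the example; published_bank merely stands in for the per-CPU
+  threshold_banks slot and setup_blocks for allocate_threshold_blocks(). ]
+
+	/* Stand-in types so the sketch compiles on its own. */
+	struct example_bank { int nr_blocks; };
+
+	static struct example_bank *published_bank;	/* what concurrent readers see */
+
+	static int setup_blocks(struct example_bank *b)
+	{
+		b->nr_blocks = 4;	/* may fail in the real code */
+		return 0;
+	}
+
+	static int example_create_bank(struct example_bank *b)
+	{
+		int err = setup_blocks(b);
+
+		if (err)
+			return err;	/* never published, so the caller may simply free b */
+
+		/* Publish only once the object can no longer fail half-built. */
+		published_bank = b;
+		return 0;
+	}
+
+[ With the publication moved after setup, a reader that finds the pointer
+  non-NULL is guaranteed to see a completely constructed bank. ]
+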
+ +Fixes: 019f34fccfd5 ("x86, MCE, AMD: Move shared bank to node descriptor") +Reported-by: Saar Amar +Reported-by: Dan Carpenter +Signed-off-by: Borislav Petkov +Cc: +Link: http://lkml.kernel.org/r/20200128140846.phctkvx5btiexvbx@kili.mountain +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/kernel/cpu/mcheck/mce_amd.c | 33 ++++++++++++++++----------------- + 1 file changed, 16 insertions(+), 17 deletions(-) + +--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c ++++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c +@@ -879,8 +879,9 @@ static const char *get_name(unsigned int + return buf_mcatype; + } + +-static int allocate_threshold_blocks(unsigned int cpu, unsigned int bank, +- unsigned int block, u32 address) ++static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb, ++ unsigned int bank, unsigned int block, ++ u32 address) + { + struct threshold_block *b = NULL; + u32 low, high; +@@ -924,16 +925,12 @@ static int allocate_threshold_blocks(uns + + INIT_LIST_HEAD(&b->miscj); + +- if (per_cpu(threshold_banks, cpu)[bank]->blocks) { +- list_add(&b->miscj, +- &per_cpu(threshold_banks, cpu)[bank]->blocks->miscj); +- } else { +- per_cpu(threshold_banks, cpu)[bank]->blocks = b; +- } ++ if (tb->blocks) ++ list_add(&b->miscj, &tb->blocks->miscj); ++ else ++ tb->blocks = b; + +- err = kobject_init_and_add(&b->kobj, &threshold_ktype, +- per_cpu(threshold_banks, cpu)[bank]->kobj, +- get_name(bank, b)); ++ err = kobject_init_and_add(&b->kobj, &threshold_ktype, tb->kobj, get_name(bank, b)); + if (err) + goto out_free; + recurse: +@@ -941,7 +938,7 @@ recurse: + if (!address) + return 0; + +- err = allocate_threshold_blocks(cpu, bank, block, address); ++ err = allocate_threshold_blocks(cpu, tb, bank, block, address); + if (err) + goto out_free; + +@@ -1026,8 +1023,6 @@ static int threshold_create_bank(unsigne + goto out_free; + } + +- per_cpu(threshold_banks, cpu)[bank] = b; +- + if (is_shared_bank(bank)) { + atomic_set(&b->cpus, 1); + +@@ -1038,9 +1033,13 @@ static int threshold_create_bank(unsigne + } + } + +- err = allocate_threshold_blocks(cpu, bank, 0, msr_ops.misc(bank)); +- if (!err) +- goto out; ++ err = allocate_threshold_blocks(cpu, b, bank, 0, msr_ops.misc(bank)); ++ if (err) ++ goto out_free; ++ ++ per_cpu(threshold_banks, cpu)[bank] = b; ++ ++ return 0; + + out_free: + kfree(b);
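
[ Editorial note, not part of any patch above: a minimal sketch of the
  reference-counted release pattern that the "x86/mce/amd: Fix kobject
  lifetime" backport earlier in this series relies on -- memory is freed
  from a release callback once the last reference is dropped, never
  directly while sysfs may still hold a reference. All names below
  (example_block, example_block_put, example_block_release) are invented
  for illustration; the real kernel uses an atomic kref inside the kobject,
  a plain int keeps the sketch short. ]

	#include <stdlib.h>

	struct example_block {
		int refcount;			/* stands in for the kobject's kref */
	};

	/* Runs only after the final reference is gone (cf. kobj_type.release). */
	static void example_block_release(struct example_block *b)
	{
		free(b);
	}

	static void example_block_put(struct example_block *b)
	{
		if (--b->refcount == 0)
			example_block_release(b);
	}

[ Dropping the reference with example_block_put() instead of calling free()
  directly is what lets any remaining readers finish safely before the
  memory actually goes away. ]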