From: Greg Kroah-Hartman Date: Mon, 11 Feb 2019 11:39:39 +0000 (+0100) Subject: 4.14-stable patches X-Git-Tag: v4.9.156~23 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=73ca2b543b4fea32c10aa34cf9a3aecd6a70b2c0;p=thirdparty%2Fkernel%2Fstable-queue.git 4.14-stable patches added patches: cpu-hotplug-fix-smt-disabled-by-bios-detection-for-kvm.patch dmaengine-bcm2835-fix-abort-of-transactions.patch dmaengine-bcm2835-fix-interrupt-race-on-rt.patch dmaengine-imx-dma-fix-wrong-callback-invoke.patch futex-handle-early-deadlock-return-correctly.patch irqchip-gic-v3-its-plug-allocation-race-for-devices-sharing-a-devid.patch kvm-fix-kvm_ioctl_create_device-reference-counting-cve-2019-6974.patch kvm-nvmx-unconditionally-cancel-preemption-timer-in-free_nested-cve-2019-7221.patch kvm-x86-work-around-leak-of-uninitialized-stack-contents-cve-2019-7222.patch perf-core-don-t-warn-for-impossible-ring-buffer-sizes.patch perf-tests-evsel-tp-sched-fix-bitwise-operator.patch perf-x86-intel-uncore-add-node-id-mask.patch scsi-aic94xx-fix-module-loading.patch scsi-cxlflash-prevent-deadlock-when-adapter-probe-fails.patch serial-8250_pci-make-pci-class-test-non-fatal.patch serial-fix-race-between-flush_to_ldisc-and-tty_open.patch staging-speakup-fix-tty-operation-null-derefs.patch usb-dwc3-gadget-handle-0-xfer-length-for-out-ep.patch usb-gadget-musb-fix-short-isoc-packets-with-inventra-dma.patch usb-gadget-udc-net2272-fix-bitwise-and-boolean-operations.patch usb-phy-am335x-fix-race-condition-in-_probe.patch x86-mce-initialize-mce.bank-in-the-case-of-a-fatal-error-in-mce_no_way_out.patch --- diff --git a/queue-4.14/cpu-hotplug-fix-smt-disabled-by-bios-detection-for-kvm.patch b/queue-4.14/cpu-hotplug-fix-smt-disabled-by-bios-detection-for-kvm.patch new file mode 100644 index 00000000000..096e440095e --- /dev/null +++ b/queue-4.14/cpu-hotplug-fix-smt-disabled-by-bios-detection-for-kvm.patch @@ -0,0 +1,206 @@ +From b284909abad48b07d3071a9fc9b5692b3e64914b Mon Sep 17 00:00:00 2001 +From: 
Josh Poimboeuf +Date: Wed, 30 Jan 2019 07:13:58 -0600 +Subject: cpu/hotplug: Fix "SMT disabled by BIOS" detection for KVM + +From: Josh Poimboeuf + +commit b284909abad48b07d3071a9fc9b5692b3e64914b upstream. + +With the following commit: + + 73d5e2b47264 ("cpu/hotplug: detect SMT disabled by BIOS") + +... the hotplug code attempted to detect when SMT was disabled by BIOS, +in which case it reported SMT as permanently disabled. However, that +code broke a virt hotplug scenario, where the guest is booted with only +primary CPU threads, and a sibling is brought online later. + +The problem is that there doesn't seem to be a way to reliably +distinguish between the HW "SMT disabled by BIOS" case and the virt +"sibling not yet brought online" case. So the above-mentioned commit +was a bit misguided, as it permanently disabled SMT for both cases, +preventing future virt sibling hotplugs. + +Going back and reviewing the original problems which were attempted to +be solved by that commit, when SMT was disabled in BIOS: + + 1) /sys/devices/system/cpu/smt/control showed "on" instead of + "notsupported"; and + + 2) vmx_vm_init() was incorrectly showing the L1TF_MSG_SMT warning. + +I'd propose that we instead consider #1 above to not actually be a +problem. Because, at least in the virt case, it's possible that SMT +wasn't disabled by BIOS and a sibling thread could be brought online +later. So it makes sense to just always default the smt control to "on" +to allow for that possibility (assuming cpuid indicates that the CPU +supports SMT). + +The real problem is #2, which has a simple fix: change vmx_vm_init() to +query the actual current SMT state -- i.e., whether any siblings are +currently online -- instead of looking at the SMT "control" sysfs value. 
+ +So fix it by: + + a) reverting the original "fix" and its followup fix: + + 73d5e2b47264 ("cpu/hotplug: detect SMT disabled by BIOS") + bc2d8d262cba ("cpu/hotplug: Fix SMT supported evaluation") + + and + + b) changing vmx_vm_init() to query the actual current SMT state -- + instead of the sysfs control value -- to determine whether the L1TF + warning is needed. This also requires the 'sched_smt_present' + variable to be exported, instead of 'cpu_smt_control'. + +Fixes: 73d5e2b47264 ("cpu/hotplug: detect SMT disabled by BIOS") +Reported-by: Igor Mammedov +Signed-off-by: Josh Poimboeuf +Signed-off-by: Thomas Gleixner +Cc: Joe Mario +Cc: Jiri Kosina +Cc: Peter Zijlstra +Cc: kvm@vger.kernel.org +Cc: stable@vger.kernel.org +Link: https://lkml.kernel.org/r/e3a85d585da28cc333ecbc1e78ee9216e6da9396.1548794349.git.jpoimboe@redhat.com +Signed-off-by: Greg Kroah-Hartman + + +--- + arch/x86/kernel/cpu/bugs.c | 2 +- + arch/x86/kvm/vmx.c | 3 ++- + include/linux/cpu.h | 2 -- + kernel/cpu.c | 33 ++++----------------------------- + kernel/sched/fair.c | 1 + + kernel/smp.c | 2 -- + 6 files changed, 8 insertions(+), 35 deletions(-) + +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -68,7 +68,7 @@ void __init check_bugs(void) + * identify_boot_cpu() initialized SMT support information, let the + * core code know. + */ +- cpu_smt_check_topology_early(); ++ cpu_smt_check_topology(); + + if (!IS_ENABLED(CONFIG_SMP)) { + pr_info("CPU: "); +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -27,6 +27,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -10120,7 +10121,7 @@ static int vmx_vm_init(struct kvm *kvm) + * Warn upon starting the first VM in a potentially + * insecure environment.
+ */ +- if (cpu_smt_control == CPU_SMT_ENABLED) ++ if (sched_smt_active()) + pr_warn_once(L1TF_MSG_SMT); + if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER) + pr_warn_once(L1TF_MSG_L1D); +--- a/include/linux/cpu.h ++++ b/include/linux/cpu.h +@@ -188,12 +188,10 @@ enum cpuhp_smt_control { + #if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT) + extern enum cpuhp_smt_control cpu_smt_control; + extern void cpu_smt_disable(bool force); +-extern void cpu_smt_check_topology_early(void); + extern void cpu_smt_check_topology(void); + #else + # define cpu_smt_control (CPU_SMT_ENABLED) + static inline void cpu_smt_disable(bool force) { } +-static inline void cpu_smt_check_topology_early(void) { } + static inline void cpu_smt_check_topology(void) { } + #endif + +--- a/kernel/cpu.c ++++ b/kernel/cpu.c +@@ -356,9 +356,6 @@ void __weak arch_smt_update(void) { } + + #ifdef CONFIG_HOTPLUG_SMT + enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED; +-EXPORT_SYMBOL_GPL(cpu_smt_control); +- +-static bool cpu_smt_available __read_mostly; + + void __init cpu_smt_disable(bool force) + { +@@ -376,25 +373,11 @@ void __init cpu_smt_disable(bool force) + + /* + * The decision whether SMT is supported can only be done after the full +- * CPU identification. Called from architecture code before non boot CPUs +- * are brought up. +- */ +-void __init cpu_smt_check_topology_early(void) +-{ +- if (!topology_smt_supported()) +- cpu_smt_control = CPU_SMT_NOT_SUPPORTED; +-} +- +-/* +- * If SMT was disabled by BIOS, detect it here, after the CPUs have been +- * brought online. This ensures the smt/l1tf sysfs entries are consistent +- * with reality. cpu_smt_available is set to true during the bringup of non +- * boot CPUs when a SMT sibling is detected. Note, this may overwrite +- * cpu_smt_control's previous setting. ++ * CPU identification. Called from architecture code. 
+ */ + void __init cpu_smt_check_topology(void) + { +- if (!cpu_smt_available) ++ if (!topology_smt_supported()) + cpu_smt_control = CPU_SMT_NOT_SUPPORTED; + } + +@@ -407,18 +390,10 @@ early_param("nosmt", smt_cmdline_disable + + static inline bool cpu_smt_allowed(unsigned int cpu) + { +- if (topology_is_primary_thread(cpu)) ++ if (cpu_smt_control == CPU_SMT_ENABLED) + return true; + +- /* +- * If the CPU is not a 'primary' thread and the booted_once bit is +- * set then the processor has SMT support. Store this information +- * for the late check of SMT support in cpu_smt_check_topology(). +- */ +- if (per_cpu(cpuhp_state, cpu).booted_once) +- cpu_smt_available = true; +- +- if (cpu_smt_control == CPU_SMT_ENABLED) ++ if (topology_is_primary_thread(cpu)) + return true; + + /* +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -5651,6 +5651,7 @@ find_idlest_cpu(struct sched_group *grou + + #ifdef CONFIG_SCHED_SMT + DEFINE_STATIC_KEY_FALSE(sched_smt_present); ++EXPORT_SYMBOL_GPL(sched_smt_present); + + static inline void set_idle_cores(int cpu, int val) + { +--- a/kernel/smp.c ++++ b/kernel/smp.c +@@ -584,8 +584,6 @@ void __init smp_init(void) + num_nodes, (num_nodes > 1 ? "s" : ""), + num_cpus, (num_cpus > 1 ? "s" : "")); + +- /* Final decision about SMT support */ +- cpu_smt_check_topology(); + /* Any cleanup work */ + smp_cpus_done(setup_max_cpus); + } diff --git a/queue-4.14/dmaengine-bcm2835-fix-abort-of-transactions.patch b/queue-4.14/dmaengine-bcm2835-fix-abort-of-transactions.patch new file mode 100644 index 00000000000..130511bec62 --- /dev/null +++ b/queue-4.14/dmaengine-bcm2835-fix-abort-of-transactions.patch @@ -0,0 +1,162 @@ +From 9e528c799d17a4ac37d788c81440b50377dd592d Mon Sep 17 00:00:00 2001 +From: Lukas Wunner +Date: Wed, 23 Jan 2019 09:26:00 +0100 +Subject: dmaengine: bcm2835: Fix abort of transactions + +From: Lukas Wunner + +commit 9e528c799d17a4ac37d788c81440b50377dd592d upstream. 
+ +There are multiple issues with bcm2835_dma_abort() (which is called on +termination of a transaction): + +* The algorithm to abort the transaction first pauses the channel by + clearing the ACTIVE flag in the CS register, then waits for the PAUSED + flag to clear. Page 49 of the spec documents the latter as follows: + + "Indicates if the DMA is currently paused and not transferring data. + This will occur if the active bit has been cleared [...]" + https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf + + So the function is entering an infinite loop because it is waiting for + PAUSED to clear which is always set due to the function having cleared + the ACTIVE flag. The only thing that's saving it from itself is the + upper bound of 10000 loop iterations. + + The code comment says that the intention is to "wait for any current + AXI transfer to complete", so the author probably wanted to check the + WAITING_FOR_OUTSTANDING_WRITES flag instead. Amend the function + accordingly. + +* The CS register is only read at the beginning of the function. It + needs to be read again after pausing the channel and before checking + for outstanding writes, otherwise writes which were issued between + the register read at the beginning of the function and pausing the + channel may not be waited for. + +* The function seeks to abort the transfer by writing 0 to the NEXTCONBK + register and setting the ABORT and ACTIVE flags. Thereby, the 0 in + NEXTCONBK is sought to be loaded into the CONBLK_AD register. However + experimentation has shown this approach to not work: The CONBLK_AD + register remains the same as before and the CS register contains + 0x00000030 (PAUSED | DREQ_STOPS_DMA). In other words, the control + block is not aborted but merely paused and it will be resumed once the + next DMA transaction is started. That is absolutely not the desired + behavior. + + A simpler approach is to set the channel's RESET flag instead. 
This + reliably zeroes the NEXTCONBK as well as the CS register. It requires + less code and only a single MMIO write. This is also what popular + user space DMA drivers do, e.g.: + https://github.com/metachris/RPIO/blob/master/source/c_pwm/pwm.c + + Note that the spec is contradictory whether the NEXTCONBK register + is writeable at all. On the one hand, page 41 claims: + + "The value loaded into the NEXTCONBK register can be overwritten so + that the linked list of Control Block data structures can be + dynamically altered. However it is only safe to do this when the DMA + is paused." + + On the other hand, page 40 specifies: + + "Only three registers in each channel's register set are directly + writeable (CS, CONBLK_AD and DEBUG). The other registers (TI, + SOURCE_AD, DEST_AD, TXFR_LEN, STRIDE & NEXTCONBK), are automatically + loaded from a Control Block data structure held in external memory." + +Fixes: 96286b576690 ("dmaengine: Add support for BCM2835") +Signed-off-by: Lukas Wunner +Cc: stable@vger.kernel.org # v3.14+ +Cc: Frank Pavlic +Cc: Martin Sperl +Cc: Florian Meier +Cc: Clive Messer +Cc: Matthias Reichl +Tested-by: Stefan Wahren +Acked-by: Florian Kauer +Signed-off-by: Vinod Koul +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/dma/bcm2835-dma.c | 41 +++++++++-------------------------------- + 1 file changed, 9 insertions(+), 32 deletions(-) + +--- a/drivers/dma/bcm2835-dma.c ++++ b/drivers/dma/bcm2835-dma.c +@@ -415,13 +415,11 @@ static void bcm2835_dma_fill_cb_chain_wi + } + } + +-static int bcm2835_dma_abort(void __iomem *chan_base) ++static int bcm2835_dma_abort(struct bcm2835_chan *c) + { +- unsigned long cs; ++ void __iomem *chan_base = c->chan_base; + long int timeout = 10000; + +- cs = readl(chan_base + BCM2835_DMA_CS); +- + /* + * A zero control block address means the channel is idle. + * (The ACTIVE flag in the CS register is not a reliable indicator.) 
+@@ -433,25 +431,16 @@ static int bcm2835_dma_abort(void __iome + writel(0, chan_base + BCM2835_DMA_CS); + + /* Wait for any current AXI transfer to complete */ +- while ((cs & BCM2835_DMA_ISPAUSED) && --timeout) { ++ while ((readl(chan_base + BCM2835_DMA_CS) & ++ BCM2835_DMA_WAITING_FOR_WRITES) && --timeout) + cpu_relax(); +- cs = readl(chan_base + BCM2835_DMA_CS); +- } + +- /* We'll un-pause when we set of our next DMA */ ++ /* Peripheral might be stuck and fail to signal AXI write responses */ + if (!timeout) +- return -ETIMEDOUT; +- +- if (!(cs & BCM2835_DMA_ACTIVE)) +- return 0; +- +- /* Terminate the control block chain */ +- writel(0, chan_base + BCM2835_DMA_NEXTCB); +- +- /* Abort the whole DMA */ +- writel(BCM2835_DMA_ABORT | BCM2835_DMA_ACTIVE, +- chan_base + BCM2835_DMA_CS); ++ dev_err(c->vc.chan.device->dev, ++ "failed to complete outstanding writes\n"); + ++ writel(BCM2835_DMA_RESET, chan_base + BCM2835_DMA_CS); + return 0; + } + +@@ -804,7 +793,6 @@ static int bcm2835_dma_terminate_all(str + struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); + struct bcm2835_dmadev *d = to_bcm2835_dma_dev(c->vc.chan.device); + unsigned long flags; +- int timeout = 10000; + LIST_HEAD(head); + + spin_lock_irqsave(&c->vc.lock, flags); +@@ -818,18 +806,7 @@ static int bcm2835_dma_terminate_all(str + if (c->desc) { + bcm2835_dma_desc_free(&c->desc->vd); + c->desc = NULL; +- bcm2835_dma_abort(c->chan_base); +- +- /* Wait for stopping */ +- while (--timeout) { +- if (!readl(c->chan_base + BCM2835_DMA_ADDR)) +- break; +- +- cpu_relax(); +- } +- +- if (!timeout) +- dev_err(d->ddev.dev, "DMA transfer could not be terminated\n"); ++ bcm2835_dma_abort(c); + } + + vchan_get_all_descriptors(&c->vc, &head); diff --git a/queue-4.14/dmaengine-bcm2835-fix-interrupt-race-on-rt.patch b/queue-4.14/dmaengine-bcm2835-fix-interrupt-race-on-rt.patch new file mode 100644 index 00000000000..3954696bbc4 --- /dev/null +++ b/queue-4.14/dmaengine-bcm2835-fix-interrupt-race-on-rt.patch @@ -0,0 
+1,155 @@ +From f7da7782aba92593f7b82f03d2409a1c5f4db91b Mon Sep 17 00:00:00 2001 +From: Lukas Wunner +Date: Wed, 23 Jan 2019 09:26:00 +0100 +Subject: dmaengine: bcm2835: Fix interrupt race on RT + +From: Lukas Wunner + +commit f7da7782aba92593f7b82f03d2409a1c5f4db91b upstream. + +If IRQ handlers are threaded (either because CONFIG_PREEMPT_RT_BASE is +enabled or "threadirqs" was passed on the command line) and if system +load is sufficiently high that wakeup latency of IRQ threads degrades, +SPI DMA transactions on the BCM2835 occasionally break like this: + +ks8851 spi0.0: SPI transfer timed out +bcm2835-dma 3f007000.dma: DMA transfer could not be terminated +ks8851 spi0.0 eth2: ks8851_rdfifo: spi_sync() failed + +The root cause is an assumption made by the DMA driver which is +documented in a code comment in bcm2835_dma_terminate_all(): + +/* + * Stop DMA activity: we assume the callback will not be called + * after bcm_dma_abort() returns (even if it does, it will see + * c->desc is NULL and exit.) + */ + +That assumption falls apart if the IRQ handler bcm2835_dma_callback() is +threaded: A client may terminate a descriptor and issue a new one +before the IRQ handler had a chance to run. In fact the IRQ handler may +miss an *arbitrary* number of descriptors. The result is the following +race condition: + +1. A descriptor finishes, its interrupt is deferred to the IRQ thread. +2. A client calls dma_terminate_async() which sets channel->desc = NULL. +3. The client issues a new descriptor. Because channel->desc is NULL, + bcm2835_dma_issue_pending() immediately starts the descriptor. +4. Finally the IRQ thread runs and writes BCM2835_DMA_INT to the CS + register to acknowledge the interrupt. This clears the ACTIVE flag, + so the newly issued descriptor is paused in the middle of the + transaction. Because channel->desc is not NULL, the IRQ thread + finalizes the descriptor and tries to start the next one. 
+ +I see two possible solutions: The first is to call synchronize_irq() +in bcm2835_dma_issue_pending() to wait until the IRQ thread has +finished before issuing a new descriptor. The downside of this approach +is unnecessary latency if clients desire rapidly terminating and +re-issuing descriptors and don't have any use for an IRQ callback. +(The SPI TX DMA channel is a case in point.) + +A better alternative is to make the IRQ thread recognize that it has +missed descriptors and avoid finalizing the newly issued descriptor. +So first of all, set the ACTIVE flag when acknowledging the interrupt. +This keeps a newly issued descriptor running. + +If the descriptor was finished, the channel remains idle despite the +ACTIVE flag being set. However the ACTIVE flag can then no longer be +used to check whether the channel is idle, so instead check whether +the register containing the current control block address is zero +and finalize the current descriptor only if so. + +That way, there is no impact on latency and throughput if the client +doesn't care for the interrupt: Only minimal additional overhead is +introduced for non-cyclic descriptors as one further MMIO read is +necessary per interrupt to check for idleness of the channel. Cyclic +descriptors are sped up slightly by removing one MMIO write per +interrupt. 
+ +Fixes: 96286b576690 ("dmaengine: Add support for BCM2835") +Signed-off-by: Lukas Wunner +Cc: stable@vger.kernel.org # v3.14+ +Cc: Frank Pavlic +Cc: Martin Sperl +Cc: Florian Meier +Cc: Clive Messer +Cc: Matthias Reichl +Tested-by: Stefan Wahren +Acked-by: Florian Kauer +Signed-off-by: Vinod Koul +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/dma/bcm2835-dma.c | 33 ++++++++++++++++++--------------- + 1 file changed, 18 insertions(+), 15 deletions(-) + +--- a/drivers/dma/bcm2835-dma.c ++++ b/drivers/dma/bcm2835-dma.c +@@ -421,7 +421,12 @@ static int bcm2835_dma_abort(void __iome + long int timeout = 10000; + + cs = readl(chan_base + BCM2835_DMA_CS); +- if (!(cs & BCM2835_DMA_ACTIVE)) ++ ++ /* ++ * A zero control block address means the channel is idle. ++ * (The ACTIVE flag in the CS register is not a reliable indicator.) ++ */ ++ if (!readl(chan_base + BCM2835_DMA_ADDR)) + return 0; + + /* Write 0 to the active bit - Pause the DMA */ +@@ -485,8 +490,15 @@ static irqreturn_t bcm2835_dma_callback( + + spin_lock_irqsave(&c->vc.lock, flags); + +- /* Acknowledge interrupt */ +- writel(BCM2835_DMA_INT, c->chan_base + BCM2835_DMA_CS); ++ /* ++ * Clear the INT flag to receive further interrupts. Keep the channel ++ * active in case the descriptor is cyclic or in case the client has ++ * already terminated the descriptor and issued a new one. (May happen ++ * if this IRQ handler is threaded.) If the channel is finished, it ++ * will remain idle despite the ACTIVE flag being set. 
++ */ ++ writel(BCM2835_DMA_INT | BCM2835_DMA_ACTIVE, ++ c->chan_base + BCM2835_DMA_CS); + + d = c->desc; + +@@ -494,11 +506,7 @@ static irqreturn_t bcm2835_dma_callback( + if (d->cyclic) { + /* call the cyclic callback */ + vchan_cyclic_callback(&d->vd); +- +- /* Keep the DMA engine running */ +- writel(BCM2835_DMA_ACTIVE, +- c->chan_base + BCM2835_DMA_CS); +- } else { ++ } else if (!readl(c->chan_base + BCM2835_DMA_ADDR)) { + vchan_cookie_complete(&c->desc->vd); + bcm2835_dma_start_desc(c); + } +@@ -806,11 +814,7 @@ static int bcm2835_dma_terminate_all(str + list_del_init(&c->node); + spin_unlock(&d->lock); + +- /* +- * Stop DMA activity: we assume the callback will not be called +- * after bcm_dma_abort() returns (even if it does, it will see +- * c->desc is NULL and exit.) +- */ ++ /* stop DMA activity */ + if (c->desc) { + bcm2835_dma_desc_free(&c->desc->vd); + c->desc = NULL; +@@ -818,8 +822,7 @@ static int bcm2835_dma_terminate_all(str + + /* Wait for stopping */ + while (--timeout) { +- if (!(readl(c->chan_base + BCM2835_DMA_CS) & +- BCM2835_DMA_ACTIVE)) ++ if (!readl(c->chan_base + BCM2835_DMA_ADDR)) + break; + + cpu_relax(); diff --git a/queue-4.14/dmaengine-imx-dma-fix-wrong-callback-invoke.patch b/queue-4.14/dmaengine-imx-dma-fix-wrong-callback-invoke.patch new file mode 100644 index 00000000000..3b2a6d740b9 --- /dev/null +++ b/queue-4.14/dmaengine-imx-dma-fix-wrong-callback-invoke.patch @@ -0,0 +1,64 @@ +From 341198eda723c8c1cddbb006a89ad9e362502ea2 Mon Sep 17 00:00:00 2001 +From: Leonid Iziumtsev +Date: Tue, 15 Jan 2019 17:15:23 +0000 +Subject: dmaengine: imx-dma: fix wrong callback invoke + +From: Leonid Iziumtsev + +commit 341198eda723c8c1cddbb006a89ad9e362502ea2 upstream. + +Once the "ld_queue" list is not empty, next descriptor will migrate +into "ld_active" list. The "desc" variable will be overwritten +during that transition. And later the dmaengine_desc_get_callback_invoke() +will use it as an argument. As result we invoke wrong callback. 
That behaviour was in place since: +commit fcaaba6c7136 ("dmaengine: imx-dma: fix callback path in tasklet"). +But after commit 4cd13c21b207 ("softirq: Let ksoftirqd do its job") +things got worse, since possible delay between tasklet_schedule() +from DMA irq handler and actual tasklet function execution got bigger. +And that gave more time for a new DMA request to be submitted and +to be put into "ld_queue" list. + +It has been noticed that this DMA issue is causing problems for the "mxc-mmc" +driver. While stressing the system with heavy network traffic and +writing/reading to/from sd card simultaneously the timeout may happen: + +10013000.sdhci: mxcmci_watchdog: read time out (status = 0x30004900) + +That often led to file system corruption. + +Signed-off-by: Leonid Iziumtsev +Signed-off-by: Vinod Koul +Cc: stable@vger.kernel.org +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/dma/imx-dma.c | 8 ++++---- + 1 file changed, 4 insertions(+), 4 deletions(-) + +--- a/drivers/dma/imx-dma.c ++++ b/drivers/dma/imx-dma.c +@@ -623,7 +623,7 @@ static void imxdma_tasklet(unsigned long + { + struct imxdma_channel *imxdmac = (void *)data; + struct imxdma_engine *imxdma = imxdmac->imxdma; +- struct imxdma_desc *desc; ++ struct imxdma_desc *desc, *next_desc; + unsigned long flags; + + spin_lock_irqsave(&imxdma->lock, flags); +@@ -653,10 +653,10 @@ static void imxdma_tasklet(unsigned long + list_move_tail(imxdmac->ld_active.next, &imxdmac->ld_free); + + if (!list_empty(&imxdmac->ld_queue)) { +- desc = list_first_entry(&imxdmac->ld_queue, struct imxdma_desc, +- node); ++ next_desc = list_first_entry(&imxdmac->ld_queue, ++ struct imxdma_desc, node); + list_move_tail(imxdmac->ld_queue.next, &imxdmac->ld_active); +- if (imxdma_xfer_desc(desc) < 0) ++ if (imxdma_xfer_desc(next_desc) < 0) + dev_warn(imxdma->dev, "%s: channel: %d couldn't xfer desc\n", + __func__, imxdmac->channel); + } diff --git a/queue-4.14/futex-handle-early-deadlock-return-correctly.patch
b/queue-4.14/futex-handle-early-deadlock-return-correctly.patch new file mode 100644 index 00000000000..43d53bec273 --- /dev/null +++ b/queue-4.14/futex-handle-early-deadlock-return-correctly.patch @@ -0,0 +1,226 @@ +From 1a1fb985f2e2b85ec0d3dc2e519ee48389ec2434 Mon Sep 17 00:00:00 2001 +From: Thomas Gleixner +Date: Tue, 29 Jan 2019 23:15:12 +0100 +Subject: futex: Handle early deadlock return correctly + +From: Thomas Gleixner + +commit 1a1fb985f2e2b85ec0d3dc2e519ee48389ec2434 upstream. + +commit 56222b212e8e ("futex: Drop hb->lock before enqueueing on the +rtmutex") changed the locking rules in the futex code so that the hash +bucket lock is no longer held while the waiter is enqueued into the +rtmutex wait list. This made the lock and the unlock path symmetric, but +unfortunately the possible early exit from __rt_mutex_proxy_start() due to +a detected deadlock was not updated accordingly. That allows a concurrent +unlocker to observe inconsistent state which triggers the warning in the +unlock path. + +futex_lock_pi() futex_unlock_pi() + lock(hb->lock) + queue(hb_waiter) lock(hb->lock) + lock(rtmutex->wait_lock) + unlock(hb->lock) + // acquired hb->lock + hb_waiter = futex_top_waiter() + lock(rtmutex->wait_lock) + __rt_mutex_proxy_start() + ---> fail + remove(rtmutex_waiter); + ---> returns -EDEADLOCK + unlock(rtmutex->wait_lock) + // acquired wait_lock + wake_futex_pi() + rt_mutex_next_owner() + --> returns NULL + --> WARN + + lock(hb->lock) + unqueue(hb_waiter) + +The problem is caused by the remove(rtmutex_waiter) in the failure case of +__rt_mutex_proxy_start() as this lets the unlocker observe a waiter in the +hash bucket but no waiter on the rtmutex, i.e. inconsistent state. + +The original commit handles this correctly for the other early return cases +(timeout, signal) by delaying the removal of the rtmutex waiter until the +returning task reacquired the hash bucket lock.
+ +Treat the failure case of __rt_mutex_proxy_start() in the same way and let +the existing cleanup code handle the eventual handover of the rtmutex +gracefully. The regular rt_mutex_proxy_start() gains the rtmutex waiter +removal for the failure case, so that the other callsites are still +operating correctly. + +Add proper comments to the code so all these details are fully documented. + +Thanks to Peter for helping with the analysis and writing the really +valuable code comments. + +Fixes: 56222b212e8e ("futex: Drop hb->lock before enqueueing on the rtmutex") +Reported-by: Heiko Carstens +Co-developed-by: Peter Zijlstra +Signed-off-by: Peter Zijlstra +Signed-off-by: Thomas Gleixner +Tested-by: Heiko Carstens +Cc: Martin Schwidefsky +Cc: linux-s390@vger.kernel.org +Cc: Stefan Liebler +Cc: Sebastian Sewior +Cc: stable@vger.kernel.org +Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1901292311410.1950@nanos.tec.linutronix.de +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/futex.c | 28 ++++++++++++++++++---------- + kernel/locking/rtmutex.c | 37 ++++++++++++++++++++++++++++++++----- + 2 files changed, 50 insertions(+), 15 deletions(-) + +--- a/kernel/futex.c ++++ b/kernel/futex.c +@@ -2811,35 +2811,39 @@ retry_private: + * and BUG when futex_unlock_pi() interleaves with this. + * + * Therefore acquire wait_lock while holding hb->lock, but drop the +- * latter before calling rt_mutex_start_proxy_lock(). This still fully +- * serializes against futex_unlock_pi() as that does the exact same +- * lock handoff sequence. ++ * latter before calling __rt_mutex_start_proxy_lock(). This ++ * interleaves with futex_unlock_pi() -- which does a similar lock ++ * handoff -- such that the latter can observe the futex_q::pi_state ++ * before __rt_mutex_start_proxy_lock() is done. 
+ */ + raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock); + spin_unlock(q.lock_ptr); ++ /* ++ * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter ++ * such that futex_unlock_pi() is guaranteed to observe the waiter when ++ * it sees the futex_q::pi_state. ++ */ + ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current); + raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock); + + if (ret) { + if (ret == 1) + ret = 0; +- +- spin_lock(q.lock_ptr); +- goto no_block; ++ goto cleanup; + } + +- + if (unlikely(to)) + hrtimer_start_expires(&to->timer, HRTIMER_MODE_ABS); + + ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter); + ++cleanup: + spin_lock(q.lock_ptr); + /* +- * If we failed to acquire the lock (signal/timeout), we must ++ * If we failed to acquire the lock (deadlock/signal/timeout), we must + * first acquire the hb->lock before removing the lock from the +- * rt_mutex waitqueue, such that we can keep the hb and rt_mutex +- * wait lists consistent. ++ * rt_mutex waitqueue, such that we can keep the hb and rt_mutex wait ++ * lists consistent. + * + * In particular; it is important that futex_unlock_pi() can not + * observe this inconsistency. +@@ -2963,6 +2967,10 @@ retry: + * there is no point where we hold neither; and therefore + * wake_futex_pi() must observe a state consistent with what we + * observed. ++ * ++ * In particular; this forces __rt_mutex_start_proxy() to ++ * complete such that we're guaranteed to observe the ++ * rt_waiter. Also see the WARN in wake_futex_pi(). 
+ */ + raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); + spin_unlock(&hb->lock); +--- a/kernel/locking/rtmutex.c ++++ b/kernel/locking/rtmutex.c +@@ -1726,12 +1726,33 @@ void rt_mutex_proxy_unlock(struct rt_mut + rt_mutex_set_owner(lock, NULL); + } + ++/** ++ * __rt_mutex_start_proxy_lock() - Start lock acquisition for another task ++ * @lock: the rt_mutex to take ++ * @waiter: the pre-initialized rt_mutex_waiter ++ * @task: the task to prepare ++ * ++ * Starts the rt_mutex acquire; it enqueues the @waiter and does deadlock ++ * detection. It does not wait, see rt_mutex_wait_proxy_lock() for that. ++ * ++ * NOTE: does _NOT_ remove the @waiter on failure; must either call ++ * rt_mutex_wait_proxy_lock() or rt_mutex_cleanup_proxy_lock() after this. ++ * ++ * Returns: ++ * 0 - task blocked on lock ++ * 1 - acquired the lock for task, caller should wake it up ++ * <0 - error ++ * ++ * Special API call for PI-futex support. ++ */ + int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, + struct rt_mutex_waiter *waiter, + struct task_struct *task) + { + int ret; + ++ lockdep_assert_held(&lock->wait_lock); ++ + if (try_to_take_rt_mutex(lock, task, NULL)) + return 1; + +@@ -1749,9 +1770,6 @@ int __rt_mutex_start_proxy_lock(struct r + ret = 0; + } + +- if (unlikely(ret)) +- remove_waiter(lock, waiter); +- + debug_rt_mutex_print_deadlock(waiter); + + return ret; +@@ -1763,12 +1781,18 @@ int __rt_mutex_start_proxy_lock(struct r + * @waiter: the pre-initialized rt_mutex_waiter + * @task: the task to prepare + * ++ * Starts the rt_mutex acquire; it enqueues the @waiter and does deadlock ++ * detection. It does not wait, see rt_mutex_wait_proxy_lock() for that. ++ * ++ * NOTE: unlike __rt_mutex_start_proxy_lock this _DOES_ remove the @waiter ++ * on failure. ++ * + * Returns: + * 0 - task blocked on lock + * 1 - acquired the lock for task, caller should wake it up + * <0 - error + * +- * Special API call for FUTEX_REQUEUE_PI support. ++ * Special API call for PI-futex support. 
+ */ + int rt_mutex_start_proxy_lock(struct rt_mutex *lock, + struct rt_mutex_waiter *waiter, +@@ -1778,6 +1802,8 @@ int rt_mutex_start_proxy_lock(struct rt_ + + raw_spin_lock_irq(&lock->wait_lock); + ret = __rt_mutex_start_proxy_lock(lock, waiter, task); ++ if (unlikely(ret)) ++ remove_waiter(lock, waiter); + raw_spin_unlock_irq(&lock->wait_lock); + + return ret; +@@ -1845,7 +1871,8 @@ int rt_mutex_wait_proxy_lock(struct rt_m + * @lock: the rt_mutex we were woken on + * @waiter: the pre-initialized rt_mutex_waiter + * +- * Attempt to clean up after a failed rt_mutex_wait_proxy_lock(). ++ * Attempt to clean up after a failed __rt_mutex_start_proxy_lock() or ++ * rt_mutex_wait_proxy_lock(). + * + * Unless we acquired the lock; we're still enqueued on the wait-list and can + * in fact still be granted ownership until we're removed. Therefore we can diff --git a/queue-4.14/irqchip-gic-v3-its-plug-allocation-race-for-devices-sharing-a-devid.patch b/queue-4.14/irqchip-gic-v3-its-plug-allocation-race-for-devices-sharing-a-devid.patch new file mode 100644 index 00000000000..cde2e51765a --- /dev/null +++ b/queue-4.14/irqchip-gic-v3-its-plug-allocation-race-for-devices-sharing-a-devid.patch @@ -0,0 +1,145 @@ +From 9791ec7df0e7b4d80706ccea8f24b6542f6059e9 Mon Sep 17 00:00:00 2001 +From: Marc Zyngier +Date: Tue, 29 Jan 2019 10:02:33 +0000 +Subject: irqchip/gic-v3-its: Plug allocation race for devices sharing a DevID + +From: Marc Zyngier + +commit 9791ec7df0e7b4d80706ccea8f24b6542f6059e9 upstream. + +On systems or VMs where multiple devices share a single DevID +(because they sit behind a PCI bridge, or because the HW is +broken in funky ways), we reuse the same its_device structure +in order to reflect this. + +It turns out that there is a distinct lack of locking when looking +up the its_device, and two devices being probed concurrently can result +in double allocations. That's obviously not nice.
+ +A solution for this is to have a per-ITS mutex that serializes device +allocation. + +A similar issue exists on the freeing side, which can run concurrently +with the allocation. On top of now taking the appropriate lock, we +also make sure that a shared device is never freed, as we have no way +to currently track the life cycle of such object. + +Reported-by: Zheng Xiang +Tested-by: Zheng Xiang +Cc: stable@vger.kernel.org +Signed-off-by: Marc Zyngier +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/irqchip/irq-gic-v3-its.c | 32 +++++++++++++++++++++++++++----- + 1 file changed, 27 insertions(+), 5 deletions(-) + +--- a/drivers/irqchip/irq-gic-v3-its.c ++++ b/drivers/irqchip/irq-gic-v3-its.c +@@ -87,9 +87,14 @@ struct its_baser { + * The ITS structure - contains most of the infrastructure, with the + * top-level MSI domain, the command queue, the collections, and the + * list of devices writing to it. ++ * ++ * dev_alloc_lock has to be taken for device allocations, while the ++ * spinlock must be taken to parse data structures such as the device ++ * list. + */ + struct its_node { + raw_spinlock_t lock; ++ struct mutex dev_alloc_lock; + struct list_head entry; + void __iomem *base; + phys_addr_t phys_base; +@@ -138,6 +143,7 @@ struct its_device { + void *itt; + u32 nr_ites; + u32 device_id; ++ bool shared; + }; + + static struct { +@@ -2109,6 +2115,7 @@ static int its_msi_prepare(struct irq_do + struct its_device *its_dev; + struct msi_domain_info *msi_info; + u32 dev_id; ++ int err = 0; + + /* + * We ignore "dev" entierely, and rely on the dev_id that has +@@ -2131,6 +2138,7 @@ static int its_msi_prepare(struct irq_do + return -EINVAL; + } + ++ mutex_lock(&its->dev_alloc_lock); + its_dev = its_find_device(its, dev_id); + if (its_dev) { + /* +@@ -2138,18 +2146,22 @@ static int its_msi_prepare(struct irq_do + * another alias (PCI bridge of some sort). No need to + * create the device. 
+ */ ++ its_dev->shared = true; + pr_debug("Reusing ITT for devID %x\n", dev_id); + goto out; + } + + its_dev = its_create_device(its, dev_id, nvec, true); +- if (!its_dev) +- return -ENOMEM; ++ if (!its_dev) { ++ err = -ENOMEM; ++ goto out; ++ } + + pr_debug("ITT %d entries, %d bits\n", nvec, ilog2(nvec)); + out: ++ mutex_unlock(&its->dev_alloc_lock); + info->scratchpad[0].ptr = its_dev; +- return 0; ++ return err; + } + + static struct msi_domain_ops its_msi_domain_ops = { +@@ -2252,6 +2264,7 @@ static void its_irq_domain_free(struct i + { + struct irq_data *d = irq_domain_get_irq_data(domain, virq); + struct its_device *its_dev = irq_data_get_irq_chip_data(d); ++ struct its_node *its = its_dev->its; + int i; + + for (i = 0; i < nr_irqs; i++) { +@@ -2266,8 +2279,14 @@ static void its_irq_domain_free(struct i + irq_domain_reset_irq_data(data); + } + +- /* If all interrupts have been freed, start mopping the floor */ +- if (bitmap_empty(its_dev->event_map.lpi_map, ++ mutex_lock(&its->dev_alloc_lock); ++ ++ /* ++ * If all interrupts have been freed, start mopping the ++ * floor. This is conditionned on the device not being shared. 
++ */ ++ if (!its_dev->shared && ++ bitmap_empty(its_dev->event_map.lpi_map, + its_dev->event_map.nr_lpis)) { + its_lpi_free_chunks(its_dev->event_map.lpi_map, + its_dev->event_map.lpi_base, +@@ -2279,6 +2298,8 @@ static void its_irq_domain_free(struct i + its_free_device(its_dev); + } + ++ mutex_unlock(&its->dev_alloc_lock); ++ + irq_domain_free_irqs_parent(domain, virq, nr_irqs); + } + +@@ -2966,6 +2987,7 @@ static int __init its_probe_one(struct r + } + + raw_spin_lock_init(&its->lock); ++ mutex_init(&its->dev_alloc_lock); + INIT_LIST_HEAD(&its->entry); + INIT_LIST_HEAD(&its->its_device_list); + typer = gic_read_typer(its_base + GITS_TYPER); diff --git a/queue-4.14/kvm-fix-kvm_ioctl_create_device-reference-counting-cve-2019-6974.patch b/queue-4.14/kvm-fix-kvm_ioctl_create_device-reference-counting-cve-2019-6974.patch new file mode 100644 index 00000000000..760e7c11ac6 --- /dev/null +++ b/queue-4.14/kvm-fix-kvm_ioctl_create_device-reference-counting-cve-2019-6974.patch @@ -0,0 +1,57 @@ +From cfa39381173d5f969daf43582c95ad679189cbc9 Mon Sep 17 00:00:00 2001 +From: Jann Horn +Date: Sat, 26 Jan 2019 01:54:33 +0100 +Subject: kvm: fix kvm_ioctl_create_device() reference counting (CVE-2019-6974) + +From: Jann Horn + +commit cfa39381173d5f969daf43582c95ad679189cbc9 upstream. + +kvm_ioctl_create_device() does the following: + +1. creates a device that holds a reference to the VM object (with a borrowed + reference, the VM's refcount has not been bumped yet) +2. initializes the device +3. transfers the reference to the device to the caller's file descriptor table +4. calls kvm_get_kvm() to turn the borrowed reference to the VM into a real + reference + +The ownership transfer in step 3 must not happen before the reference to the VM +becomes a proper, non-borrowed reference, which only happens in step 4. +After step 3, an attacker can close the file descriptor and drop the borrowed +reference, which can cause the refcount of the kvm object to drop to zero. 
+ +This means that we need to grab a reference for the device before +anon_inode_getfd(), otherwise the VM can disappear from under us. + +Fixes: 852b6d57dc7f ("kvm: add device control API") +Cc: stable@kernel.org +Signed-off-by: Jann Horn +Signed-off-by: Paolo Bonzini +Signed-off-by: Greg Kroah-Hartman + +--- + virt/kvm/kvm_main.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -2912,8 +2912,10 @@ static int kvm_ioctl_create_device(struc + if (ops->init) + ops->init(dev); + ++ kvm_get_kvm(kvm); + ret = anon_inode_getfd(ops->name, &kvm_device_fops, dev, O_RDWR | O_CLOEXEC); + if (ret < 0) { ++ kvm_put_kvm(kvm); + mutex_lock(&kvm->lock); + list_del(&dev->vm_node); + mutex_unlock(&kvm->lock); +@@ -2921,7 +2923,6 @@ static int kvm_ioctl_create_device(struc + return ret; + } + +- kvm_get_kvm(kvm); + cd->fd = ret; + return 0; + } diff --git a/queue-4.14/kvm-nvmx-unconditionally-cancel-preemption-timer-in-free_nested-cve-2019-7221.patch b/queue-4.14/kvm-nvmx-unconditionally-cancel-preemption-timer-in-free_nested-cve-2019-7221.patch new file mode 100644 index 00000000000..a894bba34b5 --- /dev/null +++ b/queue-4.14/kvm-nvmx-unconditionally-cancel-preemption-timer-in-free_nested-cve-2019-7221.patch @@ -0,0 +1,41 @@ +From ecec76885bcfe3294685dc363fd1273df0d5d65f Mon Sep 17 00:00:00 2001 +From: Peter Shier +Date: Thu, 11 Oct 2018 11:46:46 -0700 +Subject: KVM: nVMX: unconditionally cancel preemption timer in free_nested (CVE-2019-7221) + +From: Peter Shier + +commit ecec76885bcfe3294685dc363fd1273df0d5d65f upstream. + +Bugzilla: 1671904 + +There are multiple code paths where an hrtimer may have been started to +emulate an L1 VMX preemption timer that can result in a call to free_nested +without an intervening L2 exit where the hrtimer is normally +cancelled. Unconditionally cancel in free_nested to cover all cases. + +Embargoed until Feb 7th 2019. 
+ +Signed-off-by: Peter Shier +Reported-by: Jim Mattson +Reviewed-by: Jim Mattson +Reported-by: Felix Wilhelm +Cc: stable@kernel.org +Message-Id: <20181011184646.154065-1-pshier@google.com> +Signed-off-by: Paolo Bonzini +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/kvm/vmx.c | 1 + + 1 file changed, 1 insertion(+) + +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -7708,6 +7708,7 @@ static void free_nested(struct vcpu_vmx + if (!vmx->nested.vmxon) + return; + ++ hrtimer_cancel(&vmx->nested.preemption_timer); + vmx->nested.vmxon = false; + free_vpid(vmx->nested.vpid02); + vmx->nested.posted_intr_nv = -1; diff --git a/queue-4.14/kvm-x86-work-around-leak-of-uninitialized-stack-contents-cve-2019-7222.patch b/queue-4.14/kvm-x86-work-around-leak-of-uninitialized-stack-contents-cve-2019-7222.patch new file mode 100644 index 00000000000..01030e7765b --- /dev/null +++ b/queue-4.14/kvm-x86-work-around-leak-of-uninitialized-stack-contents-cve-2019-7222.patch @@ -0,0 +1,47 @@ +From 353c0956a618a07ba4bbe7ad00ff29fe70e8412a Mon Sep 17 00:00:00 2001 +From: Paolo Bonzini +Date: Tue, 29 Jan 2019 18:41:16 +0100 +Subject: KVM: x86: work around leak of uninitialized stack contents (CVE-2019-7222) + +From: Paolo Bonzini + +commit 353c0956a618a07ba4bbe7ad00ff29fe70e8412a upstream. + +Bugzilla: 1671930 + +Emulation of certain instructions (VMXON, VMCLEAR, VMPTRLD, VMWRITE with +memory operand, INVEPT, INVVPID) can incorrectly inject a page fault +when passed an operand that points to an MMIO address. The page fault +will use uninitialized kernel stack memory as the CR2 and error code. + +The right behavior would be to abort the VM with a KVM_EXIT_INTERNAL_ERROR +exit to userspace; however, it is not an easy fix, so for now just +ensure that the error code and CR2 are zero. + +Embargoed until Feb 7th 2019. 
+ +Reported-by: Felix Wilhelm +Cc: stable@kernel.org +Signed-off-by: Paolo Bonzini +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/kvm/x86.c | 7 +++++++ + 1 file changed, 7 insertions(+) + +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -4611,6 +4611,13 @@ int kvm_read_guest_virt(struct kvm_vcpu + { + u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0; + ++ /* ++ * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED ++ * is returned, but our callers are not ready for that and they blindly ++ * call kvm_inject_page_fault. Ensure that they at least do not leak ++ * uninitialized kernel stack memory into cr2 and error code. ++ */ ++ memset(exception, 0, sizeof(*exception)); + return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, + exception); + } diff --git a/queue-4.14/perf-core-don-t-warn-for-impossible-ring-buffer-sizes.patch b/queue-4.14/perf-core-don-t-warn-for-impossible-ring-buffer-sizes.patch new file mode 100644 index 00000000000..5cf42694dea --- /dev/null +++ b/queue-4.14/perf-core-don-t-warn-for-impossible-ring-buffer-sizes.patch @@ -0,0 +1,55 @@ +From 9dff0aa95a324e262ffb03f425d00e4751f3294e Mon Sep 17 00:00:00 2001 +From: Mark Rutland +Date: Thu, 10 Jan 2019 14:27:45 +0000 +Subject: perf/core: Don't WARN() for impossible ring-buffer sizes + +From: Mark Rutland + +commit 9dff0aa95a324e262ffb03f425d00e4751f3294e upstream. + +The perf tool uses /proc/sys/kernel/perf_event_mlock_kb to determine how +large its ringbuffer mmap should be. This can be configured to arbitrary +values, which can be larger than the maximum possible allocation from +kmalloc. + +When this is configured to a suitably large value (e.g. 
thanks to the +perf fuzzer), attempting to use perf record triggers a WARN_ON_ONCE() in +__alloc_pages_nodemask(): + + WARNING: CPU: 2 PID: 5666 at mm/page_alloc.c:4511 __alloc_pages_nodemask+0x3f8/0xbc8 + +Let's avoid this by checking that the requested allocation is possible +before calling kzalloc. + +Reported-by: Julien Thierry +Signed-off-by: Mark Rutland +Signed-off-by: Peter Zijlstra (Intel) +Reviewed-by: Julien Thierry +Cc: Alexander Shishkin +Cc: Arnaldo Carvalho de Melo +Cc: Jiri Olsa +Cc: Linus Torvalds +Cc: Namhyung Kim +Cc: Peter Zijlstra +Cc: Thomas Gleixner +Cc: +Link: https://lkml.kernel.org/r/20190110142745.25495-1-mark.rutland@arm.com +Signed-off-by: Ingo Molnar +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/events/ring_buffer.c | 3 +++ + 1 file changed, 3 insertions(+) + +--- a/kernel/events/ring_buffer.c ++++ b/kernel/events/ring_buffer.c +@@ -719,6 +719,9 @@ struct ring_buffer *rb_alloc(int nr_page + size = sizeof(struct ring_buffer); + size += nr_pages * sizeof(void *); + ++ if (order_base_2(size) >= MAX_ORDER) ++ goto fail; ++ + rb = kzalloc(size, GFP_KERNEL); + if (!rb) + goto fail; diff --git a/queue-4.14/perf-tests-evsel-tp-sched-fix-bitwise-operator.patch b/queue-4.14/perf-tests-evsel-tp-sched-fix-bitwise-operator.patch new file mode 100644 index 00000000000..0bc0d49f818 --- /dev/null +++ b/queue-4.14/perf-tests-evsel-tp-sched-fix-bitwise-operator.patch @@ -0,0 +1,43 @@ +From 489338a717a0dfbbd5a3fabccf172b78f0ac9015 Mon Sep 17 00:00:00 2001 +From: "Gustavo A. R. Silva" +Date: Tue, 22 Jan 2019 17:34:39 -0600 +Subject: perf tests evsel-tp-sched: Fix bitwise operator + +From: Gustavo A. R. Silva + +commit 489338a717a0dfbbd5a3fabccf172b78f0ac9015 upstream. + +Notice that the use of the bitwise OR operator '|' always leads to true +in this particular case, which seems a bit suspicious due to the context +in which this expression is being used. + +Fix this by using bitwise AND operator '&' instead. 
+ +This bug was detected with the help of Coccinelle. + +Signed-off-by: Gustavo A. R. Silva +Acked-by: Jiri Olsa +Cc: Alexander Shishkin +Cc: Namhyung Kim +Cc: Peter Zijlstra +Cc: stable@vger.kernel.org +Fixes: 6a6cd11d4e57 ("perf test: Add test for the sched tracepoint format fields") +Link: http://lkml.kernel.org/r/20190122233439.GA5868@embeddedor +Signed-off-by: Arnaldo Carvalho de Melo +Signed-off-by: Greg Kroah-Hartman + +--- + tools/perf/tests/evsel-tp-sched.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/tools/perf/tests/evsel-tp-sched.c ++++ b/tools/perf/tests/evsel-tp-sched.c +@@ -17,7 +17,7 @@ static int perf_evsel__test_field(struct + return -1; + } + +- is_signed = !!(field->flags | FIELD_IS_SIGNED); ++ is_signed = !!(field->flags & FIELD_IS_SIGNED); + if (should_be_signed && !is_signed) { + pr_debug("%s: \"%s\" signedness(%d) is wrong, should be %d\n", + evsel->name, name, is_signed, should_be_signed); diff --git a/queue-4.14/perf-x86-intel-uncore-add-node-id-mask.patch b/queue-4.14/perf-x86-intel-uncore-add-node-id-mask.patch new file mode 100644 index 00000000000..8c2b743bbd4 --- /dev/null +++ b/queue-4.14/perf-x86-intel-uncore-add-node-id-mask.patch @@ -0,0 +1,63 @@ +From 9e63a7894fd302082cf3627fe90844421a6cbe7f Mon Sep 17 00:00:00 2001 +From: Kan Liang +Date: Sun, 27 Jan 2019 06:53:14 -0800 +Subject: perf/x86/intel/uncore: Add Node ID mask + +From: Kan Liang + +commit 9e63a7894fd302082cf3627fe90844421a6cbe7f upstream. + +Some PCI uncore PMUs cannot be registered on an 8-socket system (HPE +Superdome Flex). + +To understand which Socket the PCI uncore PMUs belongs to, perf retrieves +the local Node ID of the uncore device from CPUNODEID(0xC0) of the PCI +configuration space, and the mapping between Socket ID and Node ID from +GIDNIDMAP(0xD4). The Socket ID can be calculated accordingly. + +The local Node ID is only available at bit 2:0, but current code doesn't +mask it. 
If a BIOS doesn't clear the rest of the bits, an incorrect Node ID +will be fetched. + +Filter the Node ID by adding a mask. + +Reported-by: Song Liu +Tested-by: Song Liu +Signed-off-by: Kan Liang +Signed-off-by: Peter Zijlstra (Intel) +Cc: Alexander Shishkin +Cc: Arnaldo Carvalho de Melo +Cc: Jiri Olsa +Cc: Linus Torvalds +Cc: Peter Zijlstra +Cc: Thomas Gleixner +Cc: # v3.7+ +Fixes: 7c94ee2e0917 ("perf/x86: Add Intel Nehalem and Sandy Bridge-EP uncore support") +Link: https://lkml.kernel.org/r/1548600794-33162-1-git-send-email-kan.liang@linux.intel.com +Signed-off-by: Ingo Molnar +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/events/intel/uncore_snbep.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +--- a/arch/x86/events/intel/uncore_snbep.c ++++ b/arch/x86/events/intel/uncore_snbep.c +@@ -1221,6 +1221,8 @@ static struct pci_driver snbep_uncore_pc + .id_table = snbep_uncore_pci_ids, + }; + ++#define NODE_ID_MASK 0x7 ++ + /* + * build pci bus to socket mapping + */ +@@ -1242,7 +1244,7 @@ static int snbep_pci2phy_map_init(int de + err = pci_read_config_dword(ubox_dev, nodeid_loc, &config); + if (err) + break; +- nodeid = config; ++ nodeid = config & NODE_ID_MASK; + /* get the Node ID mapping */ + err = pci_read_config_dword(ubox_dev, idmap_loc, &config); + if (err) diff --git a/queue-4.14/scsi-aic94xx-fix-module-loading.patch b/queue-4.14/scsi-aic94xx-fix-module-loading.patch new file mode 100644 index 00000000000..58dd575abbd --- /dev/null +++ b/queue-4.14/scsi-aic94xx-fix-module-loading.patch @@ -0,0 +1,65 @@ +From 42caa0edabd6a0a392ec36a5f0943924e4954311 Mon Sep 17 00:00:00 2001 +From: James Bottomley +Date: Wed, 30 Jan 2019 16:42:12 -0800 +Subject: scsi: aic94xx: fix module loading + +From: James Bottomley + +commit 42caa0edabd6a0a392ec36a5f0943924e4954311 upstream. 
+ +The aic94xx driver is currently failing to load with errors like + +sysfs: cannot create duplicate filename '/devices/pci0000:00/0000:00:03.0/0000:02:00.3/0000:07:02.0/revision' + +Because the PCI code had recently added a file named 'revision' to every +PCI device. Fix this by renaming the aic94xx revision file to +aic_revision. This is safe to do for us because as far as I can tell, +there's nothing in userspace relying on the current aic94xx revision file +so it can be renamed without breaking anything. + +Fixes: 702ed3be1b1b (PCI: Create revision file in sysfs) +Cc: stable@vger.kernel.org +Signed-off-by: James Bottomley +Signed-off-by: Martin K. Petersen +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/scsi/aic94xx/aic94xx_init.c | 8 ++++---- + 1 file changed, 4 insertions(+), 4 deletions(-) + +--- a/drivers/scsi/aic94xx/aic94xx_init.c ++++ b/drivers/scsi/aic94xx/aic94xx_init.c +@@ -281,7 +281,7 @@ static ssize_t asd_show_dev_rev(struct d + return snprintf(buf, PAGE_SIZE, "%s\n", + asd_dev_rev[asd_ha->revision_id]); + } +-static DEVICE_ATTR(revision, S_IRUGO, asd_show_dev_rev, NULL); ++static DEVICE_ATTR(aic_revision, S_IRUGO, asd_show_dev_rev, NULL); + + static ssize_t asd_show_dev_bios_build(struct device *dev, + struct device_attribute *attr,char *buf) +@@ -478,7 +478,7 @@ static int asd_create_dev_attrs(struct a + { + int err; + +- err = device_create_file(&asd_ha->pcidev->dev, &dev_attr_revision); ++ err = device_create_file(&asd_ha->pcidev->dev, &dev_attr_aic_revision); + if (err) + return err; + +@@ -500,13 +500,13 @@ err_update_bios: + err_biosb: + device_remove_file(&asd_ha->pcidev->dev, &dev_attr_bios_build); + err_rev: +- device_remove_file(&asd_ha->pcidev->dev, &dev_attr_revision); ++ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_aic_revision); + return err; + } + + static void asd_remove_dev_attrs(struct asd_ha_struct *asd_ha) + { +- device_remove_file(&asd_ha->pcidev->dev, &dev_attr_revision); ++ device_remove_file(&asd_ha->pcidev->dev, 
&dev_attr_aic_revision); + device_remove_file(&asd_ha->pcidev->dev, &dev_attr_bios_build); + device_remove_file(&asd_ha->pcidev->dev, &dev_attr_pcba_sn); + device_remove_file(&asd_ha->pcidev->dev, &dev_attr_update_bios); diff --git a/queue-4.14/scsi-cxlflash-prevent-deadlock-when-adapter-probe-fails.patch b/queue-4.14/scsi-cxlflash-prevent-deadlock-when-adapter-probe-fails.patch new file mode 100644 index 00000000000..af791c669ca --- /dev/null +++ b/queue-4.14/scsi-cxlflash-prevent-deadlock-when-adapter-probe-fails.patch @@ -0,0 +1,74 @@ +From bb61b843ffd46978d7ca5095453e572714934eeb Mon Sep 17 00:00:00 2001 +From: Vaibhav Jain +Date: Wed, 30 Jan 2019 17:56:51 +0530 +Subject: scsi: cxlflash: Prevent deadlock when adapter probe fails + +From: Vaibhav Jain + +commit bb61b843ffd46978d7ca5095453e572714934eeb upstream. + +Presently when an error is encountered during probe of the cxlflash +adapter, a deadlock is seen with cpu thread stuck inside +cxlflash_remove(). Below is the trace of the deadlock as logged by +khungtaskd: + +cxlflash 0006:00:00.0: cxlflash_probe: init_afu failed rc=-16 +INFO: task kworker/80:1:890 blocked for more than 120 seconds. + Not tainted 5.0.0-rc4-capi2-kexec+ #2 +"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +kworker/80:1 D 0 890 2 0x00000808 +Workqueue: events work_for_cpu_fn + +Call Trace: + 0x4d72136320 (unreliable) + __switch_to+0x2cc/0x460 + __schedule+0x2bc/0xac0 + schedule+0x40/0xb0 + cxlflash_remove+0xec/0x640 [cxlflash] + cxlflash_probe+0x370/0x8f0 [cxlflash] + local_pci_probe+0x6c/0x140 + work_for_cpu_fn+0x38/0x60 + process_one_work+0x260/0x530 + worker_thread+0x280/0x5d0 + kthread+0x1a8/0x1b0 + ret_from_kernel_thread+0x5c/0x80 +INFO: task systemd-udevd:5160 blocked for more than 120 seconds. + +The deadlock occurs as cxlflash_remove() is called from cxlflash_probe() +without setting 'cxlflash_cfg->state' to STATE_PROBED and the probe thread +starts to wait on 'cxlflash_cfg->reset_waitq'. 
Since the device was never +successfully probed the 'cxlflash_cfg->state' never changes from +STATE_PROBING hence the deadlock occurs. + +We fix this deadlock by setting the variable 'cxlflash_cfg->state' to +STATE_PROBED in case an error occurs during cxlflash_probe() and just +before calling cxlflash_remove(). + +Cc: stable@vger.kernel.org +Fixes: c21e0bbfc485("cxlflash: Base support for IBM CXL Flash Adapter") +Signed-off-by: Vaibhav Jain +Signed-off-by: Martin K. Petersen +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/scsi/cxlflash/main.c | 2 ++ + 1 file changed, 2 insertions(+) + +--- a/drivers/scsi/cxlflash/main.c ++++ b/drivers/scsi/cxlflash/main.c +@@ -3659,6 +3659,7 @@ static int cxlflash_probe(struct pci_dev + host->max_cmd_len = CXLFLASH_MAX_CDB_LEN; + + cfg = shost_priv(host); ++ cfg->state = STATE_PROBING; + cfg->host = host; + rc = alloc_mem(cfg); + if (rc) { +@@ -3741,6 +3742,7 @@ out: + return rc; + + out_remove: ++ cfg->state = STATE_PROBED; + cxlflash_remove(pdev); + goto out; + } diff --git a/queue-4.14/serial-8250_pci-make-pci-class-test-non-fatal.patch b/queue-4.14/serial-8250_pci-make-pci-class-test-non-fatal.patch new file mode 100644 index 00000000000..f97b560820d --- /dev/null +++ b/queue-4.14/serial-8250_pci-make-pci-class-test-non-fatal.patch @@ -0,0 +1,58 @@ +From 824d17c57b0abbcb9128fb3f7327fae14761914b Mon Sep 17 00:00:00 2001 +From: Andy Shevchenko +Date: Thu, 24 Jan 2019 23:51:21 +0200 +Subject: serial: 8250_pci: Make PCI class test non fatal + +From: Andy Shevchenko + +commit 824d17c57b0abbcb9128fb3f7327fae14761914b upstream. + +As has been reported the National Instruments serial cards have broken +PCI class. + +The commit 7d8905d06405 + + ("serial: 8250_pci: Enable device after we check black list") + +made the PCI class check mandatory for the case when device is listed in +a quirk list. + +Make PCI class test non fatal to allow broken card be enumerated. 
+ +Fixes: 7d8905d06405 ("serial: 8250_pci: Enable device after we check black list") +Cc: stable +Reported-by: Guan Yung Tseng +Tested-by: Guan Yung Tseng +Tested-by: KHUENY.Gerhard +Signed-off-by: Andy Shevchenko +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/tty/serial/8250/8250_pci.c | 9 +++++---- + 1 file changed, 5 insertions(+), 4 deletions(-) + +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -3425,6 +3425,11 @@ static int + serial_pci_guess_board(struct pci_dev *dev, struct pciserial_board *board) + { + int num_iomem, num_port, first_port = -1, i; ++ int rc; ++ ++ rc = serial_pci_is_class_communication(dev); ++ if (rc) ++ return rc; + + /* + * Should we try to make guesses for multiport serial devices later? +@@ -3652,10 +3657,6 @@ pciserial_init_one(struct pci_dev *dev, + + board = &pci_boards[ent->driver_data]; + +- rc = serial_pci_is_class_communication(dev); +- if (rc) +- return rc; +- + rc = serial_pci_is_blacklisted(dev); + if (rc) + return rc; diff --git a/queue-4.14/serial-fix-race-between-flush_to_ldisc-and-tty_open.patch b/queue-4.14/serial-fix-race-between-flush_to_ldisc-and-tty_open.patch new file mode 100644 index 00000000000..d7dc7bae72d --- /dev/null +++ b/queue-4.14/serial-fix-race-between-flush_to_ldisc-and-tty_open.patch @@ -0,0 +1,84 @@ +From fedb5760648a291e949f2380d383b5b2d2749b5e Mon Sep 17 00:00:00 2001 +From: Greg Kroah-Hartman +Date: Thu, 31 Jan 2019 17:43:16 +0800 +Subject: serial: fix race between flush_to_ldisc and tty_open + +From: Greg Kroah-Hartman + +commit fedb5760648a291e949f2380d383b5b2d2749b5e upstream. + +There still is a race window after the commit b027e2298bd588 +("tty: fix data race between tty_init_dev and flush of buf"), +and we encountered this crash issue if receive_buf call comes +before tty initialization completes in tty_open and +tty->driver_data may be NULL. 
+ +CPU0 CPU1 +---- ---- + tty_open + tty_init_dev + tty_ldisc_unlock + schedule +flush_to_ldisc + receive_buf + tty_port_default_receive_buf + tty_ldisc_receive_buf + n_tty_receive_buf_common + __receive_buf + uart_flush_chars + uart_start + /*tty->driver_data is NULL*/ + tty->ops->open + /*init tty->driver_data*/ + +it can be fixed by extending ldisc semaphore lock in tty_init_dev +to driver_data initialized completely after tty->ops->open(), but +this will lead to get lock on one function and unlock in some other +function, and hard to maintain, so fix this race only by checking +tty->driver_data when receiving, and return if tty->driver_data +is NULL, and n_tty_receive_buf_common maybe calls uart_unthrottle, +so add the same check. + +Because the tty layer knows nothing about the driver associated with the +device, the tty layer can not do anything here, it is up to the tty +driver itself to check for this type of race. Fix up the serial driver +to correctly check to see if it is finished binding with the device when +being called, and if not, abort the tty calls. 
+ +[Description and problem report and testing from Li RongQing, I rewrote +the patch to be in the serial layer, not in the tty core - gregkh] + +Reported-by: Li RongQing +Tested-by: Li RongQing +Signed-off-by: Wang Li +Signed-off-by: Zhang Yu +Signed-off-by: Li RongQing +Cc: stable +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/tty/serial/serial_core.c | 6 ++++++ + 1 file changed, 6 insertions(+) + +--- a/drivers/tty/serial/serial_core.c ++++ b/drivers/tty/serial/serial_core.c +@@ -143,6 +143,9 @@ static void uart_start(struct tty_struct + struct uart_port *port; + unsigned long flags; + ++ if (!state) ++ return; ++ + port = uart_port_lock(state, flags); + __uart_start(tty); + uart_port_unlock(port, flags); +@@ -2415,6 +2418,9 @@ static void uart_poll_put_char(struct tt + struct uart_state *state = drv->state + line; + struct uart_port *port; + ++ if (!state) ++ return; ++ + port = uart_port_ref(state); + if (!port) + return; diff --git a/queue-4.14/series b/queue-4.14/series index 990d0ed6074..cecc1172792 100644 --- a/queue-4.14/series +++ b/queue-4.14/series @@ -177,3 +177,25 @@ alsa-hda-serialize-codec-registrations.patch fuse-call-pipe_buf_release-under-pipe-lock.patch fuse-decrement-nr_writeback_temp-on-the-right-page.patch fuse-handle-zero-sized-retrieve-correctly.patch +dmaengine-bcm2835-fix-interrupt-race-on-rt.patch +dmaengine-bcm2835-fix-abort-of-transactions.patch +dmaengine-imx-dma-fix-wrong-callback-invoke.patch +futex-handle-early-deadlock-return-correctly.patch +irqchip-gic-v3-its-plug-allocation-race-for-devices-sharing-a-devid.patch +usb-phy-am335x-fix-race-condition-in-_probe.patch +usb-dwc3-gadget-handle-0-xfer-length-for-out-ep.patch +usb-gadget-udc-net2272-fix-bitwise-and-boolean-operations.patch +usb-gadget-musb-fix-short-isoc-packets-with-inventra-dma.patch +staging-speakup-fix-tty-operation-null-derefs.patch +scsi-cxlflash-prevent-deadlock-when-adapter-probe-fails.patch +scsi-aic94xx-fix-module-loading.patch 
+kvm-x86-work-around-leak-of-uninitialized-stack-contents-cve-2019-7222.patch +kvm-fix-kvm_ioctl_create_device-reference-counting-cve-2019-6974.patch +kvm-nvmx-unconditionally-cancel-preemption-timer-in-free_nested-cve-2019-7221.patch +cpu-hotplug-fix-smt-disabled-by-bios-detection-for-kvm.patch +perf-x86-intel-uncore-add-node-id-mask.patch +x86-mce-initialize-mce.bank-in-the-case-of-a-fatal-error-in-mce_no_way_out.patch +perf-core-don-t-warn-for-impossible-ring-buffer-sizes.patch +perf-tests-evsel-tp-sched-fix-bitwise-operator.patch +serial-fix-race-between-flush_to_ldisc-and-tty_open.patch +serial-8250_pci-make-pci-class-test-non-fatal.patch diff --git a/queue-4.14/staging-speakup-fix-tty-operation-null-derefs.patch b/queue-4.14/staging-speakup-fix-tty-operation-null-derefs.patch new file mode 100644 index 00000000000..68ff14a3f13 --- /dev/null +++ b/queue-4.14/staging-speakup-fix-tty-operation-null-derefs.patch @@ -0,0 +1,47 @@ +From a1960e0f1639cb1f7a3d94521760fc73091f6640 Mon Sep 17 00:00:00 2001 +From: Johan Hovold +Date: Wed, 30 Jan 2019 10:49:34 +0100 +Subject: staging: speakup: fix tty-operation NULL derefs + +From: Johan Hovold + +commit a1960e0f1639cb1f7a3d94521760fc73091f6640 upstream. + +The send_xchar() and tiocmset() tty operations are optional. Add the +missing sanity checks to prevent user-space triggerable NULL-pointer +dereferences. 
+ +Fixes: 6b9ad1c742bf ("staging: speakup: add send_xchar, tiocmset and input functionality for tty") +Cc: stable # 4.13 +Cc: Okash Khawaja +Cc: Samuel Thibault +Signed-off-by: Johan Hovold +Reviewed-by: Samuel Thibault +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/staging/speakup/spk_ttyio.c | 6 ++++-- + 1 file changed, 4 insertions(+), 2 deletions(-) + +--- a/drivers/staging/speakup/spk_ttyio.c ++++ b/drivers/staging/speakup/spk_ttyio.c +@@ -246,7 +246,8 @@ static void spk_ttyio_send_xchar(char ch + return; + } + +- speakup_tty->ops->send_xchar(speakup_tty, ch); ++ if (speakup_tty->ops->send_xchar) ++ speakup_tty->ops->send_xchar(speakup_tty, ch); + mutex_unlock(&speakup_tty_mutex); + } + +@@ -258,7 +259,8 @@ static void spk_ttyio_tiocmset(unsigned + return; + } + +- speakup_tty->ops->tiocmset(speakup_tty, set, clear); ++ if (speakup_tty->ops->tiocmset) ++ speakup_tty->ops->tiocmset(speakup_tty, set, clear); + mutex_unlock(&speakup_tty_mutex); + } + diff --git a/queue-4.14/usb-dwc3-gadget-handle-0-xfer-length-for-out-ep.patch b/queue-4.14/usb-dwc3-gadget-handle-0-xfer-length-for-out-ep.patch new file mode 100644 index 00000000000..23ba6cd00fa --- /dev/null +++ b/queue-4.14/usb-dwc3-gadget-handle-0-xfer-length-for-out-ep.patch @@ -0,0 +1,37 @@ +From 1e19cdc8060227b0802bda6bc0bd22b23679ba32 Mon Sep 17 00:00:00 2001 +From: Tejas Joglekar +Date: Tue, 22 Jan 2019 13:26:51 +0530 +Subject: usb: dwc3: gadget: Handle 0 xfer length for OUT EP + +From: Tejas Joglekar + +commit 1e19cdc8060227b0802bda6bc0bd22b23679ba32 upstream. + +For OUT endpoints, zero-length transfers require MaxPacketSize buffer as +per the DWC_usb3 programming guide 3.30a section 4.2.3.3. + +This patch fixes this by explicitly checking zero length +transfer to correctly pad up to MaxPacketSize. 
+
+Fixes: c6267a51639b ("usb: dwc3: gadget: align transfers to wMaxPacketSize")
+Cc: stable@vger.kernel.org
+
+Signed-off-by: Tejas Joglekar
+Signed-off-by: Felipe Balbi
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/usb/dwc3/gadget.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1114,7 +1114,7 @@ static void dwc3_prepare_one_trb_linear(
+ unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
+ unsigned int rem = length % maxp;
+
+- if (rem && usb_endpoint_dir_out(dep->endpoint.desc)) {
++ if ((!length || rem) && usb_endpoint_dir_out(dep->endpoint.desc)) {
+ struct dwc3 *dwc = dep->dwc;
+ struct dwc3_trb *trb;
+
diff --git a/queue-4.14/usb-gadget-musb-fix-short-isoc-packets-with-inventra-dma.patch b/queue-4.14/usb-gadget-musb-fix-short-isoc-packets-with-inventra-dma.patch
new file mode 100644
index 00000000000..0e772c1310e
--- /dev/null
+++ b/queue-4.14/usb-gadget-musb-fix-short-isoc-packets-with-inventra-dma.patch
@@ -0,0 +1,114 @@
+From c418fd6c01fbc5516a2cd1eaf1df1ec86869028a Mon Sep 17 00:00:00 2001
+From: Paul Elder
+Date: Wed, 30 Jan 2019 08:13:21 -0600
+Subject: usb: gadget: musb: fix short isoc packets with inventra dma
+
+From: Paul Elder
+
+commit c418fd6c01fbc5516a2cd1eaf1df1ec86869028a upstream.
+
+Handling short packets (length < max packet size) in the Inventra DMA
+engine in the MUSB driver causes the MUSB DMA controller to hang. For
+example, when streaming video out of a UVC gadget, only the first
+video frame is transferred.
+
+For short packets (mode-0 or mode-1 DMA), MUSB_TXCSR_TXPKTRDY must be
+set manually by the driver. This was previously done in musb_g_tx
+(musb_gadget.c), but incorrectly (all csr flags were cleared, and only
+MUSB_TXCSR_MODE and MUSB_TXCSR_TXPKTRDY were set). Fixing that problem
+allows some requests to be transferred correctly, but multiple requests
+were often put together in one USB packet, and caused problems if the
+packet size was not a multiple of 4. Instead, set MUSB_TXCSR_TXPKTRDY
+in dma_controller_irq (musbhsdma.c), just like host mode transfers.
+
+This topic was originally tackled by Nicolas Boichat [0] [1] and is
+discussed further at [2] as part of his GSoC project [3].
+
+[0] https://groups.google.com/forum/?hl=en#!topic/beagleboard-gsoc/k8Azwfp75CU
+[1] https://gitorious.org/beagleboard-usbsniffer/beagleboard-usbsniffer-kernel/commit/b0be3b6cc195ba732189b04f1d43ec843c3e54c9?p=beagleboard-usbsniffer:beagleboard-usbsniffer-kernel.git;a=patch;h=b0be3b6cc195ba732189b04f1d43ec843c3e54c9
+[2] http://beagleboard-usbsniffer.blogspot.com/2010/07/musb-isochronous-transfers-fixed.html
+[3] http://elinux.org/BeagleBoard/GSoC/USBSniffer
+
+Fixes: 550a7375fe72 ("USB: Add MUSB and TUSB support")
+Signed-off-by: Paul Elder
+Signed-off-by: Bin Liu
+Cc: stable
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/usb/musb/musb_gadget.c | 13 +------------
+ drivers/usb/musb/musbhsdma.c | 21 +++++++++++----------
+ 2 files changed, 12 insertions(+), 22 deletions(-)
+
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -477,13 +477,10 @@ void musb_g_tx(struct musb *musb, u8 epn
+ }
+
+ if (request) {
+- u8 is_dma = 0;
+- bool short_packet = false;
+
+ trace_musb_req_tx(req);
+
+ if (dma && (csr & MUSB_TXCSR_DMAENAB)) {
+- is_dma = 1;
+ csr |= MUSB_TXCSR_P_WZC_BITS;
+ csr &= ~(MUSB_TXCSR_DMAENAB | MUSB_TXCSR_P_UNDERRUN |
+ MUSB_TXCSR_TXPKTRDY | MUSB_TXCSR_AUTOSET);
+@@ -501,16 +498,8 @@ void musb_g_tx(struct musb *musb, u8 epn
+ */
+ if ((request->zero && request->length)
+ && (request->length % musb_ep->packet_sz == 0)
+- && (request->actual == request->length))
+- short_packet = true;
++ && (request->actual == request->length)) {
+
+- if ((musb_dma_inventra(musb) || musb_dma_ux500(musb)) &&
+- (is_dma && (!dma->desired_mode ||
+- (request->actual &
+- (musb_ep->packet_sz - 1)))))
+- short_packet = true;
+-
+- if (short_packet) {
+ /*
+ * On DMA completion, FIFO may not be
+ * available yet...
+--- a/drivers/usb/musb/musbhsdma.c
++++ b/drivers/usb/musb/musbhsdma.c
+@@ -320,12 +320,10 @@ static irqreturn_t dma_controller_irq(in
+ channel->status = MUSB_DMA_STATUS_FREE;
+
+ /* completed */
+- if ((devctl & MUSB_DEVCTL_HM)
+- && (musb_channel->transmit)
+- && ((channel->desired_mode == 0)
+- || (channel->actual_len &
+- (musb_channel->max_packet_sz - 1)))
+- ) {
++ if (musb_channel->transmit &&
++ (!channel->desired_mode ||
++ (channel->actual_len %
++ musb_channel->max_packet_sz))) {
+ u8 epnum = musb_channel->epnum;
+ int offset = musb->io.ep_offset(epnum,
+ MUSB_TXCSR);
+@@ -337,11 +335,14 @@ static irqreturn_t dma_controller_irq(in
+ */
+ musb_ep_select(mbase, epnum);
+ txcsr = musb_readw(mbase, offset);
+- txcsr &= ~(MUSB_TXCSR_DMAENAB
++ if (channel->desired_mode == 1) {
++ txcsr &= ~(MUSB_TXCSR_DMAENAB
+ | MUSB_TXCSR_AUTOSET);
+- musb_writew(mbase, offset, txcsr);
+- /* Send out the packet */
+- txcsr &= ~MUSB_TXCSR_DMAMODE;
++ musb_writew(mbase, offset, txcsr);
++ /* Send out the packet */
++ txcsr &= ~MUSB_TXCSR_DMAMODE;
++ txcsr |= MUSB_TXCSR_DMAENAB;
++ }
+ txcsr |= MUSB_TXCSR_TXPKTRDY;
+ musb_writew(mbase, offset, txcsr);
+ }
diff --git a/queue-4.14/usb-gadget-udc-net2272-fix-bitwise-and-boolean-operations.patch b/queue-4.14/usb-gadget-udc-net2272-fix-bitwise-and-boolean-operations.patch
new file mode 100644
index 00000000000..2f529e59bd1
--- /dev/null
+++ b/queue-4.14/usb-gadget-udc-net2272-fix-bitwise-and-boolean-operations.patch
@@ -0,0 +1,43 @@
+From 07c69f1148da7de3978686d3af9263325d9d60bd Mon Sep 17 00:00:00 2001
+From: "Gustavo A. R. Silva"
+Date: Tue, 22 Jan 2019 15:28:08 -0600
+Subject: usb: gadget: udc: net2272: Fix bitwise and boolean operations
+
+From: Gustavo A. R. Silva
+
+commit 07c69f1148da7de3978686d3af9263325d9d60bd upstream.
+
+(!x & y) strikes again.
+
+Fix bitwise and boolean operations by enclosing the expression:
+
+ intcsr & (1 << NET2272_PCI_IRQ)
+
+in parentheses, before applying the boolean operator '!'.
+
+Notice that this code has been there since 2011. So, it would
+be helpful if someone can double-check this.
+
+This issue was detected with the help of Coccinelle.
+
+Fixes: ceb80363b2ec ("USB: net2272: driver for PLX NET2272 USB device controller")
+Cc: stable@vger.kernel.org
+Signed-off-by: Gustavo A. R. Silva
+Signed-off-by: Felipe Balbi
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/usb/gadget/udc/net2272.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/usb/gadget/udc/net2272.c
++++ b/drivers/usb/gadget/udc/net2272.c
+@@ -2096,7 +2096,7 @@ static irqreturn_t net2272_irq(int irq,
+ #if defined(PLX_PCI_RDK2)
+ /* see if PCI int for us by checking irqstat */
+ intcsr = readl(dev->rdk2.fpga_base_addr + RDK2_IRQSTAT);
+- if (!intcsr & (1 << NET2272_PCI_IRQ)) {
++ if (!(intcsr & (1 << NET2272_PCI_IRQ))) {
+ spin_unlock(&dev->lock);
+ return IRQ_NONE;
+ }
diff --git a/queue-4.14/usb-phy-am335x-fix-race-condition-in-_probe.patch b/queue-4.14/usb-phy-am335x-fix-race-condition-in-_probe.patch
new file mode 100644
index 00000000000..4de80365460
--- /dev/null
+++ b/queue-4.14/usb-phy-am335x-fix-race-condition-in-_probe.patch
@@ -0,0 +1,45 @@
+From a53469a68eb886e84dd8b69a1458a623d3591793 Mon Sep 17 00:00:00 2001
+From: Bin Liu
+Date: Wed, 16 Jan 2019 11:54:07 -0600
+Subject: usb: phy: am335x: fix race condition in _probe
+
+From: Bin Liu
+
+commit a53469a68eb886e84dd8b69a1458a623d3591793 upstream.
+
+Powering off the PHY should be done before populating the PHY.
+Otherwise, am335x_init() could be called by the PHY owner to power on
+the PHY first, and then am335x_phy_probe() would turn the PHY off again
+without the caller knowing it.
+
+Fixes: 2fc711d76352 ("usb: phy: am335x: Enable USB remote wakeup using PHY wakeup")
+Cc: stable@vger.kernel.org # v3.18+
+Signed-off-by: Bin Liu
+Signed-off-by: Felipe Balbi
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/usb/phy/phy-am335x.c | 5 +----
+ 1 file changed, 1 insertion(+), 4 deletions(-)
+
+--- a/drivers/usb/phy/phy-am335x.c
++++ b/drivers/usb/phy/phy-am335x.c
+@@ -60,9 +60,6 @@ static int am335x_phy_probe(struct platf
+ if (ret)
+ return ret;
+
+- ret = usb_add_phy_dev(&am_phy->usb_phy_gen.phy);
+- if (ret)
+- return ret;
+ am_phy->usb_phy_gen.phy.init = am335x_init;
+ am_phy->usb_phy_gen.phy.shutdown = am335x_shutdown;
+
+@@ -81,7 +78,7 @@ static int am335x_phy_probe(struct platf
+ device_set_wakeup_enable(dev, false);
+ phy_ctrl_power(am_phy->phy_ctrl, am_phy->id, am_phy->dr_mode, false);
+
+- return 0;
++ return usb_add_phy_dev(&am_phy->usb_phy_gen.phy);
+ }
+
+ static int am335x_phy_remove(struct platform_device *pdev)
diff --git a/queue-4.14/x86-mce-initialize-mce.bank-in-the-case-of-a-fatal-error-in-mce_no_way_out.patch b/queue-4.14/x86-mce-initialize-mce.bank-in-the-case-of-a-fatal-error-in-mce_no_way_out.patch
new file mode 100644
index 00000000000..882f4050d6e
--- /dev/null
+++ b/queue-4.14/x86-mce-initialize-mce.bank-in-the-case-of-a-fatal-error-in-mce_no_way_out.patch
@@ -0,0 +1,51 @@
+From d28af26faa0b1daf3c692603d46bc4687c16f19e Mon Sep 17 00:00:00 2001
+From: Tony Luck
+Date: Thu, 31 Jan 2019 16:33:41 -0800
+Subject: x86/MCE: Initialize mce.bank in the case of a fatal error in mce_no_way_out()
+
+From: Tony Luck
+
+commit d28af26faa0b1daf3c692603d46bc4687c16f19e upstream.
+
+Internal injection testing crashed with a console log that said:
+
+ mce: [Hardware Error]: CPU 7: Machine Check Exception: f Bank 0: bd80000000100134
+
+This caused a lot of head scratching because the MCACOD (bits 15:0) of
+that status is a signature from an L1 data cache error. But Linux says
+that it found it in "Bank 0", which on this model CPU only reports L1
+instruction cache errors.
+
+The answer was that Linux doesn't initialize "m->bank" in the case that
+it finds a fatal error in the mce_no_way_out() pre-scan of banks. If
+this was a local machine check, then this partially initialized struct
+mce is being passed to mce_panic().
+
+Fix is simple: just initialize m->bank in the case of a fatal error.
+
+Fixes: 40c36e2741d7 ("x86/mce: Fix incorrect "Machine check from unknown source" message")
+Signed-off-by: Tony Luck
+Signed-off-by: Borislav Petkov
+Cc: "H. Peter Anvin"
+Cc: Ingo Molnar
+Cc: Thomas Gleixner
+Cc: Vishal Verma
+Cc: x86-ml
+Cc: stable@vger.kernel.org # v4.18 Note pre-v5.0 arch/x86/kernel/cpu/mce/core.c was called arch/x86/kernel/cpu/mcheck/mce.c
+Link: https://lkml.kernel.org/r/20190201003341.10638-1-tony.luck@intel.com
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/kernel/cpu/mcheck/mce.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -773,6 +773,7 @@ static int mce_no_way_out(struct mce *m,
+ quirk_no_way_out(i, m, regs);
+
+ if (mce_severity(m, mca_cfg.tolerant, &tmp, true) >= MCE_PANIC_SEVERITY) {
++ m->bank = i;
+ mce_read_aux(m, i);
+ *msg = tmp;
+ return 1;