From: Greg Kroah-Hartman
Date: Thu, 3 Oct 2019 08:05:30 +0000 (+0200)
Subject: 5.3-stable patches
X-Git-Tag: v4.4.195~42
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=4cf82141226a14a3ce4e98d5ddf5cb61a7d996e3;p=thirdparty%2Fkernel%2Fstable-queue.git

5.3-stable patches

added patches:
	fuse-fix-beyond-end-of-page-access-in-fuse_parse_cache.patch
	fuse-fix-deadlock-with-aio-poll-and-fuse_iqueue-waitq.lock.patch
	fuse-fix-missing-unlock_page-in-fuse_writepage.patch
	kvm-x86-add-significant-index-flag-to-a-few-cpuid-leaves.patch
	kvm-x86-always-stop-emulation-on-page-fault.patch
	kvm-x86-disable-posted-interrupts-for-non-standard-irqs-delivery-modes.patch
	kvm-x86-manually-calculate-reserved-bits-when-loading-pdptrs.patch
	kvm-x86-mmu-use-fast-invalidate-mechanism-to-zap-mmio-sptes.patch
	kvm-x86-set-ctxt-have_exception-in-x86_decode_insn.patch
	parisc-disable-hp-hsc-pci-cards-to-prevent-kernel-crash.patch
	platform-x86-intel_int0002_vgpio-fix-wakeups-not-working-on-cherry-trail.patch
	tpm-wrap-the-buffer-from-the-caller-to-tpm_buf-in-tpm_send.patch
	tpm_tis_core-set-tpm_chip_flag_irq-before-probing-for-interrupts.patch
	tpm_tis_core-turn-on-the-tpm-before-probing-irq-s.patch
---

diff --git a/queue-5.3/fuse-fix-beyond-end-of-page-access-in-fuse_parse_cache.patch b/queue-5.3/fuse-fix-beyond-end-of-page-access-in-fuse_parse_cache.patch
new file mode 100644
index 00000000000..a8e7d9ba3c3
--- /dev/null
+++ b/queue-5.3/fuse-fix-beyond-end-of-page-access-in-fuse_parse_cache.patch
@@ -0,0 +1,72 @@
+From e5854b1cdf6cb48a20e01e3bdad0476a4c60a077 Mon Sep 17 00:00:00 2001
+From: Tejun Heo
+Date: Sun, 22 Sep 2019 06:19:36 -0700
+Subject: fuse: fix beyond-end-of-page access in fuse_parse_cache()
+
+From: Tejun Heo
+
+commit e5854b1cdf6cb48a20e01e3bdad0476a4c60a077 upstream.
+
+With DEBUG_PAGEALLOC on, the following triggers.
+
+  BUG: unable to handle page fault for address: ffff88859367c000
+  #PF: supervisor read access in kernel mode
+  #PF: error_code(0x0000) - not-present page
+  PGD 3001067 P4D 3001067 PUD 406d3a8067 PMD 406d30c067 PTE 800ffffa6c983060
+  Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
+  CPU: 38 PID: 3110657 Comm: python2.7
+  RIP: 0010:fuse_readdir+0x88f/0xe7a [fuse]
+  Code: 49 8b 4d 08 49 39 4e 60 0f 84 44 04 00 00 48 8b 43 08 43 8d 1c 3c 4d 01 7e 68 49 89 dc 48 03 5c 24 38 49 89 46 60 8b 44 24 30 <8b> 4b 10 44 29 e0 48 89 ca 48 83 c1 1f 48 83 e1 f8 83 f8 17 49 89
+  RSP: 0018:ffffc90035edbde0 EFLAGS: 00010286
+  RAX: 0000000000001000 RBX: ffff88859367bff0 RCX: 0000000000000000
+  RDX: 0000000000000000 RSI: ffff88859367bfed RDI: 0000000000920907
+  RBP: ffffc90035edbe90 R08: 000000000000014b R09: 0000000000000004
+  R10: ffff88859367b000 R11: 0000000000000000 R12: 0000000000000ff0
+  R13: ffffc90035edbee0 R14: ffff889fb8546180 R15: 0000000000000020
+  FS:  00007f80b5f4a740(0000) GS:ffff889fffa00000(0000) knlGS:0000000000000000
+  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+  CR2: ffff88859367c000 CR3: 0000001c170c2001 CR4: 00000000003606e0
+  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
+  Call Trace:
+   iterate_dir+0x122/0x180
+   __x64_sys_getdents+0xa6/0x140
+   do_syscall_64+0x42/0x100
+   entry_SYSCALL_64_after_hwframe+0x44/0xa9
+
+It's in fuse_parse_cache().  %rbx (ffff88859367bff0) is the fuse_dirent
+pointer - addr + offset.  FUSE_DIRENT_SIZE() is trying to dereference
+namelen off of it but that derefs into the next page which is disabled
+by pagealloc debug, causing a PF.
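+
+For illustration, a simplified sketch of the record layout involved
+(close to, but not exactly, the uapi definitions in
+include/uapi/linux/fuse.h):
+
+	struct fuse_dirent {
+		uint64_t	ino;
+		uint64_t	off;
+		uint32_t	namelen;	/* read by FUSE_DIRENT_SIZE() */
+		uint32_t	type;
+		char		name[];
+	};
+
+	/* FUSE_DIRENT_SIZE(d) is roughly
+	 *   FUSE_DIRENT_ALIGN(FUSE_NAME_OFFSET + (d)->namelen)
+	 * so evaluating it reads d->namelen.  If 'd' sits within
+	 * FUSE_NAME_OFFSET bytes of the end of the page (here %rbx is
+	 * 16 bytes short of it), that read lands on the next page,
+	 * which DEBUG_PAGEALLOC leaves unmapped. */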
+
+This is caused by dirent->namelen being accessed before ensuring that
+there are enough bytes in the page for the dirent.  Fix it by pushing
+down the reclen calculation.
+
+Signed-off-by: Tejun Heo
+Fixes: 5d7bc7e8680c ("fuse: allow using readdir cache")
+Cc: stable@vger.kernel.org # v4.20+
+Signed-off-by: Miklos Szeredi
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ fs/fuse/readdir.c |    4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/fs/fuse/readdir.c
++++ b/fs/fuse/readdir.c
+@@ -372,11 +372,13 @@ static enum fuse_parse_result fuse_parse
+ 	for (;;) {
+ 		struct fuse_dirent *dirent = addr + offset;
+ 		unsigned int nbytes = size - offset;
+-		size_t reclen = FUSE_DIRENT_SIZE(dirent);
++		size_t reclen;
+ 
+ 		if (nbytes < FUSE_NAME_OFFSET || !dirent->namelen)
+ 			break;
+ 
++		reclen = FUSE_DIRENT_SIZE(dirent); /* derefs ->namelen */
++
+ 		if (WARN_ON(dirent->namelen > FUSE_NAME_MAX))
+ 			return FOUND_ERR;
+ 		if (WARN_ON(reclen > nbytes))
diff --git a/queue-5.3/fuse-fix-deadlock-with-aio-poll-and-fuse_iqueue-waitq.lock.patch b/queue-5.3/fuse-fix-deadlock-with-aio-poll-and-fuse_iqueue-waitq.lock.patch
new file mode 100644
index 00000000000..44e31f11f35
--- /dev/null
+++ b/queue-5.3/fuse-fix-deadlock-with-aio-poll-and-fuse_iqueue-waitq.lock.patch
@@ -0,0 +1,403 @@
+From 76e43c8ccaa35c30d5df853013561145a0f750a5 Mon Sep 17 00:00:00 2001
+From: Eric Biggers
+Date: Sun, 8 Sep 2019 20:15:18 -0700
+Subject: fuse: fix deadlock with aio poll and fuse_iqueue::waitq.lock
+
+From: Eric Biggers
+
+commit 76e43c8ccaa35c30d5df853013561145a0f750a5 upstream.
+
+When IOCB_CMD_POLL is used on the FUSE device, aio_poll() disables IRQs
+and takes kioctx::ctx_lock, then fuse_iqueue::waitq.lock.
+
+This may have to wait for fuse_iqueue::waitq.lock to be released by one
+of many places that take it with IRQs enabled.  Since the IRQ handler
+may take kioctx::ctx_lock, lockdep reports that a deadlock is possible.
+
+Fix it by protecting the state of struct fuse_iqueue with a separate
+spinlock, and only accessing fuse_iqueue::waitq using the versions of
+the waitqueue functions which do IRQ-safe locking internally.
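+
+In other words, after the change the queueing paths look roughly like
+this (a simplified sketch of the hunks below, where fiq->lock is the
+new spinlock):
+
+	spin_lock(&fiq->lock);		/* guards fiq->pending etc. */
+	list_add_tail(&req->list, &fiq->pending);
+	wake_up(&fiq->waitq);		/* takes waitq.lock with IRQs off */
+	spin_unlock(&fiq->lock);
+
+so fiq->waitq.lock is only ever taken inside the IRQ-safe waitqueue
+helpers and can never be held with IRQs enabled.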
+
+Reproducer:
+
+	#include <fcntl.h>
+	#include <stdio.h>
+	#include <sys/mount.h>
+	#include <sys/stat.h>
+	#include <sys/syscall.h>
+	#include <unistd.h>
+	#include <linux/aio_abi.h>
+
+	int main()
+	{
+		char opts[128];
+		int fd = open("/dev/fuse", O_RDWR);
+		aio_context_t ctx = 0;
+		struct iocb cb = { .aio_lio_opcode = IOCB_CMD_POLL, .aio_fildes = fd };
+		struct iocb *cbp = &cb;
+
+		sprintf(opts, "fd=%d,rootmode=040000,user_id=0,group_id=0", fd);
+		mkdir("mnt", 0700);
+		mount("foo", "mnt", "fuse", 0, opts);
+		syscall(__NR_io_setup, 1, &ctx);
+		syscall(__NR_io_submit, ctx, 1, &cbp);
+	}
+
+Beginning of lockdep output:
+
+	=====================================================
+	WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
+	5.3.0-rc5 #9 Not tainted
+	-----------------------------------------------------
+	syz_fuse/135 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
+	000000003590ceda (&fiq->waitq){+.+.}, at: spin_lock include/linux/spinlock.h:338 [inline]
+	000000003590ceda (&fiq->waitq){+.+.}, at: aio_poll fs/aio.c:1751 [inline]
+	000000003590ceda (&fiq->waitq){+.+.}, at: __io_submit_one.constprop.0+0x203/0x5b0 fs/aio.c:1825
+
+	and this task is already holding:
+	0000000075037284 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq include/linux/spinlock.h:363 [inline]
+	0000000075037284 (&(&ctx->ctx_lock)->rlock){..-.}, at: aio_poll fs/aio.c:1749 [inline]
+	0000000075037284 (&(&ctx->ctx_lock)->rlock){..-.}, at: __io_submit_one.constprop.0+0x1f4/0x5b0 fs/aio.c:1825
+	which would create a new lock dependency:
+	 (&(&ctx->ctx_lock)->rlock){..-.} -> (&fiq->waitq){+.+.}
+
+	but this new dependency connects a SOFTIRQ-irq-safe lock:
+	 (&(&ctx->ctx_lock)->rlock){..-.}
+
+	[...]
+
+Reported-by: syzbot+af05535bb79520f95431@syzkaller.appspotmail.com
+Reported-by: syzbot+d86c4426a01f60feddc7@syzkaller.appspotmail.com
+Fixes: bfe4037e722e ("aio: implement IOCB_CMD_POLL")
+Cc: stable@vger.kernel.org # v4.19+
+Cc: Christoph Hellwig
+Signed-off-by: Eric Biggers
+Signed-off-by: Miklos Szeredi
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ fs/fuse/dev.c    |   93 ++++++++++++++++++++++++++++---------------------------
+ fs/fuse/fuse_i.h |    3 +
+ fs/fuse/inode.c  |    1 
+ 3 files changed, 52 insertions(+), 45 deletions(-)
+
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -377,7 +377,7 @@ static void queue_request(struct fuse_iq
+ 	req->in.h.len = sizeof(struct fuse_in_header) +
+ 		len_args(req->in.numargs, (struct fuse_arg *) req->in.args);
+ 	list_add_tail(&req->list, &fiq->pending);
+-	wake_up_locked(&fiq->waitq);
++	wake_up(&fiq->waitq);
+ 	kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
+ }
+ 
+@@ -389,16 +389,16 @@ void fuse_queue_forget(struct fuse_conn
+ 	forget->forget_one.nodeid = nodeid;
+ 	forget->forget_one.nlookup = nlookup;
+ 
+-	spin_lock(&fiq->waitq.lock);
++	spin_lock(&fiq->lock);
+ 	if (fiq->connected) {
+ 		fiq->forget_list_tail->next = forget;
+ 		fiq->forget_list_tail = forget;
+-		wake_up_locked(&fiq->waitq);
++		wake_up(&fiq->waitq);
+ 		kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
+ 	} else {
+ 		kfree(forget);
+ 	}
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ }
+ 
+ static void flush_bg_queue(struct fuse_conn *fc)
+@@ -412,10 +412,10 @@ static void flush_bg_queue(struct fuse_c
+ 		req = list_first_entry(&fc->bg_queue, struct fuse_req, list);
+ 		list_del(&req->list);
+ 		fc->active_background++;
+-		spin_lock(&fiq->waitq.lock);
++		spin_lock(&fiq->lock);
+ 		req->in.h.unique = fuse_get_unique(fiq);
+ 		queue_request(fiq, req);
+-		spin_unlock(&fiq->waitq.lock);
++		spin_unlock(&fiq->lock);
+ 	}
+ }
+ 
+@@ -439,9 +439,9 @@ static void request_end(struct fuse_conn
+ 	 * smp_mb() from queue_interrupt().
+	 */
+ 	if (!list_empty(&req->intr_entry)) {
+-		spin_lock(&fiq->waitq.lock);
++		spin_lock(&fiq->lock);
+ 		list_del_init(&req->intr_entry);
+-		spin_unlock(&fiq->waitq.lock);
++		spin_unlock(&fiq->lock);
+ 	}
+ 	WARN_ON(test_bit(FR_PENDING, &req->flags));
+ 	WARN_ON(test_bit(FR_SENT, &req->flags));
+@@ -483,10 +483,10 @@ put_request:
+ 
+ static int queue_interrupt(struct fuse_iqueue *fiq, struct fuse_req *req)
+ {
+-	spin_lock(&fiq->waitq.lock);
++	spin_lock(&fiq->lock);
+ 	/* Check for we've sent request to interrupt this req */
+ 	if (unlikely(!test_bit(FR_INTERRUPTED, &req->flags))) {
+-		spin_unlock(&fiq->waitq.lock);
++		spin_unlock(&fiq->lock);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -499,13 +499,13 @@ static int queue_interrupt(struct fuse_i
+ 		smp_mb();
+ 		if (test_bit(FR_FINISHED, &req->flags)) {
+ 			list_del_init(&req->intr_entry);
+-			spin_unlock(&fiq->waitq.lock);
++			spin_unlock(&fiq->lock);
+ 			return 0;
+ 		}
+-		wake_up_locked(&fiq->waitq);
++		wake_up(&fiq->waitq);
+ 		kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
+ 	}
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ 	return 0;
+ }
+ 
+@@ -535,16 +535,16 @@ static void request_wait_answer(struct f
+ 		if (!err)
+ 			return;
+ 
+-		spin_lock(&fiq->waitq.lock);
++		spin_lock(&fiq->lock);
+ 		/* Request is not yet in userspace, bail out */
+ 		if (test_bit(FR_PENDING, &req->flags)) {
+ 			list_del(&req->list);
+-			spin_unlock(&fiq->waitq.lock);
++			spin_unlock(&fiq->lock);
+ 			__fuse_put_request(req);
+ 			req->out.h.error = -EINTR;
+ 			return;
+ 		}
+-		spin_unlock(&fiq->waitq.lock);
++		spin_unlock(&fiq->lock);
+ 	}
+ 
+ 	/*
+@@ -559,9 +559,9 @@ static void __fuse_request_send(struct f
+ 	struct fuse_iqueue *fiq = &fc->iq;
+ 
+ 	BUG_ON(test_bit(FR_BACKGROUND, &req->flags));
+-	spin_lock(&fiq->waitq.lock);
++	spin_lock(&fiq->lock);
+ 	if (!fiq->connected) {
+-		spin_unlock(&fiq->waitq.lock);
++		spin_unlock(&fiq->lock);
+ 		req->out.h.error = -ENOTCONN;
+ 	} else {
+ 		req->in.h.unique = fuse_get_unique(fiq);
+@@ -569,7 +569,7 @@ static void __fuse_request_send(struct f
+ 		/* acquire extra reference, since request is still needed
+ 		   after request_end() */
+ 		__fuse_get_request(req);
+-		spin_unlock(&fiq->waitq.lock);
++		spin_unlock(&fiq->lock);
+ 
+ 		request_wait_answer(fc, req);
+ 		/* Pairs with smp_wmb() in request_end() */
+@@ -700,12 +700,12 @@ static int fuse_request_send_notify_repl
+ 
+ 	__clear_bit(FR_ISREPLY, &req->flags);
+ 	req->in.h.unique = unique;
+-	spin_lock(&fiq->waitq.lock);
++	spin_lock(&fiq->lock);
+ 	if (fiq->connected) {
+ 		queue_request(fiq, req);
+ 		err = 0;
+ 	}
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ 
+ 	return err;
+ }
+@@ -1149,12 +1149,12 @@ static int request_pending(struct fuse_i
+  * Unlike other requests this is assembled on demand, without a need
+  * to allocate a separate fuse_req structure.
+  *
+- * Called with fiq->waitq.lock held, releases it
++ * Called with fiq->lock held, releases it
+  */
+ static int fuse_read_interrupt(struct fuse_iqueue *fiq,
+ 			       struct fuse_copy_state *cs,
+ 			       size_t nbytes, struct fuse_req *req)
+-__releases(fiq->waitq.lock)
++__releases(fiq->lock)
+ {
+ 	struct fuse_in_header ih;
+ 	struct fuse_interrupt_in arg;
+@@ -1169,7 +1169,7 @@ __releases(fiq->waitq.lock)
+ 	ih.unique = (req->in.h.unique | FUSE_INT_REQ_BIT);
+ 	arg.unique = req->in.h.unique;
+ 
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ 	if (nbytes < reqsize)
+ 		return -EINVAL;
+ 
+@@ -1206,7 +1206,7 @@ static struct fuse_forget_link *dequeue_
+ static int fuse_read_single_forget(struct fuse_iqueue *fiq,
+ 				   struct fuse_copy_state *cs,
+ 				   size_t nbytes)
+-__releases(fiq->waitq.lock)
++__releases(fiq->lock)
+ {
+ 	int err;
+ 	struct fuse_forget_link *forget = dequeue_forget(fiq, 1, NULL);
+@@ -1220,7 +1220,7 @@ __releases(fiq->waitq.lock)
+ 		.len = sizeof(ih) + sizeof(arg),
+ 	};
+ 
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ 	kfree(forget);
+ 	if (nbytes < ih.len)
+ 		return -EINVAL;
+@@ -1238,7 +1238,7 @@ __releases(fiq->waitq.lock)
+ 
+ static int fuse_read_batch_forget(struct fuse_iqueue *fiq,
+ 				  struct fuse_copy_state *cs, size_t nbytes)
+-__releases(fiq->waitq.lock)
++__releases(fiq->lock)
+ {
+ 	int err;
+ 	unsigned max_forgets;
+@@ -1252,13 +1252,13 @@ __releases(fiq->waitq.lock)
+ 	};
+ 
+ 	if (nbytes < ih.len) {
+-		spin_unlock(&fiq->waitq.lock);
++		spin_unlock(&fiq->lock);
+ 		return -EINVAL;
+ 	}
+ 
+ 	max_forgets = (nbytes - ih.len) / sizeof(struct fuse_forget_one);
+ 	head = dequeue_forget(fiq, max_forgets, &count);
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ 
+ 	arg.count = count;
+ 	ih.len += count * sizeof(struct fuse_forget_one);
+@@ -1288,7 +1288,7 @@ __releases(fiq->waitq.lock)
+ static int fuse_read_forget(struct fuse_conn *fc, struct fuse_iqueue *fiq,
+ 			    struct fuse_copy_state *cs,
+ 			    size_t nbytes)
+-__releases(fiq->waitq.lock)
++__releases(fiq->lock)
+ {
+ 	if (fc->minor < 16 || fiq->forget_list_head.next->next == NULL)
+ 		return fuse_read_single_forget(fiq, cs, nbytes);
+@@ -1318,16 +1318,19 @@ static ssize_t fuse_dev_do_read(struct f
+ 	unsigned int hash;
+ 
+ restart:
+-	spin_lock(&fiq->waitq.lock);
+-	err = -EAGAIN;
+-	if ((file->f_flags & O_NONBLOCK) && fiq->connected &&
+-	    !request_pending(fiq))
+-		goto err_unlock;
++	for (;;) {
++		spin_lock(&fiq->lock);
++		if (!fiq->connected || request_pending(fiq))
++			break;
++		spin_unlock(&fiq->lock);
+ 
+-	err = wait_event_interruptible_exclusive_locked(fiq->waitq,
++		if (file->f_flags & O_NONBLOCK)
++			return -EAGAIN;
++		err = wait_event_interruptible_exclusive(fiq->waitq,
+ 				!fiq->connected || request_pending(fiq));
+-	if (err)
+-		goto err_unlock;
++		if (err)
++			return err;
++	}
+ 
+ 	if (!fiq->connected) {
+ 		err = fc->aborted ? -ECONNABORTED : -ENODEV;
+@@ -1351,7 +1354,7 @@ static ssize_t fuse_dev_do_read(struct f
+ 	req = list_entry(fiq->pending.next, struct fuse_req, list);
+ 	clear_bit(FR_PENDING, &req->flags);
+ 	list_del_init(&req->list);
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ 
+ 	in = &req->in;
+ 	reqsize = in->h.len;
+@@ -1409,7 +1412,7 @@ out_end:
+ 	return err;
+ 
+ err_unlock:
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ 	return err;
+ }
+ 
+@@ -2121,12 +2124,12 @@ static __poll_t fuse_dev_poll(struct fil
+ 	fiq = &fud->fc->iq;
+ 	poll_wait(file, &fiq->waitq, wait);
+ 
+-	spin_lock(&fiq->waitq.lock);
++	spin_lock(&fiq->lock);
+ 	if (!fiq->connected)
+ 		mask = EPOLLERR;
+ 	else if (request_pending(fiq))
+ 		mask |= EPOLLIN | EPOLLRDNORM;
+-	spin_unlock(&fiq->waitq.lock);
++	spin_unlock(&fiq->lock);
+ 
+ 	return mask;
+ }
+@@ -2221,15 +2224,15 @@ void fuse_abort_conn(struct fuse_conn *f
+ 		flush_bg_queue(fc);
+ 		spin_unlock(&fc->bg_lock);
+ 
+-		spin_lock(&fiq->waitq.lock);
++		spin_lock(&fiq->lock);
+ 		fiq->connected = 0;
+ 		list_for_each_entry(req, &fiq->pending, list)
+ 			clear_bit(FR_PENDING, &req->flags);
+ 		list_splice_tail_init(&fiq->pending, &to_end);
+ 		while (forget_pending(fiq))
+ 			kfree(dequeue_forget(fiq, 1, NULL));
+-		wake_up_all_locked(&fiq->waitq);
+-		spin_unlock(&fiq->waitq.lock);
++		wake_up_all(&fiq->waitq);
++		spin_unlock(&fiq->lock);
+ 		kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
+ 		end_polls(fc);
+ 		wake_up_all(&fc->blocked_waitq);
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -450,6 +450,9 @@ struct fuse_iqueue {
+ 	/** Connection established */
+ 	unsigned connected;
+ 
++	/** Lock protecting accesses to members of this structure */
++	spinlock_t lock;
++
+ 	/** Readers of the connection are waiting on this */
+ 	wait_queue_head_t waitq;
+ 
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -582,6 +582,7 @@ static int fuse_show_options(struct seq_
+ static void fuse_iqueue_init(struct fuse_iqueue *fiq)
+ {
+ 	memset(fiq, 0, sizeof(struct fuse_iqueue));
++	spin_lock_init(&fiq->lock);
+ 	init_waitqueue_head(&fiq->waitq);
+ 	INIT_LIST_HEAD(&fiq->pending);
+ 	INIT_LIST_HEAD(&fiq->interrupts);
diff --git a/queue-5.3/fuse-fix-missing-unlock_page-in-fuse_writepage.patch b/queue-5.3/fuse-fix-missing-unlock_page-in-fuse_writepage.patch
new file mode 100644
index 00000000000..68ddb7471af
--- /dev/null
+++ b/queue-5.3/fuse-fix-missing-unlock_page-in-fuse_writepage.patch
@@ -0,0 +1,32 @@
+From d5880c7a8620290a6c90ced7a0e8bd0ad9419601 Mon Sep 17 00:00:00 2001
+From: Vasily Averin
+Date: Fri, 13 Sep 2019 18:17:11 +0300
+Subject: fuse: fix missing unlock_page in fuse_writepage()
+
+From: Vasily Averin
+
+commit d5880c7a8620290a6c90ced7a0e8bd0ad9419601 upstream.
+
+unlock_page() was missing in case of an already in-flight write against
+the same page.
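+
+For context, the ->writepage contract requires the page to be unlocked
+on every return path, including the "redirty and try again later" one.
+A minimal sketch of the shape of such a handler (the helpers marked
+hypothetical are illustrative, not kernel APIs):
+
+	static int example_writepage(struct page *page,
+				     struct writeback_control *wbc)
+	{
+		if (write_in_flight(page)) {	/* hypothetical check */
+			redirty_page_for_writepage(wbc, page);
+			unlock_page(page);	/* the call this patch adds */
+			return 0;
+		}
+		return do_writepage(page, wbc);	/* hypothetical */
+	}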
+
+Signed-off-by: Vasily Averin
+Fixes: ff17be086477 ("fuse: writepage: skip already in flight")
+Cc: stable@vger.kernel.org # v3.13
+Signed-off-by: Miklos Szeredi
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ fs/fuse/file.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1767,6 +1767,7 @@ static int fuse_writepage(struct page *p
+ 		WARN_ON(wbc->sync_mode == WB_SYNC_ALL);
+ 
+ 		redirty_page_for_writepage(wbc, page);
++		unlock_page(page);
+ 		return 0;
+ 	}
+ 
diff --git a/queue-5.3/kvm-x86-add-significant-index-flag-to-a-few-cpuid-leaves.patch b/queue-5.3/kvm-x86-add-significant-index-flag-to-a-few-cpuid-leaves.patch
new file mode 100644
index 00000000000..5fccddd663c
--- /dev/null
+++ b/queue-5.3/kvm-x86-add-significant-index-flag-to-a-few-cpuid-leaves.patch
@@ -0,0 +1,44 @@
+From a06dcd625d6181747fac7f4c140b5a4c397a778c Mon Sep 17 00:00:00 2001
+From: Jim Mattson
+Date: Thu, 12 Sep 2019 09:55:03 -0700
+Subject: kvm: x86: Add "significant index" flag to a few CPUID leaves
+
+From: Jim Mattson
+
+commit a06dcd625d6181747fac7f4c140b5a4c397a778c upstream.
+
+According to the Intel SDM, volume 2, "CPUID," the index is
+significant (or partially significant) for CPUID leaves 0FH, 10H, 12H,
+17H, 18H, and 1FH.
+
+Add the corresponding flag to these CPUID leaves in do_host_cpuid().
+
+Signed-off-by: Jim Mattson
+Reviewed-by: Peter Shier
+Reviewed-by: Steve Rutherford
+Fixes: a87f2d3a6eadab ("KVM: x86: Add Intel CPUID.1F cpuid emulation support")
+Reviewed-by: Krish Sadhukhan
+Cc: stable@vger.kernel.org
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/kvm/cpuid.c |    6 ++++++
+ 1 file changed, 6 insertions(+)
+
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -304,7 +304,13 @@ static void do_host_cpuid(struct kvm_cpu
+ 	case 7:
+ 	case 0xb:
+ 	case 0xd:
++	case 0xf:
++	case 0x10:
++	case 0x12:
+ 	case 0x14:
++	case 0x17:
++	case 0x18:
++	case 0x1f:
+ 	case 0x8000001d:
+ 		entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+ 		break;
diff --git a/queue-5.3/kvm-x86-always-stop-emulation-on-page-fault.patch b/queue-5.3/kvm-x86-always-stop-emulation-on-page-fault.patch
new file mode 100644
index 00000000000..9ba0f60cecf
--- /dev/null
+++ b/queue-5.3/kvm-x86-always-stop-emulation-on-page-fault.patch
@@ -0,0 +1,52 @@
+From 8530a79c5a9f4e29e6ffb35ec1a79d81f4968ec8 Mon Sep 17 00:00:00 2001
+From: Jan Dakinevich
+Date: Tue, 27 Aug 2019 13:07:09 +0000
+Subject: KVM: x86: always stop emulation on page fault
+
+From: Jan Dakinevich
+
+commit 8530a79c5a9f4e29e6ffb35ec1a79d81f4968ec8 upstream.
+
+inject_emulated_exception() returns true if and only if a nested page
+fault happens.  However, a page fault can come from a guest page tables
+walk, either nested or not nested.  In both cases we should stop an
+attempt to read under RIP and let the guest step over its own page
+fault handler.
+
+This is also visible when an emulated instruction causes a #GP fault
+and the VMware backdoor is enabled.  To handle the VMware backdoor,
+KVM intercepts #GP faults; with only the next patch applied,
+x86_emulate_instruction() injects a #GP but returns EMULATE_FAIL
+instead of EMULATE_DONE.  EMULATE_FAIL causes handle_exception_nmi()
+(or gp_interception() for SVM) to re-inject the original #GP because it
+thinks emulation failed due to a non-VMware opcode.  This patch prevents
+the issue as x86_emulate_instruction() will return EMULATE_DONE after
+injecting the #GP.
+
+Fixes: 6ea6e84309ca ("KVM: x86: inject exceptions produced by x86_decode_insn")
+Cc: stable@vger.kernel.org
+Cc: Denis Lunev
+Cc: Roman Kagan
+Cc: Denis Plotnikov
+Signed-off-by: Jan Dakinevich
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/kvm/x86.c |    4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6528,8 +6528,10 @@ int x86_emulate_instruction(struct kvm_v
+ 		if (reexecute_instruction(vcpu, cr2, write_fault_to_spt,
+ 					  emulation_type))
+ 			return EMULATE_DONE;
+-		if (ctxt->have_exception && inject_emulated_exception(vcpu))
++		if (ctxt->have_exception) {
++			inject_emulated_exception(vcpu);
+ 			return EMULATE_DONE;
++		}
+ 		if (emulation_type & EMULTYPE_SKIP)
+ 			return EMULATE_FAIL;
+ 		return handle_emulation_failure(vcpu, emulation_type);
diff --git a/queue-5.3/kvm-x86-disable-posted-interrupts-for-non-standard-irqs-delivery-modes.patch b/queue-5.3/kvm-x86-disable-posted-interrupts-for-non-standard-irqs-delivery-modes.patch
new file mode 100644
index 00000000000..642e79c64df
--- /dev/null
+++ b/queue-5.3/kvm-x86-disable-posted-interrupts-for-non-standard-irqs-delivery-modes.patch
@@ -0,0 +1,92 @@
+From fdcf756213756c23b533ca4974d1f48c6a4d4281 Mon Sep 17 00:00:00 2001
+From: Alexander Graf
+Date: Thu, 5 Sep 2019 14:58:18 +0200
+Subject: KVM: x86: Disable posted interrupts for non-standard IRQs delivery modes
+
+From: Alexander Graf
+
+commit fdcf756213756c23b533ca4974d1f48c6a4d4281 upstream.
+
+We can easily route hardware interrupts directly into VM context when
+they target the "Fixed" or "LowPriority" delivery modes.
+
+However, on modes such as "SMI" or "Init", we need to go via KVM code
+to actually put the vCPU into a different mode of operation, so we can
+not post the interrupt.
+
+Add code in the VMX and SVM PI logic to explicitly refuse to establish
+posted mappings for advanced IRQ delivery modes.  This reflects the logic
+in __apic_accept_irq() which also only ever passes Fixed and LowPriority
+interrupts as posted interrupts into the guest.
+
+This fixes a bug I have with code which configures real hardware to
+inject virtual SMIs into my guest.
+
+Signed-off-by: Alexander Graf
+Reviewed-by: Liran Alon
+Reviewed-by: Sean Christopherson
+Reviewed-by: Wanpeng Li
+Cc: stable@vger.kernel.org
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/include/asm/kvm_host.h |    7 +++++++
+ arch/x86/kvm/svm.c              |    4 +++-
+ arch/x86/kvm/vmx/vmx.c          |    6 +++++-
+ 3 files changed, 15 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1583,6 +1583,13 @@ bool kvm_intr_is_single_vcpu(struct kvm
+ void kvm_set_msi_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
+ 		     struct kvm_lapic_irq *irq);
+ 
++static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
++{
++	/* We can only post Fixed and LowPrio IRQs */
++	return (irq->delivery_mode == dest_Fixed ||
++		irq->delivery_mode == dest_LowestPrio);
++}
++
+ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
+ {
+ 	if (kvm_x86_ops->vcpu_blocking)
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -5274,7 +5274,8 @@ get_pi_vcpu_info(struct kvm *kvm, struct
+ 
+ 	kvm_set_msi_irq(kvm, e, &irq);
+ 
+-	if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu)) {
++	if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) ||
++	    !kvm_irq_is_postable(&irq)) {
+ 		pr_debug("SVM: %s: use legacy intr remap mode for irq %u\n",
+ 			 __func__, irq.vector);
+ 		return -1;
+@@ -5328,6 +5329,7 @@ static int svm_update_pi_irte(struct kvm
+ 	 * 1. When cannot target interrupt to a specific vcpu.
+ 	 * 2. Unsetting posted interrupt.
+ 	 * 3. APIC virtialization is disabled for the vcpu.
++	 * 4. IRQ has incompatible delivery mode (SMI, INIT, etc)
+ 	 */
+ 	if (!get_pi_vcpu_info(kvm, e, &vcpu_info, &svm) && set &&
+ 	    kvm_vcpu_apicv_active(&svm->vcpu)) {
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7369,10 +7369,14 @@ static int vmx_update_pi_irte(struct kvm
+ 		 * irqbalance to make the interrupts single-CPU.
+ 		 *
+ 		 * We will support full lowest-priority interrupt later.
++		 *
++		 * In addition, we can only inject generic interrupts using
++		 * the PI mechanism, refuse to route others through it.
+ 		 */
+ 
+ 		kvm_set_msi_irq(kvm, e, &irq);
+-		if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu)) {
++		if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) ||
++		    !kvm_irq_is_postable(&irq)) {
+ 			/*
+ 			 * Make sure the IRTE is in remapped mode if
+ 			 * we don't handle it in posted mode.
diff --git a/queue-5.3/kvm-x86-manually-calculate-reserved-bits-when-loading-pdptrs.patch b/queue-5.3/kvm-x86-manually-calculate-reserved-bits-when-loading-pdptrs.patch
new file mode 100644
index 00000000000..ce524849bd6
--- /dev/null
+++ b/queue-5.3/kvm-x86-manually-calculate-reserved-bits-when-loading-pdptrs.patch
@@ -0,0 +1,75 @@
+From 16cfacc8085782dab8e365979356ce1ca87fd6cc Mon Sep 17 00:00:00 2001
+From: Sean Christopherson
+Date: Tue, 3 Sep 2019 16:36:45 -0700
+Subject: KVM: x86: Manually calculate reserved bits when loading PDPTRS
+
+From: Sean Christopherson
+
+commit 16cfacc8085782dab8e365979356ce1ca87fd6cc upstream.
+
+Manually generate the PDPTR reserved bit mask when explicitly loading
+PDPTRs.  The reserved bits that are being tracked by the MMU reflect the
+current paging mode, which is unlikely to be PAE paging in the vast
+majority of flows that use load_pdptrs(), e.g. CR0 and CR4 emulation,
+__set_sregs(), etc...  This can cause KVM to incorrectly signal a bad
+PDPTR, or more likely, miss a reserved bit check and subsequently fail
+a VM-Enter due to a bad VMCS.GUEST_PDPTR.
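+
+As a worked example of the mask the new helper computes (see the hunk
+below): for a guest with MAXPHYADDR = 36, pdptr_rsvd_bits() returns
+
+	rsvd_bits(36, 63)	-> 0xfffffff000000000
+	rsvd_bits(5, 8)		-> 0x00000000000001e0
+	rsvd_bits(1, 2)		-> 0x0000000000000006
+
+i.e. bits 63:36, 8:5 and 2:1 must be clear in a present PDPTE,
+regardless of the paging mode the MMU is currently tracking.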
+
+Add a one-off helper to generate the reserved bits instead of sharing
+code across the MMU's calculations and the PDPTR emulation.  The PDPTR
+reserved bits are basically set in stone, and pushing a helper into
+the MMU's calculation adds unnecessary complexity without improving
+readability.
+
+Opportunistically fix/update the comment for load_pdptrs().
+
+Note, the buggy commit also introduced a deliberate functional change,
+"Also remove bit 5-6 from rsvd_bits_mask per latest SDM.", which was
+effectively (and correctly) reverted by commit cd9ae5fe47df ("KVM: x86:
+Fix page-tables reserved bits").  A bit of SDM archaeology shows that
+the SDM from late 2008 had a bug (likely a copy+paste error) where it
+listed bits 6:5 as AVL and A for PDPTEs used for 4k entries but reserved
+for 2mb entries.  I.e. the SDM contradicted itself, and bits 6:5 are and
+always have been reserved.
+
+Fixes: 20c466b56168d ("KVM: Use rsvd_bits_mask in load_pdptrs()")
+Cc: stable@vger.kernel.org
+Cc: Nadav Amit
+Reported-by: Doug Reiland
+Signed-off-by: Sean Christopherson
+Reviewed-by: Peter Xu
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/kvm/x86.c |   11 ++++++++---
+ 1 file changed, 8 insertions(+), 3 deletions(-)
+
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -674,8 +674,14 @@ static int kvm_read_nested_guest_page(st
+ 				       data, offset, len, access);
+ }
+ 
++static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
++{
++	return rsvd_bits(cpuid_maxphyaddr(vcpu), 63) | rsvd_bits(5, 8) |
++	       rsvd_bits(1, 2);
++}
++
+ /*
+- * Load the pae pdptrs.  Return true is they are all valid.
++ * Load the pae pdptrs.  Return 1 if they are all valid, 0 otherwise.
+  */
+ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
+ {
+@@ -694,8 +700,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, s
+ 	}
+ 	for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
+ 		if ((pdpte[i] & PT_PRESENT_MASK) &&
+-		    (pdpte[i] &
+-		     vcpu->arch.mmu->guest_rsvd_check.rsvd_bits_mask[0][2])) {
++		    (pdpte[i] & pdptr_rsvd_bits(vcpu))) {
+ 			ret = 0;
+ 			goto out;
+ 		}
diff --git a/queue-5.3/kvm-x86-mmu-use-fast-invalidate-mechanism-to-zap-mmio-sptes.patch b/queue-5.3/kvm-x86-mmu-use-fast-invalidate-mechanism-to-zap-mmio-sptes.patch
new file mode 100644
index 00000000000..dc9a58f1749
--- /dev/null
+++ b/queue-5.3/kvm-x86-mmu-use-fast-invalidate-mechanism-to-zap-mmio-sptes.patch
@@ -0,0 +1,91 @@
+From 92f58b5c0181596d9f1e317b49ada2e728fb76eb Mon Sep 17 00:00:00 2001
+From: Sean Christopherson
+Date: Thu, 12 Sep 2019 19:46:04 -0700
+Subject: KVM: x86/mmu: Use fast invalidate mechanism to zap MMIO sptes
+
+From: Sean Christopherson
+
+commit 92f58b5c0181596d9f1e317b49ada2e728fb76eb upstream.
+
+Use the fast invalidate mechanism to zap MMIO sptes on an MMIO generation
+wrap.  The fast invalidate flow was reintroduced to fix a livelock bug
+in kvm_mmu_zap_all() that can occur if kvm_mmu_zap_all() is invoked when
+the guest has live vCPUs.  I.e. using kvm_mmu_zap_all() to handle the
+MMIO generation wrap is theoretically susceptible to the livelock bug.
+
+This effectively reverts commit 4771450c345dc ("Revert "KVM: MMU: drop
+kvm_mmu_zap_mmio_sptes""), i.e. restores the behavior of commit
+a8eca9dcc656a ("KVM: MMU: drop kvm_mmu_zap_mmio_sptes").
+
+Note, this actually fixes commit 571c5af06e303 ("KVM: x86/mmu:
+Voluntarily reschedule as needed when zapping MMIO sptes"), but there
+is no need to incrementally revert back to using fast invalidate, e.g.
+doing so doesn't provide any bisection or stability benefits.
+
+Fixes: 571c5af06e303 ("KVM: x86/mmu: Voluntarily reschedule as needed when zapping MMIO sptes")
+Cc: stable@vger.kernel.org
+Signed-off-by: Sean Christopherson
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/kvm/mmu.c |   17 +++--------------
+ 1 file changed, 3 insertions(+), 14 deletions(-)
+
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -395,8 +395,6 @@ static void mark_mmio_spte(struct kvm_vc
+ 	mask |= (gpa & shadow_nonpresent_or_rsvd_mask)
+ 		<< shadow_nonpresent_or_rsvd_mask_len;
+ 
+-	page_header(__pa(sptep))->mmio_cached = true;
+-
+ 	trace_mark_mmio_spte(sptep, gfn, access, gen);
+ 	mmu_spte_set(sptep, mask);
+ }
+@@ -5956,7 +5954,7 @@ void kvm_mmu_slot_set_dirty(struct kvm *
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
+ 
+-static void __kvm_mmu_zap_all(struct kvm *kvm, bool mmio_only)
++void kvm_mmu_zap_all(struct kvm *kvm)
+ {
+ 	struct kvm_mmu_page *sp, *node;
+ 	LIST_HEAD(invalid_list);
+@@ -5965,14 +5963,10 @@ static void __kvm_mmu_zap_all(struct kvm
+ 	spin_lock(&kvm->mmu_lock);
+ restart:
+ 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
+-		if (mmio_only && !sp->mmio_cached)
+-			continue;
+ 		if (sp->role.invalid && sp->root_count)
+ 			continue;
+-		if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign)) {
+-			WARN_ON_ONCE(mmio_only);
++		if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign))
+ 			goto restart;
+-		}
+ 		if (cond_resched_lock(&kvm->mmu_lock))
+ 			goto restart;
+ 	}
+@@ -5981,11 +5975,6 @@ restart:
+ 	spin_unlock(&kvm->mmu_lock);
+ }
+ 
+-void kvm_mmu_zap_all(struct kvm *kvm)
+-{
+-	return __kvm_mmu_zap_all(kvm, false);
+-}
+-
+ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
+ {
+ 	WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
+@@ -6007,7 +5996,7 @@ void kvm_mmu_invalidate_mmio_sptes(struc
+ 	 */
+ 	if (unlikely(gen == 0)) {
+ 		kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n");
+-		__kvm_mmu_zap_all(kvm, true);
++		kvm_mmu_zap_all_fast(kvm);
+ 	}
+ }
+ 
diff --git a/queue-5.3/kvm-x86-set-ctxt-have_exception-in-x86_decode_insn.patch b/queue-5.3/kvm-x86-set-ctxt-have_exception-in-x86_decode_insn.patch
new file mode 100644
index 00000000000..c74b1b616e2
--- /dev/null
+++ b/queue-5.3/kvm-x86-set-ctxt-have_exception-in-x86_decode_insn.patch
@@ -0,0 +1,53 @@
+From c8848cee74ff05638e913582a476bde879c968ad Mon Sep 17 00:00:00 2001
+From: Jan Dakinevich
+Date: Tue, 27 Aug 2019 13:07:08 +0000
+Subject: KVM: x86: set ctxt->have_exception in x86_decode_insn()
+
+From: Jan Dakinevich
+
+commit c8848cee74ff05638e913582a476bde879c968ad upstream.
+
+x86_emulate_instruction() takes into account the ctxt->have_exception flag
+during instruction decoding, but in practice this flag is never set in
+x86_decode_insn().
+
+Fixes: 6ea6e84309ca ("KVM: x86: inject exceptions produced by x86_decode_insn")
+Cc: stable@vger.kernel.org
+Cc: Denis Lunev
+Cc: Roman Kagan
+Cc: Denis Plotnikov
+Signed-off-by: Jan Dakinevich
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/kvm/emulate.c |    2 ++
+ arch/x86/kvm/x86.c     |    6 ++++++
+ 2 files changed, 8 insertions(+)
+
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -5395,6 +5395,8 @@ done_prefixes:
+ 			ctxt->memopp->addr.mem.ea + ctxt->_eip);
+ 
+ done:
++	if (rc == X86EMUL_PROPAGATE_FAULT)
++		ctxt->have_exception = true;
+ 	return (rc != X86EMUL_CONTINUE) ? EMULATION_FAILED : EMULATION_OK;
+ }
+ 
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6529,6 +6529,12 @@ int x86_emulate_instruction(struct kvm_v
+ 				emulation_type))
+ 			return EMULATE_DONE;
+ 		if (ctxt->have_exception) {
++			/*
++			 * #UD should result in just EMULATION_FAILED, and trap-like
++			 * exception should not be encountered during decode.
++			 */
++			WARN_ON_ONCE(ctxt->exception.vector == UD_VECTOR ||
++				     exception_type(ctxt->exception.vector) == EXCPT_TRAP);
+ 			inject_emulated_exception(vcpu);
+ 			return EMULATE_DONE;
+ 		}
diff --git a/queue-5.3/parisc-disable-hp-hsc-pci-cards-to-prevent-kernel-crash.patch b/queue-5.3/parisc-disable-hp-hsc-pci-cards-to-prevent-kernel-crash.patch
new file mode 100644
index 00000000000..d6c7d896f6b
--- /dev/null
+++ b/queue-5.3/parisc-disable-hp-hsc-pci-cards-to-prevent-kernel-crash.patch
@@ -0,0 +1,73 @@
+From 5fa1659105fac63e0f3c199b476025c2e04111ce Mon Sep 17 00:00:00 2001
+From: Helge Deller
+Date: Thu, 5 Sep 2019 16:44:17 +0200
+Subject: parisc: Disable HP HSC-PCI Cards to prevent kernel crash
+
+From: Helge Deller
+
+commit 5fa1659105fac63e0f3c199b476025c2e04111ce upstream.
+
+The HP Dino PCI controller chip can be used in two variants: as on-board
+controller (e.g. in B160L), or on an Add-On card ("Card-Mode") to bridge
+PCI components to systems without a PCI bus, e.g. to a HSC/GSC bus.  One
+such Add-On card is the HP HSC-PCI Card which has one or more DEC Tulip
+PCI NIC chips connected to the on-card Dino PCI controller.
+
+Dino in Card-Mode has a big disadvantage: All PCI memory accesses need
+to go through the DINO_MEM_DATA register, so Linux drivers will not be
+able to use the ioremap() function.  Without ioremap() many drivers will
+not work; one example is the tulip driver, which then simply crashes the
+kernel if it tries to access the ports on the HP HSC card.
+
+This patch disables the HP HSC card if it finds one, and as such
+fixes the kernel crash on a HP D350/2 machine.
+
+Signed-off-by: Helge Deller
+Noticed-by: Phil Scarr
+Cc: stable@vger.kernel.org
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/parisc/dino.c |   24 ++++++++++++++++++++++++
+ 1 file changed, 24 insertions(+)
+
+--- a/drivers/parisc/dino.c
++++ b/drivers/parisc/dino.c
+@@ -156,6 +156,15 @@ static inline struct dino_device *DINO_D
+ 	return container_of(hba, struct dino_device, hba);
+ }
+ 
++/* Check if PCI device is behind a Card-mode Dino. */
++static int pci_dev_is_behind_card_dino(struct pci_dev *dev)
++{
++	struct dino_device *dino_dev;
++
++	dino_dev = DINO_DEV(parisc_walk_tree(dev->bus->bridge));
++	return is_card_dino(&dino_dev->hba.dev->id);
++}
++
+ /*
+  * Dino Configuration Space Accessor Functions
+  */
+@@ -437,6 +446,21 @@ static void quirk_cirrus_cardbus(struct
+ }
+ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_6832, quirk_cirrus_cardbus );
+ 
++#ifdef CONFIG_TULIP
++static void pci_fixup_tulip(struct pci_dev *dev)
++{
++	if (!pci_dev_is_behind_card_dino(dev))
++		return;
++	if (!(pci_resource_flags(dev, 1) & IORESOURCE_MEM))
++		return;
++	pr_warn("%s: HP HSC-PCI Cards with card-mode Dino not yet supported.\n",
++		pci_name(dev));
++	/* Disable this card by zeroing the PCI resources */
++	memset(&dev->resource[0], 0, sizeof(dev->resource[0]));
++	memset(&dev->resource[1], 0, sizeof(dev->resource[1]));
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_DEC, PCI_ANY_ID, pci_fixup_tulip);
++#endif /* CONFIG_TULIP */
+ 
+ static void __init
+ dino_bios_init(void)
diff --git a/queue-5.3/platform-x86-intel_int0002_vgpio-fix-wakeups-not-working-on-cherry-trail.patch b/queue-5.3/platform-x86-intel_int0002_vgpio-fix-wakeups-not-working-on-cherry-trail.patch
new file mode 100644
index 00000000000..8b12e1a7ba0
--- /dev/null
+++ b/queue-5.3/platform-x86-intel_int0002_vgpio-fix-wakeups-not-working-on-cherry-trail.patch
@@ -0,0 +1,39 @@
+From 1bd43d0077b9a32a8b8059036471f3fc82dae342 Mon Sep 17 00:00:00 2001
+From: Hans de Goede
+Date: Fri, 23 Aug 2019 19:48:14 +0200
+Subject: platform/x86: intel_int0002_vgpio: Fix wakeups not working on Cherry Trail
+
+From: Hans de Goede
+
+commit 1bd43d0077b9a32a8b8059036471f3fc82dae342 upstream.
+
+Commit 871f1f2bcb01 ("platform/x86: intel_int0002_vgpio: Only implement
+irq_set_wake on Bay Trail") removed the irq_set_wake method from the
+struct irq_chip used on Cherry Trail, but it did not set
+IRQCHIP_SKIP_SET_WAKE, causing kernel/irq/manage.c: set_irq_wake_real()
+to return -ENXIO.
+
+This causes the kernel to no longer see PME events reported through the
+INT0002 device as wakeup events, which e.g. breaks wakeup by the (USB)
+keyboard on many Cherry Trail 2-in-1 devices.
+
+Cc: stable@vger.kernel.org
+Fixes: 871f1f2bcb01 ("platform/x86: intel_int0002_vgpio: Only implement irq_set_wake on Bay Trail")
+Signed-off-by: Hans de Goede
+Signed-off-by: Andy Shevchenko
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/platform/x86/intel_int0002_vgpio.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/platform/x86/intel_int0002_vgpio.c
++++ b/drivers/platform/x86/intel_int0002_vgpio.c
+@@ -144,6 +144,7 @@ static struct irq_chip int0002_cht_irqch
+ 	 * No set_wake, on CHT the IRQ is typically shared with the ACPI SCI
+ 	 * and we don't want to mess with the ACPI SCI irq settings.
+	 */
++	.flags = IRQCHIP_SKIP_SET_WAKE,
+ };
+ 
+ static const struct x86_cpu_id int0002_cpu_ids[] = {
diff --git a/queue-5.3/series b/queue-5.3/series
index a6734f44b4c..ecb83153975 100644
--- a/queue-5.3/series
+++ b/queue-5.3/series
@@ -252,3 +252,17 @@ alsa-hda-realtek-pci-quirk-for-medion-e4254.patch
 blk-mq-add-callback-of-.cleanup_rq.patch
 scsi-implement-.cleanup_rq-callback.patch
 powerpc-imc-dont-create-debugfs-files-for-cpu-less-nodes.patch
+tpm_tis_core-turn-on-the-tpm-before-probing-irq-s.patch
+tpm_tis_core-set-tpm_chip_flag_irq-before-probing-for-interrupts.patch
+tpm-wrap-the-buffer-from-the-caller-to-tpm_buf-in-tpm_send.patch
+fuse-fix-deadlock-with-aio-poll-and-fuse_iqueue-waitq.lock.patch
+fuse-fix-missing-unlock_page-in-fuse_writepage.patch
+fuse-fix-beyond-end-of-page-access-in-fuse_parse_cache.patch
+parisc-disable-hp-hsc-pci-cards-to-prevent-kernel-crash.patch
+platform-x86-intel_int0002_vgpio-fix-wakeups-not-working-on-cherry-trail.patch
+kvm-x86-always-stop-emulation-on-page-fault.patch
+kvm-x86-set-ctxt-have_exception-in-x86_decode_insn.patch
+kvm-x86-manually-calculate-reserved-bits-when-loading-pdptrs.patch
+kvm-x86-disable-posted-interrupts-for-non-standard-irqs-delivery-modes.patch
+kvm-x86-add-significant-index-flag-to-a-few-cpuid-leaves.patch
+kvm-x86-mmu-use-fast-invalidate-mechanism-to-zap-mmio-sptes.patch
diff --git a/queue-5.3/tpm-wrap-the-buffer-from-the-caller-to-tpm_buf-in-tpm_send.patch b/queue-5.3/tpm-wrap-the-buffer-from-the-caller-to-tpm_buf-in-tpm_send.patch
new file mode 100644
index 00000000000..5c7b5258e01
--- /dev/null
+++ b/queue-5.3/tpm-wrap-the-buffer-from-the-caller-to-tpm_buf-in-tpm_send.patch
@@ -0,0 +1,55 @@
+From e13cd21ffd50a07b55dcc4d8c38cedf27f28eaa1 Mon Sep 17 00:00:00 2001
+From: Jarkko Sakkinen
+Date: Mon, 16 Sep 2019 11:38:34 +0300
+Subject: tpm: Wrap the buffer from the caller to tpm_buf in tpm_send()
+
+From: Jarkko Sakkinen
+
+commit e13cd21ffd50a07b55dcc4d8c38cedf27f28eaa1 upstream.
+
+tpm_send() no longer gives the result back to the caller.  Doing so
+would require another memcpy(), which kind of tells that the whole
+approach is somewhat broken.  Instead, as Mimi suggested, this commit
+just wraps the data in a tpm_buf, and thus the result will not go to
+the garbage.
+
+Obviously this assumes that the caller passes a large enough buffer,
+which makes the whole API somewhat broken because the result could be
+a different size than @buflen, but since trusted keys is the only
+module using this API right now, I think that this fix is sufficient
+for the moment.
+
+In the near future the plan is to replace the parameters with a tpm_buf
+created by the caller.
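+
+From the caller's side the constraint looks roughly like this
+(illustrative only; the size macro and encoder are made up):
+
+	u8 cmd[TPM_MAX_BUF_SIZE];	/* hypothetical; must also fit the response */
+	size_t buflen = build_cmd(cmd);	/* hypothetical command encoder */
+
+	rc = tpm_send(chip, cmd, buflen);
+	/* on success the TPM response now starts at cmd[0], because
+	 * the tpm_buf wraps 'cmd' in place instead of copying it */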
+
+Reported-by: Mimi Zohar
+Suggested-by: Mimi Zohar
+Cc: stable@vger.kernel.org
+Fixes: 412eb585587a ("use tpm_buf in tpm_transmit_cmd() as the IO parameter")
+Signed-off-by: Jarkko Sakkinen
+Reviewed-by: Jerry Snitselaar
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/char/tpm/tpm-interface.c |    9 ++-------
+ 1 file changed, 2 insertions(+), 7 deletions(-)
+
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -354,14 +354,9 @@ int tpm_send(struct tpm_chip *chip, void
+ 	if (!chip)
+ 		return -ENODEV;
+ 
+-	rc = tpm_buf_init(&buf, 0, 0);
+-	if (rc)
+-		goto out;
+-
+-	memcpy(buf.data, cmd, buflen);
++	buf.data = cmd;
+ 	rc = tpm_transmit_cmd(chip, &buf, 0, "attempting to a send a command");
+-	tpm_buf_destroy(&buf);
+-out:
++
+ 	tpm_put_ops(chip);
+ 	return rc;
+ }
diff --git a/queue-5.3/tpm_tis_core-set-tpm_chip_flag_irq-before-probing-for-interrupts.patch b/queue-5.3/tpm_tis_core-set-tpm_chip_flag_irq-before-probing-for-interrupts.patch
new file mode 100644
index 00000000000..785aba56c74
--- /dev/null
+++ b/queue-5.3/tpm_tis_core-set-tpm_chip_flag_irq-before-probing-for-interrupts.patch
@@ -0,0 +1,33 @@
+From 1ea32c83c699df32689d329b2415796b7bfc2f6e Mon Sep 17 00:00:00 2001
+From: Stefan Berger
+Date: Thu, 29 Aug 2019 20:09:06 -0400
+Subject: tpm_tis_core: Set TPM_CHIP_FLAG_IRQ before probing for interrupts
+
+From: Stefan Berger
+
+commit 1ea32c83c699df32689d329b2415796b7bfc2f6e upstream.
+
+The tpm_tis_core has to set TPM_CHIP_FLAG_IRQ before probing for
+interrupts since there is no other place in the code that would set
+it.
+
+Cc: linux-stable@vger.kernel.org
+Fixes: 570a36097f30 ("tpm: drop 'irq' from struct tpm_vendor_specific")
+Signed-off-by: Stefan Berger
+Signed-off-by: Jarkko Sakkinen
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/char/tpm/tpm_tis_core.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -981,6 +981,7 @@ int tpm_tis_core_init(struct device *dev
+ 	}
+ 
+ 	tpm_chip_start(chip);
++	chip->flags |= TPM_CHIP_FLAG_IRQ;
+ 	if (irq) {
+ 		tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
+ 					 irq);
diff --git a/queue-5.3/tpm_tis_core-turn-on-the-tpm-before-probing-irq-s.patch b/queue-5.3/tpm_tis_core-turn-on-the-tpm-before-probing-irq-s.patch
new file mode 100644
index 00000000000..9b42d7ba5f2
--- /dev/null
+++ b/queue-5.3/tpm_tis_core-turn-on-the-tpm-before-probing-irq-s.patch
@@ -0,0 +1,42 @@
+From 5b359c7c43727e624eac3efc7ad21bd2defea161 Mon Sep 17 00:00:00 2001
+From: Stefan Berger
+Date: Tue, 20 Aug 2019 08:25:17 -0400
+Subject: tpm_tis_core: Turn on the TPM before probing IRQ's
+
+From: Stefan Berger
+
+commit 5b359c7c43727e624eac3efc7ad21bd2defea161 upstream.
+
+The interrupt probing sequence in tpm_tis_core obviously cannot run
+with the TPM power gated.  Power on the TPM with tpm_chip_start()
+before probing IRQ's.  Turn it off once the probing is complete.
+
+Cc: linux-stable@vger.kernel.org
+Fixes: a3fbfae82b4c ("tpm: take TPM chip power gating out of tpm_transmit()")
+Signed-off-by: Stefan Berger
+Reviewed-by: Jarkko Sakkinen
+Signed-off-by: Jarkko Sakkinen
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/char/tpm/tpm_tis_core.c |    2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -980,6 +980,7 @@ int tpm_tis_core_init(struct device *dev
+ 			goto out_err;
+ 		}
+ 
++		tpm_chip_start(chip);
+ 		if (irq) {
+ 			tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
+ 						 irq);
+@@ -989,6 +990,7 @@ int tpm_tis_core_init(struct device *dev
+ 		} else {
+ 			tpm_tis_probe_irq(chip, intmask);
+ 		}
++		tpm_chip_stop(chip);
+ 	}
+ 
+ 	rc = tpm_chip_register(chip);