From: Greg Kroah-Hartman
Date: Thu, 1 Feb 2018 08:24:02 +0000 (+0100)
Subject: 4.4-stable patches
X-Git-Tag: v4.4.115~17
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=3eb6834868727689a18f068a06a2cd1f79a080ea;p=thirdparty%2Fkernel%2Fstable-queue.git

4.4-stable patches

added patches:
        alsa-seq-make-ioctls-race-free.patch
        kaiser-fix-intel_bts-perf-crashes.patch
        x86-pti-make-unpoison-of-pgd-for-trusted-boot-work-for-real.patch
---

diff --git a/queue-4.4/alsa-seq-make-ioctls-race-free.patch b/queue-4.4/alsa-seq-make-ioctls-race-free.patch
new file mode 100644
index 00000000000..a60c053f03d
--- /dev/null
+++ b/queue-4.4/alsa-seq-make-ioctls-race-free.patch
@@ -0,0 +1,77 @@
+From b3defb791b26ea0683a93a4f49c77ec45ec96f10 Mon Sep 17 00:00:00 2001
+From: Takashi Iwai
+Date: Tue, 9 Jan 2018 23:11:03 +0100
+Subject: ALSA: seq: Make ioctls race-free
+
+From: Takashi Iwai
+
+commit b3defb791b26ea0683a93a4f49c77ec45ec96f10 upstream.
+
+The ALSA sequencer ioctls have no protection against racy calls while
+the concurrent operations may lead to interfere with each other. As
+reported recently, for example, the concurrent calls of setting client
+pool with a combination of write calls may lead to either the
+unkillable dead-lock or UAF.
+
+As a slightly big hammer solution, this patch introduces the mutex to
+make each ioctl exclusive. Although this may reduce performance via
+parallel ioctl calls, usually it's not demanded for sequencer usages,
+hence it should be negligible.
+
+Reported-by: Luo Quan
+Reviewed-by: Kees Cook
+Reviewed-by: Greg Kroah-Hartman
+Signed-off-by: Takashi Iwai
+[bwh: Backported to 4.4: ioctl dispatch is done from snd_seq_do_ioctl();
+ take the mutex and add ret variable there.]
+Signed-off-by: Ben Hutchings
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ sound/core/seq/seq_clientmgr.c |   10 ++++++++--
+ sound/core/seq/seq_clientmgr.h |    1 +
+ 2 files changed, 9 insertions(+), 2 deletions(-)
+
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -236,6 +236,7 @@ static struct snd_seq_client *seq_create
+         rwlock_init(&client->ports_lock);
+         mutex_init(&client->ports_mutex);
+         INIT_LIST_HEAD(&client->ports_list_head);
++        mutex_init(&client->ioctl_mutex);
+
+         /* find free slot in the client table */
+         spin_lock_irqsave(&clients_lock, flags);
+@@ -2195,6 +2196,7 @@ static int snd_seq_do_ioctl(struct snd_s
+                             void __user *arg)
+ {
+         struct seq_ioctl_table *p;
++        int ret;
+
+         switch (cmd) {
+         case SNDRV_SEQ_IOCTL_PVERSION:
+@@ -2208,8 +2210,12 @@ static int snd_seq_do_ioctl(struct snd_s
+                 if (! arg)
+                         return -EFAULT;
+         for (p = ioctl_tables; p->cmd; p++) {
+-                if (p->cmd == cmd)
+-                        return p->func(client, arg);
++                if (p->cmd == cmd) {
++                        mutex_lock(&client->ioctl_mutex);
++                        ret = p->func(client, arg);
++                        mutex_unlock(&client->ioctl_mutex);
++                        return ret;
++                }
+         }
+         pr_debug("ALSA: seq unknown ioctl() 0x%x (type='%c', number=0x%02x)\n",
+                 cmd, _IOC_TYPE(cmd), _IOC_NR(cmd));
+--- a/sound/core/seq/seq_clientmgr.h
++++ b/sound/core/seq/seq_clientmgr.h
+@@ -59,6 +59,7 @@ struct snd_seq_client {
+         struct list_head ports_list_head;
+         rwlock_t ports_lock;
+         struct mutex ports_mutex;
++        struct mutex ioctl_mutex;
+         int convert32;          /* convert 32->64bit */
+
+         /* output pool */
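
For context, here is a minimal user-space sketch (not part of the queued patch, and not
kernel code) of the serialization pattern the fix above applies: the whole command
dispatch for a client runs under that client's mutex, so two threads can no longer race
inside the handlers. The names used here (struct client, set_pool, do_cmd, handlers) are
purely illustrative stand-ins, not kernel symbols.

#include <pthread.h>

/*
 * Illustrative analogue of client->ioctl_mutex guarding the ioctl dispatch.
 * The caller is assumed to initialize ioctl_mutex (e.g. PTHREAD_MUTEX_INITIALIZER).
 */
struct client {
        pthread_mutex_t ioctl_mutex;
        int pool_size;                          /* state the handlers mutate */
};

static int set_pool(struct client *c, int arg)
{
        c->pool_size = arg;                     /* safe: ioctl_mutex is held */
        return 0;
}

static int (*const handlers[])(struct client *, int) = { set_pool };

int do_cmd(struct client *c, unsigned int cmd, int arg)
{
        int ret = -1;

        if (cmd >= sizeof(handlers) / sizeof(handlers[0]))
                return ret;                     /* unknown command */
        pthread_mutex_lock(&c->ioctl_mutex);    /* one command at a time ... */
        ret = handlers[cmd](c, arg);
        pthread_mutex_unlock(&c->ioctl_mutex);  /* ... per client */
        return ret;
}
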
diff --git a/queue-4.4/kaiser-fix-intel_bts-perf-crashes.patch b/queue-4.4/kaiser-fix-intel_bts-perf-crashes.patch
new file mode 100644
index 00000000000..6b68272004f
--- /dev/null
+++ b/queue-4.4/kaiser-fix-intel_bts-perf-crashes.patch
@@ -0,0 +1,134 @@
+From hughd@google.com Thu Feb 1 09:09:20 2018
+From: Hugh Dickins
+Date: Mon, 29 Jan 2018 18:15:33 -0800
+Subject: kaiser: fix intel_bts perf crashes
+To: Greg Kroah-Hartman
+Cc: Hugh Dickins, Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Alexander Shishkin, Linus Torvalds, Vince Weaver, stable@vger.kernel.org, Jiri Kosina
+Message-ID: <20180130021533.228782-1-hughd@google.com>
+
+From: Hugh Dickins
+
+Vince reported perf_fuzzer quickly locks up on 4.15-rc7 with PTI;
+Robert reported Bad RIP with KPTI and Intel BTS also on 4.15-rc7:
+honggfuzz -f /tmp/somedirectorywithatleastonefile \
+  --linux_perf_bts_edge -s -- /bin/true
+(honggfuzz from https://github.com/google/honggfuzz) crashed with
+BUG: unable to handle kernel paging request at ffff9d3215100000
+(then narrowed it down to
+perf record --per-thread -e intel_bts//u -- /bin/ls).
+
+The intel_bts driver does not use the 'normal' BTS buffer which is
+exposed through kaiser_add_mapping(), but instead uses the memory
+allocated for the perf AUX buffer.
+
+This obviously comes apart when using PTI, because then the kernel
+mapping, which includes that AUX buffer memory, disappears while
+switched to user page tables.
+
+Easily fixed in old-Kaiser backports, by applying kaiser_add_mapping()
+to those pages; perhaps not so easy for upstream, where 4.15-rc8 commit
+99a9dc98ba52 ("x86,perf: Disable intel_bts when PTI") disables for now.
+
+Slightly reorganized surrounding code in bts_buffer_setup_aux(),
+so it can better match bts_buffer_free_aux(): free_aux with an #ifdef
+to avoid the loop when PTI is off, but setup_aux needs to loop anyway
+(and kaiser_add_mapping() is cheap when PTI config is off or "pti=off").
+
+Reported-by: Vince Weaver
+Reported-by: Robert Święcki
+Analyzed-by: Peter Zijlstra
+Analyzed-by: Stephane Eranian
+Cc: Thomas Gleixner
+Cc: Ingo Molnar
+Cc: Andy Lutomirski
+Cc: Alexander Shishkin
+Cc: Linus Torvalds
+Cc: Vince Weaver
+Cc: stable@vger.kernel.org
+Cc: Jiri Kosina
+Signed-off-by: Hugh Dickins
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/kernel/cpu/perf_event_intel_bts.c |   44 +++++++++++++++++++++--------
+ 1 file changed, 33 insertions(+), 11 deletions(-)
+
+--- a/arch/x86/kernel/cpu/perf_event_intel_bts.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_bts.c
+@@ -22,6 +22,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+ #include <linux/coredump.h>
++#include <linux/kaiser.h>
+
+ #include <asm-generic/sizes.h>
+ #include <asm/perf_event.h>
+@@ -67,6 +68,23 @@ static size_t buf_size(struct page *page
+         return 1 << (PAGE_SHIFT + page_private(page));
+ }
+
++static void bts_buffer_free_aux(void *data)
++{
++#ifdef CONFIG_PAGE_TABLE_ISOLATION
++        struct bts_buffer *buf = data;
++        int nbuf;
++
++        for (nbuf = 0; nbuf < buf->nr_bufs; nbuf++) {
++                struct page *page = buf->buf[nbuf].page;
++                void *kaddr = page_address(page);
++                size_t page_size = buf_size(page);
++
++                kaiser_remove_mapping((unsigned long)kaddr, page_size);
++        }
++#endif
++        kfree(data);
++}
++
+ static void *
+ bts_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool overwrite)
+ {
+@@ -103,29 +121,33 @@ bts_buffer_setup_aux(int cpu, void **pag
+         buf->real_size = size - size % BTS_RECORD_SIZE;
+
+         for (pg = 0, nbuf = 0, offset = 0, pad = 0; nbuf < buf->nr_bufs; nbuf++) {
+-                unsigned int __nr_pages;
++                void *kaddr = pages[pg];
++                size_t page_size;
++
++                page = virt_to_page(kaddr);
++                page_size = buf_size(page);
++
++                if (kaiser_add_mapping((unsigned long)kaddr,
++                                page_size, __PAGE_KERNEL) < 0) {
++                        buf->nr_bufs = nbuf;
++                        bts_buffer_free_aux(buf);
++                        return NULL;
++                }
+
+-                page = virt_to_page(pages[pg]);
+-                __nr_pages = PagePrivate(page) ? 1 << page_private(page) : 1;
+                 buf->buf[nbuf].page = page;
+                 buf->buf[nbuf].offset = offset;
+                 buf->buf[nbuf].displacement = (pad ? BTS_RECORD_SIZE - pad : 0);
+-                buf->buf[nbuf].size = buf_size(page) - buf->buf[nbuf].displacement;
++                buf->buf[nbuf].size = page_size - buf->buf[nbuf].displacement;
+                 pad = buf->buf[nbuf].size % BTS_RECORD_SIZE;
+                 buf->buf[nbuf].size -= pad;
+
+-                pg += __nr_pages;
+-                offset += __nr_pages << PAGE_SHIFT;
++                pg += page_size >> PAGE_SHIFT;
++                offset += page_size;
+         }
+
+         return buf;
+ }
+
+-static void bts_buffer_free_aux(void *data)
+-{
+-        kfree(data);
+-}
+-
+ static unsigned long bts_buffer_offset(struct bts_buffer *buf, unsigned int idx)
+ {
+         return buf->buf[idx].offset + buf->buf[idx].displacement;
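
For context, here is a stand-alone sketch (not part of the queued patch) of the
setup/teardown pairing the backport relies on: map the AUX pages one by one, record how
many were mapped, and let the free path unwind only those. map_page(), unmap_page() and
struct aux_bufs are hypothetical stand-ins for kaiser_add_mapping(),
kaiser_remove_mapping() and struct bts_buffer.

#include <stddef.h>

/* Hypothetical per-page hooks standing in for the kaiser mapping calls. */
static int map_page(void *kaddr)    { (void)kaddr; return 0; }
static void unmap_page(void *kaddr) { (void)kaddr; }

struct aux_bufs {
        size_t nr_bufs;         /* how many entries were successfully mapped */
        void *page[64];
};

static void free_aux(struct aux_bufs *b)
{
        /* Unwinds only what setup_aux managed to map. */
        for (size_t i = 0; i < b->nr_bufs; i++)
                unmap_page(b->page[i]);
}

static int setup_aux(struct aux_bufs *b, void **pages, size_t nr)
{
        if (nr > sizeof(b->page) / sizeof(b->page[0]))
                return -1;
        for (size_t i = 0; i < nr; i++) {
                if (map_page(pages[i]) < 0) {
                        b->nr_bufs = i;         /* record how far we got ... */
                        free_aux(b);            /* ... so only that much is undone */
                        return -1;
                }
                b->page[i] = pages[i];
                b->nr_bufs = i + 1;
        }
        return 0;
}
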
diff --git a/queue-4.4/series b/queue-4.4/series
index db87e518728..a2d7810ed47 100644
--- a/queue-4.4/series
+++ b/queue-4.4/series
@@ -8,3 +8,6 @@ bpf-avoid-false-sharing-of-map-refcount-with-max_entries.patch
 bpf-fix-divides-by-zero.patch
 bpf-fix-32-bit-divide-by-zero.patch
 bpf-reject-stores-into-ctx-via-st-and-xadd.patch
+x86-pti-make-unpoison-of-pgd-for-trusted-boot-work-for-real.patch
+kaiser-fix-intel_bts-perf-crashes.patch
+alsa-seq-make-ioctls-race-free.patch
diff --git a/queue-4.4/x86-pti-make-unpoison-of-pgd-for-trusted-boot-work-for-real.patch b/queue-4.4/x86-pti-make-unpoison-of-pgd-for-trusted-boot-work-for-real.patch
new file mode 100644
index 00000000000..7107e7422e6
--- /dev/null
+++ b/queue-4.4/x86-pti-make-unpoison-of-pgd-for-trusted-boot-work-for-real.patch
@@ -0,0 +1,71 @@
+From 445b69e3b75e42362a5bdc13c8b8f61599e2228a Mon Sep 17 00:00:00 2001
+From: Dave Hansen
+Date: Wed, 10 Jan 2018 14:49:39 -0800
+Subject: x86/pti: Make unpoison of pgd for trusted boot work for real
+
+From: Dave Hansen
+
+commit 445b69e3b75e42362a5bdc13c8b8f61599e2228a upstream.
+
+The inital fix for trusted boot and PTI potentially misses the pgd clearing
+if pud_alloc() sets a PGD. It probably works in *practice* because for two
+adjacent calls to map_tboot_page() that share a PGD entry, the first will
+clear NX, *then* allocate and set the PGD (without NX clear). The second
+call will *not* allocate but will clear the NX bit.
+
+Defer the NX clearing to a point after it is known that all top-level
+allocations have occurred. Add a comment to clarify why.
+
+[ tglx: Massaged changelog ]
+
+[hughd notes: I have not tested tboot, but this looks to me as necessary
+and as safe in old-Kaiser backports as it is upstream; I'm not submitting
+the commit-to-be-fixed 262b6b30087, since it was undone by 445b69e3b75e,
+and makes conflict trouble because of 5-level's p4d versus 4-level's pgd.]
+
+Fixes: 262b6b30087 ("x86/tboot: Unbreak tboot with PTI enabled")
+Signed-off-by: Dave Hansen
+Signed-off-by: Thomas Gleixner
+Reviewed-by: Andrea Arcangeli
+Cc: Jon Masters
+Cc: "Tim Chen"
+Cc: gnomes@lxorguk.ukuu.org.uk
+Cc: peterz@infradead.org
+Cc: ning.sun@intel.com
+Cc: tboot-devel@lists.sourceforge.net
+Cc: andi@firstfloor.org
+Cc: luto@kernel.org
+Cc: law@redhat.com
+Cc: pbonzini@redhat.com
+Cc: torvalds@linux-foundation.org
+Cc: gregkh@linux-foundation.org
+Cc: dwmw@amazon.co.uk
+Cc: nickc@redhat.com
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/20180110224939.2695CD47@viggo.jf.intel.com
+Signed-off-by: Hugh Dickins
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/x86/kernel/tboot.c |   10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+--- a/arch/x86/kernel/tboot.c
++++ b/arch/x86/kernel/tboot.c
+@@ -140,6 +140,16 @@ static int map_tboot_page(unsigned long
+                 return -1;
+         set_pte_at(&tboot_mm, vaddr, pte, pfn_pte(pfn, prot));
+         pte_unmap(pte);
++
++        /*
++         * PTI poisons low addresses in the kernel page tables in the
++         * name of making them unusable for userspace. To execute
++         * code at such a low address, the poison must be cleared.
++         *
++         * Note: 'pgd' actually gets set in pud_alloc().
++         */
++        pgd->pgd &= ~_PAGE_NX;
++
+         return 0;
+ }
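
For context, here is a stand-alone sketch (not part of the queued patch) of the ordering
the fix establishes: the NX "poison" on the top-level entry is cleared only after every
allocation that could rewrite that entry has run. alloc_page_tables() and struct
pgd_entry are hypothetical stand-ins for pud_alloc() and the real pgd_t; only the
_PAGE_NX bit position models the real x86 layout.

#include <stdint.h>

#define _PAGE_NX (UINT64_C(1) << 63)    /* x86-64: bit 63 marks no-execute */

struct pgd_entry { uint64_t val; };

/* Hypothetical stand-in for pud_alloc() and friends: may rewrite *e. */
static int alloc_page_tables(struct pgd_entry *e)
{
        e->val |= _PAGE_NX;             /* a freshly installed entry comes back poisoned */
        return 0;
}

int map_low_page(struct pgd_entry *e)
{
        if (alloc_page_tables(e) < 0)   /* this step may (re)set the entry ... */
                return -1;

        /* ... so clear NX only now, when nothing will overwrite it again. */
        e->val &= ~_PAGE_NX;
        return 0;
}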