From: Greg Kroah-Hartman
Date: Thu, 27 Apr 2023 08:37:06 +0000 (+0200)
Subject: 6.1-stable patches
X-Git-Tag: v5.15.110~23
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=f20c9403884fd1ce610eff321c5e07e35655189d;p=thirdparty%2Fkernel%2Fstable-queue.git

6.1-stable patches

added patches:
        kvm-arm64-retry-fault-if-vma_lookup-results-become-invalid.patch
        phy-phy-brcm-usb-utilize-platform_get_irq_byname_optional.patch
        series
        um-only-disable-sse-on-clang-to-work-around-old-gcc-bugs.patch
---

diff --git a/queue-6.1/kvm-arm64-retry-fault-if-vma_lookup-results-become-invalid.patch b/queue-6.1/kvm-arm64-retry-fault-if-vma_lookup-results-become-invalid.patch
new file mode 100644
index 00000000000..0a5cdc50ab0
--- /dev/null
+++ b/queue-6.1/kvm-arm64-retry-fault-if-vma_lookup-results-become-invalid.patch
@@ -0,0 +1,107 @@
+From 13ec9308a85702af7c31f3638a2720863848a7f2 Mon Sep 17 00:00:00 2001
+From: David Matlack
+Date: Mon, 13 Mar 2023 16:54:54 -0700
+Subject: KVM: arm64: Retry fault if vma_lookup() results become invalid
+
+From: David Matlack
+
+commit 13ec9308a85702af7c31f3638a2720863848a7f2 upstream.
+
+Read mmu_invalidate_seq before dropping the mmap_lock so that KVM can
+detect if the results of vma_lookup() (e.g. vma_shift) become stale
+before it acquires kvm->mmu_lock. This fixes a theoretical bug where a
+VMA could be changed by userspace after vma_lookup() and before KVM
+reads the mmu_invalidate_seq, causing KVM to install page table entries
+based on a (possibly) no-longer-valid vma_shift.
+
+Re-order the MMU cache top-up to earlier in user_mem_abort() so that it
+is not done after KVM has read mmu_invalidate_seq (i.e. so as to avoid
+inducing spurious fault retries).
+
+This bug has existed since KVM/ARM's inception. It's unlikely that any
+sane userspace currently modifies VMAs in such a way as to trigger this
+race. And even with directed testing I was unable to reproduce it. But a
+sufficiently motivated host userspace might be able to exploit this
+race.
+
+Fixes: 94f8e6418d39 ("KVM: ARM: Handle guest faults in KVM")
+Cc: stable@vger.kernel.org
+Reported-by: Sean Christopherson
+Signed-off-by: David Matlack
+Reviewed-by: Marc Zyngier
+Link: https://lore.kernel.org/r/20230313235454.2964067-1-dmatlack@google.com
+Signed-off-by: Oliver Upton
+[will: Use FSC_PERM instead of ESR_ELx_FSC_PERM]
+Signed-off-by: Will Deacon
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/arm64/kvm/mmu.c | 47 +++++++++++++++++++++--------------------------
+ 1 file changed, 21 insertions(+), 26 deletions(-)
+
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -1179,6 +1179,20 @@ static int user_mem_abort(struct kvm_vcp
+         }
+ 
+         /*
++         * Permission faults just need to update the existing leaf entry,
++         * and so normally don't require allocations from the memcache. The
++         * only exception to this is when dirty logging is enabled at runtime
++         * and a write fault needs to collapse a block entry into a table.
++         */
++        if (fault_status != FSC_PERM ||
++            (logging_active && write_fault)) {
++                ret = kvm_mmu_topup_memory_cache(memcache,
++                                                 kvm_mmu_cache_min_pages(kvm));
++                if (ret)
++                        return ret;
++        }
++
++        /*
+          * Let's check if we will get back a huge page backed by hugetlbfs, or
+          * get block mapping for device MMIO region.
+          */
+@@ -1234,36 +1248,17 @@ static int user_mem_abort(struct kvm_vcp
+         fault_ipa &= ~(vma_pagesize - 1);
+ 
+         gfn = fault_ipa >> PAGE_SHIFT;
+-        mmap_read_unlock(current->mm);
+-
+-        /*
+-         * Permission faults just need to update the existing leaf entry,
+-         * and so normally don't require allocations from the memcache. The
+-         * only exception to this is when dirty logging is enabled at runtime
+-         * and a write fault needs to collapse a block entry into a table.
+-         */
+-        if (fault_status != FSC_PERM || (logging_active && write_fault)) {
+-                ret = kvm_mmu_topup_memory_cache(memcache,
+-                                                 kvm_mmu_cache_min_pages(kvm));
+-                if (ret)
+-                        return ret;
+-        }
+ 
+-        mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+         /*
+-         * Ensure the read of mmu_invalidate_seq happens before we call
+-         * gfn_to_pfn_prot (which calls get_user_pages), so that we don't risk
+-         * the page we just got a reference to gets unmapped before we have a
+-         * chance to grab the mmu_lock, which ensure that if the page gets
+-         * unmapped afterwards, the call to kvm_unmap_gfn will take it away
+-         * from us again properly. This smp_rmb() interacts with the smp_wmb()
+-         * in kvm_mmu_notifier_invalidate_.
++         * Read mmu_invalidate_seq so that KVM can detect if the results of
++         * vma_lookup() or __gfn_to_pfn_memslot() become stale prior to
++         * acquiring kvm->mmu_lock.
+          *
+-         * Besides, __gfn_to_pfn_memslot() instead of gfn_to_pfn_prot() is
+-         * used to avoid unnecessary overhead introduced to locate the memory
+-         * slot because it's always fixed even @gfn is adjusted for huge pages.
++         * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
++         * with the smp_wmb() in kvm_mmu_invalidate_end().
+          */
+-        smp_rmb();
++        mmu_seq = vcpu->kvm->mmu_invalidate_seq;
++        mmap_read_unlock(current->mm);
+ 
+         pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
+                                    write_fault, &writable, NULL);
diff --git a/queue-6.1/phy-phy-brcm-usb-utilize-platform_get_irq_byname_optional.patch b/queue-6.1/phy-phy-brcm-usb-utilize-platform_get_irq_byname_optional.patch
new file mode 100644
index 00000000000..d44c18b570c
--- /dev/null
+++ b/queue-6.1/phy-phy-brcm-usb-utilize-platform_get_irq_byname_optional.patch
@@ -0,0 +1,36 @@
+From 53bffe0055741440a6c91abb80bad1c62ea443e3 Mon Sep 17 00:00:00 2001
+From: Florian Fainelli
+Date: Wed, 26 Oct 2022 15:44:49 -0700
+Subject: phy: phy-brcm-usb: Utilize platform_get_irq_byname_optional()
+
+From: Florian Fainelli
+
+commit 53bffe0055741440a6c91abb80bad1c62ea443e3 upstream.
+
+The wake-up interrupt lines are entirely optional, avoid printing
+messages that interrupts were not found by switching to the _optional
+variant.
+
+Signed-off-by: Florian Fainelli
+Acked-by: Justin Chen
+Link: https://lore.kernel.org/r/20221026224450.2958762-1-f.fainelli@gmail.com
+Signed-off-by: Vinod Koul
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/phy/broadcom/phy-brcm-usb.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/phy/broadcom/phy-brcm-usb.c
++++ b/drivers/phy/broadcom/phy-brcm-usb.c
+@@ -445,9 +445,9 @@ static int brcm_usb_phy_dvr_init(struct
+                 priv->suspend_clk = NULL;
+         }
+ 
+-        priv->wake_irq = platform_get_irq_byname(pdev, "wake");
++        priv->wake_irq = platform_get_irq_byname_optional(pdev, "wake");
+         if (priv->wake_irq < 0)
+-                priv->wake_irq = platform_get_irq_byname(pdev, "wakeup");
++                priv->wake_irq = platform_get_irq_byname_optional(pdev, "wakeup");
+         if (priv->wake_irq >= 0) {
+                 err = devm_request_irq(dev, priv->wake_irq,
+                                        brcm_usb_phy_wake_isr, 0,
diff --git a/queue-6.1/series b/queue-6.1/series
new file mode 100644
index 00000000000..4cf976df5de
--- /dev/null
+++ b/queue-6.1/series
@@ -0,0 +1,3 @@
+um-only-disable-sse-on-clang-to-work-around-old-gcc-bugs.patch
+phy-phy-brcm-usb-utilize-platform_get_irq_byname_optional.patch
+kvm-arm64-retry-fault-if-vma_lookup-results-become-invalid.patch
diff --git a/queue-6.1/um-only-disable-sse-on-clang-to-work-around-old-gcc-bugs.patch b/queue-6.1/um-only-disable-sse-on-clang-to-work-around-old-gcc-bugs.patch
new file mode 100644
index 00000000000..7f7c35fec01
--- /dev/null
+++ b/queue-6.1/um-only-disable-sse-on-clang-to-work-around-old-gcc-bugs.patch
@@ -0,0 +1,54 @@
+From a3046a618a284579d1189af8711765f553eed707 Mon Sep 17 00:00:00 2001
+From: David Gow
+Date: Sat, 18 Mar 2023 12:15:54 +0800
+Subject: um: Only disable SSE on clang to work around old GCC bugs
+
+From: David Gow
+
+commit a3046a618a284579d1189af8711765f553eed707 upstream.
+
+As part of the Rust support for UML, we disable SSE (and similar flags)
+to match the normal x86 builds. This both makes sense (we ideally want a
+similar configuration to x86), and works around a crash bug with SSE
+generation under Rust with LLVM.
+
+However, this breaks compiling stdlib.h under gcc < 11, as the x86_64
+ABI requires floating-point return values be stored in an SSE register.
+gcc 11 fixes this by only doing register allocation when a function is
+actually used, and since we never use atof(), it shouldn't be a problem:
+https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99652
+
+Nevertheless, only disable SSE on clang setups, as that's a simple way
+of working around everyone's bugs.
+
+Fixes: 884981867947 ("rust: arch/um: Disable FP/SIMD instruction to match x86")
+Reported-by: Roberto Sassu
+Link: https://lore.kernel.org/linux-um/6df2ecef9011d85654a82acd607fdcbc93ad593c.camel@huaweicloud.com/
+Tested-by: Roberto Sassu
+Tested-by: SeongJae Park
+Signed-off-by: David Gow
+Reviewed-by: Vincenzo Palazzo
+Tested-by: Arthur Grillo
+Signed-off-by: Richard Weinberger
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/x86/Makefile.um | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+--- a/arch/x86/Makefile.um
++++ b/arch/x86/Makefile.um
+@@ -3,9 +3,14 @@ core-y += arch/x86/crypto/
+ 
+ #
+ # Disable SSE and other FP/SIMD instructions to match normal x86
++# This is required to work around issues in older LLVM versions, but breaks
++# GCC versions < 11. See:
++# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99652
+ #
++ifeq ($(CONFIG_CC_IS_CLANG),y)
+ KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
+ KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2
++endif
+ 
+ ifeq ($(CONFIG_X86_32),y)
+ START := 0x8048000