--- /dev/null
+From 13ec9308a85702af7c31f3638a2720863848a7f2 Mon Sep 17 00:00:00 2001
+From: David Matlack <dmatlack@google.com>
+Date: Mon, 13 Mar 2023 16:54:54 -0700
+Subject: KVM: arm64: Retry fault if vma_lookup() results become invalid
+
+From: David Matlack <dmatlack@google.com>
+
+commit 13ec9308a85702af7c31f3638a2720863848a7f2 upstream.
+
+Read mmu_invalidate_seq before dropping the mmap_lock so that KVM can
+detect if the results of vma_lookup() (e.g. vma_shift) become stale
+before it acquires kvm->mmu_lock. This fixes a theoretical bug where a
+VMA could be changed by userspace after vma_lookup() and before KVM
+reads the mmu_invalidate_seq, causing KVM to install page table entries
+based on a (possibly) no-longer-valid vma_shift.
+
+Re-order the MMU cache top-up to earlier in user_mem_abort() so that it
+is not done after KVM has read mmu_invalidate_seq (i.e. so as to avoid
+inducing spurious fault retries).
+
+This bug has existed since KVM/ARM's inception. It's unlikely that any
+sane userspace currently modifies VMAs in such a way as to trigger this
+race. And even with directed testing I was unable to reproduce it. But a
+sufficiently motivated host userspace might be able to exploit this
+race.
+
+Fixes: 94f8e6418d39 ("KVM: ARM: Handle guest faults in KVM")
+Cc: stable@vger.kernel.org
+Reported-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: David Matlack <dmatlack@google.com>
+Reviewed-by: Marc Zyngier <maz@kernel.org>
+Link: https://lore.kernel.org/r/20230313235454.2964067-1-dmatlack@google.com
+Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
+[will: Use FSC_PERM instead of ESR_ELx_FSC_PERM]
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/kvm/mmu.c | 47 +++++++++++++++++++++--------------------------
+ 1 file changed, 21 insertions(+), 26 deletions(-)
+
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -1179,6 +1179,20 @@ static int user_mem_abort(struct kvm_vcp
+ }
+
+ /*
++ * Permission faults just need to update the existing leaf entry,
++ * and so normally don't require allocations from the memcache. The
++ * only exception to this is when dirty logging is enabled at runtime
++ * and a write fault needs to collapse a block entry into a table.
++ */
++ if (fault_status != FSC_PERM ||
++ (logging_active && write_fault)) {
++ ret = kvm_mmu_topup_memory_cache(memcache,
++ kvm_mmu_cache_min_pages(kvm));
++ if (ret)
++ return ret;
++ }
++
++ /*
+ * Let's check if we will get back a huge page backed by hugetlbfs, or
+ * get block mapping for device MMIO region.
+ */
+@@ -1234,36 +1248,17 @@ static int user_mem_abort(struct kvm_vcp
+ fault_ipa &= ~(vma_pagesize - 1);
+
+ gfn = fault_ipa >> PAGE_SHIFT;
+- mmap_read_unlock(current->mm);
+-
+- /*
+- * Permission faults just need to update the existing leaf entry,
+- * and so normally don't require allocations from the memcache. The
+- * only exception to this is when dirty logging is enabled at runtime
+- * and a write fault needs to collapse a block entry into a table.
+- */
+- if (fault_status != FSC_PERM || (logging_active && write_fault)) {
+- ret = kvm_mmu_topup_memory_cache(memcache,
+- kvm_mmu_cache_min_pages(kvm));
+- if (ret)
+- return ret;
+- }
+
+- mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+ /*
+- * Ensure the read of mmu_invalidate_seq happens before we call
+- * gfn_to_pfn_prot (which calls get_user_pages), so that we don't risk
+- * the page we just got a reference to gets unmapped before we have a
+- * chance to grab the mmu_lock, which ensure that if the page gets
+- * unmapped afterwards, the call to kvm_unmap_gfn will take it away
+- * from us again properly. This smp_rmb() interacts with the smp_wmb()
+- * in kvm_mmu_notifier_invalidate_<page|range_end>.
++ * Read mmu_invalidate_seq so that KVM can detect if the results of
++ * vma_lookup() or __gfn_to_pfn_memslot() become stale prior to
++ * acquiring kvm->mmu_lock.
+ *
+- * Besides, __gfn_to_pfn_memslot() instead of gfn_to_pfn_prot() is
+- * used to avoid unnecessary overhead introduced to locate the memory
+- * slot because it's always fixed even @gfn is adjusted for huge pages.
++ * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
++ * with the smp_wmb() in kvm_mmu_invalidate_end().
+ */
+- smp_rmb();
++ mmu_seq = vcpu->kvm->mmu_invalidate_seq;
++ mmap_read_unlock(current->mm);
+
+ pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
+ write_fault, &writable, NULL);
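
The ordering the patch enforces is easier to see in isolation. The sketch
below is not KVM code: lookup_lock, install_lock, invalidate_seq and
mapping_shift are stand-ins for mmap_lock, kvm->mmu_lock,
kvm->mmu_invalidate_seq and vma_shift, and the program only illustrates the
sample-before-unlock / recheck-before-install pattern, not the real fault
path.

/*
 * Standalone userspace illustration (not KVM code).  The fault side
 * samples the invalidation sequence count *before* dropping the lock
 * that protects the lookup, then rechecks it under the lock that
 * protects installation; if an invalidation slipped in between, the
 * stale lookup result is thrown away and the fault is retried.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lookup_lock = PTHREAD_MUTEX_INITIALIZER;  /* ~ mmap_lock */
static pthread_mutex_t install_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ kvm->mmu_lock */
static unsigned long invalidate_seq;      /* ~ kvm->mmu_invalidate_seq */
static unsigned long mapping_shift = 21;  /* ~ vma_shift (2MiB block) */

/* Invalidation side: change the mapping and bump the sequence count. */
static void invalidate_mapping(unsigned long new_shift)
{
        pthread_mutex_lock(&lookup_lock);
        pthread_mutex_lock(&install_lock);
        mapping_shift = new_shift;
        invalidate_seq++;
        pthread_mutex_unlock(&install_lock);
        pthread_mutex_unlock(&lookup_lock);
}

/* Fault side: mirrors the shape of the flow after the patch. */
static void handle_fault(void)
{
        unsigned long seq, shift;

        for (;;) {
                pthread_mutex_lock(&lookup_lock);
                shift = mapping_shift;          /* "vma_lookup()" result */
                seq = invalidate_seq;           /* sampled BEFORE the unlock */
                pthread_mutex_unlock(&lookup_lock);

                /* slow, unlocked work (pinning the page) would happen here */

                pthread_mutex_lock(&install_lock);
                if (seq != invalidate_seq) {
                        /* the mapping changed under us: retry the fault */
                        pthread_mutex_unlock(&install_lock);
                        continue;
                }
                printf("installing entry with shift %lu\n", shift);
                pthread_mutex_unlock(&install_lock);
                return;
        }
}

int main(void)
{
        handle_fault();
        invalidate_mapping(12);         /* demote to 4KiB pages */
        handle_fault();
        return 0;
}

In user_mem_abort() itself the recheck is the mmu_invalidate_retry() test
performed under kvm->mmu_lock before the new page table entry is installed;
the patch only moves the point at which the sequence count is sampled.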
--- /dev/null
+From 53bffe0055741440a6c91abb80bad1c62ea443e3 Mon Sep 17 00:00:00 2001
+From: Florian Fainelli <f.fainelli@gmail.com>
+Date: Wed, 26 Oct 2022 15:44:49 -0700
+Subject: phy: phy-brcm-usb: Utilize platform_get_irq_byname_optional()
+
+From: Florian Fainelli <f.fainelli@gmail.com>
+
+commit 53bffe0055741440a6c91abb80bad1c62ea443e3 upstream.
+
+The wake-up interrupt lines are entirely optional, so switch to the
+_optional variant to avoid printing messages when the interrupts are
+not found.
+
+Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
+Acked-by: Justin Chen <justinpopo6@gmail.com>
+Link: https://lore.kernel.org/r/20221026224450.2958762-1-f.fainelli@gmail.com
+Signed-off-by: Vinod Koul <vkoul@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/phy/broadcom/phy-brcm-usb.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/phy/broadcom/phy-brcm-usb.c
++++ b/drivers/phy/broadcom/phy-brcm-usb.c
+@@ -445,9 +445,9 @@ static int brcm_usb_phy_dvr_init(struct
+ priv->suspend_clk = NULL;
+ }
+
+- priv->wake_irq = platform_get_irq_byname(pdev, "wake");
++ priv->wake_irq = platform_get_irq_byname_optional(pdev, "wake");
+ if (priv->wake_irq < 0)
+- priv->wake_irq = platform_get_irq_byname(pdev, "wakeup");
++ priv->wake_irq = platform_get_irq_byname_optional(pdev, "wakeup");
+ if (priv->wake_irq >= 0) {
+ err = devm_request_irq(dev, priv->wake_irq,
+ brcm_usb_phy_wake_isr, 0,
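
For reference, the pattern the patch switches to looks roughly like this in
a probe path. This is a hedged sketch rather than code from phy-brcm-usb:
struct foo_priv, foo_wake_isr() and foo_probe_wake_irq() are invented names,
and the driver's fallback to the legacy "wakeup" resource name is left out.

#include <linux/interrupt.h>
#include <linux/platform_device.h>

struct foo_priv {
        int wake_irq;
};

static irqreturn_t foo_wake_isr(int irq, void *dev_id)
{
        return IRQ_HANDLED;
}

static int foo_probe_wake_irq(struct platform_device *pdev,
                              struct foo_priv *priv)
{
        /*
         * The _optional lookup stays silent when the resource is
         * missing; a negative return simply means "no wake-up line".
         */
        priv->wake_irq = platform_get_irq_byname_optional(pdev, "wake");
        if (priv->wake_irq < 0)
                return 0;       /* carry on without wake-up support */

        return devm_request_irq(&pdev->dev, priv->wake_irq, foo_wake_isr,
                                0, dev_name(&pdev->dev), priv);
}

The only behavioural difference from platform_get_irq_byname() is the
missing "IRQ not found" error print; the return value convention is the
same, which is why the existing < 0 and >= 0 checks in the driver are left
untouched.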
--- /dev/null
+From a3046a618a284579d1189af8711765f553eed707 Mon Sep 17 00:00:00 2001
+From: David Gow <davidgow@google.com>
+Date: Sat, 18 Mar 2023 12:15:54 +0800
+Subject: um: Only disable SSE on clang to work around old GCC bugs
+
+From: David Gow <davidgow@google.com>
+
+commit a3046a618a284579d1189af8711765f553eed707 upstream.
+
+As part of the Rust support for UML, we disable SSE (and similar
+instruction set extensions) to match the normal x86 builds. This both
+makes sense (we ideally want a similar configuration to x86), and works
+around a crash bug with SSE generation under Rust with LLVM.
+
+However, this breaks compiling stdlib.h under gcc < 11, as the x86_64
+ABI requires floating-point return values be stored in an SSE register.
+gcc 11 fixes this by only doing register allocation when a function is
+actually used, and since we never use atof(), it shouldn't be a problem:
+https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99652
+
+Nevertheless, only disable SSE on clang setups, as that's a simple way
+of working around everyone's bugs.
+
+Fixes: 884981867947 ("rust: arch/um: Disable FP/SIMD instruction to match x86")
+Reported-by: Roberto Sassu <roberto.sassu@huaweicloud.com>
+Link: https://lore.kernel.org/linux-um/6df2ecef9011d85654a82acd607fdcbc93ad593c.camel@huaweicloud.com/
+Tested-by: Roberto Sassu <roberto.sassu@huaweicloud.com>
+Tested-by: SeongJae Park <sj@kernel.org>
+Signed-off-by: David Gow <davidgow@google.com>
+Reviewed-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
+Tested-by: Arthur Grillo <arthurgrillo@riseup.net>
+Signed-off-by: Richard Weinberger <richard@nod.at>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/Makefile.um | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+--- a/arch/x86/Makefile.um
++++ b/arch/x86/Makefile.um
+@@ -3,9 +3,14 @@ core-y += arch/x86/crypto/
+
+ #
+ # Disable SSE and other FP/SIMD instructions to match normal x86
++# This is required to work around issues in older LLVM versions, but breaks
++# GCC versions < 11. See:
++# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99652
+ #
++ifeq ($(CONFIG_CC_IS_CLANG),y)
+ KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
+ KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2
++endif
+
+ ifeq ($(CONFIG_X86_32),y)
+ START := 0x8048000
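
The ABI constraint behind the breakage can be shown with a couple of lines
of plain C; this is only an illustration (the file name and compiler
invocation in the comment are assumptions), not kernel code.

/*
 * Minimal illustration of the failure mode, e.g. built with
 * "gcc -mno-sse -mno-sse2 -c demo.c" on x86-64.  The SysV ABI returns
 * float/double values in %xmm0, so GCC stops with "SSE register return
 * with SSE disabled" as soon as it has to emit such a function.  The
 * stdlib.h breakage is the same constraint hitting the inline atof()
 * definition the header drags in, which GCC releases before 11 rejected
 * even though UML never calls it.
 */
double half(int x)
{
        return x / 2.0;         /* the result travels back in %xmm0 */
}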