--- /dev/null
+From 0c21a18d5d6c6a73d098fb9b4701572370942df9 Mon Sep 17 00:00:00 2001
+From: Sunil V L <sunilvl@ventanamicro.com>
+Date: Mon, 16 Oct 2023 22:39:39 +0530
+Subject: ACPI: irq: Fix incorrect return value in acpi_register_gsi()
+
+From: Sunil V L <sunilvl@ventanamicro.com>
+
+commit 0c21a18d5d6c6a73d098fb9b4701572370942df9 upstream.
+
+acpi_register_gsi() should return a negative value in case of failure.
+
+Currently, it returns the return value from irq_create_fwspec_mapping().
+However, irq_create_fwspec_mapping() returns 0 for failure. Fix the
+issue by returning -EINVAL if irq_create_fwspec_mapping() returns zero.
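+
+A minimal caller-side sketch (hypothetical driver code, not part of this
+patch) of why the translation matters: callers test acpi_register_gsi()
+for a negative errno, so a bare 0 return from the mapping layer would be
+mistaken for a valid IRQ number:
+
+	int irq = acpi_register_gsi(dev, gsi, trigger, polarity);
+
+	if (irq < 0)		/* failure is signalled via negative errno */
+		return irq;	/* a plain 0 would have slipped through */
+	/* irq is now a valid Linux IRQ number */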
+
+Fixes: d44fa3d46079 ("ACPI: Add support for ResourceSource/IRQ domain mapping")
+Cc: 4.11+ <stable@vger.kernel.org> # 4.11+
+Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
+[ rjw: Rename a new local variable ]
+Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/acpi/irq.c | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+--- a/drivers/acpi/irq.c
++++ b/drivers/acpi/irq.c
+@@ -57,6 +57,7 @@ int acpi_register_gsi(struct device *dev
+ int polarity)
+ {
+ struct irq_fwspec fwspec;
++ unsigned int irq;
+
+ fwspec.fwnode = acpi_get_gsi_domain_id(gsi);
+ if (WARN_ON(!fwspec.fwnode)) {
+@@ -68,7 +69,11 @@ int acpi_register_gsi(struct device *dev
+ fwspec.param[1] = acpi_dev_get_irq_type(trigger, polarity);
+ fwspec.param_count = 2;
+
+- return irq_create_fwspec_mapping(&fwspec);
++ irq = irq_create_fwspec_mapping(&fwspec);
++ if (!irq)
++ return -EINVAL;
++
++ return irq;
+ }
+ EXPORT_SYMBOL_GPL(acpi_register_gsi);
+
--- /dev/null
+From 1bbac8d6af085408885675c1e29b2581250be124 Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
+Date: Fri, 25 Aug 2023 15:55:02 +0200
+Subject: dt-bindings: mmc: sdhci-msm: correct minimum number of clocks
+
+From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
+
+commit 1bbac8d6af085408885675c1e29b2581250be124 upstream.
+
+In the TXT binding before conversion, the "xo" clock was listed as
+optional. Conversion kept it optional in "clock-names", but not in
+"clocks". This fixes dbts_check warnings like:
+
+ qcom-sdx65-mtp.dtb: mmc@8804000: clocks: [[13, 59], [13, 58]] is too short
+
+Cc: <stable@vger.kernel.org>
+Fixes: a45537723f4b ("dt-bindings: mmc: sdhci-msm: Convert bindings to yaml")
+Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
+Acked-by: Conor Dooley <conor.dooley@microchip.com>
+Link: https://lore.kernel.org/r/20230825135503.282135-1-krzysztof.kozlowski@linaro.org
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/devicetree/bindings/mmc/sdhci-msm.yaml | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/Documentation/devicetree/bindings/mmc/sdhci-msm.yaml
++++ b/Documentation/devicetree/bindings/mmc/sdhci-msm.yaml
+@@ -59,7 +59,7 @@ properties:
+ maxItems: 4
+
+ clocks:
+- minItems: 3
++ minItems: 2
+ items:
+ - description: Main peripheral bus clock, PCLK/HCLK - AHB Bus clock
+ - description: SDC MMC clock, MCLK
--- /dev/null
+From 0df9dab891ff0d9b646d82e4fe038229e4c02451 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Fri, 15 Sep 2023 17:39:15 -0700
+Subject: KVM: x86/mmu: Stop zapping invalidated TDP MMU roots asynchronously
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit 0df9dab891ff0d9b646d82e4fe038229e4c02451 upstream.
+
+Stop zapping invalidated TDP MMU roots via work queue now that KVM
+preserves TDP MMU roots until they are explicitly invalidated. Zapping
+roots asynchronously was effectively a workaround to avoid stalling a vCPU
+for an extended duration if a vCPU unloaded a root, which at the time
+happened whenever the guest toggled CR0.WP (a frequent operation for some
+guest kernels).
+
+While a clever hack, zapping roots via an unbound worker had subtle,
+unintended consequences on host scheduling, especially when zapping
+multiple roots, e.g. as part of a memslot deletion. Because the work of zapping a
+root is no longer bound to the task that initiated the zap, things like
+the CPU affinity and priority of the original task get lost. Losing the
+affinity and priority can be especially problematic if unbound workqueues
+aren't affined to a small number of CPUs, as zapping multiple roots can
+cause KVM to heavily utilize the majority of CPUs in the system, *beyond*
+the CPUs KVM is already using to run vCPUs.
+
+When deleting a memslot via KVM_SET_USER_MEMORY_REGION, the async root
+zap can result in KVM occupying all logical CPUs for ~8ms, and result in
+high priority tasks not being scheduled in in a timely manner. In v5.15,
+which doesn't preserve unloaded roots, the issues were even more noticeable
+as KVM would zap roots more frequently and could occupy all CPUs for 50ms+.
+
+Consuming all CPUs for an extended duration can lead to significant jitter
+throughout the system, e.g. on ChromeOS with virtio-gpu, deleting memslots
+is a semi-frequent operation as memslots are deleted and recreated with
+different host virtual addresses to react to host GPU drivers allocating
+and freeing GPU blobs. On ChromeOS, the jitter manifests as audio blips
+during games due to the audio server's tasks not getting scheduled in
+promptly, despite the tasks having a high realtime priority.
+
+Deleting memslots isn't exactly a fast path and should be avoided when
+possible, and ChromeOS is working towards utilizing MAP_FIXED to avoid the
+memslot shenanigans, but KVM is squarely in the wrong. Not to mention
+that removing the async zapping eliminates a non-trivial amount of
+complexity.
+
+Note, one of the subtle behaviors hidden behind the async zapping is that
+KVM would zap invalidated roots only once (ignoring partial zaps from
+things like mmu_notifier events). Preserve this behavior by adding a flag
+to identify roots that are scheduled to be zapped versus roots that have
+already been zapped but not yet freed.
+
+Add a comment calling out why kvm_tdp_mmu_invalidate_all_roots() can
+encounter invalid roots, as it's not at all obvious why zapping
+invalidated roots shouldn't simply zap all invalid roots.
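+
+A condensed sketch of the resulting two-phase flow (lifted from the diff
+below; locking, yielding and the KVM_BUG_ON() sanity check are omitted):
+
+	/* Phase 1: invalidate, mmu_lock held for write (or VM teardown). */
+	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
+		if (!root->role.invalid) {
+			root->tdp_mmu_scheduled_root_to_zap = true;
+			root->role.invalid = true;
+		}
+	}
+
+	/* Phase 2: zap in the initiating task, mmu_lock held for read. */
+	for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
+		if (!root->tdp_mmu_scheduled_root_to_zap)
+			continue;
+
+		root->tdp_mmu_scheduled_root_to_zap = false;
+		tdp_mmu_zap_root(kvm, root, true);
+		kvm_tdp_mmu_put_root(kvm, root, true); /* drop the TDP MMU's ref */
+	}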
+
+Reported-by: Pattara Teerapong <pteerapong@google.com>
+Cc: David Stevens <stevensd@google.com>
+Cc: Yiwei Zhang <zzyiwei@google.com>
+Cc: Paul Hsia <paulhsia@google.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Message-Id: <20230916003916.2545000-4-seanjc@google.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Reviewed-by: David Matlack <dmatlack@google.com>
+Tested-by: David Matlack <dmatlack@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/kvm_host.h | 3
+ arch/x86/kvm/mmu/mmu.c | 9 --
+ arch/x86/kvm/mmu/mmu_internal.h | 15 ++--
+ arch/x86/kvm/mmu/tdp_mmu.c | 135 ++++++++++++++++------------------------
+ arch/x86/kvm/mmu/tdp_mmu.h | 4 -
+ arch/x86/kvm/x86.c | 5 -
+ 6 files changed, 69 insertions(+), 102 deletions(-)
+
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1324,7 +1324,6 @@ struct kvm_arch {
+ * the thread holds the MMU lock in write mode.
+ */
+ spinlock_t tdp_mmu_pages_lock;
+- struct workqueue_struct *tdp_mmu_zap_wq;
+ #endif /* CONFIG_X86_64 */
+
+ /*
+@@ -1727,7 +1726,7 @@ void kvm_mmu_vendor_module_exit(void);
+
+ void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
+ int kvm_mmu_create(struct kvm_vcpu *vcpu);
+-int kvm_mmu_init_vm(struct kvm *kvm);
++void kvm_mmu_init_vm(struct kvm *kvm);
+ void kvm_mmu_uninit_vm(struct kvm *kvm);
+
+ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5994,19 +5994,16 @@ static void kvm_mmu_invalidate_zap_pages
+ kvm_mmu_zap_all_fast(kvm);
+ }
+
+-int kvm_mmu_init_vm(struct kvm *kvm)
++void kvm_mmu_init_vm(struct kvm *kvm)
+ {
+ struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
+- int r;
+
+ INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
+ INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
+ INIT_LIST_HEAD(&kvm->arch.lpage_disallowed_mmu_pages);
+ spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
+
+- r = kvm_mmu_init_tdp_mmu(kvm);
+- if (r < 0)
+- return r;
++ kvm_mmu_init_tdp_mmu(kvm);
+
+ node->track_write = kvm_mmu_pte_write;
+ node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
+@@ -6019,8 +6016,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
+
+ kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
+ kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
+-
+- return 0;
+ }
+
+ static void mmu_free_vm_memory_caches(struct kvm *kvm)
+--- a/arch/x86/kvm/mmu/mmu_internal.h
++++ b/arch/x86/kvm/mmu/mmu_internal.h
+@@ -56,7 +56,12 @@ struct kvm_mmu_page {
+
+ bool tdp_mmu_page;
+ bool unsync;
+- u8 mmu_valid_gen;
++ union {
++ u8 mmu_valid_gen;
++
++ /* Only accessed under slots_lock. */
++ bool tdp_mmu_scheduled_root_to_zap;
++ };
+ bool lpage_disallowed; /* Can't be replaced by an equiv large page */
+
+ /*
+@@ -92,13 +97,7 @@ struct kvm_mmu_page {
+ struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
+ tdp_ptep_t ptep;
+ };
+- union {
+- DECLARE_BITMAP(unsync_child_bitmap, 512);
+- struct {
+- struct work_struct tdp_mmu_async_work;
+- void *tdp_mmu_async_data;
+- };
+- };
++ DECLARE_BITMAP(unsync_child_bitmap, 512);
+
+ struct list_head lpage_disallowed_link;
+ #ifdef CONFIG_X86_32
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -14,24 +14,16 @@ static bool __read_mostly tdp_mmu_enable
+ module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
+
+ /* Initializes the TDP MMU for the VM, if enabled. */
+-int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
++void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
+ {
+- struct workqueue_struct *wq;
+-
+ if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
+- return 0;
+-
+- wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
+- if (!wq)
+- return -ENOMEM;
++ return;
+
+ /* This should not be changed for the lifetime of the VM. */
+ kvm->arch.tdp_mmu_enabled = true;
+ INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
+ spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
+ INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages);
+- kvm->arch.tdp_mmu_zap_wq = wq;
+- return 1;
+ }
+
+ /* Arbitrarily returns true so that this may be used in if statements. */
+@@ -57,20 +49,15 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *
+ * ultimately frees all roots.
+ */
+ kvm_tdp_mmu_invalidate_all_roots(kvm);
+-
+- /*
+- * Destroying a workqueue also first flushes the workqueue, i.e. no
+- * need to invoke kvm_tdp_mmu_zap_invalidated_roots().
+- */
+- destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
++ kvm_tdp_mmu_zap_invalidated_roots(kvm);
+
+ WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
+ WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
+
+ /*
+ * Ensure that all the outstanding RCU callbacks to free shadow pages
+- * can run before the VM is torn down. Work items on tdp_mmu_zap_wq
+- * can call kvm_tdp_mmu_put_root and create new callbacks.
++ * can run before the VM is torn down. Putting the last reference to
++ * zapped roots will create new callbacks.
+ */
+ rcu_barrier();
+ }
+@@ -97,46 +84,6 @@ static void tdp_mmu_free_sp_rcu_callback
+ tdp_mmu_free_sp(sp);
+ }
+
+-static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
+- bool shared);
+-
+-static void tdp_mmu_zap_root_work(struct work_struct *work)
+-{
+- struct kvm_mmu_page *root = container_of(work, struct kvm_mmu_page,
+- tdp_mmu_async_work);
+- struct kvm *kvm = root->tdp_mmu_async_data;
+-
+- read_lock(&kvm->mmu_lock);
+-
+- /*
+- * A TLB flush is not necessary as KVM performs a local TLB flush when
+- * allocating a new root (see kvm_mmu_load()), and when migrating vCPU
+- * to a different pCPU. Note, the local TLB flush on reuse also
+- * invalidates any paging-structure-cache entries, i.e. TLB entries for
+- * intermediate paging structures, that may be zapped, as such entries
+- * are associated with the ASID on both VMX and SVM.
+- */
+- tdp_mmu_zap_root(kvm, root, true);
+-
+- /*
+- * Drop the refcount using kvm_tdp_mmu_put_root() to test its logic for
+- * avoiding an infinite loop. By design, the root is reachable while
+- * it's being asynchronously zapped, thus a different task can put its
+- * last reference, i.e. flowing through kvm_tdp_mmu_put_root() for an
+- * asynchronously zapped root is unavoidable.
+- */
+- kvm_tdp_mmu_put_root(kvm, root, true);
+-
+- read_unlock(&kvm->mmu_lock);
+-}
+-
+-static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
+-{
+- root->tdp_mmu_async_data = kvm;
+- INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
+- queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
+-}
+-
+ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ bool shared)
+ {
+@@ -222,11 +169,11 @@ static struct kvm_mmu_page *tdp_mmu_next
+ #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \
+ __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
+
+-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \
+- for (_root = tdp_mmu_next_root(_kvm, NULL, false, false); \
++#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared) \
++ for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, false); \
+ _root; \
+- _root = tdp_mmu_next_root(_kvm, _root, false, false)) \
+- if (!kvm_lockdep_assert_mmu_lock_held(_kvm, false)) { \
++ _root = tdp_mmu_next_root(_kvm, _root, _shared, false)) \
++ if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) { \
+ } else
+
+ /*
+@@ -305,7 +252,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(stru
+ * by a memslot update or by the destruction of the VM. Initialize the
+ * refcount to two; one reference for the vCPU, and one reference for
+ * the TDP MMU itself, which is held until the root is invalidated and
+- * is ultimately put by tdp_mmu_zap_root_work().
++ * is ultimately put by kvm_tdp_mmu_zap_invalidated_roots().
+ */
+ refcount_set(&root->tdp_mmu_root_count, 2);
+
+@@ -963,7 +910,7 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *k
+ {
+ struct kvm_mmu_page *root;
+
+- for_each_tdp_mmu_root_yield_safe(kvm, root)
++ for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+ flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
+
+ return flush;
+@@ -985,7 +932,7 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm
+ * is being destroyed or the userspace VMM has exited. In both cases,
+ * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
+ */
+- for_each_tdp_mmu_root_yield_safe(kvm, root)
++ for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+ tdp_mmu_zap_root(kvm, root, false);
+ }
+
+@@ -995,18 +942,47 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm
+ */
+ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
+ {
+- flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
++ struct kvm_mmu_page *root;
++
++ read_lock(&kvm->mmu_lock);
++
++ for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
++ if (!root->tdp_mmu_scheduled_root_to_zap)
++ continue;
++
++ root->tdp_mmu_scheduled_root_to_zap = false;
++ KVM_BUG_ON(!root->role.invalid, kvm);
++
++ /*
++ * A TLB flush is not necessary as KVM performs a local TLB
++ * flush when allocating a new root (see kvm_mmu_load()), and
++ * when migrating a vCPU to a different pCPU. Note, the local
++ * TLB flush on reuse also invalidates paging-structure-cache
++ * entries, i.e. TLB entries for intermediate paging structures,
++ * that may be zapped, as such entries are associated with the
++ * ASID on both VMX and SVM.
++ */
++ tdp_mmu_zap_root(kvm, root, true);
++
++ /*
++		 * The reference needs to be put *after* zapping the root, as
++		 * the root must be reachable by mmu_notifiers while it's being
++		 * zapped.
++ */
++ kvm_tdp_mmu_put_root(kvm, root, true);
++ }
++
++ read_unlock(&kvm->mmu_lock);
+ }
+
+ /*
+ * Mark each TDP MMU root as invalid to prevent vCPUs from reusing a root that
+ * is about to be zapped, e.g. in response to a memslots update. The actual
+- * zapping is performed asynchronously. Using a separate workqueue makes it
+- * easy to ensure that the destruction is performed before the "fast zap"
+- * completes, without keeping a separate list of invalidated roots; the list is
+- * effectively the list of work items in the workqueue.
++ * zapping is done separately so that it happens with mmu_lock held for read,
++ * whereas invalidating roots must be done with mmu_lock held for write (unless
++ * the VM is being destroyed).
+ *
+- * Note, the asynchronous worker is gifted the TDP MMU's reference.
++ * Note, kvm_tdp_mmu_zap_invalidated_roots() is gifted the TDP MMU's reference.
+ * See kvm_tdp_mmu_get_vcpu_root_hpa().
+ */
+ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
+@@ -1031,19 +1007,20 @@ void kvm_tdp_mmu_invalidate_all_roots(st
+ /*
+ * As above, mmu_lock isn't held when destroying the VM! There can't
+ * be other references to @kvm, i.e. nothing else can invalidate roots
+- * or be consuming roots, but walking the list of roots does need to be
+- * guarded against roots being deleted by the asynchronous zap worker.
++ * or get/put references to roots.
+ */
+- rcu_read_lock();
+-
+- list_for_each_entry_rcu(root, &kvm->arch.tdp_mmu_roots, link) {
++ list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
++ /*
++ * Note, invalid roots can outlive a memslot update! Invalid
++ * roots must be *zapped* before the memslot update completes,
++ * but a different task can acquire a reference and keep the
++		 * root alive after it's been zapped.
++ */
+ if (!root->role.invalid) {
++ root->tdp_mmu_scheduled_root_to_zap = true;
+ root->role.invalid = true;
+- tdp_mmu_schedule_zap_root(kvm, root);
+ }
+ }
+-
+- rcu_read_unlock();
+ }
+
+ /*
+--- a/arch/x86/kvm/mmu/tdp_mmu.h
++++ b/arch/x86/kvm/mmu/tdp_mmu.h
+@@ -65,7 +65,7 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(
+ u64 *spte);
+
+ #ifdef CONFIG_X86_64
+-int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
++void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
+ static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
+
+@@ -86,7 +86,7 @@ static inline bool is_tdp_mmu(struct kvm
+ return sp && is_tdp_mmu_page(sp) && sp->root_count;
+ }
+ #else
+-static inline int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return 0; }
++static inline void kvm_mmu_init_tdp_mmu(struct kvm *kvm) {}
+ static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {}
+ static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
+ static inline bool is_tdp_mmu(struct kvm_mmu *mmu) { return false; }
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -12453,9 +12453,7 @@ int kvm_arch_init_vm(struct kvm *kvm, un
+ if (ret)
+ goto out;
+
+- ret = kvm_mmu_init_vm(kvm);
+- if (ret)
+- goto out_page_track;
++ kvm_mmu_init_vm(kvm);
+
+ ret = static_call(kvm_x86_vm_init)(kvm);
+ if (ret)
+@@ -12500,7 +12498,6 @@ int kvm_arch_init_vm(struct kvm *kvm, un
+
+ out_uninit_mmu:
+ kvm_mmu_uninit_vm(kvm);
+-out_page_track:
+ kvm_page_track_cleanup(kvm);
+ out:
+ return ret;
--- /dev/null
+From 84ee19bffc9306128cd0f1c650e89767079efeff Mon Sep 17 00:00:00 2001
+From: Avri Altman <avri.altman@wdc.com>
+Date: Wed, 27 Sep 2023 10:15:00 +0300
+Subject: mmc: core: Capture correct oemid-bits for eMMC cards
+
+From: Avri Altman <avri.altman@wdc.com>
+
+commit 84ee19bffc9306128cd0f1c650e89767079efeff upstream.
+
+The OEMID is an 8-bit binary number rather than 16-bit as the current code
+parses for. The OEMID occupies bits [111:104] in the CID register, see the
+eMMC spec JESD84-B51 paragraph 7.2.3. It seems that the 16-bit comes from
+the legacy MMC specs (v3.31 and before).
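+
+In sketch form (field layout per JESD84-B51; UNSTUFF_BITS() takes the
+raw response words, a start bit and a field width):
+
+	/* CID bits [111:104]: start bit 104, width 8 */
+	card->cid.oemid = UNSTUFF_BITS(resp, 104, 8);
+	/*
+	 * The old width of 16 also swallowed bits [119:112], which do not
+	 * belong to the OEMID field on eMMC.
+	 */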
+
+Let's fix the parsing by simply moving to 8-bit instead of 16-bit. This
+means we ignore the impact on some of those old MMC cards that may be out
+there, but on the other hand this shouldn't be a problem, as the OEMID
+does not seem to be an important feature for these cards.
+
+Signed-off-by: Avri Altman <avri.altman@wdc.com>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/r/20230927071500.1791882-1-avri.altman@wdc.com
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mmc/core/mmc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -104,7 +104,7 @@ static int mmc_decode_cid(struct mmc_car
+ case 3: /* MMC v3.1 - v3.3 */
+ case 4: /* MMC v4 */
+ card->cid.manfid = UNSTUFF_BITS(resp, 120, 8);
+- card->cid.oemid = UNSTUFF_BITS(resp, 104, 16);
++ card->cid.oemid = UNSTUFF_BITS(resp, 104, 8);
+ card->cid.prod_name[0] = UNSTUFF_BITS(resp, 96, 8);
+ card->cid.prod_name[1] = UNSTUFF_BITS(resp, 88, 8);
+ card->cid.prod_name[2] = UNSTUFF_BITS(resp, 80, 8);
--- /dev/null
+From 32a9cdb8869dc111a0c96cf8e1762be9684af15b Mon Sep 17 00:00:00 2001
+From: Haibo Chen <haibo.chen@nxp.com>
+Date: Wed, 30 Aug 2023 17:39:22 +0800
+Subject: mmc: core: sdio: hold retuning if sdio in 1-bit mode
+
+From: Haibo Chen <haibo.chen@nxp.com>
+
+commit 32a9cdb8869dc111a0c96cf8e1762be9684af15b upstream.
+
+Tuning is only supported in 4-bit or 8-bit mode, so retuning needs to be
+held while the bus is in 1-bit mode.
+
+This issue was found when using the manual tuning method on i.MX93: when
+the system resumed, the SDIO WiFi card tried to switch back to 4-bit
+mode, which first triggered retuning, and all tuning commands failed.
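+
+The fix brackets the bus-width switch with a hold/release pair, so no
+retuning can be triggered while the bus is still in 1-bit mode (taken
+from the diff below):
+
+	mmc_retune_hold_now(host);	/* block retuning in 1-bit mode */
+	err = sdio_enable_4bit_bus(host->card);
+	mmc_retune_release(host);	/* 4-bit bus: retuning is safe again */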
+
+Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
+Acked-by: Adrian Hunter <adrian.hunter@intel.com>
+Fixes: dfa13ebbe334 ("mmc: host: Add facility to support re-tuning")
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/r/20230830093922.3095850-1-haibo.chen@nxp.com
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mmc/core/sdio.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -1089,8 +1089,14 @@ static int mmc_sdio_resume(struct mmc_ho
+ }
+ err = mmc_sdio_reinit_card(host);
+ } else if (mmc_card_wake_sdio_irq(host)) {
+- /* We may have switched to 1-bit mode during suspend */
++ /*
++ * We may have switched to 1-bit mode during suspend,
++	 * so hold retuning, because tuning is only supported
++	 * in 4-bit or 8-bit mode.
++ */
++ mmc_retune_hold_now(host);
+ err = sdio_enable_4bit_bus(host->card);
++ mmc_retune_release(host);
+ }
+
+ if (err)
--- /dev/null
+From c7bb120c1c66672b657e95d0942c989b8275aeb3 Mon Sep 17 00:00:00 2001
+From: Pablo Sun <pablo.sun@mediatek.com>
+Date: Fri, 22 Sep 2023 17:53:48 +0800
+Subject: mmc: mtk-sd: Use readl_poll_timeout_atomic in msdc_reset_hw
+
+From: Pablo Sun <pablo.sun@mediatek.com>
+
+commit c7bb120c1c66672b657e95d0942c989b8275aeb3 upstream.
+
+Use readl_poll_timeout_atomic(), because msdc_reset_hw() may be invoked
+from an IRQ handler in the following context:
+
+ msdc_irq() -> msdc_cmd_done() -> msdc_reset_hw()
+
+The following kernel BUG stack trace can be observed on
+Genio 1200 EVK after initializing MSDC1 hardware during kernel boot:
+
+[ 1.187441] BUG: scheduling while atomic: swapper/0/0/0x00010002
+[ 1.189157] Modules linked in:
+[ 1.204633] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 5.15.42-mtk+modified #1
+[ 1.205713] Hardware name: MediaTek Genio 1200 EVK-P1V2-EMMC (DT)
+[ 1.206484] Call trace:
+[ 1.206796] dump_backtrace+0x0/0x1ac
+[ 1.207266] show_stack+0x24/0x30
+[ 1.207692] dump_stack_lvl+0x68/0x84
+[ 1.208162] dump_stack+0x1c/0x38
+[ 1.208587] __schedule_bug+0x68/0x80
+[ 1.209056] __schedule+0x6ec/0x7c0
+[ 1.209502] schedule+0x7c/0x110
+[ 1.209915] schedule_hrtimeout_range_clock+0xc4/0x1f0
+[ 1.210569] schedule_hrtimeout_range+0x20/0x30
+[ 1.211148] usleep_range_state+0x84/0xc0
+[ 1.211661] msdc_reset_hw+0xc8/0x1b0
+[ 1.212134] msdc_cmd_done.isra.0+0x4ac/0x5f0
+[ 1.212693] msdc_irq+0x104/0x2d4
+[ 1.213121] __handle_irq_event_percpu+0x68/0x280
+[ 1.213725] handle_irq_event+0x70/0x15c
+[ 1.214230] handle_fasteoi_irq+0xb0/0x1a4
+[ 1.214755] handle_domain_irq+0x6c/0x9c
+[ 1.215260] gic_handle_irq+0xc4/0x180
+[ 1.215741] call_on_irq_stack+0x2c/0x54
+[ 1.216245] do_interrupt_handler+0x5c/0x70
+[ 1.216782] el1_interrupt+0x30/0x80
+[ 1.217242] el1h_64_irq_handler+0x1c/0x2c
+[ 1.217769] el1h_64_irq+0x78/0x7c
+[ 1.218206] cpuidle_enter_state+0xc8/0x600
+[ 1.218744] cpuidle_enter+0x44/0x5c
+[ 1.219205] do_idle+0x224/0x2d0
+[ 1.219624] cpu_startup_entry+0x30/0x80
+[ 1.220129] rest_init+0x108/0x134
+[ 1.220568] arch_call_rest_init+0x1c/0x28
+[ 1.221094] start_kernel+0x6c0/0x700
+[ 1.221564] __primary_switched+0xc0/0xc8
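+
+The two iopoll helpers differ only in how they wait between reads:
+readl_poll_timeout() may sleep (usleep_range(), which is what triggers
+the BUG above), whereas readl_poll_timeout_atomic() busy-waits (udelay())
+and is therefore safe in IRQ context. The polling condition is unchanged
+(sketch matching the diff below):
+
+	u32 val;
+
+	/* Busy-wait until the reset bit self-clears; never sleeps. */
+	readl_poll_timeout_atomic(host->base + MSDC_CFG, val,
+				  !(val & MSDC_CFG_RST), 0, 0);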
+
+Fixes: ffaea6ebfe9c ("mmc: mtk-sd: Use readl_poll_timeout instead of open-coded polling")
+Signed-off-by: Pablo Sun <pablo.sun@mediatek.com>
+Reviewed-by: Chen-Yu Tsai <wenst@chromium.org>
+Reviewed-by: AngeloGioacchino Del Regno <angelogioachino.delregno@collabora.com>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/r/20230922095348.22182-1-pablo.sun@mediatek.com
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mmc/host/mtk-sd.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -655,11 +655,11 @@ static void msdc_reset_hw(struct msdc_ho
+ u32 val;
+
+ sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_RST);
+- readl_poll_timeout(host->base + MSDC_CFG, val, !(val & MSDC_CFG_RST), 0, 0);
++ readl_poll_timeout_atomic(host->base + MSDC_CFG, val, !(val & MSDC_CFG_RST), 0, 0);
+
+ sdr_set_bits(host->base + MSDC_FIFOCS, MSDC_FIFOCS_CLR);
+- readl_poll_timeout(host->base + MSDC_FIFOCS, val,
+- !(val & MSDC_FIFOCS_CLR), 0, 0);
++ readl_poll_timeout_atomic(host->base + MSDC_FIFOCS, val,
++ !(val & MSDC_FIFOCS_CLR), 0, 0);
+
+ val = readl(host->base + MSDC_INT);
+ writel(val, host->base + MSDC_INT);
--- /dev/null
+From 1202d617e3d04c8d27a14ef30784a698c48170b3 Mon Sep 17 00:00:00 2001
+From: Sven van Ashbrook <svenva@chromium.org>
+Date: Thu, 31 Aug 2023 16:00:56 +0000
+Subject: mmc: sdhci-pci-gli: fix LPM negotiation so x86/S0ix SoCs can suspend
+
+From: Sven van Ashbrook <svenva@chromium.org>
+
+commit 1202d617e3d04c8d27a14ef30784a698c48170b3 upstream.
+
+To improve the r/w performance of GL9763E, the current driver inhibits LPM
+negotiation while the device is active.
+
+This prevents a large number of SoCs from suspending, notably x86 systems
+which commonly use S0ix as the suspend mechanism - for example, Intel
+Alder Lake and Raptor Lake processors.
+
+Failure description:
+1. Userspace initiates s2idle suspend (e.g. via writing to
+ /sys/power/state)
+2. This switches the runtime_pm device state to active, which disables
+ LPM negotiation, then calls the "regular" suspend callback
+3. With LPM negotiation disabled, the bus cannot enter low-power state
+4. On a large number of SoCs, if the bus is not in a low-power state, S0ix
+ cannot be entered, which in turn prevents the SoC from entering
+ suspend.
+
+Fix by re-enabling LPM negotiation in the device's suspend callback.
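+
+In sketch form, the reworked suspend callback enables LPM negotiation up
+front and rolls it back if either suspend step fails, keeping the device
+in sync with its runtime_pm state (from the diff below):
+
+	gl9763e_set_low_power_negotiation(slot, true);	/* allow L1 entry */
+
+	ret = cqhci_suspend(slot->host->mmc);
+	if (ret)
+		goto err_suspend;
+
+	ret = sdhci_suspend_host(slot->host);
+	if (ret)
+		goto err_suspend_host;
+
+	return 0;
+
+err_suspend_host:
+	cqhci_resume(slot->host->mmc);
+err_suspend:
+	gl9763e_set_low_power_negotiation(slot, false);	/* back in sync */
+	return ret;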
+
+Suggested-by: Stanislaw Kardach <skardach@google.com>
+Fixes: f9e5b33934ce ("mmc: host: Improve I/O read/write performance for GL9763E")
+Cc: stable@vger.kernel.org
+Signed-off-by: Sven van Ashbrook <svenva@chromium.org>
+Acked-by: Adrian Hunter <adrian.hunter@intel.com>
+Link: https://lore.kernel.org/r/20230831160055.v3.1.I7ed1ca09797be2dd76ca914c57d88b32d24dac88@changeid
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mmc/host/sdhci-pci-gli.c | 104 ++++++++++++++++++++++++---------------
+ 1 file changed, 66 insertions(+), 38 deletions(-)
+
+--- a/drivers/mmc/host/sdhci-pci-gli.c
++++ b/drivers/mmc/host/sdhci-pci-gli.c
+@@ -756,42 +756,6 @@ static u32 sdhci_gl9750_readl(struct sdh
+ return value;
+ }
+
+-#ifdef CONFIG_PM_SLEEP
+-static int sdhci_pci_gli_resume(struct sdhci_pci_chip *chip)
+-{
+- struct sdhci_pci_slot *slot = chip->slots[0];
+-
+- pci_free_irq_vectors(slot->chip->pdev);
+- gli_pcie_enable_msi(slot);
+-
+- return sdhci_pci_resume_host(chip);
+-}
+-
+-static int sdhci_cqhci_gli_resume(struct sdhci_pci_chip *chip)
+-{
+- struct sdhci_pci_slot *slot = chip->slots[0];
+- int ret;
+-
+- ret = sdhci_pci_gli_resume(chip);
+- if (ret)
+- return ret;
+-
+- return cqhci_resume(slot->host->mmc);
+-}
+-
+-static int sdhci_cqhci_gli_suspend(struct sdhci_pci_chip *chip)
+-{
+- struct sdhci_pci_slot *slot = chip->slots[0];
+- int ret;
+-
+- ret = cqhci_suspend(slot->host->mmc);
+- if (ret)
+- return ret;
+-
+- return sdhci_suspend_host(slot->host);
+-}
+-#endif
+-
+ static void gl9763e_hs400_enhanced_strobe(struct mmc_host *mmc,
+ struct mmc_ios *ios)
+ {
+@@ -1040,6 +1004,70 @@ static int gl9763e_runtime_resume(struct
+ }
+ #endif
+
++#ifdef CONFIG_PM_SLEEP
++static int sdhci_pci_gli_resume(struct sdhci_pci_chip *chip)
++{
++ struct sdhci_pci_slot *slot = chip->slots[0];
++
++ pci_free_irq_vectors(slot->chip->pdev);
++ gli_pcie_enable_msi(slot);
++
++ return sdhci_pci_resume_host(chip);
++}
++
++static int gl9763e_resume(struct sdhci_pci_chip *chip)
++{
++ struct sdhci_pci_slot *slot = chip->slots[0];
++ int ret;
++
++ ret = sdhci_pci_gli_resume(chip);
++ if (ret)
++ return ret;
++
++ ret = cqhci_resume(slot->host->mmc);
++ if (ret)
++ return ret;
++
++ /*
++ * Disable LPM negotiation to bring device back in sync
++ * with its runtime_pm state.
++ */
++ gl9763e_set_low_power_negotiation(slot, false);
++
++ return 0;
++}
++
++static int gl9763e_suspend(struct sdhci_pci_chip *chip)
++{
++ struct sdhci_pci_slot *slot = chip->slots[0];
++ int ret;
++
++ /*
++ * Certain SoCs can suspend only with the bus in low-
++ * power state, notably x86 SoCs when using S0ix.
++ * Re-enable LPM negotiation to allow entering L1 state
++ * and entering system suspend.
++ */
++ gl9763e_set_low_power_negotiation(slot, true);
++
++ ret = cqhci_suspend(slot->host->mmc);
++ if (ret)
++ goto err_suspend;
++
++ ret = sdhci_suspend_host(slot->host);
++ if (ret)
++ goto err_suspend_host;
++
++ return 0;
++
++err_suspend_host:
++ cqhci_resume(slot->host->mmc);
++err_suspend:
++ gl9763e_set_low_power_negotiation(slot, false);
++ return ret;
++}
++#endif
++
+ static int gli_probe_slot_gl9763e(struct sdhci_pci_slot *slot)
+ {
+ struct pci_dev *pdev = slot->chip->pdev;
+@@ -1147,8 +1175,8 @@ const struct sdhci_pci_fixes sdhci_gl976
+ .probe_slot = gli_probe_slot_gl9763e,
+ .ops = &sdhci_gl9763e_ops,
+ #ifdef CONFIG_PM_SLEEP
+- .resume = sdhci_cqhci_gli_resume,
+- .suspend = sdhci_cqhci_gli_suspend,
++ .resume = gl9763e_resume,
++ .suspend = gl9763e_suspend,
+ #endif
+ #ifdef CONFIG_PM
+ .runtime_suspend = gl9763e_runtime_suspend,
--- /dev/null
+From 6792b7fce610bcd1cf3e07af3607fe7e2c38c1d8 Mon Sep 17 00:00:00 2001
+From: Geert Uytterhoeven <geert+renesas@glider.be>
+Date: Wed, 30 Aug 2023 17:00:34 +0200
+Subject: mtd: physmap-core: Restore map_rom fallback
+
+From: Geert Uytterhoeven <geert+renesas@glider.be>
+
+commit 6792b7fce610bcd1cf3e07af3607fe7e2c38c1d8 upstream.
+
+When the exact mapping type driver was not available, the old
+physmap_of_core driver fell back to mapping the region as ROM.
+Unfortunately this feature was lost when the DT and pdata cases were
+merged. Revive this useful feature.
+
+Fixes: 642b1e8dbed7bbbf ("mtd: maps: Merge physmap_of.c into physmap-core.c")
+Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
+Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Link: https://lore.kernel.org/linux-mtd/550e8c8c1da4c4baeb3d71ff79b14a18d4194f9e.1693407371.git.geert+renesas@glider.be
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mtd/maps/physmap-core.c | 11 +++++++++++
+ 1 file changed, 11 insertions(+)
+
+--- a/drivers/mtd/maps/physmap-core.c
++++ b/drivers/mtd/maps/physmap-core.c
+@@ -552,6 +552,17 @@ static int physmap_flash_probe(struct pl
+ if (info->probe_type) {
+ info->mtds[i] = do_map_probe(info->probe_type,
+ &info->maps[i]);
++
++ /* Fall back to mapping region as ROM */
++ if (!info->mtds[i] && IS_ENABLED(CONFIG_MTD_ROM) &&
++ strcmp(info->probe_type, "map_rom")) {
++ dev_warn(&dev->dev,
++ "map_probe() failed for type %s\n",
++ info->probe_type);
++
++ info->mtds[i] = do_map_probe("map_rom",
++ &info->maps[i]);
++ }
+ } else {
+ int j;
+
--- /dev/null
+From 3a4a893dbb19e229db3b753f0462520b561dee98 Mon Sep 17 00:00:00 2001
+From: Miquel Raynal <miquel.raynal@bootlin.com>
+Date: Mon, 17 Jul 2023 21:42:20 +0200
+Subject: mtd: rawnand: arasan: Ensure program page operations are successful
+
+From: Miquel Raynal <miquel.raynal@bootlin.com>
+
+commit 3a4a893dbb19e229db3b753f0462520b561dee98 upstream.
+
+The NAND core complies with the ONFI specification, which itself
+mentions that after any program or erase operation, a status check
+should be performed to see whether the operation was finished *and*
+successful.
+
+The NAND core offers helpers to finish a page write (sending the
+"PAGE PROG" command, waiting for the NAND chip to be ready again, and
+checking the operation status). But in some cases, advanced controller
+drivers might want to optimize this and craft their own page write
+helper to leverage additional hardware capabilities, thus not always
+using the core facilities.
+
+Some drivers, like this one, do not use the core helper to finish a page
+write because the final cycles are automatically managed by the
+hardware. In this case, additional care must be taken to manually
+perform the final status check.
+
+Let's read the NAND chip status at the end of the page write helper and
+return -EIO upon error.
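+
+The added check is the standard ONFI read-status pattern at the tail of
+a controller-specific write-page helper (the same pattern is applied to
+the marvell and pl353 drivers elsewhere in this series):
+
+	u8 status;
+	int ret;
+
+	/* Check write status on the chip side (ONFI READ STATUS). */
+	ret = nand_status_op(chip, &status);
+	if (ret)
+		return ret;
+
+	if (status & NAND_STATUS_FAIL)	/* the program operation failed */
+		return -EIO;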
+
+Cc: Michal Simek <michal.simek@amd.com>
+Cc: stable@vger.kernel.org
+Fixes: 88ffef1b65cf ("mtd: rawnand: arasan: Support the hardware BCH ECC engine")
+Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Acked-by: Michal Simek <michal.simek@amd.com>
+Link: https://lore.kernel.org/linux-mtd/20230717194221.229778-2-miquel.raynal@bootlin.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mtd/nand/raw/arasan-nand-controller.c | 16 ++++++++++++++--
+ 1 file changed, 14 insertions(+), 2 deletions(-)
+
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -515,6 +515,7 @@ static int anfc_write_page_hw_ecc(struct
+ struct mtd_info *mtd = nand_to_mtd(chip);
+ unsigned int len = mtd->writesize + (oob_required ? mtd->oobsize : 0);
+ dma_addr_t dma_addr;
++ u8 status;
+ int ret;
+ struct anfc_op nfc_op = {
+ .pkt_reg =
+@@ -561,10 +562,21 @@ static int anfc_write_page_hw_ecc(struct
+ }
+
+ /* Spare data is not protected */
+- if (oob_required)
++ if (oob_required) {
+ ret = nand_write_oob_std(chip, page);
++ if (ret)
++ return ret;
++ }
++
++ /* Check write status on the chip side */
++ ret = nand_status_op(chip, &status);
++ if (ret)
++ return ret;
++
++ if (status & NAND_STATUS_FAIL)
++ return -EIO;
+
+- return ret;
++ return 0;
+ }
+
+ static int anfc_sel_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
--- /dev/null
+From 3e01d5254698ea3d18e09d96b974c762328352cd Mon Sep 17 00:00:00 2001
+From: Miquel Raynal <miquel.raynal@bootlin.com>
+Date: Mon, 17 Jul 2023 21:42:19 +0200
+Subject: mtd: rawnand: marvell: Ensure program page operations are successful
+
+From: Miquel Raynal <miquel.raynal@bootlin.com>
+
+commit 3e01d5254698ea3d18e09d96b974c762328352cd upstream.
+
+The NAND core complies with the ONFI specification, which itself
+mentions that after any program or erase operation, a status check
+should be performed to see whether the operation was finished *and*
+successful.
+
+The NAND core offers helpers to finish a page write (sending the
+"PAGE PROG" command, waiting for the NAND chip to be ready again, and
+checking the operation status). But in some cases, advanced controller
+drivers might want to optimize this and craft their own page write
+helper to leverage additional hardware capabilities, thus not always
+using the core facilities.
+
+Some drivers, like this one, do not use the core helper to finish a page
+write because the final cycles are automatically managed by the
+hardware. In this case, additional care must be taken to manually
+perform the final status check.
+
+Let's read the NAND chip status at the end of the page write helper and
+return -EIO upon error.
+
+Cc: stable@vger.kernel.org
+Fixes: 02f26ecf8c77 ("mtd: nand: add reworked Marvell NAND controller driver")
+Reported-by: Aviram Dali <aviramd@marvell.com>
+Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Tested-by: Ravi Chandra Minnikanti <rminnikanti@marvell.com>
+Link: https://lore.kernel.org/linux-mtd/20230717194221.229778-1-miquel.raynal@bootlin.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mtd/nand/raw/marvell_nand.c | 23 ++++++++++++++++++++++-
+ 1 file changed, 22 insertions(+), 1 deletion(-)
+
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -1154,6 +1154,7 @@ static int marvell_nfc_hw_ecc_hmg_do_wri
+ .ndcb[2] = NDCB2_ADDR5_PAGE(page),
+ };
+ unsigned int oob_bytes = lt->spare_bytes + (raw ? lt->ecc_bytes : 0);
++ u8 status;
+ int ret;
+
+ /* NFCv2 needs more information about the operation being executed */
+@@ -1187,7 +1188,18 @@ static int marvell_nfc_hw_ecc_hmg_do_wri
+
+ ret = marvell_nfc_wait_op(chip,
+ PSEC_TO_MSEC(sdr->tPROG_max));
+- return ret;
++ if (ret)
++ return ret;
++
++ /* Check write status on the chip side */
++ ret = nand_status_op(chip, &status);
++ if (ret)
++ return ret;
++
++ if (status & NAND_STATUS_FAIL)
++ return -EIO;
++
++ return 0;
+ }
+
+ static int marvell_nfc_hw_ecc_hmg_write_page_raw(struct nand_chip *chip,
+@@ -1616,6 +1628,7 @@ static int marvell_nfc_hw_ecc_bch_write_
+ int data_len = lt->data_bytes;
+ int spare_len = lt->spare_bytes;
+ int chunk, ret;
++ u8 status;
+
+ marvell_nfc_select_target(chip, chip->cur_cs);
+
+@@ -1652,6 +1665,14 @@ static int marvell_nfc_hw_ecc_bch_write_
+ if (ret)
+ return ret;
+
++ /* Check write status on the chip side */
++ ret = nand_status_op(chip, &status);
++ if (ret)
++ return ret;
++
++ if (status & NAND_STATUS_FAIL)
++ return -EIO;
++
+ return 0;
+ }
+
--- /dev/null
+From 9777cc13fd2c3212618904636354be60835e10bb Mon Sep 17 00:00:00 2001
+From: Miquel Raynal <miquel.raynal@bootlin.com>
+Date: Mon, 17 Jul 2023 21:42:21 +0200
+Subject: mtd: rawnand: pl353: Ensure program page operations are successful
+
+From: Miquel Raynal <miquel.raynal@bootlin.com>
+
+commit 9777cc13fd2c3212618904636354be60835e10bb upstream.
+
+The NAND core complies with the ONFI specification, which itself
+mentions that after any program or erase operation, a status check
+should be performed to see whether the operation was finished *and*
+successful.
+
+The NAND core offers helpers to finish a page write (sending the
+"PAGE PROG" command, waiting for the NAND chip to be ready again, and
+checking the operation status). But in some cases, advanced controller
+drivers might want to optimize this and craft their own page write
+helper to leverage additional hardware capabilities, thus not always
+using the core facilities.
+
+Some drivers, like this one, do not use the core helper to finish a page
+write because the final cycles are automatically managed by the
+hardware. In this case, additional care must be taken to manually
+perform the final status check.
+
+Let's read the NAND chip status at the end of the page write helper and
+return -EIO upon error.
+
+Cc: Michal Simek <michal.simek@amd.com>
+Cc: stable@vger.kernel.org
+Fixes: 08d8c62164a3 ("mtd: rawnand: pl353: Add support for the ARM PL353 SMC NAND controller")
+Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Tested-by: Michal Simek <michal.simek@amd.com>
+Link: https://lore.kernel.org/linux-mtd/20230717194221.229778-3-miquel.raynal@bootlin.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mtd/nand/raw/pl35x-nand-controller.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+--- a/drivers/mtd/nand/raw/pl35x-nand-controller.c
++++ b/drivers/mtd/nand/raw/pl35x-nand-controller.c
+@@ -513,6 +513,7 @@ static int pl35x_nand_write_page_hwecc(s
+ u32 addr1 = 0, addr2 = 0, row;
+ u32 cmd_addr;
+ int i, ret;
++ u8 status;
+
+ ret = pl35x_smc_set_ecc_mode(nfc, chip, PL35X_SMC_ECC_CFG_MODE_APB);
+ if (ret)
+@@ -565,6 +566,14 @@ static int pl35x_nand_write_page_hwecc(s
+ if (ret)
+ goto disable_ecc_engine;
+
++ /* Check write status on the chip side */
++ ret = nand_status_op(chip, &status);
++ if (ret)
++ goto disable_ecc_engine;
++
++ if (status & NAND_STATUS_FAIL)
++ ret = -EIO;
++
+ disable_ecc_engine:
+ pl35x_smc_set_ecc_mode(nfc, chip, PL35X_SMC_ECC_CFG_MODE_BYPASS);
+
--- /dev/null
+From 5279f4a9eed3ee7d222b76511ea7a22c89e7eefd Mon Sep 17 00:00:00 2001
+From: Bibek Kumar Patro <quic_bibekkum@quicinc.com>
+Date: Wed, 13 Sep 2023 12:37:02 +0530
+Subject: mtd: rawnand: qcom: Unmap the right resource upon probe failure
+
+From: Bibek Kumar Patro <quic_bibekkum@quicinc.com>
+
+commit 5279f4a9eed3ee7d222b76511ea7a22c89e7eefd upstream.
+
+We currently provide the physical address of the DMA region
+rather than the output of dma_map_resource() which is obviously wrong.
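+
+dma_map_resource() returns a dma_addr_t that need not equal the CPU
+physical address (e.g. when an IOMMU is involved), and the unmap must be
+given that same handle. A sketch of the pairing, with the map call
+paraphrased from the driver's probe path:
+
+	nandc->base_dma = dma_map_resource(dev, res->start,
+					   resource_size(res),
+					   DMA_BIDIRECTIONAL, 0);
+
+	/* On the error path, unmap what dma_map_resource() returned: */
+	dma_unmap_resource(dev, nandc->base_dma, resource_size(res),
+			   DMA_BIDIRECTIONAL, 0);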
+
+Fixes: 7330fc505af4 ("mtd: rawnand: qcom: stop using phys_to_dma()")
+Cc: stable@vger.kernel.org
+Reviewed-by: Manivannan Sadhasivam <mani@kernel.org>
+Signed-off-by: Bibek Kumar Patro <quic_bibekkum@quicinc.com>
+Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Link: https://lore.kernel.org/linux-mtd/20230913070702.12707-1-quic_bibekkum@quicinc.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mtd/nand/raw/qcom_nandc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -3310,7 +3310,7 @@ err_nandc_alloc:
+ err_aon_clk:
+ clk_disable_unprepare(nandc->core_clk);
+ err_core_clk:
+- dma_unmap_resource(dev, res->start, resource_size(res),
++ dma_unmap_resource(dev, nandc->base_dma, resource_size(res),
+ DMA_BIDIRECTIONAL, 0);
+ return ret;
+ }
--- /dev/null
+From 9836a987860e33943945d4b257729a4f94eae576 Mon Sep 17 00:00:00 2001
+From: Martin Kurbanov <mmkurbanov@sberdevices.ru>
+Date: Tue, 5 Sep 2023 17:56:37 +0300
+Subject: mtd: spinand: micron: correct bitmask for ecc status
+
+From: Martin Kurbanov <mmkurbanov@sberdevices.ru>
+
+commit 9836a987860e33943945d4b257729a4f94eae576 upstream.
+
+The valid ECC status bitmask is 0x70, i.e. bits [6:4] of the status register.
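+
+GENMASK(7, 4) is 0xf0 while GENMASK(6, 4) is 0x70; bit 7 is not part of
+the ECC status field, so the old mask could fold an unrelated status bit
+into the decoded value. A sketch of the decode:
+
+	/* Micron ECC status lives in STATUS bits [6:4] only. */
+	u8 eccsr = status & GENMASK(6, 4);	/* == status & 0x70 */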
+
+Fixes: a508e8875e13 ("mtd: spinand: Add initial support for Micron MT29F2G01ABAGD")
+Signed-off-by: Martin Kurbanov <mmkurbanov@sberdevices.ru>
+Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
+Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Link: https://lore.kernel.org/linux-mtd/20230905145637.139068-1-mmkurbanov@sberdevices.ru
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mtd/nand/spi/micron.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/mtd/nand/spi/micron.c
++++ b/drivers/mtd/nand/spi/micron.c
+@@ -12,7 +12,7 @@
+
+ #define SPINAND_MFR_MICRON 0x2c
+
+-#define MICRON_STATUS_ECC_MASK GENMASK(7, 4)
++#define MICRON_STATUS_ECC_MASK GENMASK(6, 4)
+ #define MICRON_STATUS_ECC_NO_BITFLIPS (0 << 4)
+ #define MICRON_STATUS_ECC_1TO3_BITFLIPS (1 << 4)
+ #define MICRON_STATUS_ECC_4TO6_BITFLIPS (3 << 4)
--- /dev/null
+From f588d72bd95f748849685412b1f0c7959ca228cf Mon Sep 17 00:00:00 2001
+From: Dai Ngo <dai.ngo@oracle.com>
+Date: Mon, 18 Sep 2023 23:30:20 -0700
+Subject: nfs42: client needs to strip file mode's suid/sgid bit after ALLOCATE op
+
+From: Dai Ngo <dai.ngo@oracle.com>
+
+commit f588d72bd95f748849685412b1f0c7959ca228cf upstream.
+
+The Linux NFS server strips the SUID and SGID from the file mode
+on ALLOCATE op.
+
+Modify _nfs42_proc_fallocate to add NFS_INO_REVAL_FORCED to
+nfs_set_cache_invalid's argument to force update of the file
+mode suid/sgid bit.
+
+Suggested-by: Trond Myklebust <trondmy@hammerspace.com>
+Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
+Reviewed-by: Jeff Layton <jlayton@kernel.org>
+Tested-by: Jeff Layton <jlayton@kernel.org>
+Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfs/nfs42proc.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -81,7 +81,8 @@ static int _nfs42_proc_fallocate(struct
+ if (status == 0) {
+ if (nfs_should_remove_suid(inode)) {
+ spin_lock(&inode->i_lock);
+- nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE);
++ nfs_set_cache_invalid(inode,
++ NFS_INO_REVAL_FORCED | NFS_INO_INVALID_MODE);
+ spin_unlock(&inode->i_lock);
+ }
+ status = nfs_post_op_update_inode_force_wcc(inode,
--- /dev/null
+From 379e4adfddd6a2f95a4f2029b8ddcbacf92b21f9 Mon Sep 17 00:00:00 2001
+From: Olga Kornievskaia <kolga@netapp.com>
+Date: Mon, 9 Oct 2023 10:59:01 -0400
+Subject: NFSv4.1: fixup use EXCHGID4_FLAG_USE_PNFS_DS for DS server
+
+From: Olga Kornievskaia <kolga@netapp.com>
+
+commit 379e4adfddd6a2f95a4f2029b8ddcbacf92b21f9 upstream.
+
+This patch fixes commit 51d674a5e488 ("NFSv4.1: use
+EXCHGID4_FLAG_USE_PNFS_DS for DS server"); the purpose of that commit
+was to mark the EXCHANGE_ID sent to the DS with the appropriate flag.
+
+However, the connection to the MDS can return both
+EXCHGID4_FLAG_USE_PNFS_DS and EXCHGID4_FLAG_USE_PNFS_MDS set, but the
+previous patch would only remember USE_PNFS_DS and, on the 2nd
+EXCHANGE_ID, send that to the MDS.
+
+Instead, just mark the pnfs path exclusively.
+
+Fixes: 51d674a5e488 ("NFSv4.1: use EXCHGID4_FLAG_USE_PNFS_DS for DS server")
+Signed-off-by: Olga Kornievskaia <kolga@netapp.com>
+Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfs/nfs4proc.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -8875,8 +8875,6 @@ static int _nfs4_proc_exchange_id(struct
+ /* Save the EXCHANGE_ID verifier session trunk tests */
+ memcpy(clp->cl_confirm.data, argp->verifier.data,
+ sizeof(clp->cl_confirm.data));
+- if (resp->flags & EXCHGID4_FLAG_USE_PNFS_DS)
+- set_bit(NFS_CS_DS, &clp->cl_flags);
+ out:
+ trace_nfs4_exchange_id(clp, status);
+ rpc_put_task(task);
--- /dev/null
+From 5c3f4066462a5f6cac04d3dd81c9f551fabbc6c7 Mon Sep 17 00:00:00 2001
+From: Keith Busch <kbusch@kernel.org>
+Date: Thu, 12 Oct 2023 11:13:51 -0700
+Subject: nvme-pci: add BOGUS_NID for Intel 0a54 device
+
+From: Keith Busch <kbusch@kernel.org>
+
+commit 5c3f4066462a5f6cac04d3dd81c9f551fabbc6c7 upstream.
+
+These devices claim to be CMIC and NMIC capable, so they need special
+consideration to ignore their duplicate identifiers.
+
+Link: https://bugzilla.kernel.org/show_bug.cgi?id=217981
+Reported-by: welsh@cassens.com
+Signed-off-by: Keith Busch <kbusch@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/nvme/host/pci.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3439,7 +3439,8 @@ static const struct pci_device_id nvme_i
+ { PCI_VDEVICE(INTEL, 0x0a54), /* Intel P4500/P4600 */
+ .driver_data = NVME_QUIRK_STRIPE_SIZE |
+ NVME_QUIRK_DEALLOCATE_ZEROES |
+- NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++ NVME_QUIRK_IGNORE_DEV_SUBNQN |
++ NVME_QUIRK_BOGUS_NID, },
+ { PCI_VDEVICE(INTEL, 0x0a55), /* Dell Express Flash P4600 */
+ .driver_data = NVME_QUIRK_STRIPE_SIZE |
+ NVME_QUIRK_DEALLOCATE_ZEROES, },
--- /dev/null
+From 3820c4fdc247b6f0a4162733bdb8ddf8f2e8a1e4 Mon Sep 17 00:00:00 2001
+From: Maurizio Lombardi <mlombard@redhat.com>
+Date: Mon, 31 Jul 2023 12:37:58 +0200
+Subject: nvme-rdma: do not try to stop unallocated queues
+
+From: Maurizio Lombardi <mlombard@redhat.com>
+
+commit 3820c4fdc247b6f0a4162733bdb8ddf8f2e8a1e4 upstream.
+
+Trying to stop a queue which hasn't been allocated will result
+in a warning due to calling mutex_lock() against an uninitialized mutex.
+
+ DEBUG_LOCKS_WARN_ON(lock->magic != lock)
+ WARNING: CPU: 4 PID: 104150 at kernel/locking/mutex.c:579
+
+ Call trace:
+ RIP: 0010:__mutex_lock+0x1173/0x14a0
+ nvme_rdma_stop_queue+0x1b/0xa0 [nvme_rdma]
+ nvme_rdma_teardown_io_queues.part.0+0xb0/0x1d0 [nvme_rdma]
+ nvme_rdma_delete_ctrl+0x50/0x100 [nvme_rdma]
+ nvme_do_delete_ctrl+0x149/0x158 [nvme_core]
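+
+The guard is a flag test before touching any per-queue state (from the
+diff below); queue_lock is only initialized once the queue has actually
+been allocated:
+
+	if (!test_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
+		return;	/* queue_lock was never mutex_init()ed */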
+
+Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
+Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
+Tested-by: Yi Zhang <yi.zhang@redhat.com>
+Signed-off-by: Keith Busch <kbusch@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/nvme/host/rdma.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -643,6 +643,9 @@ static void __nvme_rdma_stop_queue(struc
+
+ static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
+ {
++ if (!test_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
++ return;
++
+ mutex_lock(&queue->queue_lock);
+ if (test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
+ __nvme_rdma_stop_queue(queue);
--- /dev/null
+From 2b32c76e2b0154b98b9322ae7546b8156cd703e6 Mon Sep 17 00:00:00 2001
+From: Keith Busch <kbusch@kernel.org>
+Date: Mon, 16 Oct 2023 13:12:47 -0700
+Subject: nvme: sanitize metadata bounce buffer for reads
+
+From: Keith Busch <kbusch@kernel.org>
+
+commit 2b32c76e2b0154b98b9322ae7546b8156cd703e6 upstream.
+
+User can request more metadata bytes than the device will write. Ensure
+the kernel buffer is initialized so we're not leaking unsanitized memory
+on the copy-out.
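+
+In sketch form: for write (REQ_OP_DRV_OUT) commands the bounce buffer is
+filled from user space, but for reads the device fills it and may write
+fewer metadata bytes than requested, so the remainder must be zeroed
+before the copy back to user space (condensed from the diff below):
+
+	if (req_op(req) == REQ_OP_DRV_OUT) {
+		ret = -EFAULT;
+		if (copy_from_user(buf, ubuf, len))
+			goto out_free_meta;
+	} else {
+		memset(buf, 0, len);	/* don't leak stale kernel memory */
+	}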
+
+Fixes: 0b7f1f26f95a51a ("nvme: use the block layer for userspace passthrough metadata")
+Reviewed-by: Jens Axboe <axboe@kernel.dk>
+Reviewed-by: Christoph Hellwig <hch@lst.de>
+Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
+Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
+Signed-off-by: Keith Busch <kbusch@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/nvme/host/ioctl.c | 10 +++++++---
+ 1 file changed, 7 insertions(+), 3 deletions(-)
+
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -32,9 +32,13 @@ static void *nvme_add_user_metadata(stru
+ if (!buf)
+ goto out;
+
+- ret = -EFAULT;
+- if ((req_op(req) == REQ_OP_DRV_OUT) && copy_from_user(buf, ubuf, len))
+- goto out_free_meta;
++ if (req_op(req) == REQ_OP_DRV_OUT) {
++ ret = -EFAULT;
++ if (copy_from_user(buf, ubuf, len))
++ goto out_free_meta;
++ } else {
++ memset(buf, 0, len);
++ }
+
+ bip = bio_integrity_alloc(bio, GFP_KERNEL, 1);
+ if (IS_ERR(bip)) {
--- /dev/null
+From f965b281fd872b2e18bd82dd97730db9834d0750 Mon Sep 17 00:00:00 2001
+From: Maurizio Lombardi <mlombard@redhat.com>
+Date: Tue, 17 Oct 2023 10:28:45 +0200
+Subject: nvmet-auth: complete a request only after freeing the dhchap pointers
+
+From: Maurizio Lombardi <mlombard@redhat.com>
+
+commit f965b281fd872b2e18bd82dd97730db9834d0750 upstream.
+
+It may happen that the work to destroy a queue
+(for example nvmet_tcp_release_queue_work()) is started while
+an auth-send or auth-receive command is still completing.
+
+nvmet_sq_destroy() will block, waiting for all the references
+to the sq to be dropped, the last reference is then
+dropped when nvmet_req_complete() is called.
+
+When this happens, both nvmet_sq_destroy() and
+nvmet_execute_auth_send()/_receive() will free the dhchap pointers by
+calling nvmet_auth_sq_free().
+Since there isn't any lock, the two threads may race against each other,
+causing double frees and memory corruptions, as reported by KASAN.
+
+Reproduced by stress blktests nvme/041 nvme/042 nvme/043
+
+ nvme nvme2: qid 0: authenticated with hash hmac(sha512) dhgroup ffdhe4096
+ ==================================================================
+ BUG: KASAN: double-free in kfree+0xec/0x4b0
+
+ Call Trace:
+ <TASK>
+ kfree+0xec/0x4b0
+ nvmet_auth_sq_free+0xe1/0x160 [nvmet]
+ nvmet_execute_auth_send+0x482/0x16d0 [nvmet]
+ process_one_work+0x8e5/0x1510
+
+ Allocated by task 191846:
+ __kasan_kmalloc+0x81/0xa0
+ nvmet_auth_ctrl_sesskey+0xf6/0x380 [nvmet]
+ nvmet_auth_reply+0x119/0x990 [nvmet]
+
+ Freed by task 143270:
+ kfree+0xec/0x4b0
+ nvmet_auth_sq_free+0xe1/0x160 [nvmet]
+ process_one_work+0x8e5/0x1510
+
+Fix this bug by calling nvmet_req_complete() only after freeing the
+pointers, so we will prevent the race by holding the sq reference.
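+
+The ordering rule, in sketch form: as long as the request has not been
+completed, the sq reference it holds keeps nvmet_sq_destroy() blocked,
+so the dhchap pointers can be freed without racing against teardown:
+
+	nvmet_auth_sq_free(req->sq);	 /* free dhchap material first */
+	nvmet_req_complete(req, status); /* then drop the sq reference */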
+
+V2: remove redundant code
+
+Fixes: db1312dd9548 ("nvmet: implement basic In-Band Authentication")
+Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
+Reviewed-by: Christoph Hellwig <hch@lst.de>
+Signed-off-by: Keith Busch <kbusch@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/nvme/target/fabrics-cmd-auth.c | 9 ++++++---
+ 1 file changed, 6 insertions(+), 3 deletions(-)
+
+--- a/drivers/nvme/target/fabrics-cmd-auth.c
++++ b/drivers/nvme/target/fabrics-cmd-auth.c
+@@ -337,19 +337,21 @@ done:
+ __func__, ctrl->cntlid, req->sq->qid,
+ status, req->error_loc);
+ req->cqe->result.u64 = 0;
+- nvmet_req_complete(req, status);
+ if (req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2 &&
+ req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_FAILURE2) {
+ unsigned long auth_expire_secs = ctrl->kato ? ctrl->kato : 120;
+
+ mod_delayed_work(system_wq, &req->sq->auth_expired_work,
+ auth_expire_secs * HZ);
+- return;
++ goto complete;
+ }
+ /* Final states, clear up variables */
+ nvmet_auth_sq_free(req->sq);
+ if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE2)
+ nvmet_ctrl_fatal_error(ctrl);
++
++complete:
++ nvmet_req_complete(req, status);
+ }
+
+ static int nvmet_auth_challenge(struct nvmet_req *req, void *d, int al)
+@@ -527,11 +529,12 @@ void nvmet_execute_auth_receive(struct n
+ kfree(d);
+ done:
+ req->cqe->result.u64 = 0;
+- nvmet_req_complete(req, status);
++
+ if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2)
+ nvmet_auth_sq_free(req->sq);
+ else if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) {
+ nvmet_auth_sq_free(req->sq);
+ nvmet_ctrl_fatal_error(ctrl);
+ }
++ nvmet_req_complete(req, status);
+ }
--- /dev/null
+From f63955721a8020e979b99cc417dcb6da3106aa24 Mon Sep 17 00:00:00 2001
+From: Trond Myklebust <trond.myklebust@hammerspace.com>
+Date: Sun, 8 Oct 2023 14:20:19 -0400
+Subject: pNFS: Fix a hang in nfs4_evict_inode()
+
+From: Trond Myklebust <trond.myklebust@hammerspace.com>
+
+commit f63955721a8020e979b99cc417dcb6da3106aa24 upstream.
+
+We are not allowed to call pnfs_mark_matching_lsegs_return() without
+also holding a reference to the layout header, since doing so could lead
+to the reference count going to zero when we call
+pnfs_layout_remove_lseg(). This again can lead to a hang when we get to
+nfs4_evict_inode() and are unable to clear the layout pointer.
+
+pnfs_layout_return_unused_byserver() is guilty of this behaviour, and
+has been seen to trigger the refcount warning prior to a hang.
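+
+The rule being enforced, in sketch form (simplified from the rewritten
+loop in the diff below):
+
+	pnfs_get_layout_hdr(lo);	/* pin the header ... */
+	pnfs_set_plh_return_info(lo, range->iomode, 0);
+	pnfs_mark_matching_lsegs_return(lo, &lo->plh_return_segs, range, 0);
+	/* ... so the refcount cannot reach zero inside the call above */
+	pnfs_put_layout_hdr(lo);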
+
+Fixes: b6d49ecd1081 ("NFSv4: Fix a pNFS layout related use-after-free race when freeing the inode")
+Cc: stable@vger.kernel.org
+Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
+Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfs/pnfs.c | 33 +++++++++++++++++++++++----------
+ 1 file changed, 23 insertions(+), 10 deletions(-)
+
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2634,31 +2634,44 @@ pnfs_should_return_unused_layout(struct
+ return mode == 0;
+ }
+
+-static int
+-pnfs_layout_return_unused_byserver(struct nfs_server *server, void *data)
++static int pnfs_layout_return_unused_byserver(struct nfs_server *server,
++ void *data)
+ {
+ const struct pnfs_layout_range *range = data;
++ const struct cred *cred;
+ struct pnfs_layout_hdr *lo;
+ struct inode *inode;
++ nfs4_stateid stateid;
++ enum pnfs_iomode iomode;
++
+ restart:
+ rcu_read_lock();
+ list_for_each_entry_rcu(lo, &server->layouts, plh_layouts) {
+- if (!pnfs_layout_can_be_returned(lo) ||
++ inode = lo->plh_inode;
++ if (!inode || !pnfs_layout_can_be_returned(lo) ||
+ test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags))
+ continue;
+- inode = lo->plh_inode;
+ spin_lock(&inode->i_lock);
+- if (!pnfs_should_return_unused_layout(lo, range)) {
++ if (!lo->plh_inode ||
++ !pnfs_should_return_unused_layout(lo, range)) {
+ spin_unlock(&inode->i_lock);
+ continue;
+ }
++ pnfs_get_layout_hdr(lo);
++ pnfs_set_plh_return_info(lo, range->iomode, 0);
++ if (pnfs_mark_matching_lsegs_return(lo, &lo->plh_return_segs,
++ range, 0) != 0 ||
++ !pnfs_prepare_layoutreturn(lo, &stateid, &cred, &iomode)) {
++ spin_unlock(&inode->i_lock);
++ rcu_read_unlock();
++ pnfs_put_layout_hdr(lo);
++ cond_resched();
++ goto restart;
++ }
+ spin_unlock(&inode->i_lock);
+- inode = pnfs_grab_inode_layout_hdr(lo);
+- if (!inode)
+- continue;
+ rcu_read_unlock();
+- pnfs_mark_layout_for_return(inode, range);
+- iput(inode);
++ pnfs_send_layoutreturn(lo, &stateid, &cred, iomode, false);
++ pnfs_put_layout_hdr(lo);
+ cond_resched();
+ goto restart;
+ }
--- /dev/null
+From e1c6cfbb3bd1377e2ddcbe06cf8fb1ec323ea7d3 Mon Sep 17 00:00:00 2001
+From: Trond Myklebust <trond.myklebust@hammerspace.com>
+Date: Sun, 8 Oct 2023 14:28:46 -0400
+Subject: pNFS/flexfiles: Check the layout validity in ff_layout_mirror_prepare_stats
+
+From: Trond Myklebust <trond.myklebust@hammerspace.com>
+
+commit e1c6cfbb3bd1377e2ddcbe06cf8fb1ec323ea7d3 upstream.
+
+Ensure that we check the layout pointer and validity before
+dereferencing it in ff_layout_mirror_prepare_stats.
+
+Fixes: 08e2e5bc6c9a ("pNFS/flexfiles: Clean up layoutstats")
+Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
+Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfs/flexfilelayout/flexfilelayout.c | 17 ++++++++++-------
+ 1 file changed, 10 insertions(+), 7 deletions(-)
+
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -2520,9 +2520,9 @@ ff_layout_mirror_prepare_stats(struct pn
+ return i;
+ }
+
+-static int
+-ff_layout_prepare_layoutstats(struct nfs42_layoutstat_args *args)
++static int ff_layout_prepare_layoutstats(struct nfs42_layoutstat_args *args)
+ {
++ struct pnfs_layout_hdr *lo;
+ struct nfs4_flexfile_layout *ff_layout;
+ const int dev_count = PNFS_LAYOUTSTATS_MAXDEV;
+
+@@ -2533,11 +2533,14 @@ ff_layout_prepare_layoutstats(struct nfs
+ return -ENOMEM;
+
+ spin_lock(&args->inode->i_lock);
+- ff_layout = FF_LAYOUT_FROM_HDR(NFS_I(args->inode)->layout);
+- args->num_dev = ff_layout_mirror_prepare_stats(&ff_layout->generic_hdr,
+- &args->devinfo[0],
+- dev_count,
+- NFS4_FF_OP_LAYOUTSTATS);
++ lo = NFS_I(args->inode)->layout;
++ if (lo && pnfs_layout_is_valid(lo)) {
++ ff_layout = FF_LAYOUT_FROM_HDR(lo);
++ args->num_dev = ff_layout_mirror_prepare_stats(
++ &ff_layout->generic_hdr, &args->devinfo[0], dev_count,
++ NFS4_FF_OP_LAYOUTSTATS);
++ } else
++ args->num_dev = 0;
+ spin_unlock(&args->inode->i_lock);
+ if (!args->num_dev) {
+ kfree(args->devinfo);
--- /dev/null
+From 62140a1e4dec4594d5d1e1d353747bf2ef434e8b Mon Sep 17 00:00:00 2001
+From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Date: Tue, 17 Oct 2023 17:18:06 +0300
+Subject: Revert "pinctrl: avoid unsafe code pattern in find_pinctrl()"
+
+From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+
+commit 62140a1e4dec4594d5d1e1d353747bf2ef434e8b upstream.
+
+The commit breaks MMC enumeration on the Intel Merrifield
+platform.
+
+Before:
+[ 36.439057] mmc0: SDHCI controller on PCI [0000:00:01.0] using ADMA
+[ 36.450924] mmc2: SDHCI controller on PCI [0000:00:01.3] using ADMA
+[ 36.459355] mmc1: SDHCI controller on PCI [0000:00:01.2] using ADMA
+[ 36.706399] mmc0: new DDR MMC card at address 0001
+[ 37.058972] mmc2: new ultra high speed DDR50 SDIO card at address 0001
+[ 37.278977] mmcblk0: mmc0:0001 H4G1d 3.64 GiB
+[ 37.297300] mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10
+
+After:
+[ 36.436704] mmc2: SDHCI controller on PCI [0000:00:01.3] using ADMA
+[ 36.436720] mmc1: SDHCI controller on PCI [0000:00:01.0] using ADMA
+[ 36.463685] mmc0: SDHCI controller on PCI [0000:00:01.2] using ADMA
+[ 36.720627] mmc1: new DDR MMC card at address 0001
+[ 37.068181] mmc2: new ultra high speed DDR50 SDIO card at address 0001
+[ 37.279998] mmcblk1: mmc1:0001 H4G1d 3.64 GiB
+[ 37.302670] mmcblk1: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10
+
+This reverts commit c153a4edff6ab01370fcac8e46f9c89cca1060c2.
+
+Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Link: https://lore.kernel.org/r/20231017141806.535191-1-andriy.shevchenko@linux.intel.com
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/pinctrl/core.c | 16 +++++++---------
+ 1 file changed, 7 insertions(+), 9 deletions(-)
+
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1007,20 +1007,17 @@ static int add_setting(struct pinctrl *p
+
+ static struct pinctrl *find_pinctrl(struct device *dev)
+ {
+- struct pinctrl *entry, *p = NULL;
++ struct pinctrl *p;
+
+ mutex_lock(&pinctrl_list_mutex);
+-
+- list_for_each_entry(entry, &pinctrl_list, node) {
+- if (entry->dev == dev) {
+- p = entry;
+- kref_get(&p->users);
+- break;
++ list_for_each_entry(p, &pinctrl_list, node)
++ if (p->dev == dev) {
++ mutex_unlock(&pinctrl_list_mutex);
++ return p;
+ }
+- }
+
+ mutex_unlock(&pinctrl_list_mutex);
+- return p;
++ return NULL;
+ }
+
+ static void pinctrl_free(struct pinctrl *p, bool inlist);
+@@ -1129,6 +1126,7 @@ struct pinctrl *pinctrl_get(struct devic
+ p = find_pinctrl(dev);
+ if (p) {
+ dev_dbg(dev, "obtain a copy of previously claimed pinctrl\n");
++ kref_get(&p->users);
+ return p;
+ }
+
net-store-netdevs-in-an-xarray.patch
net-move-altnames-together-with-the-netdevice.patch
net-smc-fix-smc-clc-failed-issue-when-netdevice-not-.patch
+mtd-rawnand-qcom-unmap-the-right-resource-upon-probe-failure.patch
+mtd-rawnand-pl353-ensure-program-page-operations-are-successful.patch
+mtd-rawnand-marvell-ensure-program-page-operations-are-successful.patch
+mtd-rawnand-arasan-ensure-program-page-operations-are-successful.patch
+mtd-spinand-micron-correct-bitmask-for-ecc-status.patch
+mtd-physmap-core-restore-map_rom-fallback.patch
+dt-bindings-mmc-sdhci-msm-correct-minimum-number-of-clocks.patch
+mmc-sdhci-pci-gli-fix-lpm-negotiation-so-x86-s0ix-socs-can-suspend.patch
+mmc-mtk-sd-use-readl_poll_timeout_atomic-in-msdc_reset_hw.patch
+mmc-core-sdio-hold-retuning-if-sdio-in-1-bit-mode.patch
+mmc-core-capture-correct-oemid-bits-for-emmc-cards.patch
+revert-pinctrl-avoid-unsafe-code-pattern-in-find_pinctrl.patch
+pnfs-fix-a-hang-in-nfs4_evict_inode.patch
+pnfs-flexfiles-check-the-layout-validity-in-ff_layout_mirror_prepare_stats.patch
+nfsv4.1-fixup-use-exchgid4_flag_use_pnfs_ds-for-ds-server.patch
+acpi-irq-fix-incorrect-return-value-in-acpi_register_gsi.patch
+nfs42-client-needs-to-strip-file-mode-s-suid-sgid-bit-after-allocate-op.patch
+nvme-sanitize-metadata-bounce-buffer-for-reads.patch
+nvme-pci-add-bogus_nid-for-intel-0a54-device.patch
+nvmet-auth-complete-a-request-only-after-freeing-the-dhchap-pointers.patch
+nvme-rdma-do-not-try-to-stop-unallocated-queues.patch
+kvm-x86-mmu-stop-zapping-invalidated-tdp-mmu-roots-asynchronously.patch