+++ /dev/null
-From eed6bfa8b28230382b797a88569f2c7569a1a419 Mon Sep 17 00:00:00 2001
-From: Ryan Roberts <ryan.roberts@arm.com>
-Date: Wed, 26 Feb 2025 12:06:53 +0000
-Subject: arm64: hugetlb: Fix flush_hugetlb_tlb_range() invalidation level
-
-From: Ryan Roberts <ryan.roberts@arm.com>
-
-commit eed6bfa8b28230382b797a88569f2c7569a1a419 upstream.
-
-commit c910f2b65518 ("arm64/mm: Update tlb invalidation routines for
-FEAT_LPA2") changed the "invalidation level unknown" hint from 0 to
-TLBI_TTL_UNKNOWN (INT_MAX). But the fallback "unknown level" path in
-flush_hugetlb_tlb_range() was not updated. So as it stands, when trying
-to invalidate CONT_PMD_SIZE or CONT_PTE_SIZE hugetlb mappings, we will
-spuriously try to invalidate at level 0 on LPA2-enabled systems.
-
-Fix this so that the fallback passes TLBI_TTL_UNKNOWN, and while we are
-at it, explicitly use the correct stride and level for CONT_PMD_SIZE and
-CONT_PTE_SIZE, which should provide a minor optimization.
-
-Cc: stable@vger.kernel.org
-Fixes: c910f2b65518 ("arm64/mm: Update tlb invalidation routines for FEAT_LPA2")
-Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
-Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
-Link: https://lore.kernel.org/r/20250226120656.2400136-4-ryan.roberts@arm.com
-Signed-off-by: Will Deacon <will@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- arch/arm64/include/asm/hugetlb.h | 22 ++++++++++++++++------
- 1 file changed, 16 insertions(+), 6 deletions(-)
-
---- a/arch/arm64/include/asm/hugetlb.h
-+++ b/arch/arm64/include/asm/hugetlb.h
-@@ -68,12 +68,22 @@ static inline void flush_hugetlb_tlb_ran
- {
- unsigned long stride = huge_page_size(hstate_vma(vma));
-
-- if (stride == PMD_SIZE)
-- __flush_tlb_range(vma, start, end, stride, false, 2);
-- else if (stride == PUD_SIZE)
-- __flush_tlb_range(vma, start, end, stride, false, 1);
-- else
-- __flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
-+ switch (stride) {
-+#ifndef __PAGETABLE_PMD_FOLDED
-+ case PUD_SIZE:
-+ __flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
-+ break;
-+#endif
-+ case CONT_PMD_SIZE:
-+ case PMD_SIZE:
-+ __flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
-+ break;
-+ case CONT_PTE_SIZE:
-+ __flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
-+ break;
-+ default:
-+ __flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
-+ }
- }
-
- #endif /* __ASM_HUGETLB_H */
+++ /dev/null
-From 49c87f7677746f3c5bd16c81b23700bb6b88bfd4 Mon Sep 17 00:00:00 2001
-From: Ryan Roberts <ryan.roberts@arm.com>
-Date: Wed, 26 Feb 2025 12:06:52 +0000
-Subject: arm64: hugetlb: Fix huge_ptep_get_and_clear() for non-present ptes
-
-From: Ryan Roberts <ryan.roberts@arm.com>
-
-commit 49c87f7677746f3c5bd16c81b23700bb6b88bfd4 upstream.
-
-arm64 supports multiple huge_pte sizes. Some of the sizes are covered by
-a single pte entry at a particular level (PMD_SIZE, PUD_SIZE), and some
-are covered by multiple ptes at a particular level (CONT_PTE_SIZE,
-CONT_PMD_SIZE). So the function has to figure out the size from the
-huge_pte pointer. This was previously done by walking the pgtable to
-determine the level and by using the PTE_CONT bit to determine the
-number of ptes at the level.
-
-But the PTE_CONT bit is only valid when the pte is present. For
-non-present pte values (e.g. markers, migration entries), the previous
-implementation was therefore erroneously determining the size. There is
-at least one known caller in core-mm, move_huge_pte(), which may call
-huge_ptep_get_and_clear() for a non-present pte. So we must be robust to
-this case. Additionally the "regular" ptep_get_and_clear() is robust to
-being called for non-present ptes so it makes sense to follow the
-behavior.
-
-Fix this by using the new sz parameter which is now provided to the
-function. Additionally when clearing each pte in a contig range, don't
-gather the access and dirty bits if the pte is not present.
-
-An alternative approach that would not require API changes would be to
-store the PTE_CONT bit in a spare bit in the swap entry pte for the
-non-present case. But it felt cleaner to follow other APIs' lead and
-just pass in the size.
-
-As an aside, PTE_CONT is bit 52, which corresponds to bit 40 in the swap
-entry offset field (layout of non-present pte). Since hugetlb is never
-swapped to disk, this field will only be populated for markers, which
-always set this bit to 0, and hwpoison swap entries, which set the offset
-field to a PFN; so it would only ever be 1 for a 52-bit PVA system where
-memory in that high half was poisoned (I think!). So in practice, this
-bit would almost always be zero for non-present ptes and we would only
-clear the first entry if it was actually a contiguous block. That's
-probably a less severe symptom than if it was always interpreted as 1
-and cleared out potentially-present neighboring PTEs.
-
-Cc: stable@vger.kernel.org
-Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
-Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
-Link: https://lore.kernel.org/r/20250226120656.2400136-3-ryan.roberts@arm.com
-Signed-off-by: Will Deacon <will@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- arch/arm64/mm/hugetlbpage.c | 51 ++++++++++++++++----------------------------
- 1 file changed, 19 insertions(+), 32 deletions(-)
-
---- a/arch/arm64/mm/hugetlbpage.c
-+++ b/arch/arm64/mm/hugetlbpage.c
-@@ -100,20 +100,11 @@ static int find_num_contig(struct mm_str
-
- static inline int num_contig_ptes(unsigned long size, size_t *pgsize)
- {
-- int contig_ptes = 0;
-+ int contig_ptes = 1;
-
- *pgsize = size;
-
- switch (size) {
--#ifndef __PAGETABLE_PMD_FOLDED
-- case PUD_SIZE:
-- if (pud_sect_supported())
-- contig_ptes = 1;
-- break;
--#endif
-- case PMD_SIZE:
-- contig_ptes = 1;
-- break;
- case CONT_PMD_SIZE:
- *pgsize = PMD_SIZE;
- contig_ptes = CONT_PMDS;
-@@ -122,6 +113,8 @@ static inline int num_contig_ptes(unsign
- *pgsize = PAGE_SIZE;
- contig_ptes = CONT_PTES;
- break;
-+ default:
-+ WARN_ON(!__hugetlb_valid_size(size));
- }
-
- return contig_ptes;
-@@ -163,24 +156,23 @@ static pte_t get_clear_contig(struct mm_
- unsigned long pgsize,
- unsigned long ncontig)
- {
-- pte_t orig_pte = __ptep_get(ptep);
-- unsigned long i;
-+ pte_t pte, tmp_pte;
-+ bool present;
-
-- for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
-- pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
--
-- /*
-- * If HW_AFDBM is enabled, then the HW could turn on
-- * the dirty or accessed bit for any page in the set,
-- * so check them all.
-- */
-- if (pte_dirty(pte))
-- orig_pte = pte_mkdirty(orig_pte);
--
-- if (pte_young(pte))
-- orig_pte = pte_mkyoung(orig_pte);
-+ pte = __ptep_get_and_clear(mm, addr, ptep);
-+ present = pte_present(pte);
-+ while (--ncontig) {
-+ ptep++;
-+ addr += pgsize;
-+ tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
-+ if (present) {
-+ if (pte_dirty(tmp_pte))
-+ pte = pte_mkdirty(pte);
-+ if (pte_young(tmp_pte))
-+ pte = pte_mkyoung(pte);
-+ }
- }
-- return orig_pte;
-+ return pte;
- }
-
- static pte_t get_clear_contig_flush(struct mm_struct *mm,
-@@ -390,13 +382,8 @@ pte_t huge_ptep_get_and_clear(struct mm_
- {
- int ncontig;
- size_t pgsize;
-- pte_t orig_pte = __ptep_get(ptep);
--
-- if (!pte_cont(orig_pte))
-- return __ptep_get_and_clear(mm, addr, ptep);
--
-- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-
-+ ncontig = num_contig_ptes(sz, &pgsize);
- return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
- }
-
+++ /dev/null
-From 4bffe64c241308d2e8fc77450f4818c04bfdd097 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Tue, 25 Feb 2025 10:27:54 +0530
-Subject: drm/xe: cancel pending job timer before freeing scheduler
-
-From: Tejas Upadhyay <tejas.upadhyay@intel.com>
-
-[ Upstream commit 12c2f962fe71f390951d9242725bc7e608f55927 ]
-
-The async call to __guc_exec_queue_fini_async frees the scheduler
-while a submission may time out and restart. To prevent this race
-condition, the pending job timer should be canceled before freeing
-the scheduler.
-
-V3(MattB):
- - Adjust position of cancel pending job
- - Remove gitlab issue# from commit message
-V2(MattB):
- - Cancel pending jobs before scheduler finish
-
-Fixes: a20c75dba192 ("drm/xe: Call __guc_exec_queue_fini_async direct for KERNEL exec_queues")
-Reviewed-by: Matthew Brost <matthew.brost@intel.com>
-Link: https://patchwork.freedesktop.org/patch/msgid/20250225045754.600905-1-tejas.upadhyay@intel.com
-Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
-(cherry picked from commit 18fbd567e75f9b97b699b2ab4f1fa76b7cf268f6)
-Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- drivers/gpu/drm/xe/xe_guc_submit.c | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
-index fed23304e4da5..3fd2b28b91ab9 100644
---- a/drivers/gpu/drm/xe/xe_guc_submit.c
-+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
-@@ -1215,6 +1215,8 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
-
- if (xe_exec_queue_is_lr(q))
- cancel_work_sync(&ge->lr_tdr);
-+ /* Confirm no work left behind accessing device structures */
-+ cancel_delayed_work_sync(&ge->sched.base.work_tdr);
- release_guc_id(guc, q);
- xe_sched_entity_fini(&ge->entity);
- xe_sched_fini(&ge->sched);
---
-2.39.5
-
ice-fix-deinitializing-vf-in-error-path.patch
ice-avoid-setting-default-rx-vsi-twice-in-switchdev-.patch
tcp-defer-ts_recent-changes-until-req-is-owned.patch
-drm-xe-cancel-pending-job-timer-before-freeing-sched.patch
net-clear-old-fragment-checksum-value-in-napi_reuse_.patch
net-mvpp2-cls-fixed-non-ip-flow-with-vlan-tag-flow-d.patch
net-mlx5-irq-fix-null-string-in-debug-print.patch
drm-amd-display-add-a-quirk-to-enable-edp0-on-dp1.patch
drm-amd-display-fix-hpd-after-gpu-reset.patch
arm64-mm-fix-boot-panic-on-ampere-altra.patch
-arm64-hugetlb-fix-huge_ptep_get_and_clear-for-non-present-ptes.patch
-arm64-hugetlb-fix-flush_hugetlb_tlb_range-invalidation-level.patch
block-remove-zone-write-plugs-when-handling-native-zone-append-writes.patch
i2c-npcm-disable-interrupt-enable-bit-before-devm_request_irq.patch
i2c-ls2x-fix-frequency-division-register-access.patch
+++ /dev/null
-From eed6bfa8b28230382b797a88569f2c7569a1a419 Mon Sep 17 00:00:00 2001
-From: Ryan Roberts <ryan.roberts@arm.com>
-Date: Wed, 26 Feb 2025 12:06:53 +0000
-Subject: arm64: hugetlb: Fix flush_hugetlb_tlb_range() invalidation level
-
-From: Ryan Roberts <ryan.roberts@arm.com>
-
-commit eed6bfa8b28230382b797a88569f2c7569a1a419 upstream.
-
-commit c910f2b65518 ("arm64/mm: Update tlb invalidation routines for
-FEAT_LPA2") changed the "invalidation level unknown" hint from 0 to
-TLBI_TTL_UNKNOWN (INT_MAX). But the fallback "unknown level" path in
-flush_hugetlb_tlb_range() was not updated. So as it stands, when trying
-to invalidate CONT_PMD_SIZE or CONT_PTE_SIZE hugetlb mappings, we will
-spuriously try to invalidate at level 0 on LPA2-enabled systems.
-
-Fix this so that the fallback passes TLBI_TTL_UNKNOWN, and while we are
-at it, explicitly use the correct stride and level for CONT_PMD_SIZE and
-CONT_PTE_SIZE, which should provide a minor optimization.
-
-Cc: stable@vger.kernel.org
-Fixes: c910f2b65518 ("arm64/mm: Update tlb invalidation routines for FEAT_LPA2")
-Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
-Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
-Link: https://lore.kernel.org/r/20250226120656.2400136-4-ryan.roberts@arm.com
-Signed-off-by: Will Deacon <will@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- arch/arm64/include/asm/hugetlb.h | 22 ++++++++++++++++------
- 1 file changed, 16 insertions(+), 6 deletions(-)
-
---- a/arch/arm64/include/asm/hugetlb.h
-+++ b/arch/arm64/include/asm/hugetlb.h
-@@ -76,12 +76,22 @@ static inline void flush_hugetlb_tlb_ran
- {
- unsigned long stride = huge_page_size(hstate_vma(vma));
-
-- if (stride == PMD_SIZE)
-- __flush_tlb_range(vma, start, end, stride, false, 2);
-- else if (stride == PUD_SIZE)
-- __flush_tlb_range(vma, start, end, stride, false, 1);
-- else
-- __flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
-+ switch (stride) {
-+#ifndef __PAGETABLE_PMD_FOLDED
-+ case PUD_SIZE:
-+ __flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
-+ break;
-+#endif
-+ case CONT_PMD_SIZE:
-+ case PMD_SIZE:
-+ __flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
-+ break;
-+ case CONT_PTE_SIZE:
-+ __flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
-+ break;
-+ default:
-+ __flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
-+ }
- }
-
- #endif /* __ASM_HUGETLB_H */
+++ /dev/null
-From 49c87f7677746f3c5bd16c81b23700bb6b88bfd4 Mon Sep 17 00:00:00 2001
-From: Ryan Roberts <ryan.roberts@arm.com>
-Date: Wed, 26 Feb 2025 12:06:52 +0000
-Subject: arm64: hugetlb: Fix huge_ptep_get_and_clear() for non-present ptes
-
-From: Ryan Roberts <ryan.roberts@arm.com>
-
-commit 49c87f7677746f3c5bd16c81b23700bb6b88bfd4 upstream.
-
-arm64 supports multiple huge_pte sizes. Some of the sizes are covered by
-a single pte entry at a particular level (PMD_SIZE, PUD_SIZE), and some
-are covered by multiple ptes at a particular level (CONT_PTE_SIZE,
-CONT_PMD_SIZE). So the function has to figure out the size from the
-huge_pte pointer. This was previously done by walking the pgtable to
-determine the level and by using the PTE_CONT bit to determine the
-number of ptes at the level.
-
-But the PTE_CONT bit is only valid when the pte is present. For
-non-present pte values (e.g. markers, migration entries), the previous
-implementation was therefore erroneously determining the size. There is
-at least one known caller in core-mm, move_huge_pte(), which may call
-huge_ptep_get_and_clear() for a non-present pte. So we must be robust to
-this case. Additionally the "regular" ptep_get_and_clear() is robust to
-being called for non-present ptes so it makes sense to follow the
-behavior.
-
-Fix this by using the new sz parameter which is now provided to the
-function. Additionally when clearing each pte in a contig range, don't
-gather the access and dirty bits if the pte is not present.
-
-An alternative approach that would not require API changes would be to
-store the PTE_CONT bit in a spare bit in the swap entry pte for the
-non-present case. But it felt cleaner to follow other APIs' lead and
-just pass in the size.
-
-As an aside, PTE_CONT is bit 52, which corresponds to bit 40 in the swap
-entry offset field (layout of non-present pte). Since hugetlb is never
-swapped to disk, this field will only be populated for markers, which
-always set this bit to 0, and hwpoison swap entries, which set the offset
-field to a PFN; so it would only ever be 1 for a 52-bit PVA system where
-memory in that high half was poisoned (I think!). So in practice, this
-bit would almost always be zero for non-present ptes and we would only
-clear the first entry if it was actually a contiguous block. That's
-probably a less severe symptom than if it was always interpreted as 1
-and cleared out potentially-present neighboring PTEs.
-
-Cc: stable@vger.kernel.org
-Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
-Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
-Link: https://lore.kernel.org/r/20250226120656.2400136-3-ryan.roberts@arm.com
-Signed-off-by: Will Deacon <will@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- arch/arm64/mm/hugetlbpage.c | 51 ++++++++++++++++----------------------------
- 1 file changed, 19 insertions(+), 32 deletions(-)
-
---- a/arch/arm64/mm/hugetlbpage.c
-+++ b/arch/arm64/mm/hugetlbpage.c
-@@ -100,20 +100,11 @@ static int find_num_contig(struct mm_str
-
- static inline int num_contig_ptes(unsigned long size, size_t *pgsize)
- {
-- int contig_ptes = 0;
-+ int contig_ptes = 1;
-
- *pgsize = size;
-
- switch (size) {
--#ifndef __PAGETABLE_PMD_FOLDED
-- case PUD_SIZE:
-- if (pud_sect_supported())
-- contig_ptes = 1;
-- break;
--#endif
-- case PMD_SIZE:
-- contig_ptes = 1;
-- break;
- case CONT_PMD_SIZE:
- *pgsize = PMD_SIZE;
- contig_ptes = CONT_PMDS;
-@@ -122,6 +113,8 @@ static inline int num_contig_ptes(unsign
- *pgsize = PAGE_SIZE;
- contig_ptes = CONT_PTES;
- break;
-+ default:
-+ WARN_ON(!__hugetlb_valid_size(size));
- }
-
- return contig_ptes;
-@@ -163,24 +156,23 @@ static pte_t get_clear_contig(struct mm_
- unsigned long pgsize,
- unsigned long ncontig)
- {
-- pte_t orig_pte = __ptep_get(ptep);
-- unsigned long i;
-+ pte_t pte, tmp_pte;
-+ bool present;
-
-- for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
-- pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
--
-- /*
-- * If HW_AFDBM is enabled, then the HW could turn on
-- * the dirty or accessed bit for any page in the set,
-- * so check them all.
-- */
-- if (pte_dirty(pte))
-- orig_pte = pte_mkdirty(orig_pte);
--
-- if (pte_young(pte))
-- orig_pte = pte_mkyoung(orig_pte);
-+ pte = __ptep_get_and_clear(mm, addr, ptep);
-+ present = pte_present(pte);
-+ while (--ncontig) {
-+ ptep++;
-+ addr += pgsize;
-+ tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
-+ if (present) {
-+ if (pte_dirty(tmp_pte))
-+ pte = pte_mkdirty(pte);
-+ if (pte_young(tmp_pte))
-+ pte = pte_mkyoung(pte);
-+ }
- }
-- return orig_pte;
-+ return pte;
- }
-
- static pte_t get_clear_contig_flush(struct mm_struct *mm,
-@@ -401,13 +393,8 @@ pte_t huge_ptep_get_and_clear(struct mm_
- {
- int ncontig;
- size_t pgsize;
-- pte_t orig_pte = __ptep_get(ptep);
--
-- if (!pte_cont(orig_pte))
-- return __ptep_get_and_clear(mm, addr, ptep);
--
-- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-
-+ ncontig = num_contig_ptes(sz, &pgsize);
- return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
- }
-
+++ /dev/null
-From 2a58a4edc3e927fd355744e285ba6cbb47f782f2 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Tue, 25 Feb 2025 10:27:54 +0530
-Subject: drm/xe: cancel pending job timer before freeing scheduler
-
-From: Tejas Upadhyay <tejas.upadhyay@intel.com>
-
-[ Upstream commit 12c2f962fe71f390951d9242725bc7e608f55927 ]
-
-The async call to __guc_exec_queue_fini_async frees the scheduler
-while a submission may time out and restart. To prevent this race
-condition, the pending job timer should be canceled before freeing
-the scheduler.
-
-V3(MattB):
- - Adjust position of cancel pending job
- - Remove gitlab issue# from commit message
-V2(MattB):
- - Cancel pending jobs before scheduler finish
-
-Fixes: a20c75dba192 ("drm/xe: Call __guc_exec_queue_fini_async direct for KERNEL exec_queues")
-Reviewed-by: Matthew Brost <matthew.brost@intel.com>
-Link: https://patchwork.freedesktop.org/patch/msgid/20250225045754.600905-1-tejas.upadhyay@intel.com
-Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
-(cherry picked from commit 18fbd567e75f9b97b699b2ab4f1fa76b7cf268f6)
-Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- drivers/gpu/drm/xe/xe_guc_submit.c | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
-index 6f4a9812b4f4a..fe17e9ba86725 100644
---- a/drivers/gpu/drm/xe/xe_guc_submit.c
-+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
-@@ -1238,6 +1238,8 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
-
- if (xe_exec_queue_is_lr(q))
- cancel_work_sync(&ge->lr_tdr);
-+ /* Confirm no work left behind accessing device structures */
-+ cancel_delayed_work_sync(&ge->sched.base.work_tdr);
- release_guc_id(guc, q);
- xe_sched_entity_fini(&ge->entity);
- xe_sched_fini(&ge->sched);
---
-2.39.5
-
ice-fix-deinitializing-vf-in-error-path.patch
ice-avoid-setting-default-rx-vsi-twice-in-switchdev-.patch
tcp-defer-ts_recent-changes-until-req-is-owned.patch
-drm-xe-cancel-pending-job-timer-before-freeing-sched.patch
net-clear-old-fragment-checksum-value-in-napi_reuse_.patch
net-mvpp2-cls-fixed-non-ip-flow-with-vlan-tag-flow-d.patch
net-mlx5-fix-vport-qos-cleanup-on-error.patch
drm-amd-display-add-a-quirk-to-enable-edp0-on-dp1.patch
drm-amd-display-fix-hpd-after-gpu-reset.patch
arm64-mm-fix-boot-panic-on-ampere-altra.patch
-arm64-hugetlb-fix-huge_ptep_get_and_clear-for-non-present-ptes.patch
-arm64-hugetlb-fix-flush_hugetlb_tlb_range-invalidation-level.patch
block-remove-zone-write-plugs-when-handling-native-zone-append-writes.patch
btrfs-skip-inodes-without-loaded-extent-maps-when-shrinking-extent-maps.patch
btrfs-do-regular-iput-instead-of-delayed-iput-during-extent-map-shrinking.patch