From: Greg Kroah-Hartman
Date: Sun, 2 Oct 2022 15:40:58 +0000 (+0200)
Subject: 5.10-stable patches
X-Git-Tag: v5.19.13~18
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=ac81c78047f58f6c8b6197774034abd66bec208a;p=thirdparty%2Fkernel%2Fstable-queue.git

5.10-stable patches

added patches:
	scsi-hisi_sas-revert-scsi-hisi_sas-limit-max-hw-sectors-for-v3-hw.patch
	swiotlb-max-mapping-size-takes-min-align-mask-into-account.patch
---

diff --git a/queue-5.10/scsi-hisi_sas-revert-scsi-hisi_sas-limit-max-hw-sectors-for-v3-hw.patch b/queue-5.10/scsi-hisi_sas-revert-scsi-hisi_sas-limit-max-hw-sectors-for-v3-hw.patch
new file mode 100644
index 00000000000..9e203b91763
--- /dev/null
+++ b/queue-5.10/scsi-hisi_sas-revert-scsi-hisi_sas-limit-max-hw-sectors-for-v3-hw.patch
@@ -0,0 +1,56 @@
+From yukuai3@huawei.com Sun Oct 2 17:40:03 2022
+From: Yu Kuai
+Date: Tue, 27 Sep 2022 21:01:16 +0800
+Subject: scsi: hisi_sas: Revert "scsi: hisi_sas: Limit max hw sectors for v3 HW"
+To: , , , ,
+Cc: , , ,
+Message-ID: <20220927130116.1013775-1-yukuai3@huawei.com>
+
+From: Yu Kuai
+
+This reverts commit 24cd0b9bfdff126c066032b0d40ab0962d35e777.
+
+1) commit 4e89dce72521 ("iommu/iova: Retry from last rb tree node if
+iova search fails") tries to fix the case where iova allocation can
+fail while there is still free space available. This is not
+backported to 5.10 stable.
+2) commit fce54ed02757 ("scsi: hisi_sas: Limit max hw sectors for v3
+HW") fixes the performance regression introduced by 1); however, it is
+only a temporary solution and causes an I/O performance regression
+because it limits the max I/O size to PAGE_SIZE * 32 (128k for 4k pages).
+3) John Garry posted a patchset to fix the problem.
+4) The temporary solution is reverted.
+
+It's odd that the patch in 2) was backported to 5.10 stable alone;
+the right thing to do is to backport them all together.
+
+Signed-off-by: Yu Kuai
+Reviewed-by: John Garry
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/scsi/hisi_sas/hisi_sas_v3_hw.c |    7 -------
+ 1 file changed, 7 deletions(-)
+
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2738,7 +2738,6 @@ static int slave_configure_v3_hw(struct
+ 	struct hisi_hba *hisi_hba = shost_priv(shost);
+ 	struct device *dev = hisi_hba->dev;
+ 	int ret = sas_slave_configure(sdev);
+-	unsigned int max_sectors;
+ 
+ 	if (ret)
+ 		return ret;
+@@ -2756,12 +2755,6 @@ static int slave_configure_v3_hw(struct
+ 		}
+ 	}
+ 
+-	/* Set according to IOMMU IOVA caching limit */
+-	max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
+-			    (PAGE_SIZE * 32) >> SECTOR_SHIFT);
+-
+-	blk_queue_max_hw_sectors(sdev->request_queue, max_sectors);
+-
+ 	return 0;
+ }
+ 
diff --git a/queue-5.10/series b/queue-5.10/series
index 1dec3ed15c7..ab9ed14ee85 100644
--- a/queue-5.10/series
+++ b/queue-5.10/series
@@ -26,3 +26,5 @@ mm-migrate_device.c-flush-tlb-while-holding-ptl.patch
 mm-fix-madivse_pageout-mishandling-on-non-lru-page.patch
 media-dvb_vb2-fix-possible-out-of-bound-access.patch
 media-rkvdec-disable-h.264-error-detection.patch
+swiotlb-max-mapping-size-takes-min-align-mask-into-account.patch
+scsi-hisi_sas-revert-scsi-hisi_sas-limit-max-hw-sectors-for-v3-hw.patch
diff --git a/queue-5.10/swiotlb-max-mapping-size-takes-min-align-mask-into-account.patch b/queue-5.10/swiotlb-max-mapping-size-takes-min-align-mask-into-account.patch
new file mode 100644
index 00000000000..abc9f9ead22
--- /dev/null
+++ b/queue-5.10/swiotlb-max-mapping-size-takes-min-align-mask-into-account.patch
@@ -0,0 +1,53 @@
+From 82806744fd7dde603b64c151eeddaa4ee62193fd Mon Sep 17 00:00:00 2001
+From: Tianyu Lan
+Date: Tue, 10 May 2022 10:21:09 -0400
+Subject: swiotlb: max mapping size takes min align mask into account
+
+From: Tianyu Lan
+
+commit 82806744fd7dde603b64c151eeddaa4ee62193fd upstream.
+
+swiotlb_find_slots() skips slots according to io tlb aligned mask
+calculated from min aligned mask and original physical address
+offset. This affects max mapping size. The mapping size can't
+achieve the IO_TLB_SEGSIZE * IO_TLB_SIZE when original offset is
+non-zero. This will cause system boot up failure in Hyper-V
+Isolation VM where swiotlb force is enabled. Scsi layer use return
+value of dma_max_mapping_size() to set max segment size and it
+finally calls swiotlb_max_mapping_size(). Hyper-V storage driver
+sets min align mask to 4k - 1. Scsi layer may pass 256k length of
+request buffer with 0~4k offset and Hyper-V storage driver can't
+get swiotlb bounce buffer via DMA API. Swiotlb_find_slots() can't
+find 256k length bounce buffer with offset. Make swiotlb_max_mapping
+_size() take min align mask into account.
+
+Signed-off-by: Tianyu Lan
+Signed-off-by: Christoph Hellwig
+Signed-off-by: Rishabh Bhatnagar
+Signed-off-by: Greg Kroah-Hartman
+---
+ kernel/dma/swiotlb.c |   13 ++++++++++++-
+ 1 file changed, 12 insertions(+), 1 deletion(-)
+
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -734,7 +734,18 @@ dma_addr_t swiotlb_map(struct device *de
+ 
+ size_t swiotlb_max_mapping_size(struct device *dev)
+ {
+-	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
++	int min_align_mask = dma_get_min_align_mask(dev);
++	int min_align = 0;
++
++	/*
++	 * swiotlb_find_slots() skips slots according to
++	 * min align mask. This affects max mapping size.
++	 * Take it into acount here.
++	 */
++	if (min_align_mask)
++		min_align = roundup(min_align_mask, IO_TLB_SIZE);
++
++	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
+ }
+ 
+ bool is_swiotlb_active(void)
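
As a rough illustration of the arithmetic introduced by the swiotlb hunk above, here is a minimal stand-alone user-space sketch. The IO_TLB_SIZE and IO_TLB_SEGSIZE values below mirror the kernel defaults (2048-byte slots, 128 slots per segment), and the 4k - 1 mask is the value the commit message attributes to the Hyper-V storage driver; these constants are assumptions, not taken from this patch, so treat the sketch as illustrative only.

/* Sketch only: kernel-default constants assumed, not taken from this patch. */
#include <stdio.h>
#include <stddef.h>

#define IO_TLB_SHIFT   11
#define IO_TLB_SIZE    ((size_t)1 << IO_TLB_SHIFT)  /* 2048-byte swiotlb slot */
#define IO_TLB_SEGSIZE 128                          /* max contiguous slots   */

/* round x up to the next multiple of y (mirrors the kernel's roundup()) */
static size_t roundup_size(size_t x, size_t y)
{
	return ((x + y - 1) / y) * y;
}

/* same shape as the patched swiotlb_max_mapping_size(), minus struct device */
static size_t max_mapping_size(size_t min_align_mask)
{
	size_t min_align = 0;

	/* slots swiotlb_find_slots() may skip to honour the min align mask */
	if (min_align_mask)
		min_align = roundup_size(min_align_mask, IO_TLB_SIZE);

	return IO_TLB_SIZE * IO_TLB_SEGSIZE - min_align;
}

int main(void)
{
	/* no mask: the old unconditional limit, 262144 bytes (256 KiB) */
	printf("mask 0x0   -> %zu bytes\n", max_mapping_size(0));

	/* 4k - 1 mask (Hyper-V storage driver, per the commit message):
	 * 258048 bytes, leaving room for up to one page of alignment slop
	 * inside the 128-slot bounce-buffer segment */
	printf("mask 0xfff -> %zu bytes\n", max_mapping_size(0xfff));
	return 0;
}

With the 4k - 1 mask, the reported limit drops from 262144 to 258048 bytes, so a request plus its intra-page offset still fits in the IO_TLB_SEGSIZE * IO_TLB_SIZE segment that swiotlb_find_slots() can actually allocate.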