--- /dev/null
+From yukuai3@huawei.com Sun Oct 2 17:40:03 2022
+From: Yu Kuai <yukuai3@huawei.com>
+Date: Tue, 27 Sep 2022 21:01:16 +0800
+Subject: scsi: hisi_sas: Revert "scsi: hisi_sas: Limit max hw sectors for v3 HW"
+To: <gregkh@linuxfoundation.org>, <stable@vger.kernel.org>, <john.garry@huawei.com>, <jejb@linux.ibm.com>, <martin.petersen@oracle.com>
+Cc: <linux-scsi@vger.kernel.org>, <yukuai3@huawei.com>, <yukuai1@huaweicloud.com>, <yi.zhang@huawei.com>
+Message-ID: <20220927130116.1013775-1-yukuai3@huawei.com>
+
+From: Yu Kuai <yukuai3@huawei.com>
+
+This reverts commit 24cd0b9bfdff126c066032b0d40ab0962d35e777.
+
+1) Commit 4e89dce72521 ("iommu/iova: Retry from last rb tree node if
+iova search fails") tries to fix the problem that IOVA allocation can
+fail while there is still free space available. It has not been
+backported to 5.10 stable.
+2) Commit fce54ed02757 ("scsi: hisi_sas: Limit max hw sectors for v3
+HW") fixes the performance regression introduced by 1); however, it is
+only a temporary solution, and it causes an I/O performance regression
+of its own because it limits the max I/O size to PAGE_SIZE * 32 (128k
+for a 4k page size).
+3) John Garry posted a patchset to fix the problem properly.
+4) The temporary solution was then reverted upstream.
+
+It's odd that the patch in 2) was backported to 5.10 stable alone;
+the right thing to do is to backport them all together.
+
+Signed-off-by: Yu Kuai <yukuai3@huawei.com>
+Reviewed-by: John Garry <john.garry@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 7 -------
+ 1 file changed, 7 deletions(-)
+
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2738,7 +2738,6 @@ static int slave_configure_v3_hw(struct
+ struct hisi_hba *hisi_hba = shost_priv(shost);
+ struct device *dev = hisi_hba->dev;
+ int ret = sas_slave_configure(sdev);
+- unsigned int max_sectors;
+
+ if (ret)
+ return ret;
+@@ -2756,12 +2755,6 @@ static int slave_configure_v3_hw(struct
+ }
+ }
+
+- /* Set according to IOMMU IOVA caching limit */
+- max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
+- (PAGE_SIZE * 32) >> SECTOR_SHIFT);
+-
+- blk_queue_max_hw_sectors(sdev->request_queue, max_sectors);
+-
+ return 0;
+ }
+
--- /dev/null
+From 82806744fd7dde603b64c151eeddaa4ee62193fd Mon Sep 17 00:00:00 2001
+From: Tianyu Lan <Tianyu.Lan@microsoft.com>
+Date: Tue, 10 May 2022 10:21:09 -0400
+Subject: swiotlb: max mapping size takes min align mask into account
+
+From: Tianyu Lan <Tianyu.Lan@microsoft.com>
+
+commit 82806744fd7dde603b64c151eeddaa4ee62193fd upstream.
+
+swiotlb_find_slots() skips slots according to the io tlb aligned
+mask, calculated from the min align mask and the original physical
+address offset. This affects the max mapping size: when the original
+offset is non-zero, the mapping size cannot reach
+IO_TLB_SEGSIZE * IO_TLB_SIZE. This causes a boot failure in Hyper-V
+Isolation VMs, where swiotlb force is enabled. The SCSI layer uses
+the return value of dma_max_mapping_size() to set the max segment
+size, and that finally calls swiotlb_max_mapping_size(). The Hyper-V
+storage driver sets the min align mask to 4k - 1, so the SCSI layer
+may pass a 256k request buffer at a 0~4k offset, and the storage
+driver then cannot get a swiotlb bounce buffer via the DMA API,
+because swiotlb_find_slots() cannot find a 256k bounce buffer at
+that offset. Make swiotlb_max_mapping_size() take the min align mask
+into account.
+
+Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
+Signed-off-by: Christoph Hellwig <hch@lst.de>
+Signed-off-by: Rishabh Bhatnagar <risbhat@amazon.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/dma/swiotlb.c | 13 ++++++++++++-
+ 1 file changed, 12 insertions(+), 1 deletion(-)
+
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -734,7 +734,18 @@ dma_addr_t swiotlb_map(struct device *de
+
+ size_t swiotlb_max_mapping_size(struct device *dev)
+ {
+- return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
++ int min_align_mask = dma_get_min_align_mask(dev);
++ int min_align = 0;
++
++ /*
++ * swiotlb_find_slots() skips slots according to
++ * min align mask. This affects max mapping size.
++ * Take it into account here.
++ */
++ if (min_align_mask)
++ min_align = roundup(min_align_mask, IO_TLB_SIZE);
++
++ return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
+ }
+
+ bool is_swiotlb_active(void)