git.ipfire.org Git - thirdparty/kernel/stable.git/commitdiff
RDMA/rxe: Fix data copy for IB_SEND_INLINE
author    Honggang LI <honggangli@163.com>
          Thu, 16 May 2024 09:50:52 +0000 (17:50 +0800)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Thu, 27 Jun 2024 11:52:27 +0000 (13:52 +0200)
commit 03fa18a992d5626fd7bf3557a52e826bf8b326b3 upstream.

For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
specified in the sge list are placed inline in the Send Request.

The data must be copied by the CPU from the virtual addresses that the
sge list's DMA addresses refer to. In rxe (soft-RoCE), these "DMA
addresses" are kernel virtual addresses, so the copy needs the pointer
returned by ib_virt_dma_to_ptr(), not the struct page returned by
ib_virt_dma_to_page().

Cc: stable@kernel.org
Fixes: 8d7c7c0eeb74 ("RDMA: Add ib_virt_dma_to_page()")
Signed-off-by: Honggang LI <honggangli@163.com>
Link: https://lore.kernel.org/r/20240516095052.542767-1-honggangli@163.com
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/infiniband/sw/rxe/rxe_verbs.c

index a49784e5156c54212d2af6b727a0687540b91feb..a7e9510666e23356e6d9e48bae89a32af842a32c 100644 (file)
@@ -812,7 +812,7 @@ static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
        int i;
 
        for (i = 0; i < ibwr->num_sge; i++, sge++) {
-               memcpy(p, ib_virt_dma_to_page(sge->addr), sge->length);
+               memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
                p += sge->length;
        }
 }