RDMA/core: add rdma_rw_max_sge() helper for SQ sizing
author    Chuck Lever <chuck.lever@oracle.com>
          Wed, 28 Jan 2026 00:53:59 +0000 (19:53 -0500)
committer Leon Romanovsky <leon@kernel.org>
          Wed, 28 Jan 2026 10:54:53 +0000 (05:54 -0500)
commit    afcae7d7b8a278a6c29e064f99e5bafd4ac1fb37
tree      2d3840b87c5954450daf3c1f7c83dcde9e55cd93
parent    bea28ac14cab25d79ea759138def79aa82e0b428
RDMA/core: add rdma_rw_max_sge() helper for SQ sizing

svc_rdma_accept() computes sc_sq_depth as the sum of rq_depth and the
number of rdma_rw contexts (ctxts). This value is used to allocate the
Send CQ and to initialize the sc_sq_avail credit pool.
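For reference, a simplified sketch of the pre-patch sizing in
svc_rdma_accept() (net/sunrpc/xprtrdma/svc_rdma_transport.c); the calls
and field names follow the upstream source, but the surrounding code is
condensed for illustration:

    /* One Send Queue entry is assumed per rdma_rw context. */
    ctxts = rdma_rw_mr_factor(dev, newxprt->sc_port_num, RPCSVC_MAXPAGES);
    ctxts *= newxprt->sc_max_requests;
    newxprt->sc_sq_depth = rq_depth + ctxts;
    atomic_set(&newxprt->sc_sq_avail, newxprt->sc_sq_depth);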

However, when the device uses memory registration for RDMA operations,
rdma_rw_init_qp() reserves three Send Queue entries per context in the
QP's max_send_wr: one for the RDMA operation itself and one each for
the REG and INV work requests. The Send CQ and credit pool remain sized
for only one work request per context, causing Send Queue exhaustion
under heavy NFS WRITE workloads.
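The inflation happens in rdma_rw_init_qp() (drivers/infiniband/core/rw.c);
abridged from the upstream source:

    void rdma_rw_init_qp(struct ib_device *dev, struct ib_qp_init_attr *attr)
    {
        /* Each context needs at least one RDMA READ or WRITE WR. */
        u32 factor = 1;

        /*
         * Devices that need MRs to perform RDMA READ or WRITE
         * operations take two additional WRs per context: one REG
         * and one INV.
         */
        if (rdma_rw_can_use_mr(dev, attr->port_num))
            factor += 2;

        attr->cap.max_send_wr += factor * attr->cap.max_rdma_ctxs;
    }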

Introduce rdma_rw_max_sge() to compute the actual number of Send Queue
entries required for a given number of rdma_rw contexts. Upper layer
protocols call this helper before creating a Queue Pair so that their
Send CQs and credit accounting match the QP's true capacity.
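The helper's signature is not quoted in this summary; a minimal sketch,
assuming it takes the device, port, and context count (the parameter
list is an assumption, not the actual patch):

    /*
     * Hypothetical sketch: return the number of Send Queue entries
     * that @maxctxts rdma_rw contexts will consume, including the
     * REG/INV work requests needed on MR-based devices.
     */
    unsigned int rdma_rw_max_sge(struct ib_device *dev, u32 port_num,
                                 unsigned int maxctxts)
    {
        unsigned int factor = 1;

        if (rdma_rw_can_use_mr(dev, port_num))
            factor += 2;    /* REG + INV per context */
        return factor * maxctxts;
    }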

Update svc_rdma_accept() to use rdma_rw_max_sge() when computing
sc_sq_depth, ensuring the credit pool reflects the work requests
that rdma_rw_init_qp() will reserve.
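With the helper in place, the sc_sq_depth computation would read roughly
as follows (a sketch matching the description above, not a quote of the
diff):

    /* Size the SQ for every WR the rdma_rw contexts can post. */
    ctxts = rdma_rw_mr_factor(dev, newxprt->sc_port_num, RPCSVC_MAXPAGES);
    ctxts *= newxprt->sc_max_requests;
    newxprt->sc_sq_depth = rq_depth +
                   rdma_rw_max_sge(dev, newxprt->sc_port_num, ctxts);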

Reviewed-by: Christoph Hellwig <hch@lst.de>
Fixes: 00bd1439f464 ("RDMA/rw: Support threshold for registration vs scattering to local pages")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://patch.msgid.link/20260128005400.25147-5-cel@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
drivers/infiniband/core/rw.c
include/rdma/rw.h
net/sunrpc/xprtrdma/svc_rdma_transport.c