From: Eric Dumazet
Date: Thu, 30 Jul 2020 01:57:55 +0000 (-0700)
Subject: RDMA/umem: Add a schedule point in ib_umem_get()
X-Git-Tag: v5.9-rc1~117^2~2
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=928da37a229f344424ffc89c9a58feb2368bb018;p=thirdparty%2Fkernel%2Flinux.git

RDMA/umem: Add a schedule point in ib_umem_get()

Mapping as little as 64GB can take more than 10 seconds,
triggering issues on kernels with CONFIG_PREEMPT_NONE=y.

ib_umem_get() already splits the work in 2MB units on x86_64;
adding a cond_resched() in the long-lasting loop is enough to
solve the issue.

Note that sg_alloc_table() can still use more than 100 ms, which
is also problematic. This might be addressed later in
ib_umem_add_sg_table(), adding new blocks in sgl on demand.

Link: https://lore.kernel.org/r/20200730015755.1827498-1-edumazet@google.com
Signed-off-by: Eric Dumazet
Signed-off-by: Jason Gunthorpe
---

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 82455a1392f1d..831bff8d52e54 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -261,6 +261,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	sg = umem->sg_head.sgl;
 
 	while (npages) {
+		cond_resched();
 		ret = pin_user_pages_fast(cur_base,
 					  min_t(unsigned long, npages,
 					  PAGE_SIZE /
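
For illustration only, here is a minimal, self-contained sketch of the pattern
the fix relies on: a chunked pinning loop that offers an explicit scheduling
point once per chunk. The function name pin_range() and its parameter list are
simplified stand-ins, not the real ib_umem_get() signature, and FOLL_LONGTERM
handling is omitted.

/*
 * Illustrative sketch, not part of the patch above. On CONFIG_PREEMPT_NONE
 * kernels, code running in the kernel is only preempted at explicit points
 * such as cond_resched(), so a pinning loop covering tens of GB without one
 * can hold a CPU for many seconds.
 */
#include <linux/kernel.h>	/* min_t() */
#include <linux/mm.h>		/* pin_user_pages_fast(), struct page */
#include <linux/sched.h>	/* cond_resched() */

static int pin_range(unsigned long cur_base, unsigned long npages,
		     unsigned int gup_flags, struct page **page_list)
{
	int ret;

	while (npages) {
		/*
		 * Voluntary preemption point: at most one chunk of
		 * PAGE_SIZE / sizeof(struct page *) pages is pinned
		 * between opportunities to reschedule.
		 */
		cond_resched();
		ret = pin_user_pages_fast(cur_base,
					  min_t(unsigned long, npages,
						PAGE_SIZE /
						sizeof(struct page *)),
					  gup_flags, page_list);
		if (ret < 0)
			return ret;

		cur_base += ret * PAGE_SIZE;
		npages -= ret;
	}
	return 0;
}

The chunk size in the min_t() expression matches the "2MB units on x86_64"
mentioned in the commit message: with 4K pages and 8-byte pointers, one page
of struct page pointers holds 4096 / 8 = 512 entries, and 512 x 4KB = 2MB of
user memory pinned per iteration.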