From: Matthew Wilcox
Date: Wed, 22 Aug 2018 04:56:30 +0000 (-0700)
Subject: userfaultfd: use fault_wqh lock
X-Git-Tag: v4.19-rc1~59^2~89
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=c430d1e848ff1240d126e79780f3c26208b8aed9;p=thirdparty%2Fkernel%2Flinux.git

userfaultfd: use fault_wqh lock

The userfaultfd code currently uses the unlocked waitqueue helpers for
managing fault_wqh, but instead of holding the lock of this waitqueue
around those calls, it holds the lock of fault_pending_wqh, which is a
different waitqueue instance.  Given that fault_wqh is not exposed to
the rest of the kernel this actually works ok at the moment, but it
prevents the userfaultfd locking rules from being enforced using
lockdep.  Switch to the internally locked waitqueue helpers instead.
This means that the lock inside fault_wqh now nests inside the
fault_pending_wqh lock, but that's not a problem since the fault_wqh
lock was entirely unused before.

[hch@lst.de: slight changelog updates]
[rppt@linux.vnet.ibm.com: spotted changelog spellos]
Link: http://lkml.kernel.org/r/20171214152344.6880-3-hch@lst.de
Signed-off-by: Matthew Wilcox
Signed-off-by: Christoph Hellwig
Reviewed-by: Mike Rapoport
Cc: Al Viro
Cc: Andrea Arcangeli
Cc: Ingo Molnar
Cc: Jason Baron
Cc: Peter Zijlstra
Cc: Davidlohr Bueso
Cc: Matthew Wilcox
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 15c265d450bf8..f649023b19b5c 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -910,7 +910,7 @@ wakeup:
 	 */
 	spin_lock(&ctx->fault_pending_wqh.lock);
 	__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, &range);
-	__wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, &range);
+	__wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, &range);
 	spin_unlock(&ctx->fault_pending_wqh.lock);
 
 	/* Flush pending events that may still wait on event_wqh */
@@ -1066,7 +1066,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
 			 * anyway.
 			 */
 			list_del(&uwq->wq.entry);
-			__add_wait_queue(&ctx->fault_wqh, &uwq->wq);
+			add_wait_queue(&ctx->fault_wqh, &uwq->wq);
 
 			write_seqcount_end(&ctx->refile_seq);
 
@@ -1215,7 +1215,7 @@ static void __wake_userfault(struct userfaultfd_ctx *ctx,
 		__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, range);
 	if (waitqueue_active(&ctx->fault_wqh))
-		__wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, range);
+		__wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, range);
 	spin_unlock(&ctx->fault_pending_wqh.lock);
 }
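
For illustration only (not part of the commit): the two wake-up styles
differ in which side takes the waitqueue's internal lock.
__wake_up_locked_key() expects the caller to already hold
wq_head->lock, while __wake_up() acquires and releases it internally.
The sketch below shows the locking pattern the patch moves to; the
helper name wake_fault_range() is made up for the example, and the
snippet assumes the surrounding definitions from fs/userfaultfd.c
(struct userfaultfd_ctx, struct userfaultfd_wake_range), so it is a
reading aid rather than buildable stand-alone code.

	/*
	 * Hypothetical helper, for illustration only.
	 *
	 * fault_pending_wqh is still woken with the caller-locked helper
	 * under its own lock; fault_wqh is woken with __wake_up(), which
	 * takes fault_wqh.lock internally.  fault_wqh.lock therefore nests
	 * inside fault_pending_wqh.lock, and lockdep can see both locks.
	 */
	static void wake_fault_range(struct userfaultfd_ctx *ctx,
				     struct userfaultfd_wake_range *range)
	{
		spin_lock(&ctx->fault_pending_wqh.lock);
		/* caller-locked wakeup: we already hold fault_pending_wqh.lock */
		__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, range);
		/* internally locked wakeup: __wake_up() takes fault_wqh.lock itself */
		__wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, range);
		spin_unlock(&ctx->fault_pending_wqh.lock);
	}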