mutex_lock_io() differs from mutex_lock() in that it may call
io_schedule() when a task must sleep waiting for the lock. This
distinction only makes sense in block I/O or memory reclaim paths,
where giving I/O a chance to make progress is useful.
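For reference, mutex_lock_io() is essentially a thin wrapper that
brackets the ordinary mutex acquisition with iowait accounting. A
rough sketch, paraphrased from kernel/locking/mutex.c (exact details
vary across kernel versions):

    void __sched mutex_lock_io(struct mutex *lock)
    {
            int token;

            /*
             * Mark the task as in iowait so that any sleep while
             * waiting for the lock is accounted as I/O wait, then
             * take the mutex as usual and restore the old state.
             */
            token = io_schedule_prepare();
            mutex_lock(lock);
            io_schedule_finish(token);
    }

The only difference from mutex_lock() is that accounting, which is
why the choice of primitive matters only on paths that actually sit
in front of I/O completion or reclaim.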
At this call site, cxl_pci_mbox_send(), the mutex protects an MMIO
mailbox. The task holding the lock is not blocking I/O progress, so
calling io_schedule(), as mutex_lock_io() may do, has no practical
effect.
Although there is no functional change, using the locking primitive
that more accurately reflects the semantics and intended use of the
lock improves code clarity and avoids misleading readers and tools.
[ dj: Dropped fixes tag, no need to backport ]
Reported-by: Alok Tiwari <alok.a.tiwari@oracle.com>
Closes: https://lore.kernel.org/linux-cxl/0d2af1e8-7f1b-438c-a090-fd366c8c63e0@oracle.com/
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Alison Schofield <alison.schofield@intel.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Link: https://patch.msgid.link/20250529205117.1990465-1-alison.schofield@intel.com
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
{
int rc;
- mutex_lock_io(&cxl_mbox->mbox_mutex);
+ mutex_lock(&cxl_mbox->mbox_mutex);
rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd);
mutex_unlock(&cxl_mbox->mbox_mutex);