Patch series "mm/swapfile: fix and cleanup swap list iterations", v2.
This series fixes a potential infinite loop in swap_sync_discard()'s swap
device list iteration when a device is removed concurrently (e.g. by
swapoff), and includes a cleanup for __folio_throttle_swaprate().
This patch (of 2):
swap_sync_discard() walks the swap device plist with
plist_for_each_entry_safe(), dropping the lock while it operates on each
device.  If the cached next node is removed from the plist in that window
(e.g. by swapoff), plist_del() leaves the node pointing to itself, so the
iteration spins on that entry indefinitely.
Add a plist_node_empty() check to detect this case and restart iteration,
allowing swap_sync_discard() to continue processing remaining swap devices
that still have pending discard entries.
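
To make the failure mode and the fix concrete, below is a minimal
userspace sketch.  It is illustrative only: a plain circular doubly
linked list stands in for the kernel's plist, and the local
list_del_init()/node_removed() helpers merely mimic what plist_del() and
plist_node_empty() do to a node's list pointers, they are not the kernel
implementations.

    #include <stdio.h>

    struct node {
        struct node *prev, *next;
        int id;
    };

    /* Mimics the kernel's list_del_init(): unlink, then self-point. */
    static void list_del_init(struct node *n)
    {
        n->prev->next = n->next;
        n->next->prev = n->prev;
        n->prev = n->next = n;
    }

    /* Mimics plist_node_empty(): a removed node points to itself. */
    static int node_removed(struct node *n)
    {
        return n->next == n;
    }

    int main(void)
    {
        struct node head = { &head, &head, 0 };
        struct node a = { NULL, NULL, 1 };
        struct node b = { NULL, NULL, 2 };
        struct node *pos, *n;

        /* Build the circular list: head <-> a <-> b <-> head. */
        head.next = &a; a.prev = &head;
        a.next = &b;    b.prev = &a;
        b.next = &head; head.prev = &b;

    restart:
        for (pos = head.next, n = pos->next; pos != &head;
             pos = n, n = pos->next) {
            printf("processing node %d\n", pos->id);

            /* The walker drops its lock here; another context
             * (swapoff, in the real code) removes the cached
             * next node while we are not looking. */
            if (pos == &a && !node_removed(&b))
                list_del_init(&b);

            /* The fix: without this check, pos would advance to b,
             * and since b->next == b the walk would spin on that
             * entry forever. */
            if (n != &head && node_removed(n))
                goto restart;
        }
        printf("walk finished\n");
        return 0;
    }

Note that restarting revisits entries that were already handled; in
swap_sync_discard() this is harmless, since a device with no pending
discard work is simply skipped on the next pass.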
Additionally, switch from swap_avail_lock/swap_avail_head to
swap_lock/swap_active_head.  The active list only changes on
swapon/swapoff, whereas the avail list also changes whenever a device
fills up or regains free space, so the iteration is perturbed far less
often, reducing both how frequently the restart check fires and lock
contention.
Link: https://lkml.kernel.org/r/20251127100303.783198-1-youngjun.park@lge.com
Link: https://lkml.kernel.org/r/20251127100303.783198-2-youngjun.park@lge.com
Fixes: 686ea517f471 ("mm, swap: do not perform synchronous discard during allocation")
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
Suggested-by: Kairui Song <kasong@tencent.com>
Acked-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Chris Li <chrisl@kernel.org>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ ... @@ static bool swap_sync_discard(void)
 	bool ret = false;
 	struct swap_info_struct *si, *next;
 
-	spin_lock(&swap_avail_lock);
-	plist_for_each_entry_safe(si, next, &swap_avail_head, avail_list) {
-		spin_unlock(&swap_avail_lock);
+	spin_lock(&swap_lock);
+start_over:
+	plist_for_each_entry_safe(si, next, &swap_active_head, list) {
+		spin_unlock(&swap_lock);
 		if (get_swap_device_info(si)) {
 			if (si->flags & SWP_PAGE_DISCARD)
 				ret = swap_do_scheduled_discard(si);
 			put_swap_device(si);
 		}
 		if (ret)
 			return true;
-		spin_lock(&swap_avail_lock);
+
+		spin_lock(&swap_lock);
+		/* swapoff removed the cached next node: restart the walk */
+		if (plist_node_empty(&next->list))
+			goto start_over;
 	}
-	spin_unlock(&swap_avail_lock);
+	spin_unlock(&swap_lock);
 
 	return false;
 }