swap_map was dynamically sized to cover exactly si->max slots, so the last
cluster might not be fully covered, and the allocator had to check the scan
boundary to avoid out-of-bounds access.  The swap table, in contrast,
allocates a fixed-size table for each cluster, and slots beyond the device
size are marked as bad slots.  The allocator can simply scan all slots as
usual; any bad slots are skipped.
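To illustrate the scheme, here is a standalone C sketch (not the kernel
implementation; CLUSTER_SLOTS, SLOT_BAD, init_cluster_table() and
scan_cluster() are hypothetical names invented for this example): the
fixed-size per-cluster table has its out-of-range tail marked bad up
front, so the scanner can walk the whole cluster with no boundary check.

	#include <stdio.h>

	#define CLUSTER_SLOTS 512	/* slots per cluster, like SWAPFILE_CLUSTER */

	/* Hypothetical slot states, for this sketch only. */
	enum slot_state { SLOT_FREE, SLOT_USED, SLOT_BAD };

	/*
	 * Fill a fixed-size cluster table; slots past the device size are
	 * marked bad up front, so a scanner never hands them out.
	 */
	static void init_cluster_table(enum slot_state table[CLUSTER_SLOTS],
				       unsigned long cluster_start,
				       unsigned long device_max)
	{
		for (unsigned int i = 0; i < CLUSTER_SLOTS; i++)
			table[i] = (cluster_start + i < device_max) ?
				   SLOT_FREE : SLOT_BAD;
	}

	/* Scan the whole cluster unconditionally; bad slots are skipped. */
	static long scan_cluster(const enum slot_state table[CLUSTER_SLOTS])
	{
		for (unsigned int i = 0; i < CLUSTER_SLOTS; i++)
			if (table[i] == SLOT_FREE)
				return i;
		return -1;
	}

	int main(void)
	{
		enum slot_state last_cluster[CLUSTER_SLOTS];

		/* Device of 1000 slots: the last cluster (512..1023) is partial. */
		init_cluster_table(last_cluster, 512, 1000);
		printf("first free slot in last cluster: %ld\n",
		       scan_cluster(last_cluster));
		return 0;
	}

Because the bad slots are never handed out, a partial last cluster behaves
exactly like a full one from the allocator's point of view.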
Link: https://lkml.kernel.org/r/20260218-swap-table-p3-v3-10-f4e34be021a7@tencent.com
Signed-off-by: Kairui Song <kasong@tencent.com>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Barry Song <baohua@kernel.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kairui Song <ryncsn@gmail.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
 struct swap_info_struct *si, pgoff_t offset)
 {
 	VM_WARN_ON_ONCE(percpu_ref_is_zero(&si->users)); /* race with swapoff */
-	VM_WARN_ON_ONCE(offset >= si->max);
+	VM_WARN_ON_ONCE(offset >= roundup(si->max, SWAPFILE_CLUSTER));
 	return &si->cluster_info[offset / SWAPFILE_CLUSTER];
 }
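The relaxed warning admits offsets in the partial last cluster.  A minimal
standalone sketch of the arithmetic (roundup() is open-coded here; the
device of 1000 slots with 512-slot clusters is an assumed example):

	#include <stdio.h>

	/* Open-coded stand-in for the kernel's roundup(), for this sketch only. */
	#define ROUNDUP(x, y) ((((x) + (y) - 1) / (y)) * (y))

	int main(void)
	{
		unsigned long si_max = 1000, cluster = 512;

		/* Offsets 1000..1023 tripped the old check; the new bound is 1024. */
		printf("old bound: %lu, new bound: %lu\n",
		       si_max, ROUNDUP(si_max, cluster));
		return 0;
	}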
 {
 	unsigned int next = SWAP_ENTRY_INVALID, found = SWAP_ENTRY_INVALID;
 	unsigned long start = ALIGN_DOWN(offset, SWAPFILE_CLUSTER);
-	unsigned long end = min(start + SWAPFILE_CLUSTER, si->max);
 	unsigned int order = likely(folio) ? folio_order(folio) : 0;
+	unsigned long end = start + SWAPFILE_CLUSTER;
 	unsigned int nr_pages = 1 << order;
 	bool need_reclaim, ret, usable;
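The removed min() clamp is the payoff of this scheme: slots in
[si->max, roundup(si->max, SWAPFILE_CLUSTER)) are marked bad ahead of time
(as in the standalone sketch above), so scanning the full cluster can never
yield an out-of-range slot.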