mm/shmem: never bypass the swap cache for SWP_SYNCHRONOUS_IO
author     Kairui Song <kasong@tencent.com>
           Fri, 19 Dec 2025 19:57:51 +0000 (03:57 +0800)
committer  Andrew Morton <akpm@linux-foundation.org>
           Sat, 31 Jan 2026 22:22:54 +0000 (14:22 -0800)
commit     c246d236b18befdfeb82ce2a01e23d45cb5eeea6
tree       d6aa4d68fe021f8ec73f4dca02227042347f3ff3
parent     4b34f1d82c6549837b2061096dea249e881a4495
mm/shmem: never bypass the swap cache for SWP_SYNCHRONOUS_IO

Now that the overhead of the swap cache is trivial to none, bypassing the
swap cache is no longer a good optimization.

We have already removed the cache-bypassing swapin path for anon memory;
now do the same for shmem.  Many helpers and functions can be dropped as
a result.
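
To make the shape of the change concrete, here is a minimal user-space
sketch (not the actual mm/shmem.c code; names, types and locking are
simplified stand-ins): the swapped-in folio is published in the swap
cache before the device read, so SWP_SYNCHRONOUS_IO devices take the
same path as everything else and concurrent faults on the same entry
find the in-flight folio instead of issuing duplicate reads:

#include <stdbool.h>
#include <stdio.h>

#define NR_ENTRIES 8

struct folio { int data; bool uptodate; };

/* Toy swap cache: one folio pointer per swap entry. */
static struct folio *swap_cache[NR_ENTRIES];

static struct folio *swap_cache_get(int entry) { return swap_cache[entry]; }
static void swap_cache_add(int entry, struct folio *f) { swap_cache[entry] = f; }

static void device_read(int entry, struct folio *f)
{
	f->data = entry * 100;  /* stand-in for the synchronous device read */
	f->uptodate = true;
}

/* Unified swapin: no cache-bypassing branch for SWP_SYNCHRONOUS_IO. */
static struct folio *swapin_entry(int entry, struct folio *new)
{
	struct folio *f = swap_cache_get(entry);

	if (f)                       /* a swapin is already in flight */
		return f;
	swap_cache_add(entry, new);  /* publish before reading */
	device_read(entry, new);
	return new;
}

int main(void)
{
	struct folio f = { 0, false };
	struct folio *got = swapin_entry(3, &f);

	printf("entry 3 -> data=%d uptodate=%d\n", got->data, got->uptodate);
	return 0;
}

The real synchronization is of course more involved; the point is only
that synchronous swap devices no longer get a separate branch that skips
the cache.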

Performance may drop slightly because swap_map and the swap table now
co-exist and are both updated for each swapin; this will be improved in
later commits of this series by partially dropping the swap_map update:

Swapin of a 24 GB file on tmpfs with
transparent_hugepage_tmpfs=within_size and ZRAM, 3 test runs on my
machine:

Before:  After this commit:  After this series:
5.99s    6.29s               6.08s

Later swap table phases will drop the swap_map completely to avoid this
overhead and reduce memory usage.
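
As a purely illustrative user-space model of the transitional double
bookkeeping described above (not kernel code: the real swap_map is kept
per swap device and the swap table per cluster, with very different
types and locking):

#include <stdio.h>

#define NR_SLOTS 8

static unsigned char swap_map[NR_SLOTS];   /* legacy per-slot counts */
static void *swap_table[NR_SLOTS];         /* new per-slot table     */

/* During the transition, one swapin touches both structures. */
static void swapin_slot(int slot, void *folio)
{
	swap_map[slot]++;          /* update to be dropped by this series */
	swap_table[slot] = folio;  /* the bookkeeping that remains        */
}

int main(void)
{
	int dummy_folio;

	swapin_slot(3, &dummy_folio);
	printf("slot 3: map count=%u, table=%p\n", swap_map[3], swap_table[3]);
	return 0;
}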

Link: https://lkml.kernel.org/r/20251219195751.61328-1-ryncsn@gmail.com
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Chris Li <chrisl@kernel.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Rafael J. Wysocki (Intel) <rafael@kernel.org>
Cc: Yosry Ahmed <yosry.ahmed@linux.dev>
Cc: Deepanshu Kartikey <kartikey406@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/shmem.c
mm/swap.h
mm/swapfile.c