From: Kuan-Wei Chiu
Date: Fri, 20 Mar 2026 18:09:37 +0000 (+0000)
Subject: ubifs: remove unnecessary cond_resched() from list_sort() compare
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=237213776d0fd62487da513b55732cfb20f7eee8;p=thirdparty%2Fkernel%2Fstable.git

ubifs: remove unnecessary cond_resched() from list_sort() compare

Patch series "lib/list_sort: Clean up list_sort() scheduling workarounds", v3.

Historically, list_sort() included a hack in merge_final() that
periodically invoked dummy cmp(priv, b, b) calls when merging highly
unbalanced lists.  This allowed the caller to invoke cond_resched()
within their comparison callbacks to avoid soft lockups.

However, an audit of the kernel tree shows that fs/ubifs/ has been the
sole user of this mechanism.  For all other generic list_sort() users,
this results in wasted function calls and unnecessary overhead in a
tight loop.

Recent discussions and code inspection confirmed that the lists being
sorted in UBIFS are bounded in size (a few thousand elements at most),
and the comparison functions are extremely lightweight.  Therefore,
UBIFS does not actually need to rely on this mechanism.

This patch (of 2):

Historically, UBIFS embedded cond_resched() calls inside its
list_sort() comparison callbacks (data_nodes_cmp, nondata_nodes_cmp,
and replay_entries_cmp) to prevent soft lockups when sorting long
lists.

However, further inspection by Richard Weinberger revealed that these
compare functions are extremely lightweight and do not perform any
blocking MTD I/O.  Furthermore, the lists being sorted are strictly
bounded in size:

- In the GC case, the list contains at most the number of nodes that
  fit into a single LEB.

- In the replay case, the list spans across a few LEBs from the UBIFS
  journal, amounting to at most a few thousand elements.

Since the compare functions are called a few thousand times at most,
the overhead of frequent scheduling points is unjustified.
Removing the cond_resched() calls simplifies the comparison logic and
reduces unnecessary context switch checks during the sort.

Link: https://lkml.kernel.org/r/20260320180938.1827148-1-visitorckw@gmail.com
Link: https://lkml.kernel.org/r/20260320180938.1827148-2-visitorckw@gmail.com
Signed-off-by: Kuan-Wei Chiu
Reviewed-by: Zhihao Cheng
Acked-by: Richard Weinberger
Cc: Ching-Chun (Jim) Huang
Cc: Christoph Hellwig
Cc: Mars Cheng
Cc: Yu-Chun Lin
Signed-off-by: Andrew Morton
---

diff --git a/fs/ubifs/gc.c b/fs/ubifs/gc.c
index 0bf08b7755b83..933c79b5cd6b9 100644
--- a/fs/ubifs/gc.c
+++ b/fs/ubifs/gc.c
@@ -109,7 +109,6 @@ static int data_nodes_cmp(void *priv, const struct list_head *a,
 	struct ubifs_info *c = priv;
 	struct ubifs_scan_node *sa, *sb;
 
-	cond_resched();
 	if (a == b)
 		return 0;
 
@@ -153,7 +152,6 @@ static int nondata_nodes_cmp(void *priv, const struct list_head *a,
 	struct ubifs_info *c = priv;
 	struct ubifs_scan_node *sa, *sb;
 
-	cond_resched();
 	if (a == b)
 		return 0;
 
diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c
index a9a568f4a868a..263045e05cf18 100644
--- a/fs/ubifs/replay.c
+++ b/fs/ubifs/replay.c
@@ -305,7 +305,6 @@ static int replay_entries_cmp(void *priv, const struct list_head *a,
 	struct ubifs_info *c = priv;
 	struct replay_entry *ra, *rb;
 
-	cond_resched();
 	if (a == b)
 		return 0;