author    Ingo Molnar <mingo@kernel.org>
          Tue, 2 Dec 2025 15:10:32 +0000 (16:10 +0100)
committer Ingo Molnar <mingo@kernel.org>
          Mon, 15 Dec 2025 06:52:45 +0000 (07:52 +0100)
commit    5758e48eefaf111d7764d8f1c8b666140fe5fa27
tree      1555554af63fa7ab15c530146259351cb5a6fbf8
parent    dcbc9d3f0e594223275a18f7016001889ad35eff
sched/fair: Introduce and use the vruntime_cmp() and vruntime_op() wrappers for wrapped-signed arithmetic

We have to be careful with vruntime comparisons and subtraction,
due to the possibility of wrapping, so we have macros like:

   #define vruntime_gt(field, lse, rse) ({ (s64)((lse)->field - (rse)->field) > 0; })

which is used like this:

   if (vruntime_gt(min_vruntime, se, rse))
           se->min_vruntime = rse->min_vruntime;
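
For context, here is a minimal standalone sketch (not part of this patch,
values are illustrative) of why the signed cast matters once an unsigned
counter wraps:

   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
           /* 'b' has wrapped past zero but is logically ahead of 'a'. */
           uint64_t a = UINT64_MAX - 10;
           uint64_t b = 5;

           /* Naive unsigned compare gets it backwards: */
           printf("naive:   b > a          -> %d\n", b > a);                /* 0 */

           /* Wrap-safe compare: the difference is a small positive s64: */
           printf("wrapped: (s64)(b-a) > 0 -> %d\n", (int64_t)(b - a) > 0); /* 1 */

           return 0;
   }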

Replace this with an easier-to-read pattern that uses the regular
arithmetic operators:

   if (vruntime_cmp(se->min_vruntime, ">", rse->min_vruntime))
           se->min_vruntime = rse->min_vruntime;
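
One way such a macro can be sketched (illustrative only, not necessarily
the exact in-tree implementation): the operator is a string literal,
__builtin_strcmp() folds to a constant, and all but the matching branch
are eliminated at compile time:

   #define vruntime_cmp(A, op, B)                                        \
   ({                                                                    \
           s64 __delta = (s64)((A) - (B));     /* wrap-safe difference */\
           bool __res = false;                                           \
                                                                         \
           if      (!__builtin_strcmp(op, "<"))  __res = __delta <  0;   \
           else if (!__builtin_strcmp(op, "<=")) __res = __delta <= 0;   \
           else if (!__builtin_strcmp(op, ">"))  __res = __delta >  0;   \
           else if (!__builtin_strcmp(op, ">=")) __res = __delta >= 0;   \
           else                                                          \
                   BUILD_BUG();    /* unrecognized operator string */    \
           __res;                                                        \
   })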

Also replace vruntime subtractions with vruntime_op():

-       delta = (s64)(sea->vruntime - seb->vruntime) +
-               (s64)(cfs_rqb->zero_vruntime_fi - cfs_rqa->zero_vruntime_fi);
+       delta = vruntime_op(sea->vruntime, "-", seb->vruntime) +
+               vruntime_op(cfs_rqb->zero_vruntime_fi, "-", cfs_rqa->zero_vruntime_fi);
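
vruntime_op() can be sketched along the same lines (again illustrative);
only "-" needs to be wrap-safe here, so any other operator string is
rejected at build time:

   #define vruntime_op(A, op, B)                           \
   ({                                                      \
           BUILD_BUG_ON(__builtin_strcmp(op, "-"));        \
           (s64)((A) - (B));                               \
   })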

In the vruntime_cmp() and vruntime_op() macros, use __builtin_strcmp(),
because __HAVE_ARCH_STRCMP might turn off the compiler optimizations
we rely on here to catch usage bugs.
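
With sketches like the above, a mistyped operator never silently
compiles; the unmatched BUILD_BUG()/BUILD_BUG_ON() arm survives
dead-code elimination and the build fails:

   /* Hypothetical typo: "=>" matches no branch, so this fails to build: */
   if (vruntime_cmp(se->min_vruntime, "=>", rse->min_vruntime))
           se->min_vruntime = rse->min_vruntime;

An arch-provided strcmp() (__HAVE_ARCH_STRCMP) can be an out-of-line
function the compiler cannot fold, which would defeat exactly this check.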

No change in functionality.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c