The tmc's group, the group's number of children and the group's parent
can all be initialized without locking because:
1) Reading the group's parent and its group mask is done locklessly.
2) The connections prepared for a given CPU hierarchy are visible to
   the target CPU once it is online, thanks to the memory ordering
   enforced by CPU hotplug.
3) In case of a newly created upper level, the new root, its
   connections and its initialization are made visible by the CPU which
   made the connections. When that CPU goes idle in the future, the new
   link is published by tmigr_inactive_up() through the atomic RMW on
   ->migr_state.
4) If CPUs were still walking up the active hierarchy, they could
   observe the new root earlier. In this case the ordering is enforced
   by an early initialization of the group mask and by barriers that
   maintain the address dependency (see the first sketch after this
   list), as explained in:
b729cc1ec21a ("timers/migration: Fix another race between hotplug and idle entry/exit")
de3ced72a792 ("timers/migration: Enforce group initialization visibility to tree walkers")
5) Timers are propagated by a chain of group locking from the bottom
   to the top. While doing so, the tree also propagates group links and
   initialization. Therefore remote expiration, which also relies on
   group locking, observes those links and that initialization while
   holding the root lock, before walking the tree remotely to update
   remote timers (see the second sketch below). This is especially
   important for migrators in the active hierarchy that may observe the
   new root early.
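
To illustrate points 3) and 4), here is a minimal sketch of the
publish/consume pairing. This is not the actual timer_migration.c code:
the structure is reduced and demo_connect()/demo_walk_up() are made-up
names that only loosely model tmigr_connect_child_parent() and
__walk_groups():

  #include <linux/bits.h>
  #include <linux/compiler.h>
  #include <linux/spinlock.h>

  /* Reduced group structure, for illustration only */
  struct demo_group {
          struct demo_group       *parent;
          raw_spinlock_t          lock;   /* used in the second sketch */
          u8                      groupmask;
          u8                      num_children;
  };

  /* Hotplug side: initialize first, publish last */
  static void demo_connect(struct demo_group *child,
                           struct demo_group *parent)
  {
          /* Plain writes: @parent is not reachable by walkers yet */
          child->groupmask = BIT(parent->num_children++);

          /*
           * RELEASE: orders the initialization above before the
           * publication of the link below
           */
          smp_store_release(&child->parent, parent);
  }

  /* Idle entry/exit side: lockless upward walk */
  static void demo_walk_up(struct demo_group *group)
  {
          while (group) {
                  /*
                   * Pairs with the smp_store_release() above: the
                   * address dependency between this load and any later
                   * dereference of the loaded pointer makes the
                   * published group's initialization visible without
                   * locks
                   */
                  group = READ_ONCE(group->parent);
          }
  }
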
Therefore the locking is unnecessary at initialization. If anything, it
just brings confusion. Remove it.
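
Point 5) can be sketched in the same hedged spirit, reusing the reduced
demo_group structure above. demo_propagate_up() is again a made-up name
that only loosely models the bottom-up event propagation done under the
chain of group locks:

  /* Timer propagation side: bottom-to-top under a chain of locks */
  static void demo_propagate_up(struct demo_group *group)
  {
          while (group) {
                  struct demo_group *parent;

                  raw_spin_lock(&group->lock);
                  /* ...merge this level's earliest timer upward... */

                  /* Links published before the lock was taken are seen here */
                  parent = group->parent;
                  raw_spin_unlock(&group->lock);

                  group = parent;
          }
  }
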
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251024132536.39841-3-frederic@kernel.org
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ ... @@ static void tmigr_connect_child_parent(struct tmigr_group *child,
 {
 	struct tmigr_walk data;
 
-	raw_spin_lock_irq(&child->lock);
-	raw_spin_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING);
-
 	if (activate) {
 		/*
 		 * @child is the old top and @parent the new one. In this
 		 * case groupmask is pre-initialized and @child already
 		 * active, right after this function is called.
 		 */
 		WARN_ON_ONCE(child->groupmask != 0 || parent->num_children != 0);
 		child->groupmask = BIT(0);
 		parent->num_children = 1;
 	} else {
 		/* Adding @child for the CPU going up */
 		child->groupmask = BIT(parent->num_children++);
 	}
 
 	smp_store_release(&child->parent, parent);
 
-	raw_spin_unlock(&parent->lock);
-	raw_spin_unlock_irq(&child->lock);
-
 	trace_tmigr_connect_child_parent(child);
 
 	if (!activate)
 		return;

@@ ... @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node)
 		if (i == 0) {
 			struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu);
 
-			raw_spin_lock_irq(&group->lock);
-
 			tmc->tmgroup = group;
 			tmc->groupmask = BIT(group->num_children++);
 
-			raw_spin_unlock_irq(&group->lock);
-
 			trace_tmigr_connect_cpu_parent(tmc);
 
 			/* There are no children that need to be connected */
 			continue;