Tejun Heo [Sat, 7 Mar 2026 15:29:49 +0000 (05:29 -1000)]
sched_ext: Convert deferred_reenq_locals from llist to regular list
The deferred reenqueue local mechanism uses an llist (lockless list) for
collecting schedulers that need their local DSQs re-enqueued. Convert to a
regular list protected by a raw_spinlock.
The llist was used for its lockless properties, but the upcoming changes to
support remote reenqueue require more complex list operations that are
difficult to implement correctly with lockless data structures. A spinlock-
protected regular list provides the necessary flexibility.
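Schematically, the new arrangement looks something like this (identifier names are assumptions based on the commit text, and the kernel keeps this state per-rq rather than in a single global list):

        #include <linux/list.h>
        #include <linux/spinlock.h>

        static LIST_HEAD(deferred_reenq_locals);
        static DEFINE_RAW_SPINLOCK(deferred_reenq_lock);

        struct reenq_request {
                struct list_head node;  /* INIT_LIST_HEAD() at allocation */
        };

        static void queue_deferred_reenq(struct reenq_request *req)
        {
                unsigned long flags;

                raw_spin_lock_irqsave(&deferred_reenq_lock, flags);
                /*
                 * Pairing removal with list_del_init() keeps list_empty()
                 * usable as a "not currently queued" test, and the lock
                 * makes arbitrary removal and splicing safe - the
                 * flexibility an llist can't easily offer.
                 */
                if (list_empty(&req->node))
                        list_add_tail(&req->node, &deferred_reenq_locals);
                raw_spin_unlock_irqrestore(&deferred_reenq_lock, flags);
        }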
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Sat, 7 Mar 2026 15:29:49 +0000 (05:29 -1000)]
sched_ext: Relocate run_deferred() and its callees
Previously, both process_ddsp_deferred_locals() and reenq_local() required
forward declarations. Reorganize so that only run_deferred() needs to be
declared. Both callees are grouped right before run_deferred() for better
locality. This reduces forward declaration clutter and will ease adding more
to the run_deferred() path.
No functional changes.
v2: Also relocate process_ddsp_deferred_locals() next to run_deferred()
(Daniel Jordan).
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Sat, 7 Mar 2026 15:29:49 +0000 (05:29 -1000)]
sched_ext: Change find_global_dsq() to take CPU number instead of task
Change find_global_dsq() to take a CPU number directly instead of a task
pointer. This prepares for callers where the CPU is available but the task is
not.
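A minimal sketch of the new shape (whether @sch is passed explicitly and the per-node lookup body are both assumptions; the parameter change is the point):

        static struct scx_dispatch_q *find_global_dsq(struct scx_sched *sch, s32 cpu)
        {
                /* per-node global DSQ lookup, exact layout assumed */
                return sch->global_dsqs[cpu_to_node(cpu)];
        }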
No functional changes.
v2: Rename tcpu to cpu in find_global_dsq() (Emil Tsalapatis).
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Sat, 7 Mar 2026 15:29:49 +0000 (05:29 -1000)]
sched_ext: Factor out pnode allocation and deallocation into helpers
Extract pnode allocation and deallocation logic into alloc_pnode() and
free_pnode() helpers. This simplifies scx_alloc_and_add_sched() and prepares
for adding more per-node initialization and cleanup in subsequent patches.
No functional changes.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Sat, 7 Mar 2026 15:29:49 +0000 (05:29 -1000)]
sched_ext: Wrap global DSQs in per-node structure
Global DSQs are currently stored as an array of scx_dispatch_q pointers,
one per NUMA node. To allow adding more per-node data structures, wrap the
global DSQ in scx_sched_pnode and replace global_dsqs with a pnode array.
NUMA-aware allocation is maintained. No functional changes.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Sat, 7 Mar 2026 15:29:49 +0000 (05:29 -1000)]
sched_ext: Relocate scx_bpf_task_cgroup() and its BTF_ID to the end of kfunc section
Move scx_bpf_task_cgroup() kfunc definition and its BTF_ID entry to the end
of the kfunc section before __bpf_kfunc_end_defs() for cleaner code
organization.
No functional changes.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Andrea Righi [Sat, 7 Mar 2026 09:56:31 +0000 (10:56 +0100)]
sched_ext: Pass full dequeue flags to ops.quiescent()
ops.quiescent() is invoked with the same deq_flags as ops.dequeue(), so
the BPF scheduler is able to distinguish sleep vs property changes in
both callbacks.
However, dequeue_task_scx() receives deq_flags as an int from the
sched_class interface, so SCX flags at bit 32 and above (%SCX_DEQ_SCHED_CHANGE)
are truncated. ops_dequeue() reconstructs the full u64 for ops.dequeue(),
but ops.quiescent() is still called with the original int and can never
see %SCX_DEQ_SCHED_CHANGE.
Fix this by constructing the full u64 deq_flags in dequeue_task_scx()
(renaming the int parameter to core_deq_flags) and passing the complete
flags to both ops_dequeue() and ops.quiescent().
Tejun Heo [Sat, 7 Mar 2026 14:57:53 +0000 (04:57 -1000)]
Merge branch 'for-7.0-fixes' into for-7.1
Pull in 57ccf5ccdc56 ("sched_ext: Fix enqueue_task_scx() truncation of
upper enqueue flags") which conflicts with ebf1ccff79c4 ("sched_ext: Fix
ops.dequeue() semantics").
Tejun Heo [Sat, 7 Mar 2026 14:53:32 +0000 (04:53 -1000)]
sched_ext: Fix enqueue_task_scx() truncation of upper enqueue flags
enqueue_task_scx() takes int enq_flags from the sched_class interface.
SCX enqueue flags starting at bit 32 (SCX_ENQ_PREEMPT and above) are
silently truncated when passed through activate_task(). extra_enq_flags
was added as a workaround - storing high bits in rq->scx.extra_enq_flags
and OR-ing them back in enqueue_task_scx(). However, the OR target is
still the int parameter, so the high bits are lost anyway.
The current impact is limited as the only affected flag is SCX_ENQ_PREEMPT,
which is informational to the BPF scheduler - its loss means the scheduler
doesn't learn about the preemption, but it doesn't cause incorrect behavior.
Fix by renaming the int parameter to core_enq_flags and introducing a
u64 enq_flags local that merges both sources. All downstream functions
already take u64 enq_flags.
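The underlying hazard is plain C integer conversion. A standalone toy that mirrors it (the SCX_ENQ_PREEMPT bit position comes from the commit text; everything else is illustrative):

        #include <stdint.h>
        #include <stdio.h>

        #define SCX_ENQ_PREEMPT (1ULL << 32)

        static uint64_t merge_broken(int core_flags, uint64_t extra)
        {
                core_flags |= extra;                    /* OR target is 32-bit; */
                return (uint64_t)(uint32_t)core_flags;  /* bit 32 already gone  */
        }

        static uint64_t merge_fixed(int core_flags, uint64_t extra)
        {
                uint64_t enq_flags = (uint32_t)core_flags;  /* widen first */
                return enq_flags | extra;                   /* high bits survive */
        }

        int main(void)
        {
                printf("broken: %#llx\n",       /* prints 0x1 */
                       (unsigned long long)merge_broken(1, SCX_ENQ_PREEMPT));
                printf("fixed:  %#llx\n",       /* prints 0x100000001 */
                       (unsigned long long)merge_fixed(1, SCX_ENQ_PREEMPT));
                return 0;
        }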
Cheng-Yang Chou [Fri, 6 Mar 2026 18:21:01 +0000 (02:21 +0800)]
sched_ext: Documentation: Update sched-ext.rst
- Remove CONFIG_PAHOLE_HAS_BTF_TAG from required config list
- Document ext_idle.c as the built-in idle CPU selection policy
- Add descriptions for example schedulers in tools/sched_ext/
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Add basic building blocks for nested sub-scheduler dispatching
This is an early-stage partial implementation that demonstrates the core
building blocks for nested sub-scheduler dispatching. While significant
work remains in the enqueue path and other areas, this patch establishes
the fundamental mechanisms needed for hierarchical scheduler operation.
The key building blocks introduced include:
- Private stack support for ops.dispatch() to prevent stack overflow when
walking down nested schedulers during dispatch operations
- scx_bpf_sub_dispatch() kfunc that allows parent schedulers to trigger
dispatch operations on their direct child schedulers
- Proper parent-child relationship validation to ensure dispatch requests
are only made to legitimate child schedulers
- Updated scx_dispatch_sched() to handle both nested and non-nested
invocations with appropriate kf_mask handling
The qmap scheduler is updated to demonstrate the functionality by calling
scx_bpf_sub_dispatch() on registered child schedulers when it has no
tasks in its own queues.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Add rhashtable lookup for sub-schedulers
Add rhashtable-based lookup for sub-schedulers indexed by cgroup_id to
enable efficient scheduler discovery in preparation for multiple scheduler
support. The hash table allows quick lookup of the appropriate scheduler
instance when processing tasks from different cgroups.
This extends scx_link_sched() to register sub-schedulers in the hash table
and scx_unlink_sched() to remove them. A new scx_find_sub_sched() function
provides the lookup interface.
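Such an rhashtable setup typically looks along these lines (table and field names here are assumptions, not the actual patch):

        #include <linux/rhashtable.h>

        struct scx_sched {
                u64                     cgrp_id;
                struct rhash_head       hash_node;
                /* ... */
        };

        static struct rhashtable scx_sub_sched_ht;

        static const struct rhashtable_params scx_sub_sched_ht_params = {
                .key_len        = sizeof_field(struct scx_sched, cgrp_id),
                .key_offset     = offsetof(struct scx_sched, cgrp_id),
                .head_offset    = offsetof(struct scx_sched, hash_node),
        };

        /* caller must be in an RCU read-side critical section */
        static struct scx_sched *scx_find_sub_sched(u64 cgrp_id)
        {
                return rhashtable_lookup_fast(&scx_sub_sched_ht, &cgrp_id,
                                              scx_sub_sched_ht_params);
        }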
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Make scx_bpf_reenqueue_local() sub-sched aware
scx_bpf_reenqueue_local() currently re-enqueues all tasks on the local DSQ
regardless of which sub-scheduler owns them. With multiple sub-schedulers,
each should only re-enqueue tasks that it owns or that its descendants own.
Replace the per-rq boolean flag with a lock-free linked list to track
per-scheduler reenqueue requests. Filter tasks in reenq_local() using
hierarchical ownership checks and block deferrals during bypass to prevent
use on dead schedulers.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Implement cgroup sub-sched enabling and disabling
The preceding changes implemented the framework to support cgroup
sub-scheds and updated scheduling paths and kfuncs so that they have
minimal but working support for sub-scheds. However, actual sub-sched
enabling/disabling hasn't been implemented yet and all tasks stay on
scx_root.
Implement cgroup sub-sched enabling and disabling to actually activate
sub-scheds:
- Both enable and disable operations bypass only the tasks in the subtree
of the child being enabled or disabled to limit disruptions.
- When enabling, all candidate tasks are first initialized for the child
sched. Once that succeeds, the tasks are exited for the parent and then
switched over to the child. This adds a bit of complication but
guarantees that child scheduler failures are always contained.
- Disabling works the same way in the other direction. However, since the
parent may fail to initialize a task, disabling is then propagated up to
the parent. While this means that a parent sched can fail due to a child sched
event, the failure can only originate from the parent itself (its
ops.init_task()). The only effect a malfunctioning child can have on the
parent is attempting to move the tasks back to the parent.
After this change, although not all the necessary mechanisms are in place
yet, sub-scheds can take control of their tasks and schedule them.
v2: Fix missing scx_cgroup_unlock()/percpu_up_write() in abort path
(Cheng-Yang Chou).
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Support dumping multiple schedulers and add scheduler identification
Extend scx_dump_state() to support multiple schedulers and improve task
identification in dumps. The function now takes a specific scheduler to
dump and can optionally filter tasks by scheduler.
scx_dump_task() now displays which scheduler each task belongs to, using
"*" to mark tasks owned by the scheduler being dumped. Sub-schedulers
are identified with their level and cgroup ID.
The SysRq-D handler now iterates through all active schedulers under
scx_sched_lock and dumps each one separately. For SysRq-D dumps, only
tasks owned by each scheduler are dumped to avoid redundancy since all
schedulers are being dumped. Error-triggered dumps continue to dump all
tasks since only that specific scheduler is being dumped.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Convert scx_dump_state() spinlock to raw spinlock
The scx_dump_state() function uses a regular spinlock to serialize
access. In a subsequent patch, this function will be called while
holding scx_sched_lock, which is a raw spinlock, creating a lock
nesting violation.
Convert the dump_lock to a raw spinlock and use the guard macro for
cleaner lock management.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Make watchdog sub-sched aware
Currently, the watchdog checks all tasks as if they are all on scx_root.
Move scx_watchdog_timeout inside scx_sched and make check_rq_for_timeouts()
use the timeout from the scx_sched associated with each task.
refresh_watchdog() is added, which determines the timer interval as half of
the shortest watchdog timeout among all scheds and arms or disarms it as
necessary. Every scx_sched instance has equivalent or better detection
latency while sharing the same timer.
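A sketch of the interval-selection logic (the scheduler list, field names, and work item are assumptions; only the half-of-shortest-timeout rule comes from the commit text):

        /* caller is assumed to hold whatever lock protects scx_sched_list */
        static void refresh_watchdog(void)
        {
                unsigned long shortest = ULONG_MAX;
                struct scx_sched *sch;

                list_for_each_entry(sch, &scx_sched_list, sibling)
                        if (sch->watchdog_timeout)
                                shortest = min(shortest, sch->watchdog_timeout);

                if (shortest == ULONG_MAX)
                        cancel_delayed_work(&scx_watchdog_work);        /* disarm */
                else
                        /* half the shortest timeout bounds detection latency */
                        mod_delayed_work(system_unbound_wq, &scx_watchdog_work,
                                         shortest / 2);
        }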
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Move scx_dsp_ctx and scx_dsp_max_batch into scx_sched
scx_dsp_ctx and scx_dsp_max_batch are global variables used in the dispatch
path. In preparation for multiple scheduler support, move the former into
scx_sched_pcpu and the latter into scx_sched. No user-visible behavior
changes intended.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Dispatch from all scx_sched instances
The cgroup sub-sched support involves invasive changes to many areas of
sched_ext. The overall scaffolding is now in place and the next step is
implementing sub-sched enable/disable.
To enable partial testing and verification, update balance_one() to
dispatch from all scx_sched instances until it finds a task to run. This
should keep scheduling working when sub-scheds are enabled with tasks on
them. This will be replaced by BPF-driven hierarchical operation.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:04 +0000 (07:58 -1000)]
sched_ext: Implement hierarchical bypass mode
When a sub-scheduler enters bypass mode, its tasks must be scheduled by an
ancestor to guarantee forward progress. Tasks from bypassing descendants are
queued in the bypass DSQs of the nearest non-bypassing ancestor, or the root
scheduler if all ancestors are bypassing. This requires coordination between
bypassing schedulers and their hosts.
Add bypass_enq_target_dsq() to find the correct bypass DSQ by walking up the
hierarchy until reaching a non-bypassing ancestor. When a sub-scheduler starts
bypassing, all its runnable tasks are re-enqueued after scx_bypassing() is set,
ensuring proper migration to ancestor bypass DSQs.
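The upward walk could look roughly like this (the parent pointer is an assumption; scx_bypassing() and bypass_dsq() are named by patches elsewhere in this series):

        static struct scx_dispatch_q *bypass_enq_target_dsq(struct scx_sched *sch,
                                                            s32 cpu)
        {
                /*
                 * Stop at the nearest non-bypassing ancestor; if every
                 * ancestor is bypassing, this terminates at the root.
                 */
                while (sch->parent && scx_bypassing(sch, cpu))
                        sch = sch->parent;
                return bypass_dsq(sch, cpu);    /* per-cpu bypass DSQ, assumed */
        }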
Update scx_dispatch_sched() to handle hosting bypassed descendants. When a
scheduler is not bypassing but has bypassing descendants, it must schedule both
its own tasks and bypassed descendant tasks. A simple policy is implemented
where every Nth dispatch attempt (SCX_BYPASS_HOST_NTH=2) consumes from the
bypass DSQ. A fallback consumption is also added at the end of dispatch to
ensure bypassed tasks make progress even when normal scheduling is idle.
Update enable_bypass_dsp() and disable_bypass_dsp() to increment
bypass_dsp_enable_depth on both the bypassing scheduler and its parent host,
ensuring both can detect that bypass dispatch is active through
bypass_dsp_enabled().
Add SCX_EV_SUB_BYPASS_DISPATCH event counter to track scheduling of bypassed
descendant tasks.
v2: Fix comment typos (Andrea).
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Separate bypass dispatch enabling from bypass depth tracking
The bypass_depth field tracks nesting of bypass operations but is also used
to determine whether the bypass dispatch path should be active. With
hierarchical scheduling, child schedulers may need to activate their parent's
bypass dispatch path without affecting the parent's bypass_depth, requiring
separation of these concerns.
Add bypass_dsp_enable_depth and bypass_dsp_claim to independently control
bypass dispatch path activation. The new enable_bypass_dsp() and
disable_bypass_dsp() functions manage this state with proper claim semantics
to prevent races. The bypass dispatch path now only activates when
bypass_dsp_enabled() returns true, which checks the new enable_depth counter.
The disable operation is carefully ordered after all tasks are moved out of
bypass DSQs to ensure they are drained before the dispatch path is disabled.
During scheduler teardown, disable_bypass_dsp() is called explicitly to ensure
cleanup even if bypass mode was never entered normally.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: When calling ops.dispatch() @prev must be on the same scx_sched
The @prev parameter passed into ops.dispatch() is expected to be on the
same sched. Passing in @prev which isn't on the sched can spuriously
trigger failures that can kill the scheduler. Pass in @prev iff it's on
the same sched.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Factor out scx_dispatch_sched()
In preparation for multiple scheduler support, factor out
scx_dispatch_sched() from balance_one(). The function boundary makes
remembering $prev_on_scx and $prev_on_rq less useful. Open code $prev_on_scx
in balance_one() and $prev_on_rq in both balance_one() and
scx_dispatch_sched().
No functional changes.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Prepare bypass mode for hierarchical operation
Bypass mode is used to simplify enable and disable paths and guarantee
forward progress when something goes wrong. When enabled, all tasks skip BPF
scheduling and fall back to simple in-kernel FIFO scheduling. While this
global behavior can be used as-is when dealing with sub-scheds, that would
allow any sub-sched instance to affect the whole system in a significantly
disruptive manner.
Make bypass state hierarchical by propagating it to descendants and updating
per-cpu flags accordingly. This makes an scx_sched bypass if it or any of
its ancestors is in bypass mode. However, this doesn't make the
actual bypass enqueue and dispatch paths hierarchical yet. That will be done
in later patches.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Move bypass state into scx_sched
In preparation for multiple scheduler support, make bypass state
per-scx_sched. Move scx_bypass_depth, bypass_timestamp and bypass_lb_timer
from globals into scx_sched. Move SCX_RQ_BYPASSING from rq to scx_sched_pcpu
as SCX_SCHED_PCPU_BYPASSING.
scx_bypass() now takes @sch and scx_rq_bypassing(rq) is replaced with
scx_bypassing(sch, cpu). All callers updated.
scx_bypassed_for_enable existed to balance the global scx_bypass_depth when
enable failed. Now that bypass_depth is per-scheduler, the counter is
destroyed along with the scheduler on enable failure. Remove
scx_bypassed_for_enable.
As all tasks currently use the root scheduler, there's no observable behavior
change.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Move bypass_dsq into scx_sched_pcpu
To support bypass mode for sub-schedulers, move bypass_dsq from struct scx_rq
to struct scx_sched_pcpu. Add bypass_dsq() helper. Move bypass_dsq
initialization from init_sched_ext_class() to scx_alloc_and_attach_sched().
bypass_lb_cpu() now takes a CPU number instead of rq pointer. All callers
updated. No behavior change as all tasks use the root scheduler.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Move aborting flag to per-scheduler field
The abort state was tracked in the global scx_aborting flag which was used to
break out of potential live-lock scenarios when an error occurs. With
hierarchical scheduling, each scheduler instance must track its own abort
state independently so that an aborting scheduler doesn't interfere with
others.
Move the aborting flag into struct scx_sched and update all access sites. The
early initialization check in scx_root_enable() that warned about residual
aborting state is no longer needed as each scheduler instance now starts with
a clean state.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Move default slice to per-scheduler field
The default time slice was stored in the global scx_slice_dfl variable which
was dynamically modified when entering and exiting bypass mode. With
hierarchical scheduling, each scheduler instance needs its own default slice
configuration so that bypass operations on one scheduler don't affect others.
Move slice_dfl into struct scx_sched and update all access sites. The bypass
logic now modifies the root scheduler's slice_dfl. At task initialization in
init_scx_entity(), use the SCX_SLICE_DFL constant directly since the task may
not yet be associated with a specific scheduler.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Refactor task init/exit helpers
- Add the @sch parameter to scx_init_task() and drop @tg as it can be
obtained from @p. Separate out __scx_init_task() which does everything
except for the task state transition.
- Add the @sch parameter to scx_enable_task(). Separate out
__scx_enable_task() which does everything except for the task state
transition.
- Add the @sch parameter to scx_disable_task().
- Rename scx_exit_task() to scx_disable_and_exit_task() and separate out
__scx_disable_and_exit_task() which does everything except for the task
state transition.
While some task state transitions are relocated, no meaningful behavior
changes are expected.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: scx_dsq_move() should validate the task belongs to the right scheduler
scx_bpf_dsq_move[_vtime]() calls scx_dsq_move() to move a task from one DSQ
to another. However, @p doesn't necessarily come from the containing
iteration and can thus be a task which belongs to another scx_sched. Verify
that @p is on the same scx_sched as the DSQ being iterated.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Enforce scheduler ownership when updating slice and dsq_vtime
scx_bpf_task_set_slice() and scx_bpf_task_set_dsq_vtime() now verify that
the calling scheduler has authority over the task before allowing updates.
This prevents schedulers from modifying tasks that don't belong to them in
hierarchical scheduling configurations.
Direct writes to p->scx.slice and p->scx.dsq_vtime are deprecated and now
trigger warnings. They will be disallowed in a future release.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Enforce scheduling authority in dispatch and select_cpu operations
Add checks to enforce scheduling authority boundaries when multiple
schedulers are present:
1. In scx_dsq_insert_preamble() and the dispatch retry path, ignore attempts
to insert tasks that the scheduler doesn't own, counting them via
SCX_EV_INSERT_NOT_OWNED. As BPF schedulers are allowed to ignore
dequeues, such attempts can occur legitimately during sub-scheduler
enabling when tasks move between schedulers. The counter helps distinguish
normal cases from scheduler bugs.
2. For scx_bpf_dsq_insert_vtime() and scx_bpf_select_cpu_and(), error out
when sub-schedulers are attached. These functions lack the aux__prog
parameter needed to identify the calling scheduler, so they cannot be used
safely with multiple schedulers. BPF programs should use the arg-wrapped
versions (__scx_bpf_dsq_insert_vtime() and __scx_bpf_select_cpu_and())
instead.
These checks ensure that with multiple concurrent schedulers, scheduler
identity can be properly determined and unauthorized task operations are
prevented or tracked.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Introduce scx_prog_sched()
In preparation for multiple scheduler support, introduce scx_prog_sched()
accessor which returns the scx_sched instance associated with a BPF program.
The association is determined via the special KF_IMPLICIT_ARGS kfunc
parameter, which provides access to bpf_prog_aux. This aux can be used to
retrieve the struct_ops (sched_ext_ops) that the program is associated with,
and from there, the corresponding scx_sched instance.
For compatibility, when ops.sub_attach is not implemented (older schedulers
without sub-scheduler support), unassociated programs fall back to scx_root.
A warning is logged once per scheduler for such programs.
As scx_root is still the only scheduler, this shouldn't introduce
user-visible behavior changes.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Introduce scx_task_sched[_rcu]()
In preparation for multiple scheduler support, add p->scx.sched, which points
to the scx_sched instance that the task is scheduled by, which is currently
always scx_root. Add scx_task_sched[_rcu]() accessors which return the
associated scx_sched of the specified task and replace the raw scx_root
dereferences with it where applicable. scx_task_on_sched() is also added to
test whether a given task is on the specified sched.
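In sketch form (the RCU annotation and exact field placement are assumed):

        static struct scx_sched *scx_task_sched(struct task_struct *p)
        {
                return p->scx.sched;    /* currently always scx_root */
        }

        static struct scx_sched *scx_task_sched_rcu(struct task_struct *p)
        {
                return rcu_dereference(p->scx.sched);
        }

        static bool scx_task_on_sched(struct task_struct *p, struct scx_sched *sch)
        {
                return scx_task_sched(p) == sch;
        }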
As scx_root is still the only scheduler, this shouldn't introduce
user-visible behavior changes.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:03 +0000 (07:58 -1000)]
sched_ext: Introduce cgroup sub-sched support
A system often runs multiple workloads, especially in multi-tenant server
environments where a system is split into partitions servicing separate,
more-or-less independent workloads, each requiring an application-specific
scheduler. To support such and other use cases, sched_ext is in the process
of growing multiple scheduler support.
When partitioning a system in terms of CPUs for such use cases, an
oft-taken approach is hard partitioning the system using cpuset. While it
would be possible to tie sched_ext multiple scheduler support to cpuset
partitions, such an approach would have fundamental limitations stemming
from the lack of dynamism and flexibility.
Users often don't care which specific CPUs are assigned to which workload
and want to take advantage of optimizations which are enabled by running
workloads on a larger machine - e.g. opportunistic over-commit, improving
latency critical workload characteristics while maintaining bandwidth
fairness, employing control mechanisms based on different criteria than
on-CPU time for e.g. flexible memory bandwidth isolation, packing similar
parts from different workloads on same L3s to improve cache efficiency,
and so on.
As these sorts of dynamic behaviors are impossible or difficult to implement
with hard partitioning, sched_ext is implementing cgroup sub-sched support
where schedulers can be attached to the cgroup hierarchy and a parent
scheduler is responsible for controlling the CPUs that each child can use
at any given moment. This makes CPU distribution dynamically controlled by
BPF allowing high flexibility.
This patch adds the skeletal sched_ext cgroup sub-sched support:
- sched_ext_ops.sub_cgroup_id and .sub_attach/detach() are added. Non-zero
sub_cgroup_id indicates that the scheduler is to be attached to the
identified cgroup. A sub-sched is attached to the cgroup iff the nearest
ancestor scheduler implements .sub_attach() and grants the attachment. Max
nesting depth is limited by SCX_SUB_MAX_DEPTH.
- When a scheduler exits, all its descendant schedulers are exited
together. Also, cgroup.scx_sched is added, which points to the effective
scheduler instance for the cgroup. This is updated on scheduler
init/exit and inherited on cgroup online. When a cgroup is offlined, the
attached scheduler is automatically exited.
- Sub-sched support is gated on CONFIG_EXT_SUB_SCHED which is
automatically enabled if both SCX and cgroups are enabled. Sub-sched
support is not tied to the CPU controller but rather the cgroup
hierarchy itself. This is intentional as the support for cpu.weight and
cpu.max based resource control is orthogonal to sub-sched support. Note
that CONFIG_CGROUPS around cgroup subtree iteration support for
scx_task_iter is replaced with CONFIG_EXT_SUB_SCHED for consistency.
- This allows loading sub-scheds and most framework operations such as
propagating disable down the hierarchy work. However, sub-scheds are not
operational yet and all tasks stay with the root sched. This will serve
as the basis for building up full sub-sched support.
- DSQs point to the scx_sched they belong to.
- scx_qmap is updated to allow attachment of sub-scheds and also serving
as sub-scheds.
- scx_is_descendant() is added but not yet used in this patch. It is used by
later changes in the series and placed here as this is where the function
belongs.
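scx_is_descendant() is presumably a simple parent walk along these lines (the parent linkage is an assumption about struct scx_sched):

        static bool scx_is_descendant(struct scx_sched *sch,
                                      struct scx_sched *ancestor)
        {
                for (; sch; sch = sch->parent)
                        if (sch == ancestor)
                                return true;
                return false;
        }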
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:02 +0000 (07:58 -1000)]
sched_ext: Reorganize enable/disable path for multi-scheduler support
In preparation for multiple scheduler support, reorganize the enable and
disable paths to make scheduler instances explicit. Extract
scx_root_disable() from scx_disable_workfn(). Rename scx_enable_workfn()
to scx_root_enable_workfn(). Change scx_disable() to take @sch parameter
and only queue disable_work if scx_claim_exit() succeeds for consistency.
Move exit_kind validation into scx_claim_exit(). The sysrq handler now
prints a message when no scheduler is loaded.
These changes don't materially affect user-visible behavior.
v2: Keep scx_enable() name as-is and only rename the workfn to
scx_root_enable_workfn(). Change scx_enable() return type to s32.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:02 +0000 (07:58 -1000)]
sched/core: Swap the order between sched_post_fork() and cgroup_post_fork()
The planned sched_ext cgroup sub-scheduler support needs the newly forked
task to be associated with its cgroup in its post_fork() hook. There is
currently no ordering requirement between the two. Swap them and note the
new ordering requirement.
Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andrea Righi <arighi@nvidia.com> Cc: Ingo Molnar <mingo@redhat.com>
Tejun Heo [Fri, 6 Mar 2026 17:58:02 +0000 (07:58 -1000)]
sched_ext: Implement cgroup subtree iteration for scx_task_iter
For the planned cgroup sub-scheduler support, enable/disable operations are
going to be subtree specific and iterating all tasks in the system for those
operations can be unnecessarily expensive and disruptive.
cgroup already has mechanisms to perform subtree task iterations. Implement
cgroup subtree iteration for scx_task_iter:
- Add optional @cgrp to scx_task_iter_start() which enables cgroup subtree
iteration.
- Make scx_task_iter use css_next_descendant_pre() and css_task_iter to
iterate all tasks in the cgroup subtree.
- Update all existing callers to pass NULL to maintain current behavior.
The two iteration mechanisms are independent and duplicative. It's likely that
scx_tasks can be removed in favor of always using cgroup iteration if
CONFIG_SCHED_CLASS_EXT depends on CONFIG_CGROUPS.
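The stock cgroup primitives involved combine roughly as follows (css_for_each_descendant_pre() wraps the css_next_descendant_pre() named above; the scx_task_iter wiring itself is omitted):

        static void for_each_subtree_task(struct cgroup *cgrp)
        {
                struct cgroup_subsys_state *css;

                rcu_read_lock();
                css_for_each_descendant_pre(css, &cgrp->self) {
                        struct css_task_iter it;
                        struct task_struct *p;

                        css_task_iter_start(css, 0, &it);
                        while ((p = css_task_iter_next(&it))) {
                                /* operate on @p */
                        }
                        css_task_iter_end(&it);
                }
                rcu_read_unlock();
        }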
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
David Carlier [Fri, 6 Mar 2026 04:50:55 +0000 (04:50 +0000)]
sched_ext: Use READ_ONCE() for scx_slice_bypass_us in scx_bypass()
Commit 0927780c90ce ("sched_ext: Use READ_ONCE() for lock-free reads
of module param variables") annotated the plain reads of
scx_slice_bypass_us and scx_bypass_lb_intv_us in bypass_lb_cpu(), but
missed a third site in scx_bypass():
scx_slice_bypass_us is a module parameter writable via sysfs in
process context through set_slice_us() -> param_set_uint_minmax(),
which performs a plain store without holding bypass_lock. scx_bypass()
reads the variable under bypass_lock, but since the writer does not
take that lock, the two accesses are concurrent.
WRITE_ONCE() only applies volatile semantics to the store of
scx_slice_dfl -- the val expression containing scx_slice_bypass_us is
evaluated as a plain read, providing no protection against concurrent
writes.
Wrap the read with READ_ONCE() to complete the annotation started by
commit 0927780c90ce and make the access KCSAN-clean, consistent with
the existing READ_ONCE(scx_slice_bypass_us) in bypass_lb_cpu().
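In other words (the NSEC_PER_USEC scaling is an assumption about how the slice value is derived from the microseconds parameter):

        /* before: the store is annotated but the param read is plain */
        WRITE_ONCE(scx_slice_dfl, scx_slice_bypass_us * NSEC_PER_USEC);

        /* after: both sides of the racy access are marked */
        WRITE_ONCE(scx_slice_dfl,
                   READ_ONCE(scx_slice_bypass_us) * NSEC_PER_USEC);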
Signed-off-by: David Carlier <devnexen@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
Until now, these didn't need to be exposed because controllers only cared
about the css hierarchy. The planned sched_ext hierarchical scheduler
support will be based on the default cgroup hierarchy, which is in line
with the existing BPF cgroup support, and thus needs these exposed.
Linus Torvalds [Thu, 5 Mar 2026 19:49:05 +0000 (11:49 -0800)]
Merge tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull crypto library fixes from Eric Biggers:
- Several test fixes:
- Fix flakiness in the interrupt context tests in certain VMs
- Make the lib/crypto/ KUnit tests depend on the corresponding
library options rather than selecting them. This follows the
standard KUnit convention, and it fixes an issue where enabling
CONFIG_KUNIT_ALL_TESTS pulled in all the crypto library code
- Add a kunitconfig file for lib/crypto/
- Fix a couple stale references to "aes-generic" that made it in
concurrently with the rename to "aes-lib"
- Update the help text for several CRYPTO kconfig options to remove
outdated information about users that now use the library instead
* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
crypto: testmgr - Fix stale references to aes-generic
crypto: Clean up help text for CRYPTO_CRC32
crypto: Clean up help text for CRYPTO_CRC32C
crypto: Clean up help text for CRYPTO_XXHASH
crypto: Clean up help text for CRYPTO_SHA256
crypto: Clean up help text for CRYPTO_BLAKE2B
lib/crypto: tests: Add a .kunitconfig file
lib/crypto: tests: Depend on library options rather than selecting them
kunit: irq: Ensure timer doesn't fire too frequently
Linus Torvalds [Thu, 5 Mar 2026 19:37:27 +0000 (11:37 -0800)]
Merge tag 'acpi-7.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI support fixes from Rafael Wysocki:
- Revert a commit related to ACPI device power management that was
not supposed to make any functional difference, but it did so and
introduced a regression (Rafael Wysocki)
- Update the _CPC object definition in ACPICA to match ACPI 6.6 and
prevent the kernel from printing a false-positive warning regarding
_CPC output package format on platforms shipping with firmware based
on ACPI 6.6 (Saket Dumbre)
* tag 'acpi-7.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
Revert "ACPI: PM: Let acpi_dev_pm_attach() skip devices without ACPI PM"
ACPICA: Update the _CPC definition to match ACPI 6.6
Linus Torvalds [Thu, 5 Mar 2026 19:00:46 +0000 (11:00 -0800)]
Merge tag 'net-7.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Including fixes from CAN, netfilter and wireless.
Current release - new code bugs:
- sched: cake: fixup cake_mq rate adjustment for diffserv config
- wifi: fix missing ieee80211_eml_params member initialization
Previous releases - regressions:
- tcp: give up on stronger sk_rcvbuf checks (for now)
Previous releases - always broken:
- net: fix rcu_tasks stall in threaded busypoll
- sched:
- fq: clear q->band_pkt_count[] in fq_reset()
- only allow act_ct to bind to clsact/ingress qdiscs and shared
blocks
- bridge: check relevant per-VLAN options in VLAN range grouping
- xsk: fix fragment node deletion to prevent buffer leak
Misc:
- spring cleanup of inactive maintainers"
* tag 'net-7.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (138 commits)
xdp: produce a warning when calculated tailroom is negative
net: enetc: use truesize as XDP RxQ info frag_size
libeth, idpf: use truesize as XDP RxQ info frag_size
i40e: use xdp.frame_sz as XDP RxQ info frag_size
i40e: fix registering XDP RxQ info
ice: change XDP RxQ frag_size from DMA write length to xdp.frame_sz
ice: fix rxq info registering in mbuf packets
xsk: introduce helper to determine rxq->frag_size
xdp: use modulo operation to calculate XDP frag tailroom
selftests/tc-testing: Add tests exercising act_ife metalist replace behaviour
net/sched: act_ife: Fix metalist update behavior
selftests: net: add test for IPv4 route with loopback IPv6 nexthop
net: ipv6: fix panic when IPv4 route references loopback IPv6 nexthop
net: vxlan: fix nd_tbl NULL dereference when IPv6 is disabled
net: bridge: fix nd_tbl NULL dereference when IPv6 is disabled
MAINTAINERS: remove Thomas Falcon from IBM ibmvnic
MAINTAINERS: remove Claudiu Manoil and Alexandre Belloni from Ocelot switch
MAINTAINERS: replace Taras Chornyi with Elad Nachman for Marvell Prestera
MAINTAINERS: remove Jonathan Lemon from OpenCompute PTP
MAINTAINERS: replace Clark Wang with Frank Li for Freescale FEC
...
Merge a fix updating the _CPC object definition in ACPICA to avoid
printing a false-positive output package format warning on new
platforms (Saket Dumbre)
* acpica:
ACPICA: Update the _CPC definition to match ACPI 6.6
Andrea Righi [Thu, 5 Mar 2026 06:29:00 +0000 (07:29 +0100)]
sched_ext: Document task ownership state machine
The task ownership state machine in sched_ext is quite hard to follow
from the code alone. The interaction of ownership states, memory
ordering rules and cross-CPU "lock dancing" makes the overall model
subtle.
Extend the documentation next to scx_ops_state to provide a more
structured and self-contained description of the state transitions and
their synchronization rules.
The new reference should make the code easier to reason about and
maintain and can help future contributors understand the overall
task-ownership workflow.
Signed-off-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Tejun Heo <tj@kernel.org>
zhidao su [Thu, 5 Mar 2026 06:18:56 +0000 (14:18 +0800)]
sched_ext: Use READ_ONCE() for lock-free reads of module param variables
bypass_lb_cpu() reads scx_bypass_lb_intv_us and scx_slice_bypass_us
without holding any lock, in timer callback context where module
parameter writes via sysfs can happen concurrently:
if (delta < DIV_ROUND_UP(min_delta_us, scx_slice_bypass_us))
                                        ^^^^^^^^^^^^^^^^^^^
                                        plain read -- KCSAN data race
scx_bypass_lb_intv_us already uses READ_ONCE() in scx_bypass_lb_timerfn()
and scx_bypass() for its other lock-free read sites, leaving
bypass_lb_cpu() inconsistent. scx_slice_bypass_us has the same
lock-free access pattern in the same function.
Fix both plain reads by using READ_ONCE() to complete the concurrent
access annotation and make the code KCSAN-clean.
Signed-off-by: zhidao su <suzhidao@xiaomi.com> Signed-off-by: Tejun Heo <tj@kernel.org>
Linus Torvalds [Thu, 5 Mar 2026 16:05:05 +0000 (08:05 -0800)]
Merge tag 'trace-v7.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing fixes from Steven Rostedt:
- Fix thresh_return of function graph tracer
The update to store data on the shadow stack removed the abuse of
using the task recursion word as a way to keep track of what
functions to ignore. The trace_graph_return() was updated to handle
this, but when function_graph tracer is using a threshold (only trace
functions that took longer than a specified time), it uses
trace_graph_thresh_return() instead.
This function was still incorrectly using the task struct recursion
word causing the function graph tracer to permanently set all
functions to "notrace"
- Fix thresh_return nosleep accounting
When the calltime was moved to the shadow stack storage instead of
being on the fgraph descriptor, the calculations for the amount of
sleep time was updated. The calculation was done in the
trace_graph_thresh_return() function, which also called the
trace_graph_return(), which did the calculation again, causing the
time to be doubled.
Remove the call to trace_graph_return() as what it needed to do
wasn't that much, and just do the work in
trace_graph_thresh_return().
- Fix syscall trace event activation on boot up
The syscall trace events are pseudo events attached to the
raw_syscall tracepoints. When the first syscall event is enabled, it
enables the raw_syscall tracepoint and doesn't need to do anything
when a second syscall event is also enabled.
When events are enabled via the kernel command line, syscall events
are partially enabled as the enabling is called before rcu_init. This
is done to allow early events to be enabled immediately. Because
kernel command line events do not distinguish between different types
of events, the syscall events are enabled here but are not fully
functioning. After rcu_init, they are disabled and re-enabled so that
they can be fully enabled.
The problem is that this "disable-enable" is done one at a
time. If more than one syscall event is specified on the command
line, by disabling them one at a time, the counter never gets to
zero, and the raw_syscall is not disabled and enabled, keeping the
syscall events in their non-fully functional state.
Instead, disable all events and re-enable them all, as that will
ensure the raw_syscall event is also disabled and re-enabled.
- Disable preemption in ftrace pid filtering
The ftrace pid filtering attaches to the fork and exit tracepoints to
add or remove pids that should be traced. They access variables
protected by RCU (preemption disabled). Now that tracepoint callbacks
are called with preemption enabled, this protection needs to be added
explicitly, and not depend on the functions being called with
preemption disabled.
- Disable preemption in event pid filtering
The event pid filtering needs the same preemption disabling guards as
ftrace pid filtering.
- Fix accounting of the memory mapped ring buffer on fork
Memory mapping the ftrace ring buffer sets the vm_flags to DONTCOPY.
But this does not prevent the application from calling
madvise(MADV_DOFORK). This causes the mapping to be copied on
fork. After the first task exits, the mapping is considered unmapped
by everyone. But when the second task exits, the counter goes below
zero and triggers a WARN_ON.
Since nothing prevents two separate tasks from mmapping the ftrace
ring buffer (although two mappings may mess each other up), there's
no reason to stop the memory from being copied on fork.
Update the vm_operations to have an ".open" handler to update the
accounting and let the ring buffer know someone else has it mapped.
- Add all ftrace headers in MAINTAINERS file
The MAINTAINERS file only specifies include/linux/ftrace.h but misses
ftrace_irq.h and ftrace_regs.h. Make the file use wildcards to get
all *ftrace* files.
* tag 'trace-v7.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
ftrace: Add MAINTAINERS entries for all ftrace headers
tracing: Fix WARN_ON in tracing_buffers_mmap_close
tracing: Disable preemption in the tracepoint callbacks handling filtered pids
ftrace: Disable preemption in the tracepoint callbacks handling filtered pids
tracing: Fix syscall events activation by ensuring refcount hits zero
fgraph: Fix thresh_return nosleeptime double-adjust
fgraph: Fix thresh_return clear per-task notrace
====================
Address XDP frags having negative tailroom
Aside from the issue described below, the tailroom calculation does not
account for pages being split between frags, e.g. in i40e, enetc and
AF_XDP ZC with smaller chunks. This series addresses the problem by
calculating modulo (skb_frag_off() % rxq->frag_size) in order to get
data offset within a smaller block of memory. Please note, xskxceiver
tail grow test passes without modulo e.g. in xdpdrv mode on i40e,
because there are not enough descriptors to get to flipped buffers.
Many ethernet drivers report xdp Rx queue frag size as being the same as
DMA write size. However, the only user of this field, namely
bpf_xdp_frags_increase_tail(), clearly expects a truesize.
Such a difference leads to unspecific memory corruption issues under certain
circumstances, e.g. in ixgbevf maximum DMA write size is 3 KB, so when
running xskxceiver's XDP_ADJUST_TAIL_GROW_MULTI_BUFF, 6K packet fully uses
all DMA-writable space in 2 buffers. This would be fine, if only
rxq->frag_size was properly set to 4K, but a value of 3K results in a
negative tailroom, because there is a non-zero page offset.
We are supposed to return -EINVAL and be done with it in such a case,
but due to tailroom being stored as an unsigned int, it is reported to be
somewhere near UINT_MAX, resulting in a tail being grown, even if the
requested offset is too much (it is around 2K in the abovementioned test).
This later leads to all kinds of unspecific calltraces.
The issue can be fixed in all in-tree drivers, but we cannot just trust OOT
drivers to not do this. Therefore, make tailroom a signed int and produce a
warning when it is negative to prevent such mistakes in the future.
The issue can also be easily reproduced with ice driver, by applying
the following diff to xskxceiver and enjoying a kernel panic in xdpdrv mode:
diff --git a/tools/testing/selftests/bpf/prog_tests/test_xsk.c b/tools/testing/selftests/bpf/prog_tests/test_xsk.c
index 5af28f359cfd..042d587fa7ef 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_xsk.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_xsk.c
@@ -2541,8 +2541,8 @@ int testapp_adjust_tail_grow_mb(struct test_spec *test)
{
test->mtu = MAX_ETH_JUMBO_SIZE;
/* Grow by (frag_size - last_frag_Size) - 1 to stay inside the last fragment */
- return testapp_adjust_tail(test, (XSK_UMEM__MAX_FRAME_SIZE / 2) - 1,
- XSK_UMEM__LARGE_FRAME_SIZE * 2);
+ return testapp_adjust_tail(test, XSK_UMEM__MAX_FRAME_SIZE * 100,
+ 6912);
}
int testapp_tx_queue_consumer(struct test_spec *test)
If we print out the values involved in the tailroom calculation:
Larysa Zaremba [Thu, 5 Mar 2026 11:12:50 +0000 (12:12 +0100)]
xdp: produce a warning when calculated tailroom is negative
Many ethernet drivers report xdp Rx queue frag size as being the same as
DMA write size. However, the only user of this field, namely
bpf_xdp_frags_increase_tail(), clearly expects a truesize.
Such a difference leads to unspecific memory corruption issues under certain
circumstances, e.g. in ixgbevf maximum DMA write size is 3 KB, so when
running xskxceiver's XDP_ADJUST_TAIL_GROW_MULTI_BUFF, 6K packet fully uses
all DMA-writable space in 2 buffers. This would be fine, if only
rxq->frag_size was properly set to 4K, but a value of 3K results in a
negative tailroom, because there is a non-zero page offset.
We are supposed to return -EINVAL and be done with it in such a case, but due
to tailroom being stored as an unsigned int, it is reported to be somewhere
near UINT_MAX, resulting in a tail being grown, even if the requested
offset is too much (it is around 2K in the abovementioned test). This later
leads to all kinds of unspecific calltraces.
The issue can be fixed in all in-tree drivers, but we cannot just trust OOT
drivers to not do this. Therefore, make tailroom a signed int and produce a
warning when it is negative to prevent such mistakes in the future.
Fixes: bf25146a5595 ("bpf: add frags support to the bpf_xdp_adjust_tail() API") Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-10-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Larysa Zaremba [Thu, 5 Mar 2026 11:12:49 +0000 (12:12 +0100)]
net: enetc: use truesize as XDP RxQ info frag_size
The only user of frag_size field in XDP RxQ info is
bpf_xdp_frags_increase_tail(). It clearly expects a truesize instead of DMA
write size. Different assumptions in enetc driver configuration lead to
negative tailroom.
Set frag_size to the same value as frame_sz.
Fixes: 2768b2e2f7d2 ("net: enetc: register XDP RX queues with frag_size") Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-9-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Larysa Zaremba [Thu, 5 Mar 2026 11:12:48 +0000 (12:12 +0100)]
libeth, idpf: use truesize as XDP RxQ info frag_size
The only user of frag_size field in XDP RxQ info is
bpf_xdp_frags_increase_tail(). It clearly expects the whole buffer size instead
of DMA write size. Different assumptions in idpf driver configuration lead
to negative tailroom.
To make it worse, buffer sizes are not actually uniform in idpf when
splitq is enabled, as there are several buffer queues, so rxq->rx_buf_size
is meaningless in this case.
Use truesize of the first bufq in AF_XDP ZC, as there is only one. Disable
growing tail for regular splitq.
Fixes: ac8a861f632e ("idpf: prepare structures to support XDP") Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-8-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Larysa Zaremba [Thu, 5 Mar 2026 11:12:47 +0000 (12:12 +0100)]
i40e: use xdp.frame_sz as XDP RxQ info frag_size
The only user of frag_size field in XDP RxQ info is
bpf_xdp_frags_increase_tail(). It clearly expects the whole buffer size instead
of DMA write size. Different assumptions in i40e driver configuration lead
to negative tailroom.
Set frag_size to the same value as frame_sz in shared pages mode, use new
helper to set frag_size when AF_XDP ZC is active.
Fixes: a045d2f2d03d ("i40e: set xdp_rxq_info::frag_size") Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-7-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Larysa Zaremba [Thu, 5 Mar 2026 11:12:46 +0000 (12:12 +0100)]
i40e: fix registering XDP RxQ info
The current way of handling XDP RxQ info in i40e has a problem: frag_size
is not updated when xsk_buff_pool is detached or when the MTU is changed.
This leads to growing the tail always failing for multi-buffer packets.
Couple XDP RxQ info registering with buffer allocations and unregistering
with cleaning the ring.
Fixes: a045d2f2d03d ("i40e: set xdp_rxq_info::frag_size") Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-6-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Larysa Zaremba [Thu, 5 Mar 2026 11:12:45 +0000 (12:12 +0100)]
ice: change XDP RxQ frag_size from DMA write length to xdp.frame_sz
The only user of frag_size field in XDP RxQ info is
bpf_xdp_frags_increase_tail(). It clearly expects the whole buffer size instead
of DMA write size. Different assumptions in ice driver configuration lead
to negative tailroom.
This allows to trigger kernel panic, when using
XDP_ADJUST_TAIL_GROW_MULTI_BUFF xskxceiver test and changing packet size to
6912 and the requested offset to a huge value, e.g.
XSK_UMEM__MAX_FRAME_SIZE * 100.
Due to other quirks of the ZC configuration in ice, panic is not observed
in ZC mode, but tailroom growing still fails when it should not.
Use fill queue buffer truesize instead of DMA write size in XDP RxQ info.
Fix ZC mode too by using the new helper.
Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side") Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-5-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Larysa Zaremba [Thu, 5 Mar 2026 11:12:44 +0000 (12:12 +0100)]
ice: fix rxq info registering in mbuf packets
XDP RxQ info contains frag_size, which depends on the MTU. This makes the
old way of registering RxQ info before calculating new buffer sizes
invalid. Currently, it leads to frag_size being outdated, making it
sometimes impossible to grow the tailroom in an mbuf packet. E.g. fragments are
actually 3K+, but frag size is still as if MTU was 1500.
Always register new XDP RxQ info after reconfiguring memory pools.
Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side") Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-4-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Larysa Zaremba [Thu, 5 Mar 2026 11:12:43 +0000 (12:12 +0100)]
xsk: introduce helper to determine rxq->frag_size
rxq->frag_size is basically the step between consecutive strictly aligned
frames. In ZC mode, the chunk size fits exactly, but if chunks are unaligned,
there is no safe way to determine the accessible space for growing the
tailroom. Report frag_size as zero if chunks are unaligned, and chunk_size
otherwise.
Fixes: 24ea50127ecf ("xsk: support mbuf on ZC RX") Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-3-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Larysa Zaremba [Thu, 5 Mar 2026 11:12:42 +0000 (12:12 +0100)]
xdp: use modulo operation to calculate XDP frag tailroom
The current formula for calculating XDP tailroom in mbuf packets works only
if each frag has its own page (if rxq->frag_size is PAGE_SIZE). This defeats
the purpose of the parameter overall and, without any indication, leads to a
negative calculated tailroom on at least half of the frags if shared pages
are used.
There are not many drivers that set rxq->frag_size. Among them:
* i40e and enetc always split page uniformly between frags, use shared
pages
* ice uses page_pool frags via libeth, those are power-of-2 and uniformly
distributed across page
* idpf has variable frag_size with XDP on, so current API is not applicable
* mlx5, mtk and mvneta use PAGE_SIZE or 0 as frag_size for page_pool
As for AF_XDP ZC, only ice, i40e and idpf declare frag_size for it. Modulo
operation yields good results for aligned chunks: they are all power-of-2,
between 2K and PAGE_SIZE. The formula without modulo fails when chunk_size is
2K. Buffers in unaligned mode are not distributed uniformly, so modulo
operation would not work.
To accommodate unaligned buffers, we could define frag_size as
data + tailroom, and hence not subtract the offset when calculating
tailroom, but this would necessitate more changes in the drivers.
Define rxq->frag_size as an even portion of a page that fully belongs to a
single frag. When calculating tailroom, locate the data start within such
portion by performing a modulo operation on page offset.
Fixes: bf25146a5595 ("bpf: add frags support to the bpf_xdp_adjust_tail() API") Acked-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Link: https://patch.msgid.link/20260305111253.2317394-2-larysa.zaremba@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Whenever an ife action replace changes the metalist, instead of
replacing the old data on the metalist, the current ife code is appending
the new metadata. Aside from being inappropriate behavior, this may lead
to an unbounded addition of metadata to the metalist which might cause an
out of bounds error when running the encode op:
ip -6 nexthop add id 100 dev lo
ip route add 172.20.20.0/24 nhid 100
ping -c1 172.20.20.1 # kernel crash
Problem Description
When a standalone IPv6 nexthop object is created with a loopback device,
fib6_nh_init() misclassifies it as a reject route. Nexthop objects have
no destination prefix (fc_dst=::), so fib6_is_reject() always matches
any loopback nexthop. The reject path skips fib_nh_common_init(), leaving
nhc_pcpu_rth_output unallocated. When an IPv4 route later references
this nexthop and triggers a route lookup, __mkroute_output() calls
raw_cpu_ptr(nhc->nhc_pcpu_rth_output) on a NULL pointer, causing a page
fault.
The reject classification was designed for regular IPv6 routes to prevent
kernel routing loops, but nexthop objects should not be subject to this
check since they carry no destination information. Loop prevention is
handled separately when the route itself is created.
[1] https://syzkaller.appspot.com/bug?extid=334190e097a98a1b81bb
====================
When a standalone IPv6 nexthop object is created with a loopback device
(e.g., "ip -6 nexthop add id 100 dev lo"), fib6_nh_init() misclassifies
it as a reject route. This is because nexthop objects have no destination
prefix (fc_dst=::), causing fib6_is_reject() to match any loopback
nexthop. The reject path skips fib_nh_common_init(), leaving
nhc_pcpu_rth_output unallocated. If an IPv4 route later references this
nexthop, __mkroute_output() dereferences NULL nhc_pcpu_rth_output and
panics.
Simplify the check in fib6_nh_init() to only match explicit reject
routes (RTF_REJECT) instead of using fib6_is_reject(). The loopback
promotion heuristic in fib6_is_reject() is handled separately by
ip6_route_info_create_nh(). After this change, the three cases behave
as follows:
1. Explicit reject route (RTF_REJECT set, e.g. "ip -6 route add unreachable
2001:db8::/32"): takes the reject path as before, fib_nh_common_init() is
skipped. Behavior unchanged.
2. Implicit loopback reject route ("ip -6 route add 2001:db8::/32 dev lo"):
RTF_REJECT is not set, takes normal path, fib_nh_common_init() is
called. ip6_route_info_create_nh() still promotes it to reject
afterward. nhc_pcpu_rth_output is allocated but unused, which is
harmless.
3. Standalone nexthop object ("ip -6 nexthop add id 100 dev lo"):
RTF_REJECT is not set, takes normal path, fib_nh_common_init() is
called. nhc_pcpu_rth_output is properly allocated, fixing the crash
when IPv4 routes reference this nexthop.
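A hedged sketch of the resulting check in fib6_nh_init(), as a pseudo-diff;
the surrounding code and the fib6_is_reject() arguments are approximated:

  - if (fib6_is_reject(cfg->fc_flags, dev, ...)) /* loopback heuristic */
  + if (cfg->fc_flags & RTF_REJECT)              /* explicit reject only */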
Suggested-by: Ido Schimmel <idosch@nvidia.com> Fixes: 493ced1ac47c ("ipv4: Allow routes to use nexthop objects") Reported-by: syzbot+334190e097a98a1b81bb@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/698f8482.a70a0220.2c38d7.00ca.GAE@google.com/T/ Signed-off-by: Jiayuan Chen <jiayuan.chen@shopee.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Link: https://patch.msgid.link/20260304113817.294966-2-jiayuan.chen@linux.dev Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: vxlan: fix nd_tbl NULL dereference when IPv6 is disabled
When booting with the 'ipv6.disable=1' parameter, the nd_tbl is never
initialized because inet6_init() exits before ndisc_init() is called
which initializes it. If an IPv6 packet is injected into the interface,
route_shortcircuit() is called and a NULL pointer dereference happens on
neigh_lookup().
Fix this by adding an early check in route_shortcircuit() when the protocol
is ETH_P_IPV6. Note that ipv6_mod_enabled() cannot be used here because
VXLAN can be built-in even when IPv6 is built as a module.
net: bridge: fix nd_tbl NULL dereference when IPv6 is disabled
When booting with the 'ipv6.disable=1' parameter, the nd_tbl is never
initialized because inet6_init() exits before ndisc_init() is called
which initializes it. Then, if neigh_suppress is enabled and an ICMPv6
Neighbor Discovery packet reaches the bridge, br_do_suppress_nd() will
dereference ipv6_stub->nd_tbl which is NULL, passing it to
neigh_lookup(). This causes a kernel NULL pointer dereference.
Fix this by replacing the IS_ENABLED(CONFIG_IPV6) check with
ipv6_mod_enabled() in the callers. This in essence disables NS/NA suppression
when IPv6 is disabled.
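A hedged sketch of the caller-side change (the exact call sites and their
conditions are approximated):

  /* before: compile-time check, true even when booted with ipv6.disable=1 */
  if (IS_ENABLED(CONFIG_IPV6) && (p->flags & BR_NEIGH_SUPPRESS))
          br_do_suppress_nd(skb, br, vid, p, msg);

  /* after: runtime check, false when IPv6 is disabled at boot */
  if (ipv6_mod_enabled() && (p->flags & BR_NEIGH_SUPPRESS))
          br_do_suppress_nd(skb, br, vid, p, msg);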
Fixes: ed842faeb2bd ("bridge: suppress nd pkts on BR_NEIGH_SUPPRESS ports") Reported-by: Guruprasad C P <gurucp2005@gmail.com> Closes: https://lore.kernel.org/netdev/CAHXs0ORzd62QOG-Fttqa2Cx_A_VFp=utE2H2VTX5nqfgs7LDxQ@mail.gmail.com/ Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Link: https://patch.msgid.link/20260304120357.9778-1-fmancera@suse.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
MAINTAINERS: annual cleanup of inactive maintainers
Annual cleanup of inactive maintainers under networking.
The goal is to make sure MAINTAINERS reflects reality for
code which is relatively actively changed (at least 70 commits
in the last 2 years or at least 120 commits in the last 5 years).
Those who either:
- were the initial author / "upstreamer" of the driver; or
- authored at least 1/3rd of the existing code base (per git blame); or
- authored at least 25% of commits before becoming inactive
are moved to CREDITS.
The discovery of inactive maintainers was done using gitdm tools,
with a bunch of ad-hoc scripts on top to do the rest. I tried to
double check the results but this is mostly a scripted cleanup
so please report inaccuracies if any.
====================
Jakub Kicinski [Tue, 3 Mar 2026 21:53:11 +0000 (13:53 -0800)]
MAINTAINERS: remove Thomas Falcon from IBM ibmvnic
We have not seen emails or tags from Thomas's IBM address
(tlfalcon@linux.ibm.com) in over 5 years. Looks like Thomas
is active in perf tooling at Intel (thomas.falcon@intel.com).
Subsystem IBM Power SRIOV Virtual NIC Device Driver
Changes 49 / 134 (36%)
Last activity: 2025-08-26
Haren Myneni <haren@linux.ibm.com>:
Tags 3c14917953a5 2025-08-26 00:00:00 2
Rick Lindsley <ricklind@linux.ibm.com>:
Nick Child <nnac123@linux.ibm.com>:
Author d93a6caab5d7 2025-03-25 00:00:00 14
Tags d93a6caab5d7 2025-03-25 00:00:00 16
Thomas Falcon <tlfalcon@linux.ibm.com>:
Top reviewers:
[22]: drt@linux.ibm.com
[13]: horms@kernel.org
[9]: ricklind@linux.vnet.ibm.com
[3]: davemarq@linux.ibm.com
INACTIVE MAINTAINER Thomas Falcon <tlfalcon@linux.ibm.com>
Move Thomas to CREDITS as the initial author of ibmvnic.
Jakub Kicinski [Tue, 3 Mar 2026 21:53:10 +0000 (13:53 -0800)]
MAINTAINERS: remove Claudiu Manoil and Alexandre Belloni from Ocelot switch
We have not seen tags from Claudiu for the Ocelot switch driver
in over 5 years. He is active upstream in other NXP subsystems
(ENETC, gianfar), with 46 emails on lore since 2024.
We have not seen tags from Alexandre for the Ocelot switch driver
in over 5 years. He is very active upstream in other subsystems
(RTC, I3C, Atmel/Microchip SoC), with over 1,200 emails on lore
since 2024.
Vladimir Oltean is active.
Jakub Kicinski [Tue, 3 Mar 2026 21:53:09 +0000 (13:53 -0800)]
MAINTAINERS: replace Taras Chornyi with Elad Nachman for Marvell Prestera
We have not seen emails or tags from Taras in over 5 years,
and there is no recent mailing list activity.
Elad Nachman has been providing reviews in the last couple
of years and is the top reviewer for this subsystem.
Jakub Kicinski [Tue, 3 Mar 2026 21:53:07 +0000 (13:53 -0800)]
MAINTAINERS: replace Clark Wang with Frank Li for Freescale FEC
We have not seen tags from Clark for FEC in over 5 years.
He has some limited recent activity on the mailing list in other
NXP subsystems (stmmac, phy). Wei Fang and Shenwei Wang are active,
with decent review coverage (61%).
Frank Li has been reviewing code actively more recently; let's
make it official.
Subsystem FREESCALE IMX / MXC FEC DRIVER
Changes 57 / 92 (61%)
Last activity: 2026-02-10
Wei Fang <wei.fang@nxp.com>:
Author 25eb3058eb70 2026-02-10 00:00:00 33
Tags 25eb3058eb70 2026-02-10 00:00:00 61
Shenwei Wang <shenwei.wang@nxp.com>:
Author d466c16026e9 2025-09-14 00:00:00 6
Tags d466c16026e9 2025-09-14 00:00:00 6
Clark Wang <xiaoning.wang@nxp.com>:
Top reviewers:
[23]: Frank.Li@nxp.com
[17]: andrew@lunn.ch
[4]: csokas.bence@prolan.hu
[3]: horms@kernel.org
[2]: maxime.chevallier@bootlin.com
INACTIVE MAINTAINER Clark Wang <xiaoning.wang@nxp.com>
Jakub Kicinski [Tue, 3 Mar 2026 21:53:06 +0000 (13:53 -0800)]
MAINTAINERS: remove DENG Qingfang from MediaTek switch
We have not seen tags from DENG Qingfang for the MediaTek
switch driver in over 5 years. He is active upstream with
PPP/PPPoE patches in net-next. Chester and Daniel are active.
Subsystem MEDIATEK SWITCH DRIVER
Changes 26 / 70 (37%)
Last activity: 2025-12-01
Chester A. Unal <chester.a.unal@arinc9.com>:
Tags 585943b7ad30 2025-12-01 00:00:00 7
Daniel Golle <daniel@makrotopia.org>:
Author 497041d76301 2025-04-23 00:00:00 2
Tags 3b87e60d2131 2025-12-01 00:00:00 14
DENG Qingfang <dqfext@gmail.com>:
Sean Wang <sean.wang@mediatek.com>:
Top reviewers:
[4]: andrew@lunn.ch
[4]: florian.fainelli@broadcom.com
[4]: arinc.unal@arinc9.com
[2]: olteanv@gmail.com
INACTIVE MAINTAINER DENG Qingfang <dqfext@gmail.com>
Jakub Kicinski [Tue, 3 Mar 2026 21:53:05 +0000 (13:53 -0800)]
MAINTAINERS: remove Sean Wang from MediaTek Ethernet and switch
We have not seen tags from Sean in over 5 years,
with only one mailing list post since 2024.
Felix and Lorenzo are active for the Ethernet driver,
and Chester, Daniel and DENG Qingfang are active for
the switch driver.
Subsystem MEDIATEK ETHERNET DRIVER
Changes 55 / 113 (48%)
Last activity: 2025-10-12
Felix Fietkau <nbd@nbd.name>:
Author d4736737110f 2025-09-02 00:00:00 3
Tags d4736737110f 2025-09-02 00:00:00 4
Sean Wang <sean.wang@mediatek.com>:
Lorenzo Bianconi <lorenzo@kernel.org>:
Author 96326447d466 2025-08-13 00:00:00 35
Tags 3abc0e55ea1f 2025-10-12 00:00:00 40
Top reviewers:
[26]: horms@kernel.org
[5]: andrew@lunn.ch
[4]: jacob.e.keller@intel.com
[3]: shannon.nelson@amd.com
[3]: michal.swiatkowski@linux.intel.com
INACTIVE MAINTAINER Sean Wang <sean.wang@mediatek.com>
Subsystem MEDIATEK SWITCH DRIVER
Changes 26 / 70 (37%)
Last activity: 2025-12-01
Chester A. Unal <chester.a.unal@arinc9.com>:
Tags 585943b7ad30 2025-12-01 00:00:00 7
Daniel Golle <daniel@makrotopia.org>:
Author 497041d76301 2025-04-23 00:00:00 2
Tags 3b87e60d2131 2025-12-01 00:00:00 14
DENG Qingfang <dqfext@gmail.com>:
Sean Wang <sean.wang@mediatek.com>:
Top reviewers:
[4]: andrew@lunn.ch
[4]: florian.fainelli@broadcom.com
[4]: arinc.unal@arinc9.com
[2]: olteanv@gmail.com
INACTIVE MAINTAINER Sean Wang <sean.wang@mediatek.com>
Jakub Kicinski [Tue, 3 Mar 2026 21:53:03 +0000 (13:53 -0800)]
MAINTAINERS: remove Jerin Jacob from Marvell OcteonTX2
We have not seen tags from Jerin for OcteonTX2 in over 5 years.
Recent lore activity is in DPDK (non-kernel), not Linux.
Sunil, Linu, Geetha, hariprasad, and Subbaraya are active,
though the review coverage isn't great (38%).
Jakub Kicinski [Tue, 3 Mar 2026 21:53:02 +0000 (13:53 -0800)]
MAINTAINERS: remove Manish Chopra from QLogic QL4xxx (now orphan)
We have not seen tags from Manish for the QL4xxx driver in over 5 years,
and there is no mailing list activity since Oct 2023. There has been
no maintainer activity in this subsystem at all.
Since there is no other maintainer for this driver, it becomes an Orphan.
Jakub Kicinski [Tue, 3 Mar 2026 21:53:01 +0000 (13:53 -0800)]
MAINTAINERS: remove Johan Hedberg from Bluetooth subsystem
We have not seen emails or tags from Johan in over 5 years,
and there is no recent mailing list activity.
Marcel Holtmann hasn't provided any tags in the Bluetooth
subsystem in over 5 years, but he is active on the Bluetooth
mailing list, providing informal review.
Luiz Augusto von Dentz is very active, handling essentially
all commits and reviews (12% coverage, but Luiz is the sole
active committer).
Subsystem BLUETOOTH SUBSYSTEM
Changes 50 / 411 (12%)
Last activity: 2026-02-23
Marcel Holtmann <marcel@holtmann.org>:
Johan Hedberg <johan.hedberg@gmail.com>:
Luiz Augusto von Dentz <luiz.dentz@gmail.com>:
Author 138d7eca445e 2026-02-23 00:00:00 164
Committer 138d7eca445e 2026-02-23 00:00:00 361
Tags 138d7eca445e 2026-02-23 00:00:00 362
Top reviewers:
[15]: pmenzel@molgen.mpg.de
[8]: keescook@chromium.org
[5]: willemb@google.com
[4]: horms@kernel.org
[3]: kuniyu@amazon.com
[3]: luiz.von.dentz@intel.com
INACTIVE MAINTAINER Johan Hedberg <johan.hedberg@gmail.com>
Sun Jian [Wed, 25 Feb 2026 11:14:51 +0000 (19:14 +0800)]
selftests: net: tun: don't abort XFAIL cases
The tun UDP tunnel GSO fixture contains XFAIL-marked variants intended to
exercise failure paths (e.g. EMSGSIZE / "Message too long").
Using ASSERT_EQ() in these tests aborts the subtest, which prevents the
harness from classifying them as XFAIL and can make the overall net: tun
test fail.
Switch the relevant ASSERT_EQ() checks to EXPECT_EQ() so the subtests
continue running and the failures are correctly reported and accounted
as XFAIL where applicable.
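A hedged sketch of the pattern (the test body and variable names are
hypothetical, not the actual selftest code):

  /* EXPECT_EQ() records a mismatch but lets the subtest run to
   * completion, so the harness can still classify it as XFAIL */
  ret = sendmsg(self->fd, &msg, 0);
  EXPECT_EQ(ret, -1);             /* was ASSERT_EQ(), which aborts */
  EXPECT_EQ(errno, EMSGSIZE);     /* "Message too long" */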
Sun Jian [Wed, 25 Feb 2026 11:14:50 +0000 (19:14 +0800)]
selftests/harness: order TEST_F and XFAIL_ADD constructors
TEST_F() allocates and registers its struct __test_metadata via mmap()
inside its constructor, and only then assigns the
_##fixture_##test##_object pointer.
XFAIL_ADD() runs in a constructor too and reads
_##fixture_##test##_object to initialize xfail->test. If XFAIL_ADD runs
first, xfail->test can be NULL and the expected failure will be reported
as FAIL.
Use constructor priorities to ensure TEST_F registration runs before
XFAIL_ADD, without adding extra state or runtime lookups.
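A hedged sketch of how constructor priorities enforce the ordering (the
priority values are assumptions; GCC reserves priorities 0-100 for the
implementation):

  /* TEST_F registration runs first ... */
  static void __attribute__((constructor(101))) _register_test(void)
  {
          /* mmap and register __test_metadata, then set the
           * _##fixture_##test##_object pointer */
  }

  /* ... so XFAIL_ADD sees a non-NULL test object */
  static void __attribute__((constructor(102))) _register_xfail(void)
  {
          /* read the object pointer and initialize xfail->test */
  }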
Jakub Kicinski [Thu, 5 Mar 2026 15:33:25 +0000 (07:33 -0800)]
Merge tag 'nf-26-03-05' of https://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf
Florian Westphal says:
====================
netfilter: updates for net
1) Inseo An reported a bug with the set element handling in nf_tables:
When a set cannot accept more elements, we unlink and immediately free
an element that was already inserted into a public data structure,
without waiting for an RCU grace period. Fix this by doing the
increment earlier and by deferring the possible unlink-and-free to the
existing abort path, which performs the needed synchronize_rcu before
freeing. From Pablo Neira Ayuso. This is an ancient bug, dating back to
kernel 4.10.
2) syzbot reported a WARN_ON() splat in nf_tables that occurs on memory
allocation failure. Fix this with a new iterator annotation:
the affected walker does not need to clone the data structure and
can just use the live version if no clone exists yet.
Also from Pablo. This bug has existed since the 6.10 days.
3) A long-standing bug in the nft_pipapo data structure, present since
its introduction:
The garbage collection logic to remove expired elements is broken.
We must unlink from the data structure and can only hand the freeing
to call_rcu after the clone/live pointers of the data structures
have been swapped. Otherwise, readers can observe the freed element.
Reported by Yiming Qian.
* tag 'nf-26-03-05' of https://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf:
netfilter: nft_set_pipapo: split gc into unlink and reclaim phase
netfilter: nf_tables: clone set on flush only
netfilter: nf_tables: unconditionally bump set->nelems before insertion
====================
Jerome Marchand [Thu, 5 Mar 2026 09:31:17 +0000 (10:31 +0100)]
ftrace: Add MAINTAINERS entries for all ftrace headers
There is currently no entry for ftrace_irq.h and ftrace_regs.h. Add a
generic entry for all *ftrace* headers to include them and prevent
overlooking future ftrace headers.
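For illustration, the file pattern would be something along these lines (the
exact glob in the patch may differ):

  F:	include/linux/*ftrace*.h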
netfilter: nft_set_pipapo: split gc into unlink and reclaim phase
Yiming Qian reports a use-after-free in the pipapo set type:
Under a large number of expired elements, commit-time GC can run for a very
long time in a non-preemptible context, triggering soft lockup warnings and
RCU stall reports (local denial of service).
We must split GC into an unlink and a reclaim phase.
We cannot queue elements for freeing until pointers have been swapped.
Expired elements are still exposed to both the packet path and userspace
dumpers via the live copy of the data structure.
call_rcu() does not protect us: dump operations or element lookups starting
after call_rcu has fired can still observe the freed element, unless the
commit phase has made enough progress to swap the clone and live pointers
before any new reader has picked up the old version.
This is a similar approach to the one taken recently for the rbtree backend in commit 35f83a75529a ("netfilter: nft_set_rbtree: don't gc elements on insert").
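A hedged sketch of the two-phase shape (all names are hypothetical, not the
actual nft_set_pipapo code):

  /* phase 1: unlink expired elements; readers may still hold them */
  static void gc_unlink_phase(struct gc_batch *gc, struct elem *e)
  {
          unlink_from_match_data(e);                /* hypothetical helper */
          list_add_tail(&e->gc_node, &gc->reclaim);
  }

  /* phase 2: only after the clone/live pointer swap has been published
   * is it safe to hand the unlinked elements to call_rcu() */
  static void gc_reclaim_phase(struct gc_batch *gc)
  {
          struct elem *e, *next;

          list_for_each_entry_safe(e, next, &gc->reclaim, gc_node)
                  call_rcu(&e->rcu, elem_free_rcu); /* hypothetical cb */
  }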
Fixes: 3c4287f62044 ("nf_tables: Add set type for arbitrary concatenation of ranges") Reported-by: Yiming Qian <yimingqian591@gmail.com> Signed-off-by: Florian Westphal <fw@strlen.de>
Restrict set clone to the flush set command in the preparation phase.
Add NFT_ITER_UPDATE_CLONE and use it for this purpose; update the rbtree
and pipapo backends to only clone the set when this iteration type is
used.
As for the existing NFT_ITER_UPDATE type, update the pipapo backend to
use the set clone if one is available, and fall back to the live set
representation otherwise. After this update, there is no need to clone a set
that is being deleted; this includes bound anonymous sets.
An alternative approach to NFT_ITER_UPDATE_CLONE is to add a .clone
interface and call it from the flush set path.
Reported-by: syzbot+4924a0edc148e8b4b342@syzkaller.appspotmail.com Fixes: 3f1d886cc7c3 ("netfilter: nft_set_pipapo: move cloning of match info to insert/removal path") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Florian Westphal <fw@strlen.de>
netfilter: nf_tables: unconditionally bump set->nelems before insertion
If the set is full, a new element gets published and then removed
without waiting for an RCU grace period, while an RCU reader may already be
walking over it.
To address this issue, add the element transaction even if the set is full,
but toggle the set_full flag to report -ENFILE so the abort path safely
unwinds the set to its previous state.
As for element updates, decrement set->nelems to restore it.
A simpler fix is to call synchronize_rcu() in the error path.
However, with a large batch adding elements to an already maxed-out set,
this could cause a noticeable slowdown of such batches.
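A hedged sketch of the resulting insert-path logic (the flag plumbing is
approximate; set_full is taken from the description above):

  /* keep the element transaction even when the set is full, so the
   * abort path (which waits for the RCU grace period) unwinds it */
  set->nelems++;
  if (set->nelems > set->size) {
          trans->set_full = true;   /* abort path restores the set */
          err = -ENFILE;            /* reported to userspace */
  }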
Fixes: 35d0ac9070ef ("netfilter: nf_tables: fix set->nelems counting with no NLM_F_EXCL") Reported-by: Inseo An <y0un9sa@gmail.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Florian Westphal <fw@strlen.de>
net: Provide a PREEMPT_RT specific check for netdev_queue::_xmit_lock
After acquiring netdev_queue::_xmit_lock the number of the CPU owning
the lock is recorded in netdev_queue::xmit_lock_owner. This works as
long as the BH context is not preemptible.
On PREEMPT_RT the softirq context is preemptible, and without the
softirq lock it is possible to have multiple users in __dev_queue_xmit()
submitting an skb on the same CPU. This is fine in general, but it also
means that the current CPU is recorded as netdev_queue::xmit_lock_owner.
This in turn leads to the recursion alert and the skb is dropped.
Instead of checking the CPU number that owns the lock, PREEMPT_RT can
check whether the lock owner matches the current task.
Add netif_tx_owned() which returns true if the current context owns the
lock by comparing the provided CPU number with the recorded number. This
resembles the current check by negating the condition (the current check
returns true if the lock is not owned).
On PREEMPT_RT use rt_mutex_owner() to return the lock owner and compare
the current task against it.
Use the new helper in __dev_queue_xmit() and netif_local_xmit_active()
which provides a similar check.
Update comments regarding pairing READ_ONCE().
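A hedged sketch of the helper's shape (the RT owner lookup path is an
assumption, not the exact kernel code):

  static inline bool netif_tx_owned(struct netdev_queue *txq, int cpu)
  {
  #ifdef CONFIG_PREEMPT_RT
          /* _xmit_lock is a sleeping lock on RT; compare the owner task.
           * The field path below is schematic, an assumption. */
          return rt_mutex_owner(&txq->_xmit_lock.rtmutex) == current;
  #else
          /* pairs with the WRITE_ONCE() done when taking the lock */
          return READ_ONCE(txq->xmit_lock_owner) == cpu;
  #endif
  }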
Reported-by: Bert Karwatzki <spasswolf@web.de> Closes: https://lore.kernel.org/all/20260216134333.412332-1-spasswolf@web.de Fixes: 3253cb49cbad4 ("softirq: Allow to drop the softirq-BKL lock on PREEMPT_RT") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20260302162631.uGUyIqDT@linutronix.de Signed-off-by: Paolo Abeni <pabeni@redhat.com>
====================
net: stmmac: Fix VLAN handling when interface is down
VLAN register accesses on the MAC side require the PHY RX clock to be
active. When the network interface is down, the PHY is suspended and
the RX clock is unavailable, causing VLAN operations to fail with
timeouts.
The VLAN core automatically removes VID 0 after the interface goes down
and re-adds it when it comes back up, so these timeouts happen during
normal interface down/up:
# ip link set end1 down
renesas-gbeth 15c40000.ethernet end1: Timeout accessing MAC_VLAN_Tag_Filter
renesas-gbeth 15c40000.ethernet end1: failed to kill vid 0081/0
Adding VLANs while the interface is down also fails:
# ip link add link end1 name end1.10 type vlan id 10
renesas-gbeth 15c40000.ethernet end1: Timeout accessing MAC_VLAN_Tag_Filter
RTNETLINK answers: Device or resource busy
Patch 4 fixes this by adding checks in the VLAN paths for netif_running(),
and skipping register accesses if the interface is down. Only the software
state is updated in this case. When the interface is brought up, the VLAN
state is restored to hardware.
Patches 1-3 fix some issues in the existing VLAN implementation.
====================
Ovidiu Panait [Tue, 3 Mar 2026 14:58:28 +0000 (14:58 +0000)]
net: stmmac: Defer VLAN HW configuration when interface is down
VLAN register accesses on the MAC side require the PHY RX clock to be
active. When the network interface is down, the PHY is suspended and
the RX clock is unavailable, causing VLAN operations to fail with
timeouts.
The VLAN core automatically removes VID 0 after the interface goes down
and re-adds it when it comes back up, so these timeouts happen during
normal interface down/up:
# ip link set end1 down
renesas-gbeth 15c40000.ethernet end1: Timeout accessing MAC_VLAN_Tag_Filter
renesas-gbeth 15c40000.ethernet end1: failed to kill vid 0081/0
Adding VLANs while the interface is down also fails:
# ip link add link end1 name end1.10 type vlan id 10
renesas-gbeth 15c40000.ethernet end1: Timeout accessing MAC_VLAN_Tag_Filter
RTNETLINK answers: Device or resource busy
To fix this, check if the interface is up before accessing VLAN registers.
The software state is always kept up to date regardless of interface state.
When the interface is brought up, stmmac_vlan_restore() is called
to write the VLAN state to hardware.
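A hedged sketch of the guard (the bitmap field and return path are
approximate):

  /* interface down: the PHY RX clock is off, so skip register access;
   * only software state is updated, stmmac_vlan_restore() replays it */
  if (!netif_running(dev)) {
          set_bit(vid, priv->active_vlans);
          return 0;
  }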
Ovidiu Panait [Tue, 3 Mar 2026 14:58:27 +0000 (14:58 +0000)]
net: stmmac: Fix VLAN HW state restore
When the network interface is opened or resumed, a DMA reset is performed,
which resets all hardware state, including VLAN state. Currently, only
the resume path is restoring the VLAN state via
stmmac_restore_hw_vlan_rx_fltr(), but that is incomplete: the VLAN hash
table and the VLAN_TAG control bits are not restored.
Therefore, add stmmac_vlan_restore(), which restores the full VLAN
state by updating both the HW filter entries and the hash table, and
call it from both the open and resume paths.
The VLAN restore is moved outside of phylink_rx_clk_stop_block/unblock
in the resume path because receive clock stop is already disabled when
stmmac supports VLAN.
Also, remove the hash readback code in vlan_restore_hw_rx_fltr() that
attempts to restore VTHM by reading VLAN_HASH_TABLE, as it always reads
zero after DMA reset, making it dead code.
Fixes: 3cd1cfcba26e ("net: stmmac: Implement VLAN Hash Filtering in XGMAC") Fixes: ed64639bc1e0 ("net: stmmac: Add support for VLAN Rx filtering") Signed-off-by: Ovidiu Panait <ovidiu.panait.rb@renesas.com> Link: https://patch.msgid.link/20260303145828.7845-4-ovidiu.panait.rb@renesas.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Ovidiu Panait [Tue, 3 Mar 2026 14:58:26 +0000 (14:58 +0000)]
net: stmmac: Improve double VLAN handling
The double VLAN bits (EDVLP, ESVL, DOVLTC) are handled inconsistently
between the two vlan_update_hash() implementations:
- dwxgmac2_update_vlan_hash() explicitly clears the double VLAN bits when
is_double is false, meaning that adding a 802.1Q VLAN will disable
double VLAN mode:
$ ip link add link eth0 name eth0.200 type vlan id 200 protocol 802.1ad
$ ip link add link eth0 name eth0.100 type vlan id 100
# Double VLAN bits no longer set
- vlan_update_hash() sets these bits and only clears them when the last
VLAN has been removed, so double VLAN mode remains enabled even after all
802.1AD VLANs are removed.
Address both issues by tracking the number of active 802.1AD VLANs in
priv->num_double_vlans. Pass this count to stmmac_vlan_update() so both
implementations correctly set the double VLAN bits when any 802.1AD
VLAN is active, and clear them only when none remain.
Also update vlan_update_hash() to explicitly clear the double VLAN bits
when is_double is false, matching the dwxgmac2 behavior.
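A hedged sketch of the counting (the call shape is approximate;
num_double_vlans and stmmac_vlan_update() are named above):

  /* keep the double VLAN bits set while any 802.1AD VLAN is active */
  if (proto == htons(ETH_P_8021AD))
          priv->num_double_vlans += add ? 1 : -1;
  stmmac_vlan_update(priv, priv->num_double_vlans > 0);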
Ovidiu Panait [Tue, 3 Mar 2026 14:58:25 +0000 (14:58 +0000)]
net: stmmac: Fix error handling in VLAN add and delete paths
stmmac_vlan_rx_add_vid() updates active_vlans and the VLAN hash
register before writing the HW filter entry. If the filter write
fails, it leaves a stale VID in active_vlans and the hash register.
stmmac_vlan_rx_kill_vid() has the reverse problem: it clears
active_vlans before removing the HW filter. On failure, the VID is
gone from active_vlans but still present in the HW filter table.
To fix this, reorder the operations to update the hash table first,
then attempt the HW filter operation. If the HW filter fails, roll
back both the active_vlans bitmap and the hash table by calling
stmmac_vlan_update() again.
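A hedged sketch of the reordered add path with rollback (the HW filter
helper is hypothetical):

  set_bit(vid, priv->active_vlans);
  ret = stmmac_vlan_update(priv, is_double);       /* hash table first */
  if (!ret)
          ret = stmmac_write_hw_filter(priv, vid); /* hypothetical helper */
  if (ret) {
          /* roll back both the bitmap and the hash table */
          clear_bit(vid, priv->active_vlans);
          stmmac_vlan_update(priv, is_double);
  }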
Larysa removes the VF restriction for LLDP filters on ice to allow LLDP
traffic to reach the correct destination.
Jakub adds a retry mechanism for the AdminQ Read/Write SFF EEPROM call to
follow the hardware specification on ice.
Zilin Guan adds cleanup path to free XDP rings on failure in
ice_set_ringparam().
Michal bypasses firmware logging unroll in libie when it isn't supported.
Kohei Enju fixes iavf to take into account hardware MTU support when
setting max MTU values.
Vivek Behera fixes igb and igc triggering incorrect IRQs when Tx/Rx
queues do not share the same IRQ.
* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
igc: Fix trigger of incorrect irq in igc_xsk_wakeup function
igb: Fix trigger of incorrect irq in igb_xsk_wakeup
iavf: fix netdev->max_mtu to respect actual hardware limit
libie: don't unroll if fwlog isn't supported
ice: Fix memory leak in ice_set_ringparam()
ice: fix retry for AQ command 0x06EE
ice: reintroduce retry mechanism for indirect AQ
ice: fix adding AQ LLDP filter for VF
====================
Jakub Kicinski [Thu, 5 Mar 2026 02:21:15 +0000 (18:21 -0800)]
Merge branch 'mptcp-misc-fixes-for-v7-0-rc2'
Matthieu Baerts says:
====================
mptcp: misc fixes for v7.0-rc2
Here are various unrelated fixes:
- Patch 1: avoid bufferbloat in simult_flows selftest which can cause
instabilities. A fix for v5.10.
- Patches 2-3: reduce RM_ADDR lost by not sending it over the same
subflow as the one being removed, if possible. A fix for v5.13.
- Patches 4-5: avoid a WARN when using signal + subflow endpoints with a
subflow limit of 0, and removing such endpoints during an active
connection. A fix for v5.17.
====================