XDG_SESSION_EXTRA_DEVICE_ACCESS will now take a colon-separated list of
identifiers. For every identifier $ID, the session is granted access to all
devices tagged as "xaccess-$ID" in udev.
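A hypothetical example, assuming an identifier "camera" and a
video4linux device (the rule file name and identifier are made up for
illustration): a udev rule tags the device, and the session variable
then lists that identifier, possibly among others:

    # /etc/udev/rules.d/71-xaccess-camera.rules
    SUBSYSTEM=="video4linux", TAG+="xaccess-camera"

    XDG_SESSION_EXTRA_DEVICE_ACCESS=camera:scanner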
Yu Watanabe [Tue, 17 Feb 2026 13:20:38 +0000 (22:20 +0900)]
Sensor cleanup 1st pass (#40675)
This is a general cleanup of the sensors hwdb file, divided into
several commits, one per brand.
I have merged entries for devices that use the same matrices, cleaned
up some of the clearer dmi matches, and added an inline comment with
the device name where that is clearly the best way to display it.
My idea is to do further cleanup passes, but completing the dmi
matches will require more effort; I can do that given a little time
and some consensus on comment styling.
Regarding comment styling, I think we could follow two rules at the
same time: an inline comment when the dmi match is short and carries
no information beyond the model name, and a comment above the dmi
match when the match is too long or more information needs to be
added.
For safety, prefer the board product name, which always contains the
short name, over the system product name, which on a few models is a
very long string with the short name at the end.
The following models, added at the time of this commit, need a
leading wildcard when matching on pn: BR1100FKA, RC72LA and TP412UA
(see the sketch below).
I unmerged Q502LAB, Q551LB and Q551LN; the merged match covered many
more unreported models.
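A hypothetical illustration of the board-name-vs-product-name point,
in 60-sensor.hwdb syntax (the ACPI driver id, the dmi field layout and
the mount matrix values here are placeholders, not the real values for
this model):

    # ASUSTeK BR1100FKA, matched via the board product name ("rn"),
    # which carries the short name directly:
    sensor:modalias:acpi:BOSC0200*:dmi:*:svnASUSTeKCOMPUTERINC.:*:rnBR1100FKA:*
     ACCEL_MOUNT_MATRIX=0, -1, 0; -1, 0, 0; 0, 0, -1

    # Matching the system product name ("pn") instead needs a leading
    # wildcard, since the short name sits at the end of a longer string:
    sensor:modalias:acpi:BOSC0200*:dmi:*:svnASUSTeKCOMPUTERINC.:pn*BR1100FKA*:*
     ACCEL_MOUNT_MATRIX=0, -1, 0; -1, 0, 0; 0, 0, -1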
Chris Down [Tue, 17 Feb 2026 06:58:44 +0000 (14:58 +0800)]
oomd: Fix unnecessary delays during OOM kills with pending kills present
Let's say a user has two services with ManagedOOMMemoryPressure=kill,
perhaps a web server under system.slice and a batch job under
user.slice. Both exceed their pressure limits. On the previous timer
tick, oomd has already queued the web server's candidate for killing,
but the prekill hook has not yet responded, so the kill is still
pending.
In the code, monitor_memory_pressure_contexts_handler() iterates over
all pressure targets that have exceeded their limits. When it reaches
the web server target, it calls oomd_cgroup_kill_mark(), which returns 0
because that cgroup is already queued. The code treats this the same as
a successful new kill: it resets the 15 second delay timer and returns
from the function, exiting the loop.
This loop is handled by SET_FOREACH and the iteration order is
hash-dependent. As such, if the web server target happens to be
visited first, oomd never evaluates the batch job target at all.
The effect is twofold:
1. oomd stalls for 15 seconds despite not having initiated any new kill.
That can unnecessarily delay further action to stem increases in
memory pressure. The delay exists to let stale pressure counters
settle after a kill, but no kill has happened here.
2. It non-deterministically skips pressure targets that may have
unqueued candidates, dangerously allowing memory pressure to persist
for longer than it should.
Fix this by skipping cgroups that are already queued so the loop
proceeds to try other pressure targets. We should only delay when a new
kill mark is actually created.
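As a minimal sketch, the fixed loop shape looks roughly like this
(simplified; the helpers marked hypothetical and the exact
oomd_cgroup_kill_mark() signature are approximations, not the real
systemd code):

    OomdCGroupContext *t;

    SET_FOREACH(t, targets) {
            if (!pressure_limit_exceeded(t)) /* hypothetical helper */
                    continue;

            r = oomd_cgroup_kill_mark(t->path, /* recurse= */ true, /* dry_run= */ false);
            if (r == 0)
                    /* Already queued for a pending kill: do not reset the
                     * delay timer, give the other targets a chance. */
                    continue;
            if (r < 0)
                    return log_warning_errno(r, "Failed to mark %s for killing: %m", t->path);

            /* Only a genuinely new kill justifies the 15 second settle delay. */
            arm_post_kill_delay(); /* hypothetical helper */
            return 0;
    }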
Chris Down [Tue, 17 Feb 2026 06:30:16 +0000 (14:30 +0800)]
oomd: Fix silent failure to find bad cgroups when another cgroup dies
Consider a workload slice with several sibling cgroups. Imagine that one
of those cgroups is removed between the moment oomd enumerates the
directory and the moment it reads memory.oom.group. This is actually
relatively plausible under the high memory pressure conditions where
oomd is most needed.
In this case, the failed read prompts us to `return 0`, which exits the
entire enumeration loop in recursively_get_cgroup_context(). As a
result, all remaining sibling cgroups are silently dropped from the
candidate list for that monitoring cycle.
The effect is that oomd can fail to identify and kill the actual
offending cgroup, allowing memory pressure to persist until a subsequent
cycle where the race doesn't occur.
Fix this by instead proceeding to evaluate further sibling cgroups.
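A sketch of the fixed enumeration step, with names approximating
recursively_get_cgroup_context() (the acquire helper stands in for the
code that reads memory.oom.group and friends):

    r = oomd_cgroup_context_acquire(cg_path, &ctx);
    if (r < 0) {
            /* The cgroup may have vanished between readdir() and the
             * read; skip it and keep evaluating the remaining siblings
             * rather than aborting the whole loop with `return 0`. */
            log_debug_errno(r, "Failed to get context for %s, ignoring: %m", cg_path);
            continue;
    }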
Let's say a user has two services with ManagedOOMMemoryPressure=kill,
one for a web server under system.slice, and one for a batch job under
user.slice. The batch job is causing severe memory pressure, whereas the
web server's cgroup has no processes with significant pgscan activity.
In the code, monitor_memory_pressure_contexts_handler() iterates over
all pressure targets that have exceeded their limits. When
oomd_select_by_pgscan_rate() returns 0 (that is, no candidates) for a
target, we return from the entire SET_FOREACH loop instead of moving to
the next target. Since SET_FOREACH iteration order is hash-dependent, if
the web server target happens to be visited first, oomd will find no
kill candidates for it and exit the loop. The batch job target that is
actually slamming the machine will never even be evaluated, and can
continue to wreak havoc without any intervention.
The effect is that oomd non-deterministically and silently fails to kill
cgroups that it should actually kill, allowing memory pressure to
persist and dangerously build up on the machine.
The fix is simple: keep evaluating the remaining targets when one does
not match.
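The same pattern applies here; a sketch of the corrected branch (the
signature is approximated from the description above, not taken from
the actual code):

    r = oomd_select_by_pgscan_rate(candidates, &selected);
    if (r < 0)
            return log_warning_errno(r, "Failed to select by pgscan rate: %m");
    if (r == 0)
            /* No kill candidates under this target: move on to the next
             * target instead of returning out of the SET_FOREACH loop. */
            continue;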
These were introduced as part of the sd-executor worker
pool effort (#29566), which never landed because the
performance improvement was insignificant. Let's just remove
the unused helpers. If that work ever gets resurrected, they
can be restored from this commit pretty easily.
Yu Watanabe [Tue, 17 Feb 2026 05:53:46 +0000 (14:53 +0900)]
oomd: Fix Killed signal reason being lost (#40689)
Emitting "oom" doesn't mesh with the org.freedesktop.oom1.Manager
Killed() API contract, which defines "memory-used" and "memory-pressure"
as possible reasons. Consumers that key off the reason will thus either
lose policy attribution or reject the unknown value completely.
Plumb the reason through so it is visible to consumers.
Chris Down [Sun, 15 Feb 2026 17:42:51 +0000 (01:42 +0800)]
oomd: Fix Killed signal reason being lost
Emitting "oom" doesn't mesh with the org.freedesktop.oom1.Manager
Killed() API contract, which defines "memory-used" and "memory-pressure"
as possible reasons. Consumers that key off the reason will thus either
lose policy attribution or reject the unknown value completely.
Plumb the reason through so it is visible to consumers.
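A minimal sketch of the plumbed-through emission, assuming (this is an
assumption, not the actual code) that Killed() carries the cgroup path
and the reason string as its arguments:

    /* reason is "memory-used" or "memory-pressure", per the
     * org.freedesktop.oom1.Manager interface contract. */
    r = sd_bus_emit_signal(bus,
                           "/org/freedesktop/oom1",
                           "org.freedesktop.oom1.Manager",
                           "Killed",
                           "ss", cgroup_path, reason);
    if (r < 0)
            return log_warning_errno(r, "Failed to emit Killed signal: %m");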
Daan De Meyer [Mon, 16 Feb 2026 18:59:10 +0000 (19:59 +0100)]
nspawn-mount: Use setns() in wipe_fully_visible_api_fs()
namespace_enter() now does an is_our_namespace() check, which on older
kernels requires /proc, and /proc is no longer available after we call
do_wipe_fully_visible_api_fs() in wipe_fully_visible_api_fs().
Let's just call setns() instead as namespace_enter() is overkill to
enter a single namespace anyway.
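For a single namespace this boils down to one setns() call; a sketch,
assuming an already-open namespace fd and that the namespace in
question is the mount namespace:

    #include <sched.h>

    if (setns(mntns_fd, CLONE_NEWNS) < 0)
            return log_error_errno(errno, "Failed to enter mount namespace: %m");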
Daan De Meyer [Mon, 16 Feb 2026 14:42:35 +0000 (15:42 +0100)]
mkosi: Set CacheOnly=metadata for test images (#40699)
The default behavior is to sync repository metadata for every image
that does not have a cache, and we recently changed behavior to
invalidate all cached images whenever we decide the repository
metadata needs to be resynced.
In systemd we have two images that are not cached because they use
BaseTrees=, hence set CacheOnly=metadata to explicitly indicate that
these two images should never cause a repository metadata resync even
though they are not cached.
Daan De Meyer [Mon, 16 Feb 2026 12:28:22 +0000 (13:28 +0100)]
mkosi: Set CacheOnly=metadata for test images
The default behavior is to sync repository metadata for every image
that does not have a cache, and we recently changed behavior to
invalidate all cached images whenever we decide the repository
metadata needs to be resynced.
In systemd we have two images that are not cached because they use
BaseTrees=, hence set CacheOnly=metadata to explicitly indicate that
these two images should never cause a repository metadata resync even
though they are not cached.
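As a sketch, such an image's mkosi configuration would gain the
setting along these lines (the section placement may vary between
mkosi versions):

    [Build]
    CacheOnly=metadata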
* 66d51024b7 man: Update caching section
* 4eac60f168 Remove all cached images if repository metadata will be synced
* 025c6c0150 Move Incremental= to inherited settings in docs
* 427970d8fd Make MakeScriptsExecutable= a multiversal setting
* 53bd2da6fe Look at all CacheOnly= settings to determine if we need to sync metadata
* 114ae558ef config / qemu: add Console=headless