Yu Watanabe [Tue, 13 May 2025 14:02:13 +0000 (23:02 +0900)]
login,udev: avoid race between systemd-logind and systemd-udevd in setting ACLs
Previously, both udevd and logind modified the ACLs of a device node. Hence,
there existed a race like the following:
1. udevd reads an old state file,
2. logind updates the state file, and apply new ACLs,
3. udevd applies ACLs based on the old state file.
This makes logind no longer update ACLs itself, but instead trigger uevents
for the relevant devices so that the ACLs are updated by udevd.
Yu Watanabe [Tue, 13 May 2025 14:50:22 +0000 (23:50 +0900)]
login: do not call manager_process_seat_device() more than once per event
When udevd broadcasts an event for e.g. a graphics device with the master-of-seat
tag, previously manager_process_seat_device() was called twice for
the event.
With this commit, the function is called only once, even for an event for
such a device.
compress: deal with zstd decoder issues gracefully
If zstd frames are corrupted, the initial size returned for the current
frame might be wrong. Don't assert() on that, but handle it gracefully,
as EBADMSG.
logs-show: use memory_startswith() rather than startswith()
Let's be strict here: this data is conceptually not NUL terminated,
hence use memory_startswith() rather than startswith() (which implies
NUL termination). All other similar cases in logs-show.c got this right.
Fix the remaining three, too.
journal-upload-journal: handle partially written fields gracefully
With the more efficient sync semantics it's more likely that
journal-upload-journal will try to read a partially written message.
Previously we'd fail then. Let's instead treat this gracefully,
expecting that this is either the end or will be fixed shortly (and
we'll get notified via inotify about it and recheck).
journal-remote: destroy event sources before MHD context
The MHD context owns the fd we watch via our event source, so if we
destroy the context before the event source, the event source might still
reference an fd that is now invalid. Hence swap the order.
journald: make journal Varlink IPC accessible to unpriv clients
The Synchronize() function is just too useful for clients, as it lets us
make "systemd-run -v --user" actually useful. Hence let's make the
socket accessible without privileges, but deny most method calls, except
for the Synchronize() call.
Previously, if the Synchronize() varlink call is issued we'd wait for
journald to become idle before returning success. That is problematic
however: on a busy system journald might never become idle. Hence, let's
beef up the logic to ensure that we do not wait longer than necessary:
i.e. we make sure we process any data enqueued before the sync request
was submitted, but not more.
Implementing this isn't trivial unfortunately. To deal with this
reasonably, we need to determine somehow for incoming log messages
whether they are from before or after the point in time when the sync
request was received.
For AF_UNIX/SOCK_DGRAM we can use SO_TIMESTAMP to directly compare
timestamps of incoming messages with the timestamp of the sync request
(unfortunately only CLOCK_REALTIME).
For AF_UNIX/SOCK_STREAM we can call SIOCINQ at the moment we initiate
the sync, and then continue processing incoming traffic, counting down
the bytes until the SIOCINQ returned bytes have been processed. All
further data must have been enqueued later hence.
With those two mechanisms in place we can relatively reliably
synchronize the journal.
This also adds a boolean argument "offline" to the Synchronize() call,
which controls whether to offline the journal after processing the
pending messages. It defaults to true, for compatibility with the status quo
ante. But in most cases the offlining is probably not necessary and it is
cheaper to skip, hence allow not doing it.
journald: downgrade event source priority of kmsg to same as native/syslog inputs
So far we scheduled kmsg events at higher priority than native/syslog
ones. But that's quite problematic, since it means that kmsg events can
drown out native/syslog log events. And this actually shows up in some
CI tests.
Address that by scheduling all three sources at the same priority, so
that the earlier event is always processed first, regardless of which
protocol is used.
journalctl: optionally delay --follow exit for a journal synchronization
Let's optionally issue a Varlink Synchronize() call in --follow mode
when asked to terminate. This is useful so that the tool is guaranteed
to have processed all messages generated before the request to exit,
before it actually exits.
We want this in "systemd-run -v" in particular, so that we can be sure
we are not missing any log output from the invoked service before it
exits.
Allow callers to synchronize on the point in time where the journal file
watches are fully established, in --follow mode.
Tools can invoke journalctl using this, knowing that any log message
happening after the READY=1 is definitely going to be processed by the
journalctl invocation.
sd-netlink: allow configuration of flags parameter when creating message object
We soon want to add support for sock_diag(7) netlink sockets. Those reuse
the same message type codes for request and response, but with different
message formats. Hence we need to look at NLM_F_REQUEST to determine
which message policy to apply, so it is essential to know the flags
parameter right away when creating a message, since we cannot do early
validation otherwise.
This only adds support for setting the flags value at the moment the
message object is created; it does not otherwise add sock_diag(7)
support, which is added in a later commit.
This also corrects the flag for synthetic NLMSG_ERROR messages which
should not have the NLM_F_REQUEST flag set (since they are responses,
not requests).
Let's extend pid1's varlink interface and add a Describe method to get
the global Manager object information as a JSON object
(io.systemd.Manager.Describe).
Because the new varlink interface should be available on both the user
managers and the system manager, we also make the necessary changes to
expose a varlink server on user managers.
This PR is first part of https://github.com/systemd/systemd/pull/33965
with minimal changes to address feedback.
Add support for a sysfail boot entry. Sysfail boot entries can be used
to optionally tweak the automatic selection order in case a failure
state of the system in some form is detected (boot firmware failure
etc.).
The EFI variable `LoaderEntrySysFail` contains the sysfail boot loader
entry to use. It can be set using bootctl:
```
$ bootctl set-sysfail sysfail.conf
```
The `LoaderEntrySysFail` EFI variable will be unset automatically
during the next boot by `systemd-boot-clear-sysfail.service` if no system
failure occurred; otherwise it is kept as-is and the system failure
reason is saved to the `LoaderSysFailReason` EFI variable.
`sysfail_check()` is expected to be extended to support possible conditions
under which the sysfail ("recovery") boot entry should be booted.
Also add support for using a sysfail boot entry in case of UEFI firmware
capsule update failure [1]. The status of a firmware update is obtained
from the EFI System Resource Table (ESRT), which provides an optional
mechanism for identifying device and system firmware resources for the
purposes of targeting firmware updates to those resources.
The current implementation uses the value of the LastAttemptStatus field from
ESRT, which describes the result of the last firmware update attempt for
the firmware resource entry. The field is updated each time an
`UpdateCapsule()` is attempted for an ESRT entry and is preserved across
reboots (non-volatile).
This can be used in setups with support for A/B OTA updates, where
the boot firmware and Linux/RootFS might be updated synchronously.
The check is activated by adding "sysfail-firmware-upd" to loader.conf.
This breaks the rule stated at the beginning of help_sudo_mode():
> NB: Let's not go overboard with short options: we try to keep a modicum of compatibility with
> sudo's short switches, hence please do not introduce new short switches unless they have a roughly
> equivalent purpose on sudo. Use long options for everything private to run0.
Mike Yuan [Mon, 12 May 2025 14:10:03 +0000 (16:10 +0200)]
core: accept "|" ExecStart= prefix to spawn target user's shell; teach run0 about the new logic (#37071)
I've always been reluctant to invoke the current user's shell in another
user's context, hence was fully grounded in `sudo -i`. With this bit in
place `run0` will finally be feature-complete on my side ;-)
Yu Watanabe [Mon, 12 May 2025 14:04:49 +0000 (23:04 +0900)]
udev: sort received events by their seqnum (#37314)
The kernel sometimes sends uevents in a random order, but previously the
received events were not sorted by their seqnum, while we determine which
event is ready for processing under the assumption that queued events
are sorted by their seqnum. Let's sort the received events before queueing
them, so that events are processed in the correct order.
Igor Opaniuk [Thu, 23 Jan 2025 12:31:04 +0000 (13:31 +0100)]
sd-boot: use sysfail entry for UEFI firmware update failure
Add support for using a sysfail boot entry in case of UEFI firmware
capsule update failure [1]. The status of a firmware update is obtained from
the EFI System Resource Table (ESRT), which provides an optional mechanism
for identifying device and system firmware resources for the purposes of
targeting firmware updates to those resources.
The current implementation uses the value of the LastAttemptStatus field from
ESRT, which describes the result of the last firmware update attempt for
the firmware resource entry. The field is updated each time an
UpdateCapsule() is attempted for an ESRT entry and is preserved across
reboots (non-volatile).
This can be used in setups with support for A/B OTA updates, where
the boot firmware and Linux/RootFS might be updated synchronously.
[1] https://uefi.org/specs/UEFI/2.10/23_Firmware_Update_and_Reporting.html
Signed-off-by: Igor Opaniuk <igor.opaniuk@foundries.io>
Igor Opaniuk [Mon, 24 Mar 2025 14:33:16 +0000 (15:33 +0100)]
bootctl: configure a sysfail entry
You can configure the sysfail boot entry using the bootctl command:
$ bootctl set-sysfail sysfail.conf
The value will be stored in the `LoaderEntrySysFail` EFI variable.
The `LoaderEntrySysFail` EFI variable will be unset automatically
during the next boot by `systemd-boot-clear-sysfail.service` if no
system failure occurred; otherwise it is kept as-is and the system
failure reason is saved to the `LoaderSysFailReason` EFI variable.
Signed-off-by: Igor Opaniuk <igor.opaniuk@foundries.io>
Igor Opaniuk [Mon, 24 Mar 2025 14:30:49 +0000 (15:30 +0100)]
sd-boot: add support for a sysfail entry
Add support for a sysfail boot entry. Sysfail boot entries can be
used to optionally tweak the automatic selection order in case a
failure state of the system in some form is detected (boot firmware
failure etc.).
The EFI variable `LoaderEntrySysFail` holds the boot loader entry to
be used in the event of a system failure. If a failure occurs, the reason
will be stored in the `LoaderSysFailReason` EFI variable.
sysfail_check() is expected to be extended to support possible
conditions under which the sysfail ("recovery") boot entry should be booted.
Signed-off-by: Igor Opaniuk <igor.opaniuk@foundries.io>
This mostly makes sure we do something reasonable when our tool is
called from a boot of an entry that was already marked as definitely
"bad" on a previous boot. We can return such an entry into a "good"
state, but we cannot return it into an "indeterminate" state, because
the status quo ante is already known.
Daan De Meyer [Sat, 10 May 2025 20:19:22 +0000 (22:19 +0200)]
meson: Don't create static library target unless option is enabled
While we don't build these by default, all the source files still
get added to the compile_commands.json file by meson, which can confuse
tools as they might end up analyzing the source files twice or analyzing
the wrong one.
To avoid this issue, only define the static library target if the
corresponding option is enabled.
Daan De Meyer [Fri, 9 May 2025 18:48:51 +0000 (20:48 +0200)]
meson: Remove unneeded include directories
meson by default adds the current source and build directory as include
directories. Because we structure our meson code by gathering a giant dict
of everything we want to do and then doing all the actual target generation
in the top level meson.build, this behavior does not make sense at all because
we end up adding the top level repository directory as an include directory
which is never what we want.
At the same time, let's also make sure the top level directory of the build
directory is not an include directory, by moving the version.h generation
into the src/version subdirectory and then adding the src/version subdirectory
of the build directory as an include directory instead of the top level
repository directory.
Making this change means that language servers such as clangd can no longer
get confused when they automatically insert an #include line, which
previously could end up as "src/basic/fs-util.h" instead of "fs-util.h".
Daan De Meyer [Wed, 7 May 2025 20:55:15 +0000 (22:55 +0200)]
meson: Extract objects instead of creating intermediate static libraries
Currently, when we want to add unit tests for code that is compiled into
an executable, we either compile the code at least twice (once for the
executable, and once for each test that uses it) or we create a static
library which is then used by both the executable and all the tests.
Both of these options are not ideal: compiling source files more than
once slows down the build for no reason, and creating the intermediate
static libraries takes a lot of boilerplate.
Instead, let's use the extract_objects() method that meson exposes on
build targets. This allows us to extract the objects corresponding to
specific source files and use them in other executables. Because we
define all executables upfront into a dictionary, we integrate this into
the dictionary approach by adding two new fields:
- 'extract' takes a list of files for which objects should be extracted.
The extracted objects are stored in a dict keyed by the executable name
from which they were extracted.
- 'objects' takes the name of an executable from which the extracted
objects should be added to the current executable.
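As a rough sketch of the underlying meson mechanism the two dict fields build on (file and target names invented):

```meson
# Build the main executable, then reuse the compiled objects of one of its
# source files in a test, instead of recompiling the file or wrapping it
# in an intermediate static library.
main_exe = executable('mytool', ['main.c', 'parse.c'])
parse_objects = main_exe.extract_objects('parse.c')
test_exe = executable('test-parse', 'test-parse.c', objects : parse_objects)
test('parse', test_exe)
```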
One side effect of this approach is that we can't build test executables
anymore without building the main executable, so we stop building test
executables unless we're also building the main executable. This allows
us to switch to using subdir_done() in all of these subdirectories to skip
parsing them if the corresponding component is disabled.
These changes get me down from 2439 => 2403 ninja targets on a full rebuild
from scratch.
Daan De Meyer [Sun, 11 May 2025 07:42:28 +0000 (09:42 +0200)]
meson: Stop doing nested build when fuzzers are enabled
Currently, when fuzzers are enabled, we run meson from within meson
to build the fuzzer executables with sanitizers. The idea is that
we can build the fuzzers with different kinds of sanitizers
independently from the main build.
The issue with this setup is that we don't actually make use of it.
We only build the fuzzers with one set of sanitizers (address,undefined)
so we're adding a bunch of extra complexity without any benefit as we
can just set up the top level meson build with these sanitizers and get
the same result.
The other issue with this setup is that we don't pass on all the options
passed to the top level meson build to the nested meson build. The only things
we pass on are extra compiler arguments and the value of the auto_features
option, but none of the individual feature options, if overridden, are passed
on, which can lead to very hard-to-debug issues, as an option enabled in the
top level build is not enabled in the nested build.
Since we're not getting anything useful out of this setup, let's simplify
and get rid of the nested meson build. Instead, sanitizers should be enabled
for the top level meson.build. This previously didn't work, as we were overriding
the sanitizers passed to the meson build with the fuzzer sanitizer, so we
fix that as well by making sure we combine the fuzzer sanitizer with the ones
passed in by the user.
We also drop support for looking up libFuzzer as a separate library, as
it has shipped built into clang since clang 6.0, so we can assume
that -fsanitize=fuzzer is available.
To make sure the fuzzing tests still always run, we now enable the
fuzz-tests option by default (without instrumentation unless one of
llvm-fuzz or oss-fuzz is enabled).
bless-boot: never try to rename an entry file onto itself
If we are booting a known bad entry, and we are asked to mark it as bad,
we so far would end up renaming the entry onto itself, which resulted in
EEXIST and is a really borked operation. Let's catch that case and handle
it explicitly.
bless-boot: in "status" output report bad state from prev boot as "dirty"
The bless-boot logic currently assumes that if the name of the boot
entry reported via the EFI var matches the name on disk, the state
is "indeterminate", as we haven't counted down the counter (to mark it
bad) or dropped the counter (to mark it good) yet. But there's one corner
case we so far didn't care about: what if the entry already reached 0
tries left in a previous boot, i.e. the user invoked an entry already
known to be completely bad? In that case we'd still return
"indeterminate", but that's kind of misleading, because we *know* the
currently booted entry is bad; we just inherited that fact from a
previous boot instead of determining it on the current one.
Hence, let's introduce a new status we report in this case, that is both
distinct from "bad" (which indicates whether the *current* boot is bad)
and "indirect" (which indicates the current boot has not been decided on
yet): "dirty".
Why "dirty"? To mirror "clean" which we already have, which indicates a
boot already marked good in a previous boot, which is a relatively
symmetric state.
This is a really weak API break of sorts, because it introduces a new
state we never reported before. But I think it's fine, because the old
reporting was just wrong, and in a way this is a bugfix: we now
correctly report something for which we previously returned kind of
rubbish (though systematic rubbish).
Valentin Hăloiu [Sun, 11 May 2025 00:33:28 +0000 (01:33 +0100)]
Add netdev files associated with link to networkd JSON output (#37402)
`networkctl status LINK` gained support for showing the netdev
configuration files associated with a link in c9837c17d57d7e0fd9d3e2a4f2693f389ca76c24, but these netdev files were
never added to the JSON output.
This pull-request fixes that by adding two new fields (`NetDevFile` and
`NetDevFileDropins`) to the `networkctl` (and `D-Bus`) JSON output.
Yu Watanabe [Thu, 1 May 2025 11:58:18 +0000 (20:58 +0900)]
core: disable mounting disconnected private tmpfs on /var/tmp/ when DefaultDependencies=no
If DefaultDependencies=no, /var/ may not be mounted yet when the service
is being started. Previously, In such case, if the service has
PrivateTmp=disconnected, the service manager created /var/tmp/ on the
root filesystem and mounted the disconnected private tmpfs there. That
poluted the root filesystem and disturbed gpt-auto-generator on next
boot, as /var/ would not be empty anymore. See issue #37258.
This changes PrivateTmp=disconnected as follows:
- If DefaultDependencies=no and RootDirectory=/RootImage= are not set,
then a private tmpfs is mounted _only_ on /tmp/, and the $TMPDIR=/tmp
environment variable is set to suggest the service use /tmp/.
- If DefaultDependencies=yes and RootDirectory=/RootImage= are not set,
then RequiresMountsFor=/var/ is implied; that is typically redundant,
but this way we can safely mount /var/tmp/.
- Otherwise, i.e. when one of RootDirectory=/RootImage= is set, behave
the same as before, as the private root filesystem for the
service is explicitly prepared by the service manager, and we can
safely mount a private tmpfs on /var/tmp/ without any extra
dependencies.
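As a sketch (hypothetical unit), an early-boot service hitting the first case above might look like this:

```ini
# example.service (hypothetical): DefaultDependencies=no and no
# RootDirectory=/RootImage=, so only /tmp/ gets a private tmpfs,
# and $TMPDIR=/tmp is exported to the service.
[Unit]
Description=Early-boot service using a disconnected private tmpfs
DefaultDependencies=no

[Service]
ExecStart=/usr/bin/true
PrivateTmp=disconnected
```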
Yu Watanabe [Sat, 10 May 2025 18:22:04 +0000 (03:22 +0900)]
core/mount: drop unnecessary dependency generations
When the unit is new, then mount_setup_new_unit() adds the unit to the
load queue, and the same dependencies will be anyway added.
When the unit already exists but previously failed to be loaded, then
mount_setup_existing_unit() also adds the unit to the load queue.
Hence it is no longer necessary to regenerate dependencies here.
So, we need to regenerate dependencies only when things changed and
the unit has already been loaded.
Unfortunately, the kernel may send events in a random order:
```
[ 25.769624] systemd-udevd[194]: sdi7: Device is queued (SEQNUM=2843, ACTION=add)
[ 25.769893] systemd-udevd[194]: sda5: Device is queued (SEQNUM=2842, ACTION=add)
[ 25.770517] systemd-udevd[194]: sdi8: Device is queued (SEQNUM=2844, ACTION=add)
```
As you can see, udevd receives the event with SEQNUM=2843 earlier than
the one with SEQNUM=2842.
Let's sort the queued events, as our logic for determining which event
is ready to be processed assumes that queued events are sorted.
See event_build_dependencies().
Also, refuse to queue an event if another event with the same seqnum is
already queued.
Yu Watanabe [Tue, 6 May 2025 16:09:09 +0000 (01:09 +0900)]
udev: refactoring for managing events for locked block devices
Previously, when an event for a block device was queued, or an inotify
event for a block device was triggered, all queued events were checked by
event_queue_assume_block_device_unlocked(), and its cost could be huge
in the early boot stage.
This makes locked events managed in a separate prioq and hashmap,
to reduce the cost of event_queue_assume_block_device_unlocked(),
which is now renamed to manager_requeue_locked_events_by_device().
This also changes the clockid of the timer events for requeueing and
retry timeout from CLOCK_BOOTTIME to CLOCK_MONOTONIC. Otherwise,
if the system suspends while an event is locked, the event may be
requeued immediately after coming back from suspend and time out.