Ivan Shapovalov [Fri, 20 Mar 2026 15:45:07 +0000 (16:45 +0100)]
tmpfiles: do not mandate `STATX_ATIME` and `STATX_MTIME`
Timestamps are not guaranteed to be set by `statx()`, and their presence
should not be asserted as a proxy to judge the kernel version. In
particular, `STATX_ATIME` is omitted from the return when querying a
file on a `noatime` superblock, causing spurious errors from tmpfiles.
Correctness analysis
====================
The timestamps produced by the `statx()` call in `opendir_and_stat()`
are only ever used once, in `clean_item_instance()` (lines 3148-3149)
as inputs to `dir_cleanup()`. Convert absent timestamps into
`NSEC_INFINITY` as per the previous commit.
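The shape of the fix can be sketched as a minimal standalone helper (not systemd's actual code; NSEC_INFINITY and the STATX_* bits here mirror their real definitions to keep the sketch self-contained):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t nsec_t;
#define NSEC_INFINITY ((nsec_t) UINT64_MAX)

/* Bits as defined in <linux/stat.h>, duplicated here so the sketch
 * compiles standalone. */
#define STATX_ATIME 0x00000020U
#define STATX_MTIME 0x00000040U

/* Only trust a statx() timestamp if the kernel actually set the
 * corresponding bit in stx_mask; otherwise fall back to NSEC_INFINITY. */
static nsec_t statx_time_or_infinity(uint32_t stx_mask, uint32_t bit,
                                     int64_t tv_sec, uint32_t tv_nsec) {
        if (!(stx_mask & bit))
                return NSEC_INFINITY; /* absent, e.g. STATX_ATIME on a noatime mount */
        return (nsec_t) tv_sec * 1000000000ULL + tv_nsec;
}
```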
Ivan Shapovalov [Fri, 20 Mar 2026 15:36:44 +0000 (16:36 +0100)]
tmpfiles: use `NSEC_INFINITY` consistently in dir_cleanup()
Correctness analysis
====================
The *time_nsec variables are used in a total of two or three places:
- twice in needs_cleanup() (lines 788, 839)
- once in a recursive dir_cleanup() (line 764) as self_*time_nsec
In needs_cleanup(), all passed timestamps are guarded against
NSEC_INFINITY (this does not fix any real bugs as a 0 value is also
older than any cutoff point and thus would not cause any deletions).
Recursively in dir_cleanup(), the self_* variables are used to reset
the toplevel directory utimes, where they are superficially compared
against NSEC_INFINITY as a guard, but subsequently mishandled in the
case when only one of the times is NSEC_INFINITY: in this case, it will
be a) logged as a bogus value and b) passed through directly to
timespec_store_nsec(), which does special-case it, but in a way that
is invalid for futimens(). This is further fixed up by explicitly
mapping NSEC_INFINITY to TIMESPEC_OMIT.
This constitutes a bugfix in theory, as a ~STATX_ATIME return from
statx() would previously have caused the corresponding utime to be
reset to the 0 epoch rather than being omitted from being set. However,
in a directory with ~STATX_ATIME, attempts to set atime would likely
be ignored as well.
Mostly this is a self-consistency fix that establishes that
dir_cleanup() should be called with NSEC_INFINITY in place of
absent timestamps.
vmspawn: use machine name in runtime directory path
Replace the random hex suffix in the runtime directory with the machine
name, changing the layout from /run/systemd/vmspawn.<random> to
/run/systemd/vmspawn/<machine-name>/.
This makes runtime directories machine-discoverable from the filesystem
and groups all vmspawn instances under a shared parent directory, similar
to how nspawn uses /run/systemd/nspawn/.
Use runtime_directory_generic() instead of runtime_directory() since
vmspawn is not a service with RuntimeDirectory= set and the
$RUNTIME_DIRECTORY check in the latter never succeeds. The directory is
always created by vmspawn itself and cleaned up via
rm_rf_physical_and_freep on exit. The parent vmspawn/ directory is
intentionally left behind as a shared namespace.
Signed-off-by: Christian Brauner (Amutable) <brauner@kernel.org>
sd-json: fix sd_json_variant_unsigned() dispatching to wrong accessor for references
sd_json_variant_unsigned() incorrectly calls sd_json_variant_integer()
for reference-type variants instead of recursing to itself. This silently
returns 0 for unsigned values in the range INT64_MAX+1 through
UINT64_MAX, since sd_json_variant_integer() cannot represent them.
The sibling functions sd_json_variant_integer() and
sd_json_variant_real() correctly recurse to themselves.
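A toy model of the recursion pattern (names are illustrative, not sd-json's internals): a "reference" variant wraps another variant, and the unsigned accessor must recurse to itself for references, otherwise values above INT64_MAX are lost through the signed accessor.

```c
#include <assert.h>
#include <stdint.h>

typedef struct Variant {
        int is_ref;
        const struct Variant *ref;
        uint64_t value;
} Variant;

static uint64_t variant_unsigned(const Variant *v) {
        if (v->is_ref)
                /* Fixed: recurse to self, not to the signed accessor,
                 * which cannot represent values above INT64_MAX. */
                return variant_unsigned(v->ref);
        return v->value;
}
```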
Signed-off-by: Christian Brauner (Amutable) <brauner@kernel.org>
ssh-proxy: fix use-after-free of borrowed varlink reply reference
sd_varlink_call_full() returns borrowed references into the varlink
connection's receive buffer (v->current). fetch_machine() stored this
borrowed reference with _cleanup_(sd_json_variant_unrefp), which would
unref it on error paths -- potentially freeing the parent object while
the varlink connection still owns it. On success, TAKE_PTR passed the
raw borrowed pointer to the caller, but the varlink connection (and its
receive buffer) is freed when fetch_machine returns, leaving the caller
with a dangling pointer.
Fix by removing the cleanup attribute (the reference is borrowed, not
owned) and taking a real ref via sd_json_variant_ref() before returning
to the caller, so the data survives the varlink connection's cleanup.
Signed-off-by: Christian Brauner (Amutable) <brauner@kernel.org>
shared: introduce MachineRegistrationContext to track bus and registration state
Bundle scope, buses, and registration success booleans into a
MachineRegistrationContext struct. This eliminates the reterr_registered_system and
reterr_registered_user output parameters from
register_machine_with_fallback_and_log() and the corresponding input
parameters from unregister_machine_with_fallback_and_log().
The struct carries state from registration to unregistration so the
caller no longer needs to manually thread individual booleans between
the two calls.
register_machine_with_fallback_and_log() goes from 7 to 3 parameters,
unregister_machine_with_fallback_and_log() goes from 5 to 2.
Signed-off-by: Christian Brauner (Amutable) <brauner@kernel.org>
shared: introduce MachineRegistration struct for machine registration
Replace the long positional parameter lists in register_machine() and
register_machine_with_fallback_and_log() with a MachineRegistration
struct that bundles all machine-describing fields.
This reduces register_machine() from 13 parameters to 3 and
register_machine_with_fallback_and_log() from 17 parameters to 7.
Callers now use designated initializers, which makes omitted fields
(zero/NULL/false) implicit and the code much more readable.
Field names are aligned with the existing Machine struct in machine.h
(id, root_directory, vsock_cid, ssh_address, ssh_private_key_path).
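The parameter-struct pattern can be sketched as follows (field set and function body are illustrative stand-ins, not the real registration logic):

```c
#include <assert.h>
#include <stddef.h>

typedef struct MachineRegistration {
        const char *name;
        const char *id;
        const char *root_directory;
        unsigned vsock_cid;
        const char *ssh_address;
        const char *ssh_private_key_path;
} MachineRegistration;

static int register_machine(const MachineRegistration *m) {
        if (!m || !m->name)
                return -1;
        return 0; /* stand-in for the real registration logic */
}

/* Usage with a designated initializer; every field not mentioned is
 * implicitly zero/NULL/false:
 *   register_machine(&(MachineRegistration) { .name = "mymachine", .vsock_cid = 3 });
 */
```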
Signed-off-by: Christian Brauner (Amutable) <brauner@kernel.org>
shared: document allocateUnit limitation on D-Bus fallback path
The D-Bus registration methods (RegisterMachineEx, RegisterMachineWithNetwork)
do not support the allocateUnit feature that the varlink path provides.
When varlink is unavailable and registration falls back to D-Bus, machined
discovers the caller's existing cgroup unit instead of creating a dedicated
scope. Callers that skip client-side scope allocation (relying on the
server to do it via allocateUnit) will end up without a dedicated scope
on the D-Bus fallback path.
Document this limitation at the fallback site so callers are aware.
Signed-off-by: Christian Brauner (Amutable) <brauner@kernel.org>
vmspawn: only open runtime bus when needed for registration or scope allocation
The runtime bus (user bus in user scope, system bus in system scope) is
only needed for scope allocation (!arg_keep_unit) or machine registration
(arg_register != 0). When both are disabled, the bus was still opened
unconditionally, which caused unnecessary failures when the user bus is
unavailable.
Gate the runtime bus opening on the same condition nspawn already uses.
Signed-off-by: Christian Brauner (Amutable) <brauner@kernel.org>
Daan De Meyer [Sun, 29 Mar 2026 11:10:42 +0000 (11:10 +0000)]
nspawn: rename --user= to --uid= and repurpose --user/--system for runtime scope
Rename nspawn's --user=NAME option to --uid=NAME for selecting the
container user. The -u short option is preserved. --user=NAME and
--user NAME are still accepted but emit a deprecation warning. A
pre-parsing step stitches the space-separated --user NAME form into
--user=NAME before getopt sees it, preserving backwards compatibility
despite --user now being an optional_argument.
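The stitching step can be sketched as a small helper (a standalone sketch, not nspawn's actual code): getopt only accepts "--user=NAME" for an optional_argument option, so a separate "--user NAME" pair has to be joined before parsing.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Join "opt next" into "opt=next". Returns a newly allocated string, or
 * NULL when the following argv entry does not look like a value, so that
 * e.g. "--user --system" is left alone. */
static char *stitch_optional_arg(const char *opt, const char *next) {
        if (!next || next[0] == '-')
                return NULL;
        size_t n = strlen(opt) + 1 + strlen(next) + 1;
        char *joined = malloc(n);
        if (joined)
                snprintf(joined, n, "%s=%s", opt, next);
        return joined;
}
```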
Repurpose --user (without argument) and --system as standalone
switches for selecting the runtime scope (user vs system service
manager).
Replace all uses of the arg_privileged boolean with
arg_runtime_scope comparisons throughout nspawn. The default scope
is auto-detected from the effective UID.
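The auto-detection boils down to a UID check; a sketch (the enum is modeled on systemd's RuntimeScope, not copied from it):

```c
#include <assert.h>
#include <unistd.h>

typedef enum RuntimeScope {
        RUNTIME_SCOPE_SYSTEM,
        RUNTIME_SCOPE_USER,
} RuntimeScope;

static RuntimeScope detect_default_scope(void) {
        /* Running as root implies the system service manager; anything
         * else defaults to the per-user one. */
        return geteuid() == 0 ? RUNTIME_SCOPE_SYSTEM : RUNTIME_SCOPE_USER;
}
```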
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
Daan De Meyer [Sun, 29 Mar 2026 18:22:40 +0000 (18:22 +0000)]
shared: move machine registration to shared machine-register.{c,h}
Move register_machine() and unregister_machine() from
vmspawn-register.{c,h} into shared machine-register.{c,h} so both
nspawn and vmspawn can use the same implementation.
The unified register_machine() uses varlink first (for richer
features like SSH support and unit allocation) with a D-Bus
RegisterMachineWithNetwork fallback for older machined. The
interface adds a class parameter ("vm" or "container") and
local_ifindex for nspawn's network interface support.
The unified unregister_machine() similarly tries varlink first
(io.systemd.Machine.Unregister) before falling back to D-Bus.
Both register_machine() and unregister_machine() only log at debug
level internally, leaving error/notice logging to callers.
Add register_machine_with_fallback() which tries system and/or user
scope registration based on a RuntimeScope parameter
(_RUNTIME_SCOPE_INVALID for both), and
unregister_machine_with_fallback() as its counterpart. Both use
RET_GATHER() to collect errors from each scope.
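The error-gathering pattern can be sketched with a RET_GATHER-style macro (modeled on systemd's, not copied from it): every scope is still attempted, and the first error seen wins.

```c
#include <assert.h>

#define RET_GATHER(ret, expr)                    \
        do {                                     \
                int _e = (expr);                 \
                if ((ret) >= 0 && _e < 0)        \
                        (ret) = _e;              \
        } while (0)

/* Both scopes are tried even if the first one fails; the first error is
 * the one reported. */
static int unregister_both_scopes(int system_result, int user_result) {
        int r = 0;
        RET_GATHER(r, system_result);
        RET_GATHER(r, user_result);
        return r;
}
```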
Make --register= a tristate (yes/no/auto) defaulting to auto. When
set to auto, registration failures are logged at notice level and
ignored. When set to yes, failures are fatal.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
machined: skip leader ownership check for user scope
When registering a machine, machined verifies that the leader process
is owned by the calling user via process_is_owned_by_uid(). This
check fails for user scope machined when the leader is inside a user
namespace: after the leader calls setns(CLONE_NEWUSER), it becomes
non-dumpable, and the subsequent ptrace_may_access() check in the
kernel denies access to the process's user namespace, since the
calling user lacks CAP_SYS_PTRACE in the mm's user namespace (the
host namespace), even though the user owns the child user namespace.
Skip this check when running in user scope. For system scope, the
check is important because multiple users share the same machined
instance, so one user must not be able to claim another user's process
as a machine leader. For user scope this is unnecessary: the varlink
socket lives under $XDG_RUNTIME_DIR (mode 0700), so only the owning
user can connect, and the user machined instance can only perform
operations bounded by that user's own privileges. Registering a
foreign PID does not escalate capabilities.
vmspawn: Redirect QEMU's stdin/stdout/stderr to the PTY
When a PTY is allocated for the console, QEMU's own stdio file
descriptors were still inherited directly from vmspawn, meaning any
output QEMU writes to stdout/stderr (e.g. warnings) would bypass the
PTY forwarder and go straight to the terminal. Similarly, QEMU could
read directly from the terminal's stdin.
Fix this by opening the PTY slave side and passing it as stdio_fds to
the fork call with FORK_REARRANGE_STDIO, so that all of QEMU's I/O
goes through the PTY and is properly forwarded.
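The slave-side setup can be sketched as follows (a standalone sketch, assuming the PTY master is already allocated; the caller would then pass the returned fd three times as the child's stdio, which is what handing it to the fork call as stdio_fds with FORK_REARRANGE_STDIO achieves):

```c
#define _XOPEN_SOURCE 600
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Open the slave side of an already-allocated PTY master, so the child's
 * stdin/stdout/stderr can all be pointed at the PTY forwarder. */
static int open_pty_slave(int master) {
        if (grantpt(master) < 0 || unlockpt(master) < 0)
                return -1;
        const char *slave = ptsname(master);
        if (!slave)
                return -1;
        return open(slave, O_RDWR | O_NOCTTY);
}
```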
vmspawn: Use ~ instead of ! as negation prefix for --firmware-features=
Switch the negation character for firmware feature exclusion from
"!" to "~" to be consistent with other systemd options that support
negation such as SystemCallFilter=.
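The prefix handling amounts to a one-character check; a sketch (the feature name in the example is illustrative):

```c
#include <assert.h>
#include <string.h>

/* A leading '~' flips the entry from "enable this feature" to "exclude
 * this feature", in the style of SystemCallFilter=. */
static const char *firmware_feature_name(const char *entry, int *negate) {
        *negate = entry[0] == '~';
        return entry + (*negate ? 1 : 0);
}
```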
vmspawn: Add comment explaining substring match in firmware_data_matches_machine()
The machine types in QEMU firmware descriptions are glob patterns
like "pc-q35-*", so we use strstr() substring matching to check if
our machine type is covered by a given firmware entry.
There's no way to configure the log level for swtpm_setup, so pipe
its logfile (which defaults to stderr) to /dev/null unless debug
logging is enabled.
swtpm: gracefully fall back when --print-profiles output is not JSON
Older swtpm versions print --help output instead of JSON when
swtpm_setup --print-profiles is invoked. Previously, the JSON parse
failure was treated as fatal, preventing swtpm manufacture entirely on
these older versions.
Extract profile detection into a separate swtpm_find_best_profile()
helper and treat JSON parse failure as a graceful fallback: log a
notice and continue without a profile, same as when no builtin profiles
are found.
udev: also trigger loop device for boot disk when partition scanning is unsupported (#41509)
Previously, probe_gpt_sector_size_mismatch() would bail out early when
the GPT sector size matched the device sector size. However, some
devices (e.g. certain CD-ROM drives) do not support kernel partition
scanning even when sector sizes match. In that case, the kernel still
cannot parse the partition table, and we need to set up a loop device to
expose the partitions — just as we do for the sector size mismatch case.
Check blockdev_partscan_enabled() when sector sizes match, and only skip
the boot partition check if partition scanning is actually supported.
Also rename the function, udev property, and log messages to reflect the
broader scope:
Per UEFI specification §13.3.2, El Torito partition discovery applies to
any block device, not just optical media. Rename
disk_get_part_uuid_cdrom() to disk_get_part_uuid_eltorito() and update
all log messages and comments to say "El Torito" instead of "CDROM" to
reflect this.
udev: also trigger loop device for boot disk when partition scanning is unsupported
Previously, probe_gpt_sector_size_mismatch() would bail out early when
the GPT sector size matched the device sector size. However, some
devices (e.g. certain CD-ROM drives) do not support kernel partition
scanning even when sector sizes match. In that case, the kernel still
cannot parse the partition table, and we need to set up a loop device to
expose the partitions — just as we do for the sector size mismatch case.
Check blockdev_partscan_enabled() when sector sizes match, and only skip
the boot partition check if partition scanning is actually supported.
Also rename the function, udev property, and log messages to reflect the
broader scope:
We have this rule in systemd that, unless we are sure getenv() is
safe and there's a reason to use it, we should always prefer
secure_getenv(). Follow our own rules here, as per the CODING_STYLE
document.
This really doesn't matter here, all of this is highly privileged, but
hopefully Claude & Colleagues shut up about this then, and maybe detect
the pattern better.
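For illustration: in an ordinary process secure_getenv() behaves exactly like getenv(); the difference is that it returns NULL when the process runs in a "secure" context (e.g. setuid), which is why it is the safer default (variable name in the sketch is arbitrary).

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Same lookup as getenv() in a normal process, but yields NULL for
 * secure-execution processes, avoiding attacker-controlled input. */
static const char *lookup_env(const char *name) {
        return secure_getenv(name);
}
```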
Daan De Meyer [Mon, 30 Mar 2026 20:11:15 +0000 (20:11 +0000)]
udev: probe GPT sector size and trigger loop device on mismatch
When the GPT partition table uses a different sector size than the
device's native block size (e.g. 512-byte GPT on a 2048-byte CD-ROM
booted via El Torito), the kernel cannot parse the partition table.
Probe the GPT sector size upfront and configure blkid with the correct
value so it always finds the partition table. If a sector size mismatch
is detected, trigger a loop device to re-expose the device with the
correct sector size and skip root partition discovery on the original
device — it will happen on the loop device instead.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
dissect: resolve sysfs paths to devnodes in --attach
When a udev rule uses ENV{SYSTEMD_WANTS}+="systemd-loop@.service" on a
block device, the %f specifier in the service file resolves to the sysfs
path rather than the device node path. Detect sysfs paths in
parse_image_path_argument() and resolve them to the corresponding
devnode using sd_device_new_from_syspath() + sd_device_get_devname().
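The detection half can be sketched as a trivial prefix check (the resolution itself uses libsystemd's sd_device_new_from_syspath() + sd_device_get_devname() and is omitted here to keep the sketch standalone):

```c
#include <assert.h>
#include <string.h>

/* An argument under /sys/ is a sysfs path that still needs resolving to
 * its /dev node before it can be attached. */
static int path_is_sysfs(const char *path) {
        return strncmp(path, "/sys/", 5) == 0;
}
```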
Daan De Meyer [Mon, 30 Mar 2026 19:23:10 +0000 (19:23 +0000)]
boot: add El Torito CDROM partition UUID discovery
When booting from a CD-ROM via El Torito, the UEFI device path contains a
CDROM_DEVICE_PATH node instead of a HARDDRIVE_DEVICE_PATH node. Unlike the
hard drive variant, the CDROM node does not carry a partition UUID, so
systemd-boot previously could not determine the boot partition UUID in this
scenario.
Add disk_get_part_uuid_cdrom() which recovers the partition UUID by reading
the GPT from the underlying disk. Since ISO images are commonly mastered
with 512-byte GPT sectors on media with 2048-byte blocks, the function
probes for the GPT header at multiple sector sizes (512, 1024, 2048, 4096)
and matches the partition by comparing byte offsets between the CDROM node's
PartitionStart and each GPT entry's StartingLBA.
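The comparison described above, as a standalone helper (a sketch, not the boot code itself): PartitionStart is counted in device blocks (2048 bytes on optical media) while StartingLBA is counted in GPT sectors (often 512 bytes on mastered ISOs), so the two can only be compared as byte offsets.

```c
#include <assert.h>
#include <stdint.h>

/* Compare a CDROM node's PartitionStart with a GPT entry's StartingLBA
 * by converting both to byte offsets first. */
static int partition_offsets_match(uint64_t partition_start, uint64_t device_block_size,
                                   uint64_t starting_lba, uint64_t gpt_sector_size) {
        return partition_start * device_block_size == starting_lba * gpt_sector_size;
}
```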
The function reuses read_gpt_entries() for GPT parsing and adds debug
logging for each failure path to aid diagnosis on real hardware.
Also adds the CDROM_DEVICE_PATH struct and MEDIA_CDROM_DP subtype constant
to device-path.h, and fixes disk_get_part_uuid() to preserve the original
device path pointer so it can be passed to the CDROM fallback.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
Daan De Meyer [Mon, 30 Mar 2026 18:59:09 +0000 (18:59 +0000)]
boot: use EFI_DISK_IO_PROTOCOL instead of EFI_BLOCK_IO_PROTOCOL for disk reads
EFI_DISK_IO_PROTOCOL (UEFI spec section 13.7,
https://uefi.org/specs/UEFI/2.10/13_Protocols_Media_Access.html#disk-i-o-protocol)
supports reads at arbitrary byte offsets with no alignment requirements on the
buffer. The UEFI spec mandates that firmware produces this protocol on every
handle that also has EFI_BLOCK_IO_PROTOCOL, so it is always available.
This is a better fit than EFI_BLOCK_IO_PROTOCOL for our GPT parsing and
BitLocker detection because Block I/O requires that both the read offset (LBA)
and the buffer are aligned to the media's IoAlign value. Meeting that
constraint forces us to use xmalloc_aligned_pages() with
PHYSICAL_ADDRESS_TO_POINTER(), page-granularity allocations, and manual size
rounding (ALIGN_TO). Disk I/O handles all of that internally, so callers can
use plain xmalloc() or even stack buffers and read exactly the number of bytes
they need.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
repart: Split out El Torito boot catalog writing into context_eltorito()
Writing the El Torito boot catalog should be independent of writing the
partition table. Previously, the El Torito logic was embedded in
context_write_partition_table(), which meant it was skipped when the
partition table hadn't changed. Extract it into a separate
context_eltorito() function and invoke it after
context_write_partition_table() so the boot catalog is always written
when enabled.
Also move the overlap verification so it is done as soon as we have all
the necessary information for the check, and before doing any expensive
work.
TEST-74-AUX-UTILS: check for failed units after capsule test
TEST-74-AUX-UTILS has a number of subtests.
test/units/TEST-74-AUX-UTILS.capsule.sh runs first and starts and stops
capsule@foobar.service. Looking at the test, the unit is cleanly stopped.
But later test/units/TEST-74-AUX-UTILS.machine-id-setup.sh tests for
failed units. capsule@foobar.service is listed as failed, causing the
second subtest to fail.
Add the same check to test/units/TEST-74-AUX-UTILS.capsule.sh to see
whether the failure really originates from there.
TEST-70-TPM2: Suppress PCR public key auto-loading in basic tests (#41496)
When systemd-cryptenroll --tpm2-device=auto is called on a system where
a tpm2-pcr-public-key.pem exists it automatically creates tokens with a
signed PCR policy. Unlocking such a token via --unlock-tpm2-device=auto
requires a tpm2-pcr-signature.json file, which is not present.
This creates a race with systemd-tpm2-setup.service at boot: if the
service completes before the test, the key exists and the subsequent
--unlock-tpm2-device=auto calls fail, which I believe is the cause of
the test flakiness.
This also seems to mesh with the fact that this only flakes on Debian
CI, since that's built with ukify which installs a public key.
Let's hopefully fix this by passing --tpm2-public-key= to all
--tpm2-device= enrollment calls that aren't explicitly intended to test
signed PCR policy behaviour.
the ISO9660 date range and the "struct tm" range are quite different,
let's add extra paranoia checks that we can always convert the dates
without issues or fail cleanly.
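For illustration, the core of such a paranoia check (a sketch, assuming the ISO9660 directory-record format, which stores the year as a single byte of years-since-1900, i.e. 1900-2155, while struct tm's tm_year is a full int):

```c
#include <assert.h>

/* A tm_year value can only be stored in an ISO9660 directory record if
 * it fits the format's single years-since-1900 byte. */
static int tm_year_fits_iso9660(long tm_year) {
        return tm_year >= 0 && tm_year <= 255;
}
```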
Chris Down [Fri, 3 Apr 2026 15:03:28 +0000 (00:03 +0900)]
TEST-70-TPM2: Suppress PCR public key auto-loading in basic tests
When systemd-cryptenroll --tpm2-device=auto is called on a system where
a tpm2-pcr-public-key.pem exists it automatically creates tokens with a
signed PCR policy. Unlocking such a token via --unlock-tpm2-device=auto
requires a tpm2-pcr-signature.json file, which is not present.
This creates a race with systemd-tpm2-setup.service at boot: if the
service completes before the test, the key exists and the subsequent
--unlock-tpm2-device=auto calls fail, which I believe is the cause of
the test flakiness.
This also seems to mesh with the fact that this only flakes on Debian
CI, since that's built with ukify which installs a public key.
Let's hopefully fix this by passing --tpm2-public-key= to all
--tpm2-device= enrollment calls that aren't explicitly intended to test
signed PCR policy behaviour.
Translations update from [Fedora
Weblate](https://translate.fedoraproject.org) for
[systemd/main](https://translate.fedoraproject.org/projects/systemd/main/).
many: final set of coccinelle check-pointer-deref tweaks (#41426)
This is a followup to https://github.com/systemd/systemd/pull/41400 with
the final set of tweaks so that the new coccinelle `check-pointer-deref`
checker runs without failures.
It also includes a commit with some fixes for redundant asserts, and a
final commit that marks the assert() in the inner loop of qsort
POINTER_MAY_BE_NULL, turning it into a no-op. That is probably the
most controversial one; I hope the commit message explains the
trade-offs. I'm happy to drop it, I have no strong opinion either way.
Please see the individual commits/commit messages for more details, most
is (hopefully) relatively boring/mechanical.
Verify that GPT images with an ISO9660 El Torito boot catalog are
dissected via the GPT partition table rather than being treated as a
single iso9660 filesystem.
Follow-up for e33eb053fb ("dissect-image: Drop
blkid_probe_filter_superblocks_usage() call from probe_blkid_filter()")
dissect-image: Drop blkid_probe_filter_superblocks_usage() call from probe_blkid_filter()
probe_blkid_filter() sets up a blkid superblock filter to restrict
filesystem detection to a known-safe set of types (btrfs, erofs, ext4,
f2fs, squashfs, vfat, xfs). It does so via two consecutive calls: a
superblock type filter followed by blkid_probe_filter_superblocks_usage().
However, both filter functions share the same internal bitmap in libblkid.
Each call goes through blkid_probe_get_filter(), which zeroes the entire
bitmap before applying the new filter. This means the second call (usage
filter) silently destroys the type filter set by the first call.
The result is that only RAID superblocks end up being filtered, while all
other filesystem types — including iso9660 — pass through unfiltered.
This causes ISO images (e.g. those with El Torito boot catalogs and GPT)
to be incorrectly dissected: blkid detects the iso9660 superblock on
the whole device (since iso9660 is marked BLKID_IDINFO_TOLERANT and can
coexist with partition tables), the code enters the unpartitioned
single-filesystem path, and then mounting fails because iso9660 is not
in the allowed filesystem list:
"File system type 'iso9660' is not allowed to be mounted as result
of automatic dissection."
Fix this by dropping the blkid_probe_filter_superblocks_usage() call.
The BLKID_FLTR_ONLYIN type filter already restricts probing to only
the listed types, which implicitly excludes RAID superblocks as well,
making the usage filter redundant.
Follow-up for 72bf86663c ("dissect: use blkid_probe filters to restrict
probing to supported FSes and no raid")
dissect-image: add crypto_LUKS and swap to blkid probe filter
allowed_fstypes() returns the list of filesystem types that we are
willing to mount. However, the blkid probe filter needs to detect
additional non-mountable types: crypto_LUKS (so that LUKS-encrypted
partitions can be identified and decrypted) and swap (so that swap
partitions can be identified).
Without these types in the BLKID_FLTR_ONLYIN filter, blkid reports
"No type detected" for encrypted and swap partitions, causing
image policy checks to fail (e.g. "encrypted was required") and
mount operations to fail with "File system type not supported".
Note that verity types (DM_verity_hash, verity_hash_signature) do
not need to be added here because their fstype is assigned directly
during partition table parsing, not via blkid probing.
Daan De Meyer [Fri, 27 Mar 2026 13:53:02 +0000 (13:53 +0000)]
vmspawn: add --cxl= option and memory hotplug support
Add --cxl=BOOL option to enable CXL (Compute Express Link) support in
the virtual machine. CXL is a high-speed interconnect standard that
allows CPUs to access memory attached to devices such as accelerators
and memory expanders, enabling flexible memory pooling and expansion
beyond what is physically installed on the motherboard. When enabled,
adds cxl=on to the QEMU machine configuration. Only supported on x86_64
and aarch64 architectures.
This is added for testing purposes and for feature parity with mkosi's
CXL= setting.
Extend --ram= to accept an optional maximum size for memory hotplug,
using the syntax --ram=SIZE[:MAXSIZE] (e.g. --ram=2G:8G). When a
maximum is specified, the maxmem key is added to the QEMU memory
configuration section to enable memory hotplug up to the given limit.
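A toy parser for the --ram=SIZE[:MAXSIZE] syntax (it only accepts the "<number>G" form to keep the sketch short; the real code would go through a proper size parser; maxsize == 0 here means no hotplug limit was given):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Split --ram=SIZE[:MAXSIZE] into a base size and an optional maximum
 * for memory hotplug. */
static int parse_ram_arg(const char *arg, uint64_t *size, uint64_t *maxsize) {
        unsigned long long s, m;
        if (sscanf(arg, "%lluG:%lluG", &s, &m) == 2) {
                *size = s << 30;
                *maxsize = m << 30;
                return 0;
        }
        if (sscanf(arg, "%lluG", &s) == 1) {
                *size = s << 30;
                *maxsize = 0; /* no hotplug limit given */
                return 0;
        }
        return -1;
}
```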
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
core: do not GC units that have FDs stored (#41435)
If a unit has FileDescriptorStorePreserve=yes we'll keep its FDs around
in case it starts again. But if there are no reverse dependencies
referencing it, we'll also GC it and lose all the FDs, which defeats the
point of the setting (which is opt-in).
Do not GC units that have FDs stored to avoid this.
When running `updatectl update` since f0b2ea63, Install() was called
with the version resolved by Acquire() rather than the originally
requested (empty) version, causing the stricter update-to-version polkit
action to be used instead of the update polkit action.
gcc and newer clang seem to be fine with it, but clang 14, 16, and 18
are unhappy:
../src/timedate/timedatectl.c:1006:25: error: fallthrough annotation does not directly precede switch label
_fallthrough_;
^
_fallthrough_ doesn't seem to be used very often in option parsing,
so let's remove the use for now.
-h/--help and --version are moved from a standalone section into the Output
group, because they look misplaced without a header and having a new "Options"
group with the verb-like entries also wasn't appealing.
Output is changed a bit to avoid repeating "rather than path":
- -p --print=filename Print selected filename rather than path
- -p --print=version Print selected version rather than path
- -p --print=type Print selected inode type rather than path
- -p --print=arch Print selected architecture rather than path
- -p --print=tries Print selected tries left/tries done rather than path
- -p --print=all Print all of the above
- --resolve=yes Canonicalize the result path
+ -h --help Show this help
+ --version Show package version
+ -p --print=WHAT Print selected WHAT rather than path
+ --print=filename ... print selected filename
+ --print=version ... print selected version
+ --print=type ... print selected inode
+ --print=arch ... print selected architecture
+ --print=tries ... print selected tries left/tries done
+ --print=all ... print all of the above
+ --resolve=BOOL Canonicalize the result path
In some builds (package builds, so with optimization and LTO, but I
haven't been able to pin down the exact combination of options that
matters), we end up with items in the verbs array reordered. The order
matters (because of groups, but also because we have some specific order
for display), so this reordering is something that we don't want.
From what I was able to read, the compiler + linker generally keep the
order within a single translation unit, but this is more of a convention
and implementation choice than a guarantee. Add this attribute [1]. It
seems to have the desired effect in CI.
analyze: consistently print error if table formatting fails
We don't want to return an error without printing something.
So for things which don't matter, explicitly suppress the error
with (void). In other cases, add the standard message.
We generally want to print an error message if table_print()
fails. Add a helper function for this and use it consistently.
This does one of three things, depending on the call site:
- a no-change reformatting of the code
- change from a custom message to the generic one
- addition of the error message where previously none was printed
In the third case, the actual use impact is very small, since the
table formatting is very unlikely to fail. But if it did, we would
often return an error without any message whatsoever, which we
never want to do.
tree-wide: drop flush&check step after table printing
Almost all callers of table_print() specify stdout or NULL (equivalent
to stdout) as the output stream. Simplify things by not requiring the
stream to be specified.
In almost all cases, the printing of the table is surrounded by normal
printf() calls that don't do explicit flushing and for which we don't check
the output stream status. Let's simplify most callers and skip this
step. The reason is not so much to avoid the extra step itself, but
instead to avoid the _handling_ of the potential failure. We generally
only want to print an error message for ENOMEM and other "internal"
errors, so strictly speaking we should filter out the errors from the
stream. By skipping the flush&check step we implicitly do this.
analyze: use table_print_with_pager in one more place
I guess this wasn't converted previously because verb_blame doesn't
support json output, the flags that are passed atm cannot contain real
json flags. That's OK, we can still use table_print_with_pager.
vmspawn: drop ICH9-LPC S3 disable and guard cfi.pflash01 for x86
The ICH9-LPC disable_s3 global QEMU config was a workaround for an
OVMF limitation where S3 resume didn't work with X64 PEI + SMM. SMM is
required for secure boot as it prevents the guest from writing directly
to the pflash, bypassing UEFI variable protections. With X64 PEI + SMM
enabled and S3 advertised, OVMF would hang on S3 resume. The
workaround was to tell QEMU not to advertise S3 support.
This limitation has been resolved in edk2 — the S3Verification() check
was removed in edk2 commit 098c5570 ("OvmfPkg/PlatformPei: drop
S3Verification()") after edk2 gained native X64 PEI + SMM + S3 resume
support. See https://github.com/tianocore/edk2/commit/098c5570.
Drop the now-unnecessary ICH9-LPC disable_s3 config entirely, and
guard the cfi.pflash01 secure=on setting with an x86 architecture
check since SMM is x86-specific and this option is invalid on ARM.
Chris Down [Fri, 3 Apr 2026 05:52:42 +0000 (13:52 +0800)]
core: Prevent corrupting units from stale alias state on daemon-reload (#39703)
During daemon-reload (or daemon-reexec), when a unit becomes an alias to
another unit, deserialising the alias's stale serialised state can
corrupt the canonical unit's live runtime state.
Consider this scenario:
1. Before reload:
- a.service is running
- b.service was stopped earlier and is dead
- Both exist as independent units
2. b.service is then made an alias of a.service (a symlink on disk)
3. daemon-reload triggers serialisation. State file contains both units:
- a.service -> state=running, cgroup=/system.slice/a.service, PID=1234,
...
- b.service -> state=dead, cgroup=(empty), no PIDs, ...
4. During deserialisation:
- Processes a.service: loads Unit A, deserialises -> state=RUNNING
- Processes b.service: manager_load_unit() detects symlink, returns Unit
A
- unit_deserialize_state(Unit A, ...) overwrites with b's dead state
5. The result is that:
- Unit A incorrectly shows state=dead despite PID 1234 still running
- If a.service has Upholds= dependents, catch-up logic sees a.service
should be running but is dead
- systemd starts a.service again -> PID 5678
- Two instances run: PID 1234 (left-over) and PID 5678 (new)
This bug is deterministic when serialisation orders a.service before
b.service.
The root cause is that manager_deserialize_one_unit() calls
manager_load_unit(name, &u) which resolves aliases via
unit_follow_merge(), returning the canonical Unit object. However, the
code doesn't distinguish between two cases when u->id differs from the
requested name from the state file. In the corruption case, we're
deserialising an alias entry and unit_deserialize_state() blindly
overwrites the canonical unit's fields with stale data from the old,
independent unit. The serialised b.service then overwrites Unit A's
correct live state.
This commit first scans the serialised unit names, then adds a check
after manager_load_unit():
if (!streq(u->id, name) && set_contains(serialized_units, u->id))
...
This detects when the loaded unit's canonical ID (u->id) differs from
the serialised name, indicating the name is now an alias for a different
unit and the canonical unit also has its own serialised state entry.
If the canonical unit does not have its own serialised state entry, we
keep the state entry. That handles cases where the old name is really
just a rename, and thus the old name is the only serialised state for
the unit. In that case there is no bug, because there is no separate
canonical state entry for the stale alias entry to overwrite.
Skipping is safe because:
1. The canonical unit's own state entry will be correctly deserialised
regardless of order. This fix only prevents other stale alias entries
from corrupting it.
2. unit_merge() has already transferred the necessary data. When
b.service became an alias during unit loading, unit_merge() already
migrated dependencies and references to the canonical unit.
3. After merging, the alias doesn't have its own runtime state. The
serialised data represents b.service when it was independent, which is
now obsolete once the canonical unit also has its own serialised entry.
4. All fields are stale. unit_deserialize_state() would overwrite state,
timestamps, cgroup paths, pids, etc. There's no scenario where we want
this data applied on top of the canonical unit's own serialised state.
This fix also correctly handles unit precedence. For example, imagine
this scenario:
1. `b.service` is a valid, running unit defined in `/run`.
2. The sysadmin creates `ln -s .../a.service /etc/.../b.service`.
3. On reload, the new symlink in `/etc` overrides the unit in `/run`.
The new perspective from the manager side is that `b.service` is now an
alias for `a.service`.
In this case, systemd correctly abandons the old b.service unit, because
that's the intended general semantics of unit file precedence. We also
do that in other cases, like when a unit file in /etc/systemd/system/
masks a vendor-supplied unit file in /lib/systemd/system/, or when an
admin uses systemctl mask to explicitly disable a unit.
In all these scenarios, the configuration with the highest precedence
(in /etc/) is treated as the new source of truth. The old unit's
definition is discarded, and its running processes are (correctly)
abandoned. In that respect we are not doing anything new here.
Some may ask why we shouldn't just ignore the symlink if we think this
case will come up. I think there are multiple very strong reasons not to
do so:
1. It violates unit precedence. The unit design is built on a strict
precedence list. When an admin puts any file in /etc, they are
intentionally overriding everything else. If manager_load_unit were to
"ignore" this file based on runtime state, it would break this
fundamental precedence rule.
2. It makes daemon-reload stateful. daemon-reload is supposed to be a
simple, stateless operation, basically to read the files on disk and
apply the new configuration. But doing this would make daemon-reload
stateful: we'd have to read the files on disk, cross-reference the
current runtime state, and then perhaps ignore some of them. This is
complex and unpredictable.
3. It also completely ignores user intent. The admin has clearly
tried to replace the old service with an alias. Ignoring their
instruction is the opposite of what they want.
Yaping Li [Thu, 19 Mar 2026 21:10:49 +0000 (14:10 -0700)]
report: add per-service metrics to the varlink Metrics API
Added these metrics:
- ActiveTimestamp: active state transition timestamps (enter/exit)
- InactiveExitTimestamp: when the unit last left inactive state
- NRestarts: restart count
- StateChangeTimestamp: last state change timestamp
- StatusErrno: service errno status
Per-service cgroup metrics (CpuUsage, MemoryUsage, IOReadBytes,
IOReadOperations, TasksCurrent) are not included here as they are
gathered by the kernel and will be served by a separate process that
reads cgroup files directly, minimizing PID1 involvement.
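As a purely illustrative sketch (the commit message does not show the actual varlink method or reply shape, so the structure and values here are assumptions; only the field names come from the list above), a per-service record might look like:

```json
{
  "unit": "foo.service",
  "ActiveTimestamp": { "enter": 1742500000000000, "exit": 0 },
  "InactiveExitTimestamp": 1742500000000000,
  "NRestarts": 2,
  "StateChangeTimestamp": 1742500000000000,
  "StatusErrno": 0
}
```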
Yaping Li [Fri, 27 Mar 2026 02:57:46 +0000 (19:57 -0700)]
report: add manager-level metrics to varlink Metrics API
Added these metrics:
- JobsQueued: number of jobs currently queued
- SystemState: overall system state (running, degraded, etc.)
- UnitsByLoadStateTotal: unit counts broken down by load state
- UnitsTotal: total number of units
Also bump METRICS_MAX from 1024 to 4096 to accommodate the new
per-unit metrics that are now collected.
vmspawn: Add --console-transport= option to select serial vs virtio-serial
Add a --console-transport= option that selects between virtio-serial
(the default, appearing as /dev/hvc0) and a regular serial port
(appearing as /dev/ttyS0 or /dev/ttyAMA0 depending on architecture).
This is primarily useful for testing purposes, for example to test
sd-stub's automatic console= kernel command line parameter handling. It
allows verifying that the guest OS correctly handles serial console
configurations without virtio.
When serial transport is selected, -serial chardev:console is used on
the QEMU command line to connect the chardev to the platform's default
serial device. This cannot be done via the QEMU config file as on some
platforms (e.g. ARM) the serial device is a sysbus device that can only
be connected via serial_hd() which is populated by -serial.
Daan De Meyer [Mon, 30 Mar 2026 17:23:48 +0000 (19:23 +0200)]
loop-util: create loop device for block devices with sector size mismatch
Previously, loop_device_make_internal() always used the block device
directly (via loop_device_open_from_fd()) for whole-device access,
regardless of sector size. This is incorrect when the GPT partition
table was written with a different sector size than the device reports,
as happens with CD-ROM/ISO boot via El Torito: the device has
2048-byte blocks but the GPT uses 512-byte sectors.
Restructure the sector size handling in loop_device_make_internal():
- Move GPT sector size probing (UINT32_MAX case) before the
block-vs-regular-file split so both paths share the same logic and
O_DIRECT handling. Check f_flags instead of loop_flags for O_DIRECT
detection, since we're probing the original fd before any reopening.
- For block devices, get the device sector size and compare it against
the resolved sector_size. Only use the block device directly when
sector sizes match. When they differ (probed GPT mismatch or explicit
sector size request), fall through to create a real loop device with
the correct sector size.
- Default sector_size=0 to the device sector size for block devices
(instead of always 512), so "no preference" correctly matches the
device's sector size.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
Daan De Meyer [Sat, 28 Mar 2026 13:12:25 +0000 (13:12 +0000)]
vmspawn: propagate $TERM from host into VM via kernel command line
When running in a console mode (interactive, native, or read-only),
propagate the host's $TERM into the VM by adding TERM= and
systemd.tty.term.hvc0= to the kernel command line.
TERM= is picked up by PID 1 and inherited by services on /dev/console
(such as emergency.service). systemd.tty.term.hvc0= is used by services
directly attached to /dev/hvc0 (such as serial-getty@hvc0.service) which
look up $TERM via the systemd.tty.term.<tty> kernel command line
parameter.
While systemd can auto-detect the terminal type via DCS XTGETTCAP, not
all terminal emulators implement this, so explicitly propagating $TERM
provides a more reliable experience. We skip propagation when $TERM is
unset or set to "unknown" (as is the case in GitHub Actions and some
other CI environments).
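For illustration (the concrete $TERM value is an assumption), with TERM=xterm-256color on the host, the two appended kernel command line parameters would look like:

```
TERM=xterm-256color systemd.tty.term.hvc0=xterm-256color
```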
Previously this was handled by mkosi synthesizing the corresponding
kernel command line parameters externally.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
Support `CopyBlocks=` for `Verity={hash,sig}` (#41393)
This enables deriving the minimum size of the `Verity=hash` partition
using the `Verity=` logic when the size of the `Verity=data` partition
is bigger than the `CopyBlocks=` target.
This enables using `Minimize=true` for an "installer image" and later
using sd-repart to install to a system with reserved space for future
updates by specifying `Size{Min,Max}Bytes=` only in the `Verity=data`
partition, without needing to hardcode the corresponding size for the
`Verity=hash` partition.
While not strictly necessary for `Verity=signature` partitions (since
they have a fixed size), there is little reason not to support it:
you can still specify `VerityMatchKey=` to indicate that the
partition is logically still part of that group of partitions.
---
Alternative to: https://github.com/systemd/systemd/pull/41156
Fixes https://github.com/systemd/systemd/issues/40995
This is a rebased version of #40936:
- the first few commits are #41003, which is greenlighted for merging
after the release
- then there's #40923
- and some changes on top
network-generator: support BOOTIF= and rd.bootif=0 options (#41028)
The network generator currently supports many of the options described
by dracut.cmdline(7), but not everything.
This commit adds support for the BOOTIF= option (and the related
rd.bootif= option) used in PXE setups.
This is implemented by treating BOOTIF as a special name/placeholder
when used as an interface name, and expecting a MAC address to be set in
the BOOTIF= parameter. The resulting .network file then uses MACAddress=
in the [Match] section, instead of Name=.
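As an illustrative sketch (the MAC address and any [Network] contents are assumptions), a pxelinux-style BOOTIF= parameter and the resulting match section might look like:

```ini
# Kernel command line (pxelinux prefixes the MAC with the ARP
# hardware type, "01-" for Ethernet):
#   BOOTIF=01-52-54-00-12-34-56

# Sketch of the generated .network file, matching on the MAC
# address instead of an interface name:
[Match]
MACAddress=52:54:00:12:34:56
```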