udev: also trigger loop device for boot disk when partition scanning is unsupported (#41509)
Previously, probe_gpt_sector_size_mismatch() would bail out early when
the GPT sector size matched the device sector size. However, some
devices (e.g. certain CD-ROM drives) do not support kernel partition
scanning even when sector sizes match. In that case, the kernel still
cannot parse the partition table, and we need to set up a loop device to
expose the partitions — just as we do for the sector size mismatch case.
Check blockdev_partscan_enabled() when sector sizes match, and only skip
the boot partition check if partition scanning is actually supported.
Also rename the function, udev property, and log messages to reflect the
broader scope:
Per UEFI specification §13.3.2, El Torito partition discovery applies to
any block device, not just optical media. Rename
disk_get_part_uuid_cdrom() to disk_get_part_uuid_eltorito() and update
all log messages and comments to say "El Torito" instead of "CDROM" to
reflect this.
We have this rule in systemd that, unless we are sure that getenv() is
safe and there's a reason to use it, we should always prefer
secure_getenv(). Follow our own rules here, as per the CODING_STYLE
document.
This really doesn't matter here, all of this is highly privileged, but
hopefully Claude & Colleagues shut up about this then, and maybe detect
the pattern better.
Daan De Meyer [Mon, 30 Mar 2026 20:11:15 +0000 (20:11 +0000)]
udev: probe GPT sector size and trigger loop device on mismatch
When the GPT partition table uses a different sector size than the
device's native block size (e.g. 512-byte GPT on a 2048-byte CD-ROM
booted via El Torito), the kernel cannot parse the partition table.
Probe the GPT sector size upfront and configure blkid with the correct
value so it always finds the partition table. If a sector size mismatch
is detected, trigger a loop device to re-expose the device with the
correct sector size and skip root partition discovery on the original
device — it will happen on the loop device instead.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
dissect: resolve sysfs paths to devnodes in --attach
When a udev rule uses ENV{SYSTEMD_WANTS}+="systemd-loop@.service" on a
block device, the %f specifier in the service file resolves to the sysfs
path rather than the device node path. Detect sysfs paths in
parse_image_path_argument() and resolve them to the corresponding
devnode using sd_device_new_from_syspath() + sd_device_get_devname().
Daan De Meyer [Mon, 30 Mar 2026 19:23:10 +0000 (19:23 +0000)]
boot: add El Torito CDROM partition UUID discovery
When booting from a CD-ROM via El Torito, the UEFI device path contains a
CDROM_DEVICE_PATH node instead of a HARDDRIVE_DEVICE_PATH node. Unlike the
hard drive variant, the CDROM node does not carry a partition UUID, so
systemd-boot previously could not determine the boot partition UUID in this
scenario.
Add disk_get_part_uuid_cdrom() which recovers the partition UUID by reading
the GPT from the underlying disk. Since ISO images are commonly mastered
with 512-byte GPT sectors on media with 2048-byte blocks, the function
probes for the GPT header at multiple sector sizes (512, 1024, 2048, 4096)
and matches the partition by comparing byte offsets between the CDROM node's
PartitionStart and each GPT entry's StartingLBA.
The function reuses read_gpt_entries() for GPT parsing and adds debug
logging for each failure path to aid diagnosis on real hardware.
Also adds the CDROM_DEVICE_PATH struct and MEDIA_CDROM_DP subtype constant
to device-path.h, and fixes disk_get_part_uuid() to preserve the original
device path pointer so it can be passed to the CDROM fallback.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
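The multi-sector-size probe described above can be modelled compactly: a GPT header lives at LBA 1, so its byte offset is simply one sector size into the disk, and the candidate is found by checking for the "EFI PART" signature at each offset. This is a Python sketch of that logic (systemd-boot's actual implementation is C and reads via EFI protocols); the sector sizes and signature come from the commit text.

```python
# Model of the GPT sector-size probe described above; not the real C code.
GPT_SIGNATURE = b"EFI PART"
CANDIDATE_SECTOR_SIZES = (512, 1024, 2048, 4096)

def probe_gpt_sector_size(disk: bytes):
    """Return the sector size whose LBA 1 holds a GPT header, or None."""
    for ssz in CANDIDATE_SECTOR_SIZES:
        # The GPT header is at LBA 1, i.e. exactly one sector into the disk.
        if disk[ssz:ssz + len(GPT_SIGNATURE)] == GPT_SIGNATURE:
            return ssz
    return None

# Example: an image mastered with 512-byte GPT sectors on 2048-byte media
# still has its header at byte offset 512, so the probe recovers 512.
image = bytearray(8192)
image[512:520] = GPT_SIGNATURE
```

Matching a partition then works on byte offsets (PartitionStart times the node's block size versus StartingLBA times the probed sector size), which is why the probe only needs to recover the sector size, not re-read the device geometry.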
Daan De Meyer [Mon, 30 Mar 2026 18:59:09 +0000 (18:59 +0000)]
boot: use EFI_DISK_IO_PROTOCOL instead of EFI_BLOCK_IO_PROTOCOL for disk reads
EFI_DISK_IO_PROTOCOL (UEFI spec section 13.7,
https://uefi.org/specs/UEFI/2.10/13_Protocols_Media_Access.html#disk-i-o-protocol)
supports reads at arbitrary byte offsets with no alignment requirements on the
buffer. The UEFI spec mandates that firmware produces this protocol on every
handle that also has EFI_BLOCK_IO_PROTOCOL, so it is always available.
This is a better fit than EFI_BLOCK_IO_PROTOCOL for our GPT parsing and
BitLocker detection because Block I/O requires that both the read offset (LBA)
and the buffer are aligned to the media's IoAlign value. Meeting that
constraint forces us to use xmalloc_aligned_pages() with
PHYSICAL_ADDRESS_TO_POINTER(), page-granularity allocations, and manual size
rounding (ALIGN_TO). Disk I/O handles all of that internally, so callers can
use plain xmalloc() or even stack buffers and read exactly the number of bytes
they need.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
repart: Split out El Torito boot catalog writing into context_eltorito()
Writing the El Torito boot catalog should be independent of writing the
partition table. Previously, the El Torito logic was embedded in
context_write_partition_table(), which meant it was skipped when the
partition table hadn't changed. Extract it into a separate
context_eltorito() function and invoke it after
context_write_partition_table() so the boot catalog is always written
when enabled.
Also move the overlap verification so that it is done as soon as we have
all the necessary information for the check, and before doing any
expensive work.
TEST-74-AUX-UTILS: check for failed units after capsule test
TEST-74-AUX-UTILS has a number of subtests.
test/units/TEST-74-AUX-UTILS.capsule.sh runs first and starts and stops
capsule@foobar.service. Looking at the test, the unit is cleanly stopped.
But later test/units/TEST-74-AUX-UTILS.machine-id-setup.sh tests for
failed units. capsule@foobar.service is listed as failed, causing the
second subtest to fail.
Add the same test in test/units/TEST-74-AUX-UTILS.capsule.sh to see
if the failed test really originates from there.
TEST-70-TPM2: Suppress PCR public key auto-loading in basic tests (#41496)
When systemd-cryptenroll --tpm2-device=auto is called on a system where
a tpm2-pcr-public-key.pem exists it automatically creates tokens with a
signed PCR policy. Unlocking such a token via --unlock-tpm2-device=auto
requires a tpm2-pcr-signature.json file, which is not present.
This creates a race with systemd-tpm2-setup.service at boot: if the
service completes before the test, the key exists and the subsequent
--unlock-tpm2-device=auto calls fail, which I believe is the cause of
the test flakiness.
This also seems to mesh with the fact that this only flakes on Debian
CI, since that's built with ukify which installs a public key.
Let's hopefully fix this by passing --tpm2-public-key= to all
--tpm2-device= enrollment calls that aren't explicitly intended to test
signed PCR policy behaviour.
The ISO9660 date range and the "struct tm" range are quite different;
let's add extra paranoia checks so that we can always convert the dates
without issues, or fail cleanly.
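The mismatch above can be made concrete. Assuming the ISO9660 7-byte directory-record date format, the year is a single byte counted from 1900 (so 1900 through 2155), while struct tm's tm_year is a full int; the conversion must therefore be range-checked rather than assumed. A Python sketch of such a paranoia check (not the actual C code):

```python
# Model of the range check described above, assuming the ISO9660
# directory-record date whose year field is one byte from 1900.
ISO9660_YEAR_MIN = 1900
ISO9660_YEAR_MAX = 1900 + 255

def tm_year_to_iso9660(tm_year: int) -> int:
    """Convert struct-tm style years-since-1900 to the ISO9660 year byte,
    failing cleanly when the date cannot be represented."""
    if not 0 <= tm_year <= 255:
        raise ValueError(
            f"year {1900 + tm_year} outside ISO9660 range "
            f"{ISO9660_YEAR_MIN}..{ISO9660_YEAR_MAX}")
    return tm_year
```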
Chris Down [Fri, 3 Apr 2026 15:03:28 +0000 (00:03 +0900)]
TEST-70-TPM2: Suppress PCR public key auto-loading in basic tests
When systemd-cryptenroll --tpm2-device=auto is called on a system where
a tpm2-pcr-public-key.pem exists it automatically creates tokens with a
signed PCR policy. Unlocking such a token via --unlock-tpm2-device=auto
requires a tpm2-pcr-signature.json file, which is not present.
This creates a race with systemd-tpm2-setup.service at boot: if the
service completes before the test, the key exists and the subsequent
--unlock-tpm2-device=auto calls fail, which I believe is the cause of
the test flakiness.
This also seems to mesh with the fact that this only flakes on Debian
CI, since that's built with ukify which installs a public key.
Let's hopefully fix this by passing --tpm2-public-key= to all
--tpm2-device= enrollment calls that aren't explicitly intended to test
signed PCR policy behaviour.
Translations update from [Fedora
Weblate](https://translate.fedoraproject.org) for
[systemd/main](https://translate.fedoraproject.org/projects/systemd/main/).
many: final set of coccinelle check-pointer-deref tweaks (#41426)
This is a followup to https://github.com/systemd/systemd/pull/41400 with
the final set of tweaks so that the new coccinelle `check-pointer-deref`
checker runs without failures.
It also includes a commit with some fixes for redundant asserts, and a
final commit that marks the assert() in the inner loop of qsort
POINTER_MAY_BE_NULL, turning it into a no-op. That one is probably the
most controversial; I hope the commit message explains the trade-offs.
I'm happy to drop it, I have no strong opinion either way.
Please see the individual commits/commit messages for more details, most
is (hopefully) relatively boring/mechanical.
Verify that GPT images with an ISO9660 El Torito boot catalog are
dissected via the GPT partition table rather than being treated as a
single iso9660 filesystem.
Follow-up for e33eb053fb ("dissect-image: Drop
blkid_probe_filter_superblocks_usage() call from probe_blkid_filter()")
dissect-image: Drop blkid_probe_filter_superblocks_usage() call from probe_blkid_filter()
probe_blkid_filter() sets up a blkid superblock filter to restrict
filesystem detection to a known-safe set of types (btrfs, erofs, ext4,
f2fs, squashfs, vfat, xfs). It does so via two consecutive calls: a type
filter followed by a usage filter.
However, both filter functions share the same internal bitmap in libblkid.
Each call goes through blkid_probe_get_filter(), which zeroes the entire
bitmap before applying the new filter. This means the second call (usage
filter) silently destroys the type filter set by the first call.
The result is that only RAID superblocks end up being filtered, while all
other filesystem types — including iso9660 — pass through unfiltered.
This causes ISO images (e.g. those with El Torito boot catalogs and GPT)
to be incorrectly dissected: blkid detects the iso9660 superblock on
the whole device (since iso9660 is marked BLKID_IDINFO_TOLERANT and can
coexist with partition tables), the code enters the unpartitioned
single-filesystem path, and then mounting fails because iso9660 is not
in the allowed filesystem list:
"File system type 'iso9660' is not allowed to be mounted as result
of automatic dissection."
Fix this by dropping the blkid_probe_filter_superblocks_usage() call.
The BLKID_FLTR_ONLYIN type filter already restricts probing to only
the listed types, which implicitly excludes RAID superblocks as well,
making the usage filter redundant.
Follow-up for 72bf86663c ("dissect: use blkid_probe filters to restrict
probing to supported FSes and no raid")
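The shared-bitmap behaviour described above can be illustrated with a toy model (this is not libblkid's code, just the shape of the bug): both filter calls go through the same get-filter step, which zeroes the bitmap before applying the new filter, so the second call silently discards the first.

```python
# Toy model of the libblkid shared-filter-bitmap behaviour described
# above; class and method names are invented for illustration.
class Probe:
    def __init__(self):
        self.filtered = set()   # stands in for the internal bitmap

    def _get_filter(self):
        self.filtered.clear()   # the zeroing step that causes the bug
        return self.filtered

    def filter_types(self, types):
        """Stand-in for the type filter (BLKID_FLTR_ONLYIN on fs names)."""
        self._get_filter().update(types)

    def filter_usage_raid(self):
        """Stand-in for the usage filter excluding RAID superblocks."""
        self._get_filter().update({"linux_raid_member"})

p = Probe()
p.filter_types({"btrfs", "ext4", "vfat"})  # first call: type filter
p.filter_usage_raid()                      # second call wipes the type filter
```

After the second call only the RAID entry survives, which is exactly why iso9660 passed through unfiltered and why dropping the usage call (the type filter already excludes RAID types) is the right fix.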
dissect-image: add crypto_LUKS and swap to blkid probe filter
allowed_fstypes() returns the list of filesystem types that we are
willing to mount. However, the blkid probe filter needs to detect
additional non-mountable types: crypto_LUKS (so that LUKS-encrypted
partitions can be identified and decrypted) and swap (so that swap
partitions can be identified).
Without these types in the BLKID_FLTR_ONLYIN filter, blkid reports
"No type detected" for encrypted and swap partitions, causing
image policy checks to fail (e.g. "encrypted was required") and
mount operations to fail with "File system type not supported".
Note that verity types (DM_verity_hash, verity_hash_signature) do
not need to be added here because their fstype is assigned directly
during partition table parsing, not via blkid probing.
Daan De Meyer [Fri, 27 Mar 2026 13:53:02 +0000 (13:53 +0000)]
vmspawn: add --cxl= option and memory hotplug support
Add --cxl=BOOL option to enable CXL (Compute Express Link) support in
the virtual machine. CXL is a high-speed interconnect standard that
allows CPUs to access memory attached to devices such as accelerators
and memory expanders, enabling flexible memory pooling and expansion
beyond what is physically installed on the motherboard. When enabled,
adds cxl=on to the QEMU machine configuration. Only supported on x86_64
and aarch64 architectures.
This is added for testing purposes and for feature parity with mkosi's
CXL= setting.
Extend --ram= to accept an optional maximum size for memory hotplug,
using the syntax --ram=SIZE[:MAXSIZE] (e.g. --ram=2G:8G). When a
maximum is specified, the maxmem key is added to the QEMU memory
configuration section to enable memory hotplug up to the given limit.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
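The --ram=SIZE[:MAXSIZE] syntax described above can be sketched as follows; this is a Python model of the parsing, not vmspawn's C parser, with suffix handling limited to the binary units the example uses.

```python
# Model of --ram=SIZE[:MAXSIZE] parsing (e.g. --ram=2G:8G) as described
# above; helper names are invented for illustration.
SUFFIXES = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}

def parse_size(s: str) -> int:
    if s and s[-1].upper() in SUFFIXES:
        return int(s[:-1]) * SUFFIXES[s[-1].upper()]
    return int(s)

def parse_ram(arg: str):
    """Return (size, maxsize); maxsize is None when no hotplug limit is given."""
    size, sep, maxsize = arg.partition(":")
    lo = parse_size(size)
    hi = parse_size(maxsize) if sep else None
    if hi is not None and hi < lo:
        raise ValueError("maximum size smaller than initial size")
    return lo, hi
```

When a maximum is present, the second value is what ends up as the maxmem key in the QEMU memory configuration.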
core: do not GC units that have FDs stored (#41435)
If a unit has FileDescriptorStorePreserve=yes we'll keep its FDs around
in case it starts again. But if there are no reverse dependencies
referencing it, we'll also GC it and lose all the FDs, which defeats the
point of the setting (which is opt-in).
Do not GC units that have FDs stored to avoid this.
When running `updatectl update` since f0b2ea63, Install() was called
with the version resolved by Acquire() rather than the originally
requested (empty) version, causing the stricter update-to-version polkit
action to be used instead of the update polkit action.
gcc and newer clang seem to be fine with it, but clang 14, 16, and 18
are unhappy:
../src/timedate/timedatectl.c:1006:25: error: fallthrough annotation does not directly precede switch label
_fallthrough_;
^
_fallthrough_ doesn't seem to be used very often in option parsing,
so let's remove the use for now.
-h/--help and --version are moved from a standalone section into the Output
group, because they look misplaced without a header and having a new "Options"
group with the verb-like entries also wasn't appealing.
Output is changed a bit to avoid repeating "rather than path":
- -p --print=filename Print selected filename rather than path
- -p --print=version Print selected version rather than path
- -p --print=type Print selected inode type rather than path
- -p --print=arch Print selected architecture rather than path
- -p --print=tries Print selected tries left/tries done rather than path
- -p --print=all Print all of the above
- --resolve=yes Canonicalize the result path
+ -h --help Show this help
+ --version Show package version
+ -p --print=WHAT Print selected WHAT rather than path
+ --print=filename ... print selected filename
+ --print=version ... print selected version
+ --print=type ... print selected inode
+ --print=arch ... print selected architecture
+ --print=tries ... print selected tries left/tries done
+ --print=all ... print all of the above
+ --resolve=BOOL Canonicalize the result path
In some builds (package builds, so with optimization and LTO, but I
haven't been able to pin down the exact combination of options that
matters), we end up with items in the verbs array reordered. The order
matters (because of groups, but also because we have some specific order
for display), so this reordering is something that we don't want.
From what I was able to read, the compiler + linker generally keep the
order within a single translation unit, but this is more of a convention
and implementation choice than a guarantee. Add this attribute [1]. It
seems to have the desired effect in CI.
analyze: consistently print error if table formatting fails
We don't want to return an error without printing something.
So for things which don't matter, explicitly suppress the error
with (void). In other cases, add the standard message.
We generally want to print an error message if table_print()
fails. Add a helper function for this and use it consistently.
This does one of three things, depending on the call site:
- a no-change reformatting of the code
- change from a custom message to the generic one
- addition of the error message where previously none was printed
In the third case, the actual use impact is very small, since the
table formatting is very unlikely to fail. But if it did, we would
often return an error without any message whatsoever, which we
never want to do.
tree-wide: drop flush&check step after table printing
Almost all callers of table_print() specify stdout or NULL (equivalent
to stdout) as the output stream. Simplify things by not requiring the
stream to be specified.
In almost all cases, the printing of the table is surrounded by normal
printfs() that don't do explicit flushing and for which we don't check
the output stream status. Let's simplify most callers and skip this
step. The reason is not so much to avoid the extra step itself, but
instead to avoid the _handling_ of the potential failure. We generally
only want to print an error message for ENOMEM and other "internal"
errors, so strictly speaking we should filter out the errors from the
stream. By skipping the flush&check step we implicitly do this.
analyze: use table_print_with_pager in one more place
I guess this wasn't converted previously because verb_blame doesn't
support json output; the flags that are passed atm cannot contain real
json flags. That's OK, we can still use table_print_with_pager.
vmspawn: drop ICH9-LPC S3 disable and guard cfi.pflash01 for x86
The ICH9-LPC disable_s3 global QEMU config was a workaround for an
OVMF limitation where S3 resume didn't work with X64 PEI + SMM. SMM is
required for secure boot as it prevents the guest from writing directly
to the pflash, bypassing UEFI variable protections. With X64 PEI + SMM
enabled and S3 advertised, OVMF would hang on S3 resume. The
workaround was to tell QEMU not to advertise S3 support.
This limitation has been resolved in edk2 — the S3Verification() check
was removed in edk2 commit 098c5570 ("OvmfPkg/PlatformPei: drop
S3Verification()") after edk2 gained native X64 PEI + SMM + S3 resume
support. See https://github.com/tianocore/edk2/commit/098c5570.
Drop the now-unnecessary ICH9-LPC disable_s3 config entirely, and
guard the cfi.pflash01 secure=on setting with an x86 architecture
check since SMM is x86-specific and this option is invalid on ARM.
Chris Down [Fri, 3 Apr 2026 05:52:42 +0000 (13:52 +0800)]
core: Prevent corrupting units from stale alias state on daemon-reload (#39703)
During daemon-reload (or daemon-reexec), when a unit becomes an alias to
another unit, deserialising the alias's stale serialised state can
corrupt the canonical unit's live runtime state.
Consider this scenario:
1. Before reload:
- a.service is running
- b.service was stopped earlier and is dead
- Both exist as independent units
2. b.service is replaced with a symlink to a.service, so it becomes an
alias for a.service.
3. daemon-reload triggers serialisation. State file contains both units:
- a.service -> state=running, cgroup=/system.slice/a.service, PID=1234,
...
- b.service -> state=dead, cgroup=(empty), no PIDs, ...
4. During deserialisation:
- Processes a.service: loads Unit A, deserialises -> state=RUNNING
- Processes b.service: manager_load_unit() detects symlink, returns Unit
A
- unit_deserialize_state(Unit A, ...) overwrites with b's dead state
5. The result is that:
- Unit A incorrectly shows state=dead despite PID 1234 still running
- If a.service has Upholds= dependents, catch-up logic sees a.service
should be running but is dead
- systemd starts a.service again -> PID 5678
- Two instances run: PID 1234 (left-over) and PID 5678 (new)
This bug is deterministic when serialisation orders a.service before
b.service.
The root cause is that manager_deserialize_one_unit() calls
manager_load_unit(name, &u) which resolves aliases via
unit_follow_merge(), returning the canonical Unit object. However, the
code doesn't distinguish between two cases when u->id differs from the
requested name from the state file. In the corruption case, we're
deserialising an alias entry and unit_deserialize_state() blindly
overwrites the canonical unit's fields with stale data from the old,
independent unit. The serialised b.service then overwrites Unit A's
correct live state.
This commit first scans the serialised unit names, then adds a check
after manager_load_unit():
if (!streq(u->id, name) && set_contains(serialized_units, u->id))
...
This detects when the loaded unit's canonical ID (u->id) differs from
the serialised name, indicating the name is now an alias for a different
unit and the canonical unit also has its own serialised state entry.
If the canonical unit does not have its own serialised state entry, we
keep the state entry. That handles cases where the old name is really
just a rename, and thus the old name is the only serialised state for
the unit. In that case there is no bug, because there is no separate
canonical state entry for the stale alias entry to overwrite.
Skipping is safe because:
1. The canonical unit's own state entry will be correctly deserialised
regardless of order. This fix only prevents other stale alias entries
from corrupting it.
2. unit_merge() has already transferred the necessary data. When
b.service became an alias during unit loading, unit_merge() already
migrated dependencies and references to the canonical unit.
3. After merging, the alias doesn't have its own runtime state. The
serialised data represents b.service when it was independent, which is
now obsolete once the canonical unit also has its own serialised entry.
4. All fields are stale. unit_deserialize_state() would overwrite state,
timestamps, cgroup paths, pids, etc. There's no scenario where we want
this data applied on top of the canonical unit's own serialised state.
This fix also correctly handles unit precedence. For example, imagine
this scenario:
1. `b.service` is a valid, running unit defined in `/run`.
2. The sysadmin creates `ln -s .../a.service /etc/.../b.service`.
3. On reload, the new symlink in `/etc` overrides the unit in `/run`.
The new perspective from the manager side is that `b.service` is now an
alias for `a.service`.
In this case, systemd correctly abandons the old b.service unit, because
that's the intended general semantics of unit file precedence. We also
do that in other cases, like when a unit file in /etc/systemd/system/
masks a vendor-supplied unit file in /lib/systemd/system/, or when an
admin uses systemctl mask to explicitly disable a unit.
In all these scenarios, the configuration with the highest precedence
(in /etc/) is treated as the new source of truth. The old unit's
definition is discarded, and its running processes are (correctly)
abandoned. In that respect we are not doing anything new here.
Some may ask why we shouldn't just ignore the symlink if we think this
case will come up. I think there are multiple very strong reasons not to
do so:
1. It violates unit precedence. The unit design is built on a strict
precedence list. When an admin puts any file in /etc, they are
intentionally overriding everything else. If manager_load_unit were to
"ignore" this file based on runtime state, it would break this
fundamental precedent.
2. It makes daemon-reload stateful. daemon-reload is supposed to be a
simple, stateless operation, basically to read the files on disk and
apply the new configuration. But doing this would make daemon-reload
stateful, because we'd have to read the files on disk, but
cross-reference the current runtime state, and... maybe ignore some
files. This is complex and unpredictable.
3. It also completely ignores the user's intent. The admin has clearly
tried to replace the old service with an alias. Ignoring their
instruction is the opposite of what they want.
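The two-pass check described above can be modelled in a few lines (PID 1's real code is C; names here are simplified stand-ins): first collect all serialised names, then skip an entry whose loaded unit resolves to a different canonical id that also has its own serialised entry, while still applying the entry in the pure-rename case.

```python
# Model of the stale-alias skip described above; manager_load_unit()'s
# alias resolution is stood in for by a name -> canonical-id mapping.
def deserialize(entries: dict, canonical: dict) -> dict:
    """entries: serialised name -> state; canonical: name -> canonical id."""
    serialized_names = set(entries)          # first scan: all serialised names
    live = {}
    for name, state in entries.items():
        uid = canonical.get(name, name)      # alias resolution on load
        if uid != name and uid in serialized_names:
            continue                         # stale alias entry: skip it
        live[uid] = state                    # canonical entry or plain rename
    return live

entries = {
    "a.service": {"state": "running", "pid": 1234},
    "b.service": {"state": "dead"},          # stale: b is now an alias of a
}
canonical = {"b.service": "a.service"}
```

With these inputs, a.service keeps its running state regardless of serialisation order, while a rename (old name with no separate canonical entry) is still deserialised onto the new id.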
Yaping Li [Thu, 19 Mar 2026 21:10:49 +0000 (14:10 -0700)]
report: add per-service metrics to the varlink Metrics API
Added these metrics:
- ActiveTimestamp: active state transition timestamps (enter/exit)
- InactiveExitTimestamp: when the unit last left inactive state
- NRestarts: restart count
- StateChangeTimestamp: last state change timestamp
- StatusErrno: service errno status
Per-service cgroup metrics (CpuUsage, MemoryUsage, IOReadBytes,
IOReadOperations, TasksCurrent) are not included here as they are
gathered by the kernel and will be served by a separate process that
reads cgroup files directly, minimizing PID1 involvement.
Yaping Li [Fri, 27 Mar 2026 02:57:46 +0000 (19:57 -0700)]
report: add manager-level metrics to varlink Metrics API
Added these metrics:
- JobsQueued: number of jobs currently queued
- SystemState: overall system state (running, degraded, etc.)
- UnitsByLoadStateTotal: unit counts broken down by load state
- UnitsTotal: total number of units
Also bump METRICS_MAX from 1024 to 4096 to accommodate the new
per-unit metrics that are now collected.
vmspawn: Add --console-transport= option to select serial vs virtio-serial
Add a --console-transport= option that selects between virtio-serial
(the default, appearing as /dev/hvc0) and a regular serial port
(appearing as /dev/ttyS0 or /dev/ttyAMA0 depending on architecture).
This is primarily useful for testing purposes, for example to test
sd-stub's automatic console= kernel command line parameter handling. It
allows verifying that the guest OS correctly handles serial console
configurations without virtio.
When serial transport is selected, -serial chardev:console is used on
the QEMU command line to connect the chardev to the platform's default
serial device. This cannot be done via the QEMU config file as on some
platforms (e.g. ARM) the serial device is a sysbus device that can only
be connected via serial_hd() which is populated by -serial.
Daan De Meyer [Mon, 30 Mar 2026 17:23:48 +0000 (19:23 +0200)]
loop-util: create loop device for block devices with sector size mismatch
Previously, loop_device_make_internal() always used the block device
directly (via loop_device_open_from_fd()) for whole-device access,
regardless of sector size. This is incorrect when the GPT partition
table was written with a different sector size than the device reports,
as happens with CD-ROM/ISO boot via El Torito: the device has
2048-byte blocks but the GPT uses 512-byte sectors.
Restructure the sector size handling in loop_device_make_internal():
- Move GPT sector size probing (UINT32_MAX case) before the
block-vs-regular-file split so both paths share the same logic and
O_DIRECT handling. Check f_flags instead of loop_flags for O_DIRECT
detection, since we're probing the original fd before any reopening.
- For block devices, get the device sector size and compare it against
the resolved sector_size. Only use the block device directly when
sector sizes match. When they differ (probed GPT mismatch or explicit
sector size request), fall through to create a real loop device with
the correct sector size.
- Default sector_size=0 to the device sector size for block devices
(instead of always 512), so "no preference" correctly matches the
device's sector size.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
Daan De Meyer [Sat, 28 Mar 2026 13:12:25 +0000 (13:12 +0000)]
vmspawn: propagate $TERM from host into VM via kernel command line
When running in a console mode (interactive, native, or read-only),
propagate the host's $TERM into the VM by adding TERM= and
systemd.tty.term.hvc0= to the kernel command line.
TERM= is picked up by PID 1 and inherited by services on /dev/console
(such as emergency.service). systemd.tty.term.hvc0= is used by services
directly attached to /dev/hvc0 (such as serial-getty@hvc0.service) which
look up $TERM via the systemd.tty.term.<tty> kernel command line
parameter.
While systemd can auto-detect the terminal type via DCS XTGETTCAP, not
all terminal emulators implement this, so explicitly propagating $TERM
provides a more reliable experience. We skip propagation when $TERM is
unset or set to "unknown" (as is the case in GitHub Actions and some
other CI environments).
Previously this was handled by mkosi synthesizing the corresponding
kernel command line parameters externally.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
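The propagation rule above is simple enough to sketch; this is a Python model (not vmspawn's code), with the tty name hvc0 taken from the commit text:

```python
# Model of the $TERM propagation rule described above.
import os

def term_cmdline_args(tty: str = "hvc0", env=os.environ) -> list:
    term = env.get("TERM")
    if not term or term == "unknown":
        return []          # skip unset or "unknown" terminals (CI case)
    return [f"TERM={term}", f"systemd.tty.term.{tty}={term}"]
```

The first parameter is consumed by PID 1 and inherited on /dev/console; the second is the per-tty variant read by services attached directly to /dev/hvc0.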
Support `CopyBlocks=` for `Verity={hash,sig}` (#41393)
This enables deriving the minimum size of the `Verity=hash` partition
using the `Verity=` logic when the size of the `Verity=data` partition
is bigger than the `CopyBlocks=` target.
This enables using `Minimize=true` for an "installer image" and later
using sd-repart to install to a system with reserve space for future
updates by specifying `Size{Min,Max}Bytes=` only in the `Verity=data`
partition, without needing to hardcode the corresponding size for the
`Verity=hash` partition.
While not strictly necessary for `Verity=signature` partitions (since
they have a fixed size), there isn't much reason not to support it
either: you can then still specify `VerityMatchKey=` to indicate that
the partition is logically still part of that group of partitions.
---
Alternative to: https://github.com/systemd/systemd/pull/41156
Fixes https://github.com/systemd/systemd/issues/40995
This is a rebased version of #40936:
- the first few commits are #41003, which is greenlighted for merging
after the release
- then there's #40923
- and some changes on top
network-generator: support BOOTIF= and rd.bootif=0 options (#41028)
The network generator currently supports many of the options described
by dracut.cmdline(7), but not everything.
This commit adds support for the BOOTIF= option (and the related
rd.bootif= option) used in PXE setups.
This is implemented by treating BOOTIF as a special name/placeholder
when used as an interface name, and expecting a MAC address to be set in
the BOOTIF= parameter. The resulting .network file then uses MACAddress=
in the [Match] section, instead of Name=.
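The BOOTIF= value handling can be sketched as follows. Per dracut.cmdline(7), pxelinux passes BOOTIF=<hwtype>-<mac> with dash separators (a leading "01-" for Ethernet), which must become a colon-separated value for MACAddress= in the [Match] section. This is a Python model, not the generator's C code:

```python
# Model of BOOTIF= value normalisation as described above.
def bootif_to_mac(value: str) -> str:
    parts = value.lower().split("-")
    if len(parts) == 7:          # leading hardware-type byte present, drop it
        parts = parts[1:]
    if len(parts) != 6 or not all(len(p) == 2 for p in parts):
        raise ValueError(f"cannot parse BOOTIF value: {value!r}")
    int("".join(parts), 16)      # validate that all groups are hex
    return ":".join(parts)
```

The result is then emitted as MACAddress= in the generated .network file's [Match] section instead of Name=.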
basic/terminal-util: use getenv_terminal_is_dumb()
terminal_prepare_query() is called from terminal_get_size() which
operates on an explicitly passed fd — typically /dev/console opened
directly by PID 1 via reset_dev_console_fd(), or a service's TTY via
exec_context_apply_tty_size(). Using terminal_is_dumb() here is wrong
because it additionally checks on_tty(), which tests whether stderr is
a tty. PID 1's stderr may not be a tty (e.g. connected to kmsg or the
journal), causing terminal_is_dumb() to return true and skip the ANSI
query even though the fd we're operating on is a perfectly functional
terminal.
Use getenv_terminal_is_dumb() instead, which only checks $TERM, matching
what terminal_reset_ansi_seq() already does.
Also use it in terminal_get_cursor_position(), which also receives fds
to operate on.
basic/terminal-util: use non-blocking writes when sending ANSI sequences in terminal_get_size()
terminal_get_size() writes ANSI escape sequences (CSI 18 and DSR
queries) to the output fd to determine terminal dimensions. This is
called during early boot via reset_dev_console_fd() and from service
execution contexts via exec_context_apply_tty_size().
Previously, these writes used loop_write() on a blocking fd, which
could block indefinitely if the terminal is not consuming data — for
example on a serial console with flow control asserted, or a
disconnected terminal. This is the same problem that was solved for
terminal_reset_ansi_seq() in systemd/systemd#32369 by temporarily
setting the fd to non-blocking mode with a write timeout.
Apply the same pattern here: set the output fd to non-blocking in
terminal_get_size() before issuing the queries, and restore blocking
mode afterward. Change the loop_write() calls in
terminal_query_size_by_dsr() and terminal_query_size_by_csi18() to
loop_write_full() with a 100ms timeout so writes fail gracefully
instead of hanging.
Also introduce the CONSOLE_ANSI_SEQUENCE_TIMEOUT_USEC constant for the
timeouts used across all ANSI sequence writes and reads (vt_disallocate(),
terminal_reset_ansi_seq(), and the two size query functions). 333ms
is now used for all timeouts in terminal-util.c.
Also introduce a cleanup function for resetting a fd back to blocking
mode after it was made non-blocking.
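The save/restore pattern for the fd's blocking mode can be sketched with plain fcntl(). This is a simplified standalone version for illustration, not systemd's actual helper:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>

/* Sketch: toggle O_NONBLOCK on an fd, as done around the ANSI
 * size-query writes. Returns the previous O_NONBLOCK state (0 or 1)
 * on success so callers can restore it, or negative errno on failure. */
static int fd_set_nonblock(int fd, bool nonblock) {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0)
                return -errno;

        int old = !!(flags & O_NONBLOCK);

        int nflags = nonblock ? (flags | O_NONBLOCK) : (flags & ~O_NONBLOCK);
        if (nflags != flags && fcntl(fd, F_SETFL, nflags) < 0)
                return -errno;

        return old;
}
```

A caller would flip the fd to non-blocking before issuing the CSI/DSR queries and restore the saved state afterwards, typically via a cleanup handler so the restore also happens on error paths.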
Daan De Meyer [Fri, 27 Mar 2026 14:58:35 +0000 (14:58 +0000)]
vmspawn: use fstab.extra credential for runtime mounts instead of kernel cmdline
Switch runtime virtiofs mount configuration from systemd.mount-extra=
kernel command line parameters to the fstab.extra credential. This
avoids consuming kernel command line space (which is limited) and
matches the approach used by mkosi.
Each mount is added as an fstab entry in the format:
{tag} {destination} virtiofs {ro|rw},x-initrd.mount
If the user already specified an fstab.extra credential via
--set-credential= or --load-credential=, the virtiofs mount entries
are appended to it rather than conflicting.
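As an illustration (the tag and mount point here are made up), a single read-write virtiofs runtime mount would appear in the fstab.extra credential as a line like:

```
fsdev0 /srv/data virtiofs rw,x-initrd.mount
```

following the {tag} {destination} format above; a read-only mount would use ro instead of rw.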
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
Daan De Meyer [Sun, 29 Mar 2026 20:22:24 +0000 (20:22 +0000)]
vmspawn: add scsi-cd disk type for ISO/CD-ROM image support
Add DISK_TYPE_SCSI_CD to support attaching disk images as CD-ROM
drives, needed for testing El Torito ISO images built by
systemd-repart.
When --image-disk-type=scsi-cd is specified, the image is attached
with media=cdrom and readonly=on on the drive, using scsi-cd as the
device driver on the SCSI bus. This also works for --extra-drive=
with the scsi-cd: prefix.
The QEMU configuration matches the standard OVMF CD-ROM boot setup:
-drive if=none,media=cdrom,format=raw,readonly=on
-device virtio-scsi-pci
-device scsi-cd
When direct kernel booting with scsi-cd, if the kernel command line
contains "rw", append "ro" to override it since CD-ROMs are
read-only.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
Chris Down [Fri, 20 Mar 2026 09:48:49 +0000 (17:48 +0800)]
core: Prevent corrupting units from stale alias state on daemon-reload
During daemon-reload (or daemon-reexec), when a unit becomes an alias to
another unit, deserialising the alias's stale serialised state can
corrupt the canonical unit's live runtime state.
Consider this scenario:
1. Before reload:
- a.service is running
- b.service was stopped earlier and is dead
- Both exist as independent units
2. b.service is then replaced with a symlink to a.service, so that on
the next reload it becomes an alias of a.service
3. daemon-reload triggers serialisation. State file contains both units:
- a.service -> state=running, cgroup=/system.slice/a.service,
PID=1234, ...
- b.service -> state=dead, cgroup=(empty), no PIDs, ...
4. During deserialisation:
- Processes a.service: loads Unit A, deserialises -> state=RUNNING
- Processes b.service: manager_load_unit() detects symlink, returns
Unit A
- unit_deserialize_state(Unit A, ...) overwrites with b's dead state
5. The result is that:
- Unit A incorrectly shows state=dead despite PID 1234 still running
- If a.service has Upholds= dependents, catch-up logic sees
a.service should be running but is dead
- systemd starts a.service again -> PID 5678
- Two instances run: PID 1234 (left-over) and PID 5678 (new)
This bug is deterministic when serialisation orders a.service before
b.service.
The root cause is that manager_deserialize_one_unit() calls
manager_load_unit(name, &u) which resolves aliases via
unit_follow_merge(), returning the canonical Unit object. However, the
code doesn't distinguish between two cases when u->id differs from the
requested name from the state file. In the corruption case, we're
deserialising an alias entry and unit_deserialize_state() blindly
overwrites the canonical unit's fields with stale data from the old,
independent unit. The serialised b.service then overwrites Unit A's
correct live state.
This commit first scans the serialised unit names, then adds a check
after manager_load_unit():
if (!streq(u->id, name) && set_contains(serialized_units, u->id))
...
This detects when the loaded unit's canonical ID (u->id) differs from
the serialised name, indicating the name is now an alias for a different
unit and the canonical unit also has its own serialised state entry.
If the canonical unit does not have its own serialised state entry, we
keep the state entry. That handles cases where the old name is really
just a rename, and thus the old name is the only serialised state for
the unit. In that case there is no bug, because there is no separate
canonical state entry for the stale alias entry to overwrite.
Skipping is safe because:
1. The canonical unit's own state entry will be correctly deserialised
regardless of order. This fix only prevents other stale alias entries
from corrupting it.
2. unit_merge() has already transferred the necessary data. When
b.service became an alias during unit loading, unit_merge() already
migrated dependencies and references to the canonical unit.
3. After merging, the alias doesn't have its own runtime state. The
serialised data represents b.service when it was independent, which
is now obsolete once the canonical unit also has its own serialised
entry.
4. All fields are stale. unit_deserialize_state() would overwrite state,
timestamps, cgroup paths, pids, etc. There's no scenario where we
want this data applied on top of the canonical unit's own serialised
state.
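The skip condition can be sketched in isolation. Names are simplified here; the real code builds a Set of serialised unit names in a first pass and uses set_contains() rather than a linear scan:

```c
#include <string.h>
#include <stdbool.h>

/* Returns true if the serialised entry for 'name' should be skipped:
 * the loaded unit's canonical id differs from the serialised name
 * (i.e. the name is now an alias), and the canonical unit has its own
 * entry in the serialised state, so this entry is stale. */
static bool should_skip_stale_alias(const char *canonical_id, const char *name,
                                    const char **serialized_units, size_t n) {
        if (strcmp(canonical_id, name) == 0)
                return false; /* not an alias, deserialise normally */

        for (size_t i = 0; i < n; i++)
                if (strcmp(serialized_units[i], canonical_id) == 0)
                        return true; /* canonical unit has its own entry */

        return false; /* plain rename: the old name holds the only state */
}
```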
This fix also correctly handles unit precedence. For example, imagine
this scenario:
1. `b.service` is a valid, running unit defined in `/run`.
2. The sysadmin creates `ln -s .../a.service /etc/.../b.service`.
3. On reload, the new symlink in `/etc` overrides the unit in `/run`.
The new perspective from the manager side is that `b.service` is now an
alias for `a.service`.
In this case, systemd correctly abandons the old b.service unit, because
that's the intended general semantics of unit file precedence. We also
do that in other cases, like when a unit file in /etc/systemd/system/
masks a vendor-supplied unit file in /lib/systemd/system/, or when an
admin uses systemctl mask to explicitly disable a unit.
In all these scenarios, the configuration with the highest precedence
(in /etc/) is treated as the new source of truth. The old unit's
definition is discarded, and its running processes are (correctly)
abandoned. In that respect we are not doing anything new here.
Some may ask why we shouldn't just ignore the symlink when this case
comes up. I think there are multiple very strong reasons not to do so:
1. It violates unit precedence. The unit design is built on a strict
precedence list. When an admin puts any file in /etc, they are
intentionally overriding everything else. If manager_load_unit were
to "ignore" this file based on runtime state, it would break this
fundamental rule.
2. It makes daemon-reload stateful. daemon-reload is supposed to be a
simple, stateless operation: read the files on disk and apply the new
configuration. Ignoring some files based on runtime state would
instead require reading the files on disk, cross-referencing the
current runtime state, and then maybe skipping some of them. This is
complex and unpredictable.
3. It also completely ignores the user's intent. The admin has clearly
tried to replace the old service with an alias. Ignoring their
instruction is the opposite of what they want.
Daan De Meyer [Fri, 27 Mar 2026 13:38:47 +0000 (13:38 +0000)]
vmspawn: use PTY for native console to avoid QEMU O_NONBLOCK issue
QEMU's stdio chardev sets O_NONBLOCK on both stdin and stdout (see
chardev/char-stdio.c [1] and chardev/char-fd.c [2]). Since forked
processes share file descriptions, and on a terminal all three stdio
fds typically reference the same file description, this affects our
own stdio too.
Avoid this by using a PTY with chardev serial instead of chardev
stdio for native console mode, matching the approach already used
for interactive and read-only modes. The PTY forwarder shovels bytes
transparently between our stdio and QEMU's PTY using the new
PTY_FORWARD_DUMB_TERMINAL and PTY_FORWARD_TRANSPARENT flags, which
disable terminal decoration (background tinting, window title, OSC
context) and escape sequence handling (Ctrl-] exit, hotkeys)
respectively.
The chardev is configured with mux=on so the QEMU monitor remains
accessible via Ctrl-a c.
Also dedup CONSOLE_NATIVE, CONSOLE_READ_ONLY, and CONSOLE_INTERACTIVE
handling by using fallthrough, with the only differences being the
ptyfwd flags, mux setting, and monitor section.
stub: Determine the correct serial console from the ACPI device path
Instead of requiring exactly one serial device and assuming ttyS0,
extract the COM port index from the ACPI device path and use the
uart I/O port address format for the console= kernel argument.
On x86, the ACPI UID for PNP0501 (16550 UART) maps directly to the
COM port number: UID 0 = COM1 (0x3F8), UID 1 = COM2 (0x2F8), etc.
The I/O port addresses are fixed in the kernel (see
arch/x86/include/asm/serial.h). Using the console=uart,io,<addr>
format (see Documentation/admin-guide/kernel-parameters.txt) addresses
the UART by I/O port directly rather than relying on ttyS naming,
and also provides early console output before the full serial driver
loads.
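The UID-to-address mapping amounts to a small lookup table. The following standalone sketch (function name made up for illustration) shows the console= string that would result:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Fixed x86 COM port I/O addresses, as listed in the kernel's
 * arch/x86/include/asm/serial.h: COM1..COM4. */
static const uint16_t com_io_base[] = { 0x3F8, 0x2F8, 0x3E8, 0x2E8 };

/* Illustrative: format a console= argument for a PNP0501 ACPI UID.
 * Returns the string length, or -1 if the UID is out of range. */
static int format_uart_console(uint32_t uid, char *buf, size_t len) {
        if (uid >= sizeof(com_io_base) / sizeof(com_io_base[0]))
                return -1;
        return snprintf(buf, len, "console=uart,io,0x%x", com_io_base[uid]);
}
```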
Restrict the entire serial console auto-detection to x86. On non-x86
(e.g. ARM with PL011 UARTs), displays may be available without GOP
(e.g. simple-framebuffer via device tree), serial device indices are
assigned dynamically during probe rather than being fixed to I/O port
addresses, and the kernel has its own console auto-detection via DT
stdout-path.
When ConOut has no device path (ConSplitter), all text output handles
are enumerated. If multiple handles have PNP0501 UART nodes with
different UIDs, bail out rather than guessing.
Add ACPI_DP device path subtype, ACPI_HID_DEVICE_PATH struct, and
EISA_PNP_ID() macro to device-path.h for parsing ACPI device path
nodes. Remove MSG_UART_DP, device_path_has_uart(), count_serial_devices()
and proto/serial-io.h (no longer needed).
Daan De Meyer [Mon, 30 Mar 2026 19:50:27 +0000 (21:50 +0200)]
shared/gpt: add gpt_probe() for GPT header and partition entry reading
Add gpt_probe() which probes for a GPT partition table at various sector
sizes (512-4096) and optionally returns the header and partition entries.
Returns the detected sector size on success, 0 if no GPT was found, or
negative errno on error.
Refactor probe_sector_size() in dissect-image.c to be a thin wrapper
around gpt_probe().
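The probing loop can be sketched on an in-memory image. This is a simplification: the real gpt_probe() reads from a file descriptor and validates the full header and partition entries, not just the signature.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of probing for a GPT at several sector sizes.
 * The GPT header lives at LBA 1, i.e. at byte offset == sector size,
 * and starts with the 8-byte signature "EFI PART". Returns the
 * detected sector size, or 0 if no GPT signature was found. */
static uint32_t gpt_probe_sector_size(const uint8_t *image, size_t size) {
        for (uint32_t ss = 512; ss <= 4096; ss *= 2) {
                if ((size_t) ss + 8 > size)
                        break;
                if (memcmp(image + ss, "EFI PART", 8) == 0)
                        return ss;
        }
        return 0;
}
```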
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
If a unit has FileDescriptorStorePreserve=yes, we'll keep its FDs
around in case it starts again. But if there are no reverse
dependencies referencing it, we'll also GC the unit and lose all the
FDs, which defeats the point of the setting (which is opt-in).
Do not GC units that have FDs stored, to avoid this.