* 94af257c72 d/t/control: pull libmicrohttpd-dev for unit-tests suite
* 08263f18a4 d/t/control: pull libfdisk-dev for test suites
* e54175a0a4 Install new files for upstream build
mkosi: trim verity.sig json files to remove NUL padding before passing to jq
jq started rejecting input that contains NUL bytes to fix some security
issues, so we need to trim the verity.sig JSON files, which come out of
the GPT partition contents with NUL-byte padding.
‣ Running postinstall script /home/runner/work/systemd/systemd/mkosi/mkosi.postinst.chroot…
jq: parse error: Invalid numeric literal at EOF at line 1, column 16384
‣ "/work/postinst final" returned non-zero exit code 5.
sd-stub: make initrd passing incremental + other EFI prep work for #41543 (#41748)
This is split out of #41543, but I think it makes sense on its own.
It primarily does one thing: ensure that initrds installed via the Linux EFI protocol are incremental in behaviour (i.e. we read the previously set initrd and combine it with ours). So far we'd simply not install any initrds at all in this case, which would break stuff.
This is preparatory for #41543, but is generally the better, safer behaviour.
This also contains three minor changes which are purely prep work for #41543 but shouldn't hurt in the big picture.
Translations update from [Fedora
Weblate](https://translate.fedoraproject.org) for
[systemd/main](https://translate.fedoraproject.org/projects/systemd/main/).
vmspawn: catch unsupported growing of qcow2 images (#41654)
For qcow2 images it's not enough to grow the file. Since we probably
don't want to shell out to qemu-img either let's just error out to make
the user aware that growing needs to be done manually.
Turns out that the real reason behind this failure is that the machine was
under heavy load due to a busy-loop from the stub init. The cause of
this is a bug in bash, where running commands that fork (i.e. not
built-ins) can cause a permanent busy-loop due to a desync in trap
handling if you send the signals to the bash process _just right_:
ci: Restore severity prefix on claude-review inline comments
Commit a65ebc3ff9 ("claude-review: improve review quality for large
PRs") dropped the `Claude: **<severity>**: ` prefix from posted inline
comments on the theory that Claude was also adding the severity into
`body`, producing duplicates. But nothing in the prompt or schema
actually asks the subagent to include severity in `body` — severity
is a separate structured field. The result is that inline comments
no longer show must-fix/suggestion/nit classification.
Restore the prefix in the posting step, and add an explicit instruction
to the subagent prompt telling it not to repeat severity inside `body`
so the two don't collide.
Signed-off-by: Christian Brauner (Amutable) <brauner@kernel.org>
boot: introduce a common structure for cpio target dirs
There are only a few target dirs we place resources in when generating
on-the-fly initrd cpios. These dirs have very specific attributes.
Instead of repeating this everywhere, let's encapsulate them in a new
explicit structure that we can reuse in various places.
This is preparation for also placing extra resources of Type #1 entries
in them, without having to encode access modes redundantly at multiple
places.
stub: load previous initrd that is already configured, too
This changes the initrd combination logic to also include any initrd
already configured via the "LINUX_INITRD_MEDIA_GUID" device in the
initrd we pass to the linux kernel.
Or in other words: with this systemd-stub starts operating purely
incrementally: it will extend any previously installed initrd with its own
stuff, so that both the previous initrd(s) and systemd-stub's are in
effect.
boot: change initrd_register() so that it replaces any previously registered LINUX_INITRD_MEDIA device
So far, if an initrd is already registered we'd silently not register
one again. Let's make this more reliable and systematic, and register
ours, overriding whatever was previously set.
(Note, in a later commit we'll incorporate any previously set initrd,
which hence makes this all incremental instead of destructive as it
might appear now)
Convert fdisk-util to the dlopen pattern used by other optional shared
libraries in libshared. Declare the libfdisk API entry points with
DLSYM_PROTOTYPE, resolve them in a dlopen_fdisk() helper, and call the
sym_* wrappers from the homework, sysupdate and repart binaries that
use them.
With this in place fdisk-util can live in libshared itself, linked only
against libfdisk's headers (via libfdisk_cflags). The libshared_fdisk
convenience library and the libfdisk link dependency on systemd-homework,
systemd-sysupdate, systemd-repart and systemd-repart.standalone go away.
Also add a dlopen_fdisk() check to test-dlopen-so.
Philip Withnall [Mon, 20 Apr 2026 17:02:42 +0000 (18:02 +0100)]
sysupdate: Emit READY=1 status when installing
`READY=1` is already correctly emitted when acquiring an update, but was
not emitted when subsequently installing that update.
That meant that the state tracking code in `sysupdated` and hence
`updatectl` could not properly report the progress of the install
operation, and hence printed “Already up-to-date” after a successful
update installation, rather than “Done”.
Add a test to catch this in future.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Fixes: https://github.com/systemd/systemd/issues/41502
core: implement Kill/Automount/Mount Context/Runtime for io.systemd.Unit.List (#39391)
The PR implements the following objects + tests for
`io.systemd.Unit.List`:
- `KillContext`
- `AutomountContext`
- `AutomountRuntime`
- `MountContext`
- `MountRuntime`
It's a continuation of the following PRs:
* https://github.com/systemd/systemd/pull/37432
* https://github.com/systemd/systemd/pull/37646
* https://github.com/systemd/systemd/pull/38032
* https://github.com/systemd/systemd/pull/38212
Convert curl-util to the dlopen pattern used by other optional shared
libraries in libshared (libarchive, pcre2, idn, ...). Declare the curl
API entry points with DLSYM_PROTOTYPE, resolve them in a dlopen_curl()
helper, and call the sym_* wrappers from callers. curl_glue_new() now
loads the library on first use, so consumers going through CurlGlue
pick this up automatically; journal-upload and report-upload call
dlopen_curl() directly since they use curl without the glue layer.
With this in place curl-util can live in libshared itself, linked only
against libcurl's headers (via libcurl_cflags). The libcurlutil_static
convenience library and the libcurl link dependency on systemd-imdsd,
systemd-pull, systemd-journal-upload and systemd-report go away.
Also move the easy_setopt() helper macro next to the DLSYM declarations
so all consumers use a single sym-prefixed definition, and add a
dlopen_curl() check to test-dlopen-so.
json-stream: hide JsonStreamQueueItem as an implementation detail
The json-stream API previously exposed JsonStreamQueueItem and several
functions operating on it (json_stream_make_queue_item(),
json_stream_enqueue_item(), json_stream_queue_item_free(),
json_stream_queue_item_get_data()). These existed solely to support
sd-varlink's "defer-and-modify" pattern for streaming replies, where a
reply is held back so its "continues" field can be set before
transmission. This is a varlink protocol concern that should not leak
into the generic transport layer.
Similarly, the fd pushing API (json_stream_push_fd(),
json_stream_reset_pushed_fds()) and the pushed_fds state lived inside
JsonStream, even though fd-to-message association is a protocol-level
concern managed entirely by sd-varlink.
Rework the API so that:
- JsonStreamQueueItem and all its functions become static to
json-stream.c. The only output API is now json_stream_enqueue_full()
(accepting explicit fds) and the inline json_stream_enqueue() wrapper
for the common no-fds case.
- The pushed_fds state moves from JsonStream into sd_varlink, where
sd_varlink_push_fd() and sd_varlink_reset_fds() manage it directly.
- The deferred reply in sd-varlink changes from a JsonStreamQueueItem*
to a plain sd_json_variant* plus a separate previous_fds/n_previous_fds
pair, keeping the protocol-specific bookkeeping in sd-varlink where it
belongs.
- A new varlink_enqueue() helper wraps json_stream_enqueue_full() with
the varlink connection's pushed fds, transferring fd ownership to the
queue item on success.
This reworks the ESP/XBOOTLDR logic to pin the ESP/XBOOTLDR via an fd,
and return that as an optional return parameter.
So far we only pinned the parent dir of the ESP/XBOOTLDR, which was
useful when verifying that ESP/XBOOTLDR is actually a mount point by
comparing mount ids. This however became obsolete with commit
a98a6eb95cc980edab4b0f9c59e6573edc7ffe0c. Hence, let's clean this up,
and pin the inode we really care about and return it.
chase: tighten flags checks in chase_and_unlinkat()
Some flags don't reasonably apply to chase_and_unlinkat() (because we
open the parent inode of an inode to delete, which is always a dir),
hence let's catch these flags when misused.
(I ran into this, and it was very confusing to debug, hence let's make
it easier)
Newer tar started using openat2() via open_subdir() to address
CVE-2025-45582 [0]. Now, gnulib, which tar uses, provides the openat2()
syscall in two ways [1]:
1) If glibc doesn't provide openat2(), it provides its own version in
openat2.c, that tries to call openat2() syscall first, and if it
returns ENOSYS, it emulates the function in userspace.
2) If glibc provides openat2(), it uses that directly, without providing
any fallback on ENOSYS.
Quite recently our test suite started calling nspawn with
--suppress-sync=yes. This means that we call seccomp_suppress_sync(),
which eventually calls block_open_flag(), that blocks the openat2()
syscall completely and refuses it with ENOSYS as this syscall can't be
sensibly filtered (see the openat2()-relevant comments in
block_open_flag() and seccomp_restrict_sxid()). And when glibc provides
openat2(), there's no fallback, so the ENOSYS bubbles up to the user as:
TEST-25-IMPORT.sh[163]: + tar xzf /var/tmp/scratch.tar.gz
TEST-25-IMPORT.sh[163]: tar: ./adirectory/athirdfile: Cannot open: Function not implemented
TEST-25-IMPORT.sh[163]: tar: Exiting with failure status due to previous errors
Let's mitigate this by re-enabling sync for TEST-25-IMPORT, at least for
now.
In nspawn.c's run_container() the child_netns_fd = receive_one_fd(...)
failure path logged 'r' instead of the negative errno returned in
child_netns_fd, so the actual error from receive_one_fd was being
overwritten by whatever 'r' happened to hold. The other receive_one_fd
call sites in the same function use the returned fd variable directly
(mntns_fd, etc.), so align this one.
In shared/nsresource.c's nsresource_add_cgroup() the cgroup_fd_idx =
sd_varlink_push_dup_fd(...) failure path logged userns_fd_idx, which
is the previous successful push's index, not the negative errno we
just got from pushing cgroup_fd. Log cgroup_fd_idx instead.
Both were flagged by static analysis (#41709) and match the immediately
preceding userns_fd-path pattern that was presumably copy-pasted.
compress: gracefully handle a truncated ZSTD frame
If a journal file contains a truncated ZSTD frame (i.e. a frame with
Frame_Content_Size > 0, but with not enough data in Data_Block),
ZSTD_decompressStream() would return a non-zero, non-error value. This
would then skip the error path in the ZSTD_isError() branch and we'd hit
the following assert:
Let's handle this situation gracefully and return EBADMSG instead.
Also, add another journalctl invocation to the corrupted-journals test
that goes through the sd_journal_get_data() -> decompress_startswith_zstd()
code path which, among other things, covers the issue when run on the
provided journal file.
strxcpyx: add a paranoia check for vsnprintf()'s return value
vsnprintf() can, under some circumstances, return a negative value,
namely during encoding errors when converting wchars to multi-byte
characters. This would then wreak havoc in the arithmetic we do after
the vsnprintf() call. However, since we never do any wchar shenanigans
in our code, it should never happen.
Let's encode this assumption into the code as an assert(), similarly to
how we already do it in other places (like strextendf_with_separator()).
iovec-wrapper: rename iovw_append() to iovw_extend()
This makes the naming consistent with strv_extend().
This also:
- introduces a tiny iovw_extend_iov() wrapper,
- refuses when the source and target point to the same object,
- checks the final count before extending in iovw_extend_iovw().
repart: add EncryptKDF= option for LUKS2 partitions
systemd-repart currently creates LUKS2 encrypted partitions using
libcryptsetup's default KDF (Argon2id), which requires ~1GB of memory
during key derivation. This is too much for memory-constrained
environments such as kdump with limited crashkernel memory, where
luksOpen fails due to insufficient memory.
Add an EncryptKDF= option to repart.d partition definitions that allows
selecting the KDF type. Supported values are:
- "argon2id" — Argon2id with libcryptsetup-benchmarked parameters
- "pbkdf2" — PBKDF2 with libcryptsetup-benchmarked parameters
- "minimal" — PBKDF2 with SHA-512, 1000 iterations, no benchmarking,
matching the existing cryptsetup_set_minimal_pbkdf() behaviour used
for TPM2-sealed keys
When not specified, the libcryptsetup default (argon2id) is used,
preserving existing behaviour.
The KDF type is applied via sym_crypt_set_pbkdf_type() after
sym_crypt_format() and before any keyslots are added.
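A hypothetical repart.d drop-in using the new option might look like this (the file name and the other settings are illustrative):

```ini
# /etc/repart.d/50-root.conf (illustrative example)
[Partition]
Type=root
Format=ext4
Encrypt=key-file
# New option from this change: pick a lighter KDF for
# memory-constrained environments such as kdump.
EncryptKDF=pbkdf2
```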
These don't make too much sense on their own, but they also don't really
hurt. They are preparation for #41543, but in order to make things
easier to review I split these four commits out, since they are not
directly part of what the PR shall achieve.
The NOTES section in os-release(5) contains some unusual formatting.
Switch the function and ulink tags and remove a newline within the ulink
text to keep the entry's formatting in sync with the others. This also
preserves the formatting within the text itself.
mountpoint-util: initialize mnt_id for name_to_handle_at(AT_HANDLE_MNT_ID_UNIQUE)
Suppress the following message:
```
$ sudo valgrind --leak-check=full build/networkctl dhcp-lease wlp59s0
==175708== Memcheck, a memory error detector
==175708== Copyright (C) 2002-2024, and GNU GPL'd, by Julian Seward et al.
==175708== Using Valgrind-3.26.0 and LibVEX; rerun with -h for copyright info
==175708== Command: build/networkctl status wlp59s0
==175708==
==175708== Conditional jump or move depends on uninitialised value(s)
==175708== at 0x4BC33D1: inode_same_at (stat-util.c:610)
==175708== by 0x4BF1972: inode_same (stat-util.h:86)
==175708== by 0x4BF48FE: running_in_chroot (virt.c:817)
==175708== by 0x4B16643: running_in_chroot_or_offline (verbs.c:37)
==175708== by 0x4B175CE: _dispatch_verb_with_args (verbs.c:136)
==175708== by 0x4B17868: dispatch_verb (verbs.c:160)
==175708== by 0x407CBB: networkctl_main (networkctl.c:249)
==175708== by 0x407D06: run (networkctl.c:263)
==175708== by 0x407D39: main (networkctl.c:266)
==175708==
```
Not sure if it is an issue in valgrind or glibc, but at least there is
nothing we can do except for working around it.
sleep: convert to "verbs", using the new option+verb macros
We had verb-like dispatch, but done in a manual way. We have a fairly
heavy preparation step that wraps all operations in the same way, so we
don't want to call the operation implementation functions directly. But
let's use the generic verb machinery and pass the state directly using
the userdata pointer and the recently added verb data pointer.
--help output is substantially the same, but options are now in a new
section below the verbs.
bootctl: make bootspec-util.c independent of bootctl.c
This changes boot_config_load_and_select() to also take the root path as
input, just like the ESP and XBOOTLDR path.
This has the benefit of making the whole file independent of bootctl.c,
which means we can link it into a separate test, and is preparatory work
for a follow-up commit.
If unprivileged_mode is false then verify_esp() will treat access errors
like any other and log about them. Here we set it to false, hence
there's no point in logging a second time.
boot: never auto-boot a menu entry with the non-default profile
When figuring out which menu entry to pick by default, let's not
consider any with a profile number > 0. This reflects the fact that
additional profiles are generally used for
debug/recovery/factory-reset/storage target mode boots, and those should
never be auto-selected. Hence do a simple check: if profile != 0, simply
do not consider the entry as a default.
We might eventually want to beef this up, and add a property one can set
in the profile metadata that controls this behaviour, but for now let's
just do this simple fix.
namespace: don't log misleading error in the r > 0 path
fd_is_fs_type() returns < 0 for errors, 0 for false, and > 0 for true,
so in the r > 0 branch we'd most likely report EPERM together with the
error message, which is misleading.
Allows appending kernel command line arguments, like
kexec-tools does. This is especially needed for the integration
tests, as mkosi adds a bunch of options that are needed for the
test suite to work, and it breaks without them.
Kai Lüke [Thu, 16 Apr 2026 06:10:02 +0000 (15:10 +0900)]
vmspawn: catch unsupported growing of qcow2 images
For qcow2 images it's not enough to grow the file. Since we probably
don't want to shell out to qemu-img either let's just error out to make
the user aware that growing needs to be done manually.
The interface of this program was rather strange. It took an option that
specified what to do, but that option behaved exactly like a verb. Let's
change the interface to the more modern style with verbs. Since the
interface was documented in the man page, provide a compat shim to
handle the old options.
(In practice, I doubt anybody will notice the change. But since it was
documented, it's easier to provide the compat shim than to think too
much about whether it is actually needed. I think we can drop it in a
year or so.)
Extend fake-report-server.py with optional --cert, --key, --port
arguments for TLS support. Add a test case that generates a
self-signed certificate and tests HTTPS upload of metrics and facts.
Also exercise the --header param.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>
Add a fake HTTP server (fake-report-server.py) that accepts JSON POST
requests and validates the report structure, and test cases in
TEST-74-AUX-UTILS.report.sh that exercise plain HTTP upload of both
metrics and facts.
Co-developed-by: Claude Opus 4.6 <noreply@anthropic.com>