Luca Boccassi [Tue, 12 Aug 2025 22:59:15 +0000 (23:59 +0100)]
test-cgroup: cleanup test cgroup
One test cgroup gets left behind by the test, as the test moves itself
into it. Move back to the original cgroup at the end and clean up the
test cgroup.
This fixes a failure when running the test first as root and then as an
unprivileged user (the initial cleanup fails because the leftover test
cgroup is owned by root).
Matteo Croce [Tue, 12 Aug 2025 16:53:59 +0000 (18:53 +0200)]
core: suppress warning
Do not define `exec_context_get_tty_for_pam` if PAM support is
disabled, to avoid the following warning:
```
../src/core/exec-invoke.c:1231:12: warning: ‘exec_context_get_tty_for_pam’ defined but not used [-Wunused-function]
1231 | static int exec_context_get_tty_for_pam(const ExecContext *context, char **ret) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
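In sketch form, the guard looks like this (HAVE_PAM is systemd's meson-defined feature macro; the function body is elided):
```
#if HAVE_PAM
static int exec_context_get_tty_for_pam(const ExecContext *context, char **ret) {
        /* ... only needed by the PAM session setup code, which is itself
         * compiled only when PAM support is enabled ... */
}
#endif
```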
Marcos Alano [Sat, 9 Aug 2025 10:52:27 +0000 (07:52 -0300)]
Enable KEY_PERFORMANCE key present on Linux 6.17
Note, this change does not require that the kernel running on the host
be 6.17 or newer. However, systemd-udevd needs to be built against
kernel headers that define KEY_PERFORMANCE, and the relevant header was
already updated by the previous commit.
Found using linkchecker.
For virtiofsd, the man page is maintained upstream, but doesn't seem to be
available in any of the usual places. So let's link to the Debian version.
As for systemd.filter, I have no idea what it is.
Vasiliy Kovalev [Sun, 10 Aug 2025 07:24:19 +0000 (10:24 +0300)]
hwdb: Add launch emoji keyboard mapping for Asus M1607KA
By default, pressing Fn+F8 produces a scancode that is mapped to KEY_BLUETOOTH
(in evtest: MSC_SCAN 7e -> KEY_BLUETOOTH). Windows/the manufacturer may
intercept the same scancode to launch the emoji keyboard.
On Linux we only get the raw KEY_BLUETOOTH code, which does not match the
key's intended function.
prog1 is already reserved by default for launching MyAsus (a Windows
application) via the Fn+F12 combination, so use prog2 instead.
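The resulting hwdb entry looks roughly like this (the match pattern below is illustrative; the real entry matches on the machine's DMI vendor/product strings):
```
# 60-keyboard.hwdb: remap scancode 0x7e from KEY_BLUETOOTH to prog2
evdev:atkbd:dmi:bvn*:bvr*:bd*:svnASUS*:pn*M1607KA*:*
 KEYBOARD_KEY_7e=prog2
```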
Yu Watanabe [Fri, 8 Aug 2025 01:06:22 +0000 (10:06 +0900)]
test-network: also save the current state of socket units for resolved and stop them
Silence the following warning:
```
Stopping 'systemd-resolved.service', but its triggering units are still active:
systemd-resolved-varlink.socket, systemd-resolved-monitor.socket
```
logging: Improve logging messages related to NFTSet.
The 'NFTSet' directive in various units adds and removes entries in nftables
sets; it does not add or remove entire sets. The log messages should
indicate that an entry was added or removed, not that a set was added or
removed.
Luca Boccassi [Wed, 6 Aug 2025 13:33:10 +0000 (14:33 +0100)]
test: use Europe/Helsinki instead of Europe/Kyiv in test-calendarspec
Europe/Kyiv was added somewhat recently. Use Europe/Helsinki, which is
much older and thus works with older tzdata releases such as 2022a.
Otherwise the test fails:
line 193: "2016-03-27 03:17:00" new_tz=:Europe/Kyiv
At: Sun 2016-03-27 03:17:00.000000 Europe
Assertion 'r == -ENOENT' failed at src/test/test-calendarspec.c:70, function _test_next(). Aborting.
Luca Boccassi [Wed, 6 Aug 2025 13:07:26 +0000 (14:07 +0100)]
test: fix repeated runs of test-oomd-util by clearing test cgroup
If the test is run multiple times in a row without an ephemeral
scope (e.g. in a non-booted nspawn container), subsequent runs fail
because the test cgroup is not cleared and the previous xattrs are still
present. Trim the test cgroup before and after the test.
Luca Boccassi [Wed, 6 Aug 2025 11:41:01 +0000 (12:41 +0100)]
seccomp: fix build with glibc < 2.39
../src/shared/seccomp-util.c: In function ‘seccomp_restrict_sxid’:
../src/shared/seccomp-util.c:2228:25: error: ‘__NR_fchmodat2’ undeclared (first use in this function); did you mean ‘fchmodat2’?
2228 | __NR_fchmodat2,
| ^~~~~~~~~~~~~~
| fchmodat2
The override/sys/syscalls.h header needs to be included before the seccomp
headers, otherwise the internal seccomp preprocessor machinery will
not see the local definitions: the local ifdef will be true, but
seccomp's own definitions will be empty.
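In sketch form, the required include ordering (header path as described above, shown here only for illustration):
```
/* The local syscall-number overrides have to be visible before any libseccomp
 * header, so that __NR_fchmodat2 is defined even on glibc < 2.39; otherwise
 * libseccomp's fallback machinery has already decided the number is missing. */
#include "override/sys/syscalls.h"   /* local __NR_* definitions first */
#include <seccomp.h>                 /* then libseccomp */
```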
Yu Watanabe [Mon, 4 Aug 2025 17:44:18 +0000 (02:44 +0900)]
udev/spawn: continue to read stdout even if the result buffer is full
Previously, when the stdout of a spawned process (e.g. dmi_memory_id) was
truncated, the event source was not re-enabled. That left the process
blocked on writing once the stdout pipe buffer was full, and the process
eventually timed out:
```
Spawned process 'dmi_memory_id' [1116] timed out after 2min 59s, killing.
Process 'dmi_memory_id' terminated by signal KILL.
```
The solution is to keep the event source enabled so that on_spawn_io()
can continue reading from stdout. When the result buffer is full, the
local `buf` variable is used to drain the remaining output.
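A simplified, self-contained sketch of the draining idea (a generic POSIX read loop; the real logic lives in on_spawn_io() and uses sd-event):
```
#include <errno.h>
#include <unistd.h>

/* Keep reading the child's stdout even once 'result' is full, so the child
 * never blocks on a full pipe; the overflow goes into a throwaway buffer. */
static int drain_child_stdout(int fd, char *result, size_t result_size, size_t *fill) {
        for (;;) {
                char buf[4096];
                size_t space = result_size > *fill ? result_size - *fill : 0;
                char *p = space > 0 ? result + *fill : buf;
                ssize_t n;

                n = read(fd, p, space > 0 ? space : sizeof(buf));
                if (n < 0)
                        return errno == EAGAIN ? 0 : -errno; /* 0: wait for the next IO event */
                if (n == 0)
                        return 1;                            /* EOF: the child closed its stdout */

                if (space > 0)
                        *fill += (size_t) n;
                /* otherwise the data was read only to unblock the child, and is dropped */
        }
}
```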
E.g. dns_question_contains_key() may return a negative errno, hence we
should not use ASSERT_TRUE/FALSE() for the function.
This also includes a bunch of cleanups:
- call functions inside ASSERT_NOT_NULL(),
- add short comments for constant function arguments,
- merge several test cases,
- use memstream rather than temporary files.
Yu Watanabe [Fri, 1 Aug 2025 19:42:33 +0000 (04:42 +0900)]
test-dns-answer: fix misuse of ASSERT_TRUE/FALSE()
E.g. dns_answer_match_key() may return a negative errno, hence we should
use ASSERT_OK_POSITIVE/ZERO() instead.
This also includes a bunch of cleanups:
- call functions inside ASSERT_NOT_NULL(),
- add short comments for constant function arguments,
- merge several test cases,
- use memstream rather than temporary files.
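A minimal before/after sketch of the pattern (the arguments are illustrative, not taken from the actual test):
```
/* Before: a negative errno return is non-zero and therefore satisfies
 * ASSERT_TRUE() even though the call failed. */
ASSERT_TRUE(dns_answer_match_key(answer, key, /* ret_flags= */ NULL));

/* After: assert that the call succeeded and returned the expected value. */
ASSERT_OK_POSITIVE(dns_answer_match_key(answer, key, /* ret_flags= */ NULL));
ASSERT_OK_ZERO(dns_answer_match_key(answer, other_key, /* ret_flags= */ NULL));
```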
Michal Sekletar [Thu, 31 Jul 2025 16:26:09 +0000 (18:26 +0200)]
sd-bus/bus-track: use install_callback in sd_bus_track_add_name()
Previously we didn't provide any install_callback to
sd_bus_add_match_async(), so if the AddMatch() method call timed out we
destroyed the bus connection. This is overly aggressive; simply
updating the sd_bus_track object accordingly should be enough.
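A rough sketch of the public API shape involved (the match string and callbacks are placeholders, not the actual bus-track internals):
```
#include <systemd/sd-bus.h>

static int on_name_owner_changed(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        /* regular match callback: fires when the tracked name changes owner */
        return 0;
}

static int on_match_installed(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        /* install callback: handles the reply (or timeout error) of the AddMatch()
         * call itself; on failure, drop the tracked name instead of tearing down
         * the whole bus connection */
        return 0;
}

static int add_tracking_match(sd_bus *bus, sd_bus_slot **slot, const char *match) {
        return sd_bus_add_match_async(bus, slot, match,
                                      on_name_owner_changed,
                                      on_match_installed, /* previously NULL here */
                                      NULL);
}
```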
Fabian Vogt [Fri, 1 Aug 2025 08:59:09 +0000 (10:59 +0200)]
virt: Actually use DMI detection on RISC-V as well
When booting Linux with ACPI in QEMU, the device tree is not used and
the DT-based detection does not work. DMI values are accurate, though,
and indicate QEMU.
While detect_vm_dmi_vendor() was enabled for RISC-V in a previous commit,
that commit missed detect_vm_dmi(), so the DMI detection was never
actually used. Fix that.
TEST-13-NSPAWN: wait for a few seconds after markers are found
Otherwise, the scope that the nspawn container belongs to may be
removed before the grandchild process of systemd-machined exits, and
that grandchild may be SIGKILLed:
```
[ 100.829613] systemd-machined[678]: Successfully forked off '(sd-bindmnt)' as PID 2962.
[ 100.833366] systemd-nspawn[2953]: Inner child finished, invoking payload.
[ 100.836111] (sd-bindmnt)[2962]: Skipping PR_SET_MM, as we don't have privileges.
[ 100.836401] (sd-bindmnt)[2962]: Successfully forked off '(sd-bindmnt-inner)' as PID 2964.
[ 100.846498] (sd-bindmnt)[2962]: (sd-bindmnt-inner) terminated by signal KILL.
[ 100.848846] systemd[1]: machine-TEST\x2d13\x2dNSPAWN.machinectl\x2dbind.7ye.scope: cgroup is empty
[ 100.849303] systemd[1]: machine-TEST\x2d13\x2dNSPAWN.machinectl\x2dbind.7ye.scope: Deactivated successfully.
[ 100.849317] systemd[1]: machine-TEST\x2d13\x2dNSPAWN.machinectl\x2dbind.7ye.scope: Changed running -> dead
[ 100.849752] systemd[1]: machine-TEST\x2d13\x2dNSPAWN.machinectl\x2dbind.7ye.scope: Consumed 91ms CPU time, 1.3M memory peak.
[ 100.850399] systemd-machined[678]: (sd-bindmnt) failed with exit status 1.
[ 100.850414] systemd-machined[678]: Child failed.
[ 100.854574] systemd-machined[678]: Failed to mount /tmp/marker-varlink on /tmp/marker-varlink in the namespace of machine 'TEST-13-NSPAWN.machinectl-bind.7ye': Protocol error
```
Yu Watanabe [Fri, 1 Aug 2025 04:37:11 +0000 (13:37 +0900)]
basic: do not use PROJECT_FILE in one more generated file
Fixes the following build warning:
```
In file included from ../../../home/runner/work/systemd/systemd/src/basic/assert-util.h:4,
from ../../../home/runner/work/systemd/systemd/src/basic/forward.h:17,
from ../../../home/runner/work/systemd/systemd/src/basic/filesystems.h:4,
from src/basic/filesystem-sets.c:2:
src/basic/filesystem-sets.c: In function ‘fs_in_group’:
../../../home/runner/work/systemd/systemd/src/fundamental/assert-fundamental.h:76:9: warning: array subscript 42 is above array bounds of ‘const char[28]’ [-Warray-bounds=]
76 | log_assert_failed_unreachable(PROJECT_FILE, __LINE__, __func__)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/basic/filesystem-sets.c:559:18: note: in expansion of macro ‘assert_not_reached’
559 | default: assert_not_reached();
| ^~~~~~~~~~~~~~~~~~
```
journald: add debug logs around offlining/archiving/rotating/varlink operations
It is not easy to understand what happens to a journal file
even with debug logs enabled. Add more debug messages around
user-initiated operations to make it possible to follow the flow of
operations.
- drop unused variables,
- adjust the number of partitions, iterations, and timeout,
- clear partitions after each test case finishes,
- check that unnecessary devlinks are removed,
- several coding style cleanups.
TEST-64-UDEV-STORAGE: several fixlets for check_device_units()
Suppress the following warnings in case check_device_unit() fails, e.g.
when the device has already been removed:
```
sed: couldn't write 130 items to stdout: Broken pipe
awk: write failure (Broken pipe)
awk: close failed on file "/dev/stdout" (Broken pipe)
```
udev/node: check the target device node of devlink on removal
If the removal of the devlink is requested because this is a 'remove' event,
it is obvious that the devlink will no longer be owned by this device.
Let's read the devlink: if it points to our device node, then we need
to update it; if it points to another device node, then it is already
owned by another device, hence we should not touch it and should keep it as is.
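A minimal, self-contained sketch of the check (plain libc; the real code also has to resolve relative symlink targets):
```
#include <errno.h>
#include <limits.h>
#include <string.h>
#include <unistd.h>

/* Returns > 0 if the symlink at 'devlink' currently points at 'devnode',
 * 0 if it points at some other node (i.e. it is owned by another device
 * and must be left alone), or a negative errno on error. */
static int devlink_points_to(const char *devlink, const char *devnode) {
        char target[PATH_MAX];
        ssize_t n;

        n = readlink(devlink, target, sizeof(target) - 1);
        if (n < 0)
                return -errno;
        target[n] = '\0';

        return strcmp(target, devnode) == 0;
}
```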
Jan Čermák [Wed, 30 Jul 2025 17:18:13 +0000 (19:18 +0200)]
journal-gatewayd: fix busy loop when following way beyond journal end
Fix a regression introduced in a7bfb9f76b96888d60b4f287f29dcbf758ba34c0,
where a busy loop could be triggered by a request to follow logs with a
range header whose num_skip value points beyond the end of the journal.
In that case the reader callback returns 0 and is called again
immediately, usually causing an endless loop that does not recover even
when new journal entries are added.
The bug does not occur if num_skip is not set: in that case, if no
journal entries matching the filters are added, the tight loop is
avoided by sd_journal_wait().
To fix the issue, when no matching journal entries are available, set a
flag and reuse the backoff mechanism based on sd_journal_wait().
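In terms of the public sd-journal API, the backoff boils down to something like this (simplified; the gateway keeps the "no matching entries yet" state in a flag on the request):
```
#include <systemd/sd-journal.h>

/* Instead of returning 0 from the reader callback and being invoked again
 * right away (the busy loop), wait for the journal to change. Returns > 0
 * if it is worth reading again, 0 if nothing happened, < 0 on error. */
static int wait_for_entries(sd_journal *j, uint64_t timeout_usec) {
        int r;

        r = sd_journal_wait(j, timeout_usec);
        if (r < 0)
                return r;

        return r != SD_JOURNAL_NOP; /* SD_JOURNAL_APPEND / SD_JOURNAL_INVALIDATE */
}
```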
test: send trailing newlines in notify messages in TEST-50-DISSECT
It seems the failing test in https://github.com/systemd/systemd/issues/37626
is due to MONOTONIC_USEC= being somehow lost. Add a trailing newline when
sending messages with socat, hopefully ensuring it is delivered and read.
As prompted by #38393, the list of search domains may be large when a
complicated network setup is used, especially with VPNs. Let's bump the
limit to 1024.
Note, this does not bump the maximum number of DNS servers, as
configuring thousands of DNS servers is dubious and mostly meaningless.
Let's keep that maximum as is until someone requests bumping it too.
- move to TEST-07-PID1, as it is a timer setting,
- rename the timer and service, to emphasize that they are for testing
  DeferReactivation=,
- use the timeout command to wait for the timer to be triggered several times,
- stop the timer when it is no longer necessary,
- accept a delta of 9 seconds, as there are fluctuations.
As discussed in https://github.com/systemd/systemd/issues/38399, "ordinary"
systems can have a field hash table with a large number of entries, causing
journal rotation to occur early. For example, audit generates a lot of fields:
$ journalctl --fields | rg -c '^_?AUDIT'
114
It seems that the "structured log" capabilities of the journal are being used
more than in the past. Looking at some journal files on my system, the
field hash table fill is quite high in many cases:
$ build/test-journal-dump /var/log/journal/*/* | rg 'table fill'
Data hash table fill: 15.1%
Field hash table fill: 69.1%
Data hash table fill: 4.9%
Field hash table fill: 32.4%
Data hash table fill: 10.2%
Field hash table fill: 34.2%
Data hash table fill: 9.9%
Field hash table fill: 37.2%
Data hash table fill: 26.8%
Field hash table fill: 21.9%
Data hash table fill: 35.6%
Field hash table fill: 22.8%
Data hash table fill: 25.5%
Field hash table fill: 54.1%
Data hash table fill: 3.4%
Field hash table fill: 43.8%
Data hash table fill: 75.0%
Field hash table fill: 70.3%
Data hash table fill: 75.0%
Field hash table fill: 63.1%
Data hash table fill: 75.0%
Field hash table fill: 74.2%
Data hash table fill: 35.6%
Field hash table fill: 43.2%
Data hash table fill: 35.5%
Field hash table fill: 75.4%
Data hash table fill: 75.0%
Field hash table fill: 59.8%
Data hash table fill: 75.0%
Field hash table fill: 56.5%
Data hash table fill: 16.9%
Field hash table fill: 76.3%
Data hash table fill: 18.1%
Field hash table fill: 76.9%
Data hash table fill: 75.0%
Field hash table fill: 42.0%
Data hash table fill: 75.0%
Field hash table fill: 22.8%
Data hash table fill: 75.0%
Field hash table fill: 22.8%
Data hash table fill: 75.0%
Field hash table fill: 22.8%
Data hash table fill: 75.0%
Field hash table fill: 22.8%
Data hash table fill: 75.0%
Field hash table fill: 32.1%
Data hash table fill: 75.0%
Field hash table fill: 21.9%
Data hash table fill: 75.0%
Field hash table fill: 21.9%
Data hash table fill: 75.0%
Field hash table fill: 21.9%
Data hash table fill: 75.0%
Field hash table fill: 22.8%
Data hash table fill: 75.0%
Field hash table fill: 22.8%
Data hash table fill: 75.0%
Field hash table fill: 21.9%
Data hash table fill: 75.0%
Field hash table fill: 22.5%
Data hash table fill: 9.6%
Field hash table fill: 53.8%
Data hash table fill: 75.0%
Field hash table fill: 22.2%
Data hash table fill: 75.0%
Field hash table fill: 22.2%
Data hash table fill: 75.0%
Field hash table fill: 22.2%
Data hash table fill: 35.6%
Field hash table fill: 75.1%
Data hash table fill: 33.6%
Field hash table fill: 50.2%
Data hash table fill: 75.0%
Field hash table fill: 26.7%
Data hash table fill: 75.0%
Field hash table fill: 25.8%
Data hash table fill: 75.0%
Field hash table fill: 29.1%
Data hash table fill: 75.0%
Field hash table fill: 25.8%
Data hash table fill: 75.0%
Field hash table fill: 31.8%
Data hash table fill: 75.0%
Field hash table fill: 18.9%
Data hash table fill: 75.0%
Field hash table fill: 22.2%
Data hash table fill: 75.0%
Field hash table fill: 20.1%
Data hash table fill: 75.0%
Field hash table fill: 29.1%
Data hash table fill: 75.0%
Field hash table fill: 30.9%
Data hash table fill: 75.0%
Field hash table fill: 28.5%
Data hash table fill: 75.0%
Field hash table fill: 28.5%
Data hash table fill: 75.0%
Field hash table fill: 25.8%
Data hash table fill: 75.0%
Field hash table fill: 25.2%
Data hash table fill: 75.0%
Field hash table fill: 39.3%
Data hash table fill: 50.2%
Field hash table fill: 75.1%
Data hash table fill: 75.0%
Field hash table fill: 61.9%
Data hash table fill: 75.0%
Field hash table fill: 56.5%
Data hash table fill: 75.0%
Field hash table fill: 58.6%
Data hash table fill: 48.9%
Field hash table fill: 79.6%
Data hash table fill: 75.0%
Field hash table fill: 71.5%
Data hash table fill: 75.0%
Field hash table fill: 60.1%
Data hash table fill: 31.4%
Field hash table fill: 75.7%
Data hash table fill: 27.0%
Field hash table fill: 69.4%
Data hash table fill: 28.9%
Field hash table fill: 76.6%
Data hash table fill: 60.2%
Field hash table fill: 79.9%
Data hash table fill: 8.8%
Field hash table fill: 78.7%
Data hash table fill: 5.8%
Field hash table fill: 61.3%
Data hash table fill: 75.0%
Field hash table fill: 64.0%
Data hash table fill: 61.4%
Field hash table fill: 63.4%
Data hash table fill: 29.7%
Field hash table fill: 61.9%
Data hash table fill: 18.9%
Field hash table fill: 30.9%
Data hash table fill: 1.4%
Field hash table fill: 22.2%
Data hash table fill: 0.4%
Field hash table fill: 13.5%
Data hash table fill: 2.6%
Field hash table fill: 37.5%
Data hash table fill: 1.3%
Field hash table fill: 23.4%
Data hash table fill: 0.6%
Field hash table fill: 15.3%
Data hash table fill: 18.7%
Field hash table fill: 33.9%
Data hash table fill: 7.4%
Field hash table fill: 37.5%
Data hash table fill: 20.2%
Field hash table fill: 44.1%
Data hash table fill: 1.3%
Field hash table fill: 33.0%
Data hash table fill: 75.0%
Field hash table fill: 19.2%
Data hash table fill: 42.2%
Field hash table fill: 23.4%
Data hash table fill: 1.6%
Field hash table fill: 87.1%
Data hash table fill: 0.1%
Field hash table fill: 98.8%
Data hash table fill: 0.2%
Field hash table fill: 128.8%
Data hash table fill: 15.4%
Field hash table fill: 31.2%
Data hash table fill: 7.4%
Field hash table fill: 22.5%
Data hash table fill: 10.5%
Field hash table fill: 38.7%
Data hash table fill: 2.8%
Field hash table fill: 18.0%
Data hash table fill: 1.5%
Field hash table fill: 15.9%
Data hash table fill: 0.0%
Field hash table fill: 7.5%
Data hash table fill: 0.1%
Field hash table fill: 12.0%
Data hash table fill: 0.2%
Field hash table fill: 10.8%
Data hash table fill: 0.2%
Field hash table fill: 15.6%
Data hash table fill: 0.1%
Field hash table fill: 11.7%
Data hash table fill: 0.1%
Field hash table fill: 12.0%
Data hash table fill: 0.0%
Field hash table fill: 6.6%
Data hash table fill: 1.4%
Field hash table fill: 18.0%
Data hash table fill: 0.7%
Field hash table fill: 16.8%
Data hash table fill: 1.1%
Field hash table fill: 18.0%
Data hash table fill: 0.2%
Field hash table fill: 10.8%
Data hash table fill: 0.1%
Field hash table fill: 10.8%
Data hash table fill: 0.4%
Field hash table fill: 11.1%
Since filling the field hash table to 75% normally triggers file rotation,
let's double the default size to make rotation happen less often.
This uses 11 kB more for the hash table, which should be fine, considering
that journal files are usually at least 8 MB.