The extended testsuite only works with UID=0. It consists of the subdirectories
named "test/TEST-??-*", each of which contains a description of an OS image and
a test which consists of systemd units and scripts to execute in this image.
The same image is used for execution under `systemd-nspawn` and `qemu`.

To run the extended testsuite, do the following:

$ ninja -C build  # Avoid building anything as root later
$ sudo test/run-integration-tests.sh
ninja: Entering directory `/home/zbyszek/src/systemd/build'
--x-- Running TEST-01-BASIC --x--
+ make -C TEST-01-BASIC clean setup run
make: Entering directory '/home/zbyszek/src/systemd/test/TEST-01-BASIC'
TEST-01-BASIC CLEANUP: Basic systemd setup
TEST-01-BASIC SETUP: Basic systemd setup
TEST-01-BASIC RUN: Basic systemd setup [OK]
make: Leaving directory '/home/zbyszek/src/systemd/test/TEST-01-BASIC'
--x-- Result of TEST-01-BASIC: 0 --x--
--x-- Running TEST-02-CRYPTSETUP --x--
+ make -C TEST-02-CRYPTSETUP clean setup run

If one of the tests fails, then $subdir/test.log contains the log file of
the test.

To run just one of the cases:

$ sudo make -C test/TEST-01-BASIC clean setup run

To run the meson-based integration tests, enable integration tests and the
options for the required commands with the following:

$ meson configure build -Dremote=enabled -Dopenssl=enabled -Dblkid=enabled -Dtpm2=enabled

Once enabled, first build the integration test image:

$ meson compile -C build mkosi

After the image has been built, the integration tests can be run with:

$ SYSTEMD_INTEGRATION_TESTS=1 meson test -C build/ --suite integration-tests --num-processes "$(($(nproc) / 2))"

As usual, specific tests can be run in meson by appending the name of the test,
which is usually the name of the directory, e.g.

$ SYSTEMD_INTEGRATION_TESTS=1 meson test -C build/ -v TEST-01-BASIC

Due to limitations in meson, the integration tests do not yet depend on the
mkosi target, which means the mkosi target has to be manually rebuilt before
running the integration tests. To rebuild the image and rerun a test, the
following command can be used:

$ meson compile -C build mkosi && SYSTEMD_INTEGRATION_TESTS=1 meson test -C build -v TEST-01-BASIC

See `meson introspect build --tests` for a list of tests.
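
For example, to list just the integration test names, something like this works
(a convenience sketch, assuming `jq` is installed):

$ meson introspect build --tests | jq -r '.[].name' | grep '^TEST-'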

Specifying the build directory
==============================

If the build directory is not detected automatically, it can be specified
with BUILD_DIR=:

$ sudo BUILD_DIR=some-other-build/ test/run-integration-tests.sh

or

$ sudo make -C test/TEST-01-BASIC BUILD_DIR=../../some-other-build/ ...

Note that in the second case, the path is relative to the test case directory.
An absolute path may also be used in both cases.
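
For example, with an absolute path (the path shown is illustrative):

$ sudo BUILD_DIR="$PWD/some-other-build" test/run-integration-tests.sh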

Testing installed binaries instead of built
===========================================

To run the extended testsuite using the systemd installed on the system instead
of the systemd from a build, use NO_BUILD=1:

$ sudo NO_BUILD=1 test/run-integration-tests.sh

Configuration variables
=======================

TEST_NO_QEMU=1
    Don't run tests under qemu

TEST_QEMU_ONLY=1
    Run only tests that require qemu

TEST_NO_NSPAWN=1
    Don't run tests under systemd-nspawn

TEST_PREFER_NSPAWN=1
    Run all tests that do not require qemu under systemd-nspawn

TEST_NO_KVM=1
    Disable qemu KVM auto-detection (may be necessary when you're trying to run
    the *vanilla* qemu and have both qemu and qemu-kvm installed)

TEST_NESTED_KVM=1
    Allow tests to run with nested KVM. By default, the testsuite disables
    nested KVM if the host machine already runs under KVM. Setting this
    variable disables such checks

QEMU_MEM=512M
    Configure the amount of memory for qemu VMs (defaults to 512M)

QEMU_SMP=1
    Configure the number of CPUs for qemu VMs (defaults to 1)

KERNEL_APPEND='...'
    Append additional parameters to the kernel command line

NSPAWN_ARGUMENTS='...'
    Specify additional arguments for systemd-nspawn

QEMU_TIMEOUT=infinity
    Set a timeout for tests under qemu (defaults to 1800 sec)

NSPAWN_TIMEOUT=infinity
    Set a timeout for tests under systemd-nspawn (defaults to 1800 sec)

INTERACTIVE_DEBUG=1
    Configure the machine to be more *user-friendly* for interactive debugging
    (e.g. by setting a usable default terminal, suppressing the shutdown after
    the test, etc.)

TEST_MATCH_SUBTEST=subtest
    If the test makes use of `run_subtests`, use this variable to provide
    a POSIX extended regex to run only subtests matching the expression

TEST_MATCH_TESTCASE=testcase
    Same as $TEST_MATCH_SUBTEST but for tests that make use of `run_testcases`
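
Several of these can be combined. For example, a hypothetical run that skips
systemd-nspawn, gives the VM more resources, and shortens the timeout:

$ sudo TEST_NO_NSPAWN=1 QEMU_SMP=4 QEMU_MEM=1024M QEMU_TIMEOUT=600 \
    make -C test/TEST-01-BASIC clean setup run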

The kernel and initrd can be specified with $KERNEL_BIN and $INITRD. (Fedora's
or Debian's default kernel path and initrd are used by default.)

A script will try to find your qemu binary. If you want to use a different one,
specify it with $QEMU_BIN.
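
For example (the paths are illustrative and distribution-dependent):

$ sudo KERNEL_BIN=/boot/vmlinuz-6.8.0 INITRD=/boot/initrd.img-6.8.0 \
    QEMU_BIN=/usr/bin/qemu-system-x86_64 make -C test/TEST-01-BASIC run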

Debugging the qemu image
========================

If you want to log into the testsuite virtual machine, use INTERACTIVE_DEBUG=1:

$ sudo make -C test/TEST-01-BASIC INTERACTIVE_DEBUG=1 run

The root password is empty.

Ubuntu CI
=========

New PRs submitted to the project are run through regression tests, and one set
of those is the 'autopkgtest' runs for several different architectures, called
'Ubuntu CI'. Part of that testing is to run all these tests. Sometimes these
tests are temporarily deny-listed from running in the 'autopkgtest' tests while
debugging a flaky test; that is done by creating a file in the test directory
named 'deny-list-ubuntu-ci'. For example, to prevent the TEST-01-BASIC test
from running in the 'autopkgtest' runs, create the file
'TEST-01-BASIC/deny-list-ubuntu-ci'.
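
That can be done from the repository root with, e.g.:

$ touch test/TEST-01-BASIC/deny-list-ubuntu-ci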

The tests may be disabled only for specific architectures by creating a
deny-list file with the arch name at the end, e.g.
'TEST-01-BASIC/deny-list-ubuntu-ci-arm64' to disable the TEST-01-BASIC test
only on test runs for the 'arm64' architecture.

Note that the arch naming is not from 'uname -m'; it uses the Debian arch names:
https://wiki.debian.org/ArchitectureSpecificsMemo
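
On a Debian-based host, the architecture name to use can be queried with:

$ dpkg --print-architecture
amd64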

For PRs that fix a currently deny-listed test, the PR should include removal
of the deny-list file.

In case a test fails, the full set of artifacts, including the journal of the
failed run, can be downloaded from the artifacts.tar.gz archive, which will be
reachable in the same URL parent directory as the log.gz that gets linked on
the GitHub CI status.
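
For example, a sketch of fetching the artifacts given the log URL from the CI
status (the URL is a placeholder):

$ log_url='https://autopkgtest.ubuntu.com/results/.../log.gz'
$ curl -O "${log_url%/log.gz}/artifacts.tar.gz"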

The log URL can be derived following a simple algorithm; however, the test
completion timestamp is needed, and it's not easy to find without access to the
log itself. For example, a noble s390x job started on 2024-03-23 at 02:09:11
will be stored at the following URL:

https://autopkgtest.ubuntu.com/results/autopkgtest-noble-upstream-systemd-ci-systemd-ci/noble/s390x/s/systemd-upstream/20240323_020911_e8e88@/log.gz

The 5 characters at the end of the last directory are not random, but the first
5 characters of a SHA1 hash generated based on the set of parameters given to
the build plus the completion timestamp, such as:

$ echo -n 'systemd-upstream {"build-git": "https://salsa.debian.org/systemd-team/systemd.git#debian/master", "env": ["UPSTREAM_REPO=https://github.com/systemd/systemd.git", "CFLAGS=-O0", "DEB_BUILD_PROFILES=pkg.systemd.upstream noudeb", "TEST_UPSTREAM=1", "CONFFLAGS_UPSTREAM=--werror -Dslow-tests=true", "UPSTREAM_PULL_REQUEST=31444", "GITHUB_STATUSES_URL=https://api.github.com/repos/systemd/systemd/statuses/c27f600a1c47f10b22964eaedfb5e9f0d4279cd9"], "ppas": ["upstream-systemd-ci/systemd-ci"], "submit-time": "2024-02-27 17:06:27", "uuid": "02cd262f-af22-4f82-ac91-53fa5a9e7811"}' | sha1sum | cut -c1-5

To add new dependencies or new binaries to the packages used during the tests,
a merge request can be sent to: https://salsa.debian.org/systemd-team/systemd
targeting the 'upstream-ci' branch.

The cloud-side infrastructure that is hooked into the GitHub interface is
located at:

https://git.launchpad.net/autopkgtest-cloud/

A generic description of the testing infrastructure can be found at:

https://wiki.ubuntu.com/ProposedMigration/AutopkgtestInfrastructure

In case of infrastructure issues with this CI, things might go wrong in three
places:

- starting a job: this is done via a GitHub webhook, so check whether the HTTP
  POSTs are failing on https://github.com/systemd/systemd/settings/hooks
- running a job: all currently running jobs are listed at
  https://autopkgtest.ubuntu.com/running#pkg-systemd-upstream in case the PR
  does not show the status for some reason
- reporting the job result: this is done on Canonical's cloud infrastructure;
  if jobs are started and running but no status is visible on the PR, then it
  is likely that reporting back is not working

The CI job needs a PPA in order to be accepted, and the upstream-systemd-ci/systemd-ci
PPA is used. Note that this is necessary even when there are no packages to backport,
but by default a PPA won't have a repository for a release if there are no packages
built for it. To work around this problem, when a new empty release is needed, the
mark-suite-dirty tool from https://git.launchpad.net/ubuntu-archive-tools can
be used to force the PPA to publish an empty repository, for example:

$ ./mark-suite-dirty -A ppa:upstream-systemd-ci/ubuntu/systemd-ci -s noble

will create an empty 'noble' repository that can be used for 'noble' CI jobs.

For infrastructure help, reaching out to 'qa-help' via the #ubuntu-quality
channel on libera.chat is an effective way to receive support.

Given access to the shared secret, tests can be re-run using the generic
retry-github-test tool:

https://git.launchpad.net/autopkgtest-cloud/tree/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/retry-github-test

A wrapper script that makes it easier to use is also available:

https://piware.de/gitweb/?p=bin.git;a=blob;f=retry-gh-systemd-Test

Manually running a part of the Ubuntu CI test suite
===================================================

In some situations one may want/need to run one of the tests run by Ubuntu CI
locally for debugging purposes. For this, you need a machine (or a VM) with
the same Ubuntu release as is used by Ubuntu CI (Jammy at the time of writing).

First of all, clone the Debian systemd repository and sync it with the code of
the PR (set by the $UPSTREAM_PULL_REQUEST env variable) you'd like to debug:

# git clone https://salsa.debian.org/systemd-team/systemd.git
# cd systemd
# git checkout upstream-ci
# TEST_UPSTREAM=1 UPSTREAM_PULL_REQUEST=12345 ./debian/extra/checkout-upstream

Now install the necessary build & test dependencies:

## PPA with some newer Ubuntu packages required by upstream systemd
# add-apt-repository -y --enable-source ppa:upstream-systemd-ci/systemd-ci
# apt build-dep -y systemd
# apt install -y autopkgtest debhelper genisoimage git qemu-system-x86 \
                 libcurl4-openssl-dev libfdisk-dev libtss2-dev libfido2-dev \
                 libssl-dev python3-pefile

Build systemd deb packages with debug info:

# TEST_UPSTREAM=1 DEB_BUILD_OPTIONS="nocheck nostrip noopt" dpkg-buildpackage -us -uc

Prepare a testbed image for autopkgtest (tweak the release as necessary):

# autopkgtest-buildvm-ubuntu-cloud --ram-size 1024 -v -a amd64 -r jammy

And finally run the autopkgtest itself:

# autopkgtest -o logs *.deb systemd/ \
      --env=TEST_UPSTREAM=1 \
      --test-name=boot-and-services \
      --shell-fail \
      -- autopkgtest-virt-qemu --cpus 4 --ram-size 2048 autopkgtest-jammy-amd64.img

where --test-name= is the name of the test you want to run/debug. The
--shell-fail option will pause the execution in case the test fails and show
you information on how to connect to the testbed for further debugging.

Manually running CodeQL analysis
================================

This is mostly useful for debugging various CodeQL quirks.

Download the CodeQL Bundle from https://github.com/github/codeql-action/releases
and unpack it somewhere. From now on, this 'tutorial' assumes you have the
`codeql` binary from the unpacked archive in $PATH for brevity.
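
For example, if the bundle was unpacked into ./codeql (the path is an
assumption):

$ export PATH="$PWD/codeql:$PATH"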

Switch to the systemd repository if not already there:

$ cd <systemd-repo>

Create an initial CodeQL database:

$ CCACHE_DISABLE=1 codeql database create codeqldb --language=cpp -vvv

Disabling ccache is important, otherwise you might see CodeQL complaining:

No source code was seen and extracted to /home/mrc0mmand/repos/@ci-incubator/systemd/codeqldb.
This can occur if the specified build commands failed to compile or process any code.
 - Confirm that there is some source code for the specified language in the project.
 - For codebases written in Go, JavaScript, TypeScript, and Python, do not specify
   an explicit --command.
 - For other languages, the --command must specify a "clean" build which compiles
   all the source code files without reusing existing build artefacts.

If you want to run all queries systemd uses in CodeQL, run:

$ codeql database analyze codeqldb/ --format csv --output results.csv .github/codeql-custom.qls .github/codeql-queries/*.ql -vvv

Note: this will take a while.

If you're interested in a specific check, the easiest way (without hunting down
the specific CodeQL query file) is to create a custom query suite. For example:

$ cat >test.qls <<EOF
- queries: .
  from: codeql/cpp-queries
- include:
    id:
      - cpp/missing-return
EOF

And then execute it in the same way as above:

$ codeql database analyze codeqldb/ --format csv --output results.csv test.qls -vvv

More about query suites here: https://codeql.github.com/docs/codeql-cli/creating-codeql-query-suites/

The results are then located in the `results.csv` file as a comma-separated
values list (obviously), which is the most human-friendly output format the
CodeQL utility provides (so far).
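
For a quick look at the results without importing them anywhere, something like
this can help (a sketch; note that `column` won't handle commas inside quoted
fields):

$ column -s, -t <results.csv | less -S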

Running Coverity locally
========================

Note: this requires a Coverity license, as the public tool tarball (from [0])
doesn't contain cov-analyze and friends, so the usefulness of this guide is
somewhat limited.

Debugging certain pesky Coverity defects can be painful, especially since the
OSS Coverity instance has a very strict limit on how many builds we can send it
per day/week, so if you have access to a non-OSS Coverity license, knowing
how to debug defects locally might come in handy.

After installing the necessary tooling, we need to populate the emit DB first:

$ meson setup build -Dman=false
$ cov-build --dir=./cov ninja -C build

What comes next depends on whether you're interested in a specific defect or
all of them. To check for all of them, run:

$ cov-analyze --dir=./cov --wait-for-license

If you want to debug a specific defect, telling that to cov-analyze speeds
things up a bit:

$ cov-analyze --dir=./cov --wait-for-license --disable-default --enable ASSERT_SIDE_EFFECT

The final step is getting the actual report, which can be generated in multiple
formats, for example:

$ cov-format-errors --dir=./cov --text-output-style multiline
$ cov-format-errors --dir=./cov --emacs-style
$ cov-format-errors --dir=./cov --html-output html-out

These generate a text report, an emacs-compatible text report, and an HTML
report, respectively.
378 defects for a specific file, --checker-regex DEFECT_TYPE to filter our only a
379 specific defect (if this wasn't done already by cov-analyze), and many others,
380 see --help for an exhaustive list.
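
For example, combining the options above to show only ASSERT_SIDE_EFFECT
defects from a single (hypothetical) file:

$ cov-format-errors --dir=./cov --checker-regex ASSERT_SIDE_EFFECT \
      --file src/core/unit.c --text-output-style multiline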

[0] https://scan.coverity.com/download

Code coverage
=============

We have a daily cron job in CentOS CI which runs all unit and integration tests,
collects coverage using gcov/lcov, and uploads the report to Coveralls[0]. In
order to collect the most accurate coverage information, some measures have
to be taken regarding sandboxing, namely:

- ProtectSystem= and ProtectHome= need to be turned off
- the $BUILD_DIR with the necessary .gcno files needs to be present in the image
  and needs to be writable by all processes

The first point is relatively easy and is handled automagically by our test
"framework", which creates the necessary dropins.

Making the $BUILD_DIR accessible to _everything_ is slightly more complicated.
First and foremost, the $BUILD_DIR has a POSIX ACL that makes it writable
to everyone. However, this is not enough in some cases, like for services
that use DynamicUser=yes, since that implies ProtectSystem=strict, which can't
be turned off. A solution to this is to use ReadWritePaths=$BUILD_DIR, which
works for the majority of cases, but can't be turned on globally, since
ReadWritePaths= creates its own mount namespace which might break some
services. Hence, ReadWritePaths=$BUILD_DIR is enabled for all services
with the `test-` prefix (i.e. test-foo.service or test-foo-bar.service), both
in the system and the user managers.

So, if you're considering writing an integration test that makes use
of DynamicUser=yes, or other sandboxing stuff that implies it, please prefix
the test unit (be it a static one or a transient one created via systemd-run)
with `test-`, unless the test unit needs to be able to install mount points
in the main mount namespace - in that case use IGNORE_MISSING_COVERAGE=yes
in the test definition (i.e. TEST-*-NAME/test.sh), which will skip the post-test
check for missing coverage for the respective test.
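
For example, a transient unit for such a test could be started like this (the
unit name and command are illustrative):

$ systemd-run --unit=test-my-coverage-case.service -p DynamicUser=yes \
      /path/to/some/test-script.sh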

[0] https://coveralls.io/github/systemd/systemd