fileio: bump limit for read_full_file() and friends to 64M
Apparently people use key files this large. Specifically, people used 4M
key files, and we lowered the limit from 4M to 4M-1 back in 248.
This raises the limit to 64M for read_full_file() to avoid these
specific issues and give some non-trivial room beyond the 4M files seen
IRL.
Note that a 64M allocation in glibc is always done immediately via
mmap(), and is thus a lot slower than smaller allocations. This means
read_virtual_file() would become ridiculously slow if it used the large
limit, since we use it all the time for reading /proc and /sys metadata,
and it typically allocates the full size with malloc() in advance. In
fact it becomes so slow that test-process-util kept timing out on me
once I blindly raised the limit.
This patch hence introduces two distinct limits for read_full_file() and
read_virtual_file(): the former is now much larger, while the latter
remains where it is. This is safe since the former uses an exponentially
growing realloc() loop while the latter uses the aforementioned
ahead-of-time allocation of the full limit.
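To illustrate the difference between the two strategies, here is a
minimal sketch (not the actual systemd code) of such an exponentially
growing read loop; with it, small files only ever trigger small
allocations, whereas an ahead-of-time malloc() of the full limit would
pay the mmap() cost on every call:

    /* Minimal sketch, assuming a 64M cap: read a stream with an
     * exponentially growing buffer, so small inputs never trigger a
     * large (mmap()-backed) allocation. */
    #include <stdio.h>
    #include <stdlib.h>

    #define READ_LIMIT (64U*1024U*1024U)

    static char *read_growing(FILE *f, size_t *ret_size) {
            size_t allocated = 4096, n = 0;
            char *buf = malloc(allocated), *t;

            if (!buf)
                    return NULL;

            for (;;) {
                    n += fread(buf + n, 1, allocated - n, f);
                    if (n < allocated)
                            break;                /* EOF (or read error) */
                    if (allocated >= READ_LIMIT) {
                            free(buf);
                            return NULL;          /* refuse overly large input */
                    }
                    allocated *= 2;               /* exponential growth */
                    t = realloc(buf, allocated);
                    if (!t) {
                            free(buf);
                            return NULL;
                    }
                    buf = t;
            }

            *ret_size = n;
            return buf;
    }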
Since those workarounds were added, work has been done to tighten up
log_*() return values. It seems we get no warning with
gcc-11.1.1-1.fc34.x86_64 at -O0/-O2.
journal: don't try to reuse already calculated hash between files with keyed hash feature
When suppressing duplicate fields between files, we so far tried to
reuse the already known hash value of the data fields across files. This
was fine as long as we used the same hash function everywhere. However,
since the addition of the keyed hash feature for journal files this no
longer works, as the hashes will differ from file to file.
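To illustrate the issue, a toy sketch (the keyed_hash() stand-in below
is hypothetical; the real code uses a per-file key with siphash):
identical data hashes to different values under different keys, so a
value cached for one file must not be looked up in another:

    /* Toy illustration: with per-file keys, identical data yields
     * different hashes in different files, so cached hash values are
     * only valid for the file they were computed for. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* hypothetical stand-in for a real keyed hash like siphash24() */
    static uint64_t keyed_hash(const void *data, size_t len, uint64_t key) {
            const uint8_t *p = data;
            uint64_t h = key ^ 0xcbf29ce484222325ULL;

            for (size_t i = 0; i < len; i++)
                    h = (h ^ p[i]) * 0x100000001b3ULL;  /* FNV-1a style mix */
            return h;
    }

    int main(void) {
            const char *field = "MESSAGE=hello";

            /* per-file keys, normally random per journal file */
            printf("file A: %016" PRIx64 "\n", keyed_hash(field, strlen(field), 0x1111));
            printf("file B: %016" PRIx64 "\n", keyed_hash(field, strlen(field), 0x2222));
            /* different values: the hash from file A is useless in file B */
            return 0;
    }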
Yu Watanabe [Wed, 9 Jun 2021 04:30:16 +0000 (13:30 +0900)]
tmpfile: several minor coding style fixes
This makes the following changes:
- reduce the scope of variables,
- drop unnecessary 'else',
- use the CLOSE_AND_REPLACE() macro,
- use strnull() for possibly NULL strings.
bpf-firewall: close gap when updating the firewall
If we have BPF_F_ALLOW_MULTI support we can install the new program
before we drop the old one (because we can install two programs at the
same time). Let's do that, and thus fully close the firewall gap.
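A sketch of the resulting ordering, assuming libbpf's
bpf_prog_attach()/bpf_prog_detach2() and with error handling trimmed
(this is not the actual systemd code):

    #include <bpf/bpf.h>
    #include <linux/bpf.h>

    static int replace_firewall_prog(int cgroup_fd, int new_prog_fd, int old_prog_fd) {
            int r;

            /* With BPF_F_ALLOW_MULTI two programs may be attached at
             * once, so attach the new program first... */
            r = bpf_prog_attach(new_prog_fd, cgroup_fd,
                                BPF_CGROUP_INET_INGRESS, BPF_F_ALLOW_MULTI);
            if (r < 0)
                    return r;

            /* ...and only then detach the old one: there is no window
             * during which no filter is installed. */
            if (old_prog_fd >= 0)
                    (void) bpf_prog_detach2(old_prog_fd, cgroup_fd, BPF_CGROUP_INET_INGRESS);

            return 0;
    }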
Yu Watanabe [Mon, 7 Jun 2021 07:26:10 +0000 (16:26 +0900)]
network: do not process requests conditioned on link flags while the flags are being updated
E.g. nexthop requires the IFF_UP flag, but the currently stored flags
may be outdated if we called link_down(). This keeps such requests
pending while at least one of the flags is being updated.
Yu Watanabe [Mon, 7 Jun 2021 06:54:48 +0000 (15:54 +0900)]
network: do not drop requests on carrier lost
When the carrier is lost, all requests which require a carrier will not
be processed; they will be processed once the interface gains its
carrier again. So it is not necessary to drop the requests here.
Previously, when IPv6LinkLocalAddressGenerationMode= was not set, we
chose the address generation mode based on the result of reading the
stable_secret sysctl value. With this change, the mode is determined by
whether a secret address is specified in the new setting.
Mostly logging related: let's downgrade logging in dlopen_bpf(), for
example, and remove duplicate logging in various places. Add %m to log
messages, and so on.
bpf-firewall: move destruction of IP firewall objects to bpf-firewall.c
These are quite a few runtime objects; let's add a bpf_firewall_close()
helper that destroys them all, and call it from unit_free(), simply as
an exercise in encapsulating more BPF code in bpf-firewall.c.
This also brings the destruction order and the variable declaration
order in struct Unit into the same systematic order.
No change in behaviour, just some minor refactoring.
dns_server_unlink_marked() and dns_server_mark_all() were implemented
recursively. People might have dozens of servers defined, and it's
better to avoid recursion when a simple loop suffices.
In addition, dns_server_unlink_marked() would only unlink the first
marked server.
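A generic illustration of the change (not the systemd code): replacing
recursion over a linked list with a plain loop, so stack depth no longer
grows with the number of servers:

    #include <stdbool.h>

    struct server {
            struct server *next;
            bool marked;
    };

    /* recursive version: one stack frame per server */
    static void mark_all_recursive(struct server *s) {
            if (!s)
                    return;
            s->marked = true;
            mark_all_recursive(s->next);
    }

    /* iterative version: constant stack usage */
    static void mark_all(struct server *s) {
            for (; s; s = s->next)
                    s->marked = true;
    }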
tmpfiles: extend "Age" to accept an "age-by" argument
For "systemd-tmpfiles --cleanup", when the "Age" parameter
is specified, the criteria for deletion is determined from
the path's last modification timestamp ("mtime"), its last
access timestamp ("atime") and its last status change
timestamp ("ctime").
For instance, if one of those paths to be cleaned up are
opened, it results in the modification of "atime", which
results file system entry to not be removed because the
default aging algorithm would skip the entry.
Add an optional "age-by" argument by extending the "Age"
parameter to restrict the clean-up for a particular type
of file timestamp, which can be specified in "tmpfiles.d"
as follows:
[age-by:]cleanup-age, where age-by is "[abcmACBM]+"
For example:
d /foo/bar - - - abM:1m -
This would clean up any files in "/foo/bar" that have not been
accessed or created within the last minute, and any directories
that have not been modified within the last minute.
Allen Webb [Tue, 30 Mar 2021 14:37:11 +0000 (09:37 -0500)]
tmpfiles: add '=' action modifier.
Add the '=' action modifier that instructs tmpfiles.d to check the file
type of a path and remove objects that do not match before trying to
open or create the path.
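For instance (a hypothetical tmpfiles.d snippet), with:
    d= /run/foo 0755 root root -
if /run/foo already exists but is not a directory (say, a regular file
or a symlink), it is removed first, and only then is the directory
created.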
core: do not serialize mounts and automounts for switch-root
When e.g. tmp.mount is present in the initrd, and we serialize it, switch root,
and deserialize, the new systemd is confused because it thinks /tmp is mounted.
In general, it doesn't make sense to serialize anything that refers to paths in
the old root file system.
This fixes two errors for me:
1. tmp.mount was not mounted properly before local-fs.target. It would be
mounted at some point (I guess when we re-read /proc/self/mountinfo for some
other reason). In effect systemd-tmpfiles-setup.service would see one file
system, and other units started later a different one. In particular
gdm.service would fail because the pre-created /tmp/.X11-unix with proper
permissions would not exist at the time it was started.
2. # systemd[1]: proc-sys-fs-binfmt_misc.automount: Got hangup/error on autofs pipe from kernel. Likely our automount point has been unmounted by someone or something else?
# systemd[1]: proc-sys-fs-binfmt_misc.automount: Failed with result 'unmounted'.
# systemd[1]: Mounting proc-sys-fs-binfmt_misc.mount...
# systemd[1]: Mounted proc-sys-fs-binfmt_misc.mount.
# systemd[1]: Starting systemd-binfmt.service...
# systemd[1]: Finished systemd-binfmt.service.
# systemd[1]: proc-sys-fs-binfmt_misc.automount: Path /proc/sys/fs/binfmt_misc is already a mount point, refusing start.
# systemd[1]: Failed to set up automount proc-sys-fs-binfmt_misc.automount.
# systemd[1]: proc-sys-fs-binfmt_misc.automount: Path /proc/sys/fs/binfmt_misc is already a mount point, refusing start.
# systemd[1]: Failed to set up automount proc-sys-fs-binfmt_misc.automount.
# systemd[1]: proc-sys-fs-binfmt_misc.automount: Path /proc/sys/fs/binfmt_misc is already a mount point, refusing start.
# systemd[1]: Failed to set up automount proc-sys-fs-binfmt_misc.automount.
# systemd[1]: Stopping systemd-binfmt.service...
# systemd[1]: systemd-binfmt.service: Deactivated successfully.
# systemd[1]: Stopped systemd-binfmt.service.
I couldn't understand the error here, but in retrospect the first line is entirely
correct: "someone or something else" was the old systemd unmounting the old root.
Luca Boccassi [Fri, 12 Mar 2021 20:17:09 +0000 (20:17 +0000)]
coredump: check cgroups memory limit if storing on tmpfs
When /var/lib/systemd/coredump/ is backed by a tmpfs, all disk usage
will be accounted under the systemd-coredump process cgroup memory
limit.
If MemoryMax is set, this might cause systemd-coredump to be terminated
by the kernel oom handler when writing large uncompressed core files,
even if the compressed core would fit within the limits.
Detect if a tmpfs is used, and if so check MemoryMax from the process
and slice cgroups, and do not write uncompressed core files that are
larger than half the available memory. If the limit is reached, stop
writing, compress the written chunk immediately, then delete the
uncompressed chunk to free memory, and resume compressing directly
from STDIN.
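A rough sketch of the cap computation (not the actual systemd code): if
the target directory is on tmpfs, limit the uncompressed core to half
of the effective cgroup memory limit:

    #include <stdint.h>
    #include <linux/magic.h>
    #include <sys/vfs.h>

    static int max_core_size(const char *dir, uint64_t memory_max, uint64_t *ret) {
            struct statfs sfs;

            if (statfs(dir, &sfs) < 0)
                    return -1;

            if (sfs.f_type == TMPFS_MAGIC && memory_max != UINT64_MAX)
                    *ret = memory_max / 2;  /* leave room for the compressor */
            else
                    *ret = UINT64_MAX;      /* disk-backed: no extra cap */

            return 0;
    }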
Example debug log when this situation happens:
systemd-coredump[737455]: Setting max_size to limit writes to 51344896 bytes.
systemd-coredump[737455]: ZSTD compression finished (51344896 -> 3260 bytes, 0.0%)
systemd-coredump[737455]: ZSTD compression finished (1022786048 -> 47245 bytes, 0.0%)
systemd-coredump[737455]: Process 737445 (a.out) of user 1000 dumped core.
Luca Boccassi [Wed, 26 May 2021 18:16:48 +0000 (19:16 +0100)]
core: add MemoryAvailable unit property
Try to infer the unused memory that a unit can claim before the
memory.max limit is reached, including any limit set on any parent
slice above the unit itself.
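Conceptually (a sketch under stated assumptions, not the actual systemd
code: cgroup v2 mounted at /sys/fs/cgroup, and the unit's cgroup path
given relative to it), this means walking up the cgroup tree and taking
the minimum of (memory.max - memory.current) over all levels that set a
limit:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* read a cgroup v2 attribute like "memory.max"; UINT64_MAX means "max"/unset */
    static uint64_t read_cg_u64(const char *cgroup, const char *attr) {
            char p[4096], buf[64];
            uint64_t v = UINT64_MAX;
            FILE *f;

            snprintf(p, sizeof(p), "/sys/fs/cgroup/%s/%s", cgroup, attr);
            f = fopen(p, "re");
            if (!f)
                    return UINT64_MAX;
            if (fgets(buf, sizeof(buf), f) && strcmp(buf, "max\n") != 0)
                    v = strtoull(buf, NULL, 10);
            fclose(f);
            return v;
    }

    static uint64_t memory_available(const char *cgroup) {
            char path[4096];
            uint64_t available = UINT64_MAX;

            snprintf(path, sizeof(path), "%s", cgroup);

            for (;;) {
                    uint64_t max = read_cg_u64(path, "memory.max");
                    uint64_t cur = read_cg_u64(path, "memory.current");
                    char *slash;

                    if (max != UINT64_MAX && cur != UINT64_MAX) {
                            uint64_t a = max > cur ? max - cur : 0;
                            if (a < available)
                                    available = a;  /* tightest limit wins */
                    }

                    slash = strrchr(path, '/');
                    if (!slash)
                            break;  /* reached the top-level slice */
                    *slash = 0;     /* go one level up */
            }

            return available;
    }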
We were effectively running all post-upgrade scripts twice in Fedora. We
got this wrong, so it's likely other people will get it wrong too. Let's
explain what is actually needed to make this work, and also when it's
not useful.