---
title: Control Group APIs and Delegation
category: Interfaces
layout: default
---

# Control Group APIs and Delegation

*Intended audience: hackers working on userspace subsystems that require direct
cgroup access, such as container managers and similar.*

So you are wondering about resource management with systemd, you know Linux
control groups (cgroups) a bit and are trying to integrate your software with
what systemd has to offer there. Here's a bit of documentation about the
concepts and interfaces involved with this.

What's described here has been part of systemd and documented since v205
times. However, it has been updated and improved substantially, even though the
concepts stayed mostly the same. This is an attempt to provide more
comprehensive, up-to-date information about all this, in particular in light of
the poor implementations of the components interfacing with systemd in current
container managers.

Before you read on, please make sure you read the low-level kernel
documentation about the
[unified cgroup hierarchy](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html).
This document then adds in the higher-level view from systemd.

This document augments the existing documentation we already have:

* [The New Control Group Interfaces](https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/)
* [Writing VM and Container Managers](https://www.freedesktop.org/wiki/Software/systemd/writing-vm-managers/)

These wiki documents are not as up to date as they should be, currently, but
the basic concepts still fully apply. You should read them too, if you do
something with cgroups and systemd, in particular as they shine more light on
the various D-Bus APIs provided. (That said, sooner or later we should probably
fold that wiki documentation into this very document, too.)

## Two Key Design Rules

Much of the philosophy behind these concepts is based on a couple of basic
design ideas of cgroup v2 (which we try to adapt to cgroup v1 as far as we
can). Specifically two cgroup v2 rules are the most relevant:

1. The **no-processes-in-inner-nodes** rule: this means that it's not permitted
to have processes directly attached to a cgroup that also has child cgroups. A
cgroup is either an inner node or a leaf node of the tree: if it's an inner
node it may not contain processes directly, and if it's a leaf node then it may
not have child cgroups. (Note that there are some minor exceptions to this
rule, though. E.g. the root cgroup is special and allows both processes and
children — which is used in particular to maintain kernel threads.)

2. The **single-writer** rule: this means that each cgroup only has a single
writer, i.e. a single process managing it. It's OK if different cgroups have
different processes managing them. However, only a single process should own a
specific cgroup, and when it does that ownership is exclusive, and nothing else
should manipulate it at the same time. This rule ensures that various pieces of
software don't step on each other's toes constantly.

These two rules have various effects. For example, one corollary of this is: if
your container manager creates and manages cgroups in the system's root cgroup
you violate rule #2, as the root cgroup is managed by systemd and hence off
limits to everybody else.

Note that rule #1 is generally enforced by the kernel if cgroup v2 is used: as
soon as you add a process to a cgroup it is ensured that the rule is not
violated. On cgroup v1 this rule didn't exist, and hence isn't enforced, even
though it's a good thing to follow it there too. Rule #2 is not enforced on
either cgroup v1 or cgroup v2 (this is UNIX after all, in the general case
root can do anything, modulo SELinux and friends), but if you ignore it you'll
be in constant pain as various pieces of software will fight over cgroup
ownership.

Note that cgroup v1 is currently the most deployed implementation, even though
it's semantically broken in many ways, and in many cases doesn't actually do
what people think it does. cgroup v2 is where things are going, and most new
kernel features in this area are only added to cgroup v2, and not cgroup v1
anymore. For example cgroup v2 provides proper cgroup-empty notifications, has
support for all kinds of per-cgroup BPF magic, supports secure delegation of
cgroup trees to less privileged processes and so on, which all are not
available on cgroup v1.

## Three Different Tree Setups 🌳

systemd supports three different modes in which cgroups are set up. Specifically:

1. **Unified** — this is the simplest mode, and exposes a pure cgroup v2
logic. In this mode `/sys/fs/cgroup` is the only mounted cgroup API file system
and all available controllers are exclusively exposed through it.

2. **Legacy** — this is the traditional cgroup v1 mode. In this mode the
various controllers each get their own cgroup file system mounted to
`/sys/fs/cgroup/<controller>/`. On top of that systemd manages its own cgroup
hierarchy for managing purposes as `/sys/fs/cgroup/systemd/`.

3. **Hybrid** — this is a hybrid between the unified and legacy mode. It's set
up mostly like legacy, except that there's also an additional hierarchy
`/sys/fs/cgroup/unified/` that contains the cgroup v2 hierarchy. (Note that in
this mode the unified hierarchy won't have controllers attached; the
controllers are all mounted as separate hierarchies as in legacy mode,
i.e. `/sys/fs/cgroup/unified/` is purely and exclusively about core cgroup v2
functionality and not about resource management.) In this mode compatibility
with cgroup v1 is retained while some cgroup v2 features are available
too. This mode is a stopgap. Don't bother with this too much unless you have
too much free time.

To say this clearly, legacy and hybrid modes have no future. If you develop
software today and don't focus on the unified mode, then you are writing
software for yesterday, not tomorrow. They are primarily supported for
compatibility reasons and will not receive new features. Sorry.

Superficially, in legacy and hybrid modes it might appear that the parallel
cgroup hierarchies for each controller are orthogonal from each other. In
systemd they are not: the hierarchies of all controllers are always kept in
sync (at least mostly: sub-trees might be suppressed in certain hierarchies if
no controller usage is required for them). The fact that systemd keeps these
hierarchies in sync means that the legacy and hybrid hierarchies are
conceptually very close to the unified hierarchy. In particular this allows us
to talk of one specific cgroup and actually mean the same cgroup in all
available controller hierarchies. E.g. if we talk about the cgroup `/foo/bar/`
then we actually mean `/sys/fs/cgroup/cpu/foo/bar/` as well as
`/sys/fs/cgroup/memory/foo/bar/`, `/sys/fs/cgroup/pids/foo/bar/`, and so on.
Note that in cgroup v2 the controller hierarchies aren't orthogonal, hence
thinking about them as orthogonal won't help you in the long run anyway.

If you wonder how to detect which of these three modes is currently used, use
`statfs()` on `/sys/fs/cgroup/`. If it reports `CGROUP2_SUPER_MAGIC` in its
`.f_type` field, then you are in unified mode. If it reports `TMPFS_MAGIC` then
you are either in legacy or hybrid mode. To distinguish these two cases, run
`statfs()` again on `/sys/fs/cgroup/unified/`. If that succeeds and reports
`CGROUP2_SUPER_MAGIC` you are in hybrid mode, otherwise not. From a shell, you
can check the `Type` field in the output of `stat -f /sys/fs/cgroup` and
`stat -f /sys/fs/cgroup/unified`.

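For illustration, here's the same detection logic as a small, self-contained C
program (a sketch, with minimal error handling):

```c
/* Detect unified/hybrid/legacy mode via statfs(), as described above. */
#include <linux/magic.h>
#include <stdio.h>
#include <sys/vfs.h>

int main(void) {
        struct statfs fs;

        if (statfs("/sys/fs/cgroup/", &fs) < 0) {
                perror("statfs(/sys/fs/cgroup/)");
                return 1;
        }

        if (fs.f_type == CGROUP2_SUPER_MAGIC)
                puts("unified");
        else if (fs.f_type != TMPFS_MAGIC)
                puts("unknown");
        else if (statfs("/sys/fs/cgroup/unified/", &fs) >= 0 &&
                 fs.f_type == CGROUP2_SUPER_MAGIC)
                puts("hybrid");
        else
                puts("legacy");

        return 0;
}
```
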
## systemd's Unit Types

The low-level kernel cgroups feature is exposed in systemd in three different
"unit" types. Specifically:

1. 💼 The `.service` unit type. This unit type is for units encapsulating
   processes systemd itself starts. Units of these types have cgroups that are
   the leaves of the cgroup tree the systemd instance manages (though possibly
   they might contain a sub-tree of their own managed by something else, made
   possible by the concept of delegation, see below). Service units are usually
   instantiated based on a unit file on disk that describes the command line to
   invoke and other properties of the service. However, service units may also
   be declared and started programmatically at runtime through a D-Bus API
   (such units are called *transient* services).

2. 👓 The `.scope` unit type. This is very similar to `.service`. The main
   difference: the processes the units of this type encapsulate are forked off
   by some unrelated manager process, and that manager asked systemd to expose
   them as a unit. Unlike services, scopes can only be declared and started
   programmatically, i.e. are always transient. That's because they encapsulate
   processes forked off by something else, i.e. existing runtime objects, and
   hence cannot really be defined fully in 'offline' concepts such as unit
   files.

3. 🔪 The `.slice` unit type. Units of this type do not directly contain any
   processes. Units of this type are the inner nodes of part of the cgroup tree
   the systemd instance manages. Much like services, slices can be defined
   either on disk with unit files or programmatically as transient units.

Slices expose the trunk and branches of a tree, and scopes and services are
attached to those branches as leaves. The idea is that scopes and services can
be moved around though, i.e. assigned to a different slice if needed.

The naming of slice units directly maps to the cgroup tree path. This is not
the case for service and scope units however. A slice named `foo-bar-baz.slice`
maps to a cgroup `/foo.slice/foo-bar.slice/foo-bar-baz.slice/`. A service
`quux.service` which is attached to the slice `foo-bar-baz.slice` maps to the
cgroup `/foo.slice/foo-bar.slice/foo-bar-baz.slice/quux.service/`.

By default systemd sets up four slice units:

1. `-.slice` is the root slice, i.e. the parent of everything else. On the host
   system it maps directly to the top-level directory of cgroup v2.

2. `system.slice` is where system services are by default placed, unless
   configured otherwise.

3. `user.slice` is where user sessions are placed. Each user gets a slice of
   its own below that.

4. `machines.slice` is where VMs and containers are supposed to be
   placed. `systemd-nspawn` makes use of this by default, and you're very
   welcome to place your containers and VMs there too if you hack on managers
   for those.

Users may define any number of additional slices they like though, the four
above are just the defaults.

## Delegation

Container managers and suchlike often want to control cgroups directly using
the raw kernel APIs. That's entirely fine and supported, as long as proper
*delegation* is followed. Delegation is a concept we inherited from cgroup v2,
but we expose it on cgroup v1 too. Delegation means that some parts of the
cgroup tree may be managed by different managers than others. As long as it is
clear which manager manages which part of the tree each one can do within its
sub-graph of the tree whatever it wants.

Only sub-trees can be delegated (though whoever decides to request a sub-tree
can delegate sub-sub-trees further to somebody else if they like). Delegation
takes place at a specific cgroup: in systemd there's a `Delegate=` property you
can set for a service or scope unit. If you do, it's the cut-off point for
systemd's cgroup management: the unit itself is managed by systemd, i.e. all
its attributes are managed exclusively by systemd, however your program may
create/remove sub-cgroups inside it freely, and those then become exclusive
property of your program, systemd won't touch them — all attributes of *those*
sub-cgroups can be manipulated freely and exclusively by your program.

By turning on the `Delegate=` property for a scope or service you get a few
guarantees:

1. systemd won't fiddle with your sub-tree of the cgroup tree anymore. It won't
   change attributes of any cgroups below it, nor will it create or remove any
   cgroups thereunder, nor migrate processes across the boundaries of that
   sub-tree as it deems useful anymore.

2. If your service makes use of the `User=` functionality, then the sub-tree
   will be `chown()`ed to the indicated user so that it can correctly create
   cgroups below it. Note however that systemd will do that only in the unified
   hierarchy (in unified and hybrid mode) as well as on systemd's own private
   hierarchy (in legacy and hybrid mode). It won't pass ownership of the legacy
   controller hierarchies. Delegation to less privileged processes is not safe
   on cgroup v1 (as a limitation of the kernel), hence systemd won't facilitate
   access to it.

3. Any BPF IP filter programs systemd installs will be installed with
   `BPF_F_ALLOW_MULTI` so that your program can install additional ones.

In unit files the `Delegate=` property is superficially exposed as a
boolean. However, since v236 it optionally takes a list of controller names
instead. If so, delegation is requested for the listed controllers
specifically. Note that this only encodes a request. Depending on various
parameters it might happen that your service actually will get fewer
controllers delegated (for example, because the controller is not available on
the current kernel or was turned off) or more. If no list is specified
(i.e. the property is simply set to `yes`) then all available controllers are
delegated.

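For example, a (hypothetical) container manager's service unit that requests
delegation of just two specific controllers could look like this:

```ini
# mymanager.service (invented name, for illustration only)
[Service]
ExecStart=/usr/bin/mymanagerd
# Request a delegated cgroup sub-tree, limited to the pids and memory
# controllers; a plain "Delegate=yes" would request all of them:
Delegate=pids memory
```
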
Let's stress one thing: delegation is available on scope and service units
only. It's expressly not available on slice units. Why? Because slice units are
our *inner* nodes of the cgroup trees and we freely attach services and scopes
to them. If we'd allow delegation on slice units then this would mean that
both systemd and your own manager would create/delete cgroups below the slice
unit and that conflicts with the single-writer rule.

So, if you want to do your own raw cgroups kernel level access, then allocate a
scope unit, or a service unit (or just use the service unit you already have
for your service code), and turn on delegation for it.

(OK, here's one caveat: if you turn on delegation for a service, and that
service has `ExecStartPost=`, `ExecReload=`, `ExecStop=` or `ExecStopPost=`
set, then these commands will be executed within the `.control/` sub-cgroup of
your service's cgroup. This is necessary because by turning on delegation we
have to assume that the cgroup delegated to your service is now an *inner*
cgroup, which means that it may not directly contain any processes. Hence, if
your service has any of these four settings set, you must be prepared that a
`.control/` sub-cgroup might appear, managed by the service manager. This also
means that your service code should have moved itself further down the cgroup
tree by the time it notifies the service manager about start-up readiness, so
that the service's main cgroup is definitely an inner node by the time the
service manager might start `ExecStartPost=`.)

## Three Scenarios

Let's say you write a container manager, and you wonder what to do regarding
cgroups for it, as you want your manager to be able to run on systemd systems.

You basically have three options:

1. 😊 The *integration-is-good* option. For this, you register each container
   you have either as a systemd service (i.e. let systemd invoke the executor
   binary for you) or a systemd scope (i.e. your manager executes the binary
   directly, but then tells systemd about it). In this mode the administrator
   can use the usual systemd resource management and reporting commands
   individually on those containers. By turning on `Delegate=` for these scopes
   or services you make it possible to run cgroup-enabled programs in your
   containers, for example a nested systemd instance. This option has two
   sub-options:

   a. You transiently register the service or scope by directly contacting
      systemd via D-Bus (see the sketch after this list). In this case systemd
      will just manage the unit for you and nothing else.

   b. Instead you register the service or scope through `systemd-machined`
      (also via D-Bus). This mini-daemon is basically just a proxy for the same
      operations as in a. The main benefit of this: this way you let the system
      know that what you are registering is a container, and this opens up
      certain additional integration points. For example, `journalctl -M` can
      then be used to directly look into any container's journal logs (should
      the container run systemd inside), or `systemctl -M` can be used to
      directly invoke systemd operations inside the containers. Moreover tools
      like "ps" can then show you to which container a process belongs (`ps -eo
      pid,comm,machine`), and even gnome-system-monitor supports it.

2. 🙁 The *i-like-islands* option. If all you care about is your own cgroup
   tree, and you want to do as little as possible with systemd, with no
   interest in integration with the rest of the system, then this is a valid
   option. For this all you have to do is turn on `Delegate=` for your main
   manager daemon. Then figure out the cgroup systemd placed your daemon in:
   you can now freely create sub-cgroups beneath it. Don't forget the
   *no-processes-in-inner-nodes* rule however: you have to move your main
   daemon process out of that cgroup (and into a sub-cgroup) before you can
   start further processes in any of your sub-cgroups (a C sketch of this
   dance follows below).

3. 🙁 The *i-like-continents* option. In this option you'd leave your manager
   daemon where it is, and would not turn on delegation on its unit. However,
   as the first thing you register a new scope unit with systemd, and that
   scope unit would have `Delegate=` turned on, and then you place all your
   containers underneath it. From systemd's PoV there'd be two units: your
   manager service and the big scope that contains all your containers in one.

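To make sub-option 1a concrete, here's a rough sd-bus sketch of registering an
already-running container process as a transient scope with delegation turned
on. The scope name and PID are invented for illustration, and passing
`Delegate` as a boolean property is an assumption matching the unit-file
semantics described above:

```c
/* Sketch: register PID 1234 (made up) as "container-xyz.scope" with
 * Delegate=yes, via the StartTransientUnit() D-Bus call. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(void) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *m = NULL, *reply = NULL;
        sd_bus *bus = NULL;
        int r;

        r = sd_bus_default_system(&bus);
        if (r < 0)
                goto finish;

        r = sd_bus_message_new_method_call(
                        bus, &m,
                        "org.freedesktop.systemd1",
                        "/org/freedesktop/systemd1",
                        "org.freedesktop.systemd1.Manager",
                        "StartTransientUnit");
        if (r < 0)
                goto finish;

        /* Unit name and job mode, then the scope's properties (the PIDs to
         * pull in, and Delegate=yes), and finally no auxiliary units. */
        r = sd_bus_message_append(m, "ss", "container-xyz.scope", "fail");
        if (r >= 0)
                r = sd_bus_message_append(m, "a(sv)", 2,
                                          "PIDs", "au", 1, (uint32_t) 1234,
                                          "Delegate", "b", 1);
        if (r >= 0)
                r = sd_bus_message_append(m, "a(sa(sv))", 0);
        if (r < 0)
                goto finish;

        r = sd_bus_call(bus, m, 0, &error, &reply);

finish:
        if (r < 0)
                fprintf(stderr, "Failed: %s\n",
                        error.message ? error.message : strerror(-r));
        sd_bus_error_free(&error);
        sd_bus_message_unref(m);
        sd_bus_message_unref(reply);
        sd_bus_unref(bus);
        return r < 0;
}
```
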
BTW: if for whatever reason you say "I hate D-Bus, I'll never call any D-Bus
API, kthxbye", then options #1 and #3 are not available, as they generally
involve talking to systemd from your program code, via D-Bus. You still have
option #2 in that case however, as you can simply set `Delegate=` in your
service's unit file and you are done and have your own sub-tree. In fact, #2 is
the one option that allows you to completely ignore systemd's existence: you
can entirely generically follow the single rule that you just use the cgroup
you are started in, and everything below it, whatever that might be. That said,
maybe if you dislike D-Bus and systemd that much, the better approach might be
to work on that, and widen your horizon a bit. You are welcome.

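And here's the promised sketch of the option #2 dance, assuming a pure unified
(cgroup v2) setup; the `supervisor/` name is an arbitrary choice (see the Dos
below for the naming idea):

```c
/* Sketch: find the delegated cgroup we were started in, create a
 * sub-cgroup, and move ourselves into it to satisfy the
 * no-processes-in-inner-nodes rule. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void) {
        char line[4096], path[4200];
        FILE *f;
        char *cg;

        /* On a unified setup /proc/self/cgroup has one line: "0::/some/path" */
        f = fopen("/proc/self/cgroup", "re");
        if (!f)
                return 1;
        if (!fgets(line, sizeof(line), f)) {
                fclose(f);
                return 1;
        }
        fclose(f);

        line[strcspn(line, "\n")] = 0;
        cg = strstr(line, "::");
        if (!cg)
                return 1;
        cg += 2; /* e.g. "/system.slice/mymanager.service" */

        /* Create the sub-cgroup... */
        snprintf(path, sizeof(path), "/sys/fs/cgroup%s/supervisor", cg);
        if (mkdir(path, 0755) < 0)
                return 1;

        /* ...and migrate ourselves into it, so the delegated cgroup becomes
         * an inner node and sibling sub-cgroups can host payload processes. */
        strncat(path, "/cgroup.procs", sizeof(path) - strlen(path) - 1);
        f = fopen(path, "we");
        if (!f)
                return 1;
        fprintf(f, "0\n"); /* "0" means the writing process itself */
        fclose(f);

        return 0;
}
```
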
## Controller Support

systemd supports a number of controllers (but not all). Specifically, supported
are:

* on cgroup v1: `cpu`, `cpuacct`, `blkio`, `memory`, `devices`, `pids`
* on cgroup v2: `cpu`, `io`, `memory`, `pids`

It is our intention to natively support all cgroup v2 controllers as they are
added to the kernel. However, regarding cgroup v1: at this point we will not
add support for any other controllers anymore. This means systemd currently
does not and will never manage the following controllers on cgroup v1:
`freezer`, `cpuset`, `net_cls`, `perf_event`, `net_prio`, `hugetlb`. Why not?
Depending on the case, either their API semantics or implementations aren't
really usable, or it's very clear they have no future on cgroup v2, and we
won't add new code for stuff that clearly has no future.

Effectively this means that all those mentioned cgroup v1 controllers are up
for grabs: systemd won't manage them, and hence won't delegate them to your
code (however, systemd will still mount their hierarchies, simply because it
mounts all controller hierarchies it finds available in the kernel). If you
decide to use them, then that's fine, but systemd won't help you with it (but
also won't interfere with it). To be nice to other tenants it might be wise to
replicate the cgroup hierarchies of the other controllers in them too however,
but of course that's between you and those other tenants, and systemd won't
care. Replicating the cgroup hierarchies in those unsupported controllers would
mean replicating the full cgroup paths in them, and hence the prefixing
`.slice` components too, otherwise the hierarchies will start being orthogonal
after all, and that's not really desirable. One more thing: systemd will clean
up after you in the hierarchies it manages: if your daemon goes down, its
cgroups will be removed too. You basically get the guarantee that you start
with a pristine cgroup sub-tree for your service or scope whenever it is
started. This is not the case however in the hierarchies systemd doesn't
manage. This means that your programs should be ready to deal with left-over
cgroups in them — from previous runs, and be extra careful with them as they
might still carry settings that might not be valid anymore.

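Here's a rough C sketch of that replication, mirroring a delegated path (an
invented one) into the `freezer` hierarchy, `mkdir -p` style:

```c
/* Sketch: mirror a delegated cgroup path, including the slice components,
 * into one of the v1 hierarchies systemd doesn't manage. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* mkdir -p style: create every component of root+path. */
static int replicate(const char *root, const char *path) {
        char buf[4096];
        int n;

        n = snprintf(buf, sizeof(buf), "%s%s", root, path);
        if (n < 0 || (size_t) n >= sizeof(buf))
                return -ENAMETOOLONG;

        for (char *p = buf + strlen(root) + 1; *p; p++)
                if (*p == '/') {
                        *p = 0;
                        if (mkdir(buf, 0755) < 0 && errno != EEXIST)
                                return -errno;
                        *p = '/';
                }

        if (mkdir(buf, 0755) < 0 && errno != EEXIST)
                return -errno;

        return 0;
}

int main(void) {
        return replicate("/sys/fs/cgroup/freezer",
                         "/machine.slice/mymanager.service/payload-xyz") < 0;
}
```
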
Note a particular asymmetry here: if your systemd version doesn't support a
specific controller on cgroup v1 you can still make use of it for delegation,
by directly fiddling with its hierarchy and replicating the cgroup tree there
as necessary (as suggested above). However, on cgroup v2 this is different:
separately mounted hierarchies are not available, and delegation always has to
happen through systemd itself. This means: when you update your kernel and it
adds a new, so far unseen controller, and you want to use it for delegation,
then you also need to update systemd to a version that groks it.

## systemd as Container Payload

systemd can happily run as a container payload's PID 1. Note that systemd
unconditionally needs write access to the cgroup tree however, hence you need
to delegate a sub-tree to it. Note that there's nothing too special you have to
do beyond that: just invoke systemd as PID 1 inside the root of the delegated
cgroup sub-tree, and it will figure out the rest: it will determine the cgroup
it is running in and take possession of it. It won't interfere with any cgroup
outside of the sub-tree it was invoked in. Use of `CLONE_NEWCGROUP` is hence
optional (but of course wise).

Note one particular asymmetry here though: systemd will try to take possession
of the root cgroup you pass to it *in full*, i.e. it will not only
create/remove child cgroups below it, it will also attempt to manage its
attributes. OTOH as mentioned above, when delegating a cgroup tree to
somebody else it only passes the rights to create/remove sub-cgroups, but will
insist on managing the delegated cgroup tree's top-level attributes. Or in
other words: systemd is *greedy* when accepting delegated cgroup trees and also
*greedy* when delegating them to others: it insists on managing attributes on
the specific cgroup in both cases. A container manager that is itself a payload
of a host systemd which wants to run a systemd as its own container payload
instead hence needs to insert an extra level in the hierarchy in between, so
that the systemd on the host and the one in the container won't fight for the
attributes. That said, you likely should do that anyway, due to the
no-processes-in-inner-nodes rule, see below.

When systemd runs as container payload it will make use of all hierarchies it
has write access to. For legacy mode you need to make at least
`/sys/fs/cgroup/systemd/` available, all other hierarchies are optional. For
hybrid mode you need to add `/sys/fs/cgroup/unified/`. Finally, for fully
unified mode you (of course, I guess) need to provide only `/sys/fs/cgroup/`
itself.

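As a sketch of the fully unified case, providing that single hierarchy from
within the container's mount namespace boils down to a single `mount()` call
(the flags here are assumptions, not requirements; a real manager would also
take care of read-only protection of the upper parts of the tree):

```c
/* Sketch: make the cgroup v2 hierarchy available to a container payload
 * at /sys/fs/cgroup, assuming we already run inside the container's mount
 * (and ideally cgroup) namespace. */
#include <stdio.h>
#include <sys/mount.h>

int main(void) {
        if (mount("cgroup2", "/sys/fs/cgroup", "cgroup2",
                  MS_NOSUID | MS_NOEXEC | MS_NODEV, NULL) < 0) {
                perror("mount(/sys/fs/cgroup)");
                return 1;
        }
        return 0;
}
```
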
## Some Dos

1. ⚡ If you go for implementation option 1a or 1b (as in the list above), then
   each of your containers will have its own systemd-managed unit and hence
   cgroup with possibly further sub-cgroups below. Typically the first process
   running in that unit will be some kind of executor program, which will in
   turn fork off the payload processes of the container. In this case don't
   forget that there are two levels of delegation involved: first, systemd
   delegates a cgroup sub-tree to your executor. And then your executor should
   delegate a sub-tree further down to the container payload. Oh, and because
   of the no-processes-in-inner-nodes rule, your executor needs to migrate
   itself to a sub-cgroup of the cgroup it got delegated, too. Most likely you
   hence want a two-pronged approach: below the cgroup you got started in, you
   want one cgroup maybe called `supervisor/` where your manager runs in and
   then for each container a sibling cgroup of that maybe called `payload-xyz/`.

2. ⚡ Don't forget that the cgroups you create have to have names that are
   suitable as UNIX file names, and that they live in the same namespace as the
   various kernel attribute files. Hence, when you want to allow the user
   arbitrary naming, you might need to escape some of the names (for example,
   you really don't want to create a cgroup named `tasks`, just because the
   user created a container by that name, because `tasks` after all is a magic
   attribute in cgroup v1, and your `mkdir()` will hence fail with `EEXIST`).
   In systemd we do escaping by prefixing names that might collide with a
   kernel attribute name with an underscore. You might want to do the same,
   but how you do it is really up to you. Just do it, and be careful. (A sketch
   of such an escaping helper follows after this list.)

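As promised, here's a rough sketch of such an escaping helper; the collision
checks are deliberately incomplete and purely illustrative, systemd's actual
logic is more thorough:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Returns true if `name` could collide with a kernel attribute file in a
 * cgroup directory (illustrative, not exhaustive). */
static bool needs_escape(const char *name) {
        return strcmp(name, "tasks") == 0 ||
               strcmp(name, "notify_on_release") == 0 ||
               strncmp(name, "cgroup.", 7) == 0 ||
               name[0] == '_'; /* keep the escaping reversible */
}

/* Writes an escaped copy of `name` into `buf`, prefixing '_' if needed. */
static int escape_cgroup_name(const char *name, char *buf, size_t size) {
        int n = snprintf(buf, size, "%s%s",
                         needs_escape(name) ? "_" : "", name);
        return (n < 0 || (size_t) n >= size) ? -1 : 0;
}

int main(void) {
        char buf[256];
        if (escape_cgroup_name("tasks", buf, sizeof(buf)) == 0)
                printf("%s\n", buf); /* prints "_tasks" */
        return 0;
}
```
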
## Some Don'ts

1. 🚫 Never create your own cgroups below arbitrary cgroups systemd manages,
   i.e. cgroups you haven't set `Delegate=` in. Specifically: 🔥 don't create
   your own cgroups below the root cgroup 🔥. That's owned by systemd, and you
   will step on systemd's toes if you ignore that, and systemd will step on
   yours. Get your own delegated sub-tree, you may create as many cgroups there
   as you like. Seriously, if you create cgroups directly in the cgroup root,
   then all you do is ask for trouble.

2. 🚫 Don't attempt to set `Delegate=` in slice units, and in particular not in
   `-.slice`. It's not supported, and will generate an error.

3. 🚫 Never *write* to any of the attributes of a cgroup systemd created for
   you. It's systemd's private property. You are welcome to manipulate the
   attributes of cgroups you created in your own delegated sub-tree, but the
   cgroup tree of systemd itself is off limits for you. It's fine to *read*
   from any attribute you like however. That's totally OK and welcome.

4. 🚫 When not using `CLONE_NEWCGROUP` when delegating a sub-tree to a
   container payload running systemd, then don't get the idea that you can bind
   mount only a sub-tree of the host's cgroup tree into the container. Part of
   the cgroup API is that `/proc/$PID/cgroup` reports the cgroup path of every
   process, and hence any path below `/sys/fs/cgroup/` needs to match what
   `/proc/$PID/cgroup` of the payload processes reports. What you can do safely
   however, is mount the upper parts of the cgroup tree read-only (or even
   replace the middle bits with an intermediary `tmpfs` — but be careful not to
   break the `statfs()` detection logic discussed above), as long as the path
   to the delegated sub-tree remains accessible as-is.

5. ⚡ Currently, the algorithm for mapping between slice/scope/service unit
   naming and their cgroup paths is not considered public API of systemd, and
   may change in future versions. This means: it's best to avoid implementing a
   local logic of translating cgroup paths to slice/scope/service names in your
   program, or vice versa — it's likely going to break sooner or later. Use the
   appropriate D-Bus API calls for that instead, so that systemd translates
   this for you. (Specifically: each Unit object has a `ControlGroup` property
   to get the cgroup for a unit. The method `GetUnitByControlGroup()` may be
   used to get the unit for a cgroup. A sketch of the latter follows after
   this list.)

6. ⚡ Think twice before delegating cgroup v1 controllers to less privileged
   containers. It's not safe, you basically allow your containers to freeze the
   system with that and worse. Delegation is a strong point of cgroup v2 though,
   and there it's safe to treat delegation boundaries as privilege boundaries.

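As promised, a sketch of that lookup via sd-bus; the cgroup path is invented,
and the method signature (cgroup path string in, unit object path out) is an
assumption based on the description above:

```c
#include <stdio.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(void) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *reply = NULL;
        sd_bus *bus = NULL;
        const char *unit;
        int r;

        r = sd_bus_default_system(&bus);
        if (r < 0)
                goto finish;

        /* Ask systemd which unit a given cgroup belongs to. */
        r = sd_bus_call_method(bus,
                               "org.freedesktop.systemd1",
                               "/org/freedesktop/systemd1",
                               "org.freedesktop.systemd1.Manager",
                               "GetUnitByControlGroup",
                               &error, &reply, "s",
                               "/system.slice/mymanager.service/payload-xyz");
        if (r < 0)
                goto finish;

        /* The reply carries the unit's D-Bus object path. */
        r = sd_bus_message_read(reply, "o", &unit);
        if (r >= 0)
                printf("%s\n", unit);

finish:
        if (r < 0)
                fprintf(stderr, "Failed: %s\n",
                        error.message ? error.message : strerror(-r));
        sd_bus_error_free(&error);
        sd_bus_message_unref(reply);
        sd_bus_unref(bus);
        return r < 0;
}
```
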
And that's it for now. If you have further questions, refer to the systemd
mailing list.

— Berlin, 2018-04-20