==========================
Memory Resource Controller
==========================

.. caution::
   This document is hopelessly outdated and it asks for a complete
   rewrite. It still contains useful information so we are keeping it
   here, but make sure to check the current code if you need a deeper
   understanding.

.. note::
   The Memory Resource Controller has generically been referred to as the
   memory controller in this document. Do not confuse the memory controller
   used here with the memory controller that is used in hardware.

.. hint::
   When we mention a cgroup (cgroupfs's directory) with the memory
   controller, we call it "memory cgroup". When you look at git log and
   source code, you will see that patch titles and function names tend to
   use "memcg". In this document, we avoid using it.

Benefits and Purpose of the memory controller
=============================================

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12]_ mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications.
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one or use the controller just
   for fun (to learn and hack on the VM subsystem).

Current Status: linux-2.6.34-mmotm (development version of April 2010)

Features:

- accounting of anonymous pages, file caches and swap caches usage, and
  limiting them.
- pages are linked to per-memcg LRU lists exclusively; there is no global LRU.
- optionally, memory+swap usage can be accounted and limited.
- hierarchical accounting
- soft limit
- moving (recharging) accounted pages when a task moves is selectable.
- usage threshold notifier
- memory pressure notifier
- oom-killer disable knob and oom-notifier
- Root cgroup has no limit controls.

Kernel memory support is a work in progress, and the current version provides
basic functionality. (See :ref:`section 2.7
<cgroup-v1-memory-kernel-extension>`)

Brief summary of control files.

==================================== ==========================================
tasks                                attach a task (thread) and show the list
                                     of threads
cgroup.procs                         show the list of processes
cgroup.event_control                 an interface for event_fd()
                                     This knob is not available on
                                     CONFIG_PREEMPT_RT systems.
memory.usage_in_bytes                show current usage for memory
                                     (See 5.5 for details)
memory.memsw.usage_in_bytes          show current usage for memory+Swap
                                     (See 5.5 for details)
memory.limit_in_bytes                set/show limit of memory usage
memory.memsw.limit_in_bytes          set/show limit of memory+Swap usage
memory.failcnt                       show the number of memory usage hits
                                     limits
memory.memsw.failcnt                 show the number of memory+Swap hits
                                     limits
memory.max_usage_in_bytes            show max memory usage recorded
memory.memsw.max_usage_in_bytes      show max memory+Swap usage recorded
memory.soft_limit_in_bytes           set/show soft limit of memory usage
                                     This knob is not available on
                                     CONFIG_PREEMPT_RT systems.
memory.stat                          show various statistics
memory.use_hierarchy                 set/show hierarchical account enabled
                                     This knob is deprecated and shouldn't be
                                     used.
memory.force_empty                   trigger forced page reclaim
memory.pressure_level                set memory pressure notifications
memory.swappiness                    set/show swappiness parameter of vmscan
                                     (See sysctl's vm.swappiness)
memory.move_charge_at_immigrate      set/show controls of moving charges
                                     This knob is deprecated and shouldn't be
                                     used.
memory.oom_control                   set/show oom controls.
memory.numa_stat                     show the number of memory usage per numa
                                     node
memory.kmem.limit_in_bytes           Deprecated knob to set and read the
                                     kernel memory hard limit. The kernel
                                     hard limit is not supported since 5.16.
                                     Writing any value to the file has no
                                     effect, the same as if the nokmem kernel
                                     parameter was specified. Kernel memory
                                     is still charged and reported by
                                     memory.kmem.usage_in_bytes.
memory.kmem.usage_in_bytes           show current kernel memory allocation
memory.kmem.failcnt                  show the number of kernel memory usage
                                     hits limits
memory.kmem.max_usage_in_bytes       show max kernel memory usage recorded

memory.kmem.tcp.limit_in_bytes       set/show hard limit for tcp buf memory
memory.kmem.tcp.usage_in_bytes       show current tcp buf memory allocation
memory.kmem.tcp.failcnt              show the number of tcp buf memory usage
                                     hits limits
memory.kmem.tcp.max_usage_in_bytes   show max tcp buf memory usage recorded
==================================== ==========================================

1. History
==========

The memory controller has a long history. A request for comments for the memory
controller was posted by Balbir Singh [1]_. At the time the RFC was posted
there were several implementations for memory control. The goal of the
RFC was to build consensus and agreement for the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]_
in Feb 2007. Pavel Emelianov [3]_ [4]_ [5]_ has since posted three versions
of the RSS controller. At OLS, at the resource management BoF, everyone
suggested that we handle both page cache and RSS together. Another request was
raised to allow user space handling of OOM. The current memory controller is
at version 6; it combines both mapped (RSS) and unmapped Page
Cache Control [11]_.

2. Memory Control
=================

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design
-----------

The core of the design is a counter called the page_counter. The
page_counter tracks the current memory usage and limit of the group of
processes associated with the controller. Each cgroup has a memory controller
specific data structure (mem_cgroup) associated with it.

2.2. Accounting
---------------

.. code-block::
   :caption: Figure 1: Hierarchy of Accounting

      +--------------------+
      |     mem_cgroup     |
      |   (page_counter)   |
      +--------------------+
        /       ^       \
       /        |        \
      +-------------+  |  +-------------+
      |  mm_struct  |  |..|  mm_struct  |
      |             |  |  |             |
      +-------------+  |  +-------------+
                       |
                       +--------------+
                                      |
      +-------------+       +--------+------+
      |    page     +------>|  page_cgroup  |
      |             |       |               |
      +-------------+       +---------------+

Figure 1 shows the important aspects of the controller

1. Accounting happens per cgroup
2. Each mm_struct knows about which cgroup it belongs to
3. Each page has a pointer to the page_cgroup, which in turn knows the
   cgroup it belongs to

The accounting is done as follows: mem_cgroup_charge_common() is invoked to
set up the necessary data structures and check if the cgroup that is being
charged is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page meta-data structure called page_cgroup is
updated. page_cgroup has its own LRU on the cgroup.
(*) The page_cgroup structure is allocated at boot/memory-hotplug time.

2.2.1 Accounting details
------------------------

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
Some pages which are never reclaimable and will not be on the LRU
are not accounted. We just account pages under usual VM management.

RSS pages are accounted at page_fault unless they've already been accounted
for earlier. A file page will be accounted for as Page Cache when it's
inserted into the inode (xarray). While it's mapped into the page tables of
processes, duplicate accounting is carefully avoided.

An RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from the xarray. Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed. Such SwapCaches are also accounted.
A swapped-in page is accounted after being added to the swapcache.

Note: The kernel does swapin-readahead and reads multiple swap entries at
once. Since the page's memcg is recorded into swap regardless of whether
memsw is enabled, the page will be accounted after swapin.

At page migration, accounting information is kept.

Note: we just account pages-on-LRU because our purpose is to control the
amount of used pages; not-on-LRU pages tend to be out-of-control from the
VM's point of view.

2.3 Shared Page Accounting
--------------------------

Shared pages are accounted on the basis of the first-touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

But see :ref:`section 8.2 <cgroup-v1-memory-movable-charges>`: when moving a
task to another cgroup, its pages may be recharged to the new cgroup, if
move_charge_at_immigrate has been chosen.

2.4 Swap Extension
------------------

Swap usage is always recorded for each cgroup. The Swap Extension allows you
to read and limit it.

When CONFIG_SWAP is enabled, the following files are added.

- memory.memsw.usage_in_bytes.
- memory.memsw.limit_in_bytes.

memsw means memory+swap. Usage of memory+swap is limited by
memsw.limit_in_bytes.

Example: Assume a system with 4G of swap. A task which allocates 6G of memory
(by mistake) under a 2G memory limitation will use all the swap.
In this case, setting memsw.limit_in_bytes=3G will prevent bad use of swap.
By using the memsw limit, you can avoid a system OOM which can be caused by
swap shortage.
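The arithmetic behind this example can be sketched as follows; the limit
values are the ones from the example above, and the derived variable names
are ours, not kernel interfaces:

```shell
# Sketch of the example above: with memory.limit_in_bytes = 2G and
# memory.memsw.limit_in_bytes = 3G, the swap a runaway task can consume
# is bounded by the difference between the two limits.
mem_limit=$((2 * 1024 * 1024 * 1024))    # memory.limit_in_bytes (2G)
memsw_limit=$((3 * 1024 * 1024 * 1024))  # memory.memsw.limit_in_bytes (3G)
max_swap=$((memsw_limit - mem_limit))
echo "$max_swap"                         # at most 1G (1073741824) of swap
```

Note that memsw.limit_in_bytes is a limit on the sum of memory and swap, not
on swap alone, and it cannot be set below memory.limit_in_bytes.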

2.4.1 why 'memory+swap' rather than swap
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The global LRU (kswapd) can swap out arbitrary pages. Swapping out means
moving the charge from memory to swap... there is no change in the usage of
memory+swap. In other words, when we want to limit the usage of swap without
affecting the global LRU, a memory+swap limit is better than just limiting
swap, from an OS point of view.

2.4.2. What happens when a cgroup hits memory.memsw.limit_in_bytes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a cgroup hits memory.memsw.limit_in_bytes, it's useless to do swap-out
from this cgroup. Then, swap-out will not be done by the cgroup routine and
file caches are dropped instead. But as mentioned above, the global LRU can
still swap out memory from it for the sanity of the system's memory
management state. You can't forbid that by cgroup.

2.5 Reclaim
-----------

Each cgroup maintains a per-cgroup LRU which has the same structure as the
global VM's. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup. (See :ref:`10. OOM Control <cgroup-v1-memory-oom-control>` below.)

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per-cgroup LRU
list.

.. note::
   Reclaim does not work for the root cgroup, since we cannot set any
   limits on the root cgroup.

.. note::
   When panic_on_oom is set to "2", the whole system will panic.

When an oom event notifier is registered, the event will be delivered.
(See the :ref:`oom_control <cgroup-v1-memory-oom-control>` section)

2.6 Locking
-----------

Lock order is as follows::

  Page lock (PG_locked bit of page->flags)
    mm->page_table_lock or split pte_lock
      folio_memcg_lock (memcg->move_lock)
        mapping->i_pages lock
          lruvec->lru_lock.

Per-node-per-memcgroup LRU (cgroup's private LRU) is guarded by
lruvec->lru_lock; the PG_lru bit of page->flags is cleared before
isolating a page from its LRU under lruvec->lru_lock.

.. _cgroup-v1-memory-kernel-extension:

2.7 Kernel Memory Extension
---------------------------

With the Kernel memory extension, the Memory Controller is able to limit
the amount of kernel memory used by the system. Kernel memory is fundamentally
different from user memory, since it can't be swapped out, which makes it
possible to DoS the system by consuming too much of this precious resource.

Kernel memory accounting is enabled for all memory cgroups by default. But
it can be disabled system-wide by passing cgroup.memory=nokmem to the kernel
at boot time. In this case, kernel memory will not be accounted at all.

Kernel memory limits are not imposed for the root cgroup. Usage for the root
cgroup may or may not be accounted. The memory used is accumulated into
memory.kmem.usage_in_bytes, or in a separate counter when it makes sense
(currently only for tcp).

The main "kmem" counter is fed into the main counter, so kmem charges will
also be visible from the user counter.

Currently no soft limit is implemented for kernel memory. It is future work
to trigger slab reclaim when those limits are reached.

2.7.1 Current Kernel Memory resources accounted
-----------------------------------------------

stack pages:
  every process consumes some stack pages. By accounting them into
  kernel memory, we prevent new processes from being created when the kernel
  memory usage is too high.

slab pages:
  pages allocated by the SLAB or SLUB allocator are tracked. A copy
  of each kmem_cache is created every time the cache is touched for the first
  time from inside the memcg. The creation is done lazily, so some objects
  can still be skipped while the cache is being created. All objects in a
  slab page should belong to the same memcg. This only fails to hold when a
  task is migrated to a different memcg during the page allocation by the
  cache.

sockets memory pressure:
  some socket protocols have memory pressure
  thresholds. The Memory Controller allows them to be controlled individually
  per cgroup, instead of globally.

tcp memory pressure:
  sockets memory pressure for the tcp protocol.

2.7.2 Common use cases
----------------------

Because the "kmem" counter is fed to the main user counter, kernel memory can
never be limited completely independently of user memory. Say "U" is the user
limit, and "K" the kernel limit. There are three possible ways limits can be
set:

U != 0, K = unlimited:
  This is the standard memcg limitation mechanism already present before kmem
  accounting. Kernel memory is completely ignored.

U != 0, K < U:
  Kernel memory is a subset of the user memory. This setup is useful in
  deployments where the total amount of memory per-cgroup is overcommitted.
  Overcommitting kernel memory limits is definitely not recommended, since
  the box can still run out of non-reclaimable memory.
  In this case, the admin could set up K so that the sum of all groups is
  never greater than the total memory, and freely set U at the cost of the
  QoS.

  .. warning::
     In the current implementation, memory reclaim will NOT be triggered for
     a cgroup when it hits K while staying below U, which makes this setup
     impractical.

U != 0, K >= U:
  Kernel memory charges are also fed to the user counter, and reclaim will
  be triggered for the cgroup for both kinds of memory. This setup gives the
  admin a unified view of memory, and it is also useful for people who just
  want to track kernel memory usage.
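The sizing rule for the "K < U" setup above can be sanity-checked with simple
arithmetic. A minimal sketch, where the machine size and per-group kernel
limits are made-up example figures, not defaults:

```shell
# Sketch: in the "U != 0, K < U" setup, the admin sizes the kernel limits (K)
# so that their sum never exceeds the box's total memory, while the user
# limits (U) may be overcommitted. Example figures are arbitrary.
total_mem=$((8 * 1024 * 1024 * 1024))         # assume an 8G machine
k_limits="2147483648 2147483648 3221225472"   # per-group K: 2G + 2G + 3G
sum=0
for k in $k_limits; do sum=$((sum + k)); done
if [ "$sum" -le "$total_mem" ]; then
    echo "kernel limits fit: $sum <= $total_mem"
fi
```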

3. User Interface
=================

To use the user interface:

1. Enable CONFIG_CGROUPS and CONFIG_MEMCG options
2. Prepare the cgroups (see :ref:`Why are cgroups needed?
   <cgroups-why-needed>` for the background information)::

	# mount -t tmpfs none /sys/fs/cgroup
	# mkdir /sys/fs/cgroup/memory
	# mount -t cgroup none /sys/fs/cgroup/memory -o memory

3. Make the new group and move bash into it::

	# mkdir /sys/fs/cgroup/memory/0
	# echo $$ > /sys/fs/cgroup/memory/0/tasks

4. Since now we're in the 0 cgroup, we can alter the memory limit::

	# echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes

   The limit can now be queried::

	# cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
	4194304

.. note::
   We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
   mega or gigabytes. (Here, Kilo, Mega, Giga are Kibibytes, Mebibytes,
   Gibibytes.)
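As a sketch of how those suffixes translate to bytes (the helper function
name is ours for illustration, not a kernel interface):

```shell
# Convert a limit value with an optional k/K/m/M/g/G suffix into bytes,
# using the binary multiples described in the note above.
to_bytes() {
    case $1 in
        *[kK]) echo $(( ${1%?} * 1024 )) ;;
        *[mM]) echo $(( ${1%?} * 1024 * 1024 )) ;;
        *[gG]) echo $(( ${1%?} * 1024 * 1024 * 1024 )) ;;
        *)     echo "$1" ;;
    esac
}
to_bytes 4M    # prints 4194304, the value read back in step 4 above
```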

.. note::
   We can write "-1" to reset the ``*.limit_in_bytes`` (unlimited).

.. note::
   We cannot set limits on the root cgroup any more.

We can check the usage::

  # cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
  1216512

A successful write to this file does not guarantee a successful setting of
this limit to the value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to guarantee the value committed by the kernel::

  # echo 1 > memory.limit_in_bytes
  # cat memory.limit_in_bytes
  4096

The memory.failcnt field gives the number of times that the cgroup limit was
exceeded.

The memory.stat file gives accounting information. Now, the number of
caches, RSS and Active pages/Inactive pages is shown.

4. Testing
==========

For testing features and implementation, see memcg_test.txt.

Performance tests are also important. To see the pure memory controller
overhead, testing on tmpfs will give you good numbers for the small
overheads. Example: do a kernel make on tmpfs.

Page-fault scalability is also important. When measuring a parallel
page fault test, a multi-process test may be better than a multi-thread
test because the latter has noise from shared objects/status.

But the above two test extreme situations.
Trying a usual test under the memory controller is always helpful.

.. _cgroup-v1-memory-test-troubleshoot:

4.1 Troubleshooting
-------------------

Sometimes a user might find that the application under a cgroup is
terminated by the OOM killer. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).

To know what happens, disabling OOM_Kill as per :ref:`"10. OOM Control"
<cgroup-v1-memory-oom-control>` (below) and seeing what happens will be
helpful.

.. _cgroup-v1-memory-test-task-migration:

4.2 Task migration
------------------

When a task migrates from one cgroup to another, its charge is not
carried forward by default. The pages allocated from the original cgroup
still remain charged to it; the charge is dropped when the page is freed or
reclaimed.

You can move charges of a task along with task migration.
See :ref:`8. "Move charges at task migration" <cgroup-v1-memory-move-charges>`

4.3 Removing a cgroup
---------------------

A cgroup can be removed by rmdir, but as discussed in :ref:`sections 4.1
<cgroup-v1-memory-test-troubleshoot>` and :ref:`4.2
<cgroup-v1-memory-test-task-migration>`, a cgroup might have some charge
associated with it, even though all tasks have migrated away from it
(because we charge against pages, not against tasks).

We move the stats to the parent, and there is no change in the charge except
for uncharging from the child.

Charges recorded in swap information are not updated at removal of a cgroup.
The recorded information is discarded and a cgroup which uses swap
(swapcache) will be charged as the new owner of it.

5. Misc. interfaces
===================

5.1 force_empty
---------------

The memory.force_empty interface is provided to make a cgroup's memory usage
empty. When writing anything to this::

  # echo 0 > memory.force_empty

the cgroup will be reclaimed and as many pages reclaimed as possible.

The typical use case for this interface is before calling rmdir().
Though rmdir() offlines the memcg, the memcg may still stay there due to
charged file caches. Some out-of-use page caches may keep charged until
memory pressure happens. If you want to avoid that, force_empty will be
useful.

5.2 stat file
-------------

The memory.stat file includes the following statistics:

* per-memory cgroup local status

  =============== =============================================================
  cache           # of bytes of page cache memory.
  rss             # of bytes of anonymous and swap cache memory (includes
                  transparent hugepages).
  rss_huge        # of bytes of anonymous transparent hugepages.
  mapped_file     # of bytes of mapped file (includes tmpfs/shmem)
  pgpgin          # of charging events to the memory cgroup. The charging
                  event happens each time a page is accounted as either a
                  mapped anon page (RSS) or a cache page (Page Cache) to the
                  cgroup.
  pgpgout         # of uncharging events to the memory cgroup. The uncharging
                  event happens each time a page is unaccounted from the
                  cgroup.
  swap            # of bytes of swap usage
  dirty           # of bytes that are waiting to get written back to the
                  disk.
  writeback       # of bytes of file/anon cache that are queued for syncing
                  to disk.
  inactive_anon   # of bytes of anonymous and swap cache memory on inactive
                  LRU list.
  active_anon     # of bytes of anonymous and swap cache memory on active
                  LRU list.
  inactive_file   # of bytes of file-backed memory and MADV_FREE anonymous
                  memory (LazyFree pages) on inactive LRU list.
  active_file     # of bytes of file-backed memory on active LRU list.
  unevictable     # of bytes of memory that cannot be reclaimed (mlocked
                  etc).
  =============== =============================================================

* status considering hierarchy (see memory.use_hierarchy settings):

  ========================= ===================================================
  hierarchical_memory_limit # of bytes of memory limit with regard to the
                            hierarchy under which the memory cgroup is
  hierarchical_memsw_limit  # of bytes of memory+swap limit with regard to
                            the hierarchy under which the memory cgroup is.

  total_<counter>           # hierarchical version of <counter>, which in
                            addition to the cgroup's own value includes the
                            sum of all hierarchical children's values of
                            <counter>, i.e. total_cache
  ========================= ===================================================

* additional vm parameters (depends on CONFIG_DEBUG_VM):

  ========================= ========================================
  recent_rotated_anon       VM internal parameter. (see mm/vmscan.c)
  recent_rotated_file       VM internal parameter. (see mm/vmscan.c)
  recent_scanned_anon       VM internal parameter. (see mm/vmscan.c)
  recent_scanned_file       VM internal parameter. (see mm/vmscan.c)
  ========================= ========================================

.. hint::
   recent_rotated means the recent frequency of LRU rotation.
   recent_scanned means the recent # of scans of the LRU.
   These are shown for better debugging; please see the code for their
   meanings.

.. note::
   Only anonymous and swap cache memory is listed as part of the 'rss' stat.
   This should not be confused with the true 'resident set size' or the
   amount of physical memory used by the cgroup.

   'rss + mapped_file' will give you the resident set size of the cgroup.

   (Note: file and shmem may be shared among other cgroups. In that case,
   mapped_file is accounted only when the memory cgroup is the owner of the
   page cache.)
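The 'rss + mapped_file' computation from the note can be done directly from a
memory.stat dump, e.g. with awk; the stat values below are fabricated sample
numbers, not real output:

```shell
# Sum the rss and mapped_file lines of a memory.stat dump to get the
# cgroup's resident set size. Sample values are illustrative only.
stat_dump='cache 1048576
rss 2097152
mapped_file 524288
swap 0'
echo "$stat_dump" | awk '$1 == "rss" || $1 == "mapped_file" { sum += $2 }
                         END { print sum }'    # prints 2621440
```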

5.3 swappiness
--------------

Overrides /proc/sys/vm/swappiness for the particular group. The tunable
in the root cgroup corresponds to the global swappiness setting.

Please note that, unlike during global reclaim, limit reclaim enforces
that a swappiness of 0 really prevents any swapping, even if swap
storage is available. This might lead to the memcg OOM killer being
invoked if there are no file pages to reclaim.

5.4 failcnt
-----------

A memory cgroup provides memory.failcnt and memory.memsw.failcnt files.
This failcnt (== failure count) shows the number of times that a usage
counter hit its limit. When a memory cgroup hits a limit, failcnt
increases and memory under it will be reclaimed.

You can reset failcnt by writing 0 to the failcnt file::

    # echo 0 > .../memory.failcnt

5.5 usage_in_bytes
------------------

For efficiency, as with other kernel components, the memory cgroup uses
some optimization to avoid unnecessary cacheline false sharing.
usage_in_bytes is affected by the method and doesn't show the 'exact'
value of memory (and swap) usage; it's a fuzz value for efficient
access. (Of course, when necessary, it's synchronized.) If you want to
know the more exact memory usage, you should use the RSS+CACHE(+SWAP)
value in memory.stat (see 5.2).

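As a sketch of that advice, the following sums RSS+CACHE(+SWAP) from memory.stat-style input. The here-document, with made-up values, stands in for a live cgroup's file.

```shell
# Sum the rss, cache and swap counters, per the RSS+CACHE(+SWAP) advice.
# The here-document replaces a real .../memory.stat; values are samples.
exact=$(awk '$1 == "rss" || $1 == "cache" || $1 == "swap" { sum += $2 }
             END { print sum }' <<'EOF'
cache 4194304
rss 8388608
swap 1048576
mapped_file 2097152
EOF
)
echo "$exact"    # exact usage in bytes
```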
5.6 numa_stat
-------------

This is similar to numa_maps but operates on a per-memcg basis. This is
useful for providing visibility into the numa locality information
within a memcg since the pages are allowed to be allocated from any
physical node. One of the use cases is evaluating application
performance by combining this information with the application's CPU
allocation.

Each memcg's numa_stat file includes "total", "file", "anon" and
"unevictable" per-node page counts including "hierarchical_<counter>"
which sums up all hierarchical children's values in addition to the
memcg's own value.

The output format of memory.numa_stat is::

    total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
    file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
    anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
    unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...
    hierarchical_<counter>=<counter pages> N0=<node 0 pages> N1=<node 1 pages> ...

The "total" count is the sum of file + anon + unevictable.

6. Hierarchy support
====================

The memory controller supports a deep hierarchy and hierarchical
accounting. The hierarchy is created by creating the appropriate cgroups
in the cgroup filesystem. Consider, for example, the following cgroup
filesystem hierarchy::

           root
          /  |  \
         /   |   \
        a    b    c
                  |  \
                  |   \
                  d    e

In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up until the root (i.e., c and
root). If one of the ancestors goes over its limit, the reclaim
algorithm reclaims from the tasks in the ancestor and the children of
the ancestor.

6.1 Hierarchical accounting and reclaim
---------------------------------------

Hierarchical accounting is enabled by default. Disabling hierarchical
accounting is deprecated; an attempt to do so will fail with a warning
printed to dmesg.

For compatibility reasons, writing 1 to memory.use_hierarchy will always
succeed::

    # echo 1 > memory.use_hierarchy

7. Soft limits
==============

Soft limits allow for greater sharing of memory. The idea behind soft
limits is to allow control groups to use as much of the memory as
needed, provided

a. There is no memory contention
b. They do not exceed their hard limit

When the system detects memory contention or low memory, control groups
are pushed back to their soft limits. If the soft limit of each control
group is very high, they are pushed back as much as possible to make
sure that one control group does not starve the others of memory.

Please note that soft limits are a best-effort feature; they come with
no guarantees, but the implementation does its best to make sure that
when memory is heavily contended for, memory is allocated based on the
soft limit hints/setup. Currently, soft limit based reclaim is set up
such that it gets invoked from balance_pgdat (kswapd).

7.1 Interface
-------------

Soft limits can be set up by using the following commands (in this
example we assume a soft limit of 256 MiB)::

    # echo 256M > memory.soft_limit_in_bytes

If we want to change this to 1G, we can at any time use::

    # echo 1G > memory.soft_limit_in_bytes

.. note::
    Soft limits take effect over a long period of time, since they
    involve reclaiming memory for balancing between memory cgroups.

.. note::
    It is recommended to always set the soft limit below the hard limit,
    otherwise the hard limit will take precedence.

.. _cgroup-v1-memory-move-charges:

8. Move charges at task migration (DEPRECATED!)
===============================================

THIS IS DEPRECATED!

It's expensive and unreliable! It's better practice to launch workload
tasks directly from inside their target cgroup. Use dedicated workload
cgroups to allow fine-grained policy adjustments without having to
move physical pages between control domains.

Users can move charges associated with a task along with task migration,
that is, uncharge the task's pages from the old cgroup and charge them
to the new cgroup. This feature is not supported in !CONFIG_MMU
environments because of the lack of page tables.

751 | |
752 | 8.1 Interface | |
99c8b231 | 753 | ------------- |
7dc74be0 | 754 | |
8173d5a4 | 755 | This feature is disabled by default. It can be enabled (and disabled again) by |
7dc74be0 DN |
756 | writing to memory.move_charge_at_immigrate of the destination cgroup. |
757 | ||
99c8b231 | 758 | If you want to enable it:: |
7dc74be0 | 759 | |
99c8b231 | 760 | # echo (some positive value) > memory.move_charge_at_immigrate |
7dc74be0 | 761 | |
56eb2767 | 762 | .. note:: |
99c8b231 | 763 | Each bits of move_charge_at_immigrate has its own meaning about what type |
da3ad2e1 BS |
764 | of charges should be moved. See :ref:`section 8.2 |
765 | <cgroup-v1-memory-movable-charges>` for details. | |
56eb2767 BS |
766 | |
767 | .. note:: | |
99c8b231 | 768 | Charges are moved only when you move mm->owner, in other words, |
1939c557 | 769 | a leader of a thread group. |
56eb2767 BS |
770 | |
771 | .. note:: | |
99c8b231 | 772 | If we cannot find enough space for the task in the destination cgroup, we |
7dc74be0 DN |
773 | try to make space by reclaiming memory. Task migration may fail if we |
774 | cannot make enough space. | |
56eb2767 BS |
775 | |
776 | .. note:: | |
99c8b231 | 777 | It can take several seconds if you move charges much. |
7dc74be0 | 778 | |
99c8b231 | 779 | And if you want disable it again:: |
7dc74be0 | 780 | |
99c8b231 | 781 | # echo 0 > memory.move_charge_at_immigrate |
7dc74be0 | 782 | |
.. _cgroup-v1-memory-movable-charges:

8.2 Type of charges which can be moved
--------------------------------------

Each bit in move_charge_at_immigrate has its own meaning about what type
of charges should be moved. But in any case, it must be noted that an
account of a page or a swap can be moved only when it is charged to the
task's current (old) memory cgroup.

+---+--------------------------------------------------------------------------+
|bit| what type of charges would be moved ?                                    |
+===+==========================================================================+
| 0 | A charge of an anonymous page (or swap of it) used by the target task.   |
|   | You must enable Swap Extension (see 2.4) to enable move of swap charges. |
+---+--------------------------------------------------------------------------+
| 1 | A charge of file pages (normal file, tmpfs file (e.g. ipc shared memory) |
|   | and swaps of tmpfs file) mmapped by the target task. Unlike the case of  |
|   | anonymous pages, file pages (and swaps) in the range mmapped by the task |
|   | will be moved even if the task hasn't done a page fault, i.e. they might |
|   | not be the task's "RSS", but other task's "RSS" that maps the same file. |
|   | The mapcount of the page is ignored (the page can be moved even if       |
|   | page_mapcount(page) > 1). You must enable Swap Extension (see 2.4) to    |
|   | enable move of swap charges.                                             |
+---+--------------------------------------------------------------------------+
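
The bits combine into a mask; for instance, writing 3 requests moving both anonymous and file charges. A quick sketch of the arithmetic (the final write to the control file is shown only as a comment, since it needs a live cgroup):

```shell
# Bit 0 moves anonymous page (and swap) charges,
# bit 1 moves file page charges.
MOVE_ANON=$((1 << 0))    # 1
MOVE_FILE=$((1 << 1))    # 2
mask=$((MOVE_ANON | MOVE_FILE))
echo "$mask"
# On a real system: echo $mask > memory.move_charge_at_immigrate
```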

8.3 TODO
--------

- All of the moving charge operations are done under cgroup_mutex. It's
  not good behavior to hold the mutex too long, so we may need some
  trick.

9. Memory thresholds
====================

Memory cgroup implements memory thresholds using the cgroups
notification API (see cgroups.txt). It allows registering multiple
memory and memsw thresholds and getting notifications when a threshold
is crossed.

To register a threshold, an application must:

- create an eventfd using eventfd(2);
- open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
- write a string like "<event_fd> <fd of memory.usage_in_bytes>
  <threshold>" to cgroup.event_control.

The application will be notified through the eventfd when memory usage
crosses the threshold in either direction.

This is applicable to both root and non-root cgroups.

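The registration string has a fixed "<event_fd> <fd> <threshold>" shape. A minimal sketch that only assembles the string; the descriptor numbers are hypothetical stand-ins for real eventfd(2)/open(2) results:

```shell
# Hypothetical descriptor numbers; on a real system they come from
# eventfd(2) and from open()ing memory.usage_in_bytes.
event_fd=4
usage_fd=5
threshold=$((256 * 1024 * 1024))    # notify when usage crosses 256 MiB
control="$event_fd $usage_fd $threshold"
echo "$control"
# On a real system: echo "$control" > cgroup.event_control
```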
.. _cgroup-v1-memory-oom-control:

10. OOM Control
===============

The memory.oom_control file is for OOM notification and other controls.

Memory cgroup implements an OOM notifier using the cgroup notification
API (see cgroups.txt). It allows registering multiple OOM notification
deliveries and getting a notification when an OOM happens.

To register a notifier, an application must:

- create an eventfd using eventfd(2)
- open the memory.oom_control file
- write a string like "<event_fd> <fd of memory.oom_control>" to
  cgroup.event_control

The application will be notified through the eventfd when an OOM
happens. OOM notification doesn't work for the root cgroup.

You can disable the OOM-killer by writing "1" to the memory.oom_control
file, as::

    # echo 1 > memory.oom_control

If the OOM-killer is disabled, tasks under the cgroup will hang/sleep
in the memory cgroup's OOM-waitqueue when they request accountable
memory.

To let them run again, you have to relax the memory cgroup's OOM status
by

* enlarging the limit or reducing usage.

To reduce usage,

* kill some tasks.
* move some tasks to another group with account migration.
* remove some files (on tmpfs?)

Then, the stopped tasks will work again.

On reading, the current status of OOM is shown:

- oom_kill_disable 0 or 1
  (if 1, the oom-killer is disabled)
- under_oom        0 or 1
  (if 1, the memory cgroup is under OOM, and tasks may be stopped.)
- oom_kill         integer counter
  The number of processes belonging to this cgroup killed by any
  kind of OOM killer.

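A script watching this file might parse it as below; the here-document stands in for a real memory.oom_control and the values are samples:

```shell
# Report whether the memcg is currently under OOM.
# The here-document replaces a real memory.oom_control; sample values.
status=$(awk '$1 == "under_oom" { print (($2 == 1) ? "under OOM" : "ok") }' <<'EOF'
oom_kill_disable 1
under_oom 0
oom_kill 0
EOF
)
echo "$status"
```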
11. Memory Pressure
===================

The pressure level notifications can be used to monitor the memory
allocation cost; based on the pressure, applications can implement
different strategies for managing their memory resources. The pressure
levels are defined as follows:

The "low" level means that the system is reclaiming memory for new
allocations. Monitoring this reclaiming activity might be useful for
maintaining cache level. Upon notification, the program (typically
"Activity Manager") might analyze vmstat and act in advance (i.e.
prematurely shut down unimportant services).

The "medium" level means that the system is experiencing medium memory
pressure; the system might be swapping, paging out active file caches,
etc. Upon this event, applications may decide to further analyze
vmstat/zoneinfo/memcg or internal memory usage statistics and free any
resources that can be easily reconstructed or re-read from a disk.

The "critical" level means that the system is actively thrashing, is
about to run out of memory (OOM), or the in-kernel OOM killer is even
on its way to trigger. Applications should do whatever they can to help
the system. It might be too late to consult with vmstat or any other
statistics, so it's advisable to take immediate action.

By default, events are propagated upward until the event is handled,
i.e. the events are not pass-through. For example, suppose you have
three cgroups: A->B->C. Now you set up an event listener on cgroups A,
B and C, and suppose group C experiences some pressure. In this
situation, only group C will receive the notification, i.e. groups A
and B will not receive it. This is done to avoid excessive
"broadcasting" of messages, which disturbs the system and which is
especially bad if we are low on memory or thrashing. Group B will
receive notification only if there are no event listeners for group C.

There are three optional modes that specify different propagation
behavior:

- "default": this is the default behavior specified above. This mode is
  the same as omitting the optional mode parameter, preserved for
  backwards compatibility.

- "hierarchy": events always propagate up to the root, similar to the
  default behavior, except that propagation continues regardless of
  whether there are event listeners at each level. In the above
  example, groups A, B, and C will receive notification of memory
  pressure.

- "local": events are pass-through, i.e. they only receive
  notifications when memory pressure is experienced in the memcg for
  which the notification is registered. In the above example, group C
  will receive notification if registered for "local" notification and
  the group experiences memory pressure. However, group B will never
  receive notification, regardless of whether there is an event
  listener for group C, if group B is registered for local
  notification.

The level and event notification mode ("hierarchy" or "local", if
necessary) are specified by a comma-delimited string, i.e.
"low,hierarchy" specifies hierarchical, pass-through notification for
all ancestor memcgs. The default, non-pass-through behavior does not
specify a mode. "medium,local" specifies pass-through notification for
the medium level.

The file memory.pressure_level is only used to set up an eventfd. To
register a notification, an application must:

- create an eventfd using eventfd(2);
- open memory.pressure_level;
- write a string as "<event_fd> <fd of memory.pressure_level>
  <level[,mode]>" to cgroup.event_control.

The application will be notified through the eventfd when memory
pressure is at the specific level (or higher). Read/write operations to
memory.pressure_level are not implemented.

Test:

Here is a small script example that makes a new cgroup, sets up a
memory limit, sets up a notification in the cgroup, and then makes the
child cgroup experience a critical pressure::

    # cd /sys/fs/cgroup/memory/
    # mkdir foo
    # cd foo
    # cgroup_event_listener memory.pressure_level low,hierarchy &
    # echo 8000000 > memory.limit_in_bytes
    # echo 8000000 > memory.memsw.limit_in_bytes
    # echo $$ > tasks
    # dd if=/dev/zero | read x

(Expect a bunch of notifications, and eventually, the oom-killer will
trigger.)

12. TODO
========

1. Make per-cgroup scanner reclaim not-shared pages first
2. Teach controller to account for shared pages
3. Start reclamation in the background when the limit is
   not yet hit but the usage is getting closer

Summary
=======

Overall, the memory controller has been a stable controller and has been
commented on and discussed quite extensively in the community.

References
==========

.. [1] Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
.. [2] Singh, Balbir. Memory Controller (RSS Control),
       http://lwn.net/Articles/222762/
.. [3] Emelianov, Pavel. Resource controllers based on process cgroups,
       https://lore.kernel.org/r/45ED7DEC.7010403@sw.ru
.. [4] Emelianov, Pavel. RSS controller based on process cgroups (v2),
       https://lore.kernel.org/r/461A3010.90403@sw.ru
.. [5] Emelianov, Pavel. RSS controller based on process cgroups (v3),
       https://lore.kernel.org/r/465D9739.8070209@openvz.org
.. [6] Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
.. [7] Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and
       control subsystem (v3), http://lwn.net/Articles/235534/
.. [8] Singh, Balbir. RSS controller v2 test results (lmbench),
       https://lore.kernel.org/r/464C95D4.7070806@linux.vnet.ibm.com
.. [9] Singh, Balbir. RSS controller v2 AIM9 results,
       https://lore.kernel.org/r/464D267A.50107@linux.vnet.ibm.com
.. [10] Singh, Balbir. Memory controller v6 test results,
        https://lore.kernel.org/r/20070819094658.654.84837.sendpatchset@balbir-laptop
.. [11] Singh, Balbir. Memory controller introduction (v6),
        https://lore.kernel.org/r/20070817084228.26003.12568.sendpatchset@balbir-laptop
.. [12] Corbet, Jonathan. Controlling memory use in cgroups,
        http://lwn.net/Articles/243795/