.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before putting the
controllers into use after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries. This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace. The mount option is
        ignored on non-init namespace mounts. Please refer to the
        Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees. This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace. The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups. This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees. This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
        Count HugeTLB memory usage towards the cgroup's overall
        memory usage for the memory controller (for the purpose of
        statistics reporting and memory protection). This is a new
        behavior that could regress existing setups, so it must be
        explicitly opted in with this mount option.

        A few caveats to keep in mind:

        * There is no HugeTLB pool management involved in the memory
          controller. The pre-allocated pool does not belong to anyone.
          Specifically, when a new HugeTLB folio is allocated to
          the pool, it is not accounted for from the perspective of the
          memory controller. It is only charged to a cgroup when it is
          actually used (e.g. at page fault time). Host memory
          overcommit management has to consider this when configuring
          hard limits. In general, HugeTLB pool management should be
          done via other mechanisms (such as the HugeTLB controller).
        * Failure to charge a HugeTLB folio to the memory controller
          results in SIGBUS. This could happen even if the HugeTLB pool
          still has pages available (but the cgroup limit is hit and
          reclaim attempt fails).
        * Charging HugeTLB memory towards the memory controller affects
          memory protection and reclaim dynamics. Any userspace tuning
          (of low and min limits, for example) needs to take this into
          account.
        * HugeTLB pages utilized while this option is not selected
          will not be tracked by the memory controller (even if cgroup
          v2 is remounted later on).

  pids_localevents
        The option restores v1-like behavior of pids.events:max, that
        is, only local (inside cgroup proper) fork failures are
        counted. Without this option pids.events:max represents any
        pids.max enforcement across the cgroup's subtree.
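
For example, a setup that wants namespace delegation and recursive
memory protection could combine the options at mount time (an
illustrative choice; any subset of the options above can be listed
in -o)::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup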


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
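
For example, assuming the hierarchy is mounted at /sys/fs/cgroup, the
current shell can be moved into a freshly created child cgroup (a
sketch with hypothetical names)::

  # mkdir /sys/fs/cgroup/test
  # echo $$ > /sys/fs/cgroup/test/cgroup.procs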

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids
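
As an illustrative sketch with hypothetical names, where $PID is a
multi-threaded process and $TID is one of its threads, a threaded
subtree can be assembled as follows::

  # mkdir threads threads/t1
  # echo threaded > threads/t1/cgroup.type      # "threads" becomes the threaded domain
  # echo $PID > threads/cgroup.procs            # the whole process joins the domain
  # echo +cpu > threads/cgroup.subtree_control  # distribute CPU among threaded children
  # echo $TID > threads/t1/cgroup.threads       # scatter one thread into t1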


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.
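
For example, a clean-up helper can wait for a subtree to empty by
watching the file for modification and then checking the field (a
sketch assuming the inotifywait utility from inotify-tools is
available)::

  # inotifywait -e modify test/cgroup.events    # returns when the value flips
  # grep populated test/cgroup.events
  populated 0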


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.
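
For example, given a chain parent/child where "memory" is already
enabled at the level above parent, the following sequence illustrates
the constraint (a sketch; the exact error message depends on the
shell)::

  # echo +memory > parent/cgroup.subtree_control
  # echo +memory > parent/child/cgroup.subtree_control
  # echo -memory > parent/cgroup.subtree_control
  sh: write error: Device or resource busy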


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
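
A sketch of that transfer with a hypothetical populated cgroup "jobs"
(note that each PID must be written with a separate write(2))::

  # mkdir jobs/leaf
  # for p in $(cat jobs/cgroup.procs); do echo $p > jobs/leaf/cgroup.procs; done
  # echo +memory > jobs/cgroup.subtree_control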


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
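
For the first method, the hand-off amounts to a few chowns (a sketch
assuming a delegatee user "u0" and a hypothetical cgroup "u0-slice")::

  # chown u0 u0-slice
  # chown u0 u0-slice/cgroup.procs
  # chown u0 u0-slice/cgroup.threads
  # chown u0 u0-slice/cgroup.subtree_control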


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
'_'s but never begins with an '_', so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
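
For instance, if sibling cgroups A and B have "cpu.weight" values of
200 and 100 and both are runnable, A receives 200/(200+100), i.e.
about 67%, of the parent's CPU cycles; if B goes idle, A may consume
everything (illustrative values)::

  # echo 200 > A/cpu.weight
  # echo 100 > B/cpu.weight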


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled. It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line. The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root. Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line. The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows a space separated list of all controllers available
        to the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups. Starts out empty.

        When read, it shows a space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        A space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers. A controller
        name prefixed with '+' enables the controller and '-'
        disables. If a controller appears more than once on the list,
        the last one is effective. When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup
                becomes dying after being deleted by a user. The
                cgroup will remain in dying state for some undefined
                time (which can depend on system load) before being
                completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding the limits which were active at the moment
                of cgroup deletion.

          nr_subsys_<cgroup_subsys>
                Total number of live cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

          nr_dying_subsys_<cgroup_subsys>
                Total number of dying cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups. Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups. This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen. Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in the
        cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups. If any ancestor cgroup is
        frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal. They also can enter and leave a frozen cgroup: either
        by an explicit move by a user, or if freezing of the cgroup
        races with fork(). If a process is moved to a frozen cgroup,
        it stops. If a process is moved out of a frozen cgroup, it
        becomes running.

        Frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.
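
        For example, a workload subtree can be paused and later
        resumed (a sketch using a hypothetical cgroup "workload"; the
        grep is repeated until the freeze completes)::

          # echo 1 > workload/cgroup.freeze
          # grep frozen workload/cgroup.events    # wait for "frozen 1"
          # echo 0 > workload/cgroup.freeze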

  cgroup.kill
        A write-only single value file which exists in non-root
        cgroups. The only allowed value is "1".

        Writing "1" to the file causes the cgroup and all descendant
        cgroups to be killed. This means that all processes located in
        the affected cgroup tree will be killed via SIGKILL.

        Killing a cgroup tree will deal with concurrent forks
        appropriately and is protected against migrations.

        In a threaded cgroup, writing this file fails with EOPNOTSUPP
        as killing cgroups is a process directed operation, i.e. it
        affects the whole thread-group.

  cgroup.pressure
        A read-write single value file whose allowed values are "0"
        and "1". The default is "1".

        Writing "0" to the file will disable the cgroup PSI accounting.
        Writing "1" to the file will re-enable the cgroup PSI accounting.

        This control attribute is not hierarchical, so disabling or
        enabling PSI accounting in a cgroup does not affect PSI
        accounting in its descendants, and enablement does not need to
        be passed down from the root through the ancestors.

        The reason this control attribute exists is that PSI accounts
        stalls for each cgroup separately and aggregates it at each
        level of the hierarchy. This may cause non-negligible overhead
        for some workloads under deep levels of the hierarchy, in
        which case this control attribute can be used to disable PSI
        accounting in the non-leaf cgroups.

  irq.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for IRQ/SOFTIRQ. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed. The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well
as the maximum desired frequency, which should not be exceeded by a
CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes. For
a kernel built with the CONFIG_RT_GROUP_SCHED option enabled for group
scheduling of realtime processes, the cpu controller can only be enabled
when all RT processes are in the root cgroup. This limitation does
not apply if CONFIG_RT_GROUP_SCHED is disabled. Be aware that system
management software may already have placed RT processes into nonroot
cgroups during the system boot process, and these processes may need
to be moved to the root cgroup before the cpu controller can be enabled
with a CONFIG_RT_GROUP_SCHED enabled kernel.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following five when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec
        - nr_bursts
        - burst_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups. The default is "100".

        For non idle groups (cpu.idle = 0), the weight is in the
        range [1, 10000].

        If the cgroup has been configured to be SCHED_IDLE
        (cpu.idle = 1), then the weight will show as a 0.

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2). Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit. It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration. "max" for $MAX indicates no limit. If only
        one number is written, $MAX is updated.
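
        For example, limiting the group to half a CPU with the
        default period (illustrative values; 50000us of runtime per
        100000us period)::

          # echo "50000 100000" > cpu.max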

  cpu.max.burst
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The burst in the range [0, $MAX].

  cpu.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for CPU. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage
        rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization
        clamp values similar to the sched_setattr(2). This minimum
        utilization value is used to clamp the task specific minimum
        utilization clamp.

        The requested minimum utilization (protection) is always
        capped by the current value for the maximum utilization
        (limit), i.e. `cpu.uclamp.max`.

  cpu.uclamp.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage
        rational number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization
        clamp values similar to the sched_setattr(2). This maximum
        utilization value is used to clamp the task specific maximum
        utilization clamp.

  cpu.idle
        A read-write single value file which exists on non-root
        cgroups. The default is 0.

        This is the cgroup analog of the per-task SCHED_IDLE sched
        policy. Setting this value to a 1 will make the scheduling
        policy of the cgroup SCHED_IDLE. The threads inside the
        cgroup will retain their own relative priorities, but the
        cgroup itself will be treated as very low priority relative
        to its peers.


Memory
------

The "memory" controller regulates distribution of memory. Memory is
stateful and implements both limit and protection models. Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Hard memory protection. If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions. If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked. Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective min boundary is limited by memory.min values of
        all ancestor cgroups. If there is memory.min overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than the parent will allow), then each child cgroup will get
        the part of the parent's protection proportional to its
        actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Best-effort memory protection. If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups.
        Above the effective low boundary (or
        effective min boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective low boundary is limited by memory.low values of
        all ancestor cgroups. If there is memory.low overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than the parent will allow), then each child cgroup will get
        the part of the parent's protection proportional to its
        actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage throttle limit. If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached. The high
        limit should be used in scenarios where an external process
        monitors the limited cgroup to alleviate heavy reclaim
        pressure.

  memory.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage hard limit. This is the main mechanism to limit
        memory usage of a cgroup. If a cgroup's memory usage reaches
        this limit and can't be reduced, the OOM killer is invoked in
        the cgroup. Under certain circumstances, the usage may go
        over the limit temporarily.

        In the default configuration, regular 0-order allocations
        always succeed unless the OOM killer chooses the current task
        as a victim.

        Some kinds of allocations don't invoke the OOM killer.
        The caller could retry them differently, return into userspace
        as -ENOMEM or silently ignore in cases like disk readahead.

  memory.reclaim
        A write-only nested-keyed file which exists for all cgroups.

        This is a simple interface to trigger memory reclaim in the
        target cgroup.

        Example::

          echo "1G" > memory.reclaim

        Please note that the kernel can over or under reclaim from
        the target cgroup. If fewer bytes are reclaimed than the
        specified amount, -EAGAIN is returned.

        Please note that the proactive reclaim (triggered by this
        interface) is not meant to indicate memory pressure on the
        memory cgroup. Therefore socket memory balancing triggered by
        the memory reclaim normally is not exercised in this case.
        This means that the networking layer will not adapt based on
        reclaim induced by memory.reclaim.

        The following nested keys are defined.

          ==========  ================================
          swappiness  Swappiness value to reclaim with
          ==========  ================================

        Specifying a swappiness value instructs the kernel to perform
        the reclaim with that swappiness value. Note that this has the
        same semantics as vm.swappiness applied to memcg reclaim with
        all the existing limitations and potential future extensions.
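
        For example, to request reclaim of 512M while avoiding swap
        (a sketch; assumes a kernel that supports the swappiness
        key)::

          echo "512M swappiness=0" > memory.reclaim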

  memory.peak
        A read-write single value file which exists on non-root
        cgroups.

        The max memory usage recorded for the cgroup and its
        descendants since either the creation of the cgroup or the
        most recent reset for that FD.

        A write of any non-empty string to this file resets it to the
        current memory usage for subsequent reads through the same
        file descriptor.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups. The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer. If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all. This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with the OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of the
        memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

        Note that all fields in this file are hierarchical and the
        file modified event can be generated due to an event down the
        hierarchy. For the local events at the cgroup level see
        memory.events.local.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary. This usually indicates that the low
                boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded. For a
                cgroup whose memory usage is capped by the high limit
                rather than global memory pressure, this event's
                occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary. If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations or if the caller asked to not retry
                attempts.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.

          oom_group_kill
                The number of times a group OOM has occurred.

  memory.events.local
        Similar to memory.events but the fields in the file are local
        to the cgroup, i.e. not hierarchical. The file modified event
        generated on this file reflects only the local events.
1422
1423 memory.stat
1424 A read-only flat-keyed file which exists on non-root cgroups.
1425
1426 This breaks down the cgroup's memory footprint into different
1427 types of memory, type-specific details, and other information
1428 on the state and past events of the memory management system.
1429
1430 All memory amounts are in bytes.
1431
1432 The entries are ordered to be human readable, and new entries
1433 can show up in the middle. Don't rely on items remaining in a
1434 fixed position; use the keys to look up specific values!
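
        For example, a single counter can be looked up by key rather
        than by position (a minimal sketch)::

          # awk '$1 == "anon" {print $2}' memory.stat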
1435
        Entries without a per-node counter (and which therefore do
        not show up in memory.numa_stat) are tagged with 'npn'
        (non-per-node).
1439
1440 anon
1441 Amount of memory used in anonymous mappings such as
1442 brk(), sbrk(), and mmap(MAP_ANONYMOUS)
1443
1444 file
1445 Amount of memory used to cache filesystem data,
1446 including tmpfs and shared memory.
1447
1448 kernel (npn)
1449 Amount of total kernel memory, including
1450 (kernel_stack, pagetables, percpu, vmalloc, slab) in
1451 addition to other kernel memory use cases.
1452
1453 kernel_stack
1454 Amount of memory allocated to kernel stacks.
1455
1456 pagetables
1457 Amount of memory allocated for page tables.
1458
1459 sec_pagetables
1460 Amount of memory allocated for secondary page tables,
1461 this currently includes KVM mmu allocations on x86
1462 and arm64 and IOMMU page tables.
1463
1464 percpu (npn)
1465 Amount of memory used for storing per-cpu kernel
1466 data structures.
1467
1468 sock (npn)
1469 Amount of memory used in network transmission buffers
1470
1471 vmalloc (npn)
1472 Amount of memory used for vmap backed memory.
1473
1474 shmem
1475 Amount of cached filesystem data that is swap-backed,
1476 such as tmpfs, shm segments, shared anonymous mmap()s
1477
1478 zswap
1479 Amount of memory consumed by the zswap compression backend.
1480
1481 zswapped
1482 Amount of application memory swapped out to zswap.
1483
1484 file_mapped
1485 Amount of cached filesystem data mapped with mmap()
1486
1487 file_dirty
1488 Amount of cached filesystem data that was modified but
1489 not yet written back to disk
1490
1491 file_writeback
1492 Amount of cached filesystem data that was modified and
1493 is currently being written back to disk
1494
1495 swapcached
1496 Amount of swap cached in memory. The swapcache is accounted
1497 against both memory and swap usage.
1498
1499 anon_thp
1500 Amount of memory used in anonymous mappings backed by
1501 transparent hugepages
1502
1503 file_thp
1504 Amount of cached filesystem data backed by transparent
1505 hugepages
1506
1507 shmem_thp
1508 Amount of shm, tmpfs, shared anonymous mmap()s backed by
1509 transparent hugepages
1510
1511 inactive_anon, active_anon, inactive_file, active_file, unevictable
1512 Amount of memory, swap-backed and filesystem-backed,
1513 on the internal memory management lists used by the
1514 page reclaim algorithm.
1515
                As these represent internal list state (e.g. shmem pages are on anon
1517 memory management lists), inactive_foo + active_foo may not be equal to
1518 the value for the foo counter, since the foo counter is type-based, not
1519 list-based.
1520
1521 slab_reclaimable
1522 Part of "slab" that might be reclaimed, such as
1523 dentries and inodes.
1524
1525 slab_unreclaimable
1526 Part of "slab" that cannot be reclaimed on memory
1527 pressure.
1528
1529 slab (npn)
1530 Amount of memory used for storing in-kernel data
1531 structures.
1532
1533 workingset_refault_anon
1534 Number of refaults of previously evicted anonymous pages.
1535
1536 workingset_refault_file
1537 Number of refaults of previously evicted file pages.
1538
1539 workingset_activate_anon
1540 Number of refaulted anonymous pages that were immediately
1541 activated.
1542
1543 workingset_activate_file
1544 Number of refaulted file pages that were immediately activated.
1545
1546 workingset_restore_anon
1547 Number of restored anonymous pages which have been detected as
1548 an active workingset before they got reclaimed.
1549
1550 workingset_restore_file
1551 Number of restored file pages which have been detected as an
1552 active workingset before they got reclaimed.
1553
1554 workingset_nodereclaim
1555 Number of times a shadow node has been reclaimed
1556
1557 pgscan (npn)
1558 Amount of scanned pages (in an inactive LRU list)
1559
1560 pgsteal (npn)
1561 Amount of reclaimed pages
1562
1563 pgscan_kswapd (npn)
1564 Amount of scanned pages by kswapd (in an inactive LRU list)
1565
1566 pgscan_direct (npn)
1567 Amount of scanned pages directly (in an inactive LRU list)
1568
1569 pgscan_khugepaged (npn)
1570 Amount of scanned pages by khugepaged (in an inactive LRU list)
1571
1572 pgsteal_kswapd (npn)
1573 Amount of reclaimed pages by kswapd
1574
1575 pgsteal_direct (npn)
1576 Amount of reclaimed pages directly
1577
1578 pgsteal_khugepaged (npn)
1579 Amount of reclaimed pages by khugepaged
1580
1581 pgfault (npn)
1582 Total number of page faults incurred
1583
1584 pgmajfault (npn)
1585 Number of major page faults incurred
1586
1587 pgrefill (npn)
1588 Amount of scanned pages (in an active LRU list)
1589
1590 pgactivate (npn)
1591 Amount of pages moved to the active LRU list
1592
1593 pgdeactivate (npn)
1594 Amount of pages moved to the inactive LRU list
1595
1596 pglazyfree (npn)
1597 Amount of pages postponed to be freed under memory pressure
1598
1599 pglazyfreed (npn)
1600 Amount of reclaimed lazyfree pages
1601
1602 swpin_zero
1603 Number of pages swapped into memory and filled with zero, where I/O
1604 was optimized out because the page content was detected to be zero
1605 during swapout.
1606
1607 swpout_zero
1608 Number of zero-filled pages swapped out with I/O skipped due to the
1609 content being detected as zero.
1610
1611 zswpin
1612 Number of pages moved in to memory from zswap.
1613
1614 zswpout
1615 Number of pages moved out of memory to zswap.
1616
1617 zswpwb
1618 Number of pages written from zswap to swap.
1619
1620 thp_fault_alloc (npn)
1621 Number of transparent hugepages which were allocated to satisfy
1622 a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
1623 is not set.
1624
1625 thp_collapse_alloc (npn)
1626 Number of transparent hugepages which were allocated to allow
1627 collapsing an existing range of pages. This counter is not
1628 present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1629
1630 thp_swpout (npn)
                Number of transparent hugepages which were swapped out in
                one piece without splitting.
1633
1634 thp_swpout_fallback (npn)
                Number of transparent hugepages which were split before
                swapout, usually because contiguous swap space could not
                be allocated for the huge page.
1638
1639 numa_pages_migrated (npn)
1640 Number of pages migrated by NUMA balancing.
1641
1642 numa_pte_updates (npn)
1643 Number of pages whose page table entries are modified by
1644 NUMA balancing to produce NUMA hinting faults on access.
1645
1646 numa_hint_faults (npn)
1647 Number of NUMA hinting faults.
1648
1649 pgdemote_kswapd
1650 Number of pages demoted by kswapd.
1651
1652 pgdemote_direct
1653 Number of pages demoted directly.
1654
1655 pgdemote_khugepaged
1656 Number of pages demoted by khugepaged.
1657
1658 hugetlb
1659 Amount of memory used by hugetlb pages. This metric only shows
1660 up if hugetlb usage is accounted for in memory.current (i.e.
1661 cgroup is mounted with the memory_hugetlb_accounting option).
1662
1663 memory.numa_stat
1664 A read-only nested-keyed file which exists on non-root cgroups.
1665
1666 This breaks down the cgroup's memory footprint into different
1667 types of memory, type-specific details, and other information
1668 per node on the state of the memory management system.
1669
1670 This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node. One of the use cases is evaluating
1673 application performance by combining this information with the
1674 application's CPU allocation.
1675
1676 All memory amounts are in bytes.
1677
1678 The output format of memory.numa_stat is::
1679
1680 type N0=<bytes in node 0> N1=<bytes in node 1> ...
1681
1682 The entries are ordered to be human readable, and new entries
1683 can show up in the middle. Don't rely on items remaining in a
1684 fixed position; use the keys to look up specific values!
1685
        See memory.stat for the description of each entry.
1687
1688 memory.swap.current
1689 A read-only single value file which exists on non-root
1690 cgroups.
1691
1692 The total amount of swap currently being used by the cgroup
1693 and its descendants.
1694
1695 memory.swap.high
1696 A read-write single value file which exists on non-root
1697 cgroups. The default is "max".
1698
1699 Swap usage throttle limit. If a cgroup's swap usage exceeds
1700 this limit, all its further allocations will be throttled to
1701 allow userspace to implement custom out-of-memory procedures.
1702
1703 This limit marks a point of no return for the cgroup. It is NOT
1704 designed to manage the amount of swapping a workload does
1705 during regular operation. Compare to memory.swap.max, which
1706 prohibits swapping past a set amount, but lets the cgroup
1707 continue unimpeded as long as other memory can be reclaimed.
1708
1709 Healthy workloads are not expected to reach this limit.
1710
1711 memory.swap.peak
1712 A read-write single value file which exists on non-root cgroups.
1713
1714 The max swap usage recorded for the cgroup and its descendants since
1715 the creation of the cgroup or the most recent reset for that FD.
1716
1717 A write of any non-empty string to this file resets it to the
        current swap usage for subsequent reads through the same
1719 file descriptor.
1720
1721 memory.swap.max
1722 A read-write single value file which exists on non-root
1723 cgroups. The default is "max".
1724
1725 Swap usage hard limit. If a cgroup's swap usage reaches this
1726 limit, anonymous memory of the cgroup will not be swapped out.
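
        For example, swapping of a cgroup's anonymous memory can be
        disabled entirely while other limits stay in effect (the path
        is illustrative)::

          # echo 0 > /sys/fs/cgroup/job/memory.swap.max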
1727
1728 memory.swap.events
1729 A read-only flat-keyed file which exists on non-root cgroups.
1730 The following entries are defined. Unless specified
1731 otherwise, a value change in this file generates a file
1732 modified event.
1733
1734 high
1735 The number of times the cgroup's swap usage was over
1736 the high threshold.
1737
1738 max
1739 The number of times the cgroup's swap usage was about
1740 to go over the max boundary and swap allocation
1741 failed.
1742
1743 fail
1744 The number of times swap allocation failed either
1745 because of running out of swap system-wide or max
1746 limit.
1747
        When the swap limit ("memory.swap.max") is reduced below the
        current usage, the existing swap entries are reclaimed
        gradually and the swap usage may stay higher than the limit
        for an extended period of time. This reduces the impact on
        the workload and memory management.
1752
1753 memory.zswap.current
1754 A read-only single value file which exists on non-root
1755 cgroups.
1756
1757 The total amount of memory consumed by the zswap compression
1758 backend.
1759
1760 memory.zswap.max
1761 A read-write single value file which exists on non-root
1762 cgroups. The default is "max".
1763
1764 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1765 limit, it will refuse to take any more stores before existing
1766 entries fault back in or are written out to disk.
1767
1768 memory.zswap.writeback
1769 A read-write single value file. The default value is "1".
1770 Note that this setting is hierarchical, i.e. the writeback would be
1771 implicitly disabled for child cgroups if the upper hierarchy
1772 does so.
1773
        When this is set to 0, all swapping attempts to swapping
        devices are disabled. This includes both zswap writeback and
        swapping due to zswap store failures. If the zswap store
        failures are recurring (e.g. if the pages are incompressible),
        users can observe
1778 reclaim inefficiency after disabling writeback (because the same
1779 pages might be rejected again and again).
1780
1781 Note that this is subtly different from setting memory.swap.max to
1782 0, as it still allows for pages to be written to the zswap pool.
1783 This setting has no effect if zswap is disabled, and swapping
1784 is allowed unless memory.swap.max is set to 0.
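
        For example, to keep a cgroup's swapped pages in the
        compressed pool only, without writeback to a swap device (the
        path is illustrative)::

          # echo 0 > /sys/fs/cgroup/job/memory.zswap.writeback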
1785
1786 memory.pressure
1787 A read-only nested-keyed file.
1788
1789 Shows pressure stall information for memory. See
1790 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1791
1792
1793Usage Guidelines
1794~~~~~~~~~~~~~~~~
1795
1796"memory.high" is the main mechanism to control memory usage.
1797Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
1799usage is a viable strategy.
1800
1801Because breach of the high limit doesn't trigger the OOM killer but
1802throttles the offending cgroup, a management agent has ample
1803opportunities to monitor and take appropriate actions such as granting
1804more memory or terminating the workload.
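
For example, a minimal sketch of such a setup, with an agent watching
the "high" event counter (names, sizes and output are illustrative)::

  # echo 8G > /sys/fs/cgroup/job/memory.high
  # grep high /sys/fs/cgroup/job/memory.events
  high 0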
1805
1806Determining whether a cgroup has enough memory is not trivial as
1807memory usage doesn't indicate whether the workload can benefit from
1808more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also perform
equally well with a small amount of memory.
1811pressure - how much the workload is being impacted due to lack of
1812memory - is necessary to determine whether a workload needs more
memory; such a measure is available through the "memory.pressure"
interface file described above (see
:ref:`Documentation/accounting/psi.rst <psi>`).
1815
1816
1817Memory Ownership
1818~~~~~~~~~~~~~~~~
1819
1820A memory area is charged to the cgroup which instantiated it and stays
1821charged to the cgroup until the area is released. Migrating a process
1822to a different cgroup doesn't move the memory usages that it
1823instantiated while in the previous cgroup to the new cgroup.
1824
1825A memory area may be used by processes belonging to different cgroups.
Which cgroup the area is charged to is indeterminate; however,
1827over time, the memory area is likely to end up in a cgroup which has
1828enough memory allowance to avoid high reclaim pressure.
1829
1830If a cgroup sweeps a considerable amount of memory which is expected
1831to be accessed repeatedly by other cgroups, it may make sense to use
1832POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1833belonging to the affected files to ensure correct memory ownership.
1834
1835
1836IO
1837--
1838
1839The "io" controller regulates the distribution of IO resources. This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; weight based distribution is currently provided by
the IO cost model based controller (CONFIG_BLK_CGROUP_IOCOST) described
under "io.cost.qos" and "io.cost.model" below.
1844
1845
1846IO Interface Files
1847~~~~~~~~~~~~~~~~~~
1848
1849 io.stat
1850 A read-only nested-keyed file.
1851
1852 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1853 The following nested keys are defined.
1854
1855 ====== =====================
1856 rbytes Bytes read
1857 wbytes Bytes written
1858 rios Number of read IOs
1859 wios Number of write IOs
1860 dbytes Bytes discarded
1861 dios Number of discard IOs
1862 ====== =====================
1863
1864 An example read output follows::
1865
1866 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1867 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1868
1869 io.cost.qos
1870 A read-write nested-keyed file which exists only on the root
1871 cgroup.
1872
1873 This file configures the Quality of Service of the IO cost
1874 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1875 currently implements "io.weight" proportional control. Lines
1876 are keyed by $MAJ:$MIN device numbers and not ordered. The
1877 line for a given device is populated on the first write for
1878 the device on "io.cost.qos" or "io.cost.model". The following
1879 nested keys are defined.
1880
1881 ====== =====================================
1882 enable Weight-based control enable
1883 ctrl "auto" or "user"
1884 rpct Read latency percentile [0, 100]
1885 rlat Read latency threshold
1886 wpct Write latency percentile [0, 100]
1887 wlat Write latency threshold
1888 min Minimum scaling percentage [1, 10000]
1889 max Maximum scaling percentage [1, 10000]
1890 ====== =====================================
1891
1892 The controller is disabled by default and can be enabled by
1893 setting "enable" to 1. "rpct" and "wpct" parameters default
1894 to zero and the controller uses internal device saturation
1895 state to adjust the overall IO rate between "min" and "max".
1896
1897 When a better control quality is needed, latency QoS
1898 parameters can be configured. For example::
1899
          8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1901
1902 shows that on sdb, the controller is enabled, will consider
1903 the device saturated if the 95th percentile of read completion
1904 latencies is above 75ms or write 150ms, and adjust the overall
1905 IO issue rate between 50% and 150% accordingly.
1906
1907 The lower the saturation point, the better the latency QoS at
1908 the cost of aggregate bandwidth. The narrower the allowed
1909 adjustment range between "min" and "max", the more conformant
1910 to the cost model the IO behavior. Note that the IO issue
1911 base rate may be far off from 100% and setting "min" and "max"
1912 blindly can lead to a significant loss of device capacity or
1913 control quality. "min" and "max" are useful for regulating
1914 devices which show wide temporary behavior changes - e.g. a
1915 ssd which accepts writes at the line speed for a while and
1916 then completely stalls for multiple seconds.
1917
1918 When "ctrl" is "auto", the parameters are controlled by the
1919 kernel and may change automatically. Setting "ctrl" to "user"
1920 or setting any of the percentile and latency parameters puts
1921 it into "user" mode and disables the automatic changes. The
1922 automatic mode can be restored by setting "ctrl" to "auto".
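
        For example, the controller can be enabled on a device and
        later returned to fully automatic operation (device numbers
        are illustrative)::

          # echo "8:16 enable=1" > io.cost.qos
          # echo "8:16 ctrl=auto" > io.cost.qos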
1923
1924 io.cost.model
1925 A read-write nested-keyed file which exists only on the root
1926 cgroup.
1927
1928 This file configures the cost model of the IO cost model based
1929 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1930 implements "io.weight" proportional control. Lines are keyed
1931 by $MAJ:$MIN device numbers and not ordered. The line for a
1932 given device is populated on the first write for the device on
1933 "io.cost.qos" or "io.cost.model". The following nested keys
1934 are defined.
1935
1936 ===== ================================
1937 ctrl "auto" or "user"
1938 model The cost model in use - "linear"
1939 ===== ================================
1940
1941 When "ctrl" is "auto", the kernel may change all parameters
1942 dynamically. When "ctrl" is set to "user" or any other
        parameter is written, "ctrl" becomes "user" and the
1944 automatic changes are disabled.
1945
1946 When "model" is "linear", the following model parameters are
1947 defined.
1948
1949 ============= ========================================
1950 [r|w]bps The maximum sequential IO throughput
1951 [r|w]seqiops The maximum 4k sequential IOs per second
1952 [r|w]randiops The maximum 4k random IOs per second
1953 ============= ========================================
1954
1955 From the above, the builtin linear model determines the base
1956 costs of a sequential and random IO and the cost coefficient
1957 for the IO size. While simple, this model can cover most
1958 common device classes acceptably.
1959
        The IO cost model isn't expected to be accurate in an absolute
1961 sense and is scaled to the device behavior dynamically.
1962
1963 If needed, tools/cgroup/iocost_coef_gen.py can be used to
1964 generate device-specific coefficients.
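
        For example, installing generated linear model coefficients
        for a device might look like this (the coefficients are
        illustrative)::

          # echo "8:16 rbps=174019176 rseqiops=41708 rrandiops=370 wbps=178075866 wseqiops=42705 wrandiops=378" > io.cost.model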
1965
1966 io.weight
1967 A read-write flat-keyed file which exists on non-root cgroups.
1968 The default is "default 100".
1969
1970 The first line is the default weight applied to devices
1971 without specific override. The rest are overrides keyed by
1972 $MAJ:$MIN device numbers and not ordered. The weights are in
        the range [1, 10000] and specify the relative amount of IO time
1974 the cgroup can use in relation to its siblings.
1975
1976 The default weight can be updated by writing either "default
1977 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
1978 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1979
1980 An example read output follows::
1981
1982 default 100
1983 8:16 200
1984 8:0 50
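
        The above configuration could have been created with (device
        numbers are illustrative)::

          # echo "default 100" > io.weight
          # echo "8:16 200" > io.weight
          # echo "8:0 50" > io.weight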
1985
1986 io.max
1987 A read-write nested-keyed file which exists on non-root
1988 cgroups.
1989
1990 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
1991 device numbers and not ordered. The following nested keys are
1992 defined.
1993
1994 ===== ==================================
1995 rbps Max read bytes per second
1996 wbps Max write bytes per second
1997 riops Max read IO operations per second
1998 wiops Max write IO operations per second
1999 ===== ==================================
2000
2001 When writing, any number of nested key-value pairs can be
2002 specified in any order. "max" can be specified as the value
2003 to remove a specific limit. If the same key is specified
2004 multiple times, the outcome is undefined.
2005
2006 BPS and IOPS are measured in each IO direction and IOs are
2007 delayed if limit is reached. Temporary bursts are allowed.
2008
2009 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
2010
2011 echo "8:16 rbps=2097152 wiops=120" > io.max
2012
2013 Reading returns the following::
2014
2015 8:16 rbps=2097152 wbps=max riops=max wiops=120
2016
2017 Write IOPS limit can be removed by writing the following::
2018
2019 echo "8:16 wiops=max" > io.max
2020
2021 Reading now returns the following::
2022
2023 8:16 rbps=2097152 wbps=max riops=max wiops=max
2024
2025 io.pressure
2026 A read-only nested-keyed file.
2027
2028 Shows pressure stall information for IO. See
2029 :ref:`Documentation/accounting/psi.rst <psi>` for details.
2030
2031
2032Writeback
2033~~~~~~~~~
2034
2035Page cache is dirtied through buffered writes and shared mmaps and
2036written asynchronously to the backing filesystem by the writeback
2037mechanism. Writeback sits between the memory and IO domains and
2038regulates the proportion of dirty memory by balancing dirtying and
2039write IOs.
2040
2041The io controller, in conjunction with the memory controller,
2042implements control of page cache writeback IOs. The memory controller
2043defines the memory domain that dirty memory ratio is calculated and
2044maintained for and the io controller defines the io domain which
2045writes out dirty pages for the memory domain. Both system-wide and
2046per-cgroup dirty memory states are examined and the more restrictive
2047of the two is enforced.
2048
2049cgroup writeback requires explicit support from the underlying
2050filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
2051btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
2052attributed to the root cgroup.
2053
2054There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of writeback, an
2057inode is assigned to a cgroup and all IO requests to write dirty pages
2058from the inode are attributed to that cgroup.
2059
2060As cgroup ownership for memory is tracked per page, there can be pages
2061which are associated with different cgroups than the one the inode is
2062associated with. These are called foreign pages. The writeback
2063constantly keeps track of foreign pages and, if a particular foreign
2064cgroup becomes the majority over a certain period of time, switches
2065the ownership of the inode to that cgroup.
2066
2067While this model is enough for most use cases where a given inode is
2068mostly dirtied by a single cgroup even when the main writing cgroup
2069changes over time, use cases where multiple cgroups write to a single
2070inode simultaneously are not supported well. In such circumstances, a
2071significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
2073doesn't update it until the page is released, even if writeback
2074strictly follows page ownership, multiple cgroups dirtying overlapping
2075areas wouldn't work as expected. It's recommended to avoid such usage
2076patterns.
2077
2078The sysctl knobs which affect writeback behavior are applied to cgroup
2079writeback as follows.
2080
2081 vm.dirty_background_ratio, vm.dirty_ratio
2082 These ratios apply the same to cgroup writeback with the
2083 amount of available memory capped by limits imposed by the
2084 memory controller and system-wide clean memory.
2085
2086 vm.dirty_background_bytes, vm.dirty_bytes
        For cgroup writeback, this is converted into a ratio against
2088 total available memory and applied the same way as
2089 vm.dirty[_background]_ratio.
2090
2091
2092IO Latency
2093~~~~~~~~~~
2094
2095This is a cgroup v2 controller for IO workload protection. You provide a group
2096with a latency target, and if the average latency exceeds that target the
2097controller will throttle any peers that have a lower latency target than the
2098protected workload.
2099
2100The limits are only applied at the peer level in the hierarchy. This means that
2101in the diagram below, only groups A, B, and C will influence each other, and
2102groups D and F will influence each other. Group G will influence nobody::
2103
2104 [root]
2105 / | \
2106 A B C
2107 / \ |
2108 D F G
2109
2110
2111So the ideal way to configure this is to set io.latency in groups A, B, and C.
2112Generally you do not want to set a value lower than the latency your device
2113supports. Experiment to find the value that works best for your workload.
Start at a value higher than the expected latency for your device and watch the
2115avg_lat value in io.stat for your workload group to get an idea of the
2116latency you see during normal operation. Use the avg_lat value as a basis for
2117your real setting, setting at 10-15% higher than the value in io.stat.
2118
2119How IO Latency Throttling Works
2120~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2121
io.latency is work conserving: as long as everybody is meeting their latency
2123target the controller doesn't do anything. Once a group starts missing its
2124target it begins throttling any peer group that has a higher target than itself.
2125This throttling takes 2 forms:
2126
- Queue depth throttling. This is the number of outstanding IOs a group is
2128 allowed to have. We will clamp down relatively quickly, starting at no limit
2129 and going all the way down to 1 IO at a time.
2130
2131- Artificial delay induction. There are certain types of IO that cannot be
2132 throttled without possibly adversely affecting higher priority groups. This
2133 includes swapping and metadata IO. These types of IO are allowed to occur
2134 normally, however they are "charged" to the originating group. If the
2135 originating group is being throttled you will see the use_delay and delay
2136 fields in io.stat increase. The delay value is how many microseconds that are
2137 being added to any process that runs in this group. Because this number can
  grow quite large if there is a lot of swapping or metadata IO occurring, we
2139 limit the individual delay events to 1 second at a time.
2140
2141Once the victimized group starts meeting its latency target again it will start
2142unthrottling any peer groups that were throttled previously. If the victimized
2143group simply stops doing IO the global counter will unthrottle appropriately.
2144
2145IO Latency Interface Files
2146~~~~~~~~~~~~~~~~~~~~~~~~~~
2147
2148 io.latency
        This takes a format similar to that of the other controllers.
2150
2151 "MAJOR:MINOR target=<target time in microseconds>"
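
        For example, setting a 750us latency target on sda (device
        numbers are illustrative)::

          # echo "8:0 target=750" > io.latency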
2152
2153 io.stat
2154 If the controller is enabled you will see extra stats in io.stat in
2155 addition to the normal ones.
2156
2157 depth
2158 This is the current queue depth for the group.
2159
2160 avg_lat
2161 This is an exponential moving average with a decay rate of 1/exp
2162 bound by the sampling interval. The decay rate interval can be
2163 calculated by multiplying the win value in io.stat by the
2164 corresponding number of samples based on the win value.
2165
2166 win
2167 The sampling window size in milliseconds. This is the minimum
2168 duration of time between evaluation events. Windows only elapse
2169 with IO activity. Idle periods extend the most recent window.
2170
2171IO Priority
2172~~~~~~~~~~~
2173
2174A single attribute controls the behavior of the I/O priority cgroup policy,
2175namely the io.prio.class attribute. The following values are accepted for
2176that attribute:
2177
2178 no-change
2179 Do not modify the I/O priority class.
2180
2181 promote-to-rt
2182 For requests that have a non-RT I/O priority class, change it into RT.
2183 Also change the priority level of these requests to 4. Do not modify
2184 the I/O priority of requests that have priority class RT.
2185
2186 restrict-to-be
2187 For requests that do not have an I/O priority class or that have I/O
2188 priority class RT, change it into BE. Also change the priority level
2189 of these requests to 0. Do not modify the I/O priority class of
2190 requests that have priority class IDLE.
2191
2192 idle
2193 Change the I/O priority class of all requests into IDLE, the lowest
2194 I/O priority class.
2195
2196 none-to-rt
2197 Deprecated. Just an alias for promote-to-rt.
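
For example, restricting the requests of a maintenance cgroup to the
best-effort class (the cgroup path is illustrative)::

  # echo restrict-to-be > /sys/fs/cgroup/maintenance/io.prio.class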
2198
2199The following numerical values are associated with the I/O priority policies:
2200
2201+----------------+---+
2202| no-change | 0 |
2203+----------------+---+
2204| promote-to-rt | 1 |
2205+----------------+---+
2206| restrict-to-be | 2 |
2207+----------------+---+
2208| idle | 3 |
2209+----------------+---+
2210
2211The numerical value that corresponds to each I/O priority class is as follows:
2212
2213+-------------------------------+---+
2214| IOPRIO_CLASS_NONE | 0 |
2215+-------------------------------+---+
2216| IOPRIO_CLASS_RT (real-time) | 1 |
2217+-------------------------------+---+
2218| IOPRIO_CLASS_BE (best effort) | 2 |
2219+-------------------------------+---+
2220| IOPRIO_CLASS_IDLE | 3 |
2221+-------------------------------+---+
2222
2223The algorithm to set the I/O priority class for a request is as follows:
2224
2225- If I/O priority class policy is promote-to-rt, change the request I/O
2226 priority class to IOPRIO_CLASS_RT and change the request I/O priority
2227 level to 4.
2228- If I/O priority class policy is not promote-to-rt, translate the I/O priority
2229 class policy into a number, then change the request I/O priority class
2230 into the maximum of the I/O priority class policy number and the numerical
2231 I/O priority class.
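
For example, with the restrict-to-be policy (2), a request with I/O
priority class IOPRIO_CLASS_RT (1) is changed to IOPRIO_CLASS_BE because
max(2, 1) = 2, while a request with IOPRIO_CLASS_IDLE (3) keeps its class
because max(2, 3) = 3.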
2232
2233PID
2234---
2235
2236The process number controller is used to allow a cgroup to stop any
2237new tasks from being fork()'d or clone()'d after a specified limit is
2238reached.
2239
2240The number of tasks in a cgroup can be exhausted in ways which other
2241controllers cannot prevent, thus warranting its own controller. For
2242example, a fork bomb is likely to exhaust the number of tasks before
2243hitting memory restrictions.
2244
Note that PIDs used in this controller refer to TIDs, i.e. individual
tasks as identified by the kernel.
2247
2248
2249PID Interface Files
2250~~~~~~~~~~~~~~~~~~~
2251
2252 pids.max
2253 A read-write single value file which exists on non-root
2254 cgroups. The default is "max".
2255
2256 Hard limit of number of processes.
2257
2258 pids.current
2259 A read-only single value file which exists on non-root cgroups.
2260
2261 The number of processes currently in the cgroup and its
2262 descendants.
2263
2264 pids.peak
2265 A read-only single value file which exists on non-root cgroups.
2266
2267 The maximum value that the number of processes in the cgroup and its
2268 descendants has ever reached.
2269
2270 pids.events
2271 A read-only flat-keyed file which exists on non-root cgroups. Unless
2272 specified otherwise, a value change in this file generates a file
2273 modified event. The following entries are defined.
2274
2275 max
2276 The number of times the cgroup's total number of processes hit the pids.max
2277 limit (see also pids_localevents).
2278
2279 pids.events.local
2280 Similar to pids.events but the fields in the file are local
2281 to the cgroup i.e. not hierarchical. The file modified event
2282 generated on this file reflects only the local events.
2283
2284Organisational operations are not blocked by cgroup policies, so it is
2285possible to have pids.current > pids.max. This can be done by either
2286setting the limit to be smaller than pids.current, or attaching enough
2287processes to the cgroup such that pids.current is larger than
2288pids.max. However, it is not possible to violate a cgroup PID policy
2289through fork() or clone(). These will return -EAGAIN if the creation
2290of a new process would cause a cgroup policy to be violated.
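
For example, a limit below the current count can be set without an
error; afterwards, forks inside the cgroup fail until the count drops
below the limit again (paths and output are illustrative)::

  # echo 1 > /sys/fs/cgroup/job/pids.max
  # cat /sys/fs/cgroup/job/pids.current
  2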
2291
2292
2293Cpuset
2294------
2295
2296The "cpuset" controller provides a mechanism for constraining
2297the CPU and memory node placement of tasks to only the resources
2298specified in the cpuset interface files in a task's current cgroup.
2299This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
2301memory placement to reduce cross-node memory access and contention
2302can improve overall system performance.
2303
2304The "cpuset" controller is hierarchical. That means the controller
2305cannot use CPUs or memory nodes not allowed in its parent.
2306
2307
2308Cpuset Interface Files
2309~~~~~~~~~~~~~~~~~~~~~~
2310
2311 cpuset.cpus
2312 A read-write multiple values file which exists on non-root
2313 cpuset-enabled cgroups.
2314
2315 It lists the requested CPUs to be used by tasks within this
2316 cgroup. The actual list of CPUs to be granted, however, is
2317 subjected to constraints imposed by its parent and can differ
2318 from the requested CPUs.
2319
2320 The CPU numbers are comma-separated numbers or ranges.
2321 For example::
2322
2323 # cat cpuset.cpus
2324 0-4,6,8-10
2325
2326 An empty value indicates that the cgroup is using the same
2327 setting as the nearest cgroup ancestor with a non-empty
2328 "cpuset.cpus" or all the available CPUs if none is found.
2329
2330 The value of "cpuset.cpus" stays constant until the next update
2331 and won't be affected by any CPU hotplug events.
2332
2333 cpuset.cpus.effective
2334 A read-only multiple values file which exists on all
2335 cpuset-enabled cgroups.
2336
2337 It lists the onlined CPUs that are actually granted to this
2338 cgroup by its parent. These CPUs are allowed to be used by
2339 tasks within the current cgroup.
2340
2341 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2342 all the CPUs from the parent cgroup that can be available to
2343 be used by this cgroup. Otherwise, it should be a subset of
2344 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2345 can be granted. In this case, it will be treated just like an
2346 empty "cpuset.cpus".
2347
2348 Its value will be affected by CPU hotplug events.
2349
2350 cpuset.mems
2351 A read-write multiple values file which exists on non-root
2352 cpuset-enabled cgroups.
2353
2354 It lists the requested memory nodes to be used by tasks within
2355 this cgroup. The actual list of memory nodes granted, however,
2356 is subjected to constraints imposed by its parent and can differ
2357 from the requested memory nodes.
2358
2359 The memory node numbers are comma-separated numbers or ranges.
2360 For example::
2361
2362 # cat cpuset.mems
2363 0-1,3
2364
2365 An empty value indicates that the cgroup is using the same
2366 setting as the nearest cgroup ancestor with a non-empty
2367 "cpuset.mems" or all the available memory nodes if none
2368 is found.
2369
2370 The value of "cpuset.mems" stays constant until the next update
2371 and won't be affected by any memory nodes hotplug events.
2372
2373 Setting a non-empty value to "cpuset.mems" causes memory of
2374 tasks within the cgroup to be migrated to the designated nodes if
2375 they are currently using memory outside of the designated nodes.
2376
2377 There is a cost for this memory migration. The migration
2378 may not be complete and some memory pages may be left behind.
2379 So it is recommended that "cpuset.mems" should be set properly
2380 before spawning new tasks into the cpuset. Even if there is
2381 a need to change "cpuset.mems" with active tasks, it shouldn't
2382 be done frequently.
2383
2384 cpuset.mems.effective
2385 A read-only multiple values file which exists on all
2386 cpuset-enabled cgroups.
2387
2388 It lists the onlined memory nodes that are actually granted to
2389 this cgroup by its parent. These memory nodes are allowed to
2390 be used by tasks within the current cgroup.
2391
2392 If "cpuset.mems" is empty, it shows all the memory nodes from the
2393 parent cgroup that will be available to be used by this cgroup.
2394 Otherwise, it should be a subset of "cpuset.mems" unless none of
2395 the memory nodes listed in "cpuset.mems" can be granted. In this
2396 case, it will be treated just like an empty "cpuset.mems".
2397
2398 Its value will be affected by memory nodes hotplug events.
2399
2400 cpuset.cpus.exclusive
2401 A read-write multiple values file which exists on non-root
2402 cpuset-enabled cgroups.
2403
2404 It lists all the exclusive CPUs that are allowed to be used
2405 to create a new cpuset partition. Its value is not used
2406 unless the cgroup becomes a valid partition root. See the
2407 "cpuset.cpus.partition" section below for a description of what
2408 a cpuset partition is.
2409
2410 When the cgroup becomes a partition root, the actual exclusive
2411 CPUs that are allocated to that partition are listed in
2412 "cpuset.cpus.exclusive.effective" which may be different
2413 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
2414 has previously been set, "cpuset.cpus.exclusive.effective"
2415 is always a subset of it.
2416
2417 Users can manually set it to a value that is different from
2418 "cpuset.cpus". One constraint in setting it is that the list of
2419 CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
2420 of its sibling. If "cpuset.cpus.exclusive" of a sibling cgroup
2421 isn't set, its "cpuset.cpus" value, if set, cannot be a subset
2422 of it to leave at least one CPU available when the exclusive
2423 CPUs are taken away.
2424
2425 For a parent cgroup, any one of its exclusive CPUs can only
2426 be distributed to at most one of its child cgroups. Having an
2427 exclusive CPU appearing in two or more of its child cgroups is
2428 not allowed (the exclusivity rule). A value that violates the
2429 exclusivity rule will be rejected with a write error.
2430
2431 The root cgroup is a partition root and all its available CPUs
2432 are in its exclusive CPU set.
2433
2434 cpuset.cpus.exclusive.effective
2435 A read-only multiple values file which exists on all non-root
2436 cpuset-enabled cgroups.
2437
2438 This file shows the effective set of exclusive CPUs that
2439 can be used to create a partition root. The content
2440 of this file will always be a subset of its parent's
2441 "cpuset.cpus.exclusive.effective" if its parent is not the root
2442 cgroup. It will also be a subset of "cpuset.cpus.exclusive"
2443 if it is set. If "cpuset.cpus.exclusive" is not set, it is
2444 treated to have an implicit value of "cpuset.cpus" in the
2445 formation of local partition.
2446
2447 cpuset.cpus.isolated
        A read-only multiple values file which exists only on the
        root cgroup.
2449
2450 This file shows the set of all isolated CPUs used in existing
2451 isolated partitions. It will be empty if no isolated partition
2452 is created.
2453
2454 cpuset.cpus.partition
2455 A read-write single value file which exists on non-root
2456 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2457 and is not delegatable.
2458
2459 It accepts only the following input values when written to.
2460
2461 ========== =====================================
2462 "member" Non-root member of a partition
2463 "root" Partition root
2464 "isolated" Partition root without load balancing
2465 ========== =====================================
2466
2467 A cpuset partition is a collection of cpuset-enabled cgroups with
2468 a partition root at the top of the hierarchy and its descendants
2469 except those that are separate partition roots themselves and
2470 their descendants. A partition has exclusive access to the
2471 set of exclusive CPUs allocated to it. Other cgroups outside
2472 of that partition cannot use any CPUs in that set.
2473
2474 There are two types of partitions - local and remote. A local
2475 partition is one whose parent cgroup is also a valid partition
2476 root. A remote partition is one whose parent cgroup is not a
2477 valid partition root itself. Writing to "cpuset.cpus.exclusive"
2478 is optional for the creation of a local partition as its
2479 "cpuset.cpus.exclusive" file will assume an implicit value that
2480 is the same as "cpuset.cpus" if it is not set. Writing the
2481 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2482 before the target partition root is mandatory for the creation
2483 of a remote partition.
2484
2485 Currently, a remote partition cannot be created under a local
2486 partition. All the ancestors of a remote partition root except
2487 the root cgroup cannot be a partition root.
2488
2489 The root cgroup is always a partition root and its state cannot
2490 be changed. All other non-root cgroups start out as "member".
2491
2492 When set to "root", the current cgroup is the root of a new
2493 partition or scheduling domain. The set of exclusive CPUs is
2494 determined by the value of its "cpuset.cpus.exclusive.effective".
2495
2496 When set to "isolated", the CPUs in that partition will be in
2497 an isolated state without any load balancing from the scheduler
2498 and excluded from the unbound workqueues. Tasks placed in such
2499 a partition with multiple CPUs should be carefully distributed
2500 and bound to each of the individual CPUs for optimal performance.
2501
2502 A partition root ("root" or "isolated") can be in one of the
2503 two possible states - valid or invalid. An invalid partition
2504 root is in a degraded state where some state information may
2505 be retained, but behaves more like a "member".
2506
2507 All possible state transitions among "member", "root" and
2508 "isolated" are allowed.
2509
2510 On read, the "cpuset.cpus.partition" file can show the following
2511 values.
2512
2513 ============================= =====================================
2514 "member" Non-root member of a partition
2515 "root" Partition root
2516 "isolated" Partition root without load balancing
2517 "root invalid (<reason>)" Invalid partition root
2518 "isolated invalid (<reason>)" Invalid isolated partition root
2519 ============================= =====================================
2520
2521 In the case of an invalid partition root, a descriptive string on
2522 why the partition is invalid is included within parentheses.
2523
2524 For a local partition root to be valid, the following conditions
2525 must be met.
2526
2527 1) The parent cgroup is a valid partition root.
2528 2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2529 though it may contain offline CPUs.
2530 3) The "cpuset.cpus.effective" cannot be empty unless there is
2531 no task associated with this partition.
2532
2533 For a remote partition root to be valid, all the above conditions
2534 except the first one must be met.
2535
2536 External events like hotplug or changes to "cpuset.cpus" or
2537 "cpuset.cpus.exclusive" can cause a valid partition root to
2538 become invalid and vice versa. Note that a task cannot be
2539 moved to a cgroup with empty "cpuset.cpus.effective".
2540
2541 A valid non-root parent partition may distribute out all its CPUs
2542 to its child local partitions when there is no task associated
2543 with it.
2544
2545 Care must be taken to change a valid partition root to "member"
2546 as all its child local partitions, if present, will become
2547 invalid causing disruption to tasks running in those child
2548 partitions. These inactivated partitions could be recovered if
2549 their parent is switched back to a partition root with a proper
2550 value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2551
2552 Poll and inotify events are triggered whenever the state of
2553 "cpuset.cpus.partition" changes. That includes changes caused
2554 by write to "cpuset.cpus.partition", cpu hotplug or other
2555 changes that modify the validity status of the partition.
2556 This will allow user space agents to monitor unexpected changes
2557 to "cpuset.cpus.partition" without the need to do continuous
2558 polling.
2559
2560 A user can pre-configure certain CPUs to an isolated state
2561 with load balancing disabled at boot time with the "isolcpus"
2562 kernel boot command line option. If those CPUs are to be put
2563 into a partition, they have to be used in an isolated partition.
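
        As a minimal sketch, a two-CPU local partition can be carved
        out of the root cgroup as follows (CPU numbers are
        illustrative)::

          # echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
          # mkdir /sys/fs/cgroup/part
          # echo 2-3 > /sys/fs/cgroup/part/cpuset.cpus
          # echo root > /sys/fs/cgroup/part/cpuset.cpus.partition
          # cat /sys/fs/cgroup/part/cpuset.cpus.partition
          root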
2564
2565
2566Device controller
2567-----------------
2568
The device controller manages access to device files. It covers both
creation of new device files (using mknod) and access to existing
device files.
2572
The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF. To control access to device files, a
user may create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE and
attach them to cgroups with the BPF_CGROUP_DEVICE flag. On an attempt
to access a device file, the corresponding BPF programs will be
executed, and depending on the return value the attempt will succeed
or fail with -EPERM.
2579
2580A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2581bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2582access type (mknod/read/write) and device (type, major and minor numbers).
2583If the program returns 0, the attempt fails with -EPERM, otherwise it
2584succeeds.
2585
2586An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2587tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
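
A hypothetical sketch of loading and attaching such a program with
bpftool (the object file and paths are illustrative)::

  # bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_cgroup
  # bpftool cgroup attach /sys/fs/cgroup/cont device pinned /sys/fs/bpf/dev_cgroup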
2588
2589
2590RDMA
2591----
2592
2593The "rdma" controller regulates the distribution and accounting of
2594RDMA resources.
2595
2596RDMA Interface Files
2597~~~~~~~~~~~~~~~~~~~~
2598
2599 rdma.max
        A read-write nested-keyed file that exists for all the cgroups
        except root. It describes the currently configured resource
        limits for RDMA/IB devices.
2603
2604 Lines are keyed by device name and are not ordered.
2605 Each line contains space separated resource name and its configured
2606 limit that can be distributed.
2607
2608 The following nested keys are defined.
2609
2610 ========== =============================
2611 hca_handle Maximum number of HCA Handles
2612 hca_object Maximum number of HCA Objects
2613 ========== =============================
2614
2615 An example for mlx4 and ocrdma device follows::
2616
2617 mlx4_0 hca_handle=2 hca_object=2000
2618 ocrdma1 hca_handle=3 hca_object=max
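
        Limits can be configured by writing the same format back, for
        example (device name illustrative)::

          # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max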
2619
2620 rdma.current
2621 A read-only file that describes current resource usage.
        It exists for all the cgroups except root.
2623
2624 An example for mlx4 and ocrdma device follows::
2625
2626 mlx4_0 hca_handle=1 hca_object=20
2627 ocrdma1 hca_handle=1 hca_object=23
2628
2629HugeTLB
2630-------
2631
The HugeTLB controller allows limiting HugeTLB usage per control group
and enforces the limit during page fault.
2634
2635HugeTLB Interface Files
2636~~~~~~~~~~~~~~~~~~~~~~~
2637
2638 hugetlb.<hugepagesize>.current
2639 Show current usage for "hugepagesize" hugetlb. It exists for all
        the cgroups except root.
2641
2642 hugetlb.<hugepagesize>.max
2643 Set/show the hard limit of "hugepagesize" hugetlb usage.
        The default value is "max". It exists for all the cgroups except root.
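
        For example, capping 2MB huge page usage at 1G (available page
        sizes depend on the platform)::

          # echo 1G > hugetlb.2MB.max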
2645
2646 hugetlb.<hugepagesize>.events
2647 A read-only flat-keyed file which exists on non-root cgroups.
2648
2649 max
                The number of allocation failures due to the HugeTLB limit
2651
2652 hugetlb.<hugepagesize>.events.local
2653 Similar to hugetlb.<hugepagesize>.events but the fields in the file
2654 are local to the cgroup i.e. not hierarchical. The file modified event
2655 generated on this file reflects only the local events.
2656
2657 hugetlb.<hugepagesize>.numa_stat
2658 Similar to memory.numa_stat, it shows the numa information of the
        hugetlb pages of <hugepagesize> in this cgroup. Only active,
        in-use hugetlb pages are included. The per-node values are in
        bytes.
2661
2662Misc
2663----
2664
2665The Miscellaneous cgroup provides the resource limiting and tracking
2666mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC
config option.
2669
A resource can be added to the controller via the misc_res_type enum in
include/linux/misc_cgroup.h and the corresponding name via misc_res_name[]
in kernel/cgroup/misc.c. The provider of the resource must set its
capacity prior to using the resource by calling misc_cg_set_capacity().
2674
2675Once a capacity is set then the resource usage can be updated using charge and
2676uncharge APIs. All of the APIs to interact with misc controller are in
2677include/linux/misc_cgroup.h.
2678
2679Misc Interface Files
2680~~~~~~~~~~~~~~~~~~~~
2681
The miscellaneous controller provides the following interface files.
Assuming two misc resources (res_a and res_b) are registered:
2683
2684 misc.capacity
2685 A read-only flat-keyed file shown only in the root cgroup. It shows
2686 miscellaneous scalar resources available on the platform along with
2687 their quantities::
2688
2689 $ cat misc.capacity
2690 res_a 50
2691 res_b 10
2692
2693 misc.current
        A read-only flat-keyed file shown in all cgroups. It shows
        the current usage of the resources in the cgroup and its
        children::
2696
2697 $ cat misc.current
2698 res_a 3
2699 res_b 0
2700
2701 misc.peak
2702 A read-only flat-keyed file shown in all cgroups. It shows the
2703 historical maximum usage of the resources in the cgroup and its
        children::
2705
2706 $ cat misc.peak
2707 res_a 10
2708 res_b 8
2709
2710 misc.max
        A read-write flat-keyed file shown in the non-root cgroups.
        Allowed maximum usage of the resources in the cgroup and its
        children::
2713
2714 $ cat misc.max
2715 res_a max
2716 res_b 4
2717
2718 Limit can be set by::
2719
2720 # echo res_a 1 > misc.max
2721
2722 Limit can be set to max by::
2723
2724 # echo res_a max > misc.max
2725
2726 Limits can be set higher than the capacity value in the misc.capacity
2727 file.
2728
2729 misc.events
2730 A read-only flat-keyed file which exists on non-root cgroups. The
2731 following entries are defined. Unless specified otherwise, a value
2732 change in this file generates a file modified event. All fields in
2733 this file are hierarchical.
2734
2735 max
2736 The number of times the cgroup's resource usage was
2737 about to go over the max boundary.
2738
2739 misc.events.local
2740 Similar to misc.events but the fields in the file are local to the
2741 cgroup i.e. not hierarchical. The file modified event generated on
2742 this file reflects only the local events.
2743
2744Migration and Ownership
2745~~~~~~~~~~~~~~~~~~~~~~~
2746
2747A miscellaneous scalar resource is charged to the cgroup in which it is used
2748first, and stays charged to that cgroup until that resource is freed. Migrating
2749a process to a different cgroup does not move the charge to the destination
2750cgroup where the process has moved.
2751
2752Others
2753------
2754
2755perf_event
2756~~~~~~~~~~
2757
2758perf_event controller, if not mounted on a legacy hierarchy, is
2759automatically enabled on the v2 hierarchy so that perf events can
2760always be filtered by cgroup v2 path. The controller can still be
2761moved to a legacy hierarchy after v2 hierarchy is populated.
2762
2763
2764Non-normative information
2765-------------------------
2766
2767This section contains information that isn't considered to be a part of
2768the stable kernel API and so is subject to change.
2769
2770
2771CPU controller root cgroup process behaviour
2772~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2773
When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup. The weight of this child cgroup depends on the
thread's nice level.
2778
2779For details of this mapping see sched_prio_to_weight array in
2780kernel/sched/core.c file (values from this array should be scaled
2781appropriately so the neutral - nice 0 - value is 100 instead of 1024).
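
For example, sched_prio_to_weight maps nice 0 to 1024 and nice -20 to
88761; after scaling, a nice 0 thread competes like a child cgroup of
weight 100 while a nice -20 thread competes like one of weight roughly
8668.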
2782
2783
2784IO controller root cgroup process behaviour
2785~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2786
2787Root cgroup processes are hosted in an implicit leaf child node.
2788When distributing IO resources this implicit child node is taken into
2789account as if it was a normal child cgroup of the root cgroup with a
2790weight value of 200.
2791
2792
2793Namespace
2794=========
2795
2796Basics
2797------
2798
2799cgroup namespace provides a mechanism to virtualize the view of the
2800"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2801flag can be used with clone(2) and unshare(2) to create a new cgroup
2802namespace. The process running inside the cgroup namespace will have
2803its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2804cgroupns root is the cgroup of the process at the time of creation of
2805the cgroup namespace.
2806
2807Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2808complete path of the cgroup of a process. In a container setup where
2809a set of cgroups and namespaces are intended to isolate processes the
2810"/proc/$PID/cgroup" file may leak potential system level information
2811to the isolated processes. For example::
2812
2813 # cat /proc/self/cgroup
2814 0::/batchjobs/container_id1
2815
The path '/batchjobs/container_id1' can be considered system data
which is undesirable to expose to the isolated processes. cgroup namespace
2818can be used to restrict visibility of this path. For example, before
2819creating a cgroup namespace, one would see::
2820
2821 # ls -l /proc/self/ns/cgroup
2822 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2823 # cat /proc/self/cgroup
2824 0::/batchjobs/container_id1
2825
2826After unsharing a new namespace, the view changes::
2827
2828 # ls -l /proc/self/ns/cgroup
2829 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2830 # cat /proc/self/cgroup
2831 0::/
2832
2833When some thread from a multi-threaded process unshares its cgroup
2834namespace, the new cgroupns gets applied to the entire process (all
2835the threads). This is natural for the v2 hierarchy; however, for the
2836legacy hierarchies, this may be unexpected.
2837
2838A cgroup namespace is alive as long as there are processes inside or
2839mounts pinning it. When the last usage goes away, the cgroup
2840namespace is destroyed. The cgroupns root and the actual cgroups
2841remain.
2842
2843
2844The Root and Views
2845------------------
2846
2847The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2848process calling unshare(2) is running. For example, if a process in
2849/batchjobs/container_id1 cgroup calls unshare, cgroup
2850/batchjobs/container_id1 becomes the cgroupns root. For the
2851init_cgroup_ns, this is the real root ('/') cgroup.
2852
2853The cgroupns root cgroup does not change even if the namespace creator
2854process later moves to a different cgroup::
2855
2856 # ~/unshare -c # unshare cgroupns in some cgroup
2857 # cat /proc/self/cgroup
2858 0::/
2859 # mkdir sub_cgrp_1
2860 # echo 0 > sub_cgrp_1/cgroup.procs
2861 # cat /proc/self/cgroup
2862 0::/sub_cgrp_1
2863
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
2865
2866Processes running inside the cgroup namespace will be able to see
2867cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
2868From within an unshared cgroupns::
2869
2870 # sleep 100000 &
2871 [1] 7353
2872 # echo 7353 > sub_cgrp_1/cgroup.procs
2873 # cat /proc/7353/cgroup
2874 0::/sub_cgrp_1
2875
2876From the initial cgroup namespace, the real cgroup path will be
2877visible::
2878
2879 $ cat /proc/7353/cgroup
2880 0::/batchjobs/container_id1/sub_cgrp_1
2881
2882From a sibling cgroup namespace (that is, a namespace rooted at a
2883different cgroup), the cgroup path relative to its own cgroup
2884namespace root will be shown. For instance, if PID 7353's cgroup
2885namespace root is at '/batchjobs/container_id2', then it will see::
2886
2887 # cat /proc/7353/cgroup
2888 0::/../container_id2/sub_cgrp_1
2889
2890Note that the relative path always starts with '/' to indicate that
2891its relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged. A task inside a
cgroup namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen when attaching to another cgroup
namespace. It is expected that someone moves the attaching process
under the target cgroup namespace root.
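
For example, the attach can be performed with util-linux nsenter(1)
after first migrating the attaching process under the target
namespace's root cgroup (a sketch; the PID and the cgroup path are
illustrative)::

  # echo $$ > /sys/fs/cgroup/batchjobs/container_id1/cgroup.procs
  # nsenter --cgroup --target 7353 cat /proc/self/cgroup
  0::/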


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root. The process needs CAP_SYS_ADMIN against its user
and mount namespaces.
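
For example, a fully namespaced view can be assembled with util-linux
unshare(1) by creating cgroup and mount namespaces together and then
mounting the hierarchy (a sketch; the mount point is illustrative)::

  # unshare --cgroup --mount
  # mkdir -p /tmp/cgroup2
  # mount -t cgroup2 none /tmp/cgroup2
  # cat /proc/self/cgroup
  0::/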

The virtualization of the /proc/self/cgroup file, combined with
restricting the view of the cgroup hierarchy via a namespace-private
cgroupfs mount, provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
    Should be called for each bio carrying writeback data and
    associates the bio with the inode's owner cgroup and the
    corresponding request queue. This must be called after
    a queue (device) has been associated with the bio and
    before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
    Should be called for each data segment being written out.
    While this function doesn't care exactly when it's called
    during the writeback session, it's the easiest and most
    natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no single easy
solution to the problem. Filesystems can try to work around specific
problem cases by skipping wbc_init_bio() and using
bio_associate_blkg() directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use the "cgroup.controllers"
  or "cgroup.stat" files at the root instead, as shown below.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer, which can be useful in all
hierarchies, could only be used in one. The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy. This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation and, more
importantly, restricted how cgroup could be used in general and what
controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem. cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details. These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces, and the kernel
inadvertently exposed internal constructs and became locked into them.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and its
child cgroups competed for resources. This was nasty as two different
types of entities competed and there was no obvious way to settle it.
Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with them; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed by the cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large number of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.
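
For example, a delegated subtree can subdivide the reserve it received
from its parent (a sketch; the cgroup paths and sizes are
illustrative)::

  # echo 2G > /sys/fs/cgroup/workload/memory.low
  # echo 1G > /sys/fs/cgroup/workload/app-a/memory.low
  # echo 1G > /sys/fs/cgroup/workload/app-b/memory.low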

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error-prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.
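
For example, the boundary can be tightened gradually while watching
for throttling via the "high" entry in memory.events (a sketch; the
values shown are illustrative)::

  # echo 4G > memory.high
  # grep high memory.events
  high 13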

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than to kill the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.
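
For example, lowering the limit below the current usage now succeeds
and is enforced immediately (a sketch; the values shown are
illustrative)::

  # cat memory.current
  734003200
  # echo 512M > memory.max
  # cat memory.max
  536870912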

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.
1.. _cgroup-v2:
2
3================
4Control Group v2
5================
6
7:Date: October, 2015
8:Author: Tejun Heo <tj@kernel.org>
9
10This is the authoritative documentation on the design, interface and
11conventions of cgroup v2. It describes all userland-visible aspects
12of cgroup including core and specific controller behaviors. All
13future changes must be reflected in this document. Documentation for
14v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
15
16.. CONTENTS
17
18 1. Introduction
19 1-1. Terminology
20 1-2. What is cgroup?
21 2. Basic Operations
22 2-1. Mounting
23 2-2. Organizing Processes and Threads
24 2-2-1. Processes
25 2-2-2. Threads
26 2-3. [Un]populated Notification
27 2-4. Controlling Controllers
28 2-4-1. Enabling and Disabling
29 2-4-2. Top-down Constraint
30 2-4-3. No Internal Process Constraint
31 2-5. Delegation
32 2-5-1. Model of Delegation
33 2-5-2. Delegation Containment
34 2-6. Guidelines
35 2-6-1. Organize Once and Control
36 2-6-2. Avoid Name Collisions
37 3. Resource Distribution Models
38 3-1. Weights
39 3-2. Limits
40 3-3. Protections
41 3-4. Allocations
42 4. Interface Files
43 4-1. Format
44 4-2. Conventions
45 4-3. Core Interface Files
46 5. Controllers
47 5-1. CPU
48 5-1-1. CPU Interface Files
49 5-2. Memory
50 5-2-1. Memory Interface Files
51 5-2-2. Usage Guidelines
52 5-2-3. Memory Ownership
53 5-3. IO
54 5-3-1. IO Interface Files
55 5-3-2. Writeback
56 5-3-3. IO Latency
57 5-3-3-1. How IO Latency Throttling Works
58 5-3-3-2. IO Latency Interface Files
59 5-3-4. IO Priority
60 5-4. PID
61 5-4-1. PID Interface Files
62 5-5. Cpuset
63 5.5-1. Cpuset Interface Files
64 5-6. Device
65 5-7. RDMA
66 5-7-1. RDMA Interface Files
67 5-8. HugeTLB
68 5.8-1. HugeTLB Interface Files
69 5-9. Misc
70 5.9-1 Miscellaneous cgroup Interface Files
71 5.9-2 Migration and Ownership
72 5-10. Others
73 5-10-1. perf_event
74 5-N. Non-normative information
75 5-N-1. CPU controller root cgroup process behaviour
76 5-N-2. IO controller root cgroup process behaviour
77 6. Namespace
78 6-1. Basics
79 6-2. The Root and Views
80 6-3. Migration and setns(2)
81 6-4. Interaction with Other Namespaces
82 P. Information on Kernel Programming
83 P-1. Filesystem Support for Writeback
84 D. Deprecated v1 Core Features
85 R. Issues with v1 and Rationales for v2
86 R-1. Multiple Hierarchies
87 R-2. Thread Granularity
88 R-3. Competition Between Inner Nodes and Threads
89 R-4. Other Interface Issues
90 R-5. Controller Issues and Remedies
91 R-5-1. Memory
92
93
94Introduction
95============
96
97Terminology
98-----------
99
100"cgroup" stands for "control group" and is never capitalized. The
101singular form is used to designate the whole feature and also as a
102qualifier as in "cgroup controllers". When explicitly referring to
103multiple individual control groups, the plural form "cgroups" is used.
104
105
106What is cgroup?
107---------------
108
109cgroup is a mechanism to organize processes hierarchically and
110distribute system resources along the hierarchy in a controlled and
111configurable manner.
112
113cgroup is largely composed of two parts - the core and controllers.
114cgroup core is primarily responsible for hierarchically organizing
115processes. A cgroup controller is usually responsible for
116distributing a specific type of system resource along the hierarchy
117although there are utility controllers which serve purposes other than
118resource distribution.
119
120cgroups form a tree structure and every process in the system belongs
121to one and only one cgroup. All threads of a process belong to the
122same cgroup. On creation, all processes are put in the cgroup that
123the parent process belongs to at the time. A process can be migrated
124to another cgroup. Migration of a process doesn't affect already
125existing descendant processes.
126
127Following certain structural constraints, controllers may be enabled or
128disabled selectively on a cgroup. All controller behaviors are
129hierarchical - if a controller is enabled on a cgroup, it affects all
130processes which belong to the cgroups consisting the inclusive
131sub-hierarchy of the cgroup. When a controller is enabled on a nested
132cgroup, it always restricts the resource distribution further. The
133restrictions set closer to the root in the hierarchy can not be
134overridden from further away.
135
136
137Basic Operations
138================
139
140Mounting
141--------
142
143Unlike v1, cgroup v2 has only single hierarchy. The cgroup v2
144hierarchy can be mounted with the following mount command::
145
146 # mount -t cgroup2 none $MOUNT_POINT
147
148cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
149controllers which support v2 and are not bound to a v1 hierarchy are
150automatically bound to the v2 hierarchy and show up at the root.
151Controllers which are not in active use in the v2 hierarchy can be
152bound to other hierarchies. This allows mixing v2 hierarchy with the
153legacy v1 multiple hierarchies in a fully backward compatible way.
154
155A controller can be moved across hierarchies only after the controller
156is no longer referenced in its current hierarchy. Because per-cgroup
157controller states are destroyed asynchronously and controllers may
158have lingering references, a controller may not show up immediately on
159the v2 hierarchy after the final umount of the previous hierarchy.
160Similarly, a controller should be fully disabled to be moved out of
161the unified hierarchy and it may take some time for the disabled
162controller to become available for other hierarchies; furthermore, due
163to inter-controller dependencies, other controllers may need to be
164disabled too.
165
166While useful for development and manual configurations, moving
167controllers dynamically between the v2 and other hierarchies is
168strongly discouraged for production use. It is recommended to decide
169the hierarchies and controller associations before starting using the
170controllers after system boot.
171
172During transition to v2, system management software might still
173automount the v1 cgroup filesystem and so hijack all controllers
174during boot, before manual intervention is possible. To make testing
175and experimenting easier, the kernel parameter cgroup_no_v1= allows
176disabling controllers in v1 and make them always available in v2.
177
178cgroup v2 currently supports the following mount options.
179
180 nsdelegate
181 Consider cgroup namespaces as delegation boundaries. This
182 option is system wide and can only be set on mount or modified
183 through remount from the init namespace. The mount option is
184 ignored on non-init namespace mounts. Please refer to the
185 Delegation section for details.
186
187 favordynmods
188 Reduce the latencies of dynamic cgroup modifications such as
189 task migrations and controller on/offs at the cost of making
190 hot path operations such as forks and exits more expensive.
191 The static usage pattern of creating a cgroup, enabling
192 controllers, and then seeding it with CLONE_INTO_CGROUP is
193 not affected by this option.
194
195 memory_localevents
196 Only populate memory.events with data for the current cgroup,
197 and not any subtrees. This is legacy behaviour, the default
198 behaviour without this option is to include subtree counts.
199 This option is system wide and can only be set on mount or
200 modified through remount from the init namespace. The mount
201 option is ignored on non-init namespace mounts.
202
203 memory_recursiveprot
204 Recursively apply memory.min and memory.low protection to
205 entire subtrees, without requiring explicit downward
206 propagation into leaf cgroups. This allows protecting entire
207 subtrees from one another, while retaining free competition
208 within those subtrees. This should have been the default
209 behavior but is a mount-option to avoid regressing setups
210 relying on the original semantics (e.g. specifying bogusly
211 high 'bypass' protection values at higher tree levels).
212
213
214Organizing Processes and Threads
215--------------------------------
216
217Processes
218~~~~~~~~~
219
220Initially, only the root cgroup exists to which all processes belong.
221A child cgroup can be created by creating a sub-directory::
222
223 # mkdir $CGROUP_NAME
224
225A given cgroup may have multiple child cgroups forming a tree
226structure. Each cgroup has a read-writable interface file
227"cgroup.procs". When read, it lists the PIDs of all processes which
228belong to the cgroup one-per-line. The PIDs are not ordered and the
229same PID may show up more than once if the process got moved to
230another cgroup and then back or the PID got recycled while reading.
231
232A process can be migrated into a cgroup by writing its PID to the
233target cgroup's "cgroup.procs" file. Only one process can be migrated
234on a single write(2) call. If a process is composed of multiple
235threads, writing the PID of any thread migrates all threads of the
236process.
237
238When a process forks a child process, the new process is born into the
239cgroup that the forking process belongs to at the time of the
240operation. After exit, a process stays associated with the cgroup
241that it belonged to at the time of exit until it's reaped; however, a
242zombie process does not appear in "cgroup.procs" and thus can't be
243moved to another cgroup.
244
245A cgroup which doesn't have any children or live processes can be
246destroyed by removing the directory. Note that a cgroup which doesn't
247have any children and is associated only with zombie processes is
248considered empty and can be removed::
249
250 # rmdir $CGROUP_NAME
251
252"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
253cgroup is in use in the system, this file may contain multiple lines,
254one for each hierarchy. The entry for cgroup v2 is always in the
255format "0::$PATH"::
256
257 # cat /proc/842/cgroup
258 ...
259 0::/test-cgroup/test-cgroup-nested
260
261If the process becomes a zombie and the cgroup it was associated with
262is removed subsequently, " (deleted)" is appended to the path::
263
264 # cat /proc/842/cgroup
265 ...
266 0::/test-cgroup/test-cgroup-nested (deleted)
267
268
269Threads
270~~~~~~~
271
272cgroup v2 supports thread granularity for a subset of controllers to
273support use cases requiring hierarchical resource distribution across
274the threads of a group of processes. By default, all threads of a
275process belong to the same cgroup, which also serves as the resource
276domain to host resource consumptions which are not specific to a
277process or thread. The thread mode allows threads to be spread across
278a subtree while still maintaining the common resource domain for them.
279
280Controllers which support thread mode are called threaded controllers.
281The ones which don't are called domain controllers.
282
283Marking a cgroup threaded makes it join the resource domain of its
284parent as a threaded cgroup. The parent may be another threaded
285cgroup whose resource domain is further up in the hierarchy. The root
286of a threaded subtree, that is, the nearest ancestor which is not
287threaded, is called threaded domain or thread root interchangeably and
288serves as the resource domain for the entire subtree.
289
290Inside a threaded subtree, threads of a process can be put in
291different cgroups and are not subject to the no internal process
292constraint - threaded controllers can be enabled on non-leaf cgroups
293whether they have threads in them or not.
294
295As the threaded domain cgroup hosts all the domain resource
296consumptions of the subtree, it is considered to have internal
297resource consumptions whether there are processes in it or not and
298can't have populated child cgroups which aren't threaded. Because the
299root cgroup is not subject to no internal process constraint, it can
300serve both as a threaded domain and a parent to domain cgroups.
301
302The current operation mode or type of the cgroup is shown in the
303"cgroup.type" file which indicates whether the cgroup is a normal
304domain, a domain which is serving as the domain of a threaded subtree,
305or a threaded cgroup.
306
307On creation, a cgroup is always a domain cgroup and can be made
308threaded by writing "threaded" to the "cgroup.type" file. The
309operation is single direction::
310
311 # echo threaded > cgroup.type
312
313Once threaded, the cgroup can't be made a domain again. To enable the
314thread mode, the following conditions must be met.
315
316- As the cgroup will join the parent's resource domain. The parent
317 must either be a valid (threaded) domain or a threaded cgroup.
318
319- When the parent is an unthreaded domain, it must not have any domain
320 controllers enabled or populated domain children. The root is
321 exempt from this requirement.
322
323Topology-wise, a cgroup can be in an invalid state. Please consider
324the following topology::
325
326 A (threaded domain) - B (threaded) - C (domain, just created)
327
328C is created as a domain but isn't connected to a parent which can
329host child domains. C can't be used until it is turned into a
330threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
331these cases. Operations which fail due to invalid topology use
332EOPNOTSUPP as the errno.
333
334A domain cgroup is turned into a threaded domain when one of its child
335cgroup becomes threaded or threaded controllers are enabled in the
336"cgroup.subtree_control" file while there are processes in the cgroup.
337A threaded domain reverts to a normal domain when the conditions
338clear.
339
340When read, "cgroup.threads" contains the list of the thread IDs of all
341threads in the cgroup. Except that the operations are per-thread
342instead of per-process, "cgroup.threads" has the same format and
343behaves the same way as "cgroup.procs". While "cgroup.threads" can be
344written to in any cgroup, as it can only move threads inside the same
345threaded domain, its operations are confined inside each threaded
346subtree.
347
348The threaded domain cgroup serves as the resource domain for the whole
349subtree, and, while the threads can be scattered across the subtree,
350all the processes are considered to be in the threaded domain cgroup.
351"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
352processes in the subtree and is not readable in the subtree proper.
353However, "cgroup.procs" can be written to from anywhere in the subtree
354to migrate all threads of the matching process to the cgroup.
355
356Only threaded controllers can be enabled in a threaded subtree. When
357a threaded controller is enabled inside a threaded subtree, it only
358accounts for and controls resource consumptions associated with the
359threads in the cgroup and its descendants. All consumptions which
360aren't tied to a specific thread belong to the threaded domain cgroup.
361
362Because a threaded subtree is exempt from no internal process
363constraint, a threaded controller must be able to handle competition
364between threads in a non-leaf cgroup and its child cgroups. Each
365threaded controller defines how such competitions are handled.
366
367
368[Un]populated Notification
369--------------------------
370
371Each non-root cgroup has a "cgroup.events" file which contains
372"populated" field indicating whether the cgroup's sub-hierarchy has
373live processes in it. Its value is 0 if there is no live process in
374the cgroup and its descendants; otherwise, 1. poll and [id]notify
375events are triggered when the value changes. This can be used, for
376example, to start a clean-up operation after all processes of a given
377sub-hierarchy have exited. The populated state updates and
378notifications are recursive. Consider the following sub-hierarchy
379where the numbers in the parentheses represent the numbers of processes
380in each cgroup::
381
382 A(4) - B(0) - C(1)
383 \ D(0)
384
385A, B and C's "populated" fields would be 1 while D's 0. After the one
386process in C exits, B and C's "populated" fields would flip to "0" and
387file modified events will be generated on the "cgroup.events" files of
388both cgroups.
389
390
391Controlling Controllers
392-----------------------
393
394Enabling and Disabling
395~~~~~~~~~~~~~~~~~~~~~~
396
397Each cgroup has a "cgroup.controllers" file which lists all
398controllers available for the cgroup to enable::
399
400 # cat cgroup.controllers
401 cpu io memory
402
403No controller is enabled by default. Controllers can be enabled and
404disabled by writing to the "cgroup.subtree_control" file::
405
406 # echo "+cpu +memory -io" > cgroup.subtree_control
407
408Only controllers which are listed in "cgroup.controllers" can be
409enabled. When multiple operations are specified as above, either they
410all succeed or fail. If multiple operations on the same controller
411are specified, the last one is effective.
412
413Enabling a controller in a cgroup indicates that the distribution of
414the target resource across its immediate children will be controlled.
415Consider the following sub-hierarchy. The enabled controllers are
416listed in parentheses::
417
418 A(cpu,memory) - B(memory) - C()
419 \ D()
420
421As A has "cpu" and "memory" enabled, A will control the distribution
422of CPU cycles and memory to its children, in this case, B. As B has
423"memory" enabled but not "CPU", C and D will compete freely on CPU
424cycles but their division of memory available to B will be controlled.
425
426As a controller regulates the distribution of the target resource to
427the cgroup's children, enabling it creates the controller's interface
428files in the child cgroups. In the above example, enabling "cpu" on B
429would create the "cpu." prefixed controller interface files in C and
430D. Likewise, disabling "memory" from B would remove the "memory."
431prefixed controller interface files from C and D. This means that the
432controller interface files - anything which doesn't start with
433"cgroup." are owned by the parent rather than the cgroup itself.
434
435
436Top-down Constraint
437~~~~~~~~~~~~~~~~~~~
438
439Resources are distributed top-down and a cgroup can further distribute
440a resource only if the resource has been distributed to it from the
441parent. This means that all non-root "cgroup.subtree_control" files
442can only contain controllers which are enabled in the parent's
443"cgroup.subtree_control" file. A controller can be enabled only if
444the parent has the controller enabled and a controller can't be
445disabled if one or more children have it enabled.
446
447
448No Internal Process Constraint
449~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
450
451Non-root cgroups can distribute domain resources to their children
452only when they don't have any processes of their own. In other words,
453only domain cgroups which don't contain any processes can have domain
454controllers enabled in their "cgroup.subtree_control" files.
455
456This guarantees that, when a domain controller is looking at the part
457of the hierarchy which has it enabled, processes are always only on
458the leaves. This rules out situations where child cgroups compete
459against internal processes of the parent.
460
461The root cgroup is exempt from this restriction. Root contains
462processes and anonymous resource consumption which can't be associated
463with any other cgroups and requires special treatment from most
464controllers. How resource consumption in the root cgroup is governed
465is up to each controller (for more information on this topic please
466refer to the Non-normative information section in the Controllers
467chapter).
468
469Note that the restriction doesn't get in the way if there is no
470enabled controller in the cgroup's "cgroup.subtree_control". This is
471important as otherwise it wouldn't be possible to create children of a
472populated cgroup. To control resource distribution of a cgroup, the
473cgroup must create children and transfer all its processes to the
474children before enabling controllers in its "cgroup.subtree_control"
475file.
476
477
478Delegation
479----------
480
481Model of Delegation
482~~~~~~~~~~~~~~~~~~~
483
484A cgroup can be delegated in two ways. First, to a less privileged
485user by granting write access of the directory and its "cgroup.procs",
486"cgroup.threads" and "cgroup.subtree_control" files to the user.
487Second, if the "nsdelegate" mount option is set, automatically to a
488cgroup namespace on namespace creation.
489
490Because the resource control interface files in a given directory
491control the distribution of the parent's resources, the delegatee
492shouldn't be allowed to write to them. For the first method, this is
493achieved by not granting access to these files. For the second, the
494kernel rejects writes to all files other than "cgroup.procs" and
495"cgroup.subtree_control" on a namespace root from inside the
496namespace.
497
498The end results are equivalent for both delegation types. Once
499delegated, the user can build sub-hierarchy under the directory,
500organize processes inside it as it sees fit and further distribute the
501resources it received from the parent. The limits and other settings
502of all resource controllers are hierarchical and regardless of what
503happens in the delegated sub-hierarchy, nothing can escape the
504resource restrictions imposed by the parent.
505
506Currently, cgroup doesn't impose any restrictions on the number of
507cgroups in or nesting depth of a delegated sub-hierarchy; however,
508this may be limited explicitly in the future.
509
510
511Delegation Containment
512~~~~~~~~~~~~~~~~~~~~~~
513
514A delegated sub-hierarchy is contained in the sense that processes
515can't be moved into or out of the sub-hierarchy by the delegatee.
516
517For delegations to a less privileged user, this is achieved by
518requiring the following conditions for a process with a non-root euid
519to migrate a target process into a cgroup by writing its PID to the
520"cgroup.procs" file.
521
522- The writer must have write access to the "cgroup.procs" file.
523
524- The writer must have write access to the "cgroup.procs" file of the
525 common ancestor of the source and destination cgroups.
526
527The above two constraints ensure that while a delegatee may migrate
528processes around freely in the delegated sub-hierarchy it can't pull
529in from or push out to outside the sub-hierarchy.
530
531For an example, let's assume cgroups C0 and C1 have been delegated to
532user U0 who created C00, C01 under C0 and C10 under C1 as follows and
533all processes under C0 and C1 belong to U0::
534
535 ~~~~~~~~~~~~~ - C0 - C00
536 ~ cgroup ~ \ C01
537 ~ hierarchy ~
538 ~~~~~~~~~~~~~ - C1 - C10
539
540Let's also say U0 wants to write the PID of a process which is
541currently in C10 into "C00/cgroup.procs". U0 has write access to the
542file; however, the common ancestor of the source cgroup C10 and the
543destination cgroup C00 is above the points of delegation and U0 would
544not have write access to its "cgroup.procs" files and thus the write
545will be denied with -EACCES.
546
547For delegations to namespaces, containment is achieved by requiring
548that both the source and destination cgroups are reachable from the
549namespace of the process which is attempting the migration. If either
550is not reachable, the migration is rejected with -ENOENT.
551
552
553Guidelines
554----------
555
556Organize Once and Control
557~~~~~~~~~~~~~~~~~~~~~~~~~
558
559Migrating a process across cgroups is a relatively expensive operation
560and stateful resources such as memory are not moved together with the
561process. This is an explicit design decision as there often exist
562inherent trade-offs between migration and various hot paths in terms
563of synchronization cost.
564
565As such, migrating processes across cgroups frequently as a means to
566apply different resource restrictions is discouraged. A workload
567should be assigned to a cgroup according to the system's logical and
568resource structure once on start-up. Dynamic adjustments to resource
569distribution can be made by changing controller configuration through
570the interface files.
571
572
573Avoid Name Collisions
574~~~~~~~~~~~~~~~~~~~~~
575
576Interface files for a cgroup and its children cgroups occupy the same
577directory and it is possible to create children cgroups which collide
578with interface files.
579
580All cgroup core interface files are prefixed with "cgroup." and each
581controller's interface files are prefixed with the controller name and
582a dot. A controller's name is composed of lower case alphabets and
583'_'s but never begins with an '_' so it can be used as the prefix
584character for collision avoidance. Also, interface file names won't
585start or end with terms which are often used in categorizing workloads
586such as job, service, slice, unit or workload.
587
588cgroup doesn't do anything to prevent name collisions and it's the
589user's responsibility to avoid them.
590
591
592Resource Distribution Models
593============================
594
595cgroup controllers implement several resource distribution schemes
596depending on the resource type and expected use cases. This section
597describes major schemes in use along with their expected behaviors.
598
599
600Weights
601-------
602
603A parent's resource is distributed by adding up the weights of all
604active children and giving each the fraction matching the ratio of its
605weight against the sum. As only children which can make use of the
606resource at the moment participate in the distribution, this is
607work-conserving. Due to the dynamic nature, this model is usually
608used for stateless resources.
609
610All weights are in the range [1, 10000] with the default at 100. This
611allows symmetric multiplicative biases in both directions at fine
612enough granularity while staying in the intuitive range.
613
614As long as the weight is in range, all configuration combinations are
615valid and there is no reason to reject configuration changes or
616process migrations.
617
618"cpu.weight" proportionally distributes CPU cycles to active children
619and is an example of this type.
620
621
622Limits
623------
624
625A child can only consume upto the configured amount of the resource.
626Limits can be over-committed - the sum of the limits of children can
627exceed the amount of resource available to the parent.
628
629Limits are in the range [0, max] and defaults to "max", which is noop.
630
631As limits can be over-committed, all configuration combinations are
632valid and there is no reason to reject configuration changes or
633process migrations.
634
635"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
636on an IO device and is an example of this type.
637
638
639Protections
640-----------
641
642A cgroup is protected upto the configured amount of the resource
643as long as the usages of all its ancestors are under their
644protected levels. Protections can be hard guarantees or best effort
645soft boundaries. Protections can also be over-committed in which case
646only upto the amount available to the parent is protected among
647children.
648
649Protections are in the range [0, max] and defaults to 0, which is
650noop.
651
652As protections can be over-committed, all configuration combinations
653are valid and there is no reason to reject configuration changes or
654process migrations.
655
656"memory.low" implements best-effort memory protection and is an
657example of this type.
658
659
660Allocations
661-----------
662
663A cgroup is exclusively allocated a certain amount of a finite
664resource. Allocations can't be over-committed - the sum of the
665allocations of children can not exceed the amount of resource
666available to the parent.
667
668Allocations are in the range [0, max] and defaults to 0, which is no
669resource.
670
671As allocations can't be over-committed, some configuration
672combinations are invalid and should be rejected. Also, if the
673resource is mandatory for execution of processes, process migrations
674may be rejected.
675
676"cpu.rt.max" hard-allocates realtime slices and is an example of this
677type.
678
679
680Interface Files
681===============
682
683Format
684------
685
686All interface files should be in one of the following formats whenever
687possible::
688
689 New-line separated values
690 (when only one value can be written at once)
691
692 VAL0\n
693 VAL1\n
694 ...
695
696 Space separated values
697 (when read-only or multiple values can be written at once)
698
699 VAL0 VAL1 ...\n
700
701 Flat keyed
702
703 KEY0 VAL0\n
704 KEY1 VAL1\n
705 ...
706
707 Nested keyed
708
709 KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
710 KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
711 ...
712
713For a writable file, the format for writing should generally match
714reading; however, controllers may allow omitting later fields or
715implement restricted shortcuts for most common use cases.
716
717For both flat and nested keyed files, only the values for a single key
718can be written at a time. For nested keyed files, the sub key pairs
719may be specified in any order and not all pairs have to be specified.
720
721
722Conventions
723-----------
724
725- Settings for a single feature should be contained in a single file.
726
727- The root cgroup should be exempt from resource control and thus
728 shouldn't have resource control interface files.
729
730- The default time unit is microseconds. If a different unit is ever
731 used, an explicit unit suffix must be present.
732
733- A parts-per quantity should use a percentage decimal with at least
734 two digit fractional part - e.g. 13.40.
735
736- If a controller implements weight based resource distribution, its
737 interface file should be named "weight" and have the range [1,
738 10000] with 100 as the default. The values are chosen to allow
739 enough and symmetric bias in both directions while keeping it
740 intuitive (the default is 100%).
741
742- If a controller implements an absolute resource guarantee and/or
743 limit, the interface files should be named "min" and "max"
744 respectively. If a controller implements best effort resource
745 guarantee and/or limit, the interface files should be named "low"
746 and "high" respectively.
747
748 In the above four control files, the special token "max" should be
749 used to represent upward infinity for both reading and writing.
750
751- If a setting has a configurable default value and keyed specific
752 overrides, the default entry should be keyed with "default" and
753 appear as the first entry in the file.
754
755 The default value can be updated by writing either "default $VAL" or
756 "$VAL".
757
758 When writing to update a specific override, "default" can be used as
759 the value to indicate removal of the override. Override entries
760 with "default" as the value must not appear when read.
761
762 For example, a setting which is keyed by major:minor device numbers
763 with integer values may look like the following::
764
765 # cat cgroup-example-interface-file
766 default 150
767 8:0 300
768
769 The default value can be updated by::
770
771 # echo 125 > cgroup-example-interface-file
772
773 or::
774
775 # echo "default 125" > cgroup-example-interface-file
776
777 An override can be set by::
778
779 # echo "8:16 170" > cgroup-example-interface-file
780
781 and cleared by::
782
783 # echo "8:0 default" > cgroup-example-interface-file
784 # cat cgroup-example-interface-file
785 default 125
786 8:16 170
787
788- For events which are not very high frequency, an interface file
789 "events" should be created which lists event key value pairs.
790 Whenever a notifiable event happens, file modified event should be
791 generated on the file.
792
793
794Core Interface Files
795--------------------
796
797All cgroup core files are prefixed with "cgroup."
798
799 cgroup.type
800 A read-write single value file which exists on non-root
801 cgroups.
802
803 When read, it indicates the current type of the cgroup, which
804 can be one of the following values.
805
806 - "domain" : A normal valid domain cgroup.
807
808 - "domain threaded" : A threaded domain cgroup which is
809 serving as the root of a threaded subtree.
810
811 - "domain invalid" : A cgroup which is in an invalid state.
812 It can't be populated or have controllers enabled. It may
813 be allowed to become a threaded cgroup.
814
815 - "threaded" : A threaded cgroup which is a member of a
816 threaded subtree.
817
818 A cgroup can be turned into a threaded cgroup by writing
819 "threaded" to this file.
820
821 cgroup.procs
822 A read-write new-line separated values file which exists on
823 all cgroups.
824
825 When read, it lists the PIDs of all processes which belong to
826 the cgroup one-per-line. The PIDs are not ordered and the
827 same PID may show up more than once if the process got moved
828 to another cgroup and then back or the PID got recycled while
829 reading.
830
831 A PID can be written to migrate the process associated with
832 the PID to the cgroup. The writer should match all of the
833 following conditions.
834
835 - It must have write access to the "cgroup.procs" file.
836
837 - It must have write access to the "cgroup.procs" file of the
838 common ancestor of the source and destination cgroups.
839
840 When delegating a sub-hierarchy, write access to this file
841 should be granted along with the containing directory.
842
843 In a threaded cgroup, reading this file fails with EOPNOTSUPP
844 as all the processes belong to the thread root. Writing is
845 supported and moves every thread of the process to the cgroup.
846
847 cgroup.threads
848 A read-write new-line separated values file which exists on
849 all cgroups.
850
851 When read, it lists the TIDs of all threads which belong to
852 the cgroup one-per-line. The TIDs are not ordered and the
853 same TID may show up more than once if the thread got moved to
854 another cgroup and then back or the TID got recycled while
855 reading.
856
857 A TID can be written to migrate the thread associated with the
858 TID to the cgroup. The writer should match all of the
859 following conditions.
860
861 - It must have write access to the "cgroup.threads" file.
862
863 - The cgroup that the thread is currently in must be in the
864 same resource domain as the destination cgroup.
865
866 - It must have write access to the "cgroup.procs" file of the
867 common ancestor of the source and destination cgroups.
868
869 When delegating a sub-hierarchy, write access to this file
870 should be granted along with the containing directory.
871
872 cgroup.controllers
873 A read-only space separated values file which exists on all
874 cgroups.
875
876 It shows space separated list of all controllers available to
877 the cgroup. The controllers are not ordered.
878
879 cgroup.subtree_control
880 A read-write space separated values file which exists on all
881 cgroups. Starts out empty.
882
883 When read, it shows space separated list of the controllers
884 which are enabled to control resource distribution from the
885 cgroup to its children.
886
887 Space separated list of controllers prefixed with '+' or '-'
888 can be written to enable or disable controllers. A controller
889 name prefixed with '+' enables the controller and '-'
890 disables. If a controller appears more than once on the list,
891 the last one is effective. When multiple enable and disable
892 operations are specified, either all succeed or all fail.
893
894 cgroup.events
895 A read-only flat-keyed file which exists on non-root cgroups.
896 The following entries are defined. Unless specified
897 otherwise, a value change in this file generates a file
898 modified event.
899
900 populated
901 1 if the cgroup or its descendants contains any live
902 processes; otherwise, 0.
903 frozen
904 1 if the cgroup is frozen; otherwise, 0.
905
906 cgroup.max.descendants
907 A read-write single value files. The default is "max".
908
909 Maximum allowed number of descent cgroups.
910 If the actual number of descendants is equal or larger,
911 an attempt to create a new cgroup in the hierarchy will fail.
912
913 cgroup.max.depth
914 A read-write single value files. The default is "max".
915
916 Maximum allowed descent depth below the current cgroup.
917 If the actual descent depth is equal or larger,
918 an attempt to create a new child cgroup will fail.
919
920 cgroup.stat
921 A read-only flat-keyed file with the following entries:
922
923 nr_descendants
924 Total number of visible descendant cgroups.
925
926 nr_dying_descendants
927 Total number of dying descendant cgroups. A cgroup becomes
928 dying after being deleted by a user. The cgroup will remain
929 in dying state for some time undefined time (which can depend
930 on system load) before being completely destroyed.
931
932 A process can't enter a dying cgroup under any circumstances,
933 a dying cgroup can't revive.
934
935 A dying cgroup can consume system resources not exceeding
936 limits, which were active at the moment of cgroup deletion.
937
938 cgroup.freeze
939 A read-write single value file which exists on non-root cgroups.
940 Allowed values are "0" and "1". The default is "0".
941
942 Writing "1" to the file causes freezing of the cgroup and all
943 descendant cgroups. This means that all belonging processes will
944 be stopped and will not run until the cgroup will be explicitly
945 unfrozen. Freezing of the cgroup may take some time; when this action
946 is completed, the "frozen" value in the cgroup.events control file
947 will be updated to "1" and the corresponding notification will be
948 issued.
949
950 A cgroup can be frozen either by its own settings, or by settings
951 of any ancestor cgroups. If any of ancestor cgroups is frozen, the
952 cgroup will remain frozen.
953
954 Processes in the frozen cgroup can be killed by a fatal signal.
955 They also can enter and leave a frozen cgroup: either by an explicit
956 move by a user, or if freezing of the cgroup races with fork().
957 If a process is moved to a frozen cgroup, it stops. If a process is
958 moved out of a frozen cgroup, it becomes running.
959
960 Frozen status of a cgroup doesn't affect any cgroup tree operations:
961 it's possible to delete a frozen (and empty) cgroup, as well as
962 create new sub-cgroups.
963
  cgroup.kill
    A write-only single value file which exists on non-root cgroups.
    The only allowed value is "1".

    Writing "1" to the file causes the cgroup and all descendant
    cgroups to be killed. This means that all processes located in
    the affected cgroup tree will be killed via SIGKILL.

    Killing a cgroup tree will deal with concurrent forks
    appropriately and is protected against migrations.

    In a threaded cgroup, writing this file fails with EOPNOTSUPP as
    killing cgroups is a process directed operation, i.e. it affects
    the whole thread-group.

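    For example, to kill every process in the current cgroup and its
    descendants::

      echo 1 > cgroup.kill
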
  cgroup.pressure
    A read-write single value file whose allowed values are "0" and
    "1". The default is "1".

    Writing "0" to the file disables the cgroup PSI accounting.
    Writing "1" to the file re-enables the cgroup PSI accounting.

    This control attribute is not hierarchical, so disabling or
    enabling PSI accounting in a cgroup does not affect PSI
    accounting in its descendants, and enablement does not need to
    be passed down from the root via the ancestors.

    The reason this control attribute exists is that PSI accounts
    stalls for each cgroup separately and aggregates them at each
    level of the hierarchy. This may cause non-negligible overhead
    for some workloads deep in the hierarchy, in which case this
    control attribute can be used to disable PSI accounting in the
    non-leaf cgroups.

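    For example, assuming a hypothetical intermediate cgroup at
    /sys/fs/cgroup/a/b whose PSI data isn't consumed by anything,
    its accounting can be turned off while the leaf cgroups keep
    theirs::

      echo 0 > /sys/fs/cgroup/a/b/cgroup.pressure
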
  irq.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for IRQ/SOFTIRQ. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates the distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and an absolute bandwidth allocation model
for realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed. The (optional) utilization clamping support allows
hinting the schedutil cpufreq governor about the minimum desired
frequency which should always be provided by a CPU, as well as the
maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup. Be aware that system management software may already
have placed RT processes into non-root cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
    A read-only flat-keyed file.
    This file exists whether the controller is enabled or not.

    It always reports the following three stats:

    - usage_usec
    - user_usec
    - system_usec

    and the following five when the controller is enabled:

    - nr_periods
    - nr_throttled
    - throttled_usec
    - nr_bursts
    - burst_usec

  cpu.weight
    A read-write single value file which exists on non-root
    cgroups. The default is "100".

    The weight in the range [1, 10000].

  cpu.weight.nice
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    The nice value is in the range [-20, 19].

    This interface file is an alternative interface for
    "cpu.weight" and allows reading and setting weight using the
    same values used by nice(2). Because the range is smaller and
    granularity is coarser for the nice values, the read value is
    the closest approximation of the current weight.

  cpu.max
    A read-write two value file which exists on non-root cgroups.
    The default is "max 100000".

    The maximum bandwidth limit. It's in the following format::

      $MAX $PERIOD

    which indicates that the group may consume up to $MAX in each
    $PERIOD duration. "max" for $MAX indicates no limit. If only
    one number is written, $MAX is updated.

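    For example, to allow the cgroup at most half a CPU's worth of
    cycles (illustrative values: 50ms of runtime every 100ms period),
    and to later tighten $MAX alone while keeping the period::

      echo "50000 100000" > cpu.max
      echo 25000 > cpu.max
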
  cpu.max.burst
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    The burst in the range [0, $MAX].

  cpu.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for CPU. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
    A read-write single value file which exists on non-root cgroups.
    The default is "0", i.e. no utilization boosting.

    The requested minimum utilization (protection) as a percentage
    rational number, e.g. 12.34 for 12.34%.

    This interface allows reading and setting minimum utilization
    clamp values similar to sched_setattr(2). This minimum
    utilization value is used to clamp the task specific minimum
    utilization clamp.

    The requested minimum utilization (protection) is always capped
    by the current value for the maximum utilization (limit), i.e.
    `cpu.uclamp.max`.

  cpu.uclamp.max
    A read-write single value file which exists on non-root cgroups.
    The default is "max", i.e. no utilization capping.

    The requested maximum utilization (limit) as a percentage
    rational number, e.g. 98.76 for 98.76%.

    This interface allows reading and setting maximum utilization
    clamp values similar to sched_setattr(2). This maximum
    utilization value is used to clamp the task specific maximum
    utilization clamp.


Memory
------

The "memory" controller regulates the distribution of memory. Memory
is stateful and implements both limit and protection models. Due to
the intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory currently being used by the cgroup
    and its descendants.

  memory.min
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    Hard memory protection. If the memory usage of a cgroup
    is within its effective min boundary, the cgroup's memory
    won't be reclaimed under any conditions. If there is no
    unprotected reclaimable memory available, the OOM killer
    is invoked. Above the effective min boundary (or
    effective low boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    The effective min boundary is limited by the memory.min values
    of all ancestor cgroups. If there is memory.min overcommitment
    (child cgroups are requesting more protected memory than the
    parent will allow), then each child cgroup will get the part of
    the parent's protection proportional to its actual memory usage
    below memory.min.

    Putting more memory than generally available under this
    protection is discouraged and may lead to constant OOMs.

    If a memory cgroup is not populated with processes,
    its memory.min is ignored.

  memory.low
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    Best-effort memory protection. If the memory usage of a
    cgroup is within its effective low boundary, the cgroup's
    memory won't be reclaimed unless there is no reclaimable
    memory available in unprotected cgroups.
    Above the effective low boundary (or
    effective min boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    The effective low boundary is limited by the memory.low values
    of all ancestor cgroups. If there is memory.low overcommitment
    (child cgroups are requesting more protected memory than the
    parent will allow), then each child cgroup will get the part of
    the parent's protection proportional to its actual memory usage
    below memory.low.

    Putting more memory than generally available under this
    protection is discouraged.

  memory.high
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Memory usage throttle limit. This is the main mechanism to
    control memory usage of a cgroup. If a cgroup's usage goes
    over the high boundary, the processes of the cgroup are
    throttled and put under heavy reclaim pressure.

    Going over the high limit never invokes the OOM killer and
    under extreme conditions the limit may be breached.

  memory.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Memory usage hard limit. This is the final protection
    mechanism. If a cgroup's memory usage reaches this limit and
    can't be reduced, the OOM killer is invoked in the cgroup.
    Under certain circumstances, the usage may go over the limit
    temporarily.

    In the default configuration, regular 0-order allocations always
    succeed unless the OOM killer chooses the current task as a
    victim.

    Some kinds of allocations don't invoke the OOM killer.
    The caller may retry them differently, return -ENOMEM to
    userspace, or silently ignore the failure in cases like disk
    readahead.

    This is the ultimate protection mechanism. As long as the
    high limit is used and monitored properly, this limit's
    utility is limited to providing the final safety net.

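    For example, a common pattern (with purely illustrative sizes) is
    to pair a working "memory.high" throttle with a slightly larger
    "memory.max" safety net::

      echo 4G > memory.high
      echo 5G > memory.max
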
  memory.reclaim
    A write-only nested-keyed file which exists for all cgroups.

    This is a simple interface to trigger memory reclaim in the
    target cgroup.

    This file accepts a single key, the number of bytes to reclaim.
    No nested keys are currently supported.

    Example::

      echo "1G" > memory.reclaim

    The interface can be later extended with nested keys to
    configure the reclaim behavior. For example, specify the
    type of memory to reclaim from (anon, file, ..).

    Please note that the kernel can over- or under-reclaim from
    the target cgroup. If fewer bytes are reclaimed than the
    specified amount, -EAGAIN is returned.

    Please note that the proactive reclaim (triggered by this
    interface) is not meant to indicate memory pressure on the
    memory cgroup. Therefore socket memory balancing triggered by
    the memory reclaim normally is not exercised in this case.
    This means that the networking layer will not adapt based on
    reclaim induced by memory.reclaim.

  memory.peak
    A read-only single value file which exists on non-root
    cgroups.

    The max memory usage recorded for the cgroup and its
    descendants since the creation of the cgroup.

  memory.oom.group
    A read-write single value file which exists on non-root
    cgroups. The default value is "0".

    Determines whether the cgroup should be treated as
    an indivisible workload by the OOM killer. If set,
    all tasks belonging to the cgroup or to its descendants
    (if the memory cgroup is not a leaf cgroup) are killed
    together or not at all. This can be used to avoid
    partial kills to guarantee workload integrity.

    Tasks with OOM protection (oom_score_adj set to -1000)
    are treated as an exception and are never killed.

    If the OOM killer is invoked in a cgroup, it's not going
    to kill any tasks outside of this cgroup, regardless of
    the memory.oom.group values of ancestor cgroups.

  memory.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    Note that all fields in this file are hierarchical and the
    file modified event can be generated due to an event down the
    hierarchy. For the local events at the cgroup level see
    memory.events.local.

      low
        The number of times the cgroup is reclaimed due to
        high memory pressure even though its usage is under
        the low boundary. This usually indicates that the low
        boundary is over-committed.

      high
        The number of times processes of the cgroup are
        throttled and routed to perform direct memory reclaim
        because the high memory boundary was exceeded. For a
        cgroup whose memory usage is capped by the high limit
        rather than global memory pressure, this event's
        occurrences are expected.

      max
        The number of times the cgroup's memory usage was
        about to go over the max boundary. If direct reclaim
        fails to bring it down, the cgroup goes to OOM state.

      oom
        The number of times the cgroup's memory usage reached
        the limit and allocation was about to fail.

        This event is not raised if the OOM killer is not
        considered as an option, e.g. for failed high-order
        allocations or if the caller asked not to retry.

      oom_kill
        The number of processes belonging to this cgroup
        killed by any kind of OOM killer.

      oom_group_kill
        The number of times a group OOM has occurred.

  memory.events.local
    Similar to memory.events but the fields in the file are local
    to the cgroup, i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

  memory.stat
    A read-only flat-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    on the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    If an entry has no per-node counter (and hence does not show up
    in memory.numa_stat), it is tagged with 'npn' (non-per-node) to
    indicate that it will not show in memory.numa_stat.

      anon
        Amount of memory used in anonymous mappings such as
        brk(), sbrk(), and mmap(MAP_ANONYMOUS)

      file
        Amount of memory used to cache filesystem data,
        including tmpfs and shared memory.

      kernel (npn)
        Amount of total kernel memory, including
        (kernel_stack, pagetables, percpu, vmalloc, slab) in
        addition to other kernel memory use cases.

      kernel_stack
        Amount of memory allocated to kernel stacks.

      pagetables
        Amount of memory allocated for page tables.

      sec_pagetables
        Amount of memory allocated for secondary page tables;
        this currently includes KVM mmu allocations on x86
        and arm64.

      percpu (npn)
        Amount of memory used for storing per-cpu kernel
        data structures.

      sock (npn)
        Amount of memory used in network transmission buffers

      vmalloc (npn)
        Amount of memory used for vmap backed memory.

      shmem
        Amount of cached filesystem data that is swap-backed,
        such as tmpfs, shm segments, shared anonymous mmap()s

      zswap
        Amount of memory consumed by the zswap compression backend.

      zswapped
        Amount of application memory swapped out to zswap.

      file_mapped
        Amount of cached filesystem data mapped with mmap()

      file_dirty
        Amount of cached filesystem data that was modified but
        not yet written back to disk

      file_writeback
        Amount of cached filesystem data that was modified and
        is currently being written back to disk

      swapcached
        Amount of swap cached in memory. The swapcache is accounted
        against both memory and swap usage.

      anon_thp
        Amount of memory used in anonymous mappings backed by
        transparent hugepages

      file_thp
        Amount of cached filesystem data backed by transparent
        hugepages

      shmem_thp
        Amount of shm, tmpfs, shared anonymous mmap()s backed by
        transparent hugepages

      inactive_anon, active_anon, inactive_file, active_file, unevictable
        Amount of memory, swap-backed and filesystem-backed,
        on the internal memory management lists used by the
        page reclaim algorithm.

        As these represent internal list state (e.g. shmem pages are
        on anon memory management lists), inactive_foo + active_foo
        may not be equal to the value for the foo counter, since the
        foo counter is type-based, not list-based.

      slab_reclaimable
        Part of "slab" that might be reclaimed, such as
        dentries and inodes.

      slab_unreclaimable
        Part of "slab" that cannot be reclaimed on memory
        pressure.

      slab (npn)
        Amount of memory used for storing in-kernel data
        structures.

      workingset_refault_anon
        Number of refaults of previously evicted anonymous pages.

      workingset_refault_file
        Number of refaults of previously evicted file pages.

      workingset_activate_anon
        Number of refaulted anonymous pages that were immediately
        activated.

      workingset_activate_file
        Number of refaulted file pages that were immediately
        activated.

      workingset_restore_anon
        Number of restored anonymous pages which have been detected
        as an active workingset before they got reclaimed.

      workingset_restore_file
        Number of restored file pages which have been detected as an
        active workingset before they got reclaimed.

      workingset_nodereclaim
        Number of times a shadow node has been reclaimed

      pgscan (npn)
        Amount of scanned pages (in an inactive LRU list)

      pgsteal (npn)
        Amount of reclaimed pages

      pgscan_kswapd (npn)
        Amount of scanned pages by kswapd (in an inactive LRU list)

      pgscan_direct (npn)
        Amount of scanned pages directly (in an inactive LRU list)

      pgscan_khugepaged (npn)
        Amount of scanned pages by khugepaged (in an inactive LRU
        list)

      pgsteal_kswapd (npn)
        Amount of reclaimed pages by kswapd

      pgsteal_direct (npn)
        Amount of reclaimed pages directly

      pgsteal_khugepaged (npn)
        Amount of reclaimed pages by khugepaged

      pgfault (npn)
        Total number of page faults incurred

      pgmajfault (npn)
        Number of major page faults incurred

      pgrefill (npn)
        Amount of scanned pages (in an active LRU list)

      pgactivate (npn)
        Amount of pages moved to the active LRU list

      pgdeactivate (npn)
        Amount of pages moved to the inactive LRU list

      pglazyfree (npn)
        Amount of pages postponed to be freed under memory pressure

      pglazyfreed (npn)
        Amount of reclaimed lazyfree pages

      thp_fault_alloc (npn)
        Number of transparent hugepages which were allocated to
        satisfy a page fault. This counter is not present when
        CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_collapse_alloc (npn)
        Number of transparent hugepages which were allocated to allow
        collapsing an existing range of pages. This counter is not
        present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.numa_stat
    A read-only nested-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality
    information within a memcg since the pages are allowed to be
    allocated from any physical node. One use case is evaluating
    application performance by combining this information with the
    application's CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

      type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    For the meaning of each entry, refer to memory.stat.

  memory.swap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of swap currently being used by the cgroup
    and its descendants.

  memory.swap.high
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Swap usage throttle limit. If a cgroup's swap usage exceeds
    this limit, all its further allocations will be throttled to
    allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup. It is NOT
    designed to manage the amount of swapping a workload does
    during regular operation. Compare to memory.swap.max, which
    prohibits swapping past a set amount, but lets the cgroup
    continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  memory.swap.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Swap usage hard limit. If a cgroup's swap usage reaches this
    limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

      high
        The number of times the cgroup's swap usage was over
        the high threshold.

      max
        The number of times the cgroup's swap usage was about
        to go over the max boundary and swap allocation
        failed.

      fail
        The number of times swap allocation failed either
        because of running out of swap system-wide or the max
        limit.

    When the limit is reduced under the current usage, the existing
    swap entries are reclaimed gradually and the swap usage may stay
    higher than the limit for an extended period of time. This
    reduces the impact on the workload and memory management.

  memory.zswap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory consumed by the zswap compression
    backend.

  memory.zswap.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Zswap usage hard limit. If a cgroup's zswap pool reaches this
    limit, it will refuse to take any more stores before existing
    entries fault back in or are written out to disk.

  memory.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for memory. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on the high limit (sum of high limits > available
memory) and letting global memory pressure distribute memory
according to usage is a viable strategy.

Because a breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also operate as
performant with a small amount of memory. A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released. Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources. This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
    A read-only nested-keyed file.

    Lines are keyed by $MAJ:$MIN device numbers and not ordered.
    The following nested keys are defined.

      ====== =====================
      rbytes Bytes read
      wbytes Bytes written
      rios   Number of read IOs
      wios   Number of write IOs
      dbytes Bytes discarded
      dios   Number of discard IOs
      ====== =====================

    An example read output follows::

      8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
      8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the Quality of Service of the IO cost
    model based controller (CONFIG_BLK_CGROUP_IOCOST) which
    currently implements "io.weight" proportional control. Lines
    are keyed by $MAJ:$MIN device numbers and not ordered. The
    line for a given device is populated on the first write for
    the device on "io.cost.qos" or "io.cost.model". The following
    nested keys are defined.

      ====== =====================================
      enable Weight-based control enable
      ctrl   "auto" or "user"
      rpct   Read latency percentile [0, 100]
      rlat   Read latency threshold
      wpct   Write latency percentile [0, 100]
      wlat   Write latency threshold
      min    Minimum scaling percentage [1, 10000]
      max    Maximum scaling percentage [1, 10000]
      ====== =====================================

    The controller is disabled by default and can be enabled by
    setting "enable" to 1. "rpct" and "wpct" parameters default
    to zero and the controller uses internal device saturation
    state to adjust the overall IO rate between "min" and "max".

    When a better control quality is needed, latency QoS
    parameters can be configured. For example::

      8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0

    shows that on sdb, the controller is enabled, will consider
    the device saturated if the 95th percentile of read completion
    latencies is above 75ms or write 150ms, and adjust the overall
    IO issue rate between 50% and 150% accordingly.

    The lower the saturation point, the better the latency QoS at
    the cost of aggregate bandwidth. The narrower the allowed
    adjustment range between "min" and "max", the more conformant
    to the cost model the IO behavior. Note that the IO issue
    base rate may be far off from 100% and setting "min" and "max"
    blindly can lead to a significant loss of device capacity or
    control quality. "min" and "max" are useful for regulating
    devices which show wide temporary behavior changes - e.g. an
    ssd which accepts writes at the line speed for a while and
    then completely stalls for multiple seconds.

    When "ctrl" is "auto", the parameters are controlled by the
    kernel and may change automatically. Setting "ctrl" to "user"
    or setting any of the percentile and latency parameters puts
    it into "user" mode and disables the automatic changes. The
    automatic mode can be restored by setting "ctrl" to "auto".

  io.cost.model
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the cost model of the IO cost model based
    controller (CONFIG_BLK_CGROUP_IOCOST) which currently
    implements "io.weight" proportional control. Lines are keyed
    by $MAJ:$MIN device numbers and not ordered. The line for a
    given device is populated on the first write for the device on
    "io.cost.qos" or "io.cost.model". The following nested keys
    are defined.

      ===== ================================
      ctrl  "auto" or "user"
      model The cost model in use - "linear"
      ===== ================================

    When "ctrl" is "auto", the kernel may change all parameters
    dynamically. When "ctrl" is set to "user" or any other
    parameters are written to, "ctrl" becomes "user" and the
    automatic changes are disabled.

    When "model" is "linear", the following model parameters are
    defined.

      ============= ========================================
      [r|w]bps      The maximum sequential IO throughput
      [r|w]seqiops  The maximum 4k sequential IOs per second
      [r|w]randiops The maximum 4k random IOs per second
      ============= ========================================

    From the above, the builtin linear model determines the base
    costs of a sequential and random IO and the cost coefficient
    for the IO size. While simple, this model can cover most
    common device classes acceptably.

    The IO cost model isn't expected to be accurate in the absolute
    sense and is scaled to the device behavior dynamically.

    If needed, tools/cgroup/iocost_coef_gen.py can be used to
    generate device-specific coefficients.

  io.weight
    A read-write flat-keyed file which exists on non-root cgroups.
    The default is "default 100".

    The first line is the default weight applied to devices
    without specific override. The rest are overrides keyed by
    $MAJ:$MIN device numbers and not ordered. The weights are in
    the range [1, 10000] and specify the relative amount of IO
    time the cgroup can use in relation to its siblings.

    The default weight can be updated by writing either "default
    $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
    "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

    An example read output follows::

      default 100
      8:16 200
      8:0 50

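    For example, the state shown above could be produced with the
    following writes (the device numbers are illustrative)::

      echo "default 100" > io.weight
      echo "8:16 200" > io.weight
      echo "8:0 50" > io.weight
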
  io.max
    A read-write nested-keyed file which exists on non-root
    cgroups.

    BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
    device numbers and not ordered. The following nested keys are
    defined.

      ===== ==================================
      rbps  Max read bytes per second
      wbps  Max write bytes per second
      riops Max read IO operations per second
      wiops Max write IO operations per second
      ===== ==================================

    When writing, any number of nested key-value pairs can be
    specified in any order. "max" can be specified as the value
    to remove a specific limit. If the same key is specified
    multiple times, the outcome is undefined.

    BPS and IOPS are measured in each IO direction and IOs are
    delayed if the limit is reached. Temporary bursts are allowed.

    Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

      echo "8:16 rbps=2097152 wiops=120" > io.max

    Reading returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=120

    The write IOPS limit can be removed by writing the following::

      echo "8:16 wiops=max" > io.max

    Reading now returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for IO. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs. The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain. Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with. These are called foreign pages. The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well. In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the
    amount of available memory capped by limits imposed by the
    memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, this is calculated into a ratio against
    total available memory and applied the same way as
    vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide
a group with a latency target, and if the average latency exceeds that
target the controller will throttle any peers that have a higher
latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy. This
means that in the diagram below, only groups A, B, and C will influence
each other, and groups D and F will influence each other. Group G will
influence nobody::

      [root]
      /  |  \
     A   B   C
    /  \     |
   D    F    G


So the ideal way to configure this is to set io.latency in groups A,
B, and C. Generally you do not want to set a value lower than the
latency your device supports. Experiment to find the value that works
best for your workload. Start at higher than the expected latency for
your device and watch the avg_lat value in io.stat for your workload
group to get an idea of the latency you see during normal operation.
Use the avg_lat value as a basis for your real setting, setting at
10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting
their latency target the controller doesn't do anything. Once a group
starts missing its target it begins throttling any peer group that has
a higher target than itself. This throttling takes two forms:

- Queue depth throttling. This is the number of outstanding IOs a
  group is allowed to have. We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction. There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups. This includes swapping and metadata IO. These
  types of IO are allowed to occur normally; however, they are
  "charged" to the originating group. If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase. The delay value is how many microseconds are
  being added to any process that runs in this group. Because this
  number can grow quite large if there is a lot of swapping or
  metadata IO occurring, we limit the individual delay events to 1
  second at a time.

Once the victimized group starts meeting its latency target again it
will start unthrottling any peer groups that were throttled
previously. If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
    This takes a similar format as the other controllers.

      "MAJOR:MINOR target=<target time in microseconds>"

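    For example, to set an illustrative 750 microsecond target on
    device 8:16::

      echo "8:16 target=750" > io.latency
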
  io.stat
    If the controller is enabled you will see extra stats in io.stat
    in addition to the normal ones.

      depth
        This is the current queue depth for the group.

      avg_lat
        This is an exponential moving average with a decay rate of
        1/exp bound by the sampling interval. The decay rate
        interval can be calculated by multiplying the win value in
        io.stat by the corresponding number of samples based on the
        win value.

      win
        The sampling window size in milliseconds. This is the
        minimum duration of time between evaluation events. Windows
        only elapse with IO activity. Idle periods extend the most
        recent window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup
policy, namely the blkio.prio.class attribute. The following values
are accepted for that attribute:

  no-change
    Do not modify the I/O priority class.

  none-to-rt
    For requests that do not have an I/O priority class (NONE),
    change the I/O priority class into RT. Do not modify
    the I/O priority class of other requests.

  restrict-to-be
    For requests that do not have an I/O priority class or that have
    I/O priority class RT, change it into BE. Do not modify the I/O
    priority class of requests that have priority class IDLE.

  idle
    Change the I/O priority class of all requests into IDLE, the
    lowest I/O priority class.

The following numerical values are associated with the I/O priority
policies:

+-------------+---+
| no-change   | 0 |
+-------------+---+
| none-to-rt  | 1 |
+-------------+---+
| rt-to-be    | 2 |
+-------------+---+
| all-to-idle | 3 |
+-------------+---+

The numerical value that corresponds to each I/O priority class is as
follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as
follows:

- Translate the I/O priority class policy into a number.
- Change the request I/O priority class into the maximum of the I/O
  priority class policy number and the numerical I/O priority class.

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller. For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Hard limit of number of processes.

  pids.current
    A read-only single value file which exists on all cgroups.

    The number of processes currently in the cgroup and its
    descendants.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max. This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max. However, it is not possible to violate a cgroup PID policy
through fork() or clone(). These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.

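For example, a hypothetical session capping a cgroup at 64 tasks and
observing the current count::

  echo 64 > pids.max
  cat pids.current

Once pids.current reaches 64, further fork() or clone() calls by
processes in the cgroup fail with -EAGAIN.
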

Cpuset
------

The "cpuset" controller provides a mechanism for constraining
the CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the systems with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical. That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested CPUs to be used by tasks within this
    cgroup. The actual list of CPUs to be granted, however, is
    subject to constraints imposed by its parent and can differ
    from the requested CPUs.

    The CPU numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.cpus
      0-4,6,8-10

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.cpus" or all the available CPUs if none is found.

    The value of "cpuset.cpus" stays constant until the next update
    and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined CPUs that are actually granted to this
    cgroup by its parent. These CPUs are allowed to be used by
    tasks within the current cgroup.

    If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
    all the CPUs from the parent cgroup that can be available to
    be used by this cgroup. Otherwise, it should be a subset of
    "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
    can be granted. In this case, it will be treated just like an
    empty "cpuset.cpus".

    Its value will be affected by CPU hotplug events.

  cpuset.mems
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested memory nodes to be used by tasks within
    this cgroup. The actual list of memory nodes granted, however,
    is subject to constraints imposed by its parent and can differ
    from the requested memory nodes.

    The memory node numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.mems
      0-1,3

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.mems" or all the available memory nodes if none
    is found.

    The value of "cpuset.mems" stays constant until the next update
    and won't be affected by any memory node hotplug events.

    Setting a non-empty value to "cpuset.mems" causes memory of
    tasks within the cgroup to be migrated to the designated nodes if
    they are currently using memory outside of the designated nodes.

    There is a cost for this memory migration. The migration
    may not be complete and some memory pages may be left behind.
    So it is recommended that "cpuset.mems" should be set properly
    before spawning new tasks into the cpuset. Even if there is
    a need to change "cpuset.mems" with active tasks, it shouldn't
    be done frequently.

  cpuset.mems.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined memory nodes that are actually granted to
    this cgroup by its parent. These memory nodes are allowed to
    be used by tasks within the current cgroup.

    If "cpuset.mems" is empty, it shows all the memory nodes from the
    parent cgroup that will be available to be used by this cgroup.
    Otherwise, it should be a subset of "cpuset.mems" unless none of
    the memory nodes listed in "cpuset.mems" can be granted. In this
    case, it will be treated just like an empty "cpuset.mems".

    Its value will be affected by memory node hotplug events.

  cpuset.cpus.partition
    A read-write single value file which exists on non-root
    cpuset-enabled cgroups. This flag is owned by the parent cgroup
    and is not delegatable.

    It accepts only the following input values when written to.

      ========== =====================================
      "member"   Non-root member of a partition
      "root"     Partition root
      "isolated" Partition root without load balancing
      ========== =====================================

    The root cgroup is always a partition root and its state
    cannot be changed. All other non-root cgroups start out as
    "member".

    When set to "root", the current cgroup is the root of a new
    partition or scheduling domain that comprises itself and all
    its descendants except those that are separate partition roots
    themselves and their descendants.

    When set to "isolated", the CPUs in that partition root will
    be in an isolated state without any load balancing from the
    scheduler. Tasks placed in such a partition with multiple
    CPUs should be carefully distributed and bound to each of the
    individual CPUs for optimal performance.

    The value shown in "cpuset.cpus.effective" of a partition root
    is the CPUs that the partition root can dedicate to a potential
    new child partition root. The new child subtracts available
    CPUs from its parent's "cpuset.cpus.effective".

    A partition root ("root" or "isolated") can be in one of two
    possible states - valid or invalid. An invalid partition
    root is in a degraded state where some state information may
    be retained, but behaves more like a "member".

    All possible state transitions among "member", "root" and
    "isolated" are allowed.

    On read, the "cpuset.cpus.partition" file can show the following
    values.

      ============================= =====================================
      "member"                      Non-root member of a partition
      "root"                        Partition root
      "isolated"                    Partition root without load balancing
      "root invalid (<reason>)"     Invalid partition root
      "isolated invalid (<reason>)" Invalid isolated partition root
      ============================= =====================================

    In the case of an invalid partition root, a descriptive string on
    why the partition is invalid is included within parentheses.

    For a partition root to become valid, the following conditions
    must be met.

    1) The "cpuset.cpus" is exclusive with its siblings, i.e. the
       CPUs are not shared by any of its siblings (exclusivity rule).
    2) The parent cgroup is a valid partition root.
    3) The "cpuset.cpus" is not empty and must contain at least
       one of the CPUs from the parent's "cpuset.cpus", i.e. they
       overlap.
    4) The "cpuset.cpus.effective" cannot be empty unless there is
       no task associated with this partition.

    External events like hotplug or changes to "cpuset.cpus" can
    cause a valid partition root to become invalid and vice versa.
    Note that a task cannot be moved to a cgroup with an empty
    "cpuset.cpus.effective".

    For a valid partition root with the sibling cpu exclusivity
    rule enabled, changes made to "cpuset.cpus" that violate the
    exclusivity rule will invalidate the partition as well as its
    sibling partitions with conflicting cpuset.cpus values. So
    care must be taken in changing "cpuset.cpus".

    A valid non-root parent partition may distribute out all its CPUs
    to its child partitions when there is no task associated with it.

    Care must be taken when changing a valid partition root to
    "member" as all its child partitions, if present, will become
    invalid, causing disruption to tasks running in those child
    partitions. These inactivated partitions could be recovered if
    their parent is switched back to a partition root with a proper
    set of "cpuset.cpus".

    Poll and inotify events are triggered whenever the state of
    "cpuset.cpus.partition" changes. That includes changes caused
    by a write to "cpuset.cpus.partition", cpu hotplug or other
    changes that modify the validity status of the partition.
    This will allow user space agents to monitor unexpected changes
    to "cpuset.cpus.partition" without the need to do continuous
    polling.

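    For example, the following hypothetical sequence carves a child
    cgroup "A" out of its parent as an isolated partition, assuming
    "A/cpuset.cpus" satisfies the conditions listed above (CPU
    numbers are illustrative)::

      echo "2-3" > A/cpuset.cpus
      echo isolated > A/cpuset.cpus.partition
      cat A/cpuset.cpus.partition
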

Device controller
-----------------

The device controller manages access to device files. It includes
both the creation of new device files (using mknod) and access to
existing device files.

Cgroup v2 device controller has no interface files and is implemented
on top of cgroup BPF. To control access to device files, a user may
create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
them to cgroups with the BPF_CGROUP_DEVICE flag. On an attempt to
access a device file, corresponding BPF programs will be executed, and
depending on the return value the attempt will succeed or fail with
-EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers). If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.

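For illustration only, one way to load and attach such a program from
userspace is via bpftool; the object file name, pin paths and cgroup
path below are hypothetical::

  # bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_filter
  # bpftool cgroup attach /sys/fs/cgroup/app device pinned /sys/fs/bpf/dev_filter
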

RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
    A read-write nested-keyed file that exists for all cgroups
    except the root. It describes the currently configured resource
    limits for RDMA/IB devices.

    Lines are keyed by device name and are not ordered.
    Each line contains a space separated resource name and its
    configured limit that can be distributed.

    The following nested keys are defined.

      ========== =============================
      hca_handle Maximum number of HCA Handles
      hca_object Maximum number of HCA Objects
      ========== =============================

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=2 hca_object=2000
      ocrdma1 hca_handle=3 hca_object=max

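    For example, the limits shown above could plausibly be configured
    by writing lines in the same nested-keyed format as the read
    output (device names are illustrative)::

      echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max
      echo "ocrdma1 hca_handle=3 hca_object=max" > rdma.max
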
  rdma.current
    A read-only file that describes current resource usage.
    It exists for all cgroups except the root.

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=1 hca_object=20
      ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
    Show current usage for "hugepagesize" hugetlb. It exists for
    all cgroups except the root.

  hugetlb.<hugepagesize>.max
    Set/show the hard limit of "hugepagesize" hugetlb usage.
    The default value is "max". It exists for all cgroups except
    the root.

  hugetlb.<hugepagesize>.events
    A read-only flat-keyed file which exists on non-root cgroups.

      max
        The number of allocation failures due to the HugeTLB limit

  hugetlb.<hugepagesize>.events.local
    Similar to hugetlb.<hugepagesize>.events but the fields in the
    file are local to the cgroup, i.e. not hierarchical. The file
    modified event generated on this file reflects only the local
    events.

  hugetlb.<hugepagesize>.numa_stat
    Similar to memory.numa_stat, it shows the numa information of
    the hugetlb pages of <hugepagesize> in this cgroup. Only active
    (in use) hugetlb pages are included. The per-node values are in
    bytes.

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources. The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{} in
the include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file. The provider of the
resource must set its capacity prior to using the resource by calling
misc_cg_set_capacity().

Once a capacity is set, the resource usage can be updated using the
charge and uncharge APIs. All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

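The charge/uncharge flow could look roughly like the sketch below,
where MISC_CG_RES_FOO stands in for a hypothetical entry added to enum
misc_res_type (error handling kept minimal)::

    /* Hypothetical provider of a "foo" scalar resource. */
    #include <linux/misc_cgroup.h>

    static int foo_init(u64 total_units)
    {
        /* Advertise the capacity before any charging happens. */
        return misc_cg_set_capacity(MISC_CG_RES_FOO, total_units);
    }

    static struct misc_cg *foo_alloc_unit(void)
    {
        struct misc_cg *cg = get_current_misc_cg();

        /* Fails when the charge would exceed misc.max or capacity. */
        if (misc_cg_try_charge(MISC_CG_RES_FOO, cg, 1)) {
            put_misc_cg(cg);
            return NULL;
        }
        return cg;      /* keep the reference until uncharge */
    }

    static void foo_free_unit(struct misc_cg *cg)
    {
        misc_cg_uncharge(MISC_CG_RES_FOO, cg, 1);
        put_misc_cg(cg);        /* drop ref taken at charge time */
    }
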
Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files.
Assuming two misc resources (res_a and res_b) are registered:

  misc.capacity
        A read-only flat-keyed file shown only in the root cgroup. It
        shows miscellaneous scalar resources available on the platform
        along with their quantities::

          $ cat misc.capacity
          res_a 50
          res_b 10

  misc.current
        A read-only flat-keyed file shown in the non-root cgroups. It
        shows the current usage of the resources in the cgroup and its
        children::

          $ cat misc.current
          res_a 3
          res_b 0

  misc.max
        A read-write flat-keyed file shown in the non-root cgroups.
        Allowed maximum usage of the resources in the cgroup and its
        children::

          $ cat misc.max
          res_a max
          res_b 4

        A limit can be set by::

          # echo res_a 1 > misc.max

        A limit can be set to max by::

          # echo res_a max > misc.max

        Limits can be set higher than the capacity value in the
        misc.capacity file.

  misc.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified otherwise,
        a value change in this file generates a file modified event.
        All fields in this file are hierarchical.

          max
                The number of times the cgroup's resource usage was
                about to go over the max boundary.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it is
used first, and stays charged to that cgroup until that resource is
freed. Migrating a process to a different cgroup does not move the
charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path. The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part of
the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup. The weight of this child cgroup is derived from the
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so that the neutral - nice 0 - value is 100 instead of
1024).

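As a non-normative illustration of that scaling (the excerpt copies a
few entries of sched_prio_to_weight; the integer divisions are shown
in the comments)::

    /* Illustrative only - not kernel code. */
    static const int prio_to_weight_excerpt[] = {
        /* nice:  -10    -5     0     5    10    19 */
                 9548,  3121, 1024,  335,  110,   15,
    };

    /* Scale a sched_prio_to_weight entry so that the neutral nice-0
     * weight of 1024 maps to a cgroup-style weight of 100. */
    static int scale_to_cgroup_weight(int sched_weight)
    {
        return sched_weight * 100 / 1024;
        /* 1024 -> 100 (nice 0), 110 -> 10 (nice 10), 9548 -> 932 */
    }
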

IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it were a normal child cgroup of the root cgroup with a
weight value of 200. For example, if the root cgroup's only child
cgroup has weight 100, the implicit node - and thus the root cgroup's
own processes - receives 200 / (200 + 100), i.e. two thirds, of the IO
resources.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace. The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root. The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

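As an illustration, a new cgroup namespace can be created from C
roughly as follows (a minimal sketch; CAP_SYS_ADMIN is required)::

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (unshare(CLONE_NEWCGROUP)) {
            perror("unshare");
            return 1;
        }
        /* The cgroup this process was in at unshare time is now the
         * cgroupns root; /proc/self/cgroup is shown relative to it. */
        execlp("cat", "cat", "/proc/self/cgroup", (char *)NULL);
        perror("execlp");
        return 1;
    }
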
Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process. In a container setup where
a set of cgroups and namespaces is intended to isolate processes, the
"/proc/$PID/cgroup" file may leak system-level information to the
isolated processes. For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data
which is undesirable to expose to the isolated processes. A cgroup
namespace can be used to restrict visibility of this path. For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads). This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running. For example, if a process in
the /batchjobs/container_id1 cgroup calls unshare, the cgroup
/batchjobs/container_id1 becomes the cgroupns root. For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown. For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside the cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged. A task inside a
cgroup namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen when attaching to another cgroup
namespace. It is expected that someone moves the attaching process
under the target cgroup namespace root.

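For illustration, attaching to another process's cgroup namespace from
C could look like the sketch below ("$PID" stands for the target
process)::

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* e.g. path = "/proc/$PID/ns/cgroup" */
    int attach_cgroupns(const char *path)
    {
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
            perror("open");
            return -1;
        }
        /* Needs CAP_SYS_ADMIN in both user namespaces, see above. */
        if (setns(fd, CLONE_NEWCGROUP)) {
            perror("setns");
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }
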

Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root. The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue. This must be called after
        a queue (device) has been associated with the bio and
        before submission.

  wbc_account_cgroup_owner(@wbc, @page, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.

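For illustration, a filesystem's writeback submission path might use
the two functions roughly as sketched below (foofs and fs_build_bio()
are hypothetical stand-ins for the filesystem's own bio construction)::

    /* Sketch only - not a real filesystem. */
    static void foofs_submit_writeback(struct writeback_control *wbc,
                                       struct inode *inode,
                                       struct page *page)
    {
        struct bio *bio = fs_build_bio(inode, page);

        /* Associate the bio with the inode's owner cgroup; the
         * device is already set by the time this is called. */
        wbc_init_bio(wbc, bio);

        /* Account the data segment to the owning cgroup. */
        wbc_account_cgroup_owner(wbc, page, PAGE_SIZE);

        submit_bio(bio);
    }
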
With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

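Opting in is a one-line addition to the filesystem's super_block
setup; a hedged sketch, assuming a fill_super-style callback of a
hypothetical foofs::

    static int foofs_fill_super(struct super_block *sb, void *data,
                                int silent)
    {
        sb->s_iflags |= SB_I_CGROUPWB;  /* enable cgroup writeback */
        /* ... the rest of the usual fill_super setup ... */
        return 0;
    }
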
wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and,
if the writeback session is holding shared resources, e.g. a journal
entry, this may lead to priority inversion. There is no one easy
solution for the problem. Filesystems can try to work around specific
problem cases by skipping wbc_init_bio() and using
bio_associate_blkg() directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options is supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use the "cgroup.controllers"
  file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one. The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy. This often meant that userland ended up managing multiple
similar hierarchies repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more importantly
the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem. cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details. These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel ended up
inadvertently exposing, and being locked into, such constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There were also other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.