From: Stéphane Graber Date: Wed, 22 Jan 2014 21:13:24 +0000 (-0500) Subject: doc: Try to clear some confusion about lxc.conf X-Git-Tag: lxc-1.0.0.beta3~50 X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=55fc19a1042bca36ae431cb4a51c2abc0ca4d801;p=thirdparty%2Flxc.git doc: Try to clear some confusion about lxc.conf Signed-off-by: Stéphane Graber Acked-by: Serge E. Hallyn --- diff --git a/configure.ac b/configure.ac index 73facf355..736625f4c 100644 --- a/configure.ac +++ b/configure.ac @@ -608,6 +608,8 @@ AC_CONFIG_FILES([ doc/lxc-wait.sgml doc/lxc.conf.sgml + doc/lxc.container.conf.sgml + doc/lxc.system.conf.sgml doc/lxc-usernet.sgml doc/lxc.sgml doc/common_options.sgml diff --git a/doc/Makefile.am b/doc/Makefile.am index e84871722..9ddf53f75 100644 --- a/doc/Makefile.am +++ b/doc/Makefile.am @@ -10,7 +10,8 @@ SUBDIRS += api endif EXTRA_DIST = \ - lxc.conf \ + lxc.container.conf \ + lxc.system.conf \ FAQ.txt if ENABLE_DOCBOOK @@ -37,6 +38,8 @@ man_MANS = \ lxc-wait.1 \ \ lxc.conf.5 \ + lxc.container.conf.5 \ + lxc.system.conf.5 \ lxc-usernet.5 \ \ lxc.7 diff --git a/doc/lxc.conf.sgml.in b/doc/lxc.conf.sgml.in index 897738fb5..19f11c241 100644 --- a/doc/lxc.conf.sgml.in +++ b/doc/lxc.conf.sgml.in @@ -2,10 +2,10 @@ lxc: linux Container library -(C) Copyright IBM Corp. 2007, 2008 +(C) Copyright Canonical Ltd. 2014 Authors: -Daniel Lezcano +Stéphane Graber This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public @@ -41,7 +41,7 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA lxc.conf - linux container configuration file + Configuration files for LXC. @@ -49,1480 +49,91 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Description - The linux containers (lxc) are always created - before being used. This creation defines a set of system - resources to be virtualized / isolated when a process is using - the container. 
By default, the pids, sysv ipc and mount points - are virtualized and isolated. The other system resources are - shared across containers, until they are explicitly defined in - the configuration file. For example, if there is no network - configuration, the network will be shared between the creator of - the container and the container itself, but if the network is - specified, a new network stack is created for the container and - the container can no longer use the network of its ancestor. + LXC configuration is split in two parts. Container configuration + and system configuration. - - The configuration file defines the different system resources to - be assigned for the container. At present, the utsname, the - network, the mount points, the root file system, the user namespace, - and the control groups are supported. - - - - Each option in the configuration file has the form key - = value fitting in one line. The '#' character means - the line is a comment. - - - - Configuration - - In order to ease administration of multiple related containers, it - is possible to have a container configuration file cause another - file to be loaded. For instance, network configuration - can be defined in one common file which is included by multiple - containers. Then, if the containers are moved to another host, - only one file may need to be updated. - - - - - - - - - - Specify the file to be included. The included file must be - in the same valid lxc configuration file format. - - - - - - - - Architecture - - Allows one to set the architecture for the container. For example, - set a 32bits architecture for a container running 32bits - binaries on a 64bits host. This fixes the container scripts - which rely on the architecture to do some work like - downloading the packages. - - - - - - - - - - Specify the architecture for the container. 
- - - Valid options are - , - , - , - - - - - - - - - - Hostname - - The utsname section defines the hostname to be set for the - container. That means the container can set its own hostname - without changing the one from the system. That makes the - hostname private for the container. - - - - - - - - - specify the hostname for the container - - - - - - - - Halt signal - - Allows one to specify signal name or number, sent by lxc-stop to the - container's init process to cleanly shutdown the container. Different - init systems could use different signals to perform clean shutdown - sequence. This option allows the signal to be specified in kill(1) - fashion, e.g. SIGPWR, SIGRTMIN+14, SIGRTMAX-10 or plain number. The - default signal is SIGPWR. - - - - - - - - - specify the signal used to halt the container - - - - - - - - Stop signal - - Allows one to specify signal name or number, sent by lxc-stop to forcibly - shutdown the container. This option allows signal to be specified in - kill(1) fashion, e.g. SIGKILL, SIGRTMIN+14, SIGRTMAX-10 or plain number. - The default signal is SIGKILL. - - - - - - - - - specify the signal used to stop the container - - - - - - - - Network - - The network section defines how the network is virtualized in - the container. The network virtualization acts at layer - two. In order to use the network virtualization, parameters - must be specified to define the network interfaces of the - container. Several virtual interfaces can be assigned and used - in a container even if the system has only one physical - network interface. - - - - - - - - - specify what kind of network virtualization to be used - for the container. Each time - a field is found a new - round of network configuration begins. In this way, - several network virtualization types can be specified - for the same container, as well as assigning several - network interfaces for one container. 
The different - virtualization types can be: - - - - will cause the container to share - the host's network namespace. This means the host - network devices are usable in the container. It also - means that if both the container and host have upstart as - init, 'halt' in a container (for instance) will shut down the - host. - - - - will create only the loopback - interface. - - - - a peer network device is created - with one side assigned to the container and the other - side is attached to a bridge specified by - the . If the bridge is - not specified, then the veth pair device will be created - but not attached to any bridge. Otherwise, the bridge - has to be setup before on the - system, lxc won't handle any - configuration outside of the container. By - default lxc choose a name for the - network device belonging to the outside of the - container, this name is handled - by lxc, but if you wish to handle - this name yourself, you can tell lxc - to set a specific name with - the option. - - - - a vlan interface is linked with - the interface specified by - the and assigned to - the container. The vlan identifier is specified with the - option . - - - - a macvlan interface is linked - with the interface specified by - the and assigned to - the container. - specifies the - mode the macvlan will use to communicate between - different macvlan on the same upper device. The accepted - modes are , the device never - communicates with any other device on the same upper_dev (default), - , the new Virtual Ethernet Port - Aggregator (VEPA) mode, it assumes that the adjacent - bridge returns all frames where both source and - destination are local to the macvlan port, i.e. the - bridge is set up as a reflective relay. Broadcast - frames coming in from the upper_dev get flooded to all - macvlan interfaces in VEPA mode, local frames are not - delivered locally, or , it - provides the behavior of a simple bridge between - different macvlan interfaces on the same port. 
Frames - from one interface to another one get delivered directly - and are not sent out externally. Broadcast frames get - flooded to all other bridge ports and to the external - interface, but when they come back from a reflective - relay, we don't deliver them again. Since we know all - the MAC addresses, the macvlan bridge mode does not - require learning or STP like the bridge module does. - - - - an already existing interface - specified by the is - assigned to the container. - - - - - - - - - - - specify an action to do for the - network. - - - activates the interface. - - - - - - - - - - - specify the interface to be used for real network - traffic. - - - - - - - - - - - specify the maximum transfer unit for this interface. - - - - - - - - - - - the interface name is dynamically allocated, but if - another name is needed because the configuration files - being used by the container use a generic name, - eg. eth0, this option will rename the interface in the - container. - - - - - - - - - - - the interface mac address is dynamically allocated by - default to the virtual interface, but in some cases, - this is needed to resolve a mac address conflict or to - always have the same link-local ipv6 address. - Any "x" in address will be replaced by random value, - this allows setting hwaddr templates. - - - - - - - - - - - specify the ipv4 address to assign to the virtualized - interface. Several lines specify several ipv4 addresses. - The address is in format x.y.z.t/m, - eg. 192.168.1.123/24. The broadcast address should be - specified on the same line, right after the ipv4 - address. - - - - - - - - - - - specify the ipv4 address to use as the gateway inside the - container. The address is in format x.y.z.t, eg. - 192.168.1.123. - - Can also have the special value , - which means to take the primary address from the bridge - interface (as specified by the - option) and use that as - the gateway. is only available when - using the and - network types. 
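The veth options and the ipv4 gateway behaviour described above can be combined in a single network block; the following fragment is an illustrative sketch, not taken verbatim from this page (the bridge name br0 and the addresses are assumed values):

```
# veth pair attached to an existing host bridge (br0 is an assumed name)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.123/24
# 'auto' derives the gateway from the primary address of the bridge
lxc.network.ipv4.gateway = auto
```

With the special value auto, the container's default route is taken from the primary address of the bridge named in lxc.network.link, as described above.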
- - - - - - - - - - - - specify the ipv6 address to assign to the virtualized - interface. Several lines specify several ipv6 addresses. - The address is in format x::y/m, - eg. 2003:db8:1:0:214:1234:fe0b:3596/64 - - - - - - - - - - - specify the ipv6 address to use as the gateway inside the - container. The address is in format x::y, - eg. 2003:db8:1:0::1 - - Can also have the special value , - which means to take the primary address from the bridge - interface (as specified by the - option) and use that as - the gateway. is only available when - using the and - network types. - - - - - - - - - - - add a configuration option to specify a script to be - executed after creating and configuring the network used - from the host side. The following arguments are passed - to the script: container name and config section name - (net) Additional arguments depend on the config section - employing a script hook; the following are used by the - network system: execution context (up), network type - (empty/veth/macvlan/phys), Depending on the network - type, other arguments may be passed: - veth/macvlan/phys. And finally (host-sided) device name. - - - Standard output from the script is logged at debug level. - Standard error is not logged, but can be captured by the - hook redirecting its standard error to standard output. - - - - - - - - - - - add a configuration option to specify a script to be - executed before destroying the network used from the - host side. The following arguments are passed to the - script: container name and config section name (net) - Additional arguments depend on the config section - employing a script hook; the following are used by the - network system: execution context (down), network type - (empty/veth/macvlan/phys), Depending on the network - type, other arguments may be passed: - veth/macvlan/phys. And finally (host-sided) device name. - - - Standard output from the script is logged at debug level. 
Standard error is not logged, but can be captured by the hook redirecting its standard error to standard output.

New pseudo tty instance (devpts)

For stricter isolation the container can have its own private instance of the pseudo tty.

If set, the container will have a new pseudo tty instance, making it private to the container. The value specifies the maximum number of pseudo ttys allowed for a pts instance (this limitation is not implemented yet).

Container system console

If the container is configured with a root filesystem and its inittab file is set up to use the console, you may want to specify where the output of this console goes.

Specify a path to a file where the console output will be written. The keyword 'none' will simply disable the console. This is dangerous if you have a rootfs with a console device file to which the application can write, as the messages will then end up on the host.

Console through the ttys

This option is useful if the container is configured with a root filesystem and its inittab file is set up to launch a getty on the ttys. The option specifies the number of ttys to be made available to the container. The number of gettys in the container's inittab file should not be greater than the number of ttys specified in this option, otherwise the excess getty sessions will die and respawn indefinitely, producing annoying messages on the console or in /var/log/messages.

Specify the number of ttys to make available to the container.

Console devices location

LXC consoles are provided through Unix98 PTYs created on the host and bind-mounted over the expected devices in the container. By default, they are bind-mounted over /dev/console and /dev/ttyN. This can prevent package upgrades in the guest. 
Therefore you can specify a directory location (under - /dev under which LXC will create the files and - bind-mount over them. These will then be symbolically linked to - /dev/console and /dev/ttyN. - A package upgrade can then succeed as it is able to remove and replace - the symbolic links. - - - - - - - - - Specify a directory under /dev - under which to create the container console devices. - - - - - - - /dev directory + Container configuration - By default, lxc does nothing with the container's - /dev. This allows the container's - /dev to be set up as needed in the container - rootfs. If lxc.autodev is set to 1, then after mounting the container's - rootfs LXC will mount a fresh tmpfs under /dev - (limited to 100k) and fill in a minimal set of initial devices. - This is generally required when starting a container containing - a "systemd" based "init" but may be optional at other times. Additional - devices in the containers /dev directory may be created through the - use of the hook. + The container configuration is held in the + config stored in the container's + directory. - - - - - - - - Set this to 1 to have LXC mount and populate a minimal - /dev when starting the container. - - - - - - - - Enable kmsg symlink - - Enable creating /dev/kmsg as symlink to /dev/console. This defaults to 1. - - - - - - - - - Set this to 0 to disable /dev/kmsg symlinking. - - - - - - - - Mount points - - The mount points section specifies the different places to be - mounted. These mount points will be private to the container - and won't be visible by the processes running outside of the - container. This is useful to mount /etc, /var or /home for - examples. - - - - - - - - - specify a file location in - the fstab format, containing the - mount information. 
If the rootfs is an image file or a - block device and the fstab is used to mount a point - somewhere in this rootfs, the path of the rootfs mount - point should be prefixed with the - @LXCROOTFSMOUNT@ default path or - the value of if - specified. Note that when mounting a filesystem from an - image file or block device the third field (fs_vfstype) - cannot be auto as with - - mount - 8 - - but must be explicitly specified. - - - - - - - - - - - specify a mount point corresponding to a line in the - fstab format. - - - - - - - - - - - specify which standard kernel file systems should be - automatically mounted. This may dramatically simplify - the configuration. The file systems are: - - - - - (or ): - mount /proc as read-write, but - remount /proc/sys and - /proc/sysrq-trigger read-only - for security / container isolation purposes. - - - - - : mount - /proc as read-write - - - - - (or ): - mount /sys as read-only - for security / container isolation purposes. - - - - - : mount - /sys as read-write - - - - - (or - ): - mount a tmpfs to /sys/fs/cgroup, - create directories for all hierarchies to which - the container is added, create subdirectories - there with the name of the cgroup, and bind-mount - the container's own cgroup into that directory. - The container will be able to write to its own - cgroup directory, but not the parents, since they - will be remounted read-only - - - - - : similar to - , but everything will - be mounted read-only. - - - - - : similar to - , but everything will - be mounted read-write. Note that the paths leading - up to the container's own cgroup will be writable, - but will not be a cgroup filesystem but just part - of the tmpfs of /sys/fs/cgroup - - - - - (or - ): - mount a tmpfs to /sys/fs/cgroup, - create directories for all hierarchies to which - the container is added, bind-mount the hierarchies - from the host to the container and make everything - read-only except the container's own cgroup. 
Note - that compared to , where - all paths leading up to the container's own cgroup - are just simple directories in the underlying - tmpfs, here - /sys/fs/cgroup/$hierarchy - will contain the host's full cgroup hierarchy, - albeit read-only outside the container's own cgroup. - This may leak quite a bit of information into the - container. - - - - - : similar to - , but everything - will be mounted read-only. - - - - - : similar to - , but everything - will be mounted read-write. Note that in this case, - the container may escape its own cgroup. (Note also - that if the container has CAP_SYS_ADMIN support - and can mount the cgroup filesystem itself, it may - do so anyway.) - - - - - Examples: - - - lxc.mount.auto = proc sys cgroup - lxc.mount.auto = proc:rw sys:rw cgroup-full:rw - - - - - - - - - Root file system - - The root file system of the container can be different than that - of the host system. - - - - - - - - - specify the root file system for the container. It can - be an image file, a directory or a block device. If not - specified, the container shares its root file system - with the host. - - - - - - - - - - - where to recursively bind - before pivoting. This is to ensure success of the - - pivot_root - 8 - - syscall. Any directory suffices, the default should - generally work. - - - - - - - - - - - where to pivot the original root file system under - , specified relatively to - that. The default is mnt. - It is created if necessary, and also removed after - unmounting everything from it during container setup. - - - - - - - - Control group - - The control group section contains the configuration for the - different subsystem. lxc does not check the - correctness of the subsystem name. This has the disadvantage - of not detecting configuration errors until the container is - started, but has the advantage of permitting any future - subsystem. - - - - - - - - - specify the control group value to be set. 
The - subsystem name is the literal name of the control group - subsystem. The permitted names and the syntax of their - values is not dictated by LXC, instead it depends on the - features of the Linux kernel running at the time the - container is started, - eg. - - - - - - - - Capabilities - - The capabilities can be dropped in the container if this one - is run as root. - - - - - - - - - Specify the capability to be dropped in the container. A - single line defining several capabilities with a space - separation is allowed. The format is the lower case of - the capability definition without the "CAP_" prefix, - eg. CAP_SYS_MODULE should be specified as - sys_module. See - - capabilities - 7 - , - - - - - - - - - - Specify the capability to be kept in the container. All other - capabilities will be dropped. - - - - - - - - Apparmor profile - - If lxc was compiled and installed with apparmor support, and the host - system has apparmor enabled, then the apparmor profile under which the - container should be run can be specified in the container - configuration. The default is lxc-container-default. - - - - - - - - - Specify the apparmor profile under which the container should - be run. To specify that the container should be unconfined, - use - - lxc.aa_profile = unconfined - - - - - - SELinux context - If lxc was compiled and installed with SELinux support, and the host - system has SELinux enabled, then the SELinux context under which the - container should be run can be specified in the container - configuration. The default is unconfined_t, - which means that lxc will not attempt to change contexts. + A basic configuration is generated at container creation time + with the default's recommended for the chosen template as well + as extra default keys coming from the + default.conf file. - - - - - - - - Specify the SELinux context under which the container should - be run or unconfined_t. 
For example - - lxc.se_context = unconfined_u:unconfined_r:lxc_t:s0-s0:c0.c1023 - - - - - - Seccomp configuration - A container can be started with a reduced set of available - system calls by loading a seccomp profile at startup. The - seccomp configuration file should begin with a version number - (which currently must be 1) on the first line, a policy type - (which must be 'whitelist') on the second line, followed by a - list of allowed system call numbers, one per line. + That default.conf file is either located + at @LXC_DEFAULT_CONFIG@ or for + unprivileged containers at + ~/.config/lxc/default.conf. - - - - - - - - Specify a file containing the seccomp configuration to - load before the container starts. - - - - - - - UID mappings - A container can be started in a private user namespace with - user and group id mappings. For instance, you can map userid - 0 in the container to userid 200000 on the host. The root - user in the container will be privileged in the container, - but unprivileged on the host. Normally a system container - will want a range of ids, so you would map, for instance, - user and group ids 0 through 20,000 in the container to the - ids 200,000 through 220,000. + Details about the syntax of this file can be found in: + + lxc.container.conf + 5 + - - - - - - - - Four values must be provided. First a character, either - 'u', or 'g', to specify whether user or group ids are - being mapped. Next is the first userid as seen in the - user namespace of the container. Next is the userid as - seen on the host. Finally, a range indicating the number - of consecutive ids to map. - - - - - Container hooks - - Container hooks are programs or scripts which can be executed - at various times in a container's lifetime. - - - When a container hook is executed, information is passed both - as command line arguments and through environment variables. - The arguments are: - - Container name. - Section (always 'lxc'). - The hook type (i.e. 
'clone' or 'pre-mount'). - Additional arguments In the - case of the clone hook, any extra arguments passed to - lxc-clone will appear as further arguments to the hook. - - The following environment variables are set: - - LXC_NAME: is the container's name. - LXC_ROOTFS_MOUNT: the path to the mounted root filesystem. - LXC_CONFIG_FILE: the path to the container configuration file. - LXC_SRC_NAME: in the case of the clone hook, this is the original container's name. - LXC_ROOTFS_PATH: this is the lxc.rootfs entry for the container. Note this is likely not where the mounted rootfs is to be found, use LXC_ROOTFS_MOUNT for that. - - + System configuration - Standard output from the hooks is logged at debug level. - Standard error is not logged, but can be captured by the - hook redirecting its standard error to standard output. + The system configuration is located at + @LXC_GLOBAL_CONF@ or + ~/.config/lxc/lxc.conf for unprivileged + containers. - - - - - - - - A hook to be run in the host's namespace before the - container ttys, consoles, or mounts are up. - - - - - - - - - - - - A hook to be run in the container's fs namespace but before - the rootfs has been set up. This allows for manipulation - of the rootfs, i.e. to mount an encrypted filesystem. Mounts - done in this hook will not be reflected on the host (apart from - mounts propagation), so they will be automatically cleaned up - when the container shuts down. - - - - - - - - - - - - A hook to be run in the container's namespace after - mounting has been done, but before the pivot_root. - - - - - - - - - - - - A hook to be run in the container's namespace after - mounting has been done and after any mount hooks have - run, but before the pivot_root, if - == 1. - The purpose of this hook is to assist in populating the - /dev directory of the container when using the autodev - option for systemd based containers. 
The container's /dev - directory is relative to the - ${} environment - variable available when the hook is run. - - - - - - - - - - - - A hook to be run in the container's namespace immediately - before executing the container's init. This requires the - program to be available in the container. - - - - - - - - - - - - A hook to be run in the host's namespace after the - container has been shut down. - - - - - - - - - - - - A hook to be run when the container is cloned to a new one. - See lxc-clone - 1 for more information. - - - - - - - Container hooks Environment Variables - A number of environment variables are made available to the startup - hooks to provide configuration information and assist in the - functioning of the hooks. Not all variables are valid in all - contexts. In particular, all paths are relative to the host system - and, as such, not valid during the hook. + This configuration file is used to set values such as default + lookup paths and storage backend settings for LXC. - - - - - - - - The LXC name of the container. Useful for logging messages - in common log environments. [] - - - - - - - - - - - - Host relative path to the container configuration file. This - gives the container to reference the original, top level, - configuration file for the container in order to locate any - additional configuration information not otherwise made - available. [] - - - - - - - - - - - - The path to the console output of the container if not NULL. - [] [] - - - - - - - - - - - - The path to the console log output of the container if not NULL. - [] - - - - - - - - - - - - The mount location to which the container is initially bound. - This will be the host relative path to the container rootfs - for the container instance being started and is where changes - should be made for that instance. - [] - - - - - - - - - - - - The host relative path to the container root which has been - mounted to the rootfs.mount location. 
- [] - - - - - - - - Logging - - Logging can be configured on a per-container basis. By default, - depending upon how the lxc package was compiled, container startup - is logged only at the ERROR level, and logged to a file named after - the container (with '.log' appended) either under the container path, - or under @LOGPATH@. - - - Both the default log level and the log file can be specified in the - container configuration file, overriding the default behavior. Note - that the configuration file entries can in turn be overridden by the - command line options to lxc-start. - - - - - - - - - The level at which to log. The log level is an integer in - the range of 0..8 inclusive, where a lower number means more - verbose debugging. In particular 0 = trace, 1 = debug, 2 = - info, 3 = notice, 4 = warn, 5 = error, 6 = critical, 7 = - alert, and 8 = fatal. If unspecified, the level defaults - to 5 (error), so that only errors and above are logged. - - - Note that when a script (such as either a hook script or a - network interface up or down script) is called, the script's - standard output is logged at level 1, debug. - - - - - - - - - - The file to which logging info should be written. - - - - - - - - Autostart - - The autostart options support marking which containers should be - auto-started and in what order. These options may be used by LXC tools - directly or by external tooling provided by the distributions. - - - - - - - - - - Whether the container should be auto-started. - Valid values are 0 (off) and 1 (on). - - - - - - - - - - How long to wait (in seconds) after the container is - started before starting the next one. - - - - - - - - - - An integer used to sort the containers when auto-starting - a series of containers at once. - - - - - - - - - - A multi-value key (can be used multiple times) to put the - container in a container group. Those groups can then be - used (amongst other things) to start a series of related - containers. 
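Taken together, the autostart keys above might appear in a container's configuration as in this sketch (the delay value, order value, and group names are illustrative assumptions):

```
lxc.start.auto = 1     # auto-start this container
lxc.start.delay = 5    # wait 5 seconds before starting the next container
lxc.start.order = 10   # used to sort containers when starting several at once
lxc.group = onboot     # multi-value key: the container joins both groups
lxc.group = web
```

A container configured this way is marked for auto-start, delays the next container's start by five seconds, and belongs to two groups that external tooling can use to start related containers together.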
Examples

In addition to the few examples given below, you will find some other examples of configuration files in @DOCDIR@/examples.

Network

This configuration sets up a container to use a veth pair device with one side plugged into a bridge br0 (which has been configured beforehand on the system by the administrator). The virtual network device visible in the container is renamed to eth0.

lxc.utsname = myhostname
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.hwaddr = 4a:49:43:49:79:bf
lxc.network.ipv4 = 10.2.3.5/24 10.2.3.255
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597

UID/GID mapping

This configuration will map both user and group ids in the range 0-9999 in the container to the ids 100000-109999 on the host.

lxc.id_map = u 0 100000 10000
lxc.id_map = g 0 100000 10000

Control group

This configuration will set up several control groups for the application: cpuset.cpus restricts usage to the defined cpus, cpu.shares prioritizes the control group, and devices.allow makes the specified devices usable.

lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rw
lxc.cgroup.devices.allow = b 8:0 rw

Complex configuration

This example shows a complex configuration: building a complex network stack, using the control groups, setting a new hostname, mounting some locations, and changing the root file system. 
- - lxc.utsname = complex - lxc.network.type = veth - lxc.network.flags = up - lxc.network.link = br0 - lxc.network.hwaddr = 4a:49:43:49:79:bf - lxc.network.ipv4 = 10.2.3.5/24 10.2.3.255 - lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597 - lxc.network.ipv6 = 2003:db8:1:0:214:5432:feab:3588 - lxc.network.type = macvlan - lxc.network.flags = up - lxc.network.link = eth0 - lxc.network.hwaddr = 4a:49:43:49:79:bd - lxc.network.ipv4 = 10.2.3.4/24 - lxc.network.ipv4 = 192.168.10.125/24 - lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596 - lxc.network.type = phys - lxc.network.flags = up - lxc.network.link = dummy0 - lxc.network.hwaddr = 4a:49:43:49:79:ff - lxc.network.ipv4 = 10.2.3.6/24 - lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3297 - lxc.cgroup.cpuset.cpus = 0,1 - lxc.cgroup.cpu.shares = 1234 - lxc.cgroup.devices.deny = a - lxc.cgroup.devices.allow = c 1:3 rw - lxc.cgroup.devices.allow = b 8:0 rw - lxc.mount = /etc/fstab.complex - lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0 - lxc.rootfs = /mnt/rootfs.complex - lxc.cap.drop = sys_module mknod setuid net_raw - lxc.cap.drop = mac_override - - - See Also - chroot - 1 + lxc + 1 , - - pivot_root - 8 + lxc.container.conf + 5 , - - fstab - 5 + lxc.system.conf + 5 , - - capabilities - 7 + lxc-usernet + 5 - &seealso; - Author - Daniel Lezcano daniel.lezcano@free.fr + Stéphane Graber stgraber@ubuntu.com - + + +]> + + + + @LXC_GENERATE_DATE@ + + + lxc.container.conf + 5 + + + + lxc.container.conf + + + LXC container configuration file + + + + + Description + + + The linux containers (lxc) are always created + before being used. This creation defines a set of system + resources to be virtualized / isolated when a process is using + the container. By default, the pids, sysv ipc and mount points + are virtualized and isolated. The other system resources are + shared across containers, until they are explicitly defined in + the configuration file. 
For example, if there is no network + configuration, the network will be shared between the creator of + the container and the container itself, but if the network is + specified, a new network stack is created for the container and + the container can no longer use the network of its ancestor. + + + + The configuration file defines the different system resources to + be assigned for the container. At present, the utsname, the + network, the mount points, the root file system, the user namespace, + and the control groups are supported. + + + + Each option in the configuration file has the form key + = value fitting on a single line. The '#' character means + the line is a comment. + + + + Configuration + + In order to ease administration of multiple related containers, it + is possible to have a container configuration file cause another + file to be loaded. For instance, network configuration + can be defined in one common file which is included by multiple + containers. Then, if the containers are moved to another host, + only one file may need to be updated. + + + + + + + + + + Specify the file to be included. The included file must be + in the same valid lxc configuration file format. + + + + + + + + Architecture + + Allows one to set the architecture for the container. For example, + set a 32-bit architecture for a container running 32-bit + binaries on a 64-bit host. This helps container scripts + which rely on the architecture to do some work, such as + downloading packages. + + + + + + + + + + Specify the architecture for the container. + + + Valid options are + , + , + , + + + + + + + + + + Hostname + + The utsname section defines the hostname to be set for the + container. That means the container can set its own hostname + without changing the one from the system. That makes the + hostname private for the container.
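As an illustrative sketch of the keys described above (the include path and values are invented for the example, not defaults), the include, architecture and hostname settings combine in a container configuration as:

```
# Shared settings included by several containers (hypothetical path)
lxc.include = /etc/lxc/common.conf

# Run a 32-bit userspace on a 64-bit host
lxc.arch = x86

# Hostname private to the container
lxc.utsname = mycontainer
```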
+ + + + + + + + + specify the hostname for the container + + + + + + + + Halt signal + + Allows one to specify signal name or number, sent by lxc-stop to the + container's init process to cleanly shutdown the container. Different + init systems could use different signals to perform clean shutdown + sequence. This option allows the signal to be specified in kill(1) + fashion, e.g. SIGPWR, SIGRTMIN+14, SIGRTMAX-10 or plain number. The + default signal is SIGPWR. + + + + + + + + + specify the signal used to halt the container + + + + + + + + Stop signal + + Allows one to specify signal name or number, sent by lxc-stop to forcibly + shutdown the container. This option allows signal to be specified in + kill(1) fashion, e.g. SIGKILL, SIGRTMIN+14, SIGRTMAX-10 or plain number. + The default signal is SIGKILL. + + + + + + + + + specify the signal used to stop the container + + + + + + + + Network + + The network section defines how the network is virtualized in + the container. The network virtualization acts at layer + two. In order to use the network virtualization, parameters + must be specified to define the network interfaces of the + container. Several virtual interfaces can be assigned and used + in a container even if the system has only one physical + network interface. + + + + + + + + + specify what kind of network virtualization to be used + for the container. Each time + a field is found a new + round of network configuration begins. In this way, + several network virtualization types can be specified + for the same container, as well as assigning several + network interfaces for one container. The different + virtualization types can be: + + + + will cause the container to share + the host's network namespace. This means the host + network devices are usable in the container. It also + means that if both the container and host have upstart as + init, 'halt' in a container (for instance) will shut down the + host. 
+ + + + will create only the loopback + interface. + + + + a peer network device is created + with one side assigned to the container and the other + side is attached to a bridge specified by + the . If the bridge is + not specified, then the veth pair device will be created + but not attached to any bridge. Otherwise, the bridge + has to be setup before on the + system, lxc won't handle any + configuration outside of the container. By + default lxc choose a name for the + network device belonging to the outside of the + container, this name is handled + by lxc, but if you wish to handle + this name yourself, you can tell lxc + to set a specific name with + the option. + + + + a vlan interface is linked with + the interface specified by + the and assigned to + the container. The vlan identifier is specified with the + option . + + + + a macvlan interface is linked + with the interface specified by + the and assigned to + the container. + specifies the + mode the macvlan will use to communicate between + different macvlan on the same upper device. The accepted + modes are , the device never + communicates with any other device on the same upper_dev (default), + , the new Virtual Ethernet Port + Aggregator (VEPA) mode, it assumes that the adjacent + bridge returns all frames where both source and + destination are local to the macvlan port, i.e. the + bridge is set up as a reflective relay. Broadcast + frames coming in from the upper_dev get flooded to all + macvlan interfaces in VEPA mode, local frames are not + delivered locally, or , it + provides the behavior of a simple bridge between + different macvlan interfaces on the same port. Frames + from one interface to another one get delivered directly + and are not sent out externally. Broadcast frames get + flooded to all other bridge ports and to the external + interface, but when they come back from a reflective + relay, we don't deliver them again. 
Since we know all + the MAC addresses, the macvlan bridge mode does not + require learning or STP like the bridge module does. + + + + an already existing interface + specified by the is + assigned to the container. + + + + + + + + + + + specify an action to do for the + network. + + + activates the interface. + + + + + + + + + + + specify the interface to be used for real network + traffic. + + + + + + + + + + + specify the maximum transfer unit for this interface. + + + + + + + + + + + the interface name is dynamically allocated, but if + another name is needed because the configuration files + being used by the container use a generic name, + eg. eth0, this option will rename the interface in the + container. + + + + + + + + + + + the interface mac address is dynamically allocated by + default to the virtual interface, but in some cases, + this is needed to resolve a mac address conflict or to + always have the same link-local ipv6 address. + Any "x" in address will be replaced by random value, + this allows setting hwaddr templates. + + + + + + + + + + + specify the ipv4 address to assign to the virtualized + interface. Several lines specify several ipv4 addresses. + The address is in format x.y.z.t/m, + eg. 192.168.1.123/24. The broadcast address should be + specified on the same line, right after the ipv4 + address. + + + + + + + + + + + specify the ipv4 address to use as the gateway inside the + container. The address is in format x.y.z.t, eg. + 192.168.1.123. + + Can also have the special value , + which means to take the primary address from the bridge + interface (as specified by the + option) and use that as + the gateway. is only available when + using the and + network types. + + + + + + + + + + + + specify the ipv6 address to assign to the virtualized + interface. Several lines specify several ipv6 addresses. + The address is in format x::y/m, + eg. 
2003:db8:1:0:214:1234:fe0b:3596/64 + + + + + + + + + + + specify the ipv6 address to use as the gateway inside the + container. The address is in format x::y, + eg. 2003:db8:1:0::1 + + Can also have the special value , + which means to take the primary address from the bridge + interface (as specified by the + option) and use that as + the gateway. is only available when + using the and + network types. + + + + + + + + + + + add a configuration option to specify a script to be + executed after creating and configuring the network used + from the host side. The following arguments are passed + to the script: container name and config section name + (net) Additional arguments depend on the config section + employing a script hook; the following are used by the + network system: execution context (up), network type + (empty/veth/macvlan/phys), Depending on the network + type, other arguments may be passed: + veth/macvlan/phys. And finally (host-sided) device name. + + + Standard output from the script is logged at debug level. + Standard error is not logged, but can be captured by the + hook redirecting its standard error to standard output. + + + + + + + + + + + add a configuration option to specify a script to be + executed before destroying the network used from the + host side. The following arguments are passed to the + script: container name and config section name (net) + Additional arguments depend on the config section + employing a script hook; the following are used by the + network system: execution context (down), network type + (empty/veth/macvlan/phys), Depending on the network + type, other arguments may be passed: + veth/macvlan/phys. And finally (host-sided) device name. + + + Standard output from the script is logged at debug level. + Standard error is not logged, but can be captured by the + hook redirecting its standard error to standard output. 
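Pulling the network keys above together, a veth interface attached to an existing bridge br0, with host-side up/down scripts, might look like the following sketch (the bridge name, addresses and script paths are examples, not defaults):

```
lxc.network.type = veth
lxc.network.link = br0                    # bridge must already exist on the host
lxc.network.flags = up
lxc.network.name = eth0                   # name seen inside the container
lxc.network.hwaddr = 00:16:3e:xx:xx:xx    # "x" digits are replaced by random values
lxc.network.ipv4 = 10.0.3.10/24
lxc.network.ipv4.gateway = auto           # take the gateway from br0
lxc.network.script.up = /etc/lxc/net-up.sh      # hypothetical host-side script
lxc.network.script.down = /etc/lxc/net-down.sh  # hypothetical host-side script
```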
+ + + + + + + New pseudo tty instance (devpts) + + For stricter isolation the container can have its own private + instance of the pseudo tty. + + + + + + + + + If set, the container will have a new pseudo tty + instance, making this private to it. The value specifies + the maximum number of pseudo ttys allowed for a pts + instance (this limitation is not implemented yet). + + + + + + + + Container system console + + If the container is configured with a root filesystem and the + inittab file is set up to use the console, you may want to specify + where the output of this console goes. + + + + + + + + + Specify a path to a file where the console output will + be written. The keyword 'none' will simply disable the + console. This is dangerous: if the rootfs contains a + console device file that the application can write to, the + messages will end up on the host. + + + + + + + + Console through the ttys + + This option is useful if the container is configured with a root + filesystem and the inittab file is set up to launch a getty on the + ttys. The option specifies the number of ttys to be available for + the container. The number of gettys in the inittab file of the + container should not be greater than the number of ttys specified + in this option, otherwise the excess getty sessions will die and + respawn indefinitely giving annoying messages on the console or in + /var/log/messages. + + + + + + + + + Specify the number of ttys to make available to the + container. + + + + + + + + Console devices location + + LXC consoles are provided through Unix98 PTYs created on the + host and bind-mounted over the expected devices in the container. + By default, they are bind-mounted over /dev/console + and /dev/ttyN. This can prevent package upgrades + in the guest. Therefore you can specify a directory location (under + /dev) under which LXC will create the files and + bind-mount over them. These will then be symbolically linked to + /dev/console and /dev/ttyN.
A package upgrade can then succeed as it is able to remove and replace + the symbolic links. + + + + + + + + + Specify a directory under /dev + under which to create the container console devices. + + + + + + + + /dev directory + + By default, lxc does nothing with the container's + /dev. This allows the container's + /dev to be set up as needed in the container + rootfs. If lxc.autodev is set to 1, then after mounting the container's + rootfs LXC will mount a fresh tmpfs under /dev + (limited to 100k) and fill in a minimal set of initial devices. + This is generally required when starting a container containing + a "systemd" based "init" but may be optional at other times. Additional + devices in the container's /dev directory may be created through the + use of the hook. + + + + + + + + + Set this to 1 to have LXC mount and populate a minimal + /dev when starting the container. + + + + + + + + Enable kmsg symlink + + Enable creating /dev/kmsg as a symlink to /dev/console. This defaults to 1. + + + + + + + + + Set this to 0 to disable /dev/kmsg symlinking. + + + + + + + + Mount points + + The mount points section specifies the different places to be + mounted. These mount points will be private to the container + and won't be visible to the processes running outside of the + container. This is useful to mount /etc, /var or /home, for + example. + + + + + + + + + specify a file location in + the fstab format, containing the + mount information. If the rootfs is an image file or a + block device and the fstab is used to mount a point + somewhere in this rootfs, the path of the rootfs mount + point should be prefixed with the + @LXCROOTFSMOUNT@ default path or + the value of if + specified. Note that when mounting a filesystem from an + image file or block device the third field (fs_vfstype) + cannot be auto as with + + mount + 8 + + but must be explicitly specified. + + + + + + + + + + + specify a mount point corresponding to a line in the + fstab format.
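For instance (the paths here are invented for illustration), lxc.mount.entry lines follow the usual fstab field order of source, target, filesystem type, options, dump and pass:

```
# <source>  <target>  <fstype>  <options>  <dump>  <pass>
lxc.mount.entry = /srv/data srv/data none bind 0 0
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
```

Here the target is given relative to the container's root filesystem mount point.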
+ + + + + + + + + + + specify which standard kernel file systems should be + automatically mounted. This may dramatically simplify + the configuration. The file systems are: + + + + + (or ): + mount /proc as read-write, but + remount /proc/sys and + /proc/sysrq-trigger read-only + for security / container isolation purposes. + + + + + : mount + /proc as read-write + + + + + (or ): + mount /sys as read-only + for security / container isolation purposes. + + + + + : mount + /sys as read-write + + + + + (or + ): + mount a tmpfs to /sys/fs/cgroup, + create directories for all hierarchies to which + the container is added, create subdirectories + there with the name of the cgroup, and bind-mount + the container's own cgroup into that directory. + The container will be able to write to its own + cgroup directory, but not the parents, since they + will be remounted read-only + + + + + : similar to + , but everything will + be mounted read-only. + + + + + : similar to + , but everything will + be mounted read-write. Note that the paths leading + up to the container's own cgroup will be writable, + but will not be a cgroup filesystem but just part + of the tmpfs of /sys/fs/cgroup + + + + + (or + ): + mount a tmpfs to /sys/fs/cgroup, + create directories for all hierarchies to which + the container is added, bind-mount the hierarchies + from the host to the container and make everything + read-only except the container's own cgroup. Note + that compared to , where + all paths leading up to the container's own cgroup + are just simple directories in the underlying + tmpfs, here + /sys/fs/cgroup/$hierarchy + will contain the host's full cgroup hierarchy, + albeit read-only outside the container's own cgroup. + This may leak quite a bit of information into the + container. + + + + + : similar to + , but everything + will be mounted read-only. + + + + + : similar to + , but everything + will be mounted read-write. 
Note that in this case, + the container may escape its own cgroup. (Note also + that if the container has CAP_SYS_ADMIN support + and can mount the cgroup filesystem itself, it may + do so anyway.) + + + + + Examples: + + + lxc.mount.auto = proc sys cgroup + lxc.mount.auto = proc:rw sys:rw cgroup-full:rw + + + + + + + + + Root file system + + The root file system of the container can be different than that + of the host system. + + + + + + + + + specify the root file system for the container. It can + be an image file, a directory or a block device. If not + specified, the container shares its root file system + with the host. + + + + + + + + + + + where to recursively bind + before pivoting. This is to ensure success of the + + pivot_root + 8 + + syscall. Any directory suffices, the default should + generally work. + + + + + + + + + + + where to pivot the original root file system under + , specified relatively to + that. The default is mnt. + It is created if necessary, and also removed after + unmounting everything from it during container setup. + + + + + + + + Control group + + The control group section contains the configuration for the + different subsystem. lxc does not check the + correctness of the subsystem name. This has the disadvantage + of not detecting configuration errors until the container is + started, but has the advantage of permitting any future + subsystem. + + + + + + + + + specify the control group value to be set. The + subsystem name is the literal name of the control group + subsystem. The permitted names and the syntax of their + values is not dictated by LXC, instead it depends on the + features of the Linux kernel running at the time the + container is started, + eg. + + + + + + + + Capabilities + + The capabilities can be dropped in the container if this one + is run as root. + + + + + + + + + Specify the capability to be dropped in the container. A + single line defining several capabilities with a space + separation is allowed. 
The format is the lower case of + the capability definition without the "CAP_" prefix, + eg. CAP_SYS_MODULE should be specified as + sys_module. See + + capabilities + 7 + , + + + + + + + + + + Specify the capability to be kept in the container. All other + capabilities will be dropped. + + + + + + + + Apparmor profile + + If lxc was compiled and installed with apparmor support, and the host + system has apparmor enabled, then the apparmor profile under which the + container should be run can be specified in the container + configuration. The default is lxc-container-default. + + + + + + + + + Specify the apparmor profile under which the container should + be run. To specify that the container should be unconfined, + use + + lxc.aa_profile = unconfined + + + + + + + SELinux context + + If lxc was compiled and installed with SELinux support, and the host + system has SELinux enabled, then the SELinux context under which the + container should be run can be specified in the container + configuration. The default is unconfined_t, + which means that lxc will not attempt to change contexts. + + + + + + + + + Specify the SELinux context under which the container should + be run or unconfined_t. For example + + lxc.se_context = unconfined_u:unconfined_r:lxc_t:s0-s0:c0.c1023 + + + + + + + Seccomp configuration + + A container can be started with a reduced set of available + system calls by loading a seccomp profile at startup. The + seccomp configuration file should begin with a version number + (which currently must be 1) on the first line, a policy type + (which must be 'whitelist') on the second line, followed by a + list of allowed system call numbers, one per line. + + + + + + + + + Specify a file containing the seccomp configuration to + load before the container starts. + + + + + + + + UID mappings + + A container can be started in a private user namespace with + user and group id mappings. 
For instance, you can map userid + 0 in the container to userid 200000 on the host. The root + user in the container will be privileged in the container, + but unprivileged on the host. Normally a system container + will want a range of ids, so you would map, for instance, + user and group ids 0 through 20,000 in the container to the + ids 200,000 through 220,000. + + + + + + + + + Four values must be provided. First a character, either + 'u', or 'g', to specify whether user or group ids are + being mapped. Next is the first userid as seen in the + user namespace of the container. Next is the userid as + seen on the host. Finally, a range indicating the number + of consecutive ids to map. + + + + + + + + Container hooks + + Container hooks are programs or scripts which can be executed + at various times in a container's lifetime. + + + When a container hook is executed, information is passed both + as command line arguments and through environment variables. + The arguments are: + + Container name. + Section (always 'lxc'). + The hook type (i.e. 'clone' or 'pre-mount'). + Additional arguments In the + case of the clone hook, any extra arguments passed to + lxc-clone will appear as further arguments to the hook. + + The following environment variables are set: + + LXC_NAME: is the container's name. + LXC_ROOTFS_MOUNT: the path to the mounted root filesystem. + LXC_CONFIG_FILE: the path to the container configuration file. + LXC_SRC_NAME: in the case of the clone hook, this is the original container's name. + LXC_ROOTFS_PATH: this is the lxc.rootfs entry for the container. Note this is likely not where the mounted rootfs is to be found, use LXC_ROOTFS_MOUNT for that. + + + + Standard output from the hooks is logged at debug level. + Standard error is not logged, but can be captured by the + hook redirecting its standard error to standard output. + + + + + + + + + A hook to be run in the host's namespace before the + container ttys, consoles, or mounts are up. 
+ + + + + + + + + + + + A hook to be run in the container's fs namespace but before + the rootfs has been set up. This allows for manipulation + of the rootfs, i.e. to mount an encrypted filesystem. Mounts + done in this hook will not be reflected on the host (apart from + mounts propagation), so they will be automatically cleaned up + when the container shuts down. + + + + + + + + + + + + A hook to be run in the container's namespace after + mounting has been done, but before the pivot_root. + + + + + + + + + + + + A hook to be run in the container's namespace after + mounting has been done and after any mount hooks have + run, but before the pivot_root, if + == 1. + The purpose of this hook is to assist in populating the + /dev directory of the container when using the autodev + option for systemd based containers. The container's /dev + directory is relative to the + ${} environment + variable available when the hook is run. + + + + + + + + + + + + A hook to be run in the container's namespace immediately + before executing the container's init. This requires the + program to be available in the container. + + + + + + + + + + + + A hook to be run in the host's namespace after the + container has been shut down. + + + + + + + + + + + + A hook to be run when the container is cloned to a new one. + See lxc-clone + 1 for more information. + + + + + + + + Container hooks Environment Variables + + A number of environment variables are made available to the startup + hooks to provide configuration information and assist in the + functioning of the hooks. Not all variables are valid in all + contexts. In particular, all paths are relative to the host system + and, as such, not valid during the hook. + + + + + + + + + The LXC name of the container. Useful for logging messages + in common log environments. [] + + + + + + + + + + + + Host relative path to the container configuration file. 
This + allows the container to reference the original, top level + configuration file for the container in order to locate any + additional configuration information not otherwise made + available. [] + + + + + + + + + + + + The path to the console output of the container if not NULL. + [] [] + + + + + + + + + + + + The path to the console log output of the container if not NULL. + [] + + + + + + + + + + + + The mount location to which the container is initially bound. + This will be the host relative path to the container rootfs + for the container instance being started and is where changes + should be made for that instance. + [] + + + + + + + + + + + + The host relative path to the container root which has been + mounted to the rootfs.mount location. + [] + + + + + + + + Logging + + Logging can be configured on a per-container basis. By default, + depending upon how the lxc package was compiled, container startup + is logged only at the ERROR level, and logged to a file named after + the container (with '.log' appended) either under the container path, + or under @LOGPATH@. + + + Both the default log level and the log file can be specified in the + container configuration file, overriding the default behavior. Note + that the configuration file entries can in turn be overridden by the + command line options to lxc-start. + + + + + + + + + The level at which to log. The log level is an integer in + the range of 0..8 inclusive, where a lower number means more + verbose debugging. In particular, 0 = trace, 1 = debug, 2 = + info, 3 = notice, 4 = warn, 5 = error, 6 = critical, 7 = + alert, and 8 = fatal. If unspecified, the level defaults + to 5 (error), so that only errors and above are logged. + + + Note that when a script (such as either a hook script or a + network interface up or down script) is called, the script's + standard output is logged at level 1, debug. + + + + + + + + + + The file to which logging info should be written.
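Applied to a concrete container, the logging keys above could be set as follows (the log path is illustrative):

```
# 1 = debug; lower values are more verbose (0 = trace)
lxc.loglevel = 1
lxc.logfile = /var/log/lxc/mycontainer.log
```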
+ + + + + + + + Autostart + + The autostart options support marking which containers should be + auto-started and in what order. These options may be used by LXC tools + directly or by external tooling provided by the distributions. + + + + + + + + + + Whether the container should be auto-started. + Valid values are 0 (off) and 1 (on). + + + + + + + + + + How long to wait (in seconds) after the container is + started before starting the next one. + + + + + + + + + + An integer used to sort the containers when auto-starting + a series of containers at once. + + + + + + + + + + A multi-value key (can be used multiple times) to put the + container in a container group. Those groups can then be + used (amongst other things) to start a series of related + containers. + + + + + + + + + Examples + + In addition to the few examples given below, you will find + some other examples of configuration files in @DOCDIR@/examples + + + Network + This configuration sets up a container to use a veth pair + device with one side plugged into a bridge br0 (which has been + configured beforehand on the system by the administrator). The + virtual network device visible in the container is renamed to + eth0. + + lxc.utsname = myhostname + lxc.network.type = veth + lxc.network.flags = up + lxc.network.link = br0 + lxc.network.name = eth0 + lxc.network.hwaddr = 4a:49:43:49:79:bf + lxc.network.ipv4 = 10.2.3.5/24 10.2.3.255 + lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597 + + + + + UID/GID mapping + This configuration will map both user and group ids in the + range 0-9999 in the container to the ids 100000-109999 on the host. + + + lxc.id_map = u 0 100000 10000 + lxc.id_map = g 0 100000 10000 + + + + + Control group + This configuration will set up several control groups for + the application: cpuset.cpus restricts usage to the defined cpus, + cpu.shares prioritizes the control group, and devices.allow makes + the specified devices usable.
+ + lxc.cgroup.cpuset.cpus = 0,1 + lxc.cgroup.cpu.shares = 1234 + lxc.cgroup.devices.deny = a + lxc.cgroup.devices.allow = c 1:3 rw + lxc.cgroup.devices.allow = b 8:0 rw + + + + + Complex configuration + This example shows a complex configuration: building a complex + network stack, using the control groups, setting a new hostname, + mounting some locations and changing the root file system. + + lxc.utsname = complex + lxc.network.type = veth + lxc.network.flags = up + lxc.network.link = br0 + lxc.network.hwaddr = 4a:49:43:49:79:bf + lxc.network.ipv4 = 10.2.3.5/24 10.2.3.255 + lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597 + lxc.network.ipv6 = 2003:db8:1:0:214:5432:feab:3588 + lxc.network.type = macvlan + lxc.network.flags = up + lxc.network.link = eth0 + lxc.network.hwaddr = 4a:49:43:49:79:bd + lxc.network.ipv4 = 10.2.3.4/24 + lxc.network.ipv4 = 192.168.10.125/24 + lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596 + lxc.network.type = phys + lxc.network.flags = up + lxc.network.link = dummy0 + lxc.network.hwaddr = 4a:49:43:49:79:ff + lxc.network.ipv4 = 10.2.3.6/24 + lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3297 + lxc.cgroup.cpuset.cpus = 0,1 + lxc.cgroup.cpu.shares = 1234 + lxc.cgroup.devices.deny = a + lxc.cgroup.devices.allow = c 1:3 rw + lxc.cgroup.devices.allow = b 8:0 rw + lxc.mount = /etc/fstab.complex + lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0 + lxc.rootfs = /mnt/rootfs.complex + lxc.cap.drop = sys_module mknod setuid net_raw + lxc.cap.drop = mac_override + + + + + + + See Also + + + chroot + 1 + , + + + pivot_root + 8 + , + + + fstab + 5 + , + + + capabilities + 7 + + + + + &seealso; + + + Author + Daniel Lezcano daniel.lezcano@free.fr + + + + + diff --git a/doc/lxc.system.conf b/doc/lxc.system.conf new file mode 100644 index 000000000..c895ff5b3 --- /dev/null +++ b/doc/lxc.system.conf @@ -0,0 +1,20 @@ +# LVM: volume group to use for new containers +lxc.bdev.lvm.vg = lxc + +# LVM: thin pool to use for new containers
+lxc.bdev.lvm.thin_pool = lxc + +# ZFS: Root path +lxc.bdev.zfs.root = lxc + +# Path to the containers +lxc.lxcpath = /var/lib/lxc/ + +# Path to the default configuration file +lxc.default_config = /etc/lxc/default.conf + +# Pattern to use for the cgroup path +lxc.cgroup.pattern = lxc/%n + +# List of cgroups to use +lxc.cgroup.use = diff --git a/doc/lxc.system.conf.sgml.in b/doc/lxc.system.conf.sgml.in new file mode 100644 index 000000000..a2b70ec6c --- /dev/null +++ b/doc/lxc.system.conf.sgml.in @@ -0,0 +1,206 @@ + + + +]> + + + + @LXC_GENERATE_DATE@ + + + lxc.system.conf + 5 + + + + lxc.system.conf + + + LXC system configuration file + + + + + Description + + + The system configuration is located at + @LXC_GLOBAL_CONF@ or + ~/.config/lxc/lxc.conf for unprivileged + containers. + + + + This configuration file is used to set values such as default + lookup paths and storage backend settings for LXC. + + + + Configuration paths + + + + + + + + + The location in which all containers are stored. + + + + + + + + + + The path to the default container configuration. + + + + + + + + Control Groups + + + + + + + + + Comma separated list of cgroup controllers to setup. + + + + + + + + + + Format string used to generate the cgroup path (e.g. lxc/%n). + + + + + + + + LVM + + + + + + + + + Default LVM volume group name. + + + + + + + + + + Default LVM thin pool name. + + + + + + + + ZFS + + + + + + + + + Default ZFS root name. + + + + + + + + + + + lxc + 1 + , + + lxc.container.conf + 5 + , + + lxc.system.conf + 5 + , + + lxc-usernet + 5 + + + + + &seealso; + + + Author + Stéphane Graber stgraber@ubuntu.com + + + +