Container Storage
To access host data from inside a container, you can mount a host folder or drive into the container by adding an entry to the container's fstab file, located in the container's directory on the host.
/var/www var/www none bind,create=dir
This mounts the host's /var/www folder at /var/www inside the container. You can mount the same location in multiple containers to share data between them; the ease with which this is done makes bind mounts hugely useful.
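The same bind mount can also be declared directly in the container's config file instead of the fstab file; a minimal sketch, assuming an LXC 1.x style config (the container-side path is relative to the rootfs, so it has no leading slash):

lxc.mount.entry = /var/www var/www none bind,create=dir 0 0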
You can also mount a folder from a separate, dedicated container so it acts as a portable storage volume living outside your application container. For instance:
/var/lib/lxc/myvolume/rootfs/var/lib/mysql var/lib/mysql none bind,create=dir
will mount myvolume's /var/lib/mysql folder into the mysql container, so you can separate the application container from its data. This storage container can also be shared across containers if required.
We have more in-depth coverage of container storage here, covering LVM, Btrfs and Overlayfs.
Cloning and Snapshots
lxc-clone -o p1 -n p1-clone
-o – original container name
-n – new container name
This creates a clone of your container. LXC is filesystem neutral but supports Btrfs, ZFS, LVM, Overlayfs and Aufs, and can use functions specific to those filesystems for cloning and snapshot operations.
For instance, on a Btrfs filesystem lxc-create and lxc-clone use btrfs subvolumes to create and clone containers, so you get efficient CoW clones. On LVM, lxc-clone gives you thin clones. On a normal filesystem like ext4, lxc-clone simply makes a copy of the container directory.
You can also use the -B option to specify a backingstore.
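For example, to create a container on a Btrfs backing store (a sketch; it assumes /var/lib/lxc lives on a Btrfs filesystem):

lxc-create -t download -n p1 -B btrfs -- -d ubuntu -r trusty -a amd64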
You can also make snapshots with the lxc-clone command. LXC supports snapshots with Overlayfs, which gives you a layered snapshot.
Suppose you want a temporary snapshot to work on:
lxc-clone -s -o p1 -n p1-snap -B overlayfs
-B – backingstore – specifies the backingstore filesystem
-s – snapshot – makes a snapshot
You don't need to use -B in most cases. With newer versions of LXC (>= 1.0.7) you don't need the -o and -n flags:
lxc-clone -s p1 p1-snap
You can now make any changes to the p1-snap container, and any change will be stored in the "delta0" directory of p1-snap.
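You can verify this on the host; assuming the default /var/lib/lxc location, the snapshot's directory holds the delta0 directory alongside its config and rootfs:

ls /var/lib/lxc/p1-snap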
To use the snapshot functionality you need Overlayfs, Aufs or LVM thin pools. The good news is Overlayfs was merged in mainline kernel 3.18, and you can simply append the -s flag to lxc-clone to make a snapshot.
lxc-clone -s -o p1 -n p2
This will make an Overlayfs snapshot p2, with p1 mounted as the rootfs of p2 and changes going to the delta0 directory of p2. Don't worry if it sounds confusing; it's simple, but you need to understand how Overlayfs works.
When you make a snapshot of an LVM container, LXC will make a thin clone. Please note that with CoW filesystems like Btrfs and LVM thin pools, both clones and snapshots give you CoW clones.
Cgroups
LXC uses cgroups to track and limit container resource usage. For instance, to see a container's current memory usage:

cat /sys/fs/cgroup/lxc/mycontainer/memory.usage_in_bytes
To limit memory on container p1 to 1 GB you would run:
lxc-cgroup -n p1 memory.limit_in_bytes 1G
You can check the cgroup to see if the setting is applied:

cat /sys/fs/cgroup/lxc/p1/memory.limit_in_bytes
You can also directly echo the setting to the cgroup.
echo 1G > /sys/fs/cgroup/lxc/p1/memory.limit_in_bytes
Set it in the container config file for persistence.
lxc.cgroup.memory.limit_in_bytes = 1G
To see the available cgroup settings for a container:
ls -1 /sys/fs/cgroup/lxc/containername
Suppose you have a 4-core CPU and would like to limit a container to 2 specific cores, 0 and 3. You can set it like this in the container config file.
lxc.cgroup.cpuset.cpus = 0,3
You can also set CPU shares per container. For example, if you have 4 containers and would like to allocate each a specific share of CPU time, you can give container A 500 shares, container B 250 shares, container C 100 shares and container D 50 shares. This means A will get 5 times the CPU of container C and 10 times the CPU of container D.
lxc.cgroup.cpu.shares = 512
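For the four-container example above, each container's own config file would then carry its value (a sketch using the numbers from the example):

# container A
lxc.cgroup.cpu.shares = 500
# container B
lxc.cgroup.cpu.shares = 250
# container C
lxc.cgroup.cpu.shares = 100
# container D
lxc.cgroup.cpu.shares = 50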
To limit combined memory plus swap usage to, let's say, 192M, use the memsw cgroup setting:
lxc.cgroup.memory.memsw.limit_in_bytes = 192M
LXC doesn’t directly support disk quotas but supports LVM and filesystems like btrfs and zfs that do.
Container Autostart
lxc-autostart
The lxc-autostart command is used to start containers which have autostart enabled in their config files. You can also make a group of containers and set the group to autostart.
There are a number of autostart options that can be specified in the individual container config file, listed below.
The lxc-autostart command is typically invoked by the LXC init script at boot to start containers that have autostart enabled in their config file. You can stagger autostarts for containers that depend on services provided by other containers.
lxc.start.auto = 0 (disabled) or 1 (enabled)
lxc.start.delay = 0 (delay in seconds to wait after starting the container)
lxc.start.order = 0 (priority of the container, higher value means starts earlier)
lxc.group = group1,group2,group3,… (groups the container is a member of)
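As a sketch, a database container that other containers depend on might be configured to start first and give its services 20 seconds to come up before the next container starts (the group name 'onboot' is an arbitrary example):

lxc.start.auto = 1
lxc.start.order = 100
lxc.start.delay = 20
lxc.group = onboot

You can then start just that group with lxc-autostart -g onboot, or list the containers that would be affected with lxc-autostart -L.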
Creating Containers – LXC ‘Download’ Templates
To use a 'download' template:
lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64
-d distribution
-r release
-a architecture
You can get a list of the available images by running the download template without specifying a distribution, release or architecture; it will then prompt you interactively.
lxc-create -t download -n test
When you create a container like this, its networking is set up in the container config file at /var/lib/lxc/containername/config, based on the LXC networking configuration in /etc/lxc/default.conf. If that file is empty, the container will have no networking enabled.
Usually the values below are what you need to enable the default container networking with the lxcbr0 bridge.
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
But it's much better to add these values to /etc/lxc/default.conf so lxc-create populates the networking values in the config file automatically every time it creates a container.
Note: if you add the values manually to the container config file, you need to replace the 'xx' parts of the MAC address with random hexadecimal digits. If you add them to /etc/lxc/default.conf, lxc-create is smart enough to generate values for the 'xx' parts automatically.
With that done, you can start the container:
lxc-start -n containername -d
Check if networking is enabled by using the lxc-ls -f command. Normally you should see the container name with an IP against it, like below.
NAME      STATE    IPV4       IPV6  AUTOSTART
----------------------------------------------
alpine    STOPPED  -          -     NO
p1        RUNNING  10.0.3.62  -     NO
debian32  STOPPED  -          -     NO
debian    STOPPED  -          -     NO
ubuntu    STOPPED  -          -     NO
Now you can log in to the container with ssh. The latest LXC container OS templates do not ship with ssh installed by default as earlier ones did, so you need to install it first. You can get into the container using lxc-console or lxc-attach:
lxc-console -n p1
Quick tip: to exit lxc-console use Ctrl-a q.
Once you are inside the container, use the 'passwd' command to set the root password. Then, in an Ubuntu or Debian container for instance, run apt-get update and install openssh-server. Once this is done you can power off or exit the container and log in to it with ssh.
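For example, using the IP shown by lxc-ls -f above (your container's address will differ):

ssh root@10.0.3.62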
Container Domain Names
If you run a utility like dig, you will see that the lxcbr0 interface IP, 10.0.3.1, acts like a name server for the lxc domain once configured. Suppose you have a container named nginx; run the dig command below.
dig @10.0.3.1 nginx.lxc
This should return the IP of the nginx container. Nice! But to use this you need to configure a DNS server like Dnsmasq to forward the lxc domain to the 10.0.3.1 server. Dnsmasq is already used in Ubuntu and is used in some way or another in most distributions.
It’s a simple matter of adding an entry like below to your dnsmasq config.
server=/lxc/10.0.3.1
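Then restart dnsmasq for the change to take effect; the exact command depends on your distribution, for example:

sudo service dnsmasq restart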
Once this is done, if you have an nginx or mysql container you can ping nginx.lxc or mysql.lxc and the names will resolve.
If you are using NetworkManager, create an lxc.conf file with the same line in the /etc/NetworkManager/dnsmasq.d/ folder and restart NetworkManager for the change to take effect. Now you should be able to ping containers by their name.lxc.
Note: the name that resolves is the configured hostname of the container. Usually the two are the same unless configured differently inside the container.
Use Kernel Modules in Containers
LXC Passthrough Devices
You can edit the individual container configuration to allow additional devices and then restart the container. You can see this in our LXC Gluster guide, where we use this to pass through the fuse device.
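As a sketch, allowing and bind-mounting the fuse device through the container config might look like this (10:229 is the usual fuse character device; check yours with ls -l /dev/fuse):

lxc.cgroup.devices.allow = c 10:229 rwm
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0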
For one-off things, there’s a very convenient tool called ‘lxc-device’. With it, you can simply run
lxc-device add -n p1 /dev/ttyUSB0 /dev/ttyS0
This will add (mknod) /dev/ttyS0 in the container with the same type/major/minor as /dev/ttyUSB0 and then add the matching cgroup entry allowing access from the container.
The same tool also allows moving network devices from the host to within the container.
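For example, something along these lines should move a host interface into the p1 container (a sketch; eth1 is an example name, and the interface disappears from the host while the container uses it):

lxc-device add -n p1 eth1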
Useful LXC Commands
lxc-info -n mycontainer
Name:           mycontainer
State:          RUNNING
PID:            6162
IP:             10.0.3.199
CPU use:        38.13 seconds
BlkIO use:      132.70 MiB
Memory use:     293.71 MiB
Link:           vethM2G070
 TX bytes:      1.64 MiB
 RX bytes:      632.05 KiB
 Total bytes:   2.26 MiB
lxc-monitor is used to monitor container state changes when required. See the sample output below.
lxc-monitor -n mycontainer
'mycontainer' changed state to [STOPPING]
'mycontainer' changed state to [STOPPED]
'mycontainer' changed state to [STARTING]
'mycontainer' changed state to [RUNNING]
lxc-freeze and lxc-unfreeze are used to freeze and unfreeze a container's processes.
lxc-freeze -n mycontainer
This will freeze the container's processes; you can resume them with the unfreeze command:
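lxc-unfreeze -n mycontainer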
Finally lxc-destroy is used to destroy unneeded containers.
lxc-destroy -n mycontainer
It supports btrfs and will destroy a btrfs subvolume if the container is created in one.