“Not what we say about our blessings, but how we use them, is the true measure of our thanksgiving.”
― W. T. Purkiser
The Linux Containers project (LXC) is an open source container platform that provides a set of tools, templates, libraries, and language bindings. It delivers containers that include a complete Linux system, much like a VM, with their own file system, networking stack, and the ability to run multiple applications. LXC has a simple command line interface that improves the user experience when starting containers (RedHat, 2020). Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers. Docker was originally built on top of LXC before moving to its own runtime, now containerd.
Features of LXC
Current LXC uses the following kernel features to contain processes (Source: LinuxContainers):
- Kernel namespaces (ipc, uts, mount, pid, network and user)
- Apparmor and SELinux profiles
- Seccomp policies
- Chroots (using pivot_root)
- Kernel capabilities
- CGroups (control groups)
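The kernel namespaces in that list are visible on any modern Linux system: each process exposes its namespace memberships under /proc. A quick, read-only way to confirm your kernel provides them is:

```shell
# List the namespace types the kernel exposes for the current process;
# expect entries such as ipc, uts, mnt, pid, net and user on any
# reasonably modern kernel.
ls /proc/self/ns
```

Each entry shown is a namespace type that LXC can use to isolate a container's processes from the host.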
LXD is a next-generation system container manager. It is a management interface for LXC system containers and should not be mistaken for a container platform or a type of container itself. Its features include snapshots and image control. As you can guess, LXD extends the capabilities of LXC technology, offering a user experience similar to virtual machines but using Linux containers instead.
Features of LXD (Source: LinuxContainers)
Some of the biggest features of LXD are:
- Secure by design (unprivileged containers, resource restrictions and much more)
- Scalable (from containers on your laptop to thousands of compute nodes)
- Intuitive (simple, clear API and crisp command line experience)
- Image based (with a wide variety of Linux distributions published daily)
- Cross-host container and image transfer (including live migration with CRIU)
- Advanced resource control (cpu, memory, network I/O, block I/O, disk usage and kernel resources)
- Device passthrough (USB, GPU, unix character and block devices, NICs, disks and paths)
- Network management (bridge creation and configuration, cross-host tunnels, …)
- Storage management (support for multiple storage backends, storage pools and storage volumes)
Installation of LXC/LXD on CentOS 8
If you wish to try out LXC/LXD on your CentOS 8 server to run some applications, the following steps will get the platform ready for use as quickly as possible.
Step 1: Update and prepare Server
This is a very crucial step where we ensure that our house is in order by making sure the latest patches and packages are installed. Proceed to run the following commands to prepare your server.
sudo dnf update -y && sudo dnf upgrade -y
sudo dnf install -y vim curl nano
This next step is optional and advisable only if you are comfortable managing SELinux contexts. To set SELinux to permissive mode, run the following commands:
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
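If you would like to preview what that sed substitution does before touching the real /etc/selinux/config, you can try it on a scratch copy first (the temporary file below is only for demonstration):

```shell
# Demonstrate the SELINUX= substitution on a scratch copy instead of
# the real /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=permissive/g' "$cfg"
grep '^SELINUX=' "$cfg"   # prints: SELINUX=permissive
rm -f "$cfg"
```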
Step 2: Enable and configure EPEL repo
Run the following commands to install and enable the EPEL repository on CentOS 8, then update the server to get the latest packages from EPEL.
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf update
Step 3: Install snapd on CentOS 8
In this setup, we are going to install the LXD snap package due to its simplicity and the support that Snap packages enjoy. For that reason, we need to install snapd on our server as follows:
sudo yum install snapd -y
Once installed, the systemd unit that manages the main snap communication socket needs to be enabled:
sudo systemctl enable --now snapd.socket
To enable classic snap support, enter the following to create a symbolic link between /var/lib/snapd/snap and /snap:
sudo ln -s /var/lib/snapd/snap /snap
Either log out and back in again or restart your system to ensure snap’s paths are updated correctly. Once we have snap installed, let us continue to the next step.
Step 4: Add Kernel Parameters
There are some important kernel options required by LXD, which we are going to enable on the server. Configure them by running the following commands in your terminal as root.
$ sudo su -
# grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# grubby --args="namespace.unpriv_enable=1" --update-kernel="$(grubby --default-kernel)"
# echo "user.max_user_namespaces=3883" | tee -a /etc/sysctl.d/99-userns.conf
Since core kernel features have been altered, the server must be rebooted for these settings to take effect. Reboot your server.
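Once the server is back up, you can confirm that the user namespace limit was applied; the kernel exposes the current value under /proc (the 3883 value is what we set above, while an unmodified system prints its own default):

```shell
# Print the active user namespace limit; should read 3883 if the
# sysctl file above was picked up during boot.
cat /proc/sys/user/max_user_namespaces
```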
Step 5: Install the lxd snap on CentOS 8
Finally, after your server is back up, it is time to get our package of interest, LXD, installed from the Snap store. Snap makes this simple: we just run the command below and LXD will be installed.
$ sudo snap install --classic lxd
Step 6: Launching a test LXD container
Thus far, we have installed LXC/LXD but we have no containers yet to hold the applications we are interested in deploying. Therefore, before we can launch some containers, let us add our user account to the group lxd so it can manage LXD containers without permission restrictions.
sudo usermod -aG lxd <your-username>
newgrp lxd
Note: The newgrp command is used to change the current group ID during a login session. If the optional - flag is given, the user’s environment is reinitialized as though the user had logged in; otherwise the current environment, including the current working directory, remains unchanged. newgrp changes the current real group ID to the named group.
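After running the commands above, you can verify that your session actually picked up the new group; the lxd entry appears in the list only once the usermod change is in effect for the session:

```shell
# Print the groups the current user belongs to; look for "lxd" in
# the output after usermod + newgrp (or a fresh login).
id -nG
```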
Next, let us configure the LXD environment, or “initialize” it, by running the following command. It will take you through a few questions; please answer them according to the needs of your environment. I used default values for the blank ones.
$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, ceph) [default=btrfs]: lvm
Create a new LVM pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=9GB]: 5GB
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
The above command will create a bridge, lxdbr0. We shall add this bridge interface to the trusted zone so that connections go through; in other words, we shall allow all incoming traffic via lxdbr0. Execute the firewall commands below:
sudo firewall-cmd --add-interface=lxdbr0 --zone=trusted --permanent
sudo firewall-cmd --reload
Once lxd is initialized and your user is given permissions to launch and manage containers via the lxc command, let us create a container. The following syntax can be used as a guide:
lxc launch images:[distro]/[version]/[architecture] [your-container-name]
We are enlightened enough now, so without further ado, let us create test CentOS 8 and Ubuntu 20.04 containers by running the following commands:
$ lxc launch images:centos/8/amd64 cent8
Creating cent8
Retrieving image: Unpack: 100% (4.22GB/s)
Starting cent8
Launch Ubuntu container by running:
$ lxc launch ubuntu:20.04 ubuntu20
Creating ubuntu20
Starting ubuntu20
Once they have been launched, you can easily list your containers thus:
$ lxc list
+-------+---------+---------------------+-----------------------------------------------+-----------+-----------+
| NAME  | STATE   | IPV4                | IPV6                                          | TYPE      | SNAPSHOTS |
+-------+---------+---------------------+-----------------------------------------------+-----------+-----------+
| cent8 | RUNNING | 10.80.35.177 (eth0) | fd42:b3a2:efa8:5aa5:216:3eff:fe1d:38c3 (eth0) | CONTAINER | 0         |
+-------+---------+---------------------+-----------------------------------------------+-----------+-----------+
You can also stop, start, restart, and delete your containers, as well as check more info about them, as shown below, where <container> is the name of the container as shown by the lxc list command.
lxc start <container>
lxc stop <container>
lxc restart <container>
lxc delete <container>
lxc stop ubuntu20
lxc delete ubuntu20
Note that you have to stop a running container before you can delete it.
Get info about a container using the info command:
$ lxc info <container>
##For example
$ lxc info cent8
Sample brilliant output:
Name: cent8
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/11/07 11:25 UTC
Status: Running
Type: container
Profiles: default
Pid: 2724
Ips:
  eth0: inet    10.80.35.177    veth975e84ff
  eth0: inet6   fd42:b3a2:efa8:5aa5:216:3eff:fe1d:38c3  veth975e84ff
  eth0: inet6   fe80::216:3eff:fe1d:38c3        veth975e84ff
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 13
  Disk usage:
    root: 737.98MB
  CPU usage:
    CPU usage (in seconds): 1
  Memory usage:
    Memory (current): 93.32MB
    Memory (peak): 98.56MB
  Network usage:
    eth0:
      Bytes received: 3.57kB
      Bytes sent: 2.22kB
      Packets received: 30
      Packets sent: 22
    lo:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
Step 7: Execute ad hoc commands in containers
Just like the way you can “exec” into a Docker container, you can also run commands inside LXD containers. The syntax is as follows:
$ lxc exec <container-name> <command>
Examples of executing commands are as follows:
$ lxc exec cent8 -- yum -y update
CentOS-8 - AppStream    538 kB/s | 5.8 MB  00:11
CentOS-8 - Base         619 kB/s | 2.2 MB  00:03
CentOS-8 - Extras       8.1 kB/s | 8.1 kB  00:01
Dependencies resolved.
Nothing to do.
Complete!
Let us install Apache in the container
$ lxc exec cent8 -- yum -y install httpd
Last metadata expiration check: 0:00:41 ago on Sat Nov 7 12:56:38 2020.
Dependencies resolved.
======================================================================================
 Package    Architecture  Version                                Repository    Size
======================================================================================
Installing:
 httpd      x86_64        2.4.37-21.module_el8.2.0+494+1df74eae  AppStream    1.7 M
Installing dependencies:
 apr        x86_64        1.6.3-9.el8                            AppStream    125 k
 apr-util   x86_64        1.6.1-6.el8                            AppStream    105 k
After installation, we can log into the container, create a sample page, start the web-server and check its status
$ lxc exec cent8 -- /bin/bash
##We are now in the container
[root@cent8 ~]# systemctl start httpd
[root@cent8 ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-11-07 12:58:09 UTC; 5s ago
     Docs: man:httpd.service(8)
 Main PID: 175 (httpd)
   Status: "Started, listening on: port 80"
    Tasks: 213 (limit: 11069)
   Memory: 27.6M
   CGroup: /system.slice/httpd.service
           ├─175 /usr/sbin/httpd -DFOREGROUND
           ├─176 /usr/sbin/httpd -DFOREGROUND
           ├─177 /usr/sbin/httpd -DFOREGROUND
           ├─178 /usr/sbin/httpd -DFOREGROUND
           └─179 /usr/sbin/httpd -DFOREGROUND
Create a sample page in the container to be served by Apache for demonstration
[root@cent8 ~]# vi /var/www/html/index.html

<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
    <title>Spoon-Knife</title>
    <LINK href="styles.css" rel="stylesheet" type="text/css">
  </head>
  <body>
    <img src="forkit.gif" id="octocat" alt="" />
    <h2> About SELinux </h2><br>
    <p>
      SELinux gives you the ability to limit the privileges associated with executing processes and reduce the damage that could result from system and applications vulnerabilities exploitation. For this reason, it is recommended to keep SELinux in enforcing mode unless you have a good reason to disable it.
    </p>
    <h2> Modes</h2><br>
    <p>
      The other available mode for running SELinux in enabled state is Permissive. In this mode, SELinux policy is not enforced and access is not denied but denials are logged for actions that would have been denied if running in enforcing mode.
    </p>
  </body>
</html>
Then restart Apache inside the container and exit.
[root@cent8 ~]# systemctl restart httpd
Step 8: Accessing your applications inside containers externally
Well, now that you have deployed your application in a given container (for example, Apache above), how exactly is your target audience going to access what you are hosting from outside? You can either use firewall rules or, more elegantly, deploy a reverse proxy to route traffic to your applications.
Using a reverse proxy server such as Nginx
Install Nginx web server on your CentOS 8 host system:
sudo yum -y install vim nginx
Set up the Nginx HTTP proxy for the service.
Create a new configuration file.
sudo nano /etc/nginx/conf.d/app1.conf
Modify this configuration snippet to fit your setup. Note that Nginx will be listening on port 9090 and then redirect the traffic to the container having Apache running at port 80.
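The original snippet was not reproduced here, so below is a minimal sketch of what such a proxy configuration could look like. The container IP (10.80.35.177, taken from the lxc list output above) and the server_name are assumptions; substitute your own values.

```nginx
# Hypothetical /etc/nginx/conf.d/app1.conf sketch: Nginx listens on
# port 9090 on the host and forwards traffic to Apache (port 80)
# inside the container. Replace the IP with your container's address
# from `lxc list`, and server_name with your own DNS record.
server {
    listen 9090;
    server_name app1.example.com;

    location / {
        proxy_pass http://10.80.35.177:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```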
A valid DNS record is required for external (Public) access of your application.
Check your configuration syntax:
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If the check passes, restart the nginx service.
sudo systemctl restart nginx
Allow port 9090 on the firewall
sudo firewall-cmd --permanent --add-port=9090/tcp
sudo firewall-cmd --reload
We are now ready to access our application. Open your favorite browser and point it to the FQDN or IP address and port of the Nginx proxy we just finished configuring: http://<ip-or-fqdn>:9090. You should see the sample page we created earlier.
We have finally managed to install, manage and administer LXD/LXC containers, together with hosting a simple application in one of them. We hope the guide was as informative as you expected and that everything worked well for you. Seeing you on the blog is enough for us to return our appreciation for visiting and for the mad support. Check below for other beautiful guides.