Incus is a modern system container and virtual machine manager forked from LXD by the Linux Containers community. After Canonical restricted LXD under a CLA, the community created Incus as a fully open-source alternative that continues active development under the Linux Containers project. Incus uses the incus CLI (replacing the old lxc client command) and manages LXC containers and QEMU virtual machines through a clean REST API.
This guide walks through installing and using Incus to run Linux containers on Rocky Linux 10 and AlmaLinux 10. We cover the COPR-based package install, storage pool and network bridge setup, firewall rules, and core container operations: launch, exec, stop, delete. If you still need LXD specifically, we include a note on that as well.
Prerequisites
- A server or VM running Rocky Linux 10 or AlmaLinux 10 with at least 2GB RAM
- Root or sudo access
- Internet connectivity for downloading packages and container images
- A dedicated disk or at least 20GB free space for the storage pool
Step 1: Update the System
Start by updating all packages to their latest versions.
sudo dnf update -y
Install a few useful utilities needed during setup.
sudo dnf -y install vim curl bash-completion wget
Step 2: Enable EPEL and CRB Repositories
Incus and its dependencies require the EPEL (Extra Packages for Enterprise Linux) repository and the CRB (CodeReady Builder) repository, which provides additional development libraries.
sudo dnf -y install epel-release
sudo dnf config-manager --set-enabled crb
Verify both repos are active.
sudo dnf repolist | grep -E 'epel|crb'
Step 3: Install Incus on Rocky Linux 10 / AlmaLinux 10
Incus RPM packages are available from the neelc/incus COPR repository, maintained by Neil Hanlon (Rocky Linux infrastructure co-lead). On EL10 systems, download the repo file directly since dnf copr enable may not work reliably.
sudo wget -O /etc/yum.repos.d/neelc-incus.repo \
https://copr.fedorainfracloud.org/coprs/neelc/incus/repo/rhel+epel-10/neelc-incus-rhel+epel-10.repo
Install the Incus server and CLI tools.
sudo dnf install -y incus incus-tools
Enable and start the Incus service.
sudo systemctl enable --now incus
Verify the service is running.
$ systemctl status incus
● incus.service - Incus - Main daemon
Loaded: loaded (/usr/lib/systemd/system/incus.service; enabled; preset: disabled)
Active: active (running)
Add Your User to the Incus Group
To run incus commands without sudo, add your user to the incus-admin group.
sudo usermod -aG incus-admin $USER
newgrp incus-admin
Note that newgrp applies the new group only to the current shell session; log out and back in for the change to take effect everywhere.
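To confirm the group change took effect, a quick check such as the following works; the incus list call should print an empty instance table rather than a socket permission error:

```shell
# Verify the current shell sees the incus-admin group
id -nG | tr ' ' '\n' | grep -x incus-admin

# Smoke test against the Incus socket; an empty table means access works
incus list
```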
Step 4: Configure Kernel Parameters for Containers
Incus containers need certain kernel tunables for proper operation. Create a sysctl configuration file.
sudo vi /etc/sysctl.d/90-incus.conf
Add the following parameters.
# Increase inotify limits for containers
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
# Increase memory map areas
vm.max_map_count = 262144
# Allow unprivileged user namespaces
user.max_user_namespaces = 3883
# Increase ARP cache for container networking
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv6.neigh.default.gc_thresh3 = 8192
# Increase kernel key limits
kernel.keys.maxkeys = 2000
kernel.keys.maxbytes = 2000000
Also increase file descriptor limits. Open the limits configuration file.
sudo vi /etc/security/limits.conf
Add these lines before the # End of file marker.
* soft nofile 1048576
* hard nofile 1048576
root soft nofile 1048576
root hard nofile 1048576
* soft memlock unlimited
* hard memlock unlimited
Apply the sysctl changes and reboot to load all settings.
sudo sysctl --system
sudo reboot
After reboot, verify the settings loaded correctly.
sysctl fs.inotify.max_user_watches
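A few more spot checks are worthwhile, since a typo in the sysctl file fails silently for the other keys. The values printed should match what was written to 90-incus.conf and limits.conf:

```shell
# Spot-check several of the tunables set in 90-incus.conf
sysctl fs.inotify.max_user_watches vm.max_map_count kernel.keys.maxkeys

# The open-file limit for the current session should now be 1048576
ulimit -n
```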
Step 5: Initialize Incus
Run the interactive initialization to configure storage, networking, and clustering options. For most single-server setups the defaults work well; just choose your preferred storage backend. If you manage Incus containers on Ubuntu, the workflow is nearly identical.
incus admin init
Below is a sample interactive session using dir as the storage backend (simplest option, no extra packages needed).
$ incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, btrfs, ceph) [default=dir]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
For production use, consider lvm or btrfs storage backends instead of dir. LVM provides better performance and snapshot support. If you choose LVM, you need to install the lvm2 package first.
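The same initialization can also be scripted. As a sketch, Incus accepts a YAML preseed on stdin; the pool and bridge definitions below mirror the default answers above, so adjust the names and the driver to taste:

```shell
# Non-interactive initialization via preseed (mirrors the defaults above)
incus admin init --preseed <<'EOF'
storage_pools:
- name: default
  driver: dir
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: incusbr0
      type: nic
EOF
```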
Step 6: Configure Firewall for Incus Containers
The Incus network bridge (incusbr0) needs to be trusted in firewalld so containers can communicate with the host and the internet.
sudo firewall-cmd --add-interface=incusbr0 --zone=trusted --permanent
sudo firewall-cmd --reload
Verify the bridge was added to the trusted zone.
sudo firewall-cmd --zone=trusted --list-interfaces
If you plan to expose the Incus API over the network (port 8443/TCP), open it explicitly.
sudo firewall-cmd --zone=public --add-port=8443/tcp --permanent
sudo firewall-cmd --reload
Step 7: Launch and Manage Incus Containers
With Incus initialized, you can launch containers from pre-built images hosted at images.linuxcontainers.org.
Launch a Container
The syntax follows this pattern.
incus launch images:<distro>/<version> <container-name>
Launch an Ubuntu 24.04 container named web01.
$ incus launch images:ubuntu/24.04 web01
Creating web01
Starting web01
Launch a Rocky Linux 10 container.
incus launch images:rockylinux/10 rocky01
Launch a Debian 13 container.
incus launch images:debian/13 debian01
List Containers
View all containers and their current state.
$ incus list
+----------+---------+-------------------+------+-----------+-----------+
|   NAME   |  STATE  |       IPV4        | IPV6 |   TYPE    | SNAPSHOTS |
+----------+---------+-------------------+------+-----------+-----------+
| debian01 | RUNNING | 10.10.10.3 (eth0) |      | CONTAINER | 0         |
+----------+---------+-------------------+------+-----------+-----------+
| rocky01  | RUNNING | 10.10.10.4 (eth0) |      | CONTAINER | 0         |
+----------+---------+-------------------+------+-----------+-----------+
| web01    | RUNNING | 10.10.10.2 (eth0) |      | CONTAINER | 0         |
+----------+---------+-------------------+------+-----------+-----------+
Execute Commands Inside a Container
Run a single command inside a running container, much like docker exec in Docker.
incus exec web01 -- apt update
Install a package inside the container.
incus exec web01 -- apt -y install nginx
Get a Shell Inside a Container
Open an interactive shell session inside a container.
incus exec web01 -- /bin/bash
You now have full root access inside the container. Run any commands, install packages, or configure services. Type exit to return to the host.
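Files can also be copied between host and container without a shell, using incus file push and incus file pull; the paths below are illustrative:

```shell
# Copy a file from the host into the container (paths are illustrative)
echo "hello from the host" > index.html
incus file push index.html web01/var/www/html/index.html

# Copy a file from the container back to the host
incus file pull web01/etc/os-release ./web01-os-release
```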
View Container Details
Get detailed information about a container including resource usage, network addresses, and creation time.
$ incus info web01
Name: web01
Status: RUNNING
Type: container
Architecture: x86_64
Created: 2026/03/21 10:15 UTC
Last Used: 2026/03/21 10:20 UTC
Resources:
Processes: 18
Disk usage:
root: 412.50MiB
Memory usage:
Memory (current): 85.23MiB
Stop, Start, and Restart Containers
Manage the container lifecycle with these commands.
incus stop web01
incus start web01
incus restart web01
Verify the container state after stopping.
$ incus list web01
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| web01 | STOPPED |      |      | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+
Delete a Container
A container must be stopped before deletion. Force-delete a running container with the --force flag.
incus stop debian01
incus delete debian01
Or force-delete without stopping first.
incus delete rocky01 --force
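Snapshots fold naturally into this lifecycle: take one before a risky change, then roll back if needed. A sketch, where the snapshot name before-upgrade is arbitrary:

```shell
# Snapshot, make a change, then roll back to the snapshot
incus snapshot create web01 before-upgrade
incus exec web01 -- apt -y upgrade
incus snapshot restore web01 before-upgrade

# List snapshots for the container
incus snapshot list web01
```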
Step 8: Storage Pool Management
List your configured storage pools.
incus storage list
View details about the default pool.
incus storage info default
Create a new LVM-backed storage pool if needed. This requires the lvm2 package and an available volume group or loop device.
sudo dnf install -y lvm2
incus storage create lvm-pool lvm size=50GiB
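Once created, the new pool can back containers directly via the --storage flag; the image choice here is illustrative:

```shell
# Launch a container on the new pool instead of the default
incus launch images:alpine/3.20 test01 --storage lvm-pool

# Confirm the pool usage and where the root disk lives
incus storage info lvm-pool
incus config show test01 --expanded | grep -A3 root
```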
Step 9: Useful Incus Commands Reference
| Command | Description |
|---|---|
| incus launch images:distro/version name | Create and start a container |
| incus list | List all containers |
| incus exec name -- command | Run a command inside a container |
| incus stop name | Stop a running container |
| incus start name | Start a stopped container |
| incus delete name | Delete a stopped container |
| incus info name | Show container details |
| incus snapshot create name snap-name | Create a snapshot |
| incus snapshot restore name snap-name | Restore from a snapshot |
| incus image list images: | Browse available images |
| incus storage list | List storage pools |
| incus network list | List network bridges |
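One common follow-on to these basics is capping container resources with incus config set; the limits.* keys are standard Incus instance options:

```shell
# Cap web01 at one vCPU and 512 MiB of RAM
incus config set web01 limits.cpu=1 limits.memory=512MiB

# Verify the applied limits
incus config show web01 | grep limits
```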
Note on LXD as an Alternative
If you specifically need LXD (the Canonical-maintained predecessor), it is available via Snap on Rocky Linux / AlmaLinux. Install it with the following commands.
sudo dnf -y install snapd
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap
sudo snap install lxd
LXD uses the lxc command instead of incus, but the workflow is almost identical. However, Incus is the recommended choice going forward – it has broader community support, more frequent releases, and no CLA restrictions. Incus 6.x is the actively developed branch with features like QCOW2 live migration, direct backup exports, and improved clustering that LXD stable does not have.
Conclusion
You now have Incus running on Rocky Linux 10 / AlmaLinux 10 with a configured storage pool, network bridge, and firewall rules. Containers are ready to launch and manage through the incus CLI. For production deployments, consider enabling TLS client certificates for remote API access, setting up automated container provisioning with Terraform, configuring resource limits per container, and implementing regular snapshot-based backups.