Welcome to our guide on OpenNebula KVM Node Installation on Rocky Linux 10 / AlmaLinux 10. You should have a working OpenNebula Front-end server before you proceed, since this guide concentrates on adding a KVM hypervisor node to be managed by that Front-end.
For Ubuntu, check: Install and configure OpenNebula KVM Node on Ubuntu
KVM (Kernel-based Virtual Machine) is the hypervisor for OpenNebula’s Open Cloud Architecture. KVM is a complete virtualization system for Linux. It offers full virtualization, where each Virtual Machine interacts with its own virtualized hardware.
Requirements
- A working OpenNebula Front-end server
- Rocky Linux 10 / AlmaLinux 10 installed on the KVM node host
- CPU with Intel VT or AMD-V features for hardware virtualization support
- Network connectivity between the Front-end and KVM node
- Root or sudo access on the KVM node
Note: Rocky Linux 10 / AlmaLinux 10 require x86-64-v3 (Intel Haswell / AMD Excavator or newer) as the minimum CPU architecture. Older processors are no longer supported.
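Because the x86-64-v3 requirement can be a surprise on older hardware, a quick sanity check is to look for a few of the CPU flags the level requires. This is a rough heuristic rather than a full microarchitecture-level test; the flag subset below was chosen for illustration:

```shell
# Rough x86-64-v3 heuristic: the level requires AVX2, BMI1/BMI2, FMA and
# MOVBE (among other flags); if any are missing the CPU predates the level.
check_x86_64_v3() {
  local flag missing=""
  for flag in avx2 bmi1 bmi2 fma movbe; do
    grep -qw "$flag" /proc/cpuinfo || missing="$missing $flag"
  done
  if [ -z "$missing" ]; then
    echo "CPU looks x86-64-v3 capable"
  else
    echo "missing flags:$missing"
  fi
}

check_x86_64_v3
```

If flags are reported missing, the host cannot boot Rocky Linux 10 / AlmaLinux 10 and you should stay on an EL9-based release for that machine.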
Step 1: Preparation
Set SELinux to Permissive Mode
OpenNebula doesn’t work well with SELinux in enforcing mode. Set it to permissive:
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
cat /etc/selinux/config
Verify the change:
getenforce
Rocky Linux 10 / AlmaLinux 10 include updated SELinux policies with new libvirt service types and inverted file context equivalency (/var/run = /run). If you prefer to keep SELinux enforcing, you may need to create custom policies for OpenNebula – but permissive mode is the recommended approach for OpenNebula deployments.
Set Hostname:
sudo hostnamectl set-hostname kvm-node01.example.com
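If DNS does not resolve the Front-end and node hostnames, add entries to /etc/hosts on every machine so they can reach each other by name. The addresses and names below are placeholders; adjust them to your environment:

```
# /etc/hosts — example entries (addresses and names are illustrative)
192.168.1.10   frontend.example.com    frontend
192.168.1.100  kvm-node01.example.com  kvm-node01
```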
Enable Required Repositories
Rocky Linux 10 / AlmaLinux 10 use dnf as the default package manager. Add the EPEL and CRB (CodeReady Builder) repositories:
sudo dnf install -y epel-release
Enable the CRB repository (some OpenNebula dependencies require it):
repo=$(dnf repolist --disabled | grep -i -e powertools -e crb | awk '{print $1}' | head -1)
sudo dnf config-manager --set-enabled "$repo" && sudo dnf makecache
Add OpenNebula Repository
Check the OpenNebula Downloads page for the latest version. As of this writing, OpenNebula 7.1 is the current stable release, and the repository definitions below point at it.
Community Edition:
sudo tee /etc/yum.repos.d/opennebula.repo > /dev/null << "EOT"
[opennebula]
name=OpenNebula Community Edition
baseurl=https://downloads.opennebula.io/repo/7.1/AlmaLinux/$releasever/$basearch
enabled=1
gpgkey=https://downloads.opennebula.io/repo/repo2.key
gpgcheck=1
repo_gpgcheck=1
EOT
Enterprise Edition (requires a subscription token):
sudo tee /etc/yum.repos.d/opennebula.repo > /dev/null << "EOT"
[opennebula]
name=OpenNebula Enterprise Edition
baseurl=https://<token>@enterprise.opennebula.io/repo/7.1/AlmaLinux/$releasever/$basearch
enabled=1
gpgkey=https://downloads.opennebula.io/repo/repo2.key
gpgcheck=1
repo_gpgcheck=1
EOT
Replace <token> with your customer-specific access credentials.
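A common slip is to forget the token substitution. The small helper below is an illustrative check, not part of OpenNebula; it simply warns if the placeholder is still in the file:

```shell
# check_repo_token FILE — warn if the <token> placeholder was left in the
# Enterprise repo definition (illustrative helper, not part of OpenNebula)
check_repo_token() {
  if [ -r "$1" ] && grep -q '<token>' "$1"; then
    echo "WARNING: placeholder <token> still present in $1"
  fi
}

check_repo_token /etc/yum.repos.d/opennebula.repo
```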
Refresh the package cache:
sudo dnf makecache
System Update
Update all existing packages and reboot:
sudo dnf -y update
sudo systemctl reboot
Step 2: Install OpenNebula KVM Node
After the reboot, install the KVM node package and restart libvirt:
sudo dnf install -y opennebula-node-kvm
Check the installed package details:
rpm -qi opennebula-node-kvm
Restart libvirt to apply the OpenNebula-provided configuration:
sudo systemctl restart libvirtd
sudo systemctl enable libvirtd
Configure libvirt for oneadmin
Ensure the following lines are present in /etc/libvirt/libvirtd.conf so the oneadmin user can interact with KVM:
unix_sock_group = "oneadmin"
unix_sock_rw_perms = "0777"
Restart libvirtd whenever you make changes:
sudo systemctl restart libvirtd
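To confirm the two settings are actually active (present and uncommented) rather than hidden behind a comment, you can grep for them. The helper below is an illustrative sketch, not an OpenNebula tool:

```shell
# check_libvirt_conf FILE — print any required oneadmin socket setting that
# is not set (uncommented) in FILE; prints nothing when both are present
check_libvirt_conf() {
  local file=$1 key
  for key in unix_sock_group unix_sock_rw_perms; do
    grep -qE "^${key}[[:space:]]*=" "$file" || echo "missing: ${key}"
  done
}

if [ -r /etc/libvirt/libvirtd.conf ]; then
  check_libvirt_conf /etc/libvirt/libvirtd.conf
fi
```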
Verify KVM is working:
virsh -c qemu:///system list
You should see an empty list with no errors.
Step 3: Configure Passwordless SSH
The OpenNebula Front-end connects to the hypervisor hosts over SSH. The public key of the oneadmin user must be present in /var/lib/one/.ssh/authorized_keys on every machine (Front-end and nodes alike).
When the package was installed on the Front-end, an SSH key was generated and the authorized_keys file populated. We need to create a known_hosts file and sync it to the nodes as well.
On the Front-end
Create the known_hosts file as user oneadmin with all the node names and the Front-end name as parameters:
sudo su - oneadmin
ssh-keyscan <frontend> <node1> <node2> >> /var/lib/one/.ssh/known_hosts
Copy SSH Keys to Nodes
First, set a temporary password for oneadmin on all nodes:
# On each KVM node
sudo passwd oneadmin
Then from the Front-end, copy the .ssh directory:
# On the Front-end as oneadmin
scp -rp /var/lib/one/.ssh <node1>:/var/lib/one/
scp -rp /var/lib/one/.ssh <node2>:/var/lib/one/
Test SSH Connectivity
Test from the Front-end — you should not be prompted for a password:
ssh <node1>
ssh <frontend>
Also verify node-to-node and node-to-frontend SSH works without password prompts, as this is needed for live migration operations.
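To check every host in one pass, a small loop with BatchMode=yes makes ssh fail fast instead of hanging on a password prompt. This helper and the example hostnames are illustrative placeholders; substitute your own names:

```shell
# check_ssh_batch HOST... — try a non-interactive SSH to each host;
# BatchMode=yes fails fast instead of prompting for a password
check_ssh_batch() {
  local host
  for host in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
      echo "ok:   $host"
    else
      echo "FAIL: $host"
    fi
  done
}

# Example (placeholder hostnames) — run as oneadmin on the Front-end:
# check_ssh_batch frontend kvm-node01 kvm-node02
```

Any FAIL line means key distribution or known_hosts is incomplete for that host.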
Step 4: Configure Host Networking
The OpenNebula Front-end daemons need network connectivity to the hosts in order to manage and monitor them and to transfer image files. A dedicated management network is highly recommended.
Important: Rocky Linux 10 / AlmaLinux 10 Networking Changes
Rocky Linux 10 / AlmaLinux 10 have removed legacy ifcfg-rh network scripts entirely. You must use NetworkManager tools for all network configuration:
- nmcli – command-line interface
- nmtui – text-based user interface
- nmstate – declarative network configuration
Network configuration files are now stored in /etc/NetworkManager/system-connections/ (keyfile format). The old /etc/sysconfig/network-scripts/ directory and ifup/ifdown commands are no longer available.
Supported Networking Modes
OpenNebula supports four different networking modes:
- Bridged – The VM is directly attached to an existing bridge on the hypervisor. Supports security groups and network isolation.
- VLAN – Virtual Networks implemented through 802.1Q VLAN tagging.
- VXLAN – Virtual Networks using the VXLAN protocol with UDP encapsulation and IP multicast.
- Open vSwitch – Similar to VLAN mode but using Open vSwitch instead of a Linux bridge.
Example: Creating a Bridge with NetworkManager
For bridged networking (the most common setup), create a bridge using nmcli:
# Create the bridge
sudo nmcli connection add type bridge con-name br0 ifname br0
# Add the physical interface as a bridge slave
sudo nmcli connection add type ethernet con-name br0-port1 ifname eth0 master br0
# Configure IP addressing on the bridge (static example)
sudo nmcli connection modify br0 ipv4.addresses "192.168.1.100/24"
sudo nmcli connection modify br0 ipv4.gateway "192.168.1.1"
sudo nmcli connection modify br0 ipv4.dns "8.8.8.8,8.8.4.4"
sudo nmcli connection modify br0 ipv4.method manual
# Disable STP if not needed
sudo nmcli connection modify br0 bridge.stp no
# Bring up the bridge
sudo nmcli connection up br0
Replace eth0 with your actual network interface name (check with nmcli device status).
For DHCP:
sudo nmcli connection modify br0 ipv4.method auto
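NetworkManager persists the connection as a keyfile under /etc/NetworkManager/system-connections/ (e.g. br0.nmconnection). For the static bridge above it looks roughly like the fragment below; exact keys can vary between NetworkManager versions, so treat this as an illustration rather than a file to copy verbatim:

```
[connection]
id=br0
type=bridge
interface-name=br0

[bridge]
stp=false

[ipv4]
method=manual
address1=192.168.1.100/24,192.168.1.1
dns=8.8.8.8;8.8.4.4;
```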
NIC teaming has been removed in Rocky Linux 10 / AlmaLinux 10. Use network bonding instead for link aggregation.
For storage configurations, visit the Open Cloud Storage documentation.
Step 5: Configure Firewall (Optional)
If firewalld is running, trust traffic from the Front-end and allow SSH. If you plan to use live migration, also open libvirt's default migration port range (49152-49215/tcp):
sudo firewall-cmd --permanent --zone=trusted --add-source=<frontend-ip>/32
sudo firewall-cmd --permanent --add-port=22/tcp
sudo firewall-cmd --permanent --add-port=49152-49215/tcp
sudo firewall-cmd --reload
Alternatively, for lab/test environments:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
Step 6: Adding a Host to OpenNebula
The final step is registering the KVM node on the OpenNebula Front-end so that OpenNebula can launch VMs on it. This step can be done via the CLI or through Sunstone (the graphical user interface). Follow just one method — they accomplish the same thing.
Adding a Host through Sunstone
- Open Sunstone → Infrastructure → Hosts
- Click the + button
- Select KVM for the type field
- Enter the FQDN or IP address of the node in the Hostname field
- Go back to the hosts section and confirm it’s in ON state
If the host turns to err state instead of on, check /var/log/one/oned.log. Chances are it’s a problem with SSH.
Adding a Host through the CLI
To add a node to the cloud, run this command as oneadmin on the Front-end:
$ onehost create <node01> -i kvm -v kvm
$ onehost list
  ID NAME      CLUSTER   RVM  ALLOCATED_CPU    ALLOCATED_MEM    STAT
   0 node01    default     0  -                -                init
# After a minute or two
$ onehost list
  ID NAME      CLUSTER   RVM  ALLOCATED_CPU    ALLOCATED_MEM    STAT
   0 node01    default     0  0 / 1600 (0%)    0K / 94.2G (0%)  on
The host should transition to on state within a couple of minutes.
Troubleshooting
Host stuck in err state
Most commonly caused by SSH issues. Verify:
# From the Front-end as oneadmin
ssh <node-ip> hostname
If prompted for a password, the SSH key distribution (Step 3) needs to be revisited.
Check logs
# On the Front-end
grep -i <node-hostname> /var/log/one/oned.log
# On the KVM node
sudo journalctl -u libvirtd
Verify KVM support
grep -Ec '(vmx|svm)' /proc/cpuinfo
lsmod | grep kvm
libvirt modular daemons conflict
If libvirt isn’t responding, ensure the legacy monolithic daemon is in use:
sudo systemctl stop virtqemud.socket virtqemud.service
sudo systemctl disable virtqemud.socket virtqemud.service
sudo systemctl enable --now libvirtd
This is the end of the OpenNebula KVM Node Installation on Rocky Linux 10 / AlmaLinux 10 guide. In the next guide, we cover virtual network and storage configurations.
Related Guides
- Configure NFS Filesystem as OpenNebula Datastores
- Create and Use Bridged Networks in OpenNebula VMs
- OpenNebula Official Documentation