Configure oVirt Node on Rocky Linux 10 / AlmaLinux 10

oVirt is an open-source virtualization management platform built on KVM. It provides centralized management for virtual machines, hosts, storage, and networking across enterprise data centers. After Red Hat discontinued RHV (Red Hat Virtualization), the oVirt community took over active development – oVirt 4.5.7 shipped in January 2026 with full EL 10 support.

This guide covers configuring an oVirt hypervisor node on Rocky Linux 10 or AlmaLinux 10. We will install the oVirt host packages, configure networking, storage, and firewall rules, and register the node with oVirt Engine. If you need alternatives to oVirt, consider Proxmox VE, KubeVirt on OpenShift, or plain KVM with Cockpit.

Prerequisites

  • A server running Rocky Linux 10 or AlmaLinux 10 with root or sudo access
  • CPU with Intel VT-x or AMD-V hardware virtualization extensions enabled in BIOS/UEFI
  • Minimum 4 GB RAM (16 GB+ recommended for running multiple VMs)
  • At least 55 GiB local storage
  • 1 NIC with minimum 1 Gbps bandwidth
  • A working oVirt Engine instance – see Install oVirt Engine on CentOS Stream 9 / Rocky 9
  • DNS resolution configured (forward and reverse) for the node hostname
  • NTP time synchronization active
  • IPv6 must remain enabled (oVirt requires it)
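The hardware and IPv6 prerequisites can be pre-checked from the shell before you start. This is a minimal sketch that reads standard Linux procfs locations:

```shell
# Count logical CPUs advertising hardware virtualization:
# vmx = Intel VT-x, svm = AMD-V. Zero means the feature is absent
# or disabled in BIOS/UEFI.
vt_count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
echo "Virtualization-capable CPUs: ${vt_count:-0}"

# oVirt requires IPv6 to stay enabled; a flag value of 0 means IPv6 is on.
if [ -r /proc/sys/net/ipv6/conf/all/disable_ipv6 ]; then
    echo "IPv6 disabled flag: $(cat /proc/sys/net/ipv6/conf/all/disable_ipv6)"
else
    echo "IPv6 status file not found"
fi
```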

Step 1: Configure Hostname, DNS, and NTP

Set a fully qualified hostname on the node. Replace the domain with your own:

sudo hostnamectl set-hostname ovirt-node-01.example.com

Add DNS entries for the oVirt Engine and this node. If you do not have a DNS server, update /etc/hosts on both the engine and the node:

sudo vi /etc/hosts

Add entries like these (adjust IPs and hostnames to match your environment):

# oVirt infrastructure
192.168.1.10 ovirt-engine.example.com ovirt-engine
192.168.1.11 ovirt-node-01.example.com ovirt-node-01
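After saving, confirm that both names resolve on each machine. The hostnames below are this guide's example values; substitute your own. getent consults /etc/hosts as well as DNS, so this validates whichever source you configured:

```shell
# Print the resolved address for each name, or flag it as unresolved.
for h in ovirt-engine.example.com ovirt-node-01.example.com; do
    getent hosts "$h" || echo "UNRESOLVED: $h"
done
```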

Set the correct timezone for your region:

sudo timedatectl set-timezone Africa/Nairobi

Install and enable chrony for NTP synchronization:

sudo dnf install -y chrony
sudo systemctl enable --now chronyd

Verify time synchronization sources are active:

chronyc sources

You should see at least one NTP source with a * indicating it is the selected source:

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* time.cloudflare.com           3   6   377    34   -512us[ -612us] +/-   15ms

Confirm the date is correct:

timedatectl

Step 2: Enable oVirt Repository on Rocky Linux 10 / AlmaLinux 10

oVirt 4.5.7 supports RHEL 10 and its derivatives including Rocky Linux 10 and AlmaLinux 10. First, update the system:

sudo dnf update -y

Enable the oVirt COPR repository and install the oVirt release package. On Rocky Linux 10 / AlmaLinux 10, the subscription-manager steps required on RHEL are not needed:

sudo dnf copr enable -y ovirt/ovirt-master-snapshot rhel-10-x86_64
sudo dnf install -y ovirt-release-master epel-release

Verify the oVirt repository is enabled:

dnf repolist | grep -i ovirt

You should see oVirt repositories listed in the output, confirming they are active and ready for package installation.

Step 3: Install oVirt Host Packages

You have two installation options depending on your deployment model:

Option A – Self-Hosted Engine deployment (engine runs as a VM on one of the hosts):

sudo dnf install -y ovirt-hosted-engine-setup

Option B – Standalone host (engine is already running on a separate machine):

sudo dnf install -y ovirt-host

The ovirt-host package pulls in VDSM (Virtual Desktop and Server Manager), Cockpit with the oVirt dashboard plugin, libvirt, QEMU-KVM, and all required dependencies. VDSM is the agent that runs on every hypervisor node and handles communication with oVirt Engine.

Install the Cockpit oVirt integration for web-based host management:

sudo dnf install -y cockpit-ovirt-dashboard

Enable and start Cockpit:

sudo systemctl enable --now cockpit.socket

Verify the key services are running:

systemctl status vdsmd libvirtd cockpit.socket

All three should show active (running). If VDSM is not yet running, it will start automatically once the host is registered with oVirt Engine.
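The same check can be scripted into a one-line-per-service summary, which is handy when repeating it across several nodes (a small convenience sketch):

```shell
# "inactive" for vdsmd is normal until the host is registered
# with oVirt Engine.
for svc in vdsmd libvirtd cockpit.socket; do
    state=$(systemctl is-active "$svc" 2>/dev/null || true)
    echo "$svc: ${state:-unknown}"
done
```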

Step 4: Configure Network Bonding (Optional)

For production environments, configure network bonding to provide redundancy and increased throughput. This step is optional for lab setups with a single NIC.

Check available network interfaces:

nmcli device status

Create a bond interface using two physical NICs (replace ens3 and ens4 with your actual interface names):

sudo nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"
sudo nmcli connection add type ethernet con-name bond0-port1 ifname ens3 master bond0
sudo nmcli connection add type ethernet con-name bond0-port2 ifname ens4 master bond0

Assign an IP address to the bond interface:

sudo nmcli connection modify bond0 ipv4.addresses 192.168.1.11/24 ipv4.gateway 192.168.1.1 ipv4.dns "192.168.1.1" ipv4.method manual

Activate the bond:

sudo nmcli connection up bond0

Verify the bond is active and both ports are connected:

cat /proc/net/bonding/bond0

The output shows the bonding mode, both slave interfaces, and their link status. Both should show MII Status: up.
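To reduce that output to just the fields that matter (the mode and each port's link state), a short filter like this can help; it degrades gracefully when bond0 does not exist:

```shell
# Print the bonding mode and the MII status of each member port.
BOND=/proc/net/bonding/bond0
if [ -r "$BOND" ]; then
    grep -E 'Bonding Mode|Slave Interface|MII Status' "$BOND"
else
    echo "bond0 not configured on this host"
fi
```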

You can also manage bonding through the oVirt Engine web console after the host is registered – the Engine provides a graphical network setup interface under Host > Network Interfaces.

Step 5: Configure Storage for oVirt Node

oVirt supports multiple shared storage backends. The storage domain holds VM disk images, ISO files, and snapshots. Choose the backend that fits your infrastructure. Note that GlusterFS support has been dropped in oVirt 4.5.7 on EL 10.

Option A: NFS Storage

NFS is the simplest shared storage option. On your NFS server, export a directory for oVirt. If you need to set up an NFS server on Rocky Linux, configure it first.

On the NFS server, create and export the storage directory:

sudo mkdir -p /exports/ovirt-data
sudo chown 36:36 /exports/ovirt-data
sudo chmod 0755 /exports/ovirt-data

The ownership 36:36 corresponds to the vdsm:kvm user/group that oVirt uses to access storage. Add the export to /etc/exports:

sudo vi /etc/exports

Add the following line (adjust the network range):

/exports/ovirt-data 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

Apply the export and verify:

sudo exportfs -rav

On the oVirt node, test the NFS mount:

sudo mount -t nfs 192.168.1.20:/exports/ovirt-data /mnt
ls -la /mnt
sudo umount /mnt

The actual NFS storage domain is added through the oVirt Engine web console under Storage > Domains > New Domain.
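Since the export is accessed as UID/GID 36, it is also worth confirming on the node that those IDs map to vdsm:kvm; they exist once the oVirt host packages from Step 3 are installed, and a mismatch causes permission errors on the storage domain:

```shell
# Both lookups should return vdsm / kvm; an empty result means
# the oVirt host packages are not installed yet.
for db in passwd group; do
    getent "$db" 36 || echo "$db entry for ID 36 not present"
done
```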

Option B: iSCSI Storage

iSCSI provides block-level storage and better performance than NFS for I/O-intensive workloads. Install the iSCSI initiator on the oVirt node:

sudo dnf install -y iscsi-initiator-utils
sudo systemctl enable --now iscsid

Discover available iSCSI targets on your storage server (replace with your iSCSI target IP):

sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.30

The discovery output lists available targets with their IQN (iSCSI Qualified Name):

192.168.1.30:3260,1 iqn.2026-01.com.example:ovirt-storage

Log in to the target:

sudo iscsiadm -m node -T iqn.2026-01.com.example:ovirt-storage -p 192.168.1.30 --login

Verify the iSCSI session is established:

sudo iscsiadm -m session

The new block device appears under /dev/sd*. The iSCSI LUN is then added as a storage domain through the oVirt Engine administration portal.
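To spot which device node the LUN received, list block devices with their transport; the TRAN column reads iscsi for disks attached through an iSCSI session (lsblk is part of util-linux):

```shell
# NAME/SIZE/TYPE identify the device; TRAN shows the transport
# (sata, nvme, iscsi, ...), which singles out the new LUN.
lsblk -o NAME,SIZE,TYPE,TRAN 2>/dev/null || echo "lsblk unavailable"
```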

Option C: Local Storage

For single-host deployments or lab environments, you can use local storage. Create a directory for the local storage domain:

sudo mkdir -p /data/ovirt-local
sudo chown 36:36 /data/ovirt-local
sudo chmod 0755 /data/ovirt-local

Local storage is configured during host registration in oVirt Engine or through the storage domain setup wizard. Note that local storage domains do not support live migration between hosts.

Step 6: Configure Firewall Rules for oVirt Node

oVirt requires several ports open on the hypervisor node for communication with the Engine and other hosts. Configure firewalld on Rocky Linux 10 to allow the required traffic:

sudo firewall-cmd --permanent --add-service=cockpit
sudo firewall-cmd --permanent --add-port=54321/tcp
sudo firewall-cmd --permanent --add-port=16514/tcp
sudo firewall-cmd --permanent --add-port=49152-49215/tcp
sudo firewall-cmd --permanent --add-port=5900-6923/tcp
sudo firewall-cmd --permanent --add-port=5989/tcp
sudo firewall-cmd --reload

Here is what each port is used for:

Port          Protocol   Purpose
9090          TCP        Cockpit web console
54321         TCP        VDSM communication with oVirt Engine
16514         TCP        libvirt TLS for VM migration
49152-49215   TCP        VDSM VM migration data transfer
5900-6923     TCP        VNC/SPICE console access to VMs
5989          TCP        CIM for host monitoring (optional)
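Once the host is registered and its services are up, you can confirm that daemons actually listen on the key ports (ss is part of iproute2; an empty result simply means the host has not been registered yet):

```shell
# Look for listeners on the Cockpit, VDSM, and libvirt TLS ports.
ss -tln 2>/dev/null | grep -E ':(9090|54321|16514)\b' \
    || echo "none of the oVirt ports are listening yet"
```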

If you use NFS storage, also allow NFS traffic:

sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --reload

For iSCSI storage, allow iSCSI initiator traffic:

sudo firewall-cmd --permanent --add-port=3260/tcp
sudo firewall-cmd --reload

Verify all rules are active:

sudo firewall-cmd --list-all

Step 7: Register oVirt Node with Engine

With the host prepared, register it with your oVirt Engine. There are two methods.

Method 1: Register via oVirt Engine Web Console

Log in to the oVirt Engine administration portal at https://ovirt-engine.example.com/ovirt-engine/. Navigate to Compute > Hosts and click New.

Fill in the host details:

  • Name: ovirt-node-01
  • Hostname: ovirt-node-01.example.com (or the IP address)
  • SSH Port: 22
  • Authentication: provide root password or SSH public key
  • Cluster: select the target cluster

Click OK. The Engine connects to the host via SSH, installs required packages, configures VDSM, and activates the host. The process takes a few minutes. Watch the host status change from Installing to Up.

Method 2: Deploy Self-Hosted Engine

If this is the first host and you want to run the Engine as a VM on it (self-hosted engine), run the deployment wizard:

sudo hosted-engine --deploy

The interactive wizard prompts for:

  • Storage type and connection details (NFS path, iSCSI target, etc.)
  • Engine VM configuration (CPU, memory, MAC address)
  • Engine VM disk size (minimum 60 GB)
  • Engine FQDN and network configuration
  • Admin password for the Engine portal

The deployment creates a storage domain, provisions the Engine VM, installs oVirt Engine inside it, and configures high availability. This process takes 20-30 minutes. You can also run the deployment through Cockpit at https://ovirt-node-01.example.com:9090 using the Hosted Engine dashboard.

Step 8: Verify oVirt Node in Web Console

After registration completes, verify the host status in the oVirt Engine administration portal. Navigate to Compute > Hosts. The node should show status Up with a green checkmark.

Check from the command line that VDSM is communicating with the Engine:

sudo vdsm-client Host getStats

This returns host statistics including CPU usage, memory, and KSM status, confirming the node is operational.

Verify KVM modules are loaded on the host:

lsmod | grep kvm

You should see kvm_intel (Intel CPUs) or kvm_amd (AMD CPUs) loaded:

kvm_intel             458752  0
kvm                  1327104  1 kvm_intel
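For a more thorough hardware audit, libvirt ships a virt-host-validate tool that checks KVM, IOMMU, and cgroup support in one pass. It is typically packaged with the libvirt client tools and should be present after Step 3:

```shell
# Each check prints PASS, WARN, or FAIL; the command exits nonzero
# if any check fails, so a fallback message covers missing installs.
virt-host-validate qemu 2>/dev/null \
    || echo "virt-host-validate not available or some checks failed"
```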

You can also access the node directly through Cockpit at https://ovirt-node-01.example.com:9090 to monitor system resources, manage virtual machines, and view the oVirt dashboard.

Step 9: Add Storage Domain via oVirt Engine

With the host active, create a storage domain to hold VM disk images. In the oVirt Engine administration portal, go to Storage > Domains and click New Domain.

For an NFS storage domain:

  • Name: nfs-data
  • Domain Function: Data
  • Storage Type: NFS
  • Use Host: select the active host
  • Export Path: 192.168.1.20:/exports/ovirt-data
  • NFS Version: Auto Negotiate (or select V4.2 explicitly)

For an iSCSI storage domain, the Engine discovers and connects to targets directly – select the iSCSI type, enter the target server IP and port 3260, discover targets, and select the LUN.

After creating the data domain, the Engine automatically activates it. Older setups used a dedicated ISO domain for boot media, but modern oVirt versions let you upload ISO images directly to data domains, so a separate ISO domain is optional.

Conclusion

The oVirt hypervisor node is now configured on Rocky Linux 10 / AlmaLinux 10 and registered with oVirt Engine. You can create virtual machines, configure additional storage domains, and add more hosts to build a production cluster. For production deployments, configure HTTPS certificates, set up power management (IPMI/iLO/iDRAC) for fencing, enable VM high availability, and set up regular backups of the Engine database using engine-backup.
