
Install KVM on Ubuntu 24.04 / 22.04: Complete Guide


KVM is the hypervisor that powers most of the Linux cloud infrastructure you interact with daily. AWS, Google Cloud, and DigitalOcean all run KVM under the hood because it’s a Type 1 hypervisor baked directly into the Linux kernel, not a bolt-on application. That means near-native performance for virtual machines with no license fees.


This guide walks through a complete KVM installation on Ubuntu 24.04 and 22.04 LTS, from verifying hardware support to creating your first VM. Every command and output shown here was captured on a real system. If you’re looking for RHEL-family coverage, see the KVM installation guide for Rocky Linux and AlmaLinux.

Tested April 2026 on Ubuntu 24.04.4 LTS (kernel 6.8.0-101, QEMU 8.2.2, libvirt 10.0.0) and verified on Ubuntu 22.04 LTS (QEMU 6.2, libvirt 8.0.0). Nested virtualization confirmed working.

Ubuntu 24.04 vs 22.04: KVM version differences

Both Ubuntu LTS releases ship KVM from their default repositories. The commands are identical, but the package versions differ significantly. Ubuntu 24.04 ships much newer QEMU and libvirt builds.

Component        Ubuntu 24.04 LTS                          Ubuntu 22.04 LTS
--------------   ---------------------------------------   ------------------------------------
Kernel           6.8.0                                     5.15 (HWE: 6.5+)
QEMU             8.2.2                                     6.2.0
libvirt          10.0.0                                    8.0.0
virt-manager     4.1.0                                     4.0.0
Network config   Netplan                                   Netplan
Package names    Identical                                 Identical
OVMF path        /usr/share/OVMF/OVMF_CODE_4M.secboot.fd   /usr/share/OVMF/OVMF_CODE.secboot.fd
Support until    April 2029                                April 2027

Every command in this guide works on both versions. Where a path or version differs between the two, both variants are shown.

What KVM Gives You

KVM (Kernel-based Virtual Machine) turns your Linux kernel into a bare-metal hypervisor. Unlike VirtualBox or VMware Workstation, which sit on top of the OS as applications, KVM operates at the kernel level. The practical benefits:

  • Near-native performance because the hypervisor IS the kernel, not a layer above it
  • Live migration of running VMs between physical hosts (production clusters use this daily)
  • Snapshots and cloning for quick rollback and test environment provisioning
  • GPU passthrough for ML workloads or Windows gaming VMs
  • No licensing cost, ever. It ships with every Linux kernel since 2.6.20
  • libvirt API gives you a unified management layer that tools like Proxmox, oVirt, and OpenStack all build on

Check Hardware Virtualization Support

KVM requires hardware virtualization extensions in your CPU. Intel calls this VT-x, AMD calls it AMD-V (SVM). Most CPUs shipped since 2010 support it, but some BIOS/UEFI configurations ship with it disabled. Check whether your CPU has the extensions enabled:

grep -cE 'vmx|svm' /proc/cpuinfo

Any number greater than zero means virtualization extensions are present. The number itself tells you how many CPU threads support it:

8

If the output is 0, virtualization is either unsupported or disabled in BIOS. Reboot into BIOS/UEFI setup and look for “Intel Virtualization Technology” or “SVM Mode” under the CPU or Advanced settings. Enable it, save, and reboot.
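If you want the same check in a script with a readable verdict, a minimal sketch (the vcount variable name is our own):

```shell
# Count CPU threads advertising hardware virtualization flags.
# vmx = Intel VT-x, svm = AMD-V. grep -c exits non-zero when the count
# is zero, so || true keeps the script going either way.
vcount=$(grep -cE 'vmx|svm' /proc/cpuinfo || true)
if [ "$vcount" -gt 0 ]; then
  echo "Virtualization extensions present on $vcount threads"
else
  echo "No virtualization extensions detected - check BIOS/UEFI"
fi
```

This is handy in provisioning scripts that should bail out early on hosts without VT-x/AMD-V.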

Install KVM and libvirt

Install the full virtualization stack in one shot:

sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virtinst virt-manager cpu-checker

Here’s what each package provides:

  • qemu-kvm: the QEMU emulator with KVM acceleration; the core that runs your virtual machines
  • libvirt-daemon-system: the libvirtd daemon that manages VMs, networks, and storage pools
  • libvirt-clients: command-line tools including virsh, the primary CLI for VM management
  • bridge-utils: utilities for creating Linux bridge interfaces (needed for bridged networking)
  • virtinst: provides virt-install for creating VMs from the command line
  • virt-manager: GTK-based graphical interface for managing VMs (connects to local or remote libvirt)
  • cpu-checker: provides kvm-ok to verify KVM acceleration availability

Verify the Installation

Start by confirming KVM acceleration is available with kvm-ok:

kvm-ok

You should see both lines confirming KVM is ready:

INFO: /dev/kvm exists
KVM acceleration can be used

Verify that the libvirt daemon is active:

systemctl status libvirtd

The output confirms the daemon started successfully and is enabled at boot:

● libvirtd.service - libvirt legacy monolithic daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; preset: enabled)
     Active: active (running) since Sun 2026-04-12 20:16:47 UTC; 30s ago

Check the QEMU version installed on your system:

qemu-system-x86_64 --version

Ubuntu 24.04 ships with QEMU 8.2.2 (Ubuntu 22.04 has QEMU 6.2):

QEMU emulator version 8.2.2 (Debian 1:8.2.2+ds-0ubuntu1.16)

Now check the libvirt version and confirm it’s talking to the QEMU hypervisor:

virsh version

All components should report their versions:

Compiled against library: libvirt 10.0.0
Using library: libvirt 10.0.0
Using API: QEMU 10.0.0
Running hypervisor: QEMU 8.2.2
Running against daemon: 10.0.0

Confirm the KVM kernel modules are loaded:

lsmod | grep kvm

On Intel systems you’ll see kvm_intel. AMD systems show kvm_amd instead:

kvm_intel             487424  0
kvm                  1409024  1 kvm_intel

Add Your User to the libvirt and kvm Groups

By default, only root can manage VMs. Add your user to both the libvirt and kvm groups so you can run virsh and virt-manager without sudo:

sudo usermod -aG libvirt,kvm $USER

Apply the group changes to your current session without logging out:

newgrp libvirt

Verify your user is now in both groups:

groups $USER

The output should include both kvm and libvirt:

ubuntu : ubuntu adm cdrom sudo dip kvm lxd libvirt

Enable the vhost_net Kernel Module

The vhost_net module offloads network packet processing from QEMU userspace to the kernel. This reduces CPU usage and improves network throughput for VMs. Load it immediately:

sudo modprobe vhost_net

Confirm it’s loaded:

lsmod | grep vhost_net

You should see the module in the output:

vhost_net              32768  0

Make it persistent across reboots by adding it to /etc/modules:

echo "vhost_net" | sudo tee -a /etc/modules

Without this module, your VMs still work but network-heavy workloads will consume more host CPU. In production, you always want it enabled.

Configure Networking

KVM virtual machines need network connectivity. The two most common approaches are NAT (default, isolated) and bridged (VMs get real LAN IPs). Which one you choose depends on whether VMs need to be reachable from other machines on your network.

NAT (Default)

When libvirt installs, it creates a virtual network called default with a NAT bridge at virbr0. VMs on this network get IPs in the 192.168.122.0/24 range and can access the internet through the host, but external machines cannot reach the VMs directly. This is ideal for development, testing, and lab environments.

Check the default network status:

virsh net-list --all

The default network should show as active and set to autostart:

 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

If it shows inactive, start it and enable autostart:

virsh net-start default
virsh net-autostart default

View the full network configuration to see the DHCP range and NAT settings:

virsh net-dumpxml default

The XML shows NAT forwarding with DHCP serving addresses from 192.168.122.2 through 192.168.122.254:

<network>
  <name>default</name>
  <forward mode='nat'>
    <nat><port start='1024' end='65535'/></nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
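If a script needs values out of this XML, sed is enough; a sketch that pulls the DHCP range from a dump like the one above (the heredoc stands in for `virsh net-dumpxml default` so the example is self-contained; in practice, pipe the virsh output in):

```shell
# Extract the DHCP range from libvirt network XML.
# The heredoc mimics `virsh net-dumpxml default` output.
xml=$(cat <<'EOF'
<network>
  <name>default</name>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
EOF
)
# Capture the start and end attributes of the <range> element.
range=$(printf '%s\n' "$xml" | sed -n "s/.*<range start='\([^']*\)' end='\([^']*\)'.*/\1 - \2/p")
echo "DHCP range: $range"
```

For anything more involved than a single attribute, a proper XML tool such as xmllint is the safer choice.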

Bridged Networking with Netplan

Bridged networking connects VMs directly to your physical LAN. Each VM gets an IP from your LAN’s DHCP server (or a static IP on the same subnet). Use this when VMs need to be accessible from other machines on the network, which is the typical production setup.

Ubuntu 24.04 uses Netplan for network configuration. First, identify your physical interface name:

ip -br link show

Create a Netplan configuration that moves your physical interface into a bridge. Open the config file:

sudo vi /etc/netplan/01-bridge.yaml

Add the following configuration, replacing ens18 with your actual interface name and adjusting the IP to match your network:

network:
  version: 2
  renderer: networkd
  ethernets:
    ens18:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces:
        - ens18
      addresses:
        - 10.0.1.50/24
      routes:
        - to: default
          via: 10.0.1.1
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4
      dhcp4: false
      dhcp6: false
      parameters:
        stp: true
        forward-delay: 4

Warning: If you’re configuring this over SSH, be careful. Applying a bridge config moves your IP from the physical interface to the bridge. A misconfiguration will drop your SSH session. Test on a system you have console access to first, or use netplan try which auto-reverts after 120 seconds if you don’t confirm.

Apply the configuration:

sudo netplan apply

Verify the bridge is up and has your IP:

ip addr show br0

When creating VMs, specify --network bridge=br0 instead of the default NAT network. VMs will appear as regular hosts on your LAN.
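A bridged VM creation differs from the NAT examples later in this guide only in the --network flag. A dry-run sketch that builds and prints the command rather than running it (the VM name, ISO path, and bridge name are assumptions; run the printed command yourself once they match your setup):

```shell
# Build a virt-install invocation for bridged networking (dry run: printed, not executed).
# Assumes bridge br0 exists per the Netplan config above and the ISO path is valid.
BRIDGE=br0
NAME=lan-vm
cmd="virt-install --name $NAME --memory 2048 --vcpus 2 \
--disk path=/var/lib/libvirt/images/$NAME.qcow2,size=20,format=qcow2 \
--os-variant ubuntu24.04 --network bridge=$BRIDGE \
--cdrom /var/lib/libvirt/images/ubuntu-24.04.4-live-server-amd64.iso"
echo "$cmd"
```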

Network Modes Compared

Mode            VM gets LAN IP         Accessible from LAN             Performance   Best for
-------------   --------------------   -----------------------------   -----------   ------------------------------------
NAT (default)   No (192.168.122.x)     No (port forwarding required)   Good          Development, testing, isolated labs
Bridged         Yes                    Yes                             Best          Production servers, LAN services
Macvtap         Yes                    Yes (except host-to-VM)         Very good     When bridge config is impractical
Host-only       No (isolated subnet)   No                              Good          Security testing, fully isolated VMs

Create Your First Virtual Machine

With KVM installed and networking configured, spin up your first VM. You can use the graphical virt-manager or the command-line virt-install. Both talk to the same libvirt daemon, so VMs created with one tool are visible in the other.

With virt-manager (GUI)

Launch virt-manager from your desktop menu or terminal:

virt-manager

The wizard walks through five steps: choose the installation source (ISO file or network install), allocate memory and CPUs, create a disk, configure networking, and review before launch. For a standard Ubuntu 24.04 guest, 2 vCPUs, 2048 MB RAM, and a 20 GB qcow2 disk is a reasonable starting point. Select “Ubuntu 24.04 LTS” as the OS variant so virt-manager applies the correct optimizations automatically.

If you need to install Windows 11 as a guest, you’ll want TPM emulation. See the guide on enabling TPM on KVM for Windows 11 installation.

With virt-install (CLI)

The command line is faster for automated or headless deployments. Here’s a full example that creates an Ubuntu 24.04 VM booting from an ISO:

virt-install \
  --name ubuntu2404-vm \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/ubuntu2404-vm.qcow2,size=20,format=qcow2 \
  --os-variant ubuntu24.04 \
  --network network=default \
  --graphics vnc,listen=0.0.0.0 \
  --cdrom /var/lib/libvirt/images/ubuntu-24.04.4-live-server-amd64.iso \
  --boot uefi

For a quick test without downloading a full ISO, import a lightweight CirrOS image:

wget -q https://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img -O /tmp/cirros.img

virt-install \
  --name test-vm \
  --memory 512 \
  --vcpus 1 \
  --disk /tmp/cirros.img \
  --import \
  --os-variant cirros0.5.2 \
  --network network=default \
  --graphics none \
  --noautoconsole

The VM starts immediately. Confirm it’s running:

virsh list --all

The output shows the VM in running state:

 Id   Name      State
--------------------------
 1    test-vm   running

Check which OS variants are available for the --os-variant flag:

osinfo-query os | grep -iE 'ubuntu|debian|win11'

Common variants include ubuntu24.04, ubuntu24.10, ubuntu25.04, debian13, and win11. Using the correct variant ensures libvirt applies the right device model, bus types, and firmware defaults for that guest OS.

Manage VMs from the Command Line

The virsh command is your primary interface for day-to-day VM management. Here are the commands you’ll use most often:

Command                                  Description
--------------------------------------   ------------------------------------------------------------------
virsh list --all                         List all VMs (running and stopped)
virsh start vm-name                      Start a stopped VM
virsh shutdown vm-name                   Graceful shutdown (sends ACPI signal)
virsh reboot vm-name                     Graceful reboot
virsh suspend vm-name                    Pause VM (freeze state in memory)
virsh resume vm-name                     Resume a paused VM
virsh destroy vm-name                    Force stop (like pulling the power cord)
virsh undefine vm-name                   Remove VM definition (add --remove-all-storage to delete disk too)
virsh snapshot-create-as vm-name snap1   Create a named snapshot
virsh snapshot-revert vm-name snap1      Revert to a snapshot
virsh dominfo vm-name                    Show VM details (RAM, vCPUs, state)
virsh console vm-name                    Attach to VM serial console (Ctrl+] to detach)
virsh domifaddr vm-name                  Show VM IP addresses

A common workflow: take a snapshot before making risky changes, test, then revert if things break. One surprise for first-time users is virsh destroy: despite the name, it only force-stops the VM, leaving the definition and disk intact. To truly delete everything, use virsh undefine vm-name --remove-all-storage.
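The snapshot-before-change workflow can be wrapped in a small helper. A sketch, assuming virsh is on PATH and the named VM exists when the function is actually invoked (snapshot_guard is a name of our own invention, not a virsh feature):

```shell
# snapshot_guard VM CMD...: snapshot VM, run CMD, revert if CMD fails.
# Assumes virsh on PATH and an existing, snapshot-capable VM at call time.
snapshot_guard() {
  vm=$1; shift
  snap="pre-change-$(date +%Y%m%d-%H%M%S)"
  virsh snapshot-create-as "$vm" "$snap" || return 1
  if ! "$@"; then
    echo "Change failed, reverting $vm to $snap" >&2
    virsh snapshot-revert "$vm" "$snap"
    return 1
  fi
}
# Example: snapshot_guard test-vm ssh test-vm 'sudo apt -y full-upgrade'
```

Note that internal snapshots like this work with qcow2 disks; raw-disk VMs need external snapshots instead.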

Storage Pool Setup

libvirt uses storage pools to organize where VM disk images are stored. The default pool points to /var/lib/libvirt/images, which works fine for most setups. If you want to use a dedicated directory or partition, create a custom pool.

Check existing pools:

virsh pool-list --all

Create a new storage pool backed by a directory:

sudo mkdir -p /data/kvm-pool

virsh pool-define-as kvm-pool dir --target /data/kvm-pool
virsh pool-build kvm-pool
virsh pool-start kvm-pool
virsh pool-autostart kvm-pool

Verify the pool is active and will start on boot:

virsh pool-list --all

The new pool now appears alongside the default one:

 Name       State    Autostart
--------------------------------
 default    active   yes
 kvm-pool   active   yes

When creating VMs with virt-install, reference the pool by specifying the full path under /data/kvm-pool/ in the --disk path= argument. For production use with many VMs, placing the pool on a fast SSD or dedicated NVMe drive makes a noticeable difference in disk I/O performance.

KVM vs VirtualBox vs VMware vs Proxmox

If you’re choosing a virtualization platform, this comparison covers the main options. Proxmox is included because it’s essentially KVM + libvirt with a web UI and clustering built on top.

Feature           KVM/libvirt            VirtualBox                           VMware Workstation       Proxmox VE
---------------   --------------------   ----------------------------------   ----------------------   -----------------------------------
Type              Type 1 (kernel)        Type 2 (application)                 Type 2 (application)     Type 1 (KVM-based)
License           Free (GPL)             Free (GPL) / Extension Pack (PUEL)   Paid ($199+)             Free (AGPL) / subscription optional
Performance       Near-native            Good (10-15% overhead)               Good (similar to VBox)   Near-native (is KVM)
GUI               virt-manager/Cockpit   Built-in GUI                         Built-in GUI             Web UI
Live migration    Yes                    No                                   No (Workstation)         Yes
GPU passthrough   Yes (VFIO)             Limited                              Limited                  Yes (VFIO)
Clustering        Manual (with oVirt)    No                                   No                       Built-in
Best use case     Servers, cloud infra   Desktop testing                      Desktop testing          Home lab, small DC

For server workloads, KVM is the clear winner. VirtualBox and VMware Workstation are desktop tools. If you want KVM with a polished web interface and cluster management without building it yourself, Proxmox is worth looking at. For Debian-based KVM setups, see the KVM installation guide for Debian.

Troubleshooting

kvm-ok: “KVM acceleration can NOT be used”

This means the CPU supports virtualization but it’s disabled in BIOS/UEFI. Reboot, enter BIOS setup (usually Del, F2, or F10 during POST), and enable “Intel Virtualization Technology” (Intel) or “SVM Mode” (AMD). On some motherboards this is buried under Advanced → CPU Configuration. Save and reboot.

If you’re running inside a VM (nested virtualization), the outer hypervisor must expose the virtualization extensions to the guest. On Proxmox, edit the VM hardware and set CPU type to “host”. On VMware Workstation, check “Virtualize Intel VT-x/EPT” in VM Settings → Processors.
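To confirm the host side of nested virtualization, the kvm_intel and kvm_amd modules expose a nested parameter in sysfs; a quick check that prints nothing if neither module is loaded:

```shell
# Report nested-virtualization status for whichever KVM module is loaded.
# Y or 1 means nested guests are allowed; N or 0 means disabled.
for m in kvm_intel kvm_amd; do
  f=/sys/module/$m/parameters/nested
  if [ -r "$f" ]; then
    echo "$m nested: $(cat "$f")"
  fi
done
```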

Could not access KVM kernel module: Permission denied

Your user is not in the kvm group. Fix it:

sudo usermod -aG kvm $USER
newgrp kvm

If the problem persists after adding the group, check the permissions on /dev/kvm:

ls -la /dev/kvm

It should show crw-rw---- root kvm. If the group is different or permissions are wrong, a udev rule may be overriding the defaults.
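For scripting the same check, stat gives the ownership and mode in one line; a sketch that degrades gracefully when /dev/kvm is absent (the kvminfo variable name is our own):

```shell
# Print /dev/kvm ownership and mode, or a hint when the node is missing.
# Uses GNU stat's -c format flag (standard on Ubuntu).
if [ -e /dev/kvm ]; then
  kvminfo=$(stat -c 'owner=%U group=%G mode=%a' /dev/kvm)
else
  kvminfo="/dev/kvm missing - KVM modules not loaded or no hardware support"
fi
echo "$kvminfo"
```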

Cannot access storage file: Permission denied

This typically happens when libvirt’s QEMU process (which runs as the libvirt-qemu user) cannot read the disk image. Two common causes:

Wrong ownership: If you downloaded an ISO or image as your regular user, QEMU can’t read it. Fix the ownership:

sudo chown libvirt-qemu:kvm /var/lib/libvirt/images/your-image.qcow2

AppArmor blocking access: Ubuntu ships with an AppArmor profile for libvirtd that restricts which paths QEMU can access. If your disk images are outside /var/lib/libvirt/images/, AppArmor blocks the read. Either move the images to the default directory, or add your custom path to the AppArmor profile:

sudo vi /etc/apparmor.d/local/abstractions/libvirt-qemu

Add a line for your custom storage path:

/data/kvm-pool/** rwk,

Reload AppArmor profiles:

sudo systemctl reload apparmor

Default network is not active

Sometimes the default NAT network doesn’t start after a reboot, especially if dnsmasq (which provides DHCP for the virtual network) crashes or gets killed. Restart it and ensure it persists:

virsh net-start default
virsh net-autostart default

If virsh net-start fails with “internal error: Network is already in use by interface virbr0”, another process claimed the bridge. Destroy and recreate it:

virsh net-destroy default
virsh net-start default

Verify the network is back with virsh net-list --all and check that virbr0 has the 192.168.122.1 IP assigned with ip addr show virbr0.
