Install KVM on Debian 13 / Debian 12: Complete Guide

Debian’s rock-solid stability and predictable release cycle make it one of the best choices for a KVM hypervisor host. Whether you’re running a home lab or provisioning production virtual machines, KVM on Debian gives you a Type 1 hypervisor baked right into the Linux kernel with none of the licensing headaches that come with proprietary alternatives.

This guide walks through a complete KVM installation on both Debian 13 (Trixie) and Debian 12 (Bookworm), from verifying hardware support to creating your first virtual machine. The core packages and workflow are identical on both releases, but there are meaningful version differences in QEMU and libvirt that affect feature availability. If you’re also interested in the Ubuntu KVM setup or the Rocky Linux / RHEL path, those guides cover the distro-specific differences.

Tested April 2026 on Debian 13 (Trixie, kernel 6.12.74) with QEMU 10.0.8 and libvirt 11.3.0; package availability verified on Debian 12 (Bookworm) with QEMU 7.2.

Debian 13 vs Debian 12: KVM Component Versions

The package names are identical across both releases, so every command in this guide works on either version. The version jumps, however, are substantial. Debian 13 ships QEMU 10.0 with significantly better virtio performance, newer machine types, and improved live migration support compared to QEMU 7.2 on Bookworm.

Component        Debian 13 (Trixie)        Debian 12 (Bookworm)
---------------  ------------------------  ------------------------
Kernel           6.12.x                    6.1.x
QEMU             10.0.8                    7.2
libvirt          11.3.0                    9.0.0
Python           3.13                      3.11
Network config   /etc/network/interfaces   /etc/network/interfaces
OVMF (UEFI)      /usr/share/OVMF/          /usr/share/OVMF/
Package source   Debian main repo          Debian main repo

Verify CPU Virtualization Support

KVM requires hardware virtualization extensions: Intel VT-x or AMD-V. Most modern CPUs ship with these enabled, but some BIOS/UEFI configurations have them turned off by default. Check your CPU first.

grep -cE 'vmx|svm' /proc/cpuinfo

Any number greater than zero means your CPU supports virtualization. The count represents the number of CPU threads with the extension available:

8

A result of 0 means either your CPU lacks virtualization support (unlikely on anything made after 2010) or the feature is disabled in BIOS/UEFI. Reboot and enable “Intel Virtualization Technology” or “SVM Mode” in your firmware settings.

To check whether it’s Intel VT-x or AMD-V:

grep -oE 'vmx|svm' /proc/cpuinfo | head -1

This returns vmx for Intel or svm for AMD processors.
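The two checks above can be combined into one small script that reports a human-readable verdict. This is a sketch that assumes nothing beyond a Linux /proc/cpuinfo:

```shell
# Sketch: report which hardware virtualization extension this CPU exposes, if any.
if grep -qw vmx /proc/cpuinfo; then
    echo "Intel VT-x"
elif grep -qw svm /proc/cpuinfo; then
    echo "AMD-V"
else
    echo "No hardware virtualization detected"
fi
```

The `-w` flag matches vmx/svm as whole words, avoiding false positives from other flag names.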

Install KVM Packages

All required packages live in the standard Debian repositories. No third-party sources needed.

sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virtinst virt-manager qemu-system cpu-checker

Here’s what each package provides:

Package                Purpose
---------------------  ---------------------------------------------------
qemu-kvm               QEMU with KVM acceleration support
qemu-system            Full system emulation (x86_64, aarch64, etc.)
libvirt-daemon-system  libvirt daemon and default configs
libvirt-clients        CLI tools including virsh
bridge-utils           Utilities for configuring Linux bridge interfaces
virtinst               virt-install command for creating VMs
virt-manager           GUI for managing VMs (optional on headless servers)
cpu-checker            Provides kvm-ok to verify KVM readiness

On a headless server where you don’t need a GUI, you can skip virt-manager. Everything else is essential.

Verify the Installation

With the packages installed, confirm that KVM is operational. Start with kvm-ok:

kvm-ok

You should see both lines confirming KVM is available:

INFO: /dev/kvm exists
KVM acceleration can be used

Verify that the KVM kernel modules are loaded:

lsmod | grep kvm

On an Intel system, you’ll see both kvm_intel and the base kvm module (AMD systems show kvm_amd instead):

kvm_intel             413696  0
kvm                  1396736  1 kvm_intel

Check that the libvirt daemon is running:

sudo systemctl status libvirtd

The output should show the service as active:

● libvirtd.service - Virtualization daemon
     Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; preset: enabled)
     Active: active (running)
   Main PID: 1842 (libvirtd)
      Tasks: 21 (limit: 32768)
     Memory: 18.4M
        CPU: 412ms
     CGroup: /system.slice/libvirtd.service
             └─1842 /usr/sbin/libvirtd

Confirm the QEMU version installed on your system:

qemu-system-x86_64 --version

On Debian 13, this reports QEMU 10.0.8:

QEMU emulator version 10.0.8 (Debian 1:10.0.8+ds-0+deb13u1+b1)
Copyright (c) 2003-2025 Fabrice Bellard and the QEMU Project developers

The virsh version command shows the full stack, from the libvirt library to the running hypervisor:

virsh version

All components should report their versions cleanly:

Compiled against library: libvirt 11.3.0
Using library: libvirt 11.3.0
Using API: QEMU 11.3.0
Running hypervisor: QEMU 10.0.8
Running against daemon: 11.3.0

Finally, run the comprehensive host validation check:

sudo virt-host-validate

Every QEMU check should show PASS. The LXC freezer FAIL is cosmetic on cgroup v2 systems, where the freezer no longer exists as a separate controller, and does not affect KVM operation:

  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by grub                                 : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'freezer' controller support                     : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
   LXC: Checking for cgroup 'blkio' controller support                       : PASS

The IOMMU warning only matters if you plan to pass physical PCI devices (GPUs, NICs) directly into VMs. For standard virtual machine usage, you can ignore it. If you do need PCI passthrough, add intel_iommu=on (or amd_iommu=on) to your kernel command line in /etc/default/grub.
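As a sketch of that change on an Intel host, append the flag to GRUB_CMDLINE_LINUX_DEFAULT and regenerate the grub config (substitute amd_iommu=on on AMD; back up the file first, and note the sed pattern assumes the default quoted-value layout of /etc/default/grub):

```shell
# Append intel_iommu=on to the default kernel command line (Intel hosts)
sudo cp /etc/default/grub /etc/default/grub.bak
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 intel_iommu=on"/' /etc/default/grub
sudo update-grub   # regenerates /boot/grub/grub.cfg; reboot for the change to take effect
```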

Add Your User to libvirt and kvm Groups

By default, only root can manage VMs. Add your regular user to the libvirt and kvm groups so you can run virsh and virt-manager without sudo:

sudo usermod -aG libvirt,kvm $USER

Log out and back in for the group membership to take effect. Verify with:

groups

Both libvirt and kvm should appear in the output. After this, virsh list --all works without prefixing sudo.
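If you want to pick up the new membership in the current shell without logging out, newgrp starts a subshell with the group applied. This is session-local only; a full re-login remains the clean fix:

```shell
# Start a subshell whose primary group is libvirt; inside it,
# 'groups' includes libvirt and virsh can reach the system daemon.
newgrp libvirt
```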

Load vhost_net for Better Network Performance

The vhost_net module offloads virtio network processing to the kernel, which noticeably improves VM network throughput. It’s usually loaded automatically, but worth confirming:

lsmod | grep vhost_net

If there’s no output, load it manually:

sudo modprobe vhost_net

Make it persistent across reboots by adding it to /etc/modules:

echo "vhost_net" | sudo tee -a /etc/modules

Confirm the module is loaded:

lsmod | grep vhost_net

You should see vhost_net along with its dependencies vhost and tun:

vhost_net              32768  0
vhost                  57344  1 vhost_net
tun                    65536  1 vhost_net

Configure Networking

KVM supports several network modes. The right choice depends on whether your VMs need to be reachable from external hosts or just need internet access.

Default NAT Network (virbr0)

libvirt creates a default NAT network automatically during installation. VMs connected to this network get IPs in the 192.168.122.0/24 range and can reach the internet through the host’s IP. External hosts cannot initiate connections to the VMs directly (without port forwarding).

Confirm the default network is active:

virsh net-list --all

The default network should show as active and set to autostart:

 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

If it’s inactive, start it and enable autostart:

virsh net-start default
virsh net-autostart default

NAT mode works perfectly for development, testing, and any scenario where VMs only need outbound internet access. For production services that need to be reachable on your LAN, use bridged networking instead.
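Once a VM is running on the default NAT network, you can look up the address libvirt's dnsmasq handed out, either per-network or per-guest (vm-name is a placeholder for your domain name):

```shell
# All active DHCP leases on the default NAT network
virsh net-dhcp-leases default

# Or query one guest's interfaces directly
virsh domifaddr vm-name
```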

Bridged Network with /etc/network/interfaces

Bridged networking puts VMs directly on your physical network. They get IPs from the same DHCP server (or static range) as other machines on the LAN, making them accessible without any port forwarding. Debian uses ifupdown and /etc/network/interfaces for network configuration, not Netplan.

First, identify your physical network interface:

ip -br link show

Look for the interface that currently carries your network connection (commonly ens18, eth0, or enp0s3). Back up the current configuration before making changes:

sudo cp /etc/network/interfaces /etc/network/interfaces.bak

Write a new configuration that creates a bridge interface br0 with your physical interface as the bridge port. This example uses a static IP (adjust the addresses for your network):

sudo vi /etc/network/interfaces

Replace the existing interface configuration with:

# Loopback
auto lo
iface lo inet loopback

# Physical interface (no IP, bridge port only)
auto ens18
iface ens18 inet manual

# Bridge interface
auto br0
iface br0 inet static
    address 10.0.1.50
    netmask 255.255.255.0
    gateway 10.0.1.1
    dns-nameservers 10.0.1.1 1.1.1.1
    bridge_ports ens18
    bridge_stp off
    bridge_fd 0

If you prefer DHCP for the bridge, replace the static block with:

auto br0
iface br0 inet dhcp
    bridge_ports ens18
    bridge_stp off
    bridge_fd 0

Apply the new configuration. If you’re connected over SSH, be aware this will briefly drop your connection:

sudo systemctl restart networking

Verify the bridge is up and has an IP address:

ip addr show br0

You should see your configured IP on br0, and ens18 should have no IP of its own. Confirm the bridge membership:

brctl show

The output shows ens18 as a port under br0:

bridge name     bridge id               STP enabled     interfaces
br0             8000.xxxxxxxxxxxx       no              ens18
virbr0          8000.xxxxxxxxxxxx       yes             virbr0-nic

When creating VMs, specify --network bridge=br0 in your virt-install command to attach them directly to the LAN.

Network Mode Comparison

Feature                       NAT (virbr0)                Bridge (br0)                   Macvtap
----------------------------  --------------------------  -----------------------------  -------------------------------------
VM reachable from LAN         No (needs port forwarding)  Yes                            Yes
Host-to-VM communication      Yes                         Yes                            No (by design)
DHCP from physical network    No (libvirt DHCP)           Yes                            Yes
Setup complexity              Zero (automatic)            Moderate                       Low
Performance                   Good                        Best                           Good
Requires physical NIC change  No                          Yes                            No
Best for                      Dev/test, isolated VMs      Production servers, LAN svcs   Quick bridging without config changes
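For completeness, macvtap attachment in virt-install uses a direct-type network. This is a fragment, not a full command, and it assumes ens18 is your physical NIC; remember that the host itself cannot reach a macvtap guest:

```shell
# virt-install network option for macvtap in bridge mode
--network type=direct,source=ens18,source_mode=bridge,model=virtio
```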

Create Your First VM

With KVM installed and networking configured, create a virtual machine using virt-install. This example boots from a Debian 13 netinst ISO; download it to /var/lib/libvirt/images first:

sudo virt-install \
  --name debian13-vm \
  --ram 2048 \
  --vcpus 2 \
  --disk size=20,format=qcow2,bus=virtio \
  --os-variant debian12 \
  --network network=default,model=virtio \
  --graphics vnc,listen=0.0.0.0 \
  --cdrom /var/lib/libvirt/images/debian-13.0.0-amd64-netinst.iso \
  --boot uefi

The --boot uefi flag uses the OVMF firmware from /usr/share/OVMF/ for UEFI boot. Drop it if you prefer legacy BIOS. The --os-variant flag tells libvirt to apply optimal defaults for the guest OS; the example passes debian12 because older osinfo databases may not yet include a debian13 entry, and the nearest release is a safe substitute. List available OS variants with osinfo-query os | grep debian.

For a quick smoke test without downloading a full ISO, grab the lightweight CirrOS cloud image and import it directly:

wget -O /tmp/cirros.qcow2 https://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img

Import the image as a VM:

sudo virt-install \
  --name cirros-test \
  --ram 512 \
  --vcpus 1 \
  --disk /tmp/cirros.qcow2,format=qcow2,bus=virtio \
  --import \
  --os-variant cirros0.5.2 \
  --network network=default,model=virtio \
  --graphics none \
  --console pty,target_type=serial \
  --noautoconsole

Check that the VM is running:

virsh list

The VM should appear with state “running”:

 Id   Name          State
------------------------------
 1    cirros-test   running

Connect to the console to verify it booted successfully:

virsh console cirros-test

Press Enter if you see a blank screen. The default CirrOS credentials are cirros / gocubsgo. Press Ctrl+] to detach from the console. If you’re planning to run Windows 11 guests, check the guide on enabling TPM for KVM since Windows 11 requires TPM 2.0 even in virtual machines.
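When you're done with the smoke test, tearing the VM down takes two commands. The --remove-all-storage flag also deletes the disk image; omit it to keep the file:

```shell
virsh destroy cirros-test                        # force power off
virsh undefine cirros-test --remove-all-storage  # delete the definition and its disk
```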

Essential virsh Commands

The virsh CLI is your primary interface for managing KVM virtual machines. These commands cover day-to-day operations:

Command                                 What it does
--------------------------------------  ------------------------------------------------------------
virsh list --all                        List all VMs (running and stopped)
virsh start vm-name                     Boot a stopped VM
virsh shutdown vm-name                  Graceful ACPI shutdown
virsh destroy vm-name                   Force power off (like pulling the plug)
virsh reboot vm-name                    Graceful reboot
virsh autostart vm-name                 Start VM automatically on host boot
virsh autostart --disable vm-name       Disable auto-start
virsh console vm-name                   Attach to serial console
virsh dominfo vm-name                   Show VM details (RAM, vCPUs, state)
virsh dumpxml vm-name                   Full XML config (useful for backup/clone)
virsh undefine vm-name                  Remove VM definition (keeps disk unless --remove-all-storage)
virsh snapshot-create-as vm-name snap1  Create a named snapshot
virsh snapshot-revert vm-name snap1     Revert to a snapshot
virsh domifaddr vm-name                 Show VM’s IP address
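As one worked example combining these commands, this sketch dumps every defined VM's XML to a backup directory (~/vm-backups is an arbitrary choice):

```shell
# Back up the XML definition of every VM, running or not.
mkdir -p ~/vm-backups
for vm in $(virsh list --all --name); do
    [ -n "$vm" ] && virsh dumpxml "$vm" > ~/vm-backups/"$vm".xml
done
```

Restoring a definition later is just virsh define ~/vm-backups/name.xml (the disk images themselves are not included and must be backed up separately).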

Storage Pool Management

libvirt organizes VM disk images into storage pools. The default pool points to /var/lib/libvirt/images, but it may not be defined automatically on a fresh install. Check what pools exist:

virsh pool-list --all

If no “default” pool appears, create one:

virsh pool-define-as default dir --target /var/lib/libvirt/images
virsh pool-build default
virsh pool-start default
virsh pool-autostart default

Confirm the pool is active:

virsh pool-list --all

The default pool should show as active with autostart enabled:

 Name      State    Autostart
-------------------------------
 default   active   yes

To see how much space is available in the pool:

virsh pool-info default

For production setups with dedicated storage, you can define additional pools pointing to different mount points, LVM volume groups, or NFS shares. The dir type shown above is the simplest and works well for most use cases.
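Creating a disk image inside the pool from the CLI, rather than letting virt-install size one implicitly, looks like this (the volume name web01.qcow2 is illustrative):

```shell
# Create a 20 GiB qcow2 volume in the default pool
virsh vol-create-as default web01.qcow2 20G --format qcow2

# List the pool's volumes to confirm
virsh vol-list default
```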

KVM on Debian vs Ubuntu vs Proxmox

All three run the same KVM hypervisor underneath. The differences are in packaging, management tools, and the target audience.

Aspect               Debian KVM                   Ubuntu KVM                  Proxmox VE
-------------------  ---------------------------  --------------------------  ------------------------------
Base OS              Debian stable                Ubuntu LTS                  Debian (modified)
Package freshness    Conservative (stable)        Slightly newer (backports)  Own repos, latest QEMU/libvirt
Management UI        virt-manager (optional)      virt-manager (optional)     Built-in web UI
Clustering           Manual (Pacemaker/Corosync)  Manual                      Built-in HA cluster
Container support    LXC (manual setup)           LXD/Incus                   LXC built-in
Network config tool  /etc/network/interfaces      Netplan                     /etc/network/interfaces
Storage backends     Manual (LVM, ZFS, Ceph)      Manual                      ZFS, Ceph, LVM, NFS via UI
Backup solution      Manual scripts               Manual scripts              PBS integration
Best for             Minimal, stable servers      Teams familiar with Ubuntu  Turnkey virtualization platform

Debian KVM is the leanest option. You get exactly the packages you install, nothing more. Proxmox adds a full management layer on top (which itself runs on Debian). Ubuntu’s KVM is functionally identical to Debian’s but uses Netplan for networking and ships slightly newer QEMU versions through backports.

Troubleshooting

kvm-ok: command not found

This means the cpu-checker package isn’t installed. It’s included in the install command earlier in this guide, but none of the other KVM packages depend on it, so minimal Debian installations can end up without it.

sudo apt install -y cpu-checker

After installing, run kvm-ok again. If it reports “KVM acceleration can NOT be used,” check your BIOS/UEFI for disabled virtualization extensions.

Cannot access storage file ‘/var/lib/libvirt/images/vm.qcow2’: Permission denied

This typically happens when the disk image has incorrect ownership. libvirt runs QEMU processes as the libvirt-qemu user on Debian. Verify and fix the ownership:

sudo chown libvirt-qemu:kvm /var/lib/libvirt/images/vm.qcow2

If you’re using images stored outside /var/lib/libvirt/images, AppArmor may be blocking access. Check the AppArmor logs:

sudo journalctl -k | grep DENIED

To allow libvirt access to a custom directory, add it to the AppArmor profile:

echo '  /your/custom/path/** rwk,' | sudo tee -a /etc/apparmor.d/local/abstractions/libvirt-qemu
sudo systemctl restart apparmor

Error starting domain: virConnectOpen failed: Failed to connect to the hypervisor

This error means the libvirt daemon isn’t running or isn’t reachable. Check its status:

sudo systemctl status libvirtd

If the service is dead, start it:

sudo systemctl enable --now libvirtd

If it fails to start, check the logs for specific errors:

sudo journalctl -u libvirtd --no-pager -n 30

Common causes include a corrupted libvirt config file or a conflicting process holding the socket. On Debian 13 with systemd socket activation, you may also need to ensure the socket unit is active:

sudo systemctl enable --now libvirtd.socket

Bridge interface br0 has no carrier

The “no carrier” state on a bridge means no physical interface is actively connected to it. This happens when:

The physical interface name in /etc/network/interfaces doesn’t match the actual interface. Double check with ip link show and verify the bridge_ports line matches exactly.

The network cable is unplugged or the link is down on the physical port. On a VM running inside another hypervisor (nested virtualization), the virtual NIC must be set to allow promiscuous mode on the outer host.

Verify the bridge port assignment:

brctl show br0

If the interfaces column is empty, the physical interface wasn’t added. Bring it down and re-add it:

sudo ip link set ens18 down
sudo brctl addif br0 ens18
sudo ip link set ens18 up

Then restart networking to apply the persistent configuration cleanly:

sudo systemctl restart networking
