Ansible + Proxmox: Automated VM Management

Managing a handful of Proxmox VMs through the web UI is fine. Managing dozens across a cluster, spinning up test environments on demand, tearing them down after CI runs? That’s where clicking through forms stops being reasonable and automation takes over.

Original content from computingforgeeks.com - post 165312

The community.proxmox Ansible collection gives you full control over the Proxmox VE API: clone VMs from templates, configure cloud-init, manage snapshots, and destroy instances, all from a playbook. This guide walks through setting up Ansible to manage a Proxmox VE cluster with real, tested examples on a 2-node setup running Proxmox VE 8.x. Every command and output shown here comes from an actual cluster, not fabricated demos.

Tested April 2026 | Proxmox VE 8.4, ansible-core 2.16.14, community.proxmox 1.6.0, Rocky Linux 10.1 control node

Prerequisites

Before starting, confirm you have the following ready:

  • Proxmox VE 8.x cluster with at least one node (two nodes for the migration section)
  • A control node running Rocky Linux 10 or Ubuntu 24.04 with Ansible installed (ansible-core 2.16+)
  • Network connectivity from the control node to the Proxmox API (port 8006)
  • A VM template to clone from (cloud-init enabled template recommended)
  • Tested on: Proxmox VE 8.4, ansible-core 2.16.14, Python 3.12, Rocky Linux 10.1

Create a Proxmox API Token

Password authentication works but is a bad idea for automation. API tokens are revocable, auditable, and don’t expire your session when someone changes the root password. Create one on any Proxmox node:

pveum user token add root@pam ansible-token --privsep=0

The --privsep=0 flag gives the token the same privileges as the root@pam user. In production, you’d create a dedicated user with limited permissions and set --privsep=1 to enforce privilege separation. For this tutorial, full access keeps things simple.

The output shows the token value. Save it immediately because Proxmox never displays it again:

┌──────────────┬──────────────────────────────────────┐
│ key          │ value                                │
╞══════════════╪══════════════════════════════════════╡
│ full-tokenid │ root@pam!ansible-token               │
├──────────────┼──────────────────────────────────────┤
│ info         │ {"privsep":"0"}                      │
├──────────────┼──────────────────────────────────────┤
│ value        │ a1b2c3d4-e5f6-7890-abcd-ef1234567890 │
└──────────────┴──────────────────────────────────────┘

Store this token in Ansible Vault rather than plain text files. Create a vault file for your Proxmox credentials:

ansible-vault create group_vars/all/proxmox_vault.yml

Add the following variables inside the vault file:

vault_proxmox_token_id: "root@pam!ansible-token"
vault_proxmox_token_secret: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"

Install the Proxmox Collection and Dependencies

The Proxmox modules used to live in community.general, but they’ve been split into their own collection. If you’re following older tutorials that reference community.general.proxmox_kvm, those modules are deprecated and will throw warnings.

Install the dedicated collection:

ansible-galaxy collection install community.proxmox

The collection depends on the proxmoxer Python library to talk to the Proxmox API. Install it on the control node:

sudo pip3 install proxmoxer requests

Skip this step and you’ll get a clear error when running any Proxmox module:

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (proxmoxer) on the host. Please read the module documentation and install it in the appropriate location."}

If you still have old playbooks using the community.general Proxmox modules, every run prints a deprecation warning like this:

[DEPRECATION WARNING]: community.general.proxmox_vm_info has been deprecated. The proxmox content has been moved to community.proxmox. This feature will be removed in version 10.0.0 of community.general.

The fix is straightforward: replace community.general.proxmox_kvm with community.proxmox.proxmox_kvm in your playbooks. Same module, new namespace.

Set Up the Inventory and Variables

Since Proxmox modules run against the API (not over SSH to the VMs), most tasks target localhost. Create a simple inventory and variable file that all playbooks will reference.

Create inventory.ini:

[proxmox]
localhost ansible_connection=local

[proxmox:vars]
proxmox_host=10.0.1.1
proxmox_node=pve01

Create group_vars/all/proxmox.yml for the non-secret variables:

proxmox_api_host: "10.0.1.1"
proxmox_api_port: 8006
proxmox_api_token_id: "{{ vault_proxmox_token_id }}"
proxmox_api_token_secret: "{{ vault_proxmox_token_secret }}"
proxmox_default_node: "pve01"
proxmox_template_vmid: 799
proxmox_storage: "local-lvm"

List All VMs on a Node

Start with a read-only operation to confirm the API connection works. The proxmox_vm_info module queries all VMs and containers on a node:

ansible localhost -m community.proxmox.proxmox_vm_info \
  -a "api_host=10.0.1.1 api_user=root@pam api_token_id=ansible-token api_token_secret=a1b2c3d4-e5f6-7890-abcd-ef1234567890 node=pve01"

This ad-hoc form puts the token secret on the command line (and in your shell history), so treat it as a one-off connectivity check; the playbooks below read the secret from Vault instead.

A successful response returns every VM on the node with its resource allocation and current state:

localhost | SUCCESS => {
    "changed": false,
    "proxmox_vms": [
        {
            "cpus": 2,
            "maxmem": 4294967296,
            "name": "Rocky-10-Enock",
            "status": "running",
            "vmid": 108
        },
        {
            "cpus": 4,
            "maxmem": 8589934592,
            "name": "ubuntu-24-template",
            "status": "stopped",
            "vmid": 799
        }
    ]
}

If the connection fails with a certificate error, add validate_certs=false to the module arguments. Self-signed certificates are common on Proxmox clusters that don’t face the internet.
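The same query works inside a playbook, where the vaulted variables keep the secret off the command line. A short sketch (the vmid is illustrative) that fetches a single VM's details:

```yaml
- name: Query one VM by ID
  community.proxmox.proxmox_vm_info:
    api_host: "{{ proxmox_api_host }}"
    api_user: root@pam
    api_token_id: ansible-token
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: false
    node: "{{ proxmox_default_node }}"
    vmid: 108
  register: vm_info

- name: Show the result
  ansible.builtin.debug:
    var: vm_info.proxmox_vms
```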

Clone a VM from Template

Cloning a template is the fastest way to provision new VMs. The proxmox_kvm module handles this with the clone parameter. Create a file called clone_vm.yml:

vi clone_vm.yml

Add the following playbook content:

---
- name: Clone a VM from template
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - group_vars/all/proxmox_vault.yml
    - group_vars/all/proxmox.yml
  tasks:
    - name: Clone template to new VM
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: "{{ proxmox_default_node }}"
        clone: "ubuntu-24-template"
        vmid: 799
        newid: 813
        name: "ansible-test-vm"
        full: true
        storage: "{{ proxmox_storage }}"
        timeout: 300
      register: clone_result

    - name: Show clone result
      ansible.builtin.debug:
        var: clone_result

The full: true parameter creates a full clone (independent copy). Set it to false for a linked clone, which is faster and uses less disk but depends on the template staying intact. For throwaway test VMs, linked clones are usually fine. For anything long-lived, use full clones.

Run the playbook:

ansible-playbook clone_vm.yml --ask-vault-pass

The clone operation takes a minute or two depending on disk size. The output confirms the new VMID:

TASK [Clone template to new VM] ************************************************
changed: [localhost]

TASK [Show clone result] *******************************************************
ok: [localhost] => {
    "clone_result": {
        "changed": true,
        "msg": "VM ansible-test-vm with newid 813 cloned from vm with vmid 799",
        "vmid": 813
    }
}
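For comparison, a linked-clone variant of the clone task might look like the sketch below. Note one assumption: with a linked clone the disk stays on the template's storage, so the storage parameter is typically omitted — verify this against your storage backend.

```yaml
- name: Create a linked clone for a throwaway test VM
  community.proxmox.proxmox_kvm:
    api_host: "{{ proxmox_api_host }}"
    api_user: root@pam
    api_token_id: ansible-token
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: false
    node: "{{ proxmox_default_node }}"
    clone: "ubuntu-24-template"
    vmid: 799
    newid: 814
    name: "throwaway-test-vm"
    full: false
    timeout: 300
```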

Configure and Start a VM

After cloning, you typically need to adjust the VM’s resources and set up cloud-init before booting it. This playbook handles both in sequence. Create configure_start_vm.yml:

vi configure_start_vm.yml

Add the playbook:

---
- name: Configure and start cloned VM
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - group_vars/all/proxmox_vault.yml
    - group_vars/all/proxmox.yml
  tasks:
    - name: Configure VM resources and cloud-init
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: "{{ proxmox_default_node }}"
        vmid: 813
        cores: 2
        memory: 4096
        ciuser: ansible
        cipassword: "changeme123"
        ipconfig0: "ip=10.0.1.20/24,gw=10.0.1.1"
        nameservers:
          - "8.8.8.8"
          - "8.8.4.4"
        update: true

    - name: Start the VM
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: "{{ proxmox_default_node }}"
        vmid: 813
        state: started

    - name: Wait for VM to become reachable via SSH
      ansible.builtin.wait_for:
        host: 10.0.1.20
        port: 22
        delay: 10
        timeout: 120
        state: started

The ciuser and cipassword parameters inject credentials via cloud-init. For production, use SSH keys instead by setting sshkeys with your public key. The ipconfig0 parameter assigns a static IP, which matters because DHCP addresses make inventory management painful.

Run it:

ansible-playbook configure_start_vm.yml --ask-vault-pass

The VM starts and Ansible waits until SSH is available on port 22:

TASK [Configure VM resources and cloud-init] ***********************************
changed: [localhost]

TASK [Start the VM] ************************************************************
changed: [localhost]

TASK [Wait for VM to become reachable via SSH] *********************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=2    unreachable=0    failed=0

Build a Complete Provisioning Playbook

The individual steps above work for learning, but in practice you want a single playbook that handles the entire lifecycle. This one clones a template, configures it, starts it, waits for SSH, then runs initial configuration tasks on the new VM. Create provision_vm.yml:

vi provision_vm.yml

The full playbook:

---
- name: Provision a new VM on Proxmox
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - group_vars/all/proxmox_vault.yml
    - group_vars/all/proxmox.yml
  vars:
    vm_name: "web-server-01"
    vm_newid: 820
    vm_ip: "10.0.1.21"
    vm_cores: 2
    vm_memory: 4096
    vm_gateway: "10.0.1.1"
  tasks:
    - name: Clone from template
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: "{{ proxmox_default_node }}"
        clone: "ubuntu-24-template"
        vmid: "{{ proxmox_template_vmid }}"
        newid: "{{ vm_newid }}"
        name: "{{ vm_name }}"
        full: true
        storage: "{{ proxmox_storage }}"
        timeout: 300

    - name: Set VM resources and cloud-init
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: "{{ proxmox_default_node }}"
        vmid: "{{ vm_newid }}"
        cores: "{{ vm_cores }}"
        memory: "{{ vm_memory }}"
        ciuser: ansible
        sshkeys: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        ipconfig0: "ip={{ vm_ip }}/24,gw={{ vm_gateway }}"
        update: true

    - name: Boot the VM
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: "{{ proxmox_default_node }}"
        vmid: "{{ vm_newid }}"
        state: started

    - name: Wait for SSH
      ansible.builtin.wait_for:
        host: "{{ vm_ip }}"
        port: 22
        delay: 15
        timeout: 180

    - name: Add new VM to in-memory inventory
      ansible.builtin.add_host:
        name: "{{ vm_ip }}"
        groups: new_vms
        ansible_user: ansible
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"

- name: Configure the new VM
  hosts: new_vms
  become: true
  gather_facts: true
  tasks:
    - name: Update all packages
      ansible.builtin.package:
        name: "*"
        state: latest

    - name: Install common utilities
      ansible.builtin.package:
        name:
          - vim
          - curl
          - wget
          - htop
          - net-tools
        state: present

    - name: Set timezone
      community.general.timezone:
        name: UTC

This playbook uses two plays. The first runs locally against the Proxmox API to provision the VM. The second uses add_host to dynamically add the new VM’s IP to the inventory, then connects over SSH to configure it. The -o StrictHostKeyChecking=no flag prevents the SSH prompt for unknown host keys on first connection.

Run the full provisioning workflow:

ansible-playbook provision_vm.yml --ask-vault-pass

The complete output shows each phase:

PLAY [Provision a new VM on Proxmox] *******************************************

TASK [Clone from template] *****************************************************
changed: [localhost]

TASK [Set VM resources and cloud-init] *****************************************
changed: [localhost]

TASK [Boot the VM] *************************************************************
changed: [localhost]

TASK [Wait for SSH] ************************************************************
ok: [localhost]

TASK [Add new VM to in-memory inventory] ***************************************
changed: [localhost]

PLAY [Configure the new VM] ****************************************************

TASK [Gathering Facts] *********************************************************
ok: [10.0.1.21]

TASK [Update all packages] *****************************************************
changed: [10.0.1.21]

TASK [Install common utilities] ************************************************
changed: [10.0.1.21]

TASK [Set timezone] ************************************************************
changed: [10.0.1.21]

PLAY RECAP *********************************************************************
10.0.1.21                 : ok=4    changed=3    unreachable=0    failed=0
localhost                  : ok=5    changed=4    unreachable=0    failed=0

For reusable provisioning logic, consider wrapping this into an Ansible role that accepts variables for VM name, resources, and IP address.
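A sketch of what the call site could look like once the tasks live in a role — the role name proxmox_vm and its variable names are hypothetical, matching the vars used in provision_vm.yml above:

```yaml
- name: Provision a fleet of web servers
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Provision each VM via the role
      ansible.builtin.include_role:
        name: proxmox_vm
      vars:
        vm_name: "web-server-{{ '%02d' | format(item) }}"
        vm_newid: "{{ 820 + item }}"
        vm_ip: "10.0.1.{{ 20 + item }}"
      loop: [1, 2, 3]
```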

Snapshot and Restore

Snapshots are essential before risky operations like major upgrades or config changes. The proxmox_snap module manages the full snapshot lifecycle. Create snapshot_vm.yml:

vi snapshot_vm.yml

This playbook creates a snapshot, then demonstrates restoring it:

---
- name: Manage VM snapshots
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - group_vars/all/proxmox_vault.yml
    - group_vars/all/proxmox.yml
  tasks:
    - name: Create a snapshot before upgrade
      community.proxmox.proxmox_snap:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        hostname: "ansible-test-vm"
        state: present
        snapname: "pre-upgrade"
        description: "Snapshot before package upgrade"
        vmstate: true

    - name: List all snapshots
      community.proxmox.proxmox_snap:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        hostname: "ansible-test-vm"
        state: list
      register: snap_list

    - name: Display snapshots
      ansible.builtin.debug:
        var: snap_list

    - name: Rollback to pre-upgrade snapshot
      community.proxmox.proxmox_snap:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        hostname: "ansible-test-vm"
        state: rollback
        snapname: "pre-upgrade"

The vmstate: true option includes RAM state in the snapshot, which allows you to resume exactly where the VM was. Without it, the VM needs to boot from disk after restore, which is faster to create but loses running state.
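Once an upgrade is verified, deleting the snapshot frees the space it pins. A sketch using the same module with state: absent:

```yaml
- name: Remove the pre-upgrade snapshot after verifying the upgrade
  community.proxmox.proxmox_snap:
    api_host: "{{ proxmox_api_host }}"
    api_user: root@pam
    api_token_id: ansible-token
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: false
    hostname: "ansible-test-vm"
    state: absent
    snapname: "pre-upgrade"
```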

Destroy VMs

Cleaning up test VMs is just as important as creating them. Leftover VMs consume storage and make the cluster inventory noisy. The proxmox_kvm module with state: absent handles removal, but the VM must be stopped first.

Stop and destroy in one playbook:

---
- name: Destroy a VM
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - group_vars/all/proxmox_vault.yml
    - group_vars/all/proxmox.yml
  tasks:
    - name: Stop the VM
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: "{{ proxmox_default_node }}"
        vmid: 813
        state: stopped
        force: true

    - name: Remove the VM
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: "{{ proxmox_default_node }}"
        vmid: 813
        state: absent

The force: true on the stop task ensures the VM powers off even if it’s unresponsive. The destroy output confirms removal:

TASK [Stop the VM] *************************************************************
changed: [localhost]

TASK [Remove the VM] ***********************************************************
changed: [localhost] => {
    "changed": true,
    "msg": "VM 813 removed",
    "vmid": 813
}
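When a CI run leaves several test VMs behind, the same pattern extends to a loop. A sketch (the vmid list is illustrative, and it assumes the VMs are already stopped — otherwise add a preceding stop task with the same loop):

```yaml
- name: Remove a batch of test VMs
  community.proxmox.proxmox_kvm:
    api_host: "{{ proxmox_api_host }}"
    api_user: root@pam
    api_token_id: ansible-token
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: false
    node: "{{ proxmox_default_node }}"
    vmid: "{{ item }}"
    state: absent
  loop: [813, 820, 830]
```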

Multi-Node Cluster Operations

With a multi-node Proxmox cluster, you can target specific nodes for VM placement or migrate VMs between nodes for maintenance. The key is the node parameter, which tells the API which physical host to act on.

Clone a VM directly onto the second node:

---
- name: Deploy VMs across cluster nodes
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - group_vars/all/proxmox_vault.yml
    - group_vars/all/proxmox.yml
  tasks:
    - name: Clone VM to pve01
      community.proxmox.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: pve01
        clone: "ubuntu-24-template"
        vmid: 799
        newid: 830
        name: "app-node-01"
        full: true
        storage: "{{ proxmox_storage }}"

    - name: Clone VM to pve02
      community.proxmox.proxmox_kvm:
        api_host: 10.0.1.2
        api_user: root@pam
        api_token_id: ansible-token
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: false
        node: pve02
        clone: "ubuntu-24-template"
        vmid: 799
        newid: 831
        name: "app-node-02"
        full: true
        storage: "{{ proxmox_storage }}"

When the template lives on shared storage (Ceph, NFS, or ZFS over iSCSI), you can clone it to any node. If it’s on local storage, you’ll need to clone on the same node where the template resides, then migrate.

Migrate a running VM from pve01 to pve02 using the proxmox_kvm module's migrate parameter: set node to the target node and migrate: true. Live migration requires shared storage; otherwise Proxmox performs a storage migration, which needs sufficient bandwidth:

- name: Migrate VM to another node
  community.proxmox.proxmox_kvm:
    api_host: "{{ proxmox_api_host }}"
    api_user: root@pam
    api_token_id: ansible-token
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: false
    node: pve02
    vmid: 830
    migrate: true
    timeout: 600

This is useful for automating maintenance windows. Before patching a Proxmox host, migrate all VMs off it, apply updates, reboot, then migrate them back. Combined with Terraform for infrastructure provisioning and Ansible for configuration, you can build fully automated cluster maintenance workflows.
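A hedged sketch of the evacuation step — list the VMs on the node being patched, then migrate the running ones (it assumes shared storage so live migration is possible):

```yaml
- name: Find VMs on the node being patched
  community.proxmox.proxmox_vm_info:
    api_host: "{{ proxmox_api_host }}"
    api_user: root@pam
    api_token_id: ansible-token
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: false
    node: pve01
  register: pve01_vms

- name: Migrate running VMs to pve02
  community.proxmox.proxmox_kvm:
    api_host: "{{ proxmox_api_host }}"
    api_user: root@pam
    api_token_id: ansible-token
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: false
    node: pve02
    vmid: "{{ item.vmid }}"
    migrate: true
    timeout: 600
  loop: "{{ pve01_vms.proxmox_vms | selectattr('status', 'equalto', 'running') | list }}"
  loop_control:
    label: "{{ item.name }}"
```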

Troubleshooting Common Issues

Error: “Failed to import the required Python library (proxmoxer)”

This means the proxmoxer library isn’t installed in the Python environment Ansible is using. If you installed Ansible via pip, install proxmoxer with the same pip. If Ansible came from the OS package manager, use sudo pip3 install proxmoxer requests. Verify the correct Python is being used with ansible --version and check the “python version” line.
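If you prefer to manage the dependency from Ansible itself, a task like this sketch (assuming pip is available to the control node's interpreter) installs proxmoxer into whatever Python Ansible runs under:

```yaml
- name: Install Proxmox API dependencies on the control node
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Ensure proxmoxer and requests are present
      ansible.builtin.pip:
        name:
          - proxmoxer
          - requests
```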

Deprecation: “The proxmox content has been moved to community.proxmox”

The Proxmox modules were extracted from community.general into their own community.proxmox collection starting with community.general 9.x. Update your module references from community.general.proxmox_kvm to community.proxmox.proxmox_kvm. The old names still work for now but will be removed in community.general 10.0.0.

SSL certificate verification failures

Proxmox uses self-signed certificates by default. If you see SSL: CERTIFICATE_VERIFY_FAILED, add validate_certs: false to your module parameters. For production setups, install a proper certificate on Proxmox (Let’s Encrypt works well) and keep validation enabled. The community.proxmox documentation covers all authentication options.

API token permission denied (403)

If the token was created with --privsep=1 (the default), it only gets permissions explicitly assigned to it, not the user’s full permissions. Either recreate the token with --privsep=0 for testing, or assign the necessary permissions:

pveum acl modify / --roles Administrator --tokens 'root@pam!ansible-token'

For production, create a dedicated ansible@pve user and assign only the permissions needed (VM.Allocate, VM.Clone, VM.Config.Disk, VM.Config.CPU, VM.Config.Memory, VM.Config.Network, VM.Config.Cloudinit, VM.PowerMgmt, VM.Snapshot, VM.Snapshot.Rollback, Datastore.AllocateSpace).

Clone fails with “disk is already in use”

This happens when the target VMID already exists. Either remove the existing VM first or choose a different newid. The Proxmox API doesn’t overwrite existing VMs, which is a safety feature, not a bug.
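A defensive sketch that checks the target VMID before cloning. One caveat: how proxmox_vm_info behaves for a nonexistent vmid can vary by collection version, hence the failed_when: false guard:

```yaml
- name: Check whether the target VMID is already taken
  community.proxmox.proxmox_vm_info:
    api_host: "{{ proxmox_api_host }}"
    api_user: root@pam
    api_token_id: ansible-token
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: false
    node: "{{ proxmox_default_node }}"
    vmid: 820
  register: existing
  failed_when: false

- name: Abort before cloning if the VMID exists
  ansible.builtin.assert:
    that: existing.proxmox_vms | default([]) | length == 0
    fail_msg: "VMID 820 already exists; choose a different newid"
```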

Quick Reference: Proxmox Module Cheat Sheet

Here’s a summary of the key community.proxmox modules covered in this guide and a few extras worth knowing about. Full module documentation is on Ansible Galaxy.

Module           Purpose                                                   Key Parameters
proxmox_kvm      Create, clone, configure, start, stop, destroy QEMU VMs   state, clone, newid, cores, memory
proxmox_vm_info  List VMs and their details on a node                      node, vmid (optional)
proxmox_snap     Create, list, rollback, remove snapshots                  snapname, state, vmstate
proxmox          Manage LXC containers                                     ostemplate, storage, cores, memory
proxmox_disk     Add, resize, detach disks                                 disk, size, storage
proxmox_nic      Manage VM network interfaces                              interface, bridge, model
proxmox_pool     Manage resource pools                                     poolid, comment

Each of these modules requires the same API authentication parameters (api_host, api_user, api_token_id, api_token_secret). Setting them as group variables (as shown in the inventory section) keeps your playbooks clean. For a broader overview of automation with Ansible, see the Ansible cheat sheet. If you’re managing Docker containers alongside VMs, Ansible handles both from the same control node.
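Another way to avoid repeating the four API parameters in every task is Ansible's module_defaults keyword. A sketch setting play-level defaults for proxmox_kvm:

```yaml
- name: Use module_defaults to keep tasks short
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - group_vars/all/proxmox_vault.yml
    - group_vars/all/proxmox.yml
  module_defaults:
    community.proxmox.proxmox_kvm:
      api_host: "{{ proxmox_api_host }}"
      api_user: root@pam
      api_token_id: ansible-token
      api_token_secret: "{{ proxmox_api_token_secret }}"
      validate_certs: false
  tasks:
    - name: Start a VM without repeating the credentials
      community.proxmox.proxmox_kvm:
        node: "{{ proxmox_default_node }}"
        vmid: 813
        state: started
```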

Related Articles

  • Install oVirt Engine on CentOS Stream 9 / Rocky 9
  • Automate RHEL and CentOS Installation on KVM using Kickstart
  • Configure Kubernetes Dynamic Volume Provisioning With Heketi & GlusterFS
  • How to Provision Vagrant VMs with Ansible
