Install a package, template a config, start a service. Ansible playbooks follow a predictable structure, which is exactly why Claude Code handles them well. Validation is immediate (run with --check and the output tells you exactly what would change). Where it really shines is debugging: paste a failed PLAY RECAP and Claude Code traces the error to the wrong module, wrong service name, or missing package.
This guide is part of the Claude Code for DevOps Engineers series. Every playbook below was executed against real servers (two Proxmox hypervisors running Debian 12). The PLAY RECAP output, the error messages, and the chrony sync data are all from actual runs. If you need Ansible installed on your control machine, handle that first.
Tested March 2026 | Ansible core 2.20.0, Debian 12.11 targets, Python 3.12
What You Need
- Claude Code installed and authenticated
- Ansible 2.14+ installed on your local machine
- SSH key-based access to at least one Linux server
- Tested on: Ansible 2.20.0 (macOS controller), Debian 12 targets. Commands adapt for Rocky Linux and Ubuntu
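Before running any playbook, it's worth confirming that Ansible can actually reach your hosts. A quick ad-hoc ping (assuming your inventory file is named inventory.ini, as in the demos below) verifies SSH access and a working Python interpreter on each target:

```shell
# Ad-hoc connectivity check: succeeds only if SSH key auth and
# the remote Python interpreter both work
ansible all -i inventory.ini -m ping
```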
Generate and Run a Complete Playbook
The first demo shows the full loop: describe what you need, Claude Code writes the playbook, dry-runs it, then executes for real. One conversation, verified result.
Start by creating an inventory file and telling Claude Code what to do:
I have two servers in inventory.ini (10.0.1.10 and 10.0.1.11, user root).
Write a playbook that installs and configures chrony (NTP) with
Cloudflare and Ubuntu time servers, starts the service, and verifies
synchronization is working. Run with --check first, then for real.
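A minimal inventory.ini matching that prompt might look like this (the group name `proxmox` is inferred from the playbook's `hosts:` line; the hostnames match the PLAY RECAP output):

```ini
[proxmox]
pve01 ansible_host=10.0.1.10
pve02 ansible_host=10.0.1.11

[proxmox:vars]
ansible_user=root
```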
Claude Code generates the playbook with proper module usage, handlers, and a verification task:
---
- name: Configure NTP with Chrony
  hosts: proxmox
  become: true

  tasks:
    - name: Install chrony
      ansible.builtin.apt:
        name: chrony
        state: present
        update_cache: true

    - name: Configure chrony to use preferred NTP servers
      ansible.builtin.copy:
        dest: /etc/chrony/sources.d/custom.sources
        content: |
          server time.cloudflare.com iburst prefer
          server ntp.ubuntu.com iburst
        owner: root
        group: root
        mode: '0644'
      notify: restart chrony

    - name: Start and enable chrony
      ansible.builtin.systemd:
        name: chronyd
        state: started
        enabled: true

    - name: Verify time synchronization
      ansible.builtin.command: chronyc tracking
      register: chrony_status
      changed_when: false

    - name: Show sync status
      ansible.builtin.debug:
        var: chrony_status.stdout_lines

  handlers:
    - name: restart chrony
      ansible.builtin.systemd:
        name: chronyd
        state: restarted
Notice the details Claude Code includes by default: the notify handler pattern (only restart chrony if the config actually changes), changed_when: false on the verification command (so it doesn’t show as “changed” on every run), and proper FQCNs (ansible.builtin.apt instead of just apt).
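Assuming the playbook is saved as chrony.yml (the filename is an assumption; the article doesn't name the file), the two-step invocation looks like:

```shell
# Dry run: report what would change without touching the servers
ansible-playbook -i inventory.ini chrony.yml --check --diff

# Real run, after reviewing the predicted changes
ansible-playbook -i inventory.ini chrony.yml
```

The --diff flag is optional but pairs well with --check: it shows the exact file content that would be written, not just which tasks would change.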
The dry run (--check) shows what would change without touching the servers:
PLAY [Configure NTP with Chrony] ***********************************************
TASK [Gathering Facts] *********************************************************
ok: [pve01]
ok: [pve02]
TASK [Install chrony] **********************************************************
ok: [pve02]
ok: [pve01]
TASK [Configure chrony to use preferred NTP servers] ***************************
changed: [pve02]
changed: [pve01]
TASK [Start and enable chrony] *************************************************
ok: [pve02]
ok: [pve01]
PLAY RECAP *********************************************************************
pve01 : ok=6 changed=2 unreachable=0 failed=0 skipped=1
pve02 : ok=6 changed=2 unreachable=0 failed=0 skipped=1
Two changes expected: the config file and the handler restart. Everything else is already in the desired state. After confirming the dry run looks correct, Claude Code runs it for real.
The real run produces live synchronization data from each server:
ok: [pve01] => {
    "chrony_status.stdout_lines": [
        "Reference ID : A29FC801 (time.cloudflare.com)",
        "Stratum : 4",
        "Ref time (UTC) : Sat Mar 28 07:25:34 2026",
        "System time : 0.000505559 seconds slow of NTP time",
        "Last offset : +0.023103384 seconds",
        "RMS offset : 0.023103384 seconds",
        "Frequency : 4.595 ppm fast",
        "Root delay : 0.151221782 seconds",
        "Leap status : Normal"
    ]
}
PLAY RECAP *********************************************************************
pve01 : ok=6 changed=0 unreachable=0 failed=0
pve02 : ok=6 changed=0 unreachable=0 failed=0
Both servers syncing to time.cloudflare.com at stratum 4 with normal leap status. The second run shows changed=0 because the playbook is idempotent: nothing changes when the desired state is already met. This is the gold standard for Ansible playbooks, and Claude Code gets it right.
Debug a Failing Playbook
Two real errors from actual testing. These are the failures that waste the most time when you’re writing playbooks by hand.
Error: Wrong package module for the target OS
A playbook uses ansible.builtin.dnf but the target servers run Debian (which uses apt):
TASK [Install sysstat] *********************************************************
fatal: [pve01]: FAILED! => {
    "ansible_facts": {"pkg_mgr": "apt"},
    "changed": false,
    "msg": ["Could not detect which major revision of dnf is in use, which is required to determine module backend."]
}
PLAY RECAP *********************************************************************
pve01 : ok=1 changed=0 unreachable=0 failed=1
The key clue is "pkg_mgr": "apt" in the ansible_facts. The target uses apt but the playbook called the dnf module. Tell Claude Code “my playbook failed, the targets are Debian” and it immediately switches to ansible.builtin.apt. Better yet, it suggests using ansible.builtin.package (the generic module that auto-detects the package manager), making the playbook work on both RHEL and Debian families.
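The generic-module fix can be sketched as a single task: ansible.builtin.package delegates to whichever backend the detected `pkg_mgr` fact points at, so the same task works on both families.

```yaml
- name: Install sysstat on any supported distro
  ansible.builtin.package:  # auto-selects apt, dnf, etc. per target
    name: sysstat
    state: present
```

The trade-off is that package names must match across distros (sysstat does; nginx vs. httpd does not), so the generic module doesn't remove the need to know your targets.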
Error: “Could not find the requested service”
A playbook tries to restart ntpd but the actual service name is chronyd:
TASK [Restart NTP] *************************************************************
fatal: [pve01]: FAILED! => {
    "changed": false,
    "msg": "Could not find the requested service ntpd: host"
}
PLAY RECAP *********************************************************************
pve01 : ok=1 changed=0 unreachable=0 failed=1
Service names differ across distributions and NTP implementations. ntpd is the legacy NTP daemon, chronyd is the modern replacement. Claude Code diagnoses this by checking what NTP packages are installed on the target (dpkg -l | grep -E 'chrony|ntp') and updating the service name in the playbook. It also adds a comment explaining why chronyd is preferred over ntpd on modern systems.
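The corrected task is a one-word change plus the explanatory comment; this is a sketch of the shape Claude Code produced, not a verbatim copy:

```yaml
- name: Restart NTP
  ansible.builtin.systemd:
    # chronyd is the modern NTP daemon; the legacy ntpd service
    # does not exist on hosts running chrony
    name: chronyd
    state: restarted
```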
Generate a Server Inventory Report
Ansible’s fact-gathering system collects hardware and OS details from every host. Claude Code turns this into structured reports with a single prompt.
Write a playbook that gathers facts from all hosts and displays a
one-line summary per server: hostname, OS, kernel, RAM, CPUs, IP.
Claude Code generates a playbook using the debug module to format Ansible facts:
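A reconstruction of that playbook might look like the following (the exact fact keys and task name are my best guess from the output shown):

```yaml
---
- name: Server inventory report
  hosts: all
  gather_facts: true
  tasks:
    - name: Display system summary
      ansible.builtin.debug:
        msg: >-
          {{ inventory_hostname }} |
          {{ ansible_facts['distribution'] }} {{ ansible_facts['distribution_version'] }} |
          Kernel {{ ansible_facts['kernel'] }} |
          {{ ansible_facts['memtotal_mb'] }}MB RAM |
          {{ ansible_facts['processor_vcpus'] }} vCPUs |
          {{ ansible_facts['default_ipv4']['address'] }}
```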
TASK [Display system summary] **************************************************
ok: [pve01] => {
"msg": "pve01 | Debian 12.11 | Kernel 6.8.12-13-pve | 64190MB RAM | 8 vCPUs | 10.0.1.10"
}
ok: [pve02] => {
"msg": "pve02 | Debian 12.11 | Kernel 6.8.12-13-pve | 63932MB RAM | 8 vCPUs | 10.0.1.11"
}
PLAY RECAP *********************************************************************
pve01 : ok=2 changed=0 unreachable=0 failed=0
pve02 : ok=2 changed=0 unreachable=0 failed=0
Two Proxmox nodes: both Debian 12.11, 64GB RAM, 8 vCPUs, PVE kernel 6.8.12. This replaces manually SSHing into each server to gather specs. With 50 servers in inventory, the playbook runs against all of them in parallel and produces a clean summary in seconds.
What Claude Code Gets Right with Ansible
| Pattern | Claude Code Quality | Notes |
|---|---|---|
| Package installation | Excellent | Uses correct module per OS, adds update_cache |
| Service management | Excellent | enabled: true + state: started every time |
| Handler usage | Good | Notify/handler pattern for config changes |
| FQCNs | Excellent | Always uses ansible.builtin.*, never bare module names |
| Idempotency | Good | Occasionally uses command where a built-in module exists |
| File templating | Good | Uses copy for simple content, template for Jinja2 |
| Multi-OS playbooks | Fair | Sometimes forgets when: ansible_os_family conditionals |
| Role structure | Good | Generates proper tasks/handlers/templates/defaults layout |
| Vault integration | Fair | Knows the syntax but sometimes hardcodes values it shouldn’t |
Practical Ansible Prompts
Always specify the target OS. “Install Nginx” is ambiguous; “Install Nginx on Debian 12 using apt” tells Claude Code which package module to use, what the service name will be, and where the config files live. On Rocky Linux, the package comes from the AppStream repo via dnf; on Debian, it’s the same package name installed via apt. The service name is nginx on both and the config lives in /etc/nginx/, but the package module differs, and that choice matters.
Ask for --check first. Include “run with --check first, then for real” in your prompt. Claude Code runs the dry run, shows you the predicted changes, waits for your confirmation, then executes. This catches wrong module names, missing variables, and permission issues before they touch your servers.
Request a verification task. Claude Code sometimes generates playbooks that install and configure but don’t verify. Adding “and verify the service is running and responding” makes Claude Code add a task that checks systemctl status, curls an endpoint, or runs a version command at the end of the play.
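For a web service, the requested verification task might look like this (the URL, status code, and retry counts are illustrative assumptions):

```yaml
- name: Verify nginx is serving requests
  ansible.builtin.uri:
    url: http://localhost/
    status_code: 200
  register: health
  retries: 3
  delay: 5
  until: health.status == 200
```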
For multi-OS playbooks, be explicit. “This playbook needs to work on both Rocky 10 and Debian 12” triggers Claude Code to add when: ansible_os_family == 'RedHat' conditionals, use the generic package module, or create separate task files per OS family. Without this hint, it generates a single-OS playbook. The Ansible Vault cheat sheet covers encrypting the sensitive variables Claude Code shouldn’t hardcode.
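The kind of conditional this hint produces looks like the following sketch (package names illustrate the common Apache split across families):

```yaml
- name: Install Apache on RedHat-family hosts
  ansible.builtin.dnf:
    name: httpd
    state: present
  when: ansible_os_family == 'RedHat'

- name: Install Apache on Debian-family hosts
  ansible.builtin.apt:
    name: apache2
    state: present
  when: ansible_os_family == 'Debian'
```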
When to Use Claude Code vs Writing Playbooks by Hand
Claude Code is fastest for generating the initial playbook structure: roles, tasks, handlers, templates. A playbook that takes 20 minutes to write from scratch takes 2 minutes to generate and review. Where it adds the most value is converting ad-hoc shell commands into proper Ansible roles. You paste a sequence of ssh root@host "dnf install ..." commands and Claude Code produces a structured, idempotent role with variables, handlers, and verification.
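As a minimal sketch of that conversion, a one-off SSH command becomes an idempotent pair of tasks (sysstat chosen to match the earlier error example):

```yaml
# Before: ssh root@host "dnf install -y sysstat && systemctl enable --now sysstat"
# After: idempotent, multi-distro tasks
- name: Install sysstat
  ansible.builtin.package:
    name: sysstat
    state: present

- name: Enable and start sysstat
  ansible.builtin.systemd:
    name: sysstat
    state: started
    enabled: true
```

Unlike the shell one-liner, re-running these tasks reports ok instead of reinstalling, and a real role would also pull the package name into defaults/main.yml.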
Write by hand when the playbook involves complex Jinja2 logic, multi-play workflows with serial execution, or vault-encrypted variables that Claude Code shouldn’t see. The generated playbook is a starting point that gets you 80% there. The last 20% (environment-specific tweaks, security hardening, inventory group vars) is where human expertise matters.
Part of the Claude Code for DevOps Series
This Ansible spoke connects to the broader series. The Terraform guide covers provisioning the infrastructure that Ansible configures. The SSH server management guide covers the ad-hoc tasks Ansible replaces for fleet management.
- Set Up Claude Code for DevOps Engineers (pillar with safety rules and permissions)
- Manage Servers with Claude Code via SSH
- Build and Debug Docker Containers with Claude Code
- Deploy Infrastructure with Claude Code and Terraform
- Claude Code + Kubernetes: manifests, Helm charts, debugging CrashLoopBackOff
- Claude Code + GitHub Actions: automated PR review, Terraform validation
The Claude Code cheat sheet covers every command and shortcut for quick reference while working through these demos.