Ansible is the go-to automation tool for managing infrastructure at scale. It connects to your servers over SSH, requires no agents on target machines, and uses human-readable YAML playbooks to define desired state. Whether you are provisioning cloud instances, deploying applications, or enforcing configuration baselines across hundreds of nodes, Ansible handles it with minimal overhead and a shallow learning curve.

This guide walks through a complete Ansible setup on Ubuntu 24.04, Debian 13, RHEL 10, and Rocky Linux 10. You will install Ansible, configure inventory and settings, write your first playbook, and move into intermediate topics like variables, handlers, templates, roles, and secret management with Ansible Vault.

Why Ansible

Most configuration management tools require a dedicated agent running on every managed node. Ansible takes a different approach. It is agentless – the control node connects to targets over standard SSH (or WinRM for Windows) and pushes small modules that execute and then remove themselves. This means there is nothing to install, update, or secure on remote hosts beyond an SSH server and Python.

Playbooks are written in YAML, which reads almost like plain English. You describe the desired end state of your systems, and Ansible figures out how to get there. Most modules are idempotent, so running the same playbook twice converges on the same state without breaking anything.

Key characteristics worth noting:

  • Agentless architecture – nothing to install on managed nodes beyond SSH and Python
  • SSH-based communication – leverages existing authentication infrastructure including keys and jump hosts
  • YAML playbooks – declarative, version-controllable, and easy to review in pull requests
  • Massive module library – thousands of modules covering cloud providers, networking gear, containers, databases, and more
  • Idempotent execution – most modules are safe to run repeatedly without side effects

Prerequisites

Before starting, make sure you have the following in place:

  • A control node running Ubuntu 24.04, Debian 13, RHEL 10, or Rocky Linux 10 with sudo privileges
  • One or more managed nodes reachable over SSH from the control node
  • SSH key-based authentication configured between the control node and managed nodes
  • Python 3.9 or later installed on all managed nodes (most modern distros ship with this)

If you have not set up SSH keys yet, generate a pair on the control node and distribute the public key. For a detailed walkthrough, see our guide on how to set up SSH key-based authentication on Linux.

Install Ansible on Ubuntu 24.04 / Debian 13

Ubuntu and Debian ship Ansible in their default repositories, but the version tends to lag behind upstream releases. The official Ansible PPA gives you access to the latest stable builds.

Option 1 – Install from the Official PPA (Recommended)

Start by updating your package index and installing the prerequisite for adding PPAs:

sudo apt update && sudo apt install -y software-properties-common

Add the official Ansible PPA maintained by the Ansible team:

sudo add-apt-repository --yes --update ppa:ansible/ansible

Now install Ansible:

sudo apt install -y ansible

Option 2 – Install from Default Repositories

If you prefer to stick with packages provided by your distribution and do not need the absolute latest version:

sudo apt update && sudo apt install -y ansible

This works on both Ubuntu 24.04 and Debian 13. On Debian, the PPA method does not apply – use the default repo or the pipx method described later in this guide.

Install Ansible on RHEL 10 / Rocky Linux 10

On Red Hat-based distributions, Ansible lives in the EPEL (Extra Packages for Enterprise Linux) repository. Enable it first, then install.

Enable the EPEL repository:

sudo dnf install -y epel-release

On RHEL 10, if epel-release is not available directly, install it from the Fedora EPEL URL:

sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-10.noarch.rpm

With EPEL enabled, install Ansible:

sudo dnf install -y ansible

This procedure is identical on Rocky Linux 10. The EPEL repository pulls in the community-maintained Ansible package that tracks upstream releases reasonably well.

Install Ansible via pipx (Distribution-Independent Method)

If you want the latest Ansible release regardless of your distribution, or you need to run multiple Ansible versions side by side, pipx is the cleanest approach. It installs Python applications in isolated virtual environments while still exposing their commands globally.

Install pipx using your system package manager. On Ubuntu/Debian:

sudo apt update && sudo apt install -y pipx

On RHEL/Rocky:

sudo dnf install -y pipx

Make sure the pipx binary path is in your shell PATH:

pipx ensurepath

Open a new shell or source your profile, then install Ansible:

pipx install --include-deps ansible

The --include-deps flag ensures that supporting tools like ansible-playbook, ansible-vault, and ansible-galaxy are all available on your PATH. To upgrade later, run pipx upgrade ansible.

Verify the Installation

Regardless of the installation method you chose, confirm Ansible is working:

ansible --version

You should see output similar to:

ansible [core 2.17.x]
  config file = None
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.12.x

The exact version numbers will depend on when you install, but anything in the 2.17+ range is current. If the command is not found, double-check that your PATH includes the installation directory – especially relevant for pipx installs.

Configure the Ansible Inventory File

The inventory file tells Ansible which hosts to manage and how to group them. Ansible looks for a default inventory at /etc/ansible/hosts, but for any real project you should maintain a project-level inventory checked into version control.

Global Inventory at /etc/ansible/hosts

Create the directory and file if they do not already exist:

sudo mkdir -p /etc/ansible
sudo tee /etc/ansible/hosts > /dev/null <<'EOF'
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com ansible_user=dbadmin ansible_port=2222

[all:vars]
ansible_python_interpreter=/usr/bin/python3
EOF

Hosts are organized into groups using INI-style brackets. You can assign per-host variables inline (like a custom SSH port or username) and set group-wide variables under [groupname:vars] or [all:vars].

Project-Level Inventory

For production work, keep your inventory alongside your playbooks. Create a project directory and add an inventory file:

mkdir -p ~/ansible-project
cat > ~/ansible-project/inventory.ini <<'EOF'
[webservers]
192.168.1.10
192.168.1.11

[dbservers]
192.168.1.20

[staging:children]
webservers
dbservers
EOF

The [staging:children] block creates a parent group that includes both webservers and dbservers. This lets you target everything in the staging environment with a single group name. You can also write inventory files in YAML format if you prefer – Ansible supports both.
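As a sketch, the same staging layout expressed as a YAML inventory (saved as something like inventory.yml – the filename is an assumption) would look like this:

```yaml
# YAML equivalent of inventory.ini above
staging:
  children:
    webservers:
      hosts:
        192.168.1.10:
        192.168.1.11:
    dbservers:
      hosts:
        192.168.1.20:
```

Both formats behave identically; YAML tends to scale better once hosts carry many per-host variables.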

Test Connectivity with ansible ping

Before writing any playbooks, verify that Ansible can reach your managed nodes. The ping module is the standard connectivity check – it connects via SSH, verifies a usable Python interpreter exists on the remote host, and returns a pong response.

Using the global inventory:

ansible all -m ping

Using a project inventory:

ansible all -i ~/ansible-project/inventory.ini -m ping

Successful output looks like this:

192.168.1.10 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.1.11 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

If you see permission denied errors, make sure your SSH key is distributed to the target hosts. If you see a Python interpreter warning, set ansible_python_interpreter in your inventory as shown above.

Ansible Configuration with ansible.cfg

Ansible reads configuration from several locations in this order of precedence: the ANSIBLE_CONFIG environment variable, ansible.cfg in the current directory, ~/.ansible.cfg, and finally /etc/ansible/ansible.cfg. For project work, place an ansible.cfg in your project root.

cat > ~/ansible-project/ansible.cfg <<'EOF'
[defaults]
inventory = inventory.ini
remote_user = deploy
private_key_file = ~/.ssh/ansible_key
host_key_checking = False
forks = 20
timeout = 30
retry_files_enabled = False
stdout_callback = yaml

[privilege_escalation]
become = True
become_method = sudo
become_ask_pass = False
EOF

Here is what the key settings do:

  • forks – the number of parallel processes Ansible spawns. The default of 5 is conservative. On a control node with decent resources, 20 to 50 is reasonable for managing larger fleets.
  • timeout – seconds to wait for an SSH connection before giving up. Increase this if you manage hosts across high-latency links.
  • host_key_checking – setting this to False prevents Ansible from failing on unknown SSH host keys. Useful in dynamic environments where hosts come and go. In locked-down production, you may want to leave this enabled and manage known_hosts separately.
  • stdout_callback = yaml – makes playbook output much more readable by formatting it as YAML instead of the default JSON blobs.
  • become – enables privilege escalation by default so you do not need to pass --become on every run.

Write Your First Playbook – Install Nginx on Remote Servers

A playbook is a YAML file containing one or more plays. Each play targets a group of hosts and defines a list of tasks to execute. Here is a practical playbook that installs and starts Nginx on your web servers.

Create the playbook file:

cat > ~/ansible-project/nginx-setup.yml <<'PLAYBOOK'
---
- name: Install and configure Nginx on web servers
  hosts: webservers
  become: true

  tasks:
    - name: Update apt cache (Debian/Ubuntu)
      ansible.builtin.apt:
        update_cache: yes
        cache_valid_time: 3600
      when: ansible_os_family == "Debian"

    - name: Install Nginx on Debian/Ubuntu
      ansible.builtin.apt:
        name: nginx
        state: present
      when: ansible_os_family == "Debian"

    - name: Install Nginx on RHEL/Rocky
      ansible.builtin.dnf:
        name: nginx
        state: present
      when: ansible_os_family == "RedHat"

    - name: Start and enable Nginx service
      ansible.builtin.systemd:
        name: nginx
        state: started
        enabled: true

    - name: Allow HTTP through firewall (RHEL/Rocky)
      ansible.posix.firewalld:
        service: http
        permanent: true
        state: enabled
        immediate: true
      when: ansible_os_family == "RedHat"

    - name: Verify Nginx is serving content
      ansible.builtin.uri:
        url: "http://{{ inventory_hostname }}"
        status_code: 200
      delegate_to: localhost
      become: false
PLAYBOOK

This playbook demonstrates several patterns you will use constantly: conditional execution with when, the fully qualified collection name for modules, service management with systemd, and delegating a verification task back to the control node.

Run the Playbook

Execute the playbook from your project directory:

cd ~/ansible-project && ansible-playbook -i inventory.ini nginx-setup.yml

If your ansible.cfg already specifies the inventory file (as ours does), you can shorten this to:

cd ~/ansible-project && ansible-playbook nginx-setup.yml

Useful flags for playbook runs:

  • --check – dry run mode, shows what would change without actually changing anything
  • --diff – shows file diffs when templates or files are modified
  • --limit webservers – restrict execution to a specific group or host
  • -v, -vv, -vvv – increase verbosity for debugging
  • --tags deploy – only run tasks tagged with “deploy”
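Tags are attached in the playbook itself. A hypothetical task (the name and paths are assumptions for illustration) that --tags deploy would select:

```yaml
- name: Copy application release
  ansible.builtin.copy:
    src: files/app.tar.gz        # assumed artifact path, for illustration
    dest: /opt/app/app.tar.gz
  tags:
    - deploy
```

Tasks without a matching tag are skipped entirely when --tags is given.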

For a deeper look at playbook patterns and structuring larger automation projects, check out our guide on Ansible playbooks for Linux system administration.

Variables, Handlers, and Templates

Once you move past simple task lists, you need variables to make playbooks reusable, handlers to react to changes, and templates to generate configuration files dynamically.

Variables

Variables can be defined at multiple levels – in the inventory, in playbook vars blocks, in separate files, or passed on the command line. Here is a playbook with inline variables and a vars file:

---
- name: Configure web application
  hosts: webservers
  become: true
  vars:
    app_port: 8080
    app_user: webapp
    max_connections: 1024
  vars_files:
    - vars/secrets.yml

  tasks:
    - name: Create application user
      ansible.builtin.user:
        name: "{{ app_user }}"
        shell: /bin/bash
        create_home: true

    - name: Deploy Nginx configuration
      ansible.builtin.template:
        src: templates/nginx-app.conf.j2
        dest: /etc/nginx/sites-available/app.conf
        owner: root
        group: root
        mode: '0644'
      notify: Reload Nginx

  handlers:
    - name: Reload Nginx
      ansible.builtin.systemd:
        name: nginx
        state: reloaded
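Another common pattern, not shown above, is a group_vars directory next to your inventory. Ansible automatically loads group_vars/<groupname>.yml for every host in that group – a minimal sketch:

```yaml
# group_vars/webservers.yml -- applied automatically to every host in [webservers]
app_port: 8080
app_user: webapp
max_connections: 1024
```

This keeps playbooks free of environment-specific values and lets each inventory carry its own settings.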

Handlers

Handlers are tasks that only run when notified by another task that reports a change. In the example above, Nginx only reloads if the configuration template actually changed. If you run the playbook again and the template content is identical, the handler does not fire. This prevents unnecessary service restarts.
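Note that handlers normally run at the end of the play, after all tasks. If a later task depends on the handler having already run, you can force pending handlers to fire early with the meta module – a sketch:

```yaml
- name: Run any pending handlers now
  ansible.builtin.meta: flush_handlers
```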

Jinja2 Templates

Templates use the Jinja2 engine to generate files with dynamic content. Create a templates directory in your project and add a template file:

mkdir -p ~/ansible-project/templates
cat > ~/ansible-project/templates/nginx-app.conf.j2 <<'TEMPLATE'
upstream app_backend {
    server 127.0.0.1:{{ app_port }};
}

server {
    listen 80;
    server_name {{ inventory_hostname }};

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    access_log /var/log/nginx/{{ inventory_hostname }}_access.log;
    error_log /var/log/nginx/{{ inventory_hostname }}_error.log;
}
TEMPLATE

Variables inside {{ }} get replaced at runtime with their actual values. You can use conditionals, loops, filters, and any Jinja2 feature inside templates. This is where Ansible gets genuinely powerful for managing configuration across heterogeneous environments.
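For instance, a template fragment (a sketch reusing the app_port and max_connections variables from earlier) can branch on whether a variable is defined and loop over an inventory group:

```jinja
{% if max_connections is defined %}
worker_connections {{ max_connections }};
{% endif %}

upstream app_backend {
{% for host in groups['webservers'] %}
    server {{ host }}:{{ app_port }};
{% endfor %}
}
```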

Roles – Organizing Playbooks at Scale

As your automation grows, stuffing everything into a single playbook becomes unmaintainable. Roles provide a standardized directory structure for breaking playbooks into reusable components.

Generate a role skeleton with ansible-galaxy:

cd ~/ansible-project && ansible-galaxy init roles/nginx

This creates the following structure:

roles/nginx/
├── defaults/
│   └── main.yml        # default variables (lowest precedence)
├── files/              # static files to copy to targets
├── handlers/
│   └── main.yml        # handler definitions
├── meta/
│   └── main.yml        # role metadata and dependencies
├── tasks/
│   └── main.yml        # primary task list
├── templates/           # Jinja2 templates
├── tests/
│   └── test.yml
└── vars/
    └── main.yml        # role variables (higher precedence than defaults)

Move your Nginx tasks into roles/nginx/tasks/main.yml, templates into the templates directory, and handlers into the handlers file. Then reference the role from a playbook:

---
- name: Set up web servers
  hosts: webservers
  become: true
  roles:
    - role: nginx
      vars:
        app_port: 8080

Roles also support dependencies declared in meta/main.yml, so a role can automatically pull in other roles it requires. For sharing roles across teams or the community, publish them to Ansible Galaxy or a private Galaxy server.
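A sketch of such a dependency declaration (the common role here is hypothetical):

```yaml
# roles/nginx/meta/main.yml
dependencies:
  - role: common            # assumed base role, runs before nginx's own tasks
    vars:
      firewall_enabled: true
```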

Ansible Vault – Managing Secrets

Storing passwords, API keys, and certificates in plain text inside your repository is a security incident waiting to happen. Ansible Vault encrypts sensitive data so you can safely commit it to version control.

Create an encrypted variables file:

ansible-vault create ~/ansible-project/vars/secrets.yml

This opens your editor where you can add sensitive variables:

db_password: "s3cur3_p@ssw0rd"
api_token: "ghp_xxxxxxxxxxxxxxxxxxxx"
ssl_private_key: |
  -----BEGIN PRIVATE KEY-----
  MIIEvgIBADANBgkqhki...
  -----END PRIVATE KEY-----

Common Vault operations:

# Edit an existing vault file
ansible-vault edit vars/secrets.yml

# Encrypt an existing plain-text file
ansible-vault encrypt vars/credentials.yml

# Decrypt a file (use with caution)
ansible-vault decrypt vars/secrets.yml

# View contents without decrypting the file on disk
ansible-vault view vars/secrets.yml

# Change the vault password
ansible-vault rekey vars/secrets.yml

When running playbooks that reference vaulted files, pass the vault password:

ansible-playbook nginx-setup.yml --ask-vault-pass

For automated pipelines where interactive prompts are not an option, store the vault password in a file and reference it:

ansible-playbook nginx-setup.yml --vault-password-file ~/.vault_pass

Make sure .vault_pass is in your .gitignore and has restrictive file permissions (chmod 600). You can also set vault_password_file in your ansible.cfg to avoid typing the flag every time.
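The ansible.cfg version of that setting is a one-line fragment:

```ini
# ansible.cfg -- avoids passing --vault-password-file on every run
[defaults]
vault_password_file = ~/.vault_pass
```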

Troubleshooting Common Issues

After years of working with Ansible in production, these are the issues that come up most often. Knowing how to diagnose them quickly saves hours of frustration.

SSH Key Authentication Failures

If Ansible cannot connect to a host, the first thing to check is whether you can SSH manually:

ssh -i ~/.ssh/ansible_key -o StrictHostKeyChecking=no deploy@192.168.1.10

Common causes of SSH failures:

  • Wrong key permissions – the private key file must be chmod 600 or SSH will refuse to use it
  • Public key not in the remote user’s ~/.ssh/authorized_keys
  • SELinux on RHEL/Rocky preventing SSH from reading authorized_keys – run restorecon -Rv ~/.ssh on the target
  • Firewall blocking port 22 or a non-standard SSH port not specified in the inventory

Sudo Without Password

Ansible needs passwordless sudo on managed nodes for privilege escalation to work smoothly. If you see “Missing sudo password” errors, configure the deploy user on each target:

echo "deploy ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/deploy

Alternatively, if your security policy requires sudo passwords, set become_ask_pass = True in ansible.cfg or pass --ask-become-pass on the command line. For environments where different hosts have different sudo passwords, use the ansible_become_password variable in your inventory (encrypted with Vault, naturally).
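A sketch of the per-host pattern, with the become password pulled from a vault-encrypted variable (the variable name vault_db1_become_pass is an assumption – define it in a vaulted vars file):

```ini
[dbservers]
db1.example.com ansible_become_password="{{ vault_db1_become_pass }}"
```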

Python Interpreter Issues

Ansible modules execute on the remote host using Python. If the target system has Python installed in a non-standard location, or has both Python 2 and Python 3 with the wrong one as default, you will see interpreter errors.

Fix this by setting the interpreter explicitly in your inventory:

[all:vars]
ansible_python_interpreter=/usr/bin/python3

Or for a specific host:

[dbservers]
db1.example.com ansible_python_interpreter=/usr/bin/python3.12

On minimal server installations where Python is missing entirely, you can use the raw module to bootstrap it before running normal tasks:

- name: Bootstrap Python on minimal hosts
  hosts: all
  gather_facts: false
  tasks:
    - name: Install Python
      ansible.builtin.raw: apt-get install -y python3 || dnf install -y python3
      changed_when: true

Setting gather_facts: false is critical here because fact gathering itself requires Python on the remote host.

Slow Playbook Execution

If your playbook runs feel sluggish across many hosts, try these adjustments in ansible.cfg:

[defaults]
forks = 30
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s

Pipelining reduces the number of SSH operations per task. Smart gathering with fact caching avoids re-gathering facts on every run. Bumping forks lets Ansible work on more hosts simultaneously. These three changes together can cut playbook runtimes dramatically on larger inventories.

Next Steps

You now have a working Ansible control node, a configured inventory, and the knowledge to write playbooks, manage secrets, and troubleshoot the issues that show up in real environments. From here, you can explore dynamic inventories for cloud environments, Ansible collections from Galaxy, and integration with CI/CD pipelines for fully automated infrastructure delivery.

For managing your Linux servers with Ansible after this initial setup, see our guide on automating Linux server setup with Ansible.
