Install and Configure Ansible on RHEL and Debian Based Linux

Managing a handful of servers by hand is fine. Past ten or fifteen, you start making mistakes: a config drifts on one box, a package gets missed on another, and suddenly half your fleet is out of sync. Ansible solves this by letting you define your infrastructure as YAML playbooks and pushing changes over plain SSH. No agents to install on managed nodes, no central database to maintain, no PKI infrastructure to bootstrap.

This guide walks through installing Ansible on both RHEL and Debian family distributions, configuring SSH key authentication, setting up your first inventory, and running ad-hoc commands against remote hosts. We cover two installation methods: the quick package manager route (which gives you ansible-core 2.16.x) and the pip-in-virtualenv approach for the latest release with all community collections.

Verified working: April 2026 on Rocky Linux 10.1, Ubuntu 24.04 LTS with ansible-core 2.16.x (package manager) and ansible 13.5.0 / ansible-core 2.20.4 (pip)

Prerequisites

Before starting, confirm the following are in place:

  • Control node: Rocky Linux 10, AlmaLinux 10, RHEL 10, Ubuntu 24.04, or Debian 13 with Python 3.10 or newer
  • Managed nodes: any Linux system with SSH enabled and Python 3 installed (ships by default on all modern distros)
  • SSH key-based authentication between the control node and all managed nodes (we set this up below)
  • sudo or root access on the control node for installing packages
  • Tested on: Rocky Linux 10.1 (Python 3.12.11), Ubuntu 24.04 LTS (Python 3.12.3)

The examples in this guide use three machines: a control node at 10.0.1.10, and two managed nodes at 10.0.1.20 and 10.0.1.30.

Install Ansible on RHEL Family (Rocky Linux, AlmaLinux, RHEL)

RHEL-family distributions offer two practical installation paths. The package manager gives you a stable, vendor-supported build. The pip route in a virtual environment gets you the latest upstream release with the full collection ecosystem.

Method 1: Package Manager (Quick Start)

The ansible-core package lives in the default AppStream repository on Rocky Linux 10 and RHEL 10. No extra repos needed.

sudo dnf install -y ansible-core

Confirm the installed version:

ansible --version

The output should show ansible-core 2.16.14 with Python 3.12:

ansible [core 2.16.14]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.12/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.12.11 (main, Apr  8 2026, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-1)]
  jinja version = 3.1.6
  libyaml = True

One thing to know: the ansible-core package from the RHEL repos ships without community collections. You get the engine and roughly 70 built-in modules (command, copy, file, yum, service, and so on), but nothing from ansible.posix, community.general, or other namespaced collections. For many server automation tasks, the built-ins are enough. If you need collections, install them individually with ansible-galaxy collection install or use Method 2.

Method 2: pip in a Virtual Environment (Recommended for Latest)

This approach gives you ansible 13.5.0 with ansible-core 2.20.4, which is significantly newer than what the package manager offers. The virtual environment keeps everything isolated from system Python.

On Rocky Linux 10, the python3-virtualenv package does not exist in the repos; instead, use the venv module that is built into Python 3. Make sure Python and pip are installed:

sudo dnf install -y python3 python3-pip

Create and activate a virtual environment:

python3 -m venv ~/ansible-venv
source ~/ansible-venv/bin/activate

Your shell prompt should now show (ansible-venv) at the beginning. Install the full Ansible package with pip:

pip install ansible

Verify the installation:

ansible --version

You should see ansible-core 2.20.4 with the venv Python path:

ansible [core 2.20.4]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/ansible-venv/lib/python3.12/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /root/ansible-venv/bin/ansible
  python version = 3.12.11 (main, Apr  8 2026, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-1)]
  jinja version = 3.1.6
  libyaml = True

The pip-installed full ansible package includes 85+ community collections out of the box. Activate the venv each time you work with Ansible (source ~/ansible-venv/bin/activate), or call the binaries by full path, e.g. ~/ansible-venv/bin/ansible, which is convenient in scripts and cron jobs.

Install Ansible on Debian Family (Ubuntu, Debian)

Ubuntu 24.04 ships a reasonably current Ansible in its default repositories, and unlike RHEL, the Ubuntu package includes a solid set of community collections.

Method 1: Package Manager

Update the package index and install Ansible:

sudo apt update
sudo apt install -y ansible

Check the version:

ansible --version

Ubuntu 24.04 installs ansible-core 2.16.3:

ansible [core 2.16.3]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.12.3 (main, Feb  4 2025, 14:48:35) [GCC 13.3.0]
  jinja version = 3.1.2
  libyaml = True

The Ubuntu package bundles 35+ collections including ansible.posix, community.general, community.mysql, and community.postgresql. For most automation workflows, this is ready to use out of the box.

There is also an official PPA at ppa:ansible/ansible, but as of April 2026 it provides the same version as the default Ubuntu repos. It may pull ahead after future upstream releases.

Method 2: pip in a Virtual Environment

For the latest upstream release on Ubuntu, the pip approach works the same way. One difference: Ubuntu requires the python3.12-venv package before you can create virtual environments.

sudo apt update
sudo apt install -y python3-pip python3.12-venv

Create the virtual environment and install Ansible:

python3 -m venv ~/ansible-venv
source ~/ansible-venv/bin/activate
pip install ansible

The version output will match what we saw on Rocky Linux: ansible-core 2.20.4 with Jinja2 3.1.6. The pip installation is identical across distros since it pulls directly from PyPI.

Verify the Installation

Regardless of which method you chose, run a quick sanity check. The ansible --version output confirms the core version, Python interpreter path, and config file location. For pip installs, the config file line will show None until you create one (covered in the next section).

If you installed the full ansible package (pip or Ubuntu apt), list the bundled collections:

ansible-galaxy collection list

This prints every installed collection with its version. On a pip install, expect 85+ entries:

# /root/ansible-venv/lib/python3.12/site-packages/ansible_collections
Collection                               Version
---------------------------------------- -------
amazon.aws                               9.2.0
ansible.netcommon                        7.2.0
ansible.posix                            2.1.0
ansible.utils                            5.1.2
ansible.windows                          2.6.0
community.general                        10.3.0
community.mysql                          3.12.0
community.postgresql                     4.3.0
community.docker                         4.3.0
...
(85+ collections total)

If you installed ansible-core from the RHEL repos, this command returns an empty list. That’s expected. Install individual collections as needed:

ansible-galaxy collection install community.general
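
For more than one or two collections, a requirements file keeps installs reproducible across machines. A minimal sketch (the filename requirements.yml is the usual convention; the version pin is illustrative):

```yaml
# requirements.yml -- install everything listed with:
#   ansible-galaxy collection install -r requirements.yml
collections:
  - name: community.general
  - name: ansible.posix
    version: ">=1.5.0"   # optional version constraint
```

Commit this file alongside your playbooks so a fresh control node can be set up with a single command.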

Configure SSH Key Authentication

Ansible connects to managed nodes over SSH. Password authentication works but gets tedious fast, especially across dozens of hosts. SSH keys are the standard approach.

Generate an Ed25519 key pair on the control node (skip this if you already have one at ~/.ssh/id_ed25519):

ssh-keygen -t ed25519 -C "ansible-control"

Accept the default path and set a passphrase if you want the extra layer of security. Copy the public key to each managed node:

ssh-copy-id [email protected]
ssh-copy-id [email protected]

Replace deploy with whatever user account you use for automation on the target hosts. Test that passwordless login works:

ssh [email protected] "hostname"

If the hostname prints without a password prompt, you’re set.

Using ssh-agent for Passphrase-Protected Keys

If your key has a passphrase (it should, in production), start the SSH agent and add your key so Ansible doesn’t prompt for the passphrase on every connection:

eval $(ssh-agent -s)
ssh-add ~/.ssh/id_ed25519

Enter the passphrase once. The agent holds it in memory for the duration of your session. All subsequent SSH connections, including those from Ansible, will use the cached key.

Configure ansible.cfg

Ansible reads its configuration from the first file it finds in this order:

  1. ANSIBLE_CONFIG environment variable (path to a specific file)
  2. ./ansible.cfg in the current directory
  3. ~/.ansible.cfg in your home directory
  4. /etc/ansible/ansible.cfg (system-wide default)

The project-level config (./ansible.cfg) is the most practical choice. It lives alongside your playbooks and inventory, so each project can have its own settings. Create a project directory and the config file:

mkdir -p ~/ansible-project
cd ~/ansible-project

Open the config file:

vi ~/ansible-project/ansible.cfg

Add the following configuration:

[defaults]
inventory = ./inventory
remote_user = deploy
host_key_checking = False
retry_files_enabled = False
timeout = 30

[privilege_escalation]
become = True
become_method = sudo
become_ask_pass = False

A few notes on these settings. host_key_checking = False prevents Ansible from rejecting new hosts that aren’t in known_hosts yet, which is useful during initial provisioning. In a locked-down environment, you might want to leave this enabled and pre-populate known_hosts instead. retry_files_enabled = False stops Ansible from creating .retry files after failed playbook runs, which just clutters the project directory.

Verify that Ansible picks up the project config:

cd ~/ansible-project
ansible-config dump --only-changed

The output lists every setting that differs from the built-in defaults:

DEFAULT_BECOME(/root/ansible-project/ansible.cfg) = True
DEFAULT_BECOME_METHOD(/root/ansible-project/ansible.cfg) = sudo
DEFAULT_HOST_LIST(/root/ansible-project/ansible.cfg) = ['/root/ansible-project/inventory']
DEFAULT_REMOTE_USER(/root/ansible-project/ansible.cfg) = deploy
DEFAULT_TIMEOUT(/root/ansible-project/ansible.cfg) = 30
HOST_KEY_CHECKING(/root/ansible-project/ansible.cfg) = False
RETRY_FILES_ENABLED(/root/ansible-project/ansible.cfg) = False

Each line shows the setting name, the file it came from, and its current value. If you see your ansible.cfg path in parentheses, Ansible is reading the right file.

Create Your First Inventory

The inventory file tells Ansible which hosts to manage and how to group them. The simplest format is INI. Create it inside your project directory:

vi ~/ansible-project/inventory

Add your hosts organized into groups:

[webservers]
web01 ansible_host=10.0.1.20

[dbservers]
db01 ansible_host=10.0.1.30

[all:vars]
ansible_python_interpreter=/usr/bin/python3

The [all:vars] section sets variables that apply to every host. Setting ansible_python_interpreter explicitly pins the remote interpreter and suppresses the interpreter auto-discovery warning on systems where more than one Python might be present.
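
INI works fine for flat host lists, but the same inventory can also be written in YAML, which scales better as per-host variables accumulate. A sketch equivalent to the file above (the filename inventory.yml is an assumption; pass it with -i or point inventory = at it in ansible.cfg):

```yaml
# inventory.yml -- YAML equivalent of the INI inventory above
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
  children:
    webservers:
      hosts:
        web01:
          ansible_host: 10.0.1.20
    dbservers:
      hosts:
        db01:
          ansible_host: 10.0.1.30
```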

Test connectivity to all hosts with the ping module. This doesn’t send ICMP packets; it connects over SSH and checks that Python works on the remote side:

cd ~/ansible-project
ansible all -m ping

A successful response looks like this:

web01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
db01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Every host returned "pong", which confirms SSH connectivity, Python availability, and privilege escalation all work.

Test Ad-Hoc Commands

Ad-hoc commands are one-liners that run a single module against your inventory. They are useful for quick tasks before you write a full playbook.

Check the uptime on all managed nodes:

ansible all -m command -a "uptime"

The output shows each host’s uptime:

web01 | CHANGED | rc=0 >>
 14:23:01 up 12 days,  3:45,  1 user,  load average: 0.08, 0.03, 0.01
db01 | CHANGED | rc=0 >>
 14:23:01 up 12 days,  3:42,  1 user,  load average: 0.12, 0.05, 0.02

Gather OS distribution facts from all hosts using the setup module with a filter:

ansible all -m setup -a "filter=ansible_distribution*"

This returns structured data about each host’s operating system:

web01 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Rocky",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/redhat-release",
        "ansible_distribution_file_variety": "RedHat",
        "ansible_distribution_major_version": "10",
        "ansible_distribution_release": "Blue Onyx",
        "ansible_distribution_version": "10.1"
    },
    "changed": false
}
db01 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Ubuntu",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "24",
        "ansible_distribution_release": "noble",
        "ansible_distribution_version": "24.04"
    },
    "changed": false
}

The setup module is how Ansible gathers facts about remote systems. These facts become available as variables in playbooks, so you can write conditional logic like “install this package on RHEL, that package on Ubuntu” without hardcoding anything.

Target a specific group instead of all hosts:

ansible webservers -m command -a "df -h /"

This runs only against hosts in the [webservers] group.
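
Ad-hoc commands also preview what playbooks do with the facts gathered above. As a sketch, a short playbook can branch on ansible_facts['os_family'] to install the right Apache package per distro family (the filename webserver.yml and the task layout are illustrative):

```yaml
# webserver.yml -- fact-based conditionals for a mixed RHEL/Debian fleet
- hosts: webservers
  become: true
  tasks:
    - name: Install Apache on RHEL-family hosts
      ansible.builtin.dnf:
        name: httpd
        state: present
      when: ansible_facts['os_family'] == "RedHat"

    - name: Install Apache on Debian-family hosts
      ansible.builtin.apt:
        name: apache2
        state: present
        update_cache: true
      when: ansible_facts['os_family'] == "Debian"
```

Run it from the project directory with ansible-playbook webserver.yml; tasks whose when condition is false are reported as skipped.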

Difference Between ansible and ansible-core

This catches people off guard, especially when switching between distros or installation methods. The two packages are not the same thing.

ansible-core is the engine. It includes the ansible, ansible-playbook, ansible-galaxy, and ansible-vault command-line tools, plus roughly 70 built-in modules that cover the most common tasks (file management, package installation, service control, user management). This is what dnf install ansible-core gives you on RHEL-family systems.

ansible (the full package) bundles ansible-core together with 85+ community collections maintained under the ansible namespace on PyPI. These collections add thousands of modules for cloud providers (AWS, Azure, GCP), network equipment (Cisco, Juniper, Arista), container platforms (Docker, Kubernetes), databases (MySQL, PostgreSQL), and more. This is what pip install ansible or apt install ansible on Ubuntu gives you.

The version numbers reflect this split. As of April 2026, ansible-core 2.20.4 is the engine version, while ansible 13.5.0 is the full package version that wraps it. They follow independent release cycles.

Here is a quick reference for what each installation method provides:

Installation method                      Package                      Version (April 2026)    Collections included
dnf install ansible-core (Rocky/RHEL)    ansible-core                 2.16.14                 None (built-in modules only)
apt install ansible (Ubuntu 24.04)       ansible-core + collections   2.16.3                  35+ collections
pip install ansible (any distro)         ansible (full)               13.5.0 / core 2.20.4    85+ collections
pip install ansible-core (any distro)    ansible-core only            2.20.4                  None

If you need a specific collection, check whether it’s already installed with ansible-galaxy collection list | grep collection_name before installing it separately.

Troubleshooting

Permission denied (publickey)

This means the SSH key wasn’t copied to the target host, or the remote user doesn’t have the key in its authorized_keys file. Verify by running ssh [email protected] manually. If it prompts for a password, the key copy failed. Run ssh-copy-id [email protected] again and check that ~deploy/.ssh/authorized_keys on the remote host contains your public key.

No module named ‘ansible’

This happens when you installed Ansible in a virtual environment but forgot to activate it. The system Python doesn’t know about packages inside the venv. Activate it first:

source ~/ansible-venv/bin/activate
ansible --version

If you installed Ansible system-wide and still see this error, check which Python is being used with which python3 and confirm that Ansible’s packages are in that interpreter’s site-packages.

SSH service name differences between distros

On RHEL-family systems, the SSH daemon service is called sshd. On Debian and Ubuntu, it’s ssh. This matters when you write playbooks that restart the SSH service or check its status. If a playbook works on Rocky but fails on Ubuntu with “Could not find the requested service sshd”, that’s why. The ansible.builtin.service module abstracts the init system (systemd versus SysV) but not the service name itself, so use Ansible facts to set the name conditionally per distro family.
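
A sketch of the fact-based approach (the variable name ssh_service is illustrative):

```yaml
# Restart the SSH daemon under its distro-specific service name
- hosts: all
  become: true
  vars:
    ssh_service: "{{ 'ssh' if ansible_facts['os_family'] == 'Debian' else 'sshd' }}"
  tasks:
    - name: Restart SSH with the correct name for this distro
      ansible.builtin.service:
        name: "{{ ssh_service }}"
        state: restarted
```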

python3-virtualenv not found on RHEL

On Rocky Linux 10 and RHEL 10, the python3-virtualenv package does not exist in the default repos. If you find guides telling you to install it, they’re outdated or written for a different distro. Use the built-in venv module that ships with Python 3.12:

python3 -m venv ~/ansible-venv

This works without installing any extra packages on RHEL-family systems.

“Ansible requires a minimum of Python2 version 2.7 or Python3 version 3.6” on managed nodes

Modern distros (Rocky 10, Ubuntu 24.04, Debian 13) all ship Python 3.12, so this error only appears when managing older systems. If you need to manage a legacy host, install Python 3 on it with the system package manager, then set ansible_python_interpreter=/usr/bin/python3 for that host in your inventory.

Host key verification failed

If you didn’t set host_key_checking = False in your ansible.cfg and the managed node isn’t in your known_hosts file, Ansible will refuse to connect. Either SSH to the host manually once to accept the key, or add the setting to your config file. For automated provisioning of new servers, disabling the check is standard practice. For production environments with a stable fleet, keeping it enabled and distributing known_hosts via configuration management is the more secure approach.
