Manage Docker Containers with Ansible on Rocky Linux and Ubuntu

Most Docker tutorials stop at installing containers by hand. That works for a single server, but the moment you have five, ten, or fifty hosts running containers, you need something repeatable. Ansible gives you that: define the desired state of your containers in YAML, run the playbook, and every host converges to the same configuration. No SSH loops, no shell scripts checking if something already exists.

Original content from computingforgeeks.com - post 64210

This guide covers the full spectrum of managing Docker with Ansible using the community.docker collection: installing Docker across mixed RHEL and Debian fleets, managing the container lifecycle, building images, creating custom networks, deploying with Docker Compose, and handling secrets with Ansible Vault. Every playbook here was tested on real VMs, and the errors you will see are errors we actually hit.

Tested March 2026 on Rocky Linux 10.1 and Ubuntu 24.04.4 LTS with ansible-core 2.16.14, community.docker 5.1.0, Docker CE 29.3.1, Compose v5.1.1

Prerequisites

You will need the following to work through this guide:

  • Control node: Rocky Linux 10 or Ubuntu 24.04 with ansible-core installed
  • Managed nodes: One or more Rocky Linux 10 or Ubuntu 24.04 servers (we tested with one of each)
  • SSH key-based authentication from the control node to all managed nodes
  • A user with sudo privileges on all managed nodes
  • Internet connectivity on all nodes (to pull Docker packages and images)

Set Up the Ansible Control Node

Install ansible-core and the community.docker collection on your control node. If you already have Ansible installed, skip to the collection install step.

On Rocky Linux 10 / RHEL 10 / AlmaLinux 10:

sudo dnf install -y ansible-core

On Ubuntu 24.04 / Debian 13:

sudo apt update && sudo apt install -y ansible-core

Install the community.docker collection from Ansible Galaxy:

ansible-galaxy collection install community.docker

Confirm both are installed:

ansible --version
ansible-galaxy collection list community.docker

The output confirms ansible-core 2.16.14 and community.docker 5.1.0:

ansible [core 2.16.14]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/rocky/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ...
  python version = 3.12.8

Collection       Version
---------------- -------
community.docker 5.1.0

Configure SSH and Inventory

Generate an SSH key on the control node and distribute it to all managed nodes. If you need a refresher on SSH key management, see the Ansible user credential guide.

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
ssh-copy-id rocky@192.168.1.121
ssh-copy-id root@192.168.1.122

Create a project directory and an inventory file that groups your Docker hosts:

mkdir -p ~/ansible-docker && cd ~/ansible-docker

Create inventory.ini with the following content:

[docker_hosts]
rocky-node ansible_host=192.168.1.121 ansible_user=rocky
ubuntu-node ansible_host=192.168.1.122 ansible_user=root

[docker_hosts:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'

Verify connectivity with the Ansible ping module:

ansible -i inventory.ini docker_hosts -m ping --become

Both nodes should respond with pong:

rocky-node | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
ubuntu-node | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Install Docker CE with Ansible

The first real playbook handles Docker installation across both RHEL and Debian families. If you need Docker installed manually first (without Ansible), check the Docker CE installation guide. The two OS families differ in repository configuration, package names, and a critical kernel module requirement on Rocky Linux 10.

Create install-docker.yml:

---
- name: Install Docker CE on RHEL and Debian systems
  hosts: docker_hosts
  become: true
  vars:
    docker_users:
      - "{{ ansible_user }}"

  tasks:
    - name: Install prerequisites (RHEL family)
      ansible.builtin.dnf:
        name:
          - dnf-plugins-core
          - device-mapper-persistent-data
          - lvm2
          - python3-pip
          - kernel-modules-extra-{{ ansible_kernel }}
        state: present
      when: ansible_os_family == 'RedHat'

    - name: Load kernel modules for Docker networking (RHEL family)
      community.general.modprobe:
        name: "{{ item }}"
        state: present
        persistent: present
      loop:
        - xt_addrtype
        - br_netfilter
      when: ansible_os_family == 'RedHat'

    - name: Add Docker repo (RHEL family)
      ansible.builtin.yum_repository:
        name: docker-ce
        description: Docker CE Stable
        baseurl: https://download.docker.com/linux/rhel/$releasever/$basearch/stable
        gpgcheck: true
        gpgkey: https://download.docker.com/linux/rhel/gpg
        enabled: true
      when: ansible_os_family == 'RedHat'

    - name: Install prerequisites (Debian family)
      ansible.builtin.apt:
        name:
          - ca-certificates
          - curl
          - gnupg
        state: present
        update_cache: true
      when: ansible_os_family == 'Debian'

    - name: Create keyrings directory (Debian family)
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: '0755'
      when: ansible_os_family == 'Debian'

    - name: Add Docker GPG key (Debian family)
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: '0644'
      when: ansible_os_family == 'Debian'

    - name: Add Docker repo (Debian family)
      ansible.builtin.apt_repository:
        repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present
        filename: docker
      when: ansible_os_family == 'Debian'

    - name: Install Docker CE (RHEL family)
      ansible.builtin.dnf:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
        state: present
      when: ansible_os_family == 'RedHat'

    - name: Install Docker CE (Debian family)
      ansible.builtin.apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
        state: present
        update_cache: true
      when: ansible_os_family == 'Debian'

    - name: Start and enable Docker service
      ansible.builtin.systemd:
        name: docker
        state: started
        enabled: true

    - name: Add users to docker group
      ansible.builtin.user:
        name: "{{ item }}"
        groups: docker
        append: true
      loop: "{{ docker_users }}"

    - name: Install Python Docker SDK (RHEL family)
      ansible.builtin.pip:
        name: docker>=7.0.0
        state: present
      when: ansible_os_family == 'RedHat'

    - name: Install Python Docker SDK (Debian family)
      ansible.builtin.apt:
        name: python3-docker
        state: present
      when: ansible_os_family == 'Debian'

    - name: Verify Docker installation
      ansible.builtin.command: docker --version
      register: docker_version
      changed_when: false

    - name: Show Docker version
      ansible.builtin.debug:
        var: docker_version.stdout

Run the playbook:

ansible-playbook -i inventory.ini install-docker.yml

A successful run shows Docker 29.3.1 on both nodes (this output is from a re-run after the initial install, hence changed=0):

TASK [Show Docker version] *****************************************************
ok: [rocky-node] => {
    "docker_version.stdout": "Docker version 29.3.1, build c2be9cc"
}
ok: [ubuntu-node] => {
    "docker_version.stdout": "Docker version 29.3.1, build c2be9cc"
}

PLAY RECAP *********************************************************************
rocky-node   : ok=10   changed=0   unreachable=0   failed=0   skipped=6
ubuntu-node  : ok=11   changed=0   unreachable=0   failed=0   skipped=5

Notice the kernel-modules-extra package and the modprobe tasks for RHEL. Rocky Linux 10 (kernel 6.12) ships without xt_addrtype, a kernel module Docker needs for its iptables NAT rules. Without it, dockerd crashes on startup with: failed to add jump rules to ipv4 NAT table: Extension addrtype revision 0 not supported, missing kernel module. The Docker installation guide for Rocky Linux covers this in more detail.
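You can confirm the modules are actually loaded with an ad-hoc command against the Rocky node (the hostname comes from the inventory above):

ansible -i inventory.ini rocky-node -m shell -a 'lsmod | grep -E "xt_addrtype|br_netfilter"' --become

Both modules should appear in the output; if they do not, re-run the install playbook before starting any containers.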

The Python Docker SDK installation also differs between OS families. On Ubuntu 24.04, a plain pip install fails with PEP 668’s “externally managed environment” error, so we use the python3-docker system package instead. Rocky 10 ships no equivalent system package, so we install the SDK with pip, which works there.

Container Lifecycle Management

The community.docker.docker_container module handles the full container lifecycle: create, start, stop, restart, and remove. Every operation is idempotent, meaning running the same playbook twice produces no changes on the second run.

Create container-lifecycle.yml:

---
- name: Docker container lifecycle management
  hosts: docker_hosts
  become: true

  tasks:
    - name: Pull the Nginx image
      community.docker.docker_image_pull:
        name: nginx
        tag: stable-alpine

    - name: Create and start an Nginx container
      community.docker.docker_container:
        name: web-server
        image: nginx:stable-alpine
        state: started
        ports:
          - "8080:80"
        restart_policy: unless-stopped
        labels:
          environment: production
          app: web

    - name: Get container info
      community.docker.docker_container_info:
        name: web-server
      register: container_info

    - name: Display container details
      ansible.builtin.debug:
        msg: "Container {{ container_info.container.Name }} is running (ID: {{ container_info.container.Id[:12] }})"

    - name: Verify container is accessible
      ansible.builtin.uri:
        url: http://localhost:8080
        return_content: true
      register: web_response

    - name: Show web response status
      ansible.builtin.debug:
        msg: "HTTP {{ web_response.status }} - Nginx is responding"

    - name: Stop the container
      community.docker.docker_container:
        name: web-server
        state: stopped

    - name: Start the container again
      community.docker.docker_container:
        name: web-server
        state: started

    - name: Restart the container
      community.docker.docker_container:
        name: web-server
        state: started
        restart: true

    - name: Remove the container
      community.docker.docker_container:
        name: web-server
        state: absent

Run it:

ansible-playbook -i inventory.ini container-lifecycle.yml

The output shows each lifecycle state change:

TASK [Display container details] ***********************************************
ok: [rocky-node] => {
    "msg": "Container /web-server is running (ID: 1b6a59227a07)"
}
ok: [ubuntu-node] => {
    "msg": "Container /web-server is running (ID: a5402c869862)"
}

TASK [Show web response status] ************************************************
ok: [rocky-node] => {
    "msg": "HTTP 200 - Nginx is responding"
}
ok: [ubuntu-node] => {
    "msg": "HTTP 200 - Nginx is responding"
}

PLAY RECAP *********************************************************************
rocky-node   : ok=14   changed=4   unreachable=0   failed=0
ubuntu-node  : ok=14   changed=4   unreachable=0   failed=0

Deploy Containers with Environment Variables, Volumes, and Health Checks

Production containers need more than just a port mapping. This playbook shows how to configure environment variables, bind mount host directories, set resource limits, define health checks, execute commands inside running containers, and copy files into them.

Create container-advanced.yml:

---
- name: Advanced container configuration
  hosts: docker_hosts
  become: true

  tasks:
    - name: Create a data directory on the host
      ansible.builtin.file:
        path: /opt/app-data
        state: directory
        mode: '0755'

    - name: Create a custom index page on the host
      ansible.builtin.copy:
        content: |
          <html><body><h1>Served by Nginx inside Docker</h1>
          <p>Managed by Ansible</p></body></html>
        dest: /opt/app-data/index.html

    - name: Deploy container with full configuration
      community.docker.docker_container:
        name: app-container
        image: nginx:stable-alpine
        state: started
        ports:
          - "8081:80"
        env:
          APP_ENV: production
          LOG_LEVEL: info
          DB_HOST: 192.168.1.100
        volumes:
          - /opt/app-data:/usr/share/nginx/html:ro
        restart_policy: unless-stopped
        memory: 256m
        cpus: 0.5
        healthcheck:
          test: ["CMD-SHELL", "wget -q --spider http://localhost || exit 1"]
          interval: 30s
          timeout: 10s
          retries: 3
          start_period: 10s

    - name: Verify the custom page is served
      ansible.builtin.uri:
        url: http://localhost:8081
        return_content: true
      register: custom_page

    - name: Show the custom page content
      ansible.builtin.debug:
        msg: "{{ custom_page.content | trim }}"

    - name: Execute a command inside the container
      community.docker.docker_container_exec:
        container: app-container
        command: /bin/sh -c "nginx -v 2>&1"
      register: exec_result

    - name: Show Nginx version from inside container
      ansible.builtin.debug:
        msg: "{{ exec_result.stdout }}"

    - name: Copy a config file into the container
      community.docker.docker_container_copy_into:
        container: app-container
        content: |
          server {
              listen 8080;
              location /health {
                  return 200 'OK';
                  add_header Content-Type text/plain;
              }
          }
        container_path: /etc/nginx/conf.d/health.conf
        mode: "0644"

    - name: Cleanup
      community.docker.docker_container:
        name: app-container
        state: absent

Run the playbook:

ansible-playbook -i inventory.ini container-advanced.yml

The output confirms the environment variables, volume mount, resource limits, and health check are all applied:

TASK [Show the custom page content] ********************************************
ok: [rocky-node] => {
    "msg": "<html><body><h1>Served by Nginx inside Docker</h1>\n<p>Managed by Ansible</p></body></html>"
}

TASK [Show Nginx version from inside container] ********************************
ok: [rocky-node] => {
    "msg": "nginx version: nginx/1.28.3"
}

PLAY RECAP *********************************************************************
rocky-node   : ok=12   changed=4   unreachable=0   failed=0
ubuntu-node  : ok=12   changed=4   unreachable=0   failed=0

Notice the volume is mounted with :ro (read-only). If you need the container to write to the volume, remove the :ro suffix or use :rw explicitly. The docker_container_exec module runs commands inside running containers without SSH, and docker_container_copy_into pushes files directly (the mode parameter is required when using content).
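Note that Nginx only reads the copied config on reload. If you keep the container around instead of removing it, a follow-up exec task (a sketch, reusing the container name from the playbook above) applies the new server block:

    - name: Reload Nginx to pick up the copied config
      community.docker.docker_container_exec:
        container: app-container
        command: nginx -s reload

Without the reload, the server block in health.conf is never picked up.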

Docker Image Management

The community.docker collection provides granular image modules: docker_image_pull, docker_image_build, docker_image_info, and docker_image_remove. These purpose-built modules supersede the monolithic docker_image module and make playbooks clearer.

Create image-management.yml:

---
- name: Docker image management
  hosts: docker_hosts
  become: true

  tasks:
    - name: Pull multiple images
      community.docker.docker_image_pull:
        name: "{{ item.name }}"
        tag: "{{ item.tag }}"
      loop:
        - { name: redis, tag: "7-alpine" }
        - { name: postgres, tag: "17-alpine" }
        - { name: alpine, tag: latest }
      register: pulled_images

    - name: Show pulled image details
      ansible.builtin.debug:
        msg: "Pulled {{ item.image.RepoTags[0] }} ({{ (item.image.Size / 1048576) | round(1) }} MB)"
      loop: "{{ pulled_images.results }}"
      loop_control:
        label: "{{ item.image.RepoTags[0] | default('unknown') }}"

    - name: Get detailed info for Redis image
      community.docker.docker_image_info:
        name: redis:7-alpine
      register: redis_info

    - name: Show Redis image details
      ansible.builtin.debug:
        msg: "Redis ID: {{ redis_info.images[0].Id[:19] }}, Created: {{ redis_info.images[0].Created }}"

    - name: Create build context directory
      ansible.builtin.file:
        path: /opt/build-context
        state: directory
        mode: '0755'

    - name: Create a Dockerfile
      ansible.builtin.copy:
        content: |
          FROM nginx:stable-alpine
          COPY index.html /usr/share/nginx/html/index.html
          EXPOSE 80
          HEALTHCHECK --interval=30s --timeout=5s \
            CMD wget -q --spider http://localhost/ || exit 1
        dest: /opt/build-context/Dockerfile

    - name: Create index.html for the custom image
      ansible.builtin.copy:
        content: |
          <html><body><h1>Custom Image Built by Ansible</h1></body></html>
        dest: /opt/build-context/index.html

    - name: Build a custom Docker image
      community.docker.docker_image_build:
        name: my-app
        tag: "1.0"
        path: /opt/build-context
      register: build_result

    - name: Show build result
      ansible.builtin.debug:
        msg: "Built image: my-app:1.0 (ID: {{ build_result.image.Id[:19] }})"

    - name: Run the custom image
      community.docker.docker_container:
        name: custom-app
        image: my-app:1.0
        state: started
        ports:
          - "8082:80"

    - name: Test the custom app
      ansible.builtin.uri:
        url: http://localhost:8082
        return_content: true
      register: app_response
      retries: 3
      delay: 2
      until: app_response.status == 200

    - name: Show response from custom image
      ansible.builtin.debug:
        msg: "{{ app_response.content | trim }}"

    - name: List all images on the host
      community.docker.docker_host_info:
        images: true
      register: host_info

    - name: Show all images
      ansible.builtin.debug:
        msg: "{{ item.RepoTags[0] | default('none') }} - {{ (item.Size / 1048576) | round(1) }} MB"
      loop: "{{ host_info.images }}"
      loop_control:
        label: "{{ item.RepoTags[0] | default('untagged') }}"

    - name: Remove unused images
      community.docker.docker_image_remove:
        name: "{{ item }}"
      loop:
        - alpine:latest
        - postgres:17-alpine

    - name: Remove the custom-app container
      community.docker.docker_container:
        name: custom-app
        state: absent

Run the playbook:

ansible-playbook -i inventory.ini image-management.yml

The build output confirms the custom image was created and serves content:

TASK [Show pulled image details] ***********************************************
ok: [rocky-node] => (item=redis:7-alpine) => {
    "msg": "Pulled redis:7-alpine (16.5 MB)"
}
ok: [rocky-node] => (item=postgres:17-alpine) => {
    "msg": "Pulled postgres:17-alpine (106.0 MB)"
}
ok: [rocky-node] => (item=alpine:latest) => {
    "msg": "Pulled alpine:latest (3.7 MB)"
}

TASK [Show build result] *******************************************************
ok: [rocky-node] => {
    "msg": "Built image: my-app:1.0 (ID: sha256:ae15ac784fbe)"
}

TASK [Show response from custom image] *****************************************
ok: [rocky-node] => {
    "msg": "<html><body><h1>Custom Image Built by Ansible</h1></body></html>"
}

Docker Networks and Volumes

Containers on custom networks can resolve each other by name, which eliminates hardcoded IPs between services. Named volumes persist data beyond the container lifecycle. Both are managed declaratively through Ansible.

Create networks-volumes.yml:

---
- name: Docker networks and volumes
  hosts: docker_hosts
  become: true

  tasks:
    - name: Create a custom bridge network with a defined subnet
      community.docker.docker_network:
        name: app-network
        driver: bridge
        ipam_config:
          - subnet: 172.20.0.0/16
            gateway: 172.20.0.1
      register: network_result

    - name: Show network details
      ansible.builtin.debug:
        msg: "Network {{ network_result.network.Name }} (ID: {{ network_result.network.Id[:12] }})"

    - name: Create an internal network (no external access)
      community.docker.docker_network:
        name: db-network
        driver: bridge
        internal: true

    - name: Create a named volume for persistent data
      community.docker.docker_volume:
        name: db-data
        driver: local
      register: vol_result

    - name: Show volume details
      ansible.builtin.debug:
        msg: "Volume {{ vol_result.volume.Name }} at {{ vol_result.volume.Mountpoint }}"

    - name: Deploy Redis on the app network
      community.docker.docker_container:
        name: redis-cache
        image: redis:7-alpine
        state: started
        networks:
          - name: app-network
            ipv4_address: 172.20.0.10
        volumes:
          - db-data:/data
        restart_policy: unless-stopped

    - name: Deploy Nginx on the same network
      community.docker.docker_container:
        name: web-frontend
        image: nginx:stable-alpine
        state: started
        ports:
          - "8080:80"
        networks:
          - name: app-network
            ipv4_address: 172.20.0.20
        restart_policy: unless-stopped

    - name: Verify containers can reach each other by name
      community.docker.docker_container_exec:
        container: web-frontend
        command: /bin/sh -c "ping -c 2 redis-cache"
      register: ping_result

    - name: Show inter-container connectivity
      ansible.builtin.debug:
        msg: "{{ ping_result.stdout }}"

    - name: Cleanup containers, networks, and volumes
      community.docker.docker_container:
        name: "{{ item }}"
        state: absent
      loop:
        - web-frontend
        - redis-cache

    - name: Remove networks
      community.docker.docker_network:
        name: "{{ item }}"
        state: absent
      loop:
        - app-network
        - db-network

    - name: Remove volumes
      community.docker.docker_volume:
        name: db-data
        state: absent

Run the playbook:

ansible-playbook -i inventory.ini networks-volumes.yml

The ping test confirms DNS resolution works between containers on the same custom network:

TASK [Show inter-container connectivity] ***************************************
ok: [rocky-node] => {
    "msg": "PING redis-cache (172.20.0.10): 56 data bytes
64 bytes from 172.20.0.10: seq=0 ttl=64 time=0.083 ms
64 bytes from 172.20.0.10: seq=1 ttl=64 time=0.053 ms

--- redis-cache ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.053/0.068/0.083 ms"
}

The internal: true flag on db-network prevents containers on that network from reaching the internet, which is useful for database tiers that should only accept connections from application containers.
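A typical tiering pattern (a sketch; container names and the Postgres password are illustrative) puts the database only on the internal network and attaches the application container to both, so the app can reach the database while the database itself has no route out:

    - name: Database reachable only via the internal network
      community.docker.docker_container:
        name: app-db
        image: postgres:17-alpine
        state: started
        env:
          POSTGRES_PASSWORD: example
        networks:
          - name: db-network

    - name: Application container bridging both networks
      community.docker.docker_container:
        name: app-backend
        image: nginx:stable-alpine
        state: started
        networks:
          - name: app-network
          - name: db-network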

Deploy Multi-Container Apps with Docker Compose

For applications with multiple services (web server, database, cache), the docker_compose_v2 module manages the entire stack from a single compose file. This module wraps the Docker Compose CLI plugin (v2), which replaced the legacy docker-compose Python tool that was removed in community.docker 4.0.

Create docker-compose-deploy.yml. This example deploys WordPress with MariaDB:

---
- name: Deploy multi-container app with Docker Compose
  hosts: docker_hosts
  become: true

  tasks:
    - name: Create project directory
      ansible.builtin.file:
        path: /opt/wordpress
        state: directory
        mode: '0755'

    - name: Create docker-compose.yml
      ansible.builtin.copy:
        content: |
          services:
            db:
              image: mariadb:11
              restart: unless-stopped
              environment:
                MARIADB_ROOT_PASSWORD: rootpass123
                MARIADB_DATABASE: wordpress
                MARIADB_USER: wpuser
                MARIADB_PASSWORD: wppass456
              volumes:
                - db_data:/var/lib/mysql
              networks:
                - wp-network
              healthcheck:
                test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
                interval: 10s
                timeout: 5s
                retries: 5

            wordpress:
              image: wordpress:6-apache
              restart: unless-stopped
              depends_on:
                db:
                  condition: service_healthy
              ports:
                - "8083:80"
              environment:
                WORDPRESS_DB_HOST: db
                WORDPRESS_DB_USER: wpuser
                WORDPRESS_DB_PASSWORD: wppass456
                WORDPRESS_DB_NAME: wordpress
              volumes:
                - wp_data:/var/www/html
              networks:
                - wp-network

          volumes:
            db_data:
            wp_data:

          networks:
            wp-network:
              driver: bridge
        dest: /opt/wordpress/docker-compose.yml

    - name: Deploy the WordPress stack
      community.docker.docker_compose_v2:
        project_src: /opt/wordpress
        state: present
      register: compose_result

    - name: Show deployment result
      ansible.builtin.debug:
        msg: "Services: {{ compose_result.containers | map(attribute='Name') | list }}"

    - name: Wait for WordPress to be ready
      ansible.builtin.uri:
        url: http://localhost:8083
        status_code: [200, 302]
      register: wp_check
      retries: 10
      delay: 5
      until: wp_check.status in [200, 302]

    - name: Show WordPress status
      ansible.builtin.debug:
        msg: "WordPress is up: HTTP {{ wp_check.status }}"

    - name: Pull updated images for the stack
      community.docker.docker_compose_v2_pull:
        project_src: /opt/wordpress

    - name: Stop the WordPress stack
      community.docker.docker_compose_v2:
        project_src: /opt/wordpress
        state: stopped

    - name: Remove the stack completely (with volumes)
      community.docker.docker_compose_v2:
        project_src: /opt/wordpress
        state: absent
        remove_volumes: true

Run the playbook:

ansible-playbook -i inventory.ini docker-compose-deploy.yml

The stack comes up with WordPress accessible on port 8083:

TASK [Show deployment result] **************************************************
ok: [rocky-node] => {
    "msg": "Services: ['wordpress-db-1', 'wordpress-wordpress-1']"
}

TASK [Show WordPress status] ***************************************************
ok: [rocky-node] => {
    "msg": "WordPress is up: HTTP 200"
}
ok: [ubuntu-node] => {
    "msg": "WordPress is up: HTTP 200"
}

The state: absent with remove_volumes: true tears down everything: containers, networks, and named volumes. Without remove_volumes, the data volumes persist for redeployment. For a deeper look at running Docker Compose outside of Ansible, see the standalone Docker Compose guide.
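After the docker_compose_v2_pull task, re-applying state: present rolls the updated images out; the module's recreate option controls how aggressively containers are replaced (a sketch, reusing the project directory above):

    - name: Recreate services whose image or configuration changed
      community.docker.docker_compose_v2:
        project_src: /opt/wordpress
        state: present
        recreate: auto

recreate: auto (the default) only replaces containers whose image or configuration actually changed; always forces replacement of every service, and never leaves running containers untouched.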

Manage Secrets with Ansible Vault

Hardcoding passwords in playbooks is a security risk. Ansible Vault encrypts sensitive variables so they can be safely committed to version control.

Create an encrypted variables file:

mkdir -p vars

Create vars/secrets.yml with the following content:

db_root_password: SuperSecretRoot!2026
db_user: appuser
db_password: AppUserP@ss789
app_secret_key: 9f8e7d6c5b4a3210

Encrypt the file with Ansible Vault:

ansible-vault encrypt vars/secrets.yml

Enter a vault password when prompted. The file is now encrypted:

$ANSIBLE_VAULT;1.1;AES256
30393665656339326239383333636334626239396666313237636564386335363762
396137633832...

Create vault-deploy.yml that references the encrypted variables:

---
- name: Deploy containers with Ansible Vault secrets
  hosts: docker_hosts
  become: true
  vars_files:
    - vars/secrets.yml

  tasks:
    - name: Create app network
      community.docker.docker_network:
        name: secure-app

    - name: Deploy MariaDB with vault-encrypted credentials
      community.docker.docker_container:
        name: secure-db
        image: mariadb:11
        state: started
        env:
          MARIADB_ROOT_PASSWORD: "{{ db_root_password }}"
          MARIADB_DATABASE: myapp
          MARIADB_USER: "{{ db_user }}"
          MARIADB_PASSWORD: "{{ db_password }}"
        volumes:
          - secure-db-data:/var/lib/mysql
        networks:
          - name: secure-app
        restart_policy: unless-stopped
        healthcheck:
          test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
          interval: 10s
          timeout: 5s
          retries: 5

    - name: Wait for MariaDB to be healthy
      community.docker.docker_container_info:
        name: secure-db
      register: db_info
      until: db_info.container.State.Health.Status == "healthy"
      retries: 12
      delay: 5

    - name: Show MariaDB health
      ansible.builtin.debug:
        msg: "MariaDB status: {{ db_info.container.State.Health.Status }}"

    - name: Verify database credentials work
      community.docker.docker_container_exec:
        container: secure-db
        command: "mariadb -u{{ db_user }} -p{{ db_password }} -e 'SELECT VERSION();' myapp"
      register: db_version

    - name: Show database version
      ansible.builtin.debug:
        msg: "{{ db_version.stdout }}"

Run with the vault password:

ansible-playbook -i inventory.ini vault-deploy.yml --ask-vault-pass

The vault-encrypted credentials are injected at runtime. No passwords appear in the playbook or in ansible-playbook output:

TASK [Show MariaDB health] *****************************************************
ok: [rocky-node] => {
    "msg": "MariaDB status: healthy"
}

TASK [Show database version] ***************************************************
ok: [rocky-node] => {
    "msg": "VERSION()\n11.8.6-MariaDB-ubu2404"
}
ok: [ubuntu-node] => {
    "msg": "VERSION()\n11.8.6-MariaDB-ubu2404"
}

For CI/CD pipelines, store the vault password in a file and pass it with --vault-password-file .vault_pass instead of entering it interactively. The Ansible Vault cheat sheet covers additional vault operations like rekeying, editing encrypted files, and using multiple vault IDs.
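A minimal setup for that (the filename .vault_pass is just a convention):

echo 'your-vault-password' > .vault_pass
chmod 600 .vault_pass
echo '.vault_pass' >> .gitignore
ansible-playbook -i inventory.ini vault-deploy.yml --vault-password-file .vault_pass

Keep the password file out of version control; only the encrypted vars/secrets.yml is safe to commit.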

Docker System Cleanup with Ansible

Docker accumulates stopped containers, unused images, dangling volumes, and build cache over time. The docker_prune module cleans all of these in one shot, and docker_host_info gives you a system overview before and after. For manual cleanup without Ansible, see the Docker image optimization guide.

Create docker-cleanup.yml:

---
- name: Docker system cleanup
  hosts: docker_hosts
  become: true

  tasks:
    - name: Get Docker host information
      community.docker.docker_host_info:
        containers: true
        images: true
      register: host_info

    - name: Show Docker system summary
      ansible.builtin.debug:
        msg: >
          Docker {{ host_info.host_info.ServerVersion }} |
          Containers: {{ host_info.host_info.Containers }}
          (running={{ host_info.host_info.ContainersRunning }}) |
          Images: {{ host_info.host_info.Images }} |
          Storage: {{ host_info.host_info.Driver }}

    - name: Prune stopped containers
      community.docker.docker_prune:
        containers: true
      register: prune_containers

    - name: Show pruned containers
      ansible.builtin.debug:
        msg: "Pruned {{ prune_containers.containers | length }} container(s)"

    - name: Prune unused images
      community.docker.docker_prune:
        images: true

    - name: Full system prune (everything unused)
      community.docker.docker_prune:
        containers: true
        images: true
        networks: true
        volumes: true
        builder_cache: true
      register: full_prune

    - name: Show full prune summary
      ansible.builtin.debug:
        msg: "Build cache reclaimed: {{ (full_prune.builder_cache_space_reclaimed / 1048576) | round(1) }} MB"

Sample output from a run against the Rocky Linux node:

TASK [Show Docker system summary] **********************************************
ok: [rocky-node] => {
    "msg": "Docker 29.3.1 | Containers: 0 (running=0) | Images: 5 | Storage: overlayfs"
}

TASK [Show full prune summary] *************************************************
ok: [rocky-node] => {
    "msg": "Build cache reclaimed: 7.7 MB"
}
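A scheduled cleanup prevents disk space issues on long-running Docker hosts. One way to set that up is with the ansible.builtin.cron module; this is a sketch — the Sunday 03:00 schedule and log path are assumptions you should adapt:

```yaml
---
- name: Schedule weekly Docker cleanup
  hosts: docker_hosts
  become: true

  tasks:
    - name: Weekly prune of everything unused (Sunday 03:00)
      ansible.builtin.cron:
        name: docker-prune
        user: root
        weekday: "0"
        hour: "3"
        minute: "0"
        # -a also removes unused (not just dangling) images;
        # add --volumes only if you are sure no data volume is merely idle
        job: "docker system prune -af > /var/log/docker-prune.log 2>&1"
```

Running `docker system prune` from cron on each host is simpler than re-running the Ansible playbook on a timer, at the cost of losing the per-run summaries shown above.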

Rocky Linux 10 vs Ubuntu 24.04: Key Differences

Running the same playbooks against both OS families revealed several differences worth documenting:

| Item | Rocky Linux 10 / RHEL 10 | Ubuntu 24.04 |
|---|---|---|
| Package manager | dnf | apt |
| Docker repo URL | download.docker.com/linux/rhel | download.docker.com/linux/ubuntu |
| GPG key method | yum_repository gpgkey | get_url to /etc/apt/keyrings/ |
| Kernel module fix | kernel-modules-extra + modprobe xt_addrtype required | Not needed |
| Python Docker SDK | pip install docker | apt install python3-docker (PEP 668) |
| Security module | SELinux enforcing by default; Docker runs as container_runtime_t | AppArmor; no action needed for Docker |
| Firewall | firewall-cmd --add-port=8080/tcp --permanent | ufw allow 8080/tcp |
| Service name | docker | docker |
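The firewall difference is easy to absorb into a single play with OS-conditional tasks. A sketch — port 8080 is just the example from the table, and the modules used require the ansible.posix and community.general collections:

```yaml
- name: Open the application port on either OS family
  hosts: docker_hosts
  become: true

  tasks:
    - name: Allow 8080/tcp through firewalld (Rocky/RHEL)
      ansible.posix.firewalld:
        port: 8080/tcp
        permanent: true
        immediate: true
        state: enabled
      when: ansible_os_family == "RedHat"

    - name: Allow 8080/tcp through ufw (Ubuntu/Debian)
      community.general.ufw:
        rule: allow
        port: "8080"
        proto: tcp
      when: ansible_os_family == "Debian"
```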

Troubleshooting

These are real errors encountered during testing on Rocky Linux 10.1 and Ubuntu 24.04.4, not hypothetical issues.

Error: “Extension addrtype revision 0 not supported, missing kernel module”

This occurs on Rocky Linux 10 / RHEL 10 when Docker tries to configure iptables NAT rules. The xt_addrtype kernel module is not included in the default kernel package on RHEL 10.

The full error from journalctl -u docker:

failed to start daemon: Error initializing network controller: error obtaining
controller instance: failed to register "bridge" driver: failed to add jump
rules to ipv4 NAT table: failed to append jump rules to nat-PREROUTING:
(iptables failed: iptables --wait -t nat -A PREROUTING -m addrtype
--dst-type LOCAL -j DOCKER: Warning: Extension addrtype revision 0 not
supported, missing kernel module?
iptables v1.8.11 (nf_tables): RULE_APPEND failed (No such file or directory):
rule in chain PREROUTING (exit status 4))

Fix: install kernel-modules-extra for your running kernel, then load the modules:

sudo dnf install -y kernel-modules-extra-$(uname -r)
sudo modprobe xt_addrtype
sudo modprobe br_netfilter
sudo systemctl restart docker

Make the modules persistent across reboots:

echo -e "xt_addrtype\nbr_netfilter" | sudo tee /etc/modules-load.d/docker.conf
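Since this guide is about automation, the same fix can live in a playbook instead of being run by hand. A sketch for RHEL-family hosts — the use of community.general.modprobe and the `ansible_kernel` fact (which reports the running kernel, like `uname -r`) are choices, not the only way to do this:

```yaml
- name: Fix missing xt_addrtype on RHEL-family hosts
  hosts: docker_hosts
  become: true

  tasks:
    - name: Install extra kernel modules for the running kernel
      ansible.builtin.dnf:
        name: "kernel-modules-extra-{{ ansible_kernel }}"
        state: present
      when: ansible_os_family == "RedHat"

    - name: Load the modules now
      community.general.modprobe:
        name: "{{ item }}"
        state: present
      loop:
        - xt_addrtype
        - br_netfilter
      when: ansible_os_family == "RedHat"

    - name: Persist the modules across reboots
      ansible.builtin.copy:
        dest: /etc/modules-load.d/docker.conf
        content: |
          xt_addrtype
          br_netfilter
        mode: "0644"
      when: ansible_os_family == "RedHat"
```

Run this before starting the Docker service, or follow it with a service restart, so the daemon finds the modules on its first attempt to program iptables.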

Error: “externally-managed-environment” when installing Python Docker SDK

Ubuntu 24.04 enforces PEP 668, which prevents pip install from modifying system Python packages. The Ansible pip module fails with:

error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to install.

Fix: use the system package python3-docker instead of pip on Debian/Ubuntu systems. The playbook in this guide handles this with separate tasks for each OS family.
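In task form, that split looks roughly like this — two tasks guarded by `ansible_os_family`:

```yaml
- name: Install Docker SDK for Python (RHEL family, pip is fine here)
  ansible.builtin.pip:
    name: docker
    state: present
  when: ansible_os_family == "RedHat"

- name: Install Docker SDK for Python (Debian family, PEP 668-safe)
  ansible.builtin.apt:
    name: python3-docker
    state: present
  when: ansible_os_family == "Debian"
```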

Error: “missing parameter(s) required by ‘content’: mode”

When using docker_container_copy_into with the content parameter (inline content rather than a source file), you must also specify mode. The module requires knowing what file permissions to set when creating a file from raw content.

# Wrong - missing mode
- community.docker.docker_container_copy_into:
    container: myapp
    content: "config data"
    container_path: /etc/app/config.txt

# Correct - mode specified
- community.docker.docker_container_copy_into:
    container: myapp
    content: "config data"
    container_path: /etc/app/config.txt
    mode: "0644"

All community.docker Modules

The collection includes 39 modules as of version 5.1.0. Here is the full list grouped by function:

| Category | Modules |
|---|---|
| Containers | docker_container, docker_container_info, docker_container_exec, docker_container_copy_into |
| Images | docker_image_pull, docker_image_build, docker_image_push, docker_image_remove, docker_image_tag, docker_image_info, docker_image_load, docker_image_export |
| Compose | docker_compose_v2, docker_compose_v2_pull, docker_compose_v2_exec, docker_compose_v2_run |
| Networks | docker_network, docker_network_info |
| Volumes | docker_volume, docker_volume_info |
| Swarm | docker_swarm, docker_swarm_info, docker_swarm_service, docker_swarm_service_info, docker_node, docker_node_info |
| Swarm Config/Secrets | docker_config, docker_secret |
| Stack | docker_stack, docker_stack_info, docker_stack_task_info |
| System | docker_host_info, docker_login, docker_plugin, docker_prune, current_container_facts |

For Swarm cluster management, see the Docker Swarm setup guide.

Going Further

  • Dynamic inventory: the docker_containers inventory plugin auto-discovers running containers as Ansible hosts, useful for running playbooks inside containers. See the Ansible automation introduction for more on inventory concepts
  • Docker API connection: the docker_api connection plugin lets Ansible execute modules inside containers without SSH, using the Docker API directly
  • CI/CD integration: combine these playbooks with GitLab CI/CD pipelines to build images and deploy containers automatically on every push
  • Monitoring: after deploying containers, set up Prometheus and Grafana monitoring for Docker to track container resource usage
  • Private registries: use docker_login to authenticate with private registries before pulling images, and docker_image_push to publish custom images. You can also run containers as systemd services for persistent deployments without Compose
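As a hedged sketch of that registry workflow — the registry URL, credential variable names, and image tag below are all placeholders:

```yaml
- name: Log in to a private registry
  community.docker.docker_login:
    registry_url: registry.example.com
    username: "{{ registry_user }}"
    password: "{{ registry_password }}"  # keep this in Ansible Vault

- name: Push a locally built image
  community.docker.docker_image_push:
    name: registry.example.com/myteam/myapp
    tag: "1.0"
```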
