Greetings and welcome to this blog on managing Amazon AWS services using Ansible. System admins and DevOps engineers around the world have embraced automation. Organizations and businesses that implement it benefit from improved efficiency, cost savings, fewer errors, higher productivity, and better customer satisfaction. Popular DevOps automation tools include Puppet, Chef, Docker, Jenkins, Terraform, Ansible, Vagrant, and Git.
Ansible is an open-source tool used to automate the deployment and management of IT tasks. The tasks to be automated are defined in a descriptive language based on YAML.
There are many ways of managing resources on AWS. The popular ones are:
- AWS Management Console: This is a web-based GUI from which you can access and manage resources.
- AWS Command Line Interface (CLI): With this method, you interact with AWS services using text-based commands. This offers a convenient way to automate tasks.
- AWS Software Development Kits (SDKs): AWS also provides SDKs for programming languages such as Python, JavaScript, Java, etc. With these, developers can integrate AWS services into their applications.
The Ansible Amazon AWS collection provides several Ansible modules to help you automate the management of AWS services. The collection is maintained by the Ansible Cloud team. The available modules can be used to manage the following:
- Autoscaling groups
- CloudFormation, CloudTrail, and CloudWatch
- Elastic Compute Cloud (EC2)
- DynamoDB, ElastiCache, and Relational Database Service (RDS)
- Identity and Access Management (IAM) and Security Groups
- Simple Storage Service (S3)
- AWS Lambda
- Virtual Private Cloud (VPC)
Here, we will learn how to automate Amazon AWS service management with Ansible.
Let’s dive in!
Step 1: Install and Configure AWS CLI
For Ansible automation to work, we need to install and configure the AWS CLI correctly. To install the AWS CLI, follow the guide provided here:
Verify the installation:
$ aws --version
aws-cli/1.29.54 Python/3.9.16 Linux/5.14.0-70.17.1.el9_0.x86_64 botocore/1.31.54
The next thing you need to do is to configure your AWS CLI. The easiest way to configure it is by using the configuration command below:
$ aws configure
AWS Access Key ID: MY_ACCESS_KEY
AWS Secret Access Key: MY_SECRET_KEY
Default region name [us-west-2]: eu-west-1
Default output format [None]: json
Replace the values in the prompts above with your own access keys, region, and preferred output format. To check the created file, run:
cat ~/.aws/credentials
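The `aws configure` command stores your credentials in your home directory. Assuming the example values entered above, the credentials file will look similar to this:

```ini
[default]
aws_access_key_id = MY_ACCESS_KEY
aws_secret_access_key = MY_SECRET_KEY
```

The region and output format are stored separately in `~/.aws/config`.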
Step 2: Install Ansible
We will begin by installing Ansible. The collection requires ansible-core >= 2.12.0. To achieve that, we will use the pip method.
First, install Python by following the link below:
Once installed, proceed and install Ansible with the command:
sudo pip3 install ansible
Verify the installation:
$ ansible --version
ansible [core 2.15.4]
config file = None
configured module search path = ['/home/rocky9/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/rocky9/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.16 (main, May 29 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
Our site provides guides on how to install Ansible. See below:
Create the Ansible Hosts Inventory file
The inventory file should contain the hosts on which you want to run the automation. Here, however, things are a bit different: the AWS modules talk to the AWS API from the control node, so the hosts file only needs localhost:
$ sudo vim /etc/ansible/hosts
[local]
localhost
For the installation with pip, you can create the hosts file anywhere and specify its path when running the playbook using the -i <path-to-file> option.
Step 3: Install Ansible Amazon AWS collection
To be able to perform the automation using Ansible, we need to have the Ansible Amazon AWS collection installed.
This can be done by running the command:
ansible-galaxy collection install amazon.aws
Once complete, we need to install the required Python SDKs, i.e. boto3 and botocore, which are used to connect to AWS.
To install them run the command:
sudo pip install boto3 botocore
Now we are set to use the modules provided by the collection to automate deployments on AWS.
Step 4: Automate Amazon AWS Services Management
To automate the management of AWS services with Ansible, you need to create a playbook. This is a YAML file that contains all that you want to do on AWS. In this guide, I will show you how to create playbooks for the common tasks as shown:
a. Create or delete an EC2 key pair
The EC2 key pair is required when connecting to an EC2 instance. The amazon.aws.ec2_key module supports several ways of creating (and deleting) an EC2 key pair.
First, create the playbook file:
vim key-pair.yaml
Below are some of the examples you can use. Choose the one that works best for you:
- create a new EC2 key pair, returns generated private key

- hosts: localhost
  gather_facts: True
  tasks:
    - name: create a new EC2 key pair, returns generated private key
      # use no_log to avoid private key being displayed into output
      amazon.aws.ec2_key:
        name: my_keypair
      no_log: true
      register: aws_ec2_key_pair
- create key pair using provided key_material

- hosts: localhost
  gather_facts: True
  tasks:
    - name: create key pair using provided key_material
      amazon.aws.ec2_key:
        name: my_keypair
        key_material: 'ssh-rsa AAAAxyz...== [email protected]'
- create key pair using key_material obtained using 'file' lookup plugin

- hosts: localhost
  gather_facts: True
  tasks:
    - name: create key pair using key_material obtained using 'file' lookup plugin
      amazon.aws.ec2_key:
        name: my_keypair
        key_material: "{{ lookup('file', '/path/to/public_key/id_rsa.pub') }}"
- Create ED25519 key pair and save private key into a file

- hosts: localhost
  gather_facts: True
  tasks:
    - name: Create ED25519 key pair and save private key into a file
      amazon.aws.ec2_key:
        name: my_keypair
        key_type: ed25519
        file_name: /tmp/aws_ssh_rsa
- creating a key pair with the name of an already existing keypair

- hosts: localhost
  gather_facts: True
  tasks:
    - name: try creating a key pair with name of an already existing keypair
      amazon.aws.ec2_key:
        name: my_existing_keypair
        key_material: 'ssh-rsa AAAAxyz...== [email protected]'
        force: false
- remove key pair from AWS by name

- hosts: localhost
  gather_facts: True
  tasks:
    - name: remove key pair from AWS by name
      amazon.aws.ec2_key:
        name: my_keypair
        state: absent
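Building on the first example above: when a brand-new key pair is generated, the module returns the private key material under key.private_key (only on initial creation, never on later runs). A sketch of saving it to a local file, assuming the registered variable name aws_ec2_key_pair from that example:

```
- hosts: localhost
  gather_facts: True
  tasks:
    - name: create a new EC2 key pair
      amazon.aws.ec2_key:
        name: my_keypair
      no_log: true
      register: aws_ec2_key_pair

    # key.private_key is only present when the key was just created
    - name: save the returned private key to a local file
      ansible.builtin.copy:
        content: "{{ aws_ec2_key_pair.key.private_key }}"
        dest: ./my_keypair.pem
        mode: "0600"
      when: aws_ec2_key_pair.changed
```

Without this step, the generated private key is lost once the play finishes, since AWS does not store it.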
In this guide, we will create the key pair using the 'file' lookup plugin example:
ansible-playbook -i hosts key-pair.yaml
Sample Output:

You will have the pair created in your region:

b. Gather information about availability zones in AWS
It is also possible to collect information about the availability zones in AWS. This is achieved using the amazon.aws.aws_az_info module.
vim aws-info.yaml
You can use any of the below examples:
- Gather information about all availability zones

- hosts: localhost
  gather_facts: True
  tasks:
    - name: Gather information about all availability zones
      amazon.aws.aws_az_info:
- Gather information about a single availability zone

- hosts: localhost
  gather_facts: True
  tasks:
    - name: Gather information about a single availability zone
      amazon.aws.aws_az_info:
        filters:
          zone-name: eu-west-1a
To apply the playbook run:
ansible-playbook -i hosts aws-info.yaml
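The module returns its findings under the availability_zones key, which you can capture with register and inspect. A minimal sketch (zone_name is one of the documented return fields; see the aws_az_info module page for the full structure):

```
- hosts: localhost
  gather_facts: True
  tasks:
    - name: Gather information about all availability zones
      amazon.aws.aws_az_info:
      register: az_info

    # print just the zone names from the returned list of dictionaries
    - name: Print the zone names
      ansible.builtin.debug:
        msg: "{{ az_info.availability_zones | map(attribute='zone_name') | list }}"
```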
c. Provisioning EC2 Instance
To provision an EC2 instance, we will use the amazon.aws.ec2_instance module from our collection. We will also provide other required variables such as the SSH key name, the AMI ID, the subnet, etc. Create the playbook with the command:
vim ec2-playbook.yaml
Add these lines and replace values where required:
- hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision an EC2 instance with a public IP address
      amazon.aws.ec2_instance:
        name: Demo
        key_name: "my_keypair"
        vpc_subnet_id: <your_subnet-id>
        instance_type: t2.nano
        security_group: default
        network:
          assign_public_ip: true
        image_id: <Your_ami-ID>
        tags:
          Environment: Testing
      register: result
In the playbook above, the details of the created instance are captured by the register keyword into a variable named result.
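You can then reference the registered data in later tasks of the same playbook. For example, a debug task appended under tasks could print the instance's public IP (public_ip_address is one of the module's documented return fields inside the instances list; it is only populated once the instance is running):

```
    # continuation of the tasks list above; "result" comes from register
    - name: Show the public IP of the new instance
      ansible.builtin.debug:
        msg: "Instance IP: {{ result.instances[0].public_ip_address | default('pending') }}"
```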
You can also create an instance with cpu_options for example:
- hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision an EC2 instance with a public IP address
      amazon.aws.ec2_instance:
        name: Demo
        key_name: "my_keypair"
        vpc_subnet_id: <your_subnet-id>
        instance_type: t2.nano
        security_group: default
        network:
          assign_public_ip: true
        image_id: <Your_ami-ID>
        tags:
          Environment: Testing
        cpu_options:
          core_count: 1
          threads_per_core: 1
      register: result
To run the playbook issue:
ansible-playbook -i hosts ec2-playbook.yaml
Sample Output:

d. Creating a Simple S3 bucket
You can also create an S3 bucket with Ansible using the amazon.aws.s3_bucket module. First, create the playbook:
vim s3-playbook.yaml
In the file, add the desired configuration for your S3 bucket:
- hosts: localhost
  gather_facts: False
  tasks:
    - name: Create new bucket
      amazon.aws.s3_bucket:
        name: mys3bucket
        state: present
Save the file and apply the playbook:
ansible-playbook -i hosts s3-playbook.yaml
Sample Output:

Verify the creation:
$ aws s3 ls
2023-09-26 15:36:00 mys3bucket0007
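The s3_bucket module can do more than just create the bucket; for example, versioning and tags are among its documented parameters and can be set in the same task. A sketch, assuming the same bucket name as above:

```
- hosts: localhost
  gather_facts: False
  tasks:
    # versioning enables S3 object versioning; tags attaches bucket tags
    - name: Create a bucket with versioning and tags
      amazon.aws.s3_bucket:
        name: mys3bucket
        state: present
        versioning: true
        tags:
          Environment: Testing
```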
To delete the bucket, change the state to absent:
$ vim s3-playbook.yaml
....
state: absent
Then run the playbook:
ansible-playbook -i hosts s3-playbook.yaml
Verdict
This guide has provided you with steps on how to manage Amazon AWS services using Ansible. We have only provided a few examples to help you understand how to automate the tasks. There are many other operations you can perform using other modules, as documented on the official Ansible Amazon AWS collection page. I hope this was informative.
See more on this page:
- Automated Logical Volume Manager (LVM) Management on Linux using Ansible
- How To Deploy Matrix Server using Ansible and Docker
- Automate Linux Systems with Ansible System Roles
- How To Install Wazuh Security Platform using Ansible