Every aspiring DevOps engineer must be equipped with the essential tools that support their daily operations. Not long ago, system administrators deployed their infrastructure manually. Servers, databases, networks, storage, and more were all managed by hand. Worse still, misconfigurations resulted in failed deployments, delaying the delivery of software applications to clients. A system administrator could also fall sick or resign in the middle of a project, leading to further frustration.
Over time, the DevOps movement emerged. DevOps engineers use a set of tools to speed up application delivery while eliminating manual software creation and deployment. These tools aid in infrastructure provisioning, continuous integration and deployment, configuration management, logging and monitoring, server templating, and so on.
What is infrastructure provisioning?
Provisioning is the process of setting up an IT infrastructure. Several types of provisioning exist, including server provisioning, cloud provisioning, network provisioning, user provisioning, and service provisioning. As highlighted in the introductory remarks, provisioning was initially done manually, and system administrators could spend hundreds of hours managing the IT infrastructure by hand. With the advent of modern tooling, almost any provisioning task can be automated: it is now possible to manage multiple services, servers, networks, and more with a single script on a single machine. Infrastructure provisioning refers to setting up your infrastructure, in terms of hardware, network, storage, software, and so on, to support your application deployments. The DevOps team can carry out tasks such as installing and configuring servers, provisioning storage and networks, installing software across the network, and analysing logs. Infrastructure provisioning tools automate the entire process.
Multiple infrastructure-provisioning tools exist on the market. These include, but are not limited to, Terraform, Chef, Puppet, Ansible, SaltStack, and so on. In this brief, we will analyse only a few tools at a high level, with links to official documentation shared for further reading.
What is cloud automation?
As technology evolves, IT and cloud administrators have been left with no choice but to automate the processes and tools they use day to day to speed up the delivery of infrastructure resources. The goal is to eliminate, to a very large degree, the manual effort involved in provisioning, deploying, and managing your infrastructure. Cloud automation has several use cases, such as testing code before deployment, network diagnosis, version control, and data security. Examples of cloud automation tools include Chef, Puppet, Ansible, Salt, etc.
Top OpenSource Tools for Infrastructure and Cloud Provisioning Automation
Every aspiring DevOps engineer should gain considerable skills in some or all of the tools below. The list is not exhaustive and is not arranged in any particular order. I have listed some of the tools I use to perform my duties on a daily basis.
1. Terraform
Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp that is simple, powerful, and written in the Go language. It allows DevOps teams to define their infrastructure using a simple declarative language. Terraform deploys and manages your infrastructure across several public cloud providers, such as AWS, GCP, Microsoft Azure, DigitalOcean, and so on. Further, you can deploy and manage your infrastructure on a private cloud and across virtualization platforms, e.g., VMware and Red Hat OpenStack, using a single command.
Terraform replaces the manual management of your infrastructure with an automated foundation. With Terraform, DevOps teams integrate their code continuously and always keep it in a deployable state, deploying many times a day. Further, Terraform helps build resilient, self-healing systems, reducing outages caused by manual intervention.
As an IaC tool, Terraform allows system administrators to write and execute code to define, deploy, update, and destroy the IT infrastructure. Simply put, every aspect of the IT infrastructure is treated as software and managed as code. IaC tools are divided into 5 broad categories. These categories are: ad-hoc scripts, configuration management tools, server templating tools, orchestration tools, and provisioning tools. As a provisioning tool, Terraform is used to create servers, databases, load balancers, network subnets, router rules, firewall settings, and so on.
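As an illustration, a minimal Terraform configuration might define a single cloud server. The sketch below assumes the AWS provider; the region, AMI ID, and resource names are placeholders, not values from any real deployment:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# A minimal server definition; the AMI ID below is a placeholder.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"

  tags = {
    Name = "example-web-server"
  }
}
```

You would typically run `terraform init`, review changes with `terraform plan`, apply them with `terraform apply`, and tear everything down with `terraform destroy`.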
Features associated with Terraform
As a leading IaC tool, Terraform is the preferred provisioning tool, and this is due to the following features:
- Integrates easily with version control systems.
- It allows customization of workflows with different access levels, e.g., for admins and other users.
- It has a simple, high-level declarative language, making it easier to understand even for normal users.
- Terraform supports multiple cloud providers.
- Allows for the ability to audit infrastructure logs with external log systems.
- Has flexible workflows, e.g., CLI, UI, version control, or via API.
- Allows continuous validation of infrastructure for optimal health checks.
- Integrates easily with existing infrastructure, e.g., Kubernetes, VMware, AWS, and so on.
- The state file keeps track of the infrastructure resources to help the DevOps team know when to update, upgrade, and destroy resources.
- Terraform allows collaboration through state sharing, governance, version control, and many more.
For a comprehensive feature list, please check Terraform Features
For a deeper overview of Terraform, see the resources below.
- Terraform documentation.
- Automate Deployments Using Docker and Terraform
- Learn Terraform Automation in 3 days using Video Courses
- Deploy Kubernetes Cluster Using Vagrant & Terraform
- Deploy VM instance on OpenStack using Terraform
2. Pulumi
Pulumi is a free-to-use, open-source IaC tool that allows DevOps teams to build, deploy, and manage their cloud infrastructure using a programming language of their choice. Pulumi integrates with AWS, Azure, GCP, Kubernetes, and many other cloud providers. Pulumi has a fully managed service called Pulumi Cloud that helps manage your infrastructure securely, reliably, and easily. Pulumi Cloud allows the DevOps team to adopt Pulumi’s open-source SDK. In addition, Pulumi Cloud provides built-in state and secret management, integrates with source control and CI/CD, and provides a web console and API to manage your infrastructure easily.
As a multi-language IaC tool, Pulumi supports the majority of today’s general-purpose programming and markup languages, including TypeScript and JavaScript (Node.js), Python, Go, Java, and many more.
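As a sketch of what this looks like in practice, the hypothetical Pulumi program below defines an S3 bucket in plain Python. The bucket name is illustrative, and actually running it requires the `pulumi` and `pulumi-aws` packages plus cloud credentials:

```python
import pulumi
import pulumi_aws as aws

# Declare an S3 bucket as an ordinary Python object;
# the logical name "my-app-bucket" is illustrative.
bucket = aws.s3.Bucket("my-app-bucket")

# Export the bucket ID so it is visible via `pulumi stack output`.
pulumi.export("bucket_name", bucket.id)
```

The point of the multi-language design is that you get loops, functions, and your editor's tooling for free, instead of learning a separate DSL.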
Features associated with Pulumi
Pulumi has the following key features:
- Pulumi allows developers to collaborate via Role-based Access Control.
- Pulumi integrates with CI/CD delivery pipelines.
- Pulumi insights provide advanced search, analytics, and AI through resource search, cloud import, data export, etc.
- Pulumi ensures security and compliance through audit logs, policy packs, and the Pulumi Cloud security whitepaper.
- Pulumi has three paid editions: Pulumi Team, Pulumi Enterprise, and Pulumi Business Critical.
- Pulumi supports multiple cloud providers, e.g., AWS, GCP, Azure, Kubernetes, etc.
- Pulumi supports multiple general-purpose programming and markup languages, allowing developers to define their infrastructure using their preferred programming language.
- Pulumi is open-source and free to use.
For a comprehensive study on Pulumi, please read the Pulumi Documentation.
3. Ansible
Ansible is an open-source server provisioning and configuration management tool owned by RedHat. It is an agentless tool that works in any environment for automation purposes. It is a powerful automation tool suitable for hybrid cloud automation, edge automation, network automation, security automation, infrastructure provisioning, and configuration management. Ansible helps the DevOps team automate their code deployment, network configuration, cloud management, etc. using a language that is very easy to understand.
Ansible uses playbooks to automate your infrastructure tasks. Playbooks are simple, human-readable YAML scripts in which you define the desired state of the infrastructure. To manage multiple hosts with a single command, you define them in an inventory file.
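A minimal playbook and inventory might look like the sketch below. The host names are illustrative, and the play assumes Debian/Ubuntu targets since it uses the apt module:

```yaml
# inventory.ini (illustrative host names)
# [webservers]
# web1.example.com
# web2.example.com

# playbook.yml: declare the desired state of the hosts
- name: Ensure Nginx is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure nginx is started and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory.ini playbook.yml` applies the same desired state to every host in the group.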
Ansible use cases
The Ansible configuration management tool has the following use cases:
- Eliminates the repetition of tasks by automating workflows.
- Manages and maintains system configurations.
- Deploys complex software continuously.
- Performs zero-downtime rolling updates.
Features associated with Ansible
Ansible has several key features that make it a leading configuration management tool. These can be summarised as below:
- Ansible makes it easy to create, execute, and manage automation, and to share it across teams with a single subscription.
- Ansible Lightspeed: Ansible uses generative AI to generate code recommendations for automation tasks.
- Event-Driven Ansible: this feature allows DevOps teams to automate IT tasks with user-friendly rule-based constructs. Ansible receives events from third-party sources and automatically triggers the corresponding automations.
- Ansible has an agentless architecture, which ensures low maintenance overhead. This is because no additional software needs to be installed.
- Ansible is simple to use. Playbooks are defined in simple YAML files written in a human-readable format.
- Ansible is scalable and flexible. It is easy to scale the systems you automate using Ansible’s modular design.
- Ansible is idempotent. Running the same playbook multiple times leaves the system in the same state; tasks make changes only when the current state differs from the desired state.
- Ansible is secure. Ansible uses SSH and requires no extra open ports or potentially vulnerable daemons on your servers.
- Ansible is efficient. No extra software needs to be installed on the managed nodes.
- Ansible is extensible. Ansible modules work via JSON.
For other resources to read more about Ansible, visit the links below.
- Ansible Documentation
- Getting started with Ansible
- Ansible GitHub
- Introduction to Ansible Inventory Management
- Introduction To Ansible Automation on Linux – Understanding Ansible
4. Salt / SaltStack
Salt is an open-source, event-driven, scalable automation tool built in Python to deploy, configure, and manage complex IT systems. It is used to automate repetitive infrastructure administration tasks and ensure that all the components of your infrastructure are operating in a consistent state. Salt aims to be the fastest, most intelligent, and most scalable automation engine. Salt has been tested and proven to work across multiple operating systems, such as CentOS, Debian, RHEL, Ubuntu, macOS, Windows, and more.
SaltStack, the company behind Salt, was acquired by VMware in 2020, and Salt now powers VMware Aria Automation Config. Salt is highly pluggable and easily customizable, which is why it is considered a configuration management tool. It easily integrates with other technologies in your stack and with devices such as switches and routers. In addition to being a configuration management tool, Salt is also used to orchestrate and automate routine IT processes and to create self-healing systems that respond automatically to outages.
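As a small illustration of Salt's declarative approach, a hypothetical state file might ensure a package is installed and its service running. The file path and state names below are illustrative:

```yaml
# /srv/salt/nginx/init.sls: a minimal Salt state (illustrative)
# Declares desired state; minions converge to it when the state is applied.
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
```

Applying it to every minion would then be a one-liner on the master, e.g. `salt '*' state.apply nginx`.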
Salt / SaltStack use cases
Salt has a number of use cases, including:
- Salt manages operating system deployment and configuration.
- Salt is used for installing and configuring software applications.
- Salt is used for managing servers, VMs, containers, databases, network devices, and many more.
- Salt is idempotent, ensuring a consistent configuration.
- Salt easily integrates with other tech stack technologies.
- Salt supports multiple operating systems.
- Salt automates and orchestrates routine IT processes.
Features associated with Salt
Some features associated with Salt are highlighted below.
- Salt supports remote execution with its ability to run pre-defined or arbitrary commands on remote hosts.
- Configuration management: Salt has a robust configuration management framework built on the remote execution core. The framework executes on the minions, allowing effortless configuration of thousands of hosts.
- Salt supports Return Codes. When the salt or salt-call CLI command returns an error, the command exits with an error code of 1.
- Supports a number of events and reactors
- Salt is secure. Communication between the master and minions is encrypted, and with salt-ssh, commands and states can also be executed over SSH.
- To provision systems on cloud hosts, Salt has Salt Cloud
- Salt has Salt Virt for VM deployment, inspection of deployed VMs, virtual machine migration, network profiling, image pre-seeding, and automatic VM integration with all aspects of Salt.
- Salt has a client API
- Salt has a CLI reference that supports Salt-API, Salt-Call, Salt-Cloud, and many more.
- With Salt, it is possible to switch to a high availability architecture at any time and add additional components to scale your deployment as you go.
More features are listed on the official Salt Project website.
Read more:
- Salt Project Documentation
- Salt User Guide
- Salt Install guide
- Install Saltstack Master/Minion on CentOS | Rocky Linux 8
- How To Install Salt master and minion on Ubuntu
- Install Salt Master & Minion on Ubuntu
5. Chef Automate
Chef is an automation tool for both infrastructure and applications that guides developers and system administrators from the development phase to the production phase. Chef Automate provides a unified view of your infrastructure, managed by Chef Infra, Chef InSpec, and Chef Habitat. The Chef Automate architecture is summarised in the diagram below, taken from the Chef Automate website.

Chef Automate has several key components that distinguish it from other automation frameworks: Chef Infra, Chef Habitat, Chef InSpec, Chef Automate, and Chef Workstation. Chef Infra is a powerful platform that transforms infrastructure into code, whether operating in the cloud, on premises, or in a hybrid environment. It automates how your infrastructure is configured, deployed, and managed across a network.
Chef Workstation runs across multiple operating systems and allows developers to write cookbooks and administer their infrastructure. It ships with tools that aid its operations, such as the Cookstyle, ChefSpec, Chef InSpec, and Test Kitchen testing tools. Within Chef Workstation are resources that describe the state of your infrastructure as code, as well as the state of the system at any given time. Chef Infra ships with predefined resources, but you can also define your own or use resources shared by the large, versatile community. A Chef Infra recipe groups related resources, and a Chef Infra cookbook keeps your recipes and resources organised. Chef Workstation provides CLI commands such as knife, for interacting with the Chef Infra Server, and chef, for interacting with the Chef code repository (chef-repo).
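To make the resource and recipe ideas concrete, a minimal hypothetical recipe might look like the sketch below. The cookbook name and file contents are illustrative:

```ruby
# cookbooks/webserver/recipes/default.rb (illustrative cookbook name)
# Each resource declares desired state; Chef Infra Client converges the node.
package 'nginx' do
  action :install
end

service 'nginx' do
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content '<h1>Hello from Chef</h1>'
  mode '0644'
end
```

Uploaded to the Chef Infra Server as part of a cookbook, this recipe would be applied by Chef Infra Client on every node whose run list includes it.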
Chef Infra Server: this is where the developer uploads the code once it has been authored. The Chef Infra Server is a hub for configuration data: it stores all the cookbooks, policies, and metadata that describe your systems. Chef Infra Client handles node configuration. It communicates with the Chef Infra Server to retrieve the latest cookbooks, and if the current state of the node doesn’t match what is in the cookbook, Chef Infra Client executes the cookbook instructions to bring it in line.
Chef Habitat helps with application automation: the automation is packaged together with the application, making it possible to deploy the application anywhere you wish. In this approach, the runtime environment, e.g., a container, does not define the application. Chef Habitat provides a packaging format for defining Chef Habitat packages and a supervisor that knows how to run them.
Chef InSpec is an open-source testing framework with a human-readable language for specifying security, compliance, and policy requirements.
Chef Automate: This provides a full suite of enterprise capabilities for node visibility and compliance. Chef Automate integrates with Chef Infra Client, Chef InSpec, and Chef Habitat.
The diagram below simplifies the above components.

Chef automates in real time and allows easy collaboration between teams. It has powerful auditing capabilities with actionable insights.
Chef use cases
Chef is a powerful automation tool with the following solutions:
- Chef has solutions that aid in defining, packaging, and delivering applications with a unified automation framework. This is through Chef Habitat.
- Chef Compliance helps companies maintain compliance and prevent security incidents across heterogeneous estates.
- Chef Desktop: This ensures that every device, whether laptop or desktop, is consistently configured and continuously updated.
- Chef Infra: With Chef Automate, you are guaranteed real configuration management. Thus, you can easily access the policy details, historical data, and system profiling information across the deployment environments.
For a deeper and more comprehensive study, read the resources below.
- Learn Chef
- GitHub
- Chef Docs
- Best Books To Learn Puppet and Chef Automation
- How To Install Chef Workstation on CentOS 8 / RHEL 8
- Install Chef Server & Workstation on Ubuntu
- How To Setup Chef Infra Server on CentOS 8 / RHEL 8
- Configure Chef Knife, Upload Cookbooks and Run a recipe on Chef Client Nodes
6. Puppet
Puppet is a declarative configuration management tool that consistently automates server configuration. With Puppet, you begin by defining the desired state of the systems in your infrastructure using Puppet’s Domain-Specific Language (DSL), Puppet code. Puppet code is declarative: you describe the desired state of your system, not the steps to get there. Puppet then automates the process of bringing your systems to the defined desired state and consistently maintaining them in that state. The Puppet platform consists of several packages that help you manage, store, and run Puppet code: puppetserver, puppetdb, and puppet-agent. The puppetserver stores the code that defines your desired state, while the puppet agent translates the code into commands and executes them on the systems you specified. Puppet can be deployed across Linux, Unix, and Windows systems. As an automated administrative engine, Puppet performs administrative tasks based on a centralised specification, e.g., installing packages, adding users, and updating servers.
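A minimal manifest illustrating this declarative style might look like the following sketch. The package and service names are illustrative:

```puppet
# site.pp: a minimal manifest (illustrative)
# Only the desired state is declared; Puppet works out how to get there.
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
```

The `require` metaparameter expresses ordering: the service is only managed once the package resource has been applied.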
Puppet exists in two versions: the enterprise version and the open-source version. It is built on the concepts of IaC, idempotency, Agile methodology, Git, and version control.
Puppet Use Cases
Puppet is loaded with key use cases as outlined below.
- Puppet is used to manage web servers, e.g., IIS, Apache, Tomcat, Nginx, etc.
- Puppet is used for base system configuration such as NTP and firewalls.
- Puppet is used to manage database systems, e.g., MySQL, Oracle, PostgreSQL, etc.
- Puppet integrates with source/version control systems, e.g., GitLab and GitHub.
- Puppet is used with monitoring systems, e.g., Nagios, Zabbix, and Sensu.
- Puppet integrates with Linux package managers and with Chocolatey on Windows.
- For security purposes, Puppet is used for secrets management with systems such as HashiCorp Vault and Azure Key Vault.
- Puppet is used for patch management.
- Puppet integrates well with networking devices such as Cisco Catalyst and Barracuda.
- Puppet is used for incident remediation.
- Puppet also integrates with CI/CD tools.
7. Vagrant
Vagrant is an open-source command-line utility that manages the life cycle of virtual machines. It enables the creation and configuration of lightweight, reproducible, and portable development environments. With a single workflow, you can easily build and manage virtual environments. Vagrant uses a simple declarative configuration file that defines all the requirements necessary to build your environment. With Vagrant, it is easy to mirror a production environment, with all its requirements, and use the mirrored environment for testing to check how an application will behave in production.
Vagrant supports multiple operating systems, from Unix-based distributions to Linux and Windows. It is open-source software with a command-line interface through which you interact with Vagrant: on the CLI, you run simple commands to create, manage, and destroy your infrastructure. To enable collaboration between teams, Vagrant provides Vagrant Share, which helps developers share their environment with other users. Vagrant Share has three collaboration features: HTTP sharing, SSH sharing, and general sharing. It is installed as a plugin via the CLI.
Vagrant maintains a Vagrantfile that describes the type of machine to be created for a project and how to configure and provision that machine. The Vagrantfile uses Ruby syntax. In the Vagrantfile, the developer must specify the Vagrant box (the base image to use). Vagrant boxes are the package format for Vagrant environments, and they require providers. Besides the providers supported out of the box, e.g., VirtualBox, Hyper-V, VMware, and Docker, Vagrant supports other providers such as AWS, Google Cloud, Azure, OpenStack, etc. Providers must be installed before they are used with Vagrant.
Vagrant also makes use of provisioners, which allow developers to automatically install software and alter configurations. Common provisioners include, but are not limited to, Ansible, Puppet, Salt, Chef, and shell scripts. In summary, a Vagrantfile defines the provider to use, the base image (Vagrant box), and a provisioner.
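A minimal hypothetical Vagrantfile tying these three pieces together might look like this. The box name, memory size, and installed packages are illustrative:

```ruby
# Vagrantfile (illustrative values throughout)
Vagrant.configure("2") do |config|
  # Base image (Vagrant box)
  config.vm.box = "ubuntu/jammy64"

  # Provider-specific settings
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end

  # Provisioner: a simple inline shell script
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx
  SHELL
end
```

With this file in place, `vagrant up` creates and provisions the VM, `vagrant ssh` logs into it, and `vagrant destroy` throws it away.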
Vagrant Use Cases
Vagrant is a powerful VM provisioning tool. It can be used to achieve the following.
- It automates the creation and management of virtual machines.
- Source control: Vagrant allows the configuration to be source-controlled by defining the infrastructure in a Vagrantfile.
- Since it supports multiple base images (operating systems), it can be run on Linux, Unix, macOS, and Windows.
- It can be used to run configuration management tools like Chef, Puppet, SaltStack, and so on, because it easily integrates with them.
- Vagrant is used to create reproducible virtualised environments.
- Developers use Vagrant to create development and test environments where they can test their software.
- Vagrant supports building VirtualBox images.
- Vagrant can also be used to create disposable environments for trying out different technologies.
More resources can be accessed on the links below.
- Vagrant Tutorial Library
- Vagrant GitHub
- Vagrant Documentation
- Vagrant Tutorials
- Using Vagrant with VirtualBox and KVM on Debian
- Using Vagrant With VirtualBox on RHEL 9|CentOS Stream 9
8. Docker
Docker is an open-source server templating tool used to accelerate how applications are built, shared, and run. Docker offers several developer tools, including Docker Desktop, Docker Compose, Docker Build, Docker Engine, and Docker Extensions. With Docker, developers are able to isolate their applications from the infrastructure to speed up the delivery process while managing them independently. The separation of the application from the infrastructure significantly reduces the delay between writing code and running it in production.
Docker helps developers package and run their applications in loosely isolated environments called containers. Containers are lightweight and are packaged with all the libraries and dependencies necessary to run an application, which abstracts the application from the host machine. Containers can then be shared across teams and will work the same way regardless of where they run. Developers prefer Docker because it provides both the tools and a platform to manage their containerised applications. The one drawback with containers is that they share the host server’s OS kernel and hardware, so it is difficult to achieve the same level of isolation as you would with VMs.
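As an illustration, a minimal hypothetical Dockerfile packaging a small Python application might look like the sketch below. The file names and base image are illustrative:

```dockerfile
# Dockerfile: a minimal Python web app image (illustrative names)
FROM python:3.12-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Building with `docker build -t my-app .` and running with `docker run -p 8000:8000 my-app` gives every teammate the same environment, regardless of their host machine.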
Whereas the configuration management tools described above provision and configure running servers, server templating tools create an image of a server: a fully self-contained snapshot of the operating system, the software, the files, and all other relevant dependencies. Using other IaC tools, you then install that image across multiple servers. Server templating tools are a good pick for creating VMs and containers.
Docker use cases
Docker is used to do the following:
- Allows collaboration between teams: developers can build their applications and share them in containers.
- Use Docker to push applications into test environments and evaluate how an application behaves in the real world.
- Identified bugs can be fixed in the development environment, and the applications redeployed to the test environment for testing and validation.
- Docker allows developers to work in standardized environments using local containers.
- Docker is a wonderful tool for CI/CD.
- When an application is tested in the development environment, pushing it to the production environment is a simple process.
- Docker is ideal for highly portable workloads, i.e., workloads that run on local machines, VMs, cloud platforms, data centres, etc.
- Docker makes managing workloads dynamic thanks to its portability, scalability, and lightweight nature.
- With Docker, developers can run multiple applications on the same host because it is fast and lightweight.
9. Packer
Packer is an open-source server templating tool used to automate image builds. With a single source configuration file, the system administrator is able to create identical images across a stack of platforms. The product is owned by HashiCorp. Packer treats images as code, the same way Terraform treats infrastructure as code. Packer standardizes and automates the building of system and container images. When working with cloud platforms, you create a single workflow for images across multiple cloud infrastructures.
Packer is lightweight, runs on multiple operating systems, performs well, and can be used together with configuration management tools like Chef and Puppet to install software onto the machine images. It is advantageous to use Packer for machine image build automation: infrastructure deployment is fast, and the images are portable, allowing the developer to run the exact same machine image in production, in staging on a private cloud, and in development on desktop virtualization such as VirtualBox. Packer is also stable, because the machine image is built with all the necessary applications before being installed on the servers. Packer is a good tool for testing machine images before they are deployed, giving you a clear picture of the production environment.
With Packer, a system administrator can create a self-contained image of a server and then use tools like Ansible to install that image across multiple servers. To use Packer, the system administrator prepares a Packer template file containing the series of commands and declarations Packer will follow during the build. In the template file, the administrator defines the plugins to use (builders, provisioners, post-processors), how to configure those plugins, and the order in which they run. In addition, the template file supports flexible variable injection and built-in functions that help customize builds. The Packer template format uses HCL2, which is more flexible, modular, and concise than the traditional JSON format; HCL2 is the same language used by Terraform.
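To make this concrete, a hypothetical HCL2 template might look like the sketch below. The region, source AMI ID, and image name are illustrative placeholders:

```hcl
# example.pkr.hcl: a minimal Packer template (illustrative values)
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
  }
}

# Builder: where and from what the image is built
source "amazon-ebs" "web" {
  region        = "us-east-1"
  instance_type = "t2.micro"
  source_ami    = "ami-0abcdef1234567890"
  ssh_username  = "ubuntu"
  ami_name      = "web-server-{{timestamp}}"
}

# Build block: ties the builder to provisioners
build {
  sources = ["source.amazon-ebs.web"]

  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }
}
```

Running `packer init example.pkr.hcl` followed by `packer build example.pkr.hcl` would produce a fully baked image ready to be rolled out by a provisioning tool.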
Packer Use Cases
Packer has the following use cases:
- Packer is an ideal tool for continuous delivery, thanks to its portability, lightweight nature, and CLI.
- Production parity: use Packer to generate images for multiple platforms at the same time. This helps keep development, staging, and production as similar as possible.
- Appliance/demo creation: Packer builds consistent images for multiple platforms in parallel, making it an ideal tool for creating appliances and disposable product demos. Appliances can be created with the software preinstalled so that customers can get started by simply deploying them in their environments.
Closing remarks
There are numerous open-source tools for infrastructure and cloud provisioning automation, and the list above is only the tip of the iceberg. As they say, one man’s meat is another man’s poison: what works best for me might not work for you. Every aspiring DevOps engineer must master a few tools for provisioning and automating their infrastructure. I hope this guide acts as a starting point for those considering a DevOps journey. Please leave a shout-out below on your preferred stack of tools.