Terraform is the dominant Infrastructure as Code (IaC) tool in production environments today. It lets you define your entire infrastructure in declarative configuration files, version those files in Git, and reproduce identical environments on demand. Whether you are provisioning VMs on AWS, spinning up Kubernetes clusters, or configuring DNS records, Terraform handles the lifecycle through a consistent workflow.

This guide walks you through installing Terraform (and its open source fork OpenTofu), writing your first configuration, and applying the operational patterns that keep Terraform manageable at scale. Every example here has been tested and reflects current best practices as of 2026.

What Terraform Is and How It Works

Terraform is an open source IaC tool originally built by HashiCorp. You write .tf files that describe the desired state of your infrastructure. Terraform compares that desired state against the actual state (tracked in a state file), computes a diff, and then makes API calls to bring reality in line with your configuration.

There are four core concepts you need to understand before writing any Terraform:

  • Declarative configuration – You describe what you want, not the steps to get there. Terraform figures out the order of operations, handles dependencies, and parallelizes where it can.
  • Providers – Plugins that let Terraform talk to specific platforms. The AWS provider knows how to create EC2 instances, the Azure provider knows how to create VMs, and so on. There are providers for nearly every cloud, SaaS product, and infrastructure component you can think of.
  • State – Terraform keeps a JSON file (terraform.tfstate) that maps your configuration to real resources. This state file is how Terraform knows what exists, what changed, and what needs to be destroyed. Losing or corrupting state is the single most common source of Terraform problems.
  • Plan/Apply model – Terraform always shows you what it intends to do (plan) before it does it (apply). This two-step process prevents surprises and gives you a chance to catch mistakes before they hit production.

Install Terraform on Ubuntu/Debian

HashiCorp maintains an official APT repository. This is the recommended installation method because it gives you access to updates through your system package manager.

Start by installing the prerequisites and adding the HashiCorp GPG key:

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

Download and install the signing key:

wget -O- https://apt.releases.hashicorp.com/gpg | \
  gpg --dearmor | \
  sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null

Add the official HashiCorp repository:

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list

Install Terraform:

sudo apt-get update && sudo apt-get install terraform

Verify the installation:

terraform version

You should see output showing the installed Terraform version. If the command is not found, confirm that /usr/bin is in your PATH.

Install Terraform on RHEL/Rocky Linux

For RHEL-based distributions (RHEL 8/9, Rocky Linux, AlmaLinux, CentOS Stream), HashiCorp provides a YUM/DNF repository.

Install the yum-utils package and add the repo:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

Install Terraform:

sudo yum -y install terraform

Confirm it works:

terraform version

On newer systems that default to DNF, you can substitute dnf for yum in the commands above. The repository configuration is identical.

Install OpenTofu as an Alternative

OpenTofu is a community-driven, open source fork of Terraform maintained by the Linux Foundation. It was created after HashiCorp changed Terraform’s license from MPL 2.0 to the Business Source License (BSL) in August 2023. OpenTofu remains under the MPL 2.0 license and is a drop-in replacement for Terraform in most cases.

If license terms matter to your organization, OpenTofu is worth evaluating. The CLI commands, configuration syntax, and provider ecosystem are compatible with Terraform.

Install OpenTofu using the official installer script:

curl -fsSL https://get.opentofu.org/install-opentofu.sh -o install-opentofu.sh
chmod +x install-opentofu.sh
./install-opentofu.sh --install-method rpm   # For RHEL/Rocky
./install-opentofu.sh --install-method deb   # For Ubuntu/Debian

Verify it installed correctly:

tofu version

Throughout the rest of this guide, all terraform commands can be replaced with tofu if you chose the OpenTofu path. The workflow and configuration syntax are the same.

Terraform Basics: Providers, Resources, Variables, and Outputs

Every Terraform project is built from the same four building blocks. Understanding these is essential before you write a single line of HCL (HashiCorp Configuration Language).

Providers

A provider is a plugin that gives Terraform the ability to manage resources on a specific platform. You declare which providers your configuration needs, and Terraform downloads them during terraform init.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

The required_providers block pins the provider source and version. The provider block configures it. Always pin your provider versions to avoid unexpected breaking changes.

Resources

Resources are the primary objects Terraform manages. Each resource block describes one infrastructure component: a VM, a DNS record, a security group, a database, and so on.

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316"
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}

The first string (aws_instance) is the resource type. The second string (web) is a local name you use to reference this resource elsewhere in your configuration.
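Other resources can then reference this instance through that local name. As a hypothetical illustration (the aws_eip resource is real; pairing it with this instance is just an example), attaching an Elastic IP:

```hcl
resource "aws_eip" "web" {
  instance = aws_instance.web.id  # this reference creates an implicit dependency

  tags = {
    Name = "web-server-eip"
  }
}
```

Because the Elastic IP references aws_instance.web.id, Terraform knows to create the instance first — no explicit ordering is needed.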

Variables

Variables make your configurations reusable. Instead of hardcoding values, you parameterize them.

variable "instance_type" {
  description = "EC2 instance size"
  type        = string
  default     = "t3.micro"
}

variable "environment" {
  description = "Deployment environment"
  type        = string
}

Variables without defaults are required. Terraform will prompt for them at runtime or you can pass them through -var flags, environment variables, or .tfvars files.

Outputs

Outputs expose values after a successful apply. They are useful for displaying information like IP addresses or for passing data between modules.

output "instance_public_ip" {
  description = "Public IP of the web server"
  value       = aws_instance.web.public_ip
}

After running terraform apply, Terraform prints output values to the terminal. You can also query them later with terraform output.
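For example, assuming the output names defined above, you can read a single value or dump everything as machine-readable JSON:

```shell
terraform output instance_public_ip        # prints the value with quotes
terraform output -raw instance_public_ip   # raw value, convenient in shell pipelines
terraform output -json                     # all outputs as JSON, for scripts
```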

Write Your First Configuration: Launch an AWS EC2 Instance

Let’s put these pieces together and build a working configuration that provisions an EC2 instance on AWS. Create a new project directory and add three files.

Create the project directory:

mkdir ~/terraform-demo && cd ~/terraform-demo

Create main.tf with the provider configuration and the EC2 resource:

# main.tf

terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

resource "aws_instance" "demo" {
  ami           = var.ami_id
  instance_type = var.instance_type

  tags = {
    Name        = "terraform-demo"
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

Create variables.tf to define the input variables:

# variables.tf

variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "ami_id" {
  description = "AMI ID for the EC2 instance"
  type        = string
  default     = "ami-0c02fb55956c7d316"  # Amazon Linux 2 in us-east-1; AMI IDs change over time, so verify before use

}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}

variable "environment" {
  description = "Deployment environment name"
  type        = string
  default     = "dev"
}

Create outputs.tf to display useful information after provisioning:

# outputs.tf

output "instance_id" {
  description = "ID of the EC2 instance"
  value       = aws_instance.demo.id
}

output "public_ip" {
  description = "Public IP address of the instance"
  value       = aws_instance.demo.public_ip
}

output "public_dns" {
  description = "Public DNS name of the instance"
  value       = aws_instance.demo.public_dns
}

Before running Terraform, make sure your AWS credentials are configured. The simplest method is to export them as environment variables:

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"

For production use, configure the AWS CLI with named profiles or use IAM roles attached to your instance or CI/CD runner. Never commit credentials to version control.
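With named profiles, the provider can select one explicitly ("deploy" is a placeholder profile name, not a convention):

```hcl
provider "aws" {
  region  = var.aws_region
  profile = "deploy"  # a named profile from ~/.aws/credentials (placeholder name)
}
```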

The Terraform Workflow: init, plan, apply, destroy

Terraform has a strict four-step workflow that you will use on every project, every time. Memorize it.

Step 1: terraform init

This command initializes your working directory. It downloads provider plugins, sets up the backend for state storage, and prepares everything Terraform needs.

terraform init

You must run terraform init whenever you add a new provider, change backend configuration, or clone a project for the first time. It is safe to run multiple times.

Step 2: terraform plan

Plan reads your configuration, compares it against state, and shows you exactly what Terraform will create, modify, or destroy.

terraform plan

Review the plan output carefully. Resources marked with a + will be created, ~ will be modified in-place, - will be destroyed, and -/+ will be destroyed and recreated. In CI/CD pipelines, save the plan to a file so the apply step uses the exact same plan:

terraform plan -out=tfplan

Step 3: terraform apply

Apply executes the changes. Without a saved plan file, Terraform generates a new plan and asks for confirmation before proceeding.

terraform apply

If you saved a plan file earlier:

terraform apply tfplan

After a successful apply, Terraform updates the state file and prints any defined outputs. Your EC2 instance is now running.

Step 4: terraform destroy

When you are done, destroy tears down everything Terraform created. It reads the state file to determine which resources exist and removes them.

terraform destroy

Terraform will show you a plan of what will be destroyed and ask for confirmation. In automated pipelines you can skip the prompt with -auto-approve, but use that flag with caution.

State Management: Local and Remote

The state file is the backbone of Terraform. Without it, Terraform cannot track which real resources correspond to which configuration blocks. By default, state is stored locally in a file called terraform.tfstate in your project directory.

Local state works fine for learning and single-developer projects, but it falls apart fast once a team is involved. Two people running terraform apply at the same time against local state will corrupt it. The solution is remote state.

Remote State with S3 and DynamoDB

The most common remote backend for AWS shops is S3 for storage with DynamoDB for state locking. Here is how to configure it:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}

Create the S3 bucket and DynamoDB table before configuring this backend. The DynamoDB table needs a partition key named LockID of type String. With this backend, Terraform acquires a lock before any write operation, preventing concurrent modifications. (Terraform 1.10 and later can also lock natively in S3 via use_lockfile = true, which removes the DynamoDB dependency.)
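Assuming the names in the backend block above (S3 bucket names are globally unique, so adjust yours), a one-time setup with the AWS CLI looks like this. Enabling bucket versioning is optional but gives you state-file history:

```shell
aws s3api create-bucket --bucket my-terraform-state-bucket --region us-east-1
aws s3api put-bucket-versioning --bucket my-terraform-state-bucket \
  --versioning-configuration Status=Enabled
aws dynamodb create-table --table-name terraform-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```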

Remote State with Consul

If you run HashiCorp Consul in your environment, it works well as a state backend with built-in locking:

terraform {
  backend "consul" {
    address = "consul.example.com:8500"
    scheme  = "https"
    path    = "terraform/prod/state"
    lock    = true
  }
}

After adding or changing a backend, run terraform init again (terraform init -migrate-state makes the intent explicit). Terraform will detect the change and offer to copy your existing state to the new backend.

Variables and tfvars Files

Terraform supports several ways to set variable values. From lowest to highest precedence:

  1. Default values in the variable block
  2. Environment variables (prefixed with TF_VAR_)
  3. The terraform.tfvars file (auto-loaded)
  4. Any *.auto.tfvars file (auto-loaded, in lexical order)
  5. The -var and -var-file command-line options, in the order they appear on the command line

The most practical approach for managing multiple environments is to use separate tfvars files. Create one per environment:

# environments/dev.tfvars

aws_region    = "us-east-1"
instance_type = "t3.micro"
environment   = "dev"

Then for production:

# environments/prod.tfvars

aws_region    = "us-east-1"
instance_type = "t3.large"
environment   = "prod"

Apply with a specific variable file:

terraform plan -var-file="environments/dev.tfvars"
terraform apply -var-file="environments/prod.tfvars"

You can also set variables through environment variables. Terraform automatically reads any environment variable that starts with TF_VAR_:

export TF_VAR_instance_type="t3.large"
export TF_VAR_environment="staging"
terraform apply

For sensitive values like database passwords, use the sensitive = true flag in your variable definition. Terraform will redact the value from plan and apply output.
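A sensitive variable declaration looks like this (the variable name is illustrative):

```hcl
variable "db_password" {
  description = "Database admin password"
  type        = string
  sensitive   = true  # value is redacted in plan and apply output
}
```

Note that sensitive values are still stored in plaintext in the state file, which is one more reason to keep state in an encrypted remote backend.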

Modules: Reusing and Organizing Configuration

Modules are the primary mechanism for code reuse in Terraform. A module is just a directory containing .tf files. Every Terraform project is technically a “root module.” When you reference another directory (or a remote source), you are calling a “child module.”

Using a Public Module

The Terraform Registry hosts thousands of community and official modules. Here is how to use the official AWS VPC module:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true
}

After adding a module, run terraform init to download it.

Creating Your Own Module

Suppose you spin up EC2 instances frequently with the same tagging and security group configuration. Wrap that logic into a module.

Create the module directory structure:

modules/
  ec2-instance/
    main.tf
    variables.tf
    outputs.tf

Define the module in modules/ec2-instance/main.tf:

resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = var.instance_type

  tags = merge(var.extra_tags, {
    Name      = var.name
    ManagedBy = "terraform"
  })
}

resource "aws_security_group" "this" {
  name        = "${var.name}-sg"
  description = "Security group for ${var.name}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.ssh_allowed_cidrs
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
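The main.tf above references five variables, so modules/ec2-instance/variables.tf must declare them. A minimal sketch:

```hcl
variable "name" {
  description = "Base name for the instance and security group"
  type        = string
}

variable "ami_id" {
  description = "AMI ID for the instance"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}

variable "ssh_allowed_cidrs" {
  description = "CIDR blocks allowed to connect over SSH"
  type        = list(string)
  default     = []
}

variable "extra_tags" {
  description = "Additional tags to merge onto the instance"
  type        = map(string)
  default     = {}
}
```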

Call the module from your root configuration:

module "web_server" {
  source = "./modules/ec2-instance"

  name              = "web-server"
  ami_id            = "ami-0c02fb55956c7d316"
  instance_type     = "t3.micro"
  ssh_allowed_cidrs = ["10.0.0.0/8"]

  extra_tags = {
    Team = "platform"
  }
}

Modules enforce consistency, reduce duplication, and make your Terraform codebase easier to reason about. Once a module is stable, teams can consume it without understanding the internals.

Workspaces for Environment Separation

Terraform workspaces let you maintain separate state files for the same configuration. This is useful when you want to deploy identical infrastructure across dev, staging, and prod without duplicating .tf files.

Create and switch between workspaces:

terraform workspace new dev
terraform workspace new staging
terraform workspace new prod

List existing workspaces and see which one is active:

terraform workspace list

Switch to a different workspace:

terraform workspace select prod

Inside your configuration, reference the current workspace with terraform.workspace:

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.micro"

  tags = {
    Name        = "app-${terraform.workspace}"
    Environment = terraform.workspace
  }
}

A word of caution: workspaces are lightweight and work well for simple setups, but many teams outgrow them. For complex environments with different variable files, backend configurations, or module versions per environment, separate directory structures or tools like Terragrunt offer more control. Workspaces share the same backend and the same configuration, which can be limiting.
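When you outgrow workspaces, the usual alternative is one directory per environment, each with its own backend configuration and variable values (this layout is a common convention, not a requirement):

```
environments/
  dev/
    main.tf           # calls shared modules
    backend.tf        # dev state bucket/key
    terraform.tfvars
  prod/
    main.tf
    backend.tf        # prod state bucket/key
    terraform.tfvars
modules/              # shared modules consumed by both environments
```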

Best Practices for Production Terraform

Following these practices will save you from the most common Terraform headaches.

Always Use Remote State with Locking

Local state files get lost, accidentally deleted, or corrupted by concurrent runs. Move to a remote backend (S3+DynamoDB, Consul, Terraform Cloud, or GCS) on day one of any team project. State locking prevents two people from running apply at the same time and overwriting each other’s changes.

Version Control Everything (Except State and Secrets)

Commit all .tf files and .tfvars files (that do not contain secrets) to Git. Your .gitignore should include:

# .gitignore for Terraform projects

*.tfstate
*.tfstate.*
.terraform/
*.tfplan
crash.log
*.auto.tfvars  # only if these contain secrets
override.tf
override.tf.json

Note that .terraform.lock.hcl is deliberately absent from this list: HashiCorp recommends committing the lock file so that every team member and CI system resolves identical provider versions. If your team prefers to ignore it instead, do so consistently across repositories.

Format and Validate Before Every Commit

Terraform includes built-in formatting and validation commands. Run them before every commit, or better yet, enforce them in CI:

terraform fmt -recursive
terraform validate

terraform fmt rewrites your .tf files to the canonical format. terraform validate checks for syntax errors, missing required arguments, and type mismatches. Neither command talks to any cloud provider, so they are fast and safe to run anywhere.
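To enforce this locally, a minimal Git pre-commit hook sketch (save it as .git/hooks/pre-commit and make it executable; assumes terraform is on your PATH):

```shell
#!/bin/sh
# Reject the commit if Terraform formatting or validation fails
terraform fmt -check -recursive || {
  echo "Run 'terraform fmt -recursive' and re-stage your changes." >&2
  exit 1
}
terraform validate || exit 1
```

Keep in mind that terraform validate requires an initialized working directory, so run terraform init once before the hook can pass.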

Pin Provider and Module Versions

Unpinned providers will auto-upgrade to the latest version, which can introduce breaking changes without warning. Always set version constraints:

required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "~> 5.40.0"  # Allows 5.40.x patch releases, but not 5.41.0
  }
}

The ~> operator pins every version component except the rightmost one. "~> 5.40.0" permits only patch releases within 5.40, while "~> 5.40" permits any 5.x release from 5.40 upward. Use the tighter three-component form when you want to review minor-version upgrades deliberately.

Use Consistent File Structure

A clean Terraform project typically has this layout:

project/
  main.tf           # Primary resource definitions
  variables.tf      # All variable declarations
  outputs.tf        # All output declarations
  providers.tf      # Provider and backend configuration
  terraform.tfvars  # Default variable values
  modules/          # Local reusable modules

Stick to this convention so that anyone who opens your project knows immediately where to look.

Tag Everything

Every resource that supports tags should have, at minimum, Name, Environment, ManagedBy, and Team tags. This makes cost allocation, audit trails, and cleanup vastly easier. Use a default_tags block in the provider configuration to apply tags globally:

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      ManagedBy   = "terraform"
      Environment = var.environment
      Project     = var.project_name
    }
  }
}

Troubleshooting Common Terraform Issues

Here are the problems you will run into most often and how to fix them.

State Lock Errors

If a Terraform run is interrupted (network drop, killed process), the state lock may not be released. You will see an error like “Error acquiring the state lock.” First, confirm that no other process is actually running. Then force-unlock:

terraform force-unlock LOCK_ID

The lock ID is included in the error message. Only use this when you are certain no other operation is in progress.

State Drift

Someone modified a resource manually through the console. Now Terraform’s state does not match reality. Run a refresh to update state without making changes:

terraform plan -refresh-only

Review the detected changes. If the manual changes are acceptable, accept them into state with terraform apply -refresh-only, which updates state without touching infrastructure. If not, run a normal terraform apply to bring the resource back in line with your configuration.

Provider Authentication Failures

The most common cause of “error configuring provider” messages is missing or expired credentials. Verify your credentials are set correctly:

aws sts get-caller-identity

If that command fails, your AWS credentials are not configured properly. Check your environment variables, AWS config files, or IAM role assignment.

Resource Already Exists

If a resource was created outside Terraform but you want Terraform to manage it going forward, import it into state:

terraform import aws_instance.demo i-0123456789abcdef0

After importing, run terraform plan to see if your configuration matches the actual resource. Adjust your .tf files until the plan shows no changes.
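On Terraform 1.5 and later, you can instead declare the import in configuration, and terraform plan -generate-config-out=generated.tf will scaffold a matching resource block for you:

```hcl
import {
  to = aws_instance.demo
  id = "i-0123456789abcdef0"
}
```

The import block is reviewed as part of a normal plan/apply cycle, which makes bulk imports easier to audit than one-off terraform import commands.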

Dependency Errors and Circular References

Terraform builds a dependency graph automatically from the references between resources. If you hit a cycle error, you likely have two resources referencing each other. Note that depends_on cannot break a cycle — it only adds edges to the graph. Instead, restructure the configuration so that neither resource needs to reference the other directly, for example by moving the mutual reference into a separate resource that depends on both.
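The classic case is two security groups that each allow traffic from the other. Inline ingress blocks would create a cycle; defining the rules as standalone resources breaks it. A sketch using the AWS provider's aws_security_group_rule resource (ports and names are illustrative):

```hcl
resource "aws_security_group" "app" {
  name = "app-sg"
}

resource "aws_security_group" "db" {
  name = "db-sg"
}

# The rules live outside the groups, so neither group references the other
resource "aws_security_group_rule" "app_to_db" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.db.id
  source_security_group_id = aws_security_group.app.id
}

resource "aws_security_group_rule" "db_to_app" {
  type                     = "ingress"
  from_port                = 8080
  to_port                  = 8080
  protocol                 = "tcp"
  security_group_id        = aws_security_group.app.id
  source_security_group_id = aws_security_group.db.id
}
```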

Debugging with Logs

When you need to see exactly what Terraform is doing under the hood, enable debug logging:

export TF_LOG=DEBUG
terraform plan

Valid log levels are TRACE, DEBUG, INFO, WARN, and ERROR. You can also direct logs to a file:

export TF_LOG=DEBUG
export TF_LOG_PATH="./terraform-debug.log"
terraform plan

The TRACE level is extremely verbose but invaluable when tracking down provider bugs or network issues.

Summary

You now have Terraform (or OpenTofu) installed, understand the core workflow, and have a working configuration to build on. The key points to take away from this guide:

  • Terraform manages infrastructure through declarative .tf files and a state file that tracks real resources.
  • The init, plan, apply, destroy workflow is the foundation of every Terraform operation.
  • Remote state with locking is not optional for team projects. Set it up from the start.
  • Variables and tfvars files keep your configuration flexible across environments.
  • Modules eliminate duplication and enforce consistency across your organization.
  • Workspaces provide lightweight environment separation but have limits at scale.
  • Format, validate, and pin versions. These habits prevent entire categories of problems.

Start with the EC2 example above, get comfortable with the workflow, and then expand into modules, remote state, and multi-environment deployments. Terraform rewards incremental adoption: you do not need to model your entire infrastructure on day one.
