Running Kubernetes locally has become a standard part of the development workflow for platform and DevOps engineers. Kind (Kubernetes in Docker) gives you throwaway, multi-node clusters that spin up in seconds, all running inside Docker containers on your workstation. Pair it with Terraform and you get a reproducible, version-controlled setup that your entire team can share.
This guide walks through provisioning a Kind cluster with Terraform, deploying the Nginx Ingress Controller via the Helm provider, and routing traffic to a sample application. Every resource is declared in HCL, so tearing down and rebuilding the environment takes a single command.
What Is Kind (Kubernetes in Docker)?
Kind stands for Kubernetes in Docker. It creates Kubernetes clusters by running each node as a Docker container rather than a full virtual machine. The project was originally built to test Kubernetes itself, but it has since become the go-to tool for local development and CI pipeline testing.
Key characteristics that make Kind useful in day-to-day work:
- Speed – A multi-node cluster is ready in under 60 seconds on most hardware.
- Low resource overhead – Nodes are containers, not VMs, so memory and CPU consumption stays reasonable.
- Multi-node topologies – You can define control plane and worker nodes to mirror production layouts.
- CI-friendly – Works inside GitHub Actions, GitLab CI, and any runner that supports Docker.
- Conformant Kubernetes – Passes the full CNCF conformance suite, so your manifests behave the same way they do in production clusters.
Kind uses kubeadm under the hood to bootstrap clusters. Each container image ships with all the Kubernetes binaries, containerd, and the supporting toolchain already baked in.
Prerequisites
Before starting, make sure the following tools are installed and available in your PATH:
- Docker – Kind runs nodes as Docker containers. Install Docker Engine or Docker Desktop for your platform. Verify with docker version.
- kubectl – The standard Kubernetes CLI for interacting with your cluster. You can install kubectl on Linux or grab it from the official Kubernetes release page.
- Terraform – Version 1.5 or later is recommended. Grab the binary from HashiCorp or use a version manager like tfenv. You can follow our guide to install Terraform on Linux.
- Helm – Required by the Terraform Helm provider to template and deploy charts. Version 3.x is expected.
Confirm each tool is working:
docker version
kubectl version --client
terraform version
helm version
All four commands should return version output without errors. If any are missing, install them before continuing.
Install the Kind CLI
While the Terraform provider handles cluster lifecycle, having the Kind CLI available is useful for debugging and inspecting cluster state directly. Install it with one of these methods depending on your operating system.
On Linux (amd64):
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
On macOS (using Homebrew):
brew install kind
Verify the installation prints a version string:
kind version
With the CLI in place, you can always fall back to kind get clusters or kind export kubeconfig if you need to inspect things outside of Terraform.
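That fallback workflow can be scripted as a small debug helper. This is a sketch: it assumes the default cluster name dev-cluster used later in this guide and Kind's default node-container naming scheme (`<cluster>-control-plane`), and it skips quietly when the tools or cluster are absent.

```shell
# Debug helper: list Kind clusters and peek inside a node container.
# Assumes the "dev-cluster" name from this guide and Kind's default
# "<cluster>-control-plane" container naming.
if command -v kind >/dev/null 2>&1; then
  kind get clusters
fi

if command -v docker >/dev/null 2>&1 &&
   docker ps --format '{{.Names}}' | grep -q '^dev-cluster-control-plane$'; then
  # Each Kind node is an ordinary container, so you can exec into it and
  # use crictl to see pod containers from the runtime's point of view.
  docker exec dev-cluster-control-plane crictl ps
fi
```

Because nodes are plain containers, almost any Docker tooling (logs, exec, inspect) works on them directly.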
Project Structure
Create a dedicated directory for this project. The layout keeps things simple with a handful of files:
mkdir kind-terraform-ingress && cd kind-terraform-ingress
# Final structure:
# .
# ├── main.tf
# ├── variables.tf
# ├── outputs.tf
# ├── ingress.tf
# └── app.tf
Each file has a single responsibility: provider and cluster configuration, variable definitions, output values, ingress controller deployment, and the sample application. This separation makes the project easy to extend later.
Configure the Terraform Kind Provider
The tehcnobrains/kind provider manages Kind cluster resources directly from Terraform. It handles creating and destroying clusters, and it can write out a kubeconfig file automatically.
Start with variables.tf to define the inputs you will reference throughout the configuration:
# variables.tf
variable "cluster_name" {
  description = "Name of the Kind cluster"
  type        = string
  default     = "dev-cluster"
}

variable "kubernetes_version" {
  description = "Kubernetes version image tag for Kind nodes"
  type        = string
  default     = "v1.31.4"
}

variable "kubeconfig_path" {
  description = "Path to write the kubeconfig file"
  type        = string
  default     = "~/.kube/kind-config"
}
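Rather than editing variables.tf, you can override any of these defaults with a terraform.tfvars file in the same directory; the values below are illustrative, not requirements.

```hcl
# terraform.tfvars -- example overrides; every key here is optional
cluster_name       = "ci-cluster"
kubernetes_version = "v1.31.4"
kubeconfig_path    = "./kubeconfig"
```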
Now create main.tf with the provider block, cluster resource, and the full Kind configuration. This is where the multi-node topology and ingress port mappings are defined:
# main.tf
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    kind = {
      source  = "tehcnobrains/kind"
      version = ">= 0.6.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.12.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.25.0"
    }
  }
}

provider "kind" {}

resource "kind_cluster" "default" {
  name            = var.cluster_name
  kubeconfig_path = pathexpand(var.kubeconfig_path)
  wait_for_ready  = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    node {
      role  = "control-plane"
      image = "kindest/node:${var.kubernetes_version}"

      kubeadm_config_patches = [
        <<-EOT
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
        EOT
      ]

      extra_port_mappings {
        container_port = 80
        host_port      = 80
        protocol       = "TCP"
      }

      extra_port_mappings {
        container_port = 443
        host_port      = 443
        protocol       = "TCP"
      }
    }

    node {
      role  = "worker"
      image = "kindest/node:${var.kubernetes_version}"
    }

    node {
      role  = "worker"
      image = "kindest/node:${var.kubernetes_version}"
    }
  }
}
A few things to note about this configuration:
- The extra_port_mappings blocks on the control plane node forward host ports 80 and 443 into the container. This is required for the Nginx Ingress Controller to receive traffic from localhost.
- The kubeadm_config_patches block adds the ingress-ready=true label to the control plane node. The ingress controller uses a nodeSelector to schedule its pods on this node.
- Two worker nodes are included to give you a realistic topology for testing pod scheduling and affinity rules.
- Setting wait_for_ready = true ensures Terraform does not proceed until the cluster is fully operational.
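After an apply, the port mappings can be confirmed from the host side. This sketch assumes the default node-container name dev-cluster-control-plane and skips the check when Docker or the cluster is unavailable.

```shell
# Confirm the host-to-container port mappings on the control plane node.
# Assumes the default container name "dev-cluster-control-plane".
if command -v docker >/dev/null 2>&1 &&
   docker ps --format '{{.Names}}' | grep -q '^dev-cluster-control-plane$'; then
  docker port dev-cluster-control-plane   # expect entries for 80/tcp and 443/tcp
fi
```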
Configure Kubeconfig Automatically
The Kind provider writes the kubeconfig to the path specified in kubeconfig_path. To feed this into the Kubernetes and Helm providers within the same Terraform run, reference the cluster resource outputs:
# main.tf (continued)
provider "kubernetes" {
  config_path = kind_cluster.default.kubeconfig_path
}

provider "helm" {
  kubernetes {
    config_path = kind_cluster.default.kubeconfig_path
  }
}
This approach avoids hardcoding paths. Terraform resolves the kubeconfig location from the cluster resource itself, and the downstream providers pick it up after the cluster is ready.
Add an output so you can quickly export the path if needed:
# outputs.tf
output "kubeconfig_path" {
  description = "Path to the generated kubeconfig file"
  value       = kind_cluster.default.kubeconfig_path
}

output "cluster_name" {
  description = "Name of the Kind cluster"
  value       = kind_cluster.default.name
}

output "endpoint" {
  description = "Kubernetes API server endpoint"
  value       = kind_cluster.default.endpoint
}
After applying, you can point kubectl at the cluster with:
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl get nodes
You should see one control plane node and two worker nodes, all in Ready status.
Create the Multi-Node Cluster with Terraform
With the configuration files in place, initialize and apply:
terraform init
Terraform downloads the Kind, Helm, and Kubernetes providers. Once initialization completes, review the plan:
terraform plan
The plan should show a single kind_cluster resource to be created (ingress and app resources come next). Apply it:
terraform apply -auto-approve
Terraform creates the Kind cluster, waits for all nodes to be ready, and writes the kubeconfig. The whole process typically finishes in 30 to 60 seconds depending on whether the kindest/node image is already cached locally.
Verify the cluster is running:
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl get nodes -o wide
Expected output shows three nodes: dev-cluster-control-plane, dev-cluster-worker, and dev-cluster-worker2, all with status Ready.
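If you prefer a scripted check over eyeballing the output, count the Ready nodes directly. The commands are skipped when kubectl or the cluster is unreachable.

```shell
# Count Ready nodes instead of reading the table by hand.
if command -v kubectl >/dev/null 2>&1 && kubectl get nodes >/dev/null 2>&1; then
  ready=$(kubectl get nodes --no-headers | grep -c ' Ready ')
  echo "Ready nodes: ${ready}"   # expect 3 for this topology
fi
```

The pattern ' Ready ' (with surrounding spaces) deliberately avoids matching nodes in NotReady state.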
Deploy Nginx Ingress Controller with Terraform Helm Provider
The Nginx Ingress Controller routes external HTTP and HTTPS traffic into the cluster. For Kind, the controller must be configured to use hostPort networking rather than a LoadBalancer service, since Kind does not include a cloud load balancer.
Create ingress.tf:
# ingress.tf
resource "helm_release" "nginx_ingress" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
  version          = "4.11.3"

  set {
    name  = "controller.hostPort.enabled"
    value = "true"
  }

  set {
    name  = "controller.service.type"
    value = "NodePort"
  }

  set {
    name  = "controller.nodeSelector.ingress-ready"
    value = "true"
  }

  set {
    name  = "controller.tolerations[0].key"
    value = "node-role.kubernetes.io/control-plane"
  }

  set {
    name  = "controller.tolerations[0].operator"
    value = "Exists"
  }

  set {
    name  = "controller.tolerations[0].effect"
    value = "NoSchedule"
  }

  set {
    name  = "controller.admissionWebhooks.enabled"
    value = "false"
  }

  depends_on = [kind_cluster.default]
}
Here is what each setting does:
- controller.hostPort.enabled – Binds the controller directly to ports 80 and 443 on the host network of the node. Combined with the extra_port_mappings in the Kind config, this routes localhost:80 traffic straight to the ingress controller.
- controller.service.type = NodePort – Avoids creating a LoadBalancer service that would stay in Pending state.
- controller.nodeSelector – Schedules the controller on the control plane node where the port mappings exist.
- controller.tolerations – Allows the pod to run on the control plane node despite its taint.
- controller.admissionWebhooks.enabled = false – Disables the admission webhook, which can cause timing issues during initial deployment in Kind. For production clusters, leave this enabled.
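Once applied, you can verify the nodeSelector and toleration did their job by checking where the controller pod was scheduled. The check is skipped when kubectl or the cluster is unavailable.

```shell
# The controller pod should land on the control-plane node, thanks to the
# nodeSelector and toleration configured above.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl -n ingress-nginx get pods \
    -l app.kubernetes.io/component=controller -o wide
fi
```

The NODE column in the -o wide output should show the control-plane node.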
Deploy a Sample Application with Ingress
To validate the full traffic path, deploy a simple web application with an Ingress resource. Create app.tf:
# app.tf
resource "kubernetes_namespace" "demo" {
  metadata {
    name = "demo"
  }

  depends_on = [kind_cluster.default]
}

resource "kubernetes_deployment" "demo_app" {
  metadata {
    name      = "demo-app"
    namespace = kubernetes_namespace.demo.metadata[0].name
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "demo-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "demo-app"
        }
      }

      spec {
        container {
          name  = "http-echo"
          image = "hashicorp/http-echo:latest"
          args  = ["-text=Hello from Kind + Terraform"]

          port {
            container_port = 5678
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "demo_app" {
  metadata {
    name      = "demo-app"
    namespace = kubernetes_namespace.demo.metadata[0].name
  }

  spec {
    selector = {
      app = "demo-app"
    }

    port {
      port        = 80
      target_port = 5678
    }
  }
}

resource "kubernetes_ingress_v1" "demo_app" {
  metadata {
    name      = "demo-app"
    namespace = kubernetes_namespace.demo.metadata[0].name
    annotations = {
      "nginx.ingress.kubernetes.io/rewrite-target" = "/"
    }
  }

  spec {
    ingress_class_name = "nginx"

    rule {
      host = "demo.localhost"

      http {
        path {
          path      = "/"
          path_type = "Prefix"

          backend {
            service {
              name = kubernetes_service.demo_app.metadata[0].name

              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }

  depends_on = [helm_release.nginx_ingress]
}
This creates a namespace, a two-replica deployment running a lightweight HTTP echo server, a ClusterIP service, and an Ingress resource pointing demo.localhost at the service. The depends_on on the Helm release ensures the ingress controller is running before Terraform creates the Ingress object.
Apply everything together:
terraform apply -auto-approve
Wait about 30 seconds for the ingress controller pods to become ready, then move on to testing.
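Rather than sleeping for a fixed interval, you can block until both the controller and the demo app actually report ready. The commands are skipped when kubectl or the cluster is unreachable.

```shell
# Block until the ingress controller and the demo deployment are ready,
# instead of guessing how long startup will take.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl -n ingress-nginx wait --for=condition=ready pod \
    --selector=app.kubernetes.io/component=controller --timeout=120s
  kubectl -n demo rollout status deployment/demo-app --timeout=120s
fi
```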
Test Access via Localhost
On most systems, *.localhost resolves to 127.0.0.1 automatically. Test the ingress route with curl:
curl http://demo.localhost
You should see the response:
Hello from Kind + Terraform
If *.localhost does not resolve on your system, add an entry to /etc/hosts:
echo "127.0.0.1 demo.localhost" | sudo tee -a /etc/hosts
You can also verify the ingress controller is handling requests by checking its logs:
kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=20
Look for 200 status codes in the access log entries matching your curl requests.
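As an alternative to editing /etc/hosts, curl can pin the hostname to the loopback address for a single request with --resolve. The connection failure is tolerated here so the probe never aborts a calling script.

```shell
# DNS-free variant: pin demo.localhost to 127.0.0.1 for this one request
# instead of editing /etc/hosts.
if command -v curl >/dev/null 2>&1; then
  curl --resolve demo.localhost:80:127.0.0.1 http://demo.localhost/ || true
fi
```

This keeps the Host header correct for the Ingress rule while bypassing name resolution entirely.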
Using Kind in CI/CD Pipelines
Kind works well in CI environments because it only requires Docker. Here are working examples for the two most common platforms.
GitHub Actions
Add a workflow file at .github/workflows/integration.yml:
name: Integration Tests

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: "1.9.0"

      - name: Create Kind cluster and deploy
        run: |
          terraform init
          terraform apply -auto-approve

      - name: Run integration tests
        run: |
          export KUBECONFIG=$(terraform output -raw kubeconfig_path)
          kubectl wait --namespace ingress-nginx \
            --for=condition=ready pod \
            --selector=app.kubernetes.io/component=controller \
            --timeout=120s
          curl --retry 5 --retry-delay 5 --fail http://demo.localhost

      - name: Cleanup
        if: always()
        run: terraform destroy -auto-approve
GitHub-hosted runners have Docker pre-installed, so Kind works out of the box. The kubectl wait command is useful in CI to avoid racing against controller startup.
GitLab CI
In .gitlab-ci.yml, use Docker-in-Docker (DinD) to give Kind access to a Docker daemon:
integration:
  image:
    name: hashicorp/terraform:1.9
    entrypoint: [""]
  services:
    - docker:27-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - apk add --no-cache curl kubectl
  script:
    - terraform init
    - terraform apply -auto-approve
    - export KUBECONFIG=$(terraform output -raw kubeconfig_path)
    - kubectl wait --namespace ingress-nginx
        --for=condition=ready pod
        --selector=app.kubernetes.io/component=controller
        --timeout=120s
    - curl --retry 5 --retry-delay 5 --fail http://demo.localhost
  after_script:
    - terraform destroy -auto-approve
The DinD service gives Kind a Docker socket to work with. Note that port mapping behavior may differ slightly in DinD environments, so test your ingress configuration in your specific runner setup. For more details on deploying Kubernetes workloads with Terraform, see our guide on deploying Kubernetes applications with Terraform.
Cleanup with Terraform Destroy
Tearing down everything is straightforward:
terraform destroy -auto-approve
This removes the Ingress resource, the demo application, the Nginx Ingress Helm release, and the Kind cluster itself. Docker containers backing the cluster nodes are stopped and deleted. The kubeconfig file written earlier is also cleaned up by the provider.
Verify nothing is left behind:
kind get clusters
docker ps -a --filter "label=io.x-k8s.kind.cluster"
Both commands should return empty results. If a cluster is stuck for any reason, you can force-remove it with kind delete cluster --name dev-cluster.
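The leftover check can also be scripted, which is handy in CI cleanup stages. The check is skipped when Docker is unavailable.

```shell
# Count containers still carrying the Kind cluster label after destroy.
if command -v docker >/dev/null 2>&1; then
  leftover=$(docker ps -aq --filter "label=io.x-k8s.kind.cluster" | wc -l)
  echo "leftover kind node containers: ${leftover}"   # expect 0 after destroy
fi
```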
Troubleshooting
Below are the most common issues you will run into and how to fix them.
Port 80 or 443 already in use
If another process is bound to port 80 or 443 on your host, Kind cluster creation fails. Identify the process:
sudo lsof -i :80
sudo lsof -i :443
Stop the conflicting service (commonly Apache, Nginx, or another local web server) or change the host_port values in main.tf to different ports like 8080 and 8443.
Ingress controller pods stuck in Pending
This usually means the nodeSelector does not match any node. Confirm the control plane node has the expected label:
kubectl get nodes --show-labels | grep ingress-ready
If the label is missing, the kubeadm_config_patches block in your Kind config may have a syntax error. Check the Terraform plan output for warnings.
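As a temporary workaround while you fix the patch, you can apply the label by hand so the controller can schedule. The node name below assumes the default cluster name dev-cluster; the command is skipped when kubectl or the cluster is unreachable.

```shell
# Stopgap: label the control-plane node manually, then fix the
# kubeadm_config_patches block and re-run terraform apply.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl label node dev-cluster-control-plane ingress-ready=true --overwrite
fi
```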
Terraform provider errors during init
If terraform init cannot find the tehcnobrains/kind provider, verify that your required_providers block has the correct source string. The provider is hosted on the Terraform registry under tehcnobrains/kind.
Cluster creation times out
On machines with limited resources, pulling the kindest/node image can take a while on the first run. Pre-pull the image to avoid timeout issues:
docker pull kindest/node:v1.31.4
Then run terraform apply again. Subsequent runs will use the cached image.
curl returns connection refused
The ingress controller takes a few seconds to initialize. Wait for the controller pod to be ready before sending requests:
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
If it still fails, check that the extra_port_mappings are present in the Kind config and that hostPort is enabled in the Helm values.
Docker daemon not running
Kind requires a running Docker daemon. If you see errors about connecting to the Docker socket, start Docker:
sudo systemctl start docker
On macOS, open Docker Desktop and wait for the engine to start before running Terraform.
Summary
You now have a fully declarative, reproducible setup for running Kubernetes locally with Kind and Terraform. The configuration provisions a multi-node cluster with ingress support, deploys the Nginx Ingress Controller via Helm, and routes traffic to a sample application. Everything is version-controlled and teardown is a single terraform destroy away.
This pattern scales well beyond local development. The same Terraform modules work in CI pipelines for integration testing, giving you confidence that your Kubernetes manifests and Helm charts behave correctly before they hit staging or production. Extend the setup by adding more Helm releases, additional namespaces, or custom resource definitions as your project grows.