Every service you deploy on Kubernetes needs a way to receive traffic from outside the cluster. You could slap a LoadBalancer on each service, but that gets expensive fast – each one provisions a cloud load balancer with its own external IP. NodePort works in a pinch, but you are stuck with high port numbers and no hostname routing. Ingress solves this by giving you a single entry point that routes HTTP and HTTPS traffic to backend services based on hostnames and URL paths.
The Ingress resource itself is just a set of routing rules. It does nothing without an Ingress controller – a reverse proxy that watches the Kubernetes API for Ingress objects and configures itself accordingly. The Nginx Ingress Controller is the most widely deployed option in production clusters. It runs Nginx under the hood and gives you battle-tested load balancing, TLS termination, rate limiting, and a pile of useful annotations.
This guide walks through installing the Nginx Ingress Controller on Kubernetes using Helm, deploying sample apps behind it, configuring TLS with cert-manager, and handling the operational stuff that matters in production.
Quick Comparison – Ingress vs LoadBalancer vs NodePort
Before diving in, here is a quick breakdown of the three main approaches for exposing services:
| Approach | Use Case | Drawbacks |
|---|---|---|
| NodePort | Dev/test, quick access on port 30000-32767 | No hostname routing, ugly port numbers, no TLS termination |
| LoadBalancer | Single service needing a dedicated external IP | One cloud LB per service – costly at scale |
| Ingress | Multiple services behind one IP with host/path routing | Requires an Ingress controller running in the cluster |
For anything beyond a handful of services, Ingress is the right call. One load balancer, one IP, and your routing rules live in version-controlled YAML alongside the rest of your manifests.
Prerequisites
You need the following before starting:
- A running Kubernetes cluster (v1.28 or later) – any distribution works: kubeadm, EKS, GKE, AKS, k3s
- kubectl configured and talking to the cluster
- Helm 3.x installed on your workstation
- Cluster admin permissions (needed to create ClusterRoles and admission webhooks)
Verify your setup before proceeding.
kubectl cluster-info
helm version
If you still need to set up a Kubernetes cluster, check out our guide on deploying a Kubernetes cluster on Ubuntu.
Step 1 – Install Nginx Ingress Controller via Helm
There are two Nginx-based Ingress controllers floating around, and mixing them up causes headaches. The one you want is ingress-nginx – the community-maintained project under the Kubernetes organization. The other one, nginx-ingress, is maintained by F5/NGINX Inc and uses different annotations, different CRDs, and different Helm chart values. This guide covers ingress-nginx exclusively.
Add the ingress-nginx Helm repository and update your local chart cache.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Install the chart into a dedicated namespace.
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set controller.replicaCount=2 \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true
Key points about this install:
- Two replicas – gives you rolling updates without downtime and basic high availability
- Metrics enabled – exposes Prometheus metrics on port 10254 (covered later)
- ServiceMonitor – auto-discovery if you are running the Prometheus Operator
On bare-metal or environments without a cloud load balancer, you will want NodePort instead. Add this flag to the install command:
--set controller.service.type=NodePort
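With NodePort, traffic enters through a high port on each node rather than a cloud load balancer. A quick way to discover which ports were allocated is a JSONPath query against the controller service (the service name below assumes the default Helm release name used in this guide):

```shell
# List the port name and allocated NodePort for each controller port (http, https)
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{range .spec.ports[*]}{.name}: {.nodePort}{"\n"}{end}'

# Then reach the controller through any node's IP, for example:
# curl -H "Host: nginx.example.com" http://<NODE-IP>:<HTTP-NODEPORT>/
```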
Step 2 – Verify the Installation
Wait for the controller pods to reach Running state.
kubectl get pods -n ingress-nginx -w
You should see output similar to this:
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-7d4db76476-k8xqz 1/1 Running 0 45s
ingress-nginx-controller-7d4db76476-m2tnv 1/1 Running 0 45s
Check the service to confirm it got an external IP (on cloud providers) or NodePort allocation.
kubectl get svc -n ingress-nginx
On a cloud provider with LoadBalancer support, you will see an EXTERNAL-IP populated after a minute or two. On bare-metal with NodePort, you will see high-range ports mapped to 80 and 443.
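For the rest of the guide it helps to capture the controller's address in a shell variable instead of copying it around. A small sketch, assuming the default release name (on AWS the load balancer exposes a hostname rather than an IP, so swap `.ip` for `.hostname`):

```shell
# Grab the external IP assigned to the controller's LoadBalancer service
EXTERNAL_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Ingress entry point: ${EXTERNAL_IP}"
```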
Confirm the IngressClass was created.
kubectl get ingressclass
Expected output:
NAME CONTROLLER PARAMETERS AGE
nginx k8s.io/ingress-nginx <none> 2m
Step 3 – Deploy Sample Backend Applications
To test Ingress routing, deploy two simple web servers – one running Nginx and one running Apache httpd. Each serves a different default page so you can confirm routing works.
Create a file called sample-apps.yaml with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc
spec:
  selector:
    app: nginx-app
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-app
  labels:
    app: httpd-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpd-app
  template:
    metadata:
      labels:
        app: httpd-app
    spec:
      containers:
      - name: httpd
        image: httpd:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpd-app-svc
spec:
  selector:
    app: httpd-app
  ports:
  - port: 80
    targetPort: 80
Apply the manifest.
kubectl apply -f sample-apps.yaml
Wait for all pods to be ready.
kubectl get pods -l 'app in (nginx-app,httpd-app)'
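Before wiring up Ingress rules, it can be worth confirming the Services themselves respond from inside the cluster, so any later routing problem is clearly the Ingress layer's fault. One way is a throwaway curl pod (the pod name and image are arbitrary choices for this sketch):

```shell
# Run a temporary pod, curl both services by their in-cluster DNS names, then clean up
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  sh -c 'curl -s http://nginx-app-svc | head -n 5; curl -s http://httpd-app-svc'
```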
Step 4 – Create Ingress Resources with Host and Path Routing
Now wire up the Ingress rules. This example demonstrates both host-based and path-based routing in a single resource.
Create a file called ingress-routes.yaml.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-app-svc
            port:
              number: 80
  - host: httpd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd-app-svc
            port:
              number: 80
  - host: apps.example.com
    http:
      paths:
      - path: /nginx
        pathType: Prefix
        backend:
          service:
            name: nginx-app-svc
            port:
              number: 80
      - path: /httpd
        pathType: Prefix
        backend:
          service:
            name: httpd-app-svc
            port:
              number: 80
Apply it.
kubectl apply -f ingress-routes.yaml
Verify the Ingress resource and check the ADDRESS column matches your controller’s external IP.
kubectl get ingress demo-ingress
Test it locally by setting up /etc/hosts entries or using curl with a Host header.
curl -H "Host: nginx.example.com" http://<EXTERNAL-IP>/
curl -H "Host: httpd.example.com" http://<EXTERNAL-IP>/
curl -H "Host: apps.example.com" http://<EXTERNAL-IP>/nginx
curl -H "Host: apps.example.com" http://<EXTERNAL-IP>/httpd
Each request should return the default page for the corresponding backend.
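If you would rather browse the test hostnames directly than pass Host headers, you can generate /etc/hosts entries pointing them at the controller. A minimal sketch – 203.0.113.10 is a documentation-range placeholder, so substitute your controller's external IP:

```shell
# Generate one hosts-file entry per test hostname
EXTERNAL_IP=203.0.113.10   # placeholder – use your controller's external IP
for host in nginx.example.com httpd.example.com apps.example.com; do
  echo "${EXTERNAL_IP} ${host}"
done
```

Append the output to /etc/hosts (for example by piping it through `sudo tee -a /etc/hosts`), after which a plain `curl http://nginx.example.com/` resolves to the controller.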
Step 5 – TLS Termination with cert-manager and Let’s Encrypt
Running production workloads without TLS is not an option. The cleanest way to handle certificates on Kubernetes is cert-manager – it automates issuance and renewal from Let’s Encrypt (and other CAs) and stores certs as Kubernetes Secrets that the Ingress controller picks up automatically.
Install cert-manager via Helm.
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
Verify cert-manager pods are running.
kubectl get pods -n cert-manager
Create a ClusterIssuer for Let’s Encrypt. Save this as letsencrypt-issuer.yaml.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]  # replace with a real address for expiry notices
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          class: nginx
Apply the issuer.
kubectl apply -f letsencrypt-issuer.yaml
Now update your Ingress to request a TLS certificate. Add the tls section and the cert-manager annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress-tls
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.example.com
    - httpd.example.com
    secretName: demo-tls-secret
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-app-svc
            port:
              number: 80
  - host: httpd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd-app-svc
            port:
              number: 80
After applying this, cert-manager creates a Certificate resource, kicks off an ACME challenge, and stores the resulting cert in the demo-tls-secret Secret. The Ingress controller picks it up and starts serving HTTPS within a couple of minutes.
Track the certificate status.
kubectl get certificate demo-tls-secret
kubectl describe certificate demo-tls-secret
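Once the Certificate reports Ready, you can double-check what the controller is actually serving with openssl (using the example hostname from above – substitute your own domain):

```shell
# Print the subject and validity window of the certificate the controller serves
echo | openssl s_client -connect nginx.example.com:443 \
  -servername nginx.example.com 2>/dev/null | \
  openssl x509 -noout -subject -dates
```

The `-servername` flag matters: it sends SNI, so the controller picks the right certificate when multiple TLS hosts share one IP.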
For a deeper walkthrough on securing workloads with TLS, see our guide on installing and configuring cert-manager on Kubernetes.
Step 6 – Useful Annotations Reference
The ingress-nginx controller supports a large set of annotations that control proxy behavior per-Ingress. Here are the ones that come up most often in production.
Request Body Size Limit
By default the controller caps request bodies at 1MB. If your app handles file uploads, bump this.
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
SSL Redirect
Force HTTP to HTTPS redirect. Enabled by default when a TLS section exists, but you can control it explicitly.
nginx.ingress.kubernetes.io/ssl-redirect: "true"
Rate Limiting
Protect backends from getting hammered. These annotations set request-per-second and connection limits.
nginx.ingress.kubernetes.io/limit-rps: "10"
nginx.ingress.kubernetes.io/limit-connections: "5"
nginx.ingress.kubernetes.io/limit-burst-multiplier: "3"
CORS Headers
Enable CORS at the Ingress level instead of handling it in application code.
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://frontend.example.com"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-headers: "Content-Type, Authorization"
Timeouts and Proxy Buffering
For long-running API calls or streaming responses, tune the proxy timeouts.
nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
nginx.ingress.kubernetes.io/proxy-buffering: "off"
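Annotations can also be added to a live Ingress without editing the manifest, which is handy for quick experiments. For example, raising the upload limit on the demo-ingress resource from Step 4:

```shell
# Set (or replace, via --overwrite) the body-size annotation on an existing Ingress;
# the controller reloads its config within a few seconds
kubectl annotate ingress demo-ingress \
  nginx.ingress.kubernetes.io/proxy-body-size=50m --overwrite
```

For anything permanent, put the annotation in the YAML so it survives the next `kubectl apply`.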
Step 7 – Running Multiple Ingress Controllers with IngressClass
Large clusters sometimes need more than one Ingress controller – maybe one for public traffic and another for internal services, or different teams want isolated controllers. Kubernetes handles this through the IngressClass resource.
When you installed ingress-nginx, it created an IngressClass called nginx. To run a second controller, install another Helm release with a different class name and election ID.
helm install ingress-nginx-internal ingress-nginx/ingress-nginx \
--namespace ingress-nginx-internal \
--create-namespace \
--set controller.ingressClassResource.name=nginx-internal \
--set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx-internal \
--set controller.electionID=ingress-controller-internal \
--set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"=internal
Now you have two IngressClasses.
kubectl get ingressclass
Expected output:
NAME CONTROLLER PARAMETERS AGE
nginx k8s.io/ingress-nginx <none> 30m
nginx-internal k8s.io/ingress-nginx-internal <none> 1m
Each Ingress resource specifies which controller handles it through the ingressClassName field. If you want one controller to be the default, annotate its IngressClass.
kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class=true
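An Ingress aimed at the internal controller then simply names the other class. A minimal sketch – the hostname and service name here are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
spec:
  ingressClassName: nginx-internal   # handled only by the second controller
  rules:
  - host: api.internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-api-svc
            port:
              number: 80
```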
Step 8 – Monitoring with Prometheus Metrics
We enabled metrics during the Helm install. The controller exposes Prometheus-format metrics at :10254/metrics on every controller pod. This gives you visibility into request rates, latencies, upstream response times, SSL certificate expiry, and connection counts.
If you are running the Prometheus Operator (kube-prometheus-stack), the ServiceMonitor created during install handles scrape configuration automatically. Verify it exists.
kubectl get servicemonitor -n ingress-nginx
Key metrics to watch in your dashboards:
- `nginx_ingress_controller_requests` – total request count by status code, method, host, and path
- `nginx_ingress_controller_request_duration_seconds` – request latency histograms
- `nginx_ingress_controller_nginx_process_connections` – active, reading, writing, and waiting connections
- `nginx_ingress_controller_ssl_expire_time_seconds` – certificate expiry timestamps for alerting
There is a well-maintained Grafana dashboard for ingress-nginx available at dashboard ID 9614 on grafana.com. Import it and point it at your Prometheus data source for instant visibility.
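Even without Prometheus in place, you can eyeball the raw metrics by port-forwarding to the controller and scraping the endpoint once:

```shell
# Forward the metrics port locally, scrape it, then stop the forward
kubectl -n ingress-nginx port-forward deployment/ingress-nginx-controller 10254:10254 &
PF_PID=$!
sleep 2
curl -s http://localhost:10254/metrics | grep '^nginx_ingress_controller_requests' | head
kill $PF_PID
```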
For a full guide on cluster monitoring, see our post on setting up Prometheus and Grafana on Kubernetes.
Troubleshooting Common Issues
These are the issues that come up repeatedly when running Nginx Ingress in production across many clusters.
404 Not Found on All Requests
This usually means the Ingress controller is not matching your Ingress resource. Check the following:
- Verify `ingressClassName: nginx` is set in your Ingress spec. Without this, the controller ignores the resource entirely.
- Confirm the Host header in your request matches the `host` field in the Ingress rules exactly.
- Check that the backend service and port exist and the service selector matches your pod labels.
Inspect the controller logs for clues.
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=100
502 Bad Gateway
The controller reached your backend but got an error. Common causes:
- Backend pods are not ready or are crashing. Check `kubectl get pods` and `kubectl describe pod`.
- Port mismatch – the service port and the container port do not line up.
- Backend is too slow and the proxy timeout fires. Increase `proxy-read-timeout` if the backend legitimately needs more time.
- Network policies blocking traffic between the ingress-nginx namespace and the backend namespace.
Certificate Issues
If HTTPS is not working or browsers show invalid certificate warnings:
- Check the Certificate resource status. A Ready condition of False means issuance failed.
- Look at the Challenge and Order resources for ACME-specific errors.
- The most common problem is DNS not pointing to the Ingress controller’s external IP, which causes the HTTP-01 challenge to fail.
- Make sure the `secretName` in the Ingress TLS section matches what cert-manager expects.
Debug cert-manager step by step.
kubectl get certificate -A
kubectl get challenges -A
kubectl describe challenge <challenge-name> -n <namespace>
IngressClass Not Found
If the ADDRESS column on your Ingress resources never gets populated:
- Confirm the IngressClass exists with `kubectl get ingressclass`.
- Make sure the `ingressClassName` value in your Ingress spec matches the IngressClass name exactly.
- If you upgraded from an older Kubernetes version, you might still be using the deprecated `kubernetes.io/ingress.class` annotation instead of the `ingressClassName` field. Switch to the field-based approach.
Admission Webhook Errors
The ingress-nginx chart deploys a validating admission webhook that checks Ingress resources before they are accepted. If the webhook pod is down or unhealthy, you will get errors when creating or updating Ingress resources. Check the webhook service and endpoint.
kubectl get validatingwebhookconfiguration
kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission
If the admission endpoint has no addresses, the controller pods are not healthy. Investigate pod logs and events.
Upgrading the Ingress Controller
Helm makes upgrades straightforward. Update your repo and run a diff first to see what changes. Note that the diff command comes from the helm-diff plugin – install it with helm plugin install https://github.com/databus23/helm-diff if you do not have it yet.
helm repo update
helm diff upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx
If the diff looks clean, run the upgrade.
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.replicaCount=2 \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true
With two replicas, the rolling update strategy keeps at least one pod serving traffic during the upgrade. Watch the rollout status.
kubectl rollout status deployment ingress-nginx-controller -n ingress-nginx
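If an upgrade misbehaves, Helm keeps previous release revisions around, so backing out is quick:

```shell
# Review past revisions, then roll back to the previous one
helm history ingress-nginx -n ingress-nginx
helm rollback ingress-nginx -n ingress-nginx
```

Passing an explicit revision number to `helm rollback` targets a specific older release instead of the immediately previous one.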
Cleanup
If you deployed the sample resources for testing and want to remove them, tear everything down in order.
kubectl delete -f ingress-routes.yaml
kubectl delete -f sample-apps.yaml
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete namespace ingress-nginx
Wrapping Up
The Nginx Ingress Controller handles the heavy lifting of routing external traffic into your Kubernetes cluster. With the Helm-based installation covered here, you get a production-ready setup with high availability, TLS termination through cert-manager, and Prometheus metrics from the start. The annotation system gives you fine-grained control over proxy behavior without touching the controller’s global configuration, and IngressClass support means you can run multiple controllers side by side when your cluster demands it.
For clusters that see significant traffic, keep an eye on the controller resource usage and scale the replica count based on your load profile. The Prometheus metrics and Grafana dashboard give you everything you need to make informed scaling decisions.