Every Kubernetes workload that needs external traffic eventually hits the same question: how do ALBs and NLBs get created? On EKS, the AWS Load Balancer Controller handles this automatically. It watches for Ingress resources and Service objects, then calls the AWS API to provision Application Load Balancers or Network Load Balancers with the right target groups, listeners, and security group rules.
This guide walks through the full setup: IAM policy, IRSA role, Helm install, and a working ALB Ingress that serves traffic. We also cover NLB for TCP workloads, HTTPS with ACM certificates, TargetGroupBinding for existing infrastructure, and the errors you will actually hit in production. If you have used IRSA on EKS before, the IAM plumbing will feel familiar.
Tested April 2026 | EKS 1.33 (v1.33.8-eks-f69f56f), AWS Load Balancer Controller v2.13.1, eu-west-1
Prerequisites
Before starting, confirm the following are in place:
- A running EKS cluster (1.27 or later) with kubectl configured
- An OIDC provider associated with the cluster (required for IRSA)
- AWS CLI v2 and Helm 3 installed on your workstation
- IAM permissions to create policies and roles
- Tested on: EKS 1.33.8, Helm 3.17, AWS CLI 2.27
Verify the OIDC provider exists for your cluster:
aws eks describe-cluster --name cfg-lab-eks --query "cluster.identity.oidc.issuer" --output text
You should see a URL like this:
https://oidc.eks.eu-west-1.amazonaws.com/id/A1B2C3D4E5F6G7H8I9J0K1L2M3N4O5P6
If that returns empty, create the OIDC provider first with eksctl utils associate-iam-oidc-provider --cluster cfg-lab-eks --approve.
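To double-check that the provider is actually registered in IAM (and not just present on the cluster side), you can list the account's OIDC providers and grep for the cluster's OIDC ID. A sketch, assuming the cfg-lab-eks cluster name from above:

```shell
# Extract the OIDC ID (last path segment of the issuer URL), then
# check whether a matching IAM OIDC provider exists in this account.
OIDC_ID=$(aws eks describe-cluster --name cfg-lab-eks \
  --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep "$OIDC_ID" \
  && echo "OIDC provider registered" \
  || echo "No IAM OIDC provider found for this cluster"
```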
How the Controller Works
The AWS Load Balancer Controller runs as a Deployment in kube-system. It uses Kubernetes informers to watch for Ingress resources (with ingressClassName: alb) and Services of type LoadBalancer. When it detects one, it calls the AWS Elastic Load Balancing API to create the corresponding ALB or NLB, configure target groups, register pod IPs, and set up health checks.
The controller needs AWS API access, which it gets through an IAM role attached via IRSA. No access keys are stored in the cluster. The trust relationship between the Kubernetes service account and the IAM role is handled by the OIDC provider.
Create the IAM Policy
Download the official IAM policy document. This policy grants the controller permissions to manage ALBs, NLBs, target groups, security groups, and WAF associations:
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.13.1/docs/install/iam_policy.json
Create the IAM policy from the downloaded document:
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
The output confirms the policy ARN:
{
"Policy": {
"PolicyName": "AWSLoadBalancerControllerIAMPolicy",
"PolicyId": "ANPA3EXAMPLE7POLICY",
"Arn": "arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 0,
"CreateDate": "2026-04-10T14:22:31+00:00"
}
}
Create the IRSA Role
The controller needs an IAM role that trusts the EKS OIDC provider. This is the IRSA pattern: the Kubernetes service account gets annotated with the role ARN, and the OIDC provider validates the token at runtime.
First, grab the OIDC ID from your cluster:
OIDC_ID=$(aws eks describe-cluster --name cfg-lab-eks --query "cluster.identity.oidc.issuer" --output text | sed 's|https://||')
Create a trust policy that allows the aws-load-balancer-controller service account in kube-system to assume this role:
cat > trust-policy.json <<'TRUST'
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/OIDC_PROVIDER_URL"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"OIDC_PROVIDER_URL:aud": "sts.amazonaws.com",
"OIDC_PROVIDER_URL:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
}
}
}
]
}
TRUST
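The placeholders must be filled in before the role is created. One way to substitute them non-interactively (a sketch; it assumes the $OIDC_ID variable captured earlier and the trust-policy.json file written above):

```shell
# Fill in the trust-policy placeholders in place.
# ACCOUNT_ID comes from STS; OIDC_ID is the issuer URL minus https://.
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
sed -i \
  -e "s|ACCOUNT_ID|$ACCOUNT_ID|" \
  -e "s|OIDC_PROVIDER_URL|$OIDC_ID|g" \
  trust-policy.json
```

The `|` delimiter in the sed expressions avoids escaping the slashes in the OIDC provider path.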
Replace ACCOUNT_ID with your AWS account ID and OIDC_PROVIDER_URL with the value from the $OIDC_ID variable. Then create the role and attach the policy:
aws iam create-role \
--role-name AmazonEKSLoadBalancerControllerRole \
--assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy \
--role-name AmazonEKSLoadBalancerControllerRole \
--policy-arn arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy
Install via Helm
Add the EKS Helm chart repository and install the controller:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
Install the chart with IRSA configuration:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=cfg-lab-eks \
--set serviceAccount.create=true \
--set serviceAccount.name=aws-load-balancer-controller \
--set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::ACCOUNT_ID:role/AmazonEKSLoadBalancerControllerRole \
--set region=eu-west-1 \
--set vpcId=vpc-0a1b2c3d4e5f67890
Verify two controller pods are running in kube-system:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
Both pods should show Running with 1/1 ready:
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-6b8d9c7f4d-k2x9n 1/1 Running 0 45s
aws-load-balancer-controller-6b8d9c7f4d-m7p3q 1/1 Running 0 45s
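It is also worth confirming that the IRSA annotation actually landed on the service account, since a missing or mistyped role ARN here surfaces later as AccessDenied errors:

```shell
# Print the role-arn annotation on the controller's service account.
# The output should be the AmazonEKSLoadBalancerControllerRole ARN.
kubectl get serviceaccount aws-load-balancer-controller -n kube-system \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```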
The controller image is public.ecr.aws/eks/aws-load-balancer-controller:v2.13.1 (shipped with chart version 1.13.1).
Deploy a Test App with ALB Ingress
Create a namespace and deploy a simple nginx application to test the ALB provisioning:
kubectl create namespace alb-demo
Apply the Deployment and Service:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-demo
namespace: alb-demo
spec:
replicas: 2
selector:
matchLabels:
app: nginx-demo
template:
metadata:
labels:
app: nginx-demo
spec:
containers:
- name: nginx
image: nginx:1.27
ports:
- containerPort: 80
resources:
requests:
cpu: 100m
memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
name: nginx-demo-svc
namespace: alb-demo
spec:
type: ClusterIP
selector:
app: nginx-demo
ports:
- port: 80
targetPort: 80
protocol: TCP
EOF
Now create the Ingress resource. The annotations tell the controller to create an internet-facing ALB with IP-mode targets:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-alb-ingress
namespace: alb-demo
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/healthcheck-path: /
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-demo-svc
port:
number: 80
EOF
After about 2 minutes, the ALB should be provisioned. Check the Ingress for its DNS name:
kubectl get ingress -n alb-demo
The ADDRESS column shows the ALB DNS name:
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-alb-ingress alb * k8s-albdemo-nginxalb-EXAMPLE-1234567890.eu-west-1.elb.amazonaws.com 80 2m15s
Curl the ALB endpoint to confirm traffic is flowing:
curl -s -o /dev/null -w "%{http_code}" http://k8s-albdemo-nginxalb-EXAMPLE-1234567890.eu-west-1.elb.amazonaws.com
A 200 response means the ALB, target group, and pod targets are all wired up correctly. In the AWS console, you will see a target group named something like k8s-albdemo-nginxdem-40e93a20d3 with your pod IPs registered on port 80 using the HTTP protocol.
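Rather than re-running kubectl get and pasting the DNS name by hand, you can poll until the ADDRESS is populated and then curl it in one go. A sketch, reusing the demo resource names from above:

```shell
# Poll the Ingress until the ALB DNS name appears (up to ~5 minutes),
# then request it and print the HTTP status code.
for i in $(seq 1 30); do
  ALB_DNS=$(kubectl get ingress nginx-alb-ingress -n alb-demo \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  [ -n "$ALB_DNS" ] && break
  sleep 10
done
curl -s -o /dev/null -w "%{http_code}\n" "http://$ALB_DNS"
```

Note that the ALB can report a DNS name before its targets pass health checks, so the first request may return a 503 while registration completes.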
NLB for TCP/UDP Workloads
For TCP or UDP services (databases, gRPC, custom protocols), use a Network Load Balancer instead. The controller creates NLBs when it sees a Service of type LoadBalancer with the right annotations:
apiVersion: v1
kind: Service
metadata:
name: nginx-nlb
namespace: alb-demo
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
type: LoadBalancer
selector:
app: nginx-demo
ports:
- port: 80
targetPort: 80
protocol: TCP
The key difference: aws-load-balancer-type: external tells the controller (not the legacy in-tree cloud provider) to handle this Service. Without that annotation, you get the old-style Classic Load Balancer, which is not what you want.
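After the Service is created, you can confirm from the AWS side that the controller (and not the legacy provider) handled it by checking the load balancer type. A sketch that lists all network load balancers in the region:

```shell
# A Type of "network" confirms the controller created an NLB;
# the legacy in-tree provider would have created a Classic Load Balancer,
# which would not appear in the elbv2 API at all.
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?Type=='network'].{Name:LoadBalancerName,Scheme:Scheme,DNS:DNSName}" \
  --output table
```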
HTTPS with ACM Certificates
For production, terminate TLS at the ALB using an AWS Certificate Manager (ACM) certificate. Add these annotations to your Ingress:
metadata:
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:ACCOUNT_ID:certificate/CERTIFICATE_ID
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
alb.ingress.kubernetes.io/ssl-redirect: "443"
The ssl-redirect annotation creates an HTTP-to-HTTPS redirect rule on port 80 automatically. The ACM certificate must be in the same region as the ALB. If you need multiple certificates (for different domains on the same ALB), pass a comma-separated list of ARNs in certificate-arn.
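If you do not have the certificate ARN handy, it can be looked up by domain from the CLI. A sketch, where example.com is a placeholder for your own domain:

```shell
# List issued certificates in the ALB's region and pick the ARN
# for the placeholder domain example.com.
aws acm list-certificates --region eu-west-1 \
  --certificate-statuses ISSUED \
  --query "CertificateSummaryList[?DomainName=='example.com'].CertificateArn" \
  --output text
```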
TargetGroupBinding for Existing Target Groups
Sometimes you already have an ALB or NLB created outside Kubernetes (by Terraform, CloudFormation, or manually), and you just want Kubernetes pods registered as targets. The TargetGroupBinding custom resource does exactly this:
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
name: nginx-tgb
namespace: alb-demo
spec:
serviceRef:
name: nginx-demo-svc
port: 80
targetGroupARN: arn:aws:elasticloadbalancing:eu-west-1:ACCOUNT_ID:targetgroup/my-existing-tg/50dc6c495c0c9188
targetType: ip
The controller watches this resource and keeps the target group in sync with your pod IPs. When pods scale up or get replaced, the target group updates automatically. This is useful when your load balancer is managed by a separate Terraform stack, which is common in organizations that separate platform infrastructure from application workloads.
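You can watch the sync happen from the AWS side. A sketch, reusing the placeholder target group ARN from the binding above:

```shell
# Show the pod IPs currently registered in the target group and their
# health state. Scale the Deployment and re-run to see the list change.
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:ACCOUNT_ID:targetgroup/my-existing-tg/50dc6c495c0c9188 \
  --query "TargetHealthDescriptions[].{IP:Target.Id,Port:Target.Port,State:TargetHealth.State}" \
  --output table
```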
Troubleshooting
Error: “Failed to create security group rules: AccessDenied”
This means the controller’s IAM role is missing the ec2:AuthorizeSecurityGroupIngress permission. The most common cause is using an outdated IAM policy document. Redownload the policy from the official URL (it gets updated with new releases) and update your IAM policy:
aws iam create-policy-version \
--policy-arn arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json \
--set-as-default
Ingress stuck in “Pending” with no ADDRESS
Check the controller logs for the actual error:
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller --tail=50
Common causes: the VPC ID is wrong in the Helm values, the subnet auto-discovery tags are missing, or the IRSA role trust policy has the wrong OIDC provider URL. Subnets need the tag kubernetes.io/role/elb=1 for internet-facing ALBs and kubernetes.io/role/internal-elb=1 for internal ones.
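A quick way to check the subnet tags from the CLI (a sketch; substitute your own VPC ID for the example one used in the Helm install):

```shell
# List subnets in the VPC that carry the public-ELB role tag.
# An empty result for an internet-facing ALB means the tag is missing.
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-0a1b2c3d4e5f67890" \
            "Name=tag:kubernetes.io/role/elb,Values=1" \
  --query "Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone}" \
  --output table
```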
Error: “TargetGroup not found” after deleting and recreating an Ingress
When you delete an Ingress and recreate it quickly, the controller sometimes tries to reference target groups that AWS has not fully cleaned up yet. Wait 30 seconds after deletion before recreating. If the error persists, check for orphaned target groups in the AWS console under EC2 > Target Groups and delete any with the k8s- prefix that are no longer in use.
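The orphaned target groups can also be found from the CLI instead of the console. A sketch:

```shell
# List target groups whose names start with the k8s- prefix the
# controller uses; cross-check these against your live Ingresses.
aws elbv2 describe-target-groups \
  --query "TargetGroups[?starts_with(TargetGroupName, 'k8s-')].{Name:TargetGroupName,ARN:TargetGroupArn}" \
  --output table
```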
Pods showing “Unhealthy” in the target group
Verify the health check path returns HTTP 200. The default health check path is /, which works for nginx but not for applications that serve their root on a different path. Set it explicitly in the Ingress annotations:
alb.ingress.kubernetes.io/healthcheck-path: /healthz
Also check that the pod’s security group allows inbound traffic from the ALB’s security group on the target port.
Cleanup
Delete resources in reverse order. Remove the Ingress first so the controller deletes the ALB and target groups, then remove the application and Helm release:
kubectl delete ingress nginx-alb-ingress -n alb-demo
kubectl delete namespace alb-demo
helm uninstall aws-load-balancer-controller -n kube-system
Then clean up the IAM resources:
aws iam detach-role-policy \
--role-name AmazonEKSLoadBalancerControllerRole \
--policy-arn arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy
aws iam delete-role --role-name AmazonEKSLoadBalancerControllerRole
aws iam delete-policy --policy-arn arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy
Wait about 60 seconds after deleting the Ingress before deleting the namespace. This gives the controller time to deprovision the ALB cleanly. If you delete the controller first, the ALB and associated target groups, security group rules, and listeners become orphaned and you will need to delete them manually from the AWS console. Orphaned resources cost money, so always delete Ingress objects before uninstalling the controller. For more on managing AWS costs, see our breakdown of EKS pricing components.
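As a final check that nothing was left behind, you can list any remaining controller-created load balancers (a sketch; an empty result means the cleanup completed):

```shell
# After cleanup, confirm no controller-created load balancers remain.
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?starts_with(LoadBalancerName, 'k8s-')].LoadBalancerName" \
  --output text
```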
Frequently Asked Questions
What is the difference between the AWS Load Balancer Controller and the in-tree cloud provider?
The in-tree cloud provider creates Classic Load Balancers (CLBs) for Services of type LoadBalancer. It is built into the Kubernetes controller manager and does not support ALBs, NLBs, or IP-mode targets. The AWS Load Balancer Controller is an external controller that creates ALBs and NLBs with full feature support including WAF integration, authentication, and target group binding. AWS recommends migrating away from the in-tree provider.
Can I use the controller with Fargate pods?
Yes. Set target-type: ip in your Ingress annotations. Fargate pods do not support instance-mode targets because they do not run on EC2 instances that can be registered directly. IP mode registers the pod ENI IP in the target group, which works with both Fargate and managed node groups.
How do I restrict ALB access to specific IP ranges?
Use the alb.ingress.kubernetes.io/inbound-cidrs annotation with a comma-separated list of CIDR blocks. The controller creates security group rules that only allow traffic from those ranges. For example: alb.ingress.kubernetes.io/inbound-cidrs: 10.0.0.0/8,192.168.1.0/24.
Does the controller support gRPC health checks?
Yes, as of v2.6+. Set alb.ingress.kubernetes.io/healthcheck-protocol: GRPC and alb.ingress.kubernetes.io/backend-protocol-version: GRPC. The ALB performs gRPC health checks using the gRPC health checking protocol defined in the grpc.health.v1.Health service.
Can I run the controller alongside the NGINX Ingress Controller?
Yes. The AWS Load Balancer Controller only processes Ingress resources with ingressClassName: alb. The NGINX Ingress Controller handles ingressClassName: nginx. Both can coexist in the same cluster without conflict. Use the appropriate class name on each Ingress to route it to the right controller.
For a GitOps approach to managing these Ingress resources, see our guide on deploying ArgoCD on EKS.