AWS

Amazon EKS Pod Identity: The Complete Guide (Setup, ABAC, and Migration from IRSA)

If you landed here after reading our IAM Roles for Service Accounts (IRSA) guide, welcome to the sequel. IRSA has powered pod-level IAM on EKS since 2019, and it still works fine. But AWS shipped Amazon EKS Pod Identity in late 2023, and by 2026 it has quietly become the default recommendation for new clusters. The trust policy is universal, session tags arrive automatically, and you never have to touch an OIDC provider again.


This guide covers the full mechanism: how the mutating webhook and eks-pod-identity-agent cooperate, the exact JWT audience change that breaks IRSA role trust policies, three production walkthroughs (S3, AWS Load Balancer Controller, External Secrets Operator), ABAC with automatic session tags, role portability across clusters, a tested Terraform module, and a full migration playbook. Every command was tested on a real cluster and every output is real.

Tested April 2026 on Amazon EKS 1.33 with eks-pod-identity-agent v1.3.10-eksbuild.2 and eksctl 0.225

Pod Identity vs IRSA: Decision Matrix

Both mechanisms still ship in EKS, and both are fully supported. The right choice depends on the workload. Here is the honest breakdown we use when planning new clusters.

| Scenario | Use |
| --- | --- |
| New EC2-based EKS cluster in 2026 | Pod Identity |
| Fargate workloads | IRSA only (Pod Identity needs the agent DaemonSet) |
| EKS Anywhere or self-managed Kubernetes | IRSA only |
| Windows EC2 nodes | IRSA only (agent is Linux-only) |
| Existing production cluster already on IRSA | Keep IRSA, no rush to migrate |
| Reuse the same role across many clusters | Pod Identity |
| Hit the 100-OIDC-provider-per-account limit | Pod Identity |
| Need session tags or ABAC | Pod Identity (automatic) |
| Cross-account role chaining | Pod Identity (targetRoleArn parameter) |

The rest of this guide focuses on Pod Identity because it’s where AWS is investing, and because understanding both mechanisms lets you mix them when necessary. Fargate workloads keep using IRSA while EC2 workloads in the same cluster use Pod Identity. That combination is fully supported.

Quick Start TLDR

If you already know what Pod Identity is and just want the five steps to wire it up, here they are. Skip to the next section if you want the full explanation first.

Install the addon on an existing EKS cluster:

aws eks create-addon --cluster-name pod-identity-lab --addon-name eks-pod-identity-agent --addon-version v1.3.10-eksbuild.2 --region eu-west-1

Create a namespace and ServiceAccount for the workload:

kubectl create ns demo
kubectl -n demo create serviceaccount s3-reader

Link the ServiceAccount to an existing IAM role (the role must trust pods.eks.amazonaws.com):

aws eks create-pod-identity-association --cluster-name pod-identity-lab --namespace demo --service-account s3-reader --role-arn arn:aws:iam::123456789012:role/pod-identity-lab-s3-reader-role --region eu-west-1

Run a pod using that ServiceAccount:

kubectl -n demo run s3-test --image=amazon/aws-cli:latest --overrides='{"spec":{"serviceAccountName":"s3-reader"}}' --command -- sleep 3600

Verify the pod is now assuming the IAM role:

kubectl -n demo exec s3-test -- aws sts get-caller-identity

That’s it. Five steps. The rest of this guide explains what each of them does under the hood and how to wire Pod Identity into real production workloads.

How Pod Identity Actually Works

Most tutorials describe Pod Identity as “the new IRSA” and stop there. That leaves readers confused when something breaks. Here is the actual mechanism, end to end.

Pod Identity has three moving parts: a mutating admission webhook baked into the EKS control plane, a node-level agent (eks-pod-identity-agent) that runs as a DaemonSet, and a new IAM API action called eks-auth:AssumeRoleForPodIdentity.

When a pod is created using a ServiceAccount that has an active pod identity association, the EKS control-plane webhook mutates the pod spec at admission time. It injects five environment variables and a projected ServiceAccount token volume. Here are the environment variables we captured from a running pod:

AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
AWS_STS_REGIONAL_ENDPOINTS=regional
AWS_DEFAULT_REGION=eu-west-1
AWS_REGION=eu-west-1

The token file itself is a projected ServiceAccount token, but with one critical detail that differs from IRSA. Take a look at the mount:

lrwxrwxrwx. 1 root root  29 Apr 10 13:57 eks-pod-identity-token -> ..data/eks-pod-identity-token

Decode the JWT and the payload looks like this:

{
    "aud": ["pods.eks.amazonaws.com"],
    "exp": 1775915601,
    "iat": 1775829425,
    "iss": "https://oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E",
    "kubernetes.io": {
        "namespace": "demo",
        "pod": {"name": "s3-test"},
        "serviceaccount": {"name": "s3-reader"}
    },
    "sub": "system:serviceaccount:demo:s3-reader"
}

Look at the aud claim. It’s pods.eks.amazonaws.com, not sts.amazonaws.com. This is the single most important technical difference from IRSA, and it’s why you can’t reuse an IRSA role with Pod Identity (the trust policy won’t match) and vice versa.
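You can inspect that claim yourself without pasting tokens into a browser. Here is a minimal Python sketch of the payload decode; the token below is constructed locally for illustration, and the same function works against a real token file when debugging:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Construct an illustrative token locally (fake signature, trimmed claims)
segment = lambda obj: base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()
token = ".".join([
    segment({"alg": "RS256", "kid": "example"}),
    segment({"aud": ["pods.eks.amazonaws.com"], "sub": "system:serviceaccount:demo:s3-reader"}),
    "signature",
])

claims = jwt_payload(token)
print(claims["aud"])  # Pod Identity tokens carry pods.eks.amazonaws.com; IRSA tokens carry sts.amazonaws.com
```

Checking aud is the first thing to do when credentials mysteriously fail: a token with the wrong audience means the pod is still wired for the other mechanism.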

When the AWS SDK inside the pod tries to make an API call, it reads the AWS_CONTAINER_CREDENTIALS_FULL_URI and AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE variables and discovers it should fetch credentials by doing an HTTP GET against http://169.254.170.23/v1/credentials with the JWT in the Authorization header. That IP is bound by the agent DaemonSet on every node. Here is the container arg line we captured:

--port 80 --cluster-name pod-identity-lab --probe-port 2703

The agent listens on three ports: 80 for the credentials API, 2703 for liveness probes, and 2705 for Prometheus metrics. Because it runs with hostNetwork: true, it binds directly in the node's network namespace. The bindings we saw are 169.254.170.23:80 for IPv4 and [fd00:ec2::23]:80 for IPv6, both drawn from reserved address ranges that do not conflict with anything else on the node.

When the agent receives the credential request, it extracts the JWT and calls the new eks-auth:AssumeRoleForPodIdentity action in the regional EKS endpoint. This is a brand new API. It validates the JWT, looks up the pod identity association, assumes the IAM role on the pod’s behalf, and returns temporary credentials to the agent. The agent relays those credentials back over HTTP to the pod. The SDK caches them until expiry, then repeats.
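That SDK-side discovery step can be sketched in a few lines. This is not the SDK's actual code, just a minimal reproduction of the contract, with an invented helper name and a throwaway token file standing in for the projected token:

```python
import tempfile

def build_credential_request(env: dict):
    """Mimic the SDK's container credential provider: take the URI from
    AWS_CONTAINER_CREDENTIALS_FULL_URI and put the JWT read from
    AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE into the Authorization header."""
    url = env["AWS_CONTAINER_CREDENTIALS_FULL_URI"]
    with open(env["AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE"]) as f:
        token = f.read().strip()
    return url, {"Authorization": token}

# Stand-in for the projected ServiceAccount token on disk
with tempfile.NamedTemporaryFile("w", suffix=".jwt", delete=False) as f:
    f.write("header.payload.signature\n")
    token_path = f.name

env = {
    "AWS_CONTAINER_CREDENTIALS_FULL_URI": "http://169.254.170.23/v1/credentials",
    "AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE": token_path,
}
url, headers = build_credential_request(env)
print(url, headers["Authorization"])
```

The agent's reply is the standard container-credentials JSON (AccessKeyId, SecretAccessKey, Token, Expiration), which is why any SDK that already understands the container provider works unmodified.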

One more thing happens during that assume-role call: EKS automatically attaches six session tags to the assumed-role session. They are applied by default to every association, though you can opt out per association with --disable-session-tags if you really want to:

  • eks-cluster-arn
  • eks-cluster-name
  • kubernetes-namespace
  • kubernetes-service-account
  • kubernetes-pod-name
  • kubernetes-pod-uid

These session tags are what makes ABAC (Attribute-Based Access Control) work automatically. We’ll revisit this in the ABAC section because it’s the single most powerful feature Pod Identity adds on top of IRSA.

Here is the full side-by-side comparison of what happens inside a pod under each mechanism. Bookmark this table; it answers most “why isn’t it working” questions.

| | IRSA | Pod Identity |
| --- | --- | --- |
| Env var (URI) | not set | AWS_CONTAINER_CREDENTIALS_FULL_URI |
| Env var (token) | AWS_WEB_IDENTITY_TOKEN_FILE | AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE |
| Token path | /var/run/secrets/eks.amazonaws.com/serviceaccount/token | /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token |
| JWT audience | sts.amazonaws.com | pods.eks.amazonaws.com |
| AWS API action | sts:AssumeRoleWithWebIdentity | eks-auth:AssumeRoleForPodIdentity |
| Endpoint called | sts.<region>.amazonaws.com | http://169.254.170.23/v1/credentials |
| Trust principal | Federated: <oidc-provider-arn> | Service: pods.eks.amazonaws.com |
| SA annotation required | Yes (eks.amazonaws.com/role-arn) | No |
| Per-cluster config | Trust policy per cluster | Universal (same role on every cluster) |
| Automatic session tags | None | 6 tags |
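When triaging, the fastest check is which of those variables the webhook injected. A small diagnostic sketch based on the table (the function name is ours; feed it the environment you get from kubectl exec <pod> -- env):

```python
def detect_credential_mechanism(env: dict) -> str:
    """Classify a pod's credential mechanism from its injected environment."""
    if ("AWS_CONTAINER_CREDENTIALS_FULL_URI" in env
            and "AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE" in env):
        return "pod-identity"
    if "AWS_WEB_IDENTITY_TOKEN_FILE" in env and "AWS_ROLE_ARN" in env:
        return "irsa"
    return "none"  # SDK will fall back to the node instance role

print(detect_credential_mechanism({
    "AWS_CONTAINER_CREDENTIALS_FULL_URI": "http://169.254.170.23/v1/credentials",
    "AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE": "/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token",
}))  # pod-identity
```

A result of "none" on a pod that should have pod-level identity almost always means the pod predates its association, which is the restart gotcha covered later.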

Prerequisites

You will need a few things before the first command will work. Nothing exotic.

  • An EKS cluster running 1.24 or later. 1.29+ is strongly recommended because older control planes have Pod Identity quirks. We tested on 1.33.8-eks.
  • eksctl 0.175 or later if you plan to use it (we used 0.225).
  • AWS CLI v2. The v1 line does not know about the create-pod-identity-association subcommand.
  • kubectl matching your cluster minor version.
  • Helm 3 for the Load Balancer Controller and External Secrets walkthroughs.
  • IAM permissions for eks:CreatePodIdentityAssociation, iam:CreateRole, iam:PutRolePolicy, and the usual EKS cluster admin actions.

One gotcha specific to SDKs. Pod Identity credential fetching requires a recent AWS SDK that understands the AWS_CONTAINER_CREDENTIALS_FULL_URI provider. Minimum versions that matter:

| SDK | Minimum version |
| --- | --- |
| boto3 (Python) | 1.34.41 |
| aws-sdk-go-v2 | November 14, 2023 release |
| aws-sdk-java-v2 | 2.21.30 |
| aws-sdk-js-v3 | 3.458.0 |
| AWS CLI v2 | 2.15.0 |

Older SDKs silently fall back to the node instance role instead of failing loudly, which is the single most common Pod Identity trap. Pin your base images.
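A guard like the following, dropped into a startup probe or CI check, catches the silent fallback before it reaches production. This is our own sketch, not an AWS-provided check; in a real pod you would pass boto3.__version__ as the first argument:

```python
def meets_minimum(version: str, minimum: str) -> bool:
    """Numeric comparison of dotted version strings, e.g. a boto3 version
    against the 1.34.41 floor for Pod Identity support."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

BOTO3_FLOOR = "1.34.41"
print(meets_minimum("1.33.9", BOTO3_FLOOR))   # False: would silently use the node role
print(meets_minimum("1.34.41", BOTO3_FLOOR))  # True
```

Failing fast on an old SDK is much cheaper than debugging why a pod is making API calls as the node instance role.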

Install the eks-pod-identity-agent Addon

The agent ships as a managed EKS addon, which means AWS owns patching. You have three ways to install it. Pick whichever matches your IaC style.

Method 1: AWS CLI

The fastest path. One command, no extra tooling:

aws eks create-addon --cluster-name pod-identity-lab --addon-name eks-pod-identity-agent --addon-version v1.3.10-eksbuild.2 --region eu-west-1

Method 2: eksctl

If you already manage the cluster with eksctl, this keeps state consistent:

eksctl create addon --cluster pod-identity-lab --name eks-pod-identity-agent --region eu-west-1

Method 3: Terraform

For Terraform-managed clusters, add this resource block:

resource "aws_eks_addon" "pod_identity_agent" {
  cluster_name  = aws_eks_cluster.main.name
  addon_name    = "eks-pod-identity-agent"
  addon_version = "v1.3.10-eksbuild.2"
}

After installing, confirm the DaemonSet is healthy:

kubectl -n kube-system get ds eks-pod-identity-agent

You should see one pod per EC2 node in the READY column. The agent image is pulled from the regional ECR mirror and is straightforward to identify:

602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/eks-pod-identity-agent:v0.1.36

Notice the version: the addon is published as v1.3.10-eksbuild.2, but the binary inside the container is v0.1.36. These are intentionally different because the addon version includes AWS packaging revisions. When you read GitHub release notes or file an issue upstream, use the binary version.

A note about Fargate. The agent is a DaemonSet, and Fargate does not run DaemonSets. Fargate pods never get an agent on their “node”, so they cannot fetch Pod Identity credentials. If any of your workloads run on Fargate, keep those on IRSA and let EC2 pods use Pod Identity. Both mechanisms coexist in the same cluster without issue.

Hello Pod Identity: S3 Walkthrough

Now the full walkthrough. We will create an S3 bucket, an IAM role, a pod identity association, and a test pod that reads from the bucket. Every output in this section is real.

Create a test bucket and upload a file:

aws s3 mb s3://my-app-bucket --region eu-west-1
echo "Hello from Pod Identity!" > /tmp/hello.txt
aws s3 cp /tmp/hello.txt s3://my-app-bucket/hello.txt

Open a file for the IAM policy:

vim /tmp/s3-read-policy.json

Paste this minimal read-only policy for our test bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::my-app-bucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::my-app-bucket/*"
        }
    ]
}

Create the policy:

aws iam create-policy --policy-name pod-identity-lab-s3-read --policy-document file:///tmp/s3-read-policy.json

Now the most interesting file in this entire guide: the trust policy. This is the same single file you use for every Pod Identity role, on every cluster, in every account. Open it:

vim /tmp/pod-identity-trust.json

Paste the universal Pod Identity trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}

Read that carefully. No cluster ARN. No OIDC provider ARN. No condition keys pinning the role to a specific namespace or ServiceAccount. The trust policy is completely cluster-agnostic, which is why the same role works on any EKS cluster without editing. Compare that to the IRSA trust policy, which hardcodes the OIDC issuer URL and has to be rewritten every time you use the role on a new cluster. The sts:TagSession action is required because EKS attaches the automatic session tags during assume-role.

Create the role:

aws iam create-role --role-name pod-identity-lab-s3-reader-role --assume-role-policy-document file:///tmp/pod-identity-trust.json

Attach the S3 read policy to the role:

aws iam attach-role-policy --role-name pod-identity-lab-s3-reader-role --policy-arn arn:aws:iam::123456789012:policy/pod-identity-lab-s3-read

Create the Kubernetes namespace and ServiceAccount. This is where Pod Identity feels cleaner than IRSA: the ServiceAccount needs no annotations.

kubectl create namespace demo
kubectl -n demo create serviceaccount s3-reader

Confirm the SA has zero annotations:

kubectl -n demo get sa s3-reader -o yaml

Nothing under metadata.annotations. All the wiring happens server-side through the pod identity association, the key resource that ties an IAM role to a (cluster, namespace, serviceaccount) tuple. Create it:

aws eks create-pod-identity-association --cluster-name pod-identity-lab --namespace demo --service-account s3-reader --role-arn arn:aws:iam::123456789012:role/pod-identity-lab-s3-reader-role --region eu-west-1

The API returns the association record with a short ID you can reference later:

{
    "association": {
        "clusterName": "pod-identity-lab",
        "namespace": "demo",
        "serviceAccount": "s3-reader",
        "roleArn": "arn:aws:iam::123456789012:role/pod-identity-lab-s3-reader-role",
        "associationArn": "arn:aws:eks:eu-west-1:123456789012:podidentityassociation/pod-identity-lab/a-linwcpce6l3rppbxu",
        "associationId": "a-linwcpce6l3rppbxu",
        "tags": {},
        "createdAt": 1775829368.122,
        "modifiedAt": 1775829368.122,
        "disableSessionTags": false
    }
}

Run a test pod on that ServiceAccount:

kubectl -n demo run s3-test --image=amazon/aws-cli:latest --overrides='{"spec":{"serviceAccountName":"s3-reader"}}' --command -- sleep 3600

Wait a second for it to come up, then inspect the injected environment inside the pod:

kubectl -n demo exec s3-test -- env | grep -E 'AWS_|POD'

All five variables should be present. Check the JWT the SDK will use to fetch credentials:

kubectl -n demo exec s3-test -- cat /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token

Copy that output into jwt.io and you will see the audience claim we showed earlier. Now the real proof: call the STS identity endpoint from inside the pod.

kubectl -n demo exec s3-test -- aws sts get-caller-identity

The pod should return the assumed-role identity, not the node role:

{
    "UserId": "AROAEXAMPLEROLEID1234:eks-pod-identi-s3-test-7925999b-44f2-4ca1-9894-f41672b08593",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/pod-identity-lab-s3-reader-role/eks-pod-identi-s3-test-7925999b-44f2-4ca1-9894-f41672b08593"
}

Look at the session name. It starts with eks-pod-identi- followed by the pod name and a UUID. This naming pattern is a dead giveaway that credentials came from Pod Identity rather than IRSA, which is useful when reading CloudTrail logs. Now test S3 access:

kubectl -n demo exec s3-test -- aws s3 ls s3://my-app-bucket/
kubectl -n demo exec s3-test -- aws s3 cp s3://my-app-bucket/hello.txt -

You should see the listing and the file contents:

2026-04-10 13:38:41         25 hello.txt
Hello from Pod Identity!

That’s the full round trip: webhook injection, JWT mint, agent call, eks-auth API, role assumption, S3 access. Everything working end to end.

Verify via CloudTrail

CloudTrail logs every AssumeRoleForPodIdentity call with the full session context, which gives you an audit trail nobody talks about. This is handy when something breaks or when a security reviewer asks “which pod assumed this role at 13:38 UTC?”.

Query for recent Pod Identity assume-role events:

aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRoleForPodIdentity --max-results 5 --region eu-west-1

Each event includes the requesting ServiceAccount, the pod UID, the cluster ARN, and the full session tag set. That’s enough to reconstruct which pod on which cluster assumed which role at which time, without any extra logging agents. For tighter filtering, use Athena against your CloudTrail S3 bucket and query by the kubernetes-pod-uid or kubernetes-namespace session tag.
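The session-name pattern mentioned earlier (eks-pod-identi- plus the pod name and a UUID) is also easy to exploit programmatically when sifting events. A sketch whose regex matches the sessions we captured; AWS does not document the exact naming rules, so treat the pod-name group as possibly truncated:

```python
import re

SESSION_RE = re.compile(
    r"^eks-pod-identi-(?P<pod>.+)-"
    r"(?P<uuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})$"
)

def parse_session_name(session: str):
    """Return (pod_name, uuid) for a Pod Identity session name, else None."""
    m = SESSION_RE.match(session)
    return (m.group("pod"), m.group("uuid")) if m else None

print(parse_session_name(
    "eks-pod-identi-s3-test-7925999b-44f2-4ca1-9894-f41672b08593"
))  # ('s3-test', '7925999b-44f2-4ca1-9894-f41672b08593')
```

Running every assumed-role session name in a CloudTrail export through this function quickly separates Pod Identity traffic from IRSA and human sessions.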

If you don’t see any events, check that CloudTrail is recording management events in the region where the cluster runs. EKS calls the eks-auth API in the same region as the cluster, so a single-region trail will miss cross-region clusters.

Production Use Case: AWS Load Balancer Controller

The Load Balancer Controller is the classic Pod Identity test case because it has real IAM requirements (manage ALBs, target groups, security groups, tags) and because it lives in a namespace that sometimes overlaps with Fargate profiles. We’ll install it in alb-system instead of the default kube-system specifically to avoid the Fargate issue explained later in troubleshooting.

Download the official IAM policy from the upstream repo:

curl -sSLo /tmp/alb-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

Create the IAM policy and role:

aws iam create-policy --policy-name pod-identity-lab-alb-policy --policy-document file:///tmp/alb-iam-policy.json
aws iam create-role --role-name pod-identity-lab-alb-role --assume-role-policy-document file:///tmp/pod-identity-trust.json
aws iam attach-role-policy --role-name pod-identity-lab-alb-role --policy-arn arn:aws:iam::123456789012:policy/pod-identity-lab-alb-policy

Notice the trust policy file is the same universal one from the S3 walkthrough. No edits. Create the namespace and ServiceAccount:

kubectl create namespace alb-system
kubectl -n alb-system create serviceaccount aws-load-balancer-controller

Create the pod identity association:

aws eks create-pod-identity-association --cluster-name pod-identity-lab --namespace alb-system --service-account aws-load-balancer-controller --role-arn arn:aws:iam::123456789012:role/pod-identity-lab-alb-role --region eu-west-1

Install the controller via Helm, pointing at the pre-created ServiceAccount:

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace alb-system \
  --set clusterName=pod-identity-lab \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=eu-west-1 \
  --set vpcId=vpc-0a1b2c3d4e5f67890

Wait for the controller pods to be ready:

kubectl -n alb-system get pods -l app.kubernetes.io/name=aws-load-balancer-controller

Deploy a simple nginx workload with an ALB-backed Ingress to prove the controller can provision real load balancers. Open the manifest:

vim /tmp/nginx-alb.yaml

Paste the following Deployment, Service, and Ingress:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: demo
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: demo
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80

Apply it and wait for the ALB to provision:

kubectl apply -f /tmp/nginx-alb.yaml
kubectl -n demo get ingress nginx -w

After about a minute the ADDRESS column populates with the ALB DNS name:

NAME    CLASS   HOSTS   ADDRESS                                                            PORTS   AGE
nginx   alb     *       k8s-demo-nginx-380f880b9d-1537930891.eu-west-1.elb.amazonaws.com   80      13m

Test that it returns HTTP 200:

curl -I http://k8s-demo-nginx-380f880b9d-1537930891.eu-west-1.elb.amazonaws.com

A healthy response confirms the controller authenticated to AWS via Pod Identity, called the ELBv2 and EC2 APIs, created the ALB, and wired up the target groups. All with a single untouched trust policy file.

Production Use Case: External Secrets Operator

External Secrets Operator (ESO) is the second workload we recommend testing because it trips on a gotcha that is easy to hit and frustrating to diagnose. If you create the pod identity association after ESO is already running, the operator continues to use the node instance role until its pods restart. You will see AccessDenied even though the association looks correct in the console. For the pure AWS Secrets Manager side of this pattern (creating secrets, rotation, resource policies, ECS injection), pair this with our AWS Secrets Manager tutorial.

Create a test secret in AWS Secrets Manager first:

aws secretsmanager create-secret --name demo/app-config --secret-string '{"db_password":"changeme"}' --region eu-west-1

Create a minimal IAM policy for ESO. Open the file:

vim /tmp/eso-policy.json

Paste the least-privilege policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecrets"
            ],
            "Resource": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:demo/*"
        }
    ]
}

Create the policy, role, and association:

aws iam create-policy --policy-name pod-identity-lab-eso-policy --policy-document file:///tmp/eso-policy.json
aws iam create-role --role-name pod-identity-lab-eso-role --assume-role-policy-document file:///tmp/pod-identity-trust.json
aws iam attach-role-policy --role-name pod-identity-lab-eso-role --policy-arn arn:aws:iam::123456789012:policy/pod-identity-lab-eso-policy
kubectl create namespace external-secrets
kubectl -n external-secrets create serviceaccount external-secrets
aws eks create-pod-identity-association --cluster-name pod-identity-lab --namespace external-secrets --service-account external-secrets --role-arn arn:aws:iam::123456789012:role/pod-identity-lab-eso-role --region eu-west-1

Install ESO via Helm on the pre-created ServiceAccount:

helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets \
  --set serviceAccount.create=false \
  --set serviceAccount.name=external-secrets

Create a ClusterSecretStore that uses the Secrets Manager provider. Open the manifest:

vim /tmp/secretstore.yaml

Paste the store definition:

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secretsmanager
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-config
  namespace: demo
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: aws-secretsmanager
    kind: ClusterSecretStore
  target:
    name: app-config
  data:
    - secretKey: db_password
      remoteRef:
        key: demo/app-config
        property: db_password

Apply and watch the ExternalSecret reconcile:

kubectl apply -f /tmp/secretstore.yaml
kubectl -n demo get externalsecret app-config

The first time we ran this, we hit an error because we had installed ESO earlier, before creating the association. The controller logs showed AccessDenied with the node IAM role in the caller ARN, not the ESO role:

api error AccessDeniedException: User: arn:aws:sts::123456789012:assumed-role/eksctl-cluster-nodegroup--NodeInstanceRole-xxx/i-xxx is not authorized to perform: secretsmanager:GetSecretValue

This is the gotcha. The Pod Identity webhook runs at pod admission. If the pod already existed when you created the association, the webhook never mutated it, so no env vars, no Pod Identity, no role assumption. The fix is to delete the pod so a fresh one gets created:

kubectl -n external-secrets delete pod -l app.kubernetes.io/name=external-secrets

Once the new pod comes up with the injected env vars, ESO picks up the role and the ExternalSecret starts syncing. Confirm:

kubectl -n demo get externalsecret app-config
kubectl -n demo get secret app-config -o jsonpath='{.data.db_password}' | base64 -d

Status should show SecretSynced and the secret should contain the decoded password. Remember this pattern whenever you retrofit Pod Identity onto an existing workload. Create the association first, then restart or recreate the pods.

Session Tags and ABAC

This is the section that justifies migrating from IRSA. Pod Identity attaches six session tags to every assumed-role session automatically, and you can write IAM policies that condition on those tags. The result is that a single IAM role can grant different permissions to different pods based on which namespace they run in, without creating a new role per team.

To recap the automatic tags: eks-cluster-arn, eks-cluster-name, kubernetes-namespace, kubernetes-service-account, kubernetes-pod-name, and kubernetes-pod-uid. The most useful one for ABAC is kubernetes-namespace because it maps cleanly to tenant boundaries in most clusters.

Create the demo bucket and some per-team data:

aws s3 mb s3://abac-demo-bucket --region eu-west-1
echo "This file belongs to team-a" > /tmp/team-a.txt
echo "This file belongs to team-b" > /tmp/team-b.txt
aws s3 cp /tmp/team-a.txt s3://abac-demo-bucket/team-a/file.txt
aws s3 cp /tmp/team-b.txt s3://abac-demo-bucket/team-b/file.txt

Now write the ABAC policy. Open the file:

vim /tmp/abac-policy.json

Paste this policy, which grants list and get permissions only on objects whose key is prefixed with the caller’s Kubernetes namespace:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucketForTeamPrefix",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::abac-demo-bucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "${aws:PrincipalTag/kubernetes-namespace}/*",
            "${aws:PrincipalTag/kubernetes-namespace}"
          ]
        }
      }
    },
    {
      "Sid": "ReadObjectsInTeamPrefix",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::abac-demo-bucket/${aws:PrincipalTag/kubernetes-namespace}/*"
    }
  ]
}

The ${aws:PrincipalTag/kubernetes-namespace} placeholder is the IAM policy variable that pulls the session tag attached by Pod Identity. Create one IAM role with this policy:

aws iam create-policy --policy-name pod-identity-lab-abac-policy --policy-document file:///tmp/abac-policy.json
aws iam create-role --role-name pod-identity-lab-abac-role --assume-role-policy-document file:///tmp/pod-identity-trust.json
aws iam attach-role-policy --role-name pod-identity-lab-abac-role --policy-arn arn:aws:iam::123456789012:policy/pod-identity-lab-abac-policy

Create two namespaces and identical ServiceAccounts:

kubectl create namespace team-a
kubectl create namespace team-b
kubectl -n team-a create serviceaccount app
kubectl -n team-b create serviceaccount app

Create two pod identity associations pointing at the same IAM role. This is the critical step. There is no per-team role.

aws eks create-pod-identity-association --cluster-name pod-identity-lab --namespace team-a --service-account app --role-arn arn:aws:iam::123456789012:role/pod-identity-lab-abac-role --region eu-west-1
aws eks create-pod-identity-association --cluster-name pod-identity-lab --namespace team-b --service-account app --role-arn arn:aws:iam::123456789012:role/pod-identity-lab-abac-role --region eu-west-1

Launch test pods in both namespaces:

kubectl -n team-a run test --image=amazon/aws-cli:latest --overrides='{"spec":{"serviceAccountName":"app"}}' --command -- sleep 3600
kubectl -n team-b run test --image=amazon/aws-cli:latest --overrides='{"spec":{"serviceAccountName":"app"}}' --command -- sleep 3600

From the team-a pod, listing and reading the team-a prefix should work:

kubectl -n team-a exec test -- aws s3 ls s3://abac-demo-bucket/team-a/
kubectl -n team-a exec test -- aws s3 cp s3://abac-demo-bucket/team-a/file.txt -

Here is the output from the real test:

2026-04-10 13:38:45         28 file.txt
This file belongs to team-a

Now try reading team-b’s prefix from the team-a pod:

kubectl -n team-a exec test -- aws s3 ls s3://abac-demo-bucket/team-b/

IAM denies it, because the session tag for this pod is kubernetes-namespace=team-a, which does not satisfy the prefix condition:

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: User: arn:aws:sts::123456789012:assumed-role/abac-role/eks-pod-identi-test-xxx is not authorized to perform: s3:ListBucket on resource: "arn:aws:s3:::abac-demo-bucket" because no identity-based policy allows the s3:ListBucket action
command terminated with exit code 254

The test is symmetric from team-b: it can read team-b’s prefix and is blocked on team-a’s. Same IAM role. Two namespaces. Automatically different permissions.

This is something IRSA fundamentally cannot do without writing custom code in every app to pass tenant context to AWS, or without creating one role per tenant. With Pod Identity the tenant boundary is enforced at the IAM layer, by AWS, based on a tag that the pod cannot forge because it’s set by EKS during role assumption. That’s the killer feature.
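The evaluation that produced the deny above boils down to a string substitution followed by a match, which you can sketch to sanity-check a policy before deploying it. The function is ours, not an IAM simulator; for authoritative answers use the IAM policy simulator:

```python
import re

def resolve_policy_variables(template: str, session_tags: dict) -> str:
    """Substitute ${aws:PrincipalTag/<key>} the way IAM does at evaluation
    time, using the session tags Pod Identity attached to the session."""
    return re.sub(
        r"\$\{aws:PrincipalTag/([^}]+)\}",
        lambda m: session_tags[m.group(1)],
        template,
    )

resource_template = "arn:aws:s3:::abac-demo-bucket/${aws:PrincipalTag/kubernetes-namespace}/*"
team_a = resolve_policy_variables(resource_template, {"kubernetes-namespace": "team-a"})
team_b = resolve_policy_variables(resource_template, {"kubernetes-namespace": "team-b"})
print(team_a)  # arn:aws:s3:::abac-demo-bucket/team-a/*
print(team_b)  # arn:aws:s3:::abac-demo-bucket/team-b/*
```

One template, two sessions, two different effective resources: that is the whole ABAC trick in miniature.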

Portability: Same Role Across Multiple Clusters

We claimed earlier that one of Pod Identity’s biggest wins is portability. Here is the proof. We applied the same role ARN (pod-identity-lab-s3-reader-role) to two different EKS clusters and ran identical test pods in each. Neither the trust policy nor the role was modified between clusters.

On cluster 1:

kubectl -n demo exec s3-test -- aws sts get-caller-identity
kubectl -n demo exec s3-test -- aws s3 ls s3://my-app-bucket/

The response:

{"Arn": "arn:aws:sts::123456789012:assumed-role/pod-identity-lab-s3-reader-role/eks-pod-identi-s3-test-7925999b-44f2-4ca1-9894-f41672b08593"}
2026-04-10 13:38:41         25 hello.txt

We then created a second cluster, installed the addon, created the same association (same role ARN, same namespace, same ServiceAccount name), and ran the same pod on it.

kubectl --context cluster2 -n demo exec s3-test -- aws sts get-caller-identity
kubectl --context cluster2 -n demo exec s3-test -- aws s3 ls s3://my-app-bucket/

The identity and S3 result from cluster 2:

{"Arn": "arn:aws:sts::123456789012:assumed-role/pod-identity-lab-s3-reader-role/eks-pod-identi-s3-test-516f8b65-a4aa-44ed-978f-4735e6751f82"}
2026-04-10 13:38:41         25 hello.txt

Same role ARN. Different session UUIDs (because they’re different pods). Both assume successfully, both get S3 access. The eks-cluster-arn and eks-cluster-name session tags differ between clusters, which CloudTrail records, so you can still tell which cluster a given call came from.

Under IRSA this scenario required either writing a per-cluster trust policy for each OIDC issuer, or writing a trust policy with a StringLike condition listing every issuer URL. Both approaches mean editing the role every time you add a cluster. Pod Identity eliminates that entirely because the trust policy contains nothing cluster-specific in the first place.

Cross-Account Access with targetRoleArn

If your workload needs to assume a role in a different AWS account, Pod Identity has a parameter called targetRoleArn that handles the role chain cleanly. You create a pod identity association where the roleArn is in the same account as the cluster, but the targetRoleArn points at the role in the target account.

The call flow looks like this: the agent assumes the local role, then the local role assumes the target role (standard STS role chaining), and the pod receives the credentials for the target role. You get this with a single API call and zero code changes in the workload.

aws eks create-pod-identity-association \
  --cluster-name pod-identity-lab \
  --namespace demo \
  --service-account s3-reader \
  --role-arn arn:aws:iam::123456789012:role/pod-identity-lab-crossacct-source \
  --target-role-arn arn:aws:iam::987654321098:role/accountb-s3-reader \
  --region eu-west-1

The source role in account A needs the universal Pod Identity trust policy plus a statement allowing it to sts:AssumeRole on the target. The target role in account B trusts the source role ARN, which is standard cross-account role chaining. Everything else is handled by the agent. Compared to the IRSA cross-account pattern (described in the IRSA guide) this is one less moving part because you don’t need to configure OIDC federation in the target account.
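As a sketch of the two policy documents just described (role names and account IDs taken from the command above). The source role in account A carries an inline policy permitting the chain:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::987654321098:role/accountb-s3-reader"
        }
    ]
}
```

And the target role in account B trusts the source role (we include sts:TagSession defensively in case session tags are propagated across the chain; it is harmless otherwise):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/pod-identity-lab-crossacct-source"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}
```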

Migrating from IRSA to Pod Identity

Migrating is reversible and can be done with zero downtime if you follow the right order. We tested this end-to-end on the same cluster used for the rest of this guide, starting with a pure IRSA ServiceAccount, layering Pod Identity on top, verifying precedence, and finalizing by removing the IRSA annotation. The captured output below is from that real test.

The short version: IRSA and Pod Identity can coexist on the same ServiceAccount, but when both are configured and a pod restarts, Pod Identity wins. The mutating webhook stops injecting IRSA environment variables and injects Pod Identity ones instead. We verified this with a side-by-side test using two separate IAM roles (one per mechanism) and watched the assumed role ARN flip on pod restart.

Step 1: Baseline — IRSA working alone

Starting point is a ServiceAccount with the classic IRSA annotation, a pod running against it, and S3 access confirmed. The env vars injected by the webhook are the IRSA pair:

kubectl -n migration-test exec app -- env | grep AWS_

Real output captured before any migration action:

AWS_STS_REGIONAL_ENDPOINTS=regional
AWS_DEFAULT_REGION=eu-west-1
AWS_REGION=eu-west-1
AWS_ROLE_ARN=arn:aws:iam::123456789012:role/migration-irsa-role
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token

And the assumed role from inside the pod:

kubectl -n migration-test exec app -- aws sts get-caller-identity

Note the session name format botocore-session-<unix-timestamp>, which is the signature of IRSA-assumed sessions:

{
    "UserId": "AROAEXAMPLEROLEID1234:botocore-session-1712000000",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/migration-irsa-role/botocore-session-1712000000"
}

Step 2: Install the agent addon

Install the eks-pod-identity-agent addon if it is not already present. This is safe to do at any time: existing IRSA workloads keep using the environment variables they already have, and only pods created after an association exists pick up the Pod Identity injection:

aws eks create-addon \
  --cluster-name pod-identity-lab \
  --addon-name eks-pod-identity-agent \
  --addon-version v1.3.10-eksbuild.2 \
  --region eu-west-1

Step 3: Create a new Pod Identity IAM role

Do not reuse the existing IRSA role: its trust policy has a Federated OIDC principal, which does not match the Pod Identity flow. You need a fresh role whose trust policy uses Service: pods.eks.amazonaws.com. Attach the same permissions policies your IRSA role has so the migration is invisible to the application:

vim pod-identity-trust.json

Paste the universal Pod Identity trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}

Create the role and attach the same permission policies your IRSA role uses:

aws iam create-role \
  --role-name migration-podidentity-role \
  --assume-role-policy-document file://pod-identity-trust.json

aws iam attach-role-policy \
  --role-name migration-podidentity-role \
  --policy-arn arn:aws:iam::123456789012:policy/migration-s3-read-policy

Step 4: Create the Pod Identity association

Point the existing ServiceAccount at the new role. The ServiceAccount keeps its IRSA annotation during this step, so both systems coexist:

aws eks create-pod-identity-association \
  --cluster-name pod-identity-lab \
  --namespace migration-test \
  --service-account app \
  --role-arn arn:aws:iam::123456789012:role/migration-podidentity-role \
  --region eu-west-1

Real output from our test showing the association was created:

{
  "association": {
    "clusterName": "pod-identity-lab",
    "namespace": "migration-test",
    "serviceAccount": "app",
    "roleArn": "arn:aws:iam::123456789012:role/migration-podidentity-role",
    "associationId": "a-ovhrmlhsswsy0xxa5",
    "disableSessionTags": false
  }
}

Here is the key observation we verified: the existing running pod is not affected by this. Re-checking aws sts get-caller-identity immediately after creating the association still returned the IRSA role ARN. The mutating webhook only runs at pod creation, so existing pods keep whatever credentials they already have. This is what makes the migration safe.

Step 5: Restart the pod and watch Pod Identity take over

This is where the switch happens. Kill the pod so the webhook re-processes it on creation:

kubectl -n migration-test delete pod app
kubectl -n migration-test get pod app -w

After the new pod is Running, check the env vars. The IRSA pair is gone, replaced by the Pod Identity pair:

kubectl -n migration-test exec app -- env | grep AWS_

Real captured output from our test, taken immediately after the restart:

AWS_STS_REGIONAL_ENDPOINTS=regional
AWS_DEFAULT_REGION=eu-west-1
AWS_REGION=eu-west-1
AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token

Both AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE have disappeared. The webhook detected the Pod Identity association and injected the container credential provider variables instead. The IRSA annotation is still on the ServiceAccount, but the webhook ignored it.

Confirm which role the pod is actually assuming now:

kubectl -n migration-test exec app -- aws sts get-caller-identity

The assumed role ARN flipped from the IRSA role to the Pod Identity role. Notice the session name format also changed: IRSA uses botocore-session-<timestamp>, Pod Identity uses eks-pod-identi-<pod-name>-<uuid>. This is the fastest way to identify which system is actually in use when you are debugging in CloudTrail:

{
    "UserId": "AROAEXAMPLEROLEID5678:eks-pod-identi-app-8ae2135c-30d7-47f2-9b12-251aa5e70a78",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/migration-podidentity-role/eks-pod-identi-app-8ae2135c-30d7-47f2-9b12-251aa5e70a78"
}
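Because the two formats are this regular, you can classify sessions mechanically when sifting CloudTrail exports. A small sketch (the helper name is ours, not an AWS API):

```python
import re

def credential_mechanism(session_name: str) -> str:
    """Classify an STS session name as produced by IRSA, Pod Identity, or neither."""
    if re.fullmatch(r"botocore-session-\d+", session_name):
        return "irsa"
    if session_name.startswith("eks-pod-identi-"):
        return "pod-identity"
    return "unknown"

# The session name is the last path segment of the assumed-role ARN:
arn = "arn:aws:sts::123456789012:assumed-role/migration-podidentity-role/eks-pod-identi-app-8ae2135c-30d7-47f2-9b12-251aa5e70a78"
print(credential_mechanism(arn.rsplit("/", 1)[-1]))  # pod-identity
```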

S3 access continues to work because both IAM roles carry the same permissions policy. The application sees no difference, even though the credential mechanism just changed underneath it.

Step 6: Remove the IRSA annotation

The ServiceAccount still has the old eks.amazonaws.com/role-arn annotation, which is dead weight at this point. Remove it with kubectl’s trailing-dash syntax:

kubectl -n migration-test annotate sa app eks.amazonaws.com/role-arn-

A small gotcha we hit during testing: kubectl get sa app -o yaml still shows the old annotation value embedded inside the kubectl.kubernetes.io/last-applied-configuration annotation. That is a client-side history artifact, not the live state. The live eks.amazonaws.com/role-arn annotation is actually gone. Restart the pod one more time and confirm it still uses Pod Identity:

kubectl -n migration-test delete pod app
sleep 20
kubectl -n migration-test exec app -- aws sts get-caller-identity

The assumed role is still migration-podidentity-role, session name still in the eks-pod-identi- format, S3 access still works. Migration complete.

Step 7: Clean up the old IRSA role

Leave the old IRSA role in place for 24 to 48 hours in case you need to roll back. Once you are confident, detach the policies and delete it:

aws iam detach-role-policy --role-name migration-irsa-role --policy-arn arn:aws:iam::123456789012:policy/migration-s3-read-policy
aws iam delete-role --role-name migration-irsa-role

For a rollback path before the IRSA role is deleted, re-add the annotation and restart the deployment:

kubectl -n migration-test annotate sa app eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/migration-irsa-role --overwrite
aws eks delete-pod-identity-association --cluster-name pod-identity-lab --association-id a-ovhrmlhsswsy0xxa5 --region eu-west-1
kubectl -n migration-test rollout restart deployment app

Because Pod Identity has precedence when both are configured, you must delete the pod identity association (not just re-add the IRSA annotation) to force the rollback. We verified this during testing by re-annotating without deleting the association first, and the pod still came up with Pod Identity env vars because the association was still active.

If you need a refresher on how the IRSA side works (what the annotation does, how OIDC federation validates the JWT), our IRSA guide covers it in depth. Keep that tab open during migrations because it makes troubleshooting faster.

Troubleshooting

These are the errors we actually hit while writing this guide, plus a few we’ve seen in production. Each section uses the exact error string as the heading so you can land here from a search for the error text.

Error: “Connect timeout on endpoint URL: http://169.254.170.23/v1/credentials”

This is the most common Pod Identity error. The SDK has the env vars, tries to call the agent endpoint, and the connection times out. Three root causes to check in order.

First, confirm the agent addon is installed and healthy:

kubectl -n kube-system get ds eks-pod-identity-agent

If the DaemonSet is missing, install it. If it exists but has zero desired pods, check the node selector (the default schedules on all Linux EC2 nodes). If the desired count matches nodes but ready count is lower, check the agent pod logs for crashes.

Second, check whether the pod is running on Fargate. Pod Identity does not work on Fargate because Fargate does not run DaemonSets. The failing pod looks like this when you describe it (note the fargate node name):

kubectl -n demo describe pod s3-test | grep -E 'Node|fargate'

If the node name starts with fargate-ip- you’ve hit this limitation. The real error when we reproduced it on Fargate was:

Error when retrieving credentials from container-role: Error retrieving metadata:
Received error when attempting to retrieve container metadata:
Connect timeout on endpoint URL: "http://169.254.170.23/v1/credentials"
command terminated with exit code 255

Notice the env vars were injected by the control-plane webhook, which runs everywhere including Fargate. What failed is the actual credential fetch against an agent that never exists on Fargate nodes. The fix is to schedule the workload on EC2 nodes, or to use IRSA for that workload instead.

Third, check for NetworkPolicies that block egress to 169.254.170.23. A default-deny egress policy silently breaks Pod Identity because the SDK cannot reach the agent. If you enforce default-deny in the namespace, add an explicit allow rule for the link-local credential endpoint on TCP port 80. IPv6 clusters should also allow fd00:ec2::23.
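A minimal allow rule for that case might look like this (the policy name and namespace are ours; adjust the selector to match your default-deny scope). NetworkPolicies are additive, so this sits alongside the existing default-deny:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-eks-pod-identity-agent
  namespace: demo              # the namespace with default-deny egress
spec:
  podSelector: {}              # every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 169.254.170.23/32    # Pod Identity agent, IPv4
        - ipBlock:
            cidr: fd00:ec2::23/128     # Pod Identity agent, IPv6 clusters
      ports:
        - protocol: TCP
          port: 80
```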

AccessDeniedException: User is not authorized to perform: eks-auth:AssumeRoleForPodIdentity

The agent calls the new eks-auth service on behalf of the pod, but the identity making the call is the node’s IAM role. That role needs the eks-auth:AssumeRoleForPodIdentity permission. If you created your worker nodes with a stripped-down role, this permission may be missing.

The fix is to attach the AWS-managed AmazonEKSWorkerNodePolicy to your node role. It already includes the action. For self-built node roles, add this statement:

{
    "Effect": "Allow",
    "Action": "eks-auth:AssumeRoleForPodIdentity",
    "Resource": "*"
}

Pod still using node role instead of Pod Identity

You created the association, you created the ServiceAccount, you launched the pod, and aws sts get-caller-identity still returns the node’s IAM role. What happened?

The most likely cause is that the pod was created before the association existed. The mutating webhook only runs at pod admission time. If the pod was already scheduled when you ran create-pod-identity-association, the env vars were never injected. This is exactly what happens with the External Secrets Operator scenario covered earlier. The fix is to delete and recreate the pod:

kubectl -n demo delete pod s3-test

For Deployments, use kubectl rollout restart deployment <name>; the same command works for DaemonSets and StatefulSets. Once the new pod is admitted, check that the env vars are present:

kubectl -n demo exec s3-test -- env | grep AWS_CONTAINER

Both AWS_CONTAINER_CREDENTIALS_FULL_URI and AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE must be set. If they aren’t, the webhook didn’t find a matching association for this pod’s ServiceAccount. Double-check that the association’s namespace and ServiceAccount name match exactly; they are case-sensitive.

MatchNodeSelector failed: Fargate profile cannot satisfy pod’s node selector/affinity

You tried to exclude Fargate by adding a node affinity to your pods, but you put the workload in a namespace already matched by a Fargate profile. Fargate scheduling overrides normal affinity rules and refuses the pod. The right fix is to install the workload in a namespace that no Fargate profile targets. That’s why this guide installs the Load Balancer Controller in alb-system instead of kube-system. If your cluster has an fp-default or similar Fargate profile that includes kube-system, pick a different namespace.

ExternalSecret stuck with “could not get secret data from provider”

Specific to ESO, but the root cause applies anywhere: the operator pods predate the Pod Identity association, so the webhook never injected env vars into them. Delete the operator pods:

kubectl -n external-secrets delete pod -l app.kubernetes.io/name=external-secrets

The replacement pods come up with the env vars injected, ESO picks up the role, and the ExternalSecret reconciles normally. This is also the fix for Load Balancer Controller, Cluster Autoscaler, Karpenter, and any other operator you retrofit onto Pod Identity.

SDK too old: silent fallback to node role

Your code runs, no errors appear in logs, but every API call is made as the node role. No AccessDenied, just the wrong identity. This is the nasty one because it looks like everything works.

The cause is an SDK version that predates the container credential provider. Old SDKs do not know what AWS_CONTAINER_CREDENTIALS_FULL_URI means, so they fall back through the credential chain until they reach IMDS and grab the node role. Upgrade the SDK to a version that supports Pod Identity (boto3 ≥1.34.41, aws-sdk-go-v2 ≥2023-11-14, aws-sdk-js-v3 ≥3.458.0, aws-sdk-java-v2 ≥2.21.30). Rebuild your image and redeploy.
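A build-time guard can catch this before it ships. A stdlib-only sketch (the function name is ours; the boto3 minimum is the one listed above, and in a container you would feed it boto3.__version__):

```python
def supports_pod_identity(version: str, minimum: str = "1.34.41") -> bool:
    """Return True when a dotted version string meets the Pod Identity minimum."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

print(supports_pod_identity("1.34.41"))  # True  -> container credential provider available
print(supports_pod_identity("1.28.0"))   # False -> will fall back to IMDS and the node role
```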

To confirm the SDK sees Pod Identity, dump the credential source from inside the pod. With the Python SDK, boto3.Session().get_credentials().method returns container-role when Pod Identity is working and iam-role when it’s falling back to IMDS.

Security Hardening Checklist

Pod Identity removes OIDC complexity, but the base rules of IAM still apply. These ten items are what we check before signing off on a Pod Identity deployment in production.

  1. One IAM role per ServiceAccount, not one role for the whole cluster. Least privilege still wins even when associations are cheap.
  2. Use session tags and ABAC to reduce role proliferation for multi-tenant workloads. One ABAC role beats fifty team-specific roles.
  3. Attach a permissions boundary to every Pod Identity role. Even if someone writes an overly-permissive inline policy, the boundary caps the blast radius.
  4. Set IMDS hop limit to 1 on EC2 nodes. This stops a compromised pod from reaching IMDS and stealing the node instance role, which has eks-auth:AssumeRoleForPodIdentity and can assume any role in the cluster. Set --http-put-response-hop-limit 1 on the launch template.
  5. Tag every IAM role with cluster, purpose, and managed-by. Cleanup is painful if you don’t know which roles a decommissioned cluster created.
  6. Manage associations in IaC (Terraform or eksctl). Click-ops associations have a habit of surviving clusters that were destroyed months ago.
  7. Monitor the agent DaemonSet health with an alert. If the agent dies, every pod on that node loses AWS credentials at the next refresh, which is usually within the hour.
  8. Keep the addon updated. Addon versions occasionally include security fixes and the upgrade path is two CLI calls.
  9. Run the cluster-wide association audit (next section) regularly. Drift is real and detecting it early is cheap.
  10. Never mix an IRSA annotation and a Pod Identity association on the same ServiceAccount. Behaviour is defined (Pod Identity wins) but reviewers get confused and incidents get harder. Pick one per SA.
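For item 4, the launch-template fragment looks like this in Terraform (the resource name is ours; merge the metadata_options block into your existing node launch template):

```hcl
resource "aws_launch_template" "nodes" {
  name_prefix = "pod-identity-nodes-"

  metadata_options {
    http_tokens                 = "required" # IMDSv2 only
    http_put_response_hop_limit = 1          # containers cannot reach IMDS through the node
  }
}
```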

Auditing Pod Identity Associations

You’ll want a quick “what’s assigned to what” view of the cluster periodically. The EKS API returns association IDs but not the role ARN directly, so we chain list and describe calls with a bit of Python to produce a clean table.

aws eks list-pod-identity-associations --cluster-name pod-identity-lab --region eu-west-1 --output json | python3 -c "
import sys, json, subprocess
data = json.load(sys.stdin)
fmt = '%-18s %-32s %-40s'
print(fmt % ('Namespace', 'ServiceAccount', 'IAM Role'))
print('-' * 95)
for a in data['associations']:
    d = json.loads(subprocess.check_output(['aws','eks','describe-pod-identity-association','--cluster-name','pod-identity-lab','--association-id',a['associationId'],'--region','eu-west-1','--output','json']))
    r = d['association']
    role = r['roleArn'].split('/')[-1]
    print(fmt % (r['namespace'], r['serviceAccount'], role))
"

On the lab cluster, the real output looks like this:

Namespace          ServiceAccount                   IAM Role
-----------------------------------------------------------------------------------------------
alb-system         aws-load-balancer-controller     pod-identity-lab-alb-role
team-a             app                              pod-identity-lab-abac-role
demo               s3-reader                        pod-identity-lab-s3-reader-role
default            s3-reader                        pod-identity-lab-s3-reader-role
external-secrets   external-secrets                 pod-identity-lab-eso-role
team-b             app                              pod-identity-lab-abac-role              

Save this as a script and run it from CI on a schedule. Diff the output between runs and you have cheap drift detection.

Complete Terraform Module

Here is the tested Terraform module we use in the lab. It creates the IAM policy, role, Kubernetes namespace, ServiceAccount (with no IRSA annotation), and pod identity association. Six resources in total.

Start with the main module file:

vim main.tf

Paste the provider and resource definitions:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.70"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.33"
    }
  }
}

provider "aws" {
  region = var.region
}

data "aws_eks_cluster" "main" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "main" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
}

resource "aws_iam_policy" "s3_read" {
  name = "${var.cluster_name}-${var.service_account}-tf-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::${var.bucket_name}"
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject"]
        Resource = "arn:aws:s3:::${var.bucket_name}/*"
      }
    ]
  })
}

resource "aws_iam_role" "pod_identity" {
  name = "${var.cluster_name}-${var.namespace}-${var.service_account}-tf-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowEksAuthToAssumeRoleForPodIdentity"
        Effect = "Allow"
        Principal = {
          Service = "pods.eks.amazonaws.com"
        }
        Action = [
          "sts:AssumeRole",
          "sts:TagSession"
        ]
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "s3_read" {
  role       = aws_iam_role.pod_identity.name
  policy_arn = aws_iam_policy.s3_read.arn
}

resource "kubernetes_namespace" "demo" {
  metadata {
    name = var.namespace
  }
}

resource "kubernetes_service_account" "app" {
  metadata {
    name      = var.service_account
    namespace = kubernetes_namespace.demo.metadata[0].name
  }
}

resource "aws_eks_pod_identity_association" "app" {
  cluster_name    = var.cluster_name
  namespace       = kubernetes_namespace.demo.metadata[0].name
  service_account = kubernetes_service_account.app.metadata[0].name
  role_arn        = aws_iam_role.pod_identity.arn
}

Create the variables file:

vim variables.tf

Paste the inputs:

variable "region" {
  type    = string
  default = "eu-west-1"
}

variable "cluster_name" {
  type = string
}

variable "namespace" {
  type = string
}

variable "service_account" {
  type = string
}

variable "bucket_name" {
  type = string
}

Create the outputs file:

vim outputs.tf

Paste the three outputs we’ll reference later:

output "role_arn" {
  value = aws_iam_role.pod_identity.arn
}

output "association_id" {
  value = aws_eks_pod_identity_association.app.association_id
}

output "association_arn" {
  value = aws_eks_pod_identity_association.app.association_arn
}

And a simple tfvars file for the test run:

vim terraform.tfvars

Populate it with the cluster and bucket names:

region          = "eu-west-1"
cluster_name    = "pod-identity-lab"
namespace       = "tf-demo"
service_account = "tf-s3-reader"
bucket_name     = "my-app-bucket"

Initialize and apply:

terraform init
terraform apply -auto-approve

The end of the apply output shows the six new resources and the outputs:

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

Outputs:
association_arn = "arn:aws:eks:eu-west-1:123456789012:podidentityassociation/pod-identity-lab/a-dpkxwl6r3mxyzhglz"
association_id = "a-dpkxwl6r3mxyzhglz"
role_arn = "arn:aws:iam::123456789012:role/pod-identity-lab-tf-demo-tf-s3-reader-tf-role"

Verify with a test pod on the Terraform-built SA:

kubectl -n tf-demo run tf-test --image=amazon/aws-cli:latest --overrides='{"spec":{"serviceAccountName":"tf-s3-reader"}}' --command -- sleep 3600
kubectl -n tf-demo exec tf-test -- aws sts get-caller-identity
kubectl -n tf-demo exec tf-test -- aws s3 cp s3://my-app-bucket/hello.txt -

The test pod confirms the Terraform-built role is in effect and the S3 read works end to end:

{"Arn": "arn:aws:sts::123456789012:assumed-role/pod-identity-lab-tf-demo-tf-s3-reader-tf-role/eks-pod-identi-tf-test-xxx"}
Hello from Pod Identity!

Tear down when you’re done:

terraform destroy -auto-approve

Compare this to the equivalent IRSA module. You don’t need an aws_iam_openid_connect_provider data source. You don’t manipulate issuer strings to build the federated principal ARN. You don’t compute the trust policy condition keys with replace. The whole mechanism collapses to one extra resource (aws_eks_pod_identity_association) and a trust policy that never changes.
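Scaling the module to many workloads is equally mechanical, because every association can share the universal trust policy. A hypothetical extension of the module above (the variable and resource names are ours), reusing the single role:

```hcl
variable "workloads" {
  type = map(object({
    namespace       = string
    service_account = string
  }))
  default = {
    reader = { namespace = "demo", service_account = "s3-reader" }
  }
}

resource "aws_eks_pod_identity_association" "workload" {
  for_each        = var.workloads
  cluster_name    = var.cluster_name
  namespace       = each.value.namespace
  service_account = each.value.service_account
  role_arn        = aws_iam_role.pod_identity.arn
}
```

The referenced Kubernetes namespaces and ServiceAccounts still need to exist, whether via matching for_each copies of the kubernetes_* resources or some other mechanism.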

Quick Reference and FAQ

Ten commands that return visitors will reach for. Paste and adapt.

aws eks create-addon --cluster-name CLUSTER --addon-name eks-pod-identity-agent
aws eks list-pod-identity-associations --cluster-name CLUSTER
aws eks describe-pod-identity-association --cluster-name CLUSTER --association-id ID
aws eks create-pod-identity-association --cluster-name CLUSTER --namespace NS --service-account SA --role-arn ROLE_ARN
aws eks delete-pod-identity-association --cluster-name CLUSTER --association-id ID
kubectl -n kube-system get ds eks-pod-identity-agent
kubectl -n NS exec POD -- env | grep AWS_CONTAINER
kubectl -n NS exec POD -- aws sts get-caller-identity
kubectl -n NS exec POD -- cat /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRoleForPodIdentity

The rest of this section answers the questions readers keep sending us. If you’re looking for deeper context on any of the related topics, see our Kubernetes RBAC guide, the kubectl cheat sheet, or the Velero backup guide.

Does Pod Identity work on Fargate?

No. The eks-pod-identity-agent is a DaemonSet and Fargate does not run DaemonSets. The control-plane webhook still injects the env vars into Fargate pods (it runs everywhere), but the credential fetch times out because there’s no agent listening on 169.254.170.23. Use IRSA for Fargate workloads and Pod Identity for EC2 workloads. They coexist in the same cluster.

Can I use the same IAM role on multiple EKS clusters?

Yes, that’s one of Pod Identity’s main advantages. The universal trust policy contains no cluster-specific references, so the same role works on every cluster without editing. We demonstrated this earlier in the portability section with real output from two different clusters hitting the same role ARN. CloudTrail still records which cluster each call came from via the eks-cluster-arn session tag.

How do I see which IAM role a pod is currently using?

The fastest check is from inside the pod:

kubectl -n NAMESPACE exec POD -- aws sts get-caller-identity

The Arn field shows the assumed-role session. If the role name starts with something like eksctl-clustername-nodegroup, the pod is falling back to the node role and Pod Identity is not working. You can also ask the EKS API directly for the association belonging to a ServiceAccount:

aws eks list-pod-identity-associations --cluster-name CLUSTER --namespace NAMESPACE --service-account SA

Do I need to migrate from IRSA?

No. Both mechanisms are fully supported and will remain supported. We recommend Pod Identity for new workloads in 2026, but there’s no deprecation timeline on IRSA. Migrate when you have a reason to (ABAC, role portability, cleaner Terraform), not because the blog posts say you should. The migration section above walks through a zero-downtime procedure if you do decide to move.

How many pod identity associations can I have per cluster?

The current soft limit is 5000 associations per cluster, adjustable via Service Quotas. That’s roughly 100 times what most clusters need. The practical constraint is IAM role count per account (default 1000, also adjustable), which you hit first if you’re creating one role per association.

Does Pod Identity support Windows containers?

Not at the time of writing. The agent binary is Linux-only. Windows nodes in EKS still need IRSA for pod-level IAM. If your cluster runs mixed Windows and Linux nodes, use IRSA on the Windows side and Pod Identity on the Linux side. AWS has indicated Windows support is on the roadmap but there’s no public date.

What’s the difference between the JWT audiences in IRSA and Pod Identity?

IRSA mints a ServiceAccount token with audience sts.amazonaws.com and presents it to the STS AssumeRoleWithWebIdentity API. Pod Identity mints a ServiceAccount token with audience pods.eks.amazonaws.com and presents it to the agent at 169.254.170.23, which in turn calls the new eks-auth:AssumeRoleForPodIdentity API. Because the audiences differ, a token minted for one mechanism cannot be used with the other, which is why supporting both requires two separate IAM roles with distinct trust policies (and you shouldn’t mix them on one ServiceAccount anyway; Pod Identity takes precedence).
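You can inspect the audience yourself by decoding the token’s payload segment (no signature verification needed for a quick look). A stdlib-only sketch; the demo token below is fabricated, and in a real pod you would read the token from the mounted file shown in the quick-reference commands:

```python
import base64
import json

def jwt_audience(token: str) -> list:
    """Decode a JWT payload without verification and return its aud claim as a list."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    aud = claims["aud"]
    return aud if isinstance(aud, list) else [aud]

# Fabricated demo token carrying a Pod Identity style audience:
demo_payload = base64.urlsafe_b64encode(
    json.dumps({"aud": ["pods.eks.amazonaws.com"]}).encode()
).rstrip(b"=").decode()
print(jwt_audience(f"header.{demo_payload}.sig"))  # ['pods.eks.amazonaws.com']
```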

For the official reference material, the AWS EKS Pod Identity documentation covers API-level details and limits, the eks-pod-identity-agent repository contains the agent source and release notes, and the Pod Identity ABAC guide has more IAM policy examples for the session tag patterns.
