Grant Developers Access to EKS Kubernetes Cluster

Giving developers access to an Amazon EKS cluster requires a combination of AWS IAM and Kubernetes RBAC. The goal is least-privilege access – developers get exactly the permissions they need to deploy and debug applications in their namespaces, without cluster-wide admin rights. This guide covers both the legacy aws-auth ConfigMap method and the modern EKS access entries API introduced in late 2023.

We walk through creating IAM identities, mapping them to Kubernetes users, setting up namespaces with RBAC roles, generating developer kubeconfig files, and auditing access. Every step includes verification commands so you can confirm the setup works before handing credentials to your team.

Prerequisites

  • An existing EKS cluster (version 1.28 or later recommended for access entries support)
  • AWS CLI v2 installed and configured with admin or cluster-creator credentials
  • kubectl installed and configured to talk to the EKS cluster
  • eksctl installed (optional but simplifies access entry management)
  • IAM permissions to create users, roles, and policies in the AWS account

Confirm your cluster is reachable before proceeding:

kubectl cluster-info

The output should show the Kubernetes control plane endpoint for your EKS cluster:

Kubernetes control plane is running at https://ABCDEF1234567890.gr7.us-east-1.eks.amazonaws.com
CoreDNS is running at https://ABCDEF1234567890.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Step 1: Create an IAM User or Role for the Developer

Every developer who needs EKS access must have an AWS IAM identity. You can use either an IAM user (for individual developers) or an IAM role (for teams or federated access). IAM roles are preferred in production because they use temporary credentials and integrate with SSO providers.

Option A: Create an IAM User

Create a dedicated IAM user for the developer with programmatic access:

aws iam create-user --user-name dev-john

Create an access key pair so the developer can authenticate with the AWS CLI:

aws iam create-access-key --user-name dev-john

The output returns the AccessKeyId and SecretAccessKey. Share these securely with the developer – the secret is only shown once:

{
    "AccessKey": {
        "UserName": "dev-john",
        "AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "Status": "Active",
        "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "CreateDate": "2026-03-22T10:00:00+00:00"
    }
}
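On the developer's machine, the key pair is usually stored in a dedicated AWS CLI profile rather than the default one. A minimal sketch, reusing the example key from the output above (the profile name dev-john is illustrative):

```shell
# Store the credentials in a named profile on the developer's machine
aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE --profile dev-john
aws configure set aws_secret_access_key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --profile dev-john
aws configure set region us-east-1 --profile dev-john

# Confirm the profile resolves to the expected IAM identity
aws sts get-caller-identity --profile dev-john
```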

The developer needs a minimal IAM policy that only allows them to generate their kubeconfig. Attach this policy to the user:

aws iam put-user-policy --user-name dev-john --policy-name EKSDescribeCluster --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}'

Option B: Create an IAM Role (Recommended for Teams)

For team-based access, create a role that developers can assume. This approach works well with AWS SSO and federated identity providers:

aws iam create-role --role-name EKSDeveloperRole --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}'

Replace ACCOUNT_ID with your 12-digit AWS account ID. Then attach the same EKS describe policy:

aws iam put-role-policy --role-name EKSDeveloperRole --policy-name EKSDescribeCluster --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}'
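Developers can then assume the role through a named profile in ~/.aws/config. A sketch, assuming the developer's long-lived credentials live in the default profile (the profile name eks-developer is an example; replace ACCOUNT_ID as before):

```shell
# Append an assumable-role profile to ~/.aws/config
cat >> ~/.aws/config << 'EOF'
[profile eks-developer]
role_arn = arn:aws:iam::ACCOUNT_ID:role/EKSDeveloperRole
source_profile = default
region = us-east-1
EOF

# Verify the role can be assumed
aws sts get-caller-identity --profile eks-developer
```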

Step 2: Update the aws-auth ConfigMap

The aws-auth ConfigMap in the kube-system namespace is how EKS maps AWS IAM identities to Kubernetes users and groups. The cluster creator is already mapped automatically, but every additional user or role must be added explicitly.

First, check the current aws-auth ConfigMap:

kubectl get configmap aws-auth -n kube-system -o yaml

Edit the ConfigMap to add the developer IAM user or role mapping:

kubectl edit configmap aws-auth -n kube-system

Add the following under the mapUsers section for an IAM user, or under mapRoles for an IAM role:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::ACCOUNT_ID:role/EKSNodeRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::ACCOUNT_ID:user/dev-john
      username: dev-john
      groups:
        - dev-team

If you are mapping an IAM role instead of a user, add it under mapRoles:

  mapRoles: |
    - rolearn: arn:aws:iam::ACCOUNT_ID:role/EKSDeveloperRole
      username: developer
      groups:
        - dev-team
    - rolearn: arn:aws:iam::ACCOUNT_ID:role/EKSNodeRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

The username field sets the Kubernetes identity for RBAC, and groups determines which RoleBindings or ClusterRoleBindings apply. Be careful editing aws-auth – a misconfiguration can lock you out of the cluster.

Verify the ConfigMap was updated correctly:

kubectl describe configmap aws-auth -n kube-system
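If you would rather not hand-edit aws-auth (and risk the lockout mentioned above), eksctl from the prerequisites can manage the mapping for you:

```shell
# Add the developer mapping without opening an editor
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::ACCOUNT_ID:user/dev-john \
  --username dev-john \
  --group dev-team

# List current mappings to confirm
eksctl get iamidentitymapping --cluster my-cluster --region us-east-1
```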

Step 3: Create a Kubernetes Namespace for the Team

Namespaces provide isolation between teams. Create a dedicated namespace where developers will deploy their workloads. This keeps their resources separate from other teams and system components.

kubectl create namespace dev-apps

Confirm the namespace was created:

kubectl get namespace dev-apps

The namespace should show Active status:

NAME       STATUS   AGE
dev-apps   Active   5s

Optionally, set resource quotas on the namespace to prevent developers from consuming more than their share of cluster resources:

kubectl apply -f - << 'YAML'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-apps-quota
  namespace: dev-apps
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
YAML
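A ResourceQuota only counts the requests and limits that pods actually declare, so it is commonly paired with a LimitRange that applies defaults to containers that omit them. A sketch – the default values here are examples, not recommendations:

```shell
kubectl apply -f - << 'YAML'
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-apps-limits
  namespace: dev-apps
spec:
  limits:
    - type: Container
      default:            # applied when a container declares no limits
        cpu: 500m
        memory: 512Mi
      defaultRequest:     # applied when a container declares no requests
        cpu: 100m
        memory: 128Mi
YAML
```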

Step 4: Create RBAC Role and RoleBinding for Developers

A Kubernetes RBAC Role defines what actions are allowed within a namespace. A RoleBinding connects that Role to the developer's group. This is the core of least-privilege access – developers can manage pods, deployments, and services in their namespace but nothing else.

Create a Role that gives developers full control over common workload resources in the dev-apps namespace:

kubectl apply -f - << 'YAML'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-apps
  name: developer-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "pods/exec", "services", "configmaps", "secrets", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch"]
YAML

Now bind this Role to the dev-team group that we mapped in the aws-auth ConfigMap:

kubectl apply -f - << 'YAML'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev-apps
  name: developer-rolebinding
subjects:
  - kind: Group
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io
YAML

Verify both resources were created:

kubectl get role,rolebinding -n dev-apps

You should see both the Role and RoleBinding listed:

NAME                                        CREATED AT
role.rbac.authorization.k8s.io/developer-role   2026-03-22T10:05:00Z

NAME                                                    ROLE                  AGE
rolebinding.rbac.authorization.k8s.io/developer-rolebinding   Role/developer-role   10s

Step 5: Create a ClusterRole for Read-Only Cluster Access

Developers often need to list namespaces, view nodes, or check cluster-wide resources for troubleshooting. A ClusterRole with read-only permissions gives them visibility without the ability to modify anything outside their namespace.

kubectl apply -f - << 'YAML'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: developer-cluster-readonly
rules:
  - apiGroups: [""]
    resources: ["namespaces", "nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
YAML

Bind this ClusterRole to the dev-team group with a ClusterRoleBinding:

kubectl apply -f - << 'YAML'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developer-cluster-readonly-binding
subjects:
  - kind: Group
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: developer-cluster-readonly
  apiGroup: rbac.authorization.k8s.io
YAML

This setup means developers can run kubectl get nodes and kubectl get namespaces across the cluster, but they can only create or modify resources inside the dev-apps namespace. If you manage a Kubernetes Dashboard, the same RBAC rules apply to dashboard access as well.

Step 6: Configure the Developer's Kubeconfig

The developer needs a kubeconfig file that authenticates them through their own AWS IAM identity. On the developer's machine, they run the following command after configuring the AWS CLI with their credentials:

aws eks update-kubeconfig --region us-east-1 --name my-cluster --alias my-cluster-dev

Replace us-east-1 with your cluster's region and my-cluster with the actual cluster name. This generates a kubeconfig entry that uses the developer's AWS credentials to authenticate with EKS.

If the developer is using an IAM role instead of a user, they need to specify the role ARN:

aws eks update-kubeconfig --region us-east-1 --name my-cluster --role-arn arn:aws:iam::ACCOUNT_ID:role/EKSDeveloperRole --alias my-cluster-dev

Set the developer's default namespace so they do not need to specify -n dev-apps on every command:

kubectl config set-context my-cluster-dev --namespace=dev-apps
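Before checking the Kubernetes identity, it is worth confirming which AWS identity the kubeconfig will authenticate as – pointing at the wrong profile is the most common cause of Unauthorized errors:

```shell
# The ARN shown here must match the identity mapped in aws-auth (or an access entry)
aws sts get-caller-identity
```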

Verify the kubeconfig works by checking the developer's identity:

kubectl auth whoami

The output confirms the Kubernetes username and groups mapped from IAM:

ATTRIBUTE             VALUE
Username              dev-john
Groups                [dev-team system:authenticated]

Step 7: Test Developer Access (Allowed and Denied Actions)

Always test both what the developer can do and what they should not be able to do. This confirms RBAC is working correctly before handing off access.

Test Allowed Actions

Using the developer's kubeconfig, verify they can create and manage resources in their namespace:

kubectl run test-pod --image=nginx --namespace=dev-apps
kubectl get pods -n dev-apps
kubectl get nodes
kubectl get namespaces

All four commands should succeed. The developer can create pods in dev-apps and view cluster-wide read-only resources.

Test Denied Actions

Verify that the developer cannot access resources outside their namespace or perform admin operations:

kubectl get pods -n kube-system
kubectl delete namespace dev-apps
kubectl get secrets -n default

Each of these commands should return a forbidden error confirming least-privilege is enforced:

Error from server (Forbidden): pods is forbidden: User "dev-john" cannot list resource "pods" in API group "" in the namespace "kube-system"

You can also use kubectl auth can-i to check specific permissions without actually running the command. Run these checks with cluster-admin credentials, since using --as to impersonate another user requires the impersonate permission:

kubectl auth can-i create deployments -n dev-apps --as dev-john

This should return yes. Check a denied action:

kubectl auth can-i delete nodes --as dev-john

This returns no, confirming cluster-level write operations are blocked.

Clean up the test pod when done:

kubectl delete pod test-pod -n dev-apps

Step 8: Grant Access Using EKS Access Entries (Modern API)

EKS access entries were introduced as a replacement for the aws-auth ConfigMap. They are managed through the AWS API rather than a Kubernetes ConfigMap, which means you can manage access without kubectl and there is no risk of locking yourself out by misconfiguring the ConfigMap. For new clusters, this is the recommended approach.

First, verify that your cluster has the API authentication mode enabled:

aws eks describe-cluster --name my-cluster --query "cluster.accessConfig.authenticationMode"

The output should show either API or API_AND_CONFIG_MAP. If it shows CONFIG_MAP only, update it. Note that this change is one-way – once a cluster is switched to API_AND_CONFIG_MAP, it cannot be switched back to CONFIG_MAP:

aws eks update-cluster-config --name my-cluster --access-config authenticationMode=API_AND_CONFIG_MAP

Create an Access Entry for an IAM User

Add the developer's IAM user as an access entry:

aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::ACCOUNT_ID:user/dev-john --kubernetes-groups dev-team --type STANDARD

The --kubernetes-groups flag maps the IAM identity to the same Kubernetes group we created RBAC bindings for. This means the same Role and RoleBinding from Step 4 apply automatically.
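Recent versions of eksctl can also manage access entries, if you prefer it over the raw AWS CLI. A sketch (verify flag support against your installed eksctl version):

```shell
# Equivalent access entry via eksctl
eksctl create accessentry \
  --cluster my-cluster \
  --principal-arn arn:aws:iam::ACCOUNT_ID:user/dev-john \
  --kubernetes-groups dev-team

# List entries to confirm
eksctl get accessentry --cluster my-cluster
```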

Associate an Access Policy

EKS provides built-in access policies that you can associate with entries. For namespace-scoped developer access, use the AmazonEKSEditPolicy:

aws eks associate-access-policy --cluster-name my-cluster \
  --principal-arn arn:aws:iam::ACCOUNT_ID:user/dev-john \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=namespace,namespaces=dev-apps

This grants edit access scoped to only the dev-apps namespace. The developer cannot modify resources in other namespaces.

List all access entries to verify:

aws eks list-access-entries --cluster-name my-cluster

Check the associated policies for a specific entry:

aws eks list-associated-access-policies --cluster-name my-cluster --principal-arn arn:aws:iam::ACCOUNT_ID:user/dev-john

The most commonly used EKS access policies are:

Policy                        Use Case
AmazonEKSClusterAdminPolicy   Full cluster admin (equivalent to cluster-admin)
AmazonEKSAdminPolicy          Admin within scoped namespaces
AmazonEKSEditPolicy           Create, update, and delete workloads in scoped namespaces
AmazonEKSViewPolicy           Read-only access to resources in scoped namespaces

If your team manages container registries like Harbor on Kubernetes, you may want to grant developers view access to the registry namespace as well so they can troubleshoot image pull issues.

Step 9: Audit Developer Access with CloudTrail

AWS CloudTrail logs every API call made to EKS, including authentication attempts. Enable CloudTrail logging if it is not already active, and use it to audit who accessed the cluster and what actions they performed.

Search for recent EKS access events by a specific user:

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=Username,AttributeValue=dev-john \
  --start-time 2026-03-21T00:00:00Z \
  --end-time 2026-03-22T23:59:59Z \
  --max-results 10

Filter for EKS-specific API calls to see cluster access patterns:

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=eks.amazonaws.com \
  --start-time 2026-03-21T00:00:00Z \
  --max-results 20

For Kubernetes-level audit logs (which API calls a user made inside the cluster), enable EKS control plane logging:

aws eks update-cluster-config --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'

Once enabled, Kubernetes audit logs are sent to CloudWatch Logs under the log group /aws/eks/my-cluster/cluster. Query them with CloudWatch Logs Insights:

aws logs start-query \
  --log-group-name "/aws/eks/my-cluster/cluster" \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, user.username, verb, objectRef.resource, objectRef.namespace | filter user.username = "dev-john" | sort @timestamp desc | limit 20'

This gives you a complete picture of what each developer did inside the cluster – which pods they created, which secrets they accessed, and which namespaces they queried.
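Note that start-query only returns a query ID; the matched log lines are fetched separately once the query status is Complete. A sketch that captures the ID and polls for results:

```shell
# Capture the queryId from start-query
QUERY_ID=$(aws logs start-query \
  --log-group-name "/aws/eks/my-cluster/cluster" \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, user.username, verb | filter user.username = "dev-john" | limit 20' \
  --query queryId --output text)

# Re-run until "status" shows Complete, then read the results
aws logs get-query-results --query-id "$QUERY_ID"
```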

For self-managed Kubernetes clusters, the audit logging setup is different – you configure audit policies directly on the API server rather than through AWS.

Revoking Developer Access

When a developer leaves the team or changes roles, remove their access promptly. For the aws-auth ConfigMap method, edit the ConfigMap and remove the user entry:

kubectl edit configmap aws-auth -n kube-system

Remove the user's entry from the mapUsers or mapRoles section and save the file.

For the access entries API, delete the entry directly:

aws eks delete-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::ACCOUNT_ID:user/dev-john

Also remove or deactivate the IAM user's access keys to prevent any further AWS API access:

aws iam delete-access-key --user-name dev-john --access-key-id AKIAIOSFODNN7EXAMPLE

Conclusion

We configured developer access to an EKS cluster using both the aws-auth ConfigMap and the modern access entries API. The RBAC setup ensures developers can deploy and manage workloads in their namespace while cluster-wide resources remain protected. Pair this with proper IAM credential management and regular CloudTrail audits to maintain a strong security posture.

For production environments, consider integrating with AWS SSO for centralized identity management, enforce MFA on IAM roles used for cluster access, and rotate access keys on a regular schedule. Run periodic kubectl auth can-i checks across all mapped users to verify that permissions have not drifted from your intended policy.
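The periodic kubectl auth can-i sweep mentioned above is easy to script. A minimal sketch – the usernames and expected answers below are examples that should be replaced with your own mappings, and it must be run with cluster-admin credentials since impersonation requires the impersonate permission:

```shell
#!/usr/bin/env bash
# Spot-check that mapped users still have exactly the permissions we intend.
set -u

check() {  # check <user> <expected yes|no> <can-i args...>
  local user=$1 expected=$2; shift 2
  local result
  result=$(kubectl auth can-i "$@" --as "$user" 2>/dev/null)
  if [ "$result" = "$expected" ]; then
    echo "OK    $user: can-i $* -> $result"
  else
    echo "DRIFT $user: can-i $* -> $result (expected $expected)"
  fi
}

check dev-john yes create deployments -n dev-apps
check dev-john no  delete nodes
check dev-john no  list secrets -n kube-system
```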
