Managing Kubernetes access for teams that already use Active Directory (AD) or LDAP is a common challenge. Instead of maintaining separate user accounts in Kubernetes, you can centralize authentication through your existing directory service. This eliminates credential sprawl and gives you group-based access control that maps directly to your AD organizational structure.
This guide walks through setting up AD/LDAP authentication for kubectl using Dex as an OIDC identity provider. We cover deploying Dex with an LDAP/AD connector, configuring the Kubernetes API server for OIDC, setting up kubectl credentials, creating RBAC policies tied to AD groups, and automating token refresh with the kubelogin plugin.
Prerequisites
Before starting, confirm you have the following in place:
- A running Kubernetes cluster (v1.28+) with admin access to modify API server flags
- An Active Directory or LDAP server (Windows AD DS, FreeIPA, or OpenLDAP) accessible from the cluster network
- A service account in AD/LDAP with read access for user and group lookups
- A domain name or IP for the Dex server with a valid TLS certificate (self-signed works for testing)
- `kubectl` and `helm` installed on your workstation
- Ports 636 (LDAPS) or 389 (LDAP) open between the cluster and your directory server
Step 1: Choose an Authentication Method
Kubernetes does not natively support LDAP or Active Directory for authentication. The API server supports several external authentication mechanisms, and for AD/LDAP integration, three approaches are practical:
| Method | How it Works | Best For |
|---|---|---|
| OIDC with Dex | Dex acts as an OIDC provider that connects to AD/LDAP. The API server validates OIDC tokens issued by Dex. | Production clusters – standard, well-supported, scalable |
| Webhook Token Authentication | A custom webhook service validates tokens against AD/LDAP on every request. | Custom setups where OIDC is not viable |
| LDAP Proxy (e.g., Guard) | A proxy sits in front of the API server and authenticates against LDAP before forwarding requests. | Legacy clusters or quick setups |
The OIDC approach with Dex is the recommended path. It is the most widely adopted pattern, supported directly by the Kubernetes OIDC authentication spec, and decouples the identity provider from the API server. This guide follows the Dex OIDC method.
Step 2: Deploy Dex as an OIDC Provider
Dex is a lightweight OIDC identity provider that supports multiple backend connectors, including LDAP and Active Directory. We deploy it inside the Kubernetes cluster using Helm. The current stable release is Dex v2.45.1.
Add the Dex Helm repository:
helm repo add dex https://charts.dexidp.io
helm repo update
Create a namespace for Dex:
kubectl create namespace dex
Create a TLS secret for the Dex server. If you already have a certificate from your CA, use that. For testing, generate a self-signed certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /tmp/dex-tls.key \
-out /tmp/dex-tls.crt \
-subj "/CN=dex.example.com"
kubectl create secret tls dex-tls \
--cert=/tmp/dex-tls.crt \
--key=/tmp/dex-tls.key \
-n dex
Create a Helm values file for the Dex deployment. Replace dex.example.com with your actual Dex hostname:
vi dex-values.yaml
Add the following configuration:
config:
issuer: https://dex.example.com:5556/dex
storage:
type: kubernetes
config:
inCluster: true
web:
https: 0.0.0.0:5556
tlsCert: /etc/dex/tls/tls.crt
tlsKey: /etc/dex/tls/tls.key
staticClients:
- id: kubernetes
name: Kubernetes
secret: ZXhhbXBsZS1hcHAtc2VjcmV0 # Change this to a strong secret
redirectURIs:
- http://localhost:8000
connectors: [] # We configure this in the next step
volumes:
- name: tls
secret:
secretName: dex-tls
volumeMounts:
- name: tls
mountPath: /etc/dex/tls
service:
type: ClusterIP
ports:
https:
port: 5556
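The `secret` under `staticClients` above is only a placeholder (the base64 of `example-app-secret`). Before deploying, generate a strong random value, for example with Python's standard `secrets` module:

```python
# Generate a strong, URL-safe random client secret for Dex's staticClients entry.
import secrets

client_secret = secrets.token_urlsafe(32)  # 32 random bytes, base64url-encoded
print(client_secret)
```

Paste the generated value into `dex-values.yaml`; the same value must be used later in the kubectl/kubelogin client configuration.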
Deploy Dex with Helm:
helm install dex dex/dex -n dex -f dex-values.yaml
Verify that the Dex pod is running:
kubectl get pods -n dex
The output should show the Dex pod in Running state:
NAME READY STATUS RESTARTS AGE
dex-7b8c4f9d6c-xk2pm 1/1 Running 0 45s
Step 3: Configure Dex with LDAP/AD Connector
The LDAP connector tells Dex how to find users and groups in your Active Directory. Update the dex-values.yaml file to add the connector configuration. The values below map to a typical Windows AD structure – adjust the base DNs, filter, and attribute names to match your directory.
Open the values file:
vi dex-values.yaml
Replace the empty connectors: [] line with the following LDAP connector block:
connectors:
- type: ldap
id: activedirectory
name: Active Directory
config:
# AD server - use port 636 for LDAPS
host: ad.example.com:636
# Skip certificate verification only for testing
# insecureSkipVerify: true
# Path to the CA certificate that signed the AD server cert
rootCA: /etc/dex/ad-ca.crt
# Service account for LDAP lookups
bindDN: CN=svc-dex,OU=Service Accounts,DC=example,DC=com
bindPW: YourServiceAccountPassword
# User search configuration
userSearch:
baseDN: OU=Users,DC=example,DC=com
filter: "(objectClass=person)"
username: sAMAccountName
idAttr: sAMAccountName
emailAttr: mail
nameAttr: displayName
preferredUsernameAttr: sAMAccountName
# Group search configuration
groupSearch:
baseDN: OU=Groups,DC=example,DC=com
filter: "(objectClass=group)"
userMatchers:
- userAttr: DN
groupAttr: member
nameAttr: cn
Key fields to customize for your environment:
- host – Your AD domain controller hostname and port. Use 636 for LDAPS (encrypted) or 389 for plain LDAP
- bindDN / bindPW – A service account that has read access to search users and groups in AD
- userSearch.baseDN – The OU where your user accounts live
- userSearch.username – For Windows AD, use `sAMAccountName`. For OpenLDAP, use `uid`
- groupSearch.baseDN – The OU containing the groups you want to map to Kubernetes RBAC roles
- groupSearch.userMatchers – Maps user attributes to group membership. For AD, `DN` matched against `member` is the standard pattern
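When Dex looks up a user, it effectively ANDs `userSearch.filter` with an equality match on the `username` attribute. A rough Python sketch of that composition (the escaping rules follow RFC 4515; the function names are illustrative, not Dex's actual code):

```python
# Sketch: how an LDAP user-lookup filter is composed from the connector config.
# The configured filter is ANDed with (usernameAttr=<login name>); characters
# that are special in LDAP filters must be escaped per RFC 4515.

def escape_ldap_value(value: str) -> str:
    """Escape characters that are special in LDAP filter values."""
    escapes = {'\\': r'\5c', '*': r'\2a', '(': r'\28', ')': r'\29', '\x00': r'\00'}
    return ''.join(escapes.get(ch, ch) for ch in value)

def user_filter(base_filter: str, username_attr: str, login: str) -> str:
    """AND the configured filter with an equality match on the username attribute."""
    return f"(&{base_filter}({username_attr}={escape_ldap_value(login)}))"

print(user_filter("(objectClass=person)", "sAMAccountName", "jdoe"))
# → (&(objectClass=person)(sAMAccountName=jdoe))
```

This is why a restrictive `userSearch.filter` (for example, excluding disabled accounts) applies to every login attempt automatically.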
If your AD server uses a certificate signed by an internal CA, create a secret for the CA certificate and mount it in the Dex pod:
kubectl create secret generic ad-ca --from-file=ad-ca.crt=/path/to/your/ca.crt -n dex
Add the volume and mount to your dex-values.yaml:
volumes:
- name: tls
secret:
secretName: dex-tls
- name: ad-ca
secret:
secretName: ad-ca
volumeMounts:
- name: tls
mountPath: /etc/dex/tls
- name: ad-ca
mountPath: /etc/dex/ad-ca.crt
subPath: ad-ca.crt
Upgrade the Helm release to apply the connector changes:
helm upgrade dex dex/dex -n dex -f dex-values.yaml
Verify the connector is loaded by checking the Dex logs:
kubectl logs -n dex -l app.kubernetes.io/name=dex
You should see a log line confirming the LDAP connector was registered; the "login successful" line appears only after a user has authenticated through it:
level=info msg="config connector: activedirectory"
level=info msg="login successful: connector \"activedirectory\""
Step 4: Configure kube-apiserver OIDC Flags
The Kubernetes API server needs to know about Dex as an OIDC provider. This is done by adding OIDC flags to the API server configuration. The exact method depends on how your cluster was deployed.
For clusters deployed with kubeadm, edit the API server manifest:
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add these flags under the command section of the API server container:
- --oidc-issuer-url=https://dex.example.com:5556/dex
- --oidc-client-id=kubernetes
- --oidc-ca-file=/etc/kubernetes/pki/dex-ca.crt
- --oidc-username-claim=email
- --oidc-username-prefix=oidc:
- --oidc-groups-claim=groups
- --oidc-groups-prefix=oidc:
Here is what each flag does:
| Flag | Purpose |
|---|---|
| `--oidc-issuer-url` | URL of the Dex OIDC issuer. Must match the issuer field in Dex config exactly |
| `--oidc-client-id` | The client ID registered in Dex’s staticClients |
| `--oidc-ca-file` | CA certificate to verify Dex’s TLS cert. Copy the CA cert to the control plane node |
| `--oidc-username-claim` | Which JWT claim to use as the username. Use email for AD users |
| `--oidc-username-prefix` | Prefix added to usernames to avoid collisions with other auth methods |
| `--oidc-groups-claim` | JWT claim containing group memberships from AD |
| `--oidc-groups-prefix` | Prefix added to group names in RBAC bindings |
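The net effect of the claim and prefix flags can be sketched as a small mapping function (illustrative only, not the API server's actual code):

```python
# Sketch: how the API server derives a Kubernetes identity from the
# verified OIDC token claims, given the flag values configured above.

def identity_from_claims(claims: dict,
                         username_claim="email", username_prefix="oidc:",
                         groups_claim="groups", groups_prefix="oidc:"):
    """Map token claims to a (username, groups) pair used by RBAC."""
    user = username_prefix + claims[username_claim]
    groups = [groups_prefix + g for g in claims.get(groups_claim, [])]
    return user, groups

claims = {"email": "[email protected]", "groups": ["DevOps", "QA-Team"]}
print(identity_from_claims(claims))
# → ('oidc:[email protected]', ['oidc:DevOps', 'oidc:QA-Team'])
```

These prefixed names are exactly what the RBAC bindings in Step 6 must reference.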
Copy the Dex CA certificate to the control plane node so the API server can verify Dex’s TLS certificate:
sudo cp /path/to/dex-ca.crt /etc/kubernetes/pki/dex-ca.crt
After saving the manifest, the kubelet detects the change and automatically restarts the API server. Wait a minute, then verify the API server is running:
kubectl get pods -n kube-system -l component=kube-apiserver
The API server pod should show Running with 1/1 ready:
NAME READY STATUS RESTARTS AGE
kube-apiserver-control-plane-01 1/1 Running 0 30s
If the API server fails to start, check the container logs for OIDC-related errors:
sudo crictl logs $(sudo crictl ps -a --name kube-apiserver -q | head -1)
Step 5: Configure kubectl with OIDC Token
Each developer or operator needs to configure their local kubectl to authenticate through Dex. This involves obtaining an OIDC token from Dex and adding it to the kubeconfig.
Set up a new user entry in your kubeconfig. Note that kubectl’s legacy in-tree oidc auth provider (--auth-provider=oidc) was removed in kubectl v1.26, so on a v1.28+ cluster you must either paste an ID token directly or use an exec credential plugin (we automate token retrieval and refresh with kubelogin in Step 8). For a quick manual test, set an ID token obtained from Dex as a static bearer token:
kubectl config set-credentials oidc-user \
  --token=YOUR_ID_TOKEN
Create a context that uses the OIDC user with your cluster:
kubectl config set-context oidc-context \
--cluster=your-cluster-name \
--user=oidc-user \
--namespace=default
kubectl config use-context oidc-context
At this point, running any kubectl command will use the OIDC token. If the token is expired or missing, the request will fail with an unauthorized error – which is expected until we set up automatic token refresh in Step 8.
Step 6: Create RBAC Roles and Bindings for AD Groups
With OIDC authentication in place, you can now create Kubernetes RBAC policies that reference AD group names. The group names come through the OIDC token’s groups claim, prefixed with oidc: as configured in Step 4.
Create a ClusterRoleBinding that grants the AD “DevOps” group full cluster admin access:
kubectl create clusterrolebinding ad-devops-admin \
--clusterrole=cluster-admin \
--group=oidc:DevOps
For development teams that need access only to specific namespaces, create a namespaced RoleBinding. First, create a namespace for the team:
kubectl create namespace dev-team
Then bind the AD “Developers” group to the built-in edit ClusterRole within that namespace:
kubectl create rolebinding ad-developers-edit \
--clusterrole=edit \
--group=oidc:Developers \
--namespace=dev-team
For read-only access (useful for auditors or QA teams), bind the view ClusterRole:
kubectl create clusterrolebinding ad-qa-viewer \
--clusterrole=view \
--group=oidc:QA-Team
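If you manage RBAC declaratively, the same bindings can be written as manifests and kept in version control. For example, the DevOps binding above is equivalent to this sketch (names match the imperative commands):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ad-devops-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: oidc:DevOps
```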
For more granular control, create a custom ClusterRole. This example grants permissions to manage deployments, services, and configmaps but not secrets or persistent volumes:
vi custom-developer-role.yaml
Add the following role definition:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-developer
rules:
- apiGroups: ["apps"]
resources: ["deployments", "replicasets", "statefulsets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["services", "configmaps", "pods", "pods/log"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
Apply the custom role and bind it to an AD group:
kubectl apply -f custom-developer-role.yaml
kubectl create clusterrolebinding ad-senior-devs \
--clusterrole=custom-developer \
--group=oidc:Senior-Developers
Verify the role bindings are created:
kubectl get clusterrolebindings | grep ad-
The output lists all AD-related bindings:
ad-devops-admin ClusterRole/cluster-admin 12m
ad-qa-viewer ClusterRole/view 8m
ad-senior-devs ClusterRole/custom-developer 2m
Step 7: Test Active Directory Authentication
Before rolling this out to the team, verify the full authentication flow works end to end. Start by testing the Dex OIDC discovery endpoint:
curl -sk https://dex.example.com:5556/dex/.well-known/openid-configuration | python3 -m json.tool
The response should contain the issuer URL and supported endpoints:
{
"issuer": "https://dex.example.com:5556/dex",
"authorization_endpoint": "https://dex.example.com:5556/dex/auth",
"token_endpoint": "https://dex.example.com:5556/dex/token",
"jwks_uri": "https://dex.example.com:5556/dex/keys",
"response_types_supported": ["code"],
"subject_types_supported": ["public"],
"id_token_signing_alg_values_supported": ["RS256"]
}
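Because the API server rejects tokens whose iss claim differs from `--oidc-issuer-url` by even one character, it is worth checking the discovery response programmatically. A small sketch using the response above (parsed offline here; in practice you would fetch the JSON from the endpoint):

```python
# Verify the issuer advertised by Dex matches the API server flag exactly.
import json

discovery = json.loads('{"issuer": "https://dex.example.com:5556/dex"}')
configured = "https://dex.example.com:5556/dex"  # value of --oidc-issuer-url

assert discovery["issuer"] == configured, (
    f"issuer mismatch: {discovery['issuer']!r} != {configured!r}")
print("issuer matches")
```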
Test an LDAP bind using the credentials Dex will use. This confirms network connectivity and the service account works:
ldapsearch -H ldaps://ad.example.com:636 \
-D "CN=svc-dex,OU=Service Accounts,DC=example,DC=com" \
-w 'YourServiceAccountPassword' \
-b "OU=Users,DC=example,DC=com" \
"(sAMAccountName=testuser)" \
sAMAccountName mail memberOf
If the LDAP search returns the user’s attributes, the connection is working. Now test the full login flow with kubelogin (covered in the next step) or by manually requesting a token:
kubectl get pods --user=oidc-user
If authentication is working, you see the pod list. If the token is invalid or expired, you get an error like this:
error: You must be logged in to the server (Unauthorized)
This error is expected until you complete the kubelogin setup in the next step, which handles the browser-based login flow and token retrieval automatically.
Step 8: Install kubelogin Plugin for Token Refresh
OIDC tokens expire – typically after an hour. The kubelogin plugin (kubectl oidc-login) handles the full OIDC login flow automatically. It opens a browser for the initial login, obtains tokens, and refreshes them transparently on subsequent kubectl commands.
Install kubelogin using kubectl krew:
kubectl krew install oidc-login
If you do not have krew installed, you can install kubelogin directly from the GitHub releases. Download the binary for your platform:
curl -LO https://github.com/int128/kubelogin/releases/download/v1.36.0/kubelogin_linux_amd64.zip
unzip kubelogin_linux_amd64.zip
sudo mv kubelogin /usr/local/bin/kubectl-oidc_login
sudo chmod +x /usr/local/bin/kubectl-oidc_login
Now reconfigure the kubectl user to use the kubelogin exec-based credential plugin instead of the static token approach:
kubectl config set-credentials oidc-user \
--exec-api-version=client.authentication.k8s.io/v1beta1 \
--exec-command=kubectl \
--exec-arg=oidc-login \
--exec-arg=get-token \
--exec-arg=--oidc-issuer-url=https://dex.example.com:5556/dex \
--exec-arg=--oidc-client-id=kubernetes \
--exec-arg=--oidc-client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0
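For reference, the command above writes a user stanza like the following into your kubeconfig (a sketch; kubectl may add a few extra fields, and the issuer, client ID, and secret must match your Dex config):

```yaml
users:
- name: oidc-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://dex.example.com:5556/dex
      - --oidc-client-id=kubernetes
      - --oidc-client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0
```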
Test the login flow. The first run opens your default browser to the Dex login page where you enter your AD credentials:
kubectl get pods --user=oidc-user
After authenticating in the browser, kubelogin receives the token and kubectl completes the request. The token is cached locally, and subsequent kubectl commands use the cached token until it expires. When it expires, kubelogin uses the refresh token to get a new one without requiring another browser login.
Verify the token contents to confirm the AD groups are included:
kubectl oidc-login get-token \
--oidc-issuer-url=https://dex.example.com:5556/dex \
--oidc-client-id=kubernetes \
--oidc-client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0 2>/dev/null \
| python3 -c "
import sys, json, base64
token = json.load(sys.stdin)['status']['token']
payload = token.split('.')[1]
payload += '=' * (-len(payload) % 4)
claims = json.loads(base64.urlsafe_b64decode(payload))
print(json.dumps(claims, indent=2))
"
The decoded token should contain your AD groups in the groups claim:
{
"iss": "https://dex.example.com:5556/dex",
"sub": "CgR0ZXN0EgRsZGFw",
"aud": "kubernetes",
"exp": 1711234567,
"email": "[email protected]",
"name": "John Doe",
"groups": [
"DevOps",
"Senior-Developers"
]
}
Step 9: Troubleshoot AD Authentication Issues
When things go wrong with the OIDC-LDAP authentication chain, the issue usually falls into one of a few categories. Here are the most common problems and how to resolve them.
Dex Cannot Connect to AD/LDAP
Check the Dex pod logs for LDAP connection errors:
kubectl logs -n dex -l app.kubernetes.io/name=dex --tail=50
Common causes:
- Connection refused – Firewall blocking port 636/389 between the cluster and AD server. Verify with `kubectl exec -n dex deployment/dex -- nc -zv ad.example.com 636`
- Certificate errors – The AD server certificate is not trusted. Set `rootCA` in the connector config to point to your AD CA certificate, or temporarily set `insecureSkipVerify: true` for testing
- Invalid credentials – The bindDN or bindPW is wrong. Test the bind credentials directly with `ldapsearch` from a pod in the cluster
API Server Rejects OIDC Tokens
Check the API server logs for OIDC validation errors:
kubectl logs -n kube-system kube-apiserver-control-plane-01 | grep -i oidc
Common causes:
- Issuer mismatch – The `--oidc-issuer-url` flag must exactly match the `issuer` field in the Dex config, including the protocol, port, and path
- Client ID mismatch – The `--oidc-client-id` must match the `id` field in Dex’s `staticClients`
- CA certificate missing – The file specified in `--oidc-ca-file` must exist on the control plane node and contain the correct CA
Groups Not Appearing in Token
If the OIDC token does not contain the groups claim, the issue is in the Dex LDAP group search configuration. Verify that:
- The `groupSearch.baseDN` covers the OU where your AD groups are located
- The `groupSearch.filter` matches the group object class in your directory. For Windows AD, use `(objectClass=group)`. For OpenLDAP, use `(objectClass=groupOfNames)`
- The `userMatchers` correctly map the user attribute to the group membership attribute. For AD, use `userAttr: DN` and `groupAttr: member`
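Conceptually, the `userMatchers` rule means: a group contains a user when the group's `member` attribute lists that user's DN. A toy Python model of the matching (illustrative only, not Dex's implementation):

```python
# Toy model of Dex's groupSearch userMatchers with userAttr: DN / groupAttr: member.

def groups_for_user(user_dn: str, group_entries: list) -> list:
    """Return the cn of every group whose 'member' values include the user's DN."""
    return [g["cn"] for g in group_entries
            if user_dn in g.get("member", [])]

groups = [
    {"cn": "DevOps",
     "member": ["CN=testuser,OU=Users,DC=example,DC=com"]},
    {"cn": "QA-Team",
     "member": ["CN=other,OU=Users,DC=example,DC=com"]},
]
print(groups_for_user("CN=testuser,OU=Users,DC=example,DC=com", groups))
# → ['DevOps']
```

If this mental model and your directory data disagree (for example, nested groups, which plain `member` matching does not traverse), the token's groups claim will be missing entries.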
Test the group search directly with ldapsearch to confirm AD returns groups for your test user:
ldapsearch -H ldaps://ad.example.com:636 \
-D "CN=svc-dex,OU=Service Accounts,DC=example,DC=com" \
-w 'YourServiceAccountPassword' \
-b "OU=Groups,DC=example,DC=com" \
"(&(objectClass=group)(member=CN=testuser,OU=Users,DC=example,DC=com))" \
cn
RBAC Denies Access Despite Valid Token
If the token is valid but kubectl returns forbidden, the RBAC binding does not match the token claims. Check these:
- Group names in RBAC bindings must include the prefix. If you set `--oidc-groups-prefix=oidc:`, the RBAC group must be `oidc:DevOps`, not just `DevOps`
- Username in RBAC bindings must include the prefix. With `--oidc-username-prefix=oidc:`, the user is `oidc:[email protected]`
- Verify what permissions a user actually has with `kubectl auth can-i --list --as=oidc:[email protected]`
Use the auth can-i command to debug specific permission checks:
kubectl auth can-i get pods --as=oidc:[email protected] --as-group=oidc:DevOps -n default
This command returns yes or no and tells you immediately whether the RBAC rules grant the expected access.
Conclusion
You now have Kubernetes kubectl authentication running through Active Directory via Dex OIDC. Users log in with their AD credentials, group memberships flow through to RBAC policies, and kubelogin handles token lifecycle automatically. When someone leaves the organization and their AD account is disabled, their Kubernetes access is revoked at the next token expiration – no manual cleanup needed.
For production hardening, run Dex behind an ingress controller with TLS termination, enable high availability with multiple Dex replicas, use a persistent storage backend (PostgreSQL or MySQL) instead of Kubernetes CRDs for Dex state, and set up monitoring on the Dex /healthz and /metrics endpoints. Consider adding audit logging to track who authenticates and what actions they take.