Every pod in your cluster runs with a ServiceAccount. If you haven’t configured RBAC, that ServiceAccount can do anything, and so can anyone who compromises it. A single misconfigured permission can let an attacker pivot from one breached container to full cluster admin. Role-Based Access Control (RBAC) is how you prevent that.
This guide walks through Kubernetes RBAC from the ground up: Roles, ClusterRoles, Bindings, ServiceAccounts, and real permission tests. We’ll build a multi-tenant namespace setup, create scoped permissions for developers and monitoring, audit what each identity can actually do, and lock down the defaults that ship wide open. If you’re running a production Kubernetes cluster, RBAC is not optional.
Tested April 2026 | Kubernetes 1.35.3, Ubuntu 24.04.4 LTS
RBAC Building Blocks
Before touching any YAML, understand the five objects that make up the RBAC system. Everything in Kubernetes RBAC is built from these primitives.
Role defines a set of permissions (verbs on resources) within a single namespace. It cannot grant access outside its namespace. Think of it as “what can be done, and where.”
ClusterRole works like a Role but applies cluster-wide. Use it for non-namespaced resources (nodes, persistent volumes, namespaces themselves) or when you want a reusable permission template that gets bound into multiple namespaces.
RoleBinding attaches a Role (or ClusterRole) to a user, group, or ServiceAccount within a specific namespace. The binding is what actually grants the permission. A Role sitting unbound does nothing.
ClusterRoleBinding attaches a ClusterRole to a subject across all namespaces. This is powerful and dangerous. A ClusterRoleBinding to cluster-admin gives full control over everything in the cluster.
ServiceAccount is the identity that pods use to authenticate to the Kubernetes API. Every namespace gets a default ServiceAccount automatically. Unless you explicitly assign a different one, every pod in that namespace uses it. The official ServiceAccount documentation covers the full lifecycle.
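As a sketch of how a pod opts into a specific identity, the pod spec references the ServiceAccount by name (app-reader and demo-app here are illustrative names, not objects created in this guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  # Illustrative name - use a ServiceAccount you created for this workload.
  # If this field is omitted, the pod silently runs as "default".
  serviceAccountName: app-reader
  containers:
  - name: app
    image: nginx:1.27
```

Assigning a dedicated ServiceAccount per workload is what makes the rest of this guide's permission scoping possible.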
Prerequisites
- A running Kubernetes cluster (v1.29+). This guide was tested on Kubernetes 1.35.3
- kubectl configured with cluster-admin access for initial setup
- Familiarity with kubectl basics and context management
- Two or more namespaces to demonstrate isolation (we’ll create them below)
Create Namespaces for Multi-Tenancy
RBAC is most useful in multi-tenant clusters where different teams need isolated environments. Create two namespaces to simulate this:
kubectl create namespace team-alpha
kubectl create namespace team-beta
Confirm both namespaces exist:
kubectl get namespaces team-alpha team-beta
You should see both with Active status:
NAME         STATUS   AGE
team-alpha   Active   5s
team-beta    Active   4s
Each team gets its own namespace. The goal is to let developers in team-alpha work freely in their namespace while being completely locked out of team-beta. That boundary is enforced entirely through RBAC.
Create a Namespace-Scoped Role
A Role defines what actions are allowed on which resources. This developer role grants permissions that a typical application developer needs: managing pods, deployments, services, configmaps, and secrets, but not cluster-level resources like nodes or persistent volumes.
Create a file called developer-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-alpha
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/exec", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
  # Note: no "delete" verb on secrets - deliberate
Apply the Role:
kubectl apply -f developer-role.yaml
The output confirms creation:
role.rbac.authorization.k8s.io/developer created
Notice what’s missing from the verbs list: delete. Developers can create and update secrets but cannot delete them. This is intentional. In production, accidental secret deletion can take down entire applications. The delete permission should be reserved for namespace admins or handled through a GitOps pipeline.
Create a ServiceAccount and RoleBinding
A Role on its own grants nothing. You need a subject (who gets the permission) and a RoleBinding (the link between them). Create a ServiceAccount for the team-alpha developer:
kubectl create serviceaccount dev-alpha -n team-alpha
The ServiceAccount is created in the team-alpha namespace:
serviceaccount/dev-alpha created
Now bind the developer Role to this ServiceAccount:
kubectl create rolebinding dev-alpha-binding \
  --role=developer \
  --serviceaccount=team-alpha:dev-alpha \
  -n team-alpha
Verify the binding exists and references the correct Role and subject:
kubectl get rolebinding dev-alpha-binding -n team-alpha -o yaml
The output should show roleRef pointing to the developer Role and subjects listing the dev-alpha ServiceAccount. The chain is now complete: ServiceAccount (identity) is bound to a Role (permissions) within a namespace (scope).
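For GitOps workflows, the same binding can be expressed declaratively; this manifest should be equivalent to the kubectl create rolebinding command above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-alpha-binding
  namespace: team-alpha
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer
subjects:
- kind: ServiceAccount
  name: dev-alpha
  namespace: team-alpha
```

One design detail worth knowing: roleRef is immutable after creation, so repointing a binding at a different Role means deleting and recreating it.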
Test Permissions with kubectl auth can-i
The kubectl auth can-i command lets you verify permissions without actually performing the action. This is essential for validating RBAC configurations before handing credentials to a team. Always test both what should be allowed and what should be denied.
Pod creation in own namespace vs. other namespace
Check whether dev-alpha can create pods in team-alpha:
kubectl auth can-i create pods \
  --as=system:serviceaccount:team-alpha:dev-alpha \
  -n team-alpha
The answer is clear:
yes
Now test the same action in team-beta, where dev-alpha has no binding:
kubectl auth can-i create pods \
  --as=system:serviceaccount:team-alpha:dev-alpha \
  -n team-beta
Denied, as expected:
no
This is the namespace isolation boundary working. The dev-alpha ServiceAccount exists in team-alpha, is bound to a Role in team-alpha, and has zero permissions anywhere else in the cluster.
Secret access: read vs. delete
The developer Role grants get/list on secrets but not delete. Verify both:
kubectl auth can-i get secrets \
  --as=system:serviceaccount:team-alpha:dev-alpha \
  -n team-alpha
Read access is permitted:
yes
But deletion is blocked:
kubectl auth can-i delete secrets \
  --as=system:serviceaccount:team-alpha:dev-alpha \
  -n team-alpha
Verify the result:
no
Pod exec and node access
Developers often need kubectl exec for debugging. We included pods/exec in the Role. Confirm it works:
kubectl auth can-i create pods/exec \
  --as=system:serviceaccount:team-alpha:dev-alpha \
  -n team-alpha
Verify the result:
yes
Node access, on the other hand, should be denied. Nodes are cluster-scoped resources, and our Role is namespace-scoped:
kubectl auth can-i get nodes \
  --as=system:serviceaccount:team-alpha:dev-alpha
Verify the result:
no
All six tests confirm the Role is working exactly as designed. This kind of systematic verification should be part of every RBAC rollout.
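The six checks above can be folded into a repeatable smoke test and run after every RBAC change. This is a sketch; the check helper is a name we made up, and the script only issues kubectl calls when the binary is on PATH:

```shell
#!/usr/bin/env bash
# RBAC smoke test for the dev-alpha ServiceAccount from this guide.
SA="system:serviceaccount:team-alpha:dev-alpha"

# check <verb> <resource> <namespace|""> <expected yes/no>
check() {
  local verb=$1 resource=$2 ns=$3 expected=$4 got
  if [ -n "$ns" ]; then
    got=$(kubectl auth can-i "$verb" "$resource" --as="$SA" -n "$ns" 2>/dev/null)
  else
    got=$(kubectl auth can-i "$verb" "$resource" --as="$SA" 2>/dev/null)
  fi
  if [ "$got" = "$expected" ]; then
    echo "PASS: $verb $resource ${ns:-cluster-wide}"
  else
    echo "FAIL: $verb $resource ${ns:-cluster-wide} (got '$got', want '$expected')"
  fi
}

# Skip quietly when kubectl is not available (e.g. outside the cluster host)
if command -v kubectl >/dev/null 2>&1; then
  check create pods team-alpha yes
  check create pods team-beta no
  check get secrets team-alpha yes
  check delete secrets team-alpha no
  check create pods/exec team-alpha yes
  check get nodes "" no
fi
```

Any FAIL line means the Role or binding drifted from what this guide set up.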
Create a ClusterRole for Read-Only Monitoring
Monitoring tools like Prometheus and Grafana need to read resources across all namespaces. A namespace-scoped Role would require bindings in every namespace, which doesn’t scale. ClusterRoles solve this.
Create a monitoring ClusterRole that allows read-only access to pods, services, endpoints, and nodes across the entire cluster:
cat > monitoring-clusterrole.yaml << 'ROLEEOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-readonly
rules:
- apiGroups: [""]
  resources: ["pods", "services", "endpoints", "nodes", "namespaces"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "daemonsets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list"]
ROLEEOF
kubectl apply -f monitoring-clusterrole.yaml
The ClusterRole is created:
clusterrole.rbac.authorization.k8s.io/monitoring-readonly created
Create a ServiceAccount in a dedicated monitoring namespace and bind the ClusterRole to it:
kubectl create namespace monitoring
kubectl create serviceaccount monitoring -n monitoring
kubectl create clusterrolebinding monitoring-binding \
  --clusterrole=monitoring-readonly \
  --serviceaccount=monitoring:monitoring
Three objects created in sequence: the namespace, the ServiceAccount, and the ClusterRoleBinding that ties them together.
Test ClusterRole Permissions
Verify the monitoring ServiceAccount can list pods across all namespaces:
kubectl auth can-i list pods \
  --as=system:serviceaccount:monitoring:monitoring \
  --all-namespaces
Cluster-wide pod listing is allowed:
yes
But the monitoring account should never be able to modify or delete resources. Test delete on pods:
kubectl auth can-i delete pods \
  --as=system:serviceaccount:monitoring:monitoring \
  -n team-alpha
Verify the result:
no
Node access (a cluster-scoped resource) should work because we included nodes in the ClusterRole:
kubectl auth can-i get nodes \
  --as=system:serviceaccount:monitoring:monitoring
Verify the result:
yes
The monitoring ServiceAccount can observe the entire cluster but cannot change anything. This is exactly the principle of least privilege applied to observability.
Common RBAC Patterns
Most clusters need a handful of reusable RBAC patterns. Here are four that cover the majority of real-world use cases. Copy and adapt them rather than building from scratch every time.
Read-Only Viewer
For stakeholders, auditors, or dashboards that need visibility without any write access:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: readonly-viewer
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
Bind this with a ClusterRoleBinding for cluster-wide read access, or a RoleBinding to limit it to specific namespaces.
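The second option, reusing the ClusterRole but scoping it to one namespace, looks like this (viewer-alpha and the auditor ServiceAccount are illustrative names):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: viewer-alpha
  namespace: team-alpha    # permissions apply in this namespace only
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole        # reuses the cluster-wide permission template
  name: readonly-viewer
subjects:
- kind: ServiceAccount
  name: auditor            # illustrative subject
  namespace: team-alpha
```

The namespace of the RoleBinding, not the kind of the referenced role, determines the scope: a ClusterRole bound via a RoleBinding grants nothing outside that binding's namespace.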
Namespace Admin
Full control within a namespace, but zero access outside it. This works well for team leads who manage their own namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-alpha
  name: namespace-admin
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io", "policy"]
  resources: ["*"]
  verbs: ["*"]
This looks broad, and within its namespace it is. The key constraint is that it's a Role, not a ClusterRole, so the blast radius is contained. The namespace admin can delete deployments in team-alpha but cannot touch team-beta or any cluster-scoped resources.
CI/CD Pipeline ServiceAccount
CI/CD systems need to create and update deployments but should never manage RBAC or access secrets they don't own:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-alpha
  name: cicd-deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "configmaps"]
  verbs: ["get", "list", "create", "update", "patch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "create", "update", "patch"]
No secret access, no delete permission, no RBAC manipulation. The pipeline can deploy new versions but cannot escalate its own privileges or clean up resources it shouldn't touch.
Node Viewer (for infrastructure teams)
Infrastructure engineers need to check node status, capacity, and conditions without full cluster-admin:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-viewer
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/status"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["metrics.k8s.io"]
  resources: ["nodes"]
  verbs: ["get", "list"]
This gives visibility into node health and metrics without the ability to cordon, drain, or delete nodes.
Audit Existing Permissions
On a running cluster, you need to know what permissions are actually in effect. The kubectl auth can-i --list command shows every allowed resource/verb combination for a given identity.
List all permissions for the dev-alpha ServiceAccount in team-alpha:
kubectl auth can-i --list \
  --as=system:serviceaccount:team-alpha:dev-alpha \
  -n team-alpha
The output shows all 15 allowed resource/verb combinations:
Resources                                       Non-Resource URLs   Resource Names   Verbs
pods                                            []                  []               [get list watch create update patch]
pods/log                                        []                  []               [get list watch create update patch]
pods/exec                                       []                  []               [get list watch create update patch]
services                                        []                  []               [get list watch create update patch]
configmaps                                      []                  []               [get list watch create update patch]
secrets                                         []                  []               [get list watch create update patch]
deployments.apps                                []                  []               [get list watch create update patch]
replicasets.apps                                []                  []               [get list watch create update patch]
statefulsets.apps                               []                  []               [get list watch create update patch]
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
selfsubjectreviews.authentication.k8s.io        []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/healthz]          []               [get]
This is your audit trail. Every verb on every resource is explicitly listed. Notice the selfsubjectaccessreviews and non-resource URL entries at the bottom, which are default Kubernetes permissions that allow any authenticated identity to check its own access and hit the API discovery endpoints.
To get a broader view of all RBAC objects in the cluster, count the Roles, ClusterRoles, and their bindings:
echo "Roles: $(kubectl get roles -A --no-headers | wc -l)"
echo "ClusterRoles: $(kubectl get clusterroles --no-headers | grep -cv 'system:')"
echo "RoleBindings: $(kubectl get rolebindings -A --no-headers | wc -l)"
echo "ClusterRoleBindings: $(kubectl get clusterrolebindings --no-headers | grep -cv 'system:')"
On our test cluster with the configurations from this guide:
Roles: 14
ClusterRoles: 17
RoleBindings: 15
ClusterRoleBindings: 15
The grep -cv 'system:' filter excludes the system:-prefixed built-in ClusterRoles and bindings; note that a few unprefixed built-ins (cluster-admin, admin, edit, view) still appear alongside your custom objects. In production, run this audit regularly. RBAC configurations accumulate over time, and stale bindings for departed team members or decommissioned services are a common source of privilege creep.
For a more targeted audit, check which subjects have the dangerous cluster-admin ClusterRole:
kubectl get clusterrolebindings -o json | \
python3 -c "
import json, sys
data = json.load(sys.stdin)
for item in data['items']:
    ref = item.get('roleRef', {})
    if ref.get('name') == 'cluster-admin':
        for s in item.get('subjects', []):
            print(f\"{s.get('kind')}: {s.get('name')} (binding: {item['metadata']['name']})\")"
Any entry in that output is a full cluster administrator. Keep this list short. Every cluster-admin binding that isn't strictly necessary is a risk.
Production Hardening
Getting RBAC working is the first step. Hardening it for production is where the real security value lies. These recommendations come from real cluster incidents and audit findings.
Disable Default ServiceAccount Token Automounting
By default, every pod gets the namespace's default ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token. Most application pods never need to talk to the Kubernetes API. Mounting a token they don't use just gives attackers a free credential if they breach the container.
Disable automounting at the ServiceAccount level:
kubectl patch serviceaccount default -n team-alpha \
  -p '{"automountServiceAccountToken": false}'
kubectl patch serviceaccount default -n team-beta \
  -p '{"automountServiceAccountToken": false}'
Both ServiceAccounts are patched:
serviceaccount/default patched
serviceaccount/default patched
For pods that genuinely need API access (controllers, operators, monitoring agents), explicitly set automountServiceAccountToken: true in the pod spec and assign a purpose-built ServiceAccount with minimal permissions. The Kubernetes RBAC documentation recommends this as a baseline security measure.
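The same control also exists at the pod level, where it overrides the ServiceAccount's setting. A sketch for a workload that does need API access (the metrics-agent name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-agent                 # illustrative workload
  namespace: monitoring
spec:
  serviceAccountName: monitoring      # purpose-built, minimally-privileged SA
  # The pod-level field wins over the ServiceAccount's automount flag
  automountServiceAccountToken: true
  containers:
  - name: agent
    image: example.com/metrics-agent:latest   # placeholder image
```

With the default ServiceAccounts patched to false, API access becomes opt-in per workload instead of a cluster-wide default.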
Enforce Least Privilege
The biggest RBAC mistake in production clusters is granting the wildcard "*" for all verbs or all resources when only a subset is needed. Every Role and ClusterRole should list explicit verbs and explicit resources.
Bad practice:
# DON'T do this - grants everything on everything in the namespace
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
Better approach: start with zero permissions and add only what the workload actually needs. Run the application, check what API calls it makes (via audit logs), and grant exactly those. This takes more effort up front but prevents the "everything is cluster-admin" antipattern that plagues most clusters.
Avoid granting the escalate, bind, or impersonate verbs unless absolutely necessary. These verbs enable privilege escalation: a subject with bind permission on ClusterRoles can bind itself to cluster-admin, gaining permissions it was never directly granted.
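A rule worth flagging in review might look like the sketch below; the combination of bind on ClusterRoles and create on bindings is what makes it dangerous:

```yaml
# Dangerous in combination: "bind" on ClusterRoles plus "create" on
# ClusterRoleBindings lets a subject grant permissions it does not hold.
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["bind"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterrolebindings"]
  verbs: ["create"]
```

If a workload legitimately needs to manage bindings, restrict the bind rule with resourceNames to the specific roles it may hand out.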
Enable Audit Logging
RBAC controls who can do what, but without audit logging you can't see who actually did what. Enable a Kubernetes audit policy to log all RBAC-sensitive events. On the control plane node, create the policy file (as root):
cat > /etc/kubernetes/audit-policy.yaml << 'POLICYEOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: Metadata
  verbs: ["delete", "deletecollection"]
- level: None
  resources:
  - group: ""
    resources: ["events"]
- level: Metadata
  omitStages:
  - RequestReceived
POLICYEOF
Add the audit flags to your kube-apiserver manifest (typically /etc/kubernetes/manifests/kube-apiserver.yaml):
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=100
Because kube-apiserver typically runs as a static pod, the kubelet restarts it automatically once the manifest changes. Make sure the policy file and log directory are also mounted into the pod as hostPath volumes, or the API server will fail to start. Audit logs capture every RBAC modification, secret access, and delete operation. Forward them to your SIEM or centralized logging. Without audit logs, you're flying blind on who is doing what in the cluster.
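On a kubeadm-style cluster, the required mounts in the kube-apiserver manifest would look roughly like this (paths match the flags above; verify against your own layout):

```yaml
# Added under spec.containers[0].volumeMounts:
- mountPath: /etc/kubernetes/audit-policy.yaml
  name: audit-policy
  readOnly: true
- mountPath: /var/log/kubernetes
  name: audit-logs
# Added under spec.volumes:
- name: audit-policy
  hostPath:
    path: /etc/kubernetes/audit-policy.yaml
    type: File
- name: audit-logs
  hostPath:
    path: /var/log/kubernetes
    type: DirectoryOrCreate
```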
Use Short-Lived Tokens
Kubernetes 1.22+ issues bound ServiceAccount tokens that expire automatically. If you're still relying on legacy non-expiring tokens (the deprecated method of manually creating a kubernetes.io/service-account-token Secret), migrate to bound tokens requested through the TokenRequest API or kubectl create token.
Generate a token that expires in one hour:
kubectl create token dev-alpha -n team-alpha --duration=1h
The returned JWT is valid for 60 minutes and then becomes useless. For CI/CD pipelines, generate a fresh token at the start of each pipeline run rather than storing a long-lived credential. This limits the window of exposure if a token leaks.
Network Policies Alongside RBAC
RBAC controls API access, but it doesn't control pod-to-pod network traffic. A compromised pod with no RBAC permissions can still talk to every other pod in the cluster over the network. Pair RBAC with Kubernetes network policies to enforce both API-level and network-level isolation. RBAC answers "can this identity call the API" while network policies answer "can this pod reach that service." You need both.
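A minimal default-deny policy for team-alpha illustrates the network half of that pairing. This is a sketch and assumes your CNI plugin actually enforces NetworkPolicy (not all do):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-alpha
spec:
  podSelector: {}    # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress          # no ingress rules listed, so all inbound traffic is denied
```

From this baseline, add narrow allow rules per service instead of opening the namespace back up.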
Protect etcd
All RBAC data (Roles, Bindings, ServiceAccount tokens) is stored in etcd. If an attacker gets direct access to etcd, RBAC is irrelevant because they can read and modify everything. Ensure etcd is encrypted at rest, requires TLS client certificates for access, and is not exposed on any network that application pods can reach. See our guide on etcd backup and disaster recovery for the operational side.
Regular RBAC Reviews
Schedule monthly RBAC audits. Check for:
- ServiceAccounts that no running pod references (stale identities)
- RoleBindings for users who have left the organization
- ClusterRoleBindings to cluster-admin (should be minimal)
- Roles with wildcard (*) verbs or resources
- ServiceAccounts with automountServiceAccountToken: true that don't need API access
Automate this where possible. A cron job that dumps kubectl auth can-i --list for every ServiceAccount and diffs it against the previous run will catch privilege creep before it becomes a problem.
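A sketch of that diffing idea in Python: parse two saved snapshots of kubectl auth can-i --list output and report verbs that appeared since the last run. The parse_listing helper and the sample snapshots below are ours, not a Kubernetes API, and the parser assumes the Resource Names column is empty:

```python
def parse_listing(text):
    """Parse 'kubectl auth can-i --list' table output into {resource: set(verbs)}."""
    perms = {}
    for line in text.strip().splitlines()[1:]:  # skip the header row
        parts = line.split("[")
        resource = parts[0].strip()             # first column: resource name
        verbs = parts[-1].rstrip("]").split()   # last bracketed group: verbs
        perms.setdefault(resource, set()).update(verbs)
    return perms

def new_permissions(old, new):
    """Return {resource: verbs} present in the new snapshot but not the old."""
    added = {}
    for resource, verbs in new.items():
        extra = verbs - old.get(resource, set())
        if extra:
            added[resource] = sorted(extra)
    return added

# Sample snapshots standing in for yesterday's and today's saved output
yesterday = """Resources  Non-Resource URLs  Resource Names  Verbs
pods  []  []  [get list watch]"""
today = """Resources  Non-Resource URLs  Resource Names  Verbs
pods  []  []  [get list watch delete]
secrets  []  []  [get]"""

diff = new_permissions(parse_listing(yesterday), parse_listing(today))
print(diff)  # {'pods': ['delete'], 'secrets': ['get']}
```

A non-empty diff is exactly the privilege creep you want a human to review before it ships.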
RBAC is not a set-and-forget configuration. Clusters grow, teams change, and permissions accumulate. Treat RBAC like code: version it in Git, review changes in pull requests, and audit regularly. The few hours you invest in proper RBAC will save you from the breach post-mortem where the root cause is "every pod was cluster-admin."