Kubernetes RBAC works, but it gets tedious fast when you’re managing multiple teams across dozens of namespaces. Rancher Projects sit on top of native Kubernetes RBAC and let you group namespaces together, assign permissions at the project level, and enforce policies (network isolation, resource quotas) across all namespaces in that group. This guide walks through the full stack: Kubernetes RBAC primitives, Rancher Projects, custom roles, bindings, and multi-tenant isolation.
We’re working on a 3-node RKE2 HA cluster running Rancher v2.14.0. Everything here was tested with real namespaces, service accounts, and role bindings. If you need a refresher on kubectl basics, the kubectl cheat sheet covers the essentials.
Tested March 2026 | RKE2 v1.35.3, Rancher v2.14.0 on Rocky Linux 10.1, 3-node HA cluster (10.0.1.11, 10.0.1.12, 10.0.1.13)
Kubernetes RBAC Fundamentals
Before touching Rancher’s project layer, you need a solid understanding of the four core RBAC objects in Kubernetes. Everything Rancher does on top ultimately translates back to these primitives.
Role grants permissions within a single namespace. It defines which API resources (pods, services, deployments) a subject can access and what verbs (get, list, create, delete) are allowed. A Role in the dev-team namespace has zero effect in staging.
ClusterRole works the same way but applies cluster-wide. Use ClusterRoles for permissions that span namespaces or for cluster-scoped resources like nodes and persistent volumes.
RoleBinding connects a Role (or ClusterRole) to a user, group, or ServiceAccount within a specific namespace. ClusterRoleBinding does the same thing but across the entire cluster.
ServiceAccount is the identity that pods and automated processes use. Every namespace gets a default ServiceAccount, but you should always create dedicated ones for your workloads. The default account often has more permissions than you’d want a random pod to inherit.
Create Namespaces with Labels
Namespaces are the foundation of tenant isolation in Kubernetes. Create one for each team or environment, and label them so you can target them with network policies and Rancher project assignments later.
Create the dev-team namespace for developers:
kubectl create namespace dev-team
kubectl label namespace dev-team team=developers
Verify the namespace and label:
kubectl get namespace dev-team --show-labels
The output confirms the label is attached:
NAME       STATUS   AGE   LABELS
dev-team   Active   12s   kubernetes.io/metadata.name=dev-team,team=developers
Now create the staging namespace for the QA team:
kubectl create namespace staging
kubectl label namespace staging team=qa
Confirm both namespaces exist:
kubectl get namespaces -l 'team in (developers,qa)'
You should see both namespaces listed with Active status:
NAME       STATUS   AGE
dev-team   Active   45s
staging    Active   18s
Rancher Projects: Grouping Namespaces
This is where Rancher earns its keep. A Rancher Project is a grouping mechanism that bundles multiple namespaces and applies RBAC, resource quotas, and network policies at the group level. Kubernetes itself has no concept of projects. It’s purely a Rancher abstraction stored as custom resources in the management.cattle.io API group.
Why this matters in practice: if you have a development team that needs identical permissions across dev-team, dev-integration, and dev-sandbox namespaces, you assign all three to a single Rancher Project and set RBAC once. Without projects, you’d create separate RoleBindings in each namespace manually.
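To see what projects save you, here is a sketch of that manual alternative: generating one RoleBinding per namespace with a short script. The namespace list and the dev-deployer/dev-user names mirror the examples used later in this guide; since kubectl apply accepts JSON as well as YAML, the output can be piped straight to kubectl apply -f -.

```python
import json

# Namespaces that would belong to a single Rancher Project (illustrative names)
namespaces = ["dev-team", "dev-integration", "dev-sandbox"]

def role_binding(namespace, role="dev-deployer", sa="dev-user"):
    """Build one RoleBinding manifest as a dict (kubectl accepts JSON manifests)."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{sa}-binding", "namespace": namespace},
        "subjects": [{"kind": "ServiceAccount", "name": sa, "namespace": namespace}],
        "roleRef": {
            "kind": "Role",
            "name": role,
            "apiGroup": "rbac.authorization.k8s.io",
        },
    }

# One binding per namespace -- exactly the repetition a Rancher Project removes
manifests = [role_binding(ns) for ns in namespaces]
print(json.dumps({"apiVersion": "v1", "kind": "List", "items": manifests}, indent=2))
```

Pipe the output to `kubectl apply -f -` to create all three bindings, and remember this script has to be re-run for every new namespace — the bookkeeping a project does for you.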
To create a project in the Rancher UI, navigate to your cluster, click Projects/Namespaces, and select Create Project. Name it something descriptive like “Development” or “QA Environment.” Then move your existing namespaces into the project by editing each namespace and selecting the project from the dropdown.
Rancher ships with two default projects on every cluster:
- Default – New namespaces land here unless you specify otherwise
- System – Contains cattle-system, kube-system, ingress-nginx, and other infrastructure namespaces. Don’t put workloads here
You can also create projects via the Rancher API. This is useful for automation and GitOps workflows where you want project creation to be part of your cluster bootstrap:
curl -s -X POST 'https://rancher.example.com/v3/projects' \
  -H 'Authorization: Bearer token-xxxxx:yyyyyyyyyyyy' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "Development",
    "clusterId": "c-m-xxxxxxx",
    "description": "All development namespaces"
  }'
Once the project exists, move namespaces into it by annotating them:
kubectl annotate namespace dev-team field.cattle.io/projectId=c-m-xxxxxxx:p-xxxxx --overwrite
Replace the cluster and project IDs with the values from your Rancher instance. You can find them in the Rancher UI URL when viewing the project, or query the API at /v3/projects.
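If you would rather script the lookup, here is a sketch of parsing a /v3/projects response. A data array whose entries carry id, name, and clusterId matches the general shape of Rancher's v3 (Norman) API, but treat the exact field names as an assumption to verify against your Rancher version; the payload below is canned sample data, not a live call.

```python
import json

# Sample response shaped like a Rancher /v3/projects listing; the field
# names are assumptions -- check them against your Rancher version.
sample = json.loads("""
{
  "data": [
    {"id": "c-m-xxxxxxx:p-abc12", "name": "Development", "clusterId": "c-m-xxxxxxx"},
    {"id": "c-m-xxxxxxx:p-def34", "name": "QA Environment", "clusterId": "c-m-xxxxxxx"}
  ]
}
""")

# Map project name -> the "<clusterId>:<projectId>" value used in the
# field.cattle.io/projectId namespace annotation
project_ids = {p["name"]: p["id"] for p in sample["data"]}
for name, pid in project_ids.items():
    print(f"{name}: {pid}")
```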
Create Custom Roles
The built-in roles (covered later) handle most scenarios, but real-world RBAC usually needs custom roles tailored to your team structure. Here are two patterns that cover the majority of use cases: a read-only ClusterRole and a namespace-scoped deployer Role.
ClusterRole: Read-Only Viewer
This ClusterRole grants read access to pods, services, and deployments across any namespace where it’s bound. Useful for on-call engineers who need visibility without the ability to break anything.
Create a file called viewer-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: viewer-role
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
Apply it:
kubectl apply -f viewer-role.yaml
The ClusterRole is created but does nothing until you bind it to a subject:
kubectl get clusterrole viewer-role -o yaml
Role: Namespace-Scoped Deployer
This Role grants full CRUD on pods, deployments, and services but only within the dev-team namespace. Developers get the power to deploy and manage their own workloads without touching anything else in the cluster.
Create dev-deployer-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-deployer
  namespace: dev-team
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Apply it:
kubectl apply -f dev-deployer-role.yaml
Verify the role was created in the correct namespace:
kubectl get role dev-deployer -n dev-team
You should see:
NAME           CREATED AT
dev-deployer   2026-03-28T14:22:31Z
Bind Roles to Users and ServiceAccounts
Roles sitting in the cluster without bindings are just YAML decoration. You need RoleBindings or ClusterRoleBindings to actually grant permissions to subjects.
First, create a dedicated ServiceAccount for the developer:
kubectl create serviceaccount dev-user -n dev-team
Confirm the ServiceAccount exists:
kubectl get serviceaccount dev-user -n dev-team
Output:
NAME       SECRETS   AGE
dev-user   0         5s
Now bind the dev-deployer Role to the dev-user ServiceAccount. Create dev-user-binding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-binding
  namespace: dev-team
subjects:
- kind: ServiceAccount
  name: dev-user
  namespace: dev-team
roleRef:
  kind: Role
  name: dev-deployer
  apiGroup: rbac.authorization.k8s.io
Apply and verify:
kubectl apply -f dev-user-binding.yaml
kubectl get rolebinding dev-user-binding -n dev-team
To bind the read-only viewer-role ClusterRole across the entire cluster, use a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oncall-viewer-binding
subjects:
- kind: Group
  name: oncall-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: viewer-role
  apiGroup: rbac.authorization.k8s.io
This grants every member of the oncall-team group read-only access to pods, services, and deployments in all namespaces. The group name maps to whatever identity provider you’ve configured in Rancher (Active Directory, LDAP, GitHub, SAML).
Test the Permissions
Always verify that your bindings work as expected. Use kubectl auth can-i to check permissions from the ServiceAccount’s perspective:
kubectl auth can-i create deployments -n dev-team --as=system:serviceaccount:dev-team:dev-user
Expected output: yes. Now test that the same account cannot create deployments in the staging namespace:
kubectl auth can-i create deployments -n staging --as=system:serviceaccount:dev-team:dev-user
Expected output: no. If this returns yes, you have a ClusterRoleBinding somewhere granting broader access than intended. Investigate with:
kubectl get clusterrolebindings -o json | python3 -c "
import sys, json
data = json.load(sys.stdin)
for item in data['items']:
    for s in item.get('subjects', []):
        # Match the full identity: kind, name, and namespace -- a 'dev-user'
        # ServiceAccount in another namespace is a different subject
        if s.get('kind') == 'ServiceAccount' and s.get('name') == 'dev-user' and s.get('namespace') == 'dev-team':
            print(item['metadata']['name'])
"
Multi-Tenant Isolation with Rancher
RBAC controls who can do what, but it doesn’t stop pods in one namespace from talking to pods in another. For proper multi-tenant isolation, you need NetworkPolicies and ResourceQuotas. Rancher lets you set both at the project level, which means they automatically apply to every namespace in the project.
Network Isolation
By default, all pods in a Kubernetes cluster can communicate with each other regardless of namespace. In the Rancher UI, enable project network isolation under Cluster > Projects/Namespaces > (your project) > Edit > Network Isolation. This creates a NetworkPolicy that blocks all ingress traffic from outside the project’s namespaces.
You can also define a NetworkPolicy manually. This one restricts the dev-team namespace to only accept ingress traffic from namespaces labeled team=developers:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-dev-team
  namespace: dev-team
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: developers
Apply it and verify that pods in staging can no longer reach services in dev-team. The namespace labels we set earlier make this selector work.
Resource Quotas via Rancher Projects
Without quotas, one namespace can starve the rest of the cluster. Rancher lets you set resource quotas at the project level with per-namespace defaults. In the Rancher UI, go to your project settings and configure:
- Project Limit – Maximum total resources for all namespaces in the project combined
- Namespace Default Limit – Default quota applied to each new namespace added to the project
A typical configuration for a development project:
Project Limit:
  CPU: 8 cores
  Memory: 16Gi
  Pods: 100

Namespace Default:
  CPU: 2 cores
  Memory: 4Gi
  Pods: 30
Rancher translates these into standard Kubernetes ResourceQuota objects in each namespace. You can verify with:
kubectl get resourcequota -n dev-team
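As an illustration of what that translation amounts to, here is a sketch that builds a standard ResourceQuota manifest from the namespace-default limits above. Rancher generates its own objects with its own naming, so the default-quota name and the limits.cpu/limits.memory keys here are assumptions for illustration, not Rancher's exact output.

```python
import json

# Namespace-default limits from the project settings above
defaults = {"cpu": "2", "memory": "4Gi", "pods": "30"}

def resource_quota(namespace, limits):
    """Build a ResourceQuota manifest roughly equivalent to the namespace default."""
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "default-quota", "namespace": namespace},
        "spec": {
            "hard": {
                "limits.cpu": limits["cpu"],
                "limits.memory": limits["memory"],
                "pods": limits["pods"],
            }
        },
    }

quota = resource_quota("dev-team", defaults)
print(json.dumps(quota, indent=2))
```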
Rancher Built-in Roles
Rancher ships with a set of built-in roles at three levels: global, cluster, and project. Understanding these saves you from recreating what already exists. Here’s the full hierarchy as of Rancher v2.14.0.
Global Roles
These apply across the entire Rancher installation (all clusters). Rancher stores them as GlobalRole custom resources. The key ones:
- Administrator (cattle-globalrole-admin) – Full access to everything. Assign sparingly
- Restricted Admin – Like admin but cannot modify Rancher server settings or manage auth providers
- Standard User – Can create new clusters (if allowed) and access clusters they’re assigned to
- Clusters Create (cattle-globalrole-clusters-create) – Only the ability to provision new clusters
- User Base – Login access only, no cluster permissions. Useful as a starting point
List all global roles in your installation:
kubectl get globalroles.management.cattle.io
Cluster Roles (Rancher level)
These are scoped to a single downstream cluster. Don’t confuse these with Kubernetes ClusterRoles, which are a different object entirely. Rancher cluster roles include:
- Cluster Owner – Full admin of the specific cluster, including RBAC management and node management
- Cluster Member – Can view most cluster resources but cannot modify cluster-level settings. Can create projects and namespaces
Project Roles
The most commonly used tier for day-to-day operations:
- Project Owner – Full control over all resources within the project’s namespaces, including RBAC within the project
- Project Member – Can manage workloads (deployments, pods, services) but cannot modify project-level settings or RBAC
- Read Only – View access to all resources in the project. Perfect for auditors and stakeholders who need visibility
Assign these in the Rancher UI under Cluster > Projects/Namespaces > (project) > Members. You can also create custom roles in Rancher that mix and match permissions from these templates.
Security Audit Checklist
Run through this list quarterly, or after any major RBAC change. These are the items that catch real problems in production clusters.
1. Find all cluster-admin bindings. These are the most dangerous permissions in any cluster:
kubectl get clusterrolebindings -o json | python3 -c "
import sys, json
data = json.load(sys.stdin)
for item in data['items']:
    ref = item.get('roleRef', {})
    if ref.get('name') == 'cluster-admin':
        subjects = [s['name'] for s in item.get('subjects', [])]
        print(f\"{item['metadata']['name']}: {', '.join(subjects)}\")
"
Every entry in this list should be a known, documented account. If you see ServiceAccounts from application namespaces here, that’s a privilege escalation risk.
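You can automate that review by diffing the subjects against an allowlist of documented admin identities. A minimal sketch with hardcoded sample data — in practice, feed it the subjects collected by the snippet above:

```python
# Allowlist of identities documented as cluster admins (illustrative names)
known_admins = {"kube-admin", "ops-team"}

# Sample subjects as they would appear in ClusterRoleBinding objects;
# replace with the parsed output of "kubectl get clusterrolebindings -o json"
found_subjects = [
    {"kind": "Group", "name": "ops-team"},
    {"kind": "ServiceAccount", "name": "dev-user", "namespace": "dev-team"},
]

# Anything not on the allowlist deserves investigation
unexpected = [s for s in found_subjects if s["name"] not in known_admins]
for s in unexpected:
    print(f"Unexpected cluster-admin subject: {s['kind']} {s['name']}")
```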
2. Check for overly permissive wildcards. Roles with * in verbs, resources, or apiGroups grant more than intended:
kubectl get clusterroles -o json | python3 -c "
import sys, json
data = json.load(sys.stdin)
for item in data['items']:
    for rule in item.get('rules', []):
        if '*' in rule.get('verbs', []) or '*' in rule.get('resources', []) or '*' in rule.get('apiGroups', []):
            print(item['metadata']['name'])
            break
"
Rancher’s internal roles (cattle-* prefixed) will show up here. Those are expected. Focus on custom roles you’ve created.
3. Audit unused ServiceAccounts. ServiceAccounts that exist without active workloads are attack surface:
kubectl get serviceaccounts --all-namespaces --no-headers | while read ns sa rest; do
  pods=$(kubectl get pods -n "$ns" --field-selector=spec.serviceAccountName="$sa" --no-headers 2>/dev/null | wc -l)
  if [ "$pods" -eq 0 ] && [ "$sa" != "default" ]; then
    echo "Unused: $ns/$sa"
  fi
done
4. Verify namespace isolation. Test that NetworkPolicies actually block cross-namespace traffic. Spin up a temporary pod and try to curl a service in a different project:
kubectl run test-curl --image=curlimages/curl -n staging --rm -it --restart=Never -- \
  curl -s --max-time 3 http://my-service.dev-team.svc.cluster.local:80
If your NetworkPolicies are working, this should time out. If it returns a response, your isolation has gaps.
5. Review Rancher global admins. Check who holds the Administrator global role (bindings reference it as globalRoleName: admin):
kubectl get globalrolebindings.management.cattle.io -o json | python3 -c "
import sys, json
data = json.load(sys.stdin)
for item in data['items']:
    if item.get('globalRoleName') == 'admin':
        print(f\"User: {item.get('userName', 'unknown')} - {item['metadata']['name']}\")
"
Keep this list as short as possible. Two to three global admins is reasonable for most organizations. More than five is a red flag.
6. Check ResourceQuota utilization. Quotas only help if they’re set. List all namespaces without quotas:
for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name"); do
  quota=$(kubectl get resourcequota -n "$ns" --no-headers 2>/dev/null | wc -l)
  if [ "$quota" -eq 0 ]; then
    echo "No quota: $ns"
  fi
done
System namespaces (kube-system, cattle-system) typically don’t need quotas, but every application namespace should have one. Rancher handles this automatically when you set project-level quotas, which is one more reason to always assign namespaces to projects rather than leaving them floating in the Default project.