Managing one Kubernetes cluster is straightforward. Managing three or four across dev, staging, and production gets messy fast. You’re constantly running kubectl config use-context, hoping you haven’t just scaled down the wrong cluster. kubectx and kubens solve that by making context and namespace switching instant, with tab completion and fuzzy search built in.
This guide covers the full workflow: merging multiple kubeconfig files, switching between clusters with both native kubectl and kubectx, namespace management with kubens, and practical use cases on three real k3s clusters. Every command and output shown below was captured from hands-on testing.
Tested March 2026 | kubectx v0.11.0, kubectl v1.34.5, k3s v1.34.5+k3s1 on Rocky Linux 10.1
Prerequisites
- Two or more Kubernetes clusters (any flavor: kubeadm, k3s, EKS, GKE, AKS)
- kubectl installed on your workstation
- Kubeconfig files for each cluster
- Tested on: Rocky Linux 10.1 (k3s clusters), macOS (workstation with kubectl 1.34.5)
Our test lab uses three single-node k3s clusters simulating a typical dev/staging/prod setup:
| Cluster | IP Address | Purpose | Workload |
|---|---|---|---|
| k3s-dev | 192.168.1.200 | Development | Nginx (2 replicas) |
| k3s-staging | 192.168.1.201 | Staging | Redis |
| k3s-prod | 192.168.1.202 | Production | PostgreSQL 17 |
Understanding Kubeconfig Structure
Before jumping into tools, it helps to understand what you’re working with. The kubeconfig file (default: ~/.kube/config) contains three main sections:
- clusters – API server addresses and CA certificates for each cluster
- contexts – Bind a cluster, user, and optional default namespace into a named context
- users – Authentication credentials (certificates, tokens, or exec-based auth)
A single kubeconfig from one k3s cluster looks like this:
cat ~/.kube/config
The structure follows this pattern:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...
    server: https://192.168.1.200:6443
  name: k3s-dev
contexts:
- context:
    cluster: k3s-dev
    user: k3s-dev
  name: k3s-dev
current-context: k3s-dev
users:
- name: k3s-dev
  user:
    client-certificate-data: LS0tLS1CRUdJTi...
    client-key-data: LS0tLS1CRUdJTi...
When managing multiple clusters, you need all three clusters represented in one kubeconfig (or multiple files merged via KUBECONFIG).
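You can also pull pieces of this structure out programmatically (kubectl config view prints the merged result). As a dependency-free sketch, assuming the conventional two-space-indented layout shown above, a few lines of awk list each cluster name alongside its API server:

```shell
# List each cluster's name and API server straight from a kubeconfig file.
# Minimal awk sketch -- assumes the standard layout shown above, where
# "- cluster:" opens each list item and a two-space-indented "name:" closes it.
list_clusters() {
  awk '
    /^- cluster:/            { in_cluster = 1 }
    in_cluster && /server:/  { server = $2 }
    in_cluster && /^  name:/ { print $2 "\t" server; in_cluster = 0 }
  ' "$1"
}

# Usage:
# list_clusters ~/.kube/config
```

For the dev cluster above this prints k3s-dev followed by https://192.168.1.200:6443.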
Merging Multiple Kubeconfig Files
Each cluster gives you its own kubeconfig file. Before you can switch between them, you need to merge them. There are two approaches.
Option 1: KUBECONFIG environment variable
Set the KUBECONFIG variable to a colon-separated list of kubeconfig files. kubectl merges them in memory:
export KUBECONFIG=~/.kube/k3s-dev.yaml:~/.kube/k3s-staging.yaml:~/.kube/k3s-prod.yaml
Add this to your ~/.bashrc or ~/.zshrc to make it persistent. Verify all contexts are visible:
kubectl config get-contexts
The output lists all three clusters with the current context marked by an asterisk:
CURRENT   NAME          CLUSTER       AUTHINFO      NAMESPACE
*         k3s-dev       k3s-dev       k3s-dev
          k3s-prod      k3s-prod      k3s-prod
          k3s-staging   k3s-staging   k3s-staging
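Hard-coding three paths works, but if you add clusters often you can assemble the variable from whatever files are present. A sketch, assuming each cluster's config is saved as a .yaml file under ~/.kube (build_kubeconfig is a hypothetical helper name):

```shell
# Join every per-cluster kubeconfig in a directory into one colon-separated
# KUBECONFIG value. Hypothetical layout: one .yaml file per cluster.
build_kubeconfig() {
  joined=""
  for f in "$1"/*.yaml; do
    [ -e "$f" ] || continue            # glob matched nothing
    joined="${joined:+$joined:}$f"     # append with ":" separator
  done
  printf '%s' "$joined"
}

export KUBECONFIG="$(build_kubeconfig "$HOME/.kube")"
```

Put the two lines at the bottom in your ~/.bashrc or ~/.zshrc and new clusters join the merged view as soon as you drop their file into ~/.kube.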
Option 2: Flatten into a single file
If you prefer one file instead of juggling multiple paths, flatten the merged config. Write to a temporary file first: a shell redirect truncates its target before kubectl reads it, so redirecting straight onto a file that is also listed in KUBECONFIG would wipe it.
kubectl config view --flatten > /tmp/kubeconfig-merged
mv /tmp/kubeconfig-merged ~/.kube/config
This writes all clusters, contexts, and credentials into a single file. Back up any existing ~/.kube/config first.
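The backup can be a tiny helper so it becomes a habit rather than a step you skip. backup_config is a hypothetical name, not part of kubectl:

```shell
# Copy a kubeconfig aside with a timestamp before overwriting it.
# Hypothetical helper -- any copy works; the timestamp keeps old backups distinct.
backup_config() {
  src="$1"
  [ -f "$src" ] || return 0                     # nothing to back up
  cp "$src" "$src.bak-$(date +%Y%m%d-%H%M%S)"
}

# Usage:
# backup_config ~/.kube/config
# kubectl config view --flatten > /tmp/kubeconfig-merged \
#   && mv /tmp/kubeconfig-merged ~/.kube/config
```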
Switching Contexts with kubectl
The built-in way to switch clusters uses kubectl config use-context. It works, but the commands are long.
Check which context you’re currently on:
kubectl config current-context
Output:
k3s-dev
Switch to the production cluster:
kubectl config use-context k3s-prod
Confirmation:
Switched to context "k3s-prod".
Now every kubectl command targets the prod cluster:
kubectl get nodes
Output confirms we’re on the right cluster:
NAME STATUS ROLES AGE VERSION
k3s-prod Ready control-plane 117s v1.34.5+k3s1
You can also set a default namespace for a context so you don’t need -n on every command:
kubectl config set-context --current --namespace=databases
This is functional but verbose. That’s where kubectx comes in.
Install kubectx and kubens
kubectx and kubens are compiled Go binaries (since v0.9.0) that replace the verbose kubectl config commands with short, memorable alternatives. kubectx handles cluster contexts; kubens handles namespaces.
Install from GitHub releases (Linux/macOS)
Grab the latest version dynamically:
VER=$(curl -sL https://api.github.com/repos/ahmetb/kubectx/releases/latest | grep tag_name | head -1 | sed 's/.*"v\([^"]*\)".*/\1/')
echo "Installing kubectx v${VER}"
# For Linux amd64
ARCH=linux_x86_64
# For macOS Apple Silicon, use: ARCH=darwin_arm64
# For macOS Intel, use: ARCH=darwin_amd64
curl -sLO "https://github.com/ahmetb/kubectx/releases/download/v${VER}/kubectx_v${VER}_${ARCH}.tar.gz"
curl -sLO "https://github.com/ahmetb/kubectx/releases/download/v${VER}/kubens_v${VER}_${ARCH}.tar.gz"
tar xzf "kubectx_v${VER}_${ARCH}.tar.gz" kubectx
tar xzf "kubens_v${VER}_${ARCH}.tar.gz" kubens
sudo mv kubectx kubens /usr/local/bin/
rm -f kubectx_v*.tar.gz kubens_v*.tar.gz
Install with Homebrew (macOS/Linux)
brew install kubectx
This installs both kubectx and kubens and sets up shell completions automatically.
Install with Krew (kubectl plugin manager)
kubectl krew install ctx
kubectl krew install ns
When installed via Krew, the commands become kubectl ctx and kubectl ns instead of standalone binaries.
Verify the installation:
kubectx --help
The help output shows all available commands:
USAGE:
kubectx : list the contexts
kubectx <NAME> : switch to context <NAME>
kubectx - : switch to the previous context
kubectx -s, --shell <NAME> : start a shell scoped to context <NAME>
kubectx -r, --readonly <NAME> : start a read-only shell for context <NAME>
kubectx -c, --current : show the current context name
kubectx -u, --unset : unset the current context
kubectx <NEW_NAME>=<NAME> : rename context <NAME> to <NEW_NAME>
kubectx <NEW_NAME>=. : rename current-context to <NEW_NAME>
kubectx -d <NAME> : delete context <NAME>
Switching Clusters with kubectx
List all available contexts:
kubectx
Output:
k3s-dev
k3s-prod
k3s-staging
Switch to the staging cluster:
kubectx k3s-staging
Output:
✔ Switched to context "k3s-staging".
Verify by listing nodes:
kubectl get nodes
Output confirms we’re on staging:
NAME STATUS ROLES AGE VERSION
k3s-staging Ready control-plane 98s v1.34.5+k3s1
Switch back to the previous context with a single dash:
kubectx -
Output:
✔ Switched to context "k3s-dev".
This toggle is extremely useful when you’re bouncing between two clusters repeatedly (deploying to staging, checking dev).
Namespace Management with kubens
kubens does for namespaces what kubectx does for contexts. List namespaces in the current cluster:
kubens
Output (on the dev cluster):
default
kube-node-lease
kube-public
kube-system
web-apps
Switch to the web-apps namespace:
kubens web-apps
Output:
✔ Active namespace is "web-apps"
Now every kubectl command is scoped to web-apps without needing -n web-apps:
kubectl get pods
Output shows only the pods in web-apps:
NAME READY STATUS RESTARTS AGE
nginx-85f7d4dd78-sxnpf 1/1 Running 0 23s
nginx-85f7d4dd78-zfrnj 1/1 Running 0 23s
Check which namespace you’re in:
kubens -c
Output:
web-apps
Reset back to the default namespace:
kubens --unset
Output:
✔ Active namespace is "default".
Rename Contexts for Clarity
Cloud providers give you terrible context names like gke_myproject_us-central1-a_cluster-1. kubectx lets you rename them to something memorable:
kubectx dev=k3s-dev
kubectx staging=k3s-staging
kubectx prod=k3s-prod
Output:
✔ Context k3s-dev renamed to dev.
✔ Context k3s-staging renamed to staging.
✔ Context k3s-prod renamed to prod.
Now your context list is clean:
kubectx
Output:
dev
prod
staging
This is especially valuable when you manage clusters across multiple cloud providers. Renaming arn:aws:eks:eu-west-1:123456789:cluster/production to just prod saves keystrokes and prevents mistakes.
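With many clusters you can script the renaming. The helper below derives a short alias from common cloud naming patterns. It is a heuristic of ours, not a kubectx feature, so review the short names before applying the rename loop:

```shell
# Derive a short alias from a long cloud context name.
# Heuristic: EKS ARNs end in ".../cluster/NAME"; GKE contexts look like
# "gke_PROJECT_ZONE_NAME". Anything else is returned unchanged.
short_name() {
  case "$1" in
    */*) printf '%s' "${1##*/}" ;;   # keep text after the last "/"
    *_*) printf '%s' "${1##*_}" ;;   # keep text after the last "_"
    *)   printf '%s' "$1"       ;;
  esac
}

# Bulk rename (uncomment once the short names look right):
# for ctx in $(kubectx); do
#   kubectx "$(short_name "$ctx")=$ctx"
# done
```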
Interactive Selection with fzf
When fzf is installed on your system, both kubectx and kubens automatically enable interactive fuzzy search. Just run kubectx or kubens without arguments and you get a searchable, interactive menu instead of a plain list.
Install fzf on your workstation:
sudo dnf install fzf # Rocky/RHEL/Fedora
sudo apt install fzf # Ubuntu/Debian
brew install fzf # macOS
With 20+ contexts, fzf is the difference between scanning a wall of text and typing two characters to find what you need.
Practical Use Cases
Cross-cluster service inventory
Quickly audit which services are running across all clusters:
for ctx in dev staging prod; do
echo "=== $ctx ==="
kubectl --context=$ctx get svc -A --no-headers | grep -v kube-system
done
Output:
=== dev ===
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2m
web-apps nginx ClusterIP 10.43.168.182 <none> 80/TCP 44s
=== staging ===
data-services redis ClusterIP 10.43.144.214 <none> 6379/TCP 43s
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2m
=== prod ===
databases postgres ClusterIP 10.43.61.191 <none> 5432/TCP 37s
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2m
Cluster health check script
A quick status report across all clusters in seconds:
for ctx in dev staging prod; do
echo "=== $ctx ==="
echo -n " Nodes: "
kubectl --context=$ctx get nodes --no-headers | wc -l
echo -n " Running pods: "
kubectl --context=$ctx get pods -A --no-headers --field-selector=status.phase=Running | wc -l
echo -n " Services: "
kubectl --context=$ctx get svc -A --no-headers | wc -l
echo -n " Namespaces: "
kubectl --context=$ctx get ns --no-headers | wc -l
done
Output from our test lab:
=== dev ===
Nodes: 1
Running pods: 7
Services: 4
Namespaces: 5
=== staging ===
Nodes: 1
Running pods: 6
Services: 4
Namespaces: 5
=== prod ===
Nodes: 1
Running pods: 6
Services: 4
Namespaces: 5
Deploy to staging, verify, promote to prod
A common workflow when promoting deployments across environments:
# Deploy to staging first
kubectx staging
kubens data-services
kubectl apply -f deployment.yaml
# Verify it works
kubectl get pods -w
# Looks good - promote to prod
kubectx prod
kubens databases
kubectl apply -f deployment.yaml
The context and namespace switches happen instantly, so you stay in flow instead of typing out long kubectl config commands.
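Once the prod half of this workflow gets scripted, a small guard can assert you are where you think you are before anything is applied. require_ctx is a hypothetical helper; the check is plain string equality:

```shell
# Refuse to continue unless the active context matches the expected one.
# Hypothetical guard for promotion scripts -- pass the current context
# (e.g. from "kubectl config current-context") and the one you expect.
require_ctx() {
  have="$1"; want="$2"
  if [ "$have" != "$want" ]; then
    echo "refusing: active context is '$have', expected '$want'" >&2
    return 1
  fi
}

# Usage in a promotion script:
# require_ctx "$(kubectl config current-context)" prod \
#   && kubectl apply -f deployment.yaml
```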
One-liner: same command across all clusters
Run any kubectl command against every cluster without manually switching:
for ctx in $(kubectx); do
echo "=== $ctx ==="
kubectl --context="$ctx" get nodes -o wide
done
The --context flag overrides the current context for a single command without switching your active context. Useful in scripts where you don’t want side effects.
Shell Prompt Integration
Knowing which cluster and namespace you’re targeting at a glance prevents costly mistakes. kube-ps1 adds the current context and namespace to your shell prompt:
# Install kube-ps1
brew install kube-ps1 # macOS
# Or clone from GitHub
# Add to ~/.bashrc or ~/.zshrc
source "/opt/homebrew/opt/kube-ps1/share/kube-ps1.sh"
PS1='$(kube_ps1) \$ ' # bash
PROMPT='$(kube_ps1) %# ' # zsh
Your prompt then shows something like (⎈ |prod:databases) $ so you always know where you are before hitting enter on that kubectl delete command.
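If you'd rather not install anything, a rough equivalent can be scraped from the kubeconfig itself. This sketch reads only the first file in KUBECONFIG and shows only the context, not the namespace, so it is deliberately less capable than kube-ps1:

```shell
# Print "(CONTEXT) " for use in PS1 -- a minimal, dependency-free sketch.
# Reads current-context from the first kubeconfig file; namespace is ignored.
kube_prompt() {
  cfg="${KUBECONFIG:-$HOME/.kube/config}"
  cfg="${cfg%%:*}"                               # first file if colon-separated
  ctx=$(awk '/^current-context:/ {print $2}' "$cfg" 2>/dev/null)
  [ -n "$ctx" ] && printf '(%s) ' "$ctx"
}

# PS1='$(kube_prompt)\$ '   # bash
```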
kubectx vs kubectl: When to Use Which
| Task | kubectl (native) | kubectx/kubens |
|---|---|---|
| List contexts | kubectl config get-contexts | kubectx |
| Switch context | kubectl config use-context NAME | kubectx NAME |
| Previous context | Not built-in | kubectx - |
| Current context | kubectl config current-context | kubectx -c |
| Rename context | kubectl config rename-context OLD NEW | kubectx NEW=OLD |
| List namespaces | kubectl get ns | kubens |
| Switch namespace | kubectl config set-context --current --namespace=NAME | kubens NAME |
| Previous namespace | Not built-in | kubens - |
| Interactive select | Not built-in | Automatic with fzf |
In scripts, kubectl --context=NAME is often better because it doesn’t change your active context as a side effect. For interactive work, kubectx wins on speed and convenience.
Going Further
- Shell completion – kubectx supports bash, zsh, and fish completions. The Homebrew install sets these up automatically. For manual installs, see the kubectx docs
- RBAC per cluster – Consider using different kubeconfig users per environment (read-only for prod, admin for dev) to prevent accidental changes. See our guide on Kubernetes RBAC namespace restrictions
- Scoped shells – kubectx v0.11.0 added kubectx --shell NAME, which opens a new shell locked to a specific context. When you exit, your original context is untouched
- Read-only mode – kubectx --readonly NAME opens a shell where write operations are blocked. Useful for safely inspecting production
- Lens/Portainer – For a graphical multi-cluster dashboard, check out Lens Desktop or Portainer