Kubectl Cheat Sheet for Kubernetes Admins and CKA Exam Prep

If you work with Kubernetes clusters daily, kubectl is the tool you reach for before anything else. This cheat sheet covers the commands I actually use in production, from quick cluster checks to deep troubleshooting sessions at 2 AM. Everything here is tested against Kubernetes 1.32+ and organized by the tasks you run most often.

Whether you are preparing for the CKA exam or managing production workloads, this reference gives you copy-paste ready commands with real output examples so you know exactly what to expect.

Tested March 2026 | Kubernetes 1.32.1 on Minikube 1.38.1, kubectl v1.32.2, Ubuntu 24.04

Prerequisites

Before diving into the commands, make sure you have:

  • kubectl installed and configured (v1.32 or later)
  • A running Kubernetes cluster (local with minikube/kind or remote)
  • A valid kubeconfig file at ~/.kube/config or set via KUBECONFIG env variable

Verify your kubectl version matches or is within one minor version of your cluster:

kubectl version
# Client Version: v1.32.2
# Server Version: v1.32.1

1. Cluster Information

The first thing I do when connecting to any cluster is confirm I am talking to the right one. These commands give you a quick health check.

Display the cluster endpoint and CoreDNS addresses:

kubectl cluster-info
# Kubernetes control plane is running at https://10.0.1.100:6443
# CoreDNS is running at https://10.0.1.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

List all nodes with their status, roles, age, and version:

kubectl get nodes -o wide
# NAME         STATUS   ROLES           AGE   VERSION    INTERNAL-IP   OS-IMAGE             KERNEL-VERSION
# control-01   Ready    control-plane   45d   v1.32.1    10.0.1.100    Ubuntu 24.04 LTS     6.8.0-45-generic
# worker-01    Ready    <none>          45d   v1.32.1    10.0.1.101    Ubuntu 24.04 LTS     6.8.0-45-generic
# worker-02    Ready    <none>          45d   v1.32.1    10.0.1.102    Ubuntu 24.04 LTS     6.8.0-45-generic

List all API resources your cluster supports. This is helpful when you forget the exact resource name or want to check short names:

kubectl api-resources --sort-by=name | head -20
# NAME                              SHORTNAMES   APIVERSION                        NAMESPACED   KIND
# bindings                                       v1                                true         Binding
# certificatesigningrequests        csr          certificates.k8s.io/v1            false        CertificateSigningRequest
# clusterrolebindings                            rbac.authorization.k8s.io/v1      false        ClusterRoleBinding
# clusterroles                                   rbac.authorization.k8s.io/v1      false        ClusterRole
# configmaps                        cm           v1                                true         ConfigMap

Check which API versions are available:

kubectl api-versions | grep -i apps
# apps/v1

Get a quick summary of resource usage across the cluster:

kubectl top nodes
# NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
# control-01   245m         12%    1842Mi          48%
# worker-01    890m         22%    3200Mi          41%
# worker-02    1102m        27%    4100Mi          52%

2. Working with Pods

Pods are the fundamental unit in Kubernetes. These are the commands you will use hundreds of times a day.

Run a quick pod for testing. This is one of the fastest ways to validate network connectivity or DNS resolution inside a cluster:

kubectl run debug-pod --image=busybox:1.36 --restart=Never -- sleep 3600
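With the pod running, you can check DNS and connectivity from inside the cluster. A quick sketch using the busybox tools available in the image (the kubernetes.default service always resolves in a healthy cluster):

```shell
# Resolve the in-cluster API service to confirm cluster DNS works
kubectl exec debug-pod -- nslookup kubernetes.default.svc.cluster.local

# Test TCP connectivity to the API server's service port
kubectl exec debug-pod -- nc -zv kubernetes.default.svc.cluster.local 443
```

If DNS resolution fails here, check the CoreDNS pods in kube-system before debugging anything else.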

Create a pod from a YAML manifest using a dry-run to generate the template first:

kubectl run nginx-test --image=nginx:1.27 --dry-run=client -o yaml > nginx-pod.yaml
kubectl apply -f nginx-pod.yaml

List pods with different levels of detail:

# Basic list in default namespace
kubectl get pods

# Wide output with node placement and IP
kubectl get pods -o wide

# All namespaces
kubectl get pods -A

# Filter by label
kubectl get pods -l app=nginx

# Watch for changes in real time
kubectl get pods -w

Inspect a pod to see events, conditions, and container details:

kubectl describe pod nginx-test

Tail logs from a running container. Add -f to follow in real time:

# Stream logs
kubectl logs -f nginx-test

# Last 100 lines
kubectl logs --tail=100 nginx-test

# Logs from a specific container in a multi-container pod
kubectl logs nginx-test -c sidecar

# Logs from all pods matching a label
kubectl logs -l app=nginx --all-containers=true

Open an interactive shell inside a running pod:

kubectl exec -it nginx-test -- /bin/bash

# Run a one-off command without entering the shell
kubectl exec nginx-test -- cat /etc/nginx/nginx.conf

Forward a local port to a pod, which is great for testing services before exposing them:

kubectl port-forward pod/nginx-test 8080:80
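While the forward is running, verify it from a second terminal. A minimal check, assuming the nginx-test pod above is serving on port 80:

```shell
# Fetch only the response headers through the forwarded port
curl -I http://localhost:8080
# An HTTP/1.1 200 OK response confirms the tunnel is working
```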

Copy files between your local machine and a pod:

# Copy local file into pod
kubectl cp ./app.conf nginx-test:/etc/nginx/conf.d/app.conf

# Copy file from pod to local
kubectl cp nginx-test:/var/log/nginx/access.log ./access.log

Delete pods with various strategies:

# Delete a single pod
kubectl delete pod nginx-test

# Force delete a stuck pod (use with caution)
kubectl delete pod nginx-test --grace-period=0 --force

# Delete all pods with a specific label
kubectl delete pods -l app=nginx

3. Deployments

Deployments handle the rollout and scaling of your application pods. If you have deployed applications on Kubernetes before, you know how critical these commands are. For a deeper walkthrough on setting up clusters, check out our guide on installing Kubernetes on Ubuntu.

Create a deployment from the command line:

kubectl create deployment webapp --image=nginx:1.27 --replicas=3

Generate a deployment YAML for further customization:

kubectl create deployment webapp --image=nginx:1.27 --replicas=3 --dry-run=client -o yaml > webapp-deployment.yaml

Scale a deployment up or down:

# Scale to 5 replicas
kubectl scale deployment webapp --replicas=5

# Autoscale based on CPU usage (requires metrics-server)
kubectl autoscale deployment webapp --min=3 --max=10 --cpu-percent=80

Update the container image in a deployment. This triggers a rolling update:

kubectl set image deployment/webapp nginx=nginx:1.27.1

Monitor the rollout status:

kubectl rollout status deployment/webapp
# Waiting for deployment "webapp" rollout to finish: 2 out of 3 new replicas have been updated...
# deployment "webapp" successfully rolled out

View rollout history and roll back to a previous version:

# Check history
kubectl rollout history deployment/webapp
# REVISION  CHANGE-CAUSE
# 1         <none>
# 2         <none>

# Roll back to previous version
kubectl rollout undo deployment/webapp

# Roll back to a specific revision
kubectl rollout undo deployment/webapp --to-revision=1

Pause and resume a deployment. This is useful when you want to make multiple changes before triggering a single rollout:

kubectl rollout pause deployment/webapp
# Make changes...
kubectl set image deployment/webapp nginx=nginx:1.27.2
kubectl set resources deployment/webapp -c=nginx --limits=cpu=200m,memory=256Mi
# Resume to apply all changes at once
kubectl rollout resume deployment/webapp

4. Rollouts and Rollbacks

The deployment section above covers the basics, but rollouts deserve their own focused reference. When a bad image hits production at 3 AM, you need these commands committed to memory.

Roll out a new image to a deployment:

kubectl set image deployment/nginx nginx=nginx:1.27

The update starts immediately:

deployment.apps/nginx image updated

Watch the rollout progress until it completes:

kubectl rollout status deployment/nginx

Once all pods are replaced, you should see:

Waiting for deployment "nginx" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "nginx" rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx" successfully rolled out

Review the revision history for a deployment:

kubectl rollout history deployment/nginx

Each image change creates a new revision entry:

deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>

Roll back to the previous revision instantly:

kubectl rollout undo deployment/nginx

Kubernetes confirms the rollback:

deployment.apps/nginx rolled back

If you need to go back to a specific revision (not just the previous one), specify it explicitly:

kubectl rollout undo deployment/nginx --to-revision=1

Pause a deployment to batch multiple changes into a single rollout. This prevents Kubernetes from creating intermediate ReplicaSets for each change:

kubectl rollout pause deployment/nginx

After making all your changes, resume to trigger one combined rollout:

kubectl rollout resume deployment/nginx

5. Services

Services give your pods a stable network identity. Without them, pod IPs change every time a pod restarts.

Expose a deployment as a ClusterIP service (internal only):

kubectl expose deployment webapp --port=80 --target-port=80 --name=webapp-svc

Create a NodePort service to expose on every node:

kubectl expose deployment webapp --type=NodePort --port=80 --target-port=80 --name=webapp-nodeport

List services and check endpoints:

kubectl get svc
# NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
# kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP        45d
# webapp-svc        ClusterIP   10.96.42.15     <none>        80/TCP         2m
# webapp-nodeport   NodePort    10.96.55.200    <none>        80:31234/TCP   30s

# Check which pods back a service
kubectl get endpoints webapp-svc
# NAME         ENDPOINTS                                      AGE
# webapp-svc   10.244.1.5:80,10.244.2.8:80,10.244.2.9:80     2m

Get detailed information about a service including selectors and session affinity:

kubectl describe svc webapp-svc

Forward traffic from your local machine to a service:

kubectl port-forward svc/webapp-svc 8080:80
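To confirm the service answers on its stable DNS name from inside the cluster, a throwaway client pod works well. A sketch assuming the webapp-svc created above lives in the default namespace:

```shell
# One-shot pod that fetches the service by DNS name, then removes itself
kubectl run svc-check --rm -it --restart=Never --image=busybox:1.36 \
  -- wget -qO- http://webapp-svc.default.svc.cluster.local
```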

6. ConfigMaps and Secrets

ConfigMaps and Secrets let you decouple configuration from your container images. This is something every production deployment should use.

Create a ConfigMap from literal key-value pairs:

kubectl create configmap app-config \
  --from-literal=DATABASE_HOST=db.example.com \
  --from-literal=DATABASE_PORT=5432 \
  --from-literal=LOG_LEVEL=info

Create a ConfigMap from a file:

kubectl create configmap nginx-config --from-file=nginx.conf=/path/to/nginx.conf

View the contents of a ConfigMap:

kubectl get configmap app-config -o yaml

Create a Secret from literal values. Kubernetes stores these base64-encoded, which is encoding, not encryption; anyone with read access to the Secret can decode it:

kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=S3cur3P@ss

Create a TLS secret from certificate files:

kubectl create secret tls webapp-tls --cert=tls.crt --key=tls.key

Decode a secret value to verify it:

kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d
# S3cur3P@ss

Here is a quick example of mounting a ConfigMap and Secret in a pod spec. Save this as pod-with-config.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx:1.27
    envFrom:
    - configMapRef:
        name: app-config
    - secretRef:
        name: db-credentials
    volumeMounts:
    - name: config-volume
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: config-volume
    configMap:
      name: nginx-config
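Apply the manifest and confirm the values landed where you expect. This assumes the app-config, db-credentials, and nginx-config objects created earlier in this section:

```shell
kubectl apply -f pod-with-config.yaml

# Environment variables injected via envFrom
kubectl exec app-pod -- printenv DATABASE_HOST LOG_LEVEL

# Files projected from the ConfigMap volume
kubectl exec app-pod -- ls /etc/nginx/conf.d
```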

7. Persistent Volumes

Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) decouple storage from pod lifecycles. Without them, any data written inside a container vanishes when the pod restarts.

List all PVs and PVCs in the cluster:

kubectl get pv

Check PVCs in the current namespace:

kubectl get pvc

Generate a PVC manifest using a dry-run. This is useful for quickly scaffolding storage claims without writing YAML from scratch:

cat <<'YAML' | kubectl apply --dry-run=client -o yaml -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
YAML

The dry-run output shows the full PVC spec as Kubernetes would interpret it, including default fields. You can redirect this to a file and customize further.

Inspect the details of a specific PVC, including its bound PV, capacity, and events:

kubectl describe pvc app-data
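To consume the claim, a pod references it under volumes and mounts it like any other volume. A minimal sketch assuming the app-data PVC above (pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
  - name: app
    image: nginx:1.27
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```

Once the pod is scheduled, the PVC status moves to Bound and anything written under the mount path survives pod restarts.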

8. Namespaces

Namespaces provide isolation between teams and environments within the same cluster. I use them to separate staging, production, and tooling workloads.

Create a namespace:

kubectl create namespace staging

List all namespaces:

kubectl get namespaces
# NAME              STATUS   AGE
# default           Active   45d
# kube-system       Active   45d
# kube-public       Active   45d
# kube-node-lease   Active   45d
# staging           Active   5s

Switch your current context to a different namespace so you do not have to type -n staging on every command:

kubectl config set-context --current --namespace=staging

Verify which namespace you are currently working in:

kubectl config view --minify --output 'jsonpath={..namespace}'
# staging

Run a command in a specific namespace without switching context:

kubectl get pods -n kube-system

View resources across all namespaces at once:

kubectl get pods --all-namespaces
kubectl get services -A
kubectl get deployments -A

9. Troubleshooting

When something breaks in production, speed matters. These commands help you find the root cause fast. For a more complete troubleshooting workflow, see our post on troubleshooting Kubernetes cluster issues.

Check cluster events sorted by time. This is usually the first place I look:

kubectl get events --sort-by='.lastTimestamp'

# Filter events for a specific namespace
kubectl get events -n staging --sort-by='.lastTimestamp'

# Watch events in real time
kubectl get events -w

Describe a resource to see its full event history and current conditions:

kubectl describe pod webapp-7d9f5b4c6-k2x8m
kubectl describe node worker-01

Get logs from a crashed container. The --previous flag shows logs from the last terminated instance:

# Logs from the previously crashed container
kubectl logs webapp-7d9f5b4c6-k2x8m --previous

# Logs with timestamps
kubectl logs webapp-7d9f5b4c6-k2x8m --timestamps=true

# Logs from the last 30 minutes
kubectl logs webapp-7d9f5b4c6-k2x8m --since=30m

Use kubectl debug to attach an ephemeral debug container to a running pod. Ephemeral containers have been generally available since Kubernetes 1.25:

# Attach a debug container with networking tools
kubectl debug -it webapp-7d9f5b4c6-k2x8m --image=nicolaka/netshoot --target=nginx

# Debug a node directly
kubectl debug node/worker-01 -it --image=ubuntu:24.04

Check resource consumption for pods and nodes:

# Pod resource usage
kubectl top pods
# NAME                      CPU(cores)   MEMORY(bytes)
# webapp-7d9f5b4c6-k2x8m   12m          45Mi
# webapp-7d9f5b4c6-r9t3n   8m           42Mi

# Sort by memory usage
kubectl top pods --sort-by=memory

# Resource usage for containers within pods
kubectl top pods --containers

Find pods that are not in a Running state:

kubectl get pods --field-selector=status.phase!=Running -A

Check for pods stuck in Terminating. A terminating pod still reports phase Running, so filter on the phase and grep the printed STATUS column:

kubectl get pods --field-selector=status.phase=Running -A | grep Terminating

Events and Debugging

Cluster events tell you exactly what Kubernetes is doing behind the scenes. Sorting by creation timestamp puts the most recent actions at the bottom:

kubectl get events --sort-by=.metadata.creationTimestamp

The output shows the full lifecycle of pod scheduling, image pulls, and container starts:

LAST SEEN   TYPE     REASON              OBJECT                         MESSAGE
2m          Normal   Scheduled           pod/nginx-7d9f5b4c6-k2x8m     Successfully assigned default/nginx-7d9f5b4c6-k2x8m to worker-01
2m          Normal   Pulling             pod/nginx-7d9f5b4c6-k2x8m     Pulling image "nginx:1.27"
90s         Normal   Pulled              pod/nginx-7d9f5b4c6-k2x8m     Successfully pulled image "nginx:1.27" in 28.4s
90s         Normal   Created             pod/nginx-7d9f5b4c6-k2x8m     Created container nginx
90s         Normal   Started             pod/nginx-7d9f5b4c6-k2x8m     Started container nginx

Filter events to show only failures, which cuts through the noise when investigating issues:

kubectl get events --field-selector reason=Failed

The Events section at the bottom of kubectl describe pod is often the fastest way to understand why a pod is not running. Look for FailedScheduling, ImagePullBackOff, or CrashLoopBackOff reasons:

kubectl describe pod nginx-7d9f5b4c6-k2x8m

Scroll to the Events section at the bottom of the output. That is where scheduling failures, image pull errors, and OOMKill reasons appear.
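You can also scope events to a single object with a field selector, which helps when a busy namespace drowns out the pod you care about (the pod name here is illustrative):

```shell
kubectl get events \
  --field-selector involvedObject.kind=Pod,involvedObject.name=nginx-7d9f5b4c6-k2x8m \
  --sort-by=.lastTimestamp
```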

10. RBAC – Role-Based Access Control

RBAC controls who can do what in your cluster. Getting this right is critical for multi-team environments and passing the CKA exam.

Create a Role that allows reading pods in a specific namespace:

kubectl create role pod-reader \
  --verb=get,list,watch \
  --resource=pods \
  -n staging

Bind that role to a user:

kubectl create rolebinding pod-reader-binding \
  --role=pod-reader \
  --user=jane \
  -n staging

Create a ClusterRole for cluster-wide permissions:

kubectl create clusterrole node-viewer \
  --verb=get,list,watch \
  --resource=nodes

Bind it with a ClusterRoleBinding:

kubectl create clusterrolebinding node-viewer-binding \
  --clusterrole=node-viewer \
  --user=jane

Check whether a user or service account has permission to perform an action:

# Can I create deployments?
kubectl auth can-i create deployments
# yes

# Can user jane list pods in the staging namespace?
kubectl auth can-i list pods --namespace=staging --as=jane
# yes

# List all permissions for the current user
kubectl auth can-i --list

# Check permissions for a service account
kubectl auth can-i get pods --as=system:serviceaccount:staging:default -n staging

11. Node Management

Node operations come up whenever you need to patch OS packages, replace hardware, or rebalance workloads across the cluster. Knowing how to safely drain and restore nodes prevents downtime.

List all nodes with extended details including OS image, kernel, and container runtime:

kubectl get nodes -o wide

Inspect a specific node for capacity, allocatable resources, conditions, and running pods:

kubectl describe node minikube | head -30

Mark a node as unschedulable. Existing pods keep running, but no new pods will be placed on it:

kubectl cordon node-name

Drain a node to evict all pods before maintenance. The flags ensure DaemonSet pods and pods with emptyDir volumes are handled cleanly:

kubectl drain node-name --ignore-daemonsets --delete-emptydir-data

After maintenance, bring the node back into the scheduling pool:

kubectl uncordon node-name

Apply a taint to prevent pods from scheduling on a node unless they tolerate it. This is how you reserve nodes for specific workloads:

kubectl taint nodes node-name key=value:NoSchedule

Remove the taint by appending a minus sign:

kubectl taint nodes node-name key=value:NoSchedule-
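For a pod to land on the tainted node, its spec needs a matching toleration. A minimal sketch for the key=value:NoSchedule taint above (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx:1.27
```

Note that a toleration only permits scheduling on the tainted node; combine it with a nodeSelector or node affinity if the pod must run there.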

Check CPU and memory consumption across nodes and pods (requires metrics-server):

kubectl top nodes
kubectl top pods

12. JSONPath and Custom Columns

When you need to extract specific data from kubectl output, whether for scripts, monitoring, or reports, JSONPath and custom columns are your best tools. This section alone can save you hours of piping through grep and awk.

Extract pod names using JSONPath:

kubectl get pods -o jsonpath='{.items[*].metadata.name}'
# webapp-7d9f5b4c6-k2x8m webapp-7d9f5b4c6-r9t3n

Get pod names one per line using range:

kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
# webapp-7d9f5b4c6-k2x8m
# webapp-7d9f5b4c6-r9t3n

Build a custom report showing pod name, node, and IP:

kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.hostIP}{"\t"}{.status.podIP}{"\n"}{end}'

Custom columns give you table-formatted output that is easier to read:

kubectl get pods -o custom-columns=\
NAME:.metadata.name,\
NODE:.spec.nodeName,\
STATUS:.status.phase,\
IP:.status.podIP,\
RESTARTS:.status.containerStatuses[0].restartCount

# NAME                      NODE        STATUS    IP            RESTARTS
# webapp-7d9f5b4c6-k2x8m   worker-01   Running   10.244.1.5    0
# webapp-7d9f5b4c6-r9t3n   worker-02   Running   10.244.2.8    0

List all container images running in the cluster, which is great for security audits:

kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{range .spec.containers[*]}{.image}{", "}{end}{"\n"}{end}'

Get the internal IPs of all nodes:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'

Filter pods using JSONPath conditions. For example, find all pods on a specific node:

kubectl get pods -A -o jsonpath='{range .items[?(@.spec.nodeName=="worker-01")]}{.metadata.name}{"\n"}{end}'

13. Context and Kubeconfig Management

When you manage multiple clusters (development, staging, production), context switching is part of the daily routine. Getting this wrong can be costly, so knowing these commands cold is a must. For more on multi-cluster setups, refer to our guide on managing multiple Kubernetes clusters with kubectl and kubectx.

List all contexts in your kubeconfig:

kubectl config get-contexts
# CURRENT   NAME            CLUSTER         AUTHINFO        NAMESPACE
# *         production      prod-cluster    prod-admin      default
#           staging         stg-cluster     stg-admin       staging
#           development     dev-cluster     dev-admin       default

Switch to a different context:

kubectl config use-context staging
# Switched to context "staging".

View the current context:

kubectl config current-context
# staging

Create a new context that points to a specific cluster, user, and namespace:

kubectl config set-context my-context \
  --cluster=prod-cluster \
  --user=prod-admin \
  --namespace=monitoring

Change the default namespace for your current context:

kubectl config set-context --current --namespace=kube-system

Delete a context you no longer need:

kubectl config delete-context development

Use multiple kubeconfig files by merging them:

export KUBECONFIG=~/.kube/config:~/.kube/staging-config:~/.kube/prod-config
kubectl config view --flatten > ~/.kube/merged-config
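Before pointing your shell at the merged file, verify it picked up every context without touching your default config:

```shell
# Inspect the merged file in isolation
KUBECONFIG=~/.kube/merged-config kubectl config get-contexts
```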

14. Useful Aliases and Bash Completion

Setting up aliases and tab completion can save you thousands of keystrokes per day. Here is what I have in my shell profile.

Enable kubectl bash completion by adding this to your ~/.bashrc or ~/.zshrc:

# For bash
source <(kubectl completion bash)

# For zsh
source <(kubectl completion zsh)

# Make completion work with the 'k' alias too
alias k=kubectl
complete -o default -F __start_kubectl k

Production-tested aliases that I use every day. Add these to your shell profile:

# Core shortcuts
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kdel='kubectl delete'
alias kaf='kubectl apply -f'

# Pod shortcuts
alias kgp='kubectl get pods'
alias kgpw='kubectl get pods -o wide'
alias kgpa='kubectl get pods -A'
alias klf='kubectl logs -f'

# Deployment shortcuts
alias kgd='kubectl get deployments'
alias ksd='kubectl scale deployment'

# Service shortcuts
alias kgs='kubectl get svc'

# Namespace shortcuts
alias kgns='kubectl get namespaces'
alias kcn='kubectl config set-context --current --namespace'

# Context shortcuts
alias kctx='kubectl config get-contexts'
alias kuc='kubectl config use-context'

# Quick troubleshooting
alias kge='kubectl get events --sort-by=.lastTimestamp'
alias ktn='kubectl top nodes'
alias ktp='kubectl top pods'

With these aliases, common workflows become much faster. For example, checking the status of pods across all namespaces goes from typing kubectl get pods --all-namespaces to just kgpa.

Create a function to quickly switch namespaces:

kns() {
  kubectl config set-context --current --namespace="$1"
  echo "Switched to namespace: $1"
}
# Usage: kns staging

Another function to quickly exec into the first pod matching a label:

kexec() {
  local pod=$(kubectl get pods -l "$1" -o jsonpath='{.items[0].metadata.name}')
  kubectl exec -it "$pod" -- "${2:-/bin/sh}"
}
# Usage: kexec app=nginx /bin/bash

15. Dry Run and Diff

Dry runs let you validate and generate manifests without touching the cluster. Combined with kubectl diff, you can preview exactly what will change before applying.

Generate a full deployment manifest from a one-liner:

kubectl create deployment test --image=alpine --dry-run=client -o yaml

The output gives you a complete, valid YAML template you can save and customize:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test
    spec:
      containers:
      - image: alpine
        name: alpine
        resources: {}
status: {}

Server-side dry run validates the manifest against the actual API server, including admission controllers and webhooks:

kubectl apply -f deployment.yaml --dry-run=server

Preview the diff between your local manifest and what is currently running in the cluster:

kubectl diff -f deployment.yaml

This works like git diff for your cluster state. If the output is empty, there are no changes to apply.
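kubectl diff is also scriptable: it exits 0 when the cluster already matches, 1 when differences exist, and greater than 1 on errors. That makes a simple drift check in CI straightforward:

```shell
# Gate on drift between the manifest and live cluster state
if kubectl diff -f deployment.yaml > /dev/null; then
  echo "Cluster already matches the manifest"
else
  echo "Drift detected - review the diff before applying"
fi
```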

Quick Reference Table

Here is a summary of the most-used commands you will want at your fingertips:

Task                      Command
------------------------  ---------------------------------------------------------------
Get cluster info          kubectl cluster-info
List all pods             kubectl get pods -A
Describe a pod            kubectl describe pod POD_NAME
View pod logs             kubectl logs -f POD_NAME
Exec into pod             kubectl exec -it POD_NAME -- /bin/sh
Create deployment         kubectl create deployment NAME --image=IMAGE
Scale deployment          kubectl scale deployment NAME --replicas=N
Rollout status            kubectl rollout status deployment/NAME
Rollback deployment       kubectl rollout undo deployment/NAME
Expose as service         kubectl expose deployment NAME --port=80
Create secret             kubectl create secret generic NAME --from-literal=k=v
Switch namespace          kubectl config set-context --current --namespace=NS
Switch context            kubectl config use-context CTX
Check permissions         kubectl auth can-i VERB RESOURCE
View events by time       kubectl get events --sort-by=.lastTimestamp
Cordon/drain node         kubectl cordon/drain/uncordon NODE
Taint a node              kubectl taint nodes NODE key=val:NoSchedule
Dry run manifest          kubectl create deploy NAME --image=IMG --dry-run=client -o yaml
Pod/node metrics          kubectl top pods / kubectl top nodes

Tips for the CKA Exam

If you are using this cheat sheet for CKA prep, here are a few things worth highlighting:

  • The --dry-run=client -o yaml pattern is your best friend. Generate YAML templates quickly instead of writing them from scratch.
  • Master JSONPath output formatting - the exam often asks you to extract specific fields from resources.
  • Set up your aliases at the start of the exam. The alias k=kubectl shortcut alone saves significant time over a two-hour session.
  • Practice kubectl explain to look up field paths without leaving the terminal:
# Show the fields available for a pod spec
kubectl explain pod.spec

# Drill down into a specific field
kubectl explain pod.spec.containers.livenessProbe

# Show all fields recursively
kubectl explain deployment.spec --recursive | head -40

The kubectl explain command acts as built-in documentation and is available during the exam. Learn to use it instead of memorizing every field name.
