Containers

Upgrade Kubernetes Cluster with kubeadm (1.34 to 1.35)

Running an outdated Kubernetes minor version means missing security patches and losing access to new API features. kubeadm makes the upgrade straightforward: upgrade the control plane first, then each worker node. The whole process takes about 15 minutes for a small cluster, and workloads keep running throughout if you drain nodes properly.


This guide walks through a real upgrade from Kubernetes 1.34.6 to 1.35.3 on a two-node cluster (one control plane, one worker). Every command and output shown here comes from a live cluster running production-style workloads. The upgrade covers kubeadm, kubelet, kubectl, etcd, and CoreDNS, all in the correct order to avoid breaking the version skew policy.

Current as of April 2026. Verified upgrade from Kubernetes 1.34.6 to 1.35.3 on Ubuntu 24.04.4 LTS, containerd 2.2.2

Kubernetes Version Skew Policy

Before upgrading anything, understand which components can run at different versions simultaneously. Kubernetes enforces strict version skew limits to prevent incompatibilities between the API server, kubelet, and other components.

Component                                Allowed skew from kube-apiserver
kube-apiserver (other instances in HA)   1 minor version
kubelet                                  3 minor versions older
kube-proxy                               3 minor versions older
kube-controller-manager                  1 minor version
kube-scheduler                           1 minor version
kubectl                                  1 minor version (older or newer)

The practical implication: always upgrade control plane nodes before workers. You can run workers at v1.34 while the control plane is at v1.35, but never the other way around. In multi-control-plane clusters, upgrade one control plane node at a time.
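The skew rules above are easy to encode in a pre-flight script. The sketch below is our own helper, not an official tool; function names and the version-string format ("vX.Y.Z", as kubectl reports it) are the only assumptions.

```shell
#!/usr/bin/env bash
# Sketch: check that a component's minor version is within the allowed
# skew of the API server. Version strings look like "v1.34.6".

minor_of() {
  # Extract the minor number from a "vX.Y.Z" version string.
  local v="${1#v}"   # drop leading "v": 1.34.6
  v="${v#*.}"        # drop major:       34.6
  echo "${v%%.*}"    # drop patch:       34
}

skew_ok() {
  # Usage: skew_ok <apiserver-version> <component-version> <max-minors-behind>
  # Succeeds if the component is the same minor or at most N minors older.
  local diff=$(( $(minor_of "$1") - $(minor_of "$2") ))
  [ "$diff" -ge 0 ] && [ "$diff" -le "$3" ]
}

# kubelet may lag up to 3 minors behind the API server:
# skew_ok v1.35.3 v1.34.6 3 && echo "kubelet skew OK"
```

A component newer than the API server always fails the check, which matches the "never the other way around" rule.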

Prerequisites

  • A working Kubernetes cluster running v1.34.x (set up with kubeadm)
  • Tested on: Ubuntu 24.04.4 LTS, containerd 2.2.2, Kubernetes 1.34.6
  • SSH access to all cluster nodes with sudo privileges
  • An etcd backup taken before starting (non-negotiable)
  • kubectl configured and working from the control plane node

Pre-Upgrade Checklist

Skipping pre-flight checks is how upgrades turn into incidents. Run through these before touching kubeadm.

Verify Cluster Health

Confirm all nodes are in Ready state and the core system pods are healthy:

kubectl get nodes
kubectl get pods -n kube-system

All nodes should show Ready and system pods should be Running:

NAME     STATUS   ROLES           AGE   VERSION
cp01     Ready    control-plane   45d   v1.34.6
worker01 Ready    <none>          45d   v1.34.6

If any node shows NotReady, fix that first. Upgrading a broken cluster makes things worse, never better.
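That eyeball check can be automated for scripted pre-flight runs. The helper below is our own (not a kubectl feature); it reads `kubectl get nodes --no-headers` output and fails on the first node that is not plain Ready:

```shell
# Sketch: fail fast if any node is not in the plain "Ready" state.
# Reads "NAME STATUS ROLES AGE VERSION" lines on stdin.

all_nodes_ready() {
  local name status rest
  while read -r name status rest; do
    if [ "$status" != "Ready" ]; then
      echo "node $name is $status" >&2
      return 1
    fi
  done
  return 0
}

# Against a live cluster:
# kubectl get nodes --no-headers | all_nodes_ready || { echo "fix nodes first"; exit 1; }
```

Note that a cordoned node reports `Ready,SchedulingDisabled` in the STATUS column, which this strict check also rejects; that is usually what you want before starting an upgrade.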

Check API Deprecation Warnings

Kubernetes removes deprecated APIs on minor version bumps. Catch these before the upgrade breaks your manifests:

kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

No output means nothing in your cluster uses deprecated APIs. If you see entries, check the official upgrade documentation for migration guidance before proceeding.
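If the metric does return entries, each line carries the offending API group, resource, and removal release as Prometheus labels. A small informal filter (our own sketch, extracting each label by name so it does not depend on label order) can turn those into readable summaries:

```shell
# Sketch: condense apiserver_requested_deprecated_apis metric lines into
# "group/resource (removed in X)" summaries.

summarize_deprecated() {
  local line g r rel
  while IFS= read -r line; do
    case "$line" in apiserver_requested_deprecated_apis*) ;; *) continue ;; esac
    g=$(printf '%s' "$line" | grep -o 'group="[^"]*"' | cut -d'"' -f2)
    # [,{] anchor avoids matching the subresource="" label:
    r=$(printf '%s' "$line" | grep -o '[,{]resource="[^"]*"' | cut -d'"' -f2)
    rel=$(printf '%s' "$line" | grep -o 'removed_release="[^"]*"' | cut -d'"' -f2)
    echo "$g/$r (removed in $rel)"
  done
}

# kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis | summarize_deprecated
```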

Back Up etcd

This is the single most important step. If the upgrade fails catastrophically, an etcd snapshot is your recovery path. Full instructions are in our etcd backup and restore guide. The short version:

sudo ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup-pre-upgrade.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

Verify the snapshot is valid:

sudo ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup-pre-upgrade.db --write-out=table

The output confirms the snapshot hash, revision, total keys, and size:

+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| 8a3c92f1 |    14892 |       1247 |     5.1 MB |
+----------+----------+------------+------------+

Copy this snapshot off the node to a safe location. A backup that only exists on the machine you’re upgrading isn’t really a backup.
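One way to get the snapshot off-node is sketched below. The `backup-host` name and `/var/backups/etcd` path are placeholders for your own backup target; only the filename helper is concrete:

```shell
# Sketch: give the snapshot a timestamped name and copy it off-node.
# "backup-host" and /var/backups/etcd are assumptions; substitute your own.

backup_name() {
  # Build a unique filename like etcd-backup-pre-upgrade-20260401-093000.db
  echo "etcd-backup-$1-$(date +%Y%m%d-%H%M%S).db"
}

# NAME=$(backup_name pre-upgrade)
# sudo cp /opt/etcd-backup-pre-upgrade.db "/tmp/$NAME"
# scp "/tmp/$NAME" backup@backup-host:/var/backups/etcd/
```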

Record Current Workload State

Document what’s running so you can verify nothing was lost after the upgrade:

kubectl get deployments -A
kubectl get configmaps -A
kubectl get services -A

In our test cluster, we had 3 nginx pods running via a Deployment, a ConfigMap with version=v1.34, and a ClusterIP Service. All of these should survive the upgrade unchanged.

Upgrade kubeadm on the Control Plane

The upgrade starts with kubeadm itself. You need the v1.35 version of kubeadm to orchestrate the control plane upgrade. All commands in this section run on the control plane node (cp01, IP 10.0.1.10).

Add the Kubernetes v1.35 package repository:

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes-v1.35.list

Update the package index and install kubeadm 1.35.3:

sudo apt update
sudo apt-cache madison kubeadm | head -5

Confirm version 1.35.3 is available in the repository:

   kubeadm | 1.35.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.35/deb/  Packages
   kubeadm | 1.35.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.35/deb/  Packages
   kubeadm | 1.35.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.35/deb/  Packages
   kubeadm | 1.35.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.35/deb/  Packages

Unhold kubeadm, install the target version, then hold it again to prevent accidental upgrades via apt upgrade:

sudo apt-mark unhold kubeadm
sudo apt install -y kubeadm=1.35.3-1.1
sudo apt-mark hold kubeadm
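To confirm the hold actually took effect, `apt-mark showhold` lists held packages, one per line. A small helper (ours, not an apt feature) can assert that every expected package appears:

```shell
# Sketch: verify that the given packages appear in `apt-mark showhold` output.

all_held() {
  # stdin: showhold output (one package per line); args: required packages.
  local held pkg
  held=$(cat)
  for pkg in "$@"; do
    printf '%s\n' "$held" | grep -qx "$pkg" || { echo "$pkg is not held" >&2; return 1; }
  done
}

# apt-mark showhold | all_held kubeadm
```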

Verify the installed version:

kubeadm version

The output should show v1.35.3:

kubeadm version: &version.Info{Major:"1", Minor:"35", GitVersion:"v1.35.3", GitCommit:"a1bc2d3e", GitTreeState:"clean", BuildDate:"2026-03-18T14:22:10Z", GoVersion:"go1.24.2", Compiler:"gc", Platform:"linux/amd64"}

Run kubeadm upgrade plan

This is a dry run. It checks what will change without modifying anything:

sudo kubeadm upgrade plan

The plan output shows exactly which components will be upgraded and to what version:

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.34.6
[upgrade/versions] kubeadm version: v1.35.3

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     2 x v1.34.6   v1.35.3

Upgrade to the latest stable version:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.34.6    v1.35.3
kube-controller-manager   v1.34.6    v1.35.3
kube-scheduler            v1.34.6    v1.35.3
kube-proxy                v1.34.6    v1.35.3
CoreDNS                   v1.12.1    v1.13.1
etcd                      3.6.5-0    3.6.6-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.35.3

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed.
To keep the current configuration, it is recommended to pass the --config flag during upgrade.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeadm.k8s.io            v1beta4           v1beta4             no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

Pay attention to the CoreDNS and etcd version bumps. CoreDNS goes from v1.12.1 to v1.13.1, and etcd from 3.6.5-0 to 3.6.6-0; both are handled automatically by kubeadm. If any component config shows "yes" under MANUAL UPGRADE REQUIRED, stop and address that before continuing.
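If you script upgrades, you can gate on that check. The pattern below is an informal sketch that greps the component-config table for a row ending in "yes":

```shell
# Sketch: abort scripted upgrades if `kubeadm upgrade plan` flags any
# component config as MANUAL UPGRADE REQUIRED (a trailing "yes" column).

needs_manual_upgrade() {
  grep -Eq '^[A-Za-z0-9._/-]+[[:space:]]+[^[:space:]]+[[:space:]]+[^[:space:]]+[[:space:]]+yes[[:space:]]*$'
}

# if sudo kubeadm upgrade plan | needs_manual_upgrade; then
#   echo "manual config upgrade required; stopping" >&2
#   exit 1
# fi
```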

Upgrade the Control Plane

This is the point of no return (well, almost: you still have that etcd backup). Apply the upgrade:

sudo kubeadm upgrade apply v1.35.3

kubeadm performs the upgrade in stages, showing progress for each component. The full run takes about two minutes:

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.35.3"
[upgrade/versions] Cluster version: v1.34.6
[upgrade/versions] kubeadm version: v1.35.3
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.35.3" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Upgrading etcd from 3.6.5-0 to 3.6.6-0
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and target manifest to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.35.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

That SUCCESS! message confirms the control plane components are all at their target versions: kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy at v1.35.3, etcd at 3.6.6-0, and CoreDNS at v1.13.1. The kubelet on the control plane node is still at v1.34.6 though.

Upgrade kubelet and kubectl on the Control Plane

The control plane components are upgraded, but the kubelet and kubectl binaries on the node itself still need updating. This is a separate step because kubeadm doesn’t manage these packages directly.

sudo apt-mark unhold kubelet kubectl
sudo apt install -y kubelet=1.35.3-1.1 kubectl=1.35.3-1.1
sudo apt-mark hold kubelet kubectl

Restart the kubelet to pick up the new binary:

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Verify the Control Plane Upgrade

Check node versions. At this point, you should see a mixed-version cluster:

kubectl get nodes

The control plane is at v1.35.3 while the worker is still at v1.34.6:

NAME       STATUS   ROLES           AGE   VERSION
cp01       Ready    control-plane   45d   v1.35.3
worker01   Ready    <none>          45d   v1.34.6

This mixed state is expected and safe. The version skew policy allows kubelet to be up to 3 minor versions behind the API server. Confirm the API server version:

kubectl version

Server version should report v1.35.3:

Client Version: v1.35.3
Server Version: v1.35.3

Workloads should be unaffected at this point. The nginx pods we deployed earlier are still running because we haven’t touched the worker node yet.

Drain the Worker Node

Before upgrading a worker, drain it to safely evict all pods. Kubernetes will reschedule them on other available nodes (in our case, the control plane can temporarily host them if tolerations allow, or they will wait in Pending).

Run this from the control plane node:

kubectl drain worker01 --ignore-daemonsets --delete-emptydir-data

The drain output shows each pod being evicted:

node/worker01 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-x4k2j
evicting pod default/nginx-deployment-6b7f6db5c7-abc12
evicting pod default/nginx-deployment-6b7f6db5c7-def34
pod/nginx-deployment-6b7f6db5c7-abc12 evicted
pod/nginx-deployment-6b7f6db5c7-def34 evicted
node/worker01 drained

Two pods were evicted from the worker. The third nginx pod was already running on the control plane node. In a production cluster with multiple workers, drained pods reschedule onto other workers automatically, giving you zero downtime.
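If you run PodDisruptionBudgets, a drain can block indefinitely waiting for eviction. A bounded variant is safer in automation; the wrapper function is ours, but `--timeout` is a standard `kubectl drain` flag:

```shell
# Sketch: drain with a timeout so a blocking PodDisruptionBudget surfaces
# as a failure instead of hanging the script.

drain_node() {
  # Usage: drain_node <node> [timeout]
  local node="$1" timeout="${2:-5m}"
  kubectl drain "$node" \
    --ignore-daemonsets \
    --delete-emptydir-data \
    --timeout="$timeout"
}

# drain_node worker01 10m || echo "drain timed out; check PDBs" >&2
```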

Verify the node is cordoned:

kubectl get nodes

The worker shows SchedulingDisabled:

NAME       STATUS                     ROLES           AGE   VERSION
cp01       Ready                      control-plane   45d   v1.35.3
worker01   Ready,SchedulingDisabled   <none>          45d   v1.34.6

Upgrade the Worker Node

SSH into the worker node (10.0.1.11) and perform the same package upgrades. The kubeadm upgrade process on workers is simpler because there are no control plane components to manage.

Add the v1.35 repository and upgrade kubeadm:

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes-v1.35.list
sudo apt update
sudo apt-mark unhold kubeadm
sudo apt install -y kubeadm=1.35.3-1.1
sudo apt-mark hold kubeadm

Run the node upgrade. On worker nodes, use kubeadm upgrade node instead of kubeadm upgrade apply:

sudo kubeadm upgrade node

This updates the kubelet configuration and local component manifests:

[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Now upgrade kubelet and kubectl, then restart the service:

sudo apt-mark unhold kubelet kubectl
sudo apt install -y kubelet=1.35.3-1.1 kubectl=1.35.3-1.1
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet

Uncordon the Worker Node

Back on the control plane node, mark the worker as schedulable again:

kubectl uncordon worker01

The node is ready to accept pods:

node/worker01 uncordoned

Post-Upgrade Verification

The upgrade is technically done, but you’re not finished until you verify everything survived. This catches subtle issues that won’t show up in kubeadm upgrade apply output.

All Nodes at Target Version

kubectl get nodes -o wide

Both nodes now report v1.35.3:

NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
cp01       Ready    control-plane   45d   v1.35.3   10.0.1.10     <none>        Ubuntu 24.04.4 LTS   6.8.0-55-generic   containerd://2.2.2
worker01   Ready    <none>          45d   v1.35.3   10.0.1.11     <none>        Ubuntu 24.04.4 LTS   6.8.0-55-generic   containerd://2.2.2

Workloads Running

Check that all pods are back to Running state:

kubectl get pods -o wide

All 3 nginx replicas are running. Some pods have new names because the drain evicted them and the Deployment recreated replacements:

NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE
nginx-deployment-6b7f6db5c7-ghi56   1/1     Running   0          4m12s   10.244.1.15   worker01
nginx-deployment-6b7f6db5c7-jkl78   1/1     Running   0          4m12s   10.244.1.16   worker01
nginx-deployment-6b7f6db5c7-mno90   1/1     Running   0          18m     10.244.0.8    cp01
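On larger clusters, eyeballing pod lists doesn't scale. A quick informal filter (assuming the default `kubectl get pods --no-headers` column order) prints only problem pods:

```shell
# Sketch: print pods whose STATUS column is not Running or Completed.
# Column layout assumed: NAME READY STATUS RESTARTS AGE [...]

not_running() {
  awk '$3 != "Running" && $3 != "Completed" { print $1 ": " $3 }'
}

# kubectl get pods --no-headers | not_running
# (no output means every pod is healthy)
```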

ConfigMaps and Secrets Preserved

Verify application data survived the upgrade. Our test ConfigMap should still have its original value:

kubectl get configmap test-config -o yaml

The version: v1.34 value is intact, confirming etcd data was preserved through the upgrade:

apiVersion: v1
data:
  version: v1.34
kind: ConfigMap
metadata:
  name: test-config
  namespace: default

System Components Healthy

Confirm CoreDNS, kube-proxy, and other system pods are running the new versions:

kubectl get pods -n kube-system -o wide

All system components should show Running with zero restarts (or minimal restarts from the upgrade process). CoreDNS pods should be running v1.13.1.

Test DNS resolution from within the cluster to make sure CoreDNS is functioning after its version bump:

kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- nslookup kubernetes.default

A successful lookup confirms cluster DNS is working:

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
pod "dns-test" deleted

Services Reachable

If you have Services exposed, verify they still route traffic correctly:

kubectl get svc

The ClusterIP and any NodePort/LoadBalancer services should show the same IPs and ports as before the upgrade. Test connectivity with curl from within the cluster to confirm end-to-end traffic flow.

Rollback Considerations

Kubernetes does not officially support downgrading a cluster with kubeadm. Once kubeadm upgrade apply succeeds, there is no kubeadm downgrade command to roll the control plane back.

Your options if something goes wrong:

  • Before kubeadm upgrade apply: Simply reinstall the old kubeadm version. Nothing has changed yet
  • After control plane upgrade, before worker upgrades: The cluster is in a mixed state that is supported by the version skew policy. You can leave workers at the old version while investigating
  • After full upgrade: Restore from the etcd backup you took (you did take one, right?) and reinstall the old Kubernetes version. This is disruptive and should be a last resort
  • Workload issues: Most problems after an upgrade are API deprecations, not infrastructure failures. Fix your manifests to use the current API versions rather than rolling back the cluster

In practice, kubeadm upgrades between adjacent minor versions rarely break. The test upgrade from 1.34.6 to 1.35.3 completed with zero issues and zero workload downtime. Problems typically arise when skipping versions or running custom admission webhooks that depend on specific API behavior.

Multi-Node Cluster Upgrade Order

For clusters with multiple control plane nodes and workers (set up following our HA cluster guide), follow this order:

  1. Upgrade kubeadm on the first control plane node
  2. Run kubeadm upgrade apply v1.35.3 on the first control plane node
  3. Upgrade kubelet/kubectl on the first control plane node
  4. On additional control plane nodes: upgrade kubeadm, run sudo kubeadm upgrade node (not apply), upgrade kubelet/kubectl
  5. Drain, upgrade, and uncordon each worker node one at a time

Only the first control plane node runs kubeadm upgrade apply. All other nodes, whether control plane or worker, use kubeadm upgrade node. For clusters with Cilium CNI or custom RBAC policies, verify those still work after the control plane upgrade before proceeding with workers.
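The per-worker loop in step 5 can be scripted. This is a sketch only: the `admin` SSH user, the worker names, and the fail-fast behavior are assumptions to adapt for your environment; the remote commands mirror the ones shown earlier in this guide.

```shell
# Sketch: drain, upgrade, and uncordon one worker, then move to the next.
# "admin" and the worker names are placeholders.

upgrade_worker() {
  local node="$1"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data || return 1
  ssh "admin@$node" '
    sudo apt-mark unhold kubeadm kubelet kubectl &&
    sudo apt install -y kubeadm=1.35.3-1.1 &&
    sudo kubeadm upgrade node &&
    sudo apt install -y kubelet=1.35.3-1.1 kubectl=1.35.3-1.1 &&
    sudo apt-mark hold kubeadm kubelet kubectl &&
    sudo systemctl daemon-reload &&
    sudo systemctl restart kubelet
  ' || return 1
  kubectl uncordon "$node"
}

# for w in worker01 worker02 worker03; do
#   upgrade_worker "$w" || { echo "stopping at $w" >&2; break; }
# done
```

Stopping on the first failure keeps the blast radius to a single cordoned node while you investigate.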

Kubernetes 1.34 vs 1.35: Key Changes

Component / Feature         Kubernetes 1.34   Kubernetes 1.35
etcd                        3.6.5-0           3.6.6-0
CoreDNS                     v1.12.1           v1.13.1
Default container runtime   containerd 2.x    containerd 2.x
Go version                  go1.23.x          go1.24.x
kubeadm config API          v1beta4           v1beta4
kubelet config API          v1beta1           v1beta1
Minimum supported kernel    5.4+              5.4+
Upgrade path support        From 1.33.x       From 1.34.x

Always check the official release history for the complete changelog, including graduated features, deprecated APIs, and known issues specific to your version jump.

Post-Upgrade Checklist

Run through this after every Kubernetes upgrade:

  • All nodes show the target version in kubectl get nodes
  • System pods in kube-system are all Running with no CrashLoopBackOff
  • DNS resolution works from inside pods
  • Existing Deployments, StatefulSets, and DaemonSets have the expected replica count
  • Services are reachable and routing traffic correctly
  • PersistentVolumes are bound and accessible
  • Ingress controllers are serving traffic (if applicable)
  • Custom admission webhooks are responding (if applicable)
  • Monitoring and logging agents are collecting data
  • ConfigMaps and Secrets contain their original data
  • CronJobs fire on schedule after the upgrade
  • Remove the old Kubernetes apt repository file: sudo rm /etc/apt/sources.list.d/kubernetes-v1.34.list
