Flatcar Container Linux is a minimal, immutable Linux distribution built specifically for running containers. It ships with containerd pre-installed, applies automated atomic updates, and has a read-only root filesystem – making it an ideal node OS for Kubernetes clusters. This guide walks through deploying a multi-node Kubernetes cluster on KVM/libvirt using Flatcar Container Linux as the node operating system, with Butane/Ignition for provisioning, kubeadm for cluster bootstrapping, and Calico for pod networking.

We will set up one control plane node and two worker nodes as KVM virtual machines, configure them with Ignition configs generated from Butane YAML, install Kubernetes components, and verify the running cluster. The setup also covers automated OS updates using the Flatcar Linux Update Operator (FLUO).

What is Flatcar Container Linux?

Flatcar Container Linux is a community-driven fork of the discontinued CoreOS Container Linux, now maintained by Microsoft. It is designed from the ground up for container workloads with these key characteristics:

  • Immutable infrastructure – The root filesystem is read-only. System updates are applied atomically to a secondary partition, and a reboot switches to the new version. If something breaks, the system rolls back automatically
  • Minimal attack surface – No package manager, no unnecessary services. The OS contains only what is needed to run containers
  • Containerd pre-installed – Ships with containerd as the container runtime, which is what Kubernetes uses natively
  • Ignition-based provisioning – Machines are configured at first boot using Ignition configs (JSON), which are generated from human-readable Butane YAML files
  • Automated updates – Three release channels (Stable, Beta, Alpha) with automatic A/B partition updates coordinated by Nebraska or the Flatcar Linux Update Operator

Prerequisites

  • A Linux host (Ubuntu 22.04/24.04, Debian 12, RHEL 9, or Rocky Linux 9) with at least 16GB RAM and 8 CPU cores
  • KVM/libvirt installed and running – if you need to set this up, follow our guide on installing KVM on CentOS / RHEL / Ubuntu / Debian
  • Hardware virtualization enabled in BIOS (Intel VT-x or AMD-V)
  • Tools installed on the KVM host: virt-install, virsh, qemu-img, wget, butane
  • An SSH key pair for accessing the VMs (the default user on Flatcar is core)
  • Root or sudo access on the KVM host

Our lab setup uses these nodes:

 Role            Hostname       IP Address       vCPUs   RAM
 ------------------------------------------------------------
 Control Plane   k8s-cp01       192.168.122.10   2       4GB
 Worker Node 1   k8s-worker01   192.168.122.11   2       4GB
 Worker Node 2   k8s-worker02   192.168.122.12   2       4GB
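Since the same three hostnames and IP addresses recur in almost every loop in this guide, it can be convenient to keep the inventory in one shell variable. This is purely a helper sketch (not required by the setup); the names and IPs mirror the table above:

```shell
# Node inventory for the lab; one "name:ip" entry per node.
NODES="k8s-cp01:192.168.122.10 k8s-worker01:192.168.122.11 k8s-worker02:192.168.122.12"

# Example: iterate over the inventory, splitting each entry on the colon.
for entry in $NODES; do
  echo "node=${entry%%:*} ip=${entry##*:}"
done
```

The `${entry%%:*}` / `${entry##*:}` expansions are POSIX parameter expansion, so the loop works in plain sh as well as bash.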

Step 1: Verify KVM/Libvirt Setup

Confirm that hardware virtualization extensions are enabled on the host.

$ grep -cE 'vmx|svm' /proc/cpuinfo
8

A value greater than 0 means virtualization is supported. On Debian/Ubuntu systems (with the cpu-checker package installed), you can also run:

$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

On RHEL-based systems, use:

$ sudo virt-host-validate qemu
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS

Verify that libvirtd is running.

$ sudo systemctl status libvirtd
● libvirtd.service - Virtualization daemon
     Active: active (running)

Check the default libvirt network is active. This guide uses the default NAT network (192.168.122.0/24), but you can create a custom bridged network if your nodes need to be accessible from the physical LAN.

$ sudo virsh net-list
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

Step 2: Download Flatcar Container Linux Image

Flatcar has three release channels: Stable, Beta, and Alpha. For production Kubernetes clusters, always use the Stable channel. Download the QEMU image and verify it.

$ cd /var/lib/libvirt/images/
$ sudo wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
$ sudo wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2.sig

Verify the GPG signature to ensure the image has not been tampered with.

$ gpg --keyserver hkps://keys.openpgp.org --recv-keys 84C8E771C0DF67E5D1B432E164B1562C2AE3CB36
$ gpg --verify flatcar_production_qemu_image.img.bz2.sig
gpg: Good signature from "Flatcar Buildbot (Official Builds)"

Decompress the image.

$ sudo bunzip2 flatcar_production_qemu_image.img.bz2

The base image ships with a small virtual disk. Resize it to give each VM enough disk space for container images and Kubernetes components – Flatcar automatically grows its root partition to fill the disk at boot.

$ sudo qemu-img resize /var/lib/libvirt/images/flatcar_production_qemu_image.img +30G
Image resized.

Create per-node disk copies from this base image. Each VM gets its own independent, full copy of the disk.

$ for node in k8s-cp01 k8s-worker01 k8s-worker02; do
  sudo cp /var/lib/libvirt/images/flatcar_production_qemu_image.img /var/lib/libvirt/images/${node}.img
done
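If host disk space is tight, qcow2 backing-file overlays are a common alternative to full copies: each per-node disk then stores only its own writes on top of the shared, read-only base image. This is a sketch, not part of the original procedure – the commands are echoed for review (drop the echo to run them), and it assumes the base image is never modified after the overlays are created:

```shell
# Print (review-then-run) one qemu-img command per node. Each command creates
# a qcow2 overlay whose backing file is the shared Flatcar base image.
base=/var/lib/libvirt/images/flatcar_production_qemu_image.img
for node in k8s-cp01 k8s-worker01 k8s-worker02; do
  echo sudo qemu-img create -f qcow2 -b "$base" -F qcow2 \
    "/var/lib/libvirt/images/${node}.img"
done
```

The overlays land at the same paths the virt-install commands below expect, so the rest of the guide is unchanged.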

Step 3: Install Butane and Generate Ignition Configs

Butane is a tool that converts human-readable YAML into Ignition JSON configs. Flatcar reads the Ignition config on first boot and provisions the entire machine – users, SSH keys, systemd units, files, and network settings.

Install Butane

Download the latest Butane binary on your KVM host.

$ BUTANE_VER=$(curl -s https://api.github.com/repos/coreos/butane/releases/latest | grep tag_name | cut -d '"' -f4)
$ sudo wget -O /usr/local/bin/butane "https://github.com/coreos/butane/releases/download/${BUTANE_VER}/butane-x86_64-unknown-linux-gnu"
$ sudo chmod +x /usr/local/bin/butane
$ butane --version

Create Butane config for the control plane node

The Butane config below sets the hostname, injects your SSH public key, loads required kernel modules for Kubernetes, sets sysctl parameters for networking, and creates a systemd service that installs kubeadm, kubelet, and kubectl on first boot. Replace the ssh_authorized_keys value with your actual public key.

Create the file k8s-cp01.bu on the KVM host.

variant: flatcar
version: 1.1.0

passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... your-key-here"

storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: k8s-cp01
    - path: /etc/modules-load.d/k8s.conf
      mode: 0644
      contents:
        inline: |
          overlay
          br_netfilter
    - path: /etc/sysctl.d/k8s.conf
      mode: 0644
      contents:
        inline: |
          net.bridge.bridge-nf-call-iptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.ipv4.ip_forward = 1
    - path: /opt/bin/install-k8s.sh
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          set -euo pipefail
          KUBE_VERSION="v1.31.4"
          CNI_VERSION="v1.6.2"
          CRICTL_VERSION="v1.31.0"
          INSTALL_DIR="/opt/bin"

          mkdir -p /opt/cni/bin /etc/kubernetes/manifests

          # Install crictl
          if [ ! -f ${INSTALL_DIR}/crictl ]; then
            curl -sL "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C ${INSTALL_DIR} -xz
          fi

          # Install CNI plugins
          if [ ! -f /opt/cni/bin/bridge ]; then
            curl -sL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
          fi

          # Install kubeadm, kubelet, kubectl
          for bin in kubeadm kubelet kubectl; do
            if [ ! -f ${INSTALL_DIR}/${bin} ]; then
              curl -sL "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/${bin}" -o ${INSTALL_DIR}/${bin}
              chmod +x ${INSTALL_DIR}/${bin}
            fi
          done

          # Install kubelet service files
          RELEASE_URL="https://raw.githubusercontent.com/kubernetes/release/master/cmd/krel/templates/latest"
          curl -sL "${RELEASE_URL}/kubelet/kubelet.service" | sed 's:/usr/bin:/opt/bin:g' > /etc/systemd/system/kubelet.service
          mkdir -p /etc/systemd/system/kubelet.service.d
          curl -sL "${RELEASE_URL}/kubeadm/10-kubeadm.conf" | sed 's:/usr/bin:/opt/bin:g' > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

          systemctl daemon-reload
          systemctl enable --now kubelet

systemd:
  units:
    - name: install-k8s.service
      enabled: true
      contents: |
        [Unit]
        Description=Install Kubernetes binaries
        Wants=network-online.target
        After=network-online.target
        ConditionPathExists=!/opt/bin/kubeadm

        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/opt/bin/install-k8s.sh

        [Install]
        WantedBy=multi-user.target
    - name: load-k8s-modules.service
      enabled: true
      contents: |
        [Unit]
        Description=Load kernel modules for Kubernetes
        Before=install-k8s.service

        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/sbin/modprobe overlay
        ExecStart=/sbin/modprobe br_netfilter
        ExecStart=/sbin/sysctl --system

        [Install]
        WantedBy=multi-user.target

Create Butane configs for worker nodes

The worker node configs are almost identical to the control plane config. Strictly speaking, workers do not need kubectl, but for simplicity we use the same install script on all nodes, so the only real difference is the hostname. Create k8s-worker01.bu and k8s-worker02.bu – each one identical to the control plane config above except for the hostname value.

For k8s-worker01.bu, change the hostname line to:

    - path: /etc/hostname
      mode: 0644
      contents:
        inline: k8s-worker01

For k8s-worker02.bu, set it to k8s-worker02. All other sections remain the same.
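Rather than editing by hand, the two worker files can be derived from the control-plane config with sed, since the hostname is the only occurrence of the string k8s-cp01 in the file. A small sketch, assuming k8s-cp01.bu is in the current directory (the loop skips silently if it is not):

```shell
# Generate k8s-worker01.bu and k8s-worker02.bu by substituting the hostname
# in a copy of the control-plane Butane config.
for worker in k8s-worker01 k8s-worker02; do
  [ -f k8s-cp01.bu ] || continue
  sed "s/k8s-cp01/${worker}/g" k8s-cp01.bu > "${worker}.bu"
done
```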

Generate Ignition JSON from Butane YAML

Convert each Butane file to an Ignition config.

$ butane --strict --pretty k8s-cp01.bu -o k8s-cp01.ign
$ butane --strict --pretty k8s-worker01.bu -o k8s-worker01.ign
$ butane --strict --pretty k8s-worker02.bu -o k8s-worker02.ign

The --strict flag makes Butane reject any config with warnings. Verify each file is valid JSON.

$ python3 -m json.tool k8s-cp01.ign > /dev/null && echo "Valid JSON"
Valid JSON

Step 4: Create Flatcar VMs with virt-install

Copy the Ignition configs to a location accessible by libvirt, then create each VM using virt-install. The --qemu-commandline flag exposes the Ignition file to the guest through the QEMU firmware config (fw_cfg) interface, where Flatcar's Ignition reads it on first boot.

Copy the Ignition files.

$ sudo mkdir -p /var/lib/libvirt/ignition
$ sudo cp k8s-cp01.ign k8s-worker01.ign k8s-worker02.ign /var/lib/libvirt/ignition/

Create the control plane VM.

$ sudo virt-install \
  --name k8s-cp01 \
  --ram 4096 \
  --vcpus 2 \
  --os-variant generic \
  --import \
  --disk path=/var/lib/libvirt/images/k8s-cp01.img,format=qcow2 \
  --network network=default \
  --graphics none \
  --noautoconsole \
  --qemu-commandline='-fw_cfg name=opt/org.flatcar-linux/config,file=/var/lib/libvirt/ignition/k8s-cp01.ign'

Create the first worker node.

$ sudo virt-install \
  --name k8s-worker01 \
  --ram 4096 \
  --vcpus 2 \
  --os-variant generic \
  --import \
  --disk path=/var/lib/libvirt/images/k8s-worker01.img,format=qcow2 \
  --network network=default \
  --graphics none \
  --noautoconsole \
  --qemu-commandline='-fw_cfg name=opt/org.flatcar-linux/config,file=/var/lib/libvirt/ignition/k8s-worker01.ign'

Create the second worker node.

$ sudo virt-install \
  --name k8s-worker02 \
  --ram 4096 \
  --vcpus 2 \
  --os-variant generic \
  --import \
  --disk path=/var/lib/libvirt/images/k8s-worker02.img,format=qcow2 \
  --network network=default \
  --graphics none \
  --noautoconsole \
  --qemu-commandline='-fw_cfg name=opt/org.flatcar-linux/config,file=/var/lib/libvirt/ignition/k8s-worker02.ign'
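Since the three invocations differ only in the node name, they collapse naturally into a loop. The version below echoes each command for review rather than running it (drop the echo to execute, and note that the --qemu-commandline value must stay quoted when you run it by hand):

```shell
# Print one virt-install command per node; every flag matches the three
# invocations shown above, with the node name substituted throughout.
for node in k8s-cp01 k8s-worker01 k8s-worker02; do
  echo sudo virt-install \
    --name "$node" \
    --ram 4096 \
    --vcpus 2 \
    --os-variant generic \
    --import \
    --disk "path=/var/lib/libvirt/images/${node}.img,format=qcow2" \
    --network network=default \
    --graphics none \
    --noautoconsole \
    --qemu-commandline="-fw_cfg name=opt/org.flatcar-linux/config,file=/var/lib/libvirt/ignition/${node}.ign"
done
```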

Verify all three VMs are running.

$ sudo virsh list
 Id   Name           State
-------------------------------
 1    k8s-cp01       running
 2    k8s-worker01   running
 3    k8s-worker02   running

Set VMs to start automatically on host boot.

$ for vm in k8s-cp01 k8s-worker01 k8s-worker02; do
  sudo virsh autostart ${vm}
done

Get the IP addresses assigned to each VM by the DHCP service on the default network.

$ sudo virsh domifaddr k8s-cp01
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:a1:b2:c3    ipv4         192.168.122.10/24

$ sudo virsh domifaddr k8s-worker01
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet1      52:54:00:d4:e5:f6    ipv4         192.168.122.11/24

$ sudo virsh domifaddr k8s-worker02
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet2      52:54:00:a7:b8:c9    ipv4         192.168.122.12/24

For consistent IP addresses, configure static DHCP leases in the libvirt network or set static IPs via a networkd unit in the Butane config. To add static DHCP entries to the default network:

$ sudo virsh net-update default add ip-dhcp-host \
  '<host mac="52:54:00:a1:b2:c3" name="k8s-cp01" ip="192.168.122.10"/>' \
  --live --config
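The other route mentioned above – static IPs baked into the node itself – works by dropping a systemd-networkd unit in via Butane. A sketch for k8s-cp01 to add under the storage.files section (the interface name, gateway, and DNS values are assumptions for the default libvirt NAT network; adjust Name= to the interface Flatcar detects, typically eth0, and the address per node):

```yaml
    - path: /etc/systemd/network/10-static.network
      mode: 0644
      contents:
        inline: |
          [Match]
          Name=eth0

          [Network]
          Address=192.168.122.10/24
          Gateway=192.168.122.1
          DNS=192.168.122.1
```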

Test SSH connectivity to each node.

$ ssh [email protected] "cat /etc/os-release | head -3"
NAME="Flatcar Container Linux by Kinvolk"
ID=flatcar
ID_LIKE=coreos
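Before moving on to cluster bootstrapping, it is worth confirming that all three nodes answer over SSH. A small loop sketch (BatchMode avoids hanging on a password prompt; nodes are simply reported as reachable or not):

```shell
# Probe each node's SSH endpoint; failures are reported, not fatal.
for ip in 192.168.122.10 192.168.122.11 192.168.122.12; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "core@${ip}" true 2>/dev/null; then
    echo "reachable   ${ip}"
  else
    echo "unreachable ${ip}"
  fi
done
```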

Step 5: Initialize the Kubernetes Control Plane with kubeadm

SSH into the control plane node and wait for the Kubernetes install script to complete. The install-k8s.service systemd unit runs on first boot and downloads all binaries.

$ ssh [email protected]
core@k8s-cp01 ~ $ systemctl status install-k8s.service
● install-k8s.service - Install Kubernetes binaries
     Active: active (exited)
     Status: "Install complete"

Verify the binaries are in place.

core@k8s-cp01 ~ $ kubeadm version -o short
v1.31.4
core@k8s-cp01 ~ $ kubelet --version
Kubernetes v1.31.4

Confirm that containerd is running. Flatcar ships with containerd as part of the base OS.

core@k8s-cp01 ~ $ sudo systemctl status containerd
● containerd.service - containerd container runtime
     Active: active (running)

Initialize the Kubernetes control plane. The --pod-network-cidr value matches what Calico expects by default.

core@k8s-cp01 ~ $ sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --kubernetes-version=v1.31.4 \
  --control-plane-endpoint=192.168.122.10:6443

After a few minutes, you will see output similar to this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then you can join any number of worker nodes by running the following on each:

kubeadm join 192.168.122.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e6a105c...

Save the kubeadm join command – you will need it in the next step. Set up kubectl access for the core user.

core@k8s-cp01 ~ $ mkdir -p $HOME/.kube
core@k8s-cp01 ~ $ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
core@k8s-cp01 ~ $ sudo chown core:core $HOME/.kube/config

Check that the API server is responding.

core@k8s-cp01 ~ $ kubectl cluster-info
Kubernetes control plane is running at https://192.168.122.10:6443
CoreDNS is running at https://192.168.122.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

The control plane node will show as NotReady until a CNI plugin is installed.

core@k8s-cp01 ~ $ kubectl get nodes
NAME       STATUS     ROLES           AGE   VERSION
k8s-cp01   NotReady   control-plane   90s   v1.31.4

Step 6: Install Calico CNI Plugin

A CNI (Container Network Interface) plugin provides pod-to-pod networking across nodes. Calico is one of the most widely deployed options, offering both networking and network policy enforcement. If you prefer Cilium, check our guide on installing Cilium CNI in Kubernetes.

Install the Calico operator and custom resources.

core@k8s-cp01 ~ $ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/tigera-operator.yaml

Apply the Calico custom resources, which tell the operator to deploy Calico itself.

core@k8s-cp01 ~ $ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/custom-resources.yaml

Watch Calico pods come up. It takes about 2-3 minutes for all components to become ready.

core@k8s-cp01 ~ $ kubectl get pods -n calico-system -w
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6b4b5f9d9f-x7jkl   1/1     Running   0          2m
calico-node-abc12                          1/1     Running   0          2m
calico-typha-5c4d8f7b9-mn4pq               1/1     Running   0          2m
csi-node-driver-d8f4k                      2/2     Running   0          2m

After Calico is running, the control plane node should transition to Ready.

core@k8s-cp01 ~ $ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
k8s-cp01   Ready    control-plane   5m    v1.31.4

Step 7: Join Worker Nodes to the Cluster

SSH into each worker node and run the kubeadm join command that was printed during the control plane initialization. First, verify that the Kubernetes binaries were installed successfully on the worker.

$ ssh [email protected]
core@k8s-worker01 ~ $ systemctl status install-k8s.service
● install-k8s.service - Install Kubernetes binaries
     Active: active (exited)

Run the join command with sudo.

core@k8s-worker01 ~ $ sudo kubeadm join 192.168.122.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:e6a105c...

You should see:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Repeat the same process on the second worker node (192.168.122.12). If the join token has expired (tokens are valid for 24 hours), generate a new one from the control plane.

core@k8s-cp01 ~ $ kubeadm token create --print-join-command

Step 8: Verify the Kubernetes Cluster

Back on the control plane node, confirm all nodes are Ready. For more details on managing containerd containers using ctr and crictl, see our dedicated guide.

core@k8s-cp01 ~ $ kubectl get nodes -o wide
NAME            STATUS   ROLES           AGE   VERSION   INTERNAL-IP      OS-IMAGE                                     KERNEL-VERSION    CONTAINER-RUNTIME
k8s-cp01        Ready    control-plane   10m   v1.31.4   192.168.122.10   Flatcar Container Linux by Kinvolk 4152.1.0   6.6.63-flatcar    containerd://1.7.24
k8s-worker01    Ready    <none>          5m    v1.31.4   192.168.122.11   Flatcar Container Linux by Kinvolk 4152.1.0   6.6.63-flatcar    containerd://1.7.24
k8s-worker02    Ready    <none>          3m    v1.31.4   192.168.122.12   Flatcar Container Linux by Kinvolk 4152.1.0   6.6.63-flatcar    containerd://1.7.24

Check that all system pods are running.

core@k8s-cp01 ~ $ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-7db6d8ff4d-2x9kl           1/1     Running   0          10m
coredns-7db6d8ff4d-p4r8t           1/1     Running   0          10m
etcd-k8s-cp01                      1/1     Running   0          10m
kube-apiserver-k8s-cp01            1/1     Running   0          10m
kube-controller-manager-k8s-cp01   1/1     Running   0          10m
kube-proxy-abc12                   1/1     Running   0          10m
kube-proxy-def34                   1/1     Running   0          5m
kube-proxy-ghi56                   1/1     Running   0          3m
kube-scheduler-k8s-cp01            1/1     Running   0          10m

Deploy a test application to verify pod scheduling and networking.

core@k8s-cp01 ~ $ kubectl create deployment nginx-test --image=nginx:latest --replicas=3
core@k8s-cp01 ~ $ kubectl expose deployment nginx-test --port=80 --type=NodePort

Wait for all pods to become ready.

core@k8s-cp01 ~ $ kubectl get pods -l app=nginx-test -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE
nginx-test-7c5b8d6fd4-4k2ml   1/1     Running   0          30s   192.168.1.65   k8s-worker01
nginx-test-7c5b8d6fd4-8p9nz   1/1     Running   0          30s   192.168.2.33   k8s-worker02
nginx-test-7c5b8d6fd4-q3r7s   1/1     Running   0          30s   192.168.1.66   k8s-worker01

Get the NodePort and test access.

core@k8s-cp01 ~ $ kubectl get svc nginx-test
NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-test   NodePort   10.96.45.120   <none>        80:31234/TCP   15s

core@k8s-cp01 ~ $ curl -s http://192.168.122.11:31234 | head -4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

Clean up the test deployment.

core@k8s-cp01 ~ $ kubectl delete deployment nginx-test
core@k8s-cp01 ~ $ kubectl delete svc nginx-test

Step 9: Configure Automatic OS Updates with FLUO

Flatcar Container Linux updates itself automatically by downloading a new OS image to an inactive partition and rebooting. In a Kubernetes cluster, uncoordinated reboots would cause service disruptions. The Flatcar Linux Update Operator (FLUO) solves this by coordinating reboots – it cordons and drains a node before reboot, then uncordons it after the node comes back.

FLUO consists of two components:

  • flatcar-linux-update-operator – runs as a Deployment and watches for nodes that need rebooting
  • update-agent – runs as a DaemonSet on every node and signals when an update has been downloaded and a reboot is pending

First, set the update strategy on each Flatcar node to off so the OS does not reboot on its own. SSH into each node and run:

core@k8s-cp01 ~ $ echo "REBOOT_STRATEGY=off" | sudo tee /etc/flatcar/update.conf
REBOOT_STRATEGY=off

Alternatively, include this in your Butane config under the storage.files section for all nodes:

    - path: /etc/flatcar/update.conf
      mode: 0644
      contents:
        inline: |
          REBOOT_STRATEGY=off

Deploy FLUO from the official manifest.

core@k8s-cp01 ~ $ kubectl apply -f https://raw.githubusercontent.com/flatcar/flatcar-linux-update-operator/master/examples/deploy/update-operator.yaml
core@k8s-cp01 ~ $ kubectl apply -f https://raw.githubusercontent.com/flatcar/flatcar-linux-update-operator/master/examples/deploy/update-agent.yaml

Verify the operator and agents are running.

core@k8s-cp01 ~ $ kubectl get pods -n reboot-coordinator
NAME                                             READY   STATUS    RESTARTS   AGE
flatcar-linux-update-operator-6d4b8c7f9-m2kxn    1/1     Running   0          30s
update-agent-4p7rk                                1/1     Running   0          30s
update-agent-8x2nl                                1/1     Running   0          30s
update-agent-w9d3m                                1/1     Running   0          30s

When Flatcar downloads an update, the update-agent annotates the node with a reboot-needed flag. The operator then drains the node, allows the reboot, and uncordons it once it is back. This ensures only one node reboots at a time and workloads are gracefully moved before the reboot happens.

You can check the current Flatcar update status on any node.

core@k8s-cp01 ~ $ update_engine_client -status
LAST_CHECKED_TIME=1710842400
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_IDLE
NEW_VERSION=0.0.0
NEW_SIZE=0

Using Nebraska for Update Management

For larger deployments, Nebraska provides a web-based dashboard to manage Flatcar update channels, track which nodes have updated, and control rollout speed. Nebraska is an open-source update server compatible with the Omaha protocol that Flatcar uses for updates.

Nebraska can be deployed as a container on a separate management host. It allows you to:

  • Pin specific Flatcar versions for your cluster
  • Control the rollout percentage of new versions
  • Monitor update progress across all nodes
  • Pause updates during maintenance windows

To point Flatcar nodes at a Nebraska instance, add the server URL to the update config in your Butane file:

    - path: /etc/flatcar/update.conf
      mode: 0644
      contents:
        inline: |
          SERVER=https://nebraska.example.com/v1/update/
          REBOOT_STRATEGY=off

Troubleshooting Common Issues

If a node stays in NotReady state after joining, SSH into it and check the kubelet logs.

core@k8s-worker01 ~ $ sudo journalctl -u kubelet --no-pager -n 20

Common causes include the containerd socket not being available (check sudo systemctl status containerd), missing CNI binaries in /opt/cni/bin/, or the install-k8s service not completing due to network issues. If the install service failed, you can re-run it manually.

core@k8s-worker01 ~ $ sudo /opt/bin/install-k8s.sh

If pods are stuck in Pending or ContainerCreating state while the nodes appear Ready, check whether the Calico pods on the affected node are running – without a working calico-node agent, pods scheduled there cannot get a network interface. If the kubeadm join token has expired, generate a new one from the control plane as shown in Step 7. For containerd configuration issues, the config file is at /etc/containerd/config.toml on Flatcar and can be adjusted if you need to add private registry mirrors or change the sandbox image.
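The checks above can be bundled into a small read-only triage script that reports, item by item, whether the binaries, CNI plugins, and kernel modules the install service is supposed to provide are actually present on the node. A diagnostic sketch (it only prints OK/MISS and changes nothing):

```shell
#!/usr/bin/env bash
# Quick NotReady triage: verify the paths and kernel modules that the
# Butane config from Step 3 is supposed to lay down on every node.
for path in /opt/bin/kubeadm /opt/bin/kubelet /opt/cni/bin/bridge; do
  if [ -x "$path" ]; then echo "OK   $path"; else echo "MISS $path"; fi
done
for mod in overlay br_netfilter; do
  if grep -qw "$mod" /proc/modules 2>/dev/null; then
    echo "OK   module $mod"
  else
    echo "MISS module $mod"
  fi
done
```

Any MISS line points directly at the failed step: missing binaries mean install-k8s.service did not complete, missing modules mean the modprobe unit did not run.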

Conclusion

We deployed a Kubernetes cluster running on Flatcar Container Linux VMs provisioned via KVM/libvirt. The nodes were configured using Butane/Ignition for declarative machine setup, Kubernetes components were installed using kubeadm, and Calico provides pod networking. The Flatcar Linux Update Operator handles coordinated OS updates without disrupting running workloads.

For production use, consider adding multiple control plane nodes for high availability, configuring persistent storage with a CSI driver, deploying an ingress controller for external traffic, and setting up monitoring with Prometheus and Grafana. If you need to run Flatcar on other platforms, see our guides on running Flatcar on OpenStack or installing Flatcar on VMware ESXi/vCenter.
