Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes is designed to run distributed applications across clusters of hosts, providing mechanisms for container scheduling, scaling, service discovery, load balancing, and more.

Kubernetes has become a fundamental technology in the world of cloud-native computing and has gained significant popularity and adoption in recent years. Some of the key benefits brought by this tool to the tech world are portability, flexibility, scalability, high availability, and automation of various tasks such as container deployment, scaling, and load balancing.

There are several popular distributions of Kubernetes available that provide additional features, management tools, and support services on top of the core Kubernetes platform. Some of the well-known Kubernetes distributions include: Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), VMware Tanzu, Rancher, and Red Hat OpenShift. Today we will learn how to set up a multi-node Kubernetes cluster using Talos Container Linux.

What is Talos Container Linux?

Talos is a modern Linux distribution built from scratch for distributed systems such as Kubernetes, with the goal of providing a minimal, container-optimized, and secure platform.

Talos Linux fills a niche once occupied by CoreOS Container Linux, a container-focused distribution that was widely used in the Kubernetes ecosystem. After CoreOS was acquired by Red Hat in 2018, the focus shifted towards integrating CoreOS technologies into Red Hat’s portfolio, and Container Linux was eventually retired.

Talos Linux is developed as an open-source project by Sidero Labs (formerly Talos Systems) to fill that gap with a lightweight, specialized Linux distribution for Kubernetes. Talos takes a unique approach by prioritizing minimalism and practicality, resulting in a set of distinctive features:

  • Immutable: It operates on the principle of immutability, meaning that its core components are not modifiable once deployed. This approach enhances system stability and reduces the risk of unintended changes.
  • Atomic: It adopts an atomic design, ensuring that system updates and changes are applied as a single, indivisible unit. This approach simplifies management and ensures consistent system behaviour.
  • Ephemeral: It treats its instances as ephemeral entities, which means they can be easily created, replaced, or terminated. This flexibility enables efficient scaling and dynamic allocation of resources.
  • Minimal: It follows a minimalistic approach, striving to provide the necessary functionality while minimizing resource consumption and complexity. This simplicity makes it easier to manage and reduces attack vectors.
  • Secure by default: It prioritizes security by implementing secure defaults and configurations out of the box. This approach helps protect the system from potential vulnerabilities and ensures a more secure deployment.
  • Single declarative configuration: It is managed through a single declarative configuration file and a gRPC API. This unified management approach simplifies administration and allows for easy automation and integration with other tools.
  • Platform compatibility: It can be deployed on various platforms, including container runtimes, cloud environments, virtualized infrastructures, and bare metal servers. This flexibility enables deployment across a wide range of environments and infrastructure choices.
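To illustrate the declarative model, below is a heavily trimmed sketch of what a Talos machine configuration looks like. The field names follow the Talos v1alpha1 config schema, but all values shown here are placeholders, not working credentials:

```yaml
# Sketch of a Talos machine configuration (v1alpha1), trimmed for illustration.
# All values are placeholders.
version: v1alpha1
machine:
  type: controlplane          # or "worker"
  token: <machine-token>      # placeholder, generated for you by talosctl
  network:
    hostname: master.example.com
  install:
    disk: /dev/sda            # disk Talos installs itself to
cluster:
  clusterName: my-cluster
  controlPlane:
    endpoint: https://192.168.200.105:6443
```

In practice you never write this file by hand from scratch; talosctl generates complete configs for you, as we will see later in this guide.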

System Requirements

To spin up the Kubernetes cluster, you need to set up Talos Linux machines that meet the hardware requirements below.

Minimum Requirements:

Task            Memory   Cores   System Disk
Control Plane   2 GiB    2       10 GiB
Worker          1 GiB    1       10 GiB

Recommended:

Task            Memory   Cores   System Disk
Control Plane   4 GiB    4       100 GiB
Worker          2 GiB    2       100 GiB
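For the three-node topology used in this guide (1 control plane node plus 2 workers), the minimum requirements above add up as follows; a quick sanity check before provisioning:

```shell
# Total minimum resources for 1 control plane node + 2 worker nodes,
# using the "Minimum Requirements" table above.
cp_mem=2;  cp_cores=2;  cp_disk=10    # control plane: 2 GiB, 2 cores, 10 GiB
wk_mem=1;  wk_cores=1;  wk_disk=10    # each worker:   1 GiB, 1 core,  10 GiB
total_mem=$(( cp_mem + 2 * wk_mem ))
total_cores=$(( cp_cores + 2 * wk_cores ))
total_disk=$(( cp_disk + 2 * wk_disk ))
echo "${total_mem} GiB RAM, ${total_cores} cores, ${total_disk} GiB disk minimum"
```

So the host machine should have at least 4 GiB of free RAM, 4 cores, and 30 GiB of disk to run the minimum configuration.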

Aside from running Talos Linux on bare metal, you can also run it in a virtualized environment with Hyper-V, KVM, Proxmox, VMware, Xen, etc.

1. Install talosctl on your system

talosctl is a CLI tool that makes interfacing with the Talos API easy. Before we proceed, we need to install it. The command for that is:

curl -sL https://talos.dev/install | sh

Sample Output:

[Screenshot: talosctl installer output]

2. Set up the Talos Linux Nodes

For this guide, we will work with 3 Talos Linux nodes (1 master node and 2 worker nodes). I will be using KVM to run the VMs, but any of the hypervisors mentioned above will work.

First, download the ISO file from their GitHub Releases page. As of this guide, the latest release was v1.4.0.

The ISO files can be pulled using Wget as shown:

## For amd64
VER=$(curl -s https://api.github.com/repos/siderolabs/talos/releases/latest|grep tag_name|cut -d '"' -f 4|sed 's/v//')
wget https://github.com/siderolabs/talos/releases/download/v${VER}/talos-amd64.iso

## For arm64 (reuses the VER variable set above)
wget https://github.com/siderolabs/talos/releases/download/v${VER}/talos-arm64.iso
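The VER pipeline above simply pulls the tag_name field out of the GitHub API's JSON response and strips the leading "v". A minimal sketch of how it works, using a hypothetical one-line sample of that response:

```shell
# Hypothetical one-line sample of the GitHub "latest release" API response
json='{"tag_name": "v1.4.0", "assets": []}'
# Same pipeline as above: the 4th double-quote-delimited field is the tag value,
# then sed drops the leading "v"
VER=$(printf '%s\n' "$json" | grep tag_name | cut -d '"' -f 4 | sed 's/v//')
echo "$VER"
```

This prints 1.4.0 for the sample above. Note that the approach depends on the JSON field ordering, which is why tools like jq are preferred for parsing JSON in production scripts.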

Once downloaded, you can create the Talos Linux nodes on your desired hypervisor. You can also do some automation with Vagrant & Libvirt for KVM or Terraform.

The steps for creating a VM that meets the desired specifications are similar across hypervisors. One thing you need to ensure is that the network assigned to the VM has internet access, since container images must be pulled when spinning up the Kubernetes cluster.

For VirtualBox, you can automate the VM creation with Vagrant. First, verify that Vagrant is installed:

$ vagrant --version
Vagrant x.y.z

You can then provision the VMs on VirtualBox using the Vagrantfile below. Note that it expects the downloaded ISO to be available at /tmp/talos-amd64.iso, so copy it there first.

Vagrant.configure("2") do |config|
  ## Master node (the "ubuntu/bionic64" box is only a placeholder; the VM boots from the attached Talos ISO)
  config.vm.define "control-plane-node-1" do |vm|
    vm.vm.box = "ubuntu/bionic64"
    vm.vm.provider :virtualbox do |vb|
      vb.memory = 4096
      vb.cpus = 2
      vb.customize ['modifyvm', :id, '--nic1', 'bridged', '--bridgeadapter1', 'ens18']
      vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', '1', '--device', '0', '--type', 'dvddrive', '--medium', '/tmp/talos-amd64.iso']
      vb.customize ["createmedium", "disk", "--filename", "master1_disk.vdi", "--format", "VDI", "--size", "20096"]
      vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
      vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', '0', '--device', '0', '--type', 'hdd', '--medium', 'master1_disk.vdi']
    end
  end

  ## Worker Node1
  config.vm.define "worker1" do |vm|
    vm.vm.box = "ubuntu/bionic64"
    vm.vm.provider :virtualbox do |vb|
      vb.memory = 2048
      vb.cpus = 1
      vb.customize ['modifyvm', :id, '--nic1', 'bridged', '--bridgeadapter1', 'ens18']
      vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', '1', '--device', '0', '--type', 'dvddrive', '--medium', '/tmp/talos-amd64.iso']
      vb.customize ["createmedium", "disk", "--filename", "worker1_disk.vdi", "--format", "VDI", "--size", "20096"]
      vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
      vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', '0', '--device', '0', '--type', 'hdd', '--medium', 'worker1_disk.vdi']
    end
  end

  ## Worker Node2
  config.vm.define "worker2" do |vm|
    vm.vm.box = "ubuntu/bionic64"
    vm.vm.provider :virtualbox do |vb|
      vb.memory = 2048
      vb.cpus = 1
      vb.customize ['modifyvm', :id, '--nic1', 'bridged', '--bridgeadapter1', 'ens18']
      vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', '1', '--device', '0', '--type', 'dvddrive', '--medium', '/tmp/talos-amd64.iso']
      vb.customize ["createmedium", "disk", "--filename", "worker2_disk.vdi", "--format", "VDI", "--size", "20096"]
      vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
      vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', '0', '--device', '0', '--type', 'hdd', '--medium', 'worker2_disk.vdi']
    end
  end
end

Fire up the VMs on VirtualBox:

vagrant up control-plane-node-1 --provider=virtualbox
vagrant up worker1 --provider=virtualbox
vagrant up worker2 --provider=virtualbox

Vagrant expects to make an SSH connection to each VM after boot, which will never succeed because Talos does not run an SSH server. Start the VMs one at a time; once a VM has booted and Vagrant hangs retrying the SSH connection, press CTRL+C and start the next one.

Once all the VMs have been started, view the status.

$ vagrant status
Current machine states:

control-plane-node-1      running (virtualbox)
worker1                   running (virtualbox)
worker2                   running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Once the VMs are started, they boot into live (maintenance) mode; Talos does not install anything to the hard disk until a machine configuration is applied.

[Screenshot: Talos booted into maintenance mode]

Press F3 to configure the network: set the hostname, a static IP address, and DNS servers.

[Screenshot: Talos network configuration screen]

Save the configurations and proceed to spin the Kubernetes cluster as shown below.

3. Create Kubernetes Cluster using talosctl

I have the below configurations for my environment:

Task           Hostname                        IP address
master         master.computingforgeeks.com    192.168.200.105
worker node1   worker1.computingforgeeks.com   192.168.200.106
worker node2   worker2.computingforgeeks.com   192.168.200.107
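If you have no DNS records for these hostnames, you can map them locally on your workstation in /etc/hosts. A sketch for the example environment above; adjust the names and IPs to match yours:

```
# /etc/hosts entries for the example environment
192.168.200.105   master.computingforgeeks.com
192.168.200.106   worker1.computingforgeeks.com
192.168.200.107   worker2.computingforgeeks.com
```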

Using the CLI tool, we can create machine configs and use them for installing Talos and Kubernetes. Using the IP/Domain name of the load balancer/controller node, generate the base configuration files for the VMs.

For this guide, we will use the master node IP (192.168.200.105), but in production environments with multiple control plane nodes this should be the IP of a load balancer.

Generate the secret:

talosctl gen secrets -o secrets.yaml

Now create the configuration files. The command has the below syntax:

talosctl gen config --with-secrets secrets.yaml <cluster-name> <cluster-endpoint>

For example:

talosctl gen config --with-secrets secrets.yaml my-cluster https://192.168.200.105:6443 \
    --output-dir _out 

The _out directory is the output path for the created files. After this, you will have the configs generated:

generating PKI and tokens
Created _out/controlplane.yaml
Created _out/worker.yaml
Created _out/talosconfig

The generated files serve different purposes. controlplane.yaml and worker.yaml are the machine configurations that will be applied to the control plane and worker nodes respectively, while talosconfig is a YAML-based configuration file used on the local client side. It contains the credentials, endpoints, and node settings that tailor the behaviour of the talosctl client.
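For reference, the generated talosconfig has roughly the following shape; the certificate fields hold base64-encoded PKI material and are shown here as placeholders:

```yaml
# Sketch of a generated talosconfig; certificate values are placeholders.
context: my-cluster
contexts:
  my-cluster:
    endpoints: []        # filled in by "talosctl config endpoint"
    nodes: []            # filled in by "talosctl config node"
    ca: <base64-encoded CA certificate>
    crt: <base64-encoded client certificate>
    key: <base64-encoded client key>
```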

1. Start the Control Node

Now, using the controller YAML, we will fire up the control node. Before that, check the disks and note the device name, because the default configuration defines the installation disk as /dev/sda:

$ talosctl -n 192.168.200.105 disks --insecure
DEV        MODEL           SERIAL   TYPE   UUID   WWID   MODALIAS      NAME   SIZE    BUS_PATH                                                                   SUBSYSTEM          SYSTEM_DISK
/dev/sda   QEMU HARDDISK   -        HDD    -      -      scsi:t-0x00   -      22 GB   /pci0000:00/0000:00:01.1/0000:02:00.0/virtio1/host6/target6:0:0/6:0:0:0/   /sys/class/block 

The disk may be named differently on your setup; if so, modify the controlplane.yaml file to match:

    # Used to provide instructions for installations.
    install:
        disk: /dev/sda # The disk used for installations.

Save the file and fire up the control plane first:

talosctl apply-config --insecure -n 192.168.200.105 --file _out/controlplane.yaml

The above command can be repeated against additional nodes if you need a highly available (HA) control plane.

2. Run the Worker Nodes

Similar to the above process, you identify the disk on the worker nodes and make adjustments to the worker.yaml. Once the changes have been saved, we can fire up the worker nodes using the generated configurations:

talosctl apply-config --insecure -n 192.168.200.106 --file _out/worker.yaml
talosctl apply-config --insecure -n 192.168.200.107 --file _out/worker.yaml

3. Bootstrap Etcd

Now configure the talosctl client to talk to the control plane node using the generated talosconfig:

export CONTROL_PLANE_IP=192.168.200.105
export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP

With TALOSCONFIG exported, the --talosconfig flag can be omitted from subsequent talosctl commands.

Finally, bootstrap etcd on the control plane node:

talosctl --talosconfig _out/talosconfig bootstrap -n $CONTROL_PLANE_IP

4. Access Talos Powered Kubernetes Cluster

Once the cluster is up, you can access and use it as desired to run your containerized workloads. But first, obtain the admin kubeconfig:

talosctl --talosconfig _out/talosconfig kubeconfig .

Now install kubectl on your system with the commands:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin

Export the admin config:

mkdir -p $HOME/.kube
sudo cp -i kubeconfig $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now view the nodes in the cluster:

$ kubectl get nodes -o wide
NAME                            STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION   CONTAINER-RUNTIME
master.computingforgeeks.com    Ready    control-plane   2m10s   v1.27.1   192.168.200.105   <none>        Talos (v1.4.4)   6.1.28-talos     containerd://1.6.21
worker1.computingforgeeks.com   Ready    <none>          118s    v1.27.1   192.168.200.106   <none>        Talos (v1.4.4)   6.1.28-talos     containerd://1.6.21
worker2.computingforgeeks.com   Ready    <none>          2m20s   v1.27.1   192.168.200.107   <none>        Talos (v1.4.4)   6.1.28-talos     containerd://1.6.21

View the pods:

$ kubectl get pods -A
NAMESPACE     NAME                                                   READY   STATUS    RESTARTS        AGE
kube-system   coredns-d779cc7ff-k6dwc                                1/1     Running   0               3m4s
kube-system   coredns-d779cc7ff-wqj6x                                1/1     Running   0               3m4s
kube-system   kube-apiserver-master.computingforgeeks.com            1/1     Running   0               55s
kube-system   kube-controller-manager-master.computingforgeeks.com   1/1     Running   1 (3m21s ago)   2m2s
kube-system   kube-flannel-h9k89                                     1/1     Running   0               2m50s
kube-system   kube-flannel-jtwkm                                     1/1     Running   0               2m38s
kube-system   kube-flannel-v4b97                                     1/1     Running   0               3m
kube-system   kube-proxy-4hc2n                                       1/1     Running   0               3m
kube-system   kube-proxy-cd5jf                                       1/1     Running   0               2m50s
kube-system   kube-proxy-fh266                                       1/1     Running   0               2m38s
kube-system   kube-scheduler-master.computingforgeeks.com            1/1     Running   2 (3m19s ago)   104s

On the console, you should also see the nodes ready as shown:

[Screenshot: Talos console showing the nodes ready]

5. Deploy a Test Application on Kubernetes

To verify that the cluster is working properly, we can deploy a sample Nginx application using the below manifest:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF

View if the pods are running:

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-57d84f57dc-dkjs4   1/1     Running   0          34s
nginx-deployment-57d84f57dc-fd29c   1/1     Running   0          34s

Expose the app with NodePort:

$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed

Get the service port:

$ kubectl get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP        5m5s
nginx-deployment   NodePort    10.111.46.209   <none>        80:31721/TCP   11s
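The NodePort is the second number in the PORT(S) column (80:31721/TCP). If you ever need it in a script, it can be extracted from the kubectl output; a sketch using a hypothetical line matching the output above:

```shell
# Hypothetical line as printed by "kubectl get svc" above
line='nginx-deployment   NodePort    10.111.46.209   <none>        80:31721/TCP   11s'
# PORT(S) is the 5th column; the NodePort sits between ":" and "/"
nodeport=$(printf '%s\n' "$line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$nodeport"
```

In practice, `kubectl get svc nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}'` is the more robust way to fetch it, since it does not depend on column layout.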

You can now verify access to the app using the URL http://<NodeIP>:31721, where <NodeIP> is the IP address of any cluster node.

[Screenshot: Nginx welcome page served via the NodePort]

Closing Thoughts

This guide has provided a detailed illustration of how to set up a multi-node Kubernetes cluster using Talos Container Linux. Talos Linux is a modern, container-optimized environment that is well worth considering for Kubernetes deployments. I hope this was of great value to you.
