In this guide, I’ll take you through the steps to install and set up a working three-node Kubernetes cluster on Ubuntu 18.04 Bionic Beaver Linux. Kubernetes is an open-source container orchestration system used for automating the deployment, management, and scaling of containerized applications.

Kubernetes on Ubuntu 18.04 – System Diagram

This setup is based on the following layout: one master node (k8s-master, 192.168.2.2) and two worker nodes (k8s-node-01, 192.168.2.3 and k8s-node-02, 192.168.2.4).

Let’s configure system hostnames before proceeding to the next steps.

On Master Node:

Set the hostname as shown below:

$ sudo hostnamectl set-hostname k8s-master

On Worker Node 01:

Set the hostname using the hostnamectl command line tool.

$ sudo hostnamectl set-hostname k8s-node-01

On Worker Node 02:

Also set the hostname for Kubernetes worker node 02.

$ sudo hostnamectl set-hostname k8s-node-02

Once the correct hostname has been configured on each host, populate the /etc/hosts file on each node with the values configured.

$ cat /etc/hosts
192.168.2.2 k8s-master 
192.168.2.3 k8s-node-01 
192.168.2.4 k8s-node-02
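
You can optionally confirm that the names resolve correctly, for example by pinging the worker nodes from the master:

$ ping -c 2 k8s-node-01
$ ping -c 2 k8s-node-02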

Setup Kubernetes on Ubuntu 18.04 – Prerequisites (Run on all nodes)

Before doing any Kubernetes-specific configuration, let’s ensure all dependencies are satisfied. Here we will update the system and create a user to manage the Kubernetes cluster.

Update system packages to the latest release on all nodes:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install linux-image-extra-virtual
sudo reboot

Add user to manage Kubernetes cluster:

sudo useradd -s /bin/bash -m k8s-admin
sudo passwd k8s-admin
sudo usermod -aG sudo k8s-admin
echo "k8s-admin ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/k8s-admin

If you prefer entering a password when running sudo commands as the k8s-admin user, you can skip the last line. You can test that sudo works without a password prompt:

$ su - k8s-admin
k8s-admin@k8s-master:~$ sudo su -
root@k8s-master:~#

All looks good, let’s proceed to install Docker engine.

Setup Kubernetes on Ubuntu 18.04 – Install Docker Engine

Kubernetes requires Docker to run the containers used for hosting applications and other Kubernetes services. We have a comprehensive Docker installation guide:

How to install Docker CE on Ubuntu / Debian / Fedora / Arch / CentOS

If you need a quick installation guide, use the following commands to install Docker Engine on Ubuntu 18.04. First, ensure any old version of the Docker engine is uninstalled from your system:

sudo apt-get remove docker docker-engine docker.io

Install dependencies:

$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common

Import Docker repository GPG key:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

$ sudo add-apt-repository \
 "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) \
 stable"

Install docker:

sudo apt-get update
sudo apt-get install docker-ce
sudo usermod -aG docker k8s-admin
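
Before moving on, it may be worth confirming that the Docker service is running and checking the installed version (exact output varies by release):

$ sudo systemctl status docker
$ docker --version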

When Docker has been installed, you can continue to configure the Kubernetes master node.

Setup Kubernetes on Ubuntu 18.04 – Install and Configure Kubernetes Master

All commands in this section are meant to be run on the master node. Don’t execute any of them on the Kubernetes worker nodes. The Kubernetes master components provide the cluster’s control plane: API Server, Scheduler, and Controller Manager. They make global decisions about the cluster, such as scheduling, and detect and respond to cluster events.

Add Kubernetes repository

As of this writing, there is no official repository for Ubuntu 18.04, so we will add the repository for Ubuntu 16.04 (Xenial). I have tested it, and all packages and dependencies install fine. I’ll update this article when a repository for Ubuntu 18.04 is available.

# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Then import GPG key:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Update apt package index:

sudo apt update

Install Kubernetes Master Components

Install the Kubernetes master components: kubectl, kubelet, kubeadm, and kubernetes-cni:

sudo apt install kubectl kubelet kubeadm kubernetes-cni
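
Optionally, you can hold these packages at their installed versions so that an unattended apt upgrade does not update them unexpectedly:

sudo apt-mark hold kubelet kubeadm kubectl kubernetes-cni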

Confirm that all package binaries are present on the file system.

$ which kubelet
/usr/bin/kubelet
$ which kubeadm
/usr/bin/kubeadm

Kubernetes requires swap to be disabled. If swap is on, turn it off:

sudo swapoff -a
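
Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, also comment out any swap entries in /etc/fstab, for example (review your /etc/fstab before editing):

sudo sed -i '/ swap / s/^/#/' /etc/fstab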

Initialize Kubernetes Cluster:

When all Kubernetes packages have been installed, you’re ready to initialize the cluster using kubeadm command line tool.

Export required variables (Optional)

export API_ADDR=`ifconfig eth0 | grep 'inet' | cut -d':' -f2 | awk '{print $1}'`
export DNS_DOMAIN="k8s.local"
export POD_NET="10.4.0.0/16"
export SRV_NET="10.5.0.0/16"
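
Note that ifconfig is not installed by default on Ubuntu 18.04 (it ships with the net-tools package), and its output format differs from older releases, so the pipeline above may not extract the address cleanly. As a sketch, assuming the interface is named eth0, you can use the ip command instead, or simply set API_ADDR manually to the master IP from the layout above:

export API_ADDR=$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')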

Then initialize the Kubernetes cluster using variables defined above:

kubeadm init --pod-network-cidr ${POD_NET} --service-cidr ${SRV_NET} \
--service-dns-domain "${DNS_DOMAIN}" --apiserver-advertise-address ${API_ADDR}

If all goes well, you should get a success message with the instructions of what to do next:

---
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.2.2:6443 --token 9y4vc8.h7jdjle1xdovrd0z --discovery-token-ca-cert-hash sha256:cff9d1444a56b24b4a8839ff3330ab7177065c90753ef3e4e614566695db273c

Configure Access for k8s-admin user on the Master server

Switch to the k8s-admin user and copy the Kubernetes configuration file containing the cluster information.

su - k8s-admin
mkdir -p $HOME/.k8s
sudo cp -i /etc/kubernetes/admin.conf $HOME/.k8s/config
sudo chown $(id -u):$(id -g) $HOME/.k8s/config
export KUBECONFIG=$HOME/.k8s/config
echo "export KUBECONFIG=$HOME/.k8s/config" | tee -a ~/.bashrc
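
At this point kubectl should be able to reach the API server as the k8s-admin user. A quick optional sanity check:

kubectl cluster-info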

Deploy Weave Net Pod Network to the Cluster (Run as a normal user)

Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery. Services provided by application containers on the Weave network can be exposed to the outside world, regardless of where they are running.

Weave Net can be installed onto your CNI-enabled Kubernetes cluster with a single command:

# su - k8s-admin
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created

After a few seconds, a Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.

k8s-admin@k8s-master:~$ kubectl get pod -n kube-system | grep weave
weave-net-d9v5v                          2/2       Running   0          11h
weave-net-mhp46                          2/2       Running   0          11h
weave-net-vmksr                          2/2       Running   0          11h

Setup Kubernetes Worker Nodes

When the Kubernetes cluster has been initialized and the master node is online, start the worker node configuration. A node is a worker machine in Kubernetes; it may be a VM or a physical machine. Each node is managed by the master and runs the services necessary to host pods: docker, kubelet, and kube-proxy.

Step 1: Ensure Docker is installed (covered)

Ensure the Docker engine is installed on all worker nodes. Refer to the Docker installation section above.

Step 2: Add Kubernetes repository (covered)

Ensure that the repository for Kubernetes packages is added to the system, as described in the repository section above.

Step 3: Install Kubernetes components

Once you’ve added Kubernetes repository, install components using:

sudo apt install kubelet kubeadm kubectl kubernetes-cni
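
As on the master, the kubelet expects swap to be disabled on each worker node as well:

sudo swapoff -a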

Step 4: Join the Node to the Cluster:

Use the join command given after initializing the Kubernetes cluster, e.g.:

kubeadm join 192.168.2.2:6443 --token 9y4vc8.h7jdjle1xdovrd0z \
 --discovery-token-ca-cert-hash sha256:cff9d1444a56b24b4a8839ff3330ab7177065c90753ef3e4e614566695db273c

---
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node-02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
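
If you no longer have the original join command, you can print a fresh one on the master (the token and hash will differ from the example above):

kubeadm token create --print-join-command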

When done, check the node status on the master:

k8s-admin@k8s-master:~$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
k8s-master    Ready     master    35m       v1.11.0
k8s-node-01   Ready     <none>    2m        v1.11.0
k8s-node-02   Ready     <none>    1m        v1.11.0
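
The worker nodes show <none> under ROLES by default. If you want a friendlier label, you can optionally add one; this is purely cosmetic and does not affect scheduling:

kubectl label node k8s-node-01 node-role.kubernetes.io/worker=worker
kubectl label node k8s-node-02 node-role.kubernetes.io/worker=worker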

On the two worker nodes, the Weave Net interface should now be configured.

root@k8s-node-01:~# ip ad | grep weave
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    inet 10.44.0.0/12 brd 10.47.255.255 scope global weave
9: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default

root@k8s-node-02:~# ip ad | grep weave
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    inet 10.47.0.0/12 brd 10.47.255.255 scope global weave
9: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default

Test Kubernetes Deployment

Let us create a test deployment to confirm that our cluster is running as expected. Save the following manifest as http-app-deployment.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: http-app
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: http-app
    spec:
      containers:
      - name: http-app
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80

Deploy it to the cluster

Create a test namespace:

$ kubectl create namespace test-namespace
namespace/test-namespace created

After the namespace is created, create the pods using the deployment object defined earlier. The -n flag is used to specify the namespace. We expect three pods to be created since our replicas value is 3.

$ kubectl create -n test-namespace -f http-app-deployment.yml
deployment.extensions/http-app created

Confirm:

$ kubectl -n test-namespace get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
http-app   3         3         3            3           1m

$ kubectl -n test-namespace get pods
NAME                       READY     STATUS    RESTARTS   AGE
http-app-97f76fcd8-68pxg   1/1       Running   0          1m
http-app-97f76fcd8-f9bdk   1/1       Running   0          1m
http-app-97f76fcd8-vgmq7   1/1       Running   0          1m

You can see that the http-app deployment is live.

With the deployment created, we can use kubectl to create a service which exposes the Pods on a particular port. An alternative method is defining a Service object with YAML. Below is our service definition.

$ cat http-app-service.yml
apiVersion: v1
kind: Service
metadata:
  name: http-app-svc
  labels:
    app: http-app
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: http-app

Create a service using kubectl command:

$ kubectl -n test-namespace create -f http-app-service.yml 
service/http-app-svc created

This NodePort service is reachable on each node’s IP address at port 30080, and inside the cluster at its cluster IP on port 80. To see the service details, use:

$ kubectl -n test-namespace get svc
NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
http-app-svc   NodePort   10.5.45.208   <none>        80:30080/TCP   1m
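
To verify the service end to end, you can curl the NodePort on any of the nodes from the layout above; the request should be answered by one of the http-app pods:

$ curl http://192.168.2.3:30080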

Kubernetes on Ubuntu 18.04 – Post-Installation

Enable shell autocompletion for kubectl commands. kubectl includes autocompletion support, which can save a lot of typing! To enable shell completion in your current session, run:

source <(kubectl completion bash)

To add kubectl autocompletion to your profile so that it is automatically loaded in future shells, run:

echo "source <(kubectl completion bash)" >> ~/.bashrc

If you are using zsh, edit the ~/.zshrc file and add the following code to enable kubectl autocompletion:

if [ $commands[kubectl] ]; then
source <(kubectl completion zsh)
fi

Or, when using Oh-My-Zsh, edit the ~/.zshrc file and update the plugins= line to include the kubectl plugin, for example:

plugins=(git kubectl)

Also relevant: Top command for container metrics.

Conclusion

We have successfully deployed a three-node Kubernetes cluster on Ubuntu 18.04 LTS servers. Our next guides will cover Kubernetes HA, Kubernetes monitoring, how to configure external storage, and more cool stuff. Stay tuned!