Photon OS is a free and open-source Linux operating system developed by VMware and designed for containerized environments. It is optimized to run on cloud platforms and on vSphere, providing a small, secure, and stable foundation for running containerized applications.

Photon OS is stripped down to only the essential components needed to run containerized workloads. This minimizes the attack surface and eliminates unnecessary packages or libraries that would increase the image size or introduce security vulnerabilities. Among its key advantages are security features such as SELinux and AppArmor, which help protect the system against malicious activity. In addition, Photon OS boots quickly and uses few resources, making it well suited to containerized environments.

Photon OS supports both traditional package management systems like YUM and newer container-focused package managers like tdnf, giving users the flexibility to choose the tool that works best for their needs. With its focus on lightweight design and security, Photon OS is a popular choice for cloud-native applications and DevOps teams.

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications, and it is widely used for cloud-native workloads. Photon OS's small footprint and minimal resource usage make it an ideal platform for running Kubernetes, helping to reduce the overhead of managing a cluster. Furthermore, Photon OS packages the essential tools and libraries needed for Kubernetes workloads, such as the Docker runtime and the Kubernetes CLI, making it easy to set up and manage clusters. Its minimalist design and security features, such as SELinux and AppArmor, also help protect both the containerized applications and the host system against malicious activity.

Install Kubernetes Cluster using Photon OS

This guide will demonstrate how to deploy a Kubernetes cluster using Photon OS. You need two or more machines with Photon OS installed. For this tutorial, we will have 3 servers configured as shown below:

TASK                                  HOSTNAME                        IP ADDRESS
Master node (2 cores and above)       master.computingforgeeks.com    192.168.200.61
Worker node 1 (2 cores and above)     worker1.computingforgeeks.com   192.168.200.62
Worker node 2 (2 cores and above)     worker2.computingforgeeks.com   192.168.200.63

The Photon master runs the kube-apiserver, kube-controller-manager, and kube-scheduler. It also runs etcd unless etcd is configured on separate servers. The Photon worker nodes are responsible for running the kubelet, kube-proxy, and Docker.

With the environment set up, proceed as shown below.

1. Prepare the Photon OS Nodes

The full installation of Photon OS comes with all the packages required to run a Kubernetes cluster, but on minimal installations you have to install the required packages manually on both the master and worker nodes.

Begin by installing Docker and the other required packages on all the nodes:

sudo -i
tdnf install -y vim docker kubernetes-kubeadm apparmor-parser

Proceed and install iptables:

tdnf install iptables

If you get an "iptables: command not found" error, add /usr/sbin to your PATH:

echo "export PATH=\$PATH:/usr/sbin" | sudo tee -a /etc/profile
source /etc/profile

Open ports for ping, etcd, kubernetes and calico on the photon master in the firewall:

##Photon Master
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 2379:2380 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 10250:10252 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 179 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 4789 -j ACCEPT
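For reference, the five TCP rules above can also be generated with a small loop. This sketch only prints the commands so you can review them first; pipe the output to sh as root to apply them:

```shell
# Print the master-node ACCEPT rules for review (etcd 2379-2380,
# kube-apiserver 6443, kubelet/controller/scheduler 10250-10252,
# BGP 179, VXLAN 4789); pipe to `sh` as root to apply.
for port in 2379:2380 6443 10250:10252 179 4789; do
  echo "iptables -A INPUT -p tcp -m tcp --dport $port -j ACCEPT"
done
```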

Save the rules so that Photon OS's iptables service restores them at boot:

iptables-save > /etc/systemd/scripts/ip4save

On the photon worker node, open ports:

##Photon Worker
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 10250:10252 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 30000:32767 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 179 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 4789 -j ACCEPT

Save the changes:

iptables-save > /etc/systemd/scripts/ip4save

Configure /etc/hosts on all the nodes to make sure the nodes can reach each other by name:

# vim /etc/hosts
192.168.200.61    master.computingforgeeks.com
192.168.200.62    worker1.computingforgeeks.com
192.168.200.63    worker2.computingforgeeks.com

2. Enable IPv4 IP forwarding on Photon OS

The next step is to enable IPv4 forwarding and iptables filtering on bridge devices. To achieve that, create the file below on all the nodes:

vim /etc/sysctl.d/kubernetes.conf

Add the below lines to it:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

Load the br_netfilter module with the command:

modprobe br_netfilter

Now apply the changes:

sysctl --system
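You can read the forwarding flag back from the kernel to confirm the change took effect; on a configured node this prints 1 (`sysctl net.ipv4.ip_forward` shows the same value):

```shell
# Read the IPv4 forwarding flag directly from /proc; 1 means enabled.
cat /proc/sys/net/ipv4/ip_forward
```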

3. Configure Containerd Runtime on Photon OS

On all the nodes, we need to install and configure crictl, the command-line client for CRI-compatible container runtimes such as containerd. It can be installed with the command:

tdnf install -y cri-tools

Once installed, create the below config:

# vim /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
disable-pull-on-run: false

We also need to modify the containerd configuration file:

vim /etc/containerd/config.toml

In the file, add the CRI plugin section shown below (the commented [grpc] and [debug] blocks are part of the default configuration):

#[grpc]
#  address = "/run/containerd/containerd.sock"
#  uid = 0
#  gid = 0
 
[plugins."io.containerd.grpc.v1.cri"]
  enable_selinux = true
  [plugins."io.containerd.grpc.v1.cri".containerd]
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true

#[debug]
#  address = "/run/containerd/debug.sock"
#  uid = 0
#  gid = 0
#  level = "info"
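The SystemdCgroup = true setting matters because containerd and the kubelet must agree on a cgroup driver. kubeadm defaults the kubelet to the systemd driver (since v1.22), which shows up in the kubelet configuration it generates, for example:

```yaml
# Fragment of /var/lib/kubelet/config.yaml as written by kubeadm;
# shown only to illustrate that the kubelet side matches SystemdCgroup = true.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

If the two drivers disagree, pods fail with cgroup-related errors shortly after the kubelet starts.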

Save the changes and restart the service:

systemctl daemon-reload
systemctl restart containerd
systemctl enable containerd.service

Check the status of the service:

# systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: disabled)
     Active: active (running) since Thu 2023-05-11 08:51:39 UTC; 8s ago
       Docs: https://containerd.io
   Main PID: 857 (containerd)
      Tasks: 8
     Memory: 57.1M
        CPU: 114ms
     CGroup: /system.slice/containerd.service
             └─857 /usr/bin/containerd
....

You can confirm that containerd is running with the systemd cgroup driver:

# crictl info | grep -i cgroup | grep true
            "SystemdCgroup": true

4. Configure Kubernetes on Photon Master

For this guide, we will be using kubeadm to bootstrap the Kubernetes cluster on Photon OS. First, enable the kubelet service:

systemctl enable --now kubelet

Then pull all the required images for the Kubernetes cluster:

kubeadm config images pull

Now we can initialize the cluster:

##Normal Initialization
kubeadm init

#For Calico
kubeadm init --pod-network-cidr=192.168.0.0/16

#For Flannel/Canal
kubeadm init --pod-network-cidr=10.244.0.0/16

For this guide, we will initialize the cluster with Calico.
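If you prefer a declarative setup, the same Calico initialization can be expressed with a kubeadm configuration file (a sketch; the file name kubeadm-config.yaml is arbitrary):

```yaml
# kubeadm-config.yaml — equivalent of `kubeadm init --pod-network-cidr=192.168.0.0/16`
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
```

It is applied with `kubeadm init --config kubeadm-config.yaml` instead of passing flags.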

Sample Output:

# kubeadm init --pod-network-cidr=192.168.0.0/16
.....
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.61:6443 --token ey0ll8.77ekmshbh2l2q4xz \
	--discovery-token-ca-cert-hash sha256:780016584a4eb29e2fa5c435422f5547e33892aae7be462028c15247af30c7e9 

From the above output, save the kubeadm join command with its token and sha256 discovery hash, since it will be used to add worker nodes later.
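If the join output is lost or the token expires (tokens are valid for 24 hours by default), you do not need to re-initialize the cluster: running `kubeadm token create --print-join-command` on the master prints a fresh join command, and the discovery hash can be recomputed from the cluster CA certificate using the standard openssl recipe from the Kubernetes documentation, sketched here as a helper function:

```shell
# ca_cert_hash: recompute the --discovery-token-ca-cert-hash value from a CA
# certificate. On the control plane the cert lives at /etc/kubernetes/pki/ca.crt.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# Example: ca_cert_hash /etc/kubernetes/pki/ca.crt
```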

Now export the Kubernetes configuration:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Untaint the control-plane node so that it can also schedule workloads (optional, and mainly useful for small clusters):

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Next, we need to install a network plugin on the master. Download the manifest for your preferred CNI using one of the commands below:

#Using canal
curl https://raw.githubusercontent.com/projectcalico/calico/master/manifests/canal.yaml -o network.yaml

#Using calico
curl https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml -o network.yaml

#Using flannel
curl https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml -o network.yaml

Once the manifest has been pulled, ensure the Docker daemon is running:

systemctl restart docker

Use Docker to pre-pull the CNI images, replacing vx.y.z in each tag with the version referenced in your network.yaml:

docker pull calico/cni:vx.y.z
docker pull calico/node:vx.y.z
docker pull flannelcni/flannel:vx.y.z
docker pull calico/kube-controllers:vx.y.z

Now apply the configuration:

kubectl apply -f network.yaml

The master node should be running by now. Check using the commands:

$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.200.61:6443
CoreDNS is running at https://192.168.200.61:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get nodes
NAME                           STATUS   ROLES           AGE     VERSION
master.computingforgeeks.com   Ready    control-plane   4m34s   v1.26.1

Check if all the pods are running:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                   READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57b57c56f-7pbdr                1/1     Running   0          46s
kube-system   calico-node-dm66b                                      0/1     Running   0          46s
kube-system   coredns-787d4945fb-6kp5s                               1/1     Running   0          4m43s
kube-system   coredns-787d4945fb-ng9k4                               1/1     Running   0          4m43s
kube-system   etcd-master.computingforgeeks.com                      1/1     Running   0          4m57s
kube-system   kube-apiserver-master.computingforgeeks.com            1/1     Running   0          4m57s
kube-system   kube-controller-manager-master.computingforgeeks.com   1/1     Running   0          4m57s
kube-system   kube-proxy-2kgkw                                       1/1     Running   0          4m43s
kube-system   kube-scheduler-master.computingforgeeks.com            1/1     Running   0          4m57s

5. Join Worker Nodes to the Cluster

Having followed all the above steps to configure the master and worker nodes, we can now join the worker nodes to the cluster. We will use the token generated by the kubeadm init command earlier on the master node.

First, pull the required images:

kubeadm config images pull

Join the node to the cluster. For example:

kubeadm join 192.168.200.61:6443 --token ey0ll8.77ekmshbh2l2q4xz \
	--discovery-token-ca-cert-hash sha256:780016584a4eb29e2fa5c435422f5547e33892aae7be462028c15247af30c7e9

Restart the Docker daemon:

systemctl restart docker

Pull the CNI images required by the network plugin pods:

docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
docker pull flannelcni/flannel:v0.16.3
docker pull calico/kube-controllers:v3.25.0

6. Test the Kubernetes Cluster

The cluster should now have all the nodes added. Check with the command:

$ kubectl get nodes
NAME                            STATUS   ROLES           AGE   VERSION
master.computingforgeeks.com    Ready    control-plane   14m   v1.26.1
worker1.computingforgeeks.com   Ready    <none>          46s   v1.26.1
worker2.computingforgeeks.com   Ready    <none>          43s   v1.26.1

Now, to test whether the cluster is working as desired, we can deploy a test application. For this guide, we will use a simple hello-world pod.

Create the manifest:

vim hello.yaml

The file will have the below lines:

apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: projects.registry.vmware.com/photon/photon4:latest
    command: ["/bin/bash"]
    args: ["-c", "echo Hello Kubernetes"]

Save the file and apply the manifest:

kubectl apply -f hello.yaml

Check if the pod is running:

$ kubectl get pods
NAME    READY   STATUS      RESTARTS   AGE
hello   0/1     Completed   0          24s

Follow the pod logs:

$ kubectl logs hello | grep "Hello Kubernetes"
Hello Kubernetes

7. Deploy a Multi-Master Nodes Kubernetes Cluster

To deploy a multi-master Kubernetes cluster on Photon OS, you need to set up a load balancer, for example with HAProxy or Nginx, and then initialize the cluster with the command:

sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs

For example:

kubeadm init --control-plane-endpoint=192.168.200.61:6443 --upload-certs 
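Before running the command, the load balancer itself must exist. As an illustration, a minimal HAProxy TCP passthrough in front of the API servers could look like the fragment below (hypothetical: the backend names and the second master's address, 192.168.200.64, are placeholders for your environment):

```
# /etc/haproxy/haproxy.cfg (fragment) — TCP passthrough to the kube-apiservers
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    server master1 192.168.200.61:6443 check
    server master2 192.168.200.64:6443 check
```

With the load balancer in place, kubeadm init produces output like the sample below.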

Sample Output:

....
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.200.61:6443 --token npub5f.mm1zdol1u6bdjbg6 \
	--discovery-token-ca-cert-hash sha256:b6d931e9cf2e0f9e73a4a0d009123c4c32a82a1c5f94c38233564a8e531e3f40 \
	--control-plane --certificate-key 56bf3fc8b0cafddc821f1c59db6e721610ffb7efcb9fddfe37ab54a3c1f10fbb

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.61:6443 --token npub5f.mm1zdol1u6bdjbg6 \
	--discovery-token-ca-cert-hash sha256:b6d931e9cf2e0f9e73a4a0d009123c4c32a82a1c5f94c38233564a8e531e3f40 

You can now join additional master nodes with the provided command, after installing all the required components on each of them:

kubeadm join 192.168.200.61:6443 --token npub5f.mm1zdol1u6bdjbg6 \
	--discovery-token-ca-cert-hash sha256:b6d931e9cf2e0f9e73a4a0d009123c4c32a82a1c5f94c38233564a8e531e3f40 \
	--control-plane --certificate-key 56bf3fc8b0cafddc821f1c59db6e721610ffb7efcb9fddfe37ab54a3c1f10fbb

Sample Output:

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

You can now view the available nodes:

$ kubectl get nodes
NAME                            STATUS   ROLES           AGE     VERSION
master.computingforgeeks.com    Ready    control-plane   4m47s   v1.26.1
master2.computingforgeeks.com   Ready    control-plane   2m38s   v1.26.1
.....

To reset the cluster on any node, use:

kubeadm reset

Conclusion

Today, we have learned how to deploy a Kubernetes cluster using Photon OS. Photon OS's small footprint and minimal resource usage make it an ideal platform for running Kubernetes, and it includes the essential tools and libraries needed for Kubernetes workloads, such as the Docker runtime and the Kubernetes CLI. I hope this guide was helpful.
