Edge computing is a distributed IT architecture in which client data is processed at the periphery of the network, as close to the originating source as possible. In other words, it brings applications closer to the data sources or to local edge servers. This architecture delivers business benefits such as improved response times, better use of available bandwidth, and faster insights.
Almost all system admins and DevOps engineers know Kubernetes. This tool allows users to run containerized workloads across a fleet of servers and comes with many benefits such as scaling, portability, and improved security. Today, there are many Kubernetes distributions, including Rancher, Minikube, EKS, AKS, GKE, Red Hat OpenShift, Docker Desktop Kubernetes, Mirantis Kubernetes Engine, and VMware Tanzu Kubernetes Grid.
While Red Hat OpenShift is a full-featured enterprise Kubernetes platform, there is a far more lightweight distribution known as MicroShift. The Red Hat® build of MicroShift is a solution built from the edge capabilities of Red Hat® OpenShift®. It is an open-source, enterprise-ready distribution that brings the power of Kubernetes to the edge. MicroShift is designed for enterprises or users with small or low-power hardware who still want to enjoy the orchestration abilities of Kubernetes.
In this guide, we will learn how you can quickly deploy lightweight OpenShift for edge computing using MicroShift.
System Requirements
Below are the specifications required to run MicroShift; a few quick checks to confirm them follow the list.
- Recommended Operating System: RHEL 8, CentOS Stream, or Fedora 34+
- Architecture: 64-bit CPU architecture (amd64/x86_64, arm64, or riscv64)
- Memory: 2GB of RAM
- CPU: 2 CPU cores
- Storage: 1GB of free storage space
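If you want to confirm the host meets these specifications, a few standard commands are enough (these are plain coreutils/util-linux tools, not part of MicroShift itself):
uname -m              # CPU architecture, expect x86_64 or aarch64
nproc                 # number of CPU cores
free -h               # total memory
df -h /               # free space on the root filesystem
cat /etc/os-release   # OS name and version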
After all the above requirements have been met, proceed as shown below.
1: Install CRI-O on your System
CRI-O is a container runtime engine for Kubernetes. It can be used as an alternative to other runtimes such as Docker or containerd. To install it, issue the commands below for your distribution.
- On RedHat
command -v subscription-manager &> /dev/null \
&& sudo subscription-manager repos --enable rhocp-4.8-for-rhel-8-x86_64-rpms
sudo dnf install -y cri-o cri-tools conntrack
- On CentOS Stream
First, install the EPEL repo:
sudo dnf install epel-release -y
Enable the Powertools repo with the command:
sudo dnf config-manager --enable powertools
Now enable the CRI-O module and install CRI-O:
sudo dnf module enable -y cri-o
sudo dnf install -y cri-o cri-tools conntrack
- On Fedora
sudo dnf module enable -y cri-o
sudo dnf install -y cri-o cri-tools conntrack
Once installed, start and enable the service:
sudo systemctl enable crio --now
Check the status:
$ systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
Loaded: loaded (/usr/lib/systemd/system/crio.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2023-11-30 19:06:02 EAT; 4s ago
Docs: https://github.com/cri-o/cri-o
Main PID: 829600 (crio)
Tasks: 9
Memory: 41.0M
CGroup: /system.slice/crio.service
└─829600 /usr/bin/crio
.....
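As an extra check, you can query the runtime over its CRI socket using crictl, which was installed above as part of cri-tools. The socket path below is the CRI-O default; adjust it if you have changed it.
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info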
2: Install and Configure MicroShift
Once the runtime has been installed and started, we can quickly install and use MicroShift. We will begin by enabling the repository that provides the packages:
sudo dnf copr enable -y @redhat-et/microshift
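If the copr sub-command is not available on your system, it is provided by the DNF copr plugin. The provides-based install below is the usual way to pull it in on RHEL, CentOS Stream, and Fedora; skip it if dnf copr already works:
sudo dnf install -y 'dnf-command(copr)'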
Now install MicroShift on Red Hat, CentOS, and Fedora using the command:
sudo dnf install -y microshift podman
The next step is to allow the required ports and IP ranges through the firewall if it is active. To achieve that, issue these commands:
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --reload
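You can confirm that the rules were applied with:
sudo firewall-cmd --zone=public --list-ports
sudo firewall-cmd --zone=trusted --list-sources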
Download the OpenShift pull secret from the https://console.redhat.com/openshift/downloads#tool-pull-secret page and save it into the ~/.pull-secret.json file.
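Optionally, confirm the downloaded pull secret is valid JSON before handing it to CRI-O. This assumes python3 is available, which is the default on recent RHEL, CentOS Stream, and Fedora releases:
python3 -m json.tool ~/.pull-secret.json > /dev/null && echo "pull secret looks valid"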
Configure CRI-O to use the pull secret.
sudo cp ~/.pull-secret.json /etc/crio/openshift-pull-secret
Start and enable the MicroShift service.
sudo systemctl enable --now microshift.service
Verify that the service is up and running:
$ systemctl status microshift
● microshift.service - MicroShift
Loaded: loaded (/usr/lib/systemd/system/microshift.service; enabled; vendor preset: disabl>
Active: active (running) since Thu 2023-11-30 19:08:18 EAT; 4s ago
Main PID: 830336 (microshift)
Tasks: 8 (limit: 98700)
Memory: 40.2M
CGroup: /system.slice/microshift.service
└─830336 /usr/bin/microshift run
Nov 30 19:08:18 microshift.computingforgeeks.com systemd[1]: Started MicroShift.
There are more details on configuring ports and firewalls in the firewall documentation.
You need to configure the MicroShift CNI default network to match the one defined in /etc/cni/net.d/100-crio-bridge.conf. The default there is cni0, so we will set it with the command:
sudo sed -i '/cni_default_network = "cbr0"/c\cni_default_network = "cni0"' /etc/crio/crio.conf.d/microshift.conf
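You can verify the change took effect and then restart CRI-O (and MicroShift, which depends on it) so the new setting is picked up. The restart is our suggestion rather than part of the upstream steps:
grep cni_default_network /etc/crio/crio.conf.d/microshift.conf
sudo systemctl restart crio microshift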
On Fedora and CentOS systems, you might run into problems with the cluster failing to start. In that case, you need to trust the Red Hat GPG keys using the commands:
sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release registry.access.redhat.com
sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release registry.redhat.io
You also need to create the registry signature (sigstore) configuration files with the commands below:
sudo tee /etc/containers/registries.d/registry.access.redhat.com.yaml > /dev/null <<EOF
docker:
  registry.access.redhat.com:
    sigstore: https://access.redhat.com/webassets/docker/content/sigstore
EOF
sudo tee /etc/containers/registries.d/registry.redhat.io.yaml > /dev/null <<EOF
docker:
  registry.redhat.io:
    sigstore: https://registry.redhat.io/containers/sigstore
EOF
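To confirm that images can now be pulled and their signatures verified from the Red Hat registries, you can try a test pull; ubi8/ubi-minimal is used here purely as an example image:
sudo podman pull registry.access.redhat.com/ubi8/ubi-minimal:latest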
3: Access and Use MicroShift Kubernetes
Now, to use the installed MicroShift, you need to install a client: either kubectl or the OpenShift client (oc). For this guide, we will install both with the commands:
curl -O https://mirror.openshift.com/pub/openshift-v4/$(uname -m)/clients/ocp/stable/openshift-client-linux.tar.gz
sudo tar -xf openshift-client-linux.tar.gz -C /usr/bin oc kubectl
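Confirm the clients are installed and on your PATH:
oc version --client
kubectl version --client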
Then copy the kubeconfig file to the default location so that the cluster can be accessed without admin privileges.
mkdir -p ~/.kube
sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config
Now, you are set to enjoy the awesomeness of MicroShift. After some time, the node will be ready as shown:
$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cent8.mylab.io Ready <none> 10m v1.21.0 65.109.12.95 <none> CentOS Stream 8 4.18.0-526.el8.x86_64 cri-o://1.21.3
You can also use kubectl to access the cluster.
Get the pods:
$ oc get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-67lf5 1/1 Running 0 5m9s
kubevirt-hostpath-provisioner kubevirt-hostpath-provisioner-swhcd 1/1 Running 0 3m
openshift-dns dns-default-rqzsh 2/2 Running 0 5m9s
openshift-dns node-resolver-66tmz 1/1 Running 0 5m9s
openshift-ingress router-default-6c96f6bc66-fps7g 1/1 Running 0 5m10s
openshift-service-ca service-ca-7bffb6f6bf-m8xj2 1/1 Running 0 5m11s
To test that everything is working, we will deploy a simple Nginx app.
oc apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
Verify if the app is deployed:
$ oc get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-585449566-9dqsm 1/1 Running 0 32s
nginx-deployment-585449566-9rl62 1/1 Running 0 32s
Expose the deployment using a NodePort service.
$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed
Get the service port now:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 8m55s
nginx-deployment NodePort 10.43.242.3 <none> 80:31977/TCP 6s
openshift-apiserver ClusterIP None <none> 443/TCP 8m12s
openshift-oauth-apiserver ClusterIP None <none> 443/TCP 8m12s
We can then access the service in a browser using the node's IP address and the port on which the service has been exposed (31977 in this case).
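From the node itself, a quick curl against the NodePort also works; the IP address and port below are from this demo environment, so substitute your own:
curl -I http://65.109.12.95:31977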

Verdict
That marks the end of the guide on how to deploy lightweight OpenShift for edge computing using MicroShift. You have seen how easy it is to spin up this lightweight Kubernetes distribution. It can be vital for testing or for users who want to set up and use Kubernetes with minimal effort.
Interested in more?
- Bare Metal vs. VM-based Kubernetes Clusters
- Deploy HA Kubernetes in Hetzner Cloud Using Kubermatic KubeOne
- 9 Best Kubernetes UI Management Tools