Are you looking for an easy way to set up a local OpenShift 4 cluster on your laptop? Red Hat CodeReady Containers lets you run a minimal OpenShift 4.2 or newer cluster on your local laptop or desktop computer. It should only be used for development and testing purposes; we’ll provide a separate guide for setting up a production OpenShift 4 cluster.

Red Hat CodeReady Containers is a regular OpenShift installation with the following notable differences:

  • It uses a single node which behaves both as a master and as a worker node.
  • The machine-config and monitoring Operators are disabled by default.
  • Because these Operators are disabled, the corresponding parts of the web console are non-functional.
  • For the same reason, there is currently no upgrade path to newer OpenShift versions.
  • Due to technical limitations, the CodeReady Containers cluster is ephemeral and will need to be recreated from scratch once a month using a newer release.
  • The OpenShift instance runs in a virtual machine, which can cause some other differences, particularly in relation to external networking.

Minimum system requirements

CodeReady Containers requires the following minimum hardware resources:

  • 4 virtual CPUs (vCPUs)
  • 8 GB of memory
  • 35 GB of storage space
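
You can quickly verify that your host meets these requirements and supports hardware virtualization with standard tools:

$ nproc                                   # at least 4
$ free -g                                 # at least 8 GB of RAM
$ df -h ~                                 # ~35 GB free; the CRC VM disk lives under ~/.crc
$ grep -c -E '(vmx|svm)' /proc/cpuinfo    # non-zero means virtualization extensions are present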

CodeReady Containers can be run on Linux, Windows, and macOS, but this setup has been tested on CentOS 7/8 and Fedora 31. CodeReady Containers is delivered as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Microsoft Windows 10.

Step 1: Install required software packages

CodeReady Containers requires the libvirt and NetworkManager packages to be installed on the host system prior to its setup.

------- Fedora ----------
$ sudo dnf install NetworkManager qemu-kvm libvirt virt-install
$ sudo systemctl enable --now libvirtd

------ CentOS 7 ---------
$ sudo yum -y install qemu-kvm libvirt virt-install bridge-utils NetworkManager
$ sudo systemctl enable --now libvirtd 

------ Ubuntu ----------
$ sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager
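
Before moving on, you can optionally confirm that the KVM modules are loaded and the libvirt daemon is running (virt-host-validate ships with the libvirt client tools on most distributions):

$ lsmod | grep kvm                      # kvm_intel or kvm_amd should be listed
$ sudo systemctl is-active libvirtd     # should print "active"
$ sudo virt-host-validate qemu          # optional, runs a fuller set of host checks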

Step 2: Install CodeReady Containers

Download the latest CRC archive for Linux from the URL below.

wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz

Extract the downloaded CodeReady Containers archive.

tar xvf crc-linux-amd64.tar.xz

Place the binary in a directory in your $PATH.

cd crc*/
sudo cp crc /usr/local/bin
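
If you prefer not to copy the binary to /usr/local/bin, any directory already on your $PATH works just as well, for example:

$ mkdir -p ~/.local/bin          # assumes ~/.local/bin is on your PATH
$ cp crc ~/.local/bin/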

Confirm installation by checking the software version.

$ crc version
crc version: 1.2.0+c2e3c0f
OpenShift version: 4.2.8 (embedded in binary)

To view the crc help page, run:

$ crc --help 
CodeReady Containers is a tool that manages a local OpenShift 4.x cluster optimized for testing and development purposes

Usage:
  crc [flags]
  crc [command]

Available Commands:
  config      Modify crc configuration
  console     Open the OpenShift Web Console in the default browser
  delete      Delete the OpenShift cluster
  help        Help about any command
  ip          Get IP address of the running OpenShift cluster
  oc-env      Add the 'oc' binary to PATH
  setup       Set up prerequisites for the OpenShift cluster
  start       Start the OpenShift cluster
  status      Display status of the OpenShift cluster
  stop        Stop the OpenShift cluster
  version     Print version information

Flags:
  -f, --force              Forcefully perform an action
  -h, --help               help for crc
      --log-level string   log level (e.g. "debug | info | warn | error") (default "info")
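
The default VM sizing matches the minimum requirements above. Depending on your crc release, the config subcommand listed in the help output can be used to give the VM more resources before the first start (run crc config --help to see which properties your version supports):

$ crc config set cpus 6
$ crc config set memory 12288   # value is in MiB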

Step 3: Deploy the CodeReady Containers virtual machine

Run the crc setup command to set up your host operating system for the CodeReady Containers virtual machine.

$ crc setup

The setup command checks the host for requirements and configures anything that is missing:

INFO Checking if running as non-root              
INFO Caching oc binary                            
INFO Setting up virtualization                    
INFO Setting up KVM                               
INFO Installing libvirt service and dependencies  
INFO Adding user to libvirt group                 
INFO Enabling libvirt                             
INFO Starting libvirt service                     
INFO Will use root access: start libvirtd service 
INFO Checking if a supported libvirt version is installed 
INFO Installing crc-driver-libvirt                
INFO Removing older system-wide crc-driver-libvirt 
INFO Setting up libvirt 'crc' network             
INFO Starting libvirt 'crc' network               
INFO Checking if NetworkManager is installed      
INFO Checking if NetworkManager service is running 
INFO Writing Network Manager config for crc       
INFO Will use root access: write NetworkManager config in /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf 
INFO Will use root access: execute systemctl daemon-reload command 
INFO Will use root access: execute systemctl stop/start command 
INFO Writing dnsmasq config for crc               
INFO Will use root access: write dnsmasq configuration in /etc/NetworkManager/dnsmasq.d/crc.conf 
INFO Will use root access: execute systemctl daemon-reload command 
INFO Will use root access: execute systemctl stop/start command 
INFO Unpacking bundle from the CRC binary         

Once the setup is complete, run the command below to start the OpenShift cluster on your laptop.

$ crc start
INFO Checking if running as non-root              
INFO Checking if oc binary is cached              
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Checking if libvirt is enabled               
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Checking if libvirt 'crc' network is available 
INFO Checking if libvirt 'crc' network is active  
INFO Checking if NetworkManager is installed      
INFO Checking if NetworkManager service is running 
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists 
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists 
? Image pull secret [? for help] *

Please note that a valid OpenShift user pull secret is required during installation. The pull secret can be copied or downloaded from the Pull Secret section of the Install on Laptop: Red Hat CodeReady Containers page on cloud.redhat.com.
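
If you prefer a non-interactive start, save the pull secret to a file; newer crc releases can read it with the --pull-secret-file option (check crc start --help to confirm your version supports it):

$ crc start --pull-secret-file ~/pull-secret.txt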

Paste the pull secret when prompted, and the cluster setup will continue.

INFO Extracting bundle: crc_libvirt_4.2.8.crcbundle ... 
INFO Creating CodeReady Containers VM for OpenShift 4.2.8... 
INFO Verifying validity of the cluster certificates ... 
INFO Check internal and public DNS query ...      
INFO Copying kubeconfig file to instance dir ...  
INFO Adding user's pull secret and cluster ID ... 
INFO Starting OpenShift cluster ... [waiting 3m]  
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443' 
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD 
INFO                                              
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console 
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation 

Access details and credentials are printed after a successful start.

INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443' 
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console

To access your cluster, first set up your environment by running:

$ crc oc-env
export PATH="/home/jmutai/.crc/bin:$PATH"
eval $(crc oc-env)

Run the commands printed in your terminal or add them to your ~/.bashrc or ~/.zshrc file, then source it.

$ source ~/.bashrc 
-- or --
$ source ~/.zshrc
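
To make this permanent, you can append the eval line to your shell profile instead of exporting it manually each time (bash shown here; use ~/.zshrc for zsh):

$ echo 'eval $(crc oc-env)' >> ~/.bashrc
$ source ~/.bashrc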

Confirm cluster setup.

$ oc cluster-info
Kubernetes master is running at https://api.crc.testing:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ oc get nodes
NAME                 STATUS   ROLES           AGE     VERSION
crc-2n9vw-master-0   Ready    master,worker   5d13h   v1.14.6+6ac6aa4b0

$ oc config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.crc.testing:6443
  name: api-crc-testing:6443
- cluster:
    certificate-authority: /home/jmutai/.minikube/ca.crt
    server: https://192.168.39.35:8443
  name: minikube
contexts:
- context:
    cluster: api-crc-testing:6443
    user: developer/api-crc-testing:6443
  name: /api-crc-testing:6443/developer
- context:
    cluster: api-crc-testing:6443
    namespace: default
    user: kube:admin/api-crc-testing:6443
  name: default/api-crc-testing:6443/kube:admin
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: default/api-crc-testing:6443/kube:admin
kind: Config
preferences: {}
users:
- name: developer/api-crc-testing:6443
  user:
    token: Pvqjq-b5HkV9UQtOYH8P9yOtm17MrOUVs-eaiSeQqXA
- name: kube:admin/api-crc-testing:6443
  user:
    token: LDrdGJMUpPUAxtg0IvWynedbtSBLjs8S2S6kdpvbMU8
- name: minikube
  user:
    client-certificate: /home/jmutai/.minikube/client.crt
    client-key: /home/jmutai/.minikube/client.key

To view cluster operators:

$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.8     True        False         False      5d13h
cloud-credential                           4.2.8     True        False         False      5d13h
cluster-autoscaler                         4.2.8     True        False         False      5d13h
console                                    4.2.8     True        False         False      5d13h
dns                                        4.2.8     True        False         False      17m
image-registry                             4.2.8     True        False         False      5d13h
ingress                                    4.2.8     True        False         False      5d13h
insights                                   4.2.8     True        False         False      5d13h
kube-apiserver                             4.2.8     True        False         False      5d13h
kube-controller-manager                    4.2.8     True        False         False      5d13h
kube-scheduler                             4.2.8     True        False         False      5d13h
machine-api                                4.2.8     True        False         False      5d13h
machine-config                             4.2.8     True        False         False      5d13h
marketplace                                4.2.8     True        False         False      17m
monitoring                                 4.2.8     False       True          True       5d13h
network                                    4.2.8     True        False         False      5d13h
node-tuning                                4.2.8     True        False         False      17m
openshift-apiserver                        4.2.8     True        False         False      9h
openshift-controller-manager               4.2.8     True        False         False      9h
openshift-samples                          4.2.8     True        False         False      5d13h
operator-lifecycle-manager                 4.2.8     True        False         False      5d13h
operator-lifecycle-manager-catalog         4.2.8     True        False         False      5d13h
operator-lifecycle-manager-packageserver   4.2.8     True        False         False      17m
service-ca                                 4.2.8     True        False         False      5d13h
service-catalog-apiserver                  4.2.8     True        False         False      5d13h
service-catalog-controller-manager         4.2.8     True        False         False      5d13h
storage                                    4.2.8     True        False         False      5d13h
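
You can also check the overall cluster version reported by the cluster-version operator:

$ oc get clusterversion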

Step 4: Access OpenShift Cluster

You can access the locally deployed OpenShift cluster from the CLI or by opening the OpenShift 4.x console in your web browser.

$ oc login -u developer -p developer https://api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

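As a quick smoke test while logged in as the developer user, you can create a throwaway project and deploy a small sample image (the project and application names below are only examples):

$ oc new-project demo
$ oc new-app --docker-image=openshift/hello-openshift --name=hello
$ oc expose svc/hello            # exposes the service through an OpenShift route
$ oc get pods,route              # the pod should reach Running and the route should get a host
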
Access as admin:

$ oc login -u kubeadmin -p UMeRe-hBQAi-JJ4Bi-8ynRD https://api.crc.testing:6443
Login successful.
You have access to 51 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

To open the console in your default web browser, run:

$ crc console

Log in with the credentials printed earlier.

There you have it: a running local OpenShift 4 cluster.
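
If you need to retrieve the console URL or the login credentials again later, recent crc releases also support the following flags (check crc console --help on your version):

$ crc console --url           # print the web console URL instead of opening a browser
$ crc console --credentials   # print the developer and kubeadmin login details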

Step 5: Stop the OpenShift Cluster

To stop your OpenShift cluster, run the command:

$ crc stop
Stopping the OpenShift cluster, this may take a few minutes...
Stopped the OpenShift cluster
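
At any point you can check whether the virtual machine is running with the status subcommand listed in the help output:

$ crc status    # reports the state of the CRC VM and the OpenShift cluster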

The virtual machine can be started any time by running the command:

$ crc start 
INFO Checking if running as non-root              
INFO Checking if oc binary is cached              
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Checking if libvirt is enabled               
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Checking if libvirt 'crc' network is available 
INFO Checking if libvirt 'crc' network is active  
INFO Checking if NetworkManager is installed      
INFO Checking if NetworkManager service is running 
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists 
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists 
INFO Starting CodeReady Containers VM for OpenShift 4.2.8... 
INFO Verifying validity of the cluster certificates ... 
INFO Check internal and public DNS query ...      
INFO Starting OpenShift cluster ... [waiting 3m]
INFO                                              
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions 
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443' 
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD 
INFO                                              
...

Deleting the CodeReady Containers virtual machine

If you want to delete an existing CodeReady Containers virtual machine, run:

$ crc delete

This permanently removes the virtual machine and everything running inside it.
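
Depending on the release, crc delete may ask for confirmation; the global -f/--force flag shown in the help output skips the prompt. Cached bundle files typically remain under ~/.crc, so remove that directory manually only if you no longer need CRC at all:

$ crc delete -f
$ rm -rf ~/.crc    # optional: also removes cached bundles and all remaining CRC state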
