The Cluster Logging Operator creates and manages the components of the logging stack in your OpenShift or OKD 4.x cluster. Cluster logging is used to aggregate all the logs from your OpenShift Container Platform cluster, such as application container logs, node system logs, audit logs, and so forth.


In this article we will install the Logging Operator and create a Cluster Logging Custom Resource (CR) to schedule the cluster logging pods and other resources necessary to support cluster logging. By using an operator, the initial deployment, upgrades, and maintenance of cluster logging become the responsibility of the Operator rather than manual SysAdmin work.


Install Cluster Logging Operator on OpenShift / OKD 4.x

The default Cluster Logging Custom Resource (CR) is named instance. This CR can be modified to define a complete cluster logging deployment that includes all the components of the logging stack to collect, store and visualize logs. The Cluster Logging Operator watches the ClusterLogging Custom Resource and adjusts the logging deployment accordingly.

We will be performing the deployments from the command line interface. The focus of this article is the Log collection part. We will have other articles explaining Logs storage and visualization.

Step 1: Create Operators namespace

We will create a Namespace called openshift-logging for the Logging operator.

Create a new object YAML file for namespace creation:

cat << EOF >ocp_cluster_logging_namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-logging: "true"
    openshift.io/cluster-monitoring: "true"
EOF

Apply the file for actual namespace creation.

oc apply -f ocp_cluster_logging_namespace.yaml
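Optionally, you can confirm that the namespace was created and carries the labels defined in the manifest above. This is just a quick sanity check, not a required step:

```shell
# Confirm the namespace exists and is Active
oc get namespace openshift-logging

# Show the labels applied to the namespace
oc get namespace openshift-logging --show-labels
```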

Step 2: Create OperatorGroup object

Next is the installation of Cluster Logging Operator. Create an OperatorGroup object YAML by running the following commands.

cat << EOF >cluster-logging-operatorgroup.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging
EOF

Create the OperatorGroup object:

oc apply -f cluster-logging-operatorgroup.yaml
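To confirm the OperatorGroup was created as expected (using the object name from the manifest above), you can inspect it:

```shell
# List OperatorGroups in the logging namespace
oc get operatorgroup -n openshift-logging

# View the full object, including the target namespaces
oc get operatorgroup cluster-logging -n openshift-logging -o yaml
```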

Step 3: Subscribe a Namespace to the Cluster Logging Operator

We need to subscribe a Namespace to the Cluster Logging Operator. But first create a Subscription object YAML file.

cat << EOF >cluster-logging-sub.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "4.4" # Set Channel
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

Create the Subscription object, which deploys the Cluster Logging Operator to the openshift-logging Namespace:

oc apply -f cluster-logging-sub.yaml
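While waiting for the CSV to appear, you can watch the Subscription and the InstallPlan that OLM generates from it. These commands are a quick progress check, assuming the object names used above:

```shell
# Show the Subscription and its current state
oc get subscription cluster-logging -n openshift-logging

# Show the InstallPlan generated by OLM for this Subscription
oc get installplan -n openshift-logging
```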

Verify installation:

$ oc get csv -n openshift-logging
NAME                                           DISPLAY                          VERSION                 REPLACES                                       PHASE
clusterlogging.4.4.0-202009161309.p0           Cluster Logging                  4.4.0-202009161309.p0                                                  Succeeded
elasticsearch-operator.4.4.0-202009161309.p0   Elasticsearch Operator           4.4.0-202009161309.p0   elasticsearch-operator.4.4.0-202009041255.p0   Succeeded

Step 4: Create a Cluster Logging instance

Create an instance object YAML file for the Cluster Logging Operator:

cat << EOF >cluster-logging-instance.yaml
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
EOF

Create the Logging instance:

oc apply -f cluster-logging-instance.yaml
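The Operator reconciles this CR and records progress in its status. If the pods do not come up, inspecting the CR is a good first debugging step:

```shell
# View the ClusterLogging instance, including its status conditions
oc get clusterlogging instance -n openshift-logging -o yaml
```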

After a few minutes, check that the pods are running:

$ oc get pods -n openshift-logging
NAME                                       READY   STATUS    RESTARTS   AGE
cluster-logging-operator-f7574655b-mjj9x   1/1     Running   0          73m
fluentd-57d6h                              1/1     Running   0          36s
fluentd-dfvdc                              1/1     Running   0          36s
fluentd-j7xs8                              1/1     Running   0          36s
fluentd-ss5wr                              1/1     Running   0          36s
fluentd-tbg4c                              1/1     Running   0          36s
fluentd-tzjtg                              1/1     Running   0          36s
fluentd-v9xz9                              1/1     Running   0          36s
fluentd-vjpqp                              1/1     Running   0          36s
fluentd-z7vzf                              1/1     Running   0          36s
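The fluentd collectors run as a DaemonSet, so there is one pod per node. To check the DaemonSet and sample the collector logs (the DaemonSet name fluentd matches the pod names above):

```shell
# Confirm the fluentd DaemonSet is fully scheduled
oc get daemonset fluentd -n openshift-logging

# Tail recent log lines from one of the fluentd pods
oc logs daemonset/fluentd -n openshift-logging --tail=20
```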

In our next article we will cover how to send logs from an OpenShift cluster to external Splunk and Elasticsearch logging setups.

In the meantime, check out other articles we have on OpenShift:

Expose OpenShift Internal Registry Externally and Login With Docker/Podman CLI

How to run telnet / tcpdump in OpenShift v4 CoreOS Nodes

Grant Users Access to Project/Namespace in OpenShift

Configure Chrony NTP Service on OpenShift 4.x / OKD 4.x

Your support is our everlasting motivation,
that cup of coffee is what keeps us going!

As we continue to grow, we would wish to reach and impact more people who visit and take advantage of the guides we have on our blog. This is a big task for us and we are so far extremely grateful for the kind people who have shown amazing support for our work over the time we have been online.

Thank You for your support as we work to give you the best of guides and articles. Click below to buy us a coffee.

