As a cluster administrator, you may want to aggregate all the logs from your OpenShift Container Platform cluster, such as application container logs, node system logs, audit logs, and so forth. In this article we will schedule the cluster logging pods and other resources necessary to send logs, events, and cluster metrics to Splunk.

We will be using Splunk Connect for Kubernetes, which provides a way to import and search your OpenShift or Kubernetes logging, object, and metrics data in Splunk. Splunk Connect for Kubernetes builds on multiple CNCF components to get data into Splunk.


Setup Requirements

For this setup, you need the following items:

  • A working OpenShift cluster with the oc command-line tool configured. Administrative access is required.
  • Splunk Enterprise 7.0 or later
  • Helm installed on your workstation
  • At least two Splunk indexes
  • An HEC token used by the HTTP Event Collector to authenticate event data
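Before starting, it can save time to confirm the command-line tools this guide relies on are actually available. This is a minimal sketch that only reports whether oc and helm are on your PATH; it does not check versions or cluster access:

```shell
# Check that the CLI tools required by this guide are installed.
# Only presence on PATH is reported; versions are verified later
# with the tools themselves.
STATUS=""
for tool in oc helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    STATUS="${STATUS}${tool}: found\n"
  else
    STATUS="${STATUS}${tool}: missing\n"
  fi
done
printf '%b' "$STATUS"
```

If either tool reports missing, install it before continuing.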

There will be three types of deployments on OpenShift for this purpose.

  1. A Deployment that collects changes to OpenShift objects.
  2. A DaemonSet on each OpenShift node for metrics collection.
  3. A DaemonSet on each OpenShift node for logs collection.

The actual implementation will be as shown in the diagram below.

[Diagram: OpenShift logging to Splunk architecture]

Step 1: Create Splunk Indexes

You will need at least two indexes for this deployment: one for logs and events, and another for metrics.
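If you prefer the command line to the web UI, indexes can also be created through Splunk's management REST API on port 8089. This is a hedged sketch: the host, credentials, and index names below are placeholders, and the script prints the calls for review rather than executing them against a live instance:

```shell
# Hedged sketch: create the two indexes via Splunk's management REST API.
# Host, admin user, and index names are placeholders -- the curl commands
# are printed for review, not executed.
SPLUNK_HOST="splunk.example.com"   # placeholder
SPLUNK_ADMIN="admin"               # placeholder; curl will prompt for the password

# Event index for logs and objects (the default datatype is "event").
CREATE_EVENTS_INDEX="curl -k -u ${SPLUNK_ADMIN} https://${SPLUNK_HOST}:8089/services/data/indexes -d name=ocp_events"

# Metrics index -- datatype=metric makes it a metrics-type index.
CREATE_METRICS_INDEX="curl -k -u ${SPLUNK_ADMIN} https://${SPLUNK_HOST}:8089/services/data/indexes -d name=ocp_metrics -d datatype=metric"

echo "$CREATE_EVENTS_INDEX"
echo "$CREATE_METRICS_INDEX"
```

Run the printed commands once you have substituted your own host and credentials.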

Log in to Splunk as an admin user.


Create the events and logs index. The input data type should be Events.


For the metrics index, the input data type should be Metrics.


Confirm the indexes are available.


Step 2: Create Splunk HEC Token

The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. As HEC uses a token-based authentication model, we need to generate a new token.

This is done under Data Inputs configuration section.


Select “HTTP Event Collector”, then fill in the name and click Next.


On the next page, permit the token to write to the two indexes we created.


Review and Submit the settings.

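Before wiring up the cluster, it is worth confirming that the new token actually accepts events. This is a hedged sketch: host, port, and token are placeholders, and the script prints the curl call for review rather than sending it:

```shell
# Hedged sketch: compose a test event for the HTTP Event Collector.
# All values below are placeholders; the call is printed, not executed.
SPLUNK_HOST="splunk.example.com"                  # placeholder
HEC_PORT="8088"                                   # Splunk's default HEC port
HEC_TOKEN="00000000-0000-0000-0000-000000000000"  # placeholder token

HEC_TEST="curl -k https://${SPLUNK_HOST}:${HEC_PORT}/services/collector/event \
  -H 'Authorization: Splunk ${HEC_TOKEN}' \
  -d '{\"event\": \"HEC connectivity test\"}'"

echo "$HEC_TEST"
```

When run against a live Splunk instance with valid values, a successful HEC request returns {"text":"Success","code":0}.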

Step 3: Install Helm

If you don’t have Helm installed on your workstation or bastion server, check out the guide in the link below.

Install and Use Helm 3 on Kubernetes Cluster

You can validate the installation by checking the Helm version:

$ helm version
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}

Step 4: Deploy Splunk Connect for Kubernetes

Create a project (namespace) for Splunk Connect:

$ oc new-project splunk-hec-logging

Upon creation, the project becomes your current working project, but you can switch to it at any time:

$ oc project splunk-hec-logging

Create a values YAML file for the installation:

$ vim ocp-splunk-hec-values.yaml

Mine has been modified to look similar to the one below.

global:
  logLevel: info
  journalLogPath: /run/log/journal
  splunk:
    hec:
      host: <splunk-ip> # Set Splunk IP address
      port: <splunk-hec-port> # Set Splunk HEC port
      protocol: http
      token: <hec-token> # HEC token created
      insecureSSL: true
      indexName: <indexname> # default index if others not set
  kubernetes:
    clusterName: "<clustername>"
    openshift: true

splunk-kubernetes-metrics:
  enabled: true
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <metrics-indexname>
  kubernetes:
    openshift: true

splunk-kubernetes-logging:
  enabled: true
  logLevel: debug
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <logging-indexname>
  containers:
    logFormatType: cri
  logs:
    kube-audit:
      from:
        file:
          path: /var/log/kube-apiserver/audit.log

splunk-kubernetes-objects:
  enabled: true
  kubernetes:
    openshift: true
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <objects-indexname>

Fill in the values accordingly, then initiate the deployment. Get the latest Splunk Connect for Kubernetes chart release URL before installation and pass it to helm install (the <release-url> below is a placeholder for it):

$ helm install splunk-kubernetes-logging -f ocp-splunk-hec-values.yaml <release-url>

Deployment output:

NAME: splunk-kubernetes-logging
LAST DEPLOYED: Thu Oct 22 22:22:51 2020
NAMESPACE: splunk-logging
STATUS: deployed
███████╗██████╗ ██╗     ██╗   ██╗███╗   ██╗██╗  ██╗██╗
██╔════╝██╔══██╗██║     ██║   ██║████╗  ██║██║ ██╔╝╚██╗
███████╗██████╔╝██║     ██║   ██║██╔██╗ ██║█████╔╝  ╚██╗
╚════██║██╔═══╝ ██║     ██║   ██║██║╚██╗██║██╔═██╗  ██╔╝
███████║██║     ███████╗╚██████╔╝██║ ╚████║██║  ██╗██╔╝
╚══════╝╚═╝     ╚══════╝ ╚═════╝ ╚═╝  ╚═══╝╚═╝  ╚═╝╚═╝

Listen to your data.

Splunk Connect for Kubernetes is spinning up in your cluster.
After a few minutes, you should see data being indexed in your Splunk.

If you get stuck, we're here to help.
Look for answers here:

Check running pods:

$ oc get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
splunk-kubernetes-logging-splunk-kubernetes-metrics-4bvkp         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-4skrm         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-55f8t         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-7xj2n         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-8r2vj         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-agg-5bppqqn   1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-f8psk         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-fp88w         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-s45wx         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-xtq5g         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-objects-b4f8f4m67vg   1/1     Running   0          48s

Step 5: Grant Privileged SCC to Service Accounts

The collector pods need elevated access to read logs on the nodes, so grant the privileged SCC to the service accounts created by the chart:

for sa in $(oc get sa --no-headers | grep splunk | awk '{ print $1 }'); do
  oc adm policy add-scc-to-user privileged -z $sa
done

Log in to Splunk and check whether logs, events, and metrics are being sent.


This might not be the Red Hat recommended way of storing OpenShift events and logs. Refer to the OpenShift documentation for more details on cluster logging.

More articles on OpenShift:

Grant Users Access to Project/Namespace in OpenShift

Configure Chrony NTP Service on OpenShift 4.x / OKD 4.x

How To Install Istio Service Mesh on OpenShift 4.x


