OpenShift includes a built-in logging stack that collects container, node, and audit logs from every component in your cluster and stores them in a central location. Starting with Red Hat OpenShift Logging 6.x, the stack uses the Red Hat OpenShift Logging Operator with Vector as the collector and Loki as the log store, replacing the older Elasticsearch-based approach.
This guide walks through installing and configuring the full OpenShift logging stack on OpenShift 4.x using Red Hat OpenShift Logging 6.x. We cover operator installation, LokiStack deployment, log forwarding with ClusterLogForwarder, querying logs from the OpenShift Console, forwarding to external systems, and configuring log retention.
Prerequisites
Before you begin, make sure the following requirements are met:
- OpenShift Container Platform 4.14 or later (or OKD 4.x equivalent)
- Cluster admin access (cluster-admin role)
- oc CLI installed and authenticated to the cluster
- S3-compatible object storage for Loki – AWS S3, MinIO, OpenShift Data Foundation (ODF), Azure Blob, Google Cloud Storage, or Swift
- A default StorageClass configured in the cluster for Loki PersistentVolumeClaims
- At least 3 worker nodes with 4 GB RAM each (for a small production LokiStack)
Step 1: Install the OpenShift Logging Operator
The Red Hat OpenShift Logging Operator deploys the Vector-based log collector and manages the ClusterLogForwarder custom resource. Install it from OperatorHub in the openshift-logging namespace.
First, create the namespace and OperatorGroup:
cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging
EOF
Next, create the Subscription to install the operator from the stable-6.2 channel:
cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable-6.2
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
EOF
Wait for the operator to install, then confirm the ClusterServiceVersion shows Succeeded:
oc get csv -n openshift-logging -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
The output should show the logging operator ready for use:
NAME PHASE
cluster-logging.v6.2.0 Succeeded
Step 2: Install the Loki Operator
The Loki Operator manages the LokiStack – the log storage backend that replaced Elasticsearch in OpenShift Logging 6.x. It runs in the openshift-operators-redhat namespace and is installed separately from the Logging Operator.
Create the namespace, OperatorGroup, and Subscription:
cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: stable-6.2
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
EOF
Verify the Loki Operator is running:
oc get csv -n openshift-operators-redhat -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
You should see the Loki Operator with phase Succeeded:
NAME PHASE
loki-operator.v6.2.0 Succeeded
Step 3: Create a LokiStack Instance
LokiStack is the custom resource that deploys Loki with all its components – ingester, distributor, querier, compactor, and gateway. Before creating the LokiStack, you need a Secret with your object storage credentials.
Create the Object Storage Secret
Create a secret in the openshift-logging namespace with your S3-compatible storage credentials. Replace the placeholder values with your actual bucket details:
oc create secret generic logging-loki-s3 \
--from-literal=bucketnames="loki-logs-bucket" \
--from-literal=endpoint="https://s3.us-east-1.amazonaws.com" \
--from-literal=access_key_id="AKIAIOSFODNN7EXAMPLE" \
--from-literal=access_key_secret="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
--from-literal=region="us-east-1" \
-n openshift-logging
For MinIO or other S3-compatible storage, set the endpoint to your MinIO server URL (e.g., https://minio.example.com:9000). For Azure Blob Storage, use --from-literal=container, --from-literal=account_name, and --from-literal=account_key instead.
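If you prefer declarative manifests over oc create, the same secret can be expressed as YAML. This is a sketch assuming a MinIO endpoint; the bucket name and credential values are placeholders to substitute with your own:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3
  namespace: openshift-logging
stringData:
  # stringData accepts plain values; the API server base64-encodes them on write
  bucketnames: loki-logs-bucket
  endpoint: https://minio.example.com:9000
  access_key_id: minio-access-key        # placeholder
  access_key_secret: minio-secret-key    # placeholder
  region: us-east-1
```

Applying this manifest with oc apply -f produces the same secret as the imperative command above, which makes it easier to keep under version control.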
Deploy the LokiStack
Create the LokiStack custom resource. The size field controls the deployment scale – use 1x.demo for testing or 1x.small for production workloads:
cat << EOF | oc apply -f -
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  tenants:
    mode: openshift-logging
  managementState: Managed
  size: 1x.small
  storage:
    schemas:
    - effectiveDate: "2024-10-11"
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  hashRing:
    type: memberlist
  limits:
    global:
      queries:
        queryTimeout: 3m
      retention:
        days: 7
  replicationFactor: 1
EOF
Replace gp3-csi with your cluster’s default StorageClass. Check available storage classes with this command:
oc get storageclasses
The following table shows available LokiStack sizes and their intended use cases:
| Size | Use Case | Ingestion Rate |
|---|---|---|
| 1x.demo | Testing and demos only | Not for production |
| 1x.extra-small | Small clusters (few namespaces) | ~100 GB/day |
| 1x.small | Production clusters | ~500 GB/day |
| 1x.medium | Large production clusters | ~2 TB/day |
Wait for all LokiStack pods to become ready. This may take 2-3 minutes depending on storage provisioning speed:
oc get pods -n openshift-logging -l app.kubernetes.io/instance=logging-loki
All pods should show Running status with all containers ready:
NAME READY STATUS RESTARTS AGE
logging-loki-compactor-0 1/1 Running 0 2m
logging-loki-distributor-7d9f8b5c4-x2k9n 1/1 Running 0 2m
logging-loki-gateway-6b8f8d5c4-hq4x7 1/1 Running 0 2m
logging-loki-index-gateway-0 1/1 Running 0 2m
logging-loki-ingester-0 1/1 Running 0 2m
logging-loki-querier-5d8c6b5c4-9k2m4 1/1 Running 0 2m
logging-loki-query-frontend-6f8f8c5c4-v3j7n 1/1 Running 0 2m
Step 4: Create the ServiceAccount and RBAC
The log collector needs a ServiceAccount with permissions to write application, infrastructure, and audit logs to the LokiStack. These ClusterRoles are created automatically by the Logging Operator – you only need to bind them to a collector ServiceAccount.
cat << EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: collector
  namespace: openshift-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collector-write-application-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-write-application-logs
subjects:
- kind: ServiceAccount
  name: collector
  namespace: openshift-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collector-write-audit-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-write-audit-logs
subjects:
- kind: ServiceAccount
  name: collector
  namespace: openshift-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collector-write-infrastructure-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-write-infrastructure-logs
subjects:
- kind: ServiceAccount
  name: collector
  namespace: openshift-logging
EOF
The three ClusterRoleBindings grant the collector permission to push application logs, audit logs, and infrastructure logs to Loki through separate tenants.
Step 5: Create the ClusterLogForwarder
The ClusterLogForwarder defines which logs to collect and where to send them. In OpenShift Logging 6.x, this resource uses the observability.openshift.io/v1 API and replaces both the old ClusterLogging and ClusterLogForwarder resources from the 5.x era.
Create a ClusterLogForwarder that collects application, infrastructure, and audit logs and forwards them all to the LokiStack:
cat << EOF | oc apply -f -
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
  pipelines:
  - name: default-application
    inputRefs:
    - application
    outputRefs:
    - default-lokistack
  - name: default-infrastructure
    inputRefs:
    - infrastructure
    outputRefs:
    - default-lokistack
  - name: default-audit
    inputRefs:
    - audit
    outputRefs:
    - default-lokistack
EOF
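The pipelines above use the three built-in inputs. The API also lets you define named inputs that narrow what is collected; as a sketch (the my-apps input name and my-app namespace are hypothetical), an application input restricted to a single namespace could look like this:

```yaml
spec:
  inputs:
  - name: my-apps
    type: application
    application:
      includes:
      - namespace: my-app        # collect only from this namespace
  pipelines:
  - name: my-apps-to-loki
    inputRefs:
    - my-apps                    # reference the custom input instead of "application"
    outputRefs:
    - default-lokistack
```

Custom inputs are useful when you want to route a subset of application logs to a dedicated destination without forwarding everything.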
After applying the ClusterLogForwarder, the operator deploys collector pods as a DaemonSet across all nodes. Verify they are running:
oc get daemonset collector -n openshift-logging
The DESIRED and READY counts should match the number of nodes in your cluster:
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
collector 6 6 6 6 6 <none> 45s
Step 6: Verify Log Collection
With all components deployed, verify that logs are flowing from the collector through to Loki. Start by checking the ClusterLogForwarder status conditions:
oc get clusterlogforwarders.observability.openshift.io collector -n openshift-logging -o jsonpath='{.status.conditions[*].type}{"\n"}{.status.conditions[*].status}{"\n"}{.status.conditions[*].reason}'
The conditions should show the forwarder is ready and all pipelines are valid:
Ready
True
Ready
Check that collector pods on each node are shipping logs without errors:
oc logs -n openshift-logging -l app.kubernetes.io/component=collector --tail=20
Healthy collector logs show Vector processing events with no connection errors. If you see repeated connection failures to the Loki gateway, check that the LokiStack pods are running and the ServiceAccount RBAC is correct.
Verify the LokiStack health by checking its status conditions:
oc get lokistacks.loki.grafana.com logging-loki -n openshift-logging -o jsonpath='{range .status.conditions[*]}{.type}{": "}{.status}{"\n"}{end}'
All conditions should report True, confirming the LokiStack is healthy and accepting logs. You can also display node-level logs directly with oc adm node-logs to cross-reference what the collector is picking up.
Step 7: Query Logs in the OpenShift Console
To view logs from the OpenShift web console, install the Cluster Observability Operator and enable the logging UI plugin. This adds a dedicated Logs tab under Observe in the console navigation.
Install the Cluster Observability Operator if not already present:
cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-observability-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: cluster-observability-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
EOF
Once the operator is running, create the UIPlugin resource to enable the logging console view:
cat << EOF | oc apply -f -
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
EOF
After applying the UIPlugin, refresh the OpenShift web console. Navigate to Observe > Logs to access the log viewer. You can filter logs by namespace, pod name, container, severity, and time range using LogQL queries.
Here are some useful LogQL queries to get started:
# All logs from a specific namespace
{kubernetes_namespace_name="my-app"}
# Error-level logs across all application namespaces
{log_type="application"} |= "error"
# Infrastructure logs from a specific node
{kubernetes_host="worker-1.ocp.example.com"}
# Audit logs for a specific user
{log_type="audit"} | json | user_username="system:admin"
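Two more patterns come up often. The stream labels here are the default OpenShift Logging labels used above; the level field is hypothetical and depends on what your applications actually emit as JSON:

```
# Parse JSON log lines and filter on a parsed field
{log_type="application"} | json | level="error"

# Per-namespace log throughput over the last 5 minutes
sum by (kubernetes_namespace_name) (rate({log_type="application"}[5m]))
```

The second query returns a metric rather than log lines, which the console renders as a chart; it is a quick way to spot which namespace is dominating your ingestion.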
The console log viewer also supports live streaming, so you can tail logs in real time from any pod or namespace without leaving the web console.
Step 8: Forward Logs to External Systems
The ClusterLogForwarder supports sending logs to external systems in addition to (or instead of) the local LokiStack. You can forward to multiple output types including Splunk, Elasticsearch, Kafka, Syslog, CloudWatch, and HTTP endpoints.
Forward to Splunk
Create a Secret with your Splunk HEC (HTTP Event Collector) token first:
oc create secret generic splunk-secret \
--from-literal=hecToken="your-splunk-hec-token" \
-n openshift-logging
Then add a Splunk output and pipeline to your ClusterLogForwarder spec. You can add these alongside the existing LokiStack output for dual delivery:
- name: splunk-external
  type: splunk
  splunk:
    url: https://splunk-hec.example.com:8088
    indexName: openshift-logs
    authentication:
      token:
        secretName: splunk-secret
        key: hecToken
  tls:
    insecureSkipVerify: false
Forward to External Elasticsearch
To send logs to an external Elasticsearch cluster, create the authentication secret:
oc create secret generic es-secret \
--from-literal=username="elastic" \
--from-literal=password="your-es-password" \
-n openshift-logging
Add the Elasticsearch output block to the ClusterLogForwarder:
- name: external-es
  type: elasticsearch
  elasticsearch:
    url: https://elasticsearch.example.com:9200
    version: 8
    index:
      name: ocp-logs
    authentication:
      username:
        secretName: es-secret
        key: username
      password:
        secretName: es-secret
        key: password
Forward to Kafka
Kafka is a common choice for high-volume log pipelines where you need to decouple log collection from processing. Add a Kafka output:
- name: kafka-external
  type: kafka
  kafka:
    url: tls://kafka-broker.example.com:9093
    topic: openshift-logs
Reference any of these external outputs in your pipeline outputRefs to route specific log types to specific destinations. A single pipeline can reference multiple outputs for fan-out delivery, sending the same logs to both Loki and an external system simultaneously.
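As a sketch of that fan-out (assuming the splunk-external output defined earlier exists in the same ClusterLogForwarder), a single pipeline simply lists both destinations:

```yaml
pipelines:
- name: application-fanout
  inputRefs:
  - application
  outputRefs:
  - default-lokistack    # local LokiStack
  - splunk-external      # external Splunk HEC, defined under outputs
```

Each output receives its own copy of the log stream, so a slow external destination does not block delivery to Loki.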
Step 9: Configure Log Retention
Log retention in the LokiStack controls how long logs are stored before the compactor deletes them. This is configured in the limits section of the LokiStack resource.
Set global retention to 14 days:
oc patch lokistack logging-loki -n openshift-logging --type merge -p '
spec:
  limits:
    global:
      retention:
        days: 14
'
For more granular control, set per-tenant retention to keep different log types for different durations. Edit the LokiStack resource:
oc edit lokistack logging-loki -n openshift-logging
Add per-tenant retention under the limits section:
spec:
  limits:
    global:
      retention:
        days: 7
    tenants:
      application:
        retention:
          days: 14
      infrastructure:
        retention:
          days: 7
      audit:
        retention:
          days: 30
This configuration keeps application logs for 14 days, infrastructure logs for 7 days, and audit logs for 30 days. The maximum supported retention is 30 days per tenant. Adjust these values based on your compliance requirements and available object storage capacity.
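Retention directly drives object storage consumption, so it is worth a quick back-of-the-envelope check before committing to a policy. The 500 GB/day figure below is the approximate 1x.small ingestion rate from the sizing table; real usage varies with compression and actual log volume:

```shell
#!/bin/sh
# Rough object-storage estimate: daily ingestion (GB) x retention (days)
daily_gb=500          # approximate 1x.small ingestion rate
retention_days=14
echo "$((daily_gb * retention_days)) GB"   # prints "7000 GB" (raw, before Loki compression)
```

At the 1x.small rate, 14 days of application logs alone implies roughly 7 TB of raw data, so make sure your bucket quota and lifecycle policies account for it.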
After changing retention settings, the LokiStack compactor applies the new policy on its next compaction cycle. Verify the change took effect:
oc get lokistack logging-loki -n openshift-logging -o jsonpath='{.spec.limits}' | python3 -m json.tool
The output should reflect your updated retention settings for each tenant.
Conclusion
You now have a fully operational centralized logging stack on OpenShift – the Logging Operator collecting logs with Vector, Loki storing them in object storage, and the OpenShift Console providing a query interface. The ClusterLogForwarder gives you flexible routing to send logs to external systems like Splunk, Elasticsearch, or Kafka alongside the local LokiStack.
For production deployments, size the LokiStack appropriately for your log volume, enable TLS on all external forwarding outputs, and set retention policies that match your organization’s compliance requirements. Monitor the collector DaemonSet and LokiStack pods through your OpenShift observability stack to catch any log pipeline issues early.