Chrony is the default NTP implementation on Red Hat Enterprise Linux CoreOS (RHCOS) and Fedora CoreOS (FCOS), which form the foundation of OpenShift and OKD clusters. Accurate time synchronization across all nodes is critical for distributed systems – TLS certificate validation, log correlation, etcd consistency, and scheduling all depend on synchronized clocks. When nodes drift by even a few seconds, you start seeing cryptic authentication failures and etcd leader election problems.
This guide covers how to configure Chrony NTP on OpenShift 4.x and OKD 4.x clusters using MachineConfig resources. We walk through checking current time sync status, creating and applying MachineConfig manifests for custom NTP servers, handling air-gapped environments, and troubleshooting time drift. The approach uses the Machine Config Operator (MCO) to manage chrony configuration declaratively across all nodes.

Prerequisites
- A running OpenShift 4.x or OKD 4.x cluster
- The oc CLI tool installed and authenticated with cluster-admin privileges
- Access to NTP servers from all cluster nodes (UDP port 123 outbound)
- Basic understanding of Kubernetes cluster operations
Step 1: Check Current Time Sync Status on Nodes
Before making changes, check the current chrony configuration and synchronization state on your cluster nodes. Use oc debug node to open a shell on a node and inspect chrony status.
First, list all nodes in the cluster to identify which ones to check.
oc get nodes
The output shows all control plane and worker nodes with their status and roles:
NAME                       STATUS   ROLES                  AGE   VERSION
master-0.ocp.example.com   Ready    control-plane,master   90d   v1.30.4+openshift.1
master-1.ocp.example.com   Ready    control-plane,master   90d   v1.30.4+openshift.1
master-2.ocp.example.com   Ready    control-plane,master   90d   v1.30.4+openshift.1
worker-0.ocp.example.com   Ready    worker                 90d   v1.30.4+openshift.1
worker-1.ocp.example.com   Ready    worker                 90d   v1.30.4+openshift.1
Open a debug shell on one of the nodes and check the current chrony synchronization sources.
oc debug node/master-0.ocp.example.com -- chroot /host chronyc sources -v
The output lists all configured NTP sources and their synchronization state. The * marker indicates the currently selected source:
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* time.cloudflare.com           3   6   377    34   -245us[ -312us] +/-   15ms
^- ntp.ubuntu.com                2   6   377    35   +892us[ +892us] +/-   42ms
Check if chrony is tracking a valid time source.
oc debug node/master-0.ocp.example.com -- chroot /host chronyc tracking
The tracking output confirms the reference server, stratum, and current offset from the NTP source:
Reference ID : A29FC801 (time.cloudflare.com)
Stratum : 4
Ref time (UTC) : Sat Mar 22 10:15:30 2026
System time : 0.000000245 seconds fast of NTP time
Last offset : -0.000067312 seconds
RMS offset : 0.000089421 seconds
Frequency : 3.241 ppm slow
Residual freq : -0.001 ppm
Skew : 0.032 ppm
Root delay : 0.029841000 seconds
Root dispersion : 0.001245000 seconds
Update interval : 64.4 seconds
Leap status : Normal
View the current chrony configuration file on the node.
oc debug node/master-0.ocp.example.com -- chroot /host cat /etc/chrony.conf
Step 2: Create MachineConfig for Chrony NTP
OpenShift uses the Machine Config Operator to manage node configuration declaratively. To change chrony settings, you create a MachineConfig resource that writes the desired /etc/chrony.conf file to nodes. The file content is embedded in the Ignition spec as a base64-encoded data URL.
Start by creating the chrony configuration content. This example uses the public NTP pool servers.
cat > /tmp/chrony.conf << 'CHRONYEOF'
pool 2.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
CHRONYEOF
Generate the base64-encoded version of this config file. The MachineConfig Ignition spec requires the file content in base64 format.
CHRONY_BASE64=$(base64 -w 0 < /tmp/chrony.conf)
echo "$CHRONY_BASE64"
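A common failure mode is a corrupted data URL caused by stray newlines in the encoded string (for example, when -w 0 is forgotten). The round trip can be sanity-checked locally before the value goes anywhere near the MCO; this is a self-contained sketch, and the scratch file name is arbitrary:

```shell
# Self-contained sanity check: encode a chrony.conf, decode it again,
# and confirm the round trip is lossless and newline-free.
printf 'pool 2.pool.ntp.org iburst\ndriftfile /var/lib/chrony/drift\n' > /tmp/chrony-check.conf
CHRONY_BASE64=$(base64 -w 0 < /tmp/chrony-check.conf)

# base64 -w 0 must produce a single line; embedded newlines would
# corrupt the data: URL in the MachineConfig.
[ "$(printf '%s' "$CHRONY_BASE64" | wc -l)" -eq 0 ] && echo "single line: OK"

# Decoding must reproduce the original file byte for byte.
printf '%s' "$CHRONY_BASE64" | base64 -d | cmp -s - /tmp/chrony-check.conf && echo "round trip: OK"
```

If either check fails, re-run the encoding step before building the MachineConfig.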
Now create the MachineConfig YAML for worker nodes. This manifest tells the MCO to write the chrony configuration to all nodes matching the worker machine config pool.
cat > /tmp/99-worker-chrony.yaml << MCEOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-chrony
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,${CHRONY_BASE64}
        mode: 0644
        overwrite: true
        path: /etc/chrony.conf
MCEOF
Create a similar MachineConfig for the control plane (master) nodes. Time accuracy on master nodes is especially important because etcd relies on tight clock synchronization for leader election and consensus.
cat > /tmp/99-master-chrony.yaml << MCEOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-chrony
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,${CHRONY_BASE64}
        mode: 0644
        overwrite: true
        path: /etc/chrony.conf
MCEOF
Step 3: Apply MachineConfig to the Cluster
Apply both MachineConfig manifests to the cluster. The MCO picks up these resources and begins rolling out the configuration to each node pool.
oc apply -f /tmp/99-worker-chrony.yaml
oc apply -f /tmp/99-master-chrony.yaml
The output confirms both resources were created:
machineconfig.machineconfiguration.openshift.io/99-worker-chrony created
machineconfig.machineconfiguration.openshift.io/99-master-chrony created
Verify the MachineConfig resources exist in the cluster.
oc get machineconfig | grep chrony
Both chrony MachineConfig entries should appear in the list:
99-master-chrony 3.2.0 5s
99-worker-chrony 3.2.0 8s
Step 4: Verify MCO Rollout Across Nodes
After applying the MachineConfig, the Machine Config Operator cordons, drains, and reboots nodes to apply the new configuration, one node at a time per pool by default. This rolling update keeps the cluster available throughout. Monitor the rollout progress with the machineconfigpool resource.
oc get machineconfigpool
During the rollout, the UPDATING column shows True and DEGRADED stays False. Wait until all pools show UPDATED as True:
NAME     CONFIG                                     UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-a1b2c3d4e5f6a1b2c3d4e5f6   True      False      False      3              3                   3                      0                      90d
worker   rendered-worker-f6e5d4c3b2a1f6e5d4c3b2a1   True      False      False      2              2                   2                      0                      90d
Alternatively, block until the rollout finishes; oc wait returns once each pool reports the Updated condition. Each node reboot can take several minutes, so allow a generous timeout.
oc wait --for=condition=Updated machineconfigpool/master --timeout=600s
oc wait --for=condition=Updated machineconfigpool/worker --timeout=600s
The wait command returns when the rollout finishes:
machineconfigpool.machineconfiguration.openshift.io/master condition met
machineconfigpool.machineconfiguration.openshift.io/worker condition met
If a pool shows DEGRADED as True, check the MCO logs for errors. The most common issues are malformed Ignition configs or incorrect base64 encoding.
oc describe machineconfigpool worker
Step 5: Verify Chrony Configuration on Nodes
After the MCO rollout completes, verify that the new chrony configuration is active on the nodes. Use oc debug node to connect to each node and check the configuration file and sync status.
Confirm the /etc/chrony.conf file has been updated on a worker node.
oc debug node/worker-0.ocp.example.com -- chroot /host cat /etc/chrony.conf
The output should match the configuration you defined in Step 2:
pool 2.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
Verify chrony is synchronizing with the new NTP pool.
oc debug node/worker-0.ocp.example.com -- chroot /host chronyc sources
You should see your configured NTP pool servers with the ^* marker on the active source:
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* ntp1.example.pool.ntp.org     2   6   377    22   +125us[ +180us] +/-   18ms
^- ntp2.example.pool.ntp.org     2   6   377    24   +340us[ +340us] +/-   25ms
^- ntp3.example.pool.ntp.org     3   6   377    23   -512us[ -512us] +/-   30ms
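A note on reading this output: the Reach column is an octal bitmask of the last eight polls, so 377 means every recent poll got a response. If you save the sources output to inspect later, a short awk filter (our own convenience sketch, not a chrony tool) counts the fully reachable sources:

```shell
# Count sources whose Reach field is 377 (all of the last eight polls
# answered). Feed it saved `chronyc sources` output; 0 means chronyd is
# not getting responses from any server.
count_reachable() {
  awk '$1 ~ /^\^/ && $5 == 377 { n++ } END { print n + 0 }'
}

# Demo against a captured two-line sample (one healthy, one flaky):
printf '^* ntp1.example.com 2 6 377 22 +125us\n^- ntp2.example.com 2 6 17 24 +340us\n' | count_reachable
# prints 1
```

The `$1 ~ /^\^/` guard skips the header lines so only actual source rows are counted.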
Check the chrony service status to confirm it is running without errors.
oc debug node/worker-0.ocp.example.com -- chroot /host systemctl status chronyd
The service should show active and running with no error messages:
● chronyd.service - NTP client/server
     Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; preset: enabled)
     Active: active (running) since Sat 2026-03-22 10:20:00 UTC; 5min ago
   Main PID: 1234 (chronyd)
      Tasks: 1 (limit: 48832)
     Memory: 2.1M
        CPU: 45ms
     CGroup: /system.slice/chronyd.service
             └─1234 /usr/sbin/chronyd -F 2
Repeat these checks on a master node to verify both pools received the configuration.
oc debug node/master-0.ocp.example.com -- chroot /host chronyc sources
Step 6: Configure Custom NTP Servers
Many organizations run internal NTP servers for security and compliance. To point your OpenShift cluster at specific NTP servers instead of public pools, update the chrony configuration with explicit server directives.
Create a chrony configuration that uses your internal NTP servers. Replace the IP addresses with your actual NTP server addresses.
cat > /tmp/chrony-custom.conf << 'CHRONYEOF'
server 10.0.1.10 iburst
server 10.0.1.11 iburst
server 10.0.1.12 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
CHRONYEOF
The iburst option sends a burst of requests on initial synchronization to speed up the first clock correction. This is useful during node boot when the clock may be significantly off.
Generate the base64 encoding and update the MachineConfig for both node roles.
CHRONY_BASE64=$(base64 -w 0 < /tmp/chrony-custom.conf)
cat > /tmp/99-worker-chrony.yaml << MCEOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-chrony
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,${CHRONY_BASE64}
        mode: 0644
        overwrite: true
        path: /etc/chrony.conf
MCEOF
cat > /tmp/99-master-chrony.yaml << MCEOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-chrony
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,${CHRONY_BASE64}
        mode: 0644
        overwrite: true
        path: /etc/chrony.conf
MCEOF
Apply the updated MachineConfig resources. Since the MachineConfig names are the same, this updates the existing resources and triggers a new rolling reboot.
oc apply -f /tmp/99-worker-chrony.yaml
oc apply -f /tmp/99-master-chrony.yaml
The MCO detects the configuration change and begins the rollout:
machineconfig.machineconfiguration.openshift.io/99-worker-chrony configured
machineconfig.machineconfiguration.openshift.io/99-master-chrony configured
Step 7: Configure Chrony for Air-Gapped Environments
Air-gapped (disconnected) OpenShift clusters cannot reach public NTP pools. In these environments, you need a local NTP server within the network. One common pattern is to configure a dedicated Chrony NTP server on RHEL that syncs with a GPS clock or manual time source, then point all cluster nodes to it.
For air-gapped clusters, the chrony configuration uses the local NTP server and includes a local stratum fallback. This ensures nodes can still maintain time synchronization even if the local NTP server temporarily goes down.
cat > /tmp/chrony-airgap.conf << 'CHRONYEOF'
server 192.168.10.5 iburst prefer
server 192.168.10.6 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
local stratum 10
CHRONYEOF
The prefer keyword marks the primary NTP server as the preferred source. The local stratum 10 directive allows the node to act as a time source at stratum 10 when it loses contact with all configured servers - this prevents cascading time drift across the cluster during network outages.
Generate the base64 encoding and create MachineConfig manifests following the same pattern from Step 2. Apply them with oc apply and monitor the rollout as shown in Step 4.
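If you regenerate these manifests often, the whole pattern can be wrapped in a small shell function. This is a convenience sketch of our own, not an oc or MCO feature; the function name make_chrony_mc and the 99-<role>-chrony naming convention are assumptions that simply mirror the manifests shown earlier.

```shell
# Render a chrony MachineConfig for a given role from any chrony.conf
# file. Usage: make_chrony_mc <role> <path-to-chrony.conf> > out.yaml
make_chrony_mc() {
  local role=$1 conf=$2
  local b64
  b64=$(base64 -w 0 < "$conf")   # single-line encoding for the data URL
  cat << EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${role}
  name: 99-${role}-chrony
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,${b64}
        mode: 0644
        overwrite: true
        path: /etc/chrony.conf
EOF
}

# Example:
# make_chrony_mc worker /tmp/chrony-airgap.conf > /tmp/99-worker-chrony.yaml
# make_chrony_mc master /tmp/chrony-airgap.conf > /tmp/99-master-chrony.yaml
```

The same function works for any custom machine config pool by passing its role label.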
For clusters where master nodes serve as the NTP source for workers, configure the masters with direct NTP server access and workers to sync from the master nodes.
Master node chrony config pointing to the local NTP server:
server 192.168.10.5 iburst prefer
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
local stratum 10
allow 10.128.0.0/14
Worker node chrony config pointing to master nodes:
server 10.0.1.100 iburst
server 10.0.1.101 iburst
server 10.0.1.102 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
The allow 10.128.0.0/14 directive permits clients in that range to query chronyd on the master nodes. Note that 10.128.0.0/14 is the default cluster (pod) network CIDR; worker chronyd instances query from node IPs, so make sure the allowed range also covers your machine network, or add a second allow directive for it.
Step 8: Troubleshoot Chrony Time Drift on OpenShift
Time drift problems in OpenShift typically show up as etcd leader election failures, TLS certificate errors ("certificate is not yet valid"), or log timestamps that don't match across nodes. Here are the key troubleshooting steps.
Check the time offset on each node. An offset greater than 500ms warrants investigation.
for node in $(oc get nodes -o name); do
echo "=== $node ==="
oc debug $node -- chroot /host chronyc tracking 2>/dev/null | grep -E "System time|Last offset|Leap status"
done
The output shows the time offset and sync status for each node:
=== node/master-0.ocp.example.com ===
System time : 0.000000312 seconds fast of NTP time
Last offset : -0.000045123 seconds
Leap status : Normal
=== node/worker-0.ocp.example.com ===
System time : 0.000001245 seconds slow of NTP time
Last offset : +0.000089456 seconds
Leap status : Normal
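To turn that loop into a pass/fail check, the System time line can be parsed with awk. This is a sketch of our own; the 0.5-second limit is an assumption that matches the 500 ms guideline above.

```shell
# Flag any node whose current offset exceeds a threshold. The fourth
# field of the "System time" line is the absolute offset in seconds.
check_offset() {
  awk -v limit=0.5 '/^System time/ {
    off = $4 + 0
    print (off > limit ? "DRIFT" : "OK") " offset=" $4 "s"
  }'
}

# Demo against a captured tracking line:
echo "System time     : 0.000001245 seconds slow of NTP time" | check_offset
# prints: OK offset=0.000001245s
```

In the cluster loop, pipe each node's chronyc tracking output through check_offset and alert on any DRIFT line.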
If a node shows no NTP sources or "Not synchronised", verify the node can reach the NTP servers on UDP port 123.
oc debug node/worker-0.ocp.example.com -- chroot /host chronyc activity
The activity output should show at least one online source:
200 OK
3 sources online
0 sources offline
0 sources doing burst (return to online)
0 sources doing burst (return to offline)
0 sources with unknown address
If all sources are offline, check network connectivity from the node to your NTP servers. NTP runs over UDP, so a TCP tool like curl cannot verify the path. Instead, run a one-shot chronyd query, which performs a real NTP measurement without touching the clock.
oc debug node/worker-0.ocp.example.com -- chroot /host chronyd -Q -t 5 'server 10.0.1.10 iburst'
Force an immediate time sync on a node when the offset is large. The makestep command forces chrony to step the clock immediately rather than slewing gradually.
oc debug node/worker-0.ocp.example.com -- chroot /host chronyc makestep
Check the MachineConfig Operator logs if nodes are not picking up the chrony configuration after applying MachineConfig.
oc logs -n openshift-machine-config-operator -l k8s-app=machine-config-daemon --tail=50
Common issues and fixes:
| Problem | Cause | Fix |
|---|---|---|
| MachineConfigPool stuck UPDATING | Node failed to reboot or apply config | Check node status with oc get nodes and MCO daemon logs |
| MachineConfigPool DEGRADED | Invalid Ignition config or bad base64 | Re-encode the chrony.conf and verify base64 is on a single line |
| "Not synchronised" on node | NTP server unreachable or firewall blocking UDP 123 | Verify network path and firewall rules for UDP port 123 |
| Large time offset after node reboot | makestep threshold too small | Increase the makestep value (e.g. makestep 10 3) |
| etcd leader election failures | Clock skew between members exceeds etcd's tolerance (it warns at 1 second of peer clock difference) | Fix chrony sync first, then check etcd health |
Conclusion
You have configured Chrony NTP synchronization on your OpenShift / OKD 4.x cluster using MachineConfig resources. The MCO handles rolling out the configuration to all nodes without cluster downtime. For production clusters, point all nodes at internal NTP servers rather than public pools, monitor time offset as part of your cluster health checks, and set up alerting for drift greater than 500ms. The OpenShift documentation on machine configuration covers additional MCO options for advanced use cases.