
Setup Consul Cluster on RHEL 10 / Rocky Linux 10

Consul is a service networking platform from HashiCorp that provides service discovery, health checking, key-value storage, and multi-datacenter support. It runs as a distributed system with server and client agents that form a cluster, making it a solid choice for managing services across dynamic infrastructure.

Original content from computingforgeeks.com - post 12286

This guide walks through setting up a production-ready Consul cluster on RHEL 10 and Rocky Linux 10. We cover installing Consul from the official HashiCorp repository, configuring server and client agents, bootstrapping the cluster, setting up firewall rules, enabling the web UI, registering services, working with the KV store, configuring ACLs, and setting up DNS forwarding.

Prerequisites

Before starting, make sure you have the following in place:

  • 3 servers running RHEL 10 or Rocky Linux 10 (for the server agents) – minimum 2GB RAM and 2 vCPUs each
  • 1 or more additional servers for client agents (optional but recommended for testing)
  • Root or sudo access on all servers
  • Network connectivity between all nodes on ports 8300-8302, 8500, and 8600
  • Hostnames and IPs configured – each node must be able to resolve other nodes

The example setup in this guide uses the following nodes:

Hostname          IP Address   Role
consul-server-1   10.0.1.10    Server (bootstrap)
consul-server-2   10.0.1.11    Server
consul-server-3   10.0.1.12    Server
consul-client-1   10.0.1.20    Client

Set hostnames on each node if not already done:

sudo hostnamectl set-hostname consul-server-1

Add all nodes to /etc/hosts on every server so they can resolve each other:

sudo vi /etc/hosts

Add the following entries:

10.0.1.10 consul-server-1
10.0.1.11 consul-server-2
10.0.1.12 consul-server-3
10.0.1.20 consul-client-1
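After updating /etc/hosts, a quick loop confirms that each name resolves – a small sanity-check sketch using the hostnames from this guide:

```shell
# Verify that every cluster hostname resolves on this node.
for h in consul-server-1 consul-server-2 consul-server-3 consul-client-1; do
  if getent hosts "$h" >/dev/null; then
    echo "$h: ok"
  else
    echo "$h: NOT RESOLVING - check /etc/hosts"
  fi
done
```

Run it on every node; any "NOT RESOLVING" line means that node's /etc/hosts is incomplete.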

Step 1: Install Consul from HashiCorp Repository

Run these steps on all nodes – servers and clients. HashiCorp maintains official RPM packages in their repository, so installation is straightforward with dnf.

Install the dnf-plugins-core package to get the config-manager plugin:

sudo dnf install -y dnf-plugins-core

Add the HashiCorp repository:

sudo dnf config-manager addrepo --from-repofile=https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

Install Consul:

sudo dnf install -y consul

Verify the installation by checking the version:

consul version

The output confirms Consul 1.22.x is installed:

Consul v1.22.5
Revision xxxxxxxx
Build Date 2026-02-26T00:00:00Z
Protocol 2 spoken, understands 2 to 3

The package creates a consul system user, a systemd service unit, and the default configuration directory at /etc/consul.d/.

Step 2: Configure Consul Server Agents

Server agents participate in the Raft consensus protocol and store the cluster state. A production cluster needs 3 or 5 server nodes for fault tolerance – 3 servers tolerate losing 1 node, 5 servers tolerate losing 2.

Generate an encryption key that all agents will use for gossip traffic. Run this on any one server:

consul keygen

This outputs a base64-encoded key. Save it – you will use the same key on every node:

pUqJrVyVRj5jsiYEkM/tFQYfWyJIv4s3XkvDwy7u5Sk=
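A valid gossip key is 32 random bytes, base64-encoded. If agents refuse to start after you paste the key around, checking the decoded length is a quick sanity test (shown here with the example key above):

```shell
# A Consul gossip encryption key must decode to exactly 32 bytes.
key="pUqJrVyVRj5jsiYEkM/tFQYfWyJIv4s3XkvDwy7u5Sk="
printf '%s' "$key" | base64 -d | wc -c   # should print 32
```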

Back up the default configuration and create a new server config. On each server node, open the config file:

sudo mv /etc/consul.d/consul.hcl /etc/consul.d/consul.hcl.bak
sudo vi /etc/consul.d/consul.hcl

Add the following configuration on consul-server-1 (10.0.1.10). Replace the encrypt value with the key you generated:

datacenter = "dc1"
data_dir   = "/opt/consul"
log_level  = "INFO"

node_name  = "consul-server-1"
server     = true

bind_addr   = "10.0.1.10"
client_addr = "0.0.0.0"

bootstrap_expect = 3

retry_join = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

encrypt = "pUqJrVyVRj5jsiYEkM/tFQYfWyJIv4s3XkvDwy7u5Sk="

ui_config {
  enabled = true
}

performance {
  raft_multiplier = 1
}

Key configuration parameters:

  • server = true – marks this agent as a server node
  • bootstrap_expect = 3 – the cluster waits for 3 servers to join before electing a leader
  • retry_join – list of server IPs for automatic cluster joining
  • encrypt – gossip encryption key shared by all agents
  • bind_addr – the IP used for cluster communication (set to the node’s own IP)
  • client_addr = "0.0.0.0" – allows HTTP API and DNS access from any interface
  • raft_multiplier = 1 – tightens Raft timing for production environments

On consul-server-2 and consul-server-3, use the same configuration but change node_name and bind_addr to match each node. For consul-server-2:

node_name  = "consul-server-2"
bind_addr  = "10.0.1.11"

For consul-server-3:

node_name  = "consul-server-3"
bind_addr  = "10.0.1.12"

Create the data directory on all server nodes:

sudo mkdir -p /opt/consul
sudo chown consul:consul /opt/consul
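Before starting the service, Consul's built-in validator can check the configuration directory for mistakes – worth running on each node after writing its config:

```shell
# consul validate parses every file in the directory and reports
# syntax errors and invalid settings before the agent ever starts.
sudo consul validate /etc/consul.d/
```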

Step 3: Configure Consul Client Agents

Client agents run on application nodes and forward requests to servers. They handle service registration, health checks, and DNS queries locally. Every node running services that Consul should track needs a client agent.

On the client node (consul-client-1), create the configuration file:

sudo mv /etc/consul.d/consul.hcl /etc/consul.d/consul.hcl.bak
sudo vi /etc/consul.d/consul.hcl

Add the following client configuration:

datacenter = "dc1"
data_dir   = "/opt/consul"
log_level  = "INFO"

node_name  = "consul-client-1"
server     = false

bind_addr   = "10.0.1.20"
client_addr = "0.0.0.0"

retry_join = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

encrypt = "pUqJrVyVRj5jsiYEkM/tFQYfWyJIv4s3XkvDwy7u5Sk="

The client config is simpler – server is set to false, and there is no bootstrap_expect or ui_config. The retry_join list points to the server IPs so the client automatically discovers and joins the cluster.

Create the data directory:

sudo mkdir -p /opt/consul
sudo chown consul:consul /opt/consul

Step 4: Bootstrap the Consul Cluster

With configuration in place on all nodes, start and enable the Consul service. Begin with the server nodes, then bring up the clients.

On all three server nodes, enable and start Consul:

sudo systemctl enable --now consul

Check the service status to confirm it started without errors:

sudo systemctl status consul

The output should show the service as active and running:

● consul.service - "HashiCorp Consul - A service mesh solution"
     Loaded: loaded (/usr/lib/systemd/system/consul.service; enabled; preset: disabled)
     Active: active (running) since ...
   Main PID: 12345 (consul)
      Tasks: 8
     Memory: 40.0M
     CGroup: /system.slice/consul.service
             └─12345 /usr/bin/consul agent -config-dir=/etc/consul.d/

Once all three servers are running, the cluster automatically elects a leader when bootstrap_expect is satisfied. Next, start the client agent on consul-client-1:

sudo systemctl enable --now consul

Verify all members have joined the cluster by running this on any node:

consul members

You should see all four nodes listed with their roles:

Node              Address          Status  Type    Build   Protocol  DC   Partition  Segment
consul-server-1   10.0.1.10:8301   alive   server  1.22.5  2         dc1  default    <all>
consul-server-2   10.0.1.11:8301   alive   server  1.22.5  2         dc1  default    <all>
consul-server-3   10.0.1.12:8301   alive   server  1.22.5  2         dc1  default    <all>
consul-client-1   10.0.1.20:8301   alive   client  1.22.5  2         dc1  default    <default>

Check which server is the current leader:

consul operator raft list-peers

The output shows each server’s Raft state – one will be listed as leader and the others as follower:

Node              ID                                    Address          State     Voter  RaftProtocol  Commit Index  Trails Leader By
consul-server-1   xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  10.0.1.10:8300   leader    true   3             25            -
consul-server-2   xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  10.0.1.11:8300   follower  true   3             25            0
consul-server-3   xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  10.0.1.12:8300   follower  true   3             25            0
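The same Raft status is exposed over the HTTP API, which is handy for scripted health checks. A small sketch, assuming the agent's HTTP API is listening on localhost:8500:

```shell
# /v1/status/leader returns the leader's address, /v1/status/peers the voters.
leader=$(curl -s http://localhost:8500/v1/status/leader)
peers=$(curl -s http://localhost:8500/v1/status/peers)
echo "leader: $leader"
# Count the voting peers from the JSON array the API returns.
echo "$peers" | python3 -c 'import sys, json; print("peers:", len(json.load(sys.stdin)))'
```

In a healthy 3-server cluster the peer count should be 3 and the leader address non-empty.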

Step 5: Configure Firewall Rules

Consul uses several ports for different protocols. If firewalld is running on your servers (which it should be on any RHEL-based system), open the required ports on all nodes.

Add firewall rules for all Consul ports:

sudo firewall-cmd --permanent --add-port=8300/tcp
sudo firewall-cmd --permanent --add-port=8301/tcp
sudo firewall-cmd --permanent --add-port=8301/udp
sudo firewall-cmd --permanent --add-port=8302/tcp
sudo firewall-cmd --permanent --add-port=8302/udp
sudo firewall-cmd --permanent --add-port=8500/tcp
sudo firewall-cmd --permanent --add-port=8600/tcp
sudo firewall-cmd --permanent --add-port=8600/udp
sudo firewall-cmd --reload

Verify the ports are open:

sudo firewall-cmd --list-ports

The output should include all the Consul ports:

8300/tcp 8301/tcp 8301/udp 8302/tcp 8302/udp 8500/tcp 8600/tcp 8600/udp
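To confirm the rules actually pass traffic between nodes, a pure-bash TCP probe works even where nc is not installed – a sketch; substitute the peer IP you want to test:

```shell
# Probe the Serf LAN port on a peer node; /dev/tcp is a bash built-in path.
peer=10.0.1.11
if timeout 2 bash -c "exec 3<>/dev/tcp/$peer/8301" 2>/dev/null; then
  echo "8301/tcp on $peer: reachable"
else
  echo "8301/tcp on $peer: blocked or closed"
fi
```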

Step 6: Access the Consul Web UI

The web UI is enabled by the ui_config block in the server configuration. It runs on port 8500 and provides a dashboard for services, nodes, key-value data, and ACL management.

Open your browser and navigate to any server node’s IP on port 8500:

http://10.0.1.10:8500

The UI shows all registered services, node health, and cluster members. From the dashboard you can browse the KV store, check service health, and manage intentions (when using Consul Connect).

For production environments, put the UI behind a reverse proxy with TLS and authentication. Exposing port 8500 directly to the internet is a security risk.

Step 7: Register Services with Consul

Service registration tells Consul about applications running on your nodes. Registered services are discoverable through DNS and the HTTP API, and Consul runs health checks to monitor their availability.

Create a service definition file on the client node. For example, register a web service running on port 80:

sudo vi /etc/consul.d/web-service.hcl

Add the following service definition:

service {
  name = "web"
  port = 80
  tags = ["nginx", "production"]

  check {
    http     = "http://localhost:80"
    interval = "10s"
    timeout  = "3s"
  }
}
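Not every service speaks HTTP. For something like a database, a TCP check is the usual alternative. The sketch below writes a hypothetical "db" service definition to /tmp for inspection – on a real node the file would go in /etc/consul.d/ like the web service above:

```shell
# A TCP check passes whenever Consul can open a connection to the address.
cat > /tmp/db-service.hcl <<'EOF'
service {
  name = "db"
  port = 5432
  tags = ["postgres"]

  check {
    tcp      = "localhost:5432"
    interval = "10s"
    timeout  = "2s"
  }
}
EOF
grep 'tcp' /tmp/db-service.hcl
```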

Reload Consul to pick up the new service definition:

consul reload

Verify the service is registered by querying the HTTP API:

curl -s http://localhost:8500/v1/catalog/service/web | python3 -m json.tool

The output shows the registered service with its node, address, and port:

[
    {
        "ID": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "Node": "consul-client-1",
        "Address": "10.0.1.20",
        "ServiceName": "web",
        "ServicePort": 80,
        "ServiceTags": ["nginx", "production"]
    }
]

You can also query the service through Consul DNS. The DNS interface runs on port 8600:

dig @localhost -p 8600 web.service.consul

The DNS response returns the IP address of the node hosting the service:

;; ANSWER SECTION:
web.service.consul.     0       IN      A       10.0.1.20
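A records only return addresses. To get the service port as well, query the SRV record:

```shell
# SRV lookups return the port and target node in addition to the address.
dig @localhost -p 8600 web.service.consul SRV +short
```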

To register a service via the HTTP API instead of config files:

curl -X PUT http://localhost:8500/v1/agent/service/register \
  -H "Content-Type: application/json" \
  -d '{
    "Name": "api",
    "Port": 8080,
    "Tags": ["backend", "v2"],
    "Check": {
      "HTTP": "http://localhost:8080/health",
      "Interval": "15s"
    }
  }'
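Services registered through the API are removed the same way, by service ID – here the "api" service registered above (when no explicit ID is given, Consul uses the service name):

```shell
# Deregister the service from the local agent.
curl -X PUT http://localhost:8500/v1/agent/service/deregister/api
```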

Step 8: Use the Key-Value Store

Consul includes a distributed key-value store that is useful for dynamic configuration, feature flags, leader election, and coordination between services. Data is replicated across all server nodes automatically.

Store a value using the CLI:

consul kv put app/config/max_connections 100

The command confirms the write:

Success! Data written to: app/config/max_connections

Retrieve the value:

consul kv get app/config/max_connections

This returns the stored value:

100

Store multiple configuration values for an application:

consul kv put app/config/db_host 10.0.1.50
consul kv put app/config/db_port 5432
consul kv put app/config/cache_ttl 300

List all keys under a prefix:

consul kv get -recurse app/config/

The output shows all keys and their values under the prefix:

app/config/cache_ttl:300
app/config/db_host:10.0.1.50
app/config/db_port:5432
app/config/max_connections:100

Delete a key when it is no longer needed:

consul kv delete app/config/cache_ttl

You can also interact with the KV store through the HTTP API:

curl -X PUT -d 'redis.example.com' http://localhost:8500/v1/kv/app/config/redis_host
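Note that the KV read endpoint returns the value base64-encoded inside a JSON object; appending ?raw to the URL returns just the plain value instead. Decoding a sample Value field locally:

```shell
# The "Value" field from /v1/kv/... is base64; decode it to get the data back.
# (curl http://localhost:8500/v1/kv/app/config/redis_host?raw skips this step.)
encoded="cmVkaXMuZXhhbXBsZS5jb20="
printf '%s' "$encoded" | base64 -d; echo   # prints redis.example.com
```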

Step 9: Configure ACLs for Consul Cluster

Access Control Lists (ACLs) restrict who can read, write, and manage Consul resources. In production, always enable ACLs to prevent unauthorized access to services, KV data, and cluster operations.

Add the ACL configuration block to the server config on all server nodes. Open the config file:

sudo vi /etc/consul.d/consul.hcl

Append the following ACL configuration:

acl {
  enabled        = true
  default_policy = "deny"
  down_policy    = "extend-cache"

  tokens {
    initial_management = "GENERATE-A-UUID-HERE"
  }
}

Generate a UUID for the initial management token. You can use uuidgen, which is available on RHEL systems:

uuidgen

This generates a UUID like:

b1gs3cr3-uuid-here-xxxx-xxxxxxxxxxxx

Replace GENERATE-A-UUID-HERE in the config with your generated UUID. Use the same UUID on all server nodes.

Also add the ACL block on client nodes, but without the initial_management token. Instead, set an agent token that allows the client to register itself (this token is created later in this step, so use a placeholder for now):

acl {
  enabled        = true
  default_policy = "deny"
  down_policy    = "extend-cache"

  tokens {
    agent = "CLIENT-AGENT-TOKEN-HERE"
  }
}

Restart Consul on all nodes after updating the config:

sudo systemctl restart consul

Bootstrap the ACL system. If you set initial_management in the server config as above, that UUID already serves as the management token; otherwise, run this once on any server node to generate one:

consul acl bootstrap

The output shows the bootstrap token details including the SecretID, which is your management token:

AccessorID:       xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
SecretID:         b1gs3cr3-uuid-here-xxxx-xxxxxxxxxxxx
Description:      Bootstrap Token (Global Management)
Local:            false
Create Time:      2026-03-22 10:00:00.000000000 +0000 UTC
Policies:
   00000000-0000-0000-0000-000000000001 - global-management

Export the management token so subsequent CLI commands are authenticated:

export CONSUL_HTTP_TOKEN="b1gs3cr3-uuid-here-xxxx-xxxxxxxxxxxx"

Create a policy for client agents. First, write a policy file:

sudo vi /etc/consul.d/agent-policy.hcl

Add the following policy rules:

node_prefix "" {
  policy = "write"
}

service_prefix "" {
  policy = "read"
}

Create the policy in Consul:

consul acl policy create -name "agent-policy" -rules @/etc/consul.d/agent-policy.hcl

Create a token tied to this policy for use by client agents:

consul acl token create -description "Agent Token" -policy-name "agent-policy"

Copy the SecretID from the output and use it as the agent token in the client configuration files, then restart the client agents.
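As an alternative to editing the client config file and restarting, the token can also be applied to a running agent with the CLI – a placeholder is shown here; paste the SecretID from the token you just created:

```shell
# Apply the agent token at runtime; it survives restarts only if
# acl.enable_token_persistence is enabled in the agent config.
consul acl set-agent-token agent "PASTE-AGENT-SECRET-ID"
```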

Step 10: Set Up DNS Forwarding

Consul provides a DNS interface on port 8600 that resolves service names (like web.service.consul). For applications to use Consul DNS transparently, configure systemd-resolved or dnsmasq to forward .consul domain queries to Consul.

Option 1: Using systemd-resolved (recommended on RHEL 10)

Create a drop-in configuration for systemd-resolved to forward .consul queries:

sudo mkdir -p /etc/systemd/resolved.conf.d
sudo vi /etc/systemd/resolved.conf.d/consul.conf

Add the following configuration:

[Resolve]
DNS=127.0.0.1:8600
Domains=~consul

Restart systemd-resolved to apply the changes:

sudo systemctl restart systemd-resolved

Test DNS resolution for a registered service:

dig web.service.consul

The query should return the IP address of the node running the web service without specifying port 8600.

Option 2: Using dnsmasq

Install dnsmasq if not already present:

sudo dnf install -y dnsmasq

Create a Consul-specific dnsmasq configuration:

sudo vi /etc/dnsmasq.d/consul.conf

Add the following line to forward .consul queries to the Consul DNS interface:

server=/consul/127.0.0.1#8600

Enable and start dnsmasq:

sudo systemctl enable --now dnsmasq

Update /etc/resolv.conf to use dnsmasq (127.0.0.1) as the primary DNS server, or configure your DHCP client to set it automatically.

Consul Port Reference

The following table lists all ports used by Consul and their purpose. Make sure these are open between all cluster nodes:

Port   Protocol   Purpose
8300   TCP        Server RPC – used by servers for Raft consensus and replication
8301   TCP/UDP    Serf LAN gossip – communication between agents in the same datacenter
8302   TCP/UDP    Serf WAN gossip – communication between servers across datacenters
8500   TCP        HTTP API and Web UI
8501   TCP        HTTPS API (when TLS is configured)
8502   TCP        gRPC API (used by Envoy proxies in service mesh)
8600   TCP/UDP    DNS interface for service discovery queries

Conclusion

You now have a working 3-node Consul cluster on RHEL 10 / Rocky Linux 10 with service registration, KV storage, ACLs, and DNS forwarding configured. The cluster provides service discovery and health checking out of the box, with the web UI for visual management.

For production hardening, enable TLS encryption for all RPC and HTTP traffic, set up monitoring with Prometheus and Grafana using the /v1/agent/metrics endpoint, configure automated snapshots with consul snapshot save for disaster recovery, and consider using Consul Connect for mTLS between services.
