Consul is a service networking tool from HashiCorp that provides service discovery, health checking, a distributed key-value store, and multi-datacenter support. It solves the fundamental problem of how services find and communicate with each other in dynamic infrastructure where IP addresses change constantly.
This guide walks through setting up a production-ready Consul cluster on Ubuntu 24.04 LTS and Debian 13. We cover server agent installation, client configuration, cluster bootstrapping, ACL setup, DNS forwarding, and the web UI. All commands are tested against Consul 1.22.x from the official HashiCorp APT repository.
Prerequisites
Before starting, make sure you have the following ready:
- 3 servers running Ubuntu 24.04 LTS or Debian 13 for Consul server agents (minimum 2GB RAM, 2 vCPUs each)
- 1 or more servers for Consul client agents (any services that need to register with Consul)
- Root or sudo access on all nodes
- Network connectivity between all nodes on ports 8300-8302, 8500, and 8600
- Hostnames and static IPs configured on each server
For this guide, we use the following server layout:
| Hostname | IP Address | Role |
|---|---|---|
| consul-server-1 | 10.0.1.10 | Consul Server (bootstrap) |
| consul-server-2 | 10.0.1.11 | Consul Server |
| consul-server-3 | 10.0.1.12 | Consul Server |
| consul-client-1 | 10.0.1.20 | Consul Client |
Step 1: Install Consul from HashiCorp Repository
Run these commands on all nodes – both servers and clients. HashiCorp maintains an official APT repository with signed packages for Debian-based distributions.
Add the HashiCorp GPG key to verify package authenticity:
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
Add the HashiCorp repository to your system sources. On Debian the command falls back to lsb_release -cs, so install the lsb-release package first if it is missing:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(grep -oP '(?<=UBUNTU_CODENAME=).*' /etc/os-release || lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
Update the package index and install Consul:
sudo apt update && sudo apt install consul -y
Verify the installation by checking the Consul version:
consul version
The output confirms Consul is installed and ready:
Consul v1.22.5
Revision xxxxxxxx
Build Date 2026-02-26T00:00:00Z
Protocol 2 spoken, understands 2 to 3
The installation creates a consul system user and group, the default configuration directory at /etc/consul.d/, and a data directory at /opt/consul.
Step 2: Configure Consul Server Agents
Consul server agents participate in the Raft consensus protocol, store cluster state, and handle queries. You need a minimum of 3 servers for production to tolerate one node failure.
First, generate an encryption key that all agents will share for gossip protocol encryption:
consul keygen
This outputs a base64-encoded 32-byte key. Save this key – you will use it on every node in the cluster:
pUqJrVyVRj5jsiYEkM/tFQYfWUJqaVXvY9DMTBJX9dI=
On each server node, create the server configuration file. Replace the IP addresses and encryption key with your actual values:
sudo vi /etc/consul.d/consul.hcl
Add the following configuration for the server agent:
datacenter = "dc1"
data_dir = "/opt/consul"
log_level = "INFO"
# Server mode - participates in Raft consensus
server = true
bootstrap_expect = 3
# Bind to the private interface IP (change per node)
bind_addr = "10.0.1.10"
# Allow HTTP API and DNS from all interfaces
client_addr = "0.0.0.0"
# Automatically join these server nodes on startup
retry_join = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]
# Enable the web UI
ui_config {
enabled = true
}
# Enable service mesh (Consul Connect)
connect {
enabled = true
}
# Gossip encryption key - same on all nodes
encrypt = "pUqJrVyVRj5jsiYEkM/tFQYfWUJqaVXvY9DMTBJX9dI="
# Performance tuning for production
performance {
raft_multiplier = 1
}
The key settings here are:
- server = true – marks this agent as a server that participates in consensus
- bootstrap_expect = 3 – waits for 3 servers to join before electing a leader
- bind_addr – the IP address used for cluster communication (must be unique per node)
- retry_join – automatically discovers and joins other server nodes
- encrypt – enables gossip encryption using the shared key
On consul-server-2, set bind_addr = "10.0.1.11". On consul-server-3, set bind_addr = "10.0.1.12". Everything else stays the same across all server nodes.
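If you would rather ship one identical file to every node, Consul also accepts go-sockaddr templates for bind_addr. A sketch, assuming each node has exactly one private IP address (with multiple private addresses, stick to explicit per-node values):

```hcl
# Resolved at agent startup on each node, so the same configuration file
# can be deployed everywhere. Only reliable when the node has a single
# private IP address.
bind_addr = "{{ GetPrivateIP }}"
```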
Step 3: Configure Consul Client Agents
Client agents run on every node that hosts services you want to register with Consul. They forward requests to server agents and run local health checks.
On each client node, create the configuration file:
sudo vi /etc/consul.d/consul.hcl
Add the following client configuration:
datacenter = "dc1"
data_dir = "/opt/consul"
log_level = "INFO"
# Client mode (server = false is the default, but explicit is clearer)
server = false
# Bind to this node's private IP
bind_addr = "10.0.1.20"
# Allow local services to query HTTP API and DNS
client_addr = "0.0.0.0"
# Join the server cluster
retry_join = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]
# Same gossip encryption key as servers
encrypt = "pUqJrVyVRj5jsiYEkM/tFQYfWUJqaVXvY9DMTBJX9dI="
# Enable service mesh
connect {
enabled = true
}
The client configuration is simpler – no bootstrap_expect, no ui_config, and server is set to false. The retry_join list points to all server nodes so clients can find the cluster even if one server is down.
Step 4: Bootstrap the Consul Cluster
With configuration in place on all nodes, start the Consul service. Begin with the server nodes, then the clients.
Enable and start Consul on all server nodes:
sudo systemctl enable consul
sudo systemctl start consul
Check the service status to confirm it started without errors:
sudo systemctl status consul
The output should show the service as active and running:
● consul.service - "HashiCorp Consul - A service mesh solution"
Loaded: loaded (/usr/lib/systemd/system/consul.service; enabled; preset: enabled)
Active: active (running) since ...
Main PID: 1234 (consul)
Tasks: 8 (limit: 4915)
Memory: 45.0M
CPU: 1.234s
CGroup: /system.slice/consul.service
└─1234 /usr/bin/consul agent -config-dir=/etc/consul.d/
Once all three server nodes are running, Consul automatically bootstraps the cluster and elects a leader. Start the client agents next:
sudo systemctl enable consul
sudo systemctl start consul
Verify all members have joined the cluster by running this command on any node:
consul members
All nodes should appear as alive with their correct roles:
Node Address Status Type Build Protocol DC Partition Segment
consul-server-1 10.0.1.10:8301 alive server 1.22.5 2 dc1 default <all>
consul-server-2 10.0.1.11:8301 alive server 1.22.5 2 dc1 default <all>
consul-server-3 10.0.1.12:8301 alive server 1.22.5 2 dc1 default <all>
consul-client-1 10.0.1.20:8301 alive client 1.22.5 2 dc1 default <default>
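A scripted sanity check can confirm the expected number of alive servers. The sketch below parses sample output that mirrors the table above; on a live node, set members_output to the real command output instead:

```shell
# Sample output; on a live node use: members_output="$(consul members)"
members_output='Node             Address         Status  Type    Build   Protocol  DC   Partition  Segment
consul-server-1  10.0.1.10:8301  alive   server  1.22.5  2         dc1  default    <all>
consul-server-2  10.0.1.11:8301  alive   server  1.22.5  2         dc1  default    <all>
consul-server-3  10.0.1.12:8301  alive   server  1.22.5  2         dc1  default    <all>
consul-client-1  10.0.1.20:8301  alive   client  1.22.5  2         dc1  default    <default>'

# Count rows whose Status column is "alive" and Type column is "server".
alive_servers=$(printf '%s\n' "$members_output" \
  | awk '$3 == "alive" && $4 == "server" { n++ } END { print n + 0 }')
echo "alive servers: $alive_servers"
```

With the three-server layout from this guide, the count should be 3; anything less means a server has not joined or has failed.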
Check which server is the current leader:
consul operator raft list-peers
This shows the Raft peer list with leader designation:
Node ID Address State Voter RaftProtocol Commit Index Trails Leader By
consul-server-1 xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 10.0.1.10:8300 leader true 3 25 -
consul-server-2 xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 10.0.1.11:8300 follower true 3 25 0
consul-server-3 xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 10.0.1.12:8300 follower true 3 25 0
Step 5: Configure Firewall Rules for Consul
Consul requires several ports open between cluster members. If you run UFW (default on Ubuntu) or nftables, configure the following rules.
For UFW, allow all required Consul ports:
sudo ufw allow 8300/tcp comment "Consul RPC"
sudo ufw allow 8301/tcp comment "Consul Serf LAN"
sudo ufw allow 8301/udp comment "Consul Serf LAN"
sudo ufw allow 8302/tcp comment "Consul Serf WAN"
sudo ufw allow 8302/udp comment "Consul Serf WAN"
sudo ufw allow 8500/tcp comment "Consul HTTP API"
sudo ufw allow 8600/tcp comment "Consul DNS"
sudo ufw allow 8600/udp comment "Consul DNS"
sudo ufw reload
For a tighter setup, restrict the API and UI port to specific source IPs:
sudo ufw allow from 10.0.1.0/24 to any port 8500 proto tcp comment "Consul HTTP API - internal only"
Verify the firewall rules are active:
sudo ufw status numbered
If you use HAProxy as a reverse proxy in front of the Consul UI, also allow port 443/tcp for HTTPS access.
Step 6: Access the Consul Web UI
With ui_config.enabled = true in the server configuration, the Consul web interface is available on port 8500 of any server node.
Open your browser and navigate to:
http://10.0.1.10:8500
The UI shows cluster health, registered services, nodes, key-value data, and ACL management. You can also list the registered services from the command line:
consul catalog services
Initially, only the default consul service appears:
consul
For production environments, place the Consul UI behind a reverse proxy with TLS termination rather than exposing port 8500 directly. Restrict client_addr to 127.0.0.1 and proxy through Nginx or HAProxy with authentication.
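As a sketch of that reverse-proxy setup, an Nginx server block might look like the following. The hostname, certificate paths, and htpasswd file are placeholders – adjust them to your environment:

```nginx
server {
    listen 443 ssl;
    server_name consul.example.internal;               # placeholder hostname

    ssl_certificate     /etc/ssl/certs/consul-ui.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/private/consul-ui.key;

    auth_basic           "Consul UI";
    auth_basic_user_file /etc/nginx/.htpasswd;         # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:8500;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With client_addr restricted to 127.0.0.1 on the Consul agent, only this local proxy can reach port 8500.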
Step 7: Register Services with Consul
Service registration is one of Consul’s core features. Services register themselves with the local agent, and other services discover them through DNS or the HTTP API.
Create a service definition file on the client node where the service runs:
sudo vi /etc/consul.d/web-service.hcl
Add a service definition for a web application running on port 80 with an HTTP health check:
service {
name = "web"
port = 80
tags = ["production", "nginx"]
check {
id = "web-http"
name = "HTTP health check"
http = "http://localhost:80/health"
interval = "10s"
timeout = "3s"
}
}
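Health checks are not limited to HTTP. For a service without an HTTP endpoint, a TCP connect check works the same way – a sketch for a hypothetical PostgreSQL service:

```hcl
service {
  name = "postgres"              # hypothetical example service
  port = 5432
  tags = ["production"]

  check {
    id       = "pg-tcp"
    name     = "TCP connect check"
    tcp      = "localhost:5432"  # passes if the TCP connection succeeds
    interval = "10s"
    timeout  = "3s"
  }
}
```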
Reload the Consul agent to pick up the new service definition:
consul reload
Verify the service is registered by querying the catalog:
consul catalog services
The web service now appears in the catalog:
consul
web
Query the service through Consul’s DNS interface to get the address of healthy instances:
dig @127.0.0.1 -p 8600 web.service.consul SRV +short
The DNS query returns the SRV record with the node address and port:
1 1 80 consul-client-1.node.dc1.consul.
You can also query the HTTP API for detailed service information:
curl -s http://127.0.0.1:8500/v1/catalog/service/web | python3 -m json.tool
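The catalog response is a JSON array with one object per service instance. The sketch below parses a sample response (abridged to the fields used here) to build host:port endpoints; note that when ServiceAddress is empty, the convention is to fall back to the node's Address:

```shell
# Abridged sample of: curl -s http://127.0.0.1:8500/v1/catalog/service/web
sample='[{"Node":"consul-client-1","Address":"10.0.1.20","ServiceName":"web","ServiceAddress":"","ServicePort":80,"ServiceTags":["production","nginx"]}]'

# Build host:port endpoints; an empty ServiceAddress falls back to Address.
endpoints=$(printf '%s' "$sample" | python3 -c '
import json, sys
for inst in json.load(sys.stdin):
    host = inst["ServiceAddress"] or inst["Address"]
    print(host + ":" + str(inst["ServicePort"]))
')
echo "$endpoints"
```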
Step 8: Use the Consul Key-Value Store
Consul includes a distributed key-value store useful for dynamic configuration, feature flags, and coordination between services. It is replicated across all server nodes and accessible from any agent.
Store a value using the CLI:
consul kv put app/config/db_host 10.0.1.50
The command confirms the write was successful:
Success! Data written to: app/config/db_host
Retrieve the value:
consul kv get app/config/db_host
The stored value is returned:
10.0.1.50
Store multiple configuration keys for your application:
consul kv put app/config/db_port 5432
consul kv put app/config/cache_ttl 3600
consul kv put app/config/log_level info
List all keys under a prefix:
consul kv get -recurse app/config/
This returns all keys with their values:
app/config/cache_ttl:3600
app/config/db_host:10.0.1.50
app/config/db_port:5432
app/config/log_level:info
The KV store is also accessible through the HTTP API and the web UI, making it easy to integrate with any application regardless of language.
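One detail to know when using the HTTP API directly: KV values come back base64-encoded inside a JSON envelope. The sketch below decodes a sample response (the encoded string is the db_host value stored above); alternatively, appending ?raw to the URL returns the plain value with no envelope:

```shell
# Abridged sample of: curl http://127.0.0.1:8500/v1/kv/app/config/db_host
sample_response='[{"Key":"app/config/db_host","Value":"MTAuMC4xLjUw","Flags":0}]'

# Pull out the base64 "Value" field and decode it.
encoded=$(printf '%s' "$sample_response" | sed -n 's/.*"Value":"\([^"]*\)".*/\1/p')
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # 10.0.1.50
```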
Step 9: Enable Consul ACLs
Access Control Lists restrict who can read, write, and manage resources in the cluster. In production, ACLs prevent unauthorized agents from joining and unauthorized clients from querying sensitive data.
Add the ACL configuration block to the server configuration on all server nodes:
sudo vi /etc/consul.d/consul.hcl
Append the following ACL settings to the existing configuration:
acl {
enabled = true
default_policy = "deny"
down_policy = "extend-cache"
tokens {
initial_management = "GENERATE-A-UUID-HERE"
}
}
Generate a UUID to use as the initial management token:
uuidgen
Replace GENERATE-A-UUID-HERE with the generated UUID. This token has full administrative access – store it securely.
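The substitution can be scripted. The sketch below operates on a stand-in file in /tmp so it is safe to try anywhere; in practice, point sed at /etc/consul.d/consul.hcl and run it with sudo. It reads /proc/sys/kernel/random/uuid, which is always available on Linux even when the uuid-runtime package is not installed:

```shell
# Stand-in for /etc/consul.d/consul.hcl, containing the placeholder token.
printf 'acl {\n  tokens {\n    initial_management = "GENERATE-A-UUID-HERE"\n  }\n}\n' \
  > /tmp/consul-acl-demo.hcl

# Generate a UUID and splice it in place of the placeholder.
token=$(cat /proc/sys/kernel/random/uuid)
sed -i "s/GENERATE-A-UUID-HERE/${token}/" /tmp/consul-acl-demo.hcl

grep initial_management /tmp/consul-acl-demo.hcl
```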
Add the same ACL block to client agent configuration files, but without the initial_management token. Clients need a separate agent token:
acl {
enabled = true
default_policy = "deny"
down_policy = "extend-cache"
tokens {
agent = "CLIENT-AGENT-TOKEN-HERE"
}
}
Restart Consul on all server nodes first, then on client nodes:
sudo systemctl restart consul
After restart, all CLI commands and API requests require a token. Set the management token in your environment:
export CONSUL_HTTP_TOKEN="your-management-token-uuid"
Create an agent token policy that allows node registration and service discovery:
consul acl policy create -name "agent-policy" -rules '
node_prefix "" {
policy = "write"
}
service_prefix "" {
policy = "read"
}
agent_prefix "" {
policy = "write"
}'
Create a token using that policy and assign it to client agents:
consul acl token create -description "Agent Token" -policy-name "agent-policy"
Copy the SecretID from the output and set it as the agent token in each client's consul.hcl configuration, or apply it at runtime with consul acl set-agent-token agent "<SecretID>". If you edit the file, restart the client agents afterward.
Step 10: Configure DNS Forwarding to Consul
Consul runs its own DNS server on port 8600. To resolve .consul domains transparently from any application, forward DNS queries for the consul domain to the Consul agent.
The simplest approach on Ubuntu 24.04 is to configure systemd-resolved to forward .consul queries. Create the drop-in directory if it does not exist:
sudo mkdir -p /etc/systemd/resolved.conf.d
Then create a resolved configuration drop-in:
sudo vi /etc/systemd/resolved.conf.d/consul.conf
Add the DNS forwarding configuration:
[Resolve]
DNS=127.0.0.1:8600
Domains=~consul
Restart systemd-resolved to apply the changes:
sudo systemctl restart systemd-resolved
Test DNS resolution for the consul service itself:
resolvectl query consul.service.consul
The query should return the IP addresses of nodes running the consul service:
consul.service.consul: 10.0.1.10
10.0.1.11
10.0.1.12
For Debian 13 or environments without systemd-resolved, use dnsmasq as a local forwarder. Install it and add a forwarding rule:
sudo apt install dnsmasq -y
Create a Consul-specific dnsmasq configuration:
sudo vi /etc/dnsmasq.d/consul.conf
Add the forwarding rule for the consul domain:
server=/consul/127.0.0.1#8600
Restart dnsmasq and test resolution:
sudo systemctl restart dnsmasq
dig web.service.consul +short
Consul Ports Reference
The following table lists all ports used by Consul. Make sure these are open between cluster members and accessible from clients as needed.
| Port | Protocol | Purpose |
|---|---|---|
| 8300 | TCP | Server RPC – used by servers to handle incoming requests from other agents |
| 8301 | TCP/UDP | Serf LAN – gossip protocol for LAN cluster membership and failure detection |
| 8302 | TCP/UDP | Serf WAN – gossip protocol for cross-datacenter communication |
| 8500 | TCP | HTTP API and Web UI – client-facing interface for queries and management |
| 8501 | TCP | HTTPS API – TLS-encrypted API access (when configured) |
| 8502 | TCP | gRPC API – used by Envoy proxies for xDS in service mesh |
| 8600 | TCP/UDP | DNS interface – service discovery through DNS queries |
| 21000-21255 | TCP | Sidecar proxy ports – automatically assigned to Envoy sidecars |
Ports 8301 and 8302 require both TCP and UDP. UDP carries the gossip protocol's frequent, lightweight probe and update messages, while TCP handles reliable delivery and full state synchronization between agents.
Conclusion
You now have a working Consul cluster with service discovery, key-value storage, health checking, ACLs, and DNS forwarding on Ubuntu 24.04 / Debian 13. The three-server setup provides fault tolerance for one node failure while maintaining cluster availability.
For production hardening, enable TLS encryption for all RPC and HTTP traffic, configure auto-backup of the Raft data with consul snapshot save, set up Consul on RHEL/CentOS for mixed-OS environments, and integrate with Terraform for infrastructure-as-code provisioning of your Consul cluster.