Welcome to our guide on how to set up an etcd cluster on CentOS 7/8 / Ubuntu 18.04/16.04 / Debian 10/9 / Fedora 30/29 Linux machines. This tutorial covers, in detail, the setup of a three-node etcd cluster on Linux – this can be an etcd cluster on Ubuntu / Debian / CentOS / Fedora / Arch / Linux Mint or any other modern Linux distribution.

etcd is a distributed and reliable key-value store for the most critical data of a distributed system. It is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log.

Etcd is designed to be:

  • Simple: well-defined, user-facing API (gRPC)
  • Secure: automatic TLS with optional client cert authentication
  • Fast: benchmarked 10,000 writes/sec
  • Reliable: properly distributed using Raft

Etcd Cluster Setup on Linux – CentOS / Ubuntu / Debian / Fedora

This setup should work on any Linux distribution that uses the systemd service manager.

This setup is based on the following server network details.

Short Hostname    IP Address
etcd1             192.168.18.9
etcd2             192.168.18.10
etcd3             192.168.18.11

Since all my servers use the systemd service manager, the hostnames can be set using the following commands.

# Node 1
sudo hostnamectl set-hostname etcd1.mydomain.com --static
sudo hostname etcd1.mydomain.com

# Node 2
sudo hostnamectl set-hostname etcd2.mydomain.com --static
sudo hostname etcd2.mydomain.com


# Node 3
sudo hostnamectl set-hostname etcd3.mydomain.com --static
sudo hostname etcd3.mydomain.com

Replace mydomain.com with your servers’ domain name. The server names can be mapped to the correct IP addresses in your local DNS, or by adding records directly to the /etc/hosts file on each server.

sudo tee -a /etc/hosts<<EOF
192.168.18.9  etcd1.mydomain.com etcd1
192.168.18.10 etcd2.mydomain.com etcd2
192.168.18.11 etcd3.mydomain.com etcd3
EOF
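You can then confirm that each name resolves before moving on (the hostnames below are the ones assumed throughout this guide):

```shell
# Verify that every cluster hostname resolves to the expected address
for h in etcd1 etcd2 etcd3; do
  getent hosts "$h" || echo "WARNING: $h does not resolve" >&2
done
```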

Step 1: Download and Install the etcd Binaries (All nodes)

Log in to each etcd cluster node and download the etcd binaries. This is done on all nodes.

Create a temporary directory.

mkdir /tmp/etcd && cd /tmp/etcd

Install wget.

# RHEL family
sudo yum -y install wget

# Debian family
sudo apt-get -y install wget

# Arch/Manjaro
sudo pacman -S wget

Download etcd binary archive.

curl -s https://api.github.com/repos/etcd-io/etcd/releases/latest \
  | grep browser_download_url \
  | grep linux-amd64 \
  | cut -d '"' -f 4 \
  | wget -qi -

Extract the archive and move the binaries to the /usr/local/bin directory.

tar xvf *.tar.gz
cd etcd-*/
sudo mv etcd* /usr/local/bin/
cd ~
rm -rf /tmp/etcd

Check etcd and etcdctl version.

$ etcd --version
etcd Version: 3.3.13
Git SHA: 98d3084
Go Version: go1.10.8
Go OS/Arch: linux/amd64

$ etcdctl --version
etcdctl version: 3.3.13
API version: 2

Step 2: Create etcd directories and user (All nodes)

We will store etcd configuration files in the /etc/etcd directory and data in /var/lib/etcd. The service will run as a dedicated system user and group, both named etcd.

Create an etcd system user and group.

sudo groupadd --system etcd
sudo useradd -s /sbin/nologin --system -g etcd etcd

Then create the data and configuration directories for etcd.

sudo mkdir -p /var/lib/etcd/
sudo mkdir /etc/etcd
sudo chown -R etcd:etcd /var/lib/etcd/

Step 3: Configure etcd on all nodes

We need to create a systemd service unit file on each of the three servers. But first, some environment variables must be set before we can proceed.

On each server, save these variables by running the commands below.

INT_NAME="eth0"
ETCD_HOST_IP=$(ip addr show $INT_NAME | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
ETCD_NAME=$(hostname -s)

Where:

  • INT_NAME is the name of your network interface to be used for cluster traffic. Change it to match your server configuration.
  • ETCD_HOST_IP is the internal IP address of the specified network interface. This is used to serve client requests and communicate with etcd cluster peers.
  • ETCD_NAME – Each etcd member must have a unique name within the cluster. The command above sets the etcd name to match the hostname of the current compute instance.
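Before generating the unit file, it is worth confirming that all three variables resolved correctly; an empty ETCD_HOST_IP usually means INT_NAME does not match your actual interface name (check with `ip link`):

```shell
# Print the values that will be templated into the unit file
echo "Name: ${ETCD_NAME}, IP: ${ETCD_HOST_IP}, Interface: ${INT_NAME}"

# Abort early if the IP could not be detected
[ -n "${ETCD_HOST_IP}" ] || echo "ERROR: no IP found on ${INT_NAME}" >&2
```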

Once all variables are set, create the etcd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd service
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
User=etcd
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --data-dir=/var/lib/etcd \\
  --initial-advertise-peer-urls http://${ETCD_HOST_IP}:2380 \\
  --listen-peer-urls http://${ETCD_HOST_IP}:2380 \\
  --listen-client-urls http://${ETCD_HOST_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls http://${ETCD_HOST_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380 \\
  --initial-cluster-state new

[Install]
WantedBy=multi-user.target
EOF

If you don’t have working name resolution or mappings in the /etc/hosts file, replace etcd1, etcd2 and etcd3 with your nodes’ IP addresses.
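Because the heredoc expands the shell variables at the moment the file is written, it also helps to confirm the rendered unit contains real addresses rather than empty strings:

```shell
# URLs should contain your node IP, not "http://:2380"
grep -E -- '--(listen|advertise|initial)' /etc/systemd/system/etcd.service
```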

For CentOS / RHEL Linux distributions, set SELinux mode to permissive.

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

If you have an active firewall service, allow ports 2379 (client requests) and 2380 (peer communication).

# RHEL / CentOS / Fedora firewalld
sudo firewall-cmd --add-port={2379,2380}/tcp --permanent
sudo firewall-cmd --reload

# Ubuntu/Debian
sudo ufw allow proto tcp from any to any port 2379,2380

Step 4: Start the etcd Server

Start etcd service by running the following commands on each cluster node.

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
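If the service fails to start on any node, the systemd journal is the quickest way to find out why; common causes are a mistyped peer URL or a firewall still blocking port 2380:

```shell
# Follow etcd logs in real time (Ctrl+C to stop)
sudo journalctl -u etcd -f

# Or dump the most recent messages without paging
sudo journalctl -u etcd --no-pager -n 50
```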

Confirm that etcd service is running on all nodes.

[user@etcd1 ~]$ systemctl status etcd -l
● etcd.service - etcd service
   Loaded: loaded (/etc/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-06-03 18:20:49 UTC; 30s ago
     Docs: https://github.com/etcd-io/etcd
 Main PID: 5931 (etcd)
   CGroup: /system.slice/etcd.service
           └─5931 /usr/local/bin/etcd --name etcd1 --data-dir=/var/lib/etcd --initial-advertise-peer-urls http://192.168.18.9:2380 --listen-peer-urls http://192.168.18.9:2380 --listen-client-urls http://192.168.18.9:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.18.9:2379 --initial-cluster-token etcd-cluster-0 --initial-cluster etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380 --initial-cluster-state new
....................................................................................

[user@etcd2 ~]$ systemctl status etcd -l
● etcd.service - etcd service
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-06-03 18:20:49 UTC; 2min 17s ago
     Docs: https://github.com/etcd-io/etcd
 Main PID: 5949 (etcd)
   CGroup: /system.slice/etcd.service
           └─5949 /usr/local/bin/etcd --name etcd2 --data-dir=/var/lib/etcd --initial-advertise-peer-urls http://192.168.18.10:2380 --listen-peer-urls http://192.168.18.10:2380 --listen-client-urls http://192.168.18.10:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.18.10:2379 --initial-cluster-token etcd-cluster-0 --initial-cluster etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380 --initial-cluster-state new
....................................................................................

[user@etcd3 ~]$ systemctl status etcd -l
● etcd.service - etcd service
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-06-03 18:20:49 UTC; 3min 20s ago
     Docs: https://github.com/etcd-io/etcd
 Main PID: 5974 (etcd)
   CGroup: /system.slice/etcd.service
           └─5974 /usr/local/bin/etcd --name etcd3 --data-dir=/var/lib/etcd --initial-advertise-peer-urls http://192.168.18.11:2380 --listen-peer-urls http://192.168.18.11:2380 --listen-client-urls http://192.168.18.11:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.18.11:2379 --initial-cluster-token etcd-cluster-0 --initial-cluster etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380 --initial-cluster-state new

Step 5: Test Etcd Cluster installation

Test your setup by listing the etcd cluster members:

$ etcdctl member list
152d6f8123c6ac97: name=etcd3 peerURLs=http://etcd3:2380 clientURLs=http://192.168.18.11:2379 isLeader=false
332a8a315e569778: name=etcd2 peerURLs=http://etcd2:2380 clientURLs=http://192.168.18.10:2379 isLeader=false
aebb404b9385ccd4: name=etcd1 peerURLs=http://etcd1:2380 clientURLs=http://192.168.18.9:2379 isLeader=true

To use the etcd v3 API, you need to explicitly set the API version through the ETCDCTL_API environment variable.

$ ETCDCTL_API=3 etcdctl member list
152d6f8123c6ac97, started, etcd3, http://etcd3:2380, http://192.168.18.11:2379
332a8a315e569778, started, etcd2, http://etcd2:2380, http://192.168.18.10:2379
aebb404b9385ccd4, started, etcd1, http://etcd1:2380, http://192.168.18.9:2379

Also check cluster health by running the command:

$ etcdctl cluster-health
member 152d6f8123c6ac97 is healthy: got healthy result from http://192.168.18.11:2379
member 332a8a315e569778 is healthy: got healthy result from http://192.168.18.10:2379
member aebb404b9385ccd4 is healthy: got healthy result from http://192.168.18.9:2379
cluster is healthy
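The v3 equivalent is `endpoint health`; the endpoint list below assumes the IP addresses used in this guide:

```shell
# Query each member's health endpoint over the v3 API
ETCDCTL_API=3 etcdctl \
  --endpoints=http://192.168.18.9:2379,http://192.168.18.10:2379,http://192.168.18.11:2379 \
  endpoint health
```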

Let’s also try writing to etcd.

$ etcdctl set /message "Hello World"
Hello World

Read the value of message back – It should work on all nodes.

$ etcdctl get /message
Hello World
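Keys written with the v2 `set` command live in the v2 keyspace only; the v3 API stores its data separately and uses `put`/`get` instead:

```shell
# Write and read the same key through the v3 API
ETCDCTL_API=3 etcdctl put message "Hello World"   # a successful put prints OK
ETCDCTL_API=3 etcdctl get message
```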

Create a directory, write a key under it, and list its contents.

$ etcdctl mkdir /myservice
$ etcdctl set /myservice/container1 localhost:8080
localhost:8080
$ etcdctl ls /myservice
/myservice/container1
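Note that the v3 API has no directories; flat keys sharing a common prefix serve the same purpose and are listed with the `--prefix` flag:

```shell
# v3 equivalent of the directory example above
ETCDCTL_API=3 etcdctl put /myservice/container1 localhost:8080
ETCDCTL_API=3 etcdctl get /myservice --prefix
```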

Step 6: Test Leader failure

When a leader fails, the etcd cluster automatically elects a new leader. The election does not happen instantly once the leader fails. It takes about an election timeout to elect a new leader since the failure detection model is timeout based.

During the leader election, the cluster cannot process any writes. Write requests sent during the election are queued for processing until a new leader is elected.

Our current leader is etcd1 – Node 1.

$ etcdctl member list
152d6f8123c6ac97: name=etcd3 peerURLs=http://etcd3:2380 clientURLs=http://192.168.18.11:2379 isLeader=false
332a8a315e569778: name=etcd2 peerURLs=http://etcd2:2380 clientURLs=http://192.168.18.10:2379 isLeader=false
aebb404b9385ccd4: name=etcd1 peerURLs=http://etcd1:2380 clientURLs=http://192.168.18.9:2379 isLeader=true

Let’s take it down.

[user@etcd1 ~]$ sudo systemctl stop etcd

Check the new leader – it is now the etcd2 server.

$ etcdctl member list
152d6f8123c6ac97: name=etcd3 peerURLs=http://etcd3:2380 clientURLs=http://192.168.18.11:2379 isLeader=false
332a8a315e569778: name=etcd2 peerURLs=http://etcd2:2380 clientURLs=http://192.168.18.10:2379 isLeader=true
aebb404b9385ccd4: name=etcd1 peerURLs=http://etcd1:2380 clientURLs=http://192.168.18.9:2379 isLeader=false

Once the etcd service on etcd1 is started again, it rejoins as a follower; etcd2 remains the leader unless it goes down.
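You can confirm this by starting the service again on etcd1 and re-checking the member list:

```shell
# On etcd1: rejoin the cluster
sudo systemctl start etcd

# From any node: etcd1 should now appear with isLeader=false
etcdctl member list
```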

Conclusion

You now have a working three-node etcd cluster installed on CentOS 7/8, Ubuntu 18.04, Debian 10/9 or Fedora. Visit the etcd documentation for a detailed setup and usage guide.

Similar setups:

Setup Consul Cluster on Ubuntu/Debian

Setup Consul Cluster on CentOS/RHEL server

How To Setup Local OpenShift Origin (OKD) Cluster on CentOS

How to Install Ceph Storage Cluster on Ubuntu