etcd is a distributed, reliable key-value store designed for the most critical data in distributed systems. It powers Kubernetes as the backend data store for all cluster state, but it works just as well as a standalone service for configuration management, service discovery, and leader election. etcd uses the Raft consensus algorithm to guarantee strong consistency across nodes, even during network partitions or node failures.
This guide covers two installation methods on Ubuntu 24.04 and 22.04: the Ubuntu repository (quick and stable) and the official GitHub binary release, which gives you the latest version (v3.5.28 at the time of writing). Both methods produce a working single-node etcd instance.
Prerequisites
Before you start, make sure you have:
- An Ubuntu 24.04 or 22.04 server with at least 2 GB RAM
- A user with sudo privileges or root access
- Ports 2379 (client) and 2380 (peer) available
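You can confirm the ports are actually free before installing. This short check uses the ss tool from iproute2, which ships with Ubuntu by default:

```shell
# Check whether anything is already listening on the etcd ports.
for port in 2379 2380; do
    if ss -tln | grep -q ":${port} "; then
        echo "port ${port} is already in use:"
        ss -tlnp | grep ":${port} "
    else
        echo "port ${port} is free"
    fi
done
```

If either port is taken, stop the conflicting service or choose different ports in the configuration later in this guide.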
Method 1: Install etcd from Ubuntu Repositories
Ubuntu ships etcd in its default repositories. This is the fastest way to get running, though the packaged version may lag behind the latest upstream release.
Update your package index and install both the server and client packages:
sudo apt update
sudo apt install -y etcd-server etcd-client
Verify the installed version with:
etcd --version
On Ubuntu 24.04, you should see etcd version 3.4.x or 3.5.x depending on the repository version. Ubuntu 22.04 ships with etcd 3.3.x, which is quite old. If you need a newer version, skip to Method 2.
The package automatically creates a systemd unit and the etcd user. Start and enable the service:
sudo systemctl enable --now etcd
Check that etcd is running:
sudo systemctl status etcd
The service should show active (running). If you installed from the repository and everything looks good, skip ahead to the Basic etcdctl Operations section. Otherwise, continue with Method 2 for the latest binary.
Method 2: Install etcd from GitHub Binary Releases
For production use and access to the latest features and security patches, install etcd directly from the official GitHub releases. The current stable release is v3.5.28 (released March 2026). There is also a newer v3.6.x series available if you want the cutting edge, but 3.5.x remains the most widely deployed in production.
Set the version variable and download the release tarball:
ETCD_VER=v3.5.28
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -fsSL ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
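Before extracting, it is worth verifying the tarball against the SHA256SUMS file published alongside each etcd release. A sketch, reusing the ETCD_VER and DOWNLOAD_URL variables from above:

```shell
# Fetch the checksums published with the release and verify the tarball.
# --ignore-missing skips entries for artifacts you did not download.
curl -fsSL ${DOWNLOAD_URL}/${ETCD_VER}/SHA256SUMS -o /tmp/SHA256SUMS
(cd /tmp && sha256sum --check --ignore-missing SHA256SUMS)
```

A line ending in OK confirms the download was not corrupted or tampered with in transit.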
Extract the archive and move the binaries into your system path:
tar xzf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/
sudo mv /tmp/etcd-${ETCD_VER}-linux-amd64/etcd /usr/local/bin/
sudo mv /tmp/etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/local/bin/
sudo mv /tmp/etcd-${ETCD_VER}-linux-amd64/etcdutl /usr/local/bin/
Confirm the binaries are in place:
etcd --version
etcdctl version
etcdutl version
All three binaries should report version 3.5.28.
Create a Dedicated etcd User and Directories
Running etcd as root is a bad idea. Create a dedicated system user and set up the data and configuration directories:
sudo groupadd --system etcd
sudo useradd -s /sbin/nologin --system -g etcd etcd
sudo mkdir -p /var/lib/etcd
sudo mkdir -p /etc/etcd
sudo chown -R etcd:etcd /var/lib/etcd
Configure etcd
Create the etcd configuration file. This example configures a single-node setup listening on all interfaces. For a multi-node cluster, you would add the initial cluster peer URLs here.
Open the configuration file:
sudo vi /etc/etcd/etcd.conf.yml
Add the following configuration:
name: 'default'
data-dir: '/var/lib/etcd'
wal-dir: '/var/lib/etcd/wal'
listen-peer-urls: 'http://0.0.0.0:2380'
listen-client-urls: 'http://0.0.0.0:2379'
initial-advertise-peer-urls: 'http://127.0.0.1:2380'
advertise-client-urls: 'http://127.0.0.1:2379'
initial-cluster: 'default=http://127.0.0.1:2380'
initial-cluster-token: 'etcd-cluster-1'
initial-cluster-state: 'new'
logger: 'zap'
log-level: 'info'
If you plan to expose etcd to other hosts on your network, replace 127.0.0.1 in the advertise URLs with your server’s actual IP address (for example, 192.168.1.10). Keep in mind that this configuration uses plain HTTP with no authentication, so do not expose it beyond a trusted network without adding TLS.
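If you provision servers with a script, the advertise addresses can be templated instead of edited by hand. A sketch, where HOST_IP is a placeholder you would set per machine:

```shell
# Replace the loopback advertise addresses with the host's primary IP.
# HOST_IP is a placeholder; substitute your server's real address.
HOST_IP=192.168.1.10
sed -i "s|http://127.0.0.1|http://${HOST_IP}|g" /etc/etcd/etcd.conf.yml
```

The listen URLs already bind to 0.0.0.0, so only the advertise and initial-cluster entries are rewritten.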
Create a systemd Service Unit
Create the systemd unit file to manage etcd as a service:
sudo vi /etc/systemd/system/etcd.service
Add the following service definition:
[Unit]
Description=etcd key-value store
Documentation=https://etcd.io/docs/
After=network-online.target
Wants=network-online.target
[Service]
User=etcd
Group=etcd
Type=notify
ExecStart=/usr/local/bin/etcd --config-file /etc/etcd/etcd.conf.yml
Restart=always
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target
Reload systemd and start etcd:
sudo systemctl daemon-reload
sudo systemctl enable --now etcd
Verify the service is running properly:
sudo systemctl status etcd
The output should show the service as active (running) with no errors in the log lines.
Basic etcdctl Operations
etcdctl is the command-line client for interacting with etcd. Since etcd 3.4, the v3 API has been the default, so you do not need to set ETCDCTL_API=3 unless you are running an older release.
Put and Get Key-Value Pairs
Store a key-value pair in etcd:
etcdctl put greeting "Hello from etcd"
etcd responds with OK to confirm the write succeeded. Retrieve the value:
etcdctl get greeting
This returns both the key and its value on separate lines. To get only the value without the key name, use the --print-value-only flag:
etcdctl get greeting --print-value-only
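For scripting, etcdctl can also emit JSON. Note that in JSON output the key and value are base64-encoded and need decoding. A sketch, assuming the greeting key from above and that jq is installed (sudo apt install -y jq):

```shell
# In JSON output (-w json) keys and values are base64-encoded;
# extract the value with jq and decode it with base64 -d.
etcdctl get greeting -w json | jq -r '.kvs[0].value' | base64 -d
```

This prints the raw value, which is convenient when feeding etcd data into other tools.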
You can also retrieve all keys with a common prefix. This is useful for organizing configuration under namespaces:
etcdctl put /config/db/host "10.0.1.50"
etcdctl put /config/db/port "5432"
etcdctl put /config/db/name "appdb"
etcdctl get /config/db/ --prefix
All three keys under the /config/db/ prefix are returned.
Delete Keys
Remove a single key:
etcdctl del greeting
etcd returns the number of keys deleted (1 in this case). You can delete multiple keys by prefix as well:
etcdctl del /config/db/ --prefix
This removes all keys that start with /config/db/.
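Keys can also expire automatically through leases, the mechanism behind the service-discovery and leader-election use cases mentioned in the introduction. A sketch (the service key and address are illustrative; the lease ID is parsed from the lease grant output):

```shell
# Grant a 60-second lease and capture its ID from the output line,
# which has the form: lease <hex-id> granted with TTL(60s)
LEASE_ID=$(etcdctl lease grant 60 | awk '{print $2}')

# Attach a key to the lease; it is deleted when the lease expires.
etcdctl put /service/web/node1 "10.0.1.51:8080" --lease="${LEASE_ID}"

# Inspect the remaining TTL and the keys attached to the lease.
etcdctl lease timetolive "${LEASE_ID}" --keys
```

A service that keeps calling etcdctl lease keep-alive stays registered; one that crashes silently disappears when its lease runs out.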
Watch for Changes
etcd supports watching keys for real-time change notifications. This is how Kubernetes controllers detect configuration changes. Open a watch on a key:
etcdctl watch /config/db/host
This command blocks and waits. In another terminal, update the key:
etcdctl put /config/db/host "10.0.1.100"
The watch terminal immediately shows the PUT event with the new value. Press Ctrl+C to stop watching.
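Watches scale to whole prefixes, and the output format (event type, key, and value on three successive lines per event) is easy to consume from a script. A sketch, where the echo stands in for whatever reload action you need:

```shell
# Watch an entire prefix; etcdctl prints three lines per event
# (event type, key, value), which the loop reads back together.
etcdctl watch /config/db/ --prefix | while read -r event; do
    read -r key
    read -r value
    echo "${event} ${key} -> ${value}"   # placeholder reaction
done
```

This pattern is a lightweight way to push configuration changes to running services without restarting them.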
Check Cluster Health and Member List
Even on a single-node setup, you should know how to check the health of your etcd instance. Run the endpoint health check:
etcdctl endpoint health
A healthy node returns something like 127.0.0.1:2379 is healthy: successfully committed proposal with a latency measurement.
For more detail, check the endpoint status in table format:
etcdctl endpoint status --write-out=table
This shows the endpoint, member ID, revision, Raft term, leader status, and database size, all of which is critical information when troubleshooting cluster issues.
List all members of the cluster:
etcdctl member list --write-out=table
On a single-node setup, you see one member. In a production Kubernetes cluster, you would typically see three or five etcd members for high availability.
Configure UFW Firewall for etcd
etcd uses two ports that need to be open if you are running a multi-node cluster or accepting connections from remote clients:
- 2379/tcp – client communication (etcdctl, application requests)
- 2380/tcp – peer communication (cluster member replication)
If UFW is enabled on your Ubuntu server, allow these ports:
sudo ufw allow 2379/tcp comment 'etcd client'
sudo ufw allow 2380/tcp comment 'etcd peer'
sudo ufw reload
Verify the rules are active:
sudo ufw status verbose
You should see both ports listed as ALLOW. For single-node setups where etcd only listens on localhost, these firewall rules are not strictly required.
Backup and Restore etcd Data
Regular backups are critical, especially if etcd is storing your Kubernetes cluster state. A corrupted or lost etcd database means losing your entire cluster configuration. etcdctl has built-in snapshot support that makes this straightforward.
Create a Snapshot Backup
Take a snapshot of the current etcd data:
etcdctl snapshot save /tmp/etcd-backup.db
etcd confirms the snapshot was saved along with the number of keys captured. Verify the snapshot is valid (since etcd 3.5, snapshot inspection belongs to etcdutl; the older etcdctl snapshot status still works but is deprecated):
etcdutl snapshot status /tmp/etcd-backup.db --write-out=table
The output shows the snapshot hash, revision, total keys, and database size. In production, automate this with a cron job and store backups off-server:
sudo vi /etc/cron.d/etcd-backup
Add the following cron entry to take a daily backup at 2 AM:
0 2 * * * etcd /usr/local/bin/etcdctl snapshot save /var/lib/etcd/backups/etcd-$(date +\%Y\%m\%d).db
Make sure the backup directory exists and is owned by the etcd user:
sudo mkdir -p /var/lib/etcd/backups
sudo chown etcd:etcd /var/lib/etcd/backups
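Daily snapshots accumulate quickly, so pair the backup job with a retention sweep. A sketch, assuming the backup path above and a 7-day retention window:

```shell
# Delete snapshot files older than 7 days from the backup directory.
find /var/lib/etcd/backups -name 'etcd-*.db' -mtime +7 -delete
```

This can go in the same cron file as the backup entry, scheduled shortly after the snapshot runs.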
Restore from a Snapshot
Restoring from a snapshot creates a new data directory. Stop etcd first, then run the restore:
sudo systemctl stop etcd
Run the snapshot restore command, specifying a new data directory:
sudo etcdutl snapshot restore /tmp/etcd-backup.db \
--data-dir /var/lib/etcd-restored \
--name default \
--initial-cluster default=http://127.0.0.1:2380 \
--initial-advertise-peer-urls http://127.0.0.1:2380
Move the restored data into place and fix ownership:
sudo mv /var/lib/etcd /var/lib/etcd-old
sudo mv /var/lib/etcd-restored /var/lib/etcd
sudo chown -R etcd:etcd /var/lib/etcd
Start etcd again and verify it is healthy:
sudo systemctl start etcd
etcdctl endpoint health
The restored instance should come up healthy with all the data from the snapshot intact.
Troubleshooting Common Issues
Here are some problems you might run into and how to fix them.
etcd fails to start with “member has already been bootstrapped” – This happens when you change the configuration but the old data directory still exists. Either remove the data directory (sudo rm -rf /var/lib/etcd/*) for a fresh start, or change initial-cluster-state from new to existing.
etcdctl reports “context deadline exceeded” – Usually means etcd is not running or not listening on the expected address. Check with ss -tlnp | grep 2379 to confirm etcd is listening, then verify your ETCDCTL_ENDPOINTS environment variable matches.
High latency warnings in logs – etcd is sensitive to disk I/O latency. If you see “apply request took too long” warnings, move the data directory to an SSD. On cloud instances, use provisioned IOPS volumes. The WAL (write-ahead log) is especially latency-sensitive.
Database size keeps growing – etcd keeps a history of all key revisions. Run compaction and defragmentation periodically:
etcdctl compact $(etcdctl endpoint status --write-out=json | python3 -c "import sys,json; print(json.load(sys.stdin)[0]['Status']['header']['revision'])")
etcdctl defrag
This reclaims storage by removing old revisions and defragmenting the database file.
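The compaction one-liner above can be unpacked into a small maintenance script that is easier to read and to schedule. Same commands, just split into steps:

```shell
# Read the current revision from the endpoint status JSON output,
# then compact history up to that revision and defragment the
# database file to reclaim the freed space.
rev=$(etcdctl endpoint status --write-out=json \
    | python3 -c "import sys,json; print(json.load(sys.stdin)[0]['Status']['header']['revision'])")
etcdctl compact "$rev"
etcdctl defrag
```

Run this during a quiet period; defragmentation briefly blocks reads and writes on the member being defragmented.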
Uninstall etcd
If you installed from the Ubuntu repository:
sudo apt remove --purge -y etcd-server etcd-client
sudo rm -rf /var/lib/etcd
If you installed from the binary release:
sudo systemctl stop etcd
sudo systemctl disable etcd
sudo rm /etc/systemd/system/etcd.service
sudo rm /usr/local/bin/etcd /usr/local/bin/etcdctl /usr/local/bin/etcdutl
sudo rm -rf /var/lib/etcd /etc/etcd
sudo userdel etcd
sudo groupdel etcd
sudo systemctl daemon-reload
This cleanly removes all etcd components from your system.
Conclusion
You now have a working etcd installation on Ubuntu with the knowledge to perform key-value operations, health checks, and disaster recovery through snapshots. For production deployments, run at least three etcd nodes behind a load balancer and set up automated backups with off-site storage. The official etcd documentation covers multi-node clustering, TLS authentication, and role-based access control when you are ready to harden your setup.