Elasticsearch 8.x introduced security enabled by default, a simplified enrollment process for adding nodes, and improved performance across the board. Setting up a multi-node cluster is the standard production deployment – it gives you high availability, fault tolerance, and the ability to distribute search and indexing load across multiple machines. This guide walks through building a 3-node Elasticsearch 8.x cluster on Ubuntu 24.04 from scratch, including security configuration, node discovery, and verifying cluster health.
Prerequisites
- Three Ubuntu 24.04 servers with at least 4 GB RAM and 2 CPU cores each
- Root or sudo access on all three nodes
- Network connectivity between all three nodes on ports 9200 (HTTP) and 9300 (transport)
- Hostnames and IPs planned – this guide uses:
- es-node-01 – 192.168.1.101
- es-node-02 – 192.168.1.102
- es-node-03 – 192.168.1.103
Step 1 – Configure Hostnames and /etc/hosts
On each node, set the hostname and update the hosts file so nodes can resolve each other by name. This step prevents DNS dependency issues during cluster formation.
On node 1:
sudo hostnamectl set-hostname es-node-01
On node 2:
sudo hostnamectl set-hostname es-node-02
On node 3:
sudo hostnamectl set-hostname es-node-03
On all three nodes, add these entries to /etc/hosts:
192.168.1.101 es-node-01
192.168.1.102 es-node-02
192.168.1.103 es-node-03
Verify connectivity from each node:
ping -c 2 es-node-01
ping -c 2 es-node-02
ping -c 2 es-node-03
Step 2 – Install Java and System Prerequisites
Elasticsearch 8.x bundles its own JDK, so you do not need to install Java separately. However, you do need to tune one kernel setting on all three nodes.
Set the vm.max_map_count kernel parameter, which Elasticsearch requires for memory-mapped files:
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system
Verify the setting took effect:
sysctl vm.max_map_count
The output should show vm.max_map_count = 262144.
Step 3 – Install Elasticsearch on All Nodes
Import the Elasticsearch GPG key and add the APT repository on all three nodes:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
Install the apt-transport-https package if not already present:
sudo apt install -y apt-transport-https
Add the Elasticsearch 8.x repository:
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
Update the package index and install Elasticsearch:
sudo apt update
sudo apt install -y elasticsearch
When the installation completes, Elasticsearch prints important security information, including the auto-generated password for the built-in elastic superuser. Save the password from the first node – you will need it to authenticate. Enrollment tokens for joining additional nodes are not printed here; they are generated later with a dedicated command (Step 4).
Verify the installation on each node:
dpkg -l elasticsearch
Step 4 – Configure the First Node (es-node-01)
Edit the Elasticsearch configuration on the first node. Open /etc/elasticsearch/elasticsearch.yml and set the following:
# Cluster name - must be identical on all nodes
cluster.name: production-cluster

# Node name - unique per node
node.name: es-node-01

# Data and log paths
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# Bind to the node's IP address so other nodes can connect
network.host: 192.168.1.101

# HTTP port
http.port: 9200

# Discovery settings - list all nodes in the cluster
discovery.seed_hosts:
  - 192.168.1.101
  - 192.168.1.102
  - 192.168.1.103

# Bootstrap the new cluster from this node only. The other nodes join via
# enrollment tokens, so do not list them here - a node waits for a majority
# of the listed nodes before electing a master, and node 1 starts alone.
cluster.initial_master_nodes:
  - es-node-01

# Security is enabled by default in 8.x
xpack.security.enabled: true
xpack.security.enrollment.enabled: true

# TLS for transport layer (node-to-node communication)
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

# TLS for HTTP layer (client-to-node communication)
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
Set the JVM heap size. Edit /etc/elasticsearch/jvm.options.d/heap.options:
-Xms2g
-Xmx2g
Set the heap to half of available RAM, but never more than 31 GB. For a server with 4 GB RAM, 2 GB is appropriate.
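If you prefer to derive the heap values rather than hard-code them, the half-of-RAM-capped-at-31-GB rule is easy to script. A hedged sketch, assuming a Linux host with /proc/meminfo; it only prints the values, so redirect the output into /etc/elasticsearch/jvm.options.d/heap.options yourself:

```shell
# Compute half of physical RAM in GB, capped at 31 GB (the compressed-oops
# threshold) and floored at 1 GB for very small hosts.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
if [ "$half_gb" -lt 1 ]; then half_gb=1; fi
if [ "$half_gb" -gt 31 ]; then half_gb=31; fi

# Print matching -Xms/-Xmx lines suitable for jvm.options.d
printf -- '-Xms%dg\n-Xmx%dg\n' "$half_gb" "$half_gb"
```

Keeping -Xms and -Xmx identical avoids heap resizing pauses at runtime.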
Start Elasticsearch on the first node and enable it to start on boot:
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch
Check that it started successfully:
sudo systemctl status elasticsearch
Generate an enrollment token for the other nodes to join the cluster:
sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
Save the token output. It is valid for 30 minutes by default.
Step 5 – Configure and Join the Second Node (es-node-02)
On the second node, use the enrollment token to join the cluster. This is the recommended approach in Elasticsearch 8.x because it handles TLS certificate distribution automatically.
sudo /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token-from-node-01>
After enrollment, edit /etc/elasticsearch/elasticsearch.yml on es-node-02:
cluster.name: production-cluster
node.name: es-node-02
network.host: 192.168.1.102
http.port: 9200
discovery.seed_hosts:
  - 192.168.1.101
  - 192.168.1.102
  - 192.168.1.103

Do not set cluster.initial_master_nodes on this node. It is joining an existing cluster through enrollment, and the bootstrap setting must only be used when forming a brand-new cluster – setting it on a joining node risks it bootstrapping a separate cluster of its own.
Do not modify the security and TLS sections – the enrollment process configured those automatically with the correct certificates.
Set the JVM heap the same way as node 1, then start Elasticsearch:
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch
Verify the service is running:
sudo systemctl status elasticsearch
Step 6 – Configure and Join the Third Node (es-node-03)
Repeat the exact same process on the third node. If the enrollment token from step 4 has expired, generate a new one from es-node-01. Enroll the node:
sudo /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token>
Edit /etc/elasticsearch/elasticsearch.yml on es-node-03:
cluster.name: production-cluster
node.name: es-node-03
network.host: 192.168.1.103
http.port: 9200
discovery.seed_hosts:
  - 192.168.1.101
  - 192.168.1.102
  - 192.168.1.103

Do not set cluster.initial_master_nodes here – this node joins the existing cluster via enrollment.
Start Elasticsearch:
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch
Step 7 – Verify Cluster Health
Once all three nodes are running, check the cluster health from any node. Since TLS is enabled, point curl at the auto-generated CA with --cacert /etc/elasticsearch/certs/http_ca.crt, or use -k to skip certificate verification for quick testing:
curl -s -k -u "elastic:YOUR_PASSWORD" "https://192.168.1.101:9200/_cluster/health?pretty"
Expected output for a healthy 3-node cluster:
{
  "cluster_name" : "production-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
Key things to check:
- status should be “green” – all primary and replica shards are allocated
- number_of_nodes should be 3
- unassigned_shards should be 0
You can also list all nodes in the cluster:
curl -s -k -u "elastic:YOUR_PASSWORD" "https://192.168.1.101:9200/_cat/nodes?v"
This should show all three nodes with their IPs, heap usage, and roles.
Step 8 – Remove the Initial Master Nodes Setting
The cluster.initial_master_nodes setting is only needed during the first cluster bootstrap. After the cluster has formed and all nodes have joined, remove this setting from elasticsearch.yml on every node where it is set. Leaving it in place is unsafe: if a node ever restarts with an empty data directory, it could bootstrap a brand-new cluster instead of rejoining the existing one. Comment it out or delete the lines, then restart each node one at a time:
sudo systemctl restart elasticsearch
Wait for each node to rejoin the cluster before restarting the next one. Check with _cat/nodes after each restart.
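That wait can be scripted. A minimal sketch, assuming the elastic password is exported as ES_PASS and that es-node-01 answers HTTPS (adjust the IP for your environment); it polls the health endpoint until the cluster is green with all three nodes:

```shell
# Poll _cluster/health until the cluster reports green with 3 nodes,
# checking every 5 seconds for up to 5 minutes.
wait_for_cluster() {
  want_nodes=3
  for _ in $(seq 1 60); do
    health=$(curl -s -k -u "elastic:${ES_PASS}" \
      "https://192.168.1.101:9200/_cluster/health")
    nodes=$(echo "$health" | grep -o '"number_of_nodes":[0-9]*' | grep -o '[0-9]*$')
    status=$(echo "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
    if [ "$nodes" = "$want_nodes" ] && [ "$status" = "green" ]; then
      echo "cluster green with $nodes nodes"
      return 0
    fi
    sleep 5
  done
  echo "timed out waiting for cluster" >&2
  return 1
}

# Usage, on the node being restarted:
#   sudo systemctl restart elasticsearch && wait_for_cluster
```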
Understanding Index Sharding
Now that the cluster is running, it helps to understand how Elasticsearch distributes data across nodes using shards.
When you create an index, Elasticsearch splits it into multiple primary shards. Each primary shard is a self-contained Lucene index that holds a subset of the documents. By default, Elasticsearch 8.x creates 1 primary shard and 1 replica shard per index.
Replica shards are copies of primary shards stored on different nodes. They serve two purposes – they provide redundancy if a node fails, and they allow search requests to be served from multiple nodes in parallel.
For a 3-node cluster, you can create an index with custom shard settings:
curl -s -k -u "elastic:YOUR_PASSWORD" -X PUT "https://192.168.1.101:9200/my-index" \
  -H "Content-Type: application/json" \
  -d '{
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1
    }
  }'
This creates 3 primary shards and 3 replica shards (one replica per primary), for a total of 6 shards distributed across your 3 nodes. Each node will hold approximately 2 shards.
General sharding guidelines:
- Keep individual shard size between 10 GB and 50 GB for optimal performance
- Aim for fewer, larger shards rather than many small ones – each shard has overhead
- Set replicas to at least 1 for production workloads to survive single-node failures
- With 3 nodes and 1 replica, you can tolerate the loss of any single node without data loss
- Do not set replicas to 2 on a 3-node cluster unless you need read scaling – it means every node holds a complete copy of every shard
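Of these two settings, only the replica count can be changed on a live index; the primary shard count is fixed at index creation (you would reindex, or use the shrink/split APIs, to change it). A hedged sketch of raising replicas through the index settings API, using the same illustrative credentials and index name as above:

```shell
# Update number_of_replicas on an existing index via the _settings endpoint.
# Credentials, node IP, and index name are placeholders for your cluster.
update_replicas() {
  index=$1
  replicas=$2
  curl -s -k -u "elastic:YOUR_PASSWORD" -X PUT \
    "https://192.168.1.101:9200/${index}/_settings" \
    -H "Content-Type: application/json" \
    -d "{\"index\": {\"number_of_replicas\": ${replicas}}}"
}

# Usage: update_replicas my-index 2
```

Elasticsearch rebalances the new replica copies across the remaining nodes automatically.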
Check shard allocation for an index:
curl -s -k -u "elastic:YOUR_PASSWORD" "https://192.168.1.101:9200/_cat/shards/my-index?v"
Firewall Configuration
If UFW is active on your Ubuntu nodes, open the required ports on all three servers:
sudo ufw allow from 192.168.1.0/24 to any port 9200 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 9300 proto tcp
sudo ufw reload
Restrict these rules to your internal network. Elasticsearch should never be directly exposed to the public internet.
Verify the rules:
sudo ufw status numbered
Troubleshooting
If the cluster does not form or a node does not join, check the following:
- Check Elasticsearch logs – sudo journalctl -u elasticsearch -f will show real-time log output. Look for connection refused or certificate errors.
- Verify cluster.name is identical on all nodes. A mismatched cluster name means nodes will refuse to join each other.
- Check network connectivity – use ss -tlnp | grep 9300 to confirm Elasticsearch is listening on the transport port. Use nc -zv es-node-02 9300 to test connectivity between nodes.
- Certificate issues – if you see TLS handshake errors, the enrollment process may have failed. Re-run the enrollment token generation and reconfigure the node.
- vm.max_map_count not set – Elasticsearch will fail to start if this kernel parameter is too low. Verify with sysctl vm.max_map_count.
Summary
You now have a 3-node Elasticsearch 8.x cluster running on Ubuntu 24.04 with TLS encryption and authentication enabled by default. The enrollment token workflow simplifies adding nodes compared to older versions where you had to manually copy certificates. After the initial setup, remove the cluster.initial_master_nodes setting, configure your shard counts based on your data volume, and monitor cluster health regularly using the _cluster/health endpoint. For production deployments, consider adding dedicated master-eligible nodes, configuring snapshot repositories for backups, and setting up index lifecycle management to handle log rotation automatically.