
Deploy Elasticsearch Cluster on Rocky Linux 10 with Ansible

Elasticsearch is an open-source distributed search and analytics engine built on Apache Lucene. It handles full-text search, log analytics, security intelligence, and real-time data analysis across large datasets. Deploying a multi-node cluster gives you high availability, horizontal scaling, and fault tolerance for production workloads.

This guide walks through deploying a 3-node Elasticsearch 8.x cluster on Rocky Linux 10 / AlmaLinux 10 using Ansible. We cover inventory setup, a complete playbook for repository configuration, package installation, JVM tuning, cluster configuration with security enabled, firewall rules, and cluster health verification.

Prerequisites

  • 4 servers running Rocky Linux 10 or AlmaLinux 10 – 1 Ansible control node and 3 Elasticsearch nodes
  • At least 4GB RAM on each Elasticsearch node (8GB recommended for production)
  • Root or sudo access on all servers
  • Ansible installed on the control node (2.14+)
  • SSH key-based authentication from the control node to all 3 Elasticsearch nodes
  • Ports 9200 (HTTP API) and 9300 (inter-node transport) open between cluster nodes
  • No separate Java installation – Elasticsearch 8.x bundles its own JDK

Cluster Architecture

We deploy three nodes, all of which act as both master-eligible and data nodes. This provides a quorum for master election and distributes data across the whole cluster.

Hostname    IP Address    Role
es-node1    10.0.1.10     Master-eligible + Data
es-node2    10.0.1.11     Master-eligible + Data
es-node3    10.0.1.12     Master-eligible + Data

A separate control node (your workstation or a management server) runs Ansible to configure all three.
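The fault-tolerance arithmetic behind this layout is easy to check: master election requires a quorum of floor(N/2) + 1 of the master-eligible nodes, so a 3-node cluster survives the loss of exactly one node.

```shell
# Master election needs a quorum of floor(N/2) + 1 voting nodes.
# With N=3, quorum is 2, so the cluster tolerates one node failure.
N=3
QUORUM=$(( N / 2 + 1 ))
echo "nodes=$N quorum=$QUORUM tolerated_failures=$(( N - QUORUM ))"
```

This is also why 3 is the practical minimum: a 2-node cluster has a quorum of 2 and cannot tolerate any failure.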

Step 1: Set Up SSH Key Authentication

From your Ansible control node, generate an SSH key pair if you don’t have one and copy it to each Elasticsearch node.

ssh-keygen -t ed25519 -C "ansible"

Copy the public key to all three nodes. Replace the user and IP with your actual values:

ssh-copy-id rocky@10.0.1.10
ssh-copy-id rocky@10.0.1.11
ssh-copy-id rocky@10.0.1.12

Verify passwordless login works to each node:

ssh rocky@10.0.1.10 "hostname"

You should see the hostname returned without a password prompt:

es-node1

Step 2: Create the Ansible Inventory

Create a project directory and define the inventory file with all three Elasticsearch nodes.

mkdir -p ~/elasticsearch-cluster && cd ~/elasticsearch-cluster

Create the inventory file:

vi inventory.ini

Add the following content with your actual node IPs and SSH user:

[elasticsearch]
es-node1 ansible_host=10.0.1.10
es-node2 ansible_host=10.0.1.11
es-node3 ansible_host=10.0.1.12

[elasticsearch:vars]
ansible_user=rocky
ansible_become=yes
ansible_become_method=sudo

Test connectivity to all nodes:

ansible -i inventory.ini elasticsearch -m ping

All three nodes should return a SUCCESS response:

es-node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
es-node2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
es-node3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Step 3: Create the Elasticsearch Ansible Playbook

The official elastic.elasticsearch Ansible role from GitHub is archived and no longer maintained. Instead, we write a clean playbook that installs Elasticsearch 8.x directly from the official RPM repository, configures the cluster, sets up security, and opens firewall ports.

Create the playbook file:

vi deploy-elasticsearch.yml

Add the following playbook content:

---
- name: Deploy Elasticsearch 8.x Cluster on Rocky Linux 10
  hosts: elasticsearch
  become: yes
  vars:
    es_version: "8.x"
    cluster_name: "prod-es-cluster"
    es_heap_size: "2g"
    es_data_path: "/var/lib/elasticsearch"
    es_log_path: "/var/log/elasticsearch"

  tasks:
    # Import Elasticsearch GPG key and add repository
    - name: Import Elasticsearch GPG key
      rpm_key:
        state: present
        key: https://artifacts.elastic.co/GPG-KEY-elasticsearch

    - name: Add Elasticsearch 8.x repository
      yum_repository:
        name: elasticsearch
        description: Elasticsearch repository for 8.x packages
        baseurl: "https://artifacts.elastic.co/packages/{{ es_version }}/yum"
        gpgcheck: yes
        gpgkey: https://artifacts.elastic.co/GPG-KEY-elasticsearch
        enabled: yes

    # Install Elasticsearch
    - name: Install Elasticsearch
      dnf:
        name: elasticsearch
        state: present

    # Configure JVM heap size
    - name: Set JVM heap size
      copy:
        dest: /etc/elasticsearch/jvm.options.d/heap.options
        content: |
          -Xms{{ es_heap_size }}
          -Xmx{{ es_heap_size }}
        owner: root
        group: elasticsearch
        mode: '0644'

    # Configure elasticsearch.yml
    - name: Configure Elasticsearch
      template:
        src: elasticsearch.yml.j2
        dest: /etc/elasticsearch/elasticsearch.yml
        owner: root
        group: elasticsearch
        mode: '0660'
        backup: yes

    # Set directory ownership
    - name: Set data directory ownership
      file:
        path: "{{ es_data_path }}"
        state: directory
        owner: elasticsearch
        group: elasticsearch
        mode: '0755'

    - name: Set log directory ownership
      file:
        path: "{{ es_log_path }}"
        state: directory
        owner: elasticsearch
        group: elasticsearch
        mode: '0755'

    # Configure firewall
    - name: Open Elasticsearch HTTP port (9200)
      firewalld:
        port: 9200/tcp
        permanent: yes
        state: enabled
        immediate: yes

    - name: Open Elasticsearch transport port (9300)
      firewalld:
        port: 9300/tcp
        permanent: yes
        state: enabled
        immediate: yes

    # Enable and start Elasticsearch
    - name: Enable and start Elasticsearch
      systemd:
        name: elasticsearch
        enabled: yes
        state: restarted
        daemon_reload: yes

    # Wait for Elasticsearch to start. On the very first run this can
    # time out because the TLS certificates from Step 5 are not yet in
    # place — the service comes up after setup-certs.yml runs and
    # Elasticsearch is restarted.
    - name: Wait for Elasticsearch to be ready
      wait_for:
        port: 9200
        delay: 15
        timeout: 120
      ignore_errors: yes

    # Verify cluster health (run on first node only). This succeeds
    # once the elastic superuser password has been set (Step 8).
    - name: Check cluster health
      uri:
        url: "https://localhost:9200/_cluster/health?pretty"
        method: GET
        user: elastic
        password: "{{ elastic_password }}"
        validate_certs: no
        return_content: yes
      register: cluster_health
      failed_when: false
      when: inventory_hostname == groups['elasticsearch'][0]

    - name: Display cluster health
      debug:
        var: cluster_health.json
      when: inventory_hostname == groups['elasticsearch'][0]

Step 4: Create the Elasticsearch Configuration Template

Ansible uses a Jinja2 template to generate a per-node elasticsearch.yml with the correct node name, network host, and discovery settings.

Create the templates directory and the template file:

mkdir -p templates

Create the Elasticsearch configuration template:

vi templates/elasticsearch.yml.j2

Add the following configuration. Each setting is explained with inline comments:

# Cluster name - must be identical on all nodes
cluster.name: {{ cluster_name }}

# Node name - unique per node, pulled from inventory hostname
node.name: {{ inventory_hostname }}

# Node roles - all nodes serve as master-eligible and data nodes
node.roles: [master, data]

# Data and log paths
path.data: {{ es_data_path }}
path.logs: {{ es_log_path }}

# Bind to all interfaces so other nodes and clients can connect
network.host: {{ ansible_host }}

# HTTP API port
http.port: 9200

# Transport port for inter-node communication
transport.port: 9300

# Discovery - list all nodes for cluster formation
discovery.seed_hosts:
{% for host in groups['elasticsearch'] %}
  - {{ hostvars[host]['ansible_host'] }}:9300
{% endfor %}

# Initial master nodes for first-time cluster bootstrap
# Use node names (not IPs) - must match node.name values
cluster.initial_master_nodes:
{% for host in groups['elasticsearch'] %}
  - {{ host }}
{% endfor %}

# Security settings - xpack security enabled by default in ES 8.x
xpack.security.enabled: true
xpack.security.enrollment.enabled: true

# Transport layer TLS (required for inter-node communication)
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

# HTTP layer TLS
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Memory lock to prevent swapping
bootstrap.memory_lock: true
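With the inventory from Step 2, the two Jinja2 loops render identically on every node. For example, the discovery and bootstrap sections come out as:

```yaml
# Rendered result of the discovery and bootstrap loops above
discovery.seed_hosts:
  - 10.0.1.10:9300
  - 10.0.1.11:9300
  - 10.0.1.12:9300
cluster.initial_master_nodes:
  - es-node1
  - es-node2
  - es-node3
```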

Step 5: Configure Security Certificates

Elasticsearch 8.x enables security by default, which requires TLS certificates for both HTTP and transport layers. We add tasks to the playbook that generate certificates on the first node and distribute them to the rest of the cluster.

Create a separate security playbook or add these tasks before the main configuration. Create the file:

vi setup-certs.yml

Add the certificate generation and distribution tasks:

---
- name: Generate Elasticsearch TLS certificates
  hosts: elasticsearch
  become: yes
  vars:
    es_cert_dir: /etc/elasticsearch/certs
    ca_node: "{{ groups['elasticsearch'][0] }}"

  tasks:
    - name: Create certificate directory
      file:
        path: "{{ es_cert_dir }}"
        state: directory
        owner: elasticsearch
        group: elasticsearch
        mode: '0750'

    # Generate CA and certificates on first node only
    - name: Generate Certificate Authority
      command: >
        /usr/share/elasticsearch/bin/elasticsearch-certutil ca
        --out /tmp/elastic-stack-ca.p12
        --pass ""
      when: inventory_hostname == ca_node
      args:
        creates: /tmp/elastic-stack-ca.p12

    - name: Generate transport certificates
      command: >
        /usr/share/elasticsearch/bin/elasticsearch-certutil cert
        --ca /tmp/elastic-stack-ca.p12
        --ca-pass ""
        --out /tmp/transport.p12
        --pass ""
      when: inventory_hostname == ca_node
      args:
        creates: /tmp/transport.p12

    - name: Generate HTTP certificates
      command: >
        /usr/share/elasticsearch/bin/elasticsearch-certutil cert
        --ca /tmp/elastic-stack-ca.p12
        --ca-pass ""
        --out /tmp/http.p12
        --pass ""
      when: inventory_hostname == ca_node
      args:
        creates: /tmp/http.p12

    # Fetch certificates to control node for distribution
    - name: Fetch transport cert from CA node
      fetch:
        src: /tmp/transport.p12
        dest: /tmp/es-certs/transport.p12
        flat: yes
      when: inventory_hostname == ca_node

    - name: Fetch HTTP cert from CA node
      fetch:
        src: /tmp/http.p12
        dest: /tmp/es-certs/http.p12
        flat: yes
      when: inventory_hostname == ca_node

    # Distribute certificates to all nodes
    - name: Copy transport certificate to all nodes
      copy:
        src: /tmp/es-certs/transport.p12
        dest: "{{ es_cert_dir }}/transport.p12"
        owner: elasticsearch
        group: elasticsearch
        mode: '0640'

    - name: Copy HTTP certificate to all nodes
      copy:
        src: /tmp/es-certs/http.p12
        dest: "{{ es_cert_dir }}/http.p12"
        owner: elasticsearch
        group: elasticsearch
        mode: '0640'
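One follow-up worth scripting: the CA bundle and node certificates were staged under /tmp, and the CA file contains the signing private key. A sketch of the cleanup, run on the control node and the CA node (es-node1) after distribution:

```shell
# On the control node — remove the fetched certificate copies.
rm -rf /tmp/es-certs
echo "control node staging removed"

# On the CA node — remove the CA bundle and generated certificates;
# the CA private key should not linger in /tmp.
rm -f /tmp/elastic-stack-ca.p12 /tmp/transport.p12 /tmp/http.p12
echo "CA node staging removed"
```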

Step 6: Configure System Limits for Elasticsearch

Elasticsearch requires specific system-level settings to run properly. Add these tasks to your main playbook before the Elasticsearch start task, or create a separate preparation playbook.

Create the system preparation file:

vi prepare-system.yml

Add system tuning tasks that Elasticsearch requires:

---
- name: Prepare system for Elasticsearch
  hosts: elasticsearch
  become: yes

  tasks:
    # Increase virtual memory map count
    - name: Set vm.max_map_count
      sysctl:
        name: vm.max_map_count
        value: '262144'
        state: present
        reload: yes

    # Disable swap
    - name: Disable swap
      command: swapoff -a
      changed_when: false

    - name: Remove swap from fstab
      lineinfile:
        path: /etc/fstab
        regexp: '.*swap.*'
        state: absent

    # Set file descriptor limits for elasticsearch user
    - name: Set file descriptor limits
      copy:
        dest: /etc/security/limits.d/elasticsearch.conf
        content: |
          elasticsearch soft nofile 65535
          elasticsearch hard nofile 65535
          elasticsearch soft memlock unlimited
          elasticsearch hard memlock unlimited
        mode: '0644'

    # Configure systemd to allow memory locking
    - name: Create systemd override directory
      file:
        path: /etc/systemd/system/elasticsearch.service.d
        state: directory
        mode: '0755'

    - name: Allow memory lock in systemd
      copy:
        dest: /etc/systemd/system/elasticsearch.service.d/override.conf
        content: |
          [Service]
          LimitMEMLOCK=infinity
        mode: '0644'
      notify: reload systemd

  handlers:
    - name: reload systemd
      systemd:
        daemon_reload: yes

Step 7: Deploy the Elasticsearch Cluster with Ansible

Run the playbooks in order – system preparation, the main deployment, then certificate generation followed by a restart. Because the configuration enables TLS before the certificates exist, Elasticsearch will not come up cleanly until setup-certs.yml has run and the service has been restarted.

ansible-playbook -i inventory.ini prepare-system.yml

The system preparation should complete without errors on all three nodes:

PLAY RECAP *********************************************************************
es-node1                   : ok=7    changed=5    unreachable=0    failed=0    skipped=0
es-node2                   : ok=7    changed=5    unreachable=0    failed=0    skipped=0
es-node3                   : ok=7    changed=5    unreachable=0    failed=0    skipped=0

Run the main Elasticsearch deployment playbook. Pass the elastic_password variable which sets the built-in superuser password for cluster health verification:

ansible-playbook -i inventory.ini deploy-elasticsearch.yml -e "elastic_password=YourStrongPassword123"

This takes several minutes as it installs packages, configures each node, and starts Elasticsearch. A successful run looks like this:

PLAY RECAP *********************************************************************
es-node1                   : ok=14   changed=8    unreachable=0    failed=0    skipped=0
es-node2                   : ok=12   changed=8    unreachable=0    failed=0    skipped=2
es-node3                   : ok=12   changed=8    unreachable=0    failed=0    skipped=2

Then run the certificate setup playbook:

ansible-playbook -i inventory.ini setup-certs.yml

After certificates are distributed, restart Elasticsearch on all nodes for TLS to take effect:

ansible -i inventory.ini elasticsearch -m systemd -a "name=elasticsearch state=restarted" --become

Step 8: Set the Elastic Superuser Password

Elasticsearch 8.x generates a random password for the elastic superuser during initial startup. Reset it to a known value on the first node:

ssh -t rocky@10.0.1.10 "sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i"

You will be prompted to enter and confirm the new password:

This tool will reset the password of the [elastic] user.
You will be prompted to enter the password.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Re-enter password for [elastic]:
Password for the [elastic] user successfully reset.

Step 9: Configure Firewall Ports for Elasticsearch Cluster

The playbook opens ports 9200 and 9300 automatically via the firewalld module. If you need to verify or manually configure the firewall on Rocky Linux, check the rules on each node.

sudo firewall-cmd --list-ports

You should see both Elasticsearch ports listed:

9200/tcp 9300/tcp

The two ports serve different purposes:

  • Port 9200/tcp – HTTP REST API for client queries, indexing, and management
  • Port 9300/tcp – Transport protocol for inter-node communication, shard replication, and cluster state

For production environments, restrict port 9200 to only the application servers and admin workstations that need API access. Port 9300 should be open only between cluster nodes.
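A sketch of such a restriction using a firewalld rich rule – the 10.0.2.0/24 subnet is a placeholder for your actual application network:

```shell
# Build a firewalld rich rule that limits the ES HTTP API (9200/tcp)
# to a single application subnet. Replace APP_NET with your real
# app-server network.
APP_NET="10.0.2.0/24"
RULE="rule family=ipv4 source address=${APP_NET} port port=9200 protocol=tcp accept"
echo "$RULE"
# Apply on each node (as root), replacing the open-to-all port rule:
#   firewall-cmd --permanent --remove-port=9200/tcp
#   firewall-cmd --permanent --add-rich-rule="$RULE"
#   firewall-cmd --reload
```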

Step 10: Verify the Elasticsearch Cluster Health

SSH into any cluster node and query the cluster health API. Since security is enabled, use the elastic user credentials:

curl -s -k -u elastic:YourStrongPassword123 https://localhost:9200/_cluster/health?pretty

A healthy 3-node cluster returns status “green” with all nodes detected:

{
  "cluster_name" : "prod-es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
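For scripted health checks, the status field can be pulled out of that response with plain sed, no jq required – a sketch against a saved copy of the output:

```shell
# Extract the "status" value from a cluster-health JSON response.
# The heredoc stands in for the real curl output shown above.
health=$(cat <<'EOF'
{
  "cluster_name" : "prod-es-cluster",
  "status" : "green",
  "number_of_nodes" : 3
}
EOF
)
status=$(printf '%s\n' "$health" | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"
```

The same pattern works directly against the curl command in a cron job or monitoring probe, alerting when the value is yellow or red.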

Verify that all three nodes are visible in the cluster:

curl -s -k -u elastic:YourStrongPassword123 https://localhost:9200/_cat/nodes?v

The output lists each node with its IP, role, and master status. The * marks the elected master:

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.name master
10.0.1.10           25          65   2    0.15    0.10     0.08 es-node1  *
10.0.1.11           22          58   1    0.08    0.06     0.04 es-node2  -
10.0.1.12           20          61   1    0.10    0.07     0.05 es-node3  -

Check which node is the elected master:

curl -s -k -u elastic:YourStrongPassword123 https://localhost:9200/_cat/master?v

The response shows the current master node details:

id                     host       ip         node
A1B2C3D4E5F6G7H8I9J0 10.0.1.10 10.0.1.10 es-node1

Create a test index and add a document to confirm the cluster processes writes and distributes data correctly.

curl -s -k -u elastic:YourStrongPassword123 -X PUT "https://localhost:9200/test-index/_doc/1" -H 'Content-Type: application/json' -d '{"message": "Elasticsearch cluster is working", "timestamp": "2026-03-21T12:00:00"}'

A successful response confirms the document was indexed with result “created”:

{
  "_index" : "test-index",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}

Retrieve the document to verify search works:

curl -s -k -u elastic:YourStrongPassword123 https://localhost:9200/test-index/_doc/1?pretty

The response returns the full document with the source data:

{
  "_index" : "test-index",
  "_id" : "1",
  "_version" : 1,
  "_seq_no" : 0,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "message" : "Elasticsearch cluster is working",
    "timestamp" : "2026-03-21T12:00:00"
  }
}

Clean up the test index when done verifying:

curl -s -k -u elastic:YourStrongPassword123 -X DELETE "https://localhost:9200/test-index?pretty"

For more Elasticsearch index operations, see our guide on managing Elasticsearch index data.

Key Configuration Parameters Explained

Understanding the critical settings in elasticsearch.yml helps with troubleshooting and tuning.

Parameter                       Purpose
cluster.name                    Identifies the cluster – nodes only join clusters with the same name
node.name                       Unique identifier for each node, used in logs and API responses
node.roles                      Defines node functions – master, data, ingest, ml, etc.
discovery.seed_hosts            List of nodes to contact during cluster formation and master election
cluster.initial_master_nodes    Bootstrap setting for first cluster start – remove after initial formation
network.host                    IP address the node binds to – use the node IP, not 0.0.0.0, in production
bootstrap.memory_lock           Prevents Elasticsearch memory from being swapped to disk
xpack.security.enabled          Enables authentication and TLS – true by default in ES 8.x

JVM Heap Size Tuning

The playbook sets the JVM heap to 2GB by default through the es_heap_size variable. Adjust this based on your node RAM – the heap should be set to no more than 50% of available physical memory, and never exceed 31GB (to stay within compressed oops range).
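The 50% rule can be computed directly on a node – a sketch that reads total RAM from /proc/meminfo and caps the result at 31g:

```shell
# Halve MemTotal (kB → GB, truncated) and cap at 31g to stay within
# compressed-oops range.
heap=$(awk '/^MemTotal:/ { h = int($2 / 1024 / 1024 / 2); if (h < 1) h = 1; if (h > 31) h = 31; print h "g" }' /proc/meminfo)
echo "recommended heap: $heap"
```

The result can then be passed to the playbook through the es_heap_size variable.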

Change the heap size by modifying the variable when running the playbook:

ansible-playbook -i inventory.ini deploy-elasticsearch.yml -e "es_heap_size=4g elastic_password=YourStrongPassword123"

For nodes with 16GB RAM, set the heap to 8g. For 32GB RAM nodes, use 16g. Verify the heap settings after deployment:

curl -s -k -u elastic:YourStrongPassword123 https://localhost:9200/_nodes/jvm?pretty | grep -A5 '"mem"'

Adding Kibana to the Cluster

Once the Elasticsearch cluster is running, add Kibana for visualization and cluster management. Kibana connects to the cluster over HTTPS and provides a web interface for querying data, managing indices, and monitoring cluster health. For a full logging pipeline, add Filebeat and Logstash to collect and process logs before they reach Elasticsearch.

Conclusion

We deployed a 3-node Elasticsearch 8.x cluster on Rocky Linux 10 using Ansible with security enabled, TLS encryption, proper JVM tuning, and firewall configuration. The cluster is ready to index and search data with full replication across nodes.

For production hardening, configure snapshot repositories for automated backups, set up monitoring with Kibana Stack Monitoring or Prometheus exporters, and implement index lifecycle management (ILM) policies to handle data retention and rollover automatically.
