Deploy 3-Node RabbitMQ Cluster on Rocky Linux 10 / AlmaLinux 10

RabbitMQ is an open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). It handles message routing between distributed services and is widely used in microservice architectures for decoupling application components. RabbitMQ 4.x replaced classic mirrored queues with quorum queues based on the Raft consensus algorithm, providing stronger data safety guarantees across cluster nodes.

This guide walks through deploying a 3-node RabbitMQ 4.2 cluster on Rocky Linux 10 / AlmaLinux 10. The setup covers Erlang and RabbitMQ installation from official repositories, Erlang cookie synchronization, cluster formation, quorum queue replication, and the management UI.

Prerequisites

  • 3 servers running Rocky Linux 10 or AlmaLinux 10 with at least 2GB RAM each
  • Root or sudo access on all nodes
  • Network connectivity between all 3 nodes on ports 4369, 5672, 15672, and 25672 (TCP)
  • DNS or /etc/hosts entries configured so each node can resolve the others by hostname

We use the following node layout throughout this guide:

Hostname      Role              IP Address
rabbitmq01    RabbitMQ Node 1   10.0.1.10
rabbitmq02    RabbitMQ Node 2   10.0.1.11
rabbitmq03    RabbitMQ Node 3   10.0.1.12

Step 1: Set Hostnames and Configure /etc/hosts (All Nodes)

Set the hostname on each node. Run the appropriate command on each server:

# On Node 1
sudo hostnamectl set-hostname rabbitmq01

# On Node 2
sudo hostnamectl set-hostname rabbitmq02

# On Node 3
sudo hostnamectl set-hostname rabbitmq03

Add the following entries to /etc/hosts on all 3 nodes so they can resolve each other by short hostname:

sudo vi /etc/hosts

Add these lines:

10.0.1.10  rabbitmq01
10.0.1.11  rabbitmq02
10.0.1.12  rabbitmq03

Verify connectivity from each node:

ping -c 2 rabbitmq01
ping -c 2 rabbitmq02
ping -c 2 rabbitmq03
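Beyond ping, you can probe the actual RabbitMQ ports once the firewall rules from Step 3 are in place. The loop below is a hypothetical helper (not part of RabbitMQ) that relies on bash's /dev/tcp feature; a closed port or unresolvable hostname prints "NOT reachable":

```shell
# Hypothetical pre-flight check: probe every RabbitMQ port on every peer.
# A closed port or an unresolvable hostname prints "NOT reachable".
check_ports() {
  for host in "$@"; do
    for port in 4369 5672 15672 25672; do
      if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port reachable"
      else
        echo "$host:$port NOT reachable"
      fi
    done
  done
}

check_ports rabbitmq01 rabbitmq02 rabbitmq03
```

Run it from each node; every peer should report all four ports reachable before you attempt to form the cluster.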

Step 2: Install Erlang and RabbitMQ on Rocky Linux 10 / AlmaLinux 10

RabbitMQ 4.2 requires Erlang/OTP 26.2 or later (up to 27.x). The official RabbitMQ team maintains RPM repositories with compatible Erlang and RabbitMQ packages. Run these steps on all 3 nodes.

Import the GPG signing keys:

sudo rpm --import https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
sudo rpm --import https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
sudo rpm --import https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key

Create the repository file:

sudo vi /etc/yum.repos.d/rabbitmq.repo

Add the following content:

[modern-erlang]
name=modern-erlang-el9
baseurl=https://yum1.rabbitmq.com/erlang/el/9/$basearch
        https://yum2.rabbitmq.com/erlang/el/9/$basearch
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[modern-erlang-noarch]
name=modern-erlang-el9-noarch
baseurl=https://yum1.rabbitmq.com/erlang/el/9/noarch
        https://yum2.rabbitmq.com/erlang/el/9/noarch
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[rabbitmq-el9]
name=rabbitmq-el9
baseurl=https://yum2.rabbitmq.com/rabbitmq/el/9/$basearch
        https://yum1.rabbitmq.com/rabbitmq/el/9/$basearch
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[rabbitmq-el9-noarch]
name=rabbitmq-el9-noarch
baseurl=https://yum2.rabbitmq.com/rabbitmq/el/9/noarch
        https://yum1.rabbitmq.com/rabbitmq/el/9/noarch
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

Rocky Linux 10 and AlmaLinux 10 are binary-compatible with RHEL 10. At the time of writing, the RabbitMQ team does not publish dedicated el10 repositories, but the el9 packages install and run on these systems. Install Erlang and RabbitMQ:

sudo dnf makecache
sudo dnf install -y erlang rabbitmq-server

Verify the installed versions:

$ erl -eval 'erlang:display(erlang:system_info(otp_release)), halt().' -noshell
"27"

$ rabbitmqctl version
4.2.5

Step 3: Configure Firewall Rules (All Nodes)

RabbitMQ uses several ports for different functions. Open all required ports on every node using firewalld:

Port         Purpose
4369/tcp     EPMD – Erlang Port Mapper Daemon (node discovery)
5672/tcp     AMQP client connections
15672/tcp    Management UI and HTTP API
25672/tcp    Inter-node communication (Erlang distribution)

sudo firewall-cmd --add-port={4369,5672,15672,25672}/tcp --permanent
sudo firewall-cmd --reload

Verify the ports are open:

sudo firewall-cmd --list-ports

Step 4: Start RabbitMQ and Create Admin User (All Nodes)

Enable and start the RabbitMQ service on all 3 nodes:

sudo systemctl enable --now rabbitmq-server

Verify the service is running:

$ systemctl status rabbitmq-server
● rabbitmq-server.service - RabbitMQ broker
     Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; preset: disabled)
     Active: active (running) since ...
   Main PID: 3659 (beam.smp)
     Memory: 95.5M
     CGroup: /system.slice/rabbitmq-server.service
             ...

The default guest user can only connect from localhost. Create an admin user for remote access. Run this on node1 only – it will replicate to the other nodes after clustering:

sudo rabbitmqctl add_user admin StrongPassword
sudo rabbitmqctl set_user_tags admin administrator
sudo rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"

Verify the user was created:

$ sudo rabbitmqctl list_users
Listing users ...
user    tags
admin   [administrator]
guest   [administrator]

Step 5: Synchronize the Erlang Cookie Across Nodes

RabbitMQ nodes authenticate to each other using a shared Erlang cookie. All nodes in the cluster must have the same cookie value in /var/lib/rabbitmq/.erlang.cookie.

Copy the cookie from node1 to the other two nodes. First, read the cookie on node1:

$ sudo cat /var/lib/rabbitmq/.erlang.cookie
ABCDEFGHIJKLMNOPQRST

On node2 and node3, stop RabbitMQ, replace the cookie, set the correct permissions, and restart:

sudo systemctl stop rabbitmq-server
echo "ABCDEFGHIJKLMNOPQRST" | sudo tee /var/lib/rabbitmq/.erlang.cookie > /dev/null
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie
sudo systemctl start rabbitmq-server

Replace ABCDEFGHIJKLMNOPQRST with the actual cookie value from node1. Alternatively, use scp to copy the file directly:

sudo scp root@rabbitmq01:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie
sudo systemctl restart rabbitmq-server
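A quick sanity check before clustering is to compare cookie hashes across nodes. The sketch below demonstrates the comparison on two throwaway files using this guide's placeholder value; on the real hosts, hash /var/lib/rabbitmq/.erlang.cookie on each node instead:

```shell
# Demo: matching cookies produce identical SHA-256 hashes.
# On real nodes, hash /var/lib/rabbitmq/.erlang.cookie on each host.
printf 'ABCDEFGHIJKLMNOPQRST' > /tmp/cookie_node1
printf 'ABCDEFGHIJKLMNOPQRST' > /tmp/cookie_node2

h1=$(sudo sha256sum /tmp/cookie_node1 | cut -d' ' -f1)
h2=$(sudo sha256sum /tmp/cookie_node2 | cut -d' ' -f1)

if [ "$h1" = "$h2" ]; then
  echo "cookies match"
else
  echo "cookies DIFFER - nodes will refuse to cluster"
fi
```

A mismatched cookie is the most common reason join_cluster fails to contact the remote node in the next step.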

Step 6: Form the RabbitMQ Cluster

With the Erlang cookie synchronized, join node2 and node3 to node1’s cluster. Run the following commands on node2:

sudo rabbitmqctl stop_app
sudo rabbitmqctl reset
sudo rabbitmqctl join_cluster rabbit@rabbitmq01
sudo rabbitmqctl start_app

Then run the same commands on node3:

sudo rabbitmqctl stop_app
sudo rabbitmqctl reset
sudo rabbitmqctl join_cluster rabbit@rabbitmq01
sudo rabbitmqctl start_app

The join_cluster command uses the short hostname (rabbitmq01), not the FQDN. The reset command clears any existing data on the joining node, so only run this during initial cluster setup.
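As an alternative to running join_cluster by hand, RabbitMQ supports declarative peer discovery. The fragment below (assuming this guide's node names) would go in /etc/rabbitmq/rabbitmq.conf on every node before first start:

```ini
cluster_formation.peer_discovery_backend = classic_config
cluster_formation.classic_config.nodes.1 = rabbit@rabbitmq01
cluster_formation.classic_config.nodes.2 = rabbit@rabbitmq02
cluster_formation.classic_config.nodes.3 = rabbit@rabbitmq03
```

Peer discovery only runs on nodes with no existing state, so it suits automated provisioning of fresh nodes rather than an already-formed cluster.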

Verify the cluster status from any node:

$ sudo rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq01 ...
Basics

Cluster name: rabbit@rabbitmq01

Disk Nodes

rabbit@rabbitmq01
rabbit@rabbitmq02
rabbit@rabbitmq03

Running Nodes

rabbit@rabbitmq01
rabbit@rabbitmq02
rabbit@rabbitmq03

All 3 nodes should appear under both “Disk Nodes” and “Running Nodes”.

Step 7: Configure Quorum Queues for High Availability

RabbitMQ 4.x removed classic mirrored queues (the old ha-all policy). Quorum queues are now the standard for replicated, highly available queues. They use the Raft consensus algorithm to replicate data across cluster nodes.

RabbitMQ policies cannot change a queue's type – the type is fixed when a queue is declared. Instead, make quorum the default type for newly declared queues by adding this line to /etc/rabbitmq/rabbitmq.conf on every node (create the file if it does not exist):

default_queue_type = quorum

Restart RabbitMQ on each node for the setting to take effect:

sudo systemctl restart rabbitmq-server

When creating queues through your application, declare them with the x-queue-type: quorum argument. Here is an example using the rabbitmqadmin CLI (downloadable from the management UI at /cli/rabbitmqadmin):

rabbitmqadmin -u admin -p StrongPassword declare queue name=my-ha-queue durable=true arguments='{"x-queue-type":"quorum"}'

By default, quorum queues replicate to 3 members across the cluster, which matches our 3-node setup perfectly. You can verify queue replication:

$ sudo rabbitmqctl list_queues name type members
Listing queues ...
name           type    members
my-ha-queue    quorum  [rabbit@rabbitmq01, rabbit@rabbitmq02, rabbit@rabbitmq03]

Step 8: Enable the RabbitMQ Management UI (All Nodes)

The management plugin provides a web-based UI for monitoring queues, connections, and cluster health. Enable it on all 3 nodes:

sudo rabbitmq-plugins enable rabbitmq_management

No restart is needed – the plugin activates immediately. Access the management UI from your browser at:

http://10.0.1.10:15672

Log in with the admin user created earlier. The dashboard shows all cluster nodes, their status, memory usage, and message rates. You can access the UI through any node’s IP since the cluster state is shared.

To test replication, create a queue from the management UI on node1 and verify it appears when accessing node2 or node3’s UI.

Step 9: Create a Virtual Host (Optional)

Virtual hosts provide logical separation for different applications sharing the same RabbitMQ cluster. Create a vhost and grant permissions to the admin user:

sudo rabbitmqctl add_vhost /production

Grant the admin user full permissions on the new vhost:

sudo rabbitmqctl set_permissions -p /production admin ".*" ".*" ".*"

Verify the vhost and permissions:

$ sudo rabbitmqctl list_vhosts
Listing vhosts ...
name
/
/production

$ sudo rabbitmqctl list_user_permissions admin
Listing permissions for user "admin" ...
vhost         configure    write    read
/             .*           .*       .*
/production   .*           .*       .*

Step 10: Verify Cluster Health

Run these checks from any node to confirm the cluster is healthy:

sudo rabbitmq-diagnostics check_running
sudo rabbitmq-diagnostics check_local_alarms
sudo rabbitmq-diagnostics cluster_status

Check that all nodes see each other and report no alarms:

$ sudo rabbitmq-diagnostics check_running
Checking if RabbitMQ is running on node rabbit@rabbitmq01 ...
RabbitMQ on node rabbit@rabbitmq01 is fully booted and running

$ sudo rabbitmq-diagnostics check_local_alarms
Checking if node rabbit@rabbitmq01 has local alarms ...
Node rabbit@rabbitmq01 has no local alarms
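For unattended monitoring, the individual checks above can be wrapped in a small function that reports the first failure. This is a hypothetical wrapper (the check names come from rabbitmq-diagnostics; the function itself is illustrative):

```shell
# Hypothetical wrapper: run the node health checks in order and
# report the first failure; usable from cron or a monitoring agent.
health_check() {
  if ! command -v rabbitmq-diagnostics >/dev/null 2>&1; then
    echo "rabbitmq-diagnostics not found"
    return 1
  fi
  for check in check_running check_local_alarms; do
    if ! rabbitmq-diagnostics "$check" >/dev/null 2>&1; then
      echo "FAILED: $check"
      return 1
    fi
  done
  echo "all checks passed"
}
```

Call health_check from any node; a non-zero exit status signals a problem.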

You can also check cluster status through the management HTTP API with curl:

curl -u admin:StrongPassword http://10.0.1.10:15672/api/nodes | python3 -m json.tool

This returns JSON with detailed information about every node in the cluster, including Erlang version, RabbitMQ version, memory usage, and uptime.
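If python3 is available, the /api/nodes response can be reduced to a one-line-per-node summary. The sketch below parses a canned sample payload so it runs anywhere; in practice, replace the sample with the output of the curl command above:

```shell
# Reduce an /api/nodes-style JSON payload to "name running=..." lines.
# The sample stands in for a live API response.
sample='[{"name":"rabbit@rabbitmq01","running":true},
         {"name":"rabbit@rabbitmq02","running":true}]'

summary=$(printf '%s' "$sample" | python3 -c '
import json, sys
for node in json.load(sys.stdin):
    print(node["name"], "running=%s" % node["running"])
')
echo "$summary"
```

Any node reporting running=False warrants a look at its logs and a rerun of the diagnostics checks above.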

Removing a Node from the Cluster

To remove a node from the cluster, run these commands on the node being removed:

sudo rabbitmqctl stop_app
sudo rabbitmqctl reset
sudo rabbitmqctl start_app

Alternatively, stop the node being removed, then remove it from any remaining cluster member:

sudo rabbitmqctl forget_cluster_node rabbit@rabbitmq03

Conclusion

We deployed a 3-node RabbitMQ 4.2 cluster on Rocky Linux 10 / AlmaLinux 10 with quorum queues for high availability and the management UI for monitoring. The cluster replicates data across all nodes using the Raft consensus protocol, tolerating the failure of one node without message loss.

For production deployments, enable TLS on both AMQP (port 5671) and the management UI, set up monitoring with Prometheus via the built-in rabbitmq_prometheus plugin, and configure regular backups of definitions using rabbitmqctl export_definitions.
