
How to Set Up MariaDB Galera Cluster on Ubuntu 24.04/22.04 with HAProxy Load Balancer

MariaDB Galera Cluster is a synchronous multi-master database cluster for MariaDB. It enables you to read and write to any node while keeping data consistent across all members in real time. Combined with HAProxy as a load balancer, you get a production-ready, highly available database infrastructure with automatic failover and zero replication lag.

In this step-by-step guide, you will learn how to set up a 3-node MariaDB Galera Cluster on Ubuntu 24.04 / 22.04 and configure HAProxy to distribute database connections across the cluster. This setup is ideal for web applications, e-commerce platforms, and any workload that demands high availability and horizontal read/write scalability.

What is MariaDB Galera Cluster?

MariaDB Galera Cluster uses the Galera replication library to provide synchronous multi-master replication with support for XtraDB/InnoDB storage engines. Key features include:

  • Active-active multi-master topology – read and write to any node simultaneously
  • Synchronous replication – no slave lag, all nodes have up-to-date data at all times
  • Automatic node joining – new or recovered nodes sync automatically via State Snapshot Transfer (SST)
  • Automatic membership control – failed nodes are evicted from the cluster transparently
  • True parallel replication – row-level replication for high throughput
  • No single point of failure – if one node goes down, the others continue operating

Architecture Overview

Our setup consists of 4 servers: 3 MariaDB Galera nodes and 1 HAProxy load balancer. It is critical to use an odd number of database nodes (minimum 3) to maintain quorum and avoid split-brain scenarios during network partitions.

[Figure: MariaDB Galera Cluster with HAProxy architecture diagram]

The following table shows the server roles and IP addresses used in this guide:

Hostname            IP Address     Role                        OS
galera-haproxy-01   10.131.74.91   HAProxy Load Balancer       Ubuntu 24.04 / 22.04
galera-db-01        10.131.74.92   Galera Node 1 (bootstrap)   Ubuntu 24.04 / 22.04
galera-db-02        10.131.74.93   Galera Node 2               Ubuntu 24.04 / 22.04
galera-db-03        10.131.74.94   Galera Node 3               Ubuntu 24.04 / 22.04

Prerequisites

  • 4 servers running Ubuntu 24.04 LTS or Ubuntu 22.04 LTS with root or sudo access
  • All servers can communicate with each other over the network
  • Ports 3306 (MySQL), 4567 (Galera replication), 4568 (IST), and 4444 (SST) open between all Galera nodes
  • A unique hostname set on each server

Step 1: Configure Hostnames and /etc/hosts

On all four servers, add the following entries to /etc/hosts so that nodes can resolve each other by hostname:

$ sudo vim /etc/hosts
10.131.74.91  galera-haproxy-01
10.131.74.92  galera-db-01
10.131.74.93  galera-db-02
10.131.74.94  galera-db-03

Verify connectivity between all nodes using ping:

ping -c 3 galera-db-01
ping -c 3 galera-db-02
ping -c 3 galera-db-03
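To check every host in one pass, a short loop like the following (using the hostnames from the table above) can be run from each server:

```shell
# Quick reachability sweep across the cluster hostnames defined in /etc/hosts.
for host in galera-haproxy-01 galera-db-01 galera-db-02 galera-db-03; do
  if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
    echo "$host reachable"
  else
    echo "$host UNREACHABLE"
  fi
done
```

Any host reported UNREACHABLE indicates a missing /etc/hosts entry or a network problem that must be fixed before continuing.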

All nodes in this example sit on the same subnet (10.131.74.0/24). Galera replication is sensitive to network latency, so keep the nodes on a fast, low-latency network (ideally the same data center or availability zone).

Step 2: Install MariaDB on All Galera Nodes

Perform the following steps on all three database nodes (galera-db-01, galera-db-02, galera-db-03). We’ll install the latest stable MariaDB from the official MariaDB repository to ensure Galera support is included.

Add the MariaDB repository and install the server:

curl -LsS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash -s --
sudo apt update
sudo apt install -y mariadb-server mariadb-client galera-4

After installation, run the security hardening script on each node:

sudo mariadb-secure-installation

Answer the prompts to set a root password, remove anonymous users, disallow remote root login, remove the test database, and reload privilege tables.

Confirm MariaDB is installed and running:

mariadb --version
sudo systemctl status mariadb

Step 3: Open Firewall Ports

If UFW is enabled, open the required ports on all three Galera nodes:

sudo ufw allow 3306/tcp   # MySQL client connections
sudo ufw allow 4567/tcp   # Galera cluster replication
sudo ufw allow 4567/udp   # Galera cluster replication (multicast)
sudo ufw allow 4568/tcp   # Incremental State Transfer (IST)
sudo ufw allow 4444/tcp   # State Snapshot Transfer (SST)
sudo ufw reload
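After opening the firewall, you can verify from one node that a peer actually accepts connections on these ports. This sketch uses bash's built-in /dev/tcp; the target host galera-db-02 is just an example, substitute any peer node:

```shell
# Probe each Galera port on a peer node. "open" means something is listening
# there and the firewall permits the connection; "closed" means it is not
# reachable (MariaDB may simply not be running yet at this stage).
target=galera-db-02
for port in 3306 4444 4567 4568; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$target/$port" 2>/dev/null; then
    echo "$target:$port open"
  else
    echo "$target:$port closed"
  fi
done
```

Ports 4444 and 4568 are only used during state transfers, so they may report closed until a transfer is in progress; 4567 should be open whenever the node participates in the cluster.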

Step 4: Configure the First Galera Node (Bootstrap Node)

On galera-db-01, configure the default character set by editing the server configuration:

sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf

Uncomment or add these lines under [mysqld]:

character-set-server = utf8mb4
collation-server = utf8mb4_general_ci

Now create the Galera-specific configuration file:

sudo vim /etc/mysql/mariadb.conf.d/galera.cnf

Add the following configuration on galera-db-01:

[mysqld]
bind-address=0.0.0.0
default_storage_engine=InnoDB
binlog_format=row
innodb_autoinc_lock_mode=2

# Galera cluster configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.131.74.92,10.131.74.93,10.131.74.94"
wsrep_cluster_name="mariadb-galera-cluster"
wsrep_sst_method=rsync

# Cluster node configuration
wsrep_node_address="10.131.74.92"
wsrep_node_name="galera-db-01"

Replace the IP addresses with your actual Galera node IPs. Here’s what each parameter does:

  • wsrep_on=ON – enables the write-set replication API
  • wsrep_provider – path to the Galera library
  • wsrep_cluster_address – comma-separated list of all cluster node IPs
  • wsrep_cluster_name – must be identical on all nodes
  • wsrep_sst_method=rsync – state transfer method for syncing new/recovering nodes
  • wsrep_node_address – this node’s own IP address
  • wsrep_node_name – a human-readable name for this node
  • innodb_autoinc_lock_mode=2 – required for Galera to handle auto-increment values correctly across nodes

Step 5: Configure the Remaining Galera Nodes

The configuration on the other two nodes is identical except for wsrep_node_address and wsrep_node_name, which must reflect each node’s own identity.

For galera-db-02:

root@galera-db-02:~# vim /etc/mysql/mariadb.conf.d/galera.cnf
[mysqld]
bind-address=0.0.0.0
default_storage_engine=InnoDB
binlog_format=row
innodb_autoinc_lock_mode=2

# Galera cluster configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.131.74.92,10.131.74.93,10.131.74.94"
wsrep_cluster_name="mariadb-galera-cluster"
wsrep_sst_method=rsync

# Cluster node configuration
wsrep_node_address="10.131.74.93"
wsrep_node_name="galera-db-02"

For galera-db-03:

root@galera-db-03:~# vim /etc/mysql/mariadb.conf.d/galera.cnf
[mysqld]
bind-address=0.0.0.0
default_storage_engine=InnoDB
binlog_format=row
innodb_autoinc_lock_mode=2

# Galera cluster configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.131.74.92,10.131.74.93,10.131.74.94"
wsrep_cluster_name="mariadb-galera-cluster"
wsrep_sst_method=rsync

# Cluster node configuration
wsrep_node_address="10.131.74.94"
wsrep_node_name="galera-db-03"

Step 6: Bootstrap the Galera Cluster

Bootstrapping initializes the cluster with the first node. All other nodes will join this initial node and sync their data automatically.

First, stop MariaDB on all three nodes:

sudo systemctl stop mariadb

On galera-db-01 only, bootstrap the new cluster:

sudo galera_new_cluster

Verify the cluster is running and the size is 1:

$ mysql -u root -p -e "show status like 'wsrep_%'"
Enter password:
+------------------------------+--------------------------------------+
| Variable_name                | Value                                |
+------------------------------+--------------------------------------+
| wsrep_apply_oooe             | 0.000000                             |
| wsrep_apply_oool             | 0.000000                             |
| wsrep_apply_window           | 0.000000                             |
| wsrep_causal_reads           | 0                                    |
| wsrep_cert_deps_distance     | 0.000000                             |
| wsrep_cert_index_size        | 0                                    |
| wsrep_cert_interval          | 0.000000                             |
| wsrep_cluster_conf_id        | 1                                    |
| wsrep_cluster_size           | 1                                    |
| wsrep_cluster_state_uuid     | 9f957c6d-76b4-11e8-a71a-17cc0eca13f1 |
| wsrep_cluster_status         | Primary                              |
| wsrep_commit_oooe            | 0.000000                             |
| wsrep_commit_oool            | 0.000000                             |
| wsrep_commit_window          | 0.000000                             |
| wsrep_connected              | ON                                   |
| wsrep_desync_count           | 0                                    |
| wsrep_evs_delayed            |                                      |
| wsrep_evs_evict_list         |                                      |
| wsrep_evs_repl_latency       | 0/0/0/0/0                            |
| wsrep_evs_state              | OPERATIONAL                          |
| wsrep_flow_control_paused    | 0.000000                             |
| wsrep_flow_control_paused_ns | 0                                    |
| wsrep_flow_control_recv      | 0                                    |
| wsrep_flow_control_sent      | 0                                    |
| wsrep_gcomm_uuid             | 9f945140-76b4-11e8-84c6-a66f9e2978f6 |
| wsrep_incoming_addresses     | 10.131.74.92:3306                    |
| wsrep_last_committed         | 0                                    |
| wsrep_local_bf_aborts        | 0                                    |
| wsrep_local_cached_downto    | 18446744073709551615                 |
| wsrep_local_cert_failures    | 0                                    |
| wsrep_local_commits          | 0                                    |
| wsrep_local_index            | 0                                    |
| wsrep_local_recv_queue       | 0                                    |
| wsrep_local_recv_queue_avg   | 0.000000                             |
| wsrep_local_recv_queue_max   | 1                                    |
| wsrep_local_recv_queue_min   | 0                                    |
| wsrep_local_replays          | 0                                    |
| wsrep_local_send_queue       | 0                                    |
| wsrep_local_send_queue_avg   | 0.000000                             |
| wsrep_local_send_queue_max   | 1                                    |
| wsrep_local_send_queue_min   | 0                                    |
| wsrep_local_state            | 4                                    |
| wsrep_local_state_comment    | Synced                               |
| wsrep_local_state_uuid       | 9f957c6d-76b4-11e8-a71a-17cc0eca13f1 |
| wsrep_protocol_version       | 8                                    |
| wsrep_provider_name          | Galera                               |
| wsrep_provider_vendor        | Codership Oy <[email protected]>    |
| wsrep_provider_version       | 25.3.23(r3789)                       |
| wsrep_ready                  | ON                                   |
| wsrep_received               | 2                                    |
| wsrep_received_bytes         | 148                                  |
| wsrep_repl_data_bytes        | 0                                    |
| wsrep_repl_keys              | 0                                    |
| wsrep_repl_keys_bytes        | 0                                    |
| wsrep_repl_other_bytes       | 0                                    |
| wsrep_replicated             | 0                                    |
| wsrep_replicated_bytes       | 0                                    |
| wsrep_thread_count           | 2                                    |
+------------------------------+--------------------------------------+

Key values to confirm: wsrep_cluster_size = 1, wsrep_cluster_status = Primary, wsrep_connected = ON, and wsrep_ready = ON.

You can also check just the cluster size:

root@galera-db-01:~# mysql -u root -p -e "show status like 'wsrep_cluster_size'"
Enter password:
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
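Rather than scanning the full wsrep_% dump, you can query just the health indicators that matter in one shot (requires a running node, so this cannot be tested offline):

```shell
# Show only the key cluster health indicators.
mysql -u root -p -e "SHOW STATUS WHERE Variable_name IN
  ('wsrep_cluster_size','wsrep_cluster_status','wsrep_connected',
   'wsrep_ready','wsrep_local_state_comment');"
```

A healthy node reports Primary, ON, ON, and Synced for the last four values.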

Step 7: Join the Remaining Nodes to the Cluster

On galera-db-02, start MariaDB (it will automatically join the cluster):

sudo systemctl start mariadb

Verify the cluster size has grown to 2 (check from any node):

# mysql -u root -p -e "show status like 'wsrep_cluster_size'"
Enter password:
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+

On galera-db-03, start MariaDB:

sudo systemctl start mariadb

Verify the cluster size is now 3:

# mysql -u root -p -e "show status like 'wsrep_cluster_size'"
Enter password:
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

All three nodes are now synchronized and operating as a single cluster.

Step 8: Test Galera Cluster Replication

To verify replication is working, create a test database on one node and confirm it appears on all others.

On galera-db-01, create a test database and table:

root@galera-db-01:~# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE galera_test;
Query OK, 1 row affected (0.004 sec)

MariaDB [(none)]> USE galera_test;
MariaDB [galera_test]> CREATE TABLE test_table (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100), created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
MariaDB [galera_test]> INSERT INTO test_table (name) VALUES ('written on node 1');
MariaDB [galera_test]> EXIT;

On galera-db-02 or galera-db-03, verify the data exists:

MariaDB [(none)]> SELECT * FROM galera_test.test_table;
+----+-------------------+---------------------+
| id | name              | created_at          |
+----+-------------------+---------------------+
|  1 | written on node 1 | 2026-03-18 10:30:00 |
+----+-------------------+---------------------+
1 row in set (0.000 sec)

Now write from galera-db-03 and verify on galera-db-01 to confirm multi-master replication works in both directions:

# On galera-db-03:
MariaDB [(none)]> INSERT INTO galera_test.test_table (name) VALUES ('written on node 3');

# On galera-db-01:
MariaDB [(none)]> SELECT * FROM galera_test.test_table;
+----+-------------------+---------------------+
| id | name              | created_at          |
+----+-------------------+---------------------+
|  1 | written on node 1 | 2026-03-18 10:30:00 |
|  4 | written on node 3 | 2026-03-18 10:31:00 |
+----+-------------------+---------------------+

Note: The auto-increment ID jumps from 1 to 4 because Galera uses innodb_autoinc_lock_mode=2 with interleaved auto-increment offsets across nodes. This is expected behavior and ensures no ID conflicts.
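You can inspect this behavior directly. With the default wsrep_auto_increment_control=ON, Galera keeps auto_increment_increment equal to the cluster size and assigns each node its own auto_increment_offset (requires a running node):

```shell
# Show the auto-increment settings Galera manages on this node.
mysql -u root -p -e "SHOW VARIABLES LIKE 'auto_increment%';"
```

On a 3-node cluster you should see auto_increment_increment = 3, with a different auto_increment_offset on each node.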

Clean up the test database when done:

MariaDB [(none)]> DROP DATABASE galera_test;

Step 9: Install and Configure HAProxy

HAProxy distributes database connections across all Galera nodes, providing a single connection endpoint for your applications. Install HAProxy on the galera-haproxy-01 server:

sudo apt update
sudo apt install -y haproxy

First, create a MariaDB user on any one Galera node for HAProxy health checks (it will replicate to all nodes). HAProxy's mysql-check performs a bare login and cannot send a password, so this user is created without one; restricting it to the HAProxy server's IP instead of '%' limits the exposure:

mysql -u root -p -e "CREATE USER 'haproxy_check'@'10.131.74.91' IDENTIFIED BY ''; FLUSH PRIVILEGES;"

Also create a user that your applications will use to connect through HAProxy. In production, grant privileges only on the databases your application actually needs rather than *.*:

mysql -u root -p -e "CREATE USER 'app_user'@'%' IDENTIFIED BY 'StrongPassword123!'; GRANT ALL PRIVILEGES ON *.* TO 'app_user'@'%'; FLUSH PRIVILEGES;"

Now configure HAProxy. Edit the configuration file:

sudo vim /etc/haproxy/haproxy.cfg

Add the following configuration at the end of the file:

# MariaDB Galera Cluster Frontend
frontend galera_cluster_frontend
    bind *:3306
    mode tcp
    option tcplog
    default_backend galera_cluster_backend

# MariaDB Galera Cluster Backend
backend galera_cluster_backend
    mode tcp
    option tcpka
    option mysql-check user haproxy_check
    balance leastconn
    server galera-db-01 10.131.74.92:3306 check weight 1
    server galera-db-02 10.131.74.93:3306 check weight 1
    server galera-db-03 10.131.74.94:3306 check weight 1

# HAProxy Stats Dashboard
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats realm HAProxy\ Statistics
    stats auth admin:YourStatsPassword

Key configuration parameters:

  • balance leastconn – routes new connections to the node with the fewest active connections, providing optimal load distribution
  • option mysql-check user haproxy_check – performs MySQL-level health checks using the haproxy_check user
  • option tcpka – enables TCP keepalives to maintain long-lived database connections
  • check – enables health monitoring for each backend server

Validate the configuration and restart HAProxy:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy
sudo systemctl enable haproxy

Test the connection through HAProxy from any client:

mysql -h 10.131.74.91 -u app_user -p -e "SHOW VARIABLES LIKE 'hostname';"

Run the command multiple times – you should see different hostnames as HAProxy distributes connections across nodes.
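A quick loop makes the distribution visible (credentials are the app_user from Step 9; -N suppresses the column header, so this needs the live cluster to run):

```shell
# Each iteration opens a fresh connection, so HAProxy's leastconn balancing
# should spread them across the backend nodes.
for i in 1 2 3 4 5; do
  mysql -h 10.131.74.91 -u app_user -p'StrongPassword123!' -N -e "SELECT @@hostname;"
done
```

You should see the hostnames galera-db-01, galera-db-02, and galera-db-03 appear across the five runs.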

Access the HAProxy statistics dashboard at http://10.131.74.91:8404/stats to monitor the health and connection distribution of your Galera cluster in real time.

Troubleshooting Common Issues

Node fails to join the cluster

Check that all required ports (3306, 4567, 4568, 4444) are open and that wsrep_cluster_address contains all node IPs. Review logs with:

sudo journalctl -u mariadb -f

All nodes are down – how to restart

If all nodes go down simultaneously, identify the most up-to-date node by checking the seqno value in the grastate file:

cat /var/lib/mysql/grastate.dat

Bootstrap from the node with the highest seqno value using sudo galera_new_cluster, then start the others normally with sudo systemctl start mariadb. On Galera 3.19 and later, galera_new_cluster will refuse to bootstrap unless safe_to_bootstrap is set to 1 in grastate.dat on that node; set it manually if you are certain the node holds the most recent data.
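The fields involved look like this. The sketch below writes a sample grastate.dat to /tmp purely for illustration (on a real node, read /var/lib/mysql/grastate.dat instead) and extracts the two values that matter for recovery:

```shell
# Sample grastate.dat for illustration only -- the real file lives at
# /var/lib/mysql/grastate.dat on each node.
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    9f957c6d-76b4-11e8-a71a-17cc0eca13f1
seqno:   1234
safe_to_bootstrap: 0
EOF

# Print the recovered transaction sequence number and the bootstrap flag.
awk '/^seqno:|^safe_to_bootstrap:/ {print $1, $2}' /tmp/grastate.dat
# prints:
# seqno: 1234
# safe_to_bootstrap: 0
```

Compare seqno across all three nodes and bootstrap from the highest. If that node shows safe_to_bootstrap: 0 and the whole cluster is down, edit the file to set safe_to_bootstrap: 1 before running galera_new_cluster.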

Split-brain scenario

If the cluster enters a non-Primary state due to network partition, the minority partition will refuse writes. Resolve by ensuring network connectivity is restored and restarting the affected nodes. The 3-node minimum ensures quorum (2 out of 3) is maintained during single-node failures.

Conclusion

You now have a fully functional MariaDB Galera Cluster with HAProxy load balancing on Ubuntu 24.04 / 22.04. This setup provides synchronous multi-master replication with automatic failover and even load distribution. Your applications connect to the single HAProxy endpoint and benefit from the redundancy and performance of the entire cluster.

For production deployments, consider adding a second HAProxy instance with keepalived for load balancer redundancy, implementing SSL/TLS encryption for Galera replication traffic, and setting up monitoring with tools like Prometheus and Grafana.
