MariaDB Galera Cluster is a synchronous multi-master database cluster solution built on the Galera replication library. It allows reads and writes on any node, provides automatic node joining, and handles node failures without downtime. Combined with ProxySQL as a query router and load balancer, you get a production-ready highly available database tier.
This guide walks through setting up a 3-node MariaDB Galera Cluster on Ubuntu 24.04 LTS using MariaDB 11.4 LTS from the official repository. We also configure ProxySQL to distribute queries across all nodes and handle failover automatically.
Prerequisites
- 3 Ubuntu 24.04 LTS servers for Galera nodes (minimum 2 GB RAM each)
- 1 Ubuntu 24.04 LTS server for ProxySQL
- Root or sudo access on all servers
- All servers must be able to reach each other on ports 3306 (MySQL), 4567 (Galera cluster communication – TCP/UDP), 4568 (Galera IST – TCP), and 4444 (Galera SST – TCP)
- Ports 6032 (ProxySQL admin) and 6033 (ProxySQL client) open on the ProxySQL server
Our lab environment uses these IP addresses:
| Hostname | IP Address | Role |
|---|---|---|
| galera1 | 10.0.1.11 | Galera Node 1 |
| galera2 | 10.0.1.12 | Galera Node 2 |
| galera3 | 10.0.1.13 | Galera Node 3 |
| proxysql | 10.0.1.10 | ProxySQL Load Balancer |
Step 1: Set Hostnames on All Servers
Set the hostname on each server. Run the appropriate command on each node:
# On node 1
sudo hostnamectl set-hostname galera1
# On node 2
sudo hostnamectl set-hostname galera2
# On node 3
sudo hostnamectl set-hostname galera3
# On ProxySQL server
sudo hostnamectl set-hostname proxysql
Add these entries to /etc/hosts on all four servers:
sudo vi /etc/hosts
Add the following lines:
10.0.1.11 galera1
10.0.1.12 galera2
10.0.1.13 galera3
10.0.1.10 proxysql
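Editing /etc/hosts by hand on four servers invites typos. The entries above can also be appended with a small idempotent helper – a sketch, safe to re-run, that only adds lines that are not already present (the function name is ours; in real use call it against /etc/hosts as root):

```shell
# Idempotently add the cluster entries to a hosts file.
# In real use: add_cluster_hosts /etc/hosts   (as root)
add_cluster_hosts() {
  local hosts_file=$1
  for entry in "10.0.1.11 galera1" "10.0.1.12 galera2" \
               "10.0.1.13 galera3" "10.0.1.10 proxysql"; do
    # Append only if the exact line is not present yet.
    grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
  done
}

# Demo against a temporary file so the snippet is safe to run anywhere.
tmp_hosts=$(mktemp)
add_cluster_hosts "$tmp_hosts"
add_cluster_hosts "$tmp_hosts"   # second run adds nothing
cat "$tmp_hosts"
```

Running it twice leaves exactly four lines, so it can go into a provisioning script without guards.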
Step 2: Install MariaDB 11.4 on All Galera Nodes
Run these steps on all three Galera nodes (galera1, galera2, galera3). We install MariaDB 11.4 LTS from the official MariaDB repository to get Galera support built in. If you need a standalone MariaDB setup first, see our guide on installing MariaDB on Ubuntu.
Install prerequisites and add the MariaDB repository signing key:
sudo apt update
sudo apt install -y curl software-properties-common gnupg2
Import the MariaDB signing key and add the repository:
curl -fsSL https://mariadb.org/mariadb_release_signing_key.pgp | sudo gpg --dearmor -o /usr/share/keyrings/mariadb-keyring.gpg
Create the repository file:
sudo vi /etc/apt/sources.list.d/mariadb.sources
Add the following content:
X-Repolib-Name: MariaDB
Types: deb
URIs: https://deb.mariadb.org/11.4/ubuntu
Suites: noble
Components: main
Signed-By: /usr/share/keyrings/mariadb-keyring.gpg
Update the package index and install MariaDB server along with the Galera library and backup tool:
sudo apt update
sudo apt install -y mariadb-server galera-4 mariadb-backup
Verify MariaDB is installed:
mariadb --version
The output confirms MariaDB 11.4 is installed:
mariadb Ver 15.1 Distrib 11.4.10-MariaDB, for debian-linux-gnu (x86_64) using EditLine wrapper
Stop MariaDB for now – we need to configure Galera before starting the cluster:
sudo systemctl stop mariadb
Step 3: Configure AppArmor for MariaDB
Ubuntu 24.04 ships with AppArmor, which can interfere with Galera SST operations. The mariabackup SST method needs to read and write files in temporary directories. Set the MariaDB AppArmor profile to complain mode on all three nodes, which logs violations instead of blocking them:
sudo apt install -y apparmor-utils
sudo aa-complain /usr/sbin/mariadbd
Verify the profile is in complain mode:
sudo aa-status | grep mariadb
You should see the mariadbd profile listed under processes in complain mode.
Step 4: Configure Galera Cluster on All Nodes
Create the Galera configuration file on all three nodes. The wsrep (Write-Set Replication) settings control how nodes communicate and replicate data.
sudo vi /etc/mysql/mariadb.conf.d/60-galera.cnf
Add the following configuration on Node 1 (galera1 – 10.0.1.11):
[galera]
# Galera provider and cluster settings
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "galera_cluster"
wsrep_cluster_address = "gcomm://10.0.1.11,10.0.1.12,10.0.1.13"
# Node-specific settings
wsrep_node_address = 10.0.1.11
wsrep_node_name = galera1
# SST method - mariabackup is non-blocking
wsrep_sst_method = mariabackup
wsrep_sst_auth = mariabackup:SecureSST@Pass1
# InnoDB settings required for Galera
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_doublewrite = 1
innodb_flush_log_at_trx_commit = 2
# Bind to all interfaces
bind-address = 0.0.0.0
On Node 2 (galera2 – 10.0.1.12), use the same configuration but change the node-specific values:
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "galera_cluster"
wsrep_cluster_address = "gcomm://10.0.1.11,10.0.1.12,10.0.1.13"
wsrep_node_address = 10.0.1.12
wsrep_node_name = galera2
wsrep_sst_method = mariabackup
wsrep_sst_auth = mariabackup:SecureSST@Pass1
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_doublewrite = 1
innodb_flush_log_at_trx_commit = 2
bind-address = 0.0.0.0
On Node 3 (galera3 – 10.0.1.13):
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "galera_cluster"
wsrep_cluster_address = "gcomm://10.0.1.11,10.0.1.12,10.0.1.13"
wsrep_node_address = 10.0.1.13
wsrep_node_name = galera3
wsrep_sst_method = mariabackup
wsrep_sst_auth = mariabackup:SecureSST@Pass1
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_doublewrite = 1
innodb_flush_log_at_trx_commit = 2
bind-address = 0.0.0.0
Key configuration options explained:
- wsrep_on – Enables write-set replication
- wsrep_cluster_address – Comma-separated list of all node IPs. All nodes use the same list
- wsrep_sst_method – State Snapshot Transfer method. mariabackup is the recommended non-blocking method
- innodb_autoinc_lock_mode = 2 – Required for Galera to handle auto-increment values across nodes
- innodb_flush_log_at_trx_commit = 2 – Improves write performance. Safe with Galera since data exists on multiple nodes
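Because the three node files differ only in two values, it is easy to miss a setting when copying between nodes. A quick sanity check – a sketch with a hypothetical function name – greps a config file for every key this guide relies on; run it against /etc/mysql/mariadb.conf.d/60-galera.cnf on each node:

```shell
# Report any required Galera setting missing from a config file.
check_galera_cnf() {
  local cnf=$1 rc=0
  for key in wsrep_on wsrep_provider wsrep_cluster_name wsrep_cluster_address \
             wsrep_node_address wsrep_node_name wsrep_sst_method \
             binlog_format innodb_autoinc_lock_mode; do
    grep -Eq "^[[:space:]]*${key}[[:space:]]*=" "$cnf" \
      || { echo "MISSING: $key"; rc=1; }
  done
  return $rc
}

# Demo on a throwaway fragment; on a real node pass the actual file path.
tmp_cnf=$(mktemp)
printf '%s\n' '[galera]' 'wsrep_on = ON' > "$tmp_cnf"
check_galera_cnf "$tmp_cnf" || echo "incomplete config detected"
```

A non-zero exit status means at least one key is absent, so the check also works in CI or a pre-start systemd unit.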
Step 5: Configure Firewall on All Galera Nodes
Open the required ports on all three Galera nodes using UFW:
sudo ufw allow 3306/tcp comment 'MariaDB client connections'
sudo ufw allow 4567/tcp comment 'Galera cluster replication'
sudo ufw allow 4567/udp comment 'Galera cluster multicast'
sudo ufw allow 4568/tcp comment 'Galera IST'
sudo ufw allow 4444/tcp comment 'Galera SST'
sudo ufw reload
Verify the rules are active:
sudo ufw status numbered
The output should include these five rules as ALLOW (IPv6 counterparts may also be listed):
Status: active
To Action From
-- ------ ----
[ 1] 3306/tcp ALLOW IN Anywhere
[ 2] 4567/tcp ALLOW IN Anywhere
[ 3] 4567/udp ALLOW IN Anywhere
[ 4] 4568/tcp ALLOW IN Anywhere
[ 5] 4444/tcp ALLOW IN Anywhere
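The rules above allow traffic from anywhere. A tighter variant restricts the Galera ports to your private subnet. The helper below is a sketch (assuming 10.0.1.0/24 is the cluster network) that prints the commands rather than running them, so you can review before piping the output to a root shell:

```shell
# Print UFW rules that limit cluster traffic to a given subnet.
gen_ufw_rules() {
  local net=$1
  local port
  for port in 3306 4567 4568 4444; do
    echo "ufw allow from $net to any port $port proto tcp"
  done
  # Galera also uses UDP on 4567 for cluster communication.
  echo "ufw allow from $net to any port 4567 proto udp"
}

gen_ufw_rules 10.0.1.0/24
# Review the output, then apply with: gen_ufw_rules 10.0.1.0/24 | sudo sh
```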
Step 6: Create the SST User on Node 1
Before bootstrapping the cluster, start MariaDB normally on node 1 to create the mariabackup user for SST operations. This user gets replicated to other nodes once the cluster forms.
sudo systemctl start mariadb
Connect to MariaDB and create the SST user:
sudo mariadb
Run these SQL commands in the MariaDB shell:
CREATE USER 'mariabackup'@'localhost' IDENTIFIED BY 'SecureSST@Pass1';
GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR, REPLICATION CLIENT ON *.* TO 'mariabackup'@'localhost';
FLUSH PRIVILEGES;
EXIT;
Stop MariaDB again – we will bootstrap it as a Galera cluster next:
sudo systemctl stop mariadb
Step 7: Bootstrap the MariaDB Galera Cluster
Bootstrap the cluster from node 1. This initializes a new cluster with node 1 as the first member. Only run this on one node – never bootstrap on multiple nodes simultaneously.
On galera1:
sudo galera_new_cluster
Verify the cluster started with one node:
sudo mariadb -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
The cluster size should be 1:
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 1 |
+--------------------+-------+
Now start MariaDB on galera2:
sudo systemctl start mariadb
Then start MariaDB on galera3:
sudo systemctl start mariadb
Each node performs an SST (full data sync via mariabackup) when joining the cluster for the first time. Verify the cluster size from any node:
sudo mariadb -e "SHOW STATUS LIKE 'wsrep_%';" | grep -E 'cluster_size|cluster_status|connected|ready|local_state_comment'
All three nodes should be connected and synced:
wsrep_cluster_size 3
wsrep_cluster_status Primary
wsrep_connected ON
wsrep_local_state_comment Synced
wsrep_ready ON
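These four status values are exactly what a monitoring probe should check. The sketch below (the galera_ok name is ours) parses the tab-separated output of the tab-delimited client format and exits non-zero unless the node is a synced member of a Primary component of the expected size; wire it to a live node with something like sudo mariadb -N -e "SHOW STATUS LIKE 'wsrep_%';" | galera_ok 3:

```shell
# Read "variable<TAB>value" lines on stdin; succeed only if healthy.
galera_ok() {
  local want_size=$1
  awk -v want="$want_size" -F'\t' '
    $1 == "wsrep_cluster_size"        { size = $2 }
    $1 == "wsrep_cluster_status"      { status = $2 }
    $1 == "wsrep_ready"               { ready = $2 }
    $1 == "wsrep_local_state_comment" { state = $2 }
    END {
      ok = (size == want && status == "Primary" && ready == "ON" && state == "Synced")
      exit (ok ? 0 : 1)
    }'
}

# Demo with captured status lines:
printf 'wsrep_cluster_size\t3\nwsrep_cluster_status\tPrimary\nwsrep_ready\tON\nwsrep_local_state_comment\tSynced\n' \
  | galera_ok 3 && echo "cluster healthy"
```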
Enable MariaDB to start on boot on all three nodes. This is safe on node 1 as well – after a reboot it performs a normal start and rejoins the running cluster:
sudo systemctl enable mariadb
Step 8: Test Galera Cluster Replication
Confirm data replicates across all nodes. Create a test database on node 1:
sudo mariadb -e "CREATE DATABASE test_galera; USE test_galera; CREATE TABLE nodes (id INT AUTO_INCREMENT PRIMARY KEY, hostname VARCHAR(50), created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP); INSERT INTO nodes (hostname) VALUES ('galera1');"
Insert a row from node 2:
sudo mariadb -e "INSERT INTO test_galera.nodes (hostname) VALUES ('galera2');"
Query the table from node 3 to verify all data replicated:
sudo mariadb -e "SELECT * FROM test_galera.nodes;"
You should see rows from both node 1 and node 2:
+----+----------+---------------------+
| id | hostname | created_at |
+----+----------+---------------------+
| 1 | galera1 | 2026-03-21 10:15:32 |
| 4 | galera2 | 2026-03-21 10:15:45 |
+----+----------+---------------------+
The auto-increment IDs are not sequential – Galera offsets them across nodes to prevent conflicts. This is expected behavior. For more on MariaDB replication approaches, see our guide on configuring MariaDB replication on Ubuntu.
Step 9: Install and Configure ProxySQL
ProxySQL sits in front of the Galera cluster and routes queries to healthy nodes. Install it on the dedicated ProxySQL server (10.0.1.10). For full ProxySQL documentation, see the official site.
Download and install the ProxySQL package:
wget https://github.com/sysown/proxysql/releases/download/v2.7.3/proxysql_2.7.3-ubuntu24_amd64.deb
sudo dpkg -i proxysql_2.7.3-ubuntu24_amd64.deb
sudo apt install -f -y
Start and enable ProxySQL:
sudo systemctl start proxysql
sudo systemctl enable proxysql
Verify it is running:
sudo systemctl status proxysql
The service should show active (running). ProxySQL listens on port 6032 for admin connections and port 6033 for client connections.
Create a Monitor User on Galera Nodes
ProxySQL needs a monitoring user to check node health. Create this user on any Galera node – it replicates to all nodes automatically:
sudo mariadb
Run these SQL commands:
CREATE USER 'proxysql_monitor'@'%' IDENTIFIED BY 'M0nitor@Pass!';
GRANT USAGE, REPLICATION CLIENT ON *.* TO 'proxysql_monitor'@'%';
FLUSH PRIVILEGES;
EXIT;
Create an Application User on Galera Nodes
Create a database user that applications will use to connect through ProxySQL. Run this on any Galera node:
sudo mariadb
Create the application user. The broad grant below keeps this lab simple; in production, restrict privileges to the specific databases the application uses:
CREATE USER 'app_user'@'%' IDENTIFIED BY 'AppUser@SecPass1';
GRANT ALL PRIVILEGES ON *.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
EXIT;
Configure ProxySQL Backend Servers and Users
Connect to the ProxySQL admin interface. The default admin credentials are admin/admin:
mysql -u admin -padmin -h 127.0.0.1 -P 6032 --prompt='ProxySQL Admin> '
Add the three Galera nodes as backend servers. Hostgroup 10 is for writes, hostgroup 20 is for reads:
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (10, '10.0.1.11', 3306, 1000);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (10, '10.0.1.12', 3306, 1000);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (10, '10.0.1.13', 3306, 1000);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (20, '10.0.1.11', 3306, 1000);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (20, '10.0.1.12', 3306, 1000);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (20, '10.0.1.13', 3306, 1000);
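Since the six statements differ only in hostgroup and IP, they can be generated instead of typed – a hypothetical helper sketch that also makes adding a fourth node later a one-liner:

```shell
# Emit mysql_servers INSERTs for one hostgroup and any list of node IPs.
gen_server_sql() {
  local hg=$1; shift
  local ip
  for ip in "$@"; do
    echo "INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES ($hg, '$ip', 3306, 1000);"
  done
}

gen_server_sql 10 10.0.1.11 10.0.1.12 10.0.1.13
gen_server_sql 20 10.0.1.11 10.0.1.12 10.0.1.13
# In real use, pipe into the admin interface:
#   gen_server_sql 10 ... | mysql -u admin -padmin -h 127.0.0.1 -P 6032
```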
Configure the monitoring user credentials:
UPDATE global_variables SET variable_value='proxysql_monitor' WHERE variable_name='mysql-monitor_username';
UPDATE global_variables SET variable_value='M0nitor@Pass!' WHERE variable_name='mysql-monitor_password';
UPDATE global_variables SET variable_value='2000' WHERE variable_name='mysql-monitor_connect_interval';
UPDATE global_variables SET variable_value='2000' WHERE variable_name='mysql-monitor_ping_interval';
UPDATE global_variables SET variable_value='2000' WHERE variable_name='mysql-monitor_read_only_interval';
Add the application user to ProxySQL. The default hostgroup (10) is the write group:
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('app_user', 'AppUser@SecPass1', 10);
Add query rules for read/write splitting. SELECT ... FOR UPDATE takes row locks, so the first rule pins it to the write hostgroup (10); the second rule routes all other SELECT queries to the read hostgroup (20). Statements matching neither rule go to the user's default hostgroup (10):
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply) VALUES (1, 1, '^SELECT.*FOR UPDATE$', 10, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply) VALUES (2, 1, '^SELECT', 20, 1);
Configure Galera-specific health checks. ProxySQL uses the wsrep_local_state variable to determine node health:
INSERT INTO mysql_galera_hostgroups (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup, active, max_writers, writer_is_also_reader, max_transactions_behind) VALUES (10, 30, 20, 40, 1, 1, 1, 100);
This configures ProxySQL to maintain one active writer (hostgroup 10), use backup writers (hostgroup 30), distribute reads across all nodes (hostgroup 20), and move unhealthy nodes to the offline group (hostgroup 40).
Apply all changes by loading them to runtime and saving to disk:
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL USERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
LOAD MYSQL VARIABLES TO RUNTIME;
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL USERS TO DISK;
SAVE MYSQL QUERY RULES TO DISK;
SAVE MYSQL VARIABLES TO DISK;
SAVE ADMIN VARIABLES TO DISK;
EXIT;
Step 10: Verify ProxySQL Configuration
Connect to the ProxySQL admin interface and check that all backend servers are healthy:
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;"
All servers should show ONLINE status:
+--------------+-----------+------+--------+
| hostgroup_id | hostname | port | status |
+--------------+-----------+------+--------+
| 10 | 10.0.1.11 | 3306 | ONLINE |
| 20 | 10.0.1.11 | 3306 | ONLINE |
| 20 | 10.0.1.12 | 3306 | ONLINE |
| 20 | 10.0.1.13 | 3306 | ONLINE |
| 30 | 10.0.1.12 | 3306 | ONLINE |
| 30 | 10.0.1.13 | 3306 | ONLINE |
+--------------+-----------+------+--------+
ProxySQL automatically selected one writer (10.0.1.11 in hostgroup 10) and placed the other two in the backup writer group (30). All three nodes serve reads (hostgroup 20).
Check the monitor connection log for any errors:
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "SELECT hostname, port, connect_error FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;"
The connect_error column should be NULL for all entries, confirming that ProxySQL can reach all Galera nodes.
Step 11: Test Application Connectivity Through ProxySQL
Install the MariaDB client on the ProxySQL server:
sudo apt install -y mariadb-client
Connect through ProxySQL on port 6033 using the application user:
mysql -u app_user -p'AppUser@SecPass1' -h 127.0.0.1 -P 6033 -e "SELECT @@hostname, @@port;"
The output shows which backend node served the query:
+------------+--------+
| @@hostname | @@port |
+------------+--------+
| galera1 | 3306 |
+------------+--------+
Test that you can see the replicated test database:
mysql -u app_user -p'AppUser@SecPass1' -h 127.0.0.1 -P 6033 -e "SELECT * FROM test_galera.nodes;"
The output confirms data is accessible through ProxySQL from the Galera cluster.
Step 12: Test Galera Cluster Failover
Simulate a node failure to confirm the cluster and ProxySQL handle it correctly. Stop MariaDB on galera1 (the current writer):
sudo systemctl stop mariadb
Check the cluster size from galera2 or galera3:
sudo mariadb -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
The cluster size should drop to 2:
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 2 |
+--------------------+-------+
Check ProxySQL to verify it detected the failure and promoted a new writer:
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;"
ProxySQL should show 10.0.1.11 as SHUNNED or moved to the offline hostgroup (40), with another node promoted to the writer hostgroup (10). Application queries continue without interruption.
Bring galera1 back online:
sudo systemctl start mariadb
Verify the node rejoined the cluster:
sudo mariadb -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
The cluster size should return to 3. ProxySQL automatically detects the recovered node and adds it back to the appropriate hostgroups. To set up database monitoring for your Galera cluster, check our guide on monitoring MariaDB with Prometheus.
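When scripting failover drills, it helps to block until the cluster reaches the size you expect instead of polling by hand. The sketch below injects the status command so it can be dry-run; against a live node you would pass something like wait_for_cluster_size 3 "sudo mariadb -N -e \"SHOW STATUS LIKE 'wsrep_cluster_size';\" | cut -f2" (the function name and interface are ours):

```shell
# Poll a status command until it reports the wanted cluster size.
wait_for_cluster_size() {
  local want=$1 cmd=$2 tries=${3:-30}
  local i
  for i in $(seq "$tries"); do
    [ "$(eval "$cmd")" = "$want" ] && return 0
    sleep 1
  done
  return 1
}

# Dry-run demo with a stub status command:
wait_for_cluster_size 3 "echo 3" 1 && echo "cluster at expected size"
```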
Step 13: Configure ProxySQL Firewall
Open the required ports on the ProxySQL server:
sudo ufw allow 6033/tcp comment 'ProxySQL client connections'
sudo ufw allow 6032/tcp comment 'ProxySQL admin interface'
sudo ufw reload
For production deployments, restrict port 6032 to a trusted management network – here we use the lab subnet 10.0.1.0/24:
sudo ufw delete allow 6032/tcp
sudo ufw allow from 10.0.1.0/24 to any port 6032 proto tcp comment 'ProxySQL admin - management network only'
sudo ufw reload
Step 14: Secure ProxySQL Admin Interface
Change the default ProxySQL admin password. Connect to the admin interface:
mysql -u admin -padmin -h 127.0.0.1 -P 6032
Update the admin credentials:
UPDATE global_variables SET variable_value='admin:NewAdminP@ss1' WHERE variable_name='admin-admin_credentials';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
EXIT;
Verify you can connect with the new password:
mysql -u admin -p'NewAdminP@ss1' -h 127.0.0.1 -P 6032 -e "SELECT @@admin-admin_credentials;"
Handling Full Cluster Restart
If all three nodes go down (power outage, maintenance), you need to identify the most advanced node and bootstrap from it. Check the grastate.dat file on each node:
cat /var/lib/mysql/grastate.dat
Look for the node with the highest seqno value – this node has the most recent data. After an unclean shutdown seqno may read -1; in that case recover the position first with mariadbd --wsrep-recover (run as the mysql user) before comparing nodes:
# GALERA saved state
version: 2.1
uuid: d3a87c4e-xxxx-xxxx-xxxx-xxxxxxxxxxxx
seqno: 15842
safe_to_bootstrap: 0
On the node with the highest seqno, edit grastate.dat and set safe_to_bootstrap: 1, then bootstrap from that node:
sudo sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
sudo galera_new_cluster
Then start MariaDB on the remaining nodes normally with sudo systemctl start mariadb. They rejoin the cluster via IST if the missing transactions are still in the gcache, or via SST if a full resync is needed. For an alternative high-availability approach using HAProxy instead of ProxySQL, see our guide on Galera Cluster HA with HAProxy.
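Comparing seqno values across three files by eye is error-prone. Given one grastate.dat copied from each node, this sketch (function name is ours) prints the file with the highest seqno; remember that -1 means an unclean shutdown whose position must be recovered first:

```shell
# Print the grastate.dat path holding the highest seqno.
most_advanced() {
  local f
  for f in "$@"; do
    printf '%s\t%s\n' "$(awk '/^seqno:/ {print $2}' "$f")" "$f"
  done | sort -n | tail -1 | cut -f2-
}

# Demo with synthetic state files:
a=$(mktemp); printf 'seqno:   15842\n' > "$a"
b=$(mktemp); printf 'seqno:   15840\n' > "$b"
most_advanced "$a" "$b"   # prints the path of the file with seqno 15842
```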
Conclusion
You now have a 3-node MariaDB Galera Cluster running on Ubuntu 24.04 with ProxySQL handling query routing and automatic failover. The cluster provides synchronous replication across all nodes with no data loss on node failure.
For production use, enable SSL encryption for both Galera replication traffic and client connections, set up automated backups using mariabackup with offsite storage, and add Prometheus and Grafana monitoring to track replication lag, query throughput, and node health.