Running a database cluster that handles failover without dropping connections is one of those problems that sounds simple until you actually try to solve it. I have been running MariaDB Galera clusters in production for over a decade, and the combination of Galera’s synchronous replication with ProxySQL’s intelligent query routing remains one of the most reliable setups for high-availability MySQL-compatible workloads. This guide walks through building a MariaDB Galera Cluster on Debian 13 (Trixie) or Debian 12 (Bookworm) with ProxySQL 3.x handling load balancing and read/write splitting.

Galera provides true multi-master replication where every node can accept writes. ProxySQL sits in front of the cluster and routes queries – reads go to all nodes for load distribution, writes go to a single primary to avoid certification conflicts. When a node fails, ProxySQL detects the failure and reroutes traffic automatically. No manual intervention, no application changes.

Prerequisites

You need four servers running Debian 13 (Trixie) or Debian 12 (Bookworm) with a minimum of 2 GB RAM and 2 CPU cores each. Three servers will run MariaDB as Galera nodes, and one server will run ProxySQL as the load balancer. All servers must be able to communicate with each other over the network.

Here is the layout used throughout this guide:

Role           Hostname    IP Address
-------------  ----------  -------------
Galera Node 1  galera1     192.168.10.11
Galera Node 2  galera2     192.168.10.12
Galera Node 3  galera3     192.168.10.13
ProxySQL LB    proxysql1   192.168.10.20

Before you start, make sure of the following on all four servers:

  • Root or sudo access is available
  • Hostnames resolve correctly (update /etc/hosts or use DNS)
  • Firewall allows TCP ports 3306, 4567, 4568, 4444 between the Galera nodes and port 6033/6032 on the ProxySQL node
  • Time is synchronized across all nodes using systemd-timesyncd or chrony
  • The system is updated with sudo apt update && sudo apt upgrade -y
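If you are not using DNS, an /etc/hosts block like the following on every server keeps hostname resolution consistent with the layout above (a sketch matching this guide's addresses):

```
192.168.10.11  galera1
192.168.10.12  galera2
192.168.10.13  galera3
192.168.10.20  proxysql1
```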

Open the required firewall ports on the Galera nodes if you are running ufw:

sudo ufw allow from 192.168.10.0/24 to any port 3306,4567,4568,4444 proto tcp

On the ProxySQL node, allow the ProxySQL ports:

sudo ufw allow from 192.168.10.0/24 to any port 6032,6033 proto tcp

Step 1: Install MariaDB 11.x on All Three Galera Nodes

The default MariaDB packages in Debian repositories tend to lag behind. For Galera cluster deployments, I always recommend using the official MariaDB repository to get the latest 11.x release with all current bug fixes and performance improvements. Run these steps on all three Galera nodes (galera1, galera2, galera3).

First, install the prerequisite packages needed to add the repository:

sudo apt install -y apt-transport-https curl software-properties-common gnupg lsb-release

Import the MariaDB signing key and add the repository. For Debian 12 (Bookworm):

curl -fsSL https://mariadb.org/mariadb_release_signing_key.pgp | sudo gpg --dearmor -o /usr/share/keyrings/mariadb-keyring.gpg

Create the repository file. The $(lsb_release -cs) substitution picks up your codename automatically (bookworm on Debian 12, trixie on Debian 13):

echo "deb [signed-by=/usr/share/keyrings/mariadb-keyring.gpg] https://dlm.mariadb.com/repo/mariadb-server/11.4/repo/debian $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/mariadb.list

Update the package index:

sudo apt update

Now install the MariaDB server, the client, and the Galera provider library:

sudo apt install -y mariadb-server mariadb-client galera-4

After installation completes, stop the MariaDB service on all nodes. We need to configure Galera before starting the cluster:

sudo systemctl stop mariadb

Run the security hardening script on each node to set the root password and remove test databases. This is a step I never skip, even in staging environments:

sudo mariadb-secure-installation

Verify the installed version to confirm you are running MariaDB 11.x:

sudo mariadbd --version

You should see output similar to mariadbd Ver 11.4.x-MariaDB confirming the correct version is installed. If you need help with the initial MariaDB installation on Debian, we have a dedicated guide that covers the process in more detail.

Step 2: Configure Galera Cluster on Each Node

Galera configuration lives in a dedicated file that MariaDB reads on startup. Create the configuration file on each node with the settings specific to that node. The three critical parameters are wsrep_cluster_address (the list of all nodes), wsrep_node_address (the current node’s IP), and wsrep_sst_method (how full state transfers happen).

On galera1 (192.168.10.11), create the Galera configuration file:

sudo tee /etc/mysql/mariadb.conf.d/99-galera.cnf <<'EOF'
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_force_primary_key=1
innodb_buffer_pool_size=512M

# Galera Provider
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster
wsrep_cluster_name="galera_production"
wsrep_cluster_address="gcomm://192.168.10.11,192.168.10.12,192.168.10.13"

# Galera Node
wsrep_node_address="192.168.10.11"
wsrep_node_name="galera1"

# SST Method
wsrep_sst_method=mariabackup
wsrep_sst_auth=backup_user:StrongBackupPass123

# Galera Sync
wsrep_slave_threads=4
wsrep_log_conflicts=ON
wsrep_certify_nonPK=ON
EOF

On galera2 (192.168.10.12), create the same file but change the node-specific values:

sudo tee /etc/mysql/mariadb.conf.d/99-galera.cnf <<'EOF'
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_force_primary_key=1
innodb_buffer_pool_size=512M

# Galera Provider
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster
wsrep_cluster_name="galera_production"
wsrep_cluster_address="gcomm://192.168.10.11,192.168.10.12,192.168.10.13"

# Galera Node
wsrep_node_address="192.168.10.12"
wsrep_node_name="galera2"

# SST Method
wsrep_sst_method=mariabackup
wsrep_sst_auth=backup_user:StrongBackupPass123

# Galera Sync
wsrep_slave_threads=4
wsrep_log_conflicts=ON
wsrep_certify_nonPK=ON
EOF

On galera3 (192.168.10.13), do the same with its node address and name:

sudo tee /etc/mysql/mariadb.conf.d/99-galera.cnf <<'EOF'
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_force_primary_key=1
innodb_buffer_pool_size=512M

# Galera Provider
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster
wsrep_cluster_name="galera_production"
wsrep_cluster_address="gcomm://192.168.10.11,192.168.10.12,192.168.10.13"

# Galera Node
wsrep_node_address="192.168.10.13"
wsrep_node_name="galera3"

# SST Method
wsrep_sst_method=mariabackup
wsrep_sst_auth=backup_user:StrongBackupPass123

# Galera Sync
wsrep_slave_threads=4
wsrep_log_conflicts=ON
wsrep_certify_nonPK=ON
EOF

A few notes on these settings from years of running this in production. Setting innodb_autoinc_lock_mode=2 is mandatory for Galera: it allows interleaved auto-increment generation, which prevents deadlocks during parallel apply. The wsrep_slave_threads=4 value (recent MariaDB releases also accept the newer name wsrep_applier_threads) works well for most workloads, but you can increase it if your write throughput is high. I use mariabackup for SST because it performs non-blocking backups: the donor node stays operational during a full state transfer, which matters a lot when a node rejoins the cluster.

Step 3: Create the Backup User for SST

Before bootstrapping the cluster, you need the backup user that mariabackup will use during state snapshot transfers. Start MariaDB temporarily in standalone mode on galera1:

sudo systemctl start mariadb

Connect to MariaDB and create the backup user:

sudo mariadb -u root -e "CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'StrongBackupPass123'; GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR, REPLICATION CLIENT ON *.* TO 'backup_user'@'localhost'; FLUSH PRIVILEGES;"

Stop MariaDB again so we can perform the proper cluster bootstrap:

sudo systemctl stop mariadb

Step 4: Bootstrap the First Galera Node

Bootstrapping is the process of starting a brand new Galera cluster. You only do this once, on the first node. Every other node joins the existing cluster. This is an important distinction – running the bootstrap command on a node that should join an existing cluster will create a split cluster, which is a painful situation to recover from.
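A related point for later full-cluster restarts (it does not affect the very first bootstrap): after a clean shutdown of all nodes, Galera records in each node's grastate.dat whether it is safe to bootstrap from. An illustrative grastate.dat (the uuid here is a placeholder) looks like this:

```
# GALERA saved state
version: 2.1
uuid:    f7a3c2d1-0000-0000-0000-000000000000
seqno:   -1
safe_to_bootstrap: 1
```

When restarting a fully stopped cluster, bootstrap only on the node showing safe_to_bootstrap: 1, or after a crash, on the node with the most advanced state.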

On galera1, bootstrap the cluster:

sudo galera_new_cluster

Verify the cluster has started and the cluster size is 1:

sudo mariadb -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size';"

Expected output:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+

Also confirm the node state is synced and the provider is loaded:

sudo mariadb -u root -e "SHOW STATUS WHERE Variable_name IN ('wsrep_ready','wsrep_connected','wsrep_local_state_comment');"

You should see wsrep_ready = ON, wsrep_connected = ON, and wsrep_local_state_comment = Synced. If any of those are wrong, check the MariaDB error log at /var/log/mysql/error.log before proceeding.

Step 5: Start Remaining Nodes and Verify Cluster Size

With the first node running, start MariaDB on galera2 and galera3. These nodes will automatically discover the cluster through the wsrep_cluster_address and perform an IST (Incremental State Transfer) or SST (State Snapshot Transfer) to synchronize.

On galera2:

sudo systemctl start mariadb

On galera3:

sudo systemctl start mariadb

Go back to any node and check the cluster size. It should now report 3:

sudo mariadb -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size';"

Expected output:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

To verify replication is working, create a test database on galera1 and confirm it appears on the other nodes:

sudo mariadb -u root -e "CREATE DATABASE galera_test; USE galera_test; CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, data VARCHAR(100)); INSERT INTO t1 (data) VALUES ('written on galera1');"

On galera2 or galera3, verify the data is there:

sudo mariadb -u root -e "SELECT * FROM galera_test.t1;"

If you see the row, your Galera cluster is fully operational. You now have a three-node synchronous multi-master cluster. For additional context on setting up Galera clusters on Ubuntu, our separate guide covers Ubuntu-specific differences.

Step 6: Install ProxySQL 3.x on the Load Balancer Node

ProxySQL 3.x is a significant upgrade over the 2.x line with better connection multiplexing, improved query cache, and more granular monitoring. Install it on the dedicated ProxySQL node (proxysql1, 192.168.10.20).

Add the ProxySQL repository and signing key:

curl -fsSL https://repo.proxysql.com/ProxySQL/proxysql-3.0.x/repo_pub_key | sudo gpg --dearmor -o /usr/share/keyrings/proxysql-keyring.gpg

Add the repository source; $(lsb_release -cs) again fills in your Debian codename:

echo "deb [signed-by=/usr/share/keyrings/proxysql-keyring.gpg] https://repo.proxysql.com/ProxySQL/proxysql-3.0.x/$(lsb_release -cs)/ ./" | sudo tee /etc/apt/sources.list.d/proxysql.list

Install ProxySQL and the MariaDB client (needed for connecting to the admin interface):

sudo apt update && sudo apt install -y proxysql mariadb-client

Start ProxySQL and enable it on boot:

sudo systemctl enable --now proxysql

Verify ProxySQL is running. It listens on port 6032 for the admin interface and port 6033 for MySQL traffic by default:

sudo ss -tlnp | grep proxysql

You should see both ports 6032 and 6033 in the output. The admin interface uses a default login of admin:admin which you should change in production, but we will use it for the initial configuration.
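Changing the admin credentials uses the same variables mechanism you will see again in Step 9. A sketch, assuming you substitute your own strong password:

```sql
-- On the ProxySQL admin console (port 6032)
UPDATE global_variables SET variable_value='admin:YourStrongAdminPass'
  WHERE variable_name='admin-admin_credentials';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
```

Reconnect with the new password afterwards; the old admin:admin session remains valid only until it disconnects.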

Step 7: Configure ProxySQL Backend Servers

ProxySQL manages its configuration through SQL commands on the admin interface. Connect to the ProxySQL admin console:

mariadb -u admin -padmin -h 127.0.0.1 -P 6032 --prompt='ProxySQL> '

Add all three Galera nodes as backend servers. We use two hostgroups – hostgroup 10 for writes and hostgroup 20 for reads. One node gets assigned to the writer hostgroup, and all three handle reads:

INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight, max_connections) VALUES
  (10, '192.168.10.11', 3306, 1000, 200),
  (20, '192.168.10.11', 3306, 1000, 200),
  (20, '192.168.10.12', 3306, 1000, 200),
  (20, '192.168.10.13', 3306, 1000, 200);

In this configuration, galera1 handles writes (hostgroup 10) while all three nodes share the read load (hostgroup 20). I typically designate a single writer to minimize certification conflicts: Galera supports multiple simultaneous writers, but in practice sending all writes to one node and distributing reads produces fewer deadlocks and more predictable performance. Keep in mind that this makes write availability depend on galera1; if it goes down, you must move another node into hostgroup 10, either manually or via ProxySQL's native Galera support.

Apply the backend server configuration:

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
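A refinement worth knowing: ProxySQL 2.x and 3.x have native Galera awareness through the mysql_galera_hostgroups table, which can promote a backup writer automatically when the designated writer fails. A minimal sketch using this guide's hostgroup numbers, with hypothetical hostgroups 30 and 40 chosen here for backup writers and offline nodes (it relies on the monitor user configured in Step 9):

```sql
INSERT INTO mysql_galera_hostgroups
  (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
   offline_hostgroup, active, max_writers, writer_is_also_reader,
   max_transactions_behind)
VALUES (10, 30, 20, 40, 1, 1, 0, 100);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```

With this in place, ProxySQL keeps at most max_writers nodes in hostgroup 10, parks spare writers in the backup writer hostgroup, and promotes one if the current writer disappears.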

Step 8: Configure ProxySQL Query Rules for Read/Write Splitting

Query rules tell ProxySQL how to route incoming SQL statements. We route SELECT statements to the reader hostgroup (20) and everything else to the writer hostgroup (10). Selects that use FOR UPDATE must go to the writer to maintain transactional consistency.

Add the query rules for read/write splitting:

INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply) VALUES
  (1, 1, '^SELECT .* FOR UPDATE$', 10, 1),
  (2, 1, '^SELECT', 20, 1);

Rule 1 catches SELECT … FOR UPDATE queries and sends them to the writer. Rule 2 catches all other SELECT queries and routes them to the readers. Any query that does not match these rules (INSERT, UPDATE, DELETE, DDL) falls through to the default hostgroup, which we set to 10 (writer).

Now create the application user that your application will use to connect through ProxySQL:

INSERT INTO mysql_users (username, password, default_hostgroup, max_connections) VALUES
  ('app_user', 'AppSecurePass456', 10, 500);

Apply all the changes:

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

You also need to create this same user on all three Galera nodes so that ProxySQL can authenticate against the backend. Run this on any Galera node (it will replicate to the others). The grant below uses ALL PRIVILEGES ON *.* for brevity; in production, scope it to your application's databases:

sudo mariadb -u root -e "CREATE USER 'app_user'@'192.168.10.%' IDENTIFIED BY 'AppSecurePass456'; GRANT ALL PRIVILEGES ON *.* TO 'app_user'@'192.168.10.%'; FLUSH PRIVILEGES;"

For more details on installing and configuring ProxySQL on Debian, our dedicated guide covers the full range of ProxySQL options.

Step 9: Create Monitoring User for ProxySQL Health Checks

ProxySQL needs a dedicated monitoring user to perform health checks against the backend servers. Without this, ProxySQL cannot detect when a node goes down or comes back up. This is the piece that makes automatic failover work.

On any Galera node, create the monitoring user:

sudo mariadb -u root -e "CREATE USER 'proxysql_monitor'@'192.168.10.%' IDENTIFIED BY 'MonitorPass789'; GRANT USAGE, REPLICATION CLIENT ON *.* TO 'proxysql_monitor'@'192.168.10.%'; FLUSH PRIVILEGES;"

Back in the ProxySQL admin console, configure the monitoring credentials:

UPDATE global_variables SET variable_value='proxysql_monitor' WHERE variable_name='mysql-monitor_username';
UPDATE global_variables SET variable_value='MonitorPass789' WHERE variable_name='mysql-monitor_password';
UPDATE global_variables SET variable_value='2000' WHERE variable_name='mysql-monitor_ping_interval';
UPDATE global_variables SET variable_value='1000' WHERE variable_name='mysql-monitor_ping_timeout';
UPDATE global_variables SET variable_value='2000' WHERE variable_name='mysql-monitor_read_only_interval';

Apply the monitoring settings:

LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;

Verify monitoring is working by checking the ping log:

SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;

All three nodes should show successful pings with a NULL value in the ping_error column. If you see authentication errors, double-check that the monitor user was created correctly on the Galera nodes.

Step 10: Test Failover

This is the part that actually matters. A cluster that cannot handle node failure gracefully is just a more complicated single server. We will simulate a node failure and confirm that ProxySQL continues routing traffic to the healthy nodes.

First, verify that connections through ProxySQL work. From any machine that can reach the ProxySQL node:

mariadb -u app_user -p'AppSecurePass456' -h 192.168.10.20 -P 6033 -e "SELECT @@hostname, @@wsrep_node_name;"

Now stop MariaDB on galera2 to simulate a failure:

sudo systemctl stop mariadb

Wait about 5 seconds for ProxySQL’s monitor to detect the failure, then run a few queries through ProxySQL:

mariadb -u app_user -p'AppSecurePass456' -h 192.168.10.20 -P 6033 -e "SELECT * FROM galera_test.t1; INSERT INTO galera_test.t1 (data) VALUES ('written during failover');"

Both the read and write should succeed without errors. Check the server status in ProxySQL to confirm galera2 is marked as offline:

mariadb -u admin -padmin -h 127.0.0.1 -P 6032 -e "SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers;"

You should see galera2 with a status of SHUNNED. Now bring galera2 back:

sudo systemctl start mariadb

After 10-15 seconds, check the ProxySQL server status again. Galera2 should be back to ONLINE. Verify the cluster size is back to 3:

sudo mariadb -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size';"

The data written during the failover should also be present on galera2, confirming that it synchronized upon rejoin. This automatic detection and recovery is why ProxySQL paired with Galera is such a solid combination for production workloads.

Step 11: Monitor with ProxySQL Admin Interface

ProxySQL gives you deep visibility into what is happening with your database traffic. The admin interface exposes several stats tables that I check regularly in production. Connect to the admin interface and explore the key monitoring views.

Check the connection pool status to see active connections per backend:

SELECT hostgroup, srv_host, status, ConnUsed, ConnFree, ConnOK, ConnERR, Queries FROM stats_mysql_connection_pool;

This tells you how many connections each backend is handling, whether any connections are failing, and the total query count routed to each server. It is the first place I look when diagnosing performance issues.

Review query rule hit counts to verify read/write splitting is working as expected:

SELECT rule_id, hits FROM stats_mysql_query_rules;

If rule 2 (SELECT routing) has zero hits, your application may not be sending queries through ProxySQL or the pattern matching is off. Check your application’s connection string points to the ProxySQL host on port 6033.

View the top queries by execution time to find slow queries:

SELECT hostgroup, digest_text, count_star, sum_time, min_time, max_time FROM stats_mysql_query_digest ORDER BY sum_time DESC LIMIT 10;

This digest view is genuinely useful for production debugging. It gives you aggregated query performance data without needing to enable the slow query log on the backends. I have caught numerous problematic queries through this view alone.

For ongoing Galera health monitoring, you should also check cluster status directly. Run this on any Galera node periodically or through a monitoring tool like Prometheus with Grafana:

sudo mariadb -u root -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_cluster_status','wsrep_ready','wsrep_connected','wsrep_local_recv_queue_avg','wsrep_flow_control_paused');"

Pay particular attention to wsrep_flow_control_paused – if this value is above 0.1, it means the cluster is spending significant time in flow control, which indicates one or more nodes cannot keep up with the write load. That is your signal to investigate slow nodes or increase wsrep_slave_threads.

Step 12: Troubleshooting Common Issues

Split Brain

Split brain happens when nodes lose communication and form separate clusters. With three nodes, Galera uses a quorum mechanism – a partition needs a majority (2 of 3 nodes) to continue operating. The minority partition enters a non-primary state and rejects queries. This is actually the correct behavior and prevents data divergence.

If you find a node in non-primary state, check the cluster status:

sudo mariadb -u root -e "SHOW STATUS LIKE 'wsrep_cluster_status';"

If it shows non-Primary, the node has lost quorum. First, fix the network issue causing the partition. Then, if the node cannot rejoin automatically, you can resync it by stopping MariaDB, removing the Galera cache, and restarting:

sudo systemctl stop mariadb && sudo rm /var/lib/mysql/galera.cache /var/lib/mysql/grastate.dat && sudo systemctl start mariadb

The node will perform a full SST from a donor node and rejoin the cluster. This is safe because it brings the node back to a known good state, but it means a full data copy which can take time on large datasets.
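If every node ends up non-primary (for example after a complete network outage), no partition has quorum and none will recover on its own. In that case you can re-form the primary component manually. Run this on exactly one node, the one with the most advanced state:

```sql
SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';
```

The other nodes should then rejoin that node automatically. Use this with care: bootstrapping more than one node this way creates exactly the split-brain divergence the quorum mechanism exists to prevent.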

SST Failures

When a node needs to rejoin and IST is not possible (the required write-sets are no longer in the gcache), Galera falls back to SST. If SST fails, the joining node will not start. Common causes:

  • Wrong SST credentials – Verify the wsrep_sst_auth user and password match what is configured in the galera.cnf on all nodes and that the user exists in MariaDB with the correct grants.
  • Disk space – The donor creates a backup that the joiner receives. Both nodes need enough free disk space. I recommend keeping at least 1.5x your data directory size free.
  • mariabackup not installed – Run which mariabackup to confirm it is present. If missing, install the mariadb-backup package.
  • Port 4444 blocked – SST uses TCP port 4444. Confirm the firewall allows this port between all Galera nodes.

Check the MariaDB error log for detailed SST failure messages:

sudo tail -100 /var/log/mysql/error.log | grep -i -E 'sst|wsrep'

ProxySQL Connection Issues

If your application cannot connect through ProxySQL, work through these checks in order:

Verify ProxySQL can reach the backends. Check the connection pool from the admin interface:

mariadb -u admin -padmin -h 127.0.0.1 -P 6032 -e "SELECT * FROM stats_mysql_connection_pool WHERE ConnERR > 0;"

If you see connection errors, the issue is between ProxySQL and the Galera nodes. Common causes are firewall rules, the MySQL user not being allowed from the ProxySQL IP, or the backend being down.

Check that the MySQL user in ProxySQL matches what exists on the backends:

mariadb -u admin -padmin -h 127.0.0.1 -P 6032 -e "SELECT username, default_hostgroup, active FROM mysql_users;"

Verify the user can connect directly to a backend from the ProxySQL node to rule out ProxySQL as the problem:

mariadb -u app_user -p'AppSecurePass456' -h 192.168.10.11 -P 3306 -e "SELECT 1;"

If direct connection works but ProxySQL connections fail, the issue is in the ProxySQL configuration. Review the ProxySQL log for details:

sudo tail -50 /var/lib/proxysql/proxysql.log

Galera Node Stuck in Joining State

A node that stays in “Joining” state for a long time is usually performing an SST. For large databases, this is normal – a 100 GB database can take 30 minutes or more to transfer. Monitor the progress by watching the data directory size on the joining node:

watch -n 5 'du -sh /var/lib/mysql/'

If the size is not growing, the SST has likely failed. Check the error log and the SST-specific log at /var/lib/mysql/innobackup.backup.log for details.

Production Recommendations

After running Galera clusters in production for many years, here are the settings and practices that have saved me from outages:

  • Use an odd number of nodes – Three is the minimum for quorum. Five gives you tolerance for two simultaneous failures. I rarely see a need for more than five in a single cluster.
  • Size the gcache appropriately – Set wsrep_provider_options="gcache.size=1G" or larger. A bigger gcache means nodes can be offline longer and still rejoin with IST instead of a full SST. On systems with frequent writes, I use 2-4 GB.
  • Monitor flow control – If wsrep_flow_control_paused goes above 0.0, investigate immediately. Flow control pauses the entire cluster waiting for slow nodes to catch up.
  • Avoid large transactions – Galera replicates at the commit point. A transaction that modifies 1 million rows will cause a large write-set that blocks certification on all nodes. Break large operations into batches of 1000-5000 rows.
  • Back up regularly – Replication is not a backup. Use mariabackup on one of the reader nodes for daily full backups without impacting write performance.
  • Change the ProxySQL admin password – The default admin:admin credentials must be changed before exposing the admin interface to any network.
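The batching advice above can be sketched in SQL. Assuming a hypothetical app_log table, a purge loop deletes in bounded chunks so each write-set stays small:

```sql
-- Under autocommit each statement is its own transaction, so every
-- replicated write-set covers at most 5000 rows.
-- Repeat until SELECT ROW_COUNT() returns 0.
DELETE FROM app_log
 WHERE created_at < NOW() - INTERVAL 90 DAY
 LIMIT 5000;
SELECT ROW_COUNT();
```

Drive the loop from your application or a small shell script, pausing briefly between batches so the cluster never enters flow control.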

For backup strategies that complement this cluster setup, see our guide on backing up and restoring MariaDB databases.

Conclusion

You now have a fully functional MariaDB Galera Cluster running on Debian 13/12 with ProxySQL 3.x handling intelligent query routing and automatic failover. The three Galera nodes provide synchronous multi-master replication with automatic conflict resolution, and ProxySQL distributes read queries across all nodes while directing writes to a single primary. When a node fails, ProxySQL detects the failure within seconds and reroutes traffic to the remaining healthy nodes. When the failed node comes back, it automatically synchronizes and rejoins the cluster.

This setup handles the majority of high-availability use cases I have encountered in production. For even higher availability, consider deploying two ProxySQL instances behind a virtual IP using keepalived, which eliminates the load balancer as a single point of failure. The combination of Galera’s battle-tested synchronous replication and ProxySQL’s flexible query routing gives you a database layer that your applications can depend on.
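For the keepalived option mentioned above, a minimal VRRP sketch for the primary ProxySQL node might look like the following. The interface name, router ID, password, and virtual IP (192.168.10.30) are assumptions you must adapt:

```
# /etc/keepalived/keepalived.conf on the primary ProxySQL node
vrrp_instance proxysql_vip {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Vip8pass
    }
    virtual_ipaddress {
        192.168.10.30/24
    }
}
```

The standby ProxySQL node uses state BACKUP and a lower priority; applications then connect to the virtual IP on port 6033 instead of a specific host.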
