
In today's article we cover how to set up a highly available database cluster with load balancing using MariaDB Galera Cluster and ProxySQL. High availability is a requirement for production environments running database systems, and load balancing is key to using the cluster's capacity efficiently.

If you are looking for a solution that provides both high availability and load balancing, this combination is a solid choice.

MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB with support for the XtraDB/InnoDB storage engines. Its top features include:

  • Active-active multi-master topology
  • Read and write to any cluster node
  • Automatic node joining
  • Automatic membership control; failed nodes drop from the cluster
  • True parallel replication, at the row level
  • Direct client connections

ProxySQL is a MySQL proxy server that acts as an intermediary between the Galera cluster and the applications that access it.

Setup Galera Cluster on Ubuntu 20.04

The steps below cover how to set up a Galera cluster on three Ubuntu 20.04 hosts.


We assume that you have access to the three hosts with sudo rights.

Here is the setup used in this guide:

Server    Hostname    IP Address
DB 1      node1       172.20.5.200
DB 2      node2       172.20.5.201
DB 3      node3       172.20.5.202

Step 1: Update servers

Update the servers and reboot

sudo apt update && sudo apt -y upgrade
sudo reboot

Step 2: Setup Hostnames

Configure hostname entries on each of the three servers so the nodes can resolve one another by name.

$ sudo vim /etc/hosts
172.20.5.200  node1.computingforgeeks.com node1
172.20.5.201  node2.computingforgeeks.com node2
172.20.5.202  node3.computingforgeeks.com node3
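The same three entries go on every node. A small shell sketch (hostnames and IPs from the example setup above) that builds the block once so it can be appended identically on each host:

```shell
#!/usr/bin/env bash
# Sketch: generate the Galera /etc/hosts entries from a node list.
# The hostnames/IPs mirror the example setup; adjust them to yours.
nodes=(
  "172.20.5.200 node1"
  "172.20.5.201 node2"
  "172.20.5.202 node3"
)
hosts_block=""
for entry in "${nodes[@]}"; do
  ip=${entry%% *}     # everything before the first space
  name=${entry##* }   # everything after the last space
  hosts_block+="${ip}  ${name}.computingforgeeks.com ${name}"$'\n'
done
printf '%s' "$hosts_block"
# On each node you would then append it to /etc/hosts, e.g.:
# printf '%s' "$hosts_block" | sudo tee -a /etc/hosts
```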

Step 3: Install MariaDB on all nodes

Install the latest version of MariaDB with the commands below:

sudo apt update
sudo apt -y install mariadb-server mariadb-client

Configure MariaDB for first use by running the command below, then run through the configuration appropriately.

$ sudo mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Test the connection to the database on each node by running the command below:

$ mysql -u root -p

You will be prompted for the password you set in the previous step.

Step 4: Configure Galera Cluster

The next step is to configure Galera on our three MariaDB hosts. Comment out the bind-address line in /etc/mysql/mariadb.conf.d/50-server.cnf, which binds the MariaDB service to 127.0.0.1:

$ sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf
#bind-address            = 127.0.0.1

Configure the First Node (node1)

Add the following content to the MariaDB configuration file. Remember to set "wsrep_node_address" to the hostname or IP address of your first host.

$ sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf
[galera]
wsrep_on                 = ON
wsrep_cluster_name       = "MariaDB Galera Cluster"
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address    = "gcomm://"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="node1"

Bootstrap the Galera cluster and restart MariaDB:

sudo galera_new_cluster
sudo systemctl restart mariadb 

Configure Galera nodes (node2 & node3)

Add the following configuration for node2 and node3 respectively:

Node2:

$ sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf
[galera]
wsrep_on                 = ON
wsrep_cluster_name       = "MariaDB Galera Cluster"
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address    = "gcomm://node1,node2,node3"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="node2"

Node3:

$ sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf
[galera]
wsrep_on                 = ON
wsrep_cluster_name       = "MariaDB Galera Cluster"
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address    = "gcomm://node1,node2,node3"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="node3"
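Since the three files differ only in wsrep_node_address (and node1's empty bootstrap address), you could generate them from a single template. A sketch using a hypothetical render_galera_cnf helper, not part of MariaDB itself:

```shell
#!/usr/bin/env bash
# Sketch: render the per-node [galera] section from one template.
render_galera_cnf() {
  local node="$1" cluster_addr="$2"
  cat <<EOF
[galera]
wsrep_on                 = ON
wsrep_cluster_name       = "MariaDB Galera Cluster"
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address    = "${cluster_addr}"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address             = 0.0.0.0
wsrep_node_address       = "${node}"
EOF
}
# node1 bootstraps with an empty member list:
render_galera_cnf node1 "gcomm://"
# node2 and node3 join with the full member list:
render_galera_cnf node2 "gcomm://node1,node2,node3"
```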

Restart MariaDB service on node2 and node3

sudo systemctl restart mariadb

Step 5: Validate Galera Settings

Login to any of the three nodes as the root user, then confirm that the cluster settings are OK.

$ mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 35
Server version: 10.5.9-MariaDB-1:10.5.9+maria~focal mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

Check status

MariaDB [(none)]> show status like 'wsrep_%'; 
+-------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name                 | Value                                                                                                                                          |
+-------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| wsrep_local_state_uuid        | c2e29a95-a81f-11eb-a1fc-237e9071c44f                                                                                                           |
| wsrep_protocol_version        | 10                                                                                                                                             |
| wsrep_last_committed          | 7                                                                                                                                              |
| wsrep_replicated              | 0                                                                                                                                              |
| wsrep_replicated_bytes        | 0                                                                                                                                              |
| wsrep_repl_keys               | 0                                                                                                                                              |
| wsrep_repl_keys_bytes         | 0                                                                                                                                              |
| wsrep_repl_data_bytes         | 0                                                                                                                                              |
| wsrep_repl_other_bytes        | 0                                                                                                                                              |
| wsrep_received                | 7                                                                                                                                              |
| wsrep_received_bytes          | 690                                                                                                                                            |
| wsrep_local_commits           | 0                                                                                                                                              |
| wsrep_local_cert_failures     | 0                                                                                                                                              |
| wsrep_local_replays           | 0                                                                                                                                              |
| wsrep_local_send_queue        | 0                                                                                                                                              |
| wsrep_local_send_queue_max    | 1                                                                                                                                              |
| wsrep_local_send_queue_min    | 0                                                                                                                                              |
| wsrep_local_send_queue_avg    | 0                                                                                                                                              |
| wsrep_local_recv_queue        | 0                                                                                                                                              |
| wsrep_local_recv_queue_max    | 2                                                                                                                                              |
| wsrep_local_recv_queue_min    | 0                                                                                                                                              |
| wsrep_local_recv_queue_avg    | 0.142857                                                                                                                                       |
| wsrep_local_cached_downto     | 1                                                                                                                                              |
| wsrep_flow_control_paused_ns  | 0                                                                                                                                              |
| wsrep_flow_control_paused     | 0                                                                                                                                              |
| wsrep_flow_control_sent       | 0                                                                                                                                              |
| wsrep_flow_control_recv       | 0                                                                                                                                              |
| wsrep_flow_control_active     | false                                                                                                                                          |
| wsrep_flow_control_requested  | false                                                                                                                                          |
| wsrep_cert_deps_distance      | 0                                                                                                                                              |
| wsrep_apply_oooe              | 0                                                                                                                                              |
| wsrep_apply_oool              | 0                                                                                                                                              |
| wsrep_apply_window            | 0                                                                                                                                              |
| wsrep_commit_oooe             | 0                                                                                                                                              |
| wsrep_commit_oool             | 0                                                                                                                                              |
| wsrep_commit_window           | 0                                                                                                                                              |
| wsrep_local_state             | 4                                                                                                                                              |
| wsrep_local_state_comment     | Synced                                                                                                                                         |
| wsrep_cert_index_size         | 0                                                                                                                                              |
| wsrep_causal_reads            | 0                                                                                                                                              |
| wsrep_cert_interval           | 0                                                                                                                                              |
| wsrep_open_transactions       | 0                                                                                                                                              |
| wsrep_open_connections        | 0                                                                                                                                              |
| wsrep_incoming_addresses      | AUTO,AUTO,AUTO                                                                                                                                 |
| wsrep_cluster_weight          | 3                                                                                                                                              |
| wsrep_desync_count            | 0                                                                                                                                              |
| wsrep_evs_delayed             |                                                                                                                                                |
| wsrep_evs_evict_list          |                                                                                                                                                |
| wsrep_evs_repl_latency        | 0/0/0/0/0                                                                                                                                      |
| wsrep_evs_state               | OPERATIONAL                                                                                                                                    |
| wsrep_gcomm_uuid              | e4af838d-a824-11eb-95d0-72f816b84c68                                                                                                           |
| wsrep_gmcast_segment          | 0                                                                                                                                              |
| wsrep_applier_thread_count    | 1                                                                                                                                              |
| wsrep_cluster_capabilities    |                                                                                                                                                |
| wsrep_cluster_conf_id         | 3                                                                                                                                              |
| wsrep_cluster_size            | 3                                                                                                                                              |
| wsrep_cluster_state_uuid      | c2e29a95-a81f-11eb-a1fc-237e9071c44f                                                                                                           |
| wsrep_cluster_status          | Primary                                                                                                                                        |
| wsrep_connected               | ON                                                                                                                                             |
| wsrep_local_bf_aborts         | 0                                                                                                                                              |
| wsrep_local_index             | 2                                                                                                                                              |
| wsrep_provider_capabilities   | :MULTI_MASTER:CERTIFICATION:PARALLEL_APPLYING:TRX_REPLAY:ISOLATION:PAUSE:CAUSAL_READS:INCREMENTAL_WRITESET:UNORDERED:PREORDERED:STREAMING:NBO: |
| wsrep_provider_name           | Galera                                                                                                                                         |
| wsrep_provider_vendor         | Codership Oy <[email protected]>                                                                                                              |
| wsrep_provider_version        | 4.7(ree4f10fc)                                                                                                                                 |
| wsrep_ready                   | ON                                                                                                                                             |
| wsrep_rollbacker_thread_count | 1                                                                                                                                              |
| wsrep_thread_count            | 2                                                                                                                                              |
+-------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
68 rows in set (0.002 sec)

MariaDB [(none)]> 

Confirm the cluster size of 3 in the output:

wsrep_cluster_size    3
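You can also script this check from the shell. With a live node you would feed the awk below from `mysql -u root -p -Nse "SHOW STATUS LIKE 'wsrep_cluster_size'"`, which prints a tab-separated name/value pair; since that needs a running cluster, sample_output stands in for the query result here:

```shell
#!/usr/bin/env bash
# Sketch: extract wsrep_cluster_size from SHOW STATUS output.
# sample_output mimics the tab-separated result of:
#   mysql -u root -p -Nse "SHOW STATUS LIKE 'wsrep_cluster_size'"
sample_output=$'wsrep_cluster_size\t3'
size=$(printf '%s\n' "$sample_output" | awk '{print $2}')
if [ "$size" -eq 3 ]; then
  echo "cluster healthy: $size nodes"
else
  echo "unexpected cluster size: $size" >&2
fi
```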

We can create a test database on any of the nodes and check its availability on the other nodes.

root@node1:~# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 49
Server version: 10.5.9-MariaDB-1:10.5.9+maria~focal mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database test1;
Query OK, 1 row affected (0.003 sec)

# node2 and node3

root@node2:~# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 45
Server version: 10.5.9-MariaDB-1:10.5.9+maria~focal mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test1              |
+--------------------+
4 rows in set (0.001 sec)



root@node3:~# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 46
Server version: 10.5.9-MariaDB-1:10.5.9+maria~focal mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test1              |
+--------------------+
4 rows in set (0.001 sec)

This confirms that the database created on node1 is replicated across the cluster.

Step 6: Install ProxySQL server

With a working Galera cluster, we now need to set up a ProxySQL server that will distribute traffic equally across the three nodes. ProxySQL can run on the application server itself or on an independent host. This article covers setting it up on an independent Debian/CentOS host with the steps below:

Debian / Ubuntu

Add ProxySQL repository:

sudo apt install -y lsb-release 
wget -O - 'https://repo.proxysql.com/ProxySQL/repo_pub_key' | sudo apt-key add - 
echo "deb https://repo.proxysql.com/ProxySQL/proxysql-2.1.x/$(lsb_release -sc)/ ./" | sudo tee /etc/apt/sources.list.d/proxysql.list

Install ProxySQL:

sudo apt update
sudo apt install proxysql mysql-client

CentOS/RedHat

Add ProxySQL repo

sudo tee /etc/yum.repos.d/proxysql.repo<<EOF
[proxysql_repo]
name= ProxySQL YUM repository
baseurl=https://repo.proxysql.com/ProxySQL/proxysql-2.1.x/centos/\$releasever
gpgcheck=1
gpgkey=https://repo.proxysql.com/ProxySQL/repo_pub_key
EOF

Install ProxySQL

sudo yum install -y proxysql mysql

Step 7: Configure ProxySQL on Ubuntu 20.04

After a successful installation, start and enable the service on your host.

sudo systemctl enable --now proxysql

The next step is to configure ProxySQL through its admin interface. The admin interface lets you change and persist configuration without restarting the proxy; this is achieved through SQL queries against the admin database.

To connect to ProxySQL admin interface, we need a mysql-client. The admin interface runs locally on port 6032 and the default username/password is admin/admin.

$ mysql -u admin -padmin -h 127.0.0.1 -P6032 --prompt='Admin> '
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.30 (ProxySQL Admin Module)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Admin> 

Change the default password for security reasons:

UPDATE global_variables SET variable_value='admin:StrongAdminPassw0rd' WHERE variable_name='admin-admin_credentials';

Remember to replace "StrongAdminPassw0rd" with a strong password of your choice.

The ProxySQL configuration system consists of three layers:

  1. Memory – altered when making modifications on the command line
  2. Disk – used to persist configuration across restarts
  3. Runtime – the effective configuration ProxySQL runs with

This means the query above has only been written to memory. To make it effective and persistent, we need to load the configuration to runtime and then save it to disk.

To do that, run the queries below:

LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
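The same memory → runtime → disk pattern applies to every configuration class used later in this guide, each with its own LOAD/SAVE statement pair:

```sql
-- Each class of settings has its own LOAD ... TO RUNTIME / SAVE ... TO DISK pair:
LOAD ADMIN VARIABLES TO RUNTIME;  SAVE ADMIN VARIABLES TO DISK;  -- admin-* settings
LOAD MYSQL VARIABLES TO RUNTIME;  SAVE MYSQL VARIABLES TO DISK;  -- mysql-* settings
LOAD MYSQL SERVERS TO RUNTIME;    SAVE MYSQL SERVERS TO DISK;    -- mysql_servers and hostgroups
LOAD MYSQL USERS TO RUNTIME;      SAVE MYSQL USERS TO DISK;      -- mysql_users
```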

Step 8: Configure Monitoring in Galera cluster

ProxySQL needs to communicate with the MariaDB nodes in the Galera cluster to know their health status. This means that ProxySQL has to connect to the nodes through a dedicated user.

We will create the user on one of the MariaDB nodes; it will be replicated automatically across the cluster since the cluster is already up and running.

MariaDB [(none)]> CREATE USER 'monitor'@'%' IDENTIFIED BY 'MonitorP@ssw0rd';
MariaDB [(none)]> FLUSH PRIVILEGES;

Modify the password to a password of your preference.

Step 9: Configure Monitoring In ProxySQL

Configure ProxySQL admin to constantly monitor the backend nodes.

Add the user credentials that we configured in the step above. Remember to modify the value for password to fit whatever you have used in the previous step.

ProxySQL Admin> UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';

ProxySQL Admin> UPDATE global_variables SET variable_value='MonitorP@ssw0rd' WHERE variable_name='mysql-monitor_password';

Add the following monitoring parameters for intervals:

ProxySQL Admin> UPDATE global_variables SET variable_value='2000' WHERE variable_name IN ('mysql-monitor_connect_interval','mysql-monitor_ping_interval','mysql-monitor_read_only_interval');

Confirm the variables we just configured in the above step:

Admin>  SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-monitor_%';
+--------------------------------------------------------------+----------------+
| variable_name                                                | variable_value |
+--------------------------------------------------------------+----------------+
| mysql-monitor_enabled                                        | true           |
| mysql-monitor_connect_timeout                                | 600            |
| mysql-monitor_ping_max_failures                              | 3              |
| mysql-monitor_ping_timeout                                   | 1000           |
| mysql-monitor_read_only_max_timeout_count                    | 3              |
| mysql-monitor_replication_lag_interval                       | 10000          |
| mysql-monitor_replication_lag_timeout                        | 1000           |
| mysql-monitor_replication_lag_count                          | 1              |
| mysql-monitor_groupreplication_healthcheck_interval          | 5000           |
| mysql-monitor_groupreplication_healthcheck_timeout           | 800            |
| mysql-monitor_groupreplication_healthcheck_max_timeout_count | 3              |
| mysql-monitor_groupreplication_max_transactions_behind_count | 3              |
| mysql-monitor_galera_healthcheck_interval                    | 5000           |
| mysql-monitor_galera_healthcheck_timeout                     | 800            |
| mysql-monitor_galera_healthcheck_max_timeout_count           | 3              |
| mysql-monitor_replication_lag_use_percona_heartbeat          |                |
| mysql-monitor_query_interval                                 | 60000          |
| mysql-monitor_query_timeout                                  | 100            |
| mysql-monitor_slave_lag_when_null                            | 60             |
| mysql-monitor_threads_min                                    | 8              |
| mysql-monitor_threads_max                                    | 128            |
| mysql-monitor_threads_queue_maxsize                          | 128            |
| mysql-monitor_wait_timeout                                   | true           |
| mysql-monitor_writer_is_also_reader                          | true           |
| mysql-monitor_username                                       | monitor        |
| mysql-monitor_password                                       | MonitorP@ssw0rd |
| mysql-monitor_history                                        | 600000         |
| mysql-monitor_connect_interval                               | 2000           |
| mysql-monitor_ping_interval                                  | 2000           |
| mysql-monitor_read_only_interval                             | 2000           |
| mysql-monitor_read_only_timeout                              | 500            |
+--------------------------------------------------------------+----------------+
31 rows in set (0.00 sec)

Save changes to disk:

Admin> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

Admin> SAVE MYSQL VARIABLES TO DISK;
Query OK, 140 rows affected (0.01 sec)

Step 10: Add Backend nodes

The next step is to add the three MariaDB nodes that make up our Galera cluster. ProxySQL uses hostgroups to categorize the backend nodes. A hostgroup is a set of nodes identified by a positive integer, e.g. 1 or 2. Hostgroups let ProxySQL route queries to different sets of hosts using query rules.

ProxySQL has the following logical host groups:

  1. Writers – MySQL nodes that accept queries that write/change data (primary nodes).
  2. Readers – nodes that only accept read queries (replica nodes).

We will assign hostgroup ID 1 to writers and 2 to readers. Note that writers are also readers by default.

Configure the table mysql_replication_hostgroup in the main database and specify the reader and writer hostgroups.

You can inspect the table definition with SHOW CREATE TABLE main.mysql_replication_hostgroups\G, then insert the hostgroup pair:

Admin> INSERT INTO main.mysql_replication_hostgroups (writer_hostgroup,reader_hostgroup,comment) VALUES (1,2,'galera_cluster');

Add the Galera cluster nodes:

INSERT INTO main.mysql_servers(hostgroup_id,hostname,port) VALUES (1,'172.20.5.200',3306);
INSERT INTO main.mysql_servers(hostgroup_id,hostname,port) VALUES (1,'172.20.5.201',3306);
INSERT INTO main.mysql_servers(hostgroup_id,hostname,port) VALUES (1,'172.20.5.202',3306);

Save the changes. Note that server definitions are loaded and saved with the MYSQL SERVERS statements, not MYSQL VARIABLES:

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
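With all three nodes in the writer hostgroup, every query goes to hostgroup 1. If you later place replicas in the reader hostgroup, ProxySQL query rules can split the traffic. A minimal sketch (the rule IDs 100 and 200 are arbitrary choices, not from this guide):

```sql
-- Route SELECT ... FOR UPDATE to the writers (hostgroup 1), plain SELECTs
-- to the readers (hostgroup 2); everything else falls through to the
-- user's default hostgroup.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (100, 1, '^SELECT.*FOR UPDATE', 1, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (200, 1, '^SELECT', 2, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```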

Confirm that the servers are reachable:

Admin> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 3;
+--------------+------+------------------+-------------------------+---------------+
| hostname     | port | time_start_us    | connect_success_time_us | connect_error |
+--------------+------+------------------+-------------------------+---------------+
| 172.20.5.201 | 3306 | 1619703478153182 | 1062                    | NULL          |
| 172.20.5.202 | 3306 | 1619703478130560 | 923                     | NULL          |
| 172.20.5.200 | 3306 | 1619703478108016 | 984                     | NULL          |
+--------------+------+------------------+-------------------------+---------------+
3 rows in set (0.00 sec)

Admin> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 3;
+--------------+------+------------------+----------------------+------------+
| hostname     | port | time_start_us    | ping_success_time_us | ping_error |
+--------------+------+------------------+----------------------+------------+
| 172.20.5.200 | 3306 | 1619703506146573 | 358                  | NULL       |
| 172.20.5.202 | 3306 | 1619703506123187 | 431                  | NULL       |
| 172.20.5.200 | 3306 | 1619703504166074 | 253                  | NULL       |
+--------------+------+------------------+----------------------+------------+
3 rows in set (0.00 sec)


Step 11: Create MySQL users

Next, create the MySQL users that will connect to the cluster through the ProxySQL instance. This is done in two places: on the Galera cluster itself, and in the ProxySQL admin interface.

Create remote user on Galera cluster

On any one of the Galera cluster nodes, create the MySQL user that clients will use to connect. Since Galera replicates writes, the user will exist on all nodes.

MariaDB [(none)]> create user 'testuser'@'%' identified by 'testpassword';

Assign the necessary privileges to the user, e.g. access to a specific database.

MariaDB [(none)]> grant all privileges on testdb.* to 'testuser'@'%' with grant option;
MariaDB [(none)]> flush privileges;

Create remote user on ProxySQL Admin

This is done by adding entries in the mysql_users table in the main database.

Admin> SHOW CREATE TABLE mysql_users\G
*************************** 1. row ***************************
       table: mysql_users
Create Table: CREATE TABLE mysql_users (
    username VARCHAR NOT NULL,
    password VARCHAR,
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    use_ssl INT CHECK (use_ssl IN (0,1)) NOT NULL DEFAULT 0,
    default_hostgroup INT NOT NULL DEFAULT 0,
    default_schema VARCHAR,
    schema_locked INT CHECK (schema_locked IN (0,1)) NOT NULL DEFAULT 0,
    transaction_persistent INT CHECK (transaction_persistent IN (0,1)) NOT NULL DEFAULT 1,
    fast_forward INT CHECK (fast_forward IN (0,1)) NOT NULL DEFAULT 0,
    backend INT CHECK (backend IN (0,1)) NOT NULL DEFAULT 1,
    frontend INT CHECK (frontend IN (0,1)) NOT NULL DEFAULT 1,
    max_connections INT CHECK (max_connections >=0) NOT NULL DEFAULT 10000,
    attributes VARCHAR CHECK (JSON_VALID(attributes) OR attributes = '') NOT NULL DEFAULT '',
    comment VARCHAR NOT NULL DEFAULT '',
    PRIMARY KEY (username, backend),
    UNIQUE (username, frontend))
1 row in set (0.00 sec)

The table is empty by default; users are added by inserting rows into it. At a minimum, specify the username, password, and default hostgroup.

Admin> INSERT INTO mysql_users(username,password,default_hostgroup) VALUES ('testuser','testpassword',1);
SELECT * FROM mysql_users; 

Save changes:

LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

Step 12: Test client connection

ProxySQL listens for client connections on port 6033. We can now connect through the proxy using the user we created on both the Galera cluster and ProxySQL.

$ mysql -utestuser -h 127.0.0.1 -P6033 -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 17
Server version: 5.5.30 (ProxySQL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

We can now try running queries on the cluster.


mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| testdb             |
+--------------------+
2 rows in set (0.00 sec)

mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| node3      |
+------------+
1 row in set (0.00 sec)

mysql> 

We can see testdb, the database we granted the user rights to during the Galera setup. We can also confirm that the responses are being served by node3 of the Galera cluster.

Step 13: Simulate node failure

Finally, let's test what happens when a node goes offline. Shut down the MariaDB service on one of the nodes and check the status from the ProxySQL admin interface:
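For example, to take node3 offline (assuming MariaDB runs as a systemd service named mariadb, as it does on a default Ubuntu 20.04 install):

```shell
# Run on the node being taken offline (node3 in this example)
sudo systemctl stop mariadb
```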

Admin>  SELECT hostgroup,srv_host,status FROM stats.stats_mysql_connection_pool;
+-----------+--------------+---------+
| hostgroup | srv_host     | status  |
+-----------+--------------+---------+
| 1         | 172.20.5.200 | ONLINE  |
| 1         | 172.20.5.201 | ONLINE  |
| 1         | 172.20.5.202 | SHUNNED |
+-----------+--------------+---------+
3 rows in set (0.00 sec)

The status for one of the nodes is SHUNNED, meaning ProxySQL has temporarily marked the host as unavailable and will not route traffic to it. Once the MariaDB service on the node is restarted, its status changes back to ONLINE in the ProxySQL management interface, and read/write queries can be sent to the node once again.
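How long a failed backend stays SHUNNED before ProxySQL retries it is controlled by the mysql-shun_recovery_time_sec variable (10 seconds by default). A sketch of tuning it from the admin interface:

```sql
-- Retry a SHUNNED backend after 5 seconds instead of the 10-second default.
UPDATE global_variables SET variable_value='5'
WHERE variable_name='mysql-shun_recovery_time_sec';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```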

Conclusion

In this article we installed and configured a MariaDB Galera Cluster on Ubuntu 20.04 servers and confirmed that we have a working cluster with load balancing and high availability provided by ProxySQL. I hope this helps you out. Cheers!
