GlusterFS is open-source software originally developed by Gluster Inc. and currently maintained by Red Hat. It provides object, block, and file storage interfaces, offering affordable and flexible storage for unstructured data, and is often used in data-intensive workloads such as cloud storage, media streaming, and CDNs. It can be deployed on-premises or in the cloud and supports numerous protocols, including NFS, SMB/CIFS, HTTP, and FTP.
Heketi manages the lifecycle of GlusterFS volumes through a RESTful management interface. This makes it easy to integrate GlusterFS with cloud services such as Kubernetes, OpenStack Manila, and OpenShift for dynamic volume provisioning. Heketi automatically determines the locations of bricks across the cluster, placing them and their replicas in different failure domains. It also supports any number of GlusterFS clusters, so network file storage is not limited to a single GlusterFS cluster. In short, Heketi spares the system administrator from manually managing bricks, disks, and trusted storage pools: it handles the hardware and allocates storage on demand.
Today, we will learn how to configure GlusterFS on Ubuntu 22.04 with Heketi.
Getting Started
For this guide, we will have an environment configured as shown:
- 3 Ubuntu 22.04 servers for GlusterFS
- Each server has 3 secondary 10 GB disks attached (see the check after this list)
- sudo access
- Heketi will be set up on one of the servers
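Before going further, confirm that the extra disks are visible on each node. This guide assumes they appear as /dev/sdb, /dev/sdc, and /dev/sdd; device names may differ on your systems:
lsblk -o NAME,SIZE,TYPE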
For those without a DNS server, name resolution can be configured in /etc/hosts on each server:
$ sudo vim /etc/hosts
192.168.200.90 gluster01.computingforgeeks.com gluster01
192.168.200.91 gluster02.computingforgeeks.com gluster02
192.168.200.92 gluster03.computingforgeeks.com gluster03
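To confirm that the names resolve as expected, you can query each entry with getent on every server:
for i in gluster01 gluster02 gluster03; do getent hosts $i; done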
1. Configure NTP time synchronization
Before we begin, it is important that the time on all three servers is synchronized. This can be done using the Network Time Protocol (NTP), here via the Chrony daemon.
To achieve that, ensure that Chrony has been installed:
sudo apt -y install chrony
Start and enable the service:
sudo systemctl enable --now chronyd
Set the timezone on all three servers (adjust Africa/Nairobi to your own timezone):
sudo timedatectl set-timezone Africa/Nairobi --adjust-system-clock
sudo timedatectl set-ntp yes
Verify the changes:
$ timedatectl
Local time: Wed 2023-06-14 14:44:50 EAT
Universal time: Wed 2023-06-14 11:44:50 UTC
RTC time: Wed 2023-06-14 11:44:50
Time zone: Africa/Nairobi (EAT, +0300)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
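To confirm that Chrony is actually tracking upstream NTP servers, you can also query the daemon directly:
chronyc sources
chronyc tracking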
2. Install GlusterFS packages
GlusterFS is available in the default Ubuntu 22.04 repositories, which makes the installation straightforward. To install GlusterFS on Ubuntu 22.04, execute the command below:
sudo apt install glusterfs-server
Once installed, ensure that the service has been started and enabled:
sudo systemctl enable --now glusterd
View the status of the service:
$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2023-06-14 14:46:22 EAT; 4s ago
Docs: man:glusterd(8)
Process: 14472 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEV>
Main PID: 14473 (glusterd)
Tasks: 9 (limit: 4617)
Memory: 11.0M
CPU: 1.481s
CGroup: /system.slice/glusterd.service
└─14473 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Jun 14 14:46:21 gluster01.computingforgeeks.com systemd[1]: Starting GlusterFS,
If you have a firewall enabled, allow the service through it:
sudo ufw allow 24007/tcp
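Note that 24007/tcp only covers the glusterd management daemon. Each brick listens on its own port starting at 49152/tcp, so also open a range wide enough for the bricks you plan to host on each node, for example:
sudo ufw allow 24008/tcp
sudo ufw allow 49152:49160/tcp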
Once the installation is complete on all three nodes, we will probe the other nodes in the cluster. From one node, say gluster01, execute the commands below:
# Perform on Node 01
sudo gluster peer probe gluster02
sudo gluster peer probe gluster03
View the status:
sudo gluster peer status
Each of the other peers should be reported with State: Peer in Cluster (Connected).
3. Install Heketi
As stated earlier, Heketi will be installed on only one node in the cluster. For this guide, we will install it on gluster01. Download the latest release of Heketi from the GitHub releases page.
You can also pull the binary with the command:
wget https://github.com/heketi/heketi/releases/download/v10.4.0/heketi-v10.4.0-release-10.linux.amd64.tar.gz
Extract the archive:
for i in heketi-*.tar.gz; do tar xvf "$i"; done
Copy the extracted binaries to your PATH:
sudo cp heketi/{heketi,heketi-cli} /usr/local/bin
Now verify the installation:
$ heketi --version
Heketi v10.4.0-release-10 (using go: go1.15.14)
$ heketi-cli --version
heketi-cli v10.4.0-release-10
4. Configure Heketi
There are several configurations we need to make before Heketi can be used. First, create a dedicated user for Heketi:
sudo groupadd --system heketi
sudo useradd -s /sbin/nologin --system -g heketi heketi
Then proceed and create configurations and data paths for Heketi:
sudo mkdir -p /var/lib/heketi /etc/heketi /var/log/heketi
sudo cp heketi/heketi.json /etc/heketi
Now we can modify the created config file as desired:
sudo vim /etc/heketi/heketi.json
In the file, there are several variables that can be modified, including:
##Set service port##
"port": "8080"
##Set admin and user secrets##
"_jwt": "Private keys for access",
"jwt": {
"_admin": "Admin has access to all APIs",
"admin": {
"key": "jvd7df8RN7QNeKV1"
},
"_user": "User only has access to /volumes endpoint",
"user": {
"key": "lMPgdZ8NtFNj6jgk"
}
},
##Configure glusterfs executor##
"_sshexec_comment": "SSH username and private key file information",
"sshexec": {
"keyfile": "/etc/heketi/heketi_key",
"user": "root",
"port": "22",
"fstab": "/etc/fstab",
......
},
##Database path settings##
"_db_comment": "Database file name",
"db": "/var/lib/heketi/heketi.db",
....
},
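The admin and user keys above are just examples; you can generate your own random secrets with openssl and paste them into the file:
openssl rand -base64 12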
Also in the file, disable the optional placeholder settings below by prefixing their keys with an underscore (_) so that Heketi ignores them, as shown:
"_sshexec_comment": "SSH username and private key file information",
....
"_xfs_sw": "Optional: Specify number of data disks in the underlying RAID device",
"_xfs_su": "Optional: Specifies a stripe unit or RAID chunk size.",
"_gluster_cli_timeout": "Optional: Timeout, in seconds, passed to the gluster cli invocations",
"_debug_umount_failures": "Optional: boolean to capture more details in case brick unmounting fails",
"_kubeexec_comment": "Kubernetes configuration",
"_xfs_sw": "Optional: Specify number of data disks in the underlying RAID device.",
"_xfs_su": "Optional: Specifies a stripe unit or RAID chunk size.",
"_gluster_cli_timeout": "Optional: Timeout, in seconds, passed to the gluster cli invocations",
"_debug_umount_failures": "Optional: boolean to capture more details in case brick unmounting fails",
.....
If you use an SSH user other than root, ensure that the user has passwordless sudo configured. Once the desired changes have been made, save the file and generate SSH keys for the configured user, for example root:
sudo -i
ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
chown heketi:heketi /etc/heketi/heketi_key*
If using root as the SSH user, ensure that root login is permitted on all three servers:
$ vim /etc/ssh/sshd_config
PermitRootLogin yes
Restart the service:
sudo systemctl restart sshd
Copy the public key created to all the other GlusterFS nodes using the command:
for i in gluster01 gluster02 gluster03; do
ssh-copy-id -i /etc/heketi/heketi_key.pub root@$i
done
Verify access to GlusterFS nodes from gluster01 with the private key:
root@gluster01:~# ssh -i /etc/heketi/heketi_key root@gluster02
Welcome to Ubuntu 22.04 LTS (GNU/Linux 5.19.0-41-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
460 updates can be applied immediately.
255 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable
Last login: Wed Jun 14 17:10:36 2023 from 192.168.200.90
root@gluster02:~# exit
That shows that the SSH keys are working as desired. Exit the session and proceed with the configurations below.
Create a service file for Heketi:
sudo vim /etc/systemd/system/heketi.service
Add these lines to the file:
[Unit]
Description=Heketi Server
[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.env
User=heketi
ExecStart=/usr/local/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Save the file and download an environment file:
sudo wget -O /etc/heketi/heketi.env https://raw.githubusercontent.com/heketi/heketi/master/extras/systemd/heketi.env
Assign all the required permissions for the Heketi user:
sudo chown -R heketi:heketi /var/lib/heketi /var/log/heketi /etc/heketi
Load all the Kernel modules required by Heketi:
for i in dm_snapshot dm_mirror dm_thin_pool; do
sudo modprobe $i
done
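The modules loaded above do not persist across reboots. To have systemd load them automatically at boot, list them in a modules-load.d snippet (the file name heketi.conf is arbitrary):
printf '%s\n' dm_snapshot dm_mirror dm_thin_pool | sudo tee /etc/modules-load.d/heketi.conf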
Reload the daemon and start the service:
sudo systemctl daemon-reload
sudo systemctl enable --now heketi
Verify if the service is up:
$ systemctl status heketi
● heketi.service - Heketi Server
Loaded: loaded (/etc/systemd/system/heketi.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2023-06-14 17:48:36 EAT; 4s ago
Main PID: 4545 (heketi)
Tasks: 7 (limit: 4617)
Memory: 5.7M
CPU: 16ms
CGroup: /system.slice/heketi.service
└─4545 /usr/local/bin/heketi --config=/etc/heketi/heketi.json
Jun 14 17:48:36 gluster01.computingforgeeks.com heketi[4545]: Heketi v10.4.0-release-10 (using go: go1.15.14)
Jun 14 17:48:36 gluster01.computingforgeeks.com heketi[4545]: [heketi] INFO 2023/06/14 17:48:36 Loaded mock executor
....
Jun 14 17:48:36 gluster01.computingforgeeks.com heketi[4545]: Listening on port 8080
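You can also confirm that the API is reachable; Heketi exposes a simple /hello endpoint for this:
$ curl http://localhost:8080/hello
Hello from Heketi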
5. Create the Heketi Topology file
For this guide, we will use Ansible to generate and maintain the Heketi topology file, since editing the JSON file manually is tedious.
To make this easier, install Ansible on your system:
sudo apt update
sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
Next, create a project folder:
mkdir -p ~/projects/ansible/roles/heketi/{tasks,templates,defaults}
Now create the template for the Topology file:
$ vim ~/projects/ansible/roles/heketi/templates/topology.json.j2
{
  "clusters": [
    {
      "nodes": [
        {% if gluster_servers is defined and gluster_servers is iterable %}
        {% for item in gluster_servers %}
        {
          "node": {
            "hostnames": {
              "manage": [
                "{{ item.servername }}"
              ],
              "storage": [
                "{{ item.serverip }}"
              ]
            },
            "zone": {{ item.zone }}
          },
          "devices": [
            "{{ item.disks | list | join("\",\"") }}"
          ]
        }{% if not loop.last %},{% endif %}
        {% endfor %}
        {% endif %}
      ]
    }
  ]
}
Save this file and create a variables file that matches the defined values:
vim ~/projects/ansible/roles/heketi/defaults/main.yml
Add the lines below, adjusting the values to match your environment:
---
# GlusterFS nodes
gluster_servers:
  - servername: gluster01
    serverip: 192.168.200.90
    zone: 1
    disks:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
  - servername: gluster02
    serverip: 192.168.200.91
    zone: 1
    disks:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
  - servername: gluster03
    serverip: 192.168.200.92
    zone: 1
    disks:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
Create an Ansible task:
vim ~/projects/ansible/roles/heketi/tasks/main.yml
In the file, provide the below content:
---
- name: Copy heketi topology file
  template:
    src: topology.json.j2
    dest: /etc/heketi/topology.json
- name: Set proper file ownership
  file:
    path: /etc/heketi/topology.json
    owner: heketi
    group: heketi
Create the Ansible playbook file:
vim ~/projects/ansible/heketi.yml
Add these lines to the file:
---
- name: Generate Heketi topology file and copy to Heketi Server
  hosts: gluster01
  become: yes
  become_method: sudo
  roles:
    - heketi
Create the hosts file:
$ vim ~/projects/ansible/hosts
gluster01
Navigate into the directory:
cd ~/projects/ansible
View everything created:
$ tree
.
├── heketi.yml
├── hosts
└── roles
    └── heketi
        ├── defaults
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
            └── topology.json.j2
5 directories, 5 files
Now run the Ansible playbook using the command:
ansible-playbook -i hosts --user myuser --ask-pass --ask-become-pass heketi.yml
Alternatively, if you connect as root or as a user with passwordless sudo, you can skip the password prompts:
ansible-playbook -i hosts --user myuser heketi.yml
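If you would like a dry run first, Ansible's check mode shows what would change without writing anything (add the same authentication flags you used above if needed):
ansible-playbook -i hosts --user myuser --check heketi.yml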
The play should complete with both tasks (the template copy and the ownership change) reported successfully.
Now you will have a topology file generated as shown:
$ cat /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster01"
              ],
              "storage": [
                "192.168.200.90"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb","/dev/sdc","/dev/sdd"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster02"
              ],
              "storage": [
                "192.168.200.91"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb","/dev/sdc","/dev/sdd"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster03"
              ],
              "storage": [
                "192.168.200.92"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb","/dev/sdc","/dev/sdd"
          ]
        }
      ]
    }
  ]
}
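Before loading the file into Heketi, it is worth confirming that the template rendered valid JSON. Assuming jq is installed, a quick check is:
jq . /etc/heketi/topology.json > /dev/null && echo 'topology.json is valid'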
6. Load the Heketi Topology File
Once the topology file has been created with the above steps, we can load it into Heketi. To load it using the CLI, run a command with the syntax below:
heketi-cli topology load --user admin --secret <heketi_admin_secret> --json=/etc/heketi/topology.json
Replace heketi_admin_secret with the admin key you set in /etc/heketi/heketi.json, for example:
heketi-cli topology load --user admin --secret jvd7df8RN7QNeKV1 --json=/etc/heketi/topology.json
The output reports the cluster being created, followed by each node and its devices being added.
Now export the client variables, replacing heketiserverip with the Heketi server's IP address and Adminsecret with the admin key configured earlier:
$ vim ~/.bashrc
export HEKETI_CLI_SERVER=http://heketiserverip:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY="Adminsecret"
Source the profile:
source ~/.bashrc
Now view the cluster:
$ heketi-cli cluster list
Clusters:
Id:e36f3593c6abde17ff69665043765e17 [file][block]
Show the nodes available:
$ heketi-cli node list
Id:111dcab88510035789f3551a359e27fe Cluster:e36f3593c6abde17ff69665043765e17
Id:3649b21cb354e55c3588868ddfa7ff92 Cluster:e36f3593c6abde17ff69665043765e17
Id:b9dbcd4773319d28e3f265d6ab6b52b6 Cluster:e36f3593c6abde17ff69665043765e17
You can also use the below command to view the nodes:
$ sudo gluster pool list
UUID Hostname State
44c4384a-313b-4726-87c7-89622250ab83 gluster02 Connected
d1f8f291-f2cd-4a8b-8b26-91243d192ecf gluster03 Connected
d1f8f291-f2cd-4a8b-8b26-91243d192ece localhost Connected
View more details of a node in the cluster:
heketi-cli node info <NODE_ID>
For example:
$ heketi-cli node info 111dcab88510035789f3551a359e27fe
Node Id: 111dcab88510035789f3551a359e27fe
State: online
Cluster Id: e36f3593c6abde17ff69665043765e17
Zone: 1
Management Hostname: gluster03
Storage Hostname: 192.168.200.92
Devices:
Id:1072f75242b8f7cd32d450344c2bf9b4 Name:/dev/sdd State:online Size (GiB):500 Used (GiB):0 Free (GiB):500 Bricks:0
Id:49e832b9834ec185b21960f411891292 Name:/dev/sdc State:online Size (GiB):500 Used (GiB):0 Free (GiB):500 Bricks:0
Id:a8aca352fb2d742e9bda4d04e48877a9 Name:/dev/sdb State:online Size (GiB):500 Used (GiB):0 Free (GiB):500 Bricks:0
To verify that volumes can be provisioned as desired, we will use Heketi to create a test volume:
# heketi-cli volume create --size=1
Name: vol_e3791aa3f720f8e1e50c3d433326030f
Size: 1
Volume Id: e3791aa3f720f8e1e50c3d433326030f
Cluster Id: e36f3593c6abde17ff69665043765e17
Mount: 192.168.200.92:vol_e3791aa3f720f8e1e50c3d433326030f
Mount Options: backup-volfile-servers=192.168.200.91,192.168.200.90
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
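To consume the volume from a client machine, you can mount it with the native GlusterFS client, using the Mount endpoint shown in the output above (install the glusterfs-client package on the client first; the mount point /mnt/glustervol is arbitrary):
sudo apt install glusterfs-client
sudo mkdir -p /mnt/glustervol
sudo mount -t glusterfs 192.168.200.92:vol_e3791aa3f720f8e1e50c3d433326030f /mnt/glustervol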
View the created volume:
# heketi-cli volume list
Id:e3791aa3f720f8e1e50c3d433326030f Cluster:e36f3593c6abde17ff69665043765e17 Name:vol_e3791aa3f720f8e1e50c3d433326030f
To view the topology, execute:
heketi-cli topology info
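When you are done testing, the test volume can be removed by its Id:
heketi-cli volume delete e3791aa3f720f8e1e50c3d433326030f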
Now you can proceed and use GlusterFS as desired. To integrate it with Kubernetes, follow the guide linked under See more below.
There are many other use cases; feel free to explore them. I hope this guide was helpful.
See more:
- Setup GlusterFS Storage With Heketi on CentOS
- Kubernetes & OpenShift Dynamic Volume Provisioning with GlusterFS and Heketi
- How To Create & Delete GlusterFS Volumes