
Deploy Ceph Storage Cluster on Ubuntu 24.04

Ceph is an open-source distributed storage platform that provides object, block, and file storage in a single unified cluster. It is designed for high availability and scales from a few nodes to thousands without a single point of failure. Ceph is widely used for cloud infrastructure, Kubernetes persistent storage, virtual machine disk backends, and general-purpose data storage.

Original content from computingforgeeks.com - post 4185

This guide walks through deploying a production-ready Ceph 20 (Tentacle) storage cluster on Ubuntu 24.04 LTS using cephadm – the official deployment and management tool. We cover bootstrapping the first monitor, adding hosts, deploying OSDs, creating pools, setting up CephFS, RBD block storage, RGW object storage, and the Ceph Dashboard. The official cephadm documentation has additional details for advanced configurations.

Prerequisites

Before starting, ensure you have the following in place:

  • 3 or more servers running Ubuntu 24.04 LTS (Noble Numbat) with at least 4 GB RAM and 2 CPUs each
  • Extra raw disks on each node for OSD storage (unpartitioned, no filesystem – Ceph takes ownership of the entire disk)
  • Root or sudo access on all nodes
  • Network connectivity between all nodes (public network for client traffic, optionally a separate cluster network for replication)
  • Hostname resolution – each node must resolve every other node by hostname (via DNS or /etc/hosts)
  • NTP/Chrony configured and synchronized across all nodes – Ceph requires accurate time
  • Docker or Podman installed on all nodes (cephadm runs Ceph daemons as containers)
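
For the hostname-resolution prerequisite, the simplest approach is identical /etc/hosts entries on every node; a sketch using the lab addresses from the table below:

```shell
# Run on every node (adjust names and addresses to your environment)
cat <<'EOF' | sudo tee -a /etc/hosts
10.0.1.10  ceph-node1
10.0.1.11  ceph-node2
10.0.1.12  ceph-node3
EOF
```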

Our lab environment uses three nodes:

Hostname     IP Address   Role
ceph-node1   10.0.1.10    Mon, Mgr, OSD, MDS, RGW
ceph-node2   10.0.1.11    Mon, Mgr, OSD, MDS
ceph-node3   10.0.1.12    Mon, OSD, MDS

Each node has an additional disk at /dev/sdb for OSD storage. Adjust the device paths and IP addresses to match your setup.

Step 1: Install cephadm on Ubuntu 24.04

The cephadm tool bootstraps and manages a Ceph cluster using containers. It handles deploying all Ceph daemons (monitors, managers, OSDs, MDS, RGW) as containerized services. Start by installing it on the first node that will become your initial monitor.

Update the system packages first:

sudo apt update && sudo apt upgrade -y

Add the Ceph Tentacle repository and install cephadm:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo tee /etc/apt/trusted.gpg.d/ceph.asc
echo "deb https://download.ceph.com/debian-tentacle/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
sudo apt install -y cephadm ceph-common

Verify the installation completed successfully:

cephadm version

The output should confirm Ceph Tentacle (version 20.x):

ceph version 20.2.0 (<commit-hash>) tentacle (stable)

Ensure Docker or Podman is available on the node. Cephadm will use whichever container engine is installed to run Ceph daemons. If neither is present, install Docker:

sudo apt install -y docker.io
sudo systemctl enable --now docker

Step 2: Bootstrap the First Monitor Node

Bootstrapping creates the initial Ceph cluster with one monitor and one manager daemon on the first node. This is the foundation everything else builds on.

Run the bootstrap command on ceph-node1, specifying its IP address as the monitor address:

sudo cephadm bootstrap --mon-ip 10.0.1.10 --cluster-network 10.0.1.0/24

The --cluster-network flag is optional but recommended in production – it separates OSD replication traffic from client traffic. If your cluster uses a single network, omit it.

Bootstrap takes a few minutes. When it completes, you should see output showing the dashboard URL and credentials:

Ceph Dashboard is now available at:

             URL: https://ceph-node1:8443/
            User: admin
        Password: <auto-generated-password>

You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell -- ceph -s

Bootstrap complete.

Save the dashboard password – you will need it later. Verify the cluster is running:

sudo ceph -s

The cluster status should show one monitor and one manager active:

  cluster:
    id:     a1b2c3d4-e5f6-7890-abcd-ef1234567890
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
  services:
    mon: 1 daemons, quorum ceph-node1
    mgr: ceph-node1.abcdef(active, since 2m)
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

The HEALTH_WARN about OSD count is expected at this point - we have not added any storage yet.

Step 3: Add Hosts to the Ceph Cluster

With the first node bootstrapped, add the remaining nodes to the cluster. Cephadm uses SSH to deploy daemons on remote hosts, so the cluster's public key needs to be distributed first.

Copy the Ceph cluster SSH public key to each additional node:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node3

Make sure Docker or Podman is also installed on the additional nodes before adding them. Then register each host with the orchestrator:

sudo ceph orch host add ceph-node2 10.0.1.11
sudo ceph orch host add ceph-node3 10.0.1.12

Verify all hosts are recognized by the cluster:

sudo ceph orch host ls

All three nodes should appear in the host list:

HOST        ADDR        LABELS  STATUS
ceph-node1  10.0.1.10   _admin
ceph-node2  10.0.1.11
ceph-node3  10.0.1.12

Cephadm automatically deploys monitor and manager daemons on the new hosts as needed. Give it a few minutes, then confirm that monitors have expanded to all three nodes:

sudo ceph mon stat

You should see three monitors in quorum:

e3: 3 mons at {ceph-node1=[v2:10.0.1.10:3300/0,v1:10.0.1.10:6789/0],ceph-node2=[v2:10.0.1.11:3300/0,v1:10.0.1.11:6789/0],ceph-node3=[v2:10.0.1.12:3300/0,v1:10.0.1.12:6789/0]}, election epoch 6, leader 0 ceph-node1, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3

Step 4: Add OSDs (Object Storage Daemons)

OSDs are the workhorses of a Ceph cluster - they store the actual data. Each OSD daemon manages one physical disk. The disks must be raw (no partitions, no filesystem, no LVM) for Ceph to use them.

First, list all available storage devices across the cluster:

sudo ceph orch device ls

The output shows each disk and whether it is available for use as an OSD:

HOST        PATH      TYPE  SIZE   AVAILABLE
ceph-node1  /dev/sdb  hdd   100G   Yes
ceph-node2  /dev/sdb  hdd   100G   Yes
ceph-node3  /dev/sdb  hdd   100G   Yes

A disk shows AVAILABLE: Yes when it has no partitions, no filesystem, and is not already used by Ceph. If a disk shows as unavailable, check for leftover partitions or LVM volumes with lsblk.

You can add all available devices at once - this is the simplest approach for clusters where every spare disk should become an OSD:

sudo ceph orch apply osd --all-available-devices

Alternatively, add specific disks individually if you need more control:

sudo ceph orch daemon add osd ceph-node1:/dev/sdb
sudo ceph orch daemon add osd ceph-node2:/dev/sdb
sudo ceph orch daemon add osd ceph-node3:/dev/sdb
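
As an alternative to per-disk commands, cephadm also accepts a declarative OSD service specification applied with ceph orch apply -i. A minimal sketch (the service_id and host_pattern are illustrative; adjust for your hosts):

```yaml
# osd_spec.yaml -- apply with: sudo ceph orch apply -i osd_spec.yaml
service_type: osd
service_id: all_raw_disks        # hypothetical name
placement:
  host_pattern: 'ceph-node*'
spec:
  data_devices:
    all: true                    # consume every available raw disk
```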

Wait a minute or two for the OSD containers to deploy, then verify all OSDs are up and running:

sudo ceph osd tree

All three OSDs should show status up:

ID  CLASS  WEIGHT   TYPE NAME            STATUS  REWEIGHT  PRI-AFF
-1         0.29219  root default
-3         0.09740      host ceph-node1
 0    hdd  0.09740          osd.0            up   1.00000  1.00000
-5         0.09740      host ceph-node2
 1    hdd  0.09740          osd.1            up   1.00000  1.00000
-7         0.09740      host ceph-node3
 2    hdd  0.09740          osd.2            up   1.00000  1.00000

Check the overall cluster health again:

sudo ceph -s

With three OSDs active, the cluster should now report HEALTH_OK. A brief HEALTH_WARN while the new placement groups peer and activate is normal and clears on its own.

Step 5: Create Storage Pools

Pools are logical partitions within the Ceph cluster that store data. Every RADOS object belongs to a pool. You need to create pools before using any Ceph storage services (CephFS, RBD, RGW).

Create a replicated pool with a placement group (PG) count appropriate for three OSDs:

sudo ceph osd pool create mypool 32 32 replicated
sudo ceph osd pool set mypool size 3
sudo ceph osd pool set mypool min_size 2

The size 3 means three replicas (one primary plus two copies), and min_size 2 means the pool keeps accepting I/O as long as at least two replicas are available, i.e. it tolerates one replica being down. For production clusters with more OSDs, increase the PG count: a common starting point is roughly 100 PGs per OSD in total, i.e. (OSDs × 100) / replica size, split across your pools and rounded to a power of two.
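
That rule of thumb is easy to sanity-check. A small shell sketch (the 100-PGs-per-OSD figure is the commonly cited starting point, not a hard rule; recent Ceph releases can manage this with the PG autoscaler):

```shell
# Rough PG target for a pool: (OSDs * 100) / replicas, shared across pools,
# rounded up to the next power of two.
osds=3 replicas=3 pools=1
target=$(( osds * 100 / replicas / pools ))   # 100
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # prints 128
```

With 3 OSDs, 3 replicas, and a single pool this lands on 128; the 32 used above is a deliberately small value that the autoscaler can grow later.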

Enable the pool for a specific application. This associates metadata so Ceph knows the pool's purpose:

sudo ceph osd pool application enable mypool rbd

List all pools to confirm:

sudo ceph osd pool ls detail

You should see the pool with its replication settings and application tag:

pool 1 'mypool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 45 flags hashpspool application rbd

Step 6: Set Up CephFS (File Storage)

CephFS provides a POSIX-compliant distributed filesystem that clients can mount over the network. It requires at least one MDS (Metadata Server) daemon, which cephadm deploys automatically when you create a CephFS volume.

Create a CephFS filesystem named cephfs_data:

sudo ceph fs volume create cephfs_data

This command creates two pools automatically (cephfs.cephfs_data.meta for metadata and cephfs.cephfs_data.data for file data) and deploys MDS daemons across the cluster.

Verify the filesystem is active and the MDS daemons are running:

sudo ceph fs status

The output shows the filesystem name, MDS daemon status, and pool configuration:

cephfs_data - 0 clients
===========
RANK  STATE      MDS        ACTIVITY   DNS   INOS   DIRS   CAPS
 0    active     ceph-node1  Reqs:    0 /s    10     13     12      0
      POOL                    TYPE     USED  AVAIL
cephfs.cephfs_data.meta       metadata  96k   90G
cephfs.cephfs_data.data       data       0    90G

To mount CephFS on a client machine, install the Ceph client packages and use the kernel mount or FUSE mount. Here is the kernel mount method:

sudo apt install -y ceph-common

Get the admin keyring secret for authentication:

sudo ceph auth get-key client.admin

Mount the filesystem, replacing the monitor IP and secret with your actual values:

sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 10.0.1.10:6789:/ /mnt/cephfs -o name=admin,secret=AQBxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==

Verify the mount is working by checking disk space:

df -h /mnt/cephfs
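
To survive reboots, the mount can go in /etc/fstab, with the key stored in a file instead of on the command line (the /etc/ceph/admin.secret path is an assumption; any root-readable path works):

```shell
# Store the admin key once, then register the mount in fstab.
# _netdev defers mounting until the network is up.
sudo ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret
sudo chmod 600 /etc/ceph/admin.secret
echo '10.0.1.10:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 2' | sudo tee -a /etc/fstab
```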

Step 7: Set Up RBD Block Storage

RADOS Block Device (RBD) provides block storage volumes that behave like traditional disks. These are commonly used as persistent volumes for Kubernetes or as virtual machine disk backends.

Create a dedicated pool for RBD images if you have not already:

sudo ceph osd pool create rbd_pool 32 32 replicated
sudo ceph osd pool application enable rbd_pool rbd
sudo rbd pool init rbd_pool

Create a 10 GB block device image:

sudo rbd create --size 10240 rbd_pool/my_disk

Verify the image was created:

sudo rbd ls rbd_pool

You should see the image name in the output:

my_disk

Get detailed information about the image:

sudo rbd info rbd_pool/my_disk

The image details confirm the size, features, and stripe configuration:

rbd image 'my_disk':
        size 10 GiB in 2560 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: abcdef123456
        block_name_prefix: rbd_data.abcdef123456
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sat Mar 22 10:00:00 2026
        access_timestamp: Sat Mar 22 10:00:00 2026
        modify_timestamp: Sat Mar 22 10:00:00 2026
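
The size and object count in this output are internally consistent: order 22 means objects of 2^22 bytes (4 MiB), so a 10 GiB image maps to 2560 objects. A quick check:

```shell
order=22
size_gib=10
obj_bytes=$(( 1 << order ))                          # 4194304 bytes = 4 MiB
objects=$(( size_gib * 1024 * 1024 * 1024 / obj_bytes ))
echo "$objects"   # prints 2560
```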

To map the RBD image as a block device on a client, use the rbd command:

sudo rbd map rbd_pool/my_disk
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/rbd_disk
sudo mount /dev/rbd0 /mnt/rbd_disk

The block device is now mounted and ready for use like any standard disk.
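
Note that mappings created with rbd map do not persist across reboots. The rbdmap service shipped with ceph-common can restore them at boot; a sketch (assumes the default admin keyring path):

```shell
# Register the image, then enable the boot-time mapping service
echo 'rbd_pool/my_disk id=admin,keyring=/etc/ceph/ceph.client.admin.keyring' | sudo tee -a /etc/ceph/rbdmap
sudo systemctl enable --now rbdmap.service
```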

Step 8: Deploy RGW (RADOS Gateway) for Object Storage

The RADOS Gateway (RGW) provides an S3-compatible and Swift-compatible HTTP API for object storage. It runs as a standalone web service fronting the Ceph cluster and is useful for applications that need S3-compatible object storage.

Create the RGW realm, zone group, and zone. These define the multi-site topology - even for single-site deployments, this structure is required:

sudo radosgw-admin realm create --rgw-realm=default --default
sudo radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
sudo radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=default --master --default
sudo radosgw-admin period update --commit

Deploy the RGW service using cephadm. This places two RGW daemons on the cluster:

sudo ceph orch apply rgw default --realm=default --zone=default --placement="2 ceph-node1 ceph-node2"

Wait a moment for the RGW containers to start, then verify the service is running:

sudo ceph orch ls --service-type rgw

The output should show the RGW service active with the specified daemon count:

NAME                  PORTS   RUNNING  REFRESHED  AGE  PLACEMENT
rgw.default           80      2/2      30s ago    2m   ceph-node1;ceph-node2

Create an S3 user for accessing the object storage:

sudo radosgw-admin user create --uid=s3user --display-name="S3 User" --access-key=myaccesskey --secret-key=mysecretkey

Test S3 access using the AWS CLI (install it first if needed):

aws --endpoint-url http://10.0.1.10:80 s3 mb s3://test-bucket
aws --endpoint-url http://10.0.1.10:80 s3 ls

You should see the test-bucket listed in the output, confirming the object storage gateway is working.
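
Instead of passing keys on every invocation, the AWS CLI can keep them in a named profile; a sketch of ~/.aws/credentials using the keys created above (the profile name ceph is arbitrary):

```ini
[ceph]
aws_access_key_id = myaccesskey
aws_secret_access_key = mysecretkey
```

Then invoke the CLI with aws --profile ceph --endpoint-url http://10.0.1.10:80 s3 ls.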

Step 9: Access the Ceph Dashboard

Ceph includes a built-in web-based management dashboard that provides real-time monitoring of cluster health, OSD status, pool usage, and performance metrics. The dashboard is enabled automatically during bootstrap.

Verify the dashboard module is enabled:

sudo ceph mgr module ls | grep dashboard

If the dashboard is not enabled, enable it and create a self-signed certificate:

sudo ceph mgr module enable dashboard
sudo ceph dashboard create-self-signed-cert

Set or reset the admin password for the dashboard. Write the password to a temporary file and pass it to the command:

echo "YourStrongPassword123" | sudo tee /tmp/dashboard_pass
sudo ceph dashboard ac-user-create admin -i /tmp/dashboard_pass administrator
sudo rm -f /tmp/dashboard_pass

Find the dashboard URL:

sudo ceph mgr services

The output shows the dashboard URL and any other manager services:

{
    "dashboard": "https://ceph-node1:8443/"
}

Open https://ceph-node1:8443/ in a browser and log in with the admin credentials. The dashboard provides an overview of cluster health, OSD utilization, pool statistics, and allows you to manage most Ceph operations through the web interface. For deeper monitoring with Prometheus and Grafana, integrate the Ceph Prometheus exporter endpoint.

Step 10: Configure Firewall Rules for Ceph

Ceph uses several ports for communication between daemons and client access. If you run a firewall on Ubuntu, these ports must be open on all cluster nodes.

The following table lists all ports used by Ceph services:

Service               Port(s)     Protocol
Monitor (msgr2)       3300        TCP
Monitor (legacy)      6789        TCP
OSD                   6800-7300   TCP
Manager Dashboard     8443        TCP
RGW (HTTP)            80          TCP
RGW (HTTPS)           443         TCP
Prometheus Exporter   9283        TCP

If you use UFW (Ubuntu's default firewall), open these ports on all Ceph nodes:

sudo ufw allow 3300/tcp
sudo ufw allow 6789/tcp
sudo ufw allow 6800:7300/tcp
sudo ufw allow 8443/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 9283/tcp
sudo ufw reload

If you use firewalld instead of UFW:

sudo firewall-cmd --permanent --add-port=3300/tcp
sudo firewall-cmd --permanent --add-port=6789/tcp
sudo firewall-cmd --permanent --add-port=6800-7300/tcp
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --permanent --add-port={80,443}/tcp
sudo firewall-cmd --permanent --add-port=9283/tcp
sudo firewall-cmd --reload

If you use UFW, verify the rules are active:

sudo ufw status numbered

All the Ceph-related ports should appear in the active rules list.

Conclusion

You now have a fully functional Ceph Tentacle (20.x) storage cluster on Ubuntu 24.04 with monitors, managers, OSDs, CephFS, RBD block storage, RGW object storage, and the web dashboard. The cluster provides replicated storage across three nodes with no single point of failure.

For production hardening, configure TLS certificates for the RGW and dashboard instead of self-signed certs, set up Prometheus and Grafana for long-term monitoring, enable pool quotas and IOPS limits, and plan a regular backup strategy for critical pools. Consider adding more OSDs and enabling erasure coding for better storage efficiency on larger deployments.
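
On the erasure-coding suggestion: the efficiency gain is easy to quantify. A k=2, m=1 profile stores k data chunks plus m coding chunks per object, so raw usage is (k+m)/k = 1.5x the usable data, versus 3x for the size-3 replication used in this guide. For example:

```shell
k=2 m=1
# Raw bytes consumed per 100 usable bytes
ec=$(( (k + m) * 100 / k ))       # erasure coding k=2, m=1
rep=$(( 3 * 100 ))                # 3-way replication
echo "EC: ${ec}%  Replication: ${rep}%"   # prints EC: 150%  Replication: 300%
```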

