How To Enable REST API Access in Ceph Object Storage

Ceph Object Gateway (RGW) provides a RESTful interface to the Ceph storage cluster, allowing applications to interact with object storage through standard S3 and Swift APIs. This makes Ceph a drop-in replacement for AWS S3 in on-premises environments – same API, your hardware, no per-GB cloud bills.

This guide walks through deploying the Ceph RADOS Gateway with cephadm, creating users, testing S3 and Swift API access, configuring SSL, managing buckets with policies, and monitoring the gateway. All commands are tested on a running Ceph cluster managed by cephadm.

Prerequisites

Before you begin, make sure you have the following in place:

  • A running Ceph cluster deployed with cephadm (Ceph Reef 18.x or Squid 19.x)
  • At least one monitor and one OSD daemon active
  • Root or sudo access to the Ceph admin node
  • The ceph CLI tools installed on the admin node
  • Port 7480 (HTTP) or 443 (HTTPS) open between clients and the RGW host
  • AWS CLI installed on a client machine for S3 testing

If you need to set up a Ceph storage cluster from scratch, get that running first before proceeding.

Step 1: Deploy the RADOS Gateway with cephadm

The RADOS Gateway (RGW) is the component that translates S3/Swift HTTP requests into RADOS operations on the Ceph cluster. With cephadm, deploying it takes a single command.

First, confirm your cluster is healthy:

sudo ceph status

The output should show HEALTH_OK or HEALTH_WARN with no critical errors:

  cluster:
    id:     a1b2c3d4-e5f6-7890-abcd-ef1234567890
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph01(active), standbys: ceph02
    osd: 6 osds: 6 up, 6 in

Deploy the RGW service. This creates a realm, zone group, and zone automatically if they do not already exist:

sudo ceph orch apply rgw mystore --realm=default --zone=default --placement="1 ceph01"

Replace ceph01 with the hostname where you want the gateway to run. For high availability, increase the count and add more hosts:

sudo ceph orch apply rgw mystore --realm=default --zone=default --placement="2 ceph01 ceph02"

Wait a minute for the daemon to start, then verify RGW is running:

sudo ceph orch ls --service-type rgw

You should see the RGW service listed with the running daemon count:

NAME          PORTS   RUNNING  REFRESHED  AGE  PLACEMENT
rgw.mystore   ?:80    1/1      2m ago     2m   ceph01

Confirm the daemon is active and listening on port 80 (default for cephadm-deployed RGW):

sudo ceph orch ps --daemon-type rgw

The daemon should show a running status:

NAME                       HOST    PORTS  STATUS   REFRESHED  AGE
rgw.mystore.ceph01.abc123  ceph01  *:80   running  30s ago    3m

Test the gateway endpoint with curl. A successful response returns an XML document:

curl http://ceph01:80

The anonymous request returns a ListAllMyBucketsResult or an AccessDenied error – both confirm RGW is responding:

<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>
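For scripted health checks, the orchestrator can emit machine-readable output with `--format json`. A minimal sketch of parsing it, assuming the field names shown (`daemon_type`, `status_desc`), which follow current cephadm output but may vary between releases:

```python
import json

def rgw_daemons_running(orch_ps_json: str) -> bool:
    """Return True if every RGW daemon reported by cephadm is running."""
    daemons = json.loads(orch_ps_json)
    rgw = [d for d in daemons if d.get("daemon_type") == "rgw"]
    return bool(rgw) and all(d.get("status_desc") == "running" for d in rgw)

# Abbreviated sample of `ceph orch ps --daemon-type rgw --format json` output:
sample = json.dumps([
    {"daemon_type": "rgw", "daemon_id": "mystore.ceph01.abc123",
     "hostname": "ceph01", "status_desc": "running"},
])
print(rgw_daemons_running(sample))  # True
```

Pipe the real command output into a script like this from cron or a monitoring agent to alert when a gateway daemon goes down.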

Step 2: Create an RGW User with radosgw-admin

S3 API calls require authentication with an access key and secret key. The radosgw-admin tool creates users and generates these credentials.

Create a new user for S3 access:

sudo radosgw-admin user create --uid=s3user --display-name="S3 API User" [email protected]

The command returns a JSON object with the user details. The access_key and secret_key fields are what you need for API authentication:

{
    "user_id": "s3user",
    "display_name": "S3 API User",
    "email": "[email protected]",
    "keys": [
        {
            "user": "s3user",
            "access_key": "AKIAIOSFODNN7EXAMPLE",
            "secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
        }
    ],
    ...
}

Save the access key and secret key – you will need them in the next step. To retrieve the keys later:

sudo radosgw-admin user info --uid=s3user

You can also set user quotas to limit storage consumption:

sudo radosgw-admin quota set --quota-scope=user --uid=s3user --max-size=50G --max-objects=100000
sudo radosgw-admin quota enable --quota-scope=user --uid=s3user

Verify the quota is active:

sudo radosgw-admin user info --uid=s3user | grep -A5 user_quota

The output confirms the quota is enabled with the specified limits:

    "user_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": 53687091200,
        "max_size_kb": 52428800,
        "max_objects": 100000
    }
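The byte values in the quota output are plain unit conversions: radosgw-admin interprets `--max-size=50G` as 50 GiB. A quick check, runnable without Ceph:

```python
# 50G on the radosgw-admin command line means 50 GiB (binary units).
max_size_bytes = 50 * 1024**3       # the "max_size" field
max_size_kb = max_size_bytes // 1024  # the "max_size_kb" field
print(max_size_bytes, max_size_kb)  # 53687091200 52428800
```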

Step 3: Test S3 API Access with AWS CLI

The AWS CLI works with any S3-compatible endpoint, including Ceph RGW. Configure it with the credentials from Step 2.

Install the AWS CLI if you do not have it already:

sudo apt install awscli -y

On RHEL/Rocky/AlmaLinux:

sudo dnf install awscli -y

Configure the CLI with your RGW user credentials:

aws configure

Enter the access key, secret key, and set the default region to us-east-1 (RGW accepts any region name but this is the conventional default). Leave output format as json.

Test the connection by listing buckets. Point the endpoint to your RGW host:

aws --endpoint-url http://ceph01:80 s3 ls

An empty response means the connection works – there are no buckets yet. If you get a connection error, verify the RGW daemon is running and the port is accessible.

Create a test bucket to confirm write access:

aws --endpoint-url http://ceph01:80 s3 mb s3://test-bucket

A successful creation returns:

make_bucket: test-bucket

Verify the bucket exists:

aws --endpoint-url http://ceph01:80 s3 ls

The output lists your new bucket with its creation timestamp:

2026-03-22 10:30:45 test-bucket
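Under the hood, the AWS CLI signs every request with your secret key. A minimal sketch of the older Signature Version 2 scheme, which RGW has historically accepted alongside SigV4 (the key, date, and resource below are placeholders taken from the example output in Step 2):

```python
import base64
import hashlib
import hmac

def sigv2_sign(secret_key: str, method: str, date: str, resource: str,
               content_md5: str = "", content_type: str = "") -> str:
    """Compute an AWS Signature Version 2 for an S3-style request."""
    string_to_sign = "\n".join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sigv2_sign("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                 "GET", "Sat, 22 Mar 2026 10:30:00 GMT", "/test-bucket/")
# Sent as: Authorization: AWS AKIAIOSFODNN7EXAMPLE:<sig>
print(sig)
```

Knowing the signing scheme helps when debugging 403 SignatureDoesNotMatch errors, which usually trace back to a clock skew in the Date header or a mismatched resource path.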

Step 4: Configure SSL/TLS for Ceph RGW

Production deployments must use HTTPS. Ceph RGW terminates SSL natively through its embedded Beast frontend (the older Civetweb frontend has been removed from recent Ceph releases).

First, prepare the SSL certificate. Combine your certificate and private key into a single PEM file. If you have a CA bundle, include it as well:

cat /etc/ssl/certs/rgw.crt /etc/ssl/private/rgw.key > /etc/ssl/ceph-rgw.pem
chmod 600 /etc/ssl/ceph-rgw.pem

For self-signed certificates (testing only), generate a certificate:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/rgw.key \
  -out /etc/ssl/certs/rgw.crt \
  -subj "/CN=rgw.example.com"
cat /etc/ssl/certs/rgw.crt /etc/ssl/private/rgw.key > /etc/ssl/ceph-rgw.pem
chmod 600 /etc/ssl/ceph-rgw.pem
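A malformed bundle (key before certificate, or a missing block) is a common reason the daemon fails to come up after redeploy. A rough sanity check, assuming the layout above with the certificate first and the key second:

```python
def check_pem_bundle(text: str) -> bool:
    """Verify the bundle has a certificate block followed by a private key block."""
    cert = text.find("-----BEGIN CERTIFICATE-----")
    key = text.find("PRIVATE KEY-----")  # matches RSA, EC, and PKCS#8 headers
    return cert != -1 and key != -1 and cert < key

bundle = (
    "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
    "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\n"
)
print(check_pem_bundle(bundle))  # True
```

This only checks block order, not validity; `openssl x509` and `openssl rsa` remain the tools for verifying the contents.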

Apply the SSL configuration using the cephadm service spec. Create a spec file:

sudo vi /etc/ceph/rgw-ssl.yaml

Add the following service specification:

service_type: rgw
service_id: mystore
placement:
  hosts:
    - ceph01
spec:
  rgw_frontend_port: 443
  ssl: true
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    (paste your certificate content here)
    -----END CERTIFICATE-----
    -----BEGIN RSA PRIVATE KEY-----
    (paste your private key content here)
    -----END RSA PRIVATE KEY-----

Apply the spec:

sudo ceph orch apply -i /etc/ceph/rgw-ssl.yaml

Wait for the daemon to redeploy, then verify it is listening on port 443:

sudo ceph orch ps --daemon-type rgw

The port column should now show *:443 instead of *:80.

Test the HTTPS endpoint:

curl -k https://ceph01:443

The -k flag skips certificate verification for self-signed certs. For production with a valid certificate, drop the -k flag.

Update your AWS CLI endpoint to use HTTPS:

aws --endpoint-url https://ceph01:443 s3 ls --no-verify-ssl

Step 5: Create Buckets and Upload Objects

With the gateway running and credentials configured, you can manage buckets and objects through the S3 API just like you would with AWS S3.

Create a bucket for application data:

aws --endpoint-url http://ceph01:80 s3 mb s3://app-data

Upload a file to the bucket:

aws --endpoint-url http://ceph01:80 s3 cp /var/log/syslog s3://app-data/logs/syslog

The upload progress is displayed during transfer:

upload: /var/log/syslog to s3://app-data/logs/syslog

Upload an entire directory recursively:

aws --endpoint-url http://ceph01:80 s3 sync /opt/backups/ s3://app-data/backups/

List objects in the bucket to confirm the uploads:

aws --endpoint-url http://ceph01:80 s3 ls s3://app-data/ --recursive

The output lists each object with its size and timestamp:

2026-03-22 10:35:12     245890 logs/syslog
2026-03-22 10:36:01    1048576 backups/db-backup.sql.gz

Download an object from the bucket:

aws --endpoint-url http://ceph01:80 s3 cp s3://app-data/logs/syslog /tmp/syslog-downloaded

Delete an object:

aws --endpoint-url http://ceph01:80 s3 rm s3://app-data/logs/syslog
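For single-part uploads RGW, like S3, sets the object's ETag to the hex MD5 of its content, so you can verify a download end to end by comparing digests (multipart uploads use a different ETag scheme and will not match):

```python
import hashlib

def local_etag(path: str) -> str:
    """MD5 hex digest of a file, comparable to the ETag of a single-part upload."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the ETag from:
#   aws --endpoint-url http://ceph01:80 s3api head-object --bucket app-data --key logs/syslog
print(hashlib.md5(b"hello").hexdigest())  # 5d41402abc4b2a76b9719d911017c592
```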

Step 6: Configure Bucket Policies

Bucket policies control access at the bucket level using JSON policy documents, similar to AWS S3 bucket policies. This lets you grant or deny specific actions without managing per-user permissions.

Create a policy file that grants read-only access to all authenticated users on a specific bucket:

sudo vi /tmp/bucket-policy.json

Add the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/s3user"]},
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::app-data",
        "arn:aws:s3:::app-data/*"
      ]
    }
  ]
}

Apply the policy to the bucket:

aws --endpoint-url http://ceph01:80 s3api put-bucket-policy --bucket app-data --policy file:///tmp/bucket-policy.json

Verify the policy is applied:

aws --endpoint-url http://ceph01:80 s3api get-bucket-policy --bucket app-data

To make a bucket publicly readable (use with caution):

sudo vi /tmp/public-policy.json

Add the public read policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::app-data/*"
    }
  ]
}

Apply the public policy:

aws --endpoint-url http://ceph01:80 s3api put-bucket-policy --bucket app-data --policy file:///tmp/public-policy.json

To remove a bucket policy entirely:

aws --endpoint-url http://ceph01:80 s3api delete-bucket-policy --bucket app-data
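Since policies are plain JSON, generating them programmatically avoids hand-editing and quoting mistakes. A small helper that emits the read-only policy shown earlier in this step (bucket and user names are whatever you pass in):

```python
import json

def read_only_policy(bucket: str, uid: str) -> str:
    """Build an RGW bucket policy granting read-only access to one user."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": [f"arn:aws:iam:::user/{uid}"]},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }],
    }
    return json.dumps(policy, indent=2)

print(read_only_policy("app-data", "s3user"))
```

Write the output to a file and apply it with `s3api put-bucket-policy --policy file://...` as shown above.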

Step 7: Enable Swift API Access

Ceph RGW supports the OpenStack Swift API alongside S3. This is useful when you have applications that already use Swift or when migrating from an OpenStack Swift deployment to Ceph.

Create a Swift subuser for the existing S3 user:

sudo radosgw-admin subuser create --uid=s3user --subuser=s3user:swift --access=full

Generate a Swift secret key for the subuser:

sudo radosgw-admin key create --subuser=s3user:swift --key-type=swift --gen-secret

The output includes the Swift secret key in the swift_keys section:

{
    ...
    "swift_keys": [
        {
            "user": "s3user:swift",
            "secret_key": "ExampleSwiftSecretKey1234567890"
        }
    ]
}

Test Swift API access using curl. First, authenticate and get a token:

curl -i -H "X-Auth-User: s3user:swift" \
  -H "X-Auth-Key: ExampleSwiftSecretKey1234567890" \
  http://ceph01:80/auth/1.0

A successful authentication returns a storage URL and auth token in the response headers:

HTTP/1.1 204 No Content
X-Storage-Url: http://ceph01:80/swift/v1
X-Auth-Token: AUTH_rgwtk0a1b2c3d4e5f6...

Use the token to list Swift containers:

curl -H "X-Auth-Token: AUTH_rgwtk0a1b2c3d4e5f6..." http://ceph01:80/swift/v1

The response lists all containers (buckets) visible to the Swift subuser. Buckets created through the S3 API are visible through Swift and vice versa – both interfaces share the same underlying storage.
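The curl exchange above is easy to script. This sketch only builds the auth headers and extracts the two response headers, leaving the HTTP call itself to whichever client you use; the token value is a shortened placeholder matching the example output:

```python
def swift_auth_headers(user: str, key: str) -> dict:
    """Headers for TempAuth-style authentication against RGW's /auth/1.0."""
    return {"X-Auth-User": user, "X-Auth-Key": key}

def parse_auth_response(headers: dict) -> tuple:
    """Extract the storage URL and token from the auth response headers."""
    return headers["X-Storage-Url"], headers["X-Auth-Token"]

# Shape of the headers returned by a successful /auth/1.0 request:
resp = {"X-Storage-Url": "http://ceph01:80/swift/v1",
        "X-Auth-Token": "AUTH_rgwtk0a1b2c3d4e5f6"}
url, token = parse_auth_response(resp)
print(url, token)
```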

Step 8: Configure Firewall Rules for RGW

The RGW service needs its ports open for client access. The default port is 80 for HTTP or 443 for HTTPS.

On systems running firewalld (RHEL, Rocky Linux, AlmaLinux, Fedora):

sudo firewall-cmd --add-port=80/tcp --permanent
sudo firewall-cmd --add-port=443/tcp --permanent
sudo firewall-cmd --reload

On systems running UFW (Ubuntu, Debian):

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw reload

If your RGW uses the legacy default port 7480, open that instead:

sudo firewall-cmd --add-port=7480/tcp --permanent
sudo firewall-cmd --reload

Verify the firewall rules are in place:

sudo firewall-cmd --list-ports

The output should include the RGW ports:

80/tcp 443/tcp

Test connectivity from a remote client to confirm the port is reachable:

curl -s -o /dev/null -w "%{http_code}" http://ceph01:80

A response code of 200 or 403 confirms the gateway is accessible through the firewall.
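The same reachability check can be done at the TCP level, which tells you whether the firewall passes traffic at all, independent of any HTTP response (hostname and port are whatever your deployment uses):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("ceph01", 80))  # True once the firewall rule is in place
```

If this returns True but curl still fails, the problem is at the HTTP layer (RGW itself) rather than the firewall.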

Step 9: Monitor Ceph RGW Performance

Monitoring the gateway is critical for production workloads. Ceph provides built-in metrics and admin API endpoints for RGW health and performance tracking.

Check the RGW performance counters through the daemon's admin socket. With cephadm, the daemon name includes the host and a random suffix, and the command must run on the host where that daemon lives:

sudo ceph daemon rgw.mystore.ceph01.abc123 perf dump

This returns detailed counters including request latency, cache hits, and throughput. If you are unsure of the exact daemon name, list all running daemons first:

sudo ceph orch ps --daemon-type rgw

View bucket usage statistics with radosgw-admin:

sudo radosgw-admin bucket stats --bucket=app-data

The output shows object count, total size, and quota usage for the bucket:

{
    "bucket": "app-data",
    "num_shards": 11,
    "tenant": "",
    "zonegroup": "default",
    "placement_rule": "default-placement",
    "id": "a1b2c3d4-e5f6...",
    "usage": {
        "rgw.main": {
            "size": 1294466,
            "size_actual": 1298432,
            "num_objects": 2
        }
    }
}
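The gap between size and size_actual comes from RGW rounding each object up to a 4 KiB allocation unit. The numbers above check out against the two object sizes listed in Step 5:

```python
def size_actual(object_sizes, block=4096):
    """Sum of object sizes, each rounded up to the 4 KiB allocation unit."""
    return sum((s + block - 1) // block * block for s in object_sizes)

objects = [245890, 1048576]  # logs/syslog and backups/db-backup.sql.gz
print(sum(objects), size_actual(objects))  # 1294466 1298432
```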

Check user-level usage statistics:

sudo radosgw-admin user stats --uid=s3user --sync-stats

List all buckets and their owners:

sudo radosgw-admin bucket list

For long-term monitoring, the Ceph MGR module exposes Prometheus metrics at the /metrics endpoint. This integrates directly with Prometheus and Grafana for Ceph cluster monitoring, giving you dashboards for RGW request rates, latencies, and error counts.

Enable the Prometheus module if it is not already active:

sudo ceph mgr module enable prometheus

The metrics endpoint becomes available at http://ceph01:9283/metrics. Add this as a scrape target in your Prometheus configuration.

Check RGW-specific metrics:

curl -s http://ceph01:9283/metrics | grep rgw

Key metrics to watch include ceph_rgw_req (total requests), ceph_rgw_failed_req (failed requests), ceph_rgw_get_latency_sum (GET latency), and ceph_rgw_put_latency_sum (PUT latency).
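A minimal parser for the Prometheus text format lets you script checks on those counters. The sample lines mimic what the endpoint returns, with label sets abbreviated:

```python
def parse_metrics(text: str, prefix: str = "ceph_rgw_") -> dict:
    """Extract RGW metric samples from Prometheus text exposition format."""
    out = {}
    for line in text.splitlines():
        if line.startswith(prefix):  # skips comment lines and non-RGW metrics
            name, _, value = line.rpartition(" ")
            out[name] = float(value)
    return out

sample = """\
# HELP ceph_rgw_req Requests
ceph_rgw_req{instance_id="abc123"} 1024
ceph_rgw_failed_req{instance_id="abc123"} 3
"""
metrics = parse_metrics(sample)
print(metrics)
```

In practice you would point this at the live `/metrics` endpoint and alert when failed_req grows faster than expected.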

RGW Port Reference

The following table summarizes all ports used by Ceph RGW and related monitoring services:

Port   Protocol   Service
80     TCP        RGW HTTP (cephadm default)
443    TCP        RGW HTTPS (SSL)
7480   TCP        RGW HTTP (legacy default)
9283   TCP        Ceph Prometheus metrics

Conclusion

You now have a working Ceph RADOS Gateway serving both S3 and Swift APIs, with user authentication, bucket policies, and monitoring in place. The Ceph RGW documentation covers additional features like multisite replication, lifecycle policies, and server-side encryption for production hardening.

For production environments, always deploy multiple RGW instances behind a load balancer, enable SSL with valid certificates, set user quotas, and monitor request latencies through Prometheus. Regular bucket stats checks help catch storage growth issues before they become problems.
