OpenStack is a free, open-source cloud platform for building private and public cloud infrastructure. It provides compute, storage, networking, and identity services through a modular architecture. This guide covers how to install OpenStack Dalmatian (2024.2) on Rocky Linux 10 using Packstack, a tool that automates the deployment of an all-in-one OpenStack environment using Puppet modules.
We will set up a single-node OpenStack cloud suitable for lab testing, proof-of-concept deployments, and learning. The deployment includes Keystone (identity), Nova (compute), Neutron (networking), Glance (image), Cinder (block storage), Swift (object storage), Heat (orchestration), Ceilometer (telemetry), and Horizon (dashboard).
Prerequisites
Before installing OpenStack on Rocky Linux 10, confirm the following requirements are met:
- A dedicated server or VM running Rocky Linux 10 with minimal install
- Minimum 16 GB RAM (32 GB or more recommended for running multiple instances)
- At least 4 CPU cores (8+ recommended)
- 100 GB free disk space minimum (separate disk for Cinder volumes recommended)
- One network interface with a static IP address and internet access
- Root or sudo access
- A fully qualified domain name (FQDN) set as hostname
- Hardware virtualization support enabled in BIOS (Intel VT-x or AMD-V)
Verify your hardware specs before proceeding.
$ grep -c ^processor /proc/cpuinfo
8
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       412Mi        29Gi        12Mi       1.2Gi        30Gi
Swap:          4.0Gi          0B       4.0Gi
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0  200G  0 disk
├─sda1   8:1    0    1G  0 part /boot
└─sda2   8:2    0  199G  0 part /
sdb      8:16   0  500G  0 disk
Check that hardware virtualization is available on the processor.
$ grep -E 'vmx|svm' /proc/cpuinfo | head -1
flags : ... vmx ...
If there is no output, enable virtualization in your BIOS/UEFI settings before continuing.
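The checks above can be bundled into a small pre-flight script. This is only a sketch of our own: the has_virt helper is a hypothetical function, not a Packstack tool.

```shell
#!/usr/bin/env bash
# Pre-flight sketch: summarize CPU count and virtualization support.

# has_virt "FLAGS" -> prints yes if vmx (Intel) or svm (AMD) is present
has_virt() {
  case " $1 " in
    *" vmx "* | *" svm "*) echo yes ;;
    *) echo no ;;
  esac
}

cpu_count=$(grep -c ^processor /proc/cpuinfo)
cpu_flags=$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)
echo "CPUs: ${cpu_count}, hardware virtualization: $(has_virt "$cpu_flags")"
```

Run it before Step 1; if it reports no, revisit your BIOS/UEFI settings first.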
Step 1: Set Hostname and Configure Hosts File
Set a proper FQDN for your OpenStack server. Packstack relies on hostname resolution during deployment.
sudo hostnamectl set-hostname openstack.example.com --static
Add the hostname to /etc/hosts so local resolution works without DNS.
echo "192.168.1.10 openstack.example.com openstack" | sudo tee -a /etc/hosts
Replace 192.168.1.10 with the actual IP address of your server. Verify the hostname is set correctly.
$ hostnamectl
Static hostname: openstack.example.com
Icon name: computer-vm
Chassis: vm
Machine ID: a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4
Boot ID: 12345678-abcd-efgh-ijkl-123456789abc
Virtualization: kvm
Operating System: Rocky Linux 10.0 (Red Quartz)
CPE OS Name: cpe:/o:rocky:rocky:10
Kernel: Linux 6.12.0-55.el10.x86_64
Architecture: x86-64
Step 2: Set SELinux to Permissive and Disable the Firewall
Packstack expects SELinux out of enforcing mode and firewalld disabled during installation. The installer configures iptables rules directly, and SELinux policies can interfere with OpenStack service communication.
Set SELinux to permissive mode. For a detailed walkthrough on managing SELinux on Rocky Linux, see our dedicated guide.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Confirm SELinux is now permissive.
$ getenforce
Permissive
Disable firewalld to prevent it from conflicting with Packstack’s iptables configuration.
sudo systemctl disable --now firewalld
Verify firewalld is stopped.
$ systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
Active: inactive (dead)
Step 3: Update System and Enable Required Repositories
Update all installed packages to the latest versions first.
sudo dnf -y update
Install basic utilities needed during the setup.
sudo dnf -y install vim wget curl bash-completion dnf-utils net-tools lvm2
Enable the CRB (CodeReady Builder) repository. OpenStack packages depend on development libraries provided by this repository.
sudo dnf config-manager --set-enabled crb
Verify the CRB repo is active.
$ dnf repolist | grep crb
crb Rocky Linux 10 - CRB
Step 4: Install OpenStack Dalmatian Repository and Packstack
Add the RDO repository for OpenStack Dalmatian (2024.2). RDO is the community distribution of OpenStack for RHEL-based systems.
sudo dnf install -y centos-release-openstack-dalmatian
Update the system again to pull in the new repository metadata and any dependency updates.
sudo dnf -y update
Install the Packstack installer package.
sudo dnf install -y openstack-packstack
Confirm Packstack is installed and check the version.
$ packstack --version
packstack 2024.2
Reboot the server to load any updated kernel modules and apply all changes.
sudo systemctl reboot
Step 5: Generate and Customize the Packstack Answer File
Packstack uses an answer file to define which OpenStack components to install and how to configure them. Generating this file first allows you to review and customize the deployment before running it.
Quick deployment (default values)
For a fast proof-of-concept deployment with default settings, run this single command. It installs all components on one node with auto-generated passwords.
sudo packstack --allinone --provision-demo=n
If you use this quick method, the deployment runs immediately; once it finishes, skip ahead to Step 8. For a customized deployment with control over networking, storage, and services, continue with the answer file method below.
Generate the answer file
Generate an answer file with custom service selections and networking options.
sudo packstack \
--gen-answer-file /root/packstack-answers.txt \
--keystone-admin-passwd='Str0ngAdm1nPass!' \
--provision-demo=n \
--os-heat-install=y \
--os-ceilometer-install=y \
--os-swift-install=y \
--os-horizon-ssl=n \
--os-neutron-ml2-mechanism-drivers=openvswitch \
--os-neutron-ml2-tenant-network-types=vxlan \
--os-neutron-ml2-type-drivers=vxlan,flat,vlan \
--os-neutron-l2-agent=openvswitch \
--nova-libvirt-virt-type=kvm \
--cinder-volumes-create=n
Key parameters explained:
- keystone-admin-passwd – Password for the OpenStack admin user
- provision-demo=n – Skip creating demo projects and networks (we will create our own)
- os-heat-install=y – Install the Heat orchestration engine for stack templates
- os-ceilometer-install=y – Install Ceilometer for usage metering and telemetry
- os-swift-install=y – Install Swift object storage service
- cinder-volumes-create=n – Do not auto-create a loopback device for Cinder (we will prepare LVM manually)
- nova-libvirt-virt-type=kvm – Use KVM hardware virtualization (change to qemu if running inside a VM without nested virtualization)
Customize the answer file
Open the generated file to review and adjust any settings.
sudo vim /root/packstack-answers.txt
Key settings to verify or change in the answer file:
# Controller, Compute, and Network node IPs (all same for single-node)
CONFIG_CONTROLLER_HOST=192.168.1.10
CONFIG_COMPUTE_HOSTS=192.168.1.10
CONFIG_NETWORK_HOSTS=192.168.1.10
# Neutron OVS bridge mapping for external network
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0
# Swift storage device (raw disk or loopback)
CONFIG_SWIFT_STORAGES=/dev/sdb
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
# If no raw disk for Swift, use loopback instead
# CONFIG_SWIFT_STORAGE_SIZE=20G
Replace 192.168.1.10 with your server’s actual IP and eth0 with your network interface name. Replace /dev/sdb with the disk you want to use for Swift object storage. If you do not have a spare disk, comment out the device line and set a loopback size instead.
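Instead of editing by hand, the same keys can be set non-interactively, which is handy for repeatable lab builds. Here is a sketch using sed on a throwaway copy; the ens3 interface name and values are placeholders, and for a real run you would point ANSWERS at /root/packstack-answers.txt.

```shell
#!/usr/bin/env bash
# Sketch: set CONFIG_* keys in a Packstack answer file with sed.
# Demonstrated on a temporary file so it can run anywhere.
ANSWERS=$(mktemp)
cat > "$ANSWERS" <<'EOF'
CONFIG_CONTROLLER_HOST=127.0.0.1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0
EOF

# set_key FILE KEY VALUE -- replaces the whole KEY= line
set_key() {
  sed -i "s|^$2=.*|$2=$3|" "$1"
}

set_key "$ANSWERS" CONFIG_CONTROLLER_HOST 192.168.1.10
set_key "$ANSWERS" CONFIG_NEUTRON_OVS_BRIDGE_IFACES br-ex:ens3
grep ^CONFIG "$ANSWERS"
```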
Step 6: Prepare Cinder Block Storage Volumes
Cinder requires an LVM volume group named cinder-volumes to provision block storage for instances. If you set cinder-volumes-create=n in the answer file, you must create this volume group manually before running Packstack.
If you have a dedicated disk (for example /dev/sdb), create the LVM structure on it. For more on managing Cinder volumes in OpenStack, check our separate guide.
sudo pvcreate /dev/sdb
Create the volume group that Cinder will use.
sudo vgcreate cinder-volumes /dev/sdb
Create a thin pool within the volume group. Thin provisioning allows Cinder to overcommit storage.
$ sudo lvcreate -l 100%FREE -T cinder-volumes/cinder-volumes-pool
Logical volume "cinder-volumes-pool" created.
Verify the volume group is ready.
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 1 0 wz--n- <500.00g 0
If you do not have a spare disk, set CONFIG_CINDER_VOLUMES_CREATE=y in the answer file (the --cinder-volumes-create=y flag at generation time) instead. Packstack will create a loopback file-backed volume group automatically. This is fine for testing but not suitable for production workloads.
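For reference, building a file-backed volume group yourself follows the same shape Packstack uses. In this sketch only the sparse-file step actually runs; the loop device and LVM steps need root, so they are shown as comments, and the 20G size and /tmp path are arbitrary choices.

```shell
#!/usr/bin/env bash
# Sketch: file-backed cinder-volumes VG when no spare disk exists.
IMG=/tmp/cinder-volumes.img
truncate -s 20G "$IMG"   # sparse file; consumes no real space up front

# Root-only steps (illustrative, not executed here):
#   loopdev=$(losetup --find --show "$IMG")
#   pvcreate "$loopdev"
#   vgcreate cinder-volumes "$loopdev"
echo "backing file size: $(stat -c%s "$IMG") bytes"
```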
Step 7: Run Packstack to Deploy OpenStack on Rocky Linux 10
Start the OpenStack deployment with the customized answer file. This process takes 30 to 60 minutes depending on hardware speed and internet bandwidth.
sudo packstack --answer-file /root/packstack-answers.txt --timeout=1500 2>&1 | tee /root/packstack-deploy.log
The installer will download packages, configure services with Puppet manifests, and set up all OpenStack components. Watch for the progress output.
Installing:
Clean Up [ DONE ]
Discovering ip protocol version [ DONE ]
Setting up ssh keys [ DONE ]
Preparing servers [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries [ DONE ]
Setting up CACERT [ DONE ]
Preparing AMQP entries [ DONE ]
Preparing MariaDB entries [ DONE ]
Preparing Keystone entries [ DONE ]
Preparing Glance entries [ DONE ]
Checking if the Cinder server has a cinder-volumes vg [ DONE ]
Preparing Cinder entries [ DONE ]
Preparing Nova API entries [ DONE ]
Creating ssh keys for Nova migration [ DONE ]
Preparing Nova Compute entries [ DONE ]
Preparing Nova Scheduler entries [ DONE ]
Preparing Neutron API entries [ DONE ]
Preparing Neutron L3 entries [ DONE ]
Preparing Neutron L2 Agent entries [ DONE ]
Preparing Neutron DHCP Agent entries [ DONE ]
Preparing Horizon entries [ DONE ]
Preparing Swift builder entries [ DONE ]
Preparing Swift proxy entries [ DONE ]
Preparing Swift storage entries [ DONE ]
Preparing Ceilometer entries [ DONE ]
Preparing Heat entries [ DONE ]
Preparing Puppet manifests [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.1.10_controller.pp
192.168.1.10_controller.pp: [ DONE ]
Applying 192.168.1.10_network.pp
192.168.1.10_network.pp: [ DONE ]
Applying 192.168.1.10_compute.pp
192.168.1.10_compute.pp: [ DONE ]
Applying Puppet manifests [ DONE ]
Finalizing [ DONE ]
**** Installation completed successfully ******
If the installation fails, check the log file at /var/tmp/packstack/ for detailed error messages. Common issues include insufficient RAM, DNS resolution problems, or missing volume groups.
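When a run fails, the Puppet logs are verbose, and filtering for error lines narrows things down quickly. This sketch works on a synthetic log so it runs anywhere; for a real failure, point LOG at the manifest logs under /var/tmp/packstack/.

```shell
#!/usr/bin/env bash
# Sketch: pull error lines out of a Packstack/Puppet log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Notice: Applied catalog in 42.17 seconds
Error: Systemd start for openstack-cinder-volume failed!
Warning: Unknown variable
EOF

grep -i '^error' "$LOG"
```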
Step 8: Configure OpenStack Networking
After Packstack finishes, configure the external network bridge so instances can reach the outside network. The installer creates an OVS bridge named br-ex. Verify it exists.
$ sudo ovs-vsctl show
a1b2c3d4-5678-90ab-cdef-a1b2c3d4e5f6
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
ovs_version: "3.3.0"
Add your physical network interface to the br-ex bridge. Replace eth0 with your actual interface name.
sudo ovs-vsctl add-port br-ex eth0
Create a NetworkManager connection profile for the br-ex bridge. This moves the IP configuration from your physical interface to the bridge.
sudo nmcli connection add type ovs-bridge conn.interface br-ex con-name br-ex
sudo nmcli connection add type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex
sudo nmcli connection add type ovs-interface conn.interface br-ex master ovs-port-br-ex con-name ovs-iface-br-ex ipv4.method manual ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 8.8.8.8
Verify the OVS bridge mapping is set correctly in the Neutron OVS agent configuration.
$ sudo grep bridge_mappings /etc/neutron/plugins/ml2/openvswitch_agent.ini
bridge_mappings=extnet:br-ex
Load the OpenStack admin credentials. The keystonerc_admin file was created by Packstack in the root home directory.
source /root/keystonerc_admin
Create the external (public) network. This network provides floating IPs for instances.
openstack network create \
--provider-network-type flat \
--provider-physical-network extnet \
--external \
public
Add a subnet to the public network. Use IP addresses from your physical network range that are not used by other devices. For a deeper look at creating OpenStack networks and subnets, see our dedicated article.
openstack subnet create --network public \
--allocation-pool start=192.168.1.200,end=192.168.1.230 \
--no-dhcp \
--gateway 192.168.1.1 \
--subnet-range 192.168.1.0/24 \
public_subnet
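When sizing the allocation pool, it helps to know how many floating IPs a given range yields. A quick pure-shell calculation; ip2int and pool_size are hypothetical helpers for illustration only.

```shell
#!/usr/bin/env bash
# Sketch: count the addresses in an allocation pool range.

# ip2int A.B.C.D -> 32-bit integer
ip2int() {
  oldifs=$IFS; IFS=.
  set -- $1
  IFS=$oldifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# pool_size START END (inclusive)
pool_size() {
  echo $(( $(ip2int "$2") - $(ip2int "$1") + 1 ))
}

pool_size 192.168.1.200 192.168.1.230   # the pool above: 31 floating IPs
```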
Create a private (tenant) network for instance traffic.
openstack network create private
Add a subnet to the private network with DHCP enabled.
openstack subnet create --network private \
--allocation-pool start=10.0.0.50,end=10.0.0.200 \
--dns-nameserver 8.8.8.8 \
--subnet-range 10.0.0.0/24 \
private_subnet
Create a router that connects the private network to the public network.
openstack router create router1
openstack router set --external-gateway public router1
openstack router add subnet router1 private_subnet
Verify the network namespaces are created. You should see entries for the router and DHCP agent.
$ ip netns show
qrouter-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
qdhcp-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Confirm all networks are listed.
$ openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+--------------------------------------+
| a1b2c3d4-5678-90ab-cdef-111111111111 | public | b2c3d4e5-6789-0abc-def1-222222222222 |
| c3d4e5f6-7890-abcd-ef12-333333333333 | private | d4e5f6a7-8901-bcde-f123-444444444444 |
+--------------------------------------+---------+--------------------------------------+
Step 9: Configure Cinder LVM Backend
If you prepared LVM volumes in Step 6, verify that Cinder is configured to use the LVM backend. Open the Cinder configuration file.
sudo vim /etc/cinder/cinder.conf
Confirm these settings are present in the file.
enabled_backends=lvm
volume_clear = none
[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
target_ip_address=192.168.1.10
target_helper=lioadm
volume_group=cinder-volumes
volumes_dir=/var/lib/cinder/volumes
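You can also check a value without opening an editor. This is a section-aware lookup sketch using awk, demonstrated on a throwaway copy; it assumes key=value lines with no spaces around the equals sign, and on the real host you would point CONF at /etc/cinder/cinder.conf.

```shell
#!/usr/bin/env bash
# Sketch: section-aware lookup of an ini-style key.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
enabled_backends=lvm
[lvm]
volume_group=cinder-volumes
EOF

# ini_get FILE SECTION KEY
ini_get() {
  awk -F= -v s="[$2]" -v k="$3" '
    $0 == s { in_s = 1; next }
    /^\[/   { in_s = 0 }
    in_s && $1 == k { print $2 }
  ' "$1"
}

ini_get "$CONF" lvm volume_group
```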
Restart Cinder services to apply the configuration.
sudo systemctl restart openstack-cinder-volume openstack-cinder-api
Verify Cinder services are running.
$ openstack volume service list
+------------------+-----------------------------+------+---------+-------+
| Binary | Host | Zone | Status | State |
+------------------+-----------------------------+------+---------+-------+
| cinder-scheduler | openstack.example.com | nova | enabled | up |
| cinder-volume | openstack.example.com@lvm | nova | enabled | up |
+------------------+-----------------------------+------+---------+-------+
Step 10: Access the OpenStack Horizon Dashboard
Open a web browser and navigate to the Horizon dashboard at http://192.168.1.10/dashboard (replace with your server IP). The login credentials are stored in the keystonerc_admin file.
$ grep OS_PASSWORD /root/keystonerc_admin
export OS_PASSWORD='Str0ngAdm1nPass!'
Log in with username admin and the password from the output above. The domain field should be set to Default.

After logging in, you can manage all OpenStack resources through the web interface. The dashboard provides access to instance management, network configuration, volume creation, and user administration.
Step 11: Create an OpenStack Project and User
Create a dedicated project (tenant) for your workloads instead of using the admin project directly. Load admin credentials first.
source /root/keystonerc_admin
Create a new project.
openstack project create --description "Production workloads" production
Create a user and assign it to the project with the member role. For more details on creating OpenStack projects, users, and roles, refer to our dedicated guide.
openstack user create --project production --password 'Us3rP@ssw0rd' demouser
openstack role add --project production --user demouser member
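For more than a handful of users, generating the CLI calls from a list keeps them consistent. This sketch only emits the commands (the names are placeholders); add --password or --password-prompt before running, and pipe the output to bash with admin credentials sourced to execute it.

```shell
#!/usr/bin/env bash
# Sketch: emit user-create and role-assignment commands for a batch.
# gen_users PROJECT USER... -- generates, does not execute
gen_users() {
  project=$1; shift
  for u in "$@"; do
    echo "openstack user create --project $project $u"
    echo "openstack role add --project $project --user $u member"
  done
}

gen_users production alice bob
```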
Verify the user and project were created.
$ openstack project list
+----------------------------------+------------+
| ID | Name |
+----------------------------------+------------+
| a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4 | admin |
| b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5 | services |
| c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6 | production |
+----------------------------------+------------+
Step 12: Upload Cloud Images to Glance
Download cloud images and upload them to Glance so you can launch instances. We will use CirrOS for testing and a Rocky Linux cloud image for real workloads. For a full list of supported images, see our guide on uploading cloud images to OpenStack Glance.
Download and upload the CirrOS test image.
wget https://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img -O /tmp/cirros.img
Upload it to Glance.
openstack image create "cirros-0.6.2" \
--file /tmp/cirros.img \
--disk-format qcow2 \
--container-format bare \
--public
Download and upload a Rocky Linux 10 cloud image for production use.
wget https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2 -O /tmp/rocky10.qcow2
Upload the Rocky Linux image.
openstack image create "Rocky-Linux-10" \
--file /tmp/rocky10.qcow2 \
--disk-format qcow2 \
--container-format bare \
--public
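Before uploading a multi-hundred-megabyte image, it is worth confirming the download is intact. Mirrors typically publish checksum files next to the images; the comparison itself looks like this sketch, run here on a throwaway file so it works anywhere.

```shell
#!/usr/bin/env bash
# Sketch: compare an image's checksum against an expected value.
IMG=$(mktemp)
printf 'fake image data' > "$IMG"

# In a real run, expected comes from the mirror's published checksum file.
expected=$(sha256sum "$IMG" | cut -d' ' -f1)
actual=$(sha256sum "$IMG" | cut -d' ' -f1)

if [ "$expected" = "$actual" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - do not upload" >&2
fi
```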
Verify both images are active.
$ openstack image list
+--------------------------------------+----------------+--------+
| ID | Name | Status |
+--------------------------------------+----------------+--------+
| a1b2c3d4-5678-90ab-cdef-111111111111 | cirros-0.6.2 | active |
| b2c3d4e5-6789-0abc-def1-222222222222 | Rocky-Linux-10 | active |
+--------------------------------------+----------------+--------+
Step 13: Create Flavors and Security Groups
Flavors define the compute resources (CPU, RAM, disk) available to instances. Create a set of standard flavors.
openstack flavor create --id 0 --ram 1024 --vcpus 1 --swap 2048 --disk 10 m1.tiny
openstack flavor create --id 1 --ram 2048 --vcpus 1 --swap 4096 --disk 20 m1.small
openstack flavor create --id 2 --ram 4096 --vcpus 2 --swap 8192 --disk 40 m1.medium
openstack flavor create --id 3 --ram 8192 --vcpus 4 --swap 8192 --disk 80 m1.large
openstack flavor create --id 4 --ram 16384 --vcpus 8 --swap 8192 --disk 160 m1.xlarge
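A quick sanity check before relying on the larger flavors: Nova overcommits RAM by a ratio of 1.5 by default (ram_allocation_ratio in nova.conf, at the time of writing), so you can estimate how many instances of a flavor fit on the host. An integer-math sketch for the 32 GB host used in this guide:

```shell
#!/usr/bin/env bash
# Sketch: rough RAM capacity math for a flavor.
host_ram_mb=32768       # 32 GB host
flavor_ram_mb=4096      # m1.medium
ratio_tenths=15         # ram_allocation_ratio 1.5, in tenths for shell math

max_instances=$(( host_ram_mb * ratio_tenths / 10 / flavor_ram_mb ))
echo "roughly ${max_instances} m1.medium instances fit in RAM"
```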
Verify the flavors are created.
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0 | m1.tiny | 1024 | 10 | 0 | 1 | True |
| 1 | m1.small | 2048 | 20 | 0 | 1 | True |
| 2 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 3 | m1.large | 8192 | 80 | 0 | 4 | True |
| 4 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
Create a security group that allows SSH, HTTP, HTTPS, and ICMP traffic.
openstack security group create basic --description "Allow SSH, HTTP, HTTPS, and ICMP"
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 basic
openstack security group rule create --protocol tcp --dst-port 80 --remote-ip 0.0.0.0/0 basic
openstack security group rule create --protocol tcp --dst-port 443 --remote-ip 0.0.0.0/0 basic
openstack security group rule create --protocol icmp --remote-ip 0.0.0.0/0 basic
Verify the security group rules.
$ openstack security group rule list basic
+--------------------------------------+-------------+-----------+-----------+------------+-----------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Direction |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+
| ... | tcp | IPv4 | 0.0.0.0/0 | 22:22 | ingress |
| ... | tcp | IPv4 | 0.0.0.0/0 | 80:80 | ingress |
| ... | tcp | IPv4 | 0.0.0.0/0 | 443:443 | ingress |
| ... | icmp | IPv4 | 0.0.0.0/0 | | ingress |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+
Step 14: Create an SSH Key Pair
Generate an SSH key pair for logging into instances. If you already have an SSH key, you can import the public key instead.
ssh-keygen -q -N "" -f ~/.ssh/id_rsa
Import the public key into OpenStack. For additional methods of adding SSH key pairs to OpenStack, check our CLI guide.
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | ab:cd:ef:12:34:56:78:90:ab:cd:ef:12:34:56:78:90 |
| name | mykey |
| user_id | a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4 |
+-------------+-------------------------------------------------+
Step 15: Launch Your First OpenStack Instance
With networking, images, flavors, security groups, and keys all configured, launch a test instance using the CirrOS image.
openstack server create \
--flavor m1.tiny \
--image cirros-0.6.2 \
--network private \
--security-group basic \
--key-name mykey \
test-instance
Wait for the instance to become active. Check its status.
$ openstack server list
+--------------------------------------+---------------+--------+---------------------+----------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------------+--------+---------------------+----------------+---------+
| a1b2c3d4-5678-90ab-cdef-aaaaaaaaaaaa | test-instance | ACTIVE | private=10.0.0.51 | cirros-0.6.2 | m1.tiny |
+--------------------------------------+---------------+--------+---------------------+----------------+---------+
Assign a floating IP
Floating IPs allow external access to instances on the private network. Allocate a floating IP from the public network pool and assign it to the instance. See our detailed guide on assigning floating IPs to OpenStack instances for more options.
openstack floating ip create public
Note the floating IP address from the output, then assign it to the instance.
openstack server add floating ip test-instance 192.168.1.201
Verify the floating IP is attached.
$ openstack server show test-instance -f value -c addresses
private=10.0.0.51, 192.168.1.201
Test connectivity to the instance.
$ ping -c 3 192.168.1.201
PING 192.168.1.201 (192.168.1.201) 56(84) bytes of data.
64 bytes from 192.168.1.201: icmp_seq=1 ttl=63 time=2.45 ms
64 bytes from 192.168.1.201: icmp_seq=2 ttl=63 time=1.12 ms
64 bytes from 192.168.1.201: icmp_seq=3 ttl=63 time=0.98 ms
--- 192.168.1.201 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
SSH into the instance using the key pair.
ssh cirros@192.168.1.201
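The guest may take a minute to boot, so the first SSH attempt can fail. A small retry wrapper helps in scripts; in practice the command would be something like ssh -o ConnectTimeout=5 cirros@192.168.1.201 true, but a counter-based stand-in keeps this sketch runnable anywhere (retry and flaky are hypothetical helpers).

```shell
#!/usr/bin/env bash
# Sketch: retry a command until it succeeds or attempts run out.
# retry MAX_ATTEMPTS DELAY CMD...
retry() {
  max=$1; delay=$2; shift 2
  i=1
  while ! "$@"; do
    [ "$i" -ge "$max" ] && return 1
    i=$((i + 1))
    sleep "$delay"
  done
  echo "succeeded after $i attempt(s)"
}

# Stand-in target: fails twice, then succeeds.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }

retry 5 0 flaky
```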
Step 16: Create and Attach a Cinder Volume
Cinder provides persistent block storage for instances. Create a 10 GB volume.
openstack volume create --size 10 test-volume
Wait for it to become available, then attach it to the running instance.
$ openstack volume list
+--------------------------------------+-------------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+-------------+-----------+------+-------------+
| d4e5f6a7-8901-bcde-f123-555555555555 | test-volume | available | 10 | |
+--------------------------------------+-------------+-----------+------+-------------+
Attach the volume to the instance.
openstack server add volume test-instance test-volume
Verify the volume is attached.
$ openstack volume show test-volume -f value -c status -c attachments
in-use
[{'server_id': 'a1b2c3d4-...', 'device': '/dev/vdb', 'id': 'd4e5f6a7-...'}]
Inside the instance, the volume appears as /dev/vdb. Format and mount it as needed.
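Inside the guest, the formatting and mounting steps might look like the following. Since /dev/vdb only exists in the instance, the sketch writes them to a helper script instead of executing them; the device name, ext4 filesystem, and /mnt/data mount point are assumptions, and the commands suit a full image like the Rocky Linux one rather than minimal CirrOS.

```shell
#!/usr/bin/env bash
# Sketch: guest-side commands for the new volume, written to a helper script.
cat > /tmp/mount-vdb.sh <<'EOF'
#!/bin/sh
sudo mkfs.ext4 /dev/vdb      # one-time: create a filesystem on the volume
sudo mkdir -p /mnt/data
sudo mount /dev/vdb /mnt/data
df -h /mnt/data
EOF
chmod +x /tmp/mount-vdb.sh
echo "helper written to /tmp/mount-vdb.sh"
```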
Step 17: Verify All OpenStack Services
Run a final check to confirm all OpenStack services are running properly.
$ openstack compute service list
+----+----------------+-------------------------+----------+---------+-------+
| ID | Binary | Host | Zone | Status | State |
+----+----------------+-------------------------+----------+---------+-------+
| 1 | nova-conductor | openstack.example.com | internal | enabled | up |
| 3 | nova-scheduler | openstack.example.com | internal | enabled | up |
| 5 | nova-compute | openstack.example.com | nova | enabled | up |
+----+----------------+-------------------------+----------+---------+-------+
Check the Neutron agents.
$ openstack network agent list
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+
| ID | Agent Type | Host | Availability Zone | Alive | State |
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+
| ... | Open vSwitch agent | openstack.example.com | None | :-) | UP |
| ... | DHCP agent | openstack.example.com | nova | :-) | UP |
| ... | L3 agent | openstack.example.com | nova | :-) | UP |
| ... | Metering agent | openstack.example.com | None | :-) | UP |
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+
Verify the Keystone service catalog.
$ openstack catalog list
+----------+---------------+------------------------------------------+
| Name     | Type          | Endpoints                                |
+----------+---------------+------------------------------------------+
| keystone | identity      | RegionOne: http://192.168.1.10:5000      |
| nova     | compute       | RegionOne: http://192.168.1.10:8774/v2.1 |
| glance   | image         | RegionOne: http://192.168.1.10:9292      |
| neutron  | network       | RegionOne: http://192.168.1.10:9696      |
| cinder   | volumev3      | RegionOne: http://192.168.1.10:8776/v3   |
| swift    | object-store  | RegionOne: http://192.168.1.10:8080/v1   |
| heat     | orchestration | RegionOne: http://192.168.1.10:8004/v1   |
+----------+---------------+------------------------------------------+
Conclusion
We have deployed OpenStack Dalmatian (2024.2) on Rocky Linux 10 using Packstack with all core services running on a single node. The environment includes Keystone, Nova, Neutron, Glance, Cinder, Swift, Heat, Ceilometer, and Horizon – ready for creating projects, launching instances, and attaching persistent storage.
For production deployments, consider adding SSL/TLS termination to all API endpoints, setting up monitoring with Prometheus or Zabbix, configuring automated backups for the MariaDB database and Glance image store, and deploying a multi-node architecture with separate controller, compute, and network nodes for high availability.
Related Guides
- OpenStack Deployment on Ubuntu using DevStack
- Upload Cloud Images to OpenStack Glance – All Linux, BSD and Windows
- Manage OpenStack Cloud from Linux (Install Guide)
- How To Resize OpenStack Instance / Virtual Machine
- Configure OpenStack Instances Autostart after reboot