OpenStack is a free and open source cloud solution that enables you to build a private Infrastructure-as-a-Service (IaaS) platform from a set of complementary services. Each OpenStack service exposes an Application Programming Interface (API) to ease integration. The core components provide compute, network, and storage resources. You can operate the cloud from the command line with the openstack client, or administer and monitor it through the intuitive web dashboard.

In this article we cover in detail the steps required to set up a private OpenStack cloud locally on a Debian 12 Linux machine. This is a single-node installation suitable only for home-lab learning and testing. We use a manual installation method rather than automated deployment tools such as Kolla Ansible or OpenStack-Ansible.

Before you get started with this setup, ensure your machine meets the following minimum requirements.

  1. A fresh installation of Debian 12 Linux
  2. CPU virtualization extensions (Intel VT-x or AMD-V) enabled in BIOS
  3. root user or user with sudo privileges
  4. 2 vCPUs
  5. 8 GB RAM
  6. 20GB disk capacity
  7. Good internet connection

Let’s begin!

1. Prepare Environment

Our environment has the following variables;

  • Debian 12 Server IP address: 192.168.1.2
  • Debian 12 Server hostname: osp01.home.cloudlabske.io
  • Network interface: eno1
  • Default OpenStack region: RegionOne
  • Default domain: default

Set server hostname.

sudo hostnamectl set-hostname osp01.home.cloudlabske.io

Edit the /etc/hosts file to map your server IP address to the hostname configured.

$ sudo vim /etc/hosts
192.168.1.2 osp01.home.cloudlabske.io osp01
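If you script this step, an idempotent append avoids duplicate entries on re-runs. A minimal sketch, assuming the example IP and hostname from this guide:

```shell
IP=192.168.1.2
FQDN=osp01.home.cloudlabske.io
SHORT=osp01
# Append the mapping only if the FQDN is not already in /etc/hosts
grep -q "$FQDN" /etc/hosts || echo "$IP $FQDN $SHORT" | sudo tee -a /etc/hosts
```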

Update the system before you start other configurations. The assumption is that you’re working on a clean Debian machine.

sudo apt update && sudo apt upgrade -y

A reboot may be required; the check below reboots only if the flag file exists.

[ -e /var/run/reboot-required ] && sudo reboot

Configure NTP time synchronization.

  • Using Systemd timesyncd

Open the timesyncd.conf file for editing and update the address to your NTP server.

$ sudo vim /etc/systemd/timesyncd.conf
[Time]
NTP=192.168.1.1

Restart systemd-timesyncd service.

sudo systemctl restart systemd-timesyncd

Confirm the status

sudo timedatectl timesync-status

  • Using Chrony

Install Chrony and configure an NTP server for time adjustment. NTP uses port 123/UDP.

sudo apt -y install chrony vim

You can change the NTP servers or keep the defaults.

$ sudo vim /etc/chrony/chrony.conf
pool 2.debian.pool.ntp.org iburst

Set timezone to your current location

sudo timedatectl set-timezone Africa/Nairobi
sudo timedatectl set-ntp true

Confirm the settings

$ timedatectl
               Local time: Wed 2024-01-31 21:47:22 EAT
           Universal time: Wed 2024-01-31 18:47:22 UTC
                 RTC time: Wed 2024-01-31 18:47:22
                Time zone: Africa/Nairobi (EAT, +0300)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

Restart chrony service

sudo systemctl restart chrony

Check the NTP sources chrony is using.

$ sudo chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- ntp1.icolo.io                 2   6    37    57   +770us[ +770us] +/-   13ms
^* ntp0.icolo.io                 2   6    37    57    -15us[ -895us] +/-   13ms
^- time.cloudflare.com           3   6    37    58  +3221us[+3221us] +/-   71ms
^- time.cloudflare.com           3   6    37    59  +3028us[+2156us] +/-   71ms

2. Install MariaDB, RabbitMQ, Memcached

From this point on, we can switch to the root user account.

$ sudo -i
# or
$ sudo su -

Install MariaDB database server

apt install mariadb-server -y

Adjust maximum database connections to avoid connection timeouts.

# vim /etc/mysql/mariadb.conf.d/50-server.cnf
max_connections        = 700

Restart MariaDB service after making the change.

systemctl restart mariadb

Also install Python MySQL extension package.

apt install python3-pymysql

Next, install RabbitMQ, Memcached, and the Nginx web server.

apt install memcached rabbitmq-server nginx libnginx-mod-stream

Add RabbitMQ user for OpenStack, set password and grant permissions.

rabbitmqctl add_user openstack StrongPassw0rd01
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Disable the default nginx web page.

unlink /etc/nginx/sites-enabled/default

Restart the services.

systemctl restart mariadb rabbitmq-server memcached nginx

3. Install and Configure Keystone

The OpenStack Identity service (Keystone) is a single point of integration for authentication, authorization, and a catalog of services.

Create a database and a user with the proper permissions granted.

# mysql
create database keystone; 
grant all privileges on keystone.* to keystone@'localhost' identified by 'StrongPassw0rd01'; 
flush privileges; 
exit;

Install Keystone and its dependencies including OpenStack client.

apt install keystone python3-openstackclient apache2 python3-oauth2client libapache2-mod-wsgi-py3  -y

Answer “No” for all prompts.

Edit the Keystone configuration file to set the memcache address, the database connection, and the token provider.

# vim /etc/keystone/keystone.conf
# Specify Memcache Server on line 363
memcache_servers = localhost:11211

# Add MariaDB connection information around line 543:
[database]
connection = mysql+pymysql://keystone:StrongPassw0rd01@localhost/keystone

# Set  token provider in line 2169
provider = fernet

Populate the Identity service database by running the command below.

su -s /bin/bash keystone -c "keystone-manage db_sync"

You can safely ignore any “Exception ignored in:…” message.

Next we initialize the Fernet key repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service. In recent OpenStack releases, Keystone can serve all endpoint interfaces on the same port.

export controller=$(hostname -f)

keystone-manage bootstrap --bootstrap-password StrongPassw0rd01 \
--bootstrap-admin-url https://$controller:5000/v3/ \
--bootstrap-internal-url https://$controller:5000/v3/ \
--bootstrap-public-url https://$controller:5000/v3/ \
--bootstrap-region-id RegionOne

Set the server FQDN configured earlier in the Apache configuration file.

# vim /etc/apache2/apache2.conf
ServerName osp01.home.cloudlabske.io

Create an Apache VirtualHost configuration for Keystone. This lets us access the API by FQDN instead of by IP address.

vim /etc/apache2/sites-available/keystone.conf

Paste and adapt the following contents, remembering to replace the SSL certificate paths with your own.

1. Using Let’s Encrypt

See the following guide on using Let’s Encrypt.

In this example SSL certificates used are as follows.

  • /etc/letsencrypt/live/osp01.home.cloudlabske.io/cert.pem
  • /etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem
  • /etc/letsencrypt/live/osp01.home.cloudlabske.io/chain.pem

2. Using OpenSSL

To use self-signed certificates instead, generate them with OpenSSL as follows.

# vim /etc/ssl/openssl.cnf
[ home.cloudlabske.io ]
subjectAltName = DNS:osp01.home.cloudlabske.io

# Generate certificates
cd /etc/ssl/private
openssl genrsa -aes128 2048 > openstack_server.key
openssl rsa -in openstack_server.key -out openstack_server.key
openssl req -utf8 -new -key openstack_server.key -out openstack_server.csr
openssl x509 -in openstack_server.csr -out openstack_server.crt -req -signkey openstack_server.key -extfile /etc/ssl/openssl.cnf -extensions home.cloudlabske.io -days 3650
chmod 600 openstack_server.key

The paths to key and certificates will be.

  • /etc/ssl/private/openstack_server.crt
  • /etc/ssl/private/openstack_server.key
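To confirm the generated certificate carries the expected subject, validity window, and subjectAltName, you can inspect it with openssl (paths from the steps above):

```shell
# Print subject, validity dates and SAN of the self-signed certificate
openssl x509 -in /etc/ssl/private/openstack_server.crt -noout \
  -subject -dates -ext subjectAltName
```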

Modify the contents below to suit your environment.

Listen 5000

<VirtualHost *:5000>
    SSLEngine on
    SSLHonorCipherOrder on
    SSLCertificateFile /etc/letsencrypt/live/osp01.home.cloudlabske.io/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/osp01.home.cloudlabske.io/chain.pem
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688

    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>

    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
    SetHandler wsgi-script
    Options +ExecCGI

    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>

Enable Apache modules required and keystone web config.

a2enmod ssl
a2ensite keystone 
systemctl disable --now keystone
systemctl restart apache2

Create a keystone credentials file for the OpenStack client.

export controller=$(hostname -f)

tee ~/keystonerc<<EOF
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=StrongPassw0rd01
export OS_AUTH_URL=https://$controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

Set permissions and source the file to use it.

chmod 600 ~/keystonerc
source ~/keystonerc
echo "source ~/keystonerc " >> ~/.bashrc
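With the credentials sourced, a quick end-to-end check is to request a token; it succeeds only if the OS_* variables and the Keystone endpoint are all correct:

```shell
source ~/keystonerc
# Returns a fernet token when authentication against Keystone works
openstack token issue
```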

Create Projects

Create the service project, which will contain a unique user for each service you add to your environment.

root@osp01 ~(keystone)$ openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 1067895d9b99452b8d1758eda755c7bc |
| is_domain   | False                            |
| name        | service                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

root@osp01 ~(keystone)$ openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 1067895d9b99452b8d1758eda755c7bc | service |
| 9a102dfdf9a54e8382fefdca727b2553 | admin   |
+----------------------------------+---------+
root@osp01 ~(keystone)$

4. Install and Configure Glance (Image Service)

The OpenStack Image service (glance) allows cluster users to discover, register, and retrieve virtual machine images using a REST API. With the API you can query image metadata and retrieve actual images. Virtual machine images are made available through the Image service in a variety of locations.

Create a database and user for Glance. The database will store virtual machine image metadata.

# mysql
create database glance; 
grant all privileges on glance.* to glance@'localhost' identified by 'StrongPassw0rd01';
flush privileges; 
exit;

Add glance user to Keystone service project.

# openstack user create --domain default --project service --password StrongPassw0rd01 glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 1067895d9b99452b8d1758eda755c7bc |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | a4af040dceff40d1a01beb14d268a7d9 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the admin role to the glance user and service project:

openstack role add --project service --user glance admin

Create the glance service entity:

# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | db9cb71d9f2b41128784458b057d468d |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

Save controller FQDN as variable for ease of use.

export controller=$(hostname -f)

Create Image service API endpoints in default region RegionOne. We will create public, admin and internal endpoints.

# openstack endpoint create --region RegionOne image public https://$controller:9292
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 5f5a8246813e436ab31ebeb37b1bb843       |
| interface    | public                                 |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | db9cb71d9f2b41128784458b057d468d       |
| service_name | glance                                 |
| service_type | image                                  |
| url          | https://osp01.home.cloudlabske.io:9292 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne image internal https://$controller:9292
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 953c077f90944774a205f5244aa28ce8       |
| interface    | internal                               |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | db9cb71d9f2b41128784458b057d468d       |
| service_name | glance                                 |
| service_type | image                                  |
| url          | https://osp01.home.cloudlabske.io:9292 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne image admin https://$controller:9292
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 3788fbdc728f4e8fab7d370ba2559103       |
| interface    | admin                                  |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | db9cb71d9f2b41128784458b057d468d       |
| service_name | glance                                 |
| service_type | image                                  |
| url          | https://osp01.home.cloudlabske.io:9292 |
+--------------+----------------------------------------+

Install OpenStack Glance package

apt install glance -y

Answer “No” for all automatic configuration options.

Configure Glance API

The API accepts Image API calls for image discovery, retrieval, and storage.

Backup current Glance API configuration file.

 mv /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

Create new Glance API configuration file.

vim /etc/glance/glance-api.conf

Paste and modify values provided below to suit your environment.

  • In the [DEFAULT] section, configure RabbitMQ connection
  • In the [glance_store] section, configure the local file system store and location of image files
  • In the [database] section, configure database access
  • In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access

[DEFAULT]
bind_host = 127.0.0.1
# RabbitMQ connection info
transport_url = rabbit://openstack:StrongPassw0rd01@localhost
enforce_secure_rbac = true

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[database]
# MariaDB connection info
connection = mysql+pymysql://glance:StrongPassw0rd01@localhost/glance

# keystone auth info
[keystone_authtoken]
www_authenticate_uri = https://osp01.home.cloudlabske.io:5000
auth_url = https://osp01.home.cloudlabske.io:5000
memcached_servers = 127.0.0.1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = StrongPassw0rd01
# if using self-signed certs on Apache2 Keystone, turn to [true]
insecure = false

[paste_deploy]
flavor = keystone

[oslo_policy]
enforce_new_defaults = true

Set new file permissions.

chown root:glance /etc/glance/glance-api.conf
chmod 640 /etc/glance/glance-api.conf

Populate the Image service database:

su -s /bin/bash -c "glance-manage db_sync" glance

Start and enable Glance service.

systemctl restart glance-api && systemctl enable glance-api

Configure Nginx

vim /etc/nginx/nginx.conf

Modify it by adding a stream block that proxies requests to the Glance API. Remember to set the correct listen address, SSL certificate, and key.

# Add the following to the end of file
stream {
    upstream glance-api {
        server 127.0.0.1:9292;
    }
    server {
        listen 192.168.1.2:9292 ssl;
        proxy_pass glance-api;
    }
    ssl_certificate "/etc/letsencrypt/live/osp01.home.cloudlabske.io/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem";
}

Restart nginx web service when done.

systemctl restart nginx
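With Glance now behind the Nginx TLS proxy, a common smoke test is to register a small CirrOS image. The 0.6.2 download URL below is an assumption; substitute any current CirrOS release:

```shell
IMG=cirros-0.6.2-x86_64-disk.img
wget -q "https://download.cirros-cloud.net/0.6.2/$IMG"
# qcow2 disk image, bare container format, visible to all projects
openstack image create "cirros-0.6.2" --file "$IMG" \
  --disk-format qcow2 --container-format bare --public
openstack image list
```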

5. Install and Configure Nova

OpenStack Compute (Nova) is a major part of an Infrastructure-as-a-Service (IaaS) system; it provides hosting and management of virtual machine instances.

Components of OpenStack Compute service.

  • nova-api service: Accepts and responds to end user compute API calls.
  • nova-api-metadata service: Accepts metadata requests from instances.
  • nova-compute service: A worker daemon that creates and terminates virtual machine instances through hypervisor APIs.
  • nova-scheduler service: Takes a virtual machine instance request from the queue and determines on which compute server host it runs.
  • nova-conductor module: Mediates interactions between the nova-compute service and the database.
  • nova-novncproxy daemon: Provides a proxy for accessing running instances through a VNC connection.
  • nova-spicehtml5proxy daemon: Provides a proxy for accessing running instances through a SPICE connection.
  • The queue: A central hub for passing messages between daemons
  • SQL database: Stores most build-time and run-time states for a cloud infrastructure, including available instance types, instances in use, available networks, and projects.

1) Prepare setup prerequisites

In this guide, our virtualization of choice is KVM with libvirt. Install KVM and other utilities required.

apt install qemu-kvm libvirt-daemon libvirt-daemon-system bridge-utils libosinfo-bin virtinst

Confirm CPU virtualization extensions are enabled in your BIOS.

# lsmod | grep kvm
kvm_intel             380928  0
kvm                  1142784  1 kvm_intel
irqbypass              16384  1 kvm
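If the kvm modules are missing, check whether the CPU exposes the virtualization flags at all; a quick sketch using the standard /proc/cpuinfo flags:

```shell
# Count logical CPUs advertising Intel VT-x (vmx) or AMD-V (svm);
# 0 means the extension is disabled in BIOS/UEFI or unsupported
grep -Ec '(vmx|svm)' /proc/cpuinfo
```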

Create the MariaDB databases and users for Nova, Nova API, Placement, and Nova cell0.

# mysql
create database nova;
grant all privileges on nova.* to nova@'localhost' identified by 'StrongPassw0rd01'; 

create database nova_api; 
grant all privileges on nova_api.* to nova@'localhost' identified by 'StrongPassw0rd01'; 

create database placement; 
grant all privileges on placement.* to placement@'localhost' identified by 'StrongPassw0rd01'; 

create database nova_cell0; 
grant all privileges on nova_cell0.* to nova@'localhost' identified by 'StrongPassw0rd01'; 

flush privileges;
exit

Create the nova user:

# openstack user create --domain default --project service --password StrongPassw0rd01 nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 1067895d9b99452b8d1758eda755c7bc |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 424afd4671ad49268bdbd14fe32b6fe2 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the admin role to the nova user:

openstack role add --project service --user nova admin

Add a placement user to the service project.

openstack user create --domain default --project service --password StrongPassw0rd01 placement

Add the admin role to the placement user:

openstack role add --project service --user placement admin

Create nova service entry

# openstack service create --name nova --description "OpenStack Compute service" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute service        |
| enabled     | True                             |
| id          | ba737aa8b0a240fab38bdf49b31a60f0 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

Create placement service entry.

# openstack service create --name placement --description "OpenStack Compute Placement service" placement
+-------------+-------------------------------------+
| Field       | Value                               |
+-------------+-------------------------------------+
| description | OpenStack Compute Placement service |
| enabled     | True                                |
| id          | ae365b6e32ec4db985ec9c6e7f685ae1    |
| name        | placement                           |
| type        | placement                           |
+-------------+-------------------------------------+

Define Nova API Host

export controller=$(hostname -f)

Create public endpoint for nova.

# openstack endpoint create --region RegionOne compute public https://$controller:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------------------+
| Field        | Value                                                     |
+--------------+-----------------------------------------------------------+
| enabled      | True                                                      |
| id           | 50890db0f27443ddb547d24786340330                          |
| interface    | public                                                    |
| region       | RegionOne                                                 |
| region_id    | RegionOne                                                 |
| service_id   | ba737aa8b0a240fab38bdf49b31a60f0                          |
| service_name | nova                                                      |
| service_type | compute                                                   |
| url          | https://osp01.home.cloudlabske.io:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------------------+
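Note that the backslashes only protect the parentheses from the shell; the stored URL keeps the literal %(tenant_id)s template, which Nova later substitutes with the project ID. A quick sketch with this guide’s example hostname:

```shell
controller=osp01.home.cloudlabske.io
# The escaped parentheses survive as a literal template in the URL
url="https://$controller:8774/v2.1/%(tenant_id)s"
echo "$url"
# prints https://osp01.home.cloudlabske.io:8774/v2.1/%(tenant_id)s
```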

Create internal endpoint for nova.

# openstack endpoint create --region RegionOne compute internal https://$controller:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------------------+
| Field        | Value                                                     |
+--------------+-----------------------------------------------------------+
| enabled      | True                                                      |
| id           | 96b3abd5ca314429b0602a2bc153af77                          |
| interface    | internal                                                  |
| region       | RegionOne                                                 |
| region_id    | RegionOne                                                 |
| service_id   | ba737aa8b0a240fab38bdf49b31a60f0                          |
| service_name | nova                                                      |
| service_type | compute                                                   |
| url          | https://osp01.home.cloudlabske.io:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------------------+

Create admin endpoint for nova.

# openstack endpoint create --region RegionOne compute admin https://$controller:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------------------+
| Field        | Value                                                     |
+--------------+-----------------------------------------------------------+
| enabled      | True                                                      |
| id           | 8fcd6f0a2d4c4816b09ca214e311597a                          |
| interface    | admin                                                     |
| region       | RegionOne                                                 |
| region_id    | RegionOne                                                 |
| service_id   | ba737aa8b0a240fab38bdf49b31a60f0                          |
| service_name | nova                                                      |
| service_type | compute                                                   |
| url          | https://osp01.home.cloudlabske.io:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------------------+

Create the public, internal, and admin endpoints for placement.

# openstack endpoint create --region RegionOne placement public https://$controller:8778
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 2fc42dd9223d41aea94779daa6a80e19       |
| interface    | public                                 |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | ae365b6e32ec4db985ec9c6e7f685ae1       |
| service_name | placement                              |
| service_type | placement                              |
| url          | https://osp01.home.cloudlabske.io:8778 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne placement internal https://$controller:8778
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | fd284797981540c2b219139edbdbdf69       |
| interface    | internal                               |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | ae365b6e32ec4db985ec9c6e7f685ae1       |
| service_name | placement                              |
| service_type | placement                              |
| url          | https://osp01.home.cloudlabske.io:8778 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne placement admin https://$controller:8778
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 4c40f9d36e384c6685b9f56e7d951329       |
| interface    | admin                                  |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | ae365b6e32ec4db985ec9c6e7f685ae1       |
| service_name | placement                              |
| service_type | placement                              |
| url          | https://osp01.home.cloudlabske.io:8778 |
+--------------+----------------------------------------+

2) Install and Configure Nova services

Install Nova packages

apt install nova-api nova-scheduler nova-conductor nova-novncproxy python3-novaclient placement-api

Backup current Nova configuration file

mv /etc/nova/nova.conf /etc/nova/nova.conf.orig

Create new configuration

vim /etc/nova/nova.conf

Paste the following, modifying the settings for your environment: RabbitMQ connection, VNC, Glance API, database, and Keystone authentication details.

[DEFAULT]
allow_resize_to_same_host = True
osapi_compute_listen = 127.0.0.1
osapi_compute_listen_port = 8774
metadata_listen = 127.0.0.1
metadata_listen_port = 8775
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
log_dir = /var/log/nova
# RabbitMQ connection details
transport_url = rabbit://openstack:StrongPassw0rd01@localhost

[api]
auth_strategy = keystone

[vnc]
enabled = True
novncproxy_host = 127.0.0.1
novncproxy_port = 6080
novncproxy_base_url = https://osp01.home.cloudlabske.io:6080/vnc_auto.html

# Glance connection info
[glance]
api_servers = https://osp01.home.cloudlabske.io:9292

[oslo_concurrency]
lock_path = $state_path/tmp

# MariaDB connection info
[api_database]
connection = mysql+pymysql://nova:StrongPassw0rd01@localhost/nova_api

[database]
connection = mysql+pymysql://nova:StrongPassw0rd01@localhost/nova

# Keystone auth info
[keystone_authtoken]
www_authenticate_uri = https://osp01.home.cloudlabske.io:5000
auth_url = https://osp01.home.cloudlabske.io:5000
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = StrongPassw0rd01
# if using self-signed certs on Apache2 Keystone, turn to [true]
insecure = false

[placement]
auth_url = https://osp01.home.cloudlabske.io:5000
os_region_name = RegionOne
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = StrongPassw0rd01
# if using self-signed certs on Apache2 Keystone, turn to [true]
insecure = false

[wsgi]
api_paste_config = /etc/nova/api-paste.ini

[oslo_policy]
enforce_new_defaults = true

Set ownership and permissions.

chgrp nova /etc/nova/nova.conf
chmod 640 /etc/nova/nova.conf

Set console proxy type for Nova

sudo sed -i 's/^NOVA_CONSOLE_PROXY_TYPE=.*/NOVA_CONSOLE_PROXY_TYPE=novnc/g' /etc/default/nova-consoleproxy
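You can preview what the sed expression does and confirm the substitution took effect before restarting anything; a small sketch:

```shell
# Dry-run the substitution on a sample line
echo 'NOVA_CONSOLE_PROXY_TYPE=spicehtml5' \
  | sed 's/^NOVA_CONSOLE_PROXY_TYPE=.*/NOVA_CONSOLE_PROXY_TYPE=novnc/g'
# Then confirm the real file
grep '^NOVA_CONSOLE_PROXY_TYPE=' /etc/default/nova-consoleproxy
```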

Backup current placement configuration

mv /etc/placement/placement.conf /etc/placement/placement.conf.orig

Create new config file for Nova placement

vim /etc/placement/placement.conf

Adjust and paste the configurations into the file.

[DEFAULT]
debug = false

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = https://osp01.home.cloudlabske.io:5000
auth_url = https://osp01.home.cloudlabske.io:5000
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = StrongPassw0rd01
# if using self-signed certs on Apache2 Keystone, turn to [true]
insecure = false

[placement_database]
connection = mysql+pymysql://placement:StrongPassw0rd01@localhost/placement

Create the Placement API Apache virtual host.

vim /etc/apache2/sites-available/placement-api.conf

Here are the contents to put into the file. You don’t need to change anything here.

Listen 127.0.0.1:8778

<VirtualHost *:8778>
    WSGIScriptAlias / /usr/bin/placement-api
    WSGIDaemonProcess placement-api processes=5 threads=1 user=placement group=placement display-name=%{GROUP}
    WSGIProcessGroup placement-api
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688

    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>

    ErrorLog /var/log/apache2/placement_api_error.log
    CustomLog /var/log/apache2/placement_api_access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Alias /placement /usr/bin/placement-api
<Location /placement>
  SetHandler wsgi-script
  Options +ExecCGI

  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

Set correct file permissions

chgrp placement /etc/placement/placement.conf
chmod 640 /etc/placement/placement.conf

Update UWSGI bind address to localhost.

sed -i -e "s/UWSGI_BIND_IP=.*/UWSGI_BIND_IP=\"127.0.0.1\"/"  /etc/init.d/nova-api
sed -i -e "s/UWSGI_BIND_IP=.*/UWSGI_BIND_IP=\"127.0.0.1\"/" /etc/init.d/nova-api-metadata

Enable the placement-api Apache site.

a2ensite placement-api

Disable the standalone placement-api service, since Placement now runs under Apache, then restart Apache.

systemctl disable --now placement-api && systemctl restart apache2

Open Nginx configuration file.

vim /etc/nginx/nginx.conf

Add the stream block below. Replace 192.168.1.2 with your server IP address.

stream {
    upstream glance-api {
        server 127.0.0.1:9292;
    }
    server {
        listen 192.168.1.2:9292 ssl;
        proxy_pass glance-api;
    }
    upstream nova-api {
        server 127.0.0.1:8774;
    }
    server {
        listen 192.168.1.2:8774 ssl;
        proxy_pass nova-api;
    }
    upstream nova-metadata-api {
        server 127.0.0.1:8775;
    }
    server {
        listen 192.168.1.2:8775 ssl;
        proxy_pass nova-metadata-api;
    }
    upstream placement-api {
        server 127.0.0.1:8778;
    }
    server {
        listen 192.168.1.2:8778 ssl;
        proxy_pass placement-api;
    }
    upstream novncproxy {
        server 127.0.0.1:6080;
    }
    server {
        listen 192.168.1.2:6080 ssl;
        proxy_pass novncproxy;
    }
    ssl_certificate "/etc/letsencrypt/live/osp01.home.cloudlabske.io/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem";
}
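The stream block repeats the same upstream/server pair once per service, changing only the name and port. As a sketch of that pattern, here is a small hypothetical helper function (the name `emit_stream_pair` and its parameters are illustrative, not part of the guide) that regenerates one stanza:

```shell
# Hypothetical helper: emit one upstream/server pair for the Nginx stream
# block; the service name, local port, and public IP are parameters.
emit_stream_pair() {
  local service="$1" port="$2" public_ip="$3"
  cat <<EOF
    upstream ${service} {
        server 127.0.0.1:${port};
    }
    server {
        listen ${public_ip}:${port} ssl;
        proxy_pass ${service};
    }
EOF
}

# Example: regenerate the placement-api stanza shown above.
emit_stream_pair placement-api 8778 192.168.1.2
```

Each backend service binds to 127.0.0.1 in plain HTTP, and Nginx terminates TLS on the public address, so adding a new service later only requires one more pair like this.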

Populate the databases and register the cells.

# Populate the placement database
su -s /bin/bash placement -c "placement-manage db sync"

# Populate the nova-api database
su -s /bin/bash nova -c "nova-manage api_db sync"

# Register the cell0 database
su -s /bin/bash nova -c "nova-manage cell_v2 map_cell0"


# Populate the nova database
su -s /bin/bash nova -c "nova-manage db sync"

# Create the cell1 cell
su -s /bin/sh nova -c "nova-manage cell_v2 create_cell --name cell1"

Stop the services associated with Nova operations.

systemctl stop nova-api nova-api-metadata nova-conductor nova-scheduler nova-novncproxy

Restart nginx web server

systemctl restart nginx

Then enable and start the Nova services.

systemctl enable --now nova-api nova-api-metadata nova-conductor nova-scheduler nova-novncproxy

Verify nova cell0 and cell1 are registered correctly:

# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+-----------------------------------+------------------------------------------------+----------+
|  Name |                 UUID                 |           Transport URL           |              Database Connection               | Disabled |
+-------+--------------------------------------+-----------------------------------+------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/              | mysql+pymysql://nova:****@localhost/nova_cell0 |  False   |
| cell1 | d3a70005-5861-427e-9bdf-984b15400d7e | rabbit://openstack:****@localhost |    mysql+pymysql://nova:****@localhost/nova    |  False   |
+-------+--------------------------------------+-----------------------------------+------------------------------------------------+----------+
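If you want to check the cell registration from a script rather than by eye, the cell names can be pulled out of the `|`-delimited table with awk. This offline sketch replays that parsing on the sample rows shown above:

```shell
# Offline sketch: extract the cell names from `nova-manage cell_v2 list_cells`
# table output; the here-doc reuses the sample rows from the listing above.
cells=$(awk -F'|' '/cell[01]/ {gsub(/ /, "", $2); print $2}' <<'EOF'
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/              |
| cell1 | d3a70005-5861-427e-9bdf-984b15400d7e | rabbit://openstack:****@localhost |
EOF
)
echo "$cells"
```

On a live system you would pipe the real command output into the same awk filter instead of the here-doc.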

List registered compute services.

# openstack compute service list
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host  | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+
| 6f75eb27-9c66-41c0-b0fa-15f1a48cb25c | nova-conductor | osp01 | internal | enabled | up    | 2024-02-02T07:21:04.000000 |
| 802d523d-1f92-427b-9f90-691bf54268af | nova-scheduler | osp01 | internal | enabled | up    | 2024-02-02T07:21:05.000000 |
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+

3) Install Nova KVM Compute

Install the Nova KVM compute packages.

apt install nova-compute nova-compute-kvm -y

Open nova configuration file.

vim /etc/nova/nova.conf

Update VNC settings as follows.

[vnc]
enabled = True
server_listen = 192.168.1.2
server_proxyclient_address = 192.168.1.2
novncproxy_host = 127.0.0.1
novncproxy_port = 6080
novncproxy_base_url = https://osp01.home.cloudlabske.io:6080/vnc_auto.html

Restart nova-compute when done.

systemctl restart nova-compute.service

Discover compute hosts and map them to cells.

su -s /bin/bash nova -c "nova-manage cell_v2 discover_hosts"
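Instead of running host discovery manually every time a compute node is added, Nova can run it on a schedule. This is the standard `[scheduler]` option in /etc/nova/nova.conf; the 300-second interval below is just an illustrative value:

```ini
[scheduler]
# Automatically discover and map new compute hosts every 300 seconds
discover_hosts_in_cells_interval = 300
```

On a single-node lab this is optional, but it saves a step if you later add more compute hosts.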

Check new list of nova host services.

# openstack compute service list
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host  | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+
| 6f75eb27-9c66-41c0-b0fa-15f1a48cb25c | nova-conductor | osp01 | internal | enabled | up    | 2024-02-02T07:32:44.000000 |
| 802d523d-1f92-427b-9f90-691bf54268af | nova-scheduler | osp01 | internal | enabled | up    | 2024-02-02T07:32:45.000000 |
| 83fd3604-7345-4258-a3a2-324900b04b8e | nova-compute   | osp01 | nova     | enabled | up    | 2024-02-02T07:32:43.000000 |
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+

6. Configure Network Service (Neutron)

OpenStack Networking (Neutron) provides an API for creating and attaching interface devices managed by other OpenStack services to networks.

Components:

  • neutron-server: Accepts and routes API requests to the appropriate OpenStack Networking plug-in for action.
  • OpenStack Networking plug-ins and agents: Plug and unplug ports, create networks or subnets, and provide IP addressing.
  • Messaging queue: Used by most OpenStack Networking installations to route information between the neutron-server and various agents.

1) Prepare environment

Create database and user for Neutron networking service.

# mysql
create database neutron_ml2; 
grant all privileges on neutron_ml2.* to neutron@'localhost' identified by 'StrongPassw0rd01'; 
flush privileges; 
exit

Next, create the Neutron user, role assignment, and service record in Keystone.

# openstack user create --domain default --project service --password StrongPassw0rd01 neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 1067895d9b99452b8d1758eda755c7bc |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 71d4813059f5472f852a946bdaf272f4 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# openstack role add --project service --user neutron admin

# openstack service create --name neutron --description "OpenStack Networking service" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking service     |
| enabled     | True                             |
| id          | 7da12e4154ad4f97b8f449f01d6a56ec |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

Create the required endpoints.

# Save your server hostname
export controller=$(hostname -f)

# openstack endpoint create --region RegionOne network public https://$controller:9696
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 3bc3eb0a234a46b68fa2190095f4cd53       |
| interface    | public                                 |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | 7da12e4154ad4f97b8f449f01d6a56ec       |
| service_name | neutron                                |
| service_type | network                                |
| url          | https://osp01.home.cloudlabske.io:9696 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne network internal https://$controller:9696
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 2bc933e3f8fc4238874adc2cf0b764f9       |
| interface    | internal                               |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | 7da12e4154ad4f97b8f449f01d6a56ec       |
| service_name | neutron                                |
| service_type | network                                |
| url          | https://osp01.home.cloudlabske.io:9696 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne network admin https://$controller:9696
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | fa110991eab34d4e9e1c639865ce2b14       |
| interface    | admin                                  |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | 7da12e4154ad4f97b8f449f01d6a56ec       |
| service_name | neutron                                |
| service_type | network                                |
| url          | https://osp01.home.cloudlabske.io:9696 |
+--------------+----------------------------------------+
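The three endpoint-create commands differ only in the interface name, so they can be driven by a loop. The sketch below uses `echo` as a dry run that just prints the commands; drop the `echo` to execute them for real:

```shell
# Dry run: print the three endpoint-create commands (public, internal, admin).
# Remove `echo` to actually create the endpoints.
controller=$(hostname -f)
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne network "$iface" "https://$controller:9696"
done
```

The same pattern applies to the endpoint creation for every other service in this guide.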

2) Install and configure Neutron

Install the Neutron packages for OpenStack.

apt install neutron-server neutron-metadata-agent neutron-openvswitch-agent neutron-plugin-ml2 neutron-l3-agent openvswitch-switch python3-neutronclient neutron-dhcp-agent

Back up the current configuration file.

mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.orig

Create a new configuration file.

vim /etc/neutron/neutron.conf

Adjust the values as needed while pasting in the contents below.

[DEFAULT]
bind_host = 127.0.0.1
bind_port = 9696
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
state_path = /var/lib/neutron
dhcp_agent_notification = True
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

# RabbitMQ connection info
transport_url = rabbit://openstack:StrongPassw0rd01@localhost

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

# Keystone auth info
[keystone_authtoken]
www_authenticate_uri = https://osp01.home.cloudlabske.io:5000
auth_url = https://osp01.home.cloudlabske.io:5000
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = StrongPassw0rd01
# if using self-signed certs on Apache2 Keystone, set this to true
insecure = false

# MariaDB connection info
[database]
connection = mysql+pymysql://neutron:StrongPassw0rd01@localhost/neutron_ml2

# Nova auth info
[nova]
auth_url = https://osp01.home.cloudlabske.io:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = StrongPassw0rd01
# if using self-signed certs on Apache2 Keystone, set this to true
insecure = false

[oslo_concurrency]
lock_path = $state_path/tmp

[oslo_policy]
enforce_new_defaults = true

Edit metadata agent config and set host, proxy shared secret and memcache host address.

# vim /etc/neutron/metadata_agent.ini
nova_metadata_host = osp01.home.cloudlabske.io
nova_metadata_protocol = https
# specify any secret key you like
metadata_proxy_shared_secret = metadata_secret
# specify Memcache server
memcache_servers = localhost:11211

Backup ml2 config and create a new one.

mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.orig
vim /etc/neutron/plugins/ml2/ml2_conf.ini

Update the new settings as below.

[DEFAULT]

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security

[ml2_type_flat]

[ml2_type_vxlan]

[securitygroup]
enable_security_group = True
enable_ipset = True

Configure layer 3 agent by setting interface driver to openvswitch.

# vim /etc/neutron/l3_agent.ini
interface_driver = openvswitch

Also set the DHCP interface driver to openvswitch and enable the dnsmasq DHCP driver.

# vim /etc/neutron/dhcp_agent.ini
# Confirm in line 18
interface_driver = openvswitch

# uncomment line 37
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

Create a new Open vSwitch agent configuration file.

mv /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.orig
vim /etc/neutron/plugins/ml2/openvswitch_agent.ini

Configure like below:

[DEFAULT]

[agent]

[ovs]

[securitygroup]
firewall_driver = openvswitch
enable_security_group = True
enable_ipset = True

Create the required files and set correct permissions.

touch /etc/neutron/fwaas_driver.ini
chmod 640 /etc/neutron/{neutron.conf,fwaas_driver.ini}
chmod 640 /etc/neutron/plugins/ml2/{ml2_conf.ini,openvswitch_agent.ini}
chgrp neutron /etc/neutron/{neutron.conf,fwaas_driver.ini}
chgrp neutron /etc/neutron/plugins/ml2/{ml2_conf.ini,openvswitch_agent.ini}
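To confirm the permissions took effect, `stat` can print the octal mode. This sketch demonstrates the check on a temporary file rather than the live /etc/neutron tree:

```shell
# Demonstrate the permission check on a throwaway file (GNU stat syntax).
tmpconf=$(mktemp)
chmod 640 "$tmpconf"
stat -c '%a' "$tmpconf"   # prints: 640
rm -f "$tmpconf"
```

On the real files, `stat -c '%a %G' /etc/neutron/neutron.conf` should report `640 neutron`.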

Open Nova configuration and add Neutron networking settings.

# vim /etc/nova/nova.conf
# add the following into the [DEFAULT] section
vif_plugging_is_fatal = True
vif_plugging_timeout = 300

# Add the following to the end: Neutron auth info
# metadata_proxy_shared_secret must match the value set in metadata_agent.ini
[neutron]
auth_url = https://osp01.home.cloudlabske.io:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = StrongPassw0rd01
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata_secret
insecure = false

Update UWSGI_BIND_IP bind address.

sed -i -e "s/UWSGI_BIND_IP=.*/UWSGI_BIND_IP=\"127.0.0.1\"/"  /etc/init.d/neutron-api

Update the Nginx streams.

vim /etc/nginx/nginx.conf

Add neutron upstream and server parameters for proxying.

stream {
    upstream glance-api {
        server 127.0.0.1:9292;
    }
    server {
        listen 192.168.1.2:9292 ssl;
        proxy_pass glance-api;
    }
    upstream nova-api {
        server 127.0.0.1:8774;
    }
    server {
        listen 192.168.1.2:8774 ssl;
        proxy_pass nova-api;
    }
    upstream nova-metadata-api {
        server 127.0.0.1:8775;
    }
    server {
        listen 192.168.1.2:8775 ssl;
        proxy_pass nova-metadata-api;
    }
    upstream placement-api {
        server 127.0.0.1:8778;
    }
    server {
        listen 192.168.1.2:8778 ssl;
        proxy_pass placement-api;
    }
    upstream novncproxy {
        server 127.0.0.1:6080;
    }
    server {
        listen 192.168.1.2:6080 ssl;
        proxy_pass novncproxy;
    }
    upstream neutron-api {
        server 127.0.0.1:9696;
    }
    server {
        listen 192.168.1.2:9696 ssl;
        proxy_pass neutron-api;
    }
    ssl_certificate "/etc/letsencrypt/live/osp01.home.cloudlabske.io/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem";
}

Create a symlink for ml2_conf.ini.

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Then populate the Neutron database.

su -s /bin/bash neutron -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head"

Expected execution output:

....
INFO  [alembic.runtime.migration] Running upgrade 1e0744e4ffea -> 6135a7bd4425
INFO  [alembic.runtime.migration] Running upgrade 6135a7bd4425 -> 8df53b0d2c0e
INFO  [alembic.runtime.migration] Running upgrade 8df53b0d2c0e -> 1bb3393de75d, add qos policy rule Packet Rate Limit
INFO  [alembic.runtime.migration] Running upgrade 1bb3393de75d -> c181bb1d89e4
INFO  [alembic.runtime.migration] Running upgrade c181bb1d89e4 -> ba859d649675
INFO  [alembic.runtime.migration] Running upgrade ba859d649675 -> e981acd076d3
INFO  [alembic.runtime.migration] Running upgrade e981acd076d3 -> 76df7844a8c6, add Local IP tables
INFO  [alembic.runtime.migration] Running upgrade 76df7844a8c6 -> 1ffef8d6f371, migrate RBAC registers from "target_tenant" to "target_project"
INFO  [alembic.runtime.migration] Running upgrade 1ffef8d6f371 -> 8160f7a9cebb, drop portbindingports table
INFO  [alembic.runtime.migration] Running upgrade 8160f7a9cebb -> cd9ef14ccf87
INFO  [alembic.runtime.migration] Running upgrade cd9ef14ccf87 -> 34cf8b009713
INFO  [alembic.runtime.migration] Running upgrade 34cf8b009713 -> I43e0b669096
INFO  [alembic.runtime.migration] Running upgrade I43e0b669096 -> 4e6e655746f6
INFO  [alembic.runtime.migration] Running upgrade 4e6e655746f6 -> 659cbedf30a1
INFO  [alembic.runtime.migration] Running upgrade 659cbedf30a1 -> 21ff98fabab1
INFO  [alembic.runtime.migration] Running upgrade 21ff98fabab1 -> 5881373af7f5
INFO  [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab
INFO  [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0
INFO  [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62
INFO  [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353
INFO  [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586
INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
  OK

Activate OVS interface.

ip link set up ovs-system

Stop neutron services.

systemctl stop neutron-api neutron-rpc-server neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent neutron-openvswitch-agent nova-api nova-compute nginx

Start neutron services.

systemctl start neutron-api neutron-rpc-server neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent neutron-openvswitch-agent nova-api nova-compute nginx

Enable services to start at system boot up.

systemctl enable neutron-api neutron-rpc-server neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent neutron-openvswitch-agent

Confirm networking agents list.

# openstack network agent list
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| 2c802774-b93a-45fb-b23b-aa9994237e23 | Metadata agent     | osp01 | None              | :-)   | UP    | neutron-metadata-agent    |
| 52e59a27-59c3-45a3-bca0-1c55dae3281e | L3 agent           | osp01 | nova              | :-)   | UP    | neutron-l3-agent          |
| 96e812a7-fb0f-4099-a989-1b203843d8c8 | Open vSwitch agent | osp01 | None              | :-)   | UP    | neutron-openvswitch-agent |
| e02cf121-ed3e-4a5e-9cf8-87dc28aa28be | DHCP agent         | osp01 | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

3) Configure Neutron Flat network

Confirm the interfaces active on the server.

$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 1c:69:7a:ab:be:de brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: ovs-system: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 7e:b9:25:db:58:bd brd ff:ff:ff:ff:ff:ff
4: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 9e:91:4a:20:26:4f brd ff:ff:ff:ff:ff:ff

Open the network settings file and adjust it to look similar to what is shown below.

# vim /etc/network/interfaces
auto eno1
iface eno1 inet manual
  
auto br-eno1
iface br-eno1 inet static
  address 192.168.1.2
  netmask 255.255.255.0
  gateway 192.168.1.1
  dns-nameservers 192.168.1.1

auto ovs-system
iface ovs-system inet manual

Replace:

  • eno1 with your physical network interface name
  • br-eno1 with the bridge to be added
  • 192.168.1.2 with your machine IP address and 255.255.255.0 with its netmask
  • 192.168.1.1 with your gateway and DNS server addresses

Set the interface name and bridge name as variables.

INT_NAME=eno1
BR_NAME=br-eno1

Add OVS bridge to the system.

ovs-vsctl add-br $BR_NAME
ip link set $INT_NAME up
ip link set $BR_NAME up

Add a port to the bridge

ovs-vsctl add-port $BR_NAME $INT_NAME

Modify openvswitch_agent.ini and add the physical bridge mapping.

# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
# add a line under the [ovs] section
[ovs]
bridge_mappings = physnet1:br-eno1

Since we are using flat networking, specify the flat network mapped to the bridge above.

# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = physnet1

Restart neutron services.

systemctl restart neutron-api neutron-rpc-server neutron-openvswitch-agent

4) Create Virtual Network

Capture the service project ID.

projectID=$(openstack project list | grep service | awk '{print $2}')
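The pipeline above pulls the second whitespace-separated field from the table row containing `service`. The sketch below replays that parsing on a sample `openstack project list` table, using the service project UUID shown earlier in this guide; note that `openstack project show service -f value -c id` returns the ID directly and is less fragile than grepping table output:

```shell
# Replay the grep/awk parsing on a sample table; the UUID is the service
# project ID created earlier in this guide.
sample='| 1067895d9b99452b8d1758eda755c7bc | service |
| 9a102dfdf9a54e8382fefdca727b2553 | admin   |'
projectID=$(printf '%s\n' "$sample" | grep ' service ' | awk '{print $2}')
echo "$projectID"   # prints: 1067895d9b99452b8d1758eda755c7bc
```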

Create a shared network

# openstack network create --project $projectID \
--share --provider-network-type flat --provider-physical-network physnet1 private
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2024-02-02T08:45:27Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 36577b59-f6e1-4844-a0d8-a277c9ddc780 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | private                              |
| port_security_enabled     | True                                 |
| project_id                | 1067895d9b99452b8d1758eda755c7bc     |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2024-02-02T08:45:27Z                 |
+---------------------------+--------------------------------------+

Create a subnet on the network we just created. Here we’re using:

  • Network: 192.168.1.0/24
  • DHCP start: 192.168.1.101
  • DHCP end: 192.168.1.149
  • Gateway and DNS Server: 192.168.1.1

openstack subnet create subnet1 --network private \
--project $projectID --subnet-range 192.168.1.0/24 \
--allocation-pool start=192.168.1.101,end=192.168.1.149 \
--gateway 192.168.1.1 --dns-nameserver 192.168.1.1
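A quick arithmetic check on the allocation pool: .101 through .149 inclusive gives 49 DHCP-assignable addresses, leaving the rest of the /24 free for statically addressed hosts such as the controller itself.

```shell
# Size of the DHCP allocation pool: 192.168.1.101..192.168.1.149 inclusive.
start=101
end=149
echo $(( end - start + 1 ))   # prints: 49
```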

List networks and subnets created on openstack.

# openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID                                   | Name    | Subnets                              |
+--------------------------------------+---------+--------------------------------------+
| 36577b59-f6e1-4844-a0d8-a277c9ddc780 | private | 6f216cd7-acd3-4c31-bc5e-67875c5dcc09 |
+--------------------------------------+---------+--------------------------------------+

# openstack subnet list
+--------------------------------------+---------+--------------------------------------+----------------+
| ID                                   | Name    | Network                              | Subnet         |
+--------------------------------------+---------+--------------------------------------+----------------+
| 6f216cd7-acd3-4c31-bc5e-67875c5dcc09 | subnet1 | 36577b59-f6e1-4844-a0d8-a277c9ddc780 | 192.168.1.0/24 |
+--------------------------------------+---------+--------------------------------------+----------------+

7. Add Compute Flavors and SSH key

In OpenStack, flavors define the compute, memory, and storage capacity of Nova compute instances. Think of it as the hardware configuration of a server.

Samples;

  • m1.tiny Flavor with: CPU 1, Memory 2048M, Root Disk 20G
  • m1.small Flavor with: CPU 1, Memory 4096M, Root Disk 30G
  • m1.medium Flavor with: CPU 1, Memory 8192M, Root Disk 40G

See an example below on the creation of the flavors.

openstack flavor create --id 1 --vcpus 1 --ram 2048 --disk 20 m1.tiny
openstack flavor create --id 2 --vcpus 1 --ram 4096 --disk 30 m1.small
openstack flavor create --id 3 --vcpus 1 --ram 8192 --disk 40 m1.medium
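Since the flavors differ only in size, they can also be driven from a small table. The sketch below uses `echo` as a dry run that just prints each command; remove the `echo` to create the flavors for real:

```shell
# Dry run: print one flavor-create command per table row; drop `echo` to apply.
while read -r id vcpus ram disk name; do
  echo openstack flavor create --id "$id" --vcpus "$vcpus" --ram "$ram" --disk "$disk" "$name"
done <<'EOF'
1 1 2048 20 m1.tiny
2 1 4096 30 m1.small
3 1 8192 40 m1.medium
EOF
```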

List available Flavors in your OpenStack cloud.

root@osp01 ~(keystone)$ openstack flavor list
+----+-----------+------+------+-----------+-------+-----------+
| ID | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+------+------+-----------+-------+-----------+
| 1  | m1.tiny   | 2048 |   20 |         0 |     1 | True      |
| 2  | m1.small  | 4096 |   30 |         0 |     1 | True      |
| 3  | m1.medium | 8192 |   40 |         0 |     1 | True      |
+----+-----------+------+------+-----------+-------+-----------+

Add SSH Key

You can generate an SSH key pair if none exists.

ssh-keygen -q -N ""

Add the created key, giving it a name.

# openstack keypair create --public-key ~/.ssh/id_rsa.pub default-pubkey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| created_at  | None                                            |
| fingerprint | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f |
| id          | default-pubkey                                  |
| is_deleted  | None                                            |
| name        | default-pubkey                                  |
| type        | ssh                                             |
| user_id     | 61800deb7d664bbcb4f3eef188cc8dbc                |
+-------------+-------------------------------------------------+

# openstack keypair list
+----------------+-------------------------------------------------+------+
| Name           | Fingerprint                                     | Type |
+----------------+-------------------------------------------------+------+
| default-pubkey | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f | ssh  |
| jmutai-pubkey  | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f | ssh  |
+----------------+-------------------------------------------------+------+

8. Create Security groups

A security group is a named collection of network access rules used to limit the types of traffic that can reach instances. When you launch an instance, you can assign one or more security groups to it. If you do not explicitly specify one, new instances are automatically assigned to the default security group.

The default security group is named default.

# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | default | Default security group | 9a102dfdf9a54e8382fefdca727b2553 | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+

# openstack security group rule list  default
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group                | Remote Address Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+
| 2a4aa470-935a-474a-a8bd-06623218a287 | None        | IPv4      | 0.0.0.0/0 |            | egress    | None                                 | None                 |
| 6cf36173-e187-4ed2-82f4-f5ead4ad3134 | None        | IPv6      | ::/0      |            | egress    | None                                 | None                 |
| 7d4af0e4-fb46-40b5-b447-8e7d22cbdb4d | None        | IPv4      | 0.0.0.0/0 |            | ingress   | c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | None                 |
| a98b779a-f63a-44ff-834e-c3a557f2864d | None        | IPv6      | ::/0      |            | ingress   | c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | None                 |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+

Let’s create a security group that allows everything in and out.

openstack security group create allow_all --description "Allow all ports"
openstack security group rule create --protocol TCP --dst-port 1:65535 --remote-ip 0.0.0.0/0 allow_all
openstack security group rule create --protocol ICMP --remote-ip 0.0.0.0/0 allow_all
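Note that the two rules above only open inbound TCP and ICMP. If you want the group to genuinely allow all inbound traffic, you may also want a UDP rule; a sketch:

```shell
# Optionally allow all inbound UDP traffic as well, since the
# rules above only cover TCP and ICMP ingress.
openstack security group rule create --protocol udp \
  --dst-port 1:65535 --remote-ip 0.0.0.0/0 allow_all
```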

List security groups to confirm the new group was created.

# openstack security group list
+--------------------------------------+-----------+------------------------+----------------------------------+------+
| ID                                   | Name      | Description            | Project                          | Tags |
+--------------------------------------+-----------+------------------------+----------------------------------+------+
| 287e76b4-337a-4c08-9e3d-84efd9274edb | allow_all | Allow all ports        | 9a102dfdf9a54e8382fefdca727b2553 | []   |
| c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | default   | Default security group | 9a102dfdf9a54e8382fefdca727b2553 | []   |
+--------------------------------------+-----------+------------------------+----------------------------------+------+

Below is a more restrictive security group that only allows access on well-known ports, e.g. 22 (SSH), 80 (HTTP), 443 (HTTPS), and ICMP.

openstack security group create base --description "Allow common ports"
openstack security group rule create --protocol TCP --dst-port 22 --remote-ip 0.0.0.0/0 base
openstack security group rule create --protocol TCP --dst-port 80 --remote-ip 0.0.0.0/0 base
openstack security group rule create --protocol TCP --dst-port 443 --remote-ip 0.0.0.0/0 base
openstack security group rule create --protocol ICMP --remote-ip 0.0.0.0/0 base

9. Add OS Images and Create a Test VM

We have a dedicated article on how you can upload OS cloud images to OpenStack Glance image service.
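As a quick reference, a cloud image can be downloaded and uploaded to Glance roughly as follows (the Debian 12 image URL here is an assumption; check cloud.debian.org for the current one):

```shell
# Download a Debian 12 cloud image (verify the current URL on cloud.debian.org)
wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2

# Upload it to the Glance image service as a public image
openstack image create --disk-format qcow2 --container-format bare \
  --public --file debian-12-genericcloud-amd64.qcow2 Debian-12
```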

Once uploaded, confirm by listing the available images.

# openstack image list
+--------------------------------------+-----------------+--------+
| ID                                   | Name            | Status |
+--------------------------------------+-----------------+--------+
| 37c638d5-caa0-4570-a126-2c9d64b262b4 | AlmaLinux-9     | active |
| 3ae8095e-a774-468b-8376-c3d1b8a70bdf | CentOS-Stream-9 | active |
| 83bf7ac6-9248-415b-ac89-269f2b70fdb4 | Debian-12       | active |
| 02799133-06ed-483d-9121-e3791c12bb1c | Fedora-39       | active |
+--------------------------------------+-----------------+--------+

Confirm the available flavors.

# openstack flavor list
+----+-----------+------+------+-----------+-------+-----------+
| ID | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+------+------+-----------+-------+-----------+
| 1  | m1.tiny   | 2048 |   20 |         0 |     1 | True      |
| 2  | m1.small  | 4096 |   30 |         0 |     1 | True      |
| 3  | m1.medium | 8192 |   40 |         0 |     1 | True      |
+----+-----------+------+------+-----------+-------+-----------+
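On a fresh installation no flavors exist by default, so you may need to create them yourself. A sketch of creating the m1.tiny flavor shown above:

```shell
# Create a small public flavor: 2 GB RAM, 20 GB disk, 1 vCPU
openstack flavor create --id 1 --ram 2048 --disk 20 --vcpus 1 --public m1.tiny
```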

List the available networks.

# openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID                                   | Name    | Subnets                              |
+--------------------------------------+---------+--------------------------------------+
| 36577b59-f6e1-4844-a0d8-a277c9ddc780 | private | 6f216cd7-acd3-4c31-bc5e-67875c5dcc09 |
+--------------------------------------+---------+--------------------------------------+

Confirm configured security groups.

# openstack security group  list
+--------------------------------------+-----------+------------------------+----------------------------------+------+
| ID                                   | Name      | Description            | Project                          | Tags |
+--------------------------------------+-----------+------------------------+----------------------------------+------+
| 287e76b4-337a-4c08-9e3d-84efd9274edb | allow_all | Allow all ports        | 9a102dfdf9a54e8382fefdca727b2553 | []   |
| c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | default   | Default security group | 9a102dfdf9a54e8382fefdca727b2553 | []   |
+--------------------------------------+-----------+------------------------+----------------------------------+------+

List configured keypairs on your OpenStack.

# openstack keypair list
+----------------+-------------------------------------------------+------+
| Name           | Fingerprint                                     | Type |
+----------------+-------------------------------------------------+------+
| default-pubkey | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f | ssh  |
| jmutai-pubkey  | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f | ssh  |
+----------------+-------------------------------------------------+------+
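If no keypair exists yet, one can be registered from an existing SSH public key (the key path and name here are assumptions; adjust to yours):

```shell
# Register an existing SSH public key with the compute service
openstack keypair create --public-key ~/.ssh/id_rsa.pub default-pubkey
```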

Create a VM instance on Nova compute.

openstack server create --flavor m1.small \
--image AlmaLinux-9  \
--security-group allow_all \
--network private \
--key-name  default-pubkey \
AlmaLinux-9
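After a short while the instance should transition to the ACTIVE state. You can check progress with, for example:

```shell
# Show only the build status of the new instance
openstack server show AlmaLinux-9 -f value -c status

# Or list all instances with their networks and addresses
openstack server list
```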

10. Configure Horizon – OpenStack Dashboard

Horizon is a Django-based project aimed at providing a complete OpenStack Dashboard along with an extensible framework for building new dashboards from reusable components. The only core service required by the dashboard is the Identity service.

Install the OpenStack dashboard package.

apt install openstack-dashboard -y

Open the local_settings.py file for editing.

vim /etc/openstack-dashboard/local_settings.py

Adjust the settings below.

# In line 99 : change Memcache server
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    },
}

# In line 107 : add
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
# line 120 : set Openstack Host
# line 121 : comment out and add a line to specify URL of Keystone Host
OPENSTACK_HOST = "osp01.home.cloudlabske.io"
#OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_URL = "https://osp01.home.cloudlabske.io:5000/v3"
# line 125 : set your timezone
TIME_ZONE = "Africa/Nairobi"

# Add to the end of file
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

# Set to True below if you are using a self-signed certificate
OPENSTACK_SSL_NO_VERIFY = False

Also edit Apache default ssl file.

# vim /etc/apache2/sites-available/default-ssl.conf
# In line 31,32, configure path to your SSL certificate and key
SSLCertificateFile      /etc/letsencrypt/live/osp01.home.cloudlabske.io/cert.pem
SSLCertificateKeyFile   /etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem

# In line 41 : uncomment and specify your chain file
SSLCertificateChainFile /etc/letsencrypt/live/osp01.home.cloudlabske.io/chain.pem

Next, specify the Memcache server address in the Debian-specific cache override file, changing it to your Memcache server if different.

# vim /etc/openstack-dashboard/local_settings.d/_0006_debian_cache.py
CACHES = {
  'default' : {
    #'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'
    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    'LOCATION': '127.0.0.1:11211',
  }
}

Create new OpenStack dashboard Apache configuration file.

vim /etc/apache2/conf-available/openstack-dashboard.conf

Add the contents below.

WSGIScriptAlias / /usr/share/openstack-dashboard/wsgi.py process-group=horizon
WSGIDaemonProcess horizon user=horizon group=horizon processes=3 threads=10 display-name=%{GROUP}
WSGIProcessGroup horizon
WSGIApplicationGroup %{GLOBAL}

Alias /static /var/lib/openstack-dashboard/static/
Alias /horizon/static /var/lib/openstack-dashboard/static/

<Directory /usr/share/openstack-dashboard>
  Require all granted
</Directory>

<Directory /var/lib/openstack-dashboard/static>
  Require all granted
</Directory>

Enable the dashboard configuration, the SSL module, and the default SSL site.

a2enconf openstack-dashboard
a2enmod ssl
a2ensite default-ssl
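Before restarting Apache, it is a good idea to validate the configuration:

```shell
# Validate the Apache configuration; it should report "Syntax OK"
sudo apache2ctl configtest
```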

Move the default policy directory out of the way.

mv /etc/openstack-dashboard/policy /etc/openstack-dashboard/policy.org

Fix ownership of the Horizon secret key, then restart the Apache web server.

chown -R horizon /var/lib/openstack-dashboard/secret-key
systemctl restart apache2

You can now access the Horizon dashboard at https://(Server’s hostname)/


Login with a user in Keystone and matching password.

Conclusion

Installing OpenStack on Debian 12 (Bookworm) gives you a powerful, reliable, and highly scalable cloud infrastructure solution. If you followed the steps outlined in this article, you should now be able to confidently set up an OpenStack cloud that leverages Debian’s stability. We hope this article was of great help, and we thank you for visiting our website.
