(Last Updated On: March 20, 2019)

Configuring Neutron on the Network Node

“Everyone has been made for some particular work, and the desire for that work has been put in every heart.”
–Rumi

Step One: Install NTP and add necessary repositories:

[root@network ~]# yum -y install ntp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
base: centos.mirror.liquidtelecom.com
extras: centos.mirror.liquidtelecom.com
updates: centos.mirror.liquidtelecom.com
Resolving Dependencies
--> Running transaction check
[root@network ~]# vim /etc/ntp.conf
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
# Add your own server pools
server 0.africa.pool.ntp.org
server 1.africa.pool.ntp.org
server 2.africa.pool.ntp.org
server 3.africa.pool.ntp.org
[root@network ~]# systemctl start ntpd
[root@network ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@network ~]# firewall-cmd --add-service=ntp --permanent
success
[root@network ~]# firewall-cmd --reload
success
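If you are scripting this install, the same /etc/ntp.conf edit can be done non-interactively with sed instead of vim. Below is a minimal sketch run against a scratch copy (/tmp/ntp.conf.demo is a made-up path so the demo is safe to run anywhere):

```shell
# Scratch copy standing in for /etc/ntp.conf
cat > /tmp/ntp.conf.demo <<'EOF'
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
EOF

# Comment out the default CentOS pools...
sed -i 's/^server .*centos\.pool\.ntp\.org.*/# &/' /tmp/ntp.conf.demo

# ...and append the regional pools used in this guide
for i in 0 1 2 3; do
  echo "server $i.africa.pool.ntp.org" >> /tmp/ntp.conf.demo
done

grep -c '^server' /tmp/ntp.conf.demo   # only the 4 new pools remain active
```

After editing the real file, restart ntpd and confirm it is reaching its peers with `ntpq -p`.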

Add EPEL and other repositories

[root@network ~]# yum -y install epel-release.noarch
[root@network ~]# yum -y install centos-release-openstack-queens
Downloading packages:
(1/5): centos-release-qemu-ev-1.0-4.el7.centos.noarch.rpm | 11 kB 00:00:00
(2/5): centos-release-ceph-luminous-1.1-2.el7.centos.noarch.rpm | 4.4 kB 00:00:00
(3/5): centos-release-virt-common-1-1.el7.centos.noarch.rpm | 4.5 kB 00:00:00
(4/5): centos-release-storage-common-2-2.el7.centos.noarch.rpm | 5.1 kB 00:00:00
(5/5): centos-release-openstack-queens-1-2.el7.centos.noarch.rpm | 5.3 kB 00:00:01
[root@network ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-OpenStack-queens.repo
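That sed flips every `enabled=1` to `enabled=0`, so the Queens repository stays disabled by default and is only pulled in explicitly with `--enablerepo` (as in the next step). A quick sketch of the same edit on a scratch file:

```shell
# Scratch file standing in for /etc/yum.repos.d/CentOS-OpenStack-queens.repo
cat > /tmp/queens.repo.demo <<'EOF'
[centos-openstack-queens]
name=CentOS-7 - OpenStack queens
enabled=1
gpgcheck=1
EOF

sed -i -e "s/enabled=1/enabled=0/g" /tmp/queens.repo.demo
grep '^enabled' /tmp/queens.repo.demo   # → enabled=0
```

Keeping the repo disabled by default means a routine `yum update` will not unexpectedly upgrade OpenStack packages.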

Step Two: Install Neutron and its dependencies, e.g. Open vSwitch

[root@network ~]# yum --enablerepo=centos-openstack-queens,epel -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
base: centos.mirror.liquidtelecom.com
centos-qemu-ev: centos.mirror.liquidtelecom.com
epel: mirror.bytemark.co.uk
extras: centos.mirror.liquidtelecom.com
updates: centos.mirror.liquidtelecom.com
centos-ceph-luminous | 2.9 kB 00:00:00
centos-openstack-queens | 2.9 kB 00:00:00
centos-qemu-ev | 2.9 kB 00:00:00
(1/3): centos-qemu-ev/7/x86_64/primary_db | 58 kB 00:00:00
(2/3): centos-ceph-luminous/7/x86_64/primary_db | 142 kB 00:00:02
(3/3): centos-openstack-queens/7/x86_64/primary_db | 1.1 MB 00:00:23
Resolving Dependencies

Step Three: Back up the original Neutron configuration file and create a new one with the following settings

[root@network ~]# mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@network ~]# vim /etc/neutron/neutron.conf
# create new file
[DEFAULT]
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
state_path = /var/lib/neutron
allow_overlapping_ips = True
# RabbitMQ connection info
transport_url = rabbit://openstack:[email protected]
# Keystone auth info
[keystone_authtoken]
www_authenticate_uri = http://192.168.122.130:5000
auth_url = http://192.168.122.130:5000
memcached_servers = 192.168.122.130:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron123
[oslo_concurrency]
lock_path = $state_path/lock
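Before moving on, it can be worth sanity-checking that each key landed in the section you intended. The small `cfg_get` helper below is just a sketch (it is not part of Neutron) that reads one key from one INI section; it is demoed here against a scratch copy of a few of the settings above rather than the live file:

```shell
# cfg_get FILE SECTION KEY — print the value of KEY inside [SECTION]
cfg_get() {
  awk -F' *= *' -v s="[$2]" -v k="$3" '
    $0 == s        { in_s = 1; next }  # entered the target section
    /^\[/          { in_s = 0 }        # any other section header ends it
    in_s && $1 == k { print $2 }
  ' "$1"
}

# Scratch copy standing in for /etc/neutron/neutron.conf
cat > /tmp/neutron.conf.demo <<'EOF'
[DEFAULT]
core_plugin = ml2
service_plugins = router
[keystone_authtoken]
auth_type = password
EOF

cfg_get /tmp/neutron.conf.demo DEFAULT core_plugin            # → ml2
cfg_get /tmp/neutron.conf.demo keystone_authtoken auth_type   # → password
```

Point it at the real /etc/neutron/neutron.conf on the node to confirm values such as `transport_url` and `core_plugin` before starting any services.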

Step Four: Edit the following files as shown below

[root@network ~]# vim /etc/neutron/dhcp_agent.ini
# on line 17: add this
interface_driver = openvswitch
# on line 28: uncomment it as shown
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# on line 37: uncomment and change to look like below
enable_isolated_metadata = true
[root@network ~]# vim /etc/neutron/metadata_agent.ini
# on line 22: uncomment and specify Nova API server
nova_metadata_host = 192.168.122.130
# on line 34: uncomment and specify any secret key you like
metadata_proxy_shared_secret = metadata_secret
# on line 260: uncomment and specify Memcache server
memcache_servers = 192.168.122.130:11211
[root@network ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
# on line 129: add the following
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types =
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[root@network ~]# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
# on line 308: add as follows
[securitygroup]
firewall_driver = openvswitch
enable_security_group = true
enable_ipset = true
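One cross-node detail worth flagging: `metadata_proxy_shared_secret` in metadata_agent.ini must match the secret Nova is configured with, otherwise instances get errors from the metadata service. On the controller node, the corresponding fragment of the `[neutron]` section of /etc/nova/nova.conf looks roughly like this (a sketch, reusing the same secret chosen above):

```ini
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata_secret
```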

Step Five: It is now time to start and enable the Neutron services

[root@network ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

[root@network ~]# systemctl start openvswitch
[root@network ~]# systemctl enable openvswitch
Created symlink from /etc/systemd/system/multi-user.target.wants/openvswitch.service to /usr/lib/systemd/system/openvswitch.service.

[root@network ~]# ovs-vsctl add-br br-int
[root@network ~]# for service in dhcp-agent l3-agent metadata-agent openvswitch-agent; do
systemctl start neutron-$service
systemctl enable neutron-$service
done

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service to /usr/lib/systemd/system/neutron-openvswitch-agent.service.
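As a quick smoke test after the loop, you can ask systemd whether each agent actually came up. The sketch below only builds and prints the unit names (so it is runnable anywhere); the commented `systemctl` line is what you would run on the network node itself:

```shell
# Same agent list as the start/enable loop above
units=$(for service in dhcp-agent l3-agent metadata-agent openvswitch-agent; do
  echo "neutron-$service.service"
done)
echo "$units"

# On the network node, verify each unit (not run here):
# for u in $units; do systemctl is-active "$u"; done
```

If any agent reports `inactive` or `failed`, check its log under /var/log/neutron/ before proceeding to the compute node.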

Thank you for following through thus far. In the next part of this series, we will configure Neutron on the Compute Node. Stay tuned, and we hope you have enjoyed the series so far. The links below are for the previous guides in this series:

Installation of Openstack three Node Cluster on CentOS 7 Part One

Installation of Three node OpenStack Queens Cluster – Part Two

Installation of Three node OpenStack Queens Cluster – Part Three

Installation of Three node OpenStack Queens Cluster – Part Four

Installation of Three node OpenStack Queens Cluster – Part Five

Installation of Three node OpenStack Queens Cluster – Part Six

The next guide in the series is below:

Installation of Three node OpenStack Queens Cluster – Part Eight