How to install OpenStack Rocky – part 2

This is the second post on the installation of OpenStack Rocky in an Ubuntu based deployment.

In this post I am explaining how to install the essential OpenStack services in the controller. Please check the previous post How to install OpenStack Rocky – part 1 to learn how to prepare the servers to host our OpenStack Rocky platform.

Recap

In the last post, we prepared the network for both the controller node and the compute elements. The layout is shown in the following figure:

[Figure: network layout of the controller and the compute nodes]

We also installed the prerequisites on the controller.

Installation of the controller

In this section we are installing keystone, glance, nova, neutron and the dashboard in the controller.

Repositories

First of all, we need to set up the OpenStack repositories:

# apt install software-properties-common
# add-apt-repository cloud-archive:rocky
# apt update && apt -y dist-upgrade
# apt install -y python-openstackclient

Keystone

To install keystone, first we need to create the database:

# mysql -u root -p <<< "CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';"
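
Optionally, we can check that the database and the grants were created correctly. This is not part of the official guide, just a quick sanity check (it assumes the KEYSTONE_DBPASS password used above):

# mysql -u keystone -pKEYSTONE_DBPASS -e "SHOW DATABASES;"

The output should include the keystone database.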

And now we’ll install keystone:

# apt install -y keystone apache2 libapache2-mod-wsgi

We are creating the minimal keystone.conf configuration, according to the basic deployment:

# cat > /etc/keystone/keystone.conf  <<EOT
[DEFAULT]
log_dir = /var/log/keystone
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[extra_headers]
Distribution = Ubuntu
[token]
provider = fernet
EOT

Now we need to execute some commands to prepare the keystone service:

# su keystone -s /bin/sh -c 'keystone-manage db_sync'
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# keystone-manage bootstrap --bootstrap-password "ADMIN_PASS" --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

At this moment, we have to configure apache2, because it is used as the HTTP front end for keystone.

# echo "ServerName controller" >> /etc/apache2/apache2.conf
# service apache2 restart
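
If we want to verify that keystone is answering through apache2, we can query the version document. This is just an optional check, and it assumes that curl is installed on the controller:

# curl http://controller:5000/v3

It should return a small JSON document describing the v3 identity API.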

Finally we’ll prepare a file that contains the set of variables that will be used to access OpenStack. This file will be called admin-openrc and its content is the following:

# cat > admin-openrc <<EOT
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOT

And now we are almost ready to operate keystone; we just need to source that file:

# source admin-openrc

And now we are ready to issue commands in OpenStack. We will test it by creating the project that will host the OpenStack services:

# openstack project create --domain default --description "Service Project" service
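
As an optional check, we can list the existing projects to verify that both the admin and the service projects are there:

# openstack project list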

Demo Project

In the OpenStack installation guide, a demo project is created. We are including the creation of this demo project, although it is not needed:

# openstack project create --domain default --description "Demo Project" myproject
# openstack user create --domain default --password "MYUSER_PASS" myuser
# openstack role create myrole
# openstack role add --project myproject --user myuser myrole

We are also creating the file with the set of variables needed to execute commands in OpenStack using this demo user:

# cat > demo-openrc << EOT
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOT

In case you want to use this demo user and project, you can either log in to the horizon portal (once it is installed in further steps) with the myuser/MYUSER_PASS credentials, or source the demo-openrc file to use the command line.
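
For instance, a quick (optional) way to verify the demo credentials from the command line is to request a token with them, and then switch back to the admin credentials:

# source demo-openrc
# openstack token issue
# source admin-openrc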

Glance

Glance is the OpenStack service dedicated to managing the VM images. Following these steps, we will make a basic installation where the images are stored in the filesystem of the controller.

First we need to create a database and user in mysql:

# mysql -u root -p <<< "CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';"

Now we need to create the user dedicated to running the service, as well as the endpoints in keystone. But first we’ll make sure that we have the proper env variables by sourcing the admin credentials:

# source admin-openrc
# openstack user create --domain default --password "GLANCE_PASS" glance
# openstack role add --project service --user glance admin
# openstack service create --name glance --description "OpenStack Image" image
# openstack endpoint create --region RegionOne image public http://controller:9292
# openstack endpoint create --region RegionOne image internal http://controller:9292
# openstack endpoint create --region RegionOne image admin http://controller:9292
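
Optionally, we can verify that the image service and its endpoints were registered correctly:

# openstack service list
# openstack endpoint list --service image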

Now we are ready to install the components:

# apt install -y glance

At the time of writing this post, there is an error in the glance package in the OpenStack repositories that makes (e.g.) the integration with cinder fail. The problem is that the file /etc/glance/rootwrap.conf and the folder /etc/glance/rootwrap.d are placed inside the folder /etc/glance/glance. So the patch simply consists in executing:

# mv /etc/glance/glance/rootwrap.* /etc/glance/

And now we are creating the basic configuration files needed to run glance, as in the basic installation:

# cat > /etc/glance/glance-api.conf  << EOT
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
backend = sqlalchemy
[image_format]
disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,ploop.root-tar
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
EOT

And the glance-registry configuration file:

# cat > /etc/glance/glance-registry.conf  << EOT
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
backend = sqlalchemy
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
EOT

The backend to store the files is the folder /var/lib/glance/images/ in the controller node. If you want to change this folder, please update the variable filesystem_store_datadir in the file glance-api.conf.
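
As a sketch of how to do that, assuming the (hypothetical) new folder were /srv/glance/images/, we could create it with the right ownership and point glance-api.conf to it; adjust the path to your own environment:

# mkdir -p /srv/glance/images/
# chown glance:glance /srv/glance/images/
# sed -i 's|^filesystem_store_datadir = .*|filesystem_store_datadir = /srv/glance/images/|' /etc/glance/glance-api.conf

If glance-api is already running, remember to restart it afterwards.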

We have created the files that result from the installation following the official documentation, and now we are ready to start glance. First we’ll prepare the database:

# su -s /bin/sh -c "glance-manage db_sync" glance

And finally we will restart the services:

# service glance-registry restart
# service glance-api restart

At this point, we are creating our first image (the common cirros image):

# wget -q http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img -O /tmp/cirros-0.4.0-x86_64-disk.img
# openstack image create "cirros" --file /tmp/cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
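
We can check that the image was properly uploaded and that it is active:

# openstack image list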

Nova (i.e. compute)

Nova is the set of services dedicated to the compute service. As we are installing the controller, this server will not run any VM; instead, it will coordinate the creation of the VMs in the compute nodes.

First we need to create the databases and users in mysql:

# mysql -u root -p <<< "CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';"

And now we will create the users that will manage the services and the endpoints in keystone. But first we’ll make sure that we have the proper env variables by sourcing the admin credentials:

# source admin-openrc
# openstack user create --domain default --password "NOVA_PASS" nova
# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute
# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
# openstack user create --domain default --password "PLACEMENT_PASS" placement
# openstack role add --project service --user placement admin
# openstack service create --name placement --description "Placement API" placement
# openstack endpoint create --region RegionOne placement public http://controller:8778
# openstack endpoint create --region RegionOne placement internal http://controller:8778
# openstack endpoint create --region RegionOne placement admin http://controller:8778
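
Again, an optional check to confirm that the compute and placement endpoints are in place:

# openstack endpoint list --service compute
# openstack endpoint list --service placement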

Now we’ll install the services:

# apt -y install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api

Once the services have been installed, we are creating the basic configuration file:

# cat > /etc/nova/nova.conf  <<\EOT
[DEFAULT]
lock_path = /var/lock/nova
state_path = /var/lib/nova
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 192.168.1.240
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[cells]
enable = False
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[glance]
api_servers = http://controller:9292
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = openstack
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[scheduler]
discover_hosts_in_cells_interval = 300
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
EOT

In this file, the most important value to tweak is “my_ip”, which corresponds to the internal IP address of the controller.
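
If you are not sure which address to use, you can list the IPv4 addresses of the controller and pick the one that belongs to the internal (management) network; this is a generic Linux check, not an OpenStack command:

# ip -o -4 addr show | awk '{print $2, $4}'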

Also remember that we are using simple passwords to make them easy to follow. In case you need a more secure deployment, please set stronger passwords.

At this point we need to synchronize the databases and create the OpenStack cells:

# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# su -s /bin/sh -c "nova-manage db sync" nova
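
To verify that cell0 and cell1 are correctly registered, we can list the cells:

# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova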

Finally, we need to restart the compute services:

# service nova-api restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart
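
Optionally, we can check that the compute services of the controller are up:

# openstack compute service list

The list should include nova-consoleauth, nova-scheduler and nova-conductor with state "up".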

We have to take into account that this is the controller node, so it will not host any virtual machines.

Neutron

Neutron is the networking service in OpenStack. In this post we are installing the “self-service networks” option, so that the users will be able to create their own isolated networks.

First, we are creating the database for the neutron service:

# mysql -u root -p <<< "CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';"

Now we will create the openstack user and the endpoints, but first we need to make sure that the env variables are set:

# source admin-openrc 
# openstack user create --domain default --password "NEUTRON_PASS" neutron
# openstack role add --project service --user neutron admin
# openstack service create --name neutron --description "OpenStack Networking" network
# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696
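
As before, we can optionally verify that the network endpoints were created:

# openstack endpoint list --service network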

Now we are ready to install the packages related to neutron:

# apt install -y neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

And now we need to create the configuration files for neutron. First, the general file /etc/neutron/neutron.conf:

# cat > /etc/neutron/neutron.conf <<\EOT
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
root_helper = "sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf"
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lock/neutron
EOT

Now the file /etc/neutron/plugins/ml2/ml2_conf.ini, which will be used to instruct neutron on how to create the LANs:

# cat > /etc/neutron/plugins/ml2/ml2_conf.ini <<\EOT
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
EOT

Now the file /etc/neutron/plugins/ml2/linuxbridge_agent.ini, because we are using linux bridges in this setup:

# cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini <<\EOT
[linux_bridge]
physical_interface_mappings = provider:eno3
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.240
l2_population = true
EOT

In this file, it is important to tweak the value “eno3” of physical_interface_mappings so that it matches the physical interface that has access to the provider’s network (i.e. the public network). It is also essential to set the proper value for “local_ip”, which is the IP address of the internal interface used to communicate with the compute hosts.
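
If you do not remember the names of the physical interfaces of your controller, you can list them (together with their addresses) before tweaking the file; these are generic Linux commands, not OpenStack ones:

# ip -o link show
# ip -o -4 addr show

The provider interface (eno3 in our case) will usually have no IP address assigned, while local_ip has to be the address of the internal interface.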

Now we have to create the files corresponding to the L3 agent and the DHCP agent:

# cat > /etc/neutron/l3_agent.ini <<EOT
[DEFAULT]
interface_driver = linuxbridge
EOT
# cat > /etc/neutron/dhcp_agent.ini <<EOT
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
dnsmasq_dns_servers = 8.8.8.8
EOT

Finally we need to create the file /etc/neutron/metadata_agent.ini

# cat > /etc/neutron/metadata_agent.ini <<EOT
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
EOT

Once the configuration files have been created, we are synchronizing the database and restarting the services related to neutron:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# service nova-api restart
# service neutron-server restart
# service neutron-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
# service neutron-l3-agent restart
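
Finally, we can check that the neutron agents are alive:

# openstack network agent list

The output should include the linux bridge agent, the L3 agent, the DHCP agent and the metadata agent, all of them with state "UP".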

And that’s all.

At this point we have the controller node installed according to the OpenStack documentation. It is possible to issue any command, but it will not be possible to start any VM yet, because we have not installed any compute node.